Hong, KyungPyo; Jeong, Eun-Kee; Wall, T. Scott; Drakos, Stavros G.; Kim, Daniel
2015-01-01
Purpose To develop and evaluate a wideband arrhythmia-insensitive-rapid (AIR) pulse sequence for cardiac T1 mapping without image artifacts induced by an implantable cardioverter-defibrillator (ICD). Methods We developed a wideband AIR pulse sequence by incorporating a saturation pulse with a wide frequency bandwidth (8.9 kHz) in order to achieve uniform T1 weighting in the heart with an ICD. We tested the performance of the original and wideband AIR cardiac T1 mapping pulse sequences in phantom and human experiments at 1.5T. Results In 5 phantoms representing native myocardium and blood, as well as post-contrast blood/tissue T1 values, compared with the control T1 values measured with an inversion-recovery pulse sequence without an ICD, T1 values measured with original AIR with an ICD were considerably lower (absolute percent error >29%), whereas T1 values measured with wideband AIR with an ICD were similar (absolute percent error <5%). Similarly, in 11 human subjects, compared with the control T1 values measured with original AIR without an ICD, T1 measured with original AIR with an ICD was significantly lower (absolute percent error >10.1%), whereas T1 measured with wideband AIR with an ICD was similar (absolute percent error <2.0%). Conclusion This study demonstrates the feasibility of a wideband pulse sequence for cardiac T1 mapping without significant image artifacts induced by an ICD. PMID:25975192
Radiometric properties of the NS001 Thematic Mapper Simulator aircraft multispectral scanner
NASA Technical Reports Server (NTRS)
Markham, Brian L.; Ahmad, Suraiya P.
1990-01-01
Laboratory tests of the NS001 TM are described, emphasizing absolute calibration to determine the radiometry of the simulator's reflective channels. In-flight calibration of the data is accomplished with the NS001 internal integrating-sphere source, although instabilities in the source can limit the absolute calibration. Data from 1987-89 indicate uncertainties of up to 25 percent, with an apparent average uncertainty of about 15 percent. Also identified are dark-current drift and sensitivity changes along the scan line, random noise, and nonlinearity, which contribute errors of 1-2 percent. Hysteresis-like uncertainties are also noted, especially in the 2.08-2.35-micron band, which can reduce sensitivity and cause errors. The NS001 TM Simulator demonstrates a polarization sensitivity that can generate errors of up to about 10 percent, depending on the wavelength.
The absolute radiometric calibration of the advanced very high resolution radiometer
NASA Technical Reports Server (NTRS)
Slater, P. N.; Teillet, P. M.; Ding, Y.
1988-01-01
The need for independent, redundant absolute radiometric calibration methods is discussed with reference to the Thematic Mapper. Uncertainty requirements for absolute calibration of between 0.5 and 4 percent are defined based on the accuracy of reflectance retrievals at an agricultural site. It is shown that even very approximate atmospheric corrections can reduce the error in reflectance retrieval to 0.02 over the reflectance range 0 to 0.4.
Flow interference in a variable porosity trisonic wind tunnel.
NASA Technical Reports Server (NTRS)
Davis, J. W.; Graham, R. F.
1972-01-01
Pressure data from a 20-degree cone-cylinder in a variable porosity wind tunnel over the Mach range 0.2 to 5.0 are compared to an interference-free standard in order to determine wall interference effects. Four 20-degree cone-cylinder models, representing approximately 1 to 6 percent blockage, were compared to curve fits of the interference-free standard at each Mach number, and errors were determined at each pressure tap location. The average of the absolute values of the percent error over the length of the model was used as the criterion for evaluating model blockage interference effects. The results are presented as percent error as a function of model blockage and Mach number.
Validation of SenseWear Armband in children, adolescents, and adults.
Lopez, G A; Brønd, J C; Andersen, L B; Dencker, M; Arvidsson, D
2018-02-01
SenseWear Armband (SW) is a multisensor monitor to assess physical activity and energy expenditure. Its prediction algorithms have been updated periodically. The aim was to validate SW in children, adolescents, and adults. The most recent SW algorithm 5.2 (SW5.2) and the previous version 2.2 (SW2.2) were evaluated for estimation of energy expenditure during semi-structured activities in 35 children, 31 adolescents, and 36 adults, with indirect calorimetry as the reference. Energy expenditure estimated from waist-worn ActiGraph GT3X+ data (AG) was used for comparison. Improvements in measurement errors were demonstrated with SW5.2 compared to SW2.2, especially in children and for biking. The overall mean absolute percent error with SW5.2 was 24% in children, 23% in adolescents, and 20% in adults. The error was larger for sitting and standing (23%-32%) and for basketball and biking (19%-35%), compared to walking and running (8%-20%). The overall mean absolute error with AG was 28% in children, 22% in adolescents, and 28% in adults. The absolute percent error for biking was 32%-74% with AG. In general, SW and AG underestimated energy expenditure. However, both methods demonstrated a proportional bias, with increasing underestimation at higher energy expenditure levels, in addition to large individual error. SW provides measures of energy expenditure with similar accuracy in children, adolescents, and adults given the improvements in the updated algorithms. Although SW captures biking better than AG, both methods share remaining measurement errors requiring further improvement for accurate measures of physical activity and energy expenditure in clinical and epidemiological research. © 2017 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
How is the weather? Forecasting inpatient glycemic control
Saulnier, George E; Castro, Janna C; Cook, Curtiss B; Thompson, Bithika M
2017-01-01
Aim: Apply methods of damped trend analysis to forecast inpatient glycemic control. Method: Observed and calculated point-of-care blood glucose data trends were determined over 62 weeks. Mean absolute percent error was used to calculate differences between observed and forecasted values. Comparisons were drawn between model results and linear regression forecasting. Results: The forecasted mean glucose trends observed during the first 24 and 48 weeks of projections compared favorably to the results provided by linear regression forecasting. However, in some scenarios, the damped trend method changed inferences compared with linear regression. In all scenarios, mean absolute percent error values remained below the 10% accepted by demand industries. Conclusion: Results indicate that forecasting methods historically applied within demand industries can project future inpatient glycemic control. Additional study is needed to determine if forecasting is useful in the analyses of other glucometric parameters and, if so, how to apply the techniques to quality improvement. PMID:29134125
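Mean absolute percent error, the accuracy criterion used in the study above, is straightforward to compute; a minimal Python sketch (the sample series below is illustrative, not the study's glucose data):

```python
def mape(observed, forecast):
    """Mean absolute percent error: mean of |obs - fcst| / |obs|, times 100."""
    errors = [abs(o - f) / abs(o) for o, f in zip(observed, forecast)]
    return 100.0 * sum(errors) / len(errors)

# illustrative mean-glucose observations vs. forecasts (mg/dL)
observed = [150.0, 160.0, 155.0]
forecast = [145.0, 162.0, 150.0]
print(round(mape(observed, forecast), 2))  # ≈ 2.6, well under the 10% threshold
```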
A rocket ozonesonde for geophysical research and satellite intercomparison
NASA Technical Reports Server (NTRS)
Hilsenrath, E.; Coley, R. L.; Kirschner, P. T.; Gammill, B.
1979-01-01
The in-situ rocketsonde for ozone profile measurements developed and flown for geophysical research and satellite comparison is reviewed. The measurement principle involves the chemiluminescence caused by ambient ozone striking a detector and passive pumping as a means of sampling the atmosphere as the sonde descends through the atmosphere on a parachute. The sonde is flown on a meteorological sounding rocket, and flight data are telemetered via the standard meteorological GMD ground receiving system. The payload operation, sensor performance, and calibration procedures simulating flight conditions are described. An error analysis indicated an absolute accuracy of about 12 percent and a precision of about 8 percent. These are combined to give a measurement error of 14 percent.
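The quoted 14 percent measurement error is consistent with combining the 12 percent accuracy and 8 percent precision in quadrature, the usual rule for independent error components (an assumption here, since the abstract does not state the combination rule):

```python
import math

def combined_error(*components_pct):
    """Root-sum-square combination of independent error components."""
    return math.sqrt(sum(c * c for c in components_pct))

# ~12% absolute accuracy and ~8% precision combine to ~14.4%,
# consistent with the stated 14 percent measurement error
print(round(combined_error(12.0, 8.0), 1))  # 14.4
```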
Green-Ampt approximations: A comprehensive analysis
NASA Astrophysics Data System (ADS)
Ali, Shakir; Islam, Adlul; Mishra, P. K.; Sikka, Alok K.
2016-04-01
The Green-Ampt (GA) model and its modifications are widely used for simulating the infiltration process. Several explicit approximate solutions to the implicit GA model have been developed, with varying degrees of accuracy. In this study, the performance of nine explicit approximations to the GA model is compared with the implicit GA model using published data for a broad range of soil classes and infiltration times. The explicit GA models considered are Li et al. (1976) (LI), Stone et al. (1994) (ST), Salvucci and Entekhabi (1994) (SE), Parlange et al. (2002) (PA), Barry et al. (2005) (BA), Swamee et al. (2012) (SW), Ali et al. (2013) (AL), Almedeij and Esen (2014) (AE), and Vatankhah (2015) (VA). Six statistical indicators (percent relative error, maximum absolute percent relative error, average absolute percent relative error, percent bias, index of agreement, and Nash-Sutcliffe efficiency) and relative computation time are used to assess model performance. Models are ranked on an overall performance index (OPI). The BA model is found to be the most accurate, followed by the PA and VA models, across the variety of soil classes and infiltration periods. The AE, SW, SE, and LI models also performed comparatively well. Based on the overall performance index, the explicit models rank as BA > PA > VA > LI > AE > SE > SW > ST > AL. The results of this study should aid in selecting accurate and simple explicit approximations to the GA model for a variety of hydrological problems.
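For context, the implicit GA equation that these explicit formulas approximate, F = Kt + ψΔθ ln(1 + F/ψΔθ), can be solved by simple fixed-point iteration (the update map is a contraction for F > 0); a sketch with illustrative parameter values, not the study's data:

```python
import math

def green_ampt_F(t, K, psi_dtheta, tol=1e-10, max_iter=200):
    """Cumulative infiltration F (cm) from the implicit Green-Ampt equation
    F = K*t + psi_dtheta * ln(1 + F / psi_dtheta), by fixed-point iteration.
    K: saturated hydraulic conductivity (cm/h); psi_dtheta: suction head
    times moisture deficit (cm); t: time (h)."""
    F = K * t if K * t > 0 else 1e-6  # starting guess
    for _ in range(max_iter):
        F_new = K * t + psi_dtheta * math.log(1.0 + F / psi_dtheta)
        if abs(F_new - F) < tol:
            return F_new
        F = F_new
    return F

# illustrative loam-like parameters: K = 0.5 cm/h, psi*dtheta = 5 cm, t = 2 h
print(round(green_ampt_F(2.0, 0.5, 5.0), 4))
```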
19 CFR 351.224 - Disclosure of calculations and procedures for the correction of ministerial errors.
Code of Federal Regulations, 2012 CFR
2012-04-01
... least five absolute percentage points in, but not less than 25 percent of, the weighted-average dumping... margin or countervailable subsidy rate (whichever is applicable) of zero (or de minimis) and a weighted...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Morley, Steven
The PyForecastTools package provides Python routines for calculating metrics for model validation, forecast verification, and model comparison. For continuous predictands the package provides functions for calculating bias (mean error, mean percentage error, median log accuracy, symmetric signed bias) and accuracy (mean squared error, mean absolute error, mean absolute scaled error, normalized RMSE, median symmetric accuracy). Convenience routines to calculate the component parts (e.g., forecast error, scaled error) of each metric are also provided. To compare models the package provides a generic skill score and percent-better comparisons. Robust measures of scale, including median absolute deviation, robust standard deviation, robust coefficient of variation, and the Sn estimator, are all provided by the package. Finally, the package implements Python classes for NxN contingency tables. In the case of a multi-class prediction, accuracy and skill metrics such as proportion correct and the Heidke and Peirce skill scores are provided as object methods. The special case of a 2x2 contingency table inherits from the NxN class and provides many additional metrics for binary classification: probability of detection, probability of false detection, false alarm ratio, threat score, equitable threat score, and bias. Confidence intervals for many of these quantities can be calculated using either the Wald method or Agresti-Coull intervals.
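As an illustration of the 2x2 contingency-table metrics named above (this is a sketch of the underlying definitions, not PyForecastTools' actual class or API):

```python
class ContingencyTable2x2:
    """Binary forecast verification counts: hits (a), false alarms (b),
    misses (c), and correct negatives (d)."""
    def __init__(self, hits, false_alarms, misses, correct_negatives):
        self.a, self.b = hits, false_alarms
        self.c, self.d = misses, correct_negatives

    def pod(self):
        """Probability of detection: hits / (hits + misses)."""
        return self.a / (self.a + self.c)

    def pofd(self):
        """Probability of false detection: false alarms / (false alarms + correct negatives)."""
        return self.b / (self.b + self.d)

    def far(self):
        """False alarm ratio: false alarms / (hits + false alarms)."""
        return self.b / (self.a + self.b)

    def threat_score(self):
        """Threat score (critical success index): hits / (hits + false alarms + misses)."""
        return self.a / (self.a + self.b + self.c)

# illustrative counts from 100 forecast/event pairs
t = ContingencyTable2x2(hits=40, false_alarms=10, misses=20, correct_negatives=30)
print(round(t.pod(), 3), round(t.far(), 3))  # 0.667 0.2
```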
NASA Technical Reports Server (NTRS)
Smith, James A.
1992-01-01
The inversion of the leaf area index (LAI) canopy parameter from optical spectral reflectance measurements is obtained using a backpropagation artificial neural network trained on input-output pairs generated by a multiple-scattering reflectance model. The problem of LAI estimation over sparse canopies (LAI < 1.0) with varying soil reflectance backgrounds is particularly difficult. Standard multiple regression methods applied to canopies within a single homogeneous soil type yield good results but perform unacceptably when applied across soil boundaries, producing absolute percentage errors of >1000 percent for low LAI. Minimization methods applied to merit functions constructed from differences between measured and predicted reflectances using multiple-scattering models are unacceptably sensitive to the initial guess for the desired parameter. In contrast, the neural network reported here generally yields absolute percentage errors of <30 percent when weighting coefficients trained on one soil type are applied to canopy reflectance over a different soil background.
Absolute measurement of the extreme UV solar flux
NASA Technical Reports Server (NTRS)
Carlson, R. W.; Ogawa, H. S.; Judge, D. L.; Phillips, E.
1984-01-01
A windowless rare-gas ionization chamber has been developed to measure the absolute value of the solar extreme UV flux in the 50-575 A region. Successful results were obtained on a solar-pointing sounding rocket. The ionization chamber, operated in total absorption, is an inherently stable absolute detector of ionizing UV radiation and was designed to be independent of effects from secondary ionization and gas effusion. The net error of the measurement is ±7.3 percent, due primarily to residual outgassing in the instrument; other errors, such as multiple ionization, photoelectron collection, and extrapolation to zero atmospheric optical depth, are small in comparison. For the day of the flight, Aug. 10, 1982, the solar irradiance (50-575 A), normalized to unit solar distance, was found to be (5.71 ± 0.42) x 10^10 photons per sq cm per sec.
Evaluation of quality of commercial pedometers.
Tudor-Locke, Catrine; Sisson, Susan B; Lee, Sarah M; Craig, Cora L; Plotnikoff, Ronald C; Bauman, Adrian
2006-01-01
The purpose of this study was to: 1) evaluate the quality of promotional pedometers widely distributed through cereal boxes at the time of the 2004 Canada on the Move campaign; and 2) establish a battery of testing protocols to provide direction for future consensus on industry standards for pedometer quality. Fifteen Kellogg's Special K Step Counters (K pedometers or K; manufactured for Kellogg Canada by Sasco, Inc.) and 9 Yamax pedometers (Yamax; Yamax Corporation, Tokyo, Japan) were tested with 9 participants as follows: 1) a 20 Step Test; 2) treadmill walking at 80 m/min (3 mph) and a controlled motor vehicle condition; and 3) 24-hour free-living conditions against an accelerometer criterion. Fifty-three percent of the K pedometers passed the 20 Step Test, compared to 100% of the Yamax. Mean absolute percent error for the K during treadmill walking was 24.2 ± 33.9% vs. 3.9 ± 6.6% for the Yamax. The K detected 5.7-fold more non-steps than the Yamax during the motor vehicle condition. In the free-living condition, mean absolute percent error relative to the ActiGraph was 44.9 ± 34.5% for the K vs. 19.5 ± 21.2% for the Yamax. K pedometers are unacceptably inaccurate. We suggest that research-grade pedometers: 1) be manufactured to a sensitivity threshold of 0.35 Gs; 2) detect within ±1 step error on the 20 Step Test (i.e., within 5%); 3) detect within ±1% error most of the time during treadmill walking at 80 m/min (3 mph); and 4) detect steps/day within 10% of the ActiGraph at least 60% of the time, or within 10% of the Yamax, under free-living conditions.
Quantitative endoscopy: initial accuracy measurements.
Truitt, T O; Adelman, R A; Kelly, D H; Willging, J P
2000-02-01
The geometric optics of an endoscope can be used to determine the absolute size of an object in an endoscopic field without knowing the actual distance from the object. This study explores the accuracy of a technique that estimates absolute object size from endoscopic images. Quantitative endoscopy involves calibrating a rigid endoscope to produce size estimates from 2 images taken with a known traveled distance between the images. The heights of 12 samples, ranging in size from 0.78 to 11.80 mm, were estimated with this calibrated endoscope. Backup distances of 5 mm and 10 mm were used for comparison. The mean percent error for all estimated measurements when compared with the actual object sizes was 1.12%. The mean errors for 5-mm and 10-mm backup distances were 0.76% and 1.65%, respectively. The mean errors for objects <2 mm and > or =2 mm were 0.94% and 1.18%, respectively. Quantitative endoscopy estimates endoscopic image size to within 5% of the actual object size. This method remains promising for quantitatively evaluating object size from endoscopic images. It does not require knowledge of the absolute distance of the endoscope from the object, rather, only the distance traveled by the endoscope between images.
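Under a simple pinhole-camera assumption (image height proportional to object height divided by distance), two images separated by a known backup distance suffice to recover absolute size. A hypothetical sketch of that geometry — the calibration constant k, function names, and all values are illustrative, not the authors' calibration procedure:

```python
def object_height(p1, p2, backup_mm, k):
    """Estimate absolute object height (mm) from two image heights (pixels)
    taken before and after backing the scope up by backup_mm.
    Pinhole model: p = k * H / z, so the pre-backup distance is
    z = backup_mm * p2 / (p1 - p2), and H = p1 * z / k."""
    z = backup_mm * p2 / (p1 - p2)  # distance at the first (closer) image
    return p1 * z / k

# synthetic check: H = 5 mm at z = 20 mm with k = 100 gives p1 = 25 px;
# after a 10 mm backup (z = 30 mm), p2 = 50/3 px
print(round(object_height(25.0, 50.0 / 3.0, 10.0, 100.0), 3))  # 5.0
```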
Photon scattering cross sections of H2 and He measured with synchrotron radiation
NASA Technical Reports Server (NTRS)
Ice, G. E.
1977-01-01
Total (elastic + inelastic) differential photon scattering cross sections have been measured for H2 gas and He, using an X-ray beam. Absolute measured cross sections agree with theory within the probable errors. Relative cross sections (normalized to theory at large S) agree to better than one percent with theoretical values calculated from wave functions that include the effect of electron-electron Coulomb correlation, but the data deviate significantly from theoretical independent-particle (e.g., Hartree-Fock) results. The ratios of measured absolute He cross sections to those of H2, at any given S, also agree to better than one percent with theoretical He-to-H2 cross-section ratios computed from correlated wave functions. It appears that photon scattering constitutes a very promising tool for probing electron correlation in light atoms and molecules.
Multiple regression technique for Pth degree polynomials with and without linear cross products
NASA Technical Reports Server (NTRS)
Davis, J. W.
1973-01-01
A multiple regression technique was developed by which the nonlinear behavior of specified independent variables can be related to a given dependent variable. The polynomial expression can be of Pth degree and can incorporate N independent variables. Two cases are treated, so that mathematical models can be studied both with and without linear cross products. The resulting surface fits can be used to summarize trends for a given phenomenon and provide a mathematical relationship for subsequent analysis. To implement this technique, separate computer programs were developed for the case without linear cross products and for the case incorporating such cross products; these evaluate the various constants in the model regression equation. In addition, the significance of the estimated regression equation is considered, and the standard deviation, the F statistic, the maximum absolute percent error, and the average of the absolute values of the percent error are evaluated. The computer programs and their manner of utilization are described. Sample problems are included to illustrate the use and capability of the technique; these show the output formats and typical plots comparing computer results to each set of input data.
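The fit statistics described above are easy to reproduce with a least-squares polynomial fit; a minimal single-variable Python sketch reporting the maximum and average absolute percent errors (the data below are synthetic, not the report's sample problems):

```python
import numpy as np

def poly_fit_stats(x, y, degree):
    """Least-squares fit of a degree-P polynomial in one variable; returns
    the coefficients, the maximum absolute percent error, and the average
    of the absolute values of the percent error."""
    coeffs = np.polyfit(x, y, degree)
    y_hat = np.polyval(coeffs, x)
    pct_err = 100.0 * np.abs((y - y_hat) / y)
    return coeffs, pct_err.max(), pct_err.mean()

# synthetic data from an exact quadratic, so both error measures are ~0
x = np.linspace(1.0, 5.0, 20)
y = 2.0 + 3.0 * x - 0.5 * x**2
_, max_pe, avg_pe = poly_fit_stats(x, y, degree=2)
print(max_pe < 1e-6, avg_pe < 1e-6)  # True True
```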
DOE Office of Scientific and Technical Information (OSTI.GOV)
Thakkar, Ajit J., E-mail: ajit@unb.ca; Wu, Taozhe
2015-10-14
Static electronic dipole polarizabilities for 135 molecules are calculated using second-order Møller-Plesset perturbation theory and six density functionals recently recommended for polarizabilities. Comparison is made with the best gas-phase experimental data. The lowest mean absolute percent deviations from the best experimental values for all 135 molecules are 3.03% and 3.08% for the LC-τHCTH and M11 functionals, respectively. Excluding the eight extreme outliers for which the experimental values are almost certainly in error, the mean absolute percent deviation for the remaining 127 molecules drops to 2.42% and 2.48% for the LC-τHCTH and M11 functionals, respectively. Detailed comparison enables us to identify 32 molecules for which the discrepancy between the calculated and experimental values warrants further investigation.
Total absorption and photoionization cross sections of water vapor between 100 and 1000 A
NASA Technical Reports Server (NTRS)
Haddad, G. N.; Samson, J. A. R.
1986-01-01
Absolute photoabsorption and photoionization cross sections of water vapor are reported at a large number of discrete wavelengths between 100 and 1000 A, with an estimated error of ±3 percent in regions free from discrete structure. The double ionization chamber technique utilized is described. Recent calculations are shown to be in reasonable agreement with the present data.
NASA Technical Reports Server (NTRS)
Otterson, D. A.; Seng, G. T.
1985-01-01
A high-performance liquid chromatography (HPLC) method to estimate four aromatic classes in mid-distillate fuels is presented. Average refractive indices are used in a correlation to obtain the concentrations of each of the aromatic classes from HPLC data. The aromatic class concentrations can be obtained in about 15 min when the concentration of the aromatic group is known. Seven fuels with a wide range of compositions were used to test the method. Relative errors in the concentrations of the two major aromatic classes did not exceed 10 percent. Absolute errors for the minor classes were all less than 0.3 percent. The data show that errors in group-type analyses using sulfuric acid derived standards are greater for fuels containing high concentrations of polycyclic aromatics. Corrections are based on the change in refractive index of the aromatic fraction that can occur when sulfuric acid and the fuel react. These corrections improved both the precision and the accuracy of the group-type results.
Streamflow simulation studies of the Hillsborough, Alafia, and Anclote Rivers, west-central Florida
Turner, J.F.
1979-01-01
A modified version of the Georgia Tech Watershed Model was applied for flow simulation in three large river basins of west-central Florida. Calibrations were evaluated by comparing synthesized and observed data for: annual hydrographs for the 1959, 1960, 1973, and 1974 water years; flood hydrographs (maximum daily discharge and flood volume); and long-term annual flood-peak discharges (1950-72). Annual hydrographs, excluding the 1973 water year, were compared using average absolute error in annual runoff and daily flows and correlation coefficients of monthly and daily flows. Correlation coefficients for simulated and observed maximum daily discharges and flood volumes used for calibration range from 0.91 to 0.98, and average standard errors of estimate range from 18 to 45 percent. Correlation coefficients for simulated and observed annual flood-peak discharges range from 0.60 to 0.74, and average standard errors of estimate range from 33 to 44 percent. (Woodard-USGS)
Effect of contrast on human speed perception
NASA Technical Reports Server (NTRS)
Stone, Leland S.; Thompson, Peter
1992-01-01
This study is part of an ongoing collaborative research effort between the Life Science and Human Factors Divisions at NASA ARC to measure the accuracy of human motion perception in order to predict potential errors in human perception/performance and to facilitate the design of display systems that minimize the effects of such deficits. The study describes how contrast manipulations can produce significant errors in human speed perception. Specifically, when two simultaneously presented parallel gratings are moving at the same speed within stationary windows, the lower-contrast grating appears to move more slowly. This contrast-induced misperception of relative speed is evident across a wide range of contrasts (2.5-50 percent) and does not appear to saturate (e.g., a 50 percent contrast grating appears slower than a 70 percent contrast grating moving at the same speed). The misperception is large: a 70 percent contrast grating must, on average, be slowed by 35 percent to match a 10 percent contrast grating moving at 2 deg/sec (N = 6). Furthermore, it is largely independent of the absolute contrast level and is a quasilinear function of log contrast ratio. A preliminary parametric study shows that, although spatial frequency has little effect, the relative orientation of the two gratings is important. Finally, the effect depends on the temporal presentation of the stimuli: the effects of contrast on perceived speed appear lessened when the stimuli to be matched are presented sequentially. These data constrain both physiological models of visual cortex and models of human performance. We conclude that viewing conditions that affect contrast, such as fog, may cause significant errors in speed judgments.
Brand, Judith S; Humphreys, Keith; Thompson, Deborah J; Li, Jingmei; Eriksson, Mikael; Hall, Per; Czene, Kamila
2014-12-01
Mammographic density is a strong heritable trait, but data on its genetic component are limited to area-based and qualitative measures. We studied the heritability of volumetric mammographic density ascertained by a fully-automated method and the association with breast cancer susceptibility loci. Heritability of volumetric mammographic density was estimated with a variance component model in a sib-pair sample (N pairs = 955) of a Swedish screening based cohort. Associations with 82 established breast cancer loci were assessed in an independent sample of the same cohort (N = 4025 unrelated women) using linear models, adjusting for age, body mass index, and menopausal status. All tests were two-sided, except for heritability analyses where one-sided tests were used. After multivariable adjustment, heritability estimates (standard error) for percent dense volume, absolute dense volume, and absolute nondense volume were 0.63 (0.06) and 0.43 (0.06) and 0.61 (0.06), respectively (all P < .001). Percent and absolute dense volume were associated with rs10995190 (ZNF365; P = 9.0 × 10(-6) and 8.9 × 10(-7), respectively) and rs9485372 (TAB2; P = 1.8 × 10(-5) and 1.8 × 10(-3), respectively). We also observed associations of rs9383938 (ESR1) and rs2046210 (ESR1) with the absolute dense volume (P = 2.6 × 10(-4) and 4.6 × 10(-4), respectively), and rs6001930 (MLK1) and rs17356907 (NTN4) with the absolute nondense volume (P = 6.7 × 10(-6) and 8.4 × 10(-5), respectively). Our results support the high heritability of mammographic density, though estimates are weaker for absolute than percent dense volume. We also demonstrate that the shared genetic component with breast cancer is not restricted to dense tissues only. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
Propagation of Radiosonde Pressure Sensor Errors to Ozonesonde Measurements
NASA Technical Reports Server (NTRS)
Stauffer, R. M.; Morris, G.A.; Thompson, A. M.; Joseph, E.; Coetzee, G. J. R.; Nalli, N. R.
2014-01-01
Several previous studies highlight pressure (or, equivalently, pressure altitude) discrepancies between the radiosonde pressure sensor and that derived from a GPS receiver flown with the radiosonde. The offsets vary during the ascent in both absolute and percent pressure differences. To investigate this problem further, a total of 731 radiosonde-ozonesonde launches from the Southern Hemisphere subtropics to Northern mid-latitudes are considered, with launches between 2005-2013 from both longer-term and campaign-based intensive stations. Five series of radiosondes from two manufacturers (International Met Systems: iMet, iMet-P, iMet-S; and Vaisala: RS80-15N and RS92-SGP) are analyzed to determine the magnitude of the pressure offset. Additionally, electrochemical concentration cell (ECC) ozonesondes from three manufacturers (Science Pump Corporation: SPC; ENSCI-Droplet Measurement Technologies: DMT) are analyzed to quantify the effects these offsets have on the calculation of ECC ozone (O3) mixing ratio profiles (O3MR) from the ozonesonde-measured partial pressure. Approximately half of all offsets are at least 0.6 hPa in the free troposphere, and nearly a third are at least 1.0 hPa at 26 km, where a 1.0 hPa error represents 5 percent of the total atmospheric pressure. Pressure offsets have negligible effects on O3MR below 20 km (96 percent of launches lie within 5 percent O3MR error at 20 km). Ozone mixing ratio errors above 10 hPa (30 km) can exceed 10 percent (about 25 percent of launches that reach 30 km exceed this threshold). These errors cause the integrated ozonesonde-only column O3 from the GPS and radiosonde pressure profiles to disagree by an average of +6.5 DU. Comparisons of total column O3 between the GPS and radiosonde pressure profiles yield average differences of +1.1 DU when the O3 is integrated to burst altitude with the addition of the McPeters and Labow (2012) above-burst O3 column climatology.
Total column differences are reduced to an average of -0.5 DU when the O3 profile is integrated to 10 hPa with subsequent addition of the O3 climatology above 10 hPa. The RS92 radiosondes are superior in performance to the other radiosondes, with average 26 km errors of -0.12 hPa, or +0.61 percent O3MR error. iMet-P radiosondes had average 26 km errors of -1.95 hPa, or +8.75 percent O3MR error. Based on our analysis, we suggest that ozonesondes always be coupled with a GPS-enabled radiosonde and that pressure-dependent variables, such as O3MR, be recalculated and reprocessed using the GPS-measured altitude, especially when 26 km pressure offsets exceed 1.0 hPa (5 percent of ambient pressure).
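The sensitivity described above follows directly from the definition of the mixing ratio, O3MR = pO3/P (in the usual sonde units, O3MR in ppmv = 10 x pO3 [mPa] / P [hPa]), so a fractional pressure error maps almost one-to-one into an O3MR error of opposite sign. A small sketch with illustrative values:

```python
def o3mr_ppmv(p_o3_mPa, p_hPa):
    """Ozone mixing ratio (ppmv) from ozone partial pressure (mPa)
    and ambient pressure (hPa): O3MR = 10 * pO3 / P."""
    return 10.0 * p_o3_mPa / p_hPa

# near 26 km the ambient pressure is roughly 20 hPa, so a +1 hPa
# radiosonde offset (5% of ambient) biases O3MR low by about 5%
true_mr = o3mr_ppmv(10.0, 20.0)    # 5.0 ppmv at the true pressure
biased_mr = o3mr_ppmv(10.0, 21.0)  # same pO3, offset pressure
pct_err = 100.0 * (biased_mr - true_mr) / true_mr
print(round(pct_err, 2))  # -4.76
```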
Gaonkar, Narayan; Vaidya, R G
2016-05-01
A simple method to estimate the density of a biodiesel blend as a simultaneous function of temperature and volume percent of biodiesel is proposed. Employing Kay's mixing rule, we developed a model and investigated theoretically the density of different vegetable oil biodiesel blends as a simultaneous function of temperature and volume percent of biodiesel. A key advantage of the proposed model is that it requires only a single set of density values for the components of the blend at any two different temperatures. We observe that the density of the blend decreases linearly with increasing temperature and increases with increasing volume percent of biodiesel. The low values of the standard estimate of error (SEE = 0.0003-0.0022) and absolute average deviation (AAD = 0.03-0.15%) obtained with the proposed model indicate its predictive capability. The predicted values are in good agreement with recently available experimental data.
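A sketch of the kind of model described: a linear ρ(T) for each component anchored at two measured temperatures, combined via Kay's (volume-fraction-weighted) mixing rule. The component densities below are illustrative values, not the paper's data:

```python
def linear_density(T, T1, rho1, T2, rho2):
    """Density at temperature T from two measured (T, rho) points,
    assuming the linear rho(T) behavior the model relies on."""
    slope = (rho2 - rho1) / (T2 - T1)
    return rho1 + slope * (T - T1)

def blend_density(T, vol_frac_bio, bio_pts, diesel_pts):
    """Kay's mixing rule: volume-fraction-weighted sum of component densities."""
    rho_bio = linear_density(T, *bio_pts)
    rho_diesel = linear_density(T, *diesel_pts)
    return vol_frac_bio * rho_bio + (1.0 - vol_frac_bio) * rho_diesel

# illustrative component data (g/cm^3) at 15 C and 40 C
bio_pts = (15.0, 0.880, 40.0, 0.862)
diesel_pts = (15.0, 0.835, 40.0, 0.818)
# B20 blend (20% biodiesel by volume) at 25 C
print(round(blend_density(25.0, 0.20, bio_pts, diesel_pts), 4))  # 0.8371
```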
NASA Technical Reports Server (NTRS)
Kuehn, C. E.; Himwich, W. E.; Clark, T. A.; Ma, C.
1991-01-01
The internal consistency of the baseline-length measurements derived from analysis of several independent VLBI experiments is an estimate of the measurement precision. The paper investigates whether the inclusion of water vapor radiometer (WVR) data as an absolute calibration of the propagation delay due to water vapor improves the precision of VLBI baseline-length measurements. The paper analyzes 28 International Radio Interferometric Surveying runs between June 1988 and January 1989; WVR measurements were made during each session. The addition of WVR data decreased the scatter of the length measurements of the baselines by 5-10 percent. The observed reduction in the scatter of the baseline lengths is less than what is expected from the behavior of the formal errors, which suggest that the baseline-length measurement precision should improve 10-20 percent if WVR data are included in the analysis. The discrepancy between the formal errors and the baseline-length results can be explained as the consequence of systematic errors in the dry-mapping function parameters, instrumental biases in the WVR and the barometer, or both.
Time series forecasting of future claims amount of SOCSO's employment injury scheme (EIS)
NASA Astrophysics Data System (ADS)
Zulkifli, Faiz; Ismail, Isma Liana; Chek, Mohd Zaki Awang; Jamal, Nur Faezah; Ridzwan, Ahmad Nur Azam Ahmad; Jelas, Imran Md; Noor, Syamsul Ikram Mohd; Ahmad, Abu Bakar
2012-09-01
The Employment Injury Scheme (EIS) protects employees who are injured in accidents while working, while commuting between home and the workplace, while taking a break during an authorized recess, or while travelling on work-related business. The main purpose of this study is to forecast the claims amount of the EIS for the years 2011 through 2015 using appropriate models. The models were tested on actual EIS data from 1972 to 2010. Three forecasting models are compared: the Naïve with Trend model, the Average Percent Change model, and the Double Exponential Smoothing model. The best model is selected on the basis of the smallest error measures, the Mean Squared Error (MSE) and the Mean Absolute Percentage Error (MAPE). By this criterion, the Average Percent Change model fits the EIS data best. The results also show the claims amount of the EIS continuing to trend upward from 2010 through 2015.
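The selection step can be sketched as fitting a candidate model on a training span and scoring its one-step-ahead forecasts with MSE and MAPE. Below is a minimal Python sketch of the Average Percent Change model and the two error measures; the claims series is invented for illustration:

```python
# Average Percent Change forecaster: project the last observation forward
# using the mean period-on-period growth rate of the history.

def average_percent_change_forecast(history, horizon):
    changes = [history[i] / history[i - 1] - 1.0 for i in range(1, len(history))]
    g = sum(changes) / len(changes)           # average percent change
    forecasts, last = [], history[-1]
    for _ in range(horizon):
        last = last * (1.0 + g)
        forecasts.append(last)
    return forecasts

def mse(actual, forecast):
    return sum((a - f) ** 2 for a, f in zip(actual, forecast)) / len(actual)

def mape(actual, forecast):
    return 100.0 * sum(abs((a - f) / a) for a, f in zip(actual, forecast)) / len(actual)

claims = [10.2, 11.0, 11.9, 12.5, 13.6, 14.4]   # hypothetical annual amounts
fit, holdout = claims[:4], claims[4:]
pred = average_percent_change_forecast(fit, len(holdout))
print(mse(holdout, pred), mape(holdout, pred))
```

The same scoring functions would be applied to the Naïve with Trend and Double Exponential Smoothing candidates, and the model with the smallest MSE and MAPE retained.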
NASA Astrophysics Data System (ADS)
Sasmita, Yoga; Darmawan, Gumgum
2017-08-01
This research evaluates the forecasting performance of Fourier Series Analysis (FSA) and Singular Spectrum Analysis (SSA), two explorative methods that do not require parametric assumptions. The methods are applied to predicting the monthly volume of motorcycle sales in Indonesia from January 2005 to December 2016. Both models are suitable for data with seasonal and trend components. Technically, FSA represents the series as trend and seasonal components at different frequencies, which are difficult to identify in a time-domain analysis. With a hidden period of 2.918 ≈ 3 and a significant model order of 3, the FSA model is used to predict the testing data. SSA, meanwhile, has two main stages, decomposition and reconstruction: SSA decomposes the time series into different components, and the reconstruction stage begins by grouping the decomposition results according to the similarity of each component's period in the trajectory matrix. With the optimal window length (L = 53) and grouping (r = 4), SSA predicts the testing data. Forecasting accuracy is evaluated using the Mean Absolute Percentage Error (MAPE), Mean Absolute Error (MAE), and Root Mean Square Error (RMSE). For the next 12 months, SSA yields MAPE = 13.54 percent, MAE = 61,168.43, and RMSE = 75,244.92, whereas FSA yields MAPE = 28.19 percent, MAE = 119,718.43, and RMSE = 142,511.17. Therefore, the volume of motorcycle sales in the next period should be predicted with the SSA method, which has the better accuracy.
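The SSA decomposition-reconstruction pipeline named above (trajectory matrix, decomposition, grouping, diagonal averaging) can be sketched in a few lines of numpy. This is a generic rank-r SSA reconstruction on a synthetic trend-plus-seasonal series, not the authors' code or data:

```python
import numpy as np

def ssa_reconstruct(series, L, r):
    """Embed a series in an L x K trajectory matrix, keep the leading r
    singular components, and Hankel-average back to a series."""
    x = np.asarray(series, dtype=float)
    N = len(x)
    K = N - L + 1
    X = np.column_stack([x[i:i + L] for i in range(K)])  # trajectory matrix
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    Xr = (U[:, :r] * s[:r]) @ Vt[:r]                     # rank-r approximation
    # Diagonal (Hankel) averaging: entry X[a, j] corresponds to x[j + a]
    recon = np.zeros(N)
    counts = np.zeros(N)
    for j in range(K):
        recon[j:j + L] += Xr[:, j]
        counts[j:j + L] += 1
    return recon / counts

t = np.arange(60)
y = 0.5 * t + 10 * np.sin(2 * np.pi * t / 12)  # synthetic trend + 12-period season
approx = ssa_reconstruct(y, L=24, r=4)
```

A linear trend contributes rank 2 and a single sinusoid rank 2 to the trajectory matrix, so r = 4 recovers this synthetic series essentially exactly; real data would leave a noise residual.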
NASA Technical Reports Server (NTRS)
Ulich, B. L.; Rhodes, P. J.; Davis, J. H.; Hollis, J. M.
1980-01-01
Careful observations have been made at 86.1 GHz to derive the absolute brightness temperatures of the sun (7914 + or - 192 K), Venus (357.5 + or - 13.1 K), Jupiter (179.4 + or - 4.7 K), and Saturn (153.4 + or - 4.8 K) with a standard error of about three percent. This is a significant improvement in accuracy over previous results at millimeter wavelengths. A stable transmitter and novel superheterodyne receiver were constructed and used to determine the effective collecting area of the Millimeter Wave Observatory (MWO) 4.9-m antenna relative to a previously calibrated standard gain horn. The thermal scale was set by calibrating the radiometer with carefully constructed and tested hot and cold loads. The brightness temperatures may be used to establish an absolute calibration scale and to determine the antenna aperture and beam efficiencies of other radio telescopes at 3.5-mm wavelength.
Learning Kinematic Constraints in Laparoscopic Surgery
Huang, Felix C.; Mussa-Ivaldi, Ferdinando A.; Pugh, Carla M.; Patton, James L.
2012-01-01
To better understand how kinematic variables impact learning in surgical training, we devised an interactive environment for simulated laparoscopic maneuvers, using either 1) mechanical constraints typical of a surgical “box-trainer” or 2) virtual constraints in which free hand movements control virtual tool motion. During training, the virtual tool responded to the absolute position in space (Position-Based) or the orientation (Orientation-Based) of a hand-held sensor. Volunteers were further assigned to different sequences of target distances (Near-Far-Near or Far-Near-Far). Training with the Orientation-Based constraint enabled much lower path error and shorter movement times during training, which suggests that tool motion that simply mirrors joint motion is easier to learn. When evaluated in physically constrained (physical box-trainer) conditions, each group exhibited improved performance from training. However, Position-Based training enabled greater reductions in movement error relative to Orientation-Based (mean difference: 14.0 percent; CI: 0.7, 28.6). Furthermore, the Near-Far-Near schedule allowed a greater decrease in task time relative to the Far-Near-Far sequence (mean: −13.5 percent; CI: −19.5, −7.5). Training that focused on shallow tool insertion (near targets) might promote more efficient movement strategies by emphasizing the curvature of tool motion. In addition, our findings suggest that an understanding of absolute tool position is critical to coping with mechanical interactions between the tool and trocar. PMID:23293709
VizieR Online Data Catalog: WISE/NEOWISE Mars-crossing asteroids (Ali-Lagoa+, 2017)
NASA Astrophysics Data System (ADS)
Ali-Lagoa, V.; Delbo, M.
2017-07-01
We fitted the near-Earth asteroid thermal model of Harris (1998, Icarus, 131, 29) to WISE/NEOWISE thermal infrared data (see, e.g., Mainzer et al. 2011ApJ...736..100M, and Masiero et al. 2014, Cat. J/ApJ/791/121). The table contains the best-fitting values of size and beaming parameter. We note that the beaming parameter is a strictly positive quantity, but a negative sign is given to indicate whenever we could not fit it and had to assume a default value. We also provide the visible geometric albedos computed from the diameter and the tabulated absolute magnitudes. Minimum relative errors of 10, 15, and 20 percent should be considered for size, beaming parameter and albedo in those cases for which the beaming parameter could be fitted. Otherwise, the minimum relative errors in size and albedo increase to 20 and 40 percent (see, e.g., Mainzer et al. 2011ApJ...736..100M). The asteroid absolute magnitudes and slope parameters retrieved from the Minor Planet Center (MPC) are included, as well as the number of observations used in each WISE band (nW2, nW3, nW4) and the corresponding average values of heliocentric and geocentric distances and phase angle of the observations. The ephemerides were retrieved from the MIRIADE service (http://vo.imcce.fr/webservices/miriade/?ephemph). (1 data file).
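The albedo step described above follows the standard asteroid relation between diameter, geometric albedo, and absolute magnitude, D(km) = 1329 × 10^(-H/5) / sqrt(pV). A quick sketch (not the catalog's own code; the input values are hypothetical):

```python
# Geometric visible albedo from a fitted diameter and a tabulated absolute
# magnitude H, via the standard relation D = 1329 * 10**(-H/5) / sqrt(pV).

def geometric_albedo(diameter_km, abs_mag_H):
    return (1329.0 / diameter_km * 10.0 ** (-abs_mag_H / 5.0)) ** 2

pV = geometric_albedo(1.0, 17.0)  # hypothetical 1-km Mars-crosser with H = 17
```

Because the albedo is derived from both the fitted diameter and H, its minimum relative error is roughly twice that of the diameter, consistent with the 20 versus 40 percent figures quoted above.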
Piston manometer as an absolute standard for vacuum-gage calibration in the range 2 to 500 millitorr
NASA Technical Reports Server (NTRS)
Warshawsky, I.
1972-01-01
A thin disk is suspended, with very small annular clearance, in a cylindrical opening in the base plate of a calibration chamber. A continuous flow of calibration gas passes through the chamber and annular opening to a downstream high-vacuum pump. The ratio of pressures on the two faces of the disk is very large, so that the upstream pressure is substantially equal to the net force on the disk divided by the disk area. This force is measured with a dynamometer that is calibrated in place with dead weights. A probable error of + or - (0.2 millitorr plus 0.2 percent) is attainable when the downstream pressure is known to 10 percent.
Azeez, Adeboye; Obaromi, Davies; Odeyemi, Akinwumi; Ndege, James; Muntabayi, Ruffin
2016-07-26
Tuberculosis (TB) is a deadly infectious disease caused by Mycobacterium tuberculosis. A chronic and highly infectious disease, TB is prevalent in almost every part of the globe. More than 95% of TB mortality occurs in low- and middle-income countries. In 2014, approximately 10 million people were diagnosed with active TB and two million died from the disease. In this study, our aim is to compare the predictive powers of a seasonal autoregressive integrated moving average (SARIMA) model and a hybrid SARIMA-neural network auto-regression (SARIMA-NNAR) model for TB incidence, and to analyse its seasonality in South Africa. TB incidence case data from January 2010 to December 2015 were extracted from the Eastern Cape Health facility report of the electronic Tuberculosis Register (ERT.Net). A SARIMA model and a combined SARIMA and neural network auto-regression (SARIMA-NNAR) model were used to analyse and predict the TB data from 2010 to 2015. The performance measures mean square error (MSE), root mean square error (RMSE), mean absolute error (MAE), mean percent error (MPE), mean absolute scaled error (MASE) and mean absolute percentage error (MAPE) were applied to assess which model predicts better. Although both models could predict TB incidence, the combined model performed better. For the combined model, the Akaike information criterion (AIC), second-order AIC (AICc) and Bayesian information criterion (BIC) were 288.56, 308.31 and 299.09 respectively, lower than the corresponding SARIMA values of 329.02, 327.20 and 341.99. The SARIMA-NNAR model forecast a slightly greater increase in the seasonal trend of TB incidence than the single model. The combined model, with its lower AICc, thus gave the better TB incidence forecast.
The model also indicates the need for resolute intervention to reduce infectious disease transmission with co-infection with HIV and other concomitant diseases, and also at festival peak periods.
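The six performance measures named in the abstract can be computed from a forecast, its actuals, and the in-sample series (MASE scales the MAE by the in-sample one-step naive error). A minimal sketch with invented numbers, not the TB register data:

```python
# MSE, RMSE, MAE, MPE, MAPE, and MASE for a forecast against actuals.
# `insample` is the training series used to scale MASE.

def errors(actual, forecast, insample):
    e = [a - f for a, f in zip(actual, forecast)]
    n = len(e)
    mse = sum(x * x for x in e) / n
    rmse = mse ** 0.5
    mae = sum(abs(x) for x in e) / n
    mpe = 100.0 * sum((a - f) / a for a, f in zip(actual, forecast)) / n
    mape = 100.0 * sum(abs(a - f) / a for a, f in zip(actual, forecast)) / n
    # MASE: MAE scaled by the in-sample one-step naive forecast error
    naive_mae = sum(abs(insample[i] - insample[i - 1])
                    for i in range(1, len(insample))) / (len(insample) - 1)
    mase = mae / naive_mae
    return {"MSE": mse, "RMSE": rmse, "MAE": mae,
            "MPE": mpe, "MAPE": mape, "MASE": mase}
```

MPE keeps the sign of the errors and so reveals systematic over- or under-forecasting, while MAPE and MASE measure magnitude only; reporting both, as the study does, separates bias from spread.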
The growth pattern and fuel life cycle analysis of the electricity consumption of Hong Kong.
To, W M; Lai, T M; Lo, W C; Lam, K H; Chung, W L
2012-06-01
As the consumption of electricity increases, air pollutants from power generation increase. In metropolises such as Hong Kong and other Asian cities, the surge of electricity consumption has been phenomenal over the past decades. This paper presents a historical review of electricity consumption, population, and change in economic structure in Hong Kong. It is hypothesized that the growth of electricity consumption and change in gross domestic product can be modeled by 4-parameter logistic functions. The accuracy of the functions was assessed by Pearson's correlation coefficient, mean absolute percent error, and root mean squared percent error. The paper also applies the life cycle approach to determine carbon dioxide, methane, nitrous oxide, sulfur dioxide, and nitrogen oxide emissions for the electricity consumption of Hong Kong. Monte Carlo simulations were applied to determine the confidence intervals of pollutant emissions. The implications of importing more nuclear power are discussed. Copyright © 2012 Elsevier Ltd. All rights reserved.
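A 4-parameter logistic growth curve and the two percent-error accuracy measures named above can be sketched as follows; the parameter values and series are illustrative only, not fitted Hong Kong data:

```python
import numpy as np

# 4-parameter logistic: rises from `lower` to `upper`, steepest at `midpoint`.
def logistic4(t, lower, upper, rate, midpoint):
    return lower + (upper - lower) / (1.0 + np.exp(-rate * (t - midpoint)))

def mape(actual, fitted):
    """Mean absolute percent error."""
    return 100.0 * np.mean(np.abs((actual - fitted) / actual))

def rmspe(actual, fitted):
    """Root mean squared percent error."""
    return 100.0 * np.sqrt(np.mean(((actual - fitted) / actual) ** 2))

years = np.arange(1970, 2011)
demand = logistic4(years, 5.0, 45.0, 0.15, 1990.0)  # hypothetical TWh curve
```

In practice the four parameters would be fitted to the consumption series (e.g. by nonlinear least squares) and the fit judged by the correlation coefficient, MAPE, and RMSPE as in the paper.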
Nimbus-7 Total Ozone Mapping Spectrometer (TOMS) Data Products User's Guide
NASA Technical Reports Server (NTRS)
McPeters, Richard D.; Bhartia, P. K.; Krueger, Arlin J.; Herman, Jay R.; Schlesinger, Barry M.; Wellemeyer, Charles G.; Seftor, Colin J.; Jaross, Glen; Taylor, Steven L.; Swissler, Tom;
1996-01-01
Two data products from the Total Ozone Mapping Spectrometer (TOMS) onboard Nimbus-7 have been archived at the Distributed Active Archive Center, in the form of Hierarchical Data Format files. The instrument measures backscattered Earth radiance and incoming solar irradiance; their ratio is used in ozone retrievals. Changes in the instrument sensitivity are monitored by a spectral discrimination technique using measurements of the intrinsically stable wavelength dependence of derived surface reflectivity. The algorithm to retrieve total column ozone compares measured Earth radiances at sets of three wavelengths with radiances calculated for different total ozone values, solar zenith angles, and optical paths. The initial error in the absolute scale for TOMS total ozone is 3 percent, the one standard deviation random error is 2 percent, and drift is less than 1.0 percent per decade. The Level-2 product contains the measured radiances, the derived total ozone amount, and reflectivity information for each scan position. The Level-3 product contains daily total ozone amount and reflectivity in a 1-degree latitude by 1.25-degree longitude grid. The Level-3 product also is available on CD-ROM. Detailed descriptions of both HDF data files and the CD-ROM product are provided.
Nimbus-7 Total Ozone Mapping Spectrometer (TOMS) data products user's guide
NASA Technical Reports Server (NTRS)
Mcpeters, Richard D.; Krueger, Arlin J.; Bhartia, P. K.; Herman, Jay R.; Oaks, Arnold; Ahmad, Ziuddin; Cebula, Richard P.; Schlesinger, Barry M.; Swissler, Tom; Taylor, Steven L.
1993-01-01
Two tape products from the Total Ozone Mapping Spectrometer (TOMS) aboard the Nimbus-7 have been archived at the National Space Science Data Center. The instrument measures backscattered Earth radiance and incoming solar irradiance; their ratio -- the albedo -- is used in ozone retrievals. In-flight measurements are used to monitor changes in the instrument sensitivity. The algorithm to retrieve total column ozone compares the observed ratios of albedos at pairs of wavelengths with pair ratios calculated for different ozone values, solar zenith angles, and optical paths. The initial error in the absolute scale for TOMS total ozone is 3 percent, the one standard-deviation random error is 2 percent, and the drift is +/- 1.5 percent over 14.5 years. The High Density TOMS (HDTOMS) tape contains the measured albedos, the derived total ozone amount, reflectivity, and cloud-height information for each scan position. It also contains an index of SO2 contamination for each position. The Gridded TOMS (GRIDTOMS) tape contains daily total ozone and reflectivity in roughly equal area grids (110 km in latitude by about 100-150 km in longitude). Detailed descriptions of the tape structure and record formats are provided.
Measurement of children's physical activity using a pedometer with a built-in memory.
Trapp, Georgina S A; Giles-Corti, Billie; Bulsara, Max; Christian, Hayley E; Timperio, Anna F; McCormack, Gavin R; Villanueva, Karen
2013-05-01
We evaluated the accuracy of the Accusplit AH120 pedometer (built-in memory) for recording step counts of children during treadmill walking against (1) observer-counted steps and (2) concurrently measured steps using the previously validated Yamax Digiwalker SW-700 pedometer. This was a cross-sectional validation study performed under controlled settings. Forty-five 9-12-year-olds walked on treadmills at speeds of 42, 66, and 90 m/min to simulate slow, moderate, and fast walking, wearing the Accusplit and Yamax pedometers concurrently on their right hip. Observer-counted steps were captured by video camera and manually counted. The absolute value of percent error was calculated for each comparison. Bland-Altman plots were constructed to show the distribution of the individual (criterion-comparison) scores around zero. Both pedometers under-recorded observer-counted steps at all three walk speeds. The absolute value of percent error was highest at the slowest walk speed (Accusplit=46.9%; Yamax=44.1%) and lowest at the fastest walk speed (Accusplit=8.6%; Yamax=8.9%). Bland-Altman plots showed high agreement between the pedometers for all three walk speeds. Using pedometers with built-in memory capabilities eliminates the need for children to manually log step counts daily, potentially improving data accuracy and completeness. Step counts from the Accusplit (built-in memory) and Yamax (widely used) pedometers were comparable across all speeds, but their level of accuracy was dependent on walking pace. Pedometers should be used with caution in children as they significantly undercount steps, and this error is greatest at slower walk speeds. Copyright © 2012 Sports Medicine Australia. Published by Elsevier Ltd. All rights reserved.
An optimized network for phosphorus load monitoring for Lake Okeechobee, Florida
Gain, W.S.
1997-01-01
Phosphorus load data were evaluated for Lake Okeechobee, Florida, for water years 1982 through 1991. Standard errors for load estimates were computed from available phosphorus concentration and daily discharge data. Components of error were associated with uncertainty in concentration and discharge data and were calculated for existing conditions and for 6 alternative load-monitoring scenarios for each of 48 distinct inflows. Benefit-cost ratios were computed for each alternative monitoring scenario at each site by dividing estimated reductions in load uncertainty by the 5-year average costs of each scenario in 1992 dollars. Absolute and marginal benefit-cost ratios were compared in an iterative optimization scheme to determine the most cost-effective combination of discharge and concentration monitoring scenarios for the lake. If the current (1992) discharge-monitoring network around the lake is maintained, the water-quality sampling at each inflow site twice each year is continued, and the nature of loading remains the same, the standard error of computed mean-annual load is estimated at about 98 metric tons per year compared to an absolute loading rate (inflows and outflows) of 530 metric tons per year. This produces a relative uncertainty of nearly 20 percent. The standard error in load can be reduced to about 20 metric tons per year (4 percent) by adopting an optimized set of monitoring alternatives at a cost of an additional $200,000 per year. The final optimized network prescribes changes to improve both concentration and discharge monitoring. These changes include the addition of intensive sampling with automatic samplers at 11 sites, the initiation of event-based sampling by observers at another 5 sites, the continuation of periodic sampling 12 times per year at 1 site, the installation of acoustic velocity meters to improve discharge gaging at 9 sites, and the improvement of a discharge rating at 1 site.
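One greedy reading of the iterative benefit-cost scheme described above is to repeatedly adopt, within a budget, the monitoring upgrade with the largest reduction in load uncertainty per dollar. The sketch below is a generic illustration under that assumption; the sites, costs, and variance reductions are hypothetical, not values from the study:

```python
# Greedy benefit-cost selection of monitoring scenarios, one per site.
# options: {site: [(scenario_name, annual_cost, uncertainty_reduction), ...]}

def optimize_network(options, budget):
    chosen, spent = {}, 0.0
    candidates = [(site, s, c, v)
                  for site, lst in options.items() for (s, c, v) in lst]
    while True:
        best = None
        for site, s, c, v in candidates:
            if site in chosen or spent + c > budget or c <= 0:
                continue
            ratio = v / c                      # marginal benefit-cost ratio
            if best is None or ratio > best[4]:
                best = (site, s, c, v, ratio)
        if best is None:
            break
        site, s, c, v, _ = best
        chosen[site] = s
        spent += c
    return chosen, spent
```

Usage: with two candidate upgrades at site "S1" and one at "S2", the routine first takes the option with the highest uncertainty reduction per dollar, then continues until the budget or the candidate list is exhausted.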
Performance Evaluation of Five Turbidity Sensors in Three Primary Standards
Snazelle, Teri T.
2015-10-28
Open-File Report 2015-1172 is temporarily unavailable. Five commercially available turbidity sensors were evaluated by the U.S. Geological Survey Hydrologic Instrumentation Facility (HIF) for accuracy and precision in three types of turbidity standards: formazin, StablCal, and AMCO Clear (AMCO-AEPA). The U.S. Environmental Protection Agency (EPA) recognizes all three turbidity standards as primary standards, meaning they are acceptable for reporting purposes. The Forrest Technology Systems (FTS) DTS-12, the Hach SOLITAX sc, the Xylem EXO turbidity sensor, the Yellow Springs Instrument (YSI) 6136 turbidity sensor, and the Hydrolab Series 5 self-cleaning turbidity sensor were evaluated to determine whether turbidity measurements in the three primary standards are comparable to each other, and to ascertain whether the primary standards are truly interchangeable. A formazin 4000 nephelometric turbidity unit (NTU) stock was purchased, and dilutions of 40, 100, 400, 800, and 1000 NTU were made fresh the day of testing. StablCal and AMCO Clear (for the Hach 2100N) standards with corresponding concentrations were also purchased for the evaluation. Sensor performance was not evaluated at turbidity levels less than 40 NTU due to the unavailability of polymer-bead turbidity standards rated for general use. The percent error was calculated as the true (not absolute) difference between the measured turbidity and the standard value, divided by the standard value. The sensors that demonstrated the best overall performance in the evaluation were the Hach SOLITAX and the Hydrolab Series 5 turbidity sensor when the operating range (0.001–4000 NTU for the SOLITAX and 0.1–3000 NTU for the Hydrolab) was considered in addition to sensor accuracy and precision. The average percent error in the three standards was 3.80 percent for the SOLITAX and -4.46 percent for the Hydrolab.
The DTS-12 also demonstrated good accuracy with an average percent error of 2.02 percent and a maximum relative standard deviation of 0.51 percent for the operating range, which was limited to 0.01–1600 NTU at the time of this report. Test results indicated an average percent error of 19.81 percent in the three standards for the EXO turbidity sensor and 9.66 percent for the YSI 6136. The significant variability in sensor performance in the three primary standards suggests that although all three types are accepted as primary calibration standards, they are not interchangeable, and sensor results in the three types of standards are not directly comparable.
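The report's two measures, signed percent error against the standard value (accuracy) and relative standard deviation of replicate readings (precision), can be codified directly. The replicate readings below are invented for illustration:

```python
# Accuracy: true (signed, not absolute) percent error against the standard.
def percent_error(measured_mean, standard):
    return 100.0 * (measured_mean - standard) / standard

# Precision: relative standard deviation of replicate readings, in percent.
def relative_std_dev(readings):
    n = len(readings)
    mean = sum(readings) / n
    sd = (sum((r - mean) ** 2 for r in readings) / (n - 1)) ** 0.5
    return 100.0 * sd / mean

readings = [398.0, 402.5, 400.4]   # hypothetical replicate readings, 400-NTU standard
err = percent_error(sum(readings) / len(readings), 400.0)
```

Keeping the sign of the percent error, as the report does, distinguishes sensors that read systematically high (positive) from those that read low (negative), which an absolute error would hide.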
Surface albedo from bidirectional reflectance
NASA Technical Reports Server (NTRS)
Ranson, K. J.; Irons, J. R.; Daughtry, C. S. T.
1991-01-01
The validity of integrating over discrete wavelength bands to estimate the total shortwave bidirectional reflectance of vegetated and bare soil surfaces is examined. Methods for estimating albedo from multiple-angle, discrete wavelength band radiometer measurements are studied. These methods include a numerical integration technique and the integration of an empirically derived equation for bidirectional reflectance. It is concluded that shortwave albedos estimated through both techniques agree favorably with the independent pyranometer measurements. Absolute rms errors are found to be 0.5 percent or less for both grass sod and bare soil surfaces.
NASA Technical Reports Server (NTRS)
Lockwood, G. W.; Tueg, H.; White, N. M.
1992-01-01
By imaging sunlight diffracted by 20- and 30-micron diameter pinholes onto the entrance aperture of a photoelectric grating scanner, the solar spectral irradiance was determined relative to the spectrophotometric standard star Vega, observed at night with the same instrument. Solar irradiances are tabulated at 4 A increments from 3295 A to 8500 A. Over most of the visible spectrum, the internal error of measurement is less than 2 percent. This calibration is compared with earlier irradiance measurements by Neckel and Labs (1984) and by Arvesen et al. (1969) and with the high-resolution solar atlas by Kurucz et al. The three calibrations agree well in visible light but differ by as much as 10 percent in the ultraviolet.
Wetherbee, Gregory A.; Latysh, Natalie E.; Burke, Kevin P.
2005-01-01
Six external quality-assurance programs were operated by the U.S. Geological Survey (USGS) External Quality-Assurance (QA) Project for the National Atmospheric Deposition Program/National Trends Network (NADP/NTN) from 2002 through 2003. Each program measured specific components of the overall error inherent in NADP/NTN wet-deposition measurements. The intersite-comparison program assessed the variability and bias of pH and specific conductance determinations made by NADP/NTN site operators twice per year with respect to accuracy goals. The percentage of site operators that met the pH accuracy goals decreased from 92.0 percent in spring 2002 to 86.3 percent in spring 2003. In these same four intersite-comparison studies, the percentage of site operators that met the accuracy goals for specific conductance ranged from 94.4 to 97.5 percent. The blind-audit program and the sample-handling evaluation (SHE) program evaluated the effects of routine sample handling, processing, and shipping on the chemistry of weekly NADP/NTN samples. The blind-audit program data indicated that the variability introduced by sample handling might be environmentally significant to data users for sodium, potassium, chloride, and hydrogen ion concentrations during 2002. In 2003, the blind-audit program was modified and replaced by the SHE program. The SHE program was designed to control the effects of laboratory-analysis variability. The 2003 SHE data had less overall variability than the 2002 blind-audit data. The SHE data indicated that sample handling buffers the pH of the precipitation samples and, in turn, results in slightly lower conductivity. Otherwise, the SHE data provided error estimates that were not environmentally significant to data users. The field-audit program was designed to evaluate the effects of onsite exposure, sample handling, and shipping on the chemistry of NADP/NTN precipitation samples. 
Field-audit results indicated that exposure of NADP/NTN wet-deposition samples to onsite conditions tended to neutralize the acidity of the samples by less than 1.0 microequivalent per liter. Onsite exposure of the sampling bucket appeared to slightly increase the concentration of most of the analytes but not to an extent that was environmentally significant to NADP data users. An interlaboratory-comparison program was used to estimate the analytical variability and bias of the NADP Central Analytical Laboratory (CAL) during 2002-03. Bias was identified in the CAL data for calcium, magnesium, sodium, potassium, ammonium, chloride, nitrate, sulfate, hydrogen ion, and specific conductance, but the absolute value of the bias was less than analytical minimum detection limits for all constituents except magnesium, nitrate, sulfate, and specific conductance. Control charts showed that CAL results were within statistical control approximately 90 percent of the time. Data for the analysis of ultrapure deionized-water samples indicated that CAL did not have problems with laboratory contamination. During 2002-03, the overall variability of data from the NADP/NTN precipitation-monitoring system was estimated using data from three collocated monitoring sites. Measurement differences of constituent concentration and deposition for paired samples from the collocated samplers were evaluated to compute error terms. The medians of the absolute percentage errors (MAEs) for the paired samples generally were larger for cations (approximately 8 to 50 percent) than for anions (approximately 3 to 33 percent). MAEs were approximately 16 to 30 percent for hydrogen-ion concentration, less than 10 percent for specific conductance, less than 5 percent for sample volume, and less than 8 percent for precipitation depth. The variability attributed to each component of the sample-collection and analysis processes, as estimated by USGS quality-assurance programs, varied among analytes. 
Laboratory analysis variability accounted for approximately 2 percent of the
NASA Technical Reports Server (NTRS)
Nitta, Nariaki
1988-01-01
Hard X-ray spectra of solar flares obtained by the broadband spectrometers aboard Hinotori and SMM are compared. Within the uncertainty introduced by assuming the typical energy of the background X-rays, spectra from the Hinotori spectrometer are usually consistent with those from the SMM spectrometer for flares in 1981. By contrast, flares in 1982 persistently show 20-50 percent higher flux by Hinotori than by SMM. If this discrepancy is entirely attributable to errors in the calibration of energy ranges, the errors would be about 10 percent. Despite this discrepancy in absolute flux, spectra in the decay phase of one flare revealed a hard X-ray component (probably a 'superhot' component) that could be explained neither by emission from a plasma at about 2 x 10 to the 7th K nor by a nonthermal power-law component. Imaging observations during this period show hard X-ray emission nearly cospatial with soft X-ray emission, in contrast with earlier times at which hard and soft X-rays come from different places.
NASA Technical Reports Server (NTRS)
Lahti, G. P.; Mueller, R. A.
1973-01-01
Measurements of MeV neutrons were made at the surface of a lithium hydride and depleted-uranium shielded reactor. Four shield configurations were considered; these were assembled progressively with cylindrical shells of 5-centimeter-thick depleted uranium, 13-centimeter-thick lithium hydride, 5-centimeter-thick depleted uranium, 13-centimeter-thick lithium hydride, 5-centimeter-thick depleted uranium, and 3-centimeter-thick depleted uranium. Measurements were made with an NE-218 scintillation spectrometer; proton pulse-height distributions were differentiated to obtain neutron spectra. Calculations were made using the two-dimensional discrete ordinates code DOT and ENDF/B (version 3) cross sections. Good agreement between measured and calculated spectral shape was observed. Absolute measured and calculated fluxes were within 50 percent of one another; observed discrepancies in absolute flux may be due to cross-section errors.
Ultrasonographic Fetal Weight Estimation: Should Macrosomia-Specific Formulas Be Utilized?
Porter, Blake; Neely, Cherry; Szychowski, Jeff; Owen, John
2015-08-01
This study aims to derive an estimated fetal weight (EFW) formula in macrosomic fetuses, compare its accuracy to the 1986 Hadlock IV formula, and assess whether including maternal diabetes (MDM) improves estimation. Retrospective review of nonanomalous live-born singletons with birth weight (BWT) ≥ 4 kg and biometry within 14 days of birth. Formula accuracy included: (1) mean error (ME = EFW - BWT), (2) absolute mean error (AME = absolute value of [1]), and (3) mean percent error (MPE, [1]/BWT × 100%). Using loge BWT as the dependent variable, multivariable linear regression produced a macrosomic-specific formula in a "training" dataset which was verified by "validation" data. Formulas specific for MDM were also developed. Out of the 403 pregnancies, birth gestational age was 39.5 ± 1.4 weeks, and median BWT was 4,240 g. The macrosomic formula from the training data (n = 201) had associated ME = 54 ± 284 g, AME = 234 ± 167 g, and MPE = 1.6 ± 6.2%; evaluation in the validation dataset (n = 202) showed similar errors. The Hadlock formula had associated ME = -369 ± 422 g, AME = 451 ± 332 g, MPE = -8.3 ± 9.3% (all p < 0.0001). Diabetes-specific formula errors were similar to the macrosomic formula errors (all p = NS). With BWT ≥ 4 kg, the macrosomic formula was significantly more accurate than Hadlock IV, which systematically underestimates fetal/BWT. Diabetes-specific formulas did not improve accuracy. A specific formula should be considered when macrosomia is suspected. Thieme Medical Publishers 333 Seventh Avenue, New York, NY 10001, USA.
Effect of limbal marking prior to laser ablation on the magnitude of cyclotorsional error.
Chen, Xiangjun; Stojanovic, Aleksandar; Stojanovic, Filip; Eidet, Jon Roger; Raeder, Sten; Øritsland, Haakon; Utheim, Tor Paaske
2012-05-01
To evaluate the residual registration error after limbal-marking-based manual adjustment in cyclotorsional tracker-controlled laser refractive surgery. Two hundred eyes undergoing custom surface ablation with the iVIS Suite (iVIS Technologies) were divided into limbal marked (marked) and non-limbal marked (unmarked) groups. Iris registration information was acquired preoperatively from all eyes. Preoperatively, the horizontal axis was recorded in the marked group for use in manual cyclotorsional alignment prior to surgical iris registration. During iris registration, the preoperative iris information was compared to the eye-tracker captured image. The magnitudes of the registration error angle and cyclotorsional movement during the subsequent laser ablation were recorded and analyzed. Mean magnitude of registration error angle (absolute value) was 1.82°±1.31° (range: 0.00° to 5.50°) and 2.90°±2.40° (range: 0.00° to 13.50°) for the marked and unmarked groups, respectively (P<.001). Mean magnitude of cyclotorsional movement during the laser ablation (absolute value) was 1.15°±1.34° (range: 0.00° to 7.00°) and 0.68°±0.97° (range: 0.00° to 6.00°) for the marked and unmarked groups, respectively (P=.005). Forty-six percent and 60% of eyes had registration error >2°, whereas 22% and 20% of eyes had cyclotorsional movement during ablation >2° in the marked and unmarked groups, respectively. Limbal-marking-based manual alignment prior to laser ablation significantly reduced cyclotorsional registration error. However, residual registration misalignment and cyclotorsional movements remained during ablation. Copyright 2012, SLACK Incorporated.
Reliable absolute analog code retrieval approach for 3D measurement
NASA Astrophysics Data System (ADS)
Yu, Shuang; Zhang, Jing; Yu, Xiaoyang; Sun, Xiaoming; Wu, Haibin; Chen, Deyun
2017-11-01
The wrapped phase produced by the phase-shifting approach can be unwrapped by using Gray code, but both wrapped-phase error and Gray-code decoding error can result in period jump errors, which lead to gross measurement error. This paper therefore presents a reliable absolute analog code retrieval approach. A combination of unequal-period Gray code and phase-shifting patterns at high frequencies is used to obtain a high-frequency absolute analog code, and at low frequencies the same unequal-period combination patterns are used to obtain a low-frequency absolute analog code. The difference between the two absolute analog codes is then employed to eliminate period jump errors, yielding a reliable unwrapped result. Error analysis was used to determine the applicable conditions, and the approach was verified both through theoretical analysis and experimentally. The results demonstrate that the proposed approach performs reliable analog code unwrapping.
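For illustration, the standard reflected-binary Gray-to-binary conversion that underlies such unwrapping can be sketched as follows. This minimal example assumes equal-period codes; the paper's unequal-period patterns and period-jump elimination step are not reproduced.

```python
import math

def gray_to_binary(g):
    """Decode a reflected-binary Gray code word to its binary (period) index."""
    mask = g >> 1
    while mask:
        g ^= mask
        mask >>= 1
    return g

def unwrap(wrapped_phase, gray_word):
    """Absolute phase = wrapped phase + 2*pi * period index from the Gray code."""
    k = gray_to_binary(gray_word)
    return wrapped_phase + 2 * math.pi * k
```

A decoding error of one Gray-code bit shifts k and hence the absolute phase by a whole period, which is exactly the gross error the paper's differencing scheme is designed to catch.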
Forecasting Error Calculation with Mean Absolute Deviation and Mean Absolute Percentage Error
NASA Astrophysics Data System (ADS)
Khair, Ummul; Fahmi, Hasanul; Hakim, Sarudin Al; Rahim, Robbi
2017-12-01
Prediction using a forecasting method is one of the most important activities for an organization. Selecting an appropriate forecasting method matters, but quantifying a method's percentage error matters even more if decision makers are to act on its output. Using the Mean Absolute Deviation and the Mean Absolute Percentage Error to evaluate the least-squares method yielded an error of 9.77%, and on that basis the least-squares method was adopted for time series and trend data.
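The evaluation described above, fitting a least-squares trend and scoring it with MAD and MAPE, can be sketched as follows (an illustrative implementation; the 9.77% figure comes from the authors' own data, not this example):

```python
def least_squares_trend(y):
    """Fit y = a + b*t by ordinary least squares over t = 0..n-1 and
    return the fitted (forecast) values."""
    n = len(y)
    t = list(range(n))
    t_mean, y_mean = sum(t) / n, sum(y) / n
    b = sum((ti - t_mean) * (yi - y_mean) for ti, yi in zip(t, y)) / \
        sum((ti - t_mean) ** 2 for ti in t)
    a = y_mean - b * t_mean
    return [a + b * ti for ti in t]

def mad(actual, forecast):
    """Mean Absolute Deviation of the forecast errors."""
    return sum(abs(a - f) for a, f in zip(actual, forecast)) / len(actual)

def mape(actual, forecast):
    """Mean Absolute Percentage Error, in percent (actuals must be nonzero)."""
    return 100 * sum(abs(a - f) / a for a, f in zip(actual, forecast)) / len(actual)
```

MAD is in the units of the data, while MAPE is scale-free, which is why MAPE is the more common choice when comparing methods across series.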
Sando, Roy; Chase, Katherine J.
2017-03-23
A common statistical procedure for estimating streamflow statistics at ungaged locations is to develop a relational model between streamflow and drainage basin characteristics at gaged locations using least squares regression analysis; however, least squares regression methods are parametric and make constraining assumptions about the data distribution. The random forest regression method provides an alternative nonparametric method for estimating streamflow characteristics at ungaged sites and requires that the data meet fewer statistical conditions than least squares regression methods. Random forest regression analysis was used to develop predictive models for 89 streamflow characteristics using Precipitation-Runoff Modeling System simulated streamflow data and drainage basin characteristics at 179 sites in central and eastern Montana. The predictive models were developed from streamflow data simulated for current (baseline, water years 1982–99) conditions and three future periods (water years 2021–38, 2046–63, and 2071–88) under three different climate-change scenarios. These predictive models were then used to predict streamflow characteristics for baseline conditions and three future periods at 1,707 fish sampling sites in central and eastern Montana. The average root mean square error for all predictive models was about 50 percent. When streamflow predictions at 23 fish sampling sites were compared to nearby locations with simulated data, the mean relative percent difference was about 43 percent. When predictions were compared to streamflow data recorded at 21 U.S. Geological Survey streamflow-gaging stations outside of the calibration basins, the average mean absolute percent error was about 73 percent.
Radiometric calibration of an airborne multispectral scanner. [of Thematic Mapper Simulator
NASA Technical Reports Server (NTRS)
Markham, Brian L.; Ahmad, Suraiya P.; Jackson, Ray D.; Moran, M. S.; Biggar, Stuart F.; Gellman, David I.; Slater, Philip N.
1991-01-01
The absolute radiometric calibration of the NS001 Thematic Mapper Simulator reflective channels was examined based on laboratory tests and in-flight comparisons to ground measurements. The NS001 data are calibrated in-flight by reference to the NS001 internal integrating sphere source. This source's power supply or monitoring circuitry exhibited greater instability in-flight during 1988-1989 than in the laboratory. Extrapolating laboratory behavior to in-flight data resulted in 7-20 percent radiance errors relative to ground measurements and atmospheric modeling. Assuming constancy in the source's output between laboratory and in-flight conditions resulted in generally smaller errors. Upgrades to the source's power supply and monitoring circuitry in 1990 improved its in-flight stability, though in-flight ground-reflectance-based calibration tests have not yet been performed.
Modeling Heavy/Medium-Duty Fuel Consumption Based on Drive Cycle Properties
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Lijuan; Duran, Adam; Gonder, Jeffrey
This paper presents multiple methods for predicting heavy/medium-duty vehicle fuel consumption based on driving cycle information. A polynomial model, a black box artificial neural net model, a polynomial neural network model, and a multivariate adaptive regression splines (MARS) model were developed and verified using data collected from chassis testing performed on a parcel delivery diesel truck operating over the Heavy Heavy-Duty Diesel Truck (HHDDT), City Suburban Heavy Vehicle Cycle (CSHVC), New York Composite Cycle (NYCC), and hydraulic hybrid vehicle (HHV) drive cycles. Each model was trained using one of the four drive cycles as a training cycle and the other three as testing cycles. By comparing the training and testing results, a representative training cycle was chosen and used to further tune each method. HHDDT as the training cycle gave the best predictive results, because HHDDT contains a variety of drive characteristics, such as high speed, acceleration, idling, and deceleration. Among the four model approaches, MARS gave the best predictive performance, with an average absolute percent error of -1.84% over the four chassis dynamometer drive cycles. To further evaluate the accuracy of the predictive models, the approaches were applied to real-world data. MARS outperformed the other three approaches, providing an average absolute percent error of -2.2% over four real-world road segments. The MARS model performance over the HHDDT, CSHVC, NYCC, and HHV drive cycles was then compared with the performance of the Future Automotive System Technology Simulator (FASTSim). The results indicated that the MARS method achieved predictive performance comparable to FASTSim.
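As an illustration of the drive-cycle properties such models consume, simple features can be extracted from a speed-versus-time trace. The particular feature choices here are illustrative, not the paper's exact predictor set.

```python
def cycle_properties(speed_mps, dt=1.0):
    """Summarize a speed-vs-time trace (m/s, sampled every dt seconds) into
    simple drive-cycle features of the kind used by fuel-consumption models."""
    n = len(speed_mps)
    accel = [(speed_mps[i + 1] - speed_mps[i]) / dt for i in range(n - 1)]
    positive = [a for a in accel if a > 0]
    return {
        "mean_speed": sum(speed_mps) / n,                          # m/s
        "max_speed": max(speed_mps),                               # m/s
        "idle_fraction": sum(1 for v in speed_mps if v < 0.1) / n, # time at rest
        "mean_accel": sum(positive) / max(1, len(positive)),       # m/s^2, accel phases only
    }
```

Features like these would then serve as regressors in a polynomial, neural-network, or MARS model of fuel consumption.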
Cotter, Christopher; Turcotte, Julie Catherine; Crawford, Bruce; Sharp, Gregory; Mah'D, Mufeed
2015-01-01
This work aims at three goals: first, to define a set of statistical parameters and plan structures for a 3D pretreatment thoracic and prostate intensity‐modulated radiation therapy (IMRT) quality assurance (QA) protocol; secondly, to test if the 3D QA protocol is able to detect certain clinical errors; and third, to compare the 3D QA method with QA performed with single ion chamber and 2D gamma test in detecting those errors. The 3D QA protocol measurements were performed on 13 prostate and 25 thoracic IMRT patients using IBA's COMPASS system. For each treatment planning structure included in the protocol, the following statistical parameters were evaluated: average absolute dose difference (AADD), percent structure volume with absolute dose difference greater than 6% (ADD6), and 3D gamma test. To test the 3D QA protocol error sensitivity, two prostate and two thoracic step‐and‐shoot IMRT patients were investigated. Errors introduced to each of the treatment plans included energy switched from 6 MV to 10 MV, multileaf collimator (MLC) leaf errors, linac jaws errors, monitor unit (MU) errors, MLC and gantry angle errors, and detector shift errors. QA was performed on each plan using a single ion chamber and 2D array of ion chambers for 2D and 3D QA. Based on the measurements performed, we established a uniform set of tolerance levels to determine if QA passes for each IMRT treatment plan structure: maximum allowed AADD is 6%; maximum 4% of any structure volume can be with ADD6 greater than 6%, and maximum 4% of any structure volume may fail 3D gamma test with test parameters 3%/3 mm DTA. Out of the three QA methods tested the single ion chamber performed the worst by detecting 4 out of 18 introduced errors, 2D QA detected 11 out of 18 errors, and 3D QA detected 14 out of 18 errors. PACS number: 87.56.Fc PMID:26699299
Evaluation of mean-monthly streamflow-regression equations for Colorado, 2014
Kohn, Michael S.; Stevens, Michael R.; Bock, Andrew R.; Char, Stephen J.
2015-01-01
The median absolute differences between the observed and computed mean-monthly streamflow for Mountain, Northwest, and Southwest hydrologic regions are fairly uniform throughout the year, with the exception of late summer and early fall (July, August, and September), when each hydrologic region exhibits a substantial increase in median absolute percent difference. The greatest difference occurs in the Northwest hydrologic region, and the smallest difference occurs in the Mountain hydrologic region. The Rio Grande hydrologic region shows seasonal variation in median absolute percent difference with March, April, August, and September having a median absolute difference near or below 40 percent, and the remaining months of the year having a median absolute difference near or above 50 percent. In the Mountain, Northwest, and Southwest hydrologic regions, the mean-monthly streamflow equations perform the best during spring (March, April, and May). However, in the Rio Grande hydrologic region, the mean-monthly streamflow equations perform the best during late summer and early fall (August and September).
40 CFR 92.105 - General equipment specifications.
Code of Federal Regulations, 2011 CFR
2011-07-01
... accuracy and precision of 0.1 percent of absolute pressure at point or better. (2) Gauges and transducers used to measure any other pressures shall have an accuracy and precision of 1 percent of absolute...
Real-Gas Correction Factors for Hypersonic Flow Parameters in Helium
NASA Technical Reports Server (NTRS)
Erickson, Wayne D.
1960-01-01
The real-gas hypersonic flow parameters for helium have been calculated for stagnation temperatures from 0 F to 600 F and stagnation pressures up to 6,000 pounds per square inch absolute. The results of these calculations are presented in the form of simple correction factors which must be applied to the tabulated ideal-gas parameters. It has been shown that the deviations from the ideal-gas law which exist at high pressures may cause a correspondingly significant error in the hypersonic flow parameters when calculated as an ideal gas. For example, the ratio of the free-stream static to stagnation pressure as calculated from the thermodynamic properties of helium for a stagnation temperature of 80 F and pressure of 4,000 pounds per square inch absolute was found to be approximately 13 percent greater than that determined from the ideal-gas tabulation with a specific heat ratio of 5/3.
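The ideal-gas value referred to above follows from the isentropic relation with a specific heat ratio of 5/3; a minimal sketch (the real-gas correction factors themselves come from tabulated helium properties, not from this formula):

```python
def static_to_stagnation_pressure_ratio(mach, gamma=5.0 / 3.0):
    """Ideal-gas isentropic ratio p/p0 at a given Mach number;
    gamma defaults to 5/3 for a monatomic gas such as helium."""
    return (1 + (gamma - 1) / 2 * mach ** 2) ** (-gamma / (gamma - 1))
```

At hypersonic Mach numbers the ratio becomes very small, which is why even modest real-gas deviations translate into percent-level errors in the tabulated parameters.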
ADEOS Total Ozone Mapping Spectrometer (TOMS) Data Products User's Guide
NASA Technical Reports Server (NTRS)
Krueger, A.; Bhartia, P. K.; McPeters, R.; Herman, J.; Wellemeyer, C.; Jaross, G.; Seftor, C.; Torres, O.; Labow, G.; Byerly, W.;
1998-01-01
Two data products from the Total Ozone Mapping Spectrometer (ADEOS/TOMS) have been archived at the Distributed Active Archive Center, in the form of Hierarchical Data Format files. The ADEOS/TOMS began taking measurements on September 11, 1996, and ended on June 29, 1997. The instrument measured backscattered Earth radiance and incoming solar irradiance; their ratio was used in ozone retrievals. Changes in the reflectivity of the solar diffuser used for the irradiance measurement were monitored using a carousel of three diffusers, each exposed to the degrading effects of solar irradiation at different rates. The algorithm to retrieve total column ozone compares measured Earth radiances at sets of three wavelengths with radiances calculated for different total ozone values, solar zenith angles, and optical paths. The initial error in the absolute scale for TOMS total ozone is 3 percent, the one standard deviation random error is 2 percent, and the drift is less than 0.5 percent over the 9-month data record. The Level 2 product contains the measured radiances, the derived total ozone amount, and reflectivity information for each scan position. The Level 3 product contains daily total ozone and reflectivity in a 1-degree latitude by 1.25 degrees longitude grid. The Level 3 files containing estimates of UVB at the Earth surface and tropospheric aerosol information will also be available. Detailed descriptions of both HDF data files and the CD-ROM product are provided.
Earth Probe Total Ozone Mapping Spectrometer (TOMS) Data Product User's Guide
NASA Technical Reports Server (NTRS)
McPeters, R.; Bhartia, P. K.; Krueger, A.; Herman, J.; Wellemeyer, C.; Seftor, C.; Jaross, G.; Torres, O.; Moy, L.; Labow, G.;
1998-01-01
Two data products from the Earth Probe Total Ozone Mapping Spectrometer (EP/TOMS) have been archived at the Distributed Active Archive Center, in the form of Hierarchical Data Format files. The EP/TOMS began taking measurements on July 15, 1996. The instrument measures backscattered Earth radiance and incoming solar irradiance; their ratio is used in ozone retrievals. Changes in the reflectivity of the solar diffuser used for the irradiance measurement are monitored using a carousel of three diffusers, each exposed to the degrading effects of solar irradiation at different rates. The algorithm to retrieve total column ozone compares measured Earth radiances at sets of three wavelengths with radiances calculated for different total ozone values. The initial error in the absolute scale for TOMS total ozone is 3 percent, the one standard deviation random error is 2 percent, and the drift is less than 0.5 percent over the first year of data. The Level-2 product contains the measured radiances, the derived total ozone amount, and reflectivity information for each scan position. The Level-3 product contains daily total ozone and reflectivity in a 1-degree latitude by 1.25 degrees longitude grid. Level-3 files containing estimates of UVB at the Earth surface and tropospheric aerosol information are also available. Detailed descriptions of both HDF data files and the CD-ROM product are provided.
Linhart, S. Mike; Nania, Jon F.; Sanders, Curtis L.; Archfield, Stacey A.
2012-01-01
The U.S. Geological Survey (USGS) maintains approximately 148 real-time streamgages in Iowa for which daily mean streamflow information is available, but daily mean streamflow data commonly are needed at locations where no streamgages are present. Therefore, the USGS conducted a study as part of a larger project in cooperation with the Iowa Department of Natural Resources to develop methods to estimate daily mean streamflow at locations in ungaged watersheds in Iowa by using two regression-based statistical methods. The regression equations for the statistical methods were developed from historical daily mean streamflow and basin characteristics from streamgages within the study area, which includes the entire State of Iowa and adjacent areas within a 50-mile buffer of Iowa in neighboring states. Results of this study can be used with other techniques to determine the best method for application in Iowa and can be used to produce a Web-based geographic information system tool to compute streamflow estimates automatically. The Flow Anywhere statistical method is a variation of the drainage-area-ratio method, which transfers same-day streamflow information from a reference streamgage to another location by using the daily mean streamflow at the reference streamgage and the drainage-area ratio of the two locations. The Flow Anywhere method modifies the drainage-area-ratio method in order to regionalize the equations for Iowa and determine the best reference streamgage from which to transfer same-day streamflow information to an ungaged location. Data used for the Flow Anywhere method were retrieved for 123 continuous-record streamgages located in Iowa and within a 50-mile buffer of Iowa. 
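The drainage-area-ratio transfer on which Flow Anywhere is based can be written in a few lines (a sketch of the basic method only; Flow Anywhere's regionalized regression adjustments and reference-streamgage selection are not shown):

```python
def drainage_area_ratio_streamflow(q_ref_cfs, area_ref_sqmi, area_ungaged_sqmi):
    """Transfer same-day streamflow from a reference streamgage to an ungaged
    site in proportion to drainage area (basic drainage-area-ratio method).
    q_ref_cfs: daily mean streamflow at the reference streamgage, in ft3/s."""
    return q_ref_cfs * (area_ungaged_sqmi / area_ref_sqmi)
```

The method assumes runoff per unit drainage area is the same at both sites, which is exactly the assumption the regionalized regression equations are designed to relax.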
The final regression equations were computed by using either left-censored regression techniques with a low limit threshold set at 0.1 cubic feet per second (ft3/s) and the daily mean streamflow for the 15th day of every other month, or by using an ordinary-least-squares multiple linear regression method and the daily mean streamflow for the 15th day of every other month. The Flow Duration Curve Transfer method was used to estimate unregulated daily mean streamflow from the physical and climatic characteristics of gaged basins. For the Flow Duration Curve Transfer method, daily mean streamflow quantiles at the ungaged site were estimated with the parameter-based regression model, which results in a continuous daily flow-duration curve (the relation between exceedance probability and streamflow for each day of observed streamflow) at the ungaged site. By the use of a reference streamgage, the Flow Duration Curve Transfer is converted to a time series. Data used in the Flow Duration Curve Transfer method were retrieved for 113 continuous-record streamgages in Iowa and within a 50-mile buffer of Iowa. The final statewide regression equations for Iowa were computed by using a weighted-least-squares multiple linear regression method and were computed for the 0.01-, 0.05-, 0.10-, 0.15-, 0.20-, 0.30-, 0.40-, 0.50-, 0.60-, 0.70-, 0.80-, 0.85-, 0.90-, and 0.95-exceedance probability statistics determined from the daily mean streamflow with a reporting limit set at 0.1 ft3/s. The final statewide regression equation for Iowa computed by using left-censored regression techniques was computed for the 0.99-exceedance probability statistic determined from the daily mean streamflow with a low limit threshold and a reporting limit set at 0.1 ft3/s. 
For the Flow Anywhere method, results of the validation study conducted by using six streamgages show that differences between the root-mean-square error and the mean absolute error ranged from 1,016 to 138 ft3/s, with the larger value signifying a greater occurrence of outliers between observed and estimated streamflows. Root-mean-square-error values ranged from 1,690 to 237 ft3/s. Values of the percent root-mean-square error ranged from 115 percent to 26.2 percent. The logarithm (base 10) streamflow percent root-mean-square error ranged from 13.0 to 5.3 percent. Root-mean-square-error observations standard-deviation-ratio values ranged from 0.80 to 0.40. Percent-bias values ranged from 25.4 to 4.0 percent. Untransformed streamflow Nash-Sutcliffe efficiency values ranged from 0.84 to 0.35. The logarithm (base 10) streamflow Nash-Sutcliffe efficiency values ranged from 0.86 to 0.56. For the streamgage with the best agreement between observed and estimated streamflow, higher streamflows appear to be underestimated. For the streamgage with the worst agreement between observed and estimated streamflow, low flows appear to be overestimated whereas higher flows seem to be underestimated. Estimated cumulative streamflows for the period October 1, 2004, to September 30, 2009, are underestimated by -25.8 and -7.4 percent for the closest and poorest comparisons, respectively. For the Flow Duration Curve Transfer method, results of the validation study conducted by using the same six streamgages show that differences between the root-mean-square error and the mean absolute error ranged from 437 to 93.9 ft3/s, with the larger value signifying a greater occurrence of outliers between observed and estimated streamflows. Root-mean-square-error values ranged from 906 to 169 ft3/s. Values of the percent root-mean-square-error ranged from 67.0 to 25.6 percent. The logarithm (base 10) streamflow percent root-mean-square error ranged from 12.5 to 4.4 percent. 
Root-mean-square-error observations standard-deviation-ratio values ranged from 0.79 to 0.40. Percent-bias values ranged from 22.7 to 0.94 percent. Untransformed streamflow Nash-Sutcliffe efficiency values ranged from 0.84 to 0.38. The logarithm (base 10) streamflow Nash-Sutcliffe efficiency values ranged from 0.89 to 0.48. For the streamgage with the closest agreement between observed and estimated streamflow, there is relatively good agreement between observed and estimated streamflows. For the streamgage with the poorest agreement between observed and estimated streamflow, streamflows appear to be substantially underestimated for much of the time period. Estimated cumulative streamflow for the period October 1, 2004, to September 30, 2009, are underestimated by -9.3 and -22.7 percent for the closest and poorest comparisons, respectively.
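The goodness-of-fit statistics reported above have standard definitions; a minimal sketch (note that sign conventions for percent bias vary; here positive values mean the simulation underestimates the observations):

```python
import math

def rmse(obs, sim):
    """Root-mean-square error, in the units of the data."""
    return math.sqrt(sum((o - s) ** 2 for o, s in zip(obs, sim)) / len(obs))

def percent_bias(obs, sim):
    """PBIAS = 100 * sum(obs - sim) / sum(obs); positive = underestimation
    under this sign convention (conventions differ between references)."""
    return 100 * sum(o - s for o, s in zip(obs, sim)) / sum(obs)

def nash_sutcliffe(obs, sim):
    """NSE = 1 - sum((obs - sim)^2) / sum((obs - mean(obs))^2); 1 is a perfect
    fit, 0 means no better than predicting the observed mean."""
    mean_obs = sum(obs) / len(obs)
    ss_res = sum((o - s) ** 2 for o, s in zip(obs, sim))
    ss_tot = sum((o - mean_obs) ** 2 for o in obs)
    return 1 - ss_res / ss_tot
```

A gap between RMSE and mean absolute error, as discussed in the abstract, signals outliers, because RMSE squares the errors and so weights large misses more heavily.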
Astigmatism error modification for absolute shape reconstruction using Fourier transform method
NASA Astrophysics Data System (ADS)
He, Yuhang; Li, Qiang; Gao, Bo; Liu, Ang; Xu, Kaiyuan; Wei, Xiaohong; Chai, Liqun
2014-12-01
A method is proposed to modify astigmatism errors in absolute shape reconstruction of optical plane using Fourier transform method. If a transmission and reflection flat are used in an absolute test, two translation measurements lead to obtain the absolute shapes by making use of the characteristic relationship between the differential and original shapes in spatial frequency domain. However, because the translation device cannot guarantee the test and reference flats rigidly parallel to each other after the translations, a tilt error exists in the obtained differential data, which caused power and astigmatism errors in the reconstructed shapes. In order to modify the astigmatism errors, a rotation measurement is added. Based on the rotation invariability of the form of Zernike polynomial in circular domain, the astigmatism terms are calculated by solving polynomial coefficient equations related to the rotation differential data, and subsequently the astigmatism terms including error are modified. Computer simulation proves the validity of the proposed method.
NASA Astrophysics Data System (ADS)
Gillman, M. A.; Lamoureux, S. F.; Lafrenière, M. J.
2017-09-01
The Stream Temperature, Intermittency, and Conductivity (STIC) electrical conductivity (EC) logger as presented by Chapin et al. (2014) serves as an inexpensive (˜50 USD) means to assess relative EC in freshwater environments. This communication demonstrates the calibration of the STIC logger for quantifying EC, and provides examples from a month-long field deployment in the High Arctic. Calibration models followed multiple nonlinear regression and produced calibration curves with high coefficient of determination values (R2 = 0.995 - 0.998; n = 5). Percent error of mean predicted specific conductance at 25°C (SpC) to known SpC ranged in magnitude from -0.6% to 13% (mean = -1.4%), and mean absolute percent error (MAPE) ranged from 2.1% to 13% (mean = 5.3%). Across all tested loggers we found good accuracy and precision, with both error metrics increasing with increasing SpC values. During 10 month-long field deployments, there were no logger failures and full data recovery was achieved. Point SpC measurements at the location of STIC loggers recorded via a more expensive commercial electrical conductivity logger followed similar trends to STIC SpC records, with 1:1.05 and 1:1.08 relationships between the STIC and commercial logger SpC values. These results demonstrate that STIC loggers calibrated to quantify EC are an economical means to increase the spatiotemporal resolution of water quality investigations.
2013-01-01
Background Cardiovascular magnetic resonance (CMR) T1 mapping indices, such as T1 time and partition coefficient (λ), have shown potential to assess diffuse myocardial fibrosis. The purpose of this study was to investigate how scanner and field strength variation affect the accuracy and precision/reproducibility of T1 mapping indices. Methods CMR studies were performed on two 1.5T and three 3T scanners. Eight phantoms were made to mimic the T1/T2 of pre- and post-contrast myocardium and blood at 1.5T and 3T. T1 mapping using MOLLI was performed with simulated heart rates of 40-100 bpm. Inversion recovery spin echo (IR-SE) was the reference standard for T1 determination. Accuracy was defined as the percent error between MOLLI and IR-SE, and scan/re-scan reproducibility was defined as the relative percent mean difference between repeat MOLLI scans. Partition coefficient was estimated by ΔR1(myocardium phantom)/ΔR1(blood phantom). Generalized linear mixed model was used to compare the accuracy and precision/reproducibility of T1 and λ across field strength, scanners, and protocols. Results Field strength significantly affected MOLLI T1 accuracy (6.3% error for 1.5T vs. 10.8% error for 3T, p<0.001) but not λ accuracy (8.8% error for 1.5T vs. 8.0% error for 3T, p=0.11). Partition coefficients of MOLLI were not different between two 1.5T scanners (47.2% vs. 47.9%, p=0.13), and showed only slight variation across three 3T scanners (49.2% vs. 49.8% vs. 49.9%, p=0.016). Partition coefficient also had significantly lower percent error for precision (better scan/re-scan reproducibility) than measurement of individual T1 values (3.6% for λ vs. 4.3%-4.8% for T1 values, approximately, for pre/post blood and myocardium values). Conclusion Based on phantom studies, T1 errors using MOLLI ranged from 6-14% across various MR scanners while errors for partition coefficient were less (6-10%). 
Compared with absolute T1 times, partition coefficient showed less variability across platforms and field strengths as well as higher precision. PMID:23890156
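The partition-coefficient estimate used in this study, the ratio of ΔR1 in myocardium to ΔR1 in blood with R1 = 1/T1, can be sketched as follows (the T1 values in the example are illustrative, not the study's phantom values):

```python
def partition_coefficient(t1_myo_pre, t1_myo_post, t1_blood_pre, t1_blood_post):
    """lambda = delta-R1(myocardium) / delta-R1(blood), where R1 = 1/T1.
    T1 values are pre- and post-contrast, in consistent units (e.g., ms)."""
    d_r1_myo = 1.0 / t1_myo_post - 1.0 / t1_myo_pre
    d_r1_blood = 1.0 / t1_blood_post - 1.0 / t1_blood_pre
    return d_r1_myo / d_r1_blood
```

Because λ is a ratio of contrast-induced R1 changes, systematic scanner-dependent T1 offsets partially cancel, which is consistent with the lower cross-platform variability reported above.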
Bao, Xu; Li, Haijian; Xu, Dongwei; Jia, Limin; Ran, Bin; Rong, Jian
2016-11-06
The jam flow condition is one of the main traffic states in traffic flow theory and the most difficult state for sectional traffic information acquisition. Since traffic information acquisition is the basis for the application of an intelligent transportation system, research on traffic vehicle counting methods for the jam flow conditions has been worthwhile. A low-cost and energy-efficient type of multi-function wireless traffic magnetic sensor was designed and developed. Several advantages of the traffic magnetic sensor are that it is suitable for large-scale deployment and time-sustainable detection for traffic information acquisition. Based on the traffic magnetic sensor, a basic vehicle detection algorithm (DWVDA) with less computational complexity was introduced for vehicle counting in low traffic volume conditions. To improve the detection performance in jam flow conditions with a "tailgating effect" between front vehicles and rear vehicles, an improved vehicle detection algorithm (SA-DWVDA) was proposed and applied in field traffic environments. By deploying traffic magnetic sensor nodes in field traffic scenarios, two field experiments were conducted to test and verify the DWVDA and the SA-DWVDA algorithms. The experimental results have shown that both DWVDA and the SA-DWVDA algorithms yield a satisfactory performance in low traffic volume conditions (scenario I) and both of their mean absolute percent errors are less than 1% in this scenario. However, for jam flow conditions with heavy traffic volumes (scenario II), the SA-DWVDA was proven to achieve better results, and the mean absolute percent error of the SA-DWVDA is 2.54% with corresponding results of the DWVDA 7.07%. The results conclude that the proposed SA-DWVDA can implement efficient and accurate vehicle detection in jam flow conditions and can be employed in field traffic environments.
Communicating data about the benefits and harms of treatment: a randomized trial.
Woloshin, Steven; Schwartz, Lisa M
2011-07-19
Despite limited evidence, it is often asserted that natural frequencies (for example, 2 in 1000) are the best way to communicate absolute risks. To compare comprehension of treatment benefit and harm when absolute risks are presented as natural frequencies, percents, or both. Parallel-group randomized trial with central allocation and masking of investigators to group assignment, conducted through an Internet survey in September 2009 (ClinicalTrials.gov registration number: NCT00950014). National sample of U.S. adults randomly selected from a professional survey firm's research panel of about 30,000 households. 2944 adults aged 18 years or older (all with complete follow-up). Tables presenting absolute risks in 1 of 5 numeric formats: natural frequency (x in 1000), variable frequency (x in 100, x in 1000, or x in 10,000, as needed to keep the numerator >1), percent, percent plus natural frequency, or percent plus variable frequency. Comprehension as assessed by 18 questions (primary outcome) and judgment of treatment benefit and harm. The average number of comprehension questions answered correctly was lowest in the variable frequency group and highest in the percent group (13.1 vs. 13.8; difference, 0.7 [95% CI, 0.3 to 1.1]). The proportion of participants who "passed" the comprehension test (≥13 correct answers) was lowest in the natural and variable frequency groups and highest in the percent group (68% vs. 73%; difference, 5 percentage points [CI, 0 to 10 percentage points]). The largest format effect was seen for the 2 questions about absolute differences: the proportion correct in the natural frequency versus percent groups was 43% versus 72% (P < 0.001) and 73% versus 87% (P < 0.001). Even when data were presented in the percent format, one third of participants failed the comprehension test. Natural frequencies are not the best format for communicating the absolute benefits and harms of treatment. 
The more succinct percent format resulted in better comprehension: Comprehension was slightly better overall and notably better for absolute differences. Attorney General Consumer and Prescriber Education grant program, the Robert Wood Johnson Pioneer Program, and the National Cancer Institute.
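The three numeric formats compared in this trial are simple transformations of the same underlying probability. A minimal sketch of the conversions (illustrative only; the variable-frequency rule follows the "numerator >= 1" definition quoted in the abstract):

```python
def to_percent(p):
    """Express a probability as a percent string."""
    return f"{p * 100:g}%"

def to_natural_frequency(p, denominator=1000):
    """Express a probability as 'x in 1000' (fixed denominator)."""
    return f"{p * denominator:g} in {denominator}"

def to_variable_frequency(p):
    """Pick the smallest denominator in {100, 1000, 10000} that keeps the numerator >= 1."""
    for d in (100, 1000, 10000):
        if p * d >= 1:
            return f"{p * d:g} in {d}"
    return f"{p * 10000:g} in 10000"

risk = 0.002  # the '2 in 1000' example from the abstract
print(to_percent(risk))             # 0.2%
print(to_natural_frequency(risk))   # 2 in 1000
print(to_variable_frequency(risk))  # 2 in 1000
```

The trial's finding is that the percent form ("0.2%") was understood better than the frequency forms, particularly when respondents had to compute absolute differences between two such values.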
NASA Technical Reports Server (NTRS)
Beck, S. M.
1975-01-01
A mobile self-contained Faraday cup system for beam current measurements of nominal 600 MeV protons was designed, constructed, and used at the NASA Space Radiation Effects Laboratory. The cup is of reentrant design with a length of 106.7 cm and an outside diameter of 20.32 cm. The inner diameter is 15.24 cm and the base thickness is 30.48 cm. The primary absorber is commercially available lead hermetically sealed in a 0.32-cm-thick copper jacket. Several possible systematic errors in using the cup are evaluated. The largest source of error arises from high-energy electrons which are ejected from the entrance window and enter the cup. A total systematic error of -0.83 percent, a decrease from the true current value, is calculated. From data obtained in calibrating helium-filled ion chambers with the Faraday cup, the mean energy required to produce one ion pair in helium is found to be 30.76 ± 0.95 eV for nominal 600 MeV protons. This value agrees well, within experimental error, with reported values of 29.9 eV and 30.2 eV.
Wetherbee, G.A.; Latysh, N.E.; Gordon, J.D.
2005-01-01
Data from the U.S. Geological Survey (USGS) collocated-sampler program for the National Atmospheric Deposition Program/National Trends Network (NADP/NTN) are used to estimate the overall error of NADP/NTN measurements. Absolute errors are estimated by comparison of paired measurements from collocated instruments. Spatial and temporal differences in absolute error were identified and are consistent with longitudinal distributions of NADP/NTN measurements and spatial differences in precipitation characteristics. The magnitude of error for calcium, magnesium, ammonium, nitrate, and sulfate concentrations, specific conductance, and sample volume is of minor environmental significance to data users. Data collected after a 1994 sample-handling protocol change are prone to less absolute error than data collected prior to 1994. Absolute errors are smaller during non-winter months than during winter months for selected constituents at sites where frozen precipitation is common. Minimum resolvable differences are estimated for different regions of the USA to aid spatial and temporal watershed analyses.
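The core statistic here, overall error estimated from collocated instrument pairs, can be sketched as the median of absolute paired differences. The concentrations below are hypothetical, not NADP/NTN data:

```python
def median_absolute_error(primary, collocated):
    """Estimate overall measurement error from collocated-sampler pairs
    as the median of the absolute paired differences."""
    if len(primary) != len(collocated):
        raise ValueError("need paired measurements")
    diffs = sorted(abs(a - b) for a, b in zip(primary, collocated))
    n = len(diffs)
    mid = n // 2
    return diffs[mid] if n % 2 else (diffs[mid - 1] + diffs[mid]) / 2

# Hypothetical weekly sulfate concentrations (mg/L) from a collocated pair
site = [1.10, 0.85, 2.40, 0.30, 1.75]
dup  = [1.05, 0.90, 2.30, 0.33, 1.70]
print(round(median_absolute_error(site, dup), 3))  # 0.05
```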
Duncker, James J.; Melching, Charles S.
1998-01-01
Rainfall and streamflow data collected from July 1986 through September 1993 were utilized to calibrate and verify a continuous-simulation rainfall-runoff model for three watersheds (11.8 to 18.0 square miles in area) in Du Page County. Classification of land cover into three categories of pervious (grassland, forest/wetland, and agricultural land) and one category of impervious subareas was sufficient to accurately simulate the rainfall-runoff relations for the three watersheds. Regional parameter sets were obtained by jointly calibrating all parameters except the fraction of ground-water inflow that goes to inactive ground water (DEEPFR), the interflow recession constant (IRC), and infiltration (INFILT) for runoff from all three watersheds. DEEPFR and IRC varied among the watersheds because of physical differences among the watersheds. Two values of INFILT were obtained: one representing the rainfall-runoff process on the silty and clayey soils on the uplands and lake plains that characterize Sawmill Creek, St. Joseph Creek, and eastern Du Page County; and one representing the rainfall-runoff process on the silty soils on uplands that characterize Kress Creek and parts of western Du Page County. The regional rainfall-runoff relations, defined through joint calibration of the rainfall-runoff model, verified for independent periods, and presented in this report, allow estimation of runoff for watersheds in Du Page County with an error in the total water balance of less than 4.0 percent; an average absolute error in the annual-flow estimates of 17.1 percent, with the error rarely exceeding 25 percent for annual flows; and correlation coefficients and coefficients of model-fit efficiency for monthly flows of at least 87 and 76 percent, respectively. Close reproduction of the runoff-volume duration curves was obtained.
A frequency analysis of storm-runoff volume indicates a tendency of the model to undersimulate large storms, which may result from underestimation of the amount of impervious land cover in the watershed and errors in measuring rainfall for convective storms. Overall, the results of regional calibration and verification of the rainfall-runoff model indicate the simulated rainfall-runoff relations are adequate for stormwater-management planning and design for watersheds in Du Page County.
Kuenze, Christopher; Eltouhky, Moataz; Thomas, Abbey; Sutherlin, Mark; Hart, Joseph
2016-05-01
Collecting torque data using a multimode dynamometer is common in sports-medicine research. The error in torque measurements across multiple sites and dynamometers has not been established. To assess the validity of 2 calibration protocols across 3 dynamometers and the error associated with torque measurement for each system. Observational study. 3 university laboratories at separate institutions. 2 Biodex System 3 dynamometers and 1 Biodex System 4 dynamometer. System calibration was completed using the manufacturer-recommended single-weight method and an experimental calibration method using a series of progressive weights. Both calibration methods were compared with a manually calculated theoretical torque across a range of applied weights. Relative error, absolute error, and percent error were calculated at each weight. Each outcome variable was compared between systems using 95% confidence intervals across low (0-65 Nm), moderate (66-110 Nm), and high (111-165 Nm) torque categorizations. Calibration coefficients were established for each system using both calibration protocols. However, within each system the calibration coefficients generated using the single-weight (System 4 = 2.42 [0.90], System 3a = 1.37 [1.11], System 3b = -0.96 [1.45]) and experimental calibration protocols (System 4 = 3.95 [1.08], System 3a = -0.79 [1.23], System 3b = 2.31 [1.66]) were similar and displayed acceptable mean relative error compared with calculated theoretical torque values. Overall, percent error was greatest for all 3 systems in low-torque conditions (System 4 = 11.66% [6.39], System 3a = 6.82% [11.98], System 3b = 4.35% [9.49]). The System 4 significantly overestimated torque across all 3 weight increments, and the System 3b overestimated torque over the moderate-torque increment. 
Conversion of raw voltage to torque values using the single-calibration-weight method is valid and comparable to a more complex multiweight calibration process; however, it is clear that calibration must be done for each individual system to ensure accurate data collection.
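The error metrics named in this study (relative, absolute, and percent error against a manually calculated theoretical torque) reduce to a few lines of arithmetic. A sketch with a hypothetical calibration mass and lever arm, not values from the paper:

```python
G = 9.80665  # standard gravity, m/s^2

def theoretical_torque(mass_kg, lever_arm_m):
    """Torque (N*m) produced by a calibration mass hung at a known lever arm."""
    return mass_kg * G * lever_arm_m

def error_metrics(measured, theoretical):
    """Relative, absolute, and percent error of one torque measurement."""
    relative = measured - theoretical
    absolute = abs(relative)
    percent = 100 * absolute / theoretical
    return relative, absolute, percent

# Hypothetical example: a 5 kg mass at a 0.5 m lever arm; the dynamometer reads 25.1 N*m
t = theoretical_torque(5.0, 0.5)          # ~24.52 N*m
rel, abs_err, pct = error_metrics(25.1, t)
```

A positive relative error, as here, corresponds to the torque overestimation the study reports for the System 4 dynamometer.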
Alendronate for fracture prevention in postmenopause.
Holder, Kathryn K; Kerley, Sara Shelton
2008-09-01
Osteoporosis is an abnormal reduction in bone mass and bone deterioration leading to increased fracture risk. Alendronate (Fosamax) belongs to the bisphosphonate class of drugs, which act to inhibit bone resorption by interfering with the activity of osteoclasts. To assess the effectiveness of alendronate in the primary and secondary prevention of osteoporotic fractures in postmenopausal women. The authors searched Central, Medline, and EMBASE for relevant randomized controlled trials published from 1966 to 2007. The authors undertook study selection and data abstraction in duplicate. The authors performed meta-analysis of fracture outcomes using relative risks, and a relative change greater than 15 percent was considered clinically important. The authors assessed study quality through reporting of allocation concealment, blinding, and withdrawals. Eleven trials representing 12,068 women were included in the review. Relative and absolute risk reductions for the 10-mg dose were as follows. For vertebral fractures, a 45 percent relative risk reduction was found (relative risk [RR] = 0.55; 95% confidence interval [CI], 0.45 to 0.67). This was significant for primary prevention, with a 45 percent relative risk reduction (RR = 0.55; 95% CI, 0.38 to 0.80) and 2 percent absolute risk reduction; and for secondary prevention, with 45 percent relative risk reduction (RR = 0.55; 95% CI, 0.43 to 0.69) and 6 percent absolute risk reduction. For nonvertebral fractures, a 16 percent relative risk reduction was found (RR = 0.84; 95% CI, 0.74 to 0.94). This was significant for secondary prevention, with a 23 percent relative risk reduction (RR = 0.77; 95% CI, 0.64 to 0.92) and a 2 percent absolute risk reduction, but not for primary prevention (RR = 0.89; 95% CI, 0.76 to 1.04). 
There was a 40 percent relative risk reduction in hip fractures (RR = 0.60; 95% CI, 0.40 to 0.92), but only secondary prevention was significant, with a 53 percent relative risk reduction (RR = 0.47; 95% CI, 0.26 to 0.85) and a 1 percent absolute risk reduction. The only significance found for wrist fractures was in secondary prevention, with a 50 percent relative risk reduction (RR = 0.50; 95% CI, 0.34 to 0.73) and a 2 percent absolute risk reduction. For adverse events, the authors found no statistically significant difference in any included study. However, observational data raise concerns about potential risk for upper gastrointestinal injury and, less commonly, osteonecrosis of the jaw. At 10 mg of alendronate per day, clinically important and statistically significant reductions in vertebral, nonvertebral, hip, and wrist fractures were observed for secondary prevention. The authors found no statistically significant results for primary prevention, with the exception of vertebral fractures, for which the reduction was clinically important.
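The relative and absolute risk reductions quoted above are linked by simple arithmetic: RRR = 1 - RR, and ARR = control-group risk x RRR. A sketch with illustrative numbers chosen to mirror the vertebral-fracture figures; the 13.3 percent control-group risk is an assumption for illustration, not a value from the review:

```python
def risk_stats(control_risk, relative_risk):
    """Derive treated risk, relative risk reduction (RRR), absolute risk
    reduction (ARR), and number needed to treat (NNT) from a control-group
    risk and a relative risk (RR)."""
    treated_risk = control_risk * relative_risk
    rrr = 1 - relative_risk            # relative risk reduction
    arr = control_risk - treated_risk  # absolute risk reduction
    nnt = 1 / arr                      # number needed to treat
    return treated_risk, rrr, arr, nnt

# RR = 0.55 as reported for vertebral fractures; assumed 13.3% control risk
# yields an ARR of about 6 percentage points, matching the secondary-prevention figure
treated, rrr, arr, nnt = risk_stats(0.133, 0.55)
```

This illustrates why the same RR of 0.55 gives a 2 percent ARR in primary prevention but 6 percent in secondary prevention: the baseline risk differs.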
Zhang, Liping; Zheng, Yanling; Wang, Kai; Zhang, Xueliang; Zheng, Yujian
2014-06-01
In this paper, by using a particle swarm optimization algorithm to solve the optimal parameter estimation problem, an improved Nash nonlinear grey Bernoulli model termed PSO-NNGBM(1,1) is proposed. To test the forecasting performance, the optimized model is applied for forecasting the incidence of hepatitis B in Xinjiang, China. Four models, traditional GM(1,1), grey Verhulst model (GVM), original nonlinear grey Bernoulli model (NGBM(1,1)) and Holt-Winters exponential smoothing method, are also established for comparison with the proposed model under the criteria of mean absolute percentage error and root mean square percent error. The prediction results show that the optimized NNGBM(1,1) model is more accurate and performs better than the traditional GM(1,1), GVM, NGBM(1,1) and Holt-Winters exponential smoothing method. Copyright © 2014. Published by Elsevier Ltd.
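The two comparison criteria used here, mean absolute percentage error and root mean square percent error, have standard definitions. A sketch with hypothetical incidence data, not the Xinjiang hepatitis B series:

```python
from math import sqrt

def mape(actual, predicted):
    """Mean absolute percentage error."""
    return 100 / len(actual) * sum(abs((a - p) / a) for a, p in zip(actual, predicted))

def rmspe(actual, predicted):
    """Root mean square percentage error."""
    return 100 * sqrt(sum(((a - p) / a) ** 2 for a, p in zip(actual, predicted)) / len(actual))

# Hypothetical annual incidence (per 100,000) vs. model forecasts
actual    = [52.0, 55.0, 60.0, 58.0]
predicted = [50.0, 56.0, 57.0, 60.0]
```

RMSPE penalizes large individual misses more heavily than MAPE, which is why forecasting studies often report both.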
Wiley, Jeffrey B.
2012-01-01
Base flows were compared with published streamflow statistics to assess climate variability and to determine the published statistics that can be substituted for annual and seasonal base flows of unregulated streams in West Virginia. The comparison study was done by the U.S. Geological Survey, in cooperation with the West Virginia Department of Environmental Protection, Division of Water and Waste Management. The seasons were defined as winter (January 1-March 31), spring (April 1-June 30), summer (July 1-September 30), and fall (October 1-December 31). Differences in mean annual base flows for five record sub-periods (1930-42, 1943-62, 1963-69, 1970-79, and 1980-2002) range from -14.9 to 14.6 percent when compared to the values for the period 1930-2002. Differences between mean seasonal base flows and values for the period 1930-2002 are less variable for winter and spring, -11.2 to 11.0 percent, than for summer and fall, -47.0 to 43.6 percent. Mean summer base flows (July-September) and mean monthly base flows for July, August, September, and October are approximately equal, within 7.4 percentage points of mean annual base flow. The mean of each of annual, spring, summer, fall, and winter base flows are approximately equal to the annual 50-percent (standard error of 10.3 percent), 45-percent (error of 14.6 percent), 75-percent (error of 11.8 percent), 55-percent (error of 11.2 percent), and 35-percent duration flows (error of 11.1 percent), respectively. The mean seasonal base flows for spring, summer, fall, and winter are approximately equal to the spring 50- to 55-percent (standard error of 6.8 percent), summer 45- to 50-percent (error of 6.7 percent), fall 45-percent (error of 15.2 percent), and winter 60-percent duration flows (error of 8.5 percent), respectively. 
Annual and seasonal base flows representative of the period 1930-2002 at unregulated streamflow-gaging stations and ungaged locations in West Virginia can be estimated using previously published values of statistics and procedures.
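The duration flows referenced above (e.g., the annual 50-percent duration flow) are flows equaled or exceeded a given percentage of the time. A sketch using the Weibull plotting position, one common convention; the daily flows are hypothetical:

```python
def duration_flow(flows, exceedance_percent):
    """Flow equaled or exceeded `exceedance_percent` of the time
    (e.g., 50 -> the 50-percent duration flow)."""
    ranked = sorted(flows, reverse=True)   # highest flow first
    n = len(ranked)
    # Weibull plotting position: exceedance probability of rank m is m / (n + 1)
    target = exceedance_percent / 100 * (n + 1)
    m = min(max(int(round(target)), 1), n)  # nearest valid rank
    return ranked[m - 1]

# Hypothetical daily mean flows (cfs)
flows = [12, 45, 30, 8, 60, 22, 18, 35, 27, 15, 9,
         50, 40, 20, 25, 14, 33, 11, 28, 17, 38]
print(duration_flow(flows, 50))  # 25
```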
Nimbus-7 Earth radiation budget calibration history. Part 1: The solar channels
NASA Technical Reports Server (NTRS)
Kyle, H. Lee; Hoyt, Douglas V.; Hickey, John R.; Maschhoff, Robert H.; Vallette, Brenda J.
1993-01-01
The Earth Radiation Budget (ERB) experiment on the Nimbus-7 satellite measured the total solar irradiance plus broadband spectral components on a nearly daily basis from 16 Nov. 1978 until 16 June 1992. Months of additional observations were taken in late 1992 and in 1993. The emphasis is on the electrically self-calibrating cavity radiometer, channel 10c, which recorded accurate total solar irradiance measurements over the whole period. The spectral channels did not have inflight calibration adjustment capabilities. These channels can, with some additional corrections, be used for short-term studies (one or two solar rotations - 27 to 60 days), but not for long-term trend analysis. For channel 10c, changing radiometer pointing, the zero offsets, the stability of the gain, the temperature sensitivity, and the influences of other platform instruments are all examined and their effects on the measurements considered. Only the question of relative accuracy (not absolute) is examined. The final channel 10c product is also compared with solar measurements made by independent experiments on other satellites. The Nimbus experiment showed that the mean solar energy was about 0.1 percent (1.4 W/m²) higher in the excited Sun years of 1979 and 1991 than in the quiet Sun years of 1985 and 1986. The error analysis indicated that the measured long-term trends may be as accurate as ±0.005 percent. The worst-case error estimate is ±0.03 percent.
Longo, Benedetto; Farcomeni, Alessio; Ferri, Germano; Campanale, Antonella; Sorotos, Micheal; Santanelli, Fabio
2013-07-01
Breast volume assessment enhances preoperative planning of both aesthetic and reconstructive procedures, helping the surgeon in the decision-making process of shaping the breast. Numerous methods of breast size determination are currently reported but are limited by methodologic flaws and variable estimations. The authors aimed to develop a unifying predictive formula for volume assessment in small to large breasts based on anthropomorphic values. Ten anthropomorphic breast measurements and direct volumes of 108 mastectomy specimens from 88 women were collected prospectively. The authors performed a multivariate regression to build the optimal model for development of the predictive formula. The final model was then internally validated. A previously published formula was used as a reference. Mean (±SD) breast weight was 527.9 ± 227.6 g (range, 150 to 1250 g). After model selection, sternal notch-to-nipple, inframammary fold-to-nipple, and inframammary fold-to-fold projection distances emerged as the most important predictors. The resulting formula (the BREAST-V) showed an adjusted R of 0.73. The estimated expected absolute error on new breasts is 89.7 g (95 percent CI, 62.4 to 119.1 g) and the expected relative error is 18.4 percent (95 percent CI, 12.9 to 24.3 percent). Application of reference formula on the sample yielded worse predictions than those derived by the formula, showing an R of 0.55. The BREAST-V is a reliable tool for predicting small to large breast volumes accurately for use as a complementary device in surgeon evaluation. An app entitled BREAST-V for both iOS and Android devices is currently available for free download in the Apple App Store and Google Play Store. Diagnostic, II.
Determination and error analysis of emittance and spectral emittance measurements by remote sensing
NASA Technical Reports Server (NTRS)
Dejesusparada, N. (Principal Investigator); Kumar, R.
1977-01-01
The author has identified the following significant results. From the theory of remote sensing of surface temperatures, an equation for the upper bound of the absolute error of emittance was determined. It showed that the absolute error decreased with an increase in contact temperature, whereas it increased with an increase in environmental integrated radiant flux density. Change in emittance had little effect on the absolute error. A plot of the difference between temperature and band radiance temperature vs. emittance was provided for the wavelength intervals 4.5 to 5.5 microns, 8 to 13.5 microns, and 10.2 to 12.5 microns.
Li, Beiwen; Liu, Ziping; Zhang, Song
2016-10-03
We propose a hybrid computational framework to reduce motion-induced measurement error by combining Fourier transform profilometry (FTP) and phase-shifting profilometry (PSP). The proposed method is composed of three major steps: Step 1 extracts continuous relative phase maps for each isolated object with the single-shot FTP method and spatial phase unwrapping; Step 2 obtains an absolute phase map of the entire scene using the PSP method, albeit with motion-induced errors on the extracted absolute phase map; and Step 3 shifts the continuous relative phase maps from Step 1 to generate final absolute phase maps for each isolated object by referring to the absolute phase map with error from Step 2. Experiments demonstrate the success of the proposed computational framework for measuring multiple isolated, rapidly moving objects.
Students' Mathematical Work on Absolute Value: Focusing on Conceptions, Errors and Obstacles
ERIC Educational Resources Information Center
Elia, Iliada; Özel, Serkan; Gagatsis, Athanasios; Panaoura, Areti; Özel, Zeynep Ebrar Yetkiner
2016-01-01
This study investigates students' conceptions of absolute value (AV), their performance in various items on AV, their errors in these items and the relationships between students' conceptions and their performance and errors. The Mathematical Working Space (MWS) is used as a framework for studying students' mathematical work on AV and the…
Very-short-term wind power prediction by a hybrid model with single- and multi-step approaches
NASA Astrophysics Data System (ADS)
Mohammed, E.; Wang, S.; Yu, J.
2017-05-01
Very-short-term wind power prediction (VSTWPP) plays an essential role in the operation of electric power systems. This paper aims at improving and applying a hybrid method of VSTWPP based on historical data. The hybrid method combines multiple linear regression and least squares (MLR&LS) and is intended to reduce prediction errors. The predicted values are obtained through two sub-processes: 1) transform the time-series data of actual wind power into the power ratio, and then predict the power ratio; 2) use the predicted power ratio to predict the wind power. In addition, the proposed method includes two prediction approaches: single-step prediction (SSP) and multi-step prediction (MSP). The hybrid method is tested against an auto-regressive moving average (ARMA) model in terms of predicted values and errors. Its validity is confirmed by error analysis using the probability density function (PDF), mean absolute percent error (MAPE), and mean square error (MSE). Meanwhile, comparison of the correlation coefficients between the actual and predicted values for different prediction times and windows confirms that the MSP approach with the hybrid model is the most accurate, compared with the SSP approach and ARMA. The MLR&LS method is accurate and promising for solving problems in WPP.
Nimbus 7 solar backscatter ultraviolet (SBUV) ozone products user's guide
NASA Technical Reports Server (NTRS)
Fleig, Albert J.; Mcpeters, R. D.; Bhartia, P. K.; Schlesinger, Barry M.; Cebula, Richard P.; Klenk, K. F.; Taylor, Steven L.; Heath, Donald F.
1990-01-01
Three ozone tape products from the Solar Backscatter Ultraviolet (SBUV) experiment aboard Nimbus 7 were archived at the National Space Science Data Center. The experiment measures the fraction of incoming radiation backscattered by the Earth's atmosphere at 12 wavelengths. In-flight measurements were used to monitor changes in the instrument sensitivity. Total column ozone is derived by comparing the measurements with calculations of what would be measured for different total ozone amounts. The altitude distribution is retrieved using an optimum statistical technique for the inversion. The estimated initial error in the absolute scale for total ozone is 2 percent, with a 3 percent drift over 8 years. The profile error depends on latitude and height, smallest at 3 to 10 mbar; the drift increases with increasing altitude. Three tape products are described. The High Density SBUV (HDSBUV) tape contains the final derived products - the total ozone and the vertical ozone profile - as well as much detailed diagnostic information generated during the retrieval process. The Compressed Ozone (CPOZ) tape contains only that subset of HDSBUV information, including total ozone and ozone profiles, considered most useful for scientific studies. The Zonal Means Tape (ZMT) contains daily, weekly, monthly and quarterly averages of the derived quantities over 10 deg latitude zones.
Ratio of He{sup 2+}/He{sup +} from 80 to 800 eV
DOE Office of Scientific and Technical Information (OSTI.GOV)
Samson, J.A.R.; Stolte, W.C.; He, Z.X.
1997-04-01
The importance of studying the double ionization of He by single photons lies in the fact that He presents the simplest structure for the study of electron correlation processes. Even so, it has proved a challenging problem to understand and describe theoretically. Surprisingly, it has also proved difficult to agree experimentally on the absolute values of the He{sup 2+}/He{sup +} ratios. The availability of new synchrotron facilities with high-intensity light outputs has increased the experimental activity in this area. However, by the very nature of those continuum sources, systematic errors occur due to the presence of higher order spectra, and great care must be exercised. The authors have measured the He{sup 2+}/He{sup +} ratios over a period of 5 years, the last three at the ALS utilizing beamlines 9.0.1 and 6.3.2. The sources of systematic errors that they have considered include: scattered light, higher order spectra, detector sensitivity to differently charged ions, discriminator levels in the counting equipment, gas purity, and stray electrons from filters and metal supports. The measurements have been made at three different synchrotron facilities with different types of monochromators and their potential for different sources of systematic errors. However, the authors' data from all these different measurements agree within a few percent of each other. From the above results and their precision measurements of total photoionization cross sections for He, the authors can obtain the absolute photoionization cross section for He{sup 2+}. They find near-perfect agreement with several of the latest calculations.
Accuracy assessment of the global TanDEM-X Digital Elevation Model with GPS data
NASA Astrophysics Data System (ADS)
Wessel, Birgit; Huber, Martin; Wohlfart, Christian; Marschalk, Ursula; Kosmann, Detlev; Roth, Achim
2018-05-01
The primary goal of the German TanDEM-X mission is the generation of a highly accurate and global Digital Elevation Model (DEM) with global accuracies of at least 10 m absolute height error (linear 90% error). The global TanDEM-X DEM acquired with single-pass SAR interferometry was finished in September 2016. This paper provides a unique accuracy assessment of the final TanDEM-X global DEM using two different GPS point reference data sets, which are distributed across all continents, to fully characterize the absolute height error. Firstly, the absolute vertical accuracy is examined by about three million globally distributed kinematic GPS (KGPS) points derived from 19 KGPS tracks covering a total length of about 66,000 km. Secondly, a comparison is performed with more than 23,000 "GPS on Bench Marks" (GPS-on-BM) points provided by the US National Geodetic Survey (NGS) scattered across 14 different land cover types of the US National Land Cover Data base (NLCD). Both GPS comparisons prove an absolute vertical mean error of TanDEM-X DEM smaller than ±0.20 m, a Root Mean Square Error (RMSE) smaller than 1.4 m and an excellent absolute 90% linear height error below 2 m. The RMSE values are sensitive to land cover types. For low vegetation the RMSE is ±1.1 m, whereas it is slightly higher for developed areas (±1.4 m) and for forests (±1.8 m). This validation confirms an outstanding absolute height error at 90% confidence level of the global TanDEM-X DEM outperforming the requirement by a factor of five. Due to its extensive and globally distributed reference data sets, this study is of considerable interest for scientific and commercial applications.
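The accuracy measures reported here (mean error, RMSE, and 90% linear height error) can be computed directly from DEM-minus-GPS height differences. A sketch with hypothetical check points; LE90 is taken as the 90th-percentile absolute error, one common definition:

```python
from math import sqrt

def height_error_stats(dem_heights, gps_heights):
    """Mean error, RMSE, and 90% linear error (LE90) of DEM-minus-GPS heights."""
    errors = [d - g for d, g in zip(dem_heights, gps_heights)]
    n = len(errors)
    mean_error = sum(errors) / n
    rmse = sqrt(sum(e * e for e in errors) / n)
    abs_sorted = sorted(abs(e) for e in errors)
    le90 = abs_sorted[min(int(0.9 * n), n - 1)]  # 90th-percentile absolute error
    return mean_error, rmse, le90

# Hypothetical check-point heights (m): DEM value vs. GPS reference
dem = [102.1, 98.7, 150.3, 75.0, 60.2]
gps = [101.9, 99.0, 150.0, 75.4, 60.1]
me, rmse, le90 = height_error_stats(dem, gps)
```

The mean error captures systematic bias, the RMSE overall spread, and the LE90 the bound that 90 percent of absolute errors fall under.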
Man power/cost estimation model: Automated planetary projects
NASA Technical Reports Server (NTRS)
Kitchen, L. D.
1975-01-01
A manpower/cost estimation model is developed based on a detailed financial analysis of over 30 million raw data points, which are then compacted by more than three orders of magnitude to the level at which the model is applicable. The major parameter of expenditure is manpower (specifically, direct labor hours) for all spacecraft subsystem and technical support categories. The resultant model is able to provide a mean absolute error of less than fifteen percent for the eight programs comprising the model data base. The model includes cost-saving inheritance factors, broken down into four levels, for estimating follow-on type programs where hardware and design inheritance are evident or expected.
Wetherbee, Gregory A.; Latysh, Natalie E.; Greene, Shannon M.
2006-01-01
The U.S. Geological Survey (USGS) used five programs to provide external quality-assurance monitoring for the National Atmospheric Deposition Program/National Trends Network (NADP/NTN) and two programs to provide external quality-assurance monitoring for the NADP/Mercury Deposition Network (NADP/MDN) during 2004. An intersite-comparison program was used to estimate accuracy and precision of field-measured pH and specific conductance. The variability and bias of NADP/NTN data attributed to field exposure, sample handling and shipping, and laboratory chemical analysis were estimated using the sample-handling evaluation (SHE), field-audit, and interlaboratory-comparison programs. Overall variability of NADP/NTN data was estimated using a collocated-sampler program. Variability and bias of NADP/MDN data attributed to field exposure, sample handling and shipping, and laboratory chemical analysis were estimated using a system-blank program and an interlaboratory-comparison program. In two intersite-comparison studies, approximately 89 percent of NADP/NTN site operators met the pH measurement accuracy goals, and 94.7 to 97.1 percent of NADP/NTN site operators met the accuracy goals for specific conductance. Field chemistry measurements were discontinued by NADP at the end of 2004. As a result, the USGS intersite-comparison program also was discontinued at the end of 2004. Variability and bias in NADP/NTN data due to sample handling and shipping were estimated from paired-sample concentration differences and specific conductance differences obtained for the SHE program. Median absolute errors (MAEs) of 3 percent or less were indicated for all measured analytes except potassium and hydrogen ion. Positive bias was indicated for most of the measured analytes except for calcium, hydrogen ion, and specific conductance.
Negative bias for hydrogen ion and specific conductance indicated loss of hydrogen ion and decreased specific conductance from contact of the sample with the collector bucket. Field-audit results for 2004 indicate dissolved analyte loss in more than one-half of NADP/NTN wet-deposition samples for all analytes except chloride. Concentrations of contaminants also were estimated from field-audit data. On the basis of 2004 field-audit results, at least 25 percent of the 2004 NADP/NTN concentrations for sodium, potassium, and chloride were lower than the maximum sodium, potassium, and chloride contamination likely to be found in 90 percent of the samples with 90-percent confidence. Variability and bias in NADP/NTN data attributed to chemical analysis by the NADP Central Analytical Laboratory (CAL) were comparable to the variability and bias estimated for other laboratories participating in the interlaboratory-comparison program for all analytes. Variability in NADP/NTN ammonium data evident in 2002-03 was reduced substantially during 2004. Sulfate, hydrogen-ion, and specific conductance data reported by CAL during 2004 were positively biased. A significant (α = 0.05) bias was identified for CAL sodium, potassium, ammonium, and nitrate data, but the absolute values of the median differences for these analytes were less than the method detection limits. No detections were reported for CAL analyses of deionized-water samples, indicating that contamination was not a problem for CAL. Control charts show that CAL data were within statistical control during at least 90 percent of 2004. Most 2004 CAL interlaboratory-comparison results for synthetic wet-deposition solutions were within ±10 percent of the most probable values (MPVs) for solution concentrations, except for chloride, nitrate, sulfate, and specific conductance results from one sample in November and one specific conductance result in December.
Overall variability of NADP/NTN wet-deposition measurements was estimated during water year 2004 by the median absolute errors for weekly wet-deposition sample concentrations and precipitation measurements for tw
Federal Register 2010, 2011, 2012, 2013, 2014
2010-06-01
...) a change of at least five absolute percentage points in, but not less than 25 percent of, the... between a countervailable subsidy rate of zero (or de minimis) and a countervailable subsidy rate of... absolute points and not less than 25 percent of the originally calculated margin. Thus, the ministerial...
Kumar, K Vasanth; Porkodi, K; Rocha, F
2008-01-15
A comparison of linear and non-linear regression methods for selecting the optimum isotherm was made using the experimental equilibrium data of basic red 9 sorption by activated carbon. The r(2) was used to select the best fit linear theoretical isotherm. In the case of the non-linear regression method, six error functions, namely the coefficient of determination (r(2)), hybrid fractional error function (HYBRID), Marquardt's percent standard deviation (MPSD), the average relative error (ARE), sum of the errors squared (ERRSQ) and sum of the absolute errors (EABS), were used to predict the parameters involved in the two- and three-parameter isotherms and also to predict the optimum isotherm. Non-linear regression was found to be a better way to obtain the parameters involved in the isotherms and also the optimum isotherm. For the two-parameter isotherms, MPSD was found to be the best error function in minimizing the error distribution between the experimental equilibrium data and predicted isotherms. In the case of the three-parameter isotherms, r(2) was found to be the best error function to minimize the error distribution structure between experimental equilibrium data and theoretical isotherms. The present study showed that the size of the error function alone is not a deciding factor to choose the optimum isotherm. In addition to the size of the error function, the theory behind the predicted isotherm should be verified with the help of experimental data while selecting the optimum isotherm. A coefficient of non-determination, K(2), was explained and was found to be very useful in identifying the best error function while selecting the optimum isotherm.
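The error functions named in this study have standard definitions in the isotherm-fitting literature. A sketch using those standard forms (n data points, p isotherm parameters); the uptake values are hypothetical:

```python
from math import sqrt

def error_functions(q_exp, q_calc, n_params):
    """Common isotherm-fitting error functions. q_exp: measured equilibrium
    uptakes; q_calc: model-predicted uptakes; n_params: number of isotherm
    parameters (standard definitions; verify against the source before use)."""
    n = len(q_exp)
    resid = [e - c for e, c in zip(q_exp, q_calc)]
    errsq = sum(r * r for r in resid)                              # sum of squared errors
    eabs = sum(abs(r) for r in resid)                              # sum of absolute errors
    are = 100 / n * sum(abs(r / e) for r, e in zip(resid, q_exp))  # average relative error
    hybrid = 100 / (n - n_params) * sum(r * r / e for r, e in zip(resid, q_exp))
    mpsd = 100 * sqrt(sum((r / e) ** 2 for r, e in zip(resid, q_exp)) / (n - n_params))
    return {"ERRSQ": errsq, "EABS": eabs, "ARE": are, "HYBRID": hybrid, "MPSD": mpsd}

# Hypothetical equilibrium uptakes (mg/g) vs. a two-parameter isotherm fit
q_exp  = [10.0, 20.0, 30.0, 40.0, 50.0]
q_calc = [9.5, 20.5, 29.0, 41.0, 49.0]
errs = error_functions(q_exp, q_calc, n_params=2)
```

Because ERRSQ weights large uptakes most while ARE and MPSD weight relative deviations, the different functions can pick different "optimum" parameter sets from the same data, which is the point the abstract makes.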
Claims, errors, and compensation payments in medical malpractice litigation.
Studdert, David M; Mello, Michelle M; Gawande, Atul A; Gandhi, Tejal K; Kachalia, Allen; Yoon, Catherine; Puopolo, Ann Louise; Brennan, Troyen A
2006-05-11
In the current debate over tort reform, critics of the medical malpractice system charge that frivolous litigation (claims that lack evidence of injury, substandard care, or both) is common and costly. Trained physicians reviewed a random sample of 1452 closed malpractice claims from five liability insurers to determine whether a medical injury had occurred and, if so, whether it was due to medical error. We analyzed the prevalence, characteristics, litigation outcomes, and costs of claims that lacked evidence of error. For 3 percent of the claims, there were no verifiable medical injuries, and 37 percent did not involve errors. Most of the claims that were not associated with errors (370 of 515 [72 percent]) or injuries (31 of 37 [84 percent]) did not result in compensation; most that involved injuries due to error did (653 of 889 [73 percent]). Payment of claims not involving errors occurred less frequently than did the converse form of inaccuracy, nonpayment of claims associated with errors. When claims not involving errors were compensated, payments were significantly lower on average than payments for claims involving errors ($313,205 vs. $521,560, P=0.004). Overall, claims not involving errors accounted for 13 to 16 percent of the system's total monetary costs. For every dollar spent on compensation, 54 cents went to administrative expenses (including those involving lawyers, experts, and courts). Claims involving errors accounted for 78 percent of total administrative costs. Claims that lack evidence of error are not uncommon, but most are denied compensation. The vast majority of expenditures go toward litigation over errors and payment of claims that involved errors. The overhead costs of malpractice litigation are exorbitant. Copyright 2006 Massachusetts Medical Society.
Yang, Eunjoo; Park, Hyun Woo; Choi, Yeon Hwa; Kim, Jusim; Munkhdalai, Lkhagvadorj; Musa, Ibrahim; Ryu, Keun Ho
2018-05-11
Early detection of infectious disease outbreaks is one of the important and significant issues in syndromic surveillance systems. It helps to provide a rapid epidemiological response and reduce morbidity and mortality. In order to upgrade the current system at the Korea Centers for Disease Control and Prevention (KCDC), a comparative study of state-of-the-art techniques is required. We compared four different temporal outbreak detection algorithms: the CUmulative SUM (CUSUM), the Early Aberration Reporting System (EARS), the autoregressive integrated moving average (ARIMA), and the Holt-Winters algorithm. The comparison was performed based on not only 42 different time series generated taking into account trends, seasonality, and randomly occurring outbreaks, but also real-world daily and weekly data related to diarrhea infection. The algorithms were evaluated using different metrics, namely sensitivity, specificity, positive predictive value (PPV), negative predictive value (NPV), F1 score, symmetric mean absolute percent error (sMAPE), root-mean-square error (RMSE), and mean absolute deviation (MAD). Although the comparison results showed better performance for the EARS C3 method with respect to the other algorithms regardless of the characteristics of the underlying time series data, Holt-Winters showed better performance when the baseline frequency and the dispersion parameter values were less than 1.5 and 2, respectively.
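The forecast-accuracy metrics named above (sMAPE, RMSE, MAD) can be sketched as follows. Note that several sMAPE variants exist; the version below, which divides by the mean of the absolute actual and forecast values, is one common choice and is an assumption here, not necessarily the exact formula used in the study.

```python
import numpy as np

def smape(actual, forecast):
    # Symmetric MAPE: mean of |F - A| / ((|A| + |F|) / 2), in percent.
    a, f = np.asarray(actual, float), np.asarray(forecast, float)
    return 100.0 * np.mean(np.abs(f - a) / ((np.abs(a) + np.abs(f)) / 2.0))

def rmse(actual, forecast):
    # Root-mean-square error.
    a, f = np.asarray(actual, float), np.asarray(forecast, float)
    return float(np.sqrt(np.mean((f - a) ** 2)))

def mad(actual, forecast):
    # Mean absolute deviation of the forecast from the actual series.
    a, f = np.asarray(actual, float), np.asarray(forecast, float)
    return float(np.mean(np.abs(f - a)))
```

Unlike plain MAPE, sMAPE is bounded (0-200% in this formulation) and treats over- and underestimation more symmetrically, which is why it is popular for surveillance-count data.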
Laboratory errors and patient safety.
Miligy, Dawlat A
2015-01-01
Laboratory data are extensively used in medical practice; consequently, laboratory errors have a tremendous impact on patient safety. Therefore, programs designed to identify and reduce laboratory errors, as well as specific strategies, are required to minimize these errors and improve patient safety. The purpose of this paper is to identify some of the commonly encountered laboratory errors throughout our practice in laboratory work, their hazards to patient health care, and some measures and recommendations to minimize or eliminate these errors. The laboratory errors encountered during May 2008 were recorded and statistically evaluated (using simple percent distribution) in the laboratory department of one of the private hospitals in Egypt. Errors were classified according to the laboratory phases and according to their implication for patient health. Data obtained from 1,600 testing procedures revealed that the total number of encountered errors was 14 tests (0.87 percent of total testing procedures). Most of the encountered errors lay in the pre- and post-analytic phases of the testing cycle (representing 35.7 and 50 percent of total errors, respectively), while the number of test errors encountered in the analytic phase represented only 14.3 percent of total errors. About 85.7 percent of total errors had no significant implication for patient health, being detected before test reports were submitted to the patients. On the other hand, the test errors that had already been submitted to patients and reached the physician represented 14.3 percent of total errors. Only 7.1 percent of the errors could have had an impact on patient diagnosis. The findings of this study were concomitant with those published from the USA and other countries. This proves that laboratory problems are universal and need general standardization and benchmarking measures.
This is the first data published from Arabic countries that evaluates the encountered laboratory errors, highlighting the great need for universal standardization and benchmarking measures to control laboratory work.
An absolute photometric system at 10 and 20 microns
NASA Technical Reports Server (NTRS)
Rieke, G. H.; Lebofsky, M. J.; Low, F. J.
1985-01-01
Two new direct calibrations at 10 and 20 microns are presented in which terrestrial flux standards are referred to infrared standard stars. These measurements give both good agreement and higher accuracy when compared with previous direct calibrations. As a result, the absolute calibrations at 10 and 20 microns have now been determined with accuracies of 3 and 8 percent, respectively. A variety of absolute calibrations based on extrapolation of stellar spectra from the visible to 10 microns are reviewed. Current atmospheric models of A-type stars underestimate their fluxes by about 10 percent at 10 microns, whereas models of solar-type stars agree well with the direct calibrations. The calibration at 20 microns can probably be determined to about 5 percent by extrapolation from the more accurate result at 10 microns. The photometric system at 10 and 20 microns is updated to reflect the new absolute calibration, to base its zero point directly on the colors of A0 stars, and to improve the accuracy in the comparison of the standard stars.
Juodzbaliene, Vilma; Darbutas, Tomas; Skurvydas, Albertas
2016-01-01
The aim of the study was to determine the effect of different muscle lengths and visual feedback information (VFI) on the accuracy of isometric contraction of the elbow flexors in men after an ischemic stroke (IS). Materials and Methods. Maximum voluntary muscle contraction force (MVMCF) and an accurate determinate muscle force (20% of MVMCF) developed during an isometric contraction of the elbow flexors at 90° and 60° of elbow flexion were measured by an isokinetic dynamometer in healthy subjects (MH, n = 20) and subjects after an IS during their postrehabilitation period (MS, n = 20). Results. In order to evaluate the accuracy of the isometric contraction of the elbow flexors, absolute errors were calculated. The absolute errors provided information about the difference between the determinate and achieved muscle force. Conclusions. Both MH and MS subjects tended to make greater absolute errors when generating the determinate force at the greater elbow flexor length, despite the presence of VFI. Absolute errors also increased in both groups at the greater elbow flexor length without VFI. MS subjects made greater absolute errors than MH subjects when generating the determinate force without VFI at the shorter elbow flexor length. PMID:27042670
Water quality management using statistical analysis and time-series prediction model
NASA Astrophysics Data System (ADS)
Parmar, Kulwinder Singh; Bhardwaj, Rashmi
2014-12-01
This paper deals with water quality management using statistical analysis and a time-series prediction model. The monthly variation of water quality standards has been used to compare the statistical mean, median, mode, standard deviation, kurtosis, skewness, and coefficient of variation at the Yamuna River. The model was validated using R-squared, root mean square error, mean absolute percentage error, maximum absolute percentage error, mean absolute error, maximum absolute error, normalized Bayesian information criterion, Ljung-Box analysis, predicted values and confidence limits. Using an autoregressive integrated moving average model, future values of water quality parameters have been estimated. It is observed that the predictive model is useful at 95% confidence limits, and that the curve is platykurtic for potential of hydrogen (pH), free ammonia, total Kjeldahl nitrogen, dissolved oxygen and water temperature (WT), and leptokurtic for chemical oxygen demand and biochemical oxygen demand. Also, it is observed that the predicted series is close to the original series, which provides a perfect fit. All parameters except pH and WT cross the prescribed limits of the World Health Organization/United States Environmental Protection Agency, and thus the water is not fit for drinking, agricultural or industrial use.
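A minimal sketch of the point-error metrics listed in this abstract (RMSE, MAPE, maximum absolute percentage error, MAE, maximum absolute error), using their standard textbook forms; naming conventions differ between statistics packages, so these definitions are assumptions rather than the authors' exact formulas.

```python
import numpy as np

def validation_metrics(observed, predicted):
    """Compute common validation metrics for a fitted time-series model."""
    o, p = np.asarray(observed, float), np.asarray(predicted, float)
    ape = 100.0 * np.abs((o - p) / o)   # absolute percentage errors
    ae = np.abs(o - p)                  # absolute errors
    return {
        "RMSE": float(np.sqrt(np.mean((o - p) ** 2))),
        "MAPE": float(np.mean(ape)),
        "MaxAPE": float(np.max(ape)),
        "MAE": float(np.mean(ae)),
        "MaxAE": float(np.max(ae)),
    }
```

Reporting both the mean and maximum variants, as the paper does, captures average fit quality and worst-case deviation at the same time.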
Concurrent Validity of Wearable Activity Trackers Under Free-Living Conditions.
Brooke, Skyler M; An, Hyun-Sung; Kang, Seoung-Ki; Noble, John M; Berg, Kris E; Lee, Jung-Min
2017-04-01
Brooke, SM, An, H-S, Kang, S-K, Noble, JM, Berg, KE, and Lee, J-M. Concurrent validity of wearable activity trackers under free-living conditions. J Strength Cond Res 31(4): 1097-1106, 2017-The purpose of this study is to evaluate the concurrent validity of wearable activity trackers in energy expenditure (EE) and sleep period time (SPT) under free-living conditions. Ninety-five (28.5 ± 9.8 years) healthy men (n = 34) and women (n = 61) participated in this study. The total EE and SPT were measured using 8 monitors: Nike+ FuelBand SE (NFB), Garmin VivoFit (VF), Misfit Shine (MF), Fitbit Flex (FF), Jawbone UP (JU), Polar Loop (PL), Fitbit Charge HR (FC), and SenseWear Armband Mini (SWA) (criterion measures: SWA for EE and a sleep log for SPT). The mean absolute percent error (MAPE) for EE was 13.0, 15.2, 15.5, 16.1, 16.2, 22.8, and 24.5% for PL, MF, FF, NFB, FC, JU, and VF, respectively. Mean absolute percent errors were calculated for SPT to be 4.0, 8.8, 10.2, 11.5, 12.9, 13.6, 17.5, and 21.61% for VF, FF, JU, FC, MF, SWA laying down, PL, and SWA, respectively. Concurrent validity was examined using equivalence testing on EE (equivalence zone: 2,889.7-3,531.9 kcal); two trackers fell short of the zone: PL (2,714.4-3,164.8 kcal) and FC (2,473.8-3,066.5 kcal). For SPT (equivalence zone: 420.6-514.0 minutes), several monitors fell in the zone: PL (448.3-485.6 minutes), MF (442.8-492.2 minutes), and FF (427.7-486.7 minutes). This study suggests that the PL and FC provide a reasonable estimate of EE under free-living conditions. The PL, FC, and MF were the most valid monitors used for measuring SPT.
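The MAPE and equivalence-zone logic used in this study can be sketched as follows. The reported EE zone (2,889.7-3,531.9 kcal) is consistent with a ±10% band around the criterion mean, so the ±10% tolerance below is a plausible assumption, not a stated parameter of the study.

```python
import numpy as np

def mape(criterion, device):
    # Mean absolute percent error of device readings against a criterion.
    c, d = np.asarray(criterion, float), np.asarray(device, float)
    return 100.0 * np.mean(np.abs(d - c) / c)

def within_equivalence_zone(ci_low, ci_high, criterion_mean, tol=0.10):
    # Equivalence testing as used in tracker-validation studies: the
    # device's confidence interval must fall entirely inside
    # criterion_mean * (1 - tol) .. criterion_mean * (1 + tol).
    lo, hi = criterion_mean * (1 - tol), criterion_mean * (1 + tol)
    return lo <= ci_low and ci_high <= hi
```

For example, the PL interval for EE (2,714.4-3,164.8 kcal) starts below the lower edge of the ±10% zone, so it fails the equivalence test, matching the abstract.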
Yurkovich, James T.; Yang, Laurence; Palsson, Bernhard O.; ...
2017-03-06
Deep-coverage metabolomic profiling has revealed a well-defined development of metabolic decay in human red blood cells (RBCs) under cold storage conditions. A set of extracellular biomarkers has been recently identified that reliably defines the qualitative state of the metabolic network throughout this metabolic decay process. Here, we extend the utility of these biomarkers by using them to quantitatively predict the concentrations of other metabolites in the red blood cell. We are able to accurately predict the concentration profile of 84 of the 91 (92%) measured metabolites (p < 0.05) in RBC metabolism using only measurements of these five biomarkers. The median of prediction errors (symmetric mean absolute percent error) across all metabolites was 13%. Furthermore, the ability to predict numerous metabolite concentrations from a simple set of biomarkers offers the potential for the development of a powerful workflow that could be used to evaluate the metabolic state of a biological system using a minimal set of measurements.
Prediction of stream volatilization coefficients
Rathbun, Ronald E.
1990-01-01
Equations are developed for predicting the liquid-film and gas-film reference-substance parameters for quantifying volatilization of organic solutes from streams. Molecular weight and molecular-diffusion coefficients of the solute are used as correlating parameters. Equations for predicting molecular-diffusion coefficients of organic solutes in water and air are developed, with molecular weight and molal volume as parameters. Mean absolute errors of prediction for diffusion coefficients in water are 9.97% for the molecular-weight equation, 6.45% for the molal-volume equation. The mean absolute error for the diffusion coefficient in air is 5.79% for the molal-volume equation. Molecular weight is not a satisfactory correlating parameter for diffusion in air because two equations are necessary to describe the values in the data set. The best predictive equation for the liquid-film reference-substance parameter has a mean absolute error of 5.74%, with molal volume as the correlating parameter. The best equation for the gas-film parameter has a mean absolute error of 7.80%, with molecular weight as the correlating parameter.
Ding, Yi; Peng, Kai; Yu, Miao; Lu, Lei; Zhao, Kun
2017-08-01
The performance of the two selected spatial frequency phase unwrapping methods is limited by a phase error bound beyond which errors will occur in the fringe order leading to a significant error in the recovered absolute phase map. In this paper, we propose a method to detect and correct the wrong fringe orders. Two constraints are introduced during the fringe order determination of two selected spatial frequency phase unwrapping methods. A strategy to detect and correct the wrong fringe orders is also described. Compared with the existing methods, we do not need to estimate the threshold associated with absolute phase values to determine the fringe order error, thus making it more reliable and avoiding the procedure of search in detecting and correcting successive fringe order errors. The effectiveness of the proposed method is validated by the experimental results.
Farooqui, Javed Hussain; Koul, Archana; Dutta, Ranjan; Shroff, Noshir Minoo
2016-01-01
To compare the accuracy of two different methods of preoperative marking for toric intraocular lens (IOL) implantation, bubble marker versus pendulum marker, as a means of establishing the reference point for the final alignment of the toric IOL to achieve an outcome as close as possible to emmetropia. Toric IOLs were implanted in 180 eyes of 110 patients. One group (55 patients) had preoperative marking of both eyes done with a bubble marker (ASICO AE-2791TBL) and the other group (55 patients) with a pendulum marker (Rumex® 3-193). Reference marks were placed at the 3-, 6-, and 9-o'clock positions on the limbus. Slit-lamp photographs were analyzed using Adobe Photoshop (version 7.0). The amount of alignment error (in degrees) induced in each group was measured. Mean absolute rotation error in the preoperative marking in the horizontal axis was 2.42°±1.71° in the bubble marker group and 2.83°±2.31° in the pendulum marker group (P=0.501). Sixty percent of the pendulum group and 70% of the bubble group had rotation error ≤3° (P=0.589), and 90% of eyes in the pendulum group and 96.7% in the bubble group had rotation error ≤5° (P=0.612). Both preoperative marking techniques result in approximately 3° of alignment error. Both marking techniques are simple, predictable, reproducible and easy to perform.
NASA Technical Reports Server (NTRS)
Warshawsky, I.
1972-01-01
Total pressure in a calibration chamber is determined by measuring the force on a disk suspended in an orifice in the baseplate of the chamber. The disk forms a narrow annular gap with the orifice. A continuous flow of calibration gas passes through the chamber and annulus to a downstream pumping system. The ratio of pressures on the two faces of the disk exceeds 100:1, so that chamber pressure is substantially equal to the net force on the disk divided by the disk area. This force is measured with an electrodynamometer that can be calibrated in situ with dead weights. The probable error in pressure measurement is plus or minus (0.5 microtorr + 0.6 percent).
Unbiased symmetric metrics provide a useful measure to quickly compare two datasets, with similar interpretations for both under and overestimations. Two examples include the normalized mean bias factor and normalized mean absolute error factor. However, the original formulations...
Dependence of hydrogen arcjet operation on electrode geometry
NASA Technical Reports Server (NTRS)
Pencil, Eric J.; Sankovic, John M.; Sarmiento, Charles J.; Hamley, John A.
1992-01-01
The dependence of 2 kW hydrogen arcjet performance on cathode-to-anode electrode spacing was evaluated at specific impulses of 900 and 1000 s. Less than 2 absolute percent change in efficiency was measured for the spacings tested, which did not repeat the 14 absolute percent variation reported in earlier work with similar electrode designs. A different nozzle configuration was used to quantify the variation in hydrogen arcjet performance over an extended range of electrode spacing. Electrode gap variation resulted in less than 3 absolute percent change in efficiency. These null results suggested that electrode spacing is decoupled from hydrogen arcjet performance over the ranges tested. Initial studies were also conducted on hydrogen arcjet ignition. The dependence of breakdown voltage on mass flow rate and electrode gap agreed with Paschen curves for hydrogen. Preliminary characterization of the dependence of hydrogen arcjet ignition on rates of pulse repetition and pulse voltage rise was also included for comparison with previous results obtained using simulated hydrazine.
NASA Astrophysics Data System (ADS)
Rawat, Kishan Singh; Sehgal, Vinay Kumar; Pradhan, Sanatan; Ray, Shibendu S.
2018-03-01
We have estimated soil moisture (SM) by using the circular horizontal polarization backscattering coefficient (σ°_RH), the difference of circular vertical and horizontal backscattering coefficients (σ°_RV − σ°_RH) from FRS-1 data of the Radar Imaging Satellite (RISAT-1), and surface roughness in terms of RMS height (RMS_height). We examined the performance of FRS-1 in retrieving SM under a wheat crop at tillering stage. Results revealed that it is possible to develop a good semi-empirical model (SEM) to estimate SM of the upper soil layer using RISAT-1 SAR data, rather than using the existing empirical model based on only a single parameter, i.e., σ°. Near-surface SM measurements were related to σ°_RH and σ°_RV − σ°_RH derived using the 5.35 GHz (C-band) image of RISAT-1, and to RMS_height. The roughness component derived in terms of RMS_height showed a good positive correlation with σ°_RV − σ°_RH (R² = 0.65). By considering all the major influencing factors (σ°_RH, σ°_RV − σ°_RH, and RMS_height), an SEM was developed in which the predicted (volumetric) SM depends on σ°_RH, σ°_RV − σ°_RH, and RMS_height. This SEM showed R² of 0.87, adjusted R² of 0.85, multiple R = 0.94, and a standard error of 0.05 at the 95% confidence level. Validation of the SM derived from the semi-empirical model against observed measurements (SM_Observed) showed root mean square error (RMSE) = 0.06, relative RMSE (R-RMSE) = 0.18, mean absolute error (MAE) = 0.04, normalized RMSE (NRMSE) = 0.17, Nash-Sutcliffe efficiency (NSE) = 0.91 (≈1), index of agreement (d) = 1, coefficient of determination (R²) = 0.87, mean bias error (MBE) = 0.04, standard error of estimate (SEE) = 0.10, volume error (VE) = 0.15, and variance of the distribution of differences (S_d²) = 0.004. The developed SEM showed better performance in estimating SM than the Topp empirical model, which is based only on σ°.
By using the developed SEM, top-soil SM can be estimated with a low mean absolute percent error (MAPE) = 1.39 and can be used for operational applications.
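Two of the agreement statistics reported above, the Nash-Sutcliffe efficiency (NSE) and the index of agreement (d), can be sketched in their standard forms; the abstract does not give the authors' exact conventions, so these textbook definitions are an assumption.

```python
import numpy as np

def nash_sutcliffe(obs, sim):
    # NSE = 1 - sum((obs - sim)^2) / sum((obs - mean(obs))^2).
    # 1 is a perfect match; 0 means the model is no better than the mean.
    o, s = np.asarray(obs, float), np.asarray(sim, float)
    return float(1.0 - np.sum((o - s) ** 2) / np.sum((o - o.mean()) ** 2))

def index_of_agreement(obs, sim):
    # Willmott's d: 1 - sum((obs - sim)^2) /
    #   sum((|sim - mean(obs)| + |obs - mean(obs)|)^2); 1 is perfect.
    o, s = np.asarray(obs, float), np.asarray(sim, float)
    num = np.sum((o - s) ** 2)
    den = np.sum((np.abs(s - o.mean()) + np.abs(o - o.mean())) ** 2)
    return float(1.0 - num / den)
```

NSE penalizes variance errors more harshly than d, which is why papers often report both alongside RMSE and MAE.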
NASA Astrophysics Data System (ADS)
Radziukynas, V.; Klementavičius, A.
2016-04-01
The paper analyses the performance results of the recently developed short-term forecasting suite for the Latvian power system. The system load and wind power are forecasted using ANN and ARIMA models, respectively, and the forecasting accuracy is evaluated in terms of errors, mean absolute errors, and mean absolute percentage errors. The influence of additional input variables on load forecasting errors is investigated. The interplay of hourly load and wind power forecasting errors is also evaluated for the Latvian power system with historical loads (the year 2011) and planned wind power capacities (the year 2023).
The computer speed of SMVGEAR II was improved markedly on scalar and vector machines with relatively little loss in accuracy. The improvement was due to a method of frequently recalculating the absolute error tolerance instead of keeping it constant for a given set of chemistry. ...
Hess, G.W.; Bohman, L.R.
1996-01-01
Techniques for estimating monthly mean streamflow at gaged sites and monthly streamflow duration characteristics at ungaged sites in central Nevada were developed using streamflow records at six gaged sites and basin physical and climatic characteristics. Streamflow data at gaged sites were related by regression techniques to concurrent flows at nearby gaging stations so that monthly mean streamflows for periods of missing or no record can be estimated for gaged sites in central Nevada. The standard error of estimate for relations at these sites ranged from 12 to 196 percent. Also, monthly streamflow data for selected percent exceedence levels were used in regression analyses with basin and climatic variables to determine relations for ungaged basins for annual and monthly percent exceedence levels. Analyses indicate that the drainage area and percent of drainage area at altitudes greater than 10,000 feet are the most significant variables. For the annual percent exceedence, the standard error of estimate of the relations for ungaged sites ranged from 51 to 96 percent and standard error of prediction for ungaged sites ranged from 96 to 249 percent. For the monthly percent exceedence values, the standard error of estimate of the relations ranged from 31 to 168 percent, and the standard error of prediction ranged from 115 to 3,124 percent. Reliability and limitations of the estimating methods are described.
On the accuracy of the Head Impact Telemetry (HIT) System used in football helmets.
Jadischke, Ron; Viano, David C; Dau, Nathan; King, Albert I; McCarthy, Joe
2013-09-03
On-field measurement of head impacts has relied on the Head Impact Telemetry (HIT) System, which uses helmet mounted accelerometers to determine linear and angular head accelerations. HIT is used in youth and collegiate football to assess the frequency and severity of helmet impacts. This paper evaluates the accuracy of HIT for individual head impacts. Most HIT validations used a medium helmet on a Hybrid III head. However, the appropriate helmet is large based on the Hybrid III head circumference (58 cm) and manufacturer's fitting instructions. An instrumented skull cap was used to measure the pressure between the head of football players (n=63) and their helmet. The average pressure with a large helmet on the Hybrid III was comparable to the average pressure from helmets used by players. A medium helmet on the Hybrid III produced average pressures greater than the 99th percentile volunteer pressure level. Linear impactor tests were conducted using a large and medium helmet on the Hybrid III. Testing was conducted by two independent laboratories. HIT data were compared to data from the Hybrid III equipped with a 3-2-2-2 accelerometer array. The absolute and root mean square error (RMSE) for HIT were computed for each impact (n=90). Fifty-five percent (n=49) had an absolute error greater than 15% while the RMSE was 59.1% for peak linear acceleration. Copyright © 2013 Elsevier Ltd. All rights reserved.
3D measurement using combined Gray code and dual-frequency phase-shifting approach
NASA Astrophysics Data System (ADS)
Yu, Shuang; Zhang, Jing; Yu, Xiaoyang; Sun, Xiaoming; Wu, Haibin; Liu, Xin
2018-04-01
The combined Gray code and phase-shifting approach is a commonly used 3D measurement technique. In this technique, an error that equals integer multiples of the phase-shifted fringe period, i.e. period jump error, often exists in the absolute analog code, which can lead to gross measurement errors. To overcome this problem, the present paper proposes 3D measurement using a combined Gray code and dual-frequency phase-shifting approach. Based on 3D measurement using the combined Gray code and phase-shifting approach, one set of low-frequency phase-shifted fringe patterns with an odd-numbered multiple of the original phase-shifted fringe period is added. Thus, the absolute analog code measured value can be obtained by the combined Gray code and phase-shifting approach, and the low-frequency absolute analog code measured value can also be obtained by adding low-frequency phase-shifted fringe patterns. Then, the corrected absolute analog code measured value can be obtained by correcting the former by the latter, and the period jump errors can be eliminated, resulting in reliable analog code unwrapping. For the proposed approach, we established its measurement model, analyzed its measurement principle, expounded the mechanism of eliminating period jump errors by error analysis, and determined its applicable conditions. Theoretical analysis and experimental results show that the proposed approach can effectively eliminate period jump errors, reliably perform analog code unwrapping, and improve the measurement accuracy.
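The period-jump correction idea, snapping the high-frequency absolute code to the nearest value consistent with the jump-free low-frequency code, can be sketched as follows. This is an illustrative formulation under stated assumptions, not the authors' exact algorithm.

```python
import numpy as np

def correct_period_jumps(code_high, code_low, period):
    """Remove period-jump errors from a high-frequency absolute analog code.

    code_high : absolute analog code from the Gray-code + phase-shifting
                decoding; may contain errors equal to integer multiples
                of `period` (the phase-shifted fringe period).
    code_low  : absolute analog code from the added low-frequency fringes,
                assumed jump-free but noisier, in the same units.
    """
    h = np.asarray(code_high, float)
    l = np.asarray(code_low, float)
    # Integer number of fringe periods by which each sample is off;
    # valid while the low-frequency noise stays below half a period.
    jumps = np.round((l - h) / period)
    return h + jumps * period
```

Because the correction rounds to the nearest integer multiple of the period, it tolerates low-frequency code noise up to half a fringe period, which is the usual applicability condition for this kind of hierarchical unwrapping.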
NASA Technical Reports Server (NTRS)
Thome, Kurtis; McCorkel, Joel; McAndrew, Brendan
2013-01-01
A goal of the Climate Absolute Radiance and Refractivity Observatory (CLARREO) mission is to observe high-accuracy, long-term climate change trends over decadal time scales. The key to such a goal is improving the accuracy of SI-traceable absolute calibration across infrared and reflected solar wavelengths, allowing climate change to be separated from the limit of natural variability. The advances required to reach the on-orbit absolute accuracy that would allow climate change observations to survive data gaps exist at NIST in the laboratory, but it still needs to be demonstrated that these advances can move successfully from NIST to NASA and/or instrument vendor capabilities for spaceborne instruments. The current work describes the radiometric calibration error budget for the Solar, Lunar for Absolute Reflectance Imaging Spectroradiometer (SOLARIS), which is the calibration demonstration system (CDS) for the reflected solar portion of CLARREO. The goal of the CDS is to allow the testing and evaluation of calibration approaches, alternate design and/or implementation approaches, and components for the CLARREO mission. SOLARIS also provides a test-bed for detector technologies, non-linearity determination and uncertainties, and application of future technology developments and suggested spacecraft instrument design modifications. The resulting SI-traceable error budget for reflectance retrieval using solar irradiance as a reference, and methods for laboratory-based absolute calibration suitable for climate-quality data collections, are given. Key components in the error budget are geometry differences between the solar and earth views, knowledge of attenuator behavior when viewing the sun, and sensor behavior such as detector linearity and noise behavior. Methods for demonstrating this error budget are also presented.
Cossich, Victor; Mallrich, Frédéric; Titonelli, Victor; de Sousa, Eduardo Branco; Velasques, Bruna; Salles, José Inácio
2014-01-01
To ascertain whether the proprioceptive deficit in the sense of joint position continues to be present when patients with a limb presenting a deficient anterior cruciate ligament (ACL) are assessed by testing their active reproduction of joint position, in comparison with the contralateral limb. Twenty patients with unilateral ACL tearing participated in the study. Their active reproduction of joint position in the limb with the deficient ACL and in the healthy contralateral limb was tested. Meta-positions of 20% and 50% of the maximum joint range of motion were used. Proprioceptive performance was determined through the values of the absolute error, variable error and constant error. Significant differences in absolute error were found at both of the positions evaluated, and in constant error at 50% of the maximum joint range of motion. When evaluated in terms of absolute error, the proprioceptive deficit continues to be present even when an active evaluation of the sense of joint position is made. Consequently, this sense involves activity of both intramuscular and tendon receptors.
Christiansen, Mark P; Klaff, Leslie J; Brazg, Ronald; Chang, Anna R; Levy, Carol J; Lam, David; Denham, Douglas S; Atiee, George; Bode, Bruce W; Walters, Steven J; Kelley, Lynne; Bailey, Timothy S
2018-03-01
Persistent use of real-time continuous glucose monitoring (CGM) improves diabetes control in individuals with type 1 diabetes (T1D) and type 2 diabetes (T2D). PRECISE II was a nonrandomized, blinded, prospective, single-arm, multicenter study that evaluated the accuracy and safety of the implantable Eversense CGM system among adult participants with T1D and T2D (NCT02647905). The primary endpoint was the mean absolute relative difference (MARD) between paired Eversense and Yellow Springs Instrument (YSI) reference measurements through 90 days postinsertion for reference glucose values from 40 to 400 mg/dL. Additional endpoints included Clarke Error Grid analysis and sensor longevity. The primary safety endpoint was the incidence of device-related or sensor insertion/removal procedure-related serious adverse events (SAEs) through 90 days postinsertion. Ninety participants received the CGM system. The overall MARD value against reference glucose values was 8.8% (95% confidence interval: 8.1%-9.3%), which was significantly lower than the prespecified 20% performance goal for accuracy (P < 0.0001). Ninety-three percent of CGM values were within 20/20% of reference values over the total glucose range of 40-400 mg/dL. Clarke Error Grid analysis showed 99.3% of samples in the clinically acceptable error zones A (92.8%) and B (6.5%). Ninety-one percent of sensors were functional through day 90. One related SAE (1.1%) occurred during the study for removal of a sensor. The PRECISE II trial demonstrated that the Eversense CGM system provided accurate glucose readings through the intended 90-day sensor life with a favorable safety profile.
Absolute calibration of optical flats
Sommargren, Gary E.
2005-04-05
The invention uses the phase shifting diffraction interferometer (PSDI) to provide a true point-by-point measurement of absolute flatness over the surface of optical flats. Beams exiting the fiber optics in a PSDI have perfect spherical wavefronts. The measurement beam is reflected from the optical flat and passed through an auxiliary optic to then be combined with the reference beam on a CCD. The combined beams include phase errors due to both the optic under test and the auxiliary optic. Standard phase extraction algorithms are used to calculate this combined phase error. The optical flat is then removed from the system and the measurement fiber is moved to recombine the two beams. The newly combined beams include only the phase errors due to the auxiliary optic. When the second phase measurement is subtracted from the first phase measurement, the absolute phase error of the optical flat is obtained.
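The two-measurement subtraction at the heart of this scheme can be sketched numerically; the phase maps below are synthetic stand-ins, not real interferometer data:

```python
import numpy as np

# Sketch of the PSDI two-step subtraction described above (synthetic phases).
rng = np.random.default_rng(0)
shape = (64, 64)

phase_flat = 0.05 * rng.standard_normal(shape)  # unknown flat error (radians)
phase_aux = 0.20 * rng.standard_normal(shape)   # auxiliary-optic error (radians)

# Measurement 1: flat in the beam path -> combined flat + auxiliary-optic error
measurement_1 = phase_flat + phase_aux
# Measurement 2: flat removed, beams recombined -> auxiliary-optic error only
measurement_2 = phase_aux

# Subtracting the second measurement isolates the absolute error of the flat
recovered_flat = measurement_1 - measurement_2
```

In the real instrument each "measurement" is itself produced by a phase-extraction algorithm from fringe data; only the final subtraction is shown here.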
Modelling tourists arrival using time varying parameter
NASA Astrophysics Data System (ADS)
Suciptawati, P.; Sukarsa, K. G.; Kencana, Eka N.
2017-06-01
The importance of tourism and its related sectors for economic development and poverty reduction in many countries has increased researchers' attention to studying and modelling tourist arrivals. This work demonstrates the time varying parameter (TVP) technique for modelling the arrival of Korean tourists to Bali. The number of Korean tourists who visited Bali in the period January 2010 to December 2015 was used as the dependent variable (KOR). The predictors are the exchange rate of the Won to IDR (WON), the inflation rate in Korea (INFKR), and the inflation rate in Indonesia (INFID). Since tourist visits to Bali tend to fluctuate by nationality, the model was built by applying TVP, and its parameters were approximated using the Kalman filter algorithm. The results showed that all predictor variables (WON, INFKR, INFID) significantly affect KOR. For in-sample and out-of-sample forecasts with ARIMA-forecasted values for the predictors, the TVP model gave a mean absolute percentage error (MAPE) of 11.24 percent and 12.86 percent, respectively.
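A minimal sketch of a random-walk time-varying-parameter regression filtered with the standard Kalman recursion, in the spirit of the model described above (the paper's actual specification and noise variances are not given; all names and values below are illustrative):

```python
import numpy as np

def kalman_tvp(y, X, q=1e-4, r=1.0):
    """Time-varying-parameter regression y_t = x_t' beta_t + v_t with
    random-walk states beta_t = beta_{t-1} + w_t, filtered by a standard
    Kalman recursion. q and r are assumed noise variances."""
    n, k = X.shape
    beta = np.zeros(k)                  # state estimate
    P = np.eye(k)                       # state covariance
    Q, R = q * np.eye(k), r
    betas = np.empty((n, k))
    for t in range(n):
        x = X[t]
        P = P + Q                               # predict step (random walk)
        S = x @ P @ x + R                       # innovation variance
        K = P @ x / S                           # Kalman gain
        beta = beta + K * (y[t] - x @ beta)     # measurement update
        P = P - np.outer(K, x) @ P
        betas[t] = beta
    return betas

# Tiny demonstration with one slowly drifting coefficient
rng = np.random.default_rng(1)
n = 200
X = np.column_stack([np.ones(n), rng.standard_normal(n)])
true_beta = np.column_stack([np.linspace(1.0, 2.0, n), np.full(n, 0.5)])
y = (X * true_beta).sum(axis=1) + 0.1 * rng.standard_normal(n)
est = kalman_tvp(y, X, q=1e-3, r=0.01)
```

The filter tracks the drifting intercept toward its final value of 2.0 while holding the stable slope near 0.5.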
Relative and Absolute Error Control in a Finite-Difference Method Solution of Poisson's Equation
ERIC Educational Resources Information Center
Prentice, J. S. C.
2012-01-01
An algorithm for error control (absolute and relative) in the five-point finite-difference method applied to Poisson's equation is described. The algorithm is based on discretization of the domain of the problem by means of three rectilinear grids, each of different resolution. We discuss some hardware limitations associated with the algorithm,…
Assessing Suturing Skills in a Self-Guided Learning Setting: Absolute Symmetry Error
ERIC Educational Resources Information Center
Brydges, Ryan; Carnahan, Heather; Dubrowski, Adam
2009-01-01
Directed self-guidance, whereby trainees independently practice a skill-set in a structured setting, may be an effective technique for novice training. Currently, however, most evaluation methods require an expert to be present during practice. The study aim was to determine if absolute symmetry error, a clinically important measure that can be…
Zhang, Hai Ping; Li, Feng Ri; Dong, Li Hu; Liu, Qiang
2017-06-18
Based on 212 re-measured permanent plots in natural Betula platyphylla forests in the Daxing'an Mountains and Xiaoxing'an Mountains and data from 30 meteorological stations, an individual tree growth model based on meteorological factors was constructed. The differences in stand and meteorological factors between the Daxing'an Mountains and Xiaoxing'an Mountains were analyzed, and a diameter increment model including regional effects was developed by the dummy variable approach. The results showed that the minimum growing-season temperature (Tg,min) and mean growing-season precipitation (Pg,m) were the main meteorological factors affecting diameter increment in the two study areas. Tg,min and Pg,m were positively correlated with the diameter increment, but the strength of the influence of Tg,min differed markedly between the two research areas. The adjusted coefficient of determination (Ra²) of the diameter increment model with meteorological factors was 0.56, an 11% increase compared to the one without meteorological factors. It was concluded that meteorological factors could well explain the diameter increment of B. platyphylla. Ra² of the model with regional effects was 0.59, an 18% increase compared to the one without regional effects, and the model effectively solved the problem of incompatible parameters between the two research areas. The validation results showed that the individual tree diameter growth model with regional effects had the best prediction accuracy in estimating the diameter increment of B. platyphylla: the mean error, mean absolute error, mean error percent and mean prediction error percent were 0.0086, 0.4476, 5.8% and 20.0%, respectively. Overall, the dummy-variable model of individual tree diameter increment based on meteorological factors could well describe the diameter increment process of natural B. platyphylla in the Daxing'an Mountains and Xiaoxing'an Mountains.
The US Navy Coastal Surge and Inundation Prediction System (CSIPS): Making Forecasts Easier
2013-02-14
[Slide excerpts, partially recoverable: baseline and sensitivity (wave) simulations report the percent error of peak water level and the MAPE of peak high-water marks at LAWMA (Amerada Pass), Freshwater Canal Locks, Calcasieu Pass, and Sabine Pass, comparing CD formulations to identify the one that produced the best results.]
Bay of Fundy verification of a system for multidate Landsat measurement of suspended sediment
NASA Technical Reports Server (NTRS)
Munday, J. C., Jr.; Afoldi, T. T.; Amos, C. L.
1981-01-01
A system for automated multidate Landsat CCT MSS measurement of suspended sediment concentration (S) has been implemented and verified on nine sets (108 points) of data from the Bay of Fundy, Canada. The system employs 'chromaticity analysis' to provide automatic pixel-by-pixel adjustment of atmospheric variations, permitting reference calibration data from one or several dates to be spatially and temporally extrapolated to other regions and to other dates. For verification, each data set was used in turn as test data against the remainder as a calibration set: the average absolute error was 44 percent of S over the range 1-1000 mg/l. The system can be used to measure chlorophyll (in the absence of atmospheric variations), Secchi disk depth, and turbidity.
Farooqui, Javed Hussain; Koul, Archana; Dutta, Ranjan; Shroff, Noshir Minoo
2016-01-01
AIM To compare the accuracy of two different methods of preoperative marking for toric intraocular lens (IOL) implantation, bubble marker versus pendulum marker, as a means of establishing the reference point for the final alignment of the toric IOL to achieve an outcome as close as possible to emmetropia. METHODS Toric IOLs were implanted in 180 eyes of 110 patients. One group (55 patients) had preoperative marking of both eyes done with a bubble marker (ASICO AE-2791TBL) and the other group (55 patients) with a pendulum marker (Rumex® 3-193). Reference marks were placed at the 3-, 6-, and 9-o'clock positions on the limbus. Slit-lamp photographs were analyzed using Adobe Photoshop (version 7.0). The amount of alignment error (in degrees) induced in each group was measured. RESULTS Mean absolute rotation error in the preoperative marking in the horizontal axis was 2.42°±1.71° in the bubble marker group and 2.83°±2.31° in the pendulum marker group (P=0.501). Sixty percent of the pendulum group and 70% of the bubble group had rotation error ≤3° (P=0.589), and 90% of eyes in the pendulum group and 96.7% in the bubble group had rotation error ≤5° (P=0.612). CONCLUSION Both preoperative marking techniques result in approximately 3° of alignment error. Both marking techniques are simple, predictable, reproducible and easy to perform. PMID:27275425
Yang, Limin; Xian, George Z.; Klaver, Jacqueline M.; Deal, Brian
2003-01-01
We developed a Sub-pixel Imperviousness Change Detection (SICD) approach to detect urban land-cover changes using Landsat and high-resolution imagery. The sub-pixel percent imperviousness was mapped for two dates (09 March 1993 and 11 March 2001) over western Georgia using a regression tree algorithm. The accuracy of the predicted imperviousness was reasonable based on a comparison using independent reference data. The average absolute error between predicted and reference data was 16.4 percent for 1993 and 15.3 percent for 2001. The correlation coefficient (r) was 0.73 for 1993 and 0.78 for 2001, respectively. Areas with a significant increase (greater than 20 percent) in impervious surface from 1993 to 2001 were mostly related to known land-cover/land-use changes that occurred in this area, suggesting that the spatial change of an impervious surface is a useful indicator for identifying spatial extent, intensity, and, potentially, type of urban land-cover/land-use changes. Compared to other pixel-based change-detection methods (band differencing, ratioing, change vector, post-classification), information on changes in sub-pixel percent imperviousness allows users to quantify and interpret urban land-cover/land-use changes based on their own definition. Such information is considered complementary to products generated using other change-detection methods. In addition, the procedure for mapping imperviousness is objective and repeatable, and hence can be used for monitoring urban land-cover/land-use change over a large geographic area. Potential applications and limitations of the products developed through this study in urban environmental studies are also discussed.
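The two accuracy figures quoted above (average absolute error and the correlation coefficient r) can be computed directly; the values below are hypothetical, not the study's data:

```python
import numpy as np

# Accuracy assessment of predicted vs. reference percent imperviousness
# (hypothetical values for five validation sites).
predicted = np.array([12.0, 35.0, 60.0, 80.0, 25.0])
reference = np.array([10.0, 40.0, 55.0, 90.0, 20.0])

# Average absolute error, in units of percent imperviousness
avg_abs_error = np.mean(np.abs(predicted - reference))

# Pearson correlation coefficient between predicted and reference values
r = np.corrcoef(predicted, reference)[0, 1]
```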
Wetherbee, Gregory A.
2016-07-22
Atmospheric wet-deposition monitoring in Rocky Mountain National Park included precipitation depth and aqueous chemical measurements at colocated National Atmospheric Deposition Program/National Trends Network (NADP/NTN) sites CO89 and CO98 (Loch Vale) during water years 2010–14 (study period). The colocated sites were separated by approximately 6.5 meters horizontally and 0.5 meter in elevation, in accordance with NADP siting criteria. Assessment of the 5-year record of colocated data is intended to inform management decisions pertaining to the achievement of nitrogen deposition reduction goals of the Rocky Mountain National Park Nitrogen Deposition Reduction Plan. The data at site CO98 met NADP completeness criteria for the first time in 29 years of operation in 2011 and then again in 2012. During the study period, data at site CO89 met completeness criteria in 2012. Median weekly relative precipitation-depth differences between sites CO89 and CO98 ranged from 0 to 0.25 millimeter during the study period. Median weekly absolute percent differences in sample volume ranged from 5 to 10 percent. Median relative concentration differences for weekly ammonium (NH4+) and nitrate (NO3-) concentrations were near the NADP Central Analytical Laboratory's method detection limits and thus were considered small. Absolute percent differences for water-year 2010–14 precipitation-weighted mean concentrations of NH4+, NO3-, and inorganic nitrogen (Ninorg) ranged from 0.0 to 25.7 percent. Absolute percent differences for water-year 2010–14 NH4+, NO3-, and Ninorg deposition ranged from 2.1 to 18.9 percent, 3.3 to 24.5 percent, and 0.3 to 17.4 percent, respectively.
Evaluation and Applications of the Prediction of Intensity Model Error (PRIME) Model
NASA Astrophysics Data System (ADS)
Bhatia, K. T.; Nolan, D. S.; Demaria, M.; Schumacher, A.
2015-12-01
Forecasters and end users of tropical cyclone (TC) intensity forecasts would greatly benefit from a reliable expectation of model error to counteract the lack of consistency in TC intensity forecast performance. As a first step towards producing error predictions to accompany each TC intensity forecast, Bhatia and Nolan (2013) studied the relationship between synoptic parameters, TC attributes, and forecast errors. In this study, we build on previous results of Bhatia and Nolan (2013) by testing the ability of the Prediction of Intensity Model Error (PRIME) model to forecast the absolute error and bias of four leading intensity models available for guidance in the Atlantic basin. PRIME forecasts are independently evaluated at each 12-hour interval from 12 to 120 hours during the 2007-2014 Atlantic hurricane seasons. The absolute error and bias predictions of PRIME are compared to their respective climatologies to determine their skill. In addition to these results, we will present the performance of the operational version of PRIME run during the 2015 hurricane season. PRIME verification results show that it can reliably anticipate situations where particular models excel, and therefore could lead to a more informed protocol for hurricane evacuations and storm preparations. These positive conclusions suggest that PRIME forecasts also have the potential to lower the error in the original intensity forecasts of each model. As a result, two techniques are proposed to develop a post-processing procedure for a multimodel ensemble based on PRIME. The first approach is to inverse-weight models using PRIME absolute error predictions (higher predicted absolute error corresponds to lower weights). The second multimodel ensemble applies PRIME bias predictions to each model's intensity forecast and the mean of the corrected models is evaluated. 
The forecasts of both of these experimental ensembles are compared to those of the equal-weight ICON ensemble, which currently provides the most reliable forecasts in the Atlantic basin.
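The two proposed post-processing ensembles can be sketched as follows; the forecasts, predicted absolute errors, and predicted biases below are hypothetical values, not PRIME output:

```python
import numpy as np

# Four hypothetical intensity forecasts (knots) from four models
model_forecasts = np.array([95.0, 100.0, 110.0, 105.0])

# Approach 1: weight each model inversely to its predicted absolute error
predicted_abs_error = np.array([10.0, 5.0, 20.0, 8.0])  # hypothetical PRIME output
weights = 1.0 / predicted_abs_error
weights /= weights.sum()                      # normalize weights to sum to 1
ensemble = float(weights @ model_forecasts)   # inverse-error-weighted mean

# Approach 2: subtract each model's predicted bias, then take the plain mean
predicted_bias = np.array([-3.0, 2.0, 6.0, -1.0])  # hypothetical PRIME output
bias_corrected = float(np.mean(model_forecasts - predicted_bias))
```

Models with larger predicted errors contribute less to the first ensemble; the second ensemble instead corrects each member before equal-weight averaging, mirroring the equal-weight ICON baseline.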
Stone, Jennifer; Thompson, Deborah J.; dos-Santos-Silva, Isabel; Scott, Christopher; Tamimi, Rulla M.; Lindstrom, Sara; Kraft, Peter; Hazra, Aditi; Li, Jingmei; Eriksson, Louise; Czene, Kamila; Hall, Per; Jensen, Matt; Cunningham, Julie; Olson, Janet E.; Purrington, Kristen; Couch, Fergus J.; Brown, Judith; Leyland, Jean; Warren, Ruth M. L.; Luben, Robert N.; Khaw, Kay-Tee; Smith, Paula; Wareham, Nicholas J.; Jud, Sebastian M.; Heusinger, Katharina; Beckmann, Matthias W.; Douglas, Julie A.; Shah, Kaanan P.; Chan, Heang-Ping; Helvie, Mark A.; Le Marchand, Loic; Kolonel, Laurence N.; Woolcott, Christy; Maskarinec, Gertraud; Haiman, Christopher; Giles, Graham G.; Baglietto, Laura; Krishnan, Kavitha; Southey, Melissa C.; Apicella, Carmel; Andrulis, Irene L.; Knight, Julia A.; Ursin, Giske; Grenaker Alnaes, Grethe I.; Kristensen, Vessela N.; Borresen-Dale, Anne-Lise; Gram, Inger Torhild; Bolla, Manjeet K.; Wang, Qin; Michailidou, Kyriaki; Dennis, Joe; Simard, Jacques; Paroah, Paul; Dunning, Alison M.; Easton, Douglas F.; Fasching, Peter A.; Pankratz, V. Shane; Hopper, John; Vachon, Celine M.
2015-01-01
Mammographic density measures adjusted for age and body mass index (BMI) are heritable predictors of breast cancer risk but few mammographic density-associated genetic variants have been identified. Using data for 10,727 women from two international consortia, we estimated associations between 77 common breast cancer susceptibility variants and absolute dense area, percent dense area and absolute non-dense area adjusted for study, age and BMI using mixed linear modeling. We found strong support for established associations between rs10995190 (in the region of ZNF365), rs2046210 (ESR1) and rs3817198 (LSP1) and adjusted absolute and percent dense areas (all P < 10⁻⁵). Of 41 recently discovered breast cancer susceptibility variants, associations were found between rs1432679 (EBF1), rs17817449 (MIR1972-2: FTO), rs12710696 (2p24.1), and rs3757318 (ESR1) and adjusted absolute and percent dense areas, respectively. There were associations between rs6001930 (MKL1) and both adjusted absolute dense and non-dense areas, and between rs17356907 (NTN4) and adjusted absolute non-dense area. Trends in all but two associations were consistent with those for breast cancer risk. Results suggested that 18% of breast cancer susceptibility variants were associated with at least one mammographic density measure. Genetic variants at multiple loci were associated with both breast cancer risk and the mammographic density measures. Further understanding of the underlying mechanisms at these loci could help identify etiological pathways implicated in how mammographic density predicts breast cancer risk. PMID:25862352
Kolehmainen, V; Vauhkonen, M; Karjalainen, P A; Kaipio, J P
1997-11-01
In electrical impedance tomography (EIT), difference imaging is often preferred over static imaging. This is because of the many unknowns in the forward modelling which make it difficult to obtain reliable absolute resistivity estimates. However, static imaging and absolute resistivity values are needed in some potential applications of EIT. In this paper we demonstrate by simulation the effects of different error components that are included in the reconstruction of static EIT images. All simulations are carried out in two dimensions with the so-called complete electrode model. Errors that are considered are the modelling error in the boundary shape of an object, errors in the electrode sizes and localizations and errors in the contact impedances under the electrodes. Results using both adjacent and trigonometric current patterns are given.
Pyrometer with tracking balancing
NASA Astrophysics Data System (ADS)
Ponomarev, D. B.; Zakharenko, V. A.; Shkaev, A. G.
2018-04-01
Currently, one of the main metrological challenges in noncontact temperature measurement is emissivity uncertainty. This paper describes a pyrometer that diminishes the effect of emissivity through the use of a measuring scheme with tracking balancing, in which the radiation receiver acts as a null-indicator. The results of a study of the prototype pyrometer's absolute error in measuring the surface temperature of aluminum and nickel samples are presented. The absolute errors calculated from tabulated emissivity values are compared with the errors obtained in experimental measurements by the proposed method. The practical implementation of the proposed technical solution reduced the error due to emissivity uncertainty by a factor of two.
Uncertainty analysis technique for OMEGA Dante measurements
DOE Office of Scientific and Technical Information (OSTI.GOV)
May, M. J.; Widmann, K.; Sorce, C.
2010-10-15
The Dante is an 18 channel x-ray filtered diode array which records the spectrally and temporally resolved radiation flux from various targets (e.g., hohlraums, etc.) at x-ray energies between 50 eV and 10 keV. It is a main diagnostic installed on the OMEGA laser facility at the Laboratory for Laser Energetics, University of Rochester. The absolute flux is determined from the photometric calibration of the x-ray diodes, filters and mirrors, and an unfold algorithm. Understanding the errors on this absolute measurement is critical for understanding hohlraum energetic physics. We present a new method for quantifying the uncertainties on the determined flux using a Monte Carlo parameter variation technique. This technique combines the uncertainties in both the unfold algorithm and the error from the absolute calibration of each channel into a one sigma Gaussian error function. One thousand test voltage sets are created using these error functions and processed by the unfold algorithm to produce individual spectra and fluxes. Statistical methods are applied to the resultant set of fluxes to estimate error bars on the measurements.
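The Monte Carlo parameter-variation idea can be sketched as follows; the `unfold` function below is a stand-in linear operation, not the actual Dante unfold algorithm, and all numbers are illustrative:

```python
import numpy as np

# Perturb each channel's voltage by its one-sigma Gaussian error, push each
# trial through the unfold, and take statistics over the resulting fluxes.
rng = np.random.default_rng(0)
n_channels, n_trials = 18, 1000

voltages = rng.uniform(0.5, 2.0, n_channels)   # nominal channel voltages
sigma = 0.05 * voltages                        # assumed 5% one-sigma errors
response = rng.uniform(0.8, 1.2, n_channels)   # stand-in calibration factors

def unfold(v):
    # Placeholder for the spectral unfold: here, simply a calibrated sum.
    return float(np.sum(v * response))

trials = np.array([unfold(voltages + sigma * rng.standard_normal(n_channels))
                   for _ in range(n_trials)])
flux_mean = trials.mean()
flux_err = trials.std(ddof=1)   # error bar estimate on the unfolded flux
```

The spread of the 1000 trial fluxes directly yields the error bar, with calibration and unfold uncertainties combined in the per-channel sigma.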
Uncertainty Analysis Technique for OMEGA Dante Measurements
DOE Office of Scientific and Technical Information (OSTI.GOV)
May, M J; Widmann, K; Sorce, C
2010-05-07
The Dante is an 18 channel X-ray filtered diode array which records the spectrally and temporally resolved radiation flux from various targets (e.g. hohlraums, etc.) at X-ray energies between 50 eV and 10 keV. It is a main diagnostic installed on the OMEGA laser facility at the Laboratory for Laser Energetics, University of Rochester. The absolute flux is determined from the photometric calibration of the X-ray diodes, filters and mirrors and an unfold algorithm. Understanding the errors on this absolute measurement is critical for understanding hohlraum energetic physics. We present a new method for quantifying the uncertainties on the determined flux using a Monte-Carlo parameter variation technique. This technique combines the uncertainties in both the unfold algorithm and the error from the absolute calibration of each channel into a one sigma Gaussian error function. One thousand test voltage sets are created using these error functions and processed by the unfold algorithm to produce individual spectra and fluxes. Statistical methods are applied to the resultant set of fluxes to estimate error bars on the measurements.
NASA Astrophysics Data System (ADS)
Hu, Qing-Qing; Freier, Christian; Leykauf, Bastian; Schkolnik, Vladimir; Yang, Jun; Krutzik, Markus; Peters, Achim
2017-09-01
Precisely evaluating the systematic error induced by the quadratic Zeeman effect is important for developing atom interferometer gravimeters aiming at an accuracy in the μGal regime (1 μGal = 10⁻⁸ m/s² ≈ 10⁻⁹ g). This paper reports on the experimental investigation of Raman spectroscopy-based magnetic field measurements and the evaluation of the systematic error in the gravimetric atom interferometer (GAIN) due to the quadratic Zeeman effect. We discuss Raman duration and frequency step-size-dependent magnetic field measurement uncertainty, present vector light shift and tensor light shift induced magnetic field measurement offset, and map the absolute magnetic field inside the interferometer chamber of GAIN with an uncertainty of 0.72 nT and a spatial resolution of 12.8 mm. We evaluate the quadratic Zeeman-effect-induced gravity measurement error in GAIN as 2.04 μGal. The methods shown in this paper are important for precisely mapping the absolute magnetic field in vacuum and reducing the quadratic Zeeman-effect-induced systematic error in Raman transition-based precision measurements, such as atomic interferometer gravimeters.
A new accuracy measure based on bounded relative error for time series forecasting
Chen, Chao; Twycross, Jamie; Garibaldi, Jonathan M.
2017-01-01
Many accuracy measures have been proposed in the past for time series forecasting comparisons. However, many of these measures suffer from one or more issues such as poor resistance to outliers and scale dependence. In this paper, while summarising commonly used accuracy measures, a special review is made on the symmetric mean absolute percentage error. Moreover, a new accuracy measure called the Unscaled Mean Bounded Relative Absolute Error (UMBRAE), which combines the best features of various alternative measures, is proposed to address the common issues of existing measures. A comparative evaluation on the proposed and related measures has been made with both synthetic and real-world data. The results indicate that the proposed measure, with user selectable benchmark, performs as well as or better than other measures on selected criteria. Though it has been commonly accepted that there is no single best accuracy measure, we suggest that UMBRAE could be a good choice to evaluate forecasting methods, especially for cases where measures based on geometric mean of relative errors, such as the geometric mean relative absolute error, are preferred. PMID:28339480
A new accuracy measure based on bounded relative error for time series forecasting.
Chen, Chao; Twycross, Jamie; Garibaldi, Jonathan M
2017-01-01
Many accuracy measures have been proposed in the past for time series forecasting comparisons. However, many of these measures suffer from one or more issues such as poor resistance to outliers and scale dependence. In this paper, while summarising commonly used accuracy measures, a special review is made on the symmetric mean absolute percentage error. Moreover, a new accuracy measure called the Unscaled Mean Bounded Relative Absolute Error (UMBRAE), which combines the best features of various alternative measures, is proposed to address the common issues of existing measures. A comparative evaluation on the proposed and related measures has been made with both synthetic and real-world data. The results indicate that the proposed measure, with user selectable benchmark, performs as well as or better than other measures on selected criteria. Though it has been commonly accepted that there is no single best accuracy measure, we suggest that UMBRAE could be a good choice to evaluate forecasting methods, especially for cases where measures based on geometric mean of relative errors, such as the geometric mean relative absolute error, are preferred.
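A sketch of the measure as described: bounded relative absolute errors BRAE_t = |e_t| / (|e_t| + |e*_t|) are averaged and then unscaled as MBRAE / (1 - MBRAE), where e* is the benchmark method's error (this is our reading of the paper's definition; the data below are illustrative):

```python
import numpy as np

def umbrae(actual, forecast, benchmark):
    """Unscaled Mean Bounded Relative Absolute Error. Each bounded relative
    error BRAE_t = |e_t| / (|e_t| + |e*_t|) lies in [0, 1]; their mean is
    then unscaled via MBRAE / (1 - MBRAE). Assumes no period has both
    errors exactly zero."""
    e = np.abs(np.asarray(actual) - np.asarray(forecast))
    e_star = np.abs(np.asarray(actual) - np.asarray(benchmark))
    brae = e / (e + e_star)
    mbrae = brae.mean()
    return mbrae / (1.0 - mbrae)

# UMBRAE < 1 means the forecast beats the benchmark on average
actual = np.array([10.0, 12.0, 11.0, 13.0])
forecast = np.array([10.5, 11.5, 11.2, 12.8])
naive = np.array([9.0, 10.0, 12.0, 11.0])  # e.g. a previous-value benchmark
score = umbrae(actual, forecast, naive)
```

The user-selectable benchmark mentioned in the abstract corresponds to the `benchmark` argument; a naive (previous-value) forecast is the usual default choice.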
Sub-nanometer periodic nonlinearity error in absolute distance interferometers
NASA Astrophysics Data System (ADS)
Yang, Hongxing; Huang, Kaiqi; Hu, Pengcheng; Zhu, Pengfei; Tan, Jiubin; Fan, Zhigang
2015-05-01
Periodic nonlinearity, which can produce errors at the nanometer scale, has become a main problem limiting absolute distance measurement accuracy. In order to eliminate this error, a new integrated interferometer with a non-polarizing beam splitter is developed. This eliminates frequency and/or polarization mixing, and the strict requirement on laser source polarization is greatly relaxed. By combining a retro-reflector and an angle prism, the reference and measuring beams can be spatially separated so that their optical paths do not overlap. The main cause of the periodic nonlinearity error, i.e., frequency and/or polarization mixing and leakage of beam, is thus eliminated. Experimental results indicate that the periodic phase error is kept within 0.0018°.
Demand forecasting of electricity in Indonesia with limited historical data
NASA Astrophysics Data System (ADS)
Dwi Kartikasari, Mujiati; Rohmad Prayogi, Arif
2018-03-01
Demand forecasting of electricity is an important activity for electricity suppliers, giving a picture of future electricity demand. Electricity demand can be predicted using time series models. In this paper, the double moving average model, Holt's exponential smoothing model, and the grey model GM(1,1) are used to predict electricity demand in Indonesia under the condition of limited historical data. The result shows that the grey model GM(1,1) has the smallest values of MAE (mean absolute error), MSE (mean squared error), and MAPE (mean absolute percentage error).
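A minimal GM(1,1) sketch, fitting the grey differential equation on the cumulative series and differencing back to the original scale (the demand series below is hypothetical, not the Indonesian data):

```python
import numpy as np

def gm11_forecast(x0, horizon=1):
    """Minimal grey model GM(1,1): fit dx1/dt + a*x1 = b on the cumulative
    series x1 = cumsum(x0) by least squares, then difference the fitted
    exponential x1 back to the x0 scale."""
    x0 = np.asarray(x0, dtype=float)
    x1 = np.cumsum(x0)
    z1 = 0.5 * (x1[1:] + x1[:-1])                 # background (mean) series
    B = np.column_stack([-z1, np.ones(len(z1))])
    a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]
    k = np.arange(len(x0) + horizon)
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a
    x0_hat = np.concatenate([[x0[0]], np.diff(x1_hat)])
    return x0_hat  # fitted values followed by `horizon` forecasts

# Hypothetical monthly demand series (GWh); MAPE over the fitted range
demand = np.array([120.0, 126.0, 133.0, 139.0, 147.0, 154.0])
fitted = gm11_forecast(demand, horizon=2)
mape = 100 * np.mean(np.abs((demand - fitted[:6]) / demand))
```

GM(1,1) works from very short series like this one, which is why it suits the limited-historical-data setting of the abstract.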
1984-05-01
[Partially recoverable OCR of a report page with an embedded assembly listing:] The control ignored any error of 1/10th degree or less; this was done by setting the error term E and the integral sum PREINT to zero. The garbled assembly fragment compares the signs of two errors (jumping to tdiff if equal, otherwise clearing @preint to zero the integral sum), then fetches the absolute value of OAT-RAT (mov @diff,r1) and compares it against 25 (ci r1,25). The system includes a heating coil and thermostatic control to maintain the air in this path at an elevated temperature, typically around 80 degrees Fahrenheit (80 F).
NASA Astrophysics Data System (ADS)
Pernot, Pascal; Savin, Andreas
2018-06-01
Benchmarking studies in computational chemistry use reference datasets to assess the accuracy of a method through error statistics. The commonly used error statistics, such as the mean signed and mean unsigned errors, do not inform end-users on the expected amplitude of prediction errors attached to these methods. We show that, the distributions of model errors being neither normal nor zero-centered, these error statistics cannot be used to infer prediction error probabilities. To overcome this limitation, we advocate for the use of more informative statistics, based on the empirical cumulative distribution function of unsigned errors, namely, (1) the probability for a new calculation to have an absolute error below a chosen threshold and (2) the maximal amplitude of errors one can expect with a chosen high confidence level. Those statistics are also shown to be well suited for benchmarking and ranking studies. Moreover, the standard error on all benchmarking statistics depends on the size of the reference dataset. Systematic publication of these standard errors would be very helpful to assess the statistical reliability of benchmarking conclusions.
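The two advocated statistics reduce to simple operations on the empirical distribution of unsigned errors; the errors below are synthetic and deliberately non-zero-centered:

```python
import numpy as np

# Synthetic model errors with a systematic offset, so neither normal-centered
# statistics nor the mean signed error would summarize them well.
rng = np.random.default_rng(0)
errors = np.abs(0.5 + rng.standard_normal(10_000))  # unsigned errors

threshold = 1.0
# (1) probability that a new calculation has absolute error below a threshold
p_below = np.mean(errors < threshold)

# (2) maximal error amplitude expected at a chosen high confidence level
q95 = np.quantile(errors, 0.95)
```

Both quantities come straight from the empirical cumulative distribution function of the unsigned errors, so they carry no normality assumption.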
NASA Technical Reports Server (NTRS)
Long, E. R., Jr.
1986-01-01
The effects of specimen preparation on measured values of an acrylic's electromagnetic properties at X-band microwave frequencies, TE(1,0) mode, using an automatic network analyzer have been studied. For 1 percent or less error, a gap between the specimen edge and the 0.901-in. wall of the specimen holder was the most significant parameter; the gap had to be less than 0.002 in. The thickness variation and alignment errors in the direction parallel to the 0.901-in. wall were equally second most significant and had to be less than 1 degree. Errors in the measurement of the thickness were third most significant; they had to be less than 3 percent. The following parameters caused errors of 1 percent or less: ratios of specimen-holder thicknesses of more than 15 percent, gaps between the specimen edge and the 0.401-in. wall less than 0.045 in., position errors less than 15 percent, surface roughness, thickness variation in the direction parallel to the 0.401-in. wall less than 35 percent, and specimen alignment in the direction parallel to the 0.401-in. wall less than 5 degrees.
Simulation of quantity and quality of storm runoff for urban catchments in Fresno, California
Guay, J.R.; Smith, P.E.
1988-01-01
Rainfall-runoff models were developed for a multiple-dwelling residential catchment (2 applications), a single-dwelling residential catchment, and a commercial catchment in Fresno, California, using the U.S. Geological Survey Distributed Routing Rainfall-Runoff Model (DR3M-II). A runoff-quality model also was developed at the commercial catchment using the Survey's Multiple-Event Urban Runoff Quality model (DR3M-qual). The purpose of this study was: (1) to demonstrate the capabilities of the two models for use in designing storm drains, estimating the frequency of storm-runoff loads, and evaluating the effectiveness of street sweeping in an urban drainage catchment; and (2) to determine the simulation accuracies of these models. Simulation errors of the two models were summarized as the median absolute deviation in percent (mad) between measured and simulated values. Calibration and verification mad errors for runoff volumes and peak discharges ranged from 14 to 20%. The estimated annual storm-runoff loads, in pounds/acre of effective impervious area, that could occur once every hundred years at the commercial catchment were 95 for dissolved solids, 1.6 for dissolved nitrite plus nitrate, 0.31 for total recoverable lead, and 120 for suspended sediment. Calibration and verification mad errors for the above constituents ranged from 11 to 54%. (USGS)
A review on Black-Scholes model in pricing warrants in Bursa Malaysia
NASA Astrophysics Data System (ADS)
Gunawan, Nur Izzaty Ilmiah Indra; Ibrahim, Siti Nur Iqmal; Rahim, Norhuda Abdul
2017-01-01
This paper studies the accuracy of the Black-Scholes (BS) model and the dilution-adjusted Black-Scholes (DABS) model in pricing selected warrants traded in the Malaysian market. Mean Absolute Error (MAE) and Mean Absolute Percentage Error (MAPE) are used to compare the two models. Results show that the DABS model is more accurate than the BS model for the selected data.
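As a minimal illustration of the two error measures used to compare the models, with hypothetical prices (not actual Bursa Malaysia data):

```python
import numpy as np

def mae(actual, predicted):
    """Mean Absolute Error: average magnitude of pricing errors."""
    a, p = np.asarray(actual, float), np.asarray(predicted, float)
    return float(np.mean(np.abs(p - a)))

def mape(actual, predicted):
    """Mean Absolute Percentage Error: average error relative to the
    actual price, in percent."""
    a, p = np.asarray(actual, float), np.asarray(predicted, float)
    return float(np.mean(np.abs((p - a) / a)) * 100.0)

# Hypothetical market vs. model warrant prices (RM).
market = [0.50, 0.80, 1.20, 0.30]
model  = [0.55, 0.75, 1.30, 0.28]
print(round(mae(market, model), 4), round(mape(market, model), 4))
```

MAE is in price units, so cheap and expensive warrants contribute unequally; MAPE normalizes each error by the market price, which is why both are reported when comparing pricing models.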
12 CFR 324.152 - Simple risk weight approach (SRWA).
Code of Federal Regulations, 2014 CFR
2014-01-01
... (that is, between zero and -1), then E equals the absolute value of RVC. If RVC is negative and less... the lowest applicable risk weight in this section. (1) Zero percent risk weight equity exposures. An....131(d)(2) is assigned a zero percent risk weight. (2) 20 percent risk weight equity exposures. An...
Region of influence regression for estimating the 50-year flood at ungaged sites
Tasker, Gary D.; Hodge, S.A.; Barks, C.S.
1996-01-01
Five methods of developing regional regression models to estimate flood characteristics at ungaged sites in Arkansas are examined. The methods differ in the manner in which the State is divided into subregions. Each successive method (A to E) is computationally more complex than the previous method. Method A makes no subdivision. Methods B and C define two and four geographic subregions, respectively. Method D uses cluster/discriminant analysis to define subregions on the basis of similarities in watershed characteristics. Method E, the new region of influence method, defines a unique subregion for each ungaged site. Split-sample results indicate that, in terms of root-mean-square error, method E (38 percent error) is best. Methods C and D (42 and 41 percent error) were in a virtual tie for second, and methods B (44 percent error) and A (49 percent error) were fourth and fifth best.
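The region-of-influence idea behind Method E can be sketched as follows. This is a simplified illustration: selecting similar gaged sites by Euclidean distance in standardized watershed-characteristic space is an assumption for demonstration, not the study's exact procedure, and the characteristics are hypothetical.

```python
import numpy as np

def region_of_influence(ungaged_x, gaged_x, n_sites):
    """Indices of the n gaged sites nearest to the ungaged site in
    standardized watershed-characteristic space (Euclidean distance).
    These nearest sites form the unique 'subregion' for that site."""
    X = np.asarray(gaged_x, float)
    mu, sd = X.mean(axis=0), X.std(axis=0)
    Xs = (X - mu) / sd                          # standardize each characteristic
    xs = (np.asarray(ungaged_x, float) - mu) / sd
    d = np.sqrt(((Xs - xs) ** 2).sum(axis=1))   # distance to each gaged site
    return np.argsort(d)[:n_sites]

# Hypothetical characteristics: [log drainage area, mean annual precip (in.)].
gaged = np.array([[1.0, 40.0], [1.2, 42.0], [3.0, 55.0], [2.9, 53.0], [0.9, 39.0]])
idx = region_of_influence([2.8, 54.0], gaged, n_sites=2)
print(sorted(idx.tolist()))  # → [2, 3]
```

A 50-year-flood regression would then be fit using only the selected sites, so each ungaged site gets a regression tailored to hydrologically similar watersheds.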
Huang, David; Tang, Maolong; Wang, Li; Zhang, Xinbo; Armour, Rebecca L.; Gattey, Devin M.; Lombardi, Lorinna H.; Koch, Douglas D.
2013-01-01
Purpose: To use optical coherence tomography (OCT) to measure corneal power and improve the selection of intraocular lens (IOL) power in cataract surgeries after laser vision correction. Methods: Patients with previous myopic laser vision corrections were enrolled in this prospective study from two eye centers. Corneal thickness and power were measured by Fourier-domain OCT. Axial length, anterior chamber depth, and automated keratometry were measured by a partial coherence interferometer. An OCT-based IOL formula was developed. The mean absolute error of the OCT-based formula in predicting postoperative refraction was compared to two regression-based IOL formulas for eyes with previous laser vision correction. Results: Forty-six eyes of 46 patients all had uncomplicated cataract surgery with monofocal IOL implantation. The mean arithmetic prediction error of postoperative refraction was 0.05 ± 0.65 diopter (D) for the OCT formula, 0.14 ± 0.83 D for the Haigis-L formula, and 0.24 ± 0.82 D for the no-history Shammas-PL formula. The mean absolute error was 0.50 D for OCT compared to a mean absolute error of 0.67 D for Haigis-L and 0.67 D for Shammas-PL. The adjusted mean absolute error (average prediction error removed) was 0.49 D for OCT, 0.65 D for Haigis-L (P=.031), and 0.62 D for Shammas-PL (P=.044). For OCT, 61% of the eyes were within 0.5 D of prediction error, whereas 46% were within 0.5 D for both Haigis-L and Shammas-PL (P=.034). Conclusions: The predictive accuracy of OCT-based IOL power calculation was better than that of the Haigis-L and Shammas-PL formulas in eyes after laser vision correction. PMID:24167323
Code of Federal Regulations, 2014 CFR
2014-07-01
... of a zero-percent certificate of indebtedness that is made in error? 363.138 Section 363.138 Money... Zero-Percent Certificate of Indebtedness General § 363.138 Is Treasury liable for the purchase of a zero-percent certificate of indebtedness that is made in error? We are not liable for any deposits of...
Standardising analysis of carbon monoxide rebreathing for application in anti-doping.
Alexander, Anthony C; Garvican, Laura A; Burge, Caroline M; Clark, Sally A; Plowman, James S; Gore, Christopher J
2011-03-01
Determination of total haemoglobin mass (Hbmass) via carbon monoxide (CO) depends critically on repeatable measurement of percent carboxyhaemoglobin (%HbCO) in blood with a hemoximeter. The main aim of this study was to determine, for an OSM3 hemoximeter, the number of replicate measures as well as the theoretical change in percent carboxyhaemoglobin required to yield a random error of analysis (Analyser Error) of ≤1%. Before and after inhalation of CO, nine participants provided a total of 576 blood samples that were each analysed five times for percent carboxyhaemoglobin on one of three OSM3 hemoximeters, with approximately one-third of blood samples analysed on each OSM3. The Analyser Error was calculated for the first two (duplicate), first three (triplicate) and first four (quadruplicate) measures on each OSM3, as well as for all five measures (quintuplicates). Two methods of CO-rebreathing, a 2-min and a 10-min procedure, were evaluated for Analyser Error. For duplicate analyses of blood, the Analyser Error for the 2-min method was 3.7, 4.0 and 5.0% for the three OSM3s when the percent carboxyhaemoglobin increased by two above resting values. With quintuplicate analyses of blood, the corresponding errors reduced to 0.8, 0.9 and 1.0% for the 2-min method when the percent carboxyhaemoglobin increased by 5.5 above resting values. In summary, to minimise the Analyser Error to ∼≤1% on an OSM3 hemoximeter, researchers should make ≥5 replicates of percent carboxyhaemoglobin and the volume of CO administered should be sufficient to increase percent carboxyhaemoglobin by ≥5.5 above baseline levels. Crown Copyright © 2010. Published by Elsevier Ltd. All rights reserved.
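The reason quintuplicate analyses outperform duplicates follows the standard error-of-the-mean relation: under the usual assumption of independent measurement noise, averaging n replicates shrinks the random error by a factor of sqrt(n). A minimal sketch with a hypothetical single-analysis error (not the study's measured values):

```python
import math

def error_of_mean(single_measure_cv_percent, n_replicates):
    """Random error (as a coefficient of variation, %) of the mean of n
    replicate analyses, assuming independent noise: SE = SD / sqrt(n)."""
    return single_measure_cv_percent / math.sqrt(n_replicates)

# Hypothetical single-analysis error of 2.2% for %HbCO on a hemoximeter:
print(round(error_of_mean(2.2, 2), 2))  # duplicates
print(round(error_of_mean(2.2, 5), 2))  # quintuplicates
```

The same logic explains the second recommendation: a larger CO dose raises the %HbCO signal relative to a fixed analyser noise floor, reducing the relative error further.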
Noise-Enhanced Eversion Force Sense in Ankles With or Without Functional Instability.
Ross, Scott E; Linens, Shelley W; Wright, Cynthia J; Arnold, Brent L
2015-08-01
Force sense impairments are associated with functional ankle instability. Stochastic resonance stimulation (SRS) may have implications for correcting these force sense deficits. To determine if SRS improved force sense. Case-control study. Research laboratory. Twelve people with functional ankle instability (age = 23 ± 3 years, height = 174 ± 8 cm, mass = 69 ± 10 kg) and 12 people with stable ankles (age = 22 ± 2 years, height = 170 ± 7 cm, mass = 64 ± 10 kg). The eversion force sense protocol required participants to reproduce a targeted muscle tension (10% of maximum voluntary isometric contraction). This protocol was assessed under SRSon and SRSoff (control) conditions. During SRSon, random subsensory mechanical noise was applied to the lower leg at a customized optimal intensity for each participant. Constant error, absolute error, and variable error measures quantified accuracy, overall performance, and consistency of force reproduction, respectively. With SRS, we observed main effects for force sense absolute error (SRSoff = 1.01 ± 0.67 N, SRSon = 0.69 ± 0.42 N) and variable error (SRSoff = 1.11 ± 0.64 N, SRSon = 0.78 ± 0.56 N) (P < .05). No other main effects or treatment-by-group interactions were found (P > .05). Although SRS reduced the overall magnitude (absolute error) and variability (variable error) of force sense errors, it had no effect on directionality (constant error). Clinically, SRS may enhance the ability to sense and reproduce muscle tension, which could have treatment implications for ankle stability.
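The three error measures can be computed from repeated force-reproduction trials as follows; the trial values below are hypothetical, used only to show how the metrics separate bias, magnitude, and consistency:

```python
import numpy as np

def force_sense_errors(target, reproduced):
    """Constant, absolute, and variable error of force-reproduction trials."""
    err = np.asarray(reproduced, float) - float(target)
    ce = float(err.mean())           # constant error: directional bias
    ae = float(np.abs(err).mean())   # absolute error: overall magnitude
    ve = float(err.std())            # variable error: trial-to-trial spread
    return ce, ae, ve

# Hypothetical eversion trials targeting 10.0 N:
ce, ae, ve = force_sense_errors(10.0, [10.5, 9.2, 10.8, 9.9])
print(round(ce, 3), round(ae, 3), round(ve, 3))
```

Note how over- and undershoots cancel in the constant error but not in the absolute error, which is exactly why SRS can reduce absolute and variable error while leaving constant error unchanged.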
NASA Astrophysics Data System (ADS)
Jung, Jae Hong; Jung, Joo-Young; Cho, Kwang Hwan; Ryu, Mi Ryeong; Bae, Sun Hyun; Moon, Seong Kwon; Kim, Yong Ho; Choe, Bo-Young; Suh, Tae Suk
2017-02-01
The purpose of this study was to analyze the glottis rotational error (GRE) in patients with glottic cancer immobilized with a thermoplastic mask during intensity-modulated radiation therapy (IMRT). We selected 20 patients with glottic cancer who had received IMRT using tomotherapy. Both kilovoltage CT (planning kVCT) and daily megavoltage CT (MVCT) images were used for evaluating the error. Six anatomical landmarks in the images were defined to evaluate a correlation between the absolute GRE (°) and the length of contact between the mask and the patient's underlying skin (mask, mm). We also statistically analyzed the results by using Pearson's correlation coefficient and a linear regression analysis (P < 0.05). The mask contact length and the absolute GRE were verified to have a statistical correlation (P < 0.01). We found statistical significance for each parameter in the linear regression analysis (mask versus absolute roll: P = 0.004 [P < 0.05]; mask versus 3D error: P = 0.000 [P < 0.05]). The range of the 3D errors was from 1.2% to 39.7% between the maximum-contact and no-contact cases in this study. A thermoplastic mask with a tight, increased contact area may contribute to the uncertainty of reproducibility, seen as variation of the absolute GRE. Thus, we suggest that a modified mask, such as one that covers only the glottis area, can significantly reduce patients' setup errors during treatment.
NASA Astrophysics Data System (ADS)
Yang, Juqing; Wang, Dayong; Fan, Baixing; Dong, Dengfeng; Zhou, Weihu
2017-03-01
In-situ intelligent manufacturing for large-volume equipment requires industrial robots with absolute high-accuracy positioning and orientation steering control. Conventional robots mainly employ an offline calibration technology to identify and compensate key robotic parameters. However, the dynamic and static parameters of a robot change nonlinearly. It is not possible to acquire a robot's actual parameters and control the absolute pose of the robot with a high accuracy within a large workspace by offline calibration in real-time. This study proposes a real-time online absolute pose steering control method for an industrial robot based on six degrees of freedom laser tracking measurement, which adopts comprehensive compensation and correction of differential movement variables. First, the pose steering control system and robot kinematics error model are constructed, and then the pose error compensation mechanism and algorithm are introduced in detail. By accurately achieving the position and orientation of the robot end-tool, mapping the computed Jacobian matrix of the joint variable and correcting the joint variable, the real-time online absolute pose compensation for an industrial robot is accurately implemented in simulations and experimental tests. The average positioning error is 0.048 mm and orientation accuracy is better than 0.01 deg. The results demonstrate that the proposed method is feasible, and the online absolute accuracy of a robot is sufficiently enhanced.
NASA Astrophysics Data System (ADS)
Guha, Daipayan; Jakubovic, Raphael; Gupta, Shaurya; Yang, Victor X. D.
2017-02-01
Computer-assisted navigation (CAN) may guide spinal surgeries, reliably reducing screw breach rates. Definitions of screw breach, if reported, vary widely across studies. Absolute quantitative error is theoretically a more precise and generalizable metric of navigation accuracy, but has been computed variably and reported in fewer than 25% of clinical studies of CAN-guided pedicle screw accuracy. We reviewed a prospectively-collected series of 209 pedicle screws placed with CAN guidance to characterize the correlation between clinical pedicle screw accuracy, based on postoperative imaging, and absolute quantitative navigation accuracy. We found that acceptable screw accuracy was achieved for significantly fewer screws based on 2mm grade vs. Heary grade, particularly in the lumbar spine. Inter-rater agreement was good for the Heary classification and moderate for the 2mm grade, significantly greater among radiologists than surgeon raters. Mean absolute translational/angular accuracies were 1.75mm/3.13° and 1.20mm/3.64° in the axial and sagittal planes, respectively. There was no correlation between clinical and absolute navigation accuracy, in part because surgeons appear to compensate for perceived translational navigation error by adjusting screw medialization angle. Future studies of navigation accuracy should therefore report absolute translational and angular errors. Clinical screw grades based on post-operative imaging, if reported, may be more reliable if performed in multiple by radiologist raters.
NASA Astrophysics Data System (ADS)
Mitra, Ashis; Majumdar, Prabal Kumar; Bannerjee, Debamalya
2013-03-01
This paper presents a comparative analysis of two modeling methodologies for predicting the air permeability of plain woven handloom cotton fabrics. Four basic fabric constructional parameters, namely ends per inch, picks per inch, warp count, and weft count, were used as inputs for artificial neural network (ANN) and regression models. Of the four regression models tried, the interaction model showed very good prediction performance, with a mean absolute error of only 2.017%. However, the ANN models demonstrated superiority over the regression models in terms of both correlation coefficient and mean absolute error. The ANN model with 10 nodes in its single hidden layer showed very good correlation coefficients of 0.982 and 0.929 and mean absolute errors of only 0.923% and 2.043% for training and testing data, respectively.
The PMA Catalogue: 420 million positions and absolute proper motions
NASA Astrophysics Data System (ADS)
Akhmetov, V. S.; Fedorov, P. N.; Velichko, A. B.; Shulga, V. M.
2017-07-01
We present a catalogue that contains about 420 million absolute proper motions of stars. It was derived from the combination of positions from Gaia DR1 and 2MASS, with a mean difference of epochs of about 15 yr. Most of the systematic zonal errors inherent in the 2MASS Catalogue were eliminated before deriving the absolute proper motions. The absolute calibration procedure (zero-pointing of the proper motions) was carried out using about 1.6 million positions of extragalactic sources. The mean formal error of the absolute calibration is less than 0.35 mas yr-1. The derived proper motions cover the whole celestial sphere without gaps for a range of stellar magnitudes from 8 to 21 mag. In the sky areas where the extragalactic sources are invisible (the avoidance zone), a dedicated procedure was used that transforms the relative proper motions into absolute ones. The rms error of proper motions depends on stellar magnitude and ranges from 2-5 mas yr-1 for stars with 10 mag < G < 17 mag to 5-10 mas yr-1 for faint ones. The present catalogue contains the Gaia DR1 positions of stars for the J2015 epoch. The system of the PMA proper motions does not depend on the systematic errors of the 2MASS positions, and in the range from 14 to 21 mag represents an independent realization of a quasi-inertial reference frame in the optical and near-infrared wavelength range. The Catalogue also contains stellar magnitudes taken from the Gaia DR1 and 2MASS catalogues. A comparison of the PMA proper motions of stars with similar data from certain recent catalogues has been undertaken.
UNDERSTANDING OR NURSES' REACTIONS TO ERRORS AND USING THIS UNDERSTANDING TO IMPROVE PATIENT SAFETY.
Taifoori, Ladan; Valiee, Sina
2015-09-01
The operating room can be home to many different types of nursing errors due to the invasiveness of OR procedures. Nurses' reactions toward errors can be a key factor in patient safety. This article is based on a study, conducted at Kurdistan University of Medical Sciences in Sanandaj, Iran in 2014, with the aim of investigating nurses' reactions toward nursing errors and the various contributing and resulting factors. The goal of the study was to determine how OR nurses reacted to nursing errors, with the intention of using this information to improve patient safety. The research was conducted as a cross-sectional descriptive study. The participants were all nurses employed in the operating rooms of the teaching hospitals of Kurdistan University of Medical Sciences, selected by a consensus method (170 persons). The information was gathered through questionnaires that focused on demographic information, error definition, reasons for error occurrence, and emotional reactions toward the errors. 153 questionnaires were completed and analyzed with SPSS software version 16.0. "Not following sterile technique" (82.4 percent) was the most reported nursing error, "tiredness" (92.8 percent) was the most reported reason for error occurrence, and "being upset at having harmed the patient" (85.6 percent) was the most reported emotional reaction after error occurrence, with "decision making for a better approach to tasks the next time" (97.7 percent) as the most common goal and "paying more attention to details" (98 percent) as the most reported planned strategy for future improved outcomes. While healthcare facilities are focused on planning for the prevention and elimination of errors, it was shown that nurses can also benefit from support after an error occurs. Their reactions and coping strategies need guidance and, with both individual and organizational support, can be a factor in improving patient safety.
Li, Aihua; Zhao, Wenguang; Mitchell, Jessica J; Glenn, Nancy F.; Germino, Matthew; Sankey, Joel B.; Allen, Richard G
2017-01-01
The aerodynamic roughness length (Z0m) plays an important role in the flux exchange between the land surface and the atmosphere. In this study, airborne lidar (ALS), terrestrial lidar (TLS), and imaging spectroscopy data were integrated to develop and test two approaches to estimating Z0m over a shrub-dominated dryland study area in south-central Idaho, USA. The sensitivity of the two parameterization methods for estimating Z0m was analyzed. The comparison of eddy-covariance-derived Z0m and remote-sensing-derived Z0m showed that the accuracy of the estimated Z0m depends heavily on the estimation model and on the representation of shrub (e.g., Artemisia tridentata subsp. wyomingensis) height in the models. The geometrical method (RA1994) led to 9 percent (~0.5 cm) and 25 percent (~1.1 cm) errors at site 1 and site 2, respectively, performing better than the height-variability-based method (MR1994), which had bias errors of 20 percent and 48 percent at site 1 and site 2, respectively. The RA1994 model resulted in a larger range of Z0m than the MR1994 method. We also found that the mean, median, and 75th percentile (H75) heights from ALS provide the best Z0m estimates in the MR1994 model, while the mean, median, MLD (median absolute deviation from median height), and AAD (mean absolute deviation from mean height) heights from ALS provide the best Z0m estimates in the RA1994 model. In addition, the fractional cover of shrub and grass, distinguished with ALS and imaging spectroscopy data, provided the opportunity to estimate the frontal area index at the pixel level to assess the influence of grass and shrub on Z0m estimates in the RA1994 method. Results indicate that grass had little effect on Z0m in the RA1994 method. The Z0m estimates were tightly coupled with vegetation height and its local variance for the shrubs.
Overall, the results demonstrate that the use of height and fractional cover from remote sensing data is promising for estimating Z0m, and thus for refining land surface models at regional scales in semiarid shrublands.
Error Analysis of Wind Measurements for the University of Illinois Sodium Doppler Temperature System
NASA Technical Reports Server (NTRS)
Pfenninger, W. Matthew; Papen, George C.
1992-01-01
Four-frequency lidar measurements of temperature and wind velocity require accurate frequency tuning to an absolute reference and long-term frequency stability. We quantify frequency tuning errors for the Illinois sodium system, which uses a sodium vapor cell to measure absolute frequencies and a reference interferometer to measure relative frequencies. To determine laser tuning errors, we monitor the vapor cell and interferometer during lidar data acquisition and analyze the two signals for variations as functions of time. Both the sodium cell and the interferometer are the same as those used to frequency tune the laser. By quantifying the frequency variations of the laser during data acquisition, an error analysis of the temperature and wind measurements can be calculated. These error bounds determine the confidence in the calculated temperatures and wind velocities.
NASA Astrophysics Data System (ADS)
Prentice, Boone M.; Chumbley, Chad W.; Caprioli, Richard M.
2017-01-01
Matrix-assisted laser desorption/ionization imaging mass spectrometry (MALDI IMS) allows for the visualization of molecular distributions within tissue sections. While providing excellent molecular specificity and spatial information, absolute quantification by MALDI IMS remains challenging. Especially in the low molecular weight region of the spectrum, analysis is complicated by matrix interferences and ionization suppression. Though tandem mass spectrometry (MS/MS) can be used to ensure chemical specificity and improve sensitivity by eliminating chemical noise, typical MALDI MS/MS modalities only scan for a single MS/MS event per laser shot. Herein, we describe TOF/TOF instrumentation that enables multiple fragmentation events to be performed in a single laser shot, allowing the intensity of the analyte to be referenced to the intensity of the internal standard in each laser shot while maintaining the benefits of MS/MS. This approach is illustrated by the quantitative analyses of rifampicin (RIF), an antibiotic used to treat tuberculosis, in pooled human plasma using rifapentine (RPT) as an internal standard. The results show greater than 4-fold improvements in relative standard deviation as well as improved coefficients of determination (R2) and accuracy (>93% quality controls, <9% relative errors). This technology is used as an imaging modality to measure absolute RIF concentrations in liver tissue from an animal dosed in vivo. Each microspot in the quantitative image measures the local RIF concentration in the tissue section, providing absolute pixel-to-pixel quantification from different tissue microenvironments. The average concentration determined by IMS is in agreement with the concentration determined by HPLC-MS/MS, showing a percent difference of 10.6%.
Trommer, J.T.; Loper, J.E.; Hammett, K.M.
1996-01-01
Several traditional techniques have been used for estimating stormwater runoff from ungaged watersheds. Applying these techniques to watersheds in west-central Florida requires that some of the empirical relationships be extrapolated beyond tested ranges. As a result, there is uncertainty as to the accuracy of these estimates. Sixty-six storms occurring in 15 west-central Florida watersheds were initially modeled using the Rational Method, the U.S. Geological Survey Regional Regression Equations, the Natural Resources Conservation Service TR-20 model, the U.S. Army Corps of Engineers Hydrologic Engineering Center-1 model, and the Environmental Protection Agency Storm Water Management Model. The techniques were applied according to the guidelines specified in the user manuals or standard engineering textbooks as though no field data were available and the selection of input parameters was not influenced by observed data. Computed estimates were compared with observed runoff to evaluate the accuracy of the techniques. One watershed was eliminated from further evaluation when it was determined that the area contributing runoff to the stream varies with the amount and intensity of rainfall. Therefore, further evaluation and modification of the input parameters were made for only 62 storms in 14 watersheds. Runoff ranged from 1.4 to 99.3 percent of rainfall. The average runoff for all watersheds included in this study was about 36 percent of rainfall. The average runoff for the urban, natural, and mixed land-use watersheds was about 41, 27, and 29 percent, respectively. Initial estimates of peak discharge using the Rational Method produced average watershed errors that ranged from an underestimation of 50.4 percent to an overestimation of 767 percent. The coefficient of runoff ranged from 0.20 to 0.60. Calibration of the technique produced average errors that ranged from an underestimation of 3.3 percent to an overestimation of 1.5 percent.
The average calibrated coefficient of runoff for each watershed ranged from 0.02 to 0.72. The average values of the coefficient of runoff necessary to calibrate the urban, natural, and mixed land-use watersheds were 0.39, 0.16, and 0.08, respectively. The U.S. Geological Survey regional regression equations for determining peak discharge produced errors that ranged from an underestimation of 87.3 percent to an overestimation of 1,140 percent. The regression equations for determining runoff volume produced errors that ranged from an underestimation of 95.6 percent to an overestimation of 324 percent. Regression equations developed from data used for this study produced errors that ranged between an underestimation of 82.8 percent and an overestimation of 328 percent for peak discharge, and from an underestimation of 71.2 percent to an overestimation of 241 percent for runoff volume. Use of the equations developed for west-central Florida streams produced average errors for each type of watershed that were lower than errors associated with use of the U.S. Geological Survey equations. Initial estimates of peak discharges and runoff volumes using the Natural Resources Conservation Service TR-20 model produced average errors of 44.6 and 42.7 percent, respectively, for all the watersheds. Curve numbers and times of concentration were adjusted to match estimated and observed peak discharges and runoff volumes. The average change in the curve number for all the watersheds was a decrease of 2.8 percent. The average change in the time of concentration was an increase of 59.2 percent. The shape of the input dimensionless unit hydrograph also had to be adjusted to match the shape and peak time of the estimated and observed flood hydrographs. Peak rate factors for the modified input dimensionless unit hydrographs ranged from 162 to 454.
The mean errors for peak discharges and runoff volumes were reduced to 18.9 and 19.5 percent, respectively, using the average calibrated input parameters for each watershed.
Predictability of the Arctic sea ice edge
NASA Astrophysics Data System (ADS)
Goessling, H. F.; Tietsche, S.; Day, J. J.; Hawkins, E.; Jung, T.
2016-02-01
Skillful sea ice forecasts from days to years ahead are becoming increasingly important for the operation and planning of human activities in the Arctic. Here we analyze the potential predictability of the Arctic sea ice edge in six climate models. We introduce the integrated ice-edge error (IIEE), a user-relevant verification metric defined as the area where the forecast and the "truth" disagree on the ice concentration being above or below 15%. The IIEE lends itself to decomposition into an absolute extent error, corresponding to the common sea ice extent error, and a misplacement error. We find that the often-neglected misplacement error makes up more than half of the climatological IIEE. In idealized forecast ensembles initialized on 1 July, the IIEE grows faster than the absolute extent error. This means that the Arctic sea ice edge is less predictable than sea ice extent, particularly in September, with implications for the potential skill of end-user relevant forecasts.
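The IIEE decomposition described above can be sketched on a toy grid; the 1-D concentration fields below are illustrative only, and cell areas are in arbitrary units:

```python
import numpy as np

def iiee_decomposition(forecast_conc, truth_conc, cell_area=1.0, thresh=0.15):
    """Integrated ice-edge error (IIEE) and its decomposition into an
    absolute extent error (AEE) and a misplacement error (ME).
    IIEE = area where forecast and truth disagree on conc >= 15%."""
    f = np.asarray(forecast_conc) >= thresh
    t = np.asarray(truth_conc) >= thresh
    overestimate = float(np.sum(f & ~t) * cell_area)   # ice forecast, none observed
    underestimate = float(np.sum(~f & t) * cell_area)  # ice observed, none forecast
    iiee = overestimate + underestimate
    aee = abs(overestimate - underestimate)  # equals |extent_forecast - extent_truth|
    me = iiee - aee                          # disagreement that cancels in total extent
    return iiee, aee, me

# Toy 1-D "grid" of ice concentrations:
forecast = [0.9, 0.8, 0.2, 0.0, 0.0, 0.5]
truth    = [0.9, 0.1, 0.1, 0.3, 0.0, 0.0]
print(iiee_decomposition(forecast, truth))  # → (4.0, 2.0, 2.0)
```

In this toy case, half of the IIEE is misplacement: the forecast has the right total amount of extra ice in some places but misses ice elsewhere, an error that the plain extent metric cannot see.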
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ellefson, S; Department of Human Oncology, University of Wisconsin, Madison, WI; Culberson, W
Purpose: Discrepancies in absolute dose values have been detected between the ViewRay treatment planning system and ArcCHECK readings when performing delivery quality assurance on the ViewRay system with the ArcCHECK-MR diode array (SunNuclear Corporation). In this work, we investigate whether these discrepancies are due to errors in the ViewRay planning and/or delivery system or due to errors in the ArcCHECK's readings. Methods: Gamma analysis was performed on 19 ViewRay patient plans using the ArcCHECK. Frequency analysis on the dose differences was performed. To investigate whether discrepancies were due to measurement or delivery error, 10 diodes in low-gradient dose regions were chosen to compare with ion chamber measurements in a PMMA phantom with the same size and shape as the ArcCHECK, provided by SunNuclear. The diodes chosen all had significant discrepancies in absolute dose values compared to the ViewRay TPS. Absolute doses to PMMA were compared between the ViewRay TPS calculations, ArcCHECK measurements, and measurements in the PMMA phantom. Results: Three of the 19 patient plans had 3%/3mm gamma passing rates less than 95%, and ten of the 19 plans had 2%/2mm passing rates less than 95%. Frequency analysis implied a non-random error process. Out of the 10 diode locations measured, ion chamber measurements were all within 2.2% error relative to the TPS and had a mean error of 1.2%. ArcCHECK measurements ranged from 4.5% to over 15% error relative to the TPS and had a mean error of 8.0%. Conclusion: The ArcCHECK performs well for quality assurance on the ViewRay under most circumstances. However, under certain conditions the absolute dose readings are significantly higher compared to the planned doses. As the ion chamber measurements consistently agree with the TPS, it can be concluded that the discrepancies are due to ArcCHECK measurement error and not TPS or delivery system error.
This work was funded by the Bhudatt Paliwal Professorship and the University of Wisconsin Medical Radiation Research Center.
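The 3%/3mm gamma analysis reported above can be illustrated with a minimal 1-D global-gamma sketch. This is a generic textbook formulation with hypothetical dose profiles, not the vendor's implementation:

```python
import numpy as np

def gamma_pass_rate(x_eval, d_eval, x_ref, d_ref, dd=0.03, dta=3.0):
    """1-D global gamma analysis: for each evaluated point, gamma is the
    minimum over reference points of the combined dose-difference /
    distance-to-agreement metric; a point passes when gamma <= 1."""
    x_eval, d_eval = np.asarray(x_eval, float), np.asarray(d_eval, float)
    x_ref, d_ref = np.asarray(x_ref, float), np.asarray(d_ref, float)
    norm = d_ref.max() * dd                 # global dose criterion (e.g. 3% of max)
    gammas = []
    for xe, de in zip(x_eval, d_eval):
        g = np.sqrt(((xe - x_ref) / dta) ** 2 + ((de - d_ref) / norm) ** 2)
        gammas.append(g.min())
    return float(100.0 * np.mean(np.array(gammas) <= 1.0))

# Hypothetical measured vs. planned dose profiles (positions in mm, dose in Gy):
x = np.arange(0, 50, 5.0)
planned = 2.0 * np.exp(-((x - 25.0) / 15.0) ** 2)
measured = planned * 1.02                   # uniform 2% overdose, within 3%/3mm
print(gamma_pass_rate(x, measured, x, planned))  # → 100.0
```

A uniform 2% overdose passes the 3%/3mm criterion everywhere; systematic errors of 4.5-15%, as seen on some diodes here, would drive the pass rate down.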
A method for estimating mean and low flows of streams in national forests of Montana
Parrett, Charles; Hull, J.A.
1985-01-01
Equations were developed for estimating mean annual discharge, 80-percent exceedance discharge, and 95-percent exceedance discharge for streams on national forest lands in Montana. The equations for mean annual discharge used active-channel width, drainage area, and mean annual precipitation as independent variables, with active-channel width being most significant. The equations for 80-percent exceedance discharge and 95-percent exceedance discharge used only active-channel width as an independent variable. The standard error of estimate for the best equation for estimating mean annual discharge was 27 percent. The standard errors of estimate for the equations were 67 percent for estimating 80-percent exceedance discharge and 75 percent for estimating 95-percent exceedance discharge. (USGS)
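Regional regressions of this kind are typically fit as power laws in log space (Q = a * W**b). A minimal sketch with synthetic data generated to follow a power law exactly, not the Montana dataset:

```python
import numpy as np

def fit_power_law(width, discharge):
    """Fit Q = a * W**b by ordinary least squares in log10 space, the usual
    form of regional streamflow regressions on active-channel width."""
    logw, logq = np.log10(width), np.log10(discharge)
    b, loga = np.polyfit(logw, logq, 1)   # slope = exponent, intercept = log10(a)
    return 10.0 ** loga, b

# Synthetic channel widths (ft) and discharges (cfs) from Q = 0.5 * W**1.5:
w = np.array([5.0, 10.0, 20.0, 40.0])
q = 0.5 * w ** 1.5
a, b = fit_power_law(w, q)
print(round(a, 3), round(b, 3))  # → 0.5 1.5
```

Fitting in log space is also what makes the reported standard errors naturally expressible in percent: a residual in log10 units corresponds to a multiplicative (percentage) error in discharge.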
Wu, S.-S.; Wang, L.; Qiu, X.
2008-01-01
This article presents a deterministic model for sub-block-level population estimation based on the total building volumes derived from geographic information system (GIS) building data and three census block-level housing statistics. To assess the model, we generated artificial blocks by aggregating census block areas and calculating the respective housing statistics. We then applied the model to estimate populations for sub-artificial-block areas and assessed the estimates with census populations of the areas. Our analyses indicate that the average percent error of population estimation for sub-artificial-block areas is comparable to those for sub-census-block areas of the same size relative to associated blocks. The smaller the sub-block-level areas, the higher the population estimation errors. For example, the average percent error for residential areas is approximately 0.11 percent for 100 percent block areas and 35 percent for 5 percent block areas.
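A simplified proportional-allocation sketch of the volume-based idea follows; the actual model also uses three block-level housing statistics, which are omitted here, and all numbers are hypothetical:

```python
def estimate_subblock_population(block_pop, building_volumes, target_ids):
    """Allocate a census block's population to sub-block areas in proportion
    to each area's share of the block's total residential building volume."""
    total = sum(building_volumes.values())
    return sum(block_pop * building_volumes[i] / total for i in target_ids)

# Hypothetical block of 200 people split over four building footprints (m^3):
volumes = {"a": 3000.0, "b": 1000.0, "c": 4000.0, "d": 2000.0}
print(estimate_subblock_population(200.0, volumes, ["a", "b"]))  # → 80.0
```

The finding that smaller sub-block areas have larger percent errors is intuitive under this scheme: the smaller the target area, the less the volume share averages out local variation in persons per unit volume.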
Systematic errors of EIT systems determined by easily-scalable resistive phantoms.
Hahn, G; Just, A; Dittmar, J; Hellige, G
2008-06-01
We present a simple method to determine systematic errors that will occur in the measurements by EIT systems. The approach is based on very simple scalable resistive phantoms for EIT systems using a 16 electrode adjacent drive pattern. The output voltage of the phantoms is constant for all combinations of current injection and voltage measurements and the trans-impedance of each phantom is determined by only one component. It can be chosen independently from the input and output impedance, which can be set in order to simulate measurements on the human thorax. Additional serial adapters allow investigation of the influence of the contact impedance at the electrodes on resulting errors. Since real errors depend on the dynamic properties of an EIT system, the following parameters are accessible: crosstalk, the absolute error of each driving/sensing channel and the signal to noise ratio in each channel. Measurements were performed on a Goe-MF II EIT system under four different simulated operational conditions. We found that systematic measurement errors always exceeded the error level of stochastic noise since the Goe-MF II system had been optimized for a sufficient signal to noise ratio but not for accuracy. In time difference imaging and functional EIT (f-EIT) systematic errors are reduced to a minimum by dividing the raw data by reference data. This is not the case in absolute EIT (a-EIT) where the resistivity of the examined object is determined on an absolute scale. We conclude that a reduction of systematic errors has to be one major goal in future system design.
Cost-effectiveness of the stream-gaging program in Nebraska
Engel, G.B.; Wahl, K.L.; Boohar, J.A.
1984-01-01
This report documents the results of a study of the cost-effectiveness of the streamflow information program in Nebraska. Presently, 145 continuous surface-water stations are operated in Nebraska on a budget of $908,500. Data uses and funding sources are identified for each of the 145 stations. Data from most stations have multiple uses. All stations have sufficient justification for continuation, but two stations primarily are used in short-term research studies; their continued operation needs to be evaluated when the research studies end. The present measurement frequency produces an average standard error for instantaneous discharges of about 12 percent, including periods when stage data are missing. Altering the travel routes and the measurement frequency will allow a reduction in standard error of about 1 percent with the present budget. Standard error could be reduced to about 8 percent if lost record could be eliminated. A minimum budget of $822,000 is required to operate the present network, but operations at that funding level would result in an increase in standard error to about 16 percent. The maximum budget analyzed was $1,363,000, which would result in an average standard error of 6 percent. (USGS)
12 CFR 217.152 - Simple risk weight approach (SRWA).
Code of Federal Regulations, 2014 CFR
2014-01-01
... than or equal to -1 (that is, between zero and -1), then E equals the absolute value of RVC. If RVC is... this section. (1) Zero percent risk weight equity exposures. An equity exposure to an entity whose credit exposures are exempt from the 0.03 percent PD floor in § 217.131(d)(2) is assigned a zero percent...
12 CFR 217.52 - Simple risk-weight approach (SRWA).
Code of Federal Regulations, 2014 CFR
2014-01-01
... greater than or equal to −1 (that is, between zero and −1), then E equals the absolute value of RVC. If... this section) by the lowest applicable risk weight in this paragraph (b). (1) Zero percent risk weight... credit exposures receive a zero percent risk weight under § 217.32 may be assigned a zero percent risk...
48 CFR 52.241-6 - Service Provisions.
Code of Federal Regulations, 2014 CFR
2014-10-01
... such errors. However, any meter which registers not more than __ percent slow or fast shall be deemed... the Government if the percentage of errors is found to be not more than __ percent slow or fast. (3...
Levofloxacin to prevent bacterial infection in patients with cancer and neutropenia.
Bucaneve, Giampaolo; Micozzi, Alessandra; Menichetti, Francesco; Martino, Pietro; Dionisi, M Stella; Martinelli, Giovanni; Allione, Bernardino; D'Antonio, Domenico; Buelli, Maurizio; Nosari, A Maria; Cilloni, Daniela; Zuffa, Eliana; Cantaffa, Renato; Specchia, Giorgina; Amadori, Sergio; Fabbiano, Francesco; Deliliers, Giorgio Lambertenghi; Lauria, Francesco; Foà, Robin; Del Favero, Albano
2005-09-08
The prophylactic use of fluoroquinolones in patients with cancer and neutropenia is controversial and is not a recommended intervention. We randomly assigned 760 consecutive adult patients with cancer in whom chemotherapy-induced neutropenia (<1000 neutrophils per cubic millimeter) was expected to occur for more than seven days to receive either oral levofloxacin (500 mg daily) or placebo from the start of chemotherapy until the resolution of neutropenia. Patients were stratified according to their underlying disease (acute leukemia vs. solid tumor or lymphoma). An intention-to-treat analysis showed that fever was present for the duration of neutropenia in 65 percent of patients who received levofloxacin prophylaxis, as compared with 85 percent of those receiving placebo (243 of 375 vs. 308 of 363; relative risk, 0.76; absolute difference in risk, -20 percent; 95 percent confidence interval, -26 to -14 percent; P=0.001). The levofloxacin group had a lower rate of microbiologically documented infections (absolute difference in risk, -17 percent; 95 percent confidence interval, -24 to -10 percent; P<0.001), bacteremias (difference in risk, -16 percent; 95 percent confidence interval, -22 to -9 percent; P<0.001), and single-agent gram-negative bacteremias (difference in risk, -7 percent; 95 percent confidence interval, -10 to -2 percent; P<0.01) than did the placebo group. Mortality and tolerability were similar in the two groups. The effects of prophylaxis were also similar between patients with acute leukemia and those with solid tumors or lymphoma. Prophylactic treatment with levofloxacin is an effective and well-tolerated way of preventing febrile episodes and other relevant infection-related outcomes in patients with cancer and profound and protracted neutropenia. The long-term effect of this intervention on microbial resistance in the community is not known. Copyright 2005 Massachusetts Medical Society.
Absolute color scale for improved diagnostics with wavefront error mapping.
Smolek, Michael K; Klyce, Stephen D
2007-11-01
Wavefront data are expressed in micrometers and referenced to the pupil plane, but current methods to map wavefront error lack standardization. Many use normalized or floating scales that may confuse the user by generating ambiguous, noisy, or varying information. An absolute scale that combines consistent clinical information with statistical relevance is needed for wavefront error mapping. The color contours should correspond better to current corneal topography standards to improve clinical interpretation. Retrospective analysis of wavefront error data. Historic ophthalmic medical records. Topographic modeling system topographical examinations of 120 corneas across 12 categories were used. Corneal wavefront error data in micrometers from each topography map were extracted at 8 Zernike polynomial orders and for 3 pupil diameters expressed in millimeters (3, 5, and 7 mm). Both total aberrations (orders 2 through 8) and higher-order aberrations (orders 3 through 8) were expressed in the form of frequency histograms to determine the working range of the scale across all categories. The standard deviation of the mean error of normal corneas determined the map contour resolution. Map colors were based on corneal topography color standards and on the ability to distinguish adjacent color contours through contrast. Higher-order and total wavefront error contour maps for different corneal conditions. An absolute color scale was produced that encompassed a range of +/-6.5 microm and a contour interval of 0.5 microm. All aberrations in the categorical database were plotted with no loss of clinical information necessary for classification. In the few instances where mapped information was beyond the range of the scale, the type and severity of aberration remained legible. 
When wavefront data are expressed in micrometers, this absolute scale facilitates the determination of the severity of aberrations present compared with a floating scale, particularly for distinguishing normal from abnormal levels of wavefront error. The new color palette makes it easier to identify disorders. The corneal mapping method can be extended to mapping whole eye wavefront errors. When refraction data are expressed in diopters, the previously published corneal topography scale is suggested.
Reliability study of biometrics "do not contact" in myopia.
Migliorini, R; Fratipietro, M; Comberiati, A M; Pattavina, L; Arrico, L
The aim of the study is to compare the refractive condition of the eye actually achieved after surgery with the expected refractive condition calculated with a biometer. The study was conducted in a random group of 38 eyes of patients undergoing surgery by phacoemulsification. The mean absolute error between the values predicted from the optical biometer measurements and those obtained post-operatively was approximately 0.47%. Our study shows results not far from those reported in the literature, and the mean absolute error is among the lowest reported values at 0.47 ± 0.11 (SEM).
Automated estimation of abdominal effective diameter for body size normalization of CT dose.
Cheng, Phillip M
2013-06-01
Most CT dose data aggregation methods do not currently adjust dose values for patient size. This work proposes a simple heuristic for reliably computing an effective diameter of a patient from an abdominal CT image. Evaluation of this method on 106 patients scanned on Philips Brilliance 64 and Brilliance Big Bore scanners demonstrates close correspondence between computed and manually measured patient effective diameters, with a mean absolute error of 1.0 cm (error range +2.2 to -0.4 cm). This level of correspondence was also demonstrated for 60 patients on Siemens, General Electric, and Toshiba scanners. A calculated effective diameter in the middle slice of an abdominal CT study was found to be a close approximation of the mean calculated effective diameter for the study, with a mean absolute error of approximately 1.0 cm (error range +3.5 to -2.2 cm). Furthermore, the mean absolute error for an adjusted mean volume computed tomography dose index (CTDIvol) using a mid-study calculated effective diameter, versus a mean per-slice adjusted CTDIvol based on the calculated effective diameter of each slice, was 0.59 mGy (error range 1.64 to -3.12 mGy). These results are used to calculate approximate normalized dose length product values in an abdominal CT dose database of 12,506 studies.
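The quantity this abstract estimates can be illustrated with a short sketch. This is not the paper's heuristic; it is a minimal example assuming the patient cross-section can be segmented with a simple HU threshold, and defining the effective diameter as that of a circle with the same area as the segmented cross-section. The threshold, pixel spacing, and phantom are invented for the example.

```python
import numpy as np

# Minimal sketch (not the paper's method): effective diameter of an
# abdominal CT slice, taken as the diameter of the equal-area circle,
# D_eff = 2 * sqrt(area / pi). The HU threshold is an assumption.

def effective_diameter(ct_slice_hu, pixel_spacing_cm, threshold_hu=-250):
    body_mask = ct_slice_hu > threshold_hu              # crude body segmentation
    area_cm2 = body_mask.sum() * pixel_spacing_cm ** 2  # pixel count -> cm^2
    return 2.0 * np.sqrt(area_cm2 / np.pi)

# Synthetic check: a circular "patient" of radius 12 cm (0 HU) in air.
n, spacing = 512, 0.1                                   # 0.1 cm pixels
y, x = np.mgrid[:n, :n]
disk = (x - n / 2) ** 2 + (y - n / 2) ** 2 <= (12.0 / spacing) ** 2
phantom = np.where(disk, 0.0, -1000.0)
print(effective_diameter(phantom, spacing))             # close to 24 cm
```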
Wetherbee, Gregory A.; Latysh, Natalie E.; Gordon, John D.
2004-01-01
Five external quality-assurance programs were operated by the U.S. Geological Survey for the National Atmospheric Deposition Program/National Trends Network (NADP/NTN) from 2000 through 2001 (study period): the intersite-comparison program, the blind-audit program, the field-audit program, the interlaboratory-comparison program, and the collocated-sampler program. Each program is designed to measure specific components of the total error inherent in NADP/NTN wet-deposition measurements. The intersite-comparison program assesses the variability and bias of pH and specific-conductance determinations made by NADP/NTN site operators with respect to accuracy goals. The accuracy goals are statistically based using the median of all of the measurements obtained for each of four intersite-comparison studies. The percentage of site operators responding on time that met the pH accuracy goals ranged from 84.2 to 90.5 percent. In these same four intersite-comparison studies, 88.9 to 99.0 percent of the site operators met the accuracy goals for specific conductance. The blind-audit program evaluates the effects of routine sample handling, processing, and shipping on the chemistry of weekly precipitation samples. The blind-audit data for the study period indicate that sample handling introduced a small amount of sulfate contamination and slight changes to hydrogen-ion content of the precipitation samples. The magnitudes of the paired differences are not environmentally significant to NADP/NTN data users. The field-audit program (also known as the 'field-blank program') was designed to measure the effects of field exposure, handling, and processing on the chemistry of NADP/NTN precipitation samples. The results indicate potential low-level contamination of NADP/NTN samples with calcium, ammonium, chloride, and nitrate. Less sodium contamination was detected by the field-audit data than in previous years. 
Statistical analysis of the paired differences shows that contaminant ions are entrained into the solutions from the field-exposed buckets, but the positive bias that results from the minor amount of contamination appears to affect the analytical results by less than 6 percent. An interlaboratory-comparison program is used to estimate the analytical variability and bias of participating laboratories, especially the NADP Central Analytical Laboratory (CAL). Statistical comparison of the analytical results of participating laboratories implies that analytical data from the various monitoring networks can be compared. Bias was identified in the CAL data for ammonium, chloride, nitrate, sulfate, hydrogen-ion, and specific-conductance measurements, but the absolute value of the bias was less than analytical minimum reporting limits for all constituents except ammonium and sulfate. Control charts show brief time periods when the CAL's analytical precision for sodium, ammonium, and chloride was not within the control limits. Data for the analysis of ultrapure deionized-water samples indicated that the laboratories are maintaining good control of laboratory contamination. Estimated analytical precision among the laboratories indicates that the magnitudes of chemical-analysis errors are not environmentally significant to NADP data users. Overall precision of the precipitation-monitoring system used by the NADP/NTN was estimated by evaluation of samples from collocated monitoring sites at CA99, CO08, and NH02. Precision defined by the median of the absolute percent difference (MAE) was estimated to be approximately 10 percent or less for calcium, magnesium, sodium, chloride, nitrate, sulfate, specific conductance, and sample volume. The MAE values for ammonium and hydrogen-ion concentrations were estimated to be less than 10 percent for CA99 and NH02 but nearly 20 percent for ammonium concentration and about 17 percent for hydrogen-ion concentration for CO08. 
As in past years, the variability in the collocated-site data for sam
Micro CT based truth estimation of nodule volume
NASA Astrophysics Data System (ADS)
Kinnard, L. M.; Gavrielides, M. A.; Myers, K. J.; Zeng, R.; Whiting, B.; Lin-Gibson, S.; Petrick, N.
2010-03-01
With the advent of high-resolution CT, three-dimensional (3D) methods for nodule volumetry have been introduced, with the hope that such methods will be more accurate and consistent than currently used planar measures of size. However, the error associated with volume estimation methods still needs to be quantified. Volume estimation error is multi-faceted in the sense that there is variability associated with the patient, the software tool and the CT system. A primary goal of our current research efforts is to quantify the various sources of measurement error and, when possible, minimize their effects. In order to assess the bias of an estimate, the actual value, or "truth," must be known. In this work we investigate the reliability of micro CT to determine the "true" volume of synthetic nodules. The advantage of micro CT over other truthing methods is that it can provide both absolute volume and shape information in a single measurement. In the current study we compare micro CT volume truth to weight-density truth for spherical, elliptical, spiculated and lobulated nodules with diameters from 5 to 40 mm, and densities of -630 and +100 HU. The percent differences between micro CT and weight-density volume for -630 HU nodules range from [-21.7%, -0.6%] (mean= -11.9%) and the differences for +100 HU nodules range from [-0.9%, 3.0%] (mean=1.7%).
Telemetry Standards, RCC Standard 106-17, Annex A.1, Pulse Amplitude Modulation Standards
2017-07-01
conform to either of the two figures referenced in Annex A.1. The first figure shows 50 percent duty cycle PAM with amplitude synchronization; a 20-25 percent deviation reserved for pulse synchronization is recommended. Telemetry Standards, RCC Standard 106-17, Annex A.1, July 2017, Section A.1.2.
Karunaratne, Nicholas
2013-12-01
To compare the accuracy of the Pentacam Holladay equivalent keratometry readings with the IOL Master 500 keratometry in calculating intraocular lens power. Non-randomized, prospective clinical study conducted in private practice. Forty-five consecutive normal patients undergoing cataract surgery. Forty-five consecutive patients had Pentacam equivalent keratometry readings at the 2-, 3- and 4.5-mm corneal zones and IOL Master keratometry measurements prior to cataract surgery. For each Pentacam equivalent keratometry reading zone and IOL Master measurement the difference between the observed and expected refractive error was calculated using the Holladay 2 and Sanders, Retzlaff and Kraff theoretic (SRKT) formulas. Mean keratometric value and mean absolute refractive error. There was a statistically significant difference between the mean keratometric values of the IOL Master and the Pentacam equivalent keratometry reading 2-, 3- and 4.5-mm measurements (P < 0.0001, analysis of variance). There was no statistically significant difference between the mean absolute refraction error for the IOL Master and equivalent keratometry readings 2 mm, 3 mm and 4.5 mm zones for either the Holladay 2 formula (P = 0.14) or SRKT formula (P = 0.47). The lowest mean absolute refraction error for Holladay 2 equivalent keratometry reading was the 4.5 mm zone (mean 0.25 D ± 0.17 D). The lowest mean absolute refraction error for SRKT equivalent keratometry reading was the 4.5 mm zone (mean 0.25 D ± 0.19 D). Comparing the absolute refraction error of IOL Master and Pentacam equivalent keratometry reading, the best agreement was with Holladay 2 and equivalent keratometry reading 4.5 mm, with a mean difference of 0.02 D and 95% limits of agreement of -0.35 and 0.39 D. The IOL Master keratometry and Pentacam equivalent keratometry reading were not equivalent when used only for corneal power measurements.
However, the keratometry measurements of the IOL Master and Pentacam equivalent keratometry reading 4.5 mm may be similarly effective when used in intraocular lens power calculation formulas, following constant optimization. © 2013 Royal Australian and New Zealand College of Ophthalmologists.
An emulator for minimizing computer resources for finite element analysis
NASA Technical Reports Server (NTRS)
Melosh, R.; Utku, S.; Islam, M.; Salama, M.
1984-01-01
A computer code, SCOPE, has been developed for predicting the computer resources required for a given analysis code, computer hardware, and structural problem. The cost of running the code is a small fraction (about 3 percent) of the cost of performing the actual analysis. However, its accuracy in predicting the CPU and I/O resources depends intrinsically on the accuracy of calibration data that must be developed once for the computer hardware and the finite element analysis code of interest. Testing of the SCOPE code on the AMDAHL 470 V/8 computer and the ELAS finite element analysis program indicated small I/O errors (3.2 percent), larger CPU errors (17.8 percent), and negligible total errors (1.5 percent).
A two-dimensional, finite-difference model of the high plains aquifer in southern South Dakota
Kolm, K.E.; Case, H. L.
1983-01-01
The High Plains aquifer is the principal source of water for irrigation, industry, municipalities, and domestic use in south-central South Dakota. The aquifer, composed of upper sandstone units of the Arikaree Formation, and the overlying Ogallala and Sand Hills Formations, was simulated using a two-dimensional, finite-difference computer model. The maximum difference between simulated and measured potentiometric heads was less than 60 feet (1- to 4-percent error). Two-thirds of the simulated potentiometric heads were within 26 feet of the measured values (3-percent error). The estimated saturated thickness, computed from simulated potentiometric heads, was within 25-percent error of the known saturated thickness for 95 percent of the study area. (USGS)
Interpreting SBUV Smoothing Errors: an Example Using the Quasi-biennial Oscillation
NASA Technical Reports Server (NTRS)
Kramarova, N. A.; Bhartia, Pawan K.; Frith, S. M.; McPeters, R. D.; Stolarski, R. S.
2013-01-01
The Solar Backscattered Ultraviolet (SBUV) observing system consists of a series of instruments that have been measuring both total ozone and the ozone profile since 1970. SBUV measures the profile in the upper stratosphere with a resolution that is adequate to resolve most of the important features of that region. In the lower stratosphere the limited vertical resolution of the SBUV system means that there are components of the profile variability that SBUV cannot measure. The smoothing error, as defined in the optimal estimation retrieval method, describes the components of the profile variability that the SBUV observing system cannot measure. In this paper we provide a simple visual interpretation of the SBUV smoothing error by comparing SBUV ozone anomalies in the lower tropical stratosphere associated with the quasi-biennial oscillation (QBO) to anomalies obtained from the Aura Microwave Limb Sounder (MLS). We describe a methodology for estimating the SBUV smoothing error for monthly zonal mean (mzm) profiles. We construct covariance matrices that describe the statistics of the inter-annual ozone variability using a 6 yr record of Aura MLS and ozonesonde data. We find that the smoothing error is of the order of 1 percent between 10 and 1 hPa, increasing up to 15-20 percent in the troposphere and up to 5 percent in the mesosphere. The smoothing error for total ozone columns is small, mostly less than 0.5 percent. We demonstrate that by merging the partial ozone columns from several layers in the lower stratosphere/troposphere into one thick layer, we can minimize the smoothing error. We recommend using the following layer combinations to reduce the smoothing error to about 1 percent: surface to 25 hPa (16 hPa) outside (inside) of the narrow equatorial zone 20°S-20°N.
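The smoothing-error concept from optimal estimation can be made concrete with a toy example. In that framework, with averaging-kernel matrix A, a retrieval sees A(x_true - x_a) of the true anomaly, so the unmeasurable component is (A - I)(x_true - x_a). The 3-layer kernel below is purely illustrative and is not an actual SBUV averaging kernel.

```python
import numpy as np

# Toy illustration of the smoothing error in optimal estimation:
# the retrieval misses the component (A - I) @ (x_true - x_a).
def smoothing_error(A, x_true, x_a):
    return (A - np.eye(len(x_a))) @ (x_true - x_a)

A = np.array([[0.7, 0.2, 0.1],        # broad, low-resolution kernel
              [0.2, 0.6, 0.2],        # (rows sum to 1: the retrieval
              [0.1, 0.2, 0.7]])       # blurs adjacent layers together)
x_a = np.zeros(3)                     # a priori anomaly
x_true = np.array([1.0, -1.0, 0.5])   # true fine-scale anomaly
print(smoothing_error(A, x_true, x_a))
```

A fine-scale oscillation between adjacent layers (as in the QBO example) is exactly the structure such a broad kernel smooths away, which is why merging layers into one thick column reduces the smoothing error.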
Exploiting data representation for fault tolerance
Hoemmen, Mark Frederick; Elliott, J.; Sandia National Lab.; ...
2015-01-06
Incorrect computer hardware behavior may corrupt intermediate computations in numerical algorithms, possibly resulting in incorrect answers. Prior work models misbehaving hardware by randomly flipping bits in memory. We start by accepting this premise, and present an analytic model for the error introduced by a bit flip in an IEEE 754 floating-point number. We then relate this finding to the linear algebra concepts of normalization and matrix equilibration. In particular, we present a case study illustrating that normalizing both vector inputs of a dot product minimizes the probability of a single bit flip causing a large error in the dot product's result. Moreover, the absolute error is either less than one or very large, which allows detection of large errors. Then, we apply this to the GMRES iterative solver. We count all possible errors that can be introduced through faults in arithmetic in the computationally intensive orthogonalization phase of GMRES, and show that when the matrix is equilibrated, the absolute error is bounded above by one.
Identification of two novel mammographic density loci at 6q25.1.
Brand, Judith S; Li, Jingmei; Humphreys, Keith; Karlsson, Robert; Eriksson, Mikael; Ivansson, Emma; Hall, Per; Czene, Kamila
2015-06-03
Mammographic density (MD) is a strong heritable and intermediate phenotype for breast cancer, but much of its genetic variation remains unexplained. We performed a large-scale genetic association study including 8,419 women of European ancestry to identify MD loci. Participants of three Swedish studies were genotyped on a custom Illumina iSelect genotyping array and percent and absolute mammographic density were ascertained using semiautomated and fully automated methods from film and digital mammograms. Linear regression analysis was used to test for SNP-MD associations, adjusting for age, body mass index, menopausal status and six principal components. Meta-analyses were performed by combining P values taking sample size, study-specific inflation factor and direction of effect into account. Genome-wide significant associations were observed for two previously identified loci: ZNF365 (rs10995194, P = 2.3 × 10(-8) for percent MD and P = 8.7 × 10(-9) for absolute MD) and AREG (rs10034692, P = 6.7 × 10(-9) for absolute MD). In addition, we found evidence of association for two variants at 6q25.1, both of which are known breast cancer susceptibility loci: rs9485370 in the TAB2 gene (P = 4.8 × 10(-9) for percent MD and P = 2.5 × 10(-8) for absolute MD) and rs60705924 in the CCDC170/ESR1 region (P = 2.2 × 10(-8) for absolute MD). Both regions have been implicated in estrogen receptor signaling with TAB2 being a potential regulator of tamoxifen response. We identified two novel MD loci at 6q25.1. These findings underscore the importance of 6q25.1 as a susceptibility region and provide more insight into the mechanisms through which MD influences breast cancer risk.
TH-AB-201-12: Using Machine Log-Files for Treatment Planning and Delivery QA
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stanhope, C; Liang, J; Drake, D
2016-06-15
Purpose: To determine the segment reduction and dose resolution necessary for machine log-files to effectively replace current phantom-based patient-specific quality assurance, while minimizing computational cost. Methods: Elekta’s Log File Convertor R3.2 records linac delivery parameters (dose rate, gantry angle, leaf position) every 40ms. Five VMAT plans [4 H&N, 1 Pulsed Brain] comprised of 2 arcs each were delivered on the ArcCHECK phantom. Log-files were reconstructed in Pinnacle on the phantom geometry using 1/2/3/4° control point spacing and 2/3/4mm dose grid resolution. Reconstruction effectiveness was quantified by comparing 2%/2mm gamma passing rates of the original and log-file plans. Modulation complexity scores (MCS) were calculated for each beam to correlate reconstruction accuracy and beam modulation. Percent error in absolute dose for each plan-pair combination (log-file vs. ArcCHECK, original vs. ArcCHECK, log-file vs. original) was calculated for each arc and every diode greater than 10% of the maximum measured dose (per beam). Comparing standard deviations of the three plan-pair distributions, relative noise of the ArcCHECK and log-file systems was elucidated. Results: The original plans exhibit a mean passing rate of 95.1±1.3%. The eight more modulated H&N arcs [MCS=0.088±0.014] and two less modulated brain arcs [MCS=0.291±0.004] yielded log-file pass rates most similar to the original plan when using 1°/2mm [0.05%±1.3% lower] and 2°/3mm [0.35±0.64% higher] log-file reconstructions respectively. Log-file and original plans displayed percent diode dose errors 4.29±6.27% and 3.61±6.57% higher than measurement. Excluding the phantom eliminates diode miscalibration and setup errors; log-file dose errors were 0.72±3.06% higher than the original plans, significantly less noisy.
Conclusion: For log-file reconstructed VMAT arcs, 1° control point spacing and 2mm dose resolution is recommended; however, less modulated arcs may allow less stringent reconstructions. Following the aforementioned reconstruction recommendations, the log-file technique is capable of detecting delivery errors with equivalent accuracy and less noise than ArcCHECK QA. I am funded by an Elekta Research Grant.
Vegetation and Terrain Relationships in South-Central New Mexico and Western Texas
1980-11-01
to Mr. Kevin von Finger, Ecologist, and to Mr. James Conyers, Chief of Environmental Office, Directorate of Facilities Engineering, U.S. Army Air...with ground cover ranging from 6 to 25 percent. The associate, less frequently observed shrub species (40 to 60 percent absolute frequency) were A
A suggestion for computing objective function in model calibration
Wu, Yiping; Liu, Shuguang
2014-01-01
A parameter-optimization process (model calibration) is usually required for numerical model applications, which involves the use of an objective function to determine the model cost (model-data errors). The sum of square errors (SSR) has been widely adopted as the objective function in various optimization procedures. However, ‘square error’ calculation was found to be more sensitive to extreme or high values. Thus, we proposed that the sum of absolute errors (SAR) may be a better option than SSR for model calibration. To test this hypothesis, we used two case studies—a hydrological model calibration and a biogeochemical model calibration—to investigate the behavior of a group of potential objective functions: SSR, SAR, sum of squared relative deviation (SSRD), and sum of absolute relative deviation (SARD). Mathematical evaluation of model performance demonstrates that ‘absolute error’ (SAR and SARD) are superior to ‘square error’ (SSR and SSRD) in calculating objective function for model calibration, and SAR behaved the best (with the least error and highest efficiency). This study suggests that SSR might be overly used in real applications, and SAR may be a reasonable choice in common optimization implementations without emphasizing either high or low values (e.g., modeling for supporting resources management).
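The sensitivity difference between the two objective functions is easy to show numerically. The sketch below uses invented residuals; it simply illustrates why squaring lets a single extreme residual dominate the objective, which is the behavior the study identifies.

```python
# Illustration of the abstract's point: with one extreme residual,
# the squared-error objective (SSR) is dominated by the outlier far
# more than the absolute-error objective (SAR). Residuals are invented.

def ssr(errors):
    """Sum of squared errors."""
    return sum(e ** 2 for e in errors)

def sar(errors):
    """Sum of absolute errors."""
    return sum(abs(e) for e in errors)

residuals = [1.0, -2.0, 1.5, 30.0]               # one extreme value
print(f"SSR outlier share: {30.0 ** 2 / ssr(residuals):.2%}")  # ~99%
print(f"SAR outlier share: {30.0 / sar(residuals):.2%}")       # ~87%
```

Under SSR the calibration would bend the model almost entirely toward fitting the one extreme point, whereas SAR weights all residuals in proportion to their size.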
Mendiburu, Andrés Z; de Carvalho, João A; Coronado, Christian R
2015-03-21
The objective of this study was to estimate the lower flammability limits of C-H compounds at 25 °C and 1 atm, at moderate temperatures, and in the presence of diluents. A set of 120 C-H compounds was divided into a correlation set and a prediction set of 60 compounds each. The absolute average relative error for the total set was 7.89%; for the correlation set, it was 6.09%; and for the prediction set, it was 9.68%. However, it was shown that by considering different sources of experimental data, the values were reduced to 6.5% for the prediction set and 6.29% for the total set. The method showed consistency with Le Chatelier's law for binary mixtures of C-H compounds. When tested over a temperature range from 5 °C to 100 °C, the absolute average relative errors were 2.41% for methane, 4.78% for propane, 0.29% for iso-butane, and 3.86% for propylene. When nitrogen was added, the absolute average relative errors were 2.48% for methane, 5.13% for propane, 0.11% for iso-butane, and 0.15% for propylene. When carbon dioxide was added, the absolute relative errors were 1.80% for methane, 5.38% for propane, 0.86% for iso-butane, and 1.06% for propylene. Copyright © 2014 Elsevier B.V. All rights reserved.
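Le Chatelier's law for mixtures, against which the method was checked, has a simple closed form; a sketch with illustrative LFL values rather than the paper's data:

```python
def le_chatelier_lfl(fractions, lfls):
    """Lower flammability limit (vol %) of a fuel blend by Le Chatelier's law:
    LFL_mix = 1 / sum(y_i / LFL_i), with y_i the mole fraction of fuel i
    in the fuel blend and LFL_i its pure-compound limit."""
    assert abs(sum(fractions) - 1.0) < 1e-9, "fuel fractions must sum to 1"
    return 1.0 / sum(y / l for y, l in zip(fractions, lfls))

# Illustrative LFL values (vol % in air) commonly quoted for 25 °C and 1 atm:
lfl_mix = le_chatelier_lfl([0.5, 0.5], [5.0, 2.1])   # methane + propane, 50/50
print(round(lfl_mix, 2))
```

The mixture limit always falls between the pure-compound limits, closer to that of the more flammable component.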
Acupuncture for peripheral joint osteoarthritis
Manheimer, Eric; Cheng, Ke; Linde, Klaus; Lao, Lixing; Yoo, Junghee; Wieland, Susan; van der Windt, Daniëlle AWM; Berman, Brian M; Bouter, Lex M
2011-01-01
Background Peripheral joint osteoarthritis is a major cause of pain and functional limitation. Few treatments are safe and effective. Objectives To assess the effects of acupuncture for treating peripheral joint osteoarthritis. Search strategy We searched the Cochrane Central Register of Controlled Trials (The Cochrane Library 2008, Issue 1), MEDLINE, and EMBASE (both through December 2007), and scanned reference lists of articles. Selection criteria Randomized controlled trials (RCTs) comparing needle acupuncture with a sham, another active treatment, or a waiting list control group in people with osteoarthritis of the knee, hip, or hand. Data collection and analysis Two authors independently assessed trial quality and extracted data. We contacted study authors for additional information. We calculated standardized mean differences using the differences in improvements between groups. Main results Sixteen trials involving 3498 people were included. Twelve of the RCTs included only people with OA of the knee, 3 only OA of the hip, and 1 a mix of people with OA of the hip and/or knee. In comparison with a sham control, acupuncture showed statistically significant, short-term improvements in osteoarthritis pain (standardized mean difference -0.28, 95% confidence interval -0.45 to -0.11; 0.9 point greater improvement than sham on 20 point scale; absolute percent change 4.59%; relative percent change 10.32%; 9 trials; 1835 participants) and function (-0.28, -0.46 to -0.09; 2.7 point greater improvement on 68 point scale; absolute percent change 3.97%; relative percent change 8.63%); however, these pooled short-term benefits did not meet our predefined thresholds for clinical relevance (i.e. 1.3 points for pain; 3.57 points for function) and there was substantial statistical heterogeneity. 
Additionally, restriction to sham-controlled trials using shams judged most likely to adequately blind participants to treatment assignment (which were also the shams judged most likely to have physiological activity) reduced heterogeneity and resulted in pooled short-term benefits of acupuncture that were smaller and non-significant. In comparison with sham acupuncture at the six-month follow-up, acupuncture showed borderline statistically significant, clinically irrelevant improvements in osteoarthritis pain (-0.10, -0.21 to 0.01; 0.4 point greater improvement than sham on 20 point scale; absolute percent change 1.81%; relative percent change 4.06%; 4 trials; 1399 participants) and function (-0.11, -0.22 to 0.00; 1.2 point greater improvement than sham on 68 point scale; absolute percent change 1.79%; relative percent change 3.89%). In a secondary analysis versus a waiting list control, acupuncture was associated with statistically significant, clinically relevant short-term improvements in osteoarthritis pain (-0.96, -1.19 to -0.72; 14.5 point greater improvement than sham on 100 point scale; absolute percent change 14.5%; relative percent change 29.14%; 4 trials; 884 participants) and function (-0.89, -1.18 to -0.60; 13.0 point greater improvement than sham on 100 point scale; absolute percent change 13.0%; relative percent change 25.21%). In the head-on comparisons of acupuncture with the ‘supervised osteoarthritis education’ and the ‘physician consultation’ control groups, acupuncture was associated with clinically relevant short- and long-term improvements in pain and function. In the head-on comparisons of acupuncture with ‘home exercises/advice leaflet’ and ‘supervised exercise’, acupuncture was associated with treatment effects similar to the controls. Acupuncture as an adjuvant to an exercise-based physiotherapy program did not result in any greater improvements than the exercise program alone. 
Information on safety was reported in only 8 trials and even in these trials there was limited reporting and heterogeneous methods. Authors' conclusions Sham-controlled trials show statistically significant benefits; however, these benefits are small, do not meet our pre-defined thresholds for clinical relevance, and are probably due at least partially to placebo effects from incomplete blinding. Waiting list-controlled trials of acupuncture for peripheral joint osteoarthritis suggest statistically significant and clinically relevant benefits, much of which may be due to expectation or placebo effects. PMID:20091527
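For context, the standardized mean differences reported above are the between-group difference in improvements divided by a pooled standard deviation. A hedged sketch with hypothetical group statistics (the trial-level data are not given here; the SD of 3.2 is an assumption chosen to reproduce an SMD near the reported -0.28):

```python
import math

def standardized_mean_difference(m1, sd1, n1, m2, sd2, n2):
    """Cohen's-d-style SMD between improvement scores of two groups,
    using the pooled standard deviation (a common meta-analysis convention)."""
    pooled_sd = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2)
                          / (n1 + n2 - 2))
    return (m1 - m2) / pooled_sd

# Hypothetical numbers: a 0.9-point greater pain improvement on a 20-point
# scale (acupuncture -2.9 vs sham -2.0), assumed SD 3.2, 100 per arm.
smd = standardized_mean_difference(-2.9, 3.2, 100, -2.0, 3.2, 100)
print(round(smd, 2))
```

Dividing by the SD is what lets trials using different pain scales be pooled on one axis.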
Early results from the Far Infrared Absolute Spectrophotometer (FIRAS)
NASA Technical Reports Server (NTRS)
Mather, J. C.; Cheng, E. S.; Shafer, R. A.; Eplee, R. E.; Isaacman, R. B.; Fixsen, D. J.; Read, S. M.; Meyer, S. S.; Weiss, R.; Wright, E. L.
1991-01-01
The Far Infrared Absolute Spectrophotometer (FIRAS) on the Cosmic Background Explorer (COBE) mapped 98 percent of the sky, 60 percent of it twice, before the liquid helium coolant was exhausted. The FIRAS covers the frequency region from 1 to 100/cm with a 7 deg angular resolution. The spectral resolution is 0.2/cm for frequencies less than 20/cm and 0.8/cm for higher frequencies. Preliminary results include: a limit on the deviations from a Planck curve of 1 percent of the peak brightness from 1 to 20/cm, a temperature of 2.735 +/- 0.06 K, a limit on the Comptonization parameter y of 0.001, on the chemical potential parameter mu of 0.01, a strong limit on the existence of a hot smooth intergalactic medium, and a confirmation that the dipole anisotropy spectrum is that of a Doppler shifted blackbody.
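The Planck curve against which the FIRAS deviations are limited can be evaluated directly in the wavenumber units the instrument reports; a sketch using the quoted temperature (the grid and peak search are illustrative, not the FIRAS pipeline):

```python
import math

# Physical constants in SI, with c in cm/s so that sigma stays in cm^-1.
h = 6.62607015e-34   # J s
c = 2.99792458e10    # cm/s
k = 1.380649e-23     # J/K

def planck_wavenumber(sigma, T):
    """Blackbody spectral radiance B(sigma, T) per unit wavenumber (cm^-1)."""
    x = h * c * sigma / (k * T)
    return 2.0 * h * c**2 * sigma**3 / math.expm1(x)

T = 2.735  # K, the temperature quoted in the abstract
sigmas = [s * 0.1 for s in range(1, 201)]          # 0.1 to 20 cm^-1
peak = max(sigmas, key=lambda s: planck_wavenumber(s, T))
print(round(peak, 1))  # the peak lies near 5.4 cm^-1, inside the 1-20/cm band
```

That the CMB brightness peaks well inside the 1 to 20/cm range is why a 1 percent-of-peak limit there is so constraining.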
Fundamental principles of absolute radiometry and the philosophy of this NBS program (1968 to 1971)
NASA Technical Reports Server (NTRS)
Geist, J.
1972-01-01
A description is given of work performed on a program to develop an electrically calibrated detector (also called an absolute radiometer, absolute detector, or electrically calibrated radiometer) that could be used to realize, maintain, and transfer a scale of total irradiance. The program includes a comprehensive investigation of the theoretical basis of absolute detector radiometry, as well as the design and construction of a number of detectors. A theoretical analysis of the sources of error is also included.
ERIC Educational Resources Information Center
Ericson, T. J.
1988-01-01
Describes an apparatus capable of measuring absolute temperatures of a tungsten filament bulb up to normal running temperature and measuring Boltzmann's constant to an accuracy of a few percent. Shows that electrical noise techniques are convenient for demonstrating how the concept of temperature is related to the micro- and macroscopic world. (CW)
Grierson, Lawrence E M; Roberts, James W; Welsher, Arthur M
2017-05-01
There is much evidence to suggest that skill learning is enhanced by skill observation. Recent research on this phenomenon indicates a benefit of observing variable/erred demonstrations. In this study, we explore whether it is variability within the relative organization or the absolute parameterization of a movement that facilitates skill learning through observation. To do so, participants were randomly allocated into groups that observed a model with no variability, absolute timing variability, relative timing variability, or variability in both absolute and relative timing. All participants performed a four-segment movement pattern with specific absolute and relative timing goals prior to and following the observational intervention, as well as in a 24 h retention test and transfer tests that featured new relative and absolute timing goals. Absolute timing error indicated that all groups initially acquired the absolute timing, maintained their performance at 24 h retention, and exhibited performance deterioration in both transfer tests. Relative timing error revealed that the observation of no variability and of relative timing variability produced greater performance at the post-test, 24 h retention, and relative timing transfer tests, but performance for the no-variability group deteriorated at the absolute timing transfer test. The results suggest that the learning of absolute timing following observation unfolds irrespective of model variability. However, the learning of relative timing benefits from holding the absolute features constant, while the observation of no variability partially fails in transfer. We suggest that learning by observing no-variability and variable/erred models unfolds via similar neural mechanisms, although the latter benefits from the additional coding of information pertaining to movements that require a correction. Copyright © 2017 Elsevier B.V. All rights reserved.
Measurement-based analysis of error latency. [in computer operating system
NASA Technical Reports Server (NTRS)
Chillarege, Ram; Iyer, Ravishankar K.
1987-01-01
This paper demonstrates a practical methodology for the study of error latency under a real workload. The method is illustrated with sampled data on the physical memory activity, gathered by hardware instrumentation on a VAX 11/780 during the normal workload cycle of the installation. These data are used to simulate fault occurrence and to reconstruct the error discovery process in the system. The technique provides a means to study the system under different workloads and for multiple days. An approach to determine the percentage of undiscovered errors is also developed and a verification of the entire methodology is performed. This study finds that the mean error latency, in the memory containing the operating system, varies by a factor of 10 to 1 (in hours) between the low and high workloads. It is found that of all errors occurring within a day, 70 percent are detected in the same day, 82 percent within the following day, and 91 percent within the third day. The increase in failure rate due to latency is not so much a function of remaining errors but is dependent on whether or not there is a latent error.
A novel validation and calibration method for motion capture systems based on micro-triangulation.
Nagymáté, Gergely; Tuchband, Tamás; Kiss, Rita M
2018-06-06
Motion capture systems are widely used to measure human kinematics. Nevertheless, users must consider system errors when evaluating their results. Most validation techniques for these systems are based on relative distance and displacement measurements. In contrast, our study aimed to analyse the absolute volume accuracy of optical motion capture systems by means of an engineering surveying reference measurement of the marker coordinates (uncertainty: 0.75 mm). The method is exemplified on an 18-camera OptiTrack Flex13 motion capture system. The absolute accuracy was defined by the root mean square error (RMSE) between the coordinates measured by the camera system and by engineering surveying (micro-triangulation). The original RMSE of 1.82 mm, caused by scaling error, was reduced to 0.77 mm, while the correlation of errors with their distance from the origin fell from 0.855 to 0.209. A simpler but less accurate absolute accuracy compensation method, using a tape measure over large distances, was also tested; it resulted in scaling compensation similar to the surveying method or to direct wand size compensation by a high-precision 3D scanner. The presented validation methods can be less precise in some respects than previous techniques, but they address an error type that has not been and cannot be studied with the previous validation methods. Copyright © 2018 Elsevier Ltd. All rights reserved.
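The idea of removing a pure scaling error to reduce the coordinate RMSE can be sketched as a least-squares scale fit; the data below are invented and the sketch is an illustration of the principle, not the authors' exact compensation procedure:

```python
import math

def rmse(a, b):
    """Root mean square error between paired coordinate values."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)) / len(a))

def best_scale(measured, reference):
    """Least-squares scale factor s minimizing sum((s*m_i - r_i)^2),
    i.e. a correction for a pure scaling error."""
    return (sum(m * r for m, r in zip(measured, reference))
            / sum(m * m for m in measured))

reference = [100.0, 500.0, 1000.0, 2000.0]        # mm from origin (made up)
measured = [r * 1.001 for r in reference]         # a 0.1 % scaling error
s = best_scale(measured, reference)
corrected = [s * m for m in measured]
print(round(rmse(measured, reference), 3), round(rmse(corrected, reference), 3))
```

Because a scaling error grows with distance from the origin, removing it also collapses the error-versus-distance correlation, as the abstract reports.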
ESTIMATION OF RADIOACTIVE CALCIUM-45 BY LIQUID SCINTILLATION COUNTING
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lutwak, L.
1959-03-01
A liquid scintillation counting method is developed for determining radioactive calcium-45 in biological materials. The calcium-45 is extracted, concentrated, and dissolved in absolute ethyl alcohol, to which is added 0.4% diphenyloxazol in toluene. Counting efficiency is about 65 percent, with a standard deviation of 7.36 percent. (auth)
40 CFR Appendix A to Subpart D of... - Tables
Code of Federal Regulations, 2011 CFR
2011-07-01
... post-test values) kPa Ra Relative humidity of the ambient air percent T Absolute temperature at air...) CONTROL OF EMISSIONS FROM NONROAD SPARK-IGNITION ENGINES AT OR BELOW 19 KILOWATTS Emission Test Equipment... torque related to maximum torque for the test mode percent mass Pollutant mass flow g/h nd, i Engine...
40 CFR Appendix A to Subpart D of... - Tables
Code of Federal Regulations, 2013 CFR
2013-07-01
... post-test values) kPa Ra Relative humidity of the ambient air percent T Absolute temperature at air...) CONTROL OF EMISSIONS FROM NONROAD SPARK-IGNITION ENGINES AT OR BELOW 19 KILOWATTS Emission Test Equipment... torque related to maximum torque for the test mode percent mass Pollutant mass flow g/h nd, i Engine...
40 CFR Appendix A to Subpart D of... - Tables
Code of Federal Regulations, 2010 CFR
2010-07-01
... post-test values) kPa Ra Relative humidity of the ambient air percent T Absolute temperature at air...) CONTROL OF EMISSIONS FROM NONROAD SPARK-IGNITION ENGINES AT OR BELOW 19 KILOWATTS Emission Test Equipment... torque related to maximum torque for the test mode percent mass Pollutant mass flow g/h nd, i Engine...
40 CFR Appendix A to Subpart D of... - Tables
Code of Federal Regulations, 2012 CFR
2012-07-01
... post-test values) kPa Ra Relative humidity of the ambient air percent T Absolute temperature at air...) CONTROL OF EMISSIONS FROM NONROAD SPARK-IGNITION ENGINES AT OR BELOW 19 KILOWATTS Emission Test Equipment... torque related to maximum torque for the test mode percent mass Pollutant mass flow g/h nd, i Engine...
40 CFR Appendix A to Subpart D of... - Tables
Code of Federal Regulations, 2014 CFR
2014-07-01
... post-test values) kPa Ra Relative humidity of the ambient air percent T Absolute temperature at air...) CONTROL OF EMISSIONS FROM NONROAD SPARK-IGNITION ENGINES AT OR BELOW 19 KILOWATTS Emission Test Equipment... torque related to maximum torque for the test mode percent mass Pollutant mass flow g/h nd, i Engine...
VizieR Online Data Catalog: AKARI IRC asteroid sample diameters & albedos (Ali-Lagoa+, 2018)
NASA Astrophysics Data System (ADS)
Ali-Lagoa, V.; Mueller, T. G.; Usui, F.; Hasegawa, S.
2017-11-01
Table 1 contains the best-fitting values of size and beaming parameter and corresponding visible geometric albedos for the full AKARI IRC sample. We fitted the near-Earth asteroid thermal model (NEATM) of Harris (1998Icar..131..291H) to the AKARI IRC thermal infrared data (Murakami et al., 2007PASJ...59S.369M, Onaka et al., 2007PASJ...59S.401O, Ishihara et al., 2010A&A...514A...1I, Cat. II/297, Usui et al., 2011PASJ...63.1117U, Cat. J/PASJ/63/1117, Takita et al., 2012PASJ...64..126T, Hasegawa et al., 2013PASJ...65...34H, Cat. J/PASJ/65/34). The NEATM implementation is described in Ali-Lagoa and Delbo' (2017A&A...603A..55A, cat. J/A+A/603/A55). Minimum relative errors of 10, 15, and 20 percent are given for size, beaming parameter and albedo in those cases where the beaming parameter could be fitted. Otherwise, a default value of the beaming parameter is assumed based on Eq. 1 in the article, and the minimum relative errors in size and albedo increase to 20 and 40 percent (see the discussions in Mainzer et al., 2011ApJ...736..100M, Ali-Lagoa et al., 2016A&A...591A..14A, Cat. J/A+A/591/A14). We also provide the asteroid absolute magnitudes and G12 slope parameters retrieved from Oszkiewicz et al. (2012), the number of observations used in each IRC band (S9W and L18W), plus the heliocentric and geocentric distances and phase angle (r, Delta, alpha) based on the ephemerides taken from the MIRIADE service (http://vo.imcce.fr/webservices/miriade/?ephemph). (1 data file).
NASA Technical Reports Server (NTRS)
Ford, Holland C.; Ciardullo, Robin
1988-01-01
Nova shells are characteristically prolate with equatorial bands and polar caps. Failure to account for the geometry can lead to large errors in expansion parallaxes for individual novae. When simple prescriptions are used for deriving expansion parallaxes from an ensemble of randomly oriented prolate spheroids, the average distance will be too small by factors of 10 to 15 percent. The absolute magnitudes of the novae will be underestimated, and the resulting distance scale will be too small by the same factors. If observations of partially resolved nova shells select for large inclinations, the systematic error in the resulting distance scale could easily be 20 to 30 percent. Extinction by dust in the bulge of M31 may broaden and shift the intrinsic distribution of maximum nova magnitudes versus decay rates. We investigated this possibility by projecting Arp's and Rosino's novae onto a composite B - 6200A color map of M31's bulge. Thirty-two of the 86 novae projected onto a smooth background with no underlying structure due to the presence of a dust cloud along the line of sight. The distribution of maximum magnitudes versus fade rates for these unreddened novae is indistinguishable from the distribution for the entire set of novae. It is concluded that novae suffer very little extinction from the filamentary and patchy distribution of dust seen in the bulge of M31. Time-averaged B and H alpha nova luminosity functions are potentially powerful new ways to use novae as standard candles. Modern CCD observations and the photographic light curves of M31 novae found during the last 60 years were analyzed to show that these functions are power laws. Consequently, unless the eruption times for novae are known, the data cannot be used to obtain distances.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dance, M; Chera, B; Falchook, A
2015-06-15
Purpose: Validate the consistency of a gradient-based segmentation tool to facilitate accurate delineation of PET/CT-based GTVs in head and neck cancers by comparing against hybrid PET/MR-derived GTV contours. Materials and Methods: A total of 18 head and neck target volumes (10 primary and 8 nodal) were retrospectively contoured using a gradient-based segmentation tool by two observers. Each observer independently contoured each target five times. Inter-observer variability was evaluated via absolute percent differences. Intra-observer variability was examined by percentage uncertainty. All target volumes were also contoured using the SUV percent threshold method. The thresholds were explored case by case so that the derived volume matched the gradient-based volume. Dice similarity coefficients (DSC) were calculated to determine overlap of PET/CT GTVs and PET/MR GTVs. Results: Levene's test showed no statistically significant difference between the variances of the observers' gradient-derived contours. However, the absolute difference between the observers' volumes was 10.83%, with a range from 0.39% to 42.89%. PET-avid regions with qualitatively non-uniform shapes and intensity levels had a higher absolute percent difference, near 25%, while regions with uniform shapes and intensity levels had an absolute percent difference of 2% between observers. The average percentage uncertainty for the two observers was 4.83% and 7%, respectively. As the volume of the gradient-derived contours increased, the SUV threshold percent needed to match the volume decreased. Dice coefficients showed good agreement of the PET/CT and PET/MR GTVs, with an average DSC value across all volumes of 0.69. Conclusion: Gradient-based segmentation of PET volume showed good consistency in general but can vary considerably for non-uniform target shapes and intensity levels. 
PET/CT-derived GTV contours stemming from the gradient-based tool show good agreement with the anatomically and metabolically more accurate PET/MR-derived GTV contours, but tumor delineation accuracy can be further improved with the use of PET/MR.
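The Dice similarity coefficient used to compare the GTV pairs has a compact definition; a minimal sketch on toy binary masks (the voxel sets are invented):

```python
def dice(a, b):
    """Dice similarity coefficient between two binary masks, given as sets of
    voxel indices: DSC = 2|A ∩ B| / (|A| + |B|); 1 = perfect overlap, 0 = disjoint."""
    a, b = set(a), set(b)
    return 2.0 * len(a & b) / (len(a) + len(b))

# Two hypothetical 10x10-voxel GTVs, the second shifted by 2 voxels:
gtv_ct = {(x, y) for x in range(10) for y in range(10)}
gtv_mr = {(x, y) for x in range(2, 12) for y in range(10)}
print(round(dice(gtv_ct, gtv_mr), 2))
```

Note that DSC rewards overlap relative to combined size, so two equal-volume contours shifted by 20 percent of their width still score 0.8; values near the reported 0.69 indicate good but not exact agreement.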
Error Detection in Mechanized Classification Systems
ERIC Educational Resources Information Center
Hoyle, W. G.
1976-01-01
When documentary material is indexed by a mechanized classification system, and the results judged by trained professionals, the number of documents in disagreement, after suitable adjustment, defines the error rate of the system. In a test case disagreement was 22 percent and, of this 22 percent, the computer correctly identified two-thirds of…
NASA Technical Reports Server (NTRS)
Orme, John S.; Schkolnik, Gerard S.
1995-01-01
Performance Seeking Control (PSC), an onboard, adaptive, real-time optimization algorithm, relies upon an onboard propulsion system model. Flight results illustrated propulsion system performance improvements as calculated by the model. These improvements were subject to uncertainty arising from modeling error. Thus, to quantify uncertainty in the PSC performance improvements, modeling accuracy must be assessed. A flight test approach to verify PSC-predicted increases in thrust (FNP) and absolute levels of fan stall margin is developed and applied to flight test data. Application of the excess thrust technique shows that increases of FNP agree to within 3 percent of full-scale measurements for most conditions. Accuracy to these levels is significant because uncertainty bands may now be applied to the performance improvements provided by PSC. Assessment of PSC fan stall margin modeling accuracy was completed with analysis of in-flight stall tests. Results indicate that the model overestimates the stall margin by 5 to 10 percent. Because PSC achieves performance gains by using available stall margin, this overestimation may represent performance improvements to be recovered with increased modeling accuracy. Assessment of thrust and stall margin modeling accuracy provides a critical piece for a comprehensive understanding of PSC's capabilities and limitations.
Wetherbee, Gregory A.; Martin, RoseAnn
2018-06-29
The U.S. Geological Survey Precipitation Chemistry Quality Assurance project operated five distinct programs to provide external quality assurance monitoring for the National Atmospheric Deposition Program’s (NADP) National Trends Network and Mercury Deposition Network during 2015–16. The National Trends Network programs include (1) a field audit program to evaluate sample contamination and stability, (2) an interlaboratory comparison program to evaluate analytical laboratory performance, and (3) a colocated sampler program to evaluate bias and variability attributed to automated precipitation samplers. The Mercury Deposition Network programs include the (4) system blank program and (5) an interlaboratory comparison program. The results indicate that NADP data continue to be of sufficient quality for the analysis of spatial distributions and time trends for chemical constituents in wet deposition.The field audit program results indicate increased sample contamination for calcium, magnesium, and potassium relative to 2010 levels, and slight fluctuation in sodium contamination. Nitrate contamination levels dropped slightly during 2014–16, and chloride contamination leveled off between 2007 and 2016. Sulfate contamination is similar to the 2000 level. Hydrogen ion contamination has steadily decreased since 2012. Losses of ammonium and nitrate resulting from potential sample instability were negligible.The NADP Central Analytical Laboratory produced interlaboratory comparison results with low bias and variability compared to other domestic and international laboratories that support atmospheric deposition monitoring. Significant absolute bias above the magnitudes of the detection limits was observed for nitrate and sulfate concentrations, but no analyte determinations exceeded the detection limits for blanks.Colocated sampler program results from dissimilar colocated collectors indicate that the retrofit of the National Trends Network with N-CON Systems Company, Inc. 
precipitation collectors could cause substantial shifts in NADP annual deposition (concentration multiplied by depth) values. Median weekly relative percent differences for analyte concentrations ranged from -4 to +76 percent for cations, from 5 to 6 percent for ammonium, from +14 to +25 percent for anions, and from -21 to +8 percent for hydrogen ion. By comparison, weekly absolute concentration differences for paired identical N-CON Systems Company, Inc., collectors ranged from 4 to 22 percent for cations, from 2 to 9 percent for anions, from 4 to 5 percent for ammonium, and from 13 to 14 percent for hydrogen ion. The N-CON Systems Company, Inc., collector caught more precipitation than the Aerochem Metrics Model 301 collector (ACM) at the WA99/99WA sites, but it typically caught slightly less precipitation than the ACM at ND11/11ND, sites that receive more wind and snow than WA99/99WA. Paired, identical OTT Pluvio-2 and ETI Noah IV precipitation gages were operated at the same sites. Median absolute percent differences for daily measured precipitation depths ranged from 0 to 7 percent. Annual absolute differences ranged from 0.08 percent (ETI Noah IV precipitation gages) to 11 percent (OTT Pluvio-2 precipitation gages). The Mercury Deposition Network programs include the system blank program and an interlaboratory comparison program. System blank results indicate that maximum total mercury contamination concentrations in samples were less than the third percentile of all Mercury Deposition Network sample concentrations (1.098 nanograms per liter; ng/L). The Mercury Analytical Laboratory produced chemical concentration results with low bias and variability compared with other domestic and international laboratories that support atmospheric-deposition monitoring. The laboratory's performance results indicate a +1-ng/L shift in bias between 2015 (-0.4 ng/L) and 2016 (+0.5 ng/L).
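The relative percent differences quoted for the colocated collectors are conventionally referenced to the pair mean; a sketch assuming that convention (the concentrations are hypothetical):

```python
def relative_percent_difference(a, b):
    """Signed relative percent difference between a paired measurement,
    referenced to the pair mean: RPD = 100 * (a - b) / ((a + b) / 2)."""
    return 100.0 * (a - b) / ((a + b) / 2.0)

# Hypothetical paired weekly concentrations (mg/L) from two collectors:
print(round(relative_percent_difference(1.10, 1.00), 1))
```

Referencing the mean rather than either collector keeps the metric symmetric: swapping the two collectors only flips the sign.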
40 CFR 53.56 - Test for effect of variations in ambient pressure.
Code of Federal Regulations, 2012 CFR
2012-07-01
... the tests and shall be checked at zero and at least one flow rate within ±3 percent of 16.7 L/min... this test, the absolute difference in values calculated in Equation 21 of this paragraph (g)(4) must... absolute difference between the mean ambient air pressure indicated by the test sampler and the ambient...
40 CFR 53.56 - Test for effect of variations in ambient pressure.
Code of Federal Regulations, 2013 CFR
2013-07-01
... the tests and shall be checked at zero and at least one flow rate within ±3 percent of 16.7 L/min... this test, the absolute difference in values calculated in Equation 21 of this paragraph (g)(4) must... absolute difference between the mean ambient air pressure indicated by the test sampler and the ambient...
40 CFR 53.56 - Test for effect of variations in ambient pressure.
Code of Federal Regulations, 2014 CFR
2014-07-01
... the tests and shall be checked at zero and at least one flow rate within ±3 percent of 16.7 L/min... this test, the absolute difference in values calculated in Equation 21 of this paragraph (g)(4) must... absolute difference between the mean ambient air pressure indicated by the test sampler and the ambient...
40 CFR 53.56 - Test for effect of variations in ambient pressure.
Code of Federal Regulations, 2011 CFR
2011-07-01
... the tests and shall be checked at zero and at least one flow rate within ±3 percent of 16.7 L/min... this test, the absolute difference in values calculated in Equation 21 of this paragraph (g)(4) must... absolute difference between the mean ambient air pressure indicated by the test sampler and the ambient...
NASA Astrophysics Data System (ADS)
Braun, Jaroslav; Štroner, Martin; Urban, Rudolf
2015-05-01
All surveying instruments and their measurements suffer from errors. To refine the results, it is necessary to use procedures that restrict the influence of instrument errors on the measured values, or to apply numerical corrections. In precise engineering surveying applications, the accuracy of distances, usually realized over relatively short ranges, is a key parameter limiting the resulting accuracy of the determined values (coordinates, etc.). To determine the size of the systematic and random errors of the measured distances, tests were performed with the idea of suppressing the random error by averaging repeated measurements, and of reducing the influence of systematic errors by identifying their absolute size on the absolute baseline realized in the geodetic laboratory of the Faculty of Civil Engineering, CTU in Prague. Sixteen concrete pillars with forced centerings were set up, and the absolute distances between the points were determined with a standard deviation of 0.02 mm using a Leica Absolute Tracker AT401. For any distance measured by the calibrated instruments (up to the length of the testing baseline, i.e., 38.6 m), the error correction of the distance meter can now be determined in two ways: by interpolation on the raw data, or by using a correction function derived from a prior FFT transformation. The quality of this calibration and correction procedure was tested experimentally on three instruments (Trimble S6 HP, Topcon GPT-7501, Trimble M3) using the Leica Absolute Tracker AT401. The correction procedure reduced the standard deviation of the measured distances significantly, to less than 0.6 mm. 
For the Topcon GPT-7501, the nominal standard deviation is 2 mm; 2.8 mm was achieved without corrections and 0.55 mm after corrections. For the Trimble M3, the nominal standard deviation is 3 mm; 1.1 mm was achieved without corrections and 0.58 mm after corrections. For the Trimble S6, the nominal standard deviation is 1 mm; 1.2 mm was achieved without corrections and 0.51 mm after corrections. The proposed calibration and correction procedure is, in our opinion, very suitable for increasing the accuracy of electronic distance measurement, and it allows a common surveying instrument to achieve uncommonly high precision.
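The first correction option, interpolation on the raw baseline residuals, can be sketched as follows; the baseline distances and error values below are invented for illustration, not the laboratory's calibration data:

```python
def correction(distance, baseline_d, baseline_err):
    """Linearly interpolate the distance meter's error (measured minus
    reference, in mm) on the calibrated baseline; the correction to apply
    is the negated interpolated error."""
    pairs = list(zip(baseline_d, baseline_err))
    for (d0, e0), (d1, e1) in zip(pairs, pairs[1:]):
        if d0 <= distance <= d1:
            t = (distance - d0) / (d1 - d0)
            return -(e0 + t * (e1 - e0))
    raise ValueError("distance outside the calibrated baseline")

# Invented calibration points: baseline distances (m) and errors (mm).
d = [5.0, 10.0, 20.0, 38.6]
err = [0.8, -0.4, 0.6, -0.2]
raw = 15.0                                       # a raw measured distance, m
corrected = raw + correction(raw, d, err) / 1000.0
print(round(corrected, 5))
```

The FFT-based variant the abstract mentions would instead fit a smooth (often periodic) correction function to the same residuals, which suppresses noise in the individual calibration points.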
Estimation of clear-sky insolation using satellite and ground meteorological data
NASA Technical Reports Server (NTRS)
Staylor, W. F.; Darnell, W. L.; Gupta, S. K.
1983-01-01
Ground-based pyranometer measurements were combined with meteorological data from the Tiros N satellite in order to estimate clear-sky insolation at five U.S. sites for five weeks during the spring of 1979. The estimates were used to develop a semi-empirical model of clear-sky insolation for the interpretation of input data from the Tiros Operational Vertical Sounder (TOVS). Using only satellite data, the estimated standard errors in the model were about 2 percent. The introduction of ground-based data reduced errors to around 1 percent. It is shown that, although adding ground-based data reduced the model errors by only about 1 percent, TOVS data products alone are still adequate for estimating clear-sky insolation.
Failure analysis and modeling of a multicomputer system. M.S. Thesis
NASA Technical Reports Server (NTRS)
Subramani, Sujatha Srinivasan
1990-01-01
This thesis describes the results of an extensive measurement-based analysis of real error data collected from a 7-machine DEC VaxCluster multicomputer system. In addition to evaluating basic system error and failure characteristics, we develop reward models to analyze the impact of failures and errors on the system. The results show that, although 98 percent of errors in the shared resources recover, they result in 48 percent of all system failures. The analysis of rewards shows that the expected reward rate for the VaxCluster decreases to 0.5 in 100 days for a 3-out-of-7 model, which is well over 100 times that for a 7-out-of-7 model. A comparison of the reward rates for a range of k-out-of-n models indicates that the maximum increase in reward rate (0.25) occurs in going from the 6-out-of-7 model to the 5-out-of-7 model. The analysis also shows that software errors have the lowest reward (0.2 vs. 0.91 for network errors). The large loss in reward rate for software errors is due to the fact that a large proportion (94 percent) of software errors lead to failure. In comparison, the high reward rate for network errors is due to fast recovery from a majority of these errors (median recovery duration is 0 seconds).
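The k-out-of-n structure underlying the reward comparison can be sketched with a simple availability calculation. This is an illustrative sketch only: it assumes independent machines, each up with probability p, whereas the thesis builds its reward models from measured error data.

```python
from math import comb

def k_out_of_n_availability(k, n, p):
    """Probability that at least k of n machines are up, assuming
    independent machines each up with probability p (binomial tail).
    Hypothetical helper, not the thesis's measurement-based model."""
    return sum(comb(n, j) * p**j * (1 - p) ** (n - j) for j in range(k, n + 1))
```

With any fixed p, relaxing the requirement from 7-out-of-7 to 3-out-of-7 raises availability sharply, mirroring the direction of the reward-rate comparison in the abstract.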
Demand Forecasting: An Evaluation of DODs Accuracy Metric and Navys Procedures
2016-06-01
Keywords: inventory management improvement plan, mean of absolute scaled error, lead-time adjusted squared error, forecast accuracy, benchmarking, naïve method.
Twice cutting method reduces tibial cutting error in unicompartmental knee arthroplasty.
Inui, Hiroshi; Taketomi, Shuji; Yamagami, Ryota; Sanada, Takaki; Tanaka, Sakae
2016-01-01
Bone cutting error can be one of the causes of malalignment in unicompartmental knee arthroplasty (UKA). The amount of cutting error in total knee arthroplasty has been reported, but none have investigated cutting error in UKA. The purpose of this study was to reveal the amount of cutting error in UKA when an open cutting guide was used, and to clarify whether cutting the tibia horizontally twice using the same cutting guide reduces the cutting error. We measured the alignment of the tibial cutting guides, the first-cut cutting surfaces, and the second-cut cutting surfaces using a navigation system in 50 UKAs. Cutting error was defined as the angular difference between the cutting guide and the cutting surface. The mean absolute first-cut cutting error was 1.9° (1.1° varus) in the coronal plane and 1.1° (0.6° anterior slope) in the sagittal plane, whereas the mean absolute second-cut cutting error was 1.1° (0.6° varus) in the coronal plane and 1.1° (0.4° anterior slope) in the sagittal plane. Cutting the tibia horizontally twice significantly reduced the cutting error in the coronal plane (P<0.05). Our study demonstrated that in UKA, cutting the tibia horizontally twice using the same cutting guide reduced cutting error in the coronal plane. Copyright © 2014 Elsevier B.V. All rights reserved.
Methods for estimating flood frequency in Montana based on data through water year 1998
Parrett, Charles; Johnson, Dave R.
2004-01-01
Annual peak discharges having recurrence intervals of 2, 5, 10, 25, 50, 100, 200, and 500 years (T-year floods) were determined for 660 gaged sites in Montana and in adjacent areas of Idaho, Wyoming, and Canada, based on data through water year 1998. The updated flood-frequency information was subsequently used in regression analyses, either ordinary or generalized least squares, to develop equations relating T-year floods to various basin and climatic characteristics, equations relating T-year floods to active-channel width, and equations relating T-year floods to bankfull width. The equations can be used to estimate flood frequency at ungaged sites. Montana was divided into eight regions, within which flood characteristics were considered to be reasonably homogeneous, and the three sets of regression equations were developed for each region. A measure of the overall reliability of the regression equations is the average standard error of prediction. The average standard errors of prediction for the equations based on basin and climatic characteristics ranged from 37.4 percent to 134.1 percent. Average standard errors of prediction for the equations based on active-channel width ranged from 57.2 percent to 141.3 percent. Average standard errors of prediction for the equations based on bankfull width ranged from 63.1 percent to 155.5 percent. In most regions, the equations based on basin and climatic characteristics generally had smaller average standard errors of prediction than equations based on active-channel or bankfull width. An exception was the Southeast Plains Region, where all equations based on active-channel width had smaller average standard errors of prediction than equations based on basin and climatic characteristics or bankfull width. Methods for weighting estimates derived from the basin- and climatic-characteristic equations and the channel-width equations also were developed. 
The weights were based on the cross correlation of residuals from the different methods and the average standard errors of prediction. When all three methods were combined, the average standard errors of prediction ranged from 37.4 percent to 120.2 percent. Weighting of estimates reduced the standard errors of prediction for all T-year flood estimates in four regions, reduced the standard errors of prediction for some T-year flood estimates in two regions, and provided no reduction in average standard error of prediction in two regions. A computer program for solving the regression equations, weighting estimates, and determining reliability of individual estimates was developed and placed on the USGS Montana District World Wide Web page. A new regression method, termed Region of Influence regression, also was tested. Test results indicated that the Region of Influence method was not as reliable as the regional equations based on generalized least squares regression. Two additional methods for estimating flood frequency at ungaged sites located on the same streams as gaged sites also are described. The first method, based on a drainage-area-ratio adjustment, is intended for use on streams where the ungaged site of interest is located near a gaged site. The second method, based on interpolation between gaged sites, is intended for use on streams that have two or more streamflow-gaging stations.
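The weighting of T-year flood estimates from the different methods can be illustrated with plain inverse-variance weights derived from each method's average standard error of prediction. This is a simplified sketch: the report's actual weights also account for the cross correlation of residuals between methods, which is omitted here, and the function name is illustrative.

```python
def weight_estimates(estimates, sep_percent):
    """Combine flood estimates from several methods by inverse-variance
    weighting, using each method's average standard error of prediction
    (SEP, in percent) as the uncertainty measure. Simplified sketch that
    ignores cross correlation of residuals between methods."""
    weights = [1.0 / s**2 for s in sep_percent]
    total = sum(weights)
    return sum(w * e for w, e in zip(weights, estimates)) / total
```

A method with a much smaller standard error of prediction dominates the combined estimate, which is the intent of the weighting.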
Grant, R. Stephen; Skavroneck, Steven
1980-01-01
The top five ranking predictive equations were as follows: Tsivoglou-Neal with 18 percent mean error, Negulescu-Rojanski with 21 percent, Padden-Gloyna with 23 percent, Thackston-Krenkel with 29 percent, and Bansal with 32 percent. (USGS).
78 FR 77399 - Basic Health Program: Proposed Federal Funding Methodology for Program Year 2015
Federal Register 2010, 2011, 2012, 2013, 2014
2013-12-23
... American Indians and Alaska Natives F. Example Application of the BHP Funding Methodology III. Collection... effectively 138 percent due to the application of a required 5 percent income disregard in determining the... correct errors in applying the methodology (such as mathematical errors). Under section 1331(d)(3)(ii) of...
Evaluating Equating Results: Percent Relative Error for Chained Kernel Equating
ERIC Educational Resources Information Center
Jiang, Yanlin; von Davier, Alina A.; Chen, Haiwen
2012-01-01
This article presents a method for evaluating equating results. Within the kernel equating framework, the percent relative error (PRE) for chained equipercentile equating was computed under the nonequivalent groups with anchor test (NEAT) design. The method was applied to two data sets to obtain the PRE, which can be used to measure equating…
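The percent relative error in kernel equating compares moments of the equated-score distribution with those of the reference form. A minimal sketch for discrete score distributions follows; the function name, signature, and the example distributions in the test are illustrative, not the article's implementation.

```python
def percent_relative_error(eq_scores, eq_probs, ref_scores, ref_probs, p):
    """Percent relative error (PRE) for the p-th moment, in the spirit of
    kernel equating: PRE(p) = 100 * (mu_p(equated) - mu_p(reference))
                              / mu_p(reference).
    Each score distribution is given as scores plus their probabilities."""
    mu_eq = sum(pr * s**p for s, pr in zip(eq_scores, eq_probs))
    mu_ref = sum(pr * s**p for s, pr in zip(ref_scores, ref_probs))
    return 100.0 * (mu_eq - mu_ref) / mu_ref
```

A PRE near zero for the first several moments indicates that the equated scores closely reproduce the reference-form distribution.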
Altitude Registration of Limb-Scattered Radiation
NASA Technical Reports Server (NTRS)
Moy, Leslie; Bhartia, Pawan K.; Jaross, Glen; Loughman, Robert; Kramarova, Natalya; Chen, Zhong; Taha, Ghassan; Chen, Grace; Xu, Philippe
2017-01-01
One of the largest constraints on the retrieval of accurate ozone profiles from UV backscatter limb sounding sensors is altitude registration. Two methods, Rayleigh scattering attitude sensing (RSAS) and the absolute radiance residual method (ARRM), can determine altitude registration to the accuracy necessary for long-term ozone monitoring. Both compare model calculations of radiances to measured radiances and are independent of onboard tracking devices. RSAS determines absolute altitude errors but, because the method is susceptible to aerosol interference, it is limited to latitudes and time periods with minimal aerosol contamination. ARRM, a new technique introduced in this paper, can be applied across all seasons and altitudes; however, it is appropriate only for relative altitude error estimates. The application of RSAS to Limb Profiler (LP) measurements from the Ozone Mapping and Profiler Suite (OMPS) on board the Suomi NPP (SNPP) satellite indicates tangent height (TH) errors greater than 1 km with an absolute accuracy of +/-200 m. Results using ARRM indicate an approx. 300 to 400 m intra-orbital TH change, varying seasonally by +/-100 m, likely due either to errors in the spacecraft pointing or in the geopotential height (GPH) data that we use in our analysis. ARRM shows a change of approx. 200 m over 5 years with a relative (long-term) accuracy of 100 m outside the polar regions.
12 CFR 324.52 - Simple risk-weight approach (SRWA).
Code of Federal Regulations, 2014 CFR
2014-01-01
... greater than or equal to −1 (that is, between zero and −1), then E equals the absolute value of RVC. If...) Zero percent risk weight equity exposures. An equity exposure to a sovereign, the Bank for..., an MDB, and any other entity whose credit exposures receive a zero percent risk weight under § 324.32...
40 CFR 60.334 - Monitoring of operations.
Code of Federal Regulations, 2010 CFR
2010-07-01
... continuous monitoring system to monitor and record the fuel consumption and the ratio of water or steam to...) On a ppm basis (for NOX) and a percent O2 basis for oxygen; or (ii) On a ppm at 15 percent O2 basis... temperature (Ta), and minimum combustor inlet absolute pressure (Po) into the ISO correction equation. (iii...
Koltun, G.F.; Holtschlag, David J.
2010-01-01
Bootstrapping techniques employing random subsampling were used with the AFINCH (Analysis of Flows In Networks of CHannels) model to gain insights into the effects of variation in streamflow-gaging-network size and composition on the accuracy and precision of streamflow estimates at ungaged locations in the 0405 (Southeast Lake Michigan) hydrologic subregion. AFINCH uses stepwise-regression techniques to estimate monthly water yields from catchments based on geospatial-climate and land-cover data in combination with available streamflow and water-use data. Calculations are performed on a hydrologic-subregion scale for each catchment and stream reach contained in a National Hydrography Dataset Plus (NHDPlus) subregion. Water yields from contributing catchments are multiplied by catchment areas and resulting flow values are accumulated to compute streamflows in stream reaches which are referred to as flow lines. AFINCH imposes constraints on water yields to ensure that observed streamflows are conserved at gaged locations. Data from the 0405 hydrologic subregion (referred to as Southeast Lake Michigan) were used for the analyses. Daily streamflow data were measured in the subregion for 1 or more years at a total of 75 streamflow-gaging stations during the analysis period which spanned water years 1971–2003. The number of streamflow gages in operation each year during the analysis period ranged from 42 to 56 and averaged 47. 
Six sets (one set for each censoring level), each composed of 30 random subsets of the 75 streamflow gages, were created by censoring (removing) approximately 10, 20, 30, 40, 50, and 75 percent of the streamflow gages (the actual percentage of operating streamflow gages censored for each set varied from year to year, and within the year from subset to subset, but averaged approximately the indicated percentages). Streamflow estimates for six flow lines each were aggregated by censoring level, and results were analyzed to assess (a) how the size and composition of the streamflow-gaging network affected the average apparent errors and variability of the estimated flows and (b) whether results for certain months were more variable than for others. The six flow lines were categorized into one of three types depending upon their network topology and position relative to operating streamflow-gaging stations. Statistical analysis of the model results indicates that (1) less precise (that is, more variable) estimates resulted from smaller streamflow-gaging networks as compared to larger streamflow-gaging networks, (2) precision of AFINCH flow estimates at an ungaged flow line is improved by operation of one or more streamflow gages upstream and (or) downstream in the enclosing basin, (3) no consistent seasonal trend in estimate variability was evident, and (4) flow lines from ungaged basins appeared to exhibit the smallest absolute apparent percent errors (APEs) and smallest changes in average APE as a function of increasing censoring level. The counterintuitive results described in item (4) above likely reflect both the nature of the base-streamflow estimate from which the errors were computed and insensitivity in the average model-derived estimates to changes in the streamflow-gaging-network size and composition.
Another analysis demonstrated that errors for flow lines in ungaged basins have the potential to be much larger than indicated by their APEs if measured relative to their true (but unknown) flows. “Missing gage” analyses, based on examination of censoring subset results where the streamflow gage of interest was omitted from the calibration data set, were done to better understand the true error characteristics for ungaged flow lines as a function of network size. Results examined for 2 water years indicated that the probability of computing a monthly streamflow estimate within 10 percent of the true value with AFINCH decreased from greater than 0.9 at about a 10-percent network-censoring level to less than 0.6 as the censoring level approached 75 percent. In addition, estimates for typically dry months tended to be characterized by larger percent errors than typically wetter months.
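The random-subsampling (censoring) step described above can be sketched as follows; the gage identifiers, seed, and function name are placeholders, not the AFINCH implementation.

```python
import random

def censor_network(gages, censor_fraction, n_subsets, seed=0):
    """Create random subsets of a streamflow-gaging network by censoring
    (removing) roughly `censor_fraction` of the gages, in the spirit of
    the bootstrapping experiment described in the abstract."""
    rng = random.Random(seed)  # fixed seed for reproducible subsets
    n_keep = round(len(gages) * (1 - censor_fraction))
    return [rng.sample(gages, n_keep) for _ in range(n_subsets)]

# 30 random subsets of a 75-gage network at the 20-percent censoring level
subsets = censor_network(list(range(75)), 0.20, 30)
```

Each subset would then be used to recalibrate the model, and the spread of the resulting flow estimates measures the precision lost at that censoring level.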
A simple analytical infiltration model for short-duration rainfall
NASA Astrophysics Data System (ADS)
Wang, Kaiwen; Yang, Xiaohua; Liu, Xiaomang; Liu, Changming
2017-12-01
Many infiltration models have been proposed to simulate the infiltration process. Different initial soil conditions and non-uniform initial water content can lead to infiltration simulation errors, especially for short-duration rainfall (SHR). Few infiltration models are specifically derived to eliminate the errors caused by complex initial soil conditions. We present a simple analytical infiltration model for SHR infiltration simulation, the Short-duration Infiltration Process (SHIP) model. The infiltration simulated by five models (SHIP (high), SHIP (middle), SHIP (low), Philip, and Parlange) was compared based on numerical experiments and soil column experiments. In the numerical experiments, the SHIP (middle) and Parlange models had robust solutions for SHR infiltration simulation of 12 typical soils under different initial soil conditions: the absolute values of percent bias were less than 12% and the values of Nash-Sutcliffe efficiency were greater than 0.83. Additionally, in the soil column experiments, infiltration rate fluctuated over a range because of non-uniform initial water content. The SHIP (high) and SHIP (low) models can simulate an infiltration range, which successfully covered the fluctuation range of the observed infiltration rate. Given the robustness of its solutions and its coverage of the fluctuation range of infiltration rate, the SHIP model can be integrated into hydrologic models to simulate the SHR infiltration process and benefit flood forecasting.
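One of the comparison models, the Philip model, has a simple closed form that can serve as a reference implementation. These are the standard two-term Philip equations, not the SHIP model itself, and the parameter values in the test are arbitrary.

```python
import math

def philip_infiltration_rate(t, S, K):
    """Philip two-term infiltration rate i(t) = S / (2*sqrt(t)) + K,
    where S is the sorptivity and K is a constant related to the
    hydraulic conductivity; t > 0."""
    return S / (2.0 * math.sqrt(t)) + K

def philip_cumulative(t, S, K):
    """Cumulative infiltration I(t) = S*sqrt(t) + K*t (the time
    integral of the rate above)."""
    return S * math.sqrt(t) + K * t
```

The rate decreases toward K as the soil wets up, which is the qualitative behavior the infiltration comparisons in the abstract evaluate.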
NASA Technical Reports Server (NTRS)
Chang, Alfred T. C.; Chiu, Long S.; Wilheit, Thomas T.
1993-01-01
Global averages and random errors associated with the monthly oceanic rain rates derived from Special Sensor Microwave/Imager (SSM/I) data using the technique developed by Wilheit et al. (1991) are computed. Accounting for the beam-filling bias, a global annual average rain rate of 1.26 m is computed. The error estimation scheme is based on the existence of independent (morning and afternoon) estimates of the monthly mean. Calculations show overall random errors of about 50-60 percent for each 5 deg x 5 deg box. The results are insensitive to different sampling strategies (odd and even days of the month). Comparison of the SSM/I estimates with raingage data collected at Pacific atoll stations showed a low bias of about 8 percent, a correlation of 0.7, and an rms difference of 55 percent.
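The paired-estimate error scheme can be sketched with the standard device that, for two independent estimates of the same quantity, the variance of each estimate is half the mean squared difference of the pair. The helper below is a hypothetical illustration of that idea, not the authors' code.

```python
def random_error_from_pair(am, pm):
    """Estimate the random error of a monthly mean from two independent
    (e.g. morning and afternoon) estimates: var(estimate) is taken as
    half the mean squared difference between the paired estimates."""
    diffs = [a - b for a, b in zip(am, pm)]
    msd = sum(d * d for d in diffs) / len(diffs)
    return (msd / 2.0) ** 0.5
```

Identical pairs imply zero random error; a constant disagreement of 2 units implies a per-estimate random error of sqrt(2).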
NASA Astrophysics Data System (ADS)
González-Jorge, Higinio; Riveiro, Belén; Varela, María; Arias, Pedro
2012-07-01
A low-cost image orthorectification tool based on compact cameras and scale bars is developed to obtain the main geometric parameters of masonry bridges for inventory and routine inspection purposes. The technique is validated on three different bridges by comparison with laser scanning data. The surveying process is delicate and must strike a balance between working distance and angle. Three different cameras are used in the study to establish the relationship between error and camera model. Results show that the error does not depend on the length of the bridge element, the type of bridge, or the type of element. Error values for all the cameras are below 4 percent (for 95 percent of the data). A compact Canon camera, the model with the best technical specifications, shows an error level ranging from 0.5 to 1.5 percent.
Refractive errors in medical students in Singapore.
Woo, W W; Lim, K A; Yang, H; Lim, X Y; Liew, F; Lee, Y S; Saw, S M
2004-10-01
Refractive errors are becoming more of a problem in many societies, with prevalence rates of myopia in many urban Asian countries reaching epidemic proportions. This study aims to determine the prevalence rates of various refractive errors in Singapore medical students. 157 second-year medical students (aged 19-23 years) in Singapore were examined. Refractive error measurements were determined using a stand-alone autorefractor. Additional demographic data were obtained via questionnaires filled in by the students. The prevalence rate of myopia in Singapore medical students was 89.8 percent (spherical equivalent (SE) of at least -0.50 D). Hyperopia was present in 1.3 percent (SE more than +0.50 D) of the participants, and the overall astigmatism prevalence rate was 82.2 percent (cylinder of at least 0.50 D). The prevalence rates of myopia and astigmatism in second-year Singapore medical students are among the highest in the world.
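The cutoffs in the abstract translate directly into a classification rule, using the usual convention SE = sphere + cylinder/2. The helper below is illustrative, not the study's analysis code.

```python
def classify_refraction(sphere, cylinder):
    """Classify refractive error from autorefractor sphere/cylinder values
    (diopters), using the abstract's cutoffs: myopia if spherical
    equivalent (SE) <= -0.50 D, hyperopia if SE > +0.50 D, and
    astigmatism if |cylinder| >= 0.50 D. SE = sphere + cylinder / 2."""
    se = sphere + cylinder / 2.0
    labels = []
    if se <= -0.50:
        labels.append("myopia")
    elif se > 0.50:
        labels.append("hyperopia")
    if abs(cylinder) >= 0.50:
        labels.append("astigmatism")
    return labels
```

Note that myopia and astigmatism are not mutually exclusive, which is why their prevalence rates (89.8 and 82.2 percent) can both be high.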
Accuracy of Robotic Radiosurgical Liver Treatment Throughout the Respiratory Cycle
DOE Office of Scientific and Technical Information (OSTI.GOV)
Winter, Jeff D.; Wong, Raimond; Swaminath, Anand
Purpose: To quantify random uncertainties in robotic radiosurgical treatment of liver lesions with real-time respiratory motion management. Methods and Materials: We conducted a retrospective analysis of 27 liver cancer patients treated with robotic radiosurgery over 118 fractions. The robotic radiosurgical system uses orthogonal x-ray images to determine internal target position and correlates this position with an external surrogate to provide robotic corrections of linear accelerator positioning. Verification and update of this internal–external correlation model was achieved using periodic x-ray images collected throughout treatment. To quantify random uncertainties in targeting, we analyzed logged tracking information and isolated x-ray images collected immediately before beam delivery. For translational correlation errors, we quantified the difference between correlation model–estimated target position and actual position determined by periodic x-ray imaging. To quantify prediction errors, we computed the mean absolute difference between the predicted coordinates and actual modeled position calculated 115 milliseconds later. We estimated overall random uncertainty by quadratically summing correlation, prediction, and end-to-end targeting errors. We also investigated relationships between tracking errors and motion amplitude using linear regression. Results: The 95th percentile absolute correlation errors in each direction were 2.1 mm left–right, 1.8 mm anterior–posterior, 3.3 mm cranio–caudal, and 3.9 mm 3-dimensional radial, whereas 95th percentile absolute radial prediction errors were 0.5 mm. Overall 95th percentile random uncertainty was 4 mm in the radial direction. Prediction errors were strongly correlated with modeled target amplitude (r=0.53-0.66, P<.001), whereas only weak correlations existed for correlation errors.
Conclusions: Study results demonstrate that model correlation errors are the primary random source of uncertainty in Cyberknife liver treatment and, unlike prediction errors, are not strongly correlated with target motion amplitude. Aggregate 3-dimensional radial position errors presented here suggest the target will be within 4 mm of the target volume for 95% of the beam delivery.
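The quadratic summation used to combine error sources is a root-sum-square combination, which can be written down directly (standard error propagation for independent sources; the function name is illustrative):

```python
import math

def overall_random_uncertainty(correlation_err, prediction_err, e2e_err):
    """Root-sum-square (quadratic) combination of independent error
    sources, as used in the abstract to estimate overall random
    uncertainty from correlation, prediction, and end-to-end errors."""
    return math.sqrt(correlation_err**2 + prediction_err**2 + e2e_err**2)
```

Because the terms add in quadrature, the largest source (here the correlation error) dominates the total, which is why the abstract identifies it as the primary source of uncertainty.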
Bobo, William V; Angleró, Gabriela C; Jenkins, Gregory; Hall-Flavin, Daniel K; Weinshilboum, Richard; Biernacka, Joanna M
2016-05-01
The study aimed to define thresholds of clinically significant change in 17-item Hamilton Depression Rating Scale (HDRS-17) scores using the Clinical Global Impression-Improvement (CGI-I) Scale as a gold standard. We conducted a secondary analysis of individual patient data from the Pharmacogenomic Research Network Antidepressant Medication Pharmacogenomic Study, an 8-week, single-arm clinical trial of citalopram or escitalopram treatment of adults with major depression. We used equipercentile linking to identify levels of absolute and percent change in HDRS-17 scores that equated with scores on the CGI-I at 4 and 8 weeks. Additional analyses equated changes in HDRS-7 and Bech-6 scale scores with CGI-I scores. A CGI-I score of 2 (much improved) corresponded to an absolute decrease (improvement) in HDRS-17 total score of 11 points and a percent decrease of 50-57% from baseline values. Similar results were observed for percent change in HDRS-7 and Bech-6 scores. Larger absolute (but not percent) decreases in HDRS-17 scores equated with CGI-I scores of 2 in persons with higher baseline depression severity. Our results support the consensus definition of response based on HDRS-17 scores (>50% decrease from baseline). A similar definition of response may apply to the HDRS-7 and Bech-6. Copyright © 2016 John Wiley & Sons, Ltd.
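Equipercentile linking maps a score on one scale to the score with the same percentile rank on the other. A bare-bones sketch of that idea follows; the study's procedure is more refined, and the samples in the test are arbitrary numeric data, not HDRS or CGI-I scores.

```python
import numpy as np

def equipercentile_link(x_scores, y_scores, x_value):
    """Map a value on scale X to the value on scale Y with the same
    percentile rank in the respective samples (basic equipercentile
    linking via empirical percentiles)."""
    x_sorted = np.sort(x_scores)
    pct = np.searchsorted(x_sorted, x_value, side="right") / len(x_scores) * 100.0
    return np.percentile(y_scores, pct)
```

In the study's setting, X would be HDRS-17 change scores and Y the CGI-I ratings, so that a given HDRS change is translated into its CGI-I equivalent.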
NASA Astrophysics Data System (ADS)
Freudling, W.; Møller, P.; Patat, F.; Moehler, S.; Romaniello, M.; Jehin, E.; O'Brien, K.; Izzo, C.; Pompei, E.
Photometric calibration observations are routinely carried out with all ESO imaging cameras on every clear night. The nightly zeropoints derived from these observations are accurate to about 10%. Recently, we started the FORS Absolute Photometry Project (FAP) to investigate whether, and how, percent-level absolute photometric accuracy can be achieved with FORS1, and how such photometric calibration can be offered to observers. We found that there are significant differences between the sky-flats and the true photometric response of the instrument, which partially depend on the rotator angle. A second-order correction to the sky-flat significantly improves the relative photometry within the field. We demonstrate the feasibility of percent-level photometry and describe the calibrations necessary to achieve that level of accuracy.
40 CFR 53.55 - Test for effect of variations in power line voltage and ambient temperature.
Code of Federal Regulations, 2012 CFR
2012-07-01
... temperatures used in the tests and shall be checked at zero and at least one flow rate within ±3 percent of 16... absolute difference calculated in Equation 15 of this paragraph (g)(4) must not exceed 0.3 (CV%) for each test run. (5) Ambient temperature measurement accuracy. (i) Calculate the absolute value of the...
Error Analysis of non-TLD HDR Brachytherapy Dosimetric Techniques
NASA Astrophysics Data System (ADS)
Amoush, Ahmad
The American Association of Physicists in Medicine Task Group Report 43 (AAPM-TG43) and its updated version TG-43U1 rely on the LiF TLD detector to determine the experimental absolute dose rate for brachytherapy. The recommended uncertainty estimates associated with TLD experimental dosimetry include 5% for statistical errors (Type A) and 7% for systematic errors (Type B). The TG-43U1 protocol does not include recommendations for other experimental dosimetric techniques to calculate the absolute dose for brachytherapy. This research used two independent experimental methods and Monte Carlo simulations to investigate and analyze uncertainties and errors associated with absolute dosimetry of HDR brachytherapy for a Tandem applicator. An A16 MicroChamber* and a OneDose MOSFET detector† were selected to meet the TG-43U1 recommendations for experimental dosimetry. Statistical and systematic uncertainties associated with each experimental technique were analyzed quantitatively using MCNPX 2.6‡ to evaluate source positional error, Tandem positional error, the source spectrum, phantom size effect, reproducibility, temperature and pressure effects, volume averaging, stem and wall effects, and Tandem effect. Absolute dose calculations for clinical use are based on the Treatment Planning System (TPS) with no corrections for the above uncertainties. Absolute dose and uncertainties along the transverse plane were predicted for the A16 microchamber. The generated overall uncertainties are 22%, 17%, 15%, 15%, 16%, 17%, and 19% at 1 cm, 2 cm, 3 cm, 4 cm, and 5 cm, respectively. Predicting the dose beyond 5 cm is complicated by the low signal-to-noise ratio, cable effect, and stem effect for the A16 microchamber. Since dose beyond 5 cm adds no clinical information, it has been ignored in this study. The absolute dose was predicted for the MOSFET detector from 1 cm to 7 cm along the transverse plane.
The generated overall uncertainties are 23%, 11%, 8%, 7%, 7%, 9%, and 8% at 1 cm, 2 cm, 3 cm, 4 cm, 5 cm, 6 cm, and 7 cm, respectively. The Nucletron Freiburg flap applicator is used with the Nucletron remote afterloader HDR machine to deliver dose to surface cancers. Dosimetric data for the Nucletron 192Ir source were generated using Monte Carlo simulation and compared with published data. Two-dimensional dosimetric data were calculated at two source positions: at the center of a sphere of the applicator, and between two adjacent spheres. Unlike the TPS dose algorithm, the Monte Carlo code developed for this research accounts for the applicator material, secondary electrons and delta particles, and the air gap between the skin and the applicator. *Standard Imaging, Inc., Middleton, Wisconsin USA † OneDose MOSFET, Sicel Technologies, Morrisville NC ‡ Los Alamos National Laboratory, NM USA
Accounting for hardware imperfections in EIT image reconstruction algorithms.
Hartinger, Alzbeta E; Gagnon, Hervé; Guardo, Robert
2007-07-01
Electrical impedance tomography (EIT) is a non-invasive technique for imaging the conductivity distribution of a body section. Different types of EIT images can be reconstructed: absolute, time difference and frequency difference. Reconstruction algorithms are sensitive to many errors which translate into image artefacts. These errors generally result from incorrect modelling or inaccurate measurements. Every reconstruction algorithm incorporates a model of the physical set-up which must be as accurate as possible since any discrepancy with the actual set-up will cause image artefacts. Several methods have been proposed in the literature to improve the model realism, such as creating anatomical-shaped meshes, adding a complete electrode model and tracking changes in electrode contact impedances and positions. Absolute and frequency difference reconstruction algorithms are particularly sensitive to measurement errors and generally assume that measurements are made with an ideal EIT system. Real EIT systems have hardware imperfections that cause measurement errors. These errors translate into image artefacts since the reconstruction algorithm cannot properly discriminate genuine measurement variations produced by the medium under study from those caused by hardware imperfections. We therefore propose a method for eliminating these artefacts by integrating a model of the system hardware imperfections into the reconstruction algorithms. The effectiveness of the method has been evaluated by reconstructing absolute, time difference and frequency difference images with and without the hardware model from data acquired on a resistor mesh phantom. Results have shown that artefacts are smaller for images reconstructed with the model, especially for frequency difference imaging.
Modeling and forecasting of KLCI weekly return using WT-ANN integrated model
NASA Astrophysics Data System (ADS)
Liew, Wei-Thong; Liong, Choong-Yeun; Hussain, Saiful Izzuan; Isa, Zaidi
2013-04-01
The forecasting of weekly returns is one of the most challenging tasks in investment since the time series are volatile and non-stationary. In this study, an integrated model of wavelet transform and artificial neural network (WT-ANN) is studied for modeling and forecasting of the KLCI weekly return. First, the WT is applied to decompose the weekly return time series in order to eliminate noise. Then, a mathematical model of the time series is constructed using the ANN. The performance of the suggested model is evaluated by root mean squared error (RMSE), mean absolute error (MAE), and mean absolute percentage error (MAPE). The results show that the WT-ANN model can be considered a feasible and powerful model for time series modeling and prediction.
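The three evaluation metrics named above have standard definitions; a minimal Python sketch (function and variable names are ours, not from the paper):

```python
import math

def forecast_errors(actual, predicted):
    """Compute RMSE, MAE, and MAPE for paired actual/predicted series.

    Illustrative helper only; assumes no actual value is zero (MAPE
    is undefined there).
    """
    n = len(actual)
    errors = [a - p for a, p in zip(actual, predicted)]
    rmse = math.sqrt(sum(e * e for e in errors) / n)
    mae = sum(abs(e) for e in errors) / n
    mape = 100.0 * sum(abs(e) / abs(a) for e, a in zip(errors, actual)) / n
    return rmse, mae, mape

# toy weekly-return-like series
rmse, mae, mape = forecast_errors([100.0, 110.0, 90.0], [98.0, 112.0, 93.0])
```

Lower values on all three metrics indicate a better-fitting forecast model, which is how the WT-ANN model is judged against alternatives.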
[Application of wavelet neural networks model to forecast incidence of syphilis].
Zhou, Xian-Feng; Feng, Zi-Jian; Yang, Wei-Zhong; Li, Xiao-Song
2011-07-01
To apply a Wavelet Neural Network (WNN) model to forecast the incidence of syphilis. A Back Propagation Neural Network (BPNN) and a WNN were developed based on the monthly incidence of syphilis in Sichuan province from 2004 to 2008. The accuracy of the forecasts was compared between the two models. In the training approximation, the mean absolute error (MAE), root mean square error (RMSE) and mean absolute percentage error (MAPE) were 0.0719, 0.0862 and 11.52%, respectively, for the WNN, and 0.0892, 0.1183 and 14.87%, respectively, for the BPNN. The three indexes for the generalization of the models were 0.0497, 0.0513 and 4.60% for the WNN, and 0.0816, 0.1119 and 7.25% for the BPNN. The WNN is the better model for short-term forecasting of syphilis incidence.
JPRS Report East Asia Vietnam: TAP CHI CONG SAN No 2, February 1988.
1988-08-30
alarming level (18 percent is lost in transmission, 10 percent in consumption). It has been calculated that if we reduced these losses by only 1...output. In our country, 15 to 18 percent of the rice produced each year is lost. This amounts to 2.4-2.9 million tons, enough to feed nearly 10...some of the arguments of the classical authors to represent absolute truth, dogmatically exaggerating the degree
NASA Technical Reports Server (NTRS)
Hess, Wayne P.; Leone, Stephen R.
1987-01-01
Absolute I* quantum yields have been measured as a function of wavelength for room-temperature photodissociation of the ICN A state continuum. The yields are obtained by the technique of time-resolved diode laser gain-vs-absorption spectroscopy. Quantum yields are evaluated at seven wavelengths from 248 to 284 nm. The yield at 266 nm is 66.0 ± 2 percent, and it falls off to 53.4 ± 2 percent and 44.0 ± 4 percent at 284 and 248 nm, respectively. The latter values are significantly higher than those obtained by previous workers using infrared fluorescence. Estimates of I* quantum yields obtained from analysis of CN photofragment rotational distributions, as discussed by other workers, are in good agreement with the I* yields reported here. The results are considered in conjunction with recent theoretical and experimental work on the CN rotational distributions and with previous I* quantum yield results.
NASA Technical Reports Server (NTRS)
Clinton, N. J. (Principal Investigator)
1980-01-01
Labeling errors made in the Large Area Crop Inventory Experiment transition year estimates by Earth Observation Division image analysts are identified and quantified. The analysis was made from a subset of blind sites in six U.S. Great Plains states (Oklahoma, Kansas, Montana, Minnesota, North and South Dakota). The image interpretation was basically well done, resulting in a total omission error rate of 24 percent and a commission error rate of 4 percent. The largest amount of error was caused by factors beyond the control of the analysts, who were following the interpretation procedures. Odd signatures, the largest error cause group, occurred mostly in areas of moisture abnormality. Multicrop labeling was tabulated, showing the distribution of labeling for all crops.
Whitbeck, David E.
2006-01-01
The Lamoreux Potential Evapotranspiration (LXPET) Program computes potential evapotranspiration (PET) using inputs from four different meteorological sources: temperature, dewpoint, wind speed, and solar radiation. PET and the same four meteorological inputs are used with precipitation data in the Hydrological Simulation Program-Fortran (HSPF) to simulate streamflow in the Salt Creek watershed, DuPage County, Illinois. Streamflows from HSPF are routed with the Full Equations (FEQ) model to determine water-surface elevations. Consequently, variations in meteorological inputs have the potential to propagate through many calculations. Sensitivity of PET to variation was simulated by increasing the meteorological input values by 20, 40, and 60 percent and evaluating the change in the calculated PET. Increases in temperatures produced the greatest percent changes, followed by increases in solar radiation, dewpoint, and then wind speed. Additional sensitivity of PET was considered for shifts in input temperatures and dewpoints by absolute differences of ±10, ±20, and ±30 degrees Fahrenheit (°F). Again, changes in input temperatures produced the greatest differences in PET. Sensitivity of streamflow simulated by HSPF was evaluated for 20-percent increases in meteorological inputs. These simulations showed that increases in temperature produced the greatest change in flow. Finally, peak water-surface elevations for nine storm events were compared among unmodified meteorological inputs and inputs with values predicted 6, 24, and 48 hours preceding the simulated peak. Results of this study can be applied to determine how errors specific to a hydrologic system will affect computations of system streamflow and water-surface elevations.
2013-01-01
Median weekly absolute percent differences for selected parameters including: sample volume, 8.0 percent; ammonium concentration, 9.1 percent; nitrate concentration, 8.5 percent; sulfate concentration, 10.2 percent. Annual precipitation-weighted mean concentrations were higher for CO98 compared to CO89 for all analytes. The chemical concentration record for CO98 contains more valid samples than the CO89 record. Therefore, the CO98 record is more representative of 2012 total annual deposition at Loch Vale. Daily precipitation-depth records for the co-located precipitation gages were 100 percent complete, and the total annual precipitation depths between the sites differed by 0.1 percent for the year (91.5 and 91.4 cm).
Estimating error statistics for Chambon-la-Forêt observatory definitive data
NASA Astrophysics Data System (ADS)
Lesur, Vincent; Heumez, Benoît; Telali, Abdelkader; Lalanne, Xavier; Soloviev, Anatoly
2017-08-01
We propose a new algorithm for calibrating definitive observatory data with the goal of providing users with estimates of the data error standard deviations (SDs). The algorithm has been implemented and tested using Chambon-la-Forêt observatory (CLF) data. The calibration process uses all available data. It is set as a large, weakly non-linear, inverse problem that ultimately provides estimates of baseline values in three orthogonal directions, together with their expected standard deviations. For this inverse problem, absolute data error statistics are estimated from two series of absolute measurements made within a day. Similarly, variometer data error statistics are derived by comparing variometer data time series between different pairs of instruments over a few years. The comparisons of these time series led us to use an autoregressive process of order 1 (AR1 process) as a prior for the baselines. Therefore the obtained baselines do not vary smoothly in time. They have relatively small SDs, well below 300 pT when absolute data are recorded twice a week - i.e. within the daily to weekly measures recommended by INTERMAGNET. The algorithm was tested against the process traditionally used to derive baselines at CLF observatory, suggesting that statistics are less favourable when this latter process is used. Finally, two sets of definitive data were calibrated using the new algorithm. Their comparison shows that the definitive data SDs are less than 400 pT and may be slightly overestimated by our process: an indication that more work is required to obtain proper estimates of absolute data error statistics. For magnetic field modelling, the results show that even at isolated sites like CLF observatory, there are very localised signals over a large span of temporal frequencies that can be as large as 1 nT. The SDs reported here encompass signals with spatial wavelengths of a few hundred metres and periods of less than a day.
First Impressions of CARTOSAT-1
NASA Technical Reports Server (NTRS)
Lutes, James
2007-01-01
CARTOSAT-1 RPCs need special handling. Absolute accuracy of uncontrolled scenes is poor (biases > 300 m). There is a noticeable cross-track scale error (±3-4 m across a stereo pair). Most errors are either biases or linear in line/sample; these are easier to correct with ground control.
NASA Astrophysics Data System (ADS)
Haldren, H. A.; Perey, D. F.; Yost, W. T.; Cramer, K. E.; Gupta, M. C.
2018-05-01
A digitally controlled instrument for conducting single-frequency and swept-frequency ultrasonic phase measurements has been developed based on a constant-frequency pulsed phase-locked-loop (CFPPLL) design. This instrument uses a pair of direct digital synthesizers to generate an ultrasonically transceived tone-burst and an internal reference wave for phase comparison. Real-time, constant-frequency phase tracking in an interrogated specimen is possible with a resolution of 0.000 38 rad (0.022°), and swept-frequency phase measurements can be obtained. Using phase measurements, an absolute thickness measurement in borosilicate glass is presented to show the instrument's efficacy, and these results are compared to conventional ultrasonic pulse-echo time-of-flight (ToF) measurements. The newly developed instrument predicted the thickness with a mean error of -0.04 μm and a standard deviation of error of 1.35 μm. Additionally, the CFPPLL instrument shows a lower measured phase error in the absence of changing temperature and couplant thickness than high-resolution cross-correlation ToF measurements at a similar signal-to-noise ratio. By showing higher accuracy and precision than conventional pulse-echo ToF measurements and lower phase errors than cross-correlation ToF measurements, the new digitally controlled CFPPLL instrument provides high-resolution absolute ultrasonic velocity or path-length measurements in solids or liquids, as well as tracking of material property changes with high sensitivity. The ability to obtain absolute phase measurements allows for many new applications not possible with previous ultrasonic pulsed phase-locked-loop instruments. In addition to improved resolution, swept-frequency phase measurements add useful capability in measuring properties of layered structures, such as bonded joints, or materials which exhibit non-linear frequency-dependent behavior, such as dispersive media.
A Simple Model Predicting Individual Weight Change in Humans
Thomas, Diana M.; Martin, Corby K.; Heymsfield, Steven; Redman, Leanne M.; Schoeller, Dale A.; Levine, James A.
2010-01-01
Excessive weight in adults is a national concern, with over 2/3 of the US population deemed overweight. Because being overweight has been correlated with numerous diseases such as heart disease and type 2 diabetes, there is a need to understand mechanisms and predict outcomes of weight change and weight maintenance. A simple mathematical model that accurately predicts individual weight change offers opportunities to understand how individuals lose and gain weight and can be used to foster patient adherence to diets in clinical settings. For this purpose, we developed a one-dimensional differential equation model of weight change, based on the energy balance equation, that is paired with an algebraic relationship between fat-free mass and fat mass derived from a large nationally representative sample of recently released data collected by the Centers for Disease Control. We validate the model's ability to predict individual participants' weight change by comparing model estimates of final weight with data from two recent underfeeding studies and one overfeeding study. Mean absolute error and standard deviation between model predictions and observed measurements of final weights are less than 1.8 ± 1.3 kg for the underfeeding studies and 2.5 ± 1.6 kg for the overfeeding study. Comparison of the model predictions to other one-dimensional models of weight change shows improvement in mean absolute error, standard deviation of mean absolute error, and group mean predictions. The maximum absolute individual error decreased by approximately 60%, substantiating reliability in individual weight change predictions. The model provides a viable method for estimating individual weight change as a result of changes in intake and for determining individual dietary adherence during weight change studies. PMID:24707319
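The abstract describes pairing an energy-balance differential equation with an empirical body-composition relationship. The following is only a toy one-compartment sketch of the general idea; the parameter values and the linear expenditure term are illustrative assumptions, not the paper's fitted model:

```python
# Toy energy-balance weight model: dW/dt = (I - k*W) / rho.
# RHO and K_EXP are assumed round numbers for illustration only.
RHO = 7700.0    # kcal per kg of body-weight change (assumed)
K_EXP = 24.0    # kcal/day expended per kg of body weight (assumed)

def simulate_weight(w0, intake_kcal, days, dt=1.0):
    """Forward-Euler integration of the toy energy-balance equation."""
    w = w0
    for _ in range(int(days / dt)):
        dwdt = (intake_kcal - K_EXP * w) / RHO
        w += dwdt * dt
    return w

# A sustained deficit drives weight toward the equilibrium I/k = 75 kg.
final = simulate_weight(w0=90.0, intake_kcal=1800.0, days=180)
```

The exponential approach to a new equilibrium weight is the qualitative behavior such one-dimensional models predict; the paper's contribution is anchoring the parameters to nationally representative body-composition data.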
A nonlinear model of gold production in Malaysia
NASA Astrophysics Data System (ADS)
Ramli, Norashikin; Muda, Nora; Umor, Mohd Rozi
2014-06-01
Malaysia is a country rich in natural resources, and one of them is gold. Gold has become an important national commodity. This study was conducted to determine a model that fits the gold production in Malaysia from 1995 to 2010 well. Five nonlinear models are presented in this study: the Logistic, Gompertz, Richards, Weibull and Chapman-Richards models. These models are used to fit the cumulative gold production in Malaysia. The best model is then selected based on model performance. The performance of the fitted models is measured by sum of squared errors, root mean square error, coefficient of determination, mean relative error, mean absolute error and mean absolute percentage error. This study found that the Weibull model significantly outperforms the other models. To confirm that Weibull is the best model, the latest data were fitted to the model. Once again, the Weibull model gives the lowest readings on all types of measurement error. We conclude that future gold production in Malaysia can be predicted according to the Weibull model, and this could be an important finding for Malaysia in planning its economic activities.
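As a rough illustration of fitting a Weibull growth curve to cumulative production and scoring it by sum of squared errors, here is a stdlib-only sketch on synthetic data. The grid search, parameter values, and data are ours for illustration; the study's actual fitting procedure is not described in the abstract:

```python
import math

def weibull(t, A, b, c):
    """Weibull growth curve for cumulative production (illustrative form)."""
    return A * (1.0 - math.exp(-((t / b) ** c)))

def fit_weibull(ts, ys):
    """Crude grid-search least-squares fit; a sketch only (real work
    would use a proper nonlinear optimizer)."""
    best = None
    for A in (40.0, 50.0, 60.0, 70.0):
        for b in (2.0, 5.0, 10.0, 20.0):
            for c in (0.5, 1.0, 1.5, 2.0):
                sse = sum((y - weibull(t, A, b, c)) ** 2
                          for t, y in zip(ts, ys))
                if best is None or sse < best[0]:
                    best = (sse, A, b, c)
    return best

ts = list(range(1, 17))                          # 16 years, cf. 1995-2010
ys = [weibull(t, 50.0, 10.0, 1.5) for t in ts]   # synthetic "production"
sse, A, b, c = fit_weibull(ts, ys)
```

Because the synthetic data were generated from parameters inside the grid, the search recovers them exactly; with real production data one would compare the minimized error metrics across the five candidate model families.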
Optimized retrievals of precipitable water from the VAS 'split window'
NASA Technical Reports Server (NTRS)
Chesters, Dennis; Robinson, Wayne D.; Uccellini, Louis W.
1987-01-01
Precipitable water fields have been retrieved from the VISSR Atmospheric Sounder (VAS) using a radiation transfer model for the differential water vapor absorption between the 11- and 12-micron 'split window' channels. Previous moisture retrievals using only the split window channels provided very good space-time continuity but poor absolute accuracy. This note describes how retrieval errors can be significantly reduced, from ±0.9 to ±0.6 g/sq cm, by empirically optimizing the effective air temperature and absorption coefficients used in the two-channel model. The differential absorption between the VAS 11- and 12-micron channels, empirically estimated from 135 colocated VAS-RAOB observations, is found to be approximately 50 percent smaller than the theoretical estimates. Similar discrepancies have been noted previously between theoretical and empirical absorption coefficients applied to the retrieval of sea surface temperatures using radiances observed by VAS and polar-orbiting satellites. These discrepancies indicate that radiation transfer models for the 11-micron window appear to be less accurate than the satellite observations.
Estimating accuracy of land-cover composition from two-stage cluster sampling
Stehman, S.V.; Wickham, J.D.; Fattorini, L.; Wade, T.D.; Baffetta, F.; Smith, J.H.
2009-01-01
Land-cover maps are often used to compute land-cover composition (i.e., the proportion or percent of area covered by each class), for each unit in a spatial partition of the region mapped. We derive design-based estimators of mean deviation (MD), mean absolute deviation (MAD), root mean square error (RMSE), and correlation (CORR) to quantify accuracy of land-cover composition for a general two-stage cluster sampling design, and for the special case of simple random sampling without replacement (SRSWOR) at each stage. The bias of the estimators for the two-stage SRSWOR design is evaluated via a simulation study. The estimators of RMSE and CORR have small bias except when sample size is small and the land-cover class is rare. The estimator of MAD is biased for both rare and common land-cover classes except when sample size is large. A general recommendation is that rare land-cover classes require large sample sizes to ensure that the accuracy estimators have small bias. © 2009 Elsevier Inc.
Monitoring Influenza Epidemics in China with Search Query from Baidu
Lv, Benfu; Peng, Geng; Chunara, Rumi; Brownstein, John S.
2013-01-01
Several approaches have been proposed for near real-time detection and prediction of the spread of influenza. These include search query data for influenza-related terms, which has been explored as a tool for augmenting traditional surveillance methods. In this paper, we present a method that uses Internet search query data from Baidu to model and monitor influenza activity in China. The objectives of the study are to present a comprehensive technique for: (i) keyword selection, (ii) keyword filtering, (iii) index composition and (iv) modeling and detection of influenza activity in China. Sequential time-series for the selected composite keyword index is significantly correlated with Chinese influenza case data. In addition, one-month ahead prediction of influenza cases for the first eight months of 2012 has a mean absolute percent error less than 11%. To our knowledge, this is the first study on the use of search query data from Baidu in conjunction with this approach for estimation of influenza activity in China. PMID:23750192
Analysis of counting errors in the phase/Doppler particle analyzer
NASA Technical Reports Server (NTRS)
Oldenburg, John R.
1987-01-01
NASA is investigating the application of the Phase Doppler measurement technique to provide improved drop sizing and liquid water content measurements in icing research. The magnitudes of counting errors were analyzed because these errors contribute to inaccurate liquid water content measurements. The Phase Doppler Particle Analyzer counting errors due to data transfer losses and coincidence losses were analyzed for data input rates from 10 samples/sec to 70,000 samples/sec. Coincidence losses were calculated by determining the Poisson probability of having more than one event occur during the droplet signal time. The magnitude of the coincidence loss can be determined, and for less than a 15 percent loss, corrections can be made. The data transfer losses were estimated for representative data transfer rates. With direct memory access enabled, data transfer losses are less than 5 percent for input rates below 2000 samples/sec. With direct memory access disabled, losses exceeded 20 percent at a rate of 50 samples/sec, preventing accurate number density or mass flux measurements. The data transfer losses of a new signal processor were analyzed and found to be less than 1 percent for rates under 65,000 samples/sec.
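The Poisson coincidence-loss estimate described above can be sketched as follows; the example rate and signal time are assumed for illustration, not taken from the paper:

```python
import math

def coincidence_loss(rate_hz, signal_time_s):
    """Probability of more than one droplet event during the signal
    window, assuming Poisson arrivals with mean mu = rate * window.
    A sketch of the abstract's counting-error estimate."""
    mu = rate_hz * signal_time_s       # expected events per window
    p0 = math.exp(-mu)                 # P(N = 0)
    p1 = mu * math.exp(-mu)            # P(N = 1)
    return 1.0 - p0 - p1               # P(N > 1)

# e.g. 2000 samples/sec with an assumed 10-microsecond signal time
loss = coincidence_loss(2000.0, 10e-6)
# per the abstract, losses under 15 percent can be corrected for
correctable = loss < 0.15
```

At low event rates the loss is tiny; it grows roughly quadratically with the rate-window product, which is why coincidence losses matter only at the high end of the 10 to 70,000 samples/sec range.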
The efficacy of a novel mobile phone application for goldmann ptosis visual field interpretation.
Maamari, Robi N; D'Ambrosio, Michael V; Joseph, Jeffrey M; Tao, Jeremiah P
2014-01-01
To evaluate the efficacy of a novel mobile phone application that calculates superior visual field defects on Goldmann visual field charts. Experimental study in which the mobile phone application and 14 oculoplastic surgeons interpreted the superior visual field defect in 10 Goldmann charts. Percent errors of the mobile phone application and the oculoplastic surgeons' estimates were calculated relative to computer-software computation of the actual defects. Precision and time efficiency of the application were evaluated by processing the same Goldmann visual field chart 10 repeated times. The mobile phone application was associated with a mean percent error of 1.98% (95% confidence interval [CI], 0.87%-3.10%) in superior visual field defect calculation. The average mean percent error of the oculoplastic surgeons' visual estimates was 19.75% (95% CI, 14.39%-25.11%). Oculoplastic surgeons, on average, underestimated the defect in all 10 Goldmann charts. There was high interobserver variance among the oculoplastic surgeons. The percent error of the 10 repeated measurements on a single chart was 0.93% (95% CI, 0.40%-1.46%). The average time to process 1 chart was 12.9 seconds (95% CI, 10.9-15.0 seconds). The mobile phone application was highly accurate, precise, and time-efficient in calculating the percent superior visual field defect using Goldmann charts. Oculoplastic surgeons' visual interpretations were highly inaccurate, highly variable, and usually underestimated the visual field loss.
Absolute Parameters for the F-type Eclipsing Binary BW Aquarii
NASA Astrophysics Data System (ADS)
Maxted, P. F. L.
2018-05-01
BW Aqr is a bright eclipsing binary star containing a pair of F7V stars. The absolute parameters of this binary (masses, radii, etc.) are known to good precision so they are often used to test stellar models, particularly in studies of convective overshooting. ... Maxted & Hutcheon (2018) analysed the Kepler K2 data for BW Aqr and noted that it shows variability between the eclipses that may be caused by tidally induced pulsations. ... Table 1 shows the absolute parameters for BW Aqr derived from an improved analysis of the Kepler K2 light curve plus the RV measurements from both Imbert (1979) and Lester & Gies (2018). ... The values in Table 1 with their robust error estimates from the standard deviation of the mean are consistent with the values and errors from Maxted & Hutcheon (2018) based on the PPD calculated using emcee for a fit to the entire K2 light curve.
Altitude registration of limb-scattered radiation
NASA Astrophysics Data System (ADS)
Moy, Leslie; Bhartia, Pawan K.; Jaross, Glen; Loughman, Robert; Kramarova, Natalya; Chen, Zhong; Taha, Ghassan; Chen, Grace; Xu, Philippe
2017-01-01
One of the largest constraints to the retrieval of accurate ozone profiles from UV backscatter limb sounding sensors is altitude registration. Two methods, the Rayleigh scattering attitude sensing (RSAS) and the absolute radiance residual method (ARRM), are able to determine altitude registration to the accuracy necessary for long-term ozone monitoring. The methods compare model calculations of radiances to measured radiances and are independent of onboard tracking devices. RSAS determines absolute altitude errors, but, because the method is susceptible to aerosol interference, it is limited to latitudes and time periods with minimal aerosol contamination. ARRM, a new technique introduced in this paper, can be applied across all seasons and altitudes. However, it is only appropriate for relative altitude error estimates. The application of RSAS to Limb Profiler (LP) measurements from the Ozone Mapping and Profiler Suite (OMPS) on board the Suomi NPP (SNPP) satellite indicates tangent height (TH) errors greater than 1 km with an absolute accuracy of ±200 m. Results using ARRM indicate a ~300 to 400 m intra-orbital TH change varying seasonally by ±100 m, likely due to either errors in the spacecraft pointing or in the geopotential height (GPH) data that we use in our analysis. ARRM shows a change of ~200 m over ~5 years with a relative accuracy (a long-term accuracy) of ±100 m outside the polar regions.
44 CFR 67.6 - Basis of appeal.
Code of Federal Regulations, 2010 CFR
2010-10-01
... absolute (except where mathematical or measurement error or changed physical conditions can be demonstrated... a mathematical or measurement error or changed physical conditions, then the specific source of the... registered professional engineer or licensed land surveyor, of the new data necessary for FEMA to conduct a...
An, Zhao; Wen-Xin, Zhang; Zhong, Yao; Yu-Kuan, Ma; Qing, Liu; Hou-Lang, Duan; Yi-di, Shang
2016-06-29
To optimize and simplify the survey method for Oncomelania hupensis snails in marshland endemic regions of schistosomiasis, and to increase the precision, efficiency and economy of the snail survey. A 50 m × 50 m experimental field was selected as the subject in Chayegang marshland near Henghu farm in the Poyang Lake region, and a whole-covered method was adopted to survey the snails. The simple random sampling, systematic sampling and stratified random sampling methods were applied to calculate the minimum sample size, relative sampling error and absolute sampling error. The minimum sample sizes of the simple random sampling, systematic sampling and stratified random sampling methods were 300, 300 and 225, respectively. The relative sampling errors of the three methods were all less than 15%. The absolute sampling errors were 0.221 7, 0.302 4 and 0.047 8, respectively. Spatial stratified sampling with altitude as the stratum variable is an efficient approach, with lower cost and higher precision, for the snail survey.
Discrete distributed strain sensing of intelligent structures
NASA Technical Reports Server (NTRS)
Anderson, Mark S.; Crawley, Edward F.
1992-01-01
Techniques are developed for the design of discrete highly distributed sensor systems for use in intelligent structures. First the functional requirements for such a system are presented. Discrete spatially averaging strain sensors are then identified as satisfying the functional requirements. A variety of spatial weightings for spatially averaging sensors are examined, and their wave number characteristics are determined. Preferable spatial weightings are identified. Several numerical integration rules used to integrate such sensors in order to determine the global deflection of the structure are discussed. A numerical simulation is conducted using point and rectangular sensors mounted on a cantilevered beam under static loading. Gage factor and sensor position uncertainties are incorporated to assess the absolute error and standard deviation of the error in the estimated tip displacement found by numerically integrating the sensor outputs. An experiment is carried out using a statically loaded cantilevered beam with five point sensors. It is found that in most cases the actual experimental error is within one standard deviation of the absolute error as found in the numerical simulation.
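The numerical-integration step, recovering tip displacement by twice integrating sampled curvature (proportional to surface strain), can be sketched as follows. The trapezoidal rule here stands in for whichever integration rule is under comparison; the cantilever parameters are arbitrary illustrative values:

```python
def tip_displacement(curvature, dx):
    """Estimate cantilever tip deflection by twice integrating sampled
    curvature with the trapezoidal rule: w'' = kappa, w(0) = w'(0) = 0."""
    slope = [0.0]
    for i in range(1, len(curvature)):
        slope.append(slope[-1] + 0.5 * (curvature[i - 1] + curvature[i]) * dx)
    defl = [0.0]
    for i in range(1, len(slope)):
        defl.append(defl[-1] + 0.5 * (slope[i - 1] + slope[i]) * dx)
    return defl[-1]

# Check against a cantilever with a static tip load P:
# kappa(x) = P*(L - x)/(E*I) gives the classic w(L) = P*L**3/(3*E*I).
L, EI, P, n = 1.0, 100.0, 10.0, 2001
dx = L / (n - 1)
kappa = [P * (L - i * dx) / EI for i in range(n)]
w_tip = tip_displacement(kappa, dx)
exact = P * L ** 3 / (3.0 * EI)
```

In the paper's setting each sensor output carries gage-factor and position uncertainty, so the interesting quantity is the error statistics of `w_tip` under perturbed `kappa` samples rather than this clean analytic check.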
Wang, Yanjun; Li, Haoyu; Liu, Xingbin; Zhang, Yuhui; Xie, Ronghua; Huang, Chunhui; Hu, Jinhai; Deng, Gang
2016-10-14
First, the measuring principle, the weight function, and the magnetic field of the novel downhole inserted electromagnetic flowmeter (EMF) are described. Second, the basic design of the EMF is described. Third, the dynamic experiments of two EMFs in oil-water two-phase flow are carried out. The experimental errors are analyzed in detail. The experimental results show that the maximum absolute value of the full-scale errors is better than 5%, the total flowrate is 5-60 m³/d, and the water-cut is higher than 60%. The maximum absolute value of the full-scale errors is better than 7%, the total flowrate is 2-60 m³/d, and the water-cut is higher than 70%. Finally, onsite experiments in high-water-cut oil-producing wells are conducted, and the possible reasons for the errors in the onsite experiments are analyzed. It is found that the EMF can provide an effective technology for measuring downhole oil-water two-phase flow.
Evaluation of lens distortion errors in video-based motion analysis
NASA Technical Reports Server (NTRS)
Poliner, Jeffrey; Wilmington, Robert; Klute, Glenn K.; Micocci, Angelo
1993-01-01
In an effort to study lens distortion errors, a grid of points of known dimensions was constructed and videotaped using a standard and a wide-angle lens. Recorded images were played back on a VCR and stored on a personal computer. Using these stored images, two experiments were conducted. Errors were calculated as the difference in distance from the known coordinates of the points to the calculated coordinates. The purposes of this project were as follows: (1) to develop the methodology to evaluate errors introduced by lens distortion; (2) to quantify and compare errors introduced by use of both a 'standard' and a wide-angle lens; (3) to investigate techniques to minimize lens-induced errors; and (4) to determine the most effective use of calibration points when using a wide-angle lens with a significant amount of distortion. It was seen that when using a wide-angle lens, errors from lens distortion could be as high as 10 percent of the size of the entire field of view. Even with a standard lens, there was a small amount of lens distortion. It was also found that the choice of calibration points influenced the lens distortion error. By properly selecting the calibration points and avoidance of the outermost regions of a wide-angle lens, the error from lens distortion can be kept below approximately 0.5 percent with a standard lens and 1.5 percent with a wide-angle lens.
Identification of driver errors : overview and recommendations
DOT National Transportation Integrated Search
2002-08-01
Driver error is cited as a contributing factor in most automobile crashes, and although estimates vary by source, driver error is cited as the principal cause of from 45 to 75 percent of crashes. However, the specific errors that lead to crashes, and...
NASA Technical Reports Server (NTRS)
Li, Rongsheng (Inventor); Kurland, Jeffrey A. (Inventor); Dawson, Alec M. (Inventor); Wu, Yeong-Wei A. (Inventor); Uetrecht, David S. (Inventor)
2004-01-01
Methods and structures are provided that enhance attitude control during gyroscope substitutions by ensuring that a spacecraft's attitude control system does not drive its absolute-attitude sensors out of their capture ranges. In a method embodiment, an operational process-noise covariance Q of a Kalman filter is temporarily replaced with a substantially greater interim process-noise covariance Q. This replacement increases the weight given to the most recent attitude measurements and hastens the reduction of attitude errors and gyroscope bias errors. The error effect of the substituted gyroscopes is reduced and the absolute-attitude sensors are not driven out of their capture ranges. In another method embodiment, this replacement is preceded by the temporary replacement of an operational measurement-noise variance R with a substantially larger interim measurement-noise variance R to reduce transients during the gyroscope substitutions.
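The effect of temporarily inflating the process-noise covariance Q can be illustrated with a scalar toy Kalman filter; this is our illustration of the principle, not the patented spacecraft filter:

```python
def kalman_1d(measurements, q, r, x0=0.0, p0=1.0, boost=None):
    """Scalar Kalman filter for a random-walk state. `boost` optionally
    maps a step index to an inflated process-noise covariance Q,
    mimicking the temporary Q replacement during a gyroscope swap."""
    x, p = x0, p0
    for k, z in enumerate(measurements):
        qk = boost(k) if boost else q
        p = p + qk                   # predict: state uncertainty grows by Q
        kgain = p / (p + r)          # Kalman gain
        x = x + kgain * (z - x)      # update toward measurement z
        p = (1.0 - kgain) * p
    return x, p

zs = [10.0] * 20                     # constant "attitude" measurements
# Inflating Q on the first few steps weights recent measurements more,
# so the estimate converges toward the sensors faster.
x_boost, _ = kalman_1d(zs, q=1e-4, r=1.0, boost=lambda k: 1.0 if k < 3 else 1e-4)
x_plain, _ = kalman_1d(zs, q=1e-4, r=1.0)
```

The faster convergence is exactly the mechanism the patent relies on: a larger Q tells the filter its prediction is untrustworthy, so the gain rises and the absolute-attitude measurements dominate before the sensors can be driven out of their capture ranges.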
[Prediction of schistosomiasis infection rates of population based on ARIMA-NARNN model].
Ke-Wei, Wang; Yu, Wu; Jin-Ping, Li; Yu-Yu, Jiang
2016-07-12
To explore the effect of the autoregressive integrated moving average model-nonlinear auto-regressive neural network (ARIMA-NARNN) model on predicting schistosomiasis infection rates of population. The ARIMA model, NARNN model and ARIMA-NARNN model were established based on monthly schistosomiasis infection rates from January 2005 to February 2015 in Jiangsu Province, China. The fitting and prediction performances of the three models were compared. Compared to the ARIMA model and NARNN model, the mean square error (MSE), mean absolute error (MAE) and mean absolute percentage error (MAPE) of the ARIMA-NARNN model were the least with the values of 0.011 1, 0.090 0 and 0.282 4, respectively. The ARIMA-NARNN model could effectively fit and predict schistosomiasis infection rates of population, which might have a great application value for the prevention and control of schistosomiasis.
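The three comparison metrics named above (MSE, MAE, MAPE) have standard definitions; a minimal sketch, with illustrative values rather than the Jiangsu infection-rate series:

```python
def error_metrics(actual, predicted):
    """Return (MSE, MAE, MAPE in %) for paired observations."""
    n = len(actual)
    mse = sum((a - p) ** 2 for a, p in zip(actual, predicted)) / n
    mae = sum(abs(a - p) for a, p in zip(actual, predicted)) / n
    mape = 100.0 / n * sum(abs((a - p) / a) for a, p in zip(actual, predicted))
    return mse, mae, mape

# Illustrative values only (not the schistosomiasis data):
mse, mae, mape = error_metrics([2.0, 4.0], [2.2, 3.6])
```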
NASA Technical Reports Server (NTRS)
Yoshino, K.; Esmond, J. R.; Freeman, D. E.; Parkinson, W. H.
1993-01-01
Laboratory measurements of the relative absorption cross sections of ozone at temperatures 195, 228, and 295 K have been made throughout the 185 to 254 nm wavelength region. The absolute absorption cross sections at the same temperatures have been measured at several discrete wavelengths in the 185 to 250 nm region. The absolute cross sections of ozone have been used to put the relative cross sections on a firm absolute basis throughout the 185 to 255 nm region. These recalibrated cross sections are slightly lower than those of Molina and Molina (1986), but the differences are within a few percent and would not be significant in atmospheric applications.
NASA Technical Reports Server (NTRS)
Haugen, H. K.; Weitz, E.; Leone, S. R.
1985-01-01
Various techniques have been used to study photodissociation dynamics of the halogens and interhalogens. The quantum yields obtained by these techniques differ widely. The present investigation is concerned with a qualitatively new approach for obtaining highly accurate quantum yields for electronically excited states. This approach makes it possible to obtain an accuracy of 1 percent to 3 percent. It is shown that measurement of the initial transient gain/absorption vs the final absorption in a single time-resolved signal is a very accurate technique in the study of absolute branching fractions in photodissociation. The new technique is found to be insensitive to pulse and probe laser characteristics, molecular absorption cross sections, and absolute precursor density.
NASA Technical Reports Server (NTRS)
Kwon, Jin H.; Lee, Ja H.
1989-01-01
The far-field beam pattern and the power-collection efficiency are calculated for a multistage laser-diode-array amplifier consisting of about 200,000 5-W laser diode arrays with random distributions of phase and orientation errors and random diode failures. From the numerical calculation it is found that the far-field beam pattern is little affected by random failures of up to 20 percent of the laser diodes, taking 80 percent receiving efficiency in the center spot as the reference criterion. The random phase differences among laser diodes due to probable manufacturing errors can be tolerated up to about 0.2 times the wavelength. The maximum allowable orientation error is about 20 percent of the diffraction angle of a single laser diode aperture (about 1 cm). The preliminary results indicate that the amplifier could be used for space beam-power transmission with an efficiency of about 80 percent for a moderate-size (3-m-diameter) receiver placed at a distance of less than 50,000 km.
Finding Blackbody Temperature and Emissivity on a Sub-Pixel Scale
NASA Astrophysics Data System (ADS)
Bernstein, D. J.; Bausell, J.; Grigsby, S.; Kudela, R. M.
2015-12-01
Surface temperature and emissivity provide important insight into the ecosystem being remotely sensed. Dozier (1981) proposed an algorithm to solve for percent coverage and temperatures of two different surface types (e.g. sea surface, cloud cover, etc.) within a given pixel, with a constant value for emissivity assumed. Here we build on Dozier (1981) by proposing an algorithm that solves for both temperature and emissivity of a water body within a satellite pixel by assuming known percent coverage of surface types within the pixel. Our algorithm generates thermal infrared (TIR) and emissivity end-member spectra for the two surface types. Our algorithm then superposes these end-member spectra on emissivity and TIR spectra emitted from four pixels with varying percent coverage of different surface types. The algorithm was tested preliminarily (48 iterations) using simulated pixels containing more than one surface type, with temperature and emissivity percent errors ranging from 0 to 1.071% and 2.516 to 15.311%, respectively [1]. We then tested the algorithm using a MASTER image collected as part of the NASA Student Airborne Research Program (NASA SARP). Here the temperature of water was calculated to be within 0.22 K of in situ data. The algorithm calculated the emissivity of water with an accuracy of 0.13 to 1.53% error for Salton Sea pixels in the same MASTER image. This method could improve retrievals for the HyspIRI sensor. [1] Percent error for emissivity was generated by averaging percent error across all selected band widths.
Optimal quantum error correcting codes from absolutely maximally entangled states
NASA Astrophysics Data System (ADS)
Raissi, Zahra; Gogolin, Christian; Riera, Arnau; Acín, Antonio
2018-02-01
Absolutely maximally entangled (AME) states are pure multi-partite generalizations of the bipartite maximally entangled states with the property that all reduced states of at most half the system size are in the maximally mixed state. AME states are of interest for multipartite teleportation and quantum secret sharing and have recently found new applications in the context of high-energy physics in toy models realizing the AdS/CFT-correspondence. We work out in detail the connection between AME states of minimal support and classical maximum distance separable (MDS) error correcting codes and, in particular, provide explicit closed form expressions for AME states of n parties with local dimension \
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Heng, E-mail: hengli@mdanderson.org; Zhu, X. Ronald; Zhang, Xiaodong
Purpose: To develop and validate a novel delivery strategy for reducing the respiratory motion–induced dose uncertainty of spot-scanning proton therapy. Methods and Materials: The spot delivery sequence was optimized to reduce dose uncertainty. The effectiveness of the delivery sequence optimization was evaluated using measurements and patient simulation. One hundred ninety-one 2-dimensional measurements using different delivery sequences of a single-layer uniform pattern were obtained with a detector array on a 1-dimensional moving platform. Intensity modulated proton therapy plans were generated for 10 lung cancer patients, and dose uncertainties for different delivery sequences were evaluated by simulation. Results: Without delivery sequence optimization, the maximum absolute dose error can be up to 97.2% in a single measurement, whereas the optimized delivery sequence results in a maximum absolute dose error of ≤11.8%. In patient simulation, the optimized delivery sequence reduces the mean of fractional maximum absolute dose error compared with the regular delivery sequence by 3.3% to 10.6% (32.5-68.0% relative reduction) for different patients. Conclusions: Optimizing the delivery sequence can reduce dose uncertainty due to respiratory motion in spot-scanning proton therapy, assuming the 4-dimensional CT is a true representation of the patients' breathing patterns.
Absolute calibration accuracy of L4 TM and L5 TM sensor image pairs
Chander, G.; Micijevic, E.
2006-01-01
The Landsat suite of satellites has collected the longest continuous archive of multispectral data of any land-observing space program. From the Landsat program's inception in 1972 to the present, the Earth science user community has benefited from a historical record of remotely sensed data. However, little attention has been paid to ensuring that the data are calibrated and comparable from mission to mission. Launched in 1982 and 1984, respectively, the Landsat 4 (L4) and Landsat 5 (L5) Thematic Mappers (TM) are the backbone of an extensive archive of moderate resolution Earth imagery. To evaluate the "current" absolute accuracy of these two sensors, image pairs from the L5 TM and L4 TM sensors were compared. The approach involves comparing image statistics derived from large common areas observed eight days apart by the two sensors. The average percent differences in reflectance estimates obtained from the L4 TM agree with those from the L5 TM to within 15 percent. Additional work to characterize the absolute differences between the two sensors over the entire mission is in progress.
Influence of non-level walking on pedometer accuracy.
Leicht, Anthony S; Crowther, Robert G
2009-05-01
The YAMAX Digiwalker pedometer has previously been confirmed as a valid and reliable monitor during level walking; however, little is known about its accuracy during non-level walking activities or between genders. Subsequently, this study examined the influence of non-level walking and gender on pedometer accuracy. Forty-six healthy adults completed 3-min bouts of treadmill walking at their normal walking pace at 11 inclines (0-10%), while another 123 healthy adults walked up and down 47 stairs. During walking, participants wore a YAMAX Digiwalker SW-700 pedometer, with the number of steps taken and registered by the pedometer recorded. Pedometer difference (steps registered minus steps taken), net error (% of steps taken), absolute error (absolute % of steps taken) and gender were examined by repeated-measures two-way ANOVA and Tukey's post hoc tests. During incline walking, pedometer accuracy indices were similar between inclines and genders except for a significantly greater step difference (-7±5 vs. 1±4 steps) and net error (-2.4±1.8% at the 9% incline vs. 0.4±1.2% at 2%). Step difference and net error were significantly greater during stair descent compared with stair ascent, while absolute error was significantly greater during stair ascent compared with stair descent. The current study demonstrated that the YAMAX Digiwalker SW-700 pedometer exhibited good accuracy during incline walking up to 10%, while it overestimated steps taken during stair ascent/descent, with greater overestimation during stair descent. Stair walking activity should be documented in field studies, as the YAMAX Digiwalker SW-700 pedometer overestimates this activity type.
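The three accuracy indices defined in the study (step difference, net error, absolute error) are straightforward to compute; a minimal sketch with illustrative numbers, not the study's data:

```python
def pedometer_errors(steps_taken, steps_registered):
    """Error indices as defined in the study:
    difference     = steps registered - steps taken,
    net error      = signed difference as % of steps taken,
    absolute error = unsigned magnitude of net error."""
    difference = steps_registered - steps_taken
    net_error = 100.0 * difference / steps_taken
    return difference, net_error, abs(net_error)

# Illustrative numbers only (not the study's measurements):
d, net, ae = pedometer_errors(300, 293)
```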
Satellite SAR geocoding with refined RPC model
NASA Astrophysics Data System (ADS)
Zhang, Lu; Balz, Timo; Liao, Mingsheng
2012-04-01
Recent studies have proved that the Rational Polynomial Camera (RPC) model is able to act as a reliable replacement of the rigorous Range-Doppler (RD) model for the geometric processing of satellite SAR datasets. But its capability in absolute geolocation of SAR images has not been evaluated quantitatively. Therefore, in this article the problems of error analysis and refinement of SAR RPC model are primarily investigated to improve the absolute accuracy of SAR geolocation. Range propagation delay and azimuth timing error are identified as two major error sources for SAR geolocation. An approach based on SAR image simulation and real-to-simulated image matching is developed to estimate and correct these two errors. Afterwards a refined RPC model can be built from the error-corrected RD model and then used in satellite SAR geocoding. Three experiments with different settings are designed and conducted to comprehensively evaluate the accuracies of SAR geolocation with both ordinary and refined RPC models. All the experimental results demonstrate that with RPC model refinement the absolute location accuracies of geocoded SAR images can be improved significantly, particularly in Easting direction. In another experiment the computation efficiencies of SAR geocoding with both RD and RPC models are compared quantitatively. The results show that by using the RPC model such efficiency can be remarkably improved by at least 16 times. In addition the problem of DEM data selection for SAR image simulation in RPC model refinement is studied by a comparative experiment. The results reveal that the best choice should be using the proper DEM datasets of spatial resolution comparable to that of the SAR images.
Using an Integrative Approach To Teach Hebrew Grammar in an Elementary Immersion Class.
ERIC Educational Resources Information Center
Eckstein, Peter
The 12-week program described here was designed to improve a Hebrew language immersion class's ability to correctly use the simple past and present tenses. The target group was a sixth-grade class that achieved a 65.68 percent error-free rate on a pre-test; the project's objective was to achieve 90 percent error-free tests, using student…
Residual volume on land and when immersed in water: effect on percent body fat.
Demura, Shinichi; Yamaji, Shunsuke; Kitabayashi, Tamotsu
2006-08-01
There is a large residual volume (RV) error when assessing percent body fat by means of hydrostatic weighing. Residual volume has generally been measured before hydrostatic weighing. However, an individual's maximal exhalations on land and in the water may not be identical. The aims of this study were to compare residual volumes and vital capacities on land and when immersed to the neck in water, and to examine the influence of the measurement error on percent body fat. The participants were 20 healthy Japanese males and 20 healthy Japanese females. To assess the influence of the RV error on percent body fat in both conditions and to evaluate the cross-validity of the prediction equation, another 20 males and 20 females were measured using hydrostatic weighing. Residual volume was measured on land and in the water using a nitrogen wash-out technique based on an open-circuit approach. In water, residual volume was measured with the participant sitting on a chair while the whole body, except the head, was submerged. The trial-to-trial reliabilities of residual volume in both conditions were very good (intraclass correlation coefficient > 0.98). Although residual volumes measured under the two conditions did not agree completely, they showed a high correlation (males: 0.880; females: 0.853; P < 0.05). The limits of agreement for residual volumes in the two conditions using Bland-Altman plots were -0.430 to 0.508 litres. This range was larger than the trial-to-trial error of residual volume on land (-0.260 to 0.304 litres). Moreover, the relationship between percent body fat computed using residual volume measured in the two conditions was very good for both sexes (males: r = 0.902; females: r = 0.869, P < 0.0001), and the errors were approximately -6 to 4% (limits of agreement for percent body fat: -3.4 to 2.2% for males; -6.3 to 4.4% for females). We conclude that if these errors are of no importance, residual volume measured on land can be used when assessing body composition.
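The Bland-Altman limits of agreement used above follow a standard recipe (mean difference ± 1.96 standard deviations of the paired differences); a minimal sketch with illustrative paired measurements, not the study's RV data:

```python
import statistics

def bland_altman_limits(x, y):
    """95% limits of agreement between two measurement methods:
    mean of the paired differences +/- 1.96 * their sample SD."""
    diffs = [a - b for a, b in zip(x, y)]
    bias = statistics.mean(diffs)
    sd = statistics.stdev(diffs)  # sample standard deviation (n - 1)
    return bias - 1.96 * sd, bias + 1.96 * sd

# Illustrative paired measurements only:
lo, hi = bland_altman_limits([1.0, 2.0, 3.0, 4.0], [1.1, 1.9, 3.2, 3.8])
```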
Bosman, Lisa B; Darling, Seth B
2018-06-01
The advent of modern solar energy technologies can improve the costs of energy consumption on a global, national, and regional level, ultimately spanning stakeholders from governmental entities to utility companies, corporations, and residential homeowners. For those stakeholders experiencing the four seasons, accurately accounting for snow-related energy losses is important for effectively predicting photovoltaic performance energy generation and valuation. This paper provides an examination of a new, simplified approach to decrease snow-related forecasting error, in comparison to current solar energy performance models. A new method is proposed to allow model designers, and ultimately users, the opportunity to better understand the return on investment for solar energy systems located in snowy environments. The new method is validated using two different sets of solar energy systems located near Green Bay, WI, USA: a 3.0-kW micro inverter system and a 13.2-kW central inverter system. Both systems were unobstructed, facing south, and set at a tilt of 26.56°. Data were collected beginning in May 2014 (micro inverter system) and October 2014 (central inverter system), through January 2018. In comparison to reference industry standard solar energy prediction applications (PVWatts and PVsyst), the new method results in lower mean absolute percent errors per kilowatt hour of 0.039 and 0.055%, respectively, for the micro inverter system and central inverter system. The statistical analysis provides support for incorporating this new method into freely available, online, up-to-date prediction applications, such as PVWatts and PVsyst.
Improving estimates of streamflow characteristics by using Landsat-1 imagery
Hollyday, Este F.
1976-01-01
Imagery from the first Earth Resources Technology Satellite (renamed Landsat-1) was used to discriminate physical features of drainage basins in an effort to improve equations used to estimate streamflow characteristics at gaged and ungaged sites. Records of 20 gaged basins in the Delmarva Peninsula of Maryland, Delaware, and Virginia were analyzed for 40 statistical streamflow characteristics. Equations relating these characteristics to basin characteristics were obtained by a technique of multiple linear regression. A control group of equations contains basin characteristics derived from maps. An experimental group of equations contains basin characteristics derived from maps and imagery. Characteristics from imagery were forest, riparian (streambank) vegetation, water, and combined agricultural and urban land use. These basin characteristics were isolated photographically by techniques of film-density discrimination. The area of each characteristic in each basin was measured photometrically. Comparison of equations in the control group with corresponding equations in the experimental group reveals that for 12 out of 40 equations the standard error of estimate was reduced by more than 10 percent. As an example, the standard error of estimate of the equation for the 5-year recurrence-interval flood peak was reduced from 46 to 32 percent. Similarly, the standard error of the equation for the mean monthly flow for September was reduced from 32 to 24 percent, the standard error for the 7-day, 2-year recurrence low flow was reduced from 136 to 102 percent, and the standard error for the 3-day, 2-year flood volume was reduced from 30 to 12 percent. It is concluded that data from Landsat imagery can substantially improve the accuracy of estimates of some streamflow characteristics at sites in the Delmarva Peninsula.
Diffuse-flow conceptualization and simulation of the Edwards aquifer, San Antonio region, Texas
Lindgren, R.J.
2006-01-01
A numerical ground-water-flow model (hereinafter, the conduit-flow Edwards aquifer model) of the karstic Edwards aquifer in south-central Texas was developed for a previous study on the basis of a conceptualization emphasizing conduit development and conduit flow, and included simulating conduits as one-cell-wide, continuously connected features. Uncertainties regarding the degree to which conduits pervade the Edwards aquifer and influence ground-water flow, as well as other uncertainties inherent in simulating conduits, raised the question of whether a model based on the conduit-flow conceptualization was the optimum model for the Edwards aquifer. Accordingly, a model with an alternative hydraulic conductivity distribution without conduits was developed in a study conducted during 2004-05 by the U.S. Geological Survey, in cooperation with the San Antonio Water System. The hydraulic conductivity distribution for the modified Edwards aquifer model (hereinafter, the diffuse-flow Edwards aquifer model), based primarily on a conceptualization in which flow in the aquifer predominantly is through a network of numerous small fractures and openings, includes 38 zones, with hydraulic conductivities ranging from 3 to 50,000 feet per day. Revision of model input data for the diffuse-flow Edwards aquifer model was limited to changes in the simulated hydraulic conductivity distribution. The root-mean-square error for 144 target wells for the calibrated steady-state simulation for the diffuse-flow Edwards aquifer model is 20.9 feet. This error represents about 3 percent of the total head difference across the model area. The simulated springflows for Comal and San Marcos Springs for the calibrated steady-state simulation were within 2.4 and 15 percent of the median springflows for the two springs, respectively. 
The transient calibration period for the diffuse-flow Edwards aquifer model was 1947-2000, with 648 monthly stress periods, the same as for the conduit-flow Edwards aquifer model. The root-mean-square error for a period of drought (May-November 1956) for the calibrated transient simulation for 171 target wells is 33.4 feet, which represents about 5 percent of the total head difference across the model area. The root-mean-square error for a period of above-normal rainfall (November 1974-July 1975) for the calibrated transient simulation for 169 target wells is 25.8 feet, which represents about 4 percent of the total head difference across the model area. The root-mean-square error ranged from 6.3 to 30.4 feet in 12 target wells with long-term water-level measurements for varying periods during 1947-2000 for the calibrated transient simulation for the diffuse-flow Edwards aquifer model, and these errors represent 5.0 to 31.3 percent of the range in water-level fluctuations of each of those wells. The root-mean-square errors for the five major springs in the San Antonio segment of the aquifer for the calibrated transient simulation, as a percentage of the range of discharge fluctuations measured at the springs, varied from 7.2 percent for San Marcos Springs and 8.1 percent for Comal Springs to 28.8 percent for Leona Springs. The root-mean-square errors for hydraulic heads for the conduit-flow Edwards aquifer model are 27, 76, and 30 percent greater than those for the diffuse-flow Edwards aquifer model for the steady-state, drought, and above-normal rainfall synoptic time periods, respectively. The goodness-of-fit between measured and simulated springflows is similar for Comal, San Marcos, and Leona Springs for the diffuse-flow Edwards aquifer model and the conduit-flow Edwards aquifer model. 
The root-mean-square errors for Comal and Leona Springs were 15.6 and 21.3 percent less, respectively, whereas the root-mean-square error for San Marcos Springs was 3.3 percent greater for the diffuse-flow Edwards aquifer model compared to the conduit-flow Edwards aquifer model. The root-mean-square errors for San Antonio and San Pedro Springs were appreciably greater, 80.2 and 51.0 percent, respectively, for the diffuse-flow Edwards aquifer model. The simulated water budgets for the diffuse-flow Edwards aquifer model are similar to those for the conduit-flow Edwards aquifer model. Differences in percentage of total sources or discharges for a budget component are 2.0 percent or less for all budget components for the steady-state and transient simulations. The largest difference in terms of the magnitude of water budget components for the transient simulation for 1956 was a decrease of about 10,730 acre-feet per year (about 2 percent) in springflow for the diffuse-flow Edwards aquifer model compared to the conduit-flow Edwards aquifer model. This decrease in springflow (a water budget discharge) was largely offset by the decreased net loss of water from storage (a water budget source) of about 10,500 acre-feet per year.
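The reporting convention used throughout this abstract (root-mean-square error expressed as a percentage of the range of fluctuation) can be sketched as follows; the numbers are illustrative, not the Edwards aquifer targets:

```python
def rmse_percent_of_range(observed, simulated):
    """RMSE of simulated vs. observed values, and the same error
    expressed as a percentage of the observed range -- the convention
    used for the head and springflow comparisons."""
    n = len(observed)
    rmse = (sum((o - s) ** 2 for o, s in zip(observed, simulated)) / n) ** 0.5
    return rmse, 100.0 * rmse / (max(observed) - min(observed))

# Illustrative heads in feet (not the aquifer's target wells):
rmse, pct = rmse_percent_of_range([0.0, 100.0], [3.0, 96.0])
```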
Orris D. McCauley; George R., Jr. Trimble
1975-01-01
The relative or percentage value response after 12 years of selective cutting practices on low- and high-quality sites in Appalachian hardwoods amounted to a 119-percent increase on the low-quality site and 145 percent on the high-quality site. The absolute value or actual dollar response, on the other hand, showed that the low-quality site increased in value only $76/...
Hoos, Anne B.; Patel, Anant R.
1996-01-01
Model-adjustment procedures were applied to the combined data bases of storm-runoff quality for Chattanooga, Knoxville, and Nashville, Tennessee, to improve predictive accuracy for storm-runoff quality for urban watersheds in these three cities and throughout Middle and East Tennessee. Data for 45 storms at 15 different sites (five sites in each city) constitute the data base. Comparison of observed values of storm-runoff load and event-mean concentration to the predicted values from the regional regression models for 10 constituents shows prediction errors as large as 806,000 percent. Model-adjustment procedures, which combine the regional model predictions with local data, are applied to improve predictive accuracy. Standard error of estimate after model adjustment ranges from 67 to 322 percent. Calibration results may be biased due to sampling error in the Tennessee data base. The relatively large values of standard error of estimate for some of the constituent models, although representing significant reduction (at least 50 percent) in prediction error compared to estimation with unadjusted regional models, may be unacceptable for some applications. The user may wish to collect additional local data for these constituents and repeat the analysis, or calibrate an independent local regression model.
WE-G-BRA-04: Common Errors and Deficiencies in Radiation Oncology Practice
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kry, S; Dromgoole, L; Alvarez, P
Purpose: Dosimetric errors in radiotherapy dose delivery lead to suboptimal treatments and outcomes. This work reviews the frequency and severity of dosimetric and programmatic errors identified by on-site audits performed by the IROC Houston QA center. Methods: IROC Houston on-site audits evaluate absolute beam calibration, relative dosimetry data compared to the treatment planning system data, and processes such as machine QA. Audits conducted from 2000-present were abstracted for recommendations, including type of recommendation and magnitude of error when applicable. Dosimetric recommendations corresponded to absolute dose errors >3% and relative dosimetry errors >2%. On-site audits of 1020 accelerators at 409 institutions were reviewed. Results: A total of 1280 recommendations were made (average 3.1/institution). The most common recommendation was for inadequate QA procedures per TG-40 and/or TG-142 (82% of institutions), with the most commonly noted deficiency being x-ray and electron off-axis constancy versus gantry angle. Dosimetrically, the most common errors in relative dosimetry were in small-field output factors (59% of institutions), wedge factors (33% of institutions), off-axis factors (21% of institutions), and photon PDD (18% of institutions). Errors in calibration were also problematic: 20% of institutions had an error in electron beam calibration, 8% had an error in photon beam calibration, and 7% had an error in brachytherapy source calibration. Almost all types of data reviewed included errors up to 7%, although 20 institutions had errors in excess of 10%, and 5 had errors in excess of 20%. The frequency of electron calibration errors decreased significantly with time, but all other errors show non-significant changes. Conclusion: There are many common and often serious errors made during the establishment and maintenance of a radiotherapy program that can be identified through independent peer review. Physicists should be cautious, particularly in areas highlighted herein that show a tendency for errors.
Proprioceptive deficit in patients with complete tearing of the anterior cruciate ligament.
Godinho, Pedro; Nicoliche, Eduardo; Cossich, Victor; de Sousa, Eduardo Branco; Velasques, Bruna; Salles, José Inácio
2014-01-01
To investigate the existence of proprioceptive deficits between the injured limb and the uninjured (i.e. contralateral normal) limb, in individuals who suffered complete tearing of the anterior cruciate ligament (ACL), using a strength reproduction test. Sixteen patients with complete tearing of the ACL participated in the study. A maximum voluntary isometric strength test was performed, with reproduction of the muscle strength in the limb with complete tearing of the ACL and the healthy contralateral limb, with the knee flexed at 60°. An intensity of 20% of the maximum voluntary isometric strength was used for the reproduction procedure. Proprioceptive performance was determined by means of absolute error, variable error and constant error values. Significant differences were found between the control group and ACL group for the variables of absolute error (p = 0.05) and constant error (p = 0.01). No difference was found in relation to variable error (p = 0.83). Our data corroborate the hypothesis that there is a proprioceptive deficit in subjects with complete tearing of the ACL in the injured limb, in comparison with the uninjured limb, during evaluation of the sense of strength. This deficit can be explained in terms of partial or total loss of the mechanoreceptors of the ACL.
Density of Jatropha curcas Seed Oil and its Methyl Esters: Measurement and Estimations
NASA Astrophysics Data System (ADS)
Veny, Harumi; Baroutian, Saeid; Aroua, Mohamed Kheireddine; Hasan, Masitah; Raman, Abdul Aziz; Sulaiman, Nik Meriam Nik
2009-04-01
Density data as a function of temperature have been measured for Jatropha curcas seed oil, as well as biodiesel jatropha methyl esters, at temperatures from above their melting points to 90 °C. The data obtained were used to validate the method proposed by Spencer and Danner using a modified Rackett equation. The experimental and estimated density values using the modified Rackett equation were almost identical, with average absolute percent deviations less than 0.03% for the jatropha oil and 0.04% for the jatropha methyl esters. The Janarthanan empirical equation was also employed to predict jatropha biodiesel densities. This equation performed equally well, with average absolute percent deviations within 0.05%. Two simple linear equations for the densities of jatropha oil and its methyl esters are also proposed in this study.
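The Spencer-Danner modified Rackett equation named above has the standard form sketched below. The jatropha pseudo-critical properties are not given in the abstract, so the parameters used here are illustrative (methanol-like), not the paper's values:

```python
R = 8.314  # universal gas constant, J/(mol*K)

def rackett_liquid_density(T, Tc, Pc, Zra, M):
    """Saturated-liquid density from the Spencer-Danner modified
    Rackett equation:
        Vs = (R * Tc / Pc) * Zra ** (1 + (1 - T/Tc) ** (2/7))
    T, Tc in K; Pc in Pa; M in kg/mol; returns density in kg/m^3."""
    Vs = (R * Tc / Pc) * Zra ** (1.0 + (1.0 - T / Tc) ** (2.0 / 7.0))
    return M / Vs

# Methanol-like parameters, purely illustrative -- the jatropha
# pseudo-critical properties are not stated in the abstract:
rho = rackett_liquid_density(298.15, 512.6, 8.084e6, 0.2334, 0.03204)
```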
Analysis of uncertainties in turbine metal temperature predictions
NASA Technical Reports Server (NTRS)
Stepka, F. S.
1980-01-01
An analysis was conducted to examine the extent to which various factors influence the accuracy of analytically predicting turbine blade metal temperatures and to determine the uncertainties in these predictions for several accuracies of the influence factors. The advanced turbofan engine gas conditions of 1700 K and 40 atmospheres were considered along with those of a highly instrumented high temperature turbine test rig and a low temperature turbine rig that simulated the engine conditions. The analysis showed that the uncertainty in analytically predicting local blade temperature was as much as 98 K, or 7.6 percent of the metal absolute temperature, with current knowledge of the influence factors. The expected reductions in uncertainties in the influence factors with additional knowledge and tests should reduce the uncertainty in predicting blade metal temperature to 28 K, or 2.1 percent of the metal absolute temperature.
ATLS Hypovolemic Shock Classification by Prediction of Blood Loss in Rats Using Regression Models.
Choi, Soo Beom; Choi, Joon Yul; Park, Jee Soo; Kim, Deok Won
2016-07-01
In our previous study, our input data set consisted of 78 rats, the blood loss in percent as a dependent variable, and 11 independent variables (heart rate, systolic blood pressure, diastolic blood pressure, mean arterial pressure, pulse pressure, respiration rate, temperature, perfusion index, lactate concentration, shock index, and a new index (lactate concentration/perfusion)). In that study, machine learning methods for multicategory classification were applied to a rat model of acute hemorrhage to predict the four Advanced Trauma Life Support (ATLS) hypovolemic shock classes for triage. However, multicategory classification is much more difficult and complicated than binary classification. We introduce a simple approach for classifying ATLS hypovolemic shock class by predicting blood loss in percent using support vector regression and multivariate linear regression (MLR). We also compared the performance of the classification models using absolute and relative vital signs. The accuracies of the support vector regression and MLR models with relative values when predicting blood loss in percent were 88.5% and 84.6%, respectively. These were better than the best accuracy of 80.8% of the direct multicategory classification using the support vector machine one-versus-one model in our previous study for the same validation data set. Moreover, the simple MLR models with both absolute and relative values could provide the possibility of a future clinical decision support system for ATLS classification. The perfusion index and new index were more appropriate with relative changes than absolute values.
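The regress-then-threshold idea described above can be sketched as a simple mapping from predicted blood loss to shock class. The 15/30/40% cut-offs are the standard ATLS class boundaries, assumed here because the abstract does not state the exact thresholds used:

```python
def atls_class(blood_loss_percent):
    """Map a predicted blood loss (% of total blood volume) to an ATLS
    hypovolemic shock class I-IV. Thresholds are the standard ATLS
    boundaries (<15%, 15-30%, 30-40%, >40%), assumed here."""
    if blood_loss_percent < 15:
        return 1
    if blood_loss_percent < 30:
        return 2
    if blood_loss_percent <= 40:
        return 3
    return 4
```

Any regressor (support vector regression, MLR) can feed its percent prediction into this mapping, turning a hard four-class problem into a single regression plus a lookup.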
Adaptive Trajectory Prediction Algorithm for Climbing Flights
NASA Technical Reports Server (NTRS)
Schultz, Charles Alexander; Thipphavong, David P.; Erzberger, Heinz
2012-01-01
Aircraft climb trajectories are difficult to predict, and large errors in these predictions reduce the potential operational benefits of some advanced features for NextGen. The algorithm described in this paper improves climb trajectory prediction accuracy by adjusting trajectory predictions based on observed track data. It utilizes rate-of-climb and airspeed measurements derived from position data to dynamically adjust the aircraft weight modeled for trajectory predictions. In simulations with weight uncertainty, the algorithm is able to adapt to within 3 percent of the actual gross weight within two minutes of the initial adaptation. The root-mean-square of altitude errors for five-minute predictions was reduced by 73 percent. Conflict detection performance also improved, with a 15 percent reduction in missed alerts and a 10 percent reduction in false alerts. In a simulation with climb speed capture intent and weight uncertainty, the algorithm improved climb trajectory prediction accuracy by up to 30 percent and conflict detection performance, reducing missed and false alerts by up to 10 percent.
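A minimal sketch of the weight-inference idea, assuming a point-mass model at constant true airspeed where rate of climb satisfies ROC = (T - D) * V / W; the thrust and drag numbers are made up, and the actual algorithm's adaptation logic is considerably more involved.

```python
G = 9.81  # gravitational acceleration, m/s^2

def inferred_weight(thrust_n, drag_n, airspeed_ms, roc_ms):
    """Gross weight (kg) consistent with an observed rate of climb,
    from ROC = (T - D) * V / W for a point mass at constant speed."""
    return (thrust_n - drag_n) * airspeed_ms / (roc_ms * G)

# Hypothetical numbers: an aircraft modeled at 60,000 kg climbs more slowly
# than predicted, implying a heavier effective weight for the predictor.
thrust, drag, v = 180e3, 80e3, 150.0  # N, N, m/s (assumed)
roc_observed = 24.0                   # m/s, derived from track data
print(inferred_weight(thrust, drag, v, roc_observed))
```

Adjusting the modeled weight toward this inferred value is the kind of track-data feedback the algorithm uses to shrink climb prediction errors.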
Human speed perception is contrast dependent
NASA Technical Reports Server (NTRS)
Stone, Leland S.; Thompson, Peter
1992-01-01
When two parallel gratings moving at the same speed are presented simultaneously, the lower-contrast grating appears slower. This misperception is evident across a wide range of contrasts (2.5-50 percent) and does not appear to saturate (e.g. a 50 percent contrast grating appears slower than a 70 percent contrast grating moving at the same speed). On average, a 70 percent contrast grating must be slowed by 35 percent to match a 10 percent contrast grating moving at 2 deg/sec (N = 6). Furthermore, the effect is largely independent of the absolute contrast level and is a quasi-linear function of log contrast ratio. A preliminary parametric study shows that, although spatial frequency has little effect, relative orientation is important. Finally, the misperception of relative speed appears lessened when the stimuli to be matched are presented sequentially.
Hannula, Manne; Huttunen, Kerttu; Koskelo, Jukka; Laitinen, Tomi; Leino, Tuomo
2008-01-01
In this study, the performances of artificial neural network (ANN) analysis and multilinear regression (MLR) model-based estimation of heart rate were compared in an evaluation of individual cognitive workload. The data comprised electrocardiography (ECG) measurements and an evaluation of cognitive load that induces psychophysiological stress (PPS), collected from 14 interceptor fighter pilots during complex simulated F/A-18 Hornet air battles. In our data, the mean absolute error of the ANN estimate was 11.4 as a visual analog scale score, being 13-23% better than the mean absolute error of the MLR model in the estimation of cognitive workload.
Buitrago, Jaime; Asfour, Shihab
2017-01-01
Short-term load forecasting is crucial for the operations planning of an electrical grid. Forecasting the next 24 h of electrical load in a grid allows operators to plan and optimize their resources. The purpose of this study is to develop a more accurate short-term load forecasting method utilizing non-linear autoregressive artificial neural networks (ANN) with exogenous multi-variable input (NARX). The proposed implementation of the network is new: the neural network is trained in open-loop using actual load and weather data, and then, the network is placed in closed-loop to generate a forecast using the predicted load as the feedback input. Unlike the existing short-term load forecasting methods using ANNs, the proposed method uses its own output as the input in order to improve the accuracy, thus effectively implementing a feedback loop for the load, making it less dependent on external data. Using the proposed framework, mean absolute percent errors in the forecast in the order of 1% have been achieved, which is a 30% improvement on the average error using feedforward ANNs, ARMAX and state space methods, which can result in large savings by avoiding commissioning of unnecessary power plants. Finally, the New England electrical load data are used to train and validate the forecast prediction.
Load estimator (LOADEST): a FORTRAN program for estimating constituent loads in streams and rivers
Runkel, Robert L.; Crawford, Charles G.; Cohn, Timothy A.
2004-01-01
LOAD ESTimator (LOADEST) is a FORTRAN program for estimating constituent loads in streams and rivers. Given a time series of streamflow, additional data variables, and constituent concentration, LOADEST assists the user in developing a regression model for the estimation of constituent load (calibration). Explanatory variables within the regression model include various functions of streamflow, decimal time, and additional user-specified data variables. The formulated regression model then is used to estimate loads over a user-specified time interval (estimation). Mean load estimates, standard errors, and 95 percent confidence intervals are developed on a monthly and(or) seasonal basis. The calibration and estimation procedures within LOADEST are based on three statistical estimation methods. The first two methods, Adjusted Maximum Likelihood Estimation (AMLE) and Maximum Likelihood Estimation (MLE), are appropriate when the calibration model errors (residuals) are normally distributed. Of the two, AMLE is the method of choice when the calibration data set (time series of streamflow, additional data variables, and concentration) contains censored data. The third method, Least Absolute Deviation (LAD), is an alternative to maximum likelihood estimation when the residuals are not normally distributed. LOADEST output includes diagnostic tests and warnings to assist the user in determining the appropriate estimation method and in interpreting the estimated loads. This report describes the development and application of LOADEST. Sections of the report describe estimation theory, input/output specifications, sample applications, and installation instructions.
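The regression idea at the core of LOADEST can be illustrated with a stripped-down, single-explanatory-variable sketch (ordinary least squares on log-transformed data); the real program adds decimal-time and seasonal terms, the AMLE/MLE/LAD estimators, and retransformation bias correction.

```python
import numpy as np

# Minimal sketch of a LOADEST-style rating curve:
#   ln(load) = b0 + b1 * ln(Q)
# fitted by OLS, then used to estimate loads from streamflow.

def fit_rating_curve(Q, load):
    X = np.column_stack([np.ones_like(Q), np.log(Q)])
    b, *_ = np.linalg.lstsq(X, np.log(load), rcond=None)
    return b  # [b0, b1]

def estimate_load(Q, b):
    return np.exp(b[0] + b[1] * np.log(Q))

# Synthetic check: loads generated from a known power law are recovered.
Q = np.array([10.0, 50.0, 100.0, 500.0, 1000.0])  # streamflow
true_load = 0.5 * Q**1.2                          # constituent load
b = fit_rating_curve(Q, true_load)
print(np.round(b, 3))
```

With noisy, censored field data the choice among AMLE, MLE, and LAD described in the abstract replaces this plain least-squares step.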
NASA Technical Reports Server (NTRS)
Sun, Jielun
1993-01-01
Results are presented of a test of the physically based total column water vapor retrieval algorithm of Wentz (1992) for sensitivity to realistic vertical distributions of temperature and water vapor. The ECMWF monthly averaged temperature and humidity fields are used to simulate the spatial pattern of systematic retrieval error of total column water vapor due to this sensitivity. The estimated systematic error is within 0.1 g/sq cm over about 70 percent of the global ocean area; systematic errors greater than 0.3 g/sq cm are expected to exist only over a few well-defined regions, about 3 percent of the global oceans, assuming that the global mean value is unbiased.
Systematic error of the Gaia DR1 TGAS parallaxes from data for the red giant clump
NASA Astrophysics Data System (ADS)
Gontcharov, G. A.
2017-08-01
Based on the Gaia DR1 TGAS parallaxes and photometry from the Tycho-2, Gaia, 2MASS, and WISE catalogues, we have produced a sample of 100 000 clump red giants within 800 pc of the Sun. The systematic variations of the mode of their absolute magnitude as a function of the distance, magnitude, and other parameters have been analyzed. We show that these variations reach 0.7 mag and cannot be explained by variations in the interstellar extinction or intrinsic properties of stars and by selection. The only explanation seems to be a systematic error of the Gaia DR1 TGAS parallax dependent on the square of the observed distance in kpc: 0.18R^2 mas. Allowance for this error reduces significantly the systematic dependences of the absolute magnitude mode on all parameters. This error reaches 0.1 mas within 800 pc of the Sun and allows an upper limit for the accuracy of the TGAS parallaxes to be estimated as 0.2 mas. A careful allowance for such errors is needed to use clump red giants as "standard candles." This eliminates all discrepancies between the theoretical and empirical estimates of the characteristics of these stars and allows us to obtain the first estimates of the modes of their absolute magnitudes from the Gaia parallaxes: mode(M_H) = -1.49 ± 0.04 mag, mode(M_Ks) = -1.63 ± 0.03 mag, mode(M_W1) = -1.67 ± 0.05 mag, mode(M_W2) = -1.67 ± 0.05 mag, mode(M_W3) = -1.66 ± 0.02 mag, mode(M_W4) = -1.73 ± 0.03 mag, as well as the corresponding estimates of their de-reddened colors.
Aquatic habitat mapping with an acoustic doppler current profiler: Considerations for data quality
Gaeuman, David; Jacobson, Robert B.
2005-01-01
When mounted on a boat or other moving platform, acoustic Doppler current profilers (ADCPs) can be used to map a wide range of ecologically significant phenomena, including measures of fluid shear, turbulence, vorticity, and near-bed sediment transport. However, the instrument movement necessary for mapping applications can generate significant errors, many of which have not been adequately described. This report focuses on the mechanisms by which moving-platform errors are generated, and quantifies their magnitudes under typical habitat-mapping conditions. The potential for velocity errors caused by mis-alignment of the instrument's internal compass is widely recognized, but has not previously been quantified for moving instruments. Numerical analyses show that even relatively minor compass mis-alignments can produce significant velocity errors, depending on the ratio of absolute instrument velocity to the target velocity and on the relative directions of instrument and target motion. A maximum absolute instrument velocity of about 1 m/s is recommended for most mapping applications. Lower velocities are appropriate when making bed velocity measurements, an emerging application that makes use of ADCP bottom-tracking to measure the velocity of sediment particles at the bed. The mechanisms by which heterogeneities in the flow velocity field generate horizontal velocity errors are also quantified, and some basic limitations in the effectiveness of standard error-detection criteria for identifying these errors are described. Bed velocity measurements may be particularly vulnerable to errors caused by spatial variability in the sediment transport field.
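The compass mis-alignment mechanism can be sketched numerically. This sketch assumes boat velocity is known in earth coordinates (e.g. from GPS) while the water velocity relative to the instrument is rotated into earth coordinates through the (erroneous) compass heading, so the resulting error scales with the relative speed (mostly the boat speed), not with the slow target velocity.

```python
import numpy as np

# Velocity error from a compass heading error in a moving-platform ADCP.

def rot(deg):
    a = np.radians(deg)
    return np.array([[np.cos(a), -np.sin(a)], [np.sin(a), np.cos(a)]])

def water_velocity(v_rel_instr, heading_deg, v_boat, compass_err_deg=0.0):
    """Earth-frame water velocity; a compass error corrupts the rotation."""
    return rot(heading_deg + compass_err_deg) @ v_rel_instr + v_boat

v_boat = np.array([1.0, 0.0])        # 1 m/s, the recommended upper limit
v_water_true = np.array([0.0, 0.3])  # slow target velocity
v_rel_instr = rot(-30.0) @ (v_water_true - v_boat)  # true heading 30 deg

good = water_velocity(v_rel_instr, 30.0, v_boat)
bad = water_velocity(v_rel_instr, 30.0, v_boat, compass_err_deg=2.0)
err = np.linalg.norm(bad - good)
# err = 2*sin(1 deg)*|v_water - v_boat|: it scales with the relative
# (mostly boat) speed even though the target velocity is small.
print(err)
```

A 2-degree heading error at 1 m/s platform speed already produces a velocity error of a few cm/s, comparable to the slow target velocities of interest.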
Impact of Cost-Sharing Increases on Continuity of Specialty Drug Use: A Quasi-Experimental Study.
Li, Pengxiang; Hu, Tianyan; Yu, Xinyan; Chahin, Salim; Dahodwala, Nabila; Blum, Marissa; Pettit, Amy R; Doshi, Jalpa A
2017-07-24
To examine the impact of cost-sharing increases on continuity of specialty drug use in Medicare beneficiaries with multiple sclerosis (MS) or rheumatoid arthritis (RA). Five percent Medicare claims data (2007-2010). Quasi-experimental study examining changes in specialty drug use among a group of Medicare Part D beneficiaries without low-income subsidies (non-LIS) as they transitioned from a 5 percent cost-sharing preperiod to a ≥25 percent cost-sharing postperiod, as compared to changes among a disease-matched contemporaneous control group of patients eligible for full low-income subsidies (LIS), who faced minor cost sharing (≤$6.30 copayment) in both the pre- and postperiods. Key variables were extracted from Medicare data. Relative to the LIS group, the non-LIS group had a greater increase in incidence of 30-day continuous gaps in any Part D treatment from the lower cost-sharing period to the higher cost-sharing period (MS, absolute increase = 10.1 percent, OR = 1.61, 95% CI 1.19-2.17; RA, absolute increase = 21.9 percent, OR = 2.75, 95% CI 2.15-3.51). The increase in Part D treatment gaps was not offset by increased Part B specialty drug use. Cost-sharing increases due to specialty tier-level cost sharing were associated with interruptions in MS and RA specialty drug treatments. © Health Research and Educational Trust.
Code of Federal Regulations, 2013 CFR
2013-07-01
... a zero-percent certificate of indebtedness that is made in error? 363.138 Section 363.138 Money and... TREASURY BUREAU OF THE PUBLIC DEBT REGULATIONS GOVERNING SECURITIES HELD IN TREASURYDIRECT Zero-Percent Certificate of Indebtedness General § 363.138 Is Treasury liable for the purchase of a zero-percent...
Code of Federal Regulations, 2012 CFR
2012-07-01
... a zero-percent certificate of indebtedness that is made in error? 363.138 Section 363.138 Money and... TREASURY BUREAU OF THE PUBLIC DEBT REGULATIONS GOVERNING SECURITIES HELD IN TREASURYDIRECT Zero-Percent Certificate of Indebtedness General § 363.138 Is Treasury liable for the purchase of a zero-percent...
Code of Federal Regulations, 2011 CFR
2011-07-01
... a zero-percent certificate of indebtedness that is made in error? 363.138 Section 363.138 Money and... TREASURY BUREAU OF THE PUBLIC DEBT REGULATIONS GOVERNING SECURITIES HELD IN TREASURYDIRECT Zero-Percent Certificate of Indebtedness General § 363.138 Is Treasury liable for the purchase of a zero-percent...
ALT space shuttle barometric altimeter altitude analysis
NASA Technical Reports Server (NTRS)
Killen, R.
1978-01-01
The accuracy was analyzed of the barometric altimeters onboard the space shuttle orbiter. Altitude estimates from the air data systems including the operational instrumentation and the developmental flight instrumentation were obtained for each of the approach and landing test flights. By comparing the barometric altitude estimates to altitudes derived from radar tracking data filtered through a Kalman filter and fully corrected for atmospheric refraction, the errors in the barometric altitudes were shown to be 4 to 5 percent of the Kalman altitudes. By comparing the altitude determined from the true atmosphere derived from weather balloon data to the altitude determined from the U.S. Standard Atmosphere of 1962, it was determined that the assumption of the Standard Atmosphere equations contributes roughly 75 percent of the total error in the baro estimates. After correcting the barometric altitude estimates using an average summer model atmosphere computed for the average latitude of the space shuttle landing sites, the residual error in the altitude estimates was reduced to less than 373 feet. This corresponds to an error of less than 1.5 percent for altitudes above 4000 feet for all flights.
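The Standard Atmosphere altitude computation being corrected above can be sketched by inverting the tropospheric pressure law p = p0*(1 - L*h/T0)^(g*M/(R*L)) for altitude. The constants are the standard sea-level values; the report's correction amounts to substituting a seasonal model atmosphere for these standard ones.

```python
# Pressure altitude from the Standard Atmosphere tropospheric law.

T0, L = 288.15, 0.0065     # sea-level temperature (K), lapse rate (K/m)
P0 = 101325.0              # sea-level pressure (Pa)
G, M, R = 9.80665, 0.0289644, 8.31432  # gravity, molar mass, gas constant

def pressure_altitude(p):
    """Altitude (m) implied by static pressure p under the standard model."""
    return (T0 / L) * (1.0 - (p / P0) ** (R * L / (G * M)))

print(pressure_altitude(101325.0))  # 0.0 at sea level
print(pressure_altitude(89874.6))   # roughly 1000 m
```

When the true atmosphere departs from these assumed sea-level values and lapse rate, the resulting altitude bias is exactly the kind of error the report attributes to the Standard Atmosphere equations.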
NASA Technical Reports Server (NTRS)
Lienert, Barry R.
1991-01-01
Monte Carlo perturbations of synthetic tensors are used to evaluate the Hext/Jelinek elliptical confidence regions for anisotropy of magnetic susceptibility (AMS) eigenvectors. When the perturbations are 33 percent of the minimum anisotropy, both the shapes and probability densities of the resulting eigenvector distributions agree with the elliptical distributions predicted by the Hext/Jelinek equations. When the perturbation size is increased to 100 percent of the minimum eigenvalue difference, the major axis of the 95 percent confidence ellipse underestimates the observed eigenvector dispersion by about 10 deg. The observed distributions of the principal susceptibilities (eigenvalues) are close to being normal, with standard errors that agree well with the calculated Hext/Jelinek errors. The Hext/Jelinek ellipses are also able to describe the AMS dispersions due to instrumental noise and provide reasonable limits for the AMS dispersions observed in two Hawaiian basaltic dikes. It is concluded that the Hext/Jelinek method provides a satisfactory description of the errors in AMS data and should be a standard part of any AMS data analysis.
Farzandipour, Mehrdad; Sheikhtaheri, Abbas
2009-01-01
To evaluate the accuracy of procedural coding and the factors that influence it, 246 records were randomly selected from four teaching hospitals in Kashan, Iran. “Recodes” were assigned blindly and then compared to the original codes. Furthermore, the coders' professional behaviors were carefully observed during the coding process. Coding errors were classified as major or minor. The relations between coding accuracy and possible effective factors were analyzed by χ2 or Fisher exact tests as well as the odds ratio (OR) and the 95 percent confidence interval for the OR. The results showed that using a tabular index for rechecking codes reduces errors (83 percent vs. 72 percent accuracy). Further, more thorough documentation by the clinician positively affected coding accuracy, though this relation was not significant. Readability of records decreased errors overall (p = .003), including major ones (p = .012). Moreover, records with no abbreviations had fewer major errors (p = .021). In conclusion, not using abbreviations, ensuring more readable documentation, and paying more attention to available information increased coding accuracy and the quality of procedure databases. PMID:19471647
Koltun, G.F.; Kula, Stephanie P.
2013-01-01
This report presents the results of a study to develop methods for estimating selected low-flow statistics and for determining annual flow-duration statistics for Ohio streams. Regression techniques were used to develop equations for estimating 10-year recurrence-interval (10-percent annual-nonexceedance probability) low-flow yields, in cubic feet per second per square mile, with averaging periods of 1, 7, 30, and 90 days, and for estimating the yield corresponding to the long-term 80-percent duration flow. These equations, which estimate low-flow yields as a function of a streamflow-variability index, are based on previously published low-flow statistics for 79 long-term continuous-record streamgages with at least 10 years of data collected through water year 1997. When applied to the calibration dataset, average absolute percent errors for the regression equations ranged from 15.8 to 42.0 percent. The regression results have been incorporated into the U.S. Geological Survey (USGS) StreamStats application for Ohio (http://water.usgs.gov/osw/streamstats/ohio.html) in the form of a yield grid to facilitate estimation of the corresponding streamflow statistics in cubic feet per second. Logistic-regression equations also were developed and incorporated into the USGS StreamStats application for Ohio for selected low-flow statistics to help identify occurrences of zero-valued statistics. Quantiles of daily and 7-day mean streamflows were determined for annual and annual-seasonal (September–November) periods for each complete climatic year of streamflow-gaging station record for 110 selected streamflow-gaging stations with 20 or more years of record. The quantiles determined for each climatic year were the 99-, 98-, 95-, 90-, 80-, 75-, 70-, 60-, 50-, 40-, 30-, 25-, 20-, 10-, 5-, 2-, and 1-percent exceedance streamflows.
Selected exceedance percentiles of the annual-exceedance percentiles were subsequently computed and tabulated to help facilitate consideration of the annual risk of exceedance or nonexceedance of annual and annual-seasonal-period flow-duration values. The quantiles are based on streamflow data collected through climatic year 2008.
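The annual flow-duration quantiles described above are simple percentile computations: the p-percent exceedance flow is the (100 - p)th percentile of the daily flows in a climatic year. A toy sketch:

```python
import numpy as np

# p-percent exceedance streamflows for one climatic year of daily flows.

def exceedance_flows(daily_flows, p_exceed=(99, 95, 90, 50, 10, 5, 1)):
    return {p: float(np.percentile(daily_flows, 100 - p)) for p in p_exceed}

flows = np.arange(1.0, 366.0)  # toy "year" of daily flows
fd = exceedance_flows(flows)
# The 50-percent exceedance flow is the median; high exceedance
# percentages correspond to low flows.
print(fd[50], fd[99], fd[1])
```

Tabulating these values for every year of record, then taking percentiles of the annual values, gives the annual risk-of-exceedance tables the report describes.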
DOE Office of Scientific and Technical Information (OSTI.GOV)
Saenz, D; Gutierrez, A
Purpose: The ScandiDos Discover has obtained FDA clearance and is now clinically released. We studied the essential attenuation and beam hardening components as well as tested the diode array's ability to detect changes in absolute dose and MLC leaf positions. Methods: The ScandiDos Discover was mounted on the heads of an Elekta VersaHD and a Varian 23EX. Beam attenuation measurements were made at 10 cm depth for 6 MV and 18 MV beam energies. The PDD(10) was measured as a metric for the effect on beam quality. Next, a plan consisting of two orthogonal 10 × 10 cm² fields was used to adjust the dose per fraction by scaling monitor units to test the absolute dose detection sensitivity of the Discover. A second plan (conformal arc) was then delivered several times independently on the Elekta VersaHD. Artificially introduced MLC position errors in the four central leaves were then added. The errors were incrementally increased from 1 mm to 4 mm and back across seven control points. Results: The absolute dose measured at 10 cm depth decreased by 1.2% and 0.7% for the 6 MV and 18 MV beams with the Discover, respectively. Attenuation depended slightly on the field size but only changed by 0.1% across 5 × 5 cm² and 20 × 20 cm² fields. The change in PDD(10) for a 10 × 10 cm² field was +0.1% and +0.6% for 6 MV and 18 MV, respectively. Changes in monitor units from −5.0% to 5.0% were faithfully detected. Detected leaf errors were within 1.0 mm of intended errors. Conclusion: A novel in-vivo dosimeter monitoring the radiation beam during treatment was examined through its attenuation and beam hardening characteristics. The device tracked with changes in absolute dose as well as introduced leaf position deviations.
A Model of Self-Monitoring Blood Glucose Measurement Error.
Vettoretti, Martina; Facchinetti, Andrea; Sparacino, Giovanni; Cobelli, Claudio
2017-07-01
A reliable model of the probability density function (PDF) of self-monitoring of blood glucose (SMBG) measurement error would be important for several applications in diabetes, like testing insulin therapies in silico. In the literature, the PDF of SMBG error is usually described by a Gaussian function, whose symmetry and simplicity are unable to properly describe the variability of experimental data. Here, we propose a new methodology to derive more realistic models of the SMBG error PDF. The blood glucose range is divided into zones where the error (absolute or relative) presents a constant standard deviation (SD). In each zone, a suitable PDF model is fitted by maximum likelihood to experimental data. Model validation is performed by goodness-of-fit tests. The method is tested on two databases collected by the One Touch Ultra 2 (OTU2; Lifescan Inc, Milpitas, CA) and the Bayer Contour Next USB (BCN; Bayer HealthCare LLC, Diabetes Care, Whippany, NJ). In both cases, skew-normal and exponential models are used to describe the distribution of errors and outliers, respectively. Two zones were identified: zone 1 with constant-SD absolute error and zone 2 with constant-SD relative error. Goodness-of-fit tests confirmed that the identified PDF models are valid and superior to the Gaussian models used so far in the literature. The proposed methodology makes it possible to derive realistic models of the SMBG error PDF. These models can be used in several investigations of present interest in the scientific community, for example, to perform in silico clinical trials comparing SMBG-based with nonadjunctive CGM-based insulin treatments.
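The zoning step can be illustrated with a toy numerical sketch. The 75 mg/dl threshold and the Gaussian synthetic errors are assumptions for illustration only; the paper fits skew-normal and exponential models by maximum likelihood to real meter data.

```python
import numpy as np

# Toy illustration of the two-zone SMBG error model: a low-glucose zone
# with constant-SD absolute error and a high-glucose zone with
# constant-SD relative error.

rng = np.random.default_rng(0)
ref = rng.uniform(40, 400, 5000)  # reference blood glucose (mg/dl)
err = np.where(ref < 75,
               rng.normal(0, 5, ref.size),          # absolute error, SD 5 mg/dl
               ref * rng.normal(0, 0.05, ref.size))  # relative error, SD 5%

low = ref < 75
sd_abs = err[low].std()                 # estimated SD in zone 1 (~5 mg/dl)
sd_rel = (err[~low] / ref[~low]).std()  # estimated SD in zone 2 (~0.05)
print(round(sd_abs, 1), round(sd_rel, 3))
```

In the actual methodology, the per-zone samples would then be fitted with skew-normal PDFs rather than summarized by an SD alone.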
Radiologic Errors in Patients With Lung Cancer
Forrest, John V.; Friedman, Paul J.
1981-01-01
Some 20 percent to 50 percent of detectable malignant lesions are missed or misdiagnosed at the time of their first radiologic appearance. These errors can result in delayed diagnosis and treatment, which may affect a patient's survival. Use of moderately high (130 to 150) kilovolt peak films, awareness of portions of the lung where lesions are often missed (such as lung apices and paramediastinal and hilar areas), careful comparison of current roentgenograms with those taken previously and the use of an independent second observer can help to minimize the rate of radiologic diagnostic errors in patients with lung cancer. PMID:7257363
Resolving Mixed Algal Species in Hyperspectral Images
Mehrubeoglu, Mehrube; Teng, Ming Y.; Zimba, Paul V.
2014-01-01
We investigated a lab-based hyperspectral imaging system's response from pure (single) and mixed (two) algal cultures containing known algae types and volumetric combinations to characterize the system's performance. The spectral response to volumetric changes in single and combinations of algal mixtures with known ratios were tested. Constrained linear spectral unmixing was applied to extract the algal content of the mixtures based on abundances that produced the lowest root mean square error. Percent prediction error was computed as the difference between actual percent volumetric content and abundances at minimum RMS error. Best prediction errors were computed as 0.4%, 0.4% and 6.3% for the mixed spectra from three independent experiments. The worst prediction errors were found as 5.6%, 5.4% and 13.4% for the same order of experiments. Additionally, Beer-Lambert's law was utilized to relate transmittance to different volumes of pure algal suspensions demonstrating linear logarithmic trends for optical property measurements. PMID:24451451
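For the two-endmember case used in the mixed-culture experiments, the abundance-constrained unmixing has a simple closed form: find a in [0, 1] minimizing ||a*s1 + (1-a)*s2 - m||. The spectra below are invented for illustration.

```python
import numpy as np

# Constrained linear unmixing of a two-endmember spectral mixture.

def unmix_two(m, s1, s2):
    """Abundance of endmember s1 in measured spectrum m, clipped to [0, 1]."""
    d = s1 - s2
    a = float(np.dot(m - s2, d) / np.dot(d, d))
    return min(max(a, 0.0), 1.0)  # enforce the abundance constraints

s1 = np.array([0.9, 0.5, 0.1, 0.3])  # "pure" spectrum of alga 1 (assumed)
s2 = np.array([0.2, 0.6, 0.8, 0.4])  # "pure" spectrum of alga 2 (assumed)
m = 0.3 * s1 + 0.7 * s2              # noiseless 30/70 mixture
a = unmix_two(m, s1, s2)
rmse = float(np.linalg.norm(a * s1 + (1 - a) * s2 - m) / np.sqrt(m.size))
print(a, rmse)
```

The difference between the recovered abundance and the known volumetric fraction corresponds to the percent prediction error reported in the abstract.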
Common genetic variation and novel loci associated with volumetric mammographic density.
Brand, Judith S; Humphreys, Keith; Li, Jingmei; Karlsson, Robert; Hall, Per; Czene, Kamila
2018-04-17
Mammographic density (MD) is a strong and heritable intermediate phenotype of breast cancer, but much of its genetic variation remains unexplained. We conducted a genetic association study of volumetric MD in a Swedish mammography screening cohort (n = 9498) to identify novel MD loci. Associations with volumetric MD phenotypes (percent dense volume, absolute dense volume, and absolute nondense volume) were estimated using linear regression adjusting for age, body mass index, menopausal status, and six principal components. We also estimated the proportion of MD variance explained by additive contributions from single-nucleotide polymorphisms (SNP-based heritability [h^2_SNP]) in 4948 participants of the cohort. In total, three novel MD loci were identified (at P < 5 × 10^-8): one for percent dense volume (HABP2) and two for the absolute dense volume (INHBB, LINC01483). INHBB is an established locus for ER-negative breast cancer, and HABP2 and LINC01483 represent putative new breast cancer susceptibility loci, because both loci were associated with breast cancer in available meta-analysis data including 122,977 breast cancer cases and 105,974 control subjects (P < 0.05). h^2_SNP (SE) estimates for percent dense, absolute dense, and nondense volume were 0.29 (0.07), 0.31 (0.07), and 0.25 (0.07), respectively. Corresponding ratios of h^2_SNP to previously observed narrow-sense h^2 estimates in the same cohort were 0.46, 0.72, and 0.41, respectively. These findings provide new insights into the genetic basis of MD and biological mechanisms linking MD to breast cancer risk. Apart from identifying three novel loci, we demonstrate that at least 25% of the MD variance is explained by common genetic variation, with h^2_SNP/h^2 ratios varying between dense and nondense MD components.
NASA Technical Reports Server (NTRS)
Eckardt, Robert C.; Byer, Robert L.; Masuda, Hisashi; Fan, Yuan Xuan
1990-01-01
Both absolute and relative nonlinear optical coefficients of six nonlinear materials measured by second-harmonic generation are discussed. A single-mode, injection-seeded, Q-switched Nd:YAG laser with spatially filtered output was used to generate the 1.064-micron fundamental radiation. The following results were obtained: d36(KDP) = 0.38 pm/V, d36(KD*P) = 0.37 pm/V, |d22(BaB2O4)| = 2.2 pm/V, d31(LiIO3) = -4.1 pm/V, d31(5% MgO:LiNbO3) = -4.7 pm/V, and d(eff)(KTP) = 3.2 pm/V. The accuracy of these measurements is estimated to be better than 10 percent.
Neural network versus classical time series forecasting models
NASA Astrophysics Data System (ADS)
Nor, Maria Elena; Safuan, Hamizah Mohd; Shab, Noorzehan Fazahiyah Md; Asrul, Mohd; Abdullah, Affendi; Mohamad, Nurul Asmaa Izzati; Lee, Muhammad Hisyam
2017-05-01
Artificial neural networks (ANN) have an advantage in time series forecasting, as they have the potential to solve complex forecasting problems. This is because an ANN is a data-driven approach that can be trained to map past values of a time series. In this study, the forecast performance of a neural network and a classical time series forecasting method, namely a seasonal autoregressive integrated moving average model, was compared using gold price data. Moreover, the effect of different data preprocessing on the forecast performance of the neural network was examined. Forecast accuracy was evaluated using mean absolute deviation, root mean square error, and mean absolute percentage error. It was found that the ANN produced the most accurate forecast when a Box-Cox transformation was used as data preprocessing.
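The three accuracy measures mentioned above, written out explicitly:

```python
import numpy as np

# Forecast accuracy measures: mean absolute deviation, root mean square
# error, and mean absolute percentage error.

def mad(actual, forecast):
    return float(np.mean(np.abs(actual - forecast)))

def rmse(actual, forecast):
    return float(np.sqrt(np.mean((actual - forecast) ** 2)))

def mape(actual, forecast):
    return float(np.mean(np.abs((actual - forecast) / actual)) * 100)

actual = np.array([100.0, 110.0, 120.0])
forecast = np.array([98.0, 113.0, 119.0])
print(mad(actual, forecast), rmse(actual, forecast), mape(actual, forecast))
```

MAPE is scale-free, which is why it is commonly reported alongside the scale-dependent MAD and RMSE when comparing models across series.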
Assessing and Ensuring GOES-R Magnetometer Accuracy
NASA Technical Reports Server (NTRS)
Carter, Delano R.; Todirita, Monica; Kronenwetter, Jeffrey; Chu, Donald
2016-01-01
The GOES-R magnetometer subsystem accuracy requirement is 1.7 nanoteslas (nT). During quiet times (100 nT), accuracy is defined as absolute mean plus 3 sigma. During storms (300 nT), accuracy is defined as absolute mean plus 2 sigma. Error comes both from outside the magnetometers, e.g. spacecraft fields and misalignments, as well as inside, e.g. zero offset and scale factor errors. Because zero offset and scale factor drift over time, it will be necessary to perform annual calibration maneuvers. To predict performance before launch, we have used Monte Carlo simulations and covariance analysis. Both behave as expected, and their accuracy predictions agree within 30%. With the proposed calibration regimen, both suggest that the GOES-R magnetometer subsystem will meet its accuracy requirements.
40 CFR 1065.1005 - Symbols, abbreviations, acronyms, and units of measure.
Code of Federal Regulations, 2014 CFR
2014-07-01
... of diameters meter per meter m/m 1 b atomic oxygen-to-carbon ratio mole per mole mol/mol 1 C # number... error between a quantity and its reference e brake-specific emission or fuel consumption gram per... standard deviation S Sutherland constant kelvin K K SEE standard estimate of error T absolute temperature...
2013-09-01
[Figure-list fragment: two-dimensional domains cropped out of three-dimensional numerically generated realizations; 3D PCE-NAPL realizations generated by UTCHEM; absolute-error vs. relative-error scatter plots of pM and gM from the SGS and TP/MC data sets using multi-task manifold regression.]
Impact of spot charge inaccuracies in IMPT treatments.
Kraan, Aafke C; Depauw, Nicolas; Clasie, Ben; Giunta, Marina; Madden, Tom; Kooy, Hanne M
2017-08-01
Spot charge is a parameter of pencil-beam scanning dose delivery systems whose accuracy is typically high but whose required accuracy has not been investigated. In this work we quantify the impact of spot charge inaccuracies on the dose distribution in patients. Knowing the effect of charge errors is relevant for conventional proton machines as well as for new-generation proton machines, where ensuring accurate charge may be challenging. Through perturbation of spot charge in treatment plans for seven patients and a phantom, we evaluated the dose impact of absolute (up to 5 × 10^6 protons) and relative (up to 30%) charge errors. We investigated the dependence on beam width by studying scenarios with small, medium, and large beam sizes. Treatment plan statistics included the Γ passing rate, dose-volume histograms, and dose differences. The allowable absolute charge error for small-spot plans was about 2 × 10^6 protons; larger limits would be allowed if larger spots were used. For relative errors, the maximum allowable error was about 13%, 8%, and 6% for small, medium, and large spots, respectively. Dose distributions turned out to be surprisingly robust against random spot charge perturbation. Our study suggests that ensuring spot charge errors as small as 1-2%, as is commonly aimed for in conventional proton therapy machines, is not strictly needed clinically. © 2017 American Association of Physicists in Medicine.
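A minimal 1D sketch of the perturbation idea, assuming Gaussian lateral spot profiles on a hypothetical spot grid (positions, beam width, and error magnitudes are illustrative, not the study's geometry). It shows why overlapping spots average out random charge errors:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical 1D field: spots every 5 mm with Gaussian lateral profiles
spot_x = np.arange(0.0, 100.0, 5.0)   # spot positions, mm
sigma = 8.0                           # beam width, mm
charge = np.ones_like(spot_x)         # nominal relative spot charges
x = np.linspace(0.0, 100.0, 201)      # dose grid, mm

def dose(charges):
    # Sum of Gaussian spot contributions at each grid point
    profiles = np.exp(-(x[None, :] - spot_x[:, None]) ** 2 / (2 * sigma ** 2))
    return (charges[:, None] * profiles).sum(axis=0)

nominal = dose(charge)
# Random relative charge errors of up to +/-13% (the small-spot limit above)
perturbed = dose(charge * (1.0 + rng.uniform(-0.13, 0.13, charge.size)))

# Maximum local dose deviation in the central plateau, percent of nominal
mid = (x > 20) & (x < 80)
max_dev = 100 * np.max(np.abs(perturbed[mid] - nominal[mid]) / nominal[mid])
```

Because each grid point receives dose from several overlapping spots, `max_dev` stays well below the 13% per-spot error; shrinking `sigma` relative to the spot spacing weakens this averaging, consistent with the tighter limits for small spots.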
Nissen, Steven E; Stroes, Erik; Dent-Acosta, Ricardo E; Rosenson, Robert S; Lehman, Sam J; Sattar, Naveed; Preiss, David; Bruckert, Eric; Ceška, Richard; Lepor, Norman; Ballantyne, Christie M; Gouni-Berthold, Ioanna; Elliott, Mary; Brennan, Danielle M; Wasserman, Scott M; Somaratne, Ransi; Scott, Rob; Stein, Evan A
2016-04-19
Muscle-related statin intolerance is reported by 5% to 20% of patients. To identify patients with muscle symptoms confirmed by statin rechallenge and compare lipid-lowering efficacy for 2 nonstatin therapies, ezetimibe and evolocumab. Two-stage randomized clinical trial including 511 adult patients with uncontrolled low-density lipoprotein cholesterol (LDL-C) levels and history of intolerance to 2 or more statins enrolled in 2013 and 2014 globally. Phase A used a 24-week crossover procedure with atorvastatin or placebo to identify patients having symptoms only with atorvastatin but not placebo. In phase B, after a 2-week washout, patients were randomized to ezetimibe or evolocumab for 24 weeks. Phase A: atorvastatin (20 mg) vs placebo. Phase B: randomization 2:1 to subcutaneous evolocumab (420 mg monthly) or oral ezetimibe (10 mg daily). Coprimary end points were the mean percent change in LDL-C level from baseline to the mean of weeks 22 and 24 levels and from baseline to week 24 levels. Of the 491 patients who entered phase A (mean age, 60.7 [SD, 10.2] years; 246 women [50.1%]; 170 with coronary heart disease [34.6%]; entry mean LDL-C level, 212.3 [SD, 67.9] mg/dL), muscle symptoms occurred in 209 of 491 (42.6%) while taking atorvastatin but not while taking placebo. Of these, 199 entered phase B, along with 19 who proceeded directly to phase B for elevated creatine kinase (N = 218, with 73 randomized to ezetimibe and 145 to evolocumab; entry mean LDL-C level, 219.9 [SD, 72] mg/dL). For the mean of weeks 22 and 24, LDL-C level with ezetimibe was 183.0 mg/dL; mean percent LDL-C change, -16.7% (95% CI, -20.5% to -12.9%), absolute change, -31.0 mg/dL and with evolocumab was 103.6 mg/dL; mean percent change, -54.5% (95% CI, -57.2% to -51.8%); absolute change, -106.8 mg/dL (P < .001). 
LDL-C level at week 24 with ezetimibe was 181.5 mg/dL; mean percent change, -16.7% (95% CI, -20.8% to -12.5%); absolute change, -31.2 mg/dL and with evolocumab was 104.1 mg/dL; mean percent change, -52.8% (95% CI, -55.8% to -49.8%); absolute change, -102.9 mg/dL (P < .001). For the mean of weeks 22 and 24, between-group difference in LDL-C was -37.8%; absolute difference, -75.8 mg/dL. For week 24, between-group difference in LDL-C was -36.1%; absolute difference, -71.7 mg/dL. Muscle symptoms were reported in 28.8% of ezetimibe-treated patients and 20.7% of evolocumab-treated patients (log-rank P = .17). Active study drug was stopped for muscle symptoms in 5 of 73 ezetimibe-treated patients (6.8%) and 1 of 145 evolocumab-treated patients (0.7%). Among patients with statin intolerance related to muscle-related adverse effects, the use of evolocumab compared with ezetimibe resulted in a significantly greater reduction in LDL-C levels after 24 weeks. Further studies are needed to assess long-term efficacy and safety. clinicaltrials.gov Identifier: NCT01984424.
One joule output from a diode-array-pumped Nd:YAG laser with side-pumped rod geometry
NASA Technical Reports Server (NTRS)
Kasinski, Jeffrey J.; Hughes, Will; Dibiase, Don; Bournes, Patrick; Burnham, Ralph
1992-01-01
Output of 1.25 J per pulse (1.064 micron) with an absolute optical efficiency of 28 percent and corresponding electrical efficiency of 10 percent was demonstrated in a diode-array-pumped Nd:YAG laser using a side-pumped rod geometry in a master-oscillator/power-amplifier configuration. In Q-switched operation, an output of 0.75 J in a 17-ns pulse was obtained. The fundamental laser output was frequency doubled in KTP with 60 percent conversion efficiency to obtain 0.45 J in a 16-ns pulse at 532 nm. The output beam had high spatial quality with pointing stability better than 40 microrad and a shot-to-shot pulse energy fluctuation of less than +/-3 percent.
Failure analysis and modeling of a VAXcluster system
NASA Technical Reports Server (NTRS)
Tang, Dong; Iyer, Ravishankar K.; Subramani, Sujatha S.
1990-01-01
This paper discusses the results of a measurement-based analysis of real error data collected from a DEC VAXcluster multicomputer system. In addition to evaluating basic system dependability characteristics, such as error and failure distributions and hazard rates for both individual machines and the VAXcluster as a whole, reward models were developed to analyze the impact of failures on the system. The results show that more than 46 percent of all failures were due to errors in shared resources, despite the fact that these errors have a recovery probability greater than 0.99. The hazard rate calculations show that not only errors but also failures occur in bursts: approximately 40 percent of all failures occurred in bursts and involved multiple machines, indicating that correlated failures are significant. Analysis of rewards shows that software errors have the lowest reward (0.05 vs. 0.74 for disk errors). The expected reward rate (a reliability measure) of the VAXcluster drops to 0.5 in 18 hours for the 7-out-of-7 model and in 80 days for the 3-out-of-7 model.
Error analysis on spinal motion measurement using skin mounted sensors.
Yang, Zhengyi; Ma, Heather Ting; Wang, Deming; Lee, Raymond
2008-01-01
Measurement errors of skin-mounted sensors in measuring the forward bending movement of the lumbar spine are investigated. Radiographic images capturing the position of the entire lumbar spine were acquired and used as a 'gold' standard. Seventeen young male volunteers (21 (SD 1) years old) agreed to participate in the study. Lightweight miniature sensors of an electromagnetic tracking system (Fastrak) were attached to the skin overlying the spinous processes of the lumbar spine. With the sensors attached, the subjects were requested to take lateral radiographs in two postures: neutral upright and full flexion. The ranges of motion of the lumbar spine were calculated from two sets of digitized data, the bony markers of the vertebral bodies and the sensors, and compared. The differences between the two sets of results were then analyzed. The relative movement between sensor and vertebra was decomposed into sensor sliding and tilting, from which a sliding error and a tilting error were defined. The gross motion range of forward bending of the lumbar spine measured from bony markers of the vertebrae was 67.8 degrees (SD 10.6 degrees) and that from the sensors was 62.8 degrees (SD 12.8 degrees). The error and absolute error for the gross motion range were 5.0 degrees (SD 7.2 degrees) and 7.7 degrees (SD 3.9 degrees). The contributions of the sensors placed on S1 and L1 to the absolute error were 3.9 degrees (SD 2.9 degrees) and 4.4 degrees (SD 2.8 degrees), respectively.
Donati, Marco; Camomilla, Valentina; Vannozzi, Giuseppe; Cappozzo, Aurelio
2008-07-19
The quantitative description of joint mechanics during movement requires the reconstruction of the position and orientation of selected anatomical axes with respect to a laboratory reference frame. These anatomical axes are identified through an ad hoc anatomical calibration procedure and their position and orientation are reconstructed relative to bone-embedded frames normally derived from photogrammetric marker positions and used to describe movement. The repeatability of anatomical calibration, both within and between subjects, is crucial for kinematic and kinetic end results. This paper illustrates an anatomical calibration approach, which does not require anatomical landmark manual palpation, described in the literature to be prone to great indeterminacy. This approach allows for the estimate of subject-specific bone morphology and automatic anatomical frame identification. The experimental procedure consists of digitization through photogrammetry of superficial points selected over the areas of the bone covered with a thin layer of soft tissue. Information concerning the location of internal anatomical landmarks, such as a joint center obtained using a functional approach, may also be added. The data thus acquired are matched with the digital model of a deformable template bone. Consequently, the repeatability of pelvis, knee and hip joint angles is determined. Five volunteers, each of whom performed five walking trials, and six operators, with no specific knowledge of anatomy, participated in the study. Descriptive statistics analysis was performed during upright posture, showing a limited dispersion of all angles (less than 3 deg) except for hip and knee internal-external rotation (6 deg and 9 deg, respectively). During level walking, the ratio of inter-operator and inter-trial error and an absolute subject-specific repeatability were assessed. 
For the pelvic and hip angles and for knee flexion-extension, the inter-operator error was equal to the inter-trial error, with the absolute error ranging from 0.1 deg to 0.9 deg. Knee internal-external rotation and ab-adduction showed, on average, inter-operator errors that were 8% and 28% greater than the relevant inter-trial errors, respectively; the absolute error was in the range 0.9-2.9 deg.
Problems in determining the surface density of the Galactic disk
NASA Technical Reports Server (NTRS)
Statler, Thomas S.
1989-01-01
A new method is presented for determining the local surface density of the Galactic disk from distance and velocity measurements of stars toward the Galactic poles. The procedure is fully three-dimensional, approximating the Galactic potential by a potential of Staeckel form and using the analytic third integral to treat the tilt and the change of shape of the velocity ellipsoid consistently. Applying the procedure to artificial data superficially resembling the K dwarf sample of Kuijken and Gilmore (1988, 1989), it is shown that the current best estimates of local disk surface density are uncertain by at least 30 percent. Of this, about 25 percent is due to the size of the velocity sample, about 15 percent comes from uncertainties in the rotation curve and the solar galactocentric distance, and about 10 percent from ignorance of the shape of the velocity distribution above z = 1 kpc, the errors adding in quadrature. Increasing the sample size by a factor of 3 will reduce the error to 20 percent. To achieve 10 percent accuracy, observations will be needed along other lines of sight to constrain the shape of the velocity ellipsoid.
Herpel, Laura B; Kanner, Richard E; Lee, Shing M; Fessler, Henry E; Sciurba, Frank C; Connett, John E; Wise, Robert A
2006-05-15
Our goal is to determine the short-term intraindividual biologic and measurement variability in spirometry of patients with a wide range of stable chronic obstructive pulmonary disease severity, using datasets from the National Emphysema Treatment Trial (NETT) and the Lung Health Study (LHS). This may be applied to determine criteria for a clinically meaningful change in spirometry. A total of 5,886 participants from the LHS and 1,215 participants from the NETT performed prebronchodilator spirometry during two baseline sessions. We analyzed varying criteria for absolute and percent change of FEV1 and FVC to determine which criterion was met by 90% of the participants. The mean +/- SD FEV1 for the initial session was 2.64 +/- 0.60 L (75.1 +/- 8.8% predicted) for the LHS and 0.68 +/- 0.22 L (23.7 +/- 6.5% predicted) for the NETT. The mean +/- SD number of days between test sessions was 24.9 +/- 17.1 for the LHS and 85.7 +/- 21.7 for the NETT. As the degree of obstruction increased, the intersession percent difference of FEV1 increased. However, the absolute difference between tests remained relatively constant regardless of the severity of obstruction (0.106 +/- 0.10 L). Over 90% of participants had an intersession FEV1 difference of less than 225 ml irrespective of the severity of obstruction. Absolute changes in FEV1, rather than percent change, should be used to determine whether patients with chronic obstructive pulmonary disease have improved or worsened between test sessions.
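A sketch applying this abstract's conclusion, using the reported 225 ml intersession bound as the decision threshold (the patient values below are hypothetical):

```python
def fev1_change_is_meaningful(fev1_before_l, fev1_after_l, threshold_l=0.225):
    """Flag an intersession FEV1 change (in litres) exceeding the 225 ml
    variability bound reported in the abstract above."""
    return abs(fev1_after_l - fev1_before_l) > threshold_l

# A 0.15 L change is 22% of a severe patient's 0.68 L baseline,
# yet still within expected intersession variability
assert not fev1_change_is_meaningful(0.68, 0.83)
# The same absolute criterion flags a 0.30 L change at any baseline
assert fev1_change_is_meaningful(2.64, 2.34)
```

A percent-change criterion would have flagged the first case and possibly missed the second, which is the asymmetry the study argues against.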
Weatherwax, Ryan M; Harris, Nigel K; Kilding, Andrew E; Dalleck, Lance C
2018-01-01
Even though cardiorespiratory fitness (CRF) training elicits numerous health benefits, not all individuals have positive training responses following a structured CRF intervention. It has been suggested that the technical error (TE), a combination of biological variability and measurement error, should be used to establish specific training responsiveness criteria to gain further insight into the effectiveness of the training program. To date, most training interventions use an absolute change or a TE from previous findings, which does not take into consideration the training site and equipment used to establish training outcomes or the specific cohort being evaluated. The purpose of this investigation was to retrospectively analyze the training responsiveness of two CRF training interventions using two common criteria and a site-specific TE. Sixteen men and women completed two maximal graded exercise tests and verification bouts to identify maximal oxygen consumption (VO2max) and establish a site-specific TE. The TE was then used to retrospectively analyze training responsiveness in comparison with commonly used criteria: percent change of >0% and >+5.6% in VO2max. The TE was found to be 7.7% for relative VO2max. χ² testing showed significant differences in all training criteria for each intervention and for the pooled data from both interventions, except between %Δ >0 and %Δ >+7.7% in one of the investigations. Training nonresponsiveness ranged from 11.5% to 34.6%. Findings from the present study support the utility of a site-specific TE criterion to quantify training responsiveness. A similar methodology of establishing a site-specific, and even cohort-specific, TE should be considered to establish when true cardiorespiratory training adaptations occur.
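A sketch of responder classification against a site-specific TE, using the 7.7% relative VO2max TE reported above (the pre/post values are hypothetical):

```python
def classify_response(vo2max_pre, vo2max_post, te_percent=7.7):
    """Classify a training response against a site-specific technical error,
    here the 7.7% relative VO2max TE reported in the abstract above."""
    pct_change = 100 * (vo2max_post - vo2max_pre) / vo2max_pre
    if pct_change > te_percent:
        return "responder"
    if pct_change < -te_percent:
        return "adverse responder"
    return "within measurement error"

# Hypothetical relative VO2max values (ml/kg/min)
assert classify_response(40.0, 44.0) == "responder"                 # +10%
assert classify_response(40.0, 41.0) == "within measurement error"  # +2.5%
```

Under the common ">0%" criterion, the second case would have counted as a responder even though the change is smaller than the measurement error itself.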
Morrow, Linda; Hompesch, Marcus; Tideman, Ann M; Matson, Jennifer; Dunne, Nancy; Pardo, Scott; Parkes, Joan L; Schachner, Holly C; Simmons, David A
2011-07-01
This glucose clamp study assessed the performance of an electrochemical continuous glucose monitoring (CGM) system for monitoring levels of interstitial glucose. This novel system does not require use of a trocar or needle for sensor insertion. Continuous glucose monitoring sensors were inserted subcutaneously into the abdominal tissue of 14 adults with type 1 or type 2 diabetes. Subjects underwent an automated glucose clamp procedure with four consecutive post-steady-state glucose plateau periods (40 min each): (a) hypoglycemic (50 mg/dl), (b) hyperglycemic (250 mg/dl), (c) second hypoglycemic (50 mg/dl), and (d) euglycemic (90 mg/dl). Plasma glucose results obtained with YSI glucose analyzers were used for sensor calibration. Accuracy was assessed retrospectively for plateau periods and transition states, when glucose levels were changing rapidly (approximately 2 mg/dl/min). Mean absolute percent difference (APD) was lowest during hypoglycemic plateaus (11.68%, 14.15%) and the euglycemic-to-hypoglycemic transition (14.21%). Mean APD during the hyperglycemic plateau was 17.11%; mean APDs were 18.12% and 19.25% during the hypoglycemic-to-hyperglycemic and hyperglycemic-to-hypoglycemic transitions, respectively. Parkes (consensus) error grid analysis (EGA) and rate EGA of the plateaus and transition periods, respectively, yielded 86.8% and 68.6% accurate results (zone A) and 12.1% and 20.0% benign errors (zone B). Continuous EGA yielded 88.5%, 75.4%, and 79.3% accurate results and 8.3%, 14.3%, and 2.4% benign errors for the euglycemic, hyperglycemic, and hypoglycemic transition periods, respectively. Adverse events were mild and unlikely to be device related. This novel CGM system was safe and accurate across the clinically relevant glucose range. © 2011 Diabetes Technology Society.
Estimating riparian understory vegetation cover with beta regression and copula models
Eskelson, Bianca N.I.; Madsen, Lisa; Hagar, Joan C.; Temesgen, Hailemariam
2011-01-01
Understory vegetation communities are critical components of forest ecosystems. As a result, the importance of modeling understory vegetation characteristics in forested landscapes has become more apparent. Abundance measures such as shrub cover are bounded between 0 and 1, exhibit heteroscedastic error variance, and are often subject to spatial dependence. These distributional features tend to be ignored when shrub cover data are analyzed. The beta distribution has been used successfully to describe the frequency distribution of vegetation cover. Beta regression models ignoring spatial dependence (BR) and accounting for spatial dependence (BRdep) were used to estimate percent shrub cover as a function of topographic conditions and overstory vegetation structure in riparian zones in western Oregon. The BR models showed poor explanatory power (pseudo-R2 ≤ 0.34) but outperformed ordinary least-squares (OLS) and generalized least-squares (GLS) regression models with logit-transformed response in terms of mean square prediction error and absolute bias. We introduce a copula (COP) model that is based on the beta distribution and accounts for spatial dependence. A simulation study was designed to illustrate the effects of incorrectly assuming normality, equal variance, and spatial independence. It showed that BR, BRdep, and COP models provide unbiased parameter estimates, whereas OLS and GLS models result in slightly biased estimates for two of the three parameters. On the basis of the simulation study, 93–97% of the GLS, BRdep, and COP confidence intervals covered the true parameters, whereas OLS and BR only resulted in 84–88% coverage, which demonstrated the superiority of GLS, BRdep, and COP over OLS and BR models in providing standard errors for the parameter estimates in the presence of spatial dependence.
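A minimal sketch of beta regression with a logit mean link, fit by maximum likelihood on simulated cover-like data (the covariate, sample size, and parameter values are made up; spatial dependence, as in the BRdep and COP models, is not modeled here):

```python
import numpy as np
from scipy import optimize, special, stats

rng = np.random.default_rng(2)

# Simulate cover-like responses in (0, 1) with a logit mean link:
# y ~ Beta(mu * phi, (1 - mu) * phi), mu = expit(b0 + b1 * x)
n = 400
x = rng.uniform(-2.0, 2.0, n)
b0_true, b1_true, phi_true = -0.5, 0.8, 10.0
mu = special.expit(b0_true + b1_true * x)
y = rng.beta(mu * phi_true, (1.0 - mu) * phi_true)

def negloglik(params):
    b0, b1, log_phi = params
    phi = np.exp(log_phi)                 # keep the precision positive
    m = special.expit(b0 + b1 * x)
    return -np.sum(stats.beta.logpdf(y, m * phi, (1.0 - m) * phi))

fit = optimize.minimize(negloglik, x0=np.zeros(3), method="Nelder-Mead",
                        options={"maxiter": 5000, "maxfev": 5000})
b0_hat, b1_hat = fit.x[0], fit.x[1]
phi_hat = np.exp(fit.x[2])
```

The beta likelihood naturally handles the bounded response and the heteroscedastic variance (variance shrinks near 0 and 1), which is what OLS on a logit-transformed response misses.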
Accuracy of a continuous glucose monitoring system in dogs and cats with diabetic ketoacidosis.
Reineke, Erica L; Fletcher, Daniel J; King, Lesley G; Drobatz, Kenneth J
2010-06-01
(1) To determine the ability of a continuous interstitial glucose monitoring system (CGMS) to accurately estimate blood glucose (BG) in dogs and cats with diabetic ketoacidosis. (2) To determine the effect of perfusion, hydration, body condition score, severity of ketosis, and frequency of calibration on the accuracy of the CGMS. Prospective study. University Teaching Hospital. Thirteen dogs and 11 cats diagnosed with diabetic ketoacidosis were enrolled in the study within 24 hours of presentation. Once BG dropped below 22.2 mmol/L (400 mg/dL), a sterile flexible glucose sensor was placed aseptically in the interstitial space and attached to the continuous glucose monitoring device for estimation of the interstitial glucose every 5 minutes. BG measurements were taken with a portable BG meter every 2-4 hours at the discretion of the primary clinician and compared with CGMS glucose measurements. The CGMS estimates of BG and BG measured on the glucometer were strongly associated regardless of calibration frequency (calibration every 8 h: r=0.86, P<0.001; calibration every 12 h: r=0.85, P<0.001). Evaluation of these data using both the Clarke and Consensus error grids showed that 96.7% and 99% of the CGMS readings, respectively, were deemed clinically acceptable (Zone A and B errors). Interpatient variability in the accuracy of the CGMS glucose measurements was found but was not associated with body condition, perfusion, or degree of ketosis. A weak association between the hydration status of the patient, as assessed with a visual analog scale, and the absolute percent error (Spearman's rank correlation, rho=-0.079, 95% CI=-0.15 to -0.01, P=0.03) was found, with the device being more accurate in better hydrated patients. The CGMS provides clinically accurate estimates of BG in patients with diabetic ketoacidosis.
Poster Presentation: Optical Test of NGST Developmental Mirrors
NASA Technical Reports Server (NTRS)
Hadaway, James B.; Geary, Joseph; Reardon, Patrick; Peters, Bruce; Keidel, John; Chavers, Greg
2000-01-01
An Optical Testing System (OTS) has been developed to measure the figure and radius of curvature of NGST developmental mirrors in the vacuum, cryogenic environment of the X-Ray Calibration Facility (XRCF) at Marshall Space Flight Center (MSFC). The OTS consists of a WaveScope Shack-Hartmann sensor from Adaptive Optics Associates as the main instrument, a Point Diffraction Interferometer (PDI), a Point Spread Function (PSF) imager, an alignment system, a Leica Disto Pro distance measurement instrument, and a laser source palette (632.8 nm wavelength) that is fiber-coupled to the sensor instruments. All of the instruments except the laser source palette are located on a single breadboard known as the Wavefront Sensor Pallet (WSP). The WSP is located on top of a 5-DOF motion system located at the center of curvature of the test mirror. Two PCs are used to control the OTS. The error in the figure measurement is dominated by the WaveScope's measurement error. An analysis using the absolute wavefront gradient error of 1/50 wave P-V (at 0.6328 microns) provided by the manufacturer leads to a total surface figure measurement error of approximately 1/100 wave rms. This easily meets the requirement of 1/10 wave P-V. The error in radius of curvature is dominated by the Leica's absolute measurement error of ±1.5 mm and the focus setting error of ±1.4 mm, giving an overall error of ±2 mm. The OTS is currently being used to test the NGST Mirror System Demonstrators (NMSD's) and the Subscale Beryllium Mirror Demonstrator (SBNM).
Is adult gait less susceptible than paediatric gait to hip joint centre regression equation error?
Kiernan, D; Hosking, J; O'Brien, T
2016-03-01
Hip joint centre (HJC) regression equation error during paediatric gait has recently been shown to have clinical significance. In relation to adult gait, it has been inferred that comparable errors with children in absolute HJC position may in fact result in less significant kinematic and kinetic error. This study investigated the clinical agreement of three commonly used regression equation sets (Bell et al., Davis et al. and Orthotrak) for adult subjects against the equations of Harrington et al. The relationship between HJC position error and subject size was also investigated for the Davis et al. set. Full 3-dimensional gait analysis was performed on 12 healthy adult subjects with data for each set compared to Harrington et al. The Gait Profile Score, Gait Variable Score and GDI-kinetic were used to assess clinical significance while differences in HJC position between the Davis and Harrington sets were compared to leg length and subject height using regression analysis. A number of statistically significant differences were present in absolute HJC position. However, all sets fell below the clinically significant thresholds (GPS <1.6°, GDI-Kinetic <3.6 points). Linear regression revealed a statistically significant relationship for both increasing leg length and increasing subject height with decreasing error in anterior/posterior and superior/inferior directions. Results confirm a negligible clinical error for adult subjects suggesting that any of the examined sets could be used interchangeably. Decreasing error with both increasing leg length and increasing subject height suggests that the Davis set should be used cautiously on smaller subjects. Copyright © 2016 Elsevier B.V. All rights reserved.
Modeling the hypothalamus-pituitary-adrenal axis: A review and extension.
Hosseinichimeh, Niyousha; Rahmandad, Hazhir; Wittenborn, Andrea K
2015-10-01
Multiple models of the hypothalamus-pituitary-adrenal (HPA) axis have been developed to characterize the oscillations seen in the hormone concentrations and to examine HPA axis dysfunction. We reviewed the existing models, then replicated and compared five of them by finding their correspondence to a dataset consisting of ACTH and cortisol concentrations of 17 healthy individuals. We found that existing models use different feedback mechanisms, vary in the level of details and complexities, and offer inconsistent conclusions. None of the models fit the validation dataset well. Therefore, we re-calibrated the best performing model using partial calibration and extended the model by adding individual fixed effects and an exogenous circadian function. Our estimated parameters reduced the mean absolute percent error significantly and offer a validated reference model that can be used in diverse applications. Our analysis suggests that the circadian and ultradian cycles are not created endogenously by the HPA axis feedbacks, which is consistent with the recent literature on the circadian clock and HPA axis. Copyright © 2015 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Song, L.; Liu, S.; Kustas, W. P.; Nieto, H.
2017-12-01
Operational estimation of spatially and temporally continuous daily evapotranspiration (ET), and its components evaporation (E) and transpiration (T), at the watershed scale is very useful for developing a sustainable water resource strategy in semi-arid and arid areas. In this study, multi-year all-weather daily ET, E, and T were estimated using the MODIS-based Dual Temperature Difference (DTD) model under different land covers in the Heihe watershed, China. The remotely sensed ET was validated using ground measurements from large-aperture scintillometer systems, with a source area of several kilometers, under grassland, cropland, and riparian shrub-forest. The results showed that the remotely sensed ET produced mean absolute percent deviation (MAPD) errors of about 30% during the growing season under all-weather conditions, with better model performance under clear-sky conditions. Uncertainty in the interpolated MODIS land surface temperature input to the DTD model under cloudy conditions, and the representativeness of the LAS measurements over heterogeneous land surfaces, contribute to the discrepancies between the modeled and ground-measured surface heat fluxes, especially for the more humid grassland and heterogeneous shrub-forest sites.
Computational technique for stepwise quantitative assessment of equation correctness
NASA Astrophysics Data System (ADS)
Othman, Nuru'l Izzah; Bakar, Zainab Abu
2017-04-01
Many of the computer-aided mathematics assessment systems available today can implement stepwise correctness checking of a working scheme for solving equations. The computational technique for assessing the correctness of each response in the scheme mainly involves checking mathematical equivalence and providing qualitative feedback. This paper presents a technique, known as the Stepwise Correctness Checking and Scoring (SCCS) technique, that checks the correctness of each equation in terms of structural equivalence and provides quantitative feedback. The technique, which is based on the multiset framework, adapts techniques from textual information retrieval involving tokenization, document modelling, and similarity evaluation. The performance of the SCCS technique was tested using worked solutions on solving linear algebraic equations in one variable. A total of 350 working schemes comprising 1,385 responses were collected using a marking-engine prototype developed based on the technique. The results show that both the automated analytical scores and the automated overall scores generated by the marking engine exhibit high percent agreement, high correlation, and a high degree of agreement with manual scores, with small average absolute and mixed errors.
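A toy sketch of the multiset idea, comparing two equation steps as token multisets (the tokenizer and similarity score are illustrative simplifications, not the SCCS implementation):

```python
import re
from collections import Counter

def tokenize(equation):
    """Split an equation string into number, variable, and operator tokens."""
    return re.findall(r"\d+\.?\d*|[a-zA-Z]+|[-+*/=()^]", equation.replace(" ", ""))

def multiset_similarity(eq_a, eq_b):
    """Dice-style overlap of the two equations' token multisets."""
    a, b = Counter(tokenize(eq_a)), Counter(tokenize(eq_b))
    overlap = sum((a & b).values())  # multiset intersection keeps minimum counts
    return 2 * overlap / (sum(a.values()) + sum(b.values()))

# Structurally identical up to reordering of terms
s_same = multiset_similarity("2x + 3 = 7", "3 + 2x = 7")
# A step with a sign error shares fewer tokens
s_diff = multiset_similarity("2x = 7 - 3", "2x = 7 + 3")
```

Treating each response as a multiset makes term order irrelevant while still penalizing missing or altered tokens, which is the behavior a quantitative stepwise score needs.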
Sethuraman, Usha; Kannikeswaran, Nirupama; Murray, Kyle P; Zidan, Marwan A; Chamberlain, James M
2015-06-01
Prescription errors occur frequently in pediatric emergency departments (PEDs). The effect of computerized physician order entry (CPOE) with an electronic medication alert system (EMAS) on these errors is unknown. The objective was to compare prescription error rates before and after introduction of CPOE with EMAS in a PED. The hypothesis was that CPOE with EMAS would significantly reduce the rate and severity of prescription errors in the PED. A prospective comparison of a sample of outpatient medication prescriptions written 5 months before and after CPOE with EMAS implementation (7,268 before and 7,292 after) was performed. Error types and rates, alert types and significance, and physician response were noted. Medication errors were deemed significant if there was a potential to cause life-threatening injury, failure of therapy, or an adverse drug effect. There was a significant reduction in errors per 100 prescriptions (10.4 before vs. 7.3 after; absolute risk reduction = 3.1, 95% confidence interval [CI] = 2.2 to 4.0). Drug dosing error rates decreased from 8 to 5.4 per 100 (absolute risk reduction = 2.6, 95% CI = 1.8 to 3.4). Alerts were generated for 29.6% of prescriptions, with 45% involving drug dose range checking. The sensitivity of CPOE with EMAS in identifying errors in prescriptions was 45.1% (95% CI = 40.8% to 49.6%), and the specificity was 57% (95% CI = 55.6% to 58.5%). Prescribers modified 20% of the dosing alerts, preventing the error from reaching the patient. Conversely, 11% of true dosing alerts were overridden by the prescribers; of the overridden alerts, 88 (11.3%) resulted in medication errors and 684 (88.6%) were false-positive alerts. A CPOE with EMAS was associated with a decrease in overall prescription errors in our PED. Further system refinements are required to reduce the high false-positive alert rates. © 2015 by the Society for Academic Emergency Medicine.
[Errors in prescriptions and their preparation at the outpatient pharmacy of a regional hospital].
Alvarado A, Carolina; Ossa G, Ximena; Bustos M, Luis
2017-01-01
Adverse effects of medications are an important cause of morbidity and hospital admissions. Errors in the prescription or preparation of medications by pharmacy personnel are a factor that may influence the occurrence of these adverse effects. Aim: To assess the frequency and type of errors in prescriptions and in their preparation at the pharmacy unit of a regional public hospital. Prescriptions received by ambulatory patients and those being discharged from the hospital were reviewed using a 12-item checklist. The preparation of such prescriptions at the pharmacy unit was also reviewed using a seven-item checklist. Seventy-two percent of prescriptions had at least one error. The most common mistake was the impossibility of determining the concentration of the prescribed drug. Prescriptions for patients being discharged from the hospital had the highest number of errors. When a prescription had more than two drugs, the risk of error increased 2.4 times. Twenty-four percent of prescription preparations had at least one error. The most common mistake was the labeling of drugs with incomplete medical indications. When a preparation included more than three drugs, the risk of preparation error increased 1.8 times. Prescriptions and preparations of medication delivered to patients had frequent errors. The most important risk factor for errors was the number of drugs prescribed.
Performance Evaluation of sUAS Equipped with Velodyne HDL-32E LiDAR Sensor
NASA Astrophysics Data System (ADS)
Jozkow, G.; Wieczorek, P.; Karpina, M.; Walicka, A.; Borkowski, A.
2017-08-01
The Velodyne HDL-32E laser scanner is increasingly used as the main mapping sensor in small commercial UASs. However, there is still little information about the actual accuracy of point clouds collected with such UASs. This work empirically evaluates the accuracy of the point cloud collected with such a UAS. The accuracy assessment covered four aspects: the impact of the component sensors on theoretical point cloud accuracy, trajectory reconstruction quality, and internal and absolute point cloud accuracies. Theoretical point cloud accuracy was evaluated by propagating the known errors of the component sensors into the 3D point position error. The quality of trajectory reconstruction was assessed by comparing position and attitude differences between the forward and reverse EKF solutions. Internal and absolute accuracies were evaluated by fitting planes to 8 point cloud samples extracted over planar surfaces. In addition, the absolute accuracy was also determined by calculating 3D point distances between the LiDAR UAS and reference TLS point clouds. Test data consisted of point clouds collected in two separate flights performed over the same area. The experiments showed that in the tested UAS, trajectory reconstruction, especially attitude, has a significant impact on point cloud accuracy. The estimated absolute accuracy of the point clouds collected during both test flights was better than 10 cm; thus the investigated UAS fits the mapping-grade category.
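The internal-accuracy check described above (fitting planes to samples over planar surfaces and inspecting the residuals) can be sketched as follows. This is an illustrative reconstruction, not the authors' processing chain, and the sample coordinates are made up.

```python
def fit_plane(points):
    # Least-squares plane z = a*x + b*y + c through (x, y, z) samples,
    # solved via the 3x3 normal equations with Gaussian elimination.
    n = len(points)
    sx = sum(p[0] for p in points); sy = sum(p[1] for p in points); sz = sum(p[2] for p in points)
    sxx = sum(p[0]*p[0] for p in points); syy = sum(p[1]*p[1] for p in points)
    sxy = sum(p[0]*p[1] for p in points)
    sxz = sum(p[0]*p[2] for p in points); syz = sum(p[1]*p[2] for p in points)
    A = [[sxx, sxy, sx, sxz],
         [sxy, syy, sy, syz],
         [sx,  sy,  n,  sz ]]
    for i in range(3):                       # forward elimination with pivoting
        piv = max(range(i, 3), key=lambda r: abs(A[r][i]))
        A[i], A[piv] = A[piv], A[i]
        for r in range(i + 1, 3):
            f = A[r][i] / A[i][i]
            A[r] = [A[r][k] - f * A[i][k] for k in range(4)]
    coeffs = [0.0, 0.0, 0.0]
    for i in (2, 1, 0):                      # back substitution
        coeffs[i] = (A[i][3] - sum(A[i][k] * coeffs[k] for k in range(i + 1, 3))) / A[i][i]
    return coeffs

def rms_plane_residual(points):
    # RMS of vertical point-to-plane residuals: a proxy for the internal
    # accuracy of a point-cloud patch covering a planar surface.
    a, b, c = fit_plane(points)
    resid = [p[2] - (a*p[0] + b*p[1] + c) for p in points]
    return (sum(r*r for r in resid) / len(points)) ** 0.5

# Invented patch: roughly z = 0.5*x + 1 with centimeter-level noise.
pts = [(0, 0, 1.00), (1, 0, 1.52), (0, 1, 0.98), (1, 1, 1.49), (2, 1, 2.03)]
print(rms_plane_residual(pts))
```

Real pipelines fit the full plane equation (not just z as a function of x, y) and use perpendicular distances, but the residual-RMS idea is the same.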
NASA Technical Reports Server (NTRS)
Thome, Kurtis; McCorkel, Joel; McAndrew, Brendan
2016-01-01
The Climate Absolute Radiance and Refractivity Observatory (CLARREO) mission addresses the need to observe high-accuracy, long-term climate change trends and to use decadal change observations as a method to determine the accuracy of climate projections. A CLARREO objective is to improve the accuracy of SI-traceable, absolute calibration at infrared and reflected solar wavelengths to reach the on-orbit accuracies required to allow climate change observations to survive data gaps and observe climate change at the limit of natural variability. Such an effort will also demonstrate National Institute of Standards and Technology (NIST) approaches for use in future spaceborne instruments. The current work describes the results of laboratory and field measurements with the Solar, Lunar for Absolute Reflectance Imaging Spectroradiometer (SOLARIS), which is the calibration demonstration system (CDS) for the reflected solar portion of CLARREO. SOLARIS allows testing and evaluation of calibration approaches, alternate design and/or implementation approaches, and components for the CLARREO mission. SOLARIS also provides a test-bed for detector technologies, non-linearity determination and uncertainties, and application of future technology developments and suggested spacecraft instrument design modifications. Results of laboratory calibration measurements are provided to demonstrate key assumptions about instrument behavior that are needed to achieve CLARREO's climate measurement requirements. Absolute radiometric response is determined using laser-based calibration sources and applied to direct solar views for comparison with accepted solar irradiance models to demonstrate accuracy values, giving confidence in the error budget for the CLARREO reflectance retrieval.
NASA Astrophysics Data System (ADS)
Sadi, Maryam
2018-01-01
In this study, a group method of data handling (GMDH) model has been successfully developed to predict the heat capacity of ionic liquid based nanofluids, using reduced temperature, acentric factor and molecular weight of the ionic liquids, and nanoparticle concentration as input parameters. To accomplish the modeling, 528 experimental data points extracted from the literature were divided into training and testing subsets. The training set was used to estimate the model coefficients and the testing set was applied for model validation. The ability and accuracy of the developed model have been evaluated by comparing model predictions with experimental values using different statistical parameters such as the coefficient of determination, mean square error and mean absolute percentage error. The mean absolute percentage errors of the developed model for the training and testing sets are 1.38% and 1.66%, respectively, which indicates excellent agreement between model predictions and experimental data. Also, the results estimated by the developed GMDH model exhibit higher accuracy when compared to the available theoretical correlations.
Applications and Comparisons of Four Time Series Models in Epidemiological Surveillance Data
Young, Alistair A.; Li, Xiaosong
2014-01-01
Public health surveillance systems provide valuable data for reliable prediction of future epidemic events. This paper describes a study that used nine types of infectious disease data collected through a national public health surveillance system in mainland China to evaluate and compare the performances of four time series methods, namely, two decomposition methods (regression and exponential smoothing), autoregressive integrated moving average (ARIMA) and support vector machine (SVM). The data obtained from 2005 to 2011 and in 2012 were used as modeling and forecasting samples, respectively. The performances were evaluated based on three metrics: mean absolute error (MAE), mean absolute percentage error (MAPE), and mean square error (MSE). The accuracy of the statistical models in forecasting future epidemic disease proved their effectiveness in epidemiological surveillance. Although the comparisons found that no single method is completely superior to the others, the present study indeed highlighted that the SVM outperforms the ARIMA model and the decomposition methods in most cases. PMID:24505382
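The three evaluation metrics named above have standard definitions; a minimal sketch (the observed and predicted series are invented for illustration):

```python
def mae(actual, predicted):
    # Mean absolute error: average magnitude of the forecast errors.
    return sum(abs(a - p) for a, p in zip(actual, predicted)) / len(actual)

def mape(actual, predicted):
    # Mean absolute percentage error: scale-free, but undefined when an
    # actual value is zero.
    return 100.0 * sum(abs((a - p) / a) for a, p in zip(actual, predicted)) / len(actual)

def mse(actual, predicted):
    # Mean square error: penalizes large errors more heavily than MAE.
    return sum((a - p) ** 2 for a, p in zip(actual, predicted)) / len(actual)

obs  = [120.0, 100.0, 80.0, 90.0]    # hypothetical monthly case counts
pred = [110.0, 105.0, 85.0, 88.0]
print(mae(obs, pred), mape(obs, pred), mse(obs, pred))
```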
Corsica: A Multi-Mission Absolute Calibration Site
NASA Astrophysics Data System (ADS)
Bonnefond, P.; Exertier, P.; Laurain, O.; Guinle, T.; Femenias, P.
2013-09-01
In collaboration with the CNES and NASA oceanographic projects (TOPEX/Poseidon and Jason), the OCA (Observatoire de la Côte d'Azur) has developed a verification site in Corsica, begun in 1996 and operational since 1998. CALibration/VALidation embraces a wide variety of activities, ranging from the interpretation of information from internal-calibration modes of the sensors to validation of the fully corrected estimates of the reflector heights using in situ data. Corsica is now, like the Harvest platform (NASA side) [14], an operating calibration site able to support continuous monitoring with a high level of accuracy: a 'point calibration' which yields instantaneous bias estimates with a 10-day repeatability of 30 mm (standard deviation) and mean errors of 4 mm (standard error). For a 35-day repeatability (ERS, Envisat), due to the smaller time series, the standard error is about double (~7 mm). In this paper, we present updated results of the absolute Sea Surface Height (SSH) biases for TOPEX/Poseidon (T/P), Jason-1, Jason-2, ERS-2 and Envisat.
Artificial neural network modelling of a large-scale wastewater treatment plant operation.
Güçlü, Dünyamin; Dursun, Sükrü
2010-11-01
Artificial Neural Networks (ANNs), an artificial intelligence method, provide effective predictive models for complex processes. Three independent ANN models trained with the back-propagation algorithm were developed to predict the effluent chemical oxygen demand (COD), suspended solids (SS) and aeration tank mixed liquor suspended solids (MLSS) concentrations of the Ankara central wastewater treatment plant. The appropriate architecture of the ANN models was determined through several steps of training and testing of the models. The ANN models yielded satisfactory predictions. The root mean square error, mean absolute error and mean absolute percentage error were 3.23, 2.41 mg/L and 5.03% for COD; 1.59, 1.21 mg/L and 17.10% for SS; and 52.51, 44.91 mg/L and 3.77% for MLSS, respectively, indicating that the developed models could be used efficiently. The results overall also confirm that the ANN modelling approach may have great implementation potential for the simulation, precise performance prediction and process control of wastewater treatment plants.
Hahn, David K; RaghuVeer, Krishans; Ortiz, J V
2014-05-15
Time-dependent density functional theory (TD-DFT) and electron propagator theory (EPT) are used to calculate the electronic transition energies and ionization energies, respectively, of species containing phosphorus or sulfur. The accuracy of TD-DFT and EPT, in conjunction with various basis sets, is assessed with data from gas-phase spectroscopy. TD-DFT is tested using 11 prominent exchange-correlation functionals on a set of 37 vertical and 19 adiabatic transitions. For vertical transitions, TD-CAM-B3LYP calculations performed with the MG3S basis set are lowest in overall error, having a mean absolute deviation from experiment of 0.22 eV, or 0.23 eV over valence transitions and 0.21 eV over Rydberg transitions. Using a larger basis set, aug-pc3, improves accuracy over the valence transitions via hybrid functionals, but improved accuracy over the Rydberg transitions is only obtained via the BMK functional. For adiabatic transitions, all hybrid functionals paired with the MG3S basis set perform well, and B98 is best, with a mean absolute deviation from experiment of 0.09 eV. The testing of EPT used the Outer Valence Green's Function (OVGF) approximation and the Partial Third Order (P3) approximation on 37 vertical first ionization energies. It is found that OVGF outperforms P3 when basis sets of at least triple-ζ quality in the polarization functions are used. The largest basis set used in this study, aug-pc3, obtained the best mean absolute error for both methods: 0.08 eV for OVGF and 0.18 eV for P3. The OVGF/6-31+G(2df,p) level of theory is particularly cost-effective, yielding a mean absolute error of 0.11 eV.
NASA Astrophysics Data System (ADS)
Greer, Tyler; Lietz, Christopher B.; Xiang, Feng; Li, Lingjun
2015-01-01
Absolute quantification of protein targets using liquid chromatography-mass spectrometry (LC-MS) is a key component of candidate biomarker validation. One popular method combines multiple reaction monitoring (MRM) using a triple quadrupole instrument with stable isotope-labeled standards (SIS) for absolute quantification (AQUA). LC-MRM AQUA assays are sensitive and specific, but they are also expensive because of the cost of synthesizing stable isotope peptide standards. While the chemical modification approach using mass differential tags for relative and absolute quantification (mTRAQ) represents a more economical approach when quantifying large numbers of peptides, these reagents are costly and still suffer from lower throughput because only two concentration values per peptide can be obtained in a single LC-MS run. Here, we have developed and applied a set of five novel mass difference reagents, isotopic N,N-dimethyl leucine (iDiLeu). These labels contain an amine-reactive group (triazine ester), are cost-effective because of their synthetic simplicity, and increase throughput compared with previous LC-MS quantification methods by allowing construction of a four-point standard curve in one run. iDiLeu-labeled peptides show remarkably similar retention time shifts, slightly lower energy thresholds for higher-energy collisional dissociation (HCD) fragmentation, and high quantification accuracy for trypsin-digested protein samples (median errors <15%). By spiking an iDiLeu-labeled neuropeptide, allatostatin, into mouse urine matrix, two quantification methods were validated. The first uses one labeled peptide as an internal standard to normalize labeled peptide peak areas across runs (<19% error), whereas the second enables standard curve creation and analyte quantification in one run (<8% error).
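The four-point standard curve idea behind iDiLeu can be sketched numerically: fit a line to the four labeled-standard signals and invert it for the analyte channel. The spike amounts and peak areas below are invented for illustration, not data from the study.

```python
def fit_line(x, y):
    # Ordinary least-squares fit y = m*x + b for the standard curve.
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    m = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
        sum((xi - mx) ** 2 for xi in x)
    return m, my - m * mx

def quantify(signal, m, b):
    # Back-calculate analyte amount from its measured signal.
    return (signal - b) / m

# Hypothetical 4-point curve: known spike amounts (fmol) vs. peak areas,
# all acquired in a single run via the four standard-channel labels.
amounts = [5.0, 25.0, 50.0, 100.0]
areas   = [1.1e4, 5.2e4, 1.03e5, 2.05e5]
m, b = fit_line(amounts, areas)
print(quantify(6.0e4, m, b))   # estimated amount in the analyte channel
```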
Chu, David; Xiao, Jane; Shah, Payal; Todd, Brett
2018-06-20
Cognitive errors are a major contributor to medical error. Traditionally, medical errors at teaching hospitals are analyzed in morbidity and mortality (M&M) conferences. We aimed to describe the frequency of cognitive errors in relation to the occurrence of diagnostic and other error types, in cases presented at an emergency medicine (EM) resident M&M conference. We conducted a retrospective study of all cases presented at a suburban US EM residency monthly M&M conference from September 2011 to August 2016. Each case was reviewed using the electronic medical record (EMR) and notes from the M&M case by two EM physicians. Each case was categorized by type of primary medical error that occurred as described by Okafor et al. When a diagnostic error occurred, the case was reviewed for contributing cognitive and non-cognitive factors. Finally, when a cognitive error occurred, the case was classified into faulty knowledge, faulty data gathering or faulty synthesis, as described by Graber et al. Disagreements in error type were mediated by a third EM physician. A total of 87 M&M cases were reviewed; the two reviewers agreed on 73 cases, and 14 cases required mediation by a third reviewer. Forty-eight cases involved diagnostic errors, 47 of which were cognitive errors. Of these 47 cases, 38 involved faulty synthesis, 22 involved faulty data gathering and only 11 involved faulty knowledge. Twenty cases contained more than one type of cognitive error. Twenty-nine cases involved both a resident and an attending physician, while 17 cases involved only an attending physician. Twenty-one percent of the resident cases involved all three cognitive errors, while none of the attending cases involved all three. Forty-one percent of the resident cases and only 6% of the attending cases involved faulty knowledge. One hundred percent of the resident cases and 94% of the attending cases involved faulty synthesis. 
Our review of 87 EM M&M cases revealed that cognitive errors are commonly involved in cases presented, and that these errors are less likely due to deficient knowledge and more likely due to faulty synthesis. M&M conferences may therefore provide an excellent forum to discuss cognitive errors and how to reduce their occurrence.
Daboul, Amro; Ivanovska, Tatyana; Bülow, Robin; Biffar, Reiner; Cardini, Andrea
2018-01-01
Using 3D anatomical landmarks from adult human head MRIs, we assessed the magnitude of inter-operator differences in Procrustes-based geometric morphometric analyses. An in-depth analysis of both absolute and relative error was performed in a subsample of individuals with replicated digitization by three different operators. The effect of inter-operator differences was also explored in a large sample of more than 900 individuals. Although absolute error was not unusual for MRI measurements, including bone landmarks, shape was particularly affected by differences among operators, with up to more than 30% of sample variation accounted for by this type of error. The magnitude of the bias was such that it dominated the main pattern of bone and total (all landmarks included) shape variation, largely surpassing the effect of sex differences between hundreds of men and women. In contrast, however, we found higher reproducibility in soft-tissue nasal landmarks, despite relatively larger errors in estimates of nasal size. Our study exemplifies the assessment of measurement error using geometric morphometrics on landmarks from MRIs and stresses the importance of relating it to total sample variance within the specific methodological framework being used. In summary, precise landmarks may not necessarily imply negligible errors, especially in shape data; indeed, size and shape may be differentially impacted by measurement error and different types of landmarks may have relatively larger or smaller errors. Importantly, and consistently with other recent studies using geometric morphometrics on digital images (which, however, were not specific to MRI data), this study showed that inter-operator biases can be a major source of error in the analysis of large samples, as those that are becoming increasingly common in the 'era of big data'.
PMID:29787586
NASA Technical Reports Server (NTRS)
Kahle, A. B.; Alley, R. E.; Schieldge, J. P.
1984-01-01
The sensitivity of thermal inertia (TI) calculations to errors in the measurement or parameterization of a number of environmental factors is considered here. The factors include effects of radiative transfer in the atmosphere, surface albedo and emissivity, variations in surface turbulent heat flux density, cloud cover, vegetative cover, and topography. The error analysis is based upon data from the Heat Capacity Mapping Mission (HCMM) satellite for July 1978 at three separate test sites in the deserts of the western United States. Results show that typical errors in atmospheric radiative transfer, cloud cover, and vegetative cover can individually cause root-mean-square (RMS) errors of about 10 percent (with atmospheric effects sometimes as large as 30-40 percent) in HCMM-derived thermal inertia images of 20,000-200,000 pixels.
Deng, Nanjie; Cui, Di; Zhang, Bin W; Xia, Junchao; Cruz, Jeffrey; Levy, Ronald
2018-06-13
Accurately predicting absolute binding free energies of protein-ligand complexes is important as a fundamental problem in both computational biophysics and pharmaceutical discovery. Calculating binding free energies for charged ligands is generally considered challenging because of the strong electrostatic interactions between the ligand and its environment in aqueous solution. In this work, we compare the performance of the potential of mean force (PMF) method and the double decoupling method (DDM) for computing absolute binding free energies for charged ligands. We first clarify an unresolved issue concerning the explicit use of the binding site volume to define the complexed state in DDM together with the use of harmonic restraints. We also provide an alternative derivation of the formula for absolute binding free energy using the PMF approach. We use these formulas to compute the binding free energy of charged ligands at an allosteric site of HIV-1 integrase, which has emerged in recent years as a promising target for developing antiviral therapy. As compared with the experimental results, the absolute binding free energies obtained by using the PMF approach show unsigned errors of 1.5-3.4 kcal/mol, which are somewhat better than the results from DDM (unsigned errors of 1.6-4.3 kcal/mol) using the same amount of CPU time. According to the DDM decomposition of the binding free energy, the ligand binding appears to be dominated by nonpolar interactions despite the presence of very large and favorable intermolecular ligand-receptor electrostatic interactions, which are almost completely cancelled out by the equally large free energy cost of desolvation of the charged moiety of the ligands in solution. We discuss the relative strengths of computing absolute binding free energies using the alchemical and physical pathway methods.
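For context, the role of the binding site volume and harmonic restraints in DDM can be seen in the standard double decoupling expression, shown here in its common textbook form (the paper's exact formulation of these terms may differ):

```latex
\Delta G^{\circ}_{\mathrm{bind}}
  = \Delta G^{\mathrm{solv}}_{\mathrm{decouple}}
  - \Delta G^{\mathrm{site}}_{\mathrm{decouple}}
  + \Delta G_{\mathrm{restraint}}
  + k_{\mathrm{B}} T \,\ln\!\left(\frac{V_{\mathrm{site}}}{V^{\circ}}\right)
```

Here \(V^{\circ}\) is the standard-state volume (about 1661 Å³ per molecule at 1 M), \(\Delta G_{\mathrm{restraint}}\) accounts for imposing and releasing the harmonic restraints, and the logarithmic term ties the computed free energy to the 1 M standard state, which is why the definition of \(V_{\mathrm{site}}\) matters.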
Correcting for Optimistic Prediction in Small Data Sets
Smith, Gordon C. S.; Seaman, Shaun R.; Wood, Angela M.; Royston, Patrick; White, Ian R.
2014-01-01
The C statistic is a commonly reported measure of screening test performance. Optimistic estimation of the C statistic is a frequent problem because of overfitting of statistical models in small data sets, and methods exist to correct for this issue. However, many studies do not use such methods, and those that do correct for optimism use diverse methods, some of which are known to be biased. We used clinical data sets (United Kingdom Down syndrome screening data from Glasgow (1991–2003), Edinburgh (1999–2003), and Cambridge (1990–2006), as well as Scottish national pregnancy discharge data (2004–2007)) to evaluate different approaches to adjustment for optimism. We found that sample splitting, cross-validation without replication, and leave-1-out cross-validation produced optimism-adjusted estimates of the C statistic that were biased and/or associated with greater absolute error than other available methods. Cross-validation with replication, bootstrapping, and a new method (leave-pair-out cross-validation) all generated unbiased optimism-adjusted estimates of the C statistic and had similar absolute errors in the clinical data set. Larger simulation studies confirmed that all 3 methods performed similarly with 10 or more events per variable, or when the C statistic was 0.9 or greater. However, with lower events per variable or lower C statistics, bootstrapping tended to be optimistic but with lower absolute and mean squared errors than both methods of cross-validation. PMID:24966219
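The statistic being corrected above can be computed directly as the fraction of concordant case/non-case pairs; a minimal sketch (scores and outcomes are invented for illustration):

```python
def c_statistic(scores, outcomes):
    # C statistic (equivalent to the ROC AUC): the probability that a
    # randomly chosen case (outcome 1) receives a higher score than a
    # randomly chosen non-case (outcome 0). Ties count as half-concordant.
    cases = [s for s, y in zip(scores, outcomes) if y == 1]
    controls = [s for s, y in zip(scores, outcomes) if y == 0]
    conc = sum(1.0 if c > n else 0.5 if c == n else 0.0
               for c in cases for n in controls)
    return conc / (len(cases) * len(controls))

risk    = [0.9, 0.8, 0.7, 0.4, 0.3, 0.2]   # hypothetical screening scores
outcome = [1,   1,   0,   1,   0,   0]
print(c_statistic(risk, outcome))
```

Optimism arises when the same small data set is used both to fit the scores and to evaluate this statistic; the resampling schemes compared in the study all estimate how much to subtract for that reuse.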
Modelling and Predicting Backstroke Start Performance Using Non-Linear and Linear Models.
de Jesus, Karla; Ayala, Helon V H; de Jesus, Kelly; Coelho, Leandro Dos S; Medeiros, Alexandre I A; Abraldes, José A; Vaz, Mário A P; Fernandes, Ricardo J; Vilas-Boas, João Paulo
2018-03-01
Our aim was to compare non-linear and linear mathematical model responses for backstroke start performance prediction. Ten swimmers randomly completed eight 15 m backstroke starts with feet over the wedge: four with hands on the highest horizontal handgrip and four on the vertical handgrip. Swimmers were videotaped using a dual-media camera set-up, with the starts performed over an instrumented block with four force plates. Artificial neural networks were applied to predict the 5 m start time from kinematic and kinetic variables, with accuracy assessed by the mean absolute percentage error. Artificial neural networks predicted start time more robustly than the linear model when moving from the training to the validation dataset for the vertical handgrip (3.95 ± 1.67 vs. 5.92 ± 3.27%). Artificial neural networks obtained a smaller mean absolute percentage error than the linear model for the horizontal (0.43 ± 0.19 vs. 0.98 ± 0.19%) and vertical handgrips (0.45 ± 0.19 vs. 1.38 ± 0.30%) using all input data. The best artificial neural network validation revealed a smaller mean absolute error than the linear model for the horizontal (0.007 vs. 0.04 s) and vertical handgrips (0.01 vs. 0.03 s). Artificial neural networks should be used for backstroke 5 m start time prediction due to the quite small differences among elite-level performances.
NASA Lewis Stirling engine computer code evaluation
NASA Technical Reports Server (NTRS)
Sullivan, Timothy J.
1989-01-01
In support of the U.S. Department of Energy's Stirling Engine Highway Vehicle Systems program, the NASA Lewis Stirling engine performance code was evaluated by comparing code predictions without engine-specific calibration factors to GPU-3, P-40, and RE-1000 Stirling engine test data. The error in predicting power output was -11 percent for the P-40 and 12 percent for the RE-1000 at design conditions, and 16 percent for the GPU-3 at near-design conditions (2000 rpm engine speed versus 3000 rpm at design). The efficiency and heat input predictions showed better agreement with engine test data than did the power predictions. Over all data points, the error in predicting the GPU-3 brake power was significantly larger than for the other engines and was mainly a result of inaccuracy in predicting the pressure phase angle. Analysis of this pressure phase angle prediction error suggested that improvements to the cylinder hysteresis loss model could have a significant effect on overall Stirling engine performance predictions.
Error in Dasibi flight measurements of atmospheric ozone due to instrument wall-loss
NASA Technical Reports Server (NTRS)
Ainsworth, J. E.; Hagemeyer, J. R.; Reed, E. I.
1981-01-01
Theory suggests that in laminar flow the percent loss of a trace constituent to the walls of a measuring instrument varies as P^(-2/3), where P is the total gas pressure. Preliminary laboratory ozone wall-loss measurements confirm this P^(-2/3) dependence. Accurate assessment of wall-loss is thus of particular importance for those balloon-borne instruments utilizing laminar flow at ambient pressure, since the ambient pressure decreases by a factor of 350 during ascent to 40 km. Measurements and extrapolations made for a Dasibi ozone monitor modified for balloon flight indicate that the wall-loss error at 40 km was between 6 and 30 percent and that the wall-loss error in the derived total ozone column-content for the region from the surface to 40 km altitude was between 2 and 10 percent. At 1000 mb, turbulence caused an order of magnitude increase in the Dasibi wall-loss.
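The quoted scaling lets one extrapolate a surface wall-loss measurement to altitude; a sketch of that arithmetic, with a purely assumed 0.3% surface loss (the instrument's actual surface loss is not given here):

```python
def wall_loss_percent(p_mb, loss_at_1000mb):
    # Percent wall loss scaled from a 1000 mb reference using the
    # laminar-flow P^(-2/3) pressure dependence.
    return loss_at_1000mb * (p_mb / 1000.0) ** (-2.0 / 3.0)

# A factor-of-350 pressure drop multiplies the loss by 350**(2/3) (~49.7),
# so an assumed 0.3% surface loss grows to roughly 15% near 40 km, inside
# the 6-30 percent range quoted above.
for p_mb in (1000.0, 100.0, 10.0, 1000.0 / 350.0):
    print(p_mb, wall_loss_percent(p_mb, 0.3))
```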
NASA Astrophysics Data System (ADS)
Mercer, Jason J.; Westbrook, Cherie J.
2016-11-01
Microform is important in understanding wetland functions and processes, but collecting imagery of and mapping the physical structure of peatlands is often expensive and requires specialized equipment. We assessed the utility of coupling computer vision-based structure from motion with multiview stereo photogrammetry (SfM-MVS) and ground-based photos to map peatland topography. The SfM-MVS technique was tested on an alpine peatland in Banff National Park, Canada, and guidance is provided on minimizing errors. We found that coupling SfM-MVS with ground-based photos taken with a point-and-shoot camera is a viable and competitive technique for generating ultrahigh-resolution elevations (i.e., <0.01 m, with a mean absolute error of 0.083 m). In evaluating 100+ viable SfM-MVS data collection and processing scenarios, vegetation was found to considerably influence accuracy; accounting for vegetation class reduced absolute error by as much as 50%. The logistical flexibility of ground-based SfM-MVS, paired with its high resolution, low error, and low cost, makes it a research area worth developing as well as a useful addition to the wetland scientist's toolkit.
Using a Hybrid Model to Forecast the Prevalence of Schistosomiasis in Humans.
Zhou, Lingling; Xia, Jing; Yu, Lijing; Wang, Ying; Shi, Yun; Cai, Shunxiang; Nie, Shaofa
2016-03-23
We previously proposed a hybrid model combining the autoregressive integrated moving average (ARIMA) and nonlinear autoregressive neural network (NARNN) models for forecasting schistosomiasis. Our purpose in the current study was to forecast the annual prevalence of human schistosomiasis in Yangxin County using our ARIMA-NARNN model, thereby further certifying the reliability of our hybrid model. We used the ARIMA, NARNN and ARIMA-NARNN models to fit and forecast the annual prevalence of schistosomiasis. The modeling period covered the annual prevalence from 1956 to 2008, while the testing period covered 2009 to 2012. The mean square error (MSE), mean absolute error (MAE) and mean absolute percentage error (MAPE) were used to measure model performance. We then reconstructed the hybrid model to forecast the annual prevalence from 2013 to 2016. The modeling and testing errors generated by the ARIMA-NARNN model were lower than those obtained from either the single ARIMA or NARNN models. The predicted annual prevalence from 2013 to 2016 demonstrated an initial decreasing trend, followed by an increase. The ARIMA-NARNN model can be well applied to analyze surveillance data for early warning systems for the control and elimination of schistosomiasis.
Smith, S. Jerrod; Lewis, Jason M.; Graves, Grant M.
2015-09-28
Generalized-least-squares multiple-linear regression analysis was used to formulate regression relations between peak-streamflow frequency statistics and basin characteristics. Contributing drainage area was the only basin characteristic determined to be statistically significant for all annual exceedance probabilities and was the only basin characteristic used in regional regression equations for estimating peak-streamflow frequency statistics on unregulated streams in and near the Oklahoma Panhandle. The regression model pseudo-coefficient of determination, converted to percent, for the Oklahoma Panhandle regional regression equations ranged from about 38 to 63 percent. The standard errors of prediction and the standard model errors for the Oklahoma Panhandle regional regression equations ranged from about 84 to 148 percent and from about 76 to 138 percent, respectively. These errors were comparable to those reported for regional peak-streamflow frequency regression equations for the High Plains areas of Texas and Colorado. The root mean square errors for the Oklahoma Panhandle regional regression equations (ranging from 3,170 to 92,000 cubic feet per second) were less than the root mean square errors for the Oklahoma statewide regression equations (ranging from 18,900 to 412,000 cubic feet per second); therefore, the Oklahoma Panhandle regional regression equations produce more accurate peak-streamflow statistic estimates for the irrigated period of record in the Oklahoma Panhandle than do the Oklahoma statewide regression equations. The regression equations developed in this report are applicable to streams that are not substantially affected by regulation, impoundment, or surface-water withdrawals. These regression equations are intended for use for stream sites with contributing drainage areas less than or equal to about 2,060 square miles, the maximum value for the independent variable used in the regression analysis.
Checking ozone amounts by measurements of UV-irradiances
NASA Technical Reports Server (NTRS)
Seckmeyer, Gunther; Kettner, Christiane; Thiel, Stephen
1994-01-01
Absolute measurements of UV-irradiances in Germany and New Zealand are used to determine the total amounts of ozone. UV-irradiances measured and calculated for clear skies and for solar zenith angles less than 60 deg generally show good agreement. The UVB-irradiances, however, show that the actual Dobson values are about 5 percent higher in Germany and about 3 percent higher in New Zealand compared to those obtained by our method. Possible reasons for these deviations are discussed.
Marriage Meets the Joneses: Relative Income, Identity, and Marital Status
Watson, Tara; McLanahan, Sara
2012-01-01
This paper investigates the effect of relative income on marriage. Accounting flexibly for absolute income, the ratio between a man's income and a local reference group median is a strong predictor of marital status, but only for low-income men. Relative income affects marriage even among those living with a partner. A ten percent higher reference group income is associated with a two percent reduction in marriage. We propose an identity model to explain the results. PMID:24639593
DOE Office of Scientific and Technical Information (OSTI.GOV)
Morley, Steven Karl
This report reviews existing literature describing forecast accuracy metrics, concentrating on those based on relative errors and percentage errors. We then review how the most common of these metrics, the mean absolute percentage error (MAPE), has been applied in recent radiation belt modeling literature. Finally, we describe metrics based on the ratios of predicted to observed values (the accuracy ratio) that address the drawbacks inherent in using MAPE. Specifically, we define and recommend the median log accuracy ratio as a measure of bias and the median symmetric accuracy as a measure of accuracy.
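The two recommended metrics can be sketched in a few lines; the sketch below assumes the accuracy ratio is Q = predicted/observed with both quantities strictly positive, consistent with the report's description.

```python
# Sketch of the recommended ratio-based metrics, assuming the accuracy
# ratio Q = predicted / observed (both strictly positive).
import math
from statistics import median

def median_log_accuracy_ratio(observed, predicted):
    """Bias measure: median of ln(predicted / observed)."""
    return median(math.log(p / o) for o, p in zip(observed, predicted))

def median_symmetric_accuracy(observed, predicted):
    """Accuracy measure: 100 * (exp(median |ln(predicted/observed)|) - 1)."""
    med = median(abs(math.log(p / o)) for o, p in zip(observed, predicted))
    return 100 * (math.exp(med) - 1)
```

Unlike MAPE, these metrics treat over- and under-prediction by the same factor symmetrically: a factor-of-two error in either direction gives a median symmetric accuracy of 100%.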
Skinner, Kenneth D.
2009-01-01
Elevation data in riverine environments can be used in various applications for which different levels of accuracy are required. The Experimental Advanced Airborne Research LiDAR (Light Detection and Ranging) - or EAARL - system was used to obtain topographic and bathymetric data along the lower Boise River, southwestern Idaho, for use in hydraulic and habitat modeling. The EAARL data were post-processed into bare earth and bathymetric raster and point datasets. Concurrently with the EAARL data collection, real-time kinematic global positioning system and total station ground-survey data were collected in three areas within the lower Boise River basin to assess the accuracy of the EAARL elevation data in different hydrogeomorphic settings. The accuracies of the EAARL-derived elevation data, determined in open, flat terrain to provide an optimal vertical comparison surface, had root mean square errors ranging from 0.082 to 0.138 m. Accuracies for bank, floodplain, and in-stream bathymetric data had root mean square errors ranging from 0.090 to 0.583 m. The greater root mean square errors for the latter data are the result of high levels of turbidity in the downstream ground-survey area, dense tree canopy, and horizontal location discrepancies between the EAARL and ground-survey data in steeply sloping areas such as riverbanks. The EAARL point to ground-survey comparisons produced results similar to those for the EAARL raster to ground-survey comparisons, indicating that the interpolation of the EAARL points to rasters did not introduce significant additional error. The mean percent error for the wetted cross-sectional areas of the two upstream ground-survey areas was 1 percent. The mean percent error increases to -18 percent if the downstream ground-survey area is included, reflecting the influence of turbidity in that area.
NASA Technical Reports Server (NTRS)
Rothenberg, Edward A; Ordin, Paul M
1954-01-01
The performance of jet fuel with an oxidant mixture containing 70 percent liquid fluorine and 30 percent liquid oxygen by weight was investigated in a 500-pound-thrust engine operating at a chamber pressure of 300 pounds per square inch absolute. A one-oxidant-on-one-fuel skewed-hole impinging-jet injector was evaluated in a chamber of characteristic length equal to 50 inches. A maximum experimental specific impulse of 268 pound-seconds per pound was obtained at 25 percent fuel, which corresponds to 96 percent of the maximum theoretical specific impulse based on frozen composition expansion. The maximum characteristic velocity obtained was 6050 feet per second at 23 percent fuel, or 94 percent of the theoretical maximum. The average thrust coefficient was 1.38 for the 500-pound thrust combustion-chamber nozzle used, which was 99 percent of the theoretical (frozen) maximum. Mixtures of fluorine and oxygen were found to be self-igniting with jet fuel with fluorine concentrations as low as 4 percent, when low starting propellant flow rates were used.
Estimates of the absolute error and a scheme for an approximate solution to scheduling problems
NASA Astrophysics Data System (ADS)
Lazarev, A. A.
2009-02-01
An approach is proposed for estimating absolute errors and finding approximate solutions to classical NP-hard scheduling problems of minimizing the maximum lateness, for one or many machines, and of minimizing the makespan. The concept of a metric (distance) between instances of the problem is introduced. The idea behind the approach is, given a problem instance, to construct another instance, at the minimum distance from the initial instance in the metric introduced, for which an optimal or approximate solution can be found. Instead of solving the original problem (instance), a set of approximating polynomially/pseudopolynomially solvable problems (instances) is considered, the instance at the minimum distance from the given one is chosen, and the resulting schedule is then applied to the original instance.
Lebel, Karina; Boissy, Patrick; Hamel, Mathieu; Duval, Christian
2013-01-01
Background Inertial measurement of motion with Attitude and Heading Reference Systems (AHRS) is emerging as an alternative to 3D motion capture systems in biomechanics. The objectives of this study are: 1) to describe the absolute and relative accuracy of multiple units of commercially available AHRS under various types of motion; and 2) to evaluate the effect of motion velocity on the accuracy of these measurements. Methods The criterion validity of accuracy was established under controlled conditions using an instrumented Gimbal table. AHRS modules were carefully attached to the center plate of the Gimbal table and put through experimental static and dynamic conditions. Static and absolute accuracy was assessed by comparing the AHRS orientation measurements to those obtained using an optical gold standard. Relative accuracy was assessed by measuring the variation in relative orientation between modules during trials. Findings Evaluated AHRS systems demonstrated good absolute static accuracy (mean error < 0.5°) and clinically acceptable absolute accuracy under conditions of slow motion (mean error between 0.5° and 3.1°). In slow motions, relative accuracy varied from 2° to 7° depending on the type of AHRS and the type of rotation. Absolute and relative accuracy were significantly affected (p < 0.05) by velocity during sustained motions. The extent of that effect varied across AHRS. Interpretation Absolute and relative accuracy of AHRS are affected by environmental magnetic perturbations and conditions of motion. Relative accuracy of AHRS is mostly affected by the ability of all modules to locate the same global reference coordinate system at all times. Conclusions Existing AHRS systems can be considered for use in clinical biomechanics under constrained conditions of use.
While their individual capacity to track absolute motion is relatively consistent, the use of multiple AHRS modules to compute relative motion between rigid bodies needs to be optimized according to the conditions of operation. PMID:24260324
Review of conservative surgery in early breast cancer. British Columbia experience.
Holmvang, A M; Grafton, C; Sandy, J T
1985-05-01
Conservation mastectomy in combination with radiotherapy is becoming an accepted treatment for early breast cancer. No absolute guidelines exist as to appropriate patient selection or correct surgical technique, but certain unifying trends can be ascertained from the current literature. The purpose of this study was to review the literature and to identify areas of incongruence between present management of patients in British Columbia and suggestions in the current literature. One hundred patients were reviewed. Twenty-six percent of them did not receive preoperative mammograms, and tumor stage was inappropriate in 9 percent. Thirteen percent had excisional biopsies only. A quarter of the patients had tumor resection through unfavorably placed incisions. Eight percent did not have estrogen receptor determination. Thirty-nine percent of the pathology reports made no comment as to adequacy of resection margins. It is hoped that proper attention to these areas can improve cosmetic results and decrease the incidence of local tumor recurrence.
Chenausky, Karen; Kernbach, Julius; Norton, Andrea; Schlaug, Gottfried
2017-01-01
We investigated the relationship between imaging variables for two language/speech-motor tracts and speech fluency variables in 10 minimally verbal (MV) children with autism. Specifically, we tested whether measures of white matter integrity-fractional anisotropy (FA) of the arcuate fasciculus (AF) and frontal aslant tract (FAT)-were related to change in percent syllable-initial consonants correct, percent items responded to, and percent syllable insertion errors (from best baseline to post 25 treatment sessions). Twenty-three MV children with autism spectrum disorder (ASD) received Auditory-Motor Mapping Training (AMMT), an intonation-based treatment to improve fluency in spoken output, and we report on seven who received a matched control treatment. Ten of the AMMT participants were able to undergo a magnetic resonance imaging study at baseline; their performance on baseline speech production measures is compared to that of the other two groups. No baseline differences were found between groups. A canonical correlation analysis (CCA) relating FA values for left- and right-hemisphere AF and FAT to speech production measures showed that FA of the left AF and right FAT were the largest contributors to the synthetic independent imaging-related variable. Change in percent syllable-initial consonants correct and percent syllable-insertion errors were the largest contributors to the synthetic dependent fluency-related variable. Regression analyses showed that FA values in left AF significantly predicted change in percent syllable-initial consonants correct, no FA variables significantly predicted change in percent items responded to, and FA of right FAT significantly predicted change in percent syllable-insertion errors. Results are consistent with previously identified roles for the AF in mediating bidirectional mapping between articulation and acoustics, and the FAT in its relationship to speech initiation and fluency. 
They further suggest a division of labor between the hemispheres, implicating the left hemisphere in accuracy of speech production and the right hemisphere in fluency in this population. Changes in response rate are interpreted as stemming from factors other than the integrity of these two fiber tracts. This study is the first to document the existence of a subgroup of MV children who experience increases in syllable-insertion errors as their speech develops in response to therapy.
Lifesource XL-18 pedometer for measuring steps under controlled and free-living conditions.
Liu, Sam; Brooks, Dina; Thomas, Scott; Eysenbach, Gunther; Nolan, Robert Peter
2015-01-01
The primary aim was to examine the criterion and construct validity and test-retest reliability of the Lifesource XL-18 pedometer (A&D Medical, Toronto, ON, Canada) for measuring steps under controlled and free-living activities. The influence of body mass index, waist size and walking speed on the criterion validity of the XL-18 was also explored. Forty adults (35-74 years) performed a 6-min walk test in the controlled condition, and the criterion validity of the XL-18 was assessed by comparing it to steps counted manually. Thirty-five adults participated in the free-living condition, and the construct validity of the XL-18 was assessed by comparing it to the Yamax SW-200 (YAMAX Health & Sports, Inc., San Antonio, TX, USA). During the controlled condition, the XL-18 did not differ significantly from the criterion (P > 0.05) and no systematic error was found using Bland-Altman analysis. The accuracy of the XL-18 decreased with slower walking speed (P = 0.001). During the free-living condition, Bland-Altman analysis revealed that the XL-18 overestimated daily steps by 327 ± 118 steps compared with the Yamax (P = 0.004). However, the absolute percent error (APE) (6.5 ± 0.58%) was still within an acceptable range. The XL-18 did not differ statistically between pant pockets. The XL-18 is suitable for measuring steps in controlled and free-living conditions. However, caution may be required when interpreting the steps recorded under slower speeds and free-living conditions.
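The Bland-Altman analysis and absolute-percent-error comparison used above can be sketched in pure Python; the step counts below are hypothetical, and the 1.96 factor gives the conventional 95% limits of agreement.

```python
# Sketch of Bland-Altman bias / limits of agreement and mean APE,
# on hypothetical device-vs-criterion step counts.
from statistics import mean, stdev

def bland_altman(device, criterion):
    """Return (bias, lower limit, upper limit) for device - criterion."""
    diffs = [d - c for d, c in zip(device, criterion)]
    bias = mean(diffs)
    s = stdev(diffs)                   # sample SD of the differences
    return bias, bias - 1.96 * s, bias + 1.96 * s

def absolute_percent_error(device, criterion):
    """Mean absolute percent error of the device against the criterion."""
    return 100 * mean(abs(d - c) / c for d, c in zip(device, criterion))

# hypothetical paired counts: pedometer vs manual tally
device = [102, 98, 101, 99]
criterion = [100, 100, 100, 100]
bias, lo, hi = bland_altman(device, criterion)
ape = absolute_percent_error(device, criterion)
```

A near-zero bias with narrow limits of agreement corresponds to the "no systematic error" finding reported in the abstract.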
NASA Astrophysics Data System (ADS)
Gao, Jing; Burt, James E.
2017-12-01
This study investigates the usefulness of a per-pixel bias-variance error decomposition (BVD) for understanding and improving spatially-explicit data-driven models of continuous variables in environmental remote sensing (ERS). BVD is a model evaluation method that originated in machine learning and has not been examined for ERS applications. Demonstrated with a showcase regression tree model mapping land imperviousness (0-100%) using Landsat images, our results showed that BVD can reveal sources of estimation errors, map how these sources vary across space, reveal the effects of various model characteristics on estimation accuracy, and enable in-depth comparison of different error metrics. Specifically, BVD bias maps can help analysts identify and delineate model spatial non-stationarity; BVD variance maps can indicate potential effects of ensemble methods (e.g. bagging), and inform efficient training sample allocation - training samples should capture the full complexity of the modeled process, and more samples should be allocated to regions with more complex underlying processes rather than regions covering larger areas. Through examining the relationships between model characteristics and their effects on estimation accuracy revealed by BVD for both absolute and squared errors (i.e. the error is the absolute or the squared value of the difference between observation and estimate), we found that the two error metrics embody different diagnostic emphases, can lead to different conclusions about the same model, and may suggest different solutions for performance improvement. We emphasize BVD's strength in revealing the connection between model characteristics and estimation accuracy, as understanding this relationship empowers analysts to effectively steer performance through model adjustments.
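For squared error, the per-pixel decomposition rests on the identity expected squared error = bias² + variance over repeated model fits. A minimal pure-Python illustration follows; the imperviousness estimates are hypothetical, standing in for one pixel's estimates from several training replicates.

```python
# Sketch of the per-pixel bias-variance decomposition for squared error:
# mean squared error == bias^2 + variance. Values are hypothetical.
from statistics import mean

def bias_variance_per_pixel(estimates, truth):
    """Return (bias, variance) of an ensemble of estimates for one pixel."""
    m = mean(estimates)
    bias = m - truth
    variance = mean((e - m) ** 2 for e in estimates)  # population variance
    return bias, variance

# hypothetical imperviousness estimates (%) from four training replicates
estimates = [42.0, 45.0, 40.0, 41.0]
truth = 44.0
bias, variance = bias_variance_per_pixel(estimates, truth)
msq = mean((e - truth) ** 2 for e in estimates)  # equals bias**2 + variance
```

Mapping `bias` and `variance` pixel by pixel yields the BVD bias and variance maps the abstract describes; a large variance term is the component that ensemble methods such as bagging can reduce.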
Validation of the ASTER Global Digital Elevation Model Version 2 over the conterminous United States
Gesch, Dean B.; Oimoen, Michael J.; Zhang, Zheng; Meyer, David J.; Danielson, Jeffrey J.
2012-01-01
The ASTER Global Digital Elevation Model Version 2 (GDEM v2) was evaluated over the conterminous United States in a manner similar to the validation conducted for the original GDEM Version 1 (v1) in 2009. The absolute vertical accuracy of GDEM v2 was calculated by comparison with more than 18,000 independent reference geodetic ground control points from the National Geodetic Survey. The root mean square error (RMSE) measured for GDEM v2 is 8.68 meters. This compares with the RMSE of 9.34 meters for GDEM v1. Another important descriptor of vertical accuracy is the mean error, or bias, which indicates if a DEM has an overall vertical offset from true ground level. The GDEM v2 mean error of -0.20 meters is a significant improvement over the GDEM v1 mean error of -3.69 meters. The absolute vertical accuracy assessment results, both mean error and RMSE, were segmented by land cover to examine the effects of cover types on measured errors. The GDEM v2 mean errors by land cover class verify that the presence of aboveground features (tree canopies and built structures) cause a positive elevation bias, as would be expected for an imaging system like ASTER. In open ground classes (little or no vegetation with significant aboveground height), GDEM v2 exhibits a negative bias on the order of 1 meter. GDEM v2 was also evaluated by differencing with the Shuttle Radar Topography Mission (SRTM) dataset. In many forested areas, GDEM v2 has elevations that are higher in the canopy than SRTM.
Enhanced Lamb dip for absolute laser frequency stabilization
NASA Technical Reports Server (NTRS)
Siegman, A. E.; Byer, R. L.; Wang, S. C.
1972-01-01
Enhanced Lamb dip width is 5 MHz and total depth is 10 percent of peak power. Present configuration is useful as frequency standard in near infrared. Technique extends to other lasers, for which low pressure narrow linewidth gain tubes can be constructed.
Forecast models for suicide: Time-series analysis with data from Italy.
Preti, Antonio; Lentini, Gianluca
2016-01-01
The prediction of suicidal behavior is a complex task. To fine-tune targeted preventative interventions, predictive analytics (i.e. forecasting future risk of suicide) is more important than exploratory data analysis (pattern recognition, e.g. detection of seasonality in suicide time series). This study sets out to investigate the accuracy of forecasting models of suicide for men and women. A total of 101,499 male suicides and 39,681 female suicides, which occurred in Italy from 1969 to 2003, were investigated. In order to apply the forecasting model and test its accuracy, the time series were split into a training set (1969 to 1996; 336 months) and a test set (1997 to 2003; 84 months). The main outcome was the accuracy of forecasting models on the monthly number of suicides. These measures of accuracy were used: mean absolute error; root mean squared error; mean absolute percentage error; mean absolute scaled error. In both male and female suicides a change in the trend pattern was observed, with an increase from 1969 onwards, reaching a maximum around 1990 and decreasing thereafter. The variances attributable to the seasonal and trend components were, respectively, 24% and 64% in male suicides, and 28% and 41% in female ones. Both annual and seasonal historical trends of monthly data contributed to forecasting future trends of suicide with a margin of error of around 10%. The finding is clearer in the male than in the female time series of suicide. The main conclusion of the study is that models taking seasonality into account seem to be able to derive information on deviation from the mean when it occurs as a zenith, but they fail to reproduce it when it occurs as a nadir. Preventative efforts should concentrate on the factors that influence increases above the main trend in both seasonal and cyclic patterns of suicides.
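Of the accuracy measures listed, the mean absolute scaled error is the least familiar: it scales the forecast's MAE by the in-sample MAE of a naive previous-value forecast, so values below 1 beat the naive benchmark. A minimal non-seasonal sketch, with illustrative numbers, is:

```python
# Sketch of the mean absolute scaled error (MASE), non-seasonal form:
# forecast MAE divided by the in-sample MAE of the naive
# (previous-value) forecast on the training series.

def mase(train, actual, predicted):
    """MASE of a forecast, scaled by the naive forecast's training MAE."""
    naive_mae = sum(abs(train[i] - train[i - 1])
                    for i in range(1, len(train))) / (len(train) - 1)
    forecast_mae = sum(abs(a - p) for a, p in zip(actual, predicted)) / len(actual)
    return forecast_mae / naive_mae
```

For monthly series with strong seasonality, as here, the seasonal variant (naive forecast lagged 12 months) would be the more appropriate scaling.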
Clark, Ross A; Paterson, Kade; Ritchie, Callan; Blundell, Simon; Bryant, Adam L
2011-03-01
Commercial timing light systems (CTLS) provide precise measurement of athletes' running velocity; however, they are often expensive and difficult to transport. In this study an inexpensive, wireless and portable timing light system was created using the infrared camera in Nintendo Wii hand controllers (NWHC). System creation with gold-standard validation. A Windows-based software program using NWHC to replicate a dual-beam timing gate was created. Firstly, data collected during 2m walking and running trials were validated against a 3D kinematic system. Secondly, data recorded during 5m running trials at various intensities from standing or flying starts were compared to a single-beam CTLS and the independent and average scores of three handheld stopwatch (HS) operators. Intraclass correlation coefficient and Bland-Altman plots were used to assess validity. Absolute error quartiles and the percentage of trials in absolute error threshold ranges were used to determine accuracy. The NWHC system was valid when compared against the 3D kinematic system (ICC=0.99, median absolute error (MAR)=2.95%). For the flying 5m trials the NWHC system possessed excellent validity and precision (ICC=0.97, MAR<3%) when compared with the CTLS. In contrast, the NWHC system and the HS values during standing-start trials possessed only modest validity (ICC<0.75) and accuracy (MAR>8%). A NWHC timing light system is inexpensive, portable and valid for assessing running velocity. Errors in the 5m standing-start trials may have been due to erroneous event detection by either the commercial or NWHC-based timing light systems. Copyright © 2010 Sports Medicine Australia. Published by Elsevier Ltd. All rights reserved.
A prediction model of short-term ionospheric foF2 based on AdaBoost
NASA Astrophysics Data System (ADS)
Zhao, Xiukuan; Ning, Baiqi; Liu, Libo; Song, Gangbing
2014-02-01
In this paper, the AdaBoost-BP algorithm is used to construct a new model to predict the critical frequency of the ionospheric F2-layer (foF2) one hour ahead. Different indices were used to characterize ionospheric diurnal and seasonal variations and their dependence on solar and geomagnetic activity. These indices, together with the current observed foF2 value, were input into the prediction model and the foF2 value at one hour ahead was output. We analyzed twenty-two years' foF2 data from nine ionosonde stations in the East-Asian sector in this work. The first eleven years' data were used as a training dataset and the second eleven years' data were used as a testing dataset. The results show that the performance of AdaBoost-BP is better than those of BP Neural Network (BPNN), Support Vector Regression (SVR) and the IRI model. For example, the AdaBoost-BP prediction absolute error of foF2 at Irkutsk station (a middle latitude station) is 0.32 MHz, which is better than 0.34 MHz from BPNN, 0.35 MHz from SVR and also significantly outperforms the IRI model whose absolute error is 0.64 MHz. Meanwhile, the AdaBoost-BP prediction absolute error at Taipei station in the low latitudes is 0.78 MHz, which is better than 0.81 MHz from BPNN, 0.81 MHz from SVR and 1.37 MHz from the IRI model. Finally, the variation of the AdaBoost-BP prediction error with season, solar activity, and latitude is also discussed in the paper.
ERIC Educational Resources Information Center
Titus, Freddie
2010-01-01
Fifty percent of college-bound students graduate from high school underprepared for mathematics at the post-secondary level. As a result, thirty-five percent of college students take developmental mathematics courses. What is even more shocking is the high failure rate (ranging from 35 to 42 percent) of students enrolled in developmental…
Tree and impervious cover in the United States
David J. Nowak; Eric J. Greenfield
2012-01-01
Using aerial photograph interpretation of circa 2005 imagery, percent tree canopy and impervious surface cover in the conterminous United States are estimated at 34.2% (standard error (SE) = 0.2%) and 2.4% (SE = 0.1%), respectively. Within urban/community areas, percent tree cover (35.1%, SE = 0.4%) is similar to the national value, but percent impervious cover is...
Height-Error Analysis for the FAA-Air Force Replacement Radar Program (FARR)
1991-08-01
Figure 1-7. Climatology errors by month: percent frequency table of error by month, with columns for JAN through DEC (table contents not recoverable from the scanned original).
Analytic barrage attack model. Final report, January 1986-January 1989
DOE Office of Scientific and Technical Information (OSTI.GOV)
St Ledger, J.W.; Naegeli, R.E.; Dowden, N.A.
An analytic model is developed for a nuclear barrage attack, assuming weapons with no aiming error and a cookie-cutter damage function. The model is then extended with approximations for the effects of aiming error and distance damage sigma. The final result is a fast-running model which calculates the probability of damage for a barrage attack. The probability of damage is accurate to within seven percent or better, for weapon reliabilities of 50 to 100 percent, distance damage sigmas of 0.5 or less, and zero to very large circular error probabilities. FORTRAN 77 coding is included in the report for the analytic model and for a numerical model used to check the analytic results.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lopez, P.; Tambasco, M.; LaFontaine, R.
2014-08-15
Our goal is to compare the dosimetric accuracy of the Pinnacle-3 9.2 Collapsed Cone Convolution Superposition (CCCS) and the iPlan 4.1 Monte Carlo (MC) and Pencil Beam (PB) algorithms in an anthropomorphic lung phantom using measurement as the gold standard. Ion chamber measurements were taken for 6, 10, and 18 MV beams in a CIRS E2E SBRT Anthropomorphic Lung Phantom, which mimics lung, spine, ribs, and tissue. The plan implemented six beams with a 5×5 cm² field size, delivering a total dose of 48 Gy. Data from the planning systems were computed at the treatment isocenter in the left lung, and at two off-axis points, the spinal cord and the right lung. The measurements were taken using a pinpoint chamber. The best agreement between data from the algorithms and our measurements occurs at the treatment isocenter. For the 6, 10, and 18 MV beams, the iPlan 4.1 MC software performs the best with 0.3%, 0.2%, and 4.2% absolute percent difference from measurement, respectively. Differences between our measurements and algorithm data are much greater for the off-axis points. The best agreement seen for the right lung and spinal cord is 11.4% absolute percent difference, with 6 MV iPlan 4.1 PB and 18 MV iPlan 4.1 MC, respectively. As energy increases, the absolute percent difference from measured data increases up to 54.8% for the 18 MV CCCS algorithm. This study suggests that iPlan 4.1 MC computes peripheral dose and target dose in the lung more accurately than the iPlan 4.1 PB and Pinnacle CCCS algorithms.
Safety and Cost Assessment of Connected and Automated Vehicles
DOT National Transportation Integrated Search
2018-03-29
Many light-duty vehicle crashes occur due to human error and distracted driving. The National Highway Traffic Safety Administration (NHTSA) reports that ten percent of all fatal crashes and seventeen percent of injury crashes in 2011 were a result of...
[Vaccinations among students in health care professions].
von Lindeman, Katharina; Kugler, Joachim; Klewer, Jörg
2011-12-01
Incomplete vaccinations among students in health care professions lead to an increased risk for infections. Until now, only few studies related to this issue exist. Therefore vaccinations and awareness regarding the importance of vaccinations among students in health care professions should be investigated. All 433 students of a regional college for health care professionals were asked to complete a standardized and anonymous questionnaire. Altogether 301 nursing students and 131 students of the other health care professions participated. Overall, 66.1 percent of nursing students and 50.4 percent of students of other health care professions rated vaccination as "absolutely necessary". Different percentages of completed vaccinations were reported for tetanus (79.1 percent versus 64.4 percent), hepatitis B (78.7 percent versus 77.5 percent) and hepatitis A (74.1 percent versus 68.5 percent). 6.3 percent versus 15.4 percent did not know if they were vaccinated against tetanus, hepatitis B (5.3 percent versus 7.7 percent) and hepatitis A (5.6 percent versus 9.2 percent). While approximately half of the students reported "primary vaccination and booster" against mumps (59.5 percent versus 53.5 percent), measles (58.8 percent versus 54.6 percent) and rubella (58.3 percent versus 55.4 percent), this was reported less often for pertussis (43.8 percent versus 39.8 percent) and varicella (32.4 percent versus 25.2 percent). The results indicate inadequate vaccination status in the investigated students. In addition, a gap between the awareness of the importance of vaccinations and personal preventive behavior became obvious. Therefore, the education of these future health professionals still needs to address vaccination-related issues.
Absolute flux density calibrations of radio sources: 2.3 GHz
NASA Technical Reports Server (NTRS)
Freiley, A. J.; Batelaan, P. D.; Bathker, D. A.
1977-01-01
A detailed description of a NASA/JPL Deep Space Network program to improve S-band gain calibrations of large aperture antennas is reported. The program is considered unique in at least three ways: first, absolute gain calibrations of high-quality suppressed-sidelobe dual-mode horns provide a high-accuracy foundation for the program. Second, a very careful transfer calibration technique using an artificial far-field coherent-wave source was used to accurately obtain the gain of one large (26 m) aperture. Third, using the calibrated large aperture directly, the absolute flux density of five selected galactic and extragalactic natural radio sources was determined with an absolute accuracy better than 2 percent, now quoted at the familiar 1 sigma confidence level. The follow-on considerations to apply these results to an operational network of ground antennas are discussed. It is concluded that absolute gain accuracies within + or - 0.30 to 0.40 db are possible, depending primarily on the repeatability (scatter) in the field data from Deep Space Network user stations.
The AFGL (Air Force Geophysics Laboratory) Absolute Gravity System's Error Budget Revisited.
1985-05-08
... may also be induced by equipment not associated with the system. A systematic bias of 68 μGal was observed by the Istituto di Metrologia "G. Colonnetti" ... Laboratory Astrophysics, Univ. of Colo., Boulder, Colo. IMGC: Istituto di Metrologia "G. Colonnetti", Torino, Italy. Table 1. Absolute Gravity Values ... measurements were made with three Model D and three Model G LaCoste-Romberg gravity meters. These instruments were operated by the following agencies ...
NASA Technical Reports Server (NTRS)
McCorkel, Joel; Thome, Kurtis; Hair, Jason; McAndrew, Brendan; Jennings, Don; Rabin, Douglas; Daw, Adrian; Lundsford, Allen
2012-01-01
The Climate Absolute Radiance and Refractivity Observatory (CLARREO) mission key goals include enabling observation of high accuracy long-term climate change trends, use of these observations to test and improve climate forecasts, and calibration of operational and research sensors. The spaceborne instrument suites include a reflected solar (RS) spectroradiometer, emitted infrared spectroradiometer, and radio occultation receivers. The requirement for the RS instrument is that derived reflectance must be traceable to SI standards with an absolute uncertainty of <0.3%, and the error budget that achieves this requirement is described in previous work. This work describes the Solar/Lunar Absolute Reflectance Imaging Spectroradiometer (SOLARIS), a calibration demonstration system for the RS instrument, and presents initial calibration and characterization methods and results. SOLARIS is an Offner spectrometer with two separate focal planes, each with its own entrance aperture and grating, covering spectral ranges of 320-640 nm and 600-2300 nm over a full field-of-view of 10 degrees with 0.27 milliradian sampling. Results from laboratory measurements, including use of integrating spheres, transfer radiometers and spectral standards, combined with field-based solar and lunar acquisitions are presented. These results will be used to assess the accuracy and repeatability of the radiometric and spectral characteristics of SOLARIS, which will be presented against the sensor-level requirements addressed in the CLARREO RS instrument error budget.
NASA Astrophysics Data System (ADS)
Wu, Bing-Fei; Ma, Li-Shan; Perng, Jau-Woei
This study analyzes absolute stability in P and PD type fuzzy logic control systems with both certain and uncertain linear plants. The stability analysis includes the reference input, actuator gain and interval plant parameters. For certain linear plants, the stability (i.e. the stable equilibria of error) of P and PD types is analyzed with the Popov or linearization methods under various reference inputs and actuator gains. The steady-state errors of fuzzy control systems are also addressed in the parameter plane. The parametric robust Popov criterion for parametric absolute stability based on Lur'e systems is also applied to the stability analysis of P type fuzzy control systems with uncertain plants. The PD type fuzzy logic controller in our approach is a single-input fuzzy logic controller and is transformed into the P type for analysis. Unlike previous works, the absolute stability analysis of fuzzy control systems is given with respect to a non-zero reference input and an uncertain linear plant via the parametric robust Popov criterion. Moreover, a fuzzy current-controlled RC circuit is designed with PSPICE models. Both numerical and PSPICE simulations are provided to verify the analytical results. Furthermore, the oscillation mechanism in fuzzy control systems is examined from the viewpoint of the various equilibrium points in the simulation example. Finally, comparisons are given to show the effectiveness of the analysis method.
Converting international ¼ inch tree volume to Doyle
Aaron Holley; John R. Brooks; Stuart A. Moss
2014-01-01
An equation for converting Mesavage and Girard's International ¼ inch tree volumes to the Doyle log rule is presented as a function of tree diameter. Trees having fewer than four logs exhibited volume prediction errors within a range of ±10 board feet. In addition, volume prediction error as a percent of actual Doyle tree volume...
NASA Technical Reports Server (NTRS)
Desormeaux, Yves; Rossow, William B.; Brest, Christopher L.; Campbell, G. G.
1993-01-01
Procedures are described for normalizing the radiometric calibration of image radiances obtained from geostationary weather satellites that contributed data to the International Satellite Cloud Climatology Project. The key step is comparison of coincident and collocated measurements made by each satellite and the concurrent AVHRR on the 'afternoon' NOAA polar-orbiting weather satellite at the same viewing geometry. The results of this comparison allow transfer of the AVHRR absolute calibration, which has been established over the whole series, to the radiometers on the geostationary satellites. Results are given for Meteosat-2, 3, and 4, for GOES-5, 6, and 7, for GMS-2, 3, and 4 and for Insat-1B. The relative stability of the calibrations of these radiance data is estimated to be within +/- 3 percent; the uncertainty of the absolute calibrations is estimated to be less than 10 percent. The remaining uncertainties are at least two times smaller than for the original radiance data.
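The key normalization step, comparing coincident and collocated geostationary measurements against the AVHRR reference, can be sketched as a simple least-squares transfer. This is a toy illustration only: the actual ISCCP procedure involves viewing-geometry matching and additional corrections, and all numbers below are invented.

```python
# Transfer calibration sketch: fit geostationary radiometer counts to
# matched AVHRR radiances with ordinary least squares, then apply the
# resulting gain/offset to calibrate the geostationary data.
def linear_fit(x, y):
    # ordinary least squares for y = gain * x + offset
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    gain = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
           sum((xi - mx) ** 2 for xi in x)
    return gain, my - gain * mx

counts = [20, 40, 60, 80, 100]             # geostationary sensor counts (invented)
radiance = [10.5, 20.4, 30.6, 40.2, 50.3]  # matched AVHRR radiances (invented)

gain, offset = linear_fit(counts, radiance)
calibrated = [gain * c + offset for c in counts]
print(gain, offset)
```

Repeating such fits over time also reveals calibration drift in the geostationary radiometer, which is how the relative stability quoted above can be tracked.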
Two-Year Body Composition Analyses of Long-Lived GHR Null Mice
List, Edward O.; Palmer, Amanda J.; Chung, Min-Yu; Wright-Piekarski, Jacob; Lubbers, Ellen; O'Connor, Patrick; Okada, Shigeru; Kopchick, John J.
2010-01-01
Growth hormone receptor gene–disrupted (GHR−/−) mice exhibit increased life span and adipose tissue mass. Although this obese phenotype has been reported extensively for young adult male GHR−/− mice, data for females and for other ages in either gender are lacking. Thus, the purpose of this study was to evaluate body composition longitudinally in both male and female GHR−/− mice. Results show that GHR−/− mice have a greater percent fat mass with no significant difference in absolute fat mass throughout life. Lean mass shows an opposite trend with percent lean mass not significantly different between genotypes but absolute mass reduced in GHR−/− mice. Differences in body composition are more pronounced in male than in female mice, and both genders of GHR−/− mice show specific enlargement of the subcutaneous adipose depot. Along with previously published data, these results suggest a consistent and intriguing protective effect of excess fat mass in the subcutaneous region. PMID:19901018
NASA Astrophysics Data System (ADS)
Piper, Lawrence G.
1993-09-01
We have measured the relative intensities of the nitrogen Vegard-Kaplan bands, N2(A ³Σu⁺ - X ¹Σg⁺), for transitions covering a range in r-centroid between 1.22 and 1.48 Å. With these data we constructed a relative electronic transition moment function that diverges significantly from previously reported functions. We place our data on an absolute basis by normalizing our relative function to the experimentally determined Einstein coefficient for the v′ = 0 to v″ = 6 transition. Combining our normalized data from 1.22 to 1.48 Å with absolute transition moment data measured by Shemansky between 1.08 and 1.14 Å results in a function covering the range between 1.08 and 1.48 Å. The radiative lifetimes calculated from this function are longer than those currently accepted, by amounts varying between 25 percent for v′ = 0 and 50 percent for v′ = 4-6.
Application of Intra-Oral Dental Scanners in the Digital Workflow of Implantology
van der Meer, Wicher J.; Andriessen, Frank S.; Wismeijer, Daniel; Ren, Yijin
2012-01-01
Intra-oral scanners will play a central role in digital dentistry in the near future. In this study the accuracy of three intra-oral scanners was compared. Materials and methods: A master model made of stone was fitted with three high-precision manufactured PEEK cylinders and scanned with three intra-oral scanners: the CEREC (Sirona), the iTero (Cadent) and the Lava COS (3M). The digital files were imported into software, and the distance between the centres of the cylinders and the angulation between the cylinders were assessed. These values were compared to measurements made on a high-accuracy 3D scan of the master model. Results: The distance errors were the smallest and most consistent for the Lava COS. The distance errors for the CEREC were the largest and least consistent. All the angulation errors were small. Conclusions: The Lava COS, in combination with a high-accuracy scanning protocol, produced the smallest and most consistent errors of all three scanners tested when considering mean distance errors in full-arch impressions, both in absolute values and in consistency for both measured distances. For the mean angulation errors, the Lava COS had the smallest errors between cylinders 1-2 and the largest errors between cylinders 1-3, although the absolute difference from the smallest mean value (iTero) was very small (0.0529°). An expected increase in distance and/or angular errors over the length of the arch, due to an accumulation of registration errors of the patched 3D surfaces, could be observed in this study design, but the effects were not statistically significant. Clinical relevance: For making impressions of implant cases for digital workflows, the most accurate scanner with the scanning protocol that will ensure the most accurate digital impression should be used. In our study, that was the Lava COS with the high-accuracy scanning protocol. PMID:22937030
[Design and accuracy analysis of upper slicing system of MSCT].
Jiang, Rongjian
2013-05-01
The upper slicing system is one of the main components of the optical system in MSCT. This paper focuses on the design of the upper slicing system and its accuracy analysis, with the aim of improving imaging accuracy. The errors in slice thickness and ray center caused by the bearings, screw and control system were analyzed and tested. The measured accumulated error is less than 1 μm, and the measured absolute error is less than 10 μm. Improving the accuracy of the upper slicing system contributes to appropriate treatment methods and treatment success rates.
Nilles, M.A.; Gordon, J.D.; Schroder, L.J.; Paulin, C.E.
1995-01-01
The U.S. Geological Survey used four programs in 1991 to provide external quality assurance for the National Atmospheric Deposition Program/National Trends Network (NADP/NTN). An intersite-comparison program was used to evaluate onsite pH and specific-conductance determinations. The effects of routine sample handling, processing, and shipping of wet-deposition samples on analyte determinations and an estimated precision of analyte values and concentrations were evaluated in the blind-audit program. Differences between analytical results and an estimate of the analytical precision of four laboratories routinely measuring wet deposition were determined by an interlaboratory-comparison program. Overall precision estimates for the precipitation-monitoring system were determined for selected sites by a collocated-sampler program. Results of the intersite-comparison program indicated that 93 and 86 percent of the site operators met the NADP/NTN accuracy goal for pH determinations during the two intersite-comparison studies completed during 1991. The results also indicated that 96 and 97 percent of the site operators met the NADP/NTN accuracy goal for specific-conductance determinations during the two 1991 studies. The effects of routine sample handling, processing, and shipping, determined in the blind-audit program, indicated significant positive bias (α = 0.01) for calcium, magnesium, sodium, potassium, chloride, nitrate, and sulfate. Significant negative bias (α = 0.01) was determined for hydrogen ion and specific conductance. Only ammonium determinations were not biased. A Kruskal-Wallis test indicated that there were no significant (α = 0.01) differences in analytical results from the four laboratories participating in the interlaboratory-comparison program.
Results from the collocated-sampler program indicated the median relative error for cation concentration and deposition exceeded eight percent at most sites, whereas the median relative error for sample volume, sulfate, and nitrate concentration at all sites was less than four percent. The median relative error for hydrogen ion concentration and deposition ranged from 4.6 to 18.3 percent at the four sites and as indicated in previous years of the study, was inversely proportional to the acidity of the precipitation at a given site. Overall, collocated-sampling error typically was five times that of laboratory error estimates for most analytes.
Evaluation of Acoustic Doppler Current Profiler measurements of river discharge
Morlock, S.E.
1996-01-01
The standard deviations of the ADCP measurements ranged from approximately 1 to 6 percent and were generally higher than the measurement errors predicted by error-propagation analysis of ADCP instrument performance. These error-prediction methods assume that the largest component of ADCP discharge measurement error is instrument related. The larger standard deviations indicate that substantial portions of measurement error may be attributable to sources unrelated to ADCP electronics or signal processing and are functions of the field environment.
Inherent Conservatism in Deterministic Quasi-Static Structural Analysis
NASA Technical Reports Server (NTRS)
Verderaime, V.
1997-01-01
The cause of the long-suspected excessive conservatism in the prevailing structural deterministic safety factor has been identified as an inherent violation of the error propagation laws when reducing statistical data to deterministic values and then combining them algebraically through successive structural computational processes. These errors are restricted to the applied stress computations, and because mean and variations of the tolerance limit format are added, the errors are positive, serially cumulative, and excessively conservative. Reliability methods circumvent these errors and provide more efficient and uniform safe structures. The document is a tutorial on the deficiencies and nature of the current safety factor and of its improvement and transition to absolute reliability.
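The contrast the abstract draws, between algebraically adding deterministic tolerance limits and propagating them statistically, can be illustrated with a small numeric sketch. The stress values and tolerances below are invented for illustration, not taken from the report.

```python
import math

# Hypothetical example: three independent load contributions, each with a
# mean applied stress and a 3-sigma tolerance (units arbitrary).
means = [100.0, 60.0, 40.0]   # mean applied stresses
tols = [9.0, 6.0, 3.0]        # 3-sigma tolerances on each contribution

# Deterministic practice criticized in the abstract: tolerance limits are
# added algebraically, so the errors are positive and serially cumulative.
deterministic = sum(means) + sum(tols)

# Error-propagation practice: independent variations combine in quadrature,
# giving a less conservative (but statistically consistent) combined limit.
probabilistic = sum(means) + math.sqrt(sum(t ** 2 for t in tols))

print(deterministic)   # 218.0
print(probabilistic)   # 200 + sqrt(126) ≈ 211.22
```

The algebraic sum treats every contribution as simultaneously at its worst-case limit, which is exactly the excessive conservatism described above; the root-sum-square combination respects the error propagation laws for independent variations.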
NASA Astrophysics Data System (ADS)
Ahdika, Atina; Lusiyana, Novyan
2017-02-01
The World Health Organization (WHO) has noted Indonesia as the country with the highest number of dengue hemorrhagic fever (DHF) cases in Southeast Asia. There is no vaccine or specific treatment for DHF. One effort that can be made by both the government and residents is prevention. In statistics, there are several methods for predicting the number of DHF cases that can be used as a reference for prevention. In this paper, a discrete time series model (specifically, the INAR(1)-Poisson model) and a Markov prediction model (MPM) are used to predict the number of DHF patients in West Java, Indonesia. The result shows that the MPM is the best model since it has the smallest values of MAE (mean absolute error) and MAPE (mean absolute percentage error).
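The two selection criteria mentioned, MAE and MAPE, are standard and easy to compute. A minimal sketch with invented monthly case counts (not the paper's data):

```python
def mae(actual, predicted):
    # mean absolute error: average magnitude of the prediction errors
    return sum(abs(a - p) for a, p in zip(actual, predicted)) / len(actual)

def mape(actual, predicted):
    # mean absolute percentage error, in percent
    # (actual values must be non-zero)
    return 100.0 * sum(abs(a - p) / abs(a)
                       for a, p in zip(actual, predicted)) / len(actual)

# Illustrative monthly DHF case counts and two hypothetical model outputs
actual = [120, 150, 90, 200]
pred_a = [110, 160, 100, 190]
pred_b = [100, 170, 70, 230]

print(mae(actual, pred_a), mape(actual, pred_a))  # model A: lower errors
print(mae(actual, pred_b), mape(actual, pred_b))  # model B: higher errors
```

Under these criteria the model with the smaller MAE and MAPE (here, model A) would be preferred, mirroring how the paper selects the MPM.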
NASA Astrophysics Data System (ADS)
Nagarajan, K.; Shashidharan Nair, C. K.
2007-07-01
The channelled spectrum employing polarized light interference is a very convenient method for the study of dispersion of birefringence. However, while using this method, the absolute order of the polarized light interference fringes cannot be determined easily. Approximate methods are therefore used to estimate the order. One of the approximations is that the dispersion of birefringence across neighbouring integer order fringes is negligible. In this paper, we show how this approximation can cause errors. A modification is reported whereby the error in the determination of absolute fringe order can be reduced using fractional orders instead of integer orders. The theoretical background for this method supported with computer simulation is presented. An experimental arrangement implementing these modifications is described. This method uses a Constant Deviation Spectrometer (CDS) and a Soleil Babinet Compensator (SBC).
Assimilation of Freeze - Thaw Observations into the NASA Catchment Land Surface Model
NASA Technical Reports Server (NTRS)
Farhadi, Leila; Reichle, Rolf H.; DeLannoy, Gabrielle J. M.; Kimball, John S.
2014-01-01
The land surface freeze-thaw (F-T) state plays a key role in the hydrological and carbon cycles and thus affects water and energy exchanges and vegetation productivity at the land surface. In this study, we developed an F-T assimilation algorithm for the NASA Goddard Earth Observing System, version 5 (GEOS-5) modeling and assimilation framework. The algorithm includes a newly developed observation operator that diagnoses the landscape F-T state in the GEOS-5 Catchment land surface model. The F-T analysis is a rule-based approach that adjusts Catchment model state variables in response to binary F-T observations, while also considering forecast and observation errors. A regional observing system simulation experiment was conducted using synthetically generated F-T observations. The assimilation of perfect (error-free) F-T observations reduced the root-mean-square errors (RMSE) of surface temperature and soil temperature by 0.206 °C and 0.061 °C, respectively, when compared to model estimates (equivalent to a relative RMSE reduction of 6.7 percent and 3.1 percent, respectively). For a maximum classification error (CEmax) of 10 percent in the synthetic F-T observations, the F-T assimilation reduced the RMSE of surface temperature and soil temperature by 0.178 °C and 0.036 °C, respectively. For CEmax = 20 percent, the F-T assimilation still reduced the RMSE of model surface temperature estimates by 0.149 °C but yielded no improvement over the model soil temperature estimates. The F-T assimilation scheme is being developed to exploit planned operational F-T products from the NASA Soil Moisture Active Passive (SMAP) mission.
Modelling and Predicting Backstroke Start Performance Using Non-Linear and Linear Models
de Jesus, Karla; Ayala, Helon V. H.; de Jesus, Kelly; Coelho, Leandro dos S.; Medeiros, Alexandre I.A.; Abraldes, José A.; Vaz, Mário A.P.; Fernandes, Ricardo J.; Vilas-Boas, João Paulo
2018-01-01
Our aim was to compare non-linear and linear mathematical model responses for backstroke start performance prediction. Ten swimmers randomly completed eight 15 m backstroke starts with feet over the wedge, four with hands on the highest horizontal and four on the vertical handgrip. Swimmers were videotaped using a dual media camera set-up, with the starts being performed over an instrumented block with four force plates. Artificial neural networks were applied to predict the 5 m start time from kinematic and kinetic variables, with accuracy assessed by the mean absolute percentage error. Artificial neural networks predicted start time more robustly than the linear model with respect to changing training to the validation dataset for the vertical handgrip (3.95 ± 1.67 vs. 5.92 ± 3.27%). Artificial neural networks obtained a smaller mean absolute percentage error than the linear model in the horizontal (0.43 ± 0.19 vs. 0.98 ± 0.19%) and vertical handgrip (0.45 ± 0.19 vs. 1.38 ± 0.30%) using all input data. The best artificial neural network validation revealed a smaller mean absolute error than the linear model for the horizontal (0.007 vs. 0.04 s) and vertical handgrip (0.01 vs. 0.03 s). Artificial neural networks should be used for backstroke 5 m start time prediction due to the quite small differences among elite level performances. PMID:29599857
2017-01-01
Purpose/Background Shoulder proprioception is essential in the activities of daily living as well as in sports. Acute muscle fatigue is believed to cause a deterioration of proprioception, increasing the risk of injury. The purpose of this study was to evaluate if fatigue of the shoulder external rotators during eccentric versus concentric activity affects shoulder joint proprioception as determined by active reproduction of position. Study design Quasi-experimental trial. Methods Twenty-two healthy subjects with no recent history of shoulder pathology were randomly allocated to either a concentric or an eccentric exercise group for fatiguing the shoulder external rotators. Proprioception was assessed before and after the fatiguing protocol using an isokinetic dynamometer, by measuring active reproduction of position at 30° of shoulder external rotation, reported as absolute angular error. The fatiguing protocol consisted of sets of fifteen consecutive external rotator muscle contractions in either the concentric or eccentric action. The subjects were exercised until there was a 30% decline from the peak torque of the subjects' maximal voluntary contraction over three consecutive muscle contractions. Results A one-way analysis of variance test revealed no statistical difference in absolute angular error (p > 0.05) between concentric and eccentric groups. Moreover, no statistical difference (p > 0.05) was found in absolute angular error between pre- and post-fatigue in either group. Conclusions Eccentric exercise does not seem to acutely affect shoulder proprioception to a larger extent than concentric exercise. Level of evidence 2b PMID:28515976
Jiménez-Carvelo, Ana M; González-Casado, Antonio; Cuadros-Rodríguez, Luis
2017-03-01
A new analytical method for the quantification of olive oil and palm oil in blends with other edible vegetable oils (canola, safflower, corn, peanut, seeds, grapeseed, linseed, sesame and soybean) using normal-phase liquid chromatography and chemometric tools was developed. The procedure for obtaining the chromatographic fingerprint of the methyl-transesterified fraction of each blend is described. The multivariate quantification methods used were Partial Least Squares Regression (PLS-R) and Support Vector Regression (SVR). The quantification results were evaluated by several parameters, such as the Root Mean Square Error of Validation (RMSEV), Mean Absolute Error of Validation (MAEV) and Median Absolute Error of Validation (MdAEV). Notably, with the proposed method the chromatographic analysis takes only eight minutes; the results obtained showed the potential of this method and allowed quantification of mixtures of olive oil and palm oil with other vegetable oils. Copyright © 2016 Elsevier B.V. All rights reserved.
Dimensional Error in Rapid Prototyping with Open Source Software and Low-cost 3D-printer
Rendón-Medina, Marco A.; Andrade-Delgado, Laura; Telich-Tarriba, Jose E.; Fuente-del-Campo, Antonio; Altamirano-Arcos, Carlos A.
2018-01-01
Summary: Rapid prototyping models (RPMs) have been extensively used in craniofacial and maxillofacial surgery, especially in areas such as orthognathic surgery, posttraumatic or oncological reconstructions, and implantology. Economic limitations are greater in developing countries such as Mexico, where resources dedicated to health care are limited, restricting the use of RPMs to a few selected centers. This article aims to determine the dimensional error of a low-cost fused deposition modeling (FDM) 3D printer (Tronxy P802MA, Shenzhen, Tronxy Technology Co) used with open source software. An ordinary dry human mandible was scanned with a computed tomography device. The data were processed with open software to build a rapid prototype with a fused deposition machine. Linear measurements were performed to find the mean absolute and relative differences. The mean absolute and relative differences were 0.65 mm and 1.96%, respectively (P = 0.96). Low-cost FDM machines and open source software are excellent options for manufacturing RPMs, with the benefit of low cost and a relative error similar to that of other, more expensive technologies. PMID:29464171
NASA Astrophysics Data System (ADS)
Langousis, Andreas; Kaleris, Vassilios; Xeygeni, Vagia; Magkou, Foteini
2017-04-01
Assessing the availability of groundwater reserves at a regional level requires accurate and robust hydraulic head estimation at multiple locations of an aquifer. To that end, one needs groundwater observation networks that can provide sufficient information to estimate the hydraulic head at unobserved locations. The density of such networks is largely influenced by the spatial distribution of the hydraulic conductivity in the aquifer, and it is usually determined through trial and error, by solving the groundwater flow for a properly selected set of alternative but physically plausible geologic structures. In this work, we use: 1) dimensional analysis, and 2) a pulse-based stochastic model for simulation of synthetic aquifer structures, to calculate the distribution of the absolute error in hydraulic head estimation as a function of the standardized distance from the nearest measuring locations. The resulting distributions are shown to encompass all possible small-scale structural dependencies, exhibiting characteristics (bounds, multi-modal features, etc.) that can be explained using simple geometric arguments. The obtained results are promising, pointing towards the establishment of design criteria based on large-scale geologic maps.
Comparative Study of Four Time Series Methods in Forecasting Typhoid Fever Incidence in China
Zhang, Xingyu; Liu, Yuanyuan; Yang, Min; Zhang, Tao; Young, Alistair A.; Li, Xiaosong
2013-01-01
Accurate incidence forecasting of infectious disease is critical for early prevention and for better government strategic planning. In this paper, we present a comprehensive study of different forecasting methods based on the monthly incidence of typhoid fever. The seasonal autoregressive integrated moving average (SARIMA) model and three different models inspired by neural networks, namely, back propagation neural networks (BPNN), radial basis function neural networks (RBFNN), and Elman recurrent neural networks (ERNN) were compared. The differences, as well as the advantages and disadvantages, between the SARIMA model and the neural networks were summarized and discussed. The data obtained for 2005 to 2009 and for 2010 from the Chinese Center for Disease Control and Prevention were used as modeling and forecasting samples, respectively. The performances were evaluated based on three metrics: mean absolute error (MAE), mean absolute percentage error (MAPE), and mean square error (MSE). The results showed that RBFNN obtained the smallest MAE, MAPE and MSE in both the modeling and forecasting processes. The performances of the four models ranked in descending order were: RBFNN, ERNN, BPNN and the SARIMA model. PMID:23650546
Elevation correction factor for absolute pressure measurements
NASA Technical Reports Server (NTRS)
Panek, Joseph W.; Sorrells, Mark R.
1996-01-01
With the arrival of highly accurate multi-port pressure measurement systems, conditions that previously did not affect overall system accuracy must now be scrutinized closely. Errors caused by elevation differences between pressure sensing elements and model pressure taps can be quantified and corrected. With multi-port pressure measurement systems, the sensing elements are connected to pressure taps that may be many feet away. The measurement system may be at a different elevation than the pressure taps due to laboratory space or test article constraints. This difference produces a pressure offset proportional to the elevation change across the interface tube: the pressure at the bottom of the tube is higher than the pressure at the top due to the weight of the tube's column of air. Tubes at higher pressures exhibit larger absolute errors due to the higher air density. The above effect is well documented but has generally been taken into account only for large elevations. With error analysis techniques, the loss in accuracy from elevation can be easily quantified, and correction factors can be applied to maintain the high accuracies of new pressure measurement systems.
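A minimal sketch of the correction described above, assuming an ideal-gas air column at uniform temperature so that the offset is simply the hydrostatic term ρgh, with ρ obtained from the sensed pressure. The values are illustrative, not from the paper.

```python
G = 9.80665      # standard gravity, m/s^2
R_AIR = 287.05   # specific gas constant of dry air, J/(kg*K)

def elevation_correction(p_sensor_pa, temp_k, dh_m):
    """Pressure offset for a sensing element located dh_m below the tap.

    Uses the ideal-gas density at the sensed pressure, so higher-pressure
    tubes carry larger absolute corrections, as noted in the abstract.
    """
    rho = p_sensor_pa / (R_AIR * temp_k)  # air density, kg/m^3
    return rho * G * dh_m                 # hydrostatic offset, Pa

# A 2 m elevation difference at roughly 1 atm and room temperature:
print(elevation_correction(101325.0, 293.15, 2.0))  # about 23.6 Pa
```

For a transducer with, say, ±10 Pa accuracy, an uncorrected 23.6 Pa offset would dominate the error budget, which is why these small elevation differences now matter.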
Kuhn, Stefan; Egert, Björn; Neumann, Steffen; Steinbeck, Christoph
2008-09-25
Current efforts in metabolomics, such as the Human Metabolome Project, collect structures of biological metabolites as well as data for their characterisation, such as spectra for identification of substances and measurements of their concentration. Still, only a fraction of existing metabolites and their spectral fingerprints are known. Computer-Assisted Structure Elucidation (CASE) of biological metabolites will be an important tool to leverage this lack of knowledge. Indispensable for CASE are modules to predict spectra for hypothetical structures. This paper evaluates different statistical and machine learning methods for predicting proton NMR spectra based on data from our open database NMRShiftDB. A mean absolute error of 0.18 ppm was achieved for the prediction of proton NMR shifts ranging from 0 to 11 ppm. Random forest, J48 decision tree and support vector machines achieved similar overall errors. HOSE codes, a notably simple method, achieved a comparatively good result of 0.17 ppm mean absolute error. The NMR prediction methods applied in the course of this work delivered precise predictions which can serve as a building block for Computer-Assisted Structure Elucidation of biological metabolites.
Multivariate Time Series Forecasting of Crude Palm Oil Price Using Machine Learning Techniques
NASA Astrophysics Data System (ADS)
Kanchymalay, Kasturi; Salim, N.; Sukprasert, Anupong; Krishnan, Ramesh; Raba'ah Hashim, Ummi
2017-08-01
The aim of this paper was to study the correlation between the crude palm oil (CPO) price, selected vegetable oil prices (soybean, coconut, olive, rapeseed, and sunflower oil), the crude oil price, and the monthly exchange rate. Comparative analysis was then performed on CPO price forecasting results using machine learning techniques. Monthly CPO prices, selected vegetable oil prices, crude oil prices, and monthly exchange rate data from January 1987 to February 2017 were utilized. Preliminary analysis showed a positive and high correlation between the CPO price and the soybean oil price, and also between the CPO price and the crude oil price. Experiments were conducted using multi-layer perceptron, support vector regression, and Holt-Winters exponential smoothing techniques. The results were assessed using the criteria of root mean square error (RMSE), mean absolute error (MAE), mean absolute percentage error (MAPE), and directional accuracy (DA). Among these three techniques, support vector regression (SVR) with the sequential minimal optimization (SMO) algorithm showed relatively better results than the multi-layer perceptron and Holt-Winters exponential smoothing methods.
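The four evaluation criteria named in the abstract can be sketched directly; the monthly price values below are invented for illustration, not the study's data.

```python
import math

# Forecast-evaluation metrics from the abstract: RMSE, MAE, MAPE, and
# directional accuracy (DA). The series below is made up for illustration.
actual   = [612.0, 630.5, 598.2, 641.7, 655.0]
forecast = [608.4, 626.1, 605.3, 638.2, 660.9]

n = len(actual)
errors = [f - a for a, f in zip(actual, forecast)]
rmse = math.sqrt(sum(e * e for e in errors) / n)
mae  = sum(abs(e) for e in errors) / n
mape = sum(abs(e) / a for a, e in zip(actual, errors)) / n * 100

# Directional accuracy: fraction of steps where the forecast moves in the
# same direction (up or down) as the actual series.
hits = sum(
    (actual[i] - actual[i - 1]) * (forecast[i] - forecast[i - 1]) > 0
    for i in range(1, n)
)
da = hits / (n - 1) * 100
print(f"RMSE={rmse:.2f}  MAE={mae:.2f}  MAPE={mape:.2f}%  DA={da:.0f}%")
```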
A novel diagnosis method for a Hall plates-based rotary encoder with a magnetic concentrator.
Meng, Bumin; Wang, Yaonan; Sun, Wei; Yuan, Xiaofang
2014-07-31
In the last few years, rotary encoders based on two-dimensional complementary metal oxide semiconductor (CMOS) Hall plates with a magnetic concentrator have been developed for contactless absolute angle measurement. Various error factors influence the measuring accuracy and are difficult to locate after the encoder is assembled. In this paper, a model-based rapid diagnosis method is presented. Based on an analysis of the error mechanism, an error model is built to minimize the residual angle error and to quantify the error factors. Additionally, a modified particle swarm optimization (PSO) algorithm is used to reduce the computational cost. The simulation and experimental results show that this diagnosis method can quantify the causes of the error and significantly reduce the number of iterations.
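A minimal, standard PSO loop can illustrate the kind of optimizer the abstract refers to; the quadratic residual function below is a toy stand-in, not the paper's encoder error model, and all parameter values are illustrative.

```python
import random

# Minimal particle swarm optimization sketch, minimizing a toy stand-in
# for a "residual angle error" function (the real encoder error model
# from the paper is not reproduced here).
def residual(params):
    x, y = params
    return (x - 1.2) ** 2 + (y + 0.7) ** 2  # minimum at (1.2, -0.7)

random.seed(0)
n_particles, n_iters, w, c1, c2 = 20, 100, 0.7, 1.5, 1.5
pos = [[random.uniform(-5, 5) for _ in range(2)] for _ in range(n_particles)]
vel = [[0.0, 0.0] for _ in range(n_particles)]
pbest = [p[:] for p in pos]                  # per-particle best positions
pbest_val = [residual(p) for p in pos]
gbest = min(pbest, key=residual)[:]          # global best position

for _ in range(n_iters):
    for i in range(n_particles):
        for d in range(2):
            r1, r2 = random.random(), random.random()
            vel[i][d] = (w * vel[i][d]
                         + c1 * r1 * (pbest[i][d] - pos[i][d])
                         + c2 * r2 * (gbest[d] - pos[i][d]))
            pos[i][d] += vel[i][d]
        val = residual(pos[i])
        if val < pbest_val[i]:
            pbest[i], pbest_val[i] = pos[i][:], val
            if val < residual(gbest):
                gbest = pos[i][:]

print(gbest)  # converges near (1.2, -0.7)
```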
Ahearn, Elizabeth A.
2010-01-01
Multiple linear regression equations for determining flow-duration statistics were developed to estimate select flow exceedances ranging from 25- to 99-percent for six 'bioperiods'-Salmonid Spawning (November), Overwinter (December-February), Habitat Forming (March-April), Clupeid Spawning (May), Resident Spawning (June), and Rearing and Growth (July-October)-in Connecticut. Regression equations also were developed to estimate the 25- and 99-percent flow exceedances without reference to a bioperiod. In total, 32 equations were developed. The predictive equations were based on regression analyses relating flow statistics from streamgages to GIS-determined basin and climatic characteristics for the drainage areas of those streamgages. Thirty-nine streamgages (and an additional 6 short-term streamgages and 28 partial-record sites for the non-bioperiod 99-percent exceedance) in Connecticut and adjacent areas of neighboring States were used in the regression analysis. Weighted least squares regression analysis was used to determine the predictive equations; weights were assigned based on record length. The basin characteristics-drainage area, percentage of area with coarse-grained stratified deposits, percentage of area with wetlands, mean monthly precipitation (November), mean seasonal precipitation (December, January, and February), and mean basin elevation-are used as explanatory variables in the equations. Standard errors of estimate of the 32 equations ranged from 10.7 to 156 percent with medians of 19.2 and 55.4 percent to predict the 25- and 99-percent exceedances, respectively. Regression equations to estimate high and median flows (25- to 75-percent exceedances) are better predictors (smaller variability of the residual values around the regression line) than the equations to estimate low flows (less than 75-percent exceedance). The Habitat Forming (March-April) bioperiod had the smallest standard errors of estimate, ranging from 10.7 to 20.9 percent. 
In contrast, the Rearing and Growth (July-October) bioperiod had the largest standard errors, ranging from 30.9 to 156 percent. The adjusted coefficient of determination of the equations ranged from 77.5 to 99.4 percent with medians of 98.5 and 90.6 percent to predict the 25- and 99-percent exceedances, respectively. Descriptive information on the streamgages used in the regression, measured basin and climatic characteristics, and estimated flow-duration statistics are provided in this report. Flow-duration statistics and the 32 regression equations for estimating flow-duration statistics in Connecticut are stored on the U.S. Geological Survey World Wide Web application "StreamStats" (http://water.usgs.gov/osw/streamstats/index.html). The regression equations developed in this report can be used to produce unbiased estimates of select flow exceedances statewide.
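The weighted least squares fit described above (weights assigned by record length) can be sketched in closed form; the streamgage data below are synthetic and the single explanatory variable stands in for the report's full set of basin characteristics.

```python
import numpy as np

# Weighted least squares sketch in the spirit of the report: weights
# proportional to streamgage record length. All data here are synthetic.
rng = np.random.default_rng(42)
n = 39                                         # number of streamgages
X = np.column_stack([np.ones(n),               # intercept
                     rng.uniform(10, 500, n)]) # e.g. drainage area (mi^2)
true_beta = np.array([2.0, 0.5])               # "true" coefficients
record_len = rng.integers(5, 60, n)            # years of record per gage
y = X @ true_beta + rng.normal(0, 1, n)        # flow statistic + noise

# Solve the weighted normal equations (X' W X) beta = X' W y.
W = np.diag(record_len.astype(float))
beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
print(beta)  # should recover coefficients near (2.0, 0.5)
```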
Verification of unfold error estimates in the unfold operator code
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fehl, D.L.; Biggs, F.
Spectral unfolding is an inverse mathematical operation that attempts to obtain spectral source information from a set of response functions and data measurements. Several unfold algorithms have appeared over the past 30 years; among them is the unfold operator (UFO) code written at Sandia National Laboratories. In addition to an unfolded spectrum, the UFO code also estimates the unfold uncertainty (error) induced by estimated random uncertainties in the data. In UFO the unfold uncertainty is obtained from the error matrix. This built-in estimate has now been compared to error estimates obtained by running the code in a Monte Carlo fashion with prescribed data distributions (Gaussian deviates). In the test problem studied, data were simulated from an arbitrarily chosen blackbody spectrum (10 keV) and a set of overlapping response functions. The data were assumed to have an imprecision of 5% (standard deviation). One hundred random data sets were generated. The built-in estimate of unfold uncertainty agreed with the Monte Carlo estimate to within the statistical resolution of this relatively small sample size (95% confidence level). A possible 10% bias between the two methods was unresolved. The Monte Carlo technique is also useful in underdetermined problems, for which the error matrix method does not apply. UFO has been applied to the diagnosis of low-energy x rays emitted by Z-pinch and ion-beam driven hohlraums. © 1997 American Institute of Physics.
NASA Technical Reports Server (NTRS)
Boughner, R.; Larsen, J. C.; Natarajan, M.
1980-01-01
The influence of short-lived photochemically produced species on solar occultation measurements of ClO and NO was examined. Time-varying altitude profiles of ClO and NO were calculated with a time-dependent photochemical model to simulate the distribution of these species during a solar occultation measurement. These distributions were subsequently used to calculate simulated radiances for various tangent paths, from which mixing ratios were inferred with a conventional technique that assumes spherical symmetry. The results show that neglecting the variation of ClO in the retrieval process produces less than a 10 percent error between the true and inverted profiles for both sunrise and sunset above 18 km. For NO, errors are less than 10 percent for tangent altitudes above about 35 km for sunrise and sunset; at lower altitudes, the error increases, approaching 100 percent at altitudes near 25 km. The results also show that average inhomogeneity factors, which measure the concentration variation along the tangent path and which can be calculated from a photochemical model, can indicate which species require more careful data analysis.
Parrett, Charles; Omang, R.J.; Hull, J.A.
1983-01-01
Equations for estimating mean annual runoff and peak discharge from measurements of channel geometry were developed for western and northeastern Montana. The study area was divided into two regions for the mean annual runoff analysis, and separate multiple-regression equations were developed for each region. The active-channel width was determined to be the most important independent variable in each region. The standard error of estimate for the estimating equation using active-channel width was 61 percent in the Northeast Region and 38 percent in the West Region. The study area was divided into six regions for the peak discharge analysis, and multiple-regression equations relating channel geometry and basin characteristics to peak discharges having recurrence intervals of 2, 5, 10, 25, 50, and 100 years were developed for each region. The standard errors of estimate for the regression equations using only channel width as an independent variable ranged from 35 to 105 percent. The standard errors improved in four regions as basin characteristics were added to the estimating equations. (USGS)
NASA Astrophysics Data System (ADS)
Porter, J. M.; Jeffries, J. B.; Hanson, R. K.
2009-09-01
A novel three-wavelength mid-infrared laser-based absorption/extinction diagnostic has been developed for simultaneous measurement of temperature and vapor-phase mole fraction in an evaporating hydrocarbon fuel aerosol (vapor and liquid droplets). The measurement technique was demonstrated for an n-decane aerosol with D50 ≈ 3 μm in steady and shock-heated flows with a measurement bandwidth of 125 kHz. Laser wavelengths were selected from FTIR measurements of the C-H stretching band of vapor and liquid n-decane near 3.4 μm (3000 cm⁻¹), and from modeled light scattering from droplets. Measurements were made for vapor mole fractions below 2.3 percent with errors less than 10 percent, and simultaneous temperature measurements over the range 300 K < T < 900 K were made with errors less than 3 percent. The measurement technique is designed to provide accurate values of temperature and vapor mole fraction in evaporating polydispersed aerosols with small mean diameters (D50 < 10 μm), where near-infrared laser-based scattering corrections are prone to error.
Network Adjustment of Orbit Errors in SAR Interferometry
NASA Astrophysics Data System (ADS)
Bahr, Hermann; Hanssen, Ramon
2010-03-01
Orbit errors can induce significant long wavelength error signals in synthetic aperture radar (SAR) interferograms and thus bias estimates of wide-scale deformation phenomena. The presented approach aims for correcting orbit errors in a preprocessing step to deformation analysis by modifying state vectors. Whereas absolute errors in the orbital trajectory are negligible, the influence of relative errors (baseline errors) is parametrised by their parallel and perpendicular component as a linear function of time. As the sensitivity of the interferometric phase is only significant with respect to the perpendicular baseline and the rate of change of the parallel baseline, the algorithm focuses on estimating updates to these two parameters. This is achieved by a least squares approach, where the unwrapped residual interferometric phase is observed and atmospheric contributions are considered to be stochastic with constant mean. To enhance reliability, baseline errors are adjusted in an overdetermined network of interferograms, yielding individual orbit corrections per acquisition.
ERIC Educational Resources Information Center
Huprich, Julia; Green, Ravonne
2007-01-01
The Council on Public Liberal Arts Colleges (COPLAC) libraries websites were assessed for Section 508 errors using the online WebXACT tool. Only three of the twenty-one institutions (14%) had zero accessibility errors. Eighty-six percent of the COPLAC institutions had an average of 1.24 errors. Section 508 compliance is required for institutions…
Measuring Conceptual Complexity: A Content-Analytic Model Using the Federal Income Tax Laws.
ERIC Educational Resources Information Center
Karlinsky, Stewart S.; Andrews, J. Douglas
1986-01-01
Concludes that more than 15 percent of the federal income tax law's complexity is attributable to the capital gains sections. Confirms the idea that the capital gain and loss provisions substantially complicate the law in both absolute and relative terms. (FL)
Using a Hybrid Model to Forecast the Prevalence of Schistosomiasis in Humans
Zhou, Lingling; Xia, Jing; Yu, Lijing; Wang, Ying; Shi, Yun; Cai, Shunxiang; Nie, Shaofa
2016-01-01
Background: We previously proposed a hybrid model combining both the autoregressive integrated moving average (ARIMA) and the nonlinear autoregressive neural network (NARNN) models in forecasting schistosomiasis. Our purpose in the current study was to forecast the annual prevalence of human schistosomiasis in Yangxin County, using our ARIMA-NARNN model, thereby further certifying the reliability of our hybrid model. Methods: We used the ARIMA, NARNN and ARIMA-NARNN models to fit and forecast the annual prevalence of schistosomiasis. The modeling period covered the annual prevalence from 1956 to 2008, while the testing period covered 2009 to 2012. The mean square error (MSE), mean absolute error (MAE) and mean absolute percentage error (MAPE) were used to measure the model performance. We reconstructed the hybrid model to forecast the annual prevalence from 2013 to 2016. Results: The modeling and testing errors generated by the ARIMA-NARNN model were lower than those obtained from either the single ARIMA or NARNN models. The predicted annual prevalence from 2013 to 2016 demonstrated an initial decreasing trend, followed by an increase. Conclusions: The ARIMA-NARNN model can be well applied to analyze surveillance data for early warning systems for the control and elimination of schistosomiasis. PMID:27023573
Measuring the Accuracy of Simple Evolving Connectionist System with Varying Distance Formulas
NASA Astrophysics Data System (ADS)
Al-Khowarizmi; Sitompul, O. S.; Suherman; Nababan, E. B.
2017-12-01
Simple Evolving Connectionist System (SECoS) is a minimal implementation of Evolving Connectionist Systems (ECoS) in artificial neural networks. The three-layer network architecture of the SECoS is built based on the given input. In this study, the activation value for the SECoS learning process, which is commonly calculated using the normalized Hamming distance, is also calculated using the normalized Manhattan distance and the normalized Euclidean distance in order to compare the smallest error value and best learning rate obtained. The measurement accuracy resulting from the three distance formulas is calculated using the mean absolute percentage error. In the training phase, with parameters such as the sensitivity threshold, error threshold, first learning rate, and second learning rate, it was found that the normalized Euclidean distance is more accurate than both the normalized Hamming distance and the normalized Manhattan distance. In the case of beta fibrinogen gene -455 G/A polymorphism patients used as training data, the highest mean absolute percentage error value is obtained with the normalized Manhattan distance, compared to the normalized Euclidean distance and the normalized Hamming distance. However, the differences are so small that it can be concluded that the three distance formulas used in SECoS do not have a significant effect on the accuracy of the training results.
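The three distance measures compared in the abstract can be sketched for inputs scaled to [0, 1]. Note the normalization conventions vary across the ECoS literature; the forms below are one common choice and are an assumption, not necessarily the exact formulas used in the paper.

```python
import math

# Three normalized distance measures for input vectors scaled to [0, 1].
# These normalizations are assumed; the paper's exact forms may differ.
def norm_hamming(a, b):
    # ECoS-style normalized Hamming distance: sum of absolute differences
    # divided by the sum of all components of both vectors.
    return sum(abs(x - y) for x, y in zip(a, b)) / (sum(a) + sum(b))

def norm_manhattan(a, b):
    # Manhattan (city-block) distance divided by the vector length.
    return sum(abs(x - y) for x, y in zip(a, b)) / len(a)

def norm_euclidean(a, b):
    # Euclidean distance divided by its maximum possible value sqrt(n).
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b))) / math.sqrt(len(a))

a = [0.2, 0.8, 0.5]   # example normalized input vector
b = [0.1, 0.9, 0.5]   # example stored connection-weight vector
print(norm_hamming(a, b), norm_manhattan(a, b), norm_euclidean(a, b))
```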
NASA Astrophysics Data System (ADS)
Talamonti, James J.; Kay, Richard B.; Krebs, Danny J.
1996-05-01
A numerical model was developed to emulate the capabilities of systems performing noncontact absolute distance measurements. The model incorporates known methods to minimize signal processing and digital sampling errors and evaluates the accuracy limitations imposed by spectral peak isolation using Hanning, Blackman, and Gaussian windows in the fast Fourier transform technique. We applied this model to the specific case of measuring the relative lengths of a compound Michelson interferometer. By processing computer-simulated data through our model, we project the ultimate precision for ideal data and for data containing AM-FM noise. The precision is shown to be limited by nonlinearities in the laser scan.
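Spectral peak isolation with the three window functions named above can be sketched as follows; the synthetic two-tone signal is a stand-in for interferometer beat-frequency data, and all signal parameters are invented for illustration.

```python
import numpy as np

# Windowed-FFT peak isolation with Hanning, Blackman, and Gaussian windows.
# A synthetic two-tone signal stands in for the interferometer data.
fs = 1000.0                      # sampling rate (Hz)
N = 2048
t = np.arange(N) / fs
signal = np.sin(2 * np.pi * 123.0 * t) + 0.3 * np.sin(2 * np.pi * 187.0 * t)

windows = {
    "hanning": np.hanning(N),
    "blackman": np.blackman(N),
    # Gaussian window centered on the record; width chosen arbitrarily.
    "gaussian": np.exp(-0.5 * ((np.arange(N) - N / 2) / (N / 8)) ** 2),
}
freqs = np.fft.rfftfreq(N, 1 / fs)
for name, w in windows.items():
    spectrum = np.abs(np.fft.rfft(signal * w))
    peak = freqs[np.argmax(spectrum)]    # frequency of the isolated peak
    print(f"{name}: dominant peak near {peak:.1f} Hz")
```

In a real system the fractional-bin peak location (and hence the distance) would be refined by interpolating around the maximum, which is where the choice of window affects the achievable precision.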
Absolute magnitude calibration using trigonometric parallax - Incomplete, spectroscopic samples
NASA Technical Reports Server (NTRS)
Ratnatunga, Kavan U.; Casertano, Stefano
1991-01-01
A new numerical algorithm is used to calibrate the absolute magnitude of spectroscopically selected stars from their observed trigonometric parallax. This procedure, based on maximum-likelihood estimation, can retrieve unbiased estimates of the intrinsic absolute magnitude and its dispersion even from incomplete samples suffering from selection biases in apparent magnitude and color. It can also make full use of low accuracy and negative parallaxes and incorporate censorship on reported parallax values. Accurate error estimates are derived for each of the fitted parameters. The algorithm allows an a posteriori check of whether the fitted model gives a good representation of the observations. The procedure is described in general and applied to both real and simulated data.
Importance of Geosat orbit and tidal errors in the estimation of large-scale Indian Ocean variations
NASA Technical Reports Server (NTRS)
Perigaud, Claire; Zlotnicki, Victor
1992-01-01
To improve the accuracy of estimates of large-scale meridional sea-level variations, Geosat ERM data over the Indian Ocean for a 26-month period were processed using two different techniques of orbit error reduction. The first technique removes an along-track polynomial of degree 1 over about 5,000 km, and the second removes an along-track once-per-revolution sine wave over about 40,000 km. Results obtained show that the polynomial technique produces stronger attenuation of both the tidal error and the large-scale oceanic signal. After filtering, the residual difference between the two methods represents 44 percent of the total variance and 23 percent of the annual variance. The sine-wave method yields a larger estimate of annual and interannual meridional variations.
Forecasting impact injuries of unrestrained occupants in railway vehicle passenger compartments.
Xie, Suchao; Zhou, Hui
2014-01-01
In order to predict the injury parameters of the occupants corresponding to different experimental parameters and to determine impact injury indices conveniently and efficiently, a model forecasting occupant impact injury was established in this work. The work was based on finite experimental observation values obtained by numerical simulation. First, the various factors influencing the impact injuries caused by the interaction between unrestrained occupants and the compartment's internal structures were collated and the most vulnerable regions of the occupant's body were analyzed. Then, the forecast model was set up based on a genetic algorithm-back propagation (GA-BP) hybrid algorithm, which unified the individual characteristics of the back propagation-artificial neural network (BP-ANN) model and the genetic algorithm (GA). The model was well suited to studies of occupant impact injuries and allowed multiple-parameter forecasts of the occupant impact injuries to be realized assuming values for various influencing factors. Finally, the forecast results for three types of secondary collision were analyzed using forecasting accuracy evaluation methods. All of the results showed the ideal accuracy of the forecast model. When an occupant faced a table, the relative errors between the predicted and experimental values of the respective injury parameters were kept within ± 6.0 percent and the average relative error (ARE) values did not exceed 3.0 percent. When an occupant faced a seat, the relative errors between the predicted and experimental values of the respective injury parameters were kept within ± 5.2 percent and the ARE values did not exceed 3.1 percent. When the occupant faced another occupant, the relative errors between the predicted and experimental values of the respective injury parameters were kept within ± 6.3 percent and the ARE values did not exceed 3.8 percent. 
The injury forecast model established in this article reduces the number of repeated experiments, improves the efficiency of designing the compartment's internal structural parameters, and provides a new way to assess the safety performance of the interior structure in existing and newly designed railway vehicle compartments.
NASA Astrophysics Data System (ADS)
Chérigier, L.; Czarnetzki, U.; Luggenhölscher, D.; Schulz-von der Gathen, V.; Döbele, H. F.
1999-01-01
Absolute atomic hydrogen densities were measured in the Gaseous Electronics Conference reference cell parallel-plate reactor by Doppler-free two-photon absorption laser-induced fluorescence spectroscopy (TALIF) at λ = 205 nm. The capacitively coupled radio frequency discharge was operated at 13.56 MHz in pure hydrogen under various input power and pressure conditions. The Doppler-free excitation technique with an unfocused laser beam, together with imaging of the fluorescence radiation by an intensified charge-coupled device camera, allows instantaneous spatial resolution along the radial direction. Absolute density calibration is obtained with the aid of a flow tube reactor and titration with NO2. The influence of spatial intensity inhomogeneities along the laser beam and the subsequent fluorescence is corrected by TALIF in xenon. A full mapping of the absolute density distribution between the electrodes was obtained. The detection limit for atomic hydrogen amounts to about 2×10¹⁸ m⁻³. The dissociation degree is of the order of a few percent.
An error criterion for determining sampling rates in closed-loop control systems
NASA Technical Reports Server (NTRS)
Brecher, S. M.
1972-01-01
The determination of an error criterion which will give a sampling rate for adequate performance of linear, time-invariant closed-loop, discrete-data control systems was studied. The proper modelling of the closed-loop control system for characterization of the error behavior, and the determination of an absolute error definition for performance of the two commonly used holding devices are discussed. The definition of an adequate relative error criterion as a function of the sampling rate and the parameters characterizing the system is established along with the determination of sampling rates. The validity of the expressions for the sampling interval was confirmed by computer simulations. Their application solves the problem of making a first choice in the selection of sampling rates.
NASA Astrophysics Data System (ADS)
Rieger, G.; Pinnington, E. H.; Ciubotariu, C.
2000-12-01
Absolute photon emission cross sections following electron capture reactions have been measured for C2+, N3+, N4+ and O3+ ions colliding with Li(2s) atoms at keV energies. The results are compared with calculations using the extended classical over-the-barrier model by Niehaus. We explore the limits of our experimental method and present a detailed discussion of experimental errors.
2003-01-01
Data are not readily available on the accuracy of one of the most commonly used home blood glucose meters, the One Touch Ultra (LifeScan, Milpitas, California). The purpose of this report is to provide information on the accuracy of this home glucose meter in children with type 1 diabetes. During a 24-h clinical research center stay, the accuracy of the Ultra meter was assessed in 91 children, 3-17 years old, with type 1 diabetes by comparing the Ultra glucose values with concurrent reference serum glucose values measured in a central laboratory. The Pearson correlation between the 2,068 paired Ultra and reference values was 0.97, with the median relative absolute difference being 6%. Ninety-four percent of all Ultra values (96% of venous and 84% of capillary samples) met the proposed International Organisation for Standardisation (ISO) standard for instruments used for self-monitoring of glucose when compared with venous reference values. Ninety-nine percent of values were in zones A + B of the Modified Error Grid. A high degree of accuracy was seen across the full range of glucose values. For 353 data points during an insulin-induced hypoglycemia test, the Ultra meter was found to have accuracy that was comparable to concurrently used benchmark instruments (Beckman, YSI, or i-STAT); 95% and 96% of readings from the Ultra meter and the benchmark instruments met the proposed ISO criteria, respectively. These results confirm that the One Touch Ultra meter provides accurate glucose measurements for both hypoglycemia and hyperglycemia in children with type 1 diabetes.
QSAR modeling of β-lactam binding to human serum proteins
NASA Astrophysics Data System (ADS)
Hall, L. Mark; Hall, Lowell H.; Kier, Lemont B.
2003-02-01
The binding of β-lactams to human serum proteins was modeled with topological descriptors of molecular structure. The experimental data were the concentration of protein-bound drug expressed as a percent of the total plasma concentration (percent fraction bound, PFB) for 87 penicillins and for 115 β-lactams. The electrotopological state indices (E-State) and the molecular connectivity chi indices were found to be the basis of two satisfactory models. A data set of 74 penicillins from a drug design series was successfully modeled with statistics: r2 = 0.80, s = 12.1, q2 = 0.76, s_press = 13.4. This model was then used to predict protein binding (PFB) for 13 commercial penicillins, resulting in a very good mean absolute error, MAE = 12.7, and correlation coefficient, q2 = 0.84. A group of 28 cephalosporins was combined with the penicillin data to create a data set of 115 β-lactams that was successfully modeled: r2 = 0.82, s = 12.7, q2 = 0.78, s_press = 13.7. A ten-fold 10% leave-group-out (LGO) cross-validation procedure was implemented, leading to very good statistics: MAE = 10.9, s_press = 14.0, q2 (or r2_press) = 0.78. The models indicate a combination of general and specific structure features that are important for estimating protein binding in this class of antibiotics. For the β-lactams, significant factors that increase binding are the presence and electron accessibility of aromatic rings, halogens, methylene groups, and =N- atoms. A significant negative influence on binding comes from amine groups and carbonyl oxygen atoms.
Patient identification errors are common in a simulated setting.
Henneman, Philip L; Fisher, Donald L; Henneman, Elizabeth A; Pham, Tuan A; Campbell, Megan M; Nathanson, Brian H
2010-06-01
We evaluate the frequency and accuracy of health care workers verifying patient identity before performing common tasks. The study included prospective, simulated patient scenarios with an eye-tracking device that showed where the health care workers looked. Simulations involved nurses administering an intravenous medication, technicians labeling a blood specimen, and clerks applying an identity band. Participants were asked to perform their assigned task on 3 simulated patients, and the third patient had a different date of birth and medical record number than the identity information on the artifact label specific to the health care workers' task. Health care workers were unaware that the focus of the study was patient identity. Sixty-one emergency health care workers participated--28 nurses, 16 technicians, and 17 emergency service associates--in 183 patient scenarios. Sixty-one percent of health care workers (37/61) caught the identity error (61% nurses, 94% technicians, 29% emergency service associates). Thirty-nine percent of health care workers (24/61) performed their assigned task on the wrong patient (39% nurses, 6% technicians, 71% emergency service associates). Eye-tracking data were available for 73% of the patient scenarios (133/183). Seventy-four percent of health care workers (74/100) failed to match the patient to the identity band (87% nurses, 49% technicians). Twenty-seven percent of health care workers (36/133) failed to match the artifact to the patient or the identity band before performing their task (33% nurses, 9% technicians, 33% emergency service associates). Fifteen percent (5/33) of health care workers who completed the steps to verify patient identity on the patient with the identification error still failed to recognize the error. Wide variation exists among health care workers verifying patient identity before performing everyday tasks. 
Education, process changes, and technology are needed to improve the frequency and accuracy of patient identification. Copyright (c) 2009. Published by Mosby, Inc.
Using lean to improve medication administration safety: in search of the "perfect dose".
Ching, Joan M; Long, Christina; Williams, Barbara L; Blackmore, C Craig
2013-05-01
At Virginia Mason Medical Center (Seattle), the Collaborative Alliance for Nursing Outcomes (CALNOC) Medication Administration Accuracy Quality Study was used in combination with Lean quality improvement efforts to address medication administration safety. Lean interventions were targeted at improving the medication room layout, applying visual controls, and implementing nursing standard work. The interventions were designed to prevent medication administration errors through improving six safe practices: (1) comparing medication with medication administration record, (2) labeling medication, (3) checking two forms of patient identification, (4) explaining medication to patient, (5) charting medication immediately, and (6) protecting the process from distractions/interruptions. Trained nurse auditors observed 9,244 doses for 2,139 patients. Following the intervention, the number of safe-practice violations decreased from 83 violations/100 doses at baseline (January 2010-March 2010) to 42 violations/100 doses at final follow-up (July 2011-September 2011), resulting in an absolute risk reduction of 42 violations/100 doses (95% confidence interval [CI]: 35-48; p < .001). The number of medication administration errors decreased from 10.3 errors/100 doses at baseline to 2.8 errors/100 doses at final follow-up (absolute risk reduction: 7 errors/100 doses [95% CI: 5-10; p < .001]). The "perfect dose" score, reflecting compliance with all six safe practices and absence of any of the eight medication administration errors, improved from 37 in compliance/100 doses at baseline to 68 in compliance/100 doses at the final follow-up. Lean process improvements coupled with direct observation can contribute to substantial decreases in errors in nursing medication administration.
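The absolute risk reduction reported above is a simple difference of rates, with a Wald-type confidence interval; a minimal sketch follows, using the abstract's error rates but assumed dose denominators (the per-period counts are not given in the abstract).

```python
import math

# Absolute risk reduction (ARR) for the medication-error rates above:
# 10.3 errors/100 doses at baseline vs. 2.8/100 at follow-up.
# The per-period denominators below are ASSUMED for illustration.
n1, n2 = 4600, 4600              # assumed doses observed in each period
p1, p2 = 10.3 / 100, 2.8 / 100   # observed error rates

arr = p1 - p2                    # absolute risk reduction
# Wald standard error for a difference of two independent proportions.
se = math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
ci = (arr - 1.96 * se, arr + 1.96 * se)
print(f"ARR = {arr * 100:.1f} per 100 doses, "
      f"95% CI ({ci[0] * 100:.1f}, {ci[1] * 100:.1f})")
```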
Furmanek, Mariusz P.; Słomka, Kajetan J.; Sobiesiak, Andrzej; Rzepko, Marian; Juras, Grzegorz
2018-01-01
Abstract The proprioceptive information received from mechanoreceptors is potentially responsible for controlling the joint position and force differentiation. However, it is unknown whether cryotherapy influences this complex mechanism. Previously reported results are not universally conclusive and sometimes even contradictory. The main objective of this study was to investigate the impact of local cryotherapy on knee joint position sense (JPS) and force production sense (FPS). The study group consisted of 55 healthy participants (age: 21 ± 2 years, body height: 171.2 ± 9 cm, body mass: 63.3 ± 12 kg, BMI: 21.5 ± 2.6). Local cooling was achieved with the use of gel-packs cooled to -2 ± 2.5°C and applied simultaneously over the knee joint and the quadriceps femoris muscle for 20 minutes. JPS and FPS were evaluated using the Biodex System 4 Pro apparatus. Repeated measures analysis of variance (ANOVA) did not show any statistically significant changes of the JPS and FPS under application of cryotherapy for all analyzed variables: the JPS’s absolute error (p = 0.976), its relative error (p = 0.295), and its variable error (p = 0.489); the FPS’s absolute error (p = 0.688), its relative error (p = 0.193), and its variable error (p = 0.123). The results indicate that local cooling does not affect proprioceptive acuity of the healthy knee joint. They also suggest that local limited cooling before physical activity at low velocity did not present health or injury risk in this particular study group. PMID:29599858
The effects of multiple aerospace environmental stressors on human performance
NASA Technical Reports Server (NTRS)
Popper, S. E.; Repperger, D. W.; Mccloskey, K.; Tripp, L. D.
1992-01-01
An extended Fitts' law paradigm reaction time (RT) task was used to evaluate the effects of acceleration on human performance in the Dynamic Environment Simulator (DES) at Armstrong Laboratory, Wright-Patterson AFB, Ohio. This effort was combined with an evaluation of the standard CSU-13 P anti-gravity suit versus three configurations of a 'retrograde inflation anti-G suit'. Results indicated that RT and error rates increased 17 percent and 14 percent, respectively, from baseline to the end of the simulated aerial combat maneuver, and that the most common error was pressing too few buttons.
Medication Administration Practices of School Nurses.
ERIC Educational Resources Information Center
McCarthy, Ann Marie; Kelly, Michael W.; Reed, David
2000-01-01
Assessed medication administration practices among school nurses, surveying members of the National Association of School Nurses. Respondents were extremely concerned about medication administration. Errors in administering medications were reported by 48.5 percent of respondents, with missed doses the most common error. Most nurses followed…
Kelly, Tanika N; Hixson, James E; Rao, Dabeeru C; Mei, Hao; Rice, Treva K; Jaquish, Cashell E; Shimmin, Lawrence C; Schwander, Karen; Chen, Chung-Shuian; Liu, Depei; Chen, Jichun; Bormans, Concetta; Shukla, Pramila; Farhana, Naveed; Stuart, Colin; Whelton, Paul K; He, Jiang; Gu, Dongfeng
2010-12-01
Genetic determinants of blood pressure (BP) response to potassium, or potassium sensitivity, are largely unknown. We conducted a genome-wide linkage scan and positional candidate gene analysis to identify genetic determinants of potassium sensitivity. A total of 1906 Han Chinese participants took part in a 7-day high-sodium diet followed by a 7-day high-sodium plus potassium dietary intervention. BP measurements were obtained at baseline and after each intervention using a random-zero sphygmomanometer. Significant linkage signals (logarithm of odds [LOD] score, >3) for BP responses to potassium were detected at chromosomal regions 3q24-q26.1, 3q28, and 11q22.3-q24.3. Maximum multipoint LOD scores of 3.09 at 3q25.2 and 3.41 at 11q23.3 were observed for absolute diastolic BP (DBP) and mean arterial pressure (MAP) responses, respectively. Linkage peaks of 3.56 at 3q25.1 and 3.01 at 11q23.3 for percent DBP response and 3.22 at 3q25.2, 3.01 at 3q28, and 4.48 at 11q23.3 for percent MAP response also were identified. Angiotensin II receptor, type 1 (AGTR1), single-nucleotide polymorphism rs16860760 in the 3q24-q26.1 region was significantly associated with absolute and percent systolic BP responses to potassium (P=0.0008 and P=0.0006, respectively). Absolute (95% CI) systolic BP responses for genotypes C/C, C/T, and T/T were -3.71 (-4.02 to -3.40), -2.62 (-3.38 to -1.85), and 1.03 (-3.73 to 5.79) mm Hg, respectively, and percent responses (95% CI) were -3.07 (-3.33 to -2.80), -2.07 (-2.74 to -1.41), and 0.90 (-3.20 to 4.99), respectively. Similar trends were observed for DBP and MAP responses. Genetic regions on chromosomes 3 and 11 may harbor important susceptibility loci for potassium sensitivity. Furthermore, the AGTR1 gene was a significant predictor of BP responses to potassium intake.
Estimating Accuracy of Land-Cover Composition From Two-Stage Clustering Sampling
Land-cover maps are often used to compute land-cover composition (i.e., the proportion or percent of area covered by each class), for each unit in a spatial partition of the region mapped. We derive design-based estimators of mean deviation (MD), mean absolute deviation (MAD), ...
Explorations in Statistics: The Analysis of Change
ERIC Educational Resources Information Center
Curran-Everett, Douglas; Williams, Calvin L.
2015-01-01
Learning about statistics is a lot like learning about science: the learning is more meaningful if you can actively explore. This tenth installment of "Explorations in Statistics" explores the analysis of a potential change in some physiological response. As researchers, we often express absolute change as percent change so we can…
Properties of soil in the San Fernando hydraulic fill dams
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, K.L.
1975-08-01
Results are presented of extensive field and laboratory tests on soils from two old hydraulic fill dams that were damaged during the Feb. 9, 1971, San Fernando earthquake. The data include standard penetration, absolute and relative compaction, relative density, static strength, and cyclic triaxial test results for both the hydraulic fill silty sand and the natural silty and gravelly sand alluvium. The relative densities of the hydraulic fills ranged from about 51 to 58 percent and the relative compaction ranged from about 85 to 92 percent of Modified AASHO maximum density. The relative density of the alluvium was about 65 to 70 percent. Other properties were consistent with previously published data from other similar soils at similar densities.
The absolute radiometric calibration of the advanced very high resolution radiometer
NASA Technical Reports Server (NTRS)
Slater, P. N.; Teillet, P. M.; Ding, Y.
1989-01-01
The measurement conditions are described for an intensive field campaign at White Sands Missile Range for the calibration of the AVHRRs on NOAA-9, NOAA-10 and NOAA-11, LANDSAT-4 TM and SPOT. Three different methods for calibration of AVHRRs by reference to a ground surface site are reported, and results from these methods are compared. Significant degradations in NOAA-9 and NOAA-10 AVHRR responsivities occurred since prelaunch calibrations were completed. As of February 1988, degradations in NOAA-9 AVHRR responsivities were on the order of 37 percent in channel 1 and 41 percent in channel 2, and for the NOAA-10 AVHRR these degradations were 42 and 59 percent in channels 1 and 2, respectively.
A biometric identification system based on eigenpalm and eigenfinger features.
Ribaric, Slobodan; Fratric, Ivan
2005-11-01
This paper presents a multimodal biometric identification system based on the features of the human hand. We describe a new biometric approach to personal identification using eigenfinger and eigenpalm features, with fusion applied at the matching-score level. The identification process can be divided into the following phases: capturing the image; preprocessing; extracting and normalizing the palm and strip-like finger subimages; extracting the eigenpalm and eigenfinger features based on the K-L transform; matching and fusion; and, finally, a decision based on the (k, l)-NN classifier and thresholding. The system was tested on a database of 237 people (1,820 hand images). The experimental results showed the effectiveness of the system in terms of the recognition rate (100 percent), the equal error rate (EER = 0.58 percent), and the total error rate (TER = 0.72 percent).
Assessing agreement between malaria slide density readings.
Alexander, Neal; Schellenberg, David; Ngasala, Billy; Petzold, Max; Drakeley, Chris; Sutherland, Colin
2010-01-04
Several criteria have been used to assess agreement between replicate slide readings of malaria parasite density. Such criteria may be based on percent difference, or absolute difference, or a combination. Neither the rationale for choosing between these types of criteria, nor that for choosing the magnitude of difference which defines acceptable agreement, is clear. The current paper seeks a procedure which avoids the disadvantages of these current options and whose parameter values are more clearly justified. Variation of parasite density within a slide is expected, even when it has been prepared from a homogeneous sample. This places lower limits on sensitivity and observer agreement, quantified by the Poisson distribution. This means that, if a criterion of fixed percent difference is used for satisfactory agreement, the number of discrepant readings is over-estimated at low parasite densities. With a criterion of fixed absolute difference, the same happens at high parasite densities. For an ideal slide, following the Poisson distribution, a criterion based on a constant difference in square root counts would apply for all densities. This can be back-transformed to a difference in absolute counts, which, as expected, gives a wider range of acceptable agreement at higher average densities. In an example dataset from Tanzania, observed differences in square root counts correspond to 95% limits of agreement of -2,800 and +2,500 parasites/microl at an average density of 2,000 parasites/microl, and -6,200 and +5,700 parasites/microl at 10,000 parasites/microl. However, there were more outliers beyond those ranges at higher densities, meaning that actual coverage of these ranges was not a constant 95%, but decreased with density. In a second study, a trial of microscopist training, the corresponding ranges of agreement are wider and asymmetrical: -8,600 to +5,200/microl, and -19,200 to +11,700/microl, respectively.
By comparison, the optimal limits of agreement, corresponding to Poisson variation, are +/- 780 and +/- 1,800 parasites/microl, respectively. The focus of this approach on the volume of blood read leads to other conclusions. For example, no matter how large a volume of blood is read, some densities are too low to be reliably detected, which in turn means that disagreements on slide positivity may simply result from within-slide variation, rather than reading errors. The proposed method defines limits of acceptable agreement in a way which allows for the natural increase in variability with parasite density. This includes defining the levels of between-reader variability, which are consistent with random variation: disagreements within these limits should not trigger additional readings. This approach merits investigation in other settings, in order to determine both the extent of its applicability, and appropriate numerical values for limits of agreement.
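The Poisson-only limits quoted above can be sketched directly from the square-root transformation. The conversion factor below (counting against 200 white blood cells at an assumed 8000 WBC/microl, giving a factor of 40 from counted parasites to parasites/microl) is a common convention but an assumption here, not stated in the abstract:

```python
import math

def poisson_limits(density, scale=40, z=1.96):
    """95% limits of agreement (parasites/microl) for the difference between
    two readings of the same slide, under Poisson variation only.
    `scale` converts counted parasites to parasites/microl (assumed 40)."""
    mean_count = density / scale
    # var(sqrt(Poisson)) ~ 1/4, so the difference of two independent
    # square-root counts has standard deviation ~ sqrt(1/2)
    sqrt_limit = z * math.sqrt(0.5)
    # back-transform: x1 - x2 ~ 2 * sqrt(mean_count) * (sqrt(x1) - sqrt(x2))
    return 2 * math.sqrt(mean_count) * sqrt_limit * scale

print(round(poisson_limits(2000)))   # 784, close to the quoted +/- 780
print(round(poisson_limits(10000)))  # 1753, close to the quoted +/- 1800
```

Under these assumptions the sketch reproduces the quoted optimal limits, and it makes visible why the acceptable range widens with the square root of density rather than linearly.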
Angermeier, Ingo; Dunford, Benjamin B; Boss, Alan D; Boss, R Wayne
2009-01-01
Numerous challenges confront managers in the healthcare industry, making it increasingly difficult for healthcare organizations to gain and sustain a competitive advantage. Contemporary management challenges in the industry have many different origins (e.g., economic, financial, clinical, and legal), but there is growing recognition that some of management's greatest problems have organizational roots. Thus, healthcare organizations must examine their personnel management strategies to ensure that they are optimized for fostering a highly committed and productive workforce. Drawing on a sample of 2,522 employees spread across 312 departments within a large U.S. healthcare organization, this article examines the impact of a participative management climate on four employee-level outcomes that represent some of the greatest challenges in the healthcare industry: customer service, medical errors, burnout, and turnover intentions. This study provides clear evidence that employee perceptions of the extent to which their work climate is participative rather than authoritarian have important implications for critical work attitudes and behavior. Specifically, employees in highly participative work climates provided 14 percent better customer service, committed 26 percent fewer clinical errors, demonstrated 79 percent lower burnout, and felt 61 percent lower likelihood of leaving the organization than employees in more authoritarian work climates. These findings suggest that participative management initiatives have a significant impact on the commitment and productivity of individual employees, likely improving the patient care and effectiveness of healthcare organizations as a whole.
Lundberg, Frida E; Johansson, Anna L V; Rodriguez-Wallberg, Kenny; Brand, Judith S; Czene, Kamila; Hall, Per; Iliadou, Anastasia N
2016-04-13
Ovarian stimulation drugs, in particular hormonal agents used for controlled ovarian stimulation (COS) required to perform in vitro fertilization, increase estrogen and progesterone levels and have therefore been suspected to influence breast cancer risk. This study aims to investigate whether infertility and hormonal fertility treatment influences mammographic density, a strong hormone-responsive risk factor for breast cancer. Cross-sectional study including 43,313 women recruited to the Karolinska Mammography Project between 2010 and 2013. Among women who reported having had infertility, 1576 had gone through COS, 1429 had had hormonal stimulation without COS and 5958 had not received any hormonal fertility treatment. Percent and absolute mammographic densities were obtained using the volumetric method Volpara™. Associations with mammographic density were assessed using multivariable generalized linear models, estimating mean differences (MD) with 95 % confidence intervals (CI). After multivariable adjustment, women with a history of infertility had 1.53 cm(3) higher absolute dense volume compared to non-infertile women (95 % CI: 0.70 to 2.35). Among infertile women, only those who had gone through COS treatment had a higher absolute dense volume than those who had not received any hormone treatment (adjusted MD 3.22, 95 % CI: 1.10 to 5.33). No clear associations were observed between infertility, fertility treatment and percent volumetric density. Overall, women reporting infertility had more dense tissue in the breast. The higher absolute dense volume in women treated with COS may indicate a treatment effect, although part of the association might also be due to the underlying infertility. Continued monitoring of cancer risk in infertile women, especially those who undergo COS, is warranted.
Estimating pore and cement volumes in thin section
Halley, R.B.
1978-01-01
Point count estimates of pore, grain and cement volumes from thin sections are inaccurate, often by more than 100 percent, even though they may be surprisingly precise (reproducibility + or - 3 percent). Errors are produced by: 1) inclusion of submicroscopic pore space within solid volume and 2) edge effects caused by grain curvature within a 30-micron thick thin section. Submicroscopic porosity may be measured by various physical tests or may be visually estimated from scanning electron micrographs. Edge error takes the form of an envelope around grains and increases with decreasing grain size and sorting, increasing grain irregularity and tighter grain packing. Cements are greatly involved in edge error because of their position at grain peripheries and their generally small grain size. Edge error is minimized by methods which reduce the thickness of the sample viewed during point counting. Methods which effectively reduce thickness include use of ultra-thin thin sections or acetate peels, point counting in reflected light, or carefully focusing and counting on the upper surface of the thin section.
Beekley, Matthew D; Abe, Takashi; Kondo, Masakatsu; Midorikawa, Taishi; Yamauchi, Taro
2006-01-01
Sumo wrestling is unique in combat sport, and in all of sport. We examined the maximum aerobic capacity and body composition of sumo wrestlers and compared them to untrained controls. We also compared "aerobic muscle quality", meaning VO2max normalized to predicted skeletal muscle mass (SMM) (VO2max/SMM), between sumo wrestlers and controls and among previously published data for male athletes from combat, aerobic, and power sports. Sumo wrestlers, compared to untrained controls, had greater (p < 0.05) body mass (mean ± SD; 117.0 ± 4.9 vs. 56.1 ± 9.8 kg), percent fat (24.0 ± 1.4 vs. 13.3 ± 4.5), fat-free mass (88.9 ± 4.2 vs. 48.4 ± 6.8 kg), predicted SMM (48.2 ± 2.9 vs. 20.6 ± 4.7 kg) and absolute VO2max (3.6 ± 1.3 vs. 2.5 ± 0.7 L·min(-1)). Mean VO2max/SMM (ml·kg SMM(-1)·min(-1)) was significantly different (p < 0.05) among aerobic athletes (164.8 ± 18.3), combat athletes (which were not different from untrained controls; 131.4 ± 9.3 and 128.6 ± 13.6, respectively), power athletes (96.5 ± 5.3), and sumo wrestlers (71.4 ± 5.3). There was a strong negative correlation (r = -0.75) between percent body fat and VO2max/SMM (p < 0.05). We conclude that sumo wrestlers have some of the largest percent body fat and fat-free mass and the lowest "aerobic muscle quality" (VO2max/SMM), both in combat sport and compared to aerobic and power sport athletes. Additionally, it appears from analysis of the relationship between SMM and absolute VO2max for all sports that there is a "ceiling" at which increases in SMM do not result in additional increases in absolute VO2max.
Key Points: Sumo wrestlers have a high absolute VO2max compared to untrained controls. However, sumo wrestlers have a low VO2max/kg of skeletal muscle mass compared to other combat sports, other strength/power sports, and untrained controls. The reason for this is unknown, but is probably related to alterations in sumo skeletal muscle compared to other sports. Based on the present and previous data, there appears to be a "ceiling" at which increases in skeletal muscle mass do not result in additional increases in absolute VO2max.
Pettijohn, Robert A.; Busby, John F.; Cervantes, Michael A.
1993-01-01
The U.S. Geological Survey used four programs in 1990 to provide external data quality assurance for the National Atmospheric Deposition Program/National Trends Network (NADP/NTN). Results of the intersite-comparison program indicate that 80 and 74 percent of the site operators met the NADP/NTN goals for pH determination and 98 and 95 percent of the site operators met the NADP/NTN goals for specific-conductance determination during the two studies in 1990. The effects of routine sample handling, processing, and shipping determined in the blind-audit program indicated significant positive bias for calcium, magnesium, sodium, potassium, chloride, nitrate, and sulfate. Significant negative bias was determined for hydrogen ion and specific conductance. A Kruskal-Wallis test indicated that there were no significant (α=0.01) differences in analytical results from the three laboratories participating in the interlaboratory-comparison program. Results from the collocated-sampler study indicate the median relative error for potassium and ammonium concentration and deposition exceeded 15 percent at most sites, while the median relative error for sulfate and nitrate at all sites was less than 6 percent for concentration and less than 15 percent for deposition.
NASA Technical Reports Server (NTRS)
Anbar, A. D.; Allen, M.; Nair, H. A.
1993-01-01
We have investigated the impact of high resolution, temperature-dependent CO2 cross-section measurements, reported by Lewis and Carver (1983), on calculations of photodissociation rate coefficients in the Martian atmosphere. We find that the adoption of 50 A intervals for the purpose of computational efficiency results in errors in the calculated values for photodissociation of CO2, H2O, and O2 which are generally not above 10 percent, but as large as 20 percent in some instances. These are acceptably small errors, especially considering the uncertainties introduced by the large temperature dependence of the CO2 cross section. The inclusion of temperature-dependent CO2 cross sections is shown to lead to a decrease in the diurnally averaged rate of CO2 photodissociation as large as 33 percent at some altitudes, and increases of as much as 950 percent and 80 percent in the photodissociation rate coefficients of H2O and O2, respectively. The actual magnitude of the changes depends on the assumptions used to model the CO2 absorption spectrum at temperatures lower than the available measurements, and at wavelengths longward of 1970 A.
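The binning effect described (integrating with 50 A interval averages versus a fine wavelength grid) can be illustrated numerically. The cross-section and flux shapes below are invented placeholders, not the Lewis and Carver (1983) data; the point is only that summing the products of bin averages differs from the fine-grid integral whenever both factors vary within a bin:

```python
import numpy as np

# Invented placeholder spectra on a 0.1 A grid; real work would use measured
# temperature-dependent CO2 cross sections and actual solar flux
wl = np.arange(1200.0, 2000.0, 0.1)
sigma = 1e-19 * np.exp(-((wl - 1400.0) ** 2) / 2.0e4)  # absorption feature
flux = 1e11 * (wl / 1200.0) ** 4                        # smooth flux shape

dx = 0.1
J_fine = np.sum(sigma * flux) * dx   # fine-grid rate coefficient integral

# Same integral using 50 A bin averages of sigma and flux separately
nbin = 500                           # 50 A / 0.1 A per bin
sig_b = sigma.reshape(-1, nbin).mean(axis=1)
flx_b = flux.reshape(-1, nbin).mean(axis=1)
J_coarse = np.sum(sig_b * flx_b) * 50.0

err = abs(J_coarse - J_fine) / J_fine * 100
print(f"binning error: {err:.2f}%")
```

The residual is the within-bin covariance of cross section and flux; with strongly structured cross sections (as in the real CO2 spectrum) it grows toward the ~10-20 percent errors quoted above.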
Wang, Guochao; Tan, Lilong; Yan, Shuhua
2018-02-07
We report on a frequency-comb-referenced absolute interferometer which instantly measures long distance by integrating multi-wavelength interferometry with direct synthetic wavelength interferometry. The reported interferometer utilizes four different wavelengths, simultaneously calibrated to the frequency comb of a femtosecond laser, to implement subwavelength distance measurement, while direct synthetic wavelength interferometry is elaborately introduced by launching a fifth wavelength to extend a non-ambiguous range for meter-scale measurement. A linearity test performed comparatively with a He-Ne laser interferometer shows a residual error of less than 70.8 nm in peak-to-valley over a 3 m distance, and a 10 h distance comparison is demonstrated to gain fractional deviations of ~3 × 10 -8 versus 3 m distance. Test results reveal that the presented absolute interferometer enables precise, stable, and long-term distance measurements and facilitates absolute positioning applications such as large-scale manufacturing and space missions.
Genetic and Environmental Contributions to Educational Attainment in Australia.
ERIC Educational Resources Information Center
Miller, Paul; Mulvey, Charles; Martin, Nick
2001-01-01
Data from a large sample of Australian twins indicate that 50 to 65 percent of variance in educational attainments can be attributed to genetic endowments. Only about 25 to 40 percent may be due to environmental factors, depending on adjustments for measurement error and assortative mating. (Contains 51 references.) (MLH)
47 CFR 101.91 - Involuntary relocation procedures.
Code of Federal Regulations, 2010 CFR
2010-10-01
... engineering, equipment, site and FCC fees, as well as any legitimate and prudent transaction expenses incurred..., reliability is measured by the percent of time the bit error rate (BER) exceeds a desired value, and for analog or digital voice transmissions, it is measured by the percent of time that audio signal quality...
Forecasting in foodservice: model development, testing, and evaluation.
Miller, J L; Thompson, P A; Orabella, M M
1991-05-01
This study was designed to develop, test, and evaluate mathematical models appropriate for forecasting menu-item production demand in foodservice. Data were collected from residence and dining hall foodservices at Ohio State University. Objectives of the study were to collect, code, and analyze the data; develop and test models using actual operation data; and compare forecasting results with current methods in use. Customer count was forecast using deseasonalized simple exponential smoothing. Menu-item demand was forecast by multiplying the count forecast by a predicted preference statistic. Forecasting models were evaluated using mean squared error, mean absolute deviation, and mean absolute percentage error techniques. All models were more accurate than current methods. A broad spectrum of forecasting techniques could be used by foodservice managers with access to a personal computer and spread-sheet and database-management software. The findings indicate that mathematical forecasting techniques may be effective in foodservice operations to control costs, increase productivity, and maximize profits.
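The forecasting pipeline described can be sketched as follows. The customer counts and smoothing constant are hypothetical, and deseasonalization (e.g. dividing counts by day-of-week indices before smoothing) is omitted for brevity:

```python
def ses(series, alpha=0.3):
    """One-step-ahead forecasts from simple exponential smoothing."""
    fc = [series[0]]                     # seed with the first observation
    for x in series[:-1]:
        fc.append(alpha * x + (1 - alpha) * fc[-1])
    return fc

def evaluate(actual, forecast):
    """Mean squared error and mean absolute deviation of the forecasts."""
    errs = [a - f for a, f in zip(actual, forecast)]
    mse = sum(e * e for e in errs) / len(errs)
    mad = sum(abs(e) for e in errs) / len(errs)
    return mse, mad

counts = [420, 435, 410, 450, 440, 460, 455]  # hypothetical daily counts
fc = ses(counts)
mse, mad = evaluate(counts, fc)
```

Menu-item demand would then be the count forecast multiplied by a predicted preference fraction per item, with MSE/MAD (and MAPE) used to compare smoothing constants.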
Liu, Min Hsien; Chen, Cheng; Hong, Yaw Shun
2005-02-08
A three-parametric modification equation and the least-squares approach are adopted to calibrate hybrid density-functional theory energies of C(1)-C(10) straight-chain aldehydes, alcohols, and alkoxides to accurate enthalpies of formation DeltaH(f) and Gibbs free energies of formation DeltaG(f), respectively. All calculated energies of the C-H-O composite compounds were obtained from B3LYP/6-311++G(3df,2pd) single-point energies and the related thermal corrections of B3LYP/6-31G(d,p) optimized geometries. This investigation revealed that all compounds had 0.05% average absolute relative error (ARE) for the atomization energies, with a mean absolute error (MAE) of just 2.1 kJ/mol (0.5 kcal/mol) for the DeltaH(f) and 2.4 kJ/mol (0.6 kcal/mol) for the DeltaG(f) of formation.
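The calibration idea, fitting a small parametric correction to calculated energies by least squares, can be sketched generically. The per-atom correction form and every number below are hypothetical illustrations, not the paper's actual three-parameter equation or data:

```python
import numpy as np

def fit_correction(calc, exp, atom_counts):
    """Least-squares fit of per-atom corrections (one per element) so that
    calc + X @ coef approximates exp; X holds (nC, nH, nO) per species."""
    X = np.asarray(atom_counts, dtype=float)
    resid = np.asarray(exp, dtype=float) - np.asarray(calc, dtype=float)
    coef, *_ = np.linalg.lstsq(X, resid, rcond=None)
    return coef

# Hypothetical calculated vs. reference dHf values (kJ/mol) for three
# C-H-O species, with (nC, nH, nO) atom counts -- illustrative only
calc = [-201.0, -166.0, -235.0]
exp = [-204.9, -170.7, -238.4]
counts = [(1, 4, 1), (1, 2, 1), (2, 6, 1)]
coef = fit_correction(calc, exp, counts)
corrected = np.asarray(calc) + np.asarray(counts, dtype=float) @ coef
```

With more species than parameters the fit is no longer exact, and the residual spread plays the role of the MAE reported in the abstract.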
Verifying Safeguards Declarations with INDEPTH: A Sensitivity Study
DOE Office of Scientific and Technical Information (OSTI.GOV)
Grogan, Brandon R; Richards, Scott
2017-01-01
A series of ORIGEN calculations were used to simulate the irradiation and decay of a number of spent fuel assemblies. These simulations focused on variations in the irradiation history that achieved the same terminal burnup through a different set of cycle histories. Simulated NDA measurements were generated for each test case from the ORIGEN data. These simulated measurement types included relative gammas, absolute gammas, absolute gammas plus neutrons, and concentrations of a set of six isotopes commonly measured by NDA. The INDEPTH code was used to reconstruct the initial enrichment, cooling time, and burnup for each irradiation using each simulated measurement type. The results were then compared to the initial ORIGEN inputs to quantify the size of the errors induced by the variations in cycle histories. Errors were compared based on the underlying changes to the cycle history, as well as the data types used for the reconstructions.
NASA Astrophysics Data System (ADS)
Lippman, Thomas; Brockie, Richard; Coker, Jon; Contreras, John; Galbraith, Rick; Garzon, Samir; Hanson, Weldon; Leong, Tom; Marley, Arley; Wood, Roger; Zakai, Rehan; Zolla, Howard; Duquette, Paul; Petrizzi, Joe
2015-05-01
Exponential growth of the areal density has driven the magnetic recording industry for almost sixty years. But now areal density growth is slowing down, suggesting that current technologies are reaching their fundamental limit. The next generation of recording technologies, namely, energy-assisted writing and bit-patterned media, remains just over the horizon. Two-Dimensional Magnetic Recording (TDMR) is a promising new approach, enabling continued areal density growth with only modest changes to the heads and recording electronics. We demonstrate a first generation implementation of TDMR by using a dual-element read sensor to improve the recovery of data encoded by a conventional low-density parity-check (LDPC) channel. The signals are combined with a 2D equalizer into a single modified waveform that is decoded by a standard LDPC channel. Our detection hardware can perform simultaneous measurement of the pre- and post-combined error rate information, allowing one set of measurements to assess the absolute areal density capability of the TDMR system as well as the gain over a conventional shingled magnetic recording system with identical components. We discuss areal density measurements using this hardware and demonstrate gains exceeding five percent based on experimental dual reader components.
NASA Technical Reports Server (NTRS)
Keitz, J. F.
1982-01-01
The impact of more timely and accurate weather data on airline flight planning, with the emphasis on fuel savings, is studied. This volume of the report discusses the results of Task 3 of the four major tasks included in the study. Task 3 compares flight plans developed on the Suitland forecast with actual data observed by the aircraft (and averaged over 10 degree segments). The results show that the average difference between the forecast and observed wind speed is 9 kts. without considering direction, and the average difference in the component of the forecast wind parallel to the direction of the observed wind is 13 kts. - both indicating that the Suitland forecast underestimates the wind speeds. The Root Mean Square (RMS) vector error is 30.1 kts. The average absolute difference in direction between the forecast and observed wind is 26 degrees, and the temperature difference is 3 degrees Celsius. These results indicate that the forecast model, as well as the verifying analysis used to develop comparison flight plans in Tasks 1 and 2, is a limiting factor, and that the average potential fuel savings or penalty is up to 3.6 percent depending on the direction of flight.
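The error statistics quoted (average speed difference, average absolute direction difference, RMS vector error) can be computed from paired forecast/observed winds; a minimal sketch with hypothetical values:

```python
import math

def wind_errors(forecast, observed):
    """Average absolute speed error, average absolute direction error
    (wrapped to 0-180 deg), and RMS vector error from u/v components.
    Inputs are lists of (speed_kts, direction_deg) pairs."""
    n = len(forecast)
    spd = sum(abs(f[0] - o[0]) for f, o in zip(forecast, observed)) / n
    direc = sum(abs((f[1] - o[1] + 180) % 360 - 180)
                for f, o in zip(forecast, observed)) / n
    sq = 0.0
    for (fs, fd), (os_, od) in zip(forecast, observed):
        fu, fv = fs * math.sin(math.radians(fd)), fs * math.cos(math.radians(fd))
        ou, ov = os_ * math.sin(math.radians(od)), os_ * math.cos(math.radians(od))
        sq += (fu - ou) ** 2 + (fv - ov) ** 2
    return spd, direc, math.sqrt(sq / n)

fc = [(40, 270), (55, 250)]   # hypothetical forecast winds (kts, deg)
ob = [(49, 280), (68, 240)]   # hypothetical observed winds
spd, direc, rms = wind_errors(fc, ob)
```

The direction difference is wrapped so a 350 vs. 10 degree pair scores 20 degrees, not 340, and the RMS vector error is always at least as large as the mean absolute speed error.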
Method of estimating natural recharge to the Edwards Aquifer in the San Antonio area, Texas
Puente, Celso
1978-01-01
The principal errors in the estimates of annual recharge are related to errors in estimating runoff in ungaged areas, which represent about 30 percent of the infiltration area. The estimated long-term average annual recharge in each basin, however, is probably representative of the actual recharge because the averaging procedure tends to cancel out the major errors.
ERIC Educational Resources Information Center
Micceri, Theodore; Parasher, Pradnya; Waugh, Gordon W.; Herreid, Charlene
2009-01-01
An extensive review of the research literature and a study comparing over 36,000 survey responses with archival true scores indicated that one should expect a minimum of at least three percent random error for the least ambiguous of self-report measures. The Gulliver Effect occurs when a small proportion of error in a sizable subpopulation exerts…
NASA Technical Reports Server (NTRS)
Wilmington, R. P.; Klute, Glenn K. (Editor); Carroll, Amy E. (Editor); Stuart, Mark A. (Editor); Poliner, Jeff (Editor); Rajulu, Sudhakar (Editor); Stanush, Julie (Editor)
1992-01-01
Kinematics, the study of motion exclusive of the influences of mass and force, is one of the primary methods used for the analysis of human biomechanical systems as well as other types of mechanical systems. The Anthropometry and Biomechanics Laboratory (ABL) in the Crew Interface Analysis section of the Man-Systems Division performs both human body kinematics as well as mechanical system kinematics using the Ariel Performance Analysis System (APAS). The APAS supports both analysis of analog signals (e.g. force plate data collection) as well as digitization and analysis of video data. The current evaluations address several methodology issues concerning the accuracy of the kinematic data collection and analysis used in the ABL. This document describes a series of evaluations performed to gain quantitative data pertaining to position and constant angular velocity movements under several operating conditions. Two-dimensional as well as three-dimensional data collection and analyses were completed in a controlled laboratory environment using typical hardware setups. In addition, an evaluation was performed to evaluate the accuracy impact due to a single axis camera offset. Segment length and positional data exhibited errors within 3 percent when using three-dimensional analysis and yielded errors within 8 percent through two-dimensional analysis (Direct Linear Software). Peak angular velocities displayed errors within 6 percent through three-dimensional analyses and exhibited errors of 12 percent when using two-dimensional analysis (Direct Linear Software). The specific results from this series of evaluations and their impacts on the methodology issues of kinematic data collection and analyses are presented in detail. The accuracy levels observed in these evaluations are also presented.
Error analysis of multi-needle Langmuir probe measurement technique.
Barjatya, Aroh; Merritt, William
2018-04-01
The multi-needle Langmuir probe is a fairly new instrument technique that has been flown on several recent sounding rockets and is slated to fly on a subset of the QB50 CubeSat constellation. This paper takes a fundamental look into the data analysis procedures used for this instrument to derive absolute electron density. Our calculations suggest that while the technique remains promising, the current data analysis procedures could easily result in errors of 50% or more. We present a simple data analysis adjustment that can reduce errors by at least a factor of five in typical operation.
NASA Astrophysics Data System (ADS)
Rasim; Junaeti, E.; Wirantika, R.
2018-01-01
Accurate forecasting of product sales depends on the forecasting method used. The purpose of this research is to build a motorcycle sales forecasting application using the Fuzzy Time Series method combined with interval determination using an automatic clustering algorithm. Forecasting is done using motorcycle sales data from the last ten years. The error rate of forecasting is then measured using the Mean Percentage Error (MPE) and Mean Absolute Percentage Error (MAPE). The forecasts for the one-year period obtained in this study fall within good accuracy.
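The two error measures named above have standard definitions; a minimal Python sketch, with made-up sales figures rather than data from the study:

```python
def mpe(actual, forecast):
    """Mean Percentage Error: signed, so over- and under-forecasts cancel."""
    return 100.0 * sum((a - f) / a for a, f in zip(actual, forecast)) / len(actual)

def mape(actual, forecast):
    """Mean Absolute Percentage Error: error magnitude only."""
    return 100.0 * sum(abs(a - f) / a for a, f in zip(actual, forecast)) / len(actual)

# Illustrative monthly sales and forecasts (not data from the study).
actual   = [120, 150, 130, 170]
forecast = [110, 160, 125, 180]
print(round(mpe(actual, forecast), 2))   # → -0.09
print(round(mape(actual, forecast), 2))  # → 6.18
```

The pairing of the two is deliberate: MPE near zero with a large MAPE indicates errors that are large but unbiased.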
Joshi, Shuchi N; Srinivas, Nuggehally R; Parmar, Deven V
2018-03-01
Our aim was to develop and validate the extrapolative performance of a regression model using a limited sampling strategy for accurate estimation of the area under the plasma concentration versus time curve for saroglitazar. Healthy subject pharmacokinetic data from a well-powered food-effect study (fasted vs fed treatments; n = 50) was used in this work. The first 25 subjects' serial plasma concentration data up to 72 hours and corresponding AUC0-t (ie, 72 hours) from the fasting group comprised a training dataset to develop the limited sampling model. The internal datasets for prediction included the remaining 25 subjects from the fasting group and all 50 subjects from the fed condition of the same study. The external datasets included pharmacokinetic data for saroglitazar from previous single-dose clinical studies. Limited sampling models were composed of 1-, 2-, and 3-concentration-time points' correlation with AUC0-t of saroglitazar. Only models with regression coefficients (R²) >0.90 were screened for further evaluation. The best R² model was validated for its utility based on mean prediction error, mean absolute prediction error, and root mean square error. Both correlations between predicted and observed AUC0-t of saroglitazar and verification of precision and bias using Bland-Altman plot were carried out. None of the evaluated 1- and 2-concentration-time points models achieved R² > 0.90. Among the various 3-concentration-time points models, only 4 equations passed the predefined criterion of R² > 0.90. Limited sampling models with time points 0.5, 2, and 8 hours (R² = 0.9323) and 0.75, 2, and 8 hours (R² = 0.9375) were validated. Mean prediction error, mean absolute prediction error, and root mean square error were <30% (predefined criterion) and correlation (r) was at least 0.7950 for the consolidated internal and external datasets of 102 healthy subjects for the AUC0-t prediction of saroglitazar. 
The same models, when applied to the AUC0-t prediction of saroglitazar sulfoxide, showed mean prediction error, mean absolute prediction error, and root mean square error <30%, and correlation (r) was at least 0.9339 in the same pool of healthy subjects. A 3-concentration-time points limited sampling model predicts the exposure of saroglitazar (ie, AUC0-t) within the predefined acceptable bias and imprecision limits. The same model was also used to predict AUC0-∞. The same limited sampling model was found to predict the exposure of saroglitazar sulfoxide within the predefined criteria. This model can find utility during late-phase clinical development of saroglitazar in the patient population. Copyright © 2018 Elsevier HS Journals, Inc. All rights reserved.
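A limited sampling model of this kind is, at bottom, an ordinary least-squares fit of AUC against a few concentration-time points. A minimal sketch follows; the concentration values and the exactly linear response are invented to exercise the machinery, not saroglitazar data:

```python
def fit_linear(X, y):
    """Ordinary least squares via the normal equations (X^T X) b = X^T y,
    solved by Gaussian elimination with partial pivoting."""
    rows = [[1.0] + list(r) for r in X]          # prepend intercept column
    p = len(rows[0])
    A = [[sum(r[i] * r[j] for r in rows) for j in range(p)] for i in range(p)]
    b = [sum(r[i] * yi for r, yi in zip(rows, y)) for i in range(p)]
    for col in range(p):
        piv = max(range(col, p), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, p):
            f = A[r][col] / A[col][col]
            for c in range(col, p):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    beta = [0.0] * p
    for i in reversed(range(p)):
        beta[i] = (b[i] - sum(A[i][j] * beta[j] for j in range(i + 1, p))) / A[i][i]
    return beta

def predict(beta, conc):
    return beta[0] + sum(b * c for b, c in zip(beta[1:], conc))

# Hypothetical concentrations at three time points and an exactly linear AUC,
# purely to demonstrate the fit (not study data).
train_C = [(1, 2, 3), (2, 1, 4), (3, 3, 1), (4, 2, 2), (1, 1, 1)]
train_AUC = [2 + 5 * a + 3 * b + 8 * c for a, b, c in train_C]
beta = fit_linear(train_C, train_AUC)
print(round(predict(beta, (2, 2, 2)), 3))  # → 34.0
```

In the study the fitted model would then be judged on held-out subjects via mean prediction error, mean absolute prediction error, and root mean square error.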
Error analysis of 3D-PTV through unsteady interfaces
NASA Astrophysics Data System (ADS)
Akutina, Yulia; Mydlarski, Laurent; Gaskin, Susan; Eiff, Olivier
2018-03-01
The feasibility of stereoscopic flow measurements through an unsteady optical interface is investigated. Position errors produced by a wavy optical surface are determined analytically, as are the optimal viewing angles of the cameras to minimize such errors. Two methods of measuring the resulting velocity errors are proposed. These methods are applied to 3D particle tracking velocimetry (3D-PTV) data obtained through the free surface of a water flow within a cavity adjacent to a shallow channel. The experiments were performed using two sets of conditions, one having no strong surface perturbations, and the other exhibiting surface gravity waves. In the latter case, the amplitude of the gravity waves was 6% of the water depth, resulting in water surface inclinations of about 0.2°. (The water depth is used herein as a relevant length scale, because the measurements are performed in the entire water column. In a more general case, the relevant scale is the maximum distance from the interface to the measurement plane, H, which here is the same as the water depth.) It was found that the contribution of the waves to the overall measurement error is low. The absolute position errors of the system were moderate (1.2% of H). However, given that the velocity is calculated from the relative displacement of a particle between two frames, the errors in the measured water velocities were reasonably small, because the error in the velocity is the relative position error over the average displacement distance. The relative position error was measured to be 0.04% of H, resulting in small velocity errors of 0.3% of the free-stream velocity (equivalent to 1.1% of the average velocity in the domain). It is concluded that even though the absolute positions to which the velocity vectors are assigned are distorted by the unsteady interface, the magnitude of the velocity vectors themselves remains accurate as long as the waves are slowly varying (have low curvature). 
The stronger the disturbances on the interface (high amplitude, short wavelength), the smaller the distance from the interface at which the measurements can be performed.
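The key step in the argument, that the velocity error is governed by the relative (frame-to-frame) position error divided by the particle displacement rather than by the absolute position error, reduces to a one-line ratio. The numbers below are illustrative, not the study's:

```python
def velocity_error_fraction(rel_pos_error, displacement):
    """Velocity from two frames is u = (x2 - x1) / dt, so the fractional
    error in u equals the relative position error over the displacement."""
    return rel_pos_error / displacement

# Illustrative: 0.04 mm relative position error, 10 mm particle displacement.
print(round(velocity_error_fraction(0.04, 10.0), 6))  # → 0.004, i.e., 0.4%
```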
Short term load forecasting using a self-supervised adaptive neural network
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yoo, H.; Pimmel, R.L.
The authors developed a self-supervised adaptive neural network to perform short term load forecasts (STLF) for a large power system covering a wide service area with several heavy load centers. They used the self-supervised network to extract correlational features from temperature and load data. In using data from the calendar year 1993 as a test case, they found a 0.90 percent error for hour-ahead forecasting and 1.92 percent error for day-ahead forecasting. These levels of error compare favorably with those obtained by other techniques. The algorithm ran in a couple of minutes on a PC containing an Intel Pentium 120 MHz CPU. Since the algorithm included searching the historical database, training the network, and actually performing the forecasts, this approach provides a real-time, portable, and adaptable STLF.
Seebeck Coefficient Metrology: Do Contemporary Protocols Measure Up?
NASA Astrophysics Data System (ADS)
Martin, Joshua; Wong-Ng, Winnie; Green, Martin L.
2015-06-01
Comparative measurements of the Seebeck coefficient are challenging due to the diversity of instrumentation and measurement protocols. With the implementation of standardized measurement protocols and the use of Standard Reference Materials (SRMs®), for example, the recently certified National Institute of Standards and Technology (NIST) SRM® 3451 "Low Temperature Seebeck Coefficient Standard (10-390 K)", researchers can reliably analyze and compare data, both intra- and inter-laboratory, thereby accelerating the development of more efficient thermoelectric materials and devices. We present a comparative overview of commonly adopted Seebeck coefficient measurement practices. First, we examine the influence of asynchronous temporal and spatial measurement of electric potential and temperature. Temporal asynchronicity introduces error in the absolute Seebeck coefficient of the order of ≈10%, whereas spatial asynchronicity introduces error of the order of a few percent. Second, we examine the influence of poor thermal contact between the measurement probes and the sample. This is especially critical at high temperature, wherein the prevalent mode of measuring surface temperature is facilitated by pressure contact. Each topic will include the comparison of data measured using different measurement techniques and using different probe arrangements. We demonstrate that the probe arrangement is the primary limit to high accuracy, wherein the Seebeck coefficients measured by the 2-probe arrangement and those measured by the 4-probe arrangement diverge with the increase in temperature, approaching ≈14% at 900 K. Using these analyses, we provide recommended measurement protocols to guide members of the thermoelectric materials community in performing more accurate measurements and in evaluating more comprehensive uncertainty limits.
NASA Astrophysics Data System (ADS)
Mohanty, B.; Jena, S.; Panda, R. K.
2016-12-01
The overexploitation of groundwater has resulted in the abandonment of several shallow tube wells in the study basin in Eastern India. For the sustainability of groundwater resources, basin-scale modelling of groundwater flow is indispensable for the effective planning and management of the water resources. The basic intent of this study is to develop a 3-D groundwater flow model of the study basin using the Visual MODFLOW Flex 2014.2 package and successfully calibrate and validate the model using 17 years of observed data. A sensitivity analysis was carried out to quantify the susceptibility of the aquifer system to river bank seepage, recharge from rainfall and agriculture practices, horizontal and vertical hydraulic conductivities, and specific yield. To quantify the impact of parameter uncertainties, the Sequential Uncertainty Fitting Algorithm (SUFI-2) and Markov chain Monte Carlo (MCMC) techniques were implemented. Results from the two techniques were compared and the advantages and disadvantages were analysed. The Nash-Sutcliffe coefficient (NSE), Coefficient of Determination (R²), Mean Absolute Error (MAE), Mean Percent Deviation (Dv) and Root Mean Squared Error (RMSE) were adopted as criteria of model evaluation during calibration and validation of the developed model. NSE, R², MAE, Dv and RMSE values for the groundwater flow model during calibration and validation were in the acceptable range. Also, the MCMC technique was able to provide more reasonable results than SUFI-2. The calibrated and validated model will be useful for identifying the aquifer properties, analysing the groundwater flow dynamics and forecasting changes in groundwater levels.
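The evaluation criteria named in the abstract (NSE, MAE, Dv, RMSE) have standard closed forms; a minimal sketch with made-up head observations, not values from the study:

```python
from math import sqrt

def nse(obs, sim):
    """Nash-Sutcliffe efficiency: 1 - SSE / variance-about-mean of observations."""
    m = sum(obs) / len(obs)
    sse = sum((o - s) ** 2 for o, s in zip(obs, sim))
    return 1.0 - sse / sum((o - m) ** 2 for o in obs)

def mae(obs, sim):
    """Mean absolute error."""
    return sum(abs(o - s) for o, s in zip(obs, sim)) / len(obs)

def rmse(obs, sim):
    """Root mean squared error."""
    return sqrt(sum((o - s) ** 2 for o, s in zip(obs, sim)) / len(obs))

def dv(obs, sim):
    """Mean percent deviation of the simulated total from the observed total."""
    return 100.0 * (sum(obs) - sum(sim)) / sum(obs)

# Illustrative groundwater heads (m), not values from the study.
obs = [10.0, 11.0, 12.0, 13.0]
sim = [10.2, 10.9, 12.3, 12.8]
print(round(nse(obs, sim), 3))   # → 0.964
print(round(mae(obs, sim), 3))   # → 0.2
print(round(rmse(obs, sim), 3))  # → 0.212
print(round(dv(obs, sim), 2))    # → -0.43
```

NSE near 1 means the model beats the observed mean as a predictor; Dv near 0 means the simulated volume balances the observed one.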
Analysis of Sediment Transport for Rivers in South Korea based on Data Mining technique
NASA Astrophysics Data System (ADS)
Jang, Eun-kyung; Ji, Un; Yeo, Woonkwang
2017-04-01
The purpose of this study is to assess sediment discharge for rivers in South Korea using data mining. The Model Tree was selected for this study because it is the most suitable data mining technique for explicitly analyzing the relationship between input and output variables in large and diverse databases. To derive the sediment discharge equation using the Model Tree, the dimensionless variables used in the Engelund and Hansen, Ackers and White, Brownlie, and van Rijn equations were adopted as the analytical conditions. In addition, a total of 14 analytical conditions were set, considering dimensional variables and combinations of dimensionless and dimensional variables, according to the relationship between the flow and the sediment transport. For each case, the results were evaluated by means of the discrepancy ratio, root mean square error, mean absolute percent error, and correlation coefficient. The results showed that the best fit was obtained by using five dimensional variables: velocity, depth, slope, width, and median diameter. The closest approximation to the best goodness-of-fit was estimated from the depth, slope, width, median grain size of the bed material, and the dimensionless tractive force, and, except for the slope, from single variables. In addition, when the three most appropriate Model Trees are compared with the Ackers and White equation, which is the best fit among the existing equations, the mean discrepancy ratio and the correlation coefficient of the Model Tree are improved relative to the Ackers and White equation.
Pixel-based absolute surface metrology by three flat test with shifted and rotated maps
NASA Astrophysics Data System (ADS)
Zhai, Dede; Chen, Shanyong; Xue, Shuai; Yin, Ziqiang
2018-03-01
The traditional three-flat test only provides the absolute profile along one surface diameter. In this paper, an absolute testing algorithm based on shift-rotation within the three-flat test is proposed to reconstruct the two-dimensional surface exactly. Pitch and yaw errors during the shift procedure are analyzed and compensated in our method. Compared with the previously proposed multi-rotation method, it needs only a 90° rotation and a shift, which is easy to carry out, especially for large surfaces. It allows pixel-level spatial resolution to be achieved without interpolation or assumptions about the test surface. In addition, numerical simulations and optical tests are implemented and show the high-accuracy recovery capability of the proposed method.
Piezocomposite Actuator Arrays for Correcting and Controlling Wavefront Error in Reflectors
NASA Technical Reports Server (NTRS)
Bradford, Samuel Case; Peterson, Lee D.; Ohara, Catherine M.; Shi, Fang; Agnes, Greg S.; Hoffman, Samuel M.; Wilkie, William Keats
2012-01-01
Three reflectors have been developed and tested to assess the performance of a distributed network of piezocomposite actuators for correcting thermal deformations and total wavefront error. The primary testbed article is an active composite reflector, composed of a spherically curved panel with a graphite face sheet and aluminum honeycomb core composite, augmented with a network of 90 distributed piezoelectric composite actuators. The piezoelectric actuator system may be used for correcting as-built residual shape errors, and for controlling low-order, thermally-induced quasi-static distortions of the panel. In this study, thermally-induced surface deformations of 1 to 5 microns were deliberately introduced onto the reflector, then measured using a speckle holography interferometer system. The reflector surface figure was subsequently corrected to a tolerance of 50 nm using the actuators embedded in the reflector's back face sheet. Two additional test articles were constructed: a borosilicate flat window of 150 mm diameter with 18 actuators bonded to the back surface, and a direct metal laser sintered reflector with spherical curvature, 230 mm diameter, and 12 actuators bonded to the back surface. In the case of the glass reflector, absolute measurements were performed with an interferometer and the absolute surface was corrected. These test articles were evaluated to determine their absolute surface control capabilities, as well as to assess a multiphysics modeling effort developed under this program for the prediction of active reflector response. This paper will describe the design, construction, and testing of active reflector systems under thermal loads, and the subsequent correction of surface shape via distributed piezoelectric actuation.
Haupenthal, Daniela Pacheco dos Santos; de Noronha, Marcos; Haupenthal, Alessandro; Ruschel, Caroline; Nunes, Guilherme S.
2015-01-01
Context Proprioception of the ankle is determined by the ability to perceive the sense of position of the ankle structures, as well as the speed and direction of movement. Few researchers have investigated proprioception by force-replication ability and particularly after skin cooling. Objective To analyze the ability of the ankle-dorsiflexor muscles to replicate isometric force after a period of skin cooling. Design Randomized controlled clinical trial. Setting Laboratory. Patients or Other Participants Twenty healthy individuals (10 men, 10 women; age = 26.8 ± 5.2 years, height = 171 ± 7 cm, mass = 66.8 ± 10.5 kg). Intervention(s) Skin cooling was carried out using 2 ice applications: (1) after maximal voluntary isometric contraction (MVIC) performance and before data collection for the first target force, maintained for 20 minutes; and (2) before data collection for the second target force, maintained for 10 minutes. We measured skin temperature before and after ice applications to ensure skin cooling. Main Outcome Measure(s) A load cell was placed under an inclined board for data collection, and 10 attempts of force replication were carried out for 2 values of MVIC (20%, 50%) in each condition (ice, no ice). We assessed force sense with absolute and root mean square errors (the difference between the force developed by the dorsiflexors and the target force measured with the raw data and after root mean square analysis, respectively) and variable error (the variance around the mean absolute error score). A repeated-measures multivariate analysis of variance was used for statistical analysis. Results The absolute error was greater for the ice than for the no-ice condition (F1,19 = 9.05, P = .007) and for the target force at 50% of MVIC than at 20% of MVIC (F1,19 = 26.01, P < .001). Conclusions The error was greater in the ice condition and at 50% of MVIC. 
Skin cooling reduced the proprioceptive ability of the ankle-dorsiflexor muscles to replicate isometric force. PMID:25761136
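The three error scores defined in the abstract (absolute error, root mean square error, and variable error as the variance around the mean absolute error) can be sketched directly; the trial forces below are invented, not study data:

```python
from math import sqrt
from statistics import mean, pvariance

def absolute_error(forces, target):
    """Mean absolute difference between the produced force and the target."""
    return mean(abs(f - target) for f in forces)

def rms_error(forces, target):
    """Root mean square of the raw force-minus-target differences."""
    return sqrt(mean((f - target) ** 2 for f in forces))

def variable_error(forces, target):
    """Variance of the trial-by-trial absolute errors around their mean."""
    return pvariance([abs(f - target) for f in forces])

# Hypothetical 10 force-replication attempts (N) at a 50 N target.
trials = [48.0, 52.0, 49.5, 51.0, 47.5, 50.5, 53.0, 49.0, 50.0, 51.5]
print(round(absolute_error(trials, 50.0), 2))  # → 1.4
print(round(rms_error(trials, 50.0), 3))       # → 1.673
print(round(variable_error(trials, 50.0), 2))  # → 0.84
```

Absolute error captures accuracy, variable error captures trial-to-trial consistency; the two can move independently.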
NASA Astrophysics Data System (ADS)
de Jong, G. Theodoor; Geerke, Daan P.; Diefenbach, Axel; Bickelhaupt, F. Matthias
2005-06-01
We have evaluated the performance of 24 popular density functionals for describing the potential energy surface (PES) of the archetypal oxidative addition reaction of the methane C-H bond to the palladium atom by comparing the results with our recent ab initio [CCSD(T)] benchmark study of this reaction. The density functionals examined cover the local density approximation (LDA), the generalized gradient approximation (GGA), meta-GGAs as well as hybrid density functional theory. Relativistic effects are accounted for through the zeroth-order regular approximation (ZORA). The basis-set dependence of the density-functional-theory (DFT) results is assessed for the Becke-Lee-Yang-Parr (BLYP) functional using a hierarchical series of Slater-type orbital (STO) basis sets ranging from unpolarized double-ζ (DZ) to quadruply polarized quadruple-ζ quality (QZ4P). Stationary points on the reaction surface have been optimized using various GGA functionals, all of which yield geometries that differ only marginally. Counterpoise-corrected relative energies of stationary points are converged to within a few tenths of a kcal/mol if one uses the doubly polarized triple-ζ (TZ2P) basis set and the basis-set superposition error (BSSE) drops to 0.0 kcal/mol for our largest basis set (QZ4P). Best overall agreement with the ab initio benchmark PES is achieved by functionals of the GGA, meta-GGA, and hybrid-DFT type, with mean absolute errors of 1.3-1.4 kcal/mol and errors in activation energies ranging from +0.8 to -1.4 kcal/mol. Interestingly, the well-known BLYP functional compares very reasonably with an only slightly larger mean absolute error of 2.5 kcal/mol and an underestimation by -1.9 kcal/mol of the overall barrier (i.e., the difference in energy between the TS and the separate reactants). For comparison, with B3LYP we arrive at a mean absolute error of 3.8 kcal/mol and an overestimation of the overall barrier by 4.5 kcal/mol.
NASA Technical Reports Server (NTRS)
Thome, Kurtis; McCorkel, Joel; Hair, Jason; McAndrew, Brendan; Daw, Adrian; Jennings, Donald; Rabin, Douglas
2012-01-01
The Climate Absolute Radiance and Refractivity Observatory (CLARREO) mission addresses the need to observe high-accuracy, long-term climate change trends and to use decadal change observations as the most critical method to determine the accuracy of climate change. One of the major objectives of CLARREO is to advance the accuracy of SI traceable absolute calibration at infrared and reflected solar wavelengths. This advance is required to reach the on-orbit absolute accuracy required to allow climate change observations to survive data gaps while remaining sufficiently accurate to observe climate change to within the uncertainty of the limit of natural variability. While these capabilities exist at NIST in the laboratory, there is a need to demonstrate that they can move successfully from NIST to NASA and/or instrument vendor capabilities for future spaceborne instruments. The current work describes the test plan for the Solar, Lunar for Absolute Reflectance Imaging Spectroradiometer (SOLARIS), which is the calibration demonstration system (CDS) for the reflected solar portion of CLARREO. The goal of the CDS is to allow the testing and evaluation of calibration approaches, alternate design and/or implementation approaches, and components for the CLARREO mission. SOLARIS also provides a test-bed for detector technologies, non-linearity determination and uncertainties, and application of future technology developments and suggested spacecraft instrument design modifications. The end result of efforts with the SOLARIS CDS will be an SI-traceable error budget for reflectance retrieval using solar irradiance as a reference and methods for laboratory-based, absolute calibration suitable for climate-quality data collections.
Decay properties of {sup 265}Sg(Z=106) and {sup 266}Sg(Z=106)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tuerler, A.; Dressler, R.; Eichler, B.
1998-04-01
The presently known most neutron-rich isotopes of element 106 (seaborgium, Sg), ²⁶⁵Sg and ²⁶⁶Sg, were produced in the fusion reaction ²²Ne + ²⁴⁸Cm at beam energies of 121 and 123 MeV. Using the On-Line Gas chemistry Apparatus OLGA, a continuous separation of Sg was achieved within a few seconds. Final products were assayed by α-particle and spontaneous fission (SF) spectrometry. ²⁶⁵Sg and ²⁶⁶Sg were identified by observing time-correlated α-α-(α) and α-SF decay chains. A total of 13 correlated decay chains of ²⁶⁵Sg (with an estimated number of 2.8 random correlations) and 3 decay chains of ²⁶⁶Sg (0.6 random correlations) were identified. Deduced decay properties were T1/2 = 7.4 (+3.3/−2.7) s (68% c.i.) and Eα = 8.69 MeV (8%), 8.76 MeV (23%), 8.84 MeV (46%), and 8.94 MeV (23%) for ²⁶⁵Sg; and T1/2 = 21 (+20/−12) s (68% c.i.) and Eα = 8.52 MeV (33%) and 8.77 MeV (66%) for ²⁶⁶Sg. The resolution of the detectors was between 50-100 keV (full width at half maximum). Upper limits for SF of ≤35% and ≤82% were established for ²⁶⁵Sg and ²⁶⁶Sg, respectively. The upper limits for SF are given with a 16% error probability. Using the lower error limits of the half-lives of ²⁶⁵Sg and ²⁶⁶Sg, the resulting lower limits for the partial SF half-lives are T1/2(SF)(²⁶⁵Sg) ≥ 13 s and T1/2(SF)(²⁶⁶Sg) ≥ 11 s. Correspondingly, the partial α-decay half-lives are between T1/2(α)(²⁶⁵Sg) = 4.7-16.5 s (68% c.i.) and T1/2(α)(²⁶⁶Sg) = 9-228 s (68% c.i.), using the upper and lower error limits of the half-lives of ²⁶⁵Sg and ²⁶⁶Sg. 
The lower limit on the partial SF half-life of ²⁶⁶Sg is in good agreement with theoretical predictions. Production cross sections of about 240 pb and 25 pb for the α-decay branch in ²⁶⁵Sg and ²⁶⁶Sg were estimated, respectively. © 1998 The American Physical Society
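The quoted lower limits on the partial SF half-lives follow from simple branching arithmetic: a decay branch with branching fraction b has partial half-life T1/2 / b, so an upper limit on the SF branch and a lower limit on the total half-life give a lower limit on the partial SF half-life. A quick check of the abstract's numbers:

```python
def partial_half_life(total_half_life, branching_fraction):
    """Partial half-life of one decay branch: T1/2(branch) = T1/2(total) / b."""
    return total_half_life / branching_fraction

# 265Sg: lower error limit of T1/2 is 7.4 - 2.7 = 4.7 s; SF branch <= 35%.
print(round(partial_half_life(4.7, 0.35)))  # → 13 s, as quoted
# 266Sg: lower error limit of T1/2 is 21 - 12 = 9 s; SF branch <= 82%.
print(round(partial_half_life(9.0, 0.82)))  # → 11 s, as quoted
```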
Litvinenko, G I; Shurlygina, A V; Gritsyk, O B; Mel'nikova, E V; Tenditnik, M V; Avrorov, P A; Trufakin, V A
2015-10-01
We studied the response of the pineal gland and organs of the immune system to melatonin treatment in Wistar rats kept under an abnormal illumination regimen. The animals were kept under a natural light regimen or continuous illumination for 14 days and then received daily injections of melatonin (once a day in the evening) for 7 days. Administration of melatonin to rats kept at the natural light cycle was followed by a decrease in the percent ratio of CD4+8+ splenocytes and CD4-8+ thymocytes. Continuous (24-h) light followed by melatonin injections was accompanied by an increase in the percent rate and absolute number of CD4+8+ cells in the spleen, and a decrease in the percent rate of CD11b/c and CD4-8+ splenocytes. In the thymus, the number of CD4-8+ cells increased, and the absolute number of CD4+25+ cells was reduced. Melatonin significantly decreased lipofuscin concentration in the pineal gland during continuous light. The direction and intensity of the effects of melatonin on parameters of cell immunity and the state of the pineal gland differed under normal and continuous light conditions. This should be taken into account when using this hormone for correction of immune and endocrine impairments developing during changes in the light/dark rhythm.
Measurement of flows for two irrigation districts in the lower Colorado River basin, Texas
Coplin, L.S.; Liscum, Fred; East, J.W.; Goldstein, L.B.
1996-01-01
The Lower Colorado River Authority sells and distributes water for irrigation of rice farms in two irrigation districts, the Lakeside district and the Gulf Coast district, in the lower Colorado River Basin of Texas. In 1993, the Lower Colorado River Authority implemented a water-measurement program to account for the water delivered to rice farms and to promote water conservation. During the rice-irrigation season (summer and fall) of 1995, the U.S. Geological Survey measured flows at 30 sites in the Lakeside district and 24 sites in the Gulf Coast district coincident with Lower Colorado River Authority measuring sites. In each district, the Survey made essentially simultaneous flow measurements with different types of meters twice a day (once in the morning and once in the afternoon) at each site on selected days for comparison with Lower Colorado River Authority measurements. One hundred pairs of corresponding (same site, same date) Lower Colorado River Authority and U.S. Geological Survey measurements from the Lakeside district and 104 measurement pairs from the Gulf Coast district are compared statistically and graphically. For comparison, the measurement pairs are grouped by irrigation district and further subdivided by the time difference between corresponding measurements: less than or equal to 1 hour, or more than 1 hour. Wilcoxon signed-rank tests (to indicate whether two groups of paired observations are statistically different) on Lakeside district measurement pairs with 1 hour or less between measurements indicate that the Lower Colorado River Authority and U.S. Geological Survey measurements are not statistically different. The median absolute percent difference between the flow measurements is 5.9 percent; and 33 percent of the flow measurements differ by more than 10 percent. Similar statistical tests on Gulf Coast district measurement pairs with 1 hour or less between measurements indicate that the Lower Colorado River Authority and U.S. 
Geological Survey measurements are not statistically different. The median absolute percent difference between the flow measurements is 2.6 percent; and 30 percent of the flow measurements differ by more than 10 percent. The differences noted above between Lower Colorado River Authority and U.S. Geological Survey measurements with 1 hour or less between measurements and the differences between essentially simultaneous U.S. Geological Survey measurements are of similar orders of magnitude and, in some cases, very close.
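The summary statistic used above, the median absolute percent difference between paired measurements, can be sketched as follows. The percent-difference base (here the pair mean) is an assumption for illustration, and the discharge values are invented:

```python
from statistics import median

def abs_pct_diff(a, b):
    """Absolute percent difference, taken relative to the mean of the pair
    (an assumed convention; the study may normalize differently)."""
    return 100.0 * abs(a - b) / ((a + b) / 2.0)

# Hypothetical paired discharge measurements, agency A vs agency B.
pairs = [(100, 104), (250, 243), (80, 90), (60, 61), (150, 159)]
diffs = [abs_pct_diff(a, b) for a, b in pairs]
print(round(median(diffs), 1))  # → 3.9
share_over_10 = 100.0 * sum(d > 10 for d in diffs) / len(diffs)
print(share_over_10)            # → 20.0
```

The median is preferred over the mean here because a single badly mismatched pair would otherwise dominate the summary.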
The Absolute Magnitude of the Sun in Several Filters
NASA Astrophysics Data System (ADS)
Willmer, Christopher N. A.
2018-06-01
This paper presents a table with estimates of the absolute magnitude of the Sun and the conversions from vegamag to the AB and ST systems for several wide-band filters used in ground-based and space-based observatories. These estimates use the dustless spectral energy distribution (SED) of Vega, calibrated absolutely using the SED of Sirius, to set the vegamag zero-points and a composite spectrum of the Sun that coadds space-based observations from the ultraviolet to the near-infrared with models of the Solar atmosphere. The uncertainty of the absolute magnitudes is estimated by comparing the synthetic colors with photometric measurements of solar analogs and is found to be ∼0.02 mag. Combined with the uncertainty of ∼2% in the calibration of the Vega SED, the errors of these absolute magnitudes are ∼3%-4%. Using these SEDs, for three of the most utilized filters in extragalactic work the estimated absolute magnitudes of the Sun are M_B = 5.44, M_V = 4.81, and M_K = 3.27 mag in the vegamag system and M_B = 5.31, M_V = 4.80, and M_K = 5.08 mag in AB.
Health plan auditing: 100-percent-of-claims vs. random-sample audits.
Sillup, George P; Klimberg, Ronald K
2011-01-01
The objective of this study was to examine the relative efficacy of two different methodologies for auditing self-funded medical claim expenses: 100-percent-of-claims auditing versus random-sampling auditing. Multiple data sets of claim errors or 'exceptions' from two Fortune-100 corporations were analysed and compared to 100 simulated audits of 300- and 400-claim random samples. Random-sample simulations failed to identify a significant number and amount of the errors that ranged from $200,000 to $750,000. These results suggest that health plan expenses of corporations could be significantly reduced if they audited 100% of claims and embraced a zero-defect approach.
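The gap between the two auditing approaches can be illustrated with a small Monte Carlo sketch: only claims actually examined can have their errors identified and recovered, so a random sample finds only a fraction of the error dollars that a 100%-of-claims audit finds. The claim-error population below is invented:

```python
import random

def audit_comparison(claim_errors, sample_size, trials=1000, seed=1):
    """Error dollars identified by a full audit vs. the average identified
    by repeated random-sample audits of `sample_size` claims."""
    full_total = sum(claim_errors)
    rng = random.Random(seed)
    sampled_totals = [sum(rng.sample(claim_errors, sample_size))
                      for _ in range(trials)]
    return full_total, sum(sampled_totals) / trials

# Invented population: 10,000 claims, most error-free, a few large errors.
claim_errors = [0.0] * 9900 + [500.0] * 90 + [5000.0] * 10
full, sampled = audit_comparison(claim_errors, 300)
print(full)  # → 95000.0
```

With a 300-claim sample only 3% of the population is examined, so on average only about 3% of the error dollars are actually identified, which is the study's argument for auditing 100% of claims rather than extrapolating from samples.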
Analytical skin friction and heat transfer formula for compressible internal flows
NASA Technical Reports Server (NTRS)
Dechant, Lawrence J.; Tattar, Marc J.
1994-01-01
An analytic, closed-form friction formula for turbulent, internal, compressible, fully developed flow was derived by extending the incompressible law-of-the-wall relation to compressible cases. The model is capable of analyzing heat transfer as a function of constant surface temperatures and surface roughness as well as analyzing adiabatic conditions. The formula reduces to Prandtl's law of friction for adiabatic, smooth, axisymmetric flow. In addition, the formula reduces to the Colebrook equation for incompressible, adiabatic, axisymmetric flow with various roughnesses. Comparisons with available experiments show that the model averages roughly 12.5 percent error for adiabatic flow and 18.5 percent error for flow involving heat transfer.
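The incompressible, adiabatic limit mentioned above is the Colebrook equation, which is implicit in the friction factor and is usually solved iteratively. A sketch of that standard relation solved by fixed-point iteration (not the paper's compressible formula):

```python
import math

def colebrook_friction_factor(reynolds, rel_roughness, tol=1e-10):
    """Darcy friction factor f from the Colebrook equation,
        1/sqrt(f) = -2*log10(eps/(3.7*D) + 2.51/(Re*sqrt(f))),
    solved by fixed-point iteration on x = 1/sqrt(f)."""
    x = 1.0 / math.sqrt(0.02)   # initial guess: f = 0.02
    while True:
        x_new = -2.0 * math.log10(rel_roughness / 3.7 + 2.51 * x / reynolds)
        if abs(x_new - x) < tol:
            return 1.0 / x_new ** 2
        x = x_new

# e.g. Re = 1e5 with relative roughness eps/D = 1e-4 gives f near 0.0185
f = colebrook_friction_factor(reynolds=1e5, rel_roughness=1e-4)
```

Setting the roughness to zero recovers the smooth-pipe (Prandtl-type) limit the abstract refers to.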
Cost-effectiveness of the stream-gaging program in New Jersey
Schopp, R.D.; Ulery, R.L.
1984-01-01
The results of a study of the cost-effectiveness of the stream-gaging program in New Jersey are documented. This study is part of a 5-year nationwide analysis undertaken by the U.S. Geological Survey to define and document the most cost-effective means of furnishing streamflow information. This report identifies the principal uses of the data and relates those uses to funding sources; applies, at selected stations, alternative less costly methods (that is, flow routing and regression analysis) for furnishing the data; and defines a strategy for operating the program that minimizes uncertainty in the streamflow data for specific operating budgets. Uncertainty in streamflow data is primarily a function of the percentage of missing record and the frequency of discharge measurements. In this report, 101 continuous stream gages and 73 crest-stage or stage-only gages are analyzed. A minimum budget of $548,000 is required to operate the present stream-gaging program in New Jersey with an average standard error of 27.6 percent. The maximum budget analyzed was $650,000, which resulted in an average standard error of 17.8 percent. The 1983 budget of $569,000 resulted in a standard error of 24.9 percent under present operating policy. (USGS)
TOMS total ozone data compared with northern latitude Dobson ground stations
NASA Technical Reports Server (NTRS)
Heese, B.; Barthel, K.; Hov, O.
1994-01-01
Ozone measurements from the Total Ozone Mapping Spectrometer on the Nimbus 7 satellite are compared with ground-based measurements from five Dobson stations at northern latitudes to evaluate the accuracy of the TOMS data, particularly in regions north of 50 deg N. The measurements from the individual stations show mean differences from -2.5 percent up to plus 8.3 percent relative to TOMS measurements, and two of the ground stations, Oslo and Longyearbyen, show a significant drift of plus 1.2 percent and plus 3.7 percent per year, respectively. It can be shown from nearly simultaneous measurements in two different wavelength double pairs at Oslo that at least 2 percent of the differences result from the use of the CC' wavelength double pair instead of the standard AD wavelength double pair. Since all Norwegian stations used the CC' wavelength double pair exclusively, a similar error can be assumed for Tromso and Longyearbyen. A comparison between the tropospheric ozone content in TOMS data and from ECC ozonesonde measurements at Ny-Alesund and Bear Island shows that the amount of tropospheric ozone in the standard profiles used in the TOMS algorithm is too low, which leads to an error of about 2 percent in total ozone. Particularly at high solar zenith angles (greater than 80 deg), Dobson measurements become unreliable. They are up to 20 percent lower than TOMS measurements averaged over solar zenith angles of 88 deg to 89 deg.
NASA Technical Reports Server (NTRS)
Steffen, K.; Schweiger, A. J.
1990-01-01
The validation of sea ice products derived from the Special Sensor Microwave Imager (SSM/I) on board a DMSP platform is examined using data from the Landsat MSS and NOAA-AVHRR sensors. Image processing techniques for retrieving ice concentrations from each type of imagery are developed and results are intercompared to determine the ice parameter retrieval accuracy of the SSM/I NASA-Team algorithm. For case studies in the Beaufort Sea and East Greenland Sea, average retrieval errors of the SSM/I algorithm are between 1.7 percent for spring conditions and 4.3 percent during freeze up in comparison with Landsat derived ice concentrations. For a case study in the East Greenland Sea, SSM/I derived ice concentration in comparison with AVHRR imagery display a mean error of 9.6 percent.
Evaluating the accuracy and large inaccuracy of two continuous glucose monitoring systems.
Leelarathna, Lalantha; Nodale, Marianna; Allen, Janet M; Elleri, Daniela; Kumareswaran, Kavita; Haidar, Ahmad; Caldwell, Karen; Wilinska, Malgorzata E; Acerini, Carlo L; Evans, Mark L; Murphy, Helen R; Dunger, David B; Hovorka, Roman
2013-02-01
This study evaluated the accuracy and large inaccuracy of the Freestyle Navigator (FSN) (Abbott Diabetes Care, Alameda, CA) and Dexcom SEVEN PLUS (DSP) (Dexcom, Inc., San Diego, CA) continuous glucose monitoring (CGM) systems during closed-loop studies. Paired CGM and plasma glucose values (7,182 data pairs) were collected, every 15-60 min, from 32 adults (36.2±9.3 years) and 20 adolescents (15.3±1.5 years) with type 1 diabetes who participated in closed-loop studies. Levels 1, 2, and 3 of large sensor error, in increasing severity, were defined by an absolute relative deviation of at least 40%, 50%, and 60% at a reference glucose level of ≥6 mmol/L, or an absolute deviation of at least 2.4 mmol/L, 3.0 mmol/L, and 3.6 mmol/L at a reference glucose level of <6 mmol/L. Median absolute relative deviation was 9.9% for FSN and 12.6% for DSP. Proportions of data points in Zones A and B of Clarke error grid analysis were similar (96.4% for FSN vs. 97.8% for DSP). Large sensor over-reading, which increases risk of insulin over-delivery and hypoglycemia, occurred two- to threefold more frequently with DSP than FSN (once every 2.5, 4.6, and 10.7 days of FSN use vs. 1.2, 2.0, and 3.7 days of DSP use for Level 1-3 errors, respectively). At Levels 2 and 3, large sensor errors lasting 1 h or longer were absent with FSN but persisted with DSP. FSN and DSP differ substantially in the frequency and duration of large inaccuracy despite only modest differences in conventional measures of numerical and clinical accuracy. Further evaluations are required to confirm that FSN is more suitable for integration into closed-loop delivery systems.
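The study's Level 1-3 definition of large sensor error is a simple threshold rule; a sketch, assuming the thresholds exactly as quoted above:

```python
def large_error_level(cgm, ref):
    """Classify one CGM reading against reference glucose (both in mmol/L)
    using the study's thresholds: absolute relative deviation >= 40/50/60%
    when reference >= 6 mmol/L, or absolute deviation >= 2.4/3.0/3.6 mmol/L
    when reference < 6 mmol/L. Returns 0 when no threshold is met."""
    if ref >= 6.0:
        deviation = abs(cgm - ref) / ref * 100.0   # percent
        thresholds = (40.0, 50.0, 60.0)
    else:
        deviation = abs(cgm - ref)                 # mmol/L
        thresholds = (2.4, 3.0, 3.6)
    level = 0
    for i, t in enumerate(thresholds, start=1):
        if deviation >= t:
            level = i
    return level
```

For example, a sensor reading of 15 mmol/L against a reference of 10 mmol/L (50% deviation) is a Level 2 error under this rule.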
Effect of proprioception training on knee joint position sense in female team handball players.
Pánics, G; Tállay, A; Pavlik, A; Berkes, I
2008-06-01
A number of studies have shown that proprioception training can reduce the risk of injuries in pivoting sports, but the mechanism is not clearly understood. To determine the contributing effects of proprioception on knee joint position sense among team handball players. Prospective cohort study. Two professional female handball teams were followed prospectively for the 2005-6 season. 20 players in the intervention team followed a prescribed proprioceptive training programme while 19 players in the control team did not have a specific proprioceptive training programme. The coaches recorded all exposures of the individual players. The location and nature of injuries were recorded. Joint position sense (JPS) was measured by a goniometer on both knees in three angle intervals, testing each angle five times. Assessments were performed before and after the season by the same examiner for both teams. In the intervention team a third assessment was also performed during the season. Complete data were obtained for 15 subjects in the intervention team and 16 in the control team. Absolute error score, error of variation score and SEM were calculated and the results of the intervention and control teams were compared. The proprioception sensory function of the players in the intervention team was significantly improved between the assessments made at the start and the end of the season (mean (SD) absolute error 9.78-8.21 degrees (7.19-6.08 degrees) vs 3.61-4.04 degrees (3.71-3.20 degrees), p<0.05). No improvement was seen in the sensory function in the control team between the start and the end of the season (mean (SD) absolute error 6.31-6.22 degrees (6.12-3.59 degrees) vs 6.13-6.69 degrees (7.46-6.49 degrees), p>0.05). This is the first study to show that proprioception training improves the joint position sense in elite female handball players. This may explain the effect of neuromuscular training in reducing the injury rate.
Calculating tumor trajectory and dose-of-the-day using cone-beam CT projections
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jones, Bernard L., E-mail: bernard.jones@ucdenver.edu; Westerly, David; Miften, Moyed
2015-02-15
Purpose: Cone-beam CT (CBCT) projection images provide anatomical data in real-time over several respiratory cycles, forming a comprehensive picture of tumor movement. The authors developed and validated a method which uses these projections to determine the trajectory of and dose to highly mobile tumors during each fraction of treatment. Methods: CBCT images of a respiration phantom were acquired, the trajectory of which mimicked a lung tumor with high amplitude (up to 2.5 cm) and hysteresis. A template-matching algorithm was used to identify the location of a steel BB in each CBCT projection, and a Gaussian probability density function for the absolute BB position was calculated which best fit the observed trajectory of the BB in the imager geometry. Two modifications of the trajectory reconstruction were investigated: first, using respiratory phase information to refine the trajectory estimation (Phase), and second, using the Monte Carlo (MC) method to sample the estimated Gaussian tumor position distribution. The accuracies of the proposed methods were evaluated by comparing the known and calculated BB trajectories in phantom-simulated clinical scenarios using abdominal tumor volumes. Results: With all methods, the mean position of the BB was determined with accuracy better than 0.1 mm, and root-mean-square trajectory errors averaged 3.8% ± 1.1% of the marker amplitude. Dosimetric calculations using Phase methods were more accurate, with mean absolute error less than 0.5%, and with error less than 1% in the highest-noise trajectory. MC-based trajectories prevent the overestimation of dose, but when viewed in an absolute sense, add a small amount of dosimetric error (<0.1%). Conclusions: Marker trajectory and target dose-of-the-day were accurately calculated using CBCT projections. This technique provides a method to evaluate highly mobile tumors using ordinary CBCT data, and could facilitate better strategies to mitigate or compensate for motion during stereotactic body radiotherapy.
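The template-matching step can be illustrated in one dimension with normalized cross-correlation; a toy sketch on a synthetic projection line, not the authors' 2D implementation:

```python
import numpy as np

def match_template_1d(signal, template):
    """Locate a marker profile in a projection line by normalized
    cross-correlation, in the spirit of the template matching applied to
    CBCT projections above. Returns the best-match start index."""
    n = len(template)
    t = (template - template.mean()) / template.std()
    best_idx, best_score = 0, -np.inf
    for i in range(len(signal) - n + 1):
        w = signal[i:i + n]
        if w.std() == 0:        # flat window: correlation undefined, skip
            continue
        score = float(np.dot((w - w.mean()) / w.std(), t)) / n
        if score > best_score:
            best_idx, best_score = i, score
    return best_idx

# synthetic projection line with a BB-like attenuation dip at index 40
line = np.ones(100)
line[40:45] -= np.array([0.2, 0.6, 1.0, 0.6, 0.2])
bb_template = 1.0 - np.array([0.2, 0.6, 1.0, 0.6, 0.2])
pos = match_template_1d(line, bb_template)
```

Repeating this per projection yields the per-angle marker positions that the Gaussian trajectory model is then fitted to.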
Investigation of advanced phase-shifting projected fringe profilometry techniques
NASA Astrophysics Data System (ADS)
Liu, Hongyu
1999-11-01
The phase-shifting projected fringe profilometry (PSPFP) technique is a powerful tool in the profile measurements of rough engineering surfaces. Compared with other competing techniques, this technique is notable for its full-field measurement capacity, system simplicity, high measurement speed, and low environmental vulnerability. The main purpose of this dissertation is to tackle three important problems, which severely limit the capability and the accuracy of the PSPFP technique, with some new approaches. Chapter 1 briefly introduces background information on the PSPFP technique, including the measurement principles, basic features, and related techniques. The objectives and organization of the thesis are also outlined. Chapter 2 gives a theoretical treatment of the absolute PSPFP measurement. The mathematical formulations and basic requirements of the absolute PSPFP measurement and its supporting techniques are discussed in detail. Chapter 3 introduces the experimental verification of the proposed absolute PSPFP technique. Some design details of a prototype system are discussed as supplements to the previous theoretical analysis. Various fundamental experiments performed for concept verification and accuracy evaluation are introduced together with some brief comments. Chapter 4 presents the theoretical study of speckle-induced phase measurement errors. In this analysis, the expression for speckle-induced phase errors is first derived based on the multiplicative noise model of image-plane speckles. The statistics and the system dependence of speckle-induced phase errors are then thoroughly studied through numerical simulations and analytical derivations. Based on the analysis, some suggestions on the system design are given to improve measurement accuracy. Chapter 5 discusses a new technique combating surface reflectivity variations. The formula used for error compensation is first derived based on a simplified model of the detection process.
The techniques coping with two major effects of surface reflectivity variations are then introduced. Some fundamental problems in the proposed technique are studied through simulations. Chapter 6 briefly summarizes the major contributions of the current work and provides some suggestions for the future research.
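Phase-shifting systems of the kind studied here typically recover the wrapped phase from π/2-shifted fringe images with the standard four-step formula; a generic sketch (the textbook relation, not the dissertation's specific prototype):

```python
import math

def four_step_phase(i1, i2, i3, i4):
    """Wrapped phase from four pi/2-shifted fringe intensities
    I_k = A + B*cos(phi + (k-1)*pi/2). With these shifts,
    I4 - I2 = 2B*sin(phi) and I1 - I3 = 2B*cos(phi), so atan2
    recovers phi independently of background A and modulation B."""
    return math.atan2(i4 - i2, i1 - i3)

# synthetic check: background A = 2, modulation B = 1, phase 0.7 rad
A, B, phi = 2.0, 1.0, 0.7
frames = [A + B * math.cos(phi + k * math.pi / 2) for k in range(4)]
recovered = four_step_phase(*frames)
```

The recovered phase is wrapped to (−π, π]; absolute measurement, the topic of Chapters 2-3, additionally requires resolving the 2π ambiguity.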
Derated Application of Parts for ESD (Electronic Systems Division) System Development. Revision
1985-03-01
[Garbled OCR; only fragments are recoverable.] This document establishes part stress-derating requirements for circuit design. A derating-table fragment lists breakdown-voltage derating of 60%, 70%, and 70% of rated value (N channel), maximum junction temperatures Tj of 95, 105, and 125 degrees C, and a column headed "maximum allowable absolute value or percent of rated value" by part type.
FLIR Common Module Design Manual. Revision 1
1978-03-01
degrade off-axis. The afocal assembly is very critical to system performance and normally constitutes a significant portion of the system... not significantly degrade the performance at 10 lp/mm because chromatic errors are about 1/2 of the diffraction error. The chromatic errors are... degradation, though only 3 percent, is unavoidable. It is caused by field curvature in the galilean afocal assembly. This field curvature is
NASA Technical Reports Server (NTRS)
Carpenter, K. G.; Wing, R. F.; Stencel, R. E.
1985-01-01
The ultraviolet spectrum of Arcturus has been observed at high resolution with the IUE satellite. Line identifications, mean absolute 'continuum' flux measurements, integrated absolute emission-line fluxes, and measurements of selected absorption line strengths are presented for the 2250-2930 A region. In the 1150-2000 A region, identifications are given primarily on the basis of low-resolution spectra. Chromospheric emission lines have been identified with low-excitation species including H I, C I, C II, O I, Mg I, Mg II, Al II, Si I, Si II, S I, and Fe II; there is no evidence for lines of C IV, N V, or other species requiring high temperatures. A search for molecular absorption features in the 2500-2930 A interval has led to several tentative identifications, but only OH could be established as definitely present. Iron lines strongly dominate the identifications in the 2250-2930 A region, Fe II accounting for about 86 percent of the emission features and Fe I for 43 percent of the identified absorption features.
A radio telescope for the calibration of radio sources at 32 gigahertz
NASA Technical Reports Server (NTRS)
Gatti, M. S.; Stewart, S. R.; Bowen, J. G.; Paulsen, E. B.
1994-01-01
A 1.5-m-diameter radio telescope has been designed, developed, and assembled to directly measure the flux density of radio sources in the 32-GHz (Ka-band) frequency band. The main goal of the design and development was to provide a system that could yield the greatest absolute accuracy yet possible with such a system. The accuracy of the measurements has a heritage that is traceable to the National Institute of Standards and Technology. At the present time, the absolute accuracy of flux density measurements provided by this telescope system, during Venus observations at nearly closest approach to Earth, is plus or minus 5 percent, with an associated precision of plus or minus 2 percent. Combining a cooled high-electron-mobility transistor low-noise amplifier, a twin-beam Dicke-switching antenna, and an accurate positioning system resulted in a state-of-the-art system at 32 GHz. This article describes the design and performance of the system as it was delivered to the Owens Valley Radio Observatory to support direct calibrations of the strongest radio sources at Ka-band.
In search of the `impenetrable' volume of a molecule in a noncovalent complex
NASA Astrophysics Data System (ADS)
Murray, Jane S.; Politzer, Peter
2018-03-01
We propose to characterise the "impenetrable" volumes of molecules A and B in a complex A--B by finding that contour of its electronic density that separates the molecular surfaces of A and B but leaves them almost touching. The volume of the complex within that contour is always less than within the 0.001 au contour. The percent difference measures the interpenetration of the two molecules at equilibrium, and is found to correlate directly with the binding energy of the complex. We interpret the volume of each molecule that is enclosed by the almost-touching contour as that molecule's impenetrable volume relative to its particular partner. The percentages by which the molecules' relative impenetrable volumes differ from their 0.001 au volumes in the free states also correlate with the strengths of the interactions. This allows the "absolute" impenetrable volume of any molecule to be estimated as ∼25% of its 0.001 au volume in the free state. However, this absolute impenetrable volume is only approached by the molecule in a relatively strong interaction.
Algebra Students' Difficulty with Fractions: An Error Analysis
ERIC Educational Resources Information Center
Brown, George; Quinn, Robert J.
2006-01-01
An analysis of the 1990 National Assessment of Educational Progress (NAEP) found that only 46 percent of all high school seniors demonstrated success with a grasp of decimals, percentages, fractions and simple algebra. This article investigates error patterns that emerge as students attempt to answer questions involving the ability to apply…
Cross sections for H(-) and Cl(-) production from HCl by dissociative electron attachment
NASA Technical Reports Server (NTRS)
Orient, O. J.; Srivastava, S. K.
1985-01-01
A crossed target beam-electron beam collision geometry and a quadrupole mass spectrometer have been used to conduct dissociative electron attachment cross section measurements for the case of H(-) and Cl(-) production from HCl. The relative flow technique is used to determine the absolute values of cross sections. A tabulation is given of the attachment energies corresponding to various cross section maxima. Error sources contributing to total errors are also estimated.
Gingerich, Stephen B.
2005-01-01
Flow-duration statistics under natural (undiverted) and diverted flow conditions were estimated for gaged and ungaged sites on 21 streams in northeast Maui, Hawaii. The estimates were made using the optimal combination of continuous-record gaging-station data, low-flow measurements, and values determined from regression equations developed as part of this study. Estimated 50- and 95-percent flow duration statistics for streams are presented and the analyses done to develop and evaluate the methods used in estimating the statistics are described. Estimated streamflow statistics are presented for sites where various amounts of streamflow data are available as well as for locations where no data are available. Daily mean flows were used to determine flow-duration statistics for continuous-record stream-gaging stations in the study area following U.S. Geological Survey established standard methods. Duration discharges of 50- and 95-percent were determined from total flow and base flow for each continuous-record station. The index-station method was used to adjust all of the streamflow records to a common, long-term period. The gaging station on West Wailuaiki Stream (16518000) was chosen as the index station because of its record length (1914-2003) and favorable geographic location. Adjustments based on the index-station method resulted in decreases to the 50-percent duration total flow, 50-percent duration base flow, 95-percent duration total flow, and 95-percent duration base flow computed on the basis of short-term records that averaged 7, 3, 4, and 1 percent, respectively. For the drainage basin of each continuous-record gaged site and selected ungaged sites, morphometric, geologic, soil, and rainfall characteristics were quantified using Geographic Information System techniques. Regression equations relating the non-diverted streamflow statistics to basin characteristics of the gaged basins were developed using ordinary-least-squares regression analyses. 
Rainfall rate, maximum basin elevation, and the elongation ratio of the basin were the basin characteristics used in the final regression equations for 50-percent duration total flow and base flow. Rainfall rate and maximum basin elevation were used in the final regression equations for the 95-percent duration total flow and base flow. The relative errors between observed and estimated flows ranged from 10 to 20 percent for the 50-percent duration total flow and base flow, and from 29 to 56 percent for the 95-percent duration total flow and base flow. The regression equations developed for this study were used to determine the 50-percent duration total flow, 50-percent duration base flow, 95-percent duration total flow, and 95-percent duration base flow at selected ungaged diverted and undiverted sites. Estimated streamflow, prediction intervals, and standard errors were determined for 48 ungaged sites in the study area and for three gaged sites west of the study area. Relative errors were determined for sites where measured values of 95-percent duration discharge of total flow were available. East of Keanae Valley, the 95-percent duration discharge equation generally underestimated flow, and within and west of Keanae Valley, the equation generally overestimated flow. Reduction in 50- and 95-percent flow-duration values in stream reaches affected by diversions throughout the study area average 58 to 60 percent.
Jeyasingh, Suganthi; Veluchamy, Malathi
2017-05-01
Early diagnosis of breast cancer is essential to save lives of patients. Usually, medical datasets include a large variety of data that can lead to confusion during diagnosis. The Knowledge Discovery on Database (KDD) process helps to improve efficiency. It requires elimination of inappropriate and repeated data from the dataset before final diagnosis. This can be done using any of the feature selection algorithms available in data mining. Feature selection is considered a vital step to increase the classification accuracy. This paper proposes a Modified Bat Algorithm (MBA) for feature selection to eliminate irrelevant features from an original dataset. The Bat algorithm was modified using simple random sampling to select random instances from the dataset. Ranking against the global best features was then used to recognize the predominant features available in the dataset. The selected features are used to train a Random Forest (RF) classification algorithm. The MBA feature selection algorithm enhanced the classification accuracy of RF in identifying the occurrence of breast cancer. The Wisconsin Diagnosis Breast Cancer Dataset (WDBC) was used for estimating the performance analysis of the proposed MBA feature selection algorithm. The proposed algorithm achieved better performance in terms of Kappa statistic, Matthews Correlation Coefficient, Precision, F-measure, Recall, Mean Absolute Error (MAE), Root Mean Square Error (RMSE), Relative Absolute Error (RAE) and Root Relative Squared Error (RRSE).
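The error measures listed above are standard; a sketch of MAE, RMSE, RAE and RRSE, assuming the usual Weka-style definitions in which the relative errors are taken against a mean-of-actuals baseline:

```python
import math

def regression_metrics(actual, predicted):
    """MAE, RMSE, RAE and RRSE for paired actual/predicted values.
    RAE and RRSE normalize the absolute and squared errors by those of
    the trivial predictor that always outputs the mean of the actuals."""
    n = len(actual)
    mean_a = sum(actual) / n
    abs_err = [abs(a - p) for a, p in zip(actual, predicted)]
    sq_err = [(a - p) ** 2 for a, p in zip(actual, predicted)]
    mae = sum(abs_err) / n
    rmse = math.sqrt(sum(sq_err) / n)
    rae = sum(abs_err) / sum(abs(a - mean_a) for a in actual)
    rrse = math.sqrt(sum(sq_err) / sum((a - mean_a) ** 2 for a in actual))
    return mae, rmse, rae, rrse

mae, rmse, rae, rrse = regression_metrics([1.0, 2.0, 3.0], [1.0, 2.0, 4.0])
```

RAE and RRSE below 1.0 indicate the model beats the mean-of-actuals baseline.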
Rousset, Sylvie; Fardet, Anthony; Lacomme, Philippe; Normand, Sylvie; Montaurier, Christophe; Boirie, Yves; Morio, Béatrice
2015-01-01
The objective of this study was to evaluate the validity of total energy expenditure (TEE) provided by Actiheart and Armband. Normal-weight adult volunteers wore both devices either for 17 hours in a calorimetric chamber (CC, n = 49) or for 10 days in free-living conditions (FLC) outside the laboratory (n = 41). The two devices and indirect calorimetry or doubly labelled water, respectively, were used to estimate TEE in the CC group and FLC group. In the CC, the relative value of TEE error was not significant (p > 0.05) for Actiheart but significantly different from zero for Armband, showing TEE underestimation (-4.9%, p < 0.0001). However, the mean absolute values of errors were significantly different between Actiheart and Armband: 8.6% and 6.7%, respectively (p = 0.05). Armband was more accurate for estimating TEE during sleeping, rest, recovery periods and sitting-standing. Actiheart provided better estimation during step and walking. In FLC, no significant error in relative value was detected. Nevertheless, Armband produced smaller errors in absolute value than Actiheart (8.6% vs. 12.8%). The distributions of differences were more scattered around the means, suggesting a higher inter-individual variability in TEE estimated by Actiheart than by Armband. Our results show that both monitors are appropriate for estimating TEE. Armband is more effective than Actiheart at the individual level for daily light-intensity activities.
NASA Astrophysics Data System (ADS)
Möhler, Christian; Russ, Tom; Wohlfahrt, Patrick; Elter, Alina; Runz, Armin; Richter, Christian; Greilich, Steffen
2018-01-01
An experimental setup for consecutive measurement of ion and x-ray absorption in tissue or other materials is introduced. With this setup using a 3D-printed sample container, the reference stopping-power ratio (SPR) of materials can be measured with an uncertainty of below 0.1%. A total of 65 porcine and bovine tissue samples were prepared for measurement, comprising five samples each of 13 tissue types representing about 80% of the total body mass (three different muscle and fatty tissues, liver, kidney, brain, heart, blood, lung and bone). Using a standard stoichiometric calibration for single-energy CT (SECT) as well as a state-of-the-art dual-energy CT (DECT) approach, SPR was predicted for all tissues and then compared to the measured reference. With the SECT approach, the SPRs of all tissues were predicted with a mean error of (-0.84 ± 0.12)% and a mean absolute error of (1.27 ± 0.12)%. In contrast, the DECT-based SPR predictions were overall consistent with the measured reference with a mean error of (-0.02 ± 0.15)% and a mean absolute error of (0.10 ± 0.15)%. Thus, in this study, the potential of DECT to decrease range uncertainty could be confirmed in biological tissue.
[A Quality Assurance (QA) System with a Web Camera for High-dose-rate Brachytherapy].
Hirose, Asako; Ueda, Yoshihiro; Oohira, Shingo; Isono, Masaru; Tsujii, Katsutomo; Inui, Shouki; Masaoka, Akira; Taniguchi, Makoto; Miyazaki, Masayoshi; Teshima, Teruki
2016-03-01
The quality assurance (QA) system that simultaneously quantifies the position and duration of an (192)Ir source (dwell position and time) was developed, and the performance of this system was evaluated in high-dose-rate brachytherapy. This QA system has two functions to verify and quantify dwell position and time by using a web camera. The web camera records 30 images per second over a range from 1,425 mm to 1,505 mm. A user verifies the source position from the web camera in real time. The source position and duration were quantified from the movie using in-house software applying a template-matching technique. This QA system allowed verification of the absolute position in real time and simultaneous quantification of dwell position and time. Verification of the system showed that the mean of step-size errors was 0.31±0.1 mm and that of dwell-time errors 0.1±0.0 s. Absolute position errors can be determined with an accuracy of 1.0 mm at all dwell points for three step sizes, and dwell-time errors with an accuracy of 0.1% for planned times longer than 10.0 s. This system provides quick verification and quantification of the dwell position and time with high accuracy at various dwell positions, independent of the step size.
Long-term neuromuscular training and ankle joint position sense.
Kynsburg, A; Pánics, G; Halasi, T
2010-06-01
Preventive effect of proprioceptive training is proven by decreasing injury incidence, but its proprioceptive mechanism is not. Major hypothesis: the training has a positive long-term effect on ankle joint position sense in athletes of a high-risk sport (handball). Ten elite-level female handball players represented the intervention group (training group), 10 healthy athletes of other sports formed the control group. Proprioceptive training was incorporated into the regular training regimen of the training group. Ankle joint position sense function was measured with the "slope-box" test, first described by Robbins et al. Testing was performed one day before the intervention and 20 months later. Mean absolute estimate errors were processed for statistical analysis. Proprioceptive sensory function improved in all four directions with high significance (p<0.0001; avg. mean estimate error improvement: 1.77 degrees). This was also highly significant (p≤0.0002) in each single direction, with avg. mean estimate error improvement between 1.59 degrees (posterior) and 2.03 degrees (anterior). Mean absolute estimate errors at follow-up (2.24 degrees +/-0.88 degrees) were significantly lower than in uninjured controls (3.29 degrees +/-1.15 degrees) (p<0.0001). Long-term neuromuscular training has improved ankle joint position sense function in the investigated athletes. This joint position sense improvement can be one of the explanations for the injury rate reduction effect of neuromuscular training.
The statistical properties and possible causes of polar motion prediction errors
NASA Astrophysics Data System (ADS)
Kosek, Wieslaw; Kalarus, Maciej; Wnek, Agnieszka; Zbylut-Gorska, Maria
2015-08-01
The pole coordinate data predictions from different prediction contributors of the Earth Orientation Parameters Combination of Prediction Pilot Project (EOPCPPP) were studied to determine the statistical properties of polar motion forecasts by examining the time series of differences between them and the future IERS pole coordinate data. The mean absolute errors, standard deviations, skewness and kurtosis of these differences were computed together with their error bars as a function of prediction length. The ensemble predictions show slightly smaller mean absolute errors and standard deviations; however, their skewness and kurtosis values are similar to those for the predictions from the individual contributors. The skewness and kurtosis make it possible to check whether these prediction differences follow a normal distribution. The kurtosis values diminish with prediction length, which means that the probability distribution of the prediction differences becomes more platykurtic than leptokurtic. Non-zero skewness values result from the oscillating character of these differences at particular prediction lengths, which can be due to the irregular change of the annual oscillation phase in the joint fluid (atmospheric + ocean + land hydrology) excitation functions. The variations of the annual oscillation phase computed by a combination of the Fourier-transform band-pass filter and the Hilbert transform from pole coordinate data, as well as from pole coordinate model data obtained from fluid excitations, are in good agreement.
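The skewness and kurtosis diagnostics used above can be computed from the prediction differences with simple moment estimators; a generic sketch (zero skewness and zero excess kurtosis correspond to a normal distribution, negative excess kurtosis to a platykurtic one; not the EOPCPPP analysis code):

```python
def skewness_kurtosis(diffs):
    """Moment-based skewness and excess kurtosis of a sample of
    prediction-minus-observation differences."""
    n = len(diffs)
    mean = sum(diffs) / n
    m2 = sum((d - mean) ** 2 for d in diffs) / n
    m3 = sum((d - mean) ** 3 for d in diffs) / n
    m4 = sum((d - mean) ** 4 for d in diffs) / n
    skew = m3 / m2 ** 1.5
    excess_kurt = m4 / m2 ** 2 - 3.0   # 0 for a normal distribution
    return skew, excess_kurt

# symmetric toy sample: zero skewness, negative (platykurtic) excess kurtosis
skew, kurt = skewness_kurtosis([-2.0, -1.0, 0.0, 1.0, 2.0])
```

Applied per prediction length, these two statistics trace how the difference distribution departs from normality as the forecast horizon grows.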
An Improved Computational Method for the Calculation of Mixture Liquid-Vapor Critical Points
NASA Astrophysics Data System (ADS)
Dimitrakopoulos, Panagiotis; Jia, Wenlong; Li, Changjun
2014-05-01
Knowledge of critical points is important for determining the phase behavior of a mixture. This work proposes a reliable and accurate method for locating the liquid-vapor critical point of a given mixture. The theoretical model is developed from the rigorous definition of critical points, based on the SRK equation of state (SRK EoS) or, alternatively, on the PR EoS. To solve the resulting system of nonlinear equations, an improved method is introduced into an existing Newton-Raphson algorithm, which calculates all the variables simultaneously at each iteration step. The improvements mainly concern the derivatives of the Jacobian matrix, the convergence criteria, and the damping coefficient. All equations and related conditions required for the computation are presented in this paper. Finally, experimental data for the critical points of 44 mixtures are used to validate the method. For the SRK EoS, the average absolute errors of the predicted critical-pressure and critical-temperature values are 123.82 kPa and 3.11 K, respectively, whereas the prediction errors of the commercial software package Calsep PVTSIM are 131.02 kPa and 3.24 K. For the PR EoS, the two above-mentioned average absolute errors are 129.32 kPa and 2.45 K, while the PVTSIM errors are 137.24 kPa and 2.55 K, respectively.
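A damped Newton-Raphson scheme of the kind described (all variables updated simultaneously from the Jacobian, with a damping coefficient on the step) can be sketched generically; the toy residual system below is an illustrative assumption, not the actual SRK/PR criticality conditions:

```python
import numpy as np

def newton_damped(F, J, x0, tol=1e-10, max_iter=100, damping=0.8):
    """Damped Newton-Raphson iteration for a nonlinear system F(x) = 0.

    F returns the residual vector and J its Jacobian matrix. The damping
    coefficient shortens each step, which can aid convergence when the
    starting point is far from the root.
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        r = F(x)
        if np.linalg.norm(r) < tol:      # convergence criterion
            return x
        dx = np.linalg.solve(J(x), -r)   # solve J dx = -F for all variables
        x = x + damping * dx             # damped simultaneous update
    return x

# Toy system standing in for the criticality conditions:
# x^2 + y^2 = 4 and x*y = 1
F = lambda v: np.array([v[0]**2 + v[1]**2 - 4.0, v[0] * v[1] - 1.0])
J = lambda v: np.array([[2.0 * v[0], 2.0 * v[1]], [v[1], v[0]]])
root = newton_damped(F, J, [2.0, 0.3])
```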
Absolute method of measuring magnetic susceptibility
Thorpe, A.; Senftle, F.E.
1959-01-01
An absolute method of standardization and measurement of the magnetic susceptibility of small samples is presented which can be applied to most techniques based on the Faraday method. The fact that the susceptibility is a function of the area under the curve of sample displacement versus distance of the magnet from the sample offers a simple method of measuring the susceptibility without recourse to a standard sample. Typical results on a few substances are compared with reported values, and an error of less than 2% can be achieved. © 1959 The American Institute of Physics.
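The area-under-the-curve relation can be illustrated with a minimal numerical sketch; the displacement profile and the instrument constant below are assumed for illustration, not taken from the paper:

```python
import numpy as np

# Hypothetical displacement-versus-magnet-distance data from a Faraday-type
# measurement; the susceptibility is proportional to the area under this curve.
z = np.linspace(0.0, 5.0, 50)          # magnet distance (arbitrary units)
d = np.exp(-z) * (1.0 - np.exp(-z))    # sample displacement (assumed profile)

# Trapezoidal area under the displacement curve
area = float(np.sum((d[1:] + d[:-1]) / 2.0 * np.diff(z)))
k_calib = 1.0                          # instrument constant (assumed)
chi = k_calib * area                   # susceptibility, up to calibration
```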
Ching, Joan M; Williams, Barbara L; Idemoto, Lori M; Blackmore, C Craig
2014-08-01
Virginia Mason Medical Center (Seattle) employed the Lean concept of Jidoka (automation with a human touch) to plan for and deploy bar code medication administration (BCMA) to hospitalized patients. Integrating BCMA technology into the nursing work flow with minimal disruption was accomplished using three steps of Jidoka: (1) assigning work to humans and machines on the basis of their differing abilities, (2) adapting machines to the human work flow, and (3) monitoring the human-machine interaction. The effectiveness of BCMA in both reinforcing safe administration practices and reducing medication errors was measured using the Collaborative Alliance for Nursing Outcomes (CALNOC) Medication Administration Accuracy Quality Study methodology. Trained nurses observed a total of 16,149 medication doses for 3,617 patients in a three-year period. Following BCMA implementation, the number of safe practice violations decreased from 54.8 violations/100 doses (January 2010-September 2011) to 29.0 violations/100 doses (October 2011-December 2012), resulting in an absolute risk reduction of 25.8 violations/100 doses (95% confidence interval [CI]: 23.7, 27.9, p < .001). The number of medication errors decreased from 5.9 errors/100 doses at baseline to 3.0 errors/100 doses after BCMA implementation (absolute risk reduction: 2.9 errors/100 doses [95% CI: 2.2, 3.6, p < .001]). The number of unsafe administration practices (estimate, -5.481; standard error 1.133; p < .001; 95% CI: -7.702, -3.260) also decreased. As more hospitals respond to health information technology meaningful use incentives, thoughtful, methodical, and well-managed approaches to technology deployment are crucial. This work illustrates how Jidoka offers opportunities for a smooth transition to new technology.
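The absolute-risk-reduction arithmetic reported above can be sketched as follows; the counts below are illustrative assumptions chosen to mirror the 5.9 vs. 3.0 errors/100 doses figures, not the actual CALNOC observation data:

```python
from math import sqrt

def arr_with_ci(e1, n1, e2, n2, z=1.96):
    """Absolute risk reduction between two event rates, with a Wald 95% CI.

    e1/n1 and e2/n2 are event counts over observed doses (assumed inputs);
    results are expressed per 100 doses.
    """
    p1, p2 = e1 / n1, e2 / n2
    arr = p1 - p2                                          # risk difference
    se = sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)     # standard error
    return 100 * arr, 100 * (arr - z * se), 100 * (arr + z * se)

# e.g. 59 errors in 1,000 doses before vs 30 errors in 1,000 doses after
arr, lo, hi = arr_with_ci(59, 1000, 30, 1000)   # arr = 2.9 per 100 doses
```

With the study's much larger dose counts, the same arithmetic yields a correspondingly narrower confidence interval.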
Farooqui, Javed Hussain; Sharma, Mansi; Koul, Archana; Dutta, Ranjan; Shroff, Noshir Minoo
2017-01-01
The aim of this study was to compare two different methods of analysis of preoperative reference marking for toric intraocular lens (IOL) implantation after marking with an electronic marker. Cataract and IOL Implantation Service, Shroff Eye Centre, New Delhi, India. Fifty-two eyes of thirty patients planned for toric IOL implantation were included in the study. All patients had preoperative marking performed with an electronic preoperative two-step toric IOL reference marker (ASICO AE-2929). Reference marks were placed at the 3- and 9-o'clock positions. The marks were analyzed with two systems. First, slit-lamp photographs were taken and analyzed using Adobe Photoshop (version 7.0). Second, the Tracey iTrace Visual Function Analyzer (version 5.1.1) was used to capture the corneal topography examination, and the position of the marks was noted. The amount of alignment error was calculated. The mean absolute rotation error was 2.38 ± 1.78° by Photoshop and 2.87 ± 2.03° by iTrace, a difference that was not statistically significant (P = 0.215). Nearly 72.7% of eyes by Photoshop and 61.4% by iTrace had a rotation error ≤3° (P = 0.359); and 90.9% of eyes by Photoshop and 81.8% by iTrace had a rotation error ≤5° (P = 0.344). There was no significant difference in the absolute amount of rotation between eyes when analyzed by either method. The difference in reference mark positions when analyzed by the two systems suggests the presence of varying cyclotorsion at different points in time. Both analysis methods showed approximately 3° of alignment error, which could contribute to a 10% loss of astigmatic correction of the toric IOL. This can be further compounded by intraoperative marking errors and the final placement of the IOL in the bag.
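The stated link between roughly 3° of misalignment and roughly 10% loss of astigmatic correction follows from the standard vector result that a toric lens rotated by an angle θ leaves a residual astigmatism of 2·sin(θ) times its cylinder; a minimal sketch:

```python
from math import sin, radians

def toric_correction_loss(misalignment_deg):
    """Fractional loss of astigmatic correction for a rotated toric IOL.

    Residual astigmatism after a rotation of theta is 2*sin(theta) times
    the lens cylinder, so the fraction of correction lost is 2*sin(theta).
    """
    return 2.0 * sin(radians(misalignment_deg))

loss = toric_correction_loss(3.0)   # ~0.105, i.e. roughly the 10% noted above
```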
DOE Office of Scientific and Technical Information (OSTI.GOV)
Falconer, David A.; Tiwari, Sanjiv K.; Moore, Ronald L.
Projection errors limit the use of vector magnetograms of active regions (ARs) far from the disk center. In this Letter, for ARs observed up to 60° from the disk center, we demonstrate a method for measuring and reducing the projection error in the magnitude of any whole-AR parameter that is derived from a vector magnetogram that has been deprojected to the disk center. The method assumes that the center-to-limb curve of the average of the parameter's absolute values, measured from the disk passage of a large number of ARs and normalized to each AR's absolute value of the parameter at central meridian, gives the average fractional projection error at each radial distance from the disk center. To demonstrate the method, we use a large set of large-flux ARs and apply the method to a whole-AR parameter that is among the simplest to measure: whole-AR magnetic flux. We measure 30,845 SDO/Helioseismic and Magnetic Imager vector magnetograms covering the disk passage of 272 large-flux ARs, each having whole-AR flux >10²² Mx. We obtain the center-to-limb radial-distance run of the average projection error in measured whole-AR flux from a Chebyshev fit to the radial-distance plot of the 30,845 normalized measured values. The average projection error in the measured whole-AR flux of an AR at a given radial distance is removed by multiplying the measured flux by the correction factor given by the fit. The correction is important both for the study of the evolution of ARs and for improving the accuracy of forecasts of an AR's major flare/coronal mass ejection productivity.
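The normalize-fit-correct scheme can be sketched with synthetic data; the quadratic center-to-limb falloff, noise level, and fit degree below are illustrative assumptions, not the measured HMI curve:

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical normalized whole-AR flux measurements: each value is the flux
# measured at radial distance r divided by the same AR's flux at central
# meridian, so the departure from 1 estimates the fractional projection error.
r = rng.uniform(0.0, 0.87, 3000)   # radial distance from disk center (assumed)
norm_flux = 1.0 - 0.25 * r**2 + rng.normal(0.0, 0.03, r.size)

# Chebyshev fit gives the average normalized (error-affected) flux vs. r
coeffs = np.polynomial.chebyshev.chebfit(r, norm_flux, deg=4)
fit_curve = np.polynomial.chebyshev.chebval   # evaluate the fit at any r

def corrected_flux(measured_flux, r_obs):
    """Remove the average projection error using the fitted curve."""
    return measured_flux / fit_curve(r_obs, coeffs)
```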
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kim, H; Lee, H; Choi, K
Purpose: The mechanical quality assurance (QA) of medical accelerators consists of a time-consuming series of procedures. Since most of the procedures are done manually (e.g., checking the gantry rotation angle with the naked eye using a level attached to the gantry), the process has a high potential for human error. To remove the possibility of human error and reduce the procedure duration, we developed a smartphone application for automated mechanical QA. Methods: The preparation for the automated process was done by attaching a smartphone to the gantry facing upward. For the assessment of gantry and collimator angle indications, motion sensors (gyroscope, accelerometer, and magnetic field sensor) embedded in the smartphone were used. For the assessments of the jaw position indicator, cross-hair centering, and optical distance indicator (ODI), an optical image processing module using a picture taken by the high-resolution camera embedded in the smartphone was implemented. The application was developed with the Android software development kit (SDK) and the OpenCV library. Results: The system accuracies in terms of angle detection error and length detection error were < 0.1° and < 1 mm, respectively. The mean absolute errors for the gantry and collimator rotation angles were 0.03° and 0.041°, respectively. The mean absolute error for the measured light field size was 0.067 cm. Conclusion: The automated system we developed can be used for the mechanical QA of medical accelerators with proven accuracy. For more convenient use of this application, a wireless communication module is under development. This system has strong potential for the automation of other QA procedures such as light/radiation field coincidence and couch translations/rotations.
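The sensor-based angle measurement can be sketched as follows; the axis conventions and the use of the gravity vector alone (no gyroscope fusion) are simplifying assumptions, not the application's actual algorithm:

```python
from math import atan2, degrees, sqrt

def gantry_angle_from_gravity(ax, ay, az):
    """Estimate a rotation angle from a 3-axis accelerometer reading.

    With the phone attached to the gantry, the gravity vector rotates in the
    sensor frame as the gantry turns; the tilt about one axis follows from
    atan2 of the relevant components. Axis conventions here are assumed.
    """
    return degrees(atan2(ax, sqrt(ay**2 + az**2)))

angle = gantry_angle_from_gravity(0.0, 0.0, 9.81)   # level gantry -> 0 degrees
```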
Bae, Hyoung Won; Lee, Yun Ha; Kim, Do Wook; Lee, Taekjune; Hong, Samin; Seong, Gong Je; Kim, Chan Yun
2016-08-01
The objective of the study was to examine the effect of trabeculectomy on intraocular lens power calculations in patients with open-angle glaucoma (OAG) undergoing cataract surgery. The design was a retrospective data analysis. A total of 55 eyes of 55 patients with OAG who had cataract surgery alone or in combination with trabeculectomy were included. We classified the OAG subjects into the following groups based on surgical history: cataract surgery only (OC group), cataract surgery after prior trabeculectomy (CAT group), and cataract surgery performed in combination with trabeculectomy (CCT group). The main outcome measure was the difference between the actual and predicted postoperative refractive error. The mean error (ME, the difference between the postoperative and predicted SE) in the CCT group was significantly lower (towards myopia) than that of the OC group (P = 0.008). Additionally, the mean absolute error (MAE, the absolute value of the ME) in the CAT group was significantly greater than in the OC group (P = 0.006). Using linear mixed models, the ME calculated with the SRK II formula was more accurate than the ME predicted by the SRK/T formula in the CAT (P = 0.032) and CCT (P = 0.035) groups. The intraocular lens power prediction accuracy was lower in the CAT and CCT groups than in the OC group. The prediction error was greater in the CAT group than in the OC group, and the direction of the prediction error tended to be towards myopia in the CCT group. The SRK II formula may be more accurate in predicting residual refractive error in the CAT and CCT groups. © 2016 Royal Australian and New Zealand College of Ophthalmologists.
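The ME and MAE statistics used above can be sketched with illustrative refraction values (the numbers below are assumptions, not study data); note how ME carries the direction of the error while MAE measures only its magnitude:

```python
import numpy as np

# Hypothetical postoperative vs. formula-predicted spherical equivalents (D)
postop = np.array([-0.50, -0.25, 0.00, -0.75, 0.25])
predicted = np.array([-0.25, 0.00, 0.25, -0.25, 0.00])

me = np.mean(postop - predicted)            # mean error; negative = myopic shift
mae = np.mean(np.abs(postop - predicted))   # mean absolute error (magnitude)
```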