Liu, Xiaofeng Steven
2011-05-01
The use of covariates is commonly believed to reduce the unexplained error variance and the standard error for the comparison of treatment means, but the reduction in the standard error is neither guaranteed nor uniform over different sample sizes. The covariate mean differences between the treatment conditions can inflate the standard error of the covariate-adjusted mean difference and can actually produce a larger standard error for the adjusted mean difference than that for the unadjusted mean difference. When the covariate observations are conceived of as randomly varying from one study to another, the covariate mean differences can be related to a Hotelling's T(2) . Using this Hotelling's T(2) statistic, one can always find a minimum sample size to achieve a high probability of reducing the standard error and confidence interval width for the adjusted mean difference. ©2010 The British Psychological Society.
Willem W.S. van Hees
2002-01-01
Comparisons of estimated standard error for a ratio-of-means (ROM) estimator are presented for forest resource inventories conducted in southeast Alaska between 1995 and 2000. Estimated standard errors for the ROM were generated by using a traditional variance estimator and also approximated by bootstrap methods. Estimates of standard error generated by both...
Hypothesis Testing Using Factor Score Regression
Devlieger, Ines; Mayer, Axel; Rosseel, Yves
2015-01-01
In this article, an overview is given of four methods to perform factor score regression (FSR), namely regression FSR, Bartlett FSR, the bias avoiding method of Skrondal and Laake, and the bias correcting method of Croon. The bias correcting method is extended to include a reliable standard error. The four methods are compared with each other and with structural equation modeling (SEM) by using analytic calculations and two Monte Carlo simulation studies to examine their finite sample characteristics. Several performance criteria are used, such as the bias using the unstandardized and standardized parameterization, efficiency, mean square error, standard error bias, type I error rate, and power. The results show that the bias correcting method, with the newly developed standard error, is the only suitable alternative for SEM. While it has a higher standard error bias than SEM, it has a comparable bias, efficiency, mean square error, power, and type I error rate. PMID:29795886
Computer Programs for the Semantic Differential: Further Modifications.
ERIC Educational Resources Information Center
Lawson, Edwin D.; And Others
The original nine programs for semantic differential analysis have been condensed into three programs which have been further refined and augmented. They yield: (1) means, standard deviations, and standard errors for each subscale on each concept; (2) Evaluation, Potency, and Activity (EPA) means, standard deviations, and standard errors; (3)…
Statistical models for estimating daily streamflow in Michigan
Holtschlag, D.J.; Salehi, Habib
1992-01-01
Statistical models for estimating daily streamflow were analyzed for 25 pairs of streamflow-gaging stations in Michigan. Stations were paired by randomly choosing a station operated in 1989 at which 10 or more years of continuous flow data had been collected and at which flow is virtually unregulated; a nearby station was chosen where flow characteristics are similar. Streamflow data from the 25 randomly selected stations were used as the response variables; streamflow data at the nearby stations were used to generate a set of explanatory variables. Ordinary-least squares regression (OLSR) equations, autoregressive integrated moving-average (ARIMA) equations, and transfer function-noise (TFN) equations were developed to estimate the log transform of flow for the 25 randomly selected stations. The precision of each type of equation was evaluated on the basis of the standard deviation of the estimation errors. OLSR equations produce one set of estimation errors; ARIMA and TFN models each produce l sets of estimation errors corresponding to the forecast lead. The lead-l forecast is the estimate of flow l days ahead of the most recent streamflow used as a response variable in the estimation. In this analysis, the standard deviation of lead l ARIMA and TFN forecast errors were generally lower than the standard deviation of OLSR errors for l < 2 days and l < 9 days, respectively. Composite estimates were computed as a weighted average of forecasts based on TFN equations and backcasts (forecasts of the reverse-ordered series) based on ARIMA equations. The standard deviation of composite errors varied throughout the length of the estimation interval and generally was at maximum near the center of the interval. For comparison with OLSR errors, the mean standard deviation of composite errors were computed for intervals of length 1 to 40 days. The mean standard deviation of length-l composite errors were generally less than the standard deviation of the OLSR errors for l < 32 days. In addition, the composite estimates ensure a gradual transition between periods of estimated and measured flows. Model performance among stations of differing model error magnitudes were compared by computing ratios of the mean standard deviation of the length l composite errors to the standard deviation of OLSR errors. The mean error ratio for the set of 25 selected stations was less than 1 for intervals l < 32 days. Considering the frequency characteristics of the length of intervals of estimated record in Michigan, the effective mean error ratio for intervals < 30 days was 0.52. Thus, for intervals of estimation of 1 month or less, the error of the composite estimate is substantially lower than error of the OLSR estimate.
What to use to express the variability of data: Standard deviation or standard error of mean?
Barde, Mohini P; Barde, Prajakt J
2012-07-01
Statistics plays a vital role in biomedical research. It helps present data precisely and draws the meaningful conclusions. While presenting data, one should be aware of using adequate statistical measures. In biomedical journals, Standard Error of Mean (SEM) and Standard Deviation (SD) are used interchangeably to express the variability; though they measure different parameters. SEM quantifies uncertainty in estimate of the mean whereas SD indicates dispersion of the data from mean. As readers are generally interested in knowing the variability within sample, descriptive data should be precisely summarized with SD. Use of SEM should be limited to compute CI which measures the precision of population estimate. Journals can avoid such errors by requiring authors to adhere to their guidelines.
NASA Technical Reports Server (NTRS)
Knox, C. E.
1978-01-01
Navigation error data from these flights are presented in a format utilizing three independent axes - horizontal, vertical, and time. The navigation position estimate error term and the autopilot flight technical error term are combined to form the total navigation error in each axis. This method of error presentation allows comparisons to be made between other 2-, 3-, or 4-D navigation systems and allows experimental or theoretical determination of the navigation error terms. Position estimate error data are presented with the navigation system position estimate based on dual DME radio updates that are smoothed with inertial velocities, dual DME radio updates that are smoothed with true airspeed and magnetic heading, and inertial velocity updates only. The normal mode of navigation with dual DME updates that are smoothed with inertial velocities resulted in a mean error of 390 m with a standard deviation of 150 m in the horizontal axis; a mean error of 1.5 m low with a standard deviation of less than 11 m in the vertical axis; and a mean error as low as 252 m with a standard deviation of 123 m in the time axis.
Total ozone trend significance from space time variability of daily Dobson data
NASA Technical Reports Server (NTRS)
Wilcox, R. W.
1981-01-01
Estimates of standard errors of total ozone time and area means, as derived from ozone's natural temporal and spatial variability and autocorrelation in middle latitudes determined from daily Dobson data are presented. Assessing the significance of apparent total ozone trends is equivalent to assessing the standard error of the means. Standard errors of time averages depend on the temporal variability and correlation of the averaged parameter. Trend detectability is discussed, both for the present network and for satellite measurements.
A Note on Standard Deviation and Standard Error
ERIC Educational Resources Information Center
Hassani, Hossein; Ghodsi, Mansoureh; Howell, Gareth
2010-01-01
Many students confuse the standard deviation and standard error of the mean and are unsure which, if either, to use in presenting data. In this article, we endeavour to address these questions and cover some related ambiguities about these quantities.
A Hands-On Exercise Improves Understanding of the Standard Error of the Mean
ERIC Educational Resources Information Center
Ryan, Robert S.
2006-01-01
One of the most difficult concepts for statistics students is the standard error of the mean. To improve understanding of this concept, 1 group of students used a hands-on procedure to sample from small populations representing either a true or false null hypothesis. The distribution of 120 sample means (n = 3) from each population had standard…
Pleil, Joachim D
2016-01-01
This commentary is the second of a series outlining one specific concept in interpreting biomarkers data. In the first, an observational method was presented for assessing the distribution of measurements before making parametric calculations. Here, the discussion revolves around the next step, the choice of using standard error of the mean or the calculated standard deviation to compare or predict measurement results.
Evrendilek, Fatih
2007-12-12
This study aims at quantifying spatio-temporal dynamics of monthly mean dailyincident photosynthetically active radiation (PAR) over a vast and complex terrain such asTurkey. The spatial interpolation method of universal kriging, and the combination ofmultiple linear regression (MLR) models and map algebra techniques were implemented togenerate surface maps of PAR with a grid resolution of 500 x 500 m as a function of fivegeographical and 14 climatic variables. Performance of the geostatistical and MLR modelswas compared using mean prediction error (MPE), root-mean-square prediction error(RMSPE), average standard prediction error (ASE), mean standardized prediction error(MSPE), root-mean-square standardized prediction error (RMSSPE), and adjustedcoefficient of determination (R² adj. ). The best-fit MLR- and universal kriging-generatedmodels of monthly mean daily PAR were validated against an independent 37-year observeddataset of 35 climate stations derived from 160 stations across Turkey by the Jackknifingmethod. The spatial variability patterns of monthly mean daily incident PAR were moreaccurately reflected in the surface maps created by the MLR-based models than in thosecreated by the universal kriging method, in particular, for spring (May) and autumn(November). The MLR-based spatial interpolation algorithms of PAR described in thisstudy indicated the significance of the multifactor approach to understanding and mappingspatio-temporal dynamics of PAR for a complex terrain over meso-scales.
ERIC Educational Resources Information Center
Pan, Tianshu; Yin, Yue
2012-01-01
In the discussion of mean square difference (MSD) and standard error of measurement (SEM), Barchard (2012) concluded that the MSD between 2 sets of test scores is greater than 2(SEM)[superscript 2] and SEM underestimates the score difference between 2 tests when the 2 tests are not parallel. This conclusion has limitations for 2 reasons. First,…
Hess, G.W.; Bohman, L.R.
1996-01-01
Techniques for estimating monthly mean streamflow at gaged sites and monthly streamflow duration characteristics at ungaged sites in central Nevada were developed using streamflow records at six gaged sites and basin physical and climatic characteristics. Streamflow data at gaged sites were related by regression techniques to concurrent flows at nearby gaging stations so that monthly mean streamflows for periods of missing or no record can be estimated for gaged sites in central Nevada. The standard error of estimate for relations at these sites ranged from 12 to 196 percent. Also, monthly streamflow data for selected percent exceedence levels were used in regression analyses with basin and climatic variables to determine relations for ungaged basins for annual and monthly percent exceedence levels. Analyses indicate that the drainage area and percent of drainage area at altitudes greater than 10,000 feet are the most significant variables. For the annual percent exceedence, the standard error of estimate of the relations for ungaged sites ranged from 51 to 96 percent and standard error of prediction for ungaged sites ranged from 96 to 249 percent. For the monthly percent exceedence values, the standard error of estimate of the relations ranged from 31 to 168 percent, and the standard error of prediction ranged from 115 to 3,124 percent. Reliability and limitations of the estimating methods are described.
NASA Astrophysics Data System (ADS)
Pernot, Pascal; Savin, Andreas
2018-06-01
Benchmarking studies in computational chemistry use reference datasets to assess the accuracy of a method through error statistics. The commonly used error statistics, such as the mean signed and mean unsigned errors, do not inform end-users on the expected amplitude of prediction errors attached to these methods. We show that, the distributions of model errors being neither normal nor zero-centered, these error statistics cannot be used to infer prediction error probabilities. To overcome this limitation, we advocate for the use of more informative statistics, based on the empirical cumulative distribution function of unsigned errors, namely, (1) the probability for a new calculation to have an absolute error below a chosen threshold and (2) the maximal amplitude of errors one can expect with a chosen high confidence level. Those statistics are also shown to be well suited for benchmarking and ranking studies. Moreover, the standard error on all benchmarking statistics depends on the size of the reference dataset. Systematic publication of these standard errors would be very helpful to assess the statistical reliability of benchmarking conclusions.
Verification of calculated skin doses in postmastectomy helical tomotherapy.
Ito, Shima; Parker, Brent C; Levine, Renee; Sanders, Mary Ella; Fontenot, Jonas; Gibbons, John; Hogstrom, Kenneth
2011-10-01
To verify the accuracy of calculated skin doses in helical tomotherapy for postmastectomy radiation therapy (PMRT). In vivo thermoluminescent dosimeters (TLDs) were used to measure the skin dose at multiple points in each of 14 patients throughout the course of treatment on a TomoTherapy Hi·Art II system, for a total of 420 TLD measurements. Five patients were evaluated near the location of the mastectomy scar, whereas 9 patients were evaluated throughout the treatment volume. The measured dose at each location was compared with calculations from the treatment planning system. The mean difference and standard error of the mean difference between measurement and calculation for the scar measurements was -1.8% ± 0.2% (standard deviation [SD], 4.3%; range, -11.1% to 10.6%). The mean difference and standard error of the mean difference between measurement and calculation for measurements throughout the treatment volume was -3.0% ± 0.4% (SD, 4.7%; range, -18.4% to 12.6%). The mean difference and standard error of the mean difference between measurement and calculation for all measurements was -2.1% ± 0.2% (standard deviation, 4.5%: range, -18.4% to 12.6%). The mean difference between measured and calculated TLD doses was statistically significant at two standard deviations of the mean, but was not clinically significant (i.e., was <5%). However, 23% of the measured TLD doses differed from the calculated TLD doses by more than 5%. The mean of the measured TLD doses agreed with TomoTherapy calculated TLD doses within our clinical criterion of 5%. Copyright © 2011 Elsevier Inc. All rights reserved.
Accuracy of a pulse-coherent acoustic Doppler profiler in a wave-dominated flow
Lacy, J.R.; Sherwood, C.R.
2004-01-01
The accuracy of velocities measured by a pulse-coherent acoustic Doppler profiler (PCADP) in the bottom boundary layer of a wave-dominated inner-shelf environment is evaluated. The downward-looking PCADP measured velocities in eight 10-cm cells at 1 Hz. Velocities measured by the PCADP are compared to those measured by an acoustic Doppler velocimeter for wave orbital velocities up to 95 cm s-1 and currents up to 40 cm s-1. An algorithm for correcting ambiguity errors using the resolution velocities was developed. Instrument bias, measured as the average error in burst mean speed, is -0.4 cm s-1 (standard deviation = 0.8). The accuracy (root-mean-square error) of instantaneous velocities has a mean of 8.6 cm s-1 (standard deviation = 6.5) for eastward velocities (the predominant direction of waves), 6.5 cm s-1 (standard deviation = 4.4) for northward velocities, and 2.4 cm s-1 (standard deviation = 1.6) for vertical velocities. Both burst mean and root-mean-square errors are greater for bursts with ub ??? 50 cm s-1. Profiles of burst mean speeds from the bottom five cells were fit to logarithmic curves: 92% of bursts with mean speed ??? 5 cm s-1 have a correlation coefficient R2 > 0.96. In cells close to the transducer, instantaneous velocities are noisy, burst mean velocities are biased low, and bottom orbital velocities are biased high. With adequate blanking distances for both the profile and resolution velocities, the PCADP provides sufficient accuracy to measure velocities in the bottom boundary layer under moderately energetic inner-shelf conditions.
Water quality management using statistical analysis and time-series prediction model
NASA Astrophysics Data System (ADS)
Parmar, Kulwinder Singh; Bhardwaj, Rashmi
2014-12-01
This paper deals with water quality management using statistical analysis and time-series prediction model. The monthly variation of water quality standards has been used to compare statistical mean, median, mode, standard deviation, kurtosis, skewness, coefficient of variation at Yamuna River. Model validated using R-squared, root mean square error, mean absolute percentage error, maximum absolute percentage error, mean absolute error, maximum absolute error, normalized Bayesian information criterion, Ljung-Box analysis, predicted value and confidence limits. Using auto regressive integrated moving average model, future water quality parameters values have been estimated. It is observed that predictive model is useful at 95 % confidence limits and curve is platykurtic for potential of hydrogen (pH), free ammonia, total Kjeldahl nitrogen, dissolved oxygen, water temperature (WT); leptokurtic for chemical oxygen demand, biochemical oxygen demand. Also, it is observed that predicted series is close to the original series which provides a perfect fit. All parameters except pH and WT cross the prescribed limits of the World Health Organization /United States Environmental Protection Agency, and thus water is not fit for drinking, agriculture and industrial use.
Increasing point-count duration increases standard error
Smith, W.P.; Twedt, D.J.; Hamel, P.B.; Ford, R.P.; Wiedenfeld, D.A.; Cooper, R.J.
1998-01-01
We examined data from point counts of varying duration in bottomland forests of west Tennessee and the Mississippi Alluvial Valley to determine if counting interval influenced sampling efficiency. Estimates of standard error increased as point count duration increased both for cumulative number of individuals and species in both locations. Although point counts appear to yield data with standard errors proportional to means, a square root transformation of the data may stabilize the variance. Using long (>10 min) point counts may reduce sample size and increase sampling error, both of which diminish statistical power and thereby the ability to detect meaningful changes in avian populations.
Analysis of DGPS/INS and MLS/INS final approach navigation errors and control performance data
NASA Technical Reports Server (NTRS)
Hueschen, Richard M.; Spitzer, Cary R.
1992-01-01
Flight tests were conducted jointly by NASA Langley Research Center and Honeywell, Inc., on a B-737 research aircraft to record a data base for evaluating the performance of a differential DGPS/inertial navigation system (INS) which used GPS Course/Acquisition code receivers. Estimates from the DGPS/INS and a Microwave Landing System (MLS)/INS, and various aircraft parameter data were recorded in real time aboard the aircraft while flying along the final approach path to landing. This paper presents the mean and standard deviation of the DGPS/INS and MLS/INS navigation position errors computed relative to the laser tracker system and of the difference between the DGPS/INS and MLS/INS velocity estimates. RMS errors are presented for DGPS/INS and MLS/INS guidance errors (localizer and glideslope). The mean navigation position errors and standard deviation of the x position coordinate of the DGPS/INS and MLS/INS systems were found to be of similar magnitude while the standard deviation of the y and z position coordinate errors were significantly larger for DGPS/INS compared to MLS/INS.
Marchetti, Bárbara V; Candotti, Cláudia T; Raupp, Eduardo G; Oliveira, Eduardo B C; Furlanetto, Tássia S; Loss, Jefferson F
The purpose of this study was to assess a radiographic method for spinal curvature evaluation in children, based on spinous processes, and identify its normality limits. The sample consisted of 90 radiographic examinations of the spines of children in the sagittal plane. Thoracic and lumbar curvatures were evaluated using angular (apex angle [AA]) and linear (sagittal arrow [SA]) measurements based on the spinous processes. The same curvatures were also evaluated using the Cobb angle (CA) method, which is considered the gold standard. For concurrent validity (AA vs CA), Pearson's product-moment correlation coefficient, root-mean-square error, Pitman- Morgan test, and Bland-Altman analysis were used. For reproducibility (AA, SA, and CA), the intraclass correlation coefficient, standard error of measurement, and minimal detectable change measurements were used. A significant correlation was found between CA and AA measurements, as was a low root-mean-square error. The mean difference between the measurements was 0° for thoracic and lumbar curvatures, and the mean standard deviations of the differences were ±5.9° and 6.9°, respectively. The intraclass correlation coefficients of AA and SA were similar to or higher than the gold standard (CA). The standard error of measurement and minimal detectable change of the AA were always lower than the CA. This study determined the concurrent validity, as well as intra- and interrater reproducibility, of the radiographic measurements of kyphosis and lordosis in children. Copyright © 2017. Published by Elsevier Inc.
The Influence of Dimensionality on Estimation in the Partial Credit Model.
ERIC Educational Resources Information Center
De Ayala, R. J.
1995-01-01
The effect of multidimensionality on partial credit model parameter estimation was studied with noncompensatory and compensatory data. Analysis results, consisting of root mean square error bias, Pearson product-moment corrections, standardized root mean squared differences, standardized differences between means, and descriptive statistics…
Standard deviation and standard error of the mean.
Lee, Dong Kyu; In, Junyong; Lee, Sangseok
2015-06-01
In most clinical and experimental studies, the standard deviation (SD) and the estimated standard error of the mean (SEM) are used to present the characteristics of sample data and to explain statistical analysis results. However, some authors occasionally muddle the distinctive usage between the SD and SEM in medical literature. Because the process of calculating the SD and SEM includes different statistical inferences, each of them has its own meaning. SD is the dispersion of data in a normal distribution. In other words, SD indicates how accurately the mean represents sample data. However the meaning of SEM includes statistical inference based on the sampling distribution. SEM is the SD of the theoretical distribution of the sample means (the sampling distribution). While either SD or SEM can be applied to describe data and statistical results, one should be aware of reasonable methods with which to use SD and SEM. We aim to elucidate the distinctions between SD and SEM and to provide proper usage guidelines for both, which summarize data and describe statistical results.
Standard deviation and standard error of the mean
In, Junyong; Lee, Sangseok
2015-01-01
In most clinical and experimental studies, the standard deviation (SD) and the estimated standard error of the mean (SEM) are used to present the characteristics of sample data and to explain statistical analysis results. However, some authors occasionally muddle the distinctive usage between the SD and SEM in medical literature. Because the process of calculating the SD and SEM includes different statistical inferences, each of them has its own meaning. SD is the dispersion of data in a normal distribution. In other words, SD indicates how accurately the mean represents sample data. However the meaning of SEM includes statistical inference based on the sampling distribution. SEM is the SD of the theoretical distribution of the sample means (the sampling distribution). While either SD or SEM can be applied to describe data and statistical results, one should be aware of reasonable methods with which to use SD and SEM. We aim to elucidate the distinctions between SD and SEM and to provide proper usage guidelines for both, which summarize data and describe statistical results. PMID:26045923
ERIC Educational Resources Information Center
Longford, Nicholas T.
Large scale surveys usually employ a complex sampling design and as a consequence, no standard methods for estimation of the standard errors associated with the estimates of population means are available. Resampling methods, such as jackknife or bootstrap, are often used, with reference to their properties of robustness and reduction of bias. A…
Spectral combination of spherical gravitational curvature boundary-value problems
NASA Astrophysics Data System (ADS)
PitoÅák, Martin; Eshagh, Mehdi; Šprlák, Michal; Tenzer, Robert; Novák, Pavel
2018-04-01
Four solutions of the spherical gravitational curvature boundary-value problems can be exploited for the determination of the Earth's gravitational potential. In this article we discuss the combination of simulated satellite gravitational curvatures, i.e., components of the third-order gravitational tensor, by merging these solutions using the spectral combination method. For this purpose, integral estimators of biased- and unbiased-types are derived. In numerical studies, we investigate the performance of the developed mathematical models for the gravitational field modelling in the area of Central Europe based on simulated satellite measurements. Firstly, we verify the correctness of the integral estimators for the spectral downward continuation by a closed-loop test. Estimated errors of the combined solution are about eight orders smaller than those from the individual solutions. Secondly, we perform a numerical experiment by considering the Gaussian noise with the standard deviation of 6.5× 10-17 m-1s-2 in the input data at the satellite altitude of 250 km above the mean Earth sphere. This value of standard deviation is equivalent to a signal-to-noise ratio of 10. Superior results with respect to the global geopotential model TIM-r5 are obtained by the spectral downward continuation of the vertical-vertical-vertical component with the standard deviation of 2.104 m2s-2, but the root mean square error is the largest and reaches 9.734 m2s-2. Using the spectral combination of all gravitational curvatures the root mean square error is more than 400 times smaller but the standard deviation reaches 17.234 m2s-2. The combination of more components decreases the root mean square error of the corresponding solutions while the standard deviations of the combined solutions do not improve as compared to the solution from the vertical-vertical-vertical component. The presented method represents a weight mean in the spectral domain that minimizes the root mean square error of the combined solutions and improves standard deviation of the solution based only on the least accurate components.
Static Scene Statistical Non-Uniformity Correction
2015-03-01
Error NUC Non-Uniformity Correction RMSE Root Mean Squared Error RSD Relative Standard Deviation S3NUC Static Scene Statistical Non-Uniformity...Deviation ( RSD ) which normalizes the standard deviation, σ, to the mean estimated value, µ using the equation RS D = σ µ × 100. The RSD plot of the gain...estimates is shown in Figure 4.1(b). The RSD plot shows that after a sample size of approximately 10, the different photocount values and the inclusion
Schoenberg, Mike R; Rum, Ruba S
2017-11-01
Rapid, clear and efficient communication of neuropsychological results is essential to benefit patient care. Errors in communication are a lead cause of medical errors; nevertheless, there remains a lack of consistency in how neuropsychological scores are communicated. A major limitation in the communication of neuropsychological results is the inconsistent use of qualitative descriptors for standardized test scores and the use of vague terminology. PubMed search from 1 Jan 2007 to 1 Aug 2016 to identify guidelines or consensus statements for the description and reporting of qualitative terms to communicate neuropsychological test scores was conducted. The review found the use of confusing and overlapping terms to describe various ranges of percentile standardized test scores. In response, we propose a simplified set of qualitative descriptors for normalized test scores (Q-Simple) as a means to reduce errors in communicating test results. The Q-Simple qualitative terms are: 'very superior', 'superior', 'high average', 'average', 'low average', 'borderline' and 'abnormal/impaired'. A case example illustrates the proposed Q-Simple qualitative classification system to communicate neuropsychological results for neurosurgical planning. The Q-Simple qualitative descriptor system is aimed as a means to improve and standardize communication of standardized neuropsychological test scores. Research are needed to further evaluate neuropsychological communication errors. Conveying the clinical implications of neuropsychological results in a manner that minimizes risk for communication errors is a quintessential component of evidence-based practice. Copyright © 2017 Elsevier B.V. All rights reserved.
A Brief Look at: Test Scores and the Standard Error of Measurement. E&R Report No. 10.13
ERIC Educational Resources Information Center
Holdzkom, David; Sumner, Brian; McMillen, Brad
2010-01-01
In the context of standardized testing, the standard error of measurement (SEM) is a measure of the factors other than the student's actual knowledge of the tested material that may affect the student's test score. Such factors may include distractions in the testing environment, fatigue, hunger, or even luck. This means that a student's observed…
Assessing dental caries prevalence in African-American youth and adults.
Seibert, Wilda; Farmer-Dixon, Cherae; Bolden, Theodore E; Stewart, James H
2004-01-01
It has been well documented that dental caries affect millions of children in the USA with the majority experiencing decay by the late teens. This is especially true for low-income minorities. The objective of this descriptive study was to determine dental caries prevalence in a sample of low-income African-American youth and adults. A total of 1034 individuals were examined. They were divided into two age groups: youth, 9-19 years and adults, 20-39 years. Females comprised approximately 65 percent (64.5) of the study group. The DMFT Index was used to determine caries prevalence in this study population. The DMFT findings showed that approximately 73 percent (72.9 percent) of the youth had either decayed, missing or filled teeth. Male youth had slightly higher DMFT mean scores than female youth: male mean = 7.93, standard error = 0.77, female mean = 7.52, standard error = 0.36; however, as females reached adulthood their DMFT scores increased substantially, mean = 15.18, standard error = 0.36. Caries prevalence was much lower in male adults, DMFT, mean = 7.22, standard error of 0.33. The decayed component for female adults mean score was 6.81, a slight increase over adult males, mean = 6.58. Although there were few filled teeth in both age groups, female adults had slightly more filled teeth than male adults, females mean = 2.91 vs. males; however, adult males experienced slightly more missing teeth, mean = 5.62 as compared to adult females, mean = 5.46. n = 2.20. Both female and male adults had an increase in missing teeth. As age increased there was a significant correlation among decayed, missing and filled teeth as tested by Analysis of Variance (ANOVA), p < 0.01. A significant correlation was found between filled teeth by sex, p < .005. We conclude that caries prevalence was higher in female and male youth, but dental caries increased more rapidly in females as they reached adulthood.
Regionalization of harmonic-mean streamflows in Kentucky
Martin, Gary R.; Ruhl, Kevin J.
1993-01-01
Harmonic-mean streamflow (Qh), defined as the reciprocal of the arithmetic mean of the reciprocal daily streamflow values, was determined for selected stream sites in Kentucky. Daily mean discharges for the available period of record through the 1989 water year at 230 continuous record streamflow-gaging stations located in and adjacent to Kentucky were used in the analysis. Periods of record affected by regulation were identified and analyzed separately from periods of record unaffected by regulation. Record-extension procedures were applied to short-term stations to reducetime-sampling error and, thus, improve estimates of the long-term Qh. Techniques to estimate the Qh at ungaged stream sites in Kentucky were developed. A regression model relating Qh to total drainage area and streamflow-variability index was presented with example applications. The regression model has a standard error of estimate of 76 percent and a standard error of prediction of 78 percent.
Accuracy of acoustic velocity metering systems for measurement of low velocity in open channels
Laenen, Antonius; Curtis, R. E.
1989-01-01
Acoustic velocity meter (AVM) accuracy depends on equipment limitations, the accuracy of acoustic-path length and angle determination, and the stability of the mean velocity to acoustic-path velocity relation. Equipment limitations depend on path length and angle, transducer frequency, timing oscillator frequency, and signal-detection scheme. Typically, the velocity error from this source is about +or-1 to +or-10 mms/sec. Error in acoustic-path angle or length will result in a proportional measurement bias. Typically, an angle error of one degree will result in a velocity error of 2%, and a path-length error of one meter in 100 meter will result in an error of 1%. Ray bending (signal refraction) depends on path length and density gradients present in the stream. Any deviation from a straight acoustic path between transducer will change the unique relation between path velocity and mean velocity. These deviations will then introduce error in the mean velocity computation. Typically, for a 200-meter path length, the resultant error is less than one percent, but for a 1,000 meter path length, the error can be greater than 10%. Recent laboratory and field tests have substantiated assumptions of equipment limitations. Tow-tank tests of an AVM system with a 4.69-meter path length yielded an average standard deviation error of 9.3 mms/sec, and the field tests of an AVM system with a 20.5-meter path length yielded an average standard deviation error of a 4 mms/sec. (USGS)
A method for estimating mean and low flows of streams in national forests of Montana
Parrett, Charles; Hull, J.A.
1985-01-01
Equations were developed for estimating mean annual discharge, 80-percent exceedance discharge, and 95-percent exceedance discharge for streams on national forest lands in Montana. The equations for mean annual discharge used active-channel width, drainage area and mean annual precipitation as independent variables, with active-channel width being most significant. The equations for 80-percent exceedance discharge and 95-percent exceedance discharge used only active-channel width as an independent variable. The standard error or estimate for the best equation for estimating mean annual discharge was 27 percent. The standard errors of estimate for the equations were 67 percent for estimating 80-percent exceedance discharge and 75 percent for estimating 95-percent exceedance discharge. (USGS)
Evaluation of the depth-integration method of measuring water discharge in large rivers
Moody, J.A.; Troutman, B.M.
1992-01-01
The depth-integration method oor measuring water discharge makes a continuos measurement of the water velocity from the water surface to the bottom at 20 to 40 locations or verticals across a river. It is especially practical for large rivers where river traffic makes it impractical to use boats attached to taglines strung across the river or to use current meters suspended from bridges. This method has the additional advantage over the standard two- and eight-tenths method in that a discharge-weighted suspended-sediment sample can be collected at the same time. When this method is used in large rivers such as the Missouri, Mississippi and Ohio, a microwave navigation system is used to determine the ship's position at each vertical sampling location across the river, and to make accurate velocity corrections to compensate for shift drift. An essential feature is a hydraulic winch that can lower and raise the current meter at a constant transit velocity so that the velocities at all depths are measured for equal lengths of time. Field calibration measurements show that: (1) the mean velocity measured on the upcast (bottom to surface) is within 1% of the standard mean velocity determined by 9-11 point measurements; (2) if the transit velocity is less than 25% of the mean velocity, then average error in the mean velocity is 4% or less. The major source of bias error is a result of mounting the current meter above a sounding weight and sometimes above a suspended-sediment sampling bottle, which prevents measurement of the velocity all the way to the bottom. The measured mean velocity is slightly larger than the true mean velocity. This bias error in the discharge is largest in shallow water (approximately 8% for the Missouri River at Hermann, MO, where the mean depth was 4.3 m) and smallest in deeper water (approximately 3% for the Mississippi River at Vickbsurg, MS, where the mean depth was 14.5 m). The major source of random error in the discharge is the natural variability of river velocities, which we assumed to be independent and random at each vertical. The standard error of the estimated mean velocity, at an individual vertical sampling location, may be as large as 9%, for large sand-bed alluvial rivers. The computed discharge, however, is a weighted mean of these random velocities. Consequently the standard error of computed discharge is divided by the square root of the number of verticals, producing typical values between 1 and 2%. The discharges measured by the depth-integrated method agreed within ??5% of those measured simultaneously by the standard two- and eight-tenths, six-tenth and moving boat methods. ?? 1992.
ERIC Educational Resources Information Center
Doppelt, Jerome E.
1956-01-01
The standard error of measurement as a means for estimating the margin of error that should be allowed for in test scores is discussed. The true score measures the performance that is characteristic of the person tested; the variations, plus and minus, around the true score describe a characteristic of the test. When the standard deviation is used…
Composite Gauss-Legendre Quadrature with Error Control
ERIC Educational Resources Information Center
Prentice, J. S. C.
2011-01-01
We describe composite Gauss-Legendre quadrature for determining definite integrals, including a means of controlling the approximation error. We compare the form and performance of the algorithm with standard Newton-Cotes quadrature. (Contains 1 table.)
Technology research for strapdown inertial experiment and digital flight control and guidance
NASA Technical Reports Server (NTRS)
Carestia, R. A.; Cottrell, D. E.
1985-01-01
A helicopter flight-test program to evaluate the performance of Honeywell's Tetrad - a strapdown, laser gyro, inertial navitation system is discussed. The results of 34 flights showed a mean final navigational velocity error of 5.06 knots, with a standard deviation of 3.84 knots; a corresponding mean final position error of 2.66 n.mi., with a standard deviation of 1.48 n.m.; and a modeled mean-position-error growth rate for the 34 tests of 1.96 knots, with a standard deviation of 1.09 knots. Tetrad's four-ring laser gyros provided reliable and accurate angular rate sensing during the test program and on sensor failures were detected during the evaluation. Criteria suitable for investigating cockpit systems in rotorcraft were developed. This criteria led to the development of two basic simulators. The first was a standard simulator which could be used to obtain baseline information for studying pilot workload and interactions. The second was an advanced simulator which integrated the RODAAS developed by Honeywell into this simulator. The second area also included surveying the aerospace industry to determine the level of use and impact of microcomputers and related components on avionics systems.
NASA Astrophysics Data System (ADS)
Xu, Chong-yu; Tunemar, Liselotte; Chen, Yongqin David; Singh, V. P.
2006-06-01
Sensitivity of hydrological models to input data errors have been reported in the literature for particular models on a single or a few catchments. A more important issue, i.e. how model's response to input data error changes as the catchment conditions change has not been addressed previously. This study investigates the seasonal and spatial effects of precipitation data errors on the performance of conceptual hydrological models. For this study, a monthly conceptual water balance model, NOPEX-6, was applied to 26 catchments in the Mälaren basin in Central Sweden. Both systematic and random errors were considered. For the systematic errors, 5-15% of mean monthly precipitation values were added to the original precipitation to form the corrupted input scenarios. Random values were generated by Monte Carlo simulation and were assumed to be (1) independent between months, and (2) distributed according to a Gaussian law of zero mean and constant standard deviation that were taken as 5, 10, 15, 20, and 25% of the mean monthly standard deviation of precipitation. The results show that the response of the model parameters and model performance depends, among others, on the type of the error, the magnitude of the error, physical characteristics of the catchment, and the season of the year. In particular, the model appears less sensitive to the random error than to the systematic error. The catchments with smaller values of runoff coefficients were more influenced by input data errors than were the catchments with higher values. Dry months were more sensitive to precipitation errors than were wet months. Recalibration of the model with erroneous data compensated in part for the data errors by altering the model parameters.
Federal Register 2010, 2011, 2012, 2013, 2014
2013-03-20
...The Food and Drug Administration (FDA or we) is correcting the preamble to a proposed rule that published in the Federal Register of January 16, 2013. That proposed rule would establish science-based minimum standards for the safe growing, harvesting, packing, and holding of produce, meaning fruits and vegetables grown for human consumption. FDA proposed these standards as part of our implementation of the FDA Food Safety Modernization Act. The document published with several technical errors, including some errors in cross references, as well as several errors in reference numbers cited throughout the document. This document corrects those errors. We are also placing a corrected copy of the proposed rule in the docket.
Parrett, Charles; Omang, R.J.; Hull, J.A.
1983-01-01
Equations for estimating mean annual runoff and peak discharge from measurements of channel geometry were developed for western and northeastern Montana. The study area was divided into two regions for the mean annual runoff analysis, and separate multiple-regression equations were developed for each region. The active-channel width was determined to be the most important independent variable in each region. The standard error of estimate for the estimating equation using active-channel width was 61 percent in the Northeast Region and 38 percent in the West region. The study area was divided into six regions for the peak discharge analysis, and multiple regression equations relating channel geometry and basin characteristics to peak discharges having recurrence intervals of 2, 5, 10, 25, 50 and 100 years were developed for each region. The standard errors of estimate for the regression equations using only channel width as an independent variable ranged from 35 to 105 percent. The standard errors improved in four regions as basin characteristics were added to the estimating equations. (USGS)
An affordable cuff-less blood pressure estimation solution.
Jain, Monika; Kumar, Niranjan; Deb, Sujay
2016-08-01
This paper presents a cuff-less hypertension pre-screening device that non-invasively monitors the Blood Pressure (BP) and Heart Rate (HR) continuously. The proposed device simultaneously records two clinically significant and highly correlated biomedical signals, viz., Electrocardiogram (ECG) and Photoplethysmogram (PPG). The device provides a common data acquisition platform that can interface with PC/laptop, Smart phone/tablet and Raspberry-pi etc. The hardware stores and processes the recorded ECG and PPG in order to extract the real-time BP and HR using kernel regression approach. The BP and HR estimation error is measured in terms of normalized mean square error, Error Standard Deviation (ESD) and Mean Absolute Error (MAE), with respect to a clinically proven digital BP monitor (OMRON HBP1300). The computed error falls under the maximum standard allowable error mentioned by Association for the Advancement of Medical Instrumentation; MAE <; 5 mmHg and ESD <; 8mmHg. The results are validated using two-tailed dependent sample t-test also. The proposed device is a portable low-cost home and clinic bases solution for continuous health monitoring.
Wiley, Jeffrey B.
2012-01-01
Base flows were compared with published streamflow statistics to assess climate variability and to determine the published statistics that can be substituted for annual and seasonal base flows of unregulated streams in West Virginia. The comparison study was done by the U.S. Geological Survey, in cooperation with the West Virginia Department of Environmental Protection, Division of Water and Waste Management. The seasons were defined as winter (January 1-March 31), spring (April 1-June 30), summer (July 1-September 30), and fall (October 1-December 31). Differences in mean annual base flows for five record sub-periods (1930-42, 1943-62, 1963-69, 1970-79, and 1980-2002) range from -14.9 to 14.6 percent when compared to the values for the period 1930-2002. Differences between mean seasonal base flows and values for the period 1930-2002 are less variable for winter and spring, -11.2 to 11.0 percent, than for summer and fall, -47.0 to 43.6 percent. Mean summer base flows (July-September) and mean monthly base flows for July, August, September, and October are approximately equal, within 7.4 percentage points of mean annual base flow. The mean of each of annual, spring, summer, fall, and winter base flows are approximately equal to the annual 50-percent (standard error of 10.3 percent), 45-percent (error of 14.6 percent), 75-percent (error of 11.8 percent), 55-percent (error of 11.2 percent), and 35-percent duration flows (error of 11.1 percent), respectively. The mean seasonal base flows for spring, summer, fall, and winter are approximately equal to the spring 50- to 55-percent (standard error of 6.8 percent), summer 45- to 50-percent (error of 6.7 percent), fall 45-percent (error of 15.2 percent), and winter 60-percent duration flows (error of 8.5 percent), respectively. Annual and seasonal base flows representative of the period 1930-2002 at unregulated streamflow-gaging stations and ungaged locations in West Virginia can be estimated using previously published values of statistics and procedures.
ERIC Educational Resources Information Center
Nugent, William Robert; Moore, Matthew; Story, Erin
2015-01-01
The standardized mean difference (SMD) is perhaps the most important meta-analytic effect size. It is typically used to represent the difference between treatment and control population means in treatment efficacy research. It is also used to represent differences between populations with different characteristics, such as persons who are…
Chan, Kelvin K W; Xie, Feng; Willan, Andrew R; Pullenayegum, Eleanor M
2017-04-01
Parameter uncertainty in value sets of multiattribute utility-based instruments (MAUIs) has received little attention previously. This false precision leads to underestimation of the uncertainty of the results of cost-effectiveness analyses. The aim of this study is to examine the use of multiple imputation as a method to account for this uncertainty of MAUI scoring algorithms. We fitted a Bayesian model with random effects for respondents and health states to the data from the original US EQ-5D-3L valuation study, thereby estimating the uncertainty in the EQ-5D-3L scoring algorithm. We applied these results to EQ-5D-3L data from the Commonwealth Fund (CWF) Survey for Sick Adults ( n = 3958), comparing the standard error of the estimated mean utility in the CWF population using the predictive distribution from the Bayesian mixed-effect model (i.e., incorporating parameter uncertainty in the value set) with the standard error of the estimated mean utilities based on multiple imputation and the standard error using the conventional approach of using MAUI (i.e., ignoring uncertainty in the value set). The mean utility in the CWF population based on the predictive distribution of the Bayesian model was 0.827 with a standard error (SE) of 0.011. When utilities were derived using the conventional approach, the estimated mean utility was 0.827 with an SE of 0.003, which is only 25% of the SE based on the full predictive distribution of the mixed-effect model. Using multiple imputation with 20 imputed sets, the mean utility was 0.828 with an SE of 0.011, which is similar to the SE based on the full predictive distribution. Ignoring uncertainty of the predicted health utilities derived from MAUIs could lead to substantial underestimation of the variance of mean utilities. Multiple imputation corrects for this underestimation so that the results of cost-effectiveness analyses using MAUIs can report the correct degree of uncertainty.
Code of Federal Regulations, 2010 CFR
2010-01-01
... defined in section 1 of this appendix is as follows: (a) The standard deviation of lateral track errors shall be less than 6.3 NM (11.7 Km). Standard deviation is a statistical measure of data about a mean... standard deviation about the mean encompasses approximately 68 percent of the data and plus or minus 2...
Derivation of an analytic expression for the error associated with the noise reduction rating
NASA Astrophysics Data System (ADS)
Murphy, William J.
2005-04-01
Hearing protection devices are assessed using the Real Ear Attenuation at Threshold (REAT) measurement procedure for the purpose of estimating the amount of noise reduction provided when worn by a subject. The rating number provided on the protector label is a function of the mean and standard deviation of the REAT results achieved by the test subjects. If a group of subjects have a large variance, then it follows that the certainty of the rating should be correspondingly lower. No estimate of the error of a protector's rating is given by existing standards or regulations. Propagation of errors was applied to the Noise Reduction Rating to develop an analytic expression for the hearing protector rating error term. Comparison of the analytic expression for the error to the standard deviation estimated from Monte Carlo simulation of subject attenuations yielded a linear relationship across several protector types and assumptions for the variance of the attenuations.
Using Least Squares for Error Propagation
ERIC Educational Resources Information Center
Tellinghuisen, Joel
2015-01-01
The method of least-squares (LS) has a built-in procedure for estimating the standard errors (SEs) of the adjustable parameters in the fit model: They are the square roots of the diagonal elements of the covariance matrix. This means that one can use least-squares to obtain numerical values of propagated errors by defining the target quantities as…
Dörnberger, V; Dörnberger, G
1987-01-01
On 99 testes of corpses (death had occurred between 26 und 86 years) comparative volumetry was done. In the left surrounding capsules (without scrotal skin and tunica dartos) the testes were measured via real time sonography in a waterbath (7.5 MHz linear-scan), afterwards length, breadth and height were measured by a sliding calibre, the largest diameter (the length) of the testis was determined by Schirren's circle and finally the size of the testis was measured via Prader's orchidometer. After all the testes were surgically exposed, their volume (by litres) was determined according to Archimedes' principle. As for the Archimedes' principle a random mean error of 7% must be accepted, sonographic determination of the volume showed a random mean error of 15%. Whereas the accuracy of measurement increases with increasing volumes, both methods should be used with caution if the volumes are below 4 ml, since the possibilities of error are rather great. According to Prader's orchidometer the measured volumes on average were higher (+ 27%) with a random mean error of 19.5%. With Schirren's circle the obtained mean value was even higher (+ 52%) in comparison to the "real" volume by Archimedes' principle with a random mean error of 19%. The measurements of the testes in the left capsules by sliding calibre can be optimized, if one applies a correcting factor f (sliding calibre) = 0.39 for calculation of the testis volume corresponding to an ellipsoid. Here you will get the same mean value as in Archimedes' principle with a standard mean error of only 9%. If one applies the correction factor of real time sonography of testis f (sono) = 0.65 the mean value of sliding calibre measurements would be 68.8% too high with a standard mean error of 20.3%. For measurements via sliding calibre the calculation of the testis volume corresponding to an ellipsoid one should apply the smaller factor f (sliding calibre) = 0.39, because in this way the left capsules of testis and the epididymis are considered.
ERIC Educational Resources Information Center
Vardeman, Stephen B.; Wendelberger, Joanne R.
2005-01-01
There is a little-known but very simple generalization of the standard result that for uncorrelated random variables with common mean [mu] and variance [sigma][superscript 2], the expected value of the sample variance is [sigma][superscript 2]. The generalization justifies the use of the usual standard error of the sample mean in possibly…
Verification of Calculated Skin Doses in Postmastectomy Helical Tomotherapy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ito, Shima; Parker, Brent C., E-mail: bcparker@marybird.com; Mary Bird Perkins Cancer Center, Baton Rouge, LA
2011-10-01
Purpose: To verify the accuracy of calculated skin doses in helical tomotherapy for postmastectomy radiation therapy (PMRT). Methods and Materials: In vivo thermoluminescent dosimeters (TLDs) were used to measure the skin dose at multiple points in each of 14 patients throughout the course of treatment on a TomoTherapy Hi.Art II system, for a total of 420 TLD measurements. Five patients were evaluated near the location of the mastectomy scar, whereas 9 patients were evaluated throughout the treatment volume. The measured dose at each location was compared with calculations from the treatment planning system. Results: The mean difference and standard errormore » of the mean difference between measurement and calculation for the scar measurements was -1.8% {+-} 0.2% (standard deviation [SD], 4.3%; range, -11.1% to 10.6%). The mean difference and standard error of the mean difference between measurement and calculation for measurements throughout the treatment volume was -3.0% {+-} 0.4% (SD, 4.7%; range, -18.4% to 12.6%). The mean difference and standard error of the mean difference between measurement and calculation for all measurements was -2.1% {+-} 0.2% (standard deviation, 4.5%: range, -18.4% to 12.6%). The mean difference between measured and calculated TLD doses was statistically significant at two standard deviations of the mean, but was not clinically significant (i.e., was <5%). However, 23% of the measured TLD doses differed from the calculated TLD doses by more than 5%. Conclusions: The mean of the measured TLD doses agreed with TomoTherapy calculated TLD doses within our clinical criterion of 5%.« less
Quantifying the uncertainty of regional and national estimates of soil carbon stocks
NASA Astrophysics Data System (ADS)
Papritz, Andreas
2013-04-01
At regional and national scales, carbon (C) stocks are frequently estimated by means of regression models. Such statistical models link measurements of carbons stocks, recorded for a set of soil profiles or soil cores, to covariates that characterize soil formation conditions and land management. A prerequisite is that these covariates are available for any location within a region of interest G because they are used along with the fitted regression coefficients to predict the carbon stocks at the nodes of a fine-meshed grid that is laid over G. The mean C stock in G is then estimated by the arithmetic mean of the stock predictions for the grid nodes. Apart from the mean stock, the precision of the estimate is often also of interest, for example to judge whether the mean C stock has changed significantly between two inventories. The standard error of the estimated mean stock in G can be computed from the regression results as well. Two issues are thereby important: (i) How large is the area of G relative to the support of the measurements? (ii) Are the residuals of the regression model spatially auto-correlated or is the assumption of statistical independence tenable? Both issues are correctly handled if one adopts a geostatistical block kriging approach for estimating the mean C stock within a region and its standard error. In the presentation I shall summarize the main ideas of external drift block kriging. To compute the standard error of the mean stock, one has in principle to sum the elements a potentially very large covariance matrix of point prediction errors, but I shall show that the required term can be approximated very well by Monte Carlo techniques. I shall further illustrated with a few examples how the standard error of the mean stock estimate changes with the size of G and with the strenght of the auto-correlation of the regression residuals. As an application a robust variant of block kriging is used to quantify the mean carbon stock stored in the soils of Swiss forests (Nussbaum et al., 2012). Nussbaum, M., Papritz, A., Baltensweiler, A., and Walthert, L. (2012). Organic carbon stocks of swiss forest soils. Final report, Institute of Terrestrial Ecosystems, ETH Zürich and Swiss Federal Institute for Forest, Snow and Landscape Research (WSL), pp. 51, http://e-collection.library.ethz.ch/eserv/eth:6027/eth-6027-01.pdf
Methods for estimating streamflow at mountain fronts in southern New Mexico
Waltemeyer, S.D.
1994-01-01
The infiltration of streamflow is potential recharge to alluvial-basin aquifers at or near mountain fronts in southern New Mexico. Data for 13 streamflow-gaging stations were used to determine a relation between mean annual stream- flow and basin and climatic conditions. Regression analysis was used to develop an equation that can be used to estimate mean annual streamflow on the basis of drainage areas and mean annual precipi- tation. The average standard error of estimate for this equation is 46 percent. Regression analysis also was used to develop an equation to estimate mean annual streamflow on the basis of active- channel width. Measurements of the width of active channels were determined for 6 of the 13 gaging stations. The average standard error of estimate for this relation is 29 percent. Stream- flow estimates made using a regression equation based on channel geometry are considered more reliable than estimates made from an equation based on regional relations of basin and climatic conditions. The sample size used to develop these relations was small, however, and the reported standard error of estimate may not represent that of the entire population. Active-channel-width measurements were made at 23 ungaged sites along the Rio Grande upstream from Elephant Butte Reservoir. Data for additional sites would be needed for a more comprehensive assessment of mean annual streamflow in southern New Mexico.
Xia, Lang; Mao, Kebiao; Ma, Ying; Zhao, Fen; Jiang, Lipeng; Shen, Xinyi; Qin, Zhihao
2014-01-01
A practical algorithm was proposed to retrieve land surface temperature (LST) from Visible Infrared Imager Radiometer Suite (VIIRS) data in mid-latitude regions. The key parameter transmittance is generally computed from water vapor content, while water vapor channel is absent in VIIRS data. In order to overcome this shortcoming, the water vapor content was obtained from Moderate Resolution Imaging Spectroradiometer (MODIS) data in this study. The analyses on the estimation errors of vapor content and emissivity indicate that when the water vapor errors are within the range of ±0.5 g/cm2, the mean retrieval error of the present algorithm is 0.634 K; while the land surface emissivity errors range from −0.005 to +0.005, the mean retrieval error is less than 1.0 K. Validation with the standard atmospheric simulation shows the average LST retrieval error for the twenty-three land types is 0.734 K, with a standard deviation value of 0.575 K. The comparison between the ground station LST data indicates the retrieval mean accuracy is −0.395 K, and the standard deviation value is 1.490 K in the regions with vegetation and water cover. Besides, the retrieval results of the test data have also been compared with the results measured by the National Oceanic and Atmospheric Administration (NOAA) VIIRS LST products, and the results indicate that 82.63% of the difference values are within the range of −1 to 1 K, and 17.37% of the difference values are within the range of ±2 to ±1 K. In a conclusion, with the advantages of multi-sensors taken fully exploited, more accurate results can be achieved in the retrieval of land surface temperature. PMID:25397919
Can Ultrasound Accurately Assess Ischiofemoral Space Dimensions? A Validation Study.
Finnoff, Jonathan T; Johnson, Adam C; Hollman, John H
2017-04-01
Ischiofemoral impingement is a potential cause of hip and buttock pain. It is evaluated commonly with magnetic resonance imaging (MRI). To our knowledge, no study previously has evaluated the ability of ultrasound to measure the ischiofemoral space (IFS) dimensions reliably. To determine whether ultrasound could accurately measure the IFS dimensions when compared with the gold standard imaging modality of MRI. A methods comparison study. Sports medicine center within a tertiary-care institution. A total of 5 male and 5 female asymptomatic adult subjects (age mean = 29.2 years, range = 23-35 years; body mass index mean = 23.5, range = 19.5-26.6) were recruited to participate in the study. Subjects were secured in a prone position on a MRI table with their hips in a neutral position. Their IFS dimensions were then acquired in a randomized order using diagnostic ultrasound and MRI. The main outcome measurements were the IFS dimensions acquired with ultrasound and MRI. The mean IFS dimensions measured with ultrasound was 29.5 mm (standard deviation [SD] 4.99 mm, standard error mean 1.12 mm), whereas those obtained with MRI were 28.25 mm (SD 5.91 mm, standard error mean 1.32 mm). The mean difference between the ultrasound and MRI measurements was 1.25 mm, which was not statistically significant (SD 3.71 mm, standard error mean 3.71 mm, 95% confidence interval -0.49 mm to 2.98 mm, t 19 = 1.506, P = .15). The Bland-Altman analysis indicated that the 95% limits of agreement between the 2 measurement was -6.0 to 8.5 mm, indicating that there was no systematic bias between the ultrasound and MRI measurements. Our findings suggest that the IFS measurements obtained with ultrasound are very similar to those obtained with MRI. Therefore, when evaluating individuals with suspected ischiofemoral impingement, one could consider using ultrasound to measure their IFS dimensions. III. Copyright © 2017 American Academy of Physical Medicine and Rehabilitation. Published by Elsevier Inc. All rights reserved.
Flight test results of the strapdown ring laser gyro tetrad inertial navigation system
NASA Technical Reports Server (NTRS)
Carestia, R. A.; Hruby, R. J.; Bjorkman, W. S.
1983-01-01
A helicopter flight test program undertaken to evaluate the performance of Tetrad (a strap down, laser gyro, inertial navigation system) is described. The results of 34 flights show a mean final navigational velocity error of 5.06 knots, with a standard deviation of 3.84 knots; a corresponding mean final position error of 2.66 n. mi., with a standard deviation of 1.48 n. mi.; and a modeled mean position error growth rate for the 34 tests of 1.96 knots, with a standard deviation of 1.09 knots. No laser gyro or accelerometer failures were detected during the flight tests. Off line parity residual studies used simulated failures with the prerecorded flight test and laboratory test data. The airborne Tetrad system's failure--detection logic, exercised during the tests, successfully demonstrated the detection of simulated ""hard'' failures and the system's ability to continue successfully to navigate by removing the simulated faulted sensor from the computations. Tetrad's four ring laser gyros provided reliable and accurate angular rate sensing during the 4 yr of the test program, and no sensor failures were detected during the evaluation of free inertial navigation performance.
Prediction of ethanol in bottled Chinese rice wine by NIR spectroscopy
NASA Astrophysics Data System (ADS)
Ying, Yibin; Yu, Haiyan; Pan, Xingxiang; Lin, Tao
2006-10-01
To evaluate the applicability of non-invasive visible and near infrared (VIS-NIR) spectroscopy for determining ethanol concentration of Chinese rice wine in square brown glass bottle, transmission spectra of 100 bottled Chinese rice wine samples were collected in the spectral range of 350-1200 nm. Statistical equations were established between the reference data and VIS-NIR spectra by partial least squares (PLS) regression method. Performance of three kinds of mathematical treatment of spectra (original spectra, first derivative spectra and second derivative spectra) were also discussed. The PLS models of original spectra turned out better results, with higher correlation coefficient in calibration (R cal) of 0.89, lower root mean standard error of calibration (RMSEC) of 0.165, and lower root mean standard error of cross validation (RMSECV) of 0.179. Using original spectra, PLS models for ethanol concentration prediction were developed. The R cal and the correlation coefficient in validation (R val) were 0.928 and 0.875, respectively; and the RMSEC and the root mean standard error of validation (RMSEP) were 0.135 (%, v v -1) and 0.177 (%, v v -1), respectively. The results demonstrated that VIS-NIR spectroscopy could be used to predict ethanol concentration in bottled Chinese rice wine.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Morley, Steven
The PyForecastTools package provides Python routines for calculating metrics for model validation, forecast verification and model comparison. For continuous predictands the package provides functions for calculating bias (mean error, mean percentage error, median log accuracy, symmetric signed bias), and for calculating accuracy (mean squared error, mean absolute error, mean absolute scaled error, normalized RMSE, median symmetric accuracy). Convenience routines to calculate the component parts (e.g. forecast error, scaled error) of each metric are also provided. To compare models the package provides: generic skill score; percent better. Robust measures of scale including median absolute deviation, robust standard deviation, robust coefficient ofmore » variation and the Sn estimator are all provided by the package. Finally, the package implements Python classes for NxN contingency tables. In the case of a multi-class prediction, accuracy and skill metrics such as proportion correct and the Heidke and Peirce skill scores are provided as object methods. The special case of a 2x2 contingency table inherits from the NxN class and provides many additional metrics for binary classification: probability of detection, probability of false detection, false alarm ration, threat score, equitable threat score, bias. Confidence intervals for many of these quantities can be calculated using either the Wald method or Agresti-Coull intervals.« less
Toward a new culture in verified quantum operations
NASA Astrophysics Data System (ADS)
Flammia, Steve
Measuring error rates of quantum operations has become an indispensable component in any aspiring platform for quantum computation. As the quality of controlled quantum operations increases, the demands on the accuracy and precision with which we measure these error rates also grows. However, well-meaning scientists that report these error measures are faced with a sea of non-standardized methodologies and are often asked during publication for only coarse information about how their estimates were obtained. Moreover, there are serious incentives to use methodologies and measures that will continually produce numbers that improve with time to show progress. These problems will only get exacerbated as our typical error rates go from 1 in 100 to 1 in 1000 or less. This talk will survey existing challenges presented by the current paradigm and offer some suggestions for solutions than can help us move toward fair and standardized methods for error metrology in quantum computing experiments, and towards a culture that values full disclose of methodologies and higher standards for data analysis.
Interpolation Error Estimates for Mean Value Coordinates over Convex Polygons
Rand, Alexander; Gillette, Andrew; Bajaj, Chandrajit
2012-01-01
In a similar fashion to estimates shown for Harmonic, Wachspress, and Sibson coordinates in [Gillette et al., AiCM, to appear], we prove interpolation error estimates for the mean value coordinates on convex polygons suitable for standard finite element analysis. Our analysis is based on providing a uniform bound on the gradient of the mean value functions for all convex polygons of diameter one satisfying certain simple geometric restrictions. This work makes rigorous an observed practical advantage of the mean value coordinates: unlike Wachspress coordinates, the gradient of the mean value coordinates does not become large as interior angles of the polygon approach π. PMID:24027379
Interpolation Error Estimates for Mean Value Coordinates over Convex Polygons.
Rand, Alexander; Gillette, Andrew; Bajaj, Chandrajit
2013-08-01
In a similar fashion to estimates shown for Harmonic, Wachspress, and Sibson coordinates in [Gillette et al., AiCM, to appear], we prove interpolation error estimates for the mean value coordinates on convex polygons suitable for standard finite element analysis. Our analysis is based on providing a uniform bound on the gradient of the mean value functions for all convex polygons of diameter one satisfying certain simple geometric restrictions. This work makes rigorous an observed practical advantage of the mean value coordinates: unlike Wachspress coordinates, the gradient of the mean value coordinates does not become large as interior angles of the polygon approach π.
Huang, Kuo-Chen; Wang, Hsiu-Feng; Chen, Chun-Ching
2010-06-01
Effects of shape, size, and chromaticity of stimuli on participants' errors when estimating the size of simultaneously presented standard and comparison stimuli were examined. 48 Taiwanese college students ages 20 to 24 years old (M = 22.3, SD = 1.3) participated. Analysis showed that the error for estimated size was significantly greater for those in the low-vision group than for those in the normal-vision and severe-myopia groups. The errors were significantly greater with green and blue stimuli than with red stimuli. Circular stimuli produced smaller mean errors than did square stimuli. The actual size of the standard stimulus significantly affected the error for estimated size. Errors for estimations using smaller sizes were significantly higher than when the sizes were larger. Implications of the results for graphics-based interface design, particularly when taking account of visually impaired users, are discussed.
Error-Based Design Space Windowing
NASA Technical Reports Server (NTRS)
Papila, Melih; Papila, Nilay U.; Shyy, Wei; Haftka, Raphael T.; Fitz-Coy, Norman
2002-01-01
Windowing of design space is considered in order to reduce the bias errors due to low-order polynomial response surfaces (RS). Standard design space windowing (DSW) uses a region of interest by setting a requirement on response level and checks it by a global RS predictions over the design space. This approach, however, is vulnerable since RS modeling errors may lead to the wrong region to zoom on. The approach is modified by introducing an eigenvalue error measure based on point-to-point mean squared error criterion. Two examples are presented to demonstrate the benefit of the error-based DSW.
Tamburini, Elena; Tagliati, Chiara; Bonato, Tiziano; Costa, Stefania; Scapoli, Chiara; Pedrini, Paola
2016-01-01
Near-infrared spectroscopy (NIRS) has been widely used for quantitative and/or qualitative determination of a wide range of matrices. The objective of this study was to develop a NIRS method for the quantitative determination of fluorine content in polylactide (PLA)-talc blends. A blending profile was obtained by mixing different amounts of PLA granules and talc powder. The calibration model was built correlating wet chemical data (alkali digestion method) and NIR spectra. Using FT (Fourier Transform)-NIR technique, a Partial Least Squares (PLS) regression model was set-up, in a concentration interval of 0 ppm of pure PLA to 800 ppm of pure talc. Fluorine content prediction (R2cal = 0.9498; standard error of calibration, SEC = 34.77; standard error of cross-validation, SECV = 46.94) was then externally validated by means of a further 15 independent samples (R2EX.V = 0.8955; root mean standard error of prediction, RMSEP = 61.08). A positive relationship between an inorganic component as fluorine and NIR signal has been evidenced, and used to obtain quantitative analytical information from the spectra. PMID:27490548
Estimating extreme stream temperatures by the standard deviate method
NASA Astrophysics Data System (ADS)
Bogan, Travis; Othmer, Jonathan; Mohseni, Omid; Stefan, Heinz
2006-02-01
It is now widely accepted that global climate warming is taking place on the earth. Among many other effects, a rise in air temperatures is expected to increase stream temperatures indefinitely. However, due to evaporative cooling, stream temperatures do not increase linearly with increasing air temperatures indefinitely. Within the anticipated bounds of climate warming, extreme stream temperatures may therefore not rise substantially. With this concept in mind, past extreme temperatures measured at 720 USGS stream gauging stations were analyzed by the standard deviate method. In this method the highest stream temperatures are expressed as the mean temperature of a measured partial maximum stream temperature series plus its standard deviation multiplied by a factor KE (standard deviate). Various KE-values were explored; values of KE larger than 8 were found physically unreasonable. It is concluded that the value of KE should be in the range from 7 to 8. A unit error in estimating KE translates into a typical stream temperature error of about 0.5 °C. Using a logistic model for the stream temperature/air temperature relationship, a one degree error in air temperature gives a typical error of 0.16 °C in stream temperature. With a projected error in the enveloping standard deviate dKE=1.0 (range 0.5-1.5) and an error in projected high air temperature d Ta=2 °C (range 0-4 °C), the total projected stream temperature error is estimated as d Ts=0.8 °C.
Cost effectiveness of the US Geological Survey stream-gaging program in Alabama
Jeffcoat, H.H.
1987-01-01
A study of the cost effectiveness of the stream gaging program in Alabama identified data uses and funding sources for 72 surface water stations (including dam stations, slope stations, and continuous-velocity stations) operated by the U.S. Geological Survey in Alabama with a budget of $393,600. Of these , 58 gaging stations were used in all phases of the analysis at a funding level of $328,380. For the current policy of operation of the 58-station program, the average standard error of estimation of instantaneous discharge is 29.3%. This overall level of accuracy can be maintained with a budget of $319,800 by optimizing routes and implementing some policy changes. The maximum budget considered in the analysis was $361,200, which gave an average standard error of estimation of 20.6%. The minimum budget considered was $299,360, with an average standard error of estimation of 36.5%. The study indicates that a major source of error in the stream gaging records is lost or missing data that are the result of streamside equipment failure. If perfect equipment were available, the standard error in estimating instantaneous discharge under the current program and budget could be reduced to 18.6%. This can also be interpreted to mean that the streamflow data records have a standard error of this magnitude during times when the equipment is operating properly. (Author 's abstract)
Cook, Sarah F; Roberts, Jessica K; Samiee-Zafarghandy, Samira; Stockmann, Chris; King, Amber D; Deutsch, Nina; Williams, Elaine F; Allegaert, Karel; Wilkins, Diana G; Sherwin, Catherine M T; van den Anker, John N
2016-01-01
The aims of this study were to develop a population pharmacokinetic model for intravenous paracetamol in preterm and term neonates and to assess the generalizability of the model by testing its predictive performance in an external dataset. Nonlinear mixed-effects models were constructed from paracetamol concentration-time data in NONMEM 7.2. Potential covariates included body weight, gestational age, postnatal age, postmenstrual age, sex, race, total bilirubin, and estimated glomerular filtration rate. An external dataset was used to test the predictive performance of the model through calculation of bias, precision, and normalized prediction distribution errors. The model-building dataset included 260 observations from 35 neonates with a mean gestational age of 33.6 weeks [standard deviation (SD) 6.6]. Data were well-described by a one-compartment model with first-order elimination. Weight predicted paracetamol clearance and volume of distribution, which were estimated as 0.348 L/h (5.5 % relative standard error; 30.8 % coefficient of variation) and 2.46 L (3.5 % relative standard error; 14.3 % coefficient of variation), respectively, at the mean subject weight of 2.30 kg. An external evaluation was performed on an independent dataset that included 436 observations from 60 neonates with a mean gestational age of 35.6 weeks (SD 4.3). The median prediction error was 10.1 % [95 % confidence interval (CI) 6.1-14.3] and the median absolute prediction error was 25.3 % (95 % CI 23.1-28.1). Weight predicted intravenous paracetamol pharmacokinetics in neonates ranging from extreme preterm to full-term gestational status. External evaluation suggested that these findings should be generalizable to other similar patient populations.
Cook, Sarah F.; Roberts, Jessica K.; Samiee-Zafarghandy, Samira; Stockmann, Chris; King, Amber D.; Deutsch, Nina; Williams, Elaine F.; Allegaert, Karel; Sherwin, Catherine M. T.; van den Anker, John N.
2017-01-01
Objectives The aims of this study were to develop a population pharmacokinetic model for intravenous paracetamol in preterm and term neonates and to assess the generalizability of the model by testing its predictive performance in an external dataset. Methods Nonlinear mixed-effects models were constructed from paracetamol concentration–time data in NONMEM 7.2. Potential covariates included body weight, gestational age, postnatal age, postmenstrual age, sex, race, total bilirubin, and estimated glomerular filtration rate. An external dataset was used to test the predictive performance of the model through calculation of bias, precision, and normalized prediction distribution errors. Results The model-building dataset included 260 observations from 35 neonates with a mean gestational age of 33.6 weeks [standard deviation (SD) 6.6]. Data were well-described by a one-compartment model with first-order elimination. Weight predicted paracetamol clearance and volume of distribution, which were estimated as 0.348 L/h (5.5 % relative standard error; 30.8 % coefficient of variation) and 2.46 L (3.5 % relative standard error; 14.3 % coefficient of variation), respectively, at the mean subject weight of 2.30 kg. An external evaluation was performed on an independent dataset that included 436 observations from 60 neonates with a mean gestational age of 35.6 weeks (SD 4.3). The median prediction error was 10.1 % [95 % confidence interval (CI) 6.1–14.3] and the median absolute prediction error was 25.3 % (95 % CI 23.1–28.1). Conclusions Weight predicted intravenous paracetamol pharmacokinetics in neonates ranging from extreme preterm to full-term gestational status. External evaluation suggested that these findings should be generalizable to other similar patient populations. PMID:26201306
NASA Technical Reports Server (NTRS)
Warner, Joseph D.; Theofylaktos, Onoufrios
2012-01-01
A method of determining the bit error rate (BER) of a digital circuit from the measurement of the analog S-parameters of the circuit has been developed. The method is based on the measurement of the noise and the standard deviation of the noise in the S-parameters. Once the standard deviation and the mean of the S-parameters are known, the BER of the circuit can be calculated using the normal Gaussian function.
Concentrations of teicoplanin in serum and atrial appendages of patients undergoing cardiac surgery.
Bergeron, M G; Saginur, R; Desaulniers, D; Trottier, S; Goldstein, W; Foucault, P; Lessard, C
1990-01-01
The concentrations of teicoplanin in sera and heart tissues of 49 patients undergoing coronary bypass were measured. Each patient received a 6- or 12-mg/kg dose of teicoplanin administered in a slow intravenous bolus injection over 3 to 5 min beginning at the time of induction of anesthesia. Mean +/- standard error of the mean concentrations in serum were, for the two doses, respectively, 58.1 +/- 1.7 and 123.3 +/- 7.4 micrograms/ml 5 min after administration and 22.2 +/- 0.7 and 56.5 +/- 2.8 micrograms/ml at the time of removal of atrial appendages. Mean +/- standard error of the mean concentrations in tissue were 70.6 +/- 1.7 and 139.8 +/- 2.2 micrograms/g, respectively, giving mean tissue/serum ratios of 3.7 +/- 0.3 and 2.8 +/- 0.2, respectively. Teicoplanin penetrates heart tissue readily and reaches levels in the serum far in excess of the MICs for most pathogens that have been found to cause infections following open heart surgery. PMID:2149493
Multicollinearity and Regression Analysis
NASA Astrophysics Data System (ADS)
Daoud, Jamal I.
2017-12-01
In regression analysis it is obvious to have a correlation between the response and predictor(s), but having correlation among predictors is something undesired. The number of predictors included in the regression model depends on many factors among which, historical data, experience, etc. At the end selection of most important predictors is something objective due to the researcher. Multicollinearity is a phenomena when two or more predictors are correlated, if this happens, the standard error of the coefficients will increase [8]. Increased standard errors means that the coefficients for some or all independent variables may be found to be significantly different from In other words, by overinflating the standard errors, multicollinearity makes some variables statistically insignificant when they should be significant. In this paper we focus on the multicollinearity, reasons and consequences on the reliability of the regression model.
Half-lives of 214Pb and 214Bi.
Martz, D E; Langner, G H; Johnson, P R
1991-10-01
New measurements on chemically separated samples of 214Bi have yielded a mean half-life value of 19.71 +/- 0.02 min, where the error quoted is twice the standard deviation of the mean based on 23 decay runs. This result provides strong support for the historic 19.72 +/- 0.04 min half-life value and essentially excludes the 19.9-min value, both reported in previous studies. New measurements of the decay rate of 222Rn progeny activity initially in radioactive equilibrium have yielded a value of 26.89 +/- 0.03 min for the half-life of 214Pb, where the error quoted is twice the standard deviation of the mean based on 12 decay runs. This value is 0.1 min longer than the currently accepted 214Pb half-value of 26.8 min.
Analysis of tractable distortion metrics for EEG compression applications.
Bazán-Prieto, Carlos; Blanco-Velasco, Manuel; Cárdenas-Barrera, Julián; Cruz-Roldán, Fernando
2012-07-01
Coding distortion in lossy electroencephalographic (EEG) signal compression methods is evaluated through tractable objective criteria. The percentage root-mean-square difference, which is a global and relative indicator of the quality held by reconstructed waveforms, is the most widely used criterion. However, this parameter does not ensure compliance with clinical standard guidelines that specify limits to allowable noise in EEG recordings. As a result, expert clinicians may have difficulties interpreting the resulting distortion of the EEG for a given value of this parameter. Conversely, the root-mean-square error is an alternative criterion that quantifies distortion in understandable units. In this paper, we demonstrate that the root-mean-square error is better suited to control and to assess the distortion introduced by compression methods. The experiments conducted in this paper show that the use of the root-mean-square error as target parameter in EEG compression allows both clinicians and scientists to infer whether coding error is clinically acceptable or not at no cost for the compression ratio.
Fernández-Friera, Leticia; García-Ruiz, José Manuel; García-Álvarez, Ana; Fernández-Jiménez, Rodrigo; Sánchez-González, Javier; Rossello, Xavier; Gómez-Talavera, Sandra; López-Martín, Gonzalo J; Pizarro, Gonzalo; Fuster, Valentín; Ibáñez, Borja
2017-05-01
Area at risk (AAR) quantification is important to evaluate the efficacy of cardioprotective therapies. However, postinfarction AAR assessment could be influenced by the infarcted coronary territory. Our aim was to determine the accuracy of T 2 -weighted short tau triple-inversion recovery (T 2 W-STIR) cardiac magnetic resonance (CMR) imaging for accurate AAR quantification in anterior, lateral, and inferior myocardial infarctions. Acute reperfused myocardial infarction was experimentally induced in 12 pigs, with 40-minute occlusion of the left anterior descending (n = 4), left circumflex (n = 4), and right coronary arteries (n = 4). Perfusion CMR was performed during selective intracoronary gadolinium injection at the coronary occlusion site (in vivo criterion standard) and, additionally, a 7-day CMR, including T 2 W-STIR sequences, was performed. Finally, all animals were sacrificed and underwent postmortem Evans blue staining (classic criterion standard). The concordance between the CMR-based criterion standard and T 2 W-STIR to quantify AAR was high for anterior and inferior infarctions (r = 0.73; P = .001; mean error = 0.50%; limits = -12.68%-13.68% and r = 0.87; P = .001; mean error = -1.5%; limits = -8.0%-5.8%, respectively). Conversely, the correlation for the circumflex territories was poor (r = 0.21, P = .37), showing a higher mean error and wider limits of agreement. A strong correlation between pathology and the CMR-based criterion standard was observed (r = 0.84, P < .001; mean error = 0.91%; limits = -7.55%-9.37%). T 2 W-STIR CMR sequences are accurate to determine the AAR for anterior and inferior infarctions; however, their accuracy for lateral infarctions is poor. These findings may have important implications for the design and interpretation of clinical trials evaluating the effectiveness of cardioprotective therapies. Copyright © 2016 Sociedad Española de Cardiología. Published by Elsevier España, S.L.U. All rights reserved.
Practicality of performing medical procedures in chemical protective ensembles.
Garner, Alan; Laurence, Helen; Lee, Anna
2004-04-01
To determine whether certain life saving medical procedures can be successfully performed while wearing different levels of personal protective equipment (PPE), and whether these procedures can be performed in a clinically useful time frame. We assessed the capability of eight medical personnel to perform airway maintenance and antidote administration procedures on manikins, in all four described levels of PPE. The levels are: Level A--a fully encapsulated chemically resistant suit; Level B--a chemically resistant suit, gloves and boots with a full-faced positive pressure supplied air respirator; Level C--a chemically resistant splash suit, boots and gloves with an air-purifying positive or negative pressure respirator; Level D--a work uniform. Time in seconds to inflate the lungs of the manikin with bag-valve-mask, laryngeal mask airway (LMA) and endotracheal tube (ETT) were determined, as was the time to secure LMAs and ETTs with either tape or linen ties. Time to insert a cannula in a manikin was also determined. There was a significant difference in time taken to perform procedures in differing levels of personal protective equipment (F21,72 = 1.75, P = 0.04). Significant differences were found in: time to lung inflation using an endotracheal tube (A vs. C mean difference and standard error 75.6 +/- 23.9 s, P = 0.03; A vs. D mean difference and standard error 78.6 +/- 23.9 s, P = 0.03); time to insert a cannula (A vs. D mean difference and standard error 63.6 +/- 11.1 s, P < 0.001; C vs. D mean difference and standard error 40.0 +/- 11.1 s, P = 0.01). A significantly greater time to complete procedures was documented in Level A PPE (fully encapsulated suits) compared with Levels C and D. There was however, no significant difference in times between Level B and Level C. The common practice of equipping hospital and medical staff with only Level C protection should be re-evaluated.
Zonal average earth radiation budget measurements from satellites for climate studies
NASA Technical Reports Server (NTRS)
Ellis, J. S.; Haar, T. H. V.
1976-01-01
Data from 29 months of satellite radiation budget measurements, taken intermittently over the period 1964 through 1971, are composited into mean month, season and annual zonally averaged meridional profiles. Individual months, which comprise the 29 month set, were selected as representing the best available total flux data for compositing into large scale statistics for climate studies. A discussion of spatial resolution of the measurements along with an error analysis, including both the uncertainty and standard error of the mean, are presented.
NASA Astrophysics Data System (ADS)
Lilly, P.; Yanai, R. D.; Buckley, H. L.; Case, B. S.; Woollons, R. C.; Holdaway, R. J.; Johnson, J.
2016-12-01
Calculations of forest biomass and elemental content require many measurements and models, each contributing uncertainty to the final estimates. While sampling error is commonly reported, based on replicate plots, error due to uncertainty in the regression used to estimate biomass from tree diameter is usually not quantified. Some published estimates of uncertainty due to the regression models have used the uncertainty in the prediction of individuals, ignoring uncertainty in the mean, while others have propagated uncertainty in the mean while ignoring individual variation. Using the simple case of the calcium concentration of sugar maple leaves, we compare the variation among individuals (the standard deviation) to the uncertainty in the mean (the standard error) and illustrate the declining importance in the prediction of individual concentrations as the number of individuals increases. For allometric models, the analogous statistics are the prediction interval (or the residual variation in the model fit) and the confidence interval (describing the uncertainty in the best fit model). The effect of propagating these two sources of error is illustrated using the mass of sugar maple foliage. The uncertainty in individual tree predictions was large for plots with few trees; for plots with 30 trees or more, the uncertainty in individuals was less important than the uncertainty in the mean. Authors of previously published analyses have reanalyzed their data to show the magnitude of these two sources of uncertainty in scales ranging from experimental plots to entire countries. The most correct analysis will take both sources of uncertainty into account, but for practical purposes, country-level reports of uncertainty in carbon stocks, as required by the IPCC, can ignore the uncertainty in individuals. Ignoring the uncertainty in the mean will lead to exaggerated estimates of confidence in estimates of forest biomass and carbon and nutrient contents.
Eppenhof, Koen A J; Pluim, Josien P W
2018-04-01
Error estimation in nonlinear medical image registration is a nontrivial problem that is important for validation of registration methods. We propose a supervised method for estimation of registration errors in nonlinear registration of three-dimensional (3-D) images. The method is based on a 3-D convolutional neural network that learns to estimate registration errors from a pair of image patches. By applying the network to patches centered around every voxel, we construct registration error maps. The network is trained using a set of representative images that have been synthetically transformed to construct a set of image pairs with known deformations. The method is evaluated on deformable registrations of inhale-exhale pairs of thoracic CT scans. Using ground truth target registration errors on manually annotated landmarks, we evaluate the method's ability to estimate local registration errors. Estimation of full domain error maps is evaluated using a gold standard approach. The two evaluation approaches show that we can train the network to robustly estimate registration errors in a predetermined range, with subvoxel accuracy. We achieved a root-mean-square deviation of 0.51 mm from gold standard registration errors and of 0.66 mm from ground truth landmark registration errors.
NASA Astrophysics Data System (ADS)
Skourup, Henriette; Farrell, Sinéad Louise; Hendricks, Stefan; Ricker, Robert; Armitage, Thomas W. K.; Ridout, Andy; Andersen, Ole Baltazar; Haas, Christian; Baker, Steven
2017-11-01
State-of-the-art Arctic Ocean mean sea surface (MSS) models and global geoid models (GGMs) are used to support sea ice freeboard estimation from satellite altimeters, as well as in oceanographic studies such as mapping sea level anomalies and mean dynamic ocean topography. However, errors in a given model in the high-frequency domain, primarily due to unresolved gravity features, can result in errors in the estimated along-track freeboard. These errors are exacerbated in areas with a sparse lead distribution in consolidated ice pack conditions. Additionally model errors can impact ocean geostrophic currents, derived from satellite altimeter data, while remaining biases in these models may impact longer-term, multisensor oceanographic time series of sea level change in the Arctic. This study focuses on an assessment of five state-of-the-art Arctic MSS models (UCL13/04 and DTU15/13/10) and a commonly used GGM (EGM2008). We describe errors due to unresolved gravity features, intersatellite biases, and remaining satellite orbit errors, and their impact on the derivation of sea ice freeboard. The latest MSS models, incorporating CryoSat-2 sea surface height measurements, show improved definition of gravity features, such as the Gakkel Ridge. The standard deviation between models ranges 0.03-0.25 m. The impact of remaining MSS/GGM errors on freeboard retrieval can reach several decimeters in parts of the Arctic. While the maximum observed freeboard difference found in the central Arctic was 0.59 m (UCL13 MSS minus EGM2008 GGM), the standard deviation in freeboard differences is 0.03-0.06 m.
Breast Tissue Characterization with Photon-counting Spectral CT Imaging: A Postmortem Breast Study
Ding, Huanjun; Klopfer, Michael J.; Ducote, Justin L.; Masaki, Fumitaro
2014-01-01
Purpose To investigate the feasibility of breast tissue characterization in terms of water, lipid, and protein contents with a spectral computed tomographic (CT) system based on a cadmium zinc telluride (CZT) photon-counting detector by using postmortem breasts. Materials and Methods Nineteen pairs of postmortem breasts were imaged with a CZT-based photon-counting spectral CT system with beam energy of 100 kVp. The mean glandular dose was estimated to be in the range of 1.8–2.2 mGy. The images were corrected for pulse pile-up and other artifacts by using spectral distortion corrections. Dual-energy decomposition was then applied to characterize each breast into water, lipid, and protein contents. The precision of the three-compartment characterization was evaluated by comparing the composition of right and left breasts, where the standard error of the estimations was determined. The results of dual-energy decomposition were compared by using averaged root mean square to chemical analysis, which was used as the reference standard. Results The standard errors of the estimations of the right-left correlations obtained from spectral CT were 7.4%, 6.7%, and 3.2% for water, lipid, and protein contents, respectively. Compared with the reference standard, the average root mean square error in breast tissue composition was 2.8%. Conclusion Spectral CT can be used to accurately quantify the water, lipid, and protein contents in breast tissue in a laboratory study by using postmortem specimens. © RSNA, 2014 PMID:24814180
Wang, Liang; Yuan, Jin; Jiang, Hong; Yan, Wentao; Cintrón-Colón, Hector R; Perez, Victor L; DeBuc, Delia C; Feuer, William J; Wang, Jianhua
2016-03-01
This study determined (1) how many vessels (i.e., the vessel sampling) are needed to reliably characterize the bulbar conjunctival microvasculature and (2) if characteristic information can be obtained from the distribution histogram of the blood flow velocity and vessel diameter. Functional slitlamp biomicroscope was used to image hundreds of venules per subject. The bulbar conjunctiva in five healthy human subjects was imaged on six different locations in the temporal bulbar conjunctiva. The histograms of the diameter and velocity were plotted to examine whether the distribution was normal. Standard errors were calculated from the standard deviation and vessel sample size. The ratio of the standard error of the mean over the population mean was used to determine the sample size cutoff. The velocity was plotted as a function of the vessel diameter to display the distribution of the diameter and velocity. The results showed that the sampling size was approximately 15 vessels, which generated a standard error equivalent to 15% of the population mean from the total vessel population. The distributions of the diameter and velocity were not only unimodal, but also somewhat positively skewed and not normal. The blood flow velocity was related to the vessel diameter (r=0.23, P<0.05). This was the first study to determine the sampling size of the vessels and the distribution histogram of the blood flow velocity and vessel diameter, which may lead to a better understanding of the human microvascular system of the bulbar conjunctiva.
An Investigation of the Standard Errors of Expected A Posteriori Ability Estimates.
ERIC Educational Resources Information Center
De Ayala, R. J.; And Others
Expected a posteriori has a number of advantages over maximum likelihood estimation or maximum a posteriori (MAP) estimation methods. These include ability estimates (thetas) for all response patterns, less regression towards the mean than MAP ability estimates, and a lower average squared error. R. D. Bock and R. J. Mislevy (1982) state that the…
Fish: A New Computer Program for Friendly Introductory Statistics Help
ERIC Educational Resources Information Center
Brooks, Gordon P.; Raffle, Holly
2005-01-01
All introductory statistics students must master certain basic descriptive statistics, including means, standard deviations and correlations. Students must also gain insight into such complex concepts as the central limit theorem and standard error. This article introduces and describes the Friendly Introductory Statistics Help (FISH) computer…
Magnetic Field Measurements of the Spotted Yellow Dwarf DE Boo During 2001-2004
NASA Astrophysics Data System (ADS)
Plachinda, S.; Baklanova, D.; Butkovskaya, V.; Pankov, N.
2017-06-01
Spectropolarimetric observations of DE Boo have been performed at Crimean astrophysical observatory during 18 nights in 2001-2004. We present the result of the longitudinal magnetic field measurements on this star. The magnetic field varies from +44 G to -36 G with mean Standard Error (SE) of 8.2 G. For full array of the magnetic field measurements the difference between experimental errors and Monte Carlo errors is not statistically significant.
Estimating Climatological Bias Errors for the Global Precipitation Climatology Project (GPCP)
NASA Technical Reports Server (NTRS)
Adler, Robert; Gu, Guojun; Huffman, George
2012-01-01
A procedure is described to estimate bias errors for mean precipitation by using multiple estimates from different algorithms, satellite sources, and merged products. The Global Precipitation Climatology Project (GPCP) monthly product is used as a base precipitation estimate, with other input products included when they are within +/- 50% of the GPCP estimates on a zonal-mean basis (ocean and land separately). The standard deviation s of the included products is then taken to be the estimated systematic, or bias, error. The results allow one to examine monthly climatologies and the annual climatology, producing maps of estimated bias errors, zonal-mean errors, and estimated errors over large areas such as ocean and land for both the tropics and the globe. For ocean areas, where there is the largest question as to absolute magnitude of precipitation, the analysis shows spatial variations in the estimated bias errors, indicating areas where one should have more or less confidence in the mean precipitation estimates. In the tropics, relative bias error estimates (s/m, where m is the mean precipitation) over the eastern Pacific Ocean are as large as 20%, as compared with 10%-15% in the western Pacific part of the ITCZ. An examination of latitudinal differences over ocean clearly shows an increase in estimated bias error at higher latitudes, reaching up to 50%. Over land, the error estimates also locate regions of potential problems in the tropics and larger cold-season errors at high latitudes that are due to snow. An empirical technique to area average the gridded errors (s) is described that allows one to make error estimates for arbitrary areas and for the tropics and the globe (land and ocean separately, and combined). Over the tropics this calculation leads to a relative error estimate for tropical land and ocean combined of 7%, which is considered to be an upper bound because of the lack of sign-of-the-error canceling when integrating over different areas with a different number of input products. For the globe the calculated relative error estimate from this study is about 9%, which is also probably a slight overestimate. These tropical and global estimated bias errors provide one estimate of the current state of knowledge of the planet's mean precipitation.
Using First Differences to Reduce Inhomogeneity in Radiosonde Temperature Datasets.
NASA Astrophysics Data System (ADS)
Free, Melissa; Angell, James K.; Durre, Imke; Lanzante, John; Peterson, Thomas C.; Seidel, Dian J.
2004-11-01
The utility of a “first difference” method for producing temporally homogeneous large-scale mean time series is assessed. Starting with monthly averages, the method involves dropping data around the time of suspected discontinuities and then calculating differences in temperature from one year to the next, resulting in a time series of year-to-year differences for each month at each station. These first difference time series are then combined to form large-scale means, and mean temperature time series are constructed from the first difference series. When applied to radiosonde temperature data, the method introduces random errors that decrease with the number of station time series used to create the large-scale time series and increase with the number of temporal gaps in the station time series. Root-mean-square errors for annual means of datasets produced with this method using over 500 stations are estimated at no more than 0.03 K, with errors in trends less than 0.02 K decade-1 for 1960 97 at 500 mb. For a 50-station dataset, errors in trends in annual global means introduced by the first differencing procedure may be as large as 0.06 K decade-1 (for six breaks per series), which is greater than the standard error of the trend. Although the first difference method offers significant resource and labor advantages over methods that attempt to adjust the data, it introduces an error in large-scale mean time series that may be unacceptable in some cases.
Robust Mean and Covariance Structure Analysis through Iteratively Reweighted Least Squares.
ERIC Educational Resources Information Center
Yuan, Ke-Hai; Bentler, Peter M.
2000-01-01
Adapts robust schemes to mean and covariance structures, providing an iteratively reweighted least squares approach to robust structural equation modeling. Each case is weighted according to its distance, based on first and second order moments. Test statistics and standard error estimators are given. (SLD)
The statistical properties and possible causes of polar motion prediction errors
NASA Astrophysics Data System (ADS)
Kosek, Wieslaw; Kalarus, Maciej; Wnek, Agnieszka; Zbylut-Gorska, Maria
2015-08-01
The pole coordinate data predictions from different prediction contributors of the Earth Orientation Parameters Combination of Prediction Pilot Project (EOPCPPP) were studied to determine the statistical properties of polar motion forecasts by looking at the time series of differences between them and the future IERS pole coordinates data. The mean absolute errors, standard deviations as well as the skewness and kurtosis of these differences were computed together with their error bars as a function of prediction length. The ensemble predictions show a little smaller mean absolute errors or standard deviations however their skewness and kurtosis values are similar as the for predictions from different contributors. The skewness and kurtosis enable to check whether these prediction differences satisfy normal distribution. The kurtosis values diminish with the prediction length which means that the probability distribution of these prediction differences is becoming more platykurtic than letptokurtic. Non zero skewness values result from oscillating character of these differences for particular prediction lengths which can be due to the irregular change of the annual oscillation phase in the joint fluid (atmospheric + ocean + land hydrology) excitation functions. The variations of the annual oscillation phase computed by the combination of the Fourier transform band pass filter and the Hilbert transform from pole coordinates data as well as from pole coordinates model data obtained from fluid excitations are in a good agreement.
Parameter recovery, bias and standard errors in the linear ballistic accumulator model.
Visser, Ingmar; Poessé, Rens
2017-05-01
The linear ballistic accumulator (LBA) model (Brown & Heathcote, , Cogn. Psychol., 57, 153) is increasingly popular in modelling response times from experimental data. An R package, glba, has been developed to fit the LBA model using maximum likelihood estimation which is validated by means of a parameter recovery study. At sufficient sample sizes parameter recovery is good, whereas at smaller sample sizes there can be large bias in parameters. In a second simulation study, two methods for computing parameter standard errors are compared. The Hessian-based method is found to be adequate and is (much) faster than the alternative bootstrap method. The use of parameter standard errors in model selection and inference is illustrated in an example using data from an implicit learning experiment (Visser et al., , Mem. Cogn., 35, 1502). It is shown that typical implicit learning effects are captured by different parameters of the LBA model. © 2017 The British Psychological Society.
NASA Astrophysics Data System (ADS)
Ali, Mumtaz; Deo, Ravinesh C.; Downs, Nathan J.; Maraseni, Tek
2018-07-01
Forecasting drought by means of the World Meteorological Organization-approved Standardized Precipitation Index (SPI) is considered to be a fundamental task to support socio-economic initiatives and effectively mitigating the climate-risk. This study aims to develop a robust drought modelling strategy to forecast multi-scalar SPI in drought-rich regions of Pakistan where statistically significant lagged combinations of antecedent SPI are used to forecast future SPI. With ensemble-Adaptive Neuro Fuzzy Inference System ('ensemble-ANFIS') executed via a 10-fold cross-validation procedure, a model is constructed by randomly partitioned input-target data. Resulting in 10-member ensemble-ANFIS outputs, judged by mean square error and correlation coefficient in the training period, the optimal forecasts are attained by the averaged simulations, and the model is benchmarked with M5 Model Tree and Minimax Probability Machine Regression (MPMR). The results show the proposed ensemble-ANFIS model's preciseness was notably better (in terms of the root mean square and mean absolute error including the Willmott's, Nash-Sutcliffe and Legates McCabe's index) for the 6- and 12- month compared to the 3-month forecasts as verified by the largest error proportions that registered in smallest error band. Applying 10-member simulations, ensemble-ANFIS model was validated for its ability to forecast severity (S), duration (D) and intensity (I) of drought (including the error bound). This enabled uncertainty between multi-models to be rationalized more efficiently, leading to a reduction in forecast error caused by stochasticity in drought behaviours. Through cross-validations at diverse sites, a geographic signature in modelled uncertainties was also calculated. Considering the superiority of ensemble-ANFIS approach and its ability to generate uncertainty-based information, the study advocates the versatility of a multi-model approach for drought-risk forecasting and its prime importance for estimating drought properties over confidence intervals to generate better information for strategic decision-making.
Enoxacin penetration into human prostatic tissue.
Bergeron, M G; Roy, R; Lessard, C; Foucault, P
1988-01-01
Concurrent enoxacin concentrations in serum and prostatic tissue were determined in 14 patients. The mean ratios of enoxacin concentration in tissue over concentration in serum were 1.4 +/- 0.2 (standard error of the mean). The levels in serum and prostatic tissue were above the MICs for most urinary pathogens. PMID:3196004
Brindal, Emily; Wilson, Carlene; Mohr, Philip; Wittert, Gary
2012-02-01
To assess Australian consumers' perception of portion size of fast-food items and their ability to estimate energy content. Cross-sectional computer-based survey. Australia. Fast-food consumers (168 male, 324 female) were asked to recall the items eaten at the most recent visit to a fast-food restaurant, rate the prospective satiety and estimate the energy content of seven fast-food or 'standard' meals relative to a 9000 kJ Guideline Daily Amount. Nine dietitians also completed the energy estimation task. Ratings of prospective satiety generally aligned with the actual size of the meals and indicated that consumers perceived all meals to provide an adequate amount of food, although this differed by gender. The magnitude of the error in energy estimation by consumers was three to ten times that of the dietitians. In both males and females, the average error in energy estimation for the fast-food meals (females: mean 3911 (sd 1998) kJ; males: mean 3382 (sd 1957) kJ) was significantly (P < 0·001) larger than for the standard meals (females: mean 2607 (sd 1623) kJ; males: mean 2754 (sd 1652) kJ). In women, error in energy estimation for fast-food items predicted actual energy intake from fast-food items (β = 0·16, P < 0·01). Knowledge of the energy content of standard and fast-food meals in fast-food consumers in Australia is poor. Awareness of dietary energy should be a focus of health promotion if nutrition information, in its current format, is going to alter behaviour.
NASA Astrophysics Data System (ADS)
Li, Xiongwei; Wang, Zhe; Lui, Siu-Lung; Fu, Yangting; Li, Zheng; Liu, Jianming; Ni, Weidou
2013-10-01
A bottleneck of the wide commercial application of laser-induced breakdown spectroscopy (LIBS) technology is its relatively high measurement uncertainty. A partial least squares (PLS) based normalization method was proposed to improve pulse-to-pulse measurement precision for LIBS based on our previous spectrum standardization method. The proposed model utilized multi-line spectral information of the measured element and characterized the signal fluctuations due to the variation of plasma characteristic parameters (plasma temperature, electron number density, and total number density) for signal uncertainty reduction. The model was validated by the application of copper concentration prediction in 29 brass alloy samples. The results demonstrated an improvement on both measurement precision and accuracy over the generally applied normalization as well as our previously proposed simplified spectrum standardization method. The average relative standard deviation (RSD), average of the standard error (error bar), the coefficient of determination (R2), the root-mean-square error of prediction (RMSEP), and average value of the maximum relative error (MRE) were 1.80%, 0.23%, 0.992, 1.30%, and 5.23%, respectively, while those for the generally applied spectral area normalization were 3.72%, 0.71%, 0.973, 1.98%, and 14.92%, respectively.
Statistical considerations for grain-size analyses of tills
Jacobs, A.M.
1971-01-01
Relative percentages of sand, silt, and clay from samples of the same till unit are not identical because of different lithologies in the source areas, sorting in transport, random variation, and experimental error. Random variation and experimental error can be isolated from the other two as follows. For each particle-size class of each till unit, a standard population is determined by using a normally distributed, representative group of data. New measurements are compared with the standard population and, if they compare satisfactorily, the experimental error is not significant and random variation is within the expected range for the population. The outcome of the comparison depends on numerical criteria derived from a graphical method rather than on a more commonly used one-way analysis of variance with two treatments. If the number of samples and the standard deviation of the standard population are substituted in a t-test equation, a family of hyperbolas is generated, each of which corresponds to a specific number of subsamples taken from each new sample. The axes of the graphs of the hyperbolas are the standard deviation of new measurements (horizontal axis) and the difference between the means of the new measurements and the standard population (vertical axis). The area between the two branches of each hyperbola corresponds to a satisfactory comparison between the new measurements and the standard population. Measurements from a new sample can be tested by plotting their standard deviation vs. difference in means on axes containing a hyperbola corresponding to the specific number of subsamples used. If the point lies between the branches of the hyperbola, the measurements are considered reliable. But if the point lies outside this region, the measurements are repeated. Because the critical segment of the hyperbola is approximately a straight line parallel to the horizontal axis, the test is simplified to a comparison between the means of the standard population and the means of the subsample. The minimum number of subsamples required to prove significant variation between samples caused by different lithologies in the source areas and sorting in transport can be determined directly from the graphical method. The minimum number of subsamples required is the maximum number to be run for economy of effort. ?? 1971 Plenum Publishing Corporation.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Giri, U; Ganesh, T; Saini, V
2016-06-15
Purpose: To quantify inherent uncertainty associated with a volumetric imaging system in its determination of positional shifts. Methods: The study was performed on an Elekta Axesse™ linac’s XVI cone beam computed tomography (CBCT) system. A CT image data set of a Penta- Guide phantom was used as reference image by placing isocenter at the center of the phantom.The phantom was placed arbitrarily on the couch close to isocenter and CBCT images were obtained. The CBCT dataset was matched with the reference image using XVI software and the shifts were determined in 6-dimensions. Without moving the phantom, this process was repeatedmore » 20 times consecutively within 30 minutes on a single day. Mean shifts and their standard deviations in all 6-dimensions were determined for all the 20 instances of imaging. For any given day, the first set of shifts obtained was kept as reference and the deviations of the subsequent 19 sets from the reference set were scored. Mean differences and their standard deviations were determined. In this way, data were obtained for 30 consecutive working days. Results: Tabulating the mean deviations and their standard deviations observed on each day for the 30 measurement days, systematic and random errors in the determination of shifts by XVI software were calculated. The systematic errors were found to be 0.03, 0.04 and 0.03 mm while random errors were 0.05, 0.06 and 0.06 mm in lateral, craniocaudal and anterio-posterior directions respectively. For rotational shifts, the systematic errors were 0.02°, 0.03° and 0.03° and random errors were 0.06°, 0.05° and 0.05° in pitch, roll and yaw directions respectively. Conclusion: The inherent uncertainties in every image guidance system should be assessed and baseline values established at the time of its commissioning. These shall be periodically tested as part of the QA protocol.« less
This commentary is the second of a series outlining one specific concept in interpreting biomarkers data. In the first, an observational method was presented for assessing the distribution of measurements before making parametric calculations. Here, the discussion revolves around...
Improving patient safety through quality assurance.
Raab, Stephen S
2006-05-01
Anatomic pathology laboratories use several quality assurance tools to detect errors and to improve patient safety. To review some of the anatomic pathology laboratory patient safety quality assurance practices. Different standards and measures in anatomic pathology quality assurance and patient safety were reviewed. Frequency of anatomic pathology laboratory error, variability in the use of specific quality assurance practices, and use of data for error reduction initiatives. Anatomic pathology error frequencies vary according to the detection method used. Based on secondary review, a College of American Pathologists Q-Probes study showed that the mean laboratory error frequency was 6.7%. A College of American Pathologists Q-Tracks study measuring frozen section discrepancy found that laboratories improved the longer they monitored and shared data. There is a lack of standardization across laboratories even for governmentally mandated quality assurance practices, such as cytologic-histologic correlation. The National Institutes of Health funded a consortium of laboratories to benchmark laboratory error frequencies, perform root cause analysis, and design error reduction initiatives, using quality assurance data. Based on the cytologic-histologic correlation process, these laboratories found an aggregate nongynecologic error frequency of 10.8%. Based on gynecologic error data, the laboratory at my institution used Toyota production system processes to lower gynecologic error frequencies and to improve Papanicolaou test metrics. Laboratory quality assurance practices have been used to track error rates, and laboratories are starting to use these data for error reduction initiatives.
Navigation Operational Concept,
1991-08-01
Area Control Facility AFSS Automated Flight Service Station AGL Above Ground Level ALSF-2 Approach Light System with Sequence Flasher Model 2 ATC Air...equipment contributes less than 0.30 NM error at the missed approach point. This total system use accuracy allows for flight technical error of up to...means for transition from instrument to visual flight . This function is provided by a series of standard lighting systems : the Approach Lighting
Comparative study of anatomical normalization errors in SPM and 3D-SSP using digital brain phantom.
Onishi, Hideo; Matsutake, Yuki; Kawashima, Hiroki; Matsutomo, Norikazu; Amijima, Hizuru
2011-01-01
In single photon emission computed tomography (SPECT) cerebral blood flow studies, two major algorithms are widely used statistical parametric mapping (SPM) and three-dimensional stereotactic surface projections (3D-SSP). The aim of this study is to compare an SPM algorithm-based easy Z score imaging system (eZIS) and a 3D-SSP system in the errors of anatomical standardization using 3D-digital brain phantom images. We developed a 3D-brain digital phantom based on MR images to simulate the effects of head tilt, perfusion defective region size, and count value reduction rate on the SPECT images. This digital phantom was used to compare the errors of anatomical standardization by the eZIS and the 3D-SSP algorithms. While the eZIS allowed accurate standardization of the images of the phantom simulating a head in rotation, lateroflexion, anteflexion, or retroflexion without angle dependency, the standardization by 3D-SSP was not accurate enough at approximately 25° or more head tilt. When the simulated head contained perfusion defective regions, one of the 3D-SSP images showed an error of 6.9% from the true value. Meanwhile, one of the eZIS images showed an error as large as 63.4%, revealing a significant underestimation. When required to evaluate regions with decreased perfusion due to such causes as hemodynamic cerebral ischemia, the 3D-SSP is desirable. In a statistical image analysis, we must reconfirm the image after anatomical standardization by all means.
Comparative study of standard space and real space analysis of quantitative MR brain data.
Aribisala, Benjamin S; He, Jiabao; Blamire, Andrew M
2011-06-01
To compare the robustness of region of interest (ROI) analysis of magnetic resonance imaging (MRI) brain data in real space with analysis in standard space and to test the hypothesis that standard space image analysis introduces more partial volume effect errors compared to analysis of the same dataset in real space. Twenty healthy adults with no history or evidence of neurological diseases were recruited; high-resolution T(1)-weighted, quantitative T(1), and B(0) field-map measurements were collected. Algorithms were implemented to perform analysis in real and standard space and used to apply a simple standard ROI template to quantitative T(1) datasets. Regional relaxation values and histograms for both gray and white matter tissues classes were then extracted and compared. Regional mean T(1) values for both gray and white matter were significantly lower using real space compared to standard space analysis. Additionally, regional T(1) histograms were more compact in real space, with smaller right-sided tails indicating lower partial volume errors compared to standard space analysis. Standard space analysis of quantitative MRI brain data introduces more partial volume effect errors biasing the analysis of quantitative data compared to analysis of the same dataset in real space. Copyright © 2011 Wiley-Liss, Inc.
Leão, William L.; Chen, Ming-Hui
2017-01-01
A stochastic volatility-in-mean model with correlated errors using the generalized hyperbolic skew Student-t (GHST) distribution provides a robust alternative to the parameter estimation for daily stock returns in the absence of normality. An efficient Markov chain Monte Carlo (MCMC) sampling algorithm is developed for parameter estimation. The deviance information, the Bayesian predictive information and the log-predictive score criterion are used to assess the fit of the proposed model. The proposed method is applied to an analysis of the daily stock return data from the Standard & Poor’s 500 index (S&P 500). The empirical results reveal that the stochastic volatility-in-mean model with correlated errors and GH-ST distribution leads to a significant improvement in the goodness-of-fit for the S&P 500 index returns dataset over the usual normal model. PMID:29333210
Leão, William L; Abanto-Valle, Carlos A; Chen, Ming-Hui
2017-01-01
A stochastic volatility-in-mean model with correlated errors using the generalized hyperbolic skew Student-t (GHST) distribution provides a robust alternative to the parameter estimation for daily stock returns in the absence of normality. An efficient Markov chain Monte Carlo (MCMC) sampling algorithm is developed for parameter estimation. The deviance information, the Bayesian predictive information and the log-predictive score criterion are used to assess the fit of the proposed model. The proposed method is applied to an analysis of the daily stock return data from the Standard & Poor's 500 index (S&P 500). The empirical results reveal that the stochastic volatility-in-mean model with correlated errors and GH-ST distribution leads to a significant improvement in the goodness-of-fit for the S&P 500 index returns dataset over the usual normal model.
Sangnawakij, Patarawan; Böhning, Dankmar; Adams, Stephen; Stanton, Michael; Holling, Heinz
2017-04-30
Statistical inference for analyzing the results from several independent studies on the same quantity of interest has been investigated frequently in recent decades. Typically, any meta-analytic inference requires that the quantity of interest is available from each study together with an estimate of its variability. The current work is motivated by a meta-analysis on comparing two treatments (thoracoscopic and open) of congenital lung malformations in young children. Quantities of interest include continuous end-points such as length of operation or number of chest tube days. As studies only report mean values (and no standard errors or confidence intervals), the question arises how meta-analytic inference can be developed. We suggest two methods to estimate study-specific variances in such a meta-analysis, where only sample means and sample sizes are available in the treatment arms. A general likelihood ratio test is derived for testing equality of variances in two groups. By means of simulation studies, the bias and estimated standard error of the overall mean difference from both methodologies are evaluated and compared with two existing approaches: complete study analysis only and partial variance information. The performance of the test is evaluated in terms of type I error. Additionally, we illustrate these methods in the meta-analysis on comparing thoracoscopic and open surgery for congenital lung malformations and in a meta-analysis on the change in renal function after kidney donation. Copyright © 2017 John Wiley & Sons, Ltd. Copyright © 2017 John Wiley & Sons, Ltd.
NASA Astrophysics Data System (ADS)
Duan, Lian; Makita, Shuichi; Yamanari, Masahiro; Lim, Yiheng; Yasuno, Yoshiaki
2011-08-01
A Monte-Carlo-based phase retardation estimator is developed to correct the systematic error in phase retardation measurement by polarization sensitive optical coherence tomography (PS-OCT). Recent research has revealed that the phase retardation measured by PS-OCT has a distribution that is neither symmetric nor centered at the true value. Hence, a standard mean estimator gives us erroneous estimations of phase retardation, and it degrades the performance of PS-OCT for quantitative assessment. In this paper, the noise property in phase retardation is investigated in detail by Monte-Carlo simulation and experiments. A distribution transform function is designed to eliminate the systematic error by using the result of the Monte-Carlo simulation. This distribution transformation is followed by a mean estimator. This process provides a significantly better estimation of phase retardation than a standard mean estimator. This method is validated both by numerical simulations and experiments. The application of this method to in vitro and in vivo biological samples is also demonstrated.
Parrett, Charles; Johnson, D.R.; Hull, J.A.
1989-01-01
Estimates of streamflow characteristics (monthly mean flow that is exceeded 90, 80, 50, and 20 percent of the time for all years of record and mean monthly flow) were made and are presented in tabular form for 312 sites in the Missouri River basin in Montana. Short-term gaged records were extended to the base period of water years 1937-86, and were used to estimate monthly streamflow characteristics at 100 sites. Data from 47 gaged sites were used in regression analysis relating the streamflow characteristics to basin characteristics and to active-channel width. The basin-characteristics equations, with standard errors of 35% to 97%, were used to estimate streamflow characteristics at 179 ungaged sites. The channel-width equations, with standard errors of 36% to 103%, were used to estimate characteristics at 138 ungaged sites. Streamflow measurements were correlated with concurrent streamflows at nearby gaged sites to estimate streamflow characteristics at 139 ungaged sites. In a test using 20 pairs of gages, the standard errors ranged from 31% to 111%. At 139 ungaged sites, the estimates from two or more of the methods were weighted and combined in accordance with the variance of individual methods. When estimates from three methods were combined the standard errors ranged from 24% to 63 %. A drainage-area-ratio adjustment method was used to estimate monthly streamflow characteristics at seven ungaged sites. The reliability of the drainage-area-ratio adjustment method was estimated to be about equal to that of the basin-characteristics method. The estimate were checked for reliability. Estimates of monthly streamflow characteristics from gaged records were considered to be most reliable, and estimates at sites with actual flow record from 1937-86 were considered to be completely reliable (zero error). Weighted-average estimates were considered to be the most reliable estimates made at ungaged sites. (USGS)
A Generally Robust Approach for Testing Hypotheses and Setting Confidence Intervals for Effect Sizes
ERIC Educational Resources Information Center
Keselman, H. J.; Algina, James; Lix, Lisa M.; Wilcox, Rand R.; Deering, Kathleen N.
2008-01-01
Standard least squares analysis of variance methods suffer from poor power under arbitrarily small departures from normality and fail to control the probability of a Type I error when standard assumptions are violated. This article describes a framework for robust estimation and testing that uses trimmed means with an approximate degrees of…
Cost-effectiveness of the Federal stream-gaging program in Virginia
Carpenter, D.H.
1985-01-01
Data uses and funding sources were identified for the 77 continuous stream gages currently being operated in Virginia by the U.S. Geological Survey with a budget of $446,000. Two stream gages were identified as not being used sufficiently to warrant continuing their operation. Operation of these stations should be considered for discontinuation. Data collected at two other stations were identified as having uses primarily related to short-term studies; these stations should also be considered for discontinuation at the end of the data collection phases of the studies. The remaining 73 stations should be kept in the program for the foreseeable future. The current policy for operation of the 77-station program requires a budget of $446,000/yr. The average standard error of estimation of streamflow records is 10.1%. It was shown that this overall level of accuracy at the 77 sites could be maintained with a budget of $430,500 if resources were redistributed among the gages. A minimum budget of $428,500 is required to operate the 77-gage program; a smaller budget would not permit proper service and maintenance of the gages and recorders. At the minimum budget, with optimized operation, the average standard error would be 10.4%. The maximum budget analyzed was $650,000, which resulted in an average standard error of 5.5%. The study indicates that a major component of error is caused by lost or missing data. If perfect equipment were available, the standard error for the current program and budget could be reduced to 7.6%. This also can be interpreted to mean that the streamflow data have a standard error of this magnitude during times when the equipment is operating properly. (Author 's abstract)
de Gusmão, Claudio M; Guerriero, Réjean M; Bernson-Leung, Miya Elizabeth; Pier, Danielle; Ibeziako, Patricia I; Bujoreanu, Simona; Maski, Kiran P; Urion, David K; Waugh, Jeff L
2014-08-01
In children, functional neurological symptom disorders are frequently the basis for presentation for emergency care. Pediatric epidemiological and outcome data remain scarce. Assess diagnostic accuracy of trainee's first impression in our pediatric emergency room; describe manner of presentation, demographic data, socioeconomic impact, and clinical outcomes, including parental satisfaction. (1) More than 1 year, psychiatry consultations for neurology patients with a functional neurological symptom disorder were retrospectively reviewed. (2) For 3 months, all children whose emergency room presentation suggested the diagnosis were prospectively collected. (3) Three to six months after prospective collection, families completed a structured telephone interview on outcome measures. Twenty-seven patients were retrospectively assessed; 31 patients were prospectively collected. Trainees' accurately predicted the diagnosis in 93% (retrospective) and 94% (prospective) cohorts. Mixed presentations were most common (usually sensory-motor changes, e.g. weakness and/or paresthesias). Associated stressors were mundane and ubiquitous, rarely severe. Families were substantially affected, reporting mean symptom duration 7.4 (standard error of the mean ± 1.33) weeks, missing 22.4 (standard error of the mean ± 5.47) days of school, and 8.3 (standard error of the mean ± 2.88) of parental workdays (prospective cohort). At follow-up, 78% were symptom free. Parental dissatisfaction was rare, attributed to poor rapport and/or insufficient information conveyed. Trainees' clinical impression was accurate in predicting a later diagnosis of functional neurological symptom disorder. Extraordinary life stressors are not required to trigger the disorder in children. Although prognosis is favorable, families incur substantial economic burden and negative educational impact. Improving recognition and appropriately communicating the diagnosis may speed access to treatment and potentially reduce the disability and cost of this disorder. Copyright © 2014 Elsevier Inc. All rights reserved.
Correlation and registration of ERTS multispectral imagery. [by a digital processing technique
NASA Technical Reports Server (NTRS)
Bonrud, L. O.; Henrikson, P. J.
1974-01-01
Examples of automatic digital processing demonstrate the feasibility of registering one ERTS multispectral scanner (MSS) image with another obtained on a subsequent orbit, and automatic matching, correlation, and registration of MSS imagery with aerial photography (multisensor correlation) is demonstrated. Excellent correlation was obtained with patch sizes exceeding 16 pixels square. Qualities which lead to effective control point selection are distinctive features, good contrast, and constant feature characteristics. Results of the study indicate that more than 300 degrees of freedom are required to register two standard ERTS-1 MSS frames covering 100 by 100 nautical miles to an accuracy of 0.6 pixel mean radial displacement error. An automatic strip processing technique demonstrates 600 to 1200 degrees of freedom over a quater frame of ERTS imagery. Registration accuracies in the range of 0.3 pixel to 0.5 pixel mean radial error were confirmed by independent error analysis. Accuracies in the range of 0.5 pixel to 1.4 pixel mean radial error were demonstrated by semi-automatic registration over small geographic areas.
NASA Technical Reports Server (NTRS)
Oreopoulos, Lazaros
2004-01-01
The MODIS Level-3 optical thickness and effective radius cloud product is a gridded l deg. x 1 deg. dataset that is derived from aggregation and subsampling at 5 km of 1 km, resolution Level-2 orbital swath data (Level-2 granules). This study examines the impact of the 5 km subsampling on the mean, standard deviation and inhomogeneity parameter statistics of optical thickness and effective radius. The methodology is simple and consists of estimating mean errors for a large collection of Terra and Aqua Level-2 granules by taking the difference of the statistics at the original and subsampled resolutions. It is shown that the Level-3 sampling does not affect the various quantities investigated to the same degree, with second order moments suffering greater subsampling errors, as expected. Mean errors drop dramatically when averages over a sufficient number of regions (e.g., monthly and/or latitudinal averages) are taken, pointing to a dominance of errors that are of random nature. When histograms built from subsampled data with the same binning rules as in the Level-3 dataset are used to reconstruct the quantities of interest, the mean errors do not deteriorate significantly. The results in this paper provide guidance to users of MODIS Level-3 optical thickness and effective radius cloud products on the range of errors due to subsampling they should expect and perhaps account for, in scientific work with this dataset. In general, subsampling errors should not be a serious concern when moderate temporal and/or spatial averaging is performed.
A Simple Model Predicting Individual Weight Change in Humans
Thomas, Diana M.; Martin, Corby K.; Heymsfield, Steven; Redman, Leanne M.; Schoeller, Dale A.; Levine, James A.
2010-01-01
Excessive weight in adults is a national concern with over 2/3 of the US population deemed overweight. Because being overweight has been correlated to numerous diseases such as heart disease and type 2 diabetes, there is a need to understand mechanisms and predict outcomes of weight change and weight maintenance. A simple mathematical model that accurately predicts individual weight change offers opportunities to understand how individuals lose and gain weight and can be used to foster patient adherence to diets in clinical settings. For this purpose, we developed a one dimensional differential equation model of weight change based on the energy balance equation is paired to an algebraic relationship between fat free mass and fat mass derived from a large nationally representative sample of recently released data collected by the Centers for Disease Control. We validate the model's ability to predict individual participants’ weight change by comparing model estimates of final weight data from two recent underfeeding studies and one overfeeding study. Mean absolute error and standard deviation between model predictions and observed measurements of final weights are less than 1.8 ± 1.3 kg for the underfeeding studies and 2.5 ± 1.6 kg for the overfeeding study. Comparison of the model predictions to other one dimensional models of weight change shows improvement in mean absolute error, standard deviation of mean absolute error, and group mean predictions. The maximum absolute individual error decreased by approximately 60% substantiating reliability in individual weight change predictions. The model provides a viable method for estimating individual weight change as a result of changes in intake and determining individual dietary adherence during weight change studies. PMID:24707319
Sloat, J.V.; Gain, W.S.
1995-01-01
Index-velocity data collected with acoustic velocity meters, stage data, and cross-sectional area data were used to calculate discharge at three low-velocity, tidal streamflow stations in north-east Florida. Discharge at three streamflow stations was computed as the product of the channel cross-sectional area and the mean velocity as determined from an index velocity measured in the stream using an acoustic velocity meter. The tidal streamlflow stations used in the study were: Six Mile Creek near Picolata, Fla.; Dunns Creek near Satsuma, Fla.; and the St. Johns River at Buffalo Bluff. Cross-sectional areas at the measurement sections ranged from about 3,000 square feet at Six Mile Creek to about 18,500 square feet at St. Johns River at Buffalo Bluff. Physical characteristics for all three streams were similar except for drainage area. The topography primarily is low-relief, swampy terrain; stream velocities ranged from about -2 to 2 feet per second; and the average change in stage was about 1 foot. Instantaneous discharge was measured using a portable acoustic current meter at each of the three streams to develop a relation between the mean velocity in the stream and the index velocity measured by the acoustic velocity meter. Using least-squares linear regression, a simple linear relation between mean velocity and index velocity was determined. Index velocity was the only significant linear predictor of mean velocity for Six Mile Creek and St. Johns River at Buffalo Bluff. For Dunns Creek, both index velocity and stage were used to develop a multiple-linear predictor of mean velocity. Stage-area curves for each stream were developed from bathymetric data. Instantaneous discharge was computed by multiplying results of relations developed for cross-sectional area and mean velocity. Principal sources of error in the estimated discharge are identified as: (1) instrument errors associated with measurement of stage and index velocity, (2) errors in the representation of mean daily stage and index velocity due to natural variability over time and space, and (3) errors in cross-sectional area and mean-velocity ratings based on stage and index velocity. Standard errors for instantaneous discharge for the median cross-sectional area for Six Mile Creek, Dunns Creek, and St. Johns River at Buffalo Bluff were 94,360, and 1,980 cubic feet per second, respectively. Standard errors for mean daily discharge for the median cross-sectional area for Six Mile Creek, Dunns Creek, and St. Johns River at Buffalo Bluff were 25, 65, and 455 cubic feet per second, respectively. Mean daily discharge at the three sites ranged from about -500 to 1,500 cubic feet per second at Six Mile Creek and Dunns Creek and from about -500 to 15,000 cubic feet per second on the St. Johns River at Buffalo Bluff. For periods of high discharge, the AVM index-velocity method tended to produce estimates accurate with 2 to 6 percent. For periods of moderate discharge, errors in discharge may increase to more than 50 percent. At low flows, errors as a percentage of discharge increase toward infinity.
Corsica: A Multi-Mission Absolute Calibration Site
NASA Astrophysics Data System (ADS)
Bonnefond, P.; Exertier, P.; Laurain, O.; Guinle, T.; Femenias, P.
2013-09-01
In collaboration with the CNES and NASA oceanographic projects (TOPEX/Poseidon and Jason), the OCA (Observatoire de la Côte d'Azur) developed a verification site in Corsica since 1996, operational since 1998. CALibration/VALidation embraces a wide variety of activities, ranging from the interpretation of information from internal-calibration modes of the sensors to validation of the fully corrected estimates of the reflector heights using in situ data. Now, Corsica is, like the Harvest platform (NASA side) [14], an operating calibration site able to support a continuous monitoring with a high level of accuracy: a 'point calibration' which yields instantaneous bias estimates with a 10-day repeatability of 30 mm (standard deviation) and mean errors of 4 mm (standard error). For a 35-day repeatability (ERS, Envisat), due to a smaller time series, the standard error is about the double ( 7 mm).In this paper, we will present updated results of the absolute Sea Surface Height (SSH) biases for TOPEX/Poseidon (T/P), Jason-1, Jason-2, ERS-2 and Envisat.
Multivariate Statistics Applied to Seismic Phase Picking
NASA Astrophysics Data System (ADS)
Velasco, A. A.; Zeiler, C. P.; Anderson, D.; Pingitore, N. E.
2008-12-01
The initial effort of the Seismogram Picking Error from Analyst Review (SPEAR) project has been to establish a common set of seismograms to be picked by the seismological community. Currently we have 13 analysts from 4 institutions that have provided picks on the set of 26 seismograms. In comparing the picks thus far, we have identified consistent biases between picks from different institutions; effects of the experience of analysts; and the impact of signal-to-noise on picks. The institutional bias in picks brings up the important concern that picks will not be the same between different catalogs. This difference means less precision and accuracy when combing picks from multiple institutions. We also note that depending on the experience level of the analyst making picks for a catalog the error could fluctuate dramatically. However, the experience level is based off of number of years in picking seismograms and this may not be an appropriate criterion for determining an analyst's precision. The common data set of seismograms provides a means to test an analyst's level of precision and biases. The analyst is also limited by the quality of the signal and we show that the signal-to-noise ratio and pick error are correlated to the location, size and distance of the event. This makes the standard estimate of picking error based on SNR more complex because additional constraints are needed to accurately constrain the measurement error. We propose to extend the current measurement of error by adding the additional constraints of institutional bias and event characteristics to the standard SNR measurement. We use multivariate statistics to model the data and provide constraints to accurately assess earthquake location and measurement errors.
Glaucoma and Driving: On-Road Driving Characteristics
Wood, Joanne M.; Black, Alex A.; Mallon, Kerry; Thomas, Ravi; Owsley, Cynthia
2016-01-01
Purpose To comprehensively investigate the types of driving errors and locations that are most problematic for older drivers with glaucoma compared to those without glaucoma using a standardized on-road assessment. Methods Participants included 75 drivers with glaucoma (mean = 73.2±6.0 years) with mild to moderate field loss (better-eye MD = -1.21 dB; worse-eye MD = -7.75 dB) and 70 age-matched controls without glaucoma (mean = 72.6 ± 5.0 years). On-road driving performance was assessed in a dual-brake vehicle by an occupational therapist using a standardized scoring system which assessed the types of driving errors and the locations where they were made and the number of critical errors that required an instructor intervention. Driving safety was rated on a 10-point scale. Self-reported driving ability and difficulties were recorded using the Driving Habits Questionnaire. Results Drivers with glaucoma were rated as significantly less safe, made more driving errors, and had almost double the rate of critical errors than those without glaucoma. Driving errors involved lane positioning and planning/approach, and were significantly more likely to occur at traffic lights and yield/give-way intersections. There were few between group differences in self-reported driving ability. Conclusions Older drivers with glaucoma with even mild to moderate field loss exhibit impairments in driving ability, particularly during complex driving situations that involve tactical problems with lane-position, planning ahead and observation. These results, together with the fact that these drivers self-report their driving to be relatively good, reinforce the need for evidence-based on-road assessments for evaluating driving fitness. PMID:27472221
Glaucoma and Driving: On-Road Driving Characteristics.
Wood, Joanne M; Black, Alex A; Mallon, Kerry; Thomas, Ravi; Owsley, Cynthia
2016-01-01
To comprehensively investigate the types of driving errors and locations that are most problematic for older drivers with glaucoma compared to those without glaucoma using a standardized on-road assessment. Participants included 75 drivers with glaucoma (mean = 73.2±6.0 years) with mild to moderate field loss (better-eye MD = -1.21 dB; worse-eye MD = -7.75 dB) and 70 age-matched controls without glaucoma (mean = 72.6 ± 5.0 years). On-road driving performance was assessed in a dual-brake vehicle by an occupational therapist using a standardized scoring system which assessed the types of driving errors and the locations where they were made and the number of critical errors that required an instructor intervention. Driving safety was rated on a 10-point scale. Self-reported driving ability and difficulties were recorded using the Driving Habits Questionnaire. Drivers with glaucoma were rated as significantly less safe, made more driving errors, and had almost double the rate of critical errors than those without glaucoma. Driving errors involved lane positioning and planning/approach, and were significantly more likely to occur at traffic lights and yield/give-way intersections. There were few between group differences in self-reported driving ability. Older drivers with glaucoma with even mild to moderate field loss exhibit impairments in driving ability, particularly during complex driving situations that involve tactical problems with lane-position, planning ahead and observation. These results, together with the fact that these drivers self-report their driving to be relatively good, reinforce the need for evidence-based on-road assessments for evaluating driving fitness.
Sources of variability and systematic error in mouse timing behavior.
Gallistel, C R; King, Adam; McDonald, Robert
2004-01-01
In the peak procedure, starts and stops in responding bracket the target time at which food is expected. The variability in start and stop times is proportional to the target time (scalar variability), as is the systematic error in the mean center (scalar error). The authors investigated the source of the error and the variability, using head poking in the mouse, with target intervals of 5 s, 15 s, and 45 s, in the standard procedure, and in a variant with 3 different target intervals at 3 different locations in a single trial. The authors conclude that the systematic error is due to the asymmetric location of start and stop decision criteria, and the scalar variability derives primarily from sources other than memory.
Styck, Kara M; Walsh, Shana M
2016-01-01
The purpose of the present investigation was to conduct a meta-analysis of the literature on examiner errors for the Wechsler scales of intelligence. Results indicate that a mean of 99.7% of protocols contained at least 1 examiner error when studies that included a failure to record examinee responses as an error were combined and a mean of 41.2% of protocols contained at least 1 examiner error when studies that ignored errors of omission were combined. Furthermore, graduate student examiners were significantly more likely to make at least 1 error on Wechsler intelligence test protocols than psychologists. However, psychologists made significantly more errors per protocol than graduate student examiners regardless of the inclusion or exclusion of failure to record examinee responses as errors. On average, 73.1% of Full-Scale IQ (FSIQ) scores changed as a result of examiner errors, whereas 15.8%-77.3% of scores on the Verbal Comprehension Index (VCI), Perceptual Reasoning Index (PRI), Working Memory Index (WMI), and Processing Speed Index changed as a result of examiner errors. In addition, results suggest that examiners tend to overestimate FSIQ scores and underestimate VCI scores. However, no strong pattern emerged for the PRI and WMI. It can be concluded that examiner errors occur frequently and impact index and FSIQ scores. Consequently, current estimates for the standard error of measurement of popular IQ tests may not adequately capture the variance due to the examiner. (c) 2016 APA, all rights reserved).
A simplified physical model for assessing solar radiation over Brazil using GOES 8 visible imagery
NASA Astrophysics Data System (ADS)
Ceballos, Juan Carlos; Bottino, Marcus Jorge; de Souza, Jaidete Monteiro
2004-01-01
Solar radiation assessment by satellite is constrained by physical limitations of imagery and by the accuracy of instantaneous local atmospheric parameters, suggesting that one should use simplified but physically consistent models for operational work. Such a model is presented for use with GOES 8 imagery applied to atmospheres with low aerosol optical depth. Fundamental satellite-derived parameters are reflectance and cloud cover. A classification method applied to a set of images shows that reflectance, usually defined as upper-threshold Rmax in algorithms assessing cloud cover, would amount ˜0.465, corresponding to the transition between a cumuliform and a stratiform cloud field. Ozone absorption is limited to the stratosphere. The model considers two spectral broadband intervals for tropospheric radiative transfer: ultraviolet and visible intervals are essentially nonabsorbing and can be processed as a single interval, while near-infrared intervals have negligible atmospheric scattering and very low cloud transmittance. Typical values of CO2 and O3 content and of precipitable water are considered. A comparison of daily values of modeled mean irradiance with data of three sites (in rural, urban industrial, and urban coastal environments), September-October 2002, exhibits a bias of +5 W m-2 and a standard deviation of ˜15 W m-2 (0.4 and 1.3 MJ m-2 for daily irradiation). A comparison with monthly means from about 80 automatic weather stations (covering a large area throughout the Brazilian territory) still shows a bias generally within ±10 W m-2 and a low standard deviation (<20 W m-2), but the bias has a trend in September-December 2002, suggesting an annual cycle of local Rmax values. Systematic (mean) errors in partial cloud cover and in nearly clear-sky situations may be enhanced using regional values for atmospheric and surface parameters, such as precipitable water, Rmax, and ground reflectance. The larger errors are observed in situations of high aerosol load (especially in regions with industrial activity or forest or agricultural fires). The last case is evident when sites in the Amazonian region or São Paulo city are selected. When considering daily values averaged within 2.5° × 2.5° cells, the standard error is lower than 20 W m-2; present results suggest an annual cycle of mean bias ranging from +10 to -10 W m-2, with an amplitude of ˜10 W m-2. These values are close to the proposed requirements of 10 W m-2 for the mean deviation and 25 W m-2 for the standard deviation. It is expected that the introduction of a reference grid containing mean values of parameters within a cell could induce a decrease in the standard deviation of mean errors and the correction of their annual cycle. A model adaptation for assessing the effect of high aerosol loads is needed in order to extend improvements to the whole Brazilian area.
Research: Comparison of the Accuracy of a Pocket versus Standard Pulse Oximeter.
da Costa, João Cordeiro; Faustino, Paula; Lima, Ricardo; Ladeira, Inês; Guimarães, Miguel
2016-01-01
Pulse oximetry has become an essential tool in clinical practice. With patient self-management becoming more prevalent, pulse oximetry self-monitoring has the potential to become common practice in the near future. This study sought to compare the accuracy of two pulse oximeters, a high-quality standard pulse oximeter and an inexpensive pocket pulse oximeter, and to compare both devices with arterial blood co-oximetry oxygen saturation. A total of 95 patients (35.8% women; mean [±SD] age 63.1 ± 13.9 years; mean arterial pressure was 92 ± 12.0 mmHg; mean axillar temperature 36.3 ± 0.4°C) presenting to our hospital for blood gas analysis was evaluated. The Bland-Altman technique was performed to calculate bias and precision, as well as agreement limits. Student's t test was performed. Standard oximeter presented 1.84% bias and a precision error of 1.80%. Pocket oximeter presented a bias of 1.85% and a precision error of 2.21%. Agreement limits were -1.69% to 5.37% (standard oximeter) and -2.48% to 6.18% (pocket oximeter). Both oximeters presented bias, which was expected given previous research. The pocket oximeter was less precise but had agreement limits that were comparable with current evidence. Pocket oximeters can be powerful allies in clinical monitoring of patients based on a self-monitoring/efficacy strategy.
NASA Astrophysics Data System (ADS)
Baker, S.; Berryman, E.; Hawbaker, T. J.; Ewers, B. E.
2015-12-01
While much attention has been focused on large scale forest disturbances such as fire, harvesting, drought and insect attacks, small scale forest disturbances that create gaps in forest canopies and below ground root and mycorrhizal networks may accumulate to impact regional scale carbon budgets. In a lodgepole pine (Pinus contorta) forest near Fox Park, WY, clusters of 15 and 30 trees were removed in 1988 to assess the effect of tree gap disturbance on fine root density and nitrogen transformation. Twenty seven years later the gaps remain with limited regeneration present only in the center of the 30 tree plots, beyond the influence of roots from adjacent intact trees. Soil respiration was measured in the summer of 2015 to assess the influence of these disturbances on carbon cycling in Pinus contorta forests. Positions at the centers of experimental disturbances were found to have the lowest respiration rates (mean 2.45 μmol C/m2/s, standard error 0.17 C/m2/s), control plots in the undisturbed forest were highest (mean 4.15 μmol C/m2/s, standard error 0.63 C/m2/s), and positions near the margin of the disturbance were intermediate (mean 3.7 μmol C/m2/s, standard error 0.34 C/m2/s). Fine root densities, soil nitrogen, and microclimate changes were also measured and played an important role in respiration rates of disturbed plots. This demonstrates that a long-term effect on carbon cycling occurs when gaps are created in the canopy and root network of lodgepole forests.
Khorasani, Fahimeh; Beigi, Marjan
2017-01-01
Recently, evaluation and accreditation system of hospitals has had a special emphasis on reporting malpractices and sharing errors or lessons learnt from errors, but still due to lack of promotion of systematic approach for solving problems from the same system, this issue has remained unattended. This study was conducted to determine the effective factors for reporting medical errors among midwives. This project was a descriptive cross-sectional observational study. Data gathering tools were a standard checklist and two researcher-made questionnaires. Sampling for this study was conducted from all the midwives who worked at teaching hospitals affiliated to Isfahan University of Medical Sciences through census method (convenient) and lasted for 3 months. Data were analyzed using descriptive and inferential statistics through SPSS 16. Results showed that 79.1% of the staff reported errors and the highest rate of errors was in the process of patients' tests. In this study, the mean score of midwives' knowledge about the errors was 79.1 and the mean score of their attitude toward reporting errors was 70.4. There was a direct relation between the score of errors' knowledge and attitude in the midwifery staff and reporting errors. Based on the results of this study about the appropriate knowledge and attitude of midwifery staff regarding errors and action toward reporting them, it is recommended to strengthen the system when it comes to errors and hospitals risks.
Computationally efficient real-time interpolation algorithm for non-uniform sampled biosignals
Eftekhar, Amir; Kindt, Wilko; Constandinou, Timothy G.
2016-01-01
This Letter presents a novel, computationally efficient interpolation method that has been optimised for use in electrocardiogram baseline drift removal. In the authors’ previous Letter three isoelectric baseline points per heartbeat are detected, and here utilised as interpolation points. As an extension from linear interpolation, their algorithm segments the interpolation interval and utilises different piecewise linear equations. Thus, the algorithm produces a linear curvature that is computationally efficient while interpolating non-uniform samples. The proposed algorithm is tested using sinusoids with different fundamental frequencies from 0.05 to 0.7 Hz and also validated with real baseline wander data acquired from the Massachusetts Institute of Technology University and Boston's Beth Israel Hospital (MIT-BIH) Noise Stress Database. The synthetic data results show an root mean square (RMS) error of 0.9 μV (mean), 0.63 μV (median) and 0.6 μV (standard deviation) per heartbeat on a 1 mVp–p 0.1 Hz sinusoid. On real data, they obtain an RMS error of 10.9 μV (mean), 8.5 μV (median) and 9.0 μV (standard deviation) per heartbeat. Cubic spline interpolation and linear interpolation on the other hand shows 10.7 μV, 11.6 μV (mean), 7.8 μV, 8.9 μV (median) and 9.8 μV, 9.3 μV (standard deviation) per heartbeat. PMID:27382478
Computationally efficient real-time interpolation algorithm for non-uniform sampled biosignals.
Guven, Onur; Eftekhar, Amir; Kindt, Wilko; Constandinou, Timothy G
2016-06-01
This Letter presents a novel, computationally efficient interpolation method that has been optimised for use in electrocardiogram baseline drift removal. In the authors' previous Letter three isoelectric baseline points per heartbeat are detected, and here utilised as interpolation points. As an extension from linear interpolation, their algorithm segments the interpolation interval and utilises different piecewise linear equations. Thus, the algorithm produces a linear curvature that is computationally efficient while interpolating non-uniform samples. The proposed algorithm is tested using sinusoids with different fundamental frequencies from 0.05 to 0.7 Hz and also validated with real baseline wander data acquired from the Massachusetts Institute of Technology University and Boston's Beth Israel Hospital (MIT-BIH) Noise Stress Database. The synthetic data results show an root mean square (RMS) error of 0.9 μV (mean), 0.63 μV (median) and 0.6 μV (standard deviation) per heartbeat on a 1 mVp-p 0.1 Hz sinusoid. On real data, they obtain an RMS error of 10.9 μV (mean), 8.5 μV (median) and 9.0 μV (standard deviation) per heartbeat. Cubic spline interpolation and linear interpolation on the other hand shows 10.7 μV, 11.6 μV (mean), 7.8 μV, 8.9 μV (median) and 9.8 μV, 9.3 μV (standard deviation) per heartbeat.
Bandwagon effects and error bars in particle physics
NASA Astrophysics Data System (ADS)
Jeng, Monwhea
2007-02-01
We study historical records of experiments on particle masses, lifetimes, and widths, both for signs of expectation bias, and to compare actual errors with reported error bars. We show that significant numbers of particle properties exhibit "bandwagon effects": reported values show trends and clustering as a function of the year of publication, rather than random scatter about the mean. While the total amount of clustering is significant, it is also fairly small; most individual particle properties do not display obvious clustering. When differences between experiments are compared with the reported error bars, the deviations do not follow a normal distribution, but instead follow an exponential distribution for up to ten standard deviations.
Chen, Yasheng; Juttukonda, Meher; Su, Yi; Benzinger, Tammie; Rubin, Brian G.; Lee, Yueh Z.; Lin, Weili; Shen, Dinggang; Lalush, David
2015-01-01
Purpose To develop a positron emission tomography (PET) attenuation correction method for brain PET/magnetic resonance (MR) imaging by estimating pseudo computed tomographic (CT) images from T1-weighted MR and atlas CT images. Materials and Methods In this institutional review board–approved and HIPAA-compliant study, PET/MR/CT images were acquired in 20 subjects after obtaining written consent. A probabilistic air segmentation and sparse regression (PASSR) method was developed for pseudo CT estimation. Air segmentation was performed with assistance from a probabilistic air map. For nonair regions, the pseudo CT numbers were estimated via sparse regression by using atlas MR patches. The mean absolute percentage error (MAPE) on PET images was computed as the normalized mean absolute difference in PET signal intensity between a method and the reference standard continuous CT attenuation correction method. Friedman analysis of variance and Wilcoxon matched-pairs tests were performed for statistical comparison of MAPE between the PASSR method and Dixon segmentation, CT segmentation, and population averaged CT atlas (mean atlas) methods. Results The PASSR method yielded a mean MAPE ± standard deviation of 2.42% ± 1.0, 3.28% ± 0.93, and 2.16% ± 1.75, respectively, in the whole brain, gray matter, and white matter, which were significantly lower than the Dixon, CT segmentation, and mean atlas values (P < .01). Moreover, 68.0% ± 16.5, 85.8% ± 12.9, and 96.0% ± 2.5 of whole-brain volume had within ±2%, ±5%, and ±10% percentage error by using PASSR, respectively, which was significantly higher than other methods (P < .01). Conclusion PASSR outperformed the Dixon, CT segmentation, and mean atlas methods by reducing PET error owing to attenuation correction. © RSNA, 2014 PMID:25521778
An Evaluation of Portable Wet Bulb Globe Temperature Monitor Accuracy.
Cooper, Earl; Grundstein, Andrew; Rosen, Adam; Miles, Jessica; Ko, Jupil; Curry, Patrick
2017-12-01
Wet bulb globe temperature (WBGT) is the gold standard for assessing environmental heat stress during physical activity. Many manufacturers of commercially available instruments fail to report WBGT accuracy. To determine the accuracy of several commercially available WBGT monitors compared with a standardized reference device. Observational study. Field test. Six commercially available WBGT devices. Data were recorded for 3 sessions (1 in the morning and 2 in the afternoon) at 2-minute intervals for at least 2 hours. Mean absolute error (MAE), root mean square error (RMSE), mean bias error (MBE), and the Pearson correlation coefficient ( r) were calculated to determine instrument performance compared with the reference unit. The QUESTemp° 34 (MAE = 0.24°C, RMSE = 0.44°C, MBE = -0.64%) and Extech HT30 Heat Stress Wet Bulb Globe Temperature Meter (Extech; MAE = 0.61°C, RMSE = 0.79°C, MBE = 0.44%) demonstrated the least error in relation to the reference standard, whereas the General WBGT8778 Heat Index Checker (General; MAE = 1.18°C, RMSE = 1.34°C, MBE = 4.25%) performed the poorest. The QUESTemp° 34 and Kestrel 4400 Heat Stress Tracker units provided conservative measurements that slightly overestimated the WBGT provided by the reference unit. Finally, instruments using the psychrometric wet bulb temperature (General, REED Heat Index WBGT Meter, and WBGT-103 Heat Stroke Checker) tended to underestimate the WBGT, and the resulting values more frequently fell into WBGT-based activity categories with fewer restrictions as defined by the American College of Sports Medicine. The QUESTemp° 34, followed by the Extech, had the smallest error compared with the reference unit. Moreover, the QUESTemp° 34, Extech, and Kestrel units appeared to offer conservative yet accurate assessments of the WBGT, potentially minimizing the risk of allowing physical activity to continue in stressful heat environments. Instruments using the psychrometric wet bulb temperature tended to underestimate WBGT under low wind-speed conditions. Accurate WBGT interpretations are important to enable clinicians to guide activities in hot and humid weather conditions.
NASA Astrophysics Data System (ADS)
Rawat, Kishan Singh; Sehgal, Vinay Kumar; Pradhan, Sanatan; Ray, Shibendu S.
2018-03-01
We have estimated soil moisture (SM) by using circular horizontal polarization backscattering coefficient (σ o_{RH}), differences of circular vertical and horizontal σ o (σ o_{RV} {-} σ o_{RH}) from FRS-1 data of Radar Imaging Satellite (RISAT-1) and surface roughness in terms of RMS height ({RMS}_{height}). We examined the performance of FRS-1 in retrieving SM under wheat crop at tillering stage. Results revealed that it is possible to develop a good semi-empirical model (SEM) to estimate SM of the upper soil layer using RISAT-1 SAR data rather than using existing empirical model based on only single parameter, i.e., σ o. Near surface SM measurements were related to σ o_{RH}, σ o_{RV} {-} σ o_{RH} derived using 5.35 GHz (C-band) image of RISAT-1 and {RMS}_{height}. The roughness component derived in terms of {RMS}_{height} showed a good positive correlation with σ o_{RV} {-} σ o_{RH} (R2 = 0.65). By considering all the major influencing factors (σ o_{RH}, σ o_{RV} {-} σ o_{RH}, and {RMS}_{height}), an SEM was developed where SM (volumetric) predicted values depend on σ o_{RH}, σ o_{RV} {-} σ o_{RH}, and {RMS}_{height}. This SEM showed R2 of 0.87 and adjusted R2 of 0.85, multiple R=0.94 and with standard error of 0.05 at 95% confidence level. Validation of the SM derived from semi-empirical model with observed measurement ({SM}_{Observed}) showed root mean square error (RMSE) = 0.06, relative-RMSE (R-RMSE) = 0.18, mean absolute error (MAE) = 0.04, normalized RMSE (NRMSE) = 0.17, Nash-Sutcliffe efficiency (NSE) = 0.91 ({≈ } 1), index of agreement (d) = 1, coefficient of determination (R2) = 0.87, mean bias error (MBE) = 0.04, standard error of estimate (SEE) = 0.10, volume error (VE) = 0.15, variance of the distribution of differences ({S}d2) = 0.004. The developed SEM showed better performance in estimating SM than Topp empirical model which is based only on σ o. By using the developed SEM, top soil SM can be estimated with low mean absolute percent error (MAPE) = 1.39 and can be used for operational applications.
Application of Lamendin's adult dental aging technique to a diverse skeletal sample.
Prince, Debra A; Ubelaker, Douglas H
2002-01-01
Lamendin et al. (1) proposed a technique to estimate age at death for adults by analyzing single-rooted teeth. They expressed age as a function of two factors: translucency of the tooth root and periodontosis (gingival regression). In their study, they analyzed 306 singled rooted teeth that were extracted at autopsy from 208 individuals of known age at death, all of whom were considered as having a French ancestry. Their sample consisted of 135 males, 73 females, 198 whites, and 10 blacks. The sample ranged in age from 22 to 90 years of age. By using a simple formulae (A = 0.18 x P + 0.42 x T + 25.53, where A = Age in years, P = Periodontosis height x 100/root height, and T = Transparency height x 100/root height), Lamendin et al. were able to estimate age at death with a mean error of +/- 10 years on their working sample and +/- 8.4 years on a forensic control sample. Lamendin found this technique to work well with a French population, but did not test it outside of that sample area. This study tests the accuracy of this adult aging technique on a more diverse skeletal population, the Terry Collection housed at the Smithsonian's National Museum of Natural History. Our sample consists of 400 teeth from 94 black females, 72 white females, 98 black males, and 95 white males, ranging from 25 to 99 years. Lamendin's technique was applied to this sample to test its applicability to a population not of French origin. Providing results from a diverse skeletal population will aid in establishing the validity of this method to be used in forensic cases, its ideal purpose. Our results suggest that Lamendin's method estimates age fairly accurately outside of the French sample yielding a mean error of 8.2 years, standard deviation 6.9 years, and standard error of the mean 0.34 years. In addition, when ancestry and sex are accounted for, the mean errors are reduced for each group (black females, white females, black males, and white males). Lamendin et al. reported an inter-observer error of 9+/-1.8 and 10+/-2 sears from two independent observers. Forty teeth were randomly remeasured from the Terry Collection in order to assess an intra-observer error. From this retest, an intra-observer error of 6.5 years was detected.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Damato, Antonio L., E-mail: adamato@lroc.harvard.edu; Viswanathan, Akila N.; Don, Sarah M.
2014-10-15
Purpose: To investigate the use of a system using electromagnetic tracking (EMT), post-processing and an error-detection algorithm for detecting errors and resolving uncertainties in high-dose-rate brachytherapy catheter digitization for treatment planning. Methods: EMT was used to localize 15 catheters inserted into a phantom using a stepwise acquisition technique. Five distinct acquisition experiments were performed. Noise associated with the acquisition was calculated. The dwell location configuration was extracted from the EMT data. A CT scan of the phantom was performed, and five distinct catheter digitization sessions were performed. No a priori registration of the CT scan coordinate system with the EMTmore » coordinate system was performed. CT-based digitization was automatically extracted from the brachytherapy plan DICOM files (CT), and rigid registration was performed between EMT and CT dwell positions. EMT registration error was characterized in terms of the mean and maximum distance between corresponding EMT and CT dwell positions per catheter. An algorithm for error detection and identification was presented. Three types of errors were systematically simulated: swap of two catheter numbers, partial swap of catheter number identification for parts of the catheters (mix), and catheter-tip shift. Error-detection sensitivity (number of simulated scenarios correctly identified as containing an error/number of simulated scenarios containing an error) and specificity (number of scenarios correctly identified as not containing errors/number of correct scenarios) were calculated. Catheter identification sensitivity (number of catheters correctly identified as erroneous across all scenarios/number of erroneous catheters across all scenarios) and specificity (number of catheters correctly identified as correct across all scenarios/number of correct catheters across all scenarios) were calculated. The mean detected and identified shift was calculated. Results: The maximum noise ±1 standard deviation associated with the EMT acquisitions was 1.0 ± 0.1 mm, and the mean noise was 0.6 ± 0.1 mm. Registration of all the EMT and CT dwell positions was associated with a mean catheter error of 0.6 ± 0.2 mm, a maximum catheter error of 0.9 ± 0.4 mm, a mean dwell error of 1.0 ± 0.3 mm, and a maximum dwell error of 1.3 ± 0.7 mm. Error detection and catheter identification sensitivity and specificity of 100% were observed for swap, mix and shift (≥2.6 mm for error detection; ≥2.7 mm for catheter identification) errors. A mean detected shift of 1.8 ± 0.4 mm and a mean identified shift of 1.9 ± 0.4 mm were observed. Conclusions: Registration of the EMT dwell positions to the CT dwell positions was possible with a residual mean error per catheter of 0.6 ± 0.2 mm and a maximum error for any dwell of 1.3 ± 0.7 mm. These low residual registration errors show that quality assurance of the general characteristics of the catheters and of possible errors affecting one specific dwell position is possible. The sensitivity and specificity of the catheter digitization verification algorithm was 100% for swap and mix errors and for shifts ≥2.6 mm. On average, shifts ≥1.8 mm were detected, and shifts ≥1.9 mm were detected and identified.« less
Vogel, P; Rüschoff, J; Kümmel, S; Zirngibl, H; Hofstädter, F; Hohenberger, W; Jauch, K W
2000-01-01
We evaluated the incidence and prognostic relevance of microscopic intraperitoneal tumor cell dissemination of colon cancer in comparison with dissemination of gastric cancer as a rational for additive intraperitoneal therapy. Peritoneal washouts of 90 patients with colon and 111 patients with gastric cancer were investigated prospectively. Sixty patients with benign diseases and 8 patients with histologically proven gross visible peritoneal carcinomatosis served as controls. Intraoperatively, 100 ml of warm NaCl 0.9 percent were instilled and 20 ml were reaspirated. In all patients hematoxylin and eosin staining (conventional cytology) was performed. Additionally, in 36 patients with colon cancer and 47 patients with gastric cancer, immunostaining with the HEA-125 antibody (immunocytology) was prepared. The results of cytology were assessed for an association with TNM category and cancer grade, based on all patients, and with patient survival, among the R0 resected patients. In conventional cytology 35.5 percent (32/90) of patients with colon cancer and 42.3 percent (47/111) of patients with gastric cancer had a positive cytology. In immunocytology 47.2 percent (17/36) of patients with colon cancer and 46.8 percent (22/47) of patients with gastric cancer were positive. In colon cancer, positive conventional cytology was associated with pT and M category (P = 0.044 and P = 0.0002), whereas immunocytology was only associated with M category (P = 0.007). No association was found between nodal status and immunocytology in colon cancer and with the grading. There was a statistically significant correlation between pT M category and conventional and immunocytology in gastric cancer (P < 0.0015/P = 0.007 and P < 0.001/P = 0.009, respectively). Positive immunocytology was additionally associated with pN category (P = 0.05). In a univariate analysis of R0 resected patients (no residual tumor), positive immunocytology was significantly related to an unfavorable prognosis in patients with gastric cancer only (n = 30). Mean survival time was significantly increased in patients with gastric cancer with negative cytology compared with positive cytology (1,205 (standard error of the mean, 91) vs. 771 (standard error of the mean, 147) days; P = 0.007) but not in patients with colon cancer (1,215 (standard error of the mean, 95) vs. 1,346 (standard error of the mean, 106) days; P = 0.55). Because microscopic peritoneal dissemination influences survival time after R0 resections only in patients with gastric but not with colon cancer, our results may provide a basis for a decision on additive, prophylactic (intraperitoneal) therapy in gastric but not colon cancer.
Guelpa, Anina; Bevilacqua, Marta; Marini, Federico; O'Kennedy, Kim; Geladi, Paul; Manley, Marena
2015-04-15
It has been established in this study that the Rapid Visco Analyser (RVA) can describe maize hardness, irrespective of the RVA profile, when used in association with appropriate multivariate data analysis techniques. Therefore, the RVA can complement or replace current and/or conventional methods as a hardness descriptor. Hardness modelling based on RVA viscograms was carried out using seven conventional hardness methods (hectoliter mass (HLM), hundred kernel mass (HKM), particle size index (PSI), percentage vitreous endosperm (%VE), protein content, percentage chop (%chop) and near infrared (NIR) spectroscopy) as references and three different RVA profiles (hard, soft and standard) as predictors. An approach using locally weighted partial least squares (LW-PLS) was followed to build the regression models. The resulted prediction errors (root mean square error of cross-validation (RMSECV) and root mean square error of prediction (RMSEP)) for the quantification of hardness values were always lower or in the same order of the laboratory error of the reference method. Copyright © 2014 Elsevier Ltd. All rights reserved.
Sampling for mercury at subnanogram per litre concentrations for load estimation in rivers
Colman, J.A.; Breault, R.F.
2000-01-01
Estimation of constituent loads in streams requires collection of stream samples that are representative of constituent concentrations, that is, composites of isokinetic multiple verticals collected along a stream transect. An all-Teflon isokinetic sampler (DH-81) cleaned in 75??C, 4 N HCl was tested using blank, split, and replicate samples to assess systematic and random sample contamination by mercury species. Mean mercury concentrations in field-equipment blanks were low: 0.135 ng??L-1 for total mercury (??Hg) and 0.0086 ng??L-1 for monomethyl mercury (MeHg). Mean square errors (MSE) for ??Hg and MeHg duplicate samples collected at eight sampling stations were not statistically different from MSE of samples split in the laboratory, which represent the analytical and splitting error. Low fieldblank concentrations and statistically equal duplicate- and split-sample MSE values indicate that no measurable contamination was occurring during sampling. Standard deviations associated with example mercury load estimations were four to five times larger, on a relative basis, than standard deviations calculated from duplicate samples, indicating that error of the load determination was primarily a function of the loading model used, not of sampling or analytical methods.
Smartphone virtual reality to increase clinical balance assessment responsiveness.
Rausch, Matthew; Simon, Janet E; Starkey, Chad; Grooms, Dustin R
2018-05-22
To determine if a low cost smartphone based, clinically applicable virtual reality (VR) modification to the standard Balance Error Scoring System (BESS) can challenge postural stability beyond the traditional BESS. Cross-sectional study. University research laboratory. 28 adults (mean age 23.36 ± 2.38 years, mean height 1.74 m ± 0.13, mean weight 77.95 kg ± 16.63). BESS postural control errors and center of pressure (CoP) velocity were recorded during the BESS test and a VR modified BESS (VR-BESS). The VR-BESS used a headset and phone to display a rollercoaster ride to induce a visual and vestibular challenge to postural stability. The VR-BESS significantly increased total errors (20.93 vs. 11.42, p < 0.05) and CoP velocity summed across all stances and surfaces (52.96 cm/s vs. 37.73 cm/s, p < 0.05) beyond the traditional BESS. The VR-BESS provides a standardized, and effective way to increase postural stability challenge in the clinical setting. The VR-BESS can use any smartphone technology to induce postural stability deficits that may otherwise normalize with traditional testing. Thus, providing a unique relatively inexpensive and simple to operate clinical assessment tool and∖or training stimulus. Copyright © 2018 Elsevier Ltd. All rights reserved.
Methods of editing cloud and atmospheric layer affected pixels from satellite data
NASA Technical Reports Server (NTRS)
Nixon, P. R.; Wiegand, C. L.; Richardson, A. J.; Johnson, M. P. (Principal Investigator)
1982-01-01
Subvisible cirrus clouds (SCi) were easily distinguished in mid-infrared (MIR) TIROS-N daytime data from south Texas and northeast Mexico. The MIR (3.55-3.93 micrometer) pixel digital count means of the SCi affected areas were more than 3.5 standard deviations on the cold side of the scene means. (These standard deviations were made free of the effects of unusual instrument error by factoring out the Ch 3 MIR noise on the basis of detailed examination of noisy and noise-free pixels). SCi affected areas in the IR Ch 4 (10.5-11.5 micrometer) appeared cooler than the general scene, but were not as prominent as in Ch 3, being less than 2 standard deviations from the scene mean. Ch 3 and 4 standard deviations and coefficients of variation are not reliable indicators, by themselves, of the presence of SCi because land features can have similar statistical properties.
Evolving geometrical heterogeneities of fault trace data
NASA Astrophysics Data System (ADS)
Wechsler, Neta; Ben-Zion, Yehuda; Christofferson, Shari
2010-08-01
We perform a systematic comparative analysis of geometrical fault zone heterogeneities using derived measures from digitized fault maps that are not very sensitive to mapping resolution. We employ the digital GIS map of California faults (version 2.0) and analyse the surface traces of active strike-slip fault zones with evidence of Quaternary and historic movements. Each fault zone is broken into segments that are defined as a continuous length of fault bounded by changes of angle larger than 1°. Measurements of the orientations and lengths of fault zone segments are used to calculate the mean direction and misalignment of each fault zone from the local plate motion direction, and to define several quantities that represent the fault zone disorder. These include circular standard deviation and circular standard error of segments, orientation of long and short segments with respect to the mean direction, and normal separation distances of fault segments. We examine the correlations between various calculated parameters of fault zone disorder and the following three potential controlling variables: cumulative slip, slip rate and fault zone misalignment from the plate motion direction. The analysis indicates that the circular standard deviation and circular standard error of segments decrease overall with increasing cumulative slip and increasing slip rate of the fault zones. The results imply that the circular standard deviation and error, quantifying the range or dispersion in the data, provide effective measures of the fault zone disorder, and that the cumulative slip and slip rate (or more generally slip rate normalized by healing rate) represent the fault zone maturity. The fault zone misalignment from plate motion direction does not seem to play a major role in controlling the fault trace heterogeneities. The frequency-size statistics of fault segment lengths can be fitted well by an exponential function over the entire range of observations.
Omang, R.J.; Parrett, Charles; Hull, J.A.
1983-01-01
Equations using channel-geometry measurements were developed for estimating mean runoff and peak flows of ungaged streams in southeastern Montana. Two separate sets of esitmating equations were developed for determining mean annual runoff: one for perennial streams and one for ephemeral and intermittent streams. Data from 29 gaged sites on perennial streams and 21 gaged sites on ephemeral and intermittent streams were used in these analyses. Data from 78 gaged sites were used in the peak-flow analyses. Southeastern Montana was divided into three regions and separate multiple-regression equations for each region were developed that relate channel dimensions to peak discharge having recurrence intervals of 2, 5, 10, 25, 50, and 100 years. Channel-geometery relations were developed using measurements of the active-channel width and bankfull width. Active-channel width and bankfull width were the most significant channel features for estimating mean annual runoff for al types of streams. Use of this method requires that onsite measurements be made of channel width. The standard error of estimate for predicting mean annual runoff ranged from about 38 to 79 percent. The standard error of estimate relating active-channel width or bankfull width to peak flow ranged from about 37 to 115 percent. (USGS)
Palmer, Tom M; Holmes, Michael V; Keating, Brendan J; Sheehan, Nuala A
2017-01-01
Abstract Mendelian randomization studies use genotypes as instrumental variables to test for and estimate the causal effects of modifiable risk factors on outcomes. Two-stage residual inclusion (TSRI) estimators have been used when researchers are willing to make parametric assumptions. However, researchers are currently reporting uncorrected or heteroscedasticity-robust standard errors for these estimates. We compared several different forms of the standard error for linear and logistic TSRI estimates in simulations and in real-data examples. Among others, we consider standard errors modified from the approach of Newey (1987), Terza (2016), and bootstrapping. In our simulations Newey, Terza, bootstrap, and corrected 2-stage least squares (in the linear case) standard errors gave the best results in terms of coverage and type I error. In the real-data examples, the Newey standard errors were 0.5% and 2% larger than the unadjusted standard errors for the linear and logistic TSRI estimators, respectively. We show that TSRI estimators with modified standard errors have correct type I error under the null. Researchers should report TSRI estimates with modified standard errors instead of reporting unadjusted or heteroscedasticity-robust standard errors. PMID:29106476
Improving estimates of streamflow characteristics by using Landsat-1 imagery
Hollyday, Este F.
1976-01-01
Imagery from the first Earth Resources Technology Satellite (renamed Landsat-1) was used to discriminate physical features of drainage basins in an effort to improve equations used to estimate streamflow characteristics at gaged and ungaged sites. Records of 20 gaged basins in the Delmarva Peninsula of Maryland, Delaware, and Virginia were analyzed for 40 statistical streamflow characteristics. Equations relating these characteristics to basin characteristics were obtained by a technique of multiple linear regression. A control group of equations contains basin characteristics derived from maps. An experimental group of equations contains basin characteristics derived from maps and imagery. Characteristics from imagery were forest, riparian (streambank) vegetation, water, and combined agricultural and urban land use. These basin characteristics were isolated photographically by techniques of film-density discrimination. The area of each characteristic in each basin was measured photometrically. Comparison of equations in the control group with corresponding equations in the experimental group reveals that for 12 out of 40 equations the standard error of estimate was reduced by more than 10 percent. As an example, the standard error of estimate of the equation for the 5-year recurrence-interval flood peak was reduced from 46 to 32 percent. Similarly, the standard error of the equation for the mean monthly flow for September was reduced from 32 to 24 percent, the standard error for the 7-day, 2-year recurrence low flow was reduced from 136 to 102 percent, and the standard error for the 3-day, 2-year flood volume was reduced from 30 to 12 percent. It is concluded that data from Landsat imagery can substantially improve the accuracy of estimates of some streamflow characteristics at sites in the Delmarva Peninsula.
Barth, Nancy A.; Veilleux, Andrea G.
2012-01-01
The U.S. Geological Survey (USGS) is currently updating at-site flood frequency estimates for USGS streamflow-gaging stations in the desert region of California. The at-site flood-frequency analysis is complicated by short record lengths (less than 20 years is common) and numerous zero flows/low outliers at many sites. Estimates of the three parameters (mean, standard deviation, and skew) required for fitting the log Pearson Type 3 (LP3) distribution are likely to be highly unreliable based on the limited and heavily censored at-site data. In a generalization of the recommendations in Bulletin 17B, a regional analysis was used to develop regional estimates of all three parameters (mean, standard deviation, and skew) of the LP3 distribution. A regional skew value of zero from a previously published report was used with a new estimated mean squared error (MSE) of 0.20. A weighted least squares (WLS) regression method was used to develop both a regional standard deviation and a mean model based on annual peak-discharge data for 33 USGS stations throughout California’s desert region. At-site standard deviation and mean values were determined by using an expected moments algorithm (EMA) method for fitting the LP3 distribution to the logarithms of annual peak-discharge data. Additionally, a multiple Grubbs-Beck (MGB) test, a generalization of the test recommended in Bulletin 17B, was used for detecting multiple potentially influential low outliers in a flood series. The WLS regression found that no basin characteristics could explain the variability of standard deviation. Consequently, a constant regional standard deviation model was selected, resulting in a log-space value of 0.91 with a MSE of 0.03 log units. Yet drainage area was found to be statistically significant at explaining the site-to-site variability in mean. The linear WLS regional mean model based on drainage area had a Pseudo- 2 R of 51 percent and a MSE of 0.32 log units. The regional parameter estimates were then used to develop a set of equations for estimating flows with 50-, 20-, 10-, 4-, 2-, 1-, 0.5-, and 0.2-percent annual exceedance probabilities for ungaged basins. The final equations are functions of drainage area.Average standard errors of prediction for these regression equations range from 214.2 to 856.2 percent.
Commentary on Values and Standards in Performance Assessment.
ERIC Educational Resources Information Center
Guion, Robert M.
1995-01-01
This commentary discusses three essential themes in performance assessment and its scoring. First, scores should mean something. Second, performance scores should permit fair and meaningful comparisons. Third, validity-reducing errors should be minimal. Increased attention to performance assessment may overcome these problems. (SLD)
Selective Weighted Least Squares Method for Fourier Transform Infrared Quantitative Analysis.
Wang, Xin; Li, Yan; Wei, Haoyun; Chen, Xia
2017-06-01
Classical least squares (CLS) regression is a popular multivariate statistical method used frequently for quantitative analysis using Fourier transform infrared (FT-IR) spectrometry. Classical least squares provides the best unbiased estimator for uncorrelated residual errors with zero mean and equal variance. However, the noise in FT-IR spectra, which accounts for a large portion of the residual errors, is heteroscedastic. Thus, if this noise with zero mean dominates in the residual errors, the weighted least squares (WLS) regression method described in this paper is a better estimator than CLS. However, if bias errors, such as the residual baseline error, are significant, WLS may perform worse than CLS. In this paper, we compare the effect of noise and bias error in using CLS and WLS in quantitative analysis. Results indicated that for wavenumbers with low absorbance, the bias error significantly affected the error, such that the performance of CLS is better than that of WLS. However, for wavenumbers with high absorbance, the noise significantly affected the error, and WLS proves to be better than CLS. Thus, we propose a selective weighted least squares (SWLS) regression that processes data with different wavenumbers using either CLS or WLS based on a selection criterion, i.e., lower or higher than an absorbance threshold. The effects of various factors on the optimal threshold value (OTV) for SWLS have been studied through numerical simulations. These studies reported that: (1) the concentration and the analyte type had minimal effect on OTV; and (2) the major factor that influences OTV is the ratio between the bias error and the standard deviation of the noise. The last part of this paper is dedicated to quantitative analysis of methane gas spectra, and methane/toluene mixtures gas spectra as measured using FT-IR spectrometry and CLS, WLS, and SWLS. The standard error of prediction (SEP), bias of prediction (bias), and the residual sum of squares of the errors (RSS) from the three quantitative analyses were compared. In methane gas analysis, SWLS yielded the lowest SEP and RSS among the three methods. In methane/toluene mixture gas analysis, a modification of the SWLS has been presented to tackle the bias error from other components. The SWLS without modification presents the lowest SEP in all cases but not bias and RSS. The modification of SWLS reduced the bias, which showed a lower RSS than CLS, especially for small components.
Eisner, Brian H; Kambadakone, Avinash; Monga, Manoj; Anderson, James K; Thoreson, Andrew A; Lee, Hang; Dretler, Stephen P; Sahani, Dushyant V
2009-04-01
We determined the most accurate method of measuring urinary stones on computerized tomography. For the in vitro portion of the study 24 calculi, including 12 calcium oxalate monohydrate and 12 uric acid stones, that had been previously collected at our clinic were measured manually with hand calipers as the gold standard measurement. The calculi were then embedded into human kidney-sized potatoes and scanned using 64-slice multidetector computerized tomography. Computerized tomography measurements were performed at 4 window settings, including standard soft tissue windows (window width-320 and window length-50), standard bone windows (window width-1120 and window length-300), 5.13x magnified soft tissue windows and 5.13x magnified bone windows. Maximum stone dimensions were recorded. For the in vivo portion of the study 41 patients with distal ureteral stones who underwent noncontrast computerized tomography and subsequently spontaneously passed the stones were analyzed. All analyzed stones were 100% calcium oxalate monohydrate or mixed, calcium based stones. Stones were prospectively collected at the clinic and the largest diameter was measured with digital calipers as the gold standard. This was compared to computerized tomography measurements using 4.0x magnified soft tissue windows and 4.0x magnified bone windows. Statistical comparisons were performed using Pearson's correlation and paired t test. In the in vitro portion of the study the most accurate measurements were obtained using 5.13x magnified bone windows with a mean 0.13 mm difference from caliper measurement (p = 0.6). Measurements performed in the soft tissue window with and without magnification, and in the bone window without magnification were significantly different from hand caliper measurements (mean difference 1.2, 1.9 and 1.4 mm, p = 0.003, <0.001 and 0.0002, respectively). When comparing measurement errors between stones of different composition in vitro, the error for calcium oxalate calculi was significantly different from the gold standard for all methods except bone window settings with magnification. For uric acid calculi the measurement error was observed only in standard soft tissue window settings. In vivo 4.0x magnified bone windows was superior to 4.0x magnified soft tissue windows in measurement accuracy. Magnified bone window measurements were not statistically different from digital caliper measurements (mean underestimation vs digital caliper 0.3 mm, p = 0.4), while magnified soft tissue windows were statistically distinct (mean underestimation 1.4 mm, p = 0.001). In this study magnified bone windows were the most accurate method of stone measurements in vitro and in vivo. Therefore, we recommend the routine use of magnified bone windows for computerized tomography measurement of stones. In vitro the measurement error in calcium oxalate stones was greater than that in uric acid stones, suggesting that stone composition may be responsible for measurement inaccuracies.
Intervention strategies for the management of human error
NASA Technical Reports Server (NTRS)
Wiener, Earl L.
1993-01-01
This report examines the management of human error in the cockpit. The principles probably apply as well to other applications in the aviation realm (e.g. air traffic control, dispatch, weather, etc.) as well as other high-risk systems outside of aviation (e.g. shipping, high-technology medical procedures, military operations, nuclear power production). Management of human error is distinguished from error prevention. It is a more encompassing term, which includes not only the prevention of error, but also a means of disallowing an error, once made, from adversely affecting system output. Such techniques include: traditional human factors engineering, improvement of feedback and feedforward of information from system to crew, 'error-evident' displays which make erroneous input more obvious to the crew, trapping of errors within a system, goal-sharing between humans and machines (also called 'intent-driven' systems), paperwork management, and behaviorally based approaches, including procedures, standardization, checklist design, training, cockpit resource management, etc. Fifteen guidelines for the design and implementation of intervention strategies are included.
Fossum, Kenneth D.; O'Day, Christie M.; Wilson, Barbara J.; Monical, Jim E.
2001-01-01
Stormwater and streamflow in Maricopa County were monitored to (1) describe the physical, chemical, and toxicity characteristics of stormwater from areas having different land uses, (2) describe the physical, chemical, and toxicity characteristics of streamflow from areas that receive urban stormwater, and (3) estimate constituent loads in stormwater. Urban stormwater and streamflow had similar ranges in most constituent concentrations. The mean concentration of dissolved solids in urban stormwater was lower than in streamflow from the Salt River and Indian Bend Wash. Urban stormwater, however, had a greater chemical oxygen demand and higher concentrations of most nutrients. Mean seasonal loads and mean annual loads of 11 constituents and volumes of runoff were estimated for municipalities in the metropolitan Phoenix area, Arizona, by adjusting regional regression equations of loads. This adjustment procedure uses the original regional regression equation and additional explanatory variables that were not included in the original equation. The adjusted equations had standard errors that ranged from 161 to 196 percent. The large standard errors of the prediction result from the large variability of the constituent concentration data used in the regression analysis. Adjustment procedures produced unsatisfactory results for nine of the regressions?suspended solids, dissolved solids, total phosphorus, dissolved phosphorus, total recoverable cadmium, total recoverable copper, total recoverable lead, total recoverable zinc, and storm runoff. These equations had no consistent direction of bias and no other additional explanatory variables correlated with the observed loads. A stepwise-multiple regression or a three-variable regression (total storm rainfall, drainage area, and impervious area) and local data were used to develop local regression equations for these nine constituents. These equations had standard errors from 15 to 183 percent.
Hennig, Cheryl; Cooper, David
2011-08-01
Histomorphometric aging methods report varying degrees of precision, measured through Standard Error of the Estimate (SEE). These techniques have been developed from variable samples sizes (n) and the impact of n on reported aging precision has not been rigorously examined in the anthropological literature. This brief communication explores the relation between n and SEE through a review of the literature (abstracts, articles, book chapters, theses, and dissertations), predictions based upon sampling theory and a simulation. Published SEE values for age prediction, derived from 40 studies, range from 1.51 to 16.48 years (mean 8.63; sd: 3.81 years). In general, these values are widely distributed for smaller samples and the distribution narrows as n increases--a pattern expected from sampling theory. For the two studies that have samples in excess of 200 individuals, the SEE values are very similar (10.08 and 11.10 years) with a mean of 10.59 years. Assuming this mean value is a 'true' characterization of the error at the population level, the 95% confidence intervals for SEE values from samples of 10, 50, and 150 individuals are on the order of ± 4.2, 1.7, and 1.0 years, respectively. While numerous sources of variation potentially affect the precision of different methods, the impact of sample size cannot be overlooked. The uncertainty associated with SEE values derived from smaller samples complicates the comparison of approaches based upon different methodology and/or skeletal elements. Meaningful comparisons require larger samples than have frequently been used and should ideally be based upon standardized samples. Copyright © 2011 Wiley-Liss, Inc.
Palmer, Tom M; Holmes, Michael V; Keating, Brendan J; Sheehan, Nuala A
2017-11-01
Mendelian randomization studies use genotypes as instrumental variables to test for and estimate the causal effects of modifiable risk factors on outcomes. Two-stage residual inclusion (TSRI) estimators have been used when researchers are willing to make parametric assumptions. However, researchers are currently reporting uncorrected or heteroscedasticity-robust standard errors for these estimates. We compared several different forms of the standard error for linear and logistic TSRI estimates in simulations and in real-data examples. Among others, we consider standard errors modified from the approach of Newey (1987), Terza (2016), and bootstrapping. In our simulations Newey, Terza, bootstrap, and corrected 2-stage least squares (in the linear case) standard errors gave the best results in terms of coverage and type I error. In the real-data examples, the Newey standard errors were 0.5% and 2% larger than the unadjusted standard errors for the linear and logistic TSRI estimators, respectively. We show that TSRI estimators with modified standard errors have correct type I error under the null. Researchers should report TSRI estimates with modified standard errors instead of reporting unadjusted or heteroscedasticity-robust standard errors. © The Author(s) 2017. Published by Oxford University Press on behalf of the Johns Hopkins Bloomberg School of Public Health.
Chang, Jenghwa
2017-06-01
To develop a statistical model that incorporates the treatment uncertainty from the rotational error of the single isocenter for multiple targets technique, and calculates the extra PTV (planning target volume) margin required to compensate for this error. The random vector for modeling the setup (S) error in the three-dimensional (3D) patient coordinate system was assumed to follow a 3D normal distribution with a zero mean, and standard deviations of σ x , σ y , σ z . It was further assumed that the rotation of clinical target volume (CTV) about the isocenter happens randomly and follows a three-dimensional (3D) independent normal distribution with a zero mean and a uniform standard deviation of σ δ . This rotation leads to a rotational random error (R), which also has a 3D independent normal distribution with a zero mean and a uniform standard deviation of σ R equal to the product of σδπ180 and dI⇔T, the distance between the isocenter and CTV. Both (S and R) random vectors were summed, normalized, and transformed to the spherical coordinates to derive the Chi distribution with three degrees of freedom for the radial coordinate of S+R. PTV margin was determined using the critical value of this distribution for a 0.05 significance level so that 95% of the time the treatment target would be covered by the prescription dose. The additional PTV margin required to compensate for the rotational error was calculated as a function of σ R and dI⇔T. The effect of the rotational error is more pronounced for treatments that require high accuracy/precision like stereotactic radiosurgery (SRS) or stereotactic body radiotherapy (SBRT). With a uniform 2-mm PTV margin (or σ x = σ y = σ z = 0.715 mm), a σ R = 0.328 mm will decrease the CTV coverage probability from 95.0% to 90.9%, or an additional 0.2-mm PTV margin is needed to prevent this loss of coverage. If we choose 0.2 mm as the threshold, any σ R > 0.328 mm will lead to an extra PTV margin that cannot be ignored, and the maximal σ δ that can be ignored is 0.45° (or 0.0079 rad ) for dI⇔T = 50 mm or 0.23° (or 0.004 rad ) for dI⇔T = 100 mm. The rotational error cannot be ignored for high-accuracy/-precision treatments like SRS/SBRT, particularly when the distance between the isocenter and target is large. © 2017 American Association of Physicists in Medicine.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gupta, N; DiCostanzo, D; Fullenkamp, M
2015-06-15
Purpose: To determine appropriate couch tolerance values for modern radiotherapy linac R&V systems with indexed patient setup. Methods: Treatment table tolerance values have been the most difficult to lower, due to many factors including variations in patient positioning and differences in table tops between machines. We recently installed nine linacs with similar tables and started indexing every patient in our clinic. In this study we queried our R&V database and analyzed the deviation of couch position values from the acquired values at verification simulation for all patients treated with indexed positioning. Mean and standard deviations of daily setup deviations weremore » computed in the longitudinal, lateral and vertical direction for 343 patient plans. The mean, median and standard error of the standard deviations across the whole patient population and for some disease sites were computed to determine tolerance values. Results: The plot of our couch deviation values showed a gaussian distribution, with some small deviations, corresponding to setup uncertainties on non-imaging days, and SRS/SRT/SBRT patients, as well as some large deviations which were spot checked and found to be corresponding to indexing errors that were overriden. Setting our tolerance values based on the median + 1 standard error resulted in tolerance values of 1cm lateral and longitudinal, and 0.5 cm vertical for all non- SRS/SRT/SBRT cases. Re-analizing the data, we found that about 92% of the treated fractions would be within these tolerance values (ignoring the mis-indexed patients). We also analyzed data for disease site based subpopulations and found no difference in the tolerance values that needed to be used. Conclusion: With the use of automation, auto-setup and other workflow efficiency tools being introduced into radiotherapy workflow, it is very essential to set table tolerances that allow safe treatments, but flag setup errors that need to be reassessed before treatments.« less
Top-of-Climb Matching Method for Reducing Aircraft Trajectory Prediction Errors.
Thipphavong, David P
2016-09-01
The inaccuracies of the aircraft performance models utilized by trajectory predictors with regard to takeoff weight, thrust, climb profile, and other parameters result in altitude errors during the climb phase that often exceed the vertical separation standard of 1000 feet. This study investigates the potential reduction in altitude trajectory prediction errors that could be achieved for climbing flights if just one additional parameter is made available: top-of-climb (TOC) time. The TOC-matching method developed and evaluated in this paper is straightforward: a set of candidate trajectory predictions is generated using different aircraft weight parameters, and the one that most closely matches TOC in terms of time is selected. This algorithm was tested using more than 1000 climbing flights in Fort Worth Center. Compared to the baseline trajectory predictions of a real-time research prototype (Center/TRACON Automation System), the TOC-matching method reduced the altitude root mean square error (RMSE) for a 5-minute prediction time by 38%. It also decreased the percentage of flights with absolute altitude error greater than the vertical separation standard of 1000 ft for the same look-ahead time from 55% to 30%.
Top-of-Climb Matching Method for Reducing Aircraft Trajectory Prediction Errors
Thipphavong, David P.
2017-01-01
The inaccuracies of the aircraft performance models utilized by trajectory predictors with regard to takeoff weight, thrust, climb profile, and other parameters result in altitude errors during the climb phase that often exceed the vertical separation standard of 1000 feet. This study investigates the potential reduction in altitude trajectory prediction errors that could be achieved for climbing flights if just one additional parameter is made available: top-of-climb (TOC) time. The TOC-matching method developed and evaluated in this paper is straightforward: a set of candidate trajectory predictions is generated using different aircraft weight parameters, and the one that most closely matches TOC in terms of time is selected. This algorithm was tested using more than 1000 climbing flights in Fort Worth Center. Compared to the baseline trajectory predictions of a real-time research prototype (Center/TRACON Automation System), the TOC-matching method reduced the altitude root mean square error (RMSE) for a 5-minute prediction time by 38%. It also decreased the percentage of flights with absolute altitude error greater than the vertical separation standard of 1000 ft for the same look-ahead time from 55% to 30%. PMID:28684883
Top-of-Climb Matching Method for Reducing Aircraft Trajectory Prediction Errors
NASA Technical Reports Server (NTRS)
Thipphavong, David P.
2016-01-01
The inaccuracies of the aircraft performance models utilized by trajectory predictors with regard to takeoff weight, thrust, climb profile, and other parameters result in altitude errors during the climb phase that often exceed the vertical separation standard of 1000 feet. This study investigates the potential reduction in altitude trajectory prediction errors that could be achieved for climbing flights if just one additional parameter is made available: top-of-climb (TOC) time. The TOC-matching method developed and evaluated in this paper is straightforward: a set of candidate trajectory predictions is generated using different aircraft weight parameters, and the one that most closely matches TOC in terms of time is selected. This algorithm was tested using more than 1000 climbing flights in Fort Worth Center. Compared to the baseline trajectory predictions of a real-time research prototype (Center/TRACON Automation System), the TOC-matching method reduced the altitude root mean square error (RMSE) for a 5-minute prediction time by 38%. It also decreased the percentage of flights with absolute altitude error greater than the vertical separation standard of 1000 ft for the same look-ahead time from 55% to 30%.
Hoos, Anne B.; Patel, Anant R.
1996-01-01
Model-adjustment procedures were applied to the combined data bases of storm-runoff quality for Chattanooga, Knoxville, and Nashville, Tennessee, to improve predictive accuracy for storm-runoff quality for urban watersheds in these three cities and throughout Middle and East Tennessee. Data for 45 storms at 15 different sites (five sites in each city) constitute the data base. Comparison of observed values of storm-runoff load and event-mean concentration to the predicted values from the regional regression models for 10 constituents shows prediction errors, as large as 806,000 percent. Model-adjustment procedures, which combine the regional model predictions with local data, are applied to improve predictive accuracy. Standard error of estimate after model adjustment ranges from 67 to 322 percent. Calibration results may be biased due to sampling error in the Tennessee data base. The relatively large values of standard error of estimate for some of the constituent models, although representing significant reduction (at least 50 percent) in prediction error compared to estimation with unadjusted regional models, may be unacceptable for some applications. The user may wish to collect additional local data for these constituents and repeat the analysis, or calibrate an independent local regression model.
Representations of Intervals and Optimal Error Bounds.
1980-07-01
OAA629-8O-C-0ONI UNCLASS I FI IEDMRC TSR-2098 NL 11111L 3 -2 11111 ~ 13.6 1111 125 .4 111.6 MCROCOPY RESOLUTION TEST CHART NATIONA’ 13UREAU OF STANDARDS...geometric and harmonic means, Excess width Work Unit Number 3 (Numerical Analysis and Computer Science) Sponsored by the United States Army under...example in the next section, following which the general theory will be dis- cussed. 3 . An example of an optimal point and error bound. A simple
Absolute color scale for improved diagnostics with wavefront error mapping.
Smolek, Michael K; Klyce, Stephen D
2007-11-01
Wavefront data are expressed in micrometers and referenced to the pupil plane, but current methods to map wavefront error lack standardization. Many use normalized or floating scales that may confuse the user by generating ambiguous, noisy, or varying information. An absolute scale that combines consistent clinical information with statistical relevance is needed for wavefront error mapping. The color contours should correspond better to current corneal topography standards to improve clinical interpretation. Retrospective analysis of wavefront error data. Historic ophthalmic medical records. Topographic modeling system topographical examinations of 120 corneas across 12 categories were used. Corneal wavefront error data in micrometers from each topography map were extracted at 8 Zernike polynomial orders and for 3 pupil diameters expressed in millimeters (3, 5, and 7 mm). Both total aberrations (orders 2 through 8) and higher-order aberrations (orders 3 through 8) were expressed in the form of frequency histograms to determine the working range of the scale across all categories. The standard deviation of the mean error of normal corneas determined the map contour resolution. Map colors were based on corneal topography color standards and on the ability to distinguish adjacent color contours through contrast. Higher-order and total wavefront error contour maps for different corneal conditions. An absolute color scale was produced that encompassed a range of +/-6.5 microm and a contour interval of 0.5 microm. All aberrations in the categorical database were plotted with no loss of clinical information necessary for classification. In the few instances where mapped information was beyond the range of the scale, the type and severity of aberration remained legible. When wavefront data are expressed in micrometers, this absolute scale facilitates the determination of the severity of aberrations present compared with a floating scale, particularly for distinguishing normal from abnormal levels of wavefront error. The new color palette makes it easier to identify disorders. The corneal mapping method can be extended to mapping whole eye wavefront errors. When refraction data are expressed in diopters, the previously published corneal topography scale is suggested.
Gross Motor Development in Children Aged 3-5 Years, United States 2012.
Kit, Brian K; Akinbami, Lara J; Isfahani, Neda Sarafrazi; Ulrich, Dale A
2017-07-01
Objective Gross motor development in early childhood is important in fostering greater interaction with the environment. The purpose of this study is to describe gross motor skills among US children aged 3-5 years using the Test of Gross Motor Development (TGMD-2). Methods We used 2012 NHANES National Youth Fitness Survey (NNYFS) data, which included TGMD-2 scores obtained according to an established protocol. Outcome measures included locomotor and object control raw and age-standardized scores. Means and standard errors were calculated for demographic and weight status with SUDAAN using sample weights to calculate nationally representative estimates, and survey design variables to account for the complex sampling methods. Results The sample included 339 children aged 3-5 years. As expected, locomotor and object control raw scores increased with age. Overall mean standardized scores for locomotor and object control were similar to the mean value previously determined using a normative sample. Girls had a higher mean locomotor, but not mean object control, standardized score than boys (p < 0.05). However, the mean locomotor standardized scores for both boys and girls fell into the range categorized as "average." There were no other differences by age, race/Hispanic origin, weight status, or income in either of the subtest standardized scores (p > 0.05). Conclusions In a nationally representative sample of US children aged 3-5 years, TGMD-2 mean locomotor and object control standardized scores were similar to the established mean. These results suggest that standardized gross motor development among young children generally did not differ by demographic or weight status.
Lead theft--a study of the "uniqueness" of lead from church roofs.
Bond, John W; Hainsworth, Sarah V; Lau, Tien L
2013-07-01
In the United Kingdom, theft of lead is common, particularly from churches and other public buildings with lead roofs. To assess the potential to distinguish lead from different sources, 41 samples of lead from 24 church roofs in Northamptonshire, U.K, have been analyzed for relative abundance of trace elements and isotopes of lead using X-ray fluorescence (XRF) and inductively coupled plasma mass spectrometry, respectively. XRF revealed the overall presence of 12 trace elements with the four most abundant, calcium, phosphorus, silicon, and sulfur, showing a large weight percentage standard error of the mean of all samples suggesting variation in the weight percentage of these elements between different church roofs. Multiple samples from the same roofs, but different lead sheets, showed much lower weight percentage standard errors of the mean suggesting similar trace element concentrations. Lead isotope ratios were similar for all samples. Factors likely to affect the occurrence of these trace elements are discussed. © 2013 American Academy of Forensic Sciences.
Computer-socket manufacturing error: How much before it is clinically apparent?
Sanders, Joan E.; Severance, Michael R.; Allyn, Kathryn J.
2015-01-01
The purpose of this research was to pursue quality standards for computer-manufacturing of prosthetic sockets for people with transtibial limb loss. Thirty-three duplicates of study participants’ normally used sockets were fabricated using central fabrication facilities. Socket-manufacturing errors were compared with clinical assessments of socket fit. Of the 33 sockets tested, 23 were deemed clinically to need modification. All 13 sockets with mean radial error (MRE) greater than 0.25 mm were clinically unacceptable, and 11 of those were deemed in need of sizing reduction. Of the remaining 20 sockets, 5 sockets with interquartile range (IQR) greater than 0.40 mm were deemed globally or regionally oversized and in need of modification. Of the remaining 15 sockets, 5 sockets with closed contours of elevated surface normal angle error (SNAE) were deemed clinically to need shape modification at those closed contour locations. The remaining 10 sockets were deemed clinically acceptable and not in need modification. MRE, IQR, and SNAE may serve as effective metrics to characterize quality of computer-manufactured prosthetic sockets, helping facilitate the development of quality standards for the socket manufacturing industry. PMID:22773260
Faridnasr, Maryam; Ghanbari, Bastam; Sassani, Ardavan
2016-05-01
A novel approach was applied for optimization of a moving-bed biofilm sequencing batch reactor (MBSBR) to treat sugar-industry wastewater (BOD5=500-2500 and COD=750-3750 mg/L) at 2-4 h of cycle time (CT). Although the experimental data showed that MBSBR reached high BOD5 and COD removal performances, it failed to achieve the standard limits at the mentioned CTs. Thus, optimization of the reactor was rendered by kinetic computational modeling and using statistical error indicator normalized root mean square error (NRMSE). The results of NRMSE revealed that Stover-Kincannon (error=6.40%) and Grau (error=6.15%) models provide better fits to the experimental data and may be used for CT optimization in the reactor. The models predicted required CTs of 4.5, 6.5, 7 and 7.5 h for effluent standardization of 500, 1000, 1500 and 2500 mg/L influent BOD5 concentrations, respectively. Similar pattern of the experimental data also confirmed these findings. Copyright © 2016 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Möhler, Christian; Russ, Tom; Wohlfahrt, Patrick; Elter, Alina; Runz, Armin; Richter, Christian; Greilich, Steffen
2018-01-01
An experimental setup for consecutive measurement of ion and x-ray absorption in tissue or other materials is introduced. With this setup using a 3D-printed sample container, the reference stopping-power ratio (SPR) of materials can be measured with an uncertainty of below 0.1%. A total of 65 porcine and bovine tissue samples were prepared for measurement, comprising five samples each of 13 tissue types representing about 80% of the total body mass (three different muscle and fatty tissues, liver, kidney, brain, heart, blood, lung and bone). Using a standard stoichiometric calibration for single-energy CT (SECT) as well as a state-of-the-art dual-energy CT (DECT) approach, SPR was predicted for all tissues and then compared to the measured reference. With the SECT approach, the SPRs of all tissues were predicted with a mean error of (-0.84 ± 0.12)% and a mean absolute error of (1.27 ± 0.12)%. In contrast, the DECT-based SPR predictions were overall consistent with the measured reference with a mean error of (-0.02 ± 0.15)% and a mean absolute error of (0.10 ± 0.15)%. Thus, in this study, the potential of DECT to decrease range uncertainty could be confirmed in biological tissue.
Kuselman, Ilya; Pennecchi, Francesca; Epstein, Malka; Fajgelj, Ales; Ellison, Stephen L R
2014-12-01
Monte Carlo simulation of expert judgments on human errors in a chemical analysis was used for determination of distributions of the error quantification scores (scores of likelihood and severity, and scores of effectiveness of a laboratory quality system in prevention of the errors). The simulation was based on modeling of an expert behavior: confident, reasonably doubting and irresolute expert judgments were taken into account by means of different probability mass functions (pmfs). As a case study, 36 scenarios of human errors which may occur in elemental analysis of geological samples by ICP-MS were examined. Characteristics of the score distributions for three pmfs of an expert behavior were compared. Variability of the scores, as standard deviation of the simulated score values from the distribution mean, was used for assessment of the score robustness. A range of the score values, calculated directly from elicited data and simulated by a Monte Carlo method for different pmfs, was also discussed from the robustness point of view. It was shown that robustness of the scores, obtained in the case study, can be assessed as satisfactory for the quality risk management and improvement of a laboratory quality system against human errors. Copyright © 2014 Elsevier B.V. All rights reserved.
Comparison of estimators of standard deviation for hydrologic time series
Tasker, Gary D.; Gilroy, Edward J.
1982-01-01
Unbiasing factors as a function of serial correlation, ρ, and sample size, n for the sample standard deviation of a lag one autoregressive model were generated by random number simulation. Monte Carlo experiments were used to compare the performance of several alternative methods for estimating the standard deviation σ of a lag one autoregressive model in terms of bias, root mean square error, probability of underestimation, and expected opportunity design loss. Three methods provided estimates of σ which were much less biased but had greater mean square errors than the usual estimate of σ: s = (1/(n - 1) ∑ (xi −x¯)2)½. The three methods may be briefly characterized as (1) a method using a maximum likelihood estimate of the unbiasing factor, (2) a method using an empirical Bayes estimate of the unbiasing factor, and (3) a robust nonparametric estimate of σ suggested by Quenouille. Because s tends to underestimate σ, its use as an estimate of a model parameter results in a tendency to underdesign. If underdesign losses are considered more serious than overdesign losses, then the choice of one of the less biased methods may be wise.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Balderson, Michael, E-mail: michael.balderson@rmp.uhn.ca; Brown, Derek; Johnson, Patricia
The purpose of this work was to compare static gantry intensity-modulated radiation therapy (IMRT) with volume-modulated arc therapy (VMAT) in terms of tumor control probability (TCP) under scenarios involving large geometric misses, i.e., those beyond what are accounted for when margin expansion is determined. Using a planning approach typical for these treatments, a linear-quadratic–based model for TCP was used to compare mean TCP values for a population of patients who experiences a geometric miss (i.e., systematic and random shifts of the clinical target volume within the planning target dose distribution). A Monte Carlo approach was used to account for themore » different biological sensitivities of a population of patients. Interestingly, for errors consisting of coplanar systematic target volume offsets and three-dimensional random offsets, static gantry IMRT appears to offer an advantage over VMAT in that larger shift errors are tolerated for the same mean TCP. For example, under the conditions simulated, erroneous systematic shifts of 15 mm directly between or directly into static gantry IMRT fields result in mean TCP values between 96% and 98%, whereas the same errors on VMAT plans result in mean TCP values between 45% and 74%. Random geometric shifts of the target volume were characterized using normal distributions in each Cartesian dimension. When the standard deviations were doubled from those values assumed in the derivation of the treatment margins, our model showed a 7% drop in mean TCP for the static gantry IMRT plans but a 20% drop in TCP for the VMAT plans. Although adding a margin for error to a clinical target volume is perhaps the best approach to account for expected geometric misses, this work suggests that static gantry IMRT may offer a treatment that is more tolerant to geometric miss errors than VMAT.« less
Statistics Using Just One Formula
ERIC Educational Resources Information Center
Rosenthal, Jeffrey S.
2018-01-01
This article advocates that introductory statistics be taught by basing all calculations on a single simple margin-of-error formula and deriving all of the standard introductory statistical concepts (confidence intervals, significance tests, comparisons of means and proportions, etc) from that one formula. It is argued that this approach will…
The Estimation of Gestational Age at Birth in Database Studies.
Eberg, Maria; Platt, Robert W; Filion, Kristian B
2017-11-01
Studies on the safety of prenatal medication use require valid estimation of the pregnancy duration. However, gestational age is often incompletely recorded in administrative and clinical databases. Our objective was to compare different approaches to estimating the pregnancy duration. Using data from the Clinical Practice Research Datalink and Hospital Episode Statistics, we examined the following four approaches to estimating missing gestational age: (1) generalized estimating equations for longitudinal data; (2) multiple imputation; (3) estimation based on fetal birth weight and sex; and (4) conventional approaches that assigned a fixed value (39 weeks for all or 39 weeks for full term and 35 weeks for preterm). The gestational age recorded in Hospital Episode Statistics was considered the gold standard. We conducted a simulation study comparing the described approaches in terms of estimated bias and mean square error. A total of 25,929 infants from 22,774 mothers were included in our "gold standard" cohort. The smallest average absolute bias was observed for the generalized estimating equation that included birth weight, while the largest absolute bias occurred when assigning 39-week gestation to all those with missing values. The smallest mean square errors were detected with generalized estimating equations while multiple imputation had the highest mean square errors. The use of generalized estimating equations resulted in the most accurate estimation of missing gestational age when birth weight information was available. In the absence of birth weight, assignment of fixed gestational age based on term/preterm status may be the optimal approach.
Scatter-Reducing Sounding Filtration Using a Genetic Algorithm and Mean Monthly Standard Deviation
NASA Technical Reports Server (NTRS)
Mandrake, Lukas
2013-01-01
Retrieval algorithms like that used by the Orbiting Carbon Observatory (OCO)-2 mission generate massive quantities of data of varying quality and reliability. A computationally efficient, simple method of labeling problematic datapoints or predicting soundings that will fail is required for basic operation, given that only 6% of the retrieved data may be operationally processed. This method automatically obtains a filter designed to reduce scatter based on a small number of input features. Most machine-learning filter construction algorithms attempt to predict error in the CO2 value. By using a surrogate goal of Mean Monthly STDEV, the goal is to reduce the retrieved CO2 scatter rather than solving the harder problem of reducing CO2 error. This lends itself to improved interpretability and performance. This software reduces the scatter of retrieved CO2 values globally based on a minimum number of input features. It can be used as a prefilter to reduce the number of soundings requested, or as a post-filter to label data quality. The use of the MMS (Mean Monthly Standard deviation) provides a much cleaner, clearer filter than the standard ABS(CO2-truth) metrics previously employed by competitor methods. The software's main strength lies in a clearer (i.e., fewer features required) filter that more efficiently reduces scatter in retrieved CO2 rather than focusing on the more complex (and easily removed) bias issues.
Tests for qualitative treatment-by-centre interaction using a 'pushback' procedure.
Ciminera, J L; Heyse, J F; Nguyen, H H; Tukey, J W
1993-06-15
In multicentre clinical trials using a common protocol, the centres are usually regarded as being a fixed factor, thus allowing any treatment-by-centre interaction to be omitted from the error term for the effect of treatment. However, we feel it necessary to use the treatment-by-centre interaction as the error term if there is substantial evidence that the interaction with centres is qualitative instead of quantitative. To make allowance for the estimated uncertainties of the centre means, we propose choosing a reference value (for example, the median of the ordered array of centre means) and converting the individual centre results into standardized deviations from the reference value. The deviations are then reordered, and the results 'pushed back' by amounts appropriate for the corresponding order statistics in a sample from the relevant distribution. The pushed-back standardized deviations are then restored to the original scale. The appearance of opposite signs among the destandardized values for the various centres is then taken as 'substantial evidence' of qualitative interaction. Procedures are presented using, in any combination: (i) Gaussian, or Student's t-distribution; (ii) order-statistic medians or outward 90 per cent points of the corresponding order statistic distributions; (iii) pooling or grouping and pooling the internally estimated standard deviations of the centre means. The use of the least conservative combination--Student's t, outward 90 per cent points, grouping and pooling--is recommended.
NASA Technical Reports Server (NTRS)
Hueschen, R. M.
1986-01-01
Five flight tests of the Digital Automated Landing System (DIALS) were conducted on the Advanced Transport Operating Systems (ATOPS) Transportation Research Vehicle (TSRV) -- a modified Boeing 737 aircraft for advanced controls and displays research. These flight tests were conducted at NASA's Wallops Flight Center using the microwave landing system (MLS) installation on runway 22. This report describes the flight software equations of the DIALS which was designed using modern control theory direct-digital design methods and employed a constant gain Kalman filter. Selected flight test performance data is presented for localizer (runway centerline) capture and track at various intercept angles, for glideslope capture and track of 3, 4.5, and 5 degree glideslopes, for the decrab maneuver, and for the flare maneuver. Data is also presented to illustrate the system performance in the presence of cross, gust, and shear winds. The mean and standard deviation of the peak position errors for localizer capture were, respectively, 24 feet and 26 feet. For mild wind conditions, glideslope and localizer tracking position errors did not exceed, respectively, 5 and 20 feet. For gusty wind conditions (8 to 10 knots), these errors were, respectively, 10 and 30 feet. Ten hands off automatic lands were performed. The standard deviation of the touchdown position and velocity errors from the mean values were, respectively, 244 feet and 0.7 feet/sec.
Two ultraviolet radiation datasets that cover China
NASA Astrophysics Data System (ADS)
Liu, Hui; Hu, Bo; Wang, Yuesi; Liu, Guangren; Tang, Liqin; Ji, Dongsheng; Bai, Yongfei; Bao, Weikai; Chen, Xin; Chen, Yunming; Ding, Weixin; Han, Xiaozeng; He, Fei; Huang, Hui; Huang, Zhenying; Li, Xinrong; Li, Yan; Liu, Wenzhao; Lin, Luxiang; Ouyang, Zhu; Qin, Boqiang; Shen, Weijun; Shen, Yanjun; Su, Hongxin; Song, Changchun; Sun, Bo; Sun, Song; Wang, Anzhi; Wang, Genxu; Wang, Huimin; Wang, Silong; Wang, Youshao; Wei, Wenxue; Xie, Ping; Xie, Zongqiang; Yan, Xiaoyuan; Zeng, Fanjiang; Zhang, Fawei; Zhang, Yangjian; Zhang, Yiping; Zhao, Chengyi; Zhao, Wenzhi; Zhao, Xueyong; Zhou, Guoyi; Zhu, Bo
2017-07-01
Ultraviolet (UV) radiation has significant effects on ecosystems, environments, and human health, as well as atmospheric processes and climate change. Two ultraviolet radiation datasets are described in this paper. One contains hourly observations of UV radiation measured at 40 Chinese Ecosystem Research Network stations from 2005 to 2015. CUV3 broadband radiometers were used to observe the UV radiation, with an accuracy of 5%, which meets the World Meteorology Organization's measurement standards. The extremum method was used to control the quality of the measured datasets. The other dataset contains daily cumulative UV radiation estimates that were calculated using an all-sky estimation model combined with a hybrid model. The reconstructed daily UV radiation data span from 1961 to 2014. The mean absolute bias error and root-mean-square error are smaller than 30% at most stations, and most of the mean bias error values are negative, which indicates underestimation of the UV radiation intensity. These datasets can improve our basic knowledge of the spatial and temporal variations in UV radiation. Additionally, these datasets can be used in studies of potential ozone formation and atmospheric oxidation, as well as simulations of ecological processes.
Cost effectiveness of the US Geological Survey's stream-gaging program in New York
Wolcott, S.W.; Gannon, W.B.; Johnston, W.H.
1986-01-01
The U.S. Geological Survey conducted a 5-year nationwide analysis to define and document the most cost effective means of obtaining streamflow data. This report describes the stream gaging network in New York and documents the cost effectiveness of its operation; it also identifies data uses and funding sources for the 174 continuous-record stream gages currently operated (1983). Those gages as well as 189 crest-stage, stage-only, and groundwater gages are operated with a budget of $1.068 million. One gaging station was identified as having insufficient reason for continuous operation and was converted to a crest-stage gage. Current operation of the 363-station program requires a budget of $1.068 million/yr. The average standard error of estimation of continuous streamflow data is 13.4%. Results indicate that this degree of accuracy could be maintained with a budget of approximately $1.006 million if the gaging resources were redistributed among the gages. The average standard error for 174 stations was calculated for five hypothetical budgets. A minimum budget of $970,000 would be needed to operated the 363-gage program; a budget less than this does not permit proper servicing and maintenance of the gages and recorders. Under the restrictions of a minimum budget, the average standard error would be 16.0%. The maximum budget analyzed was $1.2 million, which would decrease the average standard error to 9.4%. (Author 's abstract)
Accuracy and precision of Legionella isolation by US laboratories in the ELITE program pilot study.
Lucas, Claressa E; Taylor, Thomas H; Fields, Barry S
2011-10-01
A pilot study for the Environmental Legionella Isolation Techniques Evaluation (ELITE) Program, a proficiency testing scheme for US laboratories that culture Legionella from environmental samples, was conducted September 1, 2008 through March 31, 2009. Participants (n=20) processed panels consisting of six sample types: pure and mixed positive, pure and mixed negative, pure and mixed variable. The majority (93%) of all samples (n=286) were correctly characterized, with 88.5% of samples positive for Legionella and 100% of negative samples identified correctly. Variable samples were incorrectly identified as negative in 36.9% of reports. For all samples reported positive (n=128), participants underestimated the cfu/ml by a mean of 1.25 logs with standard deviation of 0.78 logs, standard error of 0.07 logs, and a range of 3.57 logs compared to the CDC re-test value. Centering results around the interlaboratory mean yielded a standard deviation of 0.65 logs, standard error of 0.06 logs, and a range of 3.22 logs. Sampling protocol, treatment regimen, culture procedure, and laboratory experience did not significantly affect the accuracy or precision of reported concentrations. Qualitative and quantitative results from the ELITE pilot study were similar to reports from a corresponding proficiency testing scheme available in the European Union, indicating these results are probably valid for most environmental laboratories worldwide. The large enumeration error observed suggests that the need for remediation of a water system should not be determined solely by the concentration of Legionella observed in a sample since that value is likely to underestimate the true level of contamination. Published by Elsevier Ltd.
An analysis of Landsat-4 Thematic Mapper geometric properties
NASA Technical Reports Server (NTRS)
Walker, R. E.; Zobrist, A. L.; Bryant, N. A.; Gohkman, B.; Friedman, S. Z.; Logan, T. L.
1984-01-01
Landsat-4 Thematic Mapper data of Washington, DC, Harrisburg, PA, and Salton Sea, CA were analyzed to determine geometric integrity and conformity of the data to known earth surface geometry. Several tests were performed. Intraband correlation and interband registration were investigated. No problems were observed in the intraband analysis, and aside from indications of slight misregistration between bands of the primary versus bands of the secondary focal planes, interband registration was well within the specified tolerances. A substantial number of ground control points were found and used to check the images' conformity to the Space Oblique Mercator (SOM) projection of their respective areas. The means of the residual offsets, which included nonprocessing related measurement errors, were close to the one pixel level in the two scenes examined. The Harrisburg scene residual mean was 28.38 m (0.95 pixels) with a standard deviation of 19.82 m (0.66 pixels), while the mean and standard deviation for the Salton Sea scene were 40.46 (1.35 pixels) and 30.57 m (1.02 pixels), respectively. Overall, the data were judged to be a high geometric quality with errors close to those targeted by the TM sensor design specifications.
NASA Astrophysics Data System (ADS)
Elangovan, Premkumar; Mackenzie, Alistair; Dance, David R.; Young, Kenneth C.; Cooke, Victoria; Wilkinson, Louise; Given-Wilson, Rosalind M.; Wallis, Matthew G.; Wells, Kevin
2017-04-01
A novel method has been developed for generating quasi-realistic voxel phantoms which simulate the compressed breast in mammography and digital breast tomosynthesis (DBT). The models are suitable for use in virtual clinical trials requiring realistic anatomy which use the multiple alternative forced choice (AFC) paradigm and patches from the complete breast image. The breast models are produced by extracting features of breast tissue components from DBT clinical images including skin, adipose and fibro-glandular tissue, blood vessels and Cooper’s ligaments. A range of different breast models can then be generated by combining these components. Visual realism was validated using a receiver operating characteristic (ROC) study of patches from simulated images calculated using the breast models and from real patient images. Quantitative analysis was undertaken using fractal dimension and power spectrum analysis. The average areas under the ROC curves for 2D and DBT images were 0.51 ± 0.06 and 0.54 ± 0.09 demonstrating that simulated and real images were statistically indistinguishable by expert breast readers (7 observers); errors represented as one standard error of the mean. The average fractal dimensions (2D, DBT) for real and simulated images were (2.72 ± 0.01, 2.75 ± 0.01) and (2.77 ± 0.03, 2.82 ± 0.04) respectively; errors represented as one standard error of the mean. Excellent agreement was found between power spectrum curves of real and simulated images, with average β values (2D, DBT) of (3.10 ± 0.17, 3.21 ± 0.11) and (3.01 ± 0.32, 3.19 ± 0.07) respectively; errors represented as one standard error of the mean. These results demonstrate that radiological images of these breast models realistically represent the complexity of real breast structures and can be used to simulate patches from mammograms and DBT images that are indistinguishable from patches from the corresponding real breast images. The method can generate about 500 radiological patches (~30 mm × 30 mm) per day for AFC experiments on a single workstation. This is the first study to quantitatively validate the realism of simulated radiological breast images using direct blinded comparison with real data via the ROC paradigm with expert breast readers.
Air-braked cycle ergometers: validity of the correction factor for barometric pressure.
Finn, J P; Maxwell, B F; Withers, R T
2000-10-01
Barometric pressure exerts by far the greatest influence of the three environmental factors (barometric pressure, temperature and humidity) on power outputs from air-braked ergometers. The barometric pressure correction factor for power outputs from air-braked ergometers is in widespread use but apparently has never been empirically validated. Our experiment validated this correction factor by calibrating two air-braked cycle ergometers in a hypobaric chamber using a dynamic calibration rig. The results showed that if the power output correction for changes in air resistance at barometric pressures corresponding to altitudes of 38, 600, 1,200 and 1,800 m above mean sea level were applied, then the coefficients of variation were 0.8-1.9% over the range of 160-1,597 W. The overall mean error was 3.0 % but this included up to 0.73 % for the propagated error that was associated with errors in the measurement of: a) temperature b) relative humidity c) barometric pressure d) force, distance and angular velocity by the dynamic calibration rig. The overall mean error therefore approximated the +/- 2.0% of true load that was specified by the Laboratory Standards Assistance Scheme of the Australian Sports Commission. The validity of the correction factor for barometric pressure on power output was therefore demonstrated over the altitude range of 38-1,800 m.
Calibration of a laboratory spectrophotometer for specular light by means of stacked glass plates.
NASA Technical Reports Server (NTRS)
Allen, W. A.; Richardson, A. J.
1971-01-01
Stacked glass plates have been used to calibrate a laboratory spectrophotometer, over the spectral range 0.5-2.5 microns, for specular light. The uncalibrated instrument was characterized by systematic errors when used to measure the reflectance and transmittance of stacked glass plates. Calibration included first, a determination of the reflectance of a standard composed of barium sulfate paint deposited on an aluminum plate; second, the approximation of the reflectance and transmittance residuals between observed and computed values by means of cubic equations; and, finally, the removal of the systematic errors by a computer. The instrument, after calibration, was accurate to 1% when used to measure the reflectance and transmittance of stacked glass plates.
Agogo, George O.
2017-01-01
Measurement error in exposure variables is a serious impediment in epidemiological studies that relate exposures to health outcomes. In nutritional studies, interest could be in the association between long-term dietary intake and disease occurrence. Long-term intake is usually assessed with food frequency questionnaire (FFQ), which is prone to recall bias. Measurement error in FFQ-reported intakes leads to bias in parameter estimate that quantifies the association. To adjust for bias in the association, a calibration study is required to obtain unbiased intake measurements using a short-term instrument such as 24-hour recall (24HR). The 24HR intakes are used as response in regression calibration to adjust for bias in the association. For foods not consumed daily, 24HR-reported intakes are usually characterized by excess zeroes, right skewness, and heteroscedasticity posing serious challenge in regression calibration modeling. We proposed a zero-augmented calibration model to adjust for measurement error in reported intake, while handling excess zeroes, skewness, and heteroscedasticity simultaneously without transforming 24HR intake values. We compared the proposed calibration method with the standard method and with methods that ignore measurement error by estimating long-term intake with 24HR and FFQ-reported intakes. The comparison was done in real and simulated datasets. With the 24HR, the mean increase in mercury level per ounce fish intake was about 0.4; with the FFQ intake, the increase was about 1.2. With both calibration methods, the mean increase was about 2.0. Similar trend was observed in the simulation study. In conclusion, the proposed calibration method performs at least as good as the standard method. PMID:27704599
Smith, S. Jerrod; Lewis, Jason M.; Graves, Grant M.
2015-09-28
Generalized-least-squares multiple-linear regression analysis was used to formulate regression relations between peak-streamflow frequency statistics and basin characteristics. Contributing drainage area was the only basin characteristic determined to be statistically significant for all percentage of annual exceedance probabilities and was the only basin characteristic used in regional regression equations for estimating peak-streamflow frequency statistics on unregulated streams in and near the Oklahoma Panhandle. The regression model pseudo-coefficient of determination, converted to percent, for the Oklahoma Panhandle regional regression equations ranged from about 38 to 63 percent. The standard errors of prediction and the standard model errors for the Oklahoma Panhandle regional regression equations ranged from about 84 to 148 percent and from about 76 to 138 percent, respectively. These errors were comparable to those reported for regional peak-streamflow frequency regression equations for the High Plains areas of Texas and Colorado. The root mean square errors for the Oklahoma Panhandle regional regression equations (ranging from 3,170 to 92,000 cubic feet per second) were less than the root mean square errors for the Oklahoma statewide regression equations (ranging from 18,900 to 412,000 cubic feet per second); therefore, the Oklahoma Panhandle regional regression equations produce more accurate peak-streamflow statistic estimates for the irrigated period of record in the Oklahoma Panhandle than do the Oklahoma statewide regression equations. The regression equations developed in this report are applicable to streams that are not substantially affected by regulation, impoundment, or surface-water withdrawals. These regression equations are intended for use for stream sites with contributing drainage areas less than or equal to about 2,060 square miles, the maximum value for the independent variable used in the regression analysis.
Hess, Glen W.
2002-01-01
Techniques for estimating monthly streamflow-duration characteristics at ungaged and partial-record sites in central Nevada have been updated. These techniques were developed using streamflow records at six continuous-record sites, basin physical and climatic characteristics, and concurrent streamflow measurements at four partial-record sites. Two methods, the basin-characteristic method and the concurrent-measurement method, were developed to provide estimating techniques for selected streamflow characteristics at ungaged and partial-record sites in central Nevada. In the first method, logarithmic-regression analyses were used to relate monthly mean streamflows (from all months and by month) from continuous-record gaging sites of various percent exceedence levels or monthly mean streamflows (by month) to selected basin physical and climatic variables at ungaged sites. Analyses indicate that the total drainage area and percent of drainage area at altitudes greater than 10,000 feet are the most significant variables. For the equations developed from all months of monthly mean streamflow, the coefficient of determination averaged 0.84 and the standard error of estimate of the relations for the ungaged sites averaged 72 percent. For the equations derived from monthly means by month, the coefficient of determination averaged 0.72 and the standard error of estimate of the relations averaged 78 percent. If standard errors are compared, the relations developed in this study appear generally to be less accurate than those developed in a previous study. However, the new relations are based on additional data and the slight increase in error may be due to the wider range of streamflow for a longer period of record, 1995-2000. In the second method, streamflow measurements at partial-record sites were correlated with concurrent streamflows at nearby gaged sites by the use of linear-regression techniques. Statistical measures of results using the second method typically indicated greater accuracy than for the first method. However, to make estimates for individual months, the concurrent-measurement method requires several years additional streamflow data at more partial-record sites. Thus, exceedence values for individual months are not yet available due to the low number of concurrent-streamflow-measurement data available. Reliability, limitations, and applications of both estimating methods are described herein.
Validity of an ultra-wideband local positioning system to measure locomotion in indoor sports.
Serpiello, F R; Hopkins, W G; Barnes, S; Tavrou, J; Duthie, G M; Aughey, R J; Ball, K
2018-08-01
The validity of an Ultra-wideband (UWB) positioning system was investigated during linear and change-of-direction (COD) running drills. Six recreationally-active men performed ten repetitions of four activities (walking, jogging, maximal acceleration, and 45º COD) on an indoor court. Activities were repeated twice, in the centre of the court and on the side. Participants wore a receiver tag (Clearsky T6, Catapult Sports) and two reflective markers placed on the tag to allow for comparisons with the criterion system (Vicon). Distance, mean and peak velocity, acceleration, and deceleration were assessed. Validity was assessed via percentage least-square means difference (Clearsky-Vicon) with 90% confidence interval and magnitude-based inference; typical error was expressed as within-subject standard deviation. The mean differences for distance, mean/peak speed, and mean/peak accelerations in the linear drills were in the range of 0.2-12%, with typical errors between 1.2 and 9.3%. Mean and peak deceleration had larger differences and errors between systems. In the COD drill, moderate-to-large differences were detected for the activity performed in the centre of the court, increasing to large/very large on the side. When filtered and smoothed following a similar process, the UWB-based positioning system had acceptable validity, compared to Vicon, to assess movements representative of indoor sports.
Error recovery in shared memory multiprocessors using private caches
NASA Technical Reports Server (NTRS)
Wu, Kun-Lung; Fuchs, W. Kent; Patel, Janak H.
1990-01-01
The problem of recovering from processor transient faults in shared memory multiprocesses systems is examined. A user-transparent checkpointing and recovery scheme using private caches is presented. Processes can recover from errors due to faulty processors by restarting from the checkpointed computation state. Implementation techniques using checkpoint identifiers and recovery stacks are examined as a means of reducing performance degradation in processor utilization during normal execution. This cache-based checkpointing technique prevents rollback propagation, provides rapid recovery, and can be integrated into standard cache coherence protocols. An analytical model is used to estimate the relative performance of the scheme during normal execution. Extensions to take error latency into account are presented.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Keeling, V; Jin, H; Hossain, S
2014-06-15
Purpose: To evaluate setup accuracy and quantify individual systematic and random errors for the various hardware and software components of the frameless 6D-BrainLAB ExacTrac system. Methods: 35 patients with cranial lesions, some with multiple isocenters (50 total lesions treated in 1, 3, 5 fractions), were investigated. All patients were simulated with a rigid head-and-neck mask and the BrainLAB localizer. CT images were transferred to the IPLAN treatment planning system where optimized plans were generated using stereotactic reference frame based on the localizer. The patients were setup initially with infrared (IR) positioning ExacTrac system. Stereoscopic X-ray images (XC: X-ray Correction) weremore » registered to their corresponding digitally-reconstructed-radiographs, based on bony anatomy matching, to calculate 6D-translational and rotational (Lateral, Longitudinal, Vertical, Pitch, Roll, Yaw) shifts. XC combines systematic errors of the mask, localizer, image registration, frame, and IR. If shifts were below tolerance (0.7 mm translational and 1 degree rotational), treatment was initiated; otherwise corrections were applied and additional X-rays were acquired to verify patient position (XV: X-ray Verification). Statistical analysis was used to extract systematic and random errors of the different components of the 6D-ExacTrac system and evaluate the cumulative setup accuracy. Results: Mask systematic errors (translational; rotational) were the largest and varied from one patient to another in the range (−15 to 4mm; −2.5 to 2.5degree) obtained from mean of XC for each patient. Setup uncertainty in IR positioning (0.97,2.47,1.62mm;0.65,0.84,0.96degree) was extracted from standard-deviation of XC. Combined systematic errors of the frame and localizer (0.32,−0.42,−1.21mm; −0.27,0.34,0.26degree) was extracted from mean of means of XC distributions. Final patient setup uncertainty was obtained from the standard deviations of XV (0.57,0.77,0.67mm,0.39,0.35,0.30degree). Conclusion: Statistical analysis was used to calculate cumulative and individual systematic errors from the different hardware and software components of the 6D-ExacTrac-system. Patients were treated with cumulative errors (<1mm,<1degree) with XV image guidance.« less
Evaluation of the 3dMDface system as a tool for soft tissue analysis.
Hong, C; Choi, K; Kachroo, Y; Kwon, T; Nguyen, A; McComb, R; Moon, W
2017-06-01
To evaluate the accuracy of three-dimensional stereophotogrammetry by comparing values obtained from direct anthropometry and the 3dMDface system. To achieve a more comprehensive evaluation of the reliability of 3dMD, both linear and surface measurements were examined. UCLA Section of Orthodontics. Mannequin head as model for anthropometric measurements. Image acquisition and analysis were carried out on a mannequin head using 16 anthropometric landmarks and 21 measured parameters for linear and surface distances. 3D images using 3dMDface system were made at 0, 1 and 24 hours; 1, 2, 3 and 4 weeks. Error magnitude statistics used include mean absolute difference, standard deviation of error, relative error magnitude and root mean square error. Intra-observer agreement for all measurements was attained. Overall mean errors were lower than 1.00 mm for both linear and surface parameter measurements, except in 5 of the 21 measurements. The three longest parameter distances showed increased variation compared to shorter distances. No systematic errors were observed for all performed paired t tests (P<.05). Agreement values between two observers ranged from 0.91 to 0.99. Measurements on a mannequin confirmed the accuracy of all landmarks and parameters analysed in this study using the 3dMDface system. Results indicated that 3dMDface system is an accurate tool for linear and surface measurements, with potentially broad-reaching applications in orthodontics, surgical treatment planning and treatment evaluation. © 2017 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
Saunders, Kathryn J; Little, Julie-Anne; McClelland, Julie F; Jackson, A Jonathan
2010-06-01
To describe refractive status in children and young adults with cerebral palsy (CP) and relate refractive error to standardized measures of type and severity of CP impairment and to ocular dimensions. A population-based sample of 118 participants aged 4 to 23 years with CP (mean 11.64 +/- 4.06) and an age-appropriate control group (n = 128; age, 4-16 years; mean, 9.33 +/- 3.52) were recruited. Motor impairment was described with the Gross Motor Function Classification Scale (GMFCS), and subtype was allocated with the Surveillance of Cerebral Palsy in Europe (SCPE). Measures of refractive error were obtained from all participants and ocular biometry from a subgroup with CP. A significantly higher prevalence and magnitude of refractive error was found in the CP group compared to the control group. Axial length and spherical refractive error were strongly related. This relation did not improve with inclusion of corneal data. There was no relation between the presence or magnitude of spherical refractive errors in CP and the level of motor impairment, intellectual impairment, or the presence of communication difficulties. Higher spherical refractive errors were significantly associated with the nonspastic CP subtype. The presence and magnitude of astigmatism were greater when intellectual impairment was more severe, and astigmatic errors were explained by corneal dimensions. Conclusions. High refractive errors are common in CP, pointing to impairment of the emmetropization process. Biometric data support this In contrast to other functional vision measures, spherical refractive error is unrelated to CP severity, but those with nonspastic CP tend to demonstrate the most extreme errors in refraction.
An Optimal Control Modification to Model-Reference Adaptive Control for Fast Adaptation
NASA Technical Reports Server (NTRS)
Nguyen, Nhan T.; Krishnakumar, Kalmanje; Boskovic, Jovan
2008-01-01
This paper presents a method that can achieve fast adaptation for a class of model-reference adaptive control. It is well-known that standard model-reference adaptive control exhibits high-gain control behaviors when a large adaptive gain is used to achieve fast adaptation in order to reduce tracking error rapidly. High gain control creates high-frequency oscillations that can excite unmodeled dynamics and can lead to instability. The fast adaptation approach is based on the minimization of the squares of the tracking error, which is formulated as an optimal control problem. The necessary condition of optimality is used to derive an adaptive law using the gradient method. This adaptive law is shown to result in uniform boundedness of the tracking error by means of the Lyapunov s direct method. Furthermore, this adaptive law allows a large adaptive gain to be used without causing undesired high-gain control effects. The method is shown to be more robust than standard model-reference adaptive control. Simulations demonstrate the effectiveness of the proposed method.
Program documentation: Surface heating rate of thin skin models (THNSKN)
NASA Technical Reports Server (NTRS)
Mcbryde, J. D.
1975-01-01
Program THNSKN computes the mean heating rate at a maximum of 100 locations on the surface of thin skin transient heating rate models. Output is printed in tabular form and consists of time history tabulation of temperatures, average temperatures, heat loss without conduction correction, mean heating rate, least squares heating rate, and the percent standard error of the least squares heating rates. The input tape used is produced by the program EHTS03.
Merged SAGE II / MIPAS / OMPS Ozone Record : Impact of Transfer Standard on Ozone Trends.
NASA Astrophysics Data System (ADS)
Kramarova, N. A.; Laeng, A.; von Clarmann, T.; Stiller, G. P.; Walker, K. A.; Zawodny, J. M.; Plieninger, J.
2017-12-01
The deseasonalized ozone anomalies from SAGE II, MIPAS and OMPS-LP datasets are merged into one long record. Two versions of the dataset will be presented : ACE-FTS instrument or MLS instrument are used as a transfer standard. The data are provided in 10 degrees latitude bins, going from 60N to 60S for the period from October 1984 to March 2017. The main differences between presented in this study merged ozone record and the merged SAGE II / Ozone_CCI / OMPS-Saskatoon dataset by V. Sofieva are: - the OMPS-LP data are from the NASA GSFC version 2 processor - the MIPAS 2002-2004 date are taken into the record - Data are merged using a transfer standard. In overlapping periods data are merged as weighted means where the weights are inversely proportional to the standard errors of the means (SEM) of the corresponding individual monthly means. The merged dataset comes with the uncertainty estimates. Ozone trends are calculated out of both versions of the dataset. The impact of transfer standard on obtained trends is discussed.
Cost-effectiveness of the stream-gaging program in New Jersey
Schopp, R.D.; Ulery, R.L.
1984-01-01
The results of a study of the cost-effectiveness of the stream-gaging program in New Jersey are documented. This study is part of a 5-year nationwide analysis undertaken by the U.S. Geological Survey to define and document the most cost-effective means of furnishing streamflow information. This report identifies the principal uses of the data and relates those uses to funding sources, applies, at selected stations, alternative less costly methods (that is flow routing, regression analysis) for furnishing the data, and defines a strategy for operating the program which minimizes uncertainty in the streamflow data for specific operating budgets. Uncertainty in streamflow data is primarily a function of the percentage of missing record and the frequency of discharge measurements. In this report, 101 continuous stream gages and 73 crest-stage or stage-only gages are analyzed. A minimum budget of $548,000 is required to operate the present stream-gaging program in New Jersey with an average standard error of 27.6 percent. The maximum budget analyzed was $650,000, which resulted in an average standard error of 17.8 percent. The 1983 budget of $569,000 resulted in a standard error of 24.9 percent under present operating policy. (USGS)
Daud-Gallotti, Renata Mahfuz; Morinaga, Christian Valle; Arlindo-Rodrigues, Marcelo; Velasco, Irineu Tadeu; Arruda Martins, Milton; Tiberio, Iolanda Calvo
2011-01-01
INTRODUCTION: Patient safety is seldom assessed using objective evaluations during undergraduate medical education. OBJECTIVE: To evaluate the performance of fifth-year medical students using an objective structured clinical examination focused on patient safety after implementation of an interactive program based on adverse events recognition and disclosure. METHODS: In 2007, a patient safety program was implemented in the internal medicine clerkship of our hospital. The program focused on human error theory, epidemiology of incidents, adverse events, and disclosure. Upon completion of the program, students completed an objective structured clinical examination with five stations and standardized patients. One station focused on patient safety issues, including medical error recognition/disclosure, the patient-physician relationship and humanism issues. A standardized checklist was completed by each standardized patient to assess the performance of each student. The student's global performance at each station and performance in the domains of medical error, the patient-physician relationship and humanism were determined. The correlations between the student performances in these three domains were calculated. RESULTS: A total of 95 students participated in the objective structured clinical examination. The mean global score at the patient safety station was 87.59±1.24 points. Students' performance in the medical error domain was significantly lower than their performance on patient-physician relationship and humanistic issues. Less than 60% of students (n = 54) offered the simulated patient an apology after a medical error occurred. A significant correlation was found between scores obtained in the medical error domains and scores related to both the patient-physician relationship and humanistic domains. CONCLUSIONS: An objective structured clinical examination is a useful tool to evaluate patient safety competencies during the medical student clerkship. PMID:21876976
Jin, Mengtong; Sun, Wenshuo; Li, Qin; Sun, Xiaohong; Pan, Yingjie; Zhao, Yong
2014-04-04
We evaluated the difference of three standard curves in quantifying viable Vibrio parahaemolyticus in samples by real-time reverse-transcriptase PCR (Real-time RT-PCR). The standard curve A was established by 10-fold diluted cDNA. The cDNA was reverse transcripted after RNA synthesized in vitro. The standard curve B and C were established by 10-fold diluted cDNA. The cDNA was synthesized after RNA isolated from Vibrio parahaemolyticus in pure cultures (10(8) CFU/mL) and shrimp samples (10(6) CFU/g) (Standard curve A and C were proposed for the first time). Three standard curves were performed to quantitatively detect V. parahaemolyticus in six samples, respectively (Two pure cultured V. parahaemolyticus samples, two artificially contaminated cooked Litopenaeus vannamei samples and two artificially contaminated Litopenaeus vannamei samples). Then we evaluated the quantitative results of standard curve and the plate counting results and then analysed the differences. The three standard curves all show a strong linear relationship between the fractional cycle number and V. parahaemolyticus concentration (R2 > 0.99); The quantitative results of Real-time PCR were significantly (p < 0.05) lower than the results of plate counting. The relative errors compared with the results of plate counting ranked standard curve A (30.0%) > standard curve C (18.8%) > standard curve B (6.9%); The average differences between standard curve A and standard curve B and C were - 2.25 Lg CFU/mL and - 0.75 Lg CFU/mL, respectively, and the mean relative errors were 48.2% and 15.9%, respectively; The average difference between standard curve B and C was among (1.47 -1.53) Lg CFU/mL and the average relative errors were among 19.0% - 23.8%. Standard curve B could be applied to Real-time RT-PCR when quantify the number of viable microorganisms in samples.
Cruz, Jennifer L; Brown, Jamie N
2015-06-01
Rigorous practices for safe dispensing of investigational drugs are not standardized. This investigation sought to identify error-prevention processes utilized in the provision of investigational drug services (IDS) and to characterize pharmacists' perceptions about safety risks posed by investigational drugs. An electronic questionnaire was distributed to an audience of IDS pharmacists within the Veteran Affairs Health System. Multiple facets were examined including demographics, perceptions of medication safety, and standard processes used to support investigational drug protocols. Twenty-one respondents (32.8% response rate) from the Northeast, Midwest, South, West, and Non-contiguous United States participated. The mean number of pharmacist full-time equivalents (FTEs) dedicated to the IDS was 0.77 per site with 0.2 technician FTEs. The mean number of active protocols was 22. Seventeen respondents (81%) indicated some level of concern for safety risks. Concerns related to the packaging of medications were expressed, most notably lack of product differentiation, expiration dating, barcodes, and choice of font size or color. Regarding medication safety practices, the majority of sites had specific procedures in place for storing and securing drug supply, temperature monitoring, and prescription labeling. Repackaging bulk items and proactive error-identification strategies were less common. Sixty-seven percent of respondents reported that an independent double check was not routinely performed. Medication safety concerns exist among pharmacists in an investigational drug service; however, a variety of measures have been employed to improve medication safety practices. Best practices for the safe dispensing of investigational medications should be developed in order to standardize these error-prevention strategies.
Brown, Jamie N.
2015-01-01
Objectives: Rigorous practices for safe dispensing of investigational drugs are not standardized. This investigation sought to identify error-prevention processes utilized in the provision of investigational drug services (IDS) and to characterize pharmacists’ perceptions about safety risks posed by investigational drugs. Methods: An electronic questionnaire was distributed to an audience of IDS pharmacists within the Veteran Affairs Health System. Multiple facets were examined including demographics, perceptions of medication safety, and standard processes used to support investigational drug protocols. Results: Twenty-one respondents (32.8% response rate) from the Northeast, Midwest, South, West, and Non-contiguous United States participated. The mean number of pharmacist full-time equivalents (FTEs) dedicated to the IDS was 0.77 per site with 0.2 technician FTEs. The mean number of active protocols was 22. Seventeen respondents (81%) indicated some level of concern for safety risks. Concerns related to the packaging of medications were expressed, most notably lack of product differentiation, expiration dating, barcodes, and choice of font size or color. Regarding medication safety practices, the majority of sites had specific procedures in place for storing and securing drug supply, temperature monitoring, and prescription labeling. Repackaging bulk items and proactive error-identification strategies were less common. Sixty-seven percent of respondents reported that an independent double check was not routinely performed. Conclusions: Medication safety concerns exist among pharmacists in an investigational drug service; however, a variety of measures have been employed to improve medication safety practices. Best practices for the safe dispensing of investigational medications should be developed in order to standardize these error-prevention strategies. PMID:26240744
Validation of the firefighter WFI treadmill protocol for predicting VO2 max.
Dolezal, B A; Barr, D; Boland, D M; Smith, D L; Cooper, C B
2015-03-01
The Wellness-Fitness Initiative submaximal treadmill exercise test (WFI-TM) is recommended by the US National Fire Protection Agency to assess aerobic capacity (VO2 max) in firefighters. However, predicting VO2 max from submaximal tests can result in errors leading to erroneous conclusions about fitness. To investigate the level of agreement between VO2 max predicted from the WFI-TM against its direct measurement using exhaled gas analysis. The WFI-TM was performed to volitional fatigue. Differences between estimated VO2 max (derived from the WFI-TM equation) and direct measurement (exhaled gas analysis) were compared by paired t-test and agreement was determined using Pearson Product-Moment correlation and Bland-Altman analysis. Statistical significance was set at P < 0.05. Fifty-nine men performed the WFI-TM. Mean (standard deviation) values for estimated and measured VO2 max were 44.6 (3.4) and 43.6 (7.9) ml/kg/min, respectively (P < 0.01). The mean bias by which WFI-TM overestimated VO2 max was 0.9ml/kg/min with a 95% prediction interval of ±13.1. Prediction errors for 22% of subjects were within ±5%; 36% had errors greater than or equal to ±15% and 7% had greater than ±30% errors. The correlation between predicted and measured VO2 max was r = 0.55 (standard error of the estimate = 2.8ml/kg/min). WFI-TM predicts VO2 max with 11% error. There is a tendency to overestimate aerobic capacity in less fit individuals and to underestimate it in more fit individuals leading to a clustering of values around 42ml/kg/min, a criterion used by some fire departments to assess fitness for duty. © The Author 2015. Published by Oxford University Press on behalf of the Society of Occupational Medicine. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
Statistical modelling of thermal annealing of fission tracks in apatite
NASA Astrophysics Data System (ADS)
Laslett, G. M.; Galbraith, R. F.
1996-12-01
We develop an improved methodology for modelling the relationship between mean track length, temperature, and time in fission track annealing experiments. We consider "fanning Arrhenius" models, in which contours of constant mean length on an Arrhenius plot are straight lines meeting at a common point. Features of our approach are explicit use of subject matter knowledge, treating mean length as the response variable, modelling of the mean-variance relationship with two components of variance, improved modelling of the control sample, and using information from experiments in which no tracks are seen. This approach overcomes several weaknesses in previous models and provides a robust six parameter model that is widely applicable. Estimation is via direct maximum likelihood which can be implemented using a standard numerical optimisation package. Because the model is highly nonlinear, some reparameterisations are needed to achieve stable estimation and calculation of precisions. Experience suggests that precisions are more convincingly estimated from profile log-likelihood functions than from the information matrix. We apply our method to the B-5 and Sr fluorapatite data of Crowley et al. (1991) and obtain well-fitting models in both cases. For the B-5 fluorapatite, our model exhibits less fanning than that of Crowley et al. (1991), although fitted mean values above 12 μm are fairly similar. However, predictions can be different, particularly for heavy annealing at geological time scales, where our model is less retentive. In addition, the refined error structure of our model results in tighter prediction errors, and has components of error that are easier to verify or modify. For the Sr fluorapatite, our fitted model for mean lengths does not differ greatly from that of Crowley et al. (1991), but our error structure is quite different.
Combining forecast weights: Why and how?
NASA Astrophysics Data System (ADS)
Yin, Yip Chee; Kok-Haur, Ng; Hock-Eam, Lim
2012-09-01
This paper proposes a procedure called forecast weight averaging which is a specific combination of forecast weights obtained from different methods of constructing forecast weights for the purpose of improving the accuracy of pseudo out of sample forecasting. It is found that under certain specified conditions, forecast weight averaging can lower the mean squared forecast error obtained from model averaging. In addition, we show that in a linear and homoskedastic environment, this superior predictive ability of forecast weight averaging holds true irrespective whether the coefficients are tested by t statistic or z statistic provided the significant level is within the 10% range. By theoretical proofs and simulation study, we have shown that model averaging like, variance model averaging, simple model averaging and standard error model averaging, each produces mean squared forecast error larger than that of forecast weight averaging. Finally, this result also holds true marginally when applied to business and economic empirical data sets, Gross Domestic Product (GDP growth rate), Consumer Price Index (CPI) and Average Lending Rate (ALR) of Malaysia.
A Novel Multi-Camera Calibration Method based on Flat Refractive Geometry
NASA Astrophysics Data System (ADS)
Huang, S.; Feng, M. C.; Zheng, T. X.; Li, F.; Wang, J. Q.; Xiao, L. F.
2018-03-01
Multi-camera calibration plays an important role in many field. In the paper, we present a novel multi-camera calibration method based on flat refractive geometry. All cameras can acquire calibration images of transparent glass calibration board (TGCB) at the same time. The application of TGCB leads to refractive phenomenon which can generate calibration error. The theory of flat refractive geometry is employed to eliminate the error. The new method can solve the refractive phenomenon of TGCB. Moreover, the bundle adjustment method is used to minimize the reprojection error and obtain optimized calibration results. Finally, the four-cameras calibration results of real data show that the mean value and standard deviation of the reprojection error of our method are 4.3411e-05 and 0.4553 pixel, respectively. The experimental results show that the proposed method is accurate and reliable.
Lamadrid-Figueroa, Héctor; Téllez-Rojo, Martha M; Angeles, Gustavo; Hernández-Ávila, Mauricio; Hu, Howard
2011-01-01
In-vivo measurement of bone lead by means of K-X-ray fluorescence (KXRF) is the preferred biological marker of chronic exposure to lead. Unfortunately, considerable measurement error associated with KXRF estimations can introduce bias in estimates of the effect of bone lead when this variable is included as the exposure in a regression model. Estimates of uncertainty reported by the KXRF instrument reflect the variance of the measurement error and, although they can be used to correct the measurement error bias, they are seldom used in epidemiological statistical analyzes. Errors-in-variables regression (EIV) allows for correction of bias caused by measurement error in predictor variables, based on the knowledge of the reliability of such variables. The authors propose a way to obtain reliability coefficients for bone lead measurements from uncertainty data reported by the KXRF instrument and compare, by the use of Monte Carlo simulations, results obtained using EIV regression models vs. those obtained by the standard procedures. Results of the simulations show that Ordinary Least Square (OLS) regression models provide severely biased estimates of effect, and that EIV provides nearly unbiased estimates. Although EIV effect estimates are more imprecise, their mean squared error is much smaller than that of OLS estimates. In conclusion, EIV is a better alternative than OLS to estimate the effect of bone lead when measured by KXRF. Copyright © 2010 Elsevier Inc. All rights reserved.
Rapid Detection of Volatile Oil in Mentha haplocalyx by Near-Infrared Spectroscopy and Chemometrics.
Yan, Hui; Guo, Cheng; Shao, Yang; Ouyang, Zhen
2017-01-01
Near-infrared spectroscopy combined with partial least squares regression (PLSR) and support vector machine (SVM) was applied for the rapid determination of chemical component of volatile oil content in Mentha haplocalyx . The effects of data pre-processing methods on the accuracy of the PLSR calibration models were investigated. The performance of the final model was evaluated according to the correlation coefficient ( R ) and root mean square error of prediction (RMSEP). For PLSR model, the best preprocessing method combination was first-order derivative, standard normal variate transformation (SNV), and mean centering, which had of 0.8805, of 0.8719, RMSEC of 0.091, and RMSEP of 0.097, respectively. The wave number variables linking to volatile oil are from 5500 to 4000 cm-1 by analyzing the loading weights and variable importance in projection (VIP) scores. For SVM model, six LVs (less than seven LVs in PLSR model) were adopted in model, and the result was better than PLSR model. The and were 0.9232 and 0.9202, respectively, with RMSEC and RMSEP of 0.084 and 0.082, respectively, which indicated that the predicted values were accurate and reliable. This work demonstrated that near infrared reflectance spectroscopy with chemometrics could be used to rapidly detect the main content volatile oil in M. haplocalyx . The quality of medicine directly links to clinical efficacy, thus, it is important to control the quality of Mentha haplocalyx . Near-infrared spectroscopy combined with partial least squares regression (PLSR) and support vector machine (SVM) was applied for the rapid determination of chemical component of volatile oil content in Mentha haplocalyx . For SVM model, 6 LVs (less than 7 LVs in PLSR model) were adopted in model, and the result was better than PLSR model. It demonstrated that near infrared reflectance spectroscopy with chemometrics could be used to rapidly detect the main content volatile oil in Mentha haplocalyx . Abbreviations used: 1 st der: First-order derivative; 2 nd der: Second-order derivative; LOO: Leave-one-out; LVs: Latent variables; MC: Mean centering, NIR: Near-infrared; NIRS: Near infrared spectroscopy; PCR: Principal component regression, PLSR: Partial least squares regression; RBF: Radial basis function; RMSEC: Root mean square error of cross validation, RMSEC: Root mean square error of calibration; RMSEP: Root mean square error of prediction; SNV: Standard normal variate transformation; SVM: Support vector machine; VIP: Variable Importance in projection.
Analysis of a first order phase locked loop in the presence of Gaussian noise
NASA Technical Reports Server (NTRS)
Blasche, P. R.
1977-01-01
A first-order digital phase locked loop is analyzed by application of a Markov chain model. Steady state loop error probabilities, phase standard deviation, and mean loop transient times are determined for various input signal to noise ratios. Results for direct loop simulation are presented for comparison.
A Comparison of Latent Growth Models for Constructs Measured by Multiple Items
ERIC Educational Resources Information Center
Leite, Walter L.
2007-01-01
Univariate latent growth modeling (LGM) of composites of multiple items (e.g., item means or sums) has been frequently used to analyze the growth of latent constructs. This study evaluated whether LGM of composites yields unbiased parameter estimates, standard errors, chi-square statistics, and adequate fit indexes. Furthermore, LGM was compared…
How Sample Size Affects a Sampling Distribution
ERIC Educational Resources Information Center
Mulekar, Madhuri S.; Siegel, Murray H.
2009-01-01
If students are to understand inferential statistics successfully, they must have a profound understanding of the nature of the sampling distribution. Specifically, they must comprehend the determination of the expected value and standard error of a sampling distribution as well as the meaning of the central limit theorem. Many students in a high…
A new SAS program for behavioral analysis of Electrical Penetration Graph (EPG) data
USDA-ARS?s Scientific Manuscript database
A new program is introduced that uses SAS software to duplicate output of descriptive statistics from the Sarria Excel workbook for EPG waveform analysis. Not only are publishable means and standard errors or deviations output, the user also is guided through four relatively simple sub-programs for ...
Optimal estimation of suspended-sediment concentrations in streams
Holtschlag, D.J.
2001-01-01
Optimal estimators are developed for computation of suspended-sediment concentrations in streams. The estimators are a function of parameters, computed by use of generalized least squares, which simultaneously account for effects of streamflow, seasonal variations in average sediment concentrations, a dynamic error component, and the uncertainty in concentration measurements. The parameters are used in a Kalman filter for on-line estimation and an associated smoother for off-line estimation of suspended-sediment concentrations. The accuracies of the optimal estimators are compared with alternative time-averaging interpolators and flow-weighting regression estimators by use of long-term daily-mean suspended-sediment concentration and streamflow data from 10 sites within the United States. For sampling intervals from 3 to 48 days, the standard errors of on-line and off-line optimal estimators ranged from 52.7 to 107%, and from 39.5 to 93.0%, respectively. The corresponding standard errors of linear and cubic-spline interpolators ranged from 48.8 to 158%, and from 50.6 to 176%, respectively. The standard errors of simple and multiple regression estimators, which did not vary with the sampling interval, were 124 and 105%, respectively. Thus, the optimal off-line estimator (Kalman smoother) had the lowest error characteristics of those evaluated. Because suspended-sediment concentrations are typically measured at less than 3-day intervals, use of optimal estimators will likely result in significant improvements in the accuracy of continuous suspended-sediment concentration records. Additional research on the integration of direct suspended-sediment concentration measurements and optimal estimators applied at hourly or shorter intervals is needed.
Giordano, Lydia; Friedman, David S; Repka, Michael X; Katz, Joanne; Ibironke, Josephine; Hawes, Patricia; Tielsch, James M
2009-04-01
To determine the age-specific prevalence of refractive errors in white and African-American preschool children. The Baltimore Pediatric Eye Disease Study is a population-based evaluation of the prevalence of ocular disorders in children aged 6 to 71 months in Baltimore, Maryland. Among 4132 children identified, 3990 eligible children (97%) were enrolled and 2546 children (62%) were examined. Cycloplegic autorefraction was attempted in all children with the use of a Nikon Retinomax K-Plus 2 (Nikon Corporation, Tokyo, Japan). If a reliable autorefraction could not be obtained after 3 attempts, cycloplegic streak retinoscopy was performed. Mean spherical equivalent (SE) refractive error, astigmatism, and prevalence of higher refractive errors among African-American and white children. The mean SE of right eyes was +1.49 diopters (D) (standard deviation [SD] = 1.23) in white children and +0.71 D (SD = 1.35) in African-American children (mean difference of 0.78 D; 95% confidence interval [CI], 0.67-0.89). Mean SE refractive error did not decline with age in either group. The prevalence of myopia of 1.00 D or more in the eye with the lesser refractive error was 0.7% in white children and 5.5% in African-American children (relative risk [RR], 8.01; 95% CI, 3.70-17.35). The prevalence of hyperopia of +3 D or more in the eye with the lesser refractive error was 8.9% in white children and 4.4% in African-American children (RR, 0.49; 95% CI, 0.35-0.68). The prevalence of emmetropia (<-1.00 D to <+1.00 D) was 35.6% in white children and 58.0% in African-American children (RR, 1.64; 95% CI, 1.49-1.80). On the basis of published prescribing guidelines, 5.1% of the children would have benefited from spectacle correction. However, only 1.3% had been prescribed correction. Significant refractive errors are uncommon in this population of urban preschool children. There was no evidence for a myopic shift over this age range in this cross-sectional study. A small proportion of preschool children would likely benefit from refractive correction, but few have had this prescribed.
Cooperstein, Robert; Young, Morgan
2014-01-01
Upright examination procedures like radiology, thermography, manual muscle testing, and spinal motion palpation may lead to spinal interventions with the patient prone. The reliability and accuracy of mapping upright examination findings to the prone position is unknown. This study had 2 primary goals: (1) investigate how erroneous spine-scapular landmark associations may lead to errors in treating and charting spine levels; and (2) study the interexaminer reliability of a novel method for mapping upright spinal sites to the prone position. Experiment 1 was a thought experiment exploring the consequences of depending on the erroneous landmark association of the inferior scapular tip with the T7 spinous process upright and T6 spinous process prone (relatively recent studies suggest these levels are T8 and T9, respectively). This allowed deduction of targeting and charting errors. In experiment 2, 10 examiners (2 experienced, 8 novice) used an index finger to maintain contact with a mid-thoracic spinous process as each of 2 participants slowly moved from the upright to the prone position. Interexaminer reliability was assessed by computing Intraclass Correlation Coefficient, standard error of the mean, root mean squared error, and the absolute value of the mean difference for each examiner from the 10 examiner mean for each of the 2 participants. The thought experiment suggesting that using the (inaccurate) scapular tip landmark rule would result in a 3 level targeting and charting error when radiological findings are mapped to the prone position. Physical upright exam procedures like motion palpation would result in a 2 level targeting error for intervention, and a 3 level error for charting. The reliability experiment showed examiners accurately maintained contact with the same thoracic spinous process as the participant went from upright to prone, ICC (2,1) = 0.83. As manual therapists, the authors have emphasized how targeting errors may impact upon manual care of the spine. Practitioners in other fields that need to accurately locate spinal levels, such as acupuncture and anesthesiology, would also be expected to draw important conclusions from these findings.
2014-01-01
Background Upright examination procedures like radiology, thermography, manual muscle testing, and spinal motion palpation may lead to spinal interventions with the patient prone. The reliability and accuracy of mapping upright examination findings to the prone position is unknown. This study had 2 primary goals: (1) investigate how erroneous spine-scapular landmark associations may lead to errors in treating and charting spine levels; and (2) study the interexaminer reliability of a novel method for mapping upright spinal sites to the prone position. Methods Experiment 1 was a thought experiment exploring the consequences of depending on the erroneous landmark association of the inferior scapular tip with the T7 spinous process upright and T6 spinous process prone (relatively recent studies suggest these levels are T8 and T9, respectively). This allowed deduction of targeting and charting errors. In experiment 2, 10 examiners (2 experienced, 8 novice) used an index finger to maintain contact with a mid-thoracic spinous process as each of 2 participants slowly moved from the upright to the prone position. Interexaminer reliability was assessed by computing Intraclass Correlation Coefficient, standard error of the mean, root mean squared error, and the absolute value of the mean difference for each examiner from the 10 examiner mean for each of the 2 participants. Results The thought experiment suggesting that using the (inaccurate) scapular tip landmark rule would result in a 3 level targeting and charting error when radiological findings are mapped to the prone position. Physical upright exam procedures like motion palpation would result in a 2 level targeting error for intervention, and a 3 level error for charting. The reliability experiment showed examiners accurately maintained contact with the same thoracic spinous process as the participant went from upright to prone, ICC (2,1) = 0.83. Conclusions As manual therapists, the authors have emphasized how targeting errors may impact upon manual care of the spine. Practitioners in other fields that need to accurately locate spinal levels, such as acupuncture and anesthesiology, would also be expected to draw important conclusions from these findings. PMID:24904747
Evaluating the utility of mid-infrared spectral subspaces for predicting soil properties.
Sila, Andrew M; Shepherd, Keith D; Pokhariyal, Ganesh P
2016-04-15
We propose four methods for finding local subspaces in large spectral libraries. The proposed four methods include (a) cosine angle spectral matching; (b) hit quality index spectral matching; (c) self-organizing maps and (d) archetypal analysis methods. Then evaluate prediction accuracies for global and subspaces calibration models. These methods were tested on a mid-infrared spectral library containing 1907 soil samples collected from 19 different countries under the Africa Soil Information Service project. Calibration models for pH, Mehlich-3 Ca, Mehlich-3 Al, total carbon and clay soil properties were developed for the whole library and for the subspace. Root mean square error of prediction was used to evaluate predictive performance of subspace and global models. The root mean square error of prediction was computed using a one-third-holdout validation set. Effect of pretreating spectra with different methods was tested for 1st and 2nd derivative Savitzky-Golay algorithm, multiplicative scatter correction, standard normal variate and standard normal variate followed by detrending methods. In summary, the results show that global models outperformed the subspace models. We, therefore, conclude that global models are more accurate than the local models except in few cases. For instance, sand and clay root mean square error values from local models from archetypal analysis method were 50% poorer than the global models except for subspace models obtained using multiplicative scatter corrected spectra with which were 12% better. However, the subspace approach provides novel methods for discovering data pattern that may exist in large spectral libraries.
Geospatial interpolation and mapping of tropospheric ozone pollution using geostatistics.
Kethireddy, Swatantra R; Tchounwou, Paul B; Ahmad, Hafiz A; Yerramilli, Anjaneyulu; Young, John H
2014-01-10
Tropospheric ozone (O3) pollution is a major problem worldwide, including in the United States of America (USA), particularly during the summer months. Ozone oxidative capacity and its impact on human health have attracted the attention of the scientific community. In the USA, sparse spatial observations for O3 may not provide a reliable source of data over a geo-environmental region. Geostatistical Analyst in ArcGIS has the capability to interpolate values in unmonitored geo-spaces of interest. In this study of eastern Texas O3 pollution, hourly episodes for spring and summer 2012 were selectively identified. To visualize the O3 distribution, geostatistical techniques were employed in ArcMap. Using ordinary Kriging, geostatistical layers of O3 for all the studied hours were predicted and mapped at a spatial resolution of 1 kilometer. A decent level of prediction accuracy was achieved and was confirmed from cross-validation results. The mean prediction error was close to 0, the root mean-standardized-prediction error was close to 1, and the root mean square and average standard errors were small. O3 pollution map data can be further used in analysis and modeling studies. Kriging results and O3 decadal trends indicate that the populace in Houston-Sugar Land-Baytown, Dallas-Fort Worth-Arlington, Beaumont-Port Arthur, San Antonio, and Longview are repeatedly exposed to high levels of O3-related pollution, and are prone to the corresponding respiratory and cardiovascular health effects. Optimization of the monitoring network proves to be an added advantage for the accurate prediction of exposure levels.
Li, Wen-bing; Yao, Lin-tao; Liu, Mu-hua; Huang, Lin; Yao, Ming-yin; Chen, Tian-bing; He, Xiu-wen; Yang, Ping; Hu, Hui-qin; Nie, Jiang-hui
2015-05-01
Cu in navel orange was detected rapidly by laser-induced breakdown spectroscopy (LIBS) combined with partial least squares (PLS) for quantitative analysis, then the effect on the detection accuracy of the model with different spectral data ptetreatment methods was explored. Spectral data for the 52 Gannan navel orange samples were pretreated by different data smoothing, mean centralized and standard normal variable transform. Then 319~338 nm wavelength section containing characteristic spectral lines of Cu was selected to build PLS models, the main evaluation indexes of models such as regression coefficient (r), root mean square error of cross validation (RMSECV) and the root mean square error of prediction (RMSEP) were compared and analyzed. Three indicators of PLS model after 13 points smoothing and processing of the mean center were found reaching 0. 992 8, 3. 43 and 3. 4 respectively, the average relative error of prediction model is only 5. 55%, and in one word, the quality of calibration and prediction of this model are the best results. The results show that selecting the appropriate data pre-processing method, the prediction accuracy of PLS quantitative model of fruits and vegetables detected by LIBS can be improved effectively, providing a new method for fast and accurate detection of fruits and vegetables by LIBS.
Gilliom, Robert J.; Helsel, Dennis R.
1986-01-01
A recurring difficulty encountered in investigations of many metals and organic contaminants in ambient waters is that a substantial portion of water sample concentrations are below limits of detection established by analytical laboratories. Several methods were evaluated for estimating distributional parameters for such censored data sets using only uncensored observations. Their reliabilities were evaluated by a Monte Carlo experiment in which small samples were generated from a wide range of parent distributions and censored at varying levels. Eight methods were used to estimate the mean, standard deviation, median, and interquartile range. Criteria were developed, based on the distribution of uncensored observations, for determining the best performing parameter estimation method for any particular data set. The most robust method for minimizing error in censored-sample estimates of the four distributional parameters over all simulation conditions was the log-probability regression method. With this method, censored observations are assumed to follow the zero-to-censoring level portion of a lognormal distribution obtained by a least squares regression between logarithms of uncensored concentration observations and their z scores. When method performance was separately evaluated for each distributional parameter over all simulation conditions, the log-probability regression method still had the smallest errors for the mean and standard deviation, but the lognormal maximum likelihood method had the smallest errors for the median and interquartile range. When data sets were classified prior to parameter estimation into groups reflecting their probable parent distributions, the ranking of estimation methods was similar, but the accuracy of error estimates was markedly improved over those without classification.
NASA Astrophysics Data System (ADS)
Kim, Younsu; Kim, Sungmin; Boctor, Emad M.
2017-03-01
An ultrasound image-guided needle tracking systems have been widely used due to their cost-effectiveness and nonionizing radiation properties. Various surgical navigation systems have been developed by utilizing state-of-the-art sensor technologies. However, ultrasound transmission beam thickness causes unfair initial evaluation conditions due to inconsistent placement of the target with respect to the ultrasound probe. This inconsistency also brings high uncertainty and results in large standard deviations for each measurement when we compare accuracy with and without the guidance. To resolve this problem, we designed a complete evaluation platform by utilizing our mid-plane detection and time of flight measurement systems. The evaluating system uses a PZT element target and an ultrasound transmitting needle. In this paper, we evaluated an optical tracker-based surgical ultrasound-guided navigation system whereby the optical tracker tracks marker frames attached on the ultrasound probe and the needle. We performed ten needle trials of guidance experiment with a mid-plane adjustment algorithm and with a B-mode segmentation method. With the midplane adjustment, the result showed a mean error of 1.62+/-0.72mm. The mean error increased to 3.58+/-2.07mm without the mid-plane adjustment. Our evaluation system can reduce the effect of the beam-thickness problem, and measure ultrasound image-guided technologies consistently with a minimal standard deviation. Using our novel evaluation system, ultrasound image-guided technologies can be compared under equal initial conditions. Therefore, the error can be evaluated more accurately, and the system provides better analysis on the error sources such as ultrasound beam thickness.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gilliom, R.J.; Helsel, D.R.
1986-02-01
A recurring difficulty encountered in investigations of many metals and organic contaminants in ambient waters is that a substantial portion of water sample concentrations are below limits of detection established by analytical laboratories. Several methods were evaluated for estimating distributional parameters for such censored data sets using only uncensored observations. Their reliabilities were evaluated by a Monte Carlo experiment in which small samples were generated from a wide range of parent distributions and censored at varying levels. Eight methods were used to estimate the mean, standard deviation, median, and interquartile range. Criteria were developed, based on the distribution of uncensoredmore » observations, for determining the best performing parameter estimation method for any particular data det. The most robust method for minimizing error in censored-sample estimates of the four distributional parameters over all simulation conditions was the log-probability regression method. With this method, censored observations are assumed to follow the zero-to-censoring level portion of a lognormal distribution obtained by a least squares regression between logarithms of uncensored concentration observations and their z scores. When method performance was separately evaluated for each distributional parameter over all simulation conditions, the log-probability regression method still had the smallest errors for the mean and standard deviation, but the lognormal maximum likelihood method had the smallest errors for the median and interquartile range. When data sets were classified prior to parameter estimation into groups reflecting their probable parent distributions, the ranking of estimation methods was similar, but the accuracy of error estimates was markedly improved over those without classification.« less
Estimation of distributional parameters for censored trace-level water-quality data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gilliom, R.J.; Helsel, D.R.
1984-01-01
A recurring difficulty encountered in investigations of many metals and organic contaminants in ambient waters is that a substantial portion of water-sample concentrations are below limits of detection established by analytical laboratories. Several methods were evaluated for estimating distributional parameters for such censored data sets using only uncensored observations. Their reliabilities were evaluated by a Monte Carlo experiment in which small samples were generated from a wide range of parent distributions and censored at varying levels. Eight methods were used to estimate the mean, standard deviation, median, and interquartile range. Criteria were developed, based on the distribution of uncensored observations,more » for determining the best-performing parameter estimation method for any particular data set. The most robust method for minimizing error in censored-sample estimates of the four distributional parameters over all simulation conditions was the log-probability regression method. With this method, censored observations are assumed to follow the zero-to-censoring level portion of a lognormal distribution obtained by a least-squares regression between logarithms of uncensored concentration observations and their z scores. When method performance was separately evaluated for each distributional parameter over all simulation conditions, the log-probability regression method still had the smallest errors for the mean and standard deviation, but the lognormal maximum likelihood method had the smallest errors for the median and interquartile range. When data sets were classified prior to parameter estimation into groups reflecting their probable parent distributions, the ranking of estimation methods was similar, but the accuracy of error estimates was markedly improved over those without classification. 6 figs., 6 tabs.« less
Blood collection techniques, heparin and quinidine protein binding.
Kessler, K M; Leech, R C; Spann, J F
1979-02-01
With the use of glass syringes without heparin and all glass equipment, the percent of unbound quinidine was measured by ultrafiltration and a double-extraction assay method after addition of 2 microgram/ml of quinidine sulfate. Compared to the all-glass method, collection of blood using Vacutainers resulted in an erroneous and variable decrease in quinidine binding related to blood to rubber-stopper contact. With glass, the unbound quinidine fraction was (mean +/- standard error) 10 +/- 1% in 10 normal volunteers, 8.5 +/- 1.5% in 10 patients with congestive heart failure, and 11 +/- 2% in 11 patients with chronic renal failure (although in 8 of the latter 11 patients the percent of unbound quinidine was 4 or more standard errors from the mean of the normal group). During cardiac catheterization, patients had markedly elevated unbound quinidine fractions: 24 +/- 2% (p less than 0.001). This abnormality coincided with the addition of heparin in vivo and was less apparent after the addition of up to 10 U/ml of heparin in vitro (120% and 29% increase in unbound quinidine fractions, respectively). Quinidine binding should be measured with all glass or equivalent equipment.
Lehman, Niles; Clarkson, Peter; Mech, L. David; Meier, Thomas J.; Wayne, Robert K.
1992-01-01
DNA fingerprinting and mitochondrial DNA analyses have not been used in combination to study relatedness in natural populations. We present an approach that involves defining the mean fingerprint similarities among individuals thought to be unrelated because they have different mtDNA genotypes. Two classes of related individuals are identified by their distance in standard errors above this mean value. The number of standard errors is determined by analysis of the association between fingerprint similarity and relatedness in a population with a known genealogy. We apply this approach to gray wolf packs from Minnesota, Alaska, and the Northwest Territories. Our results show that: (1) wolf packs consist primarily of individuals that are closely related genetically, but some packs contain unrelated, non-reproducing individuals; (2) dispersal among packs within the same area is common; and (3) short-range dispersal appears more common for female than male wolves. The first two of these genetically-based observations are consistent with behavioral data on pack structure and dispersal in wolves, while the apparent sex bias in dispersal was not expected.
NASA Astrophysics Data System (ADS)
Gidey, Amanuel
2018-06-01
Determining suitability and vulnerability of groundwater quality for irrigation use is a key alarm and first aid for careful management of groundwater resources to diminish the impacts on irrigation. This study was conducted to determine the overall suitability of groundwater quality for irrigation use and to generate their spatial distribution maps in Elala catchment, Northern Ethiopia. Thirty-nine groundwater samples were collected to analyze and map the water quality variables. Atomic absorption spectrophotometer, ultraviolet spectrophotometer, titration and calculation methods were used for laboratory groundwater quality analysis. Arc GIS, geospatial analysis tools, semivariogram model types and interpolation methods were used to generate geospatial distribution maps. Twelve and eight water quality variables were used to produce weighted overlay and irrigation water quality index models, respectively. Root-mean-square error, mean square error, absolute square error, mean error, root-mean-square standardized error, measured values versus predicted values were used for cross-validation. The overall weighted overlay model result showed that 146 km2 areas are highly suitable, 135 km2 moderately suitable and 60 km2 area unsuitable for irrigation use. The result of irrigation water quality index confirms 10.26% with no restriction, 23.08% with low restriction, 20.51% with moderate restriction, 15.38% with high restriction and 30.76% with the severe restriction for irrigation use. GIS and irrigation water quality index are better methods for irrigation water resources management to achieve a full yield irrigation production to improve food security and to sustain it for a long period, to avoid the possibility of increasing environmental problems for the future generation.
Investigations in adaptive processing of multispectral data
NASA Technical Reports Server (NTRS)
Kriegler, F. J.; Horwitz, H. M.
1973-01-01
Adaptive data processing procedures are applied to the problem of classifying objects in a scene scanned by multispectral sensor. These procedures show a performance improvement over standard nonadaptive techniques. Some sources of error in classification are identified and those correctable by adaptive processing are discussed. Experiments in adaptation of signature means by decision-directed methods are described. Some of these methods assume correlation between the trajectories of different signature means; for others this assumption is not made.
Salerno, Stephen M; Arnett, Michael V; Domanski, Jeremy P
2009-01-01
Prior research on reducing variation in housestaff handoff procedures have depended on proprietary checkout software. Use of low-technology standardization techniques has not been widely studied. We wished to determine if standardizing the process of intern sign-out using low-technology sign-out tools could reduce perception of errors and missing handoff data. We conducted a pre-post prospective study of a cohort of 34 interns on a general internal medicine ward. Night interns coming off duty and day interns reassuming care were surveyed on their perception of erroneous sign-out data, mistakes made by the night intern overnight, and occurrences unanticipated by sign-out. Trainee satisfaction with the sign-out process was assessed with a 5-point Likert survey. There were 399 intern surveys performed 8 weeks before and 6 weeks after the introduction of a standardized sign-out form. The response rate was 95% for the night interns and 70% for the interns reassuming care in the morning. After the standardized form was introduced, night interns were significantly (p < .003) less likely to detect missing sign-out data including missing important diseases, contingency plans, or medications. Standardized sign-out did not significantly alter the frequency of dropped tasks or missed lab and X-ray data as perceived by the night intern. However, the day teams thought there were significantly less perceived errors on the part of the night intern (p = .001) after introduction of the standardized sign-out sheet. There was no difference in mean Likert scores of resident satisfaction with sign-out before and after the intervention. Standardized written sign-out sheets significantly improve the completeness and effectiveness of handoffs between night and day interns. Further research is needed to determine if these process improvements are related to better patient outcomes.
Missing portion sizes in FFQ--alternatives to use of standard portions.
Køster-Rasmussen, Rasmus; Siersma, Volkert; Halldorsson, Thorhallur I; de Fine Olivarius, Niels; Henriksen, Jan E; Heitmann, Berit L
2015-08-01
Standard portions or substitution of missing portion sizes with medians may generate bias when quantifying the dietary intake from FFQ. The present study compared four different methods to include portion sizes in FFQ. We evaluated three stochastic methods for imputation of portion sizes based on information about anthropometry, sex, physical activity and age. Energy intakes computed with standard portion sizes, defined as sex-specific medians (median), or with portion sizes estimated with multinomial logistic regression (MLR), 'comparable categories' (Coca) or k-nearest neighbours (KNN) were compared with a reference based on self-reported portion sizes (quantified by a photographic food atlas embedded in the FFQ). The Danish Health Examination Survey 2007-2008. The study included 3728 adults with complete portion size data. Compared with the reference, the root-mean-square errors of the mean daily total energy intake (in kJ) computed with portion sizes estimated by the four methods were (men; women): median (1118; 1061), MLR (1060; 1051), Coca (1230; 1146), KNN (1281; 1181). The equivalent biases (mean error) were (in kJ): median (579; 469), MLR (248; 178), Coca (234; 188), KNN (-340; 218). The methods MLR and Coca provided the best agreement with the reference. The stochastic methods allowed for estimation of meaningful portion sizes by conditioning on information about physiology and they were suitable for multiple imputation. We propose to use MLR or Coca to substitute missing portion size values or when portion sizes needs to be included in FFQ without portion size data.
Statistical Data Editing in Scientific Articles.
Habibzadeh, Farrokh
2017-07-01
Scientific journals are important scholarly forums for sharing research findings. Editors have important roles in safeguarding standards of scientific publication and should be familiar with correct presentation of results, among other core competencies. Editors do not have access to the raw data and should thus rely on clues in the submitted manuscripts. To identify probable errors, they should look for inconsistencies in presented results. Common statistical problems that can be picked up by a knowledgeable manuscript editor are discussed in this article. Manuscripts should contain a detailed section on statistical analyses of the data. Numbers should be reported with appropriate precisions. Standard error of the mean (SEM) should not be reported as an index of data dispersion. Mean (standard deviation [SD]) and median (interquartile range [IQR]) should be used for description of normally and non-normally distributed data, respectively. If possible, it is better to report 95% confidence interval (CI) for statistics, at least for main outcome variables. And, P values should be presented, and interpreted with caution, if there is a hypothesis. To advance knowledge and skills of their members, associations of journal editors are better to develop training courses on basic statistics and research methodology for non-experts. This would in turn improve research reporting and safeguard the body of scientific evidence. © 2017 The Korean Academy of Medical Sciences.
NASA Technical Reports Server (NTRS)
Zhou, Daniel K.; Larar, Allen M.; Liu, Xu; Smith, William L.; Strow, Larry, L.
2013-01-01
Great effort has been devoted towards validating geophysical parameters retrieved from ultraspectral infrared radiances obtained from satellite remote sensors. An error consistency analysis scheme (ECAS), utilizing fast radiative transfer model (RTM) forward and inverse calculations, has been developed to estimate the error budget in terms of mean difference and standard deviation of error in both spectral radiance and retrieval domains. The retrieval error is assessed through ECAS without relying on other independent measurements such as radiosonde data. ECAS establishes a link between the accuracies of radiances and retrieved geophysical parameters. ECAS can be applied to measurements from any ultraspectral instrument and any retrieval scheme with its associated RTM. In this manuscript, ECAS is described and demonstrated with measurements from the MetOp-A satellite Infrared Atmospheric Sounding Interferometer (IASI). This scheme can be used together with other validation methodologies to give a more definitive characterization of the error and/or uncertainty of geophysical parameters retrieved from ultraspectral radiances observed from current and future satellite remote sensors such as IASI, the Atmospheric Infrared Sounder (AIRS), and the Cross-track Infrared Sounder (CrIS).
Kwon, Deukwoo; Reis, Isildinha M
2015-08-12
When conducting a meta-analysis of a continuous outcome, estimated means and standard deviations from the selected studies are required in order to obtain an overall estimate of the mean effect and its confidence interval. If these quantities are not directly reported in the publications, they must be estimated from other reported summary statistics, such as the median, the minimum, the maximum, and quartiles. We propose a simulation-based estimation approach using the Approximate Bayesian Computation (ABC) technique for estimating mean and standard deviation based on various sets of summary statistics found in published studies. We conduct a simulation study to compare the proposed ABC method with the existing methods of Hozo et al. (2005), Bland (2015), and Wan et al. (2014). In the estimation of the standard deviation, our ABC method performs better than the other methods when data are generated from skewed or heavy-tailed distributions. The corresponding average relative error (ARE) approaches zero as sample size increases. In data generated from the normal distribution, our ABC performs well. However, the Wan et al. method is best for estimating standard deviation under normal distribution. In the estimation of the mean, our ABC method is best regardless of assumed distribution. ABC is a flexible method for estimating the study-specific mean and standard deviation for meta-analysis, especially with underlying skewed or heavy-tailed distributions. The ABC method can be applied using other reported summary statistics such as the posterior mean and 95 % credible interval when Bayesian analysis has been employed.
Tridandapani, Srini; Ramamurthy, Senthil; Provenzale, James; Obuchowski, Nancy A; Evanoff, Michael G; Bhatti, Pamela
2014-08-01
To evaluate whether the presence of facial photographs obtained at the point-of-care of portable radiography leads to increased detection of wrong-patient errors. In this institutional review board-approved study, 166 radiograph-photograph combinations were obtained from 30 patients. Consecutive radiographs from the same patients resulted in 83 unique pairs (ie, a new radiograph and prior, comparison radiograph) for interpretation. To simulate wrong-patient errors, mismatched pairs were generated by pairing radiographs from different patients chosen randomly from the sample. Ninety radiologists each interpreted a unique randomly chosen set of 10 radiographic pairs, containing up to 10% mismatches (ie, error pairs). Radiologists were randomly assigned to interpret radiographs with or without photographs. The number of mismatches was identified, and interpretation times were recorded. Ninety radiologists with 21 ± 10 (mean ± standard deviation) years of experience were recruited to participate in this observer study. With the introduction of photographs, the proportion of errors detected increased from 31% (9 of 29) to 77% (23 of 30; P = .006). The odds ratio for detection of error with photographs to detection without photographs was 7.3 (95% confidence interval: 2.29-23.18). Observer qualifications, training, or practice in cardiothoracic radiology did not influence sensitivity for error detection. There is no significant difference in interpretation time for studies without photographs and those with photographs (60 ± 22 vs. 61 ± 25 seconds; P = .77). In this observer study, facial photographs obtained simultaneously with portable chest radiographs increased the identification of any wrong-patient errors, without substantial increase in interpretation time. This technique offers a potential means to increase patient safety through correct patient identification. Copyright © 2014 AUR. Published by Elsevier Inc. All rights reserved.
A flexible wearable sensor for knee flexion assessment during gait.
Papi, Enrica; Bo, Yen Nee; McGregor, Alison H
2018-05-01
Gait analysis plays an important role in the diagnosis and management of patients with movement disorders but it is usually performed within a laboratory. Recently interest has shifted towards the possibility of conducting gait assessments in everyday environments thus facilitating long-term monitoring. This is possible by using wearable technologies rather than laboratory based equipment. This study aims to validate a novel wearable sensor system's ability to measure peak knee sagittal angles during gait. The proposed system comprises a flexible conductive polymer unit interfaced with a wireless acquisition node attached over the knee on a pair of leggings. Sixteen healthy volunteers participated to two gait assessments on separate occasions. Data was simultaneously collected from the novel sensor and a gold standard 10 camera motion capture system. The relationship between sensor signal and reference knee flexion angles was defined for each subject to allow the transformation of sensor voltage outputs to angular measures (degrees). The knee peak flexion angle from the sensor and reference system were compared by means of root mean square error (RMSE), absolute error, Bland-Altman plots and intra-class correlation coefficients (ICCs) to assess test-retest reliability. Comparisons of knee peak flexion angles calculated from the sensor and gold standard yielded an absolute error of 0.35(±2.9°) and RMSE of 1.2(±0.4)°. Good agreement was found between the two systems with the majority of data lying within the limits of agreement. The sensor demonstrated high test-retest reliability (ICCs>0.8). These results show the ability of the sensor to monitor knee peak sagittal angles with small margins of error and in agreement with the gold standard system. The sensor has potential to be used in clinical settings as a discreet, unobtrusive wearable device allowing for long-term gait analysis. Copyright © 2018 The Authors. Published by Elsevier B.V. All rights reserved.
Cembrowski, G S; Hackney, J R; Carey, N
1993-04-01
The Clinical Laboratory Improvement Act of 1988 (CLIA 88) has dramatically changed proficiency testing (PT) practices having mandated (1) satisfactory PT for certain analytes as a condition of laboratory operation, (2) fixed PT limits for many of these "regulated" analytes, and (3) an increased number of PT specimens (n = 5) for each testing cycle. For many of these analytes, the fixed limits are much broader than the previously employed Standard Deviation Index (SDI) criteria. Paradoxically, there may be less incentive to identify and evaluate analytically significant outliers to improve the analytical process. Previously described "control rules" to evaluate these PT results are unworkable as they consider only two or three results. We used Monte Carlo simulations of Kodak Ektachem analyzers participating in PT to determine optimal control rules for the identification of PT results that are inconsistent with those from other laboratories using the same methods. The analysis of three representative analytes, potassium, creatine kinase, and iron was simulated with varying intrainstrument and interinstrument standard deviations (si and sg, respectively) obtained from the College of American Pathologists (Northfield, Ill) Quality Assurance Services data and Proficiency Test data, respectively. Analytical errors were simulated in each of the analytes and evaluated in terms of multiples of the interlaboratory SDI. Simple control rules for detecting systematic and random error were evaluated with power function graphs, graphs of probability of error detected vs magnitude of error. Based on the simulation results, we recommend screening all analytes for the occurrence of two or more observations exceeding the same +/- 1 SDI limit. For any analyte satisfying this condition, the mean of the observations should be calculated. For analytes with sg/si ratios between 1.0 and 1.5, a significant systematic error is signaled by the mean exceeding 1.0 SDI. Significant random error is signaled by one observation exceeding the +/- 3-SDI limit or the range of the observations exceeding 4 SDIs. For analytes with higher sg/si, significant systematic or random error is signaled by violation of the screening rule (having at least two observations exceeding the same +/- 1 SDI limit). Random error can also be signaled by one observation exceeding the +/- 1.5-SDI limit or the range of the observations exceeding 3 SDIs. We present a practical approach to the workup of apparent PT errors.
Standard Errors and Confidence Intervals of Norm Statistics for Educational and Psychological Tests.
Oosterhuis, Hannah E M; van der Ark, L Andries; Sijtsma, Klaas
2016-11-14
Norm statistics allow for the interpretation of scores on psychological and educational tests, by relating the test score of an individual test taker to the test scores of individuals belonging to the same gender, age, or education groups, et cetera. Given the uncertainty due to sampling error, one would expect researchers to report standard errors for norm statistics. In practice, standard errors are seldom reported; they are either unavailable or derived under strong distributional assumptions that may not be realistic for test scores. We derived standard errors for four norm statistics (standard deviation, percentile ranks, stanine boundaries and Z-scores) under the mild assumption that the test scores are multinomially distributed. A simulation study showed that the standard errors were unbiased and that corresponding Wald-based confidence intervals had good coverage. Finally, we discuss the possibilities for applying the standard errors in practical test use in education and psychology. The procedure is provided via the R function check.norms, which is available in the mokken package.
Adler, I.; Axelrod, J.M.
1955-01-01
The use of internal standards in the analysis of ores and minerals of widely-varying matrix by means of fluorescent X-ray spectroscopy is frequently the most practical approach. Internal standards correct for absorption and enhancement effects except when an absorption edge falls between the comparison lines or a very strong emission line falls between the absorption edges responsible for the comparison lines. Particle size variations may introduce substantial errors. One method of coping with the particle size problem is grinding the sample with an added abrasive. ?? 1955.
NASA Astrophysics Data System (ADS)
Son, Young-Sun; Kim, Hyun-cheol
2018-05-01
Chlorophyll (Chl) concentration is one of the key indicators identifying changes in the Arctic marine ecosystem. However, current Chl algorithms are not accurate in the Arctic Ocean due to different bio-optical properties from those in the lower latitude oceans. In this study, we evaluated the current Chl algorithms and analyzed the cause of the error in the western coastal waters of Svalbard, which are known to be sensitive to climate change. The NASA standard algorithms showed to overestimate the Chl concentration in the region. This was due to the high non-algal particles (NAP) absorption and colored dissolved organic matter (CDOM) variability at the blue wavelength. In addition, at lower Chl concentrations (0.1-0.3 mg m-3), chlorophyll-specific absorption coefficients were ∼2.3 times higher than those of other Arctic oceans. This was another reason for the overestimation of Chl concentration. OC4 algorithm-based regionally tuned-Svalbard Chl (SC4) algorithm for retrieving more accurate Chl estimates reduced the mean absolute percentage difference (APD) error from 215% to 49%, the mean relative percentage difference (RPD) error from 212% to 16%, and the normalized root mean square (RMS) error from 211% to 68%. This region has abundant suspended matter due to the melting of tidal glaciers. We evaluated the performance of total suspended matter (TSM) algorithms. Previous published TSM algorithms generally overestimated the TSM concentration in this region. The Svalbard TSM-single band algorithm for low TSM range (ST-SB-L) decreased the APD and RPD errors by 52% and 14%, respectively, but the RMS error still remained high (105%).
Evaluation of MLACF based calculated attenuation brain PET imaging for FDG patient studies
NASA Astrophysics Data System (ADS)
Bal, Harshali; Panin, Vladimir Y.; Platsch, Guenther; Defrise, Michel; Hayden, Charles; Hutton, Chloe; Serrano, Benjamin; Paulmier, Benoit; Casey, Michael E.
2017-04-01
Calculating attenuation correction for brain PET imaging rather than using CT presents opportunities for low radiation dose applications such as pediatric imaging and serial scans to monitor disease progression. Our goal is to evaluate the iterative time-of-flight based maximum-likelihood activity and attenuation correction factors estimation (MLACF) method for clinical FDG brain PET imaging. FDG PET/CT brain studies were performed in 57 patients using the Biograph mCT (Siemens) four-ring scanner. The time-of-flight PET sinograms were acquired using the standard clinical protocol consisting of a CT scan followed by 10 min of single-bed PET acquisition. Images were reconstructed using CT-based attenuation correction (CTAC) and used as a gold standard for comparison. Two methods were compared with respect to CTAC: a calculated brain attenuation correction (CBAC) and MLACF based PET reconstruction. Plane-by-plane scaling was performed for MLACF images in order to fix the variable axial scaling observed. The noise structure of the MLACF images was different compared to those obtained using CTAC and the reconstruction required a higher number of iterations to obtain comparable image quality. To analyze the pooled data, each dataset was registered to a standard template and standard regions of interest were extracted. An SUVr analysis of the brain regions of interest showed that CBAC and MLACF were each well correlated with CTAC SUVrs. A plane-by-plane error analysis indicated that there were local differences for both CBAC and MLACF images with respect to CTAC. Mean relative error in the standard regions of interest was less than 5% for both methods and the mean absolute relative errors for both methods were similar (3.4% ± 3.1% for CBAC and 3.5% ± 3.1% for MLACF). However, the MLACF method recovered activity adjoining the frontal sinus regions more accurately than CBAC method. The use of plane-by-plane scaling of MLACF images was found to be a crucial step in order to obtain improved activity estimates. Presence of local errors in both MLACF and CBAC based reconstructions would require the use of a normal database for clinical assessment. However, further work is required in order to assess the clinical advantage of MLACF over CBAC based method.
Comparison of spatial association approaches for landscape mapping of soil organic carbon stocks
NASA Astrophysics Data System (ADS)
Miller, B. A.; Koszinski, S.; Wehrhan, M.; Sommer, M.
2015-03-01
The distribution of soil organic carbon (SOC) can be variable at small analysis scales, but consideration of its role in regional and global issues demands the mapping of large extents. There are many different strategies for mapping SOC, among which is to model the variables needed to calculate the SOC stock indirectly or to model the SOC stock directly. The purpose of this research is to compare direct and indirect approaches to mapping SOC stocks from rule-based, multiple linear regression models applied at the landscape scale via spatial association. The final products for both strategies are high-resolution maps of SOC stocks (kg m-2), covering an area of 122 km2, with accompanying maps of estimated error. For the direct modelling approach, the estimated error map was based on the internal error estimations from the model rules. For the indirect approach, the estimated error map was produced by spatially combining the error estimates of component models via standard error propagation equations. We compared these two strategies for mapping SOC stocks on the basis of the qualities of the resulting maps as well as the magnitude and distribution of the estimated error. The direct approach produced a map with less spatial variation than the map produced by the indirect approach. The increased spatial variation represented by the indirect approach improved R2 values for the topsoil and subsoil stocks. Although the indirect approach had a lower mean estimated error for the topsoil stock, the mean estimated error for the total SOC stock (topsoil + subsoil) was lower for the direct approach. For these reasons, we recommend the direct approach to modelling SOC stocks be considered a more conservative estimate of the SOC stocks' spatial distribution.
Comparison of spatial association approaches for landscape mapping of soil organic carbon stocks
NASA Astrophysics Data System (ADS)
Miller, B. A.; Koszinski, S.; Wehrhan, M.; Sommer, M.
2014-11-01
The distribution of soil organic carbon (SOC) can be variable at small analysis scales, but consideration of its role in regional and global issues demands the mapping of large extents. There are many different strategies for mapping SOC, among which are to model the variables needed to calculate the SOC stock indirectly or to model the SOC stock directly. The purpose of this research is to compare direct and indirect approaches to mapping SOC stocks from rule-based, multiple linear regression models applied at the landscape scale via spatial association. The final products for both strategies are high-resolution maps of SOC stocks (kg m-2), covering an area of 122 km2, with accompanying maps of estimated error. For the direct modelling approach, the estimated error map was based on the internal error estimations from the model rules. For the indirect approach, the estimated error map was produced by spatially combining the error estimates of component models via standard error propagation equations. We compared these two strategies for mapping SOC stocks on the basis of the qualities of the resulting maps as well as the magnitude and distribution of the estimated error. The direct approach produced a map with less spatial variation than the map produced by the indirect approach. The increased spatial variation represented by the indirect approach improved R2 values for the topsoil and subsoil stocks. Although the indirect approach had a lower mean estimated error for the topsoil stock, the mean estimated error for the total SOC stock (topsoil + subsoil) was lower for the direct approach. For these reasons, we recommend the direct approach to modelling SOC stocks be considered a more conservative estimate of the SOC stocks' spatial distribution.
Warrick, J.A.; Rubin, D.M.; Ruggiero, P.; Harney, J.N.; Draut, A.E.; Buscombe, D.
2009-01-01
A new application of the autocorrelation grain size analysis technique for mixed to coarse sediment settings has been investigated. Photographs of sand- to boulder-sized sediment along the Elwha River delta beach were taken from approximately 1??2 m above the ground surface, and detailed grain size measurements were made from 32 of these sites for calibration and validation. Digital photographs were found to provide accurate estimates of the long and intermediate axes of the surface sediment (r2 > 0??98), but poor estimates of the short axes (r2 = 0??68), suggesting that these short axes were naturally oriented in the vertical dimension. The autocorrelation method was successfully applied resulting in total irreducible error of 14% over a range of mean grain sizes of 1 to 200 mm. Compared with reported edge and object-detection results, it is noted that the autocorrelation method presented here has lower error and can be applied to a much broader range of mean grain sizes without altering the physical set-up of the camera (~200-fold versus ~6-fold). The approach is considerably less sensitive to lighting conditions than object-detection methods, although autocorrelation estimates do improve when measures are taken to shade sediments from direct sunlight. The effects of wet and dry conditions are also evaluated and discussed. The technique provides an estimate of grain size sorting from the easily calculated autocorrelation standard error, which is correlated with the graphical standard deviation at an r2 of 0??69. The technique is transferable to other sites when calibrated with linear corrections based on photo-based measurements, as shown by excellent grain-size analysis results (r2 = 0??97, irreducible error = 16%) from samples from the mixed grain size beaches of Kachemak Bay, Alaska. Thus, a method has been developed to measure mean grain size and sorting properties of coarse sediments. ?? 2009 John Wiley & Sons, Ltd.
Feller, David; Peterson, Kirk A
2013-08-28
The effectiveness of the recently developed, explicitly correlated coupled cluster method CCSD(T)-F12b is examined in terms of its ability to reproduce atomization energies derived from complete basis set extrapolations of standard CCSD(T). Most of the standard method findings were obtained with aug-cc-pV7Z or aug-cc-pV8Z basis sets. For a few homonuclear diatomic molecules it was possible to push the basis set to the aug-cc-pV9Z level. F12b calculations were performed with the cc-pVnZ-F12 (n = D, T, Q) basis set sequence and were also extrapolated to the basis set limit using a Schwenke-style, parameterized formula. A systematic bias was observed in the F12b method with the (VTZ-F12/VQZ-F12) basis set combination. This bias resulted in the underestimation of reference values associated with small molecules (valence correlation energies <0.5 E(h)) and an even larger overestimation of atomization energies for bigger systems. Consequently, caution should be exercised in the use of F12b for high accuracy studies. Root mean square and mean absolute deviation error metrics for this basis set combination were comparable to complete basis set values obtained with standard CCSD(T) and the aug-cc-pVDZ through aug-cc-pVQZ basis set sequence. However, the mean signed deviation was an order of magnitude larger. Problems partially due to basis set superposition error were identified with second row compounds which resulted in a weak performance for the smaller VDZ-F12/VTZ-F12 combination of basis sets.
Telemetry Standards, RCC Standard 106-17, Annex A.1, Pulse Amplitude Modulation Standards
2017-07-01
conform to either Figure Error! No text of specified style in document.-1 or Figure Error! No text of specified style in document.-2. Figure Error...No text of specified style in document.-1. 50 percent duty cycle PAM with amplitude synchronization A 20-25 percent deviation reserved for pulse...synchronization is recommended. Telemetry Standards, RCC Standard 106-17 Annex A.1, July 2017 A.1.2 Figure Error! No text of specified style
Balderson, Michael; Brown, Derek; Johnson, Patricia; Kirkby, Charles
2016-01-01
The purpose of this work was to compare static gantry intensity-modulated radiation therapy (IMRT) with volume-modulated arc therapy (VMAT) in terms of tumor control probability (TCP) under scenarios involving large geometric misses, i.e., those beyond what are accounted for when margin expansion is determined. Using a planning approach typical for these treatments, a linear-quadratic-based model for TCP was used to compare mean TCP values for a population of patients who experiences a geometric miss (i.e., systematic and random shifts of the clinical target volume within the planning target dose distribution). A Monte Carlo approach was used to account for the different biological sensitivities of a population of patients. Interestingly, for errors consisting of coplanar systematic target volume offsets and three-dimensional random offsets, static gantry IMRT appears to offer an advantage over VMAT in that larger shift errors are tolerated for the same mean TCP. For example, under the conditions simulated, erroneous systematic shifts of 15mm directly between or directly into static gantry IMRT fields result in mean TCP values between 96% and 98%, whereas the same errors on VMAT plans result in mean TCP values between 45% and 74%. Random geometric shifts of the target volume were characterized using normal distributions in each Cartesian dimension. When the standard deviations were doubled from those values assumed in the derivation of the treatment margins, our model showed a 7% drop in mean TCP for the static gantry IMRT plans but a 20% drop in TCP for the VMAT plans. Although adding a margin for error to a clinical target volume is perhaps the best approach to account for expected geometric misses, this work suggests that static gantry IMRT may offer a treatment that is more tolerant to geometric miss errors than VMAT. Copyright © 2016 American Association of Medical Dosimetrists. Published by Elsevier Inc. All rights reserved.
Gross, Daniel J; Golijanin, Petar; Dumont, Guillaume D; Parada, Stephen A; Vopat, Bryan G; Reinert, Steven E; Romeo, Anthony A; Provencher, C D R Matthew T
2016-01-01
Computed tomography (CT) scans of the shoulder are often not well aligned to the axis of the scapula and glenoid. The purpose of this paper was to determine the effect of sagittal rotation of the glenoid on axial measurements of anterior-posterior (AP) glenoid width and glenoid version attained by standard CT scan. In addition, we sought to define the angle of rotation required to correct the CT scan to optimal positioning. A total of 30 CT scans of the shoulder were reformatted using OsiriX software multiplanar reconstruction. The uncorrected (UNCORR) and corrected (CORR) CT scans were compared for measurements of both (1) axial AP glenoid width and (2) glenoid version at 5 standardized axial cuts. The mean difference in glenoid version was 2.6% (2° ± 0.1°; P = .0222) and the mean difference in AP glenoid width was 5.2% (1.2 ± 0.42 mm; P = .0026) in comparing the CORR and UNCORR scans. The mean angle of correction required to align the sagittal plane was 20.1° of rotation (range, 9°-39°; standard error of mean, 1.2°). These findings demonstrate that UNCORR CT scans of the glenohumeral joint do not correct for the sagittal rotation of the glenoid, and this affects the characteristics of the axial images. Failure to align the sagittal image to the 12-o'clock to 6-o'clock axis results in measurement error in both glenoid version and AP glenoid width. Use of UNCORR CT images may have notable implications for decision-making and surgical treatment. Copyright © 2016 Journal of Shoulder and Elbow Surgery Board of Trustees. Published by Elsevier Inc. All rights reserved.
Ko, Wen-Ru; Hung, Wei-Te; Chang, Hui-Chin; Lin, Long-Yau
2014-03-01
The study was designed to investigate the frequency of misusing standard error of the mean (SEM) in place of standard deviation (SD) to describe study samples in four selected journals published in 2011. Citation counts of articles and the relationship between the misuse rate and impact factor, immediacy index, or cited half-life were also evaluated. All original articles in the four selected journals published in 2011 were searched for descriptive statistics reporting with either mean ± SD or mean ± SEM. The impact factor, immediacy index, and cited half-life of the journals were gathered from Journal Citation Reports Science edition 2011. Scopus was used to search for citations of individual articles. The difference in citation counts between the SD group and SEM group was tested by the Mann-Whitney U test. The relationship between the misuse rate and impact factor, immediacy index, or cited half-life was also evaluated. The frequency of inappropriate reporting of SEM was 13.60% for all four journals. For individual journals, the misuse rate was from 2.9% in Acta Obstetricia et Gynecologica Scandinavica to 22.68% in American Journal of Obstetrics & Gynecology. Articles using SEM were cited more frequently than those using SD (p = 0.025). An approximate positive correlation between the misuse rate and cited half-life was observed. Inappropriate reporting of SEM is common in medical journals. Authors of biomedical papers should be responsible for maintaining an integrated statistical presentation because valuable articles are in danger of being wasted through the misuse of statistics. Copyright © 2014. Published by Elsevier B.V.
Science communication. Response to Comment on "Quantifying long-term scientific impact".
Wang, Dashun; Song, Chaoming; Shen, Hua-Wei; Barabási, Albert-László
2014-07-11
Wang, Mei, and Hicks claim that they observed large mean prediction errors when using our model. We find that their claims are a simple consequence of overfitting, which can be avoided by standard regularization methods. Here, we show that our model provides an effective means to identify papers that may be subject to overfitting, and the model, with or without prior treatment, outperforms the proposed naïve approach. Copyright © 2014, American Association for the Advancement of Science.
Joint Seasonal ARMA Approach for Modeling of Load Forecast Errors in Planning Studies
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hafen, Ryan P.; Samaan, Nader A.; Makarov, Yuri V.
2014-04-14
To make informed and robust decisions in the probabilistic power system operation and planning process, it is critical to conduct multiple simulations of the generated combinations of wind and load parameters and their forecast errors to handle the variability and uncertainty of these time series. In order for the simulation results to be trustworthy, the simulated series must preserve the salient statistical characteristics of the real series. In this paper, we analyze day-ahead load forecast error data from multiple balancing authority locations and characterize statistical properties such as mean, standard deviation, autocorrelation, correlation between series, time-of-day bias, and time-of-day autocorrelation.more » We then construct and validate a seasonal autoregressive moving average (ARMA) model to model these characteristics, and use the model to jointly simulate day-ahead load forecast error series for all BAs.« less
NASA Astrophysics Data System (ADS)
Kim, Young-Rok; Park, Eunseo; Choi, Eun-Jung; Park, Sang-Young; Park, Chandeok; Lim, Hyung-Chul
2014-09-01
In this study, genetic resampling (GRS) approach is utilized for precise orbit determination (POD) using the batch filter based on particle filtering (PF). Two genetic operations, which are arithmetic crossover and residual mutation, are used for GRS of the batch filter based on PF (PF batch filter). For POD, Laser-ranging Precise Orbit Determination System (LPODS) and satellite laser ranging (SLR) observations of the CHAMP satellite are used. Monte Carlo trials for POD are performed by one hundred times. The characteristics of the POD results by PF batch filter with GRS are compared with those of a PF batch filter with minimum residual resampling (MRRS). The post-fit residual, 3D error by external orbit comparison, and POD repeatability are analyzed for orbit quality assessments. The POD results are externally checked by NASA JPL’s orbits using totally different software, measurements, and techniques. For post-fit residuals and 3D errors, both MRRS and GRS give accurate estimation results whose mean root mean square (RMS) values are at a level of 5 cm and 10-13 cm, respectively. The mean radial orbit errors of both methods are at a level of 5 cm. For POD repeatability represented as the standard deviations of post-fit residuals and 3D errors by repetitive PODs, however, GRS yields 25% and 13% more robust estimation results than MRRS for post-fit residual and 3D error, respectively. This study shows that PF batch filter with GRS approach using genetic operations is superior to PF batch filter with MRRS in terms of robustness in POD with SLR observations.
Cost effectiveness of the stream-gaging program in Nevada
Arteaga, F.E.
1990-01-01
The stream-gaging network in Nevada was evaluated as part of a nationwide effort by the U.S. Geological Survey to define and document the most cost-effective means of furnishing streamflow information. Specifically, the study dealt with 79 streamflow gages and 2 canal-flow gages that were under the direct operation of Nevada personnel as of 1983. Cost-effective allocations of resources, including budget and operational criteria, were studied using statistical procedures known as Kalman-filtering techniques. The possibility of developing streamflow data at ungaged sites was evaluated using flow-routing and statistical regression analyses. Neither of these methods provided sufficiently accurate results to warrant their use in place of stream gaging. The 81 gaging stations were being operated in 1983 with a budget of $465,500. As a result of this study, all existing stations were determined to be necessary components of the program for the foreseeable future. At the 1983 funding level, the average standard error of streamflow records was nearly 28%. This same overall level of accuracy could have been maintained with a budget of approximately $445,000 if the funds were redistributed more equitably among the gages. The maximum budget analyzed, $1,164 ,000 would have resulted in an average standard error of 11%. The study indicates that a major source of error is lost data. If perfectly operating equipment were available, the standard error for the 1983 program and budget could have been reduced to 21%. (Thacker-USGS, WRD)
Figueira, Bruno; Gonçalves, Bruno; Folgado, Hugo; Masiulis, Nerijus; Calleja-González, Julio; Sampaio, Jaime
2018-06-14
The present study aims to identify the accuracy of the NBN23 ® system, an indoor tracking system based on radio-frequency and standard Bluetooth Low Energy channels. Twelve capture tags were attached to a custom cart with fixed distances of 0.5, 1.0, 1.5, and 1.8 m. The cart was pushed along a predetermined course following the lines of a standard dimensions Basketball court. The course was performed at low speed (<10.0 km/h), medium speed (>10.0 km/h and <20.0 km/h) and high speed (>20.0 km/h). Root mean square error (RMSE) and percentage of variance accounted for (%VAF) were used as accuracy measures. The obtained data showed acceptable accuracy results for both RMSE and %VAF, despite the expected degree of error in position measurement at higher speeds. The RMSE for all the distances and velocities presented an average absolute error of 0.30 ± 0.13 cm with 90.61 ± 8.34 of %VAF, in line with most available systems, and considered acceptable for indoor sports. The processing of data with filter correction seemed to reduce the noise and promote a lower relative error, increasing the %VAF for each measured distance. Research using positional-derived variables in Basketball is still very scarce; thus, this independent test of the NBN23 ® tracking system provides accuracy details and opens up opportunities to develop new performance indicators that help to optimize training adaptations and performance.
A Procedure for Testing the Difference between Effect Sizes.
ERIC Educational Resources Information Center
Lambert, Richard G.; Flowers, Claudia
A special case of the homogeneity of effect size test, as applied to pairwise comparisons of standardized mean differences, was evaluated. Procedures for comparing pairs of pretest to posttest effect sizes, as well as pairs of treatment versus control group effect sizes, were examined. Monte Carlo simulation was used to generate Type I error rates…
ERIC Educational Resources Information Center
Moses, Tim
2008-01-01
Equating functions are supposed to be population invariant, meaning that the choice of subpopulation used to compute the equating function should not matter. The extent to which equating functions are population invariant is typically assessed in terms of practical difference criteria that do not account for equating functions' sampling…
Validity of the Aberrant Behavior Checklist in Children with Autism Spectrum Disorder
ERIC Educational Resources Information Center
Kaat, Aaron J.; Lecavalier, Luc; Aman, Michael G.
2014-01-01
The Aberrant Behavior Checklist (ABC) is a widely used measure in autism spectrum disorder (ASD) treatment studies. We conducted confirmatory and exploratory factor analyses of the ABC in 1,893 children evaluated as part of the Autism Treatment Network. The root mean square error of approximation was .086 for the standard item assignment, and in…
Standardized mean differences cause funnel plot distortion in publication bias assessments.
Zwetsloot, Peter-Paul; Van Der Naald, Mira; Sena, Emily S; Howells, David W; IntHout, Joanna; De Groot, Joris Ah; Chamuleau, Steven Aj; MacLeod, Malcolm R; Wever, Kimberley E
2017-09-08
Meta-analyses are increasingly used for synthesis of evidence from biomedical research, and often include an assessment of publication bias based on visual or analytical detection of asymmetry in funnel plots. We studied the influence of different normalisation approaches, sample size and intervention effects on funnel plot asymmetry, using empirical datasets and illustrative simulations. We found that funnel plots of the Standardized Mean Difference (SMD) plotted against the standard error (SE) are susceptible to distortion, leading to overestimation of the existence and extent of publication bias. Distortion was more severe when the primary studies had a small sample size and when an intervention effect was present. We show that using the Normalised Mean Difference measure as effect size (when possible), or plotting the SMD against a sample size-based precision estimate, are more reliable alternatives. We conclude that funnel plots using the SMD in combination with the SE are unsuitable for publication bias assessments and can lead to false-positive results.
Standardized mean differences cause funnel plot distortion in publication bias assessments
Van Der Naald, Mira; Sena, Emily S; Howells, David W; IntHout, Joanna; De Groot, Joris AH; Chamuleau, Steven AJ; MacLeod, Malcolm R
2017-01-01
Meta-analyses are increasingly used for synthesis of evidence from biomedical research, and often include an assessment of publication bias based on visual or analytical detection of asymmetry in funnel plots. We studied the influence of different normalisation approaches, sample size and intervention effects on funnel plot asymmetry, using empirical datasets and illustrative simulations. We found that funnel plots of the Standardized Mean Difference (SMD) plotted against the standard error (SE) are susceptible to distortion, leading to overestimation of the existence and extent of publication bias. Distortion was more severe when the primary studies had a small sample size and when an intervention effect was present. We show that using the Normalised Mean Difference measure as effect size (when possible), or plotting the SMD against a sample size-based precision estimate, are more reliable alternatives. We conclude that funnel plots using the SMD in combination with the SE are unsuitable for publication bias assessments and can lead to false-positive results. PMID:28884685
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kristiansen, J.I.; Balliny, N.; Saxov, S.
Some available information on thermal conductivity of earth materials from the Scandinavian area is collected. The mean conductivities as reported from individual localities are grouped in crystalline and sedimentary rocks. Mean results are displayed in histograms and localities are mapped. The collocation of conductivity information contains new results of granites and sedimentary rocks from Sweden and of limestones and clays from Danish borings. The new values are presented as histograms of individual measurements and given as mean values with standard errors of mean. The crystalline rocks range from about 2 to about 4 W/ (m K), and the sedimentary rocksmore » range from about 0.8 to about 6 W/ (m K).« less
Rice, Stephen B; Chan, Christopher; Brown, Scott C; Eschbach, Peter; Han, Li; Ensor, David S; Stefaniak, Aleksandr B; Bonevich, John; Vladár, András E; Hight Walker, Angela R; Zheng, Jiwen; Starnes, Catherine; Stromberg, Arnold; Ye, Jia; Grulke, Eric A
2015-01-01
This paper reports an interlaboratory comparison that evaluated a protocol for measuring and analysing the particle size distribution of discrete, metallic, spheroidal nanoparticles using transmission electron microscopy (TEM). The study was focused on automated image capture and automated particle analysis. NIST RM8012 gold nanoparticles (30 nm nominal diameter) were measured for area-equivalent diameter distributions by eight laboratories. Statistical analysis was used to (1) assess the data quality without using size distribution reference models, (2) determine reference model parameters for different size distribution reference models and non-linear regression fitting methods and (3) assess the measurement uncertainty of a size distribution parameter by using its coefficient of variation. The interlaboratory area-equivalent diameter mean, 27.6 nm ± 2.4 nm (computed based on a normal distribution), was quite similar to the area-equivalent diameter, 27.6 nm, assigned to NIST RM8012. The lognormal reference model was the preferred choice for these particle size distributions as, for all laboratories, its parameters had lower relative standard errors (RSEs) than the other size distribution reference models tested (normal, Weibull and Rosin–Rammler–Bennett). The RSEs for the fitted standard deviations were two orders of magnitude higher than those for the fitted means, suggesting that most of the parameter estimate errors were associated with estimating the breadth of the distributions. The coefficients of variation for the interlaboratory statistics also confirmed the lognormal reference model as the preferred choice. From quasi-linear plots, the typical range for good fits between the model and cumulative number-based distributions was 1.9 fitted standard deviations less than the mean to 2.3 fitted standard deviations above the mean. Automated image capture, automated particle analysis and statistical evaluation of the data and fitting coefficients provide a framework for assessing nanoparticle size distributions using TEM for image acquisition. PMID:26361398
Toward Joint Hypothesis-Tests Seismic Event Screening Analysis: Ms|mb and Event Depth
DOE Office of Scientific and Technical Information (OSTI.GOV)
Anderson, Dale; Selby, Neil
2012-08-14
Well established theory can be used to combine single-phenomenology hypothesis tests into a multi-phenomenology event screening hypothesis test (Fisher's and Tippett's tests). Commonly used standard error in Ms:mb event screening hypothesis test is not fully consistent with physical basis. Improved standard error - Better agreement with physical basis, and correctly partitions error to include Model Error as a component of variance, correctly reduces station noise variance through network averaging. For 2009 DPRK test - Commonly used standard error 'rejects' H0 even with better scaling slope ({beta} = 1, Selby et al.), improved standard error 'fails to rejects' H0.
Maassen, Gerard H
2010-08-01
In this Journal, Lewis and colleagues introduced a new Reliable Change Index (RCI(WSD)), which incorporated the within-subject standard deviation (WSD) of a repeated measurement design as the standard error. In this note, two opposite errors in using WSD this way are demonstrated. First, being the standard error of measurement of only a single assessment makes WSD too small when practice effects are absent. Then, too many individuals will be designated reliably changed. Second, WSD can grow unlimitedly to the extent that differential practice effects occur. This can even make RCI(WSD) unable to detect any reliable change.
Heimes, F.J.; Luckey, R.R.; Stephens, D.M.
1986-01-01
Combining estimates of applied irrigation water, determined for selected sample sites, with information on irrigated acreage provides one alternative for developing areal estimates of groundwater pumpage for irrigation. The reliability of this approach was evaluated by comparing estimated pumpage with metered pumpage for two years for a three-county area in southwestern Nebraska. Meters on all irrigation wells in the three counties provided a complete data set for evaluation of equipment and comparison with pumpage estimates. Regression analyses were conducted on discharge, time-of-operation, and pumpage data collected at 52 irrigation sites in 1983 and at 57 irrigation sites in 1984 using data from inline flowmeters as the independent variable. The standard error of the estimate for regression analysis of discharge measurements made using a portable flowmeter was 6.8% of the mean discharge metered by inline flowmeters. The standard error of the estimate for regression analysis of time of operation determined from electric meters was 8.1% of the mean time of operation determined from in-line and 15.1% for engine-hour meters. Sampled pumpage, calculated by multiplying the average discharge obtained from the portable flowmeter by the time of operation obtained from energy or hour meters, was compared with metered pumpage from in-line flowmeters at sample sites. The standard error of the estimate for the regression analysis of sampled pumpage was 10.3% of the mean of the metered pumpage for 1983 and 1984 combined. The difference in the mean of the sampled pumpage and the mean of the metered pumpage was only 1.8% for 1983 and 2.3% for 1984. Estimated pumpage, for each county and for the study area, was calculated by multiplying application (sampled pumpage divided by irrigated acreages at sample sites) by irrigated acreage compiled from Landsat (Land satellite) imagery. Estimated pumpage was compared with total metered pumpage for each county and the study area. Estimated pumpage by county varied from 9% less, to 20% more, than metered pumpage in 1983 and from 0 to 15% more than metered pumpage in 1984. Estimated pumpage for the study area was 11 % more than metered pumpage in 1983 and 5% more than metered pumpage in 1984. (Author 's abstract)
Eechaute, Christophe; Vaes, Peter; Duquet, William; Van Gheluwe, Bart
2007-01-01
Sudden ankle inversion tests have been used to investigate whether the onset of peroneal muscle activity is delayed in patients with chronically unstable ankle joints. Before interpreting test results of latency times in patients with chronic ankle instability and healthy subjects, the reliability of these measures must be first demonstrated. To investigate the test-retest reliability of variables measured during a sudden ankle inversion movement in standing subjects with healthy ankle joints. Validation study. Research laboratory. 15 subjects with healthy ankle joints (30 ankles). Subjects stood on an ankle inversion platform with both feet tightly fixed to independently moveable trapdoors. An unexpected sudden ankle inversion of 50 degrees was imposed. We measured latency and motor response times and electromechanical delay of the peroneus longus muscle, along with the time and angular position of the first and second decelerating moments, the mean and maximum inversion speed, and the total inversion time. Correlation coefficients and standard error of measurements were calculated. Intraclass correlation coefficients ranged from 0.17 for the electromechanical delay of the peroneus longus muscle (standard error of measurement = 2.7 milliseconds) to 0.89 for the maximum inversion speed (standard error of measurement = 34.8 milliseconds). The reliability of the latency and motor response times of the peroneus longus muscle, the time of the first and second decelerating moments, and the mean and maximum inversion speed was acceptable in subjects with healthy ankle joints and supports the investigation of the reliability of these measures in subjects with chronic ankle instability. The lower reliability of the electromechanical delay of the peroneus longus muscle and the angular positions of both decelerating moments calls the use of these variables into question.
Geometric errors in 3D optical metrology systems
NASA Astrophysics Data System (ADS)
Harding, Kevin; Nafis, Chris
2008-08-01
The field of 3D optical metrology has seen significant growth in the commercial market in recent years. The methods of using structured light to obtain 3D range data is well documented in the literature, and continues to be an area of development in universities. However, the step between getting 3D data, and getting geometrically correct 3D data that can be used for metrology is not nearly as well developed. Mechanical metrology systems such as CMMs have long established standard means of verifying the geometric accuracies of their systems. Both local and volumentric measurments are characterized on such system using tooling balls, grid plates, and ball bars. This paper will explore the tools needed to characterize and calibrate an optical metrology system, and discuss the nature of the geometric errors often found in such systems, and suggest what may be a viable standard method of doing characterization of 3D optical systems. Finally, we will present a tradeoff analysis of ways to correct geometric errors in an optical systems considering what can be gained by hardware methods versus software corrections.
Performance Evaluation of Five Turbidity Sensors in Three Primary Standards
Snazelle, Teri T.
2015-10-28
Open-File Report 2015-1172 is temporarily unavailable.Five commercially available turbidity sensors were evaluated by the U.S. Geological Survey, Hydrologic Instrumentation Facility (HIF) for accuracy and precision in three types of turbidity standards; formazin, StablCal, and AMCO Clear (AMCO–AEPA). The U.S. Environmental Protection Agency (EPA) recognizes all three turbidity standards as primary standards, meaning they are acceptable for reporting purposes. The Forrest Technology Systems (FTS) DTS-12, the Hach SOLITAX sc, the Xylem EXO turbidity sensor, the Yellow Springs Instrument (YSI) 6136 turbidity sensor, and the Hydrolab Series 5 self-cleaning turbidity sensor were evaluated to determine if turbidity measurements in the three primary standards are comparable to each other, and to ascertain if the primary standards are truly interchangeable. A formazin 4000 nephelometric turbidity unit (NTU) stock was purchased and dilutions of 40, 100, 400, 800, and 1000 NTU were made fresh the day of testing. StablCal and AMCO Clear (for Hach 2100N) standards with corresponding concentrations were also purchased for the evaluation. Sensor performance was not evaluated in turbidity levels less than 40 NTU due to the unavailability of polymer-bead turbidity standards rated for general use. The percent error was calculated as the true (not absolute) difference between the measured turbidity and the standard value, divided by the standard value.The sensors that demonstrated the best overall performance in the evaluation were the Hach SOLITAX and the Hydrolab Series 5 turbidity sensor when the operating range (0.001–4000 NTU for the SOLITAX and 0.1–3000 NTU for the Hydrolab) was considered in addition to sensor accuracy and precision. The average percent error in the three standards was 3.80 percent for the SOLITAX and -4.46 percent for the Hydrolab. The DTS-12 also demonstrated good accuracy with an average percent error of 2.02 percent and a maximum relative standard deviation of 0.51 percent for the operating range, which was limited to 0.01–1600 NTU at the time of this report. Test results indicated an average percent error of 19.81 percent in the three standards for the EXO turbidity sensor and 9.66 percent for the YSI 6136. The significant variability in sensor performance in the three primary standards suggests that although all three types are accepted as primary calibration standards, they are not interchangeable, and sensor results in the three types of standards are not directly comparable.
The precision of a special purpose analog computer in clinical cardiac output determination.
Sullivan, F J; Mroz, E A; Miller, R E
1975-01-01
Three hundred dye-dilution curves taken during our first year of clinical experience with the Waters CO-4 cardiac output computer were analyzed to estimate the errors involved in its use. Provided that calibration is accurate and 5.0 mg of dye are injected for each curve, then the percentage standard deviation of measurement using this computer is about 8.7%. Included in this are the errors inherent in the computer, errors due to baseline drift, errors in the injection of dye and acutal variation of cardiac output over a series of successive determinations. The size of this error is comparable to that involved in manual calculation. The mean value of five successive curves will be within 10% of the real value in 99 cases out of 100. Advances in methodology and equipment are discussed which make calibration simpler and more accurate, and which should also improve the quality of computer determination. A list of suggestions is given to minimize the errors involved in the clinical use of this equipment. Images Fig. 4. PMID:1089394
Experimental determination of a Viviparus contectus thermometry equation.
Bugler, Melanie J; Grimes, Stephen T; Leng, Melanie J; Rundle, Simon D; Price, Gregory D; Hooker, Jerry J; Collinson, Margaret E
2009-09-01
Experimental measurements of the (18)O/(16)O isotope fractionation between the biogenic aragonite of Viviparus contectus (Gastropoda) and its host freshwater were undertaken to generate a species-specific thermometry equation. The temperature dependence of the fractionation factor and the relationship between Deltadelta(18)O (delta(18)O(carb.) - delta(18)O(water)) and temperature were calculated from specimens maintained under laboratory and field (collection and cage) conditions. The field specimens were grown (Somerset, UK) between August 2007 and August 2008, with water samples and temperature measurements taken monthly. Specimens grown in the laboratory experiment were maintained under constant temperatures (15 degrees C, 20 degrees C and 25 degrees C) with water samples collected weekly. Application of a linear regression to the datasets indicated that the gradients of all three experiments were within experimental error of each other (+/-2 times the standard error); therefore, a combined (laboratory and field data) correlation could be applied. The relationship between Deltadelta(18)O (delta(18)O(carb.) - delta(18)O(water)) and temperature (T) for this combined dataset is given by: T = - 7.43( + 0.87, - 1.13)*Deltadelta18O + 22.89(+/- 2.09) (T is in degrees C, delta(18)O(carb.) is with respect to Vienna Pee Dee Belemnite (VPDB) and delta(18)O(water) is with respect to Vienna Standard Mean Ocean Water (VSMOW). Quoted errors are 2 times standard error).Comparisons made with existing aragonitic thermometry equations reveal that the linear regression for the combined Viviparus contectus equation is within 2 times the standard error of previously reported aragonitic thermometry equations. This suggests there are no species-specific vital effects for Viviparus contectus. Seasonal delta(18)O(carb.) profiles from specimens retrieved from the field cage experiment indicate that during shell secretion the delta(18)O(carb.) of the shell carbonate is not influenced by size, sex or whether females contained eggs or juveniles. Copyright (c) 2009 John Wiley & Sons, Ltd.
Molavi Tabrizi, Amirhossein; Goossens, Spencer; Mehdizadeh Rahimi, Ali; Cooper, Christopher D; Knepley, Matthew G; Bardhan, Jaydeep P
2017-06-13
We extend the linearized Poisson-Boltzmann (LPB) continuum electrostatic model for molecular solvation to address charge-hydration asymmetry. Our new solvation-layer interface condition (SLIC)/LPB corrects for first-shell response by perturbing the traditional continuum-theory interface conditions at the protein-solvent and the Stern-layer interfaces. We also present a GPU-accelerated treecode implementation capable of simulating large proteins, and our results demonstrate that the new model exhibits significant accuracy improvements over traditional LPB models, while reducing the number of fitting parameters from dozens (atomic radii) to just five parameters, which have physical meanings related to first-shell water behavior at an uncharged interface. In particular, atom radii in the SLIC model are not optimized but uniformly scaled from their Lennard-Jones radii. Compared to explicit-solvent free-energy calculations of individual atoms in small molecules, SLIC/LPB is significantly more accurate than standard parametrizations (RMS error 0.55 kcal/mol for SLIC, compared to RMS error of 3.05 kcal/mol for standard LPB). On parametrizing the electrostatic model with a simple nonpolar component for total molecular solvation free energies, our model predicts octanol/water transfer free energies with an RMS error 1.07 kcal/mol. A more detailed assessment illustrates that standard continuum electrostatic models reproduce total charging free energies via a compensation of significant errors in atomic self-energies; this finding offers a window into improving the accuracy of Generalized-Born theories and other coarse-grained models. Most remarkably, the SLIC model also reproduces positive charging free energies for atoms in hydrophobic groups, whereas standard PB models are unable to generate positive charging free energies regardless of the parametrized radii. The GPU-accelerated solver is freely available online, as is a MATLAB implementation.
Reproducibility of 3D kinematics and surface electromyography measurements of mastication.
Remijn, Lianne; Groen, Brenda E; Speyer, Renée; van Limbeek, Jacques; Nijhuis-van der Sanden, Maria W G
2016-03-01
The aim of this study was to determine the measurement reproducibility for a procedure evaluating the mastication process and to estimate the smallest detectable differences of 3D kinematic and surface electromyography (sEMG) variables. Kinematics of mandible movements and sEMG activity of the masticatory muscles were obtained over two sessions with four conditions: two food textures (biscuit and bread) of two sizes (small and large). Twelve healthy adults (mean age 29.1 years) completed the study. The second to the fifth chewing cycle of 5 bites were used for analyses. The reproducibility per outcome variable was calculated with an intraclass correlation coefficient (ICC) and a Bland-Altman analysis was applied to determine the standard error of measurement relative error of measurement and smallest detectable differences of all variables. ICCs ranged from 0.71 to 0.98 for all outcome variables. The outcome variables consisted of four bite and fourteen chewing cycle variables. The relative standard error of measurement of the bite variables was up to 17.3% for 'time-to-swallow', 'time-to-transport' and 'number of chewing cycles', but ranged from 31.5% to 57.0% for 'change of chewing side'. The relative standard error of measurement ranged from 4.1% to 24.7% for chewing cycle variables and was smaller for kinematic variables than sEMG variables. In general, measurements obtained with 3D kinematics and sEMG are reproducible techniques to assess the mastication process. The duration of the chewing cycle and frequency of chewing were the best reproducible measurements. Change of chewing side could not be reproduced. The published measurement error and smallest detectable differences will aid the interpretation of the results of future clinical studies using the same study variables. Copyright © 2015 Elsevier Inc. All rights reserved.
Yang, Xianjin; Chen, Xiao; Carrigan, Charles R.; ...
2014-06-03
A parametric bootstrap approach is presented for uncertainty quantification (UQ) of CO₂ saturation derived from electrical resistance tomography (ERT) data collected at the Cranfield, Mississippi (USA) carbon sequestration site. There are many sources of uncertainty in ERT-derived CO₂ saturation, but we focus on how the ERT observation errors propagate to the estimated CO₂ saturation in a nonlinear inversion process. Our UQ approach consists of three steps. We first estimated the observational errors from a large number of reciprocal ERT measurements. The second step was to invert the pre-injection baseline data and the resulting resistivity tomograph was used as the priormore » information for nonlinear inversion of time-lapse data. We assigned a 3% random noise to the baseline model. Finally, we used a parametric bootstrap method to obtain bootstrap CO₂ saturation samples by deterministically solving a nonlinear inverse problem many times with resampled data and resampled baseline models. Then the mean and standard deviation of CO₂ saturation were calculated from the bootstrap samples. We found that the maximum standard deviation of CO₂ saturation was around 6% with a corresponding maximum saturation of 30% for a data set collected 100 days after injection began. There was no apparent spatial correlation between the mean and standard deviation of CO₂ saturation but the standard deviation values increased with time as the saturation increased. The uncertainty in CO₂ saturation also depends on the ERT reciprocal error threshold used to identify and remove noisy data and inversion constraints such as temporal roughness. Five hundred realizations requiring 3.5 h on a single 12-core node were needed for the nonlinear Monte Carlo inversion to arrive at stationary variances while the Markov Chain Monte Carlo (MCMC) stochastic inverse approach may expend days for a global search. This indicates that UQ of 2D or 3D ERT inverse problems can be performed on a laptop or desktop PC.« less
Ban, Ilija; Troelsen, Anders; Kristensen, Morten Tange
2016-10-01
The Constant score (CS) has been the primary endpoint in most studies on clavicle fractures. However, the CS was not developed to assess patients with clavicle fractures. Our aim was to examine inter-rater reliability and agreement of the CS in patients with clavicle fractures. The secondary aim was to estimate the correlation between the CS and the Disabilities of the Arm, Shoulder and Hand score and the internal consistency of the 2 scores. On the basis of sample sizing, 36 patients (31 male and 5 female patients; mean age, 41.3 years) with clavicle fractures underwent standardized CS assessment at a mean of 6.8 weeks (SD, 1.0 weeks) after injury. Reliability and agreement of the CS were determined by 2 raters. The interclass correlation coefficient (ICC2,1), standard error of measurement, minimal detectable change, Cronbach α coefficient, and Pearson correlation coefficient were estimated. Inter-rater reliability of the total CS was excellent (interclass correlation coefficient, 0.94; 95% confidence interval, 0.88-0.97), with no systematic difference between the 2 raters (P = .75). The standard error of measurement (measurement error at the group level) was 4.9, whereas the minimal detectable change (smallest change needed to indicate a real change for an individual) was 13.6 CS points. The internal consistency of the 10 CS items was good, with a Cronbach α of .85, and we found a strong correlation (r = -0.92) between the CS and Disabilities of the Arm, Shoulder and Hand score. The CS was found to be reliable for assessing patients with clavicle fractures, especially at the group level. With high inter-rater reliability and agreement, in addition to good internal consistency, the standardized CS used in this study can be used for comparison of results from different settings. Copyright © 2016 Journal of Shoulder and Elbow Surgery Board of Trustees. Published by Elsevier Inc. All rights reserved.
Accuracy of Jump-Mat Systems for Measuring Jump Height.
Pueo, Basilio; Lipinska, Patrycja; Jiménez-Olmedo, José M; Zmijewski, Piotr; Hopkins, Will G
2017-08-01
Vertical-jump tests are commonly used to evaluate lower-limb power of athletes and nonathletes. Several types of equipment are available for this purpose. To compare the error of measurement of 2 jump-mat systems (Chronojump-Boscosystem and Globus Ergo Tester) with that of a motion-capture system as a criterion and to determine the modifying effect of foot length on jump height. Thirty-one young adult men alternated 4 countermovement jumps with 4 squat jumps. Mean jump height and standard deviations representing technical error of measurement arising from each device and variability arising from the subjects themselves were estimated with a novel mixed model and evaluated via standardization and magnitude-based inference. The jump-mat systems produced nearly identical measures of jump height (differences in means and in technical errors of measurement ≤1 mm). Countermovement and squat-jump height were both 13.6 cm higher with motion capture (90% confidence limits ±0.3 cm), but this very large difference was reduced to small unclear differences when adjusted to a foot length of zero. Variability in countermovement and squat-jump height arising from the subjects was small (1.1 and 1.5 cm, respectively, 90% confidence limits ±0.3 cm); technical error of motion capture was similar in magnitude (1.7 and 1.6 cm, ±0.3 and ±0.4 cm), and that of the jump mats was similar or smaller (1.2 and 0.3 cm, ±0.5 and ±0.9 cm). The jump-mat systems provide trustworthy measurements for monitoring changes in jump height. Foot length can explain the substantially higher jump height observed with motion capture.
Pagoulatos, N; Edwards, W S; Haynor, D R; Kim, Y
1999-12-01
The use of stereotactic systems has been one of the main approaches for image-based guidance of the surgical tool within the brain. The main limitation of stereotactic systems is that they are based on preoperative images that might become outdated and invalid during the course of surgery. Ultrasound (US) is considered the most practical and cost-effective intraoperative imaging modality, but US images inherently have a low signal-to-noise ratio. Integrating intraoperative US with stereotactic systems has recently been attempted. In this paper, we present a new system for interactively registering two-dimensional US and three-dimensional magnetic resonance (MR) images. This registration is based on tracking the US probe with a dc magnetic position sensor. We have performed an extensive analysis of the errors of our system by using a custom-built phantom. The registration error between the MR and the position sensor space was found to have a mean value of 1.78 mm and a standard deviation of 0.18 mm. The registration error between US and MR space was dependent on the distance of the target point from the US probe face. For a 3.5-MHz phased one-dimensional array transducer and a depth of 6 cm, the mean value of the registration error was 2.00 mm and the standard deviation was 0.75 mm. The registered MR images were reconstructed using either zeroth-order or first-order interpolation. The ease of use and the interactive nature of our system (approximately 6.5 frames/s for 344 x 310 images and first-order interpolation on a Pentium II 450 MHz) demonstrates its potential to be used in the operating room.
Wagner, Julia Y; Körner, Annmarie; Schulte-Uentrop, Leonie; Kubik, Mathias; Reichenspurner, Hermann; Kluge, Stefan; Reuter, Daniel A; Saugel, Bernd
2018-04-01
The CNAP technology (CNSystems Medizintechnik AG, Graz, Austria) allows continuous noninvasive arterial pressure waveform recording based on the volume clamp method and estimation of cardiac output (CO) by pulse contour analysis. We compared CNAP-derived CO measurements (CNCO) with intermittent invasive CO measurements (pulmonary artery catheter; PAC-CO) in postoperative cardiothoracic surgery patients. In 51 intensive care unit patients after cardiothoracic surgery, we measured PAC-CO (criterion standard) and CNCO at three different time points. We conducted two separate comparative analyses: (1) CNCO auto-calibrated to biometric patient data (CNCO bio ) versus PAC-CO and (2) CNCO calibrated to the first simultaneously measured PAC-CO value (CNCO cal ) versus PAC-CO. The agreement between the two methods was statistically assessed by Bland-Altman analysis and the percentage error. In a subgroup of patients, a passive leg raising maneuver was performed for clinical indications and we present the changes in PAC-CO and CNCO in four-quadrant plots (exclusion zone 0.5 L/min) in order to evaluate the trending ability of CNCO. The mean difference between CNCO bio and PAC-CO was +0.5 L/min (standard deviation ± 1.3 L/min; 95% limits of agreement -1.9 to +3.0 L/min). The percentage error was 49%. The concordance rate was 100%. For CNCOcal, the mean difference was -0.3 L/min (±0.5 L/min; -1.2 to +0.7 L/min) with a percentage error of 19%. In this clinical study in cardiothoracic surgery patients, CNCO cal showed good agreement when compared with PAC-CO. For CNCO bio , we observed a higher percentage error and good trending ability (concordance rate 100%).
Predictors of driving safety in early Alzheimer disease.
Dawson, J D; Anderson, S W; Uc, E Y; Dastrup, E; Rizzo, M
2009-02-10
To measure the association of cognition, visual perception, and motor function with driving safety in Alzheimer disease (AD). Forty drivers with probable early AD (mean Mini-Mental State Examination score 26.5) and 115 elderly drivers without neurologic disease underwent a battery of cognitive, visual, and motor tests, and drove a standardized 35-mile route in urban and rural settings in an instrumented vehicle. A composite cognitive score (COGSTAT) was calculated for each subject based on eight neuropsychological tests. Driving safety errors were noted and classified by a driving expert based on video review. Drivers with AD committed an average of 42.0 safety errors/drive (SD = 12.8), compared to an average of 33.2 (SD = 12.2) for drivers without AD (p < 0.0001); the most common errors were lane violations. Increased age was predictive of errors, with a mean of 2.3 more errors per drive observed for each 5-year age increment. After adjustment for age and gender, COGSTAT was a significant predictor of safety errors in subjects with AD, with a 4.1 increase in safety errors observed for a 1 SD decrease in cognitive function. Significant increases in safety errors were also found in subjects with AD with poorer scores on Benton Visual Retention Test, Complex Figure Test-Copy, Trail Making Subtest-A, and the Functional Reach Test. Drivers with Alzheimer disease (AD) exhibit a range of performance on tests of cognition, vision, and motor skills. Since these tests provide additional predictive value of driving performance beyond diagnosis alone, clinicians may use these tests to help predict whether a patient with AD can safely operate a motor vehicle.
Initializing a Mesoscale Boundary-Layer Model with Radiosonde Observations
NASA Astrophysics Data System (ADS)
Berri, Guillermo J.; Bertossa, Germán
2018-01-01
A mesoscale boundary-layer model is used to simulate low-level regional wind fields over the La Plata River of South America, a region characterized by a strong daily cycle of land-river surface-temperature contrast and low-level circulations of sea-land breeze type. The initial and boundary conditions are defined from a limited number of local observations and the upper boundary condition is taken from the only radiosonde observations available in the region. The study considers 14 different upper boundary conditions defined from the radiosonde data at standard levels, significant levels, level of the inversion base and interpolated levels at fixed heights, all of them within the first 1500 m. The period of analysis is 1994-2008 during which eight daily observations from 13 weather stations of the region are used to validate the 24-h surface-wind forecast. The model errors are defined as the root-mean-square of relative error in wind-direction frequency distribution and mean wind speed per wind sector. Wind-direction errors are greater than wind-speed errors and show significant dispersion among the different upper boundary conditions, not present in wind speed, revealing a sensitivity to the initialization method. The wind-direction errors show a well-defined daily cycle, not evident in wind speed, with the minimum at noon and the maximum at dusk, but no systematic deterioration with time. The errors grow with the height of the upper boundary condition level, in particular wind direction, and double the errors obtained when the upper boundary condition is defined from the lower levels. The conclusion is that defining the model upper boundary condition from radiosonde data closer to the ground minimizes the low-level wind-field errors throughout the region.
De Luca, Stefano; Mangiulli, Tatiana; Merelli, Vera; Conforti, Federica; Velandia Palacio, Luz Andrea; Agostini, Susanna; Spinas, Enrico; Cameriere, Roberto
2016-04-01
The aim of this study is to develop a specific formula for the purpose of assessing skeletal age in a sample of Italian growing infants and children by measuring carpals and epiphyses of radio and ulna. A sample of 332 X-rays of left hand-wrist bones (130 boys and 202 girls), aged between 1 and 16 years, was analyzed retrospectively. Analysis of covariance (ANCOVA) was applied to study how sex affects the growth of the ratio Bo/Ca in the boys and girls groups. The regression model, describing age as a linear function of sex and the Bo/Ca ratio for the new Italian sample, yielded the following formula: Age = -1.7702 + 1.0088 g + 14.8166 (Bo/Ca). This model explained 83.5% of total variance (R(2) = 0.835). The median of the absolute values of residuals (observed age minus predicted age) was -0.38, with a quartile deviation of 2.01 and a standard error of estimate of 1.54. A second sample test of 204 Italian children (108 girls and 96 boys), aged between 1 and 16 years, was used to evaluate the accuracy of the specific regression model. A sample paired t-test was used to analyze the mean differences between the skeletal and chronological age. The mean error for girls is 0.00 and the estimated age is slightly underestimated in boys with a mean error of -0.30 years. The standard deviations are 0.70 years for girls and 0.78 years for boys. The obtained results indicate that there is a high relationship between estimated and chronological ages. Copyright © 2016 Elsevier Ltd and Faculty of Forensic and Legal Medicine. All rights reserved.
Smart, C E; Ross, K; Edge, J A; King, B R; McElduff, P; Collins, C E
2010-03-01
Carbohydrate (CHO) counting allows children with Type 1 diabetes to adjust mealtime insulin dose to carbohydrate intake. Little is known about the ability of children to count CHO and whether a particular method for assessing CHO quantity is better than others. We investigated how accurately children and their caregivers estimate carbohydrate, and whether counting in gram increments improves accuracy compared with CHO portions or exchanges. One hundred and two children and adolescents (age range 8.3-18.1 years) on intensive insulin therapy and 110 caregivers independently estimated the CHO content of 17 standardized meals (containing 8-90 g CHO), using whichever method of carbohydrate quantification they had been taught (gram increments, 10-g portions or 15-g exchanges). Seventy-three per cent (n = 2530) of all estimates were within 10-15 g of actual CHO content. There was no relationship between the mean percentage error and method of carbohydrate counting or glycated haemoglobin (HbA(1c)) (P > 0.05). Mean gram error and meal size were negatively correlated (r = -0.70, P < 0.0001). The longer children had been CHO counting the greater the mean percentage error (r = 0.173, P = 0.014). Core foods in non-standard quantities were most frequently inaccurately estimated, while individually labelled foods were most often accurately estimated. Children with Type 1 diabetes and their caregivers can estimate the carbohydrate content of meals with reasonable accuracy. Teaching CHO counting in gram increments did not improve accuracy compared with CHO portions or exchanges. Large meals tended to be underestimated and snacks overestimated. Repeated age-appropriate education appears necessary to maintain accuracy in carbohydrate estimations.
Malinowski, Kathleen; McAvoy, Thomas J; George, Rohini; Dieterich, Sonja; D'Souza, Warren D
2013-07-01
To determine how best to time respiratory surrogate-based tumor motion model updates by comparing a novel technique based on external measurements alone to three direct measurement methods. Concurrently measured tumor and respiratory surrogate positions from 166 treatment fractions for lung or pancreas lesions were analyzed. Partial-least-squares regression models of tumor position from marker motion were created from the first six measurements in each dataset. Successive tumor localizations were obtained at a rate of once per minute on average. Model updates were timed according to four methods: never, respiratory surrogate-based (when metrics based on respiratory surrogate measurements exceeded confidence limits), error-based (when localization error ≥ 3 mm), and always (approximately once per minute). Radial tumor displacement prediction errors (mean ± standard deviation) for the four schema described above were 2.4 ± 1.2, 1.9 ± 0.9, 1.9 ± 0.8, and 1.7 ± 0.8 mm, respectively. The never-update error was significantly larger than errors of the other methods. Mean update counts over 20 min were 0, 4, 9, and 24, respectively. The same improvement in tumor localization accuracy could be achieved through any of the three update methods, but significantly fewer updates were required when the respiratory surrogate method was utilized. This study establishes the feasibility of timing image acquisitions for updating respiratory surrogate models without direct tumor localization.
NASA Astrophysics Data System (ADS)
Cao, Qian; Wan, Xiaoxia; Li, Junfeng; Liu, Qiang; Liang, Jingxing; Li, Chan
2016-10-01
This paper proposed two weight functions based on principal component analysis (PCA) to reserve more colorimetric information in spectral data compression process. One weight function consisted of the CIE XYZ color-matching functions representing the characteristic of the human visual system, while another was made up of the CIE XYZ color-matching functions of human visual system and relative spectral power distribution of the CIE standard illuminant D65. The improvement obtained from the proposed two methods were tested to compress and reconstruct the reflectance spectra of 1600 glossy Munsell color chips and 1950 Natural Color System color chips as well as six multispectral images. The performance was evaluated by the mean values of color difference under the CIE 1931 standard colorimetric observer and the CIE standard illuminant D65 and A. The mean values of root mean square errors between the original and reconstructed spectra were also calculated. The experimental results show that the proposed two methods significantly outperform the standard PCA and another two weighted PCA in the aspects of colorimetric reconstruction accuracy with very slight degradation in spectral reconstruction accuracy. In addition, weight functions with the CIE standard illuminant D65 can improve the colorimetric reconstruction accuracy compared to weight functions without the CIE standard illuminant D65.
NASA Astrophysics Data System (ADS)
Milani, G.; Milani, F.
A GUI software (GURU) for experimental data fitting of rheometer curves in Natural Rubber (NR) vulcanized with sulphur at different curing temperatures is presented. Experimental data are automatically loaded in GURU from an Excel spreadsheet coming from the output of the experimental machine (moving die rheometer). To fit the experimental data, the general reaction scheme proposed by Han and co-workers for NR vulcanized with sulphur is considered. From the simplified kinetic scheme adopted, a closed form solution can be found for the crosslink density, with the only limitation that the induction period is excluded from computations. Three kinetic constants must be determined in such a way to minimize the absolute error between normalized experimental data and numerical prediction. Usually, this result is achieved by means of standard least-squares data fitting. On the contrary, GURU works interactively by means of a Graphical User Interface (GUI) to minimize the error and allows an interactive calibration of the kinetic constants by means of sliders. A simple mouse click on the sliders allows the assignment of a value for each kinetic constant and a visual comparison between numerical and experimental curves. Users will thus find optimal values of the constants by means of a classic trial and error strategy. An experimental case of technical relevance is shown as benchmark.
Karsten, Bettina; Baker, Jonathan; Naclerio, Fernando; Klose, Andreas; Bianco, Antonino; Nimmerichter, Alfred
2018-02-01
To investigate single-day time-to-exhaustion (TTE) and time-trial (TT) -based laboratory tests values of critical power (CP), W prime (W'), and respective oxygen-uptake-kinetic responses. Twelve cyclists performed a maximal ramp test followed by 3 TTE and 3 TT efforts interspersed by 60 min recovery between efforts. Oxygen uptake ( V ˙ O 2 ) was measured during all trials. The mean response time was calculated as a description of the overall [Formula: see text]-kinetic response from the onset to 2 min of exercise. TTE-determined CP was 279 ± 52 W, and TT-determined CP was 276 ± 50 W (P = .237). Values of W' were 14.3 ± 3.4 kJ (TTE W') and 16.5 ± 4.2 kJ (TT W') (P = .028). While a high level of agreement (-12 to 17 W) and a low prediction error of 2.7% were established for CP, for W' limits of agreements were markedly lower (-8 to 3.7 kJ), with a prediction error of 18.8%. The mean standard error for TTE CP values was significantly higher than that for TT CP values (2.4% ± 1.9% vs 1.2% ± 0.7% W). The standard errors for TTE W' and TT W' were 11.2% ± 8.1% and 5.6% ± 3.6%, respectively. The [Formula: see text] response was significantly faster during TT (~22 s) than TTE (~28 s). The TT protocol with a 60-min recovery period offers a valid, time-saving, and less error-filled alternative to conventional and more recent testing methods. Results, however, cannot be transferred to W'.
NASA Astrophysics Data System (ADS)
Harudin, N.; Jamaludin, K. R.; Muhtazaruddin, M. Nabil; Ramlie, F.; Muhamad, Wan Zuki Azman Wan
2018-03-01
T-Method is one of the techniques governed under Mahalanobis Taguchi System that developed specifically for multivariate data predictions. Prediction using T-Method is always possible even with very limited sample size. The user of T-Method required to clearly understanding the population data trend since this method is not considering the effect of outliers within it. Outliers may cause apparent non-normality and the entire classical methods breakdown. There exist robust parameter estimate that provide satisfactory results when the data contain outliers, as well as when the data are free of them. The robust parameter estimates of location and scale measure called Shamos Bickel (SB) and Hodges Lehman (HL) which are used as a comparable method to calculate the mean and standard deviation of classical statistic is part of it. Embedding these into T-Method normalize stage feasibly help in enhancing the accuracy of the T-Method as well as analysing the robustness of T-method itself. However, the result of higher sample size case study shows that T-method is having lowest average error percentages (3.09%) on data with extreme outliers. HL and SB is having lowest error percentages (4.67%) for data without extreme outliers with minimum error differences compared to T-Method. The error percentages prediction trend is vice versa for lower sample size case study. The result shows that with minimum sample size, which outliers always be at low risk, T-Method is much better on that, while higher sample size with extreme outliers, T-Method as well show better prediction compared to others. For the case studies conducted in this research, it shows that normalization of T-Method is showing satisfactory results and it is not feasible to adapt HL and SB or normal mean and standard deviation into it since it’s only provide minimum effect of percentages errors. Normalization using T-method is still considered having lower risk towards outlier’s effect.
System statistical reliability model and analysis
NASA Technical Reports Server (NTRS)
Lekach, V. S.; Rood, H.
1973-01-01
A digital computer code was developed to simulate the time-dependent behavior of the 5-kwe reactor thermoelectric system. The code was used to determine lifetime sensitivity coefficients for a number of system design parameters, such as thermoelectric module efficiency and degradation rate, radiator absorptivity and emissivity, fuel element barrier defect constant, beginning-of-life reactivity, etc. A probability distribution (mean and standard deviation) was estimated for each of these design parameters. Then, error analysis was used to obtain a probability distribution for the system lifetime (mean = 7.7 years, standard deviation = 1.1 years). From this, the probability that the system will achieve the design goal of 5 years lifetime is 0.993. This value represents an estimate of the degradation reliability of the system.
Moore, C.R.
1989-01-01
This report presents physical, chemical, and biological data collected at 50 sampling sites on selected streams in Chester County, Pennsylvania from 1969 to 1980. The physical data consist of air and water temperature, stream discharge, suspended sediment, pH, specific conductance, and dissolved oxygen. The chemical data consist of laboratory determinations of total nutrients, major ions, and trace metals. The biological data consist of total coliform, fecal coliform, and fecal streptococcus bacteriological analyses, and benthicmacroinvertebrate population analyses. Brillouin's diversity index, maximum diversity, minimum diversity, and evenness for each sample, and median and mean Brilloiuin's diversity index, standard deviation, and standard error of the mean were calculated for the benthic-macroinvertebrate data for each site.
Stephan, Carl N; Simpson, Ellie K
2008-11-01
With the ever increasing production of average soft tissue depth studies, data are becoming increasingly complex, less standardized, and more unwieldy. So far, no overarching review has been attempted to determine: the validity of continued data collection; the usefulness of the existing data subcategorizations; or if a synthesis is possible to produce a manageable soft tissue depth library. While a principal components analysis would provide the best foundation for such an assessment, this type of investigation is not currently possible because of a lack of easily accessible raw data (first, many studies are narrow; second, raw data are infrequently published and/or stored and are not always shared by some authors). This paper provides an alternate means of investigation using an hierarchical approach to review and compare the effects of single variables on published mean values for adults whilst acknowledging measurement errors and within-group variation. The results revealed: (i) no clear secular trends at frequently investigated landmarks; (ii) wide variation in soft tissue depth measures between different measurement techniques irrespective of whether living persons or cadavers were considered; (iii) no clear clustering of non-Caucasoid data far from the Caucasoid means; and (iv) minor differences between males and females. Consequently, the data were pooled across studies using weighted means and standard deviations to cancel out random and opposing study-specific errors, and to produce a single soft tissue depth table with increased sample sizes (e.g., 6786 individuals at pogonion).
Error analysis of the crystal orientations obtained by the dictionary approach to EBSD indexing.
Ram, Farangis; Wright, Stuart; Singh, Saransh; De Graef, Marc
2017-10-01
The efficacy of the dictionary approach to Electron Back-Scatter Diffraction (EBSD) indexing was evaluated through the analysis of the error in the retrieved crystal orientations. EBSPs simulated by the Callahan-De Graef forward model were used for this purpose. Patterns were noised, distorted, and binned prior to dictionary indexing. Patterns with a high level of noise, with optical distortions, and with a 25 × 25 pixel size, when the error in projection center was 0.7% of the pattern width and the error in specimen tilt was 0.8°, were indexed with a 0.8° mean error in orientation. The same patterns, but 60 × 60 pixel in size, were indexed by the standard 2D Hough transform based approach with almost the same orientation accuracy. Optimal detection parameters in the Hough space were obtained by minimizing the orientation error. It was shown that if the error in detector geometry can be reduced to 0.1% in projection center and 0.1° in specimen tilt, the dictionary approach can retrieve a crystal orientation with a 0.2° accuracy. Copyright © 2017 Elsevier B.V. All rights reserved.
Kesselmeier, Miriam; Lorenzo Bermejo, Justo
2017-11-01
Logistic regression is the most common technique used for genetic case-control association studies. A disadvantage of standard maximum likelihood estimators of the genotype relative risk (GRR) is their strong dependence on outlier subjects, for example, patients diagnosed at unusually young age. Robust methods are available to constrain outlier influence, but they are scarcely used in genetic studies. This article provides a non-intimidating introduction to robust logistic regression, and investigates its benefits and limitations in genetic association studies. We applied the bounded Huber and extended the R package 'robustbase' with the re-descending Hampel functions to down-weight outlier influence. Computer simulations were carried out to assess the type I error rate, mean squared error (MSE) and statistical power according to major characteristics of the genetic study and investigated markers. Simulations were complemented with the analysis of real data. Both standard and robust estimation controlled type I error rates. Standard logistic regression showed the highest power but standard GRR estimates also showed the largest bias and MSE, in particular for associated rare and recessive variants. For illustration, a recessive variant with a true GRR=6.32 and a minor allele frequency=0.05 investigated in a 1000 case/1000 control study by standard logistic regression resulted in power=0.60 and MSE=16.5. The corresponding figures for Huber-based estimation were power=0.51 and MSE=0.53. Overall, Hampel- and Huber-based GRR estimates did not differ much. Robust logistic regression may represent a valuable alternative to standard maximum likelihood estimation when the focus lies on risk prediction rather than identification of susceptibility variants. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
Yang, Yang; DeGruttola, Victor
2016-01-01
Traditional resampling-based tests for homogeneity in covariance matrices across multiple groups resample residuals, that is, data centered by group means. These residuals do not share the same second moments when the null hypothesis is false, which makes them difficult to use in the setting of multiple testing. An alternative approach is to resample standardized residuals, data centered by group sample means and standardized by group sample covariance matrices. This approach, however, has been observed to inflate type I error when sample size is small or data are generated from heavy-tailed distributions. We propose to improve this approach by using robust estimation for the first and second moments. We discuss two statistics: the Bartlett statistic and a statistic based on eigen-decomposition of sample covariance matrices. Both statistics can be expressed in terms of standardized errors under the null hypothesis. These methods are extended to test homogeneity in correlation matrices. Using simulation studies, we demonstrate that the robust resampling approach provides comparable or superior performance, relative to traditional approaches, for single testing and reasonable performance for multiple testing. The proposed methods are applied to data collected in an HIV vaccine trial to investigate possible determinants, including vaccine status, vaccine-induced immune response level and viral genotype, of unusual correlation pattern between HIV viral load and CD4 count in newly infected patients. PMID:22740584
Yang, Yang; DeGruttola, Victor
2012-06-22
Traditional resampling-based tests for homogeneity in covariance matrices across multiple groups resample residuals, that is, data centered by group means. These residuals do not share the same second moments when the null hypothesis is false, which makes them difficult to use in the setting of multiple testing. An alternative approach is to resample standardized residuals, data centered by group sample means and standardized by group sample covariance matrices. This approach, however, has been observed to inflate type I error when sample size is small or data are generated from heavy-tailed distributions. We propose to improve this approach by using robust estimation for the first and second moments. We discuss two statistics: the Bartlett statistic and a statistic based on eigen-decomposition of sample covariance matrices. Both statistics can be expressed in terms of standardized errors under the null hypothesis. These methods are extended to test homogeneity in correlation matrices. Using simulation studies, we demonstrate that the robust resampling approach provides comparable or superior performance, relative to traditional approaches, for single testing and reasonable performance for multiple testing. The proposed methods are applied to data collected in an HIV vaccine trial to investigate possible determinants, including vaccine status, vaccine-induced immune response level and viral genotype, of unusual correlation pattern between HIV viral load and CD4 count in newly infected patients.
Lim, Changwon
2015-03-30
Nonlinear regression is often used to evaluate the toxicity of a chemical or a drug by fitting data from a dose-response study. Toxicologists and pharmacologists may draw a conclusion about whether a chemical is toxic by testing the significance of the estimated parameters. However, sometimes the null hypothesis cannot be rejected even though the fit is quite good. One possible reason for such cases is that the estimated standard errors of the parameter estimates are extremely large. In this paper, we propose robust ridge regression estimation procedures for nonlinear models to solve this problem. The asymptotic properties of the proposed estimators are investigated; in particular, their mean squared errors are derived. The performances of the proposed estimators are compared with several standard estimators using simulation studies. The proposed methodology is also illustrated using high throughput screening assay data obtained from the National Toxicology Program. Copyright © 2014 John Wiley & Sons, Ltd.
Accounting for dropout bias using mixed-effects models.
Mallinckrodt, C H; Clark, W S; David, S R
2001-01-01
Treatment effects are often evaluated by comparing change over time in outcome measures. However, valid analyses of longitudinal data can be problematic when subjects discontinue (dropout) prior to completing the study. This study assessed the merits of likelihood-based repeated measures analyses (MMRM) compared with fixed-effects analysis of variance where missing values were imputed using the last observation carried forward approach (LOCF) in accounting for dropout bias. Comparisons were made in simulated data and in data from a randomized clinical trial. Subject dropout was introduced in the simulated data to generate ignorable and nonignorable missingness. Estimates of treatment group differences in mean change from baseline to endpoint from MMRM were, on average, markedly closer to the true value than estimates from LOCF in every scenario simulated. Standard errors and confidence intervals from MMRM accurately reflected the uncertainty of the estimates, whereas standard errors and confidence intervals from LOCF underestimated uncertainty.
Free tropospheric measurements of CS2 over a 45 deg N to 45 deg S latitude range
NASA Technical Reports Server (NTRS)
Tucker, B. J.; Maroulis, P. J.; Bandy, A. R.
1985-01-01
The mean value obtained from 52 free tropospheric measurements of CS2 over the 45 deg N-45 deg S latitude range was 5.7 pptv, with standard deviation and standard error of 1.9 and 0.3 pptv, respectively. Large fluctuations in the CS2 concentration are observed which reflect the apparent short atmospheric residence time and inhomogeneities in the surface sources of CS2. The amounts of CS2 in the Northern and Southern Hemispheres are statistically equal.
Estimating Root Mean Square Errors in Remotely Sensed Soil Moisture over Continental Scale Domains
NASA Technical Reports Server (NTRS)
Draper, Clara S.; Reichle, Rolf; de Jeu, Richard; Naeimi, Vahid; Parinussa, Robert; Wagner, Wolfgang
2013-01-01
Root Mean Square Errors (RMSE) in the soil moisture anomaly time series obtained from the Advanced Scatterometer (ASCAT) and the Advanced Microwave Scanning Radiometer (AMSR-E; using the Land Parameter Retrieval Model) are estimated over a continental scale domain centered on North America, using two methods: triple colocation (RMSETC ) and error propagation through the soil moisture retrieval models (RMSEEP ). In the absence of an established consensus for the climatology of soil moisture over large domains, presenting a RMSE in soil moisture units requires that it be specified relative to a selected reference data set. To avoid the complications that arise from the use of a reference, the RMSE is presented as a fraction of the time series standard deviation (fRMSE). For both sensors, the fRMSETC and fRMSEEP show similar spatial patterns of relatively highlow errors, and the mean fRMSE for each land cover class is consistent with expectations. Triple colocation is also shown to be surprisingly robust to representativity differences between the soil moisture data sets used, and it is believed to accurately estimate the fRMSE in the remotely sensed soil moisture anomaly time series. Comparing the ASCAT and AMSR-E fRMSETC shows that both data sets have very similar accuracy across a range of land cover classes, although the AMSR-E accuracy is more directly related to vegetation cover. In general, both data sets have good skill up to moderate vegetation conditions.
Tirilazad mesylate protects stored erythrocytes against osmotic fragility.
Epps, D E; Knechtel, T J; Bacznskyj, O; Decker, D; Guido, D M; Buxser, S E; Mathews, W R; Buffenbarger, S L; Lutzke, B S; McCall, J M
1994-12-01
The hypoosmotic lysis curve of freshly collected human erythrocytes is consistent with a single Gaussian error function with a mean of 46.5 +/- 0.25 mM NaCl and a standard deviation of 5.0 +/- 0.4 mM NaCl. After extended storage of RBCs under standard blood bank conditions the lysis curve conforms to the sum of two error functions instead of a possible shift in the mean and a broadening of a single error function. Thus, two distinct sub-populations with different fragilities are present instead of a single, broadly distributed population. One population is identical to the freshly collected erythrocytes, whereas the other population consists of osmotically fragile cells. The rate of generation of the new, osmotically fragile, population of cells was used to probe the hypothesis that lipid peroxidation is responsible for the induction of membrane fragility. If it is so, then the antioxidant, tirilazad mesylate (U-74,006f), should protect against this degradation of stored erythrocytes. We found that tirilazad mesylate, at 17 microM (1.5 mol% with respect to membrane lecithin), retards significantly the formation of the osmotically fragile RBCs. Concomitantly, the concentration of free hemoglobin which accumulates during storage is markedly reduced by the drug. Since the presence of the drug also decreases the amount of F2-isoprostanes formed during the storage period, an antioxidant mechanism must be operative. These results demonstrate that tirilazad mesylate significantly decreases the number of fragile erythrocytes formed during storage in the blood bank.
Rochon, Justine; Kieser, Meinhard
2011-11-01
Student's one-sample t-test is a commonly used method when inference about the population mean is made. As advocated in textbooks and articles, the assumption of normality is often checked by a preliminary goodness-of-fit (GOF) test. In a paper recently published by Schucany and Ng it was shown that, for the uniform distribution, screening of samples by a pretest for normality leads to a more conservative conditional Type I error rate than application of the one-sample t-test without preliminary GOF test. In contrast, for the exponential distribution, the conditional level is even more elevated than the Type I error rate of the t-test without pretest. We examine the reasons behind these characteristics. In a simulation study, samples drawn from the exponential, lognormal, uniform, Student's t-distribution with 2 degrees of freedom (t(2) ) and the standard normal distribution that had passed normality screening, as well as the ingredients of the test statistics calculated from these samples, are investigated. For non-normal distributions, we found that preliminary testing for normality may change the distribution of means and standard deviations of the selected samples as well as the correlation between them (if the underlying distribution is non-symmetric), thus leading to altered distributions of the resulting test statistics. It is shown that for skewed distributions the excess in Type I error rate may be even more pronounced when testing one-sided hypotheses. ©2010 The British Psychological Society.
Flood Plain Topography Affects Establishment Success of Direct-Seeded Bottomland Oaks
Emile S. Gardiner; John D. Hodges; T. Conner Fristoe
2004-01-01
Five bottomland oak species were direct seeded along a topographical gradient in a flood plain to determine if environmental factors related to relative position in the flood plain influenced seedling establishment and survival. Two years after installation of the plantation, seedling establishment rates ranged from 12±1.6 (mean ± standard error) percent for overcup...
Spatial and temporal free-ranging cow behaviour pre and post-weaning
USDA-ARS?s Scientific Manuscript database
Global positioning system (GPS) technology can be used to study free-ranging cow behaviors. GPS equipment was deployed on each of ten cows ranging in age from 3 to 15 years in order to compare and contrast mean ± standard errors for pre- and post-weaning travel (m·time-1) in two similar (= 433 ha) a...
USDA-ARS?s Scientific Manuscript database
For any analytical system the population mean (mu) number of entities (e.g., cells or molecules) per tested volume, surface area, or mass also defines the population standard deviation (sigma = square root of mu ). For a preponderance of analytical methods, sigma is very small relative to mu due to...
Some computational techniques for estimating human operator describing functions
NASA Technical Reports Server (NTRS)
Levison, W. H.
1986-01-01
Computational procedures for improving the reliability of human operator describing functions are described. Special attention is given to the estimation of standard errors associated with mean operator gain and phase shift as computed from an ensemble of experimental trials. This analysis pertains to experiments using sum-of-sines forcing functions. Both open-loop and closed-loop measurement environments are considered.
Foster, Ken; Anwar, Nasim; Pogue, Rhea; Morré, Dorothy M.; Keenan, T. W.; Morré, D. James
2003-01-01
Seasonal decomposition analyses were applied to the statistical evaluation of an oscillating activity for a plasma membrane NADH oxidase activity with a temperature compensated period of 24 min. The decomposition fits were used to validate the cyclic oscillatory pattern. Three measured values, average percentage error (MAPE), a measure of the periodic oscillation, mean average deviation (MAD), a measure of the absolute average deviations from the fitted values, and mean standard deviation (MSD), the measure of standard deviation from the fitted values plus R-squared and the Henriksson-Merton p value were used to evaluate accuracy. Decomposition was carried out by fitting a trend line to the data, then detrending the data if necessary, by subtracting the trend component. The data, with or without detrending, were then smoothed by subtracting a centered moving average of length equal to the period length determined by Fourier analysis. Finally, the time series were decomposed into cyclic and error components. The findings not only validate the periodic nature of the major oscillations but suggest, as well, that the minor intervening fluctuations also recur within each period with a reproducible pattern of recurrence. PMID:19330112
Ringler, Adam; Holland, Austin; Wilson, David
2017-01-01
Variability in seismic instrumentation performance plays a fundamental role in our ability to carry out experiments in observational seismology. Many such experiments rely on the assumed performance of various seismic sensors as well as on methods to isolate the sensors from nonseismic noise sources. We look at the repeatability of estimating the self‐noise, midband sensitivity, and the relative orientation by comparing three collocated Nanometrics Trillium Compact sensors. To estimate the repeatability, we conduct a total of 15 trials in which one sensor is repeatedly reinstalled, alongside two undisturbed sensors. We find that we are able to estimate the midband sensitivity with an error of no greater than 0.04% with a 99th percentile confidence, assuming a standard normal distribution. We also find that we are able to estimate mean sensor self‐noise to within ±5.6 dB with a 99th percentile confidence in the 30–100‐s‐period band. Finally, we find our relative orientation errors have a mean difference in orientation of 0.0171° from the reference, but our trials have a standard deviation of 0.78°.
Carbohydrate-rich foods: glycaemic indices and the effect of constituent macronutrients.
Widanagamage, Rahal D; Ekanayake, Sagarika; Welihinda, Jayantha
2009-01-01
The glycaemic index (GI) ranks foods according to their acute glycaemic impact and is used in planning meals for patients invoking glycaemic control through diet. Kurakkan (Eleusine coracana) flour roti, rice flour roti, atta flour roti, boiled breadfruit (Artocarpus altilis/Artocarpus communis) and boiled legumes (mungbean, cowpea and chickpea) were categorized as low-GI foods (relative to white bread; Prima Crust Top), and the corresponding GI (+/- standard error of the mean) values were 70+/-8, 69+/-7, 67+/-9, 64+/-7, 57+/-6, 49+/-8 and 29+/-5, respectively. Kurakkan flour pittu and wheat flour roti were classified as medium-GI foods with GI values of 85+/-6 and 72+/-6. Hoppers, rice flour pittu, wheat flour pittu and Olu-milk rice (seeds of Nymphaea lotus) were categorized as high-GI foods, and the corresponding GI (+/- standard error of the mean) values were 120+/-8, 103+/-7, 101+/-8 and 91+/-8, respectively. The GI values significantly (P<0.01) and negatively correlated with the insoluble dietary fibre (rho = - 0.780), soluble dietary fibre (rho = - 0.712) and protein (rho = - 0.738) contents in grams per 100 g digestible starch containing foods.
Giordano, Lydia; Friedman, David S.; Repka, Michael X.; Katz, Joanne; Ibironke, Josephine; Hawes, Patricia; Tielsch, James M.
2009-01-01
Purpose To determine the age-specific prevalence of refractive errors in White and African-American preschool children. Design The Baltimore Pediatric Eye Disease Study is a population-based evaluation of the prevalence of ocular disorders in children aged six through 71 months in Baltimore, Maryland, United States. Participants Among 4,132 children identified, 3,990 eligible children (97%) were enrolled and 2,546 children (62%) were examined. Methods Cycloplegic autorefraction was attempted on all children using a Nikon Retinomax K-Plus 2. If a reliable autorefraction could not be obtained after three attempts, cycloplegic streak retinoscopy was performed. Main Outcome Measures Mean spherical equivalent (SE) refractive error, astigmatism, and prevalence of higher refractive errors among African American and White children. Results The mean spherical equivalent (SE) of right eyes was +1.49 diopter (D) (standard deviation (SD) =1.23) in White and +0.71D (SD=1.35) in African-American children (mean difference of 0.78D, 95% CI: 0.67, 0.89). Mean SE refractive error did not decline with age in either group. The prevalence of myopia of 1.00 D or more in the eye with the lesser refractive error was 0.7% in White and 5.5% in African-American children (RR: 8.01 95% confidence interval (CI): 3.70, 17.35). The prevalence of hyperopia of +3D or more in the eye with the lesser refractive error was 8.9% in White and 4.4% in African-American children (relative risk (RR): 0.49, 95% CI: 0.35, 0.68). The prevalence of emmetropia (less than −1.00 D to less than +1.00 D) was 35.6% in Whites and 58.0 % in African-Americans (RR: 1.64, 95% CI: 1.49, 1.80). Based on published prescribing guidelines 5.1% of the children would have benefited from spectacle correction. However, only 1.3% had been previously prescribed correction. Conclusions Significant refractive errors are uncommon in this population of urban preschool children. There was no evidence for a myopic shift over this age range in this cross-sectional study. A small proportion of preschool children would likely benefit from refractive correction, but few have had this prescribed. PMID:19243832
Fallon, Joan
2005-01-01
Autism is an ever increasing problem in the United States. Characterized by multiple deficits in the areas of communication, development, and behavior; autistic children are found in every community in this country and abroad. Recent findings point to a significant increase in autism which can not be accounted for by means such as misclassification. The state of California recently reported a 273% increase in the number of cases between 1987 and 1998. Many possible causes have been proposed which range from genetics to environment, with a combination of the two most likely. Since the introduction of clavulanate/amoxicillin in the 1980s there has been the increase in numbers of cases of autism. In this study 206 children under the age of three years with autism were screened by means of a detailed case history. A significant commonality was discerned and that being the level of chronic otitis media. These children were found to have a mean number 9.96 bouts of otitis media (with a standard error of the mean of +/-1.83). This represents a sum total for all 206 children of 2052 bouts of otitis media. These children received a mean number of 12.04 courses of antibiotics (standard error of the mean of +/-.125). The sum total number of courses of antibiotics given to all 206 children was 2480. Of those 893 courses were Augmentin. with 362 of these Augmentin courses administered under the age of one year. A proposed mechanism whereby the production of clavulanate may yield high levels of urea/ammonia in the child is presented. Further an examination of this mechanism needs to be undertaken to determine if a subset of children are at risk for neurotoxicity from the use of clavulanic acid in pharmaceutical preparations.
Zook, Justin M.; Samarov, Daniel; McDaniel, Jennifer; Sen, Shurjo K.; Salit, Marc
2012-01-01
While the importance of random sequencing errors decreases at higher DNA or RNA sequencing depths, systematic sequencing errors (SSEs) dominate at high sequencing depths and can be difficult to distinguish from biological variants. These SSEs can cause base quality scores to underestimate the probability of error at certain genomic positions, resulting in false positive variant calls, particularly in mixtures such as samples with RNA editing, tumors, circulating tumor cells, bacteria, mitochondrial heteroplasmy, or pooled DNA. Most algorithms proposed for correction of SSEs require a data set used to calculate association of SSEs with various features in the reads and sequence context. This data set is typically either from a part of the data set being “recalibrated” (Genome Analysis ToolKit, or GATK) or from a separate data set with special characteristics (SysCall). Here, we combine the advantages of these approaches by adding synthetic RNA spike-in standards to human RNA, and use GATK to recalibrate base quality scores with reads mapped to the spike-in standards. Compared to conventional GATK recalibration that uses reads mapped to the genome, spike-ins improve the accuracy of Illumina base quality scores by a mean of 5 Phred-scaled quality score units, and by as much as 13 units at CpG sites. In addition, since the spike-in data used for recalibration are independent of the genome being sequenced, our method allows run-specific recalibration even for the many species without a comprehensive and accurate SNP database. We also use GATK with the spike-in standards to demonstrate that the Illumina RNA sequencing runs overestimate quality scores for AC, CC, GC, GG, and TC dinucleotides, while SOLiD has less dinucleotide SSEs but more SSEs for certain cycles. We conclude that using these DNA and RNA spike-in standards with GATK improves base quality score recalibration. PMID:22859977
ERIC Educational Resources Information Center
Lord, Frederic M.; Stocking, Martha
A general Computer program is described that will compute asymptotic standard errors and carry out significance tests for an endless variety of (standard and) nonstandard large-sample statistical problems, without requiring the statistician to derive asymptotic standard error formulas. The program assumes that the observations have a multinormal…
Artificial Intelligence Techniques for Predicting and Mapping Daily Pan Evaporation
NASA Astrophysics Data System (ADS)
Arunkumar, R.; Jothiprakash, V.; Sharma, Kirty
2017-09-01
In this study, Artificial Intelligence techniques such as Artificial Neural Network (ANN), Model Tree (MT) and Genetic Programming (GP) are used to develop daily pan evaporation time-series (TS) prediction and cause-effect (CE) mapping models. Ten years of observed daily meteorological data such as maximum temperature, minimum temperature, relative humidity, sunshine hours, dew point temperature and pan evaporation are used for developing the models. For each technique, several models are developed by changing the number of inputs and other model parameters. The performance of each model is evaluated using standard statistical measures such as Mean Square Error, Mean Absolute Error, Normalized Mean Square Error and correlation coefficient (R). The results showed that daily TS-GP (4) model predicted better with a correlation coefficient of 0.959 than other TS models. Among various CE models, CE-ANN (6-10-1) resulted better than MT and GP models with a correlation coefficient of 0.881. Because of the complex non-linear inter-relationship among various meteorological variables, CE mapping models could not achieve the performance of TS models. From this study, it was found that GP performs better for recognizing single pattern (time series modelling), whereas ANN is better for modelling multiple patterns (cause-effect modelling) in the data.
Curran, Christopher A.; Eng, Ken; Konrad, Christopher P.
2012-01-01
Regional low-flow regression models for estimating Q7,10 at ungaged stream sites are developed from the records of daily discharge at 65 continuous gaging stations (including 22 discontinued gaging stations) for the purpose of evaluating explanatory variables. By incorporating the base-flow recession time constant τ as an explanatory variable in the regression model, the root-mean square error for estimating Q7,10 at ungaged sites can be lowered to 72 percent (for known values of τ), which is 42 percent less than if only basin area and mean annual precipitation are used as explanatory variables. If partial-record sites are included in the regression data set, τ must be estimated from pairs of discharge measurements made during continuous periods of declining low flows. Eight measurement pairs are optimal for estimating τ at partial-record sites, and result in a lowering of the root-mean square error by 25 percent. A low-flow survey strategy that includes paired measurements at partial-record sites requires additional effort and planning beyond a standard strategy, but could be used to enhance regional estimates of τ and potentially reduce the error of regional regression models for estimating low-flow characteristics at ungaged sites.
Evaluation of Satellite and Model Precipitation Products Over Turkey
NASA Astrophysics Data System (ADS)
Yilmaz, M. T.; Amjad, M.
2017-12-01
Satellite-based remote sensing, gauge stations, and models are the three major platforms to acquire precipitation dataset. Among them satellites and models have the advantage of retrieving spatially and temporally continuous and consistent datasets, while the uncertainty estimates of these retrievals are often required for many hydrological studies to understand the source and the magnitude of the uncertainty in hydrological response parameters. In this study, satellite and model precipitation data products are validated over various temporal scales (daily, 3-daily, 7-daily, 10-daily and monthly) using in-situ measured precipitation observations from a network of 733 gauges from all over the Turkey. Tropical Rainfall Measurement Mission (TRMM) Multi-satellite Precipitation Analysis (TMPA) 3B42 version 7 and European Center of Medium-Range Weather Forecast (ECMWF) model estimates (daily, 3-daily, 7-daily and 10-daily accumulated forecast) are used in this study. Retrievals are evaluated for their mean and standard deviation and their accuracies are evaluated via bias, root mean square error, error standard deviation and correlation coefficient statistics. Intensity vs frequency analysis and some contingency table statistics like percent correct, probability of detection, false alarm ratio and critical success index are determined using daily time-series. Both ECMWF forecasts and TRMM observations, on average, overestimate the precipitation compared to gauge estimates; wet biases are 10.26 mm/month and 8.65 mm/month, respectively for ECMWF and TRMM. RMSE values of ECMWF forecasts and TRMM estimates are 39.69 mm/month and 41.55 mm/month, respectively. Monthly correlations between Gauges-ECMWF, Gauges-TRMM and ECMWF-TRMM are 0.76, 0.73 and 0.81, respectively. The model and the satellite error statistics are further compared against the gauges error statistics based on inverse distance weighting (IWD) analysis. Both the model and satellite data have less IWD errors (14.72 mm/month and 10.75 mm/month, respectively) compared to gauges IWD error (21.58 mm/month). These results show that, on average, ECMWF forecast data have higher skill than TRMM observations. Overall, both ECMWF forecast data and TRMM observations show good potential for catchment scale hydrological analysis.
An efficient algorithm for generating random number pairs drawn from a bivariate normal distribution
NASA Technical Reports Server (NTRS)
Campbell, C. W.
1983-01-01
An efficient algorithm for generating random number pairs from a bivariate normal distribution was developed. Any desired value of the two means, two standard deviations, and correlation coefficient can be selected. Theoretically the technique is exact and in practice its accuracy is limited only by the quality of the uniform distribution random number generator, inaccuracies in computer function evaluation, and arithmetic. A FORTRAN routine was written to check the algorithm and good accuracy was obtained. Some small errors in the correlation coefficient were observed to vary in a surprisingly regular manner. A simple model was developed which explained the qualities aspects of the errors.
NASA Astrophysics Data System (ADS)
Pavošević, Fabijan; Neese, Frank; Valeev, Edward F.
2014-08-01
We present a production implementation of reduced-scaling explicitly correlated (F12) coupled-cluster singles and doubles (CCSD) method based on pair-natural orbitals (PNOs). A key feature is the reformulation of the explicitly correlated terms using geminal-spanning orbitals that greatly reduce the truncation errors of the F12 contribution. For the standard S66 benchmark of weak intermolecular interactions, the cc-pVDZ-F12 PNO CCSD F12 interaction energies reproduce the complete basis set CCSD limit with mean absolute error <0.1 kcal/mol, and at a greatly reduced cost compared to the conventional CCSD F12.
Measurement of diffusion coefficients from solution rates of bubbles
NASA Technical Reports Server (NTRS)
Krieger, I. M.
1979-01-01
The rate of solution of a stationary bubble is limited by the diffusion of dissolved gas molecules away from the bubble surface. Diffusion coefficients computed from measured rates of solution give mean values higher than accepted literature values, with standard errors as high as 10% for a single observation. Better accuracy is achieved with sparingly soluble gases, small bubbles, and highly viscous liquids. Accuracy correlates with the Grashof number, indicating that free convection is the major source of error. Accuracy should, therefore, be greatly increased in a gravity-free environment. The fact that the bubble will need no support is an additional important advantage of Spacelab for this measurement.
A Method for Calculating the Mean Orbits of Meteor Streams
NASA Astrophysics Data System (ADS)
Voloshchuk, Yu. I.; Kashcheev, B. L.
An examination of the published catalogs of orbits of meteor streams and of a large number of works devoted to the selection of streams, their analysis and interpretation, showed that elements of stream orbits are calculated, as a rule, as arithmetical (sometimes, weighed) sample means. On the basis of these means, a search for parent bodies, a study of the evolution of swarms generating these streams, an analysis of one-dimensional and multidimensional distributions of these elements, etc., are performed. We show that systematic errors in the estimates of elements of the mean orbits are present in each of the catalogs. These errors are caused by the formal averaging of orbital elements over the sample, while ignoring the fact that they represent not only correlated, but dependent quantities, with nonlinear, in most cases, interrelations between them. Numerous examples are given of such inaccuracies, in particular, the cases where the "mean orbit of the stream" recorded by ground-based techniques does not cross the Earth's orbit. We suggest the computation algorithm, in which the averaging over the sample is carried out at the initial stage of the calculation of the mean orbit, and only for the variables required for subsequent calculations. After this, the known astrometric formulas are used to sequentially calculate all other parameters of the stream, considered now as a standard orbit. Variance analysis is used to estimate the errors in orbital elements of the streams, in the case that their orbits are obtained by averaging the orbital elements of meteoroids forming the stream, without taking into account their interdependence. The results obtained in this analysis indicate the behavior of systematic errors in the elements of orbits of meteor streams. As an example, the effect of the incorrect computation method on the distribution of elements of the stream orbits close to the orbits of asteroids of the Apollo, Aten, and Amor groups (AAA asteroids) is examined.
Computation of Standard Errors
Dowd, Bryan E; Greene, William H; Norton, Edward C
2014-01-01
Objectives We discuss the problem of computing the standard errors of functions involving estimated parameters and provide the relevant computer code for three different computational approaches using two popular computer packages. Study Design We show how to compute the standard errors of several functions of interest: the predicted value of the dependent variable for a particular subject, and the effect of a change in an explanatory variable on the predicted value of the dependent variable for an individual subject and average effect for a sample of subjects. Empirical Application Using a publicly available dataset, we explain three different methods of computing standard errors: the delta method, Krinsky–Robb, and bootstrapping. We provide computer code for Stata 12 and LIMDEP 10/NLOGIT 5. Conclusions In most applications, choice of the computational method for standard errors of functions of estimated parameters is a matter of convenience. However, when computing standard errors of the sample average of functions that involve both estimated parameters and nonstochastic explanatory variables, it is important to consider the sources of variation in the function's values. PMID:24800304
a Climatology of Global Precipitation.
NASA Astrophysics Data System (ADS)
Legates, David Russell
A global climatology of mean monthly precipitation has been developed using traditional land-based gage measurements as well as derived oceanic data. These data have been screened for coding errors and redundant entries have been removed. Oceanic precipitation estimates are most often extrapolated from coastal and island observations because few gage estimates of oceanic precipitation exist. One such procedure, developed by Dorman and Bourke and used here, employs a derived relationship between observed rainfall totals and the "current weather" at coastal stations. The combined data base contains 24,635 independent terrestial station records and 2223 oceanic grid-point records. Raingage catches are known to underestimate actual precipitation. Errors in the gage catch result from wind -field deformation, wetting losses, and evaporation from the gage and can amount to nearly 8, 2, and 1 percent of the global catch, respectively. A procedure has been developed to correct many of these errors and has been used to adjust the gage estimates of global precipitation. Space-time variations in gage type, air temperature, wind speed, and natural vegetation were incorporated into the correction procedure. Corrected data were then interpolated to the nodes of a 0.5^circ of latitude by 0.5^circ of longitude lattice using a spherically-based interpolation algorithm. Interpolation errors are largest in areas of low station density, rugged topography, and heavy precipitation. Interpolated estimates also were compared with a digital filtering technique to access the aliasing of high-frequency "noise" into the lower frequency signals. Isohyetal maps displaying the mean annual, seasonal, and monthly precipitation are presented. Gage corrections and the standard error of the corrected estimates also are mapped. Results indicate that mean annual global precipitation is 1123 mm with 1251 mm falling over the oceans and 820 mm over land. Spatial distributions of monthly precipitation generally are consistent with existing precipitation climatologies.
Blumenfeld, Philip; Hata, Nobuhiko; DiMaio, Simon; Zou, Kelly; Haker, Steven; Fichtinger, Gabor; Tempany, Clare M C
2007-09-01
To quantify needle placement accuracy of magnetic resonance image (MRI)-guided core needle biopsy of the prostate. A total of 10 biopsies were performed with 18-gauge (G) core biopsy needle via a percutaneous transperineal approach. Needle placement error was assessed by comparing the coordinates of preplanned targets with the needle tip measured from the intraprocedural coherent gradient echo images. The source of these errors was subsequently investigated by measuring displacement caused by needle deflection and needle susceptibility artifact shift in controlled phantom studies. Needle placement error due to misalignment of the needle template guide was also evaluated. The mean and standard deviation (SD) of errors in targeted biopsies was 6.5 +/- 3.5 mm. Phantom experiments showed significant placement error due to needle deflection with a needle with an asymmetrically beveled tip (3.2-8.7 mm depending on tissue type) but significantly smaller error with a symmetrical bevel (0.6-1.1 mm). Needle susceptibility artifacts observed a shift of 1.6 +/- 0.4 mm from the true needle axis. Misalignment of the needle template guide contributed an error of 1.5 +/- 0.3 mm. Needle placement error was clinically significant in MRI-guided biopsy for diagnosis of prostate cancer. Needle placement error due to needle deflection was the most significant cause of error, especially for needles with an asymmetrical bevel. (c) 2007 Wiley-Liss, Inc.
Echeta, Genevieve; Moffett, Brady S; Checchia, Paul; Benton, Mary Kay; Klouda, Leda; Rodriguez, Fred H; Franklin, Wayne
2014-01-01
Adults with congenital heart disease (CHD) are often cared for at pediatric hospitals. There are no data describing the incidence or type of medication prescribing errors in adult patients admitted to a pediatric cardiovascular intensive care unit (CVICU). A review of patients >18 years of age admitted to the pediatric CVICU at our institution from 2009 to 2011 occurred. A comparator group <18 years of age but >70 kg (a typical adult weight) was identified. Medication prescribing errors were determined according to a commonly used adult drug reference. An independent panel consisting of a physician specializing in the care of adult CHD patients, a nurse, and a pharmacist evaluated all errors. Medication prescribing orders were classified as appropriate, underdose, overdose, or nonstandard (dosing per weight instead of standard adult dosing), and severity of error was classified. Eighty-five adult (74 patients) and 33 pediatric admissions (32 patients) met study criteria (mean age 27.5 ± 9.4 years, 53% male vs. 14.9 ± 1.8 years, 63% male). A cardiothoracic surgical procedure occurred in 81.4% of admissions. Adult admissions weighed less than pediatric admissions (72.8 ± 22.4 kg vs. 85.6 ± 14.9 kg, P < .01) but hospital length of stay was similar. (Adult 6 days [range 1-216 days]; pediatric 5 days [Range 2-123 days], P = .52.) A total of 112 prescribing errors were identified and they occurred less often in adults (42.4% of admissions vs. 66.7% of admissions, P = .02). Adults had a lower mean number of errors (0.7 errors per adult admission vs. 1.7 errors per pediatric admission, P < .01). Prescribing errors occurred most commonly with antimicrobials (n = 27). Underdosing was the most common category of prescribing error. Most prescribing errors were determined to have not caused harm to the patient. Prescribing errors occur frequently in adult patients admitted to a pediatric CVICU but occur more often in pediatric patients of adult weight. © 2013 Wiley Periodicals, Inc.
Hinton-Bayre, Anton D
2011-02-01
There is an ongoing debate over the preferred method(s) for determining the reliable change (RC) in individual scores over time. In the present paper, specificity comparisons of several classic and contemporary RC models were made using a real data set. This included a more detailed review of a new RC model recently proposed in this journal, that used the within-subjects standard deviation (WSD) as the error term. It was suggested that the RC(WSD) was more sensitive to change and theoretically superior. The current paper demonstrated that even in the presence of mean practice effects, false-positive rates were comparable across models when reliability was good and initial and retest variances were equivalent. However, when variances differed, discrepancies in classification across models became evident. Notably, the RC using the WSD provided unacceptably high false-positive rates in this setting. It was considered that the WSD was never intended for measuring change in this manner. The WSD actually combines systematic and error variance. The systematic variance comes from measurable between-treatment differences, commonly referred to as practice effect. It was further demonstrated that removal of the systematic variance and appropriate modification of the residual error term for the purpose of testing individual change yielded an error term already published and criticized in the literature. A consensus on the RC approach is needed. To that end, further comparison of models under varied conditions is encouraged.
A variational regularization of Abel transform for GPS radio occultation
NASA Astrophysics Data System (ADS)
Wee, Tae-Kwon
2018-04-01
In the Global Positioning System (GPS) radio occultation (RO) technique, the inverse Abel transform of measured bending angle (Abel inversion, hereafter AI) is the standard means of deriving the refractivity. While concise and straightforward to apply, the AI accumulates and propagates the measurement error downward. The measurement error propagation is detrimental to the refractivity in lower altitudes. In particular, it builds up negative refractivity bias in the tropical lower troposphere. An alternative to AI is the numerical inversion of the forward Abel transform, which does not incur the integration of error-possessing measurement and thus precludes the error propagation. The variational regularization (VR) proposed in this study approximates the inversion of the forward Abel transform by an optimization problem in which the regularized solution describes the measurement as closely as possible within the measurement's considered accuracy. The optimization problem is then solved iteratively by means of the adjoint technique. VR is formulated with error covariance matrices, which permit a rigorous incorporation of prior information on measurement error characteristics and the solution's desired behavior into the regularization. VR holds the control variable in the measurement space to take advantage of the posterior height determination and to negate the measurement error due to the mismodeling of the refractional radius. The advantages of having the solution and the measurement in the same space are elaborated using a purposely corrupted synthetic sounding with a known true solution. The competency of VR relative to AI is validated with a large number of actual RO soundings. The comparison to nearby radiosonde observations shows that VR attains considerably smaller random and systematic errors compared to AI. A noteworthy finding is that in the heights and areas that the measurement bias is supposedly small, VR follows AI very closely in the mean refractivity deserting the first guess. In the lowest few kilometers that AI produces large negative refractivity bias, VR reduces the refractivity bias substantially with the aid of the background, which in this study is the operational forecasts of the European Centre for Medium-Range Weather Forecasts (ECMWF). It is concluded based on the results presented in this study that VR offers a definite advantage over AI in the quality of refractivity.
An assessment of air pollutant exposure methods in Mexico City, Mexico.
Rivera-González, Luis O; Zhang, Zhenzhen; Sánchez, Brisa N; Zhang, Kai; Brown, Daniel G; Rojas-Bracho, Leonora; Osornio-Vargas, Alvaro; Vadillo-Ortega, Felipe; O'Neill, Marie S
2015-05-01
Geostatistical interpolation methods to estimate individual exposure to outdoor air pollutants can be used in pregnancy cohorts where personal exposure data are not collected. Our objectives were to a) develop four assessment methods (citywide average (CWA); nearest monitor (NM); inverse distance weighting (IDW); and ordinary Kriging (OK)), and b) compare daily metrics and cross-validations of interpolation models. We obtained 2008 hourly data from Mexico City's outdoor air monitoring network for PM10, PM2.5, O3, CO, NO2, and SO2 and constructed daily exposure metrics for 1,000 simulated individual locations across five populated geographic zones. Descriptive statistics from all methods were calculated for dry and wet seasons, and by zone. We also evaluated IDW and OK methods' ability to predict measured concentrations at monitors using cross validation and a coefficient of variation (COV). All methods were performed using SAS 9.3, except ordinary Kriging which was modeled using R's gstat package. Overall, mean concentrations and standard deviations were similar among the different methods for each pollutant. Correlations between methods were generally high (r=0.77 to 0.99). However, ranges of estimated concentrations determined by NM, IDW, and OK were wider than the ranges for CWA. Root mean square errors for OK were consistently equal to or lower than for the IDW method. OK standard errors varied considerably between pollutants and the computed COVs ranged from 0.46 (least error) for SO2 and PM10 to 3.91 (most error) for PM2.5. OK predicted concentrations measured at the monitors better than IDW and NM. Given the similarity in results for the exposure methods, OK is preferred because this method alone provides predicted standard errors which can be incorporated in statistical models. The daily estimated exposures calculated using these different exposure methods provide flexibility to evaluate multiple windows of exposure during pregnancy, not just trimester or pregnancy-long exposures. Many studies evaluating associations between outdoor air pollution and adverse pregnancy outcomes rely on outdoor air pollution monitoring data linked to information gathered from large birth registries, and often lack residence location information needed to estimate individual exposure. This study simulated 1,000 residential locations to evaluate four air pollution exposure assessment methods, and describes possible exposure misclassification from using spatial averaging versus geostatistical interpolation models. An implication of this work is that policies to reduce air pollution and exposure among pregnant women based on epidemiologic literature should take into account possible error in estimates of effect when spatial averages alone are evaluated.
Bankfull characteristics of Ohio streams and their relation to peak streamflows
Sherwood, James M.; Huitger, Carrie A.
2005-01-01
Regional curves, simple-regression equations, and multiple-regression equations were developed to estimate bankfull width, bankfull mean depth, bankfull cross-sectional area, and bankfull discharge of rural, unregulated streams in Ohio. The methods are based on geomorphic, basin, and flood-frequency data collected at 50 study sites on unregulated natural alluvial streams in Ohio, of which 40 sites are near streamflow-gaging stations. The regional curves and simple-regression equations relate the bankfull characteristics to drainage area. The multiple-regression equations relate the bankfull characteristics to drainage area, main-channel slope, main-channel elevation index, median bed-material particle size, bankfull cross-sectional area, and local-channel slope. Average standard errors of prediction for bankfull width equations range from 20.6 to 24.8 percent; for bankfull mean depth, 18.8 to 20.6 percent; for bankfull cross-sectional area, 25.4 to 30.6 percent; and for bankfull discharge, 27.0 to 78.7 percent. The simple-regression (drainage-area only) equations have the highest average standard errors of prediction. The multiple-regression equations in which the explanatory variables included drainage area, main-channel slope, main-channel elevation index, median bed-material particle size, bankfull cross-sectional area, and local-channel slope have the lowest average standard errors of prediction. Field surveys were done at each of the 50 study sites to collect the geomorphic data. Bankfull indicators were identified and evaluated, cross-section and longitudinal profiles were surveyed, and bed- and bank-material were sampled. Field data were analyzed to determine various geomorphic characteristics such as bankfull width, bankfull mean depth, bankfull cross-sectional area, bankfull discharge, streambed slope, and bed- and bank-material particle-size distribution. The various geomorphic characteristics were analyzed by means of a combination of graphical and statistical techniques. The logarithms of the annual peak discharges for the 40 gaged study sites were fit by a Pearson Type III frequency distribution to develop flood-peak discharges associated with recurrence intervals of 2, 5, 10, 25, 50, and 100 years. The peak-frequency data were related to geomorphic, basin, and climatic variables by multiple-regression analysis. Simple-regression equations were developed to estimate 2-, 5-, 10-, 25-, 50-, and 100-year flood-peak discharges of rural, unregulated streams in Ohio from bankfull channel cross-sectional area. The average standard errors of prediction are 31.6, 32.6, 35.9, 41.5, 46.2, and 51.2 percent, respectively. The study and methods developed are intended to improve understanding of the relations between geomorphic, basin, and flood characteristics of streams in Ohio and to aid in the design of hydraulic structures, such as culverts and bridges, where stability of the stream and structure is an important element of the design criteria. The study was done in cooperation with the Ohio Department of Transportation and the U.S. Department of Transportation, Federal Highway Administration.
Microcomputer package for statistical analysis of microbial populations.
Lacroix, J M; Lavoie, M C
1987-11-01
We have developed a Pascal system to compare microbial populations from different ecological sites using microcomputers. The values calculated are: the coverage value and its standard error, the minimum similarity and the geometric similarity between two biological samples, and the Lambda test consisting of calculating the ratio of the mean similarity between two subsets by the mean similarity within subsets. This system is written for Apple II, IBM or compatible computers, but it can work for any computer which can use CP/M, if the programs are recompiled for such a system.
The Infinitesimal Jackknife with Exploratory Factor Analysis
ERIC Educational Resources Information Center
Zhang, Guangjian; Preacher, Kristopher J.; Jennrich, Robert I.
2012-01-01
The infinitesimal jackknife, a nonparametric method for estimating standard errors, has been used to obtain standard error estimates in covariance structure analysis. In this article, we adapt it for obtaining standard errors for rotated factor loadings and factor correlations in exploratory factor analysis with sample correlation matrices. Both…
ERIC Educational Resources Information Center
Woodruff, David; Traynor, Anne; Cui, Zhongmin; Fang, Yu
2013-01-01
Professional standards for educational testing recommend that both the overall standard error of measurement and the conditional standard error of measurement (CSEM) be computed on the score scale used to report scores to examinees. Several methods have been developed to compute scale score CSEMs. This paper compares three methods, based on…
Functional Mixed Effects Model for Small Area Estimation.
Maiti, Tapabrata; Sinha, Samiran; Zhong, Ping-Shou
2016-09-01
Functional data analysis has become an important area of research due to its ability of handling high dimensional and complex data structures. However, the development is limited in the context of linear mixed effect models, and in particular, for small area estimation. The linear mixed effect models are the backbone of small area estimation. In this article, we consider area level data, and fit a varying coefficient linear mixed effect model where the varying coefficients are semi-parametrically modeled via B-splines. We propose a method of estimating the fixed effect parameters and consider prediction of random effects that can be implemented using a standard software. For measuring prediction uncertainties, we derive an analytical expression for the mean squared errors, and propose a method of estimating the mean squared errors. The procedure is illustrated via a real data example, and operating characteristics of the method are judged using finite sample simulation studies.
Analysis of S-box in Image Encryption Using Root Mean Square Error Method
NASA Astrophysics Data System (ADS)
Hussain, Iqtadar; Shah, Tariq; Gondal, Muhammad Asif; Mahmood, Hasan
2012-07-01
The use of substitution boxes (S-boxes) in encryption applications has proven to be an effective nonlinear component in creating confusion and randomness. The S-box is evolving and many variants appear in literature, which include advanced encryption standard (AES) S-box, affine power affine (APA) S-box, Skipjack S-box, Gray S-box, Lui J S-box, residue prime number S-box, Xyi S-box, and S8 S-box. These S-boxes have algebraic and statistical properties which distinguish them from each other in terms of encryption strength. In some circumstances, the parameters from algebraic and statistical analysis yield results which do not provide clear evidence in distinguishing an S-box for an application to a particular set of data. In image encryption applications, the use of S-boxes needs special care because the visual analysis and perception of a viewer can sometimes identify artifacts embedded in the image. In addition to existing algebraic and statistical analysis already used for image encryption applications, we propose an application of root mean square error technique, which further elaborates the results and enables the analyst to vividly distinguish between the performances of various S-boxes. While the use of the root mean square error analysis in statistics has proven to be effective in determining the difference in original data and the processed data, its use in image encryption has shown promising results in estimating the strength of the encryption method. In this paper, we show the application of the root mean square error analysis to S-box image encryption. The parameters from this analysis are used in determining the strength of S-boxes
NASA Astrophysics Data System (ADS)
Luce, C. H.; Tonina, D.; Applebee, R.; DeWeese, T.
2017-12-01
Two common refrains about using the one-dimensional advection diffusion equation to estimate fluid fluxes, thermal conductivity, or bed surface elevation from temperature time series in streambeds are that the solution assumes that 1) the surface boundary condition is a sine wave or nearly so, and 2) there is no gradient in mean temperature with depth. Concerns on these subjects are phrased in various ways, including non-stationarity in frequency, amplitude, or phase. Although the mathematical posing of the original solution to the problem might lead one to believe these constraints exist, the perception that they are a source of error is a fallacy. Here we re-derive the inverse solution of the 1-D advection-diffusion equation starting with an arbitrary surface boundary condition for temperature. In doing so, we demonstrate the frequency-independence of the solution, meaning any single frequency can be used in the frequency-domain solutions to estimate thermal diffusivity and 1-D fluid flux in streambeds, even if the forcing has multiple frequencies. This means that diurnal variations with asymmetric shapes, gradients in the mean temperature with depth, or `non-stationary' amplitude and frequency (or phase) do not actually represent violations of assumptions, and they should not cause errors in estimates when using one of the suite of existing solution methods derived based on a single frequency. Misattribution of errors to these issues constrains progress on solving real sources of error. Numerical and physical experiments are used to verify this conclusion and consider the utility of information at `non-standard' frequencies and multiple frequencies to augment the information derived from time series of temperature.
Lin, P.-S.; Chiou, B.; Abrahamson, N.; Walling, M.; Lee, C.-T.; Cheng, C.-T.
2011-01-01
In this study, we quantify the reduction in the standard deviation for empirical ground-motion prediction models by removing ergodic assumption.We partition the modeling error (residual) into five components, three of which represent the repeatable source-location-specific, site-specific, and path-specific deviations from the population mean. A variance estimation procedure of these error components is developed for use with a set of recordings from earthquakes not heavily clustered in space.With most source locations and propagation paths sampled only once, we opt to exploit the spatial correlation of residuals to estimate the variances associated with the path-specific and the source-location-specific deviations. The estimation procedure is applied to ground-motion amplitudes from 64 shallow earthquakes in Taiwan recorded at 285 sites with at least 10 recordings per site. The estimated variance components are used to quantify the reduction in aleatory variability that can be used in hazard analysis for a single site and for a single path. For peak ground acceleration and spectral accelerations at periods of 0.1, 0.3, 0.5, 1.0, and 3.0 s, we find that the singlesite standard deviations are 9%-14% smaller than the total standard deviation, whereas the single-path standard deviations are 39%-47% smaller.
Nakano, Tadashi; Hayashi, Takeshi; Nakagawa, Toru; Honda, Toru; Owada, Satoshi; Endo, Hitoshi; Tatemichi, Masayuki
2018-04-05
This retrospective cohort study primarily aimed to investigate the possible association of computer use with visual field abnormalities (VFA) among Japanese workers. The study included 2,377 workers (mean age 45.7 [standard deviation, 8.3] years; 2,229 men and 148 women) who initially exhibited no VFA during frequency doubling technology perimetry (FDT) testing. Subjects then underwent annual follow-up FDT testing for 7 years, and VFA were determined using a FDT-test protocol (FDT-VFA). Subjects with FDT-VFA were examined by ophthalmologists. Baseline data about the mean duration of computer use during a 5-year period and refractive errors were obtained via self-administered questionnaire and evaluations for refractive errors (use of eyeglasses or contact lenses), respectively. A Cox proportional hazard analysis demonstrated that heavy computer users (>8 hr/day) had a significantly increased risk of FDT-VFA (hazard ratio [HR] 2.85; 95% confidence interval [CI], 1.26-6.48) relative to light users (<4 hr/day), and this association was strengthened among subjects with refractive errors (HR 4.48; 95% CI, 1.87-10.74). The computer usage history also significantly correlated with FDT-VFA among subject with refractive errors (P < 0.05), and 73.1% of subjects with FDT-VFA and refractive errors were diagnosed with glaucoma or ocular hypertension. The incidence of FDT-VFA appears to be increased among Japanese workers who are heavy computer users, particularly if they have refractive errors. Further investigations of epidemiology and causality are warranted.
Nonparametric Estimation of Standard Errors in Covariance Analysis Using the Infinitesimal Jackknife
ERIC Educational Resources Information Center
Jennrich, Robert I.
2008-01-01
The infinitesimal jackknife provides a simple general method for estimating standard errors in covariance structure analysis. Beyond its simplicity and generality what makes the infinitesimal jackknife method attractive is that essentially no assumptions are required to produce consistent standard error estimates, not even the requirement that the…
Factor Rotation and Standard Errors in Exploratory Factor Analysis
ERIC Educational Resources Information Center
Zhang, Guangjian; Preacher, Kristopher J.
2015-01-01
In this article, we report a surprising phenomenon: Oblique CF-varimax and oblique CF-quartimax rotation produced similar point estimates for rotated factor loadings and factor correlations but different standard error estimates in an empirical example. Influences of factor rotation on asymptotic standard errors are investigated using a numerical…
Fuller-Rowell, Thomas E; Curtis, David S; Doan, Stacey N; Coe, Christopher L
2015-01-01
The current study examined the prospective effects of educational attainment on proinflammatory physiology among African American and white adults. Participants were 1192 African Americans and 1487 whites who participated in Year 5 (mean [standard deviation] age = 30 [3.5] years), and Year 20 (mean [standard deviation] age = 45 [3.5]) of an ongoing longitudinal study. Initial analyses focused on age-related changes in fibrinogen across racial groups, and parallel analyses for C-reactive protein and interleukin-6 assessed at Year 20. Models then estimated the effects of educational attainment on changes in inflammation for African Americans and whites before and after controlling for four blocks of covariates: a) early life adversity, b) health and health behaviors at baseline, c) employment and financial measures at baseline and follow-up, and d) psychosocial stresses in adulthood. African Americans had larger increases in fibrinogen over time than whites (B = 24.93, standard error = 3.24, p < .001), and 37% of this difference was explained after including all covariates. Effects of educational attainment were weaker for African Americans than for whites (B = 10.11, standard error = 3.29, p = .002), and only 8% of this difference was explained by covariates. Analyses for C-reactive protein and interleukin-6 yielded consistent results. The effects of educational attainment on inflammation levels were stronger for white than for African American participants. Why African Americans do not show the same health benefits with educational attainment is an important question for health disparities research.
Horvath, K C; Miller-Cushon, E K
2018-05-09
Weaned dairy calves are commonly exposed to changing physical and social environments, and ability to adapt to novel management is likely to have performance and welfare implications. We characterized how behavioral responses of weaned heifer calves develop over time after introduction to a social group. Previously individually reared Holstein heifer calves (n = 15; 60 ± 5 d of age; mean ± standard deviation) were introduced in weekly cohorts (5 ± 3 new calves/wk) to an existing group on pasture (8 ± 2 calves/group). We measured activity and behavior on the day of initial introduction and after 1 wk, when calves were exposed to regrouping (addition of younger calves and removal of older calves from the pen). Upon introduction, calves had 2 to 3 times more visits to each region of the pasture; they also spent more time at the back of the pasture, closest to where they were introduced and furthest from the feeding area (25.13 vs. 9.63% of observation period, standard error = 5.04), compared with behavior after 1 wk. Calves also spent less time feeding (5.0 vs. 9.6% of observation period, standard error = 0.82) and self-grooming (0.52 vs. 1.31% of observation period; standard error = 0.20) and more time within 1 to 3 body lengths of another calf (16.3 vs. 11.9% of observation period, standard error = 2.3) when initially grouped. We also explored whether behavioral responses to initial postweaning grouping might be associated with individual differences in behavioral flexibility. To evaluate this, we assessed cognition of individually housed calves (n = 18) at 5 wk of age using a spatial discrimination task conducted in a T-maze to measure initial learning (ability to learn the location of a milk reward) and reversal learning (ability to relearn location of the milk reward when it was switched to opposite arm of the maze). Calves were categorized by reversal learning success (passed, n = 6, or failed, n = 8). Calves that passed the reversal learning stage of the cognitive task spent less time at the back of the pen (9.3 vs. 27.4% of observation period, standard error = 5.5) and tended to have lower latency to feed (121.8 vs. 306.2 min; standard error = 96.4) on the day of introduction compared with calves that failed reversal learning. Overall, we found that initial introduction to social grouping had a marked influence on behavior of weaned calves that decreased over time. Further, these results suggest that individual variability in cognitive ability may be predictive of behavioral responses and ability to adapt to a novel environment. Copyright © 2018 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
NASA Technical Reports Server (NTRS)
Hollyday, E. F. (Principal Investigator)
1975-01-01
The author has identified the following significant results. Streamflow characteristics in the Delmarva Peninsula derived from the records of daily discharge of 20 gaged basins are representative of the full range in flow conditions and include all of those commonly used for design or planning purposes. They include annual flood peaks with recurrence intervals of 2, 5, 10, 25, and 50 years, mean annual discharge, standard deviation of the mean annual discharge, mean monthly discharges, standard deviation of the mean monthly discharges, low-flow characteristics, flood volume characteristics, and the discharge equalled or exceeded 50 percent of the time. Streamflow and basin characteristics were related by a technique of multiple regression using a digital computer. A control group of equations was computed using basin characteristics derived from maps and climatological records. An experimental group of equations was computed using basin characteristics derived from LANDSAT imagery as well as from maps and climatological records. Based on a reduction in standard error of estimate equal to or greater than 10 percent, the equations for 12 stream flow characteristics were substantially improved by adding to the analyses basin characteristics derived from LANDSAT imagery.
Postmortem validation of breast density using dual-energy mammography
Molloi, Sabee; Ducote, Justin L.; Ding, Huanjun; Feig, Stephen A.
2014-01-01
Purpose: Mammographic density has been shown to be an indicator of breast cancer risk and also reduces the sensitivity of screening mammography. Currently, there is no accepted standard for measuring breast density. Dual energy mammography has been proposed as a technique for accurate measurement of breast density. The purpose of this study is to validate its accuracy in postmortem breasts and compare it with other existing techniques. Methods: Forty postmortem breasts were imaged using a dual energy mammography system. Glandular and adipose equivalent phantoms of uniform thickness were used to calibrate a dual energy basis decomposition algorithm. Dual energy decomposition was applied after scatter correction to calculate breast density. Breast density was also estimated using radiologist reader assessment, standard histogram thresholding and a fuzzy C-mean algorithm. Chemical analysis was used as the reference standard to assess the accuracy of different techniques to measure breast composition. Results: Breast density measurements using radiologist reader assessment, standard histogram thresholding, fuzzy C-mean algorithm, and dual energy were in good agreement with the measured fibroglandular volume fraction using chemical analysis. The standard error estimates using radiologist reader assessment, standard histogram thresholding, fuzzy C-mean, and dual energy were 9.9%, 8.6%, 7.2%, and 4.7%, respectively. Conclusions: The results indicate that dual energy mammography can be used to accurately measure breast density. The variability in breast density estimation using dual energy mammography was lower than reader assessment rankings, standard histogram thresholding, and fuzzy C-mean algorithm. Improved quantification of breast density is expected to further enhance its utility as a risk factor for breast cancer. PMID:25086548
Postmortem validation of breast density using dual-energy mammography
DOE Office of Scientific and Technical Information (OSTI.GOV)
Molloi, Sabee, E-mail: symolloi@uci.edu; Ducote, Justin L.; Ding, Huanjun
2014-08-15
Purpose: Mammographic density has been shown to be an indicator of breast cancer risk and also reduces the sensitivity of screening mammography. Currently, there is no accepted standard for measuring breast density. Dual energy mammography has been proposed as a technique for accurate measurement of breast density. The purpose of this study is to validate its accuracy in postmortem breasts and compare it with other existing techniques. Methods: Forty postmortem breasts were imaged using a dual energy mammography system. Glandular and adipose equivalent phantoms of uniform thickness were used to calibrate a dual energy basis decomposition algorithm. Dual energy decompositionmore » was applied after scatter correction to calculate breast density. Breast density was also estimated using radiologist reader assessment, standard histogram thresholding and a fuzzy C-mean algorithm. Chemical analysis was used as the reference standard to assess the accuracy of different techniques to measure breast composition. Results: Breast density measurements using radiologist reader assessment, standard histogram thresholding, fuzzy C-mean algorithm, and dual energy were in good agreement with the measured fibroglandular volume fraction using chemical analysis. The standard error estimates using radiologist reader assessment, standard histogram thresholding, fuzzy C-mean, and dual energy were 9.9%, 8.6%, 7.2%, and 4.7%, respectively. Conclusions: The results indicate that dual energy mammography can be used to accurately measure breast density. The variability in breast density estimation using dual energy mammography was lower than reader assessment rankings, standard histogram thresholding, and fuzzy C-mean algorithm. Improved quantification of breast density is expected to further enhance its utility as a risk factor for breast cancer.« less
Theoretical and Experimental Study of Light Shift in a CPT-Based RB Vapor Cell Frequency Standard
2001-01-01
Questions and Answers ROBERT LUTWAK (Datum): When you servo the microwave power to eliminate the light shift, what do you servo to? To what are you...leveling that signal? MIA0 ZHU: Do you mean what I servo to o r where did I do the servo? LUTWAK : What is the error signal that determines the TR
Throughfall in a Puerto Rican lower montane rain forest: A comparison of sampling strategies
F. Holwerda; F.N. Scatena; L.A. Bruijnzeel
2006-01-01
During a one-year period, the variability of throughfall and the standard errors of the means associated with different gauge arrangements were studied in a lower montane rain forest in Puerto Rico. The following gauge arrangements were used: (1) 60 fixed gauges, (2) 30 fixed gauges, and (3) 30 roving gauges. Stemflow was measured on 22 trees of four different species...
Spatial Ensemble Postprocessing of Precipitation Forecasts Using High Resolution Analyses
NASA Astrophysics Data System (ADS)
Lang, Moritz N.; Schicker, Irene; Kann, Alexander; Wang, Yong
2017-04-01
Ensemble prediction systems are designed to account for errors or uncertainties in the initial and boundary conditions, imperfect parameterizations, etc. However, due to sampling errors and underestimation of the model errors, these ensemble forecasts tend to be underdispersive, and to lack both reliability and sharpness. To overcome such limitations, statistical postprocessing methods are commonly applied to these forecasts. In this study, a full-distributional spatial post-processing method is applied to short-range precipitation forecasts over Austria using Standardized Anomaly Model Output Statistics (SAMOS). Following Stauffer et al. (2016), observation and forecast fields are transformed into standardized anomalies by subtracting a site-specific climatological mean and dividing by the climatological standard deviation. Due to the need of fitting only a single regression model for the whole domain, the SAMOS framework provides a computationally inexpensive method to create operationally calibrated probabilistic forecasts for any arbitrary location or for all grid points in the domain simultaneously. Taking advantage of the INCA system (Integrated Nowcasting through Comprehensive Analysis), high resolution analyses are used for the computation of the observed climatology and for model training. The INCA system operationally combines station measurements and remote sensing data into real-time objective analysis fields at 1 km-horizontal resolution and 1 h-temporal resolution. The precipitation forecast used in this study is obtained from a limited area model ensemble prediction system also operated by ZAMG. The so called ALADIN-LAEF provides, by applying a multi-physics approach, a 17-member forecast at a horizontal resolution of 10.9 km and a temporal resolution of 1 hour. The performed SAMOS approach statistically combines the in-house developed high resolution analysis and ensemble prediction system. The station-based validation of 6 hour precipitation sums shows a mean improvement of more than 40% in CRPS when compared to bilinearly interpolated uncalibrated ensemble forecasts. The validation on randomly selected grid points, representing the true height distribution over Austria, still indicates a mean improvement of 35%. The applied statistical model is currently set up for 6-hourly and daily accumulation periods, but will be extended to a temporal resolution of 1-3 hours within a new probabilistic nowcasting system operated by ZAMG.
High-frequency ultrasound measurements of the normal ciliary body and iris.
Garcia, Julian P S; Spielberg, Leigh; Finger, Paul T
2011-01-01
To determine the normal ultrasonographic thickness of the iris and ciliary body. This prospective 35-MHz ultrasonographic study included 80 normal eyes of 40 healthy volunteers. The images were obtained at the 12-, 3-, 6-, and 9-o'clock radial meridians, measured at three locations along the radial length of the iris and at the thickest section of the ciliary body. Mixed model was used to estimate eye site-adjusted means and standard errors and to test the statistical difference of adjusted results. Parameters included mean thickness, standard deviation, and range. Mean thicknesses at the iris root, midway along the radial length of the iris, and at the juxtapupillary margin were 0.4 ± 0.1, 0.5 ± 0.1, and 0.6 ± 0.1 mm, respectively. Those of the ciliary body, ciliary processes, and ciliary body + ciliary processes were 0.7 ± 0.1, 0.6 ± 0.1, and 1.3 ± 0.2 mm, respectively. This study provides standard, normative thickness data for the iris and ciliary body in healthy adults using ultrasonographic imaging. Copyright 2011, SLACK Incorporated.
Boland, Julie E; Queen, Robin
2016-01-01
The increasing prevalence of social media means that we often encounter written language characterized by both stylistic variation and outright errors. How does the personality of the reader modulate reactions to non-standard text? Experimental participants read 'email responses' to an ad for a housemate that either contained no errors or had been altered to include either typos (e.g., teh) or homophonous grammar errors (grammos, e.g., to/too, it's/its). Participants completed a 10-item evaluation scale for each message, which measured their impressions of the writer. In addition participants completed a Big Five personality assessment and answered demographic and language attitude questions. Both typos and grammos had a negative impact on the evaluation scale. This negative impact was not modulated by age, education, electronic communication frequency, or pleasure reading time. In contrast, personality traits did modulate assessments, and did so in distinct ways for grammos and typos.
2016-01-01
The increasing prevalence of social media means that we often encounter written language characterized by both stylistic variation and outright errors. How does the personality of the reader modulate reactions to non-standard text? Experimental participants read ‘email responses’ to an ad for a housemate that either contained no errors or had been altered to include either typos (e.g., teh) or homophonous grammar errors (grammos, e.g., to/too, it’s/its). Participants completed a 10-item evaluation scale for each message, which measured their impressions of the writer. In addition participants completed a Big Five personality assessment and answered demographic and language attitude questions. Both typos and grammos had a negative impact on the evaluation scale. This negative impact was not modulated by age, education, electronic communication frequency, or pleasure reading time. In contrast, personality traits did modulate assessments, and did so in distinct ways for grammos and typos. PMID:26959823
Optimized universal color palette design for error diffusion
NASA Astrophysics Data System (ADS)
Kolpatzik, Bernd W.; Bouman, Charles A.
1995-04-01
Currently, many low-cost computers can only simultaneously display a palette of 256 color. However, this palette is usually selectable from a very large gamut of available colors. For many applications, this limited palette size imposes a significant constraint on the achievable image quality. We propose a method for designing an optimized universal color palette for use with halftoning methods such as error diffusion. The advantage of a universal color palette is that it is fixed and therefore allows multiple images to be displayed simultaneously. To design the palette, we employ a new vector quantization method known as sequential scalar quantization (SSQ) to allocate the colors in a visually uniform color space. The SSQ method achieves near-optimal allocation, but may be efficiently implemented using a series of lookup tables. When used with error diffusion, SSQ adds little computational overhead and may be used to minimize the visual error in an opponent color coordinate system. We compare the performance of the optimized algorithm to standard error diffusion by evaluating a visually weighted mean-squared-error measure. Our metric is based on the color difference in CIE L*AL*B*, but also accounts for the lowpass characteristic of human contrast sensitivity.
Computer-aided field editing in DHS: the Turkey experiment.
1995-01-01
A study comparing field editing using a Notebook computer, computer-aided field editing (CAFE), with that done manually in the standard manner, during the 1993 Demographic and Health Survey (DHS) in Turkey, demonstrated that there was less missing data and a lower mean number of errors for teams using CAFE. 6 of 13 teams used CAFE in the Turkey experiment; the computers were equipped with Integrated System for Survey Analysis (ISSA) software for editing the DHS questionnaires. The CAFE teams completed 2466 out of 8619 household questionnaires and 1886 out of 6649 individual questionnaires. The CAFE team editor entered data into the computer and marked any detected errors on the questionnaire; the errors were then corrected by the editor, in the field, based on other responses in the questionnaire, or on corrections made by the interviewer to which the questionnaire was returned. Errors in questionnaires edited manually are not identified until they are sent to the survey office for data processing, when it is too late to ask for clarification from respondents. There was one area where the error rate was higher for CAFE teams; the CAFE editors paid less attention to errors presented as warnings only.
Asquith, William H.
2014-01-01
A database containing more than 16,300 discharge values and ancillary hydraulic attributes was assembled from summaries of discharge measurement records for 391 USGS streamflow-gauging stations (streamgauges) in Texas. Each discharge is between the 40th- and 60th-percentile daily mean streamflow as determined by period-of-record, streamgauge-specific, flow-duration curves. Each discharge therefore is assumed to represent a discharge measurement made for near-median streamflow conditions, and such conditions are conceptualized as representative of midrange to baseflow conditions in much of the state. The hydraulic attributes of each discharge measurement included concomitant cross-section flow area, water-surface top width, and reported mean velocity. Two regression equations are presented: (1) an expression for discharge and (2) an expression for mean velocity, both as functions of selected hydraulic attributes and watershed characteristics. Specifically, the discharge equation uses cross-sectional area, water-surface top width, contributing drainage area of the watershed, and mean annual precipitation of the location; the equation has an adjusted R-squared of approximately 0.95 and residual standard error of approximately 0.23 base-10 logarithm (cubic meters per second). The mean velocity equation uses discharge, water-surface top width, contributing drainage area, and mean annual precipitation; the equation has an adjusted R-squared of approximately 0.50 and residual standard error of approximately 0.087 third root (meters per second). Residual plots from both equations indicate that reliable estimates of discharge and mean velocity at ungauged stream sites are possible. Further, the relation between contributing drainage area and main-channel slope (a measure of whole-watershed slope) is depicted to aid analyst judgment of equation applicability for ungauged sites. Example applications and computations are provided and discussed within a real-world, discharge-measurement scenario, and an illustration of the development of a preliminary stage-discharge relation using the discharge equation is given.
Human Error Analysis in a Permit to Work System: A Case Study in a Chemical Plant
Jahangiri, Mehdi; Hoboubi, Naser; Rostamabadi, Akbar; Keshavarzi, Sareh; Hosseini, Ali Akbar
2015-01-01
Background A permit to work (PTW) is a formal written system to control certain types of work which are identified as potentially hazardous. However, human error in PTW processes can lead to an accident. Methods This cross-sectional, descriptive study was conducted to estimate the probability of human errors in PTW processes in a chemical plant in Iran. In the first stage, through interviewing the personnel and studying the procedure in the plant, the PTW process was analyzed using the hierarchical task analysis technique. In doing so, PTW was considered as a goal and detailed tasks to achieve the goal were analyzed. In the next step, the standardized plant analysis risk-human (SPAR-H) reliability analysis method was applied for estimation of human error probability. Results The mean probability of human error in the PTW system was estimated to be 0.11. The highest probability of human error in the PTW process was related to flammable gas testing (50.7%). Conclusion The SPAR-H method applied in this study could analyze and quantify the potential human errors and extract the required measures for reducing the error probabilities in PTW system. Some suggestions to reduce the likelihood of errors, especially in the field of modifying the performance shaping factors and dependencies among tasks are provided. PMID:27014485
Malinowski, Kathleen; McAvoy, Thomas J.; George, Rohini; Dieterich, Sonja; D’Souza, Warren D.
2013-01-01
Purpose: To determine how best to time respiratory surrogate-based tumor motion model updates by comparing a novel technique based on external measurements alone to three direct measurement methods. Methods: Concurrently measured tumor and respiratory surrogate positions from 166 treatment fractions for lung or pancreas lesions were analyzed. Partial-least-squares regression models of tumor position from marker motion were created from the first six measurements in each dataset. Successive tumor localizations were obtained at a rate of once per minute on average. Model updates were timed according to four methods: never, respiratory surrogate-based (when metrics based on respiratory surrogate measurements exceeded confidence limits), error-based (when localization error ≥3 mm), and always (approximately once per minute). Results: Radial tumor displacement prediction errors (mean ± standard deviation) for the four schema described above were 2.4 ± 1.2, 1.9 ± 0.9, 1.9 ± 0.8, and 1.7 ± 0.8 mm, respectively. The never-update error was significantly larger than errors of the other methods. Mean update counts over 20 min were 0, 4, 9, and 24, respectively. Conclusions: The same improvement in tumor localization accuracy could be achieved through any of the three update methods, but significantly fewer updates were required when the respiratory surrogate method was utilized. This study establishes the feasibility of timing image acquisitions for updating respiratory surrogate models without direct tumor localization. PMID:23822413
Measurement of reaeration coefficients for selected Florida streams
Hampson, P.S.; Coffin, J.E.
1989-01-01
A total of 29 separate reaeration coefficient determinations were performed on 27 subreaches of 12 selected Florida streams between October 1981 and May 1985. Measurements performed prior to June 1984 were made using the peak and area methods with ethylene and propane as the tracer gases. Later measurements utilized the steady-state method with propane as the only tracer gas. The reaeration coefficients ranged from 1.07 to 45.9 days with a mean estimated probable error of +/16.7%. Ten predictive equations (compiled from the literature) were also evaluated using the measured coefficients. The most representative equation was one of the energy dissipation type with a standard error of 60.3%. Seven of the 10 predictive additional equations were modified using the measured coefficients and nonlinear regression techniques. The most accurate of the developed equations was also of the energy dissipation form and had a standard error of 54.9%. For 5 of the 13 subreaches in which both ethylene and propane were used, the ethylene data resulted in substantially larger reaeration coefficient values which were rejected. In these reaches, ethylene concentrations were probably significantly affected by one or more electrophilic addition reactions known to occur in aqueous media. (Author 's abstract)
Research on Standard Errors of Equating Differences. Research Report. ETS RR-10-25
ERIC Educational Resources Information Center
Moses, Tim; Zhang, Wenmin
2010-01-01
In this paper, the "standard error of equating difference" (SEED) is described in terms of originally proposed kernel equating functions (von Davier, Holland, & Thayer, 2004) and extended to incorporate traditional linear and equipercentile functions. These derivations expand on prior developments of SEEDs and standard errors of equating and…
Rosman, Mohamad; Wong, Tien Y; Tay, Wan-Ting; Tong, Louis; Saw, Seang-Mei
2009-08-01
To describe the prevalence and the risk factors of undercorrected refractive error in an adult urban Malay population. This population-based, cross-sectional study was conducted in Singapore in 3280 Malay adults, aged 40 to 80 years. All individuals were examined at a centralized clinic and underwent standardized interviews and assessment of refractive errors and presenting and best corrected visual acuities. Distance presenting visual acuity was monocularly measured by using a logarithm of the minimum angle of resolution (logMAR) number chart at a distance of 4 m, with the participants wearing their "walk-in" optical corrections (spectacles or contact lenses), if any. Refraction was determined by subjective refraction by trained, certified study optometrists. Best corrected visual acuity was monocularly assessed and recorded in logMAR scores using the same test protocol as was used for presenting visual acuity. Undercorrected refractive error was defined as an improvement of at least 0.2 logMAR (2 lines equivalent) in the best corrected visual acuity compared with the presenting visual acuity in the better eye. The mean age of the subjects included in our study was 58 +/- 11 years, and 52% of the subjects were women. The prevalence rate of undercorrected refractive error among Singaporean Malay adults in our study (n = 3115) was 20.4% (age-standardized prevalence rate, 18.3%). More of the women had undercorrected refractive error than the men (21.8% vs. 18.8%, P = 0.04). Undercorrected refractive error was also more common in subjects older than 50 years than in subjects aged 40 to 49 years (22.6% vs. 14.3%, P < 0.001). Non-spectacle wearers were more likely to have undercorrected refractive errors than were spectacle wearers (24.4% vs. 14.4%, P < 0.001). Persons with primary school education or less were 1.89 times (P = 0.03) more likely to have undercorrected refractive errors than those with post-secondary school education or higher. In contrast, persons with a history of eye disease were 0.74 times (P = 0.003) less likely to have undercorrected refractive errors. The proportion of undercorrected refractive error among the Singaporean Malay adults with refractive errors was higher than that of the Singaporean Chinese adults with refractive errors. Undercorrected refractive error is a significant cause of correctable visual impairment among Singaporean Malay adults, affecting one in five persons.
NASA Astrophysics Data System (ADS)
Zhang, Yu-ying; Wang, Meng-jie; Chang, Chun-ran; Xu, Kang-zhen; Ma, Hai-xia; Zhao, Feng-qi
2018-05-01
The standard thermite reaction enthalpies (ΔrHmθ) for seven metal oxides were theoretically analyzed using density functional theory (DFT) under five different functional levels, and the results were compared with experimental values. Through the comparison of the linear fitting constants, mean error and root mean square error, the Perdew-Wang functional within the framework of local density approximation (LDA-PWC) and Perdew-Burke-Ernzerhof exchange-correlation functional within the framework of generalized gradient approximation (GGA-PBE) were selected to further calculate the thermite reaction enthalpies for metal composite oxides (MCOs). According to the Kirchhoff formula, the standard molar reaction enthalpies for these MCOs were obtained and their standard molar enthalpies of formation (ΔfHmθ) were finally calculated. The results indicated that GGA-PBE is the most suitable one out of the total five methods to calculate these oxides. Tungstate crystals present the maximum deviation of the enthalpies of thermite reactions for MCOs and these of their physical metal oxide mixtures, but ferrite crystals are the minimum. The correlation coefficients are all above 0.95, meaning linear fitting results are very precise. And the molar enthalpies of formation for NiMoO4, CuMoO4, PbZrO3 (Pm/3m), PbZrO3 (PBA2), PbZrO3 (PBam), MgZrO3, CdZrO3, MnZrO3, CuWO4 and Fe2WO6 were first obtained as -1078.75, -1058.45, -1343.87, -1266.54, -1342.29, -1333.03, -1210.43, -1388.05, -1131.07 and - 1860.11 kJ·mol-1, respectively.
Leardini, Alberto; Lullini, Giada; Giannini, Sandro; Berti, Lisa; Ortolani, Maurizio; Caravaggi, Paolo
2014-09-11
Several rehabilitation systems based on inertial measurement units (IMU) are entering the market for the control of exercises and to measure performance progression, particularly for recovery after lower limb orthopaedic treatments. IMU are easy to wear also by the patient alone, but the extent to which IMU's malpositioning in routine use can affect the accuracy of the measurements is not known. A new such system (Riablo™, CoRehab, Trento, Italy), using audio-visual biofeedback based on videogames, was assessed against state-of-the-art gait analysis as the gold standard. The sensitivity of the system to errors in the IMU's position and orientation was measured in 5 healthy subjects performing two hip joint motion exercises. Root mean square deviation was used to assess differences in the system's kinematic output between the erroneous and correct IMU position and orientation.In order to estimate the system's accuracy, thorax and knee joint motion of 17 healthy subjects were tracked during the execution of standard rehabilitation tasks and compared with the corresponding measurements obtained with an established gait protocol using stereophotogrammetry. A maximum mean error of 3.1 ± 1.8 deg and 1.9 ± 0.8 deg from the angle trajectory with correct IMU position was recorded respectively in the medio-lateral malposition and frontal-plane misalignment tests. Across the standard rehabilitation tasks, the mean distance between the IMU and gait analysis systems was on average smaller than 5°. These findings showed that the tested IMU based system has the necessary accuracy to be safely utilized in rehabilitation programs after orthopaedic treatments of the lower limb.
NASA Astrophysics Data System (ADS)
Saarinen, N.; Vastaranta, M.; Näsi, R.; Rosnell, T.; Hakala, T.; Honkavaara, E.; Wulder, M. A.; Luoma, V.; Tommaselli, A. M. G.; Imai, N. N.; Ribeiro, E. A. W.; Guimarães, R. B.; Holopainen, M.; Hyyppä, J.
2017-10-01
Biodiversity is commonly referred to as species diversity but in forest ecosystems variability in structural and functional characteristics can also be treated as measures of biodiversity. Small unmanned aerial vehicles (UAVs) provide a means for characterizing forest ecosystem with high spatial resolution, permitting measuring physical characteristics of a forest ecosystem from a viewpoint of biodiversity. The objective of this study is to examine the applicability of photogrammetric point clouds and hyperspectral imaging acquired with a small UAV helicopter in mapping biodiversity indicators, such as structural complexity as well as the amount of deciduous and dead trees at plot level in southern boreal forests. Standard deviation of tree heights within a sample plot, used as a proxy for structural complexity, was the most accurately derived biodiversity indicator resulting in a mean error of 0.5 m, with a standard deviation of 0.9 m. The volume predictions for deciduous and dead trees were underestimated by 32.4 m3/ha and 1.7 m3/ha, respectively, with standard deviation of 50.2 m3/ha for deciduous and 3.2 m3/ha for dead trees. The spectral features describing brightness (i.e. higher reflectance values) were prevailing in feature selection but several wavelengths were represented. Thus, it can be concluded that structural complexity can be predicted reliably but at the same time can be expected to be underestimated with photogrammetric point clouds obtained with a small UAV. Additionally, plot-level volume of dead trees can be predicted with small mean error whereas identifying deciduous species was more challenging at plot level.
Schillaci, Michael A; Schillaci, Mario E
2009-02-01
The use of small sample sizes in human and primate evolutionary research is commonplace. Estimating how well small samples represent the underlying population, however, is not commonplace. Because the accuracy of determinations of taxonomy, phylogeny, and evolutionary process are dependant upon how well the study sample represents the population of interest, characterizing the uncertainty, or potential error, associated with analyses of small sample sizes is essential. We present a method for estimating the probability that the sample mean is within a desired fraction of the standard deviation of the true mean using small (n<10) or very small (n < or = 5) sample sizes. This method can be used by researchers to determine post hoc the probability that their sample is a meaningful approximation of the population parameter. We tested the method using a large craniometric data set commonly used by researchers in the field. Given our results, we suggest that sample estimates of the population mean can be reasonable and meaningful even when based on small, and perhaps even very small, sample sizes.
Natural radioactivity of riverbank sediments of the Maritza and Tundja Rivers in Turkey.
Aytas, Sule; Yusan, Sabriye; Aslani, Mahmoud A A; Karali, Turgay; Turkozu, D Alkim; Gok, Cem; Erenturk, Sema; Gokce, Melis; Oguz, K Firat
2012-01-01
This article represents the first results of the natural radionuclides in the Maritza and Tundja river sediments, in the vicinity of Edirne city, Turkey. The aim of the article is to describe the natural radioactivity concentrations as a baseline for further studies and to obtain the distribution patterns of radioactivity in trans-boundary river sediments of the Maritza and Tundja, which are shared by Turkey, Bulgaria and Greece. Sediment samples were collected during the period of August 2007-April 2010. The riverbank sediment samples were analyzed firstly for their pH, organic matter content and soil texture. The gross alpha/beta and (238)U, (232)Th and (40)K activity concentrations were then investigated in the collected sediment samples. The mean and standard error of mean values of gross alpha and gross beta activity concentrations were found as 91 ± 11, 410 ± 69 Bq/kg and 86 ± 11, 583 ± 109 Bq/kg for the Maritza and Tundja river sediments, respectively. Moreover, the mean and standard error of mean values of (238)U, (232)Th and (40)K activity concentrations were determined as 219 ± 68, 128 ± 55, 298 ± 13 and as 186 ± 98, 121 ± 68, 222 ± 30 Bq/kg for the Maritza and Tundja River, respectively. Absorbed dose rates (D) and annual effective dose equivalent s have been calculated for each sampling point. The average value of adsorbed dose rate and effective dose equivalent were found as 191 and 169 nGy/h; 2 and 2 mSv/y for the Maritza and the Tundja river sediments, respectively.
The Calibration of Gloss Reference Standards
NASA Astrophysics Data System (ADS)
Budde, W.
1980-04-01
In present international and national standards for the measurement of specular gloss the primary and secondary reference standards are defined for monochromatic radiation. However the glossmeter specified is using polychromatic radiation (CIE Standard Illuminant C) and the CIE Standard Photometric Observer. This produces errors in practical gloss measurements of up to 0.5%. Although this may be considered small as compared to the accuracy of most practical gloss measurements, such an error should not be tolerated in the calibration of secondary standards. Corrections for such errors are presented and various alternatives for amendments of the existing documentary standards are discussed.
Measurements of stem diameter: implications for individual- and stand-level errors.
Paul, Keryn I; Larmour, John S; Roxburgh, Stephen H; England, Jacqueline R; Davies, Micah J; Luck, Hamish D
2017-08-01
Stem diameter is one of the most common measurements made to assess the growth of woody vegetation, and the commercial and environmental benefits that it provides (e.g. wood or biomass products, carbon sequestration, landscape remediation). Yet inconsistency in its measurement is a continuing source of error in estimates of stand-scale measures such as basal area, biomass, and volume. Here we assessed errors in stem diameter measurement through repeated measurements of individual trees and shrubs of varying size and form (i.e. single- and multi-stemmed) across a range of contrasting stands, from complex mixed-species plantings to commercial single-species plantations. We compared a standard diameter tape with a Stepped Diameter Gauge (SDG) for time efficiency and measurement error. Measurement errors in diameter were slightly (but significantly) influenced by size and form of the tree or shrub, and stem height at which the measurement was made. Compared to standard tape measurement, the mean systematic error with SDG measurement was only -0.17 cm, but varied between -0.10 and -0.52 cm. Similarly, random error was relatively large, with standard deviations (and percentage coefficients of variation) averaging only 0.36 cm (and 3.8%), but varying between 0.14 and 0.61 cm (and 1.9 and 7.1%). However, at the stand scale, sampling errors (i.e. how well individual trees or shrubs selected for measurement of diameter represented the true stand population in terms of the average and distribution of diameter) generally had at least a tenfold greater influence on random errors in basal area estimates than errors in diameter measurements. This supports the use of diameter measurement tools that have high efficiency, such as the SDG. Use of the SDG almost halved the time required for measurements compared to the diameter tape. Based on these findings, recommendations include the following: (i) use of a tape to maximise accuracy when developing allometric models, or when monitoring relatively small changes in permanent sample plots (e.g. National Forest Inventories), noting that care is required in irregular-shaped, large-single-stemmed individuals, and (ii) use of a SDG to maximise efficiency when using inventory methods to assess basal area, and hence biomass or wood volume, at the stand scale (i.e. in studies of impacts of management or site quality) where there are budgetary constraints, noting the importance of sufficient sample sizes to ensure that the population sampled represents the true population.
Anatomy of emotion: a 3D study of facial mimicry.
Ferrario, V F; Sforza, C
2007-01-01
Alterations in facial motion severely impair the quality of life and social interaction of patients, and an objective grading of facial function is necessary. A method for the non-invasive detection of 3D facial movements was developed. Sequences of six standardized facial movements (maximum smile; free smile; surprise with closed mouth; surprise with open mouth; right side eye closure; left side eye closure) were recorded in 20 healthy young adults (10 men, 10 women) using an optoelectronic motion analyzer. For each subject, 21 cutaneous landmarks were identified by 2-mm reflective markers, and their 3D movements during each facial animation were computed. Three repetitions of each expression were recorded (within-session error), and four separate sessions were used (between-session error). To assess the within-session error, the technical error of the measurement (random error, TEM) was computed separately for each sex, movement and landmark. To assess the between-session repeatability, the standard deviation among the mean displacements of each landmark (four independent sessions) was computed for each movement. TEM for the single landmarks ranged between 0.3 and 9.42 mm (intrasession error). The sex- and movement-related differences were statistically significant (two-way analysis of variance, p=0.003 for sex comparison, p=0.009 for the six movements, p<0.001 for the sex x movement interaction). Among four different (independent) sessions, the left eye closure had the worst repeatability, the right eye closure had the best one; the differences among various movements were statistically significant (one-way analysis of variance, p=0.041). In conclusion, the current protocol demonstrated a sufficient repeatability for a future clinical application. Great care should be taken to assure a consistent marker positioning in all the subjects.
Calibration of Contactless Pulse Oximetry
Bartula, Marek; Bresch, Erik; Rocque, Mukul; Meftah, Mohammed; Kirenko, Ihor
2017-01-01
BACKGROUND: Contactless, camera-based photoplethysmography (PPG) interrogates shallower skin layers than conventional contact probes, either transmissive or reflective. This raises questions on the calibratability of camera-based pulse oximetry. METHODS: We made video recordings of the foreheads of 41 healthy adults at 660 and 840 nm, and remote PPG signals were extracted. Subjects were in normoxic, hypoxic, and low temperature conditions. Ratio-of-ratios were compared to reference Spo2 from 4 contact probes. RESULTS: A calibration curve based on artifact-free data was determined for a population of 26 individuals. For an Spo2 range of approximately 83% to 100% and discarding short-term errors, a root mean square error of 1.15% was found with an upper 99% one-sided confidence limit of 1.65%. Under normoxic conditions, a decrease in ambient temperature from 23 to 7°C resulted in a calibration error of 0.1% (±1.3%, 99% confidence interval) based on measurements for 3 subjects. PPG signal strengths varied strongly among individuals from about 0.9 × 10−3 to 4.6 × 10−3 for the infrared wavelength. CONCLUSIONS: For healthy adults, the results present strong evidence that camera-based contactless pulse oximetry is fundamentally feasible because long-term (eg, 10 minutes) error stemming from variation among individuals expressed as A*rms is significantly lower (<1.65%) than that required by the International Organization for Standardization standard (<4%) with the notion that short-term errors should be added. A first illustration of such errors has been provided with A**rms = 2.54% for 40 individuals, including 6 with dark skin. Low signal strength and subject motion present critical challenges that will have to be addressed to make camera-based pulse oximetry practically feasible. PMID:27258081
Simplified Approach Charts Improve Data Retrieval Performance
Stewart, Michael; Laraway, Sean; Jordan, Kevin; Feary, Michael S.
2016-01-01
The effectiveness of different instrument approach charts to deliver minimum visibility and altitude information during airport equipment outages was investigated. Eighteen pilots flew simulated instrument approaches in three conditions: (a) normal operations using a standard approach chart (standard-normal), (b) equipment outage conditions using a standard approach chart (standard-outage), and (c) equipment outage conditions using a prototype decluttered approach chart (prototype-outage). Errors and retrieval times in identifying minimum altitudes and visibilities were measured. The standard-outage condition produced significantly more errors and longer retrieval times versus the standard-normal condition. The prototype-outage condition had significantly fewer errors and shorter retrieval times than did the standard-outage condition. The prototype-outage condition produced significantly fewer errors but similar retrieval times when compared with the standard-normal condition. Thus, changing the presentation of minima may reduce risk and increase safety in instrument approaches, specifically with airport equipment outages. PMID:28491009
Fractional Ornstein-Uhlenbeck for index prices of FTSE Bursa Malaysia KLCI
NASA Astrophysics Data System (ADS)
Chen, Kho Chia; Bahar, Arifah; Ting, Chee-Ming
2014-07-01
This paper studies the Ornstein-Uhlenbeck model that incorporates long memory stochastic volatility which is known as fractional Ornstein-Uhlenbeck model. The determination of the existence of long range dependence of the index prices of FTSE Bursa Malaysia KLCI is measured by the Hurst exponent. The empirical distribution of unobserved volatility is estimated using the particle filtering method. The performance between fractional Ornstein -Uhlenbeck and standard Ornstein -Uhlenbeck process had been compared. The mean square errors of the fractional Ornstein-Uhlenbeck model indicated that the model describes index prices better than the standard Ornstein-Uhlenbeck process.
Prediction of oxygen consumption in cardiac rehabilitation patients performing leg ergometry
NASA Astrophysics Data System (ADS)
Alvarez, John Gershwin
The purpose of this study was two-fold. First, to determine the validity of the ACSM leg ergometry equation in the prediction of steady-state oxygen consumption (VO2) in a heterogeneous population of cardiac patients. Second, to determine whether a more accurate prediction equation could be developed for use in the cardiac population. Thirty-one cardiac rehabilitation patients participated in the study of which 24 were men and 7 were women. Biometric variables (mean +/- sd) of the participants were as follows: age = 61.9 +/- 9.5 years; height = 172.6 +/- 1.6 cm; and body mass = 82.3 +/- 10.6 kg. Subjects exercised on a MonarchTM cycle ergometer at 0, 180, 360, 540 and 720 kgm ˙ min-1. The length of each stage was five minutes. Heart rate, ECG, and VO2 were continuously monitored. Blood pressure and heart rate were collected at the end of each stage. Steady state VO 2 was calculated for each stage using the average of the last two minutes. Correlation coefficients, standard error of estimate, coefficient of determination, total error, and mean bias were used to determine the accuracy of the ACSM equation (1995). The analysis found the ACSM equation to be a valid means of estimating VO2 in cardiac patients. Simple linear regression was used to develop a new equation. Regression analysis found workload to be a significant predictor of VO2. The following equation is the result: VO2 = (1.6 x kgm ˙ min-1) + 444 ml ˙ min-1. The r of the equation was .78 (p < .05) and the standard error of estimate was 211 ml ˙ min-1. Analysis of variance was used to determine significant differences between means for actual and predicted VO2 values for each equation. The analysis found the ACSM and new equation to significantly (p < .05) under predict VO2 during unloaded pedaling. Furthermore, the ACSM equation was found to significantly (p < .05) under predict VO 2 during the first loaded stage of exercise. When the accuracy of the ACSM and new equations were compared based on correlation coefficients, coefficients of determinations, SEEs, total error, and mean bias the new equation was found to have equal or better accuracy at all workloads. The final form of the new equation is: VO2 (ml ˙ min-1) = (kgm ˙ min-1 x 1.6 ml ˙ kgm-1) + (3.5 ml ˙ kg-1 ˙ min-1 x body mass in kg) + 156 ml ˙ min-1.
Yan, M; Lovelock, D; Hunt, M; Mechalakos, J; Hu, Y; Pham, H; Jackson, A
2013-12-01
To use Cone Beam CT scans obtained just prior to treatments of head and neck cancer patients to measure the setup error and cumulative dose uncertainty of the cochlea. Data from 10 head and neck patients with 10 planning CTs and 52 Cone Beam CTs taken at time of treatment were used in this study. Patients were treated with conventional fractionation using an IMRT dose painting technique, most with 33 fractions. Weekly radiographic imaging was used to correct the patient setup. The authors used rigid registration of the planning CT and Cone Beam CT scans to find the translational and rotational setup errors, and the spatial setup errors of the cochlea. The planning CT was rotated and translated such that the cochlea positions match those seen in the cone beam scans, cochlea doses were recalculated and fractional doses accumulated. Uncertainties in the positions and cumulative doses of the cochlea were calculated with and without setup adjustments from radiographic imaging. The mean setup error of the cochlea was 0.04 ± 0.33 or 0.06 ± 0.43 cm for RL, 0.09 ± 0.27 or 0.07 ± 0.48 cm for AP, and 0.00 ± 0.21 or -0.24 ± 0.45 cm for SI with and without radiographic imaging, respectively. Setup with radiographic imaging reduced the standard deviation of the setup error by roughly 1-2 mm. The uncertainty of the cochlea dose depends on the treatment plan and the relative positions of the cochlea and target volumes. Combining results for the left and right cochlea, the authors found the accumulated uncertainty of the cochlea dose per fraction was 4.82 (0.39-16.8) cGy, or 10.1 (0.8-32.4) cGy, with and without radiographic imaging, respectively; the percentage uncertainties relative to the planned doses were 4.32% (0.28%-9.06%) and 10.2% (0.7%-63.6%), respectively. Patient setup error introduces uncertainty in the position of the cochlea during radiation treatment. With the assistance of radiographic imaging during setup, the standard deviation of setup error reduced by 31%, 42%, and 54% in RL, AP, and SI direction, respectively, and consequently, the uncertainty of the mean dose to cochlea reduced more than 50%. The authors estimate that the effects of these uncertainties on the probability of hearing loss for an individual patient could be as large as 10%.
Yan, M.; Lovelock, D.; Hunt, M.; Mechalakos, J.; Hu, Y.; Pham, H.; Jackson, A.
2013-01-01
Purpose: To use Cone Beam CT scans obtained just prior to treatments of head and neck cancer patients to measure the setup error and cumulative dose uncertainty of the cochlea. Methods: Data from 10 head and neck patients with 10 planning CTs and 52 Cone Beam CTs taken at time of treatment were used in this study. Patients were treated with conventional fractionation using an IMRT dose painting technique, most with 33 fractions. Weekly radiographic imaging was used to correct the patient setup. The authors used rigid registration of the planning CT and Cone Beam CT scans to find the translational and rotational setup errors, and the spatial setup errors of the cochlea. The planning CT was rotated and translated such that the cochlea positions match those seen in the cone beam scans, cochlea doses were recalculated and fractional doses accumulated. Uncertainties in the positions and cumulative doses of the cochlea were calculated with and without setup adjustments from radiographic imaging. Results: The mean setup error of the cochlea was 0.04 ± 0.33 or 0.06 ± 0.43 cm for RL, 0.09 ± 0.27 or 0.07 ± 0.48 cm for AP, and 0.00 ± 0.21 or −0.24 ± 0.45 cm for SI with and without radiographic imaging, respectively. Setup with radiographic imaging reduced the standard deviation of the setup error by roughly 1–2 mm. The uncertainty of the cochlea dose depends on the treatment plan and the relative positions of the cochlea and target volumes. Combining results for the left and right cochlea, the authors found the accumulated uncertainty of the cochlea dose per fraction was 4.82 (0.39–16.8) cGy, or 10.1 (0.8–32.4) cGy, with and without radiographic imaging, respectively; the percentage uncertainties relative to the planned doses were 4.32% (0.28%–9.06%) and 10.2% (0.7%–63.6%), respectively. Conclusions: Patient setup error introduces uncertainty in the position of the cochlea during radiation treatment. With the assistance of radiographic imaging during setup, the standard deviation of setup error reduced by 31%, 42%, and 54% in RL, AP, and SI direction, respectively, and consequently, the uncertainty of the mean dose to cochlea reduced more than 50%. The authors estimate that the effects of these uncertainties on the probability of hearing loss for an individual patient could be as large as 10%. PMID:24320510
NASA Astrophysics Data System (ADS)
Peterson, James Preston, II
Unmanned Aerial Systems (UAS) are rapidly blurring the lines between traditional and close range photogrammetry, and between surveying and photogrammetry. UAS are providing an economic platform for performing aerial surveying on small projects. The focus of this research was to describe traditional photogrammetric imagery and Light Detection and Ranging (LiDAR) geospatial products, describe close range photogrammetry (CRP), introduce UAS and computer vision (CV), and investigate whether industry mapping standards for accuracy can be met using UAS collection and CV processing. A 120-acre site was selected and 97 aerial targets were surveyed for evaluation purposes. Four UAS flights of varying heights above ground level (AGL) were executed, and three different target patterns of varying distances between targets were analyzed for compliance with American Society for Photogrammetry and Remote Sensing (ASPRS) and National Standard for Spatial Data Accuracy (NSSDA) mapping standards. This analysis resulted in twelve datasets. Error patterns were evaluated and reasons for these errors were determined. The relationship between the AGL, ground sample distance, target spacing and the root mean square error of the targets is exploited by this research to develop guidelines that use the ASPRS and NSSDA map standard as the template. These guidelines allow the user to select the desired mapping accuracy and determine what target spacing and AGL is required to produce the desired accuracy. These guidelines also address how UAS/CV phenomena affect map accuracy. General guidelines and recommendations are presented that give the user helpful information for planning a UAS flight using CV technology.
Economic values under inappropriate normal distribution assumptions.
Sadeghi-Sefidmazgi, A; Nejati-Javaremi, A; Moradi-Shahrbabak, M; Miraei-Ashtiani, S R; Amer, P R
2012-08-01
The objectives of this study were to quantify the errors in economic values (EVs) for traits affected by cost or price thresholds when skewed or kurtotic distributions of varying degree are assumed to be normal and when data with a normal distribution is subject to censoring. EVs were estimated for a continuous trait with dichotomous economic implications because of a price premium or penalty arising from a threshold ranging between -4 and 4 standard deviations from the mean. In order to evaluate the impacts of skewness, positive and negative excess kurtosis, standard skew normal, Pearson and the raised cosine distributions were used, respectively. For the various evaluable levels of skewness and kurtosis, the results showed that EVs can be underestimated or overestimated by more than 100% when price determining thresholds fall within a range from the mean that might be expected in practice. Estimates of EVs were very sensitive to censoring or missing data. In contrast to practical genetic evaluation, economic evaluation is very sensitive to lack of normality and missing data. Although in some special situations, the presence of multiple thresholds may attenuate the combined effect of errors at each threshold point, in practical situations there is a tendency for a few key thresholds to dominate the EV, and there are many situations where errors could be compounded across multiple thresholds. In the development of breeding objectives for non-normal continuous traits influenced by value thresholds, it is necessary to select a transformation that will resolve problems of non-normality or consider alternative methods that are less sensitive to non-normality.
Haem, Elham; Harling, Kajsa; Ayatollahi, Seyyed Mohammad Taghi; Zare, Najaf; Karlsson, Mats O
2017-02-01
One important aim in population pharmacokinetics (PK) and pharmacodynamics is identification and quantification of the relationships between the parameters and covariates. Lasso has been suggested as a technique for simultaneous estimation and covariate selection. In linear regression, it has been shown that Lasso possesses no oracle properties, which means it asymptotically performs as though the true underlying model was given in advance. Adaptive Lasso (ALasso) with appropriate initial weights is claimed to possess oracle properties; however, it can lead to poor predictive performance when there is multicollinearity between covariates. This simulation study implemented a new version of ALasso, called adjusted ALasso (AALasso), to take into account the ratio of the standard error of the maximum likelihood (ML) estimator to the ML coefficient as the initial weight in ALasso to deal with multicollinearity in non-linear mixed-effect models. The performance of AALasso was compared with that of ALasso and Lasso. PK data was simulated in four set-ups from a one-compartment bolus input model. Covariates were created by sampling from a multivariate standard normal distribution with no, low (0.2), moderate (0.5) or high (0.7) correlation. The true covariates influenced only clearance at different magnitudes. AALasso, ALasso and Lasso were compared in terms of mean absolute prediction error and error of the estimated covariate coefficient. The results show that AALasso performed better in small data sets, even in those in which a high correlation existed between covariates. This makes AALasso a promising method for covariate selection in nonlinear mixed-effect models.
A rocket ozonesonde for geophysical research and satellite intercomparison
NASA Technical Reports Server (NTRS)
Hilsenrath, E.; Coley, R. L.; Kirschner, P. T.; Gammill, B.
1979-01-01
The in-situ rocketsonde for ozone profile measurements developed and flown for geophysical research and satellite comparison is reviewed. The measurement principle involves the chemiluminescence caused by ambient ozone striking a detector and passive pumping as a means of sampling the atmosphere as the sonde descends through the atmosphere on a parachute. The sonde is flown on a meteorological sounding rocket, and flight data are telemetered via the standard meteorological GMD ground receiving system. The payload operation, sensor performance, and calibration procedures simulating flight conditions are described. An error analysis indicated an absolute accuracy of about 12 percent and a precision of about 8 percent. These are combined to give a measurement error of 14 percent.
ERIC Educational Resources Information Center
Wang, Tianyou
2009-01-01
Holland and colleagues derived a formula for analytical standard error of equating using the delta-method for the kernel equating method. Extending their derivation, this article derives an analytical standard error of equating procedure for the conventional percentile rank-based equipercentile equating with log-linear smoothing. This procedure is…
Estimates of streamflow characteristics for selected small streams, Baker River basin, Washington
Williams, John R.
1987-01-01
Regression equations were used to estimate streamflow characteristics at eight ungaged sites on small streams in the Baker River basin in the North Cascade Mountains, Washington, that could be suitable for run-of-the-river hydropower development. The regression equations were obtained by relating known streamflow characteristics at 25 gaging stations in nearby basins to several physical and climatic variables that could be easily measured in gaged or ungaged basins. The known streamflow characteristics were mean annual flows, 1-, 3-, and 7-day low flows and high flows, mean monthly flows, and flow duration. Drainage area and mean annual precipitation were not the most significant variables in all the regression equations. Variance in the low flows and the summer mean monthly flows was reduced by including an index of glacierized area within the basin as a third variable. Standard errors of estimate of the regression equations ranged from 25 to 88%, and the largest errors were associated with the low flow characteristics. Discharge measurements made at the eight sites near midmonth each month during 1981 were used to estimate monthly mean flows at the sites for that period. These measurements also were correlated with concurrent daily mean flows from eight operating gaging stations. The correlations provided estimates of mean monthly flows that compared reasonably well with those estimated by the regression analyses. (Author 's abstract)
Estimation of correlation functions by stochastic approximation.
NASA Technical Reports Server (NTRS)
Habibi, A.; Wintz, P. A.
1972-01-01
Consideration of the autocorrelation function of a zero-mean stationary random process. The techniques are applicable to processes with nonzero mean provided the mean is estimated first and subtracted. Two recursive techniques are proposed, both of which are based on the method of stochastic approximation and assume a functional form for the correlation function that depends on a number of parameters that are recursively estimated from successive records. One technique uses a standard point estimator of the correlation function to provide estimates of the parameters that minimize the mean-square error between the point estimates and the parametric function. The other technique provides estimates of the parameters that maximize a likelihood function relating the parameters of the function to the random process. Examples are presented.
Linhart, S. Mike; Nania, Jon F.; Sanders, Curtis L.; Archfield, Stacey A.
2012-01-01
The U.S. Geological Survey (USGS) maintains approximately 148 real-time streamgages in Iowa for which daily mean streamflow information is available, but daily mean streamflow data commonly are needed at locations where no streamgages are present. Therefore, the USGS conducted a study as part of a larger project in cooperation with the Iowa Department of Natural Resources to develop methods to estimate daily mean streamflow at locations in ungaged watersheds in Iowa by using two regression-based statistical methods. The regression equations for the statistical methods were developed from historical daily mean streamflow and basin characteristics from streamgages within the study area, which includes the entire State of Iowa and adjacent areas within a 50-mile buffer of Iowa in neighboring states. Results of this study can be used with other techniques to determine the best method for application in Iowa and can be used to produce a Web-based geographic information system tool to compute streamflow estimates automatically. The Flow Anywhere statistical method is a variation of the drainage-area-ratio method, which transfers same-day streamflow information from a reference streamgage to another location by using the daily mean streamflow at the reference streamgage and the drainage-area ratio of the two locations. The Flow Anywhere method modifies the drainage-area-ratio method in order to regionalize the equations for Iowa and determine the best reference streamgage from which to transfer same-day streamflow information to an ungaged location. Data used for the Flow Anywhere method were retrieved for 123 continuous-record streamgages located in Iowa and within a 50-mile buffer of Iowa. The final regression equations were computed by using either left-censored regression techniques with a low limit threshold set at 0.1 cubic feet per second (ft3/s) and the daily mean streamflow for the 15th day of every other month, or by using an ordinary-least-squares multiple linear regression method and the daily mean streamflow for the 15th day of every other month. The Flow Duration Curve Transfer method was used to estimate unregulated daily mean streamflow from the physical and climatic characteristics of gaged basins. For the Flow Duration Curve Transfer method, daily mean streamflow quantiles at the ungaged site were estimated with the parameter-based regression model, which results in a continuous daily flow-duration curve (the relation between exceedance probability and streamflow for each day of observed streamflow) at the ungaged site. By the use of a reference streamgage, the Flow Duration Curve Transfer is converted to a time series. Data used in the Flow Duration Curve Transfer method were retrieved for 113 continuous-record streamgages in Iowa and within a 50-mile buffer of Iowa. The final statewide regression equations for Iowa were computed by using a weighted-least-squares multiple linear regression method and were computed for the 0.01-, 0.05-, 0.10-, 0.15-, 0.20-, 0.30-, 0.40-, 0.50-, 0.60-, 0.70-, 0.80-, 0.85-, 0.90-, and 0.95-exceedance probability statistics determined from the daily mean streamflow with a reporting limit set at 0.1 ft3/s. The final statewide regression equation for Iowa computed by using left-censored regression techniques was computed for the 0.99-exceedance probability statistic determined from the daily mean streamflow with a low limit threshold and a reporting limit set at 0.1 ft3/s. For the Flow Anywhere method, results of the validation study conducted by using six streamgages show that differences between the root-mean-square error and the mean absolute error ranged from 1,016 to 138 ft3/s, with the larger value signifying a greater occurrence of outliers between observed and estimated streamflows. Root-mean-square-error values ranged from 1,690 to 237 ft3/s. Values of the percent root-mean-square error ranged from 115 percent to 26.2 percent. The logarithm (base 10) streamflow percent root-mean-square error ranged from 13.0 to 5.3 percent. Root-mean-square-error observations standard-deviation-ratio values ranged from 0.80 to 0.40. Percent-bias values ranged from 25.4 to 4.0 percent. Untransformed streamflow Nash-Sutcliffe efficiency values ranged from 0.84 to 0.35. The logarithm (base 10) streamflow Nash-Sutcliffe efficiency values ranged from 0.86 to 0.56. For the streamgage with the best agreement between observed and estimated streamflow, higher streamflows appear to be underestimated. For the streamgage with the worst agreement between observed and estimated streamflow, low flows appear to be overestimated whereas higher flows seem to be underestimated. Estimated cumulative streamflows for the period October 1, 2004, to September 30, 2009, are underestimated by -25.8 and -7.4 percent for the closest and poorest comparisons, respectively. For the Flow Duration Curve Transfer method, results of the validation study conducted by using the same six streamgages show that differences between the root-mean-square error and the mean absolute error ranged from 437 to 93.9 ft3/s, with the larger value signifying a greater occurrence of outliers between observed and estimated streamflows. Root-mean-square-error values ranged from 906 to 169 ft3/s. Values of the percent root-mean-square-error ranged from 67.0 to 25.6 percent. The logarithm (base 10) streamflow percent root-mean-square error ranged from 12.5 to 4.4 percent. Root-mean-square-error observations standard-deviation-ratio values ranged from 0.79 to 0.40. Percent-bias values ranged from 22.7 to 0.94 percent. Untransformed streamflow Nash-Sutcliffe efficiency values ranged from 0.84 to 0.38. The logarithm (base 10) streamflow Nash-Sutcliffe efficiency values ranged from 0.89 to 0.48. For the streamgage with the closest agreement between observed and estimated streamflow, there is relatively good agreement between observed and estimated streamflows. For the streamgage with the poorest agreement between observed and estimated streamflow, streamflows appear to be substantially underestimated for much of the time period. Estimated cumulative streamflow for the period October 1, 2004, to September 30, 2009, are underestimated by -9.3 and -22.7 percent for the closest and poorest comparisons, respectively.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hong, X; Gao, H; Schuemann, J
2015-06-15
Purpose: The Monte Carlo (MC) method is a gold standard for dose calculation in radiotherapy. However, it is not a priori clear how many particles need to be simulated to achieve a given dose accuracy. Prior error estimate and stopping criterion are not well established for MC. This work aims to fill this gap. Methods: Due to the statistical nature of MC, our approach is based on one-sample t-test. We design the prior error estimate method based on the t-test, and then use this t-test based error estimate for developing a simulation stopping criterion. The three major components are asmore » follows.First, the source particles are randomized in energy, space and angle, so that the dose deposition from a particle to the voxel is independent and identically distributed (i.i.d.).Second, a sample under consideration in the t-test is the mean value of dose deposition to the voxel by sufficiently large number of source particles. Then according to central limit theorem, the sample as the mean value of i.i.d. variables is normally distributed with the expectation equal to the true deposited dose.Third, the t-test is performed with the null hypothesis that the difference between sample expectation (the same as true deposited dose) and on-the-fly calculated mean sample dose from MC is larger than a given error threshold, in addition to which users have the freedom to specify confidence probability and region of interest in the t-test based stopping criterion. Results: The method is validated for proton dose calculation. The difference between the MC Result based on the t-test prior error estimate and the statistical Result by repeating numerous MC simulations is within 1%. Conclusion: The t-test based prior error estimate and stopping criterion are developed for MC and validated for proton dose calculation. Xiang Hong and Hao Gao were partially supported by the NSFC (#11405105), the 973 Program (#2015CB856000) and the Shanghai Pujiang Talent Program (#14PJ1404500)« less
Liu, Geng; Niu, Junjie; Zhang, Chao; Guo, Guanlin
2015-12-01
Data distribution is usually skewed severely by the presence of hot spots in contaminated sites. This causes difficulties for accurate geostatistical data transformation. Three types of typical normal distribution transformation methods termed the normal score, Johnson, and Box-Cox transformations were applied to compare the effects of spatial interpolation with normal distribution transformation data of benzo(b)fluoranthene in a large-scale coking plant-contaminated site in north China. Three normal transformation methods decreased the skewness and kurtosis of the benzo(b)fluoranthene, and all the transformed data passed the Kolmogorov-Smirnov test threshold. Cross validation showed that Johnson ordinary kriging has a minimum root-mean-square error of 1.17 and a mean error of 0.19, which was more accurate than the other two models. The area with fewer sampling points and that with high levels of contamination showed the largest prediction standard errors based on the Johnson ordinary kriging prediction map. We introduce an ideal normal transformation method prior to geostatistical estimation for severely skewed data, which enhances the reliability of risk estimation and improves the accuracy for determination of remediation boundaries.
Highly Efficient Compression Algorithms for Multichannel EEG.
Shaw, Laxmi; Rahman, Daleef; Routray, Aurobinda
2018-05-01
The difficulty associated with processing and understanding the high dimensionality of electroencephalogram (EEG) data requires developing efficient and robust compression algorithms. In this paper, different lossless compression techniques of single and multichannel EEG data, including Huffman coding, arithmetic coding, Markov predictor, linear predictor, context-based error modeling, multivariate autoregression (MVAR), and a low complexity bivariate model have been examined and their performances have been compared. Furthermore, a high compression algorithm named general MVAR and a modified context-based error modeling for multichannel EEG have been proposed. The resulting compression algorithm produces a higher relative compression ratio of 70.64% on average compared with the existing methods, and in some cases, it goes up to 83.06%. The proposed methods are designed to compress a large amount of multichannel EEG data efficiently so that the data storage and transmission bandwidth can be effectively used. These methods have been validated using several experimental multichannel EEG recordings of different subjects and publicly available standard databases. The satisfactory parametric measures of these methods, namely percent-root-mean square distortion, peak signal-to-noise ratio, root-mean-square error, and cross correlation, show their superiority over the state-of-the-art compression methods.
NASA Astrophysics Data System (ADS)
Caldwell, T. G.; Scanlon, B. R.; Long, D.; Young, M.
2013-12-01
Soil moisture is the most enigmatic component of the water balance; nonetheless, it is inherently tied to every component of the hydrologic cycle, affecting the partitioning of both water and energy at the land surface. However, our ability to assess soil water storage capacity and status through measurement or modeling is challenged by error and scale. Soil moisture is as difficult to measure as it is to model, yet land surface models and remote sensing products require some means of validation. Here we compare the three major soil moisture monitoring networks across the US, including the USDA Soil Climate Assessment Network (SCAN), NOAA Climate Reference Network (USCRN), and Cosmic Ray Soil Moisture Observing System (COSMOS) to the soil moisture simulated using the North American Land Data Assimilation System (NLDAS) Phase 2. NLDAS runs in near real-time on a 0.125° (12 km) grid over the US, producing ensemble model outputs of surface fluxes and storage. We focus primarily on soil water storage (SWS) in the upper 0-0.1 m zone from the Noah Land Surface Model and secondarily on the effects of error propagation from atmospheric forcing and soil parameterization. No scaling of the observational data was attempted. We simply compared the extracted time series at the nearest grid center from NLDAS and assessed the results by standard model statistics including root mean square error (RMSE) and mean bias estimate (MBE) of the collocated ground station. Observed and modeled data were compared at both hourly and daily mean coordinated universal time steps. In all, ~300 stations were used for 2012. SCAN sites were found to be particularly troublesome at 5- and 10-cm depths. SWS at 163 SCAN sites departed significantly from Noah with a mean R2 of 0.38 × 0.0.23, a mean RMSE of 14.9 mm with a MBE of -13.5 mm. SWS at 111 USCRN sites has a mean R2 of 0.53 × 0.20, a mean RMSE of 8.2 mm with a MBE of -3.7 mm relative to Noah. Finally, 62 COSMOS sites, the instrument with the largest measurement footprint (0.03 km2), we calculated a mean R2 of 0.53 × 0.21, a mean RMSE of 9.7 mm with a MBE of -0.3 mm. Forcing errors and textural misclassifications correlate well with model biases, indicating that scale and structural errors are equally present in NLDAS. Scaling issues aside, these confounding errors make cal/val missions, such as NASA's upcoming Soil Moisture Active Passive (SMAP) mission, problematic without significant quality control and maintenance of for our monitoring networks. Land surface models, such as NLDAS-2, may provide valuable insight into our soil moisture data and somewhere in between the real values likely exist.
August Median Streamflow on Ungaged Streams in Eastern Aroostook County, Maine
Lombard, Pamela J.; Tasker, Gary D.; Nielsen, Martha G.
2003-01-01
Methods for estimating August median streamflow were developed for ungaged, unregulated streams in the eastern part of Aroostook County, Maine, with drainage areas from 0.38 to 43 square miles and mean basin elevations from 437 to 1,024 feet. Few long-term, continuous-record streamflow-gaging stations with small drainage areas were available from which to develop the equations; therefore, 24 partial-record gaging stations were established in this investigation. A mathematical technique for estimating a standard low-flow statistic, August median streamflow, at partial-record stations was applied by relating base-flow measurements at these stations to concurrent daily flows at nearby long-term, continuous-record streamflow- gaging stations (index stations). Generalized least-squares regression analysis (GLS) was used to relate estimates of August median streamflow at gaging stations to basin characteristics at these same stations to develop equations that can be applied to estimate August median streamflow on ungaged streams. GLS accounts for varying periods of record at the gaging stations and the cross correlation of concurrent streamflows among gaging stations. Twenty-three partial-record stations and one continuous-record station were used for the final regression equations. The basin characteristics of drainage area and mean basin elevation are used in the calculated regression equation for ungaged streams to estimate August median flow. The equation has an average standard error of prediction from -38 to 62 percent. A one-variable equation uses only drainage area to estimate August median streamflow when less accuracy is acceptable. This equation has an average standard error of prediction from -40 to 67 percent. Model error is larger than sampling error for both equations, indicating that additional basin characteristics could be important to improved estimates of low-flow statistics. Weighted estimates of August median streamflow, which can be used when making estimates at partial-record or continuous-record gaging stations, range from 0.03 to 11.7 cubic feet per second or from 0.1 to 0.4 cubic feet per second per square mile. Estimates of August median streamflow on ungaged streams in the eastern part of Aroostook County, within the range of acceptable explanatory variables, range from 0.03 to 30 cubic feet per second or 0.1 to 0.7 cubic feet per second per square mile. Estimates of August median streamflow per square mile of drainage area generally increase as mean elevation and drainage area increase.
High test-retest-reliability of pain-related evoked potentials (PREP) in healthy subjects.
Özgül, Özüm Simal; Maier, Christoph; Enax-Krumova, Elena K; Vollert, Jan; Fischer, Marc; Tegenthoff, Martin; Höffken, Oliver
2017-04-24
Pain-related evoked potentials (PREP) is an established electrophysiological method to evaluate the signal transmission of electrically stimulated A-delta fibres. Although prerequisite for its clinical use, test-retest-reliability and side-to-side differences of bilateral stimulation in healthy subjects have not been examined yet. We performed PREP twice within 3-14days in 33 healthy subjects bilaterally by stimulating the dorsal hand. Detection (DT) and pain thresholds (PT) after electrical stimulation, the corresponding pain ratings, latencies of P0, N1, P1 and N2 components and the corresponding amplitudes were assessed. Impact of electrically induced pain intensity, age, sex, and arm length on PREP was analysed. MANOVA, t-Test, interclass correlation coefficient (ICC), standard error of measurement (SEM), smallest real difference (SRD), Bland-Altmann-Analysis as well as ANCOVA were used for statistical analysis. Measurement from both sides on both days resulted in mean N1-latencies from 142.39±18.12ms to 144.03±16.62ms and in mean N1P1-amplitudes from 39.04±12.26μV to 40.53±12.9μV. Analysis of a side-to-side effect showed for the N1-latency a F-value of 0.038 and for the N1P1-amplitude of 0.004 (p>0.8). We found intraclass correlation coefficients (ICC) from 0.88 to 0.93 and a standard error of measurement (SEM)<10% of mean values for all measurements concerning the N1-Latency and N1P1-amplitude. Intraclass correlation coefficients, standard error of measurement and Bland-Altman-Analyses revealed excellent test-retest-reliability for N1-latency and N1P1-amplitude without systematic error and there was no side-to-side effect on PREP. N1-latency (r=0.35, p<0.05) and N1P1-amplitude (r=-0.45, p<0.05) correlated with age and additionally N1-latency correlated with arm length (r=0.45, p<0.001). In contrast, pain intensity during the stimulation had no effect on both N1-latency and N1P1-amplitude. In summary, PREP showed high test-retest-reliability and negligible side-to-side differences concerning the commonly used parameters N1-latency and N1P1-amplitude. Copyright © 2017 Elsevier B.V. All rights reserved.
Limpert, Eckhard; Stahel, Werner A
2011-01-01
The gaussian or normal distribution is the most established model to characterize quantitative variation of original data. Accordingly, data are summarized using the arithmetic mean and the standard deviation, by mean ± SD, or with the standard error of the mean, mean ± SEM. This, together with corresponding bars in graphical displays has become the standard to characterize variation. Here we question the adequacy of this characterization, and of the model. The published literature provides numerous examples for which such descriptions appear inappropriate because, based on the "95% range check", their distributions are obviously skewed. In these cases, the symmetric characterization is a poor description and may trigger wrong conclusions. To solve the problem, it is enlightening to regard causes of variation. Multiplicative causes are by far more important than additive ones, in general, and benefit from a multiplicative (or log-) normal approach. Fortunately, quite similar to the normal, the log-normal distribution can now be handled easily and characterized at the level of the original data with the help of both, a new sign, x/, times-divide, and notation. Analogous to mean ± SD, it connects the multiplicative (or geometric) mean mean * and the multiplicative standard deviation s* in the form mean * x/s*, that is advantageous and recommended. The corresponding shift from the symmetric to the asymmetric view will substantially increase both, recognition of data distributions, and interpretation quality. It will allow for savings in sample size that can be considerable. Moreover, this is in line with ethical responsibility. Adequate models will improve concepts and theories, and provide deeper insight into science and life.
Lee, Jin H; Howell, David R; Meehan, William P; Iverson, Grant L; Gardner, Andrew J
2017-09-01
The Sport Concussion Assessment Tool-Third Edition (SCAT3) is currently considered the standard sideline assessment for concussions. In-game exercise, however, may affect SCAT3 performance and the diagnosis of concussions. To examine the influence of exercise on SCAT3 performance in professional male athletes. Controlled laboratory study. We examined the SCAT3 performance of 82 professional male athletes under 2 conditions: at rest and after exercise. Athletes reported significantly fewer total symptoms (mean, 1.0 ± 1.5 vs 1.6 ± 2.3 total symptoms, respectively; P = .008; Cohen d = 0.34), committed significantly fewer errors on the modified Balance Error Scoring System (mean, 3.5 ± 3.5 vs 4.6 ± 4.1 errors, respectively; P = .017; d = 0.31), and required significantly less time to complete the tandem gait test (mean, 9.5 ± 1.4 vs 9.9 ± 1.7 seconds, respectively; P = .02; d = 0.30) during the at-rest condition compared with the postexercise condition. The interpretation of in-game (sideline) SCAT3 results should consider the effects of postexercise fatigue levels on an athlete's performance, particularly if preseason baseline data have been collected when the athlete was well rested. Exercise appears to affect symptom burden and physical abilities, such as balance and tandem gait, more so than the cognitive components of the SCAT3.
Application of near-infrared spectroscopy for the rapid quality assessment of Radix Paeoniae Rubra
NASA Astrophysics Data System (ADS)
Zhan, Hao; Fang, Jing; Tang, Liying; Yang, Hongjun; Li, Hua; Wang, Zhuju; Yang, Bin; Wu, Hongwei; Fu, Meihong
2017-08-01
Near-infrared (NIR) spectroscopy with multivariate analysis was used to quantify gallic acid, catechin, albiflorin, and paeoniflorin in Radix Paeoniae Rubra, and the feasibility to classify the samples originating from different areas was investigated. A new high-performance liquid chromatography method was developed and validated to analyze gallic acid, catechin, albiflorin, and paeoniflorin in Radix Paeoniae Rubra as the reference. Partial least squares (PLS), principal component regression (PCR), and stepwise multivariate linear regression (SMLR) were performed to calibrate the regression model. Different data pretreatments such as derivatives (1st and 2nd), multiplicative scatter correction, standard normal variate, Savitzky-Golay filter, and Norris derivative filter were applied to remove the systematic errors. The performance of the model was evaluated according to the root mean square of calibration (RMSEC), root mean square error of prediction (RMSEP), root mean square error of cross-validation (RMSECV), and correlation coefficient (r). The results show that compared to PCR and SMLR, PLS had a lower RMSEC, RMSECV, and RMSEP and higher r for all the four analytes. PLS coupled with proper pretreatments showed good performance in both the fitting and predicting results. Furthermore, the original areas of Radix Paeoniae Rubra samples were partly distinguished by principal component analysis. This study shows that NIR with PLS is a reliable, inexpensive, and rapid tool for the quality assessment of Radix Paeoniae Rubra.
Guimaraes, Carolina V; Grzeszczuk, Robert; Bisset, George S; Donnelly, Lane F
2018-03-01
When implementing or monitoring department-sanctioned standardized radiology reports, feedback about individual faculty performance has been shown to be a useful driver of faculty compliance. Most commonly, these data are derived from manual audit, which can be both time-consuming and subject to sampling error. The purpose of this study was to evaluate whether a software program using natural language processing and machine learning could accurately audit radiologist compliance with the use of standardized reports compared with performed manual audits. Radiology reports from a 1-month period were loaded into such a software program, and faculty compliance with use of standardized reports was calculated. For that same period, manual audits were performed (25 reports audited for each of 42 faculty members). The mean compliance rates calculated by automated auditing were then compared with the confidence interval of the mean rate by manual audit. The mean compliance rate for use of standardized reports as determined by manual audit was 91.2% with a confidence interval between 89.3% and 92.8%. The mean compliance rate calculated by automated auditing was 92.0%, within that confidence interval. This study shows that by use of natural language processing and machine learning algorithms, an automated analysis can accurately define whether reports are compliant with use of standardized report templates and language, compared with manual audits. This may avoid significant labor costs related to conducting the manual auditing process. Copyright © 2017 American College of Radiology. Published by Elsevier Inc. All rights reserved.
Tarone, Aaron M; Foran, David R
2008-07-01
Forensic entomologists use blow fly development to estimate a postmortem interval. Although accurate, fly age estimates can be imprecise for older developmental stages and no standard means of assigning confidence intervals exists. Presented here is a method for modeling growth of the forensically important blow fly Lucilia sericata, using generalized additive models (GAMs). Eighteen GAMs were created to predict the extent of juvenile fly development, encompassing developmental stage, length, weight, strain, and temperature data, collected from 2559 individuals. All measures were informative, explaining up to 92.6% of the deviance in the data, though strain and temperature exerted negligible influences. Predictions made with an independent data set allowed for a subsequent examination of error. Estimates using length and developmental stage were within 5% of true development percent during the feeding portion of the larval life cycle, while predictions for postfeeding third instars were less precise, but within expected error.
Prevention of medication errors: detection and audit.
Montesi, Germana; Lechi, Alessandro
2009-06-01
1. Medication errors have important implications for patient safety, and their identification is a main target in improving clinical practice errors, in order to prevent adverse events. 2. Error detection is the first crucial step. Approaches to this are likely to be different in research and routine care, and the most suitable must be chosen according to the setting. 3. The major methods for detecting medication errors and associated adverse drug-related events are chart review, computerized monitoring, administrative databases, and claims data, using direct observation, incident reporting, and patient monitoring. All of these methods have both advantages and limitations. 4. Reporting discloses medication errors, can trigger warnings, and encourages the diffusion of a culture of safe practice. Combining and comparing data from various and encourages the diffusion of a culture of safe practice sources increases the reliability of the system. 5. Error prevention can be planned by means of retroactive and proactive tools, such as audit and Failure Mode, Effect, and Criticality Analysis (FMECA). Audit is also an educational activity, which promotes high-quality care; it should be carried out regularly. In an audit cycle we can compare what is actually done against reference standards and put in place corrective actions to improve the performances of individuals and systems. 6. Patient safety must be the first aim in every setting, in order to build safer systems, learning from errors and reducing the human and fiscal costs.
Glycosylated haemoglobin: measurement and clinical use.
Peacock, I
1984-08-01
The discovery, biochemistry, laboratory determination, and clinical application of glycosylated haemoglobins are reviewed. Sources of error are discussed in detail. No single assay method is suitable for all purposes, and in the foreseeable future generally acceptable standards and reference ranges are unlikely to be agreed. Each laboratory must establish its own. Nevertheless, the development of glycosylated haemoglobin assays is an important advance. They offer the best available means of assessing diabetic control.
Multivariate Adaptive Regression Splines (Preprint)
1990-08-01
fold cross -validation would take about ten time as long, and MARS is not all that fast to begin with. Friedman has a number of examples showing...standardized mean squared error of prediction (MSEP), the generalized cross validation (GCV), and the number of selected terms (TERMS). In accordance with...and mi= 10 case were almost exclusively spurious cross product terms and terms involving the nuisance variables x6 through xlo. This large number of
Tong, Hui; Tanaka, Carina B; Kaizer, Marina R; Zhang, Yu
2016-01-01
Developing yttria-stabilized tetragonal zirconia polycrystal (Y-TZP) with high strength and translucency could significantly widen the clinical indications of monolithic zirconia restorations. This study investigates the mechanical and optical properties of three Y-TZP ceramics: High-Translucency, High-Strength and High-Surface Area. The four-point bending strengths (mean ± standard error) for the three Y-TZP ceramics ( n = 10) were 990 ± 39, 1416 ± 33 and 1076 ± 32 MPa for High-Translucency, High-Strength and High-Surface Area, respectively. The fracture toughness values (mean ± standard error) for the three zirconias ( n = 10) were 3.24 ± 0.10, 3.63 ± 0.12 and 3.21 ± 0.14 MPa m 1/2 for High-Translucency, High-Strength and High-Surface Area, respectively. Both strength and toughness values of High-Strength zirconia were significantly higher than High-Surface Area and High-Translucency zirconias. Translucency parameter values of High-Translucency zirconia were considerably higher than High-Strength and High-Surface Area zirconias. However, all three zirconias became essentially opaque when their thickness reached 1 mm or greater. Our findings suggest that there exists a delicate balance between mechanical and optical properties of the current commercial Y-TZP ceramics.
Luque-Fernandez, Miguel Angel; Belot, Aurélien; Quaresma, Manuela; Maringe, Camille; Coleman, Michel P; Rachet, Bernard
2016-10-01
In population-based cancer research, piecewise exponential regression models are used to derive adjusted estimates of excess mortality due to cancer using the Poisson generalized linear modelling framework. However, the assumption that the conditional mean and variance of the rate parameter given the set of covariates x i are equal is strong and may fail to account for overdispersion given the variability of the rate parameter (the variance exceeds the mean). Using an empirical example, we aimed to describe simple methods to test and correct for overdispersion. We used a regression-based score test for overdispersion under the relative survival framework and proposed different approaches to correct for overdispersion including a quasi-likelihood, robust standard errors estimation, negative binomial regression and flexible piecewise modelling. All piecewise exponential regression models showed the presence of significant inherent overdispersion (p-value <0.001). However, the flexible piecewise exponential model showed the smallest overdispersion parameter (3.2 versus 21.3) for non-flexible piecewise exponential models. We showed that there were no major differences between methods. However, using a flexible piecewise regression modelling, with either a quasi-likelihood or robust standard errors, was the best approach as it deals with both, overdispersion due to model misspecification and true or inherent overdispersion.
Kunz, Cornelia U; Stallard, Nigel; Parsons, Nicholas; Todd, Susan; Friede, Tim
2017-03-01
Regulatory authorities require that the sample size of a confirmatory trial is calculated prior to the start of the trial. However, the sample size quite often depends on parameters that might not be known in advance of the study. Misspecification of these parameters can lead to under- or overestimation of the sample size. Both situations are unfavourable as the first one decreases the power and the latter one leads to a waste of resources. Hence, designs have been suggested that allow a re-assessment of the sample size in an ongoing trial. These methods usually focus on estimating the variance. However, for some methods the performance depends not only on the variance but also on the correlation between measurements. We develop and compare different methods for blinded estimation of the correlation coefficient that are less likely to introduce operational bias when the blinding is maintained. Their performance with respect to bias and standard error is compared to the unblinded estimator. We simulated two different settings: one assuming that all group means are the same and one assuming that different groups have different means. Simulation results show that the naïve (one-sample) estimator is only slightly biased and has a standard error comparable to that of the unblinded estimator. However, if the group means differ, other estimators have better performance depending on the sample size per group and the number of groups. © 2016 The Authors. Biometrical Journal Published by WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Stallard, Nigel; Parsons, Nicholas; Todd, Susan; Friede, Tim
2016-01-01
Regulatory authorities require that the sample size of a confirmatory trial is calculated prior to the start of the trial. However, the sample size quite often depends on parameters that might not be known in advance of the study. Misspecification of these parameters can lead to under‐ or overestimation of the sample size. Both situations are unfavourable as the first one decreases the power and the latter one leads to a waste of resources. Hence, designs have been suggested that allow a re‐assessment of the sample size in an ongoing trial. These methods usually focus on estimating the variance. However, for some methods the performance depends not only on the variance but also on the correlation between measurements. We develop and compare different methods for blinded estimation of the correlation coefficient that are less likely to introduce operational bias when the blinding is maintained. Their performance with respect to bias and standard error is compared to the unblinded estimator. We simulated two different settings: one assuming that all group means are the same and one assuming that different groups have different means. Simulation results show that the naïve (one‐sample) estimator is only slightly biased and has a standard error comparable to that of the unblinded estimator. However, if the group means differ, other estimators have better performance depending on the sample size per group and the number of groups. PMID:27886393
Moerbeek, Mirjam; van Schie, Sander
2016-07-11
The number of clusters in a cluster randomized trial is often low. It is therefore likely random assignment of clusters to treatment conditions results in covariate imbalance. There are no studies that quantify the consequences of covariate imbalance in cluster randomized trials on parameter and standard error bias and on power to detect treatment effects. The consequences of covariance imbalance in unadjusted and adjusted linear mixed models are investigated by means of a simulation study. The factors in this study are the degree of imbalance, the covariate effect size, the cluster size and the intraclass correlation coefficient. The covariate is binary and measured at the cluster level; the outcome is continuous and measured at the individual level. The results show covariate imbalance results in negligible parameter bias and small standard error bias in adjusted linear mixed models. Ignoring the possibility of covariate imbalance while calculating the sample size at the cluster level may result in a loss in power of at most 25 % in the adjusted linear mixed model. The results are more severe for the unadjusted linear mixed model: parameter biases up to 100 % and standard error biases up to 200 % may be observed. Power levels based on the unadjusted linear mixed model are often too low. The consequences are most severe for large clusters and/or small intraclass correlation coefficients since then the required number of clusters to achieve a desired power level is smallest. The possibility of covariate imbalance should be taken into account while calculating the sample size of a cluster randomized trial. Otherwise more sophisticated methods to randomize clusters to treatments should be used, such as stratification or balance algorithms. All relevant covariates should be carefully identified, be actually measured and included in the statistical model to avoid severe levels of parameter and standard error bias and insufficient power levels.
An Empirical State Error Covariance Matrix for Batch State Estimation
NASA Technical Reports Server (NTRS)
Frisbee, Joseph H., Jr.
2011-01-01
State estimation techniques serve effectively to provide mean state estimates. However, the state error covariance matrices provided as part of these techniques suffer from some degree of lack of confidence in their ability to adequately describe the uncertainty in the estimated states. A specific problem with the traditional form of state error covariance matrices is that they represent only a mapping of the assumed observation error characteristics into the state space. Any errors that arise from other sources (environment modeling, precision, etc.) are not directly represented in a traditional, theoretical state error covariance matrix. Consider that an actual observation contains only measurement error and that an estimated observation contains all other errors, known and unknown. It then follows that a measurement residual (the difference between expected and observed measurements) contains all errors for that measurement. Therefore, a direct and appropriate inclusion of the actual measurement residuals in the state error covariance matrix will result in an empirical state error covariance matrix. This empirical state error covariance matrix will fully account for the error in the state estimate. By way of a literal reinterpretation of the equations involved in the weighted least squares estimation algorithm, it is possible to arrive at an appropriate, and formally correct, empirical state error covariance matrix. The first specific step of the method is to use the average form of the weighted measurement residual variance performance index rather than its usual total weighted residual form. Next it is helpful to interpret the solution to the normal equations as the average of a collection of sample vectors drawn from a hypothetical parent population. From here, using a standard statistical analysis approach, it directly follows as to how to determine the standard empirical state error covariance matrix. This matrix will contain the total uncertainty in the state estimate, regardless as to the source of the uncertainty. Also, in its most straight forward form, the technique only requires supplemental calculations to be added to existing batch algorithms. The generation of this direct, empirical form of the state error covariance matrix is independent of the dimensionality of the observations. Mixed degrees of freedom for an observation set are allowed. As is the case with any simple, empirical sample variance problems, the presented approach offers an opportunity (at least in the case of weighted least squares) to investigate confidence interval estimates for the error covariance matrix elements. The diagonal or variance terms of the error covariance matrix have a particularly simple form to associate with either a multiple degree of freedom chi-square distribution (more approximate) or with a gamma distribution (less approximate). The off diagonal or covariance terms of the matrix are less clear in their statistical behavior. However, the off diagonal covariance matrix elements still lend themselves to standard confidence interval error analysis. The distributional forms associated with the off diagonal terms are more varied and, perhaps, more approximate than those associated with the diagonal terms. Using a simple weighted least squares sample problem, results obtained through use of the proposed technique are presented. The example consists of a simple, two observer, triangulation problem with range only measurements. Variations of this problem reflect an ideal case (perfect knowledge of the range errors) and a mismodeled case (incorrect knowledge of the range errors).
Ramsthaler, Frank; Kettner, Mattias; Verhoff, Marcel A
2014-01-01
In forensic anthropological casework, estimating age-at-death is key to profiling unknown skeletal remains. The aim of this study was to examine the reliability of a new, simple, fast, and inexpensive digital odontological method for age-at-death estimation. The method is based on the original Lamendin method, which is a widely used technique in the repertoire of odontological aging methods in forensic anthropology. We examined 129 single root teeth employing a digital camera and imaging software for the measurement of the luminance of the teeth's translucent root zone. Variability in luminance detection was evaluated using statistical technical error of measurement analysis. The method revealed stable values largely unrelated to observer experience, whereas requisite formulas proved to be camera-specific and should therefore be generated for an individual recording setting based on samples of known chronological age. Multiple regression analysis showed a highly significant influence of the coefficients of the variables "arithmetic mean" and "standard deviation" of luminance for the regression formula. For the use of this primer multivariate equation for age-at-death estimation in casework, a standard error of the estimate of 6.51 years was calculated. Step-by-step reduction of the number of embedded variables to linear regression analysis employing the best contributor "arithmetic mean" of luminance yielded a regression equation with a standard error of 6.72 years (p < 0.001). The results of this study not only support the premise of root translucency as an age-related phenomenon, but also demonstrate that translucency reflects a number of other influencing factors in addition to age. This new digital measuring technique of the zone of dental root luminance can broaden the array of methods available for estimating chronological age, and furthermore facilitate measurement and age classification due to its low dependence on observer experience.
Statistical power for detecting trends with applications to seabird monitoring
Hatch, Shyla A.
2003-01-01
Power analysis is helpful in defining goals for ecological monitoring and evaluating the performance of ongoing efforts. I examined detection standards proposed for population monitoring of seabirds using two programs (MONITOR and TRENDS) specially designed for power analysis of trend data. Neither program models within- and among-years components of variance explicitly and independently, thus an error term that incorporates both components is an essential input. Residual variation in seabird counts consisted of day-to-day variation within years and unexplained variation among years in approximately equal parts. The appropriate measure of error for power analysis is the standard error of estimation (S.E.est) from a regression of annual means against year. Replicate counts within years are helpful in minimizing S.E.est but should not be treated as independent samples for estimating power to detect trends. Other issues include a choice of assumptions about variance structure and selection of an exponential or linear model of population change. Seabird count data are characterized by strong correlations between S.D. and mean, thus a constant CV model is appropriate for power calculations. Time series were fit about equally well with exponential or linear models, but log transformation ensures equal variances over time, a basic assumption of regression analysis. Using sample data from seabird monitoring in Alaska, I computed the number of years required (with annual censusing) to detect trends of -1.4% per year (50% decline in 50 years) and -2.7% per year (50% decline in 25 years). At ??=0.05 and a desired power of 0.9, estimated study intervals ranged from 11 to 69 years depending on species, trend, software, and study design. Power to detect a negative trend of 6.7% per year (50% decline in 10 years) is suggested as an alternative standard for seabird monitoring that achieves a reasonable match between statistical and biological significance.
ERIC Educational Resources Information Center
Wang, Tianyou; And Others
M. J. Kolen, B. A. Hanson, and R. L. Brennan (1992) presented a procedure for assessing the conditional standard error of measurement (CSEM) of scale scores using a strong true-score model. They also investigated the ways of using nonlinear transformation from number-correct raw score to scale score to equalize the conditional standard error along…
NASA Astrophysics Data System (ADS)
Greve, Annika; Turner, Gillian M.
2018-06-01
Since publication we have noticed mistakes in the calculation of the flow mean palaeointensities. These are generally within the standard error of the mean of each result, and so do not affect the interpretations or overall conclusions of the paper. Tables 2 and 3 of the paper are reproduced below. The reader is referred to the original publication, Greve and Turner (2017) for a full discussion of the study and references. We thank the editors for the opportunity to make these corrections.
Martz, D E; Harris, R T; Langner, G H
1989-07-01
Direct observation of the 218Po alpha-peak decay with a microcomputer-controlled alpha-spectrometer yielded a mean half-life value of 3.040 +/- 0.008 min, where the error quoted represents twice the standard deviation of the means from 38 separate decay measurements. The 1912 and 1924 218Po half-life measurements, which provided the 3.05-min value listed in nuclear tables for the past 60 y, are critically reviewed. Two more recent experiments, which yielded longer values of 3.11 min (Van Hise et al. 1982) and 3.093 min (Potapov and Soloshenkov 1986), are also discussed.
NASA Astrophysics Data System (ADS)
Gilat-Schmidt, Taly; Wang, Adam; Coradi, Thomas; Haas, Benjamin; Star-Lack, Josh
2016-03-01
The overall goal of this work is to develop a rapid, accurate and fully automated software tool to estimate patient-specific organ doses from computed tomography (CT) scans using a deterministic Boltzmann Transport Equation solver and automated CT segmentation algorithms. This work quantified the accuracy of organ dose estimates obtained by an automated segmentation algorithm. The investigated algorithm uses a combination of feature-based and atlas-based methods. A multiatlas approach was also investigated. We hypothesize that the auto-segmentation algorithm is sufficiently accurate to provide organ dose estimates since random errors at the organ boundaries will average out when computing the total organ dose. To test this hypothesis, twenty head-neck CT scans were expertly segmented into nine regions. A leave-one-out validation study was performed, where every case was automatically segmented with each of the remaining cases used as the expert atlas, resulting in nineteen automated segmentations for each of the twenty datasets. The segmented regions were applied to gold-standard Monte Carlo dose maps to estimate mean and peak organ doses. The results demonstrated that the fully automated segmentation algorithm estimated the mean organ dose to within 10% of the expert segmentation for regions other than the spinal canal, with median error for each organ region below 2%. In the spinal canal region, the median error was 7% across all data sets and atlases, with a maximum error of 20%. The error in peak organ dose was below 10% for all regions, with a median error below 4% for all organ regions. The multiple-case atlas reduced the variation in the dose estimates and additional improvements may be possible with more robust multi-atlas approaches. Overall, the results support potential feasibility of an automated segmentation algorithm to provide accurate organ dose estimates.
Generation, Validation, and Application of Abundance Map Reference Data for Spectral Unmixing
NASA Astrophysics Data System (ADS)
Williams, McKay D.
Reference data ("ground truth") maps traditionally have been used to assess the accuracy of imaging spectrometer classification algorithms. However, these reference data can be prohibitively expensive to produce, often do not include sub-pixel abundance estimates necessary to assess spectral unmixing algorithms, and lack published validation reports. Our research proposes methodologies to efficiently generate, validate, and apply abundance map reference data (AMRD) to airborne remote sensing scenes. We generated scene-wide AMRD for three different remote sensing scenes using our remotely sensed reference data (RSRD) technique, which spatially aggregates unmixing results from fine scale imagery (e.g., 1-m Ground Sample Distance (GSD)) to co-located coarse scale imagery (e.g., 10-m GSD or larger). We validated the accuracy of this methodology by estimating AMRD in 51 randomly-selected 10 m x 10 m plots, using seven independent methods and observers, including field surveys by two observers, imagery analysis by two observers, and RSRD using three algorithms. Results indicated statistically-significant differences between all versions of AMRD, suggesting that all forms of reference data need to be validated. Given these significant differences between the independent versions of AMRD, we proposed that the mean of all (MOA) versions of reference data for each plot and class were most likely to represent true abundances. We then compared each version of AMRD to MOA. Best case accuracy was achieved by a version of imagery analysis, which had a mean coverage area error of 2.0%, with a standard deviation of 5.6%. One of the RSRD algorithms was nearly as accurate, achieving a mean error of 3.0%, with a standard deviation of 6.3%, showing the potential of RSRD-based AMRD generation. Application of validated AMRD to specific coarse scale imagery involved three main parts: 1) spatial alignment of coarse and fine scale imagery, 2) aggregation of fine scale abundances to produce coarse scale imagery-specific AMRD, and 3) demonstration of comparisons between coarse scale unmixing abundances and AMRD. Spatial alignment was performed using our scene-wide spectral comparison (SWSC) algorithm, which aligned imagery with accuracy approaching the distance of a single fine scale pixel. We compared simple rectangular aggregation to coarse sensor point spread function (PSF) aggregation, and found that the PSF approach returned lower error, but that rectangular aggregation more accurately estimated true abundances at ground level. We demonstrated various metrics for comparing unmixing results to AMRD, including mean absolute error (MAE) and linear regression (LR). We additionally introduced reference data mean adjusted MAE (MA-MAE), and reference data confidence interval adjusted MAE (CIA-MAE), which account for known error in the reference data itself. MA-MAE analysis indicated that fully constrained linear unmixing of coarse scale imagery across all three scenes returned an error of 10.83% per class and pixel, with regression analysis yielding a slope = 0.85, intercept = 0.04, and R2 = 0.81. Our reference data research has demonstrated a viable methodology to efficiently generate, validate, and apply AMRD to specific examples of airborne remote sensing imagery, thereby enabling direct quantitative assessment of spectral unmixing performance.
Goel, S; Chua, C; Dong, B; Butcher, M; Ahfat, F; Hindi, S K; Kotta, S
2004-02-01
Disposable devices are increasingly becoming the preferred choice where possible in contact medical equipment. To evaluate the accuracy of the disposable applanation tonometer head as a potential substitute to the standard Goldmann applanation head. The study was prospective. The intraocular pressure recordings in 80 eyes of 42 patients were compared using the disposable and standard Goldmann applanator heads. The Bland and Altman method of assessing agreement between two methods of clinical measurement was used in the analysis. The difference in the readings between the two types of tonometer heads was highly variable (mean difference=0.78 mm Hg, range=-1 to 11 mm Hg). This was because of the distortions on the applanating surface of the disposable device. When the readings associated with the defective heads were excluded, very strong agreement was obtained (mean=0.07 mm Hg, range=-1 to 2 mm Hg). Good agreement with standard Goldmann applanation is achieved with the disposable heads except where surface distortions induce significant errors. Careful inspection to ensure well-structured disposable units is imperative in disposable applanation tonometry.
NASA Technical Reports Server (NTRS)
Herrman, B. D.; Uman, M. A.; Brantley, R. D.; Krider, E. P.
1976-01-01
The principle of operation of a wideband crossed-loop magnetic-field direction finder is studied by comparing the bearing determined from the NS and EW magnetic fields at various times up to 155 microsec after return stroke initiation with the TV-determined lightning channel base direction. For 40 lightning strokes in the 3 to 12 km range, the difference between the bearings found from magnetic fields sampled at times between 1 and 10 microsec and the TV channel-base data has a standard deviation of 3-4 deg. Included in this standard deviation is a 2-3 deg measurement error. For fields sampled at progressively later times, both the mean and the standard deviation of the difference between the direction-finder bearing and the TV bearing increase. Near 150 microsec, means are about 35 deg and standard deviations about 60 deg. The physical reasons for the late-time inaccuracies in the wideband direction finder and the occurrence of these effects in narrow-band VLF direction finders are considered.
Comparing Measurement Error between Two Different Methods of Measurement of Various Magnitudes
ERIC Educational Resources Information Center
Zavorsky, Gerald S.
2010-01-01
Measurement error is a common problem in several fields of research such as medicine, physiology, and exercise science. The standard deviation of repeated measurements on the same person is the measurement error. One way of presenting measurement error is called the repeatability, which is 2.77 multiplied by the within subject standard deviation.…
Kovalchik, Stephanie A; Cumberland, William G
2012-05-01
Subgroup analyses are important to medical research because they shed light on the heterogeneity of treatment effectts. A treatment-covariate interaction in an individual patient data (IPD) meta-analysis is the most reliable means to estimate how a subgroup factor modifies a treatment's effectiveness. However, owing to the challenges in collecting participant data, an approach based on aggregate data might be the only option. In these circumstances, it would be useful to assess the relative efficiency and power loss of a subgroup analysis without patient-level data. We present methods that use aggregate data to estimate the standard error of an IPD meta-analysis' treatment-covariate interaction for regression models of a continuous or dichotomous patient outcome. Numerical studies indicate that the estimators have good accuracy. An application to a previously published meta-regression illustrates the practical utility of the methodology. © 2012 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.
Padé Approximant and Minimax Rational Approximation in Standard Cosmology
NASA Astrophysics Data System (ADS)
Zaninetti, Lorenzo
2016-02-01
The luminosity distance in the standard cosmology as given by $\\Lambda$CDM and consequently the distance modulus for supernovae can be defined by the Pad\\'e approximant. A comparison with a known analytical solution shows that the Pad\\'e approximant for the luminosity distance has an error of $4\\%$ at redshift $= 10$. A similar procedure for the Taylor expansion of the luminosity distance gives an error of $4\\%$ at redshift $=0.7 $; this means that for the luminosity distance, the Pad\\'e approximation is superior to the Taylor series. The availability of an analytical expression for the distance modulus allows applying the Levenberg--Marquardt method to derive the fundamental parameters from the available compilations for supernovae. A new luminosity function for galaxies derived from the truncated gamma probability density function models the observed luminosity function for galaxies when the observed range in absolute magnitude is modeled by the Pad\\'e approximant. A comparison of $\\Lambda$CDM with other cosmologies is done adopting a statistical point of view.
Intestinal helminths induce haematological changes in dogs from Jabalpur, India.
Qadir, S; Dixit, A K; Dixit, P; Sharma, R L
2011-12-01
The effect of canine intestinal helminths on the haematological profile of 200 dogs, of both sexes and variable age, visiting university veterinary clinics for routine examination was investigated. The dogs were assigned to parasitized (n = 39) and non-parasitized (n = 161) groups of animals. Coprological examination revealed a 19.5% prevalence of different species of the helminths. Of these animals, 10.25% had mixed infections with Ancylostoma caninum, Toxascaris spp. and Dipylidium caninum. The intensity of A. caninum infection was the highest, with mean egg counts of 951.43 (standard error 88.66), followed by Toxascaris 283.33 (standard error 116.81) and D. caninum. The parasitized animals had significantly lower levels of haemoglobin, packed cell volume and total erythrocyte counts than non-parasitized animals (P < 0.01). Values of other parameters, except for lymphocytes and eosinophils, were not different between the two groups. Analyses of the haematological profile revealed normocytic hypochromic anaemia in the parasitized group of animals.
Scoggins, John F; Weinberg, Daniel A
2017-06-01
Published estimates of the healthcare coinsurance elasticity coefficient have typically relied on annual observations of individual healthcare expenditures even though health plan membership and expenditures are traditionally reported in monthly units and several studies have stressed the need for demand models to recognize the episodic nature of healthcare. Summing individual healthcare expenditures into annual observations complicates two common challenges of statistical inference, heteroscedasticity, and regressor endogeneity. This paper estimates the elasticity coefficient using a monthly panel data model that addresses the heteroscedasticity and endogeneity problems with relative ease. Healthcare claims data from employees of King County, Washington, during 2005 to 2011 were used to estimate the mean point elasticity coefficient: -0.314 (0.015 standard error) to -0.145 (0.015 standard error) depending on model specification. These estimates bracket the -0.2 point estimate (range: -0.22 to -0.17) derived from the famous Rand Health Insurance Experiment. Copyright © 2016 John Wiley & Sons, Ltd. Copyright © 2016 John Wiley & Sons, Ltd.
Unified Computational Methods for Regression Analysis of Zero-Inflated and Bound-Inflated Data
Yang, Yan; Simpson, Douglas
2010-01-01
Bounded data with excess observations at the boundary are common in many areas of application. Various individual cases of inflated mixture models have been studied in the literature for bound-inflated data, yet the computational methods have been developed separately for each type of model. In this article we use a common framework for computing these models, and expand the range of models for both discrete and semi-continuous data with point inflation at the lower boundary. The quasi-Newton and EM algorithms are adapted and compared for estimation of model parameters. The numerical Hessian and generalized Louis method are investigated as means for computing standard errors after optimization. Correlated data are included in this framework via generalized estimating equations. The estimation of parameters and effectiveness of standard errors are demonstrated through simulation and in the analysis of data from an ultrasound bioeffect study. The unified approach enables reliable computation for a wide class of inflated mixture models and comparison of competing models. PMID:20228950
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pickles, W.L.; McClure, J.W.; Howell, R.H.
1978-05-01
A sophisticated nonlinear multiparameter fitting program was used to produce a best fit calibration curve for the response of an x-ray fluorescence analyzer to uranium nitrate, freeze dried, 0.2% accurate, gravimetric standards. The program is based on unconstrained minimization subroutine, VA02A. The program considers the mass values of the gravimetric standards as parameters to be fit along with the normal calibration curve parameters. The fitting procedure weights with the system errors and the mass errors in a consistent way. The resulting best fit calibration curve parameters reflect the fact that the masses of the standard samples are measured quantities withmore » a known error. Error estimates for the calibration curve parameters can be obtained from the curvature of the ''Chi-Squared Matrix'' or from error relaxation techniques. It was shown that nondispersive XRFA of 0.1 to 1 mg freeze-dried UNO/sub 3/ can have an accuracy of 0.2% in 1000 s.« less
Improving the accuracy of Møller-Plesset perturbation theory with neural networks
NASA Astrophysics Data System (ADS)
McGibbon, Robert T.; Taube, Andrew G.; Donchev, Alexander G.; Siva, Karthik; Hernández, Felipe; Hargus, Cory; Law, Ka-Hei; Klepeis, John L.; Shaw, David E.
2017-10-01
Noncovalent interactions are of fundamental importance across the disciplines of chemistry, materials science, and biology. Quantum chemical calculations on noncovalently bound complexes, which allow for the quantification of properties such as binding energies and geometries, play an essential role in advancing our understanding of, and building models for, a vast array of complex processes involving molecular association or self-assembly. Because of its relatively modest computational cost, second-order Møller-Plesset perturbation (MP2) theory is one of the most widely used methods in quantum chemistry for studying noncovalent interactions. MP2 is, however, plagued by serious errors due to its incomplete treatment of electron correlation, especially when modeling van der Waals interactions and π-stacked complexes. Here we present spin-network-scaled MP2 (SNS-MP2), a new semi-empirical MP2-based method for dimer interaction-energy calculations. To correct for errors in MP2, SNS-MP2 uses quantum chemical features of the complex under study in conjunction with a neural network to reweight terms appearing in the total MP2 interaction energy. The method has been trained on a new data set consisting of over 200 000 complete basis set (CBS)-extrapolated coupled-cluster interaction energies, which are considered the gold standard for chemical accuracy. SNS-MP2 predicts gold-standard binding energies of unseen test compounds with a mean absolute error of 0.04 kcal mol-1 (root-mean-square error 0.09 kcal mol-1), a 6- to 7-fold improvement over MP2. To the best of our knowledge, its accuracy exceeds that of all extant density functional theory- and wavefunction-based methods of similar computational cost, and is very close to the intrinsic accuracy of our benchmark coupled-cluster methodology itself. Furthermore, SNS-MP2 provides reliable per-conformation confidence intervals on the predicted interaction energies, a feature not available from any alternative method.
Improving the accuracy of Møller-Plesset perturbation theory with neural networks.
McGibbon, Robert T; Taube, Andrew G; Donchev, Alexander G; Siva, Karthik; Hernández, Felipe; Hargus, Cory; Law, Ka-Hei; Klepeis, John L; Shaw, David E
2017-10-28
Noncovalent interactions are of fundamental importance across the disciplines of chemistry, materials science, and biology. Quantum chemical calculations on noncovalently bound complexes, which allow for the quantification of properties such as binding energies and geometries, play an essential role in advancing our understanding of, and building models for, a vast array of complex processes involving molecular association or self-assembly. Because of its relatively modest computational cost, second-order Møller-Plesset perturbation (MP2) theory is one of the most widely used methods in quantum chemistry for studying noncovalent interactions. MP2 is, however, plagued by serious errors due to its incomplete treatment of electron correlation, especially when modeling van der Waals interactions and π-stacked complexes. Here we present spin-network-scaled MP2 (SNS-MP2), a new semi-empirical MP2-based method for dimer interaction-energy calculations. To correct for errors in MP2, SNS-MP2 uses quantum chemical features of the complex under study in conjunction with a neural network to reweight terms appearing in the total MP2 interaction energy. The method has been trained on a new data set consisting of over 200 000 complete basis set (CBS)-extrapolated coupled-cluster interaction energies, which are considered the gold standard for chemical accuracy. SNS-MP2 predicts gold-standard binding energies of unseen test compounds with a mean absolute error of 0.04 kcal mol -1 (root-mean-square error 0.09 kcal mol -1 ), a 6- to 7-fold improvement over MP2. To the best of our knowledge, its accuracy exceeds that of all extant density functional theory- and wavefunction-based methods of similar computational cost, and is very close to the intrinsic accuracy of our benchmark coupled-cluster methodology itself. Furthermore, SNS-MP2 provides reliable per-conformation confidence intervals on the predicted interaction energies, a feature not available from any alternative method.
NASA Astrophysics Data System (ADS)
Behnabian, Behzad; Mashhadi Hossainali, Masoud; Malekzadeh, Ahad
2018-02-01
The cross-validation technique is a popular method to assess and improve the quality of prediction by least squares collocation (LSC). We present a formula for direct estimation of the vector of cross-validation errors (CVEs) in LSC which is much faster than element-wise CVE computation. We show that a quadratic form of CVEs follows Chi-squared distribution. Furthermore, a posteriori noise variance factor is derived by the quadratic form of CVEs. In order to detect blunders in the observations, estimated standardized CVE is proposed as the test statistic which can be applied when noise variances are known or unknown. We use LSC together with the methods proposed in this research for interpolation of crustal subsidence in the northern coast of the Gulf of Mexico. The results show that after detection and removing outliers, the root mean square (RMS) of CVEs and estimated noise standard deviation are reduced about 51 and 59%, respectively. In addition, RMS of LSC prediction error at data points and RMS of estimated noise of observations are decreased by 39 and 67%, respectively. However, RMS of LSC prediction error on a regular grid of interpolation points covering the area is only reduced about 4% which is a consequence of sparse distribution of data points for this case study. The influence of gross errors on LSC prediction results is also investigated by lower cutoff CVEs. It is indicated that after elimination of outliers, RMS of this type of errors is also reduced by 19.5% for a 5 km radius of vicinity. We propose a method using standardized CVEs for classification of dataset into three groups with presumed different noise variances. The noise variance components for each of the groups are estimated using restricted maximum-likelihood method via Fisher scoring technique. Finally, LSC assessment measures were computed for the estimated heterogeneous noise variance model and compared with those of the homogeneous model. The advantage of the proposed method is the reduction in estimated noise levels for those groups with the fewer number of noisy data points.
Munzimi, Yolande A.; Hansen, Matthew C.; Adusei, Bernard; Senay, Gabriel B.
2015-01-01
Quantitative understanding of Congo River basin hydrological behavior is poor because of the basin’s limited hydrometeorological observation network. In cases such as the Congo basin where ground data are scarce, satellite-based estimates of rainfall, such as those from the joint NASA/JAXA Tropical Rainfall Measuring Mission (TRMM), can be used to quantify rainfall patterns. This study tests and reports the use of limited rainfall gauge data within the Democratic Republic of Congo (DRC) to recalibrate a TRMM science product (TRMM 3B42, version 6) in characterizing precipitation and climate in the Congo basin. Rainfall estimates from TRMM 3B42, version 6, are compared and adjusted using ground precipitation data from 12 DRC meteorological stations from 1998 to 2007. Adjustment is achieved on a monthly scale by using a regression-tree algorithm. The output is a new, basin-specific estimate of monthly and annual rainfall and climate types across the Congo basin. This new product and the latest version-7 TRMM 3B43 science product are validated by using an independent long-term dataset of historical isohyets. Standard errors of the estimate, root-mean-square errors, and regression coefficients r were slightly and uniformly better with the recalibration from this study when compared with the 3B43 product (mean monthly standard errors of 31 and 40 mm of precipitation and mean r2 of 0.85 and 0.82, respectively), but the 3B43 product was slightly better in terms of bias estimation (1.02 and 1.00). Despite reasonable doubts that have been expressed in studies of other tropical regions, within the Congo basin the TRMM science product (3B43) performed in a manner that is comparable to the performance of the recalibrated product that is described in this study.
Gundle, Kenneth R; White, Jedediah K; Conrad, Ernest U; Ching, Randal P
2017-01-01
Surgical navigation systems are increasingly used to aid resection and reconstruction of osseous malignancies. In the process of implementing image-based surgical navigation systems, there are numerous opportunities for error that may impact surgical outcome. This study aimed to examine modifiable sources of error in an idealized scenario, when using a bidirectional infrared surgical navigation system. Accuracy and precision were assessed using a computerized-numerical-controlled (CNC) machined grid with known distances between indentations while varying: 1) the distance from the grid to the navigation camera (range 150 to 247cm), 2) the distance from the grid to the patient tracker device (range 20 to 40cm), and 3) whether the minimum or maximum number of bidirectional infrared markers were actively functioning. For each scenario, distances between grid points were measured at 10-mm increments between 10 and 120mm, with twelve measurements made at each distance. The accuracy outcome was the root mean square (RMS) error between the navigation system distance and the actual grid distance. To assess precision, four indentations were recorded six times for each scenario while also varying the angle of the navigation system pointer. The outcome for precision testing was the standard deviation of the distance between each measured point to the mean three-dimensional coordinate of the six points for each cluster. Univariate and multiple linear regression revealed that as the distance from the navigation camera to the grid increased, the RMS error increased (p<0.001). The RMS error also increased when not all infrared markers were actively tracking (p=0.03), and as the measured distance increased (p<0.001). In a multivariate model, these factors accounted for 58% of the overall variance in the RMS error. Standard deviations in repeated measures also increased when not all infrared markers were active (p<0.001), and as the distance between navigation camera and physical space increased (p=0.005). Location of the patient tracker did not affect accuracy (0.36) or precision (p=0.97). In our model laboratory test environment, the infrared bidirectional navigation system was more accurate and precise when the distance from the navigation camera to the physical (working) space was minimized and all bidirectional markers were active. These findings may require alterations in operating room setup and software changes to improve the performance of this system.
NASA Astrophysics Data System (ADS)
Marchant, T. E.; Joshi, K. D.; Moore, C. J.
2018-03-01
Radiotherapy dose calculations based on cone-beam CT (CBCT) images can be inaccurate due to unreliable Hounsfield units (HU) in the CBCT. Deformable image registration of planning CT images to CBCT, and direct correction of CBCT image values are two methods proposed to allow heterogeneity corrected dose calculations based on CBCT. In this paper we compare the accuracy and robustness of these two approaches. CBCT images for 44 patients were used including pelvis, lung and head & neck sites. CBCT HU were corrected using a ‘shading correction’ algorithm and via deformable registration of planning CT to CBCT using either Elastix or Niftyreg. Radiotherapy dose distributions were re-calculated with heterogeneity correction based on the corrected CBCT and several relevant dose metrics for target and OAR volumes were calculated. Accuracy of CBCT based dose metrics was determined using an ‘override ratio’ method where the ratio of the dose metric to that calculated on a bulk-density assigned version of the same image is assumed to be constant for each patient, allowing comparison to the patient’s planning CT as a gold standard. Similar performance is achieved by shading corrected CBCT and both deformable registration algorithms, with mean and standard deviation of dose metric error less than 1% for all sites studied. For lung images, use of deformed CT leads to slightly larger standard deviation of dose metric error than shading corrected CBCT with more dose metric errors greater than 2% observed (7% versus 1%).
Cirrus Cloud Retrieval Using Infrared Sounding Data: Multilevel Cloud Errors.
NASA Astrophysics Data System (ADS)
Baum, Bryan A.; Wielicki, Bruce A.
1994-01-01
In this study we perform an error analysis for cloud-top pressure retrieval using the High-Resolution Infrared Radiometric Sounder (HIRS/2) 15-µm CO2 channels for the two-layer case of transmissive cirrus overlying an overcast, opaque stratiform cloud. This analysis includes standard deviation and bias error due to instrument noise and the presence of two cloud layers, the lower of which is opaque. Instantaneous cloud pressure retrieval errors are determined for a range of cloud amounts (0.1 1.0) and cloud-top pressures (850250 mb). Large cloud-top pressure retrieval errors are found to occur when a lower opaque layer is present underneath an upper transmissive cloud layer in the satellite field of view (FOV). Errors tend to increase with decreasing upper-cloud elective cloud amount and with decreasing cloud height (increasing pressure). Errors in retrieved upper-cloud pressure result in corresponding errors in derived effective cloud amount. For the case in which a HIRS FOV has two distinct cloud layers, the difference between the retrieved and actual cloud-top pressure is positive in all casts, meaning that the retrieved upper-cloud height is lower than the actual upper-cloud height. In addition, errors in retrieved cloud pressure are found to depend upon the lapse rate between the low-level cloud top and the surface. We examined which sounder channel combinations would minimize the total errors in derived cirrus cloud height caused by instrument noise and by the presence of a lower-level cloud. We find that while the sounding channels that peak between 700 and 1000 mb minimize random errors, the sounding channels that peak at 300—500 mb minimize bias errors. For a cloud climatology, the bias errors are most critical.
Improving Arterial Spin Labeling by Using Deep Learning.
Kim, Ki Hwan; Choi, Seung Hong; Park, Sung-Hong
2018-05-01
Purpose To develop a deep learning algorithm that generates arterial spin labeling (ASL) perfusion images with higher accuracy and robustness by using a smaller number of subtraction images. Materials and Methods For ASL image generation from pair-wise subtraction, we used a convolutional neural network (CNN) as a deep learning algorithm. The ground truth perfusion images were generated by averaging six or seven pairwise subtraction images acquired with (a) conventional pseudocontinuous arterial spin labeling from seven healthy subjects or (b) Hadamard-encoded pseudocontinuous ASL from 114 patients with various diseases. CNNs were trained to generate perfusion images from a smaller number (two or three) of subtraction images and evaluated by means of cross-validation. CNNs from the patient data sets were also tested on 26 separate stroke data sets. CNNs were compared with the conventional averaging method in terms of mean square error and radiologic score by using a paired t test and/or Wilcoxon signed-rank test. Results Mean square errors were approximately 40% lower than those of the conventional averaging method for the cross-validation with the healthy subjects and patients and the separate test with the patients who had experienced a stroke (P < .001). Region-of-interest analysis in stroke regions showed that cerebral blood flow maps from CNN (mean ± standard deviation, 19.7 mL per 100 g/min ± 9.7) had smaller mean square errors than those determined with the conventional averaging method (43.2 ± 29.8) (P < .001). Radiologic scoring demonstrated that CNNs suppressed noise and motion and/or segmentation artifacts better than the conventional averaging method did (P < .001). Conclusion CNNs provided superior perfusion image quality and more accurate perfusion measurement compared with those of the conventional averaging method for generation of ASL images from pair-wise subtraction images. © RSNA, 2017.
On the application of photogrammetry to the fitting of jawbone-anchored bridges.
Strid, K G
1985-01-01
Misfit between a jawbone-anchored bridge and the abutments in the patient's jaw may result in, for example, fixture fracture. To achieve improved alignment, the bridge base could be prepared in a numerically-controlled tooling machine using measured abutment coordinates as primary data. For each abutment, the measured values must comprise the coordinates of a reference surface as well as the spatial orientation of the fixture/abutment longitudinal axis. Stereophotogrammetry was assumed to be the measuring method of choice. To assess its potentials, a lower-jaw model with accurately positioned signals was stereophotographed and the films were measured in a stereocomparator. Model-space coordinates, computed from the image coordinates, were compared to the known signal coordinates. The root-mean-square error in position was determined to 0.03-0.08 mm, the maximum individual error amounting to 0.12 mm, whereas the r. m. s. error in axis direction was found to be 0.5-1.5 degrees with a maximum individual error of 1.8 degrees. These errors are of the same order as can be achieved by careful impression techniques. The method could be useful, but because of its complexity, stereophotogrammetry is not recommended as a standard procedure.
Evaluation of Eight Methods for Aligning Orientation of Two Coordinate Systems.
Mecheri, Hakim; Robert-Lachaine, Xavier; Larue, Christian; Plamondon, André
2016-08-01
The aim of this study was to evaluate eight methods for aligning the orientation of two different local coordinate systems. Alignment is very important when combining two different systems of motion analysis. Two of the methods were developed specifically for biomechanical studies, and because there have been at least three decades of algorithm development in robotics, it was decided to include six methods from this field. To compare these methods, an Xsens sensor and two Optotrak clusters were attached to a Plexiglas plate. The first optical marker cluster was fixed on the sensor and 20 trials were recorded. The error of alignment was calculated for each trial, and the mean, the standard deviation, and the maximum values of this error over all trials were reported. One-way repeated measures analysis of variance revealed that the alignment error differed significantly across the eight methods. Post-hoc tests showed that the alignment error from the methods based on angular velocities was significantly lower than for the other methods. The method using angular velocities performed the best, with an average error of 0.17 ± 0.08 deg. We therefore recommend this method, which is easy to perform and provides accurate alignment.
Estimating standard errors in feature network models.
Frank, Laurence E; Heiser, Willem J
2007-05-01
Feature network models are graphical structures that represent proximity data in a discrete space while using the same formalism that is the basis of least squares methods employed in multidimensional scaling. Existing methods to derive a network model from empirical data only give the best-fitting network and yield no standard errors for the parameter estimates. The additivity properties of networks make it possible to consider the model as a univariate (multiple) linear regression problem with positivity restrictions on the parameters. In the present study, both theoretical and empirical standard errors are obtained for the constrained regression parameters of a network model with known features. The performance of both types of standard error is evaluated using Monte Carlo techniques.
NASA Astrophysics Data System (ADS)
Mai, W.; Zhang, J.-F.; Zhao, X.-M.; Li, Z.; Xu, Z.-W.
2017-11-01
Wastewater from the dye industry is typically analyzed using a standard method for measurement of chemical oxygen demand (COD) or by a single-wavelength spectroscopic method. To overcome the disadvantages of these methods, ultraviolet-visible (UV-Vis) spectroscopy was combined with principal component regression (PCR) and partial least squares regression (PLSR) in this study. Unlike the standard method, this method does not require digestion of the samples for preparation. Experiments showed that the PLSR model offered high prediction performance for COD, with a mean relative error of about 5% for two dyes. This error is similar to that obtained with the standard method. In this study, the precision of the PLSR model decreased with the number of dye compounds present. It is likely that multiple models will be required in reality, and the complexity of a COD monitoring system would be greatly reduced if the PLSR model is used because it can include several dyes. UV-Vis spectroscopy with PLSR successfully enhanced the performance of COD prediction for dye wastewater and showed good potential for application in on-line water quality monitoring.
Estimating Discharge in Low-Order Rivers With High-Resolution Aerial Imagery
NASA Astrophysics Data System (ADS)
King, Tyler V.; Neilson, Bethany T.; Rasmussen, Mitchell T.
2018-02-01
Remote sensing of river discharge promises to augment in situ gauging stations, but the majority of research in this field focuses on large rivers (>50 m wide). We present a method for estimating volumetric river discharge in low-order (<50 m wide) rivers from remotely sensed data by coupling high-resolution imagery with one-dimensional hydraulic modeling at so-called virtual gauging stations. These locations were identified as locations where the river contracted under low flows, exposing a substantial portion of the river bed. Topography of the exposed river bed was photogrammetrically extracted from high-resolution aerial imagery while the geometry of the remaining inundated portion of the channel was approximated based on adjacent bank topography and maximum depth assumptions. Full channel bathymetry was used to create hydraulic models that encompassed virtual gauging stations. Discharge for each aerial survey was estimated with the hydraulic model by matching modeled and remotely sensed wetted widths. Based on these results, synthetic width-discharge rating curves were produced for each virtual gauging station. In situ observations were used to determine the accuracy of wetted widths extracted from imagery (mean error 0.36 m), extracted bathymetry (mean vertical RMSE 0.23 m), and discharge (mean percent error 7% with a standard deviation of 6%). Sensitivity analyses were conducted to determine the influence of inundated channel bathymetry and roughness parameters on estimated discharge. Comparison of synthetic rating curves produced through sensitivity analyses show that reasonable ranges of parameter values result in mean percent errors in predicted discharges of 12%-27%.
A practical method of estimating standard error of age in the fission track dating method
Johnson, N.M.; McGee, V.E.; Naeser, C.W.
1979-01-01
A first-order approximation formula for the propagation of error in the fission track age equation is given by PA = C[P2s+P2i+P2??-2rPsPi] 1 2, where PA, Ps, Pi and P?? are the percentage error of age, of spontaneous track density, of induced track density, and of neutron dose, respectively, and C is a constant. The correlation, r, between spontaneous are induced track densities is a crucial element in the error analysis, acting generally to improve the standard error of age. In addition, the correlation parameter r is instrumental is specifying the level of neutron dose, a controlled variable, which will minimize the standard error of age. The results from the approximation equation agree closely with the results from an independent statistical model for the propagation of errors in the fission-track dating method. ?? 1979.
Harada, Saki; Suzuki, Akio; Nishida, Shohei; Kobayashi, Ryo; Tamai, Sayuri; Kumada, Keisuke; Murakami, Nobuo; Itoh, Yoshinori
2017-06-01
Insulin is frequently used for glycemic control. Medication errors related to insulin are a common problem for medical institutions. Here, we prepared a standardized sliding scale insulin (SSI) order sheet and assessed the effect of its introduction. Observations before and after the introduction of the standardized SSI template were conducted at Gifu University Hospital. The incidence of medication errors, hyperglycemia, and hypoglycemia related to SSI were obtained from the electronic medical records. The introduction of the standardized SSI order sheet significantly reduced the incidence of medication errors related to SSI compared with that prior to its introduction (12/165 [7.3%] vs 4/159 [2.1%], P = .048). However, the incidence of hyperglycemia (≥250 mg/dL) and hypoglycemia (≤50 mg/dL) in patients who received SSI was not significantly different between the 2 groups. The introduction of the standardized SSI order sheet reduced the incidence of medication errors related to SSI. © 2016 John Wiley & Sons, Ltd.
A Criterion to Control Nonlinear Error in the Mixed-Mode Bending Test
NASA Technical Reports Server (NTRS)
Reeder, James R.
2002-01-01
The mixed-mode bending test ha: been widely used to measure delamination toughness and was recently standardized by ASTM as Standard Test Method D6671-01. This simple test is a combination of the standard Mode I (opening) test and a Mode II (sliding) test. This test uses a unidirectional composite test specimen with an artificial delamination subjected to bending loads to characterize when a delamination will extend. When the displacements become large, the linear theory used to analyze the results of the test yields errors in the calcu1ated toughness values. The current standard places no limit on the specimen loading and therefore test data can be created using the standard that are significantly in error. A method of limiting the error that can be incurred in the calculated toughness values is needed. In this paper, nonlinear models of the MMB test are refined. One of the nonlinear models is then used to develop a simple criterion for prescribing conditions where thc nonlinear error will remain below 5%.
Arterial Blood Flow Measurement Using Digital Subtraction Angiography (DSA)
NASA Astrophysics Data System (ADS)
Swanson, David K.; Myerowitz, P. David; Van Lysel, Michael S.; Peppler, Walter W.; Fields, Barry L.; Watson, Kim M.; O'Connor, Julia
1984-08-01
Standard angiography demonstrates the anatomy of arterial occlusive disease but not its physiological signficance. Using intravenous digital subtraction angiography (DSA), we investigated transit-time videodensitometric techniques in measuring femoral arterial flows in dogs. These methods have been successfully applied to intraarterial DSA but not to intravenous DSA. Eight 20 kg dogs were instrumented with an electromagnetic flow probe and a balloon occluder above an imaged segment of femoral artery. 20 cc of Renografin 76 was power injected at 15 cc/sec into the right atrium. Flow in the femoral artery was varied by partial balloon occlusion or peripheral dilatation following induced ischemia resulting in 51 flow measurements varying from 15 to 270 cc/min. Three different transit-time techniques were studied: crosscorrelation, mean square error, and two leading edge methods. Correlation between videodensitometry and flowmeter measurements using these different techniques ranged from 0.78 to 0.88 with a mean square error of 29 to 37 cc/min. Blood flow information using several different transit-time techniques can be obtained with intravenous DSA.
3D foot shape generation from 2D information.
Luximon, Ameersing; Goonetilleke, Ravindra S; Zhang, Ming
2005-05-15
Two methods to generate an individual 3D foot shape from 2D information are proposed. A standard foot shape was first generated and then scaled based on known 2D information. In the first method, the foot outline and the foot height were used, and in the second, the foot outline and the foot profile were used. The models were developed using 40 participants and then validated using a different set of 40 participants. Results show that each individual foot shape can be predicted within a mean absolute error of 1.36 mm for the left foot and 1.37 mm for the right foot using the first method, and within a mean absolute error of 1.02 mm for the left foot and 1.02 mm for the right foot using the second method. The second method shows somewhat improved accuracy even though it requires two images. Both the methods are relatively cheaper than using a scanner to determine the 3D foot shape for custom footwear design.
Driving characteristics of teens with attention deficit hyperactivity and autism spectrum disorder.
Classen, Sherrilene; Monahan, Miriam; Wang, Yanning
2013-01-01
Vehicle crashes are a leading cause of death among teens. Teens with attention deficit hyperactivity disorder (ADHD), autism spectrum disorder (ASD), or both (ADHD-ASD) may have a greater crash risk. We examined the between-groups demographic, clinical, and predriving performance differences of 22 teens with ADHD-ASD (mean age = 15.05, standard deviation [SD] = 0.95) and 22 healthy control (HC) teens (mean age = 14.32, SD = 0.72). Compared with HC teens, the teens with ADHD-ASD performed more poorly on right-eye visual acuity, selective attention, visual-motor integration, cognition, and motor performance and made more errors on the driving simulator pertaining to visual scanning, speed regulation, lane maintenance, adjustment to stimuli, and total number of driving errors. Teens with ADHD-ASD, compared with HC teens, may have more predriving deficits and as such require the skills of a certified driving rehabilitation specialist to assess readiness to drive. Copyright © 2013 by the American Occupational Therapy Association, Inc.
Accuracy assessment of TanDEM-X IDEM using airborne LiDAR on the area of Poland
NASA Astrophysics Data System (ADS)
Woroszkiewicz, Małgorzata; Ewiak, Ireneusz; Lulkowska, Paulina
2017-06-01
The TerraSAR-X add-on for Digital Elevation Measurement (TanDEM-X) mission launched in 2010 is another programme - after the Shuttle Radar Topography Mission (SRTM) in 2000 - that uses space-borne radar interferometry to build a global digital surface model. This article presents the accuracy assessment of the TanDEM-X intermediate Digital Elevation Model (IDEM) provided by the German Aerospace Center (DLR) under the project "Accuracy assessment of a Digital Elevation Model based on TanDEM-X data" for the southwestern territory of Poland. The study area included: open terrain, urban terrain and forested terrain. Based on a set of 17,498 reference points acquired by airborne laser scanning, the mean errors of average heights and standard deviations were calculated for areas with a terrain slope below 2 degrees, between 2 and 6 degrees and above 6 degrees. The absolute accuracy of the IDEM data for the analysed area, expressed as a root mean square error (Total RMSE), was 0.77 m.
Koutstaal, Wilma
2003-03-01
Investigations of memory deficits in older individuals have concentrated on their increased likelihood of forgetting events or details of events that were actually encountered (errors of omission). However, mounting evidence demonstrates that normal cognitive aging also is associated with an increased propensity for errors of commission--shown in false alarms or false recognition. The present study examined the origins of this age difference. Older and younger adults each performed three types of memory tasks in which details of encountered items might influence performance. Although older adults showed greater false recognition of related lures on a standard (identical) old/new episodic recognition task, older and younger adults showed parallel effects of detail on repetition priming and meaning-based episodic recognition (decreased priming and decreased meaning-based recognition for different relative to same exemplars). The results suggest that the older adults encoded details but used them less effectively than the younger adults in the recognition context requiring their deliberate, controlled use.
Aksan, Nazan; Hacker, Sarah D; Sager, Lauren; Dawson, Jeffrey; Anderson, Steven; Rizzo, Matthew
2016-03-01
Forty-two younger (Mean age = 35) and 37 older drivers (Mean age = 77) completed four similar simulated drives. In addition, 32 younger and 30 older drivers completed a standard on-road drive in an instrumented vehicle. Performance in the simulated drives was evaluated using both electronic drive data and video-review of errors. Safety errors during the on-road drive were evaluated by a certified driving instructor blind to simulator performance, using state Department of Transportation criteria. We examined the degree of convergence in performance across the two platforms on various driving tasks including lane change, lane keeping, speed control, stopping, turns, and overall performance. Differences based on age group indicated a pattern of strong relative validity for simulator measures. However, relative rank-order in specific metrics of performance suggested a pattern of moderate relative validity. The findings have implications for the use of simulators in assessments of driving safety as well as its use in training and/or rehabilitation settings.
Aksan, Nazan; Hacker, Sarah D.; Sager, Lauren; Dawson, Jeffrey; Anderson, Steven; Rizzo, Matthew
2017-01-01
Forty-two younger (Mean age = 35) and 37 older drivers (Mean age = 77) completed four similar simulated drives. In addition, 32 younger and 30 older drivers completed a standard on-road drive in an instrumented vehicle. Performance in the simulated drives was evaluated using both electronic drive data and video-review of errors. Safety errors during the on-road drive were evaluated by a certified driving instructor blind to simulator performance, using state Department of Transportation criteria. We examined the degree of convergence in performance across the two platforms on various driving tasks including lane change, lane keeping, speed control, stopping, turns, and overall performance. Differences based on age group indicated a pattern of strong relative validity for simulator measures. However, relative rank-order in specific metrics of performance suggested a pattern of moderate relative validity. The findings have implications for the use of simulators in assessments of driving safety as well as its use in training and/or rehabilitation settings. PMID:28649572
Evaluation of “Autotune” calibration against manual calibration of building energy models
Chaudhary, Gaurav; New, Joshua; Sanyal, Jibonananda; ...
2016-08-26
Our paper demonstrates the application of Autotune, a methodology aimed at automatically producing calibrated building energy models using measured data, in two case studies. In the first case, a building model is de-tuned by deliberately injecting faults into more than 60 parameters. This model was then calibrated using Autotune and its accuracy with respect to the original model was evaluated in terms of the industry-standard normalized mean bias error and coefficient of variation of root mean squared error metrics set forth in ASHRAE Guideline 14. In addition to whole-building energy consumption, outputs including lighting, plug load profiles, HVAC energy consumption,more » zone temperatures, and other variables were analyzed. In the second case, Autotune calibration is compared directly to experts’ manual calibration of an emulated-occupancy, full-size residential building with comparable calibration results in much less time. Lastly, our paper concludes with a discussion of the key strengths and weaknesses of auto-calibration approaches.« less
About problematic peculiarities of Fault Tolerance digital regulation organization
NASA Astrophysics Data System (ADS)
Rakov, V. I.; Zakharova, O. V.
2018-05-01
The solution of problems concerning estimation of working capacity of regulation chains and possibilities of preventing situations of its violation in three directions are offered. The first direction is working out (creating) the methods of representing the regulation loop (circuit) by means of uniting (combining) diffuse components and forming algorithmic tooling for building predicates of serviceability assessment separately for the components and the for regulation loops (circuits, contours) in general. The second direction is creating methods of Fault Tolerance redundancy in the process of complex assessment of current values of control actions, closure errors and their regulated parameters. The third direction is creating methods of comparing the processes of alteration (change) of control actions, errors of closure and regulating parameters with their standard models or their surroundings. This direction allows one to develop methods and algorithmic tool means, aimed at preventing loss of serviceability and effectiveness of not only a separate digital regulator, but also the whole complex of Fault Tolerance regulation.
Flodgren, G; Heiden, M; Lyskov, E; Crenshaw, A G
2007-03-01
In the present study, we assessed the wrist kinetics (range of motion, mean position, velocity and mean power frequency in radial/ulnar deviation, flexion/extension, and pronation/supination) associated with performing a mouse-operated computerized task involving painting rectangles on a computer screen. Furthermore, we evaluated the effects of the painting task on subjective perception of fatigue and wrist position sense. The results showed that the painting task required constrained wrist movements, and repetitive movements of about the same magnitude as those performed in mouse-operated design tasks. In addition, the painting task induced a perception of muscle fatigue in the upper extremity (Borg CR-scale: 3.5, p<0.001) and caused a reduction in the position sense accuracy of the wrist (error before: 4.6 degrees , error after: 5.6 degrees , p<0.05). This standardized painting task appears suitable for studying relevant risk factors, and therefore it offers a potential for investigating the pathophysiological mechanisms behind musculoskeletal disorders related to computer mouse use.
On the internal target model in a tracking task
NASA Technical Reports Server (NTRS)
Caglayan, A. K.; Baron, S.
1981-01-01
An optimal control model for predicting operator's dynamic responses and errors in target tracking ability is summarized. The model, which predicts asymmetry in the tracking data, is dependent on target maneuvers and trajectories. Gunners perception, decision making, control, and estimate of target positions and velocity related to crossover intervals are discussed. The model provides estimates for means, standard deviations, and variances for variables investigated and for operator estimates of future target positions and velocities.
The Relationship of Exercise to Fatigue and Quality of Life in Women With Breast Cancer
1999-08-01
exercise study during the first 3 cycles of chemotherapy. Weight change, body mass index, anorexia, nausea, caloric expenditure during exercise and... caloric expenditure increased, fatigue declined. However, the effects of exercise intensity were only significant for the least fatigue (p=.0402) and...Exercise dose and fatigue 25 Table 7. Least squares means and standard errors for four measures of daily fatigue by caloric expenditure . Caloric
NASA Astrophysics Data System (ADS)
Filmer, M. S.; Hughes, C. W.; Woodworth, P. L.; Featherstone, W. E.; Bingham, R. J.
2018-04-01
The direct method of vertical datum unification requires estimates of the ocean's mean dynamic topography (MDT) at tide gauges, which can be sourced from either geodetic or oceanographic approaches. To assess the suitability of different types of MDT for this purpose, we evaluate 13 physics-based numerical ocean models and six MDTs computed from observed geodetic and/or ocean data at 32 tide gauges around the Australian coast. We focus on the viability of numerical ocean models for vertical datum unification, classifying the 13 ocean models used as either independent (do not contain assimilated geodetic data) or non-independent (do contain assimilated geodetic data). We find that the independent and non-independent ocean models deliver similar results. Maximum differences among ocean models and geodetic MDTs reach >150 mm at several Australian tide gauges and are considered anomalous at the 99% confidence level. These differences appear to be of geodetic origin, but without additional independent information, or formal error estimates for each model, some of these errors remain inseparable. Our results imply that some ocean models have standard deviations of differences with other MDTs (using geodetic and/or ocean observations) at Australian tide gauges, and with levelling between some Australian tide gauges, of ˜ ± 50 mm . This indicates that they should be considered as an alternative to geodetic MDTs for the direct unification of vertical datums. They can also be used as diagnostics for errors in geodetic MDT in coastal zones, but the inseparability problem remains, where the error cannot be discriminated between the geoid model or altimeter-derived mean sea surface.
Chou, C P; Bentler, P M; Satorra, A
1991-11-01
Research studying robustness of maximum likelihood (ML) statistics in covariance structure analysis has concluded that test statistics and standard errors are biased under severe non-normality. An estimation procedure known as asymptotic distribution free (ADF), making no distributional assumption, has been suggested to avoid these biases. Corrections to the normal theory statistics to yield more adequate performance have also been proposed. This study compares the performance of a scaled test statistic and robust standard errors for two models under several non-normal conditions and also compares these with the results from ML and ADF methods. Both ML and ADF test statistics performed rather well in one model and considerably worse in the other. In general, the scaled test statistic seemed to behave better than the ML test statistic and the ADF statistic performed the worst. The robust and ADF standard errors yielded more appropriate estimates of sampling variability than the ML standard errors, which were usually downward biased, in both models under most of the non-normal conditions. ML test statistics and standard errors were found to be quite robust to the violation of the normality assumption when data had either symmetric and platykurtic distributions, or non-symmetric and zero kurtotic distributions.
Low-flow characteristics of streams in Virginia
Hayes, Donald C.
1991-01-01
Streamflow data were collected and low-flow characteristics computed for 715 gaged sites in Virginia Annual minimum average 7-consecutive-day flows range from 0 to 2,195 cubic feet per second for a 2-year recurrence interval and from 0 to 1,423 cubic feet per second for a 10-year recurrence interval. Drainage areas range from 0.17 to 7,320 square miles. Existing and discontinued gaged sites are separated into three types: long-term continuous-record sites, short-term continuous-record sites, and partial-record sites. Low-flow characteristics for long-term continuous-record sites are determined from frequency curves of annual minimum average 7-consecutive-day flows . Low-flow characteristics for short-term continuous-record sites are estimated by relating daily mean base-flow discharge values at a short-term site to concurrent daily mean discharge values at nearby long-term continuous-record sites having similar basin characteristics . Low-flow characteristics for partial-record sites are estimated by relating base-flow measurements to daily mean discharge values at long-term continuous-record sites. Information from the continuous-record sites and partial-record sites in Virginia are used to develop two techniques for estimating low-flow characteristics at ungaged sites. A flow-routing method is developed to estimate low-flow values at ungaged sites on gaged streams. Regional regression equations are developed for estimating low-flow values at ungaged sites on ungaged streams. The flow-routing method consists of transferring low-flow characteristics from a gaged site, either upstream or downstream, to a desired ungaged site. A simple drainage-area proration is used to transfer values when there are no major tributaries between the gaged and ungaged sites. Standard errors of estimate for108 test sites are 19 percent of the mean for estimates of low-flow characteristics having a 2-year recurrence interval and 52 percent of the mean for estimates of low-flow characteristics having a 10-year recurrence interval . A more complex transfer method must be used when major tributaries enter the stream between the gaged and ungaged sites. Twenty-four stream networks are analyzed, and predictions are made for 84 sites. Standard errors of estimate are 15 percent of the mean for estimates of low-flow characteristics having a 2-year recurrence interval and 22 percent of the mean for estimates of low-flow characteristics having a 10-year recurrence interval. Regional regression equations were developed for estimating low-flow values at ungaged sites on ungaged streams. The State was divided into eight regions on the basis of physiography and geographic grouping of the residuals computed in regression analyses . Basin characteristics that were significant in the regression analysis were drainage area, rock type, and strip-mined area. Standard errors of prediction range from 60 to139 percent for estimates of low-flow characteristics having a 2-year recurrence interval and 90 percent to 172 percent for estimates of low-flow characteristics having a 10-year recurrence interval.
Prediction of adult height in girls: the Beunen-Malina-Freitas method.
Beunen, Gaston P; Malina, Robert M; Freitas, Duarte L; Thomis, Martine A; Maia, José A; Claessens, Albrecht L; Gouveia, Elvio R; Maes, Hermine H; Lefevre, Johan
2011-12-01
The purpose of this study was to validate and cross-validate the Beunen-Malina-Freitas method for non-invasive prediction of adult height in girls. A sample of 420 girls aged 10-15 years from the Madeira Growth Study were measured at yearly intervals and then 8 years later. Anthropometric dimensions (lengths, breadths, circumferences, and skinfolds) were measured; skeletal age was assessed using the Tanner-Whitehouse 3 method and menarcheal status (present or absent) was recorded. Adult height was measured and predicted using stepwise, forward, and maximum R (2) regression techniques. Multiple correlations, mean differences, standard errors of prediction, and error boundaries were calculated. A sample of the Leuven Longitudinal Twin Study was used to cross-validate the regressions. Age-specific coefficients of determination (R (2)) between predicted and measured adult height varied between 0.57 and 0.96, while standard errors of prediction varied between 1.1 and 3.9 cm. The cross-validation confirmed the validity of the Beunen-Malina-Freitas method in girls aged 12-15 years, but at lower ages the cross-validation was less consistent. We conclude that the Beunen-Malina-Freitas method is valid for the prediction of adult height in girls aged 12-15 years. It is applicable to European populations or populations of European ancestry.
A low-cost acoustic permeameter
NASA Astrophysics Data System (ADS)
Drake, Stephen A.; Selker, John S.; Higgins, Chad W.
2017-04-01
Intrinsic permeability is an important parameter that regulates air exchange through porous media such as snow. Standard methods of measuring snow permeability are inconvenient to perform outdoors, are fraught with sampling errors, and require specialized equipment, while bringing intact samples back to the laboratory is also challenging. To address these issues, we designed, built, and tested a low-cost acoustic permeameter that allows computation of volume-averaged intrinsic permeability for a homogenous medium. In this paper, we validate acoustically derived permeability of homogenous, reticulated foam samples by comparison with results derived using a standard flow-through permeameter. Acoustic permeameter elements were designed for use in snow, but the measurement methods are not snow-specific. The electronic components - consisting of a signal generator, amplifier, speaker, microphone, and oscilloscope - are inexpensive and easily obtainable. The system is suitable for outdoor use when it is not precipitating, but the electrical components require protection from the elements in inclement weather. The permeameter can be operated with a microphone either internally mounted or buried a known depth in the medium. The calibration method depends on choice of microphone positioning. For an externally located microphone, calibration was based on a low-frequency approximation applied at 500 Hz that provided an estimate of both intrinsic permeability and tortuosity. The low-frequency approximation that we used is valid up to 2 kHz, but we chose 500 Hz because data reproducibility was maximized at this frequency. For an internally mounted microphone, calibration was based on attenuation at 50 Hz and returned only intrinsic permeability. We found that 50 Hz corresponded to a wavelength that minimized resonance frequencies in the acoustic tube and was also within the response limitations of the microphone. We used reticulated foam of known permeability (ranging from 2 × 10-7 to 3 × 10-9 m2) and estimated tortuosity of 1.05 to validate both methods. For the externally mounted microphone the mean normalized standard deviation was 6 % for permeability and 2 % for tortuosity. The mean relative error from known measurements was 17 % for permeability and 2 % for tortuosity. For the internally mounted microphone the mean normalized standard deviation for permeability was 10 % and the relative error was also 10 %. Permeability determination for an externally mounted microphone is less sensitive to environmental noise than is the internally mounted microphone and is therefore the recommended method. The approximation using the internally mounted microphone was developed as an alternative for circumstances in which placing the microphone in the medium was not feasible. Environmental noise degrades precision of both methods and is recognizable as increased scatter for replicate data points.
Accuracy of the Lifebox pulse oximeter during hypoxia in healthy volunteers.
Dubowitz, G; Breyer, K; Lipnick, M; Sall, J W; Feiner, J; Ikeda, K; MacLeod, D B; Bickler, P E
2013-12-01
Pulse oximetry is a standard of care during anaesthesia in high-income countries. However, 70% of operating environments in low- and middle-income countries have no pulse oximeter. The 'Lifebox' oximetry project set out to bridge this gap with an inexpensive oximeter meeting CE (European Conformity) and ISO (International Organization for Standardization) standards. To date, there are no performance-specific accuracy data on this instrument. The aim of this study was to establish whether the Lifebox pulse oximeter provides clinically reliable haemoglobin oxygen saturation (Sp O2 ) readings meeting USA Food and Drug Administration 510(k) standards. Using healthy volunteers, inspired oxygen fraction was adjusted to produce arterial haemoglobin oxygen saturation (Sa O2 ) readings between 71% and 100% measured with a multi-wavelength oximeter. Lifebox accuracy was expressed using bias (Sp O2 - Sa O2 ), precision (SD of the bias) and the root mean square error (Arms). Simultaneous readings of Sa O2 and Sp O2 in 57 subjects showed a mean (SD) bias of -0.41% (2.28%) and Arms 2.31%. The Lifebox pulse oximeter meets current USA Food and Drug Administration standards for accuracy, thus representing an inexpensive solution for patient monitoring without compromising standards. © 2013 The Association of Anaesthetists of Great Britain and Ireland.
Evaluation of Acoustic Doppler Current Profiler measurements of river discharge
Morlock, S.E.
1996-01-01
The standard deviations of the ADCP measurements ranged from approximately 1 to 6 percent and were generally higher than the measurement errors predicted by error-propagation analysis of ADCP instrument performance. These error-prediction methods assume that the largest component of ADCP discharge measurement error is instrument related. The larger standard deviations indicate that substantial portions of measurement error may be attributable to sources unrelated to ADCP electronics or signal processing and are functions of the field environment.
Prevalence of refractive errors in a Brazilian population: the Botucatu eye study.
Schellini, Silvana Artioli; Durkin, Shane R; Hoyama, Erika; Hirai, Flavio; Cordeiro, Ricardo; Casson, Robert J; Selva, Dinesh; Padovani, Carlos Roberto
2009-01-01
To determine the prevalence and demographic associations of refractive error in Botucatu, Brazil. A population-based, cross-sectional prevalence study was conducted, which involved random, household cluster sampling of an urban Brazilian population in Botucatu. There were 3000 individuals aged 1 to 91 years (mean 38.3) who were eligible to participate in the study. Refractive error measurements were obtained by objective refraction. Objective refractive error examinations were performed on 2454 residents within this sample (81.8% of eligible participants). The mean age was 38 years (standard deviation (SD) 20.8 years, Range 1 to 91) and females comprised 57.5% of the study population. Myopia (spherical equivalent (SE) < -0.5 dropters (D)) was most prevalent among those aged 30-39 years (29.7%; 95% confidence interval (CI) 24.8-35.1) and least prevalent among children under 10 years (3.8%; 95% confidence interval (CI) 1.6-7.3). Conversely hypermetropia (SE > 0.5D) was most prevalent among participants under 10 years (86.9%; 95% CI 81.6-91.1) and least prevalent in the fourth decade (32.5%; 95% CI 28.2-37.0). Participants aged 70 years or older bore the largest burden of astigmatism (cylinder at least -0.5D) and anisometropia (difference in SE of > 0.5D) with a prevalence of 71.7% (95% CI 64.8-78.0) 55.0% (95% CI 47.6-62.2) respectively. Myopia and hypermetropia were significantly associated with age in a bimodal manner (P < 0.001), whereas anisometropia and astigmatism increased in line with age (P < 0.001). Multivariate modeling confirmed age-related risk factors for refractive error and revealed several gender, occupation and ethnic-related risk factors. These results represent previously unreported data on refractive error within this Brazilian population. They signal a need to continue to screen for refractive error within this population and to ensure that people have adequate access to optical correction.
Minimal nuclear energy density functional
NASA Astrophysics Data System (ADS)
Bulgac, Aurel; Forbes, Michael McNeil; Jin, Shi; Perez, Rodrigo Navarro; Schunck, Nicolas
2018-04-01
We present a minimal nuclear energy density functional (NEDF) called "SeaLL1" that has the smallest number of possible phenomenological parameters to date. SeaLL1 is defined by seven significant phenomenological parameters, each related to a specific nuclear property. It describes the nuclear masses of even-even nuclei with a mean energy error of 0.97 MeV and a standard deviation of 1.46 MeV , two-neutron and two-proton separation energies with rms errors of 0.69 MeV and 0.59 MeV respectively, and the charge radii of 345 even-even nuclei with a mean error ɛr=0.022 fm and a standard deviation σr=0.025 fm . SeaLL1 incorporates constraints on the equation of state (EoS) of pure neutron matter from quantum Monte Carlo calculations with chiral effective field theory two-body (NN ) interactions at the next-to-next-to-next-to leading order (N3LO) level and three-body (NNN ) interactions at the next-to-next-to leading order (N2LO) level. Two of the seven parameters are related to the saturation density and the energy per particle of the homogeneous symmetric nuclear matter, one is related to the nuclear surface tension, two are related to the symmetry energy and its density dependence, one is related to the strength of the spin-orbit interaction, and one is the coupling constant of the pairing interaction. We identify additional phenomenological parameters that have little effect on ground-state properties but can be used to fine-tune features such as the Thomas-Reiche-Kuhn sum rule, the excitation energy of the giant dipole and Gamow-Teller resonances, the static dipole electric polarizability, and the neutron skin thickness.
Feeney, Joanne; Savva, George M; O'Regan, Claire; King-Kallimanis, Bellinda; Cronin, Hilary; Kenny, Rose Anne
2016-05-31
Knowing the reliability of cognitive tests, particularly those commonly used in clinical practice, is important in order to interpret the clinical significance of a change in performance or a low score on a single test. To report the intra-class correlation (ICC), standard error of measurement (SEM) and minimum detectable change (MDC) for the Mini-Mental State Examination (MMSE), Montreal Cognitive Assessment (MoCA), and Color Trails Test (CTT) among community dwelling older adults. 130 participants aged 55 and older without severe cognitive impairment underwent two cognitive assessments between two and four months apart. Half the group changed rater between assessments and half changed time of day. Mean (standard deviation) MMSE was 28.1 (2.1) at baseline and 28.4 (2.1) at repeat. Mean (SD) MoCA increased from 24.8 (3.6) to 25.2 (3.6). There was a rater effect on CTT, but not on the MMSE or MoCA. The SEM of the MMSE was 1.0, leading to an MDC (based on a 95% confidence interval) of 3 points. The SEM of the MoCA was 1.5, implying an MDC95 of 4 points. MoCA (ICC = 0.81) was more reliable than MMSE (ICC = 0.75), but all tests examined showed substantial within-patient variation. An individual's score would have to change by greater than or equal to 3 points on the MMSE and 4 points on the MoCA for the rater to be confident that the change was not due to measurement error. This has important implications for epidemiologists and clinicians in dementia screening and diagnosis.
Minimal nuclear energy density functional
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bulgac, Aurel; Forbes, Michael McNeil; Jin, Shi
Inmore » this paper, we present a minimal nuclear energy density functional (NEDF) called “SeaLL1” that has the smallest number of possible phenomenological parameters to date. SeaLL1 is defined by seven significant phenomenological parameters, each related to a specific nuclear property. It describes the nuclear masses of even-even nuclei with a mean energy error of 0.97 MeV and a standard deviation of 1.46 MeV , two-neutron and two-proton separation energies with rms errors of 0.69 MeV and 0.59 MeV respectively, and the charge radii of 345 even-even nuclei with a mean error ε r = 0.022 fm and a standard deviation σ r = 0.025 fm . SeaLL1 incorporates constraints on the equation of state (EoS) of pure neutron matter from quantum Monte Carlo calculations with chiral effective field theory two-body ( NN ) interactions at the next-to-next-to-next-to leading order (N3LO) level and three-body ( NNN ) interactions at the next-to-next-to leading order (N2LO) level. Two of the seven parameters are related to the saturation density and the energy per particle of the homogeneous symmetric nuclear matter, one is related to the nuclear surface tension, two are related to the symmetry energy and its density dependence, one is related to the strength of the spin-orbit interaction, and one is the coupling constant of the pairing interaction. Finally, we identify additional phenomenological parameters that have little effect on ground-state properties but can be used to fine-tune features such as the Thomas-Reiche-Kuhn sum rule, the excitation energy of the giant dipole and Gamow-Teller resonances, the static dipole electric polarizability, and the neutron skin thickness.« less
Minimal nuclear energy density functional
Bulgac, Aurel; Forbes, Michael McNeil; Jin, Shi; ...
2018-04-17
Inmore » this paper, we present a minimal nuclear energy density functional (NEDF) called “SeaLL1” that has the smallest number of possible phenomenological parameters to date. SeaLL1 is defined by seven significant phenomenological parameters, each related to a specific nuclear property. It describes the nuclear masses of even-even nuclei with a mean energy error of 0.97 MeV and a standard deviation of 1.46 MeV , two-neutron and two-proton separation energies with rms errors of 0.69 MeV and 0.59 MeV respectively, and the charge radii of 345 even-even nuclei with a mean error ε r = 0.022 fm and a standard deviation σ r = 0.025 fm . SeaLL1 incorporates constraints on the equation of state (EoS) of pure neutron matter from quantum Monte Carlo calculations with chiral effective field theory two-body ( NN ) interactions at the next-to-next-to-next-to leading order (N3LO) level and three-body ( NNN ) interactions at the next-to-next-to leading order (N2LO) level. Two of the seven parameters are related to the saturation density and the energy per particle of the homogeneous symmetric nuclear matter, one is related to the nuclear surface tension, two are related to the symmetry energy and its density dependence, one is related to the strength of the spin-orbit interaction, and one is the coupling constant of the pairing interaction. Finally, we identify additional phenomenological parameters that have little effect on ground-state properties but can be used to fine-tune features such as the Thomas-Reiche-Kuhn sum rule, the excitation energy of the giant dipole and Gamow-Teller resonances, the static dipole electric polarizability, and the neutron skin thickness.« less
Regression away from the mean: Theory and examples.
Schwarz, Wolf; Reike, Dennis
2018-02-01
Using a standard repeated measures model with arbitrary true score distribution and normal error variables, we present some fundamental closed-form results which explicitly indicate the conditions under which regression effects towards (RTM) and away from the mean are expected. Specifically, we show that for skewed and bimodal distributions many or even most cases will show a regression effect that is in expectation away from the mean, or that is not just towards but actually beyond the mean. We illustrate our results in quantitative detail with typical examples from experimental and biometric applications, which exhibit a clear regression away from the mean ('egression from the mean') signature. We aim not to repeal cautionary advice against potential RTM effects, but to present a balanced view of regression effects, based on a clear identification of the conditions governing the form that regression effects take in repeated measures designs. © 2017 The British Psychological Society.
Li, Xiongwei; Wang, Zhe; Fu, Yangting; Li, Zheng; Liu, Jianmin; Ni, Weidou
2014-01-01
Measurement of coal carbon content using laser-induced breakdown spectroscopy (LIBS) is limited by its low precision and accuracy. A modified spectrum standardization method was proposed to achieve both reproducible and accurate results for the quantitative analysis of carbon content in coal using LIBS. The proposed method used the molecular emissions of diatomic carbon (C2) and cyanide (CN) to compensate for the diminution of atomic carbon emissions in high volatile content coal samples caused by matrix effect. The compensated carbon line intensities were further converted into an assumed standard state with standard plasma temperature, electron number density, and total number density of carbon, under which the carbon line intensity is proportional to its concentration in the coal samples. To obtain better compensation for fluctuations of total carbon number density, the segmental spectral area was used and an iterative algorithm was applied that is different from our previous spectrum standardization calculations. The modified spectrum standardization model was applied to the measurement of carbon content in 24 bituminous coal samples. The results demonstrate that the proposed method has superior performance over the generally applied normalization methods. The average relative standard deviation was 3.21%, the coefficient of determination was 0.90, the root mean square error of prediction was 2.24%, and the average maximum relative error for the modified model was 12.18%, showing an overall improvement over the corresponding values for the normalization with segmental spectrum area, 6.00%, 0.75, 3.77%, and 15.40%, respectively.
Decroos, Francis Char; Stinnett, Sandra S; Heydary, Cynthia S; Burns, Russell E; Jaffe, Glenn J
2013-11-01
To determine the impact of segmentation error correction and precision of standardized grading of time domain optical coherence tomography (OCT) scans obtained during an interventional study for macular edema secondary to central retinal vein occlusion (CRVO). A reading center team of two readers and a senior reader evaluated 1199 OCT scans. Manual segmentation error correction (SEC) was performed. The frequency of SEC, resulting change in central retinal thickness after SEC, and reproducibility of SEC were quantified. Optical coherence tomography characteristics associated with the need for SECs were determined. Reading center teams graded all scans, and the reproducibility of this evaluation for scan quality at the fovea and cystoid macular edema was determined on 97 scans. Segmentation errors were observed in 360 (30.0%) scans, of which 312 were interpretable. On these 312 scans, the mean machine-generated central subfield thickness (CST) was 507.4 ± 208.5 μm compared to 583.0 ± 266.2 μm after SEC. Segmentation error correction resulted in a mean absolute CST correction of 81.3 ± 162.0 μm from baseline uncorrected CST. Segmentation error correction was highly reproducible (intraclass correlation coefficient [ICC] = 0.99-1.00). Epiretinal membrane (odds ratio [OR] = 2.3, P < 0.0001), subretinal fluid (OR = 2.1, P = 0.0005), and increasing CST (OR = 1.6 per 100-μm increase, P < 0.001) were associated with need for SEC. Reading center teams reproducibly graded scan quality at the fovea (87% agreement, kappa = 0.64, 95% confidence interval [CI] 0.45-0.82) and cystoid macular edema (92% agreement, kappa = 0.84, 95% CI 0.74-0.94). Optical coherence tomography images obtained during an interventional CRVO treatment trial can be reproducibly graded. Segmentation errors can cause clinically meaningful deviation in central retinal thickness measurements; however, these errors can be corrected reproducibly in a reading center setting. Segmentation errors are common on these images, can cause clinically meaningful errors in central retinal thickness measurement, and can be corrected reproducibly in a reading center setting.
VLBI-derived troposphere parameters during CONT08
NASA Astrophysics Data System (ADS)
Heinkelmann, R.; Böhm, J.; Bolotin, S.; Engelhardt, G.; Haas, R.; Lanotte, R.; MacMillan, D. S.; Negusini, M.; Skurikhina, E.; Titov, O.; Schuh, H.
2011-07-01
Time-series of zenith wet and total troposphere delays as well as north and east gradients are compared, and zenith total delays ( ZTD) are combined on the level of parameter estimates. Input data sets are provided by ten Analysis Centers (ACs) of the International VLBI Service for Geodesy and Astrometry (IVS) for the CONT08 campaign (12-26 August 2008). The inconsistent usage of meteorological data and models, such as mapping functions, causes systematics among the ACs, and differing parameterizations and constraints add noise to the troposphere parameter estimates. The empirical standard deviation of ZTD among the ACs with regard to an unweighted mean is 4.6 mm. The ratio of the analysis noise to the observation noise assessed by the operator/software impact (OSI) model is about 2.5. These and other effects have to be accounted for to improve the intra-technique combination of VLBI-derived troposphere parameters. While the largest systematics caused by inconsistent usage of meteorological data can be avoided and the application of different mapping functions can be considered by applying empirical corrections, the noise has to be modeled in the stochastic model of intra-technique combination. The application of different stochastic models shows no significant effects on the combined parameters but results in different mean formal errors: the mean formal errors of the combined ZTD are 2.3 mm (unweighted), 4.4 mm (diagonal), 8.6 mm [variance component (VC) estimation], and 8.6 mm (operator/software impact, OSI). On the one hand, the OSI model, i.e. the inclusion of off-diagonal elements in the cofactor-matrix, considers the reapplication of observations yielding a factor of about two for mean formal errors as compared to the diagonal approach. On the other hand, the combination based on VC estimation shows large differences among the VCs and exhibits a comparable scaling of formal errors. Thus, for the combination of troposphere parameters a combination of the two extensions of the stochastic model is recommended.
Kado, DM; Huang, MH; Karlamangla, AS; Cawthon, P; Katzman, W; Hillier, TA; Ensrud, K; Cummings, SR
2012-01-01
Age-related hyperkyphosis is thought to be a result of underlying vertebral fractures, but studies suggest that among the most hyperkyphotic women, only one in three have underlying radiographic vertebral fractures. Although commonly observed, there is no widely accepted definition of hyperkyphosis in older persons, and other than vertebral fracture, no major causes have been identified. To identify important correlates of kyphosis and risk factors for its progression over time, we conducted a 15 year retrospective cohort study of 1,196 women, aged 65 years and older at baseline (1986–88), from four communities across the United States: Baltimore County, MD; Minneapolis, MN, Portland, Oregon, and the Monongahela Valley, PA. Cobb angle kyphosis was measured from radiographs obtained at baseline and an average of 3.7 and 15 years later. Repeated measures, mixed effects analyses were performed. At baseline, the mean kyphosis angle was 44.7 degrees (standard error 0.4, standard deviation 11.9) and significant correlates included a family history of hyperkyphosis, prevalent vertebral fracture, low bone mineral density, greater body weight, degenerative disc disease, and smoking. Over an average of 15 years, the mean increase in kyphosis was 7.1 degrees (standard error 0.25). Independent determinants of greater kyphosis progression were prevalent and incident vertebral fractures, low bone mineral density and concurrent bone density loss, low body weight, and concurrent weight loss. Thus, age-related kyphosis progression may be best prevented by slowing bone density loss and avoiding weight loss. PMID:22865329
Fisher, Jason C.
2013-01-01
Long-term groundwater monitoring networks can provide essential information for the planning and management of water resources. Budget constraints in water resource management agencies often mean a reduction in the number of observation wells included in a monitoring network. A network design tool, distributed as an R package, was developed to determine which wells to exclude from a monitoring network because they add little or no beneficial information. A kriging-based genetic algorithm method was used to optimize the monitoring network. The algorithm was used to find the set of wells whose removal leads to the smallest increase in the weighted sum of the (1) mean standard error at all nodes in the kriging grid where the water table is estimated, (2) root-mean-squared-error between the measured and estimated water-level elevation at the removed sites, (3) mean standard deviation of measurements across time at the removed sites, and (4) mean measurement error of wells in the reduced network. The solution to the optimization problem (the best wells to retain in the monitoring network) depends on the total number of wells removed; this number is a management decision. The network design tool was applied to optimize two observation well networks monitoring the water table of the eastern Snake River Plain aquifer, Idaho; these networks include the 2008 Federal-State Cooperative water-level monitoring network (Co-op network) with 166 observation wells, and the 2008 U.S. Geological Survey-Idaho National Laboratory water-level monitoring network (USGS-INL network) with 171 wells. Each water-level monitoring network was optimized five times: by removing (1) 10, (2) 20, (3) 40, (4) 60, and (5) 80 observation wells from the original network. An examination of the trade-offs associated with changes in the number of wells to remove indicates that 20 wells can be removed from the Co-op network with a relatively small degradation of the estimated water table map, and 40 wells can be removed from the USGS-INL network before the water table map degradation accelerates. The optimal network designs indicate the robustness of the network design tool. Observation wells were removed from high well-density areas of the network while retaining the spatial pattern of the existing water-table map.
Yang, Jie; Liu, Qingquan; Dai, Wei
2017-02-01
To improve the air temperature observation accuracy, a low measurement error temperature sensor is proposed. A computational fluid dynamics (CFD) method is implemented to obtain temperature errors under various environmental conditions. Then, a temperature error correction equation is obtained by fitting the CFD results using a genetic algorithm method. The low measurement error temperature sensor, a naturally ventilated radiation shield, a thermometer screen, and an aspirated temperature measurement platform are characterized in the same environment to conduct the intercomparison. The aspirated platform served as an air temperature reference. The mean temperature errors of the naturally ventilated radiation shield and the thermometer screen are 0.74 °C and 0.37 °C, respectively. In contrast, the mean temperature error of the low measurement error temperature sensor is 0.11 °C. The mean absolute error and the root mean square error between the corrected results and the measured results are 0.008 °C and 0.01 °C, respectively. The correction equation allows the temperature error of the low measurement error temperature sensor to be reduced by approximately 93.8%. The low measurement error temperature sensor proposed in this research may be helpful to provide a relatively accurate air temperature result.
Avulsion research using flume experiments and highly accurate and temporal-rich SfM datasets
NASA Astrophysics Data System (ADS)
Javernick, L.; Bertoldi, W.; Vitti, A.
2017-12-01
SfM's ability to produce high-quality, large-scale digital elevation models (DEMs) of complicated and rapidly evolving systems has made it a valuable technique for low-budget researchers and practitioners. While SfM has provided valuable datasets that capture single-flood event DEMs, there is an increasing scientific need to capture higher temporal resolution datasets that can quantify the evolutionary processes instead of pre- and post-flood snapshots. However, flood events' dangerous field conditions and image matching challenges (e.g. wind, rain) prevent quality SfM-image acquisition. Conversely, flume experiments offer opportunities to document flood events, but achieving consistent and accurate DEMs to detect subtle changes in dry and inundated areas remains a challenge for SfM (e.g. parabolic error signatures).This research aimed at investigating the impact of naturally occurring and manipulated avulsions on braided river morphology and on the encroachment of floodplain vegetation, using laboratory experiments. This required DEMs with millimeter accuracy and precision and at a temporal resolution to capture the processes. SfM was chosen as it offered the most practical method. Through redundant local network design and a meticulous ground control point (GCP) survey with a Leica Total Station in red laser configuration (reported 2 mm accuracy), the SfM residual errors compared to separate ground truthing data produced mean errors of 1.5 mm (accuracy) and standard deviations of 1.4 mm (precision) without parabolic error signatures. Lighting conditions in the flume were limited to uniform, oblique, and filtered LED strips, which removed glint and thus improved bed elevation mean errors to 4 mm, but errors were further reduced by means of an open source software for refraction correction. The obtained datasets have provided the ability to quantify how small flood events with avulsion can have similar morphologic and vegetation impacts as large flood events without avulsion. Further, this research highlights the potential application of SfM in the laboratory and ability to document physical and biological processes at greater spatial and temporal resolution. Marie Sklodowska-Curie Individual Fellowship: River-HMV, 656917
Biases and Standard Errors of Standardized Regression Coefficients
ERIC Educational Resources Information Center
Yuan, Ke-Hai; Chan, Wai
2011-01-01
The paper obtains consistent standard errors (SE) and biases of order O(1/n) for the sample standardized regression coefficients with both random and given predictors. Analytical results indicate that the formulas for SEs given in popular text books are consistent only when the population value of the regression coefficient is zero. The sample…
Uncertainty Analysis of Downscaled CMIP5 Precipitation Data for Louisiana, USA
NASA Astrophysics Data System (ADS)
Sumi, S. J.; Tamanna, M.; Chivoiu, B.; Habib, E. H.
2014-12-01
The downscaled CMIP3 and CMIP5 Climate and Hydrology Projections dataset contains fine spatial resolution translations of climate projections over the contiguous United States developed using two downscaling techniques (monthly Bias Correction Spatial Disaggregation (BCSD) and daily Bias Correction Constructed Analogs (BCCA)). The objective of this study is to assess the uncertainty of the CMIP5 downscaled general circulation models (GCM). We performed an analysis of the daily, monthly, seasonal and annual variability of precipitation downloaded from the Downscaled CMIP3 and CMIP5 Climate and Hydrology Projections website for the state of Louisiana, USA at 0.125° x 0.125° resolution. A data set of daily gridded observations of precipitation of a rectangular boundary covering Louisiana is used to assess the validity of 21 downscaled GCMs for the 1950-1999 period. The following statistics are computed using the CMIP5 observed dataset with respect to the 21 models: the correlation coefficient, the bias, the normalized bias, the mean absolute error (MAE), the mean absolute percentage error (MAPE), and the root mean square error (RMSE). A measure of variability simulated by each model is computed as the ratio of its standard deviation, in both space and time, to the corresponding standard deviation of the observation. The correlation and MAPE statistics are also computed for each of the nine climate divisions of Louisiana. Some of the patterns that we observed are: 1) Average annual precipitation rate shows similar spatial distribution for all the models within a range of 3.27 to 4.75 mm/day from Northwest to Southeast. 2) Standard deviation of summer (JJA) precipitation (mm/day) for the models maintains lower value than the observation whereas they have similar spatial patterns and range of values in winter (NDJ). 3) Correlation coefficients of annual precipitation of models against observation have a range of -0.48 to 0.36 with variable spatial distribution by model. 4) Most of the models show negative correlation coefficients in summer and positive in winter. 5) MAE shows similar spatial distribution for all the models within a range of 5.20 to 7.43 mm/day from Northwest to Southeast of Louisiana. 6) Highest values of correlation coefficients are found at seasonal scale within a range of 0.36 to 0.46.
Lee, Jin H.; Howell, David R.; Meehan, William P.; Iverson, Grant L.; Gardner, Andrew J.
2017-01-01
Background: The Sport Concussion Assessment Tool–Third Edition (SCAT3) is currently considered the standard sideline assessment for concussions. In-game exercise, however, may affect SCAT3 performance and the diagnosis of concussions. Purpose: To examine the influence of exercise on SCAT3 performance in professional male athletes. Study Design: Controlled laboratory study. Methods: We examined the SCAT3 performance of 82 professional male athletes under 2 conditions: at rest and after exercise. Results: Athletes reported significantly fewer total symptoms (mean, 1.0 ± 1.5 vs 1.6 ± 2.3 total symptoms, respectively; P = .008; Cohen d = 0.34), committed significantly fewer errors on the modified Balance Error Scoring System (mean, 3.5 ± 3.5 vs 4.6 ± 4.1 errors, respectively; P = .017; d = 0.31), and required significantly less time to complete the tandem gait test (mean, 9.5 ± 1.4 vs 9.9 ± 1.7 seconds, respectively; P = .02; d = 0.30) during the at-rest condition compared with the postexercise condition. Conclusion: The interpretation of in-game (sideline) SCAT3 results should consider the effects of postexercise fatigue levels on an athlete’s performance, particularly if preseason baseline data have been collected when the athlete was well rested. Clinical Relevance: Exercise appears to affect symptom burden and physical abilities, such as balance and tandem gait, more so than the cognitive components of the SCAT3. PMID:28944251
NASA Astrophysics Data System (ADS)
Solazzo, Efisio; Hogrefe, Christian; Colette, Augustin; Garcia-Vivanco, Marta; Galmarini, Stefano
2017-09-01
The work here complements the overview analysis of the modelling systems participating in the third phase of the Air Quality Model Evaluation International Initiative (AQMEII3) by focusing on the performance for hourly surface ozone by two modelling systems, Chimere for Europe and CMAQ for North America. The evaluation strategy outlined in the course of the three phases of the AQMEII activity, aimed to build up a diagnostic methodology for model evaluation, is pursued here and novel diagnostic methods are proposed. In addition to evaluating the base case
simulation in which all model components are configured in their standard mode, the analysis also makes use of sensitivity simulations in which the models have been applied by altering and/or zeroing lateral boundary conditions, emissions of anthropogenic precursors, and ozone dry deposition. To help understand of the causes of model deficiencies, the error components (bias, variance, and covariance) of the base case and of the sensitivity runs are analysed in conjunction with timescale considerations and error modelling using the available error fields of temperature, wind speed, and NOx concentration. The results reveal the effectiveness and diagnostic power of the methods devised (which remains the main scope of this study), allowing the detection of the timescale and the fields that the two models are most sensitive to. The representation of planetary boundary layer (PBL) dynamics is pivotal to both models. In particular, (i) the fluctuations slower than ˜ 1.5 days account for 70-85 % of the mean square error of the full (undecomposed) ozone time series; (ii) a recursive, systematic error with daily periodicity is detected, responsible for 10-20 % of the quadratic total error; (iii) errors in representing the timing of the daily transition between stability regimes in the PBL are responsible for a covariance error as large as 9 ppb (as much as the standard deviation of the network-average ozone observations in summer in both Europe and North America); (iv) the CMAQ ozone error has a weak/negligible dependence on the errors in NO2, while the error in NO2 significantly impacts the ozone error produced by Chimere; (v) the response of the models to variations of anthropogenic emissions and boundary conditions show a pronounced spatial heterogeneity, while the seasonal variability of the response is found to be less marked. Only during the winter season does the zeroing of boundary values for North America produce a spatially uniform deterioration of the model accuracy across the majority of the continent.
NASA Astrophysics Data System (ADS)
Atkinson, Callum; Coudert, Sebastien; Foucaut, Jean-Marc; Stanislas, Michel; Soria, Julio
2011-04-01
To investigate the accuracy of tomographic particle image velocimetry (Tomo-PIV) for turbulent boundary layer measurements, a series of synthetic image-based simulations and practical experiments are performed on a high Reynolds number turbulent boundary layer at Reθ = 7,800. Two different approaches to Tomo-PIV are examined using a full-volume slab measurement and a thin-volume "fat" light sheet approach. Tomographic reconstruction is performed using both the standard MART technique and the more efficient MLOS-SMART approach, showing a 10-time increase in processing speed. Random and bias errors are quantified under the influence of the near-wall velocity gradient, reconstruction method, ghost particles, seeding density and volume thickness, using synthetic images. Experimental Tomo-PIV results are compared with hot-wire measurements and errors are examined in terms of the measured mean and fluctuating profiles, probability density functions of the fluctuations, distributions of fluctuating divergence through the volume and velocity power spectra. Velocity gradients have a large effect on errors near the wall and also increase the errors associated with ghost particles, which convect at mean velocities through the volume thickness. Tomo-PIV provides accurate experimental measurements at low wave numbers; however, reconstruction introduces high noise levels that reduces the effective spatial resolution. A thinner volume is shown to provide a higher measurement accuracy at the expense of the measurement domain, albeit still at a lower effective spatial resolution than planar and Stereo-PIV.
Elliott, Rachel A; Putman, Koen D; Franklin, Matthew; Annemans, Lieven; Verhaeghe, Nick; Eden, Martin; Hayre, Jasdeep; Rodgers, Sarah; Sheikh, Aziz; Avery, Anthony J
2014-06-01
We recently showed that a pharmacist-led information technology-based intervention (PINCER) was significantly more effective in reducing medication errors in general practices than providing simple feedback on errors, with cost per error avoided at £79 (US$131). We aimed to estimate cost effectiveness of the PINCER intervention by combining effectiveness in error reduction and intervention costs with the effect of the individual errors on patient outcomes and healthcare costs, to estimate the effect on costs and QALYs. We developed Markov models for each of six medication errors targeted by PINCER. Clinical event probability, treatment pathway, resource use and costs were extracted from literature and costing tariffs. A composite probabilistic model combined patient-level error models with practice-level error rates and intervention costs from the trial. Cost per extra QALY and cost-effectiveness acceptability curves were generated from the perspective of NHS England, with a 5-year time horizon. The PINCER intervention generated £2,679 less cost and 0.81 more QALYs per practice [incremental cost-effectiveness ratio (ICER): -£3,037 per QALY] in the deterministic analysis. In the probabilistic analysis, PINCER generated 0.001 extra QALYs per practice compared with simple feedback, at £4.20 less per practice. Despite this extremely small set of differences in costs and outcomes, PINCER dominated simple feedback with a mean ICER of -£3,936 (standard error £2,970). At a ceiling 'willingness-to-pay' of £20,000/QALY, PINCER reaches 59 % probability of being cost effective. PINCER produced marginal health gain at slightly reduced overall cost. Results are uncertain due to the poor quality of data to inform the effect of avoiding errors.
Darajeh, Negisa; Idris, Azni; Fard Masoumi, Hamid Reza; Nourani, Abolfazl; Truong, Paul; Rezania, Shahabaldin
2017-05-04
Artificial neural networks (ANNs) have been widely used to solve the problems because of their reliable, robust, and salient characteristics in capturing the nonlinear relationships between variables in complex systems. In this study, ANN was applied for modeling of Chemical Oxygen Demand (COD) and biodegradable organic matter (BOD) removal from palm oil mill secondary effluent (POMSE) by vetiver system. The independent variable, including POMSE concentration, vetiver slips density, and removal time, has been considered as input parameters to optimize the network, while the removal percentage of COD and BOD were selected as output. To determine the number of hidden layer nodes, the root mean squared error of testing set was minimized, and the topologies of the algorithms were compared by coefficient of determination and absolute average deviation. The comparison indicated that the quick propagation (QP) algorithm had minimum root mean squared error and absolute average deviation, and maximum coefficient of determination. The importance values of the variables was included vetiver slips density with 42.41%, time with 29.8%, and the POMSE concentration with 27.79%, which showed none of them, is negligible. Results show that the ANN has great potential ability in prediction of COD and BOD removal from POMSE with residual standard error (RSE) of less than 0.45%.
Cutti, Andrea Giovanni; Cappello, Angelo; Davalli, Angelo
2006-01-01
Soft tissue artefact is the dominant error source for upper extremity motion analyses that use skin-mounted markers, especially in humeral axial rotation. A new in vivo technique is presented that is based on the definition of a humerus bone-embedded frame almost "artefact free" but influenced by the elbow orientation in the measurement of the humeral axial rotation, and on an algorithm designed to solve this kinematic coupling. The technique was validated in vivo in a study of six healthy subjects who performed five arm-movement tasks. For each task the similarity between a gold standard pattern and the axial rotation pattern before and after the application of the compensation algorithm was evaluated in terms of explained variance, gain, phase and offset. In addition the root mean square error between the patterns was used as a global similarity estimator. After the application, for four out of five tasks, patterns were highly correlated, in phase, with almost equal gain and limited offset; the root mean square error decreased from the original 9 degrees to 3 degrees . The proposed technique appears to help compensate for the soft tissue artefact affecting axial rotation. A further development is also proposed to make the technique effective also for the pure prono-supination task.
Turbulent CO2 Flux Measurements by Lidar: Length Scales, Results and Comparison with In-Situ Sensors
NASA Technical Reports Server (NTRS)
Gilbert, Fabien; Koch, Grady J.; Beyon, Jeffrey Y.; Hilton, Timothy W.; Davis, Kenneth J.; Andrews, Arlyn; Ismail, Syed; Singh, Upendra N.
2009-01-01
The vertical CO2 flux in the atmospheric boundary layer (ABL) is investigated with a Doppler differential absorption lidar (DIAL). The instrument was operated next to the WLEF instrumented tall tower in Park Falls, Wisconsin during three days and nights in June 2007. Profiles of turbulent CO2 mixing ratio and vertical velocity fluctuations are measured by in-situ sensors and Doppler DIAL. Time and space scales of turbulence are precisely defined in the ABL. The eddy-covariance method is applied to calculate turbulent CO2 flux both by lidar and in-situ sensors. We show preliminary mean lidar CO2 flux measurements in the ABL with a time and space resolution of 6 h and 1500 m respectively. The flux instrumental errors decrease linearly with the standard deviation of the CO2 data, as expected. Although turbulent fluctuations of CO2 are negligible with respect to the mean (0.1 %), we show that the eddy-covariance method can provide 2-h, 150-m range resolved CO2 flux estimates as long as the CO2 mixing ratio instrumental error is no greater than 10 ppm and the vertical velocity error is lower than the natural fluctuations over a time resolution of 10 s.
Fast and fully automatic phalanx segmentation using a grayscale-histogram morphology algorithm
NASA Astrophysics Data System (ADS)
Hsieh, Chi-Wen; Liu, Tzu-Chiang; Jong, Tai-Lang; Chen, Chih-Yen; Tiu, Chui-Mei; Chan, Din-Yuen
2011-08-01
Bone age assessment is a common radiological examination used in pediatrics to diagnose the discrepancy between the skeletal and chronological age of a child; therefore, it is beneficial to develop a computer-based bone age assessment to help junior pediatricians estimate bone age easily. Unfortunately, the phalanx on radiograms is not easily separated from the background and soft tissue. Therefore, we proposed a new method, called the grayscale-histogram morphology algorithm, to segment the phalanges fast and precisely. The algorithm includes three parts: a tri-stage sieve algorithm used to eliminate the background of hand radiograms, a centroid-edge dual scanning algorithm to frame the phalanx region, and finally a segmentation algorithm based on disk traverse-subtraction filter to segment the phalanx. Moreover, two more segmentation methods: adaptive two-mean and adaptive two-mean clustering were performed, and their results were compared with the segmentation algorithm based on disk traverse-subtraction filter using five indices comprising misclassification error, relative foreground area error, modified Hausdorff distances, edge mismatch, and region nonuniformity. In addition, the CPU time of the three segmentation methods was discussed. The result showed that our method had a better performance than the other two methods. Furthermore, satisfactory segmentation results were obtained with a low standard error.
Comparison of laser ray-tracing and skiascopic ocular wavefront-sensing devices
Bartsch, D-UG; Bessho, K; Gomez, L; Freeman, WR
2009-01-01
Purpose To compare two wavefront-sensing devices based on different principles. Methods Thirty-eight healthy eyes of 19 patients were measured five times in the reproducibility study. Twenty eyes of 10 patients were measured in the comparison study. The Tracey Visual Function Analyzer (VFA), based on the ray-tracing principle and the Nidek optical pathway difference (OPD)-Scan, based on the dynamic skiascopy principle were compared. Standard deviation (SD) of root mean square (RMS) errors was compared to verify the reproducibility. We evaluated RMS errors, Zernike terms and conventional refractive indexes (Sph, Cyl, Ax, and spherical equivalent). Results In RMS errors reading, both devices showed similar ratios of SD to the mean measurement value (VFA: 57.5±11.7%, OPD-Scan: 53.9±10.9%). Comparison on the same eye showed that almost all terms were significantly greater using the VFA than using the OPD-Scan. However, certain high spatial frequency aberrations (tetrafoil, pentafoil, and hexafoil) were consistently measured near zero with the OPD-Scan. Conclusion Both devices showed similar level of reproducibility; however, there was considerable difference in the wavefront reading between machines when measuring the same eye. Differences in the number of sample points, centration, and measurement algorithms between the two instruments may explain our results. PMID:17571088
Piva, Elisa; Tosato, Francesca; Plebani, Mario
2015-12-07
Most errors in laboratory medicine occur in the pre-analytical phase of the total testing process. Phlebotomy, a crucial step in the pre-analytical phase influencing laboratory results and patient outcome, calls for quality assurance procedures and automation in order to prevent errors and ensure patient safety. We compared the performance of a new small, automated device, the ProTube Inpeco, designed for use in phlebotomy with a complete traceability of the process, with a centralized automated system, BC ROBO. ProTube was used for 15,010 patients undergoing phlebotomy with 48,776 tubes being labeled. The mean time and standard deviation (SD) for blood sampling was 3:03 (min:sec; SD ± 1:24) when using ProTube, against 5:40 (min:sec; SD ± 1:57) when using BC ROBO. The mean number of patients per hour managed at each phlebotomy point was 16 ± 3 with ProTube, and 10 ± 2 with BC ROBO. No tubes were labeled erroneously or incorrectly, even if process failure occurred in 2.8% of cases when ProTube was used. Thanks to its cutting edge technology, the ProTube has many advantages over BC ROBO, above all in verifying patient identity, and in allowing a reduction in both identification error and tube mislabeling.
NASA Technical Reports Server (NTRS)
Podio, Fernando; Vollrath, William; Williams, Joel; Kobler, Ben; Crouse, Don
1998-01-01
Sophisticated network storage management applications are rapidly evolving to satisfy a market demand for highly reliable data storage systems with large data storage capacities and performance requirements. To preserve a high degree of data integrity, these applications must rely on intelligent data storage devices that can provide reliable indicators of data degradation. Error correction activity generally occurs within storage devices without notification to the host. Early indicators of degradation and media error monitoring 333 and reporting (MEMR) techniques implemented in data storage devices allow network storage management applications to notify system administrators of these events and to take appropriate corrective actions before catastrophic errors occur. Although MEMR techniques have been implemented in data storage devices for many years, until 1996 no MEMR standards existed. In 1996 the American National Standards Institute (ANSI) approved the only known (world-wide) industry standard specifying MEMR techniques to verify stored data on optical disks. This industry standard was developed under the auspices of the Association for Information and Image Management (AIIM). A recently formed AIIM Optical Tape Subcommittee initiated the development of another data integrity standard specifying a set of media error monitoring tools and media error monitoring information (MEMRI) to verify stored data on optical tape media. This paper discusses the need for intelligent storage devices that can provide data integrity metadata, the content of the existing data integrity standard for optical disks, and the content of the MEMRI standard being developed by the AIIM Optical Tape Subcommittee.
The Effect of Geocenter Motion on Jason-2 and Jason-1 Orbits and the Mean Sea Level
NASA Technical Reports Server (NTRS)
Melachroinos, Stavros A.; Beckley, Brian D.; Lemoine, Frank G.; Zelensky, Nikita P.; Rowlands, David D.; Luthcke, Scott B.
2012-01-01
We have investigated the impact of geocenter motion on Jason-2 orbits. This was accomplished by computing a series of Jason-1, Jason-2 GPS-based and SLR/DORIS-based orbits using ITRF2008 and the IGS repro1 framework based on the most recent GSFC standards. From these orbits, we extract the Jason-2 orbit frame translational parameters per cycle by the means of a Helmert transformation between a set of reference orbits and a set of test orbits. The fitted annual and seasonal terms of these time-series are compared to two different geocenter motion models. Subsequently, we included the geocenter motion corrections in the POD process as a degree-1 loading displacement correction to the tracking network. The analysis suggested that the GSFC's Jason-2 std0905 GPS-based orbits are closely tied to the center of mass (CM) of the Earth whereas the SLR/DORIS std0905 orbits are tied to the center of figure (CF) of the ITRF2005 (Melachroinos et al., 2012). In this study we extend the investigation to the centering of the GPS constellation and the way those are tied in the Jason-1 and Jason-2 POD process. With a new set of standards, we quantify the GPS and SLR/DORIS-based orbit centering during the Jason-1 and Jason-2 inter-calibration period and how this impacts the orbit radial error over the globe, which is assimilated into mean sea level (MSL) error, from the omission of the full term of the geocenter motion correction.
Gabriel, Rodney A; Burton, Brittany N; Tsai, Mitchell H; Ehrenfeld, Jesse M; Dutton, Richard P; Urman, Richard D
2017-09-01
The objective of this study was to characterize workload during all hours of the day in the non-operating room anesthesia (NORA) environment and identify what type of patients and procedures were more likely to occur during after-hours. By investigating data from the National Anesthesia Clinical Outcomes Registry, we characterized the total number of ongoing NORA cases per hour of the day (0 - 23 h). Results were presented as the mean hour and standard error (SE). Multivariable logistic regression was applied to assess the association of various patient, procedural, and facility characteristics with time of day (after-hours = 17:01-06:59 local time versus day-time). Included in this analysis, there were a total of 4,948,634 cases performed on non-holiday weekdays. The mean hour for ongoing cases for gastroenterology, cardiac, radiology and "other" were: 10.8 with standard error (SE) of 0.002, 11.5 (SE of 0.005), 11.2 (SE of 0.005), and 10.8 (SE of 0.002), respectively. Pairwise differences between means for each NORA specialty were all statistically significant (p < 0.0001). During after-hour shifts (4.3% of cases), patients with higher American Society of Anesthesiologists physical status classification scores had increased odds for undergoing a NORA procedure, while procedures that were more physiologically complex had decreased odds. With the increasing demand for NORA services, it is prudent that we fully understand the challenges of providing safe and efficient anesthetic services particularly in locations where fewer resources are available.
The Flynn Effect: A Meta-analysis
Trahan, Lisa; Stuebing, Karla K.; Hiscock, Merril K.; Fletcher, Jack M.
2014-01-01
The “Flynn effect” refers to the observed rise in IQ scores over time, resulting in norms obsolescence. Although the Flynn effect is widely accepted, most approaches to estimating it have relied upon “scorecard” approaches that make estimates of its magnitude and error of measurement controversial and prevent determination of factors that moderate the Flynn effect across different IQ tests. We conducted a meta-analysis to determine the magnitude of the Flynn effect with a higher degree of precision, to determine the error of measurement, and to assess the impact of several moderator variables on the mean effect size. Across 285 studies (N = 14,031) since 1951 with administrations of two intelligence tests with different normative bases, the meta-analytic mean was 2.31, 95% CI [1.99, 2.64], standard score points per decade. The mean effect size for 53 comparisons (N = 3,951) (excluding three atypical studies that inflate the estimates) involving modern (since 1972) Stanford-Binet and Wechsler IQ tests (2.93, 95% CI [2.3, 3.5], IQ points per decade) was comparable to previous estimates of about 3 points per decade, but not consistent with the hypothesis that the Flynn effect is diminishing. For modern tests, study sample (larger increases for validation research samples vs. test standardization samples) and order of administration explained unique variance in the Flynn effect, but age and ability level were not significant moderators. These results supported previous estimates of the Flynn effect and its robustness across different age groups, measures, samples, and levels of performance. PMID:24979188
NASA Astrophysics Data System (ADS)
Saatkamp, Cassiano Junior; de Almeida, Maurício Liberal; Bispo, Jeyse Aliana Martins; Pinheiro, Antonio Luiz Barbosa; Fernandes, Adriana Barrinha; Silveira, Landulfo, Jr.
2016-03-01
Due to their importance in the regulation of metabolites, the kidneys need continuous monitoring to check for correct functioning, mainly by urea and creatinine urinalysis. This study aimed to develop a model to estimate the concentrations of urea and creatinine in urine by means of Raman spectroscopy (RS) that could be used to diagnose kidney disease. Midstream urine samples were obtained from 54 volunteers with no kidney complaints. Samples were subjected to a standard colorimetric assay of urea and creatinine and submitted to spectroscopic analysis by means of a dispersive Raman spectrometer (830 nm, 350 mW, 30 s). The Raman spectra of urine showed peaks related mainly to urea and creatinine. Partial least squares models were developed using selected Raman bands related to urea and creatinine and the biochemical concentrations in urine measured by the colorimetric method, resulting in r=0.90 and 0.91 for urea and creatinine, respectively, with root mean square error of cross-validation (RMSEcv) of 312 and 25.2 mg/dL, respectively. RS may become a technique for rapid urinalysis, with concentration errors suitable for population screening aimed at the prevention of renal diseases.
2009-01-01
standard error of the mean (SEM). Analysis of variance procedures with Tukey post hoc correction examined the existence and nature of temporal trends ...apoptosis. Cell 2006;126:121–134. 20. Yorimitsu T, Klionsky DJ. Eating the enoplasmic reticulum: quality control by autophagy. Trends Cell Biol 2007;17...oxide signaling to iron- regulatory protein: direct control of ferritin mRNA translation and transferrin receptor mRNA stability in transfected
Frequency distribution histograms for the rapid analysis of data
NASA Technical Reports Server (NTRS)
Burke, P. V.; Bullen, B. L.; Poff, K. L.
1988-01-01
The mean and standard error are good representations for the response of a population to an experimental parameter and are frequently used for this purpose. Frequency distribution histograms show, in addition, responses of individuals in the population. Both the statistics and a visual display of the distribution of the responses can be obtained easily using a microcomputer and available programs. The type of distribution shown by the histogram may suggest different mechanisms to be tested.
Chang, Jasper O; Levy, Susan S; Seay, Seth W; Goble, Daniel J
2014-05-01
Recent guidelines advocate sports medicine professionals to use balance tests to assess sensorimotor status in the management of concussions. The present study sought to determine whether a low-cost balance board could provide a valid, reliable, and objective means of performing this balance testing. Criterion validity testing relative to a gold standard and 7 day test-retest reliability. University biomechanics laboratory. Thirty healthy young adults. Balance ability was assessed on 2 days separated by 1 week using (1) a gold standard measure (ie, scientific grade force plate), (2) a low-cost Nintendo Wii Balance Board (WBB), and (3) the Balance Error Scoring System (BESS). Validity of the WBB center of pressure path length and BESS scores were determined relative to the force plate data. Test-retest reliability was established based on intraclass correlation coefficients. Composite scores for the WBB had excellent validity (r = 0.99) and test-retest reliability (R = 0.88). Both the validity (r = 0.10-0.52) and test-retest reliability (r = 0.61-0.78) were lower for the BESS. These findings demonstrate that a low-cost balance board can provide improved balance testing accuracy/reliability compared with the BESS. This approach provides a potentially more valid/reliable, yet affordable, means of assessing sports-related concussion compared with current methods.
Huh, S.; Dickey, D.A.; Meador, M.R.; Ruhl, K.E.
2005-01-01
A temporal analysis of the number and duration of exceedences of high- and low-flow thresholds was conducted to determine the number of years required to detect a level shift using data from Virginia, North Carolina, and South Carolina. Two methods were used - ordinary least squares assuming a known error variance and generalized least squares without a known error variance. Using ordinary least squares, the mean number of years required to detect a one standard deviation level shift in measures of low-flow variability was 57.2 (28.6 on either side of the break), compared to 40.0 years for measures of high-flow variability. These means become 57.6 and 41.6 when generalized least squares is used. No significant relations between years and elevation or drainage area were detected (P>0.05). Cluster analysis did not suggest geographic patterns in years related to physiography or major hydrologic regions. Referring to the number of observations required to detect a one standard deviation shift as 'characterizing' the variability, it appears that at least 20 years of record on either side of a shift may be necessary to adequately characterize high-flow variability. A longer streamflow record (about 30 years on either side) may be required to characterize low-flow variability. ?? 2005 Elsevier B.V. All rights reserved.
Age at menarche in relation to adult height: the EPIC study.
Onland-Moret, N C; Peeters, P H M; van Gils, C H; Clavel-Chapelon, F; Key, T; Tjønneland, A; Trichopoulou, A; Kaaks, R; Manjer, J; Panico, S; Palli, D; Tehard, B; Stoikidou, M; Bueno-De-Mesquita, H B; Boeing, H; Overvad, K; Lenner, P; Quirós, J R; Chirlaque, M D; Miller, A B; Khaw, K T; Riboli, E
2005-10-01
In the last two centuries, age at menarche has decreased in several European populations, whereas adult height has increased. It is unclear whether these trends have ceased in recent years or how age at menarche and height are related in individuals. In this study, the authors first investigated trends in age at menarche and adult height among 286,205 women from nine European countries by computing the mean age at menarche and height in 5-year birth cohorts, adjusted for differences in socioeconomic status. Second, the relation between age at menarche and height was estimated by linear regression models, adjusted for age at enrollment between 1992 and 1998 and socioeconomic status. Mean age at menarche decreased by 44 days per 5-year birth cohort (beta = -0.12, standard error = 0.002), varying from 18 days in the United Kingdom to 58 days in Spain and Germany. Women grew 0.29 cm taller per 5-year birth cohort (standard error = 0.007), varying from 0.42 cm in Italy to 0.98 cm in Denmark. Furthermore, women grew approximately 0.31 cm taller when menarche occurred 1 year later (range by country: 0.13-0.50 cm). Based on time trends, more recent birth cohorts have their menarche earlier and grow taller. However, women with earlier menarche reach a shorter adult height compared with women who have menarche at a later age.
Evaluating true BCI communication rate through mutual information and language models.
Speier, William; Arnold, Corey; Pouratian, Nader
2013-01-01
Brain-computer interface (BCI) systems are a promising means for restoring communication to patients suffering from "locked-in" syndrome. Research to improve system performance primarily focuses on means to overcome the low signal to noise ratio of electroencephalogric (EEG) recordings. However, the literature and methods are difficult to compare due to the array of evaluation metrics and assumptions underlying them, including that: 1) all characters are equally probable, 2) character selection is memoryless, and 3) errors occur completely at random. The standardization of evaluation metrics that more accurately reflect the amount of information contained in BCI language output is critical to make progress. We present a mutual information-based metric that incorporates prior information and a model of systematic errors. The parameters of a system used in one study were re-optimized, showing that the metric used in optimization significantly affects the parameter values chosen and the resulting system performance. The results of 11 BCI communication studies were then evaluated using different metrics, including those previously used in BCI literature and the newly advocated metric. Six studies' results varied based on the metric used for evaluation and the proposed metric produced results that differed from those originally published in two of the studies. Standardizing metrics to accurately reflect the rate of information transmission is critical to properly evaluate and compare BCI communication systems and advance the field in an unbiased manner.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pickles, W.L.; McClure, J.W.; Howell, R.H.
1978-01-01
A sophisticated non-linear multiparameter fitting program has been used to produce a best fit calibration curve for the response of an x-ray fluorescence analyzer to uranium nitrate, freeze dried, 0.2% accurate, gravimetric standards. The program is based on unconstrained minimization subroutine, VA02A. The program considers the mass values of the gravimetric standards as parameters to be fit along with the normal calibration curve parameters. The fitting procedure weights with the system errors and the mass errors in a consistent way. The resulting best fit calibration curve parameters reflect the fact that the masses of the standard samples are measured quantitiesmore » with a known error. Error estimates for the calibration curve parameters can be obtined from the curvature of the Chi-Squared Matrix or from error relaxation techniques. It has been shown that non-dispersive x-ray fluorescence analysis of 0.1 to 1 mg freeze-dried UNO/sub 3/ can have an accuracy of 0.2% in 1000 sec.« less
Hnatkova, K; Malik, M; Kautzner, J; Gang, Y; Camm, A J
1994-01-01
OBJECTIVE--Normal electrocardiographic recordings were analysed to establish the influence of measurement of different numbers of electrocardiographic leads on the results of different formulas expressing QT dispersion and the effects of adjustment of QT dispersion obtained from a subset of an electrocardiogram to approximate to the true QT dispersion obtained from a complete electrocardiogram. SUBJECTS AND METHODS--Resting 12 lead electrocardiograms of 27 healthy people were investigated. In each lead, the QT interval was measured with a digitising board and QT dispersion was evaluated by three formulas: (A) the difference between the longest and the shortest QT interval among all leads; (B) the difference between the second longest and the second shortest QT interval; (C) SD of QT intervals in different leads. For each formula, the "true" dispersion was assessed from all measurable leads and then different combinations of leads were omitted. The mean relative differences between the QT dispersion with a given number of omitted leads and the "true" QT dispersion (mean relative errors) and the coefficients of variance of the results of QT dispersion obtained when omitting combinations of leads were compared for the different formulas. The procedure was repeated with an adjustment of each formula dividing its results by the square root of the number of measured leads. The same approach was used for the measurement of QT dispersion from the chest leads including a fourth formula (D) the SD of interlead differences weighted according to the distances between leads. For different formulas, the mean relative errors caused by omitting individual electrocardiographic leads were also assessed and the importance of individual leads for correct measurement of QT dispersion was investigated. RESULTS--The study found important differences between different formulas for assessment of QT dispersion with respect to compensation for missing measurements of QT interval. The standard max-min formula (A) performed poorly (mean relative errors of 6.1% to 18.5% for missing one to four leads) but was appropriately adjusted with the factor of 1/square root of n (n = number of measured leads). In a population of healthy people such an adjustment removed the systematic bias introduced by missing leads of the 12 lead electrocardiogram and significantly reduced the mean relative errors caused by the omission of several leads. The unadjusted SD was the optimum formula (C) for the analysis of 12 lead electrocardiograms, and the weighted standard deviation (D) was the optimum for the analysis of six lead chest electrocardiograms. The coefficients of variance of measurements of QT dispersion with different missing leads were very large (about 3 to 7 for one to four missing leads). Independently of the formula for measurement of QT dispersion, omission of different leads produced substantially different relative errors. In 12 lead electrocardiograms the largest relative errors (> 10%) were caused by omitting lead aVL or lead V1. CONCLUSIONS--Because of the large coefficients of variance, the concept of adjusting the QT dispersion for different numbers of electrocardiographic leads used in its assessment is difficult if not impossible to fulfil. Thus it is likely to be more appropriate to assess QT dispersion from standardised constant sets of electrocardiographic leads. PMID:7833200
Forecasting Error Calculation with Mean Absolute Deviation and Mean Absolute Percentage Error
NASA Astrophysics Data System (ADS)
Khair, Ummul; Fahmi, Hasanul; Hakim, Sarudin Al; Rahim, Robbi
2017-12-01
Prediction using a forecasting method is one of the most important things for an organization, the selection of appropriate forecasting methods is also important but the percentage error of a method is more important in order for decision makers to adopt the right culture, the use of the Mean Absolute Deviation and Mean Absolute Percentage Error to calculate the percentage of mistakes in the least square method resulted in a percentage of 9.77% and it was decided that the least square method be worked for time series and trend data.
Performance monitoring and error significance in patients with obsessive-compulsive disorder.
Endrass, Tanja; Schuermann, Beate; Kaufmann, Christan; Spielberg, Rüdiger; Kniesche, Rainer; Kathmann, Norbert
2010-05-01
Performance monitoring has been consistently found to be overactive in obsessive-compulsive disorder (OCD). The present study examines whether performance monitoring in OCD is adjusted with error significance. Therefore, errors in a flanker task were followed by neutral (standard condition) or punishment feedbacks (punishment condition). In the standard condition patients had significantly larger error-related negativity (ERN) and correct-related negativity (CRN) ampliudes than controls. But, in the punishment condition groups did not differ in ERN and CRN amplitudes. While healthy controls showed an amplitude enhancement between standard and punishment condition, OCD patients showed no variation. In contrast, group differences were not found for the error positivity (Pe): both groups had larger Pe amplitudes in the punishment condition. Results confirm earlier findings of overactive error monitoring in OCD. The absence of a variation with error significance might indicate that OCD patients are unable to down-regulate their monitoring activity according to external requirements. Copyright 2010 Elsevier B.V. All rights reserved.
Kwon, Heon-Ju; Kim, Bohyun; Kim, So Yeon; Lee, Chul Seung; Lee, Jeongjin; Song, Gi Won; Lee, Sung Gyu
2018-01-01
Background/Aims Computed tomography (CT) hepatic volumetry is currently accepted as the most reliable method for preoperative estimation of graft weight in living donor liver transplantation (LDLT). However, several factors can cause inaccuracies in CT volumetry compared to real graft weight. The purpose of this study was to determine the frequency and degree of resection plane-dependent error in CT volumetry of the right hepatic lobe in LDLT. Methods Forty-six living liver donors underwent CT before donor surgery and on postoperative day 7. Prospective CT volumetry (VP) was measured via the assumptive hepatectomy plane. Retrospective liver volume (VR) was measured using the actual plane by comparing preoperative and postoperative CT. Compared with intraoperatively measured weight (W), errors in percentage (%) VP and VR were evaluated. Plane-dependent error in VP was defined as the absolute difference between VP and VR. % plane-dependent error was defined as follows: |VP–VR|/W∙100. Results Mean VP, VR, and W were 761.9 mL, 755.0 mL, and 696.9 g. Mean and % errors in VP were 73.3 mL and 10.7%. Mean error and % error in VR were 64.4 mL and 9.3%. Mean plane-dependent error in VP was 32.4 mL. Mean % plane-dependent error was 4.7%. Plane-dependent error in VP exceeded 10% of W in approximately 10% of the subjects in our study. Conclusions There was approximately 5% plane-dependent error in liver VP on CT volumetry. Plane-dependent error in VP exceeded 10% of W in approximately 10% of LDLT donors in our study. This error should be considered, especially when CT volumetry is performed by a less experienced operator who is not well acquainted with the donor hepatectomy plane. PMID:28759989
Pegler, Joe; Lehane, Elaine; Livingstone, Vicki; McCarthy, Nora; Sahm, Laura J.; Tabirca, Sabin; O’Driscoll, Aoife; Corrigan, Mark
2016-01-01
Background Patient safety requires optimal management of medications. Electronic systems are encouraged to reduce medication errors. Near field communications (NFC) is an emerging technology that may be used to develop novel medication management systems. Methods An NFC-based system was designed to facilitate prescribing, administration and review of medications commonly used on surgical wards. Final year medical, nursing, and pharmacy students were recruited to test the electronic system in a cross-over observational setting on a simulated ward. Medication errors were compared against errors recorded using a paper-based system. Results A significant difference in the commission of medication errors was seen when NFC and paper-based medication systems were compared. Paper use resulted in a mean of 4.09 errors per prescribing round while NFC prescribing resulted in a mean of 0.22 errors per simulated prescribing round (P=0.000). Likewise, medication administration errors were reduced from a mean of 2.30 per drug round with a Paper system to a mean of 0.80 errors per round using NFC (P<0.015). A mean satisfaction score of 2.30 was reported by users, (rated on seven-point scale with 1 denoting total satisfaction with system use and 7 denoting total dissatisfaction). Conclusions An NFC based medication system may be used to effectively reduce medication errors in a simulated ward environment. PMID:28293602
O'Connell, Emer; Pegler, Joe; Lehane, Elaine; Livingstone, Vicki; McCarthy, Nora; Sahm, Laura J; Tabirca, Sabin; O'Driscoll, Aoife; Corrigan, Mark
2016-01-01
Patient safety requires optimal management of medications. Electronic systems are encouraged to reduce medication errors. Near field communications (NFC) is an emerging technology that may be used to develop novel medication management systems. An NFC-based system was designed to facilitate prescribing, administration and review of medications commonly used on surgical wards. Final year medical, nursing, and pharmacy students were recruited to test the electronic system in a cross-over observational setting on a simulated ward. Medication errors were compared against errors recorded using a paper-based system. A significant difference in the commission of medication errors was seen when NFC and paper-based medication systems were compared. Paper use resulted in a mean of 4.09 errors per prescribing round while NFC prescribing resulted in a mean of 0.22 errors per simulated prescribing round (P=0.000). Likewise, medication administration errors were reduced from a mean of 2.30 per drug round with a Paper system to a mean of 0.80 errors per round using NFC (P<0.015). A mean satisfaction score of 2.30 was reported by users, (rated on seven-point scale with 1 denoting total satisfaction with system use and 7 denoting total dissatisfaction). An NFC based medication system may be used to effectively reduce medication errors in a simulated ward environment.
Random errors in interferometry with the least-squares method
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang Qi
2011-01-20
This investigation analyzes random errors in interferometric surface profilers using the least-squares method when random noises are present. Two types of random noise are considered here: intensity noise and position noise. Two formulas have been derived for estimating the standard deviations of the surface height measurements: one is for estimating the standard deviation when only intensity noise is present, and the other is for estimating the standard deviation when only position noise is present. Measurements on simulated noisy interferometric data have been performed, and standard deviations of the simulated measurements have been compared with those theoretically derived. The relationships havemore » also been discussed between random error and the wavelength of the light source and between random error and the amplitude of the interference fringe.« less
Baert, Isabel A C; Lluch, Enrique; Struyf, Thomas; Peeters, Greta; Van Oosterwijck, Sophie; Tuynman, Joanna; Rufai, Salim; Struyf, Filip
2018-06-01
The therapeutic value of proprioceptive-based exercises in knee osteoarthritis (KOA) management warrants investigation of proprioceptive testing methods easily accessible in clinical practice. To estimate inter- and intrarater reliability of the knee joint position sense (KJPS) test and knee force sense (KFS) test in subjects with and without KOA. Cross-sectional test-retest design. Two blinded raters performed independently repeated measures of the KJPS and KFS test, using an analogue inclinometer and handheld dynamometer, respectively, in eight KOA patients (12 symptomatic knees) and 26 healthy controls (52 asymptomatic knees). Intraclass correlation coefficients (ICCs; model 2,1), standard error of measurement (SEM) and minimal detectable change with 95% confidence bounds (MDC 95 ) were calculated. For KJPS, results showed good to excellent test-retest agreement (ICCs 0.70-0.95 in KOA patients; ICCs 0.65-0.85 in healthy controls). A 2° measurement error (SEM 1°) was reported when measuring KJPS in multiple test positions and calculating mean repositioning error. Testing KOA patients pre and post therapy a repositioning error larger than 4° (MDC 95 ) is needed to consider true change. Measuring KFS using handheld dynamometry showed poor to fair interrater and poor to excellent intrarater reliability in subjects with and without KOA. Measuring KJPS in multiple test positions using an analogue inclinometer and calculating mean repositioning error is reliable and can be used in clinical practice. We do not recommend the use of the KFS test to clinicians. Further research is required to establish diagnostic accuracy and validity of our KJPS test in larger knee pain populations. Copyright © 2017 Elsevier Ltd. All rights reserved.
Takemura, Akihiro; Sasamoto, Kouhei; Nakamura, Kaori; Kuroda, Tatsunori; Shoji, Saori; Matsuura, Yukihiro; Matsushita, Tatsuhiko
2013-06-01
In this study, we evaluated the image distortion of three magnetic resonance imaging (MRI) systems with magnetic field strengths of 0.4 T, 1.5 T and 3 T, during stereotactic irradiation of the brain. A quality assurance phantom for MRI image distortion in radiosurgery was used for these measurements of image distortion. Images were obtained from a 0.4-T MRI (APERTO Eterna, HITACHI), a 1.5-T MRI (Signa HDxt, GE Healthcare) and a 3-T MRI (Signa HDx 3.0 T, GE Healthcare) system. Imaging sequences for the 0.4-T and 3-T MRI were based on the 1.5-T MRI sequence used for stereotactic irradiation in the clinical setting. The same phantom was scanned using a computed tomography (CT) system (Aquilion L/B, Toshiba) as the standard. The results showed mean errors in the Z direction to be the least satisfactory of all the directions in all results. The mean error in the Z direction for 1.5-T MRI at -110 mm in the axial plane showed the largest error of 4.0 mm. The maximum errors for the 0.4-T and 3-T MRI were 1.7 mm and 2.8 mm, respectively. The errors in the plane were not uniform and did not show linearity, suggesting that simple distortion correction using outside markers is unlikely to be effective. The 0.4-T MRI showed the lowest image distortion of the three. However, other items, such as image noise, contrast and study duration need to be evaluated in MRI systems when applying frameless stereotactic irradiation.
The impacts of observing flawed and flawless demonstrations on clinical skill learning.
Domuracki, Kurt; Wong, Arthur; Olivieri, Lori; Grierson, Lawrence E M
2015-02-01
Clinical skills expertise can be advanced through accessible and cost-effective video-based observational practice activities. Previous findings suggest that the observation of performances of skills that include flaws can be beneficial to trainees. Observing the scope of variability within a skilled movement allows learners to develop strategies to manage the potential for and consequences associated with errors. This study tests this observational learning approach on the development of the skills of central line insertion (CLI). Medical trainees with no CLI experience (n = 39) were randomised to three observational practice groups: a group which viewed and assessed videos of an expert performing a CLI without any errors (F); a group which viewed and assessed videos that contained a mix of flawless and errorful performances (E), and a group which viewed the same videos as the E group but were also given information concerning the correctness of their assessments (FA). All participants interacted with their observational videos each day for 4 days. Following this period, participants returned to the laboratory and performed a simulation-based insertion, which was assessed using a standard checklist and a global rating scale for the skill. These ratings served as the dependent measures for analysis. The checklist analysis revealed no differences between observational learning groups (grand mean ± standard error: [20.3 ± 0.7]/25). However, the global rating analysis revealed a main effect of group (d.f.2,36 = 4.51, p = 0.018), which describes better CLI performance in the FA group, compared with the F and E groups. Observational practice that includes errors improves the global performance aspects of clinical skill learning as long as learners are given confirmation that what they are observing is errorful. These findings provide a refined perspective on the optimal organisation of skill education programmes that combine physical and observational practice activities. © 2015 John Wiley & Sons Ltd.
Importance of implementing an analytical quality control system in a core laboratory.
Marques-Garcia, F; Garcia-Codesal, M F; Caro-Narros, M R; Contreras-SanFeliciano, T
2015-01-01
The aim of the clinical laboratory is to provide useful information for screening, diagnosis and monitoring of disease. The laboratory should ensure the quality of extra-analytical and analytical process, based on set criteria. To do this, it develops and implements a system of internal quality control, designed to detect errors, and compare its data with other laboratories, through external quality control. In this way it has a tool to detect the fulfillment of the objectives set, and in case of errors, allowing corrective actions to be made, and ensure the reliability of the results. This article sets out to describe the design and implementation of an internal quality control protocol, as well as its periodical assessment intervals (6 months) to determine compliance with pre-determined specifications (Stockholm Consensus(1)). A total of 40 biochemical and 15 immunochemical methods were evaluated using three different control materials. Next, a standard operation procedure was planned to develop a system of internal quality control that included calculating the error of the analytical process, setting quality specifications, and verifying compliance. The quality control data were then statistically depicted as means, standard deviations, and coefficients of variation, as well as systematic, random, and total errors. The quality specifications were then fixed and the operational rules to apply in the analytical process were calculated. Finally, our data were compared with those of other laboratories through an external quality assurance program. The development of an analytical quality control system is a highly structured process. This should be designed to detect errors that compromise the stability of the analytical process. The laboratory should review its quality indicators, systematic, random and total error at regular intervals, in order to ensure that they are meeting pre-determined specifications, and if not, apply the appropriate corrective actions. Copyright © 2015 SECA. Published by Elsevier Espana. All rights reserved.
Thompson, Ronald E.; Hoffman, Scott A.
2006-01-01
A suite of 28 streamflow statistics, ranging from extreme low to high flows, was computed for 17 continuous-record streamflow-gaging stations and predicted for 20 partial-record stations in Monroe County and contiguous counties in north-eastern Pennsylvania. The predicted statistics for the partial-record stations were based on regression analyses relating inter-mittent flow measurements made at the partial-record stations indexed to concurrent daily mean flows at continuous-record stations during base-flow conditions. The same statistics also were predicted for 134 ungaged stream locations in Monroe County on the basis of regression analyses relating the statistics to GIS-determined basin characteristics for the continuous-record station drainage areas. The prediction methodology for developing the regression equations used to estimate statistics was developed for estimating low-flow frequencies. This study and a companion study found that the methodology also has application potential for predicting intermediate- and high-flow statistics. The statistics included mean monthly flows, mean annual flow, 7-day low flows for three recurrence intervals, nine flow durations, mean annual base flow, and annual mean base flows for two recurrence intervals. Low standard errors of prediction and high coefficients of determination (R2) indicated good results in using the regression equations to predict the statistics. Regression equations for the larger flow statistics tended to have lower standard errors of prediction and higher coefficients of determination (R2) than equations for the smaller flow statistics. The report discusses the methodologies used in determining the statistics and the limitations of the statistics and the equations used to predict the statistics. Caution is indicated in using the predicted statistics for small drainage area situations. Study results constitute input needed by water-resource managers in Monroe County for planning purposes and evaluation of water-resources availability.
Nakling, Jakob; Buhaug, Harald; Backe, Bjorn
2005-10-01
In a large unselected population of normal spontaneous pregnancies, to estimate the biologic variation of the interval from the first day of the last menstrual period to start of pregnancy, and the biologic variation of gestational length to delivery; and to estimate the random error of routine ultrasound assessment of gestational age in mid-second trimester. Cohort study of 11,238 singleton pregnancies, with spontaneous onset of labour and reliable last menstrual period. The day of delivery was predicted with two independent methods: According to the rule of Nägele and based on ultrasound examination in gestational weeks 17-19. For both methods, the mean difference between observed and predicted day of delivery was calculated. The variances of the differences were combined to estimate the variances of the two partitions of pregnancy. The biologic variation of the time from last menstrual period to pregnancy start was estimated to 7.0 days (standard deviation), and the standard deviation of the time to spontaneous delivery was estimated to 12.4 days. The estimate of the standard deviation of the random error of ultrasound assessed foetal age was 5.2 days. Even when the last menstrual period is reliable, the biologic variation of the time from last menstrual period to the real start of pregnancy is substantial, and must be taken into account. Reliable information about the first day of the last menstrual period is not equivalent with reliable information about the start of pregnancy.
Pragmatics abilities in narrative production: a cross-disorder comparison.
Norbury, Courtenay Frazier; Gemmell, Tracey; Paul, Rhea
2014-05-01
We aimed to disentangle contributions of socio-pragmatic and structural language deficits to narrative competence by comparing the narratives of children with autism spectrum disorder (ASD; n = 25), non-autistic children with language impairments (LI; n = 23), and children with typical development (TD; n = 27). Groups were matched for age (6½ to 15 years; mean: 10;6) and non-verbal ability; ASD and TD groups were matched on standardized language scores. Despite distinct clinical presentation, children with ASD and LI produced similarly simple narratives that lacked semantic richness and omitted important story elements, when compared to TD peers. Pragmatic errors were common across groups. Within the LI group, pragmatic errors were negatively correlated with story macrostructure scores and with an index of semantic-pragmatic relevance. For the group with ASD, pragmatic errors consisted of comments that, though extraneous, did not detract from the gist of the narrative. These findings underline the importance of both language and socio-pragmatic skill for producing coherent, appropriate narratives.
Treating Sample Covariances for Use in Strongly Coupled Atmosphere-Ocean Data Assimilation
NASA Astrophysics Data System (ADS)
Smith, Polly J.; Lawless, Amos S.; Nichols, Nancy K.
2018-01-01
Strongly coupled data assimilation requires cross-domain forecast error covariances; information from ensembles can be used, but limited sampling means that ensemble derived error covariances are routinely rank deficient and/or ill-conditioned and marred by noise. Thus, they require modification before they can be incorporated into a standard assimilation framework. Here we compare methods for improving the rank and conditioning of multivariate sample error covariance matrices for coupled atmosphere-ocean data assimilation. The first method, reconditioning, alters the matrix eigenvalues directly; this preserves the correlation structures but does not remove sampling noise. We show that it is better to recondition the correlation matrix rather than the covariance matrix as this prevents small but dynamically important modes from being lost. The second method, model state-space localization via the Schur product, effectively removes sample noise but can dampen small cross-correlation signals. A combination that exploits the merits of each is found to offer an effective alternative.
A novel optical rotary encoder with eccentricity self-detection ability.
Li, Xuan; Ye, Guoyong; Liu, Hongzhong; Ban, Yaowen; Shi, Yongsheng; Yin, Lei; Lu, Bingheng
2017-11-01
Eccentricity error is the main error source of optical rotary encoders. Real-time detection and compensation of the eccentricity error is an effective way of improving the accuracy of rotary optical encoders. In this paper, a novel rotary optical encoder is presented to realize eccentricity self-detection. The proposed encoder adopts a spider-web-patterned scale grating as a measuring standard which is scanned by a dual-head scanning unit. Two scanning heads of the dual-head scanning unit, which are arranged orthogonally, have the function of scanning the periodic pattern of the scale grating along the angular and radial directions, respectively. By this means, synchronous measurement of angular and radial displacements of the scale grating is realized. This paper gives the details of the operating principle of the rotary optical encoder, developing and testing work of a prototype. The eccentricity self-detection result agrees well with the result measured by an optical microscope. The experimental result preliminarily proves the feasibility and effectiveness of the proposed optical encoder.
A novel optical rotary encoder with eccentricity self-detection ability
NASA Astrophysics Data System (ADS)
Li, Xuan; Ye, Guoyong; Liu, Hongzhong; Ban, Yaowen; Shi, Yongsheng; Yin, Lei; Lu, Bingheng
2017-11-01
Eccentricity error is the main error source of optical rotary encoders. Real-time detection and compensation of the eccentricity error is an effective way of improving the accuracy of rotary optical encoders. In this paper, a novel rotary optical encoder is presented to realize eccentricity self-detection. The proposed encoder adopts a spider-web-patterned scale grating as a measuring standard which is scanned by a dual-head scanning unit. Two scanning heads of the dual-head scanning unit, which are arranged orthogonally, have the function of scanning the periodic pattern of the scale grating along the angular and radial directions, respectively. By this means, synchronous measurement of angular and radial displacements of the scale grating is realized. This paper gives the details of the operating principle of the rotary optical encoder, developing and testing work of a prototype. The eccentricity self-detection result agrees well with the result measured by an optical microscope. The experimental result preliminarily proves the feasibility and effectiveness of the proposed optical encoder.
Kawasaki, Ryo; Wang, Jie Jin; Rochtchina, Elena; Taylor, Bronwen; Wong, Tien Yin; Tominaga, Makoto; Kato, Takeo; Daimon, Makoto; Oizumi, Toshihide; Kawata, Sumio; Kayama, Takamasa; Yamashita, Hidetoshi; Mitchell, Paul
2006-08-01
To describe the prevalence of retinal vascular signs and their association with cardiovascular risk factors in a Japanese population. Population-based cross-sectional study. Adult persons aged 35 years or older from Funagata, Yamagata Prefecture, Japan (n = 1481). The Funagata Study is a Japanese population-based study of persons aged 35 years or older, and included 1961 nondiabetic participants (53.3% of 3676 eligible subjects). A nonmydriatic retinal photograph was taken of 1 eye to assess retinal microvascular signs. Retinal arteriolar wall signs (focal arteriolar narrowing, arteriovenous nicking, enhanced arteriolar wall reflex) and retinopathy were assessed in 1481 participants without diabetes (40.3% of eligible persons) using a standardized protocol. Using a computer-assisted method, retinal vessel diameters were measured in 921 participants with gradable retinal image (25.1% of eligible persons). Prevalence of retinal microvascular signs and their association with cardiovascular risk factors. Moderate or severe focal arteriolar narrowing, arteriovenous nicking, enhanced arteriolar wall reflex, and retinopathy were found in 8.3%, 15.2%, 18.7%, and 9.0%, respectively, of the study population. Mean (+/-standard error) values for retinal arteriolar diameter were 178.6+/-21.0 mum, and mean values (+/-standard error) for venular diameter were 214.9+/-20.6 mum. Older persons were more likely to have retinal arteriolar wall signs, retinopathy, and narrower retinal vessel diameters. After adjusting for multiple factors, each 10-mmHg increase in mean arterial blood pressure was associated with a 20% to 40% increased likelihood of retinal arteriolar signs and a 2.8-mum reduction in arteriolar diameter. Retinopathy was associated with higher body mass index and both impaired glucose tolerance and impaired fasting glucose. In nondiabetic Japanese adults, retinal arteriolar wall signs were associated with older age and increased blood pressure, whereas retinopathy was associated with older age, higher body mass index, impaired glucose tolerance, and impaired fasting glucose. These findings are comparable with data from white populations.
The use of mini-samples in palaeomagnetism
NASA Astrophysics Data System (ADS)
Böhnel, Harald; Michalk, Daniel; Nowaczyk, Norbert; Naranjo, Gildardo Gonzalez
2009-10-01
Rock cores of ~25 mm diameter are widely used in palaeomagnetism. Occasionally smaller diameters have been used as well which represents distinct advantages in terms of throughput, weight of equipment and core collections. How their orientation precision compares to 25 mm cores, however, has not been evaluated in detail before. Here we compare the site mean directions and their statistical parameters for 12 lava flows sampled with 25 mm cores (standard samples, typically 8 cores per site) and with 12 mm drill cores (mini-samples, typically 14 cores per site). The site-mean directions for both sample sizes appear to be indistinguishable in most cases. For the mini-samples, site dispersion parameters k on average are slightly lower than for the standard samples reflecting their larger orienting and measurement errors. Applying the Wilcoxon signed-rank test the probability that k or α95 have the same distribution for both sizes is acceptable only at the 17.4 or 66.3 per cent level, respectively. The larger mini-core numbers per site appears to outweigh the lower k values yielding also slightly smaller confidence limits α95. Further, both k and α95 are less variable for mini-samples than for standard size samples. This is interpreted also to result from the larger number of mini-samples per site, which better averages out the detrimental effect of undetected abnormal remanence directions. Sampling of volcanic rocks with mini-samples therefore does not present a disadvantage in terms of the overall obtainable uncertainty of site mean directions. Apart from this, mini-samples do present clear advantages during the field work, as about twice the number of drill cores can be recovered compared to 25 mm cores, and the sampled rock unit is then more widely covered, which reduces the contribution of natural random errors produced, for example, by fractures, cooling joints, and palaeofield inhomogeneities. Mini-samples may be processed faster in the laboratory, which is of particular advantage when carrying out palaeointensity experiments.
Predictors of driving safety in early Alzheimer disease
Dawson, J D.; Anderson, S W.; Uc, E Y.; Dastrup, E; Rizzo, M
2009-01-01
Objective: To measure the association of cognition, visual perception, and motor function with driving safety in Alzheimer disease (AD). Methods: Forty drivers with probable early AD (mean Mini-Mental State Examination score 26.5) and 115 elderly drivers without neurologic disease underwent a battery of cognitive, visual, and motor tests, and drove a standardized 35-mile route in urban and rural settings in an instrumented vehicle. A composite cognitive score (COGSTAT) was calculated for each subject based on eight neuropsychological tests. Driving safety errors were noted and classified by a driving expert based on video review. Results: Drivers with AD committed an average of 42.0 safety errors/drive (SD = 12.8), compared to an average of 33.2 (SD = 12.2) for drivers without AD (p < 0.0001); the most common errors were lane violations. Increased age was predictive of errors, with a mean of 2.3 more errors per drive observed for each 5-year age increment. After adjustment for age and gender, COGSTAT was a significant predictor of safety errors in subjects with AD, with a 4.1 increase in safety errors observed for a 1 SD decrease in cognitive function. Significant increases in safety errors were also found in subjects with AD with poorer scores on Benton Visual Retention Test, Complex Figure Test-Copy, Trail Making Subtest-A, and the Functional Reach Test. Conclusion: Drivers with Alzheimer disease (AD) exhibit a range of performance on tests of cognition, vision, and motor skills. Since these tests provide additional predictive value of driving performance beyond diagnosis alone, clinicians may use these tests to help predict whether a patient with AD can safely operate a motor vehicle. GLOSSARY AD = Alzheimer disease; AVLT = Auditory Verbal Learning Test; Blocks = Block Design subtest; BVRT = Benton Visual Retention Test; CFT = Complex Figure Test; CI = confidence interval; COWA = Controlled Oral Word Association; CS = contrast sensitivity; FVA = far visual acuity; JLO = Judgment of Line Orientation; MCI = mild cognitive impairment; MMSE = Mini-Mental State Examination; NVA = near visual acuity; SFM = structure from motion; TMT = Trail-Making Test; UFOV = Useful Field of View. PMID:19204261
Elsaid, K; Truong, T; Monckeberg, M; McCarthy, H; Butera, J; Collins, C
2013-12-01
To evaluate the impact of electronic standardized chemotherapy templates on incidence and types of prescribing errors. A quasi-experimental interrupted time series with segmented regression. A 700-bed multidisciplinary tertiary care hospital with an ambulatory cancer center. A multidisciplinary team including oncology physicians, nurses, pharmacists and information technologists. Standardized, regimen-specific, chemotherapy prescribing forms were developed and implemented over a 32-month period. Trend of monthly prevented prescribing errors per 1000 chemotherapy doses during the pre-implementation phase (30 months), immediate change in the error rate from pre-implementation to implementation and trend of errors during the implementation phase. Errors were analyzed according to their types: errors in communication or transcription, errors in dosing calculation and errors in regimen frequency or treatment duration. Relative risk (RR) of errors in the post-implementation phase (28 months) compared with the pre-implementation phase was computed with 95% confidence interval (CI). Baseline monthly error rate was stable with 16.7 prevented errors per 1000 chemotherapy doses. A 30% reduction in prescribing errors was observed with initiating the intervention. With implementation, a negative change in the slope of prescribing errors was observed (coefficient = -0.338; 95% CI: -0.612 to -0.064). The estimated RR of transcription errors was 0.74; 95% CI (0.59-0.92). The estimated RR of dosing calculation errors was 0.06; 95% CI (0.03-0.10). The estimated RR of chemotherapy frequency/duration errors was 0.51; 95% CI (0.42-0.62). Implementing standardized chemotherapy-prescribing templates significantly reduced all types of prescribing errors and improved chemotherapy safety.
Geolocation error tracking of ZY-3 three line cameras
NASA Astrophysics Data System (ADS)
Pan, Hongbo
2017-01-01
The high-accuracy geolocation of high-resolution satellite images (HRSIs) is a key issue for mapping and integrating multi-temporal, multi-sensor images. In this manuscript, we propose a new geometric frame for analysing the geometric error of a stereo HRSI, in which the geolocation error can be divided into three parts: the epipolar direction, cross base direction, and height direction. With this frame, we proved that the height error of three line cameras (TLCs) is independent of nadir images, and that the terrain effect has a limited impact on the geolocation errors. For ZY-3 error sources, the drift error in both the pitch and roll angle and its influence on the geolocation accuracy are analysed. Epipolar and common tie-point constraints are proposed to study the bundle adjustment of HRSIs. Epipolar constraints explain that the relative orientation can reduce the number of compensation parameters in the cross base direction and have a limited impact on the height accuracy. The common tie points adjust the pitch-angle errors to be consistent with each other for TLCs. Therefore, free-net bundle adjustment of a single strip cannot significantly improve the geolocation accuracy. Furthermore, the epipolar and common tie-point constraints cause the error to propagate into the adjacent strip when multiple strips are involved in the bundle adjustment, which results in the same attitude uncertainty throughout the whole block. Two adjacent strips-Orbit 305 and Orbit 381, covering 7 and 12 standard scenes separately-and 308 ground control points (GCPs) were used for the experiments. The experiments validate the aforementioned theory. The planimetric and height root mean square errors were 2.09 and 1.28 m, respectively, when two GCPs were settled at the beginning and end of the block.
Inui, Hiroshi; Taketomi, Shuji; Tahara, Keitarou; Yamagami, Ryota; Sanada, Takaki; Tanaka, Sakae
2017-03-01
Bone cutting errors can cause malalignment of unicompartmental knee arthroplasties (UKA). Although the extent of tibial malalignment due to horizontal cutting errors has been well reported, there is a lack of studies evaluating malalignment as a consequence of keel cutting errors, particularly in the Oxford UKA. The purpose of this study was to examine keel cutting errors during Oxford UKA placement using a navigation system and to clarify whether two different tibial keel cutting techniques would have different error rates. The alignment of the tibial cut surface after a horizontal osteotomy and the surface of the tibial trial component was measured with a navigation system. Cutting error was defined as the angular difference between these measurements. The following two techniques were used: the standard "pushing" technique in 83 patients (group P) and a modified "dolphin" technique in 41 patients (group D). In all 123 patients studied, the mean absolute keel cutting error was 1.7° and 1.4° in the coronal and sagittal planes, respectively. In group P, there were 22 outlier patients (27 %) in the coronal plane and 13 (16 %) in the sagittal plane. Group D had three outlier patients (8 %) in the coronal plane and none (0 %) in the sagittal plane. Significant differences were observed in the outlier ratio of these techniques in both the sagittal (P = 0.014) and coronal (P = 0.008) planes. Our study demonstrated overall keel cutting errors of 1.7° in the coronal plane and 1.4° in the sagittal plane. The "dolphin" technique was found to significantly reduce keel cutting errors on the tibial side. This technique will be useful for accurate component positioning and therefore improve the longevity of Oxford UKAs. Retrospective comparative study, Level III.
tPA Prescription and Administration Errors within a Regional Stroke System
Chung, Lee S; Tkach, Aleksander; Lingenfelter, Erin M; Dehoney, Sarah; Rollo, Jeannie; de Havenon, Adam; DeWitt, Lucy Dana; Grantz, Matthew Ryan; Wang, Haimei; Wold, Jana J; Hannon, Peter M; Weathered, Natalie R; Majersik, Jennifer J
2015-01-01
Background IV tPA utilization in acute ischemic stroke (AIS) requires weight-based dosing and a standardized infusion rate. In our regional network, we have tried to minimize tPA dosing errors. We describe the frequency and types of tPA administration errors made in our comprehensive stroke center (CSC) and at community hospitals (CHs) prior to transfer. Methods Using our stroke quality database, we extracted clinical and pharmacy information on all patients who received IV tPA from 2010–11 at the CSC or CH prior to transfer. All records were analyzed for the presence of inclusion/exclusion criteria deviations or tPA errors in prescription, reconstitution, dispensing, or administration, and analyzed for association with outcomes. Results We identified 131 AIS cases treated with IV tPA: 51% female; mean age 68; 32% treated at CSC, 68% at CH (including 26% by telestroke) from 22 CHs. tPA prescription and administration errors were present in 64% of all patients (41% CSC, 75% CH, p<0.001), the most common being incorrect dosage for body weight (19% CSC, 55% CH, p<0.001). Of the 27 overdoses, there were 3 deaths due to systemic hemorrhage or ICH. Nonetheless, outcomes (parenchymal hematoma, mortality, mRS) did not differ between CSC and CH patients nor between those with and without errors. Conclusion Despite focus on minimization of tPA administration errors in AIS patients, such errors were very common in our regional stroke system. Although an association between tPA errors and stroke outcomes was not demonstrated, quality assurance mechanisms are still necessary to reduce potentially dangerous, avoidable errors. PMID:26698642
Evaluation of lens distortion errors in video-based motion analysis
NASA Technical Reports Server (NTRS)
Poliner, Jeffrey; Wilmington, Robert; Klute, Glenn K.; Micocci, Angelo
1993-01-01
In an effort to study lens distortion errors, a grid of points of known dimensions was constructed and videotaped using a standard and a wide-angle lens. Recorded images were played back on a VCR and stored on a personal computer. Using these stored images, two experiments were conducted. Errors were calculated as the difference in distance from the known coordinates of the points to the calculated coordinates. The purposes of this project were as follows: (1) to develop the methodology to evaluate errors introduced by lens distortion; (2) to quantify and compare errors introduced by use of both a 'standard' and a wide-angle lens; (3) to investigate techniques to minimize lens-induced errors; and (4) to determine the most effective use of calibration points when using a wide-angle lens with a significant amount of distortion. It was seen that when using a wide-angle lens, errors from lens distortion could be as high as 10 percent of the size of the entire field of view. Even with a standard lens, there was a small amount of lens distortion. It was also found that the choice of calibration points influenced the lens distortion error. By properly selecting the calibration points and avoidance of the outermost regions of a wide-angle lens, the error from lens distortion can be kept below approximately 0.5 percent with a standard lens and 1.5 percent with a wide-angle lens.
Technology utilization to prevent medication errors.
Forni, Allison; Chu, Hanh T; Fanikos, John
2010-01-01
Medication errors have been increasingly recognized as a major cause of iatrogenic illness and system-wide improvements have been the focus of prevention efforts. Critically ill patients are particularly vulnerable to injury resulting from medication errors because of the severity of illness, need for high risk medications with a narrow therapeutic index and frequent use of intravenous infusions. Health information technology has been identified as method to reduce medication errors as well as improve the efficiency and quality of care; however, few studies regarding the impact of health information technology have focused on patients in the intensive care unit. Computerized physician order entry and clinical decision support systems can play a crucial role in decreasing errors in the ordering stage of the medication use process through improving the completeness and legibility of orders, alerting physicians to medication allergies and drug interactions and providing a means for standardization of practice. Electronic surveillance, reminders and alerts identify patients susceptible to an adverse event, communicate critical changes in a patient's condition, and facilitate timely and appropriate treatment. Bar code technology, intravenous infusion safety systems, and electronic medication administration records can target prevention of errors in medication dispensing and administration where other technologies would not be able to intercept a preventable adverse event. Systems integration and compliance are vital components in the implementation of health information technology and achievement of a safe medication use process.
NASA Astrophysics Data System (ADS)
Yehia, Ali M.; Mohamed, Heba M.
2016-01-01
Three advanced chemmometric-assisted spectrophotometric methods namely; Concentration Residuals Augmented Classical Least Squares (CRACLS), Multivariate Curve Resolution-Alternating Least Squares (MCR-ALS) and Principal Component Analysis-Artificial Neural Networks (PCA-ANN) were developed, validated and benchmarked to PLS calibration; to resolve the severely overlapped spectra and simultaneously determine; Paracetamol (PAR), Guaifenesin (GUA) and Phenylephrine (PHE) in their ternary mixture and in presence of p-aminophenol (AP) the main degradation product and synthesis impurity of Paracetamol. The analytical performance of the proposed methods was described by percentage recoveries, root mean square error of calibration and standard error of prediction. The four multivariate calibration methods could be directly used without any preliminary separation step and successfully applied for pharmaceutical formulation analysis, showing no excipients' interference.
Intravenous Chemotherapy Compounding Errors in a Follow-Up Pan-Canadian Observational Study.
Gilbert, Rachel E; Kozak, Melissa C; Dobish, Roxanne B; Bourrier, Venetia C; Koke, Paul M; Kukreti, Vishal; Logan, Heather A; Easty, Anthony C; Trbovich, Patricia L
2018-05-01
Intravenous (IV) compounding safety has garnered recent attention as a result of high-profile incidents, awareness efforts from the safety community, and increasingly stringent practice standards. New research with more-sensitive error detection techniques continues to reinforce that error rates with manual IV compounding are unacceptably high. In 2014, our team published an observational study that described three types of previously unrecognized and potentially catastrophic latent chemotherapy preparation errors in Canadian oncology pharmacies that would otherwise be undetectable. We expand on this research and explore whether additional potential human failures are yet to be addressed by practice standards. Field observations were conducted in four cancer center pharmacies in four Canadian provinces from January 2013 to February 2015. Human factors specialists observed and interviewed pharmacy managers, oncology pharmacists, pharmacy technicians, and pharmacy assistants as they carried out their work. Emphasis was on latent errors (potential human failures) that could lead to outcomes such as wrong drug, dose, or diluent. Given the relatively short observational period, no active failures or actual errors were observed. However, 11 latent errors in chemotherapy compounding were identified. In terms of severity, all 11 errors create the potential for a patient to receive the wrong drug or dose, which in the context of cancer care, could lead to death or permanent loss of function. Three of the 11 practices were observed in our previous study, but eight were new. Applicable Canadian and international standards and guidelines do not explicitly address many of the potentially error-prone practices observed. We observed a significant degree of risk for error in manual mixing practice. These latent errors may exist in other regions where manual compounding of IV chemotherapy takes place. Continued efforts to advance standards, guidelines, technological innovation, and chemical quality testing are needed.
Reproducibility Between Brain Uptake Ratio Using Anatomic Standardization and Patlak-Plot Methods.
Shibutani, Takayuki; Onoguchi, Masahisa; Noguchi, Atsushi; Yamada, Tomoki; Tsuchihashi, Hiroko; Nakajima, Tadashi; Kinuya, Seigo
2015-12-01
The Patlak-plot and conventional methods of determining brain uptake ratio (BUR) have some problems with reproducibility. We formulated a method of determining BUR using anatomic standardization (BUR-AS) in a statistical parametric mapping algorithm to improve reproducibility. The objective of this study was to demonstrate the inter- and intraoperator reproducibility of mean cerebral blood flow as determined using BUR-AS in comparison to the conventional-BUR (BUR-C) and Patlak-plot methods. The images of 30 patients who underwent brain perfusion SPECT were retrospectively used in this study. The images were reconstructed using ordered-subset expectation maximization and processed using an automatic quantitative analysis for cerebral blood flow of ECD tool. The mean SPECT count was calculated from axial basal ganglia slices of the normal side (slices 31-40) drawn using a 3-dimensional stereotactic region-of-interest template after anatomic standardization. The mean cerebral blood flow was calculated from the mean SPECT count. Reproducibility was evaluated using coefficient of variation and Bland-Altman plotting. For both inter- and intraoperator reproducibility, the BUR-AS method had the lowest coefficient of variation and smallest error range about the Bland-Altman plot. Mean CBF obtained using the BUR-AS method had the highest reproducibility. Compared with the Patlak-plot and BUR-C methods, the BUR-AS method provides greater inter- and intraoperator reproducibility of cerebral blood flow measurement. © 2015 by the Society of Nuclear Medicine and Molecular Imaging, Inc.
Uncertainty estimates in broadband seismometer sensitivities using microseisms
Ringler, Adam T.; Storm, Tyler L.; Gee, Lind S.; Hutt, Charles R.; Wilson, David C.
2015-01-01
The midband sensitivity of a seismic instrument is one of the fundamental parameters used in published station metadata. Any errors in this value can compromise amplitude estimates in otherwise high-quality data. To estimate an upper bound in the uncertainty of the midband sensitivity for modern broadband instruments, we compare daily microseism (4- to 8-s period) amplitude ratios between the vertical components of colocated broadband sensors across the IRIS/USGS (network code IU) seismic network. We find that the mean of the 145,972 daily ratios used between 2002 and 2013 is 0.9895 with a standard deviation of 0.0231. This suggests that the ratio between instruments shows a small bias and considerable scatter. We also find that these ratios follow a standard normal distribution (R 2 = 0.95442), which suggests that the midband sensitivity of an instrument has an error of no greater than ±6 % with a 99 % confidence interval. This gives an upper bound on the precision to which we know the sensitivity of a fielded instrument.
Near infrared spectroscopy for prediction of antioxidant compounds in the honey.
Escuredo, Olga; Seijo, M Carmen; Salvador, Javier; González-Martín, M Inmaculada
2013-12-15
The selection of antioxidant variables in honey is first time considered applying the near infrared (NIR) spectroscopic technique. A total of 60 honey samples were used to develop the calibration models using the modified partial least squares (MPLS) regression method and 15 samples were used for external validation. Calibration models on honey matrix for the estimation of phenols, flavonoids, vitamin C, antioxidant capacity (DPPH), oxidation index and copper using near infrared (NIR) spectroscopy has been satisfactorily obtained. These models were optimised by cross-validation, and the best model was evaluated according to multiple correlation coefficient (RSQ), standard error of cross-validation (SECV), ratio performance deviation (RPD) and root mean standard error (RMSE) in the prediction set. The result of these statistics suggested that the equations developed could be used for rapid determination of antioxidant compounds in honey. This work shows that near infrared spectroscopy can be considered as rapid tool for the nondestructive measurement of antioxidant constitutes as phenols, flavonoids, vitamin C and copper and also the antioxidant capacity in the honey. Copyright © 2013 Elsevier Ltd. All rights reserved.