Huang, Kuo-Chen; Wang, Hsiu-Feng; Chen, Chun-Ching
2010-06-01
Effects of shape, size, and chromaticity of stimuli on participants' errors when estimating the size of simultaneously presented standard and comparison stimuli were examined. 48 Taiwanese college students aged 20 to 24 years (M = 22.3, SD = 1.3) participated. Analysis showed that the error for estimated size was significantly greater for those in the low-vision group than for those in the normal-vision and severe-myopia groups. The errors were significantly greater with green and blue stimuli than with red stimuli. Circular stimuli produced smaller mean errors than did square stimuli. The actual size of the standard stimulus significantly affected the error for estimated size. Errors for estimations using smaller sizes were significantly higher than when the sizes were larger. Implications of the results for graphics-based interface design, particularly when taking account of visually impaired users, are discussed.
Bootstrap Standard Error Estimates in Dynamic Factor Analysis
ERIC Educational Resources Information Center
Zhang, Guangjian; Browne, Michael W.
2010-01-01
Dynamic factor analysis summarizes changes in scores on a battery of manifest variables over repeated measurements in terms of a time series in a substantially smaller number of latent factors. Algebraic formulae for standard errors of parameter estimates are more difficult to obtain than in the usual intersubject factor analysis because of the…
Micro-mass standards to calibrate the sensitivity of mass comparators
NASA Astrophysics Data System (ADS)
Madec, Tanguy; Mann, Gaëlle; Meury, Paul-André; Rabault, Thierry
2007-10-01
In mass metrology, the standards currently used are calibrated by a chain of comparisons, performed using mass comparators, that extends ultimately from the international prototype (which is the definition of the unit of mass) to the standards in routine use. The differences measured in the course of these comparisons become smaller and smaller as the standards approach the definitions of their units, precisely because of how accurately they have been adjusted. One source of uncertainty in the determination of the difference of mass between the mass compared and the reference mass is the sensitivity error of the comparator used. Unfortunately, no mass standards small enough (of the order of a few hundred micrograms) are available on the market for a valid evaluation of this source of uncertainty. The users of these comparators therefore have no choice but to rely on the characteristics claimed by the makers of the comparators, or else to determine this sensitivity error at higher values (at least 1 mg) and interpolate from this result to smaller differences of mass. For this reason, the LNE decided to produce and calibrate micro-mass standards having nominal values between 100 µg and 900 µg. These standards were developed, then tested in multiple comparisons on an A5-type automatic comparator. They have since been qualified and calibrated in a weighing design, repeatedly and over an extended period of time, to establish their stability with respect to oxidation and the harmlessness of the handling and storage procedure associated with their use. Finally, the micro-standards so qualified were used to characterize the sensitivity errors of two of the LNE's mass comparators, including the one used to tie France's platinum reference standard (Pt 35) to stainless steel and superalloy standards.
Discrepancy-based error estimates for Quasi-Monte Carlo III. Error distributions and central limits
NASA Astrophysics Data System (ADS)
Hoogland, Jiri; Kleiss, Ronald
1997-04-01
In Quasi-Monte Carlo integration, the integration error is believed to be generally smaller than in classical Monte Carlo with the same number of integration points. Using an appropriate definition of an ensemble of quasi-random point sets, we derive various results on the probability distribution of the integration error, which can be compared to the standard Central Limit Theorem for normal stochastic sampling. In many cases, a Gaussian error distribution is obtained.
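As an illustrative aside (not part of the abstract above), the following Python sketch contrasts the spread of plain Monte Carlo integration errors with that of a randomly shifted Halton (quasi-random) point set on a smooth two-dimensional integrand; the integrand, point counts, and the Cranley-Patterson shift are assumptions made for the example.

```python
import numpy as np

def van_der_corput(n, base):
    """Radical-inverse (van der Corput) sequence in the given base."""
    seq = np.zeros(n)
    for i in range(n):
        f, k, x = 1.0, i + 1, 0.0
        while k > 0:
            f /= base
            x += f * (k % base)
            k //= base
        seq[i] = x
    return seq

def halton_2d(n):
    # 2-D Halton point set built from van der Corput sequences in bases 2 and 3
    return np.column_stack([van_der_corput(n, 2), van_der_corput(n, 3)])

# Smooth test integrand on [0,1]^2 with known integral (exact value = 1.0)
f = lambda u: np.cos(2 * np.pi * u[:, 0]) * u[:, 1] + 1.0
exact = 1.0

rng = np.random.default_rng(0)
n_pts, n_rep = 1024, 200

mc_err = np.array([f(rng.random((n_pts, 2))).mean() - exact for _ in range(n_rep)])
# Randomize the quasi-random set by a Cranley-Patterson rotation (mod-1 shift)
qmc_base = halton_2d(n_pts)
qmc_err = np.array([f((qmc_base + rng.random(2)) % 1.0).mean() - exact
                    for _ in range(n_rep)])

print("MC  error std :", mc_err.std())
print("QMC error std :", qmc_err.std())   # typically much smaller for smooth integrands
```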
Ristić-Djurović, Jasna L; Ćirković, Saša; Mladenović, Pavle; Romčević, Nebojša; Trbovich, Alexander M
2018-04-01
A rough estimate indicated that the use of samples no larger than ten is not uncommon in biomedical research and that many such studies are limited to strong effects because of sample sizes smaller than six. For data collected from biomedical experiments it is also often unknown whether the mathematical requirements incorporated in the sample comparison methods are satisfied. Computer-simulated experiments were used to examine the performance of methods for qualitative sample comparison and its dependence on the effectiveness of exposure, effect intensity, distribution of studied parameter values in the population, and sample size. The Type I and Type II errors, their average, as well as the maximal errors were considered. A sample size of 9 and the t-test method with p = 5% ensured an error smaller than 5% even for weak effects. For sample sizes 6-8 the same method enabled detection of weak effects with errors smaller than 20%. If the sample sizes were 3-5, weak effects could not be detected with an acceptable error; however, the smallest maximal error in the most general case that includes weak effects is provided by the standard error of the mean method. The increase of sample size from 5 to 9 led to seven times more accurate detection of weak effects. Strong effects were detected regardless of the sample size and method used. The minimal recommended sample size for biomedical experiments is 9. Use of smaller sizes and the method of their comparison should be justified by the objective of the experiment. Copyright © 2018 Elsevier B.V. All rights reserved.
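As a hedged illustration of the kind of computer-simulated comparison described above (not the authors' code), the sketch below estimates Type I and Type II error rates of the two-sample t-test at several sample sizes; the normal distributions, the 1-SD "weak effect", and the 5% threshold are assumptions for the example only, so the numbers will not reproduce the study's exact figures.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n_sim, alpha, weak_effect = 5000, 0.05, 1.0   # "weak" effect = 1 SD shift (assumption)

def error_rates(n):
    """Estimate Type I and Type II error rates of the two-sample t-test at sample size n."""
    type1 = type2 = 0
    for _ in range(n_sim):
        control = rng.normal(0.0, 1.0, n)
        same    = rng.normal(0.0, 1.0, n)            # no true effect
        shifted = rng.normal(weak_effect, 1.0, n)    # true (weak) effect
        if stats.ttest_ind(control, same).pvalue < alpha:
            type1 += 1                               # false positive
        if stats.ttest_ind(control, shifted).pvalue >= alpha:
            type2 += 1                               # missed effect
    return type1 / n_sim, type2 / n_sim

for n in (3, 5, 6, 9):
    t1, t2 = error_rates(n)
    print(f"n={n}: Type I = {t1:.3f}, Type II = {t2:.3f}, average = {(t1 + t2) / 2:.3f}")
```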
Methods for estimating flood frequency in Montana based on data through water year 1998
Parrett, Charles; Johnson, Dave R.
2004-01-01
Annual peak discharges having recurrence intervals of 2, 5, 10, 25, 50, 100, 200, and 500 years (T-year floods) were determined for 660 gaged sites in Montana and in adjacent areas of Idaho, Wyoming, and Canada, based on data through water year 1998. The updated flood-frequency information was subsequently used in regression analyses, either ordinary or generalized least squares, to develop equations relating T-year floods to various basin and climatic characteristics, equations relating T-year floods to active-channel width, and equations relating T-year floods to bankfull width. The equations can be used to estimate flood frequency at ungaged sites. Montana was divided into eight regions, within which flood characteristics were considered to be reasonably homogeneous, and the three sets of regression equations were developed for each region. A measure of the overall reliability of the regression equations is the average standard error of prediction. The average standard errors of prediction for the equations based on basin and climatic characteristics ranged from 37.4 percent to 134.1 percent. Average standard errors of prediction for the equations based on active-channel width ranged from 57.2 percent to 141.3 percent. Average standard errors of prediction for the equations based on bankfull width ranged from 63.1 percent to 155.5 percent. In most regions, the equations based on basin and climatic characteristics generally had smaller average standard errors of prediction than equations based on active-channel or bankfull width. An exception was the Southeast Plains Region, where all equations based on active-channel width had smaller average standard errors of prediction than equations based on basin and climatic characteristics or bankfull width. Methods for weighting estimates derived from the basin- and climatic-characteristic equations and the channel-width equations also were developed. The weights were based on the cross correlation of residuals from the different methods and the average standard errors of prediction. When all three methods were combined, the average standard errors of prediction ranged from 37.4 percent to 120.2 percent. Weighting of estimates reduced the standard errors of prediction for all T-year flood estimates in four regions, reduced the standard errors of prediction for some T-year flood estimates in two regions, and provided no reduction in average standard error of prediction in two regions. A computer program for solving the regression equations, weighting estimates, and determining reliability of individual estimates was developed and placed on the USGS Montana District World Wide Web page. A new regression method, termed Region of Influence regression, also was tested. Test results indicated that the Region of Influence method was not as reliable as the regional equations based on generalized least squares regression. Two additional methods for estimating flood frequency at ungaged sites located on the same streams as gaged sites also are described. The first method, based on a drainage-area-ratio adjustment, is intended for use on streams where the ungaged site of interest is located near a gaged site. The second method, based on interpolation between gaged sites, is intended for use on streams that have two or more streamflow-gaging stations.
Lin, P.-S.; Chiou, B.; Abrahamson, N.; Walling, M.; Lee, C.-T.; Cheng, C.-T.
2011-01-01
In this study, we quantify the reduction in the standard deviation for empirical ground-motion prediction models by removing the ergodic assumption. We partition the modeling error (residual) into five components, three of which represent the repeatable source-location-specific, site-specific, and path-specific deviations from the population mean. A variance estimation procedure for these error components is developed for use with a set of recordings from earthquakes not heavily clustered in space. With most source locations and propagation paths sampled only once, we opt to exploit the spatial correlation of residuals to estimate the variances associated with the path-specific and the source-location-specific deviations. The estimation procedure is applied to ground-motion amplitudes from 64 shallow earthquakes in Taiwan recorded at 285 sites with at least 10 recordings per site. The estimated variance components are used to quantify the reduction in aleatory variability that can be used in hazard analysis for a single site and for a single path. For peak ground acceleration and spectral accelerations at periods of 0.1, 0.3, 0.5, 1.0, and 3.0 s, we find that the single-site standard deviations are 9%-14% smaller than the total standard deviation, whereas the single-path standard deviations are 39%-47% smaller.
Parameter recovery, bias and standard errors in the linear ballistic accumulator model.
Visser, Ingmar; Poessé, Rens
2017-05-01
The linear ballistic accumulator (LBA) model (Brown & Heathcote, Cogn. Psychol., 57, 153) is increasingly popular in modelling response times from experimental data. An R package, glba, has been developed to fit the LBA model using maximum likelihood estimation, which is validated by means of a parameter recovery study. At sufficient sample sizes parameter recovery is good, whereas at smaller sample sizes there can be large bias in parameters. In a second simulation study, two methods for computing parameter standard errors are compared. The Hessian-based method is found to be adequate and is (much) faster than the alternative bootstrap method. The use of parameter standard errors in model selection and inference is illustrated in an example using data from an implicit learning experiment (Visser et al., Mem. Cogn., 35, 1502). It is shown that typical implicit learning effects are captured by different parameters of the LBA model. © 2017 The British Psychological Society.
Spectral combination of spherical gravitational curvature boundary-value problems
NASA Astrophysics Data System (ADS)
Pitoňák, Martin; Eshagh, Mehdi; Šprlák, Michal; Tenzer, Robert; Novák, Pavel
2018-04-01
Four solutions of the spherical gravitational curvature boundary-value problems can be exploited for the determination of the Earth's gravitational potential. In this article we discuss the combination of simulated satellite gravitational curvatures, i.e., components of the third-order gravitational tensor, by merging these solutions using the spectral combination method. For this purpose, integral estimators of biased and unbiased types are derived. In numerical studies, we investigate the performance of the developed mathematical models for gravitational field modelling in the area of Central Europe based on simulated satellite measurements. Firstly, we verify the correctness of the integral estimators for the spectral downward continuation by a closed-loop test. Estimated errors of the combined solution are about eight orders of magnitude smaller than those from the individual solutions. Secondly, we perform a numerical experiment by considering Gaussian noise with a standard deviation of 6.5 × 10⁻¹⁷ m⁻¹ s⁻² in the input data at the satellite altitude of 250 km above the mean Earth sphere. This value of standard deviation is equivalent to a signal-to-noise ratio of 10. Superior results with respect to the global geopotential model TIM-r5 are obtained by the spectral downward continuation of the vertical-vertical-vertical component, with a standard deviation of 2.104 m² s⁻², but the root mean square error is the largest and reaches 9.734 m² s⁻². Using the spectral combination of all gravitational curvatures, the root mean square error is more than 400 times smaller, but the standard deviation reaches 17.234 m² s⁻². The combination of more components decreases the root mean square error of the corresponding solutions, while the standard deviations of the combined solutions do not improve compared with the solution from the vertical-vertical-vertical component. The presented method represents a weighted mean in the spectral domain that minimizes the root mean square error of the combined solutions and improves the standard deviation of the solution based only on the least accurate components.
Orbital-free bond breaking via machine learning
NASA Astrophysics Data System (ADS)
Snyder, John C.; Rupp, Matthias; Hansen, Katja; Blooston, Leo; Müller, Klaus-Robert; Burke, Kieron
2013-12-01
Using a one-dimensional model, we explore the ability of machine learning to approximate the non-interacting kinetic energy density functional of diatomics. This nonlinear interpolation between Kohn-Sham reference calculations can (i) accurately dissociate a diatomic, (ii) be systematically improved with increased reference data and (iii) generate accurate self-consistent densities via a projection method that avoids directions with no data. With relatively few densities, the error due to the interpolation is smaller than typical errors in standard exchange-correlation functionals.
Rank score and permutation testing alternatives for regression quantile estimates
Cade, B.S.; Richards, J.D.; Mielke, P.W.
2006-01-01
Performance of quantile rank score tests used for hypothesis testing and constructing confidence intervals for linear quantile regression estimates (0 ≤ τ ≤ 1) was evaluated by simulation for models with p = 2 and 6 predictors, moderate collinearity among predictors, homogeneous and heterogeneous errors, small to moderate samples (n = 20–300), and central to upper quantiles (0.50–0.99). Test statistics evaluated were the conventional quantile rank score T statistic, distributed as a χ2 random variable with q degrees of freedom (where q parameters are constrained by H0), and an F statistic with its sampling distribution approximated by permutation. The permutation F-test maintained better Type I errors than the T-test for homogeneous error models with smaller n and more extreme quantiles τ. An F distributional approximation of the F statistic provided some improvement in Type I errors over the T-test for models with > 2 parameters, smaller n, and more extreme quantiles, but not as much improvement as the permutation approximation. Both rank score tests required weighting to maintain correct Type I errors when heterogeneity under the alternative model increased to 5 standard deviations across the domain of X. A double permutation procedure was developed to provide valid Type I errors for the permutation F-test when null models were forced through the origin. Power was similar for conditions where both T- and F-tests maintained correct Type I errors, but the F-test provided some power at smaller n and extreme quantiles when the T-test had no power because of excessively conservative Type I errors. When the double permutation scheme was required for the permutation F-test to maintain valid Type I errors, power was less than for the T-test with decreasing sample size and increasing quantiles. Confidence intervals on parameters and tolerance intervals for future predictions were constructed based on test inversion for an example application relating trout densities to stream channel width:depth.
Corsica: A Multi-Mission Absolute Calibration Site
NASA Astrophysics Data System (ADS)
Bonnefond, P.; Exertier, P.; Laurain, O.; Guinle, T.; Femenias, P.
2013-09-01
In collaboration with the CNES and NASA oceanographic projects (TOPEX/Poseidon and Jason), the OCA (Observatoire de la Côte d'Azur) has developed a verification site in Corsica since 1996, operational since 1998. CALibration/VALidation embraces a wide variety of activities, ranging from the interpretation of information from internal-calibration modes of the sensors to validation of the fully corrected estimates of the reflector heights using in situ data. Corsica is now, like the Harvest platform (NASA side) [14], an operating calibration site able to support continuous monitoring with a high level of accuracy: a 'point calibration' which yields instantaneous bias estimates with a 10-day repeatability of 30 mm (standard deviation) and mean errors of 4 mm (standard error). For a 35-day repeatability (ERS, Envisat), owing to the smaller time series, the standard error is about twice as large (about 7 mm). In this paper, we will present updated results of the absolute Sea Surface Height (SSH) biases for TOPEX/Poseidon (T/P), Jason-1, Jason-2, ERS-2 and Envisat.
Comparative study of standard space and real space analysis of quantitative MR brain data.
Aribisala, Benjamin S; He, Jiabao; Blamire, Andrew M
2011-06-01
To compare the robustness of region of interest (ROI) analysis of magnetic resonance imaging (MRI) brain data in real space with analysis in standard space and to test the hypothesis that standard space image analysis introduces more partial volume effect errors compared to analysis of the same dataset in real space. Twenty healthy adults with no history or evidence of neurological diseases were recruited; high-resolution T(1)-weighted, quantitative T(1), and B(0) field-map measurements were collected. Algorithms were implemented to perform analysis in real and standard space and used to apply a simple standard ROI template to quantitative T(1) datasets. Regional relaxation values and histograms for both gray and white matter tissues classes were then extracted and compared. Regional mean T(1) values for both gray and white matter were significantly lower using real space compared to standard space analysis. Additionally, regional T(1) histograms were more compact in real space, with smaller right-sided tails indicating lower partial volume errors compared to standard space analysis. Standard space analysis of quantitative MRI brain data introduces more partial volume effect errors biasing the analysis of quantitative data compared to analysis of the same dataset in real space. Copyright © 2011 Wiley-Liss, Inc.
The performance of projective standardization for digital subtraction radiography.
Mol, André; Dunn, Stanley M
2003-09-01
We sought to test the performance and robustness of projective standardization in preserving invariant properties of subtraction images in the presence of irreversible projection errors. Study design: Twenty bone chips (1-10 mg each) were placed on dentate dry mandibles. Follow-up images were obtained without the bone chips, and irreversible projection errors of up to 6 degrees were introduced. Digitized image intensities were normalized, and follow-up images were geometrically reconstructed by 2 operators using anatomical and fiduciary landmarks. Subtraction images were analyzed by 3 observers. Regression analysis revealed a linear relationship between radiographic estimates of mineral loss and actual mineral loss (R(2) = 0.99; P <.05). The effect of projection error was not significant (general linear model [GLM]: P >.05). There was no difference between the radiographic estimates from images standardized with anatomical landmarks and those standardized with fiduciary landmarks (Wilcoxon signed rank test: P >.05). Operator variability was low for image analysis alone (R(2) = 0.99; P <.05), as well as for the entire procedure (R(2) = 0.98; P <.05). The predicted detection limit was smaller than 1 mg. Subtraction images registered by projective standardization yield estimates of osseous change that are invariant to irreversible projection errors of up to 6 degrees. Within these limits, operator precision is high and anatomical landmarks can be used to establish correspondence.
Cost-effectiveness of the Federal stream-gaging program in Virginia
Carpenter, D.H.
1985-01-01
Data uses and funding sources were identified for the 77 continuous stream gages currently being operated in Virginia by the U.S. Geological Survey with a budget of $446,000. Two stream gages were identified as not being used sufficiently to warrant continuing their operation. Operation of these stations should be considered for discontinuation. Data collected at two other stations were identified as having uses primarily related to short-term studies; these stations should also be considered for discontinuation at the end of the data collection phases of the studies. The remaining 73 stations should be kept in the program for the foreseeable future. The current policy for operation of the 77-station program requires a budget of $446,000/yr. The average standard error of estimation of streamflow records is 10.1%. It was shown that this overall level of accuracy at the 77 sites could be maintained with a budget of $430,500 if resources were redistributed among the gages. A minimum budget of $428,500 is required to operate the 77-gage program; a smaller budget would not permit proper service and maintenance of the gages and recorders. At the minimum budget, with optimized operation, the average standard error would be 10.4%. The maximum budget analyzed was $650,000, which resulted in an average standard error of 5.5%. The study indicates that a major component of error is caused by lost or missing data. If perfect equipment were available, the standard error for the current program and budget could be reduced to 7.6%. This also can be interpreted to mean that the streamflow data have a standard error of this magnitude during times when the equipment is operating properly. (Author's abstract)
2014-01-01
We propose a smooth approximation l0-norm constrained affine projection algorithm (SL0-APA) to improve the convergence speed and the steady-state error of the affine projection algorithm (APA) for sparse channel estimation. The proposed algorithm ensures improved performance in terms of the convergence speed and the steady-state error via the incorporation of a smooth approximation l0-norm (SL0) penalty on the coefficients into the standard APA cost function, which gives rise to a zero attractor that promotes the sparsity of the channel taps in the channel estimation and hence accelerates the convergence speed and reduces the steady-state error when the channel is sparse. The simulation results demonstrate that our proposed SL0-APA is superior to the standard APA and its sparsity-aware algorithms in terms of both the convergence speed and the steady-state behavior in a designated sparse channel. Furthermore, SL0-APA is shown to have smaller steady-state error than the previously proposed sparsity-aware algorithms when the number of nonzero taps in the sparse channel increases. PMID:24790588
Hennig, Cheryl; Cooper, David
2011-08-01
Histomorphometric aging methods report varying degrees of precision, measured through Standard Error of the Estimate (SEE). These techniques have been developed from variable samples sizes (n) and the impact of n on reported aging precision has not been rigorously examined in the anthropological literature. This brief communication explores the relation between n and SEE through a review of the literature (abstracts, articles, book chapters, theses, and dissertations), predictions based upon sampling theory and a simulation. Published SEE values for age prediction, derived from 40 studies, range from 1.51 to 16.48 years (mean 8.63; sd: 3.81 years). In general, these values are widely distributed for smaller samples and the distribution narrows as n increases--a pattern expected from sampling theory. For the two studies that have samples in excess of 200 individuals, the SEE values are very similar (10.08 and 11.10 years) with a mean of 10.59 years. Assuming this mean value is a 'true' characterization of the error at the population level, the 95% confidence intervals for SEE values from samples of 10, 50, and 150 individuals are on the order of ± 4.2, 1.7, and 1.0 years, respectively. While numerous sources of variation potentially affect the precision of different methods, the impact of sample size cannot be overlooked. The uncertainty associated with SEE values derived from smaller samples complicates the comparison of approaches based upon different methodology and/or skeletal elements. Meaningful comparisons require larger samples than have frequently been used and should ideally be based upon standardized samples. Copyright © 2011 Wiley-Liss, Inc.
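A minimal simulation sketch (not from the article) of the sampling-theory argument above: if the population-level SEE is taken to be about 10.6 years, the SEE estimated from small samples scatters widely, and the distribution narrows as n increases. The normal-residual model and the use of n − 2 degrees of freedom are assumptions made for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
true_see = 10.59          # assumed population-level SEE in years (motivated by the review's large samples)
n_rep = 10000

for n in (10, 50, 150, 250):
    sees = []
    for _ in range(n_rep):
        # residuals of an age-prediction model, assumed normal with SD = true_see
        resid = rng.normal(0.0, true_see, n)
        # SEE as commonly reported: residual SD with 2 estimated regression parameters
        sees.append(np.sqrt(np.sum(resid**2) / (n - 2)))
    sees = np.array(sees)
    lo, hi = np.percentile(sees, [2.5, 97.5])
    print(f"n={n:3d}: mean SEE = {sees.mean():5.2f}, 95% range ~ [{lo:5.2f}, {hi:5.2f}] years")
```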
A Study on Multi-Scale Background Error Covariances in 3D-Var Data Assimilation
NASA Astrophysics Data System (ADS)
Zhang, Xubin; Tan, Zhe-Min
2017-04-01
The construction of background error covariances is a key component of three-dimensional variational data assimilation. Background errors exist at different scales and interact with one another in numerical weather prediction, but the influence of these errors and their interactions cannot be represented in background error covariance statistics estimated by the leading methods. It is therefore necessary to construct background error covariances that account for multi-scale interactions among errors. Using the NMC method, this article first estimates the background error covariances at given model-resolution scales. Information from errors at scales larger and smaller than the given ones is then introduced, using different nesting techniques, to estimate the corresponding covariances. Comparison of the three background error covariance statistics influenced by error information at different scales reveals that the background error variances are enhanced, particularly at large scales and higher levels, when information from larger-scale errors is introduced through the lateral boundary condition provided by a lower-resolution model. On the other hand, the variances are reduced at medium scales at the higher levels, while they improve slightly at lower levels in the nested domain, especially at medium and small scales, when information from smaller-scale errors is introduced by nesting a higher-resolution model. In addition, the introduction of information from larger- (smaller-) scale errors leads to larger (smaller) horizontal and vertical correlation scales of background errors. Considering the multivariate correlations, the Ekman coupling increases (decreases) when information from larger- (smaller-) scale errors is included, whereas the geostrophic coupling in the free atmosphere weakens in both situations. The three covariances obtained above are used in a data assimilation and model forecast system, and analysis-forecast cycles for a period of 1 month are conducted. Comparison of both analyses and forecasts from this system shows that the trends in analysis increments as information from different scale errors is introduced are consistent with the trends in the variances and correlations of background errors. In particular, the introduction of smaller-scale errors leads to larger-amplitude analysis increments for winds at medium scales at the heights of both the high- and low-level jets, and analysis increments for both temperature and humidity are greater at the corresponding scales at middle and upper levels under this circumstance. These analysis increments improve the intensity of the jet-convection system, which includes jets at different levels and the coupling between them associated with latent heat release, and these changes in the analyses contribute to better forecasts of winds and temperature in the corresponding areas. When smaller-scale errors are included, analysis increments for humidity are enhanced significantly at large scales at lower levels, moistening the southern part of the analyses. This humidification helps correct the dry bias there and eventually improves the forecast skill for humidity. Moreover, inclusion of larger- (smaller-) scale errors is beneficial for the forecast quality of heavy (light) precipitation at large (small) scales owing to the amplification (diminution) of intensity and area in precipitation forecasts, but tends to overestimate (underestimate) light (heavy) precipitation.
Shirasaki, Osamu; Asou, Yosuke; Takahashi, Yukio
2007-12-01
Owing to fast or stepwise cuff deflation, or measuring at places other than the upper arm, the clinical accuracy of most recent automated sphygmomanometers (auto-BPMs) cannot be validated by one-arm simultaneous comparison, which would be the only accurate validation method based on auscultation. Two main alternative methods are provided by current standards, that is, two-arm simultaneous comparison (method 1) and one-arm sequential comparison (method 2); however, the accuracy of these validation methods might not be sufficient because they cannot fully compensate for lateral blood pressure (BP) differences (LD) and/or BP variations (BPV) between the device and reference readings. Thus, the Japan ISO-WG for sphygmomanometer standards has been studying a new method that might improve validation accuracy (method 3). The purpose of this study is to determine the appropriateness of method 3 by comparing its immunity to LD and BPV with that of the current validation methods (methods 1 and 2). The validation accuracy of the above three methods was assessed in human participants [N=120, 45+/-15.3 years (mean+/-SD)]. An oscillometric automated monitor, Omron HEM-762, was used as the tested device. When compared with the others, methods 1 and 3 showed a smaller intra-individual standard deviation of device error (SD1), suggesting their higher reproducibility of validation. The SD1 by method 2 (P=0.004) significantly correlated with the participants' BP, supporting our hypothesis that the increased SD of device error by method 2 is at least partially caused by essential BPV. Method 3 showed a significantly (P=0.0044) smaller interparticipant SD of device error (SD2), suggesting its higher interparticipant consistency of validation. Among the methods of validation of the clinical accuracy of auto-BPMs, method 3, which showed the highest reproducibility and highest interparticipant consistency, can be proposed as being the most appropriate.
NASA Astrophysics Data System (ADS)
Xu, Chong-yu; Tunemar, Liselotte; Chen, Yongqin David; Singh, V. P.
2006-06-01
The sensitivity of hydrological models to input data errors has been reported in the literature for particular models on a single catchment or a few catchments. A more important issue, namely how a model's response to input data error changes as catchment conditions change, has not been addressed previously. This study investigates the seasonal and spatial effects of precipitation data errors on the performance of conceptual hydrological models. For this study, a monthly conceptual water balance model, NOPEX-6, was applied to 26 catchments in the Mälaren basin in Central Sweden. Both systematic and random errors were considered. For the systematic errors, 5-15% of the mean monthly precipitation values were added to the original precipitation to form the corrupted input scenarios. Random values were generated by Monte Carlo simulation and were assumed to be (1) independent between months, and (2) distributed according to a Gaussian law of zero mean and constant standard deviation, taken as 5, 10, 15, 20, and 25% of the mean monthly standard deviation of precipitation. The results show that the response of the model parameters and model performance depends, among other factors, on the type of error, the magnitude of the error, the physical characteristics of the catchment, and the season of the year. In particular, the model appears less sensitive to random errors than to systematic errors. Catchments with smaller runoff coefficients were more influenced by input data errors than catchments with higher values. Dry months were more sensitive to precipitation errors than wet months. Recalibration of the model with erroneous data compensated in part for the data errors by altering the model parameters.
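For illustration only (not the authors' implementation), the sketch below generates corrupted precipitation inputs in the two ways described: a systematic addition of 5-15% of the mean monthly value, and independent Gaussian noise with a standard deviation of 5-25% of the monthly standard deviation. The synthetic gamma-distributed record and the clipping of negative values are assumptions of the example.

```python
import numpy as np

rng = np.random.default_rng(7)
# Hypothetical monthly precipitation record (mm); values are illustrative only
precip = rng.gamma(shape=2.0, scale=30.0, size=120)       # 10 years of monthly totals
monthly_sd = precip.std()

def systematic_scenario(p, pct):
    """Systematic error: add a fixed percentage of the mean monthly value to every month."""
    return p + (pct / 100.0) * p.mean()

def random_scenario(p, pct):
    """Random error: independent zero-mean Gaussian noise with SD given as a
    percentage of the monthly standard deviation (clipped so precipitation stays non-negative,
    an assumption of this sketch)."""
    noise = rng.normal(0.0, (pct / 100.0) * monthly_sd, size=p.size)
    return np.clip(p + noise, 0.0, None)

corrupted_sys = {pct: systematic_scenario(precip, pct) for pct in (5, 10, 15)}
corrupted_rnd = {pct: random_scenario(precip, pct) for pct in (5, 10, 15, 20, 25)}
print({pct: round(v.mean() - precip.mean(), 2) for pct, v in corrupted_sys.items()})
```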
On NUFFT-based gridding for non-Cartesian MRI
NASA Astrophysics Data System (ADS)
Fessler, Jeffrey A.
2007-10-01
For MRI with non-Cartesian sampling, the conventional approach to reconstructing images is to use the gridding method with a Kaiser-Bessel (KB) interpolation kernel. Recently, Sha et al. [L. Sha, H. Guo, A.W. Song, An improved gridding method for spiral MRI using nonuniform fast Fourier transform, J. Magn. Reson. 162(2) (2003) 250-258] proposed an alternative method based on a nonuniform FFT (NUFFT) with least-squares (LS) design of the interpolation coefficients. They described this LS_NUFFT method as shift variant and reported that it yielded smaller reconstruction approximation errors than the conventional shift-invariant KB approach. This paper analyzes the LS_NUFFT approach in detail. We show that when one accounts for a certain linear phase factor, the core of the LS_NUFFT interpolator is in fact real and shift invariant. Furthermore, we find that the KB approach yields smaller errors than the original LS_NUFFT approach. We show that optimizing certain scaling factors can lead to a somewhat improved LS_NUFFT approach, but the high computation cost seems to outweigh the modest reduction in reconstruction error. We conclude that the standard KB approach, with appropriate parameters as described in the literature, remains the practical method of choice for gridding reconstruction in MRI.
The Infrared Hubble Diagram of Type Ia Supernovae
NASA Astrophysics Data System (ADS)
Krisciunas, Kevin
Photometry of Type Ia supernovae reveals that these objects are standardizable candles in optical passbands - the peak luminosities are related to the rate of decline after maximum light. In the near-infrared bands, there is essentially a characteristic brightness at maximum light for each photometric band. Thus, in the near-infrared they are better than standardizable candles; they are essentially standard candles. Their absolute magnitudes are known to ±0.15 magnitude or better. The infrared observations have the extra advantage that interstellar extinction by dust along the line of sight is a factor of 3-10 smaller than in the optical B- and V-bands. The size of any systematic errors in the infrared extinction corrections typically becomes smaller than the photometric errors of the observations. Thus, we can obtain distances to the hosts of Type Ia supernovae to ±8% or better. This is particularly useful for extragalactic astronomy and precise measurements of the dark energy component of the universe.
Acharya, Ashith B
2014-05-01
Dentin translucency measurement is an easy yet relatively accurate approach to postmortem age estimation. Translucency area represents a two-dimensional change and may reflect age variations better than length. Manually measuring area is challenging and this paper proposes a new digital method using commercially available computer hardware and software. Area and length were measured on 100 tooth sections (age range, 19-82 years) of 250 μm thickness. Regression analysis revealed lower standard error of estimate and higher correlation with age for length than for area (R = 0.62 vs. 0.60). However, test of regression formulae on a control sample (n = 33, 21-85 years) showed smaller mean absolute difference (8.3 vs. 8.8 years) and greater frequency of smaller errors (73% vs. 67% age estimates ≤ ± 10 years) for area than for length. These suggest that digital area measurements of root translucency may be used as an alternative to length in forensic age estimation. © 2014 American Academy of Forensic Sciences.
McMahon, Camilla M.; Henderson, Heather A.
2014-01-01
Error-monitoring, or the ability to recognize one's mistakes and implement behavioral changes to prevent further mistakes, may be impaired in individuals with Autism Spectrum Disorder (ASD). Children and adolescents (ages 9-19) with ASD (n = 42) and typical development (n = 42) completed two face processing tasks that required discrimination of either the gender or affect of standardized face stimuli. Post-error slowing and the difference in Error-Related Negativity amplitude between correct and incorrect responses (ERNdiff) were used to index error-monitoring ability. Overall, ERNdiff increased with age. On the Gender Task, individuals with ASD had a smaller ERNdiff than individuals with typical development; however, on the Affect Task, there were no significant diagnostic group differences on ERNdiff. Individuals with ASD may have ERN amplitudes similar to those observed in individuals with typical development in more social contexts compared to less social contexts due to greater consequences for errors, more effortful processing, and/or reduced processing efficiency in these contexts. Across all participants, more post-error slowing on the Affect Task was associated with better social cognitive skills. PMID:25066088
Gómez-Cabello, Alba; Vicente-Rodríguez, Germán; Albers, Ulrike; Mata, Esmeralda; Rodriguez-Marroyo, Jose A.; Olivares, Pedro R.; Gusi, Narcis; Villa, Gerardo; Aznar, Susana; Gonzalez-Gross, Marcela; Casajús, Jose A.; Ara, Ignacio
2012-01-01
Background: The elderly EXERNET multi-centre study aims to collect normative anthropometric data for old functionally independent adults living in Spain. Purpose: To describe the standardization process and reliability of the anthropometric measurements carried out in the pilot study and during the final workshop, examining both intra- and inter-rater errors for measurements. Materials and Methods: A total of 98 elderly from five different regions participated in the intra-rater error assessment, and 10 different seniors living in the city of Toledo (Spain) participated in the inter-rater assessment. We examined both intra- and inter-rater errors for heights and circumferences. Results: For height, intra-rater technical errors of measurement (TEMs) were smaller than 0.25 cm. For circumferences and knee height, TEMs were smaller than 1 cm, except for waist circumference in the city of Cáceres. Reliability for heights and circumferences was greater than 98% in all cases. Inter-rater TEMs were 0.61 cm for height, 0.75 cm for knee height and ranged between 2.70 and 3.09 cm for the circumferences measured. Inter-rater reliabilities for anthropometric measurements were always higher than 90%. Conclusion: The harmonization process, including the workshop and pilot study, guarantees the quality of the anthropometric measurements in the elderly EXERNET multi-centre study. High reliability and low TEM may be expected when assessing anthropometry in elderly population. PMID:22860013
Gençay, R; Qi, M
2001-01-01
We study the effectiveness of cross validation, Bayesian regularization, early stopping, and bagging to mitigate overfitting and improve generalization for pricing and hedging derivative securities with daily S&P 500 index call options from January 1988 to December 1993. Our results indicate that Bayesian regularization can generate significantly smaller pricing and delta-hedging errors than the baseline neural-network (NN) model and the Black-Scholes model for some years. While early stopping does not affect the pricing errors, it significantly reduces the hedging error (HE) in four of the six years we investigated. Although computationally most demanding, bagging seems to provide the most accurate pricing and delta hedging. Furthermore, the standard deviation of the mean squared prediction error (MSPE) of bagging is far less than that of the baseline model in all six years, and the standard deviation of the average HE of bagging is far less than that of the baseline model in five out of six years. We conclude that these techniques should be used at least in cases when no appropriate hints are available.
Improving Estimates Of Phase Parameters When Amplitude Fluctuates
NASA Technical Reports Server (NTRS)
Vilnrotter, V. A.; Brown, D. H.; Hurd, W. J.
1989-01-01
Adaptive inverse filter applied to incoming signal and noise. Time-varying inverse-filtering technique developed to improve digital estimate of phase of received carrier signal. Intended for use where received signal fluctuates in amplitude as well as in phase and signal tracked by digital phase-locked loop that keeps its phase error much smaller than 1 radian. Useful in navigation systems, reception of time- and frequency-standard signals, and possibly spread-spectrum communication systems.
Using a Divided Bar Apparatus to Measure Thermal Conductivity of Samples of Odd Sizes and Shapes
NASA Astrophysics Data System (ADS)
Crowell, J. "; Gosnold, W. D.
2012-12-01
Standard procedure for measuring thermal conductivity using a divided bar apparatus requires a sample that has the same surface dimensions as the heat sink/source surface in the divided bar. Heat flow is assumed to be constant throughout the column and thermal conductivity (K) is determined by measuring temperatures (T) across the sample and across standard layers and using the basic relationship Ksample = (Kstandard*(ΔT1+ΔT2)/2)/(ΔTsample). Sometimes samples are not large enough or of correct proportions to match the surface of the heat sink/source; however, using the equations presented here the thermal conductivity of these samples can still be measured with a divided bar. Measurements were done on the UND Geothermal Laboratory's stationary divided bar apparatus (SDB). This SDB has been designed to mimic many in-situ conditions, with a temperature range of -20°C to 150°C and a pressure range of 0 to 10,000 psi for samples with parallel surfaces and 0 to 3,000 psi for samples with non-parallel surfaces. The heat sink/source surfaces are copper disks and have a surface area of 1,772 mm² (2.74 in²). Layers of polycarbonate 6 mm thick with the same surface area as the copper disks are located in the heat sink and in the heat source as standards. For this study, all samples were prepared from a single piece of 4 inch limestone core. Thermal conductivities were measured for each sample as it was cut successively smaller. The above equation was adjusted to include the thicknesses (Th) of the samples and the standards and the surface areas (A) of the heat sink/source and of the sample: Ksample = (Kstandard*Astandard*Thsample*(ΔT1+ΔT3))/(ΔTsample*Asample*2*Thstandard). Measuring the thermal conductivity of samples of multiple sizes, shapes, and thicknesses gave consistent values for samples with surfaces as small as 50% of the heat sink/source surface, regardless of the shape of the sample. Measuring samples with surfaces smaller than 50% of the heat sink/source surface resulted in thermal conductivity values which were too high. The cause of the error with the smaller samples is being examined, as is the relationship between the amount of error in the thermal conductivity and the difference in surface areas. As more measurements are made, an equation to mathematically correct for the error is being developed in case a way to physically correct the problem cannot be determined.
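A minimal sketch implementing the adjusted divided-bar relation quoted above, so the role of each term is explicit; the function name and the numerical values (a polycarbonate standard of roughly 0.20 W m⁻¹ K⁻¹ and a sample of half the heat sink/source area) are illustrative assumptions, not measurements from the study.

```python
def k_sample(k_standard, a_standard, a_sample, th_standard, th_sample,
             dT1, dT3, dT_sample):
    """Adjusted divided-bar relation quoted above:
    Ksample = (Kstandard * Astandard * Thsample * (dT1 + dT3))
              / (dTsample * Asample * 2 * Thstandard)
    All quantities must be in consistent units (SI assumed here)."""
    return (k_standard * a_standard * th_sample * (dT1 + dT3)) / (
        dT_sample * a_sample * 2.0 * th_standard)

# Illustrative values only (not data from the study):
# polycarbonate standard ~0.20 W/(m K), 6 mm thick, 1772 mm^2 area;
# a limestone sample with half that area and 10 mm thickness.
print(k_sample(k_standard=0.20, a_standard=1772e-6, a_sample=886e-6,
               th_standard=6e-3, th_sample=10e-3,
               dT1=2.0, dT3=2.1, dT_sample=3.5))   # result in W/(m K)
```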
Golz, Jürgen; MacLeod, Donald I A
2003-05-01
We analyze the sources of error in specifying color in CRT displays. These include errors inherent in the use of the color matching functions of the CIE 1931 standard observer when only colorimetric, not radiometric, calibrations are available. We provide transformation coefficients that prove to correct the deficiencies of this observer very well. We consider four different candidate sets of cone sensitivities. Some of these differ substantially; variation among candidate cone sensitivities exceeds the variation among phosphors. Finally, the effects of the recognized forms of observer variation on the visual responses (cone excitations or cone contrasts) generated by CRT stimuli are investigated and quantitatively specified. Cone pigment polymorphism gives rise to variation of a few per cent in relative excitation by the different phosphors--a variation larger than the errors ensuing from the adoption of the CIE standard observer, though smaller than the differences between some candidate cone sensitivities. Macular pigmentation has a larger influence, affecting mainly responses to the blue phosphor. The estimated combined effect of all sources of observer variation is comparable in magnitude with the largest differences between competing cone sensitivity estimates but is not enough to disrupt very seriously the relation between the L and M cone weights and the isoluminance settings of individual observers. It is also comparable with typical instrumental colorimetric errors, but we discuss these only briefly.
Ghirlando, Rodolfo; Balbo, Andrea; Piszczek, Grzegorz; Brown, Patrick H.; Lewis, Marc S.; Brautigam, Chad A.; Schuck, Peter; Zhao, Huaying
2013-01-01
Sedimentation velocity (SV) is a method based on first-principles that provides a precise hydrodynamic characterization of macromolecules in solution. Due to recent improvements in data analysis, the accuracy of experimental SV data emerges as a limiting factor in its interpretation. Our goal was to unravel the sources of experimental error and develop improved calibration procedures. We implemented the use of a Thermochron iButton® temperature logger to directly measure the temperature of a spinning rotor, and detected deviations that can translate into an error of as much as 10% in the sedimentation coefficient. We further designed a precision mask with equidistant markers to correct for instrumental errors in the radial calibration, which were observed to span a range of 8.6%. The need for an independent time calibration emerged with use of the current data acquisition software (Zhao et al., doi 10.1016/j.ab.2013.02.011) and we now show that smaller but significant time errors of up to 2% also occur with earlier versions. After application of these calibration corrections, the sedimentation coefficients obtained from eleven instruments displayed a significantly reduced standard deviation of ∼ 0.7 %. This study demonstrates the need for external calibration procedures and regular control experiments with a sedimentation coefficient standard. PMID:23711724
Ghirlando, Rodolfo; Balbo, Andrea; Piszczek, Grzegorz; Brown, Patrick H; Lewis, Marc S; Brautigam, Chad A; Schuck, Peter; Zhao, Huaying
2013-09-01
Sedimentation velocity (SV) is a method based on first principles that provides a precise hydrodynamic characterization of macromolecules in solution. Due to recent improvements in data analysis, the accuracy of experimental SV data emerges as a limiting factor in its interpretation. Our goal was to unravel the sources of experimental error and develop improved calibration procedures. We implemented the use of a Thermochron iButton temperature logger to directly measure the temperature of a spinning rotor and detected deviations that can translate into an error of as much as 10% in the sedimentation coefficient. We further designed a precision mask with equidistant markers to correct for instrumental errors in the radial calibration that were observed to span a range of 8.6%. The need for an independent time calibration emerged with use of the current data acquisition software (Zhao et al., Anal. Biochem., 437 (2013) 104-108), and we now show that smaller but significant time errors of up to 2% also occur with earlier versions. After application of these calibration corrections, the sedimentation coefficients obtained from 11 instruments displayed a significantly reduced standard deviation of approximately 0.7%. This study demonstrates the need for external calibration procedures and regular control experiments with a sedimentation coefficient standard. Published by Elsevier Inc.
Shoemaker, W. Barclay; Sumner, D.M.
2006-01-01
Corrections can be used to estimate actual wetland evapotranspiration (AET) from potential evapotranspiration (PET) as a means to define the hydrology of wetland areas. Many alternate parameterizations for correction coefficients for three PET equations are presented, covering a wide range of possible data-availability scenarios. At nine sites in the wetland Everglades of south Florida, USA, the relatively complex PET Penman equation was corrected to daily total AET with smaller standard errors than the PET simple and Priestley-Taylor equations. The simpler equations, however, required less data (and thus less funding for instrumentation), with the possibility of being corrected to AET with slightly larger, comparable, or even smaller standard errors. Air temperature generally corrected PET simple most effectively to wetland AET, while wetland stage and humidity generally corrected PET Priestley-Taylor and Penman most effectively to wetland AET. Stage was identified for PET Priestley-Taylor and Penman as the data type with the most correction ability at sites that are dry part of each year or dry part of some years. Finally, although surface water generally was readily available at each monitoring site, AET was not occurring at potential rates, as conceptually expected under well-watered conditions. Apparently, factors other than water availability, such as atmospheric and stomata resistances to vapor transport, also were limiting the PET rate. © 2006, The Society of Wetland Scientists.
Revised techniques for estimating peak discharges from channel width in Montana
Parrett, Charles; Hull, J.A.; Omang, R.J.
1987-01-01
This study was conducted to develop new estimating equations based on channel width and the updated flood frequency curves of previous investigations. Simple regression equations for estimating peak discharges with recurrence intervals of 2, 5, 10, 25, 50, and 100 years were developed for seven regions in Montana. The standard errors of estimate for the equations that use active channel width as the independent variable ranged from 30% to 87%. The standard errors of estimate for the equations that use bankfull width as the independent variable ranged from 34% to 92%. The smallest standard errors generally occurred in the prediction equations for the 2-yr flood, 5-yr flood, and 10-yr flood, and the largest standard errors occurred in the prediction equations for the 100-yr flood. The equations that use active channel width and the equations that use bankfull width were determined to be about equally reliable in five regions. In the West Region, the equations that use bankfull width were slightly more reliable than those based on active channel width, whereas in the East-Central Region the equations that use active channel width were slightly more reliable than those based on bankfull width. Compared with similar equations previously developed, the standard errors of estimate for the new equations are substantially smaller in three regions and substantially larger in two regions. Limitations on the use of the estimating equations include: (1) The equations are based on stable conditions of channel geometry and prevailing water and sediment discharge; (2) The measurement of channel width requires a site visit, preferably by a person with experience in the method, and involves appreciable measurement errors; (3) Reliability of results from the equations for channel widths beyond the range of definition is unknown. In spite of the limitations, the estimating equations derived in this study are considered to be as reliable as estimating equations based on basin and climatic variables. Because the two types of estimating equations are independent, results from each can be weighted inversely proportional to their variances and averaged. The weighted average estimate has a variance less than either individual estimate. (Author's abstract)
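As a hedged aside, the following sketch shows the inverse-variance weighting of two independent estimates described at the end of the abstract; the discharges and standard errors are made-up numbers, and the sketch assumes independent errors, whereas a full application would also account for cross correlation of residuals between methods.

```python
import numpy as np

def weighted_estimate(estimates, variances):
    """Combine independent estimates with weights inversely proportional to their variances."""
    variances = np.asarray(variances, dtype=float)
    w = 1.0 / variances
    w /= w.sum()
    combined = np.dot(w, estimates)
    combined_var = 1.0 / np.sum(1.0 / variances)
    return combined, combined_var

# Hypothetical 100-yr flood estimates (m^3/s) from the two independent methods
q_basin, se_basin = 180.0, 60.0        # basin/climatic-characteristic equation
q_width, se_width = 150.0, 75.0        # channel-width equation
q, var = weighted_estimate([q_basin, q_width], [se_basin**2, se_width**2])
print(f"weighted estimate: {q:.1f} m^3/s, standard error: {var**0.5:.1f} m^3/s")
# For independent errors the combined variance is always smaller than either individual variance.
```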
Rudin-Brown, Christina M; Kramer, Chelsea; Langerak, Robin; Scipione, Andrea; Kelsey, Shelley
2017-11-17
Although numerous research studies have reported high levels of error and misuse of child restraint systems (CRS) and booster seats in experimental and real-world scenarios, conclusions are limited because they provide little information regarding which installation issues pose the highest risk and thus should be targeted for change. Beneficial to legislating bodies and researchers alike would be a standardized, globally relevant assessment of the potential injury risk associated with more common forms of CRS and booster seat misuse, which could be applied with observed error frequency-for example, in car seat clinics or during prototype user testing-to better identify and characterize the installation issues of greatest risk to safety. A group of 8 leading world experts in CRS and injury biomechanics, who were members of an international child safety project, estimated the potential injury severity associated with common forms of CRS and booster seat misuse. These injury risk error severity score (ESS) ratings were compiled and compared to scores from previous research that had used a similar procedure but with fewer respondents. To illustrate their application, and as part of a larger study examining CRS and booster seat labeling requirements, the new standardized ESS ratings were applied to objective installation performance data from 26 adult participants who installed a convertible (rear- vs. forward-facing) CRS and booster seat in a vehicle, and a child test dummy in the CRS and booster seat, using labels that only just met minimal regulatory requirements. The outcome measure, the risk priority number (RPN), represented the composite scores of injury risk and observed installation error frequency. Variability within the sample of ESS ratings in the present study was smaller than that generated in previous studies, indicating better agreement among experts on what constituted injury risk. Application of the new standardized ESS ratings to installation performance data revealed several areas of misuse of the CRS/booster seat associated with high potential injury risk. Collectively, findings indicate that standardized ESS ratings are useful for estimating injury risk potential associated with real-world CRS and booster seat installation errors.
Reproducibility of 3D kinematics and surface electromyography measurements of mastication.
Remijn, Lianne; Groen, Brenda E; Speyer, Renée; van Limbeek, Jacques; Nijhuis-van der Sanden, Maria W G
2016-03-01
The aim of this study was to determine the measurement reproducibility of a procedure for evaluating the mastication process and to estimate the smallest detectable differences of 3D kinematic and surface electromyography (sEMG) variables. Kinematics of mandible movements and sEMG activity of the masticatory muscles were obtained over two sessions with four conditions: two food textures (biscuit and bread) of two sizes (small and large). Twelve healthy adults (mean age 29.1 years) completed the study. The second to the fifth chewing cycle of 5 bites were used for analyses. The reproducibility per outcome variable was calculated with an intraclass correlation coefficient (ICC), and a Bland-Altman analysis was applied to determine the standard error of measurement, relative error of measurement, and smallest detectable differences of all variables. ICCs ranged from 0.71 to 0.98 for all outcome variables. The outcome variables consisted of four bite variables and fourteen chewing cycle variables. The relative standard error of measurement of the bite variables was up to 17.3% for 'time-to-swallow', 'time-to-transport' and 'number of chewing cycles', but ranged from 31.5% to 57.0% for 'change of chewing side'. The relative standard error of measurement ranged from 4.1% to 24.7% for chewing cycle variables and was smaller for kinematic variables than for sEMG variables. In general, 3D kinematics and sEMG are reproducible techniques for assessing the mastication process. The duration of the chewing cycle and the frequency of chewing were the most reproducible measurements. Change of chewing side could not be reproduced. The published measurement error and smallest detectable differences will aid the interpretation of the results of future clinical studies using the same study variables. Copyright © 2015 Elsevier Inc. All rights reserved.
Efficient Z gates for quantum computing
NASA Astrophysics Data System (ADS)
McKay, David C.; Wood, Christopher J.; Sheldon, Sarah; Chow, Jerry M.; Gambetta, Jay M.
2017-08-01
For superconducting qubits, microwave pulses drive rotations around the Bloch sphere. The phase of these drives can be used to generate zero-duration arbitrary virtual Z gates, which, combined with two Xπ/2 gates, can generate any SU(2) gate. Here we show how to best utilize these virtual Z gates to both improve algorithms and correct pulse errors. We perform randomized benchmarking using a Clifford set of Hadamard and Z gates and show that the error per Clifford is reduced versus a set consisting of standard finite-duration X and Y gates. Z gates can correct unitary rotation errors for weakly anharmonic qubits as an alternative to pulse-shaping techniques such as derivative removal by adiabatic gate (DRAG). We investigate leakage and show that a combination of DRAG pulse shaping to minimize leakage and Z gates to correct rotation errors realizes a 13.3 ns Xπ/2 gate characterized by low error [1.95(3) × 10⁻⁴] and low leakage [3.1(6) × 10⁻⁶]. Ultimately leakage is limited by the finite temperature of the qubit, but this limit is two orders of magnitude smaller than pulse errors due to decoherence.
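For readers who want to see the algebra behind the zero-duration Z gates, the following sketch (an illustrative numpy check, not code from the paper) verifies numerically that an arbitrary SU(2) gate U(θ, φ, λ) = RZ(φ)RY(θ)RZ(λ) equals, up to a global phase, two physical Xπ/2 pulses separated by three virtual Z rotations.

```python
# Hedged illustration: verify that RZ(phi) RY(theta) RZ(lam) equals, up to global phase,
# Z(phi+pi) . X(pi/2) . Z(theta+pi) . X(pi/2) . Z(lam), i.e. two physical X(pi/2) pulses
# plus three virtual (zero-duration) Z rotations.
import numpy as np

def rz(a):
    return np.array([[np.exp(-1j * a / 2), 0], [0, np.exp(1j * a / 2)]])

def rx(a):
    c, s = np.cos(a / 2), np.sin(a / 2)
    return np.array([[c, -1j * s], [-1j * s, c]])

def ry(a):
    c, s = np.cos(a / 2), np.sin(a / 2)
    return np.array([[c, -s], [s, c]])

def virtual_z_decomposition(theta, phi, lam):
    x90 = rx(np.pi / 2)
    return rz(phi + np.pi) @ x90 @ rz(theta + np.pi) @ x90 @ rz(lam)

rng = np.random.default_rng(1)
for _ in range(1000):
    theta, phi, lam = rng.uniform(-np.pi, np.pi, size=3)
    u_target = rz(phi) @ ry(theta) @ rz(lam)
    u_vz = virtual_z_decomposition(theta, phi, lam)
    # compare up to global phase via the magnitude of the Hilbert-Schmidt overlap
    overlap = abs(np.trace(u_target.conj().T @ u_vz)) / 2
    assert np.isclose(overlap, 1.0, atol=1e-12)
print("ZXZXZ decomposition reproduces every sampled SU(2) gate up to global phase")
```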
Blumenfeld, Philip; Hata, Nobuhiko; DiMaio, Simon; Zou, Kelly; Haker, Steven; Fichtinger, Gabor; Tempany, Clare M C
2007-09-01
To quantify needle placement accuracy of magnetic resonance image (MRI)-guided core needle biopsy of the prostate. A total of 10 biopsies were performed with an 18-gauge (G) core biopsy needle via a percutaneous transperineal approach. Needle placement error was assessed by comparing the coordinates of preplanned targets with the needle tip measured from the intraprocedural coherent gradient echo images. The source of these errors was subsequently investigated by measuring displacement caused by needle deflection and needle susceptibility artifact shift in controlled phantom studies. Needle placement error due to misalignment of the needle template guide was also evaluated. The mean and standard deviation (SD) of errors in targeted biopsies were 6.5 +/- 3.5 mm. Phantom experiments showed significant placement error due to needle deflection with a needle with an asymmetrically beveled tip (3.2-8.7 mm depending on tissue type) but significantly smaller error with a symmetrical bevel (0.6-1.1 mm). Needle susceptibility artifacts caused a shift of 1.6 +/- 0.4 mm from the true needle axis. Misalignment of the needle template guide contributed an error of 1.5 +/- 0.3 mm. Needle placement error was clinically significant in MRI-guided biopsy for diagnosis of prostate cancer. Needle placement error due to needle deflection was the most significant cause of error, especially for needles with an asymmetrical bevel. (c) 2007 Wiley-Liss, Inc.
Guo, Changning; Doub, William H; Kauffman, John F
2010-08-01
Monte Carlo simulations were applied to investigate the propagation of uncertainty in both input variables and response measurements on model prediction for nasal spray product performance design of experiment (DOE) models in the first part of this study, with an initial assumption that the models perfectly represent the relationship between input variables and the measured responses. In this article, we discarded the initial assumption and extended the Monte Carlo simulation study to examine the influence of both input variable variation and product performance measurement variation on the uncertainty in DOE model coefficients. The Monte Carlo simulations presented in this article illustrate the importance of careful error propagation during product performance modeling. Our results show that the error estimates based on Monte Carlo simulation result in smaller model coefficient standard deviations than those from regression methods. This suggests that the estimated standard deviations from regression may overestimate the uncertainties in the model coefficients. Monte Carlo simulations provide a simple software solution to understand the propagation of uncertainty in complex DOE models so that design space can be specified with statistically meaningful confidence levels. (c) 2010 Wiley-Liss, Inc. and the American Pharmacists Association
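As an illustration of the kind of procedure described (a hedged sketch with a made-up two-factor linear DOE model and assumed noise levels, not the authors' nasal spray models), the following Monte Carlo loop jitters both the input settings and the measured responses, refits the model each time, and compares the spread of the refit coefficients with the ordinary regression standard errors:

```python
# Hedged sketch: propagate input-variable and response-measurement noise through a
# two-factor linear DOE model by Monte Carlo, and compare the spread of the refit
# coefficients with the ordinary least-squares standard errors.
import numpy as np

rng = np.random.default_rng(0)

# nominal full-factorial design (coded units) and a "true" response surface
X = np.array([[x1, x2] for x1 in (-1.0, 0.0, 1.0) for x2 in (-1.0, 0.0, 1.0)])
beta_true = np.array([10.0, 2.0, -1.5])          # intercept, x1, x2
design = np.column_stack([np.ones(len(X)), X])
y_obs = design @ beta_true + rng.normal(scale=0.3, size=len(X))

def fit(design_matrix, y):
    coef, *_ = np.linalg.lstsq(design_matrix, y, rcond=None)
    return coef

coef_hat = fit(design, y_obs)

# ordinary least-squares standard errors for comparison
resid = y_obs - design @ coef_hat
sigma2 = resid @ resid / (len(y_obs) - design.shape[1])
se_ols = np.sqrt(np.diag(sigma2 * np.linalg.inv(design.T @ design)))

# Monte Carlo: jitter both the factor settings and the measured responses, refit
sd_input, sd_response = 0.05, 0.3                # assumed (illustrative) uncertainties
coefs = []
for _ in range(5000):
    X_mc = X + rng.normal(scale=sd_input, size=X.shape)
    y_mc = y_obs + rng.normal(scale=sd_response, size=len(y_obs))
    coefs.append(fit(np.column_stack([np.ones(len(X_mc)), X_mc]), y_mc))
se_mc = np.std(coefs, axis=0)

print("OLS standard errors :", np.round(se_ols, 3))
print("Monte Carlo spread  :", np.round(se_mc, 3))
```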
Maruyama, Shuki; Fukushima, Yasuhiro; Miyamae, Yuta; Koizumi, Koji
2018-06-01
This study aimed to investigate the effects of parameter presets of the forward projected model-based iterative reconstruction solution (FIRST) on the accuracy of pulmonary nodule volume measurement. A torso phantom with simulated nodules [diameter: 5, 8, 10, and 12 mm; computed tomography (CT) density: -630 HU] was scanned with a multi-detector CT at tube currents of 10 mA (ultra-low-dose: UL-dose) and 270 mA (standard-dose: Std-dose). Images were reconstructed with filtered back projection [FBP; standard (Std-FBP), ultra-low-dose (UL-FBP)], FIRST Lung (UL-Lung), and FIRST Body (UL-Body), and analyzed with semi-automatic software. The error in the volume measurement was determined. The errors with UL-Lung and UL-Body were smaller than that with UL-FBP. The smallest error was 5.8% ± 0.3 for the 12-mm nodule with UL-Body (middle lung). Our results indicated that FIRST Body would be superior to FIRST Lung in terms of accuracy of nodule measurement with UL-dose CT.
Analytical Problems and Suggestions in the Analysis of Behavioral Economic Demand Curves.
Yu, Jihnhee; Liu, Liu; Collins, R Lorraine; Vincent, Paula C; Epstein, Leonard H
2014-01-01
Behavioral economic demand curves (Hursh, Raslear, Shurtleff, Bauman, & Simmons, 1988) are innovative approaches to characterize the relationships between consumption of a substance and its price. In this article, we investigate common analytical issues in the use of behavioral economic demand curves, which can cause inconsistent interpretations of demand curves, and then we provide methodological suggestions to address those analytical issues. We first demonstrate that log transformation with different added values for handling zeros changes model parameter estimates dramatically. Second, demand curves are often analyzed using an overparameterized model that results in an inefficient use of the available data and a lack of assessment of the variability among individuals. To address these issues, we apply a nonlinear mixed effects model based on multivariate error structures that has not been used previously to analyze behavioral economic demand curves in the literature. We also propose analytical formulas for the relevant standard errors of derived values such as Pmax, Omax, and elasticity. The proposed model stabilizes the derived values regardless of using different added increments and provides substantially smaller standard errors. We illustrate the data analysis procedure using data from a relative reinforcement efficacy study of simulated marijuana purchasing.
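The sensitivity to the added constant can be demonstrated with a generic log-linear fit (a hedged sketch on simulated purchasing data; the functional form and numbers are illustrative and are not the demand equation or data analyzed in the paper):

```python
# Hedged sketch: fitting log10(consumption + k) against log10(price) with different
# added constants k (used to handle zero consumption) shifts the slope estimate.
import numpy as np

rng = np.random.default_rng(42)
price = np.array([0.25, 0.5, 1, 2, 4, 8, 16, 32], dtype=float)

# simulated purchasing data: consumption decays with price and hits zero at high prices
consumption = np.clip(
    np.round(20 * np.exp(-0.15 * price) - 1 + rng.normal(0, 1, price.size)), 0, None)

for k in (0.01, 0.1, 1.0):
    y = np.log10(consumption + k)
    x = np.log10(price)
    slope, intercept = np.polyfit(x, y, 1)
    print(f"added constant k={k:<5} slope={slope:6.3f} intercept={intercept:6.3f}")
```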
Improving Arterial Spin Labeling by Using Deep Learning.
Kim, Ki Hwan; Choi, Seung Hong; Park, Sung-Hong
2018-05-01
Purpose To develop a deep learning algorithm that generates arterial spin labeling (ASL) perfusion images with higher accuracy and robustness by using a smaller number of subtraction images. Materials and Methods For ASL image generation from pair-wise subtraction, we used a convolutional neural network (CNN) as a deep learning algorithm. The ground truth perfusion images were generated by averaging six or seven pairwise subtraction images acquired with (a) conventional pseudocontinuous arterial spin labeling from seven healthy subjects or (b) Hadamard-encoded pseudocontinuous ASL from 114 patients with various diseases. CNNs were trained to generate perfusion images from a smaller number (two or three) of subtraction images and evaluated by means of cross-validation. CNNs from the patient data sets were also tested on 26 separate stroke data sets. CNNs were compared with the conventional averaging method in terms of mean square error and radiologic score by using a paired t test and/or Wilcoxon signed-rank test. Results Mean square errors were approximately 40% lower than those of the conventional averaging method for the cross-validation with the healthy subjects and patients and the separate test with the patients who had experienced a stroke (P < .001). Region-of-interest analysis in stroke regions showed that cerebral blood flow maps from CNN (mean ± standard deviation, 19.7 mL per 100 g/min ± 9.7) had smaller mean square errors than those determined with the conventional averaging method (43.2 ± 29.8) (P < .001). Radiologic scoring demonstrated that CNNs suppressed noise and motion and/or segmentation artifacts better than the conventional averaging method did (P < .001). Conclusion CNNs provided superior perfusion image quality and more accurate perfusion measurement compared with those of the conventional averaging method for generation of ASL images from pair-wise subtraction images. © RSNA, 2017.
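A rough sketch of the mapping involved is given below (an assumed toy architecture in PyTorch; it is not the network the authors trained), with a few subtraction images as input channels and one perfusion image as the target:

```python
# Hedged sketch (PyTorch): a toy CNN mapping a few pair-wise subtraction images
# (input channels) to one perfusion image; loosely analogous to, not identical to,
# the network described in the abstract.
import torch
import torch.nn as nn

class PerfusionCNN(nn.Module):
    def __init__(self, n_subtractions=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(n_subtractions, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(32, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(32, 1, kernel_size=3, padding=1),   # one perfusion image out
        )

    def forward(self, x):
        return self.net(x)

model = PerfusionCNN(n_subtractions=3)
subtractions = torch.randn(8, 3, 64, 64)      # batch of 8, 3 subtraction images, 64x64
target = torch.randn(8, 1, 64, 64)            # stands in for the averaged "ground truth"

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()                        # mean square error, as in the evaluation
for step in range(5):
    optimizer.zero_grad()
    loss = loss_fn(model(subtractions), target)
    loss.backward()
    optimizer.step()
```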
Use of streamflow data to estimate base flow/ground-water recharge for Wisconsin
Gebert, W.A.; Radloff, M.J.; Considine, E.J.; Kennedy, J.L.
2007-01-01
The average annual base flow/recharge was determined for streamflow-gaging stations throughout Wisconsin by base-flow separation. A map of the State was prepared that shows the average annual base flow for the period 1970-99 for watersheds at 118 gaging stations. Trend analysis was performed on 22 of the 118 streamflow-gaging stations that had long-term records, unregulated flow, and provided areal coverage of the State. The analysis found that a statistically significant increasing trend was occurring for watersheds where the primary land use was agriculture. Most gaging stations where the land cover was forest had no significant trend. A method to estimate the average annual base flow at ungaged sites was developed by multiple-regression analysis using basin characteristics. The equation with the lowest standard error of estimate, 9.5%, has drainage area, soil infiltration and base flow factor as independent variables. To determine the average annual base flow for smaller watersheds, estimates were made at low-flow partial-record stations in 3 of the 12 major river basins in Wisconsin. Regression equations were developed for each of the three major river basins using basin characteristics. Drainage area, soil infiltration, basin storage and base-flow factor were the independent variables in the regression equations with the lowest standard error of estimate. The standard error of estimate ranged from 17% to 52% for the three river basins. © 2007 American Water Resources Association.
On the Confounding Effect of Temperature on Chemical Shift-Encoded Fat Quantification
Hernando, Diego; Sharma, Samir D.; Kramer, Harald; Reeder, Scott B.
2014-01-01
Purpose To characterize the confounding effect of temperature on chemical shift-encoded (CSE) fat quantification. Methods The proton resonance frequency of water, unlike triglycerides, depends on temperature. This leads to a temperature dependence of the spectral models of fat (relative to water) that are commonly used by CSE-MRI methods. Simulation analysis was performed for 1.5 Tesla CSE fat–water signals at various temperatures and echo time combinations. Oil–water phantoms were constructed and scanned at temperatures between 0 and 40°C using spectroscopy and CSE imaging at three echo time combinations. An explanted human liver, rejected for transplantation due to steatosis, was scanned using spectroscopy and CSE imaging. Fat–water reconstructions were performed using four different techniques: magnitude and complex fitting, with standard or temperature-corrected signal modeling. Results In all experiments, magnitude fitting with standard signal modeling resulted in large fat quantification errors. Errors were largest for echo time combinations near TEinit ≈ 1.3 ms, ΔTE ≈ 2.2 ms. Errors in fat quantification caused by temperature-related frequency shifts were smaller with complex fitting, and were avoided using a temperature-corrected signal model. Conclusion Temperature is a confounding factor for fat quantification. If not accounted for, it can result in large errors in fat quantifications in phantom and ex vivo acquisitions. PMID:24123362
The statistical properties and possible causes of polar motion prediction errors
NASA Astrophysics Data System (ADS)
Kosek, Wieslaw; Kalarus, Maciej; Wnek, Agnieszka; Zbylut-Gorska, Maria
2015-08-01
The pole coordinate data predictions from different prediction contributors of the Earth Orientation Parameters Combination of Prediction Pilot Project (EOPCPPP) were studied to determine the statistical properties of polar motion forecasts by looking at the time series of differences between them and the future IERS pole coordinates data. The mean absolute errors, standard deviations as well as the skewness and kurtosis of these differences were computed together with their error bars as a function of prediction length. The ensemble predictions show slightly smaller mean absolute errors and standard deviations; however, their skewness and kurtosis values are similar to those of the predictions from the individual contributors. The skewness and kurtosis make it possible to check whether these prediction differences follow a normal distribution. The kurtosis values diminish with the prediction length, which means that the probability distribution of these prediction differences becomes more platykurtic than leptokurtic. Nonzero skewness values result from the oscillating character of these differences for particular prediction lengths, which can be due to the irregular change of the annual oscillation phase in the joint fluid (atmospheric + ocean + land hydrology) excitation functions. The variations of the annual oscillation phase computed by the combination of the Fourier transform band pass filter and the Hilbert transform from pole coordinates data as well as from pole coordinates model data obtained from fluid excitations are in good agreement.
NASA Technical Reports Server (NTRS)
Wilson, C.; Dye, R.; Reed, L.
1982-01-01
The errors associated with planimetric mapping of the United States using satellite remote sensing techniques are analyzed. Assumptions concerning the state of the art achievable for satellite mapping systems and platforms in the 1995 time frame are made. An analysis of these performance parameters is made using an interactive cartographic satellite computer model, after first validating the model using LANDSAT 1 through 3 performance parameters. An investigation of current large scale (1:24,000) US National mapping techniques is made. Using the results of this investigation, and current national mapping accuracy standards, the 1995 satellite mapping system is evaluated for its ability to meet US mapping standards for planimetric and topographic mapping at scales of 1:24,000 and smaller.
Calibration of GPS based high accuracy speed meter for vehicles
NASA Astrophysics Data System (ADS)
Bai, Yin; Sun, Qiao; Du, Lei; Yu, Mei; Bai, Jie
2015-02-01
The GPS-based high-accuracy speed meter for vehicles is a special type of GPS speed meter that uses Doppler demodulation of GPS signals to calculate the speed of a moving target. It is increasingly used as reference equipment in the field of traffic speed measurement, but acknowledged standard calibration methods are still lacking. To solve this problem, this paper presents the set-ups of simulated calibration, field test signal replay calibration, and an in-field test comparison with an optical sensor based non-contact speed meter. All the experiments were carried out on particular speed values in the range of (40-180) km/h with the same GPS speed meter. The speed measurement errors of simulated calibration fall in the range of +/-0.1 km/h or +/-0.1%, with uncertainties smaller than 0.02% (k=2). The errors of replay calibration fall in the range of +/-0.1% with uncertainties smaller than 0.10% (k=2). The calibration results justify the effectiveness of the two methods. The relative deviations of the GPS speed meter from the optical sensor based non-contact speed meter fall in the range of +/-0.3%, which validates the use of the GPS speed meter as a reference instrument. The results of this research can provide a technical basis for the establishment of internationally standardized calibration methods for GPS speed meters, and thus help ensure their legal status as reference equipment in the field of traffic speed metrology.
Estimating Extracellular Spike Waveforms from CA1 Pyramidal Cells with Multichannel Electrodes
Molden, Sturla; Moldestad, Olve; Storm, Johan F.
2013-01-01
Extracellular (EC) recordings of action potentials from the intact brain are embedded in background voltage fluctuations known as the “local field potential” (LFP). In order to use EC spike recordings for studying biophysical properties of neurons, the spike waveforms must be separated from the LFP. Linear low-pass and high-pass filters are usually insufficient to separate spike waveforms from LFP, because they have overlapping frequency bands. Broad-band recordings of LFP and spikes were obtained with a 16-channel laminar electrode array (silicon probe). We developed an algorithm whereby local LFP signals from the spike-containing channel were modeled using locally weighted polynomial regression analysis of adjoining channels without spikes. The modeled LFP signal was subtracted from the recording to estimate the embedded spike waveforms. We tested the method both on defined spike waveforms added to LFP recordings, and on in vivo-recorded extracellular spikes from hippocampal CA1 pyramidal cells in anaesthetized mice. We show that the algorithm can correctly extract the spike waveforms embedded in the LFP. In contrast, traditional high-pass filters failed to recover correct spike shapes, albeit producing smaller standard errors. We found that high-pass RC or 2-pole Butterworth filters with cut-off frequencies below 12.5 Hz are required to retrieve waveforms comparable to our method. The method was also compared to spike-triggered averages of the broad-band signal, and yielded waveforms with smaller standard errors and less distortion before and after the spike. PMID:24391714
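The channel-wise regression idea can be sketched as follows (an illustrative Python example on synthetic data; it uses a plain per-sample polynomial fit across channels rather than the locally weighted regression of the paper):

```python
# Hedged sketch: estimate the LFP on a spike-containing channel by fitting, at each
# time sample, a low-order polynomial across the other (spike-free) channels of a
# laminar probe, then subtract the predicted LFP to recover the spike waveform.
import numpy as np

rng = np.random.default_rng(0)
n_ch, n_t, fs = 16, 400, 20000.0
depth = np.arange(n_ch)                       # channel index along the probe
t = np.arange(n_t) / fs

# synthetic LFP: smooth across depth and time, plus noise
lfp = 50 * np.sin(2 * np.pi * 8 * t)[None, :] * np.cos(0.2 * depth)[:, None]
lfp += rng.normal(scale=2.0, size=(n_ch, n_t))

# add a known spike waveform on channel 7
spike_ch = 7
spike = -80 * np.exp(-0.5 * ((t - 0.010) / 0.0004) ** 2)   # ~0.4 ms negative deflection
rec = lfp.copy()
rec[spike_ch] += spike

# model the LFP on the spike channel from the other channels, sample by sample
other = np.array([ch for ch in range(n_ch) if ch != spike_ch])
lfp_hat = np.empty(n_t)
for k in range(n_t):
    coef = np.polyfit(depth[other], rec[other, k], deg=3)
    lfp_hat[k] = np.polyval(coef, depth[spike_ch])

spike_hat = rec[spike_ch] - lfp_hat
print("peak error of recovered spike (uV):", np.max(np.abs(spike_hat - spike)))
```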
Integrating models that depend on variable data
NASA Astrophysics Data System (ADS)
Banks, A. T.; Hill, M. C.
2016-12-01
Models of human-Earth systems are often developed with the goal of predicting the behavior of one or more dependent variables from multiple independent variables, processes, and parameters. Often dependent variable values range over many orders of magnitude, which complicates evaluation of the fit of the dependent variable values to observations. Many metrics and optimization methods have been proposed to address dependent variable variability, with little consensus being achieved. In this work, we evaluate two such methods: log transformation (based on the dependent variable being log-normally distributed with a constant variance) and error-based weighting (based on a multi-normal distribution with variances that tend to increase as the dependent variable value increases). Error-based weighting has the advantage of encouraging model users to carefully consider data errors, such as measurement and epistemic errors, while log-transformations can be a black box for typical users. Placing the log-transformation into the statistical perspective of error-based weighting has not formerly been considered, to the best of our knowledge. To make the evaluation as clear and reproducible as possible, we use multiple linear regression (MLR). Simulations are conducted with MATLAB. The example represents stream transport of nitrogen with up to eight independent variables. The single dependent variable in our example has values that range over 4 orders of magnitude. Results are applicable to any problem for which individual or multiple data types produce a large range of dependent variable values. For this problem, the log transformation produced good model fit, while some formulations of error-based weighting worked poorly. Results support previous suggestions that error-based weighting derived from a constant coefficient of variation overemphasizes low values and degrades model fit to high values. Applying larger weights to the high values is inconsistent with the log-transformation. Greater consistency is obtained by imposing smaller (by up to a factor of 1/35) weights on the smaller dependent-variable values. From an error-based perspective, the small weights are consistent with large standard deviations. This work considers the consequences of these two common ways of addressing variable data.
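The two options can be compared on a toy data set (a hedged Python sketch, whereas the study used MATLAB; the model, noise level and assumed 15% coefficient of variation are illustrative):

```python
# Hedged sketch: fit y = 10**(b0 + b1*x) to data spanning ~4 orders of magnitude,
# once by OLS on log10(y) and once by error-based weighting on the raw scale with an
# assumed constant coefficient of variation (sigma_i proportional to y_i).
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(3)
x = np.linspace(0.0, 4.0, 60)
y = 10 ** (0.8 * x + 0.2) * np.exp(rng.normal(scale=0.15, size=x.size))

# option 1: log transformation + ordinary least squares
X = np.column_stack([np.ones_like(x), x])
b_log, *_ = np.linalg.lstsq(X, np.log10(y), rcond=None)

# option 2: raw-scale fit with error-based weights (constant coefficient of variation)
model = lambda x, b0, b1: 10 ** (b0 + b1 * x)
b_cv, _ = curve_fit(model, x, y, p0=(0.0, 1.0), sigma=0.15 * y, absolute_sigma=True)

print("log-OLS intercept/slope     :", np.round(b_log, 3))
print("CV-weighted intercept/slope :", np.round(b_cv, 3))
```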
Observing human movements helps decoding environmental forces.
Zago, Myrka; La Scaleia, Barbara; Miller, William L; Lacquaniti, Francesco
2011-11-01
Vision of human actions can affect several features of visual motion processing, as well as the motor responses of the observer. Here, we tested the hypothesis that action observation helps decoding environmental forces during the interception of a decelerating target within a brief time window, a task intrinsically very difficult. We employed a factorial design to evaluate the effects of scene orientation (normal or inverted) and target gravity (normal or inverted). Button-press triggered the motion of a bullet, a piston, or a human arm. We found that the timing errors were smaller for upright scenes irrespective of gravity direction in the Bullet group, while the errors were smaller for the standard condition of normal scene and gravity in the Piston group. In the Arm group, instead, performance was better when the directions of scene and target gravity were concordant, irrespective of whether both were upright or inverted. These results suggest that the default viewer-centered reference frame is used with inanimate scenes, such as those of the Bullet and Piston protocols. Instead, the presence of biological movements in animate scenes (as in the Arm protocol) may help processing target kinematics under the ecological conditions of coherence between scene and target gravity directions.
Evaluating significance in linear mixed-effects models in R.
Luke, Steven G
2017-08-01
Mixed-effects models are being used ever more frequently in the analysis of experimental data. However, in the lme4 package in R the standards for evaluating significance of fixed effects in these models (i.e., obtaining p-values) are somewhat vague. There are good reasons for this, but as researchers who are using these models are required in many cases to report p-values, some method for evaluating the significance of the model output is needed. This paper reports the results of simulations showing that the two most common methods for evaluating significance, using likelihood ratio tests and applying the z distribution to the Wald t values from the model output (t-as-z), are somewhat anti-conservative, especially for smaller sample sizes. Other methods for evaluating significance, including parametric bootstrapping and the Kenward-Roger and Satterthwaite approximations for degrees of freedom, were also evaluated. The results of these simulations suggest that Type 1 error rates are closest to .05 when models are fitted using REML and p-values are derived using the Kenward-Roger or Satterthwaite approximations, as these approximations both produced acceptable Type 1 error rates even for smaller samples.
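Since the paper works in R with lme4, the following is only a loose Python analogue (a hedged sketch on simulated random-intercept data) of one of the alternatives it discusses: a parametric-bootstrap reference distribution for the likelihood-ratio statistic, compared with the naive chi-square p-value:

```python
# Hedged sketch: parametric-bootstrap likelihood-ratio test for a single fixed effect in
# a random-intercept model, compared with the naive chi-square(1) p-value.  Simulated
# data and statsmodels MixedLM stand in for the lme4 analyses evaluated in the paper.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from scipy import stats

rng = np.random.default_rng(0)
n_subj, n_obs = 24, 10
subj = np.repeat(np.arange(n_subj), n_obs)
x = rng.normal(size=n_subj * n_obs)
u = rng.normal(scale=0.8, size=n_subj)                    # random intercepts
y = 2.0 + 0.0 * x + u[subj] + rng.normal(size=n_subj * n_obs)   # true fixed effect = 0
dat = pd.DataFrame({"y": y, "x": x, "subj": subj})

def lrt_stat(data):
    full = smf.mixedlm("y ~ x", data, groups=data["subj"]).fit(reml=False)
    null = smf.mixedlm("y ~ 1", data, groups=data["subj"]).fit(reml=False)
    return 2 * (full.llf - null.llf)

obs_stat = lrt_stat(dat)
p_chi2 = stats.chi2.sf(obs_stat, df=1)        # the potentially anti-conservative route

# parametric bootstrap: simulate responses from the fitted null model, recompute the statistic
null_fit = smf.mixedlm("y ~ 1", dat, groups=dat["subj"]).fit(reml=False)
boot_stats = []
for _ in range(200):
    u_b = rng.normal(scale=np.sqrt(null_fit.cov_re.iloc[0, 0]), size=n_subj)
    y_b = (null_fit.fe_params["Intercept"] + u_b[subj]
           + rng.normal(scale=np.sqrt(null_fit.scale), size=n_subj * n_obs))
    boot_stats.append(lrt_stat(dat.assign(y=y_b)))
p_boot = (1 + np.sum(np.array(boot_stats) >= obs_stat)) / (1 + len(boot_stats))
print(f"chi-square p = {p_chi2:.3f}, parametric-bootstrap p = {p_boot:.3f}")
```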
Frequency-domain gravitational waveform models for inspiraling binary neutron stars
NASA Astrophysics Data System (ADS)
Kawaguchi, Kyohei; Kiuchi, Kenta; Kyutoku, Koutarou; Sekiguchi, Yuichiro; Shibata, Masaru; Taniguchi, Keisuke
2018-02-01
We develop a model for frequency-domain gravitational waveforms from inspiraling binary neutron stars. Our waveform model is calibrated by comparison with hybrid waveforms constructed from our latest high-precision numerical-relativity waveforms and the SEOBNRv2T waveforms in the frequency range of 10-1000 Hz. We show that the phase difference between our waveform model and the hybrid waveforms is always smaller than 0.1 rad for the binary tidal deformability Λ̃ in the range 300 ≲ Λ̃ ≲ 1900 and for a mass ratio between 0.73 and 1. We show that, for 10-1000 Hz, the distinguishability for the signal-to-noise ratio ≲50 and the mismatch between our waveform model and the hybrid waveforms are always smaller than 0.25 and 1.1 × 10⁻⁵, respectively. The systematic error of our waveform model in the measurement of Λ̃ is always smaller than 20 with respect to the hybrid waveforms for 300 ≲ Λ̃ ≲ 1900. The statistical error in the measurement of binary parameters is computed employing our waveform model, and we obtain results consistent with the previous studies. We show that the systematic error of our waveform model is always smaller than 20% (typically smaller than 10%) of the statistical error for events with a signal-to-noise ratio of 50.
Lipkind, Dmitry; Plienrasri, Chatchawat; Chickos, James S
2010-12-23
The vaporization enthalpies of 1-methyl-, 1-ethyl-, 1-phenyl-, and 1-benzylimidazole, 1-methyl- and 1-phenylpyrazole, and trans-azobenzene are evaluated by correlation-gas chromatography (C-GC) using a variety of azines and diazines as standards. The vaporization enthalpies obtained by C-GC when compared to literature values are approximately 14 kJ·mol⁻¹ smaller for the imidazoles and 6 kJ·mol⁻¹ smaller for the pyrazoles. The literature vaporization enthalpies of 1-methylpyrrole and 1-methylindole, two closely related compounds with one less nitrogen, are reproduced by C-GC. These results suggest that the magnitude of the intermolecular interactions present in 1-substituted imidazoles and pyrazoles is significantly larger than that of those present in the reference compounds and greater than or equal in magnitude to the enhanced intermolecular interactions observed previously in aromatic 1,2-diazines. The vaporization enthalpy and vapor pressure of a trans-1,2-diazine, trans-azobenzene, measured by C-GC using similar standards reproduced the literature values within experimental error.
A New Method for Estimating the Effective Population Size from Allele Frequency Changes
Pollak, Edward
1983-01-01
A new procedure is proposed for estimating the effective population size, given that information is available on changes in frequencies of the alleles at one or more independently segregating loci and the population is observed at two or more separate times. Approximate expressions are obtained for the variances of the new statistic, as well as others, also based on allele frequency changes, that have been discussed in the literature. This analysis indicates that the new statistic will generally have a smaller variance than the others. Estimates of effective population sizes and of the standard errors of the estimates are computed for data on two fly populations that have been discussed in earlier papers. In both cases, there is evidence that the effective population size is very much smaller than the minimum census size of the population. PMID:17246147
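A stripped-down version of the underlying temporal method can be sketched as follows (a hedged simulation that ignores the sampling-error corrections built into Pollak's statistic and the other estimators discussed; the numbers are illustrative):

```python
# Hedged sketch of the temporal method: estimate effective size Ne from the
# standardized variance of allele-frequency change over t generations of pure
# Wright-Fisher drift.  Sampling error in the observed frequencies, which the
# estimators compared in the paper correct for, is ignored here.
import numpy as np

rng = np.random.default_rng(7)
true_ne, n_gen, n_loci = 200, 10, 50

p0 = rng.uniform(0.2, 0.8, size=n_loci)          # initial allele frequencies
p = p0.copy()
for _ in range(n_gen):                           # binomial drift, 2Ne gametes per locus
    p = rng.binomial(2 * true_ne, p) / (2 * true_ne)

# standardized variance of frequency change, averaged over loci
f_hat = np.mean((p - p0) ** 2 / (p0 * (1 - p0)))

# for t << Ne, E[F] ~ t / (2 Ne)  =>  Ne ~ t / (2 F)
ne_hat = n_gen / (2 * f_hat)
print(f"true Ne = {true_ne}, estimated Ne = {ne_hat:.1f}")
```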
A one-step method for modelling longitudinal data with differential equations.
Hu, Yueqin; Treinen, Raymond
2018-04-06
Differential equation models are frequently used to describe non-linear trajectories of longitudinal data. This study proposes a new approach to estimate the parameters in differential equation models. Instead of estimating derivatives from the observed data first and then fitting a differential equation to the derivatives, our new approach directly fits the analytic solution of a differential equation to the observed data, and therefore simplifies the procedure and avoids bias from derivative estimations. A simulation study indicates that the analytic solutions of differential equations (ASDE) approach obtains unbiased estimates of parameters and their standard errors. Compared with other approaches that estimate derivatives first, ASDE has smaller standard errors, larger statistical power and accurate Type I error rates. Although ASDE yields biased estimates when the system has a sudden phase change, the bias is not serious, and a solution to the phase problem is also provided. The ASDE method is illustrated and applied to a two-week study on consumers' shopping behaviour after a sale promotion, and to a set of public data tracking participants' grammatical facial expression in sign language. R code for ASDE and recommendations for sample size and starting values are provided. Limitations and several possible expansions of ASDE are also discussed. © 2018 The British Psychological Society.
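The one-step idea can be illustrated on the simplest possible case (a hedged sketch using exponential growth, dy/dt = r·y, with simulated data; this is not the model or data from the paper):

```python
# Hedged sketch of the ASDE idea on a toy example: for the exponential-growth model
# dy/dt = r*y, fit its analytic solution y(t) = y0*exp(r*t) directly to noisy data,
# and compare with the two-step route that first estimates derivatives numerically.
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(11)
t = np.linspace(0, 2, 21)
y = 1.5 * np.exp(0.9 * t) + rng.normal(scale=0.2, size=t.size)   # true y0=1.5, r=0.9

# one-step (ASDE-style): fit the analytic solution to the observations
sol = lambda t, y0, r: y0 * np.exp(r * t)
theta_one, _ = curve_fit(sol, t, y, p0=(1.0, 0.5))

# two-step: estimate dy/dt by finite differences, then regress dy/dt on y (slope = r)
dydt = np.gradient(y, t)
r_two = np.sum(dydt * y) / np.sum(y * y)          # least-squares slope through origin

print("one-step (y0, r):", np.round(theta_one, 3))
print("two-step r      :", round(r_two, 3))
```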
Trommer, J.T.; Loper, J.E.; Hammett, K.M.; Bowman, Georgia
1996-01-01
Hydrologists use several traditional techniques for estimating peak discharges and runoff volumes from ungaged watersheds. However, applying these techniques to watersheds in west-central Florida requires that empirical relationships be extrapolated beyond tested ranges. As a result there is some uncertainty as to their accuracy. Sixty-six storms in 15 west-central Florida watersheds were modeled using (1) the rational method, (2) the U.S. Geological Survey regional regression equations, (3) the Natural Resources Conservation Service (formerly the Soil Conservation Service) TR-20 model, (4) the Army Corps of Engineers HEC-1 model, and (5) the Environmental Protection Agency SWMM model. The watersheds ranged between fully developed urban and undeveloped natural watersheds. Peak discharges and runoff volumes were estimated using standard or recommended methods for determining input parameters. All model runs were uncalibrated and the selection of input parameters was not influenced by observed data. The rational method, only used to calculate peak discharges, overestimated 45 storms, underestimated 20 storms and estimated the same discharge for 1 storm. The mean estimation error for all storms indicates the method overestimates the peak discharges. Estimation errors were generally smaller in the urban watersheds and larger in the natural watersheds. The U.S. Geological Survey regression equations provide peak discharges for storms of specific recurrence intervals. Therefore, direct comparison with observed data was limited to sixteen observed storms that had precipitation equivalent to specific recurrence intervals. The mean estimation error for all storms indicates the method overestimates both peak discharges and runoff volumes. Estimation errors were smallest for the larger natural watersheds in Sarasota County, and largest for the small watersheds located in the eastern part of the study area. The Natural Resources Conservation Service TR-20 model overestimated peak discharges for 45 storms and underestimated 21 storms, and overestimated runoff volumes for 44 storms and underestimated 22 storms. The mean estimation error for all storms modeled indicates that the model overestimates peak discharges and runoff volumes. The smaller estimation errors in both peak discharges and runoff volumes were for storms occurring in the urban watersheds, and the larger errors were for storms occurring in the natural watersheds. The HEC-1 model overestimated peak discharge rates for 55 storms and underestimated 11 storms. Runoff volumes were overestimated for 44 storms and underestimated for 22 storms using the Army Corps of Engineers HEC-1 model. The mean estimation error for all the storms modeled indicates that the model overestimates peak discharge rates and runoff volumes. Generally, the smaller estimation errors in peak discharges were for storms occurring in the urban watersheds, and the larger errors were for storms occurring in the natural watersheds. Estimation errors in runoff volumes, however, were smallest for the 3 natural watersheds located in the southernmost part of Sarasota County. The Environmental Protection Agency Storm Water Management model produced similar peak discharges and runoff volumes when using both the Green-Ampt and Horton infiltration methods. Estimated peak discharge and runoff volume data calculated with the Horton method were only slightly higher than those calculated with the Green-Ampt method.
The mean estimation error for all the storms modeled indicates the model using the Green-Ampt infiltration method overestimates peak discharges and slightly underestimates runoff volumes. Using the Horton infiltration method, the model overestimates both peak discharges and runoff volumes. The smaller estimation errors in both peak discharges and runoff volumes were for storms occurring in the five natural watersheds in Sarasota County with the least amount of impervious cover and the lowest slopes. The largest er
The international food unit: a new measurement aid that can improve portion size estimation.
Bucher, T; Weltert, M; Rollo, M E; Smith, S P; Jia, W; Collins, C E; Sun, M
2017-09-12
Portion size education tools, aids and interventions can be effective in helping prevent weight gain. However, consumers have difficulties in estimating food portion sizes and are confused by inconsistencies in measurement units and terminologies currently used. Visual cues are an important mediator of portion size estimation, but standardized measurement units are required. In the current study, we present a new food volume estimation tool and test the ability of young adults to accurately quantify food volumes. The International Food Unit™ (IFU™) is a 4 × 4 × 4 cm cube (64 cm³), subdivided into eight 2 cm sub-cubes for estimating smaller food volumes. Compared with currently used measures such as cups and spoons, the IFU™ standardizes estimation of food volumes with metric measures. The IFU™ design is based on binary dimensional increments and the cubic shape facilitates portion size education and training, memory and recall, and computer processing which is binary in nature. The performance of the IFU™ was tested in a randomized between-subject experiment (n = 128 adults, 66 men) that estimated volumes of 17 foods using four methods: the IFU™ cube, a deformable modelling clay cube, a household measuring cup or no aid (weight estimation). Estimation errors were compared between groups using Kruskal-Wallis tests and post-hoc comparisons. Estimation errors differed significantly between groups (H(3) = 28.48, p < .001). The volume estimations were most accurate in the group using the IFU™ cube (Mdn = 18.9%, IQR = 50.2) and least accurate using the measuring cup (Mdn = 87.7%, IQR = 56.1). The modelling clay cube led to a median error of 44.8% (IQR = 41.9). Compared with the measuring cup, the estimation errors using the IFU™ were significantly smaller for 12 food portions and similar for 5 food portions. Weight estimation was associated with a median error of 23.5% (IQR = 79.8). The IFU™ improves volume estimation accuracy compared to other methods. The cubic shape was perceived as favourable, with subdivision and multiplication facilitating volume estimation. Further studies should investigate whether the IFU™ can facilitate portion size training and whether portion size education using the IFU™ is effective and sustainable without the aid. A 3-dimensional IFU™ could serve as a reference object for estimating food volume.
Roberts, Steven; Martin, Michael A
2010-01-01
Concerns have been raised about findings of associations between particulate matter (PM) air pollution and mortality that have been based on a single "best" model arising from a model selection procedure, because such a strategy may ignore model uncertainty inherently involved in searching through a set of candidate models to find the best model. Model averaging has been proposed as a method of allowing for model uncertainty in this context. To propose an extension (double BOOT) to a previously described bootstrap model-averaging procedure (BOOT) for use in time series studies of the association between PM and mortality. We compared double BOOT and BOOT with Bayesian model averaging (BMA) and a standard method of model selection [standard Akaike's information criterion (AIC)]. Actual time series data from the United States are used to conduct a simulation study to compare and contrast the performance of double BOOT, BOOT, BMA, and standard AIC. Double BOOT produced estimates of the effect of PM on mortality that had smaller root mean squared error than did those produced by BOOT, BMA, and standard AIC. This performance boost resulted from estimates produced by double BOOT having smaller variance than those produced by BOOT and BMA. Double BOOT is a viable alternative to BOOT and BMA for producing estimates of the mortality effect of PM.
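The basic BOOT idea (without the double-bootstrap extension) can be sketched as follows (a hedged example with generic simulated linear models standing in for the time-series mortality models used in the study):

```python
# Hedged sketch of bootstrap model averaging (the BOOT idea, not the double-BOOT
# extension): resample the data, let AIC pick a model inside each resample, keep that
# model's pollution coefficient, and average over resamples.
import numpy as np

rng = np.random.default_rng(5)
n = 300
pm = rng.normal(10, 3, n)                     # "PM" exposure
temp = rng.normal(20, 5, n)                   # potential confounder
y = 0.05 * pm + 0.10 * temp + rng.normal(scale=1.0, size=n)

def aic_and_pm_coef(X, y):
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    aic = n * np.log(resid @ resid / n) + 2 * X.shape[1]   # Gaussian AIC up to a constant
    return aic, beta[1]                       # PM coefficient is always column 1

candidates = [
    lambda pm, temp: np.column_stack([np.ones(n), pm]),           # PM only
    lambda pm, temp: np.column_stack([np.ones(n), pm, temp]),     # PM + temperature
]

boot_coefs = []
for _ in range(1000):
    idx = rng.integers(0, n, n)
    fits = [aic_and_pm_coef(make(pm[idx], temp[idx]), y[idx]) for make in candidates]
    boot_coefs.append(min(fits)[1])           # coefficient from the AIC-best model
print("bootstrap model-averaged PM effect:", round(np.mean(boot_coefs), 4))
```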
Comparison of bootstrap approaches for estimation of uncertainties of DTI parameters.
Chung, SungWon; Lu, Ying; Henry, Roland G
2006-11-01
Bootstrap is an empirical non-parametric statistical technique based on data resampling that has been used to quantify uncertainties of diffusion tensor MRI (DTI) parameters, useful in tractography and in assessing DTI methods. The current bootstrap method (repetition bootstrap) used for DTI analysis performs resampling within the data sharing common diffusion gradients, requiring multiple acquisitions for each diffusion gradient. Recently, wild bootstrap was proposed that can be applied without multiple acquisitions. In this paper, two new approaches are introduced called residual bootstrap and repetition bootknife. We show that repetition bootknife corrects for the large bias present in the repetition bootstrap method and, therefore, better estimates the standard errors. Like wild bootstrap, residual bootstrap is applicable to single acquisition scheme, and both are based on regression residuals (called model-based resampling). Residual bootstrap is based on the assumption that non-constant variance of measured diffusion-attenuated signals can be modeled, which is actually the assumption behind the widely used weighted least squares solution of diffusion tensor. The performances of these bootstrap approaches were compared in terms of bias, variance, and overall error of bootstrap-estimated standard error by Monte Carlo simulation. We demonstrate that residual bootstrap has smaller biases and overall errors, which enables estimation of uncertainties with higher accuracy. Understanding the properties of these bootstrap procedures will help us to choose the optimal approach for estimating uncertainties that can benefit hypothesis testing based on DTI parameters, probabilistic fiber tracking, and optimizing DTI methods.
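The contrast between the resampling schemes can be sketched on a generic regression problem (a hedged Python example; a simple linear model stands in for the weighted least-squares tensor fit used in DTI):

```python
# Hedged sketch: residual bootstrap vs. repetition bootstrap for the standard error of a
# regression slope, on synthetic data with 4 repeated measurements per "direction".
import numpy as np

rng = np.random.default_rng(2)
x = np.tile(np.linspace(0, 1, 10), 4)          # 10 "gradient directions" x 4 repetitions
X = np.column_stack([np.ones_like(x), x])
y = 2.0 - 1.2 * x + rng.normal(scale=0.1, size=x.size)

beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
fitted = X @ beta_hat
resid = y - fitted

def slope(Xb, yb):
    return np.linalg.lstsq(Xb, yb, rcond=None)[0][1]

# residual bootstrap: resample regression residuals (works with a single acquisition)
res_boot = [slope(X, fitted + rng.choice(resid, size=resid.size, replace=True))
            for _ in range(2000)]

# repetition bootstrap: resample among the 4 repeated measurements of each direction
y_rep = y.reshape(4, 10)                       # rows = repetitions, columns = directions
rep_boot = []
for _ in range(2000):
    picks = rng.integers(0, 4, size=(4, 10))   # resample repetitions per direction
    yb = y_rep[picks, np.arange(10)]
    rep_boot.append(slope(X, yb.reshape(-1)))

print("residual-bootstrap SE  :", round(np.std(res_boot), 4))
print("repetition-bootstrap SE:", round(np.std(rep_boot), 4))
```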
Hellström, Åke; Rammsayer, Thomas H
2015-10-01
Studies have shown that the discriminability of successive time intervals depends on the presentation order of the standard (St) and the comparison (Co) stimuli. Also, this order affects the point of subjective equality. The first effect is here called the standard-position effect (SPE); the latter is known as the time-order error. In the present study, we investigated how these two effects vary across interval types and standard durations, using Hellström's sensation-weighting model to describe the results and relate them to stimulus comparison mechanisms. In Experiment 1, four modes of interval presentation were used, factorially combining interval type (filled, empty) and sensory modality (auditory, visual). For each mode, two presentation orders (St-Co, Co-St) and two standard durations (100 ms, 1,000 ms) were used; half of the participants received correctness feedback, and half of them did not. The interstimulus interval was 900 ms. The SPEs were negative (i.e., a smaller difference limen for St-Co than for Co-St), except for the filled-auditory and empty-visual 100-ms standards, for which a positive effect was obtained. In Experiment 2, duration discrimination was investigated for filled auditory intervals with four standards between 100 and 1,000 ms, an interstimulus interval of 900 ms, and no feedback. Standard duration interacted with presentation order, here yielding SPEs that were negative for standards of 100 and 1,000 ms, but positive for 215 and 464 ms. Our findings indicate that the SPE can be positive as well as negative, depending on the interval type and standard duration, reflecting the relative weighting of the stimulus information, as is described by the sensation-weighting model.
Error-Analysis for Correctness, Effectiveness, and Composing Procedure.
ERIC Educational Resources Information Center
Ewald, Helen Rothschild
The assumptions underpinning grammatical mistakes can often be detected by looking for patterns of errors in a student's work. Assumptions that negatively influence rhetorical effectiveness can similarly be detected through error analysis. On a smaller scale, error analysis can also reveal assumptions affecting rhetorical choice. Snags in the…
Errors Affect Hypothetical Intertemporal Food Choice in Women
Sellitto, Manuela; di Pellegrino, Giuseppe
2014-01-01
Growing evidence suggests that the ability to control behavior is enhanced in contexts in which errors are more frequent. Here we investigated whether pairing desirable food with errors could decrease impulsive choice during hypothetical temporal decisions about food. To this end, healthy women performed a Stop-signal task in which one food cue predicted high-error rate, and another food cue predicted low-error rate. Afterwards, we measured participants’ intertemporal preferences during decisions between smaller-immediate and larger-delayed amounts of food. We expected reduced sensitivity to smaller-immediate amounts of food associated with high-error rate. Moreover, taking into account that deprivational states affect sensitivity for food, we controlled for participants’ hunger. Results showed that pairing food with high-error likelihood decreased temporal discounting. This effect was modulated by hunger, indicating that, the lower the hunger level, the more participants showed reduced impulsive preference for the food previously associated with a high number of errors as compared with the other food. These findings reveal that errors, which are motivationally salient events that recruit cognitive control and drive avoidance learning against error-prone behavior, are effective in reducing impulsive choice for edible outcomes. PMID:25244534
Pain, Liza A M; Baker, Ross; Sohail, Qazi Zain; Richardson, Denyse; Zabjek, Karl; Mogk, Jeremy P M; Agur, Anne M R
2018-03-23
Altered three-dimensional (3D) joint kinematics can contribute to shoulder pathology, including post-stroke shoulder pain. Reliable assessment methods enable comparative studies between asymptomatic shoulders of healthy subjects and painful shoulders of post-stroke subjects, and could inform treatment planning for post-stroke shoulder pain. The study purpose was to establish intra-rater test-retest reliability and within-subject repeatability of a palpation/digitization protocol, which assesses 3D clavicular/scapular/humeral rotations, in asymptomatic and painful post-stroke shoulders. Repeated measurements of 3D clavicular/scapular/humeral joint/segment rotations were obtained using palpation/digitization in 32 asymptomatic and six painful post-stroke shoulders during four reaching postures (rest/flexion/abduction/external rotation). Intra-class correlation coefficients (ICCs), standard error of the measurement and 95% confidence intervals were calculated. All ICC values indicated high to very high test-retest reliability (≥0.70), with lower reliability for scapular anterior/posterior tilt during external rotation in asymptomatic subjects, and scapular medial/lateral rotation, humeral horizontal abduction/adduction and axial rotation during abduction in post-stroke subjects. All standard error of measurement values demonstrated within-subject repeatability error ≤5° for all clavicular/scapular/humeral joint/segment rotations (asymptomatic ≤3.75°; post-stroke ≤5.0°), except for humeral axial rotation (asymptomatic ≤5°; post-stroke ≤15°). This noninvasive, clinically feasible palpation/digitization protocol was reliable and repeatable in asymptomatic shoulders, and in a smaller sample of painful post-stroke shoulders. Implications for Rehabilitation: In the clinical setting, a reliable and repeatable noninvasive method for assessment of three-dimensional (3D) clavicular/scapular/humeral joint orientation and range of motion (ROM) is currently required. The established reliability and repeatability of this proposed palpation/digitization protocol will enable comparative 3D ROM studies between asymptomatic and post-stroke shoulders, which will further inform treatment planning. Intra-rater test-retest repeatability, which is measured by the standard error of the measure, indicates the range of error associated with a single test measure. Therefore, clinicians can use the standard error of the measure to determine the "true" differences between pre-treatment and post-treatment test scores.
[Evaluation of accuracy of virtual occlusal definition in Angle class I molar relationship].
Wu, L; Liu, X J; Li, Z L; Wang, X
2018-02-18
To evaluate the accuracy of virtual occlusal definition in non-Angle class I molar relationships, and to evaluate its clinical feasibility. Twenty pairs of models of orthognathic patients were included in this study. The inclusion criteria were: (1) finished with pre-surgical orthodontic treatment and (2) stable final occlusion. The exclusion criteria were: (1) existence of distorted teeth, (2) needs for segmentation, (3) defect of dentition except for orthodontic extraction ones, and (4) existence of tooth space. The tooth-extracted test group included 10 models with two premolars extracted during preoperative orthodontic treatment. Their molar relationships were not Angle class I. The non-tooth-extracted test group included another 10 models without teeth extracted; therefore, their molar relationships were Angle class I. To define the final occlusion in a virtual environment, two steps were included: (1) the morphology data of the upper and lower dentition were digitalized by a surface scanner (Smart Optics/Activity 102; Model-Tray GmbH, Hamburg, Germany); (2) the virtual relationships were defined using 3Shape software. The control standard of final occlusion was manually defined using gypsum models and then digitalized by the surface scanner. The final occlusion of the test groups and the control standard were overlapped according to lower dentition morphology. Errors were evaluated by calculating the distance between the corresponding reference points of the test group and control standard locations. The overall errors for the upper dentition between the test groups and the control standard location were (0.51±0.18) mm in the non-tooth-extracted test group and (0.60±0.36) mm in the tooth-extracted test group. The errors were significantly different between these two test groups (P<0.05). However, in both test groups, the errors of each tooth in a single dentition did not differ from one another. There was no significant difference between the errors in the tooth-extracted test group and 1 mm (P>0.05), whereas the error of the non-tooth-extracted group was significantly smaller than 1 mm (P<0.05). The error of virtual occlusal definition for non-class I molar relationships is higher than that for class I relationships, with an accuracy of about 1 mm. However, its accuracy is still feasible for clinical application.
Delay compensation - Its effect in reducing sampling errors in Fourier spectroscopy
NASA Technical Reports Server (NTRS)
Zachor, A. S.; Aaronson, S. M.
1979-01-01
An approximate formula is derived for the spectrum ghosts caused by periodic drive speed variations in a Michelson interferometer. The solution represents the case of fringe-controlled sampling and is applicable when the reference fringes are delayed to compensate for the delay introduced by the electrical filter in the signal channel. Numerical results are worked out for several common low-pass filters. It is shown that the maximum relative ghost amplitude over the range of frequencies corresponding to the lower half of the filter band is typically 20 times smaller than the relative zero-to-peak velocity error, when delayed sampling is used. In the lowest quarter of the filter band it is more than 100 times smaller than the relative velocity error. These values are ten and forty times smaller, respectively, than they would be without delay compensation if the filter is a 6-pole Butterworth.
[Development of Micro-Spectrometer with a Function of Timely Temperature Compensation].
Bao, Jian-guang; Liu, Zheng-kun; Chen, Huo-yao; Lin, Ji-ping; Fu, Shao-jun
2015-05-01
Temperature drift affects the micro-spectrometer used to demodulate the varied line-space (VLS) grating position sensor on aircraft because of high-low temperature shock. We built a micro-spectrometer for the VLS grating position sensor on aircraft that maintains stable output in a temperature-shock environment. To devise a real-time temperature compensation scheme, the effects of temperature change on the micro-spectrometer were analyzed and the traditional crossed Czerny-Turner (C-T) optical structure was optimized. Both optical structures were analyzed with the optical design software ZEMAX, which showed that, compared with the traditional crossed C-T structure, the new one achieves not only a smaller spectrum drift but also a drift with better linearity. Based on the new optical structure, a scheme using a reference wavelength for real-time temperature compensation was proposed, and a micro fiber spectrometer was manufactured with a volume of 80 mm × 70 mm × 70 mm, an integration time of 8-1,000 ms, and a full width at half maximum (FWHM) of 2 nm. Experiments show that the new spectrometer meets the design requirements. Under high-temperature changes of nearly 60 °C, the standard error of wavelength of the new spectrometer is smaller than 0.1 nm, and the maximum wavelength error is 0.14 nm, much smaller than the required 0.3 nm. The innovations of this paper are the real-time temperature compensation scheme, the new crossed C-T optical structure, and the micro fiber spectrometer based on them.
Saturation of the anisoplanatic error in horizontal imaging scenarios
NASA Astrophysics Data System (ADS)
Beck, Jeffrey; Bos, Jeremy P.
2017-09-01
We evaluate the piston-removed anisoplanatic error for smaller apertures imaging over long horizontal paths. Previous works have shown that the piston and tilt compensated anisoplanatic error saturates to values less than one squared radian. Under these conditions the definition of the isoplanatic angle is unclear. These works focused on nadir pointing telescope systems with aperture sizes between five meters and one half meter. We directly extend this work to horizontal imaging scenarios with aperture sizes smaller than one half meter. We assume turbulence is constant along the imaging path and that the ratio of the aperture size to the atmospheric coherence length is on the order of unity.
Magnitude of pseudopotential localization errors in fixed node diffusion quantum Monte Carlo
Kent, Paul R.; Krogel, Jaron T.
2017-06-22
Growth in computational resources has led to the application of real space diffusion quantum Monte Carlo to increasingly heavy elements. Although generally assumed to be small, we find that when using standard techniques, the pseudopotential localization error can be large, on the order of an electron volt for an isolated cerium atom. We formally show that the localization error can be reduced to zero with improvements to the Jastrow factor alone, and we define a metric of Jastrow sensitivity that may be useful in the design of pseudopotentials. We employ an extrapolation scheme to extract the bare fixed node energy and estimate the localization error in both the locality approximation and the T-moves schemes for the Ce atom in charge states 3+/4+. The locality approximation exhibits the lowest Jastrow sensitivity and generally smaller localization errors than T-moves although the locality approximation energy approaches the localization free limit from above/below for the 3+/4+ charge state. We find that energy minimized Jastrow factors including three-body electron-electron-ion terms are the most effective at reducing the localization error for both the locality approximation and T-moves for the case of the Ce atom. Less complex or variance minimized Jastrows are generally less effective. Finally, our results suggest that further improvements to Jastrow factors and trial wavefunction forms may be needed to reduce localization errors to chemical accuracy when medium core pseudopotentials are applied to heavy elements such as Ce.
Lamadrid-Figueroa, Héctor; Téllez-Rojo, Martha M; Angeles, Gustavo; Hernández-Ávila, Mauricio; Hu, Howard
2011-01-01
In-vivo measurement of bone lead by means of K-X-ray fluorescence (KXRF) is the preferred biological marker of chronic exposure to lead. Unfortunately, considerable measurement error associated with KXRF estimations can introduce bias in estimates of the effect of bone lead when this variable is included as the exposure in a regression model. Estimates of uncertainty reported by the KXRF instrument reflect the variance of the measurement error and, although they can be used to correct the measurement error bias, they are seldom used in epidemiological statistical analyses. Errors-in-variables regression (EIV) allows for correction of bias caused by measurement error in predictor variables, based on the knowledge of the reliability of such variables. The authors propose a way to obtain reliability coefficients for bone lead measurements from uncertainty data reported by the KXRF instrument and compare, by the use of Monte Carlo simulations, results obtained using EIV regression models vs. those obtained by the standard procedures. Results of the simulations show that Ordinary Least Square (OLS) regression models provide severely biased estimates of effect, and that EIV provides nearly unbiased estimates. Although EIV effect estimates are more imprecise, their mean squared error is much smaller than that of OLS estimates. In conclusion, EIV is a better alternative than OLS to estimate the effect of bone lead when measured by KXRF. Copyright © 2010 Elsevier Inc. All rights reserved.
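The reliability-based correction can be sketched as follows (a hedged simulation with illustrative values, not KXRF data):

```python
# Hedged sketch: attenuation of an OLS slope when the exposure is measured with error,
# and the errors-in-variables style correction using a reliability coefficient
# (reliability = var(true exposure) / var(measured exposure)).
import numpy as np

rng = np.random.default_rng(9)
n = 2000
bone_pb_true = rng.normal(15.0, 8.0, n)                 # "true" bone lead
meas_sd = 6.0                                           # per-measurement uncertainty
bone_pb_meas = bone_pb_true + rng.normal(0.0, meas_sd, n)
outcome = 0.30 * bone_pb_true + rng.normal(0.0, 2.0, n)

# naive OLS with the error-prone exposure
slope_ols = np.cov(bone_pb_meas, outcome)[0, 1] / np.var(bone_pb_meas, ddof=1)

# reliability from the known measurement uncertainty, then the corrected slope
reliability = 8.0 ** 2 / (8.0 ** 2 + meas_sd ** 2)
slope_eiv = slope_ols / reliability

print(f"true slope 0.30 | OLS {slope_ols:.3f} | EIV-corrected {slope_eiv:.3f}")
```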
Tutorial on Biostatistics: Linear Regression Analysis of Continuous Correlated Eye Data.
Ying, Gui-Shuang; Maguire, Maureen G; Glynn, Robert; Rosner, Bernard
2017-04-01
To describe and demonstrate appropriate linear regression methods for analyzing correlated continuous eye data. We describe several approaches to regression analysis involving both eyes, including mixed effects and marginal models under various covariance structures to account for inter-eye correlation. We demonstrate, with SAS statistical software, applications in a study comparing baseline refractive error between one eye with choroidal neovascularization (CNV) and the unaffected fellow eye, and in a study determining factors associated with visual field in the elderly. When refractive error from both eyes was analyzed with standard linear regression without accounting for inter-eye correlation (adjusting for demographic and ocular covariates), the difference between eyes with CNV and fellow eyes was 0.15 diopters (D; 95% confidence interval, CI -0.03 to 0.32D, p = 0.10). Using a mixed effects model or a marginal model, the estimated difference was the same but with a narrower 95% CI (0.01 to 0.28D, p = 0.03). Standard regression for visual field data from both eyes provided biased estimates of standard error (generally underestimated) and smaller p-values, while analysis of the worse eye provided larger p-values than mixed effects models and marginal models. In research involving both eyes, ignoring inter-eye correlation can lead to invalid inferences. Analysis using only right or left eyes is valid, but decreases power. Worse-eye analysis can provide less power and biased estimates of effect. Mixed effects or marginal models using the eye as the unit of analysis should be used to appropriately account for inter-eye correlation and maximize power and precision.
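A marginal-model analysis of paired-eye data can be sketched with statsmodels GEE (a hedged example on simulated data; variable names are illustrative and the original work used SAS):

```python
# Hedged sketch (statsmodels): a marginal model for paired-eye data via GEE with an
# exchangeable working correlation, on simulated data with two eyes per patient.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(4)
n_pat = 150
patient = np.repeat(np.arange(n_pat), 2)               # two eyes per patient
cnv_eye = np.tile([1, 0], n_pat)                       # one CNV eye, one fellow eye
age = np.repeat(rng.normal(70, 8, n_pat), 2)
pat_effect = np.repeat(rng.normal(0, 1.0, n_pat), 2)   # shared within-patient component
refraction = (0.15 * cnv_eye - 0.02 * (age - 70)
              + pat_effect + rng.normal(0, 0.8, 2 * n_pat))

df = pd.DataFrame({"refraction": refraction, "cnv_eye": cnv_eye,
                   "age": age, "patient": patient})

gee = smf.gee("refraction ~ cnv_eye + age", groups="patient", data=df,
              cov_struct=sm.cov_struct.Exchangeable())
print(gee.fit().summary())
```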
Cost-effectiveness of the stream-gaging program in Maine; a prototype for nationwide implementation
Fontaine, Richard A.; Moss, M.E.; Smath, J.A.; Thomas, W.O.
1984-01-01
This report documents the results of a cost-effectiveness study of the stream-gaging program in Maine. Data uses and funding sources were identified for the 51 continuous stream gages currently being operated in Maine with a budget of $211,000. Three stream gages were identified as producing data no longer sufficiently needed to warrant continuing their operation. Operation of these stations should be discontinued. Data collected at three other stations were identified as having uses specific only to short-term studies; it is recommended that these stations be discontinued at the end of the data-collection phases of the studies. The remaining 45 stations should be maintained in the program for the foreseeable future. The current policy for operation of the 45-station program would require a budget of $180,300 per year. The average standard error of estimation of streamflow records is 17.7 percent. It was shown that this overall level of accuracy at the 45 sites could be maintained with a budget of approximately $170,000 if resources were redistributed among the gages. A minimum budget of $155,000 is required to operate the 45-gage program; a smaller budget would not permit proper service and maintenance of the gages and recorders. At the minimum budget, the average standard error is 25.1 percent. The maximum budget analyzed was $350,000, which resulted in an average standard error of 8.7 percent. Large parts of Maine's interior were identified as having sparse streamflow data. It was recommended that this sparsity be remedied as funds become available.
NASA Astrophysics Data System (ADS)
Zhang, Yi
2018-01-01
This study extends a set of unstructured third/fourth-order flux operators on spherical icosahedral grids from two perspectives. First, the fifth-order and sixth-order flux operators of this kind are further extended, and the nominally second-order to sixth-order operators are then compared based on the solid body rotation and deformational flow tests. Results show that increasing the nominal order generally leads to smaller absolute errors. Overall, the standard fifth-order scheme generates the smallest errors in limited and unlimited tests, although it does not enhance the convergence rate. Even-order operators show higher limiter sensitivity than the odd-order operators. Second, a triangular version of these high-order operators is repurposed for transporting the potential vorticity in a space-time-split shallow water framework. Results show that a class of nominally third-order upwind-biased operators generates better results than second-order and fourth-order counterparts. The increase of the potential enstrophy over time is suppressed owing to the damping effect. The grid-scale noise in the vorticity is largely alleviated, and the total energy remains conserved. Moreover, models using high-order operators show smaller numerical errors in the vorticity field because of a more accurate representation of the nonlinear Coriolis term. This improvement is especially evident in the Rossby-Haurwitz wave test, in which the fluid is highly rotating. Overall, high-order flux operators with higher damping coefficients, which essentially behave like the Anticipated Potential Vorticity Method, present better results.
The use of mini-samples in palaeomagnetism
NASA Astrophysics Data System (ADS)
Böhnel, Harald; Michalk, Daniel; Nowaczyk, Norbert; Naranjo, Gildardo Gonzalez
2009-10-01
Rock cores of ~25 mm diameter are widely used in palaeomagnetism. Occasionally, smaller diameters have been used as well, which presents distinct advantages in terms of throughput, weight of equipment and core collections. How their orientation precision compares to 25 mm cores, however, has not been evaluated in detail before. Here we compare the site mean directions and their statistical parameters for 12 lava flows sampled with 25 mm cores (standard samples, typically 8 cores per site) and with 12 mm drill cores (mini-samples, typically 14 cores per site). The site-mean directions for both sample sizes appear to be indistinguishable in most cases. For the mini-samples, site dispersion parameters k on average are slightly lower than for the standard samples, reflecting their larger orienting and measurement errors. Applying the Wilcoxon signed-rank test, the probability that k or α95 has the same distribution for both sizes is acceptable only at the 17.4 or 66.3 per cent level, respectively. The larger number of mini-cores per site appears to outweigh the lower k values, also yielding slightly smaller confidence limits α95. Further, both k and α95 are less variable for mini-samples than for standard size samples. This is also interpreted to result from the larger number of mini-samples per site, which better averages out the detrimental effect of undetected abnormal remanence directions. Sampling of volcanic rocks with mini-samples therefore does not present a disadvantage in terms of the overall obtainable uncertainty of site mean directions. Apart from this, mini-samples do present clear advantages during the field work, as about twice the number of drill cores can be recovered compared to 25 mm cores, and the sampled rock unit is then more widely covered, which reduces the contribution of natural random errors produced, for example, by fractures, cooling joints, and palaeofield inhomogeneities. Mini-samples may be processed faster in the laboratory, which is of particular advantage when carrying out palaeointensity experiments.
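For readers unfamiliar with the two site statistics quoted above, the standard Fisher (1953) expressions are sketched below; these are textbook approximations, not formulas taken from this paper. N is the number of cores at a site and R the length of the resultant of their N unit direction vectors.

```latex
k \approx \frac{N-1}{N-R}, \qquad
\alpha_{95} \approx \frac{140^{\circ}}{\sqrt{kN}}
```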
Assessment of ecologic regression in the study of lung cancer and indoor radon.
Stidley, C A; Samet, J M
1994-02-01
Ecologic regression studies conducted to assess the cancer risk of indoor radon to the general population are subject to methodological limitations, and they have given seemingly contradictory results. The authors use simulations to examine the effects of two major methodological problems that affect these studies: measurement error and misspecification of the risk model. In a simulation study of the effect of measurement error caused by the sampling process used to estimate radon exposure for a geographic unit, both the effect of radon and the standard error of the effect estimate were underestimated, with greater bias for smaller sample sizes. In another simulation study, which addressed the consequences of uncontrolled confounding by cigarette smoking, even small negative correlations between county geometric mean annual radon exposure and the proportion of smokers resulted in negative average estimates of the radon effect. A third study considered consequences of using simple linear ecologic models when the true underlying model relation between lung cancer and radon exposure is nonlinear. These examples quantify potential biases and demonstrate the limitations of estimating risks from ecologic studies of lung cancer and indoor radon.
Two ultraviolet radiation datasets that cover China
NASA Astrophysics Data System (ADS)
Liu, Hui; Hu, Bo; Wang, Yuesi; Liu, Guangren; Tang, Liqin; Ji, Dongsheng; Bai, Yongfei; Bao, Weikai; Chen, Xin; Chen, Yunming; Ding, Weixin; Han, Xiaozeng; He, Fei; Huang, Hui; Huang, Zhenying; Li, Xinrong; Li, Yan; Liu, Wenzhao; Lin, Luxiang; Ouyang, Zhu; Qin, Boqiang; Shen, Weijun; Shen, Yanjun; Su, Hongxin; Song, Changchun; Sun, Bo; Sun, Song; Wang, Anzhi; Wang, Genxu; Wang, Huimin; Wang, Silong; Wang, Youshao; Wei, Wenxue; Xie, Ping; Xie, Zongqiang; Yan, Xiaoyuan; Zeng, Fanjiang; Zhang, Fawei; Zhang, Yangjian; Zhang, Yiping; Zhao, Chengyi; Zhao, Wenzhi; Zhao, Xueyong; Zhou, Guoyi; Zhu, Bo
2017-07-01
Ultraviolet (UV) radiation has significant effects on ecosystems, environments, and human health, as well as atmospheric processes and climate change. Two ultraviolet radiation datasets are described in this paper. One contains hourly observations of UV radiation measured at 40 Chinese Ecosystem Research Network stations from 2005 to 2015. CUV3 broadband radiometers were used to observe the UV radiation, with an accuracy of 5%, which meets the World Meteorological Organization's measurement standards. The extremum method was used to control the quality of the measured datasets. The other dataset contains daily cumulative UV radiation estimates that were calculated using an all-sky estimation model combined with a hybrid model. The reconstructed daily UV radiation data span from 1961 to 2014. The mean absolute bias error and root-mean-square error are smaller than 30% at most stations, and most of the mean bias error values are negative, which indicates underestimation of the UV radiation intensity. These datasets can improve our basic knowledge of the spatial and temporal variations in UV radiation. Additionally, these datasets can be used in studies of potential ozone formation and atmospheric oxidation, as well as simulations of ecological processes.
Janet, Jon Paul; Kulik, Heather J
2017-11-22
Machine learning (ML) of quantum mechanical properties shows promise for accelerating chemical discovery. For transition metal chemistry where accurate calculations are computationally costly and available training data sets are small, the molecular representation becomes a critical ingredient in ML model predictive accuracy. We introduce a series of revised autocorrelation functions (RACs) that encode relationships of the heuristic atomic properties (e.g., size, connectivity, and electronegativity) on a molecular graph. We alter the starting point, scope, and nature of the quantities evaluated in standard ACs to make these RACs amenable to inorganic chemistry. On an organic molecule set, we first demonstrate superior standard AC performance to other presently available topological descriptors for ML model training, with mean unsigned errors (MUEs) for atomization energies on set-aside test molecules as low as 6 kcal/mol. For inorganic chemistry, our RACs yield 1 kcal/mol ML MUEs on set-aside test molecules in spin-state splitting in comparison to 15-20× higher errors for feature sets that encode whole-molecule structural information. Systematic feature selection methods including univariate filtering, recursive feature elimination, and direct optimization (e.g., random forest and LASSO) are compared. Random-forest- or LASSO-selected subsets 4-5× smaller than the full RAC set produce sub- to 1 kcal/mol spin-splitting MUEs, with good transferability to metal-ligand bond length prediction (0.004-5 Å MUE) and redox potential on a smaller data set (0.2-0.3 eV MUE). Evaluation of feature selection results across property sets reveals the relative importance of local, electronic descriptors (e.g., electronegativity, atomic number) in spin-splitting and distal, steric effects in redox potential and bond lengths.
Furlan, Leonardo; Sterr, Annette
2018-01-01
Motor learning studies face the challenge of differentiating between real changes in performance and random measurement error. While the traditional p-value-based analyses of difference (e.g., t-tests, ANOVAs) provide information on the statistical significance of a reported change in performance scores, they do not inform as to the likely cause or origin of that change, that is, the contribution of both real modifications in performance and random measurement error to the reported change. One way of differentiating between real change and random measurement error is through the utilization of the statistics of standard error of measurement (SEM) and minimal detectable change (MDC). SEM is estimated from the standard deviation of a sample of scores at baseline and a test-retest reliability index of the measurement instrument or test employed. MDC, in turn, is estimated from SEM and a degree of confidence, usually 95%. The MDC value might be regarded as the minimum amount of change that needs to be observed for it to be considered a real change, or a change to which the contribution of real modifications in performance is likely to be greater than that of random measurement error. A computer-based motor task was designed to illustrate the applicability of SEM and MDC to motor learning research. Two studies were conducted with healthy participants. Study 1 assessed the test-retest reliability of the task and Study 2 consisted of a typical motor learning study, in which participants practiced the task for five consecutive days. In Study 2, the data were analyzed with a traditional p-value-based analysis of difference (ANOVA) and also with SEM and MDC. The findings showed good test-retest reliability for the task and that the p-value-based analysis alone identified statistically significant improvements in performance over time even when the observed changes could in fact have been smaller than the MDC and thereby caused mostly by random measurement error, as opposed to by learning. We suggest therefore that motor learning studies could complement their p-value-based analyses of difference with statistics such as SEM and MDC in order to inform as to the likely cause or origin of any reported changes in performance.
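The two statistics described above have widely used closed forms, sketched below; r denotes the test-retest reliability index (e.g., an ICC) and SD_baseline the standard deviation of the baseline scores. These are the conventional textbook expressions, not necessarily the exact variants used by the authors.

```latex
\mathrm{SEM} = SD_{\text{baseline}}\sqrt{1-r}, \qquad
\mathrm{MDC}_{95} = 1.96 \times \sqrt{2} \times \mathrm{SEM}
```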
Smooth empirical Bayes estimation of observation error variances in linear systems
NASA Technical Reports Server (NTRS)
Martz, H. F., Jr.; Lian, M. W.
1972-01-01
A smooth empirical Bayes estimator was developed for estimating the unknown random scale component of each of a set of observation error variances. It is shown that the estimator possesses a smaller average squared error loss than other estimators for a discrete time linear system.
On the consistency of QCBED structure factor measurements for TiO2 (Rutile)
Jiang, Bin; Zuo, Jian-Min; Friis, Jesper; ...
2003-09-16
The same Bragg reflection in TiO2 from twelve different CBED patterns (from different crystals, orientations and thicknesses) is analysed quantitatively in order to evaluate the consistency of the QCBED method for bond-charge mapping. The standard deviation in the resulting distribution of derived X-ray structure factors is found to be an order of magnitude smaller than that in conventional X-ray work, and the standard error (0.026% for FX(110)) is slightly better than obtained by the X-ray Pendellösung method applied to silicon. This accuracy is sufficient to distinguish between atomic, covalent and ionic models of bonding. We describe the importance of extracting experimental parameters from CCD camera characterization, and of surface oxidation and crystal shape. Thus, the current experiments show that the QCBED method is now a robust and powerful tool for low order structure factor measurement, which does not suffer from the large extinction (multiple scattering) errors which occur in inorganic X-ray crystallography, and may be applied to nanocrystals. Our results will be used to understand the role of d electrons in the chemical bonding of TiO2.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hoffman, D; Dyer, B; Kumaran Nair, C
Purpose: The Integral Quality Monitor (IQM), developed by iRT Systems GmbH (Koblenz, Germany) is a large-area, linac-mounted ion chamber used to monitor photon fluence during patient treatment. Our previous work evaluated the change of the ion chamber's response to deviations from static 1×1 cm² and 10×10 cm² photon beams and other characteristics integral to use in external beam detection. The aim of this work is to simulate two external beam radiation delivery errors, quantify the detection of simulated errors and evaluate the reduction in patient harm resulting from detection. Methods: Two well documented radiation oncology delivery errors were selected for simulation. The first error was recreated by modifying a wedged whole breast treatment, removing the physical wedge and calculating the planned dose with Pinnacle TPS (Philips Radiation Oncology Systems, Fitchburg, WI). The second error was recreated by modifying a static-gantry IMRT pharyngeal tonsil plan to be delivered in 3 unmodulated fractions. A radiation oncologist evaluated the dose for simulated errors and predicted morbidity and mortality commensurate with the original reported toxicity, indicating that reported errors were approximately simulated. The ion chamber signal of unmodified treatments was compared to the simulated error signal and evaluated in Pinnacle TPS, again with radiation oncologist prediction of simulated patient harm. Results: Previous work established that transmission detector system measurements are stable within 0.5% standard deviation (SD). Errors causing signal change greater than 20 SD (10%) were considered detected. The whole breast and pharyngeal tonsil IMRT simulated errors increased signal by 215% and 969%, respectively, indicating error detection after the first fraction and IMRT segment, respectively. Conclusion: The transmission detector system demonstrated utility in detecting clinically significant errors and reducing patient toxicity/harm in simulated external beam delivery. Future work will evaluate detection of other smaller magnitude delivery errors.
Kriging with Unknown Variance Components for Regional Ionospheric Reconstruction.
Huang, Ling; Zhang, Hongping; Xu, Peiliang; Geng, Jianghui; Wang, Cheng; Liu, Jingnan
2017-02-27
Ionospheric delay effect is a critical issue that limits the accuracy of precise Global Navigation Satellite System (GNSS) positioning and navigation for single-frequency users, especially in mid- and low-latitude regions where variations in the ionosphere are larger. Kriging spatial interpolation techniques have been recently introduced to model the spatial correlation and variability of ionosphere, which intrinsically assume that the ionosphere field is stochastically stationary but does not take the random observational errors into account. In this paper, by treating the spatial statistical information on ionosphere as prior knowledge and based on Total Electron Content (TEC) semivariogram analysis, we use Kriging techniques to spatially interpolate TEC values. By assuming that the stochastic models of both the ionospheric signals and measurement errors are only known up to some unknown factors, we propose a new Kriging spatial interpolation method with unknown variance components for both the signals of ionosphere and TEC measurements. Variance component estimation has been integrated with Kriging to reconstruct regional ionospheric delays. The method has been applied to data from the Crustal Movement Observation Network of China (CMONOC) and compared with the ordinary Kriging and polynomial interpolations with spherical cap harmonic functions, polynomial functions and low-degree spherical harmonic functions. The statistics of results indicate that the daily ionospheric variations during the experimental period characterized by the proposed approach have good agreement with the other methods, ranging from 10 to 80 TEC Unit (TECU, 1 TECU = 1 × 10¹⁶ electrons/m²) with an overall mean of 28.2 TECU. The proposed method can produce more appropriate estimations whose general TEC level is as smooth as the ordinary Kriging but with a smaller standard deviation around 3 TECU than others. The residual results show that the interpolation precision of the new proposed method is better than the ordinary Kriging and polynomial interpolation by about 1.2 TECU and 0.7 TECU, respectively. The root mean squared error of the proposed new Kriging with variance components is within 1.5 TECU and is smaller than those from other methods under comparison by about 1 TECU. When compared with ionospheric grid points, the mean squared error of the proposed method is within 6 TECU and smaller than Kriging, indicating that the proposed method can produce more accurate ionospheric delays and better estimation accuracy over China regional area.
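As a rough illustration of the interpolation step discussed above (without the variance-component estimation that is the paper's contribution), the sketch below implements plain ordinary kriging in Python; the semivariogram model and all parameter values are hypothetical.

```python
# Minimal ordinary-kriging sketch (illustrative only; not the paper's method).
import numpy as np

def spherical_gamma(h, nugget=1.0, sill=25.0, rng=1500.0):
    """Spherical semivariogram; parameters are hypothetical TECU^2 / km values."""
    h = np.asarray(h, dtype=float)
    g = nugget + (sill - nugget) * (1.5 * h / rng - 0.5 * (h / rng) ** 3)
    return np.where(h == 0.0, 0.0, np.where(h >= rng, sill, g))

def ordinary_krige(xy, z, xy0):
    """Predict the value at xy0 from observations z at coordinates xy (km)."""
    n = len(z)
    d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)   # pairwise distances
    A = np.zeros((n + 1, n + 1))
    A[:n, :n] = spherical_gamma(d)
    A[:n, n] = A[n, :n] = 1.0                                      # unbiasedness constraint
    b = np.append(spherical_gamma(np.linalg.norm(xy - xy0, axis=1)), 1.0)
    w = np.linalg.solve(A, b)[:n]                                  # kriging weights
    return w @ z

xy = np.array([[0.0, 0.0], [100.0, 0.0], [0.0, 150.0], [120.0, 130.0]])
z = np.array([25.0, 28.0, 22.0, 30.0])                             # TEC values (TECU)
print(ordinary_krige(xy, z, np.array([60.0, 70.0])))
```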
Kriging with Unknown Variance Components for Regional Ionospheric Reconstruction
Huang, Ling; Zhang, Hongping; Xu, Peiliang; Geng, Jianghui; Wang, Cheng; Liu, Jingnan
2017-01-01
Ionospheric delay effect is a critical issue that limits the accuracy of precise Global Navigation Satellite System (GNSS) positioning and navigation for single-frequency users, especially in mid- and low-latitude regions where variations in the ionosphere are larger. Kriging spatial interpolation techniques have been recently introduced to model the spatial correlation and variability of ionosphere, which intrinsically assume that the ionosphere field is stochastically stationary but does not take the random observational errors into account. In this paper, by treating the spatial statistical information on ionosphere as prior knowledge and based on Total Electron Content (TEC) semivariogram analysis, we use Kriging techniques to spatially interpolate TEC values. By assuming that the stochastic models of both the ionospheric signals and measurement errors are only known up to some unknown factors, we propose a new Kriging spatial interpolation method with unknown variance components for both the signals of ionosphere and TEC measurements. Variance component estimation has been integrated with Kriging to reconstruct regional ionospheric delays. The method has been applied to data from the Crustal Movement Observation Network of China (CMONOC) and compared with the ordinary Kriging and polynomial interpolations with spherical cap harmonic functions, polynomial functions and low-degree spherical harmonic functions. The statistics of results indicate that the daily ionospheric variations during the experimental period characterized by the proposed approach have good agreement with the other methods, ranging from 10 to 80 TEC Unit (TECU, 1 TECU = 1 × 10¹⁶ electrons/m²) with an overall mean of 28.2 TECU. The proposed method can produce more appropriate estimations whose general TEC level is as smooth as the ordinary Kriging but with a smaller standard deviation around 3 TECU than others. The residual results show that the interpolation precision of the new proposed method is better than the ordinary Kriging and polynomial interpolation by about 1.2 TECU and 0.7 TECU, respectively. The root mean squared error of the proposed new Kriging with variance components is within 1.5 TECU and is smaller than those from other methods under comparison by about 1 TECU. When compared with ionospheric grid points, the mean squared error of the proposed method is within 6 TECU and smaller than Kriging, indicating that the proposed method can produce more accurate ionospheric delays and better estimation accuracy over China regional area. PMID:28264424
Palmer, Tom M; Holmes, Michael V; Keating, Brendan J; Sheehan, Nuala A
2017-01-01
Mendelian randomization studies use genotypes as instrumental variables to test for and estimate the causal effects of modifiable risk factors on outcomes. Two-stage residual inclusion (TSRI) estimators have been used when researchers are willing to make parametric assumptions. However, researchers are currently reporting uncorrected or heteroscedasticity-robust standard errors for these estimates. We compared several different forms of the standard error for linear and logistic TSRI estimates in simulations and in real-data examples. Among others, we consider standard errors modified from the approach of Newey (1987), Terza (2016), and bootstrapping. In our simulations Newey, Terza, bootstrap, and corrected 2-stage least squares (in the linear case) standard errors gave the best results in terms of coverage and type I error. In the real-data examples, the Newey standard errors were 0.5% and 2% larger than the unadjusted standard errors for the linear and logistic TSRI estimators, respectively. We show that TSRI estimators with modified standard errors have correct type I error under the null. Researchers should report TSRI estimates with modified standard errors instead of reporting unadjusted or heteroscedasticity-robust standard errors. PMID:29106476
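To make the two-stage residual inclusion idea concrete, the sketch below runs a linear TSRI on simulated data and attaches a bootstrap standard error, one of the corrected options compared above. It is an illustrative Python sketch with made-up variable names (g, x, y), not the authors' code, and it omits the Newey and Terza corrections.

```python
# Linear two-stage residual inclusion (TSRI) with a bootstrap standard error (sketch).
import numpy as np
import statsmodels.api as sm

def tsri_linear(g, x, y):
    """Stage 1: regress exposure on instrument; Stage 2: include stage-1 residuals."""
    stage1 = sm.OLS(x, sm.add_constant(g)).fit()
    resid = x - stage1.fittedvalues
    stage2 = sm.OLS(y, sm.add_constant(np.column_stack([x, resid]))).fit()
    return stage2.params[1]                        # coefficient on the exposure x

rng = np.random.default_rng(0)
n = 5000
g = rng.binomial(2, 0.3, n)                        # genetic instrument
u = rng.normal(size=n)                             # unobserved confounder
x = 0.5 * g + u + rng.normal(size=n)               # exposure
y = 0.2 * x + u + rng.normal(size=n)               # outcome, true causal effect 0.2

est = tsri_linear(g, x, y)
boot = [tsri_linear(g[i], x[i], y[i])
        for i in (rng.integers(0, n, n) for _ in range(200))]
print(f"TSRI estimate {est:.3f}, bootstrap SE {np.std(boot, ddof=1):.3f}")
```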
The effect of the dynamic wet troposphere on VLBI measurements
NASA Technical Reports Server (NTRS)
Treuhaft, R. N.; Lanyi, G. E.
1986-01-01
Calculations using a statistical model of water vapor fluctuations yield the effect of the dynamic wet troposphere on Very Long Baseline Interferometry (VLBI) measurements. The statistical model arises from two primary assumptions: (1) the spatial structure of refractivity fluctuations can be closely approximated by elementary (Kolmogorov) turbulence theory, and (2) temporal fluctuations are caused by spatial patterns which are moved over a site by the wind. The consequences of these assumptions are outlined for the VLBI delay and delay rate observables. For example, wet troposphere induced rms delays for Deep Space Network (DSN) VLBI at 20-deg elevation are about 3 cm of delay per observation, which is smaller, on the average, than other known error sources in the current DSN VLBI data set. At 20-deg elevation for 200-s time intervals, water vapor induces approximately 1.5 × 10⁻¹³ s/s in the Allan standard deviation of interferometric delay, which is a measure of the delay rate observable error. In contrast to the delay error, the delay rate measurement error is dominated by water vapor fluctuations. Water vapor induced VLBI parameter errors and correlations are calculated. For the DSN, baseline length parameter errors due to water vapor fluctuations are in the range of 3 to 5 cm. The above physical assumptions also lead to a method for including the water vapor fluctuations in the parameter estimation procedure, which is used to extract baseline and source information from the VLBI observables.
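For reference, the Allan variance underlying the delay-rate figure quoted above has the standard definition sketched below (a textbook relation, not a formula taken from this report); here ȳ_k is the delay change divided by τ over the k-th interval of length τ, with τ = 200 s in the example.

```latex
\sigma_y^2(\tau) = \tfrac{1}{2}\,
\left\langle \left( \bar{y}_{k+1} - \bar{y}_{k} \right)^2 \right\rangle ,
\qquad \sigma_y(\tau) = \sqrt{\sigma_y^2(\tau)}
```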
Image Reconstruction for Interferometric Imaging of Geosynchronous Satellites
NASA Astrophysics Data System (ADS)
DeSantis, Zachary J.
Imaging distant objects at a high resolution has always presented a challenge due to the diffraction limit. Larger apertures improve the resolution, but at some point the cost of engineering, building, and correcting phase aberrations of large apertures becomes prohibitive. Interferometric imaging uses the Van Cittert-Zernike theorem to form an image from measurements of spatial coherence. This effectively allows the synthesis of a large aperture from two or more smaller telescopes to improve the resolution. We apply this method to imaging geosynchronous satellites with a ground-based system. Imaging a dim object from the ground presents unique challenges. The atmosphere creates errors in the phase measurements. The measurements are taken simultaneously across a large bandwidth of light. The atmospheric piston error, therefore, manifests as a linear phase error across the spectral measurements. Because the objects are faint, many of the measurements are expected to have a poor signal-to-noise ratio (SNR). This rules out commonly used techniques such as closure phase, a standard technique in astronomical interferometric imaging for making partial phase measurements in the presence of atmospheric error. The bulk of our work has been focused on forming an image, using sub-Nyquist sampled data, in the presence of these linear phase errors without relying on closure phase techniques. We present an image reconstruction algorithm that successfully forms an image in the presence of these linear phase errors. We demonstrate our algorithm's success in both simulation and in laboratory experiments.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kent, Paul R.; Krogel, Jaron T.
Growth in computational resources has led to the application of real space diffusion quantum Monte Carlo to increasingly heavy elements. Although generally assumed to be small, we find that when using standard techniques, the pseudopotential localization error can be large, on the order of an electron volt for an isolated cerium atom. We formally show that the localization error can be reduced to zero with improvements to the Jastrow factor alone, and we define a metric of Jastrow sensitivity that may be useful in the design of pseudopotentials. We employ an extrapolation scheme to extract the bare fixed node energy and estimate the localization error in both the locality approximation and the T-moves schemes for the Ce atom in charge states 3+/4+. The locality approximation exhibits the lowest Jastrow sensitivity and generally smaller localization errors than T-moves although the locality approximation energy approaches the localization free limit from above/below for the 3+/4+ charge state. We find that energy minimized Jastrow factors including three-body electron-electron-ion terms are the most effective at reducing the localization error for both the locality approximation and T-moves for the case of the Ce atom. Less complex or variance minimized Jastrows are generally less effective. Finally, our results suggest that further improvements to Jastrow factors and trial wavefunction forms may be needed to reduce localization errors to chemical accuracy when medium core pseudopotentials are applied to heavy elements such as Ce.
Impact of the HERA I+II combined data on the CT14 QCD global analysis
NASA Astrophysics Data System (ADS)
Dulat, S.; Hou, T.-J.; Gao, J.; Guzzi, M.; Huston, J.; Nadolsky, P.; Pumplin, J.; Schmidt, C.; Stump, D.; Yuan, C.-P.
2016-11-01
A brief description of the impact of the recent HERA run I+II combination of inclusive deep inelastic scattering cross-section data on the CT14 global analysis of PDFs is given. The new CT14HERA2 PDFs at NLO and NNLO are illustrated. They employ the same parametrization used in the CT14 analysis, but with an additional shape parameter for describing the strange quark PDF. The HERA I+II data are reasonably well described by both CT14 and CT14HERA2 PDFs, and differences are smaller than the PDF uncertainties of the standard CT14 analysis. Both sets are acceptable when the error estimates are calculated with the CTEQ-TEA (CT) methodology, and the standard CT14 PDFs are recommended for continued use in the analysis of LHC measurements.
Murad, Havi; Kipnis, Victor; Freedman, Laurence S
2016-10-01
Assessing interactions in linear regression models when covariates have measurement error (ME) is complex. We previously described regression calibration (RC) methods that yield consistent estimators and standard errors for interaction coefficients of normally distributed covariates having classical ME. Here we extend normal based RC (NBRC) and linear RC (LRC) methods to a non-classical ME model, and describe more efficient versions that combine estimates from the main study and internal sub-study. We apply these methods to data from the Observing Protein and Energy Nutrition (OPEN) study. Using simulations we show that (i) for normally distributed covariates efficient NBRC and LRC were nearly unbiased and performed well with sub-study size ≥200; (ii) efficient NBRC had lower MSE than efficient LRC; (iii) the naïve test for a single interaction had type I error probability close to the nominal significance level, whereas efficient NBRC and LRC were slightly anti-conservative but more powerful; (iv) for markedly non-normal covariates, efficient LRC yielded less biased estimators with smaller variance than efficient NBRC. Our simulations suggest that it is preferable to use: (i) efficient NBRC for estimating and testing interaction effects of normally distributed covariates and (ii) efficient LRC for estimating and testing interactions for markedly non-normal covariates. © The Author(s) 2013.
Tutorial on Biostatistics: Linear Regression Analysis of Continuous Correlated Eye Data
Ying, Gui-shuang; Maguire, Maureen G; Glynn, Robert; Rosner, Bernard
2017-01-01
Purpose: To describe and demonstrate appropriate linear regression methods for analyzing correlated continuous eye data. Methods: We describe several approaches to regression analysis involving both eyes, including mixed effects and marginal models under various covariance structures to account for inter-eye correlation. We demonstrate, with SAS statistical software, applications in a study comparing baseline refractive error between one eye with choroidal neovascularization (CNV) and the unaffected fellow eye, and in a study determining factors associated with visual field data in the elderly. Results: When refractive error from both eyes was analyzed with standard linear regression without accounting for inter-eye correlation (adjusting for demographic and ocular covariates), the difference between eyes with CNV and fellow eyes was 0.15 diopters (D; 95% confidence interval, CI −0.03 to 0.32D, P=0.10). Using a mixed effects model or a marginal model, the estimated difference was the same but with narrower 95% CI (0.01 to 0.28D, P=0.03). Standard regression for visual field data from both eyes provided biased estimates of standard error (generally underestimated) and smaller P-values, while analysis of the worse eye provided larger P-values than mixed effects models and marginal models. Conclusion: In research involving both eyes, ignoring inter-eye correlation can lead to invalid inferences. Analysis using only right or left eyes is valid, but decreases power. Worse-eye analysis can provide less power and biased estimates of effect. Mixed effects or marginal models using the eye as the unit of analysis should be used to appropriately account for inter-eye correlation and maximize power and precision. PMID:28102741
Recommendations for diagnosing effective radiative forcing from climate models for CMIP6
NASA Astrophysics Data System (ADS)
Smith, C. J.; Forster, P.; Richardson, T.; Myhre, G.; Pincus, R.
2016-12-01
The usefulness of previous Coupled Model Intercomparison Project (CMIP) exercises has been hampered by a lack of radiative forcing information. This has made it difficult to understand reasons for differences between model responses. Effective radiative forcing (ERF) is easier to diagnose than traditional radiative forcing in global climate models (GCMs) and is more representative of the ultimate climate response. Here we examine the different methods of computing ERF in two GCMs. We find that ERF computed from a fixed sea-surface temperature (SST) method (ERF_fSST) has much more certainty than regression-based methods. Thirty-year integrations are sufficient to reduce the standard error in global ERF to 0.05 W m⁻². For 2×CO2 ERF, 30-year integrations are needed to ensure that the signal is larger than the standard error over more than 90% of the globe. Within the ERF_fSST method there are various options for prescribing SSTs and sea-ice. We explore these and find that ERF is only weakly dependent on the methodological choices. Prescribing the monthly-averaged seasonally varying model's preindustrial climatology is recommended for its smaller random error and easier implementation. As part of CMIP6, the Radiative Forcing Model Intercomparison Project (RFMIP) asks models to conduct 30-year ERF_fSST experiments using the model's own preindustrial climatology of SST and sea-ice. The Aerosol and Chemistry Model intercomparison Project (AerChemMIP) will also mainly use this approach. We propose this as a standard method for diagnosing ERF in models and recommend that it be used across the climate modeling community to aid future comparisons.
Palmer, Tom M; Holmes, Michael V; Keating, Brendan J; Sheehan, Nuala A
2017-11-01
Mendelian randomization studies use genotypes as instrumental variables to test for and estimate the causal effects of modifiable risk factors on outcomes. Two-stage residual inclusion (TSRI) estimators have been used when researchers are willing to make parametric assumptions. However, researchers are currently reporting uncorrected or heteroscedasticity-robust standard errors for these estimates. We compared several different forms of the standard error for linear and logistic TSRI estimates in simulations and in real-data examples. Among others, we consider standard errors modified from the approach of Newey (1987), Terza (2016), and bootstrapping. In our simulations Newey, Terza, bootstrap, and corrected 2-stage least squares (in the linear case) standard errors gave the best results in terms of coverage and type I error. In the real-data examples, the Newey standard errors were 0.5% and 2% larger than the unadjusted standard errors for the linear and logistic TSRI estimators, respectively. We show that TSRI estimators with modified standard errors have correct type I error under the null. Researchers should report TSRI estimates with modified standard errors instead of reporting unadjusted or heteroscedasticity-robust standard errors. © The Author(s) 2017. Published by Oxford University Press on behalf of the Johns Hopkins Bloomberg School of Public Health.
Gravity and isostatic anomaly maps of Greece produced
NASA Astrophysics Data System (ADS)
Lagios, E.; Chailas, S.; Hipkin, R. G.
A gravity anomaly map of Greece was first compiled in the early 1970s [Makris and Stavrou, 1984] from all available gravity data collected by different Hellenic institutions. However, to compose this map the data had to be smoothed to the point that many of the smaller-wavelength gravity anomalies were lost. New work begun in 1987 has resulted in the publication of an updated map [Lagios et al., 1994] and an isostatic anomaly map derived from it. The gravity data cover the area between east longitudes 19° and 27° and north latitudes 32° and 42°, organized in files of 100-km squares and grouped in 10-km squares using UTM zone 34 coordinates. Most of the data on land come from the gravity observations of Makris and Stavrou [1984] with additional data from the Institute of Geology and Mining Exploration, the Public Oil Corporation of Greece, and Athens University. These data were checked using techniques similar to those used in compiling the gravity anomaly map of the United States, but the horizontal gradient was used as a check rather than the gravity difference. Marine data were digitized from the maps of Morelli et al. [1975a, 1975b]. All gravity anomaly values are referred to the IGSN-71 system, reduced with the standard Bouguer density of 2.67 Mg/m³. We estimate the errors of the anomalies in the continental part of Greece to be ±0.9 mGal; this is expected to be smaller over fairly flat regions. For stations whose height has been determined by leveling, the error is only ±0.3 mGal. For the marine areas, the errors are about ±5 mGal [Morelli, 1990].
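For context on the reduction density quoted above, the standard Bouguer slab correction is sketched below (a textbook relation, not a detail taken from this report); h is the station height above the reference level and G the gravitational constant.

```latex
\delta g_{B} = 2\pi G \rho h
\;\approx\; 0.1119\;\mathrm{mGal\,m^{-1}} \times h
\quad\text{for } \rho = 2.67\;\mathrm{Mg\,m^{-3}}
```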
Nichols, Jennifer A; Roach, Koren E; Fiorentino, Niccolo M; Anderson, Andrew E
2016-09-01
Evidence suggests that the tibiotalar and subtalar joints provide near six degree-of-freedom (DOF) motion. Yet, kinematic models frequently assume one DOF at each of these joints. In this study, we quantified the accuracy of kinematic models to predict joint angles at the tibiotalar and subtalar joints from skin-marker data. Models included 1 or 3 DOF at each joint. Ten asymptomatic subjects, screened for deformities, performed 1.0 m/s treadmill walking and a balanced, single-leg heel-rise. Tibiotalar and subtalar joint angles calculated by inverse kinematics for the 1 and 3 DOF models were compared to those measured directly in vivo using dual-fluoroscopy. Results demonstrated that, for each activity, the average errors in tibiotalar joint angles predicted by the 1 DOF model were significantly smaller than those predicted by the 3 DOF model for inversion/eversion and internal/external rotation. In contrast, neither model consistently demonstrated smaller errors when predicting subtalar joint angles. Additionally, neither model could accurately predict discrete angles for the tibiotalar and subtalar joints on a per-subject basis. Differences between model predictions and dual-fluoroscopy measurements were highly variable across subjects, with joint angle errors in at least one rotation direction surpassing 10° for 9 out of 10 subjects. Our results suggest that both the 1 and 3 DOF models can predict trends in tibiotalar joint angles on a limited basis. However, as currently implemented, neither model can predict discrete tibiotalar or subtalar joint angles for individual subjects. Inclusion of subject-specific attributes may improve the accuracy of these models. Copyright © 2016 Elsevier B.V. All rights reserved.
Pailing, Patricia E; Segalowitz, Sidney J
2004-01-01
This study examines changes in the error-related negativity (ERN/Ne) related to motivational incentives and personality traits. ERPs were gathered while adults completed a four-choice letter task during four motivational conditions. Monetary incentives for finger and hand accuracy were altered across motivation conditions to either be equal or favor one type of accuracy over the other in a 3:1 ratio. Larger ERN/Ne amplitudes were predicted with increased incentives, with personality moderating this effect. Results were as expected: Individuals higher on conscientiousness displayed smaller motivation-related changes in the ERN/Ne. Similarly, those low on neuroticism had smaller effects, with the effect of Conscientiousness absent after accounting for Neuroticism. These results emphasize an emotional/evaluative function for the ERN/Ne, and suggest that the ability to selectively invest in error monitoring is moderated by underlying personality.
What is the effect of area size when using local area practice style as an instrument?
Brooks, John M; Tang, Yuexin; Chapman, Cole G; Cook, Elizabeth A; Chrischilles, Elizabeth A
2013-08-01
Discuss the tradeoffs inherent in choosing a local area size when using a measure of local area practice style as an instrument in instrumental variable estimation when assessing treatment effectiveness. Assess the effectiveness of angiotensin converting-enzyme inhibitors and angiotensin receptor blockers on survival after acute myocardial infarction for Medicare beneficiaries using practice style instruments based on different-sized local areas around patients. We contrasted treatment effect estimates using different local area sizes in terms of the strength of the relationship between local area practice styles and individual patient treatment choices; and indirect assessments of the assumption violations. Using smaller local areas to measure practice styles exploits more treatment variation and results in smaller standard errors. However, if treatment effects are heterogeneous, the use of smaller local areas may increase the risk that local practice style measures are dominated by differences in average treatment effectiveness across areas and bias results toward greater effectiveness. Local area practice style measures can be useful instruments in instrumental variable analysis, but the use of smaller local area sizes to generate greater treatment variation may result in treatment effect estimates that are biased toward higher effectiveness. Assessment of whether ecological bias can be mitigated by changing local area size requires the use of outside data sources. Copyright © 2013 Elsevier Inc. All rights reserved.
(Sample) Size Matters: Best Practices for Defining Error in Planktic Foraminiferal Proxy Records
NASA Astrophysics Data System (ADS)
Lowery, C.; Fraass, A. J.
2016-02-01
Paleoceanographic research is a vital tool to extend modern observational datasets and to study the impact of climate events for which there is no modern analog. Foraminifera are one of the most widely used tools for this type of work, both as paleoecological indicators and as carriers for geochemical proxies. However, the use of microfossils as proxies for paleoceanographic conditions brings about a unique set of problems. This is primarily due to the fact that groups of individual foraminifera, which usually live about a month, are used to infer average conditions for time periods ranging from hundreds to tens of thousands of years. Because of this, adequate sample size is very important for generating statistically robust datasets, particularly for stable isotopes. In the early days of stable isotope geochemistry, instrumental limitations required hundreds of individual foraminiferal tests to return a value. This had the fortunate side-effect of smoothing any seasonal to decadal changes within the planktic foram population. With the advent of more sensitive mass spectrometers, smaller sample sizes have now become standard. While this has many advantages, the use of smaller numbers of individuals to generate a data point has lessened the amount of time averaging in the isotopic analysis and decreased precision in paleoceanographic datasets. With fewer individuals per sample, the differences between individual specimens will result in larger variation, and therefore error, and less precise values for each sample. Unfortunately, most (the authors included) do not make a habit of reporting the error associated with their sample size. We have created an open-source model in R to quantify the effect of sample sizes under various realistic and highly modifiable parameters (calcification depth, diagenesis in a subset of the population, improper identification, vital effects, mass, etc.). For example, a sample in which only 1 in 10 specimens is diagenetically altered can be off by >0.3‰ δ18O VPDB, or 1°C. Here, we demonstrate the use of this tool to quantify error in micropaleontological datasets, and suggest best practices for minimizing error when generating stable isotope data with foraminifera.
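The abstract above describes an open-source R model for quantifying how sample size propagates into isotopic error; as a hedged illustration of the same idea (not the authors' tool), the Python sketch below pools n simulated specimens drawn from a seasonally variable population in which a fraction of tests is diagenetically altered, and reports the bias and scatter of the pooled mean. All parameter values are made up.

```python
# Monte Carlo sketch of sample-size effects on a pooled foraminiferal d18O measurement.
import numpy as np

rng = np.random.default_rng(42)

def sample_d18o(n, seasonal_sd=0.5, true_mean=-1.0, altered_frac=0.1, alteration=1.0):
    """Mean d18O of n pooled specimens, a fraction of which is diagenetically altered."""
    vals = rng.normal(true_mean, seasonal_sd, n)
    altered = rng.random(n) < altered_frac
    vals[altered] += alteration          # altered tests shifted toward heavier values
    return vals.mean()

for n in (1, 5, 10, 30):
    means = np.array([sample_d18o(n) for _ in range(10_000)])
    print(f"n={n:>2}: bias={means.mean() + 1.0:+.3f} per mil, SD={means.std():.3f} per mil")
```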
NASA Technical Reports Server (NTRS)
Menard, Richard; Chang, Lang-Ping
1998-01-01
A Kalman filter system designed for the assimilation of limb-sounding observations of stratospheric chemical tracers, which has four tunable covariance parameters, was developed in Part I (Menard et al. 1998). The assimilation results of CH4 observations from the Cryogenic Limb Array Etalon Sounder instrument (CLAES) and the Halogen Occultation Experiment instrument (HALOE) on board the Upper Atmosphere Research Satellite are described in this paper. A robust χ² criterion, which provides a statistical validation of the forecast and observational error covariances, was used to estimate the tunable variance parameters of the system. In particular, an estimate of the model error variance was obtained. The effect of model error on the forecast error variance became critical after only three days of assimilation of CLAES observations, although it took 14 days of forecast to double the initial error variance. We further found that the model error due to numerical discretization, as arising in the standard Kalman filter algorithm, is comparable in size to the physical model error due to wind and transport modeling errors together. Separate assimilations of CLAES and HALOE observations were compared to validate the state estimate away from the observed locations. A wave-breaking event that took place several thousands of kilometers away from the HALOE observation locations was well captured by the Kalman filter due to highly anisotropic forecast error correlations. The forecast error correlation in the assimilation of the CLAES observations was found to have a structure similar to that in pure forecast mode except for smaller length scales. Finally, we have conducted an analysis of the variance and correlation dynamics to determine their relative importance in chemical tracer assimilation problems. Results show that the optimality of a tracer assimilation system depends, for the most part, on having flow-dependent error correlation rather than on evolving the error variance.
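A common innovation-based form of such a χ² consistency criterion is sketched below; the paper's exact formulation may differ. Here d_k is the innovation, H_k the observation operator, P^f_k the forecast error covariance and R_k the observation error covariance; consistency of the covariances requires E[χ²_k] ≈ m_k, the number of observations assimilated at step k.

```latex
\chi^2_k = d_k^{\mathsf T}\left(H_k P^f_k H_k^{\mathsf T} + R_k\right)^{-1} d_k,
\qquad d_k = y_k - H_k x^f_k
```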
Lognormal Kalman filter for assimilating phase space density data in the radiation belts
NASA Astrophysics Data System (ADS)
Kondrashov, D.; Ghil, M.; Shprits, Y.
2011-11-01
Data assimilation combines a physical model with sparse observations and has become an increasingly important tool for scientists and engineers in the design, operation, and use of satellites and other high-technology systems in the near-Earth space environment. Of particular importance is predicting fluxes of high-energy particles in the Van Allen radiation belts, since these fluxes can damage spaceborne platforms and instruments during strong geomagnetic storms. In transiting from a research setting to operational prediction of these fluxes, improved data assimilation is of the essence. The present study is motivated by the fact that phase space densities (PSDs) of high-energy electrons in the outer radiation belt—both simulated and observed—are subject to spatiotemporal variations that span several orders of magnitude. Standard data assimilation methods that are based on least squares minimization of normally distributed errors may not be adequate for handling the range of these variations. We propose herein a modification of Kalman filtering that uses a log-transformed, one-dimensional radial diffusion model for the PSDs and includes parameterized losses. The proposed methodology is first verified on model-simulated, synthetic data and then applied to actual satellite measurements. When the model errors are sufficiently smaller than observational errors, our methodology can significantly improve analysis and prediction skill for the PSDs compared to those of the standard Kalman filter formulation. This improvement is documented by monitoring the variance of the innovation sequence.
Custom map projections for regional groundwater models
Kuniansky, Eve L.
2017-01-01
For regional groundwater flow models (areas greater than 100,000 km²), improper choice of map projection parameters can result in model error for boundary conditions dependent on area (recharge or evapotranspiration simulated by application of a rate using cell area from model discretization) and length (rivers simulated with head-dependent flux boundary). Smaller model areas can use local map coordinates, such as State Plane (United States) or Universal Transverse Mercator (correct zone) without introducing large errors. Map projections vary in order to preserve one or more of the following properties: area, shape, distance (length), or direction. Numerous map projections are developed for different purposes as all four properties cannot be preserved simultaneously. Preservation of area and length are most critical for groundwater models. The Albers equal-area conic projection with custom standard parallels, selected by dividing the length north to south by 6 and selecting standard parallels 1/6th above or below the southern and northern extent, preserves both area and length for continental areas in mid latitudes oriented east-west. Custom map projection parameters can also minimize area and length error in non-ideal projections. Additionally, one must also use consistent vertical and horizontal datums for all geographic data. The generalized polygon for the Floridan aquifer system study area (306,247.59 km²) is used to provide quantitative examples of the effect of map projections on length and area with different projections and parameter choices. Use of improper map projection is one model construction problem easily avoided.
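The parallel-selection rule described above is easy to script; the sketch below builds such a custom Albers CRS with pyproj. The latitude extents, central meridian, and datum are hypothetical placeholders, roughly in the range of the Floridan aquifer system example, not values taken from the paper.

```python
# Custom Albers equal-area CRS following the 1/6th rule (illustrative values).
from pyproj import CRS, Transformer

lat_s, lat_n = 24.0, 35.0                        # southern / northern extent of model area
span_sixth = (lat_n - lat_s) / 6.0
crs = CRS.from_proj4(
    f"+proj=aea +lat_1={lat_s + span_sixth} +lat_2={lat_n - span_sixth} "
    f"+lat_0={(lat_s + lat_n) / 2} +lon_0=-84.0 +datum=NAD83 +units=m"
)
to_model = Transformer.from_crs("EPSG:4269", crs, always_xy=True)   # NAD83 lon/lat -> model
print(to_model.transform(-84.0, 29.5))                              # (x, y) in metres
```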
Dörnberger, V; Dörnberger, G
1987-01-01
Comparative volumetry was performed on 99 testes from corpses (age at death between 26 and 86 years). With the surrounding capsules left in place (without scrotal skin and tunica dartos), the testes were measured by real-time sonography in a water bath (7.5 MHz linear scan); afterwards, length, breadth and height were measured with a sliding calliper, the largest diameter (the length) of the testis was determined with Schirren's circle, and finally the size of the testis was measured with Prader's orchidometer. The testes were then surgically exposed and their volume was determined by fluid displacement according to Archimedes' principle. Whereas a random mean error of 7% must be accepted for the Archimedes method, sonographic determination of the volume showed a random mean error of 15%. Although measurement accuracy increases with increasing volume, both methods should be used with caution for volumes below 4 ml, since the potential for error is considerable. Volumes measured with Prader's orchidometer were on average higher (+27%), with a random mean error of 19.5%. With Schirren's circle the mean value was even higher (+52%) compared with the "real" volume from Archimedes' principle, with a random mean error of 19%. Calliper measurements of the testes within their capsules can be optimized by applying a correction factor f(calliper) = 0.39 when calculating the testis volume as an ellipsoid; this yields the same mean value as Archimedes' principle, with a standard mean error of only 9%. If the real-time sonography correction factor f(sono) = 0.65 is applied instead, the mean value of the calliper measurements would be 68.8% too high, with a standard mean error of 20.3%. For calliper measurements, therefore, the smaller factor f(calliper) = 0.39 should be used when calculating the ellipsoid volume, because it accounts for the retained capsules and the epididymis.
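As a worked example of the correction-factor arithmetic above (with hypothetical dimensions; a purely geometric ellipsoid would instead use f = π/6 ≈ 0.52):

```latex
V = f \cdot L \cdot B \cdot H
  = 0.39 \times 5\,\mathrm{cm} \times 3\,\mathrm{cm} \times 2.5\,\mathrm{cm}
  \approx 14.6\,\mathrm{ml}
```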
Accuracy of Jump-Mat Systems for Measuring Jump Height.
Pueo, Basilio; Lipinska, Patrycja; Jiménez-Olmedo, José M; Zmijewski, Piotr; Hopkins, Will G
2017-08-01
Vertical-jump tests are commonly used to evaluate lower-limb power of athletes and nonathletes. Several types of equipment are available for this purpose. To compare the error of measurement of 2 jump-mat systems (Chronojump-Boscosystem and Globus Ergo Tester) with that of a motion-capture system as a criterion and to determine the modifying effect of foot length on jump height. Thirty-one young adult men alternated 4 countermovement jumps with 4 squat jumps. Mean jump height and standard deviations representing technical error of measurement arising from each device and variability arising from the subjects themselves were estimated with a novel mixed model and evaluated via standardization and magnitude-based inference. The jump-mat systems produced nearly identical measures of jump height (differences in means and in technical errors of measurement ≤1 mm). Countermovement and squat-jump height were both 13.6 cm higher with motion capture (90% confidence limits ±0.3 cm), but this very large difference was reduced to small unclear differences when adjusted to a foot length of zero. Variability in countermovement and squat-jump height arising from the subjects was small (1.1 and 1.5 cm, respectively, 90% confidence limits ±0.3 cm); technical error of motion capture was similar in magnitude (1.7 and 1.6 cm, ±0.3 and ±0.4 cm), and that of the jump mats was similar or smaller (1.2 and 0.3 cm, ±0.5 and ±0.9 cm). The jump-mat systems provide trustworthy measurements for monitoring changes in jump height. Foot length can explain the substantially higher jump height observed with motion capture.
Study of chromatic adaptation using memory color matches, Part I: neutral illuminants.
Smet, Kevin A G; Zhai, Qiyan; Luo, Ming R; Hanselaer, Peter
2017-04-03
Twelve corresponding color data sets have been obtained using the long-term memory colors of familiar objects as target stimuli. Data were collected for familiar objects with neutral, red, yellow, green and blue hues under 4 approximately neutral illumination conditions on or near the blackbody locus. The advantages of the memory color matching method are discussed in light of other more traditional asymmetric matching techniques. Results were compared to eight corresponding color data sets available in literature. The corresponding color data was used to test several linear (von Kries, RLAB, etc.) and nonlinear (Hunt & Nayatani) chromatic adaptation transforms (CAT). It was found that a simple two-step von Kries transform, in which the degree of adaptation D is optimized to minimize the DEu'v' prediction errors, outperformed all other tested models for both the memory color and literature corresponding color sets, with prediction errors lower for the memory color sets. The predictive errors were substantially smaller than the standard uncertainty on the average observer and were comparable to what are considered just-noticeable differences in the CIE u'v' chromaticity diagram, supporting the use of memory color based internal references to study chromatic adaptation mechanisms.
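A common parameterization of a single von Kries step with degree of adaptation D is sketched below; the paper's exact two-step formulation may differ. L, M, S are cone responses, and the subscripts w and wr denote the test and reference white points.

```latex
L_c = \left[D\,\frac{L_{wr}}{L_{w}} + (1-D)\right] L,\qquad
M_c = \left[D\,\frac{M_{wr}}{M_{w}} + (1-D)\right] M,\qquad
S_c = \left[D\,\frac{S_{wr}}{S_{w}} + (1-D)\right] S
```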
Mieritz, Rune M; Bronfort, Gert; Jakobsen, Markus D; Aagaard, Per; Hartvigsen, Jan
2014-09-01
A basic premise for any instrument measuring spinal motion is that reliable outcomes can be obtained on a relevant sample under standardized conditions. The purpose of this study was to assess the overall reliability and measurement error of regional spinal sagittal plane motion in patients with chronic low back pain (LBP), and then to evaluate the influence of body mass index, examiner, gender, stability of pain, and pain distribution on reliability and measurement error. This study comprises a test-retest design separated by 7 to 14 days. The patient cohort consisted of 220 individuals with chronic LBP. Kinematics of the lumbar spine were sampled during standardized spinal extension-flexion testing using a 6-df instrumented spatial linkage system. Test-retest reliability and measurement error were evaluated using intraclass correlation coefficients (ICC(1,1)) and Bland-Altman limits of agreement (LOAs). The overall test-retest reliability (ICC(1,1)) for various motion parameters ranged from 0.51 to 0.70, and relatively wide LOAs were observed for all parameters. Reliability measures in patient subgroups (ICC(1,1)) ranged between 0.34 and 0.77. In general, greater (ICC(1,1)) coefficients and smaller LOAs were found in subgroups with patients examined by the same examiner, patients with a stable pain level, patients with a body mass index below 30 kg/m², patients who were men, and patients in the Quebec Task Force classification Group 1. This study shows that sagittal plane kinematic data from patients with chronic LBP may be sufficiently reliable in measurements of groups of patients. However, because of the large LOAs, this test procedure appears unusable at the individual patient level. Furthermore, reliability and measurement error vary substantially among subgroups of patients. Copyright © 2014 Elsevier Inc. All rights reserved.
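The Bland-Altman limits of agreement used above are conventionally computed as sketched below, where d̄ and s_d are the mean and standard deviation of the test-retest differences; this is the textbook form, not necessarily the study's exact variant.

```latex
\mathrm{LOA}_{95} = \bar{d} \pm 1.96\, s_d
```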
Kennedy Space Center Timing and Countdown Interface to Kennedy Ground Control Subsystem
NASA Technical Reports Server (NTRS)
Olsen, James C.
2015-01-01
Kennedy Ground Control System (KGCS) engineers at the National Aeronautics and Space Administration (NASA) Kennedy Space Center (KSC) are developing a time-tagging process to enable reconstruction of the events during a launch countdown. Such a process can be useful in the case of anomalies or other situations where it is necessary to know the exact time an event occurred. It is thus critical for the timing information to be accurate. KGCS will synchronize all items with Coordinated Universal Time (UTC) obtained from the Timing and Countdown (T&CD) organization. Network Time Protocol (NTP) is the protocol currently in place for synchronizing UTC. However, NTP has a peak error that is too high for today's standards. Precision Time Protocol (PTP) is a newer protocol with a much smaller peak error. The focus of this project has been to implement a PTP solution on the network to increase timing accuracy while introducing and configuring the implementation of a firewall between T&CD and the KGCS network.
BLIND EXTRACTION OF AN EXOPLANETARY SPECTRUM THROUGH INDEPENDENT COMPONENT ANALYSIS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Waldmann, I. P.; Tinetti, G.; Hollis, M. D. J.
2013-03-20
Blind-source separation techniques are used to extract the transmission spectrum of the hot Jupiter HD 189733b recorded by the Hubble/NICMOS instrument. Such a 'blind' analysis of the data is based on the concept of independent component analysis. The detrending of Hubble/NICMOS data using the sole assumption that non-Gaussian systematic noise is statistically independent from the desired light-curve signals is presented. By not assuming any prior or auxiliary information but the data themselves, it is shown that spectroscopic errors only about 10%-30% larger than parametric methods can be obtained for 11 spectral bins with bin sizes of ~0.09 μm. This represents a reasonable trade-off between a higher degree of objectivity for the non-parametric methods and smaller standard errors for the parametric detrending. Results are discussed in light of previous analyses published in the literature. The fact that three very different analysis techniques yield comparable spectra is a strong indication of the stability of these results.
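A minimal sketch of the underlying idea, using scikit-learn's FastICA to separate a transit-like signal from non-Gaussian systematics in synthetic light curves. The synthetic sources, mixing matrix, and choice of three components are assumptions for illustration and do not reproduce the authors' Hubble/NICMOS pipeline.

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 500)

# Synthetic "sources": a transit-like dip plus two non-Gaussian systematic signals.
transit = 1.0 - 0.01 * ((t > 0.45) & (t < 0.55))
systematic1 = 0.005 * np.sign(np.sin(15 * t))    # square-wave-like, non-Gaussian
systematic2 = 0.003 * rng.laplace(size=t.size)   # heavy-tailed noise
S = np.column_stack([transit, systematic1, systematic2])

# Mix the sources into several observed "light curves" (rows of A are mixing weights).
A = rng.uniform(0.5, 1.5, size=(6, 3))
X = S @ A.T                                      # shape (500, 6): one column per light curve

ica = FastICA(n_components=3, random_state=0)
S_est = ica.fit_transform(X)                     # estimated independent components, up to sign/scale
print(S_est.shape)                               # (500, 3)
```

The recovered components can then be inspected to identify which one carries the transit and which ones are instrument systematics to be removed, which is the spirit of the blind detrending described above.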
Cartographic quality of ERTS-1 images
NASA Technical Reports Server (NTRS)
Welch, R. I.
1973-01-01
Analyses of simulated and operational ERTS images have provided initial estimates of resolution, ground resolution, detectability thresholds and other measures of image quality of interest to earth scientists and cartographers. Based on these values, including an approximate ground resolution of 250 meters for both RBV and MSS systems, the ERTS-1 images appear suited to the production and/or revision of planimetric and photo maps of 1:500,000 scale and smaller for which map accuracy standards are compatible with the imaged detail. Thematic mapping, although less constrained by map accuracy standards, will be influenced by measurement thresholds and errors which have yet to be accurately determined for ERTS images. This study also indicates the desirability of establishing a quantitative relationship between image quality values and map products which will permit both engineers and cartographers/earth scientists to contribute to the design requirements of future satellite imaging systems.
Scaling fixed-field alternating gradient accelerators with a small orbit excursion.
Machida, Shinji
2009-10-16
A novel scaling type of fixed-field alternating gradient (FFAG) accelerator is proposed that solves the major problems of conventional scaling and nonscaling types. This scaling FFAG accelerator can achieve a much smaller orbit excursion by taking a larger field index k. A triplet focusing structure makes it possible to set the operating point in the second stability region of Hill's equation with a reasonable sensitivity to various errors. The orbit excursion is about 5 times smaller than in a conventional scaling FFAG accelerator and the beam size growth due to typical errors is at most 10%.
Longitudinal decline of driving safety in Parkinson disease.
Uc, Ergun Y; Rizzo, Matthew; O'Shea, Amy M J; Anderson, Steven W; Dawson, Jeffrey D
2017-11-07
To longitudinally assess and predict on-road driving safety in Parkinson disease (PD). Drivers with PD (n = 67) and healthy controls (n = 110) drove a standardized route in an instrumented vehicle and were invited to return 2 years later. A professional driving expert reviewed drive data and videos to score safety errors. At baseline, drivers with PD performed worse on visual, cognitive, and motor tests, and committed more road safety errors compared to controls (median PD 38.0 vs controls 30.5; p < 0.001). A smaller proportion of drivers with PD returned for repeat testing (42.8% vs 62.7%; p < 0.01). At baseline, returnees with PD made fewer errors than nonreturnees with PD (median 34.5 vs 40.0; p < 0.05) and performed similar to control returnees (median 33). Baseline global cognitive performance of returnees with PD was better than that of nonreturnees with PD, but worse than for control returnees (p < 0.05). After 2 years, returnees with PD showed greater cognitive decline and larger increase in error counts than control returnees (median increase PD 13.5 vs controls 3.0; p < 0.001). Driving error count increase in the returnees with PD was predicted by greater error count and worse visual acuity at baseline, and by greater interval worsening of global cognition, Unified Parkinson's Disease Rating Scale activities of daily living score, executive functions, visual processing speed, and attention. Despite dropout of the more impaired drivers within the PD cohort, returning drivers with PD, who drove like controls without PD at baseline, showed many more driving safety errors than controls after 2 years. Driving decline in PD was predicted by baseline driving performance and deterioration of cognitive, visual, and functional abnormalities on follow-up. © 2017 American Academy of Neurology.
NASA Astrophysics Data System (ADS)
Leka, K. D.; Barnes, G.
2003-10-01
We apply statistical tests based on discriminant analysis to the wide range of photospheric magnetic parameters described in a companion paper by Leka & Barnes, with the goal of identifying those properties that are important for the production of energetic events such as solar flares. The photospheric vector magnetic field data from the University of Hawai'i Imaging Vector Magnetograph are well sampled both temporally and spatially, and we include here data covering 24 flare-event and flare-quiet epochs taken from seven active regions. The mean value and rate of change of each magnetic parameter are treated as separate variables, thus evaluating both the parameter's state and its evolution, to determine which properties are associated with flaring. Considering single variables first, Hotelling's T²-tests show small statistical differences between flare-producing and flare-quiet epochs. Even pairs of variables considered simultaneously, which do show a statistical difference for a number of properties, have high error rates, implying a large degree of overlap of the samples. To better distinguish between flare-producing and flare-quiet populations, larger numbers of variables are simultaneously considered; lower error rates result, but no unique combination of variables is clearly the best discriminator. The sample size is too small to directly compare the predictive power of large numbers of variables simultaneously. Instead, we rank all possible four-variable permutations based on Hotelling's T²-test and look for the most frequently appearing variables in the best permutations, with the interpretation that they are most likely to be associated with flaring. These variables include an increasing kurtosis of the twist parameter and a larger standard deviation of the twist parameter, but a smaller standard deviation of the distribution of the horizontal shear angle and a horizontal field that has a smaller standard deviation but a larger kurtosis. To support the "sorting all permutations" method of selecting the most frequently occurring variables, we show that the results of a single 10-variable discriminant analysis are consistent with the ranking. We demonstrate that individually, the variables considered here have little ability to differentiate between flaring and flare-quiet populations, but with multivariable combinations, the populations may be distinguished.
Reduction of Orifice-Induced Pressure Errors
NASA Technical Reports Server (NTRS)
Plentovich, Elizabeth B.; Gloss, Blair B.; Eves, John W.; Stack, John P.
1987-01-01
Use of porous-plug orifice reduces or eliminates errors, induced by orifice itself, in measuring static pressure on airfoil surface in wind-tunnel experiments. Piece of sintered metal press-fitted into static-pressure orifice so it matches surface contour of model. Porous material reduces orifice-induced pressure error associated with conventional orifice of same or smaller diameter. Also reduces or eliminates additional errors in pressure measurement caused by orifice imperfections. Provides more accurate measurements in regions with very thin boundary layers.
Mehta, Saurabh P; Barker, Katherine; Bowman, Brett; Galloway, Heather; Oliashirazi, Nicole; Oliashirazi, Ali
2017-07-01
Much of the published work assessing the reliability of smartphone goniometer apps (SG) has poor generalizability, since reliability has typically been assessed in healthy subjects. No research has established the values for standard error of measurement (SEM) or minimal detectable change (MDC), which have greater clinical utility to contextualize the range of motion (ROM) assessed using the SG. This research examined the test-retest reproducibility, concurrent validity, SEM, and MDC values for the iPhone goniometer app (i-Goni; June Software Inc., v.1.1, San Francisco, CA) in assessing knee ROM in patients with knee osteoarthritis or those after total knee replacement. A total of 60 participants underwent data collection, which included the assessment of active knee ROM using the i-Goni and the universal goniometer (UG; EZ Read Jamar Goniometer, Patterson Medical, Warrenville, IL), knee muscle strength, and assessment of pain and lower extremity disability using the quadruple numeric pain rating scale (Q-NPRS) and lower extremity functional scale (LEFS), respectively. Intraclass correlation coefficients (ICCs) were calculated to assess the reproducibility of the knee ROM assessed using the i-Goni and UG. The Bland-Altman technique was used to examine the agreement between these knee ROM measurements. The SEM and MDC values were calculated for i-Goni-assessed knee ROM to characterize the error in a single score and the index of true change, respectively. Pearson correlation coefficients examined concurrent relationships between the i-Goni and other measures. The ICC values for the knee flexion/extension ROM were superior for the i-Goni (0.97/0.94) compared with the UG (0.95/0.87). The SEM values were smaller for i-Goni-assessed knee flexion/extension (2.72/1.18 degrees) compared with UG-assessed knee flexion/extension (3.41/1.62 degrees). Similarly, the MDC values were smaller for both these ROM for the i-Goni (6.3 and 2.72 degrees), suggesting that a smaller change is required to infer true change in knee ROM. The i-Goni-assessed knee ROM showed expected concurrent relationships with the UG, knee muscle strength, Q-NPRS, and the LEFS. In conclusion, the i-Goni demonstrated superior reproducibility with smaller measurement error compared with the UG in assessing knee ROM in the recruited cohort. Future research can expand the inquiry by assessing the reliability of the i-Goni at other joints. Thieme Medical Publishers 333 Seventh Avenue, New York, NY 10001, USA.
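For context, SEM and MDC are commonly obtained from the test-retest ICC and the sample standard deviation as SEM = SD·√(1 − ICC) and MDC = z·√2·SEM. The sketch below applies these standard formulas with illustrative numbers; whether the authors used exactly this form is an assumption.

```python
import math

def sem(sd, icc):
    """Standard error of measurement from the sample SD and test-retest ICC."""
    return sd * math.sqrt(1.0 - icc)

def mdc(sd, icc, z=1.645):
    """Minimal detectable change; z = 1.645 gives a 90% confidence level (MDC90)."""
    return z * math.sqrt(2.0) * sem(sd, icc)

# Illustrative values only (not taken from the study).
print(sem(sd=15.0, icc=0.97))   # ~2.6 degrees
print(mdc(sd=15.0, icc=0.97))   # ~6.0 degrees
```

The point of reporting MDC alongside SEM is exactly what the abstract argues: it tells a clinician how large a change in the measured ROM must be before it can be attributed to the patient rather than to measurement noise.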
Bruza, Petr; Gollub, Sarah L; Andreozzi, Jacqueline M; Tendler, Irwin I; Williams, Benjamin B; Jarvis, Lesley A; Gladstone, David J; Pogue, Brian W
2018-05-02
The purpose of this study was to measure surface dose by remote time-gated imaging of plastic scintillators. A novel technique for time-gated, intensified camera imaging of scintillator emission was demonstrated, and key parameters influencing the signal were analyzed, including distance, angle and thickness. A set of scintillator samples was calibrated by using thermo-luminescence detector response as reference. Examples of use in total skin electron therapy are described. The data showed excellent room light rejection (signal-to-noise ratio of scintillation SNR ≈ 470), ideal scintillation dose response linearity, and 2% dose rate error. Individual sample scintillation response varied by 7% due to sample preparation. Inverse square distance dependence correction and lens throughput error (8% per meter) correction were needed. At scintillator-to-source angle and observation angle <50°, the radiant energy fluence error was smaller than 1%. The achieved standard error of the scintillator cumulative dose measurement compared to the TLD dose was 5%. The results from this proof-of-concept study documented the first use of small scintillator targets for remote surface dosimetry in ambient room lighting. The measured dose accuracy renders our method to be comparable to thermo-luminescent detector dosimetry, with the ultimate realization of accuracy likely to be better than shown here. Once optimized, this approach to remote dosimetry may substantially reduce the time and effort required for surface dosimetry.
Refining new-physics searches in B→Dτν with lattice QCD.
Bailey, Jon A; Bazavov, A; Bernard, C; Bouchard, C M; Detar, C; Du, Daping; El-Khadra, A X; Foley, J; Freeland, E D; Gámiz, E; Gottlieb, Steven; Heller, U M; Kim, Jongjeong; Kronfeld, A S; Laiho, J; Levkova, L; Mackenzie, P B; Meurice, Y; Neil, E T; Oktay, M B; Qiu, Si-Wei; Simone, J N; Sugar, R; Toussaint, D; Van de Water, R S; Zhou, Ran
2012-08-17
The semileptonic decay channel B→Dτν is sensitive to the presence of a scalar current, such as that mediated by a charged-Higgs boson. Recently, the BABAR experiment reported the first observation of the exclusive semileptonic decay B→Dτ⁻ν, finding an approximately 2σ disagreement with the standard-model prediction for the ratio R(D)=BR(B→Dτν)/BR(B→Dℓν), where ℓ = e,μ. We compute this ratio of branching fractions using hadronic form factors computed in unquenched lattice QCD and obtain R(D)=0.316(12)(7), where the errors are statistical and total systematic, respectively. This result is the first standard-model calculation of R(D) from ab initio full QCD. Its error is smaller than that of previous estimates, primarily due to the reduced uncertainty in the scalar form factor f₀(q²). Our determination of R(D) is approximately 1σ higher than previous estimates and, thus, reduces the tension with experiment. We also compute R(D) in models with electrically charged scalar exchange, such as the type-II two-Higgs-doublet model. Once again, our result is consistent with, but approximately 1σ higher than, previous estimates for phenomenologically relevant values of the scalar coupling in the type-II model. As a by-product of our calculation, we also present the standard-model prediction for the longitudinal-polarization ratio P_L(D)=0.325(4)(3).
Rosado-Mendez, Ivan M; Nam, Kibo; Hall, Timothy J; Zagzebski, James A
2013-07-01
Reported here is a phantom-based comparison of methods for determining the power spectral density (PSD) of ultrasound backscattered signals. Those power spectral density values are then used to estimate parameters describing α(f), the frequency dependence of the acoustic attenuation coefficient. Phantoms were scanned with a clinical system equipped with a research interface to obtain radiofrequency echo data. Attenuation, modeled as a power law α(f) = α₀f^β, was estimated using a reference phantom method. The power spectral density was estimated using the short-time Fourier transform (STFT), Welch's periodogram, and Thomson's multitaper technique, and performance was analyzed when limiting the size of the parameter-estimation region. Errors were quantified by the bias and standard deviation of the α₀ and β estimates, and by the overall power-law fit error (FE). For parameter estimation regions larger than ~34 pulse lengths (~1 cm for this experiment), an overall power-law FE of 4% was achieved with all spectral estimation methods. With smaller parameter estimation regions, as in parametric image formation, the bias and standard deviation of the α₀ and β estimates depended on the size of the parameter estimation region. Here, the multitaper method reduced the standard deviation of the α₀ and β estimates compared with those using the other techniques. The results provide guidance for choosing methods for estimating the power spectral density in quantitative ultrasound methods.
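As a concrete illustration of two of the ingredients, the sketch below estimates a PSD with Welch's periodogram (via SciPy) and fits a power law α(f) = α₀·f^β by least squares in log-log space. The sampling rate, the synthetic RF segment, and the attenuation values are invented, and the sketch does not implement the reference phantom method itself.

```python
import numpy as np
from scipy.signal import welch

rng = np.random.default_rng(1)
fs = 40e6                                  # sampling rate of the RF echo data, Hz (assumed)
rf = rng.standard_normal(4096)             # stand-in for one backscattered RF segment

# Power spectral density of the echo segment (Welch's periodogram).
f, psd = welch(rf, fs=fs, nperseg=256)
print(f.size, psd.size)                    # frequency bins and PSD estimates

# Power-law fit alpha(f) = alpha0 * f**beta to synthetic attenuation estimates.
f_mhz = np.array([2.0, 3.0, 4.0, 5.0, 6.0])
alpha = 0.5 * f_mhz**1.1 * (1 + 0.02 * rng.standard_normal(5))   # dB/cm, made-up values
beta, log_alpha0 = np.polyfit(np.log(f_mhz), np.log(alpha), 1)   # slope = beta, intercept = ln(alpha0)
print(np.exp(log_alpha0), beta)            # recovered alpha0 and beta
```

In the study the PSD estimator is the variable of interest (STFT vs Welch vs multitaper), while the power-law fit of α(f) is the downstream step whose bias and variance are used to score each estimator.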
Validation of Calculations in a Digital Thermometer Firmware
NASA Astrophysics Data System (ADS)
Batagelj, V.; Miklavec, A.; Bojkovski, J.
2014-04-01
State-of-the-art digital thermometers are arguably remarkable measurement instruments, measuring outputs from resistance thermometers and/or thermocouples. Not only can they readily achieve measuring accuracies in the parts-per-million range, but they also incorporate sophisticated algorithms for transforming the measured resistance or voltage to temperature. These algorithms often include high-order polynomials, exponentials and logarithms, and must be performed using both standard coefficients and particular calibration coefficients. The numerical accuracy of these calculations and the associated uncertainty component must be much better than the accuracy of the raw measurement in order to be negligible in the total measurement uncertainty. In order for the end-user to gain confidence in these calculations, as well as to conform to the formal requirements of ISO/IEC 17025 and other standards, a way of validating these numerical procedures performed in the firmware of the instrument is required. A software architecture which allows a simple validation of internal measuring instrument calculations is suggested. The digital thermometer should be able to expose all its internal calculation functions to the communication interface, so the end-user can compare the results of the internal measuring instrument calculation with reference results. The method can be regarded as a variation of black-box software validation. Validation results on a thermometer prototype with implemented validation ability show that the calculation error of basic arithmetic operations is within the expected rounding error. For conversion functions, the calculation error is at least ten times smaller than the thermometer's effective resolution for the particular probe type.
Particle simulation of Coulomb collisions: Comparing the methods of Takizuka and Abe and Nanbu
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang Chiaming; Lin, Tungyou; Caflisch, Russel
2008-04-20
The interactions of charged particles in a plasma are governed by long-range Coulomb collisions. We compare two widely used Monte Carlo models for Coulomb collisions. One was developed by Takizuka and Abe in 1977, the other was developed by Nanbu in 1997. We perform deterministic and statistical error analysis with respect to particle number and time step. The two models produce similar stochastic errors, but Nanbu's model gives smaller time step errors. Error comparisons between these two methods are presented.
Effects of the liver volume and donor steatosis on errors in the estimated standard liver volume.
Siriwardana, Rohan Chaminda; Chan, See Ching; Chok, Kenneth Siu Ho; Lo, Chung Mau; Fan, Sheung Tat
2011-12-01
An accurate assessment of donor and recipient liver volumes is essential in living donor liver transplantation. Many liver donors are affected by mild to moderate steatosis, and steatotic livers are known to have larger volumes. This study analyzes errors in liver volume estimation by commonly used formulas and the effects of donor steatosis on these errors. Three hundred twenty-five Asian donors who underwent right lobe donor hepatectomy were the subjects of this study. The percentage differences between the liver volumes from computed tomography (CT) and the liver volumes estimated with each formula (ie, the error percentages) were calculated. Five popular formulas were tested. The degrees of steatosis were categorized as follows: no steatosis [n = 178 (54.8%)], ≤ 10% steatosis [n = 128 (39.4%)], and >10% to 20% steatosis [n = 19 (5.8%)]. The median errors ranged from 0.6% (7 mL) to 24.6% (360 mL). The lowest was seen with the locally derived formula. All the formulas showed a significant association between the error percentage and the CT liver volume (P < 0.001). Overestimation was seen with smaller liver volumes, whereas underestimation was seen with larger volumes. The locally derived formula was most accurate when the liver volume was 1001 to 1250 mL. A multivariate analysis showed that the estimation error was dependent on the liver volume (P = 0.001) and the anthropometric measurement that was used in the calculation (P < 0.001) rather than steatosis (P ≥ 0.07). In conclusion, all the formulas have a similar pattern of error that is possibly related to the anthropometric measurement. Clinicians should be aware of this pattern of error and the liver volume with which their formula is most accurate. Copyright © 2011 American Association for the Study of Liver Diseases.
NASA Astrophysics Data System (ADS)
Blanks, J. K.; Hintz, C. J.; Chandler, G. T.; Shaw, T. J.; McCorkle, D. C.; Bernhard, J. M.
2007-12-01
Mg/Ca and Sr/Ca were analyzed from core-top individual Hoeglundina elegans aragonitic tests collected from three continental slope depths within the South Carolina and Little Bahama Bank continental slope environs (220 m to 1084 m). Our study utilized only individuals that labeled with the vital probe CellTracker Green - unlike bulk core-top material often stained with Rose Bengal, which has known inconsistencies in distinguishing live from dead foraminifera. DSr × 10 values were consistently 1.74 ± 0.23 across all sampling depths. The analytical error in DSr values (0.7%) determined by ICP-MS between repeated measurements on individual H. elegans tests across all depths was less than the analytical error on repeated measurements from standards. Variation in DSr values was not directly explained by a linear temperature relationship (p=0.0003, R²=0.44) over the temperature range of 4.9-11.4°C with a sensitivity of 59.8 μmol/mol/1°C. The standard error from regressing DSr across temperature yields ±3.4°C, which is nearly 3× greater than that reported in previous studies. Sr/Ca was more sensitive for calibrating temperature than Mg/Ca in H. elegans. Observed scatter in DSr was too great across individuals of the same size and of different sizes to resolve ontogenetic effects. However, higher DSr values were associated with smaller individuals and warmer/shallower sampling depths. The highest DSr values were observed at the intermediate sampling depth (~600 m). No significant ontogenetic relationship was found across DSr values in different sized individuals due to tighter overall constrained variance; however, lower DSr values were observed from several smaller individuals. Several dead tests of H. elegans showed no significant differences in DSr values compared to live specimens cleaned by standard cleaning methods, unlike the higher dead than live DMg values observed for the same individuals. There were no significant deviations in DSr across batches cleaned on separate days, unlike the observed sensitivity of DMg across batches. A subset of samples was reductively cleaned (hydrazine solution) and exhibited DMg values within analytical precision of those observed for non-reductively cleaned samples. Therefore, deviations in DMg values resulting from the removal of the reductive cleaning step did not explain analytical errors greater than published values for Mg/Ca or the high variance across same-sized individuals. Variation in DMg values across the same cleaning methods and from dead individuals suggests the need for a careful look into how foraminiferal aragonite should be processed. These findings provide evidence that both Mg and Sr in benthic foraminiferal aragonite reflect factors in addition to temperature and pressure that may interfere with absolute temperature calibrations. Funded by NSF OCE 0351029, OCE 0437366, and OCE-0350794.
The general ventilation multipliers calculated by using a standard Near-Field/Far-Field model.
Koivisto, Antti J; Jensen, Alexander C Ø; Koponen, Ismo K
2018-05-01
In conceptual exposure models, the transmission of pollutants in an imperfectly mixed room is usually described with general ventilation multipliers. This is the approach used in the Advanced REACH Tool (ART) and Stoffenmanager® exposure assessment tools. The multipliers used in these tools were reported by Cherrie (1999; http://dx.doi.org/10.1080/104732299302530) and Cherrie et al. (2011; http://dx.doi.org/10.1093/annhyg/mer092), who developed them by positing input values for a standard Near-Field/Far-Field (NF/FF) model and then calculating concentration ratios between NF and FF concentrations. This study revisited the calculations that produce the multipliers used in ART and Stoffenmanager and found that the recalculated general ventilation multipliers were up to 2.8 times (280%) higher than the values reported by Cherrie (1999), and that the recalculated NF and FF multipliers for 1-hr exposure were up to 1.2 times (17%) smaller and for 8-hr exposure up to 1.7 times (41%) smaller than the values reported by Cherrie et al. (2011). Considering that Stoffenmanager and the ART are classified as higher-tier regulatory exposure assessment tools, the errors in the general ventilation multipliers should not be ignored. We recommend revising the general ventilation multipliers. A better solution is to integrate the NF/FF model into Stoffenmanager and the ART.
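For readers unfamiliar with the NF/FF concept, the sketch below integrates the standard two-box near-field/far-field mass balance and reports the steady-state NF/FF concentration ratio. The emission rate, inter-zone air exchange, ventilation rate, and volumes are illustrative only, and the sketch does not reproduce the specific multiplier definitions of Cherrie (1999) or Cherrie et al. (2011).

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative parameters (not the values used in the cited papers).
G    = 10.0    # emission rate, mg/min
beta = 5.0     # NF/FF inter-zone air exchange, m^3/min
Q    = 20.0    # room supply/exhaust ventilation, m^3/min
V_nf = 8.0     # near-field volume, m^3
V_ff = 100.0   # far-field volume, m^3

def nf_ff(t, c):
    c_nf, c_ff = c
    dc_nf = (G + beta * c_ff - beta * c_nf) / V_nf          # near-field mass balance
    dc_ff = (beta * c_nf - beta * c_ff - Q * c_ff) / V_ff    # far-field mass balance
    return [dc_nf, dc_ff]

sol = solve_ivp(nf_ff, (0.0, 480.0), [0.0, 0.0])             # 8-hour shift, minutes
c_nf_end, c_ff_end = sol.y[:, -1]
print("NF/FF concentration ratio near steady state:", c_nf_end / c_ff_end)
```

With these parameters the steady-state ratio works out to 1 + Q/β = 5, which is the kind of NF-to-FF contrast that the general ventilation multipliers are meant to summarize for the exposure tools.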
Electron electric dipole moment and hyperfine interaction constants for ThO
NASA Astrophysics Data System (ADS)
Fleig, Timo; Nayak, Malaya K.
2014-06-01
A recently implemented relativistic four-component configuration interaction approach to study P- and T-odd interaction constants in atoms and molecules is employed to determine the electron electric dipole moment effective electric field in the Ω=1 first excited state of the ThO molecule. We obtain a value of Eeff = 75.2 GV/cm with an estimated error bar of 3%; this value is 10% smaller than a previously reported result (Skripnikov et al., 2013). Using the same wavefunction model we obtain an excitation energy of Tv(Ω=1) = 5410 cm⁻¹, in accord with the experimental value within 2%. In addition, we report the implementation of the magnetic hyperfine interaction constant A∥ as an expectation value, resulting in A∥ = -1339 MHz for the Ω=1 state in ThO. The smaller effective electric field increases the previously determined upper bound (Baron et al., 2014) on the electron electric dipole moment to |de| < 9.7×10⁻²⁹ e cm and thus mildly mitigates constraints to possible extensions of the Standard Model of particle physics.
Coelho, Joseph R.; Hastings, Jon M.; Holliday, Charles W.
2012-01-01
This study evaluated foraging effectiveness of Pacific cicada killers (Sphecius convallis) by comparing observed prey loads to that predicted by an optimality model. Female S. convallis preyed exclusively on the cicada Tibicen parallelus, resulting in a mean loaded flight muscle ratio (FMR) of 0.187 (N = 46). This value lies just above the marginal level, and only seven wasps (15%) were below 0.179. The low standard error (0.002) suggests that S. convallis is the most ideal flying predator so far examined in this respect. Preying on a single species may have allowed stabilizing selection to adjust the morphology of females to a nearly ideal size. That the loaded FMR is slightly above the marginal level may provide a small safety factor for wasps that do not have optimal thorax temperatures or that have to contend with attempted prey theft. Operational FMR was directly related to wasp body mass. Smaller wasps were overloaded in spite of provisioning with smaller cicadas, while larger wasps were underloaded despite provisioning with larger cicadas. Small wasps may have abandoned larger cicadas because of difficulty with carriage. PMID:26467953
Liu, Xiaofeng Steven
2011-05-01
The use of covariates is commonly believed to reduce the unexplained error variance and the standard error for the comparison of treatment means, but the reduction in the standard error is neither guaranteed nor uniform over different sample sizes. The covariate mean differences between the treatment conditions can inflate the standard error of the covariate-adjusted mean difference and can actually produce a larger standard error for the adjusted mean difference than that for the unadjusted mean difference. When the covariate observations are conceived of as randomly varying from one study to another, the covariate mean differences can be related to Hotelling's T². Using this Hotelling's T² statistic, one can always find a minimum sample size to achieve a high probability of reducing the standard error and confidence interval width for the adjusted mean difference. ©2010 The British Psychological Society.
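The inflation effect can be seen directly in the textbook expression for the variance of the covariate-adjusted mean difference, which adds a term proportional to the squared covariate mean difference. The sketch below uses that standard ANCOVA form with made-up numbers; whether the article's notation matches this exact expression is an assumption.

```python
import math

def se_unadjusted(sigma2, n1, n2):
    """Standard error of the raw mean difference with error variance sigma2."""
    return math.sqrt(sigma2 * (1.0 / n1 + 1.0 / n2))

def se_adjusted(mse, n1, n2, xbar_diff, ssx_within):
    """Textbook ANCOVA form: the last term grows with the covariate mean imbalance."""
    return math.sqrt(mse * (1.0 / n1 + 1.0 / n2 + xbar_diff**2 / ssx_within))

# Illustrative numbers: the covariate halves the error variance (MSE = 0.5 * sigma2),
# yet a large covariate imbalance can still push the adjusted SE above the unadjusted one.
sigma2, n1, n2, ssx = 4.0, 20, 20, 50.0
print(se_unadjusted(sigma2, n1, n2))                 # ~0.63
print(se_adjusted(0.5 * sigma2, n1, n2, 0.5, ssx))   # ~0.46, small imbalance helps
print(se_adjusted(0.5 * sigma2, n1, n2, 3.0, ssx))   # ~0.75, large imbalance inflates the SE
```

This is the mechanism the abstract describes: adjustment only pays off when the variance reduction outweighs the penalty from the covariate mean difference, which is where the sample-size argument based on Hotelling's T² comes in.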
Ruangsetakit, Varee
2015-11-01
To re-examine the relative accuracy of intraocular lens (IOL) power calculation by immersion ultrasound biometry (IUB) and partial coherence interferometry (PCI), based on a new approach that limits its interest to the cases in which the IUB and PCI IOL assignments disagree. Prospective observational study of 108 eyes that underwent cataract surgery at Taksin Hospital. Two halves of the randomly chosen sample eyes were implanted with the IUB- and PCI-assigned lenses, respectively. Postoperative refractive errors were measured in the fifth week. More accurate calculation was based on significantly smaller mean absolute errors (MAEs) and root mean squared errors (RMSEs) away from emmetropia. The distributions of the errors were examined to ensure that the higher accuracy was significant clinically as well. The MAEs and RMSEs were smaller for PCI (0.5106 diopter (D) and 0.6037 D) than for IUB (0.7000 D and 0.8062 D). The higher accuracy was principally contributed by negative errors, i.e., myopia. The MAEs for the negative errors were 0.7955 D for IUB and 0.5185 D for PCI, and the corresponding RMSEs were 0.8562 D and 0.5853 D. Their differences were significant. 72.34% of PCI errors fell within a clinically accepted range of ±0.50 D, whereas 50% of IUB errors did. PCI's higher accuracy was significant statistically and clinically, meaning that lens implantation based on PCI's assignments could improve postoperative outcomes over those based on IUB's assignments.
Standard Errors and Confidence Intervals of Norm Statistics for Educational and Psychological Tests.
Oosterhuis, Hannah E M; van der Ark, L Andries; Sijtsma, Klaas
2016-11-14
Norm statistics allow for the interpretation of scores on psychological and educational tests, by relating the test score of an individual test taker to the test scores of individuals belonging to the same gender, age, or education groups, et cetera. Given the uncertainty due to sampling error, one would expect researchers to report standard errors for norm statistics. In practice, standard errors are seldom reported; they are either unavailable or derived under strong distributional assumptions that may not be realistic for test scores. We derived standard errors for four norm statistics (standard deviation, percentile ranks, stanine boundaries and Z-scores) under the mild assumption that the test scores are multinomially distributed. A simulation study showed that the standard errors were unbiased and that corresponding Wald-based confidence intervals had good coverage. Finally, we discuss the possibilities for applying the standard errors in practical test use in education and psychology. The procedure is provided via the R function check.norms, which is available in the mokken package.
Brito, Thiago V.; Morley, Steven K.
2017-10-25
A method for comparing and optimizing the accuracy of empirical magnetic field models using in situ magnetic field measurements is presented in this paper. The optimization method minimizes a cost function—τ—that explicitly includes both a magnitude and an angular term. A time span of 21 days, including periods of mild and intense geomagnetic activity, was used for this analysis. A comparison between five magnetic field models (T96, T01S, T02, TS04, and TS07) widely used by the community demonstrated that the T02 model was, on average, the most accurate when driven by the standard model input parameters. The optimization procedure, performed in all models except TS07, generally improved the results when compared to unoptimized versions of the models. Additionally, using more satellites in the optimization procedure produces more accurate results. This procedure reduces the number of large errors in the model, that is, it reduces the number of outliers in the error distribution. The TS04 model shows the most accurate results after the optimization in terms of both the magnitude and direction, when using at least six satellites in the fitting. It gave a smaller error than its unoptimized counterpart 57.3% of the time and outperformed the best unoptimized model (T02) 56.2% of the time. Its median percentage error in |B| was reduced from 4.54% to 3.84%. Finally, the difference among the models analyzed, when compared in terms of the median of the error distributions, is not very large. However, the unoptimized models can have very large errors, which are much reduced after the optimization.
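The abstract does not give the functional form of the cost function τ; the sketch below assumes one plausible combination of a relative magnitude error and the angle between the modeled and observed field vectors, which is enough to show how a single scalar can penalize both kinds of disagreement. The weighting w_angle and the example vectors are placeholders.

```python
import numpy as np

def cost_tau(b_model, b_obs, w_angle=1.0):
    """Toy cost combining a relative-magnitude term and an angular term (radians).
    The actual weighting and form used by Brito & Morley is an assumption here."""
    b_model = np.asarray(b_model, dtype=float)
    b_obs = np.asarray(b_obs, dtype=float)
    mag_term = abs(np.linalg.norm(b_model) - np.linalg.norm(b_obs)) / np.linalg.norm(b_obs)
    cosang = np.dot(b_model, b_obs) / (np.linalg.norm(b_model) * np.linalg.norm(b_obs))
    ang_term = np.arccos(np.clip(cosang, -1.0, 1.0))
    return mag_term + w_angle * ang_term

# Example: a model vector a few percent too strong and tilted roughly 5 degrees
# away from the observed vector (all values in nT, purely illustrative).
print(cost_tau([105.0, 0.0, 10.0], [100.0, 0.0, 1.0]))
```

Summing such a cost over all in situ measurements and minimizing it with respect to the model input parameters is, in outline, the optimization step described above.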
Geometric Accuracy Analysis of Worlddem in Relation to AW3D30, Srtm and Aster GDEM2
NASA Astrophysics Data System (ADS)
Bayburt, S.; Kurtak, A. B.; Büyüksalih, G.; Jacobsen, K.
2017-05-01
In a project area close to Istanbul, the quality of the WorldDEM, AW3D30, SRTM DSM and ASTER GDEM2 height models has been analyzed in relation to a reference aerial LiDAR DEM and to each other. The random and the systematic height errors have been separated. The absolute offsets of all height models in X, Y and Z are within expectation. The shifts were accounted for in advance to obtain a satisfactory estimate of the random error component. All height models are influenced by tilts of different size. In addition, systematic deformations can be seen that do not influence the standard deviation very much. The delivery of WorldDEM includes a height error map which is based on the interferometric phase errors and on the number and location of coverages from different orbits. A dependency of the height accuracy on the height error map information and the number of coverages can be seen, but it is smaller than expected. WorldDEM is more accurate than the other investigated height models, and with 10 m point spacing it includes more morphologic detail, visible in contour lines. The morphologic detail is close to that of the LiDAR digital surface model (DSM). As usual, a dependency of the accuracy on the terrain slope can be seen. In forest areas, the canopy definition of InSAR X- and C-band height models, as well as of the height models based on optical satellite images, is not the same as the height definition by LiDAR. In addition, the interferometric phase uncertainty over forest areas is larger. Both effects lead to lower height accuracy in forest areas, also visible in the height error map.
Hypothesis Testing Using Factor Score Regression
Devlieger, Ines; Mayer, Axel; Rosseel, Yves
2015-01-01
In this article, an overview is given of four methods to perform factor score regression (FSR), namely regression FSR, Bartlett FSR, the bias avoiding method of Skrondal and Laake, and the bias correcting method of Croon. The bias correcting method is extended to include a reliable standard error. The four methods are compared with each other and with structural equation modeling (SEM) by using analytic calculations and two Monte Carlo simulation studies to examine their finite sample characteristics. Several performance criteria are used, such as the bias using the unstandardized and standardized parameterization, efficiency, mean square error, standard error bias, type I error rate, and power. The results show that the bias correcting method, with the newly developed standard error, is the only suitable alternative for SEM. While it has a higher standard error bias than SEM, it has a comparable bias, efficiency, mean square error, power, and type I error rate. PMID:29795886
DESIGN NOTE: New apparatus for haze measurement for transparent media
NASA Astrophysics Data System (ADS)
Yu, H. L.; Hsiao, C. C.; Liu, W. C.
2006-08-01
Precise measurement of luminous transmittance and haze of transparent media is increasingly important to the LCD industry. Currently there are at least three documentary standards for measuring transmission haze. Unfortunately, none of those standard methods by itself can obtain precise values for the diffuse transmittance (DT), total transmittance (TT) and haze. This note presents a new apparatus capable of precisely measuring all three quantities simultaneously. Compared with current structures, the proposed design contains one additional compensatory port. For an optimal design, the light trap absorbs the beam completely, the light scattered by the instrument is zero, and the interior surface of the integrating sphere, the baffle, and the reflectance standard have identical characteristics. Accurate values of the TT, DT and haze can be obtained using the new apparatus. Even if the design is not optimal, the measurement errors of the new apparatus are smaller than those of other methods, especially for high sphere reflectance. Therefore, the sphere can be made of a high-reflectance material for the new apparatus to increase the signal-to-noise ratio.
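The three quantities are linked by the standard definition of transmission haze as the scattered fraction of the transmitted light, Haze(%) = 100·DT/TT, which is why an apparatus that measures DT and TT precisely also fixes the haze. A trivial sketch with illustrative readings:

```python
def haze_percent(diffuse_transmittance, total_transmittance):
    """Transmission haze as the percentage of transmitted light that is scattered."""
    return 100.0 * diffuse_transmittance / total_transmittance

# Illustrative readings for a transparent film: 90% total transmittance, 1.8% diffuse.
print(haze_percent(0.018, 0.90))   # 2.0 percent haze
```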
Bayesian Analysis of Silica Exposure and Lung Cancer Using Human and Animal Studies.
Bartell, Scott M; Hamra, Ghassan Badri; Steenland, Kyle
2017-03-01
Bayesian methods can be used to incorporate external information into epidemiologic exposure-response analyses of silica and lung cancer. We used data from a pooled mortality analysis of silica and lung cancer (n = 65,980), using untransformed and log-transformed cumulative exposure. Animal data came from chronic silica inhalation studies using rats. We conducted Bayesian analyses with informative priors based on the animal data and different cross-species extrapolation factors. We also conducted analyses with exposure measurement error corrections in the absence of a gold standard, assuming Berkson-type error that increased with increasing exposure. The pooled animal data exposure-response coefficient was markedly higher (log exposure) or lower (untransformed exposure) than the coefficient for the pooled human data. With 10-fold uncertainty, the animal prior had little effect on results for pooled analyses and only modest effects in some individual studies. One-fold uncertainty produced markedly different results for both pooled and individual studies. Measurement error correction had little effect in pooled analyses using log exposure. Using untransformed exposure, measurement error correction caused a 5% decrease in the exposure-response coefficient for the pooled analysis and marked changes in some individual studies. The animal prior had more impact for smaller human studies and for one-fold versus three- or 10-fold uncertainty. Adjustment for Berkson error using Bayesian methods had little effect on the exposure-response coefficient when exposure was log transformed or when the sample size was large. See video abstract at http://links.lww.com/EDE/B160.
Comparison of different tree sap flow up-scaling procedures using Monte-Carlo simulations
NASA Astrophysics Data System (ADS)
Tatarinov, Fyodor; Preisler, Yakir; Roahtyn, Shani; Yakir, Dan
2015-04-01
An important task in determining the forest ecosystem water balance is the estimation of stand transpiration, which allows evapotranspiration to be separated into transpiration and soil evaporation. This can be based on up-scaling measurements of sap flow in representative trees (SF), which can be done by different mathematical algorithms. The aim of the present study was to evaluate the error associated with different up-scaling algorithms under different conditions. Other types of errors (such as measurement error, within-tree SF variability, choice of sample plot, etc.) were not considered here. A set of simulation experiments using the Monte Carlo technique was carried out and three up-scaling procedures were tested: (1) multiplying the mean stand sap flux density per unit sapwood cross-section area (SFD) by the total sapwood area (Klein et al., 2014); (2) deriving a linear dependence of tree sap flow on tree DBH and calculating SFstand using predicted SF by DBH classes and the stand DBH distribution (Cermak et al., 2004); (3) the same as method 2 but using a nonlinear dependence. Simulations were performed under different SFD(DBH) slopes (bs; positive, negative, zero), different DBH and SFD standard deviations (Δd and Δs, respectively) and DBH class sizes. It was assumed that all trees in a unit area are measured, and the total SF of all trees in the experimental plot was taken as the reference SFstand value. Under negative bs all models tend to overestimate SFstand and the error increases exponentially with decreasing bs. Under bs > 0 all models tend to underestimate SFstand, but the error is much smaller than for bs
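A minimal sketch of the first two up-scaling procedures on synthetic stand data. The allometry, sap flux values, and DBH distribution are invented, and method 2 is applied tree-by-tree rather than by DBH classes, so this is an illustration of the idea rather than the study's simulation setup.

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic stand: DBH (cm), sapwood area (cm^2), and measured sap flow (kg/day) per sample tree.
dbh_sample = rng.uniform(10, 40, size=20)
sapwood_sample = 0.4 * dbh_sample**1.8                                   # made-up allometry
sf_sample = 0.05 * sapwood_sample * (1 + 0.1 * rng.standard_normal(20))  # measured SF with scatter

dbh_stand = rng.uniform(10, 40, size=300)                                # all trees on the plot
sapwood_stand = 0.4 * dbh_stand**1.8

# Method 1: mean sap flux density per unit sapwood area times total stand sapwood area.
sfd_mean = np.mean(sf_sample / sapwood_sample)
sf_stand_m1 = sfd_mean * sapwood_stand.sum()

# Method 2: linear regression of tree sap flow on DBH, applied to every tree in the stand.
slope, intercept = np.polyfit(dbh_sample, sf_sample, 1)
sf_stand_m2 = np.sum(slope * dbh_stand + intercept)

print(sf_stand_m1, sf_stand_m2)
```

Repeating such a calculation over many random stands drawn with different SFD(DBH) slopes and variances, and comparing the results to the "true" total of all trees, is the Monte Carlo error evaluation the abstract describes.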
Feller, David; Peterson, Kirk A
2013-08-28
The effectiveness of the recently developed, explicitly correlated coupled cluster method CCSD(T)-F12b is examined in terms of its ability to reproduce atomization energies derived from complete basis set extrapolations of standard CCSD(T). Most of the standard method findings were obtained with aug-cc-pV7Z or aug-cc-pV8Z basis sets. For a few homonuclear diatomic molecules it was possible to push the basis set to the aug-cc-pV9Z level. F12b calculations were performed with the cc-pVnZ-F12 (n = D, T, Q) basis set sequence and were also extrapolated to the basis set limit using a Schwenke-style, parameterized formula. A systematic bias was observed in the F12b method with the (VTZ-F12/VQZ-F12) basis set combination. This bias resulted in the underestimation of reference values associated with small molecules (valence correlation energies < 0.5 Eh) and an even larger overestimation of atomization energies for bigger systems. Consequently, caution should be exercised in the use of F12b for high accuracy studies. Root mean square and mean absolute deviation error metrics for this basis set combination were comparable to complete basis set values obtained with standard CCSD(T) and the aug-cc-pVDZ through aug-cc-pVQZ basis set sequence. However, the mean signed deviation was an order of magnitude larger. Problems partially due to basis set superposition error were identified with second row compounds which resulted in a weak performance for the smaller VDZ-F12/VTZ-F12 combination of basis sets.
Telemetry Standards, RCC Standard 106-17, Annex A.1, Pulse Amplitude Modulation Standards
2017-07-01
conform to one of two reference figures (the figure cross-references did not survive extraction). Figure caption: 50 percent duty cycle PAM with amplitude synchronization. A 20-25 percent deviation reserved for pulse synchronization is recommended.
Flood-frequency prediction methods for unregulated streams of Tennessee, 2000
Law, George S.; Tasker, Gary D.
2003-01-01
Up-to-date flood-frequency prediction methods for unregulated, ungaged rivers and streams of Tennessee have been developed. Prediction methods include the regional-regression method and the newer region-of-influence method. The prediction methods were developed using stream-gage records from unregulated streams draining basins having from 1 percent to about 30 percent total impervious area. These methods, however, should not be used in heavily developed or storm-sewered basins with impervious areas greater than 10 percent. The methods can be used to estimate 2-, 5-, 10-, 25-, 50-, 100-, and 500-year recurrence-interval floods of most unregulated rural streams in Tennessee. A computer application was developed that automates the calculation of flood frequency for unregulated, ungaged rivers and streams of Tennessee. Regional-regression equations were derived by using both single-variable and multivariable regional-regression analysis. Contributing drainage area is the explanatory variable used in the single-variable equations. Contributing drainage area, main-channel slope, and a climate factor are the explanatory variables used in the multivariable equations. Deleted-residual standard error for the single-variable equations ranged from 32 to 65 percent. Deleted-residual standard error for the multivariable equations ranged from 31 to 63 percent. These equations are included in the computer application to allow easy comparison of results produced by the different methods. The region-of-influence method calculates multivariable regression equations for each ungaged site and recurrence interval using basin characteristics from 60 similar sites selected from the study area. Explanatory variables that may be used in regression equations computed by the region-of-influence method include contributing drainage area, main-channel slope, a climate factor, and a physiographic-region factor. Deleted-residual standard error for the region-of-influence method tended to be only slightly smaller than those for the regional-regression method and ranged from 27 to 62 percent.
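The regional-regression approach described above amounts to fitting a log-linear model of a flood quantile on basin characteristics. The sketch below is a minimal illustration of that idea on synthetic basins; the exponents, the climate factor, and the scatter are invented and are not the Tennessee equations or their standard errors.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 40

# Synthetic basin characteristics: contributing drainage area (mi^2) and a climate factor.
area = rng.uniform(5, 500, size=n)
climate = rng.uniform(0.8, 1.2, size=n)

# Synthetic 100-year peak flows following Q = a * A^b * C^c with lognormal scatter.
q100 = 120.0 * area**0.75 * climate**1.5 * np.exp(0.2 * rng.standard_normal(n))

# Fit the power-law regression in log space with ordinary least squares.
X = np.column_stack([np.ones(n), np.log(area), np.log(climate)])
coef, *_ = np.linalg.lstsq(X, np.log(q100), rcond=None)
a, b, c = np.exp(coef[0]), coef[1], coef[2]
print(a, b, c)   # should recover values near 120, 0.75, 1.5
```

The region-of-influence variant refits a regression of this kind for every ungaged site using only the most similar gaged basins, which is why its standard errors can differ slightly from the fixed regional equations.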
Particle Simulation of Coulomb Collisions: Comparing the Methods of Takizuka & Abe and Nanbu
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, C; Lin, T; Caflisch, R
2007-05-22
The interactions of charged particles in a plasma are governed by long-range Coulomb collisions. We compare two widely used Monte Carlo models for Coulomb collisions. One was developed by Takizuka and Abe in 1977, the other was developed by Nanbu in 1997. We perform deterministic and stochastic error analysis with respect to particle number and time step. The two models produce similar stochastic errors, but Nanbu's model gives smaller time step errors. Error comparisons between these two methods are presented.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Robertson, Andrew K. H.; Basran, Parminder S.; Thomas, Steven D.
Purpose: To investigate the effects of brachytherapy seed size on the quality of x-ray computed tomography (CT), ultrasound (US), and magnetic resonance (MR) images and seed localization through comparison of the 6711 and 9011 ¹²⁵I sources. Methods: For CT images, an acrylic phantom mimicking a clinical implantation plan and embedded with low contrast regions of interest (ROIs) was designed for both the 0.774 mm diameter 6711 (standard) and the 0.508 mm diameter 9011 (thin) seed models (Oncura, Inc., and GE Healthcare, Arlington Heights, IL). Image quality metrics were assessed using the standard deviation of ROIs between the seeds and the contrast to noise ratio (CNR) within the low contrast ROIs. For US images, water phantoms with both single and multiseed arrangements were constructed for both seed sizes. For MR images, both seeds were implanted into a porcine gel and imaged with pelvic imaging protocols. The standard deviation of ROIs and CNR values were used as metrics of artifact quantification. Seed localization within the CT images was assessed using the automated seed finder in a commercial brachytherapy treatment planning system. The number of erroneous seed placements and the average and maximum error in seed placements were recorded as metrics of the localization accuracy. Results: With the thin seeds, CT image noise was reduced from 48.5 ± 0.2 to 32.0 ± 0.2 HU and CNR improved by a median value of 74% when compared with the standard seeds. Ultrasound image noise was measured at 50.3 ± 17.1 dB for the thin seed images and 50.0 ± 19.8 dB for the standard seed images, and artifacts directly behind the seeds were smaller and less prominent with the thin seed model. For MR images, CNR of the standard seeds reduced on average 17% when using the thin seeds for all different imaging sequences and seed orientations, but these differences are not appreciable. Automated seed localization required an average (±SD) of 7.0 ± 3.5 manual corrections in seed positions for the thin seed scans and 3.0 ± 1.2 manual corrections in seed positions for the standard seed scans. The average error in seed placement was 1.2 mm for both seed types and the maximum error in seed placement was 2.1 mm for the thin seed scans and 1.8 mm for the standard seed scans. Conclusions: The 9011 thin seeds yielded significantly improved image quality for CT and US images but no significant differences in MR image quality.
Willem W.S. van Hees
2002-01-01
Comparisons of estimated standard error for a ratio-of-means (ROM) estimator are presented for forest resource inventories conducted in southeast Alaska between 1995 and 2000. Estimated standard errors for the ROM were generated by using a traditional variance estimator and also approximated by bootstrap methods. Estimates of standard error generated by both...
Toward Joint Hypothesis-Tests Seismic Event Screening Analysis: Ms|mb and Event Depth
DOE Office of Scientific and Technical Information (OSTI.GOV)
Anderson, Dale; Selby, Neil
2012-08-14
Well-established theory can be used to combine single-phenomenology hypothesis tests into a multi-phenomenology event screening hypothesis test (Fisher's and Tippett's tests). The commonly used standard error in the Ms:mb event screening hypothesis test is not fully consistent with its physical basis. An improved standard error gives better agreement with the physical basis, correctly partitions error to include model error as a component of variance, and correctly reduces station noise variance through network averaging. For the 2009 DPRK test, the commonly used standard error 'rejects' H0 even with a better scaling slope (β = 1, Selby et al.), whereas the improved standard error 'fails to reject' H0.
Maassen, Gerard H
2010-08-01
In this Journal, Lewis and colleagues introduced a new Reliable Change Index (RCI(WSD)), which incorporated the within-subject standard deviation (WSD) of a repeated measurement design as the standard error. In this note, two opposite errors in using WSD this way are demonstrated. First, being the standard error of measurement of only a single assessment, WSD is too small when practice effects are absent; too many individuals will then be designated reliably changed. Second, WSD can grow without limit to the extent that differential practice effects occur, which can even make RCI(WSD) unable to detect any reliable change.
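For comparison, the classical Jacobson-Truax reliable change index divides the observed change by a standard error of the difference built from the baseline SD and the test-retest reliability. The sketch below contrasts that denominator with a hypothetical within-subject SD used directly as the standard error; the numbers are illustrative and the exact construction of RCI(WSD) in the cited article may differ.

```python
import math

def se_diff_classical(sd_baseline, reliability):
    """Jacobson-Truax standard error of the difference between two assessments."""
    sem = sd_baseline * math.sqrt(1.0 - reliability)
    return math.sqrt(2.0) * sem

def rci(x1, x2, se_diff):
    """Reliable change index: observed change divided by the chosen standard error."""
    return (x2 - x1) / se_diff

# Illustrative values only.
sd, r = 10.0, 0.8
wsd = 3.0    # hypothetical within-subject SD used as the denominator in an RCI(WSD)-style index
print(rci(50, 57, se_diff_classical(sd, r)))   # ~1.11, not reliable change at the |RCI| >= 1.96 criterion
print(rci(50, 57, wsd))                        # ~2.33, flagged as changed when the smaller WSD is used
```

The contrast mirrors the note's first point: when the denominator is too small, the same raw change is more easily declared a reliable change.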
Radio structure effects on the optical and radio representations of the ICRF
NASA Astrophysics Data System (ADS)
Andrei, A. H.; da Silva Neto, D. N.; Assafin, M.; Vieira Martins, R.
Silva Neto et al. (2002) show that, when the standard radio positions of the ICRF Ext.1 sources (Ma et al. 1998) are compared against their optical counterpart positions (Zacharias et al. 1999; Monet et al. 1998), a systematic pattern appears that depends on the radio structure index (Fey and Charlot, 2000). The optical-to-radio offsets produce a distribution suggesting that the coincidence of the optical and radio centroids is worse for the radio-extended than for the radio-compact sources. On average, the offset between the optical and radio centroids is found to be 7.9±1.1 mas smaller for the compact than for the extended sources. Such an effect is reasonably large, and certainly much too large to be due to errors in the VLBI radio positions. On the other hand, it is too small to be attributed to the errors in the optical positions, which moreover should be independent of the radio structure. Thus, other than a true pattern of centroid non-coincidence, the only remaining explanation is a chance result. This paper summarizes the several statistical tests used to discard the chance explanation.
Estimating 1970-99 average annual groundwater recharge in Wisconsin using streamflow data
Gebert, Warren A.; Walker, John F.; Kennedy, James L.
2011-01-01
Average annual recharge in Wisconsin for the period 1970-99 was estimated using streamflow data from U.S. Geological Survey continuous-record streamflow-gaging stations and partial-record sites. Partial-record sites have discharge measurements collected during low-flow conditions. The average annual base flow of a stream divided by the drainage area is a good approximation of the recharge rate; therefore, once average annual base flow is determined, recharge can be calculated. Estimates of recharge for nearly 72 percent of the surface area of the State are provided. The results illustrate substantial spatial variability of recharge across the State, ranging from less than 1 inch to more than 12 inches per year. The average basin size for partial-record sites (50 square miles) was less than the average basin size for the gaging stations (305 square miles). Including results for smaller basins reveals a spatial variability that otherwise would be smoothed out using only estimates for larger basins. An error analysis indicates that the techniques used provide base flow estimates with standard errors ranging from 5.4 to 14 percent.
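The base-flow-to-recharge step is a unit conversion: discharge per unit drainage area expressed as a depth of water per year. A sketch with illustrative values (the 13.57 constant converts 1 cfs per square mile into inches per year):

```python
# 1 cfs sustained for one year over 1 mi^2 corresponds to roughly 13.57 inches of water depth.
CFS_PER_SQMI_TO_IN_PER_YR = 13.57

def recharge_inches_per_year(baseflow_cfs, drainage_area_sqmi):
    """Average annual recharge approximated by base flow divided by drainage area."""
    return (baseflow_cfs / drainage_area_sqmi) * CFS_PER_SQMI_TO_IN_PER_YR

# Illustrative basin: 30 cfs of average annual base flow draining 50 square miles.
print(recharge_inches_per_year(30.0, 50.0))   # ~8.1 inches per year
```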
Rocklin, Gabriel J.; Mobley, David L.; Dill, Ken A.
2013-01-01
Binding free energy calculations offer a thermodynamically rigorous method to compute protein-ligand binding, and they depend on empirical force fields with hundreds of parameters. We examined the sensitivity of computed binding free energies to the ligand's electrostatic and van der Waals parameters. Dielectric screening and cancellation of effects between ligand-protein and ligand-solvent interactions reduce the parameter sensitivity of binding affinity by 65%, compared with interaction strengths computed in the gas phase. However, multiple changes to parameters combine additively on average, which can lead to large changes in overall affinity from many small changes to parameters. Using these results, we estimate that random, uncorrelated errors in force field nonbonded parameters must be smaller than 0.02 e per charge, 0.06 Å per radius, and 0.01 kcal/mol per well depth in order to obtain 68% (one standard deviation) confidence that a computed affinity for a moderately sized lead compound will fall within 1 kcal/mol of the true affinity, if these are the only sources of error considered. PMID:24015114
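The additive-combination argument can be mimicked with a toy Monte Carlo in which each perturbed parameter shifts the computed affinity independently and the shifts sum. The per-parameter sensitivities and parameter counts below are invented, so the probability printed is illustrative only, not the paper's estimate.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical ligand: number of perturbed parameters and assumed sensitivities
# (kcal/mol of binding free energy change per unit of parameter error).
n_charges, n_radii, n_depths = 40, 40, 40
sens_charge, sens_radius, sens_depth = 10.0, 3.0, 20.0   # per e, per angstrom, per kcal/mol

def prob_within_1kcal(sigma_q, sigma_r, sigma_eps, n_trials=100_000):
    """Probability that the summed affinity perturbation stays within 1 kcal/mol."""
    dq = rng.normal(0, sigma_q,   size=(n_trials, n_charges)) * sens_charge
    dr = rng.normal(0, sigma_r,   size=(n_trials, n_radii))   * sens_radius
    de = rng.normal(0, sigma_eps, size=(n_trials, n_depths))  * sens_depth
    total = dq.sum(axis=1) + dr.sum(axis=1) + de.sum(axis=1)   # additive combination
    return np.mean(np.abs(total) < 1.0)

print(prob_within_1kcal(sigma_q=0.02, sigma_r=0.06, sigma_eps=0.01))
```

Because the shifts add, the variance of the total error grows with the number of perturbed parameters, which is why even small per-parameter tolerances are needed for a tight bound on the overall affinity.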
Mehta, Saurabh P; George, Hannah R; Goering, Christian A; Shafer, Danielle R; Koester, Alan; Novotny, Steven
2017-11-01
Clinical measurement study. The push-off test (POT) was recently conceived and found to be reliable and valid for assessing weight bearing through the injured wrist or elbow. However, further research with a larger sample can lend credence to the preliminary findings supporting the use of the POT. This study examined the interrater reliability, construct validity, and measurement error for the POT in patients with wrist conditions. Participants with musculoskeletal (MSK) wrist conditions were recruited. Performance on the POT, grip strength, and isometric strength of the wrist extensors were assessed. The shortened version of the Disabilities of the Arm, Shoulder and Hand questionnaire and the numeric pain rating scale were also completed. The intraclass correlation coefficient assessed interrater reliability of the POT. Pearson correlation coefficients (r) examined the concurrent relationships between the POT and other measures. The standard error of measurement and the minimal detectable change at the 90% confidence interval were assessed as the measurement error and index of true change for the POT. A total of 50 participants with different elbow or wrist conditions (age: 48.1 ± 16.6 years) were included in this study. The results strongly supported the interrater reliability (intraclass correlation coefficient: 0.96 and 0.93 for the affected and unaffected sides, respectively) of the POT in patients with wrist MSK conditions. The POT showed convergent relationships with grip strength on the injured side (r = 0.89) and wrist extensor strength (r = 0.7). The POT showed a small standard error of measurement (1.9 kg). The minimal detectable change at the 90% confidence interval for the POT was 4.4 kg for the sample. This study provides additional evidence to support the reliability and validity of the POT. This is the first study to provide values for the measurement error and true change in POT scores in patients with wrist MSK conditions. Further research should examine the responsiveness and discriminant validity of the POT in patients with wrist conditions. Copyright © 2017 Hanley & Belfus. Published by Elsevier Inc. All rights reserved.
Error Sensitivity to Environmental Noise in Quantum Circuits for Chemical State Preparation.
Sawaya, Nicolas P D; Smelyanskiy, Mikhail; McClean, Jarrod R; Aspuru-Guzik, Alán
2016-07-12
Calculating molecular energies is likely to be one of the first useful applications to achieve quantum supremacy, performing faster on a quantum than a classical computer. However, if future quantum devices are to produce accurate calculations, errors due to environmental noise and algorithmic approximations need to be characterized and reduced. In this study, we use the high performance qHiPSTER software to investigate the effects of environmental noise on the preparation of quantum chemistry states. We simulated 18 16-qubit quantum circuits under environmental noise, each corresponding to a unitary coupled cluster state preparation of a different molecule or molecular configuration. Additionally, we analyze the nature of simple gate errors in noise-free circuits of up to 40 qubits. We find that, in most cases, the Jordan-Wigner (JW) encoding produces smaller errors under a noisy environment as compared to the Bravyi-Kitaev (BK) encoding. For the JW encoding, pure dephasing noise is shown to produce substantially smaller errors than pure relaxation noise of the same magnitude. We report error trends in both molecular energy and electron particle number within a unitary coupled cluster state preparation scheme, against changes in nuclear charge, bond length, number of electrons, noise types, and noise magnitude. These trends may prove to be useful in making algorithmic and hardware-related choices for quantum simulation of molecular energies.
Lipinski, Doug; Mohseni, Kamran
2010-03-01
A ridge tracking algorithm for the computation and extraction of Lagrangian coherent structures (LCS) is developed. This algorithm takes advantage of the spatial coherence of LCS by tracking the ridges which form LCS to avoid unnecessary computations away from the ridges. We also make use of the temporal coherence of LCS by approximating the time dependent motion of the LCS with passive tracer particles. To justify this approximation, we provide an estimate of the difference between the motion of the LCS and that of tracer particles which begin on the LCS. In addition to the speedup in computational time, the ridge tracking algorithm uses less memory and results in smaller output files than the standard LCS algorithm. Finally, we apply our ridge tracking algorithm to two test cases, an analytically defined double gyre as well as the more complicated example of the numerical simulation of a swimming jellyfish. In our test cases, we find up to a 35 times speedup when compared with the standard LCS algorithm.
Estimation of the optical errors on the luminescence imaging of water for proton beam
NASA Astrophysics Data System (ADS)
Yabe, Takuya; Komori, Masataka; Horita, Ryo; Toshito, Toshiyuki; Yamamoto, Seiichi
2018-04-01
Although luminescence imaging of water during proton-beam irradiation can be applied to range estimation, the height of the Bragg peak in the luminescence image was smaller than that measured with an ionization chamber. We hypothesized that the difference was due to optical phenomena: parallax errors of the optical system and reflection of the luminescence within the water phantom. We estimated the errors caused by these optical phenomena affecting the luminescence image of water. To estimate the parallax error in the luminescence images, we measured the luminescence images during proton-beam irradiation using a cooled charge-coupled-device camera while changing the height of the camera's optical axis relative to that of the Bragg peak. When the height of the optical axis matched the depth of the Bragg peak, the Bragg peak heights in the depth profiles were the highest. The reflection of the luminescence of water with a black-walled phantom was slightly smaller than that with a transparent phantom and changed the shapes of the depth profiles. We conclude that the parallax error significantly affects the heights of the Bragg peak and that reflection within the phantom affects the shapes of the depth profiles of the luminescence images of water.
Developing a confidence metric for the Landsat land surface temperature product
NASA Astrophysics Data System (ADS)
Laraby, Kelly G.; Schott, John R.; Raqueno, Nina
2016-05-01
Land Surface Temperature (LST) is an important Earth system data record that is useful to fields such as change detection, climate research, environmental monitoring, and smaller scale applications such as agriculture. Certain Earth-observing satellites can be used to derive this metric, and it would be extremely useful if such imagery could be used to develop a global product. Through the support of the National Aeronautics and Space Administration (NASA) and the United States Geological Survey (USGS), a LST product for the Landsat series of satellites has been developed. Currently, it has been validated for scenes in North America, with plans to expand to a trusted global product. For ideal atmospheric conditions (e.g. stable atmosphere with no clouds nearby), the LST product underestimates the surface temperature by an average of 0.26 K. When clouds are directly above or near the pixel of interest, however, errors can extend to several Kelvin. As the product approaches public release, our major goal is to develop a quality metric that will provide the user with a per-pixel map of estimated LST errors. There are several sources of error that are involved in the LST calculation process, but performing standard error propagation is a difficult task due to the complexity of the atmospheric propagation component. To circumvent this difficulty, we propose to utilize the relationship between cloud proximity and the error seen in the LST process to help develop a quality metric. This method involves calculating the distance to the nearest cloud from a pixel of interest in a scene, and recording the LST error at that location. Performing this calculation for hundreds of scenes allows us to observe the average LST error for different ranges of distances to the nearest cloud. This paper describes this process in full, and presents results for a large set of Landsat scenes.
ERIC Educational Resources Information Center
Lord, Frederic M.; Stocking, Martha
A general computer program is described that will compute asymptotic standard errors and carry out significance tests for an endless variety of (standard and) nonstandard large-sample statistical problems, without requiring the statistician to derive asymptotic standard error formulas. The program assumes that the observations have a multinormal…
Interaction of finger enslaving and error compensation in multiple finger force production.
Martin, Joel R; Latash, Mark L; Zatsiorsky, Vladimir M
2009-01-01
Previous studies have documented two patterns of finger interaction during multi-finger pressing tasks, enslaving and error compensation, which do not agree with each other. Enslaving is characterized by positive correlation between instructed (master) and non-instructed (slave) finger(s), while error compensation can be described as a pattern of negative correlation between master and slave fingers. We hypothesize that the pattern of finger interaction, enslaving or compensation, depends on the initial force level and the magnitude of the targeted force change. Subjects were instructed to press with four fingers (I index, M middle, R ring, and L little) from a specified initial force to target forces following a ramp target line. Force-force relations between the master and each of the three slave fingers were analyzed during the ramp phase of trials by calculating correlation coefficients within each master-slave pair, and a two-factor ANOVA was then performed to determine the effect of initial force and force increase on the correlation coefficients. It was found that, as initial force increased, the value of the correlation coefficient decreased and in some cases became negative, i.e. the enslaving transformed into error compensation. The magnitude of the force increase had a smaller effect on the correlation coefficients. The observations support the hypothesis that the pattern of inter-finger interaction (enslaving or compensation) depends on the initial force level and, to a smaller degree, on the targeted magnitude of the force increase. They suggest that the controller views tasks with higher steady-state forces and smaller force changes as implying a requirement to avoid large changes in the total force.
Computation of Standard Errors
Dowd, Bryan E; Greene, William H; Norton, Edward C
2014-01-01
Objectives: We discuss the problem of computing the standard errors of functions involving estimated parameters and provide the relevant computer code for three different computational approaches using two popular computer packages. Study Design: We show how to compute the standard errors of several functions of interest: the predicted value of the dependent variable for a particular subject, and the effect of a change in an explanatory variable on the predicted value of the dependent variable for an individual subject and the average effect for a sample of subjects. Empirical Application: Using a publicly available dataset, we explain three different methods of computing standard errors: the delta method, Krinsky–Robb, and bootstrapping. We provide computer code for Stata 12 and LIMDEP 10/NLOGIT 5. Conclusions: In most applications, choice of the computational method for standard errors of functions of estimated parameters is a matter of convenience. However, when computing standard errors of the sample average of functions that involve both estimated parameters and nonstochastic explanatory variables, it is important to consider the sources of variation in the function's values. PMID:24800304
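As a hedged sketch of two of the computational approaches named in this abstract (the paper provides Stata and LIMDEP/NLOGIT code, not Python), the example below computes delta-method and bootstrap standard errors for a nonlinear function of OLS estimates on simulated data; the model, evaluation point, and data are assumptions made for illustration.

```python
# Delta-method vs. bootstrap standard errors for g(b) = exp(b0 + b1*x0),
# where b is estimated by OLS on simulated data (all values are illustrative).
import numpy as np

rng = np.random.default_rng(0)
n = 500
x = rng.normal(size=n)
y = 1.0 + 0.5 * x + rng.normal(scale=1.0, size=n)   # simulated data (assumption)
X = np.column_stack([np.ones(n), x])
x0 = 1.0                                            # evaluation point (assumption)

def fit(Xm, ym):
    return np.linalg.lstsq(Xm, ym, rcond=None)[0]

b = fit(X, y)
resid = y - X @ b
sigma2 = resid @ resid / (n - 2)
cov_b = sigma2 * np.linalg.inv(X.T @ X)             # OLS covariance of coefficients

# Delta method: Var[g(b)] ~= grad(g)' Cov(b) grad(g)
g = np.exp(b[0] + b[1] * x0)
grad = g * np.array([1.0, x0])                      # d g / d b
se_delta = np.sqrt(grad @ cov_b @ grad)

# Nonparametric bootstrap over observations
boots = []
for _ in range(2000):
    idx = rng.integers(0, n, n)
    bb = fit(X[idx], y[idx])
    boots.append(np.exp(bb[0] + bb[1] * x0))
se_boot = np.std(boots, ddof=1)

print(f"g(b) = {g:.3f}, delta SE = {se_delta:.3f}, bootstrap SE = {se_boot:.3f}")
```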
Talar dome detection and its geometric approximation in CT: Sphere, cylinder or bi-truncated cone?
Huang, Junbin; Liu, He; Wang, Defeng; Griffith, James F; Shi, Lin
2017-04-01
The purpose of our study is to give a relatively objective definition of the talar dome and its shape approximations to a sphere (SPH), cylinder (CLD) and bi-truncated cone (BTC). The "talar dome" is well-defined with the improved Dijkstra's algorithm, considering the Euclidean distance and surface curvature. The geometric similarity between the talar dome and the ideal shapes, namely SPH, CLD and BTC, is quantified. 50 unilateral CT datasets from 50 subjects with no pathological morphometry of the tali were included in the experiments, and statistical analyses were carried out based on the approximation error. The similarity between the talar dome and BTC was more prominent, with smaller mean, standard deviation, maximum and median of the approximation error (0.36±0.07 mm, 0.32±0.06 mm, 2.24±0.47 mm and 0.28±0.06 mm) compared with fitting to SPH and CLD. In addition, there were significant differences between the fitting errors of each pair of models in terms of the 4 measurements (p-values < 0.05). The linear regression analyses demonstrated high correlation between the CLD and BTC approximations (R² = 0.55 for the median, R² > 0.7 for the others). Color maps representing the fitting error indicated that fitting error mainly occurred on the marginal regions of the talar dome for the SPH and CLD fittings, while that of BTC was small over the whole talar dome. The successful restoration of ankle functions in displacement surgery highly depends on a comprehensive understanding of the talus. The talar dome surface can be well-defined in a computational way and, compared to SPH and CLD, the talar dome shows outstanding similarity with BTC. Copyright © 2016 Elsevier Ltd. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Audren, Benjamin; Lesgourgues, Julien; Bird, Simeon
2013-01-01
We present forecasts for the accuracy of determining the parameters of a minimal cosmological model and the total neutrino mass based on combined mock data for a future Euclid-like galaxy survey and Planck. We consider two different galaxy surveys: a spectroscopic redshift survey and a cosmic shear survey. We make use of the Monte Carlo Markov Chains (MCMC) technique and assume two sets of theoretical errors. The first error is meant to account for uncertainties in the modelling of the effect of neutrinos on the non-linear galaxy power spectrum and we assume this error to be fully correlated in Fourier space. The second error is meant to parametrize the overall residual uncertainties in modelling the non-linear galaxy power spectrum at small scales, and is conservatively assumed to be uncorrelated and to increase with the ratio of a given scale to the scale of non-linearity. It hence increases with wavenumber and decreases with redshift. With these two assumptions for the errors and assuming further conservatively that the uncorrelated error rises above 2% at k = 0.4 h/Mpc and z = 0.5, we find that a future Euclid-like cosmic shear/galaxy survey achieves a 1-σ error on M_ν close to 32 meV/25 meV, sufficient for detecting the total neutrino mass with good significance. If the residual uncorrelated error indeed rises rapidly towards smaller scales in the non-linear regime as we have assumed here, then the data on non-linear scales do not increase the sensitivity to the total neutrino mass. Assuming instead a ten times smaller theoretical error with the same scale dependence, the error on the total neutrino mass decreases moderately from σ(M_ν) = 18 meV to 14 meV when mildly non-linear scales with 0.1 h/Mpc < k < 0.6 h/Mpc are included in the analysis of the galaxy survey data.
A priori discretization error metrics for distributed hydrologic modeling applications
NASA Astrophysics Data System (ADS)
Liu, Hongli; Tolson, Bryan A.; Craig, James R.; Shafii, Mahyar
2016-12-01
Watershed spatial discretization is an important step in developing a distributed hydrologic model. A key difficulty in the spatial discretization process is maintaining a balance between the aggregation-induced information loss and the increase in computational burden caused by the inclusion of additional computational units. Objective identification of an appropriate discretization scheme still remains a challenge, in part because of the lack of quantitative measures for assessing discretization quality, particularly prior to simulation. This study proposes a priori discretization error metrics to quantify the information loss of any candidate discretization scheme without having to run and calibrate a hydrologic model. These error metrics are applicable to multi-variable and multi-site discretization evaluation and provide directly interpretable information to the hydrologic modeler about discretization quality. The first metric, a subbasin error metric, quantifies the routing information loss from discretization, and the second, a hydrological response unit (HRU) error metric, improves upon existing a priori metrics by quantifying the information loss due to changes in land cover or soil type property aggregation. The metrics are straightforward to understand and easy to recode. Informed by the error metrics, a two-step discretization decision-making approach is proposed with the advantage of reducing extreme errors and meeting the user-specified discretization error targets. The metrics and decision-making approach are applied to the discretization of the Grand River watershed in Ontario, Canada. Results show that information loss increases as discretization gets coarser. Moreover, results help to explain the modeling difficulties associated with smaller upstream subbasins since the worst discretization errors and highest error variability appear in smaller upstream areas instead of larger downstream drainage areas. Hydrologic modeling experiments under candidate discretization schemes validate the strong correlation between the proposed discretization error metrics and hydrologic simulation responses. Discretization decision-making results show that the common and convenient approach of making uniform discretization decisions across the watershed performs worse than the proposed non-uniform discretization approach in terms of preserving spatial heterogeneity under the same computational cost.
Optical splitter design for telecommunication access networks with triple-play services
NASA Astrophysics Data System (ADS)
Agalliu, Rajdi; Burtscher, Catalina; Lucki, Michal; Seyringer, Dana
2018-01-01
In this paper, we present various designs of optical splitters for access networks, such as GPON and XG-PON by ITU-T with triple-play services (i.e., data, voice and video). The presented designs represent a step forward, compared with the solutions recommended by the ITU, in terms of performance in transmission systems using WDM. The quality of performance is represented by the bit error rate and the Q-factor. Besides the standard splitter design, we propose a new length-optimized splitter design with a smaller waveguide core, providing some reduction of the non-uniformity of the power split between the output waveguides. The achieved splitting parameters are incorporated in the simulations of passive optical networks. For this purpose, the OptSim tool employing the Time Domain Split Step method was used.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Steinmann, Vera; Chakraborty, Rupak; Rekemeyer, Paul H.
2016-08-31
As novel absorber materials are developed and screened for their photovoltaic (PV) properties, the challenge remains to reproducibly test promising candidates for high-performing PV devices. Many early-stage devices are prone to device shunting due to pinholes in the absorber layer, producing 'false-negative' results. Here, we demonstrate a device engineering solution toward a robust device architecture, using a two-step absorber deposition approach. We use tin sulfide (SnS) as a test absorber material. The SnS bulk is processed at high temperature (400 degrees C) to stimulate grain growth, followed by a much thinner, low-temperature (200 degrees C) absorber deposition. At a lower process temperature, the thin absorber overlayer contains significantly smaller, densely packed grains, which are likely to provide a continuous coating and fill pinholes in the underlying absorber bulk. We compare this two-step approach to the more standard approach of using a semi-insulating buffer layer directly on top of the annealed absorber bulk, and we demonstrate a more than 3.5x higher shunt resistance Rsh with smaller standard error σRsh. Electron-beam-induced current (EBIC) measurements indicate a lower density of pinholes in the SnS absorber bulk when using the two-step absorber deposition approach. We correlate those findings to improvements in the device performance and device performance reproducibility.
The Infinitesimal Jackknife with Exploratory Factor Analysis
ERIC Educational Resources Information Center
Zhang, Guangjian; Preacher, Kristopher J.; Jennrich, Robert I.
2012-01-01
The infinitesimal jackknife, a nonparametric method for estimating standard errors, has been used to obtain standard error estimates in covariance structure analysis. In this article, we adapt it for obtaining standard errors for rotated factor loadings and factor correlations in exploratory factor analysis with sample correlation matrices. Both…
A high accuracy magnetic heading system composed of fluxgate magnetometers and a microcomputer
NASA Astrophysics Data System (ADS)
Liu, Sheng-Wu; Zhang, Zhao-Nian; Hung, James C.
The authors present a magnetic heading system consisting of two fluxgate magnetometers and a single-chip microcomputer. The system, when compared to gyro compasses, is smaller in size, lighter in weight, simpler in construction, quicker in reaction time, free from drift, and more reliable. Using a microcomputer in the system, heading error due to compass deviation, sensor offsets, scale factor uncertainty, and sensor tilts can be compensated with the help of an error model. The laboratory test of a typical system showed that the accuracy of the system was improved from more than 8 deg error without error compensation to less than 0.3 deg error with compensation.
ERIC Educational Resources Information Center
Woodruff, David; Traynor, Anne; Cui, Zhongmin; Fang, Yu
2013-01-01
Professional standards for educational testing recommend that both the overall standard error of measurement and the conditional standard error of measurement (CSEM) be computed on the score scale used to report scores to examinees. Several methods have been developed to compute scale score CSEMs. This paper compares three methods, based on…
Gole, Markus; Köchel, Angelika; Schäfer, Axel; Schienle, Anne
2012-03-01
The goal of the present study was to investigate threat engagement, disengagement, and sensitivity biases in individuals suffering from pathological worry. Twenty participants high in worry proneness and 16 control participants low in worry proneness completed an emotional go/no-go task with worry-related threat words and neutral words. Shorter reaction times (i.e., a threat engagement bias), smaller omission error rates (i.e., a threat sensitivity bias), and larger commission error rates (i.e., a threat disengagement bias) emerged only in the high worry group when worry-related words constituted the go stimuli and neutral words the no-go stimuli. Also, smaller omission error rates as well as larger commission error rates were observed in the high worry group relative to the low worry group when worry-related go stimuli and neutral no-go stimuli were used. The obtained results await replication in a generalized anxiety disorder sample, and future samples should also include men. Our data suggest that worry-prone individuals are threat-sensitive, engage more rapidly with aversive material, and find it harder to disengage from it. Copyright © 2011 Elsevier Ltd. All rights reserved.
Anandakrishnan, Ramu; Onufriev, Alexey
2008-03-01
In statistical mechanics, the equilibrium properties of a physical system of particles can be calculated as the statistical average over accessible microstates of the system. In general, these calculations are computationally intractable since they involve summations over an exponentially large number of microstates. Clustering algorithms are one of the methods used to numerically approximate these sums. The most basic clustering algorithms first subdivide the system into a set of smaller subsets (clusters). Then, interactions between particles within each cluster are treated exactly, while all interactions between different clusters are ignored. These smaller clusters have far fewer microstates, making the summation over these microstates tractable. These algorithms have been previously used for biomolecular computations, but remain relatively unexplored in this context. Presented here is a theoretical analysis of the error and computational complexity for the two most basic clustering algorithms that were previously applied in the context of biomolecular electrostatics. We derive a tight, computationally inexpensive error bound for the equilibrium state of a particle computed via these clustering algorithms. For some practical applications, it is the root mean square error, which can be significantly lower than the error bound, that may be more important. We show that there is a strong empirical relationship between the error bound and the root mean square error, suggesting that the error bound could be used as a computationally inexpensive metric for predicting the accuracy of clustering algorithms for practical applications. An example of error analysis for such an application, the computation of the average charge of ionizable amino acids in proteins, is given, demonstrating that the clustering algorithm can be accurate enough for practical purposes.
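The following Python sketch is my illustration of the basic clustering idea described above, not the authors' code: for a small model of two-state sites with assumed pairwise interaction energies, intra-cluster interactions are treated exactly while inter-cluster interactions are ignored, and the resulting equilibrium site occupancies are compared with the exact sum over all microstates.

```python
# Basic clustering approximation for a toy two-state-site system (illustrative only).
import itertools
import numpy as np

rng = np.random.default_rng(1)
n = 12                                  # number of two-state sites (assumption)
J = np.triu(rng.normal(scale=0.5, size=(n, n)), 1)  # pairwise energies (assumption)
beta = 1.0

def average_occupancy(sites, coupling):
    """Exact Boltzmann average of site occupancy over all microstates of `sites`."""
    sites = list(sites)
    z = 0.0
    avg = np.zeros(len(sites))
    for state in itertools.product([0, 1], repeat=len(sites)):
        s = np.array(state, dtype=float)
        energy = sum(coupling[sites[i], sites[j]] * s[i] * s[j]
                     for i in range(len(sites)) for j in range(i + 1, len(sites)))
        w = np.exp(-beta * energy)
        z += w
        avg += w * s
    return avg / z

exact = average_occupancy(range(n), J)                  # full sum over 2^12 microstates

clusters = [range(0, 4), range(4, 8), range(8, 12)]     # fixed clustering (assumption)
approx = np.concatenate([average_occupancy(c, J) for c in clusters])

print("max |exact - clustered| occupancy error:", np.max(np.abs(exact - approx)))
```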
Statistical models for estimating daily streamflow in Michigan
Holtschlag, D.J.; Salehi, Habib
1992-01-01
Statistical models for estimating daily streamflow were analyzed for 25 pairs of streamflow-gaging stations in Michigan. Stations were paired by randomly choosing a station operated in 1989 at which 10 or more years of continuous flow data had been collected and at which flow is virtually unregulated; a nearby station was chosen where flow characteristics are similar. Streamflow data from the 25 randomly selected stations were used as the response variables; streamflow data at the nearby stations were used to generate a set of explanatory variables. Ordinary least-squares regression (OLSR) equations, autoregressive integrated moving-average (ARIMA) equations, and transfer function-noise (TFN) equations were developed to estimate the log transform of flow for the 25 randomly selected stations. The precision of each type of equation was evaluated on the basis of the standard deviation of the estimation errors. OLSR equations produce one set of estimation errors; ARIMA and TFN models each produce l sets of estimation errors corresponding to the forecast lead. The lead-l forecast is the estimate of flow l days ahead of the most recent streamflow used as a response variable in the estimation. In this analysis, the standard deviations of lead-l ARIMA and TFN forecast errors were generally lower than the standard deviation of OLSR errors for l < 2 days and l < 9 days, respectively. Composite estimates were computed as a weighted average of forecasts based on TFN equations and backcasts (forecasts of the reverse-ordered series) based on ARIMA equations. The standard deviation of composite errors varied throughout the length of the estimation interval and generally was at a maximum near the center of the interval. For comparison with OLSR errors, the mean standard deviation of composite errors was computed for intervals of length 1 to 40 days. The mean standard deviation of length-l composite errors was generally less than the standard deviation of the OLSR errors for l < 32 days. In addition, the composite estimates ensure a gradual transition between periods of estimated and measured flows. Model performance among stations of differing model error magnitudes was compared by computing ratios of the mean standard deviation of the length-l composite errors to the standard deviation of OLSR errors. The mean error ratio for the set of 25 selected stations was less than 1 for intervals l < 32 days. Considering the frequency characteristics of the length of intervals of estimated record in Michigan, the effective mean error ratio for intervals < 30 days was 0.52. Thus, for intervals of estimation of 1 month or less, the error of the composite estimate is substantially lower than the error of the OLSR estimate.
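The composite estimate described above blends a forward forecast with a reverse-ordered backcast across a gap of missing record; the small sketch below illustrates only the blending step, with a simple linear weighting that is my assumption rather than the weighting derived in the study.

```python
# Illustrative blending of forecasts and backcasts across a data gap (assumed linear weights).
import numpy as np

def composite(forecast, backcast):
    """forecast[i] is the lead-(i+1) forecast; backcast[i] is the lead-(L-i) backcast."""
    L = len(forecast)
    w = np.linspace(1.0, 0.0, L)          # trust the forecast early, the backcast late
    return w * forecast + (1.0 - w) * backcast

gap_forecast = np.array([3.2, 3.4, 3.9, 4.1, 4.0])   # example log-flow forecasts (made up)
gap_backcast = np.array([3.0, 3.3, 3.7, 4.2, 4.3])   # example log-flow backcasts (made up)
print(composite(gap_forecast, gap_backcast))
```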
Nonparametric Estimation of Standard Errors in Covariance Analysis Using the Infinitesimal Jackknife
ERIC Educational Resources Information Center
Jennrich, Robert I.
2008-01-01
The infinitesimal jackknife provides a simple general method for estimating standard errors in covariance structure analysis. Beyond its simplicity and generality what makes the infinitesimal jackknife method attractive is that essentially no assumptions are required to produce consistent standard error estimates, not even the requirement that the…
Factor Rotation and Standard Errors in Exploratory Factor Analysis
ERIC Educational Resources Information Center
Zhang, Guangjian; Preacher, Kristopher J.
2015-01-01
In this article, we report a surprising phenomenon: Oblique CF-varimax and oblique CF-quartimax rotation produced similar point estimates for rotated factor loadings and factor correlations but different standard error estimates in an empirical example. Influences of factor rotation on asymptotic standard errors are investigated using a numerical…
Wood, Clive; Alwati, Abdolati; Halsey, Sheelagh; Gough, Tim; Brown, Elaine; Kelly, Adrian; Paradkar, Anant
2016-09-10
The use of near-infrared spectroscopy to predict the concentration of two pharmaceutical co-crystals, 1:1 ibuprofen-nicotinamide (IBU-NIC) and 1:1 carbamazepine-nicotinamide (CBZ-NIC), has been evaluated. A partial least squares (PLS) regression model was developed for both co-crystal pairs using sets of standard samples to create calibration and validation data sets with which to build and validate the models. Parameters such as the root mean square error of calibration (RMSEC), root mean square error of prediction (RMSEP) and correlation coefficient were used to assess the accuracy and linearity of the models. Accurate PLS regression models were created for both co-crystal pairs which can be used to predict the co-crystal concentration in a powder mixture of the co-crystal and the active pharmaceutical ingredient (API). The IBU-NIC model had smaller errors than the CBZ-NIC model, possibly due to the complex CBZ-NIC spectra, which could reflect the different arrangement of hydrogen bonding associated with the co-crystal compared to the IBU-NIC co-crystal. These results suggest that NIR spectroscopy can be used as a PAT tool during a variety of pharmaceutical co-crystal manufacturing methods, and the presented data will facilitate future offline and in-line NIR studies involving pharmaceutical co-crystals. Copyright © 2016 The Authors. Published by Elsevier B.V. All rights reserved.
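A hedged Python analogue of the calibration/validation workflow described above (the study used NIR spectra of real co-crystal mixtures; here the spectra are synthetic, and the band shapes, noise level, and number of PLS components are assumptions):

```python
# PLS calibration for "co-crystal concentration" on synthetic spectra, with RMSEC and RMSEP.
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(0)
wavelengths = np.linspace(0, 1, 200)
pure_cocrystal = np.exp(-((wavelengths - 0.3) ** 2) / 0.01)   # made-up band shape
pure_api = np.exp(-((wavelengths - 0.7) ** 2) / 0.02)          # made-up band shape

def spectra(conc):
    # Beer-Lambert-like linear mixing plus noise (assumption)
    return (np.outer(conc, pure_cocrystal) + np.outer(1 - conc, pure_api)
            + rng.normal(scale=0.01, size=(len(conc), wavelengths.size)))

c_cal = np.linspace(0, 1, 11)          # calibration standards
c_val = rng.uniform(0, 1, 20)          # independent validation set
X_cal, X_val = spectra(c_cal), spectra(c_val)

pls = PLSRegression(n_components=2).fit(X_cal, c_cal)
rmsec = np.sqrt(np.mean((pls.predict(X_cal).ravel() - c_cal) ** 2))
rmsep = np.sqrt(np.mean((pls.predict(X_val).ravel() - c_val) ** 2))
print(f"RMSEC = {rmsec:.4f}, RMSEP = {rmsep:.4f}")
```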
Research on Strain Measurements of Core Positions for the Chinese Space Station.
Shen, Jingshi; Zeng, Xiaodong; Luo, Yuxiang; Cao, Changqing; Wang, Ting
2018-06-05
The Chinese space station is designed to carry out manned spaceflight, space science research, and so on. In service, injecting gas into the hull is a common operation, which produces strain in the bulkhead. Accurate measurement of strain in the bulkhead is one of the key tasks in evaluating the health condition of the space station. This is the first work to perform strain detection for the Chinese space station bulkhead using optical fiber Bragg gratings. During the measurements, a resistance strain gauge was used as the strain standard. The measurement error of the fiber-optic sensor in the circumferential direction is very small, being less than 4.52 με. However, the error in the axial direction is very large, with a highest value of 28.93 με. Because the measurement error of bare fiber in the axial direction is very small, the transverse effect of the substrate of the fiber-optic sensor likely plays a role. The comparison of the theoretical and experimental values of the transverse effect coefficient shows that they are fairly consistent, at 0.0271 and 0.0287, respectively. After the transverse effect is compensated, the strain deviation in the axial direction is smaller than 2.04 με. It is of great significance to carry out real-time health assessment for the bulkhead of the space station.
The effect of early deprivation on executive attention in middle childhood.
Loman, Michelle M; Johnson, Anna E; Westerlund, Alissa; Pollak, Seth D; Nelson, Charles A; Gunnar, Megan R
2013-01-01
Children reared in deprived environments, such as institutions for the care of orphaned or abandoned children, are at increased risk for attention and behavior regulation difficulties. This study examined the neurobehavioral correlates of executive attention in post-institutionalized (PI) children. The performance and event-related potentials (ERPs) of 10- and 11-year-old internationally adopted PI children on two executive attention tasks, go/no-go and Flanker, were compared with those of two groups: children adopted internationally early from foster care (PF) and nonadopted children (NA). Behavioral measures suggested problems with sustained attention: PIs performed more poorly on go trials but not on no-go trials of the go/no-go task and made more errors on both congruent and incongruent trials of the Flanker. ERPs suggested differences in inhibitory control and error monitoring, as PIs had smaller N2 amplitude on the go/no-go and smaller error-related negativity on the Flanker. This pattern of results raises questions regarding the nature of attention difficulties for PI children. The behavioral errors are not specific to executive attention and instead likely reflect difficulties in overall sustained attention. The ERP results are consistent with neural activity related to deficits in inhibitory control (N2) and error monitoring (error-related negativity). Questions emerge regarding the similarity of attention regulatory difficulties in PIs to those experienced by non-PI children with ADHD. © 2012 The Authors. Journal of Child Psychology and Psychiatry © 2012 Association for Child and Adolescent Mental Health.
NASA Astrophysics Data System (ADS)
Larsson, R.; Milz, M.; Rayer, P.; Saunders, R.; Bell, W.; Booton, A.; Buehler, S. A.; Eriksson, P.; John, V.
2015-10-01
We present a comparison of a reference and a fast radiative transfer model using numerical weather prediction profiles for the Zeeman-affected high-altitude Special Sensor Microwave Imager/Sounder channels 19-22. We find that the models agree well for channels 21 and 22 compared to the channels' system noise temperatures (1.9 and 1.3 K, respectively) and the expected profile errors at the affected altitudes (estimated to be around 5 K). For channel 22 there is a 0.5 K average difference between the models, with a standard deviation of 0.24 K for the full set of atmospheric profiles. For the same channel, there is a 1.2 K average difference between the fast model and the sensor measurement, with 1.4 K standard deviation. For channel 21 there is a 0.9 K average difference between the models, with a standard deviation of 0.56 K. For the same channel, there is a 1.3 K average difference between the fast model and the sensor measurement, with 2.4 K standard deviation. We consider the relatively small model differences as a validation of the fast Zeeman effect scheme for these channels. Both channels 19 and 20 have smaller average differences between the models (at below 0.2 K) and smaller standard deviations (at below 0.4 K) when both models use a two-dimensional magnetic field profile. However, when the reference model is switched to using a full three-dimensional magnetic field profile, the standard deviation to the fast model is increased to almost 2 K due to viewing geometry dependencies, causing up to ±7 K differences near the equator. The average differences between the two models remain small despite changing magnetic field configurations. We are unable to compare channels 19 and 20 to sensor measurements due to the limited altitude range of the numerical weather prediction profiles. We recommend that numerical weather prediction software using the fast model take the available fast Zeeman scheme into account for data assimilation of the affected sensor channels to better constrain the upper atmospheric temperatures.
NASA Astrophysics Data System (ADS)
Larsson, Richard; Milz, Mathias; Rayer, Peter; Saunders, Roger; Bell, William; Booton, Anna; Buehler, Stefan A.; Eriksson, Patrick; John, Viju O.
2016-03-01
We present a comparison of a reference and a fast radiative transfer model using numerical weather prediction profiles for the Zeeman-affected high-altitude Special Sensor Microwave Imager/Sounder channels 19-22. We find that the models agree well for channels 21 and 22 compared to the channels' system noise temperatures (1.9 and 1.3 K, respectively) and the expected profile errors at the affected altitudes (estimated to be around 5 K). For channel 22 there is a 0.5 K average difference between the models, with a standard deviation of 0.24 K for the full set of atmospheric profiles. Concerning the same channel, there is 1.2 K on average between the fast model and the sensor measurement, with 1.4 K standard deviation. For channel 21 there is a 0.9 K average difference between the models, with a standard deviation of 0.56 K. Regarding the same channel, there is 1.3 K on average between the fast model and the sensor measurement, with 2.4 K standard deviation. We consider the relatively small model differences as a validation of the fast Zeeman effect scheme for these channels. Both channels 19 and 20 have smaller average differences between the models (at below 0.2 K) and smaller standard deviations (at below 0.4 K) when both models use a two-dimensional magnetic field profile. However, when the reference model is switched to using a full three-dimensional magnetic field profile, the standard deviation to the fast model is increased to almost 2 K due to viewing geometry dependencies, causing up to ±7 K differences near the equator. The average differences between the two models remain small despite changing magnetic field configurations. We are unable to compare channels 19 and 20 to sensor measurements due to limited altitude range of the numerical weather prediction profiles. We recommend that numerical weather prediction software using the fast model take the available fast Zeeman scheme into account for data assimilation of the affected sensor channels to better constrain the upper atmospheric temperatures.
An entropy-based statistic for genomewide association studies.
Zhao, Jinying; Boerwinkle, Eric; Xiong, Momiao
2005-07-01
Efficient genotyping methods and the availability of a large collection of single-nucleotide polymorphisms provide valuable tools for genetic studies of human disease. The standard chi2 statistic for case-control studies, which uses a linear function of allele frequencies, has limited power when the number of marker loci is large. We introduce a novel test statistic for genetic association studies that uses Shannon entropy and a nonlinear function of allele frequencies to amplify the differences in allele and haplotype frequencies to maintain statistical power with large numbers of marker loci. We investigate the relationship between the entropy-based test statistic and the standard chi2 statistic and show that, in most cases, the power of the entropy-based statistic is greater than that of the standard chi2 statistic. The distribution of the entropy-based statistic and the type I error rates are validated using simulation studies. Finally, we apply the new entropy-based test statistic to two real data sets, one for the COMT gene and schizophrenia and one for the MMP-2 gene and esophageal carcinoma, to evaluate the performance of the new method for genetic association studies. The results show that the entropy-based statistic obtained smaller P values than did the standard chi2 statistic.
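The exact nonlinear entropy statistic of the paper is not reproduced here; as a loose illustration of the idea of contrasting an entropy-based measure with the standard chi2 statistic, the sketch below compares a 2x2 allele-count chi-square test with a simple Shannon-entropy-difference statistic whose null distribution is obtained by permutation. The statistic's form, the allele frequencies, and the sample sizes are all assumptions for demonstration.

```python
# Chi-square on a 2x2 allele-count table vs. an assumed entropy-difference statistic
# calibrated by permutation (illustration only; not the published statistic).
import numpy as np
from scipy.stats import chi2_contingency

rng = np.random.default_rng(0)

def shannon(p):
    p = p[p > 0]
    return -np.sum(p * np.log(p))

def entropy_stat(case_alleles, control_alleles):
    """|H(case allele freqs) - H(control allele freqs)| (assumed form, for illustration)."""
    f_case = np.bincount(case_alleles, minlength=2) / len(case_alleles)
    f_ctrl = np.bincount(control_alleles, minlength=2) / len(control_alleles)
    return abs(shannon(f_case) - shannon(f_ctrl))

case = rng.binomial(1, 0.35, size=400)      # minor-allele indicators (simulated)
ctrl = rng.binomial(1, 0.28, size=400)

table = np.array([[case.sum(), len(case) - case.sum()],
                  [ctrl.sum(), len(ctrl) - ctrl.sum()]])
chi2, chi2_p, dof, expected = chi2_contingency(table, correction=False)

obs = entropy_stat(case, ctrl)
pooled = np.concatenate([case, ctrl])
perm = [entropy_stat(*np.split(rng.permutation(pooled), [len(case)])) for _ in range(2000)]
perm_p = np.mean(np.array(perm) >= obs)
print(f"chi2 p = {chi2_p:.4f}, entropy permutation p = {perm_p:.4f}")
```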
NASA Technical Reports Server (NTRS)
Dagalakis, N.; Wavering, A. J.; Spidaliere, P.
1991-01-01
Test procedures are proposed for the NASA DTF (Development Test Flight)-1 positioning tests of the FTS (Flight Telerobotic Servicer). The unique problems associated with the DTF-1 mission are discussed, standard robot performance tests and terminology are reviewed, and a very detailed description of flight-like testing and analysis is presented. The major technical problem associated with DTF-1 is that only one position sensor can be used, fixed at one location, with a working volume that is probably smaller than some of the robot errors to be measured. Radiation heating of the arm and the sensor could also cause distortions that would interfere with the test. Two robot performance testing committees have established standard testing procedures relevant to the DTF-1. Due to the technical problems associated with DTF-1, these procedures cannot be applied directly. These standard tests call for the use of several test positions at specific locations. Only one position, that of the position sensor, can be used by DTF-1. Off-line programming accuracy might be impossible to measure, and in that case it will have to be replaced by forward kinematics accuracy.
NASA Astrophysics Data System (ADS)
Yang, Shuang-Long; Liang, Li-Ping; Liu, Hou-De; Xu, Ke-Jun
2018-03-01
Aiming at reducing the estimation error of the sensor frequency response function (FRF) estimated by the commonly used window-based spectral estimation method, the error models of the interpolation and transient errors are derived in the form of non-parametric models. Accordingly, window effects on the errors are analyzed, revealing that the commonly used Hanning window leads to a smaller interpolation error, which can also be largely eliminated by the cubic spline interpolation method when estimating the FRF from step response data, and that a window with a smaller front-end value suppresses more of the transient error. Thus, a new dual-cosine window with its non-zero discrete Fourier transform bins at -3, -1, 0, 1, and 3 is constructed for FRF estimation. Compared with the Hanning window, the new dual-cosine window has an equivalent interpolation error suppression capability and a better transient error suppression capability when estimating the FRF from the step response; specifically, it improves the asymptotic decay of the transient error from O(N^-2) for the Hanning window method to O(N^-4), while only increasing the uncertainty slightly (about 0.4 dB). Then, one direction of a wind tunnel strain gauge balance, which is a high-order, lightly damped, non-minimum-phase system, is employed as the example for verifying the new dual-cosine window-based spectral estimation method. The model simulation result shows that the new dual-cosine window method is better than the Hanning window method for FRF estimation, and compared with the Gans method and the LPM method, it has the advantages of simple computation, less time consumption, and a short data requirement; the result calculated from actual balance data is consistent with the simulation result. Thus, the new dual-cosine window is effective and practical for FRF estimation.
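A hedged sketch of the window family described above: a length-N sequence built from a constant plus cosines at one and three cycles per record has non-zero DFT bins only at 0, ±1, and ±3. The coefficients below are illustrative placeholders chosen to give a zero front-end value; they are not the coefficients derived in the paper.

```python
# Construct a "dual-cosine" window and verify which DFT bins are non-zero.
import numpy as np

def dual_cosine_window(N, a0=0.55, a1=0.50, a3=0.05):
    # Coefficients are placeholders (assumption); a0 = a1 + a3 gives w[0] = 0.
    n = np.arange(N)
    return a0 - a1 * np.cos(2 * np.pi * n / N) - a3 * np.cos(6 * np.pi * n / N)

N = 64
w = dual_cosine_window(N)
W = np.fft.fft(w) / N
nonzero_bins = sorted(k if k <= N // 2 else k - N
                      for k in np.flatnonzero(np.abs(W) > 1e-12))
print("non-zero DFT bins:", nonzero_bins)       # expect [-3, -1, 0, 1, 3]
print("front-end value w[0]:", round(w[0], 6))  # small, as described in the abstract
```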
NASA Astrophysics Data System (ADS)
Siirila, E. R.; Fernandez-Garcia, D.; Sanchez-Vila, X.
2014-12-01
Particle tracking (PT) techniques, often considered favorable over Eulerian techniques due to artificial smoothening in breakthrough curves (BTCs), are evaluated in a risk-driven framework. Recent work has shown that given relatively few particles (np), PT methods can yield well-constructed BTCs with kernel density estimators (KDEs). This work compares KDE and non-KDE BTCs simulated as a function of np (10^2-10^8) and averaged as a function of the exposure duration, ED. Results show that regardless of BTC shape complexity, un-averaged PT BTCs show a large bias over several orders of magnitude in concentration (C) when compared to the KDE results, remarkably even when np is as low as 10^2. With the KDE, several orders of magnitude fewer np are required to obtain the same global error in BTC shape as the PT technique. PT and KDE BTCs are averaged as a function of the ED with standard and new methods incorporating the optimal h (ANA). The lowest error curve is obtained through the ANA method, especially for smaller EDs. Percent error of the peak of averaged BTCs, important in a risk framework, is approximately zero for all scenarios and all methods for np ≥ 10^5, but varies between the ANA and PT methods when np is lower. For fewer np, the ANA solution provides a lower error fit except when C oscillations are present during a short time frame.
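As a hedged illustration of the comparison described above, the sketch below reconstructs a breakthrough curve from synthetic particle arrival times with a Gaussian KDE and with simple time binning; the arrival-time distribution, bandwidth rule (Silverman's default rather than the study's optimal h), and bin count are assumptions.

```python
# KDE vs. binned reconstruction of a breakthrough curve from particle arrival times.
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)
arrival_times = rng.lognormal(mean=1.0, sigma=0.4, size=10**3)   # np = 1000 particles

t = np.linspace(0.1, 10, 200)
kde_btc = gaussian_kde(arrival_times)(t)            # smooth concentration-vs-time proxy

hist, edges = np.histogram(arrival_times, bins=40, density=True)
binned_btc = np.interp(t, 0.5 * (edges[:-1] + edges[1:]), hist)

# True lognormal density used to generate the particles, for reference
true_btc = np.exp(-(np.log(t) - 1.0) ** 2 / (2 * 0.4 ** 2)) / (t * 0.4 * np.sqrt(2 * np.pi))
print("KDE RMSE   :", np.sqrt(np.mean((kde_btc - true_btc) ** 2)))
print("Binned RMSE:", np.sqrt(np.mean((binned_btc - true_btc) ** 2)))
```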
Leardini, Alberto; Lullini, Giada; Giannini, Sandro; Berti, Lisa; Ortolani, Maurizio; Caravaggi, Paolo
2014-09-11
Several rehabilitation systems based on inertial measurement units (IMUs) are entering the market for the control of exercises and to measure performance progression, particularly for recovery after lower limb orthopaedic treatments. IMUs are easy to wear, even by the patient alone, but the extent to which IMU malpositioning in routine use can affect the accuracy of the measurements is not known. A new such system (Riablo™, CoRehab, Trento, Italy), using audio-visual biofeedback based on videogames, was assessed against state-of-the-art gait analysis as the gold standard. The sensitivity of the system to errors in the IMU's position and orientation was measured in 5 healthy subjects performing two hip joint motion exercises. Root mean square deviation was used to assess differences in the system's kinematic output between the erroneous and correct IMU position and orientation. In order to estimate the system's accuracy, thorax and knee joint motion of 17 healthy subjects were tracked during the execution of standard rehabilitation tasks and compared with the corresponding measurements obtained with an established gait protocol using stereophotogrammetry. A maximum mean error of 3.1 ± 1.8 deg and 1.9 ± 0.8 deg from the angle trajectory with the correct IMU position was recorded in the medio-lateral malposition and frontal-plane misalignment tests, respectively. Across the standard rehabilitation tasks, the mean distance between the IMU and gait analysis systems was on average smaller than 5°. These findings showed that the tested IMU-based system has the necessary accuracy to be safely utilized in rehabilitation programs after orthopaedic treatments of the lower limb.
Research on Standard Errors of Equating Differences. Research Report. ETS RR-10-25
ERIC Educational Resources Information Center
Moses, Tim; Zhang, Wenmin
2010-01-01
In this paper, the "standard error of equating difference" (SEED) is described in terms of originally proposed kernel equating functions (von Davier, Holland, & Thayer, 2004) and extended to incorporate traditional linear and equipercentile functions. These derivations expand on prior developments of SEEDs and standard errors of equating and…
Choi, Kai Yip; Yu, Wing Yan; Lam, Christie Hang I; Li, Zhe Chuang; Chin, Man Pan; Lakshmanan, Yamunadevi; Wong, Francisca Siu Yin; Do, Chi Wai; Lee, Paul Hong; Chan, Henry Ho Lung
2017-09-01
People in Hong Kong generally live in a densely populated area, and their homes are smaller than in most other cities worldwide. Interestingly, East Asian cities with high population densities seem to have higher myopia prevalence, but the association between them has not been established. This study investigated whether the crowded habitat in Hong Kong is associated with refractive error among children. In total, 1075 subjects [mean age (SD): 9.95 (0.97) years, 586 boys] were recruited. Information such as demographics, living environment, parental education and ocular status was collected using parental questionnaires. The ocular axial length and refractive status of all subjects were measured by qualified personnel. Ocular axial length was found to be significantly longer among those living in districts with a higher population density (F(2,1072) = 6.15, p = 0.002) and those living in a smaller home (F(2,1072) = 3.16, p = 0.04). Axial lengths were the same among different types of housing (F(3,1071) = 1.24, p = 0.29). Non-cycloplegic autorefraction suggested a more negative refractive error in those living in districts with a higher population density (F(2,1072) = 7.88, p < 0.001) and those living in a smaller home (F(2,1072) = 4.25, p = 0.02). After adjustment for other confounding covariates, population density and home size also significantly predicted axial length and non-cycloplegic refractive error in the multiple linear regression model, while axial length and refractive error had no relationship with type of housing. Axial length in children and childhood refractive error were associated with high population density and small home size. A constricted living space may be an environmental threat for myopia development in children. © 2017 The Authors Ophthalmic & Physiological Optics © 2017 The College of Optometrists.
Vedenov, Dmitry; Alhotan, Rashed A; Wang, Runlian; Pesti, Gene M
2017-02-01
Nutritional requirements and responses of all organisms are estimated using various models representing the response to different dietary levels of the nutrient in question. To help nutritionists design experiments for estimating responses and requirements, we developed a simulation workbook using Microsoft Excel. The objective of the present study was to demonstrate the influence of different numbers of nutrient levels, ranges of nutrient levels and replications per nutrient level on the estimates of requirements based on common nutritional response models. The user provides estimates of the shape of the response curve, the requirement and other parameters, and the observation-to-observation variation. The Excel workbook then produces 1-1000 randomly simulated responses based on the given response curve and estimates the standard errors of the requirement (and other parameters) from different models as an indication of the expected power of the experiment. Interpretations are based on the assumption that the smaller the standard error of the requirement, the more powerful the experiment. The user can see the potential effects of using more or fewer subjects, different nutrient levels, etc., on the expected outcome of future experiments. From a theoretical perspective, each organism should have some enzyme-catalysed reaction whose rate is limited by the availability of some limiting nutrient. The response to the limiting nutrient should therefore be similar to enzyme kinetics. In conclusion, the workbook eliminates some of the guesswork involved in designing experiments and determining the minimum number of subjects needed to achieve desired outcomes.
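A Python analogue of the simulation logic described above (illustrative only; the workbook runs in Excel and supports several response models): simulate replicated responses from an assumed broken-line model, refit each simulated data set, and take the spread of the fitted break point as the expected standard error of the requirement.

```python
# Simulate a dose-response experiment and estimate the SE of the fitted requirement.
import numpy as np
from scipy.optimize import curve_fit

def broken_line(x, plateau, slope, req):
    # Response rises linearly below the requirement and plateaus above it.
    return np.where(x < req, plateau - slope * (req - x), plateau)

rng = np.random.default_rng(0)
levels = np.linspace(0.2, 1.2, 6)          # dietary nutrient levels (assumption)
reps = 5                                   # replicates per level (assumption)
true = dict(plateau=100.0, slope=60.0, req=0.8)
sd = 4.0                                   # observation-to-observation variation (assumption)

estimates = []
for _ in range(1000):
    x = np.repeat(levels, reps)
    y = broken_line(x, **true) + rng.normal(scale=sd, size=x.size)
    try:
        popt, _ = curve_fit(broken_line, x, y, p0=[90, 50, 0.7])
        estimates.append(popt[2])
    except RuntimeError:
        pass                               # skip non-converged fits

print("mean requirement estimate:", np.mean(estimates))
print("empirical SE of requirement:", np.std(estimates, ddof=1))
```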
Walton, David M; Macdermid, Joy C; Nielson, Warren; Teasell, Robert W; Chiasson, Marco; Brown, Lauren
2011-09-01
Clinical measurement. To evaluate the intrarater, interrater, and test-retest reliability of an accessible digital algometer, and to determine the minimum detectable change in normal healthy individuals and a clinical population with neck pain. Pressure pain threshold testing may be a valuable assessment and prognostic indicator for people with neck pain. To date, most of this research has been completed using algometers that are too resource intensive for routine clinical use. Novice raters (physiotherapy students or clinical physiotherapists) were trained to perform algometry testing over 2 clinically relevant sites: the angle of the upper trapezius and the belly of the tibialis anterior. A convenience sample of normal healthy individuals and a clinical sample of people with neck pain were tested by 2 different raters (all participants) and on 2 different days (healthy participants only). Intraclass correlation coefficient (ICC), standard error of measurement, and minimum detectable change were calculated. A total of 60 healthy volunteers and 40 people with neck pain were recruited. Intrarater reliability was almost perfect (ICC = 0.94-0.97), interrater reliability was substantial to near perfect (ICC = 0.79-0.90), and test-retest reliability was substantial (ICC = 0.76-0.79). Smaller change was detectable in the trapezius compared to the tibialis anterior. This study provides evidence that novice raters can perform digital algometry with adequate reliability for research and clinical use in people with and without neck pain.
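For reference, the measurement-error indices reported above follow from the reliability coefficient via the standard formulas SEM = SD·sqrt(1 − ICC) and MDC90 = 1.645·sqrt(2)·SEM; the sketch below applies them to placeholder values, not to the study's data.

```python
# Standard error of measurement (SEM) and minimum detectable change (MDC90) from an ICC.
import math

def sem(sd_between_subjects, icc):
    return sd_between_subjects * math.sqrt(1.0 - icc)

def mdc(sem_value, confidence_z=1.645):     # z for a 90% confidence level
    return confidence_z * math.sqrt(2.0) * sem_value

example_sd = 9.0    # kg, illustrative between-subject SD of pressure pain threshold
example_icc = 0.95  # illustrative reliability coefficient
s = sem(example_sd, example_icc)
print(f"SEM = {s:.2f} kg, MDC90 = {mdc(s):.2f} kg")
```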
Comparison of Single-Shot Echo-Planar and Line Scan Protocols for Diffusion Tensor Imaging1
Kubicki, Marek; Maier, Stephan E.; Westin, Carl-Frederik; Mamata, Hatsuho; Ersner-Hershfield, Hal; Estepar, Raul; Kikinis, Ron; Jolesz, Ferenc A.
2009-01-01
Rationale and Objectives: Both single-shot diffusion-weighted echo-planar imaging (EPI) and line scan diffusion imaging (LSDI) can be used to obtain magnetic resonance diffusion tensor data and to calculate directionally invariant diffusion anisotropy indices, i.e., indirect measures of the organization and coherence of white matter fibers in the brain. To date, there has been no comparison of EPI and LSDI. Because EPI is the most commonly used technique for acquiring diffusion tensor data, it is important to understand the limitations and advantages of LSDI relative to EPI. Materials and Methods: Five healthy volunteers underwent EPI and LSDI diffusion on a 1.5 Tesla magnet (General Electric Medical Systems, Milwaukee, WI). Four-mm thick coronal sections, covering the entire brain, were obtained. In addition, one subject was tested with both sequences over four sessions. For each image voxel, eigenvectors and eigenvalues of the diffusion tensor were calculated, and fractional anisotropy (FA) was derived. Several regions of interest were delineated, and for each, mean FA and estimated mean standard deviation were calculated and compared. Results: Results showed no significant differences between EPI and LSDI for mean FA for the five subjects. When inter-session reproducibility for one subject was evaluated, there was a significant difference between EPI and LSDI in FA for the corpus callosum and the right uncinate fasciculus. Moreover, errors associated with each FA measure were larger for EPI than for LSDI. Conclusion: Results indicate that both EPI- and LSDI-derived FA measures are sufficiently robust. However, when higher accuracy is needed, LSDI provides smaller error and smaller inter-subject and inter-session variability than EPI. PMID:14974598
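The fractional anisotropy index referred to above is a standard rotationally invariant function of the diffusion tensor's eigenvalues; the short sketch below computes it for a made-up tensor (the eigenvalues are illustrative, not from the study).

```python
# Fractional anisotropy: FA = sqrt(3/2) * ||lambda - mean(lambda)|| / ||lambda||.
import numpy as np

def fractional_anisotropy(eigenvalues):
    lam = np.asarray(eigenvalues, dtype=float)
    mean_lam = lam.mean()
    num = np.sqrt(((lam - mean_lam) ** 2).sum())
    den = np.sqrt((lam ** 2).sum())
    return np.sqrt(1.5) * num / den if den > 0 else 0.0

# Example voxel: strongly anisotropic diffusion along one fiber direction (made-up values)
eigvals = np.linalg.eigvalsh(np.diag([1.7e-3, 0.3e-3, 0.2e-3]))  # mm^2/s
print(f"FA = {fractional_anisotropy(eigvals):.3f}")
```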
SU-F-T-17: A Feasibility Study for the Transit Dosimetry with a Glass Dosimeter in Brachytherapy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Moon, S; Yoon, M; Chung, W
Purpose: Confirming the dose delivered to a patient is important to ensure the quality and safety of radiotherapy treatment. Measuring the transit dose of the patient during radiotherapy could be an interesting way to confirm the patient dose. In this study, we evaluated the feasibility of transit dosimetry with a glass dosimeter in brachytherapy. Methods: We made a phantom containing the glass dosimeters and placed it under a patient lying on a couch for cervix cancer brachytherapy. The 18 glass dosimeters were placed in the phantom, arranged 6 per row. A point 1 cm vertically from the source was prescribed 500.00 cGy. Solid phantoms of 0, 2, 4, 6, 8, and 10 cm thickness were placed between the source and the glass dosimeters. The transit dose was measured at each thickness using the glass dosimeters and compared with a treatment planning system (TPS). Results: When the transit dose was smaller than 10 cGy, the average of the differences between measured values and values calculated by the TPS was 0.50 cGy and the standard deviation was 0.69 cGy. When the transit dose was smaller than 100 cGy, the average error was 1.67 ± 4.01 cGy. The error at a point near the prescription point was −14.02 cGy per 500.00 cGy of the prescription dose. Conclusion: The distances from the source to the skin of the patient are generally within 10 cm for cervix cancer cases in brachytherapy. The results of this preliminary study showed the potential of the glass dosimeter as a transit dosimeter in brachytherapy.
The Calibration of Gloss Reference Standards
NASA Astrophysics Data System (ADS)
Budde, W.
1980-04-01
In present international and national standards for the measurement of specular gloss, the primary and secondary reference standards are defined for monochromatic radiation. However, the specified glossmeter uses polychromatic radiation (CIE Standard Illuminant C) and the CIE Standard Photometric Observer. This produces errors in practical gloss measurements of up to 0.5%. Although this may be considered small compared with the accuracy of most practical gloss measurements, such an error should not be tolerated in the calibration of secondary standards. Corrections for such errors are presented, and various alternatives for amendments of the existing documentary standards are discussed.
A variational regularization of Abel transform for GPS radio occultation
NASA Astrophysics Data System (ADS)
Wee, Tae-Kwon
2018-04-01
In the Global Positioning System (GPS) radio occultation (RO) technique, the inverse Abel transform of the measured bending angle (Abel inversion, hereafter AI) is the standard means of deriving the refractivity. While concise and straightforward to apply, the AI accumulates and propagates the measurement error downward. The measurement error propagation is detrimental to the refractivity at lower altitudes. In particular, it builds up a negative refractivity bias in the tropical lower troposphere. An alternative to AI is the numerical inversion of the forward Abel transform, which does not integrate the error-bearing measurement and thus precludes the error propagation. The variational regularization (VR) proposed in this study approximates the inversion of the forward Abel transform by an optimization problem in which the regularized solution describes the measurement as closely as possible within the measurement's considered accuracy. The optimization problem is then solved iteratively by means of the adjoint technique. VR is formulated with error covariance matrices, which permit a rigorous incorporation of prior information on measurement error characteristics and the solution's desired behavior into the regularization. VR holds the control variable in the measurement space to take advantage of the posterior height determination and to negate the measurement error due to the mismodeling of the refractional radius. The advantages of having the solution and the measurement in the same space are elaborated using a purposely corrupted synthetic sounding with a known true solution. The competency of VR relative to AI is validated with a large number of actual RO soundings. The comparison to nearby radiosonde observations shows that VR attains considerably smaller random and systematic errors than AI. A noteworthy finding is that in the heights and areas where the measurement bias is supposedly small, VR follows AI very closely in the mean refractivity, departing from the first guess. In the lowest few kilometers, where AI produces a large negative refractivity bias, VR reduces the refractivity bias substantially with the aid of the background, which in this study is the operational forecasts of the European Centre for Medium-Range Weather Forecasts (ECMWF). It is concluded, based on the results presented in this study, that VR offers a definite advantage over AI in the quality of refractivity.
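A minimal numerical sketch of the variational idea described above, assuming a generic linear forward operator H standing in for the forward Abel transform, a background profile x_b with error covariance B, and measurements y with error covariance R; the operator, covariances, and dimensions below are illustrative placeholders, not the paper's actual formulation.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical sizes: n model levels, m measurement samples.
n, m = 50, 60
rng = np.random.default_rng(0)

H = rng.normal(size=(m, n)) / np.sqrt(n)        # stand-in for the forward operator
x_true = np.sin(np.linspace(0, np.pi, n))       # "true" refractivity-like profile
y = H @ x_true + rng.normal(scale=0.05, size=m) # error-bearing measurement

x_b = np.zeros(n)              # background (first guess)
B_inv = np.eye(n) / 0.5**2     # inverse background-error covariance
R_inv = np.eye(m) / 0.05**2    # inverse measurement-error covariance

def cost_and_grad(x):
    """J(x) = (Hx - y)' R^-1 (Hx - y) + (x - x_b)' B^-1 (x - x_b)."""
    d = H @ x - y
    b = x - x_b
    J = d @ R_inv @ d + b @ B_inv @ b
    grad = 2.0 * (H.T @ (R_inv @ d) + B_inv @ b)  # adjoint of H is H.T here
    return J, grad

res = minimize(cost_and_grad, x_b, jac=True, method="L-BFGS-B")
print("residual norm:", np.linalg.norm(H @ res.x - y))
print("rms error vs truth:", np.sqrt(np.mean((res.x - x_true) ** 2)))
```

With a linear stand-in operator the adjoint is simply the transpose; the paper's iterative adjoint solution of the nonlinear Abel problem follows the same cost-function structure.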
Simplified Approach Charts Improve Data Retrieval Performance
Stewart, Michael; Laraway, Sean; Jordan, Kevin; Feary, Michael S.
2016-01-01
The effectiveness of different instrument approach charts to deliver minimum visibility and altitude information during airport equipment outages was investigated. Eighteen pilots flew simulated instrument approaches in three conditions: (a) normal operations using a standard approach chart (standard-normal), (b) equipment outage conditions using a standard approach chart (standard-outage), and (c) equipment outage conditions using a prototype decluttered approach chart (prototype-outage). Errors and retrieval times in identifying minimum altitudes and visibilities were measured. The standard-outage condition produced significantly more errors and longer retrieval times versus the standard-normal condition. The prototype-outage condition had significantly fewer errors and shorter retrieval times than did the standard-outage condition. The prototype-outage condition produced significantly fewer errors but similar retrieval times when compared with the standard-normal condition. Thus, changing the presentation of minima may reduce risk and increase safety in instrument approaches, specifically with airport equipment outages. PMID:28491009
Tucker, Matthew A.; Idrissi, Ali; Almeida, Diogo
2015-01-01
In the processing of subject-verb agreement, non-subject plural nouns following a singular subject sometimes “attract” the agreement with the verb, despite not being grammatically licensed to do so. This phenomenon generates agreement errors in production and an increased tendency to fail to notice such errors in comprehension, thereby providing a window into the representation of grammatical number in working memory during sentence processing. Research in this topic, however, is primarily done in related languages with similar agreement systems. In order to increase the cross-linguistic coverage of the processing of agreement, we conducted a self-paced reading study in Modern Standard Arabic. We report robust agreement attraction errors in relative clauses, a configuration not particularly conducive to the generation of such errors for all possible lexicalizations. In particular, we examined the speed with which readers retrieve a subject controller for both grammatical and ungrammatical agreeing verbs in sentences where verbs are preceded by two NPs, one of which is a local non-subject NP that can act as a distractor for the successful resolution of subject-verb agreement. Our results suggest that the frequency of errors is modulated by the kind of plural formation strategy used on the attractor noun: nouns which form plurals by suffixation condition high rates of attraction, whereas nouns which form their plurals by internal vowel change (ablaut) generate lower rates of errors and reading-time attraction effects of smaller magnitudes. Furthermore, we show some evidence that these agreement attraction effects are mostly contained in the right tail of reaction time distributions. We also present modeling data in the ACT-R framework which supports a view of these ablauting patterns wherein they are differentially specified for number and evaluate the consequences of possible representations for theories of grammar and parsing. PMID:25914651
TOPEX/POSEIDON orbit maintenance maneuver design
NASA Technical Reports Server (NTRS)
Bhat, R. S.; Frauenholz, R. B.; Cannell, Patrick E.
1990-01-01
The Ocean Topography Experiment (TOPEX/POSEIDON) mission orbit requirements are outlined, as well as its control and maneuver spacing requirements including longitude and time targeting. A ground-track prediction model dealing with geopotential, luni-solar gravity, and atmospheric-drag perturbations is considered. Targeting with all modeled perturbations is discussed, and such ground-track prediction errors as initial semimajor axis, orbit-determination, maneuver-execution, and atmospheric-density modeling errors are assessed. A longitude targeting strategy for two extreme situations is investigated employing all modeled perturbations and prediction errors. It is concluded that atmospheric-drag modeling errors are the prevailing ground-track prediction error source early in the mission during high solar flux, and that low solar-flux levels expected late in the experiment stipulate smaller maneuver magnitudes.
ERIC Educational Resources Information Center
Wang, Tianyou
2009-01-01
Holland and colleagues derived a formula for analytical standard error of equating using the delta-method for the kernel equating method. Extending their derivation, this article derives an analytical standard error of equating procedure for the conventional percentile rank-based equipercentile equating with log-linear smoothing. This procedure is…
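The derivation rests on the delta method; as a generic, hedged illustration (not the equating-specific formula), the standard error of a smooth transformation g of an estimate is approximately |g'(theta_hat)| times the standard error of the estimate.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy setting: theta_hat is a sample mean; g is a nonlinear transformation.
n = 200
theta_true = 2.0
g = np.log                      # transformation of interest
g_prime = lambda t: 1.0 / t     # its derivative

# Delta-method standard error from a single sample.
sample = rng.normal(theta_true, 1.0, size=n)
theta_hat = sample.mean()
se_theta = sample.std(ddof=1) / np.sqrt(n)
se_delta = abs(g_prime(theta_hat)) * se_theta

# Compare with the empirical SD of g(theta_hat) over many replications.
reps = np.array([g(rng.normal(theta_true, 1.0, size=n).mean()) for _ in range(5000)])
print("delta-method SE:", se_delta)
print("empirical SE:   ", reps.std(ddof=1))
```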
Evidence of Non-Coincidence between Radio and Optical Positions of ICRF Sources.
NASA Astrophysics Data System (ADS)
Andrei, A. H.; da Silva, D. N.; Assafin, M.; Vieira Martins, R.
2003-11-01
Silva Neto et al. (SNAAVM: 2002) show that when the ICRF Ext1 sources' standard radio positions (Ma et al., 1998) are compared against their optical counterpart positions (ZZHJVW: Zacharias et al., 1999; USNO A2.0: Monet et al., 1998), a systematic pattern appears, which depends on the radio structure index (Fey and Charlot, 2000). The optical-to-radio offsets produce a distribution suggesting that the coincidence of the optical and radio centroids is worse for the radio-extended than for the radio-compact sources. On average, the coincidence between the optical and radio centroids is found to be 7.9 +/- 1.1 mas smaller for the compact than for the extended sources. Such an effect is reasonably large, and certainly much too large to be due to errors in the VLBI radio positions. On the other hand, it is too small to be attributed to errors in the optical positions, which moreover should be independent of the radio structure. Thus, other than a true pattern of centroid non-coincidence, the remaining explanation is a chance result. This paper summarizes the several statistical tests used to discard the chance explanation.
NASA Astrophysics Data System (ADS)
Wang, Ji; Zhang, Ru; Yan, Yuting; Dong, Xiaoqiang; Li, Jun Ming
2017-05-01
Hazardous gas leaks in the atmosphere can cause significant economic losses in addition to environmental hazards, such as fires and explosions. A three-stage hazardous gas leak source localization method was developed that uses movable and stationary gas concentration sensors. The method calculates a preliminary source inversion with a modified genetic algorithm (MGA) and has the potential to crossover with eliminated individuals from the population, following the selection of the best candidate. The method then determines a search zone using Markov Chain Monte Carlo (MCMC) sampling, utilizing a partial evaluation strategy. The leak source is then accurately localized using a modified guaranteed convergence particle swarm optimization algorithm with several bad-performing individuals, following selection of the most successful individual with dynamic updates. The first two stages are based on data collected by motionless sensors, and the last stage is based on data from movable robots with sensors. The measurement error adaptability and the effect of the leak source location were analyzed. The test results showed that this three-stage localization process can localize a leak source within 1.0 m of the source for different leak source locations, with measurement error standard deviation smaller than 2.0.
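The core of such source-term inversion is minimizing the mismatch between measured and modeled concentrations over candidate source parameters. The sketch below uses an invented isotropic decay model and SciPy's differential_evolution global optimizer as a stand-in for the paper's MGA/MCMC/PSO stages; the sensor layout, dispersion model, and parameters are hypothetical.

```python
import numpy as np
from scipy.optimize import differential_evolution

rng = np.random.default_rng(2)

def concentration(source_xy, q, sensors):
    """Toy dispersion model: concentration falls off as q / (1 + r^2)."""
    r2 = np.sum((sensors - source_xy) ** 2, axis=1)
    return q / (1.0 + r2)

# Stationary sensors on a grid; the true source and release rate are hidden.
sensors = np.array([[x, y] for x in range(0, 21, 5) for y in range(0, 21, 5)], float)
true_xy, true_q = np.array([12.3, 7.8]), 50.0
measured = concentration(true_xy, true_q, sensors) + rng.normal(scale=0.05, size=len(sensors))

def misfit(params):
    """Sum of squared differences between modeled and measured concentrations."""
    xy, q = params[:2], params[2]
    return np.sum((concentration(xy, q, sensors) - measured) ** 2)

result = differential_evolution(misfit, bounds=[(0, 20), (0, 20), (1, 100)], seed=3)
x, y, q = result.x
print(f"estimated source: ({x:.2f}, {y:.2f}), release rate {q:.1f}")
print(f"location error: {np.hypot(x - true_xy[0], y - true_xy[1]):.2f} m")
```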
Regier, Michael D; Moodie, Erica E M
2016-05-01
We propose an extension of the EM algorithm that exploits the common assumption of unique parameterization, corrects for biases due to missing data and measurement error, converges for the specified model when the standard implementation of the EM algorithm has a low probability of convergence, and reduces a potentially complex algorithm into a sequence of smaller, simpler, self-contained EM algorithms. We use the theory surrounding the EM algorithm to derive the theoretical results of our proposal, showing that an optimal solution over the parameter space is obtained. A simulation study is used to explore the finite sample properties of the proposed extension when there are missing data and measurement error. We observe that partitioning the EM algorithm into simpler steps may provide better bias reduction in the estimation of model parameters. The ability to break down a complicated problem into a series of simpler, more accessible problems will permit a broader implementation of the EM algorithm, permit the use of software packages that now implement and/or automate the EM algorithm, and make the EM algorithm more accessible to a wider and more general audience.
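For readers less familiar with the basic E-step/M-step structure that the proposed partitioning builds on, a minimal generic EM iteration for a two-component Gaussian mixture (not the authors' extension) looks like this:

```python
import numpy as np

rng = np.random.default_rng(4)

# Simulated data from a two-component Gaussian mixture with unit variances.
x = np.concatenate([rng.normal(-2.0, 1.0, 300), rng.normal(3.0, 1.0, 700)])

def normal_pdf(x, mu):
    return np.exp(-0.5 * (x - mu) ** 2) / np.sqrt(2 * np.pi)

# Initial guesses for the mixing weight and the two means.
pi, mu1, mu2 = 0.5, -1.0, 1.0
for _ in range(100):
    # E-step: posterior responsibility of component 1 for each observation.
    p1 = pi * normal_pdf(x, mu1)
    p2 = (1 - pi) * normal_pdf(x, mu2)
    r = p1 / (p1 + p2)
    # M-step: update parameters using the responsibilities as weights.
    pi = r.mean()
    mu1 = np.sum(r * x) / np.sum(r)
    mu2 = np.sum((1 - r) * x) / np.sum(1 - r)

print(f"pi={pi:.3f}, mu1={mu1:.3f}, mu2={mu2:.3f}")  # expect roughly 0.3, -2, 3
```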
Accuracy assessment of the global TanDEM-X Digital Elevation Model with GPS data
NASA Astrophysics Data System (ADS)
Wessel, Birgit; Huber, Martin; Wohlfart, Christian; Marschalk, Ursula; Kosmann, Detlev; Roth, Achim
2018-05-01
The primary goal of the German TanDEM-X mission is the generation of a highly accurate and global Digital Elevation Model (DEM) with global accuracies of at least 10 m absolute height error (linear 90% error). The global TanDEM-X DEM acquired with single-pass SAR interferometry was finished in September 2016. This paper provides a unique accuracy assessment of the final TanDEM-X global DEM using two different GPS point reference data sets, which are distributed across all continents, to fully characterize the absolute height error. Firstly, the absolute vertical accuracy is examined with about three million globally distributed kinematic GPS (KGPS) points derived from 19 KGPS tracks covering a total length of about 66,000 km. Secondly, a comparison is performed with more than 23,000 "GPS on Bench Marks" (GPS-on-BM) points provided by the US National Geodetic Survey (NGS) scattered across 14 different land cover types of the US National Land Cover Database (NLCD). Both GPS comparisons prove an absolute vertical mean error of the TanDEM-X DEM smaller than ±0.20 m, a Root Mean Square Error (RMSE) smaller than 1.4 m and an excellent absolute 90% linear height error below 2 m. The RMSE values are sensitive to land cover types. For low vegetation the RMSE is ±1.1 m, whereas it is slightly higher for developed areas (±1.4 m) and for forests (±1.8 m). This validation confirms an outstanding absolute height error at 90% confidence level of the global TanDEM-X DEM, outperforming the requirement by a factor of five. Due to its extensive and globally distributed reference data sets, this study is of considerable interest for scientific and commercial applications.
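The three accuracy statistics quoted (mean error, RMSE, and the 90% linear height error) can be computed from DEM-minus-GPS height differences roughly as follows; the differences below are synthetic, and LE90 is taken here as the 90th percentile of the absolute errors.

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic DEM-minus-GPS height differences in metres (stand-in for real checkpoints).
diff = rng.normal(loc=0.1, scale=1.2, size=100_000)

mean_error = diff.mean()                # bias
rmse = np.sqrt(np.mean(diff ** 2))      # root mean square error
le90 = np.percentile(np.abs(diff), 90)  # 90% linear height error

print(f"mean error: {mean_error:+.2f} m")
print(f"RMSE:       {rmse:.2f} m")
print(f"LE90:       {le90:.2f} m")
```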
Blume-Kohout, Robin; Gamble, John King; Nielsen, Erik; ...
2017-02-15
Quantum information processors promise fast algorithms for problems inaccessible to classical computers. But since qubits are noisy and error-prone, they will depend on fault-tolerant quantum error correction (FTQEC) to compute reliably. Quantum error correction can protect against general noise if—and only if—the error in each physical qubit operation is smaller than a certain threshold. The threshold for general errors is quantified by their diamond norm. Until now, qubits have been assessed primarily by randomized benchmarking, which reports a different error rate that is not sensitive to all errors, and cannot be compared directly to diamond norm thresholds. Here we use gate set tomography to completely characterize operations on a trapped-Yb+-ion qubit and demonstrate with greater than 95% confidence that they satisfy a rigorous threshold for FTQEC (diamond norm ≤6.7 × 10−4).
Blume-Kohout, Robin; Gamble, John King; Nielsen, Erik; Rudinger, Kenneth; Mizrahi, Jonathan; Fortier, Kevin; Maunz, Peter
2017-01-01
Quantum information processors promise fast algorithms for problems inaccessible to classical computers. But since qubits are noisy and error-prone, they will depend on fault-tolerant quantum error correction (FTQEC) to compute reliably. Quantum error correction can protect against general noise if—and only if—the error in each physical qubit operation is smaller than a certain threshold. The threshold for general errors is quantified by their diamond norm. Until now, qubits have been assessed primarily by randomized benchmarking, which reports a different error rate that is not sensitive to all errors, and cannot be compared directly to diamond norm thresholds. Here we use gate set tomography to completely characterize operations on a trapped-Yb+-ion qubit and demonstrate with greater than 95% confidence that they satisfy a rigorous threshold for FTQEC (diamond norm ≤6.7 × 10−4). PMID:28198466
Big Data and Large Sample Size: A Cautionary Note on the Potential for Bias
Chambers, David A.; Glasgow, Russell E.
2014-01-01
Abstract A number of commentaries have suggested that large studies are more reliable than smaller studies and there is a growing interest in the analysis of “big data” that integrates information from many thousands of persons and/or different data sources. We consider a variety of biases that are likely in the era of big data, including sampling error, measurement error, multiple comparisons errors, aggregation error, and errors associated with the systematic exclusion of information. Using examples from epidemiology, health services research, studies on determinants of health, and clinical trials, we conclude that it is necessary to exercise greater caution to be sure that big sample size does not lead to big inferential errors. Despite the advantages of big studies, large sample size can magnify the bias associated with error resulting from sampling or study design. Clin Trans Sci 2014; Volume #: 1–5 PMID:25043853
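A small simulation illustrates the abstract's caution: when the sampling mechanism is biased, increasing the sample size only tightens the confidence interval around the wrong value. The sampling weights below are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(6)

true_mean = 0.0
population = rng.normal(true_mean, 1.0, size=200_000)
# Biased sampling: units with larger values are more likely to be observed.
weights = np.exp(0.5 * population)
weights /= weights.sum()

for n in (100, 10_000, 100_000):
    sample = rng.choice(population, size=n, replace=True, p=weights)
    se = sample.std(ddof=1) / np.sqrt(n)
    print(f"n={n:>7}: estimate {sample.mean():+.3f} +/- {1.96 * se:.3f}  (truth {true_mean:+.3f})")
```

The estimate stays biased as n grows, while the interval around it shrinks, which is exactly the inferential trap described above.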
A Note on Standard Deviation and Standard Error
ERIC Educational Resources Information Center
Hassani, Hossein; Ghodsi, Mansoureh; Howell, Gareth
2010-01-01
Many students confuse the standard deviation and standard error of the mean and are unsure which, if either, to use in presenting data. In this article, we endeavour to address these questions and cover some related ambiguities about these quantities.
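The distinction can be shown in a few lines: the standard deviation describes the spread of individual observations, while the standard error of the mean (SD divided by the square root of n) describes the precision of the sample mean. The values below are invented.

```python
import numpy as np

rng = np.random.default_rng(7)
x = rng.normal(loc=120.0, scale=15.0, size=50)   # e.g., 50 measurements

sd = x.std(ddof=1)            # spread of individual observations
se = sd / np.sqrt(len(x))     # precision of the sample mean

print(f"mean = {x.mean():.1f}, SD = {sd:.1f}, SE of mean = {se:.1f}")
# Report mean +/- SD to describe the data, mean +/- SE to describe the estimate.
```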
Cooley, Richard L.
1982-01-01
Prior information on the parameters of a groundwater flow model can be used to improve parameter estimates obtained from nonlinear regression solution of a modeling problem. Two scales of prior information can be available: (1) prior information having known reliability (that is, bias and random error structure) and (2) prior information consisting of best available estimates of unknown reliability. A regression method that incorporates the second scale of prior information assumes the prior information to be fixed for any particular analysis to produce improved, although biased, parameter estimates. Approximate optimization of two auxiliary parameters of the formulation is used to help minimize the bias, which is almost always much smaller than that resulting from standard ridge regression. It is shown that if both scales of prior information are available, then a combined regression analysis may be made.
NASA Technical Reports Server (NTRS)
Noll, Keith S.; Hammel, H. B.; Young, Leslie; Joiner, Joanna; Hackwell, J.; Lynch, D. K.; Russell, R.
1993-01-01
The Broadband Array Spectrograph System with the NASA Infrared Telescope Facility was used to obtain 3- to 13-micron spectra of Io on June 14-16, 1991. The extinction correction and its error for each standard star (Alpha Boo, Alpha Lyr, and Mu UMa) were found individually by performing an unweighted linear fit of instrumental magnitude as a function of airmass. The model results indicate two significant trends: (1) modest differences between the two hemispheres at lower background temperatures and (2) a tendency to higher temperatures, smaller areas, and less power from the warm component at higher background temperatures with an increased contrast between the two hemispheres. The increased flux from 8 to 13 microns is due primarily to a greater area on the Loki (trailing) hemisphere for the warm component, although temperature also plays a role.
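The extinction correction described, an unweighted linear fit of instrumental magnitude against airmass for each standard star, might be computed as in the following sketch; the magnitudes and airmasses are invented.

```python
import numpy as np

# Invented instrumental magnitudes of a standard star at several airmasses.
airmass = np.array([1.05, 1.20, 1.45, 1.80, 2.10])
inst_mag = np.array([8.42, 8.45, 8.51, 8.58, 8.65])

# Unweighted linear fit: inst_mag = zero_point + extinction_coeff * airmass.
coeffs, cov = np.polyfit(airmass, inst_mag, deg=1, cov=True)
extinction_coeff, zero_point = coeffs
err = np.sqrt(np.diag(cov))

print(f"extinction coefficient: {extinction_coeff:.3f} +/- {err[0]:.3f} mag/airmass")
print(f"magnitude extrapolated to zero airmass: {zero_point:.3f} +/- {err[1]:.3f}")
```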
Time-of-flight PET time calibration using data consistency
NASA Astrophysics Data System (ADS)
Defrise, Michel; Rezaei, Ahmadreza; Nuyts, Johan
2018-05-01
This paper presents new data-driven methods for the time-of-flight (TOF) calibration of positron emission tomography (PET) scanners. These methods are derived from the consistency condition for TOF PET; they can be applied to data measured with an arbitrary tracer distribution and are numerically efficient because they do not require a preliminary image reconstruction from the non-TOF data. Two-dimensional simulations are presented for one of the methods, which only involves the first two moments of the data with respect to the TOF variable. The numerical results show that this method estimates the detector timing offsets with errors that are larger than those obtained via an initial non-TOF reconstruction, but remain small compared with the TOF resolution and thereby have a limited impact on the quantitative accuracy of the activity image estimated with standard maximum likelihood reconstruction algorithms.
Flux Sampling Errors for Aircraft and Towers
NASA Technical Reports Server (NTRS)
Mahrt, Larry
1998-01-01
Various errors and influences leading to differences between tower- and aircraft-measured fluxes are surveyed. This survey is motivated by reports in the literature that aircraft fluxes are sometimes smaller than tower-measured fluxes. Both tower and aircraft flux errors are larger with surface heterogeneity due to several independent effects. Surface heterogeneity may cause tower flux errors to increase with decreasing wind speed. Techniques to assess flux sampling error are reviewed. Such error estimates suffer various degrees of inapplicability in real geophysical time series due to nonstationarity of tower time series (or inhomogeneity of aircraft data). A new measure for nonstationarity is developed that eliminates assumptions on the form of the nonstationarity inherent in previous methods. When this nonstationarity measure becomes large, the surface energy imbalance increases sharply. Finally, strategies for obtaining adequate flux sampling using repeated aircraft passes and grid patterns are outlined.
ERIC Educational Resources Information Center
Wang, Tianyou; And Others
M. J. Kolen, B. A. Hanson, and R. L. Brennan (1992) presented a procedure for assessing the conditional standard error of measurement (CSEM) of scale scores using a strong true-score model. They also investigated the ways of using nonlinear transformation from number-correct raw score to scale score to equalize the conditional standard error along…
NASA Astrophysics Data System (ADS)
Frasson, Renato Prata de Moraes; Wei, Rui; Durand, Michael; Minear, J. Toby; Domeneghetti, Alessio; Schumann, Guy; Williams, Brent A.; Rodriguez, Ernesto; Picamilh, Christophe; Lion, Christine; Pavelsky, Tamlin; Garambois, Pierre-André
2017-10-01
The upcoming Surface Water and Ocean Topography (SWOT) mission will measure water surface heights and widths for rivers wider than 100 m. At its native resolution, SWOT height errors are expected to be on the order of meters, which prevent the calculation of water surface slopes and the use of slope-dependent discharge equations. To mitigate height and width errors, the high-resolution measurements will be grouped into reaches (˜5 to 15 km), where slope and discharge are estimated. We describe three automated river segmentation strategies for defining optimum reaches for discharge estimation: (1) arbitrary lengths, (2) identification of hydraulic controls, and (3) sinuosity. We test our methodologies on 9 and 14 simulated SWOT overpasses over the Sacramento and the Po Rivers, respectively, which we compare against hydraulic models of each river. Our results show that generally, height, width, and slope errors decrease with increasing reach length. However, the hydraulic controls and the sinuosity methods led to better slopes and often height errors that were either smaller or comparable to those of arbitrary reaches of compatible sizes. Estimated discharge errors caused by the propagation of height, width, and slope errors through the discharge equation were often smaller for sinuosity (on average 8.5% for the Sacramento and 6.9% for the Po) and hydraulic control (Sacramento: 7.3% and Po: 5.9%) reaches than for arbitrary reaches of comparable lengths (Sacramento: 8.6% and Po: 7.8%). This analysis suggests that reach definition methods that preserve the hydraulic properties of the river network may lead to better discharge estimates.
NASA Astrophysics Data System (ADS)
Smith, James F.
2017-11-01
With the goal of designing interferometers and interferometer sensors, e.g., LADARs with enhanced sensitivity, resolution, and phase estimation, states using quantum entanglement are discussed. These states include N00N states, plain M and M states (PMMSs), and linear combinations of M and M states (LCMMS). Closed form expressions for the optimal detection operators; visibility, a measure of the state's robustness to loss and noise; a resolution measure; and phase estimate error, are provided in closed form. The optimal resolution for the maximum visibility and minimum phase error are found. For the visibility, comparisons between PMMSs, LCMMS, and N00N states are provided. For the minimum phase error, comparisons between LCMMS, PMMSs, N00N states, separate photon states (SPSs), the shot noise limit (SNL), and the Heisenberg limit (HL) are provided. A representative collection of computational results illustrating the superiority of LCMMS when compared to PMMSs and N00N states is given. It is found that for a resolution 12 times the classical result LCMMS has visibility 11 times that of N00N states and 4 times that of PMMSs. For the same case, the minimum phase error for LCMMS is 10.7 times smaller than that of PMMS and 29.7 times smaller than that of N00N states.
Comparing Measurement Error between Two Different Methods of Measurement of Various Magnitudes
ERIC Educational Resources Information Center
Zavorsky, Gerald S.
2010-01-01
Measurement error is a common problem in several fields of research such as medicine, physiology, and exercise science. The standard deviation of repeated measurements on the same person is the measurement error. One way of presenting measurement error is called the repeatability, which is 2.77 multiplied by the within subject standard deviation.…
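A short sketch of the quantities mentioned, assuming two repeated measurements per subject and invented values: the within-subject standard deviation is the measurement error, and 2.77 times it is the repeatability coefficient.

```python
import numpy as np

# Invented data: 2 repeated measurements on each of 5 subjects.
measurements = np.array([
    [12.1, 12.6],
    [15.3, 14.8],
    [11.0, 11.7],
    [13.9, 13.5],
    [16.2, 16.9],
])

# Within-subject variance = mean of the per-subject variances (equal replicates).
within_var = measurements.var(axis=1, ddof=1).mean()
within_sd = np.sqrt(within_var)      # the "measurement error"
repeatability = 2.77 * within_sd     # 95% limit for the difference of two measurements

print(f"within-subject SD: {within_sd:.2f}")
print(f"repeatability (2.77 x SD): {repeatability:.2f}")
```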
Calibration of HST wide field camera for quantitative analysis of faint galaxy images
NASA Technical Reports Server (NTRS)
Ratnatunga, Kavan U.; Griffiths, Richard E.; Casertano, Stefano; Neuschaefer, Lyman W.; Wyckoff, Eric W.
1994-01-01
We present the methods adopted to optimize the calibration of images obtained with the Hubble Space Telescope (HST) Wide Field Camera (WFC) (1991-1993). Our main goal is to improve quantitative measurement of faint images, with special emphasis on the faint (I approximately 20-24 mag) stars and galaxies observed as a part of the Medium-Deep Survey. Several modifications to the standard calibration procedures have been introduced, including improved bias and dark images, and a new supersky flatfield obtained by combining a large number of relatively object-free Medium-Deep Survey exposures of random fields. The supersky flat has a pixel-to-pixel rms error of about 2.0% in F555W and of 2.4% in F785LP; large-scale variations are smaller than 1% rms. Overall, our modifications improve the quality of faint images with respect to the standard calibration by about a factor of five in photometric accuracy and about 0.3 mag in sensitivity, corresponding to about a factor of two in observing time. The relevant calibration images have been made available to the scientific community.
NASA Astrophysics Data System (ADS)
Fernandez, D.; Torregrosa, A.; Weiss-Penzias, P. S.; Mairs, A. A.; Wilson, S.; Bowman, M.; Barkley, T.; Gravelle, M.; Oliphant, A. J.
2015-12-01
Since 2014 an extensive network of standard fog collectors has been deployed along the coast of California, from as far south as southern Big Sur (36.1° N) to as far north as Arcata (40.8° N) at over a dozen sites that contain a total of several dozen of the fog collecting devices. This research is being done in conjunction with the Fognet Project that is looking at the levels of monomethyl mercury in fog water. Data collected reveal a fascinating variability in the amount of fog water collected across different scales of distance, elevation, time and location. In addition, a number of different types of mesh have been deployed and co-located to examine the variation in their fog water collecting capability in identical conditions. Mesh variations exhibit smaller variability across mesh type than had previously been expected. This study documents results found thus far across the network and also discusses the quantification of the errors associated with tipping bucket rain gauge measurements of water volumes and thus the importance of tipping bucket rain gauge calibration.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hollman, David S.; Department of Chemistry, Virginia Tech, Blacksburg, Virginia 24061; Schaefer, Henry F.
2014-02-14
A local density fitting scheme is considered in which atomic orbital (AO) products are approximated using only auxiliary AOs located on one of the nuclei in that product. The possibility of variational collapse to an unphysical “attractive electron” state that can affect such density fitting [P. Merlot, T. Kjærgaard, T. Helgaker, R. Lindh, F. Aquilante, S. Reine, and T. B. Pedersen, J. Comput. Chem. 34, 1486 (2013)] is alleviated by including atom-wise semidiagonal integrals exactly. Our approach leads to a significant decrease in the computational cost of density fitting for Hartree–Fock theory while still producing results with errors 2–5 times smaller than standard, nonlocal density fitting. Our method allows for large Hartree–Fock and density functional theory computations with exact exchange to be carried out efficiently on large molecules, which we demonstrate by benchmarking our method on 200 of the most widely used prescription drug molecules. Our new fitting scheme leads to smooth and artifact-free potential energy surfaces and the possibility of relatively simple analytic gradients.
NASA Technical Reports Server (NTRS)
Lin, Shu; Rhee, Dojun; Rajpal, Sandeep
1993-01-01
This report presents a low-complexity, high-performance concatenated coding scheme for high-speed satellite communications. In this proposed scheme, the NASA Standard Reed-Solomon (RS) code over GF(2^8) is used as the outer code and the second-order Reed-Muller (RM) code of Hamming distance 8 is used as the inner code. The RM inner code has a very simple trellis structure and is decoded with the soft-decision Viterbi decoding algorithm. It is shown that the proposed concatenated coding scheme achieves an error performance which is comparable to that of the NASA TDRS concatenated coding scheme in which the NASA Standard rate-1/2 convolutional code of constraint length 7 and d_free = 10 is used as the inner code. However, the proposed RM inner code has much smaller decoding complexity, less decoding delay, and much higher decoding speed. Consequently, the proposed concatenated coding scheme is suitable for reliable high-speed satellite communications, and it may be considered as an alternate coding scheme for the NASA TDRS system.
Berke, Ethan M; Shi, Xun
2009-04-29
Travel time is an important metric of geographic access to health care. We compared strategies of estimating travel times when only subject ZIP code data were available. Using simulated data from New Hampshire and Arizona, we estimated travel times to nearest cancer centers by using: 1) geometric centroid of ZIP code polygons as origins, 2) population centroids as origin, 3) service area rings around each cancer center, assigning subjects to rings by assuming they are evenly distributed within their ZIP code, 4) service area rings around each center, assuming the subjects follow the population distribution within the ZIP code. We used travel times based on street addresses as true values to validate estimates. Population-based methods have smaller errors than geometry-based methods. Within categories (geometry or population), centroid and service area methods have similar errors. Errors are smaller in urban areas than in rural areas. Population-based methods are superior to the geometry-based methods, with the population centroid method appearing to be the best choice for estimating travel time. Estimates in rural areas are less reliable.
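The geometric part of such comparisons reduces to assigning each subject an origin point and finding the nearest center. The sketch below uses straight-line (haversine) distance and an assumed average speed as a crude travel-time proxy; the coordinates and speed are invented, and real estimates would come from a road network.

```python
import numpy as np

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometres between two points given in degrees."""
    lat1, lon1, lat2, lon2 = map(np.radians, (lat1, lon1, lat2, lon2))
    a = (np.sin((lat2 - lat1) / 2) ** 2
         + np.cos(lat1) * np.cos(lat2) * np.sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371.0 * np.arcsin(np.sqrt(a))

# Invented ZIP-code centroids (geometric or population-weighted) and cancer centers.
zip_centroids = np.array([[43.20, -71.54], [43.64, -72.32], [44.47, -73.21]])
centers = np.array([[43.70, -72.29], [42.36, -71.06]])
speed_kmh = 60.0  # assumed average road speed for the crude proxy

for lat, lon in zip_centroids:
    d = [haversine_km(lat, lon, clat, clon) for clat, clon in centers]
    nearest = int(np.argmin(d))
    minutes = min(d) / speed_kmh * 60.0
    print(f"ZIP centroid ({lat:.2f}, {lon:.2f}) -> nearest center {nearest}, ~{minutes:.0f} min")
```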
Improving the analysis of composite endpoints in rare disease trials.
McMenamin, Martina; Berglind, Anna; Wason, James M S
2018-05-22
Composite endpoints are recommended in rare diseases to increase power and/or to sufficiently capture complexity. Often, they are in the form of responder indices which contain a mixture of continuous and binary components. Analyses of these outcomes typically treat them as binary, thus only using the dichotomisations of continuous components. The augmented binary method offers a more efficient alternative and is therefore especially useful for rare diseases. Previous work has indicated the method may have poorer statistical properties when the sample size is small. Here we investigate small sample properties and implement small sample corrections. We re-sample from a previous trial with sample sizes varying from 30 to 80. We apply the standard binary and augmented binary methods and determine the power, type I error rate, coverage and average confidence interval width for each of the estimators. We implement Firth's adjustment for the binary component models and a small sample variance correction for the generalized estimating equations, applying the small sample adjusted methods to each sub-sample as before for comparison. For the log-odds treatment effect the power of the augmented binary method is 20-55% compared to 12-20% for the standard binary method. Both methods have approximately nominal type I error rates. The difference in response probabilities exhibit similar power but both unadjusted methods demonstrate type I error rates of 6-8%. The small sample corrected methods have approximately nominal type I error rates. On both scales, the reduction in average confidence interval width when using the adjusted augmented binary method is 17-18%. This is equivalent to requiring a 32% smaller sample size to achieve the same statistical power. The augmented binary method with small sample corrections provides a substantial improvement for rare disease trials using composite endpoints. We recommend the use of the method for the primary analysis in relevant rare disease trials. We emphasise that the method should be used alongside other efforts in improving the quality of evidence generated from rare disease trials rather than replace them.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pickles, W.L.; McClure, J.W.; Howell, R.H.
1978-05-01
A sophisticated nonlinear multiparameter fitting program was used to produce a best-fit calibration curve for the response of an x-ray fluorescence analyzer to uranium nitrate, freeze-dried, 0.2% accurate, gravimetric standards. The program is based on the unconstrained minimization subroutine VA02A. The program considers the mass values of the gravimetric standards as parameters to be fit along with the normal calibration curve parameters. The fitting procedure weights the system errors and the mass errors in a consistent way. The resulting best-fit calibration curve parameters reflect the fact that the masses of the standard samples are measured quantities with a known error. Error estimates for the calibration curve parameters can be obtained from the curvature of the "Chi-Squared Matrix" or from error relaxation techniques. It was shown that nondispersive XRFA of 0.1 to 1 mg freeze-dried UNO3 can have an accuracy of 0.2% in 1000 s.
NASA Technical Reports Server (NTRS)
Wyman, D.; Steinman, R. M.
1973-01-01
Recently Timberlake, Wyman, Skavenski, and Steinman (1972) concluded in a study of the oculomotor error signal in the fovea that 'the oculomotor dead zone is surely smaller than 10 min and may even be less than 5 min (smaller than the 0.25 to 0.5 deg dead zone reported by Rashbass (1961) with similar stimulus conditions).' The Timberlake et al. speculation is confirmed by demonstrating that the fixating eye consistently and accurately corrects target displacements as small as 3.4 min. The contact lens optical lever technique was used to study the manner in which the oculomotor system responds to small step displacements of the fixation target. Subjects did, without prior practice, use saccades to correct step displacements of the fixation target just as they correct small position errors during maintained fixation.
NASA Technical Reports Server (NTRS)
Sun, W.; Loeb, N. G.; Videen, G.; Fu, Q.
2004-01-01
Natural particles such as ice crystals in cirrus clouds generally are not pristine but have additional micro-roughness on their surfaces. A two-dimensional finite-difference time-domain (FDTD) program with a perfectly matched layer absorbing boundary condition is developed to calculate the effect of surface roughness on light scattering by long ice columns. When we use a spatial cell size of 1/120 of the incident wavelength for ice circular cylinders with size parameters of 6 and 24 at wavelengths of 0.55 and 10.8 μm, respectively, the errors in the FDTD results in the extinction, scattering, and absorption efficiencies are smaller than ~0.5%. The errors in the FDTD results in the asymmetry factor are smaller than ~0.05%. The errors in the FDTD results in the phase-matrix elements are smaller than ~5%. By adding a pseudorandom change as great as 10% of the radius of a cylinder, we calculate the scattering properties of randomly oriented rough-surfaced ice columns. We conclude that, although the effect of small surface roughness on light scattering is negligible, the scattering phase-matrix elements change significantly for particles with large surface roughness. The roughness on the particle surface can make the conventional phase function smooth. The most significant effect of the surface roughness is the decay of polarization of the scattered light.
Benau, Erik M; Moelter, Stephen T
2016-09-01
The Error-Related Negativity (ERN) and Correct-Response Negativity (CRN) are brief event-related potential (ERP) components, elicited after the commission of a response, that are associated with motivation, emotion, and affect. The Error Positivity (Pe) typically appears after the ERN, and corresponds to awareness of having committed an error. Although motivation has long been established as an important factor in the expression and morphology of the ERN, physiological state has rarely been explored as a variable in these investigations. In the present study, we investigated whether self-reported physiological state (SRPS; wakefulness, hunger, or thirst) corresponds with ERN amplitude and type of lexical stimuli. Participants completed an SRPS questionnaire and then completed a speeded Lexical Decision Task with words and pseudowords that were either food-related or neutral. Though similar in frequency and length, food-related stimuli elicited increased accuracy, faster errors, and generated a larger ERN and smaller CRN than neutral words. Self-reported thirst correlated with improved accuracy and smaller ERN and CRN amplitudes. The Pe and Pc (correct positivity) were not impacted by physiological state or by stimulus content. The results indicate that physiological state and manipulations of lexical content may serve as important avenues for future research. Future studies that apply more sensitive measures of physiological and motivational state (e.g., biomarkers for satiety) or direct manipulations of satiety may be a useful approach for research into response monitoring. Copyright © 2016 Elsevier Inc. All rights reserved.
Error in the Honeybee Waggle Dance Improves Foraging Flexibility
Okada, Ryuichi; Ikeno, Hidetoshi; Kimura, Toshifumi; Ohashi, Mizue; Aonuma, Hitoshi; Ito, Etsuro
2014-01-01
The honeybee waggle dance communicates the location of profitable food sources, usually with a certain degree of error in the directional information ranging from 10–15° at the lower margin. We simulated one-day colonial foraging to address the biological significance of information error in the waggle dance. When the error was 30° or larger, the waggle dance was not beneficial. If the error was 15°, the waggle dance was beneficial when the food sources were scarce. When the error was 10° or smaller, the waggle dance was beneficial under all the conditions tested. Our simulation also showed that precise information (0–5° error) yielded great success in finding feeders, but also caused failures at finding new feeders, i.e., a high-risk high-return strategy. The observation that actual bees perform the waggle dance with an error of 10–15° might reflect, at least in part, the maintenance of a successful yet risky foraging trade-off. PMID:24569525
NASA Technical Reports Server (NTRS)
Chen, Chien-Chung; Gardner, Chester S.
1989-01-01
Given the rms transmitter pointing error and the desired probability of bit error (PBE), it can be shown that an optimal transmitter antenna gain exists which minimizes the required transmitter power. Given the rms local oscillator tracking error, an optimum receiver antenna gain can be found which optimizes the receiver performance. The impact of pointing and tracking errors on the design of direct-detection pulse-position modulation (PPM) and heterodyne noncoherent frequency-shift keying (NCFSK) systems are then analyzed in terms of constraints on the antenna size and the power penalty incurred. It is shown that in the limit of large spatial tracking errors, the advantage in receiver sensitivity for the heterodyne system is quickly offset by the smaller antenna gain and the higher power penalty due to tracking errors. In contrast, for systems with small spatial tracking errors, the heterodyne system is superior because of the higher receiver sensitivity.
Estimating standard errors in feature network models.
Frank, Laurence E; Heiser, Willem J
2007-05-01
Feature network models are graphical structures that represent proximity data in a discrete space while using the same formalism that is the basis of least squares methods employed in multidimensional scaling. Existing methods to derive a network model from empirical data only give the best-fitting network and yield no standard errors for the parameter estimates. The additivity properties of networks make it possible to consider the model as a univariate (multiple) linear regression problem with positivity restrictions on the parameters. In the present study, both theoretical and empirical standard errors are obtained for the constrained regression parameters of a network model with known features. The performance of both types of standard error is evaluated using Monte Carlo techniques.
Ozone Profile Retrievals from the OMPS on Suomi NPP
NASA Astrophysics Data System (ADS)
Bak, J.; Liu, X.; Kim, J. H.; Haffner, D. P.; Chance, K.; Yang, K.; Sun, K.; Gonzalez Abad, G.
2017-12-01
We verify and correct the Ozone Mapping and Profiler Suite (OMPS) Nadir Mapper (NM) L1B v2.0 data with the aim of producing accurate ozone profile retrievals using an optimal estimation-based inversion method in the 302.5-340 nm fitting window. The evaluation of available slit functions demonstrates that preflight-measured slit functions represent OMPS measurements better than derived Gaussian slit functions. Our OMPS fitting residuals contain significant wavelength- and cross-track-dependent biases, and thereby serious cross-track striping errors are found in preliminary retrievals, especially in the troposphere. To eliminate the systematic component of the fitting residuals, we apply "soft calibration" to OMPS radiances. With the soft calibration the amplitude of fitting residuals decreases from 1% to 0.2% over low/mid latitudes, and thereby the consistency of tropospheric ozone retrievals between OMPS and the Ozone Monitoring Instrument (OMI) is substantially improved. A common mode correction is implemented for additional radiometric calibration, which improves retrievals especially at high latitudes where the amplitude of fitting residuals decreases by a factor of 2. We estimate the floor noise error of OMPS measurements from standard deviations of the fitting residuals. The derived error in the Huggins band (~0.1%) is 2 times smaller than the OMI floor noise error and 2 times larger than the OMPS L1B measurement error. The OMPS floor noise errors better constrain our retrievals for maximizing measurement information and stabilizing our fitting residuals. The final precision of the fitting residuals is less than 0.1% at low/mid latitudes, with 1 degree of freedom for signal for tropospheric ozone, so that we meet the general requirements for successful tropospheric ozone retrievals. To assess if the quality of OMPS ozone retrievals could be acceptable for scientific use, we will characterize OMPS ozone profile retrievals, present error analysis, and validate retrievals using a reference dataset. The useful information on the vertical distribution of ozone is limited to below 40 km from OMPS NM measurements alone, due to the absence of the Hartley ozone wavelengths. This shortcoming will be improved with the joint ozone profile retrieval using Nadir Profiler (NP) measurements covering the 250 to 310 nm range.
NASA Technical Reports Server (NTRS)
Prive, N. C.; Errico, R. M.; Tai, K.-S.
2013-01-01
The Global Modeling and Assimilation Office (GMAO) observing system simulation experiment (OSSE) framework is used to explore the response of analysis error and forecast skill to observation quality. In an OSSE, synthetic observations may be created that have much smaller error than real observations, and precisely quantified error may be applied to these synthetic observations. Three experiments are performed in which synthetic observations with magnitudes of applied observation error that vary from zero to twice the estimated realistic error are ingested into the Goddard Earth Observing System Model (GEOS-5) with Gridpoint Statistical Interpolation (GSI) data assimilation for a one-month period representing July. The analysis increment and observation innovation are strongly impacted by observation error, with much larger variances for increased observation error. The analysis quality is degraded by increased observation error, but the change in root-mean-square error of the analysis state is small relative to the total analysis error. Surprisingly, in the 120 hour forecast increased observation error only yields a slight decline in forecast skill in the extratropics, and no discernable degradation of forecast skill in the tropics.
Impacts of motivational valence on the error-related negativity elicited by full and partial errors.
Maruo, Yuya; Schacht, Annekathrin; Sommer, Werner; Masaki, Hiroaki
2016-02-01
Affect and motivation influence the error-related negativity (ERN) elicited by full errors; however, it is unknown whether they also influence ERNs to correct responses accompanied by covert incorrect response activation (partial errors). Here we compared a neutral condition with conditions, where correct responses were rewarded or where incorrect responses were punished with gains and losses of small amounts of money, respectively. Data analysis distinguished ERNs elicited by full and partial errors. In the reward and punishment conditions, ERN amplitudes to both full and partial errors were larger than in the neutral condition, confirming participants' sensitivity to the significance of errors. We also investigated the relationships between ERN amplitudes and the behavioral inhibition and activation systems (BIS/BAS). Regardless of reward/punishment condition, participants scoring higher on BAS showed smaller ERN amplitudes in full error trials. These findings provide further evidence that the ERN is related to motivational valence and that similar relationships hold for both full and partial errors. Copyright © 2015 The Authors. Published by Elsevier B.V. All rights reserved.
Middleton, John; Vaks, Jeffrey E
2007-04-01
Errors of calibrator-assigned values lead to errors in the testing of patient samples. The ability to estimate the uncertainties of calibrator-assigned values and other variables minimizes errors in testing processes. International Organization for Standardization guidelines provide simple equations for the estimation of calibrator uncertainty with simple value-assignment processes, but other methods are needed to estimate uncertainty in complex processes. We estimated the assigned-value uncertainty with a Monte Carlo computer simulation of a complex value-assignment process, based on a formalized description of the process, with measurement parameters estimated experimentally. This method was applied to study the uncertainty of a multilevel calibrator value assignment for a prealbumin immunoassay. The simulation results showed that the component of the uncertainty added by the process of value transfer from the reference material CRM470 to the calibrator is smaller than that of the reference material itself (<0.8% vs 3.7%). Varying the process parameters in the simulation model allowed for optimizing the process, while keeping the added uncertainty small. The patient result uncertainty caused by the calibrator uncertainty was also found to be small. This method of estimating uncertainty is a powerful tool that allows for estimation of calibrator uncertainty for optimization of various value assignment processes, with a reduced number of measurements and reagent costs, while satisfying the uncertainty requirements. The new method expands and augments existing methods to allow estimation of uncertainty in complex processes.
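The essence of such a Monte Carlo approach is to draw each error source many times, push the draws through the value-transfer chain, and take the standard deviation of the simulated assigned values. The chain, replicate counts, and imprecision figures below are placeholders, not the prealbumin process parameters.

```python
import numpy as np

rng = np.random.default_rng(8)
n_sim = 50_000

# Hypothetical value-transfer chain: reference material -> master calibrator -> product calibrator.
ref_value = 100.0   # assigned value of the reference material (arbitrary units)
ref_rel_u = 0.037   # 3.7% relative standard uncertainty of the reference material
transfer_cv = 0.004 # imprecision (CV) of each value-transfer measurement
n_replicates = 10   # replicate measurements averaged at each transfer step

assigned = np.empty(n_sim)
for k in range(n_sim):
    true_ref = ref_value * (1 + rng.normal(0, ref_rel_u))
    # Step 1: assign the master calibrator from replicate measurements of the reference.
    master = true_ref * (1 + rng.normal(0, transfer_cv, n_replicates)).mean()
    # Step 2: assign the product calibrator from replicate measurements of the master.
    assigned[k] = master * (1 + rng.normal(0, transfer_cv, n_replicates)).mean()

rel_u_total = assigned.std(ddof=1) / assigned.mean()
added = np.sqrt(max(rel_u_total**2 - ref_rel_u**2, 0.0))
print(f"total relative uncertainty of assigned value: {rel_u_total:.4f}")
print(f"component added by the transfer process:      {added:.4f}")
```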
Errors in patient specimen collection: application of statistical process control.
Dzik, Walter Sunny; Beckman, Neil; Selleng, Kathleen; Heddle, Nancy; Szczepiorkowski, Zbigniew; Wendel, Silvano; Murphy, Michael
2008-10-01
Errors in the collection and labeling of blood samples for pretransfusion testing increase the risk of transfusion-associated patient morbidity and mortality. Statistical process control (SPC) is a recognized method to monitor the performance of a critical process. An easy-to-use SPC method was tested to determine its feasibility as a tool for monitoring quality in transfusion medicine. SPC control charts were adapted to a spreadsheet presentation. Data tabulating the frequency of mislabeled and miscollected blood samples from 10 hospitals in five countries from 2004 to 2006 were used to demonstrate the method. Control charts were produced to monitor process stability. The participating hospitals found the SPC spreadsheet very suitable to monitor the performance of the sample labeling and collection and applied SPC charts to suit their specific needs. One hospital monitored subcategories of sample error in detail. A large hospital monitored the number of wrong-blood-in-tube (WBIT) events. Four smaller-sized facilities, each following the same policy for sample collection, combined their data on WBIT samples into a single control chart. One hospital used the control chart to monitor the effect of an educational intervention. A simple SPC method is described that can monitor the process of sample collection and labeling in any hospital. SPC could be applied to other critical steps in the transfusion processes as a tool for biovigilance and could be used to develop regional or national performance standards for pretransfusion sample collection. A link is provided to download the spreadsheet for free.
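One standard SPC choice for attribute data of this kind is a p-chart with binomial three-sigma limits; the abstract does not specify the exact chart type used, and the monthly counts below are invented.

```python
import numpy as np

# Invented monthly data: mislabeled samples and total samples collected.
defects = np.array([12, 9, 15, 11, 8, 14, 10, 13, 9, 16, 11, 12])
totals = np.array([4100, 3950, 4200, 4050, 3900, 4150, 4000, 4100, 3980, 4250, 4020, 4080])

p = defects / totals
p_bar = defects.sum() / totals.sum()           # overall error proportion
sigma = np.sqrt(p_bar * (1 - p_bar) / totals)  # binomial SD for each month's sample size
ucl = p_bar + 3 * sigma                        # upper control limit
lcl = np.clip(p_bar - 3 * sigma, 0, None)      # lower control limit (floored at 0)

for month, (pi, u, l) in enumerate(zip(p, ucl, lcl), start=1):
    flag = "OUT OF CONTROL" if (pi > u or pi < l) else "in control"
    print(f"month {month:2d}: p={pi:.4f}  LCL={l:.4f}  UCL={u:.4f}  {flag}")
```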
A practical method of estimating standard error of age in the fission track dating method
Johnson, N.M.; McGee, V.E.; Naeser, C.W.
1979-01-01
A first-order approximation formula for the propagation of error in the fission track age equation is given by P_A = C[P_s^2 + P_i^2 + P_φ^2 - 2rP_sP_i]^(1/2), where P_A, P_s, P_i, and P_φ are the percentage errors of age, of spontaneous track density, of induced track density, and of neutron dose, respectively, and C is a constant. The correlation, r, between spontaneous and induced track densities is a crucial element in the error analysis, acting generally to improve the standard error of age. In addition, the correlation parameter r is instrumental in specifying the level of neutron dose, a controlled variable, which will minimize the standard error of age. The results from the approximation equation agree closely with the results from an independent statistical model for the propagation of errors in the fission-track dating method. © 1979.
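The approximation formula translates directly into a few lines of arithmetic; the percentage errors below are invented, and the example simply illustrates how a positive correlation r reduces the age error.

```python
import numpy as np

def age_percent_error(P_s, P_i, P_phi, r, C=1.0):
    """First-order percentage error of a fission-track age.

    P_s, P_i, P_phi: percentage errors of spontaneous track density,
    induced track density, and neutron dose; r: correlation between
    spontaneous and induced densities; C: proportionality constant.
    """
    return C * np.sqrt(P_s**2 + P_i**2 + P_phi**2 - 2 * r * P_s * P_i)

# Invented example: 5% and 4% counting errors, 2% neutron dose error.
for r in (0.0, 0.5, 0.9):
    print(f"r={r:.1f}: P_A = {age_percent_error(5.0, 4.0, 2.0, r):.2f}%")
```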
Harada, Saki; Suzuki, Akio; Nishida, Shohei; Kobayashi, Ryo; Tamai, Sayuri; Kumada, Keisuke; Murakami, Nobuo; Itoh, Yoshinori
2017-06-01
Insulin is frequently used for glycemic control. Medication errors related to insulin are a common problem for medical institutions. Here, we prepared a standardized sliding scale insulin (SSI) order sheet and assessed the effect of its introduction. Observations before and after the introduction of the standardized SSI template were conducted at Gifu University Hospital. The incidence of medication errors, hyperglycemia, and hypoglycemia related to SSI were obtained from the electronic medical records. The introduction of the standardized SSI order sheet significantly reduced the incidence of medication errors related to SSI compared with that prior to its introduction (12/165 [7.3%] vs 4/159 [2.1%], P = .048). However, the incidence of hyperglycemia (≥250 mg/dL) and hypoglycemia (≤50 mg/dL) in patients who received SSI was not significantly different between the 2 groups. The introduction of the standardized SSI order sheet reduced the incidence of medication errors related to SSI. © 2016 John Wiley & Sons, Ltd.
A Criterion to Control Nonlinear Error in the Mixed-Mode Bending Test
NASA Technical Reports Server (NTRS)
Reeder, James R.
2002-01-01
The mixed-mode bending test has been widely used to measure delamination toughness and was recently standardized by ASTM as Standard Test Method D6671-01. This simple test is a combination of the standard Mode I (opening) test and a Mode II (sliding) test. This test uses a unidirectional composite test specimen with an artificial delamination subjected to bending loads to characterize when a delamination will extend. When the displacements become large, the linear theory used to analyze the results of the test yields errors in the calculated toughness values. The current standard places no limit on the specimen loading, and therefore test data can be created using the standard that are significantly in error. A method of limiting the error that can be incurred in the calculated toughness values is needed. In this paper, nonlinear models of the MMB test are refined. One of the nonlinear models is then used to develop a simple criterion for prescribing conditions where the nonlinear error will remain below 5%.
Quantifying soil carbon loss and uncertainty from a peatland wildfire using multi-temporal LiDAR
Reddy, Ashwan D.; Hawbaker, Todd J.; Wurster, F.; Zhu, Zhiliang; Ward, S.; Newcomb, Doug; Murray, R.
2015-01-01
Peatlands are a major reservoir of global soil carbon, yet account for just 3% of global land cover. Human impacts like draining can hinder the ability of peatlands to sequester carbon and expose their soils to fire under dry conditions. Estimating soil carbon loss from peat fires can be challenging due to uncertainty about pre-fire surface elevations. This study uses multi-temporal LiDAR to obtain pre- and post-fire elevations and estimate soil carbon loss caused by the 2011 Lateral West fire in the Great Dismal Swamp National Wildlife Refuge, VA, USA. We also determine how LiDAR elevation error affects uncertainty in our carbon loss estimate by randomly perturbing the LiDAR point elevations and recalculating elevation change and carbon loss, iterating this process 1000 times. We calculated a total loss using LiDAR of 1.10 Tg C across the 25 km2 burned area. The fire burned an average of 47 cm deep, equivalent to 44 kg C/m2, a value larger than the 1997 Indonesian peat fires (29 kg C/m2). Carbon loss via the First-Order Fire Effects Model (FOFEM) was estimated to be 0.06 Tg C. Propagating the LiDAR elevation error to the carbon loss estimates, we calculated a standard deviation of 0.00009 Tg C, equivalent to 0.008% of total carbon loss. We conclude that LiDAR elevation error is not a significant contributor to uncertainty in soil carbon loss under severe fire conditions with substantial peat consumption. However, uncertainties may be more substantial when soil elevation loss is of a similar or smaller magnitude than the reported LiDAR error.
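The uncertainty-propagation step described (perturb the LiDAR elevations by their error, recompute elevation change and carbon loss, and repeat 1000 times) can be sketched as follows; the grid, elevation error, and carbon conversion factor are placeholders, not the study's values.

```python
import numpy as np

rng = np.random.default_rng(9)

# Placeholder pre- and post-fire DEMs (metres) on a small grid.
nrow, ncol, cell_area = 200, 200, 1.0  # cell area in m^2
pre = rng.normal(2.0, 0.3, (nrow, ncol))
post = pre - np.clip(rng.normal(0.47, 0.15, (nrow, ncol)), 0, None)  # ~0.47 m mean burn depth

lidar_sigma = 0.10     # assumed per-cell elevation error (m), placeholder
carbon_per_m3 = 94.0   # assumed kg C per m^3 of consumed peat, placeholder

def carbon_loss(pre_dem, post_dem):
    depth = np.clip(pre_dem - post_dem, 0, None)      # metres of peat consumed
    return np.sum(depth * cell_area * carbon_per_m3)  # kg C

losses = np.empty(1000)
for k in range(1000):
    pre_k = pre + rng.normal(0, lidar_sigma, pre.shape)
    post_k = post + rng.normal(0, lidar_sigma, post.shape)
    losses[k] = carbon_loss(pre_k, post_k)

print(f"carbon loss: {carbon_loss(pre, post) / 1e6:.2f} Gg C")
print(f"Monte Carlo SD from elevation error: {losses.std(ddof=1) / 1e6:.4f} Gg C")
```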
Forecasting volcanic air pollution in Hawaii: Tests of time series models
NASA Astrophysics Data System (ADS)
Reikard, Gordon
2012-12-01
Volcanic air pollution, known as vog (volcanic smog), has recently become a major issue in the Hawaiian islands. Vog is caused when volcanic gases react with oxygen and water vapor. It consists of a mixture of gases and aerosols, which include sulfur dioxide and other sulfates. The source of the volcanic gases is the continuing eruption of Mount Kilauea. This paper studies the prediction of vog using statistical methods. The data sets include time series for SO2 and SO4, over locations spanning the west, south and southeast coasts of Hawaii, and the city of Hilo. The forecasting models include regressions and neural networks, and a frequency domain algorithm. The most typical pattern for the SO2 data is for the frequency domain method to yield the most accurate forecasts over the first few hours, and at the 24 h horizon. The neural net places second. For the SO4 data, the results are less consistent. At two sites, the neural net generally yields the most accurate forecasts, except at the 1 and 24 h horizons, where the frequency domain technique wins narrowly. At one site, the neural net and the frequency domain algorithm yield comparable errors over the first 5 h, after which the neural net dominates. At the remaining site, the frequency domain method is more accurate over the first 4 h, after which the neural net achieves smaller errors. For all the series, the average errors are well within one standard deviation of the actual data at all the horizons. However, the errors also show irregular outliers. In essence, the models capture the central tendency of the data, but are less effective in predicting the extreme events.
Error analyses of JEM/SMILES standard products on L2 operational system
NASA Astrophysics Data System (ADS)
Mitsuda, C.; Takahashi, C.; Suzuki, M.; Hayashi, H.; Imai, K.; Sano, T.; Takayanagi, M.; Iwata, Y.; Taniguchi, H.
2009-12-01
SMILES (Superconducting Submillimeter-Wave Limb-Emission Sounder), which has been developed by the Japan Aerospace Exploration Agency (JAXA) and the National Institute of Information and Communications Technology (NICT), is planned for launch in September 2009 and will be on board the Japanese Experiment Module (JEM) of the International Space Station (ISS). SMILES measures the atmospheric limb emission from stratospheric minor constituents in the 640 GHz band. Target species of the L2 operational system are O3, ClO, HCl, HNO3, HOCl, CH3CN, HO2, BrO, and O3 isotopes (18OOO, 17OOO and O17OO). SMILES carries 4 K cooled superconductor-insulator-superconductor mixers to carry out high-sensitivity observations. In the sub-millimeter band, water vapor absorption is an important factor in determining the tropospheric and stratospheric brightness temperature. The uncertainty in water vapor absorption influences the accuracy of the molecular vertical profiles. Since the SMILES bands are narrow and far from H2O lines, it is a good approximation to treat this uncertainty as a linear function of frequency. We therefore include the 0th- and 1st-order coefficients of a 'baseline' function, rather than the water vapor profile, in the state vector and retrieve them to remove the influence of the water vapor uncertainty. We performed retrieval simulations using spectra computed by the L2 operational forward model for various H2O conditions (-/+5% and 10% differences between the true profile and the a priori profile in the stratosphere, and -/+10% and 20% in the troposphere). The results show that the incremental errors on the molecules are smaller than 10% of the measurement errors when the height correlation of the baseline coefficients and temperature is assumed to be 10 km. In conclusion, retrieving the baseline coefficients effectively suppresses profile errors due to bias in the water vapor profile.
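The baseline-retrieval idea, absorbing the water-vapor continuum bias into 0th- and 1st-order frequency coefficients estimated jointly with the molecular signal, can be illustrated with a toy linear least-squares fit. This is only a sketch of the principle; the operational SMILES L2 retrieval uses a full optimal-estimation scheme, and the line shape, amplitudes, and noise level below are invented.

```python
import numpy as np

rng = np.random.default_rng(1)

# Frequency grid (GHz offsets) and a single Lorentzian "molecular" line - illustrative only.
f = np.linspace(-0.5, 0.5, 201)
def line(amplitude, width=0.05):
    return amplitude * width**2 / (f**2 + width**2)

true_amplitude = 10.0                 # K, hypothetical brightness contribution
true_baseline = 2.0 + 1.5 * f         # bias from H2O continuum uncertainty (0th + 1st order)
spectrum = line(true_amplitude) + true_baseline + rng.normal(0, 0.1, f.size)

# Design matrix [line shape, constant, slope]: retrieve amplitude and baseline jointly.
A = np.column_stack([line(1.0), np.ones_like(f), f])
coef, *_ = np.linalg.lstsq(A, spectrum, rcond=None)
amp_joint = coef[0]

# For comparison: ignore the baseline entirely and project onto the line shape alone.
amp_naive = float(line(1.0) @ spectrum / (line(1.0) @ line(1.0)))

print(f"amplitude with baseline retrieved jointly: {amp_joint:.2f} (true {true_amplitude})")
print(f"amplitude with baseline ignored:           {amp_naive:.2f}")
```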
Chou, C P; Bentler, P M; Satorra, A
1991-11-01
Research studying robustness of maximum likelihood (ML) statistics in covariance structure analysis has concluded that test statistics and standard errors are biased under severe non-normality. An estimation procedure known as asymptotic distribution free (ADF), making no distributional assumption, has been suggested to avoid these biases. Corrections to the normal theory statistics to yield more adequate performance have also been proposed. This study compares the performance of a scaled test statistic and robust standard errors for two models under several non-normal conditions and also compares these with the results from ML and ADF methods. Both ML and ADF test statistics performed rather well in one model and considerably worse in the other. In general, the scaled test statistic seemed to behave better than the ML test statistic and the ADF statistic performed the worst. The robust and ADF standard errors yielded more appropriate estimates of sampling variability than the ML standard errors, which were usually downward biased, in both models under most of the non-normal conditions. ML test statistics and standard errors were found to be quite robust to the violation of the normality assumption when data had either symmetric and platykurtic distributions, or non-symmetric and zero kurtotic distributions.
NASA Technical Reports Server (NTRS)
Antonille, Scott
2004-01-01
For potential use on the SHARPI mission, Eastman Kodak has delivered a 50.8cm CA f/1.25 ultra-lightweight UV parabolic mirror with a surface figure error requirement of 6nm RMS. We address the challenges involved in verifying and mapping the surface error of this large lightweight mirror to +/-3nm using a diffractive CGH null lens. Of main concern is removal of large systematic errors resulting from surface deflections of the mirror due to gravity as well as smaller contributions from system misalignment and reference optic errors. We present our efforts to characterize these errors and remove their wavefront error contribution in post-processing as well as minimizing the uncertainty these calculations introduce. Data from Kodak and preliminary measurements from NASA Goddard will be included.
Modeling the Violation of Reward Maximization and Invariance in Reinforcement Schedules
La Camera, Giancarlo; Richmond, Barry J.
2008-01-01
It is often assumed that animals and people adjust their behavior to maximize reward acquisition. In visually cued reinforcement schedules, monkeys make errors in trials that are not immediately rewarded, despite having to repeat error trials. Here we show that error rates are typically smaller in trials equally distant from reward but belonging to longer schedules (referred to as “schedule length effect”). This violates the principles of reward maximization and invariance and cannot be predicted by the standard methods of Reinforcement Learning, such as the method of temporal differences. We develop a heuristic model that accounts for all of the properties of the behavior in the reinforcement schedule task but whose predictions are not different from those of the standard temporal difference model in choice tasks. In the modification of temporal difference learning introduced here, the effect of schedule length emerges spontaneously from the sensitivity to the immediately preceding trial. We also introduce a policy for general Markov Decision Processes, where the decision made at each node is conditioned on the motivation to perform an instrumental action, and show that the application of our model to the reinforcement schedule task and the choice task are special cases of this general theoretical framework. Within this framework, Reinforcement Learning can approach contextual learning with the mixture of empirical findings and principled assumptions that seem to coexist in the best descriptions of animal behavior. As examples, we discuss two phenomena observed in humans that often derive from the violation of the principle of invariance: “framing,” wherein equivalent options are treated differently depending on the context in which they are presented, and the “sunk cost” effect, the greater tendency to continue an endeavor once an investment in money, effort, or time has been made. The schedule length effect might be a manifestation of these phenomena in monkeys. PMID:18688266
Computer Programs for the Semantic Differential: Further Modifications.
ERIC Educational Resources Information Center
Lawson, Edwin D.; And Others
The original nine programs for semantic differential analysis have been condensed into three programs which have been further refined and augmented. They yield: (1) means, standard deviations, and standard errors for each subscale on each concept; (2) Evaluation, Potency, and Activity (EPA) means, standard deviations, and standard errors; (3)…
NASA Technical Reports Server (NTRS)
Knox, C. E.
1978-01-01
Navigation error data from these flights are presented in a format utilizing three independent axes - horizontal, vertical, and time. The navigation position estimate error term and the autopilot flight technical error term are combined to form the total navigation error in each axis. This method of error presentation allows comparisons to be made between other 2-, 3-, or 4-D navigation systems and allows experimental or theoretical determination of the navigation error terms. Position estimate error data are presented with the navigation system position estimate based on dual DME radio updates that are smoothed with inertial velocities, dual DME radio updates that are smoothed with true airspeed and magnetic heading, and inertial velocity updates only. The normal mode of navigation with dual DME updates that are smoothed with inertial velocities resulted in a mean error of 390 m with a standard deviation of 150 m in the horizontal axis; a mean error of 1.5 m low with a standard deviation of less than 11 m in the vertical axis; and a mean error as low as 252 m with a standard deviation of 123 m in the time axis.
NASA Astrophysics Data System (ADS)
Goebel, R.; Kurupakorn, C.; Fletcher, N.; Stock, M.
2010-01-01
This report describes the results obtained from a NIMT (Thailand)-BIPM bilateral comparison of 10 kΩ resistance standards in 2009. The comparison was carried out in the framework of the BIPM ongoing key comparison BIPM.EM-K13.b. Two BIPM 10 kΩ travelling standards of SR104 type were calibrated first at the BIPM, then at the NIMT, and again at the BIPM after their return. The stability of the transfer standards was such that the uncertainty associated with the transfer was smaller than the uncertainty arising from the calibrations. The mean difference between the NIMT and the BIPM calibrations was found to be significantly larger than the expanded uncertainty (k = 2) of the comparison. However, this exercise allowed previously undetected sources of error in the NIMT facility to be identified. A new bilateral comparison can be organized as soon as these problems are fixed. The final report, which appears in Appendix B of the BIPM key comparison database (kcdb.bipm.org), has been peer-reviewed and approved for publication by the CCEM, according to the provisions of the CIPM Mutual Recognition Arrangement (MRA).
NASA Astrophysics Data System (ADS)
García-Resúa, Carlos; Pena-Verdeal, Hugo; Miñones, Mercedes; Gilino, Jorge; Giraldez, Maria J.; Yebra-Pimentel, Eva
2013-11-01
High tear fluid osmolarity is a feature common to all types of dry eye. This study was designed to establish the accuracy of two osmometers, a freezing point depression osmometer (Fiske 110) and an electrical impedance osmometer (TearLab™), using standard samples. To assess the accuracy of the measurements provided by the two instruments, we used five solutions of known osmolarity/osmolality: 50, 290 and 850 mOsm/kg, and 292 and 338 mOsm/L. The Fiske 110 is designed for samples of 20 μl, so measurements were made on 1:9, 1:4, 1:1 and 1:0 dilutions of the standards. The TearLab is intended for use on the tear film and requires a sample of only 0.05 μl, so no dilutions were employed. Because of the narrower measurement range of the TearLab, the 50 and 850 mOsm/kg standards were not included. Twenty measurements were made per standard sample, and differences from the reference value were analysed with a one-sample t-test. For the Fiske 110, osmolarity measurements differed statistically from the standard values except those recorded for the 290 mOsm/kg standard diluted 1:1 (p = 0.309), the 292 mOsm/L H2O sample (1:1) and the 338 mOsm/L H2O standard (1:4). The more diluted the sample, the higher the error. For the TearLab measurements, the one-sample t-test indicated that all determinations differed from the theoretical values (p = 0.001), though the differences were always small. For undiluted solutions, the Fiske 110 shows performance similar to the TearLab; for the diluted standards, however, the performance of the Fiske 110 worsens.
Evaluation of Acoustic Doppler Current Profiler measurements of river discharge
Morlock, S.E.
1996-01-01
The standard deviations of the ADCP measurements ranged from approximately 1 to 6 percent and were generally higher than the measurement errors predicted by error-propagation analysis of ADCP instrument performance. These error-prediction methods assume that the largest component of ADCP discharge measurement error is instrument related. The larger standard deviations indicate that substantial portions of measurement error may be attributable to sources unrelated to ADCP electronics or signal processing and are functions of the field environment.
Increasing point-count duration increases standard error
Smith, W.P.; Twedt, D.J.; Hamel, P.B.; Ford, R.P.; Wiedenfeld, D.A.; Cooper, R.J.
1998-01-01
We examined data from point counts of varying duration in bottomland forests of west Tennessee and the Mississippi Alluvial Valley to determine if counting interval influenced sampling efficiency. Estimates of standard error increased as point count duration increased both for cumulative number of individuals and species in both locations. Although point counts appear to yield data with standard errors proportional to means, a square root transformation of the data may stabilize the variance. Using long (>10 min) point counts may reduce sample size and increase sampling error, both of which diminish statistical power and thereby the ability to detect meaningful changes in avian populations.
Gaze Compensation as a Technique for Improving Hand–Eye Coordination in Prosthetic Vision
Titchener, Samuel A.; Shivdasani, Mohit N.; Fallon, James B.; Petoe, Matthew A.
2018-01-01
Purpose Shifting the region-of-interest within the input image to compensate for gaze shifts (“gaze compensation”) may improve hand–eye coordination in visual prostheses that incorporate an external camera. The present study investigated the effects of eye movement on hand-eye coordination under simulated prosthetic vision (SPV), and measured the coordination benefits of gaze compensation. Methods Seven healthy-sighted subjects performed a target localization-pointing task under SPV. Three conditions were tested, modeling: retinally stabilized phosphenes (uncompensated); gaze compensation; and no phosphene movement (center-fixed). The error in pointing was quantified for each condition. Results Gaze compensation yielded a significantly smaller pointing error than the uncompensated condition for six of seven subjects, and a similar or smaller pointing error than the center-fixed condition for all subjects (two-way ANOVA, P < 0.05). Pointing error eccentricity and gaze eccentricity were moderately correlated in the uncompensated condition (azimuth: R2 = 0.47; elevation: R2 = 0.51) but not in the gaze-compensated condition (azimuth: R2 = 0.01; elevation: R2 = 0.00). Increased variability in gaze at the time of pointing was correlated with greater reduction in pointing error in the center-fixed condition compared with the uncompensated condition (R2 = 0.64). Conclusions Eccentric eye position impedes hand–eye coordination in SPV. While limiting eye eccentricity in uncompensated viewing can reduce errors, gaze compensation is effective in improving coordination for subjects unable to maintain fixation. Translational Relevance The results highlight the present necessity for suppressing eye movement and support the use of gaze compensation to improve hand–eye coordination and localization performance in prosthetic vision. PMID:29321945
Ultrasonic tracking of shear waves using a particle filter.
Ingle, Atul N; Ma, Chi; Varghese, Tomy
2015-11-01
This paper discusses an application of particle filtering for estimating shear wave velocity (SWV) in tissue using ultrasound elastography data. Shear wave velocity estimates are of significant clinical value, as they help differentiate stiffer areas from softer areas, which is an indicator of potential pathology. Radio-frequency ultrasound echo signals are used for tracking axial displacements and obtaining the time-to-peak displacement at different lateral locations. These time-to-peak data are usually very noisy and cannot be used directly for computing velocity. In this paper, the denoising problem is tackled using a hidden Markov model whose hidden states are the unknown (noiseless) time-to-peak values. A particle filter is then used for smoothing the time-to-peak curve to obtain a fit that is optimal in a minimum mean squared error sense. Simulation results from synthetic data and finite element modeling suggest that the particle filter provides lower mean squared reconstruction error with smaller variance than standard filtering methods, while preserving sharp boundary detail. Results from phantom experiments show that the shear wave velocity estimates in the stiff regions of the phantoms were within 20% of those obtained from a commercial ultrasound scanner and agree with estimates obtained using a standard least-squares fitting method. Estimates of area obtained from the particle-filtered shear wave velocity maps were within 10% of those obtained from B-mode ultrasound images. The particle filtering approach can be used to produce visually appealing SWV reconstructions, effectively delineating the various areas of the phantom with image quality comparable to existing techniques.
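A minimal sketch of the approach: treat the noiseless time-to-peak (TTP) values as the hidden states of a random-walk model, run a bootstrap particle filter over the noisy TTP observations, and estimate the shear wave speed from the slope of the smoothed TTP-versus-lateral-position line. The noise levels and synthetic data are assumptions for illustration, and a plain filter is used here rather than the full smoother described in the paper.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic time-to-peak (TTP) data: TTP grows linearly with lateral position at 1/c
# seconds per metre for shear wave speed c (here 3 m/s), plus heavy observation noise.
c_true = 3.0
x = np.linspace(0.0, 0.02, 80)                         # lateral positions (m)
ttp_true = x / c_true
ttp_obs = ttp_true + rng.normal(0.0, 0.4e-3, x.size)   # noisy observations (s)

def particle_filter(obs, n_particles=2000, q=0.2e-3, r=0.4e-3):
    """Bootstrap particle filter for a random-walk hidden state with Gaussian noise.
    q: process-noise std, r: observation-noise std. Returns posterior-mean estimates."""
    particles = np.full(n_particles, obs[0]) + rng.normal(0.0, r, n_particles)
    est = np.empty(obs.size)
    for k, z in enumerate(obs):
        particles += rng.normal(0.0, q, n_particles)       # propagate the random walk
        w = np.exp(-0.5 * ((z - particles) / r) ** 2)       # weight by observation likelihood
        w /= w.sum()
        est[k] = np.sum(w * particles)                      # minimum-MSE (posterior mean) estimate
        idx = rng.choice(n_particles, n_particles, p=w)     # resample
        particles = particles[idx]
    return est

ttp_smooth = particle_filter(ttp_obs)
slope_raw = np.polyfit(x, ttp_obs, 1)[0]
slope_pf = np.polyfit(x, ttp_smooth, 1)[0]
print(f"shear wave speed from raw TTP:      {1.0 / slope_raw:.2f} m/s")
print(f"shear wave speed from filtered TTP: {1.0 / slope_pf:.2f} m/s (true {c_true} m/s)")
```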
An Approach to Addressing Selection Bias in Survival Analysis
Carlin, Caroline S.; Solid, Craig A.
2014-01-01
This work proposes a frailty model that accounts for non-random treatment assignment in survival analysis. Using Monte Carlo simulation, we found that estimated treatment parameters from our proposed endogenous selection survival model (esSurv) closely parallel the consistent two-stage residual inclusion (2SRI) results, while offering computational and interpretive advantages. The esSurv method greatly enhances computational speed relative to 2SRI by eliminating the need for bootstrapped standard errors, and generally results in smaller standard errors than those estimated by 2SRI. In addition, esSurv explicitly estimates the correlation of unobservable factors contributing to both treatment assignment and the outcome of interest, providing an interpretive advantage over the residual parameter estimate in the 2SRI method. Comparisons with commonly used propensity score methods and with a model that does not account for non-random treatment assignment show clear bias in these methods that is not mitigated by increased sample size. We illustrate using actual dialysis patient data comparing mortality of patients with mature arteriovenous grafts for venous access to mortality of patients with grafts placed but not yet ready for use at the initiation of dialysis. We find strong evidence of endogeneity (with estimate of correlation in unobserved factors ρ̂ = 0.55), and estimate a mature-graft hazard ratio of 0.197 in our proposed method, with a similar 0.173 hazard ratio using 2SRI. The 0.630 hazard ratio from a frailty model without a correction for the non-random nature of treatment assignment illustrates the importance of accounting for endogeneity. PMID:24845211
An efficient decoding for low density parity check codes
NASA Astrophysics Data System (ADS)
Zhao, Ling; Zhang, Xiaolin; Zhu, Manjie
2009-12-01
Low density parity check (LDPC) codes are a class of forward-error-correction codes. They are among the best-known codes capable of achieving low bit error rates (BER) approaching Shannon's capacity limit. Recently, LDPC codes have been adopted by the European Digital Video Broadcasting (DVB-S2) standard, and have also been proposed for the emerging IEEE 802.16 fixed and mobile broadband wireless-access standard. The Consultative Committee for Space Data Systems (CCSDS) has also recommended LDPC codes for deep-space and near-Earth communications. It is clear that LDPC codes will be widely used in wired and wireless communication, magnetic recording, optical networking, DVB, and other fields in the near future. Efficient hardware implementation of LDPC codes is therefore of great interest, since LDPC codes are being considered for a wide range of applications. This paper presents an efficient partially parallel decoder architecture suited for quasi-cyclic (QC) LDPC codes, using the belief propagation algorithm for decoding. Algorithmic transformation and architectural-level optimization are incorporated to reduce the critical path. First, the parity-check matrix of the LDPC code is analyzed to determine the relationship between the row weight and the column weight. The sharing level of the check node updating units (CNU) and the variable node updating units (VNU) is then determined according to this relationship. Next, the CNUs and VNUs are rearranged and divided into several smaller parts; with the help of some auxiliary logic, these smaller parts can be grouped into CNUs during the check node update processing and into VNUs during the variable node update processing. These smaller parts are called node update kernel units (NKU), and the auxiliary logic circuits are called node update auxiliary units (NAU). With the NAUs' help, the two steps of the iteration are completed by the NKUs, which yields a substantial reduction in hardware resources. Meanwhile, efficient techniques have been developed to reduce the computation delay of the node processing units and to minimize the hardware overhead of parallel processing. The method applies not only to regular LDPC codes but also to irregular ones. Based on the proposed architecture, a (7493, 6096) irregular QC-LDPC code decoder is described in the Verilog hardware description language and implemented on an Altera StratixII EP2S130 field programmable gate array (FPGA). The implementation results show that over 20% of the logic core size can be saved compared with conventional partially parallel decoder architectures, without any performance degradation. With a 100 MHz decoding clock, the proposed decoder achieves a maximum (source data) decoding throughput of 133 Mb/s at 18 iterations.
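For readers unfamiliar with the decoding algorithm that the architecture implements, the sketch below shows belief propagation in its common min-sum form on a tiny parity-check matrix (the (7,4) Hamming code stands in for a QC-LDPC matrix). It illustrates the check-node and variable-node update steps that the CNUs and VNUs realize in hardware; it is not the paper's decoder, and all numbers are invented.

```python
import numpy as np

# Small parity-check matrix (the (7,4) Hamming code) standing in for a QC-LDPC matrix.
H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])

def min_sum_decode(llr, H, max_iter=20):
    """Min-sum approximation of belief-propagation decoding.
    llr: channel log-likelihood ratios (positive favours bit 0)."""
    m, n = H.shape
    msg_cv = np.zeros((m, n))                      # check -> variable messages
    for _ in range(max_iter):
        # Variable-node update: total belief minus the incoming message on each edge.
        msg_vc = np.where(H, llr + msg_cv.sum(axis=0) - msg_cv, 0.0)
        # Check-node update (min-sum): sign product and minimum magnitude of the others.
        for i in range(m):
            idx = np.flatnonzero(H[i])
            v = msg_vc[i, idx]
            sign = np.sign(v); sign[sign == 0] = 1.0
            prod_sign = sign.prod()
            absv = np.abs(v)
            for t, j in enumerate(idx):
                others = np.delete(absv, t)
                msg_cv[i, j] = prod_sign * sign[t] * others.min()
        total = llr + msg_cv.sum(axis=0)
        hard = (total < 0).astype(int)
        if not np.any(H @ hard % 2):               # all parity checks satisfied
            return hard, True
    return hard, False

# All-zero codeword over BPSK (bit 0 -> +1) with one sample corrupted by noise.
received = np.array([0.9, 1.1, -0.4, 1.2, 0.8, 1.0, 0.7])
decoded, ok = min_sum_decode(2.0 * received, H)    # LLR = 2*y/sigma^2 with sigma = 1 assumed
print(decoded, "converged" if ok else "not converged")
```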
Biases and Standard Errors of Standardized Regression Coefficients
ERIC Educational Resources Information Center
Yuan, Ke-Hai; Chan, Wai
2011-01-01
The paper obtains consistent standard errors (SE) and biases of order O(1/n) for the sample standardized regression coefficients with both random and given predictors. Analytical results indicate that the formulas for SEs given in popular text books are consistent only when the population value of the regression coefficient is zero. The sample…
NASA Astrophysics Data System (ADS)
Lee, Y.; Keehm, Y.
2011-12-01
Estimating the degree of weathering in stone cultural heritage, such as pagodas and statues, is very important for planning conservation and restoration. Ultrasonic measurement is one of the techniques commonly used to evaluate the weathering index of stone cultural properties, since it is easy to use and non-destructive. Typically a portable ultrasonic device, PUNDIT, is used with exponential sensors. However, many factors can cause errors in the measurements, such as the operator, the sensor layout, or the measurement direction. In this study, we carried out a variety of measurements with different operators (male and female), different sensor layouts (direct and indirect), and different sensor directions (anisotropy). For operator bias, we found no significant differences by the operator's sex, while the pressure an operator exerts can create larger errors in the measurements; calibrating each operator against a standard sample is therefore essential. For the sensor layout, we found that the indirect measurement (commonly used for cultural properties, since the direct measurement is difficult in most cases) gives a lower velocity than the true one. The correction coefficient is slightly different for different types of rock: 1.50 for granite and sandstone and 1.46 for marble. For the sensor directions, we found that many rocks have slight anisotropy in their ultrasonic velocity, even though they are considered isotropic at the macroscopic scale. Averaging measurements in four directions (0°, 45°, 90°, 135°) therefore gives much smaller errors (the variance is 2-3 times smaller). In conclusion, we quantify the errors in ultrasonic measurement of stone cultural properties from various sources and suggest the amount of correction and the procedures needed to calibrate the measurements. Acknowledgement: This study, which forms a part of the project, has been achieved with the support of a national R&D project hosted by the National Research Institute of Cultural Heritage of the Cultural Heritage Administration (No. NRICH-1107-B01F).
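The direction-averaging and indirect-to-direct correction described above amount to simple arithmetic. The sketch below assumes the correction is applied as a multiplicative factor, and the velocity readings are invented for illustration.

```python
import numpy as np

# Indirect ultrasonic velocities (m/s) measured in four directions at the same spot.
# Both the readings and the way the granite correction factor is applied are assumptions.
v_indirect = np.array([2950.0, 3010.0, 2895.0, 2980.0])   # 0, 45, 90, 135 degrees
CORRECTION_GRANITE = 1.50                                  # indirect -> direct factor (granite/sandstone)

v_mean_indirect = v_indirect.mean()
v_direct_equiv = CORRECTION_GRANITE * v_mean_indirect

print(f"directional spread (std): {v_indirect.std(ddof=1):.1f} m/s")
print(f"direction-averaged indirect velocity: {v_mean_indirect:.0f} m/s")
print(f"corrected (direct-equivalent) velocity: {v_direct_equiv:.0f} m/s")
```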
Point Charges Optimally Placed to Represent the Multipole Expansion of Charge Distributions
Onufriev, Alexey V.
2013-01-01
We propose an approach for approximating electrostatic charge distributions with a small number of point charges to optimally represent the original charge distribution. By construction, the proposed optimal point charge approximation (OPCA) retains many of the useful properties of the point multipole expansion, including the same far-field asymptotic behavior of the approximate potential. A general framework for numerically computing OPCA, for any given number of approximating charges, is described. We then derive a 2-charge practical point charge approximation, PPCA, which approximates the 2-charge OPCA via closed-form analytical expressions, and test the PPCA on a set of charge distributions relevant to biomolecular modeling. We measure the accuracy of the new approximations as the RMS error in the electrostatic potential, relative to that produced by the original charge distribution, at a distance comparable to the extent of the charge distribution (the mid-field). The error for the 2-charge PPCA is found to be on average 23% smaller than that of the optimally placed point dipole approximation, and comparable to that of the point quadrupole approximation. The standard deviation in RMS error for the 2-charge PPCA is 53% lower than that of the optimal point dipole approximation, and comparable to that of the point quadrupole approximation. We also calculate the 3-charge OPCA for representing the gas phase quantum mechanical charge distribution of a water molecule. The electrostatic potential calculated by the 3-charge OPCA for water, in the mid-field (2.8 Å from the oxygen atom), is on average 33.3% more accurate than the potential due to the point multipole expansion up to the octupole order. Compared to a 3-point-charge approximation in which the charges are placed on the atom centers, the 3-charge OPCA is seven times more accurate, by RMS error. The maximum error at the oxygen-Na distance (2.23 Å) is half that of the point multipole expansion up to the octupole order. PMID:23861790
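The accuracy metric used above, the relative RMS error of the potential on a mid-field shell, is straightforward to reproduce for any candidate point-charge approximation. In the sketch below the reference distribution is a TIP3P-like 3-charge water model and the 2-charge approximation is an arbitrary dipole; both are stand-ins rather than the paper's quantum-mechanical distribution or optimized OPCA/PPCA charges, and the 2.8 Å shell radius simply follows the mid-field distance quoted above.

```python
import numpy as np

rng = np.random.default_rng(3)

def potential(points, charges, positions):
    """Coulomb potential (in units of charge/length) of point charges at the field points."""
    d = np.linalg.norm(points[:, None, :] - positions[None, :, :], axis=2)
    return (charges / d).sum(axis=1)

# Reference distribution: TIP3P-like 3-charge water (charges in e, positions in Angstrom).
q_ref = np.array([-0.834, 0.417, 0.417])
r_ref = np.array([[0.0, 0.0, 0.0],
                  [0.9572, 0.0, 0.0],
                  [-0.2400, 0.9266, 0.0]])

# A crude 2-charge stand-in (a simple dipole), purely illustrative, not an optimized OPCA.
q_app = np.array([-0.5, 0.5])
r_app = np.array([[-0.1, -0.2, 0.0],
                  [0.3, 0.4, 0.0]])

# Sample the "mid-field": points on a sphere of radius 2.8 Angstrom around the oxygen.
n = 2000
v = rng.normal(size=(n, 3))
shell = 2.8 * v / np.linalg.norm(v, axis=1, keepdims=True)

phi_ref = potential(shell, q_ref, r_ref)
phi_app = potential(shell, q_app, r_app)
rms_rel = np.sqrt(np.mean((phi_app - phi_ref) ** 2)) / np.sqrt(np.mean(phi_ref ** 2))
print(f"relative RMS error of the 2-charge approximation in the mid-field: {rms_rel:.2%}")
```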
Wetherbee, G.A.; Latysh, N.E.; Gordon, J.D.
2005-01-01
Data from the U.S. Geological Survey (USGS) collocated-sampler program for the National Atmospheric Deposition Program/National Trends Network (NADP/NTN) are used to estimate the overall error of NADP/NTN measurements. Absolute errors are estimated by comparison of paired measurements from collocated instruments. Spatial and temporal differences in absolute error were identified and are consistent with longitudinal distributions of NADP/NTN measurements and spatial differences in precipitation characteristics. The magnitude of error for calcium, magnesium, ammonium, nitrate, and sulfate concentrations, specific conductance, and sample volume is of minor environmental significance to data users. Data collected after a 1994 sample-handling protocol change are prone to less absolute error than data collected prior to 1994. Absolute errors are smaller during non-winter months than during winter months for selected constituents at sites where frozen precipitation is common. Minimum resolvable differences are estimated for different regions of the USA to aid spatial and temporal watershed analyses.
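Estimating absolute error from collocated pairs reduces to summary statistics of the paired differences. The sketch below uses synthetic concentrations, and the 95th-percentile convention for a minimum resolvable difference is an assumption, not necessarily the USGS definition.

```python
import numpy as np

rng = np.random.default_rng(4)

# Paired weekly sulfate concentrations (mg/L) from a primary and a collocated sampler;
# the values are synthetic stand-ins for NADP/NTN collocated data.
primary = rng.lognormal(mean=0.0, sigma=0.5, size=52)
collocated = primary * rng.normal(1.0, 0.08, size=52) + rng.normal(0.0, 0.02, size=52)

abs_err = np.abs(primary - collocated)
rel_err = abs_err / ((primary + collocated) / 2.0)

print(f"median absolute error:   {np.median(abs_err):.3f} mg/L")
print(f"90th pct absolute error: {np.percentile(abs_err, 90):.3f} mg/L")
# One possible convention for a minimum resolvable difference: the 95th percentile of the
# paired absolute differences (an assumption here, not the program's formal definition).
print(f"minimum resolvable difference (95th pct): {np.percentile(abs_err, 95):.3f} mg/L")
print(f"median relative error: {np.median(rel_err):.1%}")
```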
Self-Interaction Error in Density Functional Theory: An Appraisal.
Bao, Junwei Lucas; Gagliardi, Laura; Truhlar, Donald G
2018-05-03
Self-interaction error (SIE) is considered to be one of the major sources of error in most approximate exchange-correlation functionals for Kohn-Sham density-functional theory (KS-DFT), and it is large with all local exchange-correlation functionals and with some hybrid functionals. In this work, we consider systems conventionally considered to be dominated by SIE. For these systems, we demonstrate that by using multiconfiguration pair-density functional theory (MC-PDFT), the error of a translated local density-functional approximation is significantly reduced (by a factor of 3) when using an MCSCF density and on-top density, as compared to using KS-DFT with the parent functional; the error in MC-PDFT with local on-top functionals is even lower than the error in some popular KS-DFT hybrid functionals. Density-functional theory, either in MC-PDFT form with local on-top functionals or in KS-DFT form with some functionals having 50% or more nonlocal exchange, has smaller errors for SIE-prone systems than does CASSCF, which has no SIE.
Harrell-Williams, Leigh; Wolfe, Edward W
2014-01-01
Previous research has investigated the influence of sample size, model misspecification, test length, ability distribution offset, and generating model on the likelihood ratio difference test in applications of item response models. This study extended that research to the evaluation of dimensionality using the multidimensional random coefficients multinomial logit model (MRCMLM). Logistic regression analysis of simulated data reveals that sample size and test length have a large effect on the capacity of the LR difference test to correctly identify unidimensionality, with shorter tests and smaller sample sizes leading to smaller Type I error rates. Higher levels of simulated misfit resulted in fewer incorrect decisions than data with no or little misfit. However, Type I error rates indicate that the likelihood ratio difference test is not suitable under any of the simulated conditions for evaluating dimensionality in applications of the MRCMLM.
NASA Astrophysics Data System (ADS)
Rhee, Jinyoung; Kim, Gayoung; Im, Jungho
2017-04-01
Three regions of Indonesia with different rainfall characteristics were chosen to develop machine-learning-based drought forecast models. The 6-month Standardized Precipitation Index (SPI6) was selected as the target variable. The models' forecast skill was compared with that of long-range climate forecast models in terms of drought accuracy and regression mean absolute error (MAE). Indonesian droughts are known to be related to El Nino Southern Oscillation (ENSO) variability despite regional differences, as well as to the monsoon, local sea surface temperature (SST), other large-scale atmosphere-ocean interactions such as the Indian Ocean Dipole (IOD) and the South Pacific Convergence Zone (SPCZ), and local factors including topography and elevation. The machine learning models are thus intended to enhance drought forecast skill by combining local and remote SST and remote sensing information, which reflects initial drought conditions, with the long-range climate forecast model results. A total of 126 machine learning models were developed, covering the three regions of West Java (JB), West Sumatra (SB), and Gorontalo (GO), six long-range climate forecast models (MSC_CanCM3, MSC_CanCM4, NCEP, NASA, PNU, POAMA) plus one climatology model based on remote sensing precipitation data, and 1- to 6-month lead times. When the machine learning models were compared with the long-range climate forecast models, the West Java and Gorontalo regions showed similar characteristics in terms of drought accuracy. The drought accuracy of the long-range climate forecast models was generally higher than that of the machine learning models at short lead times, but the opposite held at longer lead times. For West Sumatra, however, the machine learning models and the long-range climate forecast models showed similar drought accuracy. The machine learning models showed smaller regression errors in all three regions, especially at longer lead times. Among the three regions, the machine learning models developed for Gorontalo showed the highest drought accuracy and the lowest regression error. West Java showed higher drought accuracy than West Sumatra, while West Sumatra showed lower regression error than West Java. The lower error in West Sumatra may be due to the smaller sample size used for training and evaluation in that region. Regional differences in forecast skill are determined by the effect of ENSO and the resulting skill of the long-range climate forecast models. Although somewhat higher in West Sumatra, the relative importance of the remote sensing variables was low in most cases. The high importance of the variables derived from the long-range climate forecast models indicates that the forecast skill of the machine learning models is mostly determined by the forecast skill of the climate models.
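The two skill measures used in the comparison, drought accuracy and regression MAE, can be computed as in the sketch below. The SPI6 series, the stand-in forecasts, and the -1.0 drought threshold are placeholders for illustration, not the study's data or definitions.

```python
import numpy as np

def drought_accuracy(obs_spi6, pred_spi6, threshold=-1.0):
    """Fraction of months whose drought/no-drought state (SPI6 below threshold) is forecast
    correctly. The -1.0 threshold for 'drought' is an assumption used for illustration."""
    return np.mean((obs_spi6 < threshold) == (pred_spi6 < threshold))

def mae(obs, pred):
    """Regression mean absolute error."""
    return np.mean(np.abs(obs - pred))

rng = np.random.default_rng(5)
obs = rng.normal(0.0, 1.0, 120)                     # 10 years of observed SPI6 (synthetic)
pred_ml = obs + rng.normal(0.0, 0.6, obs.size)      # stand-in machine-learning forecast
pred_clim = obs + rng.normal(0.0, 0.9, obs.size)    # stand-in climate-model forecast

for name, pred in [("machine learning", pred_ml), ("climate model", pred_clim)]:
    print(f"{name:17s}  MAE = {mae(obs, pred):.2f}   drought accuracy = {drought_accuracy(obs, pred):.2f}")
```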
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hunter, J. L.; Sutton, T. M.
2013-07-01
In Monte Carlo iterated-fission-source calculations, relative uncertainties on local tallies tend to be larger in lower-power regions and smaller in higher-power regions. Reducing the largest uncertainties to an acceptable level simply by running a larger number of neutron histories is often prohibitively expensive. The uniform fission site method has been developed to yield a more spatially-uniform distribution of relative uncertainties. This is accomplished by biasing the density of fission neutron source sites while not biasing the solution. The method is integrated into the source iteration process, and does not require any auxiliary forward or adjoint calculations. For a given amount of computational effort, the use of the method results in a reduction of the largest uncertainties relative to the standard algorithm. Two variants of the method have been implemented and tested. Both have been shown to be effective. (authors)
Evidence for age-associated cognitive decline from Internet game scores.
Geyer, Jason; Insel, Philip; Farzin, Faraz; Sternberg, Daniel; Hardy, Joseph L; Scanlon, Michael; Mungas, Dan; Kramer, Joel; Mackin, R Scott; Weiner, Michael W
2015-06-01
Lumosity's Memory Match (LMM) is an online game requiring visual working memory. Change in LMM scores may be associated with individual differences in age-related changes in working memory. Effects of age and time on LMM learning and forgetting rates were estimated using data from 1890 game sessions for users aged 40 to 79 years. There were significant effects of age on baseline LMM scores (β = -.31, standard error [SE] = .02, P < .0001) and on learning rates (β = -.0066, SE = .0008, P < .0001), both declining with age. A sample size of 202 subjects per arm was estimated for a 1-year study of subjects in the lower quartile of game performance. Online memory games have the potential to identify age-related decline in cognition and to identify subjects at risk for cognitive decline with smaller sample sizes and lower cost than traditional recruitment methods.
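Effects of age on baseline score and learning rate of this kind are commonly estimated with a mixed-effects model that has random intercepts and slopes per user. The sketch below fits such a model with statsmodels on synthetic data; the data-generating numbers loosely echo the coefficients above but are otherwise invented, and this is not the authors' actual analysis.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(6)

# Synthetic game-score panel: each user plays several sessions; baseline score and the
# per-session learning rate both decline with age (all numbers are made up).
rows = []
for user in range(300):
    age = rng.uniform(40, 79)
    baseline = 60 - 0.31 * age + rng.normal(0, 5)
    learning = 1.0 - 0.0066 * age + rng.normal(0, 0.2)
    for session in range(10):
        score = baseline + learning * session + rng.normal(0, 2)
        rows.append({"user": user, "age": age, "session": session, "score": score})
df = pd.DataFrame(rows)

# Random-intercept, random-slope mixed model; the age:session interaction captures the
# effect of age on the learning rate, analogous to the effects reported in the abstract.
model = smf.mixedlm("score ~ age * session", df, groups=df["user"], re_formula="~session")
result = model.fit()
print(result.summary())
```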
DOE Office of Scientific and Technical Information (OSTI.GOV)
Church, J; Slaughter, D; Norman, E
Error rates in a cargo screening system such as the Nuclear Car Wash [1-7] depend on the standard deviation of the background radiation count rate. Because the Nuclear Car Wash is an active interrogation technique, the radiation signal for fissile material must be detected above a background count rate consisting of cosmic, ambient, and neutron-activated radiations. Previous work [1,6] showed the corresponding negative repercussions of this background variation for the sensitivity of the system. Therefore, to assure the most accurate estimation of the variation, experiments have been performed to quantify components of the actual variance in the background count rate, including variations in generator power, irradiation time, and container contents. The background variance is determined by these experiments to be a factor of 2 smaller than the values assumed in previous analyses, resulting in substantially improved projections of system performance for the Nuclear Car Wash.
NASA Technical Reports Server (NTRS)
Podio, Fernando; Vollrath, William; Williams, Joel; Kobler, Ben; Crouse, Don
1998-01-01
Sophisticated network storage management applications are rapidly evolving to satisfy a market demand for highly reliable data storage systems with large data storage capacities and performance requirements. To preserve a high degree of data integrity, these applications must rely on intelligent data storage devices that can provide reliable indicators of data degradation. Error correction activity generally occurs within storage devices without notification to the host. Early indicators of degradation and media error monitoring and reporting (MEMR) techniques implemented in data storage devices allow network storage management applications to notify system administrators of these events and to take appropriate corrective actions before catastrophic errors occur. Although MEMR techniques have been implemented in data storage devices for many years, until 1996 no MEMR standards existed. In 1996 the American National Standards Institute (ANSI) approved the only known (world-wide) industry standard specifying MEMR techniques to verify stored data on optical disks. This industry standard was developed under the auspices of the Association for Information and Image Management (AIIM). A recently formed AIIM Optical Tape Subcommittee initiated the development of another data integrity standard specifying a set of media error monitoring tools and media error monitoring information (MEMRI) to verify stored data on optical tape media. This paper discusses the need for intelligent storage devices that can provide data integrity metadata, the content of the existing data integrity standard for optical disks, and the content of the MEMRI standard being developed by the AIIM Optical Tape Subcommittee.
Li, Qi-Quan; Wang, Chang-Quan; Zhang, Wen-Jiang; Yu, Yong; Li, Bing; Yang, Juan; Bai, Gen-Chuan; Cai, Yan
2013-02-01
In this study, a radial basis function neural network model combined with ordinary kriging (RBFNN_OK) was adopted to predict the spatial distribution of soil nutrients (organic matter and total N) in a typical hilly region of the Sichuan Basin, Southwest China, and the performance of this method was compared with that of ordinary kriging (OK) and regression kriging (RK). All three methods produced similar soil nutrient maps. However, compared with those obtained by the multiple linear regression model, the correlation coefficients between the measured values and the predicted values of soil organic matter and total N obtained by the neural network model increased by 12.3% and 16.5%, respectively, suggesting that the neural network model could more accurately capture the complicated relationships between soil nutrients and quantitative environmental factors. Error analyses of the prediction values at 469 validation points indicated that the mean absolute error (MAE), mean relative error (MRE), and root mean squared error (RMSE) of RBFNN_OK were 6.9%, 7.4%, and 5.1% (for soil organic matter) and 4.9%, 6.1%, and 4.6% (for soil total N) smaller than those of OK (P<0.01), and 2.4%, 2.6%, and 1.8% (for soil organic matter) and 2.1%, 2.8%, and 2.2% (for soil total N) smaller than those of RK (P<0.05).
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pickles, W.L.; McClure, J.W.; Howell, R.H.
1978-01-01
A sophisticated non-linear multiparameter fitting program has been used to produce a best-fit calibration curve for the response of an x-ray fluorescence analyzer to uranium nitrate, freeze-dried, 0.2%-accurate gravimetric standards. The program is based on the unconstrained minimization subroutine VA02A. The program considers the mass values of the gravimetric standards as parameters to be fit along with the normal calibration curve parameters. The fitting procedure weights the system errors and the mass errors in a consistent way. The resulting best-fit calibration curve parameters reflect the fact that the masses of the standard samples are measured quantities with a known error. Error estimates for the calibration curve parameters can be obtained from the curvature of the chi-squared matrix or from error relaxation techniques. It has been shown that non-dispersive x-ray fluorescence analysis of 0.1 to 1 mg of freeze-dried UNO3 can have an accuracy of 0.2% in 1000 sec.
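Treating the standard masses as additional fit parameters with their own weights is a standard errors-in-variables device: the residual vector contains both the weighted response residuals and the weighted "pulls" of the fitted masses toward their gravimetric values. A minimal sketch with scipy follows; the power-law response curve and all numerical values are assumptions, and VA02A is replaced by a generic least-squares routine.

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(7)

# Gravimetric standards: nominal masses (mg) with 0.2% uncertainty, and measured
# XRF count rates with their own uncertainty. All numbers are illustrative.
m_nominal = np.array([0.1, 0.2, 0.4, 0.6, 0.8, 1.0])
sigma_m = 0.002 * m_nominal
a_true, b_true = 5000.0, 0.85                          # hypothetical response-curve parameters
counts = a_true * m_nominal**b_true * rng.normal(1.0, 0.01, m_nominal.size)
sigma_c = 0.01 * counts

def residuals(p):
    """Joint residuals: response residuals plus pulls of the fitted masses toward
    their gravimetric values, each weighted by its own standard error."""
    a, b = p[0], p[1]
    m_fit = p[2:]                                      # the standard masses are parameters too
    r_counts = (counts - a * m_fit**b) / sigma_c
    r_masses = (m_fit - m_nominal) / sigma_m
    return np.concatenate([r_counts, r_masses])

p0 = np.concatenate([[4000.0, 1.0], m_nominal])
fit = least_squares(residuals, p0)
a_hat, b_hat = fit.x[0], fit.x[1]
print(f"calibration fit: a = {a_hat:.0f}, b = {b_hat:.3f} (true {a_true}, {b_true})")
```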
Performance monitoring and error significance in patients with obsessive-compulsive disorder.
Endrass, Tanja; Schuermann, Beate; Kaufmann, Christan; Spielberg, Rüdiger; Kniesche, Rainer; Kathmann, Norbert
2010-05-01
Performance monitoring has been consistently found to be overactive in obsessive-compulsive disorder (OCD). The present study examines whether performance monitoring in OCD is adjusted with error significance. Therefore, errors in a flanker task were followed by neutral (standard condition) or punishment feedback (punishment condition). In the standard condition, patients had significantly larger error-related negativity (ERN) and correct-related negativity (CRN) amplitudes than controls. In the punishment condition, however, the groups did not differ in ERN and CRN amplitudes. While healthy controls showed an amplitude enhancement between the standard and punishment conditions, OCD patients showed no variation. In contrast, group differences were not found for the error positivity (Pe): both groups had larger Pe amplitudes in the punishment condition. The results confirm earlier findings of overactive error monitoring in OCD. The absence of a variation with error significance might indicate that OCD patients are unable to down-regulate their monitoring activity according to external requirements. Copyright 2010 Elsevier B.V. All rights reserved.
Lystrom, David J.
1972-01-01
Various methods of verifying real-time streamflow data are outlined in part II. Relatively large errors (those greater than 20-30 percent) can be detected readily by use of well-designed verification programs for a digital computer, and smaller errors can be detected only by discharge measurements and field observations. The capability to substitute a simulated discharge value for missing or erroneous data is incorporated in some of the verification routines described. The routines represent concepts ranging from basic statistical comparisons to complex watershed modeling and provide a selection from which real-time data users can choose a suitable level of verification.
NASA Astrophysics Data System (ADS)
Islamiyati, A.; Fatmawati; Chamidah, N.
2018-03-01
In longitudinal data with bi-response, correlation occurs in the measurements both between the subjects of observation and between the responses. This gives rise to auto-correlation of the errors, which can be handled with a covariance matrix. In this article, we estimate the covariance matrix based on a penalized spline regression model. The penalized spline involves knot points and smoothing parameters simultaneously in controlling the smoothness of the curve. Based on our simulation study, the estimated regression model of the weighted penalized spline with a covariance matrix gives a smaller error value than the model without a covariance matrix.
NASA Technical Reports Server (NTRS)
Bell, Thomas L.; Lau, William K. M. (Technical Monitor)
2002-01-01
The proposed Global Precipitation Mission (GPM) builds on the success of the Tropical Rainfall Measuring Mission (TRMM), offering a constellation of microwave-sensor-equipped smaller satellites in addition to a larger, multiply-instrumented "mother" satellite that will include an improved precipitation radar system to which the precipitation estimates of the smaller satellites can be tuned. Coverage by the satellites will be nearly global rather than being confined as TRMM was to lower latitudes. It is hoped that the satellite constellation can provide observations at most places on the earth at least once every three hours, though practical considerations may force some compromises. The GPM system offers the possibility of providing precipitation maps with much better time resolution than the monthly averages around which TRMM was planned, and therefore opens up new possibilities for hydrology and data assimilation into models. In this talk, methods that were developed for estimating sampling error in the rainfall averages that TRMM is providing will be used to estimate sampling error levels for GPM-era configurations. Possible impacts on GPM products of compromises in the sampling frequency will be discussed.
Tung, Li-Chen; Yu, Wan-Hui; Lin, Gong-Hong; Yu, Tzu-Ying; Wu, Chien-Te; Tsai, Chia-Yin; Chou, Willy; Chen, Mei-Hsiang; Hsieh, Ching-Lin
2016-09-01
To develop a Tablet-based Symbol Digit Modalities Test (T-SDMT) and to examine the test-retest reliability and concurrent validity of the T-SDMT in patients with stroke. The study had two phases. In the first phase, six experts, nine college students and five outpatients participated in the development and testing of the T-SDMT. In the second phase, 52 outpatients were evaluated twice (2 weeks apart) with the T-SDMT and SDMT to examine the test-retest reliability and concurrent validity of the T-SDMT. The T-SDMT was developed via expert input and college student/patient feedback. Regarding test-retest reliability, the practise effects of the T-SDMT and SDMT were both trivial (d=0.12) but significant (p≦0.015). The improvement in the T-SDMT (4.7%) was smaller than that in the SDMT (5.6%). The minimal detectable changes (MDC%) of the T-SDMT and SDMT were 6.7 (22.8%) and 10.3 (32.8%), respectively. The T-SDMT and SDMT were highly correlated with each other at the two time points (Pearson's r=0.90-0.91). The T-SDMT demonstrated good concurrent validity with the SDMT. Because the T-SDMT had a smaller practise effect and less random measurement error (superior test-retest reliability), it is recommended over the SDMT for assessing information processing speed in patients with stroke. Implications for Rehabilitation The Symbol Digit Modalities Test (SDMT), a common measure of information processing speed, showed a substantial practise effect and considerable random measurement error in patients with stroke. The Tablet-based SDMT (T-SDMT) has been developed to reduce the practise effect and random measurement error of the SDMT in patients with stroke. The T-SDMT had smaller practise effect and random measurement error than the SDMT, which can provide more reliable assessments of information processing speed.
Random errors in interferometry with the least-squares method
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang Qi
2011-01-20
This investigation analyzes random errors in interferometric surface profilers using the least-squares method when random noise is present. Two types of random noise are considered here: intensity noise and position noise. Two formulas have been derived for estimating the standard deviations of the surface height measurements: one for estimating the standard deviation when only intensity noise is present, and the other when only position noise is present. Measurements on simulated noisy interferometric data have been performed, and the standard deviations of the simulated measurements have been compared with those derived theoretically. The relationships between random error and the wavelength of the light source, and between random error and the amplitude of the interference fringe, have also been discussed.
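The intensity-noise case can also be checked numerically: fit the phase at a single pixel by linear least squares over the phase-shifted frames, convert the phase to height, and repeat with fresh noise to obtain the standard deviation. This Monte Carlo sketch complements the closed-form expressions derived in the paper; the four-step phase shift, fringe parameters, and noise level are assumptions.

```python
import numpy as np

rng = np.random.default_rng(8)

WAVELENGTH = 632.8e-9                                      # He-Ne wavelength (m), assumed
deltas = np.array([0, np.pi / 2, np.pi, 3 * np.pi / 2])    # four-step phase shifts

def height_lsq(intensity, deltas):
    """Least-squares phase retrieval from I_i = a + c*cos(delta_i) + d*sin(delta_i)."""
    A = np.column_stack([np.ones_like(deltas), np.cos(deltas), np.sin(deltas)])
    a, c, d = np.linalg.lstsq(A, intensity, rcond=None)[0]
    phase = np.arctan2(-d, c)                              # since c = b*cos(phi), d = -b*sin(phi)
    return phase * WAVELENGTH / (4.0 * np.pi)              # surface height (m)

# True fringe at one pixel, then Monte Carlo repetitions with additive intensity noise.
a0, b0, phi0 = 100.0, 80.0, 0.7
clean = a0 + b0 * np.cos(phi0 + deltas)
sigma_I = 2.0                                              # assumed intensity-noise std (counts)

heights = np.array([
    height_lsq(clean + rng.normal(0.0, sigma_I, deltas.size), deltas)
    for _ in range(10000)
])
print(f"std of height due to intensity noise: {heights.std(ddof=1) * 1e9:.3f} nm")
```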
Algorithm 699 - A new representation of Patterson's quadrature formulae
NASA Technical Reports Server (NTRS)
Krogh, Fred T.; Van Snyder, W.
1991-01-01
A method is presented to reduce the number of coefficients necessary to represent Patterson's quadrature formulae. It also reduces the amount of storage necessary for storing function values, and produces slightly smaller error in evaluating the formulae.
Quadratic Zeeman effect for hydrogen: A method for rigorous bound-state error estimates
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fonte, G.; Falsaperla, P.; Schiffrer, G.
1990-06-01
We present a variational method, based on direct minimization of energy, for the calculation of eigenvalues and eigenfunctions of a hydrogen atom in a strong uniform magnetic field in the framework of the nonrelativistic theory (quadratic Zeeman effect). Using semiparabolic coordinates and a harmonic-oscillator basis, we show that it is possible to give rigorous error estimates for both eigenvalues and eigenfunctions by applying some results of Kato (Proc. Phys. Soc. Jpn. 4, 334 (1949)). The method can be applied in this simple form only to the lowest level of given angular momentum and parity, but it is also possible to apply it to any excited state by using the standard Rayleigh-Ritz diagonalization method. However, due to the particular basis, the method is expected to be more effective, the weaker the field and the smaller the excitation energy, while the results of Kato we have employed lead to good estimates only when the level spacing is not too small. We present a numerical application to the m^p = 0^+ ground state and the lowest m^p = 1^- excited state, giving results that are among the most accurate in the literature for magnetic fields up to about 10^10 G.
Reljin, Natasa; Reyes, Bersain A.; Chon, Ki H.
2015-01-01
In this paper, we propose the use of blanket fractal dimension (BFD) to estimate the tidal volume from smartphone-acquired tracheal sounds. We collected tracheal sounds with a Samsung Galaxy S4 smartphone, from five (N = 5) healthy volunteers. Each volunteer performed the experiment six times; first to obtain linear and exponential fitting models, and then to fit new data onto the existing models. Thus, the total number of recordings was 30. The estimated volumes were compared to the true values, obtained with a Respitrace system, which was considered as a reference. Since Shannon entropy (SE) is frequently used as a feature in tracheal sound analyses, we estimated the tidal volume from the same sounds by using SE as well. The evaluation of the performed estimation, using BFD and SE methods, was quantified by the normalized root-mean-squared error (NRMSE). The results show that the BFD outperformed the SE (at least twice smaller NRMSE was obtained). The smallest NRMSE error of 15.877% ± 9.246% (mean ± standard deviation) was obtained with the BFD and exponential model. In addition, it was shown that the fitting curves calculated during the first day of experiments could be successfully used for at least the five following days. PMID:25923929
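The blanket fractal dimension of a 1-D signal can be computed with the classic morphological "blanket" construction: iteratively dilate an upper envelope and erode a lower one, then fit the slope of the log-log area curve. The sketch below follows that textbook recipe and may differ in details from the paper's implementation; the two synthetic signals simply stand in for a smoother and a rougher tracheal-sound segment.

```python
import numpy as np

def blanket_fractal_dimension(signal, max_eps=20):
    """Blanket (morphological covering) estimate of the fractal dimension of a 1-D signal,
    following the classic Peleg-style construction; details may differ from the paper's BFD."""
    u = signal.astype(float).copy()     # upper blanket
    b = signal.astype(float).copy()     # lower blanket
    areas = []
    for e in range(1, max_eps + 1):
        # Dilate the upper blanket and erode the lower one by one unit per iteration.
        # np.roll wraps at the edges, which is acceptable for this sketch.
        u = np.maximum(u + 1, np.maximum(np.roll(u, 1), np.roll(u, -1)))
        b = np.minimum(b - 1, np.minimum(np.roll(b, 1), np.roll(b, -1)))
        areas.append(np.sum(u - b) / (2.0 * e))
    eps = np.arange(1, max_eps + 1)
    slope = np.polyfit(np.log(eps), np.log(areas), 1)[0]
    return 1.0 - slope                  # for a curve, A(eps) ~ eps**(1 - D)

rng = np.random.default_rng(9)
# Synthetic stand-ins: the rougher (noisier) segment should yield a higher BFD.
smooth_seg = 5 * rng.standard_normal(4000).cumsum() / 60 + rng.standard_normal(4000)
rough_seg = 5 * rng.standard_normal(4000).cumsum() / 60 + 4 * rng.standard_normal(4000)
print(f"BFD of smoother segment: {blanket_fractal_dimension(smooth_seg):.3f}")
print(f"BFD of rougher segment:  {blanket_fractal_dimension(rough_seg):.3f}")
```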
NASA Astrophysics Data System (ADS)
Aminah, Agustin Siti; Pawitan, Gandhi; Tantular, Bertho
2017-03-01
So far, most of the data published by Statistics Indonesia (BPS), the provider of national statistics, are still limited to the district level. Sample sizes at smaller area levels are insufficient, so direct estimation of poverty indicators produces high standard errors, and analyses based on them are unreliable. To solve this problem, an estimation method that provides better accuracy by combining survey data with other auxiliary data is required. One method often used for this purpose is Small Area Estimation (SAE). Among the many SAE methods is the Empirical Best Linear Unbiased Prediction (EBLUP). The EBLUP method with the maximum likelihood (ML) procedure does not account for the loss of degrees of freedom due to estimating β with β̂. This drawback motivates the use of the restricted maximum likelihood (REML) procedure. This paper proposes EBLUP with the REML procedure for estimating poverty indicators by modeling average household expenditure per capita, and implements a bootstrap procedure to calculate the MSE (mean squared error) in order to compare the accuracy of the EBLUP method with that of the direct estimation method. Results show that the EBLUP method reduces the MSE in small area estimation.
Compensating for magnetic field inhomogeneity in multigradient-echo-based MR thermometry.
Simonis, Frank F J; Petersen, Esben T; Bartels, Lambertus W; Lagendijk, Jan J W; van den Berg, Cornelis A T
2015-03-01
MR thermometry (MRT) is a noninvasive method for measuring temperature that can potentially be used for radio frequency (RF) safety monitoring. This application requires measuring absolute temperature. In this study, a multigradient-echo (mGE) MRT sequence was used for that purpose. A drawback of this sequence, however, is that its accuracy is affected by background gradients. In this article, we present a method to minimize this effect and to improve absolute temperature measurements using MRI. By determining background gradients using a B0 map or by combining data acquired with two opposing readout directions, the error can be removed in a homogenous phantom, thus improving temperature maps. All scans were performed on a 3T system using ethylene glycol-filled phantoms. Background gradients were varied, and one phantom was uniformly heated to validate both compensation approaches. Independent temperature recordings were made with optical probes. Errors correlated closely to the background gradients in all experiments. Temperature distributions showed a much smaller standard deviation when the corrections were applied (0.21°C vs. 0.45°C) and correlated well with thermo-optical probes. The corrections offer the possibility to measure RF heating in phantoms more precisely. This allows mGE MRT to become a valuable tool in RF safety assessment. © 2014 Wiley Periodicals, Inc.
Accurate Acoustic Thermometry I: The Triple Point of Gallium
NASA Astrophysics Data System (ADS)
Moldover, M. R.; Trusler, J. P. M.
1988-01-01
The speed of sound in argon has been accurately measured in the pressure range 25-380 kPa at the temperature of the triple point of gallium (Tg) and at 340 kPa at the temperature of the triple point of water (Tt). The results are combined with previously published thermodynamic and transport property data to obtain Tg = (302.9169 +/- 0.0005) K on the thermodynamic scale. Among recent determinations of T68 (the temperature on IPTS-68) at the gallium triple point, those with the smallest measurement uncertainty fall in the range 302.923 71 to 302.923 98 K. We conclude that T-T68 = (-6.9 +/- 0.5) mK near 303 K, in agreement with results obtained from other primary thermometers. The speed of sound was measured with a spherical resonator. The volume and thermal expansion of the resonator were determined by weighing the mercury required to fill it at Tt and Tg. The largest part of the standard error in the present determination of Tg is systematic. It results from imperfect knowledge of the thermal expansion of mercury between Tt and Tg. Smaller parts of the error result from imperfections in the measurement of the temperature of the resonator and of the resonance frequencies.
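The thermodynamic relation underlying this determination is left implicit in the abstract. In the zero-pressure (ideal-gas) limit the squared speed of sound in argon is proportional to thermodynamic temperature, so, as a simplified statement of the principle (the actual work extrapolates measured u²(p) to zero pressure with an acoustic virial expansion and applies further corrections),

```latex
u_0^2(T) = \frac{\gamma_0 R T}{M}
\qquad\Longrightarrow\qquad
T_g \;\approx\; T_t \,\frac{u_0^2(T_g)}{u_0^2(T_t)},
```

where γ₀ = 5/3 for a monatomic gas, R is the molar gas constant, and M is the molar mass of argon.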
Eye size and shape in newborn children and their relation to axial length and refraction at 3 years.
Lim, Laurence Shen; Chua, Sharon; Tan, Pei Ting; Cai, Shirong; Chong, Yap-Seng; Kwek, Kenneth; Gluckman, Peter D; Fortier, Marielle V; Ngo, Cheryl; Qiu, Anqi; Saw, Seang-Mei
2015-07-01
To determine if eye size and shape at birth are associated with eye size and refractive error 3 years later. A subset of 173 full-term newborn infants from the Growing Up in Singapore Towards healthy Outcomes (GUSTO) birth cohort underwent magnetic resonance imaging (MRI) to measure the dimensions of the internal eye. Eye shape was assessed by an oblateness index, calculated as 1 - (axial length/width) or 1 - (axial length/height). Cycloplegic autorefraction (Canon Autorefractor RK-F1) and optical biometry (IOLMaster) were performed 3 years later. Both eyes of 173 children were analysed. Eyes with longer axial length at birth had smaller increases in axial length at 3 years (p < 0.001). Eyes with larger baseline volumes and surface areas had smaller increases in axial length at 3 years (p < 0.001 for both). Eyes which were more oblate at birth had greater increases in axial length at 3 years (p < 0.001). Using width to calculate oblateness, prolate eyes had smaller increases in axial length at 3 years compared to oblate eyes (p < 0.001), and, using height, prolate and spherical eyes had smaller increases in axial length at 3 years compared to oblate eyes (p < 0.001 for both). There were no associations between eye size and shape at birth and refraction, corneal curvature or myopia at 3 years. Eyes that are larger and have prolate or spherical shapes at birth exhibit smaller increases in axial length over the first 3 years of life. Eye size and shape at birth influence subsequent eye growth but not refractive error development. © 2015 The Authors Ophthalmic & Physiological Optics © 2015 The College of Optometrists.
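The oblateness index used here is simple enough to state as code; a minimal sketch with illustrative dimensions (not values from the study):

```python
def oblateness(axial_length_mm: float, transverse_mm: float) -> float:
    """Oblateness index as defined in the abstract: 1 - (axial length / width or height).
    Positive values indicate an oblate globe (wider or taller than it is long),
    negative values a prolate globe, and values near zero a spherical globe."""
    return 1.0 - axial_length_mm / transverse_mm

# Illustrative newborn eye dimensions (mm)
print(oblateness(17.0, 17.8))   # oblate globe,  index ~ +0.045
print(oblateness(17.8, 17.0))   # prolate globe, index ~ -0.047
```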
Li, Xia; Dawant, Benoit M.; Welch, E. Brian; Chakravarthy, A. Bapsi; Xu, Lei; Mayer, Ingrid; Kelley, Mark; Meszoely, Ingrid; Means-Powell, Julie; Gore, John C.; Yankeelov, Thomas E.
2010-01-01
Purpose: The authors present a method to validate coregistration of breast magnetic resonance images obtained at multiple time points during the course of treatment. In performing sequential registration of breast images, the effects of patient repositioning, as well as possible changes in tumor shape and volume, must be considered. The authors accomplish this by extending the adaptive bases algorithm (ABA) to include a tumor-volume preserving constraint in the cost function. In this study, the authors evaluate this approach using a novel validation method that simulates not only the bulk deformation associated with breast MR images obtained at different time points, but also the reduction in tumor volume typically observed as a response to neoadjuvant chemotherapy. Methods: For each of the six patients, high-resolution 3D contrast enhanced T1-weighted images were obtained before treatment, after one cycle of chemotherapy and at the conclusion of chemotherapy. To evaluate the effects of decreasing tumor size during the course of therapy, simulations were run in which the tumor in the original images was contracted by 25%, 50%, 75%, and 95%, respectively. The contracted area was then filled using texture from local healthy appearing tissue. Next, to simulate the post-treatment data, the simulated (i.e., contracted tumor) images were coregistered to the experimentally measured post-treatment images using a surface registration. By comparing the deformations generated by the constrained and unconstrained version of ABA, the authors assessed the accuracy of the registration algorithms. The authors also applied the two algorithms on experimental data to study the tumor volume changes, the value of the constraint, and the smoothness of transformations. Results: For the six patient data sets, the average voxel shift error (mean±standard deviation) for the ABA with constraint was 0.45±0.37, 0.97±0.83, 1.43±0.96, and 1.80±1.17 mm for the 25%, 50%, 75%, and 95% contraction simulations, respectively. In comparison, the average voxel shift error for the unconstrained ABA was 0.46±0.29, 1.13±1.17, 2.40±2.04, and 3.53±2.89 mm, respectively. These voxel shift errors translate into compression of the tumor volume: The ABA with constraint returned volumetric errors of 2.70±4.08%, 7.31±4.52%, 9.28±5.55%, and 13.19±6.73% for the 25%, 50%, 75%, and 95% contraction simulations, respectively, whereas the unconstrained ABA returned volumetric errors of 4.00±4.46%, 9.93±4.83%, 19.78±5.657%, and 29.75±15.18%. The ABA with constraint yields a smaller mean shift error, as well as a smaller volume error (p=0.03125 for the 75% and 95% contractions), than the unconstrained ABA for the simulated sets. Visual and quantitative assessments on experimental data also indicate a good performance of the proposed algorithm. Conclusions: The ABA with constraint can successfully register breast MR images acquired at different time points with reasonable error. To the best of the authors’ knowledge, this is the first report of an attempt to quantitatively assess in both phantoms and a set of patients the accuracy of a registration algorithm for this purpose. PMID:20632566
Total ozone trend significance from space time variability of daily Dobson data
NASA Technical Reports Server (NTRS)
Wilcox, R. W.
1981-01-01
Estimates of standard errors of total ozone time and area means are presented, derived from ozone's natural temporal and spatial variability and autocorrelation in middle latitudes as determined from daily Dobson data. Assessing the significance of apparent total ozone trends is equivalent to assessing the standard error of the means. Standard errors of time averages depend on the temporal variability and autocorrelation of the averaged parameter. Trend detectability is discussed, both for the present network and for satellite measurements.
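The central point, that the standard error of a time average depends on both variability and autocorrelation, can be illustrated with the usual effective-sample-size correction for a lag-1 (AR(1)-like) autocorrelated series; this is a sketch of the general idea, not the authors' exact estimator, and the ozone series below is synthetic:

```python
import numpy as np

def se_of_mean_autocorrelated(x, lag1_autocorr=None):
    """Standard error of the mean of an autocorrelated series, using the effective
    sample size n_eff = n * (1 - r1) / (1 + r1) for lag-1 autocorrelation r1."""
    x = np.asarray(x, dtype=float)
    n = x.size
    if lag1_autocorr is None:
        x0 = x - x.mean()
        lag1_autocorr = np.sum(x0[:-1] * x0[1:]) / np.sum(x0 ** 2)
    n_eff = n * (1.0 - lag1_autocorr) / (1.0 + lag1_autocorr)
    return x.std(ddof=1) / np.sqrt(n_eff)

# Illustration: daily total ozone anomalies simulated as an AR(1) series.
rng = np.random.default_rng(1)
r, n = 0.8, 365
eps = rng.normal(0, 10.0, n)
ozone = np.empty(n)
ozone[0] = eps[0]
for t in range(1, n):
    ozone[t] = r * ozone[t - 1] + eps[t]

print("naive SE of the mean:       ", round(ozone.std(ddof=1) / np.sqrt(n), 2))
print("autocorrelation-aware SE:   ", round(se_of_mean_autocorrelated(ozone), 2))
```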
Elsaid, K; Truong, T; Monckeberg, M; McCarthy, H; Butera, J; Collins, C
2013-12-01
To evaluate the impact of electronic standardized chemotherapy templates on incidence and types of prescribing errors. A quasi-experimental interrupted time series with segmented regression. A 700-bed multidisciplinary tertiary care hospital with an ambulatory cancer center. A multidisciplinary team including oncology physicians, nurses, pharmacists and information technologists. Standardized, regimen-specific, chemotherapy prescribing forms were developed and implemented over a 32-month period. Trend of monthly prevented prescribing errors per 1000 chemotherapy doses during the pre-implementation phase (30 months), immediate change in the error rate from pre-implementation to implementation and trend of errors during the implementation phase. Errors were analyzed according to their types: errors in communication or transcription, errors in dosing calculation and errors in regimen frequency or treatment duration. Relative risk (RR) of errors in the post-implementation phase (28 months) compared with the pre-implementation phase was computed with 95% confidence interval (CI). Baseline monthly error rate was stable with 16.7 prevented errors per 1000 chemotherapy doses. A 30% reduction in prescribing errors was observed with initiating the intervention. With implementation, a negative change in the slope of prescribing errors was observed (coefficient = -0.338; 95% CI: -0.612 to -0.064). The estimated RR of transcription errors was 0.74; 95% CI (0.59-0.92). The estimated RR of dosing calculation errors was 0.06; 95% CI (0.03-0.10). The estimated RR of chemotherapy frequency/duration errors was 0.51; 95% CI (0.42-0.62). Implementing standardized chemotherapy-prescribing templates significantly reduced all types of prescribing errors and improved chemotherapy safety.
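The interrupted-time-series analysis described here fits a level change and a slope change at the intervention point. Below is a minimal ordinary-least-squares sketch of such a segmented regression; the monthly error rates are synthetic, loosely echoing the reported baseline rate and slope change, and the study's actual model (e.g., its error structure) may differ.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic monthly prevented-error rates per 1000 doses:
# 30 pre-implementation months followed by 28 post-implementation months.
pre, post_n = 30, 28
t = np.arange(pre + post_n)
post = (t >= pre).astype(float)
t_since = np.where(post == 1, t - pre, 0.0)
rate = 16.7 + 0.0 * t - 5.0 * post - 0.34 * t_since + rng.normal(0, 1.0, t.size)

# Segmented regression: rate = b0 + b1*t + b2*post + b3*(t - t0)*post
X = np.column_stack([np.ones_like(t), t, post, t_since])
beta, *_ = np.linalg.lstsq(X, rate, rcond=None)
b0, b1, b2, b3 = beta
print(f"baseline level {b0:.1f} per 1000 doses, baseline trend {b1:.2f}/month")
print(f"level change at implementation {b2:.1f}, slope change {b3:.2f}/month")
```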
Evaluation of lens distortion errors in video-based motion analysis
NASA Technical Reports Server (NTRS)
Poliner, Jeffrey; Wilmington, Robert; Klute, Glenn K.; Micocci, Angelo
1993-01-01
In an effort to study lens distortion errors, a grid of points of known dimensions was constructed and videotaped using a standard and a wide-angle lens. Recorded images were played back on a VCR and stored on a personal computer. Using these stored images, two experiments were conducted. Errors were calculated as the distance between the known coordinates of the points and the calculated coordinates. The purposes of this project were as follows: (1) to develop the methodology to evaluate errors introduced by lens distortion; (2) to quantify and compare errors introduced by use of both a 'standard' and a wide-angle lens; (3) to investigate techniques to minimize lens-induced errors; and (4) to determine the most effective use of calibration points when using a wide-angle lens with a significant amount of distortion. It was seen that when using a wide-angle lens, errors from lens distortion could be as high as 10 percent of the size of the entire field of view. Even with a standard lens, there was a small amount of lens distortion. It was also found that the choice of calibration points influenced the lens distortion error. By properly selecting the calibration points and avoiding the outermost regions of a wide-angle lens, the error from lens distortion can be kept below approximately 0.5 percent with a standard lens and 1.5 percent with a wide-angle lens.
Leveraging pattern matching to solve SRAM verification challenges at advanced nodes
NASA Astrophysics Data System (ADS)
Kan, Huan; Huang, Lucas; Yang, Legender; Zou, Elaine; Wan, Qijian; Du, Chunshan; Hu, Xinyi; Liu, Zhengfang; Zhu, Yu; Zhang, Recoo; Huang, Elven; Muirhead, Jonathan
2018-03-01
Memory is a critical component in today's system-on-chip (SoC) designs. Static random-access memory (SRAM) blocks are assembled by combining intellectual property (IP) blocks that come from SRAM libraries developed and certified by the foundries for both functionality and a specific process node. Customers place these SRAM IP in their designs, adjusting as necessary to achieve DRC-clean results. However, any changes a customer makes to these SRAM IP during implementation, whether intentionally or in error, can impact yield and functionality. Physical verification of SRAM has always been a challenge, because these blocks usually contain smaller feature sizes and spacing constraints compared to traditional logic or other layout structures. At advanced nodes, critical dimension becomes smaller and smaller, until there is almost no opportunity to use optical proximity correction (OPC) and lithography to adjust the manufacturing process to mitigate the effects of any changes. The smaller process geometries, reduced supply voltages, increasing process variation, and manufacturing uncertainty mean accurate SRAM physical verification results are not only reaching new levels of difficulty, but also new levels of criticality for design success. In this paper, we explore the use of pattern matching to create an SRAM verification flow that provides both accurate, comprehensive coverage of the required checks and visual output to enable faster, more accurate error debugging. Our results indicate that pattern matching can enable foundries to improve SRAM manufacturing yield, while allowing designers to benefit from SRAM verification kits that can shorten the time to market.
Sensitivity to prediction error in reach adaptation
Haith, Adrian M.; Harran, Michelle D.; Shadmehr, Reza
2012-01-01
It has been proposed that the brain predicts the sensory consequences of a movement and compares it to the actual sensory feedback. When the two differ, an error signal is formed, driving adaptation. How does an error in one trial alter performance in the subsequent trial? Here we show that the sensitivity to error is not constant but declines as a function of error magnitude. That is, one learns relatively less from large errors compared with small errors. We performed an experiment in which humans made reaching movements and randomly experienced an error in both their visual and proprioceptive feedback. Proprioceptive errors were created with force fields, and visual errors were formed by perturbing the cursor trajectory to create a visual error that was smaller, the same size, or larger than the proprioceptive error. We measured single-trial adaptation and calculated sensitivity to error, i.e., the ratio of the trial-to-trial change in motor commands to error size. We found that for both sensory modalities sensitivity decreased with increasing error size. A reanalysis of a number of previously published psychophysical results also exhibited this feature. Finally, we asked how the brain might encode sensitivity to error. We reanalyzed previously published probabilities of cerebellar complex spikes (CSs) and found that this probability declined with increasing error size. From this we posit that a CS may be representative of the sensitivity to error, and not error itself, a hypothesis that may explain conflicting reports about CSs and their relationship to error. PMID:22773782
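The quantity analyzed here, single-trial sensitivity to error, is simply the ratio of the trial-to-trial change in the motor command to the size of the experienced error. A toy sketch of computing that ratio and showing that it declines with error magnitude, using synthetic data generated from a saturating learning rule (purely illustrative, not the study's data or model):

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy single-trial adaptation data: on each trial an error e is experienced and the
# motor command changes by delta_u on the next trial. The generating rule saturates
# with error size, so learning is relatively weaker for large errors.
errors = rng.uniform(-30, 30, 500)                      # e.g., degrees of cursor error
delta_u = 2.5 * np.tanh(errors / 10.0) + rng.normal(0, 0.2, errors.size)

# Sensitivity to error = change in motor command / error size,
# averaged within bins of error magnitude.
mag = np.abs(errors)
sensitivity = delta_u / errors
bins = np.array([2, 5, 10, 15, 20, 25, 30])
for lo, hi in zip(bins[:-1], bins[1:]):
    sel = (mag >= lo) & (mag < hi)
    print(f"|error| in [{lo:2d},{hi:2d}): mean sensitivity = {sensitivity[sel].mean():.3f}")
```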
PID Controller Design for FES Applied to Ankle Muscles in Neuroprosthesis for Standing Balance
Rouhani, Hossein; Same, Michael; Masani, Kei; Li, Ya Qi; Popovic, Milos R.
2017-01-01
Closed-loop controlled functional electrical stimulation (FES) applied to the lower limb muscles can be used as a neuroprosthesis for standing balance in neurologically impaired individuals. The objective of this study was to propose a methodology for designing a proportional-integral-derivative (PID) controller for FES applied to the ankle muscles toward maintaining standing balance for several minutes and in the presence of perturbations. First, a model of the physiological control strategy for standing balance was developed. Second, the parameters of a PID controller that mimicked the physiological balance control strategy were determined to stabilize the human body when modeled as an inverted pendulum. Third, this PID controller was implemented using a custom-made Inverted Pendulum Standing Apparatus that eliminated the effect of visual and vestibular sensory information on voluntary balance control. Using this setup, the individual-specific FES controllers were tested in able-bodied individuals and compared with disrupted voluntary control conditions in four experimental paradigms: (i) quiet-standing; (ii) sudden change of targeted pendulum angle (step response); (iii) balance perturbations that simulate arm movements; and (iv) sudden change of targeted angle of a pendulum with individual-specific body-weight (step response). In paradigms (i) to (iii), a standard 39.5-kg pendulum was used, and 12 subjects were involved. In paradigm (iv) 9 subjects were involved. Across the different experimental paradigms and subjects, the FES-controlled and disrupted voluntarily-controlled pendulum angle showed root mean square errors of <1.2 and 2.3 deg, respectively. The root mean square error (all paradigms), rise time, settle time, and overshoot [paradigms (ii) and (iv)] in FES-controlled balance were significantly smaller or tended to be smaller than those observed with voluntarily-controlled balance, implying improved steady-state and transient responses of FES-controlled balance. At the same time, the FES-controlled balance required similar torque levels (no significant difference) as voluntarily-controlled balance. The implemented PID parameters were to some extent consistent among subjects for standard weight conditions and did not require prolonged individual-specific tuning. The proposed methodology can be used to design FES controllers for closed-loop controlled neuroprostheses for standing balance. Further investigation of the clinical implementation of this approach for neurologically impaired individuals is needed. PMID:28676739
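As a rough illustration of the control structure described here, the following sketch simulates a PID controller stabilizing a linearized inverted pendulum about the upright position. The gains, lever arm, and integration scheme are illustrative assumptions and are not the study's controller parameters or apparatus; only the 39.5 kg mass echoes the standard pendulum mentioned in the abstract.

```python
import numpy as np

# Linearized inverted pendulum (point mass on a rigid, massless rod) stabilized by a
# PID controller acting at the "ankle" joint.
m, L, g = 39.5, 1.0, 9.81            # mass (kg), ankle-to-mass distance (m), gravity
I = m * L ** 2                       # moment of inertia about the ankle joint
Kp, Ki, Kd = 900.0, 50.0, 300.0      # illustrative PID gains

dt, duration = 0.001, 10.0
theta = np.deg2rad(2.0)              # initial lean of 2 degrees
omega, integral = 0.0, 0.0
target = 0.0                         # upright target angle
max_abs_theta = 0.0

for _ in range(int(duration / dt)):
    error = target - theta
    integral += error * dt
    u = Kp * error + Ki * integral + Kd * (-omega)   # PID output (restoring action)
    # Linearized dynamics: I * theta'' = m*g*L*theta + u  (u < 0 when leaning forward)
    alpha = (m * g * L * theta + u) / I
    omega += alpha * dt
    theta += omega * dt
    max_abs_theta = max(max_abs_theta, abs(theta))

print(f"final angle: {np.degrees(theta):.4f} deg, peak |angle|: {np.degrees(max_abs_theta):.3f} deg")
```

For the chosen gains the proportional term exceeds the gravitational toppling stiffness m·g·L, which is the minimum requirement for the linearized pendulum to be stabilizable by angle feedback alone.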
Intravenous Chemotherapy Compounding Errors in a Follow-Up Pan-Canadian Observational Study.
Gilbert, Rachel E; Kozak, Melissa C; Dobish, Roxanne B; Bourrier, Venetia C; Koke, Paul M; Kukreti, Vishal; Logan, Heather A; Easty, Anthony C; Trbovich, Patricia L
2018-05-01
Intravenous (IV) compounding safety has garnered recent attention as a result of high-profile incidents, awareness efforts from the safety community, and increasingly stringent practice standards. New research with more-sensitive error detection techniques continues to reinforce that error rates with manual IV compounding are unacceptably high. In 2014, our team published an observational study that described three types of previously unrecognized and potentially catastrophic latent chemotherapy preparation errors in Canadian oncology pharmacies that would otherwise be undetectable. We expand on this research and explore whether additional potential human failures are yet to be addressed by practice standards. Field observations were conducted in four cancer center pharmacies in four Canadian provinces from January 2013 to February 2015. Human factors specialists observed and interviewed pharmacy managers, oncology pharmacists, pharmacy technicians, and pharmacy assistants as they carried out their work. Emphasis was on latent errors (potential human failures) that could lead to outcomes such as wrong drug, dose, or diluent. Given the relatively short observational period, no active failures or actual errors were observed. However, 11 latent errors in chemotherapy compounding were identified. In terms of severity, all 11 errors create the potential for a patient to receive the wrong drug or dose, which in the context of cancer care, could lead to death or permanent loss of function. Three of the 11 practices were observed in our previous study, but eight were new. Applicable Canadian and international standards and guidelines do not explicitly address many of the potentially error-prone practices observed. We observed a significant degree of risk for error in manual mixing practice. These latent errors may exist in other regions where manual compounding of IV chemotherapy takes place. Continued efforts to advance standards, guidelines, technological innovation, and chemical quality testing are needed.
NASA Astrophysics Data System (ADS)
Wang, Hongcui; Kawahara, Tatsuya
CALL (Computer Assisted Language Learning) systems using ASR (Automatic Speech Recognition) for second language learning have received increasing interest recently. However, it remains a challenge to achieve high speech recognition performance, including accurate detection of erroneous utterances by non-native speakers. Conventionally, possible error patterns, based on linguistic knowledge, are added to the lexicon and language model, or to the ASR grammar network. However, this approach quickly runs into a trade-off between error coverage and increased perplexity. To solve the problem, we propose a method based on a decision tree to learn effective prediction of errors made by non-native speakers. An experimental evaluation with a number of foreign students learning Japanese shows that the proposed method can effectively generate an ASR grammar network, given a target sentence, achieving both better coverage of errors and smaller perplexity, and resulting in significant improvement in ASR accuracy.
Correcting systematic errors in high-sensitivity deuteron polarization measurements
NASA Astrophysics Data System (ADS)
Brantjes, N. P. M.; Dzordzhadze, V.; Gebel, R.; Gonnella, F.; Gray, F. E.; van der Hoek, D. J.; Imig, A.; Kruithof, W. L.; Lazarus, D. M.; Lehrach, A.; Lorentz, B.; Messi, R.; Moricciani, D.; Morse, W. M.; Noid, G. A.; Onderwater, C. J. G.; Özben, C. S.; Prasuhn, D.; Levi Sandri, P.; Semertzidis, Y. K.; da Silva e Silva, M.; Stephenson, E. J.; Stockhorst, H.; Venanzoni, G.; Versolato, O. O.
2012-02-01
This paper reports deuteron vector and tensor beam polarization measurements taken to investigate the systematic variations due to geometric beam misalignments and high data rates. The experiments used the In-Beam Polarimeter at the KVI-Groningen and the EDDA detector at the Cooler Synchrotron COSY at Jülich. By measuring with very high statistical precision, the contributions that are second-order in the systematic errors become apparent. By calibrating the sensitivity of the polarimeter to such errors, it becomes possible to obtain information from the raw count rate values on the size of the errors and to use this information to correct the polarization measurements. During the experiment, it was possible to demonstrate that corrections were satisfactory at the level of 10⁻⁵ for deliberately large errors. This may facilitate the real time observation of vector polarization changes smaller than 10⁻⁶ in a search for an electric dipole moment using a storage ring.
Rapid alignment of nanotomography data using joint iterative reconstruction and reprojection.
Gürsoy, Doğa; Hong, Young P; He, Kuan; Hujsak, Karl; Yoo, Seunghwan; Chen, Si; Li, Yue; Ge, Mingyuan; Miller, Lisa M; Chu, Yong S; De Andrade, Vincent; He, Kai; Cossairt, Oliver; Katsaggelos, Aggelos K; Jacobsen, Chris
2017-09-18
As x-ray and electron tomography is pushed further into the nanoscale, the limitations of rotation stages become more apparent, leading to challenges in the alignment of the acquired projection images. Here we present an approach for rapid post-acquisition alignment of these projections to obtain high-quality three-dimensional images. Our approach is based on a joint estimation of the alignment errors and the object, using an iterative refinement procedure. With simulated data where we know the alignment error of each projection image, our approach shows a residual alignment error that is a factor of a thousand smaller, and it reaches the same error level in the reconstructed image in less than half the number of iterations. We then show its application to experimental data in x-ray and electron nanotomography.
NASA Astrophysics Data System (ADS)
Tada, Kohei; Koga, Hiroaki; Okumura, Mitsutaka; Tanaka, Shingo
2018-06-01
Spin contamination error in the total energy of the Au2/MgO system was estimated using the density functional theory/plane-wave scheme and approximate spin projection methods. This is the first investigation in which the errors in chemical phenomena on a periodic surface are estimated. The spin contamination error of the system was 0.06 eV. This value is smaller than that of the dissociation of Au2 in the gas phase (0.10 eV). This is because of the destabilization of the singlet spin state due to the weakening of the Au-Au interaction caused by the Au-MgO interaction.
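The abstract does not spell out the estimator; one common form of the Yamaguchi approximate spin projection (AP), which is presumably close to what was used, corrects the broken-symmetry (BS) low-spin energy using the high-spin (HS) state and the computed ⟨S²⟩ expectation values (this form assumes the pure low-spin state is a singlet with ⟨S²⟩ = 0):

```latex
E_{\mathrm{AP}}^{\mathrm{LS}}
  \;=\; E_{\mathrm{BS}}^{\mathrm{LS}}
  \;+\; \frac{\langle \hat S^{2} \rangle_{\mathrm{BS}}}
             {\langle \hat S^{2} \rangle_{\mathrm{HS}} - \langle \hat S^{2} \rangle_{\mathrm{BS}}}
        \left( E_{\mathrm{BS}}^{\mathrm{LS}} - E_{\mathrm{HS}} \right)
```

On this reading, the spin contamination error quoted in the abstract corresponds to the magnitude of the correction term, |E_AP − E_BS|.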
Kim, Matthew H.; Marulis, Loren M.; Grammer, Jennie K.; Morrison, Frederick J.; Gehring, William J.
2016-01-01
Motivational beliefs and values influence how children approach challenging activities. The present study explores motivational processes from an expectancy-value theory framework by studying children's mistakes and their responses to them by focusing on two ERP components, the error-related negativity (ERN) and error positivity (Pe). Motivation was assessed using a child-friendly challenge puzzle task and a brief interview measure prior to ERP testing. Data from 50 four- to six-year-old children revealed that greater perceived competence beliefs were related to a larger Pe, while stronger intrinsic task value beliefs were associated with a smaller Pe. Motivation was unrelated to the ERN. Individual differences in early motivational processes may reflect electrophysiological activity related to conscious error awareness. PMID:27898304
Intimate Partner Violence, 1993-2010
... appendix table 2 for standard errors. *Due to methodological changes, use caution when comparing 2006 NCVS criminal ...
Estimating extreme stream temperatures by the standard deviate method
NASA Astrophysics Data System (ADS)
Bogan, Travis; Othmer, Jonathan; Mohseni, Omid; Stefan, Heinz
2006-02-01
It is now widely accepted that global climate warming is taking place on the earth. Among many other effects, a rise in air temperatures is expected to increase stream temperatures. However, due to evaporative cooling, stream temperatures do not increase linearly with increasing air temperatures indefinitely. Within the anticipated bounds of climate warming, extreme stream temperatures may therefore not rise substantially. With this concept in mind, past extreme temperatures measured at 720 USGS stream gauging stations were analyzed by the standard deviate method. In this method the highest stream temperatures are expressed as the mean temperature of a measured partial maximum stream temperature series plus its standard deviation multiplied by a factor KE (standard deviate). Various KE values were explored; values of KE larger than 8 were found physically unreasonable. It is concluded that the value of KE should be in the range from 7 to 8. A unit error in estimating KE translates into a typical stream temperature error of about 0.5 °C. Using a logistic model for the stream temperature/air temperature relationship, a one degree error in air temperature gives a typical error of 0.16 °C in stream temperature. With a projected error in the enveloping standard deviate dKE = 1.0 (range 0.5-1.5) and an error in projected high air temperature dTa = 2 °C (range 0-4 °C), the total projected stream temperature error is estimated as dTs = 0.8 °C.
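The estimate and the error budget quoted here are easy to reproduce; a sketch assuming the extreme temperature is the mean of the partial maximum series plus KE standard deviations, and assuming the two error contributions are combined linearly, as the quoted 0.8 °C total suggests (the temperature series below is illustrative):

```python
import numpy as np

# Illustrative partial maximum stream temperature series (deg C) for one station.
annual_maxima = np.array([27.8, 28.4, 29.1, 28.0, 27.5, 28.9, 29.4, 28.2])

K_E = 7.5                       # enveloping standard deviate, taken in the 7-8 range
T_extreme = annual_maxima.mean() + K_E * annual_maxima.std(ddof=1)
print(f"estimated extreme stream temperature: {T_extreme:.1f} deg C")

# Error budget from the abstract, combined linearly:
dT_per_unit_KE = 0.5            # deg C of stream temperature per unit error in K_E
dT_per_degC_air = 0.16          # deg C of stream temperature per deg C error in air temperature
dK_E, dT_air = 1.0, 2.0         # projected errors in K_E and in high air temperature
dT_stream = dT_per_unit_KE * dK_E + dT_per_degC_air * dT_air
print(f"projected stream temperature error: {dT_stream:.2f} deg C")   # about 0.8 deg C
```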
Decreasing patient identification band errors by standardizing processes.
Walley, Susan Chu; Berger, Stephanie; Harris, Yolanda; Gallizzi, Gina; Hayes, Leslie
2013-04-01
Patient identification (ID) bands are an essential component in patient ID. Quality improvement methodology has been applied as a model to reduce ID band errors although previous studies have not addressed standardization of ID bands. Our specific aim was to decrease ID band errors by 50% in a 12-month period. The Six Sigma DMAIC (define, measure, analyze, improve, and control) quality improvement model was the framework for this study. ID bands at a tertiary care pediatric hospital were audited from January 2011 to January 2012 with continued audits to June 2012 to confirm the new process was in control. After analysis, the major improvement strategy implemented was standardization of styles of ID bands and labels. Additional interventions included educational initiatives regarding the new ID band processes and disseminating institutional and nursing unit data. A total of 4556 ID bands were audited with a preimprovement ID band error average rate of 9.2%. Significant variation in the ID band process was observed, including styles of ID bands. Interventions were focused on standardization of the ID band and labels. The ID band error rate improved to 5.2% in 9 months (95% confidence interval: 2.5-5.5; P < .001) and was maintained for 8 months. Standardization of ID bands and labels in conjunction with other interventions resulted in a statistical decrease in ID band error rates. This decrease in ID band error rates was maintained over the subsequent 8 months.
Chang, Hui-Yin; Chen, Ching-Tai; Lih, T. Mamie; Lynn, Ke-Shiuan; Juo, Chiun-Gung; Hsu, Wen-Lian; Sung, Ting-Yi
2016-01-01
Efficient and accurate quantitation of metabolites from LC-MS data has become an important topic. Here we present an automated tool, called iMet-Q (intelligent Metabolomic Quantitation), for label-free metabolomics quantitation from high-throughput MS1 data. By performing peak detection and peak alignment, iMet-Q provides a summary of quantitation results and reports ion abundance at both replicate level and sample level. Furthermore, it gives the charge states and isotope ratios of detected metabolite peaks to facilitate metabolite identification. An in-house standard mixture and a public Arabidopsis metabolome data set were analyzed by iMet-Q. Three public quantitation tools, including XCMS, MetAlign, and MZmine 2, were used for performance comparison. From the mixture data set, seven standard metabolites were detected by the four quantitation tools, for which iMet-Q had a smaller quantitation error of 12% in both profile and centroid data sets. Our tool also correctly determined the charge states of seven standard metabolites. By searching the mass values for those standard metabolites against Human Metabolome Database, we obtained a total of 183 metabolite candidates. With the isotope ratios calculated by iMet-Q, 49% (89 out of 183) metabolite candidates were filtered out. From the public Arabidopsis data set reported with two internal standards and 167 elucidated metabolites, iMet-Q detected all of the peaks corresponding to the internal standards and 167 metabolites. Meanwhile, our tool had small abundance variation (≤0.19) when quantifying the two internal standards and had higher abundance correlation (≥0.92) when quantifying the 167 metabolites. iMet-Q provides user-friendly interfaces and is publicly available for download at http://ms.iis.sinica.edu.tw/comics/Software_iMet-Q.html. PMID:26784691
Schoenberg, Mike R; Rum, Ruba S
2017-11-01
Rapid, clear and efficient communication of neuropsychological results is essential to benefit patient care. Errors in communication are a leading cause of medical errors; nevertheless, there remains a lack of consistency in how neuropsychological scores are communicated. A major limitation in the communication of neuropsychological results is the inconsistent use of qualitative descriptors for standardized test scores and the use of vague terminology. A PubMed search from 1 Jan 2007 to 1 Aug 2016 was conducted to identify guidelines or consensus statements on the qualitative terms used to describe and report neuropsychological test scores. The review found the use of confusing and overlapping terms to describe various ranges of percentile standardized test scores. In response, we propose a simplified set of qualitative descriptors for normalized test scores (Q-Simple) as a means to reduce errors in communicating test results. The Q-Simple qualitative terms are: 'very superior', 'superior', 'high average', 'average', 'low average', 'borderline' and 'abnormal/impaired'. A case example illustrates the proposed Q-Simple qualitative classification system to communicate neuropsychological results for neurosurgical planning. The Q-Simple qualitative descriptor system is intended to improve and standardize communication of standardized neuropsychological test scores. Further research is needed to evaluate neuropsychological communication errors. Conveying the clinical implications of neuropsychological results in a manner that minimizes risk for communication errors is a quintessential component of evidence-based practice. Copyright © 2017 Elsevier B.V. All rights reserved.
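The abstract lists the seven Q-Simple descriptors but not the score boundaries; a sketch of such a mapping using hypothetical percentile cut points (the cut points below are illustrative assumptions, not the ones proposed in the paper):

```python
def q_simple_descriptor(percentile: float) -> str:
    """Map a percentile rank of a standardized test score to a Q-Simple qualitative
    descriptor. The cut points are illustrative assumptions, not the published ones."""
    if percentile >= 98:
        return "very superior"
    if percentile >= 91:
        return "superior"
    if percentile >= 75:
        return "high average"
    if percentile >= 25:
        return "average"
    if percentile >= 9:
        return "low average"
    if percentile >= 2:
        return "borderline"
    return "abnormal/impaired"

for p in (99.5, 95, 84, 50, 16, 5, 1):
    print(f"{p:>5} percentile -> {q_simple_descriptor(p)}")
```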
Method for simulating dose reduction in digital mammography using the Anscombe transformation.
Borges, Lucas R; Oliveira, Helder C R de; Nunes, Polyana F; Bakic, Predrag R; Maidment, Andrew D A; Vieira, Marcelo A C
2016-06-01
This work proposes an accurate method for simulating dose reduction in digital mammography starting from a clinical image acquired with a standard dose. The method developed in this work consists of scaling a mammogram acquired at the standard radiation dose and adding signal-dependent noise. The algorithm accounts for specific issues relevant in digital mammography images, such as anisotropic noise, spatial variations in pixel gain, and the effect of dose reduction on the detective quantum efficiency. The scaling process takes into account the linearity of the system and the offset of the detector elements. The inserted noise is obtained by acquiring images of a flat-field phantom at the standard radiation dose and at the simulated dose. Using the Anscombe transformation, a relationship is created between the calculated noise mask and the scaled image, resulting in a clinical mammogram with the same noise and gray level characteristics as an image acquired at the lower-radiation dose. The performance of the proposed algorithm was validated using real images acquired with an anthropomorphic breast phantom at four different doses, with five exposures for each dose and 256 nonoverlapping ROIs extracted from each image and with uniform images. The authors simulated lower-dose images and compared these with the real images. The authors evaluated the similarity between the normalized noise power spectrum (NNPS) and power spectrum (PS) of simulated images and real images acquired with the same dose. The maximum relative error was less than 2.5% for every ROI. The added noise was also evaluated by measuring the local variance in the real and simulated images. The relative average error for the local variance was smaller than 1%. A new method is proposed for simulating dose reduction in clinical mammograms. In this method, the dependency between image noise and image signal is addressed using a novel application of the Anscombe transformation. NNPS, PS, and local noise metrics confirm that this method is capable of precisely simulating various dose reductions.
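A minimal sketch of the core noise-injection idea follows: the Anscombe transformation makes Poisson-like quantum noise approximately Gaussian with unit variance, so the extra noise needed to mimic a lower dose can be added as a simple Gaussian term in the transformed domain. This sketch deliberately omits the detector offset, pixel gain map, flat-field noise mask, and DQE corrections handled by the full method, and the image below is synthetic.

```python
import numpy as np

def anscombe(x):
    """Variance-stabilizing Anscombe transformation for Poisson-distributed data."""
    return 2.0 * np.sqrt(x + 3.0 / 8.0)

def inverse_anscombe(y):
    """Simple algebraic inverse of the Anscombe transformation."""
    return (y / 2.0) ** 2 - 3.0 / 8.0

def simulate_dose_reduction(image_counts, dose_fraction, rng=None):
    """Simulate a lower-dose acquisition from a standard-dose image of quantum counts.
    The signal is scaled by the dose fraction, and signal-dependent noise is added in
    the Anscombe domain, where Poisson noise has approximately unit variance."""
    rng = rng or np.random.default_rng()
    scaled = image_counts * dose_fraction      # mean signal at the reduced dose
    # In the Anscombe domain, the scaled standard-dose image carries a noise variance of
    # about dose_fraction, while a true reduced-dose image would have unit variance, so
    # the missing noise has variance 1 - dose_fraction.
    extra_sigma = np.sqrt(1.0 - dose_fraction)
    noisy = anscombe(scaled) + rng.normal(0.0, extra_sigma, scaled.shape)
    return np.clip(inverse_anscombe(noisy), 0, None)

# Illustration with a synthetic standard-dose image of Poisson counts.
rng = np.random.default_rng(4)
standard_dose = rng.poisson(5000.0, size=(256, 256)).astype(float)
half_dose = simulate_dose_reduction(standard_dose, dose_fraction=0.5, rng=rng)
print(f"standard dose  mean/var: {standard_dose.mean():.1f} / {standard_dose.var():.1f}")
print(f"simulated half dose mean/var: {half_dose.mean():.1f} / {half_dose.var():.1f}")
```

For a Poisson image, mean and variance should match at each dose; the printed variances illustrate that the simulated half-dose image recovers the expected signal-dependent noise level rather than merely a scaled-down copy of the original noise.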
Qibo, Feng; Bin, Zhang; Cunxing, Cui; Cuifang, Kuang; Yusheng, Zhai; Fenglin, You
2013-11-04
A simple method for simultaneously measuring the 6DOF geometric motion errors of the linear guide was proposed. The mechanisms for measuring straightness and angular errors and for enhancing their resolution are described in detail. A common-path method for measuring the laser beam drift was proposed and it was used to compensate the errors produced by the laser beam drift in the 6DOF geometric error measurements. A compact 6DOF system was built. Calibration experiments with certain standard measurement meters showed that our system has a standard deviation of 0.5 µm in a range of ± 100 µm for the straightness measurements, and standard deviations of 0.5", 0.5", and 1.0" in the range of ± 100" for pitch, yaw, and roll measurements, respectively.
Registration performance on EUV masks using high-resolution registration metrology
NASA Astrophysics Data System (ADS)
Steinert, Steffen; Solowan, Hans-Michael; Park, Jinback; Han, Hakseung; Beyer, Dirk; Scherübl, Thomas
2016-10-01
Next-generation lithography based on EUV continues to move forward to high-volume manufacturing. Given the technical challenges and the throughput concerns, a hybrid approach with 193 nm immersion lithography is expected, at least initially. Due to the increasing complexity at smaller nodes, a multitude of different masks, both DUV (193 nm) and EUV (13.5 nm) reticles, will then be required in the lithography process flow. The individual registration of each mask and the resulting overlay error are of crucial importance in order to ensure proper functionality of the chips. While registration and overlay metrology on DUV masks has been the standard for decades, this has yet to be demonstrated on EUV masks. Past generations of mask registration tools were not necessarily limited in their tool stability, but in their resolution capabilities. The scope of this work is an image placement investigation of high-end EUV masks together with a registration and resolution performance qualification. For this we employ a new-generation registration metrology system embedded in a production environment for full-spec EUV masks. This paper presents excellent registration performance not only on standard overlay markers but also on more sophisticated e-beam calibration patterns.
Model for threading dislocations in metamorphic tandem solar cells on GaAs (001) substrates
NASA Astrophysics Data System (ADS)
Song, Yifei; Kujofsa, Tedi; Ayers, John E.
2018-02-01
We present an approximate model for the threading dislocations in III-V heterostructures and have applied this model to study the defect behavior in metamorphic triple-junction solar cells. This model represents a new approach in which the coefficient for second-order threading dislocation annihilation and coalescence reactions is considered to be determined by the length of misfit dislocations, LMD, in the structure, and we therefore refer to it as the LMD model. On the basis of this model we have compared the average threading dislocation densities in the active layers of triple junction solar cells using linearly-graded buffers of varying thicknesses as well as S-graded (complementary error function) buffers with varying thicknesses and standard deviation parameters. We have shown that the threading dislocation densities in the active regions of metamorphic tandem solar cells depend not only on the thicknesses of the buffer layers but on their compositional grading profiles. The use of S-graded buffer layers instead of linear buffers resulted in lower threading dislocation densities. Moreover, the threading dislocation densities depended strongly on the standard deviation parameters used in the S-graded buffers, with smaller values providing lower threading dislocation densities.
Benchmarking Distance Control and Virtual Drilling for Lateral Skull Base Surgery.
Voormolen, Eduard H J; Diederen, Sander; van Stralen, Marijn; Woerdeman, Peter A; Noordmans, Herke Jan; Viergever, Max A; Regli, Luca; Robe, Pierre A; Berkelbach van der Sprenkel, Jan Willem
2018-01-01
Novel audiovisual feedback methods were developed to improve image guidance during skull base surgery by providing audiovisual warnings when the drill tip enters a protective perimeter set at a distance around anatomic structures ("distance control") and visualizing bone drilling ("virtual drilling"). To benchmark the drill damage risk reduction provided by distance control, to quantify the accuracy of virtual drilling, and to investigate whether the proposed feedback methods are clinically feasible. In a simulated surgical scenario using human cadavers, 12 unexperienced users (medical students) drilled 12 mastoidectomies. Users were divided into a control group using standard image guidance and 3 groups using distance control with protective perimeters of 1, 2, or 3 mm. Damage to critical structures (sigmoid sinus, semicircular canals, facial nerve) was assessed. Neurosurgeons performed another 6 mastoidectomy/trans-labyrinthine and retro-labyrinthine approaches. Virtual errors as compared with real postoperative drill cavities were calculated. In a clinical setting, 3 patients received lateral skull base surgery with the proposed feedback methods. Users drilling with distance control protective perimeters of 3 mm did not damage structures, whereas the groups using smaller protective perimeters and the control group injured structures. Virtual drilling maximum cavity underestimations and overestimations were 2.8 ± 0.1 and 3.3 ± 0.4 mm, respectively. Feedback methods functioned properly in the clinical setting. Distance control reduced the risks of drill damage proportional to the protective perimeter distance. Errors in virtual drilling reflect spatial errors of the image guidance system. These feedback methods are clinically feasible. Copyright © 2017 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Malys, Brian J.; Piotrowski, Michelle L.; Owens, Kevin G.
2018-02-01
Frustrated by worse than expected error for both peak area and time-of-flight (TOF) in matrix assisted laser desorption ionization (MALDI) experiments using samples prepared by electrospray deposition, it was finally determined that there was a correlation between sample location on the target plate and the measured TOF/peak area. Variations in both TOF and peak area were found to be due to small differences in the initial position of ions formed in the source region of the TOF mass spectrometer. These differences arise largely from misalignment of the instrument sample stage, with a smaller contribution arising from the non-ideal shape of the target plates used. By physically measuring the target plates used and comparing TOF data collected from three different instruments, an estimate of the magnitude and direction of the sample stage misalignment was determined for each of the instruments. A correction method was developed to correct the TOFs and peak areas obtained for a given combination of target plate and instrument. Two correction factors are determined, one by initially collecting spectra from each sample position used and another by using spectra from a single position for each set of samples on a target plate. For TOF and mass values, use of the correction factor reduced the error by a factor of 4, with the relative standard deviation (RSD) of the corrected masses being reduced to 12-24 ppm. For the peak areas, the RSD was reduced from 28% to 16% for samples deposited twice onto two target plates over two days.
Comparing interval estimates for small sample ordinal CFA models
Natesan, Prathiba
2015-01-01
Robust maximum likelihood (RML) and asymptotically generalized least squares (AGLS) methods have been recommended for fitting ordinal structural equation models. Studies show that some of these methods underestimate standard errors. However, these studies have not investigated the coverage and bias of interval estimates. An estimate with a reasonable standard error could still be severely biased. This can only be known by systematically investigating the interval estimates. The present study compares Bayesian, RML, and AGLS interval estimates of factor correlations in ordinal confirmatory factor analysis (CFA) models for small sample data. Six sample sizes, 3 factor correlations, and 2 factor score distributions (multivariate normal and multivariate mildly skewed) were studied. Two Bayesian prior specifications, informative and relatively less informative, were studied. Undercoverage of confidence intervals and underestimation of standard errors was common in non-Bayesian methods. Underestimated standard errors may lead to inflated Type-I error rates. Non-Bayesian intervals were more positively biased than negatively biased; that is, most intervals that did not contain the true value were greater than the true value. Some non-Bayesian methods had non-converging and inadmissible solutions for small samples and non-normal data. Bayesian empirical standard error estimates for informative and relatively less informative priors were closer to the average standard errors of the estimates. The coverage of Bayesian credibility intervals was closer to what was expected, with overcoverage in a few cases. Although some Bayesian credibility intervals were wider, they reflected the nature of statistical uncertainty that comes with the data (e.g., small sample). Bayesian point estimates were also more accurate than non-Bayesian estimates. The results illustrate the importance of analyzing coverage and bias of interval estimates, and how ignoring interval estimates can be misleading. Therefore, editors and policymakers should continue to emphasize the inclusion of interval estimates in research. PMID:26579002
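The abstract's point is that standard errors alone can look reasonable while the intervals built from them undercover or are directionally biased. The sketch below shows how coverage and directional bias of intervals are tallied in a simulation; it uses simple Fisher-z confidence intervals for a bivariate correlation as a stand-in, not the ordinal-CFA estimators compared in the study.

```python
import numpy as np

rng = np.random.default_rng(5)
true_rho, n, reps = 0.5, 30, 2000
cover = too_high = too_low = 0

for _ in range(reps):
    # Draw a small bivariate-normal sample and build a 95% CI for the correlation
    # via the Fisher z transformation.
    x = rng.multivariate_normal([0, 0], [[1, true_rho], [true_rho, 1]], size=n)
    r = np.corrcoef(x[:, 0], x[:, 1])[0, 1]
    z, se = np.arctanh(r), 1.0 / np.sqrt(n - 3)
    lo, hi = np.tanh(z - 1.96 * se), np.tanh(z + 1.96 * se)
    if lo <= true_rho <= hi:
        cover += 1
    elif lo > true_rho:
        too_high += 1          # whole interval above the true value
    else:
        too_low += 1           # whole interval below the true value

print(f"coverage: {cover / reps:.3f}")
print(f"intervals entirely above truth: {too_high}, entirely below: {too_low}")
```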
Bootstrap Standard Errors for Maximum Likelihood Ability Estimates When Item Parameters Are Unknown
ERIC Educational Resources Information Center
Patton, Jeffrey M.; Cheng, Ying; Yuan, Ke-Hai; Diao, Qi
2014-01-01
When item parameter estimates are used to estimate the ability parameter in item response models, the standard error (SE) of the ability estimate must be corrected to reflect the error carried over from item calibration. For maximum likelihood (ML) ability estimates, a corrected asymptotic SE is available, but it requires a long test and the…
Cost-effectiveness of the stream-gaging program in Nebraska
Engel, G.B.; Wahl, K.L.; Boohar, J.A.
1984-01-01
This report documents the results of a study of the cost-effectiveness of the streamflow information program in Nebraska. Presently, 145 continuous surface-water stations are operated in Nebraska on a budget of $908,500. Data uses and funding sources are identified for each of the 145 stations. Data from most stations have multiple uses. All stations have sufficient justification for continuation, but two stations primarily are used in short-term research studies; their continued operation needs to be evaluated when the research studies end. The present measurement frequency produces an average standard error for instantaneous discharges of about 12 percent, including periods when stage data are missing. Altering the travel routes and the measurement frequency will allow a reduction in standard error of about 1 percent with the present budget. Standard error could be reduced to about 8 percent if lost record could be eliminated. A minimum budget of $822,000 is required to operate the present network, but operations at that funding level would result in an increase in standard error to about 16 percent. The maximum budget analyzed was $1,363,000, which would result in an average standard error of 6 percent. (USGS)
Spencer, Bruce D
2012-06-01
Latent class models are increasingly used to assess the accuracy of medical diagnostic tests and other classifications when no gold standard is available and the true state is unknown. When the latent class is treated as the true class, the latent class models provide measures of components of accuracy including specificity and sensitivity and their complements, type I and type II error rates. The error rates according to the latent class model differ from the true error rates, however, and empirical comparisons with a gold standard suggest the true error rates often are larger. We investigate conditions under which the true type I and type II error rates are larger than those provided by the latent class models. Results from Uebersax (1988, Psychological Bulletin 104, 405-416) are extended to accommodate random effects and covariates affecting the responses. The results are important for interpreting the results of latent class analyses. An error decomposition is presented that incorporates an error component from invalidity of the latent class model. © 2011, The International Biometric Society.
Prepopulated radiology report templates: a prospective analysis of error rate and turnaround time.
Hawkins, C M; Hall, S; Hardin, J; Salisbury, S; Towbin, A J
2012-08-01
Current speech recognition software allows exam-specific standard reports to be prepopulated into the dictation field based on the radiology information system procedure code. While it is thought that prepopulating reports can decrease the time required to dictate a study and the overall number of errors in the final report, this hypothesis has not been studied in a clinical setting. A prospective study was performed. During the first week, radiologists dictated all studies using prepopulated standard reports. During the second week, all studies were dictated after prepopulated reports had been disabled. Final radiology reports were evaluated for 11 different types of errors. Each error within a report was classified individually. The median time required to dictate an exam was compared between the 2 weeks. There were 12,387 reports dictated during the study, of which, 1,173 randomly distributed reports were analyzed for errors. There was no difference in the number of errors per report between the 2 weeks; however, radiologists overwhelmingly preferred using a standard report both weeks. Grammatical errors were by far the most common error type, followed by missense errors and errors of omission. There was no significant difference in the median dictation time when comparing studies performed each week. The use of prepopulated reports does not alone affect the error rate or dictation time of radiology reports. While it is a useful feature for radiologists, it must be coupled with other strategies in order to decrease errors.
Determination of Small Animal Long Bone Properties Using Densitometry
NASA Technical Reports Server (NTRS)
Breit, Gregory A.; Goldberg, BethAnn K.; Whalen, Robert T.; Hargens, Alan R. (Technical Monitor)
1996-01-01
Assessment of bone structural property changes due to loading regimens or pharmacological treatment typically requires destructive mechanical testing and sectioning. Our group has accurately and non-destructively estimated three-dimensional cross-sectional areal properties (principal moments of inertia, Imax and Imin, and principal angle, Theta) of human cadaver long bones from pixel-by-pixel analysis of three non-coplanar densitometry scans. Because the scanner beam width is on the order of typical small animal diaphyseal diameters, applying this technique to high-resolution scans of rat long bones necessitates additional processing to minimize errors induced by beam smearing, such as dependence on sample orientation and overestimation of Imax and Imin. We hypothesized that these errors are correctable by digital image processing of the raw scan data. In all cases, four scans, using only the low energy data (Hologic QDR-1000W, small animal mode), are averaged to increase image signal-to-noise ratio. Raw scans are additionally processed by interpolation, deconvolution by a filter derived from scanner beam characteristics, and masking using a variable threshold based on image dynamic range. To assess accuracy, we scanned an aluminum step phantom at 12 orientations over a range of 180 deg about the longitudinal axis, in 15 deg increments. The phantom dimensions (2.5, 3.1, 3.8 mm x 4.4 mm; Imin/Imax: 0.33-0.74) were comparable to the dimensions of a rat femur, which was also scanned. Cross-sectional properties were determined at 0.25 mm increments along the length of the phantom and femur. The table shows average error (+/- SD) from theory of Imax, Imin, and Theta over the 12 orientations, calculated from raw and fully processed phantom images, as well as standard deviations about the mean for the femur scans. Processing of phantom scans increased agreement with theory, indicating improved accuracy. Smaller standard deviations with processing indicate increased precision and repeatability. Standard deviations for the femur are consistent with those of the phantom. We conclude that in conjunction with digital image enhancement, densitometry scans are suitable for non-destructive determination of areal properties of small animal bones of comparable size to our phantom, allowing prediction of Imax and Imin within 2.5% and Theta within a fraction of a degree. This method represents a considerable extension of current methods of analyzing bone tissue distribution in small animal bones.
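The cross-sectional properties listed here (Imax, Imin, Theta) come from pixel-by-pixel second moments of the cross-section. A minimal sketch of that computation on a single 2-D density slice follows; the full method additionally combines three non-coplanar scans, deconvolves the beam profile, and masks with a dynamic threshold, none of which is shown, and the hollow-ellipse "diaphysis" below is synthetic.

```python
import numpy as np

def principal_area_moments(density, pixel_size=1.0):
    """Compute density-weighted area moments of inertia (Imax, Imin) and the principal
    angle Theta (degrees) of a 2-D cross-section."""
    rows, cols = np.indices(density.shape)
    y = rows * pixel_size
    x = cols * pixel_size
    w = density.astype(float)
    A = w.sum()
    xc, yc = (w * x).sum() / A, (w * y).sum() / A       # density-weighted centroid
    dA = w * pixel_size ** 2
    Ixx = (dA * (y - yc) ** 2).sum()
    Iyy = (dA * (x - xc) ** 2).sum()
    Ixy = (dA * (x - xc) * (y - yc)).sum()
    # Principal moments from the 2x2 inertia tensor (Mohr's-circle form).
    avg, diff = 0.5 * (Ixx + Iyy), 0.5 * (Ixx - Iyy)
    radius = np.hypot(diff, Ixy)
    Imax, Imin = avg + radius, avg - radius
    theta = 0.5 * np.degrees(np.arctan2(-2.0 * Ixy, Ixx - Iyy))   # principal angle
    return Imax, Imin, theta

# Illustration: a hollow elliptical cross-section on a 64x64 grid of 0.25 mm pixels.
n, px = 64, 0.25
rows, cols = np.indices((n, n))
yy, xx = (rows - n / 2) * px, (cols - n / 2) * px
outer = (xx / 2.0) ** 2 + (yy / 3.0) ** 2 <= 1.0
inner = (xx / 1.2) ** 2 + (yy / 2.0) ** 2 <= 1.0
section = (outer & ~inner).astype(float)
Imax, Imin, theta = principal_area_moments(section, pixel_size=px)
print(f"Imax = {Imax:.2f} mm^4, Imin = {Imin:.2f} mm^4, Theta = {theta:.1f} deg")
```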
Metrology for Industry for use in the Manufacture of Grazing Incidence Beam Line Mirrors
DOE Office of Scientific and Technical Information (OSTI.GOV)
Metz, James P.; Parks, Robert E.
2014-12-01
The goal of this SBIR was to determine the slope sensitivity of Specular Reflection Deflectometry (SRD) and whether shearing methods had the sensitivity to be able to separate errors in the test equipment from slope error in the unit under test (UUT), or mirror. After many variations of test parameters, it does not appear that SRD yields results much better than 1 μ radian RMS independent of how much averaging is done. Of course, a single number slope sensitivity over the full range of spatial scales is not a very insightful number in the same sense as a single number phase or height RMS value in interferometry does not tell the full story. However, the 1 μ radian RMS number is meaningful when contrasted with a sensitivity goal of better than 0.1 μ radian RMS. Shearing is a time proven method of separating the errors in a measurement from the actual shape of a UUT. It is accomplished by taking multiple measurements while moving the UUT relative to the test instrument. This process makes it possible to separate the two error sources but only to a sensitivity of about 1 μ radian RMS. Another aspect of our conclusions is that this limit probably holds largely independent of the spatial scale of the test equipment. In the proposal for this work it was suggested that a test screen the full size of the UUT could be used to determine the slopes on scales of maybe 0.01 to full scale of the UUT while smaller screens and shorter focal length lenses could be used to measure shorter, or smaller, patches of slope. What we failed to take into consideration was that as the scale of the test equipment got smaller so too did the optical lever arm on which the slope was calculated. Although we did not do a test with a shorter focal length lens over a smaller sample area, it is hard to argue with the logic that the slope sensitivity will be about the same independent of the spatial scale of the measurement assuming the test equipment is similarly scaled. On a more positive note, SRD does appear to be a highly flexible, easy to implement, rather inexpensive test for free form optics that require a dynamic range that exceeds that of interferometry. These optics are quite often specified to have more relaxed slope errors, on the order of 1 μ radian RMS or greater. It would be shortsighted to not recognize the value of this test method in the bigger picture.
NASA Astrophysics Data System (ADS)
Pernot, Pascal; Savin, Andreas
2018-06-01
Benchmarking studies in computational chemistry use reference datasets to assess the accuracy of a method through error statistics. The commonly used error statistics, such as the mean signed and mean unsigned errors, do not inform end-users on the expected amplitude of prediction errors attached to these methods. We show that, the distributions of model errors being neither normal nor zero-centered, these error statistics cannot be used to infer prediction error probabilities. To overcome this limitation, we advocate for the use of more informative statistics, based on the empirical cumulative distribution function of unsigned errors, namely, (1) the probability for a new calculation to have an absolute error below a chosen threshold and (2) the maximal amplitude of errors one can expect with a chosen high confidence level. Those statistics are also shown to be well suited for benchmarking and ranking studies. Moreover, the standard error on all benchmarking statistics depends on the size of the reference dataset. Systematic publication of these standard errors would be very helpful to assess the statistical reliability of benchmarking conclusions.
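As an illustration of the two statistics advocated here, the following is a minimal sketch (with synthetic, skewed error data) of estimating the probability that a new calculation has an absolute error below a chosen threshold and the error amplitude not exceeded at a chosen confidence level, together with bootstrap standard errors that shrink as the reference dataset grows. The threshold, confidence level, and error distribution are assumptions made for the example, not values from the paper.

```python
import numpy as np

def benchmark_error_stats(errors, threshold, confidence=0.95, n_boot=2000):
    """Empirical-CDF statistics of unsigned errors: (1) the fraction of
    absolute errors below `threshold`, and (2) the absolute error not
    exceeded with probability `confidence`, plus bootstrap standard errors."""
    abs_err = np.abs(np.asarray(errors, dtype=float))
    p_below = np.mean(abs_err < threshold)
    q_conf = np.quantile(abs_err, confidence)
    rng = np.random.default_rng(0)
    boot = [(np.mean(s < threshold), np.quantile(s, confidence))
            for s in (rng.choice(abs_err, abs_err.size) for _ in range(n_boot))]
    se_p, se_q = np.std(boot, axis=0)
    return p_below, q_conf, se_p, se_q

# Synthetic model errors: skewed and not zero-centred (units arbitrary)
rng = np.random.default_rng(1)
errs = rng.gamma(shape=2.0, scale=0.8, size=200) - 0.5
print(benchmark_error_stats(errs, threshold=1.0))
```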
Cost effectiveness of the US Geological Survey stream-gaging program in Alabama
Jeffcoat, H.H.
1987-01-01
A study of the cost effectiveness of the stream gaging program in Alabama identified data uses and funding sources for 72 surface water stations (including dam stations, slope stations, and continuous-velocity stations) operated by the U.S. Geological Survey in Alabama with a budget of $393,600. Of these, 58 gaging stations were used in all phases of the analysis at a funding level of $328,380. For the current policy of operation of the 58-station program, the average standard error of estimation of instantaneous discharge is 29.3%. This overall level of accuracy can be maintained with a budget of $319,800 by optimizing routes and implementing some policy changes. The maximum budget considered in the analysis was $361,200, which gave an average standard error of estimation of 20.6%. The minimum budget considered was $299,360, with an average standard error of estimation of 36.5%. The study indicates that a major source of error in the stream gaging records is lost or missing data that are the result of streamside equipment failure. If perfect equipment were available, the standard error in estimating instantaneous discharge under the current program and budget could be reduced to 18.6%. This can also be interpreted to mean that the streamflow data records have a standard error of this magnitude during times when the equipment is operating properly. (Author's abstract)
How good are the Garvey-Kelson predictions of nuclear masses?
NASA Astrophysics Data System (ADS)
Morales, Irving O.; López Vieyra, J. C.; Hirsch, J. G.; Frank, A.
2009-09-01
The Garvey-Kelson relations are used in an iterative process to predict nuclear masses in the neighborhood of nuclei with measured masses. Average errors in the predicted masses for the first three iteration shells are smaller than those obtained with the best nuclear mass models. Their quality is comparable with the Audi-Wapstra extrapolations, offering a simple and reproducible procedure for short-range mass predictions. A systematic study of the way the error grows as a function of the iteration and the distance to the known-masses region shows that a correlation exists between the error and the residual neutron-proton interaction, produced mainly by the implicit assumption that V varies smoothly along the nuclear landscape.
New class of photonic quantum error correction codes
NASA Astrophysics Data System (ADS)
Silveri, Matti; Michael, Marios; Brierley, R. T.; Salmilehto, Juha; Albert, Victor V.; Jiang, Liang; Girvin, S. M.
We present a new class of quantum error correction codes for applications in quantum memories, communication and scalable computation. These codes are constructed from a finite superposition of Fock states and can exactly correct errors that are polynomial up to a specified degree in creation and destruction operators. Equivalently, they can perform approximate quantum error correction to any given order in time step for the continuous-time dissipative evolution under these errors. The codes are related to two-mode photonic codes but offer the advantage of requiring only a single photon mode to correct loss (amplitude damping), as well as the ability to correct other errors, e.g. dephasing. Our codes are also similar in spirit to photonic "cat codes" but have several advantages including smaller mean occupation number and exact rather than approximate orthogonality of the code words. We analyze how the rate of uncorrectable errors scales with the code complexity and discuss the unitary control for the recovery process. These codes are realizable with current superconducting qubit technology and can increase the fidelity of photonic quantum communication and memories.
NASA Astrophysics Data System (ADS)
Sakata, Shojiro; Fujisawa, Masaya
It is a well-known fact [7], [9] that the BMS algorithm with majority voting can decode up to half the Feng-Rao designed distance dFR. Since dFR is not smaller than the Goppa designed distance dG, that algorithm can correct up to $\lfloor \frac{d_G-1}{2}\rfloor$ errors. On the other hand, it has been considered to be evident that the original BMS algorithm (without voting) [1], [2] can correct up to $\lfloor \frac{d_G-g-1}{2}\rfloor$ errors similarly to the basic algorithm by Skorobogatov-Vladut. But, is it true? In this short paper, we show that it is true, although we need a few remarks and some additional procedures for determining the Groebner basis of the error locator ideal exactly. In fact, as the basic algorithm gives a set of polynomials whose zero set contains the error locators as a subset, it cannot always give the exact error locators, unless the syndrome equation is solved to find the error values in addition.
In-die mask registration measurement on 28nm-node and beyond
NASA Astrophysics Data System (ADS)
Chen, Shen Hung; Cheng, Yung Feng; Chen, Ming Jui
2013-09-01
As semiconductor technology moves to smaller nodes, the critical dimension (CD) of the process becomes smaller and smaller. For lithography, RET (Resolution Enhancement Technology) applications can be used for wafer printing of smaller CD/pitch at the 28nm node and beyond. SMO (Source Mask Optimization), DPT (Double Patterning Technology) and SADP (Self-Aligned Double Patterning) can provide a lower k1 value for lithography. At the same time, image placement error and overlay control also become more and more important for smaller chip sizes (advanced nodes). Mask registration (image placement error) and mask overlay are important factors affecting wafer overlay control/performance, especially for DPT or SADP. In the traditional method, designed registration marks (cross type, square type) with larger CD were placed in the scribe-line of the mask frame for registration and overlay measurement. However, these patterns are far away from the real patterns. They do not show the registration of the real pattern directly, and the approach is not convincing. In this study, in-die (in-chip) registration measurement is introduced. We extract dummy patterns that are close to the main pattern from the post-OPC (Optical Proximity Correction) gds according to our desired rule and choose patterns that are distributed uniformly over the whole mask. A convergence test shows that a measurement of 100 points gives a reliable result.
Stuellein, Nicole; Radach, Ralph R; Jacobs, Arthur M; Hofmann, Markus J
2016-05-15
Computational models of word recognition already successfully used associative spreading from orthographic to semantic levels to account for false memories. But can they also account for semantic effects on event-related potentials in a recognition memory task? To address this question, target words in the present study had either many or few semantic associates in the stimulus set. We found larger P200 amplitudes and smaller N400 amplitudes for old words in comparison to new words. Words with many semantic associates led to larger P200 amplitudes and a smaller N400 in comparison to words with a smaller number of semantic associations. We also obtained inverted response time and accuracy effects for old and new words: faster response times and fewer errors were found for old words that had many semantic associates, whereas new words with a large number of semantic associates produced slower response times and more errors. Both behavioral and electrophysiological results indicate that semantic associations between words can facilitate top-down driven lexical access and semantic integration in recognition memory. Our results support neurophysiologically plausible predictions of the Associative Read-Out Model, which suggests top-down connections from semantic to orthographic layers. Copyright © 2016 Elsevier B.V. All rights reserved.
The proposed coding standard at GSFC
NASA Technical Reports Server (NTRS)
Morakis, J. C.; Helgert, H. J.
1977-01-01
As part of the continuing effort to introduce standardization of spacecraft and ground equipment in satellite systems, NASA's Goddard Space Flight Center and other NASA facilities have supported the development of a set of standards for the use of error control coding in telemetry subsystems. These standards are intended to ensure compatibility between spacecraft and ground encoding equipment, while allowing sufficient flexibility to meet all anticipated mission requirements. The standards which have been developed to date cover the application of block codes in error detection and error correction modes, as well as short and long constraint length convolutional codes decoded via the Viterbi and sequential decoding algorithms, respectively. Included are detailed specifications of the codes, and their implementation. Current effort is directed toward the development of standards covering channels with burst noise characteristics, channels with feedback, and code concatenation.
Neural Mechanisms Underlying the Cost of Task Switching: An ERP Study
Li, Ling; Wang, Meng; Zhao, Qian-Jing; Fogelson, Noa
2012-01-01
Background When switching from one task to a new one, reaction times are prolonged. This phenomenon is called switch cost (SC). Researchers have recently used several kinds of task-switching paradigms to uncover neural mechanisms underlying the SC. Task-set reconfiguration and passive dissipation of a previously relevant task-set have been reported to contribute to the cost of task switching. Methodology/Principal Findings An unpredictable cued task-switching paradigm was used, during which subjects were instructed to switch between a color and an orientation discrimination task. Electroencephalography (EEG) and behavioral measures were recorded in 14 subjects. Response-stimulus interval (RSI) and cue-stimulus interval (CSI) were manipulated with short and long intervals, respectively. Switch trials delayed reaction times (RTs) and increased error rates compared with repeat trials. The SC of RTs was smaller in the long CSI condition. For cue-locked waveforms, switch trials generated a larger parietal positive event-related potential (ERP), and a larger slow parietal positivity compared with repeat trials in the short and long CSI condition. Neural SC of cue-related ERP positivity was smaller in the long RSI condition. For stimulus-locked waveforms, a larger switch-related central negative ERP component was observed, and the neural SC of the ERP negativity was smaller in the long CSI. Results of standardized low resolution electromagnetic tomography (sLORETA) for both ERP positivity and negativity showed that switch trials evoked larger activation than repeat trials in dorsolateral prefrontal cortex (DLPFC) and posterior parietal cortex (PPC). Conclusions/Significance The results provide evidence that both RSI and CSI modulate the neural activities in the process of task-switching, but that these have a differential role during task-set reconfiguration and passive dissipation of a previously relevant task-set. PMID:22860090
Social disparities in nitrate-contaminated drinking water in California's San Joaquin Valley.
Balazs, Carolina; Morello-Frosch, Rachel; Hubbard, Alan; Ray, Isha
2011-09-01
Research on drinking water in the United States has rarely examined disproportionate exposures to contaminants faced by low-income and minority communities. This study analyzes the relationship between nitrate concentrations in community water systems (CWSs) and the racial/ethnic and socioeconomic characteristics of customers. We hypothesized that CWSs in California's San Joaquin Valley that serve a higher proportion of minority or residents of lower socioeconomic status have higher nitrate levels and that these disparities are greater among smaller drinking water systems. We used water quality monitoring data sets (1999-2001) to estimate nitrate levels in CWSs, and source location and census block group data to estimate customer demographics. Our linear regression model included 327 CWSs and reported robust standard errors clustered at the CWS level. Our adjusted model controlled for demographics and water system characteristics and stratified by CWS size. Percent Latino was associated with a 0.04-mg nitrate-ion (NO3)/L increase in a CWS's estimated NO3 concentration [95% confidence interval (CI), -0.08 to 0.16], and rate of home ownership was associated with a 0.16-mg NO3/L decrease (95% CI, -0.32 to 0.002). Among smaller systems, the percentage of Latinos and of homeownership was associated with an estimated increase of 0.44 mg NO3/L (95% CI, 0.03-0.84) and a decrease of 0.15 mg NO3/L (95% CI, -0.64 to 0.33), respectively. Our findings suggest that in smaller water systems, CWSs serving larger percentages of Latinos and renters receive drinking water with higher nitrate levels. This suggests an environmental inequity in drinking water quality.
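For readers who want to see the estimation strategy concretely, the following is a minimal sketch of an ordinary least squares regression with robust standard errors clustered at the community-water-system level, fit to synthetic stand-in data. The variable names (nitrate, pct_latino, pct_homeowner, sys_id) and all coefficients in the data-generating step are hypothetical, and the real analysis adjusted for additional water-system characteristics and stratified by system size.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic stand-in data: a few sampling records per community water system
rng = np.random.default_rng(0)
n_sys, per_sys = 327, 3
sys_id = np.repeat(np.arange(n_sys), per_sys)
pct_latino = np.repeat(rng.uniform(0, 100, n_sys), per_sys)
pct_homeowner = np.repeat(rng.uniform(20, 95, n_sys), per_sys)
nitrate = (5.0 + 0.04 * pct_latino - 0.16 * pct_homeowner
           + rng.normal(0, 4, n_sys)[sys_id]        # system-level noise
           + rng.normal(0, 2, sys_id.size))         # record-level noise
df = pd.DataFrame(dict(nitrate=nitrate, pct_latino=pct_latino,
                       pct_homeowner=pct_homeowner, sys_id=sys_id))

# OLS with robust standard errors clustered at the water-system level
fit = smf.ols("nitrate ~ pct_latino + pct_homeowner", data=df).fit(
    cov_type="cluster", cov_kwds={"groups": df["sys_id"]})
print(fit.summary().tables[1])   # coefficients with cluster-robust SEs and CIs
```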
Social Disparities in Nitrate-Contaminated Drinking Water in California’s San Joaquin Valley
Morello-Frosch, Rachel; Hubbard, Alan; Ray, Isha
2011-01-01
Background: Research on drinking water in the United States has rarely examined disproportionate exposures to contaminants faced by low-income and minority communities. This study analyzes the relationship between nitrate concentrations in community water systems (CWSs) and the racial/ethnic and socioeconomic characteristics of customers. Objectives: We hypothesized that CWSs in California’s San Joaquin Valley that serve a higher proportion of minority or residents of lower socioeconomic status have higher nitrate levels and that these disparities are greater among smaller drinking water systems. Methods: We used water quality monitoring data sets (1999–2001) to estimate nitrate levels in CWSs, and source location and census block group data to estimate customer demographics. Our linear regression model included 327 CWSs and reported robust standard errors clustered at the CWS level. Our adjusted model controlled for demographics and water system characteristics and stratified by CWS size. Results: Percent Latino was associated with a 0.04-mg nitrate-ion (NO3)/L increase in a CWS’s estimated NO3 concentration [95% confidence interval (CI), –0.08 to 0.16], and rate of home ownership was associated with a 0.16-mg NO3/L decrease (95% CI, –0.32 to 0.002). Among smaller systems, the percentage of Latinos and of homeownership was associated with an estimated increase of 0.44 mg NO3/L (95% CI, 0.03–0.84) and a decrease of 0.15 mg NO3/L (95% CI, –0.64 to 0.33), respectively. Conclusions: Our findings suggest that in smaller water systems, CWSs serving larger percentages of Latinos and renters receive drinking water with higher nitrate levels. This suggests an environmental inequity in drinking water quality. PMID:21642046
NASA Astrophysics Data System (ADS)
Sensui, Takayuki
2012-10-01
Although digitalization has tripled the consumer-class camera market, extreme reductions in the prices of fixed-lens cameras have reduced profitability. As a result, a number of manufacturers have entered the market for system DSCs, i.e., digital still cameras with interchangeable lenses, where large profit margins are possible, and many high-ratio zoom lenses with image stabilization functions have been released. Quiet actuators are another indispensable component. Designs whose performance degrades little under all types of errors are preferred, as they give a good balance of size, lens performance, and the ratio of quality to sub-standard products. The decentering sensitivity of moving groups, such as that caused by tilting, is especially important. In addition, image stabilization mechanisms actively shift lens groups. Development of high-ratio zoom lenses with a vibration reduction mechanism is confronted by the challenge of reduced performance due to decentering, making control over decentering sensitivity between lens groups essential. While there are a number of ways to align lenses (axial alignment), shock resistance and the ability to stand up to environmental conditions must also be considered. Naturally, it is very difficult, if not impossible, to make lenses smaller and achieve low decentering sensitivity at the same time. A 4-group zoom construction is beneficial in making lenses smaller, but its decentering sensitivity is greater. A 5-group zoom configuration makes smaller lenses more difficult, but it enables lower decentering sensitivities. At Nikon, the most advantageous construction is selected for each lens based on its specifications. The AF-S DX NIKKOR 18-200mm f/3.5-5.6G ED VR II and AF-S NIKKOR 28-300mm f/3.5-5.6G ED VR are excellent examples of this.
Atmospheric Correction of Satellite Imagery Using Modtran 3.5 Code
NASA Technical Reports Server (NTRS)
Gonzales, Fabian O.; Velez-Reyes, Miguel
1997-01-01
When performing satellite remote sensing of the earth in the solar spectrum, atmospheric scattering and absorption effects provide the sensors with corrupted information about the target's radiance characteristics. We are faced with the problem of reconstructing the signal that was reflected from the target, from the data sensed by the remote sensing instrument. This article presents a method for simulating radiance characteristic curves of satellite images using a MODTRAN 3.5 band model (BM) code to solve the radiative transfer equation (RTE), and proposes a method for the implementation of an adaptive system for automated atmospheric corrections. The simulation procedure is carried out as follows: (1) for each satellite digital image a radiance characteristic curve is obtained by performing a digital number (DN) to radiance conversion, (2) using MODTRAN 3.5 a simulation of the images' characteristic curves is generated, (3) the output of the code is processed to generate radiance characteristic curves for the simulated cases. The simulation algorithm was used to simulate Landsat Thematic Mapper (TM) images for two types of locations: the ocean surface, and a forest surface. The simulation procedure was validated by computing the error between the empirical and simulated radiance curves. While results in the visible region of the spectrum were not very accurate, those for the infrared region of the spectrum were encouraging. This information can be used for correction of the atmospheric effects. For the simulation over ocean, the lowest error produced in this region was of the order of 10-5 and up to 14 times smaller than errors in the visible region. For the same spectral region on the forest case, the lowest error produced was of the order of 10-4, and up to 41 times smaller than errors in the visible region.
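Step (1) of the procedure, the DN-to-radiance conversion, amounts to a linear rescaling of each band's digital numbers. The sketch below shows that step with illustrative calibration constants; lmin and lmax are placeholders, not actual TM band values.

```python
import numpy as np

def dn_to_radiance(dn, lmin, lmax, qcal_min=0, qcal_max=255):
    """Linear DN-to-radiance conversion for an 8-bit sensor band.
    lmin/lmax are band-specific calibration radiances (W m^-2 sr^-1 um^-1);
    the values used below are illustrative, not actual TM constants."""
    dn = np.asarray(dn, dtype=float)
    return (lmax - lmin) / (qcal_max - qcal_min) * (dn - qcal_min) + lmin

# Toy example: convert a small block of digital numbers, then average over a
# region of interest to build one point of a radiance characteristic curve
dn_block = np.array([[52, 54, 53], [55, 51, 56], [53, 52, 54]])
radiance = dn_to_radiance(dn_block, lmin=-1.5, lmax=152.1)
print(radiance.mean())   # band-averaged radiance for this region of interest
```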
Quality and strength of patient safety climate on medical-surgical units.
Hughes, Linda C; Chang, Yunkyung; Mark, Barbara A
2009-01-01
Describing the safety climate in hospitals is an important first step in creating work environments where safety is a priority. Yet, little is known about the patient safety climate on medical-surgical units. Study purposes were to describe quality and strength of the patient safety climate on medical-surgical units and explore hospital and unit characteristics associated with this climate. Data came from a larger organizational study to investigate hospital and unit characteristics associated with organizational, nurse, and patient outcomes. The sample for this study was 3,689 RNs on 286 medical-surgical units in 146 hospitals. Nursing workgroup and managerial commitment to safety were the two most strongly positive attributes of the patient safety climate. However, issues surrounding the balance between job duties and safety compliance and nurses' reluctance to reveal errors continue to be problematic. Nurses in Magnet hospitals were more likely to communicate about errors and participate in error-related problem solving. Nurses on smaller units and units with lower work complexity reported greater safety compliance and were more likely to communicate about and reveal errors. Nurses on smaller units also reported greater commitment to patient safety and participation in error-related problem solving. Nursing workgroup commitment to safety is a valuable resource that can be leveraged to promote a sense of personal responsibility for and shared ownership of patient safety. Managers can capitalize on this commitment by promoting a work environment in which control over nursing practice and active participation in unit decisions are encouraged and by developing channels of communication that increase staff nurse involvement in identifying patient safety issues, prioritizing unit-level safety goals, and resolving day-to-day operational problems that have the potential to jeopardize patient safety.
Electron correlation and the self-interaction error of density functional theory
NASA Astrophysics Data System (ADS)
Polo, Victor; Kraka, Elfi; Cremer, Dieter
The self-interaction error (SIE) of commonly used DFT functionals has been systematically investigated by comparing the electron density distribution ρ(r) generated by self-interaction corrected DFT (SIC-DFT) with a series of reference densities obtained by DFT or wavefunction theory (WFT) methods that cover typical electron correlation effects. Although the SIE of GGA functionals is considerably smaller than that of LDA functionals, it has significant consequences for the coverage of electron correlation effects at the DFT level of theory. The exchange SIE mimics long range (non-dynamic) pair correlation effects, and is responsible for the fact that the electron density of DFT exchange-only calculations often resembles that of MP4, MP2 or even CCSD(T) calculations. Changes in the electron density caused by SIC-DFT exchange are comparable with those that are associated with HF exchange. Correlation functionals contract the density towards the bond and the valence region, thus taking negative charge out of the van der Waals region where these effects are exaggerated by the influence of the SIE of the correlation functional. Hence, SIC-DFT leads in total to a relatively strong redistribution of negative charge from van der Waals, non-bonding, and valence regions of heavy atoms to the bond regions. These changes, although much stronger, resemble those obtained when comparing the densities of hybrid functionals such as B3LYP with the corresponding GGA functional BLYP. Hence, the balanced mixing of local and non-local exchange and correlation effects as it is achieved by hybrid functionals mimics SIC-DFT and can be considered as an economic way to include some SIC into standard DFT. However, the investigation also shows that the SIC-DFT description of molecules is unreliable because the standard functionals used were optimized for DFT including the SIE.
Holland-Letz, Tim; Endres, Heinz G; Biedermann, Stefanie; Mahn, Matthias; Kunert, Joachim; Groh, Sabine; Pittrow, David; von Bilderling, Peter; Sternitzky, Reinhardt; Diehm, Curt
2007-05-01
The reliability of ankle-brachial index (ABI) measurements performed by different observer groups in primary care has not yet been determined. The aims of the study were to provide precise estimates for all effects influencing the variability of the ABI (patients' individual variability, intra- and inter-observer variability), with particular focus on the performance of different observer groups. Using a partially balanced incomplete block design, 144 unselected individuals aged 65 years or older underwent double ABI measurements by one vascular surgeon or vascular physician, one family physician and one nurse with training in Doppler sonography. Three groups comprising a total of 108 individuals were analyzed (only two with ABI < 0.90). Errors for two repeated measurements for all three observer groups did not differ (experts 8.5%, family physicians 7.7%, and nurses 7.5%, p = 0.39). There was no relevant bias among observer groups. Intra-observer variability expressed as standard deviation divided by the mean was 8%, and inter-observer variability was 9%. In conclusion, reproducibility of the ABI measurement was good in this cohort of elderly patients who almost all had values in the normal range. The mean error of 8-9% within or between observers is smaller than with established screening measures. Since there were no differences among observers with different training backgrounds, our study confirms the appropriateness of ABI assessment for screening peripheral arterial disease (PAD) and generalized atherosclerosis in the primary care setting. Given the importance of the early detection and management of PAD, this diagnostic tool should be used routinely as a standard for PAD screening. Additional studies will be required to confirm our observations in patients with PAD of various severities.
Ultrasonic tracking of shear waves using a particle filter
Ingle, Atul N.; Ma, Chi; Varghese, Tomy
2015-01-01
Purpose: This paper discusses an application of particle filtering for estimating shear wave velocity in tissue using ultrasound elastography data. Shear wave velocity estimates are of significant clinical value as they help differentiate stiffer areas from softer areas which is an indicator of potential pathology. Methods: Radio-frequency ultrasound echo signals are used for tracking axial displacements and obtaining the time-to-peak displacement at different lateral locations. These time-to-peak data are usually very noisy and cannot be used directly for computing velocity. In this paper, the denoising problem is tackled using a hidden Markov model with the hidden states being the unknown (noiseless) time-to-peak values. A particle filter is then used for smoothing out the time-to-peak curve to obtain a fit that is optimal in a minimum mean squared error sense. Results: Simulation results from synthetic data and finite element modeling suggest that the particle filter provides lower mean squared reconstruction error with smaller variance as compared to standard filtering methods, while preserving sharp boundary detail. Results from phantom experiments show that the shear wave velocity estimates in the stiff regions of the phantoms were within 20% of those obtained from a commercial ultrasound scanner and agree with estimates obtained using a standard method using least-squares fit. Estimates of area obtained from the particle filtered shear wave velocity maps were within 10% of those obtained from B-mode ultrasound images. Conclusions: The particle filtering approach can be used for producing visually appealing SWV reconstructions by effectively delineating various areas of the phantom with good image quality properties comparable to existing techniques. PMID:26520761
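To make the smoothing step concrete, the following is a minimal sketch of a bootstrap particle filter applied to a noisy time-to-peak curve: the hidden (noiseless) time-to-peak is modeled as a random walk over lateral position, particles are weighted by a Gaussian observation likelihood, and the weighted mean gives the minimum mean squared error estimate. The noise scales, the synthetic curve, and the model form are illustrative assumptions, not the authors' exact hidden Markov model.

```python
import numpy as np

def particle_filter_smooth(y, n_particles=500, sigma_state=0.05, sigma_obs=0.5):
    """Bootstrap particle filter for a random-walk hidden state observed in
    Gaussian noise; returns the posterior-mean (MMSE) estimate at each step."""
    rng = np.random.default_rng(0)
    particles = y[0] + rng.normal(0.0, sigma_obs, n_particles)
    estimate = np.empty(len(y))
    for t, obs in enumerate(y):
        particles += rng.normal(0.0, sigma_state, n_particles)   # propagate
        w = np.exp(-0.5 * ((obs - particles) / sigma_obs) ** 2)  # likelihood
        w /= w.sum()
        estimate[t] = np.sum(w * particles)                      # MMSE estimate
        # Systematic resampling to avoid weight degeneracy
        positions = (rng.random() + np.arange(n_particles)) / n_particles
        idx = np.minimum(np.searchsorted(np.cumsum(w), positions),
                         n_particles - 1)
        particles = particles[idx]
    return estimate

# Toy data: time-to-peak (ms) vs lateral position (mm), with a slope change at
# a stiffer inclusion, plus heavy measurement noise
x = np.linspace(0.0, 20.0, 200)
true_ttp = np.where(x < 10.0, 0.5 * x, 5.0 + 0.2 * (x - 10.0))
noisy_ttp = true_ttp + np.random.default_rng(1).normal(0.0, 0.5, x.size)
smoothed = particle_filter_smooth(noisy_ttp)
print(np.abs(smoothed - true_ttp).mean())   # typically well below the raw noise
```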
SEM Microanalysis of Particles: Concerns and Suggestions
NASA Astrophysics Data System (ADS)
Fournelle, J.
2008-12-01
The scanning electron microscope (SEM) is well suited to examine and characterize small (i.e. <10 micron) particles. Particles can be imaged and sizes and shapes determined. With energy dispersive x-ray spectrometers (EDS), chemical compositions can be determined quickly. Despite the ease in acquiring x-ray spectra and chemical compositions, there are potentially major sources of error to be recognized. Problems with EDS analyses of small particles: Qualitative estimates of composition (e.g. stating that Si>Al>Ca>Fe plus O) are easy. However, to be able to have confidence that a chemical composition is accurate, several issues should be examined. (1) Particle Mass Effect: Is the accelerating voltage appropriate for the specimen size? Are all the incident electrons remaining inside the particle, and not traveling out of the sample side or bottom? (2) Particle Absorption Effect: What is the geometric relationship of the beam impact point to the x-ray detector? The x-ray intensity will vary by significant amounts for the same material if the grains are irregular and the path out of the sample in the direction of the detector is longer or shorter. (3) Particle Fluorescence Effect: This is generally a smaller error, but should be considered: for small particles, using large standards, there will be a few % less x-rays generated in a small particle relative to one of the same composition and 50-100 times larger. Also, if the sample sits on a grid of a particular composition (e.g. Si wafer) potentially several % of Si could appear in the analysis. (4) In an increasing number of laboratories, with environmental or variable pressure SEMs, the Gas Skirt Effect is operating against you: here the incident electron beam scatters in the gas in the chamber, with fewer electrons impacting the target spot and some others hitting grains 100s of microns away, producing spectra that could be faulty. (5) Inclusion of measured oxygen: if the measured oxygen x-ray counts are utilized, significant errors can be introduced by differential absorption of this low energy x-ray. (6) Standardless Analysis: This typical method of doing EDS analysis has a major pitfall: the printed analysis is normalized to 100 wt%, thereby eliminating an important clue to analytical error. Suggestions: (1) Use lower voltage, e.g. 10 kV, reducing effects 1, 2 and 3 above. (2) Use standards--traditional flat polished ones--and don't initially normalize totals. Discrepancies can be observed and addressed, not ignored. (3) Always include oxygen by stoichiometry, not measured. (4) Experimental simulation. Using material of constant composition (e.g. NIST glass K-411, or other homogeneous multi-element material with the elements of interest), grind into fragments of similar size to your unknowns, and see what is the analytical error for measurements of these known particles. Analyses of your unknown material will be no better, and probably worse than that, particularly if the grains are smaller. The results of this experiment should be reported whenever discussing measurements on the unknown materials. (5) Monte Carlo simulation. Programs such as PENEPMA allow creation of complex geometry samples (and samples on substrates), and the resulting EDS spectra can be generated. This allows estimation of errors for representative cases. It is slow, however; other simulations such as DTSA-II promise faster simulations with some limitations. (6) EBSD: this is perfectly suited for some problems with SEM identification of small particles, e.g. distinguishing magnetite (Fe3O4) from hematite (Fe2O3), which is virtually impossible to do by EDS. With the appropriate hardware and software, electron diffraction patterns on particles can be gathered and the crystal type determined.
NASA Astrophysics Data System (ADS)
Zhang, Y.
2017-12-01
The unstructured formulation of the third/fourth-order flux operators used by the Advanced Research WRF is extended twofold on spherical icosahedral grids. First, the fifth- and sixth-order flux operators of WRF are further extended, and the nominally second- to sixth-order operators are then compared based on the solid body rotation and deformational flow tests. Results show that increasing the nominal order generally leads to smaller absolute errors. Overall, the fifth-order scheme generates the smallest errors in limited and unlimited tests, although it does not enhance the convergence rate. The fifth-order scheme also exhibits smaller sensitivity to the damping coefficient than the third-order scheme. Overall, the even-order schemes have higher limiter sensitivity than the odd-order schemes. Second, a triangular version of these high-order operators is repurposed for transporting the potential vorticity in a space-time-split shallow water framework. Results show that a class of nominally third-order upwind-biased operators generates better results than second- and fourth-order counterparts. The increase of the potential enstrophy over time is suppressed owing to the damping effect. The grid-scale noise in the vorticity is largely alleviated, and the total energy remains conserved. Moreover, models using high-order operators show smaller numerical errors in the vorticity field because of a more accurate representation of the nonlinear Coriolis term. This improvement is especially evident in the Rossby-Haurwitz wave test, in which the fluid is highly rotating. Overall, flux operators with higher damping coefficients, which essentially behave like the Anticipated Potential Vorticity Method, present optimal results.
Automatic Error Analysis Using Intervals
ERIC Educational Resources Information Center
Rothwell, E. J.; Cloud, M. J.
2012-01-01
A technique for automatic error analysis using interval mathematics is introduced. A comparison to standard error propagation methods shows that in cases involving complicated formulas, the interval approach gives comparable error estimates with much less effort. Several examples are considered, and numerical errors are computed using the INTLAB…
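To illustrate the contrast between the two approaches, the following is a minimal sketch comparing interval arithmetic with first-order error propagation on a parallel-resistance formula. The hand-rolled Interval class and the ±1% tolerances are assumptions made for the example; a production analysis would use a dedicated interval library such as INTLAB (MATLAB).

```python
import math
from dataclasses import dataclass

@dataclass
class Interval:
    lo: float
    hi: float
    def __add__(self, o): return Interval(self.lo + o.lo, self.hi + o.hi)
    def __mul__(self, o):
        p = (self.lo * o.lo, self.lo * o.hi, self.hi * o.lo, self.hi * o.hi)
        return Interval(min(p), max(p))
    def __truediv__(self, o):          # assumes the divisor excludes zero
        return self * Interval(1.0 / o.hi, 1.0 / o.lo)

# Parallel resistance R = R1*R2/(R1+R2) with +/-1% component tolerances
R1, R2 = Interval(99.0, 101.0), Interval(198.0, 202.0)
R = R1 * R2 / (R1 + R2)
print("interval enclosure:", R.lo, "to", R.hi)   # guaranteed, if conservative

# First-order (standard) error propagation for comparison
r1, r2, d1, d2 = 100.0, 200.0, 1.0, 2.0
dR_dr1 = (r2 / (r1 + r2)) ** 2                   # partial derivatives of R
dR_dr2 = (r1 / (r1 + r2)) ** 2
dR = math.sqrt((dR_dr1 * d1) ** 2 + (dR_dr2 * d2) ** 2)
print("propagated estimate:", r1 * r2 / (r1 + r2), "+/-", dR)
```

The interval result is a guaranteed enclosure but is widened by the dependency problem (R1 and R2 each appear twice in the formula), which is the kind of trade-off the comparison in the article examines.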
Derivation of an analytic expression for the error associated with the noise reduction rating
NASA Astrophysics Data System (ADS)
Murphy, William J.
2005-04-01
Hearing protection devices are assessed using the Real Ear Attenuation at Threshold (REAT) measurement procedure for the purpose of estimating the amount of noise reduction provided when worn by a subject. The rating number provided on the protector label is a function of the mean and standard deviation of the REAT results achieved by the test subjects. If a group of subjects have a large variance, then it follows that the certainty of the rating should be correspondingly lower. No estimate of the error of a protector's rating is given by existing standards or regulations. Propagation of errors was applied to the Noise Reduction Rating to develop an analytic expression for the hearing protector rating error term. Comparison of the analytic expression for the error to the standard deviation estimated from Monte Carlo simulation of subject attenuations yielded a linear relationship across several protector types and assumptions for the variance of the attenuations.
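The comparison described above can be reproduced in miniature: the sketch below applies propagation of errors to a deliberately simplified rating (mean attenuation minus two standard deviations, standing in for the full NRR, which involves C- and A-weighted octave-band sums) and checks the analytic standard error against a Monte Carlo simulation over subject panels. The subject count, mean, and spread are illustrative assumptions.

```python
import numpy as np

def simplified_rating(att):
    """Mean attenuation minus two standard deviations, a reduced stand-in
    for the NRR used only to illustrate the error analysis."""
    return att.mean() - 2.0 * att.std(ddof=1)

# Analytic propagation of errors: var(rating) ~ var(mean) + 4*var(SD), with
# SE(mean) = s/sqrt(n) and SE(SD) ~ s/sqrt(2(n-1)) for normally distributed data
n, mu, sigma = 20, 30.0, 5.0      # subjects, true mean and SD in dB (illustrative)
se_analytic = sigma * np.sqrt(1.0 / n + 4.0 / (2.0 * (n - 1)))

# Monte Carlo: spread of the rating over many simulated subject panels
rng = np.random.default_rng(0)
ratings = [simplified_rating(rng.normal(mu, sigma, n)) for _ in range(20000)]
print("analytic SE:", round(se_analytic, 3),
      "Monte Carlo SE:", round(float(np.std(ratings)), 3))
```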
Federal Register 2010, 2011, 2012, 2013, 2014
2013-03-20
...The Food and Drug Administration (FDA or we) is correcting the preamble to a proposed rule that published in the Federal Register of January 16, 2013. That proposed rule would establish science-based minimum standards for the safe growing, harvesting, packing, and holding of produce, meaning fruits and vegetables grown for human consumption. FDA proposed these standards as part of our implementation of the FDA Food Safety Modernization Act. The document published with several technical errors, including some errors in cross references, as well as several errors in reference numbers cited throughout the document. This document corrects those errors. We are also placing a corrected copy of the proposed rule in the docket.
Davis, Edward T; Pagkalos, Joseph; Gallie, Price A M; Macgroarty, Kelly; Waddell, James P; Schemitsch, Emil H
2015-01-01
Optimal component alignment in total knee arthroplasty has been associated with better functional outcome as well as improved implant longevity. The ability to align components optimally during minimally invasive (MIS) total knee replacement (TKR) has been a cause of concern. Computer navigation is a useful aid in achieving the desired alignment, although it is limited by the error during the manual registration of landmarks. Our study aims to compare the registration process error between a standard and an MIS surgical approach. We hypothesized that performing the registration process via an MIS approach would increase the registration process error. Five fresh frozen lower limbs were routinely prepared and draped. The registration process was performed through an MIS approach. This was then extended to the standard approach and the registration was performed again. Two surgeons performed the registration process five times with each approach. Performing the registration process through the MIS approach was not associated with higher error compared to the standard approach in the alignment parameters of interest. This rejects our hypothesis. Image-free navigated MIS TKR does not appear to carry a higher risk of component malalignment due to the registration process error. Navigation can be used during MIS TKR to improve alignment without reduced accuracy due to the approach.
Construction and assembly of the wire planes for the MicroBooNE Time Projection Chamber
Acciarri, R.; Adams, C.; Asaadi, J.; ...
2017-03-09
As x-ray and electron tomography is pushed further into the nanoscale, the limitations of rotation stages become more apparent, leading to challenges in the alignment of the acquired projection images. Here we present an approach for rapid post-acquisition alignment of these projections to obtain high quality three-dimensional images. Our approach is based on a joint estimation of alignment errors, and the object, using an iterative refinement procedure. With simulated data where we know the alignment error of each projection image, our approach shows a residual alignment error that is a factor of a thousand smaller, and it reaches the same error level in the reconstructed image in less than half the number of iterations. We then show its application to experimental data in x-ray and electron nanotomography.
Error suppression via complementary gauge choices in Reed-Muller codes
NASA Astrophysics Data System (ADS)
Chamberland, Christopher; Jochym-O'Connor, Tomas
2017-09-01
Concatenation of two quantum error-correcting codes with complementary sets of transversal gates can provide a means toward universal fault-tolerant quantum computation. We first show that it is generally preferable to choose the inner code with the higher pseudo-threshold to achieve lower logical failure rates. We then explore the threshold properties of a wide range of concatenation schemes. Notably, we demonstrate that the concatenation of complementary sets of Reed-Muller codes can increase the code capacity threshold under depolarizing noise when compared to extensions of previously proposed concatenation models. We also analyze the properties of logical errors under circuit-level noise, showing that smaller codes perform better for all sampled physical error rates. Our work provides new insights into the performance of universal concatenated quantum codes for both code capacity and circuit-level noise.
Kim, Matthew H; Marulis, Loren M; Grammer, Jennie K; Morrison, Frederick J; Gehring, William J
2017-03-01
Motivational beliefs and values influence how children approach challenging activities. The current study explored motivational processes from an expectancy-value theory framework by studying children's mistakes and their responses to them by focusing on two event-related potential (ERP) components: the error-related negativity (ERN) and the error positivity (Pe). Motivation was assessed using a child-friendly challenge puzzle task and a brief interview measure prior to ERP testing. Data from 50 4- to 6-year-old children revealed that greater perceived competence beliefs were related to a larger Pe, whereas stronger intrinsic task value beliefs were associated with a smaller Pe. Motivation was unrelated to the ERN. Individual differences in early motivational processes may reflect electrophysiological activity related to conscious error awareness. Copyright © 2016 Elsevier Inc. All rights reserved.
Rapid alignment of nanotomography data using joint iterative reconstruction and reprojection
Gürsoy, Doğa; Hong, Young P.; He, Kuan; ...
2017-09-18
As x-ray and electron tomography is pushed further into the nanoscale, the limitations of rotation stages become more apparent, leading to challenges in the alignment of the acquired projection images. Here we present an approach for rapid post-acquisition alignment of these projections to obtain high quality three-dimensional images. Our approach is based on a joint estimation of alignment errors, and the object, using an iterative refinement procedure. With simulated data where we know the alignment error of each projection image, our approach shows a residual alignment error that is a factor of a thousand smaller, and it reaches the same error level in the reconstructed image in less than half the number of iterations. We then show its application to experimental data in x-ray and electron nanotomography.
Role-modeling and medical error disclosure: a national survey of trainees.
Martinez, William; Hickson, Gerald B; Miller, Bonnie M; Doukas, David J; Buckley, John D; Song, John; Sehgal, Niraj L; Deitz, Jennifer; Braddock, Clarence H; Lehmann, Lisa Soleymani
2014-03-01
To measure trainees' exposure to negative and positive role-modeling for responding to medical errors and to examine the association between that exposure and trainees' attitudes and behaviors regarding error disclosure. Between May 2011 and June 2012, 435 residents at two large academic medical centers and 1,187 medical students from seven U.S. medical schools received anonymous, electronic questionnaires. The questionnaire asked respondents about (1) experiences with errors, (2) training for responding to errors, (3) behaviors related to error disclosure, (4) exposure to role-modeling for responding to errors, and (5) attitudes regarding disclosure. Using multivariate regression, the authors analyzed whether frequency of exposure to negative and positive role-modeling independently predicted two primary outcomes: (1) attitudes regarding disclosure and (2) nontransparent behavior in response to a harmful error. The response rate was 55% (884/1,622). Training on how to respond to errors had the largest independent, positive effect on attitudes (standardized effect estimate, 0.32, P < .001); negative role-modeling had the largest independent, negative effect (standardized effect estimate, -0.26, P < .001). Positive role-modeling had a positive effect on attitudes (standardized effect estimate, 0.26, P < .001). Exposure to negative role-modeling was independently associated with an increased likelihood of trainees' nontransparent behavior in response to an error (OR 1.37, 95% CI 1.15-1.64; P < .001). Exposure to role-modeling predicts trainees' attitudes and behavior regarding the disclosure of harmful errors. Negative role models may be a significant impediment to disclosure among trainees.
Spindle Thermal Error Optimization Modeling of a Five-axis Machine Tool
NASA Astrophysics Data System (ADS)
Guo, Qianjian; Fan, Shuo; Xu, Rufeng; Cheng, Xiang; Zhao, Guoyong; Yang, Jianguo
2017-05-01
Aiming at the problem of low machining accuracy and uncontrollable thermal errors of NC machine tools, spindle thermal error measurement, modeling and compensation of a two-turntable five-axis machine tool are researched. Measurement experiments on heat sources and thermal errors are carried out, and the GRA (grey relational analysis) method is introduced for the selection of temperature variables used for thermal error modeling. In order to analyze the influence of different heat sources on spindle thermal errors, an ANN (artificial neural network) model is presented, and the ABC (artificial bee colony) algorithm is introduced to train the link weights of the ANN; a new ABC-NN (artificial bee colony-based neural network) modeling method is proposed and used in the prediction of spindle thermal errors. In order to test the prediction performance of the ABC-NN model, an experimental system is developed, and the prediction results of LSR (least squares regression), ANN and ABC-NN are compared with the measured spindle thermal errors. Experimental results show that the prediction accuracy of the ABC-NN model is higher than that of LSR and ANN, and the residual error is smaller than 3 μm; the new modeling method is feasible. The proposed research provides guidance for compensating thermal errors and improving the machining accuracy of NC machine tools.
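As an illustration of the variable-selection step, the following is a minimal sketch of grey relational analysis: each temperature sensor series is min-max normalised and scored by its grey relational grade against the thermal-error series, and the highest-ranked sensors would be kept as model inputs. The three synthetic sensor series and the distinguishing coefficient of 0.5 are assumptions made for the example, not the paper's data.

```python
import numpy as np

def grey_relational_grades(thermal_error, temps, zeta=0.5):
    """Grey relational grade of each temperature series with respect to the
    thermal-error series (higher grade = stronger relation)."""
    def norm(v):
        v = np.asarray(v, dtype=float)
        return (v - v.min()) / (v.max() - v.min())
    ref = norm(thermal_error)
    comp = np.array([norm(t) for t in temps])        # sensors x time
    delta = np.abs(comp - ref)                       # deviation sequences
    d_min, d_max = delta.min(), delta.max()
    grc = (d_min + zeta * d_max) / (delta + zeta * d_max)   # relational coefficients
    return grc.mean(axis=1)                          # grade per sensor

# Toy example: three hypothetical sensors; the first tracks the spindle drift
t = np.linspace(0.0, 4.0, 60)                        # hours of spindle running
error_um = 20.0 * (1.0 - np.exp(-t / 1.5))           # spindle thermal drift (um)
sensors = [20.0 + 0.9 * error_um,                    # near the spindle bearing
           20.0 + 0.3 * error_um + np.sin(t),        # machine column, weak link
           20.0 + np.random.default_rng(0).normal(0.0, 1.0, t.size)]  # ambient
grades = grey_relational_grades(error_um, sensors)
print("grades:", grades.round(3), "ranking:", np.argsort(grades)[::-1])
```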
Error-related brain activity and error awareness in an error classification paradigm.
Di Gregorio, Francesco; Steinhauser, Marco; Maier, Martin E
2016-10-01
Error-related brain activity has been linked to error detection enabling adaptive behavioral adjustments. However, it is still unclear which role error awareness plays in this process. Here, we show that the error-related negativity (Ne/ERN), an event-related potential reflecting early error monitoring, is dissociable from the degree of error awareness. Participants responded to a target while ignoring two different incongruent distractors. After responding, they indicated whether they had committed an error, and if so, whether they had responded to one or to the other distractor. This error classification paradigm allowed distinguishing partially aware errors, (i.e., errors that were noticed but misclassified) and fully aware errors (i.e., errors that were correctly classified). The Ne/ERN was larger for partially aware errors than for fully aware errors. Whereas this speaks against the idea that the Ne/ERN foreshadows the degree of error awareness, it confirms the prediction of a computational model, which relates the Ne/ERN to post-response conflict. This model predicts that stronger distractor processing - a prerequisite of error classification in our paradigm - leads to lower post-response conflict and thus a smaller Ne/ERN. This implies that the relationship between Ne/ERN and error awareness depends on how error awareness is related to response conflict in a specific task. Our results further indicate that the Ne/ERN but not the degree of error awareness determines adaptive performance adjustments. Taken together, we conclude that the Ne/ERN is dissociable from error awareness and foreshadows adaptive performance adjustments. Our results suggest that the relationship between the Ne/ERN and error awareness is correlative and mediated by response conflict. Copyright © 2016 Elsevier Inc. All rights reserved.
Bailey, Stephanie L.; Bono, Rose S.; Nash, Denis; Kimmel, April D.
2018-01-01
Background Spreadsheet software is increasingly used to implement systems science models informing health policy decisions, both in academia and in practice where technical capacity may be limited. However, spreadsheet models are prone to unintentional errors that may not always be identified using standard error-checking techniques. Our objective was to illustrate, through a methodologic case study analysis, the impact of unintentional errors on model projections by implementing parallel model versions. Methods We leveraged a real-world need to revise an existing spreadsheet model designed to inform HIV policy. We developed three parallel versions of a previously validated spreadsheet-based model; versions differed by the spreadsheet cell-referencing approach (named single cells; column/row references; named matrices). For each version, we implemented three model revisions (re-entry into care; guideline-concordant treatment initiation; immediate treatment initiation). After standard error-checking, we identified unintentional errors by comparing model output across the three versions. Concordant model output across all versions was considered error-free. We calculated the impact of unintentional errors as the percentage difference in model projections between model versions with and without unintentional errors, using +/-5% difference to define a material error. Results We identified 58 original and 4,331 propagated unintentional errors across all model versions and revisions. Over 40% (24/58) of original unintentional errors occurred in the column/row reference model version; most (23/24) were due to incorrect cell references. Overall, >20% of model spreadsheet cells had material unintentional errors. When examining error impact along the HIV care continuum, the percentage difference between versions with and without unintentional errors ranged from +3% to +16% (named single cells), +26% to +76% (column/row reference), and 0% (named matrices). Conclusions Standard error-checking techniques may not identify all errors in spreadsheet-based models. Comparing parallel model versions can aid in identifying unintentional errors and promoting reliable model projections, particularly when resources are limited. PMID:29570737
Bailey, Stephanie L; Bono, Rose S; Nash, Denis; Kimmel, April D
2018-01-01
Spreadsheet software is increasingly used to implement systems science models informing health policy decisions, both in academia and in practice where technical capacity may be limited. However, spreadsheet models are prone to unintentional errors that may not always be identified using standard error-checking techniques. Our objective was to illustrate, through a methodologic case study analysis, the impact of unintentional errors on model projections by implementing parallel model versions. We leveraged a real-world need to revise an existing spreadsheet model designed to inform HIV policy. We developed three parallel versions of a previously validated spreadsheet-based model; versions differed by the spreadsheet cell-referencing approach (named single cells; column/row references; named matrices). For each version, we implemented three model revisions (re-entry into care; guideline-concordant treatment initiation; immediate treatment initiation). After standard error-checking, we identified unintentional errors by comparing model output across the three versions. Concordant model output across all versions was considered error-free. We calculated the impact of unintentional errors as the percentage difference in model projections between model versions with and without unintentional errors, using +/-5% difference to define a material error. We identified 58 original and 4,331 propagated unintentional errors across all model versions and revisions. Over 40% (24/58) of original unintentional errors occurred in the column/row reference model version; most (23/24) were due to incorrect cell references. Overall, >20% of model spreadsheet cells had material unintentional errors. When examining error impact along the HIV care continuum, the percentage difference between versions with and without unintentional errors ranged from +3% to +16% (named single cells), +26% to +76% (column/row reference), and 0% (named matrices). Standard error-checking techniques may not identify all errors in spreadsheet-based models. Comparing parallel model versions can aid in identifying unintentional errors and promoting reliable model projections, particularly when resources are limited.
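The version-comparison step lends itself to a short worked example: the sketch below compares projections from two hypothetical model versions stage by stage, computes the percentage difference, and flags anything beyond +/-5% as a material unintentional error. The stage names and numbers are invented for illustration and are not the study's outputs.

```python
import pandas as pd

def flag_material_errors(reference, candidate, tol=0.05):
    """Cell-by-cell comparison of two parallel model versions; differences
    larger than +/- tol (here 5%) are flagged as material errors."""
    pct_diff = (candidate - reference) / reference
    return pd.DataFrame({"reference": reference, "candidate": candidate,
                         "pct_diff": pct_diff.round(3),
                         "material": pct_diff.abs() > tol})

# Hypothetical projections along a care continuum from two model versions
stages = ["diagnosed", "linked", "retained", "on_treatment", "suppressed"]
named_matrices = pd.Series([1000.0, 820.0, 640.0, 560.0, 470.0], index=stages)
column_row_ref = pd.Series([1000.0, 820.0, 810.0, 560.0, 470.0], index=stages)
print(flag_material_errors(named_matrices, column_row_ref))
# only the 'retained' stage (about +27%) is flagged as a material error
```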
An inversion-based self-calibration for SIMS measurements: Application to H, F, and Cl in apatite
NASA Astrophysics Data System (ADS)
Boyce, J. W.; Eiler, J. M.
2011-12-01
Measurements of volatile abundances in igneous apatites can provide information regarding the abundances and evolution of volatiles in magmas, with applications to terrestrial volcanism and planetary evolution. Secondary ion mass spectrometry (SIMS) measurements can produce accurate and precise measurements of H and other volatiles in many materials including apatite. SIMS standardization generally makes use of empirical linear transfer functions that relate measured ion ratios to independently known concentrations. However, this approach is often limited by the lack of compositionally diverse, well-characterized, homogeneous standards. In general, SIMS calibrations are developed for minor and trace elements, and any two are treated as independent of one another. However, in crystalline materials, additional stoichiometric constraints may apply. In the case of apatite, the sum of concentrations of abundant volatile elements (H, Cl, and F) should closely approach 100% occupancy of their collective structural site. Here we propose and document the efficacy of a method for standardizing SIMS analyses of abundant volatiles in apatites that takes advantage of this stoichiometric constraint. The principal advantage of this method is that it is effectively self-standardizing; i.e., it requires no independently known homogeneous reference standards. We define a system of independent linear equations relating measured ion ratios (H/P, Cl/P, F/P) and unknown calibration slopes. Given sufficient range in the concentrations of the different elements among apatites measured in a single analytical session, solving this system of equations allows for the calibration slope for each element to be determined without standards, using only blank-corrected ion ratios. In the case that a data set of this kind lacks sufficient range in measured compositions of one or more of the relevant ion ratios, one can employ measurements of additional apatites of a variety of compositions to increase the statistical range and make the inversion more accurate and precise. These additional non-standard apatites need only be wide-ranging in composition: They need not be homogeneous nor have known H, F, or Cl concentrations. Tests utilizing synthetic data and data generated in the laboratory indicate that this method should yield satisfactory results provided apatites meet the criteria of the model. The inversion method is able to reproduce conventional calibrations to within <2.5%, a level of accuracy comparable to or even better than the uncertainty of the conventional calibration, and one that includes both error in the inversion method as well as any true error in the independently determined values of the standards. Uncertainties in the inversion calibrations range from 0.1-1.7% (2σ), typically an order of magnitude smaller than the uncertainties in conventional calibrations (~4-5% for H2O, 1-19% for F and Cl). However, potential systematic errors stem from the model assumption of 100% occupancy of this site by the measured elements. Use of this method simplifies analysis of H, F, and Cl in apatites by SIMS, and may also be amenable to other stoichiometrically limited substitution groups, including P+As+S+Si+C in apatite, and Zr+Hf+U+Th in non-metamict zircon.
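The inversion reduces to an overdetermined linear system: if each element's site fraction is a calibration slope times its measured ion ratio, then full site occupancy requires the slope-weighted sum of ratios to equal one for every grain. The sketch below sets up and solves that system by least squares on synthetic ion ratios; the slopes, noise level, and the example unknown are invented for illustration.

```python
import numpy as np

# Synthetic blank-corrected ion ratios (H/P, F/P, Cl/P) for a compositionally
# diverse suite of apatites; true calibration slopes are hidden from the fit
rng = np.random.default_rng(0)
true_slopes = np.array([4.0, 1.5, 2.5])                 # hypothetical values
site_fractions = rng.dirichlet(np.ones(3), size=30)     # H + F + Cl fill the site
ion_ratios = site_fractions / true_slopes               # ratio_i = X_i / s_i
ion_ratios *= 1.0 + rng.normal(0.0, 0.01, ion_ratios.shape)   # ~1% noise

# Stoichiometric constraint: s_H*(H/P) + s_F*(F/P) + s_Cl*(Cl/P) = 1 for every
# grain, i.e. the linear system R s = 1, solved with no reference standards
slopes, *_ = np.linalg.lstsq(ion_ratios, np.ones(len(ion_ratios)), rcond=None)
print("recovered slopes:", slopes.round(3))             # close to [4.0, 1.5, 2.5]

# Site fractions of an unknown grain then follow directly from its ratios
unknown_ratios = np.array([0.10, 0.30, 0.06])
occupancy = slopes * unknown_ratios
print("site occupancy:", occupancy.round(3), "sum:", round(occupancy.sum(), 3))
```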
Rogalsky, Corianne
2009-01-01
Numerous studies have identified an anterior temporal lobe (ATL) region that responds preferentially to sentence-level stimuli. It is unclear, however, whether this activity reflects a response to syntactic computations or some form of semantic integration. This distinction is difficult to investigate with the stimulus manipulations and anomaly detection paradigms traditionally implemented. The present functional magnetic resonance imaging study addresses this question via a selective attention paradigm. Subjects monitored for occasional semantic anomalies or occasional syntactic errors, thus directing their attention to semantic integration or to syntactic properties of the sentences. The hemodynamic response in the sentence-selective ATL region (defined with a localizer scan) was examined during anomaly/error-free sentences only, to avoid confounds due to error detection. The majority of the sentence-specific region of interest was equally modulated by attention to syntactic or compositional semantic features, whereas a smaller subregion was only modulated by the semantic task. We suggest that the sentence-specific ATL region is sensitive to both syntactic and integrative semantic functions during sentence processing, with a smaller portion of this area preferentially involved in the latter. This study also suggests that selective attention paradigms may be effective tools to investigate the functional diversity of networks involved in sentence processing. PMID:18669589
The inference of atmospheric ozone using satellite horizon measurements in the 1042 per cm band.
NASA Technical Reports Server (NTRS)
Russell, J. M., III; Drayson, S. R.
1972-01-01
Description of a method for inferring atmospheric ozone information using infrared horizon radiance measurements in the 1042 per cm band. An analysis based on this method proves the feasibility of the horizon experiment for determining ozone information and shows that the ozone partial pressure can be determined in the altitude range from 50 down to 25 km. A comprehensive error study is conducted which considers effects of individual errors as well as the effect of all error sources acting simultaneously. The results show that in the absence of a temperature profile bias error, it should be possible to determine the ozone partial pressure to within an rms value of 15 to 20%. It may be possible to reduce this rms error to 5% by smoothing the solution profile. These results would be seriously degraded by an atmospheric temperature bias error of only 3 K; thus, great care should be taken to minimize this source of error in an experiment. It is probable, in view of recent technological developments, that these errors will be much smaller in future flight experiments and the altitude range will widen to include from about 60 km down to the tropopause region.
NASA Astrophysics Data System (ADS)
Wang, Yang; Beirle, Steffen; Hendrick, Francois; Hilboll, Andreas; Jin, Junli; Kyuberis, Aleksandra A.; Lampel, Johannes; Li, Ang; Luo, Yuhan; Lodi, Lorenzo; Ma, Jianzhong; Navarro, Monica; Ortega, Ivan; Peters, Enno; Polyansky, Oleg L.; Remmers, Julia; Richter, Andreas; Puentedura, Olga; Van Roozendael, Michel; Seyler, André; Tennyson, Jonathan; Volkamer, Rainer; Xie, Pinhua; Zobov, Nikolai F.; Wagner, Thomas
2017-10-01
In order to promote the development of the passive DOAS technique, the Multi Axis DOAS - Comparison campaign for Aerosols and Trace gases (MAD-CAT) was held at the Max Planck Institute for Chemistry in Mainz, Germany, from June to October 2013. Here, we systematically compare the differential slant column densities (dSCDs) of nitrous acid (HONO) derived from measurements of seven different instruments. We also compare the tropospheric difference of SCDs (delta SCD) of HONO, namely the difference of the SCDs for the non-zenith observations and the zenith observation of the same elevation sequence. Different research groups analysed the spectra from their own instruments using their individual fit software. The fit errors of HONO dSCDs from the instruments with cooled large-size detectors are mostly in the range of 0.1 to 0.3 × 1015 molecules cm-2 for an integration time of 1 min. The fit error for the mini MAX-DOAS is around 0.7 × 1015 molecules cm-2. Although the HONO delta SCDs are normally smaller than 6 × 1015 molecules cm-2, consistent time series of HONO delta SCDs are retrieved from the measurements of different instruments. Both fits with a sequential Fraunhofer reference spectrum (FRS) and a daily noon FRS lead to similar consistency. Apart from the mini MAX-DOAS, the systematic absolute differences of HONO delta SCDs between the instruments are smaller than 0.63 × 1015 molecules cm-2. The correlation coefficients are higher than 0.7 and the slopes of linear regressions deviate from unity by less than 16 % for the elevation angle of 1°. The correlations decrease with an increase in elevation angle. All the participants also analysed synthetic spectra using the same baseline DOAS settings to evaluate the systematic errors of HONO results from their respective fit programs. In general, the errors are smaller than 0.3 × 1015 molecules cm-2, which is about half of the systematic difference between the real measurements. The differences of HONO delta SCDs retrieved in the selected three spectral ranges 335-361, 335-373 and 335-390 nm are considerable (up to 0.57 × 1015 molecules cm-2) for both real measurements and synthetic spectra. We performed sensitivity studies to quantify the dominant systematic error sources and to find a recommended DOAS setting in the three spectral ranges. The results show that water vapour absorption, temperature and wavelength dependence of O4 absorption, temperature dependence of the Ring spectrum, and polynomial and intensity offset correction all together dominate the systematic errors. We recommend a fit range of 335-373 nm for HONO retrievals. In this fit range, the overall systematic uncertainty is about 0.87 × 1015 molecules cm-2, much smaller than in the other two ranges. The typical random uncertainty is estimated to be about 0.16 × 1015 molecules cm-2, which is only 25 % of the total systematic uncertainty for most of the instruments in the MAD-CAT campaign. In summary, for most of the MAX-DOAS instruments at elevation angles below 5°, half of the daytime measurements (usually in the morning) of HONO delta SCD can be above the detection limit of 0.2 × 1015 molecules cm-2, with an uncertainty of ~0.9 × 1015 molecules cm-2.
Optimizing the learning rate for adaptive estimation of neural encoding models
2018-01-01
Closed-loop neurotechnologies often need to adaptively learn an encoding model that relates the neural activity to the brain state, and is used for brain state decoding. The speed and accuracy of adaptive learning algorithms are critically affected by the learning rate, which dictates how fast model parameters are updated based on new observations. Despite the importance of the learning rate, currently an analytical approach for its selection is largely lacking and existing signal processing methods vastly tune it empirically or heuristically. Here, we develop a novel analytical calibration algorithm for optimal selection of the learning rate in adaptive Bayesian filters. We formulate the problem through a fundamental trade-off that learning rate introduces between the steady-state error and the convergence time of the estimated model parameters. We derive explicit functions that predict the effect of learning rate on error and convergence time. Using these functions, our calibration algorithm can keep the steady-state parameter error covariance smaller than a desired upper-bound while minimizing the convergence time, or keep the convergence time faster than a desired value while minimizing the error. We derive the algorithm both for discrete-valued spikes modeled as point processes nonlinearly dependent on the brain state, and for continuous-valued neural recordings modeled as Gaussian processes linearly dependent on the brain state. Using extensive closed-loop simulations, we show that the analytical solution of the calibration algorithm accurately predicts the effect of learning rate on parameter error and convergence time. Moreover, the calibration algorithm allows for fast and accurate learning of the encoding model and for fast convergence of decoding to accurate performance. Finally, larger learning rates result in inaccurate encoding models and decoders, and smaller learning rates delay their convergence. The calibration algorithm provides a novel analytical approach to predictably achieve a desired level of error and convergence time in adaptive learning, with application to closed-loop neurotechnologies and other signal processing domains. PMID:29813069
Optimizing the learning rate for adaptive estimation of neural encoding models.
Hsieh, Han-Lin; Shanechi, Maryam M
2018-05-01
Closed-loop neurotechnologies often need to adaptively learn an encoding model that relates the neural activity to the brain state, and is used for brain state decoding. The speed and accuracy of adaptive learning algorithms are critically affected by the learning rate, which dictates how fast model parameters are updated based on new observations. Despite the importance of the learning rate, currently an analytical approach for its selection is largely lacking and existing signal processing methods vastly tune it empirically or heuristically. Here, we develop a novel analytical calibration algorithm for optimal selection of the learning rate in adaptive Bayesian filters. We formulate the problem through a fundamental trade-off that learning rate introduces between the steady-state error and the convergence time of the estimated model parameters. We derive explicit functions that predict the effect of learning rate on error and convergence time. Using these functions, our calibration algorithm can keep the steady-state parameter error covariance smaller than a desired upper-bound while minimizing the convergence time, or keep the convergence time faster than a desired value while minimizing the error. We derive the algorithm both for discrete-valued spikes modeled as point processes nonlinearly dependent on the brain state, and for continuous-valued neural recordings modeled as Gaussian processes linearly dependent on the brain state. Using extensive closed-loop simulations, we show that the analytical solution of the calibration algorithm accurately predicts the effect of learning rate on parameter error and convergence time. Moreover, the calibration algorithm allows for fast and accurate learning of the encoding model and for fast convergence of decoding to accurate performance. Finally, larger learning rates result in inaccurate encoding models and decoders, and smaller learning rates delay their convergence. The calibration algorithm provides a novel analytical approach to predictably achieve a desired level of error and convergence time in adaptive learning, with application to closed-loop neurotechnologies and other signal processing domains.
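A toy illustration of the trade-off the paper formalizes, under assumed dynamics (this is not the authors' adaptive Bayesian filter): a scalar parameter is tracked with a simple stochastic-gradient update, and a larger learning rate converges faster but settles at a larger steady-state error, while a smaller rate converges slowly but more precisely.

```python
# Toy sketch of the learning-rate trade-off between steady-state error and
# convergence time. True parameter, noise level, and rates are assumptions.
import numpy as np

rng = np.random.default_rng(0)
true_param, noise_sd, n_steps = 2.0, 1.0, 5000

def track(learning_rate):
    estimate, history = 0.0, []
    for _ in range(n_steps):
        observation = true_param + rng.normal(0.0, noise_sd)
        estimate += learning_rate * (observation - estimate)
        history.append(estimate)
    return np.array(history)

for lr in (0.2, 0.01):
    hist = track(lr)
    steady_state_rmse = np.sqrt(np.mean((hist[-1000:] - true_param) ** 2))
    # first step at which the estimate enters the +/-10% band around the truth
    within = np.abs(hist - true_param) < 0.1 * true_param
    convergence_step = int(np.argmax(within)) if within.any() else n_steps
    print(f"lr={lr:5.2f}  steady-state RMSE={steady_state_rmse:.3f}  "
          f"first step within 10%: {convergence_step}")
```

The calibration algorithm described above effectively chooses the learning rate analytically so that one side of this trade-off meets a user-specified bound while the other is minimized.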
ERIC Educational Resources Information Center
Jeptarus, Kipsamo E.; Ngene, Patrick K.
2016-01-01
The purpose of this research was to study the Lexico-semantic errors of the Keiyo-speaking standard seven primary school learners of English as a Second Language (ESL) in Keiyo District, Kenya. This study was guided by two related theories: Error Analysis Theory/Approach by Corder (1971) which approaches L2 learning through a detailed analysis of…
NASA Technical Reports Server (NTRS)
Hurley, K.; Briggs, M.; Connaughton, V.; Meegan, C.; von Kienlin, A.; Rau, A.; Zhang, X.; Golenetskii, S.; Aptekar, R.; Mazets, E.;
2012-01-01
In the first two years of operation of the Fermi GBM, the 9-spacecraft Interplanetary Network (IPN) detected 158 GBM bursts with one or two distant spacecraft, and triangulated them to annuli or error boxes. Combining the IPN and GBM localizations leads to error boxes which are up to 4 orders of magnitude smaller than those of the GBM alone. These localizations comprise the IPN supplement to the GBM catalog, and they support a wide range of scientific investigations.
A Bayesian approach to parameter and reliability estimation in the Poisson distribution.
NASA Technical Reports Server (NTRS)
Canavos, G. C.
1972-01-01
For life testing procedures, a Bayesian analysis is developed with respect to a random intensity parameter in the Poisson distribution. Bayes estimators are derived for the Poisson parameter and the reliability function based on uniform and gamma prior distributions of that parameter. A Monte Carlo procedure is implemented to make possible an empirical mean-squared error comparison between Bayes and existing minimum variance unbiased, as well as maximum likelihood, estimators. As expected, the Bayes estimators have mean-squared errors that are appreciably smaller than those of the other two.
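A minimal sketch of the kind of Monte Carlo mean-squared-error comparison described above: a gamma-prior Bayes estimator of the Poisson intensity versus the maximum likelihood estimator (the sample mean). The prior hyperparameters, true intensity, and sample size are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
true_lambda, n_obs, n_sims = 3.0, 5, 20_000
shape, rate = 2.0, 1.0          # gamma prior hyperparameters (assumed)

mse_bayes = mse_mle = 0.0
for _ in range(n_sims):
    x = rng.poisson(true_lambda, size=n_obs)
    mle = x.mean()                                  # maximum likelihood / MVU estimate
    bayes = (shape + x.sum()) / (rate + n_obs)      # gamma-Poisson posterior mean
    mse_mle += (mle - true_lambda) ** 2
    mse_bayes += (bayes - true_lambda) ** 2

print(f"MSE (MLE)   = {mse_mle / n_sims:.4f}")
print(f"MSE (Bayes) = {mse_bayes / n_sims:.4f}")
```

With small samples the shrinkage toward the prior mean typically yields the smaller mean-squared error, mirroring the qualitative conclusion of the abstract.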
49 CFR Appendix F to Part 240 - Medical Standards Guidelines
Code of Federal Regulations, 2010 CFR
2010-10-01
... greater guidance on the procedures that should be employed in administering the vision and hearing... more errors on plates 1-15. MULTIFUNCTION VISION TESTER Keystone Orthoscope Any error. OPTEC 2000 Any error. Titmus Vision Tester Any error. Titmus II Vision Tester Any error. (3) In administering any of...
49 CFR Appendix F to Part 240 - Medical Standards Guidelines
Code of Federal Regulations, 2011 CFR
2011-10-01
... greater guidance on the procedures that should be employed in administering the vision and hearing... more errors on plates 1-15. MULTIFUNCTION VISION TESTER Keystone Orthoscope Any error. OPTEC 2000 Any error. Titmus Vision Tester Any error. Titmus II Vision Tester Any error. (3) In administering any of...
Comparison of Optimal Design Methods in Inverse Problems
2011-05-11
The corresponding FIM can be estimated by F̂(τ) = F̂(τ, θ̂_OLS) = (Σ̂_N(θ̂_OLS))^-1 (Eq. 13). The asymptotic standard errors are given by SE_k(θ_0) = √[(Σ_N0)_kk], k = 1, ..., p (Eq. 14). These standard errors are estimated in practice (when θ_0 and σ_0 are not known) by SE_k(θ̂_OLS) = √[(Σ̂_N(θ̂_OLS))_kk], k = 1, ..., p, and the bootstrap standard errors by SE_k(θ̂_boot) = √[Cov(θ̂_boot)_kk]. We will compare the optimal design methods using the standard errors resulting from the optimal time points each method selects.
ERIC Educational Resources Information Center
Longford, Nicholas T.
Large scale surveys usually employ a complex sampling design and as a consequence, no standard methods for estimation of the standard errors associated with the estimates of population means are available. Resampling methods, such as jackknife or bootstrap, are often used, with reference to their properties of robustness and reduction of bias. A…
Vajda, E G; Skedros, J G; Bloebaum, R D
1998-10-01
Backscattered electron (BSE) imaging has proven to be a useful method for analyzing the mineral distribution in microscopic regions of bone. However, an accepted method of standardization has not been developed, limiting the utility of BSE imaging for truly quantitative analysis. Previous work has suggested that BSE images can be standardized by energy-dispersive x-ray spectrometry (EDX). Unfortunately, EDX-standardized BSE images tend to underestimate the mineral content of bone when compared with traditional ash measurements. The goal of this study is to investigate the nature of the deficit between EDX-standardized BSE images and ash measurements. A series of analytical standards, ashed bone specimens, and unembedded bone specimens were investigated to determine the source of the deficit previously reported. The primary source of error was found to be inaccurate ZAF corrections to account for the organic phase of the bone matrix. Conductive coatings, methylmethacrylate embedding media, and minor elemental constituents in bone mineral introduced negligible errors. It is suggested that the errors would remain constant and an empirical correction could be used to account for the deficit. However, extensive preliminary testing of the analysis equipment is essential.
5 CFR 1605.11 - Makeup of missed or insufficient contributions.
Code of Federal Regulations, 2010 CFR
2010-01-01
... associated breakage to the participant's account in accordance with § 1605.2. (c) Employee makeup... employing agency acknowledges that an error has occurred which has caused a smaller amount of employee... establish a schedule to make up the deficient contributions through future payroll deductions. Employee...
Analysis of DGPS/INS and MLS/INS final approach navigation errors and control performance data
NASA Technical Reports Server (NTRS)
Hueschen, Richard M.; Spitzer, Cary R.
1992-01-01
Flight tests were conducted jointly by NASA Langley Research Center and Honeywell, Inc., on a B-737 research aircraft to record a database for evaluating the performance of a differential GPS (DGPS)/inertial navigation system (INS) which used GPS Course/Acquisition code receivers. Estimates from the DGPS/INS and a Microwave Landing System (MLS)/INS, and various aircraft parameter data were recorded in real time aboard the aircraft while flying along the final approach path to landing. This paper presents the mean and standard deviation of the DGPS/INS and MLS/INS navigation position errors computed relative to the laser tracker system and of the difference between the DGPS/INS and MLS/INS velocity estimates. RMS errors are presented for DGPS/INS and MLS/INS guidance errors (localizer and glideslope). The mean navigation position errors and the standard deviation of the x position coordinate of the DGPS/INS and MLS/INS systems were found to be of similar magnitude, while the standard deviations of the y and z position coordinate errors were significantly larger for DGPS/INS compared to MLS/INS.
Performance improvement of robots using a learning control scheme
NASA Technical Reports Server (NTRS)
Krishna, Ramuhalli; Chiang, Pen-Tai; Yang, Jackson C. S.
1987-01-01
Many applications of robots require that the same task be repeated a number of times. In such applications, the errors associated with one cycle are also repeated every cycle of the operation. An off-line learning control scheme is used here to modify the command function which would result in smaller errors in the next operation. The learning scheme is based on a knowledge of the errors and error rates associated with each cycle. Necessary conditions for the iterative scheme to converge to zero errors are derived analytically considering a second order servosystem model. Computer simulations show that the errors are reduced at a faster rate if the error rate is included in the iteration scheme. The results also indicate that the scheme may increase the magnitude of errors if the rate information is not included in the iteration scheme. Modification of the command input using a phase and gain adjustment is also proposed to reduce the errors with one attempt. The scheme is then applied to a computer model of a robot system similar to PUMA 560. Improved performance of the robot is shown by considering various cases of trajectory tracing. The scheme can be successfully used to improve the performance of actual robots within the limitations of the repeatability and noise characteristics of the robot.
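A minimal sketch of the off-line learning update described above: after each repeated cycle, the command is corrected with terms proportional to that cycle's error and error rate. The plant model and gains here are illustrative assumptions, not the PUMA 560 model used in the paper.

```python
import numpy as np

dt, n = 0.01, 200
t = np.arange(n) * dt
desired = np.sin(2 * np.pi * t)          # trajectory to be repeated each cycle

def run_cycle(command):
    """Crude discrete second-order servo (assumed dynamics) driven by `command`."""
    wn, zeta = 8.0, 0.7
    y = np.zeros(n); v = 0.0
    for k in range(1, n):
        a = wn**2 * (command[k-1] - y[k-1]) - 2 * zeta * wn * v
        v += a * dt
        y[k] = y[k-1] + v * dt
    return y

kp, kd = 0.8, 0.05                       # learning gains (assumed)
command = desired.copy()
for cycle in range(6):
    output = run_cycle(command)
    error = desired - output
    error_rate = np.gradient(error, dt)
    print(f"cycle {cycle}: RMS tracking error = {np.sqrt(np.mean(error**2)):.4f}")
    command = command + kp * error + kd * error_rate   # off-line command update
```

As in the paper, including the error-rate term speeds up the cycle-to-cycle reduction of the tracking error relative to using the error alone.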
NASA Astrophysics Data System (ADS)
Shulman, Igor; Gould, Richard W.; Frolov, Sergey; McCarthy, Sean; Penta, Brad; Anderson, Stephanie; Sakalaukus, Peter
2018-03-01
An ensemble-based approach to specify observational error covariance in the data assimilation of satellite bio-optical properties is proposed. The observational error covariance is derived from statistical properties of the generated ensemble of satellite MODIS-Aqua chlorophyll (Chl) images. The proposed observational error covariance is used in the Optimal Interpolation scheme for the assimilation of MODIS-Aqua Chl observations. The forecast error covariance is specified in the subspace of the multivariate (bio-optical, physical) empirical orthogonal functions (EOFs) estimated from a month-long model run. The assimilation of surface MODIS-Aqua Chl improved surface and subsurface model Chl predictions. Comparisons with surface and subsurface water samples demonstrate that the data assimilation run with the proposed observational error covariance has higher RMSE than the data assimilation run with an "optimistic" assumption about observational errors (10% of the ensemble mean), but smaller or comparable RMSE relative to the data assimilation run that assumes observational errors equal to 35% of the ensemble mean (the target error for the satellite chlorophyll data product). Also, with the assimilation of the MODIS-Aqua Chl data, the RMSE between observed and model-predicted fractions of diatoms to the total phytoplankton is reduced by a factor of two in comparison to the nonassimilative run.
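A minimal sketch of an optimal-interpolation update in which the observational error covariance R is taken from the spread of an ensemble of satellite images, in the spirit of the approach above. Dimensions, covariances, and values are toy illustrations, not the paper's configuration.

```python
import numpy as np

rng = np.random.default_rng(2)
n_state, n_obs = 6, 3

# Observe three of the six model cells
H = np.zeros((n_obs, n_state)); H[0, 0] = H[1, 2] = H[2, 4] = 1.0

# Assumed forecast (background) error covariance with spatial correlation
idx = np.arange(n_state)
B = 0.3 * np.exp(-np.abs(np.subtract.outer(idx, idx)) / 2.0)

# Ensemble of "satellite" observations of the same scene; its covariance
# plays the role of the observational error covariance R.
ensemble = rng.normal(loc=1.5, scale=0.2, size=(50, n_obs))
R = np.cov(ensemble, rowvar=False)

forecast = np.full(n_state, 1.0)
obs = ensemble.mean(axis=0)

K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)      # optimal-interpolation gain
analysis = forecast + K @ (obs - H @ forecast)    # analysis update
print("analysis state:", np.round(analysis, 3))
```

Replacing a fixed percentage-of-mean assumption for R with the ensemble-derived covariance is the key substitution the abstract describes.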
Conditional Standard Errors of Measurement for Scale Scores.
ERIC Educational Resources Information Center
Kolen, Michael J.; And Others
1992-01-01
A procedure is described for estimating the reliability and conditional standard errors of measurement of scale scores incorporating the discrete transformation of raw scores to scale scores. The method is illustrated using a strong true score model, and practical applications are described. (SLD)
Impact of Tropospheric Aerosol Absorption on Ozone Retrieval from buv Measurements
NASA Technical Reports Server (NTRS)
Torres, O.; Bhartia, P. K.
1998-01-01
The impact of tropospheric aerosols on the retrieval of column ozone amounts using spaceborne measurements of backscattered ultraviolet radiation is examined. Using radiative transfer calculations, we show that uv-absorbing desert dust may introduce errors as large as 10% in ozone column amount, depending on the aerosol layer height and optical depth. Smaller errors are produced by carbonaceous aerosols that result from biomass burning. Though the error is produced by complex interactions between ozone absorption (both stratospheric and tropospheric), aerosol scattering, and aerosol absorption, a surprisingly simple correction procedure reduces the error to about 1%, for a variety of aerosols and for a wide range of aerosol loading. Comparison of the corrected TOMS data with operational data indicates that though the zonal mean total ozone derived from TOMS is not significantly affected by these errors, localized effects in the tropics can be large enough to seriously affect studies of tropospheric ozone currently under way using the TOMS data.
Cooperative MIMO communication at wireless sensor network: an error correcting code approach.
Islam, Mohammad Rakibul; Han, Young Shin
2011-01-01
Cooperative communication in wireless sensor network (WSN) explores the energy efficient wireless communication schemes between multiple sensors and data gathering node (DGN) by exploiting multiple input multiple output (MIMO) and multiple input single output (MISO) configurations. In this paper, an energy efficient cooperative MIMO (C-MIMO) technique is proposed where low density parity check (LDPC) code is used as an error correcting code. The rate of LDPC code is varied by varying the length of message and parity bits. Simulation results show that the cooperative communication scheme outperforms SISO scheme in the presence of LDPC code. LDPC codes with different code rates are compared using bit error rate (BER) analysis. BER is also analyzed under different Nakagami fading scenario. Energy efficiencies are compared for different targeted probability of bit error p(b). It is observed that C-MIMO performs more efficiently when the targeted p(b) is smaller. Also the lower encoding rate for LDPC code offers better error characteristics.
Global Warming Estimation from MSU
NASA Technical Reports Server (NTRS)
Prabhakara, C.; Iacovazzi, Robert, Jr.
1999-01-01
In this study, we have developed time series of global temperature from 1980-97 based on the Microwave Sounding Unit (MSU) Ch 2 (53.74 GHz) observations taken from polar-orbiting NOAA operational satellites. In order to create these time series, systematic errors (approx. 0.1 K) in the Ch 2 data arising from inter-satellite differences are removed objectively. On the other hand, smaller systematic errors (approx. 0.03 K) in the data due to orbital drift of each satellite cannot be removed objectively. Such errors are expected to remain in the time series and leave an uncertainty in the inferred global temperature trend. With the help of a statistical method, the error in the MSU inferred global temperature trend resulting from orbital drifts and residual inter-satellite differences of all satellites is estimated to be 0.06 K/decade. Incorporating this error, our analysis shows that the global temperature increased at a rate of 0.13 +/- 0.06 K/decade during 1980-97.
Cooperative MIMO Communication at Wireless Sensor Network: An Error Correcting Code Approach
Islam, Mohammad Rakibul; Han, Young Shin
2011-01-01
Cooperative communication in wireless sensor network (WSN) explores the energy efficient wireless communication schemes between multiple sensors and data gathering node (DGN) by exploiting multiple input multiple output (MIMO) and multiple input single output (MISO) configurations. In this paper, an energy efficient cooperative MIMO (C-MIMO) technique is proposed where low density parity check (LDPC) code is used as an error correcting code. The rate of LDPC code is varied by varying the length of message and parity bits. Simulation results show that the cooperative communication scheme outperforms SISO scheme in the presence of LDPC code. LDPC codes with different code rates are compared using bit error rate (BER) analysis. BER is also analyzed under different Nakagami fading scenario. Energy efficiencies are compared for different targeted probability of bit error pb. It is observed that C-MIMO performs more efficiently when the targeted pb is smaller. Also the lower encoding rate for LDPC code offers better error characteristics. PMID:22163732
What to use to express the variability of data: Standard deviation or standard error of mean?
Barde, Mohini P; Barde, Prajakt J
2012-07-01
Statistics plays a vital role in biomedical research. It helps present data precisely and draw meaningful conclusions. While presenting data, one should be aware of using adequate statistical measures. In biomedical journals, Standard Error of Mean (SEM) and Standard Deviation (SD) are used interchangeably to express the variability, though they measure different parameters. SEM quantifies uncertainty in the estimate of the mean whereas SD indicates dispersion of the data from the mean. As readers are generally interested in knowing the variability within the sample, descriptive data should be precisely summarized with SD. Use of SEM should be limited to computing the CI, which measures the precision of the population estimate. Journals can avoid such errors by requiring authors to adhere to their guidelines.
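A minimal sketch of the distinction discussed above: SD describes the spread of the individual observations, while SEM describes the precision of the sample mean and shrinks as the sample grows. The data are simulated for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)
for n in (10, 100, 1000):
    sample = rng.normal(loc=120.0, scale=15.0, size=n)   # e.g. a biomedical variable
    sd = sample.std(ddof=1)                              # dispersion of the data
    sem = sd / np.sqrt(n)                                 # precision of the mean
    ci_low = sample.mean() - 1.96 * sem
    ci_high = sample.mean() + 1.96 * sem
    print(f"n={n:5d}  SD={sd:5.2f}  SEM={sem:5.2f}  "
          f"95% CI of mean=({ci_low:6.2f}, {ci_high:6.2f})")
```

Note that SD stabilizes around the population value as n increases, whereas SEM (and the confidence interval width) keeps shrinking, which is why the two should not be used interchangeably.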
Price, Larry R; Raju, Nambury; Lurie, Anna; Wilkins, Charles; Zhu, Jianjun
2006-02-01
A specific recommendation of the 1999 Standards for Educational and Psychological Testing by the American Educational Research Association, the American Psychological Association, and the National Council on Measurement in Education is that test publishers report estimates of the conditional standard error of measurement (SEM). Procedures for calculating the conditional (score-level) SEM based on raw scores are well documented; however, few procedures have been developed for estimating the conditional SEM of subtest or composite scale scores resulting from a nonlinear transformation. Item response theory provided the psychometric foundation to derive the conditional standard errors of measurement and confidence intervals for composite scores on the Wechsler Preschool and Primary Scale of Intelligence-Third Edition.
Processing and error compensation of diffractive optical element
NASA Astrophysics Data System (ADS)
Zhang, Yunlong; Wang, Zhibin; Zhang, Feng; Qin, Hui; Li, Junqi; Mai, Yuying
2014-09-01
Diffractive optical elements (DOEs) show high diffraction efficiency and good dispersion performance, which makes optical systems lighter and more compact. In this paper, the design, processing, testing, and compensation of a DOE are discussed, with emphasis on the compensation technology, which is based on analyzing the DOE measurement data from a Taylor Hobson PGI 1250. In this method, the relationship between the shadowing effect of the diamond tool and the processing accuracy is analyzed. Verification machining on the Taylor Hobson NANOFORM 250 lathe indicates that, after one compensation pass, the PV reaches 0.539 micron, the surface roughness reaches 4 nm, the step position error is smaller than λ/10, and the step height error is less than 0.23 micron.
NASA Astrophysics Data System (ADS)
Cheong, Kwang-Ho; Lee, Me-Yeon; Kang, Sei-Kwon; Yoon, Jai-Woong; Park, Soah; Hwang, Taejin; Kim, Haeyoung; Kim, Kyoung Ju; Han, Tae Jin; Bae, Hoonsik
2015-07-01
The aim of this study is to set up statistical quality control for monitoring the volumetric modulated arc therapy (VMAT) delivery error by using the machine's log data. Eclipse and a Clinac iX linac with the RapidArc system (Varian Medical Systems, Palo Alto, USA) are used for delivery of the VMAT plan. During the delivery of the RapidArc fields, the machine determines the delivered monitor units (MUs) and the gantry angle's position accuracy, and the standard deviations of the MU (σMU: dosimetric error) and the gantry angle (σGA: geometric error) are displayed on the console monitor after completion of the RapidArc delivery. In the present study, first, the log data were analyzed to confirm their validity and usability; then, statistical process control (SPC) was applied to monitor the σMU and the σGA in a timely manner for all RapidArc fields: a total of 195 arc fields for 99 patients. The MU and the GA were determined twice for all fields, that is, first during the patient-specific plan QA and then again during the first treatment. The σMU and the σGA time series were quite stable irrespective of the treatment site; however, the σGA depended strongly on the gantry's rotation speed. The σGA of the RapidArc delivery for stereotactic body radiation therapy (SBRT) was smaller than that for the typical VMAT. Therefore, SPC was applied to SBRT cases and general cases separately. Moreover, the accuracy of the gantry-rotation potentiometer is important because the σGA can change dramatically depending on its condition. By applying SPC to the σMU and σGA, we could monitor the delivery error efficiently. However, the upper and the lower limits of SPC need to be determined carefully with full knowledge of the machine and log data.
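A minimal sketch of a Shewhart individuals chart applied to per-field σMU (or σGA) values extracted from the machine log files. The numbers are invented, and the control-limit recipe (mean ± 2.66 × average moving range) is one standard choice, not necessarily the one used in the study.

```python
import numpy as np

# Per-field sigma_MU values from the log data (toy values)
sigma_mu = np.array([0.21, 0.19, 0.22, 0.20, 0.23, 0.18, 0.24, 0.21,
                     0.20, 0.22, 0.35, 0.21, 0.19, 0.20])

center = sigma_mu.mean()
avg_moving_range = np.abs(np.diff(sigma_mu)).mean()
ucl = center + 2.66 * avg_moving_range     # individuals-chart control limits
lcl = max(center - 2.66 * avg_moving_range, 0.0)

for i, value in enumerate(sigma_mu, start=1):
    flag = "OUT OF CONTROL" if not (lcl <= value <= ucl) else ""
    print(f"field {i:2d}: sigma_MU={value:.2f}  {flag}")
print(f"CL={center:.3f}, LCL={lcl:.3f}, UCL={ucl:.3f}")
```

A point outside the limits (the 0.35 value here) would prompt investigation of that field's delivery, consistent with the monitoring role described in the abstract.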
Cost-effectiveness of the stream-gaging program in Kentucky
Ruhl, K.J.
1989-01-01
This report documents the results of a study of the cost-effectiveness of the stream-gaging program in Kentucky. The total surface-water program includes 97 daily-discharge stations, 12 stage-only stations, and 35 crest-stage stations and is operated on a budget of $950,700. One station used for research lacks an adequate source of funding and should be discontinued when the research ends. Most stations in the network are multiple-use, with 65 stations operated for the purpose of defining hydrologic systems, 48 for project operation, 47 for definition of regional hydrology, and 43 for hydrologic forecasting purposes. Eighteen stations support water quality monitoring activities, one station is used for planning and design, and one station is used for research. The average standard error of estimation of streamflow records was determined only for stations in the Louisville Subdistrict. Under current operating policy, with a budget of $223,500, the average standard error of estimation is 28.5%. Altering the travel routes and measurement frequency to reduce the amount of lost stage record would allow a slight decrease in standard error to 26.9%. The results indicate that the collection of streamflow records in the Louisville Subdistrict is cost effective in its present mode of operation. In the Louisville Subdistrict, a minimum budget of $214,200 is required to operate the current network at an average standard error of 32.7%. A budget less than this does not permit proper service and maintenance of the gages and recorders. The maximum budget analyzed was $268,200, which would result in an average standard error of 16.9%, indicating that if the budget were increased by 20%, the standard error would be reduced by about 40%. (USGS)
Milekovic, Tomislav; Ball, Tonio; Schulze-Bonhage, Andreas; Aertsen, Ad; Mehring, Carsten
2013-01-01
Background Brain-machine interfaces (BMIs) can translate the neuronal activity underlying a user’s movement intention into movements of an artificial effector. In spite of continuous improvements, errors in movement decoding are still a major problem of current BMI systems. If the difference between the decoded and intended movements becomes noticeable, it may lead to an execution error. Outcome errors, where subjects fail to reach a certain movement goal, are also present during online BMI operation. Detecting such errors can be beneficial for BMI operation: (i) errors can be corrected online after being detected and (ii) the adaptive BMI decoding algorithm can be updated to make fewer errors in the future. Methodology/Principal Findings Here, we show that error events can be detected from human electrocorticography (ECoG) during a continuous task with high precision, given a temporal tolerance of 300–400 milliseconds. We quantified the error detection accuracy and showed that, using only a small subset of 2×2 ECoG electrodes, 82% of detection information for outcome error and 74% of detection information for execution error available from all ECoG electrodes could be retained. Conclusions/Significance The error detection method presented here could be used to correct errors made during BMI operation or to adapt a BMI algorithm to make fewer errors in the future. Furthermore, our results indicate that a smaller ECoG implant could be used for error detection. Reducing the size of an ECoG electrode implant used for BMI decoding and error detection could significantly reduce the medical risk of implantation. PMID:23383315
Visual error augmentation enhances learning in three dimensions.
Sharp, Ian; Huang, Felix; Patton, James
2011-09-02
Because recent preliminary evidence points to the use of Error augmentation (EA) for motor learning enhancements, we visually enhanced deviations from a straight line path while subjects practiced a sensorimotor reversal task, similar to laparoscopic surgery. Our study asked 10 healthy subjects in two groups to perform targeted reaching in a simulated virtual reality environment, where the transformation of the hand position matrix was a complete reversal--rotated 180 degrees about an arbitrary axis (hence 2 of the 3 coordinates are reversed). Our data showed that after 500 practice trials, error-augmented-trained subjects reached the desired targets more quickly and with lower error (differences of 0.4 seconds and 0.5 cm Maximum Perpendicular Trajectory deviation) when compared to the control group. Furthermore, the manner in which subjects practiced was influenced by the error augmentation, resulting in more continuous motions for this group and smaller errors. Even with the extreme sensory discordance of a reversal, these data further support that distorted reality can promote more complete adaptation/learning when compared to regular training. Lastly, upon removing the flip all subjects quickly returned to baseline rapidly within 6 trials.
An error analysis perspective for patient alignment systems.
Figl, Michael; Kaar, Marcus; Hoffman, Rainer; Kratochwil, Alfred; Hummel, Johann
2013-09-01
This paper analyses the effects of error sources which can be found in patient alignment systems. As an example, an ultrasound (US) repositioning system and its transformation chain are assessed. The findings of this concept can also be applied to any navigation system. In a first step, all error sources were identified and where applicable, corresponding target registration errors were computed. By applying error propagation calculations on these commonly used registration/calibration and tracking errors, we were able to analyse the components of the overall error. Furthermore, we defined a special situation where the whole registration chain reduces to the error caused by the tracking system. Additionally, we used a phantom to evaluate the errors arising from the image-to-image registration procedure, depending on the image metric used. We have also discussed how this analysis can be applied to other positioning systems such as Cone Beam CT-based systems or Brainlab's ExacTrac. The estimates found by our error propagation analysis are in good agreement with the numbers found in the phantom study but significantly smaller than results from patient evaluations. We probably underestimated human influences such as the US scan head positioning by the operator and tissue deformation. Rotational errors of the tracking system can multiply these errors, depending on the relative position of tracker and probe. We were able to analyse the components of the overall error of a typical patient positioning system. We consider this to be a contribution to the optimization of the positioning accuracy for computer guidance systems.
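A minimal sketch of the error-propagation idea discussed above: if the steps of a registration/tracking chain contribute independent errors, their combined effect can be approximated by a root sum of squares. The component names and values below are invented placeholders, not measured system errors.

```python
import math

error_components_mm = {
    "US probe calibration":          0.8,
    "image-to-image registration":   1.2,
    "tracking system (translation)": 0.4,
    "target localisation":           0.5,
}

total_variance = sum(v ** 2 for v in error_components_mm.values())
combined_error = math.sqrt(total_variance)

for name, value in error_components_mm.items():
    share = value ** 2 / total_variance          # contribution to overall variance
    print(f"{name:32s} {value:4.1f} mm   {share:6.1%} of variance")
print(f"{'combined (root sum of squares)':32s} {combined_error:4.1f} mm")
```

Breaking the budget down this way also makes clear why rotational tracking errors, which scale with the lever arm between tracker and probe, can dominate the chain in unfavourable geometries.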
Pleil, Joachim D
2016-01-01
This commentary is the second of a series outlining one specific concept in interpreting biomarkers data. In the first, an observational method was presented for assessing the distribution of measurements before making parametric calculations. Here, the discussion revolves around the next step, the choice of using standard error of the mean or the calculated standard deviation to compare or predict measurement results.
ERIC Educational Resources Information Center
Nozari, Nazbanou; Dell, Gary S.; Schwartz, Myrna F.
2011-01-01
Despite the existence of speech errors, verbal communication is successful because speakers can detect (and correct) their errors. The standard theory of speech-error detection, the perceptual-loop account, posits that the comprehension system monitors production output for errors. Such a comprehension-based monitor, however, cannot explain the…
Vehicle Speed and Length Estimation Using Data from Two Anisotropic Magneto-Resistive (AMR) Sensors
Markevicius, Vytautas; Navikas, Dangirutis; Valinevicius, Algimantas; Zilys, Mindaugas
2017-01-01
Methods for estimating a car’s length are presented in this paper, as well as the results achieved by using a self-designed system equipped with two anisotropic magneto-resistive (AMR) sensors, which were placed on a road lane. The purpose of the research was to compare the lengths of mid-size cars, i.e., family cars (hatchbacks), saloons (sedans), station wagons and SUVs. Four methods were used in the research: a simple threshold based method, a threshold method based on moving average and standard deviation, a two-extreme-peak detection method and a method based on the amplitude and time normalization using linear extrapolation (or interpolation). The results were achieved by analyzing changes in the magnitude and in the absolute z-component of the magnetic field as well. The tests, which were performed in four different Earth directions, show differences in the values of estimated lengths. The magnitude-based results in the case when cars drove from the South to the North direction were even up to 1.2 m higher than the other results achieved using the threshold methods. Smaller differences in lengths were observed when the distances were measured between two extreme peaks in the car magnetic signatures. The results were summarized in tables and the errors of estimated lengths were presented. The maximal errors, related to real lengths, were up to 22%. PMID:28771171
Yenilmez, Firdes; Düzgün, Sebnem; Aksoy, Aysegül
2015-01-01
In this study, kernel density estimation (KDE) was coupled with ordinary two-dimensional kriging (OK) to reduce the number of sampling locations in measurement and kriging of dissolved oxygen (DO) concentrations in Porsuk Dam Reservoir (PDR). Conservation of the spatial correlation structure in the DO distribution was a target. KDE was used as a tool to aid in identification of the sampling locations that would be removed from the sampling network in order to decrease the total number of samples. Accordingly, several networks were generated in which sampling locations were reduced from 65 to 10 in increments of 4 or 5 points at a time based on kernel density maps. DO variograms were constructed, and DO values in PDR were kriged. Performance of the networks in DO estimation was evaluated through various error metrics, standard error maps (SEM), and whether the spatial correlation structure was conserved or not. Results indicated that a smaller number of sampling points resulted in loss of information regarding the spatial correlation structure in DO. The minimum number of representative sampling points for PDR was 35. Efficacy of the sampling location selection method was tested against the networks generated by experts. It was shown that the evaluation approach proposed in this study provided a better sampling network design in which the spatial correlation structure of DO was sustained for kriging.
Morrison, William R.; Cullum, John P.; Leskey, Tracy C.
2015-01-01
Halyomorpha halys (Stål) is an invasive pest that attacks numerous crops. For growers to make informed management decisions against H. halys, an effective monitoring tool must be in place. We evaluated various trap designs baited with the two-component aggregation pheromone of H. halys and synergist and deployed in commercial apple orchards. We compared our current experimental standard trap, a black plywood pyramid trap 1.22 m in height deployed between border row apple trees with other trap designs for two growing seasons. These included a black lightweight coroplast pyramid trap of similar dimension, a smaller (29 cm) pyramid trap also ground deployed, a smaller limb-attached pyramid trap, a smaller pyramid trap hanging from a horizontal branch, and a semipyramid design known as the Rescue trap. We found that the coroplast pyramid was the most sensitive, capturing more adults than all other trap designs including our experimental standard. Smaller pyramid traps performed equally in adult captures to our experimental standard, though nymphal captures were statistically lower for the hanging traps. Experimental standard plywood and coroplast pyramid trap correlations were strong, suggesting that standard plywood pyramid traps could be replaced with lighter, cheaper coroplast pyramid traps. Strong correlations with small ground- and limb-deployed pyramid traps also suggest that these designs offer promise as well. Growers may be able to adopt alternative trap designs that are cheaper, lighter, and easier to deploy to monitor H. halys in orchards without a significant loss in sensitivity. PMID:26470309
Calibration of a stack of NaI scintillators at the Berkeley Bevalac
NASA Technical Reports Server (NTRS)
Schindler, S. M.; Buffington, A.; Lau, K.; Rasmussen, I. L.
1983-01-01
An analysis of the carbon and argon data reveals that essentially all of the charge-changing fragmentation reactions within the stack can be identified and removed by imposing the simple criteria relating the observed energy deposition profiles to the expected Bragg curve depositions. It is noted that these criteria are even capable of identifying approximately one-third of the expected neutron-stripping interactions, which in these cases have anomalous deposition profiles. The contribution of mass error from uncertainty in delta E has an upper limit of 0.25 percent for Mn; this produces an associated mass error for the experiment of about 0.14 amu. It is believed that this uncertainty will change little with changing gamma. Residual errors in the mapping produce even smaller mass errors for lighter isotopes, whereas photoelectron fluctuations and delta-ray effects are approximately the same independent of the charge and energy deposition.
A Very Low Cost BCH Decoder for High Immunity of On-Chip Memories
NASA Astrophysics Data System (ADS)
Seo, Haejun; Han, Sehwan; Heo, Yoonseok; Cho, Taewon
BCH (Bose-Chaudhuri-Hocquenghem) codes, a class of cyclic block codes, have very strong error-correcting ability, which is vital for error protection in memory systems. Among the several decoding algorithms for BCH codes, the PGZ (Peterson-Gorenstein-Zierler) algorithm is advantageous because it corrects errors through simple calculations for small values of t. However, it is problematic when a division by zero occurs in the case ν ≠ t. In this paper, the circuit is simplified by proposing a multi-mode hardware architecture that covers the cases ν = 0 to 3. First, production cost is reduced thanks to the smaller number of gates. Second, the lower power consumption lengthens the recharging period. The very low cost and simple datapath make our design a good choice as ECC (error correction code/circuit) for on-chip memories in small-footprint SoCs (systems on chip).
Tropical forecasting - Predictability perspective
NASA Technical Reports Server (NTRS)
Shukla, J.
1989-01-01
Results are presented of classical predictability studies and forecast experiments with observed initial conditions to show the nature of initial error growth and final error equilibration for the tropics and midlatitudes, separately. It is found that the theoretical upper limit of tropical circulation predictability is far less than for midlatitudes. The error growth for a complete general circulation model is compared to a dry version of the same model in which there is no prognostic equation for moisture, and diabatic heat sources are prescribed. It is found that the growth rate of synoptic-scale errors for the dry model is significantly smaller than for the moist model, suggesting that the interactions between dynamics and moist processes are among the important causes of atmospheric flow predictability degradation. Results are then presented of numerical experiments showing that correct specification of the slowly varying boundary condition of SST produces significant improvement in the prediction of time-averaged circulation and rainfall over the tropics.
General Aviation Avionics Statistics.
1980-12-01
... designed to produce standard errors on these variables at levels specified by the FAA. No controls were placed on the standard errors of the non-design... Transponder Encoding Requirement and Mode C Automatic (has been deleted) Altitude Reporting Capability; Two-way Radio; VOR or TACAN Receiver. Remaining 42...
2011-01-01
Background Practicing arm and gait movements with robotic assistance after neurologic injury can help patients improve their movement ability, but patients sometimes reduce their effort during training in response to the assistance. Reduced effort has been hypothesized to diminish clinical outcomes of robotic training. To better understand patient slacking, we studied the role of visual distraction and auditory feedback in modulating patient effort during a common robot-assisted tracking task. Methods Fourteen participants with chronic left hemiparesis from stroke, five control participants with chronic right hemiparesis, and fourteen non-impaired healthy control participants tracked a visual target with their arms while receiving adaptive assistance from a robotic arm exoskeleton. We compared four practice conditions: the baseline tracking task alone; tracking while also performing a visual distracter task; tracking with the visual distracter and sound feedback; and tracking with sound feedback. For the distracter task, symbols were randomly displayed in the corners of the computer screen, and the participants were instructed to click a mouse button when a target symbol appeared. The sound feedback consisted of a repeating beep, with the frequency of repetition made to increase with increasing tracking error. Results Participants with stroke halved their effort and doubled their tracking error when performing the visual distracter task with their left hemiparetic arm. With sound feedback, however, these participants increased their effort and decreased their tracking error close to their baseline levels, while also performing the distracter task successfully. These effects were significantly smaller for the participants who used their non-paretic arm and for the participants without stroke. Conclusions Visual distraction decreased participants' effort during a standard robot-assisted movement training task. This effect was greater for the hemiparetic arm, suggesting that the increased demands associated with controlling an affected arm make the motor system more prone to slack when distracted. Providing an alternate sensory channel for feedback, i.e., auditory feedback of tracking error, enabled the participants to simultaneously perform the tracking task and distracter task effectively. Thus, incorporating real-time auditory feedback of performance errors might improve clinical outcomes of robotic therapy systems. PMID:21513561
2014-01-01
Background The DerSimonian and Laird approach (DL) is widely used for random effects meta-analysis, but this often results in inappropriate type I error rates. The method described by Hartung, Knapp, Sidik and Jonkman (HKSJ) is known to perform better when trials of similar size are combined. However, evidence in realistic situations, where one trial might be much larger than the other trials, is lacking. We aimed to evaluate the relative performance of the DL and HKSJ methods when studies of different sizes are combined and to develop a simple method to convert DL results to HKSJ results. Methods We evaluated the performance of the HKSJ versus DL approach in simulated meta-analyses of 2–20 trials with varying sample sizes and between-study heterogeneity, and allowing trials to have various sizes, e.g. 25% of the trials being 10-times larger than the smaller trials. We also compared the number of "positive" (statistically significant at p < 0.05) findings using empirical data of recent meta-analyses with at least 3 studies of interventions from the Cochrane Database of Systematic Reviews. Results The simulations showed that the HKSJ method consistently resulted in more adequate error rates than the DL method. When the significance level was 5%, the HKSJ error rates at most doubled, whereas for DL they could be over 30%. DL, and, far less so, HKSJ had more inflated error rates when the combined studies had unequal sizes and between-study heterogeneity. The empirical data from 689 meta-analyses showed that 25.1% of the significant findings for the DL method were non-significant with the HKSJ method. DL results can be easily converted into HKSJ results. Conclusions Our simulations showed that the HKSJ method consistently results in more adequate error rates than the DL method, especially when the number of studies is small, and can easily be applied routinely in meta-analyses. Even with the HKSJ method, extra caution is needed when there are 5 or fewer studies of very unequal sizes. PMID:24548571
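A minimal sketch, using the usual textbook formulas rather than code from the paper, of how a DerSimonian-Laird (DL) pooled estimate can be re-expressed with the Hartung-Knapp-Sidik-Jonkman (HKSJ) standard error and a t-based interval. The effect sizes and variances are invented illustration data; scipy is assumed to be available for the t quantile.

```python
import numpy as np
from scipy import stats

y = np.array([0.30, 0.10, 0.55, -0.05, 0.20])   # study effect estimates (toy)
v = np.array([0.04, 0.03, 0.10, 0.02, 0.25])    # within-study variances (toy)
k = len(y)

# DL estimate of the between-study variance tau^2
w_fixed = 1.0 / v
q = np.sum(w_fixed * (y - np.sum(w_fixed * y) / w_fixed.sum()) ** 2)
c = w_fixed.sum() - np.sum(w_fixed ** 2) / w_fixed.sum()
tau2 = max(0.0, (q - (k - 1)) / c)

# Random-effects weights and the DL pooled estimate
w = 1.0 / (v + tau2)
mu = np.sum(w * y) / w.sum()
se_dl = np.sqrt(1.0 / w.sum())

# HKSJ standard error: weighted residual variance, with a t(k-1) quantile
se_hksj = np.sqrt(np.sum(w * (y - mu) ** 2) / ((k - 1) * w.sum()))
t_crit = stats.t.ppf(0.975, df=k - 1)

print(f"pooled effect {mu:.3f}")
print(f"DL   95% CI: {mu - 1.96 * se_dl:.3f} .. {mu + 1.96 * se_dl:.3f}")
print(f"HKSJ 95% CI: {mu - t_crit * se_hksj:.3f} .. {mu + t_crit * se_hksj:.3f}")
```

Because only the standard error and the critical value change, existing DL output can be "converted" to an HKSJ-style interval without redoing the whole analysis, which is the practical point the authors emphasize.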
Method for validating cloud mask obtained from satellite measurements using ground-based sky camera.
Letu, Husi; Nagao, Takashi M; Nakajima, Takashi Y; Matsumae, Yoshiaki
2014-11-01
Error propagation in Earth's atmospheric, oceanic, and land surface parameters of the satellite products caused by misclassification of the cloud mask is a critical issue for improving the accuracy of satellite products. Thus, characterizing the accuracy of the cloud mask is important for investigating the influence of the cloud mask on satellite products. In this study, we proposed a method for validating multiwavelength satellite data derived cloud masks using ground-based sky camera (GSC) data. First, a cloud cover algorithm for GSC data has been developed using sky index and bright index. Then, Moderate Resolution Imaging Spectroradiometer (MODIS) satellite data derived cloud masks by two cloud-screening algorithms (i.e., MOD35 and CLAUDIA) were validated using the GSC cloud mask. The results indicate that MOD35 is likely to classify ambiguous pixels as "cloudy," whereas CLAUDIA is likely to classify them as "clear." Furthermore, the influence of error propagations caused by misclassification of the MOD35 and CLAUDIA cloud masks on MODIS derived reflectance, brightness temperature, and normalized difference vegetation index (NDVI) in clear and cloudy pixels was investigated using sky camera data. It shows that the influence of the error propagation by the MOD35 cloud mask on the MODIS derived monthly mean reflectance, brightness temperature, and NDVI for clear pixels is significantly smaller than for the CLAUDIA cloud mask; the influence of the error propagation by the CLAUDIA cloud mask on MODIS derived monthly mean cloud products for cloudy pixels is significantly smaller than that by the MOD35 cloud mask.
ERIC Educational Resources Information Center
Schretlen, David; And Others
1994-01-01
Composite reliability and standard errors of measurement were computed for prorated Verbal, Performance, and Full-Scale intelligence quotient (IQ) scores from a seven-subtest short form of the Wechsler Adult Intelligence Scale-Revised. Results with 1,880 adults (standardization sample) indicate that this form is as reliable as the complete test.…
A Brief Look at: Test Scores and the Standard Error of Measurement. E&R Report No. 10.13
ERIC Educational Resources Information Center
Holdzkom, David; Sumner, Brian; McMillen, Brad
2010-01-01
In the context of standardized testing, the standard error of measurement (SEM) is a measure of the factors other than the student's actual knowledge of the tested material that may affect the student's test score. Such factors may include distractions in the testing environment, fatigue, hunger, or even luck. This means that a student's observed…
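A minimal sketch of the classical-test-theory relation commonly used to compute the SEM from a test's reliability: SEM = SD × √(1 − reliability). The scale parameters below (mean 100, SD 15, reliability 0.91) are illustrative assumptions, not values from the report.

```python
import math

scale_sd, reliability = 15.0, 0.91
sem = scale_sd * math.sqrt(1.0 - reliability)   # standard error of measurement

observed_score = 104
low = observed_score - 1.96 * sem
high = observed_score + 1.96 * sem
print(f"SEM = {sem:.1f} score points")
print(f"observed score {observed_score}: ~95% band {low:.1f} to {high:.1f}")
```

Reporting the band rather than the single observed score is the practical takeaway: two students whose bands overlap may not differ meaningfully in the tested knowledge.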
NASA Astrophysics Data System (ADS)
Schaub, D.; Boersma, K. F.; Kaiser, J. W.; Weiss, A. K.; Folini, D.; Eskes, H. J.; Buchmann, B.
2006-08-01
Nitrogen dioxide (NO2) vertical tropospheric column densities (VTCs) retrieved from the Global Ozone Monitoring Experiment (GOME) are compared to coincident ground-based tropospheric NO2 columns. The ground-based columns are deduced from in situ measurements at different altitudes in the Alps for 1997 to June 2003, yielding a unique long-term comparison of GOME NO2 VTC data retrieved by a collaboration of KNMI (Royal Netherlands Meteorological Institute) and BIRA/IASB (Belgian Institute for Space Aeronomy) with independently derived tropospheric NO2 profiles. A first comparison relates the GOME retrieved tropospheric columns to the tropospheric columns obtained by integrating the ground-based NO2 measurements. For a second comparison, the tropospheric profiles constructed from the ground-based measurements are first multiplied with the averaging kernel (AK) of the GOME retrieval. The second approach makes the comparison independent from the a priori NO2 profile used in the GOME retrieval. This allows splitting the total difference between the column data sets into two contributions: one that is due to differences between the a priori and the ground-based NO2 profile shapes, and another that can be attributed to uncertainties in both the remaining retrieval parameters (such as, e.g., surface albedo or aerosol concentration) and the ground-based in situ NO2 profiles. For anticyclonic clear sky conditions the comparison indicates a good agreement between the columns (n=157, R=0.70/0.74 for the first/second comparison approach, respectively). The mean relative difference (with respect to the ground-based columns) is -7% with a standard deviation of 40% and GOME on average slightly underestimating the ground-based columns. Both data sets show a similar seasonal behaviour with a distinct maximum of spring NO2 VTCs. Further analysis indicates small GOME columns being systematically smaller than the ground-based ones. The influence of different shapes in the a priori and the ground-based NO2 profile is analysed by considering AK information. It is moderate and indicates similar shapes of the profiles for clear sky conditions. Only for large GOME columns, differences between the profile shapes explain the larger part of the relative difference. In contrast, the other error sources give rise to the larger relative differences found towards smaller columns. Further, for the clear sky cases, errors from different sources are found to compensate each other partially. The comparison for cloudy cases indicates a poorer agreement between the columns (n=60, R=0.61). The mean relative difference between the columns is 60% with a standard deviation of 118% and GOME on average overestimating the ground-based columns. The clear improvement after inclusion of AK information (n=60, R=0.87) suggests larger errors in the a priori NO2 profiles under cloudy conditions and demonstrates the importance of using accurate profile information for (partially) clouded scenes.
Toward a new culture in verified quantum operations
NASA Astrophysics Data System (ADS)
Flammia, Steve
Measuring error rates of quantum operations has become an indispensable component in any aspiring platform for quantum computation. As the quality of controlled quantum operations increases, the demands on the accuracy and precision with which we measure these error rates also grow. However, well-meaning scientists who report these error measures are faced with a sea of non-standardized methodologies and are often asked during publication for only coarse information about how their estimates were obtained. Moreover, there are serious incentives to use methodologies and measures that will continually produce numbers that improve with time to show progress. These problems will only be exacerbated as our typical error rates go from 1 in 100 to 1 in 1000 or less. This talk will survey existing challenges presented by the current paradigm and offer some suggestions for solutions that can help us move toward fair and standardized methods for error metrology in quantum computing experiments, and toward a culture that values full disclosure of methodologies and higher standards for data analysis.
Sample-size needs for forestry herbicide trials
S.M. Zedaker; T.G. Gregoire; James H. Miller
1994-01-01
Forest herbicide experiments are increasingly being designed to evaluate smaller treatment differences when comparing existing effective treatments, tank mix ratios, surfactants, and new low-rate products. The ability to detect small differences in efficacy is dependent upon the relationship among sample size, Type I and II error probabilities, and the coefficients of...
Radiation-Hardened Solid-State Drive
NASA Technical Reports Server (NTRS)
Sheldon, Douglas J.
2010-01-01
A method is provided for a radiation-hardened (rad-hard) solid-state drive for space mission memory applications by combining rad-hard and commercial off-the-shelf (COTS) non-volatile memories (NVMs) into a hybrid architecture. The architecture is controlled by a rad-hard ASIC (application specific integrated circuit) or an FPGA (field programmable gate array). Specific error handling and data management protocols are developed for use in a rad-hard environment. The rad-hard memories are smaller in overall memory density, but are used to control and manage radiation-induced errors in the main, and much larger density, non-rad-hard COTS memory devices. Small amounts of rad-hard memory are used as error buffers and temporary caches for radiation-induced errors in the large COTS memories. The rad-hard ASIC/FPGA implements a variety of error-handling protocols to manage these radiation-induced errors. The large COTS memory is triplicated for protection, and CRC-based counters are calculated for sub-areas in each COTS NVM array. These counters are stored in the rad-hard non-volatile memory. Through monitoring, rewriting, regeneration, triplication, and long-term storage, radiation-induced errors in the large NV memory are managed. The rad-hard ASIC/FPGA also interfaces with the external computer buses.
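The error-management idea described above, triplication of the COTS array plus CRC counters held in rad-hard memory, can be sketched generically as follows; the voting and scrubbing logic here is a simplified stand-in for whatever protocol the ASIC/FPGA actually implements, and the data block is invented.

import zlib

def vote_byte(a: int, b: int, c: int) -> int:
    # 2-of-3 majority vote for one byte; fall back to 'a' on full disagreement.
    if a == b or a == c:
        return a
    if b == c:
        return b
    return a

def scrub_block(copies: list, stored_crc: int) -> bytes:
    # Vote the three COTS copies byte by byte, check the result against the CRC
    # kept in the small rad-hard buffer, then rewrite any copy that disagreed.
    voted = bytes(vote_byte(a, b, c) for a, b, c in zip(*copies))
    if zlib.crc32(voted) != stored_crc:
        raise IOError("voted data fails CRC check - escalate to error handler")
    for i, copy in enumerate(copies):
        if copy != voted:
            copies[i] = voted   # regenerate the upset copy
    return voted

# Hypothetical block with a single-event upset in the second copy.
good = b"telemetry frame 0042"
crc = zlib.crc32(good)
copies = [good, b"telemetry frame 0X42", good]
print(scrub_block(copies, crc))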
Checa, Purificación; Castellanos, M C; Abundis-Gutiérrez, Alicia; Rosario Rueda, M
2014-01-01
Regulation of thoughts and behavior requires attention, particularly when there is conflict between alternative responses or when errors are to be prevented or corrected. Conflict monitoring and error processing are functions of the executive attention network, a neurocognitive system that greatly matures during childhood. In this study, we examined the development of brain mechanisms underlying conflict and error processing with event-related potentials (ERPs), and explored the relationship between brain function and individual differences in the ability to self-regulate behavior. Three groups of children aged 4-6, 7-9, and 10-13 years, and a group of adults performed a child-friendly version of the flanker task while ERPs were registered. Marked developmental changes were observed in both conflict processing and brain reactions to errors. After controlling for age, higher self-regulation skills were associated with smaller amplitude of the conflict effect but greater amplitude of the error-related negativity. Additionally, we found that electrophysiological measures of conflict and error monitoring predict individual differences in impulsivity and the capacity to delay gratification. These findings inform our understanding of the brain mechanisms underlying the development of cognitive control and self-regulation.
Liu, Hesheng; Gao, Xiaorong; Schimpf, Paul H; Yang, Fusheng; Gao, Shangkai
2004-10-01
Estimation of intracranial electric activity from the scalp electroencephalogram (EEG) requires a solution to the EEG inverse problem, which is known as an ill-conditioned problem. In order to yield a unique solution, weighted minimum norm least-squares (MNLS) inverse methods are generally used. This paper proposes a recursive algorithm, termed Shrinking LORETA-FOCUSS, which combines and expands upon the central features of two well-known weighted MNLS methods: LORETA and FOCUSS. This recursive algorithm makes iterative adjustments to the solution space as well as the weighting matrix, thereby dramatically reducing the computation load and increasing local source resolution. Simulations are conducted on a 3-shell spherical head model registered to the Talairach human brain atlas. A comparative study of four different inverse methods, standard Weighted Minimum Norm, L1-norm, LORETA-FOCUSS and Shrinking LORETA-FOCUSS, is presented. The results demonstrate that Shrinking LORETA-FOCUSS is able to reconstruct a three-dimensional source distribution with smaller localization and energy errors compared to the other methods.
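For context, the weighted minimum-norm least-squares core that LORETA, FOCUSS, and the proposed recursive method all build on can be written in a few lines of NumPy. This is only the shared regularized inverse, not the shrinking/reweighting iteration itself, and the leadfield and data below are random placeholders.

import numpy as np

def weighted_mnls(leadfield: np.ndarray, weight: np.ndarray, eeg: np.ndarray,
                  lam: float = 1e-2) -> np.ndarray:
    # Weighted minimum-norm solution  J = W L^T (L W L^T + lam*I)^(-1) v,
    # where L is the leadfield, W a source weighting matrix, and v the scalp data.
    lwlt = leadfield @ weight @ leadfield.T
    n = lwlt.shape[0]
    return weight @ leadfield.T @ np.linalg.solve(lwlt + lam * np.eye(n), eeg)

# Toy problem: 8 electrodes, 50 candidate sources, uniform initial weighting.
rng = np.random.default_rng(0)
L = rng.standard_normal((8, 50))
v = rng.standard_normal(8)
sources = weighted_mnls(L, np.eye(50), v)
print(sources.shape)   # (50,)

In LORETA-type methods W encodes a spatial smoothness prior, whereas FOCUSS updates W iteratively from the previous solution; as the abstract describes, the Shrinking variant additionally prunes the solution space between iterations.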
Wu, Yanwei; Guo, Pan; Chen, Siying; Chen, He; Zhang, Yinchao
2017-04-01
Auto-adaptive background subtraction (AABS) is proposed as a denoising method for data processing of the coherent Doppler lidar (CDL). The method is proposed specifically for the low-signal-to-noise-ratio regime, in which the power spectral density of CDL data drifts. Unlike the periodogram maximum (PM) and adaptive iteratively reweighted penalized least squares (airPLS) methods, the proposed method presents reliable peaks and is thus advantageous in identifying peak locations. According to analyses of simulated and measured data, the proposed method outperforms the airPLS method and the PM algorithm in the furthest detectable range. The proposed method improves the detection range by approximately 16.7% and 40% compared to the airPLS method and the PM method, respectively. It also yields smaller mean wind velocity and standard error values than the airPLS and PM methods. The AABS approach improves the quality of Doppler shift estimates and can be applied to obtain full wind profiles with the CDL.
Analysis of the low-flow characteristics of streams in Louisiana
Lee, Fred N.
1985-01-01
The U.S. Geological Survey, in cooperation with the Louisiana Department of Transportation and Development, Office of Public Works, used geologic maps, soils maps, precipitation data, and low-flow data to define four hydrographic regions in Louisiana having distinct low-flow characteristics. Equations were derived, using regression analyses, to estimate the 7Q2, 7Q10, and 7Q20 flow rates for basically unaltered stream basins smaller than 525 square miles. Independent variables in the equations include drainage area (square miles), mean annual precipitation index (inches), and main channel slope (feet per mile). Average standard errors of regression ranged from +44 to +61 percent. Graphs are given for estimating the 7Q2, 7Q10, and 7Q20 for stream basins for which the drainage area of the most downstream data-collection site is larger than 525 square miles. Detailed examples are given in this report for the use of the equations and graphs.
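Regional regression equations of the kind described above typically take a log-linear form, with low-flow statistics regressed on basin characteristics. The sketch below fits such a model to made-up basin data; the variable names follow the abstract, but the numbers and the exact model form are assumptions, not the published equations.

import numpy as np

# Hypothetical gaged basins: drainage area (mi^2), precipitation index (in),
# main channel slope (ft/mi), and observed 7Q10 low flow (ft^3/s).
area   = np.array([12.0, 55.0, 130.0, 300.0, 480.0])
precip = np.array([52.0, 57.0, 60.0, 63.0, 66.0])
slope  = np.array([9.0, 6.5, 4.0, 2.5, 1.8])
q7_10  = np.array([0.8, 4.1, 11.0, 30.0, 55.0])

# Fit log Q = log a + b*log A + c*log P + d*log S by ordinary least squares.
X = np.column_stack([np.ones_like(area), np.log(area), np.log(precip), np.log(slope)])
y = np.log(q7_10)
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ coef
se_log = np.sqrt(resid @ resid / (len(y) - X.shape[1]))
print("exponents on A, P, S:", np.round(coef[1:], 2))
print(f"standard error of regression ~ {100 * (np.exp(se_log) - 1):.0f}%")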
Classifying multispectral data by neural networks
NASA Technical Reports Server (NTRS)
Telfer, Brian A.; Szu, Harold H.; Kiang, Richard K.
1993-01-01
Several energy functions for synthesizing neural networks are tested on 2-D synthetic data and on Landsat-4 Thematic Mapper data. These new energy functions, designed specifically for minimizing misclassification error, in some cases yield significant improvements in classification accuracy over the standard least mean squares energy function. In addition to operating on networks with one output unit per class, a new energy function is tested for binary encoded outputs, which result in smaller network sizes. The Thematic Mapper data (four bands were used) is classified on a single pixel basis to provide a starting benchmark against which further improvements will be measured. Improvements are underway to make use of both subpixel and superpixel (i.e. contextual or neighborhood) information in the processing. For single pixel classification, the best neural network result is 78.7 percent, compared with 71.7 percent for a classical nearest neighbor classifier. The 78.7 percent result also improves on several earlier neural network results on this data.
Neutron Electric Dipole Moment and Tensor Charges from Lattice QCD.
Bhattacharya, Tanmoy; Cirigliano, Vincenzo; Gupta, Rajan; Lin, Huey-Wen; Yoon, Boram
2015-11-20
We present lattice QCD results on the neutron tensor charges including, for the first time, a simultaneous extrapolation in the lattice spacing, volume, and light quark masses to the physical point in the continuum limit. We find that the "disconnected" contribution is smaller than the statistical error in the "connected" contribution. Our estimates in the modified minimal subtraction scheme at 2 GeV, including all systematics, are g_{T}^{d-u}=1.020(76), g_{T}^{d}=0.774(66), g_{T}^{u}=-0.233(28), and g_{T}^{s}=0.008(9). The flavor diagonal charges determine the size of the neutron electric dipole moment (EDM) induced by quark EDMs that are generated in many new scenarios of CP violation beyond the standard model. We use our results to derive model-independent bounds on the EDMs of light quarks and update the EDM phenomenology in split supersymmetry with gaugino mass unification, finding a stringent upper bound of d_{n}<4×10^{-28} e cm for the neutron EDM in this scenario.
Analysis of high-resolution spectra from a hybrid interferometric/dispersive spectrometer
Ko, Phyllis; Scott, Jill R.; Jovanovic, Igor
2015-09-05
To fully take advantage of a low-cost, small footprint hybrid interferometric/dispersive spectrometer, a mathematical reconstruction technique was developed to accurately capture the high-resolution and relative peak intensities from complex patterns. A Fabry-Perot etalon was coupled to a Czerny-Turner spectrometer, increasing spectral resolution by an order of magnitude without the commensurate increase in spectrometer size. Measurement of the industry standard Hg 313.1555/313.1844 nm doublet yielded a ratio of 0.682 with 1.8% error, which agreed well with an independent measurement and literature values. The doublet separation (29 pm) is similar to the U isotope shift (25 pm) at 424.437 nm that is of interest for monitoring nuclear nonproliferation activities. Additionally, the technique was applied to a LIBS measurement of the mineral cinnabar (HgS) and resulted in a ratio of 0.681. In addition, this reconstruction method could enable significantly smaller, portable high-resolution instruments with isotopic specificity, benefiting a variety of spectroscopic applications.
Forkey, Joseph N.; Quinlan, Margot E.; Goldman, Yale E.
2005-01-01
A new approach is presented for measuring the three-dimensional orientation of individual macromolecules using single molecule fluorescence polarization (SMFP) microscopy. The technique uses the unique polarizations of evanescent waves generated by total internal reflection to excite the dipole moment of individual fluorophores. To evaluate the new SMFP technique, single molecule orientation measurements from sparsely labeled F-actin are compared to ensemble-averaged orientation data from similarly prepared densely labeled F-actin. Standard deviations of the SMFP measurements taken at 40 ms time intervals indicate that the uncertainty for individual measurements of axial and azimuthal angles is ∼10° at 40 ms time resolution. Comparison with ensemble data shows there are no substantial systematic errors associated with the single molecule measurements. In addition to evaluating the technique, the data also provide a new measurement of the torsional rigidity of F-actin. These measurements support the smaller of two values of the torsional rigidity of F-actin previously reported. PMID:15894632
Kletting, P; Schimmel, S; Kestler, H A; Hänscheid, H; Luster, M; Fernández, M; Bröer, J H; Nosske, D; Lassmann, M; Glatting, G
2013-10-01
Calculation of the time-integrated activity coefficient (residence time) is a crucial step in dosimetry for molecular radiotherapy. However, available software is deficient in that it is not tailored for use in molecular radiotherapy and/or does not include all required estimation methods. The aim of this work was therefore the development and programming of an algorithm which allows for an objective and reproducible determination of the time-integrated activity coefficient and its standard error. The algorithm includes the selection of a set of fitting functions from predefined sums of exponentials and the choice of an error model for the used data. To estimate the values of the adjustable parameters an objective function, depending on the data, the parameters of the error model, the fitting function and (if required and available) Bayesian information, is minimized. To increase reproducibility and user-friendliness the starting values are automatically determined using a combination of curve stripping and random search. Visual inspection, the coefficient of determination, the standard error of the fitted parameters, and the correlation matrix are provided to evaluate the quality of the fit. The functions which are most supported by the data are determined using the corrected Akaike information criterion. The time-integrated activity coefficient is estimated by analytically integrating the fitted functions. Its standard error is determined assuming Gaussian error propagation. The software was implemented using MATLAB. To validate the proper implementation of the objective function and the fit functions, the results of NUKFIT and SAAM numerical, a commercially available software tool, were compared. The automatic search for starting values was successfully tested for reproducibility. The quality criteria applied in conjunction with the Akaike information criterion allowed the selection of suitable functions. Function fit parameters and their standard error estimated by using SAAM numerical and NUKFIT showed differences of <1%. The differences for the time-integrated activity coefficients were also <1% (standard error between 0.4% and 3%). In general, the application of the software is user-friendly and the results are mathematically correct and reproducible. An application of NUKFIT is presented for three different clinical examples. The software tool with its underlying methodology can be employed to objectively and reproducibly estimate the time-integrated activity coefficient and its standard error for most time-activity data in molecular radiotherapy.
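The core computation the abstract describes, fitting a sum of exponentials to time-activity data, integrating it analytically, and propagating the parameter covariance into a standard error, can be sketched as below. The biexponential choice, starting values, and data points are illustrative assumptions, and the real tool additionally automates error-model and function selection via the corrected Akaike criterion.

import numpy as np
from scipy.optimize import curve_fit

def biexp(t, a1, l1, a2, l2):
    # One of the predefined sums of exponentials.
    return a1 * np.exp(-l1 * t) + a2 * np.exp(-l2 * t)

# Hypothetical time-activity data: time (h), fraction of administered activity.
t = np.array([1.0, 4.0, 24.0, 48.0, 96.0, 144.0])
a = np.array([0.60, 0.48, 0.25, 0.14, 0.05, 0.02])

popt, pcov = curve_fit(biexp, t, a, p0=[0.4, 0.2, 0.3, 0.02], maxfev=10000)
a1, l1, a2, l2 = popt

# Analytic integral over [0, inf) gives the time-integrated activity coefficient;
# Gaussian error propagation of the parameter covariance gives its standard error.
tiac = a1 / l1 + a2 / l2
grad = np.array([1.0 / l1, -a1 / l1**2, 1.0 / l2, -a2 / l2**2])
tiac_se = float(np.sqrt(grad @ pcov @ grad))
print(f"TIAC = {tiac:.1f} h +/- {tiac_se:.1f} h")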
Lewis, Matthew S; Maruff, Paul; Silbert, Brendan S; Evered, Lis A; Scott, David A
2007-02-01
The reliable change index (RCI) expresses change relative to its associated error, and is useful in the identification of postoperative cognitive dysfunction (POCD). This paper examines four common RCIs that each account for error in different ways. Three rules incorporate a constant correction for practice effects and are contrasted with the standard RCI that has no correction for practice. These rules are applied to 160 patients undergoing coronary artery bypass graft (CABG) surgery who completed neuropsychological assessments preoperatively and 1 week postoperatively, using error and reliability data from a comparable healthy nonsurgical control group. The rules all identify POCD in a similar proportion of patients, but the use of the within-subject standard deviation (WSD), expressing the effects of random error, as an error estimate is a theoretically appropriate denominator when a constant error correction, removing the effects of systematic error, is deducted from the numerator in an RCI.
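A bare-bones version of the rule the abstract favors, a constant practice-effect correction in the numerator and the within-subject standard deviation as the error term, might look like this; the scores and control-group values are invented, and some formulations scale the denominator by sqrt(2) to reflect the two testing occasions.

def reliable_change_index(pre: float, post: float,
                          practice_effect: float, error_sd: float) -> float:
    # Change score minus a constant practice-effect correction, divided by an
    # error estimate (here the within-subject SD from a healthy control group).
    return (post - pre - practice_effect) / error_sd

# Hypothetical patient: 52 points pre-op, 44 points one week post-op; controls
# gain 1.5 points on retest with a within-subject SD of 3.0 points.
rci = reliable_change_index(pre=52, post=44, practice_effect=1.5, error_sd=3.0)
print(round(rci, 2))   # -3.17, beyond a conventional -1.96 cutoff for reliable decline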
DOE Office of Scientific and Technical Information (OSTI.GOV)
Maxim, Peter G.; Loo, Billy W.; Murphy, James D.
2011-11-15
Purpose: To evaluate the positioning accuracy of an optical positioning system for stereotactic radiosurgery in a pilot experience of optically guided, conventionally fractionated, radiotherapy for paranasal sinus and skull base tumors. Methods and Materials: Before each daily radiotherapy session, the positioning of 28 patients was set up using an optical positioning system. After this initial setup, the patients underwent standard on-board imaging that included daily orthogonal kilovoltage images and weekly cone beam computed tomography scans. Daily translational shifts were made after comparing the on-board images with the treatment planning computed tomography scans. These daily translational shifts represented the daily positional error in the optical tracking system and were recorded during the treatment course. For 13 patients treated with smaller fields, a three-degree-of-freedom (3DOF) head positioner was used for more accurate setup. Results: The mean positional error for the optically guided system in patients with and without the 3DOF head positioner was 1.4 ± 1.1 mm and 3.9 ± 1.6 mm, respectively (p <.0001). The mean positional error drifted 0.11 mm/wk upward during the treatment course for patients using the 3DOF head positioner (p = .057). No positional drift was observed in the patients without the 3DOF head positioner. Conclusion: Our initial clinical experience with optically guided head-and-neck fractionated radiotherapy was promising and demonstrated clinical feasibility. The optically guided setup was especially useful when used in conjunction with the 3DOF head positioner and when it was recalibrated to the shifts using the weekly portal images.
NASA Technical Reports Server (NTRS)
Eluszkiewicz, Janusz; Nehrkorn, Thomas; Wofsy, Steven C.; Matross, Daniel; Gerbig, Christoph; Lin, John C.; Freitas, Saulo; Longo, Marcos; Andrews, Arlyn E.; Peters, Wouter
2007-01-01
This paper evaluates simulations of atmospheric CO2 measured in 2004 at continental surface and airborne receptors, intended to test the capability to use data with high temporal and spatial resolution for analyses of carbon sources and sinks at regional and continental scales. The simulations were performed using the Stochastic Time-Inverted Lagrangian Transport (STILT) model driven by the Weather Research and Forecasting (WRF) model, and linked to surface fluxes from the satellite-driven Vegetation Photosynthesis and Respiration Model (VPRM). The simulations provide detailed representations of hourly CO2 tower data and reproduce the shapes of airborne vertical profiles with high fidelity. WRF meteorology gives superior model performance compared with standard meteorological products, and the impact of including WRF convective mass fluxes in the STILT trajectory calculations is significant in individual cases. Important biases in the simulation are associated with the nighttime CO2 build-up and subsequent morning transition to convective conditions, and with errors in the advected lateral boundary condition. Comparison of STILT simulations driven by the WRF model against those driven by the Brazilian variant of the Regional Atmospheric Modeling System (BRAMS) shows that model-to-model differences are smaller than between an individual transport model and observations, pointing to systematic errors in the simulated transport. Future developments in the WRF model's data assimilation capabilities, basic research into the fundamental aspects of trajectory calculations, and intercomparison studies involving other transport models are possible avenues for reducing these errors. Overall, the STILT/WRF/VPRM framework offers a powerful tool for continental and regional scale carbon flux estimates.
Fendler, Wojciech; Hogendorf, Anna; Szadkowska, Agnieszka; Młynarski, Wojciech
2011-01-01
Self-monitoring of blood glucose (SMBG) is one of the cornerstones of diabetes management. The aims were to evaluate the potential for miscoding of a personal glucometer, to define a target population for a non-coding glucometer among pediatric patients with diabetes, and to assess the accuracy of the Contour TS non-coding system. Potential for miscoding during self-monitoring of blood glucose was evaluated by means of an anonymous questionnaire, with worst and best case scenarios evaluated depending on the response pattern. Testing of the Contour TS system was performed according to guidelines set by the national committee for clinical laboratory standards. Estimated frequency of individuals prone to non-coding ranged from 68.21% (95%CI 60.70-75.72%) to 7.95% (95%CI 3.86-12.31%) for the worst and best case scenarios, respectively. Factors associated with increased likelihood of non-coding were: a smaller number of tests per day, a greater number of individuals involved in testing, and self-testing by the patient with diabetes. The Contour TS device showed intra- and inter-assay accuracy -95%, linear association with laboratory measurements (R2=0.99, p <0.0001), and a consistent but small bias of -1.12% (95% Confidence Interval -3.27 to 1.02%). Clarke error grid analysis showed 4% of values within the benign error zone (B), with the other measurements yielding an acceptably accurate result (zone A). The Contour TS system showed sufficient accuracy to be safely used in monitoring of pediatric diabetic patients. Patients from families with a high throughput of test-strips or multiple individuals involved in SMBG using the same meter are candidates for clinical use of such devices due to an increased risk of calibration errors.
McClure, Foster D; Lee, Jung K
2005-01-01
Sample size formulas are developed to estimate the repeatability and reproducibility standard deviations (s(r) and s(R)) such that the actual error in s(r) and s(R) relative to their respective true values, sigma(r) and sigma(R), is at predefined levels. The statistical consequences associated with the AOAC INTERNATIONAL required sample size for validating an analytical method are discussed. In addition, formulas to estimate the uncertainties of s(r) and s(R) were derived and are provided as supporting documentation. Formula for the Number of Replicates Required for a Specified Margin of Relative Error in the Estimate of the Repeatability Standard Deviation.
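Although the paper's exact formulas are not reproduced here, the familiar large-sample result SE(s)/sigma ~ 1/sqrt(2(n-1)) already shows how quickly the required replicate count grows as the tolerated relative error shrinks; the sketch below uses that approximation with an assumed 95% confidence level.

import math

def replicates_for_relative_sd_error(rel_error: float, z: float = 1.96) -> int:
    # Approximate n so the estimated SD lies within +/- rel_error of the true SD
    # with the stated confidence, using SE(s)/sigma ~ 1/sqrt(2(n-1)).
    return math.ceil(1 + 0.5 * (z / rel_error) ** 2)

print(replicates_for_relative_sd_error(0.20))   # ~50 replicates for +/-20%
print(replicates_for_relative_sd_error(0.10))   # ~194 replicates for +/-10%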
Bootstrap Estimates of Standard Errors in Generalizability Theory
ERIC Educational Resources Information Center
Tong, Ye; Brennan, Robert L.
2007-01-01
Estimating standard errors of estimated variance components has long been a challenging task in generalizability theory. Researchers have speculated about the potential applicability of the bootstrap for obtaining such estimates, but they have identified problems (especially bias) in using the bootstrap. Using Brennan's bias-correcting procedures…
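To make the idea concrete, here is a naive person-bootstrap for a one-facet persons-by-items G study, without the Brennan-style bias corrections the abstract alludes to; the ANOVA estimators are standard, but the data are simulated and the resampling scheme shown (persons only) is just one of several possibilities.

import numpy as np

def variance_components(scores: np.ndarray):
    # ANOVA estimators for a persons x items design: sigma2_p, sigma2_i, sigma2_res.
    n_p, n_i = scores.shape
    grand = scores.mean()
    p_means = scores.mean(axis=1)
    i_means = scores.mean(axis=0)
    ms_p = n_i * ((p_means - grand) ** 2).sum() / (n_p - 1)
    ms_i = n_p * ((i_means - grand) ** 2).sum() / (n_i - 1)
    resid = scores - p_means[:, None] - i_means[None, :] + grand
    ms_res = (resid ** 2).sum() / ((n_p - 1) * (n_i - 1))
    return (ms_p - ms_res) / n_i, (ms_i - ms_res) / n_p, ms_res

rng = np.random.default_rng(1)
data = rng.normal(size=(50, 8)) + rng.normal(size=(50, 1))   # simulated p x i scores

boot = np.array([variance_components(data[rng.integers(0, 50, size=50)])
                 for _ in range(500)])
print("bootstrap SEs (p, i, residual):", boot.std(axis=0, ddof=1).round(3))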
Eppenhof, Koen A J; Pluim, Josien P W
2018-04-01
Error estimation in nonlinear medical image registration is a nontrivial problem that is important for validation of registration methods. We propose a supervised method for estimation of registration errors in nonlinear registration of three-dimensional (3-D) images. The method is based on a 3-D convolutional neural network that learns to estimate registration errors from a pair of image patches. By applying the network to patches centered around every voxel, we construct registration error maps. The network is trained using a set of representative images that have been synthetically transformed to construct a set of image pairs with known deformations. The method is evaluated on deformable registrations of inhale-exhale pairs of thoracic CT scans. Using ground truth target registration errors on manually annotated landmarks, we evaluate the method's ability to estimate local registration errors. Estimation of full domain error maps is evaluated using a gold standard approach. The two evaluation approaches show that we can train the network to robustly estimate registration errors in a predetermined range, with subvoxel accuracy. We achieved a root-mean-square deviation of 0.51 mm from gold standard registration errors and of 0.66 mm from ground truth landmark registration errors.
Hejl, H.R.
1989-01-01
The precipitation-runoff modeling system was applied to the 8.21 sq-mi drainage area of the Ah-shi-sle-pah Wash watershed in northwestern New Mexico. The calibration periods were May to September of 1981 and 1982, and the verification period was May to September 1983. Twelve storms were available for calibration and 8 storms were available for verification. For calibration A (hydraulic conductivity estimated from onsite data and other storm-mode parameters optimized), the computed standard error of estimate was 50% for runoff volumes and 72% for peak discharges. Calibration B included hydraulic conductivity in the optimization, which reduced the standard error of estimate to 28% for runoff volumes and 50% for peak discharges. Optimized values for hydraulic conductivity resulted in reductions from 1.00 to 0.26 in/h and 0.20 to 0.03 in/h for the two general soil groups in the calibrations. Simulated runoff volumes using 7 of 8 storms occurring during the verification period had a standard error of estimate of 40% for verification A and 38% for verification B. Simulated peak discharge had a standard error of estimate of 120% for verification A and 56% for verification B. Including the eighth storm, which had a relatively small magnitude, in the verification analysis more than doubled the standard error of estimating volumes and peaks. (USGS)
Hess, G.W.; Bohman, L.R.
1996-01-01
Techniques for estimating monthly mean streamflow at gaged sites and monthly streamflow duration characteristics at ungaged sites in central Nevada were developed using streamflow records at six gaged sites and basin physical and climatic characteristics. Streamflow data at gaged sites were related by regression techniques to concurrent flows at nearby gaging stations so that monthly mean streamflows for periods of missing or no record can be estimated for gaged sites in central Nevada. The standard error of estimate for relations at these sites ranged from 12 to 196 percent. Also, monthly streamflow data for selected percent exceedence levels were used in regression analyses with basin and climatic variables to determine relations for ungaged basins for annual and monthly percent exceedence levels. Analyses indicate that the drainage area and percent of drainage area at altitudes greater than 10,000 feet are the most significant variables. For the annual percent exceedence, the standard error of estimate of the relations for ungaged sites ranged from 51 to 96 percent and standard error of prediction for ungaged sites ranged from 96 to 249 percent. For the monthly percent exceedence values, the standard error of estimate of the relations ranged from 31 to 168 percent, and the standard error of prediction ranged from 115 to 3,124 percent. Reliability and limitations of the estimating methods are described.
NASA Astrophysics Data System (ADS)
Suparman, Yusep; Folmer, Henk; Oud, Johan H. L.
2014-01-01
Omitted variables and measurement errors in explanatory variables frequently occur in hedonic price models. Ignoring these problems leads to biased estimators. In this paper, we develop a constrained autoregression-structural equation model (ASEM) to handle both types of problems. Standard panel data models to handle omitted variables bias are based on the assumption that the omitted variables are time-invariant. ASEM allows handling of both time-varying and time-invariant omitted variables by constrained autoregression. In the case of measurement error, standard approaches require additional external information which is usually difficult to obtain. ASEM exploits the fact that panel data are repeatedly measured which allows decomposing the variance of a variable into the true variance and the variance due to measurement error. We apply ASEM to estimate a hedonic housing model for urban Indonesia. To get insight into the consequences of measurement error and omitted variables, we compare the ASEM estimates with the outcomes of (1) a standard SEM, which does not account for omitted variables, (2) a constrained autoregression model, which does not account for measurement error, and (3) a fixed effects hedonic model, which ignores measurement error and time-varying omitted variables. The differences between the ASEM estimates and the outcomes of the three alternative approaches are substantial.
New dimension analyses with error analysis for quaking aspen and black spruce
NASA Technical Reports Server (NTRS)
Woods, K. D.; Botkin, D. B.; Feiveson, A. H.
1987-01-01
Dimension analyses for black spruce in wetland stands and for trembling aspen are reported, including new approaches in error analysis. Biomass estimates for sacrificed trees have standard errors of 1 to 3%; standard errors for leaf areas are 10 to 20%. Bole biomass estimation accounts for most of the error for biomass, while estimation of branch characteristics and area/weight ratios accounts for the leaf area error. Error analysis provides insight for cost effective design of future analyses. Predictive equations for biomass and leaf area, with empirically derived estimators of prediction error, are given. Systematic prediction errors for small aspen trees and for leaf area of spruce from different site-types suggest a need for different predictive models within species. Predictive equations are compared with published equations; significant differences may be due to species responses to regional or site differences. Proportional contributions of component biomass in aspen change in ways related to tree size and stand development. Spruce maintains comparatively constant proportions with size, but shows changes corresponding to site. This suggests greater morphological plasticity of aspen and significance for spruce of nutrient conditions.
Horowitz-Kraus, Tzipi
2016-05-01
The error-detection mechanism aids in preventing error repetition during a given task. Electroencephalography demonstrates that error detection involves two event-related potential components: error-related and correct-response negativities (ERN and CRN, respectively). Dyslexia is characterized by slow, inaccurate reading. In particular, individuals with dyslexia have a less active error-detection mechanism during reading than typical readers. In the current study, we examined whether a reading training programme could improve the ability to recognize words automatically (lexical representations) in adults with dyslexia, thereby resulting in more efficient error detection during reading. Behavioural and electrophysiological measures were obtained using a lexical decision task before and after participants trained with the reading acceleration programme. ERN amplitudes were smaller in individuals with dyslexia than in typical readers before training but increased following training, as did behavioural reading scores. Differences between the pre-training and post-training ERN and CRN components were larger in individuals with dyslexia than in typical readers. Also, the error-detection mechanism as represented by the ERN/CRN complex might serve as a biomarker for dyslexia and be used to evaluate the effectiveness of reading intervention programmes. Copyright © 2016 John Wiley & Sons, Ltd.
Helical tomotherapy setup variations in canine nasal tumor patients immobilized with a bite block.
Kubicek, Lyndsay N; Seo, Songwon; Chappell, Richard J; Jeraj, Robert; Forrest, Lisa J
2012-01-01
The purpose of our study was to compare setup variation in four degrees of freedom (vertical, longitudinal, lateral, and roll) between canine nasal tumor patients immobilized with a mattress and bite block, versus a mattress alone. Our secondary aim was to define a clinical target volume (CTV) to planning target volume (PTV) expansion margin based on our mean systematic error values associated with nasal tumor patients immobilized by a mattress and bite block. We evaluated six parameters for setup corrections: systematic error, random error, patient-patient variation in systematic errors, the magnitude of patient-specific random errors (root mean square [RMS]), distance error, and the variation of setup corrections from zero shift. The variations in all parameters were statistically smaller in the group immobilized by a mattress and bite block. The mean setup corrections in the mattress and bite block group ranged from 0.91 mm to 1.59 mm for the translational errors, and the mean roll correction was 0.5°. Although most veterinary radiation facilities do not have access to image-guided radiotherapy (IGRT), we identified a need for more rigid fixation, established the value of adding IGRT to veterinary radiation therapy, and defined the CTV-PTV setup error margin for canine nasal tumor patients immobilized in a mattress and bite block. © 2012 Veterinary Radiology & Ultrasound.
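For readers who want to turn such systematic and random setup errors into an expansion margin, a widely used population recipe (van Herk's 2.5*Sigma + 0.7*sigma) is sketched below; the abstract does not state which margin rule the authors applied, and the per-axis values here are hypothetical.

def ctv_to_ptv_margin_mm(systematic_sd_mm: float, random_sd_mm: float) -> float:
    # Population margin recipe: 2.5 * SD of systematic errors across patients
    # plus 0.7 * SD of day-to-day random errors.
    return 2.5 * systematic_sd_mm + 0.7 * random_sd_mm

# Hypothetical per-axis values for a mattress plus bite-block setup.
print(f"{ctv_to_ptv_margin_mm(1.0, 1.2):.1f} mm")   # 3.3 mm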
Does ease mediate the ease-of-retrieval effect? A meta-analysis.
Weingarten, Evan; Hutchinson, J Wesley
2018-03-01
A wealth of literature suggests individuals use feelings in addition to facts as sources of information for judgment. This paper focuses on a manipulation in which participants list either a few or many examples of a given type, and then make a judgment. Instead of using the number of arguments or evidence strength, participants are hypothesized to use the subjective ease of generating examples as the primary input to judgment. This result is commonly called the ease-of-retrieval effect, and the feeling of ease is typically assumed to mediate the effect. We use meta-analytic methods across 142 papers, 263 studies, and 582 effect sizes to assess the robustness of the ease-of-retrieval effect, and whether or not the effect is mediated by subjective ease. On average, the standard few-versus-many manipulation exhibits a medium-sized effect. In experimental conditions designed to replicate the standard effect, about a third to half of the total effect is mediated by subjective ease. This supports the standard explanation, but suggests that other mediators are present. Further, we find evidence of publication bias that reduces the standard effect by up to one-third. We also find that (a) moderator manipulations that differ from the standard manipulation lead to smaller, often reversed effects that are not as strongly mediated by ease, (b) several manipulations of theory-based moderators (e.g., polarized attitudes, misattribution) yield strong theory-consistent effects, (c) method-based moderators have little or no effects on the results, and (d) the mediation results are robust with respect to assumptions about error structure. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
Guo-Qiang, Zhang; Yan, Huang; Licong, Cui
2017-01-01
We introduce RGT, Retrospective Ground-Truthing, as a surrogate reference standard for evaluating the performance of automated Ontology Quality Assurance (OQA) methods. The key idea of RGT is to use cumulative SNOMED CT changes derived from its regular longitudinal distributions by the official SNOMED CT editorial board as a partial, surrogate reference standard. The contributions of this paper are twofold: (1) to construct an RGT reference set for SNOMED CT relational changes; and (2) to perform a comparative evaluation of the performances of lattice, non-lattice, and randomized relational error detection methods using the standard precision, recall, and geometric measures. An RGT relational-change reference set of 32,241 IS-A changes was constructed from 5 U.S. editions of SNOMED CT from September 2014 to September 2016, with reversals and changes due to deletion or addition of new concepts excluded. 68,849 independent non-lattice fragments, 118,587 independent lattice fragments, and 446,603 relations were extracted from the SNOMED CT March 2014 distribution. Comparative performance analysis of smaller (less than 15) lattice vs. non-lattice fragments was also given to approach the more realistic setting in which such methods may be applied. Among the 32,241 IS-A changes, independent non-lattice fragments covered 52.8% of the changes with 26.4% precision with a G-score of 0.373. Even though this G-score is significantly lower in comparison to those in information retrieval, it breaks new ground in that such evaluations have never been performed before in the highly discovery-oriented setting of OQA.
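The geometric measure quoted above is simply the geometric mean of precision and recall, as the short check below reproduces from the reported values.

import math

def g_score(precision: float, recall: float) -> float:
    # Geometric mean of precision and recall.
    return math.sqrt(precision * recall)

# Values reported for independent non-lattice fragments: 26.4% precision, 52.8% recall.
print(round(g_score(0.264, 0.528), 3))   # 0.373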
The computation of equating errors in international surveys in education.
Monseur, Christian; Berezner, Alla
2007-01-01
Since the IEA's Third International Mathematics and Science Study, one of the major objectives of international surveys in education has been to report trends in achievement. The names of the two current IEA surveys reflect this growing interest: Trends in International Mathematics and Science Study (TIMSS) and Progress in International Reading Literacy Study (PIRLS). Similarly a central concern of the OECD's PISA is with trends in outcomes over time. To facilitate trend analyses these studies link their tests using common item equating in conjunction with item response modelling methods. IEA and PISA policies differ in terms of reporting the error associated with trends. In IEA surveys, the standard errors of the trend estimates do not include the uncertainty associated with the linking step while PISA does include a linking error component in the standard errors of trend estimates. In other words, PISA implicitly acknowledges that trend estimates partly depend on the selected common items, while the IEA's surveys do not recognise this source of error. Failing to recognise the linking error leads to an underestimation of the standard errors and thus increases the Type I error rate, thereby resulting in reporting of significant changes in achievement when in fact these are not significant. The growing interest of policy makers in trend indicators and the impact of the evaluation of educational reforms appear to be incompatible with such underestimation. However, the procedure implemented by PISA raises a few issues about the underlying assumptions for the computation of the equating error. After a brief introduction, this paper will describe the procedure PISA implemented to compute the linking error. The underlying assumptions of this procedure will then be discussed. Finally an alternative method based on replication techniques will be presented, based on a simulation study and then applied to the PISA 2000 data.
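The difference between the two reporting policies comes down to whether a linking-error term enters the standard error of a trend estimate. A minimal sketch, with hypothetical sampling errors and a linking error of a plausible order of magnitude:

import math

def trend_standard_error(se_cycle1: float, se_cycle2: float,
                         linking_error: float = 0.0) -> float:
    # SE of the difference between two assessment cycles; PISA adds a linking-error
    # component for the common-item equating step, while IEA-style reporting omits it.
    return math.sqrt(se_cycle1 ** 2 + se_cycle2 ** 2 + linking_error ** 2)

print(round(trend_standard_error(2.5, 2.7), 2))        # 3.68 without linking error
print(round(trend_standard_error(2.5, 2.7, 3.0), 2))   # 4.75 with linking error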
Stabilizing Conditional Standard Errors of Measurement in Scale Score Transformations
ERIC Educational Resources Information Center
Moses, Tim; Kim, YoungKoung
2017-01-01
The focus of this article is on scale score transformations that can be used to stabilize conditional standard errors of measurement (CSEMs). Three transformations for stabilizing the estimated CSEMs are reviewed, including the traditional arcsine transformation, a recently developed general variance stabilization transformation, and a new method…
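Of the three transformations mentioned, the traditional arcsine transformation is the easiest to illustrate: under a binomial error model the CSEM on the transformed scale is roughly constant at 1/(2*sqrt(n)) regardless of the raw score. The test length below is an arbitrary example, not one from the article.

import math

def arcsine_transform(raw_score: int, n_items: int) -> float:
    # Traditional arcsine variance-stabilizing transformation of a raw score.
    return math.asin(math.sqrt(raw_score / n_items))

n = 50
print("approx. constant CSEM on transformed scale:", round(1 / (2 * math.sqrt(n)), 3))
for x in (10, 25, 40):
    print(x, "->", round(arcsine_transform(x, n), 3))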
WASP (Write a Scientific Paper) using Excel - 6: Standard error and confidence interval.
Grech, Victor
2018-03-01
The calculation of descriptive statistics includes the calculation of standard error and confidence interval, an inevitable component of data analysis in inferential statistics. This paper provides pointers as to how to do this in Microsoft Excel™. Copyright © 2018 Elsevier B.V. All rights reserved.
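The same quantities the paper computes in Excel can be written in a few lines of Python for comparison; the data are arbitrary.

import math
import statistics

def mean_se_ci(sample, z: float = 1.96):
    # Mean, standard error of the mean, and an approximate 95% confidence interval.
    mean = statistics.mean(sample)
    se = statistics.stdev(sample) / math.sqrt(len(sample))
    return mean, se, (mean - z * se, mean + z * se)

data = [4.1, 3.8, 4.4, 4.0, 3.9, 4.3, 4.2, 4.0]
m, se, ci = mean_se_ci(data)
print(f"mean = {m:.2f}, SE = {se:.3f}, 95% CI = ({ci[0]:.2f}, {ci[1]:.2f})")

For samples this small, a t-distribution multiplier (about 2.36 for 7 degrees of freedom) would be more appropriate than 1.96.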
Determinants of Standard Errors of MLEs in Confirmatory Factor Analysis
ERIC Educational Resources Information Center
Yuan, Ke-Hai; Cheng, Ying; Zhang, Wei
2010-01-01
This paper studies changes of standard errors (SE) of the normal-distribution-based maximum likelihood estimates (MLE) for confirmatory factor models as model parameters vary. Using logical analysis, simplified formulas and numerical verification, monotonic relationships between SEs and factor loadings as well as unique variances are found.…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Teo, P; Guo, K; Alayoubi, N
Purpose: Accounting for tumor motion during radiation therapy is important to ensure that the tumor receives the prescribed dose. Increasing the field size to account for this motion exposes the surrounding healthy tissues to unnecessary radiation. In contrast to using motion-encompassing techniques to treat moving tumors, conformal radiation therapy (RT) uses a smaller field to track the tumor and adapts the beam aperture according to the motion detected. This work investigates and compares the performance of three markerless, EPID-based optical flow methods to track tumor motion with conformal RT. Methods: Three techniques were used to track the motions of a 3D printed lung tumor programmed to move according to the tumor traces of seven lung cancer patients. These techniques utilized a multi-resolution optical flow algorithm as the core computation for image registration. The first method (DIR) registers the incoming images with an initial reference frame, while the second method (RFSF) uses an adaptive reference frame and the third method (CU) uses preceding image frames for registration. The patient traces and errors were evaluated for the seven patients. Results: The average position errors for all patient traces were 0.12 ± 0.33 mm, −0.05 ± 0.04 mm and −0.28 ± 0.44 mm for the CU, DIR and RFSF methods, respectively. The position errors distributed within 1 standard deviation are 0.74 mm, 0.37 mm and 0.96 mm, respectively. The CU and RFSF algorithms are sensitive to the characteristics of the patient trace and produce a wider distribution of errors amongst patients. Although the mean error for the DIR method is negatively biased (−0.05 mm) for all patients, it has the narrowest distribution of position error, which can be corrected using an offset calibration. Conclusion: Three techniques of image registration and position update were studied. Using direct comparison with an initial frame yields the best performance. The authors would like to thank Dr. YeLin Suh for making the Cyberknife dataset available to us. Scholarship funding from the Natural Sciences and Engineering Research Council of Canada (NSERC) and CancerCare Manitoba Foundation is acknowledged.
40 CFR 1074.110 - Adoption of California standards by other states.
Code of Federal Regulations, 2010 CFR
2010-07-01
.... 7501 to 7515) may adopt and enforce emission standards for any period for nonroad engines and vehicles... adopted such standards. (2) Such standards may not apply to new engines smaller than 175 horsepower that..., to the California standards authorized by the Administrator. (4) The state must adopt such standards...
NASA Astrophysics Data System (ADS)
Nasyrov, R. K.; Poleshchuk, A. G.
2017-09-01
This paper describes the development and manufacture of a diffraction corrector and imitator for the interferometric control of the surface shape of the 6-m main mirror of the Big Azimuthal Telescope of the Russian Academy of Sciences. The effect of errors in manufacture and adjustment on the quality of the measurement wavefront is studied. The corrector is controlled with the use of an off-axis diffraction imitator operating in a reflection mode. The measured error is smaller than 0.0138λ (RMS).
Evaluation of Bayesian Sequential Proportion Estimation Using Analyst Labels
NASA Technical Reports Server (NTRS)
Lennington, R. K.; Abotteen, K. M. (Principal Investigator)
1980-01-01
The author has identified the following significant results. A total of ten Large Area Crop Inventory Experiment Phase 3 blind sites and analyst-interpreter labels were used in a study to compare proportion estimates obtained by the Bayes sequential procedure with estimates obtained from simple random sampling and from Procedure 1. The analyst error rate using the Bayes technique was shown to be no greater than that for simple random sampling. Also, the segment proportion estimates produced using this technique had smaller bias and mean squared errors than the estimates produced using either simple random sampling or Procedure 1.
Errors in Bibliographic Citations: A Continuing Problem.
ERIC Educational Resources Information Center
Sweetland, James H.
1989-01-01
Summarizes studies examining citation errors and illustrates errors resulting from a lack of standardization, misunderstanding of foreign languages, failure to examine the document cited, and general lack of training in citation norms. It is argued that the failure to detect and correct citation errors is due to diffusion of responsibility in the…
Evaluation of image quality metrics for the prediction of subjective best focus.
Kilintari, Marina; Pallikaris, Aristophanis; Tsiklis, Nikolaos; Ginis, Harilaos S
2010-03-01
Seven existing and three new image quality metrics were evaluated in terms of their effectiveness in predicting subjective cycloplegic refraction. Monochromatic wavefront aberrations (WA) were measured in 70 eyes using a Shack-Hartmann based device (Complete Ophthalmic Analysis System; Wavefront Sciences). Subjective cycloplegic spherocylindrical correction was obtained using a standard manifest refraction procedure. The dioptric amount required to optimize each metric was calculated and compared with the subjective refraction result. Metrics included monochromatic and polychromatic variants, as well as variants taking into consideration the Stiles and Crawford effect (SCE). WA measurements were performed using infrared light and converted to visible before all calculations. The mean difference between subjective cycloplegic and WA-derived spherical refraction ranged from 0.17 to 0.36 diopters (D), while paraxial curvature resulted in a difference of 0.68 D. Monochromatic metrics exhibited smaller mean differences between subjective cycloplegic and objective refraction. Consideration of the SCE reduced the standard deviation (SD) of the difference between subjective and objective refraction. All metrics exhibited similar performance in terms of accuracy and precision. We hypothesize that errors pertaining to the conversion between infrared and visible wavelengths rather than calculation method may be the limiting factor in determining objective best focus from near infrared WA measurements.
Boundary-based cellwise OPC for standard-cell layouts
NASA Astrophysics Data System (ADS)
Pawlowski, David M.; Deng, Liang; Wong, Martin D. F.
2007-03-01
Model-based optical proximity correction (OPC) has become necessary at the 90nm technology node. Cellwise OPC is an attractive technique to reduce the mask data size as well as the prohibitive runtime of full-chip OPC. As feature dimensions have gotten smaller, the radius of influence for edge features has extended further into neighboring cells such that it is no longer sufficient to perform cellwise OPC independent of neighboring cells, especially for the critical layers. The methodology described in this work accounts for features in neighboring cells and allows a cellwise approach to be applied to cells with a printed gate length of 45nm, with the projection that it can also be applied to future technology nodes. OPC-ready cells are generated at library creation (independent of placement) using a boundary-based technique. Each cell has a tractable number of OPC-ready versions due to an intelligent characterization of standard cell layout features. Results are very promising: the average edge placement error (EPE) for all metal1 features in 100 layouts is 0.731nm, which is less than 1% of the metal1 width; the maximum EPE for poly features is reduced to 1/3, compared to cellwise OPC without considering boundaries, creating similar levels of lithographic accuracy while obviating any of the drawbacks inherent in layout-specific full-chip model-based OPC.
Comparison of Asymmetric and Ice-cream Cone Models for Halo Coronal Mass Ejections
NASA Astrophysics Data System (ADS)
Na, H.; Moon, Y.
2011-12-01
Halo coronal mass ejections (HCMEs) are a major cause of geomagnetic storms. To minimize the projection effect in coronagraph observations, several cone models have been suggested, such as an ice-cream cone model and an asymmetric cone model. These models allow us to determine the three-dimensional parameters of HCMEs such as radial speed, angular width, and the angle between the sky plane and the central axis of the cone. In this study, we compare these parameters obtained from different models using 48 well-observed HCMEs from 2001 to 2002, and we obtain the root mean square error (RMS error) between measured projection speeds and calculated projection speeds for both cone models. As a result, we find that the radial speeds obtained from the models are well correlated with each other (R = 0.86), and the correlation coefficient of angular width is 0.6. The correlation coefficient of the angle between the sky plane and the central axis of the cone is 0.31, which is much smaller than expected. This may be because the source locations of the asymmetric cone model are distributed near the center, while those of the ice-cream cone model are located in a wide range. The average RMS error of the asymmetric cone model (85.6 km/s) is slightly smaller than that of the ice-cream cone model (87.8 km/s).
Modelling and Predicting Backstroke Start Performance Using Non-Linear and Linear Models.
de Jesus, Karla; Ayala, Helon V H; de Jesus, Kelly; Coelho, Leandro Dos S; Medeiros, Alexandre I A; Abraldes, José A; Vaz, Mário A P; Fernandes, Ricardo J; Vilas-Boas, João Paulo
2018-03-01
Our aim was to compare non-linear and linear mathematical model responses for backstroke start performance prediction. Ten swimmers randomly completed eight 15 m backstroke starts with feet over the wedge, four with hands on the highest horizontal and four on the vertical handgrip. Swimmers were videotaped using a dual media camera set-up, with the starts being performed over an instrumented block with four force plates. Artificial neural networks were applied to predict 5 m start time using kinematic and kinetic variables, and accuracy was assessed using the mean absolute percentage error. Artificial neural networks predicted start time more robustly than the linear model when moving from the training to the validation dataset for the vertical handgrip (3.95 ± 1.67 vs. 5.92 ± 3.27%). Artificial neural networks obtained a smaller mean absolute percentage error than the linear model in the horizontal (0.43 ± 0.19 vs. 0.98 ± 0.19%) and vertical handgrip (0.45 ± 0.19 vs. 1.38 ± 0.30%) using all input data. The best artificial neural network validation revealed a smaller mean absolute error than the linear model for the horizontal (0.007 vs. 0.04 s) and vertical handgrip (0.01 vs. 0.03 s). Artificial neural networks should be used for backstroke 5 m start time prediction due to the quite small differences among the elite level performances.
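The accuracy measures being compared, mean absolute percentage error and mean absolute error, are straightforward to compute; the start times below are invented and serve only to show the calculation.

import numpy as np

def mape(actual: np.ndarray, predicted: np.ndarray) -> float:
    # Mean absolute percentage error.
    return float(np.mean(np.abs((actual - predicted) / actual)) * 100.0)

def mae(actual: np.ndarray, predicted: np.ndarray) -> float:
    # Mean absolute error, in the units of the measurement (seconds here).
    return float(np.mean(np.abs(actual - predicted)))

actual = np.array([1.62, 1.71, 1.68, 1.75, 1.80])   # hypothetical 5 m start times (s)
pred   = np.array([1.63, 1.70, 1.69, 1.77, 1.78])
print(f"MAPE = {mape(actual, pred):.2f}%, MAE = {mae(actual, pred):.3f} s")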
ERIC Educational Resources Information Center
Deke, John; Wei, Thomas; Kautz, Tim
2017-01-01
Evaluators of education interventions are increasingly designing studies to detect impacts much smaller than the 0.20 standard deviations that Cohen (1988) characterized as "small." While the need to detect smaller impacts is based on compelling arguments that such impacts are substantively meaningful, the drive to detect smaller impacts…
Evaluating concentration estimation errors in ELISA microarray experiments
DOE Office of Scientific and Technical Information (OSTI.GOV)
Daly, Don S.; White, Amanda M.; Varnum, Susan M.
Enzyme-linked immunosorbent assay (ELISA) is a standard immunoassay to predict a protein concentration in a sample. Deploying ELISA in a microarray format permits simultaneous prediction of the concentrations of numerous proteins in a small sample. These predictions, however, are uncertain due to processing error and biological variability. Evaluating prediction error is critical to interpreting biological significance and improving the ELISA microarray process. Evaluating prediction error must be automated to realize a reliable high-throughput ELISA microarray system. Methods: In this paper, we present a statistical method based on propagation of error to evaluate prediction errors in the ELISA microarray process. Although propagation of error is central to this method, it is effective only when comparable data are available. Therefore, we briefly discuss the roles of experimental design, data screening, normalization and statistical diagnostics when evaluating ELISA microarray prediction errors. We use an ELISA microarray investigation of breast cancer biomarkers to illustrate the evaluation of prediction errors. The illustration begins with a description of the design and resulting data, followed by a brief discussion of data screening and normalization. In our illustration, we fit a standard curve to the screened and normalized data, review the modeling diagnostics, and apply propagation of error.
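As a schematic of the propagation-of-error idea at the heart of this approach, the snippet below pushes the uncertainty of a spot response and of a fitted calibration curve through to the predicted concentration with the delta method; a simple log-linear standard curve stands in for the 4PL-type curves usually fitted to ELISA data, and all numbers are invented.

import numpy as np

def predict_conc(y, b0, b1):
    # Invert a log-linear standard curve y = b0 + b1*log10(conc).
    return 10.0 ** ((y - b0) / b1)

def conc_with_propagated_error(y, y_sd, b0, b1, cov_b):
    # Delta-method propagation of response and calibration uncertainty.
    c = predict_conc(y, b0, b1)
    eps = 1e-6
    grad = np.array([
        (predict_conc(y + eps, b0, b1) - c) / eps,   # d conc / d y
        (predict_conc(y, b0 + eps, b1) - c) / eps,   # d conc / d b0
        (predict_conc(y, b0, b1 + eps) - c) / eps,   # d conc / d b1
    ])
    cov = np.zeros((3, 3))
    cov[0, 0] = y_sd ** 2
    cov[1:, 1:] = cov_b
    return c, float(np.sqrt(grad @ cov @ grad))

cov_b = np.array([[4e-4, -2e-4], [-2e-4, 3e-4]])     # hypothetical calibration covariance
conc, se = conc_with_propagated_error(y=1.8, y_sd=0.03, b0=0.2, b1=0.9, cov_b=cov_b)
print(f"predicted concentration: {conc:.1f} +/- {se:.1f} (arbitrary units)")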
WE-H-BRC-05: Catastrophic Error Metrics for Radiation Therapy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Murphy, S; Molloy, J
Purpose: Intuitive evaluation of complex radiotherapy treatments is impractical, while data transfer anomalies create the potential for catastrophic treatment delivery errors. Contrary to prevailing wisdom, logical scrutiny can be applied to patient-specific machine settings. Such tests can be automated, applied at the point of treatment delivery and can be dissociated from prior states of the treatment plan, potentially revealing errors introduced early in the process. Methods: Analytical metrics were formulated for conventional and intensity modulated RT (IMRT) treatments. These were designed to assess consistency between monitor unit settings, wedge values, prescription dose and leaf positioning (IMRT). Institutional metric averages for 218 clinical plans were stratified over multiple anatomical sites. Treatment delivery errors were simulated using a commercial treatment planning system and metric behavior assessed via receiver-operator-characteristic (ROC) analysis. A positive result was returned if the erred plan metric value exceeded a given number of standard deviations, e.g. 2. The finding was declared true positive if the dosimetric impact exceeded 25%. ROC curves were generated over a range of metric standard deviations. Results: Data for the conventional treatment metric indicated standard deviations of 3%, 12%, 11%, 8%, and 5% for brain, pelvis, abdomen, lung and breast sites, respectively. Optimum error declaration thresholds yielded true positive rates (TPR) between 0.7 and 1, and false positive rates (FPR) between 0 and 0.2. Two proposed IMRT metrics possessed standard deviations of 23% and 37%. The superior metric returned TPR and FPR of 0.7 and 0.2, respectively, when both leaf position and MUs were modelled. Isolation to only leaf position errors yielded TPR and FPR values of 0.9 and 0.1. Conclusion: Logical tests can reveal treatment delivery errors and prevent large, catastrophic errors. Analytical metrics are able to identify errors in monitor units, wedging and leaf positions with favorable sensitivity and specificity. In part by Varian.
On the relationship between aerosol content and errors in telephotometer experiments.
NASA Technical Reports Server (NTRS)
Thomas, R. W. L.
1971-01-01
This paper presents an invariant imbedding theory of multiple scattering phenomena contributing to errors in telephotometer experiments. The theory indicates that there is a simple relationship between the magnitudes of the errors introduced by successive orders of scattering and it is shown that for all optical thicknesses each order can be represented by a coefficient which depends on the field of view of the telescope and the properties of the scattering medium. The verification of the theory and the derivation of the coefficients have been accomplished by a Monte Carlo program. Both monodisperse and polydisperse systems of Mie scatterers have been treated. The results demonstrate that for a given optical thickness the coefficients increase strongly with the mean particle size particularly for the smaller fields of view.
Poster - 49: Assessment of Synchrony respiratory compensation error for CyberKnife liver treatment
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Ming; Cygler,
The goal of this work is to quantify respiratory motion compensation errors for liver tumor patients treated by the CyberKnife system with Synchrony tracking, to identify patients with the smallest tracking errors and to eventually help coach patients' breathing patterns to minimize dose delivery errors. The accuracy of CyberKnife Synchrony respiratory motion compensation was assessed for 37 patients treated for liver lesions by analyzing data from system logfiles. A predictive model is used to modulate the direction of individual beams during dose delivery based on the positions of internally implanted fiducials determined using an orthogonal x-ray imaging system and the current location of LED external markers. For each x-ray pair acquired, system logfiles report the prediction error, the difference between the measured and predicted fiducial positions, and the delivery error, which is an estimate of the statistical error in the model overcoming the latency between x-ray acquisition and robotic repositioning. The total error was calculated at the time of each x-ray pair, for the number of treatment fractions and the number of patients, giving the average respiratory motion compensation error in three dimensions. The 99th percentile for the total radial error is 3.85 mm, with the highest contribution of 2.79 mm in the superior/inferior (S/I) direction. The absolute mean compensation error is 1.78 mm radially with a 1.27 mm contribution in the S/I direction. Regions of high total error may provide insight into features predicting groups of patients with larger or smaller total errors.
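A minimal sketch of how summary statistics like these could be computed once per-axis compensation errors have been parsed from the system logfiles; the error magnitudes below are placeholders, not the study data.

```python
# Minimal sketch: total radial error and its 99th percentile from per-axis
# compensation errors (in mm). The per-axis spreads are assumed placeholders.
import numpy as np

rng = np.random.default_rng(1)
# Placeholder errors for left-right, anterior-posterior, superior-inferior axes.
err_lr, err_ap, err_si = (rng.normal(0.0, s, 5000) for s in (0.5, 0.6, 1.0))

radial = np.sqrt(err_lr**2 + err_ap**2 + err_si**2)
print("mean radial error (mm):", radial.mean())
print("99th percentile radial error (mm):", np.percentile(radial, 99))
print("mean |S/I| error (mm):", np.abs(err_si).mean())
```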
DOT National Transportation Integrated Search
1999-08-01
This study examines certain airport design standards in an effort to understand the rationale behind their development. Researchers studied the standards to identify potential standards for relaxing. The focus is on smaller, less active airports wher...
Five-equation and robust three-equation methods for solution verification of large eddy simulation
NASA Astrophysics Data System (ADS)
Dutta, Rabijit; Xing, Tao
2018-02-01
This study evaluates the recently developed general framework for solution verification methods for large eddy simulation (LES) using implicitly filtered LES of periodic channel flows at a friction Reynolds number of 395 on eight systematically refined grids. The seven-equation method shows that the coupling error based on Hypothesis I is much smaller than the numerical and modeling errors and therefore can be neglected. The authors recommend the five-equation method based on Hypothesis II, which shows a monotonic convergence behavior of the predicted numerical benchmark (S_C), and provides realistic error estimates without the need to fix the orders of accuracy for either numerical or modeling errors. Based on the results from the seven-equation and five-equation methods, less expensive three- and four-equation methods for practical LES applications were derived. It was found that the new three-equation method is robust as it can be applied to any convergence type and reasonably predicts the error trends. It was also observed that the numerical and modeling errors usually have opposite signs, which suggests error cancellation plays an essential role in LES. When the Reynolds-averaged Navier-Stokes (RANS) based error estimation method is applied, it shows significant error in the prediction of S_C on coarse meshes. However, it predicts reasonable S_C when the grids resolve at least 80% of the total turbulent kinetic energy.
Degrees of Freedom for Allan Deviation Estimates of Multiple Clocks
2016-04-01
Allan deviation. Allan deviation will be represented by σ and standard deviation will be represented by δ. In practice, when the Allan deviation of a...the Allan deviation of standard noise types. Once the number of degrees of freedom is known, an approximate confidence interval can be assigned by...measurement errors from paired difference data. We extend this approach by using the Allan deviation to estimate the error in a frequency standard
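For reference, the standard textbook (non-overlapping) Allan deviation estimator can be sketched as follows. This is a generic illustration on simulated white frequency noise, not the approach of the report; the sample size, noise level and averaging factors are arbitrary.

```python
# Textbook non-overlapping Allan deviation of fractional-frequency data.
import numpy as np

def allan_deviation(y, m):
    """Allan deviation of fractional-frequency data y at averaging factor m (tau = m*tau0)."""
    n = len(y) // m
    ybar = y[: n * m].reshape(n, m).mean(axis=1)   # averages over blocks of length m
    diffs = np.diff(ybar)
    return np.sqrt(0.5 * np.mean(diffs ** 2))

rng = np.random.default_rng(0)
y = rng.normal(0.0, 1e-12, 100_000)               # simulated white-FM frequency noise
for m in (1, 10, 100, 1000):
    print(m, allan_deviation(y, m))               # should fall off roughly as 1/sqrt(tau)
```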
Impact of Standardized Communication Techniques on Errors during Simulated Neonatal Resuscitation.
Yamada, Nicole K; Fuerch, Janene H; Halamek, Louis P
2016-03-01
Current patterns of communication in high-risk clinical situations, such as resuscitation, are imprecise and prone to error. We hypothesized that the use of standardized communication techniques would decrease the errors committed by resuscitation teams during neonatal resuscitation. In a prospective, single-blinded, matched pairs design with block randomization, 13 subjects performed as a lead resuscitator in two simulated complex neonatal resuscitations. Two nurses assisted each subject during the simulated resuscitation scenarios. In one scenario, the nurses used nonstandard communication; in the other, they used standardized communication techniques. The performance of the subjects was scored to determine errors committed (defined relative to the Neonatal Resuscitation Program algorithm), time to initiation of positive pressure ventilation (PPV), and time to initiation of chest compressions (CC). In scenarios in which subjects were exposed to standardized communication techniques, there was a trend toward decreased error rate, time to initiation of PPV, and time to initiation of CC. While not statistically significant, there was a 1.7-second improvement in time to initiation of PPV and a 7.9-second improvement in time to initiation of CC. Should these improvements in human performance be replicated in the care of real newborn infants, they could improve patient outcomes and enhance patient safety.
NASA Technical Reports Server (NTRS)
Weaver, W. L.; Green, R. N.
1980-01-01
Geometric shape factors were computed and applied to satellite simulated irradiance measurements to estimate Earth emitted flux densities for global and zonal scales and for areas smaller than the detector field of view (FOV). Wide field of view flat plate detectors were emphasized, but spherical detectors were also studied. The radiation field was modeled after data from the Nimbus 2 and 3 satellites. At a satellite altitude of 600 km, zonal estimates were in error 1.0 to 1.2 percent and global estimates were in error less than 0.2 percent. Estimates with unrestricted field of view (UFOV) detectors were about the same for Lambertian and limb darkening radiation models. The opposite was found for restricted field of view detectors. The UFOV detectors are found to be poor estimators of flux density from the total FOV and are shown to be much better as estimators of flux density from a circle centered at the FOV with an area significantly smaller than that for the total FOV.
Sample preparation techniques for the determination of trace residues and contaminants in foods.
Ridgway, Kathy; Lalljie, Sam P D; Smith, Roger M
2007-06-15
The determination of trace residues and contaminants in complex matrices, such as food, often requires extensive sample extraction and preparation prior to instrumental analysis. Sample preparation is often the bottleneck in analysis and there is a need to minimise the number of steps to reduce both time and sources of error. There is also a move towards more environmentally friendly techniques, which use less solvent and smaller sample sizes. Smaller sample size becomes important when dealing with real life problems, such as consumer complaints and alleged chemical contamination. Optimal sample preparation can reduce analysis time, sources of error, enhance sensitivity and enable unequivocal identification, confirmation and quantification. This review considers all aspects of sample preparation, covering general extraction techniques, such as Soxhlet and pressurised liquid extraction, microextraction techniques such as liquid phase microextraction (LPME) and more selective techniques, such as solid phase extraction (SPE), solid phase microextraction (SPME) and stir bar sorptive extraction (SBSE). The applicability of each technique in food analysis, particularly for the determination of trace organic contaminants in foods is discussed.
NASA Technical Reports Server (NTRS)
Guo, Liwen; Cardullo, Frank M.; Kelly, Lon C.
2007-01-01
The desire to create more complex visual scenes in modern flight simulators outpaces recent increases in processor speed. As a result, simulation transport delay remains a problem. New approaches for compensating the transport delay in a flight simulator have been developed and are presented in this report. The lead/lag filter, the McFarland compensator and the Sobiski/Cardullo state space filter are three prominent compensators. The lead/lag filter provides some phase lead, while introducing significant gain distortion in the same frequency interval. The McFarland predictor can compensate for a much longer delay and causes smaller gain error at low frequencies than the lead/lag filter, but the gain distortion beyond the design frequency interval is still significant, and it also causes large spikes in prediction. Though, theoretically, the Sobiski/Cardullo predictor, a state space filter, can compensate for the longest delay with the least gain distortion among the three, it has remained in laboratory use due to several limitations. The first novel compensator is an adaptive predictor that makes use of the Kalman filter algorithm in a unique manner. In this manner the predictor can accurately provide the desired amount of prediction, while significantly reducing the large spikes caused by the McFarland predictor. Among several simplified online adaptive predictors, this report illustrates mathematically why the stochastic approximation algorithm achieves the best compensation results. A second novel approach employed a reference aircraft dynamics model to implement a state space predictor on a flight simulator. The practical implementation formed the filter state vector from the operator's control input and the aircraft states. The relationship between the reference model and the compensator performance was investigated in great detail, and the best performing reference model was selected for implementation in the final tests. Theoretical analyses of data from offline simulations with time delay compensation show that both novel predictors effectively suppress the large spikes caused by the McFarland compensator. The phase errors of the three predictors are not significant. The adaptive predictor yields greater gain errors than the McFarland predictor for short delays (96 and 138 ms), but shows smaller errors for long delays (186 and 282 ms). The advantage of the adaptive predictor becomes more obvious for a longer time delay. Conversely, the state space predictor results in substantially smaller gain error than the other two predictors for all four delay cases.
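As a rough illustration of prediction-based delay compensation (this is a generic sketch, not the adaptive predictor developed in the report), the code below runs a constant-velocity Kalman filter on a noisy stand-in for the operator input and extrapolates the filtered state across an assumed 100 ms transport delay. The noise covariances, delay and test signal are all invented tuning assumptions.

```python
# Hedged sketch: constant-velocity Kalman filter tracking an input signal and
# extrapolating it forward by the transport delay (all parameters assumed).
import numpy as np

dt, delay = 0.01, 0.10                 # 10 ms frame, 100 ms transport delay (assumed)
F = np.array([[1.0, dt], [0.0, 1.0]])  # constant-velocity state transition
H = np.array([[1.0, 0.0]])             # only position is measured
Q = np.diag([1e-5, 1e-3])              # process noise (tuning assumption)
R = np.array([[1e-4]])                 # measurement noise (tuning assumption)

x = np.zeros(2)                        # state: [position, velocity]
P = np.eye(2)

t = np.arange(0.0, 2.0, dt)
signal = np.sin(2 * np.pi * 1.0 * t)   # 1 Hz stand-in for operator input
rng = np.random.default_rng(0)
meas = signal + rng.normal(0.0, 0.01, t.size)

predicted = []
for z in meas:
    # Predict one frame ahead.
    x = F @ x
    P = F @ P @ F.T + Q
    # Update with the new measurement.
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (np.array([z]) - H @ x)
    P = (np.eye(2) - K @ H) @ P
    # Extrapolate the filtered state across the transport delay.
    predicted.append(x[0] + x[1] * delay)

# Compare each prediction with the true signal one delay (10 frames) later.
rms = np.sqrt(np.mean((np.array(predicted)[:-10] - signal[10:]) ** 2))
print("RMS prediction error:", rms)
```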
Phu, Jack; Bui, Bang V; Kalloniatis, Michael; Khuu, Sieu K
2018-03-01
The number of subjects needed to establish the normative limits for visual field (VF) testing is not known. Using bootstrap resampling, we determined whether the ground truth mean, distribution limits, and standard deviation (SD) could be approximated using different set size (x) levels, in order to provide guidance for the number of healthy subjects required to obtain robust VF normative data. We analyzed the 500 Humphrey Field Analyzer (HFA) SITA-Standard results of 116 healthy subjects and 100 HFA full threshold results of 100 psychophysically experienced healthy subjects. These VFs were resampled (bootstrapped) to determine mean sensitivity, distribution limits (5th and 95th percentiles), and SD for different 'x' and numbers of resamples. We also used the VF results of 122 glaucoma patients to determine the performance of ground truth and bootstrapped results in identifying and quantifying VF defects. An x of 150 (for SITA-Standard) and 60 (for full threshold) produced bootstrapped descriptive statistics that were no longer different to the original distribution limits and SD. Removing outliers produced similar results. Differences between original and bootstrapped limits in detecting glaucomatous defects were minimized at x = 250. Ground truth statistics of VF sensitivities could be approximated using set sizes that are significantly smaller than the original cohort. Outlier removal facilitates the use of Gaussian statistics and does not significantly affect the distribution limits. We provide guidance for choosing the cohort size for different levels of error when performing normative comparisons with glaucoma patients.
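The bootstrap-resampling idea can be sketched briefly. The example below uses simulated sensitivities in place of the HFA data and checks how closely resampled sets of size x reproduce the full-cohort 5th/95th percentiles and SD; the cohort size, sensitivity distribution and resample counts are all placeholders.

```python
# Minimal sketch of resampled normative limits versus set size x (toy data).
import numpy as np

rng = np.random.default_rng(0)
full_cohort = rng.normal(30.0, 2.0, 500)     # stand-in for 500 VF sensitivities (dB)
truth = (np.percentile(full_cohort, 5), np.percentile(full_cohort, 95),
         full_cohort.std(ddof=1))

for x in (30, 60, 150, 250):
    p5, p95, sd = [], [], []
    for _ in range(2000):                    # number of bootstrap resamples
        s = rng.choice(full_cohort, size=x, replace=True)
        p5.append(np.percentile(s, 5))
        p95.append(np.percentile(s, 95))
        sd.append(s.std(ddof=1))
    print(f"x={x:4d}  5th%={np.mean(p5):.2f} (truth {truth[0]:.2f})  "
          f"95th%={np.mean(p95):.2f} (truth {truth[1]:.2f})  "
          f"SD={np.mean(sd):.2f} (truth {truth[2]:.2f})")
```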
A new multigroup method for cross-sections that vary rapidly in energy
NASA Astrophysics Data System (ADS)
Haut, T. S.; Ahrens, C.; Jonko, A.; Lowrie, R.; Till, A.
2017-01-01
We present a numerical method for solving the time-independent thermal radiative transfer (TRT) equation or the neutron transport (NT) equation when the opacity (cross-section) varies rapidly in frequency (energy) on the microscale ε; ε corresponds to the characteristic spacing between absorption lines or resonances, and is much smaller than the macroscopic frequency (energy) variation of interest. The approach is based on a rigorous homogenization of the TRT/NT equation in the frequency (energy) variable. Discretization of the homogenized TRT/NT equation results in a multigroup-type system, and can therefore be solved by standard methods. We demonstrate the accuracy and efficiency of the approach on three model problems. First we consider the Elsasser band model with constant temperature and a line spacing ε = 10⁻⁴. Second, we consider a neutron transport application for fast neutrons incident on iron, where the characteristic resonance spacing ε necessitates ≈16,000 energy discretization parameters if Planck-weighted cross sections are used. Third, we consider an atmospheric TRT problem for an opacity corresponding to water vapor over a frequency range 1000–2000 cm⁻¹, where we take 12 homogeneous layers between 1 and 15 km, and temperature/pressure values in each layer from the standard US atmosphere. For all three problems, we demonstrate that we can achieve between 0.1 and 1 percent relative error in the solution, and with several orders of magnitude fewer parameters than a standard multigroup formulation using Planck-weighted (source-weighted) opacities for a comparable accuracy.
Mehta, Shraddha; Bastero-Caballero, Rowena F; Sun, Yijun; Zhu, Ray; Murphy, Diane K; Hardas, Bhushan; Koch, Gary
2018-04-29
Many published scale validation studies determine inter-rater reliability using the intra-class correlation coefficient (ICC). However, the use of this statistic must consider its advantages, limitations, and applicability. This paper evaluates how interaction of subject distribution, sample size, and levels of rater disagreement affects ICC and provides an approach for obtaining relevant ICC estimates under suboptimal conditions. Simulation results suggest that for a fixed number of subjects, ICC from the convex distribution is smaller than ICC for the uniform distribution, which in turn is smaller than ICC for the concave distribution. The variance component estimates also show that the dissimilarity of ICC among distributions is attributed to the study design (ie, distribution of subjects) component of subject variability and not the scale quality component of rater error variability. The dependency of ICC on the distribution of subjects makes it difficult to compare results across reliability studies. Hence, it is proposed that reliability studies should be designed using a uniform distribution of subjects because of the standardization it provides for representing objective disagreement. In the absence of uniform distribution, a sampling method is proposed to reduce the non-uniformity. In addition, as expected, high levels of disagreement result in low ICC, and when the type of distribution is fixed, any increase in the number of subjects beyond a moderately large specification such as n = 80 does not have a major impact on ICC. Copyright © 2018 John Wiley & Sons, Ltd.
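A hedged sketch of the simulation design described above (not the authors' code): subject true scores are drawn from distributions with different spreads, a fixed rater error SD is added, and a one-way random-effects ICC is computed from the ANOVA mean squares. The subject counts, rater error SD and distribution shapes are assumptions chosen only to illustrate that ICC depends on the subject distribution as well as on rater error.

```python
# Hedged sketch: one-way random-effects ICC under different subject distributions.
import numpy as np

def icc_oneway(ratings):
    """ratings: (n_subjects, n_raters). ICC(1) from one-way ANOVA mean squares."""
    n, k = ratings.shape
    grand = ratings.mean()
    subj_means = ratings.mean(axis=1)
    msb = k * np.sum((subj_means - grand) ** 2) / (n - 1)                # between-subject MS
    msw = np.sum((ratings - subj_means[:, None]) ** 2) / (n * (k - 1))   # within-subject MS
    return (msb - msw) / (msb + (k - 1) * msw)

rng = np.random.default_rng(0)
n_subj, n_raters, rater_sd = 80, 3, 0.8        # assumed design values
scenarios = {
    "low subject spread":  rng.normal(5, 1, n_subj),
    "uniform spread":      rng.uniform(0, 10, n_subj),
    "high subject spread": np.concatenate([rng.normal(2, 1, n_subj // 2),
                                           rng.normal(8, 1, n_subj - n_subj // 2)]),
}
for name, true_scores in scenarios.items():
    ratings = true_scores[:, None] + rng.normal(0, rater_sd, (n_subj, n_raters))
    print(f"{name:20s} ICC = {icc_oneway(ratings):.3f}")
```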
Blöchliger, Nicolas; Keller, Peter M; Böttger, Erik C; Hombach, Michael
2017-09-01
The procedure for setting clinical breakpoints (CBPs) for antimicrobial susceptibility has been poorly standardized with respect to population data, pharmacokinetic parameters and clinical outcome. Tools to standardize CBP setting could result in improved antibiogram forecast probabilities. We propose a model to estimate probabilities for methodological categorization errors and defined zones of methodological uncertainty (ZMUs), i.e. ranges of zone diameters that cannot reliably be classified. The impact of ZMUs on methodological error rates was used for CBP optimization. The model distinguishes theoretical true inhibition zone diameters from observed diameters, which suffer from methodological variation. True diameter distributions are described with a normal mixture model. The model was fitted to observed inhibition zone diameters of clinical Escherichia coli strains. Repeated measurements for a quality control strain were used to quantify methodological variation. For 9 of 13 antibiotics analysed, our model predicted error rates of < 0.1% applying current EUCAST CBPs. Error rates were > 0.1% for ampicillin, cefoxitin, cefuroxime and amoxicillin/clavulanic acid. Increasing the susceptible CBP (cefoxitin) and introducing ZMUs (ampicillin, cefuroxime, amoxicillin/clavulanic acid) decreased error rates to < 0.1%. ZMUs contained low numbers of isolates for ampicillin and cefuroxime (3% and 6%), whereas the ZMU for amoxicillin/clavulanic acid contained 41% of all isolates and was considered not practical. We demonstrate that CBPs can be improved and standardized by minimizing methodological categorization error rates. ZMUs may be introduced if an intermediate zone is not appropriate for pharmacokinetic/pharmacodynamic or drug dosing reasons. Optimized CBPs will provide a standardized antibiotic susceptibility testing interpretation at a defined level of probability. © The Author 2017. Published by Oxford University Press on behalf of the British Society for Antimicrobial Chemotherapy. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
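The general idea can be sketched under assumed values (the mixture components, methodological SD, breakpoint and ZMU width below are placeholders, not the fitted E. coli model): draw true zone diameters from a two-component normal mixture, add methodological variation, and estimate how often the observed diameter falls on the wrong side of the breakpoint, with and without a zone of methodological uncertainty.

```python
# Hedged sketch: Monte Carlo estimate of methodological categorization error rates
# around a clinical breakpoint (all numbers are illustrative assumptions).
import numpy as np

rng = np.random.default_rng(0)
# Assumed mixture: resistant sub-population near 10 mm, susceptible near 24 mm.
weights, means, sds = (0.3, 0.7), (10.0, 24.0), (2.0, 2.5)
method_sd = 1.2                                  # methodological SD from QC replicates (assumed)
cbp = 17.0                                       # susceptible breakpoint in mm (assumed)

n = 200_000
comp = rng.choice(2, size=n, p=weights)
true_d = rng.normal(np.array(means)[comp], np.array(sds)[comp])
observed = true_d + rng.normal(0.0, method_sd, n)

miscat = (true_d >= cbp) != (observed >= cbp)    # categorization flips due to method error
print(f"estimated categorization error rate: {miscat.mean():.4%}")

# A zone of methodological uncertainty (ZMU) removes diameters that cannot be
# classified reliably; here an illustrative +/-2 mm band around the breakpoint.
zmu = (observed > cbp - 2) & (observed < cbp + 2)
print(f"error rate outside the ZMU: {miscat[~zmu].mean():.4%}, "
      f"isolates inside the ZMU: {zmu.mean():.1%}")
```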
Standard Errors of Equating Differences: Prior Developments, Extensions, and Simulations
ERIC Educational Resources Information Center
Moses, Tim; Zhang, Wenmin
2011-01-01
The purpose of this article was to extend the use of standard errors for equated score differences (SEEDs) to traditional equating functions. The SEEDs are described in terms of their original proposal for kernel equating functions and extended so that SEEDs for traditional linear and traditional equipercentile equating functions can be computed.…
Progress in the improved lattice calculation of direct CP-violation in the Standard Model
NASA Astrophysics Data System (ADS)
Kelly, Christopher
2018-03-01
We discuss the ongoing effort by the RBC & UKQCD collaborations to improve our lattice calculation of the measure of Standard Model direct CP violation, ɛ', with physical kinematics. We present our progress in decreasing the (dominant) statistical error and discuss other related activities aimed at reducing the systematic errors.
The Development of MST Test Information for the Prediction of Test Performances
ERIC Educational Resources Information Center
Park, Ryoungsun; Kim, Jiseon; Chung, Hyewon; Dodd, Barbara G.
2017-01-01
The current study proposes novel methods to predict multistage testing (MST) performance without conducting simulations. This method, called MST test information, is based on analytic derivation of standard errors of ability estimates across theta levels. We compared standard errors derived analytically to the simulation results to demonstrate the…
ERIC Educational Resources Information Center
National Center for Education Statistics, 2010
2010-01-01
This paper presents the supplemental figures, tables, and standard error tables for the report "Student Financing of Undergraduate Education: 2007-08. Web Tables. NCES 2010-162." (Contains 6 figures and 10 tables.) [For the main report, see ED511828.]
Expression-invariant representations of faces.
Bronstein, Alexander M; Bronstein, Michael M; Kimmel, Ron
2007-01-01
Addressed here is the problem of constructing and analyzing expression-invariant representations of human faces. We demonstrate and justify experimentally a simple geometric model that allows facial expressions to be described as isometric deformations of the facial surface. The main step in the construction of an expression-invariant representation of a face involves embedding the facial intrinsic geometric structure into some low-dimensional space. We study the influence of the embedding space geometry and dimensionality choice on the representation accuracy and argue that compared to its Euclidean counterpart, spherical embedding leads to notably smaller metric distortions. We experimentally support our claim by showing that a smaller embedding error leads to better recognition.
Error model for the SAO 1969 standard earth.
NASA Technical Reports Server (NTRS)
Martin, C. F.; Roy, N. A.
1972-01-01
A method is developed for estimating an error model for geopotential coefficients using satellite tracking data. A single station's apparent timing error for each pass is attributed to geopotential errors. The root sum of the residuals for each station also depends on the geopotential errors, and these are used to select an error model. The model chosen is 1/4 of the difference between the SAO M1 and the APL 3.5 geopotential.
Dodd, Lori E; Korn, Edward L; Freidlin, Boris; Gu, Wenjuan; Abrams, Jeffrey S; Bushnell, William D; Canetta, Renzo; Doroshow, James H; Gray, Robert J; Sridhara, Rajeshwari
2013-10-01
Measurement error in time-to-event end points complicates interpretation of treatment effects in clinical trials. Non-differential measurement error is unlikely to produce large bias [1]. When error depends on treatment arm, bias is of greater concern. Blinded-independent central review (BICR) of all images from a trial is commonly undertaken to mitigate differential measurement-error bias that may be present in hazard ratios (HRs) based on local evaluations. Similar BICR and local evaluation HRs may provide reassurance about the treatment effect, but BICR adds considerable time and expense to trials. We describe a BICR audit strategy [2] and apply it to five randomized controlled trials to evaluate its use and to provide practical guidelines. The strategy requires BICR on a subset of study subjects, rather than a complete-case BICR, and makes use of an auxiliary-variable estimator. When the effect size is relatively large, the method provides a substantial reduction in the size of the BICRs. In a trial with 722 participants and a HR of 0.48, an average audit of 28% of the data was needed and always confirmed the treatment effect as assessed by local evaluations. More moderate effect sizes and/or smaller trial sizes required larger proportions of audited images, ranging from 57% to 100% for HRs ranging from 0.55 to 0.77 and sample sizes between 209 and 737. The method is developed for a simple random sample of study subjects. In studies with low event rates, more efficient estimation may result from sampling individuals with events at a higher rate. The proposed strategy can greatly decrease the costs and time associated with BICR, by reducing the number of images undergoing review. The savings will depend on the underlying treatment effect and trial size, with larger treatment effects and larger trials requiring smaller proportions of audited data.
Huang, Xinchuan; Schwenke, David W; Lee, Timothy J
2011-01-28
In this work, we build upon our previous work on the theoretical spectroscopy of ammonia, NH₃. Compared to our 2008 study, we include more physics in our rovibrational calculations and more experimental data in the refinement procedure, and these enable us to produce a potential energy surface (PES) of unprecedented accuracy. We call this the HSL-2 PES. The additional physics we include is a second-order correction for the breakdown of the Born-Oppenheimer approximation, and we find it to be critical for improved results. By including experimental data for higher rotational levels in the refinement procedure, we were able to greatly reduce our systematic errors for the rotational dependence of our predictions. These additions together lead to a significantly improved total angular momentum (J) dependence in our computed rovibrational energies. The root-mean-square error between our predictions using the HSL-2 PES and the reliable energy levels from the HITRAN database for J = 0-6 and J = 7/8 for ¹⁴NH₃ is only 0.015 cm⁻¹ and 0.020/0.023 cm⁻¹, respectively. The root-mean-square errors for the characteristic inversion splittings are approximately 1/3 smaller than those for energy levels. The root-mean-square error for the 6002 J = 0-8 transition energies is 0.020 cm⁻¹. Overall, for J = 0-8, the spectroscopic data computed with HSL-2 is roughly an order of magnitude more accurate relative to our previous best ammonia PES (denoted HSL-1). These impressive numbers are eclipsed only by the root-mean-square error between our predictions for purely rotational transition energies of ¹⁵NH₃ and the highly accurate Cologne database (CDMS): 0.00034 cm⁻¹ (10 MHz), in other words, 2 orders of magnitude smaller. In addition, we identify a deficiency in the ¹⁵NH₃ energy levels determined from a model of the experimental data.
Ahearn, Elizabeth A.
2010-01-01
Multiple linear regression equations for determining flow-duration statistics were developed to estimate select flow exceedances ranging from 25- to 99-percent for six 'bioperiods'-Salmonid Spawning (November), Overwinter (December-February), Habitat Forming (March-April), Clupeid Spawning (May), Resident Spawning (June), and Rearing and Growth (July-October)-in Connecticut. Regression equations also were developed to estimate the 25- and 99-percent flow exceedances without reference to a bioperiod. In total, 32 equations were developed. The predictive equations were based on regression analyses relating flow statistics from streamgages to GIS-determined basin and climatic characteristics for the drainage areas of those streamgages. Thirty-nine streamgages (and an additional 6 short-term streamgages and 28 partial-record sites for the non-bioperiod 99-percent exceedance) in Connecticut and adjacent areas of neighboring States were used in the regression analysis. Weighted least squares regression analysis was used to determine the predictive equations; weights were assigned based on record length. The basin characteristics-drainage area, percentage of area with coarse-grained stratified deposits, percentage of area with wetlands, mean monthly precipitation (November), mean seasonal precipitation (December, January, and February), and mean basin elevation-are used as explanatory variables in the equations. Standard errors of estimate of the 32 equations ranged from 10.7 to 156 percent with medians of 19.2 and 55.4 percent to predict the 25- and 99-percent exceedances, respectively. Regression equations to estimate high and median flows (25- to 75-percent exceedances) are better predictors (smaller variability of the residual values around the regression line) than the equations to estimate low flows (greater than 75-percent exceedance). The Habitat Forming (March-April) bioperiod had the smallest standard errors of estimate, ranging from 10.7 to 20.9 percent. In contrast, the Rearing and Growth (July-October) bioperiod had the largest standard errors, ranging from 30.9 to 156 percent. The adjusted coefficient of determination of the equations ranged from 77.5 to 99.4 percent with medians of 98.5 and 90.6 percent to predict the 25- and 99-percent exceedances, respectively. Descriptive information on the streamgages used in the regression, measured basin and climatic characteristics, and estimated flow-duration statistics are provided in this report. Flow-duration statistics and the 32 regression equations for estimating flow-duration statistics in Connecticut are stored on the U.S. Geological Survey World Wide Web application "StreamStats" (http://water.usgs.gov/osw/streamstats/index.html). The regression equations developed in this report can be used to produce unbiased estimates of select flow exceedances statewide.
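A hedged sketch of the weighted-least-squares step (not the report's fitted equations): the basin characteristics and flows below are synthetic placeholders, weights are proportional to record length, and the model is fitted on log-transformed flows using statsmodels.

```python
# Illustrative WLS fit of a flow statistic on basin characteristics (toy data).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 39                                           # number of streamgages (from the report)
drainage_area = rng.lognormal(3.0, 1.0, n)       # placeholder drainage areas
pct_stratified = rng.uniform(0, 40, n)           # placeholder percent stratified deposits
record_years = rng.integers(10, 80, n)           # record length used for the weights

# Placeholder "true" relationship for the 25-percent exceedance flow.
q25 = np.exp(0.5 + 0.95 * np.log(drainage_area) + 0.01 * pct_stratified
             + rng.normal(0, 0.2, n))

X = sm.add_constant(np.column_stack([np.log(drainage_area), pct_stratified]))
fit = sm.WLS(np.log(q25), X, weights=record_years).fit()
print(fit.params)                                # intercept and coefficients
# Rough conversion of the log-space residual variance to a percent standard error.
print("approx. standard error of estimate (%):",
      100 * np.sqrt(np.exp(fit.mse_resid) - 1))
```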
Mathes, Tim; Klaßen, Pauline; Pieper, Dawid
2017-11-28
Our objective was to assess the frequency of data extraction errors and its potential impact on results in systematic reviews. Furthermore, we evaluated the effect of different extraction methods, reviewer characteristics and reviewer training on error rates and results. We performed a systematic review of methodological literature in PubMed, Cochrane methodological registry, and by manual searches (12/2016). Studies were selected by two reviewers independently. Data were extracted in standardized tables by one reviewer and verified by a second. The analysis included six studies; four studies on extraction error frequency, one study comparing different reviewer extraction methods and two studies comparing different reviewer characteristics. We did not find a study on reviewer training. There was a high rate of extraction errors (up to 50%). Errors often had an influence on effect estimates. Different data extraction methods and reviewer characteristics had moderate effect on extraction error rates and effect estimates. The evidence base for established standards of data extraction seems weak despite the high prevalence of extraction errors. More comparative studies are needed to get deeper insights into the influence of different extraction methods.
Defining the Relationship Between Human Error Classes and Technology Intervention Strategies
NASA Technical Reports Server (NTRS)
Wiegmann, Douglas A.; Rantanen, Eas M.
2003-01-01
The modus operandi in addressing human error in aviation systems is predominantly that of technological interventions or fixes. Such interventions exhibit considerable variability both in terms of sophistication and application. Some technological interventions address human error directly while others do so only indirectly. Some attempt to eliminate the occurrence of errors altogether whereas others look to reduce the negative consequences of these errors. In any case, technological interventions add to the complexity of the systems and may interact with other system components in unforeseeable ways and often create opportunities for novel human errors. Consequently, there is a need to develop standards for evaluating the potential safety benefit of each of these intervention products so that resources can be effectively invested to produce the biggest benefit to flight safety as well as to mitigate any adverse ramifications. The purpose of this project was to help define the relationship between human error and technological interventions, with the ultimate goal of developing a set of standards for evaluating or measuring the potential benefits of new human error fixes.
Comparative study of anatomical normalization errors in SPM and 3D-SSP using digital brain phantom.
Onishi, Hideo; Matsutake, Yuki; Kawashima, Hiroki; Matsutomo, Norikazu; Amijima, Hizuru
2011-01-01
In single photon emission computed tomography (SPECT) cerebral blood flow studies, two major algorithms are widely used: statistical parametric mapping (SPM) and three-dimensional stereotactic surface projections (3D-SSP). The aim of this study is to compare an SPM algorithm-based easy Z score imaging system (eZIS) and a 3D-SSP system in the errors of anatomical standardization using 3D-digital brain phantom images. We developed a 3D-brain digital phantom based on MR images to simulate the effects of head tilt, perfusion defective region size, and count value reduction rate on the SPECT images. This digital phantom was used to compare the errors of anatomical standardization by the eZIS and the 3D-SSP algorithms. While the eZIS allowed accurate standardization of the images of the phantom simulating a head in rotation, lateroflexion, anteflexion, or retroflexion without angle dependency, the standardization by 3D-SSP was not accurate enough at approximately 25° or more head tilt. When the simulated head contained perfusion defective regions, one of the 3D-SSP images showed an error of 6.9% from the true value. Meanwhile, one of the eZIS images showed an error as large as 63.4%, revealing a significant underestimation. When required to evaluate regions with decreased perfusion due to such causes as hemodynamic cerebral ischemia, the 3D-SSP is desirable. In a statistical image analysis, the image should always be reconfirmed after anatomical standardization.
Cost effectiveness of the stream-gaging program in South Carolina
Barker, A.C.; Wright, B.C.; Bennett, C.S.
1985-01-01
The cost effectiveness of the stream-gaging program in South Carolina was documented for the 1983 water yr. Data uses and funding sources were identified for the 76 continuous stream gages currently being operated in South Carolina. The budget of $422,200 for collecting and analyzing streamflow data also includes the cost of operating stage-only and crest-stage stations. The streamflow records for one stream gage can be determined by alternate, less costly methods, and that gage should be discontinued. The remaining 75 stations should be maintained in the program for the foreseeable future. The current policy for the operation of the 75 stations including the crest-stage and stage-only stations would require a budget of $417,200/yr. The average standard error of estimation of streamflow records is 16.9% for the present budget with missing record included. However, the standard error of estimation would decrease to 8.5% if complete streamflow records could be obtained. It was shown that the average standard error of estimation of 16.9% could be obtained at the 75 sites with a budget of approximately $395,000 if the gaging resources were redistributed among the gages. A minimum budget of $383,500 is required to operate the program; a budget less than this does not permit proper service and maintenance of the gages and recorders. At the minimum budget, the average standard error is 18.6%. The maximum budget analyzed was $850,000, which resulted in an average standard error of 7.6%. (Author's abstract)
Medical students' experiences with medical errors: an analysis of medical student essays.
Martinez, William; Lo, Bernard
2008-07-01
This study aimed to examine medical students' experiences with medical errors. In 2001 and 2002, 172 fourth-year medical students wrote an anonymous description of a significant medical error they had witnessed or committed during their clinical clerkships. The assignment represented part of a required medical ethics course. We analysed 147 of these essays using thematic content analysis. Many medical students made or observed significant errors. In either situation, some students experienced distress that seemingly went unaddressed. Furthermore, this distress was sometimes severe and persisted after the initial event. Some students also experienced considerable uncertainty as to whether an error had occurred and how to prevent future errors. Many errors may not have been disclosed to patients, and some students who desired to discuss or disclose errors were apparently discouraged from doing so by senior doctors. Some students criticised senior doctors who attempted to hide errors or avoid responsibility. By contrast, students who witnessed senior doctors take responsibility for errors and candidly disclose errors to patients appeared to recognise the importance of honesty and integrity and said they aspired to these standards. There are many missed opportunities to teach students how to respond to and learn from errors. Some faculty members and housestaff may at times respond to errors in ways that appear to contradict professional standards. Medical educators should increase exposure to exemplary responses to errors and help students to learn from and cope with errors.
Siebert, Johan N; Ehrler, Frederic; Lovis, Christian; Combescure, Christophe; Haddad, Kevin; Gervaix, Alain; Manzano, Sergio
2017-08-22
During pediatric cardiopulmonary resuscitation (CPR), vasoactive drug preparation for continuous infusions is complex and time-consuming. The need for individual specific weight-based drug dose calculation and preparation places children at higher risk than adults for medication errors. Following an evidence-based and ergonomic driven approach, we developed a mobile device app called Pediatric Accurate Medication in Emergency Situations (PedAMINES), intended to guide caregivers step-by-step from preparation to delivery of drugs requiring continuous infusion. In a prior single center randomized controlled trial, medication errors were reduced from 70% to 0% by using PedAMINES when compared with conventional preparation methods. The purpose of this study is to determine whether the use of PedAMINES in both university and smaller hospitals reduces medication dosage errors (primary outcome), time to drug preparation (TDP), and time to drug delivery (TDD) (secondary outcomes) during pediatric CPR when compared with conventional preparation methods. This is a multicenter, prospective, randomized controlled crossover trial with 2 parallel groups comparing PedAMINES with a conventional and internationally used drug infusion rate table in the preparation of continuous drug infusion. The evaluation setting uses a simulation-based pediatric CPR cardiac arrest scenario with a high-fidelity manikin. The study involving 120 certified nurses (sample size) will take place in the resuscitation rooms of 3 tertiary pediatric emergency departments and 3 smaller hospitals. After epinephrine-induced return of spontaneous circulation, nurses will be asked to prepare a continuous infusion of dopamine using either PedAMINES (intervention group) or the infusion table (control group) and then prepare a continuous infusion of norepinephrine by crossing the procedure. The primary outcome is the medication dosage error rate. The secondary outcome is the time in seconds elapsed since the oral prescription by the physician to drug delivery by the nurse in each allocation group. TDD includes TDP. Stress level during the resuscitation scenario will be assessed for each participant by questionnaire and recorded by the heart rate monitor of a fitness watch. The study is formatted according to the Consolidated Standards of Reporting Trials Statement for Randomized Controlled Trials of Electronic and Mobile Health Applications and Online TeleHealth (CONSORT-EHEALTH) and the Reporting Guidelines for Health Care Simulation Research. Enrollment and data analysis started in March 2017. We anticipate the intervention will be completed in late 2017, and study results will be submitted in early 2018 for publication expected in mid-2018. Results will be reported in line with recommendations from CONSORT-EHEALTH and the Reporting Guidelines for Health Care Simulation Research . This paper describes the protocol used for a clinical trial assessing the impact of a mobile device app to reduce the rate of medication errors, time to drug preparation, and time to drug delivery during pediatric resuscitation. As research in this area is scarce, results generated from this study will be of great importance and might be sufficient to change and improve the pediatric emergency care practice. ClinicalTrials.gov NCT03021122; https://clinicaltrials.gov/ct2/show/NCT03021122 (Archived by WebCite at http://www.webcitation.org/6nfVJ5b4R). 
©Johan N Siebert, Frederic Ehrler, Christian Lovis, Christophe Combescure, Kevin Haddad, Alain Gervaix, Sergio Manzano. Originally published in JMIR Research Protocols (http://www.researchprotocols.org), 22.08.2017.
Target volume and artifact evaluation of a new data-driven 4D CT.
Martin, Rachael; Pan, Tinsu
Four-dimensional computed tomography (4D CT) is often used to define the internal gross target volume (IGTV) for radiation therapy of lung cancer. Traditionally, this technique requires the use of an external motion surrogate; however, a new, data-driven 4D CT has become available. This study aims to describe this data-driven 4D CT and compare target contours created with it to those created using standard 4D CT. Cine CT data of 35 patients undergoing stereotactic body radiation therapy were collected and sorted into phases using standard and data-driven 4D CT. IGTV contours were drawn using a semiautomated method on maximum intensity projection images of both 4D CT methods. Errors resulting from reproducibility of the method were characterized. A comparison of phase image artifacts was made using a normalized cross-correlation method that assigned a score from +1 (data-driven "better") to -1 (standard "better"). The volume difference between the data-driven and standard IGTVs was not significant (data driven was 2.1 ± 1.0% smaller, P = .08). The Dice similarity coefficient showed good similarity between the contours (0.949 ± 0.006). The mean surface separation was 0.4 ± 0.1 mm and the Hausdorff distance was 3.1 ± 0.4 mm. An average artifact score of +0.37 indicated that the data-driven method had significantly fewer and/or less severe artifacts than the standard method (P = 1.5 × 10⁻⁵ for difference from 0). On average, the difference between IGTVs derived from data-driven and standard 4D CT was not clinically relevant or statistically significant, suggesting data-driven 4D CT can be used in place of standard 4D CT without adjustments to IGTVs. The relatively large differences in some patients were usually attributed to limitations in automatic contouring or differences in artifacts. Artifact reduction and setup simplicity suggest a clinical advantage to data-driven 4D CT. Published by Elsevier Inc.
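For reference, the Dice similarity coefficient used to compare the two IGTV contours can be computed directly from binary masks; the overlapping spheres below are toy stand-ins, not the patient contours.

```python
# Small self-contained Dice similarity coefficient example on toy 3D masks.
import numpy as np

def dice(a, b):
    a, b = a.astype(bool), b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

# Two slightly shifted spherical masks as stand-ins for data-driven vs standard IGTVs.
z, y, x = np.mgrid[:60, :60, :60]
mask_std = (x - 30) ** 2 + (y - 30) ** 2 + (z - 30) ** 2 <= 15 ** 2
mask_dd  = (x - 31) ** 2 + (y - 30) ** 2 + (z - 30) ** 2 <= 15 ** 2
print("Dice similarity coefficient:", round(dice(mask_std, mask_dd), 3))
```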
Multiscale measurement error models for aggregated small area health data.
Aregay, Mehreteab; Lawson, Andrew B; Faes, Christel; Kirby, Russell S; Carroll, Rachel; Watjou, Kevin
2016-08-01
Spatial data are often aggregated from a finer (smaller) to a coarser (larger) geographical level. The process of data aggregation induces a scaling effect which smoothes the variation in the data. To address the scaling problem, multiscale models that link the convolution models at different scale levels via the shared random effect have been proposed. One of the main goals in aggregated health data is to investigate the relationship between predictors and an outcome at different geographical levels. In this paper, we extend multiscale models to examine whether a predictor effect at a finer level holds true at a coarser level. To adjust for predictor uncertainty due to aggregation, we applied measurement error models in the framework of the multiscale approach. To assess the benefit of using multiscale measurement error models, we compare the performance of multiscale models with and without measurement error in both real and simulated data. We found that ignoring the measurement error in multiscale models underestimates the regression coefficient, while it overestimates the variance of the spatially structured random effect. On the other hand, accounting for the measurement error in multiscale models provides a better model fit and unbiased parameter estimates. © The Author(s) 2016.
Performance of Bootstrap MCEWMA: Study case of Sukuk Musyarakah data
NASA Astrophysics Data System (ADS)
Safiih, L. Muhamad; Hila, Z. Nurul
2014-07-01
Sukuk Musyarakah is one of several Islamic bond instruments in Malaysia; it is essentially a restructuring of the conventional bond into a Syariah-compliant bond. Syariah compliance prohibits any element of usury, interest or fixed return. Accordingly, daily sukuk returns are not fixed, and statistically they form a time series that is dependent and autocorrelated. Such data pose problems in both statistics and finance. Sukuk returns can be characterized statistically by their volatility: high volatility reflects dramatic price changes and marks the bond as risky. However, this problem has received far less attention for sukuk than for conventional bonds. In this study, the MCEWMA chart from Statistical Process Control (SPC) is used to monitor autocorrelated data; its application to daily returns of securities has gained widespread attention among statisticians. However, the chart suffers from inaccurate estimation of its base model and control limits, producing large errors and a high probability of falsely signalling an out-of-control process. To overcome this problem, a bootstrap approach is hybridized with the MCEWMA base model to construct a new chart, the Bootstrap MCEWMA (BMCEWMA) chart. The hybrid BMCEWMA chart is applied to daily returns of sukuk Musyarakah for Rantau Abang Capital Bhd. The BMCEWMA base model proves more effective than the original MCEWMA model, giving smaller estimation error, shorter confidence intervals and fewer false alarms. In other words, the hybrid chart reduces variability, as shown by the smaller error and false-alarm rates, and we conclude that BMCEWMA performs better than MCEWMA.
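A generic sketch of a moving-centreline EWMA chart with bootstrap control limits, using a simulated AR(1) series in place of the sukuk return data; this illustrates the idea only and does not reproduce the authors' BMCEWMA chart. The AR(1) coefficient, smoothing constant and resample count are assumptions.

```python
# Hedged sketch: moving-centreline EWMA residuals with bootstrap control limits.
import numpy as np

rng = np.random.default_rng(0)
# Autocorrelated stand-in for daily returns: an AR(1) process.
n, phi = 500, 0.6
returns = np.zeros(n)
for t in range(1, n):
    returns[t] = phi * returns[t - 1] + rng.normal(0, 0.01)

lam = 0.2
ewma = np.zeros(n)
ewma[0] = returns[0]
for t in range(1, n):                          # EWMA as the moving centreline
    ewma[t] = lam * returns[t] + (1 - lam) * ewma[t - 1]
resid = returns[1:] - ewma[:-1]                # one-step-ahead forecast errors

# Bootstrap the residuals to get empirical limits instead of normal-theory limits.
boot = rng.choice(resid, size=(2000, resid.size), replace=True)
lcl = np.mean(np.percentile(boot, 0.135, axis=1))   # ~3-sigma-equivalent lower limit
ucl = np.mean(np.percentile(boot, 99.865, axis=1))  # ~3-sigma-equivalent upper limit
signals = np.sum((resid < lcl) | (resid > ucl))
print(f"bootstrap limits: ({lcl:.4f}, {ucl:.4f}), out-of-control signals: {signals}")
```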
Basavanhally, Ajay; Viswanath, Satish; Madabhushi, Anant
2015-01-01
Clinical trials increasingly employ medical imaging data in conjunction with supervised classifiers, where the latter require large amounts of training data to accurately model the system. Yet, a classifier selected at the start of the trial based on smaller and more accessible datasets may yield inaccurate and unstable classification performance. In this paper, we aim to address two common concerns in classifier selection for clinical trials: (1) predicting expected classifier performance for large datasets based on error rates calculated from smaller datasets and (2) the selection of appropriate classifiers based on expected performance for larger datasets. We present a framework for comparative evaluation of classifiers using only limited amounts of training data by using random repeated sampling (RRS) in conjunction with a cross-validation sampling strategy. Extrapolated error rates are subsequently validated via comparison with leave-one-out cross-validation performed on a larger dataset. The ability to predict error rates as dataset size increases is demonstrated on both synthetic data as well as three different computational imaging tasks: detecting cancerous image regions in prostate histopathology, differentiating high and low grade cancer in breast histopathology, and detecting cancerous metavoxels in prostate magnetic resonance spectroscopy. For each task, the relationships between 3 distinct classifiers (k-nearest neighbor, naive Bayes, Support Vector Machine) are explored. Further quantitative evaluation in terms of interquartile range (IQR) suggests that our approach consistently yields error rates with lower variability (mean IQRs of 0.0070, 0.0127, and 0.0140) than a traditional RRS approach (mean IQRs of 0.0297, 0.0779, and 0.305) that does not employ cross-validation sampling for all three datasets. PMID:25993029
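The sampling-and-extrapolation idea can be sketched as follows; this is a generic illustration rather than the paper's exact RRS protocol. Error rates estimated by cross-validation at several small training sizes are fitted with an inverse power law and extrapolated to a larger cohort; the synthetic dataset, k-nearest-neighbor classifier and learning-curve form are placeholders.

```python
# Hedged sketch: extrapolating cross-validated error rates to larger dataset sizes.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from scipy.optimize import curve_fit

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
rng = np.random.default_rng(0)

sizes = np.array([50, 100, 200, 400])
err = []
for n in sizes:
    runs = []
    for _ in range(10):                          # random repeated sampling (RRS)
        idx = rng.choice(len(y), size=n, replace=False)
        acc = cross_val_score(KNeighborsClassifier(), X[idx], y[idx], cv=5).mean()
        runs.append(1.0 - acc)
    err.append(np.mean(runs))

# Common inverse-power-law learning-curve form (an assumption, not the paper's model).
power_law = lambda n, a, b, c: a + b * n ** (-c)
(a, b, c), _ = curve_fit(power_law, sizes, err, p0=[0.05, 1.0, 0.5], maxfev=10000)
print("extrapolated error rate at n=2000:", power_law(2000, a, b, c))
```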
Brain Potentials Measured During a Go/NoGo Task Predict Completion of Substance Abuse Treatment
Steele, Vaughn R.; Fink, Brandi C.; Maurer, J. Michael; Arbabshirani, Mohammad R.; Wilber, Charles H.; Jaffe, Adam J.; Sidz, Anna; Pearlson, Godfrey D.; Calhoun, Vince D.; Clark, Vincent P.; Kiehl, Kent A.
2014-01-01
Background US nationwide estimates indicate 50–80% of prisoners have a history of substance abuse or dependence. Tailoring substance abuse treatment to specific needs of incarcerated individuals could improve effectiveness of treating substance dependence and preventing drug abuse relapse. The purpose of the present study was to test the hypothesis that pre-treatment neural measures of a Go/NoGo task would predict which individuals would or would not complete a 12-week cognitive behavioral substance abuse treatment program. Methods Adult incarcerated participants (N=89; Females=55) who volunteered for substance abuse treatment performed a response inhibition (Go/NoGo) task while event-related potentials (ERP) were recorded. Stimulus- and response-locked ERPs were compared between individuals who completed (N=68; Females=45) and discontinued (N=21; Females=10) treatment. Results As predicted, stimulus-locked P2, response-locked error-related negativity (ERN/Ne), and response-locked error positivity (Pe), measured with windowed time-domain and principal component analysis, differed between groups. Using logistic regression and support-vector machine (i.e., pattern classifiers) models, P2 and Pe predicted treatment completion above and beyond other measures (i.e., N2, P300, ERN/Ne, age, sex, IQ, impulsivity, and self-reported depression, anxiety, motivation for change, and years of drug abuse). Conclusions We conclude individuals who discontinue treatment exhibited deficiencies in sensory gating, as indexed by smaller P2, error-monitoring, as indexed by smaller ERN/Ne, and adjusting response strategy post-error, as indexed by larger Pe. However, the combination of P2 and Pe reliably predicted 83.33% of individuals who discontinued treatment. These results may help in the development of individualized therapies, which could lead to more favorable, long-term outcomes. PMID:24238783
Fu, Haijin; Wang, Yue; Tan, Jiubin; Fan, Zhigang
2018-01-01
Even after the Heydemann correction, residual nonlinear errors, ranging from hundreds of picometers to several nanometers, are still found in heterodyne laser interferometers. This is a crucial factor impeding the realization of picometer level metrology, but its source and mechanism have barely been investigated. To study this problem, a novel nonlinear model based on optical mixing and coupling with ghost reflection is proposed and then verified by experiments. After intense investigation of this new model’s influence, results indicate that new additional high-order and negative-order nonlinear harmonics, arising from ghost reflection and its coupling with optical mixing, have only a negligible contribution to the overall nonlinear error. In real applications, any effect on the Lissajous trajectory might be invisible due to the small ghost reflectance. However, even a tiny ghost reflection can significantly worsen the effectiveness of the Heydemann correction, or even make this correction completely ineffective, i.e., compensation makes the error larger rather than smaller. Moreover, the residual nonlinear error after correction is dominated only by ghost reflectance. PMID:29498685
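For context, a textbook Heydemann-type correction can be sketched as an ellipse fit to the quadrature Lissajous figure followed by removal of offsets, gain imbalance and quadrature phase error. The procedure and all signal parameters below are generic assumptions and do not include the paper's ghost-reflection coupling model.

```python
# Hedged sketch of a generic Heydemann-style quadrature correction (toy signals).
import numpy as np

rng = np.random.default_rng(0)
phi_true = np.linspace(0, 6 * np.pi, 3000)
# Simulated quadrature signals with offsets, unequal gains and a quadrature phase error.
p, q, R, G, alpha = 0.05, -0.03, 1.00, 0.90, np.deg2rad(4.0)
u1 = p + R * np.cos(phi_true) + rng.normal(0, 1e-3, phi_true.size)
u2 = q + G * np.sin(phi_true + alpha) + rng.normal(0, 1e-3, phi_true.size)

# Least-squares conic fit: a*u1^2 + b*u2^2 + c*u1*u2 + d*u1 + e*u2 = 1.
M = np.column_stack([u1**2, u2**2, u1 * u2, u1, u2])
a, b, c, d, e = np.linalg.lstsq(M, np.ones_like(u1), rcond=None)[0]

# Recover the ellipse parameters: offsets, quadrature error and gain ratio.
p_hat, q_hat = np.linalg.solve([[2 * a, c], [c, 2 * b]], [-d, -e])
alpha_hat = np.arcsin(-c / (2 * np.sqrt(a * b)))
ratio_hat = np.sqrt(b / a)                       # estimate of R/G

# Correct the signals and recompute the interferometric phase.
x = u1 - p_hat
y = (ratio_hat * (u2 - q_hat) - x * np.sin(alpha_hat)) / np.cos(alpha_hat)
phi_corrected = np.unwrap(np.arctan2(y, x))

print("estimated offsets:", p_hat, q_hat, "quadrature error (deg):", np.degrees(alpha_hat))
print("max residual phase error (rad):", np.max(np.abs(phi_corrected - phi_true)))
```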
NASA Technical Reports Server (NTRS)
Esbensen, S. K.; Chelton, D. B.; Vickers, D.; Sun, J.
1993-01-01
The method proposed by Liu (1984) is used to estimate monthly averaged evaporation over the global oceans from 1 yr of special sensor microwave imager (SSM/I) data. Intercomparisons involving SSM/I and in situ data are made over a wide range of oceanic conditions during August 1987 and February 1988 to determine the source of errors in the evaporation estimates. The most significant spatially coherent evaporation errors are found to come from estimates of near-surface specific humidity, q. Systematic discrepancies of over 2 g/kg are found in the tropics, as well as in the middle and high latitudes. The q errors are partitioned into contributions from the parameterization of q in terms of the columnar water vapor, i.e., the Liu q/W relationship, and from the retrieval algorithm for W. The effects of W retrieval errors are found to be smaller over most of the global oceans and due primarily to the implicitly assumed vertical structures of temperature and specific humidity on which the physically based SSM/I retrievals of W are based.
de Cueto, Marina; Ceballos, Esther; Martinez-Martinez, Luis; Perea, Evelio J.; Pascual, Alvaro
2004-01-01
In order to further decrease the time lapse between initial inoculation of blood culture media and the reporting of results of identification and antimicrobial susceptibility tests for microorganisms causing bacteremia, we performed a prospective study in which specially processed fluid from positive blood culture bottles from Bactec 9240 (Becton Dickinson, Cockeysville, Md.) containing aerobic media was directly inoculated into Vitek 2 system cards (bio-Mérieux, France). Organism identification and susceptibility results were compared with those obtained from cards inoculated with a standardized bacterial suspension obtained following subculture to agar; 100 consecutive positive monomicrobic blood cultures, consisting of 50 gram-negative rods and 50 gram-positive cocci, were included in the study. For gram-negative organisms, 31 of the 50 (62%) showed complete agreement with the standard method for species identification, while none of the 50 gram-positive cocci were correctly identified by the direct method. For gram-negative rods, there were 50% categorical agreements between the direct and standard methods for all drugs tested. The very major error rate was 2.4%, and the major error rate was 0.6%. The overall error rate for gram-negatives was 6.6%. Complete agreement in clinical categories of all antimicrobial agents evaluated was obtained for 19 of 50 (38%) gram-positive cocci evaluated; the overall error rate was 8.4%, with 2.8% minor errors, 2.4% major errors, and 3.2% very major errors. These findings suggest that the Vitek 2 cards inoculated directly from positive Bactec 9240 bottles do not provide acceptable bacterial identification or susceptibility testing in comparison with corresponding cards tested by a standard method. PMID:15297523
[The quality of medication orders--can it be improved?].
Vaknin, Ofra; Wingart-Emerel, Efrat; Stern, Zvi
2003-07-01
Medication errors are a common cause of morbidity and mortality among patients. Medication administration in hospitals is a complicated procedure with the possibility of error at each step. Errors are most commonly found at the prescription and transcription stages, although it is known that most errors can easily be avoided through strict adherence to standardized procedure guidelines. In an examination of medication errors reported in the hospital in the year 2000, we found that 38% were reported to have resulted from transcription errors. In the year 2001, the hospital initiated a program designed to identify faulty processing of orders in an effort to improve the quality and effectiveness of the medication administration process. As part of this program, it was decided to check and evaluate the quality of the written doctor's orders and the transcription of those orders to the nursing cadre, in various hospital units. The study was conducted using a questionnaire which checked compliance with hospital standards with regard to the medication administration process, as applied to 6 units over the course of 8 weeks. Results of the survey showed poor compliance with guidelines on the part of doctors and nurses. Only 18% of doctors' orders in the study and 37% of the nurses' transcriptions were written according to standards. The Emergency Department showed an even lower compliance with only 3% of doctors' orders and 25% of nurses' transcriptions complying with standards. As a result of this study, it was decided to initiate an intensive in-service teaching course to refresh the staff's knowledge of medication administration guidelines. In the future it is recommended that hand-written orders be replaced by computerized orders in an effort to limit the chance of error.
Method for simulating dose reduction in digital mammography using the Anscombe transformation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Borges, Lucas R., E-mail: lucas.rodrigues.borges@usp.br; Oliveira, Helder C. R. de; Nunes, Polyana F.
2016-06-15
Purpose: This work proposes an accurate method for simulating dose reduction in digital mammography starting from a clinical image acquired with a standard dose. Methods: The method developed in this work consists of scaling a mammogram acquired at the standard radiation dose and adding signal-dependent noise. The algorithm accounts for specific issues relevant in digital mammography images, such as anisotropic noise, spatial variations in pixel gain, and the effect of dose reduction on the detective quantum efficiency. The scaling process takes into account the linearity of the system and the offset of the detector elements. The inserted noise is obtained by acquiring images of a flat-field phantom at the standard radiation dose and at the simulated dose. Using the Anscombe transformation, a relationship is created between the calculated noise mask and the scaled image, resulting in a clinical mammogram with the same noise and gray level characteristics as an image acquired at the lower-radiation dose. Results: The performance of the proposed algorithm was validated using real images acquired with an anthropomorphic breast phantom at four different doses, with five exposures for each dose and 256 nonoverlapping ROIs extracted from each image and with uniform images. The authors simulated lower-dose images and compared these with the real images. The authors evaluated the similarity between the normalized noise power spectrum (NNPS) and power spectrum (PS) of simulated images and real images acquired with the same dose. The maximum relative error was less than 2.5% for every ROI. The added noise was also evaluated by measuring the local variance in the real and simulated images. The relative average error for the local variance was smaller than 1%. Conclusions: A new method is proposed for simulating dose reduction in clinical mammograms. In this method, the dependency between image noise and image signal is addressed using a novel application of the Anscombe transformation. NNPS, PS, and local noise metrics confirm that this method is capable of precisely simulating various dose reductions.
Method for simulating dose reduction in digital mammography using the Anscombe transformation
Borges, Lucas R.; de Oliveira, Helder C. R.; Nunes, Polyana F.; Bakic, Predrag R.; Maidment, Andrew D. A.; Vieira, Marcelo A. C.
2016-01-01
Purpose: This work proposes an accurate method for simulating dose reduction in digital mammography starting from a clinical image acquired with a standard dose. Methods: The method developed in this work consists of scaling a mammogram acquired at the standard radiation dose and adding signal-dependent noise. The algorithm accounts for specific issues relevant in digital mammography images, such as anisotropic noise, spatial variations in pixel gain, and the effect of dose reduction on the detective quantum efficiency. The scaling process takes into account the linearity of the system and the offset of the detector elements. The inserted noise is obtained by acquiring images of a flat-field phantom at the standard radiation dose and at the simulated dose. Using the Anscombe transformation, a relationship is created between the calculated noise mask and the scaled image, resulting in a clinical mammogram with the same noise and gray level characteristics as an image acquired at the lower-radiation dose. Results: The performance of the proposed algorithm was validated using real images acquired with an anthropomorphic breast phantom at four different doses, with five exposures for each dose and 256 nonoverlapping ROIs extracted from each image and with uniform images. The authors simulated lower-dose images and compared these with the real images. The authors evaluated the similarity between the normalized noise power spectrum (NNPS) and power spectrum (PS) of simulated images and real images acquired with the same dose. The maximum relative error was less than 2.5% for every ROI. The added noise was also evaluated by measuring the local variance in the real and simulated images. The relative average error for the local variance was smaller than 1%. Conclusions: A new method is proposed for simulating dose reduction in clinical mammograms. In this method, the dependency between image noise and image signal is addressed using a novel application of the Anscombe transformation. NNPS, PS, and local noise metrics confirm that this method is capable of precisely simulating various dose reductions. PMID:27277017
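To make the noise-injection idea above concrete, the following minimal Python sketch scales a standard-dose image and adds signal-dependent noise so that the result approximates a lower-dose acquisition. It is not the authors' pipeline: they derive the noise mask from flat-field acquisitions and relate it to the scaled image through the Anscombe transformation, whereas here the quantum gain, electronic noise, detector offset, and dose ratio are all illustrative assumptions.

```python
import numpy as np

def simulate_dose_reduction(image, dose_ratio, offset=50.0, gain=1.0, sigma_e=2.0, rng=None):
    """Sketch of simulating a reduced-dose image from a standard-dose mammogram.

    The offset-corrected signal is scaled linearly by the dose ratio, and extra
    signal-dependent (Poisson-like) plus electronic noise is injected so the total
    variance matches what the lower dose would produce.  The published method
    instead builds the noise mask from flat-field acquisitions and the Anscombe
    transformation; the parameters here are illustrative only.
    """
    rng = np.random.default_rng(0) if rng is None else rng
    signal = (image - offset) * dose_ratio                        # scaled mean signal
    var_scaled = dose_ratio**2 * (gain * (image - offset) + sigma_e**2)
    var_target = gain * signal + sigma_e**2                       # variance expected at the low dose
    extra_sd = np.sqrt(np.maximum(var_target - var_scaled, 0.0))
    return signal + rng.normal(0.0, extra_sd) + offset            # signal-dependent noise added

# Example: simulate a half-dose image from a synthetic "standard dose" flat field
standard = 50.0 + np.random.default_rng(1).poisson(800.0, size=(256, 256)).astype(float)
half_dose = simulate_dose_reduction(standard, dose_ratio=0.5)
print(standard.var(), half_dose.var())   # the simulated image shows the larger low-dose variance
```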
Joo, Yeon Kyoung; Lee-Won, Roselyn J
2016-10-01
For members of a group negatively stereotyped in a domain, making mistakes can aggravate the influence of stereotype threat because negative stereotypes often blame target individuals and attribute the outcome to their lack of ability. Virtual agents offering real-time error feedback may influence performance under stereotype threat by shaping the performers' attributional perception of errors they commit. We explored this possibility with female drivers, considering the prevalence of the "women-are-bad-drivers" stereotype. Specifically, we investigated how in-vehicle voice agents offering error feedback based on responsibility attribution (internal vs. external) and outcome attribution (ability vs. effort) influence female drivers' performance under stereotype threat. In addressing this question, we conducted an experiment in a virtual driving simulation environment that provided moment-to-moment error feedback messages. Participants performed a challenging driving task and made mistakes preprogrammed to occur. Results showed that the agent's error feedback with outcome attribution moderated the stereotype threat effect on driving performance. Participants under stereotype threat had a smaller number of collisions when the errors were attributed to effort than to ability. In addition, outcome attribution feedback moderated the effect of responsibility attribution on driving performance. Implications of these findings are discussed.
Increasing accuracy of dispersal kernels in grid-based population models
Slone, D.H.
2011-01-01
Dispersal kernels in grid-based population models specify the proportion, distance and direction of movements within the model landscape. Spatial errors in dispersal kernels can have large compounding effects on model accuracy. Circular Gaussian and Laplacian dispersal kernels at a range of spatial resolutions were investigated, and methods for minimizing errors caused by the discretizing process were explored. Kernels of progressively smaller sizes relative to the landscape grid size were calculated using cell-integration and cell-center methods. These kernels were convolved repeatedly, and the final distribution was compared with a reference analytical solution. For large Gaussian kernels (σ > 10 cells), the total kernel error was <10⁻¹¹ compared to analytical results. Using an invasion model that tracked the time a population took to reach a defined goal, the discrete model results were comparable to the analytical reference. With Gaussian kernels that had σ ≤ 0.12 using the cell integration method, or σ ≤ 0.22 using the cell center method, the kernel error was greater than 10%, which resulted in invasion times that were orders of magnitude different from theoretical results. A goal-seeking routine was developed to adjust the kernels to minimize overall error. With this, corrections for small kernels were found that decreased overall kernel error to <10⁻¹¹ and invasion time error to <5%.
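The contrast between the cell-center and cell-integration discretizations described above can be sketched as follows; the unit grid spacing, kernel radius, and the summed-absolute-difference comparison are assumptions made for illustration, not the study's error metric.

```python
import numpy as np
from scipy.stats import norm

def gaussian_kernel(sigma, radius, method="integrate"):
    """Discretize a circular Gaussian dispersal kernel on a unit grid (sketch).

    method="center"    -- evaluate the density at each cell center
    method="integrate" -- integrate the density over each cell (separable CDF differences)
    Both kernels are normalized so that the dispersal proportions sum to 1.
    """
    x = np.arange(-radius, radius + 1)
    if method == "center":
        g1d = norm.pdf(x, scale=sigma)
    else:
        g1d = norm.cdf(x + 0.5, scale=sigma) - norm.cdf(x - 0.5, scale=sigma)
    k = np.outer(g1d, g1d)            # separable 2D Gaussian
    return k / k.sum()

# Small kernels (sigma well below one cell) are where the two schemes diverge most
for sigma in (0.12, 0.22, 1.0, 10.0):
    kc = gaussian_kernel(sigma, radius=5, method="center")
    ki = gaussian_kernel(sigma, radius=5, method="integrate")
    print(f"sigma = {sigma:5.2f}  |center - integrate| summed = {np.abs(kc - ki).sum():.3e}")
```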
Hoyo, Javier Del; Choi, Heejoo; Burge, James H; Kim, Geon-Hee; Kim, Dae Wook
2017-06-20
The control of surface errors as a function of spatial frequency is critical during the fabrication of modern optical systems. A large-scale surface figure error is controlled by a guided removal process, such as computer-controlled optical surfacing. Smaller-scale surface errors are controlled by polishing process parameters. Surface errors with spatial periods of only a few millimeters may degrade the performance of an optical system, causing background noise from scattered light and reducing imaging contrast in large optical systems. Conventionally, microsurface roughness is given as the root mean square over a high spatial frequency range, evaluated from a 0.5×0.5 mm local surface map with 500×500 pixels. This surface specification is not adequate to fully describe the characteristics for advanced optical systems. The process for controlling and minimizing mid- to high-spatial frequency surface errors with periods of up to ∼2-3 mm was investigated for many optical fabrication conditions using the measured surface power spectral density (PSD) of a finished Zerodur optical surface. Then, the surface PSD was systematically related to various fabrication process parameters, such as the grinding methods, polishing interface materials, and polishing compounds. The retraceable experimental polishing conditions and processes used to produce an optimal optical surface PSD are presented.
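As a rough illustration of the kind of PSD evaluation discussed above, the sketch below computes a one-dimensional surface PSD from a height profile. The sampling interval, window size, and synthetic ripple are assumptions for the example and are unrelated to the Zerodur measurements in the study.

```python
import numpy as np
from scipy.signal import periodogram

def surface_psd_1d(profile, dx):
    """One-dimensional PSD of a surface height profile (heights in meters).

    Returns spatial frequencies (cycles/m) and a one-sided PSD, normalized so that
    integrating it over frequency approximately recovers the variance (RMS^2) of
    the linearly detrended profile.
    """
    freqs, psd = periodogram(profile, fs=1.0 / dx, detrend="linear")
    return freqs, psd

# Example: a 0.5 mm window sampled every 1 micrometer (500 pixels), carrying a
# 50-micrometer-period ripple standing in for a mid-spatial-frequency error
dx = 1e-6
x = np.arange(500) * dx
profile = 2e-9 * np.sin(2 * np.pi * x / 50e-6) \
          + 1e-9 * np.random.default_rng(0).standard_normal(x.size)
f, psd = surface_psd_1d(profile, dx)
rms = np.sqrt(psd.sum() * (f[1] - f[0]))     # crude integral of the one-sided PSD
print(f"RMS roughness over the window ≈ {rms * 1e9:.2f} nm")
```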
Improving patient safety through quality assurance.
Raab, Stephen S
2006-05-01
Anatomic pathology laboratories use several quality assurance tools to detect errors and to improve patient safety. To review some of the anatomic pathology laboratory patient safety quality assurance practices. Different standards and measures in anatomic pathology quality assurance and patient safety were reviewed. Frequency of anatomic pathology laboratory error, variability in the use of specific quality assurance practices, and use of data for error reduction initiatives. Anatomic pathology error frequencies vary according to the detection method used. Based on secondary review, a College of American Pathologists Q-Probes study showed that the mean laboratory error frequency was 6.7%. A College of American Pathologists Q-Tracks study measuring frozen section discrepancy found that laboratories improved the longer they monitored and shared data. There is a lack of standardization across laboratories even for governmentally mandated quality assurance practices, such as cytologic-histologic correlation. The National Institutes of Health funded a consortium of laboratories to benchmark laboratory error frequencies, perform root cause analysis, and design error reduction initiatives, using quality assurance data. Based on the cytologic-histologic correlation process, these laboratories found an aggregate nongynecologic error frequency of 10.8%. Based on gynecologic error data, the laboratory at my institution used Toyota production system processes to lower gynecologic error frequencies and to improve Papanicolaou test metrics. Laboratory quality assurance practices have been used to track error rates, and laboratories are starting to use these data for error reduction initiatives.
Characteristics of advanced hydrogen maser frequency standards
NASA Technical Reports Server (NTRS)
Peters, H. E.
1973-01-01
Measurements with several operational atomic hydrogen maser standards have been made which illustrate the fundamental characteristics of the maser as well as the analysability of the corrections which are made to relate the oscillation frequency to the free, unperturbed, hydrogen standard transition frequency. Sources of the most important perturbations, and the magnitude of the associated errors, are discussed. A variable volume storage bulb hydrogen maser is also illustrated which can provide on the order of 2 parts in 10 to the 14th power or better accuracy in evaluating the wall shift. Since the other basic error sources combined contribute no more than approximately 1 part in 10 to the 14th power uncertainty, the variable volume storage bulb hydrogen maser will have net intrinsic accuracy capability of the order of 2 parts in 10 to the 14th power or better. This is an order of magnitude less error than anticipated with cesium standards and is comparable to the basic limit expected for a free atom hydrogen beam resonance standard.
On a more rigorous gravity field processing for future LL-SST type gravity satellite missions
NASA Astrophysics Data System (ADS)
Daras, I.; Pail, R.; Murböck, M.
2013-12-01
In order to meet the growing demands of the user community concerning accuracies of temporal gravity field models, future gravity missions of low-low satellite-to-satellite tracking (LL-SST) type are planned to carry more precise sensors than their predecessors. A breakthrough is planned with the improved LL-SST measurement link, where the traditional K-band microwave instrument of 1 μm accuracy will be complemented by an inter-satellite ranging instrument of several nm accuracy. This study focuses on investigations concerning the potential performance of the new sensors and their impact on gravity field solutions. The processing methods for gravity field recovery have to meet the new sensor standards and be able to take full advantage of the new accuracies that they provide. We use full-scale simulations in a realistic environment to investigate whether the standard processing techniques suffice to fully exploit the new sensor standards. We achieve that by performing full numerical closed-loop simulations based on the Integral Equation approach. In our simulation scheme, we simulate dynamic orbits in a conventional tracking analysis to compute pseudo inter-satellite ranges or range-rates that serve as observables. Each part of the processing is validated separately with special emphasis on numerical errors and their impact on gravity field solutions. We demonstrate that processing with standard precision may be a limiting factor for taking full advantage of the new generation of sensors that future satellite missions will carry. Therefore, we created versions of our simulator with enhanced processing precision, with the primary aim of minimizing round-off errors. Results using the enhanced precision show a large reduction of numerical errors that were present in the standard-precision processing even for the error-free scenario, and reveal the improvements the new sensors will bring to the gravity field solutions. As a next step, we analyze the contribution of individual error sources to the system's error budget. More specifically, we analyze sensor noise from the laser interferometer and the accelerometers, errors in the kinematic orbits and the background fields, as well as temporal and spatial aliasing errors. We take special care in the assessment of error sources with stochastic behavior, such as the laser interferometer and the accelerometers, and in their consistent stochastic modeling within the adjustment process.
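The abstract does not state which numerical techniques the enhanced-precision simulator uses; as one generic illustration of how round-off accumulates in long processing chains and how it can be suppressed, the hedged sketch below compares naive single-precision accumulation with compensated (Kahan) summation. The increment values and sample size are arbitrary.

```python
import numpy as np

def kahan_sum32(values):
    """Compensated (Kahan) summation in single precision: a running correction
    term cancels most of the round-off introduced by each individual addition."""
    total = np.float32(0.0)
    c = np.float32(0.0)
    for v in values:
        y = np.float32(v) - c
        t = total + y
        c = (t - total) - y
        total = t
    return float(total)

# Accumulate many small range-rate-like increments in float32 and compare with a
# float64 reference: the naive running sum drifts, the compensated sum does not.
rng = np.random.default_rng(0)
increments = rng.normal(1e-3, 1e-6, size=100_000).astype(np.float32)
naive = np.float32(0.0)
for v in increments:
    naive = naive + v
reference = float(np.sum(increments.astype(np.float64)))
print("naive float32 error:", abs(float(naive) - reference))
print("Kahan float32 error:", abs(kahan_sum32(increments) - reference))
```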
Luo, Y.; Xia, J.; Liu, J.; Xu, Y.; Liu, Q.
2008-01-01
Multichannel Analysis of Surface Waves utilizes a multichannel recording system to estimate near-surface shear (S)-wave velocities from high-frequency Rayleigh waves. A pseudo-2D S-wave velocity (vS) section is constructed by aligning 1D models at the midpoint of each receiver spread and using a spatial interpolation scheme. The horizontal resolution of the section is therefore most influenced by the receiver spread length and the source interval. The receiver spread length sets the theoretical lower limit and any vS structure with its lateral dimension smaller than this length will not be properly resolved in the final vS section. A source interval smaller than the spread length will not improve the horizontal resolution because spatial smearing has already been introduced by the receiver spread. In this paper, we first analyze the horizontal resolution of a pair of synthetic traces. Resolution analysis shows that (1) a pair of traces with a smaller receiver spacing achieves higher horizontal resolution of inverted S-wave velocities but results in a larger relative error; (2) the relative error of the phase velocity at a high frequency is smaller than at a low frequency; and (3) a relative error of the inverted S-wave velocity is affected by the signal-to-noise ratio of data. These results provide us with a guideline to balance the trade-off between receiver spacing (horizontal resolution) and accuracy of the inverted S-wave velocity. We then present a scheme to generate a pseudo-2D S-wave velocity section with high horizontal resolution using multichannel records by inverting high-frequency surface-wave dispersion curves calculated through cross-correlation combined with a phase-shift scanning method. This method chooses only a pair of consecutive traces within a shot gather to calculate a dispersion curve. We finally invert surface-wave dispersion curves of synthetic and real-world data. Inversion results for both synthetic and real-world data demonstrate that inverting high-frequency surface-wave dispersion curves - calculated from a pair of traces through cross-correlation with the phase-shift scanning method and inverted with the damped least-squares method and the singular-value decomposition technique - can feasibly achieve a reliable pseudo-2D S-wave velocity section with relatively high horizontal resolution. © 2008 Elsevier B.V. All rights reserved.
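A minimal sketch of the two-trace idea follows: the cross-spectrum phase between a pair of receivers yields a phase-velocity estimate per frequency. The published scheme uses cross-correlation combined with phase-shift scanning; this simplified version assumes the inter-receiver phase difference stays within ±π and uses synthetic, noise-free, non-dispersive traces, so it only illustrates the principle.

```python
import numpy as np

def two_trace_phase_velocity(trace_a, trace_b, dt, dx, fmin=5.0, fmax=50.0):
    """Estimate a phase-velocity curve from two traces separated by dx (sketch).

    The cross-spectrum phase gives the frequency-dependent travel-time difference,
    and c(f) = 2*pi*f*dx / dphi(f).  Assumes |dphi| < pi over the analyzed band;
    practical implementations scan candidate phase shifts instead.
    """
    freqs = np.fft.rfftfreq(trace_a.size, d=dt)
    cross = np.fft.rfft(trace_a) * np.conj(np.fft.rfft(trace_b))
    dphi = np.angle(cross)                                   # phase of A relative to B
    band = (freqs >= fmin) & (freqs <= fmax) & (np.abs(dphi) > 1e-6)
    return freqs[band], 2 * np.pi * freqs[band] * dx / np.abs(dphi[band])

# Synthetic example: one mode travelling at 200 m/s between receivers 2 m apart
dt, dx, c_true = 0.002, 2.0, 200.0
t = np.arange(1024) * dt
def wavelet(tt):
    return np.exp(-((tt - 0.3) / 0.02) ** 2) * np.sin(2 * np.pi * 25 * tt)
trace_a = wavelet(t)
trace_b = wavelet(t - dx / c_true)        # second receiver sees the wave dx/c_true later
f, c = two_trace_phase_velocity(trace_a, trace_b, dt, dx)
print(np.median(c))                        # close to 200 m/s across the band
```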
Intermittent nocturnal hypoxia and metabolic risk in obese adolescents with obstructive sleep apnea.
Narang, Indra; McCrindle, Brian W; Manlhiot, Cedric; Lu, Zihang; Al-Saleh, Suhail; Birken, Catherine S; Hamilton, Jill
2018-01-22
There is conflicting data regarding the independent associations of obstructive sleep apnea (OSA) with metabolic risk in obese youth. Previous studies have not consistently addressed central adiposity, specifically elevated waist to height ratio (WHtR), which is associated with metabolic risk independent of body mass index. The objective of this study was to determine the independent effects of the obstructive apnea-hypopnea index (OAHI) and associated indices of nocturnal hypoxia on metabolic function in obese youth after adjusting for WHtR. Subjects had standardized anthropometric measurements. Fasting blood included insulin, glucose, glycated hemoglobin, alanine transferase, and aspartate transaminase. Insulin resistance was quantified with the homeostatic model assessment. Overnight polysomnography determined the OAHI and nocturnal oxygenation indices. Of the 75 recruited subjects, 23% were diagnosed with OSA. Adjusting for age, gender, and WHtR in multivariable linear regression models, a higher oxygen desaturation index was associated with a higher fasting insulin (coefficient [standard error] = 48.076 [11.255], p < 0.001), higher glycated hemoglobin (coefficient [standard error] = 0.097 [0.041], p = 0.02), higher insulin resistance (coefficient [standard error] = 1.516 [0.364], p < 0.001), elevated alanine transferase (coefficient [standard error] = 11.631 [2.770], p < 0.001), and aspartate transaminase (coefficient [standard error] = 4.880 [1.444], p = 0.001). However, there were no significant associations between OAHI, glucose metabolism, and liver enzymes. Intermittent nocturnal hypoxia rather than the OAHI was associated with metabolic risk in obese youth after adjusting for WHtR. Measures of abdominal adiposity such as WHtR should be considered in future studies that evaluate the impact of OSA on metabolic health.
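The "coefficient [standard error]" values quoted above come from multivariable linear regression. The sketch below, using entirely synthetic data and hypothetical variable names (ODI, WHtR, fasting insulin), shows how such coefficients, standard errors, and p-values are typically obtained with ordinary least squares; it does not reproduce the study's data or models.

```python
import numpy as np
import statsmodels.api as sm

# Synthetic stand-ins for the study variables: fasting insulin regressed on the
# oxygen desaturation index (ODI), adjusting for age, sex, and WHtR.
rng = np.random.default_rng(42)
n = 75
age = rng.uniform(12, 18, n)
sex = rng.integers(0, 2, n)
whtr = rng.normal(0.6, 0.05, n)
odi = rng.gamma(2.0, 2.0, n)
insulin = 5 + 0.8 * age + 2 * sex + 40 * whtr + 3 * odi + rng.normal(0, 8, n)

X = sm.add_constant(np.column_stack([age, sex, whtr, odi]))
fit = sm.OLS(insulin, X).fit()
for name, coef, se, p in zip(["const", "age", "sex", "WHtR", "ODI"],
                             fit.params, fit.bse, fit.pvalues):
    print(f"{name:6s} coefficient [standard error] = {coef:7.3f} [{se:.3f}], p = {p:.3g}")
```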
Asymptotic Standard Errors for Item Response Theory True Score Equating of Polytomous Items
ERIC Educational Resources Information Center
Cher Wong, Cheow
2015-01-01
Building on previous works by Lord and Ogasawara for dichotomous items, this article proposes an approach to derive the asymptotic standard errors of item response theory true score equating involving polytomous items, for equivalent and nonequivalent groups of examinees. This analytical approach could be used in place of empirical methods like…
Standard Error Estimation of 3PL IRT True Score Equating with an MCMC Method
ERIC Educational Resources Information Center
Liu, Yuming; Schulz, E. Matthew; Yu, Lei
2008-01-01
A Markov chain Monte Carlo (MCMC) method and a bootstrap method were compared in the estimation of standard errors of item response theory (IRT) true score equating. Three test form relationships were examined: parallel, tau-equivalent, and congeneric. Data were simulated based on Reading Comprehension and Vocabulary tests of the Iowa Tests of…
ERIC Educational Resources Information Center
Doppelt, Jerome E.
1956-01-01
The standard error of measurement as a means for estimating the margin of error that should be allowed for in test scores is discussed. The true score measures the performance that is characteristic of the person tested; the variations, plus and minus, around the true score describe a characteristic of the test. When the standard deviation is used…
ERIC Educational Resources Information Center
Sachse, Karoline A.; Haag, Nicole
2017-01-01
Standard errors computed according to the operational practices of international large-scale assessment studies such as the Programme for International Student Assessment's (PISA) or the Trends in International Mathematics and Science Study (TIMSS) may be biased when cross-national differential item functioning (DIF) and item parameter drift are…
ERIC Educational Resources Information Center
Zu, Jiyun; Yuan, Ke-Hai
2012-01-01
In the nonequivalent groups with anchor test (NEAT) design, the standard error of linear observed-score equating is commonly estimated by an estimator derived assuming multivariate normality. However, real data are seldom normally distributed, causing this normal estimator to be inconsistent. A general estimator, which does not rely on the…
Standard Errors of Estimated Latent Variable Scores with Estimated Structural Parameters
ERIC Educational Resources Information Center
Hoshino, Takahiro; Shigemasu, Kazuo
2008-01-01
The authors propose a concise formula to evaluate the standard error of the estimated latent variable score when the true values of the structural parameters are not known and must be estimated. The formula can be applied to factor scores in factor analysis or ability parameters in item response theory, without bootstrap or Markov chain Monte…
40 CFR 1065.1005 - Symbols, abbreviations, acronyms, and units of measure.
Code of Federal Regulations, 2014 CFR
2014-07-01
... of diameters meter per meter m/m 1 b atomic oxygen-to-carbon ratio mole per mole mol/mol 1 C # number... error between a quantity and its reference e brake-specific emission or fuel consumption gram per... standard deviation S Sutherland constant kelvin K K SEE standard estimate of error T absolute temperature...
Standard errors in forest area
Joseph McCollum
2002-01-01
I trace the development of standard error equations for forest area, beginning with the theory behind double sampling and the variance of a product. The discussion shifts to the particular problem of forest area - at which time the theory becomes relevant. There are subtle difficulties in figuring out which variance of a product equation should be used. The equations...
ERIC Educational Resources Information Center
Rocconi, Louis M.
2011-01-01
Hierarchical linear models (HLM) solve the problems associated with the unit of analysis problem such as misestimated standard errors, heterogeneity of regression and aggregation bias by modeling all levels of interest simultaneously. Hierarchical linear modeling resolves the problem of misestimated standard errors by incorporating a unique random…
A Sandwich-Type Standard Error Estimator of SEM Models with Multivariate Time Series
ERIC Educational Resources Information Center
Zhang, Guangjian; Chow, Sy-Miin; Ong, Anthony D.
2011-01-01
Structural equation models are increasingly used as a modeling tool for multivariate time series data in the social and behavioral sciences. Standard error estimators of SEM models, originally developed for independent data, require modifications to accommodate the fact that time series data are inherently dependent. In this article, we extend a…
Patient Safety: Moving the Bar in Prison Health Care Standards
Greifinger, Robert B.; Mellow, Jeff
2010-01-01
Improvements in community health care quality through error reduction have been slow to transfer to correctional settings. We convened a panel of correctional experts, which recommended 60 patient safety standards focusing on such issues as creating safety cultures at organizational, supervisory, and staff levels through changes to policy and training and by ensuring staff competency, reducing medication errors, encouraging the seamless transfer of information between and within practice settings, and developing mechanisms to detect errors or near misses and to shift the emphasis from blaming staff to fixing systems. To our knowledge, this is the first published set of standards focusing on patient safety in prisons, adapted from the emerging literature on quality improvement in the community. PMID:20864714
Kappa statistic for the clustered dichotomous responses from physicians and patients
Kang, Chaeryon; Qaqish, Bahjat; Monaco, Jane; Sheridan, Stacey L.; Cai, Jianwen
2013-01-01
The bootstrap method for estimating the standard error of the kappa statistic in the presence of clustered data is evaluated. Such data arise, for example, in assessing agreement between physicians and their patients regarding their understanding of the physician-patient interaction and discussions. We propose a computationally efficient procedure for generating correlated dichotomous responses for physicians and assigned patients for simulation studies. The simulation results demonstrate that the proposed bootstrap method produces a better estimate of the standard error and better coverage performance than the asymptotic standard error estimate that ignores dependence among patients within physicians, provided there is at least a moderately large number of clusters. An example of an application to a coronary heart disease prevention study is presented. PMID:23533082
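The sketch below illustrates the cluster-resampling idea on synthetic data: physicians (clusters) are resampled with replacement, keeping each physician's patients together, and kappa is recomputed on each resample. It is not the authors' procedure for generating correlated dichotomous responses; the data-generating model, cluster sizes, and number of bootstrap replicates are assumptions.

```python
import numpy as np

def kappa(x, y):
    """Cohen's kappa for two dichotomous (0/1) rating vectors."""
    po = np.mean(x == y)
    p1x, p1y = np.mean(x), np.mean(y)
    pe = p1x * p1y + (1 - p1x) * (1 - p1y)
    return (po - pe) / (1 - pe)

def cluster_bootstrap_kappa_se(phys_id, phys_rating, pat_rating, n_boot=500, seed=0):
    """Bootstrap SE of kappa that respects clustering of patients within physicians."""
    rng = np.random.default_rng(seed)
    clusters = np.unique(phys_id)
    stats = []
    for _ in range(n_boot):
        chosen = rng.choice(clusters, size=clusters.size, replace=True)
        idx = np.concatenate([np.where(phys_id == c)[0] for c in chosen])
        stats.append(kappa(phys_rating[idx], pat_rating[idx]))
    return np.std(stats, ddof=1)

# Synthetic example: 30 physicians, 10 patients each, correlated within physician
rng = np.random.default_rng(1)
phys_id = np.repeat(np.arange(30), 10)
phys_effect = np.repeat(rng.normal(0, 1, 30), 10)
phys_rating = (phys_effect + rng.normal(0, 1, 300) > 0).astype(int)
pat_rating = (phys_effect + rng.normal(0, 1, 300) > 0).astype(int)
print("kappa =", round(kappa(phys_rating, pat_rating), 3),
      " clustered bootstrap SE =",
      round(cluster_bootstrap_kappa_se(phys_id, phys_rating, pat_rating), 3))
```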
ERIC Educational Resources Information Center
Bouck, Emily C.; Bouck, Mary K.; Joshi, Gauri S.; Johnson, Linley
2016-01-01
Students with learning disabilities struggle with word problems in mathematics classes. Understanding the type of errors students make when working through such mathematical problems can further describe student performance and highlight student difficulties. Through the use of error codes, researchers analyzed the type of errors made by 14 sixth…
Defining the Relationship Between Human Error Classes and Technology Intervention Strategies
NASA Technical Reports Server (NTRS)
Wiegmann, Douglas A.; Rantanen, Esa; Crisp, Vicki K. (Technical Monitor)
2002-01-01
One of the main factors in all aviation accidents is human error. The NASA Aviation Safety Program (AvSP), therefore, has identified several human-factors safety technologies to address this issue. Some technologies directly address human error either by attempting to reduce the occurrence of errors or by mitigating the negative consequences of errors. However, new technologies and system changes may also introduce new error opportunities or even induce different types of errors. Consequently, a thorough understanding of the relationship between error classes and technology "fixes" is crucial for the evaluation of intervention strategies outlined in the AvSP, so that resources can be effectively directed to maximize the benefit to flight safety. The purpose of the present project, therefore, was to examine the repositories of human factors data to identify the possible relationship between different error class and technology intervention strategies. The first phase of the project, which is summarized here, involved the development of prototype data structures or matrices that map errors onto "fixes" (and vice versa), with the hope of facilitating the development of standards for evaluating safety products. Possible follow-on phases of this project are also discussed. These additional efforts include a thorough and detailed review of the literature to fill in the data matrix and the construction of a complete database and standards checklists.
Translating Radiometric Requirements for Satellite Sensors to Match International Standards.
Pearlman, Aaron; Datla, Raju; Kacker, Raghu; Cao, Changyong
2014-01-01
International scientific standards organizations created standards on evaluating uncertainty in the early 1990s. Although scientists from many fields use these standards, they are not consistently implemented in the remote sensing community, where traditional error analysis framework persists. For a satellite instrument under development, this can create confusion in showing whether requirements are met. We aim to create a methodology for translating requirements from the error analysis framework to the modern uncertainty approach using the product level requirements of the Advanced Baseline Imager (ABI) that will fly on the Geostationary Operational Environmental Satellite R-Series (GOES-R). In this paper we prescribe a method to combine several measurement performance requirements, written using a traditional error analysis framework, into a single specification using the propagation of uncertainties formula. By using this approach, scientists can communicate requirements in a consistent uncertainty framework leading to uniform interpretation throughout the development and operation of any satellite instrument.
Translating Radiometric Requirements for Satellite Sensors to Match International Standards
Pearlman, Aaron; Datla, Raju; Kacker, Raghu; Cao, Changyong
2014-01-01
International scientific standards organizations created standards on evaluating uncertainty in the early 1990s. Although scientists from many fields use these standards, they are not consistently implemented in the remote sensing community, where traditional error analysis framework persists. For a satellite instrument under development, this can create confusion in showing whether requirements are met. We aim to create a methodology for translating requirements from the error analysis framework to the modern uncertainty approach using the product level requirements of the Advanced Baseline Imager (ABI) that will fly on the Geostationary Operational Environmental Satellite R-Series (GOES-R). In this paper we prescribe a method to combine several measurement performance requirements, written using a traditional error analysis framework, into a single specification using the propagation of uncertainties formula. By using this approach, scientists can communicate requirements in a consistent uncertainty framework leading to uniform interpretation throughout the development and operation of any satellite instrument. PMID:26601032
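A minimal sketch of the propagation-of-uncertainties step described above: several requirement components, each stated in its own convention, are converted to standard uncertainties and combined by root-sum-of-squares. The component names, values, and divisors are illustrative only, not actual ABI/GOES-R requirements.

```python
import numpy as np

def combined_standard_uncertainty(components):
    """Combine independent uncertainty components with the propagation-of-
    uncertainties (root-sum-of-squares) formula: u_c = sqrt(sum of u_i^2).

    Each component is (name, value, divisor), where the divisor converts the
    stated figure into a standard uncertainty (1 for a 1-sigma error, 2 for a
    2-sigma specification, sqrt(3) for a rectangular +/- limit)."""
    u = np.array([value / divisor for _, value, divisor in components])
    return float(np.sqrt(np.sum(u ** 2)))

# Illustrative radiometric requirement components, in percent (hypothetical values):
components = [
    ("calibration source",    0.3, 1.0),           # quoted as a 1-sigma error
    ("detector nonlinearity", 0.2, np.sqrt(3)),    # quoted as a +/- limit (rectangular)
    ("stray light",           0.4, 2.0),           # quoted as a 2-sigma error
]
u_c = combined_standard_uncertainty(components)
print(f"combined standard uncertainty = {u_c:.2f} %  (expanded, k=2: {2 * u_c:.2f} %)")
```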
Genetic Parameter Estimates for Metabolizing Two Common Pharmaceuticals in Swine.
Howard, Jeremy T; Ashwell, Melissa S; Baynes, Ronald E; Brooks, James D; Yeatts, James L; Maltecca, Christian
2018-01-01
In livestock, the regulation of drugs used to treat livestock has received increased attention and it is currently unknown how much of the phenotypic variation in drug metabolism is due to the genetics of an animal. Therefore, the objective of the study was to determine the amount of phenotypic variation in fenbendazole and flunixin meglumine drug metabolism due to genetics. The population consisted of crossbred female and castrated male nursery pigs (n = 198) that were sired by boars represented by four breeds. The animals were spread across nine batches. Drugs were administered intravenously and blood collected a minimum of 10 times over a 48 h period. Genetic parameters for the parent drug and metabolite concentration within each drug were estimated based on pharmacokinetics (PK) parameters or concentrations across time utilizing a random regression model. The PK parameters were estimated using a non-compartmental analysis. The PK model included fixed effects of sex and breed of sire along with random sire and batch effects. The random regression model utilized Legendre polynomials and included a fixed population concentration curve, sex, and breed of sire effects along with a random sire deviation from the population curve and batch effect. The sire effect included the intercept for all models except for the fenbendazole metabolite (i.e., intercept and slope). The mean heritability across PK parameters for the fenbendazole and flunixin meglumine parent drug (metabolite) was 0.15 (0.18) and 0.31 (0.40), respectively. For the parent drug (metabolite), the mean heritability across time was 0.27 (0.60) and 0.14 (0.44) for fenbendazole and flunixin meglumine, respectively. The errors surrounding the heritability estimates for the random regression model were smaller compared to estimates obtained from PK parameters. Across both the PK and the concentration-across-time models, a moderate heritability was estimated. The model that utilized the plasma drug concentration across time resulted in estimates with a smaller standard error compared to models that utilized PK parameters. The current study found that a low to moderate proportion of the phenotypic variation in metabolizing fenbendazole and flunixin meglumine was explained by genetics.
Couch height–based patient setup for abdominal radiation therapy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ohira, Shingo; Department of Medical Physics and Engineering, Osaka University Graduate School of Medicine, Suita; Ueda, Yoshihiro
2016-04-01
There are 2 methods commonly used for patient positioning in the anterior-posterior (A-P) direction: one is the skin mark patient setup method (SMPS) and the other is the couch height–based patient setup method (CHPS). This study compared the setup accuracy of these 2 methods for abdominal radiation therapy. The enrollment for this study comprised 23 patients with pancreatic cancer. For treatments (539 sessions), patients were set up by using isocenter skin marks and thereafter the treatment couch was shifted so that the distance between the isocenter and the upper side of the treatment couch was equal to that indicated on the computed tomographic (CT) image. Setup deviation in the A-P direction for CHPS was measured by matching the spine of the digitally reconstructed radiograph (DRR) of a lateral beam at simulation with that of the corresponding time-integrated electronic portal image. For SMPS with no correction (SMPS/NC), setup deviation was calculated based on the couch-level difference between SMPS and CHPS. SMPS/NC was corrected using 2 off-line correction protocols: no action level (SMPS/NAL) and extended NAL (SMPS/eNAL) protocols. Margins to compensate for deviations were calculated using the Stroom formula. A-P deviation > 5 mm was observed in 17% of SMPS/NC, 4% of SMPS/NAL, and 4% of SMPS/eNAL sessions but only in one CHPS session. For SMPS/NC, 7 patients (30%) showed deviations at an increasing rate of > 0.1 mm/fraction, but for CHPS, no such trend was observed. The standard deviations (SDs) of systematic error (Σ) were 2.6, 1.4, 0.6, and 0.8 mm and the root mean squares of random error (σ) were 2.1, 2.6, 2.7, and 0.9 mm for SMPS/NC, SMPS/NAL, SMPS/eNAL, and CHPS, respectively. Margins to compensate for the deviations were wide for SMPS/NC (6.7 mm), smaller for SMPS/NAL (4.6 mm) and SMPS/eNAL (3.1 mm), and smallest for CHPS (2.2 mm). Achieving better setup with smaller margins, CHPS appears to be a reproducible method for abdominal patient setup.
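The quoted margins are consistent with the commonly cited form of the Stroom recipe, margin = 2Σ + 0.7σ, as this quick check shows:

```python
# Reported systematic (Sigma, SD) and random (sigma, RMS) errors in mm, and the
# margin from the commonly cited Stroom recipe: margin = 2*Sigma + 0.7*sigma.
setups = {
    "SMPS/NC":   (2.6, 2.1),
    "SMPS/NAL":  (1.4, 2.6),
    "SMPS/eNAL": (0.6, 2.7),
    "CHPS":      (0.8, 0.9),
}
for name, (Sigma, sigma) in setups.items():
    margin = 2 * Sigma + 0.7 * sigma
    print(f"{name:10s} margin = 2*{Sigma} + 0.7*{sigma} = {margin:.1f} mm")
# Output: 6.7, 4.6, 3.1, and 2.2 mm, matching the margins quoted in the abstract.
```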
Genetic Parameter Estimates for Metabolizing Two Common Pharmaceuticals in Swine
Howard, Jeremy T.; Ashwell, Melissa S.; Baynes, Ronald E.; Brooks, James D.; Yeatts, James L.; Maltecca, Christian
2018-01-01
In livestock, the regulation of drugs used to treat livestock has received increased attention and it is currently unknown how much of the phenotypic variation in drug metabolism is due to the genetics of an animal. Therefore, the objective of the study was to determine the amount of phenotypic variation in fenbendazole and flunixin meglumine drug metabolism due to genetics. The population consisted of crossbred female and castrated male nursery pigs (n = 198) that were sired by boars represented by four breeds. The animals were spread across nine batches. Drugs were administered intravenously and blood collected a minimum of 10 times over a 48 h period. Genetic parameters for the parent drug and metabolite concentration within each drug were estimated based on pharmacokinetics (PK) parameters or concentrations across time utilizing a random regression model. The PK parameters were estimated using a non-compartmental analysis. The PK model included fixed effects of sex and breed of sire along with random sire and batch effects. The random regression model utilized Legendre polynomials and included a fixed population concentration curve, sex, and breed of sire effects along with a random sire deviation from the population curve and batch effect. The sire effect included the intercept for all models except for the fenbendazole metabolite (i.e., intercept and slope). The mean heritability across PK parameters for the fenbendazole and flunixin meglumine parent drug (metabolite) was 0.15 (0.18) and 0.31 (0.40), respectively. For the parent drug (metabolite), the mean heritability across time was 0.27 (0.60) and 0.14 (0.44) for fenbendazole and flunixin meglumine, respectively. The errors surrounding the heritability estimates for the random regression model were smaller compared to estimates obtained from PK parameters. Across both the PK and the concentration-across-time models, a moderate heritability was estimated. The model that utilized the plasma drug concentration across time resulted in estimates with a smaller standard error compared to models that utilized PK parameters. The current study found that a low to moderate proportion of the phenotypic variation in metabolizing fenbendazole and flunixin meglumine was explained by genetics. PMID:29487615
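As an illustration of the random regression setup described above, the sketch below builds Legendre polynomial covariates from sampling times mapped to [-1, 1]. The sampling times are assumptions, only the 0-48 h window comes from the abstract, and animal-breeding software often uses a normalized Legendre variant rather than the plain polynomials shown here.

```python
import numpy as np

def legendre_basis(times, t_min=0.0, t_max=48.0, degree=2):
    """Legendre polynomial covariates for a random regression model (sketch).

    Sampling times (h) are mapped to [-1, 1] and evaluated with Legendre
    polynomials up to `degree`; column 0 is the intercept term and column 1 the
    slope term (the sire effect in the abstract used the intercept, plus the
    slope for the fenbendazole metabolite)."""
    x = 2.0 * (np.asarray(times, dtype=float) - t_min) / (t_max - t_min) - 1.0
    return np.polynomial.legendre.legvander(x, degree)

# Hypothetical sampling times across the 48 h window
times = np.array([0.25, 0.5, 1, 2, 4, 8, 12, 24, 36, 48])
Z = legendre_basis(times, degree=2)
print(Z.round(3))    # design columns for the fixed and random regression terms
```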
Optimum data weighting and error calibration for estimation of gravitational parameters
NASA Technical Reports Server (NTRS)
Lerch, Francis J.
1989-01-01
A new technique was developed for the weighting of data from satellite tracking systems in order to obtain an optimum least-squares solution and an error calibration for the solution parameters. Data sets from optical, electronic, and laser systems on 17 satellites in the Goddard Earth Model-T1 (GEM-T1) were employed toward application of this technique for gravity field parameters. GEM-T2 (31 satellites) was also recently computed as a direct application of the method and is summarized. The method adjusts the data weights so that subset solutions of the data agree with the complete solution to within their error estimates. With the adjusted weights, the process provides for an automatic calibration of the error estimates for the solution parameters. The data weights derived are generally much smaller than corresponding weights obtained from nominal values of observation accuracy or residuals. Independent tests show significant improvement for solutions with optimal weighting. The technique is general and may be applied to orbit parameters, station coordinates, or parameters other than the gravity model.
NASA Technical Reports Server (NTRS)
Auger, Ludovic; Tangborn, Andrew; Atlas, Robert (Technical Monitor)
2002-01-01
A suboptimal Kalman filter system which evolves error covariances in terms of a truncated set of wavelet coefficients has been developed for the assimilation of chemical tracer observations of CH4. The truncation is carried out in such a way that the resolution of the error covariance is reduced only in the zonal direction, where gradients are smaller. Assimilation experiments lasting 24 days and using different degrees of truncation were carried out. These reduced the covariance by 90, 97, and 99% and the computational cost of covariance propagation by 80, 93, and 96%, respectively. The differences in both the error covariance and the tracer field between the truncated and full systems over this period were found not to grow in the first case and to grow relatively slowly in the latter two cases. The largest errors in the tracer fields were found to occur in regions with the largest zonal gradients in the tracer field.
Bathymetric surveying with GPS and heave, pitch, and roll compensation
Work, P.A.; Hansen, M.; Rogers, W.E.
1998-01-01
Field and laboratory tests of a shipborne hydrographic survey system were conducted. The system consists of two 12-channel GPS receivers (one on-board, one fixed on shore), a digital acoustic fathometer, and a digital heave-pitch-roll (HPR) recorder. Laboratory tests of the HPR recorder and fathometer are documented. Results of field tests of the isolated GPS system and then of the entire suite of instruments are presented. A method for data reduction is developed to account for vertical errors introduced by roll and pitch of the survey vessel, which can be substantial (decimeters). The GPS vertical position data are found to be reliable to 2-3 cm and the fathometer to 5 cm in the laboratory. The field test of the complete system in shallow water (<2 m) indicates absolute vertical accuracy of 10-20 cm. Much of this error is attributed to the fathometer. Careful surveying and equipment setup can minimize systematic error and yield much smaller average errors.
Synthesis of hover autopilots for rotary-wing VTOL aircraft
NASA Technical Reports Server (NTRS)
Hall, W. E.; Bryson, A. E., Jr.
1972-01-01
The practical situation is considered where imperfect information on only a few rotor and fuselage state variables is available. Filters are designed to estimate all the state variables from noisy measurements of fuselage pitch/roll angles and from noisy measurements of both fuselage and rotor pitch/roll angles. The mean square response of the vehicle to a very gusty, random wind is computed using various filter/controllers and is found to be quite satisfactory although, of course, not so good as when one has perfect information (idealized case). The second part of the report considers precision hover over a point on the ground. A vehicle model without rotor dynamics is used and feedback signals in position and integral of position error are added. The mean square response of the vehicle to a very gusty, random wind is computed, assuming perfect information feedback, and is found to be excellent. The integral error feedback gives zero position error for a steady wind, and smaller position error for a random wind.
Zimmerman, Dale L; Fang, Xiangming; Mazumdar, Soumya; Rushton, Gerard
2007-01-10
The assignment of a point-level geocode to subjects' residences is an important data assimilation component of many geographic public health studies. Often, these assignments are made by a method known as automated geocoding, which attempts to match each subject's address to an address-ranged street segment georeferenced within a streetline database and then interpolate the position of the address along that segment. Unfortunately, this process results in positional errors. Our study sought to model the probability distribution of positional errors associated with automated geocoding and E911 geocoding. Positional errors were determined for 1423 rural addresses in Carroll County, Iowa as the vector difference between each 100%-matched automated geocode and its true location as determined by orthophoto and parcel information. Errors were also determined for 1449 60%-matched geocodes and 2354 E911 geocodes. Huge (> 15 km) outliers occurred among the 60%-matched geocoding errors; outliers occurred for the other two types of geocoding errors also but were much smaller. E911 geocoding was more accurate (median error length = 44 m) than 100%-matched automated geocoding (median error length = 168 m). The empirical distributions of positional errors associated with 100%-matched automated geocoding and E911 geocoding exhibited a distinctive Greek-cross shape and had many other interesting features that were not capable of being fitted adequately by a single bivariate normal or t distribution. However, mixtures of t distributions with two or three components fit the errors very well. Mixtures of bivariate t distributions with few components appear to be flexible enough to fit many positional error datasets associated with geocoding, yet parsimonious enough to be feasible for nascent applications of measurement-error methodology to spatial epidemiology.
The refractive index of krypton for lambda in the closed interval 168-288 nm
NASA Technical Reports Server (NTRS)
Smith, P. L.; Parkinson, W. H.; Huber, M. C. E.
1975-01-01
The index of refraction of krypton has been measured at 27 wavelengths between and including 168 and 288 nm. The probable error of each measurement is plus or minus 0.1%. Our results are compared with other measurements. Our data are about 3.8% smaller than those of Abjean et al.
Yandayan, T; Geckeler, R D; Aksulu, M; Akgoz, S A; Ozgur, B
2016-05-01
The application of advanced error-separating shearing techniques to the precise calibration of autocollimators with Small Angle Generators (SAGs) was carried out for the first time. The experimental realization was achieved using the High Precision Small Angle Generator (HPSAG) of TUBITAK UME under classical dimensional metrology laboratory environmental conditions. The standard uncertainty value of 5 mas (24.2 nrad) reached by classical calibration method was improved to the level of 1.38 mas (6.7 nrad). Shearing techniques, which offer a unique opportunity to separate the errors of devices without recourse to any external standard, were first adapted by Physikalisch-Technische Bundesanstalt (PTB) to the calibration of autocollimators with angle encoders. It has been demonstrated experimentally in a clean room environment using the primary angle standard of PTB (WMT 220). The application of the technique to a different type of angle measurement system extends the range of the shearing technique further and reveals other advantages. For example, the angular scales of the SAGs are based on linear measurement systems (e.g., capacitive nanosensors for the HPSAG). Therefore, SAGs show different systematic errors when compared to angle encoders. In addition to the error-separation of HPSAG and the autocollimator, detailed investigations on error sources were carried out. Apart from determination of the systematic errors of the capacitive sensor used in the HPSAG, it was also demonstrated that the shearing method enables the unique opportunity to characterize other error sources such as errors due to temperature drift in long term measurements. This proves that the shearing technique is a very powerful method for investigating angle measuring systems, for their improvement, and for specifying precautions to be taken during the measurements.
A Regional CO2 Observing System Simulation Experiment for the ASCENDS Satellite Mission
NASA Technical Reports Server (NTRS)
Wang, J. S.; Kawa, S. R.; Eluszkiewicz, J.; Baker, D. F.; Mountain, M.; Henderson, J.; Nehrkorn, T.; Zaccheo, T. S.
2014-01-01
Top-down estimates of the spatiotemporal variations in emissions and uptake of CO2 will benefit from the increasing measurement density brought by recent and future additions to the suite of in situ and remote CO2 measurement platforms. In particular, the planned NASA Active Sensing of CO2 Emissions over Nights, Days, and Seasons (ASCENDS) satellite mission will provide greater coverage in cloudy regions, at high latitudes, and at night than passive satellite systems, as well as high precision and accuracy. In a novel approach to quantifying the ability of satellite column measurements to constrain CO2 fluxes, we use a portable library of footprints (surface influence functions) generated by the WRF-STILT Lagrangian transport model in a regional Bayesian synthesis inversion. The regional Lagrangian framework is well suited to make use of ASCENDS observations to constrain fluxes at high resolution, in this case at 1 degree latitude x 1 degree longitude and weekly for North America. We consider random measurement errors only, modeled as a function of mission and instrument design specifications along with realistic atmospheric and surface conditions. We find that the ASCENDS observations could potentially reduce flux uncertainties substantially at biome and finer scales. At the 1 degree x 1 degree, weekly scale, the largest uncertainty reductions, on the order of 50 percent, occur where and when there is good coverage by observations with low measurement errors and the a priori uncertainties are large. Uncertainty reductions are smaller for a 1.57 micron candidate wavelength than for a 2.05 micron wavelength, and are smaller for the higher of the two measurement error levels that we consider (1.0 ppm vs. 0.5 ppm clear-sky error at Railroad Valley, Nevada). Uncertainty reductions at the annual, biome scale range from 40 percent to 75 percent across our four instrument design cases, and from 65 percent to 85 percent for the continent as a whole. Our uncertainty reductions at various scales are substantially smaller than those from a global ASCENDS inversion on a coarser grid, demonstrating how quantitative results can depend on inversion methodology. The a posteriori flux uncertainties we obtain, ranging from 0.01 to 0.06 Pg C yr-1 across the biomes, would meet requirements for improved understanding of long-term carbon sinks suggested by a previous study.
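A toy sketch of the Bayesian synthesis inversion step follows: the posterior flux covariance is (HᵀR⁻¹H + B⁻¹)⁻¹, and the uncertainty reduction is 1 − σ_post/σ_prior. The dimensions, sensitivities, and prior used here are invented stand-ins for the WRF-STILT footprints and ASCENDS error model, so the printed numbers only reproduce the qualitative behavior (smaller measurement error, larger reduction).

```python
import numpy as np

def posterior_flux_uncertainty(H, prior_cov, obs_err_sd):
    """Bayesian synthesis inversion (sketch): posterior flux covariance given a
    linearized observation operator H (observations x fluxes), a prior flux
    covariance, and independent observation errors with SD obs_err_sd."""
    R_inv = np.eye(H.shape[0]) / obs_err_sd**2
    return np.linalg.inv(H.T @ R_inv @ H + np.linalg.inv(prior_cov))

# Toy setup: 10 weekly flux elements observed through 200 column "footprints"
# (random sensitivities standing in for transport-model influence functions).
rng = np.random.default_rng(0)
n_obs, n_flux = 200, 10
H = rng.uniform(0.0, 0.05, size=(n_obs, n_flux))     # ppm per unit flux (hypothetical)
prior_sd = 1.0                                       # prior flux uncertainty (arbitrary units)
prior_cov = np.eye(n_flux) * prior_sd**2

for obs_err in (0.5, 1.0):                           # the two clear-sky error levels considered
    post_sd = np.sqrt(np.diag(posterior_flux_uncertainty(H, prior_cov, obs_err)))
    reduction = 1.0 - post_sd / prior_sd
    print(f"obs error {obs_err} ppm: mean uncertainty reduction = {reduction.mean():.0%}")
```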
A multi points ultrasonic detection method for material flow of belt conveyor
NASA Astrophysics Data System (ADS)
Zhang, Li; He, Rongjun
2018-03-01
Single-point ultrasonic ranging, as used for material flow detection on belt conveyors, produces large detection errors when the coal is unevenly distributed or in large lumps. A material flow detection method for belt conveyors is therefore designed based on multi-point ultrasonic ranging. The method estimates the approximate cross-sectional area of the material by locating multiple points on the surfaces of the material and the belt, and then obtains the material flow from the running speed of the belt conveyor. Test results show that the method has a smaller detection error than single-point ultrasonic ranging when the coal is large and unevenly distributed.
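A minimal sketch of the multi-point idea, with hypothetical sensor geometry and readings: the depth profile is the difference between the empty-belt and material-surface distances at each sensor position, the cross-sectional area follows from trapezoidal integration across the belt, and the flow is that area times the belt speed. The exact geometry and calibration in the paper may differ.

```python
import numpy as np

def material_flow(sensor_x, d_belt, d_material, belt_speed, bulk_density=None):
    """Estimate conveyor material flow from multi-point ultrasonic ranging (sketch).

    sensor_x    -- lateral positions of the ultrasonic sensors across the belt (m)
    d_belt      -- sensor-to-belt distances with an empty belt (m)
    d_material  -- sensor-to-material-surface distances at the same positions (m)
    belt_speed  -- conveyor speed (m/s)

    The material depth at each point is d_belt - d_material; the trapezoidal rule
    across the belt gives an approximate cross-sectional area, and multiplying by
    belt speed gives volumetric flow (m^3/s).  If a bulk density (kg/m^3) is
    supplied, mass flow (kg/s) is returned instead.
    """
    depth = np.clip(np.asarray(d_belt) - np.asarray(d_material), 0.0, None)
    x = np.asarray(sensor_x)
    area = np.sum(0.5 * (depth[1:] + depth[:-1]) * np.diff(x))   # trapezoidal area, m^2
    flow = area * belt_speed
    return flow if bulk_density is None else flow * bulk_density

# Five sensors across a 1 m wide belt; uneven coal pile, belt running at 2 m/s
sensor_x = np.linspace(0.0, 1.0, 5)
d_belt = np.full(5, 0.80)                          # empty-belt reference distances
d_material = np.array([0.80, 0.65, 0.55, 0.70, 0.80])
print(f"volumetric flow ≈ {material_flow(sensor_x, d_belt, d_material, 2.0):.3f} m^3/s")
```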