Sample records for small standard errors

  1. Comparing interval estimates for small sample ordinal CFA models

    PubMed Central

    Natesan, Prathiba

    2015-01-01

    Robust maximum likelihood (RML) and asymptotically generalized least squares (AGLS) methods have been recommended for fitting ordinal structural equation models. Studies show that some of these methods underestimate standard errors. However, these studies have not investigated the coverage and bias of interval estimates. An estimate with a reasonable standard error could still be severely biased. This can only be known by systematically investigating the interval estimates. The present study compares Bayesian, RML, and AGLS interval estimates of factor correlations in ordinal confirmatory factor analysis (CFA) models for small sample data. Six sample sizes, 3 factor correlations, and 2 factor score distributions (multivariate normal and multivariate mildly skewed) were studied. Two Bayesian prior specifications, informative and relatively less informative, were studied. Undercoverage of confidence intervals and underestimation of standard errors were common in non-Bayesian methods. Underestimated standard errors may lead to inflated Type-I error rates. Non-Bayesian intervals were more positively than negatively biased; that is, most intervals that did not contain the true value were greater than the true value. Some non-Bayesian methods had non-converging and inadmissible solutions for small samples and non-normal data. Bayesian empirical standard error estimates for informative and relatively less informative priors were closer to the average standard errors of the estimates. The coverage of Bayesian credibility intervals was closer to what was expected, with overcoverage in a few cases. Although some Bayesian credibility intervals were wider, they reflected the nature of statistical uncertainty that comes with the data (e.g., small sample). Bayesian point estimates were also more accurate than non-Bayesian estimates. The results illustrate the importance of analyzing coverage and bias of interval estimates, and how ignoring interval estimates can be misleading. Therefore, editors and policymakers should continue to emphasize the inclusion of interval estimates in research. PMID:26579002
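
    As a minimal illustration of the kind of coverage-and-bias check the abstract argues for, the sketch below (Python, standard library only) simulates 95% intervals for a sample mean rather than an ordinal-CFA factor correlation, which would require an SEM package, and tallies how often the interval misses above or below the true value; the simulation settings are arbitrary.

      import random, statistics

      def coverage_check(true_mean=0.5, n=20, reps=5000, z=1.96):
          hits = above = below = 0
          for _ in range(reps):
              sample = [random.gauss(true_mean, 1.0) for _ in range(n)]
              m = statistics.mean(sample)
              se = statistics.stdev(sample) / n ** 0.5
              lo, hi = m - z * se, m + z * se
              if lo <= true_mean <= hi:
                  hits += 1
              elif lo > true_mean:      # whole interval above the true value
                  above += 1
              else:                     # whole interval below the true value
                  below += 1
          return hits / reps, above / reps, below / reps

      cov, pos, neg = coverage_check()
      print(f"coverage={cov:.3f}  missed high={pos:.3f}  missed low={neg:.3f}")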

  2. Comparing interval estimates for small sample ordinal CFA models.

    PubMed

    Natesan, Prathiba

    2015-01-01

    Robust maximum likelihood (RML) and asymptotically generalized least squares (AGLS) methods have been recommended for fitting ordinal structural equation models. Studies show that some of these methods underestimate standard errors. However, these studies have not investigated the coverage and bias of interval estimates. An estimate with a reasonable standard error could still be severely biased. This can only be known by systematically investigating the interval estimates. The present study compares Bayesian, RML, and AGLS interval estimates of factor correlations in ordinal confirmatory factor analysis (CFA) models for small sample data. Six sample sizes, 3 factor correlations, and 2 factor score distributions (multivariate normal and multivariate mildly skewed) were studied. Two Bayesian prior specifications, informative and relatively less informative, were studied. Undercoverage of confidence intervals and underestimation of standard errors were common in non-Bayesian methods. Underestimated standard errors may lead to inflated Type-I error rates. Non-Bayesian intervals were more positively than negatively biased; that is, most intervals that did not contain the true value were greater than the true value. Some non-Bayesian methods had non-converging and inadmissible solutions for small samples and non-normal data. Bayesian empirical standard error estimates for informative and relatively less informative priors were closer to the average standard errors of the estimates. The coverage of Bayesian credibility intervals was closer to what was expected, with overcoverage in a few cases. Although some Bayesian credibility intervals were wider, they reflected the nature of statistical uncertainty that comes with the data (e.g., small sample). Bayesian point estimates were also more accurate than non-Bayesian estimates. The results illustrate the importance of analyzing coverage and bias of interval estimates, and how ignoring interval estimates can be misleading. Therefore, editors and policymakers should continue to emphasize the inclusion of interval estimates in research.

  3. The two errors of using the within-subject standard deviation (WSD) as the standard error of a reliable change index.

    PubMed

    Maassen, Gerard H

    2010-08-01

    In this Journal, Lewis and colleagues introduced a new Reliable Change Index (RCI(WSD)), which incorporated the within-subject standard deviation (WSD) of a repeated measurement design as the standard error. In this note, two opposite errors in using WSD this way are demonstrated. First, because WSD is the standard error of measurement of only a single assessment, it is too small when practice effects are absent; too many individuals will then be designated reliably changed. Second, WSD can grow without limit to the extent that differential practice effects occur, which can even make RCI(WSD) unable to detect any reliable change.
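
    A hedged sketch of the arithmetic at issue: the classical reliable change index divides the observed change by the standard error of a difference built from two single-assessment standard errors of measurement. The exact RCI(WSD) formula of Lewis and colleagues is not reproduced here, and the numbers below are hypothetical.

      import math

      def rci_classical(x1, x2, sd_baseline, reliability):
          sem = sd_baseline * math.sqrt(1.0 - reliability)   # SE of one assessment
          se_diff = math.sqrt(2.0) * sem                     # SE of a difference score
          return (x2 - x1) / se_diff

      rci = rci_classical(x1=24.0, x2=31.0, sd_baseline=6.0, reliability=0.85)
      print(f"RCI = {rci:.2f}  (|RCI| > 1.96 suggests reliable change at the 5% level)")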

  4. The Calibration of Gloss Reference Standards

    NASA Astrophysics Data System (ADS)

    Budde, W.

    1980-04-01

    In present international and national standards for the measurement of specular gloss, the primary and secondary reference standards are defined for monochromatic radiation. However, the glossmeter specified uses polychromatic radiation (CIE Standard Illuminant C) and the CIE Standard Photometric Observer. This produces errors in practical gloss measurements of up to 0.5%. Although this may be considered small compared to the accuracy of most practical gloss measurements, such an error should not be tolerated in the calibration of secondary standards. Corrections for such errors are presented, and various alternatives for amendments of the existing documentary standards are discussed.

  5. Application of advanced shearing techniques to the calibration of autocollimators with small angle generators and investigation of error sources.

    PubMed

    Yandayan, T; Geckeler, R D; Aksulu, M; Akgoz, S A; Ozgur, B

    2016-05-01

    The application of advanced error-separating shearing techniques to the precise calibration of autocollimators with Small Angle Generators (SAGs) was carried out for the first time. The experimental realization was achieved using the High Precision Small Angle Generator (HPSAG) of TUBITAK UME under classical dimensional metrology laboratory environmental conditions. The standard uncertainty value of 5 mas (24.2 nrad) reached by the classical calibration method was improved to the level of 1.38 mas (6.7 nrad). Shearing techniques, which offer a unique opportunity to separate the errors of devices without recourse to any external standard, were first adapted by Physikalisch-Technische Bundesanstalt (PTB) to the calibration of autocollimators with angle encoders. That approach was demonstrated experimentally in a clean room environment using the primary angle standard of PTB (WMT 220). The application of the technique to a different type of angle measurement system extends the range of the shearing technique further and reveals other advantages. For example, the angular scales of the SAGs are based on linear measurement systems (e.g., capacitive nanosensors for the HPSAG). Therefore, SAGs show different systematic errors when compared to angle encoders. In addition to the error-separation of the HPSAG and the autocollimator, detailed investigations on error sources were carried out. Apart from the determination of the systematic errors of the capacitive sensor used in the HPSAG, it was also demonstrated that the shearing method offers the unique opportunity to characterize other error sources such as errors due to temperature drift in long term measurements. This proves that the shearing technique is a very powerful method for investigating angle measuring systems, for their improvement, and for specifying precautions to be taken during the measurements.

  6. A Hands-On Exercise Improves Understanding of the Standard Error of the Mean

    ERIC Educational Resources Information Center

    Ryan, Robert S.

    2006-01-01

    One of the most difficult concepts for statistics students is the standard error of the mean. To improve understanding of this concept, 1 group of students used a hands-on procedure to sample from small populations representing either a true or false null hypothesis. The distribution of 120 sample means (n = 3) from each population had standard…
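
    A code analogue of the hands-on exercise (illustrative only, not Ryan's actual materials): draw many samples of n = 3 with replacement from a small population and compare the standard deviation of the sample means with sigma/sqrt(n).

      import random, statistics

      population = [2, 4, 4, 4, 5, 5, 7, 9]   # small illustrative population
      means = [statistics.mean(random.choices(population, k=3))
               for _ in range(10000)]

      sigma = statistics.pstdev(population)
      print(f"SD of 10000 sample means (n=3): {statistics.stdev(means):.3f}")
      print(f"sigma / sqrt(3):                {sigma / 3 ** 0.5:.3f}")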

  7. SU-E-T-257: Output Constancy: Reducing Measurement Variations in a Large Practice Group

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hedrick, K; Fitzgerald, T; Miller, R

    2014-06-01

    Purpose: To standardize output constancy check procedures in a large medical physics practice group covering multiple sites, in order to identify and reduce small systematic errors caused by differences in equipment and the procedures of multiple physicists. Methods: A standardized machine output constancy check for both photons and electrons was instituted within the practice group in 2010. After conducting annual TG-51 measurements in water and adjusting the linac to deliver 1.00 cGy/MU at Dmax, an acrylic phantom (comparable at all sites) and a PTW Farmer ion chamber are used to obtain monthly output constancy reference readings. From the collected charge reading, measurements of air pressure and temperature, and the chamber Ndw and Pelec, a value we call the Kacrylic factor is determined, relating the chamber reading in acrylic to the dose in water with standard set-up conditions. This procedure easily allows for multiple equipment combinations to be used at any site. The Kacrylic factors and output results from all sites and machines are logged monthly in a central database and used to monitor trends in calibration and output. Results: The practice group consists of 19 sites, currently with 34 Varian and 8 Elekta linacs (24 Varian and 5 Elekta linacs in 2010). Over the past three years, the standard deviation of Kacrylic factors measured on all machines decreased by 20% for photons and high energy electrons as systematic errors were found and reduced. Low energy electrons showed very little change in the distribution of Kacrylic values. Small errors in linac beam data were found by investigating outlier Kacrylic values. Conclusion: While the use of acrylic phantoms introduces an additional source of error through small differences in depth and effective depth, the new standardized procedure eliminates potential sources of error from using many different phantoms and results in more consistent output constancy measurements.

  8. Error-Transparent Quantum Gates for Small Logical Qubit Architectures

    NASA Astrophysics Data System (ADS)

    Kapit, Eliot

    2018-02-01

    One of the largest obstacles to building a quantum computer is gate error, where the physical evolution of the state of a qubit or group of qubits during a gate operation does not match the intended unitary transformation. Gate error stems from a combination of control errors and random single qubit errors from interaction with the environment. While great strides have been made in mitigating control errors, intrinsic qubit error remains a serious problem that limits gate fidelity in modern qubit architectures. Simultaneously, recent developments of small error-corrected logical qubit devices promise significant increases in logical state lifetime, but translating those improvements into increases in gate fidelity is a complex challenge. In this Letter, we construct protocols for gates on and between small logical qubit devices which inherit the parent device's tolerance to single qubit errors which occur at any time before or during the gate. We consider two such devices, a passive implementation of the three-qubit bit flip code, and the author's own [E. Kapit, Phys. Rev. Lett. 116, 150501 (2016), 10.1103/PhysRevLett.116.150501] very small logical qubit (VSLQ) design, and propose error-tolerant gate sets for both. The effective logical gate error rate in these models displays superlinear error reduction with linear increases in single qubit lifetime, proving that passive error correction is capable of increasing gate fidelity. Using a standard phenomenological noise model for superconducting qubits, we demonstrate a realistic, universal one- and two-qubit gate set for the VSLQ, with error rates an order of magnitude lower than those for same-duration operations on single qubits or pairs of qubits. These developments further suggest that incorporating small logical qubits into a measurement based code could substantially improve code performance.

  9. Worldwide Survey of Alcohol and Nonmedical Drug Use among Military Personnel: 1982,

    DTIC Science & Technology

    1983-01-01

    cell. The first number is an estimate of the percentage of the population with the characteristics that define the cell. The second number, in...multiplying 1.96 times the standard error for that cell. (Obviously, for very small or very large estimates, the respective smallest or largest value in...that the cell proportions estimate the true population value more precisely, and larger standard errors indicate that the true population value is
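
    The table convention described in this snippet, an estimate plus or minus 1.96 standard errors, is a standard 95% confidence interval. A minimal sketch for a proportion, assuming simple random sampling (the survey itself used a complex design, so its standard errors would differ), with hypothetical numbers:

      import math

      def proportion_ci(p_hat, n, z=1.96):
          se = math.sqrt(p_hat * (1.0 - p_hat) / n)   # SE under simple random sampling
          return p_hat - z * se, p_hat + z * se, se

      lo, hi, se = proportion_ci(p_hat=0.23, n=1200)  # hypothetical cell estimate
      print(f"estimate 23.0%, SE {se:.2%}, 95% CI ({lo:.1%}, {hi:.1%})")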

  10. Effect of grid transparency and finite collector size on determining ion temperature and density by the retarding potential analyzer

    NASA Technical Reports Server (NTRS)

    Troy, B. E., Jr.; Maier, E. J.

    1975-01-01

    The effects of the grid transparency and finite collector size on the values of thermal ion density and temperature determined by the standard RPA (retarding potential analyzer) analysis method are investigated. The current-voltage curves calculated for varying RPA parameters and a given ion mass, temperature, and density are analyzed by the standard RPA method. It is found that only small errors in temperature and density are introduced for an RPA with typical dimensions, and that even when the density error is substantial for nontypical dimensions, the temperature error remains minimal.

  11. Effect of Random Circuit Fabrication Errors on Small Signal Gain and Phase in Helix Traveling Wave Tubes

    NASA Astrophysics Data System (ADS)

    Pengvanich, P.; Chernin, D. P.; Lau, Y. Y.; Luginsland, J. W.; Gilgenbach, R. M.

    2007-11-01

    Motivated by the current interest in mm-wave and THz sources, which use miniature, difficult-to-fabricate circuit components, we evaluate the statistical effects of random fabrication errors on a helix traveling wave tube amplifier's small signal characteristics. The small signal theory is treated in a continuum model in which the electron beam is assumed to be monoenergetic and axially symmetric about the helix axis. Perturbations that vary randomly along the beam axis are introduced in the dimensionless Pierce parameters b, the beam-wave velocity mismatch; C, the gain parameter; and d, the cold tube circuit loss. Our study shows, as expected, that perturbation in b dominates the other two. The extensive numerical data have been confirmed by our analytic theory. They show in particular that the standard deviation of the output phase is linearly proportional to the standard deviation of the individual perturbations in b, C, and d. Simple formulas have been derived which yield the output phase variations in terms of the statistical random manufacturing errors. This work was supported by AFOSR and by ONR.

  12. Effect of photogrammetric reading error on slope-frequency distributions. [obtained from Apollo 17 mission

    NASA Technical Reports Server (NTRS)

    Moore, H. J.; Wu, S. C.

    1973-01-01

    The effect of reading error on two hypothetical slope-frequency distributions and two slope-frequency distributions from actual lunar data is examined in order to ensure that these errors do not cause excessive overestimates of algebraic standard deviations for the slope-frequency distributions. The errors introduced are insignificant when the reading error is small and the slope length is large. A method for correcting the errors in slope-frequency distributions is presented and applied to 11 distributions obtained from Apollo 15, 16, and 17 panoramic camera photographs and Apollo 16 metric camera photographs.

  13. Evaluation of lens distortion errors in video-based motion analysis

    NASA Technical Reports Server (NTRS)

    Poliner, Jeffrey; Wilmington, Robert; Klute, Glenn K.; Micocci, Angelo

    1993-01-01

    In an effort to study lens distortion errors, a grid of points of known dimensions was constructed and videotaped using a standard and a wide-angle lens. Recorded images were played back on a VCR and stored on a personal computer. Using these stored images, two experiments were conducted. Errors were calculated as the distance between the known coordinates of each point and its calculated coordinates. The purposes of this project were as follows: (1) to develop the methodology to evaluate errors introduced by lens distortion; (2) to quantify and compare errors introduced by use of both a 'standard' and a wide-angle lens; (3) to investigate techniques to minimize lens-induced errors; and (4) to determine the most effective use of calibration points when using a wide-angle lens with a significant amount of distortion. It was seen that when using a wide-angle lens, errors from lens distortion could be as high as 10 percent of the size of the entire field of view. Even with a standard lens, there was a small amount of lens distortion. It was also found that the choice of calibration points influenced the lens distortion error. By properly selecting the calibration points and avoiding the outermost regions of a wide-angle lens, the error from lens distortion can be kept below approximately 0.5 percent with a standard lens and 1.5 percent with a wide-angle lens.

  14. Comparison of rate one-half, equivalent constraint length 24, binary convolutional codes for use with sequential decoding on the deep-space channel

    NASA Technical Reports Server (NTRS)

    Massey, J. L.

    1976-01-01

    Virtually all previously suggested rate 1/2 binary convolutional codes with KE = 24 are compared. Their distance properties are given; and their performance, both in computation and in error probability, with sequential decoding on the deep-space channel is determined by simulation. Recommendations are made both for the choice of a specific KE = 24 code and for codes to be included in future coding standards for the deep-space channel. A new result given in this report is a method for determining the statistical significance of error probability data when the error probability is so small that it is not feasible to perform enough decoding simulations to obtain more than a very small number of decoding errors.
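
    One standard way to attach significance to a very small number of observed decoding errors, say k errors in N simulated frames, is an exact binomial upper confidence bound; the sketch below is a generic illustration of that idea, not necessarily the method of the report.

      import math

      def binom_cdf(k, n, p):
          # P(X <= k) for X ~ Binomial(n, p); only the first k+1 terms are needed
          return sum(math.comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

      def upper_bound(k, n, conf=0.95, tol=1e-10):
          # exact (Clopper-Pearson) upper bound on the error probability, by
          # bisection; for k = 0 this reproduces the "rule of three", about 3/n
          lo, hi = 0.0, 1.0
          while hi - lo > tol:
              mid = (lo + hi) / 2
              if binom_cdf(k, n, mid) > 1 - conf:
                  lo = mid
              else:
                  hi = mid
          return (lo + hi) / 2

      # e.g. 2 decoding errors observed in 10000 simulated frames:
      print(f"95% upper bound on error probability: {upper_bound(2, 10000):.2e}")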

  15. New dimension analyses with error analysis for quaking aspen and black spruce

    NASA Technical Reports Server (NTRS)

    Woods, K. D.; Botkin, D. B.; Feiveson, A. H.

    1987-01-01

    Dimension analyses for black spruce in wetland stands and for trembling aspen are reported, including new approaches to error analysis. Biomass estimates for sacrificed trees have standard errors of 1 to 3%; standard errors for leaf areas are 10 to 20%. Bole biomass estimation accounts for most of the error for biomass, while estimation of branch characteristics and area/weight ratios accounts for the leaf area error. Error analysis provides insight for cost-effective design of future analyses. Predictive equations for biomass and leaf area, with empirically derived estimators of prediction error, are given. Systematic prediction errors for small aspen trees and for leaf area of spruce from different site-types suggest a need for different predictive models within species. Predictive equations are compared with published equations; significant differences may be due to species responses to regional or site differences. Proportional contributions of component biomass in aspen change in ways related to tree size and stand development. Spruce maintains comparatively constant proportions with size, but shows changes corresponding to site. This suggests greater morphological plasticity of aspen and significance for spruce of nutrient conditions.

  16. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Olama, Mohammed M; Matalgah, Mustafa M; Bobrek, Miljko

    Traditional encryption techniques require packet overhead, produce processing time delay, and suffer from severe quality of service deterioration due to fades and interference in wireless channels. These issues reduce the effective transmission data rate (throughput) considerably in wireless communications, where data rate with limited bandwidth is the main constraint. In this paper, performance evaluation analyses are conducted for an integrated signaling-encryption mechanism that is secure and enables improved throughput and probability of bit-error in wireless channels. This mechanism eliminates the drawbacks stated herein by encrypting only a small portion of an entire transmitted frame, while the rest is not subject to traditional encryption but goes through a signaling process (designed transformation) with the plaintext of the portion selected for encryption. We also propose to incorporate error correction coding solely on the small encrypted portion of the data to drastically improve the overall bit-error rate performance while not noticeably increasing the required bit-rate. We focus on validating the signaling-encryption mechanism utilizing Hamming and convolutional error correction coding by conducting an end-to-end system-level simulation-based study. The average probability of bit-error and throughput of the encryption mechanism are evaluated over standard Gaussian and Rayleigh fading-type channels and compared to the ones of the conventional advanced encryption standard (AES).

  17. Hemispheric Differences in Processing Handwritten Cursive

    ERIC Educational Resources Information Center

    Hellige, Joseph B.; Adamson, Maheen M.

    2007-01-01

    Hemispheric asymmetry was examined for native English speakers identifying consonant-vowel-consonant (CVC) non-words presented in standard printed form, in standard handwritten cursive form or in handwritten cursive with the letters separated by small gaps. For all three conditions, fewer errors occurred when stimuli were presented to the right…

  18. ASME B89.4.19 Performance Evaluation Tests and Geometric Misalignments in Laser Trackers

    PubMed Central

    Muralikrishnan, B.; Sawyer, D.; Blackburn, C.; Phillips, S.; Borchardt, B.; Estler, W. T.

    2009-01-01

    Small and unintended offsets, tilts, and eccentricity of the mechanical and optical components in laser trackers introduce systematic errors in the measured spherical coordinates (angles and range readings) and possibly in the calculated lengths of reference artifacts. It is desirable that the tests described in the ASME B89.4.19 Standard [1] be sensitive to these geometric misalignments so that any resulting systematic errors are identified during performance evaluation. In this paper, we present some analysis, using error models and numerical simulation, of the sensitivity of the length measurement system tests and two-face system tests in the B89.4.19 Standard to misalignments in laser trackers. We highlight key attributes of the testing strategy adopted in the Standard and propose new length measurement system tests that demonstrate improved sensitivity to some misalignments. Experimental results with a tracker that is not properly error corrected for the effects of the misalignments validate claims regarding the proposed new length tests. PMID:27504211

  19. Evaluating concentration estimation errors in ELISA microarray experiments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Daly, Don S.; White, Amanda M.; Varnum, Susan M.

    Enzyme-linked immunosorbent assay (ELISA) is a standard immunoassay to predict a protein concentration in a sample. Deploying ELISA in a microarray format permits simultaneous prediction of the concentrations of numerous proteins in a small sample. These predictions, however, are uncertain due to processing error and biological variability. Evaluating prediction error is critical to interpreting biological significance and improving the ELISA microarray process. Evaluating prediction error must be automated to realize a reliable high-throughput ELISA microarray system. Methods: In this paper, we present a statistical method based on propagation of error to evaluate prediction errors in the ELISA microarray process. Although propagation of error is central to this method, it is effective only when comparable data are available. Therefore, we briefly discuss the roles of experimental design, data screening, normalization and statistical diagnostics when evaluating ELISA microarray prediction errors. We use an ELISA microarray investigation of breast cancer biomarkers to illustrate the evaluation of prediction errors. The illustration begins with a description of the design and resulting data, followed by a brief discussion of data screening and normalization. In our illustration, we fit a standard curve to the screened and normalized data, review the modeling diagnostics, and apply propagation of error.
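
    A minimal sketch of propagation of error applied to inverting a standard curve, assuming a linear curve y = a + b*x for simplicity (real ELISA standard curves are typically four-parameter logistic, and covariances between fitted parameters are ignored here); all variances are illustrative.

      def predicted_concentration(y, a, b, var_y, var_a, var_b):
          x = (y - a) / b
          # first-order partial derivatives of x with respect to y, a, b
          dx_dy, dx_da, dx_db = 1 / b, -1 / b, -(y - a) / b**2
          var_x = dx_dy**2 * var_y + dx_da**2 * var_a + dx_db**2 * var_b
          return x, var_x**0.5

      x, se = predicted_concentration(y=1.8, a=0.2, b=0.04,
                                      var_y=0.01, var_a=0.0004, var_b=1e-6)
      print(f"predicted concentration: {x:.1f} +/- {se:.1f}")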

  20. Modified expression for bulb-tracer depletion—Effect on argon dating standards

    USGS Publications Warehouse

    Fleck, Robert J.; Calvert, Andrew T.

    2014-01-01

    40Ar/39Ar geochronology depends critically on well-calibrated standards, often traceable to first-principles K-Ar age calibrations using bulb-tracer systems. Tracer systems also provide precise standards for noble-gas studies and interlaboratory calibration. The exponential expression long used for calculating isotope tracer concentrations in K-Ar age dating and calibration of 40Ar/39Ar age standards may provide a close approximation of those values, but is not correct. Appropriate equations are derived that accurately describe the depletion of tracer reservoirs and the concentrations of sequential tracers. In the modified expression the depletion constant does not appear in the exponent, which varies only as an integer with tracer number. Evaluation of the expressions demonstrates that the systematic error introduced through use of the original expression may be substantial where reservoir volumes are small and the resulting depletion constants are large. Traditional use of large reservoir-to-tracer volume ratios and the resulting small depletion constants have kept errors well below experimental uncertainties in most previous K-Ar and calibration studies. Use of the proper expression, however, permits use of volumes appropriate to the problems addressed.
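
    A sketch of the distinction, with assumed notation (not taken from the paper): if each tracer draw removes a fixed fraction d of the gas remaining in the reservoir, the concentration supplied by the n-th tracer is

      C_n = C_0 (1 - d)^n        (exact depletion)

    whereas the traditional expression places the depletion constant in the exponent,

      C_n ~ C_0 e^(-n d)         (approximation)

    and since e^(-d) = 1 - d + d^2/2 - ... > 1 - d, the exponential form overestimates the remaining tracer concentration, with the discrepancy growing when the reservoir is small and d is therefore large.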

  1. Application of the precipitation-runoff modeling system to the Ah- shi-sle-pah Wash watershed, San Juan County, New Mexico

    USGS Publications Warehouse

    Hejl, H.R.

    1989-01-01

    The precipitation-runoff modeling system was applied to the 8.21 sq-mi drainage area of the Ah-shi-sle-pah Wash watershed in northwestern New Mexico. The calibration periods were May to September of 1981 and 1982, and the verification period was May to September 1983. Twelve storms were available for calibration and 8 storms were available for verification. For calibration A (hydraulic conductivity estimated from onsite data and other storm-mode parameters optimized), the computed standard error of estimate was 50% for runoff volumes and 72% for peak discharges. Calibration B included hydraulic conductivity in the optimization, which reduced the standard error of estimate to 28% for runoff volumes and 50% for peak discharges. Optimized values for hydraulic conductivity resulted in reductions from 1.00 to 0.26 in/h and 0.20 to 0.03 in/h for the 2 general soils groups in the calibrations. Simulated runoff volumes using 7 of 8 storms occurring during the verification period had a standard error of estimate of 40% for verification A and 38% for verification B. Simulated peak discharges had a standard error of estimate of 120% for verification A and 56% for verification B. Including the eighth storm, which had a relatively small magnitude, in the verification analysis more than doubled the standard error of estimate for volumes and peaks. (USGS)

  2. Incorporating Measurement Error from Modeled Air Pollution Exposures into Epidemiological Analyses.

    PubMed

    Samoli, Evangelia; Butland, Barbara K

    2017-12-01

    Outdoor air pollution exposures used in epidemiological studies are commonly predicted from spatiotemporal models incorporating limited measurements, temporal factors, geographic information system variables, and/or satellite data. Measurement error in these exposure estimates leads to imprecise estimation of health effects and their standard errors. We reviewed methods for measurement error correction that have been applied in epidemiological studies that use model-derived air pollution data. We identified seven cohort studies and one panel study that have employed measurement error correction methods. These methods included regression calibration, risk set regression calibration, regression calibration with instrumental variables, the simulation extrapolation approach (SIMEX), and methods under the non-parametric or parametric bootstrap. Corrections resulted in small increases in the absolute magnitude of the health effect estimate and its standard error under most scenarios. Limited application of measurement error correction methods in air pollution studies may be attributed to the absence of exposure validation data and the methodological complexity of the proposed methods. Future epidemiological studies should consider in their design phase the requirements for the measurement error correction method to be later applied, while methodological advances are needed under the multi-pollutant setting.
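
    A minimal sketch of the first of those methods, regression calibration, on simulated data: the error-prone exposure W is replaced by an estimate of E[X | W] before fitting the outcome model, which undoes the attenuation of the effect estimate. For illustration the reliability ratio is computed from the simulated true exposure; in a real study it must come from validation data.

      import random, statistics

      random.seed(1)
      n, beta = 5000, 0.8                                  # true exposure effect
      x = [random.gauss(10, 2) for _ in range(n)]          # true exposure
      w = [xi + random.gauss(0, 2) for xi in x]            # error-prone model estimate
      y = [beta * xi + random.gauss(0, 1) for xi in x]     # health outcome

      def slope(u, v):
          mu, mv = statistics.mean(u), statistics.mean(v)
          return (sum((a - mu) * (b - mv) for a, b in zip(u, v))
                  / sum((a - mu) ** 2 for a in u))

      lam = statistics.variance(x) / statistics.variance(w)   # reliability ratio
      mw = statistics.mean(w)
      x_cal = [mw + lam * (wi - mw) for wi in w]              # E[X | W], linear case

      print(f"naive slope:      {slope(w, y):.3f}   (attenuated)")
      print(f"calibrated slope: {slope(x_cal, y):.3f}   (close to {beta})")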

  3. A Generally Robust Approach for Testing Hypotheses and Setting Confidence Intervals for Effect Sizes

    ERIC Educational Resources Information Center

    Keselman, H. J.; Algina, James; Lix, Lisa M.; Wilcox, Rand R.; Deering, Kathleen N.

    2008-01-01

    Standard least squares analysis of variance methods suffer from poor power under arbitrarily small departures from normality and fail to control the probability of a Type I error when standard assumptions are violated. This article describes a framework for robust estimation and testing that uses trimmed means with an approximate degrees of…

  4. The use of deep convective clouds to uniformly calibrate the next generation of geostationary reflective solar imagers

    NASA Astrophysics Data System (ADS)

    Doelling, David R.; Bhatt, Rajendra; Haney, Conor O.; Gopalan, Arun; Scarino, Benjamin R.

    2017-09-01

    The new 3rd generation geostationary (GEO) imagers will have many of the same NPP-VIIRS imager spectral bands, thereby offering the opportunity to apply the VIIRS cloud, aerosol, and land use retrieval algorithms to the new GEO imager measurements. Climate quality retrievals require multi-channel calibrated radiances that are stable over time. The deep convective cloud calibration technique (DCCT) is a large-ensemble statistical technique that assumes that the DCC reflectance is stable over time. Because DCC are found in sufficient numbers across all GEO domains, they provide a uniform calibration stability evaluation across the GEO constellation. The baseline DCCT has been successful in calibrating visible and near-infrared channels. However, for shortwave infrared (SWIR) channels the DCCT is not as effective at monitoring radiometric stability. In this paper, the DCCT was optimized as a function of wavelength. For SWIR bands, the greatest reduction of the DCC response trend standard error was achieved through deseasonalization. This is effective because the DCC reflectance exhibits small regional seasonal cycles that can be characterized on a monthly basis. On the other hand, the inter-annual variability in DCC response was found to be extremely small. The Met-9 0.65-μm channel DCC response was found to have a 3% seasonal cycle. Deseasonalization reduced the trend standard error from 1% to 0.4%. For the NPP-VIIRS SWIR bands, deseasonalization reduced the trend standard error by more than half. All VIIRS SWIR band trend standard errors were less than 1%. The DCCT should be able to monitor the stability of all GEO imager solar reflective bands across the tropical domain with the same uniform accuracy.
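
    The deseasonalization step is simple to sketch: subtract each calendar month's mean response before computing trend statistics, so a repeating seasonal cycle does not inflate the trend standard error. The data below are synthetic, with an exaggerated seasonal cycle for visibility.

      import math, random, statistics
      from collections import defaultdict

      random.seed(0)
      months = list(range(120))                 # ten years of monthly DCC responses
      resp = [100 + 3 * math.sin(2 * math.pi * (m % 12) / 12) + random.gauss(0, 0.5)
              for m in months]

      by_month = defaultdict(list)
      for m, r in zip(months, resp):
          by_month[m % 12].append(r)
      clim = {k: statistics.mean(v) for k, v in by_month.items()}  # monthly climatology

      deseason = [r - clim[m % 12] for m, r in zip(months, resp)]
      print(f"scatter before: {statistics.stdev(resp):.2f}")
      print(f"scatter after:  {statistics.stdev(deseason):.2f}")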

  5. Improving the analysis of composite endpoints in rare disease trials.

    PubMed

    McMenamin, Martina; Berglind, Anna; Wason, James M S

    2018-05-22

    Composite endpoints are recommended in rare diseases to increase power and/or to sufficiently capture complexity. Often, they are in the form of responder indices which contain a mixture of continuous and binary components. Analyses of these outcomes typically treat them as binary, thus only using the dichotomisations of continuous components. The augmented binary method offers a more efficient alternative and is therefore especially useful for rare diseases. Previous work has indicated the method may have poorer statistical properties when the sample size is small. Here we investigate small sample properties and implement small sample corrections. We re-sample from a previous trial with sample sizes varying from 30 to 80. We apply the standard binary and augmented binary methods and determine the power, type I error rate, coverage and average confidence interval width for each of the estimators. We implement Firth's adjustment for the binary component models and a small sample variance correction for the generalized estimating equations, applying the small sample adjusted methods to each sub-sample as before for comparison. For the log-odds treatment effect, the power of the augmented binary method is 20-55% compared to 12-20% for the standard binary method. Both methods have approximately nominal type I error rates. The difference in response probabilities exhibits similar power, but both unadjusted methods demonstrate type I error rates of 6-8%. The small sample corrected methods have approximately nominal type I error rates. On both scales, the reduction in average confidence interval width when using the adjusted augmented binary method is 17-18%. This is equivalent to requiring a 32% smaller sample size to achieve the same statistical power. The augmented binary method with small sample corrections provides a substantial improvement for rare disease trials using composite endpoints. We recommend the use of the method for the primary analysis in relevant rare disease trials. We emphasise that the method should be used alongside other efforts in improving the quality of evidence generated from rare disease trials rather than replace them.

  6. Correcting quantum errors with entanglement.

    PubMed

    Brun, Todd; Devetak, Igor; Hsieh, Min-Hsiu

    2006-10-20

    We show how entanglement shared between encoder and decoder can simplify the theory of quantum error correction. The entanglement-assisted quantum codes we describe do not require the dual-containing constraint necessary for standard quantum error-correcting codes, thus allowing us to "quantize" all of classical linear coding theory. In particular, efficient modern classical codes that attain the Shannon capacity can be made into entanglement-assisted quantum codes attaining the hashing bound (closely related to the quantum capacity). For systems without large amounts of shared entanglement, these codes can also be used as catalytic codes, in which a small amount of initial entanglement enables quantum communication.

  7. Bandwagon effects and error bars in particle physics

    NASA Astrophysics Data System (ADS)

    Jeng, Monwhea

    2007-02-01

    We study historical records of experiments on particle masses, lifetimes, and widths, both for signs of expectation bias, and to compare actual errors with reported error bars. We show that significant numbers of particle properties exhibit "bandwagon effects": reported values show trends and clustering as a function of the year of publication, rather than random scatter about the mean. While the total amount of clustering is significant, it is also fairly small; most individual particle properties do not display obvious clustering. When differences between experiments are compared with the reported error bars, the deviations do not follow a normal distribution, but instead follow an exponential distribution for up to ten standard deviations.

  8. Adobe photoshop quantification (PSQ) rather than point-counting: A rapid and precise method for quantifying rock textural data and porosities

    NASA Astrophysics Data System (ADS)

    Zhang, Xuefeng; Liu, Bo; Wang, Jieqiong; Zhang, Zhe; Shi, Kaibo; Wu, Shuanglin

    2014-08-01

    Commonly used petrological quantification methods are visual estimation, point-counting, and image analysis. In this article, however, an Adobe Photoshop-based analysis method (PSQ) is recommended for quantifying rock textural data and porosities. Adobe Photoshop provides versatile tools for selecting an area of interest, and the pixel count of a selection can be read and used to calculate its area percentage. Therefore, Adobe Photoshop can be used to rapidly quantify textural components, such as the content of grains, cements, and porosities, including total porosity and porosities of different genetic types. This method is named Adobe Photoshop Quantification (PSQ). The workflow of the PSQ method is introduced using oolitic dolomite samples from the Triassic Feixianguan Formation, northeastern Sichuan Basin, China, as an example. The method was tested by comparison with Folk's and Shvetsov's "standard" diagrams. In both cases, there is close agreement between the "standard" percentages and those determined by the PSQ method, with very small counting and operator errors, small standard deviations, and high confidence levels. The porosities quantified by PSQ were evaluated against those determined by the whole-rock helium gas expansion method to test specimen errors. Results show that the porosities quantified by PSQ correlate well with those determined by the conventional helium gas expansion method. The generally small discrepancies (mostly ranging from -3% to 3%) are caused by microporosity, which causes a systematic underestimation of about 2%, and/or by macroporosity, which causes underestimation or overestimation in different cases. Adobe Photoshop can thus be used to quantify rock textural components and porosities. The method has been tested to be precise and accurate, and it is time-saving compared with the usual methods.
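
    The core arithmetic behind PSQ is pixel counting: the area percentage of a selected component is its pixel count divided by the total pixel count. In the sketch below (assuming NumPy is available), a hypothetical boolean mask stands in for a Photoshop selection.

      import numpy as np

      mask = np.zeros((1024, 1024), dtype=bool)   # stands in for a Photoshop selection
      mask[100:300, 200:550] = True               # hypothetical selected pore region

      porosity_percent = 100.0 * mask.sum() / mask.size
      print(f"selected component: {porosity_percent:.2f}% of the image area")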

  9. Residue frequencies and pairing preferences at protein-protein interfaces.

    PubMed

    Glaser, F; Steinberg, D M; Vakser, I A; Ben-Tal, N

    2001-05-01

    We used a nonredundant set of 621 protein-protein interfaces of known high-resolution structure to derive residue composition and residue-residue contact preferences. The residue composition at the interfaces, in entire proteins and in whole genomes correlates well, indicating the statistical strength of the data set. Differences between amino acid distributions were observed for interfaces with buried surface area of less than 1,000 A(2) versus interfaces with area of more than 5,000 A(2). Hydrophobic residues were abundant in large interfaces while polar residues were more abundant in small interfaces. The largest residue-residue preferences at the interface were recorded for interactions between pairs of large hydrophobic residues, such as Trp and Leu, and the smallest preferences for pairs of small residues, such as Gly and Ala. On average, contacts between pairs of hydrophobic and polar residues were unfavorable, and the charged residues tended to pair subject to charge complementarity, in agreement with previous reports. A bootstrap procedure, lacking from previous studies, was used for error estimation. It showed that the statistical errors in the set of pairing preferences are generally small; the average standard error is approximately 0.2, i.e., about 8% of the average value of the pairwise index (2.9). However, for a few pairs (e.g., Ser-Ser and Glu-Asp) the standard error is larger in magnitude than the pairing index, which makes it impossible to tell whether contact formation is favorable or unfavorable. The results are interpreted using physicochemical factors and their implications for the energetics of complex formation and for protein docking are discussed. Proteins 2001;43:89-102. Copyright 2001 Wiley-Liss, Inc.
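
    A generic sketch of the bootstrap error estimate used above: resample the data set with replacement and report the standard deviation of the recomputed statistic. The statistic here is a simple mean of synthetic values, purely for illustration; the paper's statistic is a residue-residue pairing index computed over the 621 interfaces.

      import random, statistics

      def bootstrap_se(data, stat, n_boot=2000):
          reps = [stat([random.choice(data) for _ in data]) for _ in range(n_boot)]
          return statistics.stdev(reps)

      # synthetic stand-in: one value per interface (621 interfaces, as above)
      data = [random.gauss(2.9, 0.8) for _ in range(621)]
      print(f"bootstrap standard error: {bootstrap_se(data, statistics.mean):.3f}")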

  10. Can binary early warning scores perform as well as standard early warning scores for discriminating a patient's risk of cardiac arrest, death or unanticipated intensive care unit admission?

    PubMed

    Jarvis, Stuart; Kovacs, Caroline; Briggs, Jim; Meredith, Paul; Schmidt, Paul E; Featherstone, Peter I; Prytherch, David R; Smith, Gary B

    2015-08-01

    Although the weightings to be summed in an early warning score (EWS) calculation are small, calculation and other errors occur frequently, potentially impacting on hospital efficiency and patient care. Use of a simpler EWS has the potential to reduce errors. We truncated 36 published 'standard' EWSs so that, for each component, only two scores were possible: 0 when the standard EWS scored 0 and 1 when the standard EWS scored greater than 0. Using 1,564,153 vital signs observation sets from 68,576 patient care episodes, we compared the discrimination (measured using the area under the receiver operating characteristic curve, AUROC) of each standard EWS and its truncated 'binary' equivalent. The binary EWSs had lower AUROCs than the standard EWSs in most cases, although for some the difference was not significant. One system, the binary form of the National Early Warning Score (NEWS), had significantly better discrimination than all standard EWSs, except for NEWS itself. Overall, Binary NEWS at a trigger value of 3 would detect as many adverse outcomes as are detected by NEWS using a trigger of 5, but would require a 15% higher triggering rate. The performance of Binary NEWS is only exceeded by that of standard NEWS. It may be that Binary NEWS, as a simplified system, can be used with fewer errors. However, its introduction could lead to significant increases in workload for ward and rapid response team staff. The balance between fewer errors and a potentially greater workload needs further investigation. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
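
    The truncation studied above reduces to one line of code; the component scores below are hypothetical.

      standard_components = [0, 2, 1, 0, 3, 0, 1]   # per-vital-sign EWS points
      binary_components = [1 if s > 0 else 0 for s in standard_components]

      print("standard EWS:", sum(standard_components))  # e.g. trigger at >= 5
      print("binary EWS:  ", sum(binary_components))    # e.g. trigger at >= 3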

  11. How large are the consequences of covariate imbalance in cluster randomized trials: a simulation study with a continuous outcome and a binary covariate at the cluster level.

    PubMed

    Moerbeek, Mirjam; van Schie, Sander

    2016-07-11

    The number of clusters in a cluster randomized trial is often low. It is therefore likely that random assignment of clusters to treatment conditions results in covariate imbalance. There are no studies that quantify the consequences of covariate imbalance in cluster randomized trials on parameter and standard error bias and on power to detect treatment effects. The consequences of covariate imbalance in unadjusted and adjusted linear mixed models are investigated by means of a simulation study. The factors in this study are the degree of imbalance, the covariate effect size, the cluster size and the intraclass correlation coefficient. The covariate is binary and measured at the cluster level; the outcome is continuous and measured at the individual level. The results show that covariate imbalance results in negligible parameter bias and small standard error bias in adjusted linear mixed models. Ignoring the possibility of covariate imbalance while calculating the sample size at the cluster level may result in a loss in power of at most 25 % in the adjusted linear mixed model. The results are more severe for the unadjusted linear mixed model: parameter biases up to 100 % and standard error biases up to 200 % may be observed. Power levels based on the unadjusted linear mixed model are often too low. The consequences are most severe for large clusters and/or small intraclass correlation coefficients, since then the required number of clusters to achieve a desired power level is smallest. The possibility of covariate imbalance should be taken into account while calculating the sample size of a cluster randomized trial. Otherwise, more sophisticated methods to randomize clusters to treatments should be used, such as stratification or balance algorithms. All relevant covariates should be carefully identified, be actually measured and included in the statistical model to avoid severe levels of parameter and standard error bias and insufficient power levels.

  12. Modeling Bloch oscillations in ultra-small Josephson junctions

    NASA Astrophysics Data System (ADS)

    Vora, Heli; Kautz, Richard; Nam, Sae Woo; Aumentado, Jose

    In a seminal paper, Likharev et al. developed a theory for ultra-small Josephson junctions with Josephson coupling energy (Ej) less than the charging energy (Ec) and showed that such junctions demonstrate Bloch oscillations which could be used to make a fundamental current standard that is a dual of the Josephson volt standard. Here, based on the model of Geigenmüller and Schön, we numerically calculate the current-voltage relationship of such an ultra-small junction which includes various error processes present in a nanoscale Josephson junction such as random quasiparticle tunneling events and Zener tunneling between bands. This model allows us to explore the parameter space to see the effect of each process on the width and height of the Bloch step and serves as a guide to determine whether it is possible to build a quantum current standard of a metrological precision using Bloch oscillations.

  13. Bias and uncertainty in regression-calibrated models of groundwater flow in heterogeneous media

    USGS Publications Warehouse

    Cooley, R.L.; Christensen, S.

    2006-01-01

    Groundwater models need to account for detailed but generally unknown spatial variability (heterogeneity) of the hydrogeologic model inputs. To address this problem we replace the large, m-dimensional stochastic vector β that reflects both small and large scales of heterogeneity in the inputs by a lumped or smoothed m-dimensional approximation Yβ*, where Y is an interpolation matrix and β* is a stochastic vector of parameters. Vector β* has small enough dimension to allow its estimation with the available data. The consequence of the replacement is that the model function f(Yβ*) written in terms of the approximate inputs is in error with respect to the same model function written in terms of β, f(β), which is assumed to be nearly exact. The difference f(β) - f(Yβ*), termed model error, is spatially correlated, generates prediction biases, and causes standard confidence and prediction intervals to be too small. Model error is accounted for in the weighted nonlinear regression methodology developed to estimate β* and assess model uncertainties by incorporating the second-moment matrix of the model errors into the weight matrix. Techniques developed by statisticians to analyze classical nonlinear regression methods are extended to analyze the revised method. The analysis develops analytical expressions for bias terms reflecting the interaction of model nonlinearity and model error, for correction factors needed to adjust the sizes of confidence and prediction intervals for this interaction, and for correction factors needed to adjust the sizes of confidence and prediction intervals for possible use of a diagonal weight matrix in place of the correct one. If terms expressing the degree of intrinsic nonlinearity for f(β) and f(Yβ*) are small, then most of the biases are small and the correction factors are reduced in magnitude. Biases, correction factors, and confidence and prediction intervals were obtained for a test problem for which model error is large to test robustness of the methodology. Numerical results conform with the theoretical analysis. © 2005 Elsevier Ltd. All rights reserved.

  14. Error simulation of paired-comparison-based scaling methods

    NASA Astrophysics Data System (ADS)

    Cui, Chengwu

    2000-12-01

    Subjective image quality measurement usually resorts to psychophysical scaling. However, it is difficult to evaluate the inherent precision of these scaling methods. Without knowing the potential errors of the measurement, subsequent use of the data can be misleading. In this paper, the errors on scaled values derived from paired-comparison-based scaling methods are simulated with a randomly introduced proportion of choice errors that follow the binomial distribution. Simulation results are given for various combinations of the number of stimuli and the sampling size. The errors are presented in the form of the average standard deviation of the scaled values and can be fitted reasonably well with an empirical equation that can be used for scaling error estimation and measurement design. The simulation proves that paired-comparison-based scaling methods can have large errors on the derived scaled values when the sampling size and the number of stimuli are small. Examples are also given to show the potential errors on actually scaled values of color image prints as measured by the method of paired comparison.
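
    A minimal version of the simulation idea (Python, standard library only): paired-comparison choices are binomial, so small judge counts perturb the choice proportions and hence the derived scale values. Thurstone Case V scaling (z-scores of choice proportions, averaged by column) stands in for the paper's exact procedure, and the stimulus values and clipping of extreme proportions are arbitrary choices.

      import random, statistics
      from statistics import NormalDist

      true_scale = [0.0, 0.3, 0.8, 1.5]        # hypothetical stimulus scale values
      nd = NormalDist()

      def scale_once(n_judges):
          k = len(true_scale)
          z = [[0.0] * k for _ in range(k)]
          for i in range(k):
              for j in range(k):
                  if i == j:
                      continue
                  p_true = nd.cdf(true_scale[j] - true_scale[i])
                  wins = sum(random.random() < p_true for _ in range(n_judges))
                  p = min(max(wins / n_judges, 0.01), 0.99)  # clip 0/1 proportions
                  z[i][j] = nd.inv_cdf(p)
          return [statistics.mean(col) for col in zip(*z)]   # column means

      reps = [scale_once(n_judges=20) for _ in range(200)]
      for s, vals in zip(true_scale, zip(*reps)):
          print(f"true value {s:.2f}  SD of recovered value {statistics.stdev(vals):.3f}")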

  15. Physics Notes

    ERIC Educational Resources Information Center

    School Science Review, 1972

    1972-01-01

    Short articles describe the production, photography, and analysis of diffraction patterns using a small laser, a technique for measuring electrical resistance without a standard resistor, a demonstration of a thermocouple effect in a galvanometer with a built-in light source, and a common error in deriving the expression for centripetal force. (AL)

  16. Functional Mixed Effects Model for Small Area Estimation.

    PubMed

    Maiti, Tapabrata; Sinha, Samiran; Zhong, Ping-Shou

    2016-09-01

    Functional data analysis has become an important area of research due to its ability to handle high dimensional and complex data structures. However, development has been limited in the context of linear mixed effect models and, in particular, of small area estimation. The linear mixed effect models are the backbone of small area estimation. In this article, we consider area level data and fit a varying coefficient linear mixed effect model where the varying coefficients are semi-parametrically modeled via B-splines. We propose a method of estimating the fixed effect parameters and consider prediction of random effects that can be implemented using standard software. For measuring prediction uncertainties, we derive an analytical expression for the mean squared errors and propose a method of estimating them. The procedure is illustrated via a real data example, and operating characteristics of the method are judged using finite sample simulation studies.

  17. Ensemble Kalman filters for dynamical systems with unresolved turbulence

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Grooms, Ian, E-mail: grooms@cims.nyu.edu; Lee, Yoonsang; Majda, Andrew J.

    Ensemble Kalman filters are developed for turbulent dynamical systems where the forecast model does not resolve all the active scales of motion. Coarse-resolution models are intended to predict the large-scale part of the true dynamics, but observations invariably include contributions from both the resolved large scales and the unresolved small scales. The error due to the contribution of unresolved scales to the observations, called 'representation' or 'representativeness' error, is often included as part of the observation error, in addition to the raw measurement error, when estimating the large-scale part of the system. It is here shown how stochastic superparameterization (a multiscale method for subgridscale parameterization) can be used to provide estimates of the statistics of the unresolved scales. In addition, a new framework is developed wherein small-scale statistics can be used to estimate both the resolved and unresolved components of the solution. The one-dimensional test problem from dispersive wave turbulence used here is computationally tractable yet is particularly difficult for filtering because of the non-Gaussian extreme event statistics and substantial small scale turbulence: a shallow energy spectrum proportional to k^(-5/6) (where k is the wavenumber) results in two-thirds of the climatological variance being carried by the unresolved small scales. Because the unresolved scales contain so much energy, filters that ignore the representation error fail utterly to provide meaningful estimates of the system state. Inclusion of a time-independent climatological estimate of the representation error in a standard framework leads to inaccurate estimates of the large-scale part of the signal; accurate estimates of the large scales are only achieved by using stochastic superparameterization to provide evolving, large-scale dependent predictions of the small-scale statistics. Again, because the unresolved scales contain so much energy, even an accurate estimate of the large-scale part of the system does not provide an accurate estimate of the true state. By providing simultaneous estimates of both the large- and small-scale parts of the solution, the new framework is able to provide accurate estimates of the true system state.

  18. Previous Estimates of Mitochondrial DNA Mutation Level Variance Did Not Account for Sampling Error: Comparing the mtDNA Genetic Bottleneck in Mice and Humans

    PubMed Central

    Wonnapinij, Passorn; Chinnery, Patrick F.; Samuels, David C.

    2010-01-01

    In cases of inherited pathogenic mitochondrial DNA (mtDNA) mutations, a mother and her offspring generally have large and seemingly random differences in the amount of mutated mtDNA that they carry. Comparisons of measured mtDNA mutation level variance values have become an important issue in determining the mechanisms that cause these large random shifts in mutation level. These variance measurements have been made with samples of quite modest size, which should be a source of concern because higher-order statistics, such as variance, are poorly estimated from small sample sizes. We have developed an analysis of the standard error of variance from a sample of size n, and we have defined error bars for variance measurements based on this standard error. We calculate variance error bars for several published sets of measurements of mtDNA mutation level variance and show how the addition of the error bars alters the interpretation of these experimental results. We compare variance measurements from human clinical data and from mouse models and show that the mutation level variance is clearly higher in the human data than it is in the mouse models at both the primary oocyte and offspring stages of inheritance. We discuss how the standard error of variance can be used in the design of experiments measuring mtDNA mutation level variance. Our results show that variance measurements based on fewer than 20 measurements are generally unreliable and ideally more than 50 measurements are required to reliably compare variances with less than a 2-fold difference. PMID:20362273
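
    For normally distributed data the standard error of a sample variance has a closed form, SE(s^2) = s^2 * sqrt(2/(n-1)); the sketch below uses it to show why variance comparisons from small samples are unreliable. This is a textbook normal-theory result, and the paper's exact treatment of mutation-level data may differ.

      import math

      def variance_se_normal(s2, n):
          # standard error of a sample variance, normal-theory approximation
          return s2 * math.sqrt(2.0 / (n - 1))

      for n in (10, 20, 50, 200):
          print(f"n={n:4d}  SE of variance = {variance_se_normal(1.0, n):.3f} x s^2")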

  19. A Simulation Analysis of Errors in the Measurement of Standard Electrochemical Rate Constants from Phase-Selective Impedance Data.

    DTIC Science & Technology

    1987-09-30

    ...of the AC current, including the time dependence at a growing DME, at a given fixed potential either in the presence or the absence of an...the relative error in k_ob(app) is relatively small for k_s(true) up to 0.5 cm s^-1, and increases rapidly for larger rate constants as k_ob reaches the...

  20. Error in total ozone measurements arising from aerosol attenuation

    NASA Technical Reports Server (NTRS)

    Thomas, R. W. L.; Basher, R. E.

    1979-01-01

    A generalized least squares method for deducing both total ozone and aerosol extinction spectrum parameters from Dobson spectrophotometer measurements was developed. An error analysis applied to this system indicates that there is little advantage to additional measurements once a sufficient number of line pairs have been employed to solve for the selected detail in the attenuation model. It is shown that when there is a predominance of small particles (less than about 0.35 microns in diameter) the total ozone from the standard AD system is too high by about one percent. When larger particles are present the derived total ozone may be an overestimate or an underestimate but serious errors occur only for narrow polydispersions.

  1. Treating Sample Covariances for Use in Strongly Coupled Atmosphere-Ocean Data Assimilation

    NASA Astrophysics Data System (ADS)

    Smith, Polly J.; Lawless, Amos S.; Nichols, Nancy K.

    2018-01-01

    Strongly coupled data assimilation requires cross-domain forecast error covariances; information from ensembles can be used, but limited sampling means that ensemble derived error covariances are routinely rank deficient and/or ill-conditioned and marred by noise. Thus, they require modification before they can be incorporated into a standard assimilation framework. Here we compare methods for improving the rank and conditioning of multivariate sample error covariance matrices for coupled atmosphere-ocean data assimilation. The first method, reconditioning, alters the matrix eigenvalues directly; this preserves the correlation structures but does not remove sampling noise. We show that it is better to recondition the correlation matrix rather than the covariance matrix as this prevents small but dynamically important modes from being lost. The second method, model state-space localization via the Schur product, effectively removes sample noise but can dampen small cross-correlation signals. A combination that exploits the merits of each is found to offer an effective alternative.
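
    Both remedies compared above are compact enough to sketch. The following is an illustration, not the paper's implementation: recondition floors the eigenvalues of a correlation matrix so its condition number does not exceed a target (preserving correlation structure but not removing noise), and localize takes the Schur product with a chosen localization matrix; kappa_max, the correlation length 5.0, and the toy ensemble are all assumptions.

    ```python
    import numpy as np

    def recondition(C, kappa_max=100.0):
        """Floor the eigenvalues of a correlation matrix so that its
        condition number does not exceed kappa_max."""
        w, V = np.linalg.eigh(C)
        w = np.maximum(w, w.max() / kappa_max)
        return (V * w) @ V.T

    def localize(P, L):
        """Schur (element-wise) product localization of a sample covariance."""
        return P * L

    rng = np.random.default_rng(0)
    n, m = 40, 10                               # state dimension > ensemble size
    X = rng.standard_normal((n, m))
    P = np.cov(X)                               # rank-deficient (rank <= m - 1)
    L = np.exp(-np.abs(np.subtract.outer(np.arange(n), np.arange(n))) / 5.0)

    d = np.sqrt(np.diag(P))
    C_rec = recondition(P / np.outer(d, d))     # recondition the correlation matrix
    P_loc = localize(P, L)
    print(np.linalg.matrix_rank(P), np.linalg.matrix_rank(P_loc))  # localization restores rank
    ```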

  2. Two States Mapping Based Time Series Neural Network Model for Compensation Prediction Residual Error

    NASA Astrophysics Data System (ADS)

    Jung, Insung; Koo, Lockjo; Wang, Gi-Nam

    2008-11-01

    The objective of this paper was to design a human bio-signal prediction system that reduces prediction error by using a two-states-mapping-based time-series neural network BP (back-propagation) model. Neural network models trained in a supervised manner with the error back-propagation algorithm have been widely applied in industry for time-series prediction; however, a residual error remains between the real value and the prediction result. We therefore designed a two-states neural network model that compensates for this residual error and could be used in the prevention of sudden death and of metabolic-syndrome diseases such as hypertension and obesity. Most of the simulation cases were satisfied by the two-states-mapping-based time-series prediction model. In particular, predictions for small-sample-size time series were more accurate than those of the standard MLP model.

  3. A Multi-Modal Active Learning Experience for Teaching Social Categorization

    ERIC Educational Resources Information Center

    Schwarzmueller, April

    2011-01-01

    This article details a multi-modal active learning experience to help students understand elements of social categorization. Each student in a group dynamics course observed two groups in conflict and identified examples of in-group bias, double-standard thinking, out-group homogeneity bias, law of small numbers, group attribution error, ultimate…

  4. Soil respiration patterns in root gaps 27 years after small scale experimental disturbance in Pinus contorta forests

    NASA Astrophysics Data System (ADS)

    Baker, S.; Berryman, E.; Hawbaker, T. J.; Ewers, B. E.

    2015-12-01

    While much attention has been focused on large-scale forest disturbances such as fire, harvesting, drought and insect attacks, small-scale forest disturbances that create gaps in forest canopies and below-ground root and mycorrhizal networks may accumulate to impact regional-scale carbon budgets. In a lodgepole pine (Pinus contorta) forest near Fox Park, WY, clusters of 15 and 30 trees were removed in 1988 to assess the effect of tree-gap disturbance on fine root density and nitrogen transformation. Twenty-seven years later the gaps remain, with limited regeneration present only in the center of the 30-tree plots, beyond the influence of roots from adjacent intact trees. Soil respiration was measured in the summer of 2015 to assess the influence of these disturbances on carbon cycling in Pinus contorta forests. Positions at the centers of experimental disturbances were found to have the lowest respiration rates (mean 2.45 μmol C/m²/s, standard error 0.17 μmol C/m²/s), control plots in the undisturbed forest were highest (mean 4.15 μmol C/m²/s, standard error 0.63 μmol C/m²/s), and positions near the margin of the disturbance were intermediate (mean 3.7 μmol C/m²/s, standard error 0.34 μmol C/m²/s). Fine root densities, soil nitrogen, and microclimate changes were also measured and played an important role in respiration rates of disturbed plots. This demonstrates that a long-term effect on carbon cycling occurs when gaps are created in the canopy and root network of lodgepole forests.

  5. Micro-mass standards to calibrate the sensitivity of mass comparators

    NASA Astrophysics Data System (ADS)

    Madec, Tanguy; Mann, Gaëlle; Meury, Paul-André; Rabault, Thierry

    2007-10-01

    In mass metrology, the standards currently used are calibrated by a chain of comparisons, performed using mass comparators, that extends ultimately from the international prototype (which is the definition of the unit of mass) to the standards in routine use. The differences measured in the course of these comparisons become smaller and smaller as the standards approach the definitions of their units, precisely because of how accurately they have been adjusted. One source of uncertainty in the determination of the difference of mass between the mass compared and the reference mass is the sensitivity error of the comparator used. Unfortunately, no mass standards small enough (of the order of a few hundred micrograms) are available on the market for a valid evaluation of this source of uncertainty. The users of these comparators therefore have no choice but to rely on the characteristics claimed by the makers of the comparators, or else to determine this sensitivity error at higher values (at least 1 mg) and interpolate from this result to smaller differences of mass. For this reason, the LNE decided to produce and calibrate micro-mass standards having nominal values between 100 µg and 900 µg. These standards were developed, then tested in multiple comparisons on an A5-type automatic comparator. They have since been qualified and calibrated in a weighing design, repeatedly and over an extended period of time, to establish their stability with respect to oxidation and the harmlessness of the handling and storage procedure associated with their use. Finally, the micro-standards so qualified were used to characterize the sensitivity errors of two of the LNE's mass comparators, including the one used to tie France's platinum reference standard (Pt 35) to stainless steel and superalloy standards.

  6. Estimating the probability that the sample mean is within a desired fraction of the standard deviation of the true mean.

    PubMed

    Schillaci, Michael A; Schillaci, Mario E

    2009-02-01

    The use of small sample sizes in human and primate evolutionary research is commonplace. Estimating how well small samples represent the underlying population, however, is not commonplace. Because the accuracy of determinations of taxonomy, phylogeny, and evolutionary process are dependent upon how well the study sample represents the population of interest, characterizing the uncertainty, or potential error, associated with analyses of small sample sizes is essential. We present a method for estimating the probability that the sample mean is within a desired fraction of the standard deviation of the true mean using small (n < 10) or very small (n ≤ 5) sample sizes. This method can be used by researchers to determine post hoc the probability that their sample is a meaningful approximation of the population parameter. We tested the method using a large craniometric data set commonly used by researchers in the field. Given our results, we suggest that sample estimates of the population mean can be reasonable and meaningful even when based on small, and perhaps even very small, sample sizes.
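
    Under the simplifying assumption of an i.i.d. normal sample with known σ (the published method, which must work from the sample SD at very small n, differs in detail), the probability in question has a closed form: x̄ ~ N(μ, σ²/n), so P(|x̄ − μ| ≤ fσ) = 2Φ(f√n) − 1. A minimal sketch:

    ```python
    import math

    def prob_within(f, n):
        """P(|xbar - mu| <= f*sigma) for an i.i.d. normal sample of size n,
        with sigma known: xbar ~ N(mu, sigma^2/n), so the answer is
        2*Phi(f*sqrt(n)) - 1 = erf(f*sqrt(n)/sqrt(2))."""
        return math.erf(f * math.sqrt(n) / math.sqrt(2.0))

    for n in (3, 5, 10):
        print(n, round(prob_within(0.5, n), 3))   # chance the mean lies within 0.5*sigma
    ```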

  7. Mass measurement errors of Fourier-transform mass spectrometry (FTMS): distribution, recalibration, and application.

    PubMed

    Zhang, Jiyang; Ma, Jie; Dou, Lei; Wu, Songfeng; Qian, Xiaohong; Xie, Hongwei; Zhu, Yunping; He, Fuchu

    2009-02-01

    The hybrid linear trap quadrupole Fourier-transform (LTQ-FT) ion cyclotron resonance mass spectrometer, an instrument with high accuracy and resolution, is widely used in the identification and quantification of peptides and proteins. However, time-dependent errors in the system may lead to deterioration of the accuracy of these instruments, negatively influencing the determination of the mass error tolerance (MET) in database searches. Here, a comprehensive discussion of LTQ/FT precursor ion mass error is provided. On the basis of an investigation of the mass error distribution, we propose an improved recalibration formula and introduce a new tool, FTDR (Fourier-transform data recalibration), that employs a graphic user interface (GUI) for automatic calibration. It was found that the calibration could adjust the mass error distribution to more closely approximate a normal distribution and reduce the standard deviation (SD). Consequently, we present a new strategy, LDSF (Large MET database search and small MET filtration), for database search MET specification and validation of database search results. As the name implies, a large-MET database search is conducted and the search results are then filtered using the statistical MET estimated from high-confidence results. By applying this strategy to a standard protein data set and a complex data set, we demonstrate the LDSF can significantly improve the sensitivity of the result validation procedure.

  8. Uav-Based Photogrammetric Point Clouds and Hyperspectral Imaging for Mapping Biodiversity Indicators in Boreal Forests

    NASA Astrophysics Data System (ADS)

    Saarinen, N.; Vastaranta, M.; Näsi, R.; Rosnell, T.; Hakala, T.; Honkavaara, E.; Wulder, M. A.; Luoma, V.; Tommaselli, A. M. G.; Imai, N. N.; Ribeiro, E. A. W.; Guimarães, R. B.; Holopainen, M.; Hyyppä, J.

    2017-10-01

    Biodiversity is commonly referred to as species diversity, but in forest ecosystems variability in structural and functional characteristics can also be treated as a measure of biodiversity. Small unmanned aerial vehicles (UAVs) provide a means for characterizing a forest ecosystem with high spatial resolution, permitting the measurement of physical characteristics of a forest ecosystem from the viewpoint of biodiversity. The objective of this study is to examine the applicability of photogrammetric point clouds and hyperspectral imaging acquired with a small UAV helicopter in mapping biodiversity indicators, such as structural complexity as well as the amount of deciduous and dead trees at plot level in southern boreal forests. Standard deviation of tree heights within a sample plot, used as a proxy for structural complexity, was the most accurately derived biodiversity indicator, resulting in a mean error of 0.5 m with a standard deviation of 0.9 m. The volume predictions for deciduous and dead trees were underestimated by 32.4 m³/ha and 1.7 m³/ha, respectively, with standard deviations of 50.2 m³/ha for deciduous and 3.2 m³/ha for dead trees. Spectral features describing brightness (i.e., higher reflectance values) prevailed in feature selection, but several wavelengths were represented. Thus, it can be concluded that structural complexity can be predicted reliably, but at the same time can be expected to be underestimated, with photogrammetric point clouds obtained with a small UAV. Additionally, plot-level volume of dead trees can be predicted with small mean error, whereas identifying deciduous species was more challenging at plot level.

  9. Correcting the Standard Errors of 2-Stage Residual Inclusion Estimators for Mendelian Randomization Studies

    PubMed Central

    Palmer, Tom M; Holmes, Michael V; Keating, Brendan J; Sheehan, Nuala A

    2017-01-01

    Mendelian randomization studies use genotypes as instrumental variables to test for and estimate the causal effects of modifiable risk factors on outcomes. Two-stage residual inclusion (TSRI) estimators have been used when researchers are willing to make parametric assumptions. However, researchers are currently reporting uncorrected or heteroscedasticity-robust standard errors for these estimates. We compared several different forms of the standard error for linear and logistic TSRI estimates in simulations and in real-data examples. Among others, we consider standard errors modified from the approach of Newey (1987), Terza (2016), and bootstrapping. In our simulations Newey, Terza, bootstrap, and corrected 2-stage least squares (in the linear case) standard errors gave the best results in terms of coverage and type I error. In the real-data examples, the Newey standard errors were 0.5% and 2% larger than the unadjusted standard errors for the linear and logistic TSRI estimators, respectively. We show that TSRI estimators with modified standard errors have correct type I error under the null. Researchers should report TSRI estimates with modified standard errors instead of reporting unadjusted or heteroscedasticity-robust standard errors. PMID:29106476

  10. Determination of Small Animal Long Bone Properties Using Densitometry

    NASA Technical Reports Server (NTRS)

    Breit, Gregory A.; Goldberg, BethAnn K.; Whalen, Robert T.; Hargens, Alan R. (Technical Monitor)

    1996-01-01

    Assessment of bone structural property changes due to loading regimens or pharmacological treatment typically requires destructive mechanical testing and sectioning. Our group has accurately and non-destructively estimated three-dimensional cross-sectional areal properties (principal moments of inertia, Imax and Imin, and principal angle, Theta) of human cadaver long bones from pixel-by-pixel analysis of three non-coplanar densitometry scans. Because the scanner beam width is on the order of typical small animal diaphyseal diameters, applying this technique to high-resolution scans of rat long bones necessitates additional processing to minimize errors induced by beam smearing, such as dependence on sample orientation and overestimation of Imax and Imin. We hypothesized that these errors are correctable by digital image processing of the raw scan data. In all cases, four scans, using only the low energy data (Hologic QDR-1000W, small animal mode), are averaged to increase image signal-to-noise ratio. Raw scans are additionally processed by interpolation, deconvolution by a filter derived from scanner beam characteristics, and masking using a variable threshold based on image dynamic range. To assess accuracy, we scanned an aluminum step phantom at 12 orientations over a range of 180 deg about the longitudinal axis, in 15 deg increments. The phantom dimensions (2.5, 3.1, 3.8 mm x 4.4 mm; Imin/Imax: 0.33-0.74) were comparable to the dimensions of a rat femur, which was also scanned. Cross-sectional properties were determined at 0.25 mm increments along the length of the phantom and femur. The table shows average error (±SD) from theory for Imax, Imin, and Theta over the 12 orientations, calculated from raw and fully processed phantom images, as well as standard deviations about the mean for the femur scans. Processing of phantom scans increased agreement with theory, indicating improved accuracy. Smaller standard deviations with processing indicate increased precision and repeatability. Standard deviations for the femur are consistent with those of the phantom. We conclude that in conjunction with digital image enhancement, densitometry scans are suitable for non-destructive determination of areal properties of small animal bones of comparable size to our phantom, allowing prediction of Imax and Imin within 2.5% and Theta within a fraction of a degree. This method represents a considerable extension of current methods of analyzing bone tissue distribution in small animal bones.
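
    The pixel-by-pixel computation of cross-sectional properties described above can be sketched generically: treat each pixel's density as a weight, form the second area moments about the centroid, and diagonalize. The function below is an illustration under that reading (the angle convention and the dx⁴ scaling are assumptions), not the authors' code.

    ```python
    import numpy as np

    def principal_moments(density, dx=1.0):
        """Imax, Imin and principal angle Theta (degrees) of a cross-section,
        from a 2-D array of pixel densities (zero outside the section)."""
        y, x = np.nonzero(density)
        w = density[y, x]
        A = w.sum()
        xc, yc = (w * x).sum() / A, (w * y).sum() / A
        Ixx = (w * (y - yc) ** 2).sum() * dx ** 4   # second moment about the x-axis
        Iyy = (w * (x - xc) ** 2).sum() * dx ** 4   # second moment about the y-axis
        Ixy = (w * (x - xc) * (y - yc)).sum() * dx ** 4
        vals, vecs = np.linalg.eigh(np.array([[Ixx, -Ixy], [-Ixy, Iyy]]))
        theta = np.degrees(np.arctan2(vecs[1, 1], vecs[0, 1]))  # axis of Imax
        return vals[1], vals[0], theta               # eigh sorts ascending

    # quick check on an axis-aligned ellipse mask (semi-axes 20 and 10 pixels)
    yy, xx = np.mgrid[:64, :64]
    mask = ((((xx - 32) / 20.0) ** 2 + ((yy - 32) / 10.0) ** 2) <= 1).astype(float)
    print(principal_moments(mask))
    ```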

  11. Correcting the Standard Errors of 2-Stage Residual Inclusion Estimators for Mendelian Randomization Studies.

    PubMed

    Palmer, Tom M; Holmes, Michael V; Keating, Brendan J; Sheehan, Nuala A

    2017-11-01

    Mendelian randomization studies use genotypes as instrumental variables to test for and estimate the causal effects of modifiable risk factors on outcomes. Two-stage residual inclusion (TSRI) estimators have been used when researchers are willing to make parametric assumptions. However, researchers are currently reporting uncorrected or heteroscedasticity-robust standard errors for these estimates. We compared several different forms of the standard error for linear and logistic TSRI estimates in simulations and in real-data examples. Among others, we consider standard errors modified from the approach of Newey (1987), Terza (2016), and bootstrapping. In our simulations Newey, Terza, bootstrap, and corrected 2-stage least squares (in the linear case) standard errors gave the best results in terms of coverage and type I error. In the real-data examples, the Newey standard errors were 0.5% and 2% larger than the unadjusted standard errors for the linear and logistic TSRI estimators, respectively. We show that TSRI estimators with modified standard errors have correct type I error under the null. Researchers should report TSRI estimates with modified standard errors instead of reporting unadjusted or heteroscedasticity-robust standard errors. © The Author(s) 2017. Published by Oxford University Press on behalf of the Johns Hopkins Bloomberg School of Public Health.

  12. Improved model for correcting the ionospheric impact on bending angle in radio occultation measurements

    NASA Astrophysics Data System (ADS)

    Angling, Matthew J.; Elvidge, Sean; Healy, Sean B.

    2018-04-01

    The standard approach to remove the effects of the ionosphere from neutral atmosphere GPS radio occultation measurements is to estimate a corrected bending angle from a combination of the L1 and L2 bending angles. This approach is known to result in systematic errors, and an extension has been proposed to the standard ionospheric correction that is dependent on the squared L1 / L2 bending angle difference and a scaling term (κ). The variation of κ with height, time, season, location and solar activity (i.e. the F10.7 flux) has been investigated by applying a 1-D bending angle operator to electron density profiles provided by a monthly median ionospheric climatology model. As expected, the residual bending angle is well correlated (negatively) with the vertical total electron content (TEC). κ is more strongly dependent on the solar zenith angle, indicating that the TEC-dependent component of the residual error is effectively modelled by the squared L1 / L2 bending angle difference term in the correction. The residual error from the ionospheric correction is likely to be a major contributor to the overall error budget of neutral atmosphere retrievals between 40 and 80 km. Over this height range κ is approximately linear with height. A simple κ model has also been developed. It is independent of ionospheric measurements, but incorporates geophysical dependencies (i.e. solar zenith angle, solar flux, altitude). The global mean error (i.e. bias) and the standard deviation of the residual errors are reduced from −1.3×10⁻⁸ rad and 2.2×10⁻⁸ rad for the uncorrected case to −2.2×10⁻¹⁰ rad and 2.0×10⁻⁹ rad, respectively, for the corrections using the κ model. Although a fixed scalar κ also reduces bias for the global average, the selected value of κ (14 rad⁻¹) is only appropriate for a small band of locations around the solar terminator. In the daytime, the scalar κ is consistently too high and this results in an overcorrection of the bending angles and a positive bending angle bias. Similarly, in the nighttime, the scalar κ is too low. However, in this case, the bending angles are already small and the impact of the choice of κ is less pronounced.
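
    One common way to write the correction described above is the standard dual-frequency linear combination plus the κ term; the sign and normalization conventions below are assumptions, not taken from the paper:

    ```latex
    \alpha_c(a) \;=\; \frac{f_1^{2}\,\alpha_{L1}(a) \;-\; f_2^{2}\,\alpha_{L2}(a)}{f_1^{2} - f_2^{2}}
    \;-\; \kappa(a)\,\bigl(\alpha_{L1}(a)-\alpha_{L2}(a)\bigr)^{2}
    ```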

  13. Three-dimensional quantitative structure-activity relationship studies on novel series of benzotriazine based compounds acting as Src inhibitors using CoMFA and CoMSIA.

    PubMed

    Gueto, Carlos; Ruiz, José L; Torres, Juan E; Méndez, Jefferson; Vivas-Reyes, Ricardo

    2008-03-01

    Comparative molecular field analysis (CoMFA) and comparative molecular similarity indices analysis (CoMSIA) were performed on a series of benzotriazine derivatives acting as Src inhibitors. Ligand molecular superimposition on the template structure was performed by the database alignment method. A statistically significant model was established from 72 molecules and validated with a test set of six compounds. The CoMFA model yielded q² = 0.526, a non-cross-validated R² of 0.781, an F value of 88.132, a bootstrapped R² of 0.831, a standard error of prediction of 0.587, and a standard error of estimate of 0.351, while the CoMSIA analysis yielded the best predictive model, with q² = 0.647, a non-cross-validated R² of 0.895, an F value of 115.906, a bootstrapped R² of 0.953, a standard error of prediction of 0.519, and a standard error of estimate of 0.178. The contour maps obtained from the 3D-QSAR studies were appraised for activity trends of the molecules analyzed. Results indicate that small steric volumes in the hydrophobic region, electron-withdrawing groups next to the aryl linker region, and atoms close to the solvent-accessible region increase the Src inhibitory activity of the compounds. In fact, by adding substituents at positions 5, 6, and 8 of the benzotriazine nucleus, new compounds having higher predicted activity were generated. The data generated from the present study will further help to design novel, potent, and selective Src inhibitors as anticancer therapeutic agents.

  14. Imperfect Gold Standards for Kidney Injury Biomarker Evaluation

    PubMed Central

    Betensky, Rebecca A.; Emerson, Sarah C.; Bonventre, Joseph V.

    2012-01-01

    Clinicians have used serum creatinine in diagnostic testing for acute kidney injury for decades, despite its imperfect sensitivity and specificity. Novel tubular injury biomarkers may revolutionize the diagnosis of acute kidney injury; however, even if a novel tubular injury biomarker is 100% sensitive and 100% specific, it may appear inaccurate when using serum creatinine as the gold standard. Acute kidney injury, as defined by serum creatinine, may not reflect tubular injury, and the absence of changes in serum creatinine does not assure the absence of tubular injury. In general, the apparent diagnostic performance of a biomarker depends not only on its ability to detect injury, but also on disease prevalence and the sensitivity and specificity of the imperfect gold standard. Assuming that, at a certain cutoff value, serum creatinine is 80% sensitive and 90% specific and disease prevalence is 10%, a new perfect biomarker with a true 100% sensitivity may seem to have only 47% sensitivity compared with serum creatinine as the gold standard. Minimizing misclassification by using more strict criteria to diagnose acute kidney injury will reduce the error when evaluating the performance of a biomarker under investigation. Apparent diagnostic errors using a new biomarker may be a reflection of errors in the imperfect gold standard itself, rather than poor performance of the biomarker. The results of this study suggest that small changes in serum creatinine alone should not be used to define acute kidney injury in biomarker or interventional studies. PMID:22021710
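
    The 47% figure quoted above can be reproduced by direct calculation, assuming the perfect biomarker is positive exactly when true disease is present:

    ```python
    # apparent sensitivity of a perfect biomarker judged against an
    # imperfect gold standard (numbers taken from the abstract above)
    prev, sens_gs, spec_gs = 0.10, 0.80, 0.90

    p_gs_pos = prev * sens_gs + (1 - prev) * (1 - spec_gs)   # P(gold standard +) = 0.17
    p_both_pos = prev * sens_gs                              # biomarker + and gold standard +
    print(round(p_both_pos / p_gs_pos, 2))                   # 0.47
    ```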

  15. Accuracy of Jump-Mat Systems for Measuring Jump Height.

    PubMed

    Pueo, Basilio; Lipinska, Patrycja; Jiménez-Olmedo, José M; Zmijewski, Piotr; Hopkins, Will G

    2017-08-01

    Vertical-jump tests are commonly used to evaluate lower-limb power of athletes and nonathletes. Several types of equipment are available for this purpose. The aims here were to compare the error of measurement of 2 jump-mat systems (Chronojump-Boscosystem and Globus Ergo Tester) with that of a motion-capture system as a criterion and to determine the modifying effect of foot length on jump height. Thirty-one young adult men alternated 4 countermovement jumps with 4 squat jumps. Mean jump height and standard deviations representing technical error of measurement arising from each device and variability arising from the subjects themselves were estimated with a novel mixed model and evaluated via standardization and magnitude-based inference. The jump-mat systems produced nearly identical measures of jump height (differences in means and in technical errors of measurement ≤1 mm). Countermovement and squat-jump height were both 13.6 cm higher with motion capture (90% confidence limits ±0.3 cm), but this very large difference was reduced to small unclear differences when adjusted to a foot length of zero. Variability in countermovement and squat-jump height arising from the subjects was small (1.1 and 1.5 cm, respectively, 90% confidence limits ±0.3 cm); technical error of motion capture was similar in magnitude (1.7 and 1.6 cm, ±0.3 and ±0.4 cm), and that of the jump mats was similar or smaller (1.2 and 0.3 cm, ±0.5 and ±0.9 cm). The jump-mat systems provide trustworthy measurements for monitoring changes in jump height. Foot length can explain the substantially higher jump height observed with motion capture.
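
    Jump mats of this kind typically infer height from flight time (an assumption about the devices, not a statement from the abstract). With equal rise and fall times, the fall from the apex lasts half the flight time t_f, giving:

    ```latex
    h \;=\; \tfrac{1}{2}\,g\left(\tfrac{t_f}{2}\right)^{2} \;=\; \frac{g\,t_f^{2}}{8}
    ```

    This also plausibly explains why foot length matters: the mat registers when contact at the feet is broken and restored, while motion capture tracks a marker, so the two systems measure systematically different displacements.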

  16. The relationship between purely stochastic sampling error and the number of technical replicates used to estimate concentration at an extreme dilution

    USDA-ARS?s Scientific Manuscript database

    For any analytical system the population mean (mu) number of entities (e.g., cells or molecules) per tested volume, surface area, or mass also defines the population standard deviation (sigma = square root of mu). For a preponderance of analytical methods, sigma is very small relative to mu due to...
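
    A small simulation makes the point concrete, under the Poisson assumption alone (no pipetting or plating error): at extreme dilution the count per tested volume is approximately Poisson(μ), so a single measurement has standard deviation √μ, and averaging k technical replicates shrinks the standard error of the estimated concentration to √(μ/k).

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    mu = 2.0                                    # mean entities per tested volume
    for k in (1, 5, 25):                        # number of technical replicates
        means = rng.poisson(mu, size=(100_000, k)).mean(axis=1)
        # empirical SE of the replicate mean vs the theoretical sqrt(mu/k)
        print(k, round(float(means.std()), 3), round((mu / k) ** 0.5, 3))
    ```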

  17. The Use of Quality Control and Data Mining Techniques for Monitoring Scaled Scores: An Overview. Research Report. ETS RR-12-20

    ERIC Educational Resources Information Center

    von Davier, Alina A.

    2012-01-01

    Maintaining comparability of test scores is a major challenge faced by testing programs that have almost continuous administrations. Among the potential problems are scale drift and rapid accumulation of errors. Many standard quality control techniques for testing programs, which can effectively detect and address scale drift for small numbers of…

  18. The effect of income and occupation on body mass index among women in the Cebu Longitudinal Health and Nutrition Surveys (1983-2002).

    PubMed

    Colchero, M Arantxa; Caballero, Benjamin; Bishai, David

    2008-05-01

    We assessed the effects of changes in income and occupational activities on changes in body weight among 2952 non-pregnant women enrolled in the Cebu Longitudinal Health and Nutrition Surveys between 1983 and 2002. On average, body mass index (BMI) among women in low-activity occupations was 0.29 kg/m² (standard error 0.11) higher than among women in heavy-activity occupations. BMI among women in medium-activity occupations was on average 0.12 kg/m² (standard error 0.05) higher than among women in heavy-activity occupations. A one-unit increase in log household income in the previous survey was associated with a small and positive change in BMI of 0.006 kg/m² (standard error 0.02), but the effect was not significant. The trend of increasing body mass was higher in the late 1980s than during the 1990s. These period effects were stronger for the women who were younger at baseline and for women with low or medium activity levels. Our analysis suggests a trend in the environment over the last 20 years that has increased the susceptibility of Filipino women to larger body mass.

  19. Absolute method of measuring magnetic susceptibility

    USGS Publications Warehouse

    Thorpe, A.; Senftle, F.E.

    1959-01-01

    An absolute method of standardization and measurement of the magnetic susceptibility of small samples is presented which can be applied to most techniques based on the Faraday method. The fact that the susceptibility is a function of the area under the curve of sample displacement versus distance of the magnet from the sample offers a simple method of measuring the susceptibility without recourse to a standard sample. Typical results on a few substances are compared with reported values, and an error of less than 2% can be achieved. © 1959 The American Institute of Physics.

  20. Deriving Color-Color Transformations for VRI Photometry

    NASA Astrophysics Data System (ADS)

    Taylor, B. J.; Joner, M. D.

    2006-12-01

    In this paper, transformations between Cousins R-I and other indices are considered. New transformations to Cousins V-R and Johnson V-K are derived, a published transformation involving T1-T2 on the Washington system is rederived, and the basis for a transformation involving b-y is considered. In addition, a statistically rigorous procedure for deriving such transformations is presented and discussed in detail. Highlights of the discussion include (1) the need for statistical analysis when least-squares relations are determined and interpreted, (2) the permitted forms and best forms for such relations, (3) the essential role played by accidental errors, (4) the decision process for selecting terms to appear in the relations, (5) the use of plots of residuals, (6) detection of influential data, (7) a protocol for assessing systematic effects from absorption features and other sources, (8) the reasons for avoiding extrapolation of the relations, (9) a protocol for ensuring uniformity in data used to determine the relations, and (10) the derivation and testing of the accidental errors of those data. To put the last of these subjects in perspective, it is shown that rms errors for VRI photometry have been as small as 6 mmag for more than three decades and that standard errors for quantities derived from such photometry can be as small as 1 mmag or less.

  1. A Simulation Study of Categorizing Continuous Exposure Variables Measured with Error in Autism Research: Small Changes with Large Effects.

    PubMed

    Heavner, Karyn; Burstyn, Igor

    2015-08-24

    Variation in the odds ratio (OR) resulting from selection of cutoffs for categorizing continuous variables is rarely discussed. We present results for the effect of varying cutoffs used to categorize a mismeasured exposure in a simulated population in the context of autism spectrum disorders research. Simulated cohorts were created with three distinct exposure-outcome curves and three measurement error variances for the exposure. ORs were calculated using logistic regression for 61 cutoffs (mean ± 3 standard deviations) used to dichotomize the observed exposure. ORs were calculated for five categories with a wide range for the cutoffs. For each scenario and cutoff, the OR, sensitivity, and specificity were calculated. The three exposure-outcome relationships had distinctly shaped OR (versus cutoff) curves, but increasing measurement error obscured the shape. At extreme cutoffs, there was non-monotonic oscillation in the ORs that cannot be attributed to "small numbers." Exposure misclassification following categorization of the mismeasured exposure was differential, as predicted by theory. Sensitivity was higher among cases and specificity among controls. Cutoffs chosen for categorizing continuous variables can have profound effects on study results. When measurement error is not too great, the shape of the OR curve may provide insight into the true shape of the exposure-disease relationship.
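
    A compressed version of this simulation design is sketched below as a hypothetical re-implementation: a continuous true exposure drives the outcome through a logistic curve, the observed exposure adds classical measurement error, and the OR is computed from a 2×2 table at each cutoff. All numbers (slope, intercept, error SDs, cutoffs) are illustrative, not the paper's settings.

    ```python
    import numpy as np

    rng = np.random.default_rng(42)
    n = 50_000
    x_true = rng.standard_normal(n)
    p = 1.0 / (1.0 + np.exp(-(-2.0 + 1.0 * x_true)))    # logistic exposure-outcome curve
    y = rng.random(n) < p
    for err_sd in (0.0, 0.5, 1.0):                      # increasing measurement error
        x_obs = x_true + rng.normal(0.0, err_sd, n)     # classical, non-differential error
        for cut in (-1.0, 0.0, 1.0):                    # dichotomization cutoffs
            e = x_obs > cut
            a, b = (e & y).sum(), (e & ~y).sum()
            c, d = (~e & y).sum(), (~e & ~y).sum()
            print(err_sd, cut, round(a * d / (b * c), 2))   # odds ratio from the 2x2 table
    ```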

  2. An efficient algorithm for generating random number pairs drawn from a bivariate normal distribution

    NASA Technical Reports Server (NTRS)

    Campbell, C. W.

    1983-01-01

    An efficient algorithm for generating random number pairs from a bivariate normal distribution was developed. Any desired value of the two means, two standard deviations, and correlation coefficient can be selected. Theoretically the technique is exact and in practice its accuracy is limited only by the quality of the uniform distribution random number generator, inaccuracies in computer function evaluation, and arithmetic. A FORTRAN routine was written to check the algorithm and good accuracy was obtained. Some small errors in the correlation coefficient were observed to vary in a surprisingly regular manner. A simple model was developed which explained the qualitative aspects of the errors.
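
    The construction in question is standard and compact enough to show; this is the textbook conditional-normal algorithm, not necessarily the routine in the 1983 report:

    ```python
    import numpy as np

    def bivariate_normal(mu1, mu2, sd1, sd2, rho, size, rng):
        """Draw pairs from a bivariate normal: take two independent standard
        normals and give the second a correlated component of the first."""
        z1 = rng.standard_normal(size)
        z2 = rng.standard_normal(size)
        x = mu1 + sd1 * z1
        y = mu2 + sd2 * (rho * z1 + np.sqrt(1.0 - rho**2) * z2)
        return x, y

    rng = np.random.default_rng(0)
    x, y = bivariate_normal(0.0, 10.0, 1.0, 2.0, 0.8, 100_000, rng)
    print(np.corrcoef(x, y)[0, 1])   # should be close to 0.8
    ```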

  3. Measurement of diffusion coefficients from solution rates of bubbles

    NASA Technical Reports Server (NTRS)

    Krieger, I. M.

    1979-01-01

    The rate of solution of a stationary bubble is limited by the diffusion of dissolved gas molecules away from the bubble surface. Diffusion coefficients computed from measured rates of solution give mean values higher than accepted literature values, with standard errors as high as 10% for a single observation. Better accuracy is achieved with sparingly soluble gases, small bubbles, and highly viscous liquids. Accuracy correlates with the Grashof number, indicating that free convection is the major source of error. Accuracy should, therefore, be greatly increased in a gravity-free environment. The fact that the bubble will need no support is an additional important advantage of Spacelab for this measurement.

  4. Extending the Solvation-Layer Interface Condition Continuum Electrostatic Model to a Linearized Poisson-Boltzmann Solvent.

    PubMed

    Molavi Tabrizi, Amirhossein; Goossens, Spencer; Mehdizadeh Rahimi, Ali; Cooper, Christopher D; Knepley, Matthew G; Bardhan, Jaydeep P

    2017-06-13

    We extend the linearized Poisson-Boltzmann (LPB) continuum electrostatic model for molecular solvation to address charge-hydration asymmetry. Our new solvation-layer interface condition (SLIC)/LPB corrects for first-shell response by perturbing the traditional continuum-theory interface conditions at the protein-solvent and the Stern-layer interfaces. We also present a GPU-accelerated treecode implementation capable of simulating large proteins, and our results demonstrate that the new model exhibits significant accuracy improvements over traditional LPB models, while reducing the number of fitting parameters from dozens (atomic radii) to just five parameters, which have physical meanings related to first-shell water behavior at an uncharged interface. In particular, atom radii in the SLIC model are not optimized but uniformly scaled from their Lennard-Jones radii. Compared to explicit-solvent free-energy calculations of individual atoms in small molecules, SLIC/LPB is significantly more accurate than standard parametrizations (RMS error 0.55 kcal/mol for SLIC, compared to RMS error of 3.05 kcal/mol for standard LPB). On parametrizing the electrostatic model with a simple nonpolar component for total molecular solvation free energies, our model predicts octanol/water transfer free energies with an RMS error 1.07 kcal/mol. A more detailed assessment illustrates that standard continuum electrostatic models reproduce total charging free energies via a compensation of significant errors in atomic self-energies; this finding offers a window into improving the accuracy of Generalized-Born theories and other coarse-grained models. Most remarkably, the SLIC model also reproduces positive charging free energies for atoms in hydrophobic groups, whereas standard PB models are unable to generate positive charging free energies regardless of the parametrized radii. The GPU-accelerated solver is freely available online, as is a MATLAB implementation.

  5. Multiple window spatial registration error of a gamma camera: 133Ba point source as a replacement of the NEMA procedure.

    PubMed

    Bergmann, Helmar; Minear, Gregory; Raith, Maria; Schaffarich, Peter M

    2008-12-09

    The accuracy of multiple window spatial registration characterises the performance of a gamma camera for dual-isotope imaging. In the present study we investigate an alternative method to the standard NEMA procedure for measuring this performance parameter. A long-lived 133Ba point source with gamma energies close to 67Ga and a single-bore lead collimator were used to measure the multiple window spatial registration error. Calculation of the positions of the point source in the images used the NEMA algorithm. The results were validated against the values obtained by the standard NEMA procedure, which uses a liquid 67Ga source with collimation. Of the source-collimator configurations under investigation, an optimum collimator geometry, consisting of a 5 mm thick lead disk with a diameter of 46 mm and a 5 mm central bore, was selected. The multiple window spatial registration errors obtained by the 133Ba method showed excellent reproducibility (standard deviation < 0.07 mm). The values were compared with the results from the NEMA procedure obtained at the same locations and showed small differences, with a correlation coefficient of 0.51 (p < 0.05). In addition, the 133Ba point source method proved to be much easier to use. A Bland-Altman analysis showed that the 133Ba and the 67Ga methods can be used interchangeably. The 133Ba point source method measures the multiple window spatial registration error with essentially the same accuracy as the NEMA-recommended procedure, but is easier and safer to use and has the potential to replace the current standard procedure.

  6. Reproducibility of 3D kinematics and surface electromyography measurements of mastication.

    PubMed

    Remijn, Lianne; Groen, Brenda E; Speyer, Renée; van Limbeek, Jacques; Nijhuis-van der Sanden, Maria W G

    2016-03-01

    The aim of this study was to determine the measurement reproducibility for a procedure evaluating the mastication process and to estimate the smallest detectable differences of 3D kinematic and surface electromyography (sEMG) variables. Kinematics of mandible movements and sEMG activity of the masticatory muscles were obtained over two sessions with four conditions: two food textures (biscuit and bread) of two sizes (small and large). Twelve healthy adults (mean age 29.1 years) completed the study. The second to fifth chewing cycles of 5 bites were used for analyses. The reproducibility per outcome variable was calculated with an intraclass correlation coefficient (ICC), and a Bland-Altman analysis was applied to determine the standard error of measurement, relative error of measurement, and smallest detectable differences of all variables. ICCs ranged from 0.71 to 0.98 for all outcome variables. The outcome variables consisted of four bite and fourteen chewing cycle variables. The relative standard error of measurement of the bite variables was up to 17.3% for 'time-to-swallow', 'time-to-transport' and 'number of chewing cycles', but ranged from 31.5% to 57.0% for 'change of chewing side'. The relative standard error of measurement ranged from 4.1% to 24.7% for chewing cycle variables and was smaller for kinematic variables than for sEMG variables. In general, 3D kinematics and sEMG are reproducible techniques for assessing the mastication process. The duration of the chewing cycle and the frequency of chewing were the most reproducible measurements. Change of chewing side could not be reproduced. The published measurement error and smallest detectable differences will aid the interpretation of the results of future clinical studies using the same study variables. Copyright © 2015 Elsevier Inc. All rights reserved.

  7. Method for estimating low-flow characteristics of ungaged streams in Indiana

    USGS Publications Warehouse

    Arihood, Leslie D.; Glatfelter, Dale R.

    1991-01-01

    Equations for estimating the 7-day, 2-year and 7-day, 10-year low flows at sites on ungaged streams are presented. Regression analysis was used to develop equations relating basin characteristics and low-flow characteristics at 82 gaging stations. Significant basin characteristics in the equations are contributing drainage area and flow-duration ratio, which is the 20-percent flow duration divided by the 90-percent flow duration. Flow-duration ratio has been regionalized for Indiana on a plate. Ratios for use in the equations are obtained from the plate. Drainage areas are determined from maps or are obtained from reports. The predictive capability of the method was determined by tests of the equations and of the flow-duration ratios on the plate. The accuracy of the equations alone was tested by estimating the low-flow characteristics at 82 gaging stations where flow-duration ratio is already known. In this case, the standard errors of estimate for 7-day, 2-year and 7-day, 10-year low flows are 19 and 28 percent. When flow-duration ratios for the 82 gaging stations are obtained from the map, the standard errors are 46 and 61 percent. However, when stations having drainage areas of less than 10 square miles are excluded from the test, the standard errors decrease to 38 and 49 percent. Standard errors increase when stations with small basins are included, probably because some of the flow-duration ratios obtained for these small basins are incorrect. Local geology and its effect on the ratio are not adequately reflected on the plate, which shows the regional variation in flow-duration ratio. In all the tests, no bias is apparent areally, with increasing drainage area or with increasing ratio. Guidelines and limitations should be considered when using the method. The method can be applied only at sites in the northern and central physiographic zones of the State. Low-flow characteristics cannot be estimated for regulated streams unless the amount of regulation is known so that the estimated low-flow characteristic can be adjusted. The method is most accurate for sites having drainage areas ranging from 10 to 1,000 square miles and for predictions of 7-day, 10-year low flows ranging from 0.5 to 340 cubic feet per second.

  8. Method for estimating low-flow characteristics of ungaged streams in Indiana

    USGS Publications Warehouse

    Arihood, L.D.; Glatfelter, D.R.

    1986-01-01

    Equations for estimating the 7-day, 2-yr and 7-day, 10-yr low flows at sites on ungaged streams are presented. Regression analysis was used to develop equations relating basin characteristics and low flow characteristics at 82 gaging stations. Significant basin characteristics in the equations are contributing drainage area and flow duration ratio, which is the 20% flow duration divided by the 90% flow duration. Flow duration ratio has been regionalized for Indiana on a plate. Ratios for use in the equations are obtained from this plate. Drainage areas are determined from maps or are obtained from reports. The predictive capability of the method was determined by tests of the equations and of the flow duration ratios on the plate. The accuracy of the equations alone was tested by estimating the low flow characteristics at 82 gaging stations where flow duration ratio is already known. In this case, the standard errors of estimate for 7-day, 2-yr and 7-day, 10-yr low flows are 19% and 28%. When flow duration ratios for the 82 gaging stations are obtained from the map, the standard errors are 46% and 61%. However, when stations with drainage areas < 10 sq mi are excluded from the test, the standard errors reduce to 38% and 49%. Standard errors increase when stations with small basins are included, probably because some of the flow duration ratios obtained for these small basins are incorrect. Local geology and its effect on the ratio are not adequately reflected on the plate, which shows the regional variation in flow duration ratio. In all the tests, no bias is apparent areally, with increasing drainage area or with increasing ratio. Guidelines and limitations should be considered when using the method. The method can be applied only at sites in the northern and the central physiographic zones of the state. Low flow characteristics cannot be estimated for regulated streams unless the amount of regulation is known so that the estimated low flow characteristic can be adjusted. The method is most accurate for sites with drainage areas ranging from 10 to 1,000 sq mi and for predictions of 7-day, 10-yr low flows ranging from 0.5 to 340 cu ft/sec. (Author's abstract)

  9. Acoustic holography as a metrological tool for characterizing medical ultrasound sources and fields

    PubMed Central

    Sapozhnikov, Oleg A.; Tsysar, Sergey A.; Khokhlova, Vera A.; Kreider, Wayne

    2015-01-01

    Acoustic holography is a powerful technique for characterizing ultrasound sources and the fields they radiate, with the ability to quantify source vibrations and reduce the number of required measurements. These capabilities are increasingly appealing for meeting measurement standards in medical ultrasound; however, associated uncertainties have not been investigated systematically. Here errors associated with holographic representations of a linear, continuous-wave ultrasound field are studied. To facilitate the analysis, error metrics are defined explicitly, and a detailed description of a holography formulation based on the Rayleigh integral is provided. Errors are evaluated both for simulations of a typical therapeutic ultrasound source and for physical experiments with three different ultrasound sources. Simulated experiments explore sampling errors introduced by the use of a finite number of measurements, geometric uncertainties in the actual positions of acquired measurements, and uncertainties in the properties of the propagation medium. Results demonstrate the theoretical feasibility of keeping errors less than about 1%. Typical errors in physical experiments were somewhat larger, on the order of a few percent; comparison with simulations provides specific guidelines for improving the experimental implementation to reduce these errors. Overall, results suggest that holography can be implemented successfully as a metrological tool with small, quantifiable errors. PMID:26428789
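
    The Rayleigh-integral formulation mentioned above relates the radiated pressure to the normal velocity of a baffled planar source; the sign of the prefactor depends on the e^(−iωt) time convention assumed here:

    ```latex
    p(\mathbf{r}) \;=\; -\,\frac{i\,\omega\rho_0}{2\pi}\int_{S} v_n(\mathbf{r}')\,
    \frac{e^{ikR}}{R}\,\mathrm{d}S', \qquad R = \lvert \mathbf{r}-\mathbf{r}' \rvert .
    ```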

  10. Role of the standard deviation in the estimation of benchmark doses with continuous data.

    PubMed

    Gaylor, David W; Slikker, William

    2004-12-01

    For continuous data, risk is defined here as the proportion of animals with values above a large percentile, e.g., the 99th percentile or below the 1st percentile, for the distribution of values among control animals. It is known that reducing the standard deviation of measurements through improved experimental techniques will result in less stringent (higher) doses for the lower confidence limit on the benchmark dose that is estimated to produce a specified risk of animals with abnormal levels for a biological effect. Thus, a somewhat larger (less stringent) lower confidence limit is obtained that may be used as a point of departure for low-dose risk assessment. It is shown in this article that it is important for the benchmark dose to be based primarily on the standard deviation among animals, s(a), apart from the standard deviation of measurement errors, s(m), within animals. If the benchmark dose is incorrectly based on the overall standard deviation among average values for animals, which includes measurement error variation, the benchmark dose will be overestimated and the risk will be underestimated. The bias increases as s(m) increases relative to s(a). The bias is relatively small if s(m) is less than one-third of s(a), a condition achieved in most experimental designs.
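
    With one measurement per animal and independent measurement error, the decomposition behind this argument is simply:

    ```latex
    s_{\mathrm{obs}}^{2} \;=\; s_a^{2} + s_m^{2}
    \qquad\Longrightarrow\qquad
    s_a \;=\; \sqrt{\,s_{\mathrm{obs}}^{2} - s_m^{2}\,} .
    ```

    For example, at s_m = s_a/3 the observed SD is inflated by a factor of √(1 + 1/9) ≈ 1.054, about 5 percent, which is consistent with the small bias reported for that regime.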

  11. The effect of covariate mean differences on the standard error and confidence interval for the comparison of treatment means.

    PubMed

    Liu, Xiaofeng Steven

    2011-05-01

    The use of covariates is commonly believed to reduce the unexplained error variance and the standard error for the comparison of treatment means, but the reduction in the standard error is neither guaranteed nor uniform over different sample sizes. The covariate mean differences between the treatment conditions can inflate the standard error of the covariate-adjusted mean difference and can actually produce a larger standard error for the adjusted mean difference than that for the unadjusted mean difference. When the covariate observations are conceived of as randomly varying from one study to another, the covariate mean differences can be related to a Hotelling's T². Using this Hotelling's T² statistic, one can always find a minimum sample size to achieve a high probability of reducing the standard error and confidence interval width for the adjusted mean difference. ©2010 The British Psychological Society.

  12. Standard Errors and Confidence Intervals of Norm Statistics for Educational and Psychological Tests.

    PubMed

    Oosterhuis, Hannah E M; van der Ark, L Andries; Sijtsma, Klaas

    2016-11-14

    Norm statistics allow for the interpretation of scores on psychological and educational tests, by relating the test score of an individual test taker to the test scores of individuals belonging to the same gender, age, or education groups, et cetera. Given the uncertainty due to sampling error, one would expect researchers to report standard errors for norm statistics. In practice, standard errors are seldom reported; they are either unavailable or derived under strong distributional assumptions that may not be realistic for test scores. We derived standard errors for four norm statistics (standard deviation, percentile ranks, stanine boundaries and Z-scores) under the mild assumption that the test scores are multinomially distributed. A simulation study showed that the standard errors were unbiased and that corresponding Wald-based confidence intervals had good coverage. Finally, we discuss the possibilities for applying the standard errors in practical test use in education and psychology. The procedure is provided via the R function check.norms, which is available in the mokken package.

  13. Hypothesis Testing Using Factor Score Regression

    PubMed Central

    Devlieger, Ines; Mayer, Axel; Rosseel, Yves

    2015-01-01

    In this article, an overview is given of four methods to perform factor score regression (FSR), namely regression FSR, Bartlett FSR, the bias avoiding method of Skrondal and Laake, and the bias correcting method of Croon. The bias correcting method is extended to include a reliable standard error. The four methods are compared with each other and with structural equation modeling (SEM) by using analytic calculations and two Monte Carlo simulation studies to examine their finite sample characteristics. Several performance criteria are used, such as the bias using the unstandardized and standardized parameterization, efficiency, mean square error, standard error bias, type I error rate, and power. The results show that the bias correcting method, with the newly developed standard error, is the only suitable alternative for SEM. While it has a higher standard error bias than SEM, it has a comparable bias, efficiency, mean square error, power, and type I error rate. PMID:29795886

  14. Methods for estimating streamflow at mountain fronts in southern New Mexico

    USGS Publications Warehouse

    Waltemeyer, S.D.

    1994-01-01

    The infiltration of streamflow is potential recharge to alluvial-basin aquifers at or near mountain fronts in southern New Mexico. Data for 13 streamflow-gaging stations were used to determine a relation between mean annual streamflow and basin and climatic conditions. Regression analysis was used to develop an equation that can be used to estimate mean annual streamflow on the basis of drainage areas and mean annual precipitation. The average standard error of estimate for this equation is 46 percent. Regression analysis also was used to develop an equation to estimate mean annual streamflow on the basis of active-channel width. Measurements of the width of active channels were determined for 6 of the 13 gaging stations. The average standard error of estimate for this relation is 29 percent. Streamflow estimates made using a regression equation based on channel geometry are considered more reliable than estimates made from an equation based on regional relations of basin and climatic conditions. The sample size used to develop these relations was small, however, and the reported standard error of estimate may not represent that of the entire population. Active-channel-width measurements were made at 23 ungaged sites along the Rio Grande upstream from Elephant Butte Reservoir. Data for additional sites would be needed for a more comprehensive assessment of mean annual streamflow in southern New Mexico.

  15. Telemetry Standards, RCC Standard 106-17, Annex A.1, Pulse Amplitude Modulation Standards

    DTIC Science & Technology

    2017-07-01

    conform to either Figure A.1-1 or Figure A.1-2. Figure A.1-1. 50 percent duty cycle PAM with amplitude synchronization. A 20-25 percent deviation reserved for pulse synchronization is recommended.

  16. A novel auto-tuning PID control mechanism for nonlinear systems.

    PubMed

    Cetin, Meric; Iplikci, Serdar

    2015-09-01

    In this paper, a novel Runge-Kutta (RK) discretization-based model-predictive auto-tuning proportional-integral-derivative controller (RK-PID) is introduced for the control of continuous-time nonlinear systems. The parameters of the PID controller are tuned using an RK model of the system through prediction error-square minimization, where the predicted information of tracking error provides an enhanced tuning of the parameters. Based on the model-predictive control (MPC) approach, the proposed mechanism provides necessary PID parameter adaptations while generating additive correction terms to assist the initially inadequate PID controller. Efficiency of the proposed mechanism has been tested on two experimental real-time systems: an unstable single-input single-output (SISO) nonlinear magnetic-levitation system and a nonlinear multi-input multi-output (MIMO) liquid-level system. RK-PID has been compared to standard PID, standard nonlinear MPC (NMPC), RK-MPC and conventional sliding-mode control (SMC) methods in terms of control performance, robustness, computational complexity and design issues. The proposed mechanism exhibits acceptable tuning and control performance with very small steady-state tracking errors, and provides very short settling time for parameter convergence. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.
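
    For readers unfamiliar with the baseline being tuned, a standard discrete (positional) PID law is sketched below. This is only the fixed-gain controller, not the paper's RK-based auto-tuning or model-predictive correction terms, and the plant and gains are toy assumptions:

    ```python
    def pid_step(e, state, kp, ki, kd, dt):
        """One update of u = kp*e + ki*integral(e) + kd*de/dt."""
        integ, e_prev = state
        integ += e * dt                   # accumulate the integral of the error
        deriv = (e - e_prev) / dt         # backward-difference derivative
        return kp * e + ki * integ + kd * deriv, (integ, e)

    # toy closed loop on a first-order plant x' = -x + u, setpoint 1.0
    x, state, dt = 0.0, (0.0, 0.0), 0.01
    for _ in range(1000):
        e = 1.0 - x
        u, state = pid_step(e, state, kp=2.0, ki=1.0, kd=0.05, dt=dt)
        x += (-x + u) * dt
    print(round(x, 3))   # settles near the setpoint
    ```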

  17. Evaluation of a laser scanner for large volume coordinate metrology: a comparison of results before and after factory calibration

    NASA Astrophysics Data System (ADS)

    Ferrucci, M.; Muralikrishnan, B.; Sawyer, D.; Phillips, S.; Petrov, P.; Yakovlev, Y.; Astrelin, A.; Milligan, S.; Palmateer, J.

    2014-10-01

    Large volume laser scanners are increasingly being used for a variety of dimensional metrology applications. Methods to evaluate the performance of these scanners are still under development and there are currently no documentary standards available. This paper describes the results of extensive ranging and volumetric performance tests conducted on a large volume laser scanner. The results demonstrated small but clear systematic errors that are explained in the context of a geometric error model for the instrument. The instrument was subsequently returned to the manufacturer for factory calibration. The ranging and volumetric tests were performed again and the results are compared against those obtained prior to the factory calibration.

  18. Repetition code of 15 qubits

    NASA Astrophysics Data System (ADS)

    Wootton, James R.; Loss, Daniel

    2018-05-01

    The repetition code is an important primitive for the techniques of quantum error correction. Here we implement repetition codes of up to 15 qubits on the 16-qubit ibmqx3 device. Each experiment is run for a single round of syndrome measurements, performed in the standard way with ancilla qubits and controlled operations. The size of the final syndrome is small enough to allow for lookup table decoding using experimentally obtained data. The results show strong evidence that the logical error rate decays exponentially with code distance, as is expected and required for the development of fault-tolerant quantum computers. The results also give insight into the nature of noise in the device.
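
    A hedged sketch of the headline effect: the logical error rate of a repetition code under independent bit-flip noise, decoded by majority vote, decays roughly exponentially with code distance. This is a classical Monte Carlo illustration, not a reproduction of the ibmqx3 experiment or its syndrome-based lookup decoder; the flip probability is an assumption.

      import random

      def logical_error_rate(distance, p_flip, trials=100_000):
          # A logical error occurs when more than half the bits flip
          errors = 0
          for _ in range(trials):
              flips = sum(random.random() < p_flip for _ in range(distance))
              if flips > distance // 2:
                  errors += 1
          return errors / trials

      for d in (3, 5, 7, 9, 11, 13, 15):
          print(d, logical_error_rate(d, p_flip=0.1))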

  19. Simplified Estimation and Testing in Unbalanced Repeated Measures Designs.

    PubMed

    Spiess, Martin; Jordan, Pascal; Wendt, Mike

    2018-05-07

    In this paper we propose a simple estimator for unbalanced repeated measures design models where each unit is observed at least once in each cell of the experimental design. The estimator does not require a model of the error covariance structure. Thus, circularity of the error covariance matrix and estimation of correlation parameters and variances are not necessary. Together with a weak assumption about the reason for the varying number of observations, the proposed estimator and its variance estimator are unbiased. As an alternative to confidence intervals based on the normality assumption, a bias-corrected and accelerated bootstrap technique is considered. We also propose the naive percentile bootstrap for Wald-type tests, where the standard Wald test may break down when the number of observations is small relative to the number of parameters to be estimated. In a simulation study we illustrate the properties of the estimator and the bootstrap techniques to calculate confidence intervals and conduct hypothesis tests in small and large samples under normality and non-normality of the errors. The results imply that the simple estimator is only slightly less efficient than an estimator that correctly assumes a block structure of the error correlation matrix, a special case of which is an equi-correlation matrix. Application of the estimator and the bootstrap technique is illustrated using data from a task-switch experiment based on a within-subjects experimental design with 32 cells and 33 participants.
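
    A minimal sketch of the naive percentile bootstrap for a small-sample interval, the simplest of the resampling tools the paper discusses. It uses hypothetical difference scores and does not implement the paper's repeated-measures estimator or its bias-corrected and accelerated (BCa) variant.

      import numpy as np

      rng = np.random.default_rng(0)
      x = rng.normal(0.5, 1.0, size=20)   # hypothetical condition-difference scores

      # Resample with replacement, recompute the statistic, take percentiles
      boot = np.array([rng.choice(x, size=x.size, replace=True).mean()
                       for _ in range(10_000)])
      lo, hi = np.percentile(boot, [2.5, 97.5])
      print(f"mean = {x.mean():.3f}, 95% percentile bootstrap CI = [{lo:.3f}, {hi:.3f}]")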

  20. Inference With Difference-in-Differences With a Small Number of Groups: A Review, Simulation Study, and Empirical Application Using SHARE Data.

    PubMed

    Rokicki, Slawa; Cohen, Jessica; Fink, Günther; Salomon, Joshua A; Landrum, Mary Beth

    2018-01-01

    Difference-in-differences (DID) estimation has become increasingly popular as an approach to evaluate the effect of a group-level policy on individual-level outcomes. Several statistical methodologies have been proposed to correct for the within-group correlation of model errors resulting from the clustering of data. Little is known about how well these corrections perform with the often small number of groups observed in health research using longitudinal data. First, we review the most commonly used modeling solutions in DID estimation for panel data, including generalized estimating equations (GEE), permutation tests, clustered standard errors (CSE), wild cluster bootstrapping, and aggregation. Second, we compare the empirical coverage rates and power of these methods using a Monte Carlo simulation study in scenarios in which we vary the degree of error correlation, the group size balance, and the proportion of treated groups. Third, we provide an empirical example using the Survey of Health, Ageing, and Retirement in Europe. When the number of groups is small, CSE are systematically biased downwards in scenarios when data are unbalanced or when there is a low proportion of treated groups. This can result in over-rejection of the null even when data are composed of up to 50 groups. Aggregation, permutation tests, bias-adjusted GEE, and wild cluster bootstrap produce coverage rates close to the nominal rate for almost all scenarios, though GEE may suffer from low power. In DID estimation with a small number of groups, analysis using aggregation, permutation tests, wild cluster bootstrap, or bias-adjusted GEE is recommended.
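
    A hedged sketch of the cluster-robust (CRVE) standard error whose small-G behavior the paper studies, computed by hand for an OLS difference-in-means on simulated data. The data-generating process, group counts, and the finite-G correction factor are standard textbook choices, assumed here rather than taken from the paper.

      import numpy as np

      rng = np.random.default_rng(1)
      G, n_per = 10, 30                              # few groups, as in the paper
      group = np.repeat(np.arange(G), n_per)
      treat = (group < G // 2).astype(float)         # half the groups are treated
      u = rng.normal(size=G)[group] + rng.normal(size=G * n_per)  # clustered errors
      y = 1.0 + 0.0 * treat + u                      # true treatment effect is zero

      X = np.column_stack([np.ones_like(treat), treat])
      XtX_inv = np.linalg.inv(X.T @ X)
      beta = XtX_inv @ X.T @ y
      resid = y - X @ beta

      # Meat of the sandwich: sum over clusters of X_g' u_g u_g' X_g
      meat = np.zeros((2, 2))
      for g in range(G):
          idx = group == g
          s = X[idx].T @ resid[idx]
          meat += np.outer(s, s)

      N, K = X.shape
      correction = G / (G - 1) * (N - 1) / (N - K)   # common finite-G adjustment
      V = correction * XtX_inv @ meat @ XtX_inv
      print("treatment effect:", beta[1], "clustered SE:", np.sqrt(V[1, 1]))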

  1. Theory of injection locking and rapid start-up of magnetrons, and effects of manufacturing errors in terahertz traveling wave tubes

    NASA Astrophysics Data System (ADS)

    Pengvanich, Phongphaeth

    In this thesis, several contemporary issues on coherent radiation sources are examined. They include the fast startup and the injection locking of microwave magnetrons, and the effects of random manufacturing errors on the phase and small signal gain of terahertz traveling wave amplifiers. In response to the rapid startup and low noise magnetron experiments performed at the University of Michigan, which employed periodic azimuthal perturbations in the axial magnetic field, a systematic study of single particle orbits is performed for a crossed electric and periodic magnetic field. A parametric instability in the orbits, which brings a fraction of the electrons from the cathode toward the anode, is discovered. This offers an explanation of the rapid startup observed in the experiments. A phase-locking model has been constructed from circuit theory to qualitatively explain various regimes observed in kilowatt magnetron injection-locking experiments, which were performed at the University of Michigan. These experiments utilize two continuous-wave magnetrons; one functions as an oscillator and the other as a driver. Time and frequency domain solutions are developed from the model, allowing investigations into growth, saturation, and frequency response of the output. The model qualitatively recovers many of the phase-locking frequency characteristics observed in the experiments. Effects of frequency chirp and frequency perturbation on the phase and lockability have also been quantified. Development of traveling wave amplifiers operating at terahertz frequencies is a subject of current interest. The small circuit size has prompted a statistical analysis of the effects of random fabrication errors on the phase and small signal gain of these amplifiers. The small signal theory is treated with a continuum model in which the electron beam is monoenergetic. Circuit perturbations that vary randomly along the beam axis are introduced through the dimensionless Pierce parameters describing the beam-wave velocity mismatch (b), the gain parameter (C), and the cold tube circuit loss (d). Our study shows that perturbation in b dominates the other two in terms of power gain and phase shift. Extensive data show that the standard deviation of the output phase is linearly proportional to the standard deviation of the individual perturbations in b, C and d.

  2. Multiplate Radiation Shields: Investigating Radiational Heating Errors

    NASA Astrophysics Data System (ADS)

    Richardson, Scott James

    1995-01-01

    Multiplate radiation shield errors are examined using the following techniques: (1) analytic heat transfer analysis, (2) optical ray tracing, (3) numerical fluid flow modeling, (4) laboratory testing, (5) wind tunnel testing, and (6) field testing. Guidelines for reducing radiational heating errors are given that are based on knowledge of the temperature sensor to be used, with the shield being chosen to match the sensor design. Small, reflective sensors that are exposed directly to the air stream (not inside a filter, as is the case for many temperature and relative humidity probes) should be housed in a shield that provides ample mechanical and rain protection while impeding the air flow as little as possible; protection from radiation sources is of secondary importance. If a sensor does not meet the above criteria (i.e., is large or absorbing), then a standard Gill shield performs reasonably well. A new class of shields, called part-time aspirated multiplate radiation shields, is introduced. This type of shield consists of a multiplate design usually operated in a passive manner but equipped with a fan-forced aspiration capability to be used when necessary (e.g., at low wind speed). The fans used here are 12 V DC and can be operated with a small dedicated solar panel. This feature allows the fan to operate when global solar radiation is high, which is when the largest radiational heating errors usually occur. A prototype shield was constructed and field tested, and an example is given in which radiational heating errors were reduced from 2 °C to 1.2 °C. The fan was run continuously to investigate night-time low wind speed errors, and the prototype shield reduced errors from 1.6 °C to 0.3 °C. Part-time aspirated shields are an inexpensive alternative to fully aspirated shields and represent a good compromise between cost, power consumption, reliability (because they should be no worse than a standard multiplate shield if the fan fails), and accuracy. In addition, it is possible to modify existing passive shields to incorporate part-time aspiration, thus making them even more cost-effective. Finally, a new shield is described that incorporates a large-diameter top plate designed to shade the lower portion of the shield. This shield increases flow through it by 60% compared to the Gill design, and it is likely to reduce radiational heating errors, although it has not been tested.

  3. Development of Next Generation Memory Test Experiment for Deployment on a Small Satellite

    NASA Technical Reports Server (NTRS)

    MacLeod, Todd; Ho, Fat D.

    2012-01-01

    The original Memory Test Experiment successfully flew on the FASTSAT satellite launched in November 2010. It contained a single Ramtron 512K ferroelectric memory. The memory device went through many thousands of read/write cycles and recorded any errors that were encountered. The original mission was scheduled to last 6 months but was extended to 18 months. New opportunities exist to launch a similar satellite, and considerations for a new memory test experiment should be examined. The original experiment had to be designed and integrated in less than two months, so the experiment was a simple design using readily available parts. The follow-on experiment needs to be more sophisticated and encompass more technologies. This paper lays out the considerations for the design and development of this follow-on flight memory experiment. It also details the results from the original Memory Test Experiment that flew on board FASTSAT. Some of the design considerations for the new experiment include the number and type of memory devices to be used, the kinds of tests that will be performed, other data needed to analyze the results, and the best use of limited resources on a small satellite. The memory technologies considered are FRAM, FLASH, SONOS, Resistive Memory, Phase Change Memory, Nano-wire Memory, Magneto-resistive Memory, Standard DRAM, and Standard SRAM. The kinds of tests that could be performed are read/write operations, non-volatile memory retention, write cycle endurance, power measurements, and testing Error Detection and Correction schemes. Other data that may help analyze the results are the GPS location of recorded errors, time stamps of all data recorded, radiation measurements, temperature, and other activities being performed by the satellite. The resources of power, volume, mass, temperature, processing power, and telemetry bandwidth are extremely limited on a small satellite. Design considerations must be made so that the experiment does not interfere with the satellite's primary mission.

  4. Standardising the Lactulose Mannitol Test of Gut Permeability to Minimise Error and Promote Comparability

    PubMed Central

    Sequeira, Ivana R.; Lentle, Roger G.; Kruger, Marlena C.; Hurst, Roger D.

    2014-01-01

    Background Lactulose mannitol ratio tests are clinically useful for assessing disorders characterised by changes in gut permeability and for assessing mixing in the intestinal lumen. Variations between currently used test protocols preclude meaningful comparisons between studies. We determined the optimal sampling period and related this to intestinal residence. Methods Half-hourly lactulose and mannitol urinary excretions were determined over 6 hours in 40 healthy female volunteers after administration of either 600 mg aspirin or placebo, in randomised order at weekly intervals. Gastric and small intestinal transit times were assessed by the SmartPill in 6 subjects from the same population. Half-hourly percentage recoveries of lactulose and mannitol were grouped on the basis of compartment transit time. The rate of increase or decrease of each sugar within each group was explored by simple linear regression to assess the optimal period of sampling. Key Results The between-subject standard errors for each half-hourly lactulose and mannitol excretion were lowest, the correlation of the quantity of each sugar excreted with time was optimal, and the difference between the two sugars in this temporal relationship was maximal during the period from 2½-4 h after ingestion. Half-hourly lactulose excretions were generally increased after dosage with aspirin, whilst those of mannitol were unchanged, as were the temporal pattern and period of lowest between-subject standard error for both sugars. Conclusion The results indicate that between-subject variation in the percentage excretion of the two sugars would be minimised and the differences in the temporal patterns of excretion would be maximised if the period of collection of urine used in clinical tests of small intestinal permeability were restricted to 2½-4 h post dosage. This period corresponds to a period when the column of digesta containing the probes is passing from the small to the large intestine. PMID:24901524

  5. A comparison of two estimates of standard error for a ratio-of-means estimator for a mapped-plot sample design in southeast Alaska.

    Treesearch

    Willem W.S. van Hees

    2002-01-01

    Comparisons of estimated standard error for a ratio-of-means (ROM) estimator are presented for forest resource inventories conducted in southeast Alaska between 1995 and 2000. Estimated standard errors for the ROM were generated by using a traditional variance estimator and also approximated by bootstrap methods. Estimates of standard error generated by both...

  6. Toward Joint Hypothesis-Tests Seismic Event Screening Analysis: Ms|mb and Event Depth

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Anderson, Dale; Selby, Neil

    2012-08-14

    Well-established theory can be used to combine single-phenomenology hypothesis tests into a multi-phenomenology event-screening hypothesis test (Fisher's and Tippett's tests). The commonly used standard error in the Ms:mb event-screening hypothesis test is not fully consistent with its physical basis. An improved standard error gives better agreement with the physical basis, correctly partitions error to include model error as a component of variance, and correctly reduces station noise variance through network averaging. For the 2009 DPRK test, the commonly used standard error 'rejects' H0 even with a better scaling slope (β = 1, Selby et al.), whereas the improved standard error 'fails to reject' H0.
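
    A minimal sketch of the two combination rules named above: Fisher's method pools independent p-values through a chi-square statistic, while Tippett's method bases the decision on the smallest p-value. The two input p-values are hypothetical placeholders, not values from the screening analysis.

      import math
      from scipy import stats

      p_values = [0.08, 0.20]   # hypothetical single-phenomenology p-values

      # Fisher: -2 * sum(ln p) ~ chi-square with 2k degrees of freedom under H0
      fisher_stat = -2 * sum(math.log(p) for p in p_values)
      p_fisher = stats.chi2.sf(fisher_stat, df=2 * len(p_values))

      # Tippett: combined p-value from the minimum, p = 1 - (1 - min p)^k
      k = len(p_values)
      p_tippett = 1 - (1 - min(p_values)) ** k

      print("Fisher combined p:", p_fisher)
      print("Tippett combined p:", p_tippett)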

  7. A smart all-in-one device to measure vital signs in admitted patients

    PubMed Central

    van Goor, Harry; van Acht, Maartje; van de Belt, Tom H.; Bredie, Sebastian J. H.

    2018-01-01

    Background Vital sign measurements in hospitalized patients by nurses are time consuming and prone to operational errors. The Checkme, a smart all-in-one device capable of measuring vital signs, could improve daily patient monitoring by reducing measurement time, inter-observer variability, and incorrect inputs in the Electronic Health Record (EHR). We evaluated the accuracy of self-measurements by patients using the Checkme in comparison with gold standard and nurse measurements. Methods and findings This prospective comparative study was conducted at the Internal Medicine ward of an academic hospital in the Netherlands. Fifty non-critically ill patients were enrolled in the study. Time-related measurement sessions were conducted on consecutive patients in a randomized order: vital sign measurement in duplicate by a well-trained investigator (gold standard), a Checkme measurement by the patient, and a routine vital sign measurement by a nurse. In 41 patients (82%), initial calibration of the Checkme was successful and results were eligible for analysis. In total, 69 sessions were conducted for these 41 patients. The temperature results recorded by the patient with the Checkme differed significantly from the gold standard core temperature measurements (mean difference 0.1 ± 0.3). Obtained differences in vital signs and calculated Modified Early Warning Score (MEWS) were small and were in range with predefined accepted discrepancies. Conclusions Patient-calculated MEWS using the Checkme, nurse measurements, and gold standard measurements all correlated well, and the small differences observed between modalities would not have affected clinical decision making. Using the Checkme, patients in a general medical ward setting are able to measure their own vital signs easily and accurately by themselves. This could save time for nurses and prevent errors due to manually entering data in the EHR. PMID:29432461

  8. A smart all-in-one device to measure vital signs in admitted patients.

    PubMed

    Weenk, Mariska; van Goor, Harry; van Acht, Maartje; Engelen, Lucien Jlpg; van de Belt, Tom H; Bredie, Sebastian J H

    2018-01-01

    Vital sign measurements in hospitalized patients by nurses are time consuming and prone to operational errors. The Checkme, a smart all-in-one device capable of measuring vital signs, could improve daily patient monitoring by reducing measurement time, inter-observer variability, and incorrect inputs in the Electronic Health Record (EHR). We evaluated the accuracy of self-measurements by patients using the Checkme in comparison with gold standard and nurse measurements. This prospective comparative study was conducted at the Internal Medicine ward of an academic hospital in the Netherlands. Fifty non-critically ill patients were enrolled in the study. Time-related measurement sessions were conducted on consecutive patients in a randomized order: vital sign measurement in duplicate by a well-trained investigator (gold standard), a Checkme measurement by the patient, and a routine vital sign measurement by a nurse. In 41 patients (82%), initial calibration of the Checkme was successful and results were eligible for analysis. In total, 69 sessions were conducted for these 41 patients. The temperature results recorded by the patient with the Checkme differed significantly from the gold standard core temperature measurements (mean difference 0.1 ± 0.3). Obtained differences in vital signs and calculated Modified Early Warning Score (MEWS) were small and were in range with predefined accepted discrepancies. Patient-calculated MEWS using the Checkme, nurse measurements, and gold standard measurements all correlated well, and the small differences observed between modalities would not have affected clinical decision making. Using the Checkme, patients in a general medical ward setting are able to measure their own vital signs easily and accurately by themselves. This could save time for nurses and prevent errors due to manually entering data in the EHR.

  9. Unmanned aircraft systems image collection and computer vision image processing for surveying and mapping that meets professional needs

    NASA Astrophysics Data System (ADS)

    Peterson, James Preston, II

    Unmanned Aerial Systems (UAS) are rapidly blurring the lines between traditional and close range photogrammetry, and between surveying and photogrammetry. UAS are providing an economic platform for performing aerial surveying on small projects. The focus of this research was to describe traditional photogrammetric imagery and Light Detection and Ranging (LiDAR) geospatial products, describe close range photogrammetry (CRP), introduce UAS and computer vision (CV), and investigate whether industry mapping standards for accuracy can be met using UAS collection and CV processing. A 120-acre site was selected and 97 aerial targets were surveyed for evaluation purposes. Four UAS flights of varying heights above ground level (AGL) were executed, and three different target patterns of varying distances between targets were analyzed for compliance with American Society for Photogrammetry and Remote Sensing (ASPRS) and National Standard for Spatial Data Accuracy (NSSDA) mapping standards. This analysis resulted in twelve datasets. Error patterns were evaluated and reasons for these errors were determined. The relationship between the AGL, ground sample distance, target spacing and the root mean square error of the targets is exploited by this research to develop guidelines that use the ASPRS and NSSDA map standard as the template. These guidelines allow the user to select the desired mapping accuracy and determine what target spacing and AGL is required to produce the desired accuracy. These guidelines also address how UAS/CV phenomena affect map accuracy. General guidelines and recommendations are presented that give the user helpful information for planning a UAS flight using CV technology.
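
    A hedged sketch of the accuracy statistics such a study compares against the map standards: radial and vertical RMSE at surveyed check points, scaled to NSSDA 95%-confidence accuracy values. The residuals are hypothetical; the multipliers (1.7308 horizontal, 1.9600 vertical) are the ones published in the FGDC NSSDA standard, assuming RMSE_x ≈ RMSE_y.

      import numpy as np

      # (measured - surveyed) residuals at check points, in metres (hypothetical)
      dx = np.array([0.03, -0.02, 0.05, -0.04, 0.01])
      dy = np.array([-0.01, 0.04, -0.03, 0.02, 0.03])
      dz = np.array([0.06, -0.05, 0.04, 0.07, -0.02])

      rmse_r = np.sqrt(np.mean(dx**2 + dy**2))   # horizontal radial RMSE
      rmse_z = np.sqrt(np.mean(dz**2))           # vertical RMSE

      print("NSSDA horizontal accuracy (95%):", 1.7308 * rmse_r)
      print("NSSDA vertical accuracy (95%):", 1.9600 * rmse_z)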

  10. A relative navigation sensor for CubeSats based on LED fiducial markers

    NASA Astrophysics Data System (ADS)

    Sansone, Francesco; Branz, Francesco; Francesconi, Alessandro

    2018-05-01

    Small satellite platforms are becoming very appealing for both scientific and commercial applications, thanks to their low cost, short development times, and the availability of standard components and subsystems. The main disadvantage of such vehicles is the limited resources available to perform mission tasks. To overcome this drawback, mission concepts are under study that foresee cooperation between autonomous small satellites to accomplish complex tasks; among these, on-orbit servicing and on-orbit assembly of large structures are of particular interest, and the global scientific community is putting significant effort into the miniaturization of critical technologies required for such innovative mission scenarios. In this work, the development and laboratory testing of an accurate relative navigation package for nanosatellites compliant with the CubeSat standard is presented. The system features a small camera and two sets of LED fiducial markers, and is conceived as a standard package that allows small spacecraft to perform mutual tracking during rendezvous and docking maneuvers. The hardware is based on off-the-shelf components assembled in a compact configuration compatible with the CubeSat standard. The image processing and pose estimation software was custom developed. The experimental evaluation of the system allowed both the static and dynamic performance to be determined. The system can determine the close-range relative position and attitude at rates above 10 samples per second, with errors always below 10 mm and 2 deg.
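
    A hedged sketch of pose estimation from fiducial markers via a standard perspective-n-point solve, analogous in spirit to the paper's custom software (whose internals are not described in the abstract). Marker geometry, detected centroids, and camera intrinsics are all hypothetical placeholders.

      import numpy as np
      import cv2

      # Known 3D positions of LED markers in the target frame (metres, hypothetical)
      object_pts = np.array([[0.00, 0.00, 0.0], [0.10, 0.00, 0.0],
                             [0.10, 0.10, 0.0], [0.00, 0.10, 0.0]], dtype=np.float32)
      # Detected LED centroids in the camera image (pixels, hypothetical)
      image_pts = np.array([[312.0, 240.5], [401.2, 238.9],
                            [403.7, 330.1], [310.4, 332.6]], dtype=np.float32)
      # Pinhole intrinsics from a hypothetical calibration
      K = np.array([[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]])

      ok, rvec, tvec = cv2.solvePnP(object_pts, image_pts, K, None)
      if ok:
          print("relative position (m):", tvec.ravel())
          print("relative attitude (rotation vector):", rvec.ravel())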

  11. Standard error of estimated average timber volume per acre under point sampling when trees are measured for volume on a subsample of all points.

    Treesearch

    Floyd A. Johnson

    1961-01-01

    This report assumes a knowledge of the principles of point sampling as described by Grosenbaugh, Bell and Alexander, and others. Whenever trees are counted at every point in a sample of points (large sample) and measured for volume at a portion (small sample) of these points, the sampling design could be called ratio double sampling. If the large...

  12. Perceptions and Efficacy of Flight Operational Quality Assurance (FOQA) Programs Among Small-scale Operators

    DTIC Science & Technology

    2012-01-01

    Autoregressive Integrated Moving Average (ARIMA) model for the data, eliminating the need to identify an appropriate model through trial and error alone... In general, ARIMA models address three... performance standards and measurement processes and a prevailing climate of organizational trust were important factors. Unfortunately, uneven...

  13. Simultaneous estimation of human and exoskeleton motion: A simplified protocol.

    PubMed

    Alvarez, M T; Torricelli, D; Del-Ama, A J; Pinto, D; Gonzalez-Vargas, J; Moreno, J C; Gil-Agudo, A; Pons, J L

    2017-07-01

    Adequate benchmarking procedures in the area of wearable robots are gaining importance in order to compare different devices on a quantitative basis, improve them, and support standardization and regulation procedures. Performance assessment usually focuses on the execution of locomotion tasks, and is mostly based on kinematic-related measures. The typical drawbacks of marker-based motion capture systems, the gold standard for measuring human limb motion, become challenging when measuring limb kinematics, due to the concomitant presence of the robot. This work answers the question of how to reliably assess the subject's body motion by placing markers over the exoskeleton. Focusing on the ankle joint, the proposed methodology showed that it is possible to reconstruct the trajectory of the subject's joint by placing markers on the exoskeleton, although foot flexibility during walking can affect the reconstruction accuracy. More experiments are needed to confirm this hypothesis, and more subjects and walking conditions are needed to better characterize the errors of the proposed methodology, although our results are promising, indicating small errors.

  14. Architecture overview and data summary of a 5.4 km free-space laser communication experiment

    NASA Astrophysics Data System (ADS)

    Moores, John D.; Walther, Frederick G.; Greco, Joseph A.; Michael, Steven; Wilcox, William E., Jr.; Volpicelli, Alicia M.; Magliocco, Richard J.; Henion, Scott R.

    2009-08-01

    MIT Lincoln Laboratory designed and built two free-space laser communications terminals, and successfully demonstrated error-free communication between two ground sites separated by 5.4 km in September, 2008. The primary goal of this work was to emulate a low elevation angle air-to-ground link capable of supporting standard OTU1 (2.667 Gb/s) data formatting with standard client interfaces. Mitigation of turbulence-induced scintillation effects was accomplished through the use of multiple small-aperture receivers and novel encoding and interleaver hardware. Data from both the field and laboratory experiments were used to assess link performance as a function of system parameters such as transmitted power, degree of spatial diversity, and interleaver span, with and without forward error correction. This work was sponsored by the Department of Defense, RRCO DDR&E, under Air Force Contract FA8721-05-C-0002. Opinions, interpretations, conclusions and recommendations are those of the authors and are not necessarily endorsed by the United States Government.

  15. Uncertainty evaluation in normalization of isotope delta measurement results against international reference materials.

    PubMed

    Meija, Juris; Chartrand, Michelle M G

    2018-01-01

    Isotope delta measurements are normalized against international reference standards. Although multi-point normalization is becoming standard practice, the existing uncertainty evaluation practices are either undocumented or incomplete. For multi-point normalization, we present errors-in-variables regression models for explicit accounting of the measurement uncertainty of the international standards along with the uncertainty that is attributed to their assigned values. This manuscript presents a framework to account for the uncertainty that arises due to a small number of replicate measurements and discusses multi-laboratory data reduction while accounting for the inevitable correlations between laboratories due to the use of identical reference materials for calibration. Both frequentist and Bayesian methods of uncertainty analysis are discussed.
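
    A minimal sketch of a multi-point normalization fitted as an errors-in-variables (orthogonal distance) regression, so that the uncertainty of the reference materials' assigned values is not ignored. The delta values and uncertainties are hypothetical placeholders, and this generic linear fit stands in for the paper's full models.

      import numpy as np
      from scipy import odr

      assigned = np.array([-55.5, 0.0, 37.2])    # assigned delta values (hypothetical)
      measured = np.array([-54.1, 0.8, 36.0])    # measured deltas (hypothetical)
      u_assigned = np.array([0.1, 0.1, 0.2])     # uncertainties of assigned values
      u_measured = np.array([0.3, 0.3, 0.3])     # measurement uncertainties

      linear = odr.Model(lambda B, x: B[0] * x + B[1])
      data = odr.RealData(measured, assigned, sx=u_measured, sy=u_assigned)
      fit = odr.ODR(data, linear, beta0=[1.0, 0.0]).run()

      slope, intercept = fit.beta
      print("normalization: delta_true ~", slope, "* delta_measured +", intercept)
      print("parameter standard errors:", fit.sd_beta)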

  16. Measurements of stem diameter: implications for individual- and stand-level errors.

    PubMed

    Paul, Keryn I; Larmour, John S; Roxburgh, Stephen H; England, Jacqueline R; Davies, Micah J; Luck, Hamish D

    2017-08-01

    Stem diameter is one of the most common measurements made to assess the growth of woody vegetation, and the commercial and environmental benefits that it provides (e.g. wood or biomass products, carbon sequestration, landscape remediation). Yet inconsistency in its measurement is a continuing source of error in estimates of stand-scale measures such as basal area, biomass, and volume. Here we assessed errors in stem diameter measurement through repeated measurements of individual trees and shrubs of varying size and form (i.e. single- and multi-stemmed) across a range of contrasting stands, from complex mixed-species plantings to commercial single-species plantations. We compared a standard diameter tape with a Stepped Diameter Gauge (SDG) for time efficiency and measurement error. Measurement errors in diameter were slightly (but significantly) influenced by size and form of the tree or shrub, and stem height at which the measurement was made. Compared to standard tape measurement, the mean systematic error with SDG measurement was only -0.17 cm, but varied between -0.10 and -0.52 cm. Similarly, random error was relatively large, with standard deviations (and percentage coefficients of variation) averaging only 0.36 cm (and 3.8%), but varying between 0.14 and 0.61 cm (and 1.9 and 7.1%). However, at the stand scale, sampling errors (i.e. how well individual trees or shrubs selected for measurement of diameter represented the true stand population in terms of the average and distribution of diameter) generally had at least a tenfold greater influence on random errors in basal area estimates than errors in diameter measurements. This supports the use of diameter measurement tools that have high efficiency, such as the SDG. Use of the SDG almost halved the time required for measurements compared to the diameter tape. Based on these findings, recommendations include the following: (i) use of a tape to maximise accuracy when developing allometric models, or when monitoring relatively small changes in permanent sample plots (e.g. National Forest Inventories), noting that care is required in irregular-shaped, large-single-stemmed individuals, and (ii) use of a SDG to maximise efficiency when using inventory methods to assess basal area, and hence biomass or wood volume, at the stand scale (i.e. in studies of impacts of management or site quality) where there are budgetary constraints, noting the importance of sufficient sample sizes to ensure that the population sampled represents the true population.

  17. TU-H-207A-02: Relative Importance of the Various Factors Influencing the Accuracy of Monte Carlo Simulated CT Dose Index

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Marous, L; Muryn, J; Liptak, C

    2016-06-15

    Purpose: Monte Carlo simulation is a frequently used technique for assessing patient dose in CT. The accuracy of a Monte Carlo program is often validated using the standard CT dose index (CTDI) phantoms by comparing simulated and measured CTDI100. To achieve good agreement, many input parameters in the simulation (e.g., energy spectrum and effective beam width) need to be determined. However, not all the parameters have equal importance. Our aim was to assess the relative importance of the various factors that influence the accuracy of simulated CTDI100. Methods: A Monte Carlo program previously validated for a clinical CT system was used to simulate CTDI100. For the standard CTDI phantoms (32 and 16 cm in diameter), CTDI100 values from central and four peripheral locations at 70 and 120 kVp were first simulated using a set of reference input parameter values (treated as the truth). To emulate the situation in which the input parameter values used by the researcher may deviate from the truth, additional simulations were performed in which intentional errors were introduced into the input parameters, the effects of which on simulated CTDI100 were analyzed. Results: At 38.4-mm collimation, errors in effective beam width up to 5.0 mm showed negligible effects on simulated CTDI100 (<1.0%). Likewise, errors in acrylic density of up to 0.01 g/cm³ resulted in small CTDI100 errors (<2.5%). In contrast, errors in spectral HVL produced more significant effects: slight deviations (±0.2 mm Al) produced errors up to 4.4%, whereas more extreme deviations (±1.4 mm Al) produced errors as high as 25.9%. Lastly, ignoring the CT table introduced errors up to 13.9%. Conclusion: Monte Carlo simulated CTDI100 is insensitive to errors in effective beam width and acrylic density. However, it is sensitive to errors in spectral HVL. To obtain accurate results, the CT table should not be ignored. This work was supported by a Faculty Research and Development Award from Cleveland State University.

  18. Dose assessment in contrast enhanced digital mammography using simple phantoms simulating standard model breasts.

    PubMed

    Bouwman, R W; van Engen, R E; Young, K C; Veldkamp, W J H; Dance, D R

    2015-01-07

    Slabs of polymethyl methacrylate (PMMA) or a combination of PMMA and polyethylene (PE) slabs are used to simulate standard model breasts for the evaluation of the average glandular dose (AGD) in digital mammography (DM) and digital breast tomosynthesis (DBT). These phantoms are optimized for the energy spectra used in DM and DBT, which normally have a lower average energy than those used in contrast enhanced digital mammography (CEDM). In this study we have investigated whether these phantoms can be used for the evaluation of AGD with the high-energy x-ray spectra used in CEDM. For this purpose the calculated values of the incident air kerma for dosimetry phantoms and standard model breasts were compared in a zero-degree projection with the use of an anti-scatter grid. It was found that the difference in incident air kerma compared to standard model breasts ranges from -10% to +4% for PMMA slabs and from 6% to 15% for PMMA-PE slabs. The estimated systematic error in the measured AGD for both sets of phantoms was considered to be sufficiently small for the evaluation of AGD in quality control procedures for CEDM. However, the systematic error can be substantial if AGD values from different phantoms are compared.

  19. Selective Weighted Least Squares Method for Fourier Transform Infrared Quantitative Analysis.

    PubMed

    Wang, Xin; Li, Yan; Wei, Haoyun; Chen, Xia

    2017-06-01

    Classical least squares (CLS) regression is a popular multivariate statistical method used frequently for quantitative analysis using Fourier transform infrared (FT-IR) spectrometry. Classical least squares provides the best unbiased estimator for uncorrelated residual errors with zero mean and equal variance. However, the noise in FT-IR spectra, which accounts for a large portion of the residual errors, is heteroscedastic. Thus, if this noise with zero mean dominates the residual errors, the weighted least squares (WLS) regression method described in this paper is a better estimator than CLS. However, if bias errors, such as the residual baseline error, are significant, WLS may perform worse than CLS. In this paper, we compare the effect of noise and bias error on the use of CLS and WLS in quantitative analysis. Results indicated that for wavenumbers with low absorbance, the bias error significantly affected the error, such that the performance of CLS is better than that of WLS. However, for wavenumbers with high absorbance, the noise significantly affected the error, and WLS proves to be better than CLS. Thus, we propose a selective weighted least squares (SWLS) regression that processes data at different wavenumbers using either CLS or WLS based on a selection criterion, i.e., lower or higher than an absorbance threshold. The effects of various factors on the optimal threshold value (OTV) for SWLS have been studied through numerical simulations. These studies showed that: (1) the concentration and the analyte type had minimal effect on the OTV; and (2) the major factor that influences the OTV is the ratio between the bias error and the standard deviation of the noise. The last part of this paper is dedicated to the quantitative analysis of methane gas spectra, and methane/toluene gas mixture spectra, measured using FT-IR spectrometry with CLS, WLS, and SWLS. The standard error of prediction (SEP), bias of prediction (bias), and the residual sum of squares of the errors (RSS) from the three quantitative analyses were compared. In methane gas analysis, SWLS yielded the lowest SEP and RSS among the three methods. In methane/toluene mixture gas analysis, a modification of SWLS has been presented to tackle the bias error from other components. The unmodified SWLS gives the lowest SEP in all cases, but not the lowest bias and RSS. The modified SWLS reduced the bias and showed a lower RSS than CLS, especially for small components.
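
    A hedged sketch of a selective weighting scheme in the spirit of SWLS: channels above an absorbance threshold receive inverse-noise weights (WLS behavior), channels below it receive unit weights (CLS behavior). This is one plausible reading of the selection rule, not the paper's exact estimator, and the simulated spectrum and noise model are placeholders.

      import numpy as np

      rng = np.random.default_rng(2)
      pure = np.abs(rng.normal(0.3, 0.3, size=(200, 1)))  # pure-component spectrum
      noise_sd = 0.01 * (1 + 5 * pure[:, 0])              # heteroscedastic noise model
      mixture = (pure @ np.array([0.7])) + rng.normal(0, noise_sd)

      threshold = 0.4                                     # absorbance threshold (OTV analogue)
      w = np.where(pure[:, 0] > threshold, 1 / noise_sd, 1.0)

      # Weighted least squares: scale rows by their weights, then solve
      Aw, bw = pure * w[:, None], mixture * w
      conc = np.linalg.lstsq(Aw, bw, rcond=None)[0]
      print("estimated concentration:", conc)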

  20. MO-F-CAMPUS-T-03: Data Driven Approaches for Determination of Treatment Table Tolerance Values for Record and Verification Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gupta, N; DiCostanzo, D; Fullenkamp, M

    2015-06-15

    Purpose: To determine appropriate couch tolerance values for modern radiotherapy linac R&V systems with indexed patient setup. Methods: Treatment table tolerance values have been the most difficult to lower, due to many factors including variations in patient positioning and differences in table tops between machines. We recently installed nine linacs with similar tables and started indexing every patient in our clinic. In this study we queried our R&V database and analyzed the deviation of couch position values from the values acquired at verification simulation for all patients treated with indexed positioning. Means and standard deviations of daily setup deviations were computed in the longitudinal, lateral and vertical directions for 343 patient plans. The mean, median and standard error of the standard deviations across the whole patient population, and for some disease sites, were computed to determine tolerance values. Results: The plot of our couch deviation values showed a Gaussian distribution, with some small deviations, corresponding to setup uncertainties on non-imaging days and SRS/SRT/SBRT patients, as well as some large deviations, which were spot checked and found to correspond to indexing errors that were overridden. Setting our tolerance values based on the median + 1 standard error resulted in tolerance values of 1 cm lateral and longitudinal, and 0.5 cm vertical, for all non-SRS/SRT/SBRT cases. Re-analyzing the data, we found that about 92% of the treated fractions would be within these tolerance values (ignoring the mis-indexed patients). We also analyzed data for disease-site-based subpopulations and found no difference in the tolerance values that needed to be used. Conclusion: With automation, auto-setup and other workflow efficiency tools being introduced into the radiotherapy workflow, it is essential to set table tolerances that allow safe treatments but flag setup errors that need to be reassessed before treatment.
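
    A minimal sketch of the median-plus-one-standard-error rule described above, applied to simulated per-plan couch deviations. The deviation data are random placeholders, and the reading of "standard error" as the standard error of the population of per-plan standard deviations is an assumption.

      import numpy as np

      rng = np.random.default_rng(3)
      # Simulated daily couch deviations from the verification-sim position (cm),
      # one array of ~30 fractions per plan, for 343 plans
      plans = [rng.normal(0, rng.uniform(0.2, 0.6), size=30) for _ in range(343)]

      per_plan_sd = np.array([p.std(ddof=1) for p in plans])
      median_sd = np.median(per_plan_sd)
      std_err = per_plan_sd.std(ddof=1) / np.sqrt(per_plan_sd.size)

      tolerance = median_sd + std_err
      print(f"suggested couch tolerance: {tolerance:.2f} cm")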

  1. Automated Hypothesis Tests and Standard Errors for Nonstandard Problems with Description of Computer Package: A Draft.

    ERIC Educational Resources Information Center

    Lord, Frederic M.; Stocking, Martha

    A general computer program is described that will compute asymptotic standard errors and carry out significance tests for an endless variety of (standard and) nonstandard large-sample statistical problems, without requiring the statistician to derive asymptotic standard error formulas. The program assumes that the observations have a multinormal…
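
    A minimal sketch of the underlying idea: given parameter estimates, their covariance matrix, and any differentiable function of the parameters, the asymptotic standard error follows from a numerically differentiated delta method, with no hand-derived formula. The estimates and covariance below are hypothetical; this is not the program the report describes.

      import numpy as np

      def delta_method_se(f, theta, cov, h=1e-6):
          # SE of f(theta) via numerical gradient: sqrt(g' C g)
          theta = np.asarray(theta, dtype=float)
          grad = np.empty_like(theta)
          for i in range(theta.size):
              step = np.zeros_like(theta)
              step[i] = h
              grad[i] = (f(theta + step) - f(theta - step)) / (2 * h)
          return float(np.sqrt(grad @ cov @ grad))

      theta_hat = np.array([1.2, 0.8])                   # estimates (hypothetical)
      cov_hat = np.array([[0.04, 0.01], [0.01, 0.09]])   # covariance (hypothetical)
      ratio = lambda t: t[0] / t[1]                      # nonstandard function of interest
      print("SE of ratio:", delta_method_se(ratio, theta_hat, cov_hat))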

  2. The effect of keyboard key spacing on typing speed, error, usability, and biomechanics, Part 2: Vertical spacing.

    PubMed

    Pereira, Anna; Hsieh, Chih-Ming; Laroche, Charles; Rempel, David

    2014-06-01

    The objective was to evaluate the effects of vertical key spacing on a conventional computer keyboard on typing speed, percentage error, usability, forearm muscle activity, and wrist posture for both females with small fingers and males with large fingers. Part 1 evaluated primarily horizontal key spacing and found that for male typists with large fingers, productivity and usability were similar for spacings of 17, 18, and 19 mm but were reduced for spacings of 16 mm. Few other key spacing studies are available, and the international standards that specify the spacing between keys on a keyboard have been mainly guided by design convention. Experienced female typists (n = 26) with small fingers (middle finger length ≤ 7.71 cm or finger breadth ≤ 1.93 cm) and male typists (n = 26) with large fingers (middle finger length ≥ 8.37 cm or finger breadth ≥ 2.24 cm) typed on five keyboards that differed primarily in vertical key spacing (17 x 18, 17 x 17, 17 x 16, 17 x 15.5, and 18 x 16 mm) while typing speed, error, fatigue, preference, forearm muscle activity, and wrist posture were recorded. Productivity and usability ratings were significantly worse for the keyboard with 15.5 mm vertical spacing compared to the other keyboards for both groups. There were few significant differences in usability ratings between the other keyboards. Reducing vertical key spacing, from 18 to 17 to 16 mm, had no significant effect on productivity or usability. The findings support the design of keyboards with vertical key spacings of 16, 17, or 18 mm. These findings may influence keyboard design and standards.

  3. Low relative error in consumer-grade GPS units make them ideal for measuring small-scale animal movement patterns

    PubMed Central

    Severns, Paul M.

    2015-01-01

    Consumer-grade GPS units are a staple of modern field ecology, but the relatively large error radii reported by manufacturers (up to 10 m) ostensibly preclude their utility in measuring fine-scale movement of small animals such as insects. Here we demonstrate that for data collected at fine spatio-temporal scales, these devices can produce exceptionally accurate data on step-length and movement patterns of small animals. With an understanding of the properties of GPS error and how it arises, it is possible, using a simple field protocol, to use consumer-grade GPS units to collect step-length data for the movement of small animals that introduces a median error as small as 11 cm. These small error rates were measured in controlled observations of real butterfly movement. Similar conclusions were reached using a ground-truth test track prepared with a field tape and compass and subsequently measured 20 times using the same methodology as the butterfly tracking. Median error in the ground-truth track was slightly higher than in the field data, mostly between 20 and 30 cm, but even for the smallest ground-truth step (70 cm), this is still a signal-to-noise ratio of 3:1, and for steps of 3 m or more, the ratio is greater than 10:1. Such small errors relative to the movements being measured make these inexpensive units useful for measuring insect and other small animal movements on small to intermediate scales with budgets orders of magnitude lower than the survey-grade units used in past studies. As an additional advantage, these units are simpler to operate, and insect or other small animal trackways can be collected more quickly than with either survey-grade units or more traditional ruler/grid approaches. PMID:26312190

  4. Low relative error in consumer-grade GPS units make them ideal for measuring small-scale animal movement patterns.

    PubMed

    Breed, Greg A; Severns, Paul M

    2015-01-01

    Consumer-grade GPS units are a staple of modern field ecology, but the relatively large error radii reported by manufacturers (up to 10 m) ostensibly preclude their utility in measuring fine-scale movement of small animals such as insects. Here we demonstrate that for data collected at fine spatio-temporal scales, these devices can produce exceptionally accurate data on step-length and movement patterns of small animals. With an understanding of the properties of GPS error and how it arises, it is possible, using a simple field protocol, to use consumer-grade GPS units to collect step-length data for the movement of small animals that introduces a median error as small as 11 cm. These small error rates were measured in controlled observations of real butterfly movement. Similar conclusions were reached using a ground-truth test track prepared with a field tape and compass and subsequently measured 20 times using the same methodology as the butterfly tracking. Median error in the ground-truth track was slightly higher than in the field data, mostly between 20 and 30 cm, but even for the smallest ground-truth step (70 cm), this is still a signal-to-noise ratio of 3:1, and for steps of 3 m or more, the ratio is greater than 10:1. Such small errors relative to the movements being measured make these inexpensive units useful for measuring insect and other small animal movements on small to intermediate scales with budgets orders of magnitude lower than the survey-grade units used in past studies. As an additional advantage, these units are simpler to operate, and insect or other small animal trackways can be collected more quickly than with either survey-grade units or more traditional ruler/grid approaches.

  5. Computation of Standard Errors

    PubMed Central

    Dowd, Bryan E; Greene, William H; Norton, Edward C

    2014-01-01

    Objectives We discuss the problem of computing the standard errors of functions involving estimated parameters and provide the relevant computer code for three different computational approaches using two popular computer packages. Study Design We show how to compute the standard errors of several functions of interest: the predicted value of the dependent variable for a particular subject, and the effect of a change in an explanatory variable on the predicted value of the dependent variable for an individual subject and average effect for a sample of subjects. Empirical Application Using a publicly available dataset, we explain three different methods of computing standard errors: the delta method, Krinsky–Robb, and bootstrapping. We provide computer code for Stata 12 and LIMDEP 10/NLOGIT 5. Conclusions In most applications, choice of the computational method for standard errors of functions of estimated parameters is a matter of convenience. However, when computing standard errors of the sample average of functions that involve both estimated parameters and nonstochastic explanatory variables, it is important to consider the sources of variation in the function's values. PMID:24800304

  6. In Vivo Characterization of a Wireless Telemetry Module for a Capsule Endoscopy System Utilizing a Conformal Antenna.

    PubMed

    Faerber, Julia; Cummins, Gerard; Pavuluri, Sumanth Kumar; Record, Paul; Rodriguez, Adrian R Ayastuy; Lay, Holly S; McPhillips, Rachael; Cox, Benjamin F; Connor, Ciaran; Gregson, Rachael; Clutton, Richard Eddie; Khan, Sadeque Reza; Cochran, Sandy; Desmulliez, Marc P Y

    2018-02-01

    This paper describes the design, fabrication, packaging, and performance characterization of a conformal helix antenna created on the outside of a capsule endoscope designed to operate at a carrier frequency of 433 MHz within human tissue. Wireless data transfer was established between the integrated capsule system and an external receiver. The telemetry system was tested within a tissue phantom and in vivo porcine models. Two different types of transmission modes were tested. The first mode, replicating normal operating conditions, used data packets at a steady power level of 0 dBm while the capsule was being withdrawn at a steady rate from the small intestine. The second mode, replicating the worst-case clinical scenario of capsule retention within the small bowel, sent data with stepwise increasing power levels of -10, 0, 6, and 10 dBm, with the capsule fixed in position. The temperature of the tissue surrounding the external antenna was monitored at all times using thermistors embedded within the capsule shell to observe potential safety issues. For both modes of operation, the recorded data showed low-error transmission (a packet error rate of 10⁻³ and a bit error rate of 10⁻⁵) and no temperature increase of the tissue according to IEEE standards.
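
    A short aside on how packet and bit error rates relate: for independent bit errors, PER = 1 - (1 - BER)^n for an n-bit packet. The reported figures (BER near 10⁻⁵, PER near 10⁻³) are roughly consistent with packets on the order of a hundred bits, though the actual packet length here is an assumption.

      def per_from_ber(ber: float, bits_per_packet: int) -> float:
          # Probability that at least one bit in the packet is in error
          return 1 - (1 - ber) ** bits_per_packet

      print(per_from_ber(1e-5, 100))   # ~1e-3 for a hypothetical 100-bit packet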

  7. Step-by-step magic state encoding for efficient fault-tolerant quantum computation

    PubMed Central

    Goto, Hayato

    2014-01-01

    Quantum error correction allows one to make quantum computers fault-tolerant against unavoidable errors due to decoherence and imperfect physical gate operations. However, the fault-tolerant quantum computation requires impractically large computational resources for useful applications. This is a current major obstacle to the realization of a quantum computer. In particular, magic state distillation, which is a standard approach to universality, consumes the most resources in fault-tolerant quantum computation. For the resource problem, here we propose step-by-step magic state encoding for concatenated quantum codes, where magic states are encoded step by step from the physical level to the logical one. To manage errors during the encoding, we carefully use error detection. Since the sizes of intermediate codes are small, it is expected that the resource overheads will become lower than previous approaches based on the distillation at the logical level. Our simulation results suggest that the resource requirements for a logical magic state will become comparable to those for a single logical controlled-NOT gate. Thus, the present method opens a new possibility for efficient fault-tolerant quantum computation. PMID:25511387

  8. Step-by-step magic state encoding for efficient fault-tolerant quantum computation.

    PubMed

    Goto, Hayato

    2014-12-16

    Quantum error correction allows one to make quantum computers fault-tolerant against unavoidable errors due to decoherence and imperfect physical gate operations. However, the fault-tolerant quantum computation requires impractically large computational resources for useful applications. This is a current major obstacle to the realization of a quantum computer. In particular, magic state distillation, which is a standard approach to universality, consumes the most resources in fault-tolerant quantum computation. For the resource problem, here we propose step-by-step magic state encoding for concatenated quantum codes, where magic states are encoded step by step from the physical level to the logical one. To manage errors during the encoding, we carefully use error detection. Since the sizes of intermediate codes are small, it is expected that the resource overheads will become lower than previous approaches based on the distillation at the logical level. Our simulation results suggest that the resource requirements for a logical magic state will become comparable to those for a single logical controlled-NOT gate. Thus, the present method opens a new possibility for efficient fault-tolerant quantum computation.

  9. Correlation and registration of ERTS multispectral imagery. [by a digital processing technique

    NASA Technical Reports Server (NTRS)

    Bonrud, L. O.; Henrikson, P. J.

    1974-01-01

    Examples of automatic digital processing demonstrate the feasibility of registering one ERTS multispectral scanner (MSS) image with another obtained on a subsequent orbit, and automatic matching, correlation, and registration of MSS imagery with aerial photography (multisensor correlation) is demonstrated. Excellent correlation was obtained with patch sizes exceeding 16 pixels square. Qualities which lead to effective control point selection are distinctive features, good contrast, and constant feature characteristics. Results of the study indicate that more than 300 degrees of freedom are required to register two standard ERTS-1 MSS frames covering 100 by 100 nautical miles to an accuracy of 0.6 pixel mean radial displacement error. An automatic strip processing technique demonstrates 600 to 1200 degrees of freedom over a quarter frame of ERTS imagery. Registration accuracies in the range of 0.3 pixel to 0.5 pixel mean radial error were confirmed by independent error analysis. Accuracies in the range of 0.5 pixel to 1.4 pixel mean radial error were demonstrated by semi-automatic registration over small geographic areas.
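
    A hedged sketch of the core matching step in such registration: normalized cross-correlation of a small patch against every position in a search window, here with a 16 x 16 patch as in the study. The synthetic image stands in for MSS data, and real registration adds geometric modeling on top of this matching.

      import numpy as np

      def ncc(patch, window):
          # Normalized cross-correlation score of two equal-size arrays
          p = patch - patch.mean()
          w = window - window.mean()
          return float((p * w).sum() / np.sqrt((p * p).sum() * (w * w).sum()))

      rng = np.random.default_rng(4)
      image = rng.random((64, 64))
      patch = image[20:36, 24:40]                 # 16x16 patch cut from the image

      best_score, best_xy = -1.0, None
      for i in range(64 - 16):
          for j in range(64 - 16):
              s = ncc(patch, image[i:i+16, j:j+16])
              if s > best_score:
                  best_score, best_xy = s, (i, j)
      print(best_xy, best_score)                  # recovers (20, 24) with score 1.0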

  10. Frame error rate for single-hop and dual-hop transmissions in 802.15.4 LoWPANs

    NASA Astrophysics Data System (ADS)

    Biswas, Sankalita; Ghosh, Biswajit; Chandra, Aniruddha; Dhar Roy, Sanjay

    2017-08-01

    IEEE 802.15.4 is a popular standard for personal area networks used in different low-rate short-range applications. This paper examines the error rate performance of 802.15.4 in a fading wireless channel. An analytical model is formulated for evaluating the frame error rate (FER); first, for direct single-hop transmission between two sensor nodes, and second, for dual-hop (DH) transmission using an in-between relay node. In the modeling, the transceiver design parameters are chosen according to the specifications set for both the 2.45 GHz and 868/915 MHz bands. We have also developed a simulation test bed for evaluating the FER. Some results showed expected trends, such as higher FER for larger payloads. Other observations are less intuitive. It is interesting to note that the error rates are significantly higher in the DH case, demanding a signal-to-noise ratio (SNR) penalty of about 7 dB. Also, the FER shoots from zero to one within a very small range of SNR.
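
    A first-order illustration of why the dual-hop case fares worse: if a frame must survive two independent hops (decode-and-forward relaying), the end-to-end frame error rate compounds the per-hop rates. This simple product model is an assumption for illustration, not the paper's full fading analysis.

      def dual_hop_fer(fer_hop1: float, fer_hop2: float) -> float:
          # Frame succeeds only if it survives both hops independently
          return 1 - (1 - fer_hop1) * (1 - fer_hop2)

      print(dual_hop_fer(0.01, 0.01))   # ~0.02 for two equally noisy hops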

  11. The Infinitesimal Jackknife with Exploratory Factor Analysis

    ERIC Educational Resources Information Center

    Zhang, Guangjian; Preacher, Kristopher J.; Jennrich, Robert I.

    2012-01-01

    The infinitesimal jackknife, a nonparametric method for estimating standard errors, has been used to obtain standard error estimates in covariance structure analysis. In this article, we adapt it for obtaining standard errors for rotated factor loadings and factor correlations in exploratory factor analysis with sample correlation matrices. Both…

  12. A Comparison of Three Methods for Computing Scale Score Conditional Standard Errors of Measurement. ACT Research Report Series, 2013 (7)

    ERIC Educational Resources Information Center

    Woodruff, David; Traynor, Anne; Cui, Zhongmin; Fang, Yu

    2013-01-01

    Professional standards for educational testing recommend that both the overall standard error of measurement and the conditional standard error of measurement (CSEM) be computed on the score scale used to report scores to examinees. Several methods have been developed to compute scale score CSEMs. This paper compares three methods, based on…

  13. Up-Regulation of Autophagy in Small Intestine Paneth Cells in Response to Total-Body gamma-Irradiation

    DTIC Science & Technology

    2009-01-01

    standard error of the mean (SEM). Analysis of variance procedures with Tukey post hoc correction examined the existence and nature of temporal trends...

  14. Rational approximations of f(R) cosmography through Padé polynomials

    NASA Astrophysics Data System (ADS)

    Capozziello, Salvatore; D'Agostino, Rocco; Luongo, Orlando

    2018-05-01

    We consider high-redshift f(R) cosmography adopting the technique of polynomial reconstruction. In lieu of considering Taylor treatments, which turn out to be non-predictive as soon as z > 1, we take into account the Padé rational approximations, which consist in performing expansions that converge in high-redshift domains. Particularly, our strategy is to reconstruct f(z) functions first, assuming the Ricci scalar to be invertible with respect to the redshift z. Having the so-obtained f(z) functions, we invert them and easily obtain the corresponding f(R) terms. We minimize error propagation, assuming no errors upon redshift data. The treatment we follow naturally leads to evaluating curvature pressure, density and equation of state, characterizing the universe evolution at redshifts much higher than standard cosmographic approaches. We therefore match these outcomes with small-redshift constraints obtained by framing the f(R) cosmology through Taylor series around z ≃ 0. This gives rise to a calibration procedure with small redshift that enables the definition of polynomial approximations up to z ≃ 10. Last but not least, we show discrepancies with the standard cosmological model which go towards an extension of the ΛCDM paradigm, indicating an effective dark energy term evolving in time. We finally describe the evolution of our effective dark energy term by means of basic techniques of data mining.
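
    A minimal sketch of the basic operation behind such reconstructions: building a Padé rational approximant from Taylor coefficients, which typically remains accurate where the truncated Taylor series degrades. The coefficients below are for exp(z), purely as a check of the machinery, not a cosmographic series.

      from math import exp, factorial
      from scipy.interpolate import pade

      taylor = [1 / factorial(k) for k in range(6)]   # exp(z) up to z^5
      p, q = pade(taylor, 2)                          # [3/2] Pade approximant

      z = 3.0
      print("Pade [3/2]:", p(z) / q(z))               # close to the true value
      print("exact:     ", exp(z))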

  15. Statistical models for estimating daily streamflow in Michigan

    USGS Publications Warehouse

    Holtschlag, D.J.; Salehi, Habib

    1992-01-01

    Statistical models for estimating daily streamflow were analyzed for 25 pairs of streamflow-gaging stations in Michigan. Stations were paired by randomly choosing a station operated in 1989 at which 10 or more years of continuous flow data had been collected and at which flow is virtually unregulated; a nearby station was chosen where flow characteristics are similar. Streamflow data from the 25 randomly selected stations were used as the response variables; streamflow data at the nearby stations were used to generate a set of explanatory variables. Ordinary least-squares regression (OLSR) equations, autoregressive integrated moving-average (ARIMA) equations, and transfer function-noise (TFN) equations were developed to estimate the log transform of flow for the 25 randomly selected stations. The precision of each type of equation was evaluated on the basis of the standard deviation of the estimation errors. OLSR equations produce one set of estimation errors; ARIMA and TFN models each produce l sets of estimation errors corresponding to the forecast lead. The lead-l forecast is the estimate of flow l days ahead of the most recent streamflow used as a response variable in the estimation. In this analysis, the standard deviation of lead-l ARIMA and TFN forecast errors was generally lower than the standard deviation of OLSR errors for l < 2 days and l < 9 days, respectively. Composite estimates were computed as a weighted average of forecasts based on TFN equations and backcasts (forecasts of the reverse-ordered series) based on ARIMA equations. The standard deviation of composite errors varied throughout the length of the estimation interval and generally was at a maximum near the center of the interval. For comparison with OLSR errors, the mean standard deviation of composite errors was computed for intervals of length 1 to 40 days. The mean standard deviation of length-l composite errors was generally less than the standard deviation of the OLSR errors for l < 32 days. In addition, the composite estimates ensure a gradual transition between periods of estimated and measured flows. Model performance among stations of differing model error magnitudes was compared by computing ratios of the mean standard deviation of the length-l composite errors to the standard deviation of OLSR errors. The mean error ratio for the set of 25 selected stations was less than 1 for intervals l < 32 days. Considering the frequency characteristics of the length of intervals of estimated record in Michigan, the effective mean error ratio for intervals < 30 days was 0.52. Thus, for intervals of estimation of 1 month or less, the error of the composite estimate is substantially lower than the error of the OLSR estimate.
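
    A minimal sketch of the compositing step, under the simplifying assumption of linear weights; the report derives its weights from the error properties of the TFN forecasts and ARIMA backcasts.

    ```python
    import numpy as np

    def composite_estimate(forecast: np.ndarray, backcast: np.ndarray) -> np.ndarray:
        """Blend lead-l forecasts with backcasts across an estimation gap so the
        estimate leans on whichever measured record is nearer."""
        n = len(forecast)
        w = np.linspace(1.0, 0.0, n)  # assumed linear weights, 1 at the gap's start
        return w * forecast + (1.0 - w) * backcast
    ```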

  16. An equivalent circuit for small atrial trabeculae of frog.

    PubMed Central

    Jakobsson, E; Barr, L; Connor, J A

    1975-01-01

    An equivalent electrical circuit has been constructed for small atrial trabeculae of frog in a double sucrose gap voltage clamp apparatus. The basic strategy in constructing the circuit was to derive the distribution of membrane capacitance and extracellular resistance from the preparation's response to small voltage displacements near the resting condition, when the membrane conductance is presumably quite low. Then standard Hodgkin-Huxley channels were placed in parallel with the capacitance and the results of voltage clamp experiments were simulated. The results suggest that the membranes of the preparation cannot in fact be clamped near the control voltage, nor can the ionic currents be measured directly with reasonable accuracy by axon standards. It may or may not be a realizable goal in the future to define the preparation's electrical behavior well enough to permit the ultimate quantitative description of the membrane's specific ion conductances. The results of this paper suggest that if this goal is achieved using the double sucrose gap voltage clamp, it will be by a detailed quantitative accounting for substantial irreducible errors in voltage control, rather than by experimental achievement of good voltage control. PMID:1203441

  17. Nonparametric Estimation of Standard Errors in Covariance Analysis Using the Infinitesimal Jackknife

    ERIC Educational Resources Information Center

    Jennrich, Robert I.

    2008-01-01

    The infinitesimal jackknife provides a simple general method for estimating standard errors in covariance structure analysis. Beyond its simplicity and generality what makes the infinitesimal jackknife method attractive is that essentially no assumptions are required to produce consistent standard error estimates, not even the requirement that the…

  18. Factor Rotation and Standard Errors in Exploratory Factor Analysis

    ERIC Educational Resources Information Center

    Zhang, Guangjian; Preacher, Kristopher J.

    2015-01-01

    In this article, we report a surprising phenomenon: Oblique CF-varimax and oblique CF-quartimax rotation produced similar point estimates for rotated factor loadings and factor correlations but different standard error estimates in an empirical example. Influences of factor rotation on asymptotic standard errors are investigated using a numerical…

  19. Calibration-free assays on standard real-time PCR devices

    PubMed Central

    Debski, Pawel R.; Gewartowski, Kamil; Bajer, Seweryn; Garstecki, Piotr

    2017-01-01

    Quantitative Polymerase Chain Reaction (qPCR) is one of the central techniques in molecular biology and an important tool in medical diagnostics. While being the gold standard, qPCR techniques depend on reference measurements and are susceptible to large errors caused by even small changes of reaction efficiency or conditions, errors that are typically not flagged by decreased precision. Digital PCR (dPCR) technologies should alleviate the need for calibration by providing absolute quantitation using binary (yes/no) signals from partitions, provided that the basic assumption of amplifying a single target molecule into a positive signal is met. Still, access to digital techniques is limited because they require new instruments. We show an analog-digital method that can be executed on standard (real-time) qPCR devices. It benefits from real-time readout, providing calibration-free assessment. The method combines the advantages of qPCR and dPCR and bypasses their drawbacks. The protocols provide for small, simplified partitioning that can be fitted within a standard well-plate format. We demonstrate that with the use of synergistic assay design, standard qPCR devices are capable of absolute quantitation when normal qPCR protocols fail to provide accurate estimates. We list practical recipes for how to design assays for required parameters and how to analyze signals to estimate concentration. PMID:28327545
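
    For context, the standard digital-quantitation arithmetic that calibration-free approaches build on is textbook dPCR Poisson statistics (not the paper's specific analog-digital protocol); the partition counts below are hypothetical.

    ```python
    import math

    def mean_copies_per_partition(n_positive: int, n_total: int) -> float:
        """Poisson correction: a positive partition may hold several targets."""
        p = n_positive / n_total
        return -math.log(1.0 - p)  # lambda = -ln(1 - p)

    lam = mean_copies_per_partition(412, 765)  # hypothetical counts
    print(lam * 765)  # estimated number of target molecules across all partitions
    ```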

  20. Calibration-free assays on standard real-time PCR devices

    NASA Astrophysics Data System (ADS)

    Debski, Pawel R.; Gewartowski, Kamil; Bajer, Seweryn; Garstecki, Piotr

    2017-03-01

    Quantitative Polymerase Chain Reaction (qPCR) is one of the central techniques in molecular biology and an important tool in medical diagnostics. While being the gold standard, qPCR techniques depend on reference measurements and are susceptible to large errors caused by even small changes of reaction efficiency or conditions, errors that are typically not flagged by decreased precision. Digital PCR (dPCR) technologies should alleviate the need for calibration by providing absolute quantitation using binary (yes/no) signals from partitions, provided that the basic assumption of amplifying a single target molecule into a positive signal is met. Still, access to digital techniques is limited because they require new instruments. We show an analog-digital method that can be executed on standard (real-time) qPCR devices. It benefits from real-time readout, providing calibration-free assessment. The method combines the advantages of qPCR and dPCR and bypasses their drawbacks. The protocols provide for small, simplified partitioning that can be fitted within a standard well-plate format. We demonstrate that with the use of synergistic assay design, standard qPCR devices are capable of absolute quantitation when normal qPCR protocols fail to provide accurate estimates. We list practical recipes for how to design assays for required parameters and how to analyze signals to estimate concentration.

  1. Research on Standard Errors of Equating Differences. Research Report. ETS RR-10-25

    ERIC Educational Resources Information Center

    Moses, Tim; Zhang, Wenmin

    2010-01-01

    In this paper, the "standard error of equating difference" (SEED) is described in terms of originally proposed kernel equating functions (von Davier, Holland, & Thayer, 2004) and extended to incorporate traditional linear and equipercentile functions. These derivations expand on prior developments of SEEDs and standard errors of equating and…

  2. Comparison between Different Methods for Biomechanical Assessment of Ex Vivo Fracture Callus Stiffness in Small Animal Bone Healing Studies

    PubMed Central

    Steiner, Malte; Volkheimer, David; Meyers, Nicholaus; Wehner, Tim; Wilke, Hans-Joachim; Claes, Lutz; Ignatius, Anita

    2015-01-01

    For ex vivo measurements of fracture callus stiffness in small animals, different test methods, such as torsion or bending tests, are established. Each method provides advantages and disadvantages, and it is still debated which of these is most sensitive to experimental conditions (i.e., specimen alignment, directional dependency, asymmetric behavior). The aim of this study was to experimentally compare six different testing methods regarding their robustness against experimental errors. Therefore, standardized specimens were created by selective laser sintering (SLS), mimicking the size, directional behavior, and embedding variations of respective rat long bone specimens. For the latter, five different geometries were created which show shifted or tilted specimen alignments. The mechanical tests included three-point bending, four-point bending, cantilever bending, axial compression, constrained torsion, and unconstrained torsion. All three bending tests showed the same principal behavior. They were highly dependent on the rotational direction of the maximum fracture callus expansion relative to the loading direction (creating experimental errors of more than 60%); however, small angular deviations (<15°) were negligible. Differences in the experimental results between the bending tests originate in their respective locations of maximal bending moment induction. Compared to four-point bending, three-point bending is easier to apply on small rat and mouse bones under realistic testing conditions and yields robust measurements, provided there is low variation of the callus shape among the tested specimens. Axial compressive testing was highly sensitive to embedding variations and therefore cannot be recommended. Although it is experimentally difficult to realize, unconstrained torsion testing was found to be the most robust method, since it was independent of both rotational alignment and embedding uncertainties. Constrained torsional testing showed small errors (up to 16.8%, compared to corresponding alignment under unconstrained torsion) due to a parallel offset between the specimens' axis of gravity and the torsional axis of rotation. PMID:25781027

  3. Simplified Approach Charts Improve Data Retrieval Performance

    PubMed Central

    Stewart, Michael; Laraway, Sean; Jordan, Kevin; Feary, Michael S.

    2016-01-01

    The effectiveness of different instrument approach charts to deliver minimum visibility and altitude information during airport equipment outages was investigated. Eighteen pilots flew simulated instrument approaches in three conditions: (a) normal operations using a standard approach chart (standard-normal), (b) equipment outage conditions using a standard approach chart (standard-outage), and (c) equipment outage conditions using a prototype decluttered approach chart (prototype-outage). Errors and retrieval times in identifying minimum altitudes and visibilities were measured. The standard-outage condition produced significantly more errors and longer retrieval times versus the standard-normal condition. The prototype-outage condition had significantly fewer errors and shorter retrieval times than did the standard-outage condition. The prototype-outage condition produced significantly fewer errors but similar retrieval times when compared with the standard-normal condition. Thus, changing the presentation of minima may reduce risk and increase safety in instrument approaches, specifically with airport equipment outages. PMID:28491009

  4. Subspace Arrangement Codes and Cryptosystems

    DTIC Science & Technology

    2011-05-09

    ...theory is finding codes that have a small number of digits (length) with a high number of codewords (dimension), as well as good error-correction properties

  5. Cost effectiveness of a pharmacist-led information technology intervention for reducing rates of clinically important errors in medicines management in general practices (PINCER).

    PubMed

    Elliott, Rachel A; Putman, Koen D; Franklin, Matthew; Annemans, Lieven; Verhaeghe, Nick; Eden, Martin; Hayre, Jasdeep; Rodgers, Sarah; Sheikh, Aziz; Avery, Anthony J

    2014-06-01

    We recently showed that a pharmacist-led information technology-based intervention (PINCER) was significantly more effective in reducing medication errors in general practices than providing simple feedback on errors, with cost per error avoided at £79 (US$131). We aimed to estimate cost effectiveness of the PINCER intervention by combining effectiveness in error reduction and intervention costs with the effect of the individual errors on patient outcomes and healthcare costs, to estimate the effect on costs and QALYs. We developed Markov models for each of six medication errors targeted by PINCER. Clinical event probability, treatment pathway, resource use and costs were extracted from literature and costing tariffs. A composite probabilistic model combined patient-level error models with practice-level error rates and intervention costs from the trial. Cost per extra QALY and cost-effectiveness acceptability curves were generated from the perspective of NHS England, with a 5-year time horizon. The PINCER intervention generated £2,679 less cost and 0.81 more QALYs per practice [incremental cost-effectiveness ratio (ICER): -£3,037 per QALY] in the deterministic analysis. In the probabilistic analysis, PINCER generated 0.001 extra QALYs per practice compared with simple feedback, at £4.20 less per practice. Despite this extremely small set of differences in costs and outcomes, PINCER dominated simple feedback with a mean ICER of -£3,936 (standard error £2,970). At a ceiling 'willingness-to-pay' of £20,000/QALY, PINCER reaches 59 % probability of being cost effective. PINCER produced marginal health gain at slightly reduced overall cost. Results are uncertain due to the poor quality of data to inform the effect of avoiding errors.

  6. Development of a compact, fiber-coupled, six degree-of-freedom measurement system for precision linear stage metrology

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yu, Xiangzhi, E-mail: xiangzhi.yu@rochester.edu; Gillmer, Steven R.; Woody, Shane C.

    2016-06-15

    A compact, fiber-coupled, six degree-of-freedom measurement system which enables fast, accurate calibration and error mapping of precision linear stages is presented. The novel design has the advantages of simplicity, compactness, and relatively low cost. The proposed sensor can simultaneously measure displacement, two straightness errors, and changes in pitch, yaw, and roll using a single optical beam traveling between the measurement system and a small target. The optical configuration of the system and the working principle for all degrees of freedom are presented along with the influence and compensation of crosstalk motions in roll and straightness measurements. Several comparison experiments are conducted to investigate the feasibility and performance of the proposed system in each degree of freedom independently. Comparison experiments against a commercial interferometer demonstrate error standard deviations of 0.33 μm in straightness, 0.14 μrad in pitch, 0.44 μrad in yaw, and 45.8 μrad in roll.

  7. Turboprop+: enhanced Turboprop diffusion-weighted imaging with a new phase correction.

    PubMed

    Lee, Chu-Yu; Li, Zhiqiang; Pipe, James G; Debbins, Josef P

    2013-08-01

    Faster periodically rotated overlapping parallel lines with enhanced reconstruction (PROPELLER) diffusion-weighted imaging acquisitions, such as Turboprop and X-prop, remain subject to phase errors inherent to a gradient echo readout, which ultimately limits the applied turbo factor (number of gradient echoes between each pair of radiofrequency refocusing pulses) and, thus, scan time reductions. This study introduces a new phase correction to Turboprop, called Turboprop+. This technique employs calibration blades, which generate 2-D phase error maps and are rotated in accordance with the data blades, to correct phase errors arising from off-resonance and system imperfections. The results demonstrate that with a small increase in scan time for collecting calibration blades, Turboprop+ had a superior immunity to the off-resonance-related artifacts when compared to standard Turboprop and recently proposed X-prop with the high turbo factor (turbo factor = 7). Thus, low specific absorption rate and short scan time can be achieved in Turboprop+ using a high turbo factor, whereas off-resonance related artifacts are minimized. © 2012 Wiley Periodicals, Inc.

  8. Ellipsoidal geometry in asteroid thermal models - The standard radiometric model

    NASA Technical Reports Server (NTRS)

    Brown, R. H.

    1985-01-01

    The major consequences of ellipsoidal geometry in an otherwise standard radiometric model for asteroids are explored. It is shown that for small deviations from spherical shape, a spherical model of the same projected area gives a reasonable approximation to the thermal flux from an ellipsoidal body. It is suggested that large departures from spherical shape require that some correction be made for geometry. Systematic differences in the radii of asteroids derived radiometrically at 10 and 20 microns may result partly from nonspherical geometry. It is also suggested that extrapolations of the rotational variation of thermal flux from a nonspherical body based solely on the change in cross-sectional area are in error.

  9. Assessing the Progress of Trapped-Ion Processors Towards Fault-Tolerant Quantum Computation

    NASA Astrophysics Data System (ADS)

    Bermudez, A.; Xu, X.; Nigmatullin, R.; O'Gorman, J.; Negnevitsky, V.; Schindler, P.; Monz, T.; Poschinger, U. G.; Hempel, C.; Home, J.; Schmidt-Kaler, F.; Biercuk, M.; Blatt, R.; Benjamin, S.; Müller, M.

    2017-10-01

    A quantitative assessment of the progress of small prototype quantum processors towards fault-tolerant quantum computation is a problem of current interest in experimental and theoretical quantum information science. We introduce a necessary and fair criterion for quantum error correction (QEC), which must be achieved in the development of these quantum processors before their sizes are sufficiently big to consider the well-known QEC threshold. We apply this criterion to benchmark the ongoing effort in implementing QEC with topological color codes using trapped-ion quantum processors and, more importantly, to guide the future hardware developments that will be required in order to demonstrate beneficial QEC with small topological quantum codes. In doing so, we present a thorough description of a realistic trapped-ion toolbox for QEC and a physically motivated error model that goes beyond standard simplifications in the QEC literature. We focus on laser-based quantum gates realized in two-species trapped-ion crystals in high-optical aperture segmented traps. Our large-scale numerical analysis shows that, with the foreseen technological improvements described here, this platform is a very promising candidate for fault-tolerant quantum computation.

  10. Standard Errors of Equating for the Percentile Rank-Based Equipercentile Equating with Log-Linear Presmoothing

    ERIC Educational Resources Information Center

    Wang, Tianyou

    2009-01-01

    Holland and colleagues derived a formula for analytical standard error of equating using the delta-method for the kernel equating method. Extending their derivation, this article derives an analytical standard error of equating procedure for the conventional percentile rank-based equipercentile equating with log-linear smoothing. This procedure is…

  11. Empirical best linear unbiased prediction method for small areas with restricted maximum likelihood and bootstrap procedure to estimate the average of household expenditure per capita in Banjar Regency

    NASA Astrophysics Data System (ADS)

    Aminah, Agustin Siti; Pawitan, Gandhi; Tantular, Bertho

    2017-03-01

    So far, most of the data published by Statistics Indonesia (BPS), the provider of national statistics, are limited to the district level. Sample sizes at smaller area levels are insufficient, so direct estimation of poverty indicators produces high standard errors, and analysis based on such estimates is unreliable. To solve this problem, an estimation method that provides better accuracy by combining survey data with auxiliary data is required. One method often used for this purpose is Small Area Estimation (SAE). Among the many SAE methods, one is Empirical Best Linear Unbiased Prediction (EBLUP). The EBLUP method with the maximum likelihood (ML) procedure does not account for the loss of degrees of freedom due to estimating β with β̂. This drawback motivates the use of the restricted maximum likelihood (REML) procedure. This paper proposes EBLUP with the REML procedure for estimating poverty indicators by modeling the average household expenditure per capita, and implements a bootstrap procedure to calculate the MSE (Mean Square Error) in order to compare the accuracy of the EBLUP method with that of the direct estimation method. Results show that the EBLUP method reduced the MSE in small area estimation.
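
    The sketch below implements an area-level EBLUP under the Fay-Herriot model with a REML variance estimate, as one plausible reading of the method described above; the paper's exact model specification may differ.

    ```python
    import numpy as np
    from scipy.optimize import minimize_scalar

    def eblup_reml(y: np.ndarray, X: np.ndarray, D: np.ndarray):
        """y: direct survey estimates; X: area-level auxiliary data; D: known
        sampling variances. Returns EBLUP estimates and the REML variance."""
        def neg_restricted_loglik(log_sv2: float) -> float:
            V = np.exp(log_sv2) + D              # diagonal of V = sv2*I + diag(D)
            XtVi = X.T / V
            XtViX = XtVi @ X
            beta = np.linalg.solve(XtViX, XtVi @ y)
            r = y - X @ beta
            return 0.5 * (np.sum(np.log(V))
                          + np.linalg.slogdet(XtViX)[1]
                          + np.sum(r * r / V))
        res = minimize_scalar(neg_restricted_loglik, bounds=(-10.0, 10.0),
                              method="bounded")
        sv2 = np.exp(res.x)
        V = sv2 + D
        XtVi = X.T / V
        beta = np.linalg.solve(XtVi @ X, XtVi @ y)
        gamma = sv2 / V                          # shrinkage toward the regression fit
        return gamma * y + (1.0 - gamma) * (X @ beta), sv2
    ```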

  12. Uncertainty estimates in broadband seismometer sensitivities using microseisms

    USGS Publications Warehouse

    Ringler, Adam T.; Storm, Tyler L.; Gee, Lind S.; Hutt, Charles R.; Wilson, David C.

    2015-01-01

    The midband sensitivity of a seismic instrument is one of the fundamental parameters used in published station metadata. Any errors in this value can compromise amplitude estimates in otherwise high-quality data. To estimate an upper bound on the uncertainty of the midband sensitivity for modern broadband instruments, we compare daily microseism (4- to 8-s period) amplitude ratios between the vertical components of colocated broadband sensors across the IRIS/USGS (network code IU) seismic network. We find that the mean of the 145,972 daily ratios used between 2002 and 2013 is 0.9895 with a standard deviation of 0.0231. This suggests that the ratio between instruments shows a small bias and considerable scatter. We also find that these ratios follow a standard normal distribution (R² = 0.95442), which suggests that the midband sensitivity of an instrument has an error of no greater than ±6% with a 99% confidence interval. This gives an upper bound on the precision to which we know the sensitivity of a fielded instrument.
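
    The upper-bound arithmetic reduces to a normal-quantile calculation; the snippet below reproduces it on synthetic ratios drawn with the reported mean and standard deviation (the real analysis of course uses the measured daily ratios).

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    ratios = rng.normal(0.9895, 0.0231, 145_972)  # stand-in for the daily ratios
    sd = ratios.std(ddof=1)
    print(2.576 * sd)  # 99% normal half-width, roughly 0.06, i.e. the +/-6% bound
    ```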

  13. A Fully Sensorized Cooperative Robotic System for Surgical Interventions

    PubMed Central

    Tovar-Arriaga, Saúl; Vargas, José Emilio; Ramos, Juan M.; Aceves, Marco A.; Gorrostieta, Efren; Kalender, Willi A.

    2012-01-01

    In this research a fully sensorized cooperative robot system for manipulation of needles is presented. The setup consists of a DLR/KUKA Light Weight Robot III especially designed for safe human/robot interaction, an FD-CT robot-driven angiographic C-arm system, and a navigation camera. New control strategies for robot manipulation in the clinical environment are also introduced. A method for fast calibration of the involved components and preliminary accuracy tests of the whole error chain are presented. Calibration of the robot with the navigation system has a residual error of 0.81 mm (rms) with a standard deviation of ±0.41 mm. The accuracy of the robotic system while targeting fixed points at different positions within the workspace is 1.2 mm (rms) with a standard deviation of ±0.4 mm. After calibration, and due to closed-loop control, the absolute positioning accuracy was reduced to the navigation camera accuracy, which is 0.35 mm (rms). The implemented control allows the robot to compensate for small patient movements. PMID:23012551

  14. Estimating peak discharges, flood volumes, and hydrograph shapes of small ungaged urban streams in Ohio

    USGS Publications Warehouse

    Sherwood, J.M.

    1986-01-01

    Methods are presented for estimating peak discharges, flood volumes, and hydrograph shapes of small (less than 5 sq mi) urban streams in Ohio. Examples of how to use the various regression equations and estimating techniques also are presented. Multiple-regression equations were developed for estimating peak discharges having recurrence intervals of 2, 5, 10, 25, 50, and 100 years. The significant independent variables affecting peak discharge are drainage area, main-channel slope, average basin-elevation index, and basin-development factor. Standard errors of regression and prediction for the peak discharge equations range from +/-37% to +/-41%. An equation also was developed to estimate the flood volume of a given peak discharge. Peak discharge, drainage area, main-channel slope, and basin-development factor were found to be the significant independent variables affecting flood volumes for given peak discharges. The standard error of regression for the volume equation is +/-52%. A technique is described for estimating the shape of a runoff hydrograph by applying a specific peak discharge and the estimated lagtime to a dimensionless hydrograph. An equation for estimating the lagtime of a basin was developed. Two variables--main-channel length divided by the square root of the main-channel slope and basin-development factor--have a significant effect on basin lagtime. The standard error of regression for the lagtime equation is +/-48%. The data base for the study was established by collecting rainfall-runoff data at 30 basins distributed throughout several metropolitan areas of Ohio. Five to eight years of data were collected at a 5-min record interval. The USGS rainfall-runoff model A634 was calibrated for each site. The calibrated models were used in conjunction with long-term rainfall records to generate a long-term streamflow record for each site. Each annual peak-discharge record was fitted to a Log-Pearson Type III frequency curve. Multiple-regression techniques were then used to analyze the peak discharge data as a function of the basin characteristics of the 30 sites. (Author's abstract)
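
    The fitted equations live in the report's tables; as a shape-only illustration, such USGS urban-peak regressions are typically multiplicative in the basin characteristics, along the lines of the hypothetical sketch below (all coefficients, and the (13 - BDF) device borrowed from national urban equations, are placeholders, not the report's fit).

    ```python
    def peak_discharge_cfs(area_sqmi: float, slope_ft_per_mi: float,
                           elev_index: float, bdf: float,
                           a: float = 100.0, b: float = 0.75, c: float = 0.25,
                           d: float = 0.4, e: float = -0.05) -> float:
        """Hypothetical multiplicative regression form; coefficients are placeholders."""
        return a * area_sqmi**b * slope_ft_per_mi**c * elev_index**d * (13.0 - bdf)**e
    ```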

  15. A Note on Standard Deviation and Standard Error

    ERIC Educational Resources Information Center

    Hassani, Hossein; Ghodsi, Mansoureh; Howell, Gareth

    2010-01-01

    Many students confuse the standard deviation and standard error of the mean and are unsure which, if either, to use in presenting data. In this article, we endeavour to address these questions and cover some related ambiguities about these quantities.

  16. Monte Carlo simulations of the impact of troposphere, clock and measurement errors on the repeatability of VLBI positions

    NASA Astrophysics Data System (ADS)

    Pany, A.; Böhm, J.; MacMillan, D.; Schuh, H.; Nilsson, T.; Wresnik, J.

    2011-01-01

    Within the International VLBI Service for Geodesy and Astrometry (IVS) Monte Carlo simulations have been carried out to design the next generation VLBI system ("VLBI2010"). Simulated VLBI observables were generated taking into account the three most important stochastic error sources in VLBI, i.e. wet troposphere delay, station clock, and measurement error. Based on realistic physical properties of the troposphere and clocks we ran simulations to investigate the influence of the troposphere on VLBI analyses, and to gain information about the role of clock performance and measurement errors of the receiving system in the process of reaching VLBI2010's goal of mm position accuracy on a global scale. Our simulations confirm that the wet troposphere delay is the most important of these three error sources. We did not observe significant improvement of geodetic parameters if the clocks were simulated with an Allan standard deviation better than 1 × 10-14 at 50 min and found the impact of measurement errors to be relatively small compared with the impact of the troposphere. Along with simulations to test different network sizes, scheduling strategies, and antenna slew rates these studies were used as a basis for the definition and specification of VLBI2010 antennas and recording system and might also be an example for other space geodetic techniques.
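
    For reference, the Allan standard deviation quoted for the clock specification is computed from fractional-frequency samples roughly as below (a minimal non-overlapping estimator; production tools typically use overlapping variants).

    ```python
    import numpy as np

    def allan_deviation(y: np.ndarray, m: int) -> float:
        """y: fractional-frequency samples at interval tau0; tau = m * tau0."""
        n = len(y) // m
        ybar = y[: n * m].reshape(n, m).mean(axis=1)  # block averages over tau
        return float(np.sqrt(0.5 * np.mean(np.diff(ybar) ** 2)))
    ```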

  17. Small baseline subsets approach of DInSAR for investigating land surface deformation along the high-speed railway

    NASA Astrophysics Data System (ADS)

    Rao, Xiong; Tang, Yunwei

    2014-11-01

    Land surface deformation is evident along a newly built high-speed railway in the southeast of China. In this study, we utilize the Small BAseline Subsets (SBAS)-Differential Synthetic Aperture Radar Interferometry (DInSAR) technique to detect land surface deformation along the railway. In this work, 40 Cosmo-SkyMed satellite images were selected to analyze the spatial distribution and velocity of the deformation in the study area. 88 image pairs with high coherence were first chosen with an appropriate threshold. These images were used to derive the deformation velocity map and its variation in time series. This result provides information for orbit correction and ground control point (GCP) selection in the following steps. Then, more image pairs were selected to tighten the constraint in the time dimension and to improve the final result by decreasing the phase unwrapping error. 171 combinations of SAR pairs were ultimately selected. Reliable GCPs were re-selected according to the previously derived deformation velocity map. Orbital residual error was corrected using these GCPs, and nonlinear deformation components were estimated. A more accurate surface deformation velocity map was thereby produced. Precise geodetic leveling work was carried out in the meantime. We compared the leveling result with the geocoded SBAS product using the nearest neighbour method. The mean error and standard deviation of the error were 0.82 mm and 4.17 mm, respectively. This result demonstrates the effectiveness of the DInSAR technique for monitoring land surface deformation, which can serve as reliable decision support for high-speed railway project design, construction, operation and maintenance.

  18. Error, Power, and Blind Sentinels: The Statistics of Seagrass Monitoring

    PubMed Central

    Schultz, Stewart T.; Kruschel, Claudia; Bakran-Petricioli, Tatjana; Petricioli, Donat

    2015-01-01

    We derive statistical properties of standard methods for monitoring of habitat cover worldwide, and criticize them in the context of mandated seagrass monitoring programs, as exemplified by Posidonia oceanica in the Mediterranean Sea. We report the novel result that cartographic methods with non-trivial classification errors are generally incapable of reliably detecting habitat cover losses less than about 30 to 50%, and the field labor required to increase their precision can be orders of magnitude higher than that required to estimate habitat loss directly in a field campaign. We derive a universal utility threshold of classification error in habitat maps that represents the minimum habitat map accuracy above which direct methods are superior. Widespread government reliance on blind-sentinel methods for monitoring seafloor can obscure the gradual and currently ongoing losses of benthic resources until the time has long passed for meaningful management intervention. We find two classes of methods with very high statistical power for detecting small habitat cover losses: 1) fixed-plot direct methods, which are over 100 times as efficient as direct random-plot methods in a variable habitat mosaic; and 2) remote methods with very low classification error such as geospatial underwater videography, which is an emerging, low-cost, non-destructive method for documenting small changes at millimeter visual resolution. General adoption of these methods and their further development will require a fundamental cultural change in conservation and management bodies towards the recognition and promotion of requirements of minimal statistical power and precision in the development of international goals for monitoring these valuable resources and the ecological services they provide. PMID:26367863

  19. Repeatability of standard metabolic rate (SMR) in a small fish, the spined loach (Cobitis taenia).

    PubMed

    Maciak, Sebastian; Konarzewski, Marek

    2010-10-01

    Significant repeatability of a trait of interest is an essential assumption for undertaking studies of phenotypic variability. It is especially important in studies on highly variable traits, such as metabolic rates. Recent publications suggest that the resting/basal metabolic rate of homeotherms is repeatable across a wide range of species. In contrast, studies on the consistency of standard metabolic rate (SMR) in ectotherms, particularly fish, are scarce. Here we present a comprehensive analysis of several important technical aspects of body mass-corrected SMR measurements and their repeatability in a small (average weight approximately 3 g) fish, the spined loach (Cobitis taenia). First we demonstrated that release of oxygen from the walls of metabolic chambers exposed to hypoxic conditions did not confound SMR measurements. Next, using the principle of propagation of measurement uncertainties, we demonstrated that in aquatic systems measurement error is significantly higher in open than in closed respirometry setups. The measurement error for SMR of a small fish determined in a closed aquatic system is comparable to that obtainable using top-notch open-flow systems used for air-breathing terrestrial animals. Using a closed respirometer we demonstrated that body mass-corrected SMR in spined loaches was repeatable under both normoxia and hypoxia over a 5-month period (Pearson correlation r=0.68 and r=0.73, respectively) as well as across both conditions (intraclass correlation coefficient tau=0.30). In these analyses we accounted for the possible effect of oxygen consumption by the oxygen electrode on the repeatability of SMR. Significant SMR consistency was accompanied by significant repeatability of body mass (intraclass correlation coefficient tau=0.86). To our knowledge, this is the first study showing long-term repeatability of body mass and SMR in a small fish, and it is consistent with the existence of heritable variation in these two traits. 2010 Elsevier Inc. All rights reserved.
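
    A sketch of the two consistency measures reported above, for paired measurement sessions on the same animals; which ICC form the authors used is not stated in the abstract, so a one-way random-effects form is shown as one common choice.

    ```python
    import numpy as np

    def pearson_r(x: np.ndarray, y: np.ndarray) -> float:
        return float(np.corrcoef(x, y)[0, 1])

    def icc_oneway(x: np.ndarray, y: np.ndarray) -> float:
        """ICC(1,1) for two measurements per subject (assumed form)."""
        data = np.column_stack([x, y])
        n = len(data)
        grand = data.mean()
        msb = 2.0 * np.sum((data.mean(axis=1) - grand) ** 2) / (n - 1)
        msw = np.sum((data - data.mean(axis=1, keepdims=True)) ** 2) / n
        return (msb - msw) / (msb + msw)
    ```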

  20. Conditional Standard Errors, Reliability and Decision Consistency of Performance Levels Using Polytomous IRT.

    ERIC Educational Resources Information Center

    Wang, Tianyou; And Others

    M. J. Kolen, B. A. Hanson, and R. L. Brennan (1992) presented a procedure for assessing the conditional standard error of measurement (CSEM) of scale scores using a strong true-score model. They also investigated the ways of using nonlinear transformation from number-correct raw score to scale score to equalize the conditional standard error along…

  1. Comparing Measurement Error between Two Different Methods of Measurement of Various Magnitudes

    ERIC Educational Resources Information Center

    Zavorsky, Gerald S.

    2010-01-01

    Measurement error is a common problem in several fields of research such as medicine, physiology, and exercise science. The standard deviation of repeated measurements on the same person is the measurement error. One way of presenting measurement error is called the repeatability, which is 2.77 multiplied by the within subject standard deviation.…
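
    The two quantities named in the abstract, computed from duplicate measurements on each subject (a minimal sketch):

    ```python
    import numpy as np

    def within_subject_sd(m1: np.ndarray, m2: np.ndarray) -> float:
        """Within-subject SD from paired repeats: s_w = sqrt(sum(d^2) / (2n))."""
        d = m1 - m2
        return float(np.sqrt(np.sum(d ** 2) / (2 * len(d))))

    def repeatability(m1: np.ndarray, m2: np.ndarray) -> float:
        return 2.77 * within_subject_sd(m1, m2)  # 2.77 = sqrt(2) * 1.96
    ```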

  2. Real-time precise orbit determination of LEO satellites using a single-frequency GPS receiver: Preliminary results of Chinese SJ-9A satellite

    NASA Astrophysics Data System (ADS)

    Sun, Xiucong; Han, Chao; Chen, Pei

    2017-10-01

    Spaceborne Global Positioning System (GPS) receivers are widely used for orbit determination of low-Earth-orbiting (LEO) satellites. With the improvement of measurement accuracy, single-frequency receivers are recently considered for low-cost small satellite missions. In this paper, a Schmidt-Kalman filter which processes single-frequency GPS measurements and broadcast ephemerides is proposed for real-time precise orbit determination of LEO satellites. The C/A code and L1 phase are linearly combined to eliminate the first-order ionospheric effects. Systematic errors due to ionospheric delay residual, group delay variation, phase center variation, and broadcast ephemeris errors, are lumped together into a noise term, which is modeled as a first-order Gauss-Markov process. In order to reduce computational complexity, the colored noise is considered rather than estimated in the orbit determination process. This ensures that the covariance matrix accurately represents the distribution of estimation errors without increasing the dimension of the state vector. The orbit determination algorithm is tested with actual flight data from the single-frequency GPS receiver onboard China's small satellite Shi Jian-9A (SJ-9A). Preliminary results using a 7-h data arc on October 25, 2012 show that the Schmidt-Kalman filter performs better than the standard Kalman filter in terms of accuracy.
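
    A first-order Gauss-Markov process, the model named above for the lumped systematic error, discretizes as x[k+1] = exp(-dt/tau) * x[k] + w[k]; the tau and sigma values below are placeholders, not values from the paper.

    ```python
    import numpy as np

    def simulate_first_order_gauss_markov(n: int, dt: float, tau: float,
                                          sigma: float, seed: int = 0) -> np.ndarray:
        rng = np.random.default_rng(seed)
        phi = np.exp(-dt / tau)
        q = sigma**2 * (1.0 - phi**2)  # keeps the steady-state variance at sigma**2
        x = np.zeros(n)
        for k in range(n - 1):
            x[k + 1] = phi * x[k] + rng.normal(0.0, np.sqrt(q))
        return x
    ```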

  3. Research on Strain Measurements of Core Positions for the Chinese Space Station.

    PubMed

    Shen, Jingshi; Zeng, Xiaodong; Luo, Yuxiang; Cao, Changqing; Wang, Ting

    2018-06-05

    The Chinese space station is designed to carry out manned spaceflight, space science research, and so on. In such applications, it is a common operation to inject gas into the hull, which produces strain in the bulkhead. Accurate measurement of strain in the bulkhead is one of the key tasks in evaluating the health condition of the space station. This is the first work to perform strain detection on the Chinese space station bulkhead using optical fiber Bragg gratings. During the measurements, a resistance strain gauge is used as the strain standard. The measurement error of the fiber-optic sensor in the circumferential direction is very small, less than 4.52 με. However, the error in the axial direction is much larger, with a highest value of 28.93 με. Because the measurement error of bare fiber in the axial direction is very small, the transverse effect of the substrate of the fiber-optic sensor likely plays a role. Comparison of the theoretical and experimental transverse effect coefficients shows that they are fairly consistent, with values of 0.0271 and 0.0287, respectively. After the transverse effect is compensated, the strain deviation in the axial direction is smaller than 2.04 με. This is of great significance for carrying out real-time health assessment of the space station bulkhead.
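
    A first-order version of the compensation step; the paper's exact correction formula is not given in the abstract, so the form below is an assumption, with k_t set to the reported experimental transverse-effect coefficient.

    ```python
    def compensate_axial_strain(axial_reading: float, transverse_strain: float,
                                k_t: float = 0.0287) -> float:
        """Assumed form: remove cross-axis pickup scaled by the coefficient k_t."""
        return axial_reading - k_t * transverse_strain
    ```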

  4. Magnitude of pseudopotential localization errors in fixed node diffusion quantum Monte Carlo

    DOE PAGES

    Kent, Paul R.; Krogel, Jaron T.

    2017-06-22

    Growth in computational resources has led to the application of real-space diffusion quantum Monte Carlo to increasingly heavy elements. Although generally assumed to be small, we find that when using standard techniques, the pseudopotential localization error can be large, on the order of an electron volt for an isolated cerium atom. We formally show that the localization error can be reduced to zero with improvements to the Jastrow factor alone, and we define a metric of Jastrow sensitivity that may be useful in the design of pseudopotentials. We employ an extrapolation scheme to extract the bare fixed-node energy and estimate the localization error in both the locality approximation and the T-moves scheme for the Ce atom in charge states 3+/4+. The locality approximation exhibits the lowest Jastrow sensitivity and generally smaller localization errors than T-moves, although the locality approximation energy approaches the localization-free limit from above/below for the 3+/4+ charge state. We find that energy-minimized Jastrow factors including three-body electron-electron-ion terms are the most effective at reducing the localization error for both the locality approximation and T-moves for the case of the Ce atom. Less complex or variance-minimized Jastrows are generally less effective. Finally, our results suggest that further improvements to Jastrow factors and trial wavefunction forms may be needed to reduce localization errors to chemical accuracy when medium-core pseudopotentials are applied to heavy elements such as Ce.

  5. Using weighted power mean for equivalent square estimation.

    PubMed

    Zhou, Sumin; Wu, Qiuwen; Li, Xiaobo; Ma, Rongtao; Zheng, Dandan; Wang, Shuo; Zhang, Mutian; Li, Sicong; Lei, Yu; Fan, Qiyong; Hyun, Megan; Diener, Tyler; Enke, Charles

    2017-11-01

    Equivalent Square (ES) enables the calculation of many radiation quantities for rectangular treatment fields, based only on measurements from square fields. While it is widely applied in radiotherapy, its accuracy, especially for extremely elongated fields, still leaves room for improvement. In this study, we introduce a novel explicit ES formula based on the Weighted Power Mean (WPM) function and compare its performance with the Sterling formula and Vadash/Bjärngard's formula. The proposed WPM formula is ES_WPM(a,b) = [w·a^α + (1−w)·b^α]^(1/α) for a rectangular photon field with sides a and b. The formula performance was evaluated by three methods: standard deviation of model fitting residual error, maximum relative model prediction error, and the model's Akaike Information Criterion (AIC). Testing datasets included the ES table from the British Journal of Radiology (BJR), photon output factors (S_cp) from the Varian TrueBeam Representative Beam Data (Med Phys. 2012;39:6981-7018), and published S_cp data for the Varian TrueBeam Edge (J Appl Clin Med Phys. 2015;16:125-148). For the BJR dataset, the best-fit parameter value α = −1.25 achieved a 20% reduction in the standard deviation of the ES estimation residual error compared with the two established formulae. For the two Varian datasets, employing WPM reduced the maximum relative error from 3.5% (Sterling) or 2% (Vadash/Bjärngard) to 0.7% for open field sizes ranging from 3 cm to 40 cm, and the reduction was even more prominent for 1 cm field sizes on Edge (J Appl Clin Med Phys. 2015;16:125-148). The AIC value of the WPM formula was consistently lower than its counterparts from the traditional formulae on photon output factors, most prominently on very elongated small fields. The WPM formula outperformed the traditional formulae on the three testing datasets. With increasing utilization of very elongated, small rectangular fields in modern radiotherapy, improved photon output factor estimation is expected by adopting the WPM formula in treatment planning and secondary MU checks. © 2017 The Authors. Journal of Applied Clinical Medical Physics published by Wiley Periodicals, Inc. on behalf of American Association of Physicists in Medicine.
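
    The WPM formula transcribes directly; with w = 0.5 (an assumption, since the abstract does not quote the fitted weight) and α = −1 it collapses to the harmonic mean 2ab/(a+b), the classical Sterling-type equivalent square, which makes a handy cross-check.

    ```python
    def wpm_equivalent_square(a: float, b: float, w: float = 0.5,
                              alpha: float = -1.25) -> float:
        """alpha = -1.25 is the best-fit value reported for the BJR dataset."""
        return (w * a**alpha + (1.0 - w) * b**alpha) ** (1.0 / alpha)

    print(wpm_equivalent_square(3.0, 40.0))              # elongated field
    print(wpm_equivalent_square(3.0, 40.0, alpha=-1.0))  # harmonic mean, 2ab/(a+b)
    ```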

  6. Use of a non-linear method for including the mass uncertainty of gravimetric standards and system measurement errors in the fitting of calibration curves for XRFA freeze-dried UNO₃ standards

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pickles, W.L.; McClure, J.W.; Howell, R.H.

    1978-05-01

    A sophisticated nonlinear multiparameter fitting program was used to produce a best-fit calibration curve for the response of an x-ray fluorescence analyzer to uranium nitrate, freeze-dried, 0.2%-accurate, gravimetric standards. The program is based on the unconstrained minimization subroutine VA02A. The program considers the mass values of the gravimetric standards as parameters to be fit along with the normal calibration curve parameters. The fitting procedure weights with the system errors and the mass errors in a consistent way. The resulting best-fit calibration curve parameters reflect the fact that the masses of the standard samples are measured quantities with a known error. Error estimates for the calibration curve parameters can be obtained from the curvature of the ''Chi-Squared Matrix'' or from error relaxation techniques. It was shown that nondispersive XRFA of 0.1 to 1 mg freeze-dried UNO₃ can have an accuracy of 0.2% in 1000 s.

  7. Estimation of distributional parameters for censored trace level water quality data: 1. Estimation techniques

    USGS Publications Warehouse

    Gilliom, Robert J.; Helsel, Dennis R.

    1986-01-01

    A recurring difficulty encountered in investigations of many metals and organic contaminants in ambient waters is that a substantial portion of water sample concentrations are below limits of detection established by analytical laboratories. Several methods were evaluated for estimating distributional parameters for such censored data sets using only uncensored observations. Their reliabilities were evaluated by a Monte Carlo experiment in which small samples were generated from a wide range of parent distributions and censored at varying levels. Eight methods were used to estimate the mean, standard deviation, median, and interquartile range. Criteria were developed, based on the distribution of uncensored observations, for determining the best performing parameter estimation method for any particular data set. The most robust method for minimizing error in censored-sample estimates of the four distributional parameters over all simulation conditions was the log-probability regression method. With this method, censored observations are assumed to follow the zero-to-censoring level portion of a lognormal distribution obtained by a least squares regression between logarithms of uncensored concentration observations and their z scores. When method performance was separately evaluated for each distributional parameter over all simulation conditions, the log-probability regression method still had the smallest errors for the mean and standard deviation, but the lognormal maximum likelihood method had the smallest errors for the median and interquartile range. When data sets were classified prior to parameter estimation into groups reflecting their probable parent distributions, the ranking of estimation methods was similar, but the accuracy of error estimates was markedly improved over those without classification.
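
    A compact sketch of the log-probability regression estimator described above; plotting-position and fill-in conventions vary between implementations, so the details here (Blom positions, tail fill-in) are one common choice rather than the study's exact recipe.

    ```python
    import numpy as np
    from scipy.stats import norm

    def censored_mean_sd(uncensored: np.ndarray, n_censored: int):
        """uncensored: detected concentrations; n_censored: count below detection."""
        n = len(uncensored) + n_censored
        x = np.sort(uncensored)
        # z scores of the uncensored ranks within the full sample (Blom positions).
        z = norm.ppf((np.arange(n_censored + 1, n + 1) - 0.375) / (n + 0.25))
        slope, intercept = np.polyfit(z, np.log(x), 1)
        # Fill in the censored tail from the fitted lognormal line, then summarize.
        z_cens = norm.ppf((np.arange(1, n_censored + 1) - 0.375) / (n + 0.25))
        filled = np.exp(intercept + slope * z_cens)
        full = np.concatenate([filled, x])
        return full.mean(), full.std(ddof=1)
    ```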

  8. Estimation of distributional parameters for censored trace level water quality data. 1. Estimation Techniques

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gilliom, R.J.; Helsel, D.R.

    1986-02-01

    A recurring difficulty encountered in investigations of many metals and organic contaminants in ambient waters is that a substantial portion of water sample concentrations are below limits of detection established by analytical laboratories. Several methods were evaluated for estimating distributional parameters for such censored data sets using only uncensored observations. Their reliabilities were evaluated by a Monte Carlo experiment in which small samples were generated from a wide range of parent distributions and censored at varying levels. Eight methods were used to estimate the mean, standard deviation, median, and interquartile range. Criteria were developed, based on the distribution of uncensored observations, for determining the best performing parameter estimation method for any particular data set. The most robust method for minimizing error in censored-sample estimates of the four distributional parameters over all simulation conditions was the log-probability regression method. With this method, censored observations are assumed to follow the zero-to-censoring level portion of a lognormal distribution obtained by a least squares regression between logarithms of uncensored concentration observations and their z scores. When method performance was separately evaluated for each distributional parameter over all simulation conditions, the log-probability regression method still had the smallest errors for the mean and standard deviation, but the lognormal maximum likelihood method had the smallest errors for the median and interquartile range. When data sets were classified prior to parameter estimation into groups reflecting their probable parent distributions, the ranking of estimation methods was similar, but the accuracy of error estimates was markedly improved over those without classification.

  9. Estimation of distributional parameters for censored trace-level water-quality data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gilliom, R.J.; Helsel, D.R.

    1984-01-01

    A recurring difficulty encountered in investigations of many metals and organic contaminants in ambient waters is that a substantial portion of water-sample concentrations are below limits of detection established by analytical laboratories. Several methods were evaluated for estimating distributional parameters for such censored data sets using only uncensored observations. Their reliabilities were evaluated by a Monte Carlo experiment in which small samples were generated from a wide range of parent distributions and censored at varying levels. Eight methods were used to estimate the mean, standard deviation, median, and interquartile range. Criteria were developed, based on the distribution of uncensored observations, for determining the best-performing parameter estimation method for any particular data set. The most robust method for minimizing error in censored-sample estimates of the four distributional parameters over all simulation conditions was the log-probability regression method. With this method, censored observations are assumed to follow the zero-to-censoring level portion of a lognormal distribution obtained by a least-squares regression between logarithms of uncensored concentration observations and their z scores. When method performance was separately evaluated for each distributional parameter over all simulation conditions, the log-probability regression method still had the smallest errors for the mean and standard deviation, but the lognormal maximum likelihood method had the smallest errors for the median and interquartile range. When data sets were classified prior to parameter estimation into groups reflecting their probable parent distributions, the ranking of estimation methods was similar, but the accuracy of error estimates was markedly improved over those without classification. 6 figs., 6 tabs.

  10. Final acceptance testing of the LSST monolithic primary/tertiary mirror

    NASA Astrophysics Data System (ADS)

    Tuell, Michael T.; Burge, James H.; Cuerden, Brian; Gressler, William; Martin, Hubert M.; West, Steven C.; Zhao, Chunyu

    2014-07-01

    The Large Synoptic Survey Telescope (LSST) is a three-mirror wide-field survey telescope with the primary and tertiary mirrors on one monolithic substrate. This substrate is made of Ohara E6 borosilicate glass in a honeycomb sandwich, spin cast at the Steward Observatory Mirror Lab at The University of Arizona. Each surface is aspheric, with the specification in terms of conic constant error, maximum active bending forces, and finally a structure function specification on the residual errors. There are high-order deformation terms, but with no tolerance; any error is considered a surface error and is included in the structure function. The radii of curvature are very different, requiring two independent test stations, each with instantaneous phase-shifting interferometers with null correctors. The primary null corrector is a standard two-element Offner null lens. The tertiary null corrector is a phase-etched computer-generated hologram (CGH). This paper details the two optical systems and their tolerances, showing that the uncertainty in measuring the figure is a small fraction of the structure function specification. Additional metrology includes the radii of curvature, optical axis locations, and relative surface tilts. The methods for measuring these are also described along with their tolerances.

  11. A flexible wearable sensor for knee flexion assessment during gait.

    PubMed

    Papi, Enrica; Bo, Yen Nee; McGregor, Alison H

    2018-05-01

    Gait analysis plays an important role in the diagnosis and management of patients with movement disorders, but it is usually performed within a laboratory. Recently, interest has shifted towards the possibility of conducting gait assessments in everyday environments, thus facilitating long-term monitoring. This is possible by using wearable technologies rather than laboratory-based equipment. This study aims to validate a novel wearable sensor system's ability to measure peak knee sagittal angles during gait. The proposed system comprises a flexible conductive polymer unit interfaced with a wireless acquisition node attached over the knee on a pair of leggings. Sixteen healthy volunteers participated in two gait assessments on separate occasions. Data were simultaneously collected from the novel sensor and a gold-standard 10-camera motion capture system. The relationship between sensor signal and reference knee flexion angles was defined for each subject to allow the transformation of sensor voltage outputs to angular measures (degrees). The knee peak flexion angles from the sensor and reference system were compared by means of root mean square error (RMSE), absolute error, Bland-Altman plots, and intra-class correlation coefficients (ICCs) to assess test-retest reliability. Comparisons of knee peak flexion angles calculated from the sensor and gold standard yielded an absolute error of 0.35° (±2.9°) and an RMSE of 1.2° (±0.4°). Good agreement was found between the two systems, with the majority of data lying within the limits of agreement. The sensor demonstrated high test-retest reliability (ICCs>0.8). These results show the ability of the sensor to monitor knee peak sagittal angles with small margins of error and in agreement with the gold-standard system. The sensor has potential to be used in clinical settings as a discreet, unobtrusive wearable device allowing for long-term gait analysis. Copyright © 2018 The Authors. Published by Elsevier B.V. All rights reserved.
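
    The agreement statistics used above, computed from paired peak-angle series (a minimal sketch):

    ```python
    import numpy as np

    def agreement(sensor: np.ndarray, reference: np.ndarray):
        diff = sensor - reference
        rmse = float(np.sqrt(np.mean(diff ** 2)))
        bias = float(diff.mean())
        half = 1.96 * diff.std(ddof=1)
        return rmse, bias, (bias - half, bias + half)  # RMSE, bias, Bland-Altman LoA
    ```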

  12. Estimating standard errors in feature network models.

    PubMed

    Frank, Laurence E; Heiser, Willem J

    2007-05-01

    Feature network models are graphical structures that represent proximity data in a discrete space while using the same formalism that is the basis of least squares methods employed in multidimensional scaling. Existing methods to derive a network model from empirical data only give the best-fitting network and yield no standard errors for the parameter estimates. The additivity properties of networks make it possible to consider the model as a univariate (multiple) linear regression problem with positivity restrictions on the parameters. In the present study, both theoretical and empirical standard errors are obtained for the constrained regression parameters of a network model with known features. The performance of both types of standard error is evaluated using Monte Carlo techniques.
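
    One empirical route to such standard errors is a case-resampling bootstrap around a positivity-constrained least-squares fit; this mirrors the spirit of the study's empirical standard errors, not its exact scheme.

    ```python
    import numpy as np
    from scipy.optimize import nnls

    def nnls_bootstrap_se(X: np.ndarray, y: np.ndarray,
                          n_boot: int = 1000, seed: int = 0) -> np.ndarray:
        rng = np.random.default_rng(seed)
        n = len(y)
        draws = np.empty((n_boot, X.shape[1]))
        for b in range(n_boot):
            idx = rng.integers(0, n, size=n)  # resample observations with replacement
            draws[b] = nnls(X[idx], y[idx])[0]
        return draws.std(axis=0, ddof=1)      # empirical SE of each constrained weight
    ```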

  13. Semiclassical Dynamics with Exponentially Small Error Estimates

    NASA Astrophysics Data System (ADS)

    Hagedorn, George A.; Joye, Alain

    We construct approximate solutions to the time-dependent Schrödinger equation for small values of ħ. If V satisfies appropriate analyticity and growth hypotheses and |t| ≤ T, these solutions agree with exact solutions up to errors whose norms are bounded by C exp(−γ/ħ) for some C and γ > 0. Under more restrictive hypotheses, we prove that for sufficiently small T′, |t| ≤ T′ |log ħ| implies the norms of the errors are bounded by C′ exp(−γ′/ħ^σ) for some C′, γ′ > 0, and σ > 0.

  14. A practical method of estimating standard error of age in the fission track dating method

    USGS Publications Warehouse

    Johnson, N.M.; McGee, V.E.; Naeser, C.W.

    1979-01-01

    A first-order approximation formula for the propagation of error in the fission track age equation is given by P_A = C[P_s² + P_i² + P_φ² − 2rP_sP_i]^(1/2), where P_A, P_s, P_i, and P_φ are the percentage errors of age, of spontaneous track density, of induced track density, and of neutron dose, respectively, and C is a constant. The correlation, r, between spontaneous and induced track densities is a crucial element in the error analysis, acting generally to improve the standard error of age. In addition, the correlation parameter r is instrumental in specifying the level of neutron dose, a controlled variable, which will minimize the standard error of age. The results from the approximation equation agree closely with the results from an independent statistical model for the propagation of errors in the fission-track dating method. © 1979.
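
    The propagation formula can be applied directly. A minimal sketch, with C set to 1 for illustration (the paper's C depends on the age equation):

    ```python
    import math

    def age_percent_error(p_s, p_i, p_phi, r, c=1.0):
        """First-order percentage error of a fission-track age.

        p_s, p_i, p_phi: percentage errors of spontaneous track density,
        induced track density, and neutron dose; r: correlation between
        spontaneous and induced track densities; c: the constant C.
        """
        return c * math.sqrt(p_s**2 + p_i**2 + p_phi**2 - 2.0 * r * p_s * p_i)

    # A positive correlation between the two track densities improves the error:
    print(age_percent_error(5.0, 5.0, 2.0, r=0.0))  # ~7.3%
    print(age_percent_error(5.0, 5.0, 2.0, r=0.8))  # ~3.7%
    ```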

  15. Demographics of the gay and lesbian population in the United States: evidence from available systematic data sources.

    PubMed

    Black, D; Gates, G; Sanders, S; Taylor, L

    2000-05-01

    This work provides an overview of standard social science data sources that now allow some systematic study of the gay and lesbian population in the United States. For each data source, we consider how sexual orientation can be defined, and we note the potential sample sizes. We give special attention to the important problem of measurement error, especially the extent to which individuals recorded as gay and lesbian are indeed recorded correctly. Our concern is that because gays and lesbians constitute a relatively small fraction of the population, modest measurement problems could lead to serious errors in inference. In examining gays and lesbians in multiple data sets we also achieve a second objective: We provide a set of statistics about this population that is relevant to several current policy debates.

  16. Modified SPC for short run test and measurement process in multi-stations

    NASA Astrophysics Data System (ADS)

    Koh, C. K.; Chin, J. F.; Kamaruddin, S.

    2018-03-01

    Due to short production runs and the measurement error inherent in electronic test and measurement (T&M) processes, continuous quality monitoring through real-time statistical process control (SPC) is challenging. Industry practice allows the installation of a guard band using measurement uncertainty to reduce the width of the acceptance limit, as an indirect way to compensate for measurement errors. This paper presents a new SPC model combining a modified guard band and control charts (Z̄ chart and W chart) for short runs in T&M processes in multi-stations. The proposed model standardizes the observed value with the measurement target (T) and rationed measurement uncertainty (U). An S-factor (S_f) is introduced to the control limits to improve the sensitivity in detecting small shifts. The model was embedded in an automated quality control system and verified with a case study in real industry.
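
    The abstract gives only the ingredients of the standardization (target T, rationed uncertainty U, an S-factor on the control limits), so the following is an assumption-laden sketch of that step, not the paper's exact model:

    ```python
    import numpy as np

    def standardize(x, target, uncertainty):
        """Standardize observed T&M values against the measurement target,
        scaling by the rationed measurement uncertainty."""
        return (np.asarray(x, dtype=float) - target) / uncertainty

    # Hypothetical short-run measurements from one station.
    x = [10.02, 9.98, 10.05, 10.01, 9.97]
    z = standardize(x, target=10.0, uncertainty=0.05)

    # Control limits tightened by an S-factor to catch small shifts
    # (the role the abstract assigns to S_f; the value 0.8 is made up).
    s_f = 0.8
    out_of_control = (z > 3.0 * s_f) | (z < -3.0 * s_f)
    print(z, out_of_control)
    ```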

  17. Reduction of medication errors related to sliding scale insulin by the introduction of a standardized order sheet.

    PubMed

    Harada, Saki; Suzuki, Akio; Nishida, Shohei; Kobayashi, Ryo; Tamai, Sayuri; Kumada, Keisuke; Murakami, Nobuo; Itoh, Yoshinori

    2017-06-01

    Insulin is frequently used for glycemic control. Medication errors related to insulin are a common problem for medical institutions. Here, we prepared a standardized sliding scale insulin (SSI) order sheet and assessed the effect of its introduction. Observations before and after the introduction of the standardized SSI template were conducted at Gifu University Hospital. The incidence of medication errors, hyperglycemia, and hypoglycemia related to SSI were obtained from the electronic medical records. The introduction of the standardized SSI order sheet significantly reduced the incidence of medication errors related to SSI compared with that prior to its introduction (12/165 [7.3%] vs 4/159 [2.1%], P = .048). However, the incidence of hyperglycemia (≥250 mg/dL) and hypoglycemia (≤50 mg/dL) in patients who received SSI was not significantly different between the 2 groups. The introduction of the standardized SSI order sheet reduced the incidence of medication errors related to SSI. © 2016 John Wiley & Sons, Ltd.
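
    The before/after contrast of error proportions can be reproduced along the following lines; the abstract does not state which test was used, so Fisher's exact test here is an assumption:

    ```python
    from scipy.stats import fisher_exact

    # Medication errors / error-free orders before and after the
    # standardized order sheet, using the counts quoted in the abstract.
    table = [[12, 165 - 12],
             [4, 159 - 4]]
    odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
    print(f"OR = {odds_ratio:.2f}, p = {p_value:.3f}")
    ```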

  18. A Criterion to Control Nonlinear Error in the Mixed-Mode Bending Test

    NASA Technical Reports Server (NTRS)

    Reeder, James R.

    2002-01-01

    The mixed-mode bending test has been widely used to measure delamination toughness and was recently standardized by ASTM as Standard Test Method D6671-01. This simple test is a combination of the standard Mode I (opening) test and a Mode II (sliding) test. This test uses a unidirectional composite test specimen with an artificial delamination subjected to bending loads to characterize when a delamination will extend. When the displacements become large, the linear theory used to analyze the results of the test yields errors in the calculated toughness values. The current standard places no limit on the specimen loading, and therefore test data that are significantly in error can be created while following the standard. A method of limiting the error that can be incurred in the calculated toughness values is needed. In this paper, nonlinear models of the MMB test are refined. One of the nonlinear models is then used to develop a simple criterion for prescribing conditions where the nonlinear error will remain below 5%.

  19. Comparison of spatial association approaches for landscape mapping of soil organic carbon stocks

    NASA Astrophysics Data System (ADS)

    Miller, B. A.; Koszinski, S.; Wehrhan, M.; Sommer, M.

    2015-03-01

    The distribution of soil organic carbon (SOC) can be variable at small analysis scales, but consideration of its role in regional and global issues demands the mapping of large extents. There are many different strategies for mapping SOC, among which are to model the variables needed to calculate the SOC stock indirectly or to model the SOC stock directly. The purpose of this research is to compare direct and indirect approaches to mapping SOC stocks from rule-based, multiple linear regression models applied at the landscape scale via spatial association. The final products for both strategies are high-resolution maps of SOC stocks (kg m⁻²), covering an area of 122 km², with accompanying maps of estimated error. For the direct modelling approach, the estimated error map was based on the internal error estimations from the model rules. For the indirect approach, the estimated error map was produced by spatially combining the error estimates of component models via standard error propagation equations. We compared these two strategies for mapping SOC stocks on the basis of the qualities of the resulting maps as well as the magnitude and distribution of the estimated error. The direct approach produced a map with less spatial variation than the map produced by the indirect approach. The increased spatial variation represented by the indirect approach improved R² values for the topsoil and subsoil stocks. Although the indirect approach had a lower mean estimated error for the topsoil stock, the mean estimated error for the total SOC stock (topsoil + subsoil) was lower for the direct approach. For these reasons, we recommend the direct approach to modelling SOC stocks be considered a more conservative estimate of the SOC stocks' spatial distribution.
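
    For the indirect route, a stock computed as a product of independently modelled components (e.g., SOC concentration, bulk density, and layer depth) propagates error by summing relative variances. A minimal sketch under that independence assumption (the component values below are invented):

    ```python
    import numpy as np

    def product_stock_error(values, errors):
        """Standard error propagation for a stock that is a product of
        independent component estimates; returns (stock, standard error)."""
        values = np.asarray(values, dtype=float)
        errors = np.asarray(errors, dtype=float)
        stock = np.prod(values)
        rel_var = np.sum((errors / values) ** 2)
        return stock, stock * np.sqrt(rel_var)

    # Hypothetical cell: 0.015 kg C per kg soil, 1400 kg/m^3, 0.3 m depth.
    stock, se = product_stock_error([0.015, 1400.0, 0.3],
                                    [0.002, 70.0, 0.02])
    print(f"SOC stock = {stock:.2f} kg m^-2 +/- {se:.2f}")
    ```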

  20. Comparison of spatial association approaches for landscape mapping of soil organic carbon stocks

    NASA Astrophysics Data System (ADS)

    Miller, B. A.; Koszinski, S.; Wehrhan, M.; Sommer, M.

    2014-11-01

    The distribution of soil organic carbon (SOC) can be variable at small analysis scales, but consideration of its role in regional and global issues demands the mapping of large extents. There are many different strategies for mapping SOC, among which are to model the variables needed to calculate the SOC stock indirectly or to model the SOC stock directly. The purpose of this research is to compare direct and indirect approaches to mapping SOC stocks from rule-based, multiple linear regression models applied at the landscape scale via spatial association. The final products for both strategies are high-resolution maps of SOC stocks (kg m⁻²), covering an area of 122 km², with accompanying maps of estimated error. For the direct modelling approach, the estimated error map was based on the internal error estimations from the model rules. For the indirect approach, the estimated error map was produced by spatially combining the error estimates of component models via standard error propagation equations. We compared these two strategies for mapping SOC stocks on the basis of the qualities of the resulting maps as well as the magnitude and distribution of the estimated error. The direct approach produced a map with less spatial variation than the map produced by the indirect approach. The increased spatial variation represented by the indirect approach improved R² values for the topsoil and subsoil stocks. Although the indirect approach had a lower mean estimated error for the topsoil stock, the mean estimated error for the total SOC stock (topsoil + subsoil) was lower for the direct approach. For these reasons, we recommend the direct approach to modelling SOC stocks be considered a more conservative estimate of the SOC stocks' spatial distribution.

  1. Scaled test statistics and robust standard errors for non-normal data in covariance structure analysis: a Monte Carlo study.

    PubMed

    Chou, C P; Bentler, P M; Satorra, A

    1991-11-01

    Research studying robustness of maximum likelihood (ML) statistics in covariance structure analysis has concluded that test statistics and standard errors are biased under severe non-normality. An estimation procedure known as asymptotic distribution free (ADF), making no distributional assumption, has been suggested to avoid these biases. Corrections to the normal theory statistics to yield more adequate performance have also been proposed. This study compares the performance of a scaled test statistic and robust standard errors for two models under several non-normal conditions and also compares these with the results from ML and ADF methods. Both ML and ADF test statistics performed rather well in one model and considerably worse in the other. In general, the scaled test statistic seemed to behave better than the ML test statistic and the ADF statistic performed the worst. The robust and ADF standard errors yielded more appropriate estimates of sampling variability than the ML standard errors, which were usually downward biased, in both models under most of the non-normal conditions. ML test statistics and standard errors were found to be quite robust to the violation of the normality assumption when data had either symmetric and platykurtic distributions, or non-symmetric and zero kurtotic distributions.
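
    The robust standard errors at issue are sandwich-type estimators. As a simple regression-flavored analogue (not the covariance-structure machinery of the paper), a heteroscedasticity-robust sandwich SE can be computed as:

    ```python
    import numpy as np

    def sandwich_se(X, y):
        """OLS estimates with heteroscedasticity-robust (sandwich) SEs:
        (X'X)^-1 [sum u_i^2 x_i x_i'] (X'X)^-1."""
        XtX_inv = np.linalg.inv(X.T @ X)
        beta = XtX_inv @ X.T @ y
        resid = y - X @ beta
        meat = X.T @ (X * resid[:, None] ** 2)
        cov = XtX_inv @ meat @ XtX_inv
        return beta, np.sqrt(np.diag(cov))

    rng = np.random.default_rng(1)
    X = np.column_stack([np.ones(200), rng.normal(size=200)])
    y = 1.0 + 0.5 * X[:, 1] + rng.standard_t(df=3, size=200)  # heavy tails
    beta, se = sandwich_se(X, y)
    print(np.round(beta, 3), np.round(se, 3))
    ```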

  2. Computer Programs for the Semantic Differential: Further Modifications.

    ERIC Educational Resources Information Center

    Lawson, Edwin D.; And Others

    The original nine programs for semantic differential analysis have been condensed into three programs which have been further refined and augmented. They yield: (1) means, standard deviations, and standard errors for each subscale on each concept; (2) Evaluation, Potency, and Activity (EPA) means, standard deviations, and standard errors; (3)…

  3. Experimental determination of the navigation error of the 4-D navigation, guidance, and control systems on the NASA B-737 airplane

    NASA Technical Reports Server (NTRS)

    Knox, C. E.

    1978-01-01

    Navigation error data from these flights are presented in a format utilizing three independent axes - horizontal, vertical, and time. The navigation position estimate error term and the autopilot flight technical error term are combined to form the total navigation error in each axis. This method of error presentation allows comparisons to be made between other 2-, 3-, or 4-D navigation systems and allows experimental or theoretical determination of the navigation error terms. Position estimate error data are presented with the navigation system position estimate based on dual DME radio updates that are smoothed with inertial velocities, dual DME radio updates that are smoothed with true airspeed and magnetic heading, and inertial velocity updates only. The normal mode of navigation with dual DME updates that are smoothed with inertial velocities resulted in a mean error of 390 m with a standard deviation of 150 m in the horizontal axis; a mean error of 1.5 m low with a standard deviation of less than 11 m in the vertical axis; and a mean error as low as 252 m with a standard deviation of 123 m in the time axis.
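
    Assuming the two per-axis components are independent, the natural reading of "combined" is a root-sum-square; the abstract does not spell the combination out, so the sketch below is an assumption:

    ```python
    import math

    def total_navigation_error(position_estimate_error, flight_technical_error):
        """Root-sum-square combination of independent error components
        in one axis (an assumed convention, not stated in the abstract)."""
        return math.hypot(position_estimate_error, flight_technical_error)

    # Hypothetical per-axis standard deviations in metres.
    print(total_navigation_error(150.0, 90.0))  # ~175 m
    ```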

  4. Small area estimation of proportions with different levels of auxiliary data.

    PubMed

    Chandra, Hukum; Kumar, Sushil; Aditya, Kaustav

    2018-03-01

    Binary data are often of interest in many small area applications. The use of standard small area estimation methods based on linear mixed models becomes problematic for such data. An empirical plug-in predictor (EPP) under a unit-level generalized linear mixed model with logit link function is often used for the estimation of a small area proportion. However, this EPP requires the availability of unit-level population information for auxiliary data that may not always be accessible. As a consequence, in many practical situations, this EPP approach cannot be applied. Based on the level of auxiliary information available, different small area predictors for the estimation of proportions are proposed. Analytic and bootstrap approaches to estimating the mean squared error of the proposed small area predictors are also developed. Monte Carlo simulations based on both simulated and real data show that the proposed small area predictors work well for generating the small area estimates of proportions and represent a practical alternative to the above approach. The developed predictor is applied to generate estimates of the proportions of indebted farm households at the district level using debt investment survey data from India. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  5. Evaluation of Acoustic Doppler Current Profiler measurements of river discharge

    USGS Publications Warehouse

    Morlock, S.E.

    1996-01-01

    The standard deviations of the ADCP measurements ranged from approximately 1 to 6 percent and were generally higher than the measurement errors predicted by error-propagation analysis of ADCP instrument performance. These error-prediction methods assume that the largest component of ADCP discharge measurement error is instrument related. The larger standard deviations indicate that substantial portions of measurement error may be attributable to sources unrelated to ADCP electronics or signal processing and are functions of the field environment.

  6. Increasing point-count duration increases standard error

    USGS Publications Warehouse

    Smith, W.P.; Twedt, D.J.; Hamel, P.B.; Ford, R.P.; Wiedenfeld, D.A.; Cooper, R.J.

    1998-01-01

    We examined data from point counts of varying duration in bottomland forests of west Tennessee and the Mississippi Alluvial Valley to determine if counting interval influenced sampling efficiency. Estimates of standard error increased as point count duration increased both for cumulative number of individuals and species in both locations. Although point counts appear to yield data with standard errors proportional to means, a square root transformation of the data may stabilize the variance. Using long (>10 min) point counts may reduce sample size and increase sampling error, both of which diminish statistical power and thereby the ability to detect meaningful changes in avian populations.
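
    The suggested square-root transform is a standard variance stabilizer for counts whose standard error grows with the mean; a quick simulation with Poisson-like counts (illustrative, not the study's data) shows the effect:

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    for m in (2, 8, 32):
        counts = rng.poisson(m, size=10_000)
        print(f"mean {m:2d}: sd(raw) = {counts.std(ddof=1):.2f}, "
              f"sd(sqrt) = {np.sqrt(counts).std(ddof=1):.2f}")
    # sd(raw) grows like sqrt(mean); sd(sqrt) stays near 0.5.
    ```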

  7. Experiences from the testing of a theory for modelling groundwater flow in heterogeneous media

    USGS Publications Warehouse

    Christensen, S.; Cooley, R.L.

    2002-01-01

    Usually, small-scale model error is present in groundwater modelling because the model only represents average system characteristics having the same form as the drift and small-scale variability is neglected. These errors cause the true errors of a regression model to be correlated. Theory and an example show that the errors also contribute to bias in the estimates of model parameters. This bias originates from model nonlinearity. In spite of this bias, predictions of hydraulic head are nearly unbiased if the model intrinsic nonlinearity is small. Individual confidence and prediction intervals are accurate if the t-statistic is multiplied by a correction factor. The correction factor can be computed from the true error second moment matrix, which can be determined when the stochastic properties of the system characteristics are known.

  8. Experience gained in testing a theory for modelling groundwater flow in heterogeneous media

    USGS Publications Warehouse

    Christensen, S.; Cooley, R.L.

    2002-01-01

    Usually, small-scale model error is present in groundwater modelling because the model only represents average system characteristics having the same form as the drift, and small-scale variability is neglected. These errors cause the true errors of a regression model to be correlated. Theory and an example show that the errors also contribute to bias in the estimates of model parameters. This bias originates from model nonlinearity. In spite of this bias, predictions of hydraulic head are nearly unbiased if the model intrinsic nonlinearity is small. Individual confidence and prediction intervals are accurate if the t-statistic is multiplied by a correction factor. The correction factor can be computed from the true error second moment matrix, which can be determined when the stochastic properties of the system characteristics are known.

  9. Simulation of an automatically-controlled STOL aircraft in a microwave landing system multipath environment

    NASA Technical Reports Server (NTRS)

    Toda, M.; Brown, S. C.; Burrous, C. N.

    1976-01-01

    The simulated response of a STOL aircraft to Microwave Landing System (MLS) multipath errors during final approach and touchdown is described. The MLS azimuth, elevation, and DME multipath errors were computed for a relatively severe multipath environment at Crissy Field, California, utilizing an MLS multipath simulation at MIT Lincoln Laboratory. A NASA/Ames six-degree-of-freedom simulation of an automatically-controlled deHavilland C-8A STOL aircraft was used to determine the response to these errors. The results show that the aircraft response to all of the Crissy Field MLS multipath errors was small. The small MLS azimuth and elevation multipath errors did not result in any discernible aircraft motion, and the aircraft response to the relatively large (200-ft (61-m) peak) DME multipath was noticeable but small.

  10. Biases and Standard Errors of Standardized Regression Coefficients

    ERIC Educational Resources Information Center

    Yuan, Ke-Hai; Chan, Wai

    2011-01-01

    The paper obtains consistent standard errors (SE) and biases of order O(1/n) for the sample standardized regression coefficients with both random and given predictors. Analytical results indicate that the formulas for SEs given in popular text books are consistent only when the population value of the regression coefficient is zero. The sample…
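
    The inconsistency of the textbook formula when the population coefficient is nonzero can be checked numerically: rescale the OLS slope SE by sd(x)/sd(y) (the naive route, which ignores the sampling variability of the two SDs) and compare it with a bootstrap SE of the standardized coefficient itself. A sketch with simulated data:

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    n = 500
    x = rng.normal(size=n)
    y = 0.6 * x + rng.normal(size=n)        # true slope is nonzero

    def standardized_beta(x, y):
        b = np.polyfit(x, y, 1)[0]
        return b * x.std(ddof=1) / y.std(ddof=1)

    # Naive textbook route: rescale the OLS slope SE by sd(x)/sd(y).
    b, a = np.polyfit(x, y, 1)
    resid = y - (a + b * x)
    se_b = np.sqrt(resid @ resid / (n - 2) / ((n - 1) * x.var(ddof=1)))
    naive_se = se_b * x.std(ddof=1) / y.std(ddof=1)

    # Bootstrap SE of the standardized coefficient itself.
    boot = []
    for _ in range(2000):
        idx = rng.integers(0, n, size=n)
        boot.append(standardized_beta(x[idx], y[idx]))

    print(f"naive SE = {naive_se:.4f}, bootstrap SE = {np.std(boot, ddof=1):.4f}")
    ```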

  11. Interspecies scaling and prediction of human clearance: comparison of small- and macro-molecule drugs

    PubMed Central

    Huh, Yeamin; Smith, David E.; Feng, Meihau Rose

    2014-01-01

    Human clearance prediction for small- and macro-molecule drugs was evaluated and compared using various scaling methods and statistical analysis. Human clearance is generally well predicted using single or multiple species simple allometry for macro- and small-molecule drugs excreted renally. The prediction error is higher for hepatically eliminated small molecules using single or multiple species simple allometry scaling, and it appears that the prediction error is mainly associated with drugs with low hepatic extraction ratio (Eh). The error in human clearance prediction for hepatically eliminated small molecules was reduced using scaling methods with a correction for maximum life span (MLP) or brain weight (BRW). Human clearance of both small- and macro-molecule drugs is well predicted using the monkey liver blood flow method. Predictions using liver blood flow from other species did not work as well, especially for the small-molecule drugs. PMID:21892879
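
    Simple allometry fits CL = a·W^b across species on log-log scales and extrapolates to human body weight. A minimal sketch with invented preclinical data (the MLP and BRW corrections mentioned above multiply the clearance by maximum life span or brain weight before fitting, and are omitted here):

    ```python
    import numpy as np

    # Hypothetical preclinical clearances: body weight (kg) vs CL (mL/min).
    weights = np.array([0.02, 0.25, 2.5, 12.0])   # mouse, rat, rabbit, dog
    cl = np.array([0.12, 1.1, 8.0, 30.0])

    # Simple allometry: CL = a * W**b, a straight line on log-log scales.
    b, log_a = np.polyfit(np.log(weights), np.log(cl), 1)
    human_cl = np.exp(log_a) * 70.0 ** b          # extrapolate to 70 kg
    print(f"exponent b = {b:.2f}, predicted human CL = {human_cl:.0f} mL/min")
    ```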

  12. Development of optoelectronic monitoring system for ear arterial pressure waveforms

    NASA Astrophysics Data System (ADS)

    Sasayama, Satoshi; Imachi, Yu; Yagi, Tamotsu; Imachi, Kou; Ono, Toshirou; Man-i, Masando

    1994-02-01

    Invasive intra-arterial blood pressure measurement is the most accurate method but not practical if the subject is in motion. The apparatus developed by Wesseling et al., based on the volume-clamp method of Penaz (Finapres), is able to monitor continuous finger arterial pressure waveforms noninvasively. The limitation of Finapres is the difficulty in measuring the pressure of a subject during work that involves finger or arm action. Because the Finapres detector is attached to the subject's finger, the measurements are affected by the inertia of blood and the hydrostatic effect caused by arm or finger motion. To overcome this problem, the authors made a detector that is attached to the subject's ear and developed an optoelectronic monitoring system for ear arterial pressure waveforms (Earpres). An IR LED, a photodiode, and an air cuff comprised the detector. The detector was attached to a subject's ear, and the space between the air cuff and the rubber plate on which the LED and photodiode were positioned was adjusted. To evaluate the accuracy of Earpres, the following tests were conducted with the participation of 10 healthy male volunteers. The subjects rested for about five minutes, then performed standing and squatting exercises to provide wide ranges of systolic and diastolic arterial pressure. Intra- and inter-individual standard errors were calculated according to the method of van Egmond et al. As a result, the averages of the intra-individual standard errors for Earpres were small (3.7 and 2.7 mmHg for systolic and diastolic pressure, respectively). The inter-individual standard errors for Earpres were about the same as those for Finapres for both systolic and diastolic pressure. The results showed the ear monitor was reliable in measuring arterial blood pressure waveforms and might be applicable to various fields such as sports medicine and ergonomics.

  13. Uncertainties of predictions from parton distributions II: theoretical errors

    NASA Astrophysics Data System (ADS)

    Martin, A. D.; Roberts, R. G.; Stirling, W. J.; Thorne, R. S.

    2004-06-01

    We study the uncertainties in parton distributions, determined in global fits to deep inelastic and related hard scattering data, due to so-called theoretical errors. Amongst these, we include potential errors due to the change of perturbative order (NLO to NNLO), ln(1/x) and ln(1-x) effects, absorptive corrections and higher-twist contributions. We investigate these uncertainties both by including explicit corrections to our standard global analysis and by examining the sensitivity to changes of the x, Q², W² cuts on the data that are fitted. In this way we expose those kinematic regions where the conventional DGLAP description is inadequate. As a consequence we obtain a set of NLO, and of NNLO, conservative partons where the data are fully consistent with DGLAP evolution, but over a restricted kinematic domain. We also examine the potential effects of such issues as the choice of input parametrisation, heavy target corrections, assumptions about the strange quark sea and isospin violation. Hence we are able to compare the theoretical errors with those uncertainties due to errors on the experimental measurements, which we studied previously. We use W and Higgs boson production at the Tevatron and the LHC as explicit examples of the uncertainties arising from parton distributions. For many observables the theoretical error is dominant, but for the cross section for W production at the Tevatron both the theoretical and experimental uncertainties are small, and hence the NNLO prediction may serve as a valuable luminosity monitor.

  14. Hardware-Efficient On-line Learning through Pipelined Truncated-Error Backpropagation in Binary-State Networks

    PubMed Central

    Mostafa, Hesham; Pedroni, Bruno; Sheik, Sadique; Cauwenberghs, Gert

    2017-01-01

    Artificial neural networks (ANNs) trained using backpropagation are powerful learning architectures that have achieved state-of-the-art performance in various benchmarks. Significant effort has been devoted to developing custom silicon devices to accelerate inference in ANNs. Accelerating the training phase, however, has attracted relatively little attention. In this paper, we describe a hardware-efficient on-line learning technique for feedforward multi-layer ANNs that is based on pipelined backpropagation. Learning is performed in parallel with inference in the forward pass, removing the need for an explicit backward pass and requiring no extra weight lookup. By using binary state variables in the feedforward network and ternary errors in truncated-error backpropagation, the need for any multiplications in the forward and backward passes is removed, and memory requirements for the pipelining are drastically reduced. Further reduction in addition operations owing to the sparsity in the forward neural and backpropagating error signal paths contributes to highly efficient hardware implementation. For proof-of-concept validation, we demonstrate on-line learning of MNIST handwritten digit classification on a Spartan 6 FPGA interfacing with an external 1 Gb DDR2 DRAM, which shows small degradation in test error performance compared to an equivalently sized binary ANN trained off-line using standard back-propagation and exact errors. Our results highlight an attractive synergy between pipelined backpropagation and binary-state networks in substantially reducing computation and memory requirements, making pipelined on-line learning practical in deep networks. PMID:28932180
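
    The multiply-free property rests on binary (0/1) activations and ternarized backpropagated errors; the sketch below shows only that quantization and the resulting addition-only outer-product update, not the full pipelined architecture (the threshold is a made-up hyperparameter):

    ```python
    import numpy as np

    def ternarize(err, threshold):
        """Quantize backpropagated errors to {-1, 0, +1}; with binary
        activations, the weight update then needs only additions."""
        return np.sign(err) * (np.abs(err) >= threshold)

    rng = np.random.default_rng(4)
    activations = (rng.random((4, 8)) > 0.5).astype(float)      # binary states
    errors = ternarize(rng.normal(0, 0.1, size=(4, 3)), 0.05)   # ternary errors

    # Every term of the outer product is -1, 0, or +1, so accumulation is
    # pure addition; the learning rate scales the result afterwards.
    delta_w = 0.01 * (activations.T @ errors)
    print(delta_w.shape, np.unique(activations), np.unique(errors))
    ```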

  15. Hardware-Efficient On-line Learning through Pipelined Truncated-Error Backpropagation in Binary-State Networks.

    PubMed

    Mostafa, Hesham; Pedroni, Bruno; Sheik, Sadique; Cauwenberghs, Gert

    2017-01-01

    Artificial neural networks (ANNs) trained using backpropagation are powerful learning architectures that have achieved state-of-the-art performance in various benchmarks. Significant effort has been devoted to developing custom silicon devices to accelerate inference in ANNs. Accelerating the training phase, however, has attracted relatively little attention. In this paper, we describe a hardware-efficient on-line learning technique for feedforward multi-layer ANNs that is based on pipelined backpropagation. Learning is performed in parallel with inference in the forward pass, removing the need for an explicit backward pass and requiring no extra weight lookup. By using binary state variables in the feedforward network and ternary errors in truncated-error backpropagation, the need for any multiplications in the forward and backward passes is removed, and memory requirements for the pipelining are drastically reduced. Further reduction in addition operations owing to the sparsity in the forward neural and backpropagating error signal paths contributes to highly efficient hardware implementation. For proof-of-concept validation, we demonstrate on-line learning of MNIST handwritten digit classification on a Spartan 6 FPGA interfacing with an external 1 Gb DDR2 DRAM, which shows small degradation in test error performance compared to an equivalently sized binary ANN trained off-line using standard back-propagation and exact errors. Our results highlight an attractive synergy between pipelined backpropagation and binary-state networks in substantially reducing computation and memory requirements, making pipelined on-line learning practical in deep networks.

  16. Error estimation in multitemporal InSAR deformation time series, with application to Lanzarote, Canary Islands

    NASA Astrophysics Data System (ADS)

    GonzáLez, Pablo J.; FernáNdez, José

    2011-10-01

    Interferometric Synthetic Aperture Radar (InSAR) is a reliable technique for measuring crustal deformation. However, despite its long application to geophysical problems, its error estimation has been largely overlooked. Currently, the largest problem with InSAR is still the atmospheric propagation errors, which is why multitemporal interferometric techniques have been successfully developed using a series of interferograms. However, none of the standard multitemporal interferometric techniques, namely PS or SB (Persistent Scatterers and Small Baselines, respectively), provide an estimate of their precision. Here, we present a method to compute reliable estimates of the precision of the deformation time series. We implement it for the SB multitemporal interferometric technique (a favorable technique for natural terrains, the most usual target of geophysical applications). We describe the method, which uses a properly weighted scheme that allows us to compute estimates for all interferogram pixels, enhanced by a Monte Carlo resampling technique that properly propagates the interferogram errors (variance-covariances) into the unknown parameters (estimated errors for the displacements). We apply the multitemporal error estimation method to Lanzarote Island (Canary Islands), where no active magmatic activity has been reported in the last decades. We detect deformation around Timanfaya volcano (lengthening of line-of-sight ≈ subsidence), where the last eruption occurred in 1730-1736. Deformation closely follows the surface temperature anomalies, indicating that magma crystallization (cooling and contraction) of the 300-year-old shallow magmatic body under Timanfaya volcano is still ongoing.
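
    The resampling idea, perturbing the interferograms according to their own error levels and re-solving the weighted small-baseline inversion, can be sketched as follows (design matrix, pair list, and noise levels are illustrative, not the paper's):

    ```python
    import numpy as np

    rng = np.random.default_rng(5)

    # Small-baseline setup: each interferogram measures the displacement
    # accumulated between two acquisition dates.
    n_epochs = 5
    pairs = [(0, 1), (1, 2), (2, 3), (3, 4), (0, 2), (1, 3), (2, 4), (0, 3)]
    A = np.zeros((len(pairs), n_epochs - 1))
    for k, (i, j) in enumerate(pairs):
        A[k, i:j] = 1.0          # sums the increments between epochs i and j

    true_incr = np.array([2.0, 1.5, 1.0, 0.5])     # mm per interval
    sigma = 0.5 + rng.random(len(pairs))           # per-interferogram noise
    obs = A @ true_incr + rng.normal(0, sigma)

    def wls(d):
        W = np.diag(1.0 / sigma**2)                # weights from ifg variances
        return np.linalg.solve(A.T @ W @ A, A.T @ W @ d)

    # Monte Carlo resampling: perturb observations with their own errors and
    # take the scatter of the re-estimated increments as their uncertainty.
    samples = np.array([wls(obs + rng.normal(0, sigma)) for _ in range(1000)])
    print("increments:", np.round(wls(obs), 2))
    print("estimated errors:", np.round(samples.std(axis=0), 2))
    ```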

  17. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kent, Paul R.; Krogel, Jaron T.

    Growth in computational resources has led to the application of real-space diffusion quantum Monte Carlo to increasingly heavy elements. Although generally assumed to be small, we find that when using standard techniques, the pseudopotential localization error can be large, on the order of an electron volt for an isolated cerium atom. We formally show that the localization error can be reduced to zero with improvements to the Jastrow factor alone, and we define a metric of Jastrow sensitivity that may be useful in the design of pseudopotentials. We employ an extrapolation scheme to extract the bare fixed-node energy and estimate the localization error in both the locality approximation and the T-moves schemes for the Ce atom in charge states 3+/4+. The locality approximation exhibits the lowest Jastrow sensitivity and generally smaller localization errors than T-moves, although the locality approximation energy approaches the localization-free limit from above/below for the 3+/4+ charge state. We find that energy-minimized Jastrow factors including three-body electron-electron-ion terms are the most effective at reducing the localization error for both the locality approximation and T-moves for the case of the Ce atom. Less complex or variance-minimized Jastrows are generally less effective. Finally, our results suggest that further improvements to Jastrow factors and trial wavefunction forms may be needed to reduce localization errors to chemical accuracy when medium-core pseudopotentials are applied to heavy elements such as Ce.

  18. Impact of spot charge inaccuracies in IMPT treatments.

    PubMed

    Kraan, Aafke C; Depauw, Nicolas; Clasie, Ben; Giunta, Marina; Madden, Tom; Kooy, Hanne M

    2017-08-01

    Spot charge is one parameter of a pencil-beam scanning dose delivery system whose accuracy is typically high but whose required value has not been investigated. In this work we quantify the impact of spot charge inaccuracies on the dose distribution in patients. Knowing the effect of charge errors is relevant for conventional proton machines, as well as for new-generation proton machines, where ensuring accurate charge may be challenging. Through perturbation of spot charge in treatment plans for seven patients and a phantom, we evaluated the dose impact of absolute (up to 5 × 10⁶ protons) and relative (up to 30%) charge errors. We investigated the dependence on beam width by studying scenarios with small, medium, and large beam sizes. Treatment plan statistics included the Γ passing rate, dose-volume histograms, and dose differences. The allowable absolute charge error for small-spot plans was about 2 × 10⁶ protons. Larger limits would be allowed if larger spots were used. For relative errors, the maximum allowable error was about 13%, 8%, and 6% for small, medium, and large spots, respectively. Dose distributions turned out to be surprisingly robust against random spot charge perturbation. Our study suggests that ensuring spot charge errors as small as 1-2%, as is commonly aimed at in conventional proton therapy machines, is clinically not strictly needed. © 2017 American Association of Physicists in Medicine.

  19. Digital stereophotogrammetry based on circular markers and zooming cameras: evaluation of a method for 3D analysis of small motions in orthopaedic research

    PubMed Central

    2011-01-01

    Background Orthopaedic research projects focusing on small displacements in a small measurement volume require a radiation-free, three-dimensional motion analysis system. A stereophotogrammetrical motion analysis system can track wireless, small, light-weight markers attached to the objects. Thereby the disturbance of the measured objects through the marker tracking can be kept at a minimum. The purpose of this study was to develop and evaluate a non-position-fixed compact motion analysis system configured for a small measurement volume and able to zoom while tracking small round flat markers with respect to a fiducial marker which was used for the camera pose estimation. Methods The system consisted of two web cameras and the fiducial marker placed in front of them. The markers to track were black circles on a white background. The algorithm to detect the centre of the projected circle on the image plane was described and applied. In order to evaluate the accuracy (mean measurement error) and precision (standard deviation of the measurement error) of the optical measurement system, two experiments were performed: 1) inter-marker distance measurement and 2) marker displacement measurement. Results The first experiment, measuring 10 mm distances, showed a total accuracy of 0.0086 mm and precision of ±0.1002 mm. In the second experiment, translations from 0.5 mm to 5 mm were measured with a total accuracy of 0.0038 mm and precision of ±0.0461 mm. Rotations of 2.25° were measured with a total accuracy of 0.058° and a precision of ±0.172°. Conclusions The description of this non-proprietary measurement device with very good levels of accuracy and precision may provide opportunities for new, cost-effective applications of stereophotogrammetrical analysis in musculoskeletal research projects focusing on kinematics of small displacements in a small measurement volume. PMID:21284867

  20. larvalign: Aligning Gene Expression Patterns from the Larval Brain of Drosophila melanogaster.

    PubMed

    Muenzing, Sascha E A; Strauch, Martin; Truman, James W; Bühler, Katja; Thum, Andreas S; Merhof, Dorit

    2018-01-01

    The larval brain of the fruit fly Drosophila melanogaster is a small, tractable model system for neuroscience. Genes for fluorescent marker proteins can be expressed in defined, spatially restricted neuron populations. Here, we introduce the methods for 1) generating a standard template of the larval central nervous system (CNS), 2) spatial mapping of expression patterns from different larvae into a reference space defined by the standard template. We provide a manually annotated gold standard that serves for evaluation of the registration framework involved in template generation and mapping. A method for registration quality assessment enables the automatic detection of registration errors, and a semi-automatic registration method allows one to correct registrations, which is a prerequisite for a high-quality, curated database of expression patterns. All computational methods are available within the larvalign software package: https://github.com/larvalign/larvalign/releases/tag/v1.0.

  1. An emerging network storage management standard: Media error monitoring and reporting information (MEMRI) - to determine optical tape data integrity

    NASA Technical Reports Server (NTRS)

    Podio, Fernando; Vollrath, William; Williams, Joel; Kobler, Ben; Crouse, Don

    1998-01-01

    Sophisticated network storage management applications are rapidly evolving to satisfy a market demand for highly reliable data storage systems with large data storage capacities and performance requirements. To preserve a high degree of data integrity, these applications must rely on intelligent data storage devices that can provide reliable indicators of data degradation. Error correction activity generally occurs within storage devices without notification to the host. Early indicators of degradation and media error monitoring and reporting (MEMR) techniques implemented in data storage devices allow network storage management applications to notify system administrators of these events and to take appropriate corrective actions before catastrophic errors occur. Although MEMR techniques have been implemented in data storage devices for many years, until 1996 no MEMR standards existed. In 1996 the American National Standards Institute (ANSI) approved the only known (world-wide) industry standard specifying MEMR techniques to verify stored data on optical disks. This industry standard was developed under the auspices of the Association for Information and Image Management (AIIM). A recently formed AIIM Optical Tape Subcommittee initiated the development of another data integrity standard specifying a set of media error monitoring tools and media error monitoring information (MEMRI) to verify stored data on optical tape media. This paper discusses the need for intelligent storage devices that can provide data integrity metadata, the content of the existing data integrity standard for optical disks, and the content of the MEMRI standard being developed by the AIIM Optical Tape Subcommittee.

  2. Analysis of methods commonly used in biomedicine for treatment versus control comparison of very small samples.

    PubMed

    Ristić-Djurović, Jasna L; Ćirković, Saša; Mladenović, Pavle; Romčević, Nebojša; Trbovich, Alexander M

    2018-04-01

    A rough estimate indicated that the use of samples of size not larger than ten is not uncommon in biomedical research, and that many such studies are limited to strong effects due to sample sizes smaller than six. For data collected from biomedical experiments it is also often unknown if the mathematical requirements incorporated in the sample comparison methods are satisfied. Computer-simulated experiments were used to examine the performance of methods for qualitative sample comparison and its dependence on the effectiveness of exposure, effect intensity, distribution of studied parameter values in the population, and sample size. The Type I and Type II errors, their average, as well as the maximal errors were considered. The sample size 9 and the t-test method with p = 5% ensured error smaller than 5% even for weak effects. For sample sizes 6-8 the same method enabled detection of weak effects with errors smaller than 20%. If the sample sizes were 3-5, weak effects could not be detected with an acceptable error; however, the smallest maximal error in the most general case that includes weak effects is given by the standard error of the mean method. The increase of sample size from 5 to 9 led to seven times more accurate detection of weak effects. Strong effects were detected regardless of the sample size and method used. The minimal recommended sample size for biomedical experiments is 9. Use of smaller sizes and the method of their comparison should be justified by the objective of the experiment. Copyright © 2018 Elsevier B.V. All rights reserved.
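
    The simulation logic, drawing groups of size n, comparing them with a t-test, and tallying false positives and misses, is easy to reproduce in outline (the effect size and significance level below are placeholders, not the paper's exact settings):

    ```python
    import numpy as np
    from scipy.stats import ttest_ind

    rng = np.random.default_rng(6)

    def error_rates(n, effect, reps=5000, alpha=0.05):
        type1 = type2 = 0
        for _ in range(reps):
            a = rng.normal(0.0, 1.0, n)
            if ttest_ind(a, rng.normal(0.0, 1.0, n)).pvalue < alpha:
                type1 += 1                  # false positive under no effect
            if ttest_ind(a, rng.normal(effect, 1.0, n)).pvalue >= alpha:
                type2 += 1                  # missed true effect
        return type1 / reps, type2 / reps

    for n in (3, 6, 9):
        t1, t2 = error_rates(n, effect=2.0)
        print(f"n = {n}: Type I = {t1:.3f}, Type II = {t2:.3f}")
    ```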

  3. State estimation for autopilot control of small unmanned aerial vehicles in windy conditions

    NASA Astrophysics Data System (ADS)

    Poorman, David Paul

    The use of small unmanned aerial vehicles (UAVs) both in the military and civil realms is growing. This is largely due to the proliferation of inexpensive sensors and the increase in capability of small computers that has stemmed from the personal electronic device market. Methods for performing accurate state estimation for large scale aircraft have been well known and understood for decades, which usually involve a complex array of expensive high accuracy sensors. Performing accurate state estimation for small unmanned aircraft is a newer area of study and often involves adapting known state estimation methods to small UAVs. State estimation for small UAVs can be more difficult than state estimation for larger UAVs due to small UAVs employing limited sensor suites due to cost, and the fact that small UAVs are more susceptible to wind than large aircraft. The purpose of this research is to evaluate the ability of existing methods of state estimation for small UAVs to accurately capture the states of the aircraft that are necessary for autopilot control of the aircraft in a Dryden wind field. The research begins by showing which aircraft states are necessary for autopilot control in Dryden wind. Then two state estimation methods that employ only accelerometer, gyro, and GPS measurements are introduced. The first method uses assumptions on aircraft motion to directly solve for attitude information and smooth GPS data, while the second method integrates sensor data to propagate estimates between GPS measurements and then corrects those estimates with GPS information. The performance of both methods is analyzed with and without Dryden wind, in straight and level flight, in a coordinated turn, and in a wings level ascent. It is shown that in zero wind, the first method produces significant steady state attitude errors in both a coordinated turn and in a wings level ascent. In Dryden wind, it produces large noise on the estimates for its attitude states, and has a non-zero mean error that increases when gyro bias is increased. The second method is shown to not exhibit any steady state error in the tested scenarios that is inherent to its design. The second method can correct for attitude errors that arise from both integration error and gyro bias states, but it suffers from lack of attitude error observability. The attitude errors are shown to be more observable in wind, but increased integration error in wind outweighs the increase in attitude corrections that such increased observability brings, resulting in larger attitude errors in wind. Overall, this work highlights many technical deficiencies of both of these methods of state estimation that could be improved upon in the future to enhance state estimation for small UAVs in windy conditions.

  4. SEM Microanalysis of Particles: Concerns and Suggestions

    NASA Astrophysics Data System (ADS)

    Fournelle, J.

    2008-12-01

    The scanning electron microscope (SEM) is well suited to examine and characterize small (i.e. <10 micron) particles. Particles can be imaged and sizes and shapes determined. With energy dispersive x-ray spectrometers (EDS), chemical compositions can be determined quickly. Despite the ease in acquiring x-ray spectra and chemical compositions, there are potentially major sources of error to be recognized. Problems with EDS analyses of small particles: Qualitative estimates of composition (e.g. stating that Si>Al>Ca>Fe plus O) are easy. However, to be able to have confidence that a chemical composition is accurate, several issues should be examined. (1) Particle Mass Effect: Is the accelerating voltage appropriate for the specimen size? Are all the incident electrons remaining inside the particle, and not traveling out of the sample side or bottom? (2) Particle Absorption Effect: What is the geometric relationship of the beam impact point to the x-ray detector? The x-ray intensity will vary by significant amounts for the same material if the grains are irregular and the path out of the sample in the direction of the detector is longer or shorter. (3) Particle Fluorescence Effect: This is generally a smaller error, but should be considered: for small particles, using large standards, there will be a few % fewer x-rays generated in a small particle relative to one of the same composition and 50-100 times larger. Also, if the sample sits on a grid of a particular composition (e.g. Si wafer), potentially several % of Si could appear in the analysis. (4) In an increasing number of laboratories, with environmental or variable pressure SEMs, the Gas Skirt Effect is operating against you: here the incident electron beam scatters in the gas in the chamber, with fewer electrons impacting the target spot and some others hitting grains 100s of microns away, producing spectra that could be faulty. (5) Inclusion of measured oxygen: if the measured oxygen x-ray counts are utilized, significant errors can be introduced by differential absorption of this low-energy x-ray. (6) Standardless Analysis: This typical method of doing EDS analysis has a major pitfall: the printed analysis is normalized to 100 wt%, thereby eliminating an important clue to analytical error. Suggestions: (1) Use lower voltage, e.g. 10 kV, reducing effects 1, 2, and 3 above. (2) Use standards--traditional flat polished ones--and don't initially normalize totals. Discrepancies can then be observed and addressed, not ignored. (3) Always include oxygen by stoichiometry, not measurement. (4) Experimental simulation. Using material of constant composition (e.g. NIST glass K-411, or other homogeneous multi-element material with the elements of interest), grind it into fragments of similar size to your unknowns, and see what the analytical error is for measurements of these known particles. Analyses of your unknown material will be no better, and probably worse than that, particularly if the grains are smaller. The results of this experiment should be reported whenever discussing measurements on the unknown materials. (5) Monte Carlo simulation. Programs such as PENEPMA allow creation of complex-geometry samples (and samples on substrates), and resulting EDS spectra can be generated. This allows estimation of errors for representative cases. It is slow, however; other simulators such as DTSA-II promise faster simulations with some limitations. (6) EBSD: this is perfectly suited to some problems with SEM identification of small particles, e.g. distinguishing magnetite (Fe₃O₄) from hematite (Fe₂O₃), which is virtually impossible to do by EDS. With the appropriate hardware and software, electron diffraction patterns on particles can be gathered and the crystal type determined.

  5. Does size matter? Statistical limits of paleomagnetic field reconstruction from small rock specimens

    NASA Astrophysics Data System (ADS)

    Berndt, Thomas; Muxworthy, Adrian R.; Fabian, Karl

    2016-01-01

    As samples of ever decreasing sizes are being studied paleomagnetically, care has to be taken that the underlying assumptions of statistical thermodynamics (Maxwell-Boltzmann statistics) are being met. Here we determine how many grains and how large a magnetic moment a sample needs to have to be able to accurately record an ambient field. It is found that for samples with a thermoremanent magnetic moment larger than 10⁻¹¹ A m² the assumption of a sufficiently large number of grains is usually satisfied. Standard 25 mm diameter paleomagnetic samples usually contain enough magnetic grains that statistical errors are negligible, but "single silicate crystal" work on, for example, zircon, plagioclase, and olivine crystals is approaching the limits of what is physically possible, leading to statistical errors in both the angular deviation and paleointensity that are comparable to other sources of error. The reliability of nanopaleomagnetic imaging techniques capable of resolving individual grains (used, for example, to study the cloudy zone in meteorites), however, is questionable due to the limited area of the material covered.
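
    The scaling behind these limits is statistical: a thermoremanence carried by N roughly independent grain moments has a relative (and angular) scatter that falls off like 1/sqrt(N). A back-of-envelope sketch under that assumption, with a made-up per-grain moment:

    ```python
    import numpy as np

    def relative_scatter(total_moment, grain_moment):
        """Rough 1/sqrt(N) scatter of a remanence carried by N grains
        (a scaling argument only, not the paper's full treatment)."""
        n_grains = total_moment / grain_moment
        return 1.0 / np.sqrt(n_grains)

    # Assume ~1e-17 A m^2 per single-domain grain (order of magnitude).
    for m in (1e-9, 1e-11, 1e-13):
        print(f"moment {m:.0e} A m^2 -> scatter ~ {relative_scatter(m, 1e-17):.1e}")
    ```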

  6. Hydraulic head estimation at unobserved locations: Approximating the distribution of the absolute error based on geologic interpretations

    NASA Astrophysics Data System (ADS)

    Langousis, Andreas; Kaleris, Vassilios; Xeygeni, Vagia; Magkou, Foteini

    2017-04-01

    Assessing the availability of groundwater reserves at a regional level requires accurate and robust hydraulic head estimation at multiple locations of an aquifer. To that end, one needs groundwater observation networks that can provide sufficient information to estimate the hydraulic head at unobserved locations. The density of such networks is largely influenced by the spatial distribution of the hydraulic conductivity in the aquifer, and it is usually determined through trial-and-error, by solving the groundwater flow based on a properly selected set of alternative but physically plausible geologic structures. In this work, we use: (a) dimensional analysis, and (b) a pulse-based stochastic model for simulation of synthetic aquifer structures, to calculate the distribution of the absolute error in hydraulic head estimation as a function of the standardized distance from the nearest measuring locations. The resulting distributions are proved to encompass all possible small-scale structural dependencies, exhibiting characteristics (bounds, multi-modal features, etc.) that can be explained using simple geometric arguments. The obtained results are promising, pointing towards the direction of establishing design criteria based on large-scale geologic maps.

  7. Assessment of ecologic regression in the study of lung cancer and indoor radon.

    PubMed

    Stidley, C A; Samet, J M

    1994-02-01

    Ecologic regression studies conducted to assess the cancer risk of indoor radon to the general population are subject to methodological limitations, and they have given seemingly contradictory results. The authors use simulations to examine the effects of two major methodological problems that affect these studies: measurement error and misspecification of the risk model. In a simulation study of the effect of measurement error caused by the sampling process used to estimate radon exposure for a geographic unit, both the effect of radon and the standard error of the effect estimate were underestimated, with greater bias for smaller sample sizes. In another simulation study, which addressed the consequences of uncontrolled confounding by cigarette smoking, even small negative correlations between county geometric mean annual radon exposure and the proportion of smokers resulted in negative average estimates of the radon effect. A third study considered consequences of using simple linear ecologic models when the true underlying model relation between lung cancer and radon exposure is nonlinear. These examples quantify potential biases and demonstrate the limitations of estimating risks from ecologic studies of lung cancer and indoor radon.

  8. Estimates of fetch-induced errors in Bowen-ratio energy-budget measurements of evapotranspiration from a prairie wetland, Cottonwood Lake Area, North Dakota, USA

    USGS Publications Warehouse

    Stannard, David L.; Rosenberry, Donald O.; Winter, Thomas C.; Parkhurst, Renee S.

    2004-01-01

    Micrometeorological measurements of evapotranspiration (ET) often are affected to some degree by errors arising from limited fetch. A recently developed model was used to estimate fetch-induced errors in Bowen-ratio energy-budget measurements of ET made at a small wetland with fetch-to-height ratios ranging from 34 to 49. Estimated errors were small, averaging −1.90%±0.59%. The small errors are attributed primarily to the near-zero lower sensor height, and the negative bias reflects the greater Bowen ratios of the drier surrounding upland. Some of the variables and parameters affecting the error were not measured, but instead are estimated. A sensitivity analysis indicates that the uncertainty arising from these estimates is small. In general, fetch-induced error in measured wetland ET increases with decreasing fetch-to-height ratio, with increasing aridity and with increasing atmospheric stability over the wetland. Occurrence of standing water at a site is likely to increase the appropriate time step of data integration, for a given level of accuracy. Occurrence of extensive open water can increase accuracy or decrease the required fetch by allowing the lower sensor to be placed at the water surface. If fetch is highly variable and fetch-induced errors are significant, the variables affecting fetch (e.g., wind direction, water level) need to be measured. Fetch-induced error during the non-growing season may be greater or smaller than during the growing season, depending on how seasonal changes affect both the wetland and upland at a site.

  9. Nonlinear method for including the mass uncertainty of standards and the system measurement errors in the fitting of calibration curves

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pickles, W.L.; McClure, J.W.; Howell, R.H.

    1978-01-01

    A sophisticated nonlinear multiparameter fitting program has been used to produce a best-fit calibration curve for the response of an x-ray fluorescence analyzer to uranium nitrate, freeze-dried, 0.2% accurate, gravimetric standards. The program is based on the unconstrained minimization subroutine VA02A. The program considers the mass values of the gravimetric standards as parameters to be fit along with the normal calibration curve parameters. The fitting procedure weights with the system errors and the mass errors in a consistent way. The resulting best-fit calibration curve parameters reflect the fact that the masses of the standard samples are measured quantities with a known error. Error estimates for the calibration curve parameters can be obtained from the curvature of the chi-squared matrix or from error relaxation techniques. It has been shown that non-dispersive x-ray fluorescence analysis of 0.1 to 1 mg freeze-dried UNO₃ can have an accuracy of 0.2% in 1000 sec.
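
    Treating the standard masses as parameters with their own 0.2% errors makes this an errors-in-variables least squares problem; a compact modern sketch with scipy (the original used the VA02A minimizer, and the response model below is invented):

    ```python
    import numpy as np
    from scipy.optimize import least_squares

    rng = np.random.default_rng(7)

    # Hypothetical calibration: response ~ a*m + b*m**2; standards with
    # nominal masses known to 0.2%; responses carry a 1% system error.
    m_nom = np.linspace(0.1, 1.0, 8)                  # nominal masses (mg)
    m_true = m_nom * (1 + rng.normal(0, 0.002, 8))
    sys_sd = 0.01 * (50.0 * m_nom)
    resp = 50.0 * m_true + 5.0 * m_true**2 + rng.normal(0, sys_sd)

    def residuals(p):
        a, b, m = p[0], p[1], p[2:]                   # masses are parameters
        r_sys = (resp - (a * m + b * m**2)) / sys_sd  # system-error weights
        r_mass = (m - m_nom) / (0.002 * m_nom)        # 0.2% mass weights
        return np.concatenate([r_sys, r_mass])        # consistent weighting

    fit = least_squares(residuals, x0=np.concatenate([[40.0, 0.0], m_nom]))
    print("a, b =", np.round(fit.x[:2], 2))
    ```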

  10. Evaluation of dual energy quantitative CT for determining the spatial distributions of red marrow and bone for dosimetry in internal emitter radiation therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Goodsitt, Mitchell M., E-mail: goodsitt@umich.edu; Shenoy, Apeksha; Howard, David

    2014-05-15

    Purpose: To evaluate a three-equation three-unknown dual-energy quantitative CT (DEQCT) technique for determining region specific variations in bone spongiosa composition for improved red marrow dose estimation in radionuclide therapy. Methods: The DEQCT method was applied to 80/140 kVp images of patient-simulating lumbar sectional body phantoms of three sizes (small, medium, and large). External calibration rods of bone, red marrow, and fat-simulating materials were placed beneath the body phantoms. Similar internal calibration inserts were placed at vertebral locations within the body phantoms. Six test inserts of known volume fractions of bone, fat, and red marrow were also scanned. External-to-internal calibration correction factors were derived. The effects of body phantom size, radiation dose, spongiosa region segmentation granularity [single (∼17 × 17 mm) region of interest (ROI), 2 × 2, and 3 × 3 segmentation of that single ROI], and calibration method on the accuracy of the calculated volume fractions of red marrow (cellularity) and trabecular bone were evaluated. Results: For standard low dose DEQCT x-ray technique factors and the internal calibration method, the RMS errors of the estimated volume fractions of red marrow of the test inserts were 1.2–1.3 times greater in the medium body than in the small body phantom and 1.3–1.5 times greater in the large body than in the small body phantom. RMS errors of the calculated volume fractions of red marrow within 2 × 2 segmented subregions of the ROIs were 1.6–1.9 times greater than for no segmentation, and RMS errors for 3 × 3 segmented subregions were 2.3–2.7 times greater than those for no segmentation. Increasing the dose by a factor of 2 reduced the RMS errors of all constituent volume fractions by an average factor of 1.40 ± 0.29 for all segmentation schemes and body phantom sizes; increasing the dose by a factor of 4 reduced those RMS errors by an average factor of 1.71 ± 0.25. Results for external calibrations exhibited much larger RMS errors than size matched internal calibration. Use of an average body size external-to-internal calibration correction factor reduced the errors to closer to those for internal calibration. RMS errors of less than 30% or about 0.01 for the bone and 0.1 for the red marrow volume fractions would likely be satisfactory for human studies. Such accuracies were achieved for 3 × 3 segmentation of 5 mm slice images for: (a) internal calibration with 4 times dose for all size body phantoms, (b) internal calibration with 2 times dose for the small and medium size body phantoms, and (c) corrected external calibration with 4 times dose and all size body phantoms. Conclusions: Phantom studies are promising and demonstrate the potential to use dual energy quantitative CT to estimate the spatial distributions of red marrow and bone within the vertebral spongiosa.

  11. Evaluation of dual energy quantitative CT for determining the spatial distributions of red marrow and bone for dosimetry in internal emitter radiation therapy

    PubMed Central

    Goodsitt, Mitchell M.; Shenoy, Apeksha; Shen, Jincheng; Howard, David; Schipper, Matthew J.; Wilderman, Scott; Christodoulou, Emmanuel; Chun, Se Young; Dewaraja, Yuni K.

    2014-01-01

    Purpose: To evaluate a three-equation three-unknown dual-energy quantitative CT (DEQCT) technique for determining region specific variations in bone spongiosa composition for improved red marrow dose estimation in radionuclide therapy. Methods: The DEQCT method was applied to 80/140 kVp images of patient-simulating lumbar sectional body phantoms of three sizes (small, medium, and large). External calibration rods of bone, red marrow, and fat-simulating materials were placed beneath the body phantoms. Similar internal calibration inserts were placed at vertebral locations within the body phantoms. Six test inserts of known volume fractions of bone, fat, and red marrow were also scanned. External-to-internal calibration correction factors were derived. The effects of body phantom size, radiation dose, spongiosa region segmentation granularity [single (∼17 × 17 mm) region of interest (ROI), 2 × 2, and 3 × 3 segmentation of that single ROI], and calibration method on the accuracy of the calculated volume fractions of red marrow (cellularity) and trabecular bone were evaluated. Results: For standard low dose DEQCT x-ray technique factors and the internal calibration method, the RMS errors of the estimated volume fractions of red marrow of the test inserts were 1.2–1.3 times greater in the medium body than in the small body phantom and 1.3–1.5 times greater in the large body than in the small body phantom. RMS errors of the calculated volume fractions of red marrow within 2 × 2 segmented subregions of the ROIs were 1.6–1.9 times greater than for no segmentation, and RMS errors for 3 × 3 segmented subregions were 2.3–2.7 times greater than those for no segmentation. Increasing the dose by a factor of 2 reduced the RMS errors of all constituent volume fractions by an average factor of 1.40 ± 0.29 for all segmentation schemes and body phantom sizes; increasing the dose by a factor of 4 reduced those RMS errors by an average factor of 1.71 ± 0.25. Results for external calibrations exhibited much larger RMS errors than size matched internal calibration. Use of an average body size external-to-internal calibration correction factor reduced the errors to closer to those for internal calibration. RMS errors of less than 30% or about 0.01 for the bone and 0.1 for the red marrow volume fractions would likely be satisfactory for human studies. Such accuracies were achieved for 3 × 3 segmentation of 5 mm slice images for: (a) internal calibration with 4 times dose for all size body phantoms, (b) internal calibration with 2 times dose for the small and medium size body phantoms, and (c) corrected external calibration with 4 times dose and all size body phantoms. Conclusions: Phantom studies are promising and demonstrate the potential to use dual energy quantitative CT to estimate the spatial distributions of red marrow and bone within the vertebral spongiosa. PMID:24784380

  12. Performance monitoring and error significance in patients with obsessive-compulsive disorder.

    PubMed

    Endrass, Tanja; Schuermann, Beate; Kaufmann, Christan; Spielberg, Rüdiger; Kniesche, Rainer; Kathmann, Norbert

    2010-05-01

    Performance monitoring has been consistently found to be overactive in obsessive-compulsive disorder (OCD). The present study examines whether performance monitoring in OCD is adjusted with error significance. Therefore, errors in a flanker task were followed by neutral feedback (standard condition) or punishment feedback (punishment condition). In the standard condition, patients had significantly larger error-related negativity (ERN) and correct-related negativity (CRN) amplitudes than controls. In the punishment condition, however, the groups did not differ in ERN and CRN amplitudes. While healthy controls showed an amplitude enhancement from the standard to the punishment condition, OCD patients showed no variation. In contrast, group differences were not found for the error positivity (Pe): both groups had larger Pe amplitudes in the punishment condition. Results confirm earlier findings of overactive error monitoring in OCD. The absence of a variation with error significance might indicate that OCD patients are unable to down-regulate their monitoring activity according to external requirements. Copyright 2010 Elsevier B.V. All rights reserved.

  13. Evaluation of assigned-value uncertainty for complex calibrator value assignment processes: a prealbumin example.

    PubMed

    Middleton, John; Vaks, Jeffrey E

    2007-04-01

    Errors of calibrator-assigned values lead to errors in the testing of patient samples. The ability to estimate the uncertainties of calibrator-assigned values and other variables minimizes errors in testing processes. International Organization for Standardization guidelines provide simple equations for the estimation of calibrator uncertainty with simple value-assignment processes, but other methods are needed to estimate uncertainty in complex processes. We estimated the assigned-value uncertainty with a Monte Carlo computer simulation of a complex value-assignment process, based on a formalized description of the process, with measurement parameters estimated experimentally. This method was applied to study the uncertainty of a multilevel calibrator value assignment for a prealbumin immunoassay. The simulation results showed that the component of the uncertainty added by the process of value transfer from the reference material CRM470 to the calibrator is smaller than that of the reference material itself (<0.8% vs 3.7%). Varying the process parameters in the simulation model allowed for optimizing the process, while keeping the added uncertainty small. The patient result uncertainty caused by the calibrator uncertainty was also found to be small. This method of estimating uncertainty is a powerful tool that allows for estimation of calibrator uncertainty for optimization of various value-assignment processes, with a reduced number of measurements and reagent costs, while satisfying the uncertainty requirements. The new method expands and augments existing methods to allow estimation of uncertainty in complex processes.
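
    A Monte Carlo simulation of a value-transfer chain can be sketched in a few lines: each step from reference material to product calibrator is perturbed by its experimentally estimated coefficient of variation, and the added uncertainty is read off the simulated distribution. The chain structure and CVs below are illustrative, not the actual prealbumin process.

```python
# Minimal Monte Carlo sketch of assigned-value uncertainty propagation.
import numpy as np

rng = np.random.default_rng(0)
n_sim = 100_000
ref_value, ref_cv = 100.0, 0.037     # reference material and its 3.7% uncertainty
transfer_cv = 0.004                  # per-step measurement CV (illustrative)

ref = ref_value * (1 + ref_cv * rng.standard_normal(n_sim))
master = ref * (1 + transfer_cv * rng.standard_normal(n_sim))      # transfer step 1
product = master * (1 + transfer_cv * rng.standard_normal(n_sim))  # transfer step 2

added = np.sqrt(np.var(product) - np.var(ref)) / ref_value
print(f"total CV: {np.std(product)/ref_value:.4f}, added by transfer: {added:.4f}")
```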

  14. Random errors in interferometry with the least-squares method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang Qi

    2011-01-20

    This investigation analyzes random errors in interferometric surface profilers using the least-squares method when random noise is present. Two types of random noise are considered here: intensity noise and position noise. Two formulas have been derived for estimating the standard deviations of the surface height measurements: one for estimating the standard deviation when only intensity noise is present, and the other for estimating the standard deviation when only position noise is present. Measurements on simulated noisy interferometric data have been performed, and standard deviations of the simulated measurements have been compared with those theoretically derived. The relationships have also been discussed between random error and the wavelength of the light source and between random error and the amplitude of the interference fringe.
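
    A simulation in the spirit of this study can be written compactly: recover the phase of N phase-shifted frames by least squares under additive intensity noise, convert to height, and compare the empirical standard deviation with the standard theoretical result. All parameter values are illustrative.

```python
# Least-squares phase retrieval under intensity noise (illustrative values).
import numpy as np

rng = np.random.default_rng(1)
lam = 633e-9                     # light source wavelength (m)
N, trials = 8, 20_000            # frames per measurement, Monte Carlo trials
A, B, sigma_I = 1.0, 0.8, 0.01   # background, fringe amplitude, intensity noise
phi_true = 0.7                   # true phase (rad)

delta = 2 * np.pi * np.arange(N) / N
I = A + B * np.cos(phi_true + delta) + sigma_I * rng.standard_normal((trials, N))

# Least-squares estimator for the phase of a uniformly sampled cosine.
phi = np.arctan2(-(I * np.sin(delta)).sum(axis=1), (I * np.cos(delta)).sum(axis=1))
height = phi * lam / (4 * np.pi)          # double-pass interferometer

print(f"simulated height std: {height.std()*1e9:.3f} nm")
# Theory for intensity noise only: sigma_phi = sigma_I * sqrt(2/N) / B.
print(f"theoretical height std: {sigma_I*np.sqrt(2/N)/B * lam/(4*np.pi) * 1e9:.3f} nm")
```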

  15. The p-Value You Can't Buy.

    PubMed

    Demidenko, Eugene

    2016-01-02

    There is growing frustration with the concept of the p-value. Besides having an ambiguous interpretation, the p-value can be made as small as desired by increasing the sample size, n. The p-value is outdated and does not make sense with big data: everything becomes statistically significant. The root of the problem with the p-value is in the mean comparison. We argue that statistical uncertainty should be measured on the individual, not the group, level. Consequently, standard deviation (SD), not standard error (SE), error bars should be used to graphically present the data on two groups. We introduce a new measure based on the discrimination of individuals/objects from two groups, and call it the D-value. The D-value can be viewed as the n-of-1 p-value because it is computed in the same way as p while letting n equal 1. We show how the D-value is related to discrimination probability and the area above the receiver operating characteristic (ROC) curve. The D-value has a clear interpretation as the proportion of patients who get worse after the treatment, and as such facilitates weighing the likelihood of events under different scenarios.
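
    Under normality, the "n-of-1" construction suggests a simple sketch: compute the usual two-group z statistic but with n set to 1, so group SDs rather than SEs enter the denominator, giving the probability that a random individual from one group exceeds a random individual from the other. The direction of the interpretation (improves vs worsens) depends on how the outcome is coded, and the values below are illustrative.

```python
# Hedged sketch of a D-value as a discrimination probability (normality assumed).
from math import sqrt
from statistics import NormalDist

m_treat, s_treat = 4.2, 1.5    # treatment group mean and SD (illustrative)
m_ctrl,  s_ctrl  = 5.0, 1.4    # control group mean and SD (illustrative)

# Two-sample z with n = 1: the SDs, not the SEs, set the scale.
z = (m_ctrl - m_treat) / sqrt(s_treat**2 + s_ctrl**2)
d_value = NormalDist().cdf(z)  # P(random treated individual < random control)
print(f"D-value = {d_value:.3f}")
```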

  16. Repeatability of testing a small broadband sensor in the Albuquerque Seismological Laboratory Underground Vault

    USGS Publications Warehouse

    Ringler, Adam; Holland, Austin; Wilson, David

    2017-01-01

    Variability in seismic instrumentation performance plays a fundamental role in our ability to carry out experiments in observational seismology. Many such experiments rely on the assumed performance of various seismic sensors as well as on methods to isolate the sensors from nonseismic noise sources. We look at the repeatability of estimating the self‐noise, midband sensitivity, and the relative orientation by comparing three collocated Nanometrics Trillium Compact sensors. To estimate the repeatability, we conduct a total of 15 trials in which one sensor is repeatedly reinstalled, alongside two undisturbed sensors. We find that we are able to estimate the midband sensitivity with an error of no greater than 0.04% with a 99th percentile confidence, assuming a standard normal distribution. We also find that we are able to estimate mean sensor self‐noise to within ±5.6  dB with a 99th percentile confidence in the 30–100‐s‐period band. Finally, we find our relative orientation errors have a mean difference in orientation of 0.0171° from the reference, but our trials have a standard deviation of 0.78°.

  17. Study on the medical meteorological forecast of the number of hypertension inpatient based on SVR

    NASA Astrophysics Data System (ADS)

    Zhai, Guangyu; Chai, Guorong; Zhang, Haifeng

    2017-06-01

    The purpose of this study is to build a hypertension prediction model by examining the meteorological factors for hypertension incidence. The research method is to select the standard data of relative humidity, air temperature, visibility, wind speed and air pressure of Lanzhou from 2010 to 2012 (calculating the maximum, minimum and average value with 5 days as a unit) as the input variables of Support Vector Regression (SVR) and the standard data of hypertension incidence of the same period as the output dependent variables, to obtain the optimal prediction parameters by a cross-validation algorithm; then, by SVR algorithm learning and training, an SVR forecast model for hypertension incidence is built. The result shows that the hypertension prediction model is composed of 15 input independent variables, the training accuracy is 0.005, and the final error is 0.0026389. The forecast accuracy based on the SVR model is 97.1429%, which is higher than the statistical forecast equation and the neural network prediction method. It is concluded that the SVR model provides a new method for hypertension prediction with its simple calculation, small error, and high historical-sample fitting and independent-sample forecast capability.
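
    The described pipeline, 15 aggregated meteorological inputs, SVR, and cross-validated parameter selection, maps onto a few lines of scikit-learn. This is an assumed reconstruction with synthetic data, not the authors' code or hyperparameters.

```python
# Minimal SVR forecasting sketch with cross-validated parameter search.
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 15))     # 15 meteorological features (synthetic)
y = X[:, 0] - 0.5 * X[:, 3] + 0.1 * rng.normal(size=200)  # synthetic admissions

model = GridSearchCV(
    make_pipeline(StandardScaler(), SVR(kernel="rbf")),
    param_grid={"svr__C": [1, 10, 100], "svr__epsilon": [0.01, 0.1]},
    cv=5,                          # cross-validation for parameter selection
)
model.fit(X, y)
print(model.best_params_, model.score(X, y))
```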

  18. Online pretreatment verification of high-dose rate brachytherapy using an imaging panel

    NASA Astrophysics Data System (ADS)

    Fonseca, Gabriel P.; Podesta, Mark; Bellezzo, Murillo; Van den Bosch, Michiel R.; Lutgens, Ludy; Vanneste, Ben G. L.; Voncken, Robert; Van Limbergen, Evert J.; Reniers, Brigitte; Verhaegen, Frank

    2017-07-01

    Brachytherapy is employed to treat a wide variety of cancers. However, an accurate treatment verification method is currently not available. This study describes a pre-treatment verification system that uses an imaging panel (IP) to verify important aspects of the treatment plan. Detailed modelling of the IP was possible only with an extensive calibration performed using a robotic arm. Irradiations were performed with a high dose rate (HDR) 192Ir source within a water phantom. An empirical fit was applied to measure the distance between the source and the detector so that 3D Cartesian coordinates of the dwell positions can be obtained using a single panel. The IP acquires images at 7.14 fps to verify the dwell times, dwell positions and air kerma strength (Sk). A gynecological applicator was used to create a treatment plan that was registered with a CT image of the water phantom used during the experiments for verification purposes. Errors (shifts, exchanged connections and wrong dwell times) were simulated to verify the proposed verification system. Cartesian source positions (panel measurement plane) have a standard deviation of about 0.02 cm. The measured distance between the source and the panel (z-coordinate) has a standard deviation of up to 0.16 cm and a maximum absolute error of ≈0.6 cm if the signal is close to the sensitivity limit of the panel. The average response of the panel is very linear with Sk. Therefore, Sk measurements can be performed with relatively small errors. The measured dwell times show a maximum error of 0.2 s, which is consistent with the acquisition rate of the panel. All simulated errors were clearly identified by the proposed system. The use of IPs is not common in brachytherapy; however, they provide considerable advantages. It was demonstrated that the IP can accurately measure Sk, dwell times and dwell positions.

  19. Estimating error statistics for Chambon-la-Forêt observatory definitive data

    NASA Astrophysics Data System (ADS)

    Lesur, Vincent; Heumez, Benoît; Telali, Abdelkader; Lalanne, Xavier; Soloviev, Anatoly

    2017-08-01

    We propose a new algorithm for calibrating definitive observatory data with the goal of providing users with estimates of the data error standard deviations (SDs). The algorithm has been implemented and tested using Chambon-la-Forêt observatory (CLF) data. The calibration process uses all available data. It is set as a large, weakly non-linear, inverse problem that ultimately provides estimates of baseline values in three orthogonal directions, together with their expected standard deviations. For this inverse problem, absolute data error statistics are estimated from two series of absolute measurements made within a day. Similarly, variometer data error statistics are derived by comparing variometer data time series between different pairs of instruments over a few years. The comparisons of these time series led us to use an autoregressive process of order 1 (AR1 process) as a prior for the baselines. Therefore the obtained baselines do not vary smoothly in time. They have relatively small SDs, well below 300 pT when absolute data are recorded twice a week, i.e. within the daily to weekly measures recommended by INTERMAGNET. The algorithm was tested against the process traditionally used to derive baselines at CLF observatory, suggesting that statistics are less favourable when this latter process is used. Finally, two sets of definitive data were calibrated using the new algorithm. Their comparison shows that the definitive data SDs are less than 400 pT and may be slightly overestimated by our process: an indication that more work is required to obtain proper estimates of absolute data error statistics. For magnetic field modelling, the results show that even at isolated sites like CLF observatory, there are very localised signals over a large span of temporal frequencies that can be as large as 1 nT. The SDs reported here encompass signals with spatial scales of a few hundred metres and timescales of less than a day.

  20. Online pretreatment verification of high-dose rate brachytherapy using an imaging panel.

    PubMed

    Fonseca, Gabriel P; Podesta, Mark; Bellezzo, Murillo; Van den Bosch, Michiel R; Lutgens, Ludy; Vanneste, Ben G L; Voncken, Robert; Van Limbergen, Evert J; Reniers, Brigitte; Verhaegen, Frank

    2017-07-07

    Brachytherapy is employed to treat a wide variety of cancers. However, an accurate treatment verification method is currently not available. This study describes a pre-treatment verification system that uses an imaging panel (IP) to verify important aspects of the treatment plan. Detailed modelling of the IP was possible only with an extensive calibration performed using a robotic arm. Irradiations were performed with a high dose rate (HDR) 192Ir source within a water phantom. An empirical fit was applied to measure the distance between the source and the detector so that 3D Cartesian coordinates of the dwell positions can be obtained using a single panel. The IP acquires images at 7.14 fps to verify the dwell times, dwell positions and air kerma strength (Sk). A gynecological applicator was used to create a treatment plan that was registered with a CT image of the water phantom used during the experiments for verification purposes. Errors (shifts, exchanged connections and wrong dwell times) were simulated to verify the proposed verification system. Cartesian source positions (panel measurement plane) have a standard deviation of about 0.02 cm. The measured distance between the source and the panel (z-coordinate) has a standard deviation of up to 0.16 cm and a maximum absolute error of ≈0.6 cm if the signal is close to the sensitivity limit of the panel. The average response of the panel is very linear with Sk. Therefore, Sk measurements can be performed with relatively small errors. The measured dwell times show a maximum error of 0.2 s, which is consistent with the acquisition rate of the panel. All simulated errors were clearly identified by the proposed system. The use of IPs is not common in brachytherapy; however, they provide considerable advantages. It was demonstrated that the IP can accurately measure Sk, dwell times and dwell positions.

  1. Resampling-based Methods in Single and Multiple Testing for Equality of Covariance/Correlation Matrices

    PubMed Central

    Yang, Yang; DeGruttola, Victor

    2016-01-01

    Traditional resampling-based tests for homogeneity in covariance matrices across multiple groups resample residuals, that is, data centered by group means. These residuals do not share the same second moments when the null hypothesis is false, which makes them difficult to use in the setting of multiple testing. An alternative approach is to resample standardized residuals, data centered by group sample means and standardized by group sample covariance matrices. This approach, however, has been observed to inflate type I error when sample size is small or data are generated from heavy-tailed distributions. We propose to improve this approach by using robust estimation for the first and second moments. We discuss two statistics: the Bartlett statistic and a statistic based on eigen-decomposition of sample covariance matrices. Both statistics can be expressed in terms of standardized errors under the null hypothesis. These methods are extended to test homogeneity in correlation matrices. Using simulation studies, we demonstrate that the robust resampling approach provides comparable or superior performance, relative to traditional approaches, for single testing and reasonable performance for multiple testing. The proposed methods are applied to data collected in an HIV vaccine trial to investigate possible determinants, including vaccine status, vaccine-induced immune response level and viral genotype, of unusual correlation pattern between HIV viral load and CD4 count in newly infected patients. PMID:22740584
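
    The core resampling idea, standardize within group so all groups share second moments even under the alternative, then resample the pooled standardized residuals, can be sketched with a Box/Bartlett-type statistic. The robust moment estimators, the eigen-decomposition statistic, and the multiple-testing extension described in the abstract are omitted here for brevity.

```python
# Minimal resampling test for covariance homogeneity via standardized residuals.
import numpy as np

def box_m(groups):
    ns = np.array([len(g) for g in groups])
    covs = [np.cov(g, rowvar=False) for g in groups]
    pooled = sum((n - 1) * c for n, c in zip(ns, covs)) / (ns.sum() - len(groups))
    return ((ns.sum() - len(groups)) * np.log(np.linalg.det(pooled))
            - sum((n - 1) * np.log(np.linalg.det(c)) for n, c in zip(ns, covs)))

def resampling_pvalue(groups, n_boot=999, seed=0):
    rng = np.random.default_rng(seed)
    observed = box_m(groups)
    # Center by group mean and whiten by group covariance (Cholesky), so the
    # pooled residuals share second moments even when the null is false.
    z = np.vstack([(g - g.mean(axis=0)) @
                   np.linalg.inv(np.linalg.cholesky(np.cov(g, rowvar=False))).T
                   for g in groups])
    ns = [len(g) for g in groups]
    hits = 0
    for _ in range(n_boot):
        zb = z[rng.integers(0, len(z), size=len(z))]   # resample pooled residuals
        hits += box_m(np.split(zb, np.cumsum(ns)[:-1])) >= observed
    return (hits + 1) / (n_boot + 1)

rng = np.random.default_rng(1)
g1, g2 = rng.normal(size=(40, 3)), rng.normal(size=(50, 3))
print(resampling_pvalue([g1, g2]))
```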

  2. Resampling-based methods in single and multiple testing for equality of covariance/correlation matrices.

    PubMed

    Yang, Yang; DeGruttola, Victor

    2012-06-22

    Traditional resampling-based tests for homogeneity in covariance matrices across multiple groups resample residuals, that is, data centered by group means. These residuals do not share the same second moments when the null hypothesis is false, which makes them difficult to use in the setting of multiple testing. An alternative approach is to resample standardized residuals, data centered by group sample means and standardized by group sample covariance matrices. This approach, however, has been observed to inflate type I error when sample size is small or data are generated from heavy-tailed distributions. We propose to improve this approach by using robust estimation for the first and second moments. We discuss two statistics: the Bartlett statistic and a statistic based on eigen-decomposition of sample covariance matrices. Both statistics can be expressed in terms of standardized errors under the null hypothesis. These methods are extended to test homogeneity in correlation matrices. Using simulation studies, we demonstrate that the robust resampling approach provides comparable or superior performance, relative to traditional approaches, for single testing and reasonable performance for multiple testing. The proposed methods are applied to data collected in an HIV vaccine trial to investigate possible determinants, including vaccine status, vaccine-induced immune response level and viral genotype, of unusual correlation pattern between HIV viral load and CD4 count in newly infected patients.

  3. Scatter-Reducing Sounding Filtration Using a Genetic Algorithm and Mean Monthly Standard Deviation

    NASA Technical Reports Server (NTRS)

    Mandrake, Lukas

    2013-01-01

    Retrieval algorithms like that used by the Orbiting Carbon Observatory (OCO)-2 mission generate massive quantities of data of varying quality and reliability. A computationally efficient, simple method of labeling problematic data points or predicting soundings that will fail is required for basic operation, given that only 6% of the retrieved data may be operationally processed. This method automatically obtains a filter designed to reduce scatter based on a small number of input features. Most machine-learning filter construction algorithms attempt to predict error in the CO2 value. By using the surrogate goal of Mean Monthly Standard Deviation (MMS), the aim is to reduce the retrieved CO2 scatter rather than solving the harder problem of reducing CO2 error. This lends itself to improved interpretability and performance. This software reduces the scatter of retrieved CO2 values globally based on a minimum number of input features. It can be used as a prefilter to reduce the number of soundings requested, or as a post-filter to label data quality. The use of MMS provides a much cleaner, clearer filter than the standard ABS(CO2-truth) metrics previously employed by competing methods. The software's main strength lies in a clearer (i.e., fewer features required) filter that more efficiently reduces scatter in retrieved CO2 rather than focusing on the more complex (and easily removed) bias issues.

  4. Map based navigation for autonomous underwater vehicles

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tuohy, S.T.; Leonard, J.J.; Bellingham, J.G.

    1995-12-31

    In this work, a map-based navigation algorithm is developed wherein measured geophysical properties are matched to a priori maps. The objective is a complete algorithm applicable to a small, power-limited AUV which performs in real time to a required resolution with bounded position error. Interval B-splines are introduced for the non-linear representation of two-dimensional geophysical parameters that have measurement uncertainty. Fine-scale position determination involves the solution of a system of nonlinear polynomial equations with interval coefficients. This system represents the complete set of possible vehicle locations and is formulated as the intersection of contours established on each map from the simultaneous measurement of associated geophysical parameters. A standard filter mechanism, based on a bounded interval error model, predicts the position of the vehicle and, therefore, screens extraneous solutions. When multiple solutions are found, a tracking mechanism is applied until a unique vehicle location is determined.

  5. File Usage Analysis and Resource Usage Prediction: a Measurement-Based Study. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Devarakonda, Murthy V.-S.

    1987-01-01

    A probabilistic scheme was developed to predict process resource usage in UNIX. Given the identity of the program being run, the scheme predicts CPU time, file I/O, and memory requirements of a process at the beginning of its life. The scheme uses a state-transition model of the program's resource usage in its past executions for prediction. The states of the model are the resource regions obtained from an off-line cluster analysis of processes run on the system. The proposed method is shown to work on data collected from a VAX 11/780 running 4.3 BSD UNIX. The results show that the predicted values correlate well with the actual. The coefficient of correlation between the predicted and actual values of CPU time is 0.84. Errors in prediction are mostly small. Some 82% of errors in CPU time prediction are less than 0.5 standard deviations of process CPU time.
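
    The state-transition scheme is easy to sketch: past executions are assigned to resource regions (clusters), a transition matrix is estimated from the sequence of regions, and the prediction for the next run is the transition-weighted mix of region centroids. The centroids and run history below are illustrative, not the study's VAX data.

```python
# Minimal sketch of a state-transition resource-usage predictor.
import numpy as np

# Resource-region centroids from an assumed off-line cluster analysis:
# columns are CPU seconds, file I/O blocks, memory KB (illustrative).
centroids = np.array([[0.1,   5,  120],
                      [1.2,  40,  400],
                      [9.5, 300, 2000]])

history = [0, 0, 1, 0, 1, 1, 2, 1, 0, 1]   # region of each past execution

k = len(centroids)
T = np.ones((k, k))                         # Laplace-smoothed transition counts
for a, b in zip(history, history[1:]):
    T[a, b] += 1
T /= T.sum(axis=1, keepdims=True)           # row-normalize to probabilities

prediction = T[history[-1]] @ centroids     # expected resources of the next run
print(prediction)
```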

  6. Predictability of process resource usage - A measurement-based study on UNIX

    NASA Technical Reports Server (NTRS)

    Devarakonda, Murthy V.; Iyer, Ravishankar K.

    1989-01-01

    A probabilistic scheme is developed to predict process resource usage in UNIX. Given the identity of the program being run, the scheme predicts CPU time, file I/O, and memory requirements of a process at the beginning of its life. The scheme uses a state-transition model of the program's resource usage in its past executions for prediction. The states of the model are the resource regions obtained from an off-line cluster analysis of processes run on the system. The proposed method is shown to work on data collected from a VAX 11/780 running 4.3 BSD UNIX. The results show that the predicted values correlate well with the actual. The correlation coefficient between the predicted and actual values of CPU time is 0.84. Errors in prediction are mostly small. Some 82 percent of errors in CPU time prediction are less than 0.5 standard deviations of process CPU time.

  7. Predictability of process resource usage: A measurement-based study of UNIX

    NASA Technical Reports Server (NTRS)

    Devarakonda, Murthy V.; Iyer, Ravishankar K.

    1987-01-01

    A probabilistic scheme is developed to predict process resource usage in UNIX. Given the identity of the program being run, the scheme predicts CPU time, file I/O, and memory requirements of a process at the beginning of its life. The scheme uses a state-transition model of the program's resource usage in its past executions for prediction. The states of the model are the resource regions obtained from an off-line cluster analysis of processes run on the system. The proposed method is shown to work on data collected from a VAX 11/780 running 4.3 BSD UNIX. The results show that the predicted values correlate well with the actual. The correlation coefficient between the predicted and actual values of CPU time is 0.84. Errors in prediction are mostly small. Some 82% of errors in CPU time prediction are less than 0.5 standard deviations of process CPU time.

  8. Generating a Magellanic star cluster catalog with ASteCA

    NASA Astrophysics Data System (ADS)

    Perren, G. I.; Piatti, A. E.; Vázquez, R. A.

    2016-08-01

    An increasing number of software tools have been employed in recent years for the automated or semi-automated processing of astronomical data. The main advantages of using these tools over a standard by-eye analysis include: speed (particularly for large databases), homogeneity, reproducibility, and precision. At the same time, they enable a statistically correct study of the uncertainties associated with the analysis, in contrast with manually set errors, or the still widespread practice of simply not assigning errors. We present a catalog comprising 210 star clusters located in the Large and Small Magellanic Clouds, observed with Washington photometry. Their fundamental parameters were estimated through a homogeneous, automated and completely unassisted process, via the Automated Stellar Cluster Analysis package (ASteCA). Our results are compared with two types of studies on these clusters: one where the photometry is the same, and another where the photometric system is different from that employed by ASteCA.

  9. On the Estimation of Errors in Sparse Bathymetric Geophysical Data Sets

    NASA Astrophysics Data System (ADS)

    Jakobsson, M.; Calder, B.; Mayer, L.; Armstrong, A.

    2001-05-01

    There is a growing demand in the geophysical community for better regional representations of the world ocean's bathymetry. However, given the vastness of the oceans and the relatively limited coverage of even the most modern mapping systems, it is likely that many of the older data sets will remain part of our cumulative database for several more decades. Therefore, regional bathymetric compilations that are based on a mixture of historic and contemporary data sets will have to remain the standard. This raises the problem of assembling bathymetric compilations and utilizing data sets not only with heterogeneous coverage but also with a wide range of accuracies. In combining these data into regularly spaced grids of bathymetric values, which the majority of numerical procedures in earth sciences require, we are often forced to use a complex interpolation scheme due to the sparseness and irregularity of the input data points. Consequently, we are faced with the difficult task of assessing the confidence that we can assign to the final grid product, a task that is not usually addressed in most bathymetric compilations. We approach the problem of assessing the confidence via a direct-simulation Monte Carlo method. We start with a small subset of data from the International Bathymetric Chart of the Arctic Ocean (IBCAO) grid model [Jakobsson et al., 2000]. This grid is compiled from a mixture of data sources ranging from single beam soundings with available metadata, to spot soundings with no available metadata, to digitized contours; the test dataset shows examples of all of these types. From this database, we assign a priori error variances based on available metadata and, when this is not available, based on a worst-case scenario in an essentially heuristic manner. We then generate a number of synthetic datasets by randomly perturbing the base data using normally distributed random variates, scaled according to the predicted error model. These datasets are then re-gridded using the same methodology as the original product, generating a set of plausible grid models of the regional bathymetry that we can use for standard error estimates. Finally, we repeat the entire random estimation process and analyze each run's standard error grids in order to examine sampling bias and variance in the predictions. The final product of the estimation is a collection of standard error grids, which we combine with the source data density to create a grid that contains information about the bathymetry model's reliability. Jakobsson, M., Cherkis, N., Woodward, J., Coakley, B., and Macnab, R., 2000, A new grid of Arctic bathymetry: A significant resource for scientists and mapmakers, EOS Transactions, American Geophysical Union, v. 81, no. 9, p. 89, 93, 96.
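
    The direct-simulation loop described above, perturb each sounding with normal noise scaled by its a priori error, re-grid each synthetic dataset with the same interpolator, and take the pointwise spread of the resulting grids, fits in a short sketch. The gridding method here (linear interpolation) and all data are illustrative stand-ins for the actual IBCAO procedure.

```python
# Minimal Monte Carlo standard-error-grid sketch (synthetic soundings).
import numpy as np
from scipy.interpolate import griddata

rng = np.random.default_rng(0)
n = 200
xy = rng.uniform(0, 100, size=(n, 2))          # sounding positions (km)
depth = 50 + 10 * np.sin(xy[:, 0] / 15)        # synthetic depths (m)
sigma = rng.choice([0.5, 2.0, 5.0], size=n)    # a priori per-source errors (m)

gx, gy = np.meshgrid(np.linspace(5, 95, 50), np.linspace(5, 95, 50))
grids = []
for _ in range(100):
    perturbed = depth + sigma * rng.standard_normal(n)   # perturb base data
    grids.append(griddata(xy, perturbed, (gx, gy), method="linear"))

stderr_grid = np.nanstd(np.stack(grids), axis=0)         # pointwise standard error
print(f"median grid standard error: {np.nanmedian(stderr_grid):.2f} m")
```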

  10. A targeted metabolomics approach for clinical diagnosis of inborn errors of metabolism.

    PubMed

    Jacob, Minnie; Malkawi, Abeer; Albast, Nour; Al Bougha, Salam; Lopata, Andreas; Dasouki, Majed; Abdel Rahman, Anas M

    2018-09-26

    Metabolome, the ultimate functional product of the genome, can be studied through identification and quantification of small molecules. The global metabolome influences the individual phenotype through clinical and environmental interventions. Metabolomics has become an integral part of clinical research and has allowed for another dimension of better understanding of disease pathophysiology and mechanism. More than 95% of the clinical biochemistry laboratory routine workload is based on small-molecule identification, which can potentially be analyzed through metabolomics. However, multiple challenges in clinical metabolomics impact the entire workflow and data quality, thus the biological interpretation needs to be standardized for a reproducible outcome. Herein, we introduce the establishment of a comprehensive targeted metabolomics method for a panel of 220 clinically relevant metabolites using liquid chromatography-tandem mass spectrometry (LC-MS/MS) standardized for clinical research. The sensitivity, reproducibility and molecular stability of each targeted metabolite (amino acids, organic acids, acylcarnitines, sugars, bile acids, neurotransmitters, polyamines, and hormones) were assessed under multiple experimental conditions. The metabolic tissue distribution was determined in various rat organs. Furthermore, the method was validated in dried blood spot (DBS) samples collected from patients known to have various inborn errors of metabolism (IEMs). Using this approach, our panel appears to be sensitive and robust, as it demonstrated differential and unique metabolic profiles in various rat tissues. Also, as a prospective screening method, this panel of diverse metabolites has the ability to identify patients with a wide range of IEMs who otherwise may need multiple, time-consuming and expensive biochemical assays, causing a delay in clinical management. Copyright © 2018 Elsevier B.V. All rights reserved.

  11. Total ozone trend significance from space time variability of daily Dobson data

    NASA Technical Reports Server (NTRS)

    Wilcox, R. W.

    1981-01-01

    Estimates of standard errors of total ozone time and area means, as derived from ozone's natural temporal and spatial variability and autocorrelation in middle latitudes determined from daily Dobson data are presented. Assessing the significance of apparent total ozone trends is equivalent to assessing the standard error of the means. Standard errors of time averages depend on the temporal variability and correlation of the averaged parameter. Trend detectability is discussed, both for the present network and for satellite measurements.
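
    The dependence of the standard error of a time mean on autocorrelation can be made concrete with the usual AR(1) effective-sample-size approximation: for lag-one correlation r, roughly n(1-r)/(1+r) observations are effectively independent, inflating the naive standard error. The values below are illustrative, not Dobson-network statistics.

```python
# Standard error of a time mean under AR(1) autocorrelation (illustrative).
import numpy as np

def se_of_mean(sigma, n, r):
    n_eff = n * (1 - r) / (1 + r)        # effective number of independent obs
    return sigma / np.sqrt(n_eff)

sigma, n = 10.0, 365                      # daily ozone SD (DU), one year of data
for r in (0.0, 0.5, 0.8):
    print(f"r = {r:.1f}: SE of annual mean = {se_of_mean(sigma, n, r):.2f} DU")
```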

  12. Precision modelling of M dwarf stars: the magnetic components of CM Draconis

    NASA Astrophysics Data System (ADS)

    MacDonald, J.; Mullan, D. J.

    2012-04-01

    The eclipsing binary CM Draconis (CM Dra) contains two nearly identical red dwarfs of spectral class dM4.5. The masses and radii of the two components have been reported with unprecedentedly small statistical errors: for M, these errors are 1 part in 260, while for R, the errors reported by Morales et al. are 1 part in 130. When compared with standard stellar models with appropriate mass and age (≈4 Gyr), the empirical results indicate that both components are discrepant from the models in the following sense: the observed stars are larger in R ('bloated'), by several standard deviations, than the models predict. The observed luminosities are also lower than the models predict. Here, we first attempt to model the two components of CM Dra in the context of standard (non-magnetic) stellar models using a systematic array of different assumptions about helium abundances (Y), heavy element abundances (Z), opacities and mixing length parameter (α). We find no 4-Gyr-old models with plausible values of these four parameters that fit the observed L and R within the reported statistical error bars. However, CM Dra is known to contain magnetic fields, as evidenced by the occurrence of star-spots and flares. Here we ask: can inclusion of magnetic effects into stellar evolution models lead to fits of L and R within the error bars? Morales et al. have reported that the presence of polar spots results in a systematic overestimate of R by a few per cent when eclipses are interpreted with a standard code. In a star where spots cover a fraction f of the surface area, we find that the revised R and L for CM Dra A can be fitted within the error bars by varying the parameter α. The latter is often assumed to be reduced by the presence of magnetic fields, although the reduction in α as a function of B is difficult to quantify. An alternative magnetic effect, namely inhibition of the onset of convection, can be readily quantified in terms of a magnetic parameter δ ≈ B²/(4πγp_gas) (where B is the strength of the local vertical magnetic field). In the context of δ models in which B is not allowed to exceed a 'ceiling' of 10⁶ G, we find that the revised R and L can also be fitted, within the error bars, in a finite region of the f-δ plane. The permitted values of δ near the surface lead us to estimate that the vertical field strength on the surface of CM Dra A is about 500 G, in good agreement with independent observational evidence for similar low-mass stars. Recent results for another binary with parameters close to those of CM Dra suggest that metallicity differences cannot be the dominant explanation for the bloating of the two components of CM Dra.

  13. Predicting the Earth encounters of (99942) Apophis

    NASA Technical Reports Server (NTRS)

    Giorgini, Jon D.; Benner, Lance A. M.; Ostro, Steven J.; Nolan, Michael C.; Busch, Michael W.

    2007-01-01

    Arecibo delay-Doppler measurements of (99942) Apophis in 2005 and 2006 resulted in a five standard-deviation trajectory correction to the optically predicted close approach distance to Earth in 2029. The radar measurements reduced the volume of the statistical uncertainty region entering the encounter to 7.3% of the pre-radar solution, but increased the trajectory uncertainty growth rate across the encounter by 800% due to the closer predicted approach to the Earth. A small estimated Earth impact probability remained for 2036. With standard-deviation plane-of-sky position uncertainties for 2007-2010 already less than 0.2 arcsec, the best near-term ground-based optical astrometry can only weakly affect the trajectory estimate. While the potential for impact in 2036 will likely be excluded in 2013 (if not 2011) using ground-based optical measurements, approximations within the Standard Dynamical Model (SDM) used to estimate and predict the trajectory from the current era are sufficient to obscure the difference between a predicted impact and a miss in 2036 by altering the dynamics leading into the 2029 encounter. Normal impact probability assessments based on the SDM become problematic without knowledge of the object's physical properties; impact could be excluded while the actual dynamics still permit it. Calibrated position uncertainty intervals are developed to compensate for this by characterizing the minimum and maximum effect of physical parameters on the trajectory. Uncertainty in accelerations related to solar radiation can cause between 82 and 4720 Earth-radii of trajectory change relative to the SDM by 2036. If an actionable hazard exists, alteration by 2-10% of Apophis' total absorption of solar radiation in 2018 could be sufficient to produce a six standard-deviation trajectory change by 2036 given physical characterization; even a 0.5% change could produce a trajectory shift of one Earth-radius by 2036 for all possible spin-poles and likely masses. Planetary ephemeris uncertainties are the next greatest source of systematic error, causing up to 23 Earth-radii of uncertainty. The SDM Earth point-mass assumption introduces an additional 2.9 Earth-radii of prediction error by 2036. Unmodeled asteroid perturbations produce as much as 2.3 Earth-radii of error. We find no future small-body encounters likely to yield an Apophis mass determination prior to 2029. However, asteroid (144898) 2004 VD17, itself having a statistical Earth impact in 2102, will probably encounter Apophis at 6.7 lunar distances in 2034, their uncertainty regions coming as close as 1.6 lunar distances near the center of both SDM probability distributions.

  14. Impact of electronic chemotherapy order forms on prescribing errors at an urban medical center: results from an interrupted time-series analysis.

    PubMed

    Elsaid, K; Truong, T; Monckeberg, M; McCarthy, H; Butera, J; Collins, C

    2013-12-01

    To evaluate the impact of electronic standardized chemotherapy templates on incidence and types of prescribing errors. A quasi-experimental interrupted time series with segmented regression. A 700-bed multidisciplinary tertiary care hospital with an ambulatory cancer center. A multidisciplinary team including oncology physicians, nurses, pharmacists and information technologists. Standardized, regimen-specific, chemotherapy prescribing forms were developed and implemented over a 32-month period. Trend of monthly prevented prescribing errors per 1000 chemotherapy doses during the pre-implementation phase (30 months), immediate change in the error rate from pre-implementation to implementation and trend of errors during the implementation phase. Errors were analyzed according to their types: errors in communication or transcription, errors in dosing calculation and errors in regimen frequency or treatment duration. Relative risk (RR) of errors in the post-implementation phase (28 months) compared with the pre-implementation phase was computed with 95% confidence interval (CI). Baseline monthly error rate was stable with 16.7 prevented errors per 1000 chemotherapy doses. A 30% reduction in prescribing errors was observed with initiating the intervention. With implementation, a negative change in the slope of prescribing errors was observed (coefficient = -0.338; 95% CI: -0.612 to -0.064). The estimated RR of transcription errors was 0.74; 95% CI (0.59-0.92). The estimated RR of dosing calculation errors was 0.06; 95% CI (0.03-0.10). The estimated RR of chemotherapy frequency/duration errors was 0.51; 95% CI (0.42-0.62). Implementing standardized chemotherapy-prescribing templates significantly reduced all types of prescribing errors and improved chemotherapy safety.
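
    The segmented regression behind an interrupted time series fits a baseline level, a baseline trend, a level change at implementation, and a post-implementation slope change. The sketch below is a generic reconstruction with synthetic data shaped like the reported results (stable baseline near 16.7 errors per 1000 doses, ~30% immediate drop, negative post-trend); it is not the authors' model or data.

```python
# Minimal segmented regression for an interrupted time series (synthetic data).
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(58)                           # 30 pre- and 28 post-implementation months
post = (t >= 30).astype(float)              # level-change indicator
t_post = np.where(post > 0, t - 29, 0)      # months since implementation

rate = 16.7 - 5.0 * post - 0.34 * t_post + rng.normal(0, 1.0, size=58)

X = np.column_stack([np.ones(58), t, post, t_post])
beta, *_ = np.linalg.lstsq(X, rate, rcond=None)
print(dict(zip(["level", "trend", "level_change", "slope_change"], beta.round(3))))
```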

  15. Design and Calibration of a Novel Bio-Inspired Pixelated Polarized Light Compass.

    PubMed

    Han, Guoliang; Hu, Xiaoping; Lian, Junxiang; He, Xiaofeng; Zhang, Lilian; Wang, Yujie; Dong, Fengliang

    2017-11-14

    Animals, such as Savannah sparrows and North American monarch butterflies, are able to obtain compass information from skylight polarization patterns to help them navigate effectively and robustly. Inspired by excellent navigation ability of animals, this paper proposes a novel image-based polarized light compass, which has the advantages of having a small size and being light weight. Firstly, the polarized light compass, which is composed of a Charge Coupled Device (CCD) camera, a pixelated polarizer array and a wide-angle lens, is introduced. Secondly, the measurement method of a skylight polarization pattern and the orientation method based on a single scattering Rayleigh model are presented. Thirdly, the error model of the sensor, mainly including the response error of CCD pixels and the installation error of the pixelated polarizer, is established. A calibration method based on iterative least squares estimation is proposed. In the outdoor environment, the skylight polarization pattern can be measured in real time by our sensor. The orientation accuracy of the sensor increases with the decrease of the solar elevation angle, and the standard deviation of orientation error is 0 . 15 ∘ at sunset. Results of outdoor experiments show that the proposed polarization navigation sensor can be used for outdoor autonomous navigation.

  16. Design and Calibration of a Novel Bio-Inspired Pixelated Polarized Light Compass

    PubMed Central

    Hu, Xiaoping; Lian, Junxiang; He, Xiaofeng; Zhang, Lilian; Wang, Yujie; Dong, Fengliang

    2017-01-01

    Animals, such as Savannah sparrows and North American monarch butterflies, are able to obtain compass information from skylight polarization patterns to help them navigate effectively and robustly. Inspired by excellent navigation ability of animals, this paper proposes a novel image-based polarized light compass, which has the advantages of having a small size and being light weight. Firstly, the polarized light compass, which is composed of a Charge Coupled Device (CCD) camera, a pixelated polarizer array and a wide-angle lens, is introduced. Secondly, the measurement method of a skylight polarization pattern and the orientation method based on a single scattering Rayleigh model are presented. Thirdly, the error model of the sensor, mainly including the response error of CCD pixels and the installation error of the pixelated polarizer, is established. A calibration method based on iterative least squares estimation is proposed. In the outdoor environment, the skylight polarization pattern can be measured in real time by our sensor. The orientation accuracy of the sensor increases with the decrease of the solar elevation angle, and the standard deviation of orientation error is 0.15∘ at sunset. Results of outdoor experiments show that the proposed polarization navigation sensor can be used for outdoor autonomous navigation. PMID:29135927

  17. Intravenous Chemotherapy Compounding Errors in a Follow-Up Pan-Canadian Observational Study.

    PubMed

    Gilbert, Rachel E; Kozak, Melissa C; Dobish, Roxanne B; Bourrier, Venetia C; Koke, Paul M; Kukreti, Vishal; Logan, Heather A; Easty, Anthony C; Trbovich, Patricia L

    2018-05-01

    Intravenous (IV) compounding safety has garnered recent attention as a result of high-profile incidents, awareness efforts from the safety community, and increasingly stringent practice standards. New research with more-sensitive error detection techniques continues to reinforce that error rates with manual IV compounding are unacceptably high. In 2014, our team published an observational study that described three types of previously unrecognized and potentially catastrophic latent chemotherapy preparation errors in Canadian oncology pharmacies that would otherwise be undetectable. We expand on this research and explore whether additional potential human failures are yet to be addressed by practice standards. Field observations were conducted in four cancer center pharmacies in four Canadian provinces from January 2013 to February 2015. Human factors specialists observed and interviewed pharmacy managers, oncology pharmacists, pharmacy technicians, and pharmacy assistants as they carried out their work. Emphasis was on latent errors (potential human failures) that could lead to outcomes such as wrong drug, dose, or diluent. Given the relatively short observational period, no active failures or actual errors were observed. However, 11 latent errors in chemotherapy compounding were identified. In terms of severity, all 11 errors create the potential for a patient to receive the wrong drug or dose, which in the context of cancer care, could lead to death or permanent loss of function. Three of the 11 practices were observed in our previous study, but eight were new. Applicable Canadian and international standards and guidelines do not explicitly address many of the potentially error-prone practices observed. We observed a significant degree of risk for error in manual mixing practice. These latent errors may exist in other regions where manual compounding of IV chemotherapy takes place. Continued efforts to advance standards, guidelines, technological innovation, and chemical quality testing are needed.

  18. Intimate Partner Violence, 1993-2010

    MedlinePlus

    ... appendix table 2 for standard errors. *Due to methodological changes, use caution when comparing 2006 NCVS criminal ...

  19. Estimating extreme stream temperatures by the standard deviate method

    NASA Astrophysics Data System (ADS)

    Bogan, Travis; Othmer, Jonathan; Mohseni, Omid; Stefan, Heinz

    2006-02-01

    It is now widely accepted that global climate warming is taking place on the earth. Among many other effects, a rise in air temperatures is expected to increase stream temperatures as well. However, due to evaporative cooling, stream temperatures do not increase linearly with increasing air temperatures indefinitely. Within the anticipated bounds of climate warming, extreme stream temperatures may therefore not rise substantially. With this concept in mind, past extreme temperatures measured at 720 USGS stream gauging stations were analyzed by the standard deviate method. In this method the highest stream temperatures are expressed as the mean temperature of a measured partial maximum stream temperature series plus its standard deviation multiplied by a factor K_E (the standard deviate). Various K_E values were explored; values of K_E larger than 8 were found to be physically unreasonable. It is concluded that the value of K_E should be in the range from 7 to 8. A unit error in estimating K_E translates into a typical stream temperature error of about 0.5 °C. Using a logistic model for the stream temperature/air temperature relationship, a one-degree error in air temperature gives a typical error of 0.16 °C in stream temperature. With a projected error in the enveloping standard deviate dK_E = 1.0 (range 0.5-1.5) and an error in projected high air temperature dT_a = 2 °C (range 0-4 °C), the total projected stream temperature error is estimated as dT_s = 0.8 °C.
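
    The standard deviate estimate itself is a one-line calculation, shown below together with the error propagation quoted in the text (about 0.5 °C per unit of K_E and 0.16 °C per °C of air temperature error). The temperature series is illustrative, not USGS data.

```python
# Standard deviate method for extreme stream temperature (illustrative series).
import numpy as np

partial_maxima = np.array([27.1, 28.4, 26.9, 29.0, 27.8, 28.2, 27.5])  # °C
KE = 7.5                                    # recommended range: 7 to 8

t_extreme = partial_maxima.mean() + KE * partial_maxima.std(ddof=1)

# Error propagation from the abstract: dTs ~ 0.5*dKE + 0.16*dTa.
dTs = 0.5 * 1.0 + 0.16 * 2.0                # dKE = 1.0, dTa = 2 °C
print(f"estimated extreme: {t_extreme:.1f} °C +/- {dTs:.1f} °C")
```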

  20. Decreasing patient identification band errors by standardizing processes.

    PubMed

    Walley, Susan Chu; Berger, Stephanie; Harris, Yolanda; Gallizzi, Gina; Hayes, Leslie

    2013-04-01

    Patient identification (ID) bands are an essential component in patient ID. Quality improvement methodology has been applied as a model to reduce ID band errors although previous studies have not addressed standardization of ID bands. Our specific aim was to decrease ID band errors by 50% in a 12-month period. The Six Sigma DMAIC (define, measure, analyze, improve, and control) quality improvement model was the framework for this study. ID bands at a tertiary care pediatric hospital were audited from January 2011 to January 2012 with continued audits to June 2012 to confirm the new process was in control. After analysis, the major improvement strategy implemented was standardization of styles of ID bands and labels. Additional interventions included educational initiatives regarding the new ID band processes and disseminating institutional and nursing unit data. A total of 4556 ID bands were audited with a preimprovement ID band error average rate of 9.2%. Significant variation in the ID band process was observed, including styles of ID bands. Interventions were focused on standardization of the ID band and labels. The ID band error rate improved to 5.2% in 9 months (95% confidence interval: 2.5-5.5; P < .001) and was maintained for 8 months. Standardization of ID bands and labels in conjunction with other interventions resulted in a statistical decrease in ID band error rates. This decrease in ID band error rates was maintained over the subsequent 8 months.

  1. Methods for estimating flood frequency in Montana based on data through water year 1998

    USGS Publications Warehouse

    Parrett, Charles; Johnson, Dave R.

    2004-01-01

    Annual peak discharges having recurrence intervals of 2, 5, 10, 25, 50, 100, 200, and 500 years (T-year floods) were determined for 660 gaged sites in Montana and in adjacent areas of Idaho, Wyoming, and Canada, based on data through water year 1998. The updated flood-frequency information was subsequently used in regression analyses, either ordinary or generalized least squares, to develop equations relating T-year floods to various basin and climatic characteristics, equations relating T-year floods to active-channel width, and equations relating T-year floods to bankfull width. The equations can be used to estimate flood frequency at ungaged sites. Montana was divided into eight regions, within which flood characteristics were considered to be reasonably homogeneous, and the three sets of regression equations were developed for each region. A measure of the overall reliability of the regression equations is the average standard error of prediction. The average standard errors of prediction for the equations based on basin and climatic characteristics ranged from 37.4 percent to 134.1 percent. Average standard errors of prediction for the equations based on active-channel width ranged from 57.2 percent to 141.3 percent. Average standard errors of prediction for the equations based on bankfull width ranged from 63.1 percent to 155.5 percent. In most regions, the equations based on basin and climatic characteristics generally had smaller average standard errors of prediction than equations based on active-channel or bankfull width. An exception was the Southeast Plains Region, where all equations based on active-channel width had smaller average standard errors of prediction than equations based on basin and climatic characteristics or bankfull width. Methods for weighting estimates derived from the basin- and climatic-characteristic equations and the channel-width equations also were developed. The weights were based on the cross correlation of residuals from the different methods and the average standard errors of prediction. When all three methods were combined, the average standard errors of prediction ranged from 37.4 percent to 120.2 percent. Weighting of estimates reduced the standard errors of prediction for all T-year flood estimates in four regions, reduced the standard errors of prediction for some T-year flood estimates in two regions, and provided no reduction in average standard error of prediction in two regions. A computer program for solving the regression equations, weighting estimates, and determining reliability of individual estimates was developed and placed on the USGS Montana District World Wide Web page. A new regression method, termed Region of Influence regression, also was tested. Test results indicated that the Region of Influence method was not as reliable as the regional equations based on generalized least squares regression. Two additional methods for estimating flood frequency at ungaged sites located on the same streams as gaged sites also are described. The first method, based on a drainage-area-ratio adjustment, is intended for use on streams where the ungaged site of interest is located near a gaged site. The second method, based on interpolation between gaged sites, is intended for use on streams that have two or more streamflow-gaging stations.
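
    The weighting of estimates from different methods can be illustrated with simple inverse-variance weights in log space, where standard errors of prediction are defined; this sketch ignores the cross-correlation adjustment used in the report, and all numbers are hypothetical.

```python
# Hedged sketch: inverse-variance weighting of two T-year flood estimates.
import numpy as np

def weighted_estimate(q1, se1_pct, q2, se2_pct):
    # Approximate percent standard errors as SDs of log discharge.
    v1, v2 = (se1_pct / 100) ** 2, (se2_pct / 100) ** 2
    w1 = v2 / (v1 + v2)                           # inverse-variance weight
    logq = w1 * np.log(q1) + (1 - w1) * np.log(q2)
    se = np.sqrt(v1 * v2 / (v1 + v2)) * 100       # combined percent SE
    return np.exp(logq), se

# Hypothetical 100-year estimates from the regression and channel-width methods.
q, se = weighted_estimate(q1=850.0, se1_pct=60.0, q2=1100.0, se2_pct=90.0)
print(f"weighted 100-year flood: {q:.0f} cfs, SE about {se:.0f}%")
```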

  2. Determination of low intrinsic clearance in vitro: the benefit of a novel internal standard in human hepatocyte incubations.

    PubMed

    Zanelli, Ugo; Michna, Thomas; Petersson, Carl

    2018-03-26

    1. A novel method utilizing an internal standard in hepatocyte incubations has been developed and demonstrated to decrease the variability in the determination of intrinsic clearance (CLint) in this system. The reduced variability was shown to allow differentiation of lower elimination rate constants from noise. 2. The suggested method was able to compensate for a small but systematic error (0.5 µL/min/10⁶ cells) caused by evaporation of approximately 15% of the volume during the incubation time. 3. The approach was validated using six commercial drugs (ketoprofen, tolbutamide, phenacetin, etodolac and quinidine) which were metabolized by different pathways. 4. The suggested internal standard, MSC1815677, was extensively characterized and the acquired data suggest that it fulfills the requirements of an internal standard present during the incubation. The proposed internal standard was stable during the incubation and showed a low potential to inhibit drug metabolizing enzymes and transporters. With MSC1815677 we propose a novel, simple, robust and cost-effective method to address the challenges in the estimation of low clearance in hepatocyte incubations.

  3. Adjuvant corneal crosslinking to prevent hyperopic LASIK regression.

    PubMed

    Aslanides, Ioannis M; Mukherjee, Achyut N

    2013-01-01

    To report the long term outcomes, safety, stability, and efficacy in a pilot series of simultaneous hyperopic laser assisted in situ keratomileusis (LASIK) and corneal crosslinking (CXL). A small cohort series of five eyes, with clinically suboptimal topography and/or thickness, underwent LASIK surgery with immediate riboflavin application under the flap, followed by UV light irradiation. Postoperative assessment was performed at 1, 3, 6, and 12 months, with late follow up at 4 years, and results were compared with a matched cohort that received LASIK only. The average age of the LASIK-CXL group was 39 years (26-46), and the average spherical equivalent hyperopic refractive error was +3.45 diopters (standard deviation 0.76; range 2.5 to 4.5). All eyes maintained refractive stability over the 4 years. There were no complications related to CXL, and topographic and clinical outcomes were as expected for standard LASIK. This limited series suggests that simultaneous LASIK and CXL for hyperopia is safe. Outcomes of the small cohort suggest that this technique may be promising for ameliorating hyperopic regression, presumed to be biomechanical in origin, and may also address ectasia risk.

  4. Towards reporting standards for neuropsychological study results: A proposal to minimize communication errors with standardized qualitative descriptors for normalized test scores.

    PubMed

    Schoenberg, Mike R; Rum, Ruba S

    2017-11-01

    Rapid, clear and efficient communication of neuropsychological results is essential to benefit patient care. Errors in communication are a leading cause of medical errors; nevertheless, there remains a lack of consistency in how neuropsychological scores are communicated. A major limitation in the communication of neuropsychological results is the inconsistent use of qualitative descriptors for standardized test scores and the use of vague terminology. A PubMed search from 1 Jan 2007 to 1 Aug 2016 was conducted to identify guidelines or consensus statements for the description and reporting of qualitative terms used to communicate neuropsychological test scores. The review found the use of confusing and overlapping terms to describe various ranges of percentile standardized test scores. In response, we propose a simplified set of qualitative descriptors for normalized test scores (Q-Simple) as a means to reduce errors in communicating test results. The Q-Simple qualitative terms are: 'very superior', 'superior', 'high average', 'average', 'low average', 'borderline' and 'abnormal/impaired'. A case example illustrates the proposed Q-Simple qualitative classification system to communicate neuropsychological results for neurosurgical planning. The Q-Simple qualitative descriptor system aims to improve and standardize communication of standardized neuropsychological test scores. Further research is needed to evaluate neuropsychological communication errors. Conveying the clinical implications of neuropsychological results in a manner that minimizes risk for communication errors is a quintessential component of evidence-based practice. Copyright © 2017 Elsevier B.V. All rights reserved.
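    A minimal sketch of how such a descriptor system might be applied in practice is shown below. The cutoff scores are assumptions chosen purely for illustration (standard scores with mean 100 and SD 15), not the cutoffs proposed in the paper.

```python
# Assumed, illustrative cutoffs on standard scores (mean 100, SD 15);
# the Q-Simple labels are from the abstract, the boundaries are not.
Q_SIMPLE = [
    (130, "very superior"),
    (120, "superior"),
    (110, "high average"),
    (90,  "average"),
    (80,  "low average"),
    (70,  "borderline"),
]

def q_simple(score: float) -> str:
    for cutoff, label in Q_SIMPLE:
        if score >= cutoff:
            return label
    return "abnormal/impaired"

print(q_simple(112))  # -> high average
print(q_simple(65))   # -> abnormal/impaired
```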

  5. On the robustness of bucket brigade quantum RAM

    NASA Astrophysics Data System (ADS)

    Arunachalam, Srinivasan; Gheorghiu, Vlad; Jochym-O'Connor, Tomas; Mosca, Michele; Varshinee Srinivasan, Priyaa

    2015-12-01

    We study the robustness of the bucket brigade quantum random access memory model introduced by Giovannetti et al (2008 Phys. Rev. Lett. 100 160501). Due to a result of Regev and Schiff (ICALP '08 733), we show that for a class of error models the error rate per gate in the bucket brigade quantum memory has to be of order o(2^{-n/2}) (where N = 2^n is the size of the memory) whenever the memory is used as an oracle for the quantum searching problem. We conjecture that this is the case for any realistic error model that will be encountered in practice, and that for algorithms with super-polynomially many oracle queries the error rate must be super-polynomially small, which further motivates the need for quantum error correction. By contrast, for algorithms such as matrix inversion (Harrow et al 2009 Phys. Rev. Lett. 103 150502) or quantum machine learning (Rebentrost et al 2014 Phys. Rev. Lett. 113 130503) that only require a polynomial number of queries, the error rate only needs to be polynomially small and quantum error correction may not be required. We introduce a circuit model for the quantum bucket brigade architecture and argue that quantum error correction for the circuit causes the quantum bucket brigade architecture to lose its primary advantage of a small number of 'active' gates, since all components have to be actively error corrected.

  6. Development of a simple system for simultaneously measuring 6DOF geometric motion errors of a linear guide.

    PubMed

    Qibo, Feng; Bin, Zhang; Cunxing, Cui; Cuifang, Kuang; Yusheng, Zhai; Fenglin, You

    2013-11-04

    A simple method for simultaneously measuring the 6DOF geometric motion errors of a linear guide was proposed. The mechanisms for measuring straightness and angular errors and for enhancing their resolution are described in detail. A common-path method for measuring laser beam drift was proposed and used to compensate for the errors produced by the laser beam drift in the 6DOF geometric error measurements. A compact 6DOF system was built. Calibration experiments against standard measurement instruments showed that our system has a standard deviation of 0.5 µm over a range of ±100 µm for the straightness measurements, and standard deviations of 0.5", 0.5", and 1.0" over a range of ±100" for pitch, yaw, and roll measurements, respectively.

  7. Backus Effect and Perpendicular Errors in Harmonic Models of Real vs. Synthetic Data

    NASA Technical Reports Server (NTRS)

    Voorhies, C. V.; Santana, J.; Sabaka, T.

    1999-01-01

    Measurements of geomagnetic scalar intensity on a thin spherical shell alone are not enough to separate internal from external source fields; moreover, such scalar data are not enough for accurate modeling of the vector field from internal sources because of unmodeled fields and small data errors. Spherical harmonic models of the geomagnetic potential fitted to scalar data alone therefore suffer from the well-understood Backus effect and perpendicular errors. Curiously, errors in some models of simulated 'data' are very much less than those in models of real data. We analyze select Magsat vector and scalar measurements separately to illustrate the Backus effect and perpendicular errors in models of real scalar data. By using a model to synthesize 'data' at the observation points, and by adding various types of 'noise', we illustrate such errors in models of synthetic 'data'. Perpendicular errors prove quite sensitive to the maximum degree in the spherical harmonic expansion of the potential field model fitted to the scalar data. Small errors in models of synthetic 'data' are found to be an artifact of matched truncation levels. For example, consider scalar synthetic 'data' computed from a degree 14 model. A degree 14 model fitted to such synthetic 'data' yields negligible error, but amplifies 4 nT (rmss) added noise into a 60 nT error (rmss); however, a degree 12 model fitted to the noisy 'data' suffers a 492 nT error (rmss through degree 12). Real geomagnetic measurements are unaware of model truncation, so the small errors indicated by some simulations cannot be realized in practice. Errors in models fitted to scalar data alone approach 1000 nT (rmss) and several thousand nT (maximum).

  8. Bootstrap Standard Errors for Maximum Likelihood Ability Estimates When Item Parameters Are Unknown

    ERIC Educational Resources Information Center

    Patton, Jeffrey M.; Cheng, Ying; Yuan, Ke-Hai; Diao, Qi

    2014-01-01

    When item parameter estimates are used to estimate the ability parameter in item response models, the standard error (SE) of the ability estimate must be corrected to reflect the error carried over from item calibration. For maximum likelihood (ML) ability estimates, a corrected asymptotic SE is available, but it requires a long test and the…
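    The correction the abstract refers to can be approximated by a bootstrap that propagates item-calibration error into the ability estimate. The sketch below assumes a 2PL model with hypothetical item parameters and calibration standard errors; it perturbs the item parameters on each replicate and takes the SD of the re-estimated abilities. It is a generic parametric-bootstrap sketch, not the authors' procedure.

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(0)

def p_correct(theta, a, b):
    """2PL probability of a correct response."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

def mle_theta(resp, a, b):
    """ML ability estimate given fixed item parameters."""
    nll = lambda th: -np.sum(resp * np.log(p_correct(th, a, b)) +
                             (1 - resp) * np.log(1 - p_correct(th, a, b)))
    return minimize_scalar(nll, bounds=(-4, 4), method="bounded").x

# Hypothetical calibrated item parameters and their calibration SEs
a_hat = rng.uniform(0.8, 2.0, 20)
b_hat = rng.normal(0.0, 1.0, 20)
se_a, se_b = 0.10, 0.15
resp = rng.binomial(1, p_correct(0.5, a_hat, b_hat))  # one examinee's responses

# Bootstrap: perturb item parameters by their calibration error,
# re-estimate ability, and take the SD of the estimates.
thetas = []
for _ in range(500):
    a_b = np.clip(a_hat + rng.normal(0, se_a, 20), 0.2, None)
    b_b = b_hat + rng.normal(0, se_b, 20)
    thetas.append(mle_theta(resp, a_b, b_b))
print("bootstrap SE of theta-hat:", np.std(thetas))
```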

  9. Comparison of anchor-based and distributional approaches in estimating important difference in common cold.

    PubMed

    Barrett, Bruce; Brown, Roger; Mundt, Marlon

    2008-02-01

    Evaluative health-related quality-of-life instruments used in clinical trials should be able to detect small but important changes in health status. Several approaches to minimal important difference (MID) and responsiveness have been developed. To compare anchor-based and distributional approaches to important difference and responsiveness for the Wisconsin Upper Respiratory Symptom Survey (WURSS), an illness-specific quality of life outcomes instrument. Participants with community-acquired colds self-reported daily using the WURSS-44. Distribution-based methods calculated standardized effect size (ES) and standard error of measurement (SEM). Anchor-based methods compared daily interval changes to global ratings of change, using: (1) standard MID methods based on correspondence to ratings of "a little better" or "somewhat better," and (2) two-level multivariate regression models. About 150 adults were monitored throughout their colds (1,681 sick days): 88% were white, 69% were women, and 50% had completed college. The mean age was 35.5 years (SD = 14.7). WURSS scores increased 2.2 points from the first to second day, and then dropped by an average of 8.2 points per day from days 2 to 7. The SEM averaged 9.1 during these 7 days. Standard methods yielded a between-day MID of 22 points. Regression models of MID projected 11.3-point daily changes. Dividing these estimates of small-but-important difference by pooled SDs yielded coefficients of .425 for standard MID, .218 for the regression model, .177 for SEM, and .157 for ES. These imply per-group sample sizes of 870 using ES, 616 for SEM, 302 for the regression model, and 89 for standard MID, assuming alpha = .05, beta = .20 (80% power), and two-tailed testing. Distribution and anchor-based approaches provide somewhat different estimates of small but important difference, which in turn can have substantial impact on trial design.

  10. NMR structure calculation for all small molecule ligands and non-standard residues from the PDB Chemical Component Dictionary.

    PubMed

    Yilmaz, Emel Maden; Güntert, Peter

    2015-09-01

    An algorithm, CYLIB, is presented for converting molecular topology descriptions from the PDB Chemical Component Dictionary into CYANA residue library entries. The CYANA structure calculation algorithm uses torsion angle molecular dynamics for the efficient computation of three-dimensional structures from NMR-derived restraints. For this, the molecules have to be represented in torsion angle space with rotations around covalent single bonds as the only degrees of freedom. The molecule must be given a tree structure of torsion angles connecting rigid units composed of one or several atoms with fixed relative positions. Setting up CYANA residue library entries therefore involves, besides straightforward format conversion, the non-trivial steps of defining a suitable tree structure of torsion angles and of re-ordering the atoms in a way that is compatible with this tree structure. This can be done manually for small numbers of ligands, but the process is time-consuming and error-prone. An automated method is necessary in order to handle the large number of different potential ligand molecules to be studied in drug design projects. Here, we present an algorithm for this purpose, and show that CYANA structure calculations can be performed with almost all small molecule ligands and non-standard amino acid residues in the PDB Chemical Component Dictionary.
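    The core of the tree-building step is partitioning the molecule into rigid units once rotatable bonds are identified. A toy sketch of that partition is shown below; it is not CYLIB's actual algorithm, and the bond list and rotatable set are hypothetical.

```python
from collections import defaultdict

# Toy molecular graph: atoms as nodes, bonds as edges. Which single bonds
# count as rotatable is assumed here; real code must inspect bond orders,
# rings, and terminal atoms.
bonds = [(0, 1), (1, 2), (2, 3), (3, 4), (2, 5)]
rotatable = {(1, 2), (2, 3)}

# Rigid units = connected components after deleting rotatable bonds.
adj = defaultdict(set)
for u, v in bonds:
    if (u, v) not in rotatable and (v, u) not in rotatable:
        adj[u].add(v); adj[v].add(u)

def components(nodes):
    seen, units = set(), []
    for n in nodes:
        if n in seen:
            continue
        stack, unit = [n], set()
        while stack:
            x = stack.pop()
            if x in unit:
                continue
            unit.add(x); stack.extend(adj[x] - unit)
        seen |= unit
        units.append(sorted(unit))
    return units

atoms = sorted({a for b in bonds for a in b})
print(components(atoms))   # rigid units, to be linked into a torsion tree
```

    Each rotatable bond then becomes a torsion-angle edge between two rigid units, and a traversal of the resulting tree fixes the atom ordering.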

  11. Cost-effectiveness of the stream-gaging program in Nebraska

    USGS Publications Warehouse

    Engel, G.B.; Wahl, K.L.; Boohar, J.A.

    1984-01-01

    This report documents the results of a study of the cost-effectiveness of the streamflow information program in Nebraska. Presently, 145 continuous surface-water stations are operated in Nebraska on a budget of $908,500. Data uses and funding sources are identified for each of the 145 stations. Data from most stations have multiple uses. All stations have sufficient justification for continuation, but two stations primarily are used in short-term research studies; their continued operation needs to be evaluated when the research studies end. The present measurement frequency produces an average standard error for instantaneous discharges of about 12 percent, including periods when stage data are missing. Altering the travel routes and the measurement frequency will allow a reduction in standard error of about 1 percent with the present budget. Standard error could be reduced to about 8 percent if lost record could be eliminated. A minimum budget of $822,000 is required to operate the present network, but operations at that funding level would result in an increase in standard error to about 16 percent. The maximum budget analyzed was $1,363,000, which would result in an average standard error of 6 percent. (USGS)
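    A rough illustration of the budget/accuracy trade-off reported here: assuming, purely for illustration, that average standard error declines roughly linearly in log-budget (a relation the report does not claim), one can interpolate between the quoted points.

```python
import numpy as np

# (budget in $1000s, average standard error in percent) from the abstract;
# the 12% figure includes periods when stage data are missing.
pts = np.array([[822.0, 16.0], [908.5, 12.0], [1363.0, 6.0]])

# Assumed log-linear relation, fit to the three quoted points
coef = np.polyfit(np.log(pts[:, 0]), pts[:, 1], 1)
budget = 1100.0
se = np.polyval(coef, np.log(budget))
print(f"interpolated SE at ${budget:.0f}k: {se:.1f}%")
```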

  12. When do latent class models overstate accuracy for diagnostic and other classifiers in the absence of a gold standard?

    PubMed

    Spencer, Bruce D

    2012-06-01

    Latent class models are increasingly used to assess the accuracy of medical diagnostic tests and other classifications when no gold standard is available and the true state is unknown. When the latent class is treated as the true class, the latent class models provide measures of components of accuracy including specificity and sensitivity and their complements, type I and type II error rates. The error rates according to the latent class model differ from the true error rates, however, and empirical comparisons with a gold standard suggest the true error rates often are larger. We investigate conditions under which the true type I and type II error rates are larger than those provided by the latent class models. Results from Uebersax (1988, Psychological Bulletin 104, 405-416) are extended to accommodate random effects and covariates affecting the responses. The results are important for interpreting the results of latent class analyses. An error decomposition is presented that incorporates an error component from invalidity of the latent class model. © 2011, The International Biometric Society.

  13. Mesoscale Predictability and Error Growth in Short Range Ensemble Forecasts

    NASA Astrophysics Data System (ADS)

    Gingrich, Mark

    Although it was originally suggested that small-scale, unresolved errors corrupt forecasts at all scales through an inverse error cascade, some authors have proposed that those mesoscale circulations resulting from stationary forcing on the larger scale may inherit the predictability of the large-scale motions. Further, the relative contributions of large- and small-scale uncertainties in producing error growth in the mesoscales remain largely unknown. Here, 100-member ensemble forecasts are initialized from an ensemble Kalman filter (EnKF) to simulate two winter storms impacting the East Coast of the United States in 2010. Four verification metrics are considered: the local snow water equivalent, total liquid water, and 850 hPa temperatures representing mesoscale features; and the sea level pressure field representing a synoptic feature. It is found that while the predictability of the mesoscale features can be tied to the synoptic forecast, significant uncertainty existed on the synoptic scale at lead times as short as 18 hours. Therefore, mesoscale details remained uncertain in both storms due to uncertainties at the large scale. Additionally, the ensemble perturbation kinetic energy did not show an appreciable upscale propagation of error for either case. Instead, the initial condition perturbations from the cycling EnKF were maximized at large scales and immediately amplified at all scales without requiring initial upscale propagation. This suggests that relatively small errors in the synoptic-scale initialization may have more importance in limiting predictability than errors in the unresolved, small-scale initial conditions.

  14. Prepopulated radiology report templates: a prospective analysis of error rate and turnaround time.

    PubMed

    Hawkins, C M; Hall, S; Hardin, J; Salisbury, S; Towbin, A J

    2012-08-01

    Current speech recognition software allows exam-specific standard reports to be prepopulated into the dictation field based on the radiology information system procedure code. While it is thought that prepopulating reports can decrease the time required to dictate a study and the overall number of errors in the final report, this hypothesis has not been studied in a clinical setting. A prospective study was performed. During the first week, radiologists dictated all studies using prepopulated standard reports. During the second week, all studies were dictated after prepopulated reports had been disabled. Final radiology reports were evaluated for 11 different types of errors. Each error within a report was classified individually. The median time required to dictate an exam was compared between the 2 weeks. There were 12,387 reports dictated during the study, of which, 1,173 randomly distributed reports were analyzed for errors. There was no difference in the number of errors per report between the 2 weeks; however, radiologists overwhelmingly preferred using a standard report both weeks. Grammatical errors were by far the most common error type, followed by missense errors and errors of omission. There was no significant difference in the median dictation time when comparing studies performed each week. The use of prepopulated reports does not alone affect the error rate or dictation time of radiology reports. While it is a useful feature for radiologists, it must be coupled with other strategies in order to decrease errors.

  15. Quantified Choice of Root-Mean-Square Errors of Approximation for Evaluation and Power Analysis of Small Differences between Structural Equation Models

    ERIC Educational Resources Information Center

    Li, Libo; Bentler, Peter M.

    2011-01-01

    MacCallum, Browne, and Cai (2006) proposed a new framework for evaluation and power analysis of small differences between nested structural equation models (SEMs). In their framework, the null and alternative hypotheses for testing a small difference in fit and its related power analyses were defined by some chosen root-mean-square error of…

  16. Probabilistic performance estimators for computational chemistry methods: The empirical cumulative distribution function of absolute errors

    NASA Astrophysics Data System (ADS)

    Pernot, Pascal; Savin, Andreas

    2018-06-01

    Benchmarking studies in computational chemistry use reference datasets to assess the accuracy of a method through error statistics. The commonly used error statistics, such as the mean signed and mean unsigned errors, do not inform end-users on the expected amplitude of prediction errors attached to these methods. We show that, the distributions of model errors being neither normal nor zero-centered, these error statistics cannot be used to infer prediction error probabilities. To overcome this limitation, we advocate for the use of more informative statistics, based on the empirical cumulative distribution function of unsigned errors, namely, (1) the probability for a new calculation to have an absolute error below a chosen threshold and (2) the maximal amplitude of errors one can expect with a chosen high confidence level. Those statistics are also shown to be well suited for benchmarking and ranking studies. Moreover, the standard error on all benchmarking statistics depends on the size of the reference dataset. Systematic publication of these standard errors would be very helpful to assess the statistical reliability of benchmarking conclusions.
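    Both advocated statistics follow directly from the empirical distribution of absolute errors. A minimal sketch with hypothetical benchmark errors:

```python
import numpy as np

def ecdf_stats(errors, threshold, confidence=0.95):
    """The two statistics advocated by the abstract, from |errors|.

    Returns (1) P(|err| < threshold), and (2) the |error| amplitude
    not exceeded with the chosen confidence level.
    """
    abs_err = np.abs(np.asarray(errors, dtype=float))
    p_below = np.mean(abs_err < threshold)
    q_conf = np.quantile(abs_err, confidence)
    return p_below, q_conf

# Hypothetical benchmark errors (e.g., kcal/mol) for some method
err = np.array([-0.3, 1.2, 0.8, -2.1, 0.1, 0.5, -0.9, 3.4, -0.2, 0.7])
p, q95 = ecdf_stats(err, threshold=1.0)
print(f"P(|err| < 1.0) = {p:.2f}; 95% of errors below {q95:.2f}")
```

    The standard error of both quantities shrinks with the size of the reference dataset, which is why the abstract urges that it be reported alongside the statistics.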

  17. Cost effectiveness of the US Geological Survey stream-gaging program in Alabama

    USGS Publications Warehouse

    Jeffcoat, H.H.

    1987-01-01

    A study of the cost effectiveness of the stream gaging program in Alabama identified data uses and funding sources for 72 surface water stations (including dam stations, slope stations, and continuous-velocity stations) operated by the U.S. Geological Survey in Alabama with a budget of $393,600. Of these, 58 gaging stations were used in all phases of the analysis at a funding level of $328,380. For the current policy of operation of the 58-station program, the average standard error of estimation of instantaneous discharge is 29.3%. This overall level of accuracy can be maintained with a budget of $319,800 by optimizing routes and implementing some policy changes. The maximum budget considered in the analysis was $361,200, which gave an average standard error of estimation of 20.6%. The minimum budget considered was $299,360, with an average standard error of estimation of 36.5%. The study indicates that a major source of error in the stream gaging records is lost or missing data that are the result of streamside equipment failure. If perfect equipment were available, the standard error in estimating instantaneous discharge under the current program and budget could be reduced to 18.6%. This can also be interpreted to mean that the streamflow data records have a standard error of this magnitude during times when the equipment is operating properly. (Author's abstract)

  18. Analysis of the PLL phase error in presence of simulated ionospheric scintillation events

    NASA Astrophysics Data System (ADS)

    Forte, B.

    2012-01-01

    The functioning of standard phase locked loops (PLL), including those used to track radio signals from Global Navigation Satellite Systems (GNSS), is based on a linear approximation which holds in presence of small phase errors. Such an approximation represents a reasonable assumption in most of the propagation channels. However, in presence of a fading channel the phase error may become large, making the linear approximation no longer valid. The PLL is then expected to operate in a non-linear regime. As PLLs are generally designed and expected to operate in their linear regime, whenever the non-linear regime comes into play, they will experience a serious limitation in their capability to track the corresponding signals. The phase error and the performance of a typical PLL embedded into a commercial multiconstellation GNSS receiver were analyzed in presence of simulated ionospheric scintillation. Large phase errors occurred during scintillation-induced signal fluctuations although cycle slips only occurred during the signal re-acquisition after a loss of lock. Losses of lock occurred whenever the signal faded below the minimum C/N0 threshold allowed for tracking. The simulations were performed for different signals (GPS L1C/A, GPS L2C, GPS L5 and Galileo L1). L5 and L2C proved to be weaker than L1. It appeared evident that the conditions driving the PLL phase error in the specific case of GPS receivers in presence of scintillation-induced signal perturbations need to be evaluated in terms of the combination of the minimum C/N0 tracking threshold, lock detector thresholds, possible cycle slips in the tracking PLL and accuracy of the observables (i.e. the error propagation onto the observables stage).
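    A toy first-order loop (not the receiver's actual PLL) illustrates why the linear approximation fails at large phase errors: the sin(·) discriminator has little restoring force near ±π, so large errors decay far more slowly than the linear model predicts.

```python
import numpy as np

def pll_response(step, K=0.2, n=30, linear=False):
    """First-order PLL tracking a phase step; returns the final phase error."""
    phi_in, phi_vco = step, 0.0
    for _ in range(n):
        e = phi_in - phi_vco
        # Discriminator: sin(e) physically; e under the small-error assumption
        phi_vco += K * (e if linear else np.sin(e))
    return phi_in - phi_vco

for step in (0.2, 3.0):   # small vs. near-pi initial phase error (radians)
    print(f"step={step} rad: final error linear={pll_response(step, linear=True):.1e}, "
          f"nonlinear={pll_response(step):.1e}")
```

    For the small step the two models agree; for the near-π step the nonlinear loop is still far from lock after the same number of iterations, which is the regime the abstract associates with scintillation-induced fading.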

  19. COLAcode: COmoving Lagrangian Acceleration code

    NASA Astrophysics Data System (ADS)

    Tassev, Svetlin V.

    2016-02-01

    COLAcode is a serial particle mesh-based N-body code illustrating the COLA (COmoving Lagrangian Acceleration) method; it solves for Large Scale Structure (LSS) in a frame that is comoving with observers following trajectories calculated in Lagrangian Perturbation Theory (LPT). It differs from standard N-body code by trading accuracy at small-scales to gain computational speed without sacrificing accuracy at large scales. This is useful for generating large ensembles of accurate mock halo catalogs required to study galaxy clustering and weak lensing; such catalogs are needed to perform detailed error analysis for ongoing and future surveys of LSS.

  20. Standard Free Droplet Digital Polymerase Chain Reaction as a New Tool for the Quality Control of High-Capacity Adenoviral Vectors in Small-Scale Preparations

    PubMed Central

    Boehme, Philip; Stellberger, Thorsten; Solanki, Manish; Zhang, Wenli; Schulz, Eric; Bergmann, Thorsten; Liu, Jing; Doerner, Johannes; Baiker, Armin E.

    2015-01-01

    High-capacity adenoviral vectors (HCAdVs) are promising tools for gene therapy as well as for genetic engineering. However, one limitation of the HCAdV vector system is the complex, time-consuming, and labor-intensive production process and the following quality control procedure. Since HCAdVs are deleted for all viral coding sequences, a helper virus (HV) is needed in the production process to provide the sequences for all viral proteins in trans. For the purification procedure of HCAdV, cesium chloride density gradient centrifugation is usually performed followed by buffer exchange using dialysis or comparable methods. However, performing these steps is technically difficult, potentially error-prone, and not scalable. Here, we establish a new protocol for small-scale production of HCAdV based on commercially available adenovirus purification systems and a standard method for the quality control of final HCAdV preparations. For titration of final vector preparations, we established a droplet digital polymerase chain reaction (ddPCR) that uses a standard-free end-point PCR in small droplets of defined volume. By using different probes, this method is capable of detecting and quantifying HCAdV and HV in one reaction independent of reference material, rendering this method attractive for accurately comparing viral titers between different laboratories. In summary, we demonstrate that it is possible to produce HCAdV at small scale with sufficient quality and quantity to perform experiments in cell culture, and we established a reliable protocol for vector titration based on ddPCR. Our method significantly reduces the time and equipment required to perform HCAdV production. In the future the ddPCR technology could be advantageous for titration of other viral vectors commonly used in gene therapy. PMID:25640117
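    Standard-free quantification in ddPCR rests on Poisson statistics over droplet counts. A minimal sketch for the duplexed HCAdV/HV readout; the droplet volume is a nominal assumed value and the counts are hypothetical.

```python
import math

def ddpcr_copies(positive, total, droplet_volume_ul=0.00085):
    """Standard-free ddPCR quantification via Poisson statistics.

    lambda = -ln(fraction of negative droplets) is the mean number of
    template copies per droplet; dividing by the droplet volume gives
    copies per microliter, with no reference standard required.
    """
    frac_neg = 1.0 - positive / total
    lam = -math.log(frac_neg)        # copies per droplet
    return lam / droplet_volume_ul   # copies per µL

# Hypothetical duplexed counts: HCAdV probe and helper-virus (HV) probe
print(f"HCAdV: {ddpcr_copies(10500, 18000):.0f} copies/uL")
print(f"HV:    {ddpcr_copies(  420, 18000):.0f} copies/uL")
```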

  1. Small Atomic Orbital Basis Set First‐Principles Quantum Chemical Methods for Large Molecular and Periodic Systems: A Critical Analysis of Error Sources

    PubMed Central

    Sure, Rebecca; Brandenburg, Jan Gerit

    2015-01-01

    In quantum chemical computations the combination of Hartree–Fock or a density functional theory (DFT) approximation with relatively small atomic orbital basis sets of double‐zeta quality is still widely used, for example, in the popular B3LYP/6‐31G* approach. In this Review, we critically analyze the two main sources of error in such computations, that is, the basis set superposition error on the one hand and the missing London dispersion interactions on the other. We review various strategies to correct those errors and present exemplary calculations on mainly noncovalently bound systems of widely varying size. Energies and geometries of small dimers, large supramolecular complexes, and molecular crystals are covered. We conclude that it is not justified to rely on fortunate error compensation, as the main inconsistencies can be cured by modern correction schemes which clearly outperform the plain mean‐field methods. PMID:27308221
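    One standard scheme in this family of corrections is the Boys–Bernardi counterpoise procedure for the basis set superposition error (BSSE), stated generically below for a dimer AB (subscript = system, superscript = basis set; monomer energies are evaluated in the full dimer basis using ghost functions). This is a generic statement of the scheme, not necessarily the correction the authors recommend.

```latex
% Counterpoise-corrected interaction energy and BSSE estimate for dimer AB
\begin{align}
E_{\mathrm{int}}^{\mathrm{CP}} &= E_{AB}^{AB} - E_{A}^{AB} - E_{B}^{AB} \\
\delta_{\mathrm{BSSE}} &= \left(E_{A}^{A} - E_{A}^{AB}\right)
                        + \left(E_{B}^{B} - E_{B}^{AB}\right)
\end{align}
```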

  2. Assessment of Spectral Doppler in Preclinical Ultrasound Using a Small-Size Rotating Phantom

    PubMed Central

    Yang, Xin; Sun, Chao; Anderson, Tom; Moran, Carmel M.; Hadoke, Patrick W.F.; Gray, Gillian A.; Hoskins, Peter R.

    2013-01-01

    Preclinical ultrasound scanners are used to measure blood flow in small animals, but the potential errors in blood velocity measurements have not been quantified. This investigation rectifies this omission through the design and use of phantoms and evaluation of measurement errors for a preclinical ultrasound system (Vevo 770, Visualsonics, Toronto, ON, Canada). A ray model of geometric spectral broadening was used to predict velocity errors. A small-scale rotating phantom, made from tissue-mimicking material, was developed. True and Doppler-measured maximum velocities of the moving targets were compared over a range of angles from 10° to 80°. Results indicate that the maximum velocity was overestimated by up to 158% by spectral Doppler. There was good agreement (<10%) between theoretical velocity errors and measured errors for beam-target angles of 50°–80°. However, for angles of 10°–40°, the agreement was not as good (>50%). The phantom is capable of validating the performance of blood velocity measurement in preclinical ultrasound. PMID:23711503
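    The ray model mentioned above predicts the overestimation from the transducer geometry: the aperture edge subtends a smaller angle to the flow than the beam axis, so the spectrum extends to higher Doppler shifts than v·cos(θ). A sketch with an assumed half-aperture of 10° (an illustrative value, not the Vevo 770's geometry) reproduces the qualitative angle dependence, with large errors at steep angles.

```python
import numpy as np

def overestimation(angle_deg, half_aperture_deg):
    """Ray-model prediction of max-velocity overestimation by spectral Doppler."""
    th = np.radians(angle_deg)
    da = np.radians(half_aperture_deg)
    # Aperture edge sees angle (theta - da), hence a larger Doppler shift
    return np.cos(th - da) / np.cos(th) - 1.0

for ang in (40, 60, 70, 80):
    print(f"{ang} deg: +{100 * overestimation(ang, 10):.0f}%")
```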

  3. The proposed coding standard at GSFC

    NASA Technical Reports Server (NTRS)

    Morakis, J. C.; Helgert, H. J.

    1977-01-01

    As part of the continuing effort to introduce standardization of spacecraft and ground equipment in satellite systems, NASA's Goddard Space Flight Center and other NASA facilities have supported the development of a set of standards for the use of error control coding in telemetry subsystems. These standards are intended to ensure compatibility between spacecraft and ground encoding equipment, while allowing sufficient flexibility to meet all anticipated mission requirements. The standards which have been developed to date cover the application of block codes in error detection and error correction modes, as well as short and long constraint length convolutional codes decoded via the Viterbi and sequential decoding algorithms, respectively. Included are detailed specifications of the codes, and their implementation. Current effort is directed toward the development of standards covering channels with burst noise characteristics, channels with feedback, and code concatenation.

  4. Association between pregnancy complications and small-for-gestational-age birth weight defined by customized fetal growth standard versus a population-based standard.

    PubMed

    Odibo, Anthony O; Francis, Andre; Cahill, Alison G; Macones, George A; Crane, James P; Gardosi, Jason

    2011-03-01

    To derive coefficients for developing a customized growth chart for a Mid-Western US population, and to estimate the association between pregnancy outcomes and smallness for gestational age (SGA) defined by the customized growth chart compared with a population-based growth chart for the USA. A retrospective cohort study of an ultrasound database using 54,433 pregnancies meeting inclusion criteria was conducted. Coefficients for customized centiles were derived using 42,277 pregnancies and compared with those obtained from other populations. Two adverse outcome indicators were defined (greater than 7 day stay in the neonatal unit and stillbirth [SB]), and the risk for each outcome was calculated for the groups of pregnancies defined as SGA by the population standard and SGA by the customized standard using 12,456 pregnancies for the validation sample. The growth potential expressed as weight at 40 weeks in this population was 3524 g (standard error: 402 g). In the validation population, 4055 cases of SGA were identified using both population and customized standards. The cases additionally identified as SGA by the customized method had a significantly increased risk of each of the adverse outcome categories. The sensitivity and specificity of those identified as SGA by customized method only for detecting pregnancies at risk for SB was 32.7% (95% CI 27.0-38.8%) and 95.1% (95% CI 94.7-95.0%) versus 0.8% (95% CI 0.1-2.7%) and 98.0% (95% CI 97.8-98.2%) for those identified by only the population-based method, respectively. SGA defined by customized growth potential is able to identify substantially more pregnancies at a risk for adverse outcome than the currently used national standard for fetal growth.

  5. Automatic Error Analysis Using Intervals

    ERIC Educational Resources Information Center

    Rothwell, E. J.; Cloud, M. J.

    2012-01-01

    A technique for automatic error analysis using interval mathematics is introduced. A comparison to standard error propagation methods shows that in cases involving complicated formulas, the interval approach gives comparable error estimates with much less effort. Several examples are considered, and numerical errors are computed using the INTLAB…
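    The idea can be illustrated with a few lines of self-contained interval arithmetic; outward rounding, which a production library such as INTLAB handles, is omitted for brevity.

```python
from dataclasses import dataclass

@dataclass
class Interval:
    lo: float
    hi: float
    def __add__(self, o):  return Interval(self.lo + o.lo, self.hi + o.hi)
    def __sub__(self, o):  return Interval(self.lo - o.hi, self.hi - o.lo)
    def __mul__(self, o):
        p = [self.lo*o.lo, self.lo*o.hi, self.hi*o.lo, self.hi*o.hi]
        return Interval(min(p), max(p))

# Measured quantities 2.0 +/- 0.1 and 3.0 +/- 0.2, propagated through x*y - x
x = Interval(1.9, 2.1)
y = Interval(2.8, 3.2)
z = x * y - x
print(f"[{z.lo:.2f}, {z.hi:.2f}]")   # rigorous enclosure of the result
```

    Note that because x appears twice, the enclosure is wider than the true range (the dependency problem); this is the usual price for guaranteed bounds.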

  6. Derivation of an analytic expression for the error associated with the noise reduction rating

    NASA Astrophysics Data System (ADS)

    Murphy, William J.

    2005-04-01

    Hearing protection devices are assessed using the Real Ear Attenuation at Threshold (REAT) measurement procedure for the purpose of estimating the amount of noise reduction provided when worn by a subject. The rating number provided on the protector label is a function of the mean and standard deviation of the REAT results achieved by the test subjects. If a group of subjects have a large variance, then it follows that the certainty of the rating should be correspondingly lower. No estimate of the error of a protector's rating is given by existing standards or regulations. Propagation of errors was applied to the Noise Reduction Rating to develop an analytic expression for the hearing protector rating error term. Comparison of the analytic expression for the error to the standard deviation estimated from Monte Carlo simulation of subject attenuations yielded a linear relationship across several protector types and assumptions for the variance of the attenuations.
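    The flavor of the derivation can be reproduced with a simplified rating of the form mean − 2·SD, an assumption for illustration rather than the actual NRR formula. Propagation of errors gives Var(rating) ≈ σ²/n + 4·σ²/(2(n−1)) for normal data, which a Monte Carlo over simulated subject panels confirms.

```python
import numpy as np

rng = np.random.default_rng(1)
mu, sigma, n = 30.0, 5.0, 10     # assumed REAT mean/SD (dB) and subject count

# Simplified rating of the form: rating = mean - 2*SD (illustration only)
def rating(x):
    return x.mean() - 2.0 * x.std(ddof=1)

# Analytic (propagation-of-errors) standard error:
# Var(mean) = sigma^2/n; Var(SD) ~ sigma^2/(2(n-1)) for normal data
se_analytic = np.sqrt(sigma**2 / n + 4.0 * sigma**2 / (2 * (n - 1)))

# Monte Carlo check over simulated subject panels
ratings = [rating(rng.normal(mu, sigma, n)) for _ in range(20000)]
print(f"analytic SE = {se_analytic:.2f} dB, Monte Carlo SE = {np.std(ratings):.2f} dB")
```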

  7. 78 FR 17155 - Standards for the Growing, Harvesting, Packing, and Holding of Produce for Human Consumption...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-03-20

    ...The Food and Drug Administration (FDA or we) is correcting the preamble to a proposed rule that published in the Federal Register of January 16, 2013. That proposed rule would establish science-based minimum standards for the safe growing, harvesting, packing, and holding of produce, meaning fruits and vegetables grown for human consumption. FDA proposed these standards as part of our implementation of the FDA Food Safety Modernization Act. The document published with several technical errors, including some errors in cross references, as well as several errors in reference numbers cited throughout the document. This document corrects those errors. We are also placing a corrected copy of the proposed rule in the docket.

  8. A comparison of registration errors with imageless computer navigation during MIS total knee arthroplasty versus standard incision total knee arthroplasty: a cadaveric study.

    PubMed

    Davis, Edward T; Pagkalos, Joseph; Gallie, Price A M; Macgroarty, Kelly; Waddell, James P; Schemitsch, Emil H

    2015-01-01

    Optimal component alignment in total knee arthroplasty has been associated with better functional outcome as well as improved implant longevity. The ability to align components optimally during minimally invasive (MIS) total knee replacement (TKR) has been a cause of concern. Computer navigation is a useful aid in achieving the desired alignment although it is limited by the error during the manual registration of landmarks. Our study aims to compare the registration process error between a standard and a MIS surgical approach. We hypothesized that performing the registration error via an MIS approach would increase the registration process error. Five fresh frozen lower limbs were routinely prepared and draped. The registration process was performed through an MIS approach. This was then extended to the standard approach and the registration was performed again. Two surgeons performed the registration process five times with each approach. Performing the registration process through the MIS approach was not associated with higher error compared to the standard approach in the alignment parameters of interest. This rejects our hypothesis. Image-free navigated MIS TKR does not appear to carry higher risk of component malalignment due to the registration process error. Navigation can be used during MIS TKR to improve alignment without reduced accuracy due to the approach.

  9. Verification of unfold error estimates in the unfold operator code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fehl, D.L.; Biggs, F.

    Spectral unfolding is an inverse mathematical operation that attempts to obtain spectral source information from a set of response functions and data measurements. Several unfold algorithms have appeared over the past 30 years; among them is the unfold operator (UFO) code written at Sandia National Laboratories. In addition to an unfolded spectrum, the UFO code also estimates the unfold uncertainty (error) induced by estimated random uncertainties in the data. In UFO the unfold uncertainty is obtained from the error matrix. This built-in estimate has now been compared to error estimates obtained by running the code in a Monte Carlo fashion with prescribed data distributions (Gaussian deviates). In the test problem studied, data were simulated from an arbitrarily chosen blackbody spectrum (10 keV) and a set of overlapping response functions. The data were assumed to have an imprecision of 5% (standard deviation). One hundred random data sets were generated. The built-in estimate of unfold uncertainty agreed with the Monte Carlo estimate to within the statistical resolution of this relatively small sample size (95% confidence level). A possible 10% bias between the two methods was unresolved. The Monte Carlo technique is also useful in underdetermined problems, for which the error matrix method does not apply. UFO has been applied to the diagnosis of low energy x rays emitted by Z-pinch and ion-beam driven hohlraums. © 1997 American Institute of Physics.
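    The Monte Carlo comparison described here is straightforward to sketch. The unfold below is a plain pseudo-inverse on a toy response matrix, standing in for UFO's actual algorithm; the 5% Gaussian imprecision and 100 random data sets mirror the test problem, while the spectrum and responses are invented.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy unfold: data d = R @ s + noise; unfold via the pseudo-inverse of R.
n_resp, n_bins = 8, 6
R = rng.uniform(0.0, 1.0, (n_resp, n_bins))      # stand-in response functions
s_true = np.exp(-np.linspace(0, 3, n_bins))      # stand-in source spectrum
d0 = R @ s_true
R_pinv = np.linalg.pinv(R)

# Prescribed 5% (1 sigma) Gaussian imprecision, 100 random data sets
unfolds = []
for _ in range(100):
    d = d0 * (1.0 + 0.05 * rng.standard_normal(n_resp))
    unfolds.append(R_pinv @ d)
mc_err = np.std(unfolds, axis=0)                 # Monte Carlo unfold error
print(np.round(mc_err, 4))
```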

  10. Rank score and permutation testing alternatives for regression quantile estimates

    USGS Publications Warehouse

    Cade, B.S.; Richards, J.D.; Mielke, P.W.

    2006-01-01

    Performance of quantile rank score tests used for hypothesis testing and constructing confidence intervals for linear quantile regression estimates (0 ≤ τ ≤ 1) was evaluated by simulation for models with p = 2 and 6 predictors, moderate collinearity among predictors, homogeneous and heterogeneous errors, small to moderate samples (n = 20–300), and central to upper quantiles (0.50–0.99). Test statistics evaluated were the conventional quantile rank score T statistic, distributed as a χ² random variable with q degrees of freedom (where q parameters are constrained by H₀), and an F statistic with its sampling distribution approximated by permutation. The permutation F-test maintained better Type I errors than the T-test for homogeneous error models with smaller n and more extreme quantiles τ. An F distributional approximation of the F statistic provided some improvement in Type I errors over the T-test for models with >2 parameters, smaller n, and more extreme quantiles, but not as much improvement as the permutation approximation. Both rank score tests required weighting to maintain correct Type I errors when heterogeneity under the alternative model increased to 5 standard deviations across the domain of X. A double permutation procedure was developed to provide valid Type I errors for the permutation F-test when null models were forced through the origin. Power was similar for conditions where both T- and F-tests maintained correct Type I errors, but the F-test provided some power at smaller n and extreme quantiles when the T-test had no power because of excessively conservative Type I errors. When the double permutation scheme was required for the permutation F-test to maintain valid Type I errors, power was less than for the T-test with decreasing sample size and increasing quantiles. Confidence intervals on parameters and tolerance intervals for future predictions were constructed based on test inversion for an example application relating trout densities to stream channel width:depth.

  11. Role-modeling and medical error disclosure: a national survey of trainees.

    PubMed

    Martinez, William; Hickson, Gerald B; Miller, Bonnie M; Doukas, David J; Buckley, John D; Song, John; Sehgal, Niraj L; Deitz, Jennifer; Braddock, Clarence H; Lehmann, Lisa Soleymani

    2014-03-01

    To measure trainees' exposure to negative and positive role-modeling for responding to medical errors and to examine the association between that exposure and trainees' attitudes and behaviors regarding error disclosure. Between May 2011 and June 2012, 435 residents at two large academic medical centers and 1,187 medical students from seven U.S. medical schools received anonymous, electronic questionnaires. The questionnaire asked respondents about (1) experiences with errors, (2) training for responding to errors, (3) behaviors related to error disclosure, (4) exposure to role-modeling for responding to errors, and (5) attitudes regarding disclosure. Using multivariate regression, the authors analyzed whether frequency of exposure to negative and positive role-modeling independently predicted two primary outcomes: (1) attitudes regarding disclosure and (2) nontransparent behavior in response to a harmful error. The response rate was 55% (884/1,622). Training on how to respond to errors had the largest independent, positive effect on attitudes (standardized effect estimate, 0.32, P < .001); negative role-modeling had the largest independent, negative effect (standardized effect estimate, -0.26, P < .001). Positive role-modeling had a positive effect on attitudes (standardized effect estimate, 0.26, P < .001). Exposure to negative role-modeling was independently associated with an increased likelihood of trainees' nontransparent behavior in response to an error (OR 1.37, 95% CI 1.15-1.64; P < .001). Exposure to role-modeling predicts trainees' attitudes and behavior regarding the disclosure of harmful errors. Negative role models may be a significant impediment to disclosure among trainees.

  12. Combining Accuracy and Efficiency: An Incremental Focal-Point Method Based on Pair Natural Orbitals.

    PubMed

    Fiedler, Benjamin; Schmitz, Gunnar; Hättig, Christof; Friedrich, Joachim

    2017-12-12

    In this work, we present a new pair natural orbitals (PNO)-based incremental scheme to calculate CCSD(T) and CCSD(T0) reaction, interaction, and binding energies. We perform an extensive analysis, which shows small incremental errors similar to previous non-PNO calculations. Furthermore, slight PNO errors are obtained by using T_PNO = T_TNO with appropriate values of 10⁻⁷ to 10⁻⁸ for reactions and 10⁻⁸ for interaction or binding energies. The combination with the efficient MP2 focal-point approach yields chemical accuracy relative to the complete basis-set (CBS) limit. In this method, small basis sets (cc-pVDZ, def2-TZVP) for the CCSD(T) part are sufficient in the case of reactions or interactions, while somewhat larger ones (e.g., (aug)-cc-pVTZ) are necessary for molecular clusters. For these larger basis sets, we show the very high efficiency of our scheme. We obtain not only tremendous decreases in the wall times (i.e., factors >10²) due to the parallelization of the increment calculations, as well as in the total times due to the application of PNOs (i.e., compared to the normal incremental scheme), but also smaller total times with respect to the standard PNO method. In this way, our new method combines excellent accuracy with very high efficiency and extends applicability to larger systems due to the separation of the full computation into several small increments.
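    The focal-point composition the abstract refers to has the generic form below; the basis-set labels are illustrative, and the exact composite used by the authors may differ in detail.

```latex
% Focal-point composite: CBS-limit MP2 plus a small-basis coupled-cluster
% correction evaluated in the same (small) basis.
\begin{equation}
E_{\mathrm{CCSD(T)}}^{\mathrm{CBS}} \approx
  E_{\mathrm{MP2}}^{\mathrm{CBS}}
  + \left( E_{\mathrm{CCSD(T)}}^{\mathrm{small}}
         - E_{\mathrm{MP2}}^{\mathrm{small}} \right)
\end{equation}
```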

  13. MARS: bringing the automation of small-molecule bioanalytical sample preparations to a new frontier.

    PubMed

    Li, Ming; Chou, Judy; Jing, Jing; Xu, Hui; Costa, Aldo; Caputo, Robin; Mikkilineni, Rajesh; Flannelly-King, Shane; Rohde, Ellen; Gan, Lawrence; Klunk, Lewis; Yang, Liyu

    2012-06-01

    In recent years, there has been a growing interest in automating small-molecule bioanalytical sample preparations specifically using the Hamilton MicroLab® STAR liquid-handling platform. In the most extensive work reported thus far, multiple small-molecule sample preparation assay types (protein precipitation extraction, SPE and liquid-liquid extraction) have been integrated into a suite that is composed of graphical user interfaces and Hamilton scripts. Using that suite, bioanalytical scientists have been able to automate various sample preparation methods to a great extent. However, there are still areas that could benefit from further automation, specifically, the full integration of analytical standard and QC sample preparation with study sample extraction in one continuous run, real-time 2D barcode scanning on the Hamilton deck and direct Laboratory Information Management System database connectivity. We developed a new small-molecule sample-preparation automation system that improves in all of the aforementioned areas. The improved system presented herein further streamlines the bioanalytical workflow, simplifies batch run design, reduces analyst intervention and eliminates sample-handling error.

  14. Structural nested mean models for assessing time-varying effect moderation.

    PubMed

    Almirall, Daniel; Ten Have, Thomas; Murphy, Susan A

    2010-03-01

    This article considers the problem of assessing causal effect moderation in longitudinal settings in which treatment (or exposure) is time varying and so are the covariates said to moderate its effect. Intermediate causal effects that describe time-varying causal effects of treatment conditional on past covariate history are introduced and considered as part of Robins' structural nested mean model. Two estimators of the intermediate causal effects, and their standard errors, are presented and discussed: the first is a proposed two-stage regression estimator; the second is Robins' G-estimator. Results of a small simulation study are presented that begin to shed light on the small- versus large-sample performance of the estimators and on the bias-variance trade-off between them. The methodology is illustrated using longitudinal data from a depression study.

  15. Studies on hand-held visual communication device for the deaf and speech-impaired I. Visual display window size.

    PubMed

    Thurlow, W R

    1980-01-01

    Messages were presented which moved from right to left along an electronic alphabetic display which was varied in "window" size from 4 through 32 letter spaces. Deaf subjects signed the messages they perceived. Relatively few errors were made even at the highest rate of presentation, which corresponded to a typing rate of 60 words/min. It is concluded that many deaf persons can make effective use of a small visual display. A reduced cost is then possible for visual communication instruments for these people through reduced display size. Deaf subjects who can profit from a small display can be located by a sentence test administered by tape recorder which drives the display of the communication device by means of the standard code of the deaf teletype network.

  16. Implementing parallel spreadsheet models for health policy decisions: The impact of unintentional errors on model projections

    PubMed Central

    Bailey, Stephanie L.; Bono, Rose S.; Nash, Denis; Kimmel, April D.

    2018-01-01

    Background Spreadsheet software is increasingly used to implement systems science models informing health policy decisions, both in academia and in practice where technical capacity may be limited. However, spreadsheet models are prone to unintentional errors that may not always be identified using standard error-checking techniques. Our objective was to illustrate, through a methodologic case study analysis, the impact of unintentional errors on model projections by implementing parallel model versions. Methods We leveraged a real-world need to revise an existing spreadsheet model designed to inform HIV policy. We developed three parallel versions of a previously validated spreadsheet-based model; versions differed by the spreadsheet cell-referencing approach (named single cells; column/row references; named matrices). For each version, we implemented three model revisions (re-entry into care; guideline-concordant treatment initiation; immediate treatment initiation). After standard error-checking, we identified unintentional errors by comparing model output across the three versions. Concordant model output across all versions was considered error-free. We calculated the impact of unintentional errors as the percentage difference in model projections between model versions with and without unintentional errors, using +/-5% difference to define a material error. Results We identified 58 original and 4,331 propagated unintentional errors across all model versions and revisions. Over 40% (24/58) of original unintentional errors occurred in the column/row reference model version; most (23/24) were due to incorrect cell references. Overall, >20% of model spreadsheet cells had material unintentional errors. When examining error impact along the HIV care continuum, the percentage difference between versions with and without unintentional errors ranged from +3% to +16% (named single cells), +26% to +76% (column/row reference), and 0% (named matrices). Conclusions Standard error-checking techniques may not identify all errors in spreadsheet-based models. Comparing parallel model versions can aid in identifying unintentional errors and promoting reliable model projections, particularly when resources are limited. PMID:29570737
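    The comparison logic is simple to implement. A minimal sketch with hypothetical projections, flagging metrics that differ across parallel model versions by more than the ±5% material-error threshold used in the study:

```python
def flag_material_errors(outputs_by_version, tol=0.05):
    """Compare projections from parallel model versions.

    outputs_by_version: {version_name: {metric: value}}. Any metric whose
    values differ from the first version's by more than tol is flagged;
    a discrepancy localizes an unintentional error to one of the versions.
    """
    flagged = {}
    versions = list(outputs_by_version)
    for m in outputs_by_version[versions[0]]:
        vals = [outputs_by_version[v][m] for v in versions]
        ref = vals[0]
        if ref and max(abs(v - ref) / abs(ref) for v in vals[1:]) > tol:
            flagged[m] = dict(zip(versions, vals))
    return flagged

# Hypothetical projections along an HIV care continuum
out = {
    "named_single_cells": {"in_care": 1030, "on_treatment": 850},
    "column_row_refs":    {"in_care": 1300, "on_treatment": 850},
    "named_matrices":     {"in_care": 1000, "on_treatment": 850},
}
print(flag_material_errors(out))   # flags "in_care" for manual inspection
```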

  17. Implementing parallel spreadsheet models for health policy decisions: The impact of unintentional errors on model projections.

    PubMed

    Bailey, Stephanie L; Bono, Rose S; Nash, Denis; Kimmel, April D

    2018-01-01

    Spreadsheet software is increasingly used to implement systems science models informing health policy decisions, both in academia and in practice where technical capacity may be limited. However, spreadsheet models are prone to unintentional errors that may not always be identified using standard error-checking techniques. Our objective was to illustrate, through a methodologic case study analysis, the impact of unintentional errors on model projections by implementing parallel model versions. We leveraged a real-world need to revise an existing spreadsheet model designed to inform HIV policy. We developed three parallel versions of a previously validated spreadsheet-based model; versions differed by the spreadsheet cell-referencing approach (named single cells; column/row references; named matrices). For each version, we implemented three model revisions (re-entry into care; guideline-concordant treatment initiation; immediate treatment initiation). After standard error-checking, we identified unintentional errors by comparing model output across the three versions. Concordant model output across all versions was considered error-free. We calculated the impact of unintentional errors as the percentage difference in model projections between model versions with and without unintentional errors, using +/-5% difference to define a material error. We identified 58 original and 4,331 propagated unintentional errors across all model versions and revisions. Over 40% (24/58) of original unintentional errors occurred in the column/row reference model version; most (23/24) were due to incorrect cell references. Overall, >20% of model spreadsheet cells had material unintentional errors. When examining error impact along the HIV care continuum, the percentage difference between versions with and without unintentional errors ranged from +3% to +16% (named single cells), +26% to +76% (column/row reference), and 0% (named matrices). Standard error-checking techniques may not identify all errors in spreadsheet-based models. Comparing parallel model versions can aid in identifying unintentional errors and promoting reliable model projections, particularly when resources are limited.

  18. Late complication of open inguinal hernia repair: small bowel obstruction caused by intraperitoneal mesh migration.

    PubMed

    Ferrone, Roberto; Scarone, Pier Carlo; Natalini, Gianni

    2003-09-01

    We describe a case of small bowel obstruction due to prosthetic mesh migration. A 67-year-old male, who had undergone prosthetic repair of an inguinal hernia 3 years earlier, was admitted for a mechanical small bowel obstruction. Laparotomy revealed the penultimate ileal loop strangulated by an adhesion drawing it towards a polypropylene mesh firmly attached to the parietal peritoneum of the inguinal region. The intestinal loop was released; the mesh was embedded deep with a continuous whip suture after folding the parietal peritoneum. The patient was discharged on the 11th postoperative day, surgically healed. The "tension-free" technique is undoubtedly the gold standard for hernia repair. However, it is not free of complications, mostly due to technical errors, of which the surgeon must be aware, both when repairing abdominal-wall defects and when facing an obstruction in a patient who has previously undergone prosthetic inguinal hernia repair.

  19. SU-F-T-445: Effect of Triaxial Cables and Microdetectors in Small Field Dosimetry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Das, I; Andersen, A

    2016-06-15

    Purpose: Advances in radiation treatment, especially the smaller fields used in SRS, Gamma Knife, Tomotherapy, CyberKnife, and IMRT, require a high degree of precision, especially with microdetectors for small field dosimetry (Das et al, Med Phys, 35, 206, 2008; Alfonso et al, Med Phys, 35, 5179, 2008). Due to the small signal, the triaxial cable becomes critical in terms of signal-to-noise ratio (SNR), which is studied here with microdetectors. Methods: Six high quality triaxial cables, 9.1 meters long, from different manufacturers and without any defects were acquired, along with 5 of the most popular microdetectors (microdiamond, plastic scintillator, SRS diode, Edge diode and PinPoint). A dedicated electrometer was used for each combination except the W1, which has its own Supermax electrometer. A 6 MV photon beam from a Varian TrueBeam with 100 MU at 600 MU/min was used. Measurements were made at a depth of 5 cm in a water phantom. Field sizes were varied from 0.5 cm to 10 cm square fields. Readings were taken with each combination of cable and microdetector. Results: The signal is dependent on the quality of the connectors, the cables, and the type of microdetector. The readings varied from nC to pC depending on the type of microdetector. The net signal S = Sc − Sn, where Sc is the signal with the chamber and Sn is the signal without the chamber, is a linear function of the sensitive volume v: S = α + β·v, where α and β are constants. The standard deviation (SD) in 3 sets of readings with each cable-detector combination was extremely low, <0.02%. As expected, the SD is higher in small fields (<3 cm). The maximum estimated error was only ±0.2% across cable-detector combinations. Conclusion: The choice of cables has a relatively small effect (±0.2%) with microdosimeters and should be accounted for in the overall error estimation of the k value that is needed to convert a ratio of readings to dose in small field dosimetry.

  20. Lexico-Semantic Errors of the Learners of English: A Survey of Standard Seven Keiyo-Speaking Primary School Pupils in Keiyo District, Kenya

    ERIC Educational Resources Information Center

    Jeptarus, Kipsamo E.; Ngene, Patrick K.

    2016-01-01

    The purpose of this research was to study the Lexico-semantic errors of the Keiyo-speaking standard seven primary school learners of English as a Second Language (ESL) in Keiyo District, Kenya. This study was guided by two related theories: Error Analysis Theory/Approach by Corder (1971) which approaches L2 learning through a detailed analysis of…

  1. Solar Cell Short Circuit Current Errors and Uncertainties During High Altitude Calibrations

    NASA Technical Reports Server (NTRS)

    Snyder, David D.

    2012-01-01

    High altitude balloon based facilities can make solar cell calibration measurements above 99.5% of the atmosphere to use for adjusting laboratory solar simulators. While close to on-orbit illumination, the small attenuation of the spectra may result in under-measurement of solar cell parameters. Variations of stratospheric weather may produce flight-to-flight measurement variations. To support the NSCAP effort, this work quantifies some of the effects on solar cell short circuit current (Isc) measurements on triple junction sub-cells. This work looks at several types of high altitude methods: direct high altitude measurements near 120 kft, and lower stratospheric Langley plots from aircraft. It also looks at Langley extrapolation from altitudes above most of the ozone, for potential small balloon payloads. A convolution of the sub-cell spectral response with the standard solar spectrum, modified by several absorption processes, is used to determine the relative change from AM0, Isc/Isc(AM0). Rayleigh scattering, molecular scattering from uniformly mixed gases, ozone, and water vapor are included in this analysis. A range of atmospheric pressures is examined, from 0.05 to 0.25 atm, to cover the range of atmospheric altitudes where solar cell calibrations are performed. Generally these errors and uncertainties are less than 0.2%.
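    The relative-change calculation is a weighted overlap integral of spectral response and spectrum. A sketch with invented arrays (no real spectra, responses, or absorption data) shows the structure of the computation:

```python
import numpy as np

# Illustrative, made-up arrays: wavelength grid (nm), a stand-in AM0
# spectrum, a sub-cell spectral response, and a residual atmospheric
# transmission at balloon altitude. All values are assumptions.
wl = np.linspace(350, 900, 56)
dwl = wl[1] - wl[0]
e_am0 = np.interp(wl, [350, 600, 900], [1.0, 1.9, 1.1])
sr = np.interp(wl, [350, 650, 900], [0.10, 0.55, 0.0])
trans = 1.0 - 0.02 * np.exp(-(wl - 600.0) ** 2 / 2.0e4)

# Trapezoid-style integration on the uniform grid
isc_am0 = np.sum(sr * e_am0) * dwl
isc_alt = np.sum(sr * e_am0 * trans) * dwl
print(f"Isc/Isc(AM0) = {isc_alt / isc_am0:.4f}")   # relative change
```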

  2. Analyzing Reliability and Performance Trade-Offs of HLS-Based Designs in SRAM-Based FPGAs Under Soft Errors

    NASA Astrophysics Data System (ADS)

    Tambara, Lucas Antunes; Tonfat, Jorge; Santos, André; Kastensmidt, Fernanda Lima; Medina, Nilberto H.; Added, Nemitala; Aguiar, Vitor A. P.; Aguirre, Fernando; Silveira, Marcilei A. G.

    2017-02-01

    The increasing system complexity of FPGA-based hardware designs and the shortening of time-to-market have motivated the adoption of new design methodologies focused on addressing the current need for high-performance circuits. High-Level Synthesis (HLS) tools can generate Register Transfer Level (RTL) designs from high-level software programming languages. These tools have evolved significantly in recent years, providing optimized RTL designs, which can serve the needs of safety-critical applications that require both high performance and high reliability levels. However, a reliability evaluation of HLS-based designs under soft errors has not yet been presented. In this work, the trade-offs of different HLS-based designs in terms of reliability, resource utilization, and performance are investigated by analyzing their behavior under soft errors and comparing them to a standard processor-based implementation in an SRAM-based FPGA. Results obtained from fault injection campaigns and radiation experiments show that it is possible to increase the performance of a processor-based system up to 5,000 times by changing its architecture, with a small impact on the cross section (an increase of up to 8 times), while still increasing the Mean Workload Between Failures (MWBF) of the system.

  3. Verification of unfold error estimates in the UFO code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fehl, D.L.; Biggs, F.

    Spectral unfolding is an inverse mathematical operation that attempts to obtain spectral source information from a set of tabulated response functions and data measurements. Several unfold algorithms have appeared over the past 30 years; among them is the UFO (UnFold Operator) code. In addition to an unfolded spectrum, UFO also estimates the unfold uncertainty (error) by running the code in a Monte Carlo fashion with prescribed data distributions (Gaussian deviates). In the problem studied, data were simulated from an arbitrarily chosen blackbody spectrum (10 keV) and a set of overlapping response functions. The data were assumed to have an imprecision of 5% (standard deviation), and 100 random data sets were generated. The built-in estimate of unfold uncertainty agreed with the Monte Carlo estimate to within the statistical resolution of this relatively small sample size (95% confidence level). A possible 10% bias between the two methods was unresolved. The Monte Carlo technique is also useful in underdetermined problems, for which the error matrix method does not apply. UFO has been applied to the diagnosis of low energy x rays emitted by Z-pinch and ion-beam driven hohlraums.
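
    The Monte Carlo error estimate described above can be sketched in a few lines of Python. The response matrix, spectrum, and the plain least-squares "unfold" stand-in below are all assumptions for illustration; the actual UFO algorithm is more sophisticated.

        import numpy as np

        rng = np.random.default_rng(1)

        # Hypothetical setup: 8 overlapping response functions tabulated over a
        # 5-bin spectrum (Gaussian-shaped responses, arbitrary units).
        n_data, n_bins = 8, 5
        centers = np.linspace(0, n_data - 1, n_bins)
        R = np.exp(-((np.arange(n_data)[:, None] - centers[None, :]) ** 2) / 4.0)
        true_spectrum = np.array([1.0, 3.0, 5.0, 3.0, 1.0])
        clean_data = R @ true_spectrum

        def unfold(data):
            # Stand-in for the real unfold operator: plain least squares.
            return np.linalg.lstsq(R, data, rcond=None)[0]

        # Perturb the data with 5% (1 sigma) Gaussian deviates, unfold each of
        # 100 random data sets, and take the spread of the results per bin.
        samples = np.array([unfold(clean_data * (1 + 0.05 * rng.standard_normal(n_data)))
                            for _ in range(100)])
        print("Monte Carlo standard deviation per bin:", samples.std(axis=0))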

  4. Optimization of Control Points Number at Coordinate Measurements based on the Monte-Carlo Method

    NASA Astrophysics Data System (ADS)

    Korolev, A. A.; Kochetkov, A. V.; Zakharov, O. V.

    2018-01-01

    Improving the quality of products causes an increase in the requirements for the accuracy of the dimensions and shape of the surfaces of workpieces. This, in turn, raises the requirements for the accuracy and productivity of measuring the workpieces. Coordinate measuring machines are currently the most effective measuring tools for solving such problems. The article proposes a method for optimizing the number of control points using Monte Carlo simulation. Based on the measurement of a small sample from batches of workpieces, statistical modeling is performed, which allows one to obtain interval estimates of the measurement error. This approach is demonstrated by examples of applications for flatness, cylindricity and sphericity. Four options of uniform and uneven arrangement of control points are considered and their comparison is given. It is revealed that when the number of control points decreases, the arithmetic mean decreases, the standard deviation of the measurement error increases, and the probability of a measurement α-error (Type I error) increases. In general, it has been established that it is possible to substantially reduce the number of control points while maintaining the required measurement accuracy.
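
    The core simulation idea can be illustrated with a deliberately simplified Python sketch: sample a surface at N control points, add probe noise, and repeat to get interval estimates of the flatness measurement error. The surface model, noise levels, and point counts are invented assumptions, not parameters from the article.

        import numpy as np

        rng = np.random.default_rng(2)

        def flatness_error_stats(n_points, n_trials=2000, form_err=0.010, noise=0.002):
            # Hypothetical model: true heights uniform within the form-error band,
            # probe noise Gaussian (units: mm). Flatness = max - min of heights.
            z_true = rng.uniform(0.0, form_err, size=(n_trials, n_points))
            z_meas = z_true + rng.normal(0.0, noise, size=(n_trials, n_points))
            err = (z_meas.max(axis=1) - z_meas.min(axis=1)) - \
                  (z_true.max(axis=1) - z_true.min(axis=1))
            return err.mean(), err.std(ddof=1), np.percentile(err, [2.5, 97.5])

        for n in (5, 10, 20, 50):
            mean, sd, ci = flatness_error_stats(n)
            print(f"N={n:2d}: mean={mean:+.4f} mm, SD={sd:.4f} mm, 95% interval={np.round(ci, 4)}")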

  5. Stochastic modeling for time series InSAR: with emphasis on atmospheric effects

    NASA Astrophysics Data System (ADS)

    Cao, Yunmeng; Li, Zhiwei; Wei, Jianchao; Hu, Jun; Duan, Meng; Feng, Guangcai

    2018-02-01

    Despite the many applications of time series interferometric synthetic aperture radar (TS-InSAR) techniques to geophysical problems, error analysis and assessment have been largely overlooked. Tropospheric propagation error is still the dominant error source of InSAR observations. However, the spatiotemporal variation of atmospheric effects is seldom considered in the present standard TS-InSAR techniques, such as persistent scatterer interferometry and small baseline subset interferometry. The failure to consider the stochastic properties of atmospheric effects not only affects the accuracy of the estimators, but also makes it difficult to assess the uncertainty of the final geophysical results. To address this issue, this paper proposes a network-based variance-covariance estimation method to model the spatiotemporal variation of tropospheric signals, and to estimate the temporal variance-covariance matrix of TS-InSAR observations. The constructed stochastic model is then incorporated into the TS-InSAR estimators, both for parameter estimation (e.g., deformation velocity, topography residual) and for uncertainty assessment. It is an incremental, positive improvement over the traditional weighted least squares methods used to solve multitemporal InSAR time series. The performance of the proposed method is validated using both simulated and real datasets.

  6. 49 CFR Appendix F to Part 240 - Medical Standards Guidelines

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... greater guidance on the procedures that should be employed in administering the vision and hearing... more errors on plates 1-15. MULTIFUNCTION VISION TESTER Keystone Orthoscope Any error. OPTEC 2000 Any error. Titmus Vision Tester Any error. Titmus II Vision Tester Any error. (3) In administering any of...

  7. 49 CFR Appendix F to Part 240 - Medical Standards Guidelines

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... greater guidance on the procedures that should be employed in administering the vision and hearing... more errors on plates 1-15. MULTIFUNCTION VISION TESTER Keystone Orthoscope Any error. OPTEC 2000 Any error. Titmus Vision Tester Any error. Titmus II Vision Tester Any error. (3) In administering any of...

  8. Comparison of Optimal Design Methods in Inverse Problems

    DTIC Science & Technology

    2011-05-11

    corresponding FIM can be estimated by F̂(τ) = F̂(τ, θ̂_OLS) = (Σ̂_N(θ̂_OLS))^(−1). (13) The asymptotic standard errors are given by SE_k(θ_0) = √((Σ_N0)_kk), k = 1, ..., p. (14) These standard errors are estimated in practice (when θ_0 and σ_0 are not known) by SE_k(θ̂_OLS) = √((Σ̂_N(θ̂_OLS))_kk), k = 1, ..., p, and SE_k(θ̂_boot) = √(Cov(θ̂_boot)_kk). We will compare the optimal design methods using the standard errors resulting from the optimal time points each…
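
    The quoted relations reduce to "invert the Fisher Information Matrix and take square roots of the diagonal." A minimal Python sketch, using a made-up 3-parameter FIM rather than one from the report:

        import numpy as np

        # Hypothetical Fisher Information Matrix for a 3-parameter model.
        fim = np.array([[40.0,  5.0,  1.0],
                        [ 5.0, 25.0,  2.0],
                        [ 1.0,  2.0, 10.0]])

        cov = np.linalg.inv(fim)        # Sigma-hat = F^(-1), asymptotic covariance
        se = np.sqrt(np.diag(cov))      # SE_k = sqrt(Sigma_kk), k = 1..p
        print("asymptotic standard errors:", se)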

  9. Small-Volume Injections: Evaluation of Volume Administration Deviation From Intended Injection Volumes.

    PubMed

    Muffly, Matthew K; Chen, Michael I; Claure, Rebecca E; Drover, David R; Efron, Bradley; Fitch, William L; Hammer, Gregory B

    2017-10-01

    In the perioperative period, anesthesiologists and postanesthesia care unit (PACU) nurses routinely prepare and administer small-volume IV injections, yet the accuracy of delivered medication volumes in this setting has not been described. In this ex vivo study, we sought to characterize the degree to which small-volume injections (≤0.5 mL) deviated from the intended injection volumes among a group of pediatric anesthesiologists and pediatric PACU nurses. We hypothesized that as the intended injection volumes decreased, the deviation from those intended injection volumes would increase. Ten attending pediatric anesthesiologists and 10 pediatric PACU nurses each performed a series of 10 injections into a simulated patient IV setup. Practitioners used separate 1-mL tuberculin syringes with removable 18-gauge needles (Becton-Dickinson & Company, Franklin Lakes, NJ) to aspirate 5 different volumes (0.025, 0.05, 0.1, 0.25, and 0.5 mL) of 0.25 mM Lucifer Yellow (LY) fluorescent dye constituted in saline (Sigma Aldrich, St. Louis, MO) from a rubber-stoppered vial. Each participant then injected the specified volume of LY fluorescent dye via a 3-way stopcock into IV tubing with free-flowing 0.9% sodium chloride (10 mL/min). The injected volume of LY fluorescent dye and 0.9% sodium chloride then drained into a collection vial for laboratory analysis. Microplate fluorescence wavelength detection (Infinite M1000; Tecan, Mannedorf, Switzerland) was used to measure the fluorescence of the collected fluid. Administered injection volumes were calculated based on the fluorescence of the collected fluid using a calibration curve of known LY volumes and associated fluorescence. To determine whether deviation of the administered volumes from the intended injection volumes increased at lower injection volumes, we compared the proportional injection volume error (loge [administered volume/intended volume]) for each of the 5 injection volumes using a linear regression model. Analysis of variance was used to determine whether the absolute log proportional error differed by the intended injection volume. Interindividual and intraindividual deviation from the intended injection volume was also characterized. As the intended injection volumes decreased, the absolute log proportional injection volume error increased (analysis of variance, P < .0018). The exploratory analysis revealed no significant difference in the standard deviations of the log proportional errors for injection volumes between physicians and pediatric PACU nurses; however, the difference in absolute bias was significantly higher for nurses, with a 2-sided significance of P = .03. Clinically significant dose variation occurs when injecting volumes ≤0.5 mL. Administering small volumes of medications may result in unintended medication administration errors.
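
    The study's error metric is easy to reproduce. The sketch below computes the log proportional injection volume error, log_e(administered/intended), for hypothetical administered volumes (the numbers are invented, not the study's measurements).

        import numpy as np

        intended = np.array([0.025, 0.05, 0.1, 0.25, 0.5])            # mL
        administered = np.array([0.031, 0.044, 0.106, 0.243, 0.502])  # mL (hypothetical)

        log_prop_err = np.log(administered / intended)
        print("log proportional errors:  ", np.round(log_prop_err, 3))
        print("absolute log prop. errors:", np.round(np.abs(log_prop_err), 3))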

  10. Reaction Time Is Negatively Associated with Corpus Callosum Area in the Early Stages of CADASIL.

    PubMed

    Delorme, S; De Guio, F; Reyes, S; Jabouley, A; Chabriat, H; Jouvent, E

    2017-11-01

    Reaction time was recently recognized as a marker of subtle cognitive and behavioral alterations in the early clinical stages of CADASIL, a monogenic cerebral small-vessel disease. In unselected patients with CADASIL, brain atrophy and lacunes are the main imaging correlates of disease severity, but MR imaging correlates of reaction time in mildly affected patients are unknown. We hypothesized that reaction time is independently associated with the corpus callosum area in the early clinical stages of CADASIL. Twenty-six patients with CADASIL without dementia (Mini-Mental State Examination score > 24 and no cognitive symptoms) and without disability (modified Rankin Scale score ≤ 1) were compared with 29 age- and sex-matched controls. Corpus callosum area was determined on 3D-T1 MR imaging sequences with validated methodology. Between-group comparisons were performed with t tests or χ² tests when appropriate. Relationships between reaction time and corpus callosum area were tested using linear regression modeling. Reaction time was significantly related to corpus callosum area in patients (estimate = −7.4 × 10³, standard error = 3.3 × 10³, P = .03) even after adjustment for age, sex, level of education, and scores of depression and apathy (estimate = −12.2 × 10³, standard error = 3.8 × 10³, P = .005). No significant relationship was observed in controls. Corpus callosum area, a simple and robust imaging parameter, appears to be an independent correlate of reaction time at the early clinical stages of CADASIL. Further studies will determine whether corpus callosum area can be used as an outcome in future clinical trials in CADASIL or in more prevalent small-vessel diseases. © 2017 by American Journal of Neuroradiology.

  11. Accuracy validation of incident photon fluence on DQE for various measurement conditions and X-ray units.

    PubMed

    Haba, Tomonobu; Kondo, Shimpei; Hayashi, Daiki; Koyama, Shuji

    2013-07-01

    Detective quantum efficiency (DQE) is widely used as a comprehensive metric for X-ray image evaluation in digital X-ray units. The incident photon fluence per air kerma (SNR²(in)) is necessary for calculating the DQE. The International Electrotechnical Commission (IEC) reports the SNR²(in) under conditions of standard radiation quality, but this value might not accurately match the SNR²(in) calculated from the X-ray spectra emitted by an actual X-ray tube. In this study, we evaluated the error range of the SNR²(in) presented by the IEC 62220-1 report. We measured the X-ray spectra emitted by an X-ray tube under conditions of standard radiation quality RQA5. The spectral photon fluence at each energy bin was multiplied by the photon energy and the mass energy absorption coefficient of air; then the air kerma spectrum was derived. The air kerma spectrum was integrated over the whole photon energy range to yield the total air kerma. The total photon number was then divided by the total air kerma. This value is the SNR²(in). These calculations were performed for various measurement parameters and X-ray units. The percent difference between the calculated value and the standard value of RQA5 was up to 2.9%. The error range was not negligibly small. Therefore, it is better to use the new SNR²(in) of 30694 (1/(mm² μGy)) than the current SNR²(in) of 30174 (1/(mm² μGy)).
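
    The SNR²(in) calculation described above can be sketched numerically. The three-bin spectrum below is a crude stand-in (real RQA5 spectra use many narrow bins), and the mass energy absorption coefficients are approximate NIST values for air; with these assumptions the result lands near the reported ~30,000 1/(mm² μGy).

        import numpy as np

        E_keV = np.array([40.0, 60.0, 80.0])           # photon energies (bin centers)
        phi = np.array([2.0e5, 5.0e5, 1.5e5])          # photons/mm^2 per bin (hypothetical)
        muen_rho = np.array([0.0683, 0.0304, 0.0241])  # mu_en/rho of air, cm^2/g (approx.)

        # Air kerma per bin = fluence * energy * (mu_en/rho), converting
        # photons/mm^2 -> photons/m^2, keV -> J, and cm^2/g -> m^2/kg; result in Gy.
        kerma_Gy = (phi * 1e6) * (E_keV * 1.602e-16) * (muen_rho * 0.1)

        total_kerma_uGy = kerma_Gy.sum() * 1e6
        snr2_in = phi.sum() / total_kerma_uGy          # photons / (mm^2 uGy)
        print(f"SNR^2_in ~ {snr2_in:.0f} 1/(mm^2 uGy)")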

  12. Geospatial interpolation and mapping of tropospheric ozone pollution using geostatistics.

    PubMed

    Kethireddy, Swatantra R; Tchounwou, Paul B; Ahmad, Hafiz A; Yerramilli, Anjaneyulu; Young, John H

    2014-01-10

    Tropospheric ozone (O3) pollution is a major problem worldwide, including in the United States of America (USA), particularly during the summer months. Ozone oxidative capacity and its impact on human health have attracted the attention of the scientific community. In the USA, sparse spatial observations for O3 may not provide a reliable source of data over a geo-environmental region. Geostatistical Analyst in ArcGIS has the capability to interpolate values in unmonitored geo-spaces of interest. In this study of eastern Texas O3 pollution, hourly episodes for spring and summer 2012 were selectively identified. To visualize the O3 distribution, geostatistical techniques were employed in ArcMap. Using ordinary Kriging, geostatistical layers of O3 for all the studied hours were predicted and mapped at a spatial resolution of 1 kilometer. A decent level of prediction accuracy was achieved and was confirmed from cross-validation results. The mean prediction error was close to 0, the root mean-standardized-prediction error was close to 1, and the root mean square and average standard errors were small. O3 pollution map data can be further used in analysis and modeling studies. Kriging results and O3 decadal trends indicate that the populace in Houston-Sugar Land-Baytown, Dallas-Fort Worth-Arlington, Beaumont-Port Arthur, San Antonio, and Longview are repeatedly exposed to high levels of O3-related pollution, and are prone to the corresponding respiratory and cardiovascular health effects. Optimization of the monitoring network proves to be an added advantage for the accurate prediction of exposure levels.
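
    The cross-validation diagnostics cited above (mean prediction error near 0, root mean standardized prediction error near 1) can be computed with a few lines of Python. The left-out observations, predictions, and kriging standard errors below are hypothetical.

        import numpy as np

        def cv_diagnostics(obs, pred, kriging_se):
            err = pred - obs
            return {"mean error": err.mean(),            # want ~ 0
                    "rmse": np.sqrt((err ** 2).mean()),  # want small
                    "rms standardized": np.sqrt(((err / kriging_se) ** 2).mean())}  # want ~ 1

        obs = np.array([62.0, 70.5, 55.2, 81.0, 66.3])   # left-out O3 values (ppb)
        pred = np.array([63.1, 69.0, 56.4, 79.2, 67.0])  # kriging predictions
        se = np.array([1.5, 1.8, 1.2, 2.1, 1.4])         # kriging standard errors
        print(cv_diagnostics(obs, pred, se))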

  13. Adjusted adaptive Lasso for covariate model-building in nonlinear mixed-effect pharmacokinetic models.

    PubMed

    Haem, Elham; Harling, Kajsa; Ayatollahi, Seyyed Mohammad Taghi; Zare, Najaf; Karlsson, Mats O

    2017-02-01

    One important aim in population pharmacokinetics (PK) and pharmacodynamics is the identification and quantification of the relationships between the parameters and covariates. Lasso has been suggested as a technique for simultaneous estimation and covariate selection. In linear regression, it has been shown that Lasso lacks the oracle property, that is, it does not asymptotically perform as though the true underlying model were given in advance. Adaptive Lasso (ALasso) with appropriate initial weights is claimed to possess the oracle property; however, it can lead to poor predictive performance when there is multicollinearity between covariates. This simulation study implemented a new version of ALasso, called adjusted ALasso (AALasso), which takes the ratio of the standard error of the maximum likelihood (ML) estimator to the ML coefficient as the initial weight in ALasso, to deal with multicollinearity in non-linear mixed-effect models. The performance of AALasso was compared with that of ALasso and Lasso. PK data were simulated in four set-ups from a one-compartment bolus input model. Covariates were created by sampling from a multivariate standard normal distribution with no, low (0.2), moderate (0.5) or high (0.7) correlation. The true covariates influenced only clearance, at different magnitudes. AALasso, ALasso and Lasso were compared in terms of mean absolute prediction error and error of the estimated covariate coefficient. The results show that AALasso performed better in small data sets, even in those in which a high correlation existed between covariates. This makes AALasso a promising method for covariate selection in nonlinear mixed-effect models.

  14. (U) An Analytic Study of Piezoelectric Ejecta Mass Measurements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tregillis, Ian Lee

    2017-02-16

    We consider the piezoelectric measurement of the areal mass of an ejecta cloud, for the specific case where ejecta are created by a single shock at the free surface and fly ballistically through vacuum to the sensor. To do so, we define time- and velocity-dependent ejecta "areal mass functions" at the source and sensor in terms of typically unknown distribution functions for the ejecta particles. Next, we derive an equation governing the relationship between the areal mass function at the source (which resides in the rest frame of the free surface) and at the sensor (which resides in the laboratory frame). We also derive expressions for the analytic ("true") accumulated ejecta mass at the sensor and the measured ("inferred") value obtained via the standard method for analyzing piezoelectric voltage traces. This approach enables us to derive an exact expression for the error imposed upon a piezoelectric ejecta mass measurement (in a perfect system) by the assumption of instantaneous creation. We verify that when the ejecta are created instantaneously (i.e., when the time dependence is a delta function), the piezoelectric inference method exactly reproduces the correct result. When creation is not instantaneous, the standard piezo analysis will always overestimate the true mass. However, the error is generally quite small (less than several percent) for most reasonable velocity and time dependences. In some cases, errors exceeding 10-15% may require velocity distributions or ejecta production timescales inconsistent with experimental observations. These results are demonstrated rigorously with numerous analytic test problems.

  15. Comparison of Efficiency of Jackknife and Variance Component Estimators of Standard Errors. Program Statistics Research. Technical Report.

    ERIC Educational Resources Information Center

    Longford, Nicholas T.

    Large scale surveys usually employ a complex sampling design and as a consequence, no standard methods for estimation of the standard errors associated with the estimates of population means are available. Resampling methods, such as jackknife or bootstrap, are often used, with reference to their properties of robustness and reduction of bias. A…

  16. Errors in quantitative backscattered electron analysis of bone standardized by energy-dispersive x-ray spectrometry.

    PubMed

    Vajda, E G; Skedros, J G; Bloebaum, R D

    1998-10-01

    Backscattered electron (BSE) imaging has proven to be a useful method for analyzing the mineral distribution in microscopic regions of bone. However, an accepted method of standardization has not been developed, limiting the utility of BSE imaging for truly quantitative analysis. Previous work has suggested that BSE images can be standardized by energy-dispersive x-ray spectrometry (EDX). Unfortunately, EDX-standardized BSE images tend to underestimate the mineral content of bone when compared with traditional ash measurements. The goal of this study is to investigate the nature of the deficit between EDX-standardized BSE images and ash measurements. A series of analytical standards, ashed bone specimens, and unembedded bone specimens were investigated to determine the source of the deficit previously reported. The primary source of error was found to be inaccurate ZAF corrections to account for the organic phase of the bone matrix. Conductive coatings, methylmethacrylate embedding media, and minor elemental constituents in bone mineral introduced negligible errors. It is suggested that the errors would remain constant and an empirical correction could be used to account for the deficit. However, extensive preliminary testing of the analysis equipment is essential.

  17. Systematic study of error sources in supersonic skin-friction balance measurements

    NASA Technical Reports Server (NTRS)

    Allen, J. M.

    1976-01-01

    An experimental study was performed to investigate potential error sources in data obtained with a self-nulling, moment-measuring, skin-friction balance. The balance was installed in the sidewall of a supersonic wind tunnel, and independent measurements of the three forces contributing to the balance output (skin friction, lip force, and off-center normal force) were made for a range of gap size and element protrusion. The relatively good agreement between the balance data and the sum of these three independently measured forces validated the three-term model used. No advantage to a small gap size was found; in fact, the larger gaps were preferable. Perfect element alignment with the surrounding test surface resulted in very small balance errors. However, if small protrusion errors are unavoidable, no advantage was found in having the element slightly below the surrounding test surface rather than above it.

  18. Reliability and validity of two isometric squat tests.

    PubMed

    Blazevich, Anthony J; Gill, Nicholas; Newton, Robert U

    2002-05-01

    The purpose of the present study was first to examine the reliability of isometric squat (IS) and isometric forward hack squat (IFHS) tests to determine if repeated measures on the same subjects yielded reliable results. The second purpose was to examine the relation between isometric and dynamic measures of strength to assess validity. Fourteen male subjects performed maximal IS and IFHS tests on 2 occasions and 1 repetition maximum (1-RM) free-weight squat and forward hack squat (FHS) tests on 1 occasion. The 2 tests were found to be highly reliable (intraclass correlation coefficients ICC(IS) = 0.97 and ICC(IFHS) = 1.00). There was a strong relation between average IS and 1-RM squat performance, and between IFHS and 1-RM FHS performance (r(squat) = 0.77, r(FHS) = 0.76; p < 0.01), but a weak relation between squat and FHS test performances (r < 0.55). There was also no difference between observed 1-RM values and those predicted by our regression equations. Errors in predicting 1-RM performance were on the order of 8.5% (standard error of the estimate [SEE] = 13.8 kg) and 7.3% (SEE = 19.4 kg) for IS and IFHS, respectively. Correlations between isometric and 1-RM tests were not of sufficient size to indicate high validity of the isometric tests. Together the results suggest that IS and IFHS tests could detect small differences in multijoint isometric strength between subjects, or performance changes over time, and that the scores in the isometric tests are well related to 1-RM performance. However, there was a small error when predicting 1-RM performance from isometric performance, and these tests have not been shown to discriminate between small changes in dynamic strength. The weak relation between squat and FHS test performance can be attributed to differences in the movement patterns of the tests.

  19. New Angles on Standard Force Fields: Toward a General Approach for Treating Atomic-Level Anisotropy

    DOE PAGES

    Van Vleet, Mary J.; Misquitta, Alston J.; Schmidt, J. R.

    2017-12-21

    Nearly all standard force fields employ the "sum-of-spheres" approximation, which models intermolecular interactions purely in terms of interatomic distances. Nonetheless, atoms in molecules can have significantly nonspherical shapes, leading to interatomic interaction energies with strong orientation dependencies. Neglecting this "atomic-level anisotropy" can lead to significant errors in predicting interaction energies. Herein, we propose a simple, transferable, and computationally efficient model (MASTIFF) whereby atomic-level orientation dependence can be incorporated into ab initio intermolecular force fields. MASTIFF includes anisotropic exchange-repulsion, charge penetration, and dispersion effects, in conjunction with a standard treatment of anisotropic long-range (multipolar) electrostatics. To validate our approach, we benchmark MASTIFF against various sum-of-spheres models over a large library of intermolecular interactions between small organic molecules. MASTIFF achieves quantitative accuracy, with respect to both high-level electronic structure theory and experiment, thus showing promise as a basis for "next-generation" force field development.

  20. Standardized mean differences cause funnel plot distortion in publication bias assessments.

    PubMed

    Zwetsloot, Peter-Paul; Van Der Naald, Mira; Sena, Emily S; Howells, David W; IntHout, Joanna; De Groot, Joris Ah; Chamuleau, Steven Aj; MacLeod, Malcolm R; Wever, Kimberley E

    2017-09-08

    Meta-analyses are increasingly used for synthesis of evidence from biomedical research, and often include an assessment of publication bias based on visual or analytical detection of asymmetry in funnel plots. We studied the influence of different normalisation approaches, sample size and intervention effects on funnel plot asymmetry, using empirical datasets and illustrative simulations. We found that funnel plots of the Standardized Mean Difference (SMD) plotted against the standard error (SE) are susceptible to distortion, leading to overestimation of the existence and extent of publication bias. Distortion was more severe when the primary studies had a small sample size and when an intervention effect was present. We show that using the Normalised Mean Difference measure as effect size (when possible), or plotting the SMD against a sample size-based precision estimate, are more reliable alternatives. We conclude that funnel plots using the SMD in combination with the SE are unsuitable for publication bias assessments and can lead to false-positive results.
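
    The SMD-SE coupling behind this distortion is easy to demonstrate: the usual SE formula for the SMD contains the SMD itself, so even without publication bias the two funnel-plot axes are correlated. A minimal simulation sketch (all numbers are illustrative assumptions):

        import numpy as np

        rng = np.random.default_rng(3)

        # 500 two-arm studies, true effect d = 1, small per-arm sizes n = 5..15,
        # and NO publication bias.
        smd, se, inv_sqrt_n = [], [], []
        for n in rng.integers(5, 16, size=500):
            ctrl = rng.normal(0.0, 1.0, n)
            trt = rng.normal(1.0, 1.0, n)
            sp = np.sqrt((ctrl.var(ddof=1) + trt.var(ddof=1)) / 2)  # pooled SD
            d = (trt.mean() - ctrl.mean()) / sp                     # Cohen's d (SMD)
            smd.append(d)
            se.append(np.sqrt(2 / n + d ** 2 / (4 * n)))  # SE formula contains d itself
            inv_sqrt_n.append(1 / np.sqrt(n))

        smd, se, inv_sqrt_n = map(np.array, (smd, se, inv_sqrt_n))
        print("corr(SMD, SE)        =", np.corrcoef(smd, se)[0, 1])          # clearly > 0
        print("corr(SMD, 1/sqrt(n)) =", np.corrcoef(smd, inv_sqrt_n)[0, 1])  # much nearer 0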

  1. Standardized mean differences cause funnel plot distortion in publication bias assessments

    PubMed Central

    Van Der Naald, Mira; Sena, Emily S; Howells, David W; IntHout, Joanna; De Groot, Joris AH; Chamuleau, Steven AJ; MacLeod, Malcolm R

    2017-01-01

    Meta-analyses are increasingly used for synthesis of evidence from biomedical research, and often include an assessment of publication bias based on visual or analytical detection of asymmetry in funnel plots. We studied the influence of different normalisation approaches, sample size and intervention effects on funnel plot asymmetry, using empirical datasets and illustrative simulations. We found that funnel plots of the Standardized Mean Difference (SMD) plotted against the standard error (SE) are susceptible to distortion, leading to overestimation of the existence and extent of publication bias. Distortion was more severe when the primary studies had a small sample size and when an intervention effect was present. We show that using the Normalised Mean Difference measure as effect size (when possible), or plotting the SMD against a sample size-based precision estimate, are more reliable alternatives. We conclude that funnel plots using the SMD in combination with the SE are unsuitable for publication bias assessments and can lead to false-positive results. PMID:28884685

  2. New Angles on Standard Force Fields: Toward a General Approach for Treating Atomic-Level Anisotropy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Van Vleet, Mary J.; Misquitta, Alston J.; Schmidt, J. R.

    Nearly all standard force fields employ the "sum-of-spheres" approximation, which models intermolecular interactions purely in terms of interatomic distances. Nonetheless, atoms in molecules can have significantly nonspherical shapes, leading to interatomic interaction energies with strong orientation dependencies. Neglecting this "atomic-level anisotropy" can lead to significant errors in predicting interaction energies. Herein, we propose a simple, transferable, and computationally efficient model (MASTIFF) whereby atomic-level orientation dependence can be incorporated into ab initio intermolecular force fields. MASTIFF includes anisotropic exchange-repulsion, charge penetration, and dispersion effects, in conjunction with a standard treatment of anisotropic long-range (multipolar) electrostatics. To validate our approach, we benchmark MASTIFF against various sum-of-spheres models over a large library of intermolecular interactions between small organic molecules. MASTIFF achieves quantitative accuracy, with respect to both high-level electronic structure theory and experiment, thus showing promise as a basis for "next-generation" force field development.

  3. NASA GPM GV Science Requirements

    NASA Technical Reports Server (NTRS)

    Smith, E.

    2003-01-01

    An important scientific objective of the NASA portion of the GPM Mission is to generate quantitatively-based error characterization information along with the rainrate retrievals emanating from the GPM constellation of satellites. These data must serve four main purposes: (1) they must be of sufficient quality, uniformity, and timeliness to govern the observation weighting schemes used in the data assimilation modules of numerical weather prediction models; (2) they must extend over that portion of the globe accessible by the GPM core satellite to which the NASA GV program is focused (approx. 65° inclination); (3) they must have sufficient specificity to enable detection of physically-formulated microphysical and meteorological weaknesses in the standard physical level 2 rainrate algorithms to be used in the GPM Precipitation Processing System (PPS), i.e., algorithms which will have evolved from the TRMM standard physical level 2 algorithms; and (4) they must support the use of physical error modeling as a primary validation tool and as the eventual replacement of the conventional GV approach of statistically intercomparing surface rainrates from ground and satellite measurements. This approach to ground validation research represents a paradigm shift vis-à-vis the program developed for the TRMM mission, which conducted ground validation largely as a statistical intercomparison process between raingauge-derived or radar-derived rainrates and the TRMM satellite rainrate retrievals -- long after the original satellite retrievals were archived. This approach has been able to quantify averaged rainrate differences between the satellite algorithms and the ground instruments, but has not been able to explain causes of algorithm failures or produce error information directly compatible with the cost functions of data assimilation schemes. These schemes require periodic and near-realtime bias uncertainty (i.e., global space-time distributed conditional accuracy of the retrieved rainrates) and local error covariance structure (i.e., global space-time distributed error correlation information for the local 4-dimensional space-time domain -- or in simpler terms, the matrix form of precision error). This can only be accomplished by establishing a network of high quality, heavily instrumented supersites selectively distributed at a few oceanic, continental, and coastal sites. Economics and pragmatics dictate that the network must be made up of a relatively small number of sites (6-8) created through international cooperation. This presentation will address some of the details of the methodology behind the error characterization approach, some proposed solutions for expanding site-developed error properties to regional scales, a data processing and communications concept that would enable rapid implementation of algorithm improvement by the algorithm developers, and the likely available options for developing the supersite network.

  4. Analysis of DGPS/INS and MLS/INS final approach navigation errors and control performance data

    NASA Technical Reports Server (NTRS)

    Hueschen, Richard M.; Spitzer, Cary R.

    1992-01-01

    Flight tests were conducted jointly by NASA Langley Research Center and Honeywell, Inc., on a B-737 research aircraft to record a data base for evaluating the performance of a differential GPS (DGPS)/inertial navigation system (INS) which used GPS Coarse/Acquisition (C/A) code receivers. Estimates from the DGPS/INS and a Microwave Landing System (MLS)/INS, and various aircraft parameter data, were recorded in real time aboard the aircraft while flying along the final approach path to landing. This paper presents the mean and standard deviation of the DGPS/INS and MLS/INS navigation position errors computed relative to the laser tracker system, and of the difference between the DGPS/INS and MLS/INS velocity estimates. RMS errors are presented for DGPS/INS and MLS/INS guidance errors (localizer and glideslope). The mean navigation position errors and the standard deviation of the x position coordinate of the DGPS/INS and MLS/INS systems were found to be of similar magnitude, while the standard deviations of the y and z position coordinate errors were significantly larger for DGPS/INS compared to MLS/INS.

  5. Modeling the small-scale dish-mounted solar thermal Brayton cycle

    NASA Astrophysics Data System (ADS)

    Le Roux, Willem G.; Meyer, Josua P.

    2016-05-01

    The small-scale dish-mounted solar thermal Brayton cycle (STBC) makes use of a sun-tracking dish reflector, solar receiver, recuperator and micro-turbine to generate power in the range of 1-20 kW. The modeling of such a system, using a turbocharger as micro-turbine, is required so that optimisation and further development of an experimental setup can be done. As a validation, an analytical model of the small-scale STBC in Matlab, where the net power output is determined from an exergy analysis, is compared with Flownex, an integrated systems CFD code. A 4.8 m diameter parabolic dish with open-cavity tubular receiver and plate-type counterflow recuperator is considered, based on previous work. A dish optical error of 10 mrad, a tracking error of 1° and a receiver aperture area of 0.25 m × 0.25 m are considered. Since the recuperator operates at a very high average temperature, it is modeled using an updated ε-NTU method which takes heat loss to the environment into consideration. Compressor and turbine maps from standard off-the-shelf Garrett turbochargers are used. The results show that for the calculation of the steady-state temperatures and pressures, there is good agreement between the Matlab and Flownex results (within 8%), except for the recuperator outlet temperature, which is due to the use of different ε-NTU methods. With the use of Matlab and Flownex, it is shown that the small-scale open STBC with an existing off-the-shelf turbocharger could generate a positive net power output with a solar-to-mechanical efficiency of up to 12%, with much room for improvement.
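
    For reference, the textbook ε-NTU relation for a counterflow recuperator is sketched below with a hypothetical operating point. This is only the baseline that the paper's updated, heat-loss-aware method extends; the sketch does not include the heat loss term.

        import numpy as np

        def effectiveness_counterflow(ntu, cr):
            # Standard counterflow epsilon-NTU relation; cr = Cmin/Cmax.
            if np.isclose(cr, 1.0):
                return ntu / (1.0 + ntu)
            x = np.exp(-ntu * (1.0 - cr))
            return (1.0 - x) / (1.0 - cr * x)

        # Hypothetical recuperator operating point (not from the paper).
        print(f"effectiveness = {effectiveness_counterflow(ntu=4.0, cr=0.95):.3f}")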

  6. A Third Moment Adjusted Test Statistic for Small Sample Factor Analysis.

    PubMed

    Lin, Johnny; Bentler, Peter M

    2012-01-01

    Goodness of fit testing in factor analysis is based on the assumption that the test statistic is asymptotically chi-square; but this property may not hold in small samples even when the factors and errors are normally distributed in the population. Robust methods such as Browne's asymptotically distribution-free method and the Satorra-Bentler mean scaling statistic were developed under the presumption of non-normality in the factors and errors. This paper finds a new application to the case where factors and errors are normally distributed in the population but the skewness of the obtained test statistic is still high due to sampling error in the observed indicators. An extension of the Satorra-Bentler statistic is proposed that not only scales the mean but also adjusts the degrees of freedom based on the skewness of the obtained test statistic in order to improve its robustness under small samples. A simple simulation study shows that this third moment adjusted statistic asymptotically performs on par with previously proposed methods, and at a very small sample size offers superior Type I error rates under a properly specified model. Data from Mardia, Kent and Bibby's study of students tested for their ability in five content areas, under either open- or closed-book conditions, were used to illustrate the real-world performance of this statistic.

  7. Conditional Standard Errors of Measurement for Scale Scores.

    ERIC Educational Resources Information Center

    Kolen, Michael J.; And Others

    1992-01-01

    A procedure is described for estimating the reliability and conditional standard errors of measurement of scale scores incorporating the discrete transformation of raw scores to scale scores. The method is illustrated using a strong true score model, and practical applications are described. (SLD)

  8. Probabilistic global maps of the CO2 column at daily and monthly scales from sparse satellite measurements

    NASA Astrophysics Data System (ADS)

    Chevallier, Frédéric; Broquet, Grégoire; Pierangelo, Clémence; Crisp, David

    2017-07-01

    The column-average dry air mole fraction of carbon dioxide in the atmosphere (XCO2) is measured by scattered satellite measurements like those from the Orbiting Carbon Observatory (OCO-2). We show that global continuous maps of XCO2 (corresponding to level 3 of the satellite data) at daily or coarser temporal resolution can be inferred from these data with a Kalman filter built on a model of persistence. Our application of this approach to 2 years of OCO-2 retrievals indicates that the filter provides better information than a climatology of XCO2 at both daily and monthly scales. Provided that the assigned observation uncertainty statistics are tuned in each grid cell of the XCO2 maps by an objective method (based on consistency diagnostics), the errors predicted by the filter at daily and monthly scales represent the true error statistics reasonably well, except for a bias in the high latitudes of the winter hemisphere and a lack of resolution (i.e., too small a discrimination skill) of the predicted error standard deviations. Due to the sparse satellite sampling, the broad-scale patterns of XCO2 described by the filter seem to lag behind the real signals by a few weeks. Finally, the filter offers interesting insights into the quality of the retrievals, in terms of both random and systematic errors.
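
    The persistence-model Kalman filter can be illustrated for a single grid cell: the forecast step carries yesterday's XCO2 forward with inflated variance, and only days with a sounding update the estimate. All numbers below (process noise, observation noise, the synthetic truth, the 10% sampling rate) are invented for illustration.

        import numpy as np

        rng = np.random.default_rng(4)

        q, r = 0.05 ** 2, 0.8 ** 2       # process / observation variance (ppm^2), assumed
        x, p = 400.0, 4.0 ** 2           # initial state (ppm) and its variance

        truth = 400.0 + 0.01 * np.arange(60)            # slow synthetic trend
        obs = np.where(rng.random(60) < 0.1,            # ~10% of days have a sounding
                       truth + rng.normal(0.0, 0.8, 60), np.nan)

        for t in range(60):
            p = p + q                                   # forecast: persistence
            if not np.isnan(obs[t]):                    # measurement update
                k = p / (p + r)                         # Kalman gain
                x += k * (obs[t] - x)
                p *= (1 - k)
        print(f"final estimate {x:.2f} ppm, predicted SD {np.sqrt(p):.2f} ppm")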

  9. Time-gated scintillator imaging for real-time optical surface dosimetry in total skin electron therapy.

    PubMed

    Bruza, Petr; Gollub, Sarah L; Andreozzi, Jacqueline M; Tendler, Irwin I; Williams, Benjamin B; Jarvis, Lesley A; Gladstone, David J; Pogue, Brian W

    2018-05-02

    The purpose of this study was to measure surface dose by remote time-gated imaging of plastic scintillators. A novel technique for time-gated, intensified camera imaging of scintillator emission was demonstrated, and key parameters influencing the signal were analyzed, including distance, angle and thickness. A set of scintillator samples was calibrated by using thermo-luminescence detector response as reference. Examples of use in total skin electron therapy are described. The data showed excellent room light rejection (signal-to-noise ratio of scintillation SNR ≈ 470), ideal scintillation dose response linearity, and 2% dose rate error. Individual sample scintillation response varied by 7% due to sample preparation. Inverse square distance dependence correction and lens throughput error (8% per meter) correction were needed. At scintillator-to-source angle and observation angle <50°, the radiant energy fluence error was smaller than 1%. The achieved standard error of the scintillator cumulative dose measurement compared to the TLD dose was 5%. The results from this proof-of-concept study documented the first use of small scintillator targets for remote surface dosimetry in ambient room lighting. The measured dose accuracy renders our method comparable to thermo-luminescent detector dosimetry, with the ultimate realization of accuracy likely to be better than shown here. Once optimized, this approach to remote dosimetry may substantially reduce the time and effort required for surface dosimetry.

  10. Time-gated scintillator imaging for real-time optical surface dosimetry in total skin electron therapy

    NASA Astrophysics Data System (ADS)

    Bruza, Petr; Gollub, Sarah L.; Andreozzi, Jacqueline M.; Tendler, Irwin I.; Williams, Benjamin B.; Jarvis, Lesley A.; Gladstone, David J.; Pogue, Brian W.

    2018-05-01

    The purpose of this study was to measure surface dose by remote time-gated imaging of plastic scintillators. A novel technique for time-gated, intensified camera imaging of scintillator emission was demonstrated, and key parameters influencing the signal were analyzed, including distance, angle and thickness. A set of scintillator samples was calibrated by using thermo-luminescence detector response as reference. Examples of use in total skin electron therapy are described. The data showed excellent room light rejection (signal-to-noise ratio of scintillation SNR ≈ 470), ideal scintillation dose response linearity, and 2% dose rate error. Individual sample scintillation response varied by 7% due to sample preparation. Inverse square distance dependence correction and lens throughput error (8% per meter) correction were needed. At scintillator-to-source angle and observation angle <50°, the radiant energy fluence error was smaller than 1%. The achieved standard error of the scintillator cumulative dose measurement compared to the TLD dose was 5%. The results from this proof-of-concept study documented the first use of small scintillator targets for remote surface dosimetry in ambient room lighting. The measured dose accuracy renders our method comparable to thermo-luminescent detector dosimetry, with the ultimate realization of accuracy likely to be better than shown here. Once optimized, this approach to remote dosimetry may substantially reduce the time and effort required for surface dosimetry.

  11. Particle Filter with Novel Nonlinear Error Model for Miniature Gyroscope-Based Measurement While Drilling Navigation

    PubMed Central

    Li, Tao; Yuan, Gannan; Li, Wang

    2016-01-01

    The derivation of a conventional error model for the miniature gyroscope-based measurement while drilling (MGWD) system is based on the assumption that the errors of attitude are small enough so that the direction cosine matrix (DCM) can be approximated or simplified by the errors of small-angle attitude. However, the simplification of the DCM would introduce errors to the navigation solutions of the MGWD system if the initial alignment cannot provide precise attitude, especially for the low-cost microelectromechanical system (MEMS) sensors operated in harsh multilateral horizontal downhole drilling environments. This paper proposes a novel nonlinear error model (NNEM) by the introduction of the error of DCM, and the NNEM can reduce the propagated errors under large-angle attitude error conditions. The zero velocity and zero position are the reference points and the innovations in the states estimation of particle filter (PF) and Kalman filter (KF). The experimental results illustrate that the performance of PF is better than KF and the PF with NNEM can effectively restrain the errors of system states, especially for the azimuth, velocity, and height in the quasi-stationary condition. PMID:26999130

  12. Particle Filter with Novel Nonlinear Error Model for Miniature Gyroscope-Based Measurement While Drilling Navigation.

    PubMed

    Li, Tao; Yuan, Gannan; Li, Wang

    2016-03-15

    The derivation of a conventional error model for the miniature gyroscope-based measurement while drilling (MGWD) system is based on the assumption that the errors of attitude are small enough so that the direction cosine matrix (DCM) can be approximated or simplified by the errors of small-angle attitude. However, the simplification of the DCM would introduce errors to the navigation solutions of the MGWD system if the initial alignment cannot provide precise attitude, especially for the low-cost microelectromechanical system (MEMS) sensors operated in harsh multilateral horizontal downhole drilling environments. This paper proposes a novel nonlinear error model (NNEM) by the introduction of the error of DCM, and the NNEM can reduce the propagated errors under large-angle attitude error conditions. The zero velocity and zero position are the reference points and the innovations in the states estimation of particle filter (PF) and Kalman filter (KF). The experimental results illustrate that the performance of PF is better than KF and the PF with NNEM can effectively restrain the errors of system states, especially for the azimuth, velocity, and height in the quasi-stationary condition.

  13. What to use to express the variability of data: Standard deviation or standard error of mean?

    PubMed

    Barde, Mohini P; Barde, Prajakt J

    2012-07-01

    Statistics plays a vital role in biomedical research. It helps present data precisely and draw meaningful conclusions. While presenting data, one should be aware of using adequate statistical measures. In biomedical journals, the Standard Error of the Mean (SEM) and the Standard Deviation (SD) are used interchangeably to express variability, though they measure different parameters. The SEM quantifies uncertainty in the estimate of the mean, whereas the SD indicates dispersion of the data from the mean. As readers are generally interested in knowing the variability within the sample, descriptive data should be precisely summarized with the SD. Use of the SEM should be limited to computing confidence intervals (CIs), which measure the precision of the population estimate. Journals can avoid such errors by requiring authors to adhere to their guidelines.
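
    The distinction is a one-line computation. A small Python sketch with a hypothetical biomarker sample:

        import numpy as np

        x = np.array([4.1, 5.3, 4.8, 6.0, 5.1, 4.4, 5.7, 4.9])  # hypothetical sample
        n = len(x)
        sd = x.std(ddof=1)               # SD: spread of the data themselves
        sem = sd / np.sqrt(n)            # SEM: precision of the estimated mean
        ci = (x.mean() - 1.96 * sem, x.mean() + 1.96 * sem)

        print(f"mean = {x.mean():.2f}, SD = {sd:.2f}  (describes the sample)")
        print(f"SEM = {sem:.2f}, 95% CI = ({ci[0]:.2f}, {ci[1]:.2f})  (describes the mean)")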

  14. Conditional standard errors of measurement for composite scores on the Wechsler Preschool and Primary Scale of Intelligence-Third Edition.

    PubMed

    Price, Larry R; Raju, Nambury; Lurie, Anna; Wilkins, Charles; Zhu, Jianjun

    2006-02-01

    A specific recommendation of the 1999 Standards for Educational and Psychological Testing by the American Educational Research Association, the American Psychological Association, and the National Council on Measurement in Education is that test publishers report estimates of the conditional standard error of measurement (SEM). Procedures for calculating the conditional (score-level) SEM based on raw scores are well documented; however, few procedures have been developed for estimating the conditional SEM of subtest or composite scale scores resulting from a nonlinear transformation. Item response theory provided the psychometric foundation to derive the conditional standard errors of measurement and confidence intervals for composite scores on the Wechsler Preschool and Primary Scale of Intelligence-Third Edition.

  15. Ion beam machining error control and correction for small scale optics.

    PubMed

    Xie, Xuhui; Zhou, Lin; Dai, Yifan; Li, Shengyi

    2011-09-20

    Ion beam figuring (IBF) technology for small scale optical components is discussed. Because a small removal function can be obtained in IBF, computer-controlled optical surfacing technology can deterministically machine precision centimeter- or millimeter-scale optical components. When using a small ion beam to machine small optical components, some key problems, such as positioning the small ion beam on the optical surface, the material removal rate, and the ion beam scanning pitch control on the optical surface, must be seriously considered. These problems matter more for a small ion beam than for a big one because of the small beam diameter and the lower material removal rate. In this paper, we discuss these problems and their influence on machining small optical components in detail. Based on the identification-compensation principle, an iterative machining compensation method is derived for correcting the positioning error of the ion beam, with the material removal rate estimated at a selected optimal scanning pitch. Experiments on ϕ10 mm Zerodur planar and spherical samples were made, and the final surface errors were both smaller than λ/100 as measured by a Zygo GPI interferometer.

  16. Comparison of MLC error sensitivity of various commercial devices for VMAT pre-treatment quality assurance.

    PubMed

    Saito, Masahide; Sano, Naoki; Shibata, Yuki; Kuriyama, Kengo; Komiyama, Takafumi; Marino, Kan; Aoki, Shinichi; Ashizawa, Kazunari; Yoshizawa, Kazuya; Onishi, Hiroshi

    2018-05-01

    The purpose of this study was to compare the MLC error sensitivity of various measurement devices for VMAT pre-treatment quality assurance (QA). This study used four QA devices (ScandiDos Delta4, PTW 2D-array, iRT Systems IQM, and PTW Farmer chamber). Nine retrospective VMAT plans were used, and nine MLC error plans were generated for all nine original VMAT plans. The IQM and Farmer chamber were evaluated using the cumulative signal difference between the baseline and error-induced measurements. In addition, to investigate the sensitivity of the Delta4 device and the 2D-array, global gamma analysis (1%/1 mm, 2%/2 mm, and 3%/3 mm) and dose difference (DD; 1%, 2%, and 3%) criteria were used between the baseline and error-induced measurements. Some deviations in MLC error sensitivity across the evaluation metrics and MLC error ranges were observed. For the two ionization devices, the sensitivity of the IQM was significantly better than that of the Farmer chamber (P < 0.01), while both devices showed a good linear correlation between the cumulative signal difference and the magnitude of the MLC errors. The pass rates decreased as the magnitude of the MLC error increased for both the Delta4 and the 2D-array. However, small MLC errors for small aperture sizes, such as in lung SBRT, could not be detected using the loosest gamma criteria (3%/3 mm). Our results indicate that DD could be more useful than gamma analysis for daily MLC QA, and that a large-area ionization chamber has a greater advantage for detecting systematic MLC errors because of its large sensitive volume, while the other devices could not detect this error in some cases with a small range of MLC error. © 2018 The Authors. Journal of Applied Clinical Medical Physics published by Wiley Periodicals, Inc. on behalf of American Association of Physicists in Medicine.

  17. An Inset CT Specimen for Evaluating Fracture in Small Samples of Material

    PubMed Central

    Yahyazadehfar, M.; Nazari, A.; Kruzic, J.J.; Quinn, G.D.; Arola, D.

    2013-01-01

    In evaluations on the fracture behavior of hard tissues and many biomaterials, the volume of material available to study is not always sufficient to apply a standard method of practice. In the present study an inset Compact Tension (inset CT) specimen is described, which uses a small cube of material (approximately 2×2×2 mm³) that is molded within a secondary material to form the compact tension geometry. A generalized equation describing the Mode I stress intensity was developed for the specimen using the solutions from a finite element model that was defined over permissible crack lengths, variations in specimen geometry, and a range in elastic properties of the inset and mold materials. A validation of the generalized equation was performed using estimates for the fracture toughness of a commercial dental composite via the “inset CT” specimen and the standard geometry defined by ASTM E399. Results showed that the average fracture toughness obtained from the new specimen (1.23 ± 0.02 MPa·m^0.5) was within 2% of that from the standard. Applications of the inset CT specimen are presented for experimental evaluations on the crack growth resistance of dental enamel and root dentin, including their fracture resistance curves. Potential errors in adopting this specimen are then discussed, including the effects of debonding between the inset and molding material on the estimated stress intensity distribution. Results of the investigation show that the inset CT specimen offers a viable approach for studying the fracture behavior of small volumes of structural materials. PMID:24268892

  18. Towards Accurate Modelling of Galaxy Clustering on Small Scales: Testing the Standard ΛCDM + Halo Model

    NASA Astrophysics Data System (ADS)

    Sinha, Manodeep; Berlind, Andreas A.; McBride, Cameron K.; Scoccimarro, Roman; Piscionere, Jennifer A.; Wibking, Benjamin D.

    2018-04-01

    Interpreting the small-scale clustering of galaxies with halo models can elucidate the connection between galaxies and dark matter halos. Unfortunately, the modelling is typically not sufficiently accurate for ruling out models statistically. It is thus difficult to use the information encoded in small scales to test cosmological models or probe subtle features of the galaxy-halo connection. In this paper, we attempt to push halo modelling into the "accurate" regime with a fully numerical mock-based methodology and careful treatment of statistical and systematic errors. With our forward-modelling approach, we can incorporate clustering statistics beyond the traditional two-point statistics. We use this modelling methodology to test the standard ΛCDM + halo model against the clustering of SDSS DR7 galaxies. Specifically, we use the projected correlation function, group multiplicity function and galaxy number density as constraints. We find that while the model fits each statistic separately, it struggles to fit them simultaneously. Adding group statistics leads to a more stringent test of the model and significantly tighter constraints on model parameters. We explore the impact of varying the adopted halo definition and cosmological model and find that changing the cosmology makes a significant difference. The most successful model we tried (Planck cosmology with Mvir halos) matches the clustering of low luminosity galaxies, but exhibits a 2.3σ tension with the clustering of luminous galaxies, thus providing evidence that the "standard" halo model needs to be extended. This work opens the door to adding interesting freedom to the halo model and including additional clustering statistics as constraints.

  19. Cost-effectiveness of the stream-gaging program in Kentucky

    USGS Publications Warehouse

    Ruhl, K.J.

    1989-01-01

    This report documents the results of a study of the cost-effectiveness of the stream-gaging program in Kentucky. The total surface-water program includes 97 daily-discharge stations, 12 stage-only stations, and 35 crest-stage stations and is operated on a budget of $950,700. One station used for research lacks an adequate source of funding and should be discontinued when the research ends. Most stations in the network are multiple-use, with 65 stations operated for the purpose of defining hydrologic systems, 48 for project operation, 47 for definition of regional hydrology, and 43 for hydrologic forecasting purposes. Eighteen stations support water quality monitoring activities, one station is used for planning and design, and one station is used for research. The average standard error of estimation of streamflow records was determined only for stations in the Louisville Subdistrict. Under current operating policy, with a budget of $223,500, the average standard error of estimation is 28.5%. Altering the travel routes and measurement frequency to reduce the amount of lost stage record would allow a slight decrease in standard error to 26.9%. The results indicate that the collection of streamflow records in the Louisville Subdistrict is cost effective in its present mode of operation. In the Louisville Subdistrict, a minimum budget of $214,200 is required to operate the current network at an average standard error of 32.7%. A budget less than this does not permit proper service and maintenance of the gages and recorders. The maximum budget analyzed was $268,200, which would result in an average standard error of 16.9%, indicating that a 20% budget increase would reduce the percent standard error by 40%. (USGS)

  20. Comparing biomarker measurements to a normal range: when to use standard error of the mean (SEM) or standard deviation (SD) confidence intervals tests.

    PubMed

    Pleil, Joachim D

    2016-01-01

    This commentary is the second in a series, each outlining one specific concept in the interpretation of biomarker data. In the first, an observational method was presented for assessing the distribution of measurements before making parametric calculations. Here, the discussion revolves around the next step: the choice between the standard error of the mean and the calculated standard deviation to compare or predict measurement results.
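
    The distinction can be made concrete with a small sketch (hypothetical data; SEM = SD/sqrt(n) is the standard relationship): an interval built from the SEM brackets the group mean, while one built from the SD brackets where individual measurements fall.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    x = rng.normal(loc=50.0, scale=8.0, size=25)   # hypothetical biomarker measurements

    n, mean, sd = x.size, x.mean(), x.std(ddof=1)
    sem = sd / np.sqrt(n)                          # standard error of the mean

    # ~95% interval for the *mean* (use SEM): appropriate when comparing a
    # group mean to a normal range.
    print("mean +/- 2*SEM:", mean - 2 * sem, mean + 2 * sem)

    # ~95% interval for *individual* values (use SD): appropriate when asking
    # whether a single measurement is consistent with the normal range.
    print("mean +/- 2*SD :", mean - 2 * sd, mean + 2 * sd)
    ```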

  1. Is Comprehension Necessary for Error Detection? A Conflict-Based Account of Monitoring in Speech Production

    ERIC Educational Resources Information Center

    Nozari, Nazbanou; Dell, Gary S.; Schwartz, Myrna F.

    2011-01-01

    Despite the existence of speech errors, verbal communication is successful because speakers can detect (and correct) their errors. The standard theory of speech-error detection, the perceptual-loop account, posits that the comprehension system monitors production output for errors. Such a comprehension-based monitor, however, cannot explain the…

  2. Development of a standardized differential-reflective bioassay for microbial pathogens

    NASA Astrophysics Data System (ADS)

    Wilhelm, Jay; Auld, J. R. X.; Smith, James E.

    2008-04-01

    This research examines standardizing a method for the rapid/semi-automated identification of microbial contaminants. It introduces a method suited to food/water contamination testing, serology, urinalysis, and saliva testing for any particle larger than 1 micron that can be bound exclusively to an identifying marker. This optical biosensor method seeks to integrate the semi-manual distribution of a collected sample onto a "transparent" substrate array of binding sites that will then be applied to a standard optical data disk and run for analysis. The detection of most microbe species is possible in this platform because the relative scale is greater than the resolution of the standard-scale digital information on a standard CD or DVD. This paper explains the critical first stage in the advance of this detection concept. This work has concentrated on developing the necessary software component needed to perform highly sensitive small-scale recognition using the standard optical disk as a detection platform. Physical testing has made significant progress in demonstrating the ability to utilize a standard optical drive for micro-scale detection through the exploitation of CIRC error correction. Testing has also shown a definable trend in the optimum scale and geometry of micro-arrayed attachment sites needed for the concept to succeed.

  3. General Aviation Avionics Statistics.

    DTIC Science & Technology

    1980-12-01

    ...designed to produce standard errors on these variables at levels specified by the FAA. No controls were placed on the standard errors of the non-design... Transponder Encoding Requirement and Mode C Automatic Altitude Reporting Capability (has been deleted); Two-way Radio; VOR or TACAN Receiver. Remaining 42...

  4. The Effects of Lever Arm (Instrument Offset) Error on GRAV-D Airborne Gravity Data

    NASA Astrophysics Data System (ADS)

    Johnson, J. A.; Youngman, M.; Damiani, T.

    2017-12-01

    High quality airborne gravity collection with a 2-axis, stabilized platform gravity instrument, such as with a Micro-g LaCoste Turnkey Airborne Gravity System (TAGS), is dependent on the aircraft's ability to maintain "straight and level" flight. However, during flight there is constant rotation about the aircraft's center of gravity. Standard practice is to install the scientific equipment close to the aircraft's estimated center of gravity to minimize the relative rotations with aircraft motion. However, there remain small offsets between the instruments. These distance offsets, the lever arm, are used to define the rigid-body, spatial relationship between the IMU, GPS antenna, and airborne gravimeter within the aircraft body frame. The Gravity for the Redefinition of the American Vertical Datum (GRAV-D) project, which is collecting airborne gravity data across the U.S., uses a commercial software package for coupled IMU-GNSS aircraft positioning. This software incorporates a lever arm correction to calculate a precise position for the airborne gravimeter. The positioning software must do a coordinate transformation to relate each epoch of the coupled GNSS-IMU derived position to the position of the gravimeter within the constantly-rotating aircraft. This transformation requires three inputs: accurate IMU-measured aircraft rotations, GNSS positions, and lever arm distances between instruments. Previous studies show that correcting for the lever arm distances improves gravity results, but no sensitivity tests have been done to investigate how error in the lever arm distances affects the final airborne gravity products. This research investigates the effects of lever arm measurement error on airborne gravity data. GRAV-D lever arms are nominally measured to the cm-level using surveying equipment. "Truth" data sets will be created by processing GRAV-D flight lines with both relatively small lever arms and large lever arms. Then negative and positive incremental errors will be introduced independently in the x, y, and z directions during GPS-IMU processing. Finally, the post-processed gravity data obtained using the erroneous lever arms will be compared to the post-processed truth sets to identify relationships between error in the lever arm measurement and the final gravity product.
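
    The correction the positioning software applies can be sketched as a rigid-body transformation: rotate the surveyed body-frame lever arm by the IMU-measured attitude and add it to the GNSS antenna position. A minimal sketch with an assumed yaw-pitch-roll convention and invented numbers, not GRAV-D's actual processing chain:

    ```python
    import numpy as np

    def attitude_matrix(roll, pitch, yaw):
        """Body-to-local-level rotation from IMU attitude (radians).
        One common aerospace convention (yaw-pitch-roll); conventions vary."""
        cr, sr = np.cos(roll), np.sin(roll)
        cp, sp = np.cos(pitch), np.sin(pitch)
        cy, sy = np.cos(yaw), np.sin(yaw)
        Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
        Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
        Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
        return Rz @ Ry @ Rx

    # Hypothetical per-epoch inputs: GNSS antenna position (local frame, metres),
    # IMU attitude, and the surveyed body-frame lever arm antenna -> gravimeter.
    antenna_pos = np.array([0.0, 0.0, 0.0])
    lever_arm = np.array([1.20, -0.35, 0.80])   # metres, body frame
    R = attitude_matrix(np.radians(2.0), np.radians(-1.5), np.radians(95.0))

    # Gravimeter position at this epoch; an error in lever_arm propagates
    # through R into the gravimeter position at every epoch.
    gravimeter_pos = antenna_pos + R @ lever_arm
    print(gravimeter_pos)
    ```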

  5. Composite Reliability and Standard Errors of Measurement for a Seven-Subtest Short Form of the Wechsler Adult Intelligence Scale-Revised.

    ERIC Educational Resources Information Center

    Schretlen, David; And Others

    1994-01-01

    Composite reliability and standard errors of measurement were computed for prorated Verbal, Performance, and Full-Scale intelligence quotient (IQ) scores from a seven-subtest short form of the Wechsler Adult Intelligence Scale-Revised. Results with 1,880 adults (standardization sample) indicate that this form is as reliable as the complete test.…

  6. A Brief Look at: Test Scores and the Standard Error of Measurement. E&R Report No. 10.13

    ERIC Educational Resources Information Center

    Holdzkom, David; Sumner, Brian; McMillen, Brad

    2010-01-01

    In the context of standardized testing, the standard error of measurement (SEM) is a measure of the factors other than the student's actual knowledge of the tested material that may affect the student's test score. Such factors may include distractions in the testing environment, fatigue, hunger, or even luck. This means that a student's observed…
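
    In classical test theory the SEM is computed from the spread of observed scores and the test's reliability via SEM = SD x sqrt(1 - reliability); a short sketch with hypothetical score statistics:

    ```python
    import math

    sd_scores = 12.0       # standard deviation of observed test scores (hypothetical)
    reliability = 0.91     # test reliability coefficient (hypothetical)

    sem = sd_scores * math.sqrt(1.0 - reliability)   # classical test theory SEM
    observed = 104.0       # one student's observed score

    # A ~68% band for the student's "true" score under the classical model.
    print(f"SEM = {sem:.1f}; true score likely within "
          f"{observed - sem:.1f}..{observed + sem:.1f}")
    ```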

  7. Image guidance in prostate cancer - can offline corrections be an effective substitute for daily online imaging?

    PubMed

    Prasad, Devleena; Das, Pinaki; Saha, Niladri S; Chatterjee, Sanjoy; Achari, Rimpa; Mallick, Indranil

    2014-01-01

    The aim of this study was to determine whether a less resource-intensive, established offline correction protocol - the No Action Level (NAL) protocol - was as effective as daily online correction of setup deviations in curative high-dose radiotherapy of prostate cancer. A total of 683 daily megavoltage CT (MVCT) or kilovoltage cone-beam CT (kV-CBCT) images of 30 patients with localized prostate cancer treated with intensity modulated radiotherapy were evaluated. Daily image guidance was performed and setup errors in three translational axes recorded. The NAL protocol was simulated by using the mean shift calculated from the first five fractions and implementing it on all subsequent treatments. Using the imaging data from the remaining fractions, the daily residual error (RE) was determined. The proportion of fractions in which the RE was greater than 3, 5, and 7 mm was calculated, as well as the actual PTV margin that would be required if the offline protocol were followed. Using the NAL protocol reduced the systematic but not the random errors. Corrections made using the NAL protocol resulted in small and acceptable RE in the mediolateral (ML) and superoinferior (SI) directions, with 46/533 (8.1%) and 48/533 (5%) residual shifts above 5 mm. However, residual errors greater than 5 mm in the anteroposterior (AP) direction remained in 181/533 (34%) of fractions. The PTV margins calculated based on residual errors were 5 mm, 5 mm, and 13 mm in the ML, SI, and AP directions, respectively. Offline correction using the NAL protocol resulted in unacceptably high residual errors in the AP direction, due to random uncertainties of rectal and bladder filling. Daily online imaging and correction remain the standard image guidance policy for highly conformal radiotherapy of prostate cancer.
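
    A sketch of the NAL logic as described above: the mean shift over the first five fractions becomes a constant correction, and residual errors are evaluated on the remaining fractions. The data are simulated, and the van Herk margin recipe (2.5*Sigma + 0.7*sigma) is a common choice assumed here rather than taken from the paper:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    # Hypothetical daily setup errors (mm) for one patient in one axis:
    # a systematic offset plus day-to-day random variation.
    shifts = rng.normal(loc=3.0, scale=2.5, size=25)

    correction = shifts[:5].mean()        # NAL: mean shift of the first 5 fractions
    residual = shifts[5:] - correction    # residual error on remaining fractions

    print("fraction of residuals > 5 mm:", np.mean(np.abs(residual) > 5.0))

    # The van Herk recipe uses population statistics over many patients;
    # the per-patient values here only illustrate the ingredients.
    Sigma = abs(residual.mean())          # systematic component
    sigma = residual.std(ddof=1)          # random component
    print("2.5*Sigma + 0.7*sigma =", 2.5 * Sigma + 0.7 * sigma, "mm")
    ```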

  8. Toward a new culture in verified quantum operations

    NASA Astrophysics Data System (ADS)

    Flammia, Steve

    Measuring error rates of quantum operations has become an indispensable component of any aspiring platform for quantum computation. As the quality of controlled quantum operations increases, the demands on the accuracy and precision with which we measure these error rates also grow. However, well-meaning scientists who report these error measures are faced with a sea of non-standardized methodologies and are often asked during publication for only coarse information about how their estimates were obtained. Moreover, there are serious incentives to use methodologies and measures that will continually produce numbers that improve with time to show progress. These problems will only be exacerbated as typical error rates go from 1 in 100 to 1 in 1000 or less. This talk will survey the challenges presented by the current paradigm and offer some suggestions for solutions that can help us move toward fair and standardized methods for error metrology in quantum computing experiments, and toward a culture that values full disclosure of methodologies and higher standards for data analysis.

  9. Molecular radiotherapy: the NUKFIT software for calculating the time-integrated activity coefficient.

    PubMed

    Kletting, P; Schimmel, S; Kestler, H A; Hänscheid, H; Luster, M; Fernández, M; Bröer, J H; Nosske, D; Lassmann, M; Glatting, G

    2013-10-01

    Calculation of the time-integrated activity coefficient (residence time) is a crucial step in dosimetry for molecular radiotherapy. However, available software is deficient in that it is either not tailored for use in molecular radiotherapy and/or does not include all required estimation methods. The aim of this work was therefore the development and programming of an algorithm which allows for an objective and reproducible determination of the time-integrated activity coefficient and its standard error. The algorithm includes the selection of a set of fitting functions from predefined sums of exponentials and the choice of an error model for the data used. To estimate the values of the adjustable parameters, an objective function is minimized; it depends on the data, the parameters of the error model, the fitting function, and (if required and available) Bayesian information. To increase reproducibility and user-friendliness, the starting values are automatically determined using a combination of curve stripping and random search. Visual inspection, the coefficient of determination, the standard errors of the fitted parameters, and the correlation matrix are provided to evaluate the quality of the fit. The functions that are most supported by the data are determined using the corrected Akaike information criterion. The time-integrated activity coefficient is estimated by analytically integrating the fitted functions. Its standard error is determined assuming Gaussian error propagation. The software was implemented in MATLAB. To validate the proper implementation of the objective function and the fit functions, the results of NUKFIT and SAAM numerical, a commercially available software tool, were compared. The automatic search for starting values was successfully tested for reproducibility. The quality criteria applied in conjunction with the Akaike information criterion allowed the selection of suitable functions. Function fit parameters and their standard errors estimated using SAAM numerical and NUKFIT showed differences of <1%. The differences for the time-integrated activity coefficients were also <1% (standard error between 0.4% and 3%). In general, the application of the software is user-friendly, and the results are mathematically correct and reproducible. An application of NUKFIT is presented for three different clinical examples. The software tool, with its underlying methodology, can be employed to objectively and reproducibly estimate the time-integrated activity coefficient and its standard error for most time-activity data in molecular radiotherapy.
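
    The core computation, fitting an exponential model to time-activity data, integrating it analytically, and propagating the fit uncertainty, can be sketched as follows (a single exponential and invented data for illustration; NUKFIT itself selects among sums of exponentials via the corrected AIC):

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def mono_exp(t, a, lam):
        return a * np.exp(-lam * t)

    # Hypothetical time-activity data: fraction of administered activity vs hours.
    t = np.array([1.0, 4.0, 24.0, 48.0, 96.0])
    A = np.array([0.82, 0.70, 0.35, 0.16, 0.04])

    popt, pcov = curve_fit(mono_exp, t, A, p0=(1.0, 0.03))
    a, lam = popt

    # Time-integrated activity coefficient: integral of a*exp(-lam*t), 0 to inf.
    tiac = a / lam
    # Gaussian error propagation with gradient (d/da, d/dlam) = (1/lam, -a/lam^2).
    grad = np.array([1.0 / lam, -a / lam**2])
    tiac_se = float(np.sqrt(grad @ pcov @ grad))
    print(f"TIAC = {tiac:.1f} h (SE {tiac_se:.1f} h)")
    ```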

  10. Reaeration equations derived from U.S. geological survey database

    USGS Publications Warehouse

    Melching, C.S.; Flores, H.E.

    1999-01-01

    Accurate estimation of the reaeration-rate coefficient (K2) is extremely important for waste-load allocation. Currently, available K2 estimation equations generally yield poor estimates when applied to stream conditions different from those for which the equations were derived because they were derived from small databases composed of potentially highly inaccurate measurements. A large data set of K2 measurements made with tracer-gas methods was compiled from U.S. Geological Survey studies. This compilation included 493 reaches on 166 streams in 23 states. Careful screening to detect and eliminate erroneous measurements reduced the data set to 371 measurements. These measurements were divided into four subgroups on the basis of flow regime (channel control or pool and riffle) and stream scale (discharge greater than or less than 0.556 m3/s). Multiple linear regression in logarithms was applied to relate K2 to 12 stream hydraulic and water-quality characteristics. The resulting best-estimation equations had the form of semiempirical equations that included the rate of energy dissipation and discharge or depth and width as variables. For equation verification, a data set of K2 measurements made with tracer-gas procedures by other agencies was compiled from the literature. This compilation included 127 reaches on at least 24 streams in at least seven states. The standard error of estimate obtained when applying the developed equations to the U.S. Geological Survey data set ranged from 44 to 61%, whereas the standard error of estimate was 78% when applied to the verification data set.
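
    A minimal sketch of the regression step described above: multiple linear regression in logarithms relating K2 to hydraulic variables. The data are simulated and the coefficients illustrative, not the published USGS equations:

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    n = 50
    # Hypothetical stream characteristics: energy dissipation rate and discharge.
    edr = rng.lognormal(mean=-4.0, sigma=1.0, size=n)   # slope*velocity, m/s
    q = rng.lognormal(mean=0.0, sigma=1.2, size=n)      # discharge, m3/s
    k2 = 80.0 * edr**0.7 * q**-0.2 * rng.lognormal(sigma=0.3, size=n)  # 1/day

    # Fit log10(K2) = b0 + b1*log10(EDR) + b2*log10(Q) by least squares.
    X = np.column_stack([np.ones(n), np.log10(edr), np.log10(q)])
    y = np.log10(k2)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta

    # Standard error of estimate in percent, back-transformed from the
    # log-scale residual scatter (ddof=3 for the three fitted coefficients).
    s = resid.std(ddof=3)
    print("coefficients:", beta)
    print("approx. standard error of estimate: %.0f%%" % (100 * (10**s - 1)))
    ```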

  11. The influence of different error estimates in the detection of postoperative cognitive dysfunction using reliable change indices with correction for practice effects.

    PubMed

    Lewis, Matthew S; Maruff, Paul; Silbert, Brendan S; Evered, Lis A; Scott, David A

    2007-02-01

    The reliable change index (RCI) expresses change relative to its associated error and is useful in the identification of postoperative cognitive dysfunction (POCD). This paper examines four common RCIs, each of which accounts for error in a different way. Three rules incorporate a constant correction for practice effects and are contrasted with the standard RCI, which has no correction for practice. These rules are applied to 160 patients undergoing coronary artery bypass graft (CABG) surgery who completed neuropsychological assessments preoperatively and 1 week postoperatively, using error and reliability data from a comparable healthy nonsurgical control group. The rules all identify POCD in a similar proportion of patients. However, when a constant correction (which removes the effects of systematic error) is deducted from the numerator of an RCI, the within-subject standard deviation (WSD), which expresses the effects of random error, is the theoretically appropriate error estimate for the denominator.
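
    A sketch of an RCI with a constant practice correction in the numerator and the WSD as the error term in the denominator, using hypothetical control-group values:

    ```python
    # Control-group data (hypothetical): mean practice effect and within-subject SD.
    practice = 2.0   # mean test-retest gain in healthy controls (constant correction)
    wsd = 3.5        # within-subject standard deviation (random error)

    def rci(pre, post):
        """RCI: practice-corrected change divided by the within-subject SD."""
        return ((post - pre) - practice) / wsd

    # A patient declining from 50 to 43; an RCI below -1.645 flags decline at a
    # one-tailed 5% level (a common, but not universal, cutoff).
    print(rci(50.0, 43.0))   # -> -2.57, flagged as reliable decline
    ```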

  12. Sample sizes needed for specified margins of relative error in the estimates of the repeatability and reproducibility standard deviations.

    PubMed

    McClure, Foster D; Lee, Jung K

    2005-01-01

    Sample size formulas are developed to estimate the repeatability and reproducibility standard deviations (s_r and s_R) such that the actual errors in s_r and s_R relative to their respective true values, σ_r and σ_R, are at predefined levels. The statistical consequences associated with the sample size required by AOAC INTERNATIONAL to validate an analytical method are discussed. In addition, formulas to estimate the uncertainties of s_r and s_R were derived and are provided as supporting documentation.
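
    The paper's exact formulas are not reproduced here, but a standard large-sample approximation conveys the idea: the relative standard error of a standard deviation estimated with n − 1 degrees of freedom is roughly 1/sqrt(2(n − 1)), which can be inverted to give a sample size for a desired relative margin:

    ```python
    import math

    def replicates_for_relative_error(margin, z=1.96):
        """Approximate n so the estimated SD is within `margin` (relative) of the
        true SD with ~95% confidence. Uses SE(s)/sigma ~ 1/sqrt(2*(n-1));
        the paper's exact formulas will differ somewhat."""
        return math.ceil(1 + z**2 / (2 * margin**2))

    for m in (0.30, 0.20, 0.10, 0.05):
        print(f"margin {m:.0%}: n ~ {replicates_for_relative_error(m)}")
    ```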

  13. Performance factors of mobile rich media job aids for community health workers

    PubMed Central

    Florez-Arango, Jose F; Dunn, Kim; Zhang, Jiajie

    2011-01-01

    Objective: To study and analyze the possible benefits to the performance of community health workers of point-of-care clinical guidelines implemented as interactive rich media job aids on small-format mobile platforms. Design: A crossover study with one intervention (rich media job aids) and one control (traditional job aids), two periods, and 50 community health workers, each subject solving a total of 15 standardized cases per period (30 cases in total per subject). Measurements: Error rate per case and task, and protocol compliance. Results: A total of 1394 cases were evaluated. The intervention reduced errors by an average of 33.15% (p=0.001) and increased protocol compliance by 30.18% (p<0.001). Limitations: Medical cases were presented on human patient simulators in a laboratory setting, not on real patients. Conclusion: These results indicate encouraging prospects for mHealth technologies in general, and for the use of rich media clinical guidelines on cell phones in particular, for the improvement of community health worker performance in developing countries. PMID:21292702

  14. Performance factors of mobile rich media job aids for community health workers.

    PubMed

    Florez-Arango, Jose F; Iyengar, M Sriram; Dunn, Kim; Zhang, Jiajie

    2011-01-01

    To study and analyze the possible benefits to the performance of community health workers of point-of-care clinical guidelines implemented as interactive rich media job aids on small-format mobile platforms. A crossover study with one intervention (rich media job aids) and one control (traditional job aids), two periods, and 50 community health workers, each subject solving a total of 15 standardized cases per period (30 cases in total per subject). Error rate per case and task and protocol compliance were measured. A total of 1394 cases were evaluated. The intervention reduced errors by an average of 33.15% (p = 0.001) and increased protocol compliance by 30.18% (p < 0.001). A limitation is that medical cases were presented on human patient simulators in a laboratory setting, not on real patients. These results indicate encouraging prospects for mHealth technologies in general, and for the use of rich media clinical guidelines on cell phones in particular, for the improvement of community health worker performance in developing countries.

  15. Comparison of three rf plasma impedance monitors on a high phase angle planar inductively coupled plasma source

    NASA Astrophysics Data System (ADS)

    Uchiyama, H.; Watanabe, M.; Shaw, D. M.; Bahia, J. E.; Collins, G. J.

    1999-10-01

    Accurate measurement of plasma source impedance is important for verification of plasma circuit models, as well as for plasma process characterization and endpoint detection. Most impedance measurement techniques depend in some manner on the cosine of the phase angle to determine the impedance of the plasma load. Inductively coupled plasmas are generally highly inductive, with the phase angle between the applied rf voltage and the rf current in the range of 88 to near 90 degrees. A small measurement error in this phase angle range results in a large error in the calculated cosine of the angle, introducing large impedance measurement variations. In this work, we have compared the measured impedance of a planar inductively coupled plasma using three commercial plasma impedance monitors (ENI V/I probe, Advanced Energy RFZ60 and Advanced Energy Z-Scan). The plasma impedance is independently verified using a specially designed match network and a calibrated load, representing the plasma, to provide a measurement standard.
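
    The sensitivity is easy to quantify: near 90° the cosine passes through zero, so a fixed phase-angle error produces a very large relative error in cos θ, and hence in the resistive part of the measured impedance:

    ```python
    import math

    for theta_deg in (60.0, 88.0, 89.0, 89.5):
        c_true = math.cos(math.radians(theta_deg))
        c_meas = math.cos(math.radians(theta_deg - 0.5))  # 0.5 degree phase error
        rel_err = (c_meas - c_true) / c_true * 100
        print(f"theta = {theta_deg:5.1f} deg: cos = {c_true:.5f}, "
              f"0.5 deg error -> {rel_err:5.1f}% error in cos")
    # At 60 deg the same 0.5 deg error changes cos by ~1.5%;
    # at 89.5 deg it changes cos by ~100%.
    ```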

  16. Refractive-index determination of solids from first- and second-order critical diffraction angles of periodic surface patterns

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Meichner, Christoph, E-mail: christoph.meichner@uni-bayreuth.de; Kador, Lothar, E-mail: lothar.kador@uni-bayreuth.de; Schedl, Andreas E.

    2015-08-15

    We present two approaches for measuring the refractive index of transparent solids in the visible spectral range based on diffraction gratings. Both require a small spot with a periodic pattern on the surface of the solid, collimated monochromatic light, and a rotation stage. We demonstrate the methods on a polydimethylsiloxane film (Sylgard® 184) and compare our data to those obtained with a standard Abbe refractometer at several wavelengths between 489 and 688 nm. The results of our approaches show good agreement with the refractometer data. Possible error sources are analyzed and discussed in detail; they include mainly the linewidth of the laser and/or the angular resolution of the rotation stage. With narrow-band light sources, an angular accuracy of ±0.025° results in an error of the refractive index of typically ±5 × 10⁻⁴. Information on the sample thickness is not required.

  17. An expanded calibration study of the explicitly correlated CCSD(T)-F12b method using large basis set standard CCSD(T) atomization energies.

    PubMed

    Feller, David; Peterson, Kirk A

    2013-08-28

    The effectiveness of the recently developed, explicitly correlated coupled cluster method CCSD(T)-F12b is examined in terms of its ability to reproduce atomization energies derived from complete basis set extrapolations of standard CCSD(T). Most of the standard method findings were obtained with aug-cc-pV7Z or aug-cc-pV8Z basis sets. For a few homonuclear diatomic molecules it was possible to push the basis set to the aug-cc-pV9Z level. F12b calculations were performed with the cc-pVnZ-F12 (n = D, T, Q) basis set sequence and were also extrapolated to the basis set limit using a Schwenke-style, parameterized formula. A systematic bias was observed in the F12b method with the (VTZ-F12/VQZ-F12) basis set combination. This bias resulted in the underestimation of reference values associated with small molecules (valence correlation energies <0.5 E_h) and an even larger overestimation of atomization energies for bigger systems. Consequently, caution should be exercised in the use of F12b for high accuracy studies. Root mean square and mean absolute deviation error metrics for this basis set combination were comparable to complete basis set values obtained with standard CCSD(T) and the aug-cc-pVDZ through aug-cc-pVQZ basis set sequence. However, the mean signed deviation was an order of magnitude larger. Problems partially due to basis set superposition error were identified with second row compounds which resulted in a weak performance for the smaller VDZ-F12/VTZ-F12 combination of basis sets.

  18. Bootstrap Estimates of Standard Errors in Generalizability Theory

    ERIC Educational Resources Information Center

    Tong, Ye; Brennan, Robert L.

    2007-01-01

    Estimating standard errors of estimated variance components has long been a challenging task in generalizability theory. Researchers have speculated about the potential applicability of the bootstrap for obtaining such estimates, but they have identified problems (especially bias) in using the bootstrap. Using Brennan's bias-correcting procedures…
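
    The basic idea can be sketched for a persons × items design: estimate the person variance component from the ANOVA mean squares, then bootstrap over persons. This is a naive sketch; Brennan's bias-correcting procedures mentioned above are not applied here:

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    n_p, n_i = 100, 20
    # Hypothetical persons x items scores: person + item effects + residual.
    scores = (rng.normal(0, 1.0, (n_p, 1)) + rng.normal(0, 0.5, (1, n_i))
              + rng.normal(0, 1.5, (n_p, n_i)))

    def var_person(x):
        """ANOVA estimate of the person variance component in a p x i design:
        (MS_persons - MS_residual) / n_items."""
        p, i = x.shape
        gm = x.mean()
        ms_p = i * ((x.mean(axis=1) - gm) ** 2).sum() / (p - 1)
        resid = x - x.mean(axis=1, keepdims=True) - x.mean(axis=0, keepdims=True) + gm
        ms_res = (resid ** 2).sum() / ((p - 1) * (i - 1))
        return (ms_p - ms_res) / i

    # Bootstrap over persons: resample rows with replacement.
    boot = [var_person(scores[rng.integers(0, n_p, n_p)]) for _ in range(500)]
    print("estimate:", round(var_person(scores), 3),
          "bootstrap SE:", round(float(np.std(boot, ddof=1)), 3))
    ```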

  19. Topological charge and cooling scales in pure SU(2) lattice gauge theory

    NASA Astrophysics Data System (ADS)

    Berg, Bernd A.; Clarke, David A.

    2018-03-01

    Using Monte Carlo simulations with overrelaxation, we have equilibrated lattices up to β = 2.928, size 60⁴, for pure SU(2) lattice gauge theory with the Wilson action. We calculate topological charges with the standard cooling method and find that they become more reliable with increasing β values and lattice sizes. Continuum limit estimates of the topological susceptibility χ are obtained, of which we favor χ^(1/4)/T_c = 0.643(12), where T_c is the SU(2) deconfinement temperature. Differences between cooling length scales in different topological sectors turn out to be too small to be detectable within our statistical errors.

  20. Model-based multi-fringe interferometry using Zernike polynomials

    NASA Astrophysics Data System (ADS)

    Gu, Wei; Song, Weihong; Wu, Gaofeng; Quan, Haiyang; Wu, Yongqian; Zhao, Wenchuan

    2018-06-01

    In this paper, a general phase retrieval method is proposed that is based on a single interferogram with a small number of fringes (either tilt or power). Zernike polynomials are used to characterize the phase to be measured, and the phase distribution is reconstructed by a non-linear least squares method. Experiments show that the proposed method obtains satisfactory results compared to the standard phase-shifting interferometry technique. Additionally, the retrace errors of the proposed method can be neglected because of the small number of fringes; it needs no auxiliary phase-shifting facilities (low cost) and is easy to implement, with no phase unwrapping required.

  1. Standard-M mobile satellite terminal employing electronic beam squint tracking

    NASA Technical Reports Server (NTRS)

    Hawkins, G. J.; Beach, M. A.; Hilton, G. S.

    1990-01-01

    In recent years, extensive experience has been built up at the University of Bristol in the use of the Electronic Beam Squint (EBS) tracking technique, applied to large earth station facilities. The current interest in land mobile satellite terminals, using small tracking antennas, has prompted the investigation of the applicability of the EBS technique to this environment. The development of an L-band mechanically steered vehicle antenna is presented. A description of the antenna is followed by a detailed investigation of the tracking environment and its implications on the error detection capability of the system. Finally, the overall hardware configuration is described along with plans for future work.

  2. Error estimation of deformable image registration of pulmonary CT scans using convolutional neural networks.

    PubMed

    Eppenhof, Koen A J; Pluim, Josien P W

    2018-04-01

    Error estimation in nonlinear medical image registration is a nontrivial problem that is important for validation of registration methods. We propose a supervised method for estimation of registration errors in nonlinear registration of three-dimensional (3-D) images. The method is based on a 3-D convolutional neural network that learns to estimate registration errors from a pair of image patches. By applying the network to patches centered around every voxel, we construct registration error maps. The network is trained using a set of representative images that have been synthetically transformed to construct a set of image pairs with known deformations. The method is evaluated on deformable registrations of inhale-exhale pairs of thoracic CT scans. Using ground truth target registration errors on manually annotated landmarks, we evaluate the method's ability to estimate local registration errors. Estimation of full domain error maps is evaluated using a gold standard approach. The two evaluation approaches show that we can train the network to robustly estimate registration errors in a predetermined range, with subvoxel accuracy. We achieved a root-mean-square deviation of 0.51 mm from gold standard registration errors and of 0.66 mm from ground truth landmark registration errors.

  3. Analysis of the Hessian for Inverse Scattering Problems. Part 3. Inverse Medium Scattering of Electromagnetic Waves in Three Dimensions

    DTIC Science & Technology

    2012-08-01

    An implication of the compactness of the Hessian is that for small data noise and model error, the discrete Hessian can be approximated by a low-rank matrix. This in turn enables fast solution of an appropriately... probability distribution is given by the inverse of the Hessian of the negative log likelihood function. For Gaussian data noise and model error, this...

  4. Adjuvant corneal crosslinking to prevent hyperopic LASIK regression

    PubMed Central

    Aslanides, Ioannis M; Mukherjee, Achyut N

    2013-01-01

    Purpose: To report the long-term outcomes, safety, stability, and efficacy in a pilot series of simultaneous hyperopic laser-assisted in situ keratomileusis (LASIK) and corneal crosslinking (CXL). Method: A small cohort series of five eyes, with clinically suboptimal topography and/or thickness, underwent LASIK surgery with immediate riboflavin application under the flap, followed by UV light irradiation. Postoperative assessment was performed at 1, 3, 6, and 12 months, with late follow-up at 4 years, and results were compared with a matched cohort that received LASIK only. Results: The average age of the LASIK-CXL group was 39 years (26-46), and the average spherical equivalent hyperopic refractive error was +3.45 diopters (standard deviation 0.76; range 2.5 to 4.5). All eyes maintained refractive stability over the 4 years. There were no complications related to CXL, and topographic and clinical outcomes were as expected for standard LASIK. Conclusion: This limited series suggests that simultaneous LASIK and CXL for hyperopia is safe. Outcomes of the small cohort suggest that this technique may be promising for ameliorating hyperopic regression, presumed to be biomechanical in origin, and may also address ectasia risk. PMID:23576861

  5. Techniques for estimating monthly mean streamflow at gaged sites and monthly streamflow duration characteristics at ungaged sites in central Nevada

    USGS Publications Warehouse

    Hess, G.W.; Bohman, L.R.

    1996-01-01

    Techniques for estimating monthly mean streamflow at gaged sites and monthly streamflow duration characteristics at ungaged sites in central Nevada were developed using streamflow records at six gaged sites and basin physical and climatic characteristics. Streamflow data at gaged sites were related by regression techniques to concurrent flows at nearby gaging stations so that monthly mean streamflows for periods of missing or no record can be estimated for gaged sites in central Nevada. The standard error of estimate for relations at these sites ranged from 12 to 196 percent. Also, monthly streamflow data for selected percent exceedence levels were used in regression analyses with basin and climatic variables to determine relations for ungaged basins for annual and monthly percent exceedence levels. Analyses indicate that the drainage area and percent of drainage area at altitudes greater than 10,000 feet are the most significant variables. For the annual percent exceedence, the standard error of estimate of the relations for ungaged sites ranged from 51 to 96 percent and standard error of prediction for ungaged sites ranged from 96 to 249 percent. For the monthly percent exceedence values, the standard error of estimate of the relations ranged from 31 to 168 percent, and the standard error of prediction ranged from 115 to 3,124 percent. Reliability and limitations of the estimating methods are described.

  6. An Empirical Approach to Ocean Color Data: Reducing Bias and the Need for Post-Launch Radiometric Re-Calibration

    NASA Technical Reports Server (NTRS)

    Gregg, Watson W.; Casey, Nancy W.; O'Reilly, John E.; Esaias, Wayne E.

    2009-01-01

    A new empirical approach is developed for ocean color remote sensing. Called the Empirical Satellite Radiance-In situ Data (ESRID) algorithm, the approach uses relationships between satellite water-leaving radiances and in situ data after full processing, i.e., at Level-3, to improve estimates of surface variables while relaxing requirements on post-launch radiometric re-calibration. The approach is evaluated using SeaWiFS chlorophyll, which is the longest time series of the most widely used ocean color geophysical product. The results suggest that ESRID 1) drastically reduces the bias of ocean chlorophyll, most impressively in coastal regions, 2) modestly improves the uncertainty, and 3) reduces the sensitivity of global annual median chlorophyll to changes in radiometric re-calibration. Simulated calibration errors of 1% or less produce small changes in global median chlorophyll (less than 2.7%). In contrast, the standard NASA algorithm set is highly sensitive to radiometric calibration: similar 1% calibration errors produce changes in global median chlorophyll up to nearly 25%. We show that 0.1% radiometric calibration error (about 1% in water-leaving radiance) is needed to prevent radiometric calibration errors from changing global annual median chlorophyll more than the maximum interannual variability observed in the SeaWiFS 9-year record (+/- 3%), using the standard method. This is much more stringent than the goal for SeaWiFS of 5% uncertainty for water-leaving radiance. The results suggest ocean color programs might consider placing less emphasis on expensive efforts to improve post-launch radiometric re-calibration in favor of increased efforts to characterize in situ observations of ocean surface geophysical products. Although the results here are focused on chlorophyll, in principle the approach described by ESRID can be applied to any surface variable potentially observable by visible remote sensing.

  7. SU-C-BRD-03: Analysis of Accelerator Generated Text Logs for Preemptive Maintenance

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Able, CM; Baydush, AH; Nguyen, C

    2014-06-15

    Purpose: To develop a model to analyze medical accelerator generated parameter and performance data that will provide an early warning of performance degradation and impending component failure. Methods: A robust 6 MV VMAT quality assurance treatment delivery was used to test the constancy of accelerator performance. The generated text log files were decoded and analyzed using statistical process control (SPC) methodology. The text file data is a single snapshot of energy-specific and overall system parameters. A total of 36 system parameters were monitored, which include RF generation, electron gun control, energy control, beam uniformity control, DC voltage generation, and cooling systems. The parameters were analyzed using Individual and Moving Range (I/MR) charts. The chart limits were calculated using a hybrid technique that included the use of the standard 3σ limits and the parameter/system specification. Synthetic errors/changes were introduced to determine the initial effectiveness of I/MR charts in detecting relevant changes in operating parameters. The magnitude of the synthetic errors/changes was based on: the value of 1 standard deviation from the mean operating parameter of 483 TB systems, a small fraction (≤ 5%) of the operating range, or a fraction of the minor fault deviation. Results: There were 34 parameters in which synthetic errors were introduced. There were 2 parameters (radial position steering coil, and positive 24V DC) in which the errors did not exceed the limit of the I/MR chart. The I chart limit was exceeded for all of the remaining parameters (94.2%). The MR chart limit was exceeded in 29 of the 32 parameters (85.3%) in which the I chart limit was exceeded. Conclusion: Statistical process control I/MR evaluation of text log file parameters may be effective in providing an early warning of performance degradation or component failure for digital medical accelerator systems. Research is supported by Varian Medical Systems, Inc.

  8. Predicting protein concentrations with ELISA microarray assays, monotonic splines and Monte Carlo simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Daly, Don S.; Anderson, Kevin K.; White, Amanda M.

    Background: A microarray of enzyme-linked immunosorbent assays, or ELISA microarray, predicts simultaneously the concentrations of numerous proteins in a small sample. These predictions, however, are uncertain due to processing error and biological variability. Making sound biological inferences as well as improving the ELISA microarray process requires both concentration predictions and credible estimates of their errors. Methods: We present a statistical method based on monotonic spline statistical models, penalized constrained least squares fitting (PCLS) and Monte Carlo simulation (MC) to predict concentrations and estimate prediction errors in ELISA microarray. PCLS restrains the flexible spline to a fit of assay intensity that is a monotone function of protein concentration. With MC, both modeling and measurement errors are combined to estimate prediction error. The spline/PCLS/MC method is compared to a common method using simulated and real ELISA microarray data sets. Results: In contrast to the rigid logistic model, the flexible spline model gave credible fits in almost all test cases, including troublesome cases with left and/or right censoring, or other asymmetries. For the real data sets, 61% of the spline predictions were more accurate than their comparable logistic predictions, especially the spline predictions at the extremes of the prediction curve. The relative errors of 50% of comparable spline and logistic predictions differed by less than 20%. Monte Carlo simulation rendered acceptable asymmetric prediction intervals for both spline and logistic models, while propagation of error produced symmetric intervals that diverged unrealistically as the standard curves approached horizontal asymptotes. Conclusions: The spline/PCLS/MC method is a flexible, robust alternative to a logistic/NLS/propagation-of-error method to reliably predict protein concentrations and estimate their errors. The spline method simplifies model selection and fitting, and reliably estimates believable prediction errors. For the 50% of the real data sets fit well by both methods, spline and logistic predictions are practically indistinguishable, varying in accuracy by less than 15%. The spline method may be useful when automated prediction across simultaneous assays of numerous proteins must be applied routinely with minimal user intervention.

  9. Microwave Determination of Water Mole Fraction in Humid Gas Mixtures

    NASA Astrophysics Data System (ADS)

    Cuccaro, R.; Gavioso, R. M.; Benedetto, G.; Madonna Ripa, D.; Fernicola, V.; Guianvarc'h, C.

    2012-09-01

    A small volume (65 cm³) gold-plated quasi-spherical microwave resonator has been used to measure the water vapor mole fraction x_w of H2O/N2 and H2O/air mixtures. This experimental technique exploits the high precision achievable in the determination of the cavity microwave resonance frequencies and is particularly sensitive to the presence of small concentrations of water vapor as a result of the high polarizability of this substance. The mixtures were prepared using the INRIM standard humidity generator for frost-point temperatures T_fp in the range between 241 K and 270 K and a commercial two-pressure humidity generator operated at a dew-point temperature between 272 K and 291 K. The experimental measurements compare favorably with the calculated molar fractions of the mixture supplied by the humidity generators, showing a normalized error lower than 0.8.

  10. Hedonic price models with omitted variables and measurement errors: a constrained autoregression-structural equation modeling approach with application to urban Indonesia

    NASA Astrophysics Data System (ADS)

    Suparman, Yusep; Folmer, Henk; Oud, Johan H. L.

    2014-01-01

    Omitted variables and measurement errors in explanatory variables frequently occur in hedonic price models. Ignoring these problems leads to biased estimators. In this paper, we develop a constrained autoregression-structural equation model (ASEM) to handle both types of problems. Standard panel data models to handle omitted variables bias are based on the assumption that the omitted variables are time-invariant. ASEM allows handling of both time-varying and time-invariant omitted variables by constrained autoregression. In the case of measurement error, standard approaches require additional external information which is usually difficult to obtain. ASEM exploits the fact that panel data are repeatedly measured which allows decomposing the variance of a variable into the true variance and the variance due to measurement error. We apply ASEM to estimate a hedonic housing model for urban Indonesia. To get insight into the consequences of measurement error and omitted variables, we compare the ASEM estimates with the outcomes of (1) a standard SEM, which does not account for omitted variables, (2) a constrained autoregression model, which does not account for measurement error, and (3) a fixed effects hedonic model, which ignores measurement error and time-varying omitted variables. The differences between the ASEM estimates and the outcomes of the three alternative approaches are substantial.

  11. The computation of equating errors in international surveys in education.

    PubMed

    Monseur, Christian; Berezner, Alla

    2007-01-01

    Since the IEA's Third International Mathematics and Science Study, one of the major objectives of international surveys in education has been to report trends in achievement. The names of the two current IEA surveys reflect this growing interest: Trends in International Mathematics and Science Study (TIMSS) and Progress in International Reading Literacy Study (PIRLS). Similarly a central concern of the OECD's PISA is with trends in outcomes over time. To facilitate trend analyses these studies link their tests using common item equating in conjunction with item response modelling methods. IEA and PISA policies differ in terms of reporting the error associated with trends. In IEA surveys, the standard errors of the trend estimates do not include the uncertainty associated with the linking step while PISA does include a linking error component in the standard errors of trend estimates. In other words, PISA implicitly acknowledges that trend estimates partly depend on the selected common items, while the IEA's surveys do not recognise this source of error. Failing to recognise the linking error leads to an underestimation of the standard errors and thus increases the Type I error rate, thereby resulting in reporting of significant changes in achievement when in fact these are not significant. The growing interest of policy makers in trend indicators and the impact of the evaluation of educational reforms appear to be incompatible with such underestimation. However, the procedure implemented by PISA raises a few issues about the underlying assumptions for the computation of the equating error. After a brief introduction, this paper will describe the procedure PISA implemented to compute the linking error. The underlying assumptions of this procedure will then be discussed. Finally an alternative method based on replication techniques will be presented, based on a simulation study and then applied to the PISA 2000 data.
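
    In one common formulation of this approach, the linking error is the standard error of the mean shift of the common items' difficulty estimates between the two administrations, and it is combined in quadrature with the sampling errors of the two survey means; a sketch with invented values:

    ```python
    import numpy as np

    rng = np.random.default_rng(4)
    k = 30                                  # number of common (link) items
    # Hypothetical shifts of the common items' difficulty estimates between
    # the two administrations (already on a common, centred scale).
    shift = rng.normal(0.02, 0.08, k)

    linking_error = shift.std(ddof=1) / np.sqrt(k)   # SE of the mean shift

    # For a trend estimate, the linking error is added in quadrature to the
    # sampling standard errors of the two survey means (same scale assumed).
    se_t1, se_t2 = 0.025, 0.027             # hypothetical sampling SEs
    total_se = np.sqrt(se_t1**2 + se_t2**2 + linking_error**2)
    print(f"linking error: {linking_error:.4f}, total SE of trend: {total_se:.4f}")
    ```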

  12. Stabilizing Conditional Standard Errors of Measurement in Scale Score Transformations

    ERIC Educational Resources Information Center

    Moses, Tim; Kim, YoungKoung

    2017-01-01

    The focus of this article is on scale score transformations that can be used to stabilize conditional standard errors of measurement (CSEMs). Three transformations for stabilizing the estimated CSEMs are reviewed, including the traditional arcsine transformation, a recently developed general variance stabilization transformation, and a new method…

  13. WASP (Write a Scientific Paper) using Excel - 6: Standard error and confidence interval.

    PubMed

    Grech, Victor

    2018-03-01

    The calculation of descriptive statistics includes the calculation of standard error and confidence interval, an inevitable component of data analysis in inferential statistics. This paper provides pointers as to how to do this in Microsoft Excel™. Copyright © 2018 Elsevier B.V. All rights reserved.

  14. Determinants of Standard Errors of MLEs in Confirmatory Factor Analysis

    ERIC Educational Resources Information Center

    Yuan, Ke-Hai; Cheng, Ying; Zhang, Wei

    2010-01-01

    This paper studies changes of standard errors (SE) of the normal-distribution-based maximum likelihood estimates (MLE) for confirmatory factor models as model parameters vary. Using logical analysis, simplified formulas and numerical verification, monotonic relationships between SEs and factor loadings as well as unique variances are found.…

  15. A Third Moment Adjusted Test Statistic for Small Sample Factor Analysis

    PubMed Central

    Lin, Johnny; Bentler, Peter M.

    2012-01-01

    Goodness of fit testing in factor analysis is based on the assumption that the test statistic is asymptotically chi-square; but this property may not hold in small samples even when the factors and errors are normally distributed in the population. Robust methods such as Browne's asymptotically distribution-free method and Satorra-Bentler's mean scaling statistic were developed under the presumption of non-normality in the factors and errors. This paper finds new application to the case where factors and errors are normally distributed in the population but the skewness of the obtained test statistic is still high due to sampling error in the observed indicators. An extension of Satorra-Bentler's statistic is proposed that not only scales the mean but also adjusts the degrees of freedom based on the skewness of the obtained test statistic in order to improve its robustness under small samples. A simple simulation study shows that this third moment adjusted statistic asymptotically performs on par with previously proposed methods, and at a very small sample size offers superior Type I error rates under a properly specified model. Data from Mardia, Kent and Bibby's study of students tested for their ability in five content areas that were either open or closed book were used to illustrate the real-world performance of this statistic. PMID:23144511
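
    The flavor of such an adjustment can be shown with simple moment matching: a chi-square with d degrees of freedom has skewness sqrt(8/d), so an empirically skewed statistic can be referred to a mean-scaled chi-square whose d reproduces the observed skewness. This is an illustration of the idea, not the paper's exact statistic:

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(5)
    nominal_df = 10
    # Hypothetical small-sample test statistics: right-skewed relative to the
    # nominal chi-square (simulated here with a scaled gamma distribution).
    T = stats.gamma.rvs(a=3.0, scale=nominal_df / 3.0, size=2000, random_state=rng)

    skew = stats.skew(T)
    adj_df = 8.0 / skew**2            # chi2(d) has skewness sqrt(8/d)
    scale = T.mean() / adj_df         # mean scaling, as in Satorra-Bentler

    # Rejection rates at the nominal 5% level, before and after adjustment.
    crit_naive = stats.chi2.ppf(0.95, nominal_df)
    crit_adj = scale * stats.chi2.ppf(0.95, adj_df)
    print("naive rejection rate:   ", np.mean(T > crit_naive))   # inflated
    print("adjusted rejection rate:", np.mean(T > crit_adj))     # near 0.05
    ```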

  16. A post-assembly genome-improvement toolkit (PAGIT) to obtain annotated genomes from contigs.

    PubMed

    Swain, Martin T; Tsai, Isheng J; Assefa, Samual A; Newbold, Chris; Berriman, Matthew; Otto, Thomas D

    2012-06-07

    Genome projects now produce draft assemblies within weeks owing to advanced high-throughput sequencing technologies. For milestone projects such as Escherichia coli or Homo sapiens, teams of scientists were employed to manually curate and finish these genomes to a high standard. Nowadays, this is not feasible for most projects, and the quality of genomes is generally of a much lower standard. This protocol describes software (PAGIT) that is used to improve the quality of draft genomes. It offers flexible functionality to close gaps in scaffolds, correct base errors in the consensus sequence and exploit reference genomes (if available) in order to improve scaffolding and generating annotations. The protocol is most accessible for bacterial and small eukaryotic genomes (up to 300 Mb), such as pathogenic bacteria, malaria and parasitic worms. Applying PAGIT to an E. coli assembly takes ∼24 h: it doubles the average contig size and annotates over 4,300 gene models.

  17. The operations manual: a mechanism for improving the research process.

    PubMed

    Bowman, Ann; Wyman, Jean F; Peters, Jennifer

    2002-01-01

    The development and use of an operations manual has the potential to improve the capacity of nurse scientists to address the complex, multifaceted issues associated with conducting research in today's healthcare environment. An operations manual facilitates communication, standardizes training and evaluation, and enhances the development and standard implementation of clear policies, processes, and protocols. A 10-year review of methodology articles in relevant nursing journals revealed no attention to this topic. This article will discuss how an operations manual can improve the conduct of research methods and outcomes for both small-scale and large-scale research studies. It also describes the purpose and components of a prototype operations manual for use in quantitative research. The operations manual increases reliability and reproducibility of the research while improving the management of study processes. It can prevent costly and untimely delays or errors in the conduct of research.

  18. Hidden dynamics in models of discontinuity and switching

    NASA Astrophysics Data System (ADS)

    Jeffrey, Mike R.

    2014-04-01

    Sharp switches in behaviour, like impacts, stick-slip motion, or electrical relays, can be modelled by differential equations with discontinuities. A discontinuity approximates fine details of a switching process that lie beyond a bulk empirical model. The theory of piecewise-smooth dynamics describes what happens assuming we can solve the system of equations across its discontinuity. What this typically neglects is that effects which are vanishingly small outside the discontinuity can have an arbitrarily large effect at the discontinuity itself. Here we show that such behaviour can be incorporated within the standard theory through nonlinear terms, and these introduce multiple sliding modes. We show that the nonlinear terms persist in more precise models, for example when the discontinuity is smoothed out. The nonlinear sliding can be eliminated, however, if the model contains an irremovable level of unknown error, which provides a criterion for systems to obey the standard Filippov laws for sliding dynamics at a discontinuity.

  19. A refined method for multivariate meta-analysis and meta-regression.

    PubMed

    Jackson, Daniel; Riley, Richard D

    2014-02-20

    Making inferences about the average treatment effect using the random effects model for meta-analysis is problematic in the common situation where there is a small number of studies. This is because estimates of the between-study variance are not precise enough to accurately apply the conventional methods for testing and deriving a confidence interval for the average effect. We have found that a refined method for univariate meta-analysis, which applies a scaling factor to the estimated effects' standard error, provides more accurate inference. We explain how to extend this method to the multivariate scenario and show that our proposal for refined multivariate meta-analysis and meta-regression can provide more accurate inferences than the more conventional approach. We explain how our proposed approach can be implemented using standard output from multivariate meta-analysis software packages and apply our methodology to two real examples. Copyright © 2013 John Wiley & Sons, Ltd.
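
    In the univariate case, the refinement referred to is of the Hartung-Knapp type: a scaling factor built from the weighted residuals multiplies the conventional variance of the pooled effect, and t rather than normal quantiles are used. A sketch with invented study data:

    ```python
    import numpy as np
    from scipy import stats

    # Hypothetical study effects and their standard errors (k = 6 studies).
    y = np.array([0.30, 0.10, 0.45, 0.22, 0.05, 0.38])
    se = np.array([0.12, 0.15, 0.20, 0.10, 0.18, 0.14])

    # DerSimonian-Laird between-study variance.
    w = 1 / se**2
    mu_fe = np.sum(w * y) / np.sum(w)
    q = np.sum(w * (y - mu_fe) ** 2)
    k = len(y)
    tau2 = max(0.0, (q - (k - 1)) / (np.sum(w) - np.sum(w**2) / np.sum(w)))

    # Random-effects pooled estimate.
    w_star = 1 / (se**2 + tau2)
    mu = np.sum(w_star * y) / np.sum(w_star)

    # Hartung-Knapp: scale the variance by q_hk and use t(k-1) quantiles.
    q_hk = np.sum(w_star * (y - mu) ** 2) / (k - 1)
    se_hk = np.sqrt(q_hk / np.sum(w_star))
    t_crit = stats.t.ppf(0.975, k - 1)
    print(f"mu = {mu:.3f}, 95% CI = "
          f"({mu - t_crit * se_hk:.3f}, {mu + t_crit * se_hk:.3f})")
    ```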

  20. Errors in Bibliographic Citations: A Continuing Problem.

    ERIC Educational Resources Information Center

    Sweetland, James H.

    1989-01-01

    Summarizes studies examining citation errors and illustrates errors resulting from a lack of standardization, misunderstanding of foreign languages, failure to examine the document cited, and general lack of training in citation norms. It is argued that the failure to detect and correct citation errors is due to diffusion of responsibility in the…

  1. A predictability study of Lorenz's 28-variable model as a dynamical system

    NASA Technical Reports Server (NTRS)

    Krishnamurthy, V.

    1993-01-01

    The dynamics of error growth in a two-layer nonlinear quasi-geostrophic model has been studied to gain an understanding of the mathematical theory of atmospheric predictability. The growth of random errors of varying initial magnitudes has been studied, and the relation between this classical approach and the concepts of the nonlinear dynamical systems theory has been explored. The local and global growths of random errors have been expressed partly in terms of the properties of an error ellipsoid and the Liapunov exponents determined by linear error dynamics. The local growth of small errors is initially governed by several modes of the evolving error ellipsoid but soon becomes dominated by the longest axis. The average global growth of small errors is exponential with a growth rate consistent with the largest Liapunov exponent. The duration of the exponential growth phase depends on the initial magnitude of the errors. The subsequent large errors undergo a nonlinear growth with a steadily decreasing growth rate and attain saturation that defines the limit of predictability. The degree of chaos and the largest Liapunov exponent show considerable variation with change in the forcing, which implies that the time variation in the external forcing can introduce variable character to the predictability.

  2. Numerical modeling of the divided bar measurements

    NASA Astrophysics Data System (ADS)

    LEE, Y.; Keehm, Y.

    2011-12-01

    The divided-bar technique has been used to measure thermal conductivity of rocks and fragments in heat flow studies. Though widely used, divided-bar measurements can have errors that have not yet been systematically quantified. We used an FEM and performed a series of numerical studies to evaluate various errors in divided-bar measurements and to suggest more reliable measurement techniques. A divided-bar measurement should be corrected for lateral heat loss on the sides of rock samples and for the thermal resistance at the contacts between the rock sample and the bar. We first investigated, through numerical modeling, how the amount of these corrections changes with the thickness and thermal conductivity of rock samples. When we fixed the sample thickness at 10 mm and varied thermal conductivity, errors in the measured thermal conductivity ranged from 2.02% for 1.0 W/m/K to 7.95% for 4.0 W/m/K. When we fixed thermal conductivity at 1.38 W/m/K and varied the sample thickness, we found that the error ranged from 2.03% for the 30 mm-thick sample to 11.43% for the 5 mm-thick sample. After corrections, a variety of error analyses for divided-bar measurements were conducted numerically. Thermal conductivity of the two thin standard disks (2 mm in thickness) located at the top and the bottom of the rock sample slightly affects the accuracy of thermal conductivity measurements. When the thermal conductivity of a sample is 3.0 W/m/K and that of the two standard disks is 0.2 W/m/K, the relative error in measured thermal conductivity is very small (~0.01%). However, the relative error reaches up to -2.29% for the same sample when the thermal conductivity of the two disks is 0.5 W/m/K. The accuracy of thermal conductivity measurements strongly depends on the thermal conductivity and thickness of the thermal compound that is applied to reduce thermal resistance at the contacts between the rock sample and the bar. When the thickness of the thermal compound (0.29 W/m/K) is 0.03 mm, we found that the relative error in measured thermal conductivity is 4.01%, while the relative error can be very significant (~12.2%) if the thickness increases to 0.1 mm. We then fixed the thickness (0.03 mm) and varied the thermal conductivity of the thermal compound. We found that the relative error with a 1.0 W/m/K compound is 1.28%, and the relative error with a 0.29 W/m/K compound is 4.06%. When we repeated this test with a different thickness of the thermal compound (0.1 mm), the relative error with a 1.0 W/m/K compound is 3.93%, and that with a 0.29 W/m/K compound is 12.2%. In addition, the cell technique of Sass et al. (1971), which is widely used to measure thermal conductivity of rock fragments, was evaluated using FEM modeling. A total of 483 isotropic and homogeneous spherical rock fragments in the sample holder were used to numerically test the accuracy of the cell technique. The result shows a relative error of -9.61% for rock fragments with a thermal conductivity of 2.5 W/m/K. In conclusion, we report quantified errors in the divided-bar and cell techniques for thermal conductivity measurements of rocks and fragments. We found that FEM modeling can accurately mimic these measurement techniques and can help us estimate measurement errors quantitatively.

  3. Rapid Ice Loss at Vatnajokull, Iceland Since Late 1990s Constrained by Synthetic Aperture Radar Interferometry

    NASA Astrophysics Data System (ADS)

    Zhao, W.; Amelung, F.; Dixon, T. H.; Wdowinski, S.

    2012-12-01

    Synthetic aperture radar interferometry (InSAR) time series analysis is applied over Vatnajokull, Iceland, using 15 years of ERS data. Ice loss at Vatnajokull has accelerated since the late 1990s, especially after 2000. A clear uplift signal due to ice mass loss is detected. The rebound signal is generally linear and increases slightly after 2000. The relative annual velocity (with GPS station 7485 as reference) is about 12 mm/yr at the ice cap edge, which matches previous studies using GPS. The standard deviation compared to 11 GPS stations in this area is about 2 mm/yr. A relative-value modeling method ignoring the effect of viscous flow is chosen, assuming an elastic half-space earth. The final ice loss estimate of 83 cm/yr matches the climatology model constrained with ground observations. The Small Baseline Subsets method is applied for the time series analysis. Orbit error, coupled with the long-wavelength phase trend due to horizontal plate motion, is removed using a second-order polynomial model. For simplicity, we do not consider atmospheric delay in this area because there is no complex topography, and small-scale turbulence is largely eliminated by long-term averaging when calculating the annual mean velocity. Some unwrapping error still exists because of low coherence. Other uncertainties arise from the basic assumption of the ice loss pattern and the spatial variation of the elastic parameters. This is the first time InSAR time series analysis has been applied to an ice mass balance study with a detailed error and uncertainty analysis. The success of this application establishes InSAR as an option for mass balance studies and is also important for the validation of different ice loss estimation techniques.
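
    The orbital-ramp removal step can be sketched as a least-squares fit of a second-order two-dimensional polynomial to the unwrapped phase, which is then subtracted (an illustrative stand-alone implementation, not the processing chain of the study; in practice the deforming area would usually be masked during the fit so the signal is not absorbed into the ramp).

      # Sketch: removing a long-wavelength orbital ramp from an unwrapped
      # interferogram with a 2nd-order 2-D polynomial (illustrative only).
      import numpy as np

      def remove_quadratic_ramp(phase):
          ny, nx = phase.shape
          yy, xx = np.mgrid[0:ny, 0:nx]
          x, y = xx.ravel().astype(float), yy.ravel().astype(float)
          # Design matrix for p(x, y) = a + b*x + c*y + d*x^2 + e*y^2 + f*x*y
          A = np.column_stack([np.ones_like(x), x, y, x**2, y**2, x*y])
          good = np.isfinite(phase.ravel())          # skip decorrelated pixels
          coef, *_ = np.linalg.lstsq(A[good], phase.ravel()[good], rcond=None)
          return phase - (A @ coef).reshape(ny, nx)

      # Synthetic test: localized uplift signal plus a quadratic orbital ramp
      ny, nx = 200, 200
      yy, xx = np.mgrid[0:ny, 0:nx]
      signal = 5.0*np.exp(-((xx-100)**2 + (yy-100)**2)/(2*30**2))
      ramp = 0.01*xx + 0.005*yy + 1e-4*xx*yy
      corrected = remove_quadratic_ramp(signal + ramp)
      # Residual includes the part of the signal absorbed by the fit, which is
      # why masking the deformation area is preferred in real pipelines.
      print("rms difference from true signal:", np.std(corrected - signal))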

  4. Detecting small-study effects and funnel plot asymmetry in meta-analysis of survival data: A comparison of new and existing tests.

    PubMed

    Debray, Thomas P A; Moons, Karel G M; Riley, Richard D

    2018-03-01

    Small-study effects are a common threat in systematic reviews and may indicate publication bias. Their existence is often verified by visual inspection of the funnel plot. Formal tests to assess the presence of funnel plot asymmetry typically estimate the association between reported effect sizes and their standard errors, the total sample size, or the inverse of the total sample size. In this paper, we demonstrate that the application of these tests may be less appropriate in meta-analysis of survival data, where censoring influences statistical significance of the hazard ratio. We subsequently propose 2 new tests that are based on the total number of observed events and adopt a multiplicative variance component. We compare the performance of the various funnel plot asymmetry tests in an extensive simulation study where we varied the true hazard ratio (0.5 to 1), the number of published trials (N=10 to 100), the degree of censoring within trials (0% to 90%), and the mechanism leading to participant dropout (noninformative versus informative). Results demonstrate that previous well-known tests for detecting funnel plot asymmetry suffer from low power or excessive type-I error rates in meta-analysis of survival data, particularly when trials are affected by participant dropout. Because our novel test (adopting estimates of the asymptotic precision as study weights) yields reasonable power and maintains appropriate type-I error rates, we recommend its use to evaluate funnel plot asymmetry in meta-analysis of survival data. The use of funnel plot asymmetry tests should, however, be avoided when there are few trials available for any meta-analysis. © 2017 The Authors. Research Synthesis Methods Published by John Wiley & Sons, Ltd.
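
    A minimal sketch of an events-based asymmetry regression in the spirit of the proposed tests (a simplified stand-in, not the authors' exact specification): regress log hazard ratios on 1/sqrt(total events), weight studies by their event counts, and estimate a multiplicative variance component from the residuals; a non-zero slope suggests funnel plot asymmetry.

      # Sketch: an events-based funnel-asymmetry regression in the spirit of
      # the tests described above (NOT the authors' exact specification).
      import numpy as np
      from scipy import stats

      def asymmetry_test(log_hr, events):
          x = 1.0 / np.sqrt(events)          # small studies -> large x
          w = events.astype(float)           # precision ~ number of events
          X = np.column_stack([np.ones_like(x), x])
          W = np.diag(w)
          beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ log_hr)
          resid = log_hr - X @ beta
          dof = len(log_hr) - 2
          phi = (resid @ W @ resid) / dof    # multiplicative variance component
          cov = phi * np.linalg.inv(X.T @ W @ X)
          t = beta[1] / np.sqrt(cov[1, 1])   # slope != 0 suggests asymmetry
          p = 2 * stats.t.sf(abs(t), dof)
          return beta[1], p

      rng = np.random.default_rng(1)
      events = rng.integers(20, 500, size=30)
      log_hr = rng.normal(np.log(0.8), 1.0/np.sqrt(events))  # symmetric funnel
      print("slope=%.3f p=%.3f" % asymmetry_test(log_hr, events))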

  5. WE-H-BRC-05: Catastrophic Error Metrics for Radiation Therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Murphy, S; Molloy, J

    Purpose: Intuitive evaluation of complex radiotherapy treatments is impractical, while data transfer anomalies create the potential for catastrophic treatment delivery errors. Contrary to prevailing wisdom, logical scrutiny can be applied to patient-specific machine settings. Such tests can be automated, applied at the point of treatment delivery and can be dissociated from prior states of the treatment plan, potentially revealing errors introduced early in the process. Methods: Analytical metrics were formulated for conventional and intensity modulated RT (IMRT) treatments. These were designed to assess consistency between monitor unit settings, wedge values, prescription dose and leaf positioning (IMRT). Institutional metric averages for 218 clinical plans were stratified over multiple anatomical sites. Treatment delivery errors were simulated using a commercial treatment planning system and metric behavior assessed via receiver-operator-characteristic (ROC) analysis. A positive result was returned if the erred plan metric value exceeded a given number of standard deviations, e.g. 2. The finding was declared true positive if the dosimetric impact exceeded 25%. ROC curves were generated over a range of metric standard deviations. Results: Data for the conventional treatment metric indicated standard deviations of 3%, 12%, 11%, 8%, and 5% for brain, pelvis, abdomen, lung and breast sites, respectively. Optimum error declaration thresholds yielded true positive rates (TPR) between 0.7 and 1, and false positive rates (FPR) between 0 and 0.2. Two proposed IMRT metrics possessed standard deviations of 23% and 37%. The superior metric returned TPR and FPR of 0.7 and 0.2, respectively, when both leaf position and MU errors were modelled. Isolation to only leaf position errors yielded TPR and FPR values of 0.9 and 0.1. Conclusion: Logical tests can reveal treatment delivery errors and prevent large, catastrophic errors. Analytical metrics are able to identify errors in monitor units, wedging and leaf positions with favorable sensitivity and specificity. Supported in part by Varian.
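
    The ROC construction described above can be sketched as follows (synthetic metric values and hypothetical numbers, not the institutional data): declare an error when the metric deviates by more than k standard deviations, and score each declaration against a 25% dosimetric-impact ground truth.

      # Sketch: ROC analysis of a plan-consistency metric (illustrative
      # numbers, not the institutional data from the abstract).
      import numpy as np

      rng = np.random.default_rng(0)
      n = 2000
      is_error = rng.random(n) < 0.05                 # simulated erred plans
      dose_impact = np.where(is_error, rng.uniform(0, 0.6, n), 0.0)
      true_pos = dose_impact > 0.25                   # "catastrophic" ground truth

      sigma = 0.08                                    # metric SD (e.g. 8%)
      metric = rng.normal(0, sigma, n) + dose_impact  # erred plans shift metric

      for k in [1.0, 1.5, 2.0, 2.5, 3.0]:             # threshold in SD units
          flagged = np.abs(metric) > k * sigma
          tpr = (flagged & true_pos).sum() / max(true_pos.sum(), 1)
          fpr = (flagged & ~true_pos).sum() / max((~true_pos).sum(), 1)
          print(f"k={k:.1f}  TPR={tpr:.2f}  FPR={fpr:.2f}")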

  6. Self-calibration method without joint iteration for distributed small satellite SAR systems

    NASA Astrophysics Data System (ADS)

    Xu, Qing; Liao, Guisheng; Liu, Aifei; Zhang, Juan

    2013-12-01

    The performance of distributed small satellite synthetic aperture radar systems degrades significantly due to unavoidable array errors, including gain, phase, and position errors, in real operating scenarios. In the conventional method proposed in (IEEE T Aero. Elec. Sys. 42:436-451, 2006), the spectrum components within one Doppler bin are considered as calibration sources. However, it is found in this article that the gain error estimation and the position error estimation in the conventional method can interact with each other. The conventional method may converge to suboptimal solutions in the presence of large position errors, since it requires joint iteration between gain-phase error estimation and position error estimation. In addition, it is also found that phase errors can be estimated well regardless of position errors when the zero Doppler bin is chosen. Based on these two observations, we propose a method obtained by modifying the conventional one. In this modified method, gain errors are first estimated and compensated, which eliminates the interaction between gain error estimation and position error estimation. Then, by using the zero Doppler bin data, phase error estimation can be performed well independently of position errors. Finally, position errors are estimated based on a Taylor-series expansion. The joint iteration between gain-phase error estimation and position error estimation is not required, so the problem of suboptimal convergence that occurs in the conventional method is avoided at low computational cost. The modified method has the merits of faster convergence and lower estimation error compared to the conventional one. Theoretical analysis and computer simulation results verify the effectiveness of the modified method.

  7. The International Standard for Aureomycin

    PubMed Central

    Humphrey, J. H.; Lightbown, J. W.; Mussett, M. V.; Perry, W. L. M.

    1953-01-01

    In 1950, the Department of Biological Standards, National Institute for Medical Research, London, was authorized by the WHO Expert Committee on Biological Standardization to proceed with the establishment of an International Standard for Aureomycin. A 100-g batch of aureomycin was obtained and was compared with the Standard Preparation of Aureomycin of the United States Food and Drug Administration (FDA) in a collaborative assay in which six laboratories in five countries participated. In all, 30 assays were carried out; 26 of these were done by biological methods, using Sarcina lutea, Bacillus pumilus, Staphylococcus aureus, or Bacillus cereus, and the remaining four by physicochemical methods. The results were subjected to standard methods of analysis, and the overall weighted mean potency (calculated from the biological assays only) was 1.0139, with limits of error of 99.5% to 100.5%. Since the International Standard is 1.39% more potent than the FDA Standard Preparation, it is probable that the latter contains a small amount of inert material; it is also possible that the International Standard itself is not 100% pure. For most practical purposes, however, both preparations may be regarded as substantially pure, and it is considered that to alter the present practice of quoting aureomycin dosage in metric units of weight would be inadvisable. Nevertheless, since the International Standard may not be a pure substance, a unit notation—for use where required in bioassays—is desirable, and the International Unit of Aureomycin has therefore been defined as the activity contained in one microgram of the International Standard. PMID:13141137

  8. The statistical significance of error probability as determined from decoding simulations for long codes

    NASA Technical Reports Server (NTRS)

    Massey, J. L.

    1976-01-01

    The very low error probability obtained with long error-correcting codes results in a very small number of observed errors in simulation studies of practical size and renders the usual confidence interval techniques inapplicable to the observed error probability. A natural extension of the notion of a 'confidence interval' is made and applied to such determinations of error probability by simulation. An example is included to show the surprisingly great significance of as few as two decoding errors in a very large number of decoding trials.
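
    For a sense of scale, an exact (Clopper-Pearson) binomial interval shows how much a handful of observed errors constrains the error probability (standard binomial machinery, not Massey's specific extension; the 10^7-trial count below is an assumed example).

      # Sketch: exact (Clopper-Pearson) confidence interval for an error
      # probability estimated from very few observed decoding errors.
      from scipy.stats import beta

      def clopper_pearson(errors, trials, conf=0.95):
          alpha = 1.0 - conf
          lo = beta.ppf(alpha/2, errors, trials - errors + 1) if errors > 0 else 0.0
          hi = beta.ppf(1 - alpha/2, errors + 1, trials - errors)
          return lo, hi

      errors, trials = 2, 10**7
      lo, hi = clopper_pearson(errors, trials)
      print(f"p_hat = {errors/trials:.1e}, 95% CI = [{lo:.1e}, {hi:.1e}]")
      # Even two errors pin the error probability to within roughly an order
      # of magnitude: about [2.4e-8, 7.2e-7] here.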

  9. Effects of shape, size, and chromaticity of stimuli on estimated size in normally sighted, severely myopic, and visually impaired students.

    PubMed

    Huang, Kuo-Chen; Wang, Hsiu-Feng; Chen, Chun-Ching

    2010-06-01

    Effects of shape, size, and chromaticity of stimuli on participants' errors when estimating the size of simultaneously presented standard and comparison stimuli were examined. 48 Taiwanese college students ages 20 to 24 years old (M = 22.3, SD = 1.3) participated. Analysis showed that the error for estimated size was significantly greater for those in the low-vision group than for those in the normal-vision and severe-myopia groups. The errors were significantly greater with green and blue stimuli than with red stimuli. Circular stimuli produced smaller mean errors than did square stimuli. The actual size of the standard stimulus significantly affected the error for estimated size. Errors for estimations using smaller sizes were significantly higher than when the sizes were larger. Implications of the results for graphics-based interface design, particularly when taking account of visually impaired users, are discussed.

  10. Degrees of Freedom for Allan Deviation Estimates of Multiple Clocks

    DTIC Science & Technology

    2016-04-01

    Allan deviation will be represented by σ and standard deviation will be represented by δ. In practice, when the Allan deviation of a ... the Allan deviation of standard noise types. Once the number of degrees of freedom is known, an approximate confidence interval can be assigned by ... measurement errors from paired difference data. We extend this approach by using the Allan deviation to estimate the error in a frequency standard.
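
    The Allan deviation referenced in this fragment is straightforward to compute; below is a minimal sketch of the non-overlapping estimator on synthetic white frequency noise (illustrative only, unrelated to the report's data).

      # Sketch: non-overlapping Allan deviation of fractional-frequency data
      # (standard textbook formula; illustrative, unrelated to the DTIC data).
      import numpy as np

      def allan_deviation(y, m):
          """y: fractional-frequency samples; m: averaging factor (tau = m*tau0)."""
          n = len(y) // m
          yb = y[:n*m].reshape(n, m).mean(axis=1)   # tau-averaged frequencies
          avar = 0.5 * np.mean(np.diff(yb)**2)      # Allan variance at this tau
          return np.sqrt(avar)

      rng = np.random.default_rng(0)
      y = rng.normal(0, 1e-11, 100_000)             # white frequency noise
      for m in [1, 10, 100, 1000]:
          print(f"tau={m:5d}  sigma_y={allan_deviation(y, m):.2e}")
      # For white FM noise, sigma_y(tau) should fall off as tau**-0.5.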

  11. Impact of Standardized Communication Techniques on Errors during Simulated Neonatal Resuscitation.

    PubMed

    Yamada, Nicole K; Fuerch, Janene H; Halamek, Louis P

    2016-03-01

    Current patterns of communication in high-risk clinical situations, such as resuscitation, are imprecise and prone to error. We hypothesized that the use of standardized communication techniques would decrease the errors committed by resuscitation teams during neonatal resuscitation. In a prospective, single-blinded, matched pairs design with block randomization, 13 subjects performed as a lead resuscitator in two simulated complex neonatal resuscitations. Two nurses assisted each subject during the simulated resuscitation scenarios. In one scenario, the nurses used nonstandard communication; in the other, they used standardized communication techniques. The performance of the subjects was scored to determine errors committed (defined relative to the Neonatal Resuscitation Program algorithm), time to initiation of positive pressure ventilation (PPV), and time to initiation of chest compressions (CC). In scenarios in which subjects were exposed to standardized communication techniques, there was a trend toward decreased error rate, time to initiation of PPV, and time to initiation of CC. While not statistically significant, there was a 1.7-second improvement in time to initiation of PPV and a 7.9-second improvement in time to initiation of CC. Should these improvements in human performance be replicated in the care of real newborn infants, they could improve patient outcomes and enhance patient safety.

  12. First measurements of error fields on W7-X using flux surface mapping

    DOE PAGES

    Lazerson, Samuel A.; Otte, Matthias; Bozhenkov, Sergey; ...

    2016-08-03

    Error fields have been detected and quantified using the flux surface mapping diagnostic system on Wendelstein 7-X (W7-X). A low-field ι = 1/2 magnetic configuration (where ι denotes the rotational transform divided by 2π), sensitive to error fields, was developed in order to detect their presence using the flux surface mapping diagnostic. In this configuration, a vacuum flux surface with rotational transform of n/m = 1/2 is created at the mid-radius of the vacuum flux surfaces. If no error fields are present, a vanishingly small n/m = 5/10 island chain should be present. Modeling indicates that if an n = 1 perturbing field is applied by the trim coils, a large n/m = 1/2 island chain will be opened. This island chain is used to create a perturbation large enough to be imaged by the diagnostic. Phase and amplitude scans of the applied field allow the measurement of a small (~0.04 m) intrinsic island chain with a 130° phase relative to the first module of the W7-X experiment. Lastly, these error fields are determined to be small and easily correctable by the trim coil system.

  13. MASTER: a model to improve and standardize clinical breakpoints for antimicrobial susceptibility testing using forecast probabilities.

    PubMed

    Blöchliger, Nicolas; Keller, Peter M; Böttger, Erik C; Hombach, Michael

    2017-09-01

    The procedure for setting clinical breakpoints (CBPs) for antimicrobial susceptibility has been poorly standardized with respect to population data, pharmacokinetic parameters and clinical outcome. Tools to standardize CBP setting could result in improved antibiogram forecast probabilities. We propose a model to estimate probabilities for methodological categorization errors and defined zones of methodological uncertainty (ZMUs), i.e. ranges of zone diameters that cannot reliably be classified. The impact of ZMUs on methodological error rates was used for CBP optimization. The model distinguishes theoretical true inhibition zone diameters from observed diameters, which suffer from methodological variation. True diameter distributions are described with a normal mixture model. The model was fitted to observed inhibition zone diameters of clinical Escherichia coli strains. Repeated measurements for a quality control strain were used to quantify methodological variation. For 9 of 13 antibiotics analysed, our model predicted error rates of < 0.1% applying current EUCAST CBPs. Error rates were > 0.1% for ampicillin, cefoxitin, cefuroxime and amoxicillin/clavulanic acid. Increasing the susceptible CBP (cefoxitin) and introducing ZMUs (ampicillin, cefuroxime, amoxicillin/clavulanic acid) decreased error rates to < 0.1%. ZMUs contained low numbers of isolates for ampicillin and cefuroxime (3% and 6%), whereas the ZMU for amoxicillin/clavulanic acid contained 41% of all isolates and was considered not practical. We demonstrate that CBPs can be improved and standardized by minimizing methodological categorization error rates. ZMUs may be introduced if an intermediate zone is not appropriate for pharmacokinetic/pharmacodynamic or drug dosing reasons. Optimized CBPs will provide a standardized antibiotic susceptibility testing interpretation at a defined level of probability. © The Author 2017. Published by Oxford University Press on behalf of the British Society for Antimicrobial Chemotherapy. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
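
    The mixture-plus-measurement-error construction can be sketched numerically (all parameters below are illustrative, not the fitted E. coli values): draw true zone diameters from a two-component normal mixture, then integrate the probability that methodological variation pushes an observed diameter across the breakpoint.

      # Sketch: categorization-error probability from a normal mixture of
      # "true" inhibition-zone diameters plus normal methodological variation.
      # All numbers are illustrative, not the fitted E. coli values.
      import numpy as np
      from scipy.stats import norm

      # Two-component mixture: resistant (~10 mm) and susceptible (~24 mm)
      weights, means, sds = [0.3, 0.7], [10.0, 24.0], [2.0, 2.5]
      sd_method = 1.2      # methodological (repeatability) SD, from QC strain
      cbp = 17.0           # classify susceptible if observed diameter >= cbp

      # P(misclassified) = integral over true diameters of P(observed crosses CBP)
      d = np.linspace(0, 40, 4001)
      density = sum(w * norm.pdf(d, m, s) for w, m, s in zip(weights, means, sds))
      p_wrong = np.where(d < cbp,
                         norm.sf(cbp, d, sd_method),    # truly R, observed S
                         norm.cdf(cbp, d, sd_method))   # truly S, observed R
      error_rate = np.trapz(density * p_wrong, d)
      print(f"expected categorization error rate: {100*error_rate:.2f}%")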

  14. Cost-effectiveness of the Federal stream-gaging program in Virginia

    USGS Publications Warehouse

    Carpenter, D.H.

    1985-01-01

    Data uses and funding sources were identified for the 77 continuous stream gages currently being operated in Virginia by the U.S. Geological Survey with a budget of $446,000. Two stream gages were identified as not being used sufficiently to warrant continuing their operation. Operation of these stations should be considered for discontinuation. Data collected at two other stations were identified as having uses primarily related to short-term studies; these stations should also be considered for discontinuation at the end of the data collection phases of the studies. The remaining 73 stations should be kept in the program for the foreseeable future. The current policy for operation of the 77-station program requires a budget of $446,000/yr. The average standard error of estimation of streamflow records is 10.1%. It was shown that this overall level of accuracy at the 77 sites could be maintained with a budget of $430,500 if resources were redistributed among the gages. A minimum budget of $428,500 is required to operate the 77-gage program; a smaller budget would not permit proper service and maintenance of the gages and recorders. At the minimum budget, with optimized operation, the average standard error would be 10.4%. The maximum budget analyzed was $650,000, which resulted in an average standard error of 5.5%. The study indicates that a major component of error is caused by lost or missing data. If perfect equipment were available, the standard error for the current program and budget could be reduced to 7.6%. This also can be interpreted to mean that the streamflow data have a standard error of this magnitude during times when the equipment is operating properly. (Author's abstract)

  15. Bootstrap Standard Error Estimates in Dynamic Factor Analysis

    ERIC Educational Resources Information Center

    Zhang, Guangjian; Browne, Michael W.

    2010-01-01

    Dynamic factor analysis summarizes changes in scores on a battery of manifest variables over repeated measurements in terms of a time series in a substantially smaller number of latent factors. Algebraic formulae for standard errors of parameter estimates are more difficult to obtain than in the usual intersubject factor analysis because of the…

  16. Standard Errors of Equating Differences: Prior Developments, Extensions, and Simulations

    ERIC Educational Resources Information Center

    Moses, Tim; Zhang, Wenmin

    2011-01-01

    The purpose of this article was to extend the use of standard errors for equated score differences (SEEDs) to traditional equating functions. The SEEDs are described in terms of their original proposal for kernel equating functions and extended so that SEEDs for traditional linear and traditional equipercentile equating functions can be computed.…

  17. Progress in the improved lattice calculation of direct CP-violation in the Standard Model

    NASA Astrophysics Data System (ADS)

    Kelly, Christopher

    2018-03-01

    We discuss the ongoing effort by the RBC & UKQCD collaborations to improve our lattice calculation of the measure of Standard Model direct CP violation, ɛ', with physical kinematics. We present our progress in decreasing the (dominant) statistical error and discuss other related activities aimed at reducing the systematic errors.

  18. The Development of MST Test Information for the Prediction of Test Performances

    ERIC Educational Resources Information Center

    Park, Ryoungsun; Kim, Jiseon; Chung, Hyewon; Dodd, Barbara G.

    2017-01-01

    The current study proposes novel methods to predict multistage testing (MST) performance without conducting simulations. This method, called MST test information, is based on analytic derivation of standard errors of ability estimates across theta levels. We compared standard errors derived analytically to the simulation results to demonstrate the…

  19. Supplemental Figures, Tables, and Standard Error Tables for Student Financing of Undergraduate Education: 2007-08. Sticker, Net, and Out-of-Pocket Prices

    ERIC Educational Resources Information Center

    National Center for Education Statistics, 2010

    2010-01-01

    This paper presents the supplemental figures, tables, and standard error tables for the report "Student Financing of Undergraduate Education: 2007-08. Web Tables. NCES 2010-162." (Contains 6 figures and 10 tables.) [For the main report, see ED511828.]

  20. Error model for the SAO 1969 standard earth.

    NASA Technical Reports Server (NTRS)

    Martin, C. F.; Roy, N. A.

    1972-01-01

    A method is developed for estimating an error model for geopotential coefficients using satellite tracking data. A single station's apparent timing error for each pass is attributed to geopotential errors. The root sum of the residuals for each station also depends on the geopotential errors, and these are used to select an error model. The model chosen is 1/4 of the difference between the SAO M1 and the APL 3.5 geopotential.

  1. Mars approach navigation using Doppler and range measurements to surface beacons and orbiting spacecraft

    NASA Technical Reports Server (NTRS)

    Thurman, Sam W.; Estefan, Jeffrey A.

    1991-01-01

    Approximate analytical models are developed and used to construct an error covariance analysis for investigating the range of orbit determination accuracies which might be achieved for typical Mars approach trajectories. The sensitivity of orbit determination accuracy to beacon/orbiter position errors and to small spacecraft force modeling errors is also investigated. The results indicate that the orbit determination performance obtained from both Doppler and range data is a strong function of the inclination of the approach trajectory to the Martian equator, for surface beacons, and for orbiters, the inclination relative to the orbital plane. Large variations in performance were also observed for different approach velocity magnitudes; Doppler data in particular were found to perform poorly in determining the downtrack (along the direction of flight) component of spacecraft position. In addition, it was found that small spacecraft acceleration modeling errors can induce large errors in the Doppler-derived downtrack position estimate.

  2. Quantitative evaluation of statistical errors in small-angle X-ray scattering measurements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sedlak, Steffen M.; Bruetzel, Linda K.; Lipfert, Jan

    A new model is proposed for the measurement errors incurred in typical small-angle X-ray scattering (SAXS) experiments, which takes into account the setup geometry and physics of the measurement process. The model accurately captures the experimentally determined errors from a large range of synchrotron and in-house anode-based measurements. Its most general formulation gives, for the variance of the buffer-subtracted SAXS intensity, σ²(q) = [I(q) + const.]/(kq), where I(q) is the scattering intensity as a function of the momentum transfer q; k and const. are fitting parameters that are characteristic of the experimental setup. The model gives a concrete procedure for calculating realistic measurement errors for simulated SAXS profiles. In addition, the results provide guidelines for optimizing SAXS measurements, which are in line with established procedures for SAXS experiments, and enable a quantitative evaluation of measurement errors.
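
    The quoted variance model can be applied directly to add realistic noise to a simulated profile (the values of k and const. below are made up; in practice they are fitted to the particular setup).

      # Sketch: generating realistic noise for a simulated SAXS profile using
      # the variance model sigma^2(q) = [I(q) + const.]/(k*q); k and const.
      # are setup-specific fitting parameters (values below are made up).
      import numpy as np

      def noisy_saxs(q, I, k=5e4, const=1.0, rng=None):
          rng = rng or np.random.default_rng()
          sigma = np.sqrt((I + const) / (k * q))
          return I + rng.normal(0.0, sigma), sigma

      q = np.linspace(0.01, 0.5, 500)            # momentum transfer
      Rg = 20.0                                  # toy particle: Guinier profile
      I = np.exp(-(q * Rg)**2 / 3.0)
      I_noisy, sigma = noisy_saxs(q, I)
      # The const. term dominates where I(q) is weak, so relative errors blow
      # up at high q, as seen in real buffer-subtracted profiles.
      print("relative error at low q : %.3f" % (sigma[0] / I[0]))
      print("relative error at high q: %.3g" % (sigma[-1] / I[-1]))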

  3. Frequency of data extraction errors and methods to increase data extraction quality: a methodological review.

    PubMed

    Mathes, Tim; Klaßen, Pauline; Pieper, Dawid

    2017-11-28

    Our objective was to assess the frequency of data extraction errors and their potential impact on results in systematic reviews. Furthermore, we evaluated the effect of different extraction methods, reviewer characteristics and reviewer training on error rates and results. We performed a systematic review of methodological literature in PubMed, the Cochrane methodological registry, and by manual searches (12/2016). Studies were selected by two reviewers independently. Data were extracted into standardized tables by one reviewer and verified by a second. The analysis included six studies: four on extraction error frequency, one comparing different reviewer extraction methods, and two comparing different reviewer characteristics. We did not find a study on reviewer training. There was a high rate of extraction errors (up to 50%). Errors often had an influence on effect estimates. Different data extraction methods and reviewer characteristics had a moderate effect on extraction error rates and effect estimates. The evidence base for established standards of data extraction seems weak despite the high prevalence of extraction errors. More comparative studies are needed to gain deeper insights into the influence of different extraction methods.

  4. Defining the Relationship Between Human Error Classes and Technology Intervention Strategies

    NASA Technical Reports Server (NTRS)

    Wiegmann, Douglas A.; Rantanen, Eas M.

    2003-01-01

    The modus operandi in addressing human error in aviation systems is predominantly that of technological interventions or fixes. Such interventions exhibit considerable variability both in terms of sophistication and application. Some technological interventions address human error directly while others do so only indirectly. Some attempt to eliminate the occurrence of errors altogether whereas others look to reduce the negative consequences of these errors. In any case, technological interventions add to the complexity of the systems and may interact with other system components in unforeseeable ways and often create opportunities for novel human errors. Consequently, there is a need to develop standards for evaluating the potential safety benefit of each of these intervention products so that resources can be effectively invested to produce the biggest benefit to flight safety as well as to mitigate any adverse ramifications. The purpose of this project was to help define the relationship between human error and technological interventions, with the ultimate goal of developing a set of standards for evaluating or measuring the potential benefits of new human error fixes.

  5. Comparative study of anatomical normalization errors in SPM and 3D-SSP using digital brain phantom.

    PubMed

    Onishi, Hideo; Matsutake, Yuki; Kawashima, Hiroki; Matsutomo, Norikazu; Amijima, Hizuru

    2011-01-01

    In single photon emission computed tomography (SPECT) cerebral blood flow studies, two major algorithms are widely used: statistical parametric mapping (SPM) and three-dimensional stereotactic surface projections (3D-SSP). The aim of this study is to compare an SPM algorithm-based easy Z score imaging system (eZIS) and a 3D-SSP system with respect to errors in anatomical standardization, using 3D digital brain phantom images. We developed a 3D digital brain phantom based on MR images to simulate the effects of head tilt, perfusion defective region size, and count value reduction rate on the SPECT images. This digital phantom was used to compare the errors of anatomical standardization by the eZIS and the 3D-SSP algorithms. While the eZIS allowed accurate standardization of the images of the phantom simulating a head in rotation, lateroflexion, anteflexion, or retroflexion without angle dependency, the standardization by 3D-SSP was not accurate enough at approximately 25° or more of head tilt. When the simulated head contained perfusion defective regions, one of the 3D-SSP images showed an error of 6.9% from the true value, whereas one of the eZIS images showed an error as large as 63.4%, revealing a significant underestimation. When required to evaluate regions with decreased perfusion due to such causes as hemodynamic cerebral ischemia, 3D-SSP is preferable. In statistical image analysis, the image must always be reconfirmed after anatomical standardization.

  6. Cost effectiveness of the stream-gaging program in South Carolina

    USGS Publications Warehouse

    Barker, A.C.; Wright, B.C.; Bennett, C.S.

    1985-01-01

    The cost effectiveness of the stream-gaging program in South Carolina was documented for the 1983 water yr. Data uses and funding sources were identified for the 76 continuous stream gages currently being operated in South Carolina. The budget of $422,200 for collecting and analyzing streamflow data also includes the cost of operating stage-only and crest-stage stations. The streamflow records for one stream gage can be determined by alternate, less costly methods, and should be discontinued. The remaining 75 stations should be maintained in the program for the foreseeable future. The current policy for the operation of the 75 stations including the crest-stage and stage-only stations would require a budget of $417,200/yr. The average standard error of estimation of streamflow records is 16.9% for the present budget with missing record included. However, the standard error of estimation would decrease to 8.5% if complete streamflow records could be obtained. It was shown that the average standard error of estimation of 16.9% could be obtained at the 75 sites with a budget of approximately $395,000 if the gaging resources were redistributed among the gages. A minimum budget of $383,500 is required to operate the program; a budget less than this does not permit proper service and maintenance of the gages and recorders. At the minimum budget, the average standard error is 18.6%. The maximum budget analyzed was $850,000, which resulted in an average standard error of 7.6%. (Author's abstract)

  7. Medical students' experiences with medical errors: an analysis of medical student essays.

    PubMed

    Martinez, William; Lo, Bernard

    2008-07-01

    This study aimed to examine medical students' experiences with medical errors. In 2001 and 2002, 172 fourth-year medical students wrote an anonymous description of a significant medical error they had witnessed or committed during their clinical clerkships. The assignment represented part of a required medical ethics course. We analysed 147 of these essays using thematic content analysis. Many medical students made or observed significant errors. In either situation, some students experienced distress that seemingly went unaddressed. Furthermore, this distress was sometimes severe and persisted after the initial event. Some students also experienced considerable uncertainty as to whether an error had occurred and how to prevent future errors. Many errors may not have been disclosed to patients, and some students who desired to discuss or disclose errors were apparently discouraged from doing so by senior doctors. Some students criticised senior doctors who attempted to hide errors or avoid responsibility. By contrast, students who witnessed senior doctors take responsibility for errors and candidly disclose errors to patients appeared to recognise the importance of honesty and integrity and said they aspired to these standards. There are many missed opportunities to teach students how to respond to and learn from errors. Some faculty members and housestaff may at times respond to errors in ways that appear to contradict professional standards. Medical educators should increase exposure to exemplary responses to errors and help students to learn from and cope with errors.

  8. Water quality of small seasonal wetlands in the Piedmont ecoregion, South Carolina, USA: Effects of land use and hydrological connectivity.

    PubMed

    Yu, Xubiao; Hawley-Howard, Joanna; Pitt, Amber L; Wang, Jun-Jian; Baldwin, Robert F; Chow, Alex T

    2015-04-15

    Small, shallow, seasonal wetlands with short hydroperiod (2-4 months) play an important role in the entrapment of organic matter and nutrients and, due to their wide distribution, in determining the water quality of watersheds. In order to explain the temporal, spatial and compositional variation of water quality of seasonal wetlands, we collected water quality data from forty seasonal wetlands in the lower Blue Ridge and upper Piedmont ecoregions of South Carolina, USA during the wet season of February to April 2011. Results indicated that the surficial hydrological connectivity and surrounding land-use were two key factors controlling variation in dissolved organic carbon (DOC) and total dissolved nitrogen (TDN) in these seasonal wetlands. In the sites without obvious land use changes (average developed area <0.1%), the DOC (p < 0.001, t-test) and TDN (p < 0.05, t-test) of isolated wetlands were significantly higher than that of connected wetlands. However, this phenomenon can be reversed as a result of land use changes. The connected wetlands in more urbanized areas (average developed area = 12.3%) showed higher concentrations of dissolved organic matter (DOM) (DOC: 11.76 ± 6.09 mg L(-1), TDN: 0.74 ± 0.22 mg L(-1), mean ± standard error) compared to those in isolated wetlands (DOC: 7.20 ± 0.62 mg L(-1), TDN: 0.20 ± 0.08 mg L(-1)). The optical parameters derived from UV and fluorescence also confirmed significant portions of protein-like fractions likely originating from land use changes such as wastewater treatment and livestock pastures. The average of C/N molar ratios of all the wetlands decreased from 77.82 ± 6.72 (mean ± standard error) in February to 15.14 ± 1.58 in April, indicating that the decomposition of organic matter increased with the temperature. Results of this study demonstrate that the water quality of small, seasonal wetlands has a direct and close association with the surrounding environment. Copyright © 2015 Elsevier Ltd. All rights reserved.

  9. Quantitative multi-pinhole small-animal SPECT: uniform versus non-uniform Chang attenuation correction.

    PubMed

    Wu, C; de Jong, J R; Gratama van Andel, H A; van der Have, F; Vastenhouw, B; Laverman, P; Boerman, O C; Dierckx, R A J O; Beekman, F J

    2011-09-21

    Attenuation of photon flux on trajectories between the source and pinhole apertures affects the quantitative accuracy of reconstructed single-photon emission computed tomography (SPECT) images. We propose a Chang-based non-uniform attenuation correction (NUA-CT) for small-animal SPECT/CT with focusing pinhole collimation, and compare the quantitative accuracy with uniform Chang correction based on (i) body outlines extracted from x-ray CT (UA-CT) and (ii) on hand drawn body contours on the images obtained with three integrated optical cameras (UA-BC). Measurements in phantoms and rats containing known activities of isotopes were conducted for evaluation. In (125)I, (201)Tl, (99m)Tc and (111)In phantom experiments, average relative errors comparing to the gold standards measured in a dose calibrator were reduced to 5.5%, 6.8%, 4.9% and 2.8%, respectively, with NUA-CT. In animal studies, these errors were 2.1%, 3.3%, 2.0% and 2.0%, respectively. Differences in accuracy on average between results of NUA-CT, UA-CT and UA-BC were less than 2.3% in phantom studies and 3.1% in animal studies except for (125)I (3.6% and 5.1%, respectively). All methods tested provide reasonable attenuation correction and result in high quantitative accuracy. NUA-CT shows superior accuracy except for (125)I, where other factors may have more impact on the quantitative accuracy than the selected attenuation correction.
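
    A first-order Chang correction factor can be sketched for a single voxel of a 2-D attenuation map (a simplified parallel-ray geometry of our own, not the focusing multi-pinhole system of the study): average the attenuation factors over projection angles and invert.

      # Sketch: first-order Chang attenuation correction factor for one voxel
      # of a 2-D attenuation map (uniform vs. CT-derived non-uniform mu).
      # Illustrative geometry only; real multi-pinhole SPECT is more involved.
      import numpy as np

      def chang_factor(mu_map, ix, iy, pixel_cm, n_angles=64):
          """1 / mean over angles of exp(-line integral of mu, voxel to edge)."""
          ny, nx = mu_map.shape
          atten = []
          for theta in np.linspace(0, 2*np.pi, n_angles, endpoint=False):
              dx, dy = np.cos(theta), np.sin(theta)
              x, y, line = float(ix), float(iy), 0.0
              while 0 <= int(round(x)) < nx and 0 <= int(round(y)) < ny:
                  line += mu_map[int(round(y)), int(round(x))] * pixel_cm
                  x += dx; y += dy
              atten.append(np.exp(-line))
          return 1.0 / np.mean(atten)

      ny = nx = 64
      mu = np.zeros((ny, nx))
      yy, xx = np.mgrid[0:ny, 0:nx]
      mu[(xx - 32)**2 + (yy - 32)**2 < 25**2] = 0.15   # water-like body, cm^-1
      print("uniform correction factor   :", round(chang_factor(mu, 32, 32, 0.05), 2))
      mu[(xx - 32)**2 + (yy - 40)**2 < 8**2] = 0.25    # denser (bone-like) region
      print("non-uniform correction factor:", round(chang_factor(mu, 32, 32, 0.05), 2))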

  10. Fast and low-cost method for VBES bathymetry generation in coastal areas

    NASA Astrophysics Data System (ADS)

    Sánchez-Carnero, N.; Aceña, S.; Rodríguez-Pérez, D.; Couñago, E.; Fraile, P.; Freire, J.

    2012-12-01

    Sea floor topography is key information in coastal area management. Nowadays, LiDAR and multibeam technologies provide accurate bathymetries in those areas; however, these methodologies are still too expensive for small customers (fishermen's associations, small research groups) wishing to maintain periodic surveillance of environmental resources. In this paper, we analyse a simple methodology for vertical beam echosounder (VBES) bathymetric data acquisition and postprocessing, using low-cost means and free customizable tools such as ECOSONS and gvSIG (which is compared with the industry-standard ArcGIS). Echosounder data were filtered, resampled, and interpolated (using kriging or radial basis functions). Moreover, the presented methodology includes two data correction processes: Monte Carlo simulation, used to reduce GPS errors, and manually applied bathymetric line transformations, both improving the obtained results. As an example, we present the bathymetry of the Ría de Cedeira (Galicia, NW Spain), a good testbed area for coastal bathymetry methodologies given its extension and rich topography. The statistical analysis, performed by direct ground-truthing, rendered an upper bound of 1.7 m error at the 95% confidence level and 0.7 m r.m.s. (cross-validation provided 30 cm and 25 cm, respectively). The methodology presented is fast and easy to implement, accurate outside transects (accuracy can be estimated), and can be used as a low-cost periodic monitoring method.
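
    The interpolation and Monte Carlo GPS-error steps can be sketched as follows (an illustrative reimplementation with scipy, not the ECOSONS/gvSIG processing chain itself; positions, depths and the 3 m GPS error are made-up values).

      # Sketch: gridding scattered echosounder depths with a radial basis
      # function and using Monte Carlo jitter of GPS positions to gauge the
      # resulting depth uncertainty. Illustrative only.
      import numpy as np
      from scipy.interpolate import RBFInterpolator

      rng = np.random.default_rng(0)
      pts = rng.uniform(0, 1000, size=(300, 2))             # sounding positions (m)
      depth = 5 + 0.01*pts[:, 0] + rng.normal(0, 0.2, 300)  # sloping sea floor

      grid = np.stack(np.meshgrid(np.linspace(0, 1000, 50),
                                  np.linspace(0, 1000, 50)), -1).reshape(-1, 2)

      # Monte Carlo over horizontal GPS error (e.g. 3 m standard deviation)
      maps = []
      for _ in range(50):
          jittered = pts + rng.normal(0, 3.0, pts.shape)
          maps.append(RBFInterpolator(jittered, depth, smoothing=1.0)(grid))
      maps = np.array(maps)
      print("mean depth-map spread from GPS error: %.2f m" % maps.std(0).mean())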

  11. Target position uncertainty during visually guided deep-inspiration breath-hold radiotherapy in locally advanced lung cancer.

    PubMed

    Scherman Rydhög, Jonas; Riisgaard de Blanck, Steen; Josipovic, Mirjana; Irming Jølck, Rasmus; Larsen, Klaus Richter; Clementsen, Paul; Lars Andersen, Thomas; Poulsen, Per Rugaard; Fredberg Persson, Gitte; Munck Af Rosenschold, Per

    2017-04-01

    The purpose of this study was to estimate the uncertainty in voluntary deep-inspiration breath-hold (DIBH) radiotherapy for locally advanced non-small cell lung cancer (NSCLC) patients. Perpendicular fluoroscopic movies were acquired in free breathing (FB) and DIBH during a course of visually guided DIBH radiotherapy of nine patients with NSCLC. Patients had liquid markers injected in mediastinal lymph nodes and primary tumours. Excursion, systematic and random errors, and inter-breath-hold position uncertainty were investigated using an image based tracking algorithm. A mean reduction of 2-6 mm in marker excursion in DIBH versus FB was seen in the anterior-posterior (AP), left-right (LR) and cranio-caudal (CC) directions. Lymph node motion during DIBH originated from cardiac motion. The systematic (standard deviation (SD) of all the mean marker positions) and random errors (root-mean-square of the intra-BH SD) during DIBH were 0.5 and 0.3 mm (AP), 0.5 and 0.3 mm (LR), and 0.8 and 0.4 mm (CC), respectively. The mean inter-breath-hold shifts were -0.3 mm (AP), -0.2 mm (LR), and -0.2 mm (CC). Intra- and inter-breath-hold uncertainty of tumours and lymph nodes were small in visually guided breath-hold radiotherapy of NSCLC. Target motion could be substantially reduced, but not eliminated, using visually guided DIBH. Copyright © 2017 Elsevier B.V. All rights reserved.
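
    The quoted definitions translate directly into code; a minimal sketch on synthetic marker-position traces (all numbers illustrative, not the study's data): the systematic error is the SD of the breath-hold mean positions, and the random error is the RMS of the within-breath-hold SDs.

      # Sketch: systematic and random error computed from marker-position
      # traces, following the definitions quoted above (synthetic data, mm).
      import numpy as np

      rng = np.random.default_rng(2)
      traces = []                                   # CC marker positions, mm
      for _ in range(9):                            # patients
          patient_mean = rng.normal(0.0, 0.6)
          for _ in range(int(rng.integers(3, 6))):  # breath-holds per patient
              bh_mean = patient_mean + rng.normal(0.0, 0.5)
              traces.append(rng.normal(bh_mean, 0.4, size=40))

      bh_means = np.array([t.mean() for t in traces])
      intra_sd = np.array([t.std(ddof=1) for t in traces])

      systematic = bh_means.std(ddof=1)           # SD of all mean positions
      random_err = np.sqrt(np.mean(intra_sd**2))  # RMS of the intra-BH SDs
      print(f"systematic = {systematic:.1f} mm, random = {random_err:.1f} mm")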

  12. Estimates of streamflow characteristics for selected small streams, Baker River basin, Washington

    USGS Publications Warehouse

    Williams, John R.

    1987-01-01

    Regression equations were used to estimate streamflow characteristics at eight ungaged sites on small streams in the Baker River basin in the North Cascade Mountains, Washington, that could be suitable for run-of-the-river hydropower development. The regression equations were obtained by relating known streamflow characteristics at 25 gaging stations in nearby basins to several physical and climatic variables that could be easily measured in gaged or ungaged basins. The known streamflow characteristics were mean annual flows, 1-, 3-, and 7-day low flows and high flows, mean monthly flows, and flow duration. Drainage area and mean annual precipitation were not the most significant variables in all the regression equations. Variance in the low flows and the summer mean monthly flows was reduced by including an index of glacierized area within the basin as a third variable. Standard errors of estimate of the regression equations ranged from 25 to 88%, and the largest errors were associated with the low flow characteristics. Discharge measurements made at the eight sites near midmonth each month during 1981 were used to estimate monthly mean flows at the sites for that period. These measurements also were correlated with concurrent daily mean flows from eight operating gaging stations. The correlations provided estimates of mean monthly flows that compared reasonably well with those estimated by the regression analyses. (Author's abstract)

  13. A stitch in time saves nine: external quality assessment rounds demonstrate improved quality of biomarker analysis in lung cancer

    PubMed Central

    Keppens, Cleo; Tack, Véronique; Hart, Nils ‘t; Tembuyser, Lien; Ryska, Ales; Pauwels, Patrick; Zwaenepoel, Karen; Schuuring, Ed; Cabillic, Florian; Tornillo, Luigi; Warth, Arne; Weichert, Wilko; Dequeker, Elisabeth

    2018-01-01

    Biomarker analysis has become routine practice in the treatment of non-small cell lung cancer (NSCLC). To ensure high quality testing, participation to external quality assessment (EQA) schemes is essential. This article provides a longitudinal overview of the EQA performance for EGFR, ALK, and ROS1 analyses in NSCLC between 2012 and 2015. The four scheme years were organized by the European Society of Pathology according to the ISO 17043 standard. Participants were asked to analyze the provided tissue using their routine procedures. Analysis scores improved for individual laboratories upon participation to more EQA schemes, except for ROS1 immunohistochemistry (IHC). For EGFR analysis, scheme error rates were 18.8%, 14.1% and 7.5% in 2013, 2014 and 2015 respectively. For ALK testing, error rates decreased between 2012 and 2015 by 5.2%, 3.2% and 11.8% for the fluorescence in situ hybridization (FISH), FISH digital, and IHC subschemes, respectively. In contrast, for ROS1 error rates increased between 2014 and 2015 for FISH and IHC by 3.2% and 9.3%. Technical failures decreased over the years for all three markers. Results show that EQA contributes to an ameliorated performance for most predictive biomarkers in NSCLC. Room for improvement is still present, especially for ROS1 analysis. PMID:29755669

  14. The statistical pitfalls of the partially randomized preference design in non-blinded trials of psychological interventions.

    PubMed

    Gemmell, Isla; Dunn, Graham

    2011-03-01

    In a partially randomized preference trial (PRPT) patients with no treatment preference are allocated to groups at random, but those who express a preference receive the treatment of their choice. It has been suggested that the design can improve the external and internal validity of trials. We used computer simulation to illustrate the impact that an unmeasured confounder could have on the results and conclusions drawn from a PRPT. We generated 4000 observations ("patients") that reflected the distribution of the Beck Depression Inventory (BDI) in trials of depression. Half were randomly assigned to a randomized controlled trial (RCT) design and half were assigned to a PRPT design. In the RCT, "patients" were evenly split between treatment and control groups, whereas in the preference arm, to reflect patient choice, 87.5% of patients were allocated to the experimental treatment and 12.5% to the control. Unadjusted analyses of the PRPT data consistently overestimated the treatment effect and its standard error. This led to Type I errors when the true treatment effect was small and Type II errors when the confounder effect was large. The PRPT design is not recommended as a method of establishing an unbiased estimate of treatment effect due to the potential influence of unmeasured confounders. Copyright © 2011 John Wiley & Sons, Ltd.
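
    The mechanism behind this bias can be reproduced in a few lines (synthetic BDI-like outcome and illustrative parameters, not the authors' exact simulation): a confounder that drives both treatment preference and outcome contaminates the unadjusted PRPT contrast but not the randomized one.

      # Sketch: why an unmeasured confounder biases the PRPT arm
      # (synthetic outcome; numbers illustrative, not the authors' simulation).
      import numpy as np

      rng = np.random.default_rng(42)
      n = 2000
      u = rng.normal(0, 1, n)                     # unmeasured confounder
      true_effect = -2.0                          # treatment lowers score by 2

      # RCT arm: 50/50 random allocation, independent of u
      treat_rct = rng.random(n) < 0.5
      y_rct = 25 + 3*u + true_effect*treat_rct + rng.normal(0, 5, n)
      est_rct = y_rct[treat_rct].mean() - y_rct[~treat_rct].mean()

      # PRPT arm: preference (~87.5% choose treatment) correlated with u
      treat_pr = rng.random(n) < 1/(1 + np.exp(-(2.0 + 1.5*u)))
      y_pr = 25 + 3*u + true_effect*treat_pr + rng.normal(0, 5, n)
      est_pr = y_pr[treat_pr].mean() - y_pr[~treat_pr].mean()

      print(f"true effect {true_effect}, RCT estimate {est_rct:.2f}, "
            f"PRPT estimate {est_pr:.2f}")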

  15. Pre-analytical phase: The automated ProTube device supports quality assurance in the phlebotomy process.

    PubMed

    Piva, Elisa; Tosato, Francesca; Plebani, Mario

    2015-12-07

    Most errors in laboratory medicine occur in the pre-analytical phase of the total testing process. Phlebotomy, a crucial step in the pre-analytical phase influencing laboratory results and patient outcome, calls for quality assurance procedures and automation in order to prevent errors and ensure patient safety. We compared the performance of a new small, automated device designed for use in phlebotomy with complete traceability of the process, the ProTube (Inpeco), with a centralized automated system, BC ROBO. ProTube was used for 15,010 patients undergoing phlebotomy, with 48,776 tubes being labeled. The mean time and standard deviation (SD) for blood sampling was 3:03 (min:sec; SD ± 1:24) when using ProTube, against 5:40 (min:sec; SD ± 1:57) when using BC ROBO. The mean number of patients per hour managed at each phlebotomy point was 16 ± 3 with ProTube, and 10 ± 2 with BC ROBO. No tubes were labeled erroneously or incorrectly, even though process failures occurred in 2.8% of cases when ProTube was used. Thanks to its cutting-edge technology, the ProTube has many advantages over BC ROBO, above all in verifying patient identity and in allowing a reduction in both identification errors and tube mislabeling.

  16. Open-circuit respirometry: a brief historical review of the use of Douglas bags and chemical analyzers.

    PubMed

    Shephard, Roy J

    2017-03-01

    The Douglas bag technique is reviewed as one in a series of articles looking at historical insights into the measurement of whole body metabolic rate. Consideration of articles on the Douglas bag technique and chemical gas analysis here focuses on the growing appreciation of errors in measuring expired volumes and gas composition, and on subjective reactions to airflow resistance and dead space. Multiple small sources of error have been identified and appropriate remedies proposed over a century of use of the methodology. Changes in the bag lining have limited gas diffusion, laboratories conducting gas analyses have undergone validation, and WHO guidelines on airflow resistance have minimized reactive effects. One remaining difficulty is contamination of the expirate by dead space air, minimized by keeping the dead space <70 mL. Care must also be taken to ensure a steady state, and formal validation of the Douglas bag method still needs to be carried out. We may conclude that the Douglas bag method has helped to define key concepts in exercise physiology. Although now superseded in many applications, the errors in a meticulously completed measurement are sufficiently low to warrant retention of the Douglas bag as the gold standard when evaluating newer open-circuit methodology.

  17. Towards accurate modelling of galaxy clustering on small scales: testing the standard ΛCDM + halo model

    NASA Astrophysics Data System (ADS)

    Sinha, Manodeep; Berlind, Andreas A.; McBride, Cameron K.; Scoccimarro, Roman; Piscionere, Jennifer A.; Wibking, Benjamin D.

    2018-07-01

    Interpreting the small-scale clustering of galaxies with halo models can elucidate the connection between galaxies and dark matter haloes. Unfortunately, the modelling is typically not sufficiently accurate for ruling out models statistically. It is thus difficult to use the information encoded in small scales to test cosmological models or probe subtle features of the galaxy-halo connection. In this paper, we attempt to push halo modelling into the `accurate' regime with a fully numerical mock-based methodology and careful treatment of statistical and systematic errors. With our forward-modelling approach, we can incorporate clustering statistics beyond the traditional two-point statistics. We use this modelling methodology to test the standard Λ cold dark matter (ΛCDM) + halo model against the clustering of Sloan Digital Sky Survey (SDSS) seventh data release (DR7) galaxies. Specifically, we use the projected correlation function, group multiplicity function, and galaxy number density as constraints. We find that while the model fits each statistic separately, it struggles to fit them simultaneously. Adding group statistics leads to a more stringent test of the model and significantly tighter constraints on model parameters. We explore the impact of varying the adopted halo definition and cosmological model and find that changing the cosmology makes a significant difference. The most successful model we tried (Planck cosmology with Mvir haloes) matches the clustering of low-luminosity galaxies, but exhibits a 2.3σ tension with the clustering of luminous galaxies, thus providing evidence that the `standard' halo model needs to be extended. This work opens the door to adding interesting freedom to the halo model and including additional clustering statistics as constraints.

  18. Effects of Artificial Viscosity on the Accuracy of High-reynolds-number Kappa-epsilon Turbulence Model

    NASA Technical Reports Server (NTRS)

    Chitsomboon, Tawit

    1994-01-01

    Wall functions, as used in the typical high Reynolds number k-epsilon turbulence model, can be implemented in various ways. A least disruptive method (to the flow solver) is to directly solve for the flow variables at the grid point next to the wall while prescribing the values of k and epsilon. For the centrally-differenced finite-difference scheme employing artificial viscosity (AV) as a stabilizing mechanism, this methodology proved to be totally useless. This is because the AV gives rise to a large error at the wall due to too steep a velocity gradient resulting from the use of a coarse grid, as required by the wall function methodology. This error can be eliminated simply by extrapolating velocities at the wall, instead of using the physical values of the no-slip velocities (i.e. the zero value). The applicability of the technique used in this paper is demonstrated by solving a flow over a flat plate and comparing the results with those of experiments. It was also observed that AV gives rise to a velocity overshoot (about 1 percent) near the edge of the boundary layer. This small velocity error, however, can yield as much as a 10 percent error in the momentum thickness. A method which integrates the boundary layer up to only the edge of the boundary layer (instead of infinity) was proposed and demonstrated to give better results than the standard method.
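
    The sensitivity of momentum thickness to a small edge overshoot can be checked with a toy profile (a 1/7th-power-law layer with an artificial ~1% bump of our own choosing, not the k-epsilon solution of the abstract): because the integrand u/Ue(1 - u/Ue) changes sign where u exceeds Ue, a tiny overshoot integrated to infinity produces a disproportionate error, while stopping at the edge largely avoids it.

      # Sketch: a ~1% velocity overshoot past the boundary-layer edge produces
      # a much larger relative error in momentum thickness (toy profile).
      import numpy as np

      y = np.linspace(0, 3, 3001)                   # wall-normal coord, y/delta
      u = np.where(y < 1, y**(1/7), 1.0)            # 1/7th-power-law, u/Ue
      overshoot = 0.01*np.exp(-((y - 1.5)/0.5)**2)  # ~1% bump past the edge
      u_num = u + overshoot                         # "numerical" profile w/ AV

      def theta(u_prof, y, y_max):                  # momentum thickness
          m = y <= y_max
          return np.trapz(u_prof[m]*(1 - u_prof[m]), y[m])

      t_exact = theta(u, y, 1.0)                    # ~7/72 for this profile
      print("theta, clean profile to edge :", round(t_exact, 4))
      print("error integrating to infinity: %+.1f%%"
            % (100*(theta(u_num, y, 3.0)/t_exact - 1)))
      print("error integrating to the edge: %+.1f%%"
            % (100*(theta(u_num, y, 1.0)/t_exact - 1)))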

  19. Optimizer convergence and local minima errors and their clinical importance

    NASA Astrophysics Data System (ADS)

    Jeraj, Robert; Wu, Chuan; Mackie, Thomas R.

    2003-09-01

    Two of the errors common in the inverse treatment planning optimization have been investigated. The first error is the optimizer convergence error, which appears because of non-perfect convergence to the global or local solution, usually caused by a non-zero stopping criterion. The second error is the local minima error, which occurs when the objective function is not convex and/or the feasible solution space is not convex. The magnitude of the errors, their relative importance in comparison to other errors as well as their clinical significance in terms of tumour control probability (TCP) and normal tissue complication probability (NTCP) were investigated. Two inherently different optimizers, a stochastic simulated annealing and deterministic gradient method were compared on a clinical example. It was found that for typical optimization the optimizer convergence errors are rather small, especially compared to other convergence errors, e.g., convergence errors due to inaccuracy of the current dose calculation algorithms. This indicates that stopping criteria could often be relaxed leading into optimization speed-ups. The local minima errors were also found to be relatively small and typically in the range of the dose calculation convergence errors. Even for the cases where significantly higher objective function scores were obtained the local minima errors were not significantly higher. Clinical evaluation of the optimizer convergence error showed good correlation between the convergence of the clinical TCP or NTCP measures and convergence of the physical dose distribution. On the other hand, the local minima errors resulted in significantly different TCP or NTCP values (up to a factor of 2) indicating clinical importance of the local minima produced by physical optimization.

  20. Optimizer convergence and local minima errors and their clinical importance.

    PubMed

    Jeraj, Robert; Wu, Chuan; Mackie, Thomas R

    2003-09-07

    Two of the errors common in the inverse treatment planning optimization have been investigated. The first error is the optimizer convergence error, which appears because of non-perfect convergence to the global or local solution, usually caused by a non-zero stopping criterion. The second error is the local minima error, which occurs when the objective function is not convex and/or the feasible solution space is not convex. The magnitude of the errors, their relative importance in comparison to other errors as well as their clinical significance in terms of tumour control probability (TCP) and normal tissue complication probability (NTCP) were investigated. Two inherently different optimizers, a stochastic simulated annealing and deterministic gradient method were compared on a clinical example. It was found that for typical optimization the optimizer convergence errors are rather small, especially compared to other convergence errors, e.g., convergence errors due to inaccuracy of the current dose calculation algorithms. This indicates that stopping criteria could often be relaxed leading into optimization speed-ups. The local minima errors were also found to be relatively small and typically in the range of the dose calculation convergence errors. Even for the cases where significantly higher objective function scores were obtained the local minima errors were not significantly higher. Clinical evaluation of the optimizer convergence error showed good correlation between the convergence of the clinical TCP or NTCP measures and convergence of the physical dose distribution. On the other hand, the local minima errors resulted in significantly different TCP or NTCP values (up to a factor of 2) indicating clinical importance of the local minima produced by physical optimization.
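
    Both error types can be demonstrated on a toy nonconvex objective (illustrative only, not the treatment-planning objective of the study): a loose stopping criterion yields the optimizer convergence error, and the choice of starting point yields the local minima error.

      # Sketch: convergence error (loose stopping criterion) and local minima
      # error (starting point) on a toy double-well objective.
      import numpy as np
      from scipy.optimize import minimize

      def f(x):                                  # two minima; global near x=-2.3
          return 0.1*x**4 - x**2 + 0.3*x

      global_min = min(minimize(f, x0).fun for x0 in (-3.0, 3.0))

      loose = minimize(f, 3.0, tol=1e-1)         # non-zero stopping criterion
      tight = minimize(f, 3.0, tol=1e-12)        # near-exact convergence
      print("convergence error :", float(loose.fun - tight.fun))
      print("local minima error:", float(tight.fun - global_min))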

  1. Photospheric Magnetic Field Properties of Flaring versus Flare-quiet Active Regions. II. Discriminant Analysis

    NASA Astrophysics Data System (ADS)

    Leka, K. D.; Barnes, G.

    2003-10-01

    We apply statistical tests based on discriminant analysis to the wide range of photospheric magnetic parameters described in a companion paper by Leka & Barnes, with the goal of identifying those properties that are important for the production of energetic events such as solar flares. The photospheric vector magnetic field data from the University of Hawai'i Imaging Vector Magnetograph are well sampled both temporally and spatially, and we include here data covering 24 flare-event and flare-quiet epochs taken from seven active regions. The mean value and rate of change of each magnetic parameter are treated as separate variables, thus evaluating both the parameter's state and its evolution, to determine which properties are associated with flaring. Considering single variables first, Hotelling's T²-tests show small statistical differences between flare-producing and flare-quiet epochs. Even pairs of variables considered simultaneously, which do show a statistical difference for a number of properties, have high error rates, implying a large degree of overlap of the samples. To better distinguish between flare-producing and flare-quiet populations, larger numbers of variables are simultaneously considered; lower error rates result, but no unique combination of variables is clearly the best discriminator. The sample size is too small to directly compare the predictive power of large numbers of variables simultaneously. Instead, we rank all possible four-variable permutations based on Hotelling's T²-test and look for the most frequently appearing variables in the best permutations, with the interpretation that they are most likely to be associated with flaring. These variables include an increasing kurtosis of the twist parameter and a larger standard deviation of the twist parameter, but a smaller standard deviation of the distribution of the horizontal shear angle and a horizontal field that has a smaller standard deviation but a larger kurtosis. To support the "sorting all permutations" method of selecting the most frequently occurring variables, we show that the results of a single 10-variable discriminant analysis are consistent with the ranking. We demonstrate that individually, the variables considered here have little ability to differentiate between flaring and flare-quiet populations, but with multivariable combinations, the populations may be distinguished.
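
    A two-sample Hotelling T² test is compact to implement; below is a minimal sketch on synthetic four-variable parameter vectors (the study's actual variables and sample sizes are not reproduced here).

      # Sketch: two-sample Hotelling T^2 test of flaring vs. flare-quiet
      # parameter vectors (synthetic data, illustrative only).
      import numpy as np
      from scipy import stats

      def hotelling_t2(X, Y):
          nx, ny, p = len(X), len(Y), X.shape[1]
          Sp = ((nx-1)*np.cov(X.T) + (ny-1)*np.cov(Y.T)) / (nx + ny - 2)
          d = X.mean(0) - Y.mean(0)
          T2 = (nx*ny)/(nx+ny) * d @ np.linalg.solve(Sp, d)
          F = T2 * (nx + ny - p - 1) / (p * (nx + ny - 2))
          return T2, stats.f.sf(F, p, nx + ny - p - 1)

      rng = np.random.default_rng(0)
      flare = rng.normal([0.3, 0.1, 0.0, 0.2], 1.0, size=(12, 4))  # 4 variables
      quiet = rng.normal([0.0, 0.0, 0.0, 0.0], 1.0, size=(12, 4))
      T2, p = hotelling_t2(flare, quiet)
      print(f"T2 = {T2:.2f}, p = {p:.3f}")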

  2. Use of Positive Blood Cultures for Direct Identification and Susceptibility Testing with the Vitek 2 System

    PubMed Central

    de Cueto, Marina; Ceballos, Esther; Martinez-Martinez, Luis; Perea, Evelio J.; Pascual, Alvaro

    2004-01-01

    In order to further decrease the time lapse between initial inoculation of blood culture media and the reporting of identification and antimicrobial susceptibility results for microorganisms causing bacteremia, we performed a prospective study in which specially processed fluid from positive blood culture bottles from the Bactec 9240 (Becton Dickinson, Cockeysville, Md.) containing aerobic media was directly inoculated into Vitek 2 system cards (bioMérieux, France). Organism identification and susceptibility results were compared with those obtained from cards inoculated with a standardized bacterial suspension obtained following subculture to agar; 100 consecutive positive monomicrobic blood cultures, consisting of 50 gram-negative rods and 50 gram-positive cocci, were included in the study. For gram-negative organisms, 31 of the 50 (62%) showed complete agreement with the standard method for species identification, while none of the 50 gram-positive cocci were correctly identified by the direct method. For gram-negative rods, there was 50% categorical agreement between the direct and standard methods for all drugs tested. The very major error rate was 2.4%, and the major error rate was 0.6%. The overall error rate for gram-negative organisms was 6.6%. Complete agreement in the clinical categories of all antimicrobial agents evaluated was obtained for 19 of 50 (38%) gram-positive cocci; the overall error rate was 8.4%, with 2.8% minor errors, 2.4% major errors, and 3.2% very major errors. These findings suggest that Vitek 2 cards inoculated directly from positive Bactec 9240 bottles do not provide acceptable bacterial identification or susceptibility testing in comparison with corresponding cards tested by the standard method. PMID:15297523

  3. [The quality of medication orders--can it be improved?].

    PubMed

    Vaknin, Ofra; Wingart-Emerel, Efrat; Stern, Zvi

    2003-07-01

    Medication errors are a common cause of morbidity and mortality among patients. Medication administration in hospitals is a complicated procedure with the possibility of error at each step. Errors are most commonly found at the prescription and transcription stages, although it is known that most errors can easily be avoided through strict adherence to standardized procedure guidelines. In an examination of medication errors reported in the hospital in the year 2000, we found that 38% were reported to have resulted from transcription errors. In 2001, the hospital initiated a program designed to identify faulty processing of orders in an effort to improve the quality and effectiveness of the medication administration process. As part of this program, it was decided to check and evaluate the quality of written doctors' orders and the transcription of those orders by the nursing staff in various hospital units. The study was conducted using a questionnaire that checked compliance with hospital standards for the medication administration process, applied to 6 units over the course of 8 weeks. Results of the survey showed poor compliance with guidelines on the part of doctors and nurses. Only 18% of doctors' orders in the study and 37% of the nurses' transcriptions were written according to standards. The Emergency Department showed even lower compliance, with only 3% of doctors' orders and 25% of nurses' transcriptions complying with standards. As a result of this study, it was decided to initiate an intensive in-service teaching course to refresh the staff's knowledge of medication administration guidelines. In the future, it is recommended that hand-written orders be replaced by computerized orders in an effort to limit the chance of error.

  4. Improving patient safety through quality assurance.

    PubMed

    Raab, Stephen S

    2006-05-01

    Anatomic pathology laboratories use several quality assurance tools to detect errors and to improve patient safety. To review some of the anatomic pathology laboratory patient safety quality assurance practices. Different standards and measures in anatomic pathology quality assurance and patient safety were reviewed. Frequency of anatomic pathology laboratory error, variability in the use of specific quality assurance practices, and use of data for error reduction initiatives. Anatomic pathology error frequencies vary according to the detection method used. Based on secondary review, a College of American Pathologists Q-Probes study showed that the mean laboratory error frequency was 6.7%. A College of American Pathologists Q-Tracks study measuring frozen section discrepancy found that laboratories improved the longer they monitored and shared data. There is a lack of standardization across laboratories even for governmentally mandated quality assurance practices, such as cytologic-histologic correlation. The National Institutes of Health funded a consortium of laboratories to benchmark laboratory error frequencies, perform root cause analysis, and design error reduction initiatives, using quality assurance data. Based on the cytologic-histologic correlation process, these laboratories found an aggregate nongynecologic error frequency of 10.8%. Based on gynecologic error data, the laboratory at my institution used Toyota production system processes to lower gynecologic error frequencies and to improve Papanicolaou test metrics. Laboratory quality assurance practices have been used to track error rates, and laboratories are starting to use these data for error reduction initiatives.

  5. Characteristics of advanced hydrogen maser frequency standards

    NASA Technical Reports Server (NTRS)

    Peters, H. E.

    1973-01-01

    Measurements with several operational atomic hydrogen maser standards have been made which illustrate the fundamental characteristics of the maser as well as the analysability of the corrections which are made to relate the oscillation frequency to the free, unperturbed, hydrogen standard transition frequency. Sources of the most important perturbations, and the magnitude of the associated errors, are discussed. A variable-volume storage bulb hydrogen maser is also illustrated which can provide accuracy on the order of 2 parts in 10^14 or better in evaluating the wall shift. Since the other basic error sources combined contribute no more than approximately 1 part in 10^14 of uncertainty, the variable-volume storage bulb hydrogen maser will have a net intrinsic accuracy capability on the order of 2 parts in 10^14 or better. This is an order of magnitude less error than anticipated with cesium standards and is comparable to the basic limit expected for a free-atom hydrogen beam resonance standard.

  6. On a more rigorous gravity field processing for future LL-SST type gravity satellite missions

    NASA Astrophysics Data System (ADS)

    Daras, I.; Pail, R.; Murböck, M.

    2013-12-01

    In order to meet the growing demands of the user community concerning the accuracy of temporal gravity field models, future gravity missions of low-low satellite-to-satellite tracking (LL-SST) type are planned to carry more precise sensors than their predecessors. A breakthrough is planned with the improved LL-SST measurement link, where the traditional K-band microwave instrument of 1 μm accuracy will be complemented by an inter-satellite ranging instrument of several nm accuracy. This study focuses on investigations concerning the potential performance of the new sensors and their impact on gravity field solutions. The processing methods for gravity field recovery have to meet the new sensor standards and be able to take full advantage of the new accuracies that they provide. We use full-scale simulations in a realistic environment to investigate whether the standard processing techniques suffice to fully exploit the new sensor standards. We achieve this by performing full numerical closed-loop simulations based on the integral equation approach. In our simulation scheme, we simulate dynamic orbits in a conventional tracking analysis to compute pseudo inter-satellite ranges or range-rates that serve as observables. Each part of the processing is validated separately, with special emphasis on numerical errors and their impact on gravity field solutions. We demonstrate that processing with standard precision may be a limiting factor in taking full advantage of the new generation of sensors that future satellite missions will carry. We have therefore created versions of our simulator with enhanced processing precision, with the primary aim of minimizing round-off errors in the system. Results using the enhanced precision show a large reduction of the system errors that were present in the standard-precision processing even for the error-free scenario, and reveal the improvements the new sensors will bring to the gravity field solutions. As a next step, we analyze the contribution of individual error sources to the system's error budget. More specifically, we analyze sensor noise from the laser interferometer and the accelerometers, errors in the kinematic orbits and the background fields, as well as temporal and spatial aliasing errors. We take special care in assessing error sources with stochastic behavior, such as the laser interferometer and the accelerometers, and in modeling them consistently within the adjustment process.
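
    The round-off issue can be made concrete with a small experiment (a generic ill-scaled least-squares problem, not the actual gravity-field simulator): forming normal equations in single precision loses far more digits than doing so in double precision, even with error-free observations.

    # Sketch: round-off in normal equations at single vs. double precision.
    import numpy as np

    rng = np.random.default_rng(2)
    A = rng.normal(size=(2000, 50))
    A[:, 0] *= 1e4                    # poor scaling inflates the condition number
    x_true = rng.normal(size=50)
    y = A @ x_true                    # error-free synthetic observations

    for dtype in (np.float32, np.float64):
        Ad, yd = A.astype(dtype), y.astype(dtype)
        N = Ad.T @ Ad                 # normal equations square the conditioning
        x_hat = np.linalg.solve(N, Ad.T @ yd)
        err = np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true)
        print(f"{np.dtype(dtype).name}: relative solution error {err:.1e}")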

  7. Asymmetries of the B →K*μ+μ- decay and the search of new physics beyond the standard model

    NASA Astrophysics Data System (ADS)

    Fu, Hai-Bing; Wu, Xing-Gang; Cheng, Wei; Zhong, Tao; Sun, Zhan

    2018-03-01

    In this paper, we compute the forward-backward asymmetry and the isospin asymmetry of the B → K*μ+μ- decay. The B → K* transition form factors (TFFs) are key components of the decay. To achieve a more accurate QCD prediction, we adopt a chiral correlator for calculating the QCD light-cone sum rules for those TFFs, with the purpose of suppressing the uncertain high-twist distribution amplitudes. Our predictions show that the asymmetries under the standard model and the minimal supersymmetric standard model with minimal flavor violation are close in shape for q^2 ≥ 6 GeV^2 and are consistent with the Belle, LHCb, and CDF data within errors. When q^2 < 2 GeV^2, their predictions behave quite differently. Thus, a careful study of the B → K*μ+μ- decay within the small q^2 region could be helpful in searching for new physics beyond the standard model. As a further application, we also apply the B → K* TFFs to the branching ratio and longitudinal polarization fraction of the B → K*νν̄ decay within different models.

  8. Intermittent nocturnal hypoxia and metabolic risk in obese adolescents with obstructive sleep apnea.

    PubMed

    Narang, Indra; McCrindle, Brian W; Manlhiot, Cedric; Lu, Zihang; Al-Saleh, Suhail; Birken, Catherine S; Hamilton, Jill

    2018-01-22

    There are conflicting data regarding the independent associations of obstructive sleep apnea (OSA) with metabolic risk in obese youth. Previous studies have not consistently addressed central adiposity, specifically an elevated waist-to-height ratio (WHtR), which is associated with metabolic risk independent of body mass index. The objective of this study was to determine the independent effects of the obstructive apnea-hypopnea index (OAHI) and associated indices of nocturnal hypoxia on metabolic function in obese youth after adjusting for WHtR. Subjects had standardized anthropometric measurements. Fasting blood tests included insulin, glucose, glycated hemoglobin, alanine transferase, and aspartate transaminase. Insulin resistance was quantified with the homeostatic model assessment. Overnight polysomnography determined the OAHI and nocturnal oxygenation indices. Of the 75 recruited subjects, 23% were diagnosed with OSA. Adjusting for age, gender, and WHtR in multivariable linear regression models, a higher oxygen desaturation index was associated with higher fasting insulin (coefficient [standard error] = 48.076 [11.255], p < 0.001), higher glycated hemoglobin (coefficient [standard error] = 0.097 [0.041], p = 0.02), higher insulin resistance (coefficient [standard error] = 1.516 [0.364], p < 0.001), elevated alanine transferase (coefficient [standard error] = 11.631 [2.770], p < 0.001), and elevated aspartate transaminase (coefficient [standard error] = 4.880 [1.444], p = 0.001). However, there were no significant associations between the OAHI, glucose metabolism, and liver enzymes. Intermittent nocturnal hypoxia, rather than the OAHI, was associated with metabolic risk in obese youth after adjusting for WHtR. Measures of abdominal adiposity such as WHtR should be considered in future studies that evaluate the impact of OSA on metabolic health.
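
    The adjustment structure described here can be sketched with statsmodels on synthetic data (variable names, effect sizes, and distributions below are placeholders, not the study's records):

    # Sketch: regress a metabolic outcome on the oxygen desaturation index
    # while controlling for age, sex, and waist-to-height ratio.
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(3)
    n = 75
    df = pd.DataFrame({
        "odi": rng.gamma(2.0, 2.0, n),        # oxygen desaturation index
        "age": rng.uniform(10, 18, n),
        "male": rng.integers(0, 2, n),
        "whtr": rng.normal(0.60, 0.05, n),    # waist-to-height ratio
    })
    # Synthetic outcome in which ODI and central adiposity both raise insulin.
    df["insulin"] = 30 + 4.8 * df["odi"] + 50 * df["whtr"] + rng.normal(0, 15, n)

    fit = smf.ols("insulin ~ odi + age + male + whtr", data=df).fit()
    print(fit.params["odi"], fit.bse["odi"])  # coefficient and standard error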

  9. Asymptotic Standard Errors for Item Response Theory True Score Equating of Polytomous Items

    ERIC Educational Resources Information Center

    Cher Wong, Cheow

    2015-01-01

    Building on previous works by Lord and Ogasawara for dichotomous items, this article proposes an approach to derive the asymptotic standard errors of item response theory true score equating involving polytomous items, for equivalent and nonequivalent groups of examinees. This analytical approach could be used in place of empirical methods like…

  10. Standard Error Estimation of 3PL IRT True Score Equating with an MCMC Method

    ERIC Educational Resources Information Center

    Liu, Yuming; Schulz, E. Matthew; Yu, Lei

    2008-01-01

    A Markov chain Monte Carlo (MCMC) method and a bootstrap method were compared in the estimation of standard errors of item response theory (IRT) true score equating. Three test form relationships were examined: parallel, tau-equivalent, and congeneric. Data were simulated based on Reading Comprehension and Vocabulary tests of the Iowa Tests of…

  11. How Accurate Is a Test Score?

    ERIC Educational Resources Information Center

    Doppelt, Jerome E.

    1956-01-01

    The standard error of measurement as a means for estimating the margin of error that should be allowed for in test scores is discussed. The true score measures the performance that is characteristic of the person tested; the variations, plus and minus, around the true score describe a characteristic of the test. When the standard deviation is used…
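
    For reference, the quantity this record discusses has a standard classical-test-theory form; the following is textbook background, not material from the abstract itself:

    % Standard error of measurement from the score standard deviation and reliability:
    \mathrm{SEM} = \sigma_X \sqrt{1 - r_{XX'}},
    \qquad \text{approximate 95\% band around an observed score: } X \pm 1.96\,\mathrm{SEM}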

  12. Standard Errors for National Trends in International Large-Scale Assessments in the Case of Cross-National Differential Item Functioning

    ERIC Educational Resources Information Center

    Sachse, Karoline A.; Haag, Nicole

    2017-01-01

    Standard errors computed according to the operational practices of international large-scale assessment studies such as the Programme for International Student Assessment's (PISA) or the Trends in International Mathematics and Science Study (TIMSS) may be biased when cross-national differential item functioning (DIF) and item parameter drift are…

  13. Standard Error of Linear Observed-Score Equating for the NEAT Design with Nonnormally Distributed Data

    ERIC Educational Resources Information Center

    Zu, Jiyun; Yuan, Ke-Hai

    2012-01-01

    In the nonequivalent groups with anchor test (NEAT) design, the standard error of linear observed-score equating is commonly estimated by an estimator derived assuming multivariate normality. However, real data are seldom normally distributed, causing this normal estimator to be inconsistent. A general estimator, which does not rely on the…

  14. Standard Errors of Estimated Latent Variable Scores with Estimated Structural Parameters

    ERIC Educational Resources Information Center

    Hoshino, Takahiro; Shigemasu, Kazuo

    2008-01-01

    The authors propose a concise formula to evaluate the standard error of the estimated latent variable score when the true values of the structural parameters are not known and must be estimated. The formula can be applied to factor scores in factor analysis or ability parameters in item response theory, without bootstrap or Markov chain Monte…

  15. 40 CFR 1065.1005 - Symbols, abbreviations, acronyms, and units of measure.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    Fragment of the regulation's table of symbols (damaged in extraction); recoverable entries include: ratio of diameters, meter per meter (m/m); atomic oxygen-to-carbon ratio, mole per mole (mol/mol); ... error between a quantity and its reference; e, brake-specific emission or fuel consumption, gram per ...; S, Sutherland constant, kelvin (K); SEE, standard estimate of error; T, absolute temperature ...

  16. Standard errors in forest area

    Treesearch

    Joseph McCollum

    2002-01-01

    I trace the development of standard error equations for forest area, beginning with the theory behind double sampling and the variance of a product. The discussion shifts to the particular problem of forest area - at which time the theory becomes relevant. There are subtle difficulties in figuring out which variance of a product equation should be used. The equations...
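
    One variance-of-a-product candidate of the kind this note weighs is Goodman's estimator; a minimal sketch for forest area expressed as (proportion forested) times (total area), with invented numbers and assuming the two estimates are independent:

    # Sketch: standard error of forest area via Goodman's variance of a product.
    import math

    p, var_p = 0.42, 0.0004      # estimated forested proportion, its variance
    a, var_a = 1.5e6, 4.0e8      # estimated total area (acres), its variance

    forest = p * a
    # Goodman's estimator for independent, unbiased estimates:
    # var(xy) = x^2 v_y + y^2 v_x - v_x v_y.
    var_forest = p**2 * var_a + a**2 * var_p - var_p * var_a
    print(f"forest area {forest:.3e} acres, SE {math.sqrt(var_forest):.3e}")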

  17. Analyzing Multilevel Data: An Empirical Comparison of Parameter Estimates of Hierarchical Linear Modeling and Ordinary Least Squares Regression

    ERIC Educational Resources Information Center

    Rocconi, Louis M.

    2011-01-01

    Hierarchical linear models (HLM) solve the problems associated with the unit of analysis problem such as misestimated standard errors, heterogeneity of regression and aggregation bias by modeling all levels of interest simultaneously. Hierarchical linear modeling resolves the problem of misestimated standard errors by incorporating a unique random…
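
    The misestimated-standard-error problem can be reproduced in a few lines (synthetic clustered data; a cluster-robust covariance stands in here for the full HLM treatment):

    # Sketch: OLS understates the SE of a group-level effect under clustering.
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(4)
    n_groups, n_per = 30, 20
    groups = np.repeat(np.arange(n_groups), n_per)
    x = np.repeat(rng.normal(size=n_groups), n_per)   # level-2 predictor
    u = np.repeat(rng.normal(size=n_groups), n_per)   # shared group effect
    y = 0.3 * x + u + rng.normal(size=n_groups * n_per)

    X = sm.add_constant(x)
    naive = sm.OLS(y, X).fit()                        # ignores the clustering
    robust = sm.OLS(y, X).fit(cov_type="cluster", cov_kwds={"groups": groups})
    print("naive SE:", naive.bse[1], " cluster-robust SE:", robust.bse[1])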

  18. A Sandwich-Type Standard Error Estimator of SEM Models with Multivariate Time Series

    ERIC Educational Resources Information Center

    Zhang, Guangjian; Chow, Sy-Miin; Ong, Anthony D.

    2011-01-01

    Structural equation models are increasingly used as a modeling tool for multivariate time series data in the social and behavioral sciences. Standard error estimators of SEM models, originally developed for independent data, require modifications to accommodate the fact that time series data are inherently dependent. In this article, we extend a…

  19. Patient Safety: Moving the Bar in Prison Health Care Standards

    PubMed Central

    Greifinger, Robert B.; Mellow, Jeff

    2010-01-01

    Improvements in community health care quality through error reduction have been slow to transfer to correctional settings. We convened a panel of correctional experts, which recommended 60 patient safety standards focusing on such issues as creating safety cultures at organizational, supervisory, and staff levels through changes to policy and training and by ensuring staff competency, reducing medication errors, encouraging the seamless transfer of information between and within practice settings, and developing mechanisms to detect errors or near misses and to shift the emphasis from blaming staff to fixing systems. To our knowledge, this is the first published set of standards focusing on patient safety in prisons, adapted from the emerging literature on quality improvement in the community. PMID:20864714

  20. Kappa statistic for the clustered dichotomous responses from physicians and patients

    PubMed Central

    Kang, Chaeryon; Qaqish, Bahjat; Monaco, Jane; Sheridan, Stacey L.; Cai, Jianwen

    2013-01-01

    The bootstrap method for estimating the standard error of the kappa statistic in the presence of clustered data is evaluated. Such data arise, for example, in assessing agreement between physicians and their patients regarding their understanding of the physician-patient interaction and discussions. We propose a computationally efficient procedure for generating correlated dichotomous responses for physicians and assigned patients for simulation studies. The simulation results demonstrate that, with at least a moderately large number of clusters, the proposed bootstrap method produces a better estimate of the standard error and better coverage performance than the asymptotic standard error estimate that ignores dependence among patients within physicians. An example of an application to a coronary heart disease prevention study is presented. PMID:23533082
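
    A minimal sketch of the cluster bootstrap described here, on synthetic physician/patient responses (scikit-learn's cohen_kappa_score substitutes for the paper's kappa computation):

    # Sketch: resample whole physicians (clusters), recompute kappa, and take
    # the standard deviation of the replicates as the standard error.
    import numpy as np
    from sklearn.metrics import cohen_kappa_score

    rng = np.random.default_rng(5)
    n_phys, n_pat = 40, 8
    phys = rng.integers(0, 2, size=(n_phys, n_pat))   # physician answers
    flip = rng.random((n_phys, n_pat)) < 0.2          # ~20% disagreement
    pat = np.where(flip, 1 - phys, phys)              # patient answers

    def kappa(rows):
        return cohen_kappa_score(phys[rows].ravel(), pat[rows].ravel())

    reps = [kappa(rng.integers(0, n_phys, size=n_phys)) for _ in range(2000)]
    print(f"kappa {kappa(np.arange(n_phys)):.3f}, bootstrap SE {np.std(reps):.3f}")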

  1. Understanding Problem-Solving Errors by Students with Learning Disabilities in Standards-Based and Traditional Curricula

    ERIC Educational Resources Information Center

    Bouck, Emily C.; Bouck, Mary K.; Joshi, Gauri S.; Johnson, Linley

    2016-01-01

    Students with learning disabilities struggle with word problems in mathematics classes. Understanding the type of errors students make when working through such mathematical problems can further describe student performance and highlight student difficulties. Through the use of error codes, researchers analyzed the type of errors made by 14 sixth…

  2. Defining the Relationship Between Human Error Classes and Technology Intervention Strategies

    NASA Technical Reports Server (NTRS)

    Wiegmann, Douglas A.; Rantanen, Esa; Crisp, Vicki K. (Technical Monitor)

    2002-01-01

    One of the main factors in all aviation accidents is human error. The NASA Aviation Safety Program (AvSP), therefore, has identified several human-factors safety technologies to address this issue. Some technologies directly address human error either by attempting to reduce the occurrence of errors or by mitigating the negative consequences of errors. However, new technologies and system changes may also introduce new error opportunities or even induce different types of errors. Consequently, a thorough understanding of the relationship between error classes and technology "fixes" is crucial for the evaluation of intervention strategies outlined in the AvSP, so that resources can be effectively directed to maximize the benefit to flight safety. The purpose of the present project, therefore, was to examine the repositories of human factors data to identify possible relationships between different error classes and technology intervention strategies. The first phase of the project, which is summarized here, involved the development of prototype data structures or matrices that map errors onto "fixes" (and vice versa), with the hope of facilitating the development of standards for evaluating safety products. Possible follow-on phases of this project are also discussed. These additional efforts include a thorough and detailed review of the literature to fill in the data matrix and the construction of a complete database and standards checklists.

  3. Translating Radiometric Requirements for Satellite Sensors to Match International Standards.

    PubMed

    Pearlman, Aaron; Datla, Raju; Kacker, Raghu; Cao, Changyong

    2014-01-01

    International scientific standards organizations created standards on evaluating uncertainty in the early 1990s. Although scientists from many fields use these standards, they are not consistently implemented in the remote sensing community, where the traditional error analysis framework persists. For a satellite instrument under development, this can create confusion in showing whether requirements are met. We aim to create a methodology for translating requirements from the error analysis framework to the modern uncertainty approach using the product-level requirements of the Advanced Baseline Imager (ABI) that will fly on the Geostationary Operational Environmental Satellite R-Series (GOES-R). In this paper we prescribe a method to combine several measurement performance requirements, written using a traditional error analysis framework, into a single specification using the propagation of uncertainties formula. By using this approach, scientists can communicate requirements in a consistent uncertainty framework, leading to uniform interpretation throughout the development and operation of any satellite instrument.

  4. Translating Radiometric Requirements for Satellite Sensors to Match International Standards

    PubMed Central

    Pearlman, Aaron; Datla, Raju; Kacker, Raghu; Cao, Changyong

    2014-01-01

    International scientific standards organizations created standards on evaluating uncertainty in the early 1990s. Although scientists from many fields use these standards, they are not consistently implemented in the remote sensing community, where the traditional error analysis framework persists. For a satellite instrument under development, this can create confusion in showing whether requirements are met. We aim to create a methodology for translating requirements from the error analysis framework to the modern uncertainty approach using the product-level requirements of the Advanced Baseline Imager (ABI) that will fly on the Geostationary Operational Environmental Satellite R-Series (GOES-R). In this paper we prescribe a method to combine several measurement performance requirements, written using a traditional error analysis framework, into a single specification using the propagation of uncertainties formula. By using this approach, scientists can communicate requirements in a consistent uncertainty framework, leading to uniform interpretation throughout the development and operation of any satellite instrument. PMID:26601032
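
    The propagation-of-uncertainties formula these two records invoke is the standard first-order (GUM) combination; for y = f(x_1, ..., x_n) with independent inputs of standard uncertainty u(x_i):

    % Combined standard uncertainty (first-order, independent inputs):
    u_c(y) = \sqrt{\sum_{i=1}^{n} \left( \frac{\partial f}{\partial x_i} \right)^{2} u^{2}(x_i)}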

  5. Liquid scintillation counting for 14C uptake of single algal cells isolated from natural samples

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rivkin, R.B.; Seliger, H.H.

    1981-07-01

    Short-term rates of 14C uptake for single cells and small numbers of isolated algal cells of five phytoplankton species from natural populations were measured by liquid scintillation counting. Regression analysis of uptake rates per cell for cells isolated from unialgal cultures of seven species of dinoflagellates, ranging in volume from ca. 10^3 to 10^7 μm^3, gave results identical to uptake rates per cell measured by conventional 14C techniques. Relative standard errors of the regression coefficients ranged between 3 and 10%, indicating that for any species there was little variation in photosynthesis per cell.

  6. Note: Rotaphone, a new self-calibrated six-degree-of-freedom seismic sensor

    USGS Publications Warehouse

    Brokešová, Johana; Málek, Jiří; Evans, John R.

    2012-01-01

    We have developed and tested (calibration, linearity, and cross-axis errors) a new six-degree-of-freedom mechanical seismic sensor for collocated measurements of three translational and three rotational ground motion velocity components. The device consists of standard geophones arranged in parallel pairs to detect spatial gradients. The instrument operates in a high-frequency range (above the natural frequency of the geophones, 4.5 Hz). Its theoretical sensitivity limit in this range is 10^-9 m/s in ground velocity and 10^-9 rad/s in rotation rate. Small size and weight, and easy installation and maintenance, make the instrument useful for local-earthquake recording and seismic prospecting.

  7. One Small Step for the Gram Stain, One Giant Leap for Clinical Microbiology

    PubMed Central

    2016-01-01

    The Gram stain is one of the most commonly performed tests in the clinical microbiology laboratory, yet it is poorly controlled and lacks standardization. It was once the best rapid test in microbiology, but it is no longer trusted by many clinicians. The publication by Samuel et al. (J. Clin. Microbiol. 54:1442–1447, 2016, http://dx.doi.org/10.1128/JCM.03066-15) is a start for those who want to evaluate and improve Gram stain performance. In an age of emerging rapid molecular results, is the Gram stain still relevant? How should clinical microbiologists respond to the call to reduce Gram stain error rates? PMID:27008876

  8. Anomaly Detection for Beam Loss Maps in the Large Hadron Collider

    NASA Astrophysics Data System (ADS)

    Valentino, Gianluca; Bruce, Roderik; Redaelli, Stefano; Rossi, Roberto; Theodoropoulos, Panagiotis; Jaster-Merz, Sonja

    2017-07-01

    In the LHC, beam loss maps are used to validate collimator settings for cleaning and machine protection. This is done by monitoring the loss distribution in the ring during infrequent controlled loss map campaigns, as well as in standard operation. Due to the complexity of the system, consisting of more than 50 collimators per beam, it is difficult with such methods to identify small changes in the collimation hierarchy, which may be due to setting errors or beam orbit drifts. A technique based on Principal Component Analysis and Local Outlier Factor is presented to detect anomalies in the loss maps and therefore provide an automatic check of the collimation hierarchy.
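
    A minimal sketch of the named chain, PCA for dimensionality reduction followed by Local Outlier Factor, on synthetic stand-in "loss maps" rather than LHC beam-loss monitor data:

    # Sketch: embed loss maps with PCA, then flag outliers with LOF.
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.neighbors import LocalOutlierFactor

    rng = np.random.default_rng(6)
    normal_maps = rng.normal(size=(200, 500))   # 200 loss maps, 500 monitors
    anomaly = rng.normal(size=(1, 500)) + 4.0   # one shifted "hierarchy"
    maps = np.vstack([normal_maps, anomaly])

    embedded = PCA(n_components=10).fit_transform(maps)
    labels = LocalOutlierFactor(n_neighbors=20).fit_predict(embedded)
    print("flagged as anomalous:", np.where(labels == -1)[0])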

  9. Piston manometer as an absolute standard for vacuum-gage calibration in the range 2 to 500 millitorr

    NASA Technical Reports Server (NTRS)

    Warshawsky, I.

    1972-01-01

    A thin disk is suspended, with very small annular clearance, in a cylindrical opening in the base plate of a calibration chamber. A continuous flow of calibration gas passes through the chamber and the annular opening to a downstream high-vacuum pump. The ratio of pressures on the two faces of the disk is very large, so that the upstream pressure is substantially equal to the net force on the disk divided by the disk area. This force is measured with a dynamometer that is calibrated in place with dead weights. A probable error of ±(0.2 millitorr + 0.2 percent) is attainable when the downstream pressure is known to 10 percent.

  10. Documenting Models for Interoperability and Reusability ...

    EPA Pesticide Factsheets

    Many modeling frameworks compartmentalize science via individual models that link sets of small components to create larger modeling workflows. Developing integrated watershed models increasingly requires coupling multidisciplinary, independent models, as well as collaboration between scientific communities, since component-based modeling can integrate models from different disciplines. Integrated Environmental Modeling (IEM) systems focus on transferring information between components by capturing a conceptual site model; establishing local metadata standards for input/output of models and databases; managing data flow between models and throughout the system; facilitating quality control of data exchanges (e.g., checking units, unit conversions, transfers between software languages); warning and error handling; and coordinating sensitivity/uncertainty analyses. Although many computational software systems facilitate communication between, and execution of, components, there are no common approaches, protocols, or standards for turn-key linkages between software systems and models, especially if modifying components is not the intent. Using a standard ontology, this paper reviews how models can be described for discovery, understanding, evaluation, access, and implementation to facilitate interoperability and reusability. In the proceedings of the International Environmental Modelling and Software Society (iEMSs), 8th International Congress on Environmental Mod

  11. Simultaneous Control of Error Rates in fMRI Data Analysis

    PubMed Central

    Kang, Hakmook; Blume, Jeffrey; Ombao, Hernando; Badre, David

    2015-01-01

    The key idea of statistical hypothesis testing is to fix, and thereby control, the Type I error (false positive) rate across samples of any size. Multiple comparisons inflate the global (family-wise) Type I error rate, and the traditional solution to maintaining control of the error rate is to increase the local (comparison-wise) Type II error (false negative) rates. However, in the analysis of human brain imaging data, the number of comparisons is so large that this solution breaks down: the local Type II error rate ends up being so large that scientifically meaningful analysis is precluded. Here we propose a novel solution to this problem: allow the Type I error rate to converge to zero along with the Type II error rate. It works because when the Type I error rate per comparison is very small, the accumulated (global) Type I error rate is also small. This solution is achieved by employing the likelihood paradigm, which uses likelihood ratios to measure the strength of evidence on a voxel-by-voxel basis. In this paper, we provide theoretical and empirical justification for a likelihood approach to the analysis of human brain imaging data. In addition, we present extensive simulations showing that the likelihood approach is viable, leading to ‘cleaner’-looking brain maps and operational superiority (lower average error rate). Finally, we include a case study on cognitive control related activation in the prefrontal cortex of the human brain. PMID:26272730
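
    The central claim, that a fixed likelihood-ratio benchmark lets both error rates fall as evidence accumulates, can be checked with a toy normal-means simulation (invented numbers, not brain-imaging data):

    # Sketch: with a fixed LR benchmark k, Type I and Type II rates both shrink.
    import numpy as np

    rng = np.random.default_rng(7)
    k, delta, reps = 8.0, 0.5, 20000     # LR benchmark, effect size, replicates

    for n in (20, 80, 320):
        m0 = rng.normal(0.0, 1.0, size=(reps, n)).mean(axis=1)    # under H0
        m1 = rng.normal(delta, 1.0, size=(reps, n)).mean(axis=1)  # under H1
        # log likelihood ratio of H1 (mu = delta) vs H0 (mu = 0), unit variance
        llr = lambda m: n * (delta * m - delta**2 / 2.0)
        type1 = np.mean(llr(m0) > np.log(k))   # strong evidence for H1 under H0
        type2 = np.mean(llr(m1) < np.log(k))   # evidence falls short under H1
        print(f"n={n}: Type I {type1:.4f}, Type II {type2:.4f}")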

  12. UAS Well Clear Recovery Against Non-Cooperative Intruders Using Vertical Maneuvers

    NASA Technical Reports Server (NTRS)

    Cone, Andrew C.; Thipphavong, David; Lee, Seung Man; Santiago, Confesor

    2017-01-01

    This paper documents a study that drove the development of a mathematical expression in the detect-and-avoid (DAA) minimum operational performance standards (MOPS) for unmanned aircraft systems (UAS). This equation describes the conditions under which vertical maneuver guidance should be provided during recovery of DAA well clear separation with a non-cooperative VFR aircraft. Although the original hypothesis was that vertical maneuvers for DAA well clear recovery should only be offered when sensor vertical rate errors are small, this paper suggests that UAS climb and descent performance should be considered, in addition to sensor errors for vertical position and vertical rate, when determining whether to offer vertical guidance. A fast-time simulation study involving 108,000 encounters between a UAS and a non-cooperative visual-flight-rules aircraft was conducted. Results are presented showing that, when vertical maneuver guidance for DAA well clear recovery was suppressed, the minimum vertical separation increased by roughly 50 feet (or horizontal separation by 500 to 800 feet). However, the percentage of encounters that had a risk of collision when performing vertical well clear recovery maneuvers was reduced as UAS vertical rate performance increased and sensor vertical rate errors decreased. A class of encounter is identified for which vertical-rate error had a large effect on the efficacy of horizontal maneuvers due to the difficulty of making the correct left/right turn decision: a crossing conflict with the intruder changing altitude. Overall, these results support logic that would allow vertical maneuvers when UAS vertical performance is sufficient to avoid the intruder, based on the intruder's estimated vertical position and vertical rate, as well as the vertical rate error of the UAS' sensor.

  13. A meta-analysis of inhibitory-control deficits in patients diagnosed with Alzheimer's dementia.

    PubMed

    Kaiser, Anna; Kuhlmann, Beatrice G; Bosnjak, Michael

    2018-05-10

    The authors conducted meta-analyses to determine the magnitude of performance impairments in patients diagnosed with Alzheimer's dementia (AD) compared with healthy aging (HA) controls on eight tasks commonly used to measure inhibitory control. Response times (RTs) and error rates from a total of 64 studies were analyzed with random-effects models (overall effects) and mixed-effects models (moderator analyses). Large differences between AD patients and HA controls emerged in the basic inhibition conditions of many of the tasks, with AD patients often performing more slowly, overall d = 1.17, 95% CI [0.88-1.45], and making more errors, d = 0.83 [0.63-1.03]. However, comparably large differences were also present in performance on many of the baseline control conditions, d = 1.01 [0.83-1.19] for RTs and d = 0.44 [0.19-0.69] for error rates. A standardized derived inhibition score (i.e., control-condition score - inhibition-condition score) suggested no significant mean group difference for RTs, d = -0.07 [-0.22-0.08], and only a small difference for errors, d = 0.24 [-0.12-0.60]. Effects varied systematically across tasks and with AD severity. Although the error rate results suggest a specific deterioration of inhibitory-control abilities in AD, further processes beyond inhibitory control (e.g., a general reduction in processing speed and other, task-specific attentional processes) appear to contribute to AD patients' performance deficits observed on a variety of inhibitory-control tasks. Nonetheless, the inhibition conditions of many of these tasks discriminate well between AD patients and HA controls.
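
    For readers unfamiliar with the model family used for the overall effects, a minimal DerSimonian-Laird random-effects pooling sketch (the effect sizes and variances below are invented, not the meta-analysis data):

    # Sketch: random-effects pooling of standardized mean differences.
    import numpy as np

    d = np.array([1.3, 0.9, 1.1, 1.5, 0.7])       # per-study effect sizes
    v = np.array([0.10, 0.08, 0.12, 0.15, 0.09])  # per-study variances

    w = 1.0 / v                                   # fixed-effect weights
    d_fixed = np.sum(w * d) / np.sum(w)
    Q = np.sum(w * (d - d_fixed) ** 2)            # heterogeneity statistic
    k = d.size
    tau2 = max(0.0, (Q - (k - 1)) / (np.sum(w) - np.sum(w**2) / np.sum(w)))

    w_re = 1.0 / (v + tau2)                       # random-effects weights
    d_re = np.sum(w_re * d) / np.sum(w_re)
    se = np.sqrt(1.0 / np.sum(w_re))
    print(f"pooled d = {d_re:.2f}, 95% CI [{d_re - 1.96*se:.2f}, {d_re + 1.96*se:.2f}]")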

  14. Talar dome detection and its geometric approximation in CT: Sphere, cylinder or bi-truncated cone?

    PubMed

    Huang, Junbin; Liu, He; Wang, Defeng; Griffith, James F; Shi, Lin

    2017-04-01

    The purpose of our study is to give a relatively objective definition of the talar dome and its shape approximations by sphere (SPH), cylinder (CLD), and bi-truncated cone (BTC). The talar dome is well defined with an improved Dijkstra's algorithm, considering Euclidean distance and surface curvature. The geometric similarity between the talar dome and the ideal shapes, namely SPH, CLD, and BTC, is quantified. 50 unilateral CT datasets from 50 subjects with no pathological morphometry of tali were included in the experiments, and statistical analyses were carried out based on the approximation error. The similarity between the talar dome and BTC was more prominent, with smaller mean, standard deviation, maximum, and median of the approximation error (0.36 ± 0.07 mm, 0.32 ± 0.06 mm, 2.24 ± 0.47 mm, and 0.28 ± 0.06 mm) compared with fitting to SPH and CLD. In addition, there were significant differences between the fitting errors of each pair of models in terms of the 4 measurements (p-values < 0.05). The linear regression analyses demonstrated high correlation between the CLD and BTC approximations (R^2 = 0.55 for the median, R^2 > 0.7 for the others). Color maps representing the fitting error indicated that fitting error mainly occurred on the marginal regions of the talar dome for the SPH and CLD fittings, while that of BTC was small for the whole talar dome. The successful restoration of ankle functions in displacement surgery depends highly on a comprehensive understanding of the talus. The talar dome surface can be well defined in a computational way, and compared to SPH and CLD, the talar dome shows outstanding similarity with BTC.

  15. Developing and Validating a Tablet Version of an Illness Explanatory Model Interview for a Public Health Survey in Pune, India

    PubMed Central

    Giduthuri, Joseph G.; Maire, Nicolas; Joseph, Saju; Kudale, Abhay; Schaetti, Christian; Sundaram, Neisha; Schindler, Christian; Weiss, Mitchell G.

    2014-01-01

    Background Mobile electronic devices are replacing paper-based instruments and questionnaires for epidemiological and public health research. The elimination of a data-entry step after an interview is a notable advantage over paper, saving investigator time, decreasing the time lags in managing and analyzing data, and potentially improving the data quality by removing the error-prone data-entry step. Research has not yet provided adequate evidence, however, to substantiate the claim of fewer errors for computerized interviews. Methodology We developed an Android-based illness explanatory interview for influenza vaccine acceptance and tested the instrument in a field study in Pune, India, for feasibility and acceptability. Error rates for tablet and paper were compared with reference to the voice recording of the interview as gold standard to assess discrepancies. We also examined the preference of interviewers for the classical paper-based or the electronic version of the interview and compared the costs of research with both data collection devices. Results In 95 interviews with household respondents, total error rates with paper and tablet devices were nearly the same (2.01% and 1.99% respectively). Most interviewers indicated no preference for a particular device; but those with a preference opted for tablets. The initial investment in tablet-based interviews was higher compared to paper, while the recurring costs per interview were lower with the use of tablets. Conclusion An Android-based tablet version of a complex interview was developed and successfully validated. Advantages were not compromised by increased errors, and field research assistants with a preference preferred the Android device. Use of tablets may be more costly than paper for small samples and less costly for large studies. PMID:25233212

  16. Robotic-Arm Assisted Total Knee Arthroplasty Demonstrated Greater Accuracy and Precision to Plan Compared with Manual Techniques.

    PubMed

    Hampp, Emily L; Chughtai, Morad; Scholl, Laura Y; Sodhi, Nipun; Bhowmik-Stoker, Manoshi; Jacofsky, David J; Mont, Michael A

    2018-05-01

    This study determined whether robotic-arm assisted total knee arthroplasty (RATKA) allows for more accurate and precise bone cuts and component position to plan compared with manual total knee arthroplasty (MTKA). Specifically, we assessed the following: (1) final bone cuts, (2) final component position, and (3) a potential learning curve for RATKA. On six cadaver specimens (12 knees), a MTKA and RATKA were performed on the left and right knees, respectively. Bone-cut and final-component positioning errors relative to preoperative plans were compared. Median errors and standard deviations (SDs) in the sagittal, coronal, and axial planes were compared. Median values of the absolute deviation from plan defined the accuracy to plan. SDs described the precision to plan. RATKA bone cuts were as or more accurate to plan, based on nominal median values, in 11 out of 12 measurements. RATKA bone cuts were more precise to plan in 8 out of 12 measurements (p ≤ 0.05). RATKA final component positions were as or more accurate to plan, based on median values, in five out of five measurements. RATKA final component positions were more precise to plan in four out of five measurements (p ≤ 0.05). Stacked error results from all cuts and implant positions for each specimen in procedural order showed that RATKA error was less than MTKA error. Although this study analyzed a small number of cadaver specimens, there were clear differences that separated the two groups. When compared with MTKA, RATKA demonstrated more accurate and precise bone cuts and implant positioning to plan.

  17. Effects of Random Shadings, Phasing Errors, and Element Failures on the Beam Patterns of Linear and Planar Arrays

    DTIC Science & Technology

    1980-03-14

    Fragment of the report's program listing (OCR-damaged); the recoverable content describes the program inputs: the number of elements, the probability of element failure (Q), the standard deviation of the relative error of the weights (Sigmar), the standard deviation of the phase errors (Sigmap), and the weight structures in the x and y coordinates.

  18. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Proctor, Timothy; Rudinger, Kenneth; Young, Kevin

    Randomized benchmarking (RB) is widely used to measure an error rate of a set of quantum gates, by performing random circuits that would do nothing if the gates were perfect. In the limit of no finite-sampling error, the exponential decay rate of the observable survival probabilities, versus circuit length, yields a single error metric r. For Clifford gates with arbitrary small errors described by process matrices, r was believed to reliably correspond to the mean, over all Clifford gates, of the average gate infidelity between the imperfect gates and their ideal counterparts. We show that this quantity is not a well-defined property of a physical gate set. It depends on the representations used for the imperfect and ideal gates, and the variant typically computed in the literature can differ from r by orders of magnitude. We present new theories of the RB decay that are accurate for all small errors describable by process matrices, and show that the RB decay curve is a simple exponential for all such errors. These theories allow explicit computation of the error rate that RB measures (r), but as far as we can tell it does not correspond to the infidelity of a physically allowed (completely positive) representation of the imperfect gates.
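
    The RB decay referred to here is conventionally extracted by fitting the survival probabilities to A p^m + B; a minimal sketch with synthetic data (the conversion r = (1 - p)/2 assumes a single qubit, d = 2):

    # Sketch: fit the standard RB exponential decay and read off the error rate.
    import numpy as np
    from scipy.optimize import curve_fit

    def rb_decay(m, A, B, p):
        # Survival probability after a length-m random sequence.
        return A * p**m + B

    rng = np.random.default_rng(8)
    m = np.arange(1, 200, 10)
    probs = rb_decay(m, 0.45, 0.5, 0.995) + rng.normal(0, 0.005, m.size)

    (A, B, p), _ = curve_fit(rb_decay, m, probs, p0=(0.5, 0.5, 0.99))
    print(f"fitted p = {p:.4f}, RB error rate r = {(1 - p) / 2:.2e}")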

  19. Error image aware content restoration

    NASA Astrophysics Data System (ADS)

    Choi, Sungwoo; Lee, Moonsik; Jung, Byunghee

    2015-12-01

    As the resolution of TV has significantly increased, content consumers have become increasingly sensitive to the subtlest defects in TV content. This rising standard of quality demanded by consumers has posed a new challenge in today's context, where the tape-based process has transitioned to the file-based process: the transition necessitated digitalizing old archives, a process which inevitably produces errors such as disordered pixel blocks, scattered white noise, or totally missing pixels. Unsurprisingly, detecting and fixing such errors requires a substantial amount of time and human labor to meet the standard demanded by today's consumers. In this paper, we introduce a novel, automated error restoration algorithm which can be applied to different types of classic errors by utilizing adjacent images while preserving the undamaged parts of an error image as much as possible. We tested our method on error images detected by our quality check system in the KBS (Korean Broadcasting System) video archive. We are also implementing the algorithm as a plugin for a well-known NLE (non-linear editing) system, which is a familiar tool for quality control agents.

  20. Analytical quality goals derived from the total deviation from patients' homeostatic set points, with a margin for analytical errors.

    PubMed

    Bolann, B J; Asberg, A

    2004-01-01

    The deviation of test results from patients' homeostatic set points in steady-state conditions may complicate interpretation of the results and the comparison of results with clinical decision limits. In this study the total deviation from the homeostatic set point is defined as the maximum absolute deviation for 95% of measurements, and we present analytical quality requirements that prevent analytical error from increasing this deviation to more than about 12% above the value caused by biology alone. These quality requirements are: 1) The stable systematic error should be approximately 0, and 2) a systematic error that will be detected by the control program with 90% probability, should not be larger than half the value of the combined analytical and intra-individual standard deviation. As a result, when the most common control rules are used, the analytical standard deviation may be up to 0.15 times the intra-individual standard deviation. Analytical improvements beyond these requirements have little impact on the interpretability of measurement results.

  1. Everyday Scale Errors

    ERIC Educational Resources Information Center

    Ware, Elizabeth A.; Uttal, David H.; DeLoache, Judy S.

    2010-01-01

    Young children occasionally make "scale errors"--they attempt to fit their bodies into extremely small objects or attempt to fit a larger object into another, tiny, object. For example, a child might try to sit in a dollhouse-sized chair or try to stuff a large doll into it. Scale error research was originally motivated by parents' and…

  2. 76 FR 44010 - Medicare Program; Hospice Wage Index for Fiscal Year 2012; Correction

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-07-22

    This document corrects technical errors that appeared in the notice of CMS ruling published in the Federal Register (76 FR 26731); the errors are identified and corrected in the Correction of Errors section. (Catalog of Federal Domestic Assistance Program No. 93.774, Medicare--Supplementary Medical Insurance Program.) Dated: July 15, 2011. Dawn L. Smalls...

  3. The Cut-Score Operating Function: A New Tool to Aid in Standard Setting

    ERIC Educational Resources Information Center

    Grabovsky, Irina; Wainer, Howard

    2017-01-01

    In this essay, we describe the construction and use of the Cut-Score Operating Function in aiding standard setting decisions. The Cut-Score Operating Function shows the relation between the cut-score chosen and the consequent error rate. It allows error rates to be defined by multiple loss functions and will show the behavior of each loss…

  4. Estimation of Standard Error of Regression Effects in Latent Regression Models Using Binder's Linearization. Research Report. ETS RR-07-09

    ERIC Educational Resources Information Center

    Li, Deping; Oranje, Andreas

    2007-01-01

    Two versions of a general method for approximating standard error of regression effect estimates within an IRT-based latent regression model are compared. The general method is based on Binder's (1983) approach, accounting for complex samples and finite populations by Taylor series linearization. In contrast, the current National Assessment of…

  5. The Measurement and Correction of the Periodic Error of the LX200-16 Telescope Driving System

    NASA Astrophysics Data System (ADS)

    Jeong, Jang Hae; Lee, Young Sam; Lee, Chung Uk

    2000-06-01

    We examined and corrected the periodic error of the LX200-16 telescope driving system of the Chungbuk National University Campus Observatory. Before correcting, the standard deviation of the periodic error in the east-west direction was 7.2 arcseconds. After correcting, we found that the periodic error was reduced to 1.2 arcseconds.

  6. Mimicking Aphasic Semantic Errors in Normal Speech Production: Evidence from a Novel Experimental Paradigm

    ERIC Educational Resources Information Center

    Hodgson, Catherine; Lambon Ralph, Matthew A.

    2008-01-01

    Semantic errors are commonly found in semantic dementia (SD) and some forms of stroke aphasia and provide insights into semantic processing and speech production. Low error rates are found in standard picture naming tasks in normal controls. In order to increase error rates and thus provide an experimental model of aphasic performance, this study…

  7. A partial least squares based spectrum normalization method for uncertainty reduction for laser-induced breakdown spectroscopy measurements

    NASA Astrophysics Data System (ADS)

    Li, Xiongwei; Wang, Zhe; Lui, Siu-Lung; Fu, Yangting; Li, Zheng; Liu, Jianming; Ni, Weidou

    2013-10-01

    A bottleneck for the wide commercial application of laser-induced breakdown spectroscopy (LIBS) technology is its relatively high measurement uncertainty. A partial least squares (PLS) based normalization method was proposed to improve pulse-to-pulse measurement precision for LIBS, based on our previous spectrum standardization method. The proposed model utilized multi-line spectral information of the measured element and characterized the signal fluctuations due to the variation of plasma characteristic parameters (plasma temperature, electron number density, and total number density) for signal uncertainty reduction. The model was validated by the application of copper concentration prediction in 29 brass alloy samples. The results demonstrated an improvement in both measurement precision and accuracy over the generally applied normalization as well as our previously proposed simplified spectrum standardization method. The average relative standard deviation (RSD), the average standard error (error bar), the coefficient of determination (R^2), the root-mean-square error of prediction (RMSEP), and the average value of the maximum relative error (MRE) were 1.80%, 0.23%, 0.992, 1.30%, and 5.23%, respectively, while those for the generally applied spectral area normalization were 3.72%, 0.71%, 0.973, 1.98%, and 14.92%, respectively.
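
    A minimal sketch of the normalization idea, learn the shot-to-shot fluctuation of a line intensity from plasma-state features with PLS and divide it out (synthetic stand-in features and a divide-out correction of my own construction, not the paper's exact model or the brass-alloy spectra):

    # Sketch: PLS-based removal of shot-to-shot intensity fluctuation.
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression

    rng = np.random.default_rng(9)
    n_shots = 300
    state = rng.normal(size=(n_shots, 6))   # stand-ins for plasma parameters
    drift = 1.0 + 0.1 * state[:, :3].sum(axis=1)      # shot-to-shot fluctuation
    intensity = 100.0 * drift + rng.normal(0, 2, n_shots)

    pls = PLSRegression(n_components=3).fit(state, intensity)
    predicted = pls.predict(state).ravel()            # modeled fluctuation
    corrected = intensity / predicted * intensity.mean()

    rsd = lambda s: 100.0 * s.std() / s.mean()
    print(f"RSD raw {rsd(intensity):.1f}% -> normalized {rsd(corrected):.1f}%")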

  8. Cost-effectiveness of the streamflow-gaging program in Wyoming

    USGS Publications Warehouse

    Druse, S.A.; Wahl, K.L.

    1988-01-01

    This report documents the results of a cost-effectiveness study of the streamflow-gaging program in Wyoming. Regression analysis or hydrologic flow-routing techniques were considered for 24 combinations of stations from a 139-station network operated in 1984 to investigate the suitability of these techniques for simulating streamflow records. Only one station was determined to have sufficient accuracy in the regression analysis to consider discontinuance of the gage. The evaluation of the gaging-station network, which included the use of the associated uncertainty in streamflow records, is limited to the nonwinter operation of the 47 stations operated by the Riverton Field Office of the U.S. Geological Survey. The current (1987) travel routes and measurement frequencies require a budget of $264,000 and result in an average standard error in streamflow records of 13.2%. Changes in routes and station visits using the same budget could optimally reduce the standard error by 1.6%. Budgets evaluated ranged from $235,000 to $400,000. A $235,000 budget increased the optimal average standard error per station from 11.6 to 15.5%, and a $400,000 budget could reduce it to 6.6%. For all budgets considered, lost record accounts for about 40% of the average standard error. (USGS)

  9. Absolute color scale for improved diagnostics with wavefront error mapping.

    PubMed

    Smolek, Michael K; Klyce, Stephen D

    2007-11-01

    Wavefront data are expressed in micrometers and referenced to the pupil plane, but current methods to map wavefront error lack standardization. Many use normalized or floating scales that may confuse the user by generating ambiguous, noisy, or varying information. An absolute scale that combines consistent clinical information with statistical relevance is needed for wavefront error mapping. The color contours should correspond better to current corneal topography standards to improve clinical interpretation. Retrospective analysis of wavefront error data. Historic ophthalmic medical records. Topographic modeling system topographical examinations of 120 corneas across 12 categories were used. Corneal wavefront error data in micrometers from each topography map were extracted at 8 Zernike polynomial orders and for 3 pupil diameters expressed in millimeters (3, 5, and 7 mm). Both total aberrations (orders 2 through 8) and higher-order aberrations (orders 3 through 8) were expressed in the form of frequency histograms to determine the working range of the scale across all categories. The standard deviation of the mean error of normal corneas determined the map contour resolution. Map colors were based on corneal topography color standards and on the ability to distinguish adjacent color contours through contrast. Higher-order and total wavefront error contour maps for different corneal conditions. An absolute color scale was produced that encompassed a range of ±6.5 μm and a contour interval of 0.5 μm. All aberrations in the categorical database were plotted with no loss of clinical information necessary for classification. In the few instances where mapped information was beyond the range of the scale, the type and severity of aberration remained legible. When wavefront data are expressed in micrometers, this absolute scale facilitates the determination of the severity of aberrations present compared with a floating scale, particularly for distinguishing normal from abnormal levels of wavefront error. The new color palette makes it easier to identify disorders. The corneal mapping method can be extended to mapping whole-eye wavefront errors. When refraction data are expressed in diopters, the previously published corneal topography scale is suggested.

  10. Integrating models that depend on variable data

    NASA Astrophysics Data System (ADS)

    Banks, A. T.; Hill, M. C.

    2016-12-01

    Models of human-Earth systems are often developed with the goal of predicting the behavior of one or more dependent variables from multiple independent variables, processes, and parameters. Often, dependent variable values range over many orders of magnitude, which complicates evaluation of the fit of the dependent variable values to observations. Many metrics and optimization methods have been proposed to address dependent variable variability, with little consensus being achieved. In this work, we evaluate two such methods: log transformation (based on the dependent variable being log-normally distributed with a constant variance) and error-based weighting (based on a multi-normal distribution with variances that tend to increase as the dependent variable value increases). Error-based weighting has the advantage of encouraging model users to carefully consider data errors, such as measurement and epistemic errors, while log transformations can be a black box for typical users. Placing the log transformation into the statistical perspective of error-based weighting has not formerly been considered, to the best of our knowledge. To make the evaluation as clear and reproducible as possible, we use multiple linear regression (MLR). Simulations are conducted with MATLAB. The example represents stream transport of nitrogen with up to eight independent variables. The single dependent variable in our example has values that range over 4 orders of magnitude. Results are applicable to any problem for which individual or multiple data types produce a large range of dependent variable values. For this problem, the log transformation produced good model fit, while some formulations of error-based weighting worked poorly. Results support previous suggestions that error-based weighting derived from a constant coefficient of variation overemphasizes low values and degrades model fit to high values. Applying larger weights to the high values is inconsistent with the log transformation. Greater consistency is obtained by imposing smaller (by up to a factor of 1/35) weights on the smaller dependent-variable values. From an error-based perspective, the small weights are consistent with large standard deviations. This work considers the consequences of these two common ways of addressing variable data.
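
    The two treatments compared here can be written down side by side (synthetic constant-coefficient-of-variation data and statsmodels, not the MATLAB nitrogen-transport example):

    # Sketch: log-transform OLS vs. error-based weighted least squares.
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(10)
    x = 10 ** rng.uniform(0, 4, 200)          # spans four orders of magnitude
    y = 3.0 * x * np.exp(0.2 * rng.normal(size=x.size))   # constant-CV noise

    # Treatment 1: log transformation, then ordinary least squares.
    log_fit = sm.OLS(np.log(y), sm.add_constant(np.log(x))).fit()

    # Treatment 2: raw space, error-based weights from a constant CV of 20%.
    wls_fit = sm.WLS(y, x[:, None], weights=1.0 / (0.2 * y) ** 2).fit()

    print("log-space slope (1.0 means y proportional to x):", log_fit.params[1])
    print("error-based WLS coefficient (true value 3.0):", wls_fit.params[0])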

  11. Prevalence of refractive error among preschool children in an urban population: the Baltimore Pediatric Eye Disease Study.

    PubMed

    Giordano, Lydia; Friedman, David S; Repka, Michael X; Katz, Joanne; Ibironke, Josephine; Hawes, Patricia; Tielsch, James M

    2009-04-01

    To determine the age-specific prevalence of refractive errors in white and African-American preschool children. The Baltimore Pediatric Eye Disease Study is a population-based evaluation of the prevalence of ocular disorders in children aged 6 to 71 months in Baltimore, Maryland. Among 4132 children identified, 3990 eligible children (97%) were enrolled and 2546 children (62%) were examined. Cycloplegic autorefraction was attempted in all children with the use of a Nikon Retinomax K-Plus 2 (Nikon Corporation, Tokyo, Japan). If a reliable autorefraction could not be obtained after 3 attempts, cycloplegic streak retinoscopy was performed. Mean spherical equivalent (SE) refractive error, astigmatism, and prevalence of higher refractive errors among African-American and white children. The mean SE of right eyes was +1.49 diopters (D) (standard deviation [SD] = 1.23) in white children and +0.71 D (SD = 1.35) in African-American children (mean difference of 0.78 D; 95% confidence interval [CI], 0.67-0.89). Mean SE refractive error did not decline with age in either group. The prevalence of myopia of 1.00 D or more in the eye with the lesser refractive error was 0.7% in white children and 5.5% in African-American children (relative risk [RR], 8.01; 95% CI, 3.70-17.35). The prevalence of hyperopia of +3 D or more in the eye with the lesser refractive error was 8.9% in white children and 4.4% in African-American children (RR, 0.49; 95% CI, 0.35-0.68). The prevalence of emmetropia (>-1.00 D to <+1.00 D) was 35.6% in white children and 58.0% in African-American children (RR, 1.64; 95% CI, 1.49-1.80). On the basis of published prescribing guidelines, 5.1% of the children would have benefited from spectacle correction. However, only 1.3% had been prescribed correction. Significant refractive errors are uncommon in this population of urban preschool children. There was no evidence for a myopic shift over this age range in this cross-sectional study. A small proportion of preschool children would likely benefit from refractive correction, but few have had this prescribed.

  12. A variational regularization of Abel transform for GPS radio occultation

    NASA Astrophysics Data System (ADS)

    Wee, Tae-Kwon

    2018-04-01

    In the Global Positioning System (GPS) radio occultation (RO) technique, the inverse Abel transform of the measured bending angle (Abel inversion, hereafter AI) is the standard means of deriving the refractivity. While concise and straightforward to apply, the AI accumulates and propagates the measurement error downward. The measurement error propagation is detrimental to the refractivity in lower altitudes. In particular, it builds up negative refractivity bias in the tropical lower troposphere. An alternative to AI is the numerical inversion of the forward Abel transform, which does not involve integrating the error-bearing measurement and thus precludes the error propagation. The variational regularization (VR) proposed in this study approximates the inversion of the forward Abel transform by an optimization problem in which the regularized solution describes the measurement as closely as possible within the measurement's considered accuracy. The optimization problem is then solved iteratively by means of the adjoint technique. VR is formulated with error covariance matrices, which permit a rigorous incorporation of prior information on measurement error characteristics and the solution's desired behavior into the regularization. VR holds the control variable in the measurement space to take advantage of the posterior height determination and to negate the measurement error due to the mismodeling of the refractional radius. The advantages of having the solution and the measurement in the same space are elaborated using a purposely corrupted synthetic sounding with a known true solution. The competency of VR relative to AI is validated with a large number of actual RO soundings. The comparison to nearby radiosonde observations shows that VR attains considerably smaller random and systematic errors compared to AI. A noteworthy finding is that in the heights and areas where the measurement bias is supposedly small, VR follows AI very closely in the mean refractivity, departing from the first guess. In the lowest few kilometers, where AI produces large negative refractivity bias, VR reduces the refractivity bias substantially with the aid of the background, which in this study is the operational forecasts of the European Centre for Medium-Range Weather Forecasts (ECMWF). It is concluded based on the results presented in this study that VR offers a definite advantage over AI in the quality of refractivity.
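
    Stripped of the RO-specific geometry, the optimization described above has the generic form of a regularized linear inverse problem with background and measurement error covariances. The toy sketch below is not the paper's adjoint-based, measurement-space implementation; it only illustrates the cost function J(x) = (x - x_b)' B^-1 (x - x_b) + (Fx - y)' R^-1 (Fx - y) and its closed-form minimizer, with a small linear operator F standing in for the discretized forward Abel transform.

      import numpy as np

      rng = np.random.default_rng(2)
      n = 80                                  # vertical levels (illustrative)

      # A smoothing lower-triangular operator standing in for a discretized
      # forward Abel transform (purely schematic, not the RO geometry).
      F = np.tril(np.ones((n, n))) / n

      x_true = np.exp(-np.linspace(0, 3, n))  # "refractivity"-like profile
      y = F @ x_true + 0.01 * rng.standard_normal(n)  # noisy "bending angle"

      x_b = np.full(n, x_true.mean())         # crude background (first guess)
      B = 0.1 ** 2 * np.eye(n)                # background error covariance
      R = 0.01 ** 2 * np.eye(n)               # measurement error covariance

      # Minimizer of J(x) = (x-x_b)' B^-1 (x-x_b) + (Fx-y)' R^-1 (Fx-y).
      Binv, Rinv = np.linalg.inv(B), np.linalg.inv(R)
      A = Binv + F.T @ Rinv @ F
      x_hat = x_b + np.linalg.solve(A, F.T @ Rinv @ (y - F @ x_b))

    The paper solves the corresponding problem iteratively with the adjoint technique and keeps the control variable in measurement space; the closed form above is practical only for small linear toys like this one.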

  13. Comparative study of standard space and real space analysis of quantitative MR brain data.

    PubMed

    Aribisala, Benjamin S; He, Jiabao; Blamire, Andrew M

    2011-06-01

    To compare the robustness of region of interest (ROI) analysis of magnetic resonance imaging (MRI) brain data in real space with analysis in standard space and to test the hypothesis that standard space image analysis introduces more partial volume effect errors compared to analysis of the same dataset in real space. Twenty healthy adults with no history or evidence of neurological diseases were recruited; high-resolution T(1)-weighted, quantitative T(1), and B(0) field-map measurements were collected. Algorithms were implemented to perform analysis in real and standard space and used to apply a simple standard ROI template to quantitative T(1) datasets. Regional relaxation values and histograms for both gray and white matter tissues classes were then extracted and compared. Regional mean T(1) values for both gray and white matter were significantly lower using real space compared to standard space analysis. Additionally, regional T(1) histograms were more compact in real space, with smaller right-sided tails indicating lower partial volume errors compared to standard space analysis. Standard space analysis of quantitative MRI brain data introduces more partial volume effect errors biasing the analysis of quantitative data compared to analysis of the same dataset in real space. Copyright © 2011 Wiley-Liss, Inc.

  14. Comparison of bias-corrected covariance estimators for MMRM analysis in longitudinal data with dropouts.

    PubMed

    Gosho, Masahiko; Hirakawa, Akihiro; Noma, Hisashi; Maruo, Kazushi; Sato, Yasunori

    2017-10-01

    In longitudinal clinical trials, some subjects will drop out before completing the trial, so their measurements towards the end of the trial are not obtained. Mixed-effects models for repeated measures (MMRM) analysis with "unstructured" (UN) covariance structure are increasingly common as a primary analysis for group comparisons in these trials. Furthermore, model-based covariance estimators have been routinely used for testing the group difference and estimating confidence intervals of the difference in the MMRM analysis using the UN covariance. However, using the MMRM analysis with the UN covariance could lead to convergence problems for numerical optimization, especially in trials with a small-sample size. Although the so-called sandwich covariance estimator is robust to misspecification of the covariance structure, its performance deteriorates in settings with small-sample size. We investigated the performance of the sandwich covariance estimator and covariance estimators adjusted for small-sample bias proposed by Kauermann and Carroll (J Am Stat Assoc 2001; 96: 1387-1396) and Mancl and DeRouen (Biometrics 2001; 57: 126-134) fitting simpler covariance structures through a simulation study. In terms of the type 1 error rate and coverage probability of confidence intervals, Mancl and DeRouen's covariance estimator with compound symmetry, first-order autoregressive (AR(1)), heterogeneous AR(1), and antedependence structures performed better than the original sandwich estimator and Kauermann and Carroll's estimator with these structures in the scenarios where the variance increased across visits. The performance based on Mancl and DeRouen's estimator with these structures was nearly equivalent to that based on the Kenward-Roger method for adjusting the standard errors and degrees of freedom with the UN structure. The model-based covariance estimator with the UN structure under unadjustment of the degrees of freedom, which is frequently used in applications, resulted in substantial inflation of the type 1 error rate. We recommend the use of Mancl and DeRouen's estimator in MMRM analysis if the number of subjects completing is (n + 5) or less, where n is the number of planned visits. Otherwise, the use of Kenward and Roger's method with UN structure should be the best way.
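
    For readers who want the mechanics, a simplified sketch of the Mancl-DeRouen correction for a clustered linear model fit by generalized least squares is given below; it inflates each cluster's residuals by (I - H_i)^-1 before forming the sandwich "meat". This is a bare-bones illustration under an assumed known working covariance V_i per cluster, not a full MMRM/REML implementation.

      import numpy as np

      def md_sandwich(X_list, y_list, V_list):
          """Mancl-DeRouen bias-corrected sandwich covariance for a linear
          model fit by GLS with working covariances V_i (one design matrix
          X_i, response y_i, and covariance V_i per cluster)."""
          bread = sum(X.T @ np.linalg.solve(V, X)
                      for X, V in zip(X_list, V_list))
          bread_inv = np.linalg.inv(bread)
          beta = bread_inv @ sum(X.T @ np.linalg.solve(V, y)
                                 for X, y, V in zip(X_list, y_list, V_list))
          meat = np.zeros_like(bread)
          for X, y, V in zip(X_list, y_list, V_list):
              Vinv = np.linalg.inv(V)
              H = X @ bread_inv @ X.T @ Vinv          # cluster leverage
              # Residuals inflated by (I - H)^-1: the MD small-sample fix.
              r = np.linalg.solve(np.eye(len(y)) - H, y - X @ beta)
              u = X.T @ Vinv @ r
              meat += np.outer(u, u)
          return beta, bread_inv @ meat @ bread_inv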

  15. Observing System Simulations for Small Satellite Formations Estimating Bidirectional Reflectance

    NASA Technical Reports Server (NTRS)

    Nag, Sreeja; Gatebe, Charles K.; de Weck, Olivier

    2015-01-01

    The bidirectional reflectance distribution function (BRDF) gives the reflectance of a target as a function of illumination geometry and viewing geometry, and hence carries information about the anisotropy of the surface. BRDF is needed in remote sensing for the correction of view and illumination angle effects (for example, in image standardization and mosaicing), for deriving albedo, for land cover classification, for cloud detection, for atmospheric correction, and other applications. However, current spaceborne instruments provide sparse angular sampling of BRDF, and airborne instruments are limited in spatial and temporal coverage. To fill the gaps in angular coverage within spatial, spectral, and temporal requirements, we propose a new measurement technique: use of small satellites in formation flight, each satellite with a VNIR (visible and near infrared) imaging spectrometer, to make multi-spectral, near-simultaneous measurements of every ground spot in the swath at multiple angles. This paper describes an observing system simulation experiment (OSSE) to evaluate the proposed concept and select the optimal formation architecture that minimizes BRDF uncertainties. The variables of the OSSE are identified: number of satellites; measurement spread in the view zenith and relative azimuth with respect to the solar plane; solar zenith angle; BRDF model; and wavelength of reflection. Analyzing the sensitivity of BRDF estimation errors to these variables allows simplification of the OSSE, enabling its use to rapidly evaluate formation architectures. A 6-satellite formation is shown to produce lower BRDF estimation errors, purely in terms of angular sampling as evaluated by the OSSE, than a single spacecraft with 9 forward-aft sensors. We demonstrate the ability to use OSSEs to design small satellite formations as complements to flagship mission data. The formations can fill angular sampling gaps and enable better BRDF products than currently possible.

  16. New, small, fast acting blood glucose meters--an analytical laboratory evaluation.

    PubMed

    Weitgasser, Raimund; Hofmann, Manuela; Gappmayer, Brigitta; Garstenauer, Christa

    2007-09-22

    Patients and medical personnel are eager to use blood glucose meters that are easy to handle and fast acting. We questioned whether the accuracy and precision of these new, small, and lightweight devices would meet analytical laboratory standards and tested four meters under the above-mentioned conditions. Approximately 300 capillary blood samples were collected and tested using two devices of each brand and two different types of glucose test strips. Blood from the same samples was used for comparison. Results were evaluated using maximum deviations of 5% and 10% from the comparative method, error grid analysis, the overall deviation of the devices, linear regression analysis, and the CVs for measurement in series. Of all 1196 measurements, deviations of less than 5% and 10%, respectively, from the reference method were found for the FreeStyle (FS) meter in 69.5% and 96% of cases, for the Glucocard X Meter (GX) in 44% and 75%, for the One Touch Ultra (OT) in 29% and 60%, and for the Wellion True Track (WT) in 28.5% and 58%. The error grid analysis gave 99.7% for FS, 99% for GX, 98% for OT, and 97% for WT in zone A. The remainder of the values lay within zone B. Linear regression analysis mirrored these results. CVs for measurement in series showed higher deviations for OT and WT compared to FS and GX. The four new, small, and fast acting glucose meters fulfil clinically relevant analytical laboratory requirements, making them appropriate for use by medical personnel. However, with regard to the tight and restrictive limits of the ADA recommendations, the devices are still in need of improvement. This should be taken into account when the devices are used by primarily inexperienced persons and is relevant for further industrial development of such devices.

  17. Observing system simulations for small satellite formations estimating bidirectional reflectance

    NASA Astrophysics Data System (ADS)

    Nag, Sreeja; Gatebe, Charles K.; Weck, Olivier de

    2015-12-01

    The bidirectional reflectance distribution function (BRDF) gives the reflectance of a target as a function of illumination geometry and viewing geometry, and hence carries information about the anisotropy of the surface. BRDF is needed in remote sensing for the correction of view and illumination angle effects (for example, in image standardization and mosaicing), for deriving albedo, for land cover classification, for cloud detection, for atmospheric correction, and other applications. However, current spaceborne instruments provide sparse angular sampling of BRDF, and airborne instruments are limited in spatial and temporal coverage. To fill the gaps in angular coverage within spatial, spectral, and temporal requirements, we propose a new measurement technique: use of small satellites in formation flight, each satellite with a VNIR (visible and near infrared) imaging spectrometer, to make multi-spectral, near-simultaneous measurements of every ground spot in the swath at multiple angles. This paper describes an observing system simulation experiment (OSSE) to evaluate the proposed concept and select the optimal formation architecture that minimizes BRDF uncertainties. The variables of the OSSE are identified: number of satellites; measurement spread in the view zenith and relative azimuth with respect to the solar plane; solar zenith angle; BRDF model; and wavelength of reflection. Analyzing the sensitivity of BRDF estimation errors to these variables allows simplification of the OSSE, enabling its use to rapidly evaluate formation architectures. A 6-satellite formation is shown to produce lower BRDF estimation errors, purely in terms of angular sampling as evaluated by the OSSE, than a single spacecraft with 9 forward-aft sensors. We demonstrate the ability to use OSSEs to design small satellite formations as complements to flagship mission data. The formations can fill angular sampling gaps and enable better BRDF products than currently possible.

  18. Drop size distribution comparisons between Parsivel and 2-D video disdrometers

    NASA Astrophysics Data System (ADS)

    Thurai, M.; Petersen, W. A.; Tokay, A.; Schultz, C.; Gatlin, P.

    2011-05-01

    Measurements from a 2-D video disdrometer (2DVD) have been used for drop size distribution (DSD) comparisons with co-located Parsivel measurements in Huntsville, Alabama. The comparisons were made in terms of the mass-weighted mean diameter, Dm, the standard deviation of the mass spectrum, σm, and the rainfall rate, R, all based on 1-min DSDs from the two instruments. Time series comparisons show close agreement in all three parameters for cases where R was less than 20 mm h-1. In four cases, discrepancies in all three parameters were seen for "heavy" events, with the Parsivel showing higher Dm, σm and R when R reached high values (particularly above 30 mm h-1). Possible causes for the discrepancies include the presence of a small percentage of not fully melted hydrometeors, with higher than expected fall velocities and with very different axis ratios compared with rain, indicating small hail, ice pellets, or graupel. We also present Parsivel-to-Parsivel comparisons as well as comparisons between two 2DVD instruments, namely a low-profile unit and the latest-generation "compact" unit, which was installed at the same site in November 2009. These comparisons are included to assess the variability between instruments of the same type. Correlation coefficients and fractional standard errors are compared.
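
    The three comparison parameters have standard moment definitions that can be computed directly from a binned 1-min DSD. The sketch below assumes bin centers D (mm), bin widths dD (mm), and concentrations N (m^-3 mm^-1), and uses the Atlas et al. (1973) fall-speed approximation for the rain rate; instrument-specific processing is not reproduced.

      import numpy as np

      def dsd_parameters(D, dD, N):
          """Dm, sigma_m, and R from a binned drop size distribution.
          D: bin centers [mm]; dD: bin widths [mm]; N: concentration
          [m^-3 mm^-1]. Standard moment definitions; the fall-speed
          relation is the Atlas et al. (1973) approximation."""
          m3 = np.sum(N * D ** 3 * dD)
          Dm = np.sum(N * D ** 4 * dD) / m3                 # mass-weighted mean
          sigma_m = np.sqrt(np.sum((D - Dm) ** 2 * N * D ** 3 * dD) / m3)
          v = 9.65 - 10.3 * np.exp(-0.6 * D)                # fall speed [m/s]
          R = 6.0e-4 * np.pi * np.sum(v * D ** 3 * N * dD)  # rain rate [mm/h]
          return Dm, sigma_m, R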

  19. Development and validation of a sensitive HPLC method for the quantification of HI-6 in guinea pig plasma and evaluated in domestic swine.

    PubMed

    Bohnert, Sara; Vair, Cory; Mikler, John

    2010-05-15

    A rapid and small volume assay to quantify HI-6 in plasma was developed to further the development and licensing of an intravenous formulation of HI-6. The objective of this method was to develop a sensitive and rapid assay that clearly resolved HI-6 and an internal standard in saline and plasma matrices. A fully validated method using ion-pair HPLC and 2-PAM as the internal standard fulfilled these requirements. Small plasma samples of 35 microL were extracted using acidification, filtration and neutralization. Linearity was shown for over 4 microg/mL to 1mg/mL with accuracy and precision within 6% relative error at the lower limit of detection. This method was utilized in the pharmacokinetic analysis HI-6 dichloride (2Cl) and HI-6 dimethane sulfonate (DMS) in anaesthetized guinea pigs and domestic swine following an intravenous bolus administration. From the resultant pharmacokinetic parameters a target plasma concentration of 100 microM was established and maintained in guinea pigs receiving an intravenous infusion. This validated method allows for the analysis of low volume samples, increased sample numbers and is applicable to the determination of pharmacokinetic profiles and parameters. Copyright (c) 2010. Published by Elsevier B.V.

  20. Effectiveness of Toyota process redesign in reducing thyroid gland fine-needle aspiration error.

    PubMed

    Raab, Stephen S; Grzybicki, Dana Marie; Sudilovsky, Daniel; Balassanian, Ronald; Janosky, Janine E; Vrbin, Colleen M

    2006-10-01

    Our objective was to determine whether the Toyota Production System process redesign resulted in diagnostic error reduction for patients who underwent cytologic evaluation of thyroid nodules. In this longitudinal, nonconcurrent cohort study, we compared the diagnostic error frequency of a thyroid aspiration service before and after implementation of error reduction initiatives consisting of adoption of a standardized diagnostic terminology scheme and an immediate interpretation service. A total of 2,424 patients underwent aspiration. Following terminology standardization, the false-negative rate decreased from 41.8% to 19.1% (P = .006), the specimen nondiagnostic rate increased from 5.8% to 19.8% (P < .001), and the sensitivity increased from 70.2% to 90.6% (P < .001). Cases with an immediate interpretation had a lower noninterpretable specimen rate than those without immediate interpretation (P < .001). Toyota process change led to significantly fewer diagnostic errors for patients who underwent thyroid fine-needle aspiration.

  1. Verification of an ensemble prediction system for storm surge forecast in the Adriatic Sea

    NASA Astrophysics Data System (ADS)

    Mel, Riccardo; Lionello, Piero

    2014-12-01

    In the Adriatic Sea, storm surges present a significant threat to Venice and to the flat coastal areas of the northern coast of the basin. Sea level forecast is of paramount importance for the management of daily activities and for operating the movable barriers that are presently being built for the protection of the city. In this paper, an EPS (ensemble prediction system) for operational forecasting of storm surge in the northern Adriatic Sea is presented and applied to a 3-month-long period (October-December 2010). The sea level EPS is based on the HYPSE (hydrostatic Padua Sea elevation) model, which is a standard single-layer nonlinear shallow water model, whose forcings (mean sea level pressure and surface wind fields) are provided by the ensemble members of the ECMWF (European Center for Medium-Range Weather Forecasts) EPS. Results are verified against observations at five tide gauges located along the Croatian and Italian coasts of the Adriatic Sea. Forecast uncertainty increases with the predicted value of the storm surge and with the forecast lead time. The EMF (ensemble mean forecast) provided by the EPS has a rms (root mean square) error lower than the DF (deterministic forecast), especially for short (up to 3 days) lead times. Uncertainty for short lead times of the forecast and for small storm surges is mainly caused by uncertainty of the initial condition of the hydrodynamical model. Uncertainty for large lead times and large storm surges is mainly caused by uncertainty in the meteorological forcings. The EPS spread increases with the rms error of the forecast. For large lead times the EPS spread and the forecast error substantially coincide. However, the EPS spread in this study, which does not account for uncertainty in the initial condition, underestimates the error during the early part of the forecast and for small storm surge values. On the contrary, it overestimates the rms error for large surge values. The PF (probability forecast) of the EPS has a clear skill in predicting the actual probability distribution of sea level, and it outperforms simple "dressed" PF methods. A probability estimate based on the single DF is shown to be inadequate. However, a PF obtained with a prescribed Gaussian distribution and centered on the DF value performs very similarly to the EPS-based PF.
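
    The verification quantities used above (ensemble-mean RMSE, deterministic RMSE, and ensemble spread) reduce to a few lines given arrays of forecasts and observations. The sketch below assumes a member-by-time forecast array; the array names are ours, not the study's.

      import numpy as np

      def verify_eps(ens, det, obs):
          """ens: (n_members, n_times) sea-level forecasts; det and obs:
          (n_times,). Returns the RMSE of the ensemble mean forecast (EMF),
          the RMSE of the deterministic forecast (DF), and the ensemble
          spread (mean of the per-time ensemble standard deviations)."""
          emf = ens.mean(axis=0)
          rmse_emf = np.sqrt(np.mean((emf - obs) ** 2))
          rmse_det = np.sqrt(np.mean((det - obs) ** 2))
          spread = np.mean(ens.std(axis=0, ddof=1))
          return rmse_emf, rmse_det, spread

    A well-calibrated EPS shows spread approximately equal to the RMSE of the ensemble mean, which is the coincidence reported above for long lead times.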

  2. Ensemble-type numerical uncertainty information from single model integrations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rauser, Florian, E-mail: florian.rauser@mpimet.mpg.de; Marotzke, Jochem; Korn, Peter

    2015-07-01

    We suggest an algorithm that quantifies the discretization error of time-dependent physical quantities of interest (goals) for numerical models of geophysical fluid dynamics. The goal discretization error is estimated using a sum of weighted local discretization errors. The key feature of our algorithm is that these local discretization errors are interpreted as realizations of a random process. The random process is determined by the model and the flow state. From a class of local error random processes we select a suitable specific random process by integrating the model over a short time interval at different resolutions. The weights of themore » influences of the local discretization errors on the goal are modeled as goal sensitivities, which are calculated via automatic differentiation. The integration of the weighted realizations of local error random processes yields a posterior ensemble of goal approximations from a single run of the numerical model. From the posterior ensemble we derive the uncertainty information of the goal discretization error. This algorithm bypasses the requirement of detailed knowledge about the models discretization to generate numerical error estimates. The algorithm is evaluated for the spherical shallow-water equations. For two standard test cases we successfully estimate the error of regional potential energy, track its evolution, and compare it to standard ensemble techniques. The posterior ensemble shares linear-error-growth properties with ensembles of multiple model integrations when comparably perturbed. The posterior ensemble numerical error estimates are of comparable size as those of a stochastic physics ensemble.« less

  3. Kappa statistic for clustered dichotomous responses from physicians and patients.

    PubMed

    Kang, Chaeryon; Qaqish, Bahjat; Monaco, Jane; Sheridan, Stacey L; Cai, Jianwen

    2013-09-20

    The bootstrap method for estimating the standard error of the kappa statistic in the presence of clustered data is evaluated. Such data arise, for example, in assessing agreement between physicians and their patients regarding their understanding of the physician-patient interaction and discussions. We propose a computationally efficient procedure for generating correlated dichotomous responses for physicians and assigned patients for simulation studies. The simulation results demonstrate that, with at least a moderately large number of clusters, the proposed bootstrap method produces a better estimate of the standard error and better coverage performance than the asymptotic standard error estimate that ignores dependence among patients within physicians. We present an example of an application to a coronary heart disease prevention study. Copyright © 2013 John Wiley & Sons, Ltd.
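
    A cluster bootstrap for this setting resamples physicians (clusters) with replacement, keeping each physician's patients together, and recomputes kappa on each resample. The sketch below assumes dichotomous (0/1) ratings and ordinary Cohen's kappa; the data structures are illustrative, not the authors' implementation.

      import numpy as np

      def cohen_kappa(a, b):
          """Cohen's kappa for two dichotomous (0/1) rating vectors."""
          po = np.mean(a == b)
          pe = np.mean(a) * np.mean(b) + (1 - np.mean(a)) * (1 - np.mean(b))
          return (po - pe) / (1 - pe)

      def cluster_bootstrap_se(ratings, n_boot=2000, seed=0):
          """ratings: list of (physician_answers, patient_answers) pairs,
          one entry per physician (cluster). Resamples clusters with
          replacement and returns the bootstrap SE of kappa."""
          rng = np.random.default_rng(seed)
          stats = []
          for _ in range(n_boot):
              idx = rng.integers(0, len(ratings), len(ratings))
              a = np.concatenate([ratings[i][0] for i in idx])
              b = np.concatenate([ratings[i][1] for i in idx])
              stats.append(cohen_kappa(a, b))
          return np.std(stats, ddof=1)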

  4. Multicollinearity and Regression Analysis

    NASA Astrophysics Data System (ADS)

    Daoud, Jamal I.

    2017-12-01

    In regression analysis, correlation between the response and the predictor(s) is expected, but correlation among the predictors themselves is undesirable. The number of predictors included in the regression model depends on many factors, among them historical data and experience. In the end, the selection of the most important predictors is a judgment left to the researcher. Multicollinearity is a phenomenon in which two or more predictors are correlated; when this happens, the standard errors of the coefficients increase [8]. Increased standard errors mean that the coefficients for some or all independent variables may not be found to be significantly different from zero. In other words, by overinflating the standard errors, multicollinearity makes some variables statistically insignificant when they should be significant. In this paper we focus on multicollinearity, its causes, and its consequences for the reliability of the regression model.
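
    The inflation is easy to demonstrate numerically: below, the same regression is fit with uncorrelated and with highly correlated predictors, and the standard error of a coefficient is compared with the variance inflation factor VIF = 1/(1 - R_j^2). A minimal sketch with simulated data; the population R_j^2 is approximated here by rho^2.

      import numpy as np

      rng = np.random.default_rng(3)
      n = 500
      x1 = rng.standard_normal(n)

      for rho in (0.0, 0.95):                    # uncorrelated vs collinear
          x2 = rho * x1 + np.sqrt(1 - rho ** 2) * rng.standard_normal(n)
          X = np.column_stack([np.ones(n), x1, x2])
          y = 1.0 + 2.0 * x1 + 2.0 * x2 + rng.standard_normal(n)
          beta, *_ = np.linalg.lstsq(X, y, rcond=None)
          sigma2 = np.sum((y - X @ beta) ** 2) / (n - X.shape[1])
          se = np.sqrt(np.diag(sigma2 * np.linalg.inv(X.T @ X)))
          vif = 1.0 / (1.0 - rho ** 2)           # population approximation
          print(f"rho={rho}: se(b1)={se[1]:.3f}, approximate VIF={vif:.1f}")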

  5. Effects of Random Circuit Fabrication Errors on Small Signal Gain and on Output Phase In a Traveling Wave Tube

    NASA Astrophysics Data System (ADS)

    Rittersdorf, I. M.; Antonsen, T. M., Jr.; Chernin, D.; Lau, Y. Y.

    2011-10-01

    Random fabrication errors may have detrimental effects on the performance of traveling-wave tubes (TWTs) of all types. A new scaling law for the modification in the average small signal gain and in the output phase is derived from the third order ordinary differential equation that governs the forward wave interaction in a TWT in the presence of random error that is distributed along the axis of the tube. Analytical results compare favorably with numerical results for both gain and phase modifications resulting from random error in the phase velocity of the slow wave circuit. Results on the effect of the reverse-propagating circuit mode will be reported. This work was supported by AFOSR, ONR, L-3 Communications Electron Devices, and Northrop Grumman Corporation.

  6. Measurement-free implementations of small-scale surface codes for quantum-dot qubits

    NASA Astrophysics Data System (ADS)

    Ercan, H. Ekmel; Ghosh, Joydip; Crow, Daniel; Premakumar, Vickram N.; Joynt, Robert; Friesen, Mark; Coppersmith, S. N.

    2018-01-01

    The performance of quantum-error-correction schemes depends sensitively on the physical realizations of the qubits and the implementations of various operations. For example, in quantum-dot spin qubits, readout is typically much slower than gate operations, and conventional surface-code implementations that rely heavily on syndrome measurements could therefore be challenging. However, fast and accurate reset of quantum-dot qubits, without readout, can be achieved via tunneling to a reservoir. Here we propose small-scale surface-code implementations for which syndrome measurements are replaced by a combination of Toffoli gates and qubit reset. For quantum-dot qubits, this enables much faster error correction than measurement-based schemes, but requires additional ancilla qubits and non-nearest-neighbor interactions. We have performed numerical simulations of two different coding schemes, obtaining error thresholds on the order of 10^-2 for a one-dimensional architecture that only corrects bit-flip errors and 10^-4 for a two-dimensional architecture that corrects bit- and phase-flip errors.
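
    As a point of reference for such thresholds, the sketch below Monte Carlo-simulates the simplest relevant object: a distance-3 bit-flip repetition code with ideal majority-vote correction, whose logical error rate is analytically 3p^2 - 2p^3. It is not the authors' Toffoli-plus-reset scheme, only an illustration of how a code suppresses errors below threshold.

      import numpy as np

      def logical_error_rate(p, n_trials=200_000, seed=0):
          """Monte Carlo estimate of the logical error rate of a 3-qubit
          bit-flip repetition code with perfect majority-vote correction.
          Analytically this equals 3p^2 - 2p^3."""
          rng = np.random.default_rng(seed)
          flips = rng.random((n_trials, 3)) < p      # independent X errors
          logical_flip = flips.sum(axis=1) >= 2      # majority vote fails
          return logical_flip.mean()

      for p in (0.01, 0.05, 0.1):
          print(p, logical_error_rate(p), 3 * p**2 - 2 * p**3)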

  7. Effect of correlated observation error on parameters, predictions, and uncertainty

    USGS Publications Warehouse

    Tiedeman, Claire; Green, Christopher T.

    2013-01-01

    Correlations among observation errors are typically omitted when calculating observation weights for model calibration by inverse methods. We explore the effects of omitting these correlations on estimates of parameters, predictions, and uncertainties. First, we develop a new analytical expression for the difference in parameter variance estimated with and without error correlations for a simple one-parameter two-observation inverse model. Results indicate that omitting error correlations from both the weight matrix and the variance calculation can either increase or decrease the parameter variance, depending on the values of error correlation (ρ) and the ratio of dimensionless scaled sensitivities (rdss). For small ρ, the difference in variance is always small, but for large ρ, the difference varies widely depending on the sign and magnitude of rdss. Next, we consider a groundwater reactive transport model of denitrification with four parameters and correlated geochemical observation errors that are computed by an error-propagation approach that is new for hydrogeologic studies. We compare parameter estimates, predictions, and uncertainties obtained with and without the error correlations. Omitting the correlations modestly to substantially changes parameter estimates, and causes both increases and decreases of parameter variances, consistent with the analytical expression. Differences in predictions for the models calibrated with and without error correlations can be greater than parameter differences when both are considered relative to their respective confidence intervals. These results indicate that including observation error correlations in weighting for nonlinear regression can have important effects on parameter estimates, predictions, and their respective uncertainties.
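
    The one-parameter, two-observation case can be explored numerically. The sketch below compares the GLS parameter variance computed with the full error covariance against the variance of the estimator that omits the error correlation from the weights; it is our own minimal construction and does not reproduce the paper's analytical expression, though it exhibits the same dependence on ρ and on the sensitivity ratio.

      import numpy as np

      def param_variances(s1, s2, sigma=1.0, rho=0.8):
          """One-parameter, two-observation linear model y = s*theta + e.
          Returns the parameter variance for GLS with the full error
          covariance, and for weighting that omits the error correlation."""
          s = np.array([s1, s2])
          Sigma = sigma**2 * np.array([[1.0, rho], [rho, 1.0]])
          v_full = 1.0 / (s @ np.linalg.solve(Sigma, s))
          W = np.eye(2) / sigma**2               # correlation omitted
          a = s @ W @ s
          v_diag = (s @ W @ Sigma @ W @ s) / a**2
          return v_full, v_diag

      print(param_variances(1.0, 1.0))    # equal sensitivities
      print(param_variances(1.0, 2.0))    # unequal sensitivities
      print(param_variances(1.0, -2.0))   # opposite-signed sensitivities

    With equal sensitivities the two estimators coincide, so the difference vanishes; with unequal or opposite-signed sensitivities the difference can be substantial, mirroring the dependence on the sensitivity ratio described above.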

  8. Trajectory-based visual localization in underwater surveying missions.

    PubMed

    Burguera, Antoni; Bonin-Font, Francisco; Oliver, Gabriel

    2015-01-14

    We present a new vision-based localization system applied to an autonomous underwater vehicle (AUV) with limited sensing and computation capabilities. The traditional EKF-SLAM approaches are usually expensive in terms of execution time; the approach presented in this paper strengthens this method by adopting a trajectory-based scheme that reduces the computational requirements. The pose of the vehicle is estimated using an extended Kalman filter (EKF), which predicts the vehicle motion by means of a visual odometer and corrects these predictions using the data associations (loop closures) between the current frame and the previous ones. One of the most important steps in this procedure is the image registration method, as it reinforces the data association and, thus, makes it possible to close loops reliably. Since the use of standard EKFs entails linearization errors that can distort the vehicle pose estimates, the approach has also been tested using an iterated extended Kalman filter (IEKF). Experiments have been conducted using a real underwater vehicle in controlled scenarios and in shallow sea waters, showing an excellent performance with very small errors, both in the vehicle pose and in the overall trajectory estimates.

  9. Radio structure effects on the optical and radio representations of the ICRF

    NASA Astrophysics Data System (ADS)

    Andrei, A. H.; da Silva Neto, D. N.; Assafin, M.; Vieira Martins, R.

    Silva Neto et al. (2002) show that when the ICRF Ext.1 sources' standard radio positions (Ma et al. 1998) are compared against their optical counterpart positions (Zacharias et al. 1999, Monet et al. 1998), a systematic pattern appears, which depends on the radio structure index (Fey and Charlot, 2000). The optical-to-radio offsets produce a distribution suggesting that the coincidence of the optical and radio centroids is worse for the radio-extended than for the radio-compact sources. On average, the offset between the optical and radio centroids is found to be 7.9±1.1 mas smaller for the compact than for the extended sources. Such an effect is reasonably large, and certainly much too large to be due to errors in the VLBI radio positions. On the other hand, it is too small to be attributed to the errors in the optical positions, which moreover should be independent of the radio structure. Thus, other than a true pattern of centroid non-coincidence, the only remaining explanation is a chance result. This paper summarizes the several statistical tests used to discard the chance explanation.

  10. Multicategory nets of single-layer perceptrons: complexity and sample-size issues.

    PubMed

    Raudys, Sarunas; Kybartas, Rimantas; Zavadskas, Edmundas Kazimieras

    2010-05-01

    The standard cost function of multicategory single-layer perceptrons (SLPs) does not minimize the classification error rate. In order to reduce classification error, it is necessary to: 1) reject the traditional cost function, 2) obtain near-optimal pairwise linear classifiers by specially organized SLP training and optimal stopping, and 3) fuse their decisions properly. To obtain better classification in unbalanced training set situations, we introduce an unbalance-correcting term. It was found that fusion based on the Kullback-Leibler (K-L) distance and the Wu-Lin-Weng (WLW) method result in approximately the same performance in situations where sample sizes are relatively small. This observation is explained by the theoretically known fact that excessive minimization of inexact criteria becomes harmful at times. Comprehensive comparative investigations of six real-world pattern recognition (PR) problems demonstrated that employment of SLP-based pairwise classifiers is comparable to, and as often as not outperforms, the linear support vector (SV) classifiers in moderate-dimensional situations. The colored noise injection used to design pseudovalidation sets proves to be a powerful tool for facilitating finite sample problems in moderate-dimensional PR tasks.

  11. The Atacama Cosmology Telescope: temperature and gravitational lensing power spectrum measurements from three seasons of data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Das, Sudeep; Louis, Thibaut; Calabrese, Erminia

    2014-04-01

    We present the temperature power spectra of the cosmic microwave background (CMB) derived from the three seasons of data from the Atacama Cosmology Telescope (ACT) at 148 GHz and 218 GHz, as well as the cross-frequency spectrum between the two channels. We detect and correct for contamination due to the Galactic cirrus in our equatorial maps. We present the results of a number of tests for possible systematic error and conclude that any effects are not significant compared to the statistical errors we quote. Where they overlap, we cross-correlate the ACT and the South Pole Telescope (SPT) maps and show they are consistent. The measurements of higher-order peaks in the CMB power spectrum provide an additional test of the ΛCDM cosmological model, and help constrain extensions beyond the standard model. The small angular scale power spectrum also provides constraining power on the Sunyaev-Zel'dovich effects and extragalactic foregrounds. We also present a measurement of the CMB gravitational lensing convergence power spectrum at 4.6σ detection significance.

  12. The Atacama Cosmology Telescope: Temperature and Gravitational Lensing Power Spectrum Measurements from Three Seasons of Data

    NASA Technical Reports Server (NTRS)

    Das, Sudeep; Louis, Thibaut; Nolta, Michael R.; Addison, Graeme E.; Battisetti, Elia S.; Bond, J. Richard; Calabrese, Erminia; Crichton, Devin; Devlin, Mark J.; Dicker, Simon; hide

    2014-01-01

    We present the temperature power spectra of the cosmic microwave background (CMB) derived from the three seasons of data from the Atacama Cosmology Telescope (ACT) at 148 GHz and 218 GHz, as well as the cross-frequency spectrum between the two channels. We detect and correct for contamination due to the Galactic cirrus in our equatorial maps. We present the results of a number of tests for possible systematic error and conclude that any effects are not significant compared to the statistical errors we quote. Where they overlap, we cross-correlate the ACT and the South Pole Telescope (SPT) maps and show they are consistent. The measurements of higher-order peaks in the CMB power spectrum provide an additional test of the ΛCDM cosmological model, and help constrain extensions beyond the standard model. The small angular scale power spectrum also provides constraining power on the Sunyaev-Zel'dovich effects and extragalactic foregrounds. We also present a measurement of the CMB gravitational lensing convergence power spectrum at 4.6σ detection significance.

  13. [Study on predicting sugar content and valid acidity of apples by near infrared diffuse reflectance technique].

    PubMed

    Liu, Yan-de; Ying, Yi-bin; Fu, Xia-ping

    2005-11-01

    A nondestructive method for quantifying the sugar content (SC) and valid acidity (VA) of intact apples using diffuse near-infrared reflectance and optical fiber sensing techniques was explored in the present research. The standard sample sets and prediction models were established by partial least squares (PLS) analysis. A total of 120 Shandong Fuji apples were tested in the wave number range of 12,500 - 4,000 cm(-1) using Fourier transform near infrared spectroscopy. The results indicated that the nondestructive quantification of SC and VA gave high correlation coefficients of 0.970 and 0.906, low root mean square errors of prediction (RMSEP) of 0.272 and 0.0562, low root mean square errors of calibration (RMSEC) of 0.261 and 0.0677, and small differences between RMSEP and RMSEC of 0.011 and 0.0115, respectively. The results suggest that the diffuse near-infrared reflectance technique is feasible for nondestructive determination of apple sugar content in the wave number range of 10,341 - 5,461 cm(-1) and for valid acidity in the wave number range of 10,341 - 3,818 cm(-1).
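
    The PLS calibration/prediction workflow, with RMSEC computed on the calibration set and RMSEP on a held-out prediction set, can be sketched as below. The spectra here are synthetic placeholders, not FT-NIR measurements, and the component count is an arbitrary choice.

      import numpy as np
      from sklearn.cross_decomposition import PLSRegression

      rng = np.random.default_rng(4)
      # Stand-in "spectra": 120 samples x 500 wavenumber channels
      # (synthetic; real work would use the measured FT-NIR spectra).
      X = rng.standard_normal((120, 500))
      y = X[:, :5].sum(axis=1) + 0.1 * rng.standard_normal(120)

      cal, val = np.arange(0, 90), np.arange(90, 120)
      pls = PLSRegression(n_components=8)
      pls.fit(X[cal], y[cal])

      rmsec = np.sqrt(np.mean((pls.predict(X[cal]).ravel() - y[cal]) ** 2))
      rmsep = np.sqrt(np.mean((pls.predict(X[val]).ravel() - y[val]) ** 2))
      print(f"RMSEC={rmsec:.3f}  RMSEP={rmsep:.3f}")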

  14. Calculating the sensitivity and robustness of binding free energy calculations to force field parameters

    PubMed Central

    Rocklin, Gabriel J.; Mobley, David L.; Dill, Ken A.

    2013-01-01

    Binding free energy calculations offer a thermodynamically rigorous method to compute protein-ligand binding, and they depend on empirical force fields with hundreds of parameters. We examined the sensitivity of computed binding free energies to the ligand’s electrostatic and van der Waals parameters. Dielectric screening and cancellation of effects between ligand-protein and ligand-solvent interactions reduce the parameter sensitivity of binding affinity by 65%, compared with interaction strengths computed in the gas-phase. However, multiple changes to parameters combine additively on average, which can lead to large changes in overall affinity from many small changes to parameters. Using these results, we estimate that random, uncorrelated errors in force field nonbonded parameters must be smaller than 0.02 e per charge, 0.06 Å per radius, and 0.01 kcal/mol per well depth in order to obtain 68% (one standard deviation) confidence that a computed affinity for a moderately-sized lead compound will fall within 1 kcal/mol of the true affinity, if these are the only sources of error considered. PMID:24015114

  15. Small refractive errors--their correction and practical importance.

    PubMed

    Skrbek, Matej; Petrová, Sylvie

    2013-04-01

    Small refractive errors present a group of specific far-sighted refractive dispositions that are compensated by enhanced accommodative effort and are not manifested as a loss of visual acuity. This paper aims to answer a few questions about their correction, following from the theoretical presumptions and expectations surrounding this dilemma. The main goal of this research was to confirm or refute the hypothesis about the convenience, efficiency, and frequency of corrections that do not raise visual acuity (or whose improvement is not noticeable). The next goal was to examine the connection between such correction and other factors (age, size of the refractive error, etc.). The last aim was to describe the subjective personal rating of the correction of these small refractive errors, and to determine the minimal improvement of visual acuity that is attractive enough for the client to purchase the correction (glasses, contact lenses). It was confirmed that there is a notable group of subjects with good visual acuity for whom the correction is applicable, although it does not improve visual acuity much. Its main purpose is to eliminate asthenopia. The prime reason for acceptance of the correction typically changes during life as accommodation declines. Young people prefer the correction on the grounds of asthenopia caused by a small refractive error or latent strabismus; elderly people acquire the correction because of improvement of visual acuity. Generally, the correction was found useful in more than 30% of cases if the gain in visual acuity was at least 0.3 of the decimal row.

  16. Anthropogenic resource subsidies determine space use by Australian arid zone dingoes: an improved resource selection modelling approach.

    PubMed

    Newsome, Thomas M; Ballard, Guy-Anthony; Dickman, Christopher R; Fleming, Peter J S; Howden, Chris

    2013-01-01

    Dingoes (Canis lupus dingo) were introduced to Australia and became feral at least 4,000 years ago. We hypothesized that dingoes, being of domestic origin, would be adaptable to anthropogenic resource subsidies and that their space use would be affected by the dispersion of those resources. We tested this by analyzing Resource Selection Functions (RSFs) developed from GPS fixes (locations) of dingoes in arid central Australia. Using Generalized Linear Mixed-effect Models (GLMMs), we investigated resource relationships for dingoes that had access to abundant food near mine facilities, and for those that did not. From these models, we predicted the probability of dingo occurrence in relation to anthropogenic resource subsidies and other habitat characteristics over ∼ 18,000 km(2). Very small standard errors, and the pervasively low P-values that result from them, will become more important as the size of data sets, such as our GPS tracking logs, increases. Therefore, we also investigated methods to minimize the effects of serial and spatio-temporal correlation among samples and unbalanced study designs. Using GLMMs, we accounted for some of the correlation structure of GPS animal tracking data; however, parameter standard errors remained very small and all predictors were highly significant. Consequently, we developed an alternative approach that allowed us to review effect sizes at different spatial scales and determine which predictors were sufficiently ecologically meaningful to include in final RSF models. We determined that the most important predictor for dingo occurrence around mine sites was distance to the refuse facility. Away from mine sites, close proximity to human-provided watering points was predictive of dingo dispersion, as were other landscape factors including palaeochannels, rocky rises, and elevated drainage depressions. Our models demonstrate that anthropogenically supplemented food and water can alter dingo-resource relationships. The spatial distribution of such resources is therefore critical for the conservation and management of dingoes and other top predators.
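
    The core problem, naive standard errors shrinking toward zero when serially correlated fixes are treated as independent, is easy to reproduce. The sketch below simulates an autocorrelated covariate and response with no true relationship and compares the naive OLS standard error with the actual sampling spread of the slope; it is a minimal illustration, not the authors' GLMM workflow.

      import numpy as np

      rng = np.random.default_rng(5)

      def ar1(n, phi):
          """AR(1) series: serially correlated, like successive GPS fixes."""
          e = np.empty(n)
          e[0] = rng.standard_normal()
          for t in range(1, n):
              e[t] = phi * e[t - 1] + np.sqrt(1 - phi**2) * rng.standard_normal()
          return e

      n, n_sims, phi = 2000, 300, 0.95
      slopes, naive_ses = [], []
      for _ in range(n_sims):
          x = ar1(n, phi)                  # autocorrelated habitat covariate
          y = ar1(n, phi)                  # autocorrelated response; true slope 0
          X = np.column_stack([np.ones(n), x])
          b, *_ = np.linalg.lstsq(X, y, rcond=None)
          s2 = np.sum((y - X @ b) ** 2) / (n - 2)
          naive_ses.append(np.sqrt(s2 * np.linalg.inv(X.T @ X)[1, 1]))
          slopes.append(b[1])

      print("mean naive SE:      ", np.mean(naive_ses))  # far too small
      print("actual SD of slopes:", np.std(slopes))      # the honest SE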

  17. Validity, reliability and Norwegian adaptation of the Stroke-Specific Quality of Life (SS-QOL) scale

    PubMed Central

    Pedersen, Synne Garder; Heiberg, Guri Anita; Nielsen, Jørgen Feldbæk; Friborg, Oddgeir; Stabel, Henriette Holm; Anke, Audny; Arntzen, Cathrine

    2018-01-01

    Background: There is a paucity of stroke-specific instruments to assess health-related quality of life in the Norwegian language. The objective was to examine the validity and reliability of a Norwegian version of the 12-domain Stroke-Specific Quality of Life scale. Methods: A total of 125 stroke survivors were prospectively recruited. Questionnaires were administered at 3 months; 36 test–retests were performed at 12 months post stroke. The translation was conducted according to guidelines. The internal consistency was assessed with Cronbach’s alpha; convergent validity, with item-to-subscale correlations; and test–retest, with Spearman’s correlations. Scaling validity was explored by calculating both floor and ceiling effects. A priori hypotheses regarding the associations between the Stroke-Specific Quality of Life domain scores and scores of established measures were tested. Standard error of measurement was assessed. Results: The Norwegian version revealed no major changes in back translations. The internal consistency values of the domains were Cronbach’s alpha = 0.79–0.93. Rates of missing items were small, and the item-to-subscale correlation coefficients supported convergent validity (0.48–0.87). The observed floor effects were generally small, whereas the ceiling effects had moderate or high values (16%–63%). Test–retest reliability indicated stability in most domains, with Spearman’s rho = 0.67–0.94 (all p < 0.001), whereas the rho was 0.35 (p < 0.05) for the ‘Vision’ domain. Hypothesis testing supported the construct validity of the scale. Standard error of measurement values for each domain were generated to indicate the required magnitudes of detectable change. Conclusions: The Norwegian version of the Stroke-Specific Quality of Life scale is a reliable and valid instrument with good psychometric properties. It is suited for use in health research as well as in individual assessments of persons with stroke. PMID:29344360

  18. Validity, reliability and Norwegian adaptation of the Stroke-Specific Quality of Life (SS-QOL) scale.

    PubMed

    Pedersen, Synne Garder; Heiberg, Guri Anita; Nielsen, Jørgen Feldbæk; Friborg, Oddgeir; Stabel, Henriette Holm; Anke, Audny; Arntzen, Cathrine

    2018-01-01

    There is a paucity of stroke-specific instruments to assess health-related quality of life in the Norwegian language. The objective was to examine the validity and reliability of a Norwegian version of the 12-domain Stroke-Specific Quality of Life scale. A total of 125 stroke survivors were prospectively recruited. Questionnaires were administered at 3 months; 36 test-retests were performed at 12 months post stroke. The translation was conducted according to guidelines. The internal consistency was assessed with Cronbach's alpha; convergent validity, with item-to-subscale correlations; and test-retest, with Spearman's correlations. Scaling validity was explored by calculating both floor and ceiling effects. A priori hypotheses regarding the associations between the Stroke-Specific Quality of Life domain scores and scores of established measures were tested. Standard error of measurement was assessed. The Norwegian version revealed no major changes in back translations. The internal consistency values of the domains were Cronbach's alpha = 0.79-0.93. Rates of missing items were small, and the item-to-subscale correlation coefficients supported convergent validity (0.48-0.87). The observed floor effects were generally small, whereas the ceiling effects had moderate or high values (16%-63%). Test-retest reliability indicated stability in most domains, with Spearman's rho = 0.67-0.94 (all p < 0.001), whereas the rho was 0.35 (p < 0.05) for the 'Vision' domain. Hypothesis testing supported the construct validity of the scale. Standard error of measurement values for each domain were generated to indicate the required magnitudes of detectable change. The Norwegian version of the Stroke-Specific Quality of Life scale is a reliable and valid instrument with good psychometric properties. It is suited for use in health research as well as in individual assessments of persons with stroke.
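
    For reference, the standard error of measurement reported per domain is conventionally computed from the domain standard deviation and a test-retest reliability coefficient, SEM = SD * sqrt(1 - r), and the smallest detectable change follows as 1.96 * sqrt(2) * SEM. The sketch below uses illustrative numbers, not the study's values.

      import math

      def sem(sd, reliability):
          """Standard error of measurement: SEM = SD * sqrt(1 - r),
          with r a test-retest reliability coefficient (e.g., an ICC)."""
          return sd * math.sqrt(1.0 - reliability)

      def smallest_detectable_change(sem_value, z=1.96):
          """SDC = z * sqrt(2) * SEM: the change a single person must show
          to exceed measurement error at the given confidence level."""
          return z * math.sqrt(2.0) * sem_value

      s = sem(sd=12.0, reliability=0.85)       # illustrative domain values
      print(s, smallest_detectable_change(s))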

  19. Study of an instrument for sensing errors in a telescope wavefront

    NASA Technical Reports Server (NTRS)

    Golden, L. J.; Shack, R. V.; Slater, P. N.

    1974-01-01

    Focal plane sensors for determining the error in a telescope wavefront were investigated. The construction of three candidate test instruments and their evaluation in terms of measurements of small wavefront aberrations are described. A laboratory wavefront simulator was designed and fabricated to evaluate the test instruments. The laboratory wavefront error simulator was used to evaluate three tests: a Hartmann test, a polarization shearing interferometer test, and an interferometric Zernike test.

  20. Discrepancy-based error estimates for Quasi-Monte Carlo III. Error distributions and central limits

    NASA Astrophysics Data System (ADS)

    Hoogland, Jiri; Kleiss, Ronald

    1997-04-01

    In Quasi-Monte Carlo integration, the integration error is believed to be generally smaller than in classical Monte Carlo with the same number of integration points. Using an appropriate definition of an ensemble of quasi-random point sets, we derive various results on the probability distribution of the integration error, which can be compared to the standard Central Limit Theorem for normal stochastic sampling. In many cases, a Gaussian error distribution is obtained.
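
    The claimed advantage is easy to check empirically with a scrambled Sobol sequence. The sketch below compares plain Monte Carlo and quasi-Monte Carlo errors for a smooth test integrand with known integral; the integrand and sample size are arbitrary choices, and scipy.stats.qmc requires SciPy 1.7 or later.

      import numpy as np
      from scipy.stats import qmc

      def f(u):
          # Smooth test integrand on [0,1]^2 with known integral 1/4.
          return u[:, 0] * u[:, 1]

      exact = 0.25
      m = 14                                         # 2**14 points
      rng = np.random.default_rng(6)

      mc = f(rng.random((2 ** m, 2))).mean()         # plain Monte Carlo
      sob = qmc.Sobol(d=2, scramble=True, seed=6)
      qmc_est = f(sob.random_base2(m=m)).mean()      # quasi-Monte Carlo

      print("MC  error:", abs(mc - exact))
      print("QMC error:", abs(qmc_est - exact))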

  1. Significant and Sustained Reduction in Chemotherapy Errors Through Improvement Science.

    PubMed

    Weiss, Brian D; Scott, Melissa; Demmel, Kathleen; Kotagal, Uma R; Perentesis, John P; Walsh, Kathleen E

    2017-04-01

    A majority of children with cancer are now cured with highly complex chemotherapy regimens incorporating multiple drugs and demanding monitoring schedules. The risk for error is high, and errors can occur at any stage in the process, from order generation to pharmacy formulation to bedside drug administration. Our objective was to describe a program to eliminate errors in chemotherapy use among children. To increase reporting of chemotherapy errors, we supplemented the hospital reporting system with a new chemotherapy near-miss reporting system. Following the Model for Improvement, we then implemented several interventions, including a daily chemotherapy huddle, improvements to the preparation and delivery of intravenous therapy, headphones for clinicians ordering chemotherapy, and standards for chemotherapy administration throughout the hospital. Twenty-two months into the project, we saw a centerline shift in our U chart of chemotherapy errors that reached the patient from a baseline rate of 3.8 to 1.9 per 1,000 doses. This shift has been sustained for > 4 years. In Poisson regression analyses, we found an initial increase in error rates, followed by a significant decline in errors after 16 months of improvement work (P < .001). Following the Model for Improvement, our improvement efforts were associated with significant reductions in chemotherapy errors that reached the patient. Key drivers for our success included error vigilance through a huddle, standardization, and minimization of interruptions during ordering.

  2. Safe and effective error rate monitors for SS7 signaling links

    NASA Astrophysics Data System (ADS)

    Schmidt, Douglas C.

    1994-04-01

    This paper describes SS7 error monitor characteristics, discusses the existing SUERM (Signal Unit Error Rate Monitor), and develops the recently proposed EIM (Error Interval Monitor) for higher speed SS7 links. A SS7 error monitor is considered safe if it ensures acceptable link quality and is considered effective if it is tolerant to short-term phenomena. Formal criteria for safe and effective error monitors are formulated in this paper. This paper develops models of changeover transients, the unstable component of queue length resulting from errors. These models are in the form of recursive digital filters. Time is divided into sequential intervals. The filter's input is the number of errors which have occurred in each interval. The output is the corresponding change in transmit queue length. Engineered EIM's are constructed by comparing an estimated changeover transient with a threshold T using a transient model modified to enforce SS7 standards. When this estimate exceeds T, a changeover will be initiated and the link will be removed from service. EIM's can be differentiated from SUERM by the fact that EIM's monitor errors over an interval while SUERM's count errored messages. EIM's offer several advantages over SUERM's, including the fact that they are safe and effective, impose uniform standards in link quality, are easily implemented, and make minimal use of real-time resources.
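
    The flavor of an interval-based monitor can be conveyed with a toy recursive filter: the changeover-transient estimate decays each interval, accumulates new errors, and the link is removed from service when the estimate crosses a threshold T. The parameterization below is invented for illustration and is not the standards-compliant EIM.

      def error_interval_monitor(errors_per_interval, gain=1.0,
                                 decay=0.9, threshold=10.0):
          """Toy interval-based error monitor (a sketch of the EIM idea,
          not the SS7-standard parameterization). A recursive filter maps
          the error count in each interval to an estimated changeover
          transient; the link is taken out of service when the estimate
          crosses the threshold."""
          estimate = 0.0
          for k, n_err in enumerate(errors_per_interval):
              estimate = decay * estimate + gain * n_err
              if estimate > threshold:
                  return k            # interval at which changeover occurs
          return None                 # link stays in service

      # Sparse errors do not trip the monitor; a burst does.
      print(error_interval_monitor([0, 1, 0, 0, 1, 0] * 10))   # None
      print(error_interval_monitor([0, 0, 4, 5, 6, 0]))        # trips early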

  3. Time-resolved in vivo luminescence dosimetry for online error detection in pulsed dose-rate brachytherapy.

    PubMed

    Andersen, Claus E; Nielsen, Søren Kynde; Lindegaard, Jacob Christian; Tanderup, Kari

    2009-11-01

    The purpose of this study is to present and evaluate a dose-verification protocol for pulsed dose-rate (PDR) brachytherapy based on in vivo time-resolved (1 s time resolution) fiber-coupled luminescence dosimetry. Five cervix cancer patients undergoing PDR brachytherapy (Varian GammaMed Plus with 192Ir) were monitored. The treatments comprised from 10 to 50 pulses (1 pulse/h) delivered by intracavitary/interstitial applicators (tandem-ring systems and/or needles). For each patient, one or two dosimetry probes were placed directly in or close to the tumor region using stainless steel or titanium needles. Each dosimeter probe consisted of a small aluminum oxide crystal attached to an optical fiber cable (1 mm outer diameter) that could guide radioluminescence (RL) and optically stimulated luminescence (OSL) from the crystal to special readout instrumentation. Positioning uncertainty and hypothetical dose-delivery errors (interchanged guide tubes or applicator movements from +/-5 to +/-15 mm) were simulated in software in order to assess the ability of the system to detect errors. For three of the patients, the authors found no significant differences (P>0.01) for comparisons between in vivo measurements and calculated reference values at the level of dose per dwell position, dose per applicator, or total dose per pulse. The standard deviations of the dose per pulse were less than 3%, indicating a stable dose delivery and a highly stable geometry of applicators and dosimeter probes during the treatments. For the two other patients, the authors noted significant deviations for three individual pulses and for one dosimeter probe. These deviations could have been due to applicator movement during the treatment and one incorrectly positioned dosimeter probe, respectively. Computer simulations showed that the likelihood of detecting a pair of interchanged guide tubes increased by a factor of 10 or more for the considered patients when going from integrating to time-resolved dose verification. The likelihood of detecting a +/-15 mm displacement error increased by a factor of 1.5 or more. In vivo fiber-coupled RL/OSL dosimetry based on detectors placed in standard brachytherapy needles was demonstrated. The time-resolved dose-rate measurements were found to provide a good way to visualize the progression and stability of PDR brachytherapy dose delivery, and time-resolved dose-rate measurements provided an increased sensitivity for detection of dose-delivery errors compared with time-integrated dosimetry.

  4. The performance of projective standardization for digital subtraction radiography.

    PubMed

    Mol, André; Dunn, Stanley M

    2003-09-01

    We sought to test the performance and robustness of projective standardization in preserving invariant properties of subtraction images in the presence of irreversible projection errors. Study design: Twenty bone chips (1-10 mg each) were placed on dentate dry mandibles. Follow-up images were obtained without the bone chips, and irreversible projection errors of up to 6 degrees were introduced. Digitized image intensities were normalized, and follow-up images were geometrically reconstructed by 2 operators using anatomical and fiduciary landmarks. Subtraction images were analyzed by 3 observers. Regression analysis revealed a linear relationship between radiographic estimates of mineral loss and actual mineral loss (R(2) = 0.99; P < .05). The effect of projection error was not significant (general linear model [GLM]: P > .05). There was no difference between the radiographic estimates from images standardized with anatomical landmarks and those standardized with fiduciary landmarks (Wilcoxon signed rank test: P > .05). Operator variability was low for image analysis alone (R(2) = 0.99; P < .05), as well as for the entire procedure (R(2) = 0.98; P < .05). The predicted detection limit was smaller than 1 mg. Subtraction images registered by projective standardization yield estimates of osseous change that are invariant to irreversible projection errors of up to 6 degrees. Within these limits, operator precision is high, and anatomical landmarks can be used to establish correspondence.

  5. Trial-to-trial adaptation in control of arm reaching and standing posture

    PubMed Central

    Pienciak-Siewert, Alison; Horan, Dylan P.

    2016-01-01

    Classical theories of motor learning hypothesize that adaptation is driven by sensorimotor error; this is supported by studies of arm and eye movements that have shown that trial-to-trial adaptation increases with error. Studies of postural control have shown that anticipatory postural adjustments increase with the magnitude of a perturbation. However, differences in adaptation have been observed between the two modalities, possibly due to either the inherent instability or sensory uncertainty in standing posture. Therefore, we hypothesized that trial-to-trial adaptation in posture should be driven by error, similar to what is observed in arm reaching, but the nature of the relationship between error and adaptation may differ. Here we investigated trial-to-trial adaptation of arm reaching and postural control concurrently; subjects made reaching movements in a novel dynamic environment of varying strengths, while standing and holding the handle of a force-generating robotic arm. We found that error and adaptation increased with perturbation strength in both arm and posture. Furthermore, in both modalities, adaptation showed a significant correlation with error magnitude. Our results indicate that adaptation scales proportionally with error in the arm and near proportionally in posture. In posture only, adaptation was not sensitive to small error sizes, which were similar in size to errors experienced in unperturbed baseline movements due to inherent variability. This finding may be explained as an effect of uncertainty about the source of small errors. Our findings suggest that in rehabilitation, postural error size should be considered relative to the magnitude of inherent movement variability. PMID:27683888

  6. Trial-to-trial adaptation in control of arm reaching and standing posture.

    PubMed

    Pienciak-Siewert, Alison; Horan, Dylan P; Ahmed, Alaa A

    2016-12-01

    Classical theories of motor learning hypothesize that adaptation is driven by sensorimotor error; this is supported by studies of arm and eye movements that have shown that trial-to-trial adaptation increases with error. Studies of postural control have shown that anticipatory postural adjustments increase with the magnitude of a perturbation. However, differences in adaptation have been observed between the two modalities, possibly due to either the inherent instability or sensory uncertainty in standing posture. Therefore, we hypothesized that trial-to-trial adaptation in posture should be driven by error, similar to what is observed in arm reaching, but the nature of the relationship between error and adaptation may differ. Here we investigated trial-to-trial adaptation of arm reaching and postural control concurrently; subjects made reaching movements in a novel dynamic environment of varying strengths, while standing and holding the handle of a force-generating robotic arm. We found that error and adaptation increased with perturbation strength in both arm and posture. Furthermore, in both modalities, adaptation showed a significant correlation with error magnitude. Our results indicate that adaptation scales proportionally with error in the arm and near proportionally in posture. In posture only, adaptation was not sensitive to small error sizes, which were similar in size to errors experienced in unperturbed baseline movements due to inherent variability. This finding may be explained as an effect of uncertainty about the source of small errors. Our findings suggest that in rehabilitation, postural error size should be considered relative to the magnitude of inherent movement variability. Copyright © 2016 the American Physiological Society.
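
    The error-driven update the authors test can be written as a one-line learning rule. Below is a minimal state-space sketch under the abstract's hypothesis; the learning rate, noise level, and the posture "dead zone" for small errors are illustrative assumptions, not fitted values from the study.

    ```python
    import numpy as np

    def simulate(n_trials, perturbation, rate=0.3, dead_zone=0.0, noise_sd=0.05, seed=0):
        rng = np.random.default_rng(seed)
        estimate, errors = 0.0, []
        for _ in range(n_trials):
            error = perturbation - estimate + rng.normal(0.0, noise_sd)
            errors.append(error)
            if abs(error) > dead_zone:     # posture: insensitive to near-baseline errors
                estimate += rate * error   # adaptation proportional to error
        return np.array(errors)

    arm = simulate(30, perturbation=1.0)                      # proportional everywhere
    posture = simulate(30, perturbation=1.0, dead_zone=0.1)   # ignores small errors
    ```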

  7. Second Chance: If at First You Do Not Succeed, Set up a Plan and Try, Try Again

    ERIC Educational Resources Information Center

    Poulsen, John

    2012-01-01

    Student teachers make errors in their practicum. Then, they learn and fix those errors. This is the standard arc within a successful practicum. Some students make errors that they do not fix and then make more errors that again remain unfixed. This downward spiral increases in pace until the classroom becomes chaos. These students at the…

  8. SU-E-T-377: Inaccurate Positioning Might Introduce Significant MapCheck Calibration Error in Flatten Filter Free Beams

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, S; Chao, C; Columbia University, NY, NY

    2014-06-01

    Purpose: This study investigates the calibration error of detector sensitivity for MapCheck due to inaccurate positioning of the device, which is not taken into account by the current commercial iterative calibration algorithm. We hypothesize the calibration is more vulnerable to positioning error for flatten filter free (FFF) beams than for conventional flattened beams. Methods: MapCheck2 was calibrated with 10MV conventional and FFF beams, with careful alignment and with 1 cm positioning error during calibration, respectively. Open fields of 37 cm × 37 cm were delivered to gauge the impact of the resultant calibration errors. The local calibration error was modeled as a detector-independent multiplication factor, with which propagation error was estimated for positioning errors from 1 mm to 1 cm. The calibrated sensitivities, without positioning error, were compared between the conventional and FFF beams to evaluate the dependence on beam type. Results: The 1 cm positioning error leads to 0.39% and 5.24% local calibration error in the conventional and FFF beams, respectively. After propagating to the edges of MapCheck, the calibration errors become 6.5% and 57.7%, respectively. The propagation error increases almost linearly with respect to the positioning error. The difference of sensitivities between the conventional and FFF beams was small (0.11 ± 0.49%). Conclusion: The results demonstrate that positioning error is not handled by the current commercial calibration algorithm of MapCheck. In particular, the calibration errors for the FFF beams are ~9 times greater than those for the conventional beams with identical positioning error, and a small 1 mm positioning error might lead to up to 8% calibration error. Since the sensitivities are only slightly dependent on beam type and the conventional beam is less affected by positioning error, it is advisable to cross-check the sensitivities between the conventional and FFF beams to detect potential calibration errors due to inaccurate positioning. This work was partially supported by DOD Grant No. W81XWH1010862.

  9. Improved prediction of hardwood tree biomass derived from wood density estimates and form factors for whole trees

    Treesearch

    David W. MacFarlane; Neil R. Ver Planck

    2012-01-01

    Data from hardwood trees in Michigan were analyzed to investigate how differences in whole-tree form and wood density between trees of different stem diameter relate to residual error in standard-type biomass equations. The results suggested that whole-tree wood density, measured at breast height, explained a significant proportion of residual error in standard-type...

  10. The Relationship between Mean Square Differences and Standard Error of Measurement: Comment on Barchard (2012)

    ERIC Educational Resources Information Center

    Pan, Tianshu; Yin, Yue

    2012-01-01

    In the discussion of mean square difference (MSD) and standard error of measurement (SEM), Barchard (2012) concluded that the MSD between 2 sets of test scores is greater than 2(SEM)² and SEM underestimates the score difference between 2 tests when the 2 tests are not parallel. This conclusion has limitations for 2 reasons. First,…
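
    For context, the relation holds with equality in expectation for parallel tests: with equal true scores and independent errors of standard deviation SEM, E[MSD] = 2·SEM², and a systematic difference Δ between forms adds Δ². A quick hedged simulation (illustrative numbers only):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n, sem, delta = 100_000, 3.0, 2.0
    true = rng.normal(50, 10, n)
    x1 = true + rng.normal(0, sem, n)          # form 1 scores
    x2 = true + delta + rng.normal(0, sem, n)  # form 2 scores, shifted by delta
    msd = np.mean((x1 - x2) ** 2)
    print(msd, 2 * sem**2 + delta**2)          # both close to 22.0
    ```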

  11. Empirical Synthesis of the Effect of Standard Error of Measurement on Decisions Made within Brief Experimental Analyses of Reading Fluency

    ERIC Educational Resources Information Center

    Burns, Matthew K.; Taylor, Crystal N.; Warmbold-Brann, Kristy L.; Preast, June L.; Hosp, John L.; Ford, Jeremy W.

    2017-01-01

    Intervention researchers often use curriculum-based measurement of reading fluency (CBM-R) with a brief experimental analysis (BEA) to identify an effective intervention for individual students. The current study synthesized data from 22 studies that used CBM-R data within a BEA by computing the standard error of measure (SEM) for the median data…
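
    For readers unfamiliar with the statistic, the SEM used in such syntheses is conventionally computed from the score standard deviation and a reliability coefficient; the values below are illustrative, not taken from the synthesized studies.

    ```python
    import math

    def standard_error_of_measure(sd, reliability):
        # SEM = SD * sqrt(1 - reliability)
        return sd * math.sqrt(1.0 - reliability)

    # e.g., CBM-R scores with SD = 35 words correct per minute and reliability 0.90
    print(standard_error_of_measure(35, 0.90))  # ~11.1
    ```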

  12. A Comparison of Kernel Equating and Traditional Equipercentile Equating Methods and the Parametric Bootstrap Methods for Estimating Standard Errors in Equipercentile Equating

    ERIC Educational Resources Information Center

    Choi, Sae Il

    2009-01-01

    This study used simulation (a) to compare the kernel equating method to traditional equipercentile equating methods under the equivalent-groups (EG) design and the nonequivalent-groups with anchor test (NEAT) design and (b) to apply the parametric bootstrap method for estimating standard errors of equating. A two-parameter logistic item response…
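
    A minimal sketch of the parametric bootstrap for an equating standard error, assuming (for illustration only) normal score distributions and simple percentile-rank equipercentile equating; the study itself used IRT-based simulation and more careful smoothing.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def equipercentile(x, y, score):
        # map `score` on form X to the form-Y value with the same percentile rank
        p = 100.0 * np.mean(x <= score)
        return np.percentile(y, p)

    # "observed" scores on two forms
    x = rng.normal(50, 10, 2000)
    y = rng.normal(52, 11, 2000)

    # parametric bootstrap: refit the assumed model, resample, re-equate
    boot = []
    for _ in range(500):
        xb = rng.normal(x.mean(), x.std(), x.size)
        yb = rng.normal(y.mean(), y.std(), y.size)
        boot.append(equipercentile(xb, yb, score=60.0))
    print("bootstrap SE at X=60:", np.std(boot, ddof=1))
    ```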

  13. The impact of statistical adjustment on conditional standard errors of measurement in the assessment of physician communication skills.

    PubMed

    Raymond, Mark R; Clauser, Brian E; Furman, Gail E

    2010-10-01

    The use of standardized patients to assess communication skills is now an essential part of assessing a physician's readiness for practice. To improve the reliability of communication scores, it has become increasingly common in recent years to use statistical models to adjust ratings provided by standardized patients. This study employed ordinary least squares regression to adjust ratings, and then used generalizability theory to evaluate the impact of these adjustments on score reliability and the overall standard error of measurement. In addition, conditional standard errors of measurement were computed for both observed and adjusted scores to determine whether the improvements in measurement precision were uniform across the score distribution. Results indicated that measurement was generally less precise for communication ratings toward the lower end of the score distribution; and the improvement in measurement precision afforded by statistical modeling varied slightly across the score distribution such that the most improvement occurred in the upper-middle range of the score scale. Possible reasons for these patterns in measurement precision are discussed, as are the limitations of the statistical models used for adjusting performance ratings.

  14. Imaging of the Galapagos Plume Using a Network of Mermaids

    NASA Astrophysics Data System (ADS)

    Nolet, G.; Hello, Y.; Chen, J.; Pazmino, A.; Van der Lee, S.; Bonnieux, S.; Deschamps, A.; Regnier, M. M.; Font, Y.; Simons, F.

    2017-12-01

    A network of nine submarine seismographs (Mermaids) has been floating freely from 2014 to 2016 around the Galapagos islands, with the aim to enhance the resolving power of deep tomographic images of the mantle plume in this region (see poster by Hello et al. in session S002 for technical details). Analysing a total of 1329 triggered signals transmitted by satellite, we were able to pick the onset times of 434 P waves, 95 PKP and 26 pP arrivals. For the events recorded by at least one Mermaid, these data were complemented with hand-picked onsets from stations on the islands, or on the continent nearby, for a total of 3892 onset times of rays crossing the mantle beneath the Galapagos, many of them with a small standard error estimated at 0.3s. These data are used in a local inversion using ray theory, as is appropriate for onset times. To compensate for delays acquired in the rest of the Earth, the local model is embedded in a global inversion of P delays from the EHB data set most recently published by the ISC for 2000-2003. By selecting a strongly redundant subset of more than one million EHB P wave arrivals, we determined an objective standard error for these delays of 0.51s using the method of Voronin et al. (GJI, 2014). Using a combination of (strong) smoothing and (weak) damping, we force the tomographic model to fit the data close to the level of the estimated standard errors. Preliminary images obtained at the time of writing of this abstract indicate a deep reaching plume that is stronger in the lower mantle than near the surface. Most importantly, the experiment shows how even a limited number of Mermaids can contribute a significant gain in resolution. This is a direct consequence of the fact that they float with abyssal currents, thus avoiding redundancy in raypaths even for aftershocks. The final tomographic images and an analysis of their significance will be subject of the presentation.

  15. Error sensitivity analysis in 10-30-day extended range forecasting by using a nonlinear cross-prediction error model

    NASA Astrophysics Data System (ADS)

    Xia, Zhiye; Xu, Lisheng; Chen, Hongbin; Wang, Yongqian; Liu, Jinbao; Feng, Wenlan

    2017-06-01

    Extended range forecasting of 10-30 days, which lies between medium-term and climate prediction in terms of timescale, plays a significant role in decision-making processes for the prevention and mitigation of disastrous meteorological events. The sensitivity of initial error, model parameter error, and random error in a nonlinear cross-prediction error (NCPE) model, and their stability in the prediction validity period in 10-30-day extended range forecasting, are analyzed quantitatively. The associated sensitivity of precipitable water, temperature, and geopotential height during cases of heavy rain and hurricane is also discussed. The results are summarized as follows. First, the initial error and random error interact. When the ratio of random error to initial error is small (10⁻⁶-10⁻²), minor variation in random error cannot significantly change the dynamic features of a chaotic system, and therefore random error has minimal effect on the prediction. When the ratio is in the range of 10⁻¹-10² (i.e., random error dominates), attention should be paid to the random error instead of only the initial error. When the ratio is around 10⁻²-10⁻¹, both influences must be considered. Their mutual effects may bring considerable uncertainty to extended range forecasting, and de-noising is therefore necessary. Second, in terms of model parameter error, the embedding dimension m should be determined by the factual nonlinear time series. The dynamic features of a chaotic system cannot be depicted because of the incomplete structure of the attractor when m is small. When m is large, prediction indicators can vanish because of the scarcity of phase points in phase space. A method for overcoming the cut-off effect (m > 4) is proposed. Third, for heavy rains, precipitable water is more sensitive to the prediction validity period than temperature or geopotential height; however, for hurricanes, geopotential height is most sensitive, followed by precipitable water.

  16. Efficient logistic regression designs under an imperfect population identifier.

    PubMed

    Albert, Paul S; Liu, Aiyi; Nansel, Tonja

    2014-03-01

    Motivated by actual study designs, this article considers efficient logistic regression designs where the population is identified with a binary test that is subject to diagnostic error. We consider the case where the imperfect test is obtained on all participants, while the gold standard test is measured on a small chosen subsample. Under maximum-likelihood estimation, we evaluate the optimal design in terms of sample selection as well as verification. We show that there may be substantial efficiency gains by choosing a small percentage of individuals who test negative on the imperfect test for inclusion in the sample (e.g., verifying 90% test-positive cases). We also show that a two-stage design may be a good practical alternative to a fixed design in some situations. Under optimal and nearly optimal designs, we compare maximum-likelihood and semi-parametric efficient estimators under correct and misspecified models with simulations. The methodology is illustrated with an analysis from a diabetes behavioral intervention trial. © 2013, The International Biometric Society.

  17. Ad hoc versus standardized admixtures for continuous infusion drugs in neonatal intensive care: cognitive task analysis of safety at the bedside.

    PubMed

    Brannon, Timothy S

    2006-01-01

    Continuous infusion intravenous (IV) drugs in neonatal intensive care are usually prepared based on patient weight so that the dose is readable as a simple multiple of the infusion pump rate. New safety guidelines propose that hospitals switch to using standardized admixtures of these drugs to prevent calculation errors during ad hoc preparation. Extended hierarchical task analysis suggests that switching to standardized admixtures may lead to more errors in programming the pump at the bedside.

  18. Ad Hoc versus Standardized Admixtures for Continuous Infusion Drugs in Neonatal Intensive Care: Cognitive Task Analysis of Safety at the Bedside

    PubMed Central

    Brannon, Timothy S.

    2006-01-01

    Continuous infusion intravenous (IV) drugs in neonatal intensive care are usually prepared based on patient weight so that the dose is readable as a simple multiple of the infusion pump rate. New safety guidelines propose that hospitals switch to using standardized admixtures of these drugs to prevent calculation errors during ad hoc preparation. Extended hierarchical task analysis suggests that switching to standardized admixtures may lead to more errors in programming the pump at the bedside. PMID:17238482
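
    A hedged sketch of the kind of weight-based ad hoc calculation both abstracts allude to, in the style of the classic "rule of six" (illustrative only, not dosing guidance): the drug amount added to a 100 mL admixture is chosen so that a pump rate of 1 mL/h delivers 1 microgram/kg/min.

    ```python
    def mg_drug_in_100ml(weight_kg):
        # 6 * weight in mg per 100 mL -> 1 mL/h == 1 microgram/kg/min
        return 6.0 * weight_kg

    # a 2 kg neonate: 12 mg in 100 mL = 120 micrograms/mL,
    # so 1 mL/h delivers 2 micrograms/min = 1 microgram/kg/min
    print(mg_drug_in_100ml(2.0))
    ```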

  19. Comparison of Predictive Modeling Methods of Aircraft Landing Speed

    NASA Technical Reports Server (NTRS)

    Diallo, Ousmane H.

    2012-01-01

    Expected increases in air traffic demand have stimulated the development of air traffic control tools intended to assist the air traffic controller in accurately and precisely spacing aircraft landing at congested airports. Such tools will require an accurate landing-speed prediction to increase throughput while decreasing necessary controller interventions for avoiding separation violations. There are many practical challenges to developing an accurate landing-speed model that has acceptable prediction errors. This paper discusses the development of a near-term implementation, using readily available information, to estimate/model final approach speed from the top of the descent phase of flight to the landing runway. As a first approach, all variables found to contribute directly to the landing-speed prediction model are used to build a multi-regression technique of the response surface equation (RSE). Data obtained from the operations of a major airline for a passenger transport aircraft type at the Dallas/Fort Worth International Airport are used to predict the landing speed. The approach was promising because it decreased the standard deviation of the landing-speed prediction error by at least 18% from the standard deviation of the baseline error, depending on the gust condition at the airport. However, when the number of variables is reduced to those most likely obtainable at other major airports, the RSE model shows little improvement over the existing methods. Consequently, a neural network that relies on a nonlinear regression technique is utilized as an alternative modeling approach. For the reduced-variable cases, the standard deviation of the neural network model errors represents over a 5% reduction compared to the RSE model errors, and at least a 10% reduction over the baseline predicted landing-speed error standard deviation. Overall, the constructed models predict the landing speed more accurately and precisely than the current state-of-the-art.

  20. Design of an all-attitude flight control system to execute commanded bank angles and angles of attack

    NASA Technical Reports Server (NTRS)

    Burgin, G. H.; Eggleston, D. M.

    1976-01-01

    A flight control system for use in air-to-air combat simulation was designed. The inputs to the flight control system are commanded bank angle and angle of attack; the outputs are commands to the control-surface actuators such that the commanded values are achieved in near-minimum time while sideslip is controlled to remain small. For the longitudinal direction, a conventional linear control system with gains scheduled as a function of dynamic pressure is employed. For the lateral direction, a novel control system, consisting of a linear portion for small bank-angle errors and a bang-bang control system for large errors and error rates, is employed.
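
    The lateral structure described above can be sketched as a simple switching law: proportional-derivative action near the commanded bank angle, full-deflection bang-bang action far from it. Gains, threshold, and switching surface below are illustrative assumptions, not values from the report.

    ```python
    def aileron_command(err, err_rate, kp=1.2, kd=0.4, threshold=0.2, u_max=1.0, tau=0.5):
        if abs(err) < threshold:
            return -(kp * err + kd * err_rate)   # linear region: small errors
        s = err + tau * err_rate                 # switching function for bang-bang
        return -u_max if s > 0 else u_max        # large errors: full deflection
    ```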

  1. Accuracy of acoustic velocity metering systems for measurement of low velocity in open channels

    USGS Publications Warehouse

    Laenen, Antonius; Curtis, R. E.

    1989-01-01

    Acoustic velocity meter (AVM) accuracy depends on equipment limitations, the accuracy of acoustic-path length and angle determination, and the stability of the mean velocity to acoustic-path velocity relation. Equipment limitations depend on path length and angle, transducer frequency, timing oscillator frequency, and signal-detection scheme. Typically, the velocity error from this source is about ±1 to ±10 mm/s. Error in acoustic-path angle or length will result in a proportional measurement bias. Typically, an angle error of one degree will result in a velocity error of 2%, and a path-length error of one meter in 100 meters will result in an error of 1%. Ray bending (signal refraction) depends on path length and density gradients present in the stream. Any deviation from a straight acoustic path between transducers will change the unique relation between path velocity and mean velocity. These deviations will then introduce error in the mean velocity computation. Typically, for a 200-meter path length, the resultant error is less than one percent, but for a 1,000-meter path length, the error can be greater than 10%. Recent laboratory and field tests have substantiated assumptions of equipment limitations. Tow-tank tests of an AVM system with a 4.69-meter path length yielded an average standard deviation error of 9.3 mm/s, and field tests of an AVM system with a 20.5-meter path length yielded an average standard deviation error of 4 mm/s. (USGS)
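
    The quoted sensitivity to path-angle error follows from simple error propagation: the line velocity scales as 1/cos(theta), so dv/v = tan(theta)·d(theta). A quick check, assuming a typical 45° path angle:

    ```python
    import numpy as np

    theta = np.radians(45.0)          # acoustic-path angle to the flow direction
    d_theta = np.radians(1.0)         # one-degree angle error
    rel_error = np.tan(theta) * d_theta
    print(f"{rel_error:.1%}")         # ~1.7%, consistent with the ~2% figure quoted
    ```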

  2. End-of-Kindergarten Spelling Outcomes: How Can Spelling Error Analysis Data Inform Beginning Reading Instruction?

    PubMed

    Lee, Julia Ai Cheng; Otaiba, Stephanie Al

    2017-01-01

    In this article, the authors examined the spelling performance of 430 kindergarteners, which included a high-risk sample, to determine the relations between end-of-kindergarten reading and spelling in a high-quality language arts setting. The spelling outcomes, including the spelling errors of the good and the poor readers, were described, analyzed, and compared. The findings suggest that not all the children had acquired the desired standard as outlined by the Common Core State Standards. In addition, not every good reader is a good speller, and not every poor speller is a poor reader. The study shows that spelling tasks that are accompanied by spelling error analysis provide a powerful window for making instructional sense of children's spelling errors and for individualizing spelling instructional strategies.

  3. Validation of self-reported start year of mobile phone use in a Swedish case-control study on radiofrequency fields and acoustic neuroma risk.

    PubMed

    Pettersson, David; Bottai, Matteo; Mathiesen, Tiit; Prochazka, Michaela; Feychting, Maria

    2015-01-01

    The possible effect of radiofrequency exposure from mobile phones on tumor risk has been studied since the late 1990s. Yet, empirical information about recall of the start of mobile phone use among adult cases and controls has never been reported. Limited knowledge about recall errors hampers interpretations of the epidemiological evidence. We used network operator data to validate the self-reported start year of mobile phone use in a case-control study of mobile phone use and acoustic neuroma risk. The answers of 96 (29%) cases and 111 (22%) controls could be included in the validation. The larger proportion of cases reflects a more complete and detailed reporting of subscription history. Misclassification was substantial, with large random errors, small systematic errors, and no significant differences between cases and controls. The average difference between self-reported and operator start year was -0.62 (95% confidence interval: -1.42, 0.17) years for cases and -0.71 (-1.50, 0.07) years for controls, standard deviations were 3.92 and 4.17 years, respectively. Agreement between self-reported and operator-recorded data categorized into short, intermediate and long-term use was moderate (kappa statistic: 0.42). Should an association exist, dilution of risk estimates and distortion of exposure-response patterns for time since first mobile phone use could result from the large random errors in self-reported start year. Retrospective collection of operator data likely leads to a selection of "good reporters", with a higher proportion of cases. Thus, differential recall cannot be entirely excluded.

  4. Improving estimates of streamflow characteristics by using Landsat-1 imagery

    USGS Publications Warehouse

    Hollyday, Este F.

    1976-01-01

    Imagery from the first Earth Resources Technology Satellite (renamed Landsat-1) was used to discriminate physical features of drainage basins in an effort to improve equations used to estimate streamflow characteristics at gaged and ungaged sites. Records of 20 gaged basins in the Delmarva Peninsula of Maryland, Delaware, and Virginia were analyzed for 40 statistical streamflow characteristics. Equations relating these characteristics to basin characteristics were obtained by a technique of multiple linear regression. A control group of equations contains basin characteristics derived from maps. An experimental group of equations contains basin characteristics derived from maps and imagery. Characteristics from imagery were forest, riparian (streambank) vegetation, water, and combined agricultural and urban land use. These basin characteristics were isolated photographically by techniques of film-density discrimination. The area of each characteristic in each basin was measured photometrically. Comparison of equations in the control group with corresponding equations in the experimental group reveals that for 12 out of 40 equations the standard error of estimate was reduced by more than 10 percent. As an example, the standard error of estimate of the equation for the 5-year recurrence-interval flood peak was reduced from 46 to 32 percent. Similarly, the standard error of the equation for the mean monthly flow for September was reduced from 32 to 24 percent, the standard error for the 7-day, 2-year recurrence low flow was reduced from 136 to 102 percent, and the standard error for the 3-day, 2-year flood volume was reduced from 30 to 12 percent. It is concluded that data from Landsat imagery can substantially improve the accuracy of estimates of some streamflow characteristics at sites in the Delmarva Peninsula.

  5. Agreement between Two Methods of Dietary Data Collection in Male Adolescent Academy-Level Soccer Players

    PubMed Central

    Briggs, Marc A.; Rumbold, Penny L. S.; Cockburn, Emma; Russell, Mark; Stevenson, Emma J.

    2015-01-01

    Collecting accurate and reliable nutritional data from adolescent populations is challenging, with current methods providing significant under-reporting. Therefore, the aim of the study was to determine the accuracy of a combined dietary data collection method (self-reported weighed food diary, supplemented with a 24-h recall) when compared to researcher observed energy intake in male adolescent soccer players. Twelve Academy players from an English Football League club participated in the study. Players attended a 12 h period in the laboratory (08:00 h–20:00 h), during which food and drink items were available and were consumed ad libitum. Food was also provided to consume at home between 20:00 h and 08:00 h the following morning under free-living conditions. To calculate the participant reported energy intake, food and drink items were weighed and recorded in a food diary by each participant, which was supplemented with information provided through a 24-h recall interview the following morning. Linear regression, limits of agreement (LOA) and typical error (coefficient of variation; CV) were used to quantify agreement between observer and participant reported 24-h energy intake. Difference between methods was assessed using a paired samples t-test. Participants systematically under-reported energy intake in comparison to that observed (p < 0.01) but the magnitude of this bias was small and consistent (mean bias = −88 kcal·day⁻¹, 95% CI for bias = −146 to −29 kcal·day⁻¹). For random error, the 95% LOA between methods ranged between −1.11 to 0.37 MJ·day⁻¹ (−256 to 88 kcal·day⁻¹). The standard error of the estimate was low, with a typical error between measurements of 3.1%. These data suggest that the combined dietary data collection method could be used interchangeably with the gold standard observed food intake technique in the population studied providing that appropriate adjustment is made for the systematic under-reporting common to such methods. PMID:26193315
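
    A hedged sketch of the agreement statistics reported above (mean bias, Bland-Altman 95% limits of agreement, and typical error expressed as a coefficient of variation); the input arrays are placeholders for observed and reported daily energy intakes.

    ```python
    import numpy as np

    def agreement(observed, reported):
        observed = np.asarray(observed, float)
        reported = np.asarray(reported, float)
        d = reported - observed                      # reporting error per subject
        bias, sd = d.mean(), d.std(ddof=1)
        loa = (bias - 1.96 * sd, bias + 1.96 * sd)   # 95% limits of agreement
        typical_error = sd / np.sqrt(2.0)
        cv = 100.0 * typical_error / np.mean((observed + reported) / 2.0)
        return bias, loa, cv
    ```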

  6. Carbapenem Susceptibility Testing Errors Using Three Automated Systems, Disk Diffusion, Etest, and Broth Microdilution and Carbapenem Resistance Genes in Isolates of Acinetobacter baumannii-calcoaceticus Complex

    DTIC Science & Technology

    2011-10-01

    Phoenix, and Vitek 2 systems). Discordant results were categorized as very major errors (VME), major errors (ME), and minor errors (mE). DNA sequences... The Vitek 2 method was the only automated susceptibility method in our study that satisfied the FDA standards required for device approval (11).

  7. What Randomized Benchmarking Actually Measures

    DOE PAGES

    Proctor, Timothy; Rudinger, Kenneth; Young, Kevin; ...

    2017-09-28

    Randomized benchmarking (RB) is widely used to measure an error rate of a set of quantum gates, by performing random circuits that would do nothing if the gates were perfect. In the limit of no finite-sampling error, the exponential decay rate of the observable survival probabilities, versus circuit length, yields a single error metric r. For Clifford gates with arbitrary small errors described by process matrices, r was believed to reliably correspond to the mean, over all Clifford gates, of the average gate infidelity between the imperfect gates and their ideal counterparts. We show that this quantity is not a well-defined property of a physical gate set. It depends on the representations used for the imperfect and ideal gates, and the variant typically computed in the literature can differ from r by orders of magnitude. We present new theories of the RB decay that are accurate for all small errors describable by process matrices, and show that the RB decay curve is a simple exponential for all such errors. These theories allow explicit computation of the error rate that RB measures (r), but as far as we can tell it does not correspond to the infidelity of a physically allowed (completely positive) representation of the imperfect gates.
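
    For orientation, the conventional analysis the paper critiques fits the survival probabilities to an exponential decay and converts the decay constant to an error rate. A hedged sketch with synthetic data standing in for measurements:

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def decay(m, A, B, p):
        return A + B * p**m

    lengths = np.array([1, 2, 4, 8, 16, 32, 64, 128], float)
    rng = np.random.default_rng(0)
    surv = decay(lengths, 0.5, 0.5, 0.98) + rng.normal(0, 0.005, lengths.size)

    (A, B, p), _ = curve_fit(decay, lengths, surv, p0=[0.5, 0.5, 0.99])
    d = 2                            # single-qubit Hilbert-space dimension
    r = (d - 1) / d * (1 - p)        # the conventional conversion the paper questions
    print(r)
    ```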

  8. Differential sea-state bias: A case study using TOPEX/POSEIDON data

    NASA Technical Reports Server (NTRS)

    Stewart, Robert H.; Devalla, B.

    1994-01-01

    We used selected data from the NASA altimeter TOPEX/POSEIDON to calculate differences in range measured by the C and Ku-band altimeters when the satellite overflew 5 to 15 m waves late at night. The range difference is due to free electrons in the ionosphere and to errors in sea-state bias. For the selected data the ionospheric influence on Ku range is less than 2 cm. Any difference in range over short horizontal distances is due only to a small along-track variability of the ionosphere and to errors in calculating the differential sea-state bias. We find that there is a barely detectable error in the bias in the geophysical data records. The wave-induced error in the ionospheric correction is less than 0.2% of significant wave height. The equivalent error in differential range is less than 1% of wave height. Errors in the differential sea-state bias calculations appear to be small even for extreme wave heights that greatly exceed the conditions on which the bias is based. The results also improved our confidence in the sea-state bias correction used for calculating the geophysical data records. Any error in the correction must influence Ku and C-band ranges almost equally.

  9. Laboratory errors and patient safety.

    PubMed

    Miligy, Dawlat A

    2015-01-01

    Laboratory data are extensively used in medical practice; consequently, laboratory errors have a tremendous impact on patient safety. Therefore, programs designed to identify and reduce laboratory errors, as well as specific strategies, are required to minimize these errors and improve patient safety. The purpose of this paper is to identify some of the commonly encountered laboratory errors in laboratory work, their hazards to patient health care, and some measures and recommendations to minimize or eliminate these errors. The laboratory errors encountered during May 2008 were recorded and statistically evaluated (using simple percent distribution) in the laboratory department of a private hospital in Egypt. Errors were classified according to the laboratory phases and according to their implication for patient health. Data obtained from 1,600 testing procedures revealed a total of 14 errors (0.87 percent of total testing procedures). Most of the encountered errors lay in the pre- and post-analytic phases of the testing cycle (representing 35.7 and 50 percent of total errors, respectively), while errors in the analytic phase represented only 14.3 percent of total errors. About 85.7 percent of total errors had no significant implication for patient health, being detected before test reports were submitted to the patients. On the other hand, errors in reports that had already been submitted to patients and reached the physician represented 14.3 percent of total errors. Only 7.1 percent of the errors could have had an impact on patient diagnosis. The findings of this study are concomitant with those published from the USA and other countries. This proves that laboratory problems are universal and need general standardization and benchmarking measures. This is the first such data published from an Arab country evaluating encountered laboratory errors, and it highlights the great need for universal standardization and benchmarking measures to control laboratory work.

  10. Flight Calibration of four airspeed systems on a swept-wing airplane at Mach numbers up to 1.04 by the NACA radar-phototheodolite method

    NASA Technical Reports Server (NTRS)

    Thompson, Jim Rogers; Bray, Richard S.; Cooper, George E.

    1950-01-01

    The calibrations of four airspeed systems installed in a North American F-86A airplane have been determined in flight at Mach numbers up to 1.04 by the NACA radar-phototheodolite method. The variation of the static-pressure error per unit indicated impact pressure is presented for three systems typical of those currently in use in flight research, a nose boom and two different wing-tip booms, and for the standard service system installed in the airplane. A limited amount of information on the effect of airplane normal-force coefficient on the static-pressure error is included. The results are compared with available theory and with results from wind-tunnel tests of the airspeed heads alone. Of the systems investigated, a nose-boom installation was found to be most suitable for research use at transonic and low supersonic speeds because it provided the greatest sensitivity of the indicated Mach number to a unit change in true Mach number at very high subsonic speeds, and because it was least sensitive to changes in airplane normal-force coefficient. The static-pressure error of the nose-boom system was small and constant above a Mach number of 1.03 after passage of the fuselage bow shock wave over the airspeed head.

  11. Description and preliminary results of a 100 square meter rain gauge

    NASA Astrophysics Data System (ADS)

    Grimaldi, Salvatore; Petroselli, Andrea; Baldini, Luca; Gorgucci, Eugenio

    2018-01-01

    Rainfall is one of the most crucial processes in hydrology, and direct and indirect rainfall measurement methods are constantly being updated and improved. The standard instrument used to measure rainfall rate and accumulation is the rain gauge, which provides direct observations. Though the small dimension of the orifice allows rain gauges to be installed almost anywhere, it also causes errors due to splash and wind effects. To investigate the role of the orifice dimension, this study introduces and demonstrates, for the first time, an apparatus for observing rainfall called a giant rain gauge, characterised by a collecting surface of 100 m². To discuss the new instrument and its technical details, a preliminary analysis of 26 rainfall events is provided. The results suggest that there are significant differences between the standard and proposed rain gauges. Specifically, major discrepancies are evident for low time-aggregation scales (5, 10, and 15 min) and for high rainfall intensity values.

  12. Spatiotemporal matrix image formation for programmable ultrasound scanners

    NASA Astrophysics Data System (ADS)

    Berthon, Beatrice; Morichau-Beauchant, Pierre; Porée, Jonathan; Garofalakis, Anikitos; Tavitian, Bertrand; Tanter, Mickael; Provost, Jean

    2018-02-01

    As programmable ultrasound scanners become more common in research laboratories, it is increasingly important to develop robust software-based image formation algorithms that can be obtained in a straightforward fashion for different types of probes and sequences with a small risk of error during implementation. In this work, we argue that as computational power keeps increasing, it is becoming practical to directly implement an approximation to the matrix operator linking reflector point targets to the corresponding radiofrequency signals via thoroughly validated and widely available simulation software. Once such a spatiotemporal forward-problem matrix is constructed, standard and thus highly optimized inversion procedures can be leveraged to achieve very high quality images in real time. Specifically, we show that spatiotemporal matrix image formation produces images of similar or enhanced quality when compared against standard delay-and-sum approaches in phantoms and in vivo, and show that this approach can be used to form images even when using non-conventional probe designs for which adapted image formation algorithms are not readily available.
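
    A minimal sketch of the idea, with a random matrix standing in for the simulated spatiotemporal operator: A maps point-target reflectivities to stacked RF samples, and a standard regularized least-squares solve recovers the image. In the paper's approach, A would be built from a validated ultrasound simulator for the actual probe and sequence; the Tikhonov weight here is an illustrative choice.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    m, n = 400, 100                               # RF samples, image pixels
    A = rng.normal(size=(m, n))                   # stand-in forward matrix
    x_true = np.zeros(n)
    x_true[[10, 55]] = 1.0                        # two point targets
    rf = A @ x_true + 0.05 * rng.normal(size=m)   # noisy "measured" channel data

    lam = 1.0                                     # Tikhonov regularization weight
    x_hat = np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ rf)
    ```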

  13. The 1982 control network of Mars

    NASA Technical Reports Server (NTRS)

    Davies, M. E.; Katayama, F. Y.

    1983-01-01

    Attention is given to a planet-wide control network of Mars that was computed in September 1982 using a large single-block analytical triangulation with 47,524 measurements of 6853 control points on 1054 Mariner 9 and 757 Viking pictures. In all, 19,139 normal equations were solved, with a resulting standard error of measurement of 18.06 microns. The control points identified by name and letter designation are given, as are the aerographic coordinates of the control points. In addition, the coordinates of the Viking I lander site are given: latitude, 22.480 deg; longitude, 47.962 deg (radius, 3389.32 km). This study expands and updates the previously published network (1978). It is noted that the computation differs in many respects from standard aerial mapping photogrammetric practice. In comparison with aerial mapping photography, the television formats are small and the focal lengths are long; stereo coverage is rare, the scale of the pictures varies greatly, and the residual camera distortions are large.

  14. A refined method for multivariate meta-analysis and meta-regression

    PubMed Central

    Jackson, Daniel; Riley, Richard D

    2014-01-01

    Making inferences about the average treatment effect using the random effects model for meta-analysis is problematic in the common situation where there is a small number of studies. This is because estimates of the between-study variance are not precise enough to accurately apply the conventional methods for testing and deriving a confidence interval for the average effect. We have found that a refined method for univariate meta-analysis, which applies a scaling factor to the estimated effects’ standard error, provides more accurate inference. We explain how to extend this method to the multivariate scenario and show that our proposal for refined multivariate meta-analysis and meta-regression can provide more accurate inferences than the more conventional approach. We explain how our proposed approach can be implemented using standard output from multivariate meta-analysis software packages and apply our methodology to two real examples. © 2013 The Authors. Statistics in Medicine published by John Wiley & Sons, Ltd. PMID:23996351
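
    In the univariate case, the refinement amounts to rescaling the pooled estimate's variance by a weighted residual mean square and using a t critical value, in the style of Hartung-Knapp; the sketch below is a hedged univariate illustration (the paper's contribution is the multivariate extension).

    ```python
    import numpy as np
    from scipy import stats

    def refined_meta(y, v, alpha=0.05):
        """y: study effect estimates, v: their within-study variances."""
        y, v = np.asarray(y, float), np.asarray(v, float)
        k, w = len(y), 1.0 / v
        mu_fe = np.sum(w * y) / np.sum(w)
        Q = np.sum(w * (y - mu_fe) ** 2)                   # heterogeneity statistic
        tau2 = max(0.0, (Q - (k - 1)) /
                   (np.sum(w) - np.sum(w**2) / np.sum(w)))  # DerSimonian-Laird
        ws = 1.0 / (v + tau2)
        mu = np.sum(ws * y) / np.sum(ws)
        # refined (scaled) variance of the pooled estimate
        var_refined = np.sum(ws * (y - mu) ** 2) / ((k - 1) * np.sum(ws))
        half = stats.t.ppf(1 - alpha / 2, k - 1) * np.sqrt(var_refined)
        return mu, (mu - half, mu + half)
    ```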

  15. Quantification of free convection effects on 1 kg mass standards

    NASA Astrophysics Data System (ADS)

    Schreiber, M.; Emran, M. S.; Fröhlich, T.; Schumacher, J.; Thess, A.

    2015-12-01

    We determine the free-convection effects and the resulting mass differences in a high-precision mass comparator for cylindrical and spherical 1 kg mass standards at different air pressures. The temperature differences are chosen in the millikelvin range and lead to microgram updrafts. Our studies reveal a good agreement between the measurements and direct numerical simulations of the Boussinesq equations of free thermal convection. A higher sensitivity to the free convection effects is found for the spherical case compared to the cylindrical one. We also translate our results on the free convection effects into a form which is used in fluid mechanics: a dimensionless updraft coefficient as a function of the dimensionless Grashof number Gr that quantifies the thermal driving due to temperature differences. This relation displays a unique scaling behavior over nearly four decades in Gr and levels off into geometry-specific constants for the very small Grashof numbers. The obtained results provide a rational framework for estimating systematic errors in mass metrology due to the effects of free convection.

  16. Use of units of measurement error in anthropometric comparisons.

    PubMed

    Lucas, Teghan; Henneberg, Maciej

    2017-09-01

    Anthropometrists attempt to minimise measurement errors; however, errors cannot be eliminated entirely. Currently, measurement errors are simply reported. Measurement errors should instead be incorporated into analyses of anthropometric data. This study proposes a method which incorporates measurement errors into reported values, replacing metric units with 'units of technical error of measurement (TEM)', and applies it to forensics, industrial anthropometry and biological variation. The USA armed forces anthropometric survey (ANSUR) contains 132 anthropometric dimensions of 3982 individuals. Concepts of duplication and Euclidean distance calculations were applied to the forensic-style identification of individuals in this survey. The National Size and Shape Survey of Australia contains 65 anthropometric measurements of 1265 women. This sample was used to show how a woman's body measurements expressed in TEM could be 'matched' to standard clothing sizes. Euclidean distances show that two sets of repeated anthropometric measurements of the same person cannot be matched on measurements expressed in millimetres (distance > 0) but can be matched in units of TEM (distance = 0). Only 81 women can fit into any standard clothing size when matched using centimetres; with units of TEM, 1944 women fit. The proposed method can be applied to all fields that use anthropometry. Units of TEM are considered a more reliable unit of measurement for comparisons.
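
    One plausible reading of the matching rule, sketched under the assumption that measurements are discretized to whole units of TEM before computing the Euclidean distance (so differences smaller than measurement error collapse to zero):

    ```python
    import numpy as np

    def tem_distance(a, b, tem):
        ua = np.round(np.asarray(a, float) / np.asarray(tem, float))
        ub = np.round(np.asarray(b, float) / np.asarray(tem, float))
        return np.linalg.norm(ua - ub)

    # two repeated measurements of the same person (mm), TEM = 3 mm per dimension
    print(tem_distance([900, 1200, 450], [901, 1201, 449], tem=[3, 3, 3]))  # 0.0
    ```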

  17. Finite Time Control Design for Bilateral Teleoperation System With Position Synchronization Error Constrained.

    PubMed

    Yang, Yana; Hua, Changchun; Guan, Xinping

    2016-03-01

    Due to the cognitive limitations of the human operator and the lack of complete information about the remote environment, the performance of such teleoperation systems cannot be guaranteed in most cases. However, some practical tasks conducted by teleoperation systems require high performance; for example, tele-surgery needs high speed and high-precision control to safeguard the patient's health. To obtain satisfactory performance, error-constrained control is employed by applying the barrier Lyapunov function (BLF). With constrained synchronization errors, several high performances, such as high convergence speed, small overshoot, and an arbitrarily predefined small residual constrained synchronization error, can be achieved simultaneously. Nevertheless, as with many classical control schemes, only asymptotic/exponential convergence, i.e., the synchronization errors converging to zero as time goes to infinity, can be achieved with error-constrained control. Finite-time convergence is clearly more desirable. To obtain a finite-time synchronization performance, a terminal sliding mode (TSM)-based finite-time control method is developed for a teleoperation system with position error constrained in this paper. First, a new nonsingular fast terminal sliding mode (NFTSM) surface with new transformed synchronization errors is proposed. Second, an adaptive neural network system is applied to deal with the system uncertainties and external disturbances. Third, the BLF is applied to prove stability and the nonviolation of the synchronization error constraints. Finally, comparisons are conducted in simulation, and experiment results are presented to show the effectiveness of the proposed method.

  18. EDITORIAL: Nanoscale metrology Nanoscale metrology

    NASA Astrophysics Data System (ADS)

    Picotto, G. B.; Koenders, L.; Wilkening, G.

    2009-08-01

    Instrumentation and measurement techniques at the nanoscale play a crucial role not only in extending our knowledge of the properties of matter and processes in nanosciences, but also in addressing new measurement needs in process control and quality assurance in industry. Micro- and nanotechnologies are now facing a growing demand for quantitative measurements to support the reliability, safety and competitiveness of products and services. Quantitative measurements presuppose reliable and stable instruments and measurement procedures as well as suitable calibration artefacts to ensure the quality of measurements and traceability to standards. This special issue of Measurement Science and Technology presents selected contributions from the Nanoscale 2008 seminar held at the Istituto Nazionale di Ricerca Metrologica (INRIM), Torino, in September 2008. This was the 4th Seminar on Nanoscale Calibration Standards and Methods and the 8th Seminar on Quantitative Microscopy (the first being held in 1995). The seminar was jointly organized by the Nanometrology Group within EUROMET (The European Collaboration in Measurement Standards), the German Nanotechnology Competence Centre 'Ultraprecise Surface Figuring' (CC-UPOB), the Physikalisch-Technische Bundesanstalt (PTB) and INRIM. A special event during the seminar was the 'knighting' of Günter Wilkening from PTB, Braunschweig, Germany, as the 1st Knight of Dimensional Nanometrology. Günter Wilkening received the NanoKnight Award for his outstanding work in the field of dimensional nanometrology over the last 20 years. The contributions in this special issue deal with the developments and improvements of instrumentation and measurement methods for scanning force microscopy (SFM), electron and optical microscopy, high-resolution interferometry, calibration of instruments and new standards, new facilities and applications including critical dimension (CD) measurements on small and medium structures and nanoparticle characterization. The papers in the first part report on new or improved instrumentation, details of developments of metrology SFM, improvements to SFM, probes and scanning methods in the direction of nanoscale coordinate measuring machines and true 3D measurements as well as of progress of a 2D encoder based on a regular crystalline lattice. To ensure traceability to the SI unit of length many highly sophisticated instruments are equipped with laser interferometers to measure small displacements in the nanometre range very accurately. Improving these techniques is still a challenge and therefore new interferometric techniques are considered in several papers as well as improved sensors for nanodisplacement measurements or the development of a deep UV microscope for micro- and nanostructures. The tactile measurement of small structures also calls for a better control of forces in the nano- and piconewton range. A nanoforce facility, based on a disk-pendulum with electrostatic stiffness reduction and electrostatic force compensation, is presented for the measurement of small forces. In the second part the contributions are related to calibration and correction strategies and standards such as the development of test objects based on 3D silicon structures, and of samples with irregular surface profiles, and their use for calibration. 
The shape of the tip and its influence on measurements is still a contentious issue and addressed in several papers: use of nanospheres for tip characterization, a geometrical approach for reconstruction errors by tactile probing. Molecular dynamical calculations, classical as well as ab initio (based on density functional theory), are used to discuss effects of tip-sample relaxation on the topography and to have a better base from which to estimate uncertainties in measurements of small particles or features. Some papers report about measurements of air refractivity fluctuations by phase modulation interferometry, angle-scale traceability by laser diffractometry, and an error separation method. The development of 3D surface roughness measurement standards from scratches is considered in one contribution. Here a 2D autoregressive model was used to generate the software gauge data, which were used as a base for the manufacturing process by diamond turning. Contributions in the third part deal with applications including CD measurements on small and medium structures, the characterization of nanoparticles with a diameter less than 200 nm by electron microscopy, chemical nanoscale metrology by TXRF and a study of the strength of nanotube bundles. We would like to thank all the authors for their contributions, and the referees for their time spent reviewing all the papers and for making their valuable and helpful comments. Additional thanks are extended to all involved in the production of this issue for their help and support.

  19. How personal standards perfectionism and evaluative concerns perfectionism affect the error positivity and post-error behavior with varying stimulus visibility.

    PubMed

    Drizinsky, Jessica; Zülch, Joachim; Gibbons, Henning; Stahl, Jutta

    2016-10-01

    Error detection is required in order to correct or avoid imperfect behavior. Although error detection is beneficial for some people, for others it might be disturbing. We investigated Gaudreau and Thompson's (Personality and Individual Differences, 48, 532-537, 2010) model, which combines personal standards perfectionism (PSP) and evaluative concerns perfectionism (ECP). In our electrophysiological study, 43 participants performed a combination of a modified Simon task, an error awareness paradigm, and a masking task with a variation of stimulus onset asynchrony (SOA; 33, 67, and 100 ms). Interestingly, relative to low-ECP participants, high-ECP participants showed a better post-error accuracy (despite a worse classification accuracy) in the high-visibility SOA 100 condition than in the two low-visibility conditions (SOA 33 and SOA 67). Regarding the electrophysiological results, first, we found a positive correlation between ECP and the amplitude of the error positivity (Pe) under conditions of low stimulus visibility. Second, under the condition of high stimulus visibility, we observed a higher Pe amplitude for high-ECP-low-PSP participants than for high-ECP-high-PSP participants. These findings are discussed within the framework of the error-processing avoidance hypothesis of perfectionism (Stahl, Acharki, Kresimon, Völler, & Gibbons, International Journal of Psychophysiology, 97, 153-162, 2015).

  20. Prediction of matching condition for a microstrip subsystem using artificial neural network and adaptive neuro-fuzzy inference system

    NASA Astrophysics Data System (ADS)

    Salehi, Mohammad Reza; Noori, Leila; Abiri, Ebrahim

    2016-11-01

    In this paper, a subsystem consisting of a microstrip bandpass filter and a microstrip low noise amplifier (LNA) is designed for WLAN applications. The proposed filter has a small implementation area (49 mm²), small insertion loss (0.08 dB) and wide fractional bandwidth (FBW) (61%). To design the proposed LNA, compact microstrip cells, a field-effect transistor, and only one lumped capacitor are used. It has a low supply voltage and a low return loss (−40 dB) at the operation frequency. The matching condition of the proposed subsystem is predicted using subsystem analysis, an artificial neural network (ANN) and an adaptive neuro-fuzzy inference system (ANFIS). To design the proposed filter, the transmission matrix of the proposed resonator is obtained and analysed. The performance of the proposed ANN and ANFIS models is tested against the numerical data using four performance measures, namely the correlation coefficient (CC), the mean absolute error (MAE), the average percentage error (APE) and the root mean square error (RMSE). The obtained results show that these models are in good agreement with the numerical data, with only a small error between the predicted values and the numerical solution.
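
    The four performance measures named above have standard definitions; a hedged sketch (the paper may use slightly different normalizations):

    ```python
    import numpy as np

    def performance(y_true, y_pred):
        y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
        err = y_true - y_pred
        cc = np.corrcoef(y_true, y_pred)[0, 1]        # correlation coefficient
        mae = np.mean(np.abs(err))                    # mean absolute error
        ape = 100.0 * np.mean(np.abs(err / y_true))   # average percentage error
        rmse = np.sqrt(np.mean(err ** 2))             # root mean square error
        return cc, mae, ape, rmse
    ```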
