Simplified Approach Charts Improve Data Retrieval Performance
Stewart, Michael; Laraway, Sean; Jordan, Kevin; Feary, Michael S.
2016-01-01
The effectiveness of different instrument approach charts to deliver minimum visibility and altitude information during airport equipment outages was investigated. Eighteen pilots flew simulated instrument approaches in three conditions: (a) normal operations using a standard approach chart (standard-normal), (b) equipment outage conditions using a standard approach chart (standard-outage), and (c) equipment outage conditions using a prototype decluttered approach chart (prototype-outage). Errors and retrieval times in identifying minimum altitudes and visibilities were measured. The standard-outage condition produced significantly more errors and longer retrieval times versus the standard-normal condition. The prototype-outage condition had significantly fewer errors and shorter retrieval times than did the standard-outage condition. The prototype-outage condition produced significantly fewer errors but similar retrieval times when compared with the standard-normal condition. Thus, changing the presentation of minima may reduce risk and increase safety in instrument approaches, specifically with airport equipment outages. PMID:28491009
ERIC Educational Resources Information Center
Wang, Tianyou; And Others
M. J. Kolen, B. A. Hanson, and R. L. Brennan (1992) presented a procedure for assessing the conditional standard error of measurement (CSEM) of scale scores using a strong true-score model. They also investigated the ways of using nonlinear transformation from number-correct raw score to scale score to equalize the conditional standard error along…
ERIC Educational Resources Information Center
Woodruff, David; Traynor, Anne; Cui, Zhongmin; Fang, Yu
2013-01-01
Professional standards for educational testing recommend that both the overall standard error of measurement and the conditional standard error of measurement (CSEM) be computed on the score scale used to report scores to examinees. Several methods have been developed to compute scale score CSEMs. This paper compares three methods, based on…
Performance monitoring and error significance in patients with obsessive-compulsive disorder.
Endrass, Tanja; Schuermann, Beate; Kaufmann, Christan; Spielberg, Rüdiger; Kniesche, Rainer; Kathmann, Norbert
2010-05-01
Performance monitoring has been consistently found to be overactive in obsessive-compulsive disorder (OCD). The present study examines whether performance monitoring in OCD is adjusted with error significance. Therefore, errors in a flanker task were followed by neutral feedback (standard condition) or punishment feedback (punishment condition). In the standard condition, patients had significantly larger error-related negativity (ERN) and correct-related negativity (CRN) amplitudes than controls. In the punishment condition, however, the groups did not differ in ERN and CRN amplitudes. While healthy controls showed an amplitude enhancement between the standard and punishment conditions, OCD patients showed no variation. In contrast, group differences were not found for the error positivity (Pe): both groups had larger Pe amplitudes in the punishment condition. Results confirm earlier findings of overactive error monitoring in OCD. The absence of a variation with error significance might indicate that OCD patients are unable to down-regulate their monitoring activity according to external requirements. Copyright 2010 Elsevier B.V. All rights reserved.
Price, Larry R; Raju, Nambury; Lurie, Anna; Wilkins, Charles; Zhu, Jianjun
2006-02-01
A specific recommendation of the 1999 Standards for Educational and Psychological Testing by the American Educational Research Association, the American Psychological Association, and the National Council on Measurement in Education is that test publishers report estimates of the conditional standard error of measurement (SEM). Procedures for calculating the conditional (score-level) SEM based on raw scores are well documented; however, few procedures have been developed for estimating the conditional SEM of subtest or composite scale scores resulting from a nonlinear transformation. Item response theory provided the psychometric foundation to derive the conditional standard errors of measurement and confidence intervals for composite scores on the Wechsler Preschool and Primary Scale of Intelligence-Third Edition.
Conditional Standard Errors of Measurement for Scale Scores.
ERIC Educational Resources Information Center
Kolen, Michael J.; And Others
1992-01-01
A procedure is described for estimating the reliability and conditional standard errors of measurement of scale scores incorporating the discrete transformation of raw scores to scale scores. The method is illustrated using a strong true score model, and practical applications are described. (SLD)
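As a general illustration of how such a procedure works (a hedged sketch, not the specific model reported in the paper), a conditional standard error for a scale score can be computed from the conditional distribution of the number-correct score implied by a strong true-score model:

```latex
% Illustrative only: a binomial conditional distribution is assumed here;
% strong true-score models typically use a beta-binomial or compound binomial form.
\[
\Pr(X = x \mid \tau) = \binom{n}{x}\,\tau^{x}(1-\tau)^{\,n-x}, \qquad
\mathrm{CSEM}_{s}(\tau) = \sqrt{\sum_{x=0}^{n}\bigl[s(x)-\bar{s}(\tau)\bigr]^{2}\,\Pr(X = x \mid \tau)},
\]
where $n$ is the number of items, $s(x)$ is the discrete raw-to-scale transformation, and
$\bar{s}(\tau)=\sum_{x=0}^{n} s(x)\,\Pr(X = x \mid \tau)$ is the conditional mean scale score.
```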
Stabilizing Conditional Standard Errors of Measurement in Scale Score Transformations
ERIC Educational Resources Information Center
Moses, Tim; Kim, YoungKoung
2017-01-01
The focus of this article is on scale score transformations that can be used to stabilize conditional standard errors of measurement (CSEMs). Three transformations for stabilizing the estimated CSEMs are reviewed, including the traditional arcsine transformation, a recently developed general variance stabilization transformation, and a new method…
Liu, Xiaofeng Steven
2011-05-01
The use of covariates is commonly believed to reduce the unexplained error variance and the standard error for the comparison of treatment means, but the reduction in the standard error is neither guaranteed nor uniform over different sample sizes. The covariate mean differences between the treatment conditions can inflate the standard error of the covariate-adjusted mean difference and can actually produce a larger standard error for the adjusted mean difference than that for the unadjusted mean difference. When the covariate observations are conceived of as randomly varying from one study to another, the covariate mean differences can be related to a Hotelling's T(2) . Using this Hotelling's T(2) statistic, one can always find a minimum sample size to achieve a high probability of reducing the standard error and confidence interval width for the adjusted mean difference. ©2010 The British Psychological Society.
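The inflation described above can be seen in the textbook expression for the variance of the covariate-adjusted mean difference in a single-covariate ANCOVA (shown here as a generic illustration; the notation is not taken from the article):

```latex
\[
\operatorname{Var}\!\bigl(\hat{\Delta}_{\mathrm{adj}}\bigr)
= \sigma_{\mathrm{adj}}^{2}\left(\frac{1}{n_{1}}+\frac{1}{n_{2}}
+\frac{(\bar{X}_{1}-\bar{X}_{2})^{2}}{\sum_{g=1}^{2}\sum_{i=1}^{n_{g}}(X_{gi}-\bar{X}_{g})^{2}}\right),
\]
where $\sigma_{\mathrm{adj}}^{2}$ is the residual variance after covariate adjustment. The covariate
reduces $\sigma_{\mathrm{adj}}^{2}$, but the third term grows with the covariate mean difference
between conditions, which is why the adjusted comparison can end up with a larger standard error
than the unadjusted one.
```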
Chou, C P; Bentler, P M; Satorra, A
1991-11-01
Research studying robustness of maximum likelihood (ML) statistics in covariance structure analysis has concluded that test statistics and standard errors are biased under severe non-normality. An estimation procedure known as asymptotic distribution free (ADF), making no distributional assumption, has been suggested to avoid these biases. Corrections to the normal theory statistics to yield more adequate performance have also been proposed. This study compares the performance of a scaled test statistic and robust standard errors for two models under several non-normal conditions and also compares these with the results from ML and ADF methods. Both ML and ADF test statistics performed rather well in one model and considerably worse in the other. In general, the scaled test statistic seemed to behave better than the ML test statistic and the ADF statistic performed the worst. The robust and ADF standard errors yielded more appropriate estimates of sampling variability than the ML standard errors, which were usually downward biased, in both models under most of the non-normal conditions. ML test statistics and standard errors were found to be quite robust to the violation of the normality assumption when data had either symmetric and platykurtic distributions, or non-symmetric and zero kurtotic distributions.
Drizinsky, Jessica; Zülch, Joachim; Gibbons, Henning; Stahl, Jutta
2016-10-01
Error detection is required in order to correct or avoid imperfect behavior. Although error detection is beneficial for some people, for others it might be disturbing. We investigated Gaudreau and Thompson's (Personality and Individual Differences, 48, 532-537, 2010) model, which combines personal standards perfectionism (PSP) and evaluative concerns perfectionism (ECP). In our electrophysiological study, 43 participants performed a combination of a modified Simon task, an error awareness paradigm, and a masking task with a variation of stimulus onset asynchrony (SOA; 33, 67, and 100 ms). Interestingly, relative to low-ECP participants, high-ECP participants showed a better post-error accuracy (despite a worse classification accuracy) in the high-visibility SOA 100 condition than in the two low-visibility conditions (SOA 33 and SOA 67). Regarding the electrophysiological results, first, we found a positive correlation between ECP and the amplitude of the error positivity (Pe) under conditions of low stimulus visibility. Second, under the condition of high stimulus visibility, we observed a higher Pe amplitude for high-ECP-low-PSP participants than for high-ECP-high-PSP participants. These findings are discussed within the framework of the error-processing avoidance hypothesis of perfectionism (Stahl, Acharki, Kresimon, Völler, & Gibbons, International Journal of Psychophysiology, 97, 153-162, 2015).
A Criterion to Control Nonlinear Error in the Mixed-Mode Bending Test
NASA Technical Reports Server (NTRS)
Reeder, James R.
2002-01-01
The mixed-mode bending test has been widely used to measure delamination toughness and was recently standardized by ASTM as Standard Test Method D6671-01. This simple test is a combination of the standard Mode I (opening) test and a Mode II (sliding) test. This test uses a unidirectional composite test specimen with an artificial delamination subjected to bending loads to characterize when a delamination will extend. When the displacements become large, the linear theory used to analyze the results of the test yields errors in the calculated toughness values. The current standard places no limit on the specimen loading, and therefore test data that are significantly in error can be created using the standard. A method of limiting the error that can be incurred in the calculated toughness values is needed. In this paper, nonlinear models of the MMB test are refined. One of the nonlinear models is then used to develop a simple criterion for prescribing conditions where the nonlinear error will remain below 5%.
Raymond, Mark R; Clauser, Brian E; Furman, Gail E
2010-10-01
The use of standardized patients to assess communication skills is now an essential part of assessing a physician's readiness for practice. To improve the reliability of communication scores, it has become increasingly common in recent years to use statistical models to adjust ratings provided by standardized patients. This study employed ordinary least squares regression to adjust ratings, and then used generalizability theory to evaluate the impact of these adjustments on score reliability and the overall standard error of measurement. In addition, conditional standard errors of measurement were computed for both observed and adjusted scores to determine whether the improvements in measurement precision were uniform across the score distribution. Results indicated that measurement was generally less precise for communication ratings toward the lower end of the score distribution, and the improvement in measurement precision afforded by statistical modeling varied slightly across the score distribution such that the most improvement occurred in the upper-middle range of the score scale. Possible reasons for these patterns in measurement precision are discussed, as are the limitations of the statistical models used for adjusting performance ratings.
Conditional Standard Errors of Measurement for Composite Scores Using IRT
ERIC Educational Resources Information Center
Kolen, Michael J.; Wang, Tianyou; Lee, Won-Chan
2012-01-01
Composite scores are often formed from test scores on educational achievement test batteries to provide a single index of achievement over two or more content areas or two or more item types on that test. Composite scores are subject to measurement error, and as with scores on individual tests, the amount of error variability typically depends on…
Spencer, Bruce D
2012-06-01
Latent class models are increasingly used to assess the accuracy of medical diagnostic tests and other classifications when no gold standard is available and the true state is unknown. When the latent class is treated as the true class, the latent class models provide measures of components of accuracy including specificity and sensitivity and their complements, type I and type II error rates. The error rates according to the latent class model differ from the true error rates, however, and empirical comparisons with a gold standard suggest the true error rates often are larger. We investigate conditions under which the true type I and type II error rates are larger than those provided by the latent class models. Results from Uebersax (1988, Psychological Bulletin 104, 405-416) are extended to accommodate random effects and covariates affecting the responses. The results are important for interpreting the results of latent class analyses. An error decomposition is presented that incorporates an error component from invalidity of the latent class model. © 2011, The International Biometric Society.
NASA Technical Reports Server (NTRS)
Litvin, Faydor L.; Tsay, Chung-Biau
1987-01-01
The authors have proposed a method for the generation of circular arc helical gears which is based on the application of standard equipment, worked out all aspects of the geometry of the gears, proposed methods for the computer-aided simulation of conditions of meshing and bearing contact, investigated the influence of manufacturing and assembly errors, and proposed methods for the adjustment of gears to these errors. The results of computer-aided solutions are illustrated with computer graphics.
Ribic, C.A.; Miller, T.W.
1998-01-01
We investigated CART performance with a unimodal response curve for one continuous response and four continuous explanatory variables, where two variables were important (i.e., directly related to the response) and the other two were not. We explored performance under three relationship strengths and two explanatory variable conditions: equal importance and one variable four times as important as the other. We compared CART variable selection performance using three tree-selection rules ('minimum risk', 'minimum risk complexity', 'one standard error') to stepwise polynomial ordinary least squares (OLS) under four sample size conditions. The one-standard-error and minimum-risk-complexity methods performed about as well as stepwise OLS with large sample sizes when the relationship was strong. With weaker relationships, equally important explanatory variables and larger sample sizes, the one-standard-error and minimum-risk-complexity rules performed better than stepwise OLS. With weaker relationships and explanatory variables of unequal importance, tree-structured methods did not perform as well as stepwise OLS. Comparing performance within tree-structured methods, the one-standard-error rule was more likely to choose the correct model than were the other tree-selection rules: 1) with a strong relationship and equally important explanatory variables; 2) with weaker relationships and equally important explanatory variables; and 3) under all relationship strengths when explanatory variables were of unequal importance and sample sizes were lower.
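For readers unfamiliar with the one-standard-error rule referenced above, the following sketch (illustrative simulated data and a scikit-learn-based implementation, not the authors' code) selects the simplest pruned regression tree whose cross-validated error is within one standard error of the minimum:

```python
# Hedged sketch of the "one standard error" tree-selection rule on simulated data
# with a unimodal response and two irrelevant explanatory variables.
import numpy as np
from sklearn.tree import DecisionTreeRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.uniform(-2, 2, size=(300, 4))                      # vars 0,1 important; 2,3 noise
y = np.exp(-(X[:, 0] ** 2 + X[:, 1] ** 2)) + 0.1 * rng.standard_normal(300)

# Candidate trees along the cost-complexity pruning path.
alphas = DecisionTreeRegressor(random_state=0).cost_complexity_pruning_path(X, y).ccp_alphas

means, ses = [], []
for a in alphas:
    cv = -cross_val_score(DecisionTreeRegressor(random_state=0, ccp_alpha=a),
                          X, y, cv=5, scoring="neg_mean_squared_error")
    means.append(cv.mean())
    ses.append(cv.std(ddof=1) / np.sqrt(len(cv)))
means, ses = np.array(means), np.array(ses)

best = means.argmin()
# Largest alpha (most pruning, simplest tree) whose CV error is within one SE of the minimum.
one_se_alpha = alphas[np.nonzero(means <= means[best] + ses[best])[0].max()]
print(f"minimum-risk alpha = {alphas[best]:.4f}, one-SE alpha = {one_se_alpha:.4f}")
```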
Errors from approximation of ODE systems with reduced order models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vassilevska, Tanya
2016-12-30
This is a code to calculate the error from approximation of systems of ordinary differential equations (ODEs) by using Proper Orthogonal Decomposition (POD) Reduced Order Models (ROM) methods and to compare and analyze the errors for two POD ROM variants. The first variant is the standard POD ROM, the second variant is a modification of the method using the values of the time derivatives (a.k.a. time-derivative snapshots). The code compares the errors from the two variants under different conditions.
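The following is a minimal, self-contained sketch of the kind of comparison the code performs (an illustrative linear ODE system with assumed dimensions; this is not the OSTI code itself): build a POD basis from solution snapshots, build a second basis that also uses time-derivative snapshots, and compare the reduced-model errors.

```python
# Minimal sketch: compare the error of a standard POD reduced-order model with one
# whose basis also uses time-derivative snapshots, for a linear ODE system dx/dt = A x.
import numpy as np
from scipy.integrate import solve_ivp

rng = np.random.default_rng(0)
n, r = 200, 10                      # full and reduced dimensions (assumed values)
A = -np.diag(np.linspace(0.1, 5.0, n)) + 0.01 * rng.standard_normal((n, n))
x0 = rng.standard_normal(n)
t_eval = np.linspace(0.0, 5.0, 101)

full = solve_ivp(lambda t, x: A @ x, (0, 5), x0, t_eval=t_eval).y  # snapshot matrix (n, 101)

def pod_basis(snapshots, r):
    """Leading r left singular vectors of the snapshot matrix."""
    U, _, _ = np.linalg.svd(snapshots, full_matrices=False)
    return U[:, :r]

def rom_solution(basis):
    """Galerkin-project the ODE onto the basis and re-solve the reduced system."""
    Ar = basis.T @ A @ basis
    z = solve_ivp(lambda t, z: Ar @ z, (0, 5), basis.T @ x0, t_eval=t_eval).y
    return basis @ z

U_std = pod_basis(full, r)                                   # standard POD
U_dot = pod_basis(np.hstack([full, A @ full]), r)            # adds derivative snapshots
for name, U in [("standard POD", U_std), ("derivative-snapshot POD", U_dot)]:
    err = np.linalg.norm(rom_solution(U) - full) / np.linalg.norm(full)
    print(f"{name}: relative L2 error = {err:.2e}")
```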
New dimension analyses with error analysis for quaking aspen and black spruce
NASA Technical Reports Server (NTRS)
Woods, K. D.; Botkin, D. B.; Feiveson, A. H.
1987-01-01
Dimension analysis for black spruce in wetland stands and trembling aspen are reported, including new approaches in error analysis. Biomass estimates for sacrificed trees have standard errors of 1 to 3%; standard errors for leaf areas are 10 to 20%. Bole biomass estimation accounts for most of the error for biomass, while estimation of branch characteristics and area/weight ratios accounts for the leaf area error. Error analysis provides insight for cost effective design of future analyses. Predictive equations for biomass and leaf area, with empirically derived estimators of prediction error, are given. Systematic prediction errors for small aspen trees and for leaf area of spruce from different site-types suggest a need for different predictive models within species. Predictive equations are compared with published equations; significant differences may be due to species responses to regional or site differences. Proportional contributions of component biomass in aspen change in ways related to tree size and stand development. Spruce maintains comparatively constant proportions with size, but shows changes corresponding to site. This suggests greater morphological plasticity of aspen and significance for spruce of nutrient conditions.
Molavi Tabrizi, Amirhossein; Goossens, Spencer; Mehdizadeh Rahimi, Ali; Cooper, Christopher D; Knepley, Matthew G; Bardhan, Jaydeep P
2017-06-13
We extend the linearized Poisson-Boltzmann (LPB) continuum electrostatic model for molecular solvation to address charge-hydration asymmetry. Our new solvation-layer interface condition (SLIC)/LPB corrects for first-shell response by perturbing the traditional continuum-theory interface conditions at the protein-solvent and the Stern-layer interfaces. We also present a GPU-accelerated treecode implementation capable of simulating large proteins, and our results demonstrate that the new model exhibits significant accuracy improvements over traditional LPB models, while reducing the number of fitting parameters from dozens (atomic radii) to just five parameters, which have physical meanings related to first-shell water behavior at an uncharged interface. In particular, atom radii in the SLIC model are not optimized but uniformly scaled from their Lennard-Jones radii. Compared to explicit-solvent free-energy calculations of individual atoms in small molecules, SLIC/LPB is significantly more accurate than standard parametrizations (RMS error 0.55 kcal/mol for SLIC, compared to RMS error of 3.05 kcal/mol for standard LPB). On parametrizing the electrostatic model with a simple nonpolar component for total molecular solvation free energies, our model predicts octanol/water transfer free energies with an RMS error 1.07 kcal/mol. A more detailed assessment illustrates that standard continuum electrostatic models reproduce total charging free energies via a compensation of significant errors in atomic self-energies; this finding offers a window into improving the accuracy of Generalized-Born theories and other coarse-grained models. Most remarkably, the SLIC model also reproduces positive charging free energies for atoms in hydrophobic groups, whereas standard PB models are unable to generate positive charging free energies regardless of the parametrized radii. The GPU-accelerated solver is freely available online, as is a MATLAB implementation.
2016-01-01
Background: It is often thought that random measurement error has a minor effect upon the results of an epidemiological survey. Theoretically, errors of measurement should always increase the spread of a distribution. Defining an illness by having a measurement outside an established healthy range will lead to an inflated prevalence of that condition if there are measurement errors. Methods and results: A Monte Carlo simulation was conducted of anthropometric assessment of children with malnutrition. Random errors of increasing magnitude were imposed upon the populations and showed that there was an increase in the standard deviation with each of the errors that became exponentially greater with the magnitude of the error. The potential magnitude of the resulting error in reported prevalence of malnutrition was compared with published international data and found to be of sufficient magnitude to make a number of surveys, and the numerous reports and analyses that used these data, unreliable. Conclusions: The effect of random error in public health surveys and the data upon which diagnostic cut-off points are derived to define "health" has been underestimated. Even quite modest random errors can more than double the reported prevalence of conditions such as malnutrition. Increasing sample size does not address this problem, and may even result in less accurate estimates. More attention needs to be paid to the selection, calibration and maintenance of instruments, measurer selection, training and supervision, routine estimation of the likely magnitude of errors using standardization tests, use of statistical likelihood of error to exclude data from analysis, and full reporting of these procedures in order to judge the reliability of survey reports. PMID:28030627
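A minimal Monte Carlo sketch of the mechanism described above (illustrative distribution and cut-off; not the simulation used in the paper) shows how random measurement error inflates both the standard deviation and the below-cutoff prevalence:

```python
# Add random measurement error to simulated z-scores and observe both the standard
# deviation and the below-cutoff "prevalence" inflate. Numbers are illustrative only.
import numpy as np

rng = np.random.default_rng(1)
true_z = rng.normal(loc=-0.5, scale=1.0, size=100_000)   # assumed true z-score distribution
cutoff = -2.0                                            # e.g. wasting defined as z < -2

for sd_error in [0.0, 0.2, 0.4, 0.6]:                    # assumed random-error magnitudes
    observed = true_z + rng.normal(0.0, sd_error, size=true_z.size)
    prevalence = np.mean(observed < cutoff)
    print(f"error SD {sd_error:.1f}: observed SD {observed.std():.2f}, "
          f"prevalence below cutoff {100 * prevalence:.1f}%")
```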
ERIC Educational Resources Information Center
Cole, Russell; Haimson, Joshua; Perez-Johnson, Irma; May, Henry
2011-01-01
State assessments are increasingly used as outcome measures for education evaluations. The scaling of state assessments produces variability in measurement error, with the conditional standard error of measurement increasing as average student ability moves toward the tails of the achievement distribution. This report examines the variability in…
Bolann, B J; Asberg, A
2004-01-01
The deviation of test results from patients' homeostatic set points in steady-state conditions may complicate interpretation of the results and the comparison of results with clinical decision limits. In this study the total deviation from the homeostatic set point is defined as the maximum absolute deviation for 95% of measurements, and we present analytical quality requirements that prevent analytical error from increasing this deviation to more than about 12% above the value caused by biology alone. These quality requirements are: 1) the stable systematic error should be approximately 0, and 2) a systematic error that will be detected by the control program with 90% probability should not be larger than half the value of the combined analytical and intra-individual standard deviation. As a result, when the most common control rules are used, the analytical standard deviation may be up to 0.15 times the intra-individual standard deviation. Analytical improvements beyond these requirements have little impact on the interpretability of measurement results.
NASA Astrophysics Data System (ADS)
Lin, Xiaomei; Chang, Penghui; Chen, Gehua; Lin, Jingjun; Liu, Ruixiang; Yang, Hao
2015-11-01
Our recent work has determined the carbon content in a melting ferroalloy by laser-induced breakdown spectroscopy (LIBS). The emission spectrum of carbon that we obtained in the laboratory is suitable for carbon content determination in a melting ferroalloy, but we cannot obtain the expected results when the method is applied under industrial conditions: there is always an unacceptable error of around 4% between the actual value and the measured value. Comparing the measurement conditions in industry with those in the laboratory shows that the temperature of the molten ferroalloy samples to be measured is constant under laboratory conditions, while it decreases gradually under industrial conditions. Temperature, however, has a considerable impact on the measurement of carbon content, and this is the reason why there is always an error between the actual value and the measured value. In this paper we compare the errors of carbon content determination at different temperatures to find the optimum reference temperature range which can better fit the requirements of industrial conditions and, hence, make the measurement more accurate. The comparative analyses show that the measured value of the carbon content in the molten state (1620 K) is consistent with the nominal value of the solid standard sample (error within 0.7%); in fact, it is the most accurate measurement in the solid state. Based on this, we can effectively improve the accuracy of measurements in the laboratory and can provide a reference temperature standard for measurements under industrial conditions. This work was supported by the National Natural Science Foundation of China (No. 51374040) and by Laser-Induced Plasma Spectroscopy Equipment Development and Application, China (No. 2014YQ120351).
Initializing a Mesoscale Boundary-Layer Model with Radiosonde Observations
NASA Astrophysics Data System (ADS)
Berri, Guillermo J.; Bertossa, Germán
2018-01-01
A mesoscale boundary-layer model is used to simulate low-level regional wind fields over the La Plata River of South America, a region characterized by a strong daily cycle of land-river surface-temperature contrast and low-level circulations of sea-land breeze type. The initial and boundary conditions are defined from a limited number of local observations, and the upper boundary condition is taken from the only radiosonde observations available in the region. The study considers 14 different upper boundary conditions defined from the radiosonde data at standard levels, significant levels, the level of the inversion base, and interpolated levels at fixed heights, all of them within the first 1500 m. The period of analysis is 1994-2008, during which eight daily observations from 13 weather stations of the region are used to validate the 24-h surface-wind forecast. The model errors are defined as the root-mean-square of the relative error in wind-direction frequency distribution and mean wind speed per wind sector. Wind-direction errors are greater than wind-speed errors and show significant dispersion among the different upper boundary conditions, not present in wind speed, revealing a sensitivity to the initialization method. The wind-direction errors show a well-defined daily cycle, not evident in wind speed, with the minimum at noon and the maximum at dusk, but no systematic deterioration with time. The errors grow with the height of the upper boundary-condition level, particularly for wind direction, and are double the errors obtained when the upper boundary condition is defined from the lower levels. The conclusion is that defining the model upper boundary condition from radiosonde data closer to the ground minimizes the low-level wind-field errors throughout the region.
Hemispheric Differences in Processing Handwritten Cursive
ERIC Educational Resources Information Center
Hellige, Joseph B.; Adamson, Maheen M.
2007-01-01
Hemispheric asymmetry was examined for native English speakers identifying consonant-vowel-consonant (CVC) non-words presented in standard printed form, in standard handwritten cursive form or in handwritten cursive with the letters separated by small gaps. For all three conditions, fewer errors occurred when stimuli were presented to the right…
ERIC Educational Resources Information Center
Raymond, Mark R.; Clauser, Brian E.; Furman, Gail E.
2010-01-01
The use of standardized patients to assess communication skills is now an essential part of assessing a physician's readiness for practice. To improve the reliability of communication scores, it has become increasingly common in recent years to use statistical models to adjust ratings provided by standardized patients. This study employed ordinary…
Tracking Progress in Improving Diagnosis: A Framework for Defining Undesirable Diagnostic Events.
Olson, Andrew P J; Graber, Mark L; Singh, Hardeep
2018-01-29
Diagnostic error is a prevalent, harmful, and costly phenomenon. Multiple national health care and governmental organizations have recently identified the need to improve diagnostic safety as a high priority. A major barrier, however, is the lack of standardized, reliable methods for measuring diagnostic safety. Given the absence of reliable and valid measures for diagnostic errors, we need methods to help establish some type of baseline diagnostic performance across health systems, as well as to enable researchers and health systems to determine the impact of interventions for improving the diagnostic process. Multiple approaches have been suggested but none widely adopted. We propose a new framework for identifying "undesirable diagnostic events" (UDEs) that health systems, professional organizations, and researchers could further define and develop to enable standardized measurement and reporting related to diagnostic safety. We propose an outline for UDEs that identifies both conditions prone to diagnostic error and the contexts of care in which these errors are likely to occur. Refinement and adoption of this framework across health systems can facilitate standardized measurement and reporting of diagnostic safety.
Kessels, Roy P C; van Loon, Eke; Wester, Arie J
2007-10-01
To examine the errorless learning approach using a procedural memory task (i.e. learning of actual routes) in patients with amnesia, as compared to trial-and-error learning. Counterbalanced self-controlled case series. Psychiatric hospital (Korsakoff clinic). A convenience sample of 10 patients with the Korsakoff amnestic syndrome. All patients learned a route in four sessions on separate days using an errorless approach and a different route using trial and error. Error rate was scored during route learning and standard neuropsychological tests were administered (i.e. the route recall subtest of the Rivermead Behavioural Memory Test (RBMT) and the Dutch version of the California Verbal Learning Test (VLGT)). A significant learning effect was found in the trial-and-error condition over consecutive sessions (P = 0.006), but no performance difference was found between errorless and trial-and-error learning of the routes. VLGT performance was significantly correlated with a trial-and-error advantage (P < 0.05); no significant correlation was found between the RBMT subtest and the learning conditions. Errorless learning was no more successful than trial-and-error learning of a procedural spatial task in patients with the Korsakoff syndrome (severe amnesia).
Human error and the search for blame
NASA Technical Reports Server (NTRS)
Denning, Peter J.
1989-01-01
Human error is a frequent topic in discussions about risks in using computer systems. A rational analysis of human error leads through the consideration of mistakes to standards that designers use to avoid mistakes that lead to known breakdowns. The irrational side, however, is more interesting. It conditions people to think that breakdowns are inherently wrong and that there is ultimately someone who is responsible. This leads to a search for someone to blame which diverts attention from: learning from the mistakes; seeing the limitations of current engineering methodology; and improving the discourse of design.
Color constancy in dermatoscopy with smartphone
NASA Astrophysics Data System (ADS)
Cugmas, Blaž; Pernuš, Franjo; Likar, Boštjan
2017-12-01
The recent spread of cheap dermatoscopes for smartphones can empower patients to acquire images of skin lesions on their own and send them to dermatologists. Since images are acquired by different smartphone cameras under unique illumination conditions, variability in colors is expected. Therefore, mobile dermatoscopic systems should be calibrated in order to ensure color constancy in skin images. In this study, we tested a DermLite DL1 basic dermatoscope attached to a Samsung Galaxy S4 smartphone. Under controlled conditions, JPEG images of standard color patches were acquired and a model between an unknown device-dependent RGB and a device-independent Lab color space was built. The median and best color errors were 7.77 and 3.94, respectively. These results are in the range of human-eye detection capability (color error ≈ 4) and of video and printing industry standards (color error expected to be between 5 and 6). It can be concluded that a calibrated smartphone dermatoscope can provide sufficient color constancy and can serve as an interesting opportunity to bring dermatologists closer to the patients.
A GPS Phase-Locked Loop Performance Metric Based on the Phase Discriminator Output
Stevanovic, Stefan; Pervan, Boris
2018-01-01
We propose a novel GPS phase-lock loop (PLL) performance metric based on the standard deviation of tracking error (defined as the discriminator's estimate of the true phase error), and explain its advantages over the popular phase jitter metric using theory, numerical simulation, and experimental results. We derive an augmented PLL linear model, which includes the effect of coherent averaging, to be used in conjunction with this proposed metric. The augmented linear model allows more accurate calculation of the tracking error standard deviation in the presence of additive white Gaussian noise (AWGN) than traditional linear models. The standard deviation of tracking error, with a threshold corresponding to half of the arctangent discriminator pull-in region, is shown to be a more reliable and robust measure of PLL performance under interference conditions than the phase jitter metric. In addition, the augmented linear model is shown to be valid up until this threshold, which facilitates efficient performance prediction, so that time-consuming direct simulations and costly experimental testing can be reserved for PLL designs that are much more likely to be successful. The effect of varying receiver reference oscillator quality on the tracking error metric is also considered. PMID:29351250
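As a hedged illustration of the proposed metric (the notation and the two-quadrant arctangent assumption are ours, not taken verbatim from the paper), the tracking-error standard deviation and its threshold can be written as:

```latex
\[
\hat{\phi}_{e,k} = \arctan\!\left(\frac{Q_k}{I_k}\right), \qquad
\sigma_{\hat{\phi}_e} = \sqrt{\frac{1}{N}\sum_{k=1}^{N}\hat{\phi}_{e,k}^{\,2}}
\;\le\; 45^{\circ},
\]
where $I_k$ and $Q_k$ are the prompt in-phase and quadrature correlator outputs; the $45^{\circ}$
threshold is half of the $\pm 90^{\circ}$ pull-in region assumed for a two-quadrant arctangent
discriminator.
```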
NASA Astrophysics Data System (ADS)
Gosses, Moritz; Nowak, Wolfgang; Wöhling, Thomas
2018-05-01
In recent years, proper orthogonal decomposition (POD) has become a popular model reduction method in the field of groundwater modeling. It is used to mitigate the problem of long run times that are often associated with physically-based modeling of natural systems, especially for parameter estimation and uncertainty analysis. POD-based techniques reproduce groundwater head fields sufficiently accurate for a variety of applications. However, no study has investigated how POD techniques affect the accuracy of different boundary conditions found in groundwater models. We show that the current treatment of boundary conditions in POD causes inaccuracies for these boundaries in the reduced models. We provide an improved method that splits the POD projection space into a subspace orthogonal to the boundary conditions and a separate subspace that enforces the boundary conditions. To test the method for Dirichlet, Neumann and Cauchy boundary conditions, four simple transient 1D-groundwater models, as well as a more complex 3D model, are set up and reduced both by standard POD and POD with the new extension. We show that, in contrast to standard POD, the new method satisfies both Dirichlet and Neumann boundary conditions. It can also be applied to Cauchy boundaries, where the flux error of standard POD is reduced by its head-independent contribution. The extension essentially shifts the focus of the projection towards the boundary conditions. Therefore, we see a slight trade-off between errors at model boundaries and overall accuracy of the reduced model. The proposed POD extension is recommended where exact treatment of boundary conditions is required.
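A minimal numerical sketch of the splitting idea (synthetic snapshots and a single Dirichlet node are assumed here; this is not the authors' implementation) is:

```python
# Split the POD projection space into a part that spans the boundary-condition pattern
# and a complement orthogonal to it, so the reduced model can enforce the boundary value.
import numpy as np

rng = np.random.default_rng(2)
n_nodes, n_snaps, r = 100, 40, 6
snapshots = rng.standard_normal((n_nodes, n_snaps))      # placeholder head snapshots
g = np.zeros(n_nodes); g[0] = 1.0                        # assumed Dirichlet-boundary pattern

# Subspace 1: the (normalized) boundary pattern itself.
q_bc = g / np.linalg.norm(g)

# Subspace 2: POD basis of the snapshots after removing their component along q_bc.
snap_orth = snapshots - np.outer(q_bc, q_bc @ snapshots)
U, _, _ = np.linalg.svd(snap_orth, full_matrices=False)
basis = np.column_stack([q_bc, U[:, :r]])                # boundary-enforcing + orthogonal parts

# Reconstruction of a field with a prescribed boundary value h_bc is exact at the boundary node.
h_bc = 2.5
field = rng.standard_normal(n_nodes); field[0] = h_bc
coeffs = basis.T @ field
print("boundary value reproduced:", np.isclose((basis @ coeffs)[0], h_bc))
```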
Mehta, Saurabh P; George, Hannah R; Goering, Christian A; Shafer, Danielle R; Koester, Alan; Novotny, Steven
2017-11-01
Clinical measurement study. The push-off test (POT) was recently conceived and found to be reliable and valid for assessing weight bearing through an injured wrist or elbow. However, further research with a larger sample can lend credence to the preliminary findings supporting the use of the POT. This study examined the interrater reliability, construct validity, and measurement error for the POT in patients with wrist conditions. Participants with musculoskeletal (MSK) wrist conditions were recruited. Performance on the POT, grip strength, and isometric wrist extensor strength were assessed. The shortened version of the Disabilities of the Arm, Shoulder and Hand and the numeric pain rating scale were completed. The intraclass correlation coefficient assessed interrater reliability of the POT. Pearson correlation coefficients (r) examined the concurrent relationships between the POT and other measures. The standard error of measurement and the minimal detectable change at the 90% confidence interval were assessed as measurement error and as an index of true change for the POT. A total of 50 participants with different elbow or wrist conditions (age: 48.1 ± 16.6 years) were included in this study. The results of this study strongly supported the interrater reliability (intraclass correlation coefficient: 0.96 and 0.93 for the affected and unaffected sides, respectively) of the POT in patients with wrist MSK conditions. The POT showed convergent relationships with grip strength on the injured side (r = 0.89) and with wrist extensor strength (r = 0.7). The POT showed a small standard error of measurement (1.9 kg). The minimal detectable change at the 90% confidence interval for the POT was 4.4 kg for the sample. This study provides additional evidence to support the reliability and validity of the POT. This is the first study that provides values for measurement error and true change on POT scores in patients with wrist MSK conditions. Further research should examine the responsiveness and discriminant validity of the POT in patients with wrist conditions. Copyright © 2017 Hanley & Belfus. Published by Elsevier Inc. All rights reserved.
Yandayan, T; Geckeler, R D; Aksulu, M; Akgoz, S A; Ozgur, B
2016-05-01
The application of advanced error-separating shearing techniques to the precise calibration of autocollimators with Small Angle Generators (SAGs) was carried out for the first time. The experimental realization was achieved using the High Precision Small Angle Generator (HPSAG) of TUBITAK UME under classical dimensional metrology laboratory environmental conditions. The standard uncertainty value of 5 mas (24.2 nrad) reached by classical calibration method was improved to the level of 1.38 mas (6.7 nrad). Shearing techniques, which offer a unique opportunity to separate the errors of devices without recourse to any external standard, were first adapted by Physikalisch-Technische Bundesanstalt (PTB) to the calibration of autocollimators with angle encoders. It has been demonstrated experimentally in a clean room environment using the primary angle standard of PTB (WMT 220). The application of the technique to a different type of angle measurement system extends the range of the shearing technique further and reveals other advantages. For example, the angular scales of the SAGs are based on linear measurement systems (e.g., capacitive nanosensors for the HPSAG). Therefore, SAGs show different systematic errors when compared to angle encoders. In addition to the error-separation of HPSAG and the autocollimator, detailed investigations on error sources were carried out. Apart from determination of the systematic errors of the capacitive sensor used in the HPSAG, it was also demonstrated that the shearing method enables the unique opportunity to characterize other error sources such as errors due to temperature drift in long term measurements. This proves that the shearing technique is a very powerful method for investigating angle measuring systems, for their improvement, and for specifying precautions to be taken during the measurements.
Palmer, Tom M; Holmes, Michael V; Keating, Brendan J; Sheehan, Nuala A
2017-01-01
Mendelian randomization studies use genotypes as instrumental variables to test for and estimate the causal effects of modifiable risk factors on outcomes. Two-stage residual inclusion (TSRI) estimators have been used when researchers are willing to make parametric assumptions. However, researchers are currently reporting uncorrected or heteroscedasticity-robust standard errors for these estimates. We compared several different forms of the standard error for linear and logistic TSRI estimates in simulations and in real-data examples. Among others, we consider standard errors modified from the approach of Newey (1987), Terza (2016), and bootstrapping. In our simulations Newey, Terza, bootstrap, and corrected 2-stage least squares (in the linear case) standard errors gave the best results in terms of coverage and type I error. In the real-data examples, the Newey standard errors were 0.5% and 2% larger than the unadjusted standard errors for the linear and logistic TSRI estimators, respectively. We show that TSRI estimators with modified standard errors have correct type I error under the null. Researchers should report TSRI estimates with modified standard errors instead of reporting unadjusted or heteroscedasticity-robust standard errors. PMID:29106476
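For illustration, a hedged sketch of a linear TSRI estimate with a bootstrap standard error on simulated data (the variable names and data-generating values are ours, not the study's) looks like this:

```python
# Linear two-stage residual inclusion (TSRI) estimate with a bootstrap standard error.
import numpy as np

rng = np.random.default_rng(3)
n = 5_000
z = rng.binomial(2, 0.3, n).astype(float)      # instrument (e.g. genotype coded 0/1/2)
u = rng.standard_normal(n)                     # unmeasured confounder
x = 0.5 * z + u + rng.standard_normal(n)       # exposure
y = 0.3 * x + u + rng.standard_normal(n)       # outcome; true causal effect = 0.3

def tsri_estimate(z, x, y):
    # Stage 1: regress exposure on instrument, keep residuals.
    Z1 = np.column_stack([np.ones_like(z), z])
    resid = x - Z1 @ np.linalg.lstsq(Z1, x, rcond=None)[0]
    # Stage 2: regress outcome on exposure plus stage-1 residuals.
    X2 = np.column_stack([np.ones_like(x), x, resid])
    return np.linalg.lstsq(X2, y, rcond=None)[0][1]   # coefficient on exposure

est = tsri_estimate(z, x, y)
boot = np.array([tsri_estimate(z[i], x[i], y[i])
                 for i in (rng.integers(0, n, n) for _ in range(500))])
print(f"TSRI estimate {est:.3f}, bootstrap SE {boot.std(ddof=1):.3f}")
```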
Land Surface Temperature Measurements from EOS MODIS Data
NASA Technical Reports Server (NTRS)
Wan, Zhengming
1996-01-01
We have developed a physics-based land-surface temperature (LST) algorithm for simultaneously retrieving surface band-averaged emissivities and temperatures from day/night pairs of MODIS (Moderate Resolution Imaging Spectroradiometer) data in seven thermal infrared bands. The set of 14 nonlinear equations in the algorithm is solved with the statistical regression method and the least-squares fit method. This new LST algorithm was tested with simulated MODIS data for 80 sets of band-averaged emissivities calculated from published spectral data of terrestrial materials in wide ranges of atmospheric and surface temperature conditions. Comprehensive sensitivity and error analysis has been made to evaluate the performance of the new LST algorithm and its dependence on variations in surface emissivity and temperature, upon atmospheric conditions, as well as the noise-equivalent temperature difference (NE(Delta)T) and calibration accuracy specifications of the MODIS instrument. In cases with a systematic calibration error of 0.5%, the standard deviations of errors in retrieved surface daytime and nighttime temperatures fall between 0.4-0.5 K over a wide range of surface temperatures for mid-latitude summer conditions. The standard deviations of errors in retrieved emissivities in bands 31 and 32 (in the 10-12.5 micrometer IR spectral window region) are 0.009, and the maximum error in retrieved LST values falls between 2-3 K. Several issues related to the day/night LST algorithm (uncertainties in the day/night registration and in surface emissivity changes caused by dew occurrence, and the cloud cover) have been investigated. The LST algorithms have been validated with MODIS Airborne Simulator (MAS) data and ground-based measurement data in two field campaigns conducted in Railroad Valley playa, NV in 1995 and 1996. The MODIS LST version 1 software has been delivered.
Uncertainty Evaluation of Residential Central Air-conditioning Test System
NASA Astrophysics Data System (ADS)
Li, Haoxue
2018-04-01
According to national standards, property tests of air conditioners are required. However, the test results can be influenced by the precision of the apparatus or by measurement errors. Therefore, an uncertainty evaluation of the property tests should be conducted. In this paper, the uncertainties are calculated for the property tests of a Xinfei 13.6 kW residential central air conditioner. The evaluation result shows that the property tests are credible.
Comparison of Predictive Modeling Methods of Aircraft Landing Speed
NASA Technical Reports Server (NTRS)
Diallo, Ousmane H.
2012-01-01
Expected increases in air traffic demand have stimulated the development of air traffic control tools intended to assist the air traffic controller in accurately and precisely spacing aircraft landing at congested airports. Such tools will require an accurate landing-speed prediction to increase throughput while decreasing necessary controller interventions for avoiding separation violations. There are many practical challenges to developing an accurate landing-speed model that has acceptable prediction errors. This paper discusses the development of a near-term implementation, using readily available information, to estimate/model final approach speed from the top of the descent phase of flight to the landing runway. As a first approach, all variables found to contribute directly to the landing-speed prediction are used to build a multiple-regression response surface equation (RSE) model. Data obtained from operations of a major airline for a passenger transport aircraft type at the Dallas/Fort Worth International Airport are used to predict the landing speed. The approach was promising because it decreased the standard deviation of the landing-speed prediction error by at least 18% from the standard deviation of the baseline error, depending on the gust condition at the airport. However, when the number of variables is reduced to those most likely obtainable at other major airports, the RSE model shows little improvement over the existing methods. Consequently, a neural network that relies on a nonlinear regression technique is utilized as an alternative modeling approach. For the reduced-variable cases, the standard deviation of the neural network model errors represents a reduction of over 5% compared to the RSE model errors, and of at least 10% compared to the baseline predicted landing-speed error standard deviation. Overall, the constructed models predict the landing speed more accurately and precisely than the current state of the art.
Methods for estimating streamflow at mountain fronts in southern New Mexico
Waltemeyer, S.D.
1994-01-01
The infiltration of streamflow is potential recharge to alluvial-basin aquifers at or near mountain fronts in southern New Mexico. Data for 13 streamflow-gaging stations were used to determine a relation between mean annual streamflow and basin and climatic conditions. Regression analysis was used to develop an equation that can be used to estimate mean annual streamflow on the basis of drainage areas and mean annual precipitation. The average standard error of estimate for this equation is 46 percent. Regression analysis also was used to develop an equation to estimate mean annual streamflow on the basis of active-channel width. Measurements of the width of active channels were determined for 6 of the 13 gaging stations. The average standard error of estimate for this relation is 29 percent. Streamflow estimates made using a regression equation based on channel geometry are considered more reliable than estimates made from an equation based on regional relations of basin and climatic conditions. The sample size used to develop these relations was small, however, and the reported standard error of estimate may not represent that of the entire population. Active-channel-width measurements were made at 23 ungaged sites along the Rio Grande upstream from Elephant Butte Reservoir. Data for additional sites would be needed for a more comprehensive assessment of mean annual streamflow in southern New Mexico.
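Regional regression equations of this kind are usually fitted in logarithmic space; a generic form (not the report's actual coefficients) is:

```latex
\[
\log_{10} Q_a \;=\; \beta_0 + \beta_1 \log_{10} A + \beta_2 \log_{10} P
\quad\Longleftrightarrow\quad
Q_a \;=\; 10^{\beta_0}\, A^{\beta_1} P^{\beta_2},
\]
where $Q_a$ is mean annual streamflow, $A$ is drainage area, and $P$ is mean annual precipitation;
the average standard error of estimate (46 percent here) is the customary way the fit of such
log-space regressions is reported.
```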
Kang, Le; Carter, Randy; Darcy, Kathleen; Kauderer, James; Liao, Shu-Yuan
2013-01-01
In this article we use a latent class model (LCM) with prevalence modeled as a function of covariates to assess diagnostic test accuracy in situations where the true disease status is not observed, but observations on three or more conditionally independent diagnostic tests are available. A fast Monte Carlo EM (MCEM) algorithm with binary (disease) diagnostic data is implemented to estimate parameters of interest; namely, sensitivity, specificity, and prevalence of the disease as a function of covariates. To obtain standard errors for confidence interval construction of estimated parameters, the missing information principle is applied to adjust information matrix estimates. We compare the adjusted information matrix based standard error estimates with the bootstrap standard error estimates both obtained using the fast MCEM algorithm through an extensive Monte Carlo study. Simulation demonstrates that the adjusted information matrix approach estimates the standard error similarly with the bootstrap methods under certain scenarios. The bootstrap percentile intervals have satisfactory coverage probabilities. We then apply the LCM analysis to a real data set of 122 subjects from a Gynecologic Oncology Group (GOG) study of significant cervical lesion (S-CL) diagnosis in women with atypical glandular cells of undetermined significance (AGC) to compare the diagnostic accuracy of a histology-based evaluation, a CA-IX biomarker-based test and a human papillomavirus (HPV) DNA test. PMID:24163493
The performance of the standard rate turn (SRT) by student naval helicopter pilots.
Chapman, F; Temme, L A; Still, D L
2001-04-01
During flight training, student naval helicopter pilots learn the use of flight instruments through a prescribed series of simulator training events. The training simulator is a 6-degrees-of-freedom, motion-based, high-fidelity instrument trainer. From the final basic instrument simulator flights of student pilots, we selected for evaluation and analysis their performance of the Standard Rate Turn (SRT), a routine flight maneuver. Performance of the SRT was scored with the average error from target values and the standard deviation for airspeed, altitude, and heading. These average errors and standard deviations were used in a multivariate analysis of variance (MANOVA) to evaluate the effects of three independent variables: 1) direction of turn (left vs. right), 2) degree of turn (180 vs. 360 degrees), and 3) segment of turn (roll-in, first 30 s, last 30 s, and roll-out of turn). Only the main effects of the three independent variables were significant; there were no significant interactions. This result greatly reduces the number of different conditions that should be scored separately for the evaluation of SRT performance. The results also showed that the magnitude of the heading and altitude errors at the beginning of the SRT correlated with the magnitude of the heading and altitude errors throughout the turn. This result suggests that for the turn to be well executed, it is important for it to begin with little error in these two response parameters. The observations reported here should be considered when establishing SRT performance norms and comparing student scores. Furthermore, it seems easier for pilots to maintain good performance than to correct poor performance.
NASA Astrophysics Data System (ADS)
Xu, Chong-yu; Tunemar, Liselotte; Chen, Yongqin David; Singh, V. P.
2006-06-01
The sensitivity of hydrological models to input data errors has been reported in the literature for particular models on a single catchment or a few catchments. A more important issue, namely how a model's response to input data errors changes as catchment conditions change, has not been addressed previously. This study investigates the seasonal and spatial effects of precipitation data errors on the performance of conceptual hydrological models. For this study, a monthly conceptual water balance model, NOPEX-6, was applied to 26 catchments in the Mälaren basin in Central Sweden. Both systematic and random errors were considered. For the systematic errors, 5-15% of mean monthly precipitation values were added to the original precipitation to form the corrupted input scenarios. Random values were generated by Monte Carlo simulation and were assumed to be (1) independent between months, and (2) distributed according to a Gaussian law of zero mean and constant standard deviation, taken as 5, 10, 15, 20, and 25% of the mean monthly standard deviation of precipitation. The results show that the response of the model parameters and model performance depends, among other factors, on the type of error, the magnitude of the error, the physical characteristics of the catchment, and the season of the year. In particular, the model appears less sensitive to random error than to systematic error. Catchments with smaller runoff coefficients were more influenced by input data errors than catchments with higher values. Dry months were more sensitive to precipitation errors than wet months. Recalibration of the model with erroneous data compensated in part for the data errors by altering the model parameters.
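A short sketch of how the two corrupted-input scenarios described above can be generated (illustrative precipitation series and magnitudes; not the study's data) is:

```python
# Impose a systematic error and a random (Gaussian) error on a monthly precipitation series.
import numpy as np

rng = np.random.default_rng(4)
precip = rng.gamma(shape=2.0, scale=30.0, size=120)      # assumed 10-year monthly series, mm

systematic = precip * 1.10                               # +10% systematic-error scenario
sigma = 0.15 * precip.std()                              # random error: 15% of the monthly SD
random_err = np.clip(precip + rng.normal(0.0, sigma, precip.size), 0.0, None)

print("mean precip:", precip.mean().round(1),
      "| systematic scenario:", systematic.mean().round(1),
      "| random scenario:", random_err.mean().round(1))
```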
Calibration of Contactless Pulse Oximetry
Bartula, Marek; Bresch, Erik; Rocque, Mukul; Meftah, Mohammed; Kirenko, Ihor
2017-01-01
BACKGROUND: Contactless, camera-based photoplethysmography (PPG) interrogates shallower skin layers than conventional contact probes, either transmissive or reflective. This raises questions on the calibratability of camera-based pulse oximetry. METHODS: We made video recordings of the foreheads of 41 healthy adults at 660 and 840 nm, and remote PPG signals were extracted. Subjects were in normoxic, hypoxic, and low temperature conditions. Ratio-of-ratios were compared to reference Spo2 from 4 contact probes. RESULTS: A calibration curve based on artifact-free data was determined for a population of 26 individuals. For an Spo2 range of approximately 83% to 100% and discarding short-term errors, a root mean square error of 1.15% was found with an upper 99% one-sided confidence limit of 1.65%. Under normoxic conditions, a decrease in ambient temperature from 23 to 7°C resulted in a calibration error of 0.1% (±1.3%, 99% confidence interval) based on measurements for 3 subjects. PPG signal strengths varied strongly among individuals from about 0.9 × 10−3 to 4.6 × 10−3 for the infrared wavelength. CONCLUSIONS: For healthy adults, the results present strong evidence that camera-based contactless pulse oximetry is fundamentally feasible because long-term (eg, 10 minutes) error stemming from variation among individuals expressed as A*rms is significantly lower (<1.65%) than that required by the International Organization for Standardization standard (<4%) with the notion that short-term errors should be added. A first illustration of such errors has been provided with A**rms = 2.54% for 40 individuals, including 6 with dark skin. Low signal strength and subject motion present critical challenges that will have to be addressed to make camera-based pulse oximetry practically feasible. PMID:27258081
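The quantity underlying the calibration is the standard ratio-of-ratios; in generic form (the coefficients below are placeholders, not the study's fitted calibration curve):

```latex
\[
R \;=\; \frac{(AC/DC)_{660\,\mathrm{nm}}}{(AC/DC)_{840\,\mathrm{nm}}}, \qquad
\mathrm{SpO_2} \;\approx\; c_0 + c_1 R,
\]
where $AC$ and $DC$ are the pulsatile and baseline components of the remote PPG signal at each
wavelength, and $c_0$, $c_1$ would be obtained by regressing against the reference contact-probe
Spo2 values.
```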
Collinear Latent Variables in Multilevel Confirmatory Factor Analysis
van de Schoot, Rens; Hox, Joop
2014-01-01
Because variables may be correlated in the social and behavioral sciences, multicollinearity might be problematic. This study investigates the effect of collinearity manipulated in the within and between levels of a two-level confirmatory factor analysis by Monte Carlo simulation. Furthermore, the influence of the size of the intraclass correlation coefficient (ICC) and of the estimation method (maximum likelihood estimation with robust chi-squares and standard errors versus Bayesian estimation) on the convergence rate is investigated. The other variables of interest were the rate of inadmissible solutions and the relative parameter and standard error bias on the between level. The results showed that inadmissible solutions were obtained when there was between-level collinearity and the estimation method was maximum likelihood. In the within-level multicollinearity condition, all of the solutions were admissible but the bias values were higher compared with the between-level collinearity condition. Bayesian estimation appeared to be robust in obtaining admissible parameters, but the relative bias was higher than for maximum likelihood estimation. Finally, as expected, high ICC produced less biased results compared to medium ICC conditions. PMID:29795827
Can, Seda; van de Schoot, Rens; Hox, Joop
2015-06-01
Because variables may be correlated in the social and behavioral sciences, multicollinearity might be problematic. This study investigates the effect of collinearity manipulated in the within and between levels of a two-level confirmatory factor analysis by Monte Carlo simulation. Furthermore, the influence of the size of the intraclass correlation coefficient (ICC) and the estimation method (maximum likelihood estimation with robust chi-squares and standard errors versus Bayesian estimation) on the convergence rate is investigated. The other variables of interest were the rate of inadmissible solutions and the relative parameter and standard error bias on the between level. The results showed that inadmissible solutions were obtained when there was between-level collinearity and the estimation method was maximum likelihood. In the within-level multicollinearity condition, all of the solutions were admissible but the bias values were higher compared with the between-level collinearity condition. Bayesian estimation appeared to be robust in obtaining admissible parameters but the relative bias was higher than for maximum likelihood estimation. Finally, as expected, high ICC produced less biased results compared to medium ICC conditions.
Absolute color scale for improved diagnostics with wavefront error mapping.
Smolek, Michael K; Klyce, Stephen D
2007-11-01
Wavefront data are expressed in micrometers and referenced to the pupil plane, but current methods to map wavefront error lack standardization. Many use normalized or floating scales that may confuse the user by generating ambiguous, noisy, or varying information. An absolute scale that combines consistent clinical information with statistical relevance is needed for wavefront error mapping. The color contours should correspond better to current corneal topography standards to improve clinical interpretation. Retrospective analysis of wavefront error data. Historic ophthalmic medical records. Topographic modeling system topographical examinations of 120 corneas across 12 categories were used. Corneal wavefront error data in micrometers from each topography map were extracted at 8 Zernike polynomial orders and for 3 pupil diameters expressed in millimeters (3, 5, and 7 mm). Both total aberrations (orders 2 through 8) and higher-order aberrations (orders 3 through 8) were expressed in the form of frequency histograms to determine the working range of the scale across all categories. The standard deviation of the mean error of normal corneas determined the map contour resolution. Map colors were based on corneal topography color standards and on the ability to distinguish adjacent color contours through contrast. Higher-order and total wavefront error contour maps for different corneal conditions. An absolute color scale was produced that encompassed a range of +/-6.5 microm and a contour interval of 0.5 microm. All aberrations in the categorical database were plotted with no loss of clinical information necessary for classification. In the few instances where mapped information was beyond the range of the scale, the type and severity of aberration remained legible. When wavefront data are expressed in micrometers, this absolute scale facilitates the determination of the severity of aberrations present compared with a floating scale, particularly for distinguishing normal from abnormal levels of wavefront error. The new color palette makes it easier to identify disorders. The corneal mapping method can be extended to mapping whole eye wavefront errors. When refraction data are expressed in diopters, the previously published corneal topography scale is suggested.
Local alignment of two-base encoded DNA sequence
Homer, Nils; Merriman, Barry; Nelson, Stanley F
2009-01-01
Background: DNA sequence comparison is based on optimal local alignment of two sequences using a similarity score. However, some new DNA sequencing technologies do not directly measure the base sequence, but rather an encoded form, such as the two-base encoding considered here. In order to compare such data to a reference sequence, the data must be decoded into sequence. The decoding is deterministic, but the possibility of measurement errors requires searching among all possible error modes and resulting alignments to achieve an optimal balance of fewer errors versus greater sequence similarity. Results: We present an extension of the standard dynamic programming method for local alignment, which simultaneously decodes the data and performs the alignment, maximizing a similarity score based on a weighted combination of errors and edits, and allowing an affine gap penalty. We also present simulations that demonstrate the performance characteristics of our two base encoded alignment method and contrast those with standard DNA sequence alignment under the same conditions. Conclusion: The new local alignment algorithm for two-base encoded data has substantial power to properly detect and correct measurement errors while identifying underlying sequence variants, and facilitating genome re-sequencing efforts based on this form of sequence data. PMID:19508732
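For reference, a minimal sketch of the standard dynamic-programming local alignment (Smith-Waterman) that the paper extends; it scores plain bases with a linear gap penalty and does not include the two-base decoding or the affine gap penalty described above.

```python
def smith_waterman(a, b, match=2, mismatch=-1, gap=-2):
    """Minimal Smith-Waterman local alignment score with a linear gap penalty."""
    rows, cols = len(a) + 1, len(b) + 1
    H = [[0] * cols for _ in range(rows)]
    best = 0
    for i in range(1, rows):
        for j in range(1, cols):
            diag = H[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
            # Local alignment: scores are never allowed to drop below zero
            H[i][j] = max(0, diag, H[i - 1][j] + gap, H[i][j - 1] + gap)
            best = max(best, H[i][j])
    return best

print(smith_waterman("ACACACTA", "AGCACACA"))
```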
NASA Astrophysics Data System (ADS)
Zhu, Lianqing; Chen, Yunfang; Chen, Qingshan; Meng, Hao
2011-05-01
According to the minimum zone condition, a method for evaluating the profile error of an Archimedes helicoid surface based on a Genetic Algorithm (GA) is proposed. The mathematical model of the surface is provided, and the unknown parameters in the equation of the surface are acquired through the least squares method. The principle of the GA is explained. Then, the profile error of the Archimedes helicoid surface is obtained through the GA optimization method. To validate the proposed method, the profile error of an Archimedes helicoid surface, an Archimedes cylindrical worm (ZA worm) surface, is evaluated. The results show that the proposed method is capable of correctly evaluating the profile error of Archimedes helicoid surfaces and satisfies the evaluation standard of the Minimum Zone Method. It can be applied to the measured profile-error data of complex surfaces obtained by coordinate measuring machines (CMMs).
Optical storage media data integrity studies
NASA Technical Reports Server (NTRS)
Podio, Fernando L.
1994-01-01
Optical disk-based information systems are being used in private industry and many Federal Government agencies for on-line and long-term storage of large quantities of data. The storage devices that are part of these systems are designed with powerful, but not unlimited, media error correction capabilities. The integrity of data stored on optical disks does not depend only on the life expectancy specifications for the medium. Different factors, including handling and storage conditions, may result in an increase of medium errors in size and frequency. Monitoring the potential data degradation is crucial, especially for long-term applications. Efforts are being made by the Association for Information and Image Management Technical Committee C21, Storage Devices and Applications, to specify methods for monitoring and reporting to the user medium errors detected by the storage device while writing, reading, or verifying the data stored in that medium. The Computer Systems Laboratory (CSL) of the National Institute of Standards and Technology (NIST) has a leadership role in the development of these standard techniques. In addition, CSL is researching other data integrity issues, including the investigation of error-resilient compression algorithms. NIST has conducted care and handling experiments on optical disk media with the objective of identifying possible causes of degradation. NIST work in data integrity and related standards activities is described.
A visual detection model for DCT coefficient quantization
NASA Technical Reports Server (NTRS)
Ahumada, Albert J., Jr.; Watson, Andrew B.
1994-01-01
The discrete cosine transform (DCT) is widely used in image compression and is part of the JPEG and MPEG compression standards. The degree of compression and the amount of distortion in the decompressed image are controlled by the quantization of the transform coefficients. The standards do not specify how the DCT coefficients should be quantized. One approach is to set the quantization level for each coefficient so that the quantization error is near the threshold of visibility. Results from previous work are combined to form the current best detection model for DCT coefficient quantization noise. This model predicts sensitivity as a function of display parameters, enabling quantization matrices to be designed for display situations varying in luminance, veiling light, and spatial-frequency-related conditions (pixel size, viewing distance, and aspect ratio). It also allows arbitrary color space directions for the representation of color. A model-based method of optimizing the quantization matrix for an individual image was developed. The model described above provides visual thresholds for each DCT frequency. These thresholds are adjusted within each block for visual light adaptation and contrast masking. For a given quantization matrix, the DCT quantization errors are scaled by the adjusted thresholds to yield perceptual errors. These errors are pooled nonlinearly over the image to yield total perceptual error. With this model one may estimate the quantization matrix for a particular image that yields minimum bit rate for a given total perceptual error, or minimum perceptual error for a given bit rate. Custom matrices for a number of images show clear improvement over image-independent matrices. Custom matrices are compatible with the JPEG standard, which requires transmission of the quantization matrix.
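A compact sketch of the error-pooling step described above: per-block DCT quantization errors are divided by visual thresholds and pooled with a Minkowski norm. The quantization matrix, thresholds, and pooling exponent below are placeholders, not the model's calibrated values, and the light-adaptation and masking adjustments are omitted.

```python
import numpy as np
from scipy.fft import dctn

def perceptual_error(image, q_matrix, thresholds, beta=4.0):
    """Pool DCT quantization errors scaled by visual thresholds (Minkowski norm)."""
    pooled = []
    for r in range(0, image.shape[0] - 7, 8):
        for c in range(0, image.shape[1] - 7, 8):
            block = image[r:r + 8, c:c + 8].astype(float)
            coeffs = dctn(block, norm="ortho")
            quantized = np.round(coeffs / q_matrix) * q_matrix
            jnd_units = (coeffs - quantized) / thresholds   # errors in threshold units
            pooled.append(np.sum(np.abs(jnd_units) ** beta))
    return np.sum(pooled) ** (1.0 / beta)

rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(64, 64))
Q = np.full((8, 8), 16.0)        # placeholder quantization matrix
T = np.full((8, 8), 1.0)         # placeholder per-frequency visual thresholds
print(perceptual_error(img, Q, T))
```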
Multilevel Modeling with Correlated Effects
ERIC Educational Resources Information Center
Kim, Jee-Seon; Frees, Edward W.
2007-01-01
When there exist omitted effects, measurement error, and/or simultaneity in multilevel models, explanatory variables may be correlated with random components, and standard estimation methods do not provide consistent estimates of model parameters. This paper introduces estimators that are consistent under such conditions. By employing generalized…
Credit assignment in movement-dependent reinforcement learning
Boggess, Matthew J.; Crossley, Matthew J.; Parvin, Darius; Ivry, Richard B.; Taylor, Jordan A.
2016-01-01
When a person fails to obtain an expected reward from an object in the environment, they face a credit assignment problem: Did the absence of reward reflect an extrinsic property of the environment or an intrinsic error in motor execution? To explore this problem, we modified a popular decision-making task used in studies of reinforcement learning, the two-armed bandit task. We compared a version in which choices were indicated by key presses, the standard response in such tasks, to a version in which the choices were indicated by reaching movements, which affords execution failures. In the key press condition, participants exhibited a strong risk aversion bias; strikingly, this bias reversed in the reaching condition. This result can be explained by a reinforcement model wherein movement errors influence decision-making, either by gating reward prediction errors or by modifying an implicit representation of motor competence. Two further experiments support the gating hypothesis. First, we used a condition in which we provided visual cues indicative of movement errors but informed the participants that trial outcomes were independent of their actual movements. The main result was replicated, indicating that the gating process is independent of participants’ explicit sense of control. Second, individuals with cerebellar degeneration failed to modulate their behavior between the key press and reach conditions, providing converging evidence of an implicit influence of movement error signals on reinforcement learning. These results provide a mechanistically tractable solution to the credit assignment problem. PMID:27247404
Credit assignment in movement-dependent reinforcement learning.
McDougle, Samuel D; Boggess, Matthew J; Crossley, Matthew J; Parvin, Darius; Ivry, Richard B; Taylor, Jordan A
2016-06-14
When a person fails to obtain an expected reward from an object in the environment, they face a credit assignment problem: Did the absence of reward reflect an extrinsic property of the environment or an intrinsic error in motor execution? To explore this problem, we modified a popular decision-making task used in studies of reinforcement learning, the two-armed bandit task. We compared a version in which choices were indicated by key presses, the standard response in such tasks, to a version in which the choices were indicated by reaching movements, which affords execution failures. In the key press condition, participants exhibited a strong risk aversion bias; strikingly, this bias reversed in the reaching condition. This result can be explained by a reinforcement model wherein movement errors influence decision-making, either by gating reward prediction errors or by modifying an implicit representation of motor competence. Two further experiments support the gating hypothesis. First, we used a condition in which we provided visual cues indicative of movement errors but informed the participants that trial outcomes were independent of their actual movements. The main result was replicated, indicating that the gating process is independent of participants' explicit sense of control. Second, individuals with cerebellar degeneration failed to modulate their behavior between the key press and reach conditions, providing converging evidence of an implicit influence of movement error signals on reinforcement learning. These results provide a mechanistically tractable solution to the credit assignment problem.
NASA Astrophysics Data System (ADS)
Debchoudhury, Shantanab; Earle, Gregory
2017-04-01
Retarding Potential Analyzers (RPA) have a rich flight heritage. Standard curve-fitting analysis techniques exist that can infer state variables in the ionospheric plasma environment from RPA data, but the estimation process is prone to errors arising from a number of sources. Previous work has focused on the effects of grid geometry on uncertainties in estimation; however, no prior study has quantified the estimation errors due to additive noise. In this study, we characterize the errors in estimation of thermal plasma parameters by adding noise to the simulated data derived from the existing ionospheric models. We concentrate on low-altitude, mid-inclination orbits since a number of nano-satellite missions are focused on this region of the ionosphere. The errors are quantified and cross-correlated for varying geomagnetic conditions.
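The abstract does not give the RPA current-voltage model, so the sketch below only illustrates the general error-quantification approach with a stand-in exponential characteristic: zero-mean Gaussian noise is added to synthetic data and the scatter of the fitted parameters is tabulated over Monte Carlo trials.

```python
import numpy as np
from scipy.optimize import curve_fit

def model(v, n, temp):
    """Stand-in characteristic curve: exponential falloff with retarding voltage."""
    return n * np.exp(-v / temp)

rng = np.random.default_rng(1)
v = np.linspace(0.0, 5.0, 60)
true_n, true_temp = 1000.0, 1.2
clean = model(v, true_n, true_temp)

estimates = []
for _ in range(500):                       # Monte Carlo trials
    noisy = clean + rng.normal(0.0, 0.05 * clean.max(), size=v.size)
    popt, _ = curve_fit(model, v, noisy, p0=(800.0, 1.0))
    estimates.append(popt)

estimates = np.array(estimates)
bias = estimates.mean(axis=0) - (true_n, true_temp)
spread = estimates.std(axis=0)
print("bias:", bias, "std:", spread)
```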
Ly, Thomas; Pamer, Carol; Dang, Oanh; Brajovic, Sonja; Haider, Shahrukh; Botsis, Taxiarchis; Milward, David; Winter, Andrew; Lu, Susan; Ball, Robert
2018-05-31
The FDA Adverse Event Reporting System (FAERS) is a primary data source for identifying unlabeled adverse events (AEs) in a drug or biologic drug product's postmarketing phase. Many AE reports must be reviewed by drug safety experts to identify unlabeled AEs, even if the reported AEs are previously identified, labeled AEs. Integrating the labeling status of drug product AEs into FAERS could increase report triage and review efficiency. Medical Dictionary for Regulatory Activities (MedDRA) is the standard for coding AE terms in FAERS cases. However, drug manufacturers are not required to use MedDRA to describe AEs in product labels. We hypothesized that natural language processing (NLP) tools could assist in automating the extraction and MedDRA mapping of AE terms in drug product labels. We evaluated the performance of three NLP systems, (ETHER, I2E, MetaMap) for their ability to extract AE terms from drug labels and translate the terms to MedDRA Preferred Terms (PTs). Pharmacovigilance-based annotation guidelines for extracting AE terms from drug labels were developed for this study. We compared each system's output to MedDRA PT AE lists, manually mapped by FDA pharmacovigilance experts using the guidelines, for ten drug product labels known as the "gold standard AE list" (GSL) dataset. Strict time and configuration conditions were imposed in order to test each system's capabilities under conditions of no human intervention and minimal system configuration. Each NLP system's output was evaluated for precision, recall and F measure in comparison to the GSL. A qualitative error analysis (QEA) was conducted to categorize a random sample of each NLP system's false positive and false negative errors. A total of 417, 278, and 250 false positive errors occurred in the ETHER, I2E, and MetaMap outputs, respectively. A total of 100, 80, and 187 false negative errors occurred in ETHER, I2E, and MetaMap outputs, respectively. Precision ranged from 64% to 77%, recall from 64% to 83% and F measure from 67% to 79%. I2E had the highest precision (77%), recall (83%) and F measure (79%). ETHER had the lowest precision (64%). MetaMap had the lowest recall (64%). The QEA found that the most prevalent false positive errors were context errors such as "Context error/General term", "Context error/Instructions or monitoring parameters", "Context error/Medical history preexisting condition underlying condition risk factor or contraindication", and "Context error/AE manifestations or secondary complication". The most prevalent false negative errors were in the "Incomplete or missed extraction" error category. Missing AE terms were typically due to long terms, or terms containing non-contiguous words which do not correspond exactly to MedDRA synonyms. MedDRA mapping errors were a minority of errors for ETHER and I2E but were the most prevalent false positive errors for MetaMap. The results demonstrate that it may be feasible to use NLP tools to extract and map AE terms to MedDRA PTs. However, the NLP tools we tested would need to be modified or reconfigured to lower the error rates to support their use in a regulatory setting. Tools specific for extracting AE terms from drug labels and mapping the terms to MedDRA PTs may need to be developed to support pharmacovigilance. Conducting research using additional NLP systems on a larger, diverse GSL would also be informative. Copyright © 2018. Published by Elsevier Inc.
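A small sketch of the evaluation metrics used above, comparing an extracted term set against a gold-standard list; the MedDRA Preferred Terms shown are invented examples.

```python
def precision_recall_f1(system_terms, gold_terms):
    """Precision, recall, and F-measure for a set of extracted MedDRA PTs."""
    system_terms, gold_terms = set(system_terms), set(gold_terms)
    tp = len(system_terms & gold_terms)
    fp = len(system_terms - gold_terms)
    fn = len(gold_terms - system_terms)
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    f1 = 2 * precision * recall / (precision + recall) if (precision + recall) else 0.0
    return precision, recall, f1

gold = {"Nausea", "Headache", "Hepatotoxicity", "Rash"}
system = {"Nausea", "Headache", "Dizziness"}
print(precision_recall_f1(system, gold))   # -> approximately (0.667, 0.5, 0.571)
```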
ERIC Educational Resources Information Center
Andrews, Benjamin James
2011-01-01
The equity properties can be used to assess the quality of an equating. The degree to which expected scores conditional on ability are similar between test forms is referred to as first-order equity. Second-order equity is the degree to which conditional standard errors of measurement are similar between test forms after equating. The purpose of…
NASA Astrophysics Data System (ADS)
Zhang, Rong-Hua; Tao, Ling-Jiang; Gao, Chuan
2017-09-01
Large uncertainties exist in real-time predictions of the 2015 El Niño event, which have systematic intensity biases that are strongly model-dependent. It is critically important to characterize those model biases so they can be reduced appropriately. In this study, the conditional nonlinear optimal perturbation (CNOP)-based approach was applied to an intermediate coupled model (ICM) equipped with a four-dimensional variational data assimilation technique. The CNOP-based approach was used to quantify prediction errors that can be attributed to initial conditions (ICs) and model parameters (MPs). Two key MPs were considered in the ICM: one represents the intensity of the thermocline effect, and the other represents the relative coupling intensity between the ocean and atmosphere. Two experiments were performed to illustrate the effects of error corrections, one with a standard simulation and another with an optimized simulation in which errors in the ICs and MPs derived from the CNOP-based approach were optimally corrected. The results indicate that simulations of the 2015 El Niño event can be effectively improved by using CNOP-derived error corrections. In particular, the El Niño intensity in late 2015 was adequately captured when simulations were started from early 2015. Quantitatively, the Niño3.4 SST index simulated in Dec. 2015 increased to 2.8 °C in the optimized simulation, compared with only 1.5 °C in the standard simulation. The feasibility and effectiveness of using the CNOP-based technique to improve ENSO simulations are demonstrated in the context of the 2015 El Niño event. The limitations and further applications are also discussed.
Accuracy and Precision of Visual Stimulus Timing in PsychoPy: No Timing Errors in Standard Usage
Garaizar, Pablo; Vadillo, Miguel A.
2014-01-01
In a recent report published in PLoS ONE, we found that the performance of PsychoPy degraded with very short timing intervals, suggesting that it might not be perfectly suitable for experiments requiring the presentation of very brief stimuli. The present study aims to provide an updated performance assessment for the most recent version of PsychoPy (v1.80) under different hardware/software conditions. Overall, the results show that PsychoPy can achieve high levels of precision and accuracy in the presentation of brief visual stimuli. Although occasional timing errors were found in very demanding benchmarking tests, there is no reason to think that they can pose any problem for standard experiments developed by researchers. PMID:25365382
NASA Technical Reports Server (NTRS)
Hueschen, R. M.
1986-01-01
Five flight tests of the Digital Automated Landing System (DIALS) were conducted on the Advanced Transport Operating Systems (ATOPS) Transportation Research Vehicle (TSRV) -- a modified Boeing 737 aircraft for advanced controls and displays research. These flight tests were conducted at NASA's Wallops Flight Center using the microwave landing system (MLS) installation on runway 22. This report describes the flight software equations of the DIALS, which was designed using modern control theory direct-digital design methods and employed a constant gain Kalman filter. Selected flight test performance data is presented for localizer (runway centerline) capture and track at various intercept angles, for glideslope capture and track of 3, 4.5, and 5 degree glideslopes, for the decrab maneuver, and for the flare maneuver. Data is also presented to illustrate the system performance in the presence of cross, gust, and shear winds. The mean and standard deviation of the peak position errors for localizer capture were, respectively, 24 feet and 26 feet. For mild wind conditions, glideslope and localizer tracking position errors did not exceed, respectively, 5 and 20 feet. For gusty wind conditions (8 to 10 knots), these errors were, respectively, 10 and 30 feet. Ten hands-off automatic landings were performed. The standard deviation of the touchdown position and velocity errors from the mean values were, respectively, 244 feet and 0.7 feet/sec.
Standard Errors and Confidence Intervals of Norm Statistics for Educational and Psychological Tests.
Oosterhuis, Hannah E M; van der Ark, L Andries; Sijtsma, Klaas
2016-11-14
Norm statistics allow for the interpretation of scores on psychological and educational tests, by relating the test score of an individual test taker to the test scores of individuals belonging to the same gender, age, or education groups, et cetera. Given the uncertainty due to sampling error, one would expect researchers to report standard errors for norm statistics. In practice, standard errors are seldom reported; they are either unavailable or derived under strong distributional assumptions that may not be realistic for test scores. We derived standard errors for four norm statistics (standard deviation, percentile ranks, stanine boundaries and Z-scores) under the mild assumption that the test scores are multinomially distributed. A simulation study showed that the standard errors were unbiased and that corresponding Wald-based confidence intervals had good coverage. Finally, we discuss the possibilities for applying the standard errors in practical test use in education and psychology. The procedure is provided via the R function check.norms, which is available in the mokken package.
A meta-analysis of inhibitory-control deficits in patients diagnosed with Alzheimer's dementia.
Kaiser, Anna; Kuhlmann, Beatrice G; Bosnjak, Michael
2018-05-10
The authors conducted meta-analyses to determine the magnitude of performance impairments in patients diagnosed with Alzheimer's dementia (AD) compared with healthy aging (HA) controls on eight tasks commonly used to measure inhibitory control. Response time (RT) and error rates from a total of 64 studies were analyzed with random-effects models (overall effects) and mixed-effects models (moderator analyses). Large differences between AD patients and HA controls emerged in the basic inhibition conditions of many of the tasks with AD patients often performing slower, overall d = 1.17, 95% CI [0.88-1.45], and making more errors, d = 0.83 [0.63-1.03]. However, comparably large differences were also present in performance on many of the baseline control-conditions, d = 1.01 [0.83-1.19] for RTs and d = 0.44 [0.19-0.69] for error rates. A standardized derived inhibition score (i.e., control-condition score - inhibition-condition score) suggested no significant mean group difference for RTs, d = -0.07 [-0.22-0.08], and only a small difference for errors, d = 0.24 [-0.12-0.60]. Effects systematically varied across tasks and with AD severity. Although the error rate results suggest a specific deterioration of inhibitory-control abilities in AD, further processes beyond inhibitory control (e.g., a general reduction in processing speed and other, task-specific attentional processes) appear to contribute to AD patients' performance deficits observed on a variety of inhibitory-control tasks. Nonetheless, the inhibition conditions of many of these tasks well discriminate between AD patients and HA controls. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
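A minimal Python sketch of random-effects pooling (DerSimonian-Laird) of effect sizes like those summarized above; the study-level effect sizes and variances below are invented for illustration, and the moderator analyses are not reproduced.

```python
import numpy as np

def random_effects_pool(d, var):
    """DerSimonian-Laird random-effects pooled effect size and its standard error."""
    d, var = np.asarray(d, float), np.asarray(var, float)
    w = 1.0 / var                                   # fixed-effect weights
    d_fixed = np.sum(w * d) / np.sum(w)
    q = np.sum(w * (d - d_fixed) ** 2)              # heterogeneity statistic
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - (len(d) - 1)) / c)         # between-study variance
    w_star = 1.0 / (var + tau2)
    pooled = np.sum(w_star * d) / np.sum(w_star)
    se = np.sqrt(1.0 / np.sum(w_star))
    return pooled, se, tau2

d_i = [1.3, 0.9, 1.5, 0.7, 1.1]       # invented study effect sizes (Cohen's d)
v_i = [0.05, 0.08, 0.04, 0.10, 0.06]  # invented sampling variances
print(random_effects_pool(d_i, v_i))
```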
SU-E-T-257: Output Constancy: Reducing Measurement Variations in a Large Practice Group
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hedrick, K; Fitzgerald, T; Miller, R
2014-06-01
Purpose: To standardize output constancy check procedures in a large medical physics practice group covering multiple sites, in order to identify and reduce small systematic errors caused by differences in equipment and the procedures of multiple physicists. Methods: A standardized machine output constancy check for both photons and electrons was instituted within the practice group in 2010. After conducting annual TG-51 measurements in water and adjusting the linac to deliver 1.00 cGy/MU at Dmax, an acrylic phantom (comparable at all sites) and a PTW Farmer ion chamber are used to obtain monthly output constancy reference readings. From the collected charge reading, measurements of air pressure and temperature, and the chamber Ndw and Pelec, a value we call the Kacrylic factor is determined, relating the chamber reading in acrylic to the dose in water with standard set-up conditions. This procedure easily allows for multiple equipment combinations to be used at any site. The Kacrylic factors and output results from all sites and machines are logged monthly in a central database and used to monitor trends in calibration and output. Results: The practice group consists of 19 sites, currently with 34 Varian and 8 Elekta linacs (24 Varian and 5 Elekta linacs in 2010). Over the past three years, the standard deviation of Kacrylic factors measured on all machines decreased by 20% for photons and high energy electrons as systematic errors were found and reduced. Low energy electrons showed very little change in the distribution of Kacrylic values. Small errors in linac beam data were found by investigating outlier Kacrylic values. Conclusion: While the use of acrylic phantoms introduces an additional source of error through small differences in depth and effective depth, the new standardized procedure eliminates potential sources of error from using many different phantoms and results in more consistent output constancy measurements.
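The abstract does not define the Kacrylic factor explicitly, so the sketch below only illustrates one plausible form of the monthly arithmetic under TG-51-style reference conditions (22 °C, 101.33 kPa); the function names, the assumed combination of factors, and the numerical values are hypothetical.

```python
def temp_pressure_correction(temp_c, pressure_kpa, ref_temp_c=22.0, ref_pressure_kpa=101.33):
    """Standard air-density correction for an open (vented) ionization chamber."""
    return ((273.2 + temp_c) / (273.2 + ref_temp_c)) * (ref_pressure_kpa / pressure_kpa)

def k_acrylic(reading_nc, temp_c, pressure_kpa, n_dw, p_elec, dose_cgy_per_mu=1.0, mu=100.0):
    """Assumed form: factor relating the corrected acrylic-phantom reading to dose in water."""
    corrected = reading_nc * temp_pressure_correction(temp_c, pressure_kpa) * p_elec
    return (dose_cgy_per_mu * mu) / (corrected * n_dw)

# Hypothetical monthly reference reading for one photon beam
print(k_acrylic(reading_nc=19.5, temp_c=21.4, pressure_kpa=98.7, n_dw=5.39, p_elec=1.0))
```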
An Optimal Control Modification to Model-Reference Adaptive Control for Fast Adaptation
NASA Technical Reports Server (NTRS)
Nguyen, Nhan T.; Krishnakumar, Kalmanje; Boskovic, Jovan
2008-01-01
This paper presents a method that can achieve fast adaptation for a class of model-reference adaptive control. It is well known that standard model-reference adaptive control exhibits high-gain control behaviors when a large adaptive gain is used to achieve fast adaptation in order to reduce tracking error rapidly. High-gain control creates high-frequency oscillations that can excite unmodeled dynamics and can lead to instability. The fast adaptation approach is based on the minimization of the squared tracking error, which is formulated as an optimal control problem. The necessary condition of optimality is used to derive an adaptive law using the gradient method. This adaptive law is shown to result in uniform boundedness of the tracking error by means of Lyapunov's direct method. Furthermore, this adaptive law allows a large adaptive gain to be used without causing undesired high-gain control effects. The method is shown to be more robust than standard model-reference adaptive control. Simulations demonstrate the effectiveness of the proposed method.
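A minimal simulation sketch of the baseline scheme the paper modifies: a scalar model-reference adaptive controller with a gradient (MIT-rule style) adaptive law driven by the tracking error. It is not the optimal control modification itself, and the plant, reference model, and gain values are illustrative.

```python
import numpy as np

def simulate_mrac(gamma=10.0, dt=1e-3, t_end=10.0):
    """Scalar MRAC: plant xdot = a*x + b*u, reference model xmdot = am*xm + bm*r."""
    a, b = 1.0, 3.0            # plant parameters (unknown to the controller)
    am, bm = -4.0, 4.0         # stable reference model
    kx, kr = 0.0, 0.0          # adaptive feedback/feedforward gains
    x = xm = 0.0
    errors = []
    for k in range(int(t_end / dt)):
        r = 1.0 if (k * dt) % 4.0 < 2.0 else -1.0   # square-wave command
        u = kx * x + kr * r
        e = x - xm                                   # tracking error
        # Gradient adaptive law driven by the tracking error
        kx -= gamma * e * x * dt
        kr -= gamma * e * r * dt
        x += (a * x + b * u) * dt                    # forward-Euler plant update
        xm += (am * xm + bm * r) * dt                # forward-Euler reference update
        errors.append(abs(e))
    return np.mean(errors)

# Mean absolute tracking error for a small and a large adaptive gain
print(simulate_mrac(gamma=1.0), simulate_mrac(gamma=20.0))
```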
Bessesen, Mary T; Adams, Jill C; Radonovich, Lewis; Anderson, Judith
2015-06-01
This was a feasibility study in a Department of Veterans Affairs Medical Center to develop a standard operating procedure (SOP) to be used by health care workers to disinfect reusable elastomeric respirators under pandemic conditions. Registered and licensed practical nurses, nurse practitioners, aides, clinical technicians, and physicians took part in the study. Health care worker volunteers were provided with manufacturers' cleaning and disinfection instructions and all necessary supplies. They were observed and filmed. SOPs were developed, based on these observations, and tested on naïve volunteer health care workers. Error rates using manufacturers' instructions and SOPs were compared. When using respirator manufacturers' cleaning and disinfection instructions, without specific training or supervision, all subjects made multiple errors. When using the SOPs developed in the study, without specific training or guidance, naïve health care workers disinfected respirators with zero errors. Reusable facial protective equipment may be disinfected by health care workers with minimal training using SOPs. Published by Elsevier Inc.
Hypothesis Testing Using Factor Score Regression
Devlieger, Ines; Mayer, Axel; Rosseel, Yves
2015-01-01
In this article, an overview is given of four methods to perform factor score regression (FSR), namely regression FSR, Bartlett FSR, the bias avoiding method of Skrondal and Laake, and the bias correcting method of Croon. The bias correcting method is extended to include a reliable standard error. The four methods are compared with each other and with structural equation modeling (SEM) by using analytic calculations and two Monte Carlo simulation studies to examine their finite sample characteristics. Several performance criteria are used, such as the bias using the unstandardized and standardized parameterization, efficiency, mean square error, standard error bias, type I error rate, and power. The results show that the bias correcting method, with the newly developed standard error, is the only suitable alternative for SEM. While it has a higher standard error bias than SEM, it has a comparable bias, efficiency, mean square error, power, and type I error rate. PMID:29795886
Adherence to balance tolerance limits at the Upper Mississippi Science Center, La Crosse, Wisconsin.
Myers, C.T.; Kennedy, D.M.
1998-01-01
Verification of balance accuracy entails applying a series of standard masses to a balance prior to use and recording the measured values. The recorded values for each standard should have lower and upper weight limits or tolerances that are accepted as verification of balance accuracy under normal operating conditions. Balance logbooks for seven analytical balances at the Upper Mississippi Science Center were checked over a 3.5-year period to determine if the recorded weights were within the established tolerance limits. A total of 9435 measurements were checked. There were 14 instances in which the balance malfunctioned and operators recorded a rationale in the balance logbook. Sixty-three recording errors were found. Twenty-eight operators were responsible for two types of recording errors: Measurements of weights were recorded outside of the tolerance limit but not acknowledged as an error by the operator (n = 40); and measurements were recorded with the wrong number of decimal places (n = 23). The adherence rate for following tolerance limits was 99.3%. To ensure the continued adherence to tolerance limits, the quality-assurance unit revised standard operating procedures to require more frequent review of balance logbooks.
Revised techniques for estimating peak discharges from channel width in Montana
Parrett, Charles; Hull, J.A.; Omang, R.J.
1987-01-01
This study was conducted to develop new estimating equations based on channel width and the updated flood frequency curves of previous investigations. Simple regression equations for estimating peak discharges with recurrence intervals of 2, 5, 10, 25, 50, and 100 years were developed for seven regions in Montana. The standard errors of estimate for the equations that use active channel width as the independent variable ranged from 30% to 87%. The standard errors of estimate for the equations that use bankfull width as the independent variable ranged from 34% to 92%. The smallest standard errors generally occurred in the prediction equations for the 2-yr, 5-yr, and 10-yr floods, and the largest standard errors occurred in the prediction equations for the 100-yr flood. The equations that use active channel width and the equations that use bankfull width were determined to be about equally reliable in five regions. In the West Region, the equations that use bankfull width were slightly more reliable than those based on active channel width, whereas in the East-Central Region the equations that use active channel width were slightly more reliable than those based on bankfull width. Compared with similar equations previously developed, the standard errors of estimate for the new equations are substantially smaller in three regions and substantially larger in two regions. Limitations on the use of the estimating equations include: (1) the equations are based on stable conditions of channel geometry and prevailing water and sediment discharge; (2) the measurement of channel width requires a site visit, preferably by a person with experience in the method, and involves appreciable measurement errors; (3) the reliability of results from the equations for channel widths beyond the range of definition is unknown. In spite of these limitations, the estimating equations derived in this study are considered to be as reliable as estimating equations based on basin and climatic variables. Because the two types of estimating equations are independent, results from each can be weighted inversely proportional to their variances and averaged; the weighted average estimate has a variance less than either individual estimate. (Author's abstract)
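A brief illustration of the inverse-variance weighting described in the closing sentence of the abstract above; the two peak-discharge estimates and their variances are invented.

```python
def weighted_average(estimates, variances):
    """Combine independent estimates with weights inversely proportional to variance."""
    weights = [1.0 / v for v in variances]
    combined = sum(w * x for w, x in zip(weights, estimates)) / sum(weights)
    combined_variance = 1.0 / sum(weights)   # smaller than either input variance
    return combined, combined_variance

# Hypothetical 100-yr peak-discharge estimates from a channel-width equation and
# a basin-characteristics equation, with their (invented) variances
print(weighted_average([350.0, 410.0], [90.0**2, 120.0**2]))
```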
NASA Astrophysics Data System (ADS)
Solazzo, Efisio; Hogrefe, Christian; Colette, Augustin; Garcia-Vivanco, Marta; Galmarini, Stefano
2017-09-01
The work here complements the overview analysis of the modelling systems participating in the third phase of the Air Quality Model Evaluation International Initiative (AQMEII3) by focusing on the performance for hourly surface ozone by two modelling systems, Chimere for Europe and CMAQ for North America. The evaluation strategy outlined in the course of the three phases of the AQMEII activity, aimed to build up a diagnostic methodology for model evaluation, is pursued here and novel diagnostic methods are proposed. In addition to evaluating the base-case simulation, in which all model components are configured in their standard mode, the analysis also makes use of sensitivity simulations in which the models have been applied by altering and/or zeroing lateral boundary conditions, emissions of anthropogenic precursors, and ozone dry deposition. To help understand the causes of model deficiencies, the error components (bias, variance, and covariance) of the base case and of the sensitivity runs are analysed in conjunction with timescale considerations and error modelling using the available error fields of temperature, wind speed, and NOx concentration. The results reveal the effectiveness and diagnostic power of the methods devised (which remains the main scope of this study), allowing the detection of the timescale and the fields that the two models are most sensitive to. The representation of planetary boundary layer (PBL) dynamics is pivotal to both models. In particular, (i) the fluctuations slower than ~1.5 days account for 70-85 % of the mean square error of the full (undecomposed) ozone time series; (ii) a recursive, systematic error with daily periodicity is detected, responsible for 10-20 % of the quadratic total error; (iii) errors in representing the timing of the daily transition between stability regimes in the PBL are responsible for a covariance error as large as 9 ppb (as much as the standard deviation of the network-average ozone observations in summer in both Europe and North America); (iv) the CMAQ ozone error has a weak/negligible dependence on the errors in NO2, while the error in NO2 significantly impacts the ozone error produced by Chimere; (v) the response of the models to variations of anthropogenic emissions and boundary conditions shows a pronounced spatial heterogeneity, while the seasonal variability of the response is found to be less marked. Only during the winter season does the zeroing of boundary values for North America produce a spatially uniform deterioration of the model accuracy across the majority of the continent.
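A short sketch of the bias-variance-covariance decomposition of the mean square error referred to above; the modelled and observed ozone series below are synthetic stand-ins.

```python
import numpy as np

def mse_components(model, obs):
    """Decompose mean square error into bias, variance, and covariance terms."""
    model, obs = np.asarray(model, float), np.asarray(obs, float)
    bias2 = (model.mean() - obs.mean()) ** 2
    variance = (model.std() - obs.std()) ** 2
    r = np.corrcoef(model, obs)[0, 1]
    covariance = 2.0 * model.std() * obs.std() * (1.0 - r)
    return bias2, variance, covariance          # the three terms sum to the total MSE

rng = np.random.default_rng(3)
o = 30 + 10 * np.sin(np.linspace(0, 20 * np.pi, 720)) + rng.normal(0, 3, 720)  # "observed" ozone
m = 0.9 * o + 5 + rng.normal(0, 4, 720)                                        # "modelled" ozone
b, v, c = mse_components(m, o)
print(b + v + c, np.mean((m - o) ** 2))          # the two totals should match
```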
Telemetry Standards, RCC Standard 106-17, Annex A.1, Pulse Amplitude Modulation Standards
2017-07-01
Waveforms shall conform to one of the two standard PAM figures. Figure caption: 50 percent duty cycle PAM with amplitude synchronization; a 20-25 percent deviation reserved for pulse synchronization is recommended. Telemetry Standards, RCC Standard 106-17, Annex A.1, July 2017.
NASA Astrophysics Data System (ADS)
Ostrikov, V. N.; Plakhotnikov, O. V.
2014-12-01
Using extensive experimental material, we examine whether the initial data of a hyperspectral airborne survey can be recalculated into spectral radiance factors (SRF). The errors of external calibration for various observation conditions and different data-receiving instruments are estimated.
Influence of municipal- and individual-level socioeconomic conditions on mortality in Japan.
Honjo, Kaori; Iso, Hiroyasu; Fukuda, Yoshiharu; Nishi, Nobuo; Nakaya, Tomoki; Fujino, Yoshihisa; Tanabe, Naohito; Suzuki, Sadao; Subramanian, S V; Tamakoshi, Akiko
2014-01-01
The health effect of area socioeconomic conditions has been evident especially in Western countries; however, limited research has focused on the effect of municipal-level socioeconomic conditions, especially in Asia. Multilevel research using data from the Japan Collaborative Cohort Study, a large cohort study followed from 1990 to 2006, was conducted to examine the effects of individual as well as municipal socioeconomic conditions on risk of death, adjusting for each other. We included 24,460 men and 32,649 women aged 40 to 65 years at baseline in 35 municipalities as our study population. Primary predictors were municipal socioeconomic conditions (proportion of college graduates, per capita income, unemployment rate, and proportion of households receiving public assistance) and individual socioeconomic conditions (education level and occupation). Among men, the multilevel logistic estimates (standard errors) of the proportion of college graduates and the unemployment rate for mortality from cardiovascular disease were -0.399 (0.094) and -0.343 (0.122), respectively. Among women, the multilevel logistic estimates (standard errors) of the proportion of college graduates and per capita annual income for mortality from injuries were -0.386 (0.171) and -1.069 (0.407). Individual education level and occupation were associated with all-cause mortality, in particular, mortality from cardiovascular disease or injuries. Interactions between individual education level and indicators of municipal socioeconomic conditions were observed for mortality from cancer and cardiovascular disease among men and mortality from injuries among women. Municipal and individual socioeconomic conditions were independently and interactively associated with premature death; this suggests that reducing social inequalities in health demands a focus on municipal conditions in addition to those of individuals.
Zhang, Huisheng; Zhang, Ying; Xu, Dongpo; Liu, Xiaodong
2015-06-01
It has been shown that, by adding a chaotic sequence to the weight update during the training of neural networks, the chaos injection-based gradient method (CIBGM) is superior to the standard backpropagation algorithm. This paper presents the theoretical convergence analysis of CIBGM for training feedforward neural networks. We consider both the case of batch learning as well as the case of online learning. Under mild conditions, we prove the weak convergence, i.e., the training error tends to a constant and the gradient of the error function tends to zero. Moreover, the strong convergence of CIBGM is also obtained with the help of an extra condition. The theoretical results are substantiated by a simulation example.
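A toy sketch of the chaos-injection idea for a linear least-squares problem: a decaying logistic-map term is added to each gradient update. The model, gains, and annealing schedule are illustrative and are not the paper's exact formulation.

```python
import numpy as np

rng = np.random.default_rng(7)
X = rng.normal(size=(200, 3))
true_w = np.array([1.5, -2.0, 0.5])
y = X @ true_w + rng.normal(0, 0.1, size=200)

w = np.zeros(3)
z = 0.37                      # logistic-map state, seeds the chaotic sequence
lr, chaos_gain = 0.05, 0.5
for epoch in range(200):
    grad = 2.0 * X.T @ (X @ w - y) / len(y)        # gradient of the squared error
    z = 4.0 * z * (1.0 - z)                        # fully chaotic logistic map
    anneal = chaos_gain / (1.0 + epoch)            # decaying injection strength
    w = w - lr * grad + anneal * (z - 0.5)         # chaos-injected weight update
print(w)                                           # should approach true_w
```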
Accuracy of a pulse-coherent acoustic Doppler profiler in a wave-dominated flow
Lacy, J.R.; Sherwood, C.R.
2004-01-01
The accuracy of velocities measured by a pulse-coherent acoustic Doppler profiler (PCADP) in the bottom boundary layer of a wave-dominated inner-shelf environment is evaluated. The downward-looking PCADP measured velocities in eight 10-cm cells at 1 Hz. Velocities measured by the PCADP are compared to those measured by an acoustic Doppler velocimeter for wave orbital velocities up to 95 cm s-1 and currents up to 40 cm s-1. An algorithm for correcting ambiguity errors using the resolution velocities was developed. Instrument bias, measured as the average error in burst mean speed, is -0.4 cm s-1 (standard deviation = 0.8). The accuracy (root-mean-square error) of instantaneous velocities has a mean of 8.6 cm s-1 (standard deviation = 6.5) for eastward velocities (the predominant direction of waves), 6.5 cm s-1 (standard deviation = 4.4) for northward velocities, and 2.4 cm s-1 (standard deviation = 1.6) for vertical velocities. Both burst mean and root-mean-square errors are greater for bursts with ub ≥ 50 cm s-1. Profiles of burst mean speeds from the bottom five cells were fit to logarithmic curves: 92% of bursts with mean speed ≥ 5 cm s-1 have a correlation coefficient R2 > 0.96. In cells close to the transducer, instantaneous velocities are noisy, burst mean velocities are biased low, and bottom orbital velocities are biased high. With adequate blanking distances for both the profile and resolution velocities, the PCADP provides sufficient accuracy to measure velocities in the bottom boundary layer under moderately energetic inner-shelf conditions.
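The paper's ambiguity-correction algorithm is not specified in the abstract; the sketch below shows the generic dual-estimate idea of unwrapping a precise but ambiguous velocity with the help of a coarse, unambiguous resolution velocity. Values and the helper name are illustrative.

```python
import numpy as np

def unwrap_velocity(v_fine, v_coarse, v_ambig):
    """Shift the precise-but-wrapped velocity by whole ambiguity intervals (2*v_ambig)
    so that it falls closest to the coarse, unambiguous resolution velocity."""
    n = np.round((v_coarse - v_fine) / (2.0 * v_ambig))
    return v_fine + n * 2.0 * v_ambig

# Example: true velocity 78 cm/s with an ambiguity velocity of 50 cm/s
v_true, v_amb = 78.0, 50.0
v_wrapped = ((v_true + v_amb) % (2 * v_amb)) - v_amb     # wrapped measurement: -22 cm/s
print(unwrap_velocity(v_wrapped, v_coarse=70.0, v_ambig=v_amb))   # -> 78.0
```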
Willem W.S. van Hees
2002-01-01
Comparisons of estimated standard error for a ratio-of-means (ROM) estimator are presented for forest resource inventories conducted in southeast Alaska between 1995 and 2000. Estimated standard errors for the ROM were generated by using a traditional variance estimator and also approximated by bootstrap methods. Estimates of standard error generated by both...
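A minimal sketch of the bootstrap approximation mentioned above, applied to a ratio-of-means estimator over sample plots; the plot data are invented.

```python
import numpy as np

def rom(y, x):
    """Ratio-of-means estimator (e.g., total volume per unit area)."""
    return np.mean(y) / np.mean(x)

def bootstrap_se(y, x, n_boot=2000, rng=np.random.default_rng(0)):
    """Bootstrap standard error of the ROM estimator over sample plots."""
    y, x = np.asarray(y, float), np.asarray(x, float)
    idx = rng.integers(0, len(y), size=(n_boot, len(y)))   # resample plots with replacement
    reps = np.mean(y[idx], axis=1) / np.mean(x[idx], axis=1)
    return reps.std(ddof=1)

rng = np.random.default_rng(4)
area = rng.uniform(0.8, 1.2, size=60)                 # hypothetical plot areas
volume = 120 * area + rng.normal(0, 15, size=60)      # hypothetical plot volumes
print(rom(volume, area), bootstrap_se(volume, area))
```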
Effects of auditory radio interference on a fine, continuous, open motor skill.
Lazar, J M; Koceja, D M; Morris, H H
1995-06-01
The effects of human speech on a fine, continuous, and open motor skill were examined. A tape of auditory human radio traffic was injected into a tank gunnery simulator during each training session over 4 wk of training at 3 hr per week. The dependent variables were identification time, fire time, kill time, systems errors, and acquisition errors. These were measured by the Unit Conduct Of Fire Trainer (UCOFT). The interference was injected into the UCOFT Tank Table VIII gunnery test. A Solomon four-group design was used. A 2 x 2 analysis of variance was used to assess whether interference gunnery training resulted in improvements in interference posttest scores. During the first three weeks of training, the interference group committed 106% more systems errors and 75% more acquisition errors than the standard group. The interference training condition was associated with a significant improvement from pre- to posttest of 44% in overall UCOFT scores; however, standard training did not improve posttest performance significantly over the same period. It was concluded that auditory radio interference degrades performance of this fine, continuous, open motor skill, and interference training appears to abate the effects of this degradation.
Toward Joint Hypothesis-Tests Seismic Event Screening Analysis: Ms|mb and Event Depth
DOE Office of Scientific and Technical Information (OSTI.GOV)
Anderson, Dale; Selby, Neil
2012-08-14
Well-established theory can be used to combine single-phenomenology hypothesis tests into a multi-phenomenology event-screening hypothesis test (Fisher's and Tippett's tests). The standard error commonly used in the Ms:mb event-screening hypothesis test is not fully consistent with its physical basis. An improved standard error agrees better with the physical basis, correctly partitions the error to include model error as a component of variance, and correctly reduces station noise variance through network averaging. For the 2009 DPRK test, the commonly used standard error 'rejects' H0 even with a better scaling slope (β = 1, Selby et al.), whereas the improved standard error 'fails to reject' H0.
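A brief illustration, with invented p-values, of the two combination rules named above: Fisher's method (-2 Σ ln p compared with a chi-square distribution on 2k degrees of freedom) and Tippett's minimum-p method.

```python
from math import log
from scipy.stats import chi2

def fisher_combined(p_values):
    """Fisher's method: -2*sum(ln p) ~ chi-square with 2k degrees of freedom under H0."""
    stat = -2.0 * sum(log(p) for p in p_values)
    return chi2.sf(stat, df=2 * len(p_values))

def tippett_combined(p_values):
    """Tippett's method: combined p-value based on the smallest individual p-value."""
    k = len(p_values)
    return 1.0 - (1.0 - min(p_values)) ** k

p = [0.04, 0.30]    # invented p-values, e.g. an Ms:mb screen and a depth screen
print(fisher_combined(p), tippett_combined(p))
```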
Maassen, Gerard H
2010-08-01
In this Journal, Lewis and colleagues introduced a new Reliable Change Index (RCI(WSD)), which incorporated the within-subject standard deviation (WSD) of a repeated measurement design as the standard error. In this note, two opposite errors in using WSD this way are demonstrated. First, being the standard error of measurement of only a single assessment makes WSD too small when practice effects are absent. Then, too many individuals will be designated reliably changed. Second, WSD can grow unlimitedly to the extent that differential practice effects occur. This can even make RCI(WSD) unable to detect any reliable change.
Signal location using generalized linear constraints
NASA Astrophysics Data System (ADS)
Griffiths, Lloyd J.; Feldman, D. D.
1992-01-01
This report has presented a two-part method for estimating the directions of arrival of uncorrelated narrowband sources when there are arbitrary phase errors and angle independent gain errors. The signal steering vectors are estimated in the first part of the method; in the second part, the arrival directions are estimated. It should be noted that the second part of the method can be tailored to incorporate additional information about the nature of the phase errors. For example, if the phase errors are known to be caused solely by element misplacement, the element locations can be estimated concurrently with the DOA's by trying to match the theoretical steering vectors to the estimated ones. Simulation results suggest that, for general perturbation, the method can resolve closely spaced sources under conditions for which a standard high-resolution DOA method such as MUSIC fails.
2017-01-01
Background: Clinicians, such as respiratory therapists and physicians, are often required to set up pieces of medical equipment that use inconsistent terminology. Current lung ventilator terminology that is used by different manufacturers contributes to the risk of usage errors, and in turn the risk of ventilator-associated lung injuries and other conditions. Human factors and communication issues are often associated with ventilator-related sentinel events, and inconsistent ventilator terminology compounds these issues. This paper describes our proposed protocol, which will be implemented at the University of Waterloo, Canada when this project is externally funded. Objective: We propose to determine whether a standardized vocabulary improves the ease of use, safety, and utility as it relates to the usability of medical devices, compared to legacy medical devices from multiple manufacturers, which use different terms. Methods: We hypothesize that usage errors by clinicians will be lower when standardization is consistently applied by all manufacturers. The proposed study will experimentally examine the impact of standardized nomenclature on performance declines in the use of an unfamiliar ventilator product in clinically relevant scenarios. Participants will be respiratory therapy practitioners and trainees, and we propose studying approximately 60 participants. Results: The work reported here is in the proposal phase. Once the protocol is implemented, we will report the results in a follow-up paper. Conclusions: The proposed study will help us better understand the effects of standardization on medical device usability. The study will also help identify any terms in the International Organization for Standardization (ISO) Draft International Standard (DIS) 19223 that may be associated with recurrent errors. Amendments to the standard will be proposed if recurrent errors are identified. This report contributes a protocol that can be used to assess the effect of standardization in any given domain that involves equipment, multiple manufacturers, inconsistent vocabulary, symbology, audio tones, or patterns in interface navigation. Second, the protocol can be used to experimentally evaluate the ISO DIS 19223 for its effectiveness, as researchers around the world may wish to conduct such tests and compare results. PMID:28887292
An Evaluation of Portable Wet Bulb Globe Temperature Monitor Accuracy.
Cooper, Earl; Grundstein, Andrew; Rosen, Adam; Miles, Jessica; Ko, Jupil; Curry, Patrick
2017-12-01
Wet bulb globe temperature (WBGT) is the gold standard for assessing environmental heat stress during physical activity. Many manufacturers of commercially available instruments fail to report WBGT accuracy. To determine the accuracy of several commercially available WBGT monitors compared with a standardized reference device. Observational study. Field test. Six commercially available WBGT devices. Data were recorded for 3 sessions (1 in the morning and 2 in the afternoon) at 2-minute intervals for at least 2 hours. Mean absolute error (MAE), root mean square error (RMSE), mean bias error (MBE), and the Pearson correlation coefficient (r) were calculated to determine instrument performance compared with the reference unit. The QUESTemp° 34 (MAE = 0.24°C, RMSE = 0.44°C, MBE = -0.64%) and Extech HT30 Heat Stress Wet Bulb Globe Temperature Meter (Extech; MAE = 0.61°C, RMSE = 0.79°C, MBE = 0.44%) demonstrated the least error in relation to the reference standard, whereas the General WBGT8778 Heat Index Checker (General; MAE = 1.18°C, RMSE = 1.34°C, MBE = 4.25%) performed the poorest. The QUESTemp° 34 and Kestrel 4400 Heat Stress Tracker units provided conservative measurements that slightly overestimated the WBGT provided by the reference unit. Finally, instruments using the psychrometric wet bulb temperature (General, REED Heat Index WBGT Meter, and WBGT-103 Heat Stroke Checker) tended to underestimate the WBGT, and the resulting values more frequently fell into WBGT-based activity categories with fewer restrictions as defined by the American College of Sports Medicine. The QUESTemp° 34, followed by the Extech, had the smallest error compared with the reference unit. Moreover, the QUESTemp° 34, Extech, and Kestrel units appeared to offer conservative yet accurate assessments of the WBGT, potentially minimizing the risk of allowing physical activity to continue in stressful heat environments. Instruments using the psychrometric wet bulb temperature tended to underestimate WBGT under low wind-speed conditions. Accurate WBGT interpretations are important to enable clinicians to guide activities in hot and humid weather conditions.
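A short sketch of the agreement statistics reported above; the device and reference series are invented, and expressing MBE as a percentage of the reference mean is an assumption made to match the percent values quoted.

```python
import numpy as np

def agreement_stats(device, reference):
    """Mean absolute error, root mean square error, mean bias error (percent), and r."""
    device, reference = np.asarray(device, float), np.asarray(reference, float)
    diff = device - reference
    mae = np.mean(np.abs(diff))
    rmse = np.sqrt(np.mean(diff ** 2))
    mbe_pct = 100.0 * np.mean(diff) / np.mean(reference)   # assumed percent convention
    r = np.corrcoef(device, reference)[0, 1]
    return mae, rmse, mbe_pct, r

ref = np.array([27.1, 28.4, 29.0, 30.2, 31.5])      # reference WBGT (deg C), invented
dev = np.array([27.4, 28.1, 29.5, 30.9, 31.9])      # monitor WBGT (deg C), invented
print(agreement_stats(dev, ref))
```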
Gilliom, Robert J.; Helsel, Dennis R.
1986-01-01
A recurring difficulty encountered in investigations of many metals and organic contaminants in ambient waters is that a substantial portion of water sample concentrations are below limits of detection established by analytical laboratories. Several methods were evaluated for estimating distributional parameters for such censored data sets using only uncensored observations. Their reliabilities were evaluated by a Monte Carlo experiment in which small samples were generated from a wide range of parent distributions and censored at varying levels. Eight methods were used to estimate the mean, standard deviation, median, and interquartile range. Criteria were developed, based on the distribution of uncensored observations, for determining the best performing parameter estimation method for any particular data set. The most robust method for minimizing error in censored-sample estimates of the four distributional parameters over all simulation conditions was the log-probability regression method. With this method, censored observations are assumed to follow the zero-to-censoring level portion of a lognormal distribution obtained by a least squares regression between logarithms of uncensored concentration observations and their z scores. When method performance was separately evaluated for each distributional parameter over all simulation conditions, the log-probability regression method still had the smallest errors for the mean and standard deviation, but the lognormal maximum likelihood method had the smallest errors for the median and interquartile range. When data sets were classified prior to parameter estimation into groups reflecting their probable parent distributions, the ranking of estimation methods was similar, but the accuracy of error estimates was markedly improved over those without classification.
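A minimal sketch of the log-probability regression idea described above: log concentrations of uncensored observations are regressed on their normal scores and the fit is extrapolated to impute values below the detection limit before computing moments. The plotting-position formula and the data are illustrative choices, not necessarily those of the study.

```python
import numpy as np
from scipy import stats

def log_probability_regression(uncensored, n_censored):
    """Impute censored values from a lognormal fit to the uncensored tail (simple sketch)."""
    x = np.sort(np.asarray(uncensored, float))
    n = len(x) + n_censored
    ranks = np.arange(n_censored + 1, n + 1)                 # uncensored obs occupy the top ranks
    pp = (ranks - 0.375) / (n + 0.25)                        # Blom plotting positions (one choice)
    z = stats.norm.ppf(pp)
    slope, intercept, *_ = stats.linregress(z, np.log(x))
    z_cens = stats.norm.ppf((np.arange(1, n_censored + 1) - 0.375) / (n + 0.25))
    imputed = np.exp(intercept + slope * z_cens)             # values below the detection limit
    full = np.concatenate([imputed, x])
    return full.mean(), full.std(ddof=1)

detected = [0.8, 1.1, 1.4, 2.0, 2.7, 3.5, 5.2]    # hypothetical concentrations above the limit
print(log_probability_regression(detected, n_censored=5))
```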
NASA Astrophysics Data System (ADS)
Kim, Younsu; Kim, Sungmin; Boctor, Emad M.
2017-03-01
Ultrasound image-guided needle tracking systems have been widely used due to their cost-effectiveness and nonionizing radiation properties. Various surgical navigation systems have been developed by utilizing state-of-the-art sensor technologies. However, ultrasound transmission beam thickness causes unfair initial evaluation conditions due to inconsistent placement of the target with respect to the ultrasound probe. This inconsistency also brings high uncertainty and results in large standard deviations for each measurement when we compare accuracy with and without guidance. To resolve this problem, we designed a complete evaluation platform by utilizing our mid-plane detection and time-of-flight measurement systems. The evaluation system uses a PZT element target and an ultrasound-transmitting needle. In this paper, we evaluated an optical tracker-based surgical ultrasound-guided navigation system whereby the optical tracker tracks marker frames attached to the ultrasound probe and the needle. We performed ten needle-guidance trials with a mid-plane adjustment algorithm and with a B-mode segmentation method. With the mid-plane adjustment, the result showed a mean error of 1.62+/-0.72 mm. The mean error increased to 3.58+/-2.07 mm without the mid-plane adjustment. Our evaluation system can reduce the effect of the beam-thickness problem and measure ultrasound image-guided technologies consistently with a minimal standard deviation. Using our novel evaluation system, ultrasound image-guided technologies can be compared under equal initial conditions. Therefore, the error can be evaluated more accurately, and the system provides a better analysis of the error sources, such as ultrasound beam thickness.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gilliom, R.J.; Helsel, D.R.
1986-02-01
A recurring difficulty encountered in investigations of many metals and organic contaminants in ambient waters is that a substantial portion of water sample concentrations are below limits of detection established by analytical laboratories. Several methods were evaluated for estimating distributional parameters for such censored data sets using only uncensored observations. Their reliabilities were evaluated by a Monte Carlo experiment in which small samples were generated from a wide range of parent distributions and censored at varying levels. Eight methods were used to estimate the mean, standard deviation, median, and interquartile range. Criteria were developed, based on the distribution of uncensored observations, for determining the best performing parameter estimation method for any particular data set. The most robust method for minimizing error in censored-sample estimates of the four distributional parameters over all simulation conditions was the log-probability regression method. With this method, censored observations are assumed to follow the zero-to-censoring level portion of a lognormal distribution obtained by a least squares regression between logarithms of uncensored concentration observations and their z scores. When method performance was separately evaluated for each distributional parameter over all simulation conditions, the log-probability regression method still had the smallest errors for the mean and standard deviation, but the lognormal maximum likelihood method had the smallest errors for the median and interquartile range. When data sets were classified prior to parameter estimation into groups reflecting their probable parent distributions, the ranking of estimation methods was similar, but the accuracy of error estimates was markedly improved over those without classification.
Estimation of distributional parameters for censored trace-level water-quality data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gilliom, R.J.; Helsel, D.R.
1984-01-01
A recurring difficulty encountered in investigations of many metals and organic contaminants in ambient waters is that a substantial portion of water-sample concentrations are below limits of detection established by analytical laboratories. Several methods were evaluated for estimating distributional parameters for such censored data sets using only uncensored observations. Their reliabilities were evaluated by a Monte Carlo experiment in which small samples were generated from a wide range of parent distributions and censored at varying levels. Eight methods were used to estimate the mean, standard deviation, median, and interquartile range. Criteria were developed, based on the distribution of uncensored observations, for determining the best-performing parameter estimation method for any particular data set. The most robust method for minimizing error in censored-sample estimates of the four distributional parameters over all simulation conditions was the log-probability regression method. With this method, censored observations are assumed to follow the zero-to-censoring level portion of a lognormal distribution obtained by a least-squares regression between logarithms of uncensored concentration observations and their z scores. When method performance was separately evaluated for each distributional parameter over all simulation conditions, the log-probability regression method still had the smallest errors for the mean and standard deviation, but the lognormal maximum likelihood method had the smallest errors for the median and interquartile range. When data sets were classified prior to parameter estimation into groups reflecting their probable parent distributions, the ranking of estimation methods was similar, but the accuracy of error estimates was markedly improved over those without classification. 6 figs., 6 tabs.
Treating Sample Covariances for Use in Strongly Coupled Atmosphere-Ocean Data Assimilation
NASA Astrophysics Data System (ADS)
Smith, Polly J.; Lawless, Amos S.; Nichols, Nancy K.
2018-01-01
Strongly coupled data assimilation requires cross-domain forecast error covariances; information from ensembles can be used, but limited sampling means that ensemble derived error covariances are routinely rank deficient and/or ill-conditioned and marred by noise. Thus, they require modification before they can be incorporated into a standard assimilation framework. Here we compare methods for improving the rank and conditioning of multivariate sample error covariance matrices for coupled atmosphere-ocean data assimilation. The first method, reconditioning, alters the matrix eigenvalues directly; this preserves the correlation structures but does not remove sampling noise. We show that it is better to recondition the correlation matrix rather than the covariance matrix as this prevents small but dynamically important modes from being lost. The second method, model state-space localization via the Schur product, effectively removes sample noise but can dampen small cross-correlation signals. A combination that exploits the merits of each is found to offer an effective alternative.
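A compact numerical sketch of the two treatments compared above (reconditioning versus Schur-product localization) is given below. The eigenvalue floor, the exponential taper, and the toy ensemble are all assumptions made only so the example runs; the study's actual implementation may differ.

    import numpy as np

    def recondition_correlation(C, kappa_max=100.0):
        # Impose a minimum eigenvalue so the condition number does not exceed kappa_max,
        # then rescale so the matrix keeps ones on its diagonal.
        w, V = np.linalg.eigh(C)
        w = np.maximum(w, w.max() / kappa_max)
        C_new = V @ np.diag(w) @ V.T
        d = np.sqrt(np.diag(C_new))
        return C_new / np.outer(d, d)

    def localize(P, coords, lengthscale=2.0):
        # Schur (element-wise) product with a simple exponential taper; an operational
        # system would typically use a compactly supported function such as Gaspari-Cohn.
        dist = np.abs(coords[:, None] - coords[None, :])
        return P * np.exp(-dist / lengthscale)

    # Toy coupled state (e.g., 3 atmosphere + 3 ocean variables) sampled with only 5 members,
    # so the sample covariance is rank deficient.
    rng = np.random.default_rng(0)
    ens = rng.standard_normal((5, 6))
    P = np.cov(ens, rowvar=False)
    sigma = np.sqrt(np.diag(P))
    C = P / np.outer(sigma, sigma)

    P_rec = recondition_correlation(C) * np.outer(sigma, sigma)
    P_loc = localize(P, coords=np.arange(6.0))
    print(np.linalg.cond(P), np.linalg.cond(P_rec), np.linalg.cond(P_loc))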
On the Stability of Rotated Factor Loadings: The Wexler Phenomenon.
ERIC Educational Resources Information Center
Jennrich, Robert I.
The formulas which give the standard errors of factor loading estimates, while available and computable, are complicated, and our understanding of them is limited. A nontechnical description of their behavior under favorable and unfavorable conditions is given. Of particular interest is their behavior in the presence of singularities arising from…
ERIC Educational Resources Information Center
Guo, Hongwen; Puhan, Gautam; Walker, Michael
2013-01-01
In this study we investigated when an equating conversion line is problematic in terms of gaps and clumps. We suggest using the conditional standard error of measurement (CSEM) to identify scale scores that are inappropriate in the overall raw-to-scale transformation.
Psychometric Properties of Raw and Scale Scores on Mixed-Format Tests
ERIC Educational Resources Information Center
Kolen, Michael J.; Lee, Won-Chan
2011-01-01
This paper illustrates that the psychometric properties of scores and scales that are used with mixed-format educational tests can impact the use and interpretation of the scores that are reported to examinees. Psychometric properties that include reliability and conditional standard errors of measurement are considered in this paper. The focus is…
Chapman, Wendy W.; Dowling, John N.
2006-01-01
Evaluating automated indexing applications requires comparing automatically indexed terms against manual reference standard annotations. However, there are no standard guidelines for determining which words from a textual document to include in manual annotations, and the vague task can result in substantial variation among manual indexers. We applied grounded theory to emergency department reports to create an annotation schema representing syntactic and semantic variables that could be annotated when indexing clinical conditions. We describe the annotation schema, which includes variables representing medical concepts (e.g., symptom, demographics), linguistic form (e.g., noun, adjective), and modifier types (e.g., anatomic location, severity). We measured the schema’s quality and found: (1) the schema was comprehensive enough to be applied to 20 unseen reports without changes to the schema; (2) agreement between author annotators applying the schema was high, with an F measure of 93%; and (3) an error analysis showed that the authors made complementary errors when applying the schema, demonstrating that the schema incorporates both linguistic and medical expertise. PMID:16230050
Cembrowski, G S; Hackney, J R; Carey, N
1993-04-01
The Clinical Laboratory Improvement Act of 1988 (CLIA 88) has dramatically changed proficiency testing (PT) practices having mandated (1) satisfactory PT for certain analytes as a condition of laboratory operation, (2) fixed PT limits for many of these "regulated" analytes, and (3) an increased number of PT specimens (n = 5) for each testing cycle. For many of these analytes, the fixed limits are much broader than the previously employed Standard Deviation Index (SDI) criteria. Paradoxically, there may be less incentive to identify and evaluate analytically significant outliers to improve the analytical process. Previously described "control rules" to evaluate these PT results are unworkable as they consider only two or three results. We used Monte Carlo simulations of Kodak Ektachem analyzers participating in PT to determine optimal control rules for the identification of PT results that are inconsistent with those from other laboratories using the same methods. The analysis of three representative analytes, potassium, creatine kinase, and iron was simulated with varying intrainstrument and interinstrument standard deviations (si and sg, respectively) obtained from the College of American Pathologists (Northfield, Ill) Quality Assurance Services data and Proficiency Test data, respectively. Analytical errors were simulated in each of the analytes and evaluated in terms of multiples of the interlaboratory SDI. Simple control rules for detecting systematic and random error were evaluated with power function graphs, graphs of probability of error detected vs magnitude of error. Based on the simulation results, we recommend screening all analytes for the occurrence of two or more observations exceeding the same +/- 1 SDI limit. For any analyte satisfying this condition, the mean of the observations should be calculated. For analytes with sg/si ratios between 1.0 and 1.5, a significant systematic error is signaled by the mean exceeding 1.0 SDI. Significant random error is signaled by one observation exceeding the +/- 3-SDI limit or the range of the observations exceeding 4 SDIs. For analytes with higher sg/si, significant systematic or random error is signaled by violation of the screening rule (having at least two observations exceeding the same +/- 1 SDI limit). Random error can also be signaled by one observation exceeding the +/- 1.5-SDI limit or the range of the observations exceeding 3 SDIs. We present a practical approach to the workup of apparent PT errors.
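The recommended screening and flagging rules lend themselves to a short worked example. The Python sketch below encodes one reading of those rules; the cutoff interpretation (for example, treating "the mean exceeding 1.0 SDI" as an absolute value) and the example numbers are our assumptions, not the authors' code.

    import numpy as np

    def evaluate_pt(results_sdi, sg_over_si):
        # results_sdi: the five PT results expressed in interlaboratory SDI units.
        x = np.asarray(results_sdi, dtype=float)

        # Screening rule: two or more observations beyond the same +/-1 SDI limit.
        if not ((np.sum(x > 1.0) >= 2) or (np.sum(x < -1.0) >= 2)):
            return {"screened": False}

        spread = x.max() - x.min()
        if sg_over_si <= 1.5:
            systematic = abs(x.mean()) > 1.0
            random_err = np.any(np.abs(x) > 3.0) or spread > 4.0
        else:
            # For higher sg/si the screening violation itself signals an error;
            # random error can additionally be flagged by the looser limits below.
            systematic = True
            random_err = np.any(np.abs(x) > 1.5) or spread > 3.0
        return {"screened": True, "systematic": bool(systematic), "random": bool(random_err)}

    # Hypothetical potassium PT event with an assumed sg/si ratio of 1.2.
    print(evaluate_pt([1.3, 1.6, 0.9, 1.1, 0.8], sg_over_si=1.2))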
King, Adam C; Newell, Karl M
2015-10-01
The experiment investigated the effect of selectively augmenting faster time scales of visual feedback information on the learning and transfer of continuous isometric force tracking tasks to test the generality of the self-organization of 1/f properties of force output. Three experimental groups tracked an irregular target pattern either under a standard fixed gain condition or with selective enhancement in the visual feedback display of intermediate (4-8 Hz) or high (8-12 Hz) frequency components of the force output. All groups reduced tracking error over practice, with the error lowest in the intermediate scaling condition, followed by the high scaling and fixed gain conditions, respectively. Selective visual scaling induced persistent changes across the frequency spectrum, with the strongest effect in the intermediate scaling condition and positive transfer to novel feedback displays. The findings reveal an interdependence of the time scales in the learning and transfer of isometric force output frequency structures, consistent with 1/f process models of the time scales of motor output variability.
NASA Astrophysics Data System (ADS)
Azarnavid, Babak; Parand, Kourosh; Abbasbandy, Saeid
2018-06-01
This article discusses an iterative reproducing kernel method with respect to its effectiveness and capability of solving a fourth-order boundary value problem with nonlinear boundary conditions modeling beams on elastic foundations. Since there is no method of obtaining a reproducing kernel which satisfies nonlinear boundary conditions, the standard reproducing kernel methods cannot be used directly to solve boundary value problems with nonlinear boundary conditions, as there is no knowledge about the existence and uniqueness of the solution. The aim of this paper is, therefore, to construct an iterative method by the use of a combination of the reproducing kernel Hilbert space method and a shooting-like technique to solve the mentioned problems. Error estimation for reproducing kernel Hilbert space methods for nonlinear boundary value problems has yet to be discussed in the literature. In this paper, we present error estimation for the reproducing kernel method to solve nonlinear boundary value problems, probably for the first time. Some numerical results are given to demonstrate the applicability of the method.
Haslam, Catherine; Wagner, Joseph; Wegener, Signy; Malouf, Tania
2017-01-01
Errorless learning has demonstrated efficacy in the treatment of memory impairment in adults and older adults with acquired brain injury. In the same population, use of elaborative encoding through supported self-generation in errorless paradigms has been shown to further enhance memory performance. However, the evidence base relevant to application of both standard and self-generation forms of errorless learning in children is far weaker. We address this limitation in the present study to examine recall performance in children with brain injury (n = 16) who were taught novel age-appropriate science and social science facts through the medium of Skype. All participants were taught these facts under conditions of standard errorless learning, errorless learning with self-generation, and trial-and-error learning after which memory was tested at 5-minute, 30-minute, 1-hour and 24-hour delays. Analysis revealed no main effect of time, with participants retaining most information acquired over the 24-hour testing period, but a significant effect of condition. Notably, self-generation proved more effective than both standard errorless and trial-and-error learning. Further analysis of the data revealed that severity of attentional impairment was less detrimental to recall performance under errorless conditions. This study extends the literature to provide further evidence of the value of errorless learning methods in children with ABI and the first demonstration of the effectiveness of self-generation when delivered via the Internet.
Haba, Tomonobu; Kondo, Shimpei; Hayashi, Daiki; Koyama, Shuji
2013-07-01
Detective quantum efficiency (DQE) is widely used as a comprehensive metric for X-ray image evaluation in digital X-ray units. The incident photon fluence per air kerma (SNR²(in)) is necessary for calculating the DQE. The International Electrotechnical Commission (IEC) reports the SNR²(in) under conditions of standard radiation quality, but this SNR²(in) might not be accurate as calculated from the X-ray spectra emitted by an actual X-ray tube. In this study, we evaluated the error range of the SNR²(in) presented by the IEC62220-1 report. We measured the X-ray spectra emitted by an X-ray tube under conditions of standard radiation quality of RQA5. The spectral photon fluence at each energy bin was multiplied by the photon energy and the mass energy absorption coefficient of air; then the air kerma spectrum was derived. The air kerma spectrum was integrated over the whole photon energy range to yield the total air kerma. The total photon number was then divided by the total air kerma. This value is the SNR²(in). These calculations were performed for various measurement parameters and X-ray units. The percent difference between the calculated value and the standard value of RQA5 was up to 2.9%. The error range was not negligibly small. Therefore, it is better to use the new SNR²(in) of 30694 (1/(mm(2) μGy)) than the current SNR²(in) of 30174 (1/(mm(2) μGy)).
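The calculation of SNR²(in) from a measured spectrum reduces to a fluence-weighted air-kerma sum. The fragment below illustrates the arithmetic with a made-up five-bin spectrum and approximate mass energy-absorption coefficients for air; all numbers are placeholders, not the RQA5 spectrum used in the study.

    import numpy as np

    energy_keV  = np.array([40.0, 50.0, 60.0, 70.0, 80.0])
    fluence_mm2 = np.array([0.10, 0.30, 0.35, 0.20, 0.05]) * 1.0e5   # photons per mm^2
    mu_en_rho   = np.array([0.068, 0.041, 0.030, 0.027, 0.024])      # air, cm^2/g (approx.)

    keV_to_J = 1.602e-16
    cm2_per_g_to_mm2_per_kg = 1.0e5

    # Air kerma per bin (Gy) = fluence * photon energy * (mu_en/rho), then sum and convert.
    kerma_Gy = fluence_mm2 * energy_keV * keV_to_J * mu_en_rho * cm2_per_g_to_mm2_per_kg
    total_kerma_uGy = kerma_Gy.sum() * 1.0e6

    snr2_in = fluence_mm2.sum() / total_kerma_uGy
    print("SNR^2_in = %.0f photons/(mm^2 uGy)" % snr2_in)   # order of 3e4, as quoted above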
Merging gauge and satellite rainfall with specification of associated uncertainty across Australia
NASA Astrophysics Data System (ADS)
Woldemeskel, Fitsum M.; Sivakumar, Bellie; Sharma, Ashish
2013-08-01
Accurate estimation of spatial rainfall is crucial for modelling hydrological systems and planning and management of water resources. While spatial rainfall can be estimated either using rain gauge-based measurements or using satellite-based measurements, such estimates are subject to uncertainties due to various sources of errors in either case, including interpolation and retrieval errors. The purpose of the present study is twofold: (1) to investigate the benefit of merging rain gauge measurements and satellite rainfall data for Australian conditions and (2) to produce a database of retrospective rainfall along with a new uncertainty metric for each grid location at any timestep. The analysis involves four steps: First, a comparison of rain gauge measurements and the Tropical Rainfall Measuring Mission (TRMM) 3B42 data at such rain gauge locations is carried out. Second, gridded monthly rain gauge rainfall is determined using thin plate smoothing splines (TPSS) and modified inverse distance weight (MIDW) method. Third, the gridded rain gauge rainfall is merged with the monthly accumulated TRMM 3B42 using a linearised weighting procedure, the weights at each grid being calculated based on the error variances of each dataset. Finally, cross validation (CV) errors at rain gauge locations and standard errors at gridded locations for each timestep are estimated. The CV error statistics indicate that merging of the two datasets improves the estimation of spatial rainfall, and more so where the rain gauge network is sparse. The provision of spatio-temporal standard errors with the retrospective dataset is particularly useful for subsequent modelling applications where input error knowledge can help reduce the uncertainty associated with modelling outcomes.
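The merging step can be read as a minimum-variance (inverse-variance weighted) combination at each grid cell. The sketch below shows that idea with made-up 2 x 2 grids; the study's actual linearised weighting and error-variance estimation are more involved.

    import numpy as np

    gauge     = np.array([[120.0,  95.0], [  60.0, 250.0]])   # interpolated gauge rainfall (mm)
    satellite = np.array([[105.0, 110.0], [  75.0, 230.0]])   # TRMM 3B42 monthly accumulation (mm)
    var_gauge = np.array([[100.0, 900.0], [2500.0, 400.0]])   # larger where the network is sparse
    var_sat   = np.array([[400.0, 400.0], [ 400.0, 400.0]])

    w_gauge = (1.0 / var_gauge) / (1.0 / var_gauge + 1.0 / var_sat)
    merged = w_gauge * gauge + (1.0 - w_gauge) * satellite

    # Standard error of the merged estimate, reported alongside the rainfall itself.
    merged_se = np.sqrt(1.0 / (1.0 / var_gauge + 1.0 / var_sat))
    print(merged)
    print(merged_se)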
Using artificial neural networks (ANN) for open-loop tomography
NASA Astrophysics Data System (ADS)
Osborn, James; De Cos Juez, Francisco Javier; Guzman, Dani; Butterley, Timothy; Myers, Richard; Guesalaga, Andres; Laine, Jesus
2011-09-01
The next generation of adaptive optics (AO) systems require tomographic techniques in order to correct for atmospheric turbulence along lines of sight separated from the guide stars. Multi-object adaptive optics (MOAO) is one such technique. Here, we present a method which uses an artificial neural network (ANN) to reconstruct the target phase given off-axis reference sources. This method does not require any input of the turbulence profile and is therefore less susceptible to changing conditions than some existing methods. We compare our ANN method with a standard least squares type matrix multiplication method (MVM) in simulation and find that the tomographic error is similar to that of the MVM method. In changing conditions, the tomographic error increases for MVM but remains constant with the ANN model, and no large matrix inversions are required.
Eye Gaze and Aging: Selective and Combined Effects of Working Memory and Inhibitory Control.
Crawford, Trevor J; Smith, Eleanor S; Berry, Donna M
2017-01-01
Eye-tracking is increasingly studied as a cognitive and biological marker for the early signs of neuropsychological and psychiatric disorders. However, in order to make further progress, a more comprehensive understanding of the age-related effects on eye-tracking is essential. The antisaccade task requires participants to make saccadic eye movements away from a prepotent stimulus. Speculation on the cause of the observed age-related differences in the antisaccade task largely centers around two sources of cognitive dysfunction: inhibitory control (IC) and working memory (WM). The IC account views cognitive slowing and task errors as a direct result of the decline of inhibitory cognitive mechanisms. An alternative theory considers that a deterioration of WM is the cause of these age-related effects on behavior. The current study assessed IC and WM processes underpinning saccadic eye movements in young and older participants. This was achieved with three experimental conditions that systematically varied the extent to which WM and IC were taxed in the antisaccade task: a memory-guided task was used to explore the effect of increasing the WM load; a Go/No-Go task was used to explore the effect of increasing the inhibitory load; a 'standard' antisaccade task retained the standard WM and inhibitory loads. Saccadic eye movements were also examined in a control condition: the standard prosaccade task where the load of WM and IC were minimal or absent. Saccade latencies, error rates and the spatial accuracy of saccades of older participants were compared to the same measures in healthy young controls across the conditions. The results revealed that aging is associated with changes in both IC and WM. Increasing the inhibitory load was associated with increased reaction times in the older group, while the increased WM load and the inhibitory load contributed to an increase in the antisaccade errors. These results reveal that aging is associated with changes in both IC and WM.
ERIC Educational Resources Information Center
Lord, Frederic M.; Stocking, Martha
A general computer program is described that will compute asymptotic standard errors and carry out significance tests for an endless variety of (standard and) nonstandard large-sample statistical problems, without requiring the statistician to derive asymptotic standard error formulas. The program assumes that the observations have a multinormal…
Experimental determination of a Viviparus contectus thermometry equation.
Bugler, Melanie J; Grimes, Stephen T; Leng, Melanie J; Rundle, Simon D; Price, Gregory D; Hooker, Jerry J; Collinson, Margaret E
2009-09-01
Experimental measurements of the (18)O/(16)O isotope fractionation between the biogenic aragonite of Viviparus contectus (Gastropoda) and its host freshwater were undertaken to generate a species-specific thermometry equation. The temperature dependence of the fractionation factor and the relationship between Deltadelta(18)O (delta(18)O(carb.) - delta(18)O(water)) and temperature were calculated from specimens maintained under laboratory and field (collection and cage) conditions. The field specimens were grown (Somerset, UK) between August 2007 and August 2008, with water samples and temperature measurements taken monthly. Specimens grown in the laboratory experiment were maintained under constant temperatures (15 degrees C, 20 degrees C and 25 degrees C) with water samples collected weekly. Application of a linear regression to the datasets indicated that the gradients of all three experiments were within experimental error of each other (+/-2 times the standard error); therefore, a combined (laboratory and field data) correlation could be applied. The relationship between Deltadelta(18)O (delta(18)O(carb.) - delta(18)O(water)) and temperature (T) for this combined dataset is given by: T = -7.43 (+0.87, -1.13) * Deltadelta(18)O + 22.89 (+/- 2.09) (T is in degrees C, delta(18)O(carb.) is with respect to Vienna Pee Dee Belemnite (VPDB), and delta(18)O(water) is with respect to Vienna Standard Mean Ocean Water (VSMOW); quoted errors are 2 times the standard error). Comparisons made with existing aragonitic thermometry equations reveal that the linear regression for the combined Viviparus contectus equation is within 2 times the standard error of previously reported aragonitic thermometry equations. This suggests there are no species-specific vital effects for Viviparus contectus. Seasonal delta(18)O(carb.) profiles from specimens retrieved from the field cage experiment indicate that during shell secretion the delta(18)O(carb.) of the shell carbonate is not influenced by size, sex or whether females contained eggs or juveniles. Copyright (c) 2009 John Wiley & Sons, Ltd.
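The combined equation can be applied directly. The short fragment below shows a worked application with hypothetical isotope values and a crude bracket built from the quoted slope and intercept uncertainties; the bracketing is only an illustration, not the authors' error propagation.

    def temperature(d18O_carb, d18O_water):
        dd = d18O_carb - d18O_water          # Deltadelta(18)O, carbonate (VPDB) minus water (VSMOW)
        return -7.43 * dd + 22.89            # degrees C

    def temperature_bracket(dd):
        # A rough bracket using the quoted uncertainties; valid for positive Deltadelta(18)O only.
        return (-7.43 - 1.13) * dd + (22.89 - 2.09), (-7.43 + 0.87) * dd + (22.89 + 2.09)

    dd = -5.5 - (-6.5)                       # hypothetical shell and water values (per mil)
    print(temperature(-5.5, -6.5))           # about 15.5 degrees C
    print(temperature_bracket(dd))           # roughly (12.2, 18.4) degrees C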
Computation of Standard Errors
Dowd, Bryan E; Greene, William H; Norton, Edward C
2014-01-01
Objectives: We discuss the problem of computing the standard errors of functions involving estimated parameters and provide the relevant computer code for three different computational approaches using two popular computer packages. Study Design: We show how to compute the standard errors of several functions of interest: the predicted value of the dependent variable for a particular subject, and the effect of a change in an explanatory variable on the predicted value of the dependent variable for an individual subject and average effect for a sample of subjects. Empirical Application: Using a publicly available dataset, we explain three different methods of computing standard errors: the delta method, Krinsky–Robb, and bootstrapping. We provide computer code for Stata 12 and LIMDEP 10/NLOGIT 5. Conclusions: In most applications, choice of the computational method for standard errors of functions of estimated parameters is a matter of convenience. However, when computing standard errors of the sample average of functions that involve both estimated parameters and nonstochastic explanatory variables, it is important to consider the sources of variation in the function's values. PMID:24800304
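For readers without access to the Stata or LIMDEP code, the three approaches can also be sketched with general-purpose tools. The Python fragment below applies them to a simple simulated regression and an exponentiated prediction; the data, model, and function of interest are all placeholders, not the paper's empirical application.

    import numpy as np

    rng = np.random.default_rng(1)
    n = 200
    x = rng.uniform(0, 2, n)
    X = np.column_stack([np.ones(n), x])
    y = X @ np.array([0.5, 0.8]) + rng.normal(0, 0.5, n)

    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    V = (resid @ resid / (n - 2)) * np.linalg.inv(X.T @ X)   # covariance of the estimates

    x0 = np.array([1.0, 1.5])                 # evaluation point
    g = np.exp(x0 @ beta)                     # function of the estimated parameters

    # (1) Delta method: SE = sqrt(grad' V grad), with grad = dg/dbeta = g * x0.
    grad = g * x0
    se_delta = np.sqrt(grad @ V @ grad)

    # (2) Krinsky-Robb: draw parameters from N(beta, V), evaluate g, take the SD of the draws.
    draws = rng.multivariate_normal(beta, V, size=5000)
    se_kr = np.exp(draws @ x0).std(ddof=1)

    # (3) Nonparametric bootstrap: resample observations, re-estimate, re-evaluate g.
    boot = []
    for _ in range(1000):
        idx = rng.integers(0, n, n)
        b, *_ = np.linalg.lstsq(X[idx], y[idx], rcond=None)
        boot.append(np.exp(x0 @ b))
    print(se_delta, se_kr, np.std(boot, ddof=1))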
Reproducibility of 3D kinematics and surface electromyography measurements of mastication.
Remijn, Lianne; Groen, Brenda E; Speyer, Renée; van Limbeek, Jacques; Nijhuis-van der Sanden, Maria W G
2016-03-01
The aim of this study was to determine the measurement reproducibility for a procedure evaluating the mastication process and to estimate the smallest detectable differences of 3D kinematic and surface electromyography (sEMG) variables. Kinematics of mandible movements and sEMG activity of the masticatory muscles were obtained over two sessions with four conditions: two food textures (biscuit and bread) of two sizes (small and large). Twelve healthy adults (mean age 29.1 years) completed the study. The second to the fifth chewing cycle of 5 bites were used for analyses. The reproducibility per outcome variable was calculated with an intraclass correlation coefficient (ICC), and a Bland-Altman analysis was applied to determine the standard error of measurement, relative error of measurement, and smallest detectable differences of all variables. ICCs ranged from 0.71 to 0.98 for all outcome variables. The outcome variables consisted of four bite and fourteen chewing cycle variables. The relative standard error of measurement of the bite variables was up to 17.3% for 'time-to-swallow', 'time-to-transport' and 'number of chewing cycles', but ranged from 31.5% to 57.0% for 'change of chewing side'. The relative standard error of measurement ranged from 4.1% to 24.7% for chewing cycle variables and was smaller for kinematic variables than for sEMG variables. In general, 3D kinematic and sEMG measurements are reproducible techniques for assessing the mastication process. The duration of the chewing cycle and the frequency of chewing were the best reproducible measurements. Change of chewing side could not be reproduced. The published measurement error and smallest detectable differences will aid the interpretation of the results of future clinical studies using the same study variables. Copyright © 2015 Elsevier Inc. All rights reserved.
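The reported reproducibility quantities follow from standard formulas. A minimal sketch with invented test-retest data is given below; the exact formulas used in the paper may differ in detail.

    import numpy as np

    session1 = np.array([0.72, 0.80, 0.65, 0.90, 0.77, 0.84, 0.69, 0.75, 0.81, 0.88, 0.73, 0.79])
    session2 = np.array([0.70, 0.83, 0.68, 0.86, 0.75, 0.86, 0.72, 0.74, 0.79, 0.90, 0.71, 0.80])

    diff = session2 - session1
    sd_diff = diff.std(ddof=1)

    sem = sd_diff / np.sqrt(2.0)               # standard error of measurement
    sdd = 1.96 * sd_diff                       # smallest detectable difference (= 1.96*sqrt(2)*SEM)
    relative_sem = 100.0 * sem / np.concatenate([session1, session2]).mean()

    print("bias:", diff.mean(), "SEM:", sem, "SDD:", sdd, "relative SEM (%):", relative_sem)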
The Infinitesimal Jackknife with Exploratory Factor Analysis
ERIC Educational Resources Information Center
Zhang, Guangjian; Preacher, Kristopher J.; Jennrich, Robert I.
2012-01-01
The infinitesimal jackknife, a nonparametric method for estimating standard errors, has been used to obtain standard error estimates in covariance structure analysis. In this article, we adapt it for obtaining standard errors for rotated factor loadings and factor correlations in exploratory factor analysis with sample correlation matrices. Both…
ERIC Educational Resources Information Center
Nevitt, Jonathan; Hancock, Gregory R.
2001-01-01
Evaluated the bootstrap method under varying conditions of nonnormality, sample size, model specification, and number of bootstrap samples drawn from the resampling space. Results for the bootstrap suggest the resampling-based method may be conservative in its control over model rejections, thus having an impact on the statistical power associated…
ERIC Educational Resources Information Center
Fouladi, Rachel T.
2000-01-01
Provides an overview of standard and modified normal theory and asymptotically distribution-free covariance and correlation structure analysis techniques and details Monte Carlo simulation results on Type I and Type II error control. Demonstrates through the simulation that robustness and nonrobustness of structure analysis techniques vary as a…
Statistical models for estimating daily streamflow in Michigan
Holtschlag, D.J.; Salehi, Habib
1992-01-01
Statistical models for estimating daily streamflow were analyzed for 25 pairs of streamflow-gaging stations in Michigan. Stations were paired by randomly choosing a station operated in 1989 at which 10 or more years of continuous flow data had been collected and at which flow is virtually unregulated; a nearby station was chosen where flow characteristics are similar. Streamflow data from the 25 randomly selected stations were used as the response variables; streamflow data at the nearby stations were used to generate a set of explanatory variables. Ordinary least-squares regression (OLSR) equations, autoregressive integrated moving-average (ARIMA) equations, and transfer function-noise (TFN) equations were developed to estimate the log transform of flow for the 25 randomly selected stations. The precision of each type of equation was evaluated on the basis of the standard deviation of the estimation errors. OLSR equations produce one set of estimation errors; ARIMA and TFN models each produce l sets of estimation errors corresponding to the forecast lead. The lead-l forecast is the estimate of flow l days ahead of the most recent streamflow used as a response variable in the estimation. In this analysis, the standard deviations of lead-l ARIMA and TFN forecast errors were generally lower than the standard deviation of OLSR errors for l < 2 days and l < 9 days, respectively. Composite estimates were computed as a weighted average of forecasts based on TFN equations and backcasts (forecasts of the reverse-ordered series) based on ARIMA equations. The standard deviation of composite errors varied throughout the length of the estimation interval and generally was at maximum near the center of the interval. For comparison with OLSR errors, the mean standard deviation of composite errors was computed for intervals of length 1 to 40 days. The mean standard deviation of length-l composite errors was generally less than the standard deviation of the OLSR errors for l < 32 days. In addition, the composite estimates ensure a gradual transition between periods of estimated and measured flows. Model performance among stations of differing model error magnitudes was compared by computing ratios of the mean standard deviation of the length-l composite errors to the standard deviation of OLSR errors. The mean error ratio for the set of 25 selected stations was less than 1 for intervals l < 32 days. Considering the frequency characteristics of the length of intervals of estimated record in Michigan, the effective mean error ratio for intervals < 30 days was 0.52. Thus, for intervals of estimation of 1 month or less, the error of the composite estimate is substantially lower than the error of the OLSR estimate.
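The composite estimate amounts to blending a forward forecast and a reverse-ordered backcast across the gap being filled. The weighting actually used in the study was tied to the forecast error behavior; the sketch below substitutes simple position-based weights and made-up values purely to illustrate the blending step.

    import numpy as np

    gap_len = 10
    forecast = np.linspace(2.30, 2.10, gap_len)   # TFN lead-1..lead-10 forecasts of log flow (made up)
    backcast = np.linspace(2.05, 2.25, gap_len)   # ARIMA backcasts mapped onto the same days (made up)

    lead = np.arange(1, gap_len + 1)
    w_fwd = (gap_len + 1 - lead) / (gap_len + 1.0)   # trust the forecast early, the backcast late
    composite = w_fwd * forecast + (1.0 - w_fwd) * backcast
    print(composite)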
Nonparametric Estimation of Standard Errors in Covariance Analysis Using the Infinitesimal Jackknife
ERIC Educational Resources Information Center
Jennrich, Robert I.
2008-01-01
The infinitesimal jackknife provides a simple general method for estimating standard errors in covariance structure analysis. Beyond its simplicity and generality what makes the infinitesimal jackknife method attractive is that essentially no assumptions are required to produce consistent standard error estimates, not even the requirement that the…
Factor Rotation and Standard Errors in Exploratory Factor Analysis
ERIC Educational Resources Information Center
Zhang, Guangjian; Preacher, Kristopher J.
2015-01-01
In this article, we report a surprising phenomenon: Oblique CF-varimax and oblique CF-quartimax rotation produced similar point estimates for rotated factor loadings and factor correlations but different standard error estimates in an empirical example. Influences of factor rotation on asymptotic standard errors are investigated using a numerical…
Luo, Ke; Hong, Sung-Sam; Wang, Jun; Chung, Mi-Ja; Deog-Hwan, Oh
2015-05-01
This study was conducted to develop a predictive model to estimate the growth of Listeria monocytogenes on fresh pork during storage at constant temperatures (5, 10, 15, 20, 25, 30, and 35°C). The Baranyi model was fitted to growth data (log CFU per gram) to calculate the specific growth rate (SGR) and lag time (LT) with a high coefficient of determination (R(2) > 0.98). As expected, SGR increased with a decline in LT with rising temperatures in all samples. Secondary models were then developed to describe the variation of SGR and LT as a function of temperature. Subsequently, the developed models were validated with additional independent growth data collected at 7, 17, 27, and 37°C and from published reports using proportion of relative errors and proportion of standard error of prediction. The proportions of relative errors of the SGR and LT models developed herein were 0.79 and 0.18, respectively. In addition, the standard error of prediction values of the SGR and LT of L. monocytogenes ranged from 25.7 to 33.1% and from 44.92 to 58.44%, respectively. These results suggest that the model developed in this study was capable of predicting the growth of L. monocytogenes under various isothermal conditions.
Evaluation of quality of commercial pedometers.
Tudor-Locke, Catrine; Sisson, Susan B; Lee, Sarah M; Craig, Cora L; Plotnikoff, Ronald C; Bauman, Adrian
2006-01-01
The purpose of this study was to: 1) evaluate the quality of promotional pedometers widely distributed through cereal boxes at the time of the 2004 Canada on the Move campaign; and 2) establish a battery of testing protocols to provide direction for future consensus on industry standards for pedometer quality. Fifteen Kellogg's* Special K* Step Counters (K pedometers or K; manufactured for Kellogg Canada by Sasco, Inc.) and 9 Yamax pedometers (Yamax; Yamax Corporation, Tokyo, Japan) were tested with 9 participants accordingly: 1) 20 Step Test; 2) treadmill at 80m x min(-1) (3 miles x hr(-1)) and motor vehicle controlled conditions; and 3) 24-hour free-living conditions against an accelerometer criterion. Fifty-three percent of the K pedometers passed the 20 Step Test compared to 100% of the Yamax. Mean absolute percent error for the K during treadmill walking was 24.2+/-33.9 vs. 3.9+/-6.6% for the Yamax. The K detected 5.7-fold more non-steps compared to the Yamax during the motor vehicle condition. In the free-living condition, mean absolute percent error relative to the ActiGraph was 44.9+/-34.5% for the K vs. 19.5+/-21.2% for the Yamax. K pedometers are unacceptably inaccurate. We suggest that research grade pedometers: 1) be manufactured to a sensitivity threshold of 0.35 Gs; 2) detect +/-1 step error on the 20 Step Test (i.e., within 5%); 3) detect +/-1% error most of the time during treadmill walking at 80m x min(-1) (3 miles x hr(-1)); as well as, 4) detect steps/day within 10% of the ActiGraph at least 60% of the time, or be within 10% of the Yamax under free-living conditions.
Sustained attention performance during sleep deprivation: evidence of state instability
NASA Technical Reports Server (NTRS)
Doran, S. M.; Van Dongen, H. P.; Dinges, D. F.
2001-01-01
Nathaniel Kleitman was the first to observe that sleep deprivation in humans did not eliminate the ability to perform neurobehavioral functions, but it did make it difficult to maintain stable performance for more than a few minutes. To investigate variability in performance as a function of sleep deprivation, n = 13 subjects were tested every 2 hours on a 10-minute, sustained-attention, psychomotor vigilance task (PVT) throughout 88 hours of total sleep deprivation (TSD condition), and compared to a control group of n = 15 subjects who were permitted a 2-hour nap every 12 hours (NAP condition) throughout the 88-hour period. PVT reaction time means and standard deviations increased markedly among subjects and within each individual subject in the TSD condition relative to the NAP condition. TSD subjects also had increasingly greater performance variability as a function of time on task after 18 hours of wakefulness. During sleep deprivation, variability in PVT performance reflected a combination of normal timely responses, errors of omission (i.e., lapses), and errors of commission (i.e., responding when no stimulus was present). Errors of omission and errors of commission were highly intercorrelated across deprivation in the TSD condition (r = 0.85, p = 0.0001), suggesting that performance instability is more likely to include compensatory effort than a lack of motivation. The marked increases in PVT performance variability as sleep loss continued supports the "state instability" hypothesis, which posits that performance during sleep deprivation is increasingly variable due to the influence of sleep initiating mechanisms on the endogenous capacity to maintain attention and alertness, thereby creating an unstable state that fluctuates within seconds and that cannot be characterized as either fully awake or asleep.
McClintock, Brett T.; Bailey, Larissa L.; Pollock, Kenneth H.; Simons, Theodore R.
2010-01-01
The recent surge in the development and application of species occurrence models has been associated with an acknowledgment among ecologists that species are detected imperfectly due to observation error. Standard models now allow unbiased estimation of occupancy probability when false negative detections occur, but this is conditional on no false positive detections and sufficient incorporation of explanatory variables for the false negative detection process. These assumptions are likely reasonable in many circumstances, but there is mounting evidence that false positive errors and detection probability heterogeneity may be much more prevalent in studies relying on auditory cues for species detection (e.g., songbird or calling amphibian surveys). We used field survey data from a simulated calling anuran system of known occupancy state to investigate the biases induced by these errors in dynamic models of species occurrence. Despite the participation of expert observers in simplified field conditions, both false positive errors and site detection probability heterogeneity were extensive for most species in the survey. We found that even low levels of false positive errors, constituting as little as 1% of all detections, can cause severe overestimation of site occupancy, colonization, and local extinction probabilities. Further, unmodeled detection probability heterogeneity induced substantial underestimation of occupancy and overestimation of colonization and local extinction probabilities. Completely spurious relationships between species occurrence and explanatory variables were also found. Such misleading inferences would likely have deleterious implications for conservation and management programs. We contend that all forms of observation error, including false positive errors and heterogeneous detection probabilities, must be incorporated into the estimation framework to facilitate reliable inferences about occupancy and its associated vital rate parameters.
NASA Astrophysics Data System (ADS)
Skourup, Henriette; Farrell, Sinéad Louise; Hendricks, Stefan; Ricker, Robert; Armitage, Thomas W. K.; Ridout, Andy; Andersen, Ole Baltazar; Haas, Christian; Baker, Steven
2017-11-01
State-of-the-art Arctic Ocean mean sea surface (MSS) models and global geoid models (GGMs) are used to support sea ice freeboard estimation from satellite altimeters, as well as in oceanographic studies such as mapping sea level anomalies and mean dynamic ocean topography. However, errors in a given model in the high-frequency domain, primarily due to unresolved gravity features, can result in errors in the estimated along-track freeboard. These errors are exacerbated in areas with a sparse lead distribution in consolidated ice pack conditions. Additionally model errors can impact ocean geostrophic currents, derived from satellite altimeter data, while remaining biases in these models may impact longer-term, multisensor oceanographic time series of sea level change in the Arctic. This study focuses on an assessment of five state-of-the-art Arctic MSS models (UCL13/04 and DTU15/13/10) and a commonly used GGM (EGM2008). We describe errors due to unresolved gravity features, intersatellite biases, and remaining satellite orbit errors, and their impact on the derivation of sea ice freeboard. The latest MSS models, incorporating CryoSat-2 sea surface height measurements, show improved definition of gravity features, such as the Gakkel Ridge. The standard deviation between models ranges 0.03-0.25 m. The impact of remaining MSS/GGM errors on freeboard retrieval can reach several decimeters in parts of the Arctic. While the maximum observed freeboard difference found in the central Arctic was 0.59 m (UCL13 MSS minus EGM2008 GGM), the standard deviation in freeboard differences is 0.03-0.06 m.
Shah, Rachit D; Cao, Alex; Golenberg, Lavie; Ellis, R Darin; Auner, Gregory W; Pandya, Abhilash K; Klein, Michael D
2009-04-01
Technical advances in the application of laparoscopic and robotic surgical systems have improved platform usability. The authors hypothesized that using two monitors instead of one would lead to faster performance with fewer errors. All tasks were performed using a surgical robot in a training box. One of the monitors was a standard camera with two preset zoom levels (zoomed in and zoomed out, single-monitor condition). The second monitor provided a static panoramic view of the whole surgical field. The standard camera was static at the zoomed-in level for the dual-monitor condition of the study. The study had two groups of participants: 4 surgeons proficient in both robotic and advanced laparoscopic skills and 10 lay persons (nonsurgeons) who were given adequate time to train and familiarize themselves with the equipment. Running a 50-cm rope was the basic task. Advanced tasks included running a suture through predetermined points and intracorporeal knot tying with 3-0 silk. Trial completion times and errors, categorized into three groups (orientation, precision, and task), were recorded. The trial completion times for all the tasks, basic and advanced, in the two groups were not significantly different. Fewer orientation errors occurred in the nonsurgeon group during knot tying (p=0.03) and in both groups during suturing (p=0.0002) in the dual-monitor arm of the study. Differences in precision and task error were not significant. Using two camera views helps both surgeons and lay persons perform complex tasks with fewer errors. These results may be due to better awareness of the surgical field with regard to the location of the instruments, leading to better field orientation. This display setup has potential for use in complex minimally invasive surgeries such as esophagectomy and gastric bypass. This technique also would be applicable to open microsurgery.
NASA Astrophysics Data System (ADS)
Smith, Gennifer T.; Dwork, Nicholas; Khan, Saara A.; Millet, Matthew; Magar, Kiran; Javanmard, Mehdi; Bowden, Audrey K.
2017-03-01
Urinalysis dipsticks were designed to revolutionize urine-based medical diagnosis. They are cheap, extremely portable, and have multiple assays patterned on a single platform. They were also meant to be incredibly easy to use. Unfortunately, there are many aspects in both the preparation and the analysis of the dipsticks that are plagued by user error. This high error is one reason that dipsticks have failed to flourish in both the at-home market and in low-resource settings. Sources of error include: inaccurate volume deposition, varying lighting conditions, inconsistent timing measurements, and misinterpreted color comparisons. We introduce a novel manifold and companion software for dipstick urinalysis that eliminates the aforementioned error sources. A micro-volume slipping manifold ensures precise sample delivery, an opaque acrylic box guarantees consistent lighting conditions, a simple sticker-based timing mechanism maintains accurate timing, and custom software that processes video data captured by a mobile phone ensures proper color comparisons. We show that the results obtained with the proposed device are as accurate and consistent as a properly executed dip-and-wipe method, the industry gold-standard, suggesting the potential for this strategy to enable confident urinalysis testing. Furthermore, the proposed all-acrylic slipping manifold is reusable and low in cost, making it a potential solution for at-home users and low-resource settings.
Mortamais, Marion; Chevrier, Cécile; Philippat, Claire; Petit, Claire; Calafat, Antonia M; Ye, Xiaoyun; Silva, Manori J; Brambilla, Christian; Eijkemans, Marinus J C; Charles, Marie-Aline; Cordier, Sylvaine; Slama, Rémy
2012-04-26
Environmental epidemiology and biomonitoring studies typically rely on biological samples to assay the concentration of non-persistent exposure biomarkers. Between-participant variations in sampling conditions of these biological samples constitute a potential source of exposure misclassification. Few studies attempted to correct biomarker levels for this error. We aimed to assess the influence of sampling conditions on concentrations of urinary biomarkers of select phenols and phthalates, two widely-produced families of chemicals, and to standardize biomarker concentrations for sampling conditions. Urine samples were collected between 2002 and 2006 among 287 pregnant women from the Eden and Pélagie cohorts, from which phthalate and phenol metabolite levels were assayed. We applied a 2-step standardization method based on regression residuals. First, the influence of sampling conditions (including sampling hour, duration of storage before freezing) and of creatinine levels on biomarker concentrations was characterized using adjusted linear regression models. In the second step, the model estimates were used to remove the variability in biomarker concentrations due to sampling conditions and to standardize concentrations as if all samples had been collected under the same conditions (e.g., same hour of urine collection). Sampling hour was associated with concentrations of several exposure biomarkers. After standardization for sampling conditions, median concentrations differed by -38% for 2,5-dichlorophenol to +80% for a metabolite of diisodecyl phthalate. However, at the individual level, standardized biomarker levels were strongly correlated (correlation coefficients above 0.80) with unstandardized measures. Sampling conditions, such as sampling hour, should be systematically collected in biomarker-based studies, in particular when the biomarker half-life is short. The 2-step standardization method based on regression residuals that we proposed in order to limit the impact of heterogeneity in sampling conditions could be further tested in studies describing levels of biomarkers or their influence on health.
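The two-step residual standardization can be written compactly: fit a regression of the biomarker on the sampling-condition covariates, then add the residuals back to the prediction at a fixed set of reference conditions. The sketch below uses simulated data and an arbitrary reference (collection at 10:00, no storage delay, mean creatinine); the study's covariate set and model form may differ.

    import numpy as np

    rng = np.random.default_rng(2)
    n = 287
    hour, storage = rng.uniform(7, 19, n), rng.uniform(0, 3, n)
    log_crea = rng.normal(0, 0.4, n)
    log_conc = 1.0 - 0.05 * hour + 0.10 * storage + 0.8 * log_crea + rng.normal(0, 0.5, n)

    # Step 1: regression of concentration on sampling conditions and creatinine.
    X = np.column_stack([np.ones(n), hour, storage, log_crea])
    beta, *_ = np.linalg.lstsq(X, log_conc, rcond=None)

    # Step 2: residual + prediction at reference conditions = standardized concentration.
    x_ref = np.array([1.0, 10.0, 0.0, log_crea.mean()])
    standardized = (log_conc - X @ beta) + x_ref @ beta

    print(np.corrcoef(log_conc, standardized)[0, 1])   # strong correlation, as reported above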
Research on Standard Errors of Equating Differences. Research Report. ETS RR-10-25
ERIC Educational Resources Information Center
Moses, Tim; Zhang, Wenmin
2010-01-01
In this paper, the "standard error of equating difference" (SEED) is described in terms of originally proposed kernel equating functions (von Davier, Holland, & Thayer, 2004) and extended to incorporate traditional linear and equipercentile functions. These derivations expand on prior developments of SEEDs and standard errors of equating and…
Gao, Wei; Liu, Yalong; Xu, Bo
2014-12-19
A new algorithm called Huber-based iterated divided difference filtering (HIDDF) is derived and applied to cooperative localization of autonomous underwater vehicles (AUVs) supported by a single surface leader. The position states are estimated using acoustic range measurements relative to the leader, in which some disadvantages such as weak observability, large initial error and contaminated measurements with outliers are inherent. By integrating both merits of iterated divided difference filtering (IDDF) and Huber's M-estimation methodology, the new filtering method could not only achieve more accurate estimation and faster convergence in contrast to standard divided difference filtering (DDF) under conditions of weak observability and large initial error, but also exhibit robustness with respect to outlier measurements, for which the standard IDDF would exhibit severe degradation in estimation accuracy. The correctness and validity of the algorithm are demonstrated through experimental results.
Should the Standard Count Be Excluded from Neutron Probe Calibration?
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Z. Fred
About 6 decades after its introduction, the neutron probe remains one of the most accurate methods for indirect measurement of soil moisture content. Traditionally, the calibration of a neutron probe involves the ratio of the neutron count in the soil to a standard count, which is the neutron count in the fixed environment such as the probe shield or a specially-designed calibration tank. The drawback of this count-ratio-based calibration is that the error in the standard count is carried through to all the measurements. An alternative calibration is to use the neutron counts only, not the ratio, with proper correction for radioactive decay and counting time. To evaluate both approaches, the shield counts of a neutron probe used for three decades were analyzed. The results show that the surrounding conditions have a substantial effect on the standard count. The error in the standard count also impacts the calculation of water storage and could indicate false consistency among replicates. The analysis of the shield counts indicates negligible aging effect of the instrument over a period of 26 years. It is concluded that, by excluding the standard count, the use of the count-based calibration is appropriate and sometimes even better than ratio-based calibration. The count-based calibration is especially useful for historical data when the standard count was questionable or absent.
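The count-based alternative is easy to state: normalize raw counts by counting time and by the decay of the source, then calibrate against moisture content directly. The fragment below assumes an Am-241/Be source (432-year half-life) and invented calibration data; both are assumptions for illustration only.

    import numpy as np

    HALF_LIFE_YR = 432.2     # Am-241 (assumed source type)

    def corrected_count(raw_count, count_time_s, years_since_reference):
        decay = 0.5 ** (years_since_reference / HALF_LIFE_YR)
        return (raw_count / count_time_s) / decay      # decay-corrected counts per second

    counts = np.array([corrected_count(c, 16.0, 10.0) for c in [5200, 7900, 10100, 12800]])
    theta = np.array([0.05, 0.15, 0.25, 0.35])         # volumetric moisture content
    slope, intercept = np.polyfit(counts, theta, 1)
    print("theta = %.3e * count + %.3f" % (slope, intercept))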
The Calibration of Gloss Reference Standards
NASA Astrophysics Data System (ADS)
Budde, W.
1980-04-01
In present international and national standards for the measurement of specular gloss, the primary and secondary reference standards are defined for monochromatic radiation. However, the specified glossmeter uses polychromatic radiation (CIE Standard Illuminant C) and the CIE Standard Photometric Observer. This produces errors in practical gloss measurements of up to 0.5%. Although this may be considered small compared with the accuracy of most practical gloss measurements, such an error should not be tolerated in the calibration of secondary standards. Corrections for such errors are presented, and various alternatives for amendments of the existing documentary standards are discussed.
2013-01-01
Background: Measurements of the morphology of the ankle joint, performed mostly for surgical planning of total ankle arthroplasty and for collecting data for total ankle prosthesis design, are often made on planar radiographs, and therefore can be very sensitive to the positioning of the joint during imaging. The current study aimed to compare ankle morphological measurements using CT-generated 2D images with gold standard values obtained from 3D CT data; to determine the sensitivity of the 2D measurements to mal-positioning of the ankle during imaging; and to quantify the repeatability of the 2D measurements under simulated positioning conditions involving random errors. Method: Fifty-eight cadaveric ankles fixed in the neutral joint position (standard pose) were CT scanned, and the data were used to simulate lateral and frontal radiographs under various positioning conditions using digitally reconstructed radiographs (DRR). Results and discussion: In the standard pose for imaging, most ankle morphometric parameters measured using 2D images were highly correlated (R > 0.8) to the gold standard values defined by the 3D CT data. For measurements made on the lateral views, the only parameters sensitive to rotational pose errors were longitudinal distances between the most anterior and the most posterior points of the tibial mortise and the tibial profile, which have important implications for determining the optimal cutting level of the bone during arthroplasty. Measurements of the trochlea tali width on the frontal views underestimated the standard values by up to 31.2%, with only a moderate reliability, suggesting that pre-surgical evaluations based on the trochlea tali width should be made with caution in order to avoid inappropriate selection of prosthesis sizes. Conclusions: While highly correlated with 3D morphological measurements, some 2D measurements were affected by the bone poses in space during imaging, which may affect surgical decision-making in total ankle arthroplasty, including the amount of bone resection and the selection of the implant sizes. The linear regression equations for the relationship between 2D and 3D measurements will be helpful for correcting the errors in 2D morphometric measurements for clinical applications. PMID:24359413
Rochon, Justine; Kieser, Meinhard
2011-11-01
Student's one-sample t-test is a commonly used method when inference about the population mean is made. As advocated in textbooks and articles, the assumption of normality is often checked by a preliminary goodness-of-fit (GOF) test. In a paper recently published by Schucany and Ng it was shown that, for the uniform distribution, screening of samples by a pretest for normality leads to a more conservative conditional Type I error rate than application of the one-sample t-test without preliminary GOF test. In contrast, for the exponential distribution, the conditional level is even more elevated than the Type I error rate of the t-test without pretest. We examine the reasons behind these characteristics. In a simulation study, samples drawn from the exponential, lognormal, uniform, Student's t-distribution with 2 degrees of freedom (t(2) ) and the standard normal distribution that had passed normality screening, as well as the ingredients of the test statistics calculated from these samples, are investigated. For non-normal distributions, we found that preliminary testing for normality may change the distribution of means and standard deviations of the selected samples as well as the correlation between them (if the underlying distribution is non-symmetric), thus leading to altered distributions of the resulting test statistics. It is shown that for skewed distributions the excess in Type I error rate may be even more pronounced when testing one-sided hypotheses. ©2010 The British Psychological Society.
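The conditional Type I error rates discussed above are straightforward to reproduce by simulation. The sketch below screens samples with a Shapiro-Wilk pretest and then applies the one-sample t-test only to samples that pass; the choice of pretest, sample size, and number of replications are our own, not the paper's.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(3)
    n, alpha, n_sim = 20, 0.05, 5000

    def conditional_type1(sampler, true_mean):
        rejects, passed = 0, 0
        for _ in range(n_sim):
            x = sampler()
            if stats.shapiro(x).pvalue <= 0.05:     # sample fails the normality pretest
                continue
            passed += 1
            rejects += stats.ttest_1samp(x, true_mean).pvalue <= alpha
        return rejects / passed

    print("uniform    :", conditional_type1(lambda: rng.uniform(0, 1, n), 0.5))
    print("exponential:", conditional_type1(lambda: rng.exponential(1.0, n), 1.0))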
Hinton-Bayre, Anton D
2011-02-01
There is an ongoing debate over the preferred method(s) for determining the reliable change (RC) in individual scores over time. In the present paper, specificity comparisons of several classic and contemporary RC models were made using a real data set. This included a more detailed review of a new RC model recently proposed in this journal, that used the within-subjects standard deviation (WSD) as the error term. It was suggested that the RC(WSD) was more sensitive to change and theoretically superior. The current paper demonstrated that even in the presence of mean practice effects, false-positive rates were comparable across models when reliability was good and initial and retest variances were equivalent. However, when variances differed, discrepancies in classification across models became evident. Notably, the RC using the WSD provided unacceptably high false-positive rates in this setting. It was considered that the WSD was never intended for measuring change in this manner. The WSD actually combines systematic and error variance. The systematic variance comes from measurable between-treatment differences, commonly referred to as practice effect. It was further demonstrated that removal of the systematic variance and appropriate modification of the residual error term for the purpose of testing individual change yielded an error term already published and criticized in the literature. A consensus on the RC approach is needed. To that end, further comparison of models under varied conditions is encouraged.
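For concreteness, one common form of reliable change index with a practice-effect adjustment is sketched below. The choice of error term is precisely what is debated above, so these formulas and numbers are an illustration of the genre rather than the paper's preferred model.

    import numpy as np

    sd1, sd2, r12 = 10.0, 12.0, 0.80     # baseline SD, retest SD, test-retest correlation (invented)
    mean1, mean2 = 50.0, 53.0            # group means; the +3 difference is the practice effect

    sem1 = sd1 * np.sqrt(1.0 - r12)
    sem2 = sd2 * np.sqrt(1.0 - r12)
    se_diff = np.sqrt(sem1 ** 2 + sem2 ** 2)   # allows unequal initial and retest variances

    x1, x2 = 48.0, 55.0                  # one individual's baseline and retest scores
    rc = ((x2 - x1) - (mean2 - mean1)) / se_diff
    print(rc, "reliable improvement" if rc > 1.645 else "no reliable change")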
Detecting letters in continuous text: effects of display size.
Healy, A F; Oliver, W L; McNamara, T P
1987-05-01
In three letter detection experiments, subjects responded to each instance of the letter t in continuous text typed in a standard paragraph, typed with one to four words per line, or shown for a fixed duration on a computer screen either one or four words at a time. In the multiword and the standard paragraph conditions, errors were greatest and latencies longest on the word the when it was correctly spelled. This effect was diminished or reversed in the one-word conditions. These findings support a set of unitization hypotheses about the reading process, according to which subjects do not process the constituent letters of a word once that word has been identified unless no other word is in view.
Development and validity of an instrumented handbike: initial results of propulsion kinetics.
van Drongelen, Stefan; van den Berg, Jos; Arnet, Ursina; Veeger, Dirkjan H E J; van der Woude, Lucas H V
2011-11-01
To develop an instrumented handbike system to measure the forces applied to the handgrip during handbiking. A 6 degrees of freedom force sensor was built into the handgrip of an attach-unit handbike, together with two optical encoders to measure the orientation of the handgrip and crank in space. Linearity, precision, and percent error were determined for static and dynamic tests. High linearity was demonstrated for both the static and the dynamic condition (r=1.01). Precision was high under the static condition (standard deviation of 0.2 N); however, the precision decreased with higher loads during the dynamic condition. Percent error values were between 0.3 and 5.1%. This is the first instrumented handbike system that can register 3-dimensional forces. It can be concluded that the instrumented handbike system allows for an accurate force analysis based on forces registered at the handlebars. Copyright © 2011 IPEM. Published by Elsevier Ltd. All rights reserved.
ERIC Educational Resources Information Center
Wang, Tianyou
2009-01-01
Holland and colleagues derived a formula for analytical standard error of equating using the delta-method for the kernel equating method. Extending their derivation, this article derives an analytical standard error of equating procedure for the conventional percentile rank-based equipercentile equating with log-linear smoothing. This procedure is…
Federal Register 2010, 2011, 2012, 2013, 2014
2010-02-09
... subparagraph (4)(b)4, relating to sulfur dioxide, to correct an error in the standard condition for temperature... subparagraph (b)5(i) to clarify the specific equipment covered by permit-by-rule for hot mix asphalt plants... million BtU per hour'' is replaced by ``hot mix asphalt facilities,'' to best describe the facilities...
NASA Astrophysics Data System (ADS)
Nichols, Brandon S.; Rajaram, Narasimhan; Tunnell, James W.
2012-05-01
Diffuse optical spectroscopy (DOS) provides a powerful tool for fast and noninvasive disease diagnosis. The ability to leverage DOS to accurately quantify tissue optical parameters hinges on the model used to estimate light-tissue interaction. We describe the accuracy of a lookup table (LUT)-based inverse model for measuring optical properties under different conditions relevant to biological tissue. The LUT is a matrix of reflectance values acquired experimentally from calibration standards of varying scattering and absorption properties. Because it is based on experimental values, the LUT inherently accounts for system response and probe geometry. We tested our approach in tissue phantoms containing multiple absorbers, different sizes of scatterers, and varying oxygen saturation of hemoglobin. The LUT-based model was able to extract scattering and absorption properties under most conditions with errors of less than 5 percent. We demonstrate the validity of the lookup table over a range of source-detector separations from 0.25 to 1.48 mm. Finally, we describe the rapid fabrication of a lookup table using only six calibration standards. This optimized LUT was able to extract scattering and absorption properties with average RMS errors of 2.5 and 4 percent, respectively.
NASA Technical Reports Server (NTRS)
Curry, Timothy J.; Batterson, James G. (Technical Monitor)
2000-01-01
Low order equivalent system (LOES) models for the Tu-144 supersonic transport aircraft were identified from flight test data. The mathematical models were given in terms of transfer functions with a time delay by the military standard MIL-STD-1797A, "Flying Qualities of Piloted Aircraft," and the handling qualities were predicted from the estimated transfer function coefficients. The coefficients and the time delay in the transfer functions were estimated using a nonlinear equation error formulation in the frequency domain. Flight test data from pitch, roll, and yaw frequency sweeps at various flight conditions were used for parameter estimation. Flight test results are presented in terms of the estimated parameter values, their standard errors, and output fits in the time domain. Data from doublet maneuvers at the same flight conditions were used to assess the predictive capabilities of the identified models. The identified transfer function models fit the measured data well and demonstrated good prediction capabilities. The Tu-144 was predicted to be between level 2 and 3 for all longitudinal maneuvers and level 1 for all lateral maneuvers. High estimates of the equivalent time delay in the transfer function model caused the poor longitudinal ratings.
Joachimsthal, Eva L; Ivanov, Volodymyr; Tay, Joo-Hwa; Tay, Stephen T-L
2003-03-01
Conventional methods for bacteriological testing of water quality take long periods of time to complete. This makes them inappropriate for a shipping industry that is attempting to comply with the International Maritime Organization's anticipated regulations for ballast water discharge. Flow cytometry for the analysis of marine and ship's ballast water is a comparatively fast and accurate method. Compared to a 5% standard error for flow cytometry analysis, the standard methods of culturing and epifluorescence analysis have errors of 2-58% and 10-30%, respectively. Also, unlike culturing methods, flow cytometry is capable of detecting both non-viable and viable but non-culturable microorganisms which can still pose health risks. The great variability in both cell concentrations and microbial content for the samples tested is an indication of the difficulties facing microbial monitoring programmes. The concentration of microorganisms in the ballast tank was generally lower than in local seawater. The proportion of aerobic, microaerophilic, and facultative anaerobic microorganisms present appeared to be influenced by conditions in the ballast tank. The gradual creation of anaerobic conditions in a ballast tank could lead to the accumulation of facultative anaerobic microorganisms, which might represent a potential source of pathogenic species.
Role of the standard deviation in the estimation of benchmark doses with continuous data.
Gaylor, David W; Slikker, William
2004-12-01
For continuous data, risk is defined here as the proportion of animals with values above a large percentile, e.g., the 99th percentile or below the 1st percentile, for the distribution of values among control animals. It is known that reducing the standard deviation of measurements through improved experimental techniques will result in less stringent (higher) doses for the lower confidence limit on the benchmark dose that is estimated to produce a specified risk of animals with abnormal levels for a biological effect. Thus, a somewhat larger (less stringent) lower confidence limit is obtained that may be used as a point of departure for low-dose risk assessment. It is shown in this article that it is important for the benchmark dose to be based primarily on the standard deviation among animals, s(a), apart from the standard deviation of measurement errors, s(m), within animals. If the benchmark dose is incorrectly based on the overall standard deviation among average values for animals, which includes measurement error variation, the benchmark dose will be overestimated and the risk will be underestimated. The bias increases as s(m) increases relative to s(a). The bias is relatively small if s(m) is less than one-third of s(a), a condition achieved in most experimental designs.
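The "one-third" condition at the end of the abstract above can be checked directly: if the within-animal measurement error s(m) and the between-animal variation s(a) are independent, their variances add, so the standard deviation of single measurements per animal is only slightly inflated. A minimal Python sketch (values are illustrative, not from the article):

import math

s_a = 1.0           # standard deviation among animals, s(a)
s_m = s_a / 3.0     # measurement-error standard deviation within animals, s(m)

# SD of single measurements per animal when the two sources are independent
s_total = math.sqrt(s_a**2 + s_m**2)
print(round(s_total / s_a, 3))   # ~1.054, i.e. only ~5% inflation when s(m) <= s(a)/3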
NASA Technical Reports Server (NTRS)
Sun, Yushi; Sun, Changhong; Zhu, Harry; Wincheski, Buzz
2006-01-01
Stress corrosion cracking in the relief radius area of a space shuttle primary reaction control thruster is an issue of concern. The current approach for monitoring of potential crack growth is nondestructive inspection (NDI) of the remaining thickness (RT) to the acoustic cavities using an eddy current or remote field eddy current probe. EDM manufacturers have difficulty in providing accurate RT calibration standards. Significant error in the RT values of NDI calibration standards could lead to a mistaken judgment of the cracking condition of a thruster under inspection. A tool based on the eddy current principle has been developed to measure the RT at each acoustic cavity of a calibration standard in order to validate that the standard meets the sample design criteria.
A Note on Standard Deviation and Standard Error
ERIC Educational Resources Information Center
Hassani, Hossein; Ghodsi, Mansoureh; Howell, Gareth
2010-01-01
Many students confuse the standard deviation and standard error of the mean and are unsure which, if either, to use in presenting data. In this article, we endeavour to address these questions and cover some related ambiguities about these quantities.
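A minimal numerical illustration of the distinction discussed above (the data are simulated, not from the article): the standard deviation describes the spread of individual observations, while the standard error of the mean shrinks with the square root of the sample size.

import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(loc=50.0, scale=10.0, size=25)   # 25 simulated observations

sd = x.std(ddof=1)                  # spread of the individual values
se = sd / np.sqrt(len(x))           # uncertainty of the sample mean
print(round(sd, 2), round(se, 2))   # here SE is roughly SD / 5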
Fischer, A; Luginbühl, T; Delattre, L; Delouard, J M; Faverdin, P
2015-07-01
Body condition is an indirect estimation of the level of body reserves, and its variation reflects cumulative variation in energy balance. It interacts with reproductive and health performance, which are important to consider in dairy production but not easy to monitor. The commonly used body condition score (BCS) is time consuming, subjective, and not very sensitive. The aim was therefore to develop and validate a method assessing BCS with 3-dimensional (3D) surfaces of the cow's rear. A camera captured 3D shapes 2 m from the floor in a weigh station at the milking parlor exit. The BCS was scored by 3 experts on the same day as 3D imaging. Four anatomical landmarks had to be identified manually on each 3D surface to define a space centered on the cow's rear. A set of 57 3D surfaces from 56 Holstein dairy cows was selected to cover a large BCS range (from 0.5 to 4.75 on a 0 to 5 scale) to calibrate 3D surfaces on BCS. After performing a principal component analysis on this data set, multiple linear regression was fitted on the coordinates of these surfaces in the principal components' space to assess BCS. The validation was performed on 2 external data sets: one with cows used for calibration, but at a different lactation stage, and one with cows not used for calibration. Additionally, 6 cows were scanned once and their surfaces processed 8 times each for repeatability and then these cows were scanned 8 times each the same day for reproducibility. The selected model showed perfect calibration and a good but weaker validation (root mean square error=0.31 for the data set with cows used for calibration; 0.32 for the data set with cows not used for calibration). Assessing BCS with 3D surfaces was 3 times more repeatable (standard error=0.075 versus 0.210 for BCS) and 2.8 times more reproducible than manually scored BCS (standard error=0.103 versus 0.280 for BCS). The prediction error was similar for both validation data sets, indicating that the method is not less efficient for cows not used for calibration. The major part of reproducibility error incorporates repeatability error. An automation of the anatomical landmarks identification is required, first to allow broadband measures of body condition and second to improve repeatability and consequently reproducibility. Assessing BCS using 3D imaging coupled with principal component analysis appears to be a very promising means of improving precision and feasibility of this trait measurement. Copyright © 2015 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
Comparing Measurement Error between Two Different Methods of Measurement of Various Magnitudes
ERIC Educational Resources Information Center
Zavorsky, Gerald S.
2010-01-01
Measurement error is a common problem in several fields of research such as medicine, physiology, and exercise science. The standard deviation of repeated measurements on the same person is the measurement error. One way of presenting measurement error is called the repeatability, which is 2.77 multiplied by the within subject standard deviation.…
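As a concrete illustration of the repeatability statistic mentioned above, the within-subject standard deviation can be estimated from duplicate measurements and multiplied by 2.77 (about 1.96 x sqrt(2)); the values below are hypothetical:

import numpy as np

# duplicate measurements on the same five subjects (hypothetical data)
m1 = np.array([12.1, 15.3, 9.8, 11.0, 14.2])
m2 = np.array([12.5, 14.8, 10.4, 10.6, 14.9])

d = m1 - m2
sw = np.sqrt(np.mean(d**2) / 2.0)   # within-subject standard deviation
repeatability = 2.77 * sw           # 95% of repeat differences expected within this bound
print(round(sw, 2), round(repeatability, 2))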
Nano-level instrumentation for analyzing the dynamic accuracy of a rolling element bearing.
Yang, Z; Hong, J; Zhang, J; Wang, M Y; Zhu, Y
2013-12-01
The rotational performance of high-precision rolling bearings is fundamental to the overall accuracy of complex mechanical systems. A nano-level instrument to analyze rotational accuracy of high-precision bearings of machine tools under working conditions was developed. In this instrument, a high-precision (error motion < 0.15 μm) and high-stiffness (2600 N axial loading capacity) aerostatic spindle was applied to spin the test bearing. Operating conditions could be simulated effectively because of the large axial loading capacity. An air-cylinder, controlled by a proportional pressure regulator, was applied to drive an air-bearing subjected to non-contact and precise loaded axial forces. The measurement results on axial loading and rotation constraint with five remaining degrees of freedom were completely unconstrained and uninfluenced by the instrument's structure. Dual capacity displacement sensors with 10 nm resolution were applied to measure the error motion of the spindle using a double-probe error separation method. This enabled the separation of the spindle's error motion from the measurement results of the test bearing which were measured using two orthogonal laser displacement sensors with 5 nm resolution. Finally, a Lissajous figure was used to evaluate the non-repetitive run-out (NRRO) of the bearing at different axial forces and speeds. The measurement results at various axial loadings and speeds showed the standard deviations of the measurements' repeatability and accuracy were less than 1% and 2%. Future studies will analyze the relationship between geometrical errors and NRRO, such as the ball diameter differences and the geometrical errors in the grooves of the rings.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pickles, W.L.; McClure, J.W.; Howell, R.H.
1978-05-01
A sophisticated nonlinear multiparameter fitting program was used to produce a best fit calibration curve for the response of an x-ray fluorescence analyzer to uranium nitrate, freeze dried, 0.2% accurate, gravimetric standards. The program is based on the unconstrained minimization subroutine VA02A. The program considers the mass values of the gravimetric standards as parameters to be fit along with the normal calibration curve parameters. The fitting procedure weights with the system errors and the mass errors in a consistent way. The resulting best fit calibration curve parameters reflect the fact that the masses of the standard samples are measured quantities with a known error. Error estimates for the calibration curve parameters can be obtained from the curvature of the "Chi-Squared Matrix" or from error relaxation techniques. It was shown that nondispersive XRFA of 0.1 to 1 mg freeze-dried UNO3 can have an accuracy of 0.2% in 1000 s.
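The abstract does not give the calibration-curve form or the VA02A call, but the idea of treating the standard masses as fitted parameters, weighted by both the system and the gravimetric errors, can be sketched with a generic least-squares routine; the curve, the data, and the error magnitudes below are assumed for illustration only:

import numpy as np
from scipy.optimize import least_squares

# hypothetical calibration data: nominal uranium masses (mg) and detector response
mass_nom = np.array([0.10, 0.25, 0.50, 0.75, 1.00])
sigma_m = 0.002 * mass_nom              # 0.2% gravimetric uncertainty on each standard
response = np.array([101.0, 252.0, 498.0, 747.0, 1003.0])
sigma_r = 3.0                           # assumed system (response) error

def residuals(p):
    a, b = p[0], p[1]                   # assumed linear calibration curve y = a + b*m
    m = p[2:]                           # masses of the standards, treated as fit parameters
    return np.concatenate([(response - (a + b * m)) / sigma_r,
                           (m - mass_nom) / sigma_m])

fit = least_squares(residuals, x0=np.concatenate([[0.0, 1000.0], mass_nom]))
print(fit.x[:2])                        # best-fit intercept and slope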
Qin, Guoyou; Zhang, Jiajia; Zhu, Zhongyi; Fung, Wing
2016-12-20
Outliers, measurement error, and missing data are commonly seen in longitudinal data because of its data collection process. However, no method can address all three of these issues simultaneously. This paper focuses on the robust estimation of partially linear models for longitudinal data with dropouts and measurement error. A new robust estimating equation, simultaneously tackling outliers, measurement error, and missingness, is proposed. The asymptotic properties of the proposed estimator are established under some regularity conditions. The proposed method is easy to implement in practice by utilizing the existing standard generalized estimating equations algorithms. The comprehensive simulation studies show the strength of the proposed method in dealing with longitudinal data with all three features. Finally, the proposed method is applied to data from the Lifestyle Education for Activity and Nutrition study and confirms the effectiveness of the intervention in producing weight loss at month 9. Copyright © 2016 John Wiley & Sons, Ltd.
Solar cell and module performance assessment based on indoor calibration methods
NASA Astrophysics Data System (ADS)
Bogus, K.
A combined space/terrestrial solar cell test calibration method that requires five steps and can be performed indoors is described. The test conditions are designed to qualify the cell or module output data in standard illumination and temperature conditions. Measurements are made of the short-circuit current, the open circuit voltage, the maximum power, the efficiency, and the spectral response. Standard sunlight must be replicated in both earth-surface and AM0 conditions; Xe lamps are normally used for the light source, with spectral measurements taken of the light. Cell and module spectral response are assayed by using monochromators and narrow band pass monochromatic filters. Attention is required to define the performance characteristics of modules under partial shadowing. Error sources that may affect the measurements are discussed, as are previous cell performance testing and calibration methods and their effectiveness in comparison with the behaviors of satellite solar power panels.
An Enhanced MEMS Error Modeling Approach Based on Nu-Support Vector Regression
Bhatt, Deepak; Aggarwal, Priyanka; Bhattacharya, Prabir; Devabhaktuni, Vijay
2012-01-01
Micro Electro Mechanical System (MEMS)-based inertial sensors have made possible the development of a civilian land vehicle navigation system by offering a low-cost solution. However, the accurate modeling of the MEMS sensor errors is one of the most challenging tasks in the design of low-cost navigation systems. These sensors exhibit significant errors such as biases, drift, and noise, which are negligible for higher grade units. Different conventional techniques utilizing the Gauss Markov model and neural network method have been previously utilized to model the errors. However, the Gauss Markov model works unsatisfactorily in the case of MEMS units due to the presence of high inherent sensor errors. On the other hand, modeling the random drift utilizing Neural Network (NN) is time consuming, thereby affecting its real-time implementation. We overcome these existing drawbacks by developing an enhanced Support Vector Machine (SVM) based error model. Unlike NN, SVMs do not suffer from local minima or over-fitting problems and deliver a reliable global solution. Experimental results proved that the proposed SVM approach reduced the noise standard deviation by 10–35% for gyroscopes and 61–76% for accelerometers. Further, positional error drifts under static conditions improved by 41% and 80% in comparison to NN and GM approaches. PMID:23012552
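The abstract above names Nu-Support Vector Regression as the error model; a minimal sketch of that idea with scikit-learn's NuSVR on a simulated gyroscope drift signal follows (the data, features, and hyperparameters are assumptions, not the authors' pipeline):

import numpy as np
from sklearn.svm import NuSVR

rng = np.random.default_rng(1)
t = np.linspace(0.0, 100.0, 500).reshape(-1, 1)            # time, s
drift = 0.02 * t.ravel() + 0.5 * np.sin(0.1 * t.ravel())   # slowly varying bias drift
gyro = drift + rng.normal(0.0, 0.2, t.shape[0])            # simulated gyro error signal

model = NuSVR(nu=0.5, C=10.0, kernel="rbf", gamma="scale").fit(t, gyro)
residual = gyro - model.predict(t)
print(round(gyro.std(), 3), round(residual.std(), 3))      # reduction in error standard deviation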
Mock jurors' use of error rates in DNA database trawls.
Scurich, Nicholas; John, Richard S
2013-12-01
Forensic science is not infallible, as data collected by the Innocence Project have revealed. The rate at which errors occur in forensic DNA testing-the so-called "gold standard" of forensic science-is not currently known. This article presents a Bayesian analysis to demonstrate the profound impact that error rates have on the probative value of a DNA match. Empirical evidence on whether jurors are sensitive to this effect is equivocal: Studies have typically found they are not, while a recent, methodologically rigorous study found that they can be. This article presents the results of an experiment that examined this issue within the context of a database trawl case in which one DNA profile was tested against a multitude of profiles. The description of the database was manipulated (i.e., "medical" or "offender" database, or not specified) as was the rate of error (i.e., one-in-10 or one-in-1,000). Jury-eligible participants were nearly twice as likely to convict in the offender database condition compared to the condition not specified. The error rates did not affect verdicts. Both factors, however, affected the perception of the defendant's guilt, in the expected direction, although the size of the effect was meager compared to Bayesian prescriptions. The results suggest that the disclosure of an offender database to jurors might constitute prejudicial evidence, and calls for proficiency testing in forensic science as well as training of jurors are echoed. (c) 2013 APA, all rights reserved
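The Bayesian point made in the abstract above, that even a small false-positive error rate rather than the one-in-a-billion random match probability dominates the evidential value of a reported match, can be shown in a few lines; the numbers below are illustrative, not the article's:

# likelihood ratio for a reported DNA match when laboratory error is possible
rmp = 1e-9            # assumed random match probability for an unrelated person
error_rate = 1e-3     # assumed false-positive error rate (e.g., handling or labeling errors)

p_report_given_source = 1.0                    # a true source essentially always yields a reported match
p_report_given_not_source = rmp + error_rate   # coincidental match or erroneous report
likelihood_ratio = p_report_given_source / p_report_given_not_source
print(likelihood_ratio)   # ~1e3: capped by the error rate, not by the 1e-9 match probability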
Estimating standard errors in feature network models.
Frank, Laurence E; Heiser, Willem J
2007-05-01
Feature network models are graphical structures that represent proximity data in a discrete space while using the same formalism that is the basis of least squares methods employed in multidimensional scaling. Existing methods to derive a network model from empirical data only give the best-fitting network and yield no standard errors for the parameter estimates. The additivity properties of networks make it possible to consider the model as a univariate (multiple) linear regression problem with positivity restrictions on the parameters. In the present study, both theoretical and empirical standard errors are obtained for the constrained regression parameters of a network model with known features. The performance of both types of standard error is evaluated using Monte Carlo techniques.
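The abstract treats the network model as a linear regression with positivity restrictions; a sketch of such a constrained fit, with bootstrap (empirical) standard errors, is shown below using non-negative least squares. The design matrix and data are simulated, and this is not the authors' estimator, only an illustration of the idea:

import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(5)
X = rng.random((40, 4))                  # design matrix coding the feature structure
beta = np.array([0.8, 0.0, 1.5, 0.3])    # non-negative feature parameters (simulated truth)
y = X @ beta + rng.normal(0.0, 0.1, 40)  # observed proximities

est, _ = nnls(X, y)                      # least squares with positivity restrictions

boot = []
for _ in range(500):                     # bootstrap resampling for empirical standard errors
    idx = rng.integers(0, 40, 40)
    boot.append(nnls(X[idx], y[idx])[0])
print(est, np.std(boot, axis=0))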
A practical method of estimating standard error of age in the fission track dating method
Johnson, N.M.; McGee, V.E.; Naeser, C.W.
1979-01-01
A first-order approximation formula for the propagation of error in the fission track age equation is given by P_A = C[P_s^2 + P_i^2 + P_φ^2 - 2rP_sP_i]^(1/2), where P_A, P_s, P_i, and P_φ are the percentage errors of age, of spontaneous track density, of induced track density, and of neutron dose, respectively, and C is a constant. The correlation, r, between spontaneous and induced track densities is a crucial element in the error analysis, acting generally to improve the standard error of age. In addition, the correlation parameter r is instrumental in specifying the level of neutron dose, a controlled variable, which will minimize the standard error of age. The results from the approximation equation agree closely with the results from an independent statistical model for the propagation of errors in the fission-track dating method. © 1979.
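A quick numerical reading of the formula above, with C set to 1 and hypothetical percentage errors, shows how a positive correlation between the spontaneous and induced track densities improves the standard error of age:

import math

def percent_age_error(p_s, p_i, p_phi, r, c=1.0):
    # first-order propagation of percentage errors in the fission-track age equation
    return c * math.sqrt(p_s**2 + p_i**2 + p_phi**2 - 2.0 * r * p_s * p_i)

print(round(percent_age_error(5.0, 5.0, 2.0, r=0.5), 2))   # 5.39% with correlated densities
print(round(percent_age_error(5.0, 5.0, 2.0, r=0.0), 2))   # 7.35% if uncorrelated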
Harada, Saki; Suzuki, Akio; Nishida, Shohei; Kobayashi, Ryo; Tamai, Sayuri; Kumada, Keisuke; Murakami, Nobuo; Itoh, Yoshinori
2017-06-01
Insulin is frequently used for glycemic control. Medication errors related to insulin are a common problem for medical institutions. Here, we prepared a standardized sliding scale insulin (SSI) order sheet and assessed the effect of its introduction. Observations before and after the introduction of the standardized SSI template were conducted at Gifu University Hospital. The incidence of medication errors, hyperglycemia, and hypoglycemia related to SSI were obtained from the electronic medical records. The introduction of the standardized SSI order sheet significantly reduced the incidence of medication errors related to SSI compared with that prior to its introduction (12/165 [7.3%] vs 4/159 [2.1%], P = .048). However, the incidence of hyperglycemia (≥250 mg/dL) and hypoglycemia (≤50 mg/dL) in patients who received SSI was not significantly different between the 2 groups. The introduction of the standardized SSI order sheet reduced the incidence of medication errors related to SSI. © 2016 John Wiley & Sons, Ltd.
Computer Programs for the Semantic Differential: Further Modifications.
ERIC Educational Resources Information Center
Lawson, Edwin D.; And Others
The original nine programs for semantic differential analysis have been condensed into three programs which have been further refined and augmented. They yield: (1) means, standard deviations, and standard errors for each subscale on each concept; (2) Evaluation, Potency, and Activity (EPA) means, standard deviations, and standard errors; (3)…
NASA Technical Reports Server (NTRS)
Knox, C. E.
1978-01-01
Navigation error data from these flights are presented in a format utilizing three independent axes - horizontal, vertical, and time. The navigation position estimate error term and the autopilot flight technical error term are combined to form the total navigation error in each axis. This method of error presentation allows comparisons to be made between other 2-, 3-, or 4-D navigation systems and allows experimental or theoretical determination of the navigation error terms. Position estimate error data are presented with the navigation system position estimate based on dual DME radio updates that are smoothed with inertial velocities, dual DME radio updates that are smoothed with true airspeed and magnetic heading, and inertial velocity updates only. The normal mode of navigation with dual DME updates that are smoothed with inertial velocities resulted in a mean error of 390 m with a standard deviation of 150 m in the horizontal axis; a mean error of 1.5 m low with a standard deviation of less than 11 m in the vertical axis; and a mean error as low as 252 m with a standard deviation of 123 m in the time axis.
Moerbeek, Mirjam; van Schie, Sander
2016-07-11
The number of clusters in a cluster randomized trial is often low. It is therefore likely that random assignment of clusters to treatment conditions results in covariate imbalance. There are no studies that quantify the consequences of covariate imbalance in cluster randomized trials on parameter and standard error bias and on power to detect treatment effects. The consequences of covariate imbalance in unadjusted and adjusted linear mixed models are investigated by means of a simulation study. The factors in this study are the degree of imbalance, the covariate effect size, the cluster size and the intraclass correlation coefficient. The covariate is binary and measured at the cluster level; the outcome is continuous and measured at the individual level. The results show covariate imbalance results in negligible parameter bias and small standard error bias in adjusted linear mixed models. Ignoring the possibility of covariate imbalance while calculating the sample size at the cluster level may result in a loss in power of at most 25% in the adjusted linear mixed model. The results are more severe for the unadjusted linear mixed model: parameter biases up to 100% and standard error biases up to 200% may be observed. Power levels based on the unadjusted linear mixed model are often too low. The consequences are most severe for large clusters and/or small intraclass correlation coefficients since then the required number of clusters to achieve a desired power level is smallest. The possibility of covariate imbalance should be taken into account while calculating the sample size of a cluster randomized trial. Otherwise more sophisticated methods to randomize clusters to treatments should be used, such as stratification or balance algorithms. All relevant covariates should be carefully identified, be actually measured and included in the statistical model to avoid severe levels of parameter and standard error bias and insufficient power levels.
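A single replicate of the kind of simulation described above can be sketched with a linear mixed model; the cluster sizes, effect sizes, and the deliberately imbalanced cluster-level covariate below are assumptions for illustration, not the study's design:

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
m = 20                                          # individuals per cluster
treat_c = np.array([0]*5 + [1]*5)               # 10 clusters, 5 per arm
x_c = np.array([1, 0, 0, 0, 0, 1, 1, 1, 1, 0])  # imbalanced binary cluster-level covariate
u_c = rng.normal(0.0, 0.5, 10)                  # cluster random effects

df = pd.DataFrame({
    "cluster": np.repeat(np.arange(10), m),
    "treat": np.repeat(treat_c, m),
    "x": np.repeat(x_c, m),
})
df["y"] = 0.4 * df["treat"] + 0.6 * df["x"] + np.repeat(u_c, m) + rng.normal(0.0, 1.0, len(df))

unadj = smf.mixedlm("y ~ treat", df, groups=df["cluster"]).fit()
adj = smf.mixedlm("y ~ treat + x", df, groups=df["cluster"]).fit()
print(unadj.params["treat"], adj.params["treat"])  # the unadjusted estimate absorbs the imbalance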
Determination of streamflow of the Arkansas River near Bentley in south-central Kansas
Perry, Charles A.
2012-01-01
The Kansas Department of Agriculture, Division of Water Resources, requires that the streamflow of the Arkansas River just upstream from Bentley in south-central Kansas be measured or calculated before groundwater can be pumped from the well field. When the daily streamflow of the Arkansas River near Bentley is less than 165 cubic feet per second (ft3/s), pumping must be curtailed. Daily streamflow near Bentley was calculated by determining the relations between streamflow data from two reference streamgages with a concurrent record of 24 years, one located 17.2 miles (mi) upstream and one located 10.9 mi downstream, and streamflow at a temporary gage located just upstream from Bentley (Arkansas River near Bentley, Kansas). Flow-duration curves for the two reference streamgages indicate that during 1988–2011, the mean daily streamflow was less than 165 ft3/s 30 to 35 percent of the time. During extreme low-flow (drought) conditions, the reach of the Arkansas River between Hutchinson and Maize can lose flow to the adjacent alluvial aquifer, with streamflow losses as much as 1.6 cubic feet per second per mile. Three models were developed to calculate the streamflow of the Arkansas River near Bentley, Kansas. The model chosen depends on the data available and on whether the reach of the Arkansas River between Hutchinson and Maize is gaining or losing groundwater from or to the adjacent alluvial aquifer. The first model was a pair of equations developed from linear regressions of the relation between daily streamflow data from the Bentley streamgage and daily streamflow data from either the Arkansas River near Hutchinson, Kansas, station (station number 07143330) or the Arkansas River near Maize, Kansas, station (station number 07143375). The standard error of the Hutchinson-only equation was 22.8 ft3/s, and the standard error of the Maize-only equation was 22.3 ft3/s. The single-station model would be used if only one streamgage was available. In the second model, the flow gradient between the streamflow near Hutchinson and the streamflow near Maize was used to calculate the streamflow at the Bentley streamgage. This equation resulted in a standard error of 26.7 ft3/s. In the third model, a multiple regression analysis between both the daily streamflow of the Arkansas River near Hutchinson, Kansas, and the daily streamflow of the Arkansas River near Maize, Kansas, was used to calculate the streamflow at the Bentley streamgage. The multiple regression equation had a standard error of 21.2 ft3/s, which was the smallest of the standard errors for all the models. An analysis of the number of low-flow days and the number of days when the reach between Hutchinson and Maize loses flow to the adjacent alluvial aquifer indicates that the long-term trend is toward fewer days of losing conditions. This trend may indicate a long-term increase in water levels in the alluvial aquifer, which could be caused by one or more of several conditions, including an increase in rainfall, a decrease in pumping, a decrease in temperature, and an increase in streamflow upstream from the Hutchinson-to-Maize reach of the Arkansas River.
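The form of the single-station and two-station regression models described above can be sketched as follows; the flow records are synthetic stand-ins for the Hutchinson, Maize, and Bentley gage data, so the fitted coefficients and standard error are illustrative only:

import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(3)
hutchinson = rng.lognormal(5.0, 1.0, 365)                             # synthetic daily flows, ft3/s
maize = hutchinson * rng.normal(1.05, 0.08, 365)                      # synthetic downstream gage
bentley = 0.5 * hutchinson + 0.5 * maize + rng.normal(0.0, 20.0, 365) # target gage

X = np.column_stack([hutchinson, maize])                              # two-station (multiple regression) model
fit = LinearRegression().fit(X, bentley)
resid = bentley - fit.predict(X)
se = np.sqrt(np.sum(resid**2) / (len(bentley) - X.shape[1] - 1))      # regression standard error
print(fit.coef_, round(se, 1))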
A rocket ozonesonde for geophysical research and satellite intercomparison
NASA Technical Reports Server (NTRS)
Hilsenrath, E.; Coley, R. L.; Kirschner, P. T.; Gammill, B.
1979-01-01
The in-situ rocketsonde for ozone profile measurements developed and flown for geophysical research and satellite comparison is reviewed. The measurement principle involves the chemiluminescence caused by ambient ozone striking a detector and passive pumping as a means of sampling the atmosphere as the sonde descends through the atmosphere on a parachute. The sonde is flown on a meteorological sounding rocket, and flight data are telemetered via the standard meteorological GMD ground receiving system. The payload operation, sensor performance, and calibration procedures simulating flight conditions are described. An error analysis indicated an absolute accuracy of about 12 percent and a precision of about 8 percent. These are combined to give a measurement error of 14 percent.
Evaluation of Acoustic Doppler Current Profiler measurements of river discharge
Morlock, S.E.
1996-01-01
The standard deviations of the ADCP measurements ranged from approximately 1 to 6 percent and were generally higher than the measurement errors predicted by error-propagation analysis of ADCP instrument performance. These error-prediction methods assume that the largest component of ADCP discharge measurement error is instrument related. The larger standard deviations indicate that substantial portions of measurement error may be attributable to sources unrelated to ADCP electronics or signal processing and are functions of the field environment.
Zahiruddin, Kowser; Banu, Shaj; Dharmarajan, Ramya; Kulothungan, Vaitheeswaran; Vijayan, Deepa; Raman, Rajiv; Sharma, Tarun
2010-06-01
To evaluate a customized, portable Farnsworth-Munsell 100 (FM 100) hue viewing booth for compliance with colour vision testing standards and to compare it with room illumination in subjects with normal colour vision (trichromats), subjects with acquired colour vision defects (secondary to diabetes mellitus), and subjects with congenital colour vision defects (dichromats). Discrete wavelengths of the tube in the customized booth were measured with a spectrometer using the normal incidence method and were compared with the spectral distribution of sunlight. Forty-eight subjects were recruited for the study and were divided into 3 groups: Group 1, Normal Trichromats (30 eyes); Group 2, Congenital Colour Vision Defects (16 eyes); and Group 3, Diabetes Mellitus (20 eyes). The FM 100 hue test performance was compared using two illumination conditions, booth illumination and room illumination. Total error scores of the classical method in Group 2 (mean ± SD) for room and booth illumination were 243.05 ± 85.96 and 149.85 ± 54.50, respectively (p=0.0001). Group 2 demonstrated lesser correlation (r=0.50, 0.55), lesser reliability (Cronbach's alpha, 0.625, 0.662) and greater variability (Bland & Altman value, 10.5) in total error scores for the classical method and the moment of inertia method between the two illumination conditions when compared to the other two groups. The customized booth demonstrated illumination meeting CIE standards. The total error scores were overestimated by the classical and moment of inertia methods in all groups for room illumination compared with booth illumination; however, overestimation was more significant in the diabetes group.
NASA Astrophysics Data System (ADS)
Bacha, Tulu
The Goddard Lidar Observatory for Wind (GLOW), a mobile direct-detection Doppler LIDAR based on molecular backscattering for measurement of wind in the troposphere and lower stratosphere, was operated and its errors characterized. It was operated at the Howard University Beltsville Center for Climate Observation System (BCCOS) side by side with other operating instruments: the NASA/Langley Research Center Validation Lidar (VALIDAR), the Leosphere WLS70, and other standard wind sensing instruments. The performance of GLOW is presented for various optical thicknesses of cloud conditions. It was also compared to VALIDAR under various conditions, including clear and cloudy sky regions. The performance degradation due to the presence of cirrus clouds is quantified by comparing the wind speed error to cloud thickness. The cloud thickness is quantified in terms of aerosol backscatter ratio (ASR) and cloud optical depth (COD), both determined from the Howard University Raman Lidar (HURL) operating at the same station as GLOW. The wind speed error of GLOW was correlated with COD and ASR, and the correlation revealed a weak linear relationship. Finally, the wind speed measurements of GLOW were corrected using the quantitative relations from these correlations. Using ASR reduced the GLOW wind error from 19% to 8% in a thin cirrus cloud and from 58% to 28% in a relatively thick cloud. After correcting for cloud-induced error, the remaining error is due to shot noise and atmospheric variability. Shot-noise error, the statistical random error of the backscattered photons detected by the photomultiplier tube (PMT), can only be minimized by averaging a large number of recorded data. The atmospheric backscatter measured by GLOW along its line-of-sight direction is also used to analyze the error due to atmospheric variability within the volume of measurement. GLOW scans in five different directions (vertical and at elevation angles of 45° in north, south, east, and west) to generate wind profiles. The non-uniformity of the atmosphere across the scanning directions is a factor contributing to the measurement error of GLOW. The atmospheric variability in the scanning region leads to differences in the intensity of the backscattered signals among the scanning directions. Taking the ratio of the north (east) to south (west) signals and comparing the statistical differences leads to a weak linear relation between atmospheric variability and line-of-sight wind speed differences. This relation was used to make a correction, which reduced this error by about 50%.
Soil pH Mapping with an On-The-Go Sensor
Schirrmann, Michael; Gebbers, Robin; Kramer, Eckart; Seidel, Jan
2011-01-01
Soil pH is a key parameter for crop productivity, therefore, its spatial variation should be adequately addressed to improve precision management decisions. Recently, the Veris pH Manager™, a sensor for high-resolution mapping of soil pH at the field scale, has been made commercially available in the US. While driving over the field, soil pH is measured on-the-go directly within the soil by ion selective antimony electrodes. The aim of this study was to evaluate the Veris pH Manager™ under farming conditions in Germany. Sensor readings were compared with data obtained by standard protocols of soil pH assessment. Experiments took place under different scenarios: (a) controlled tests in the lab, (b) semicontrolled test on transects in a stop-and-go mode, and (c) tests under practical conditions in the field with the sensor working in its typical on-the-go mode. Accuracy issues, problems, options, and potential benefits of the Veris pH Manager™ were addressed. The tests demonstrated a high degree of linearity between standard laboratory values and sensor readings. Under practical conditions in the field (scenario c), the measure of fit (r2) for the regression between the on-the-go measurements and the reference data was 0.71, 0.63, and 0.84, respectively. Field-specific calibration was necessary to reduce systematic errors. Accuracy of the on-the-go maps was considerably higher compared with the pH maps obtained by following the standard protocols, and the error in calculating lime requirements was reduced by about one half. However, the system showed some weaknesses due to blockage by residual straw and weed roots. If these problems were solved, the on-the-go sensor investigated here could be an efficient alternative to standard sampling protocols as a basis for liming in Germany. PMID:22346591
Increasing point-count duration increases standard error
Smith, W.P.; Twedt, D.J.; Hamel, P.B.; Ford, R.P.; Wiedenfeld, D.A.; Cooper, R.J.
1998-01-01
We examined data from point counts of varying duration in bottomland forests of west Tennessee and the Mississippi Alluvial Valley to determine if counting interval influenced sampling efficiency. Estimates of standard error increased as point count duration increased both for cumulative number of individuals and species in both locations. Although point counts appear to yield data with standard errors proportional to means, a square root transformation of the data may stabilize the variance. Using long (>10 min) point counts may reduce sample size and increase sampling error, both of which diminish statistical power and thereby the ability to detect meaningful changes in avian populations.
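The square-root transformation suggested above is the usual variance-stabilizer for count data whose standard error grows with the mean; a short simulated check (not the authors' data):

import numpy as np

rng = np.random.default_rng(4)
for mean_count in (2, 8, 32):
    counts = rng.poisson(mean_count, 10000)        # simulated point-count totals
    print(mean_count, round(counts.std(), 2), round(np.sqrt(counts).std(), 2))
# the raw SD grows with the mean, while the SD of sqrt(counts) stays near 0.5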
Technique for temperature compensation of eddy-current proximity probes
NASA Technical Reports Server (NTRS)
Masters, Robert M.
1989-01-01
Eddy-current proximity probes are used in turbomachinery evaluation testing and operation to measure distances, primarily vibration, deflection, or displacement of shafts, bearings and seals. Measurements of steady-state conditions made with standard eddy-current proximity probes are susceptible to error caused by temperature variations during normal operation of the component under investigation. Errors resulting from temperature effects for the specific probes used in this study were approximately 1.016 × 10^-3 mm/°C over the temperature range of -252 to 100 °C. This report examines temperature-caused changes in the eddy-current proximity probe measurement system, establishes their origin, and discusses what may be done to minimize their effect on the output signal. In addition, recommendations are made for the installation and operation of the electronic components associated with an eddy-current proximity probe. Several techniques are described that provide active on-line error compensation for over 95 percent of the temperature effects.
A virtual pointer to support the adoption of professional vision in laparoscopic training.
Feng, Yuanyuan; McGowan, Hannah; Semsar, Azin; Zahiri, Hamid R; George, Ivan M; Turner, Timothy; Park, Adrian; Kleinsmith, Andrea; Mentis, Helena M
2018-05-23
To assess a virtual pointer in supporting surgical trainees' development of professional vision in laparoscopic surgery. We developed a virtual pointing and telestration system utilizing the Microsoft Kinect movement sensor as an overlay for any imaging system. Training with the application was compared to a standard condition, i.e., verbal instruction with un-mediated gestures, in a laparoscopic training environment. Seven trainees performed four simulated laparoscopic tasks guided by an experienced surgeon as the trainer. Trainee performance was subjectively assessed by the trainee and trainer, and objectively measured by number of errors, time to task completion, and economy of movement. No significant differences in errors and time to task completion were obtained between the virtual pointer and standard conditions. Economy of movement in the non-dominant hand was significantly improved when using the virtual pointer ([Formula: see text]). The trainers perceived a significant improvement in trainee performance in the virtual pointer condition ([Formula: see text]), while the trainees perceived no difference. The trainers' perception of economy of movement was similar between the two conditions in the initial three runs and became significantly improved in the virtual pointer condition in the fourth run ([Formula: see text]). Results show that the virtual pointer system improves the trainer's perception of the trainee's performance, and this is reflected in the objective performance measures in the third and fourth training runs. The benefit of a virtual pointing and telestration system may be perceived by the trainers early on in training, but this is not evident in objective trainee performance until further mastery has been attained. In addition, the performance improvement in economy of motion specifically shows that the virtual pointer improves the adoption of professional vision: an improved ability to see and use the laparoscopic video results in more direct instrument movement.
Biases and Standard Errors of Standardized Regression Coefficients
ERIC Educational Resources Information Center
Yuan, Ke-Hai; Chan, Wai
2011-01-01
The paper obtains consistent standard errors (SE) and biases of order O(1/n) for the sample standardized regression coefficients with both random and given predictors. Analytical results indicate that the formulas for SEs given in popular text books are consistent only when the population value of the regression coefficient is zero. The sample…
Barriers to success: physical separation optimizes event-file retrieval in shared workspaces.
Klempova, Bibiana; Liepelt, Roman
2017-07-08
Sharing tasks with other persons can simplify our work and life, but seeing and hearing other people's actions may also be very distracting. The joint Simon effect (JSE) is a standard measure of referential response coding when two persons share a Simon task. Sequential modulations of the joint Simon effect (smJSE) are interpreted as a measure of event-file processing containing stimulus information, response information and information about the just relevant control-state active in a given social situation. This study tested effects of physical (Experiment 1) and virtual (Experiment 2) separation of shared workspaces on referential coding and event-file processing using a joint Simon task. In Experiment 1, participants performed this task in individual (go-nogo), joint and standard Simon task conditions with and without a transparent curtain (physical separation) placed along the imagined vertical midline of the monitor. In Experiment 2, participants performed the same tasks with and without receiving background music (virtual separation). For response times, physical separation enhanced event-file retrieval, indicated by an enlarged smJSE in the joint Simon task with the curtain than without it (Experiment 1), but did not change referential response coding. In line with this, we also found evidence for enhanced event-file processing through physical separation in the joint Simon task for error rates. Virtual separation neither impacted event-file processing nor referential coding, but generally slowed down response times in the joint Simon task. For errors, virtual separation hampered event-file processing in the joint Simon task. For the cognitively more demanding standard two-choice Simon task, we found music to have a degrading effect on event-file retrieval for response times. Our findings suggest that adding a physical separation optimizes event-file processing in shared workspaces, while music seems to lead to a more relaxed task processing mode under shared task conditions. In addition, music had an interfering impact on joint error processing and more generally when dealing with a more complex task in isolation.
Taylor, Diane M; Chow, Fotini K; Delkash, Madjid; Imhoff, Paul T
2018-03-01
The short-term temporal variability of landfill methane emissions is not well understood due to uncertainty in measurement methods. Significant variability is seen over short-term measurement campaigns with the tracer dilution method (TDM), but this variability may be due in part to measurement error rather than fluctuations in the actual landfill emissions. In this study, landfill methane emissions and TDM-measured emissions are simulated over a real landfill in Delaware, USA using the Weather Research and Forecasting model (WRF) for two emissions scenarios. In the steady emissions scenario, a constant landfill emissions rate is prescribed at each model grid point on the surface of the landfill. In the unsteady emissions scenario, emissions are calculated at each time step as a function of the local surface wind speed, resulting in variable emissions over each 1.5-h measurement period. The simulation output is used to assess the standard deviation and percent error of the TDM-measured emissions. Eight measurement periods are simulated over two different days to look at different conditions. Results show that the standard deviation of the TDM-measured emissions does not increase significantly from the steady emissions simulations to the unsteady emissions scenarios, indicating that the TDM may have inherent errors in its prediction of emissions fluctuations. Results also show that TDM error does not increase significantly from the steady to the unsteady emissions simulations. This indicates that introducing variability to the landfill emissions does not increase errors in the TDM at this site. Across all simulations, TDM errors range from -15% to 43%, consistent with the range of errors seen in previous TDM studies. Simulations indicate diurnal variations of methane emissions when wind effects are significant, which may be important when developing daily and annual emissions estimates from limited field data. Copyright © 2017 Elsevier Ltd. All rights reserved.
Simultaneous estimation of human and exoskeleton motion: A simplified protocol.
Alvarez, M T; Torricelli, D; Del-Ama, A J; Pinto, D; Gonzalez-Vargas, J; Moreno, J C; Gil-Agudo, A; Pons, J L
2017-07-01
Adequate benchmarking procedures in the area of wearable robots are gaining importance in order to compare different devices on a quantitative basis, improve them, and support standardization and regulation procedures. Performance assessment usually focuses on the execution of locomotion tasks, and is mostly based on kinematic-related measures. Typical drawbacks of marker-based motion capture systems, the gold standard for measuring human limb motion, become challenging when measuring limb kinematics, due to the concomitant presence of the robot. This work answers the question of how to reliably assess the subject's body motion by placing markers over the exoskeleton. Focusing on the ankle joint, the proposed methodology showed that it is possible to reconstruct the trajectory of the subject's joint by placing markers on the exoskeleton, although foot flexibility during walking can impact the reconstruction accuracy. More experiments are needed to confirm this hypothesis, and more subjects and walking conditions are needed to better characterize the errors of the proposed methodology, although our results are promising, indicating small errors.
NASA Technical Reports Server (NTRS)
Podio, Fernando; Vollrath, William; Williams, Joel; Kobler, Ben; Crouse, Don
1998-01-01
Sophisticated network storage management applications are rapidly evolving to satisfy a market demand for highly reliable data storage systems with large data storage capacities and performance requirements. To preserve a high degree of data integrity, these applications must rely on intelligent data storage devices that can provide reliable indicators of data degradation. Error correction activity generally occurs within storage devices without notification to the host. Early indicators of degradation and media error monitoring and reporting (MEMR) techniques implemented in data storage devices allow network storage management applications to notify system administrators of these events and to take appropriate corrective actions before catastrophic errors occur. Although MEMR techniques have been implemented in data storage devices for many years, until 1996 no MEMR standards existed. In 1996 the American National Standards Institute (ANSI) approved the only known (world-wide) industry standard specifying MEMR techniques to verify stored data on optical disks. This industry standard was developed under the auspices of the Association for Information and Image Management (AIIM). A recently formed AIIM Optical Tape Subcommittee initiated the development of another data integrity standard specifying a set of media error monitoring tools and media error monitoring information (MEMRI) to verify stored data on optical tape media. This paper discusses the need for intelligent storage devices that can provide data integrity metadata, the content of the existing data integrity standard for optical disks, and the content of the MEMRI standard being developed by the AIIM Optical Tape Subcommittee.
The application of robotics to microlaryngeal laser surgery.
Buckmire, Robert A; Wong, Yu-Tung; Deal, Allison M
2015-06-01
To evaluate the performance of human subjects, using a prototype robotic micromanipulator controller in a simulated, microlaryngeal operative setting. Observational cross-sectional study. Twenty-two human subjects with varying degrees of laser experience performed CO2 laser surgical tasks within a simulated microlaryngeal operative setting using an industry standard manual micromanipulator (MMM) and a prototype robotic micromanipulator controller (RMC). Accuracy, repeatability, and ablation consistency measures were obtained for each human subject across both conditions and for the preprogrammed RMC device. Using the standard MMM, surgeons with >10 previous laser cases performed superior to subjects with fewer cases on measures of error percentage and cumulative error (P = .045 and .03, respectively). No significant differences in performance were observed between subjects using the RMC device. In the programmed (P/A) mode, the RMC performed equivalently or superiorly to experienced human subjects on accuracy and repeatability measures, and nearly an order of magnitude better on measures of ablation consistency. The programmed RMC performed significantly better for repetition error when compared to human subjects with <100 previous laser cases (P = .04). Experienced laser surgeons perform better than novice surgeons on tasks of accuracy and repeatability using the MMM device but roughly equivalently using the novel RMC. Operated in the P/A mode, the RMC performs equivalently or superior to experienced laser surgeons using the industry standard MMM for all measured parameters, and delivers an ablation consistency nearly an order of magnitude better than human laser operators. NA. © 2014 The American Laryngological, Rhinological and Otological Society, Inc.
Lee, Jin H; Howell, David R; Meehan, William P; Iverson, Grant L; Gardner, Andrew J
2017-09-01
The Sport Concussion Assessment Tool-Third Edition (SCAT3) is currently considered the standard sideline assessment for concussions. In-game exercise, however, may affect SCAT3 performance and the diagnosis of concussions. To examine the influence of exercise on SCAT3 performance in professional male athletes. Controlled laboratory study. We examined the SCAT3 performance of 82 professional male athletes under 2 conditions: at rest and after exercise. Athletes reported significantly fewer total symptoms (mean, 1.0 ± 1.5 vs 1.6 ± 2.3 total symptoms, respectively; P = .008; Cohen d = 0.34), committed significantly fewer errors on the modified Balance Error Scoring System (mean, 3.5 ± 3.5 vs 4.6 ± 4.1 errors, respectively; P = .017; d = 0.31), and required significantly less time to complete the tandem gait test (mean, 9.5 ± 1.4 vs 9.9 ± 1.7 seconds, respectively; P = .02; d = 0.30) during the at-rest condition compared with the postexercise condition. The interpretation of in-game (sideline) SCAT3 results should consider the effects of postexercise fatigue levels on an athlete's performance, particularly if preseason baseline data have been collected when the athlete was well rested. Exercise appears to affect symptom burden and physical abilities, such as balance and tandem gait, more so than the cognitive components of the SCAT3.
Wang, Zhipeng; Wang, Shujing; Zhu, Yanbo; Xin, Pumin
2017-01-01
Ionospheric delay is one of the largest and most variable sources of error for Ground-Based Augmentation System (GBAS) users because ionospheric activity is unpredictable. Under normal conditions, GBAS eliminates ionospheric delays, but during extreme ionospheric storms, GBAS users and GBAS ground facilities may experience different ionospheric delays, leading to considerable differential errors and threatening the safety of users. Therefore, ionospheric monitoring and assessment are important parts of GBAS integrity monitoring. To study the effects of the ionosphere on the GBAS of Guangdong Province, China, GPS data collected from 65 reference stations were processed using the improved “Simple Truth” algorithm. In addition, the ionospheric characteristics of Guangdong Province were calculated and an ionospheric threat model was established. Finally, we evaluated the influence of the standard deviation and maximum ionospheric gradient on GBAS. The results show that, under normal ionospheric conditions, the vertical protection level of GBAS was increased by 0.8 m for the largest overbound σvig (sigma of vertical ionospheric gradient), and under maximum ionospheric gradient conditions, the differential correction error may reach 5 m. From an airworthiness perspective, when the satellite is at a low elevation, this interference does not cause airworthiness risks, but when the satellite is at a high elevation, this interference can cause airworthiness risks. PMID:29019953
Song, Mi; Chen, Zeng-Ping; Chen, Yao; Jin, Jing-Wen
2014-07-01
Liquid chromatography-mass spectrometry assays suffer from signal instability caused by the gradual fouling of the ion source, vacuum instability, aging of the ion multiplier, etc. To address this issue, in this contribution, an internal standard was added to the mobile phase. The internal standard was therefore ionized and detected together with the analytes of interest by the mass spectrometer, ensuring that variations in measurement conditions and/or the instrument have similar effects on the signal contributions of both the analytes of interest and the internal standard. Subsequently, based on this strategy of adding an internal standard to the mobile phase, a multiplicative effects model was developed for quantitative LC-MS assays and tested on a proof-of-concept model system: the determination of amino acids in water by LC-MS. The experimental results demonstrated that the proposed method could efficiently mitigate the detrimental effects of continuous signal variation, and achieved quantitative results with average relative predictive error values in the range of 8.0-15.0%, which were much more accurate than the corresponding results of the conventional internal standard method based on the peak height ratio and of the partial least squares method (their average relative predictive error values were as high as 66.3% and 64.8%, respectively). Therefore, it is expected that the proposed method can be developed and extended to quantitative LC-MS analysis of more complex systems. Copyright © 2014 Elsevier B.V. All rights reserved.
Warrick, J.A.; Rubin, D.M.; Ruggiero, P.; Harney, J.N.; Draut, A.E.; Buscombe, D.
2009-01-01
A new application of the autocorrelation grain size analysis technique for mixed to coarse sediment settings has been investigated. Photographs of sand- to boulder-sized sediment along the Elwha River delta beach were taken from approximately 1.2 m above the ground surface, and detailed grain size measurements were made from 32 of these sites for calibration and validation. Digital photographs were found to provide accurate estimates of the long and intermediate axes of the surface sediment (r2 > 0.98), but poor estimates of the short axes (r2 = 0.68), suggesting that these short axes were naturally oriented in the vertical dimension. The autocorrelation method was successfully applied resulting in total irreducible error of 14% over a range of mean grain sizes of 1 to 200 mm. Compared with reported edge and object-detection results, it is noted that the autocorrelation method presented here has lower error and can be applied to a much broader range of mean grain sizes without altering the physical set-up of the camera (~200-fold versus ~6-fold). The approach is considerably less sensitive to lighting conditions than object-detection methods, although autocorrelation estimates do improve when measures are taken to shade sediments from direct sunlight. The effects of wet and dry conditions are also evaluated and discussed. The technique provides an estimate of grain size sorting from the easily calculated autocorrelation standard error, which is correlated with the graphical standard deviation at an r2 of 0.69. The technique is transferable to other sites when calibrated with linear corrections based on photo-based measurements, as shown by excellent grain-size analysis results (r2 = 0.97, irreducible error = 16%) from samples from the mixed grain size beaches of Kachemak Bay, Alaska. Thus, a method has been developed to measure mean grain size and sorting properties of coarse sediments. © 2009 John Wiley & Sons, Ltd.
Shi, Joy; Korsiak, Jill; Roth, Daniel E
2018-03-01
We aimed to demonstrate the use of jackknife residuals to take advantage of the longitudinal nature of available growth data in assessing potential biologically implausible values and outliers. Artificial errors were induced in 5% of length, weight, and head circumference measurements, measured on 1211 participants from the Maternal Vitamin D for Infant Growth (MDIG) trial from birth to 24 months of age. Each child's sex- and age-standardized z-score or raw measurements were regressed as a function of age in child-specific models. Each error responsible for a biologically implausible decrease between a consecutive pair of measurements was identified based on the higher of the two absolute values of jackknife residuals in each pair. In further analyses, outliers were identified as those values beyond fixed cutoffs of the jackknife residuals (e.g., greater than +5 or less than -5 in primary analyses). Kappa, sensitivity, and specificity were calculated over 1000 simulations to assess the ability of the jackknife residual method to detect induced errors and to compare these methods with the use of conditional growth percentiles and conventional cross-sectional methods. Among the induced errors that resulted in a biologically implausible decrease in measurement between two consecutive values, the jackknife residual method identified the correct value in 84.3%-91.5% of these instances when applied to the sex- and age-standardized z-scores, with kappa values ranging from 0.685 to 0.795. Sensitivity and specificity of the jackknife method were higher than those of the conditional growth percentile method, but specificity was lower than for conventional cross-sectional methods. Using jackknife residuals provides a simple method to identify biologically implausible values and outliers in longitudinal child growth data sets in which each child contributes at least 4 serial measurements. Crown Copyright © 2018. Published by Elsevier Inc. All rights reserved.
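A minimal sketch of the jackknife-residual screening described above: each child's standardized scores are regressed on age in a child-specific model, externally studentized (jackknife) residuals are computed, and values beyond the +/-5 cutoff used in the primary analysis are flagged. The example child's data and the induced error are illustrative.

```python
# Flag implausible growth values with jackknife (externally studentized)
# residuals from a child-specific regression of z-score on age. The data
# below are illustrative; the +/-5 cutoff follows the primary analysis.
import numpy as np
import statsmodels.api as sm

age_months = np.array([0, 3, 6, 9, 12, 18, 24], dtype=float)
laz = np.array([-0.2, -0.1, 0.0, 3.9, 0.1, 0.2, 0.3])   # 4th value is an induced error

X = sm.add_constant(age_months)                 # child-specific model: z ~ age
fit = sm.OLS(laz, X).fit()
jackknife_resid = fit.get_influence().resid_studentized_external

flagged = np.where(np.abs(jackknife_resid) > 5)[0]
print("flagged measurement indices:", flagged)
```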
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pickles, W.L.; McClure, J.W.; Howell, R.H.
1978-01-01
A sophisticated non-linear multiparameter fitting program has been used to produce a best-fit calibration curve for the response of an x-ray fluorescence analyzer to freeze-dried, 0.2% accurate, gravimetric uranium nitrate standards. The program is based on the unconstrained minimization subroutine VA02A. The program considers the mass values of the gravimetric standards as parameters to be fit along with the normal calibration curve parameters. The fitting procedure weights the system errors and the mass errors in a consistent way. The resulting best-fit calibration curve parameters reflect the fact that the masses of the standard samples are measured quantities with a known error. Error estimates for the calibration curve parameters can be obtained from the curvature of the chi-squared matrix or from error relaxation techniques. It has been shown that non-dispersive x-ray fluorescence analysis of 0.1 to 1 mg freeze-dried UNO3 can have an accuracy of 0.2% in 1000 sec.
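A sketch of the general fitting idea (standard masses treated as fitted parameters, weighted by their known 0.2% uncertainty, alongside the calibration-curve parameters). The quadratic response model, the data, and the use of SciPy in place of VA02A are illustrative assumptions, not the original implementation.

```python
# Calibration fit in which the gravimetric standard masses are themselves
# fitted parameters, weighted by their known relative accuracy (0.2%),
# together with the calibration-curve parameters. Model and data are
# illustrative; the original work used the VA02A minimizer.
import numpy as np
from scipy.optimize import least_squares

mass_nominal = np.array([0.1, 0.25, 0.5, 0.75, 1.0])     # mg, gravimetric values
counts = np.array([1020., 2530., 5010., 7480., 9900.])    # XRF response (illustrative)
sigma_counts = 0.01 * counts                               # assumed 1% counting error
sigma_mass = 0.002 * mass_nominal                          # 0.2% standard accuracy

def residuals(p):
    a, b, c = p[:3]                  # calibration-curve parameters
    mass_fit = p[3:]                 # fitted "true" masses of the standards
    model = a + b * mass_fit + c * mass_fit**2
    r_counts = (counts - model) / sigma_counts
    r_mass = (mass_fit - mass_nominal) / sigma_mass
    return np.concatenate([r_counts, r_mass])  # consistent weighting of both error sources

p0 = np.concatenate([[0.0, 1e4, 0.0], mass_nominal])
fit = least_squares(residuals, p0)
print("calibration parameters:", fit.x[:3])
print("adjusted standard masses:", fit.x[3:])
```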
Magnetic-field sensing with quantum error detection under the effect of energy relaxation
NASA Astrophysics Data System (ADS)
Matsuzaki, Yuichiro; Benjamin, Simon
2017-03-01
A solid state spin is an attractive system with which to realize an ultrasensitive magnetic field sensor. A spin superposition state will acquire a phase induced by the target field, and we can estimate the field strength from this phase. Recent studies have aimed at improving sensitivity through the use of quantum error correction (QEC) to detect and correct any bit-flip errors that may occur during the sensing period. Here we investigate the performance of a two-qubit sensor employing QEC and under the effect of energy relaxation. Surprisingly, we find that the standard QEC technique to detect and recover from an error does not improve the sensitivity compared with the single-qubit sensors. This is a consequence of the fact that the energy relaxation induces both a phase-flip and a bit-flip noise where the former noise cannot be distinguished from the relative phase induced from the target fields. However, we have found that we can improve the sensitivity if we adopt postselection to discard the state when error is detected. Even when quantum error detection is moderately noisy, and allowing for the cost of the postselection technique, we find that this two-qubit system shows an advantage in sensing over a single qubit in the same conditions.
Entropy-Based TOA Estimation and SVM-Based Ranging Error Mitigation in UWB Ranging Systems
Yin, Zhendong; Cui, Kai; Wu, Zhilu; Yin, Liang
2015-01-01
The major challenges for Ultra-wide Band (UWB) indoor ranging systems are the dense multipath and non-line-of-sight (NLOS) problems of the indoor environment. To precisely estimate the time of arrival (TOA) of the first path (FP) in such a poor environment, a novel approach of entropy-based TOA estimation and support vector machine (SVM) regression-based ranging error mitigation is proposed in this paper. The proposed method can estimate the TOA precisely by measuring the randomness of the received signals and can mitigate the ranging error without requiring recognition of the channel conditions. The entropy is used to measure the randomness of the received signals, and the FP is identified as the sample that is followed by a large entropy decrease. The SVM regression is employed to perform the ranging-error mitigation by modeling the relationship between the characteristics of the received signals and the ranging error. The presented numerical simulation results show that the proposed approach achieves significant performance improvements in the CM1 to CM4 channels of the IEEE 802.15.4a standard, as compared to conventional approaches. PMID:26007726
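A minimal sketch of the entropy idea described above: slide a window over the received samples, compute a Shannon entropy from a histogram of the windowed amplitudes, and declare the first path at the first large entropy drop. The synthetic signal, window length, and detection rule are illustrative assumptions, not the paper's settings.

```python
# Windowed-entropy first-path detection sketch: the noise-only region has
# high histogram entropy, and the entropy drops sharply when the first
# path enters the window.
import numpy as np

rng = np.random.default_rng(0)
n, toa = 2000, 1200
signal = 0.05 * rng.standard_normal(n)                    # noise-only region is "random"
signal[toa:] += np.exp(-np.arange(n - toa) / 50.0)        # first path + decaying multipath

def window_entropy(x, bins=16):
    hist, _ = np.histogram(x, bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]
    return -(p * np.log2(p)).sum()

win = 100
entropy = np.array([window_entropy(signal[i:i + win]) for i in range(n - win)])
drops = np.diff(entropy)
toa_est = int(np.argmin(drops)) + win     # sample followed by the largest entropy decrease
print("true TOA:", toa, " estimated TOA (approx.):", toa_est)
```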
Ion diffusion may introduce spurious current sources in current-source density (CSD) analysis.
Halnes, Geir; Mäki-Marttunen, Tuomo; Pettersen, Klas H; Andreassen, Ole A; Einevoll, Gaute T
2017-07-01
Current-source density (CSD) analysis is a well-established method for analyzing recorded local field potentials (LFPs), that is, the low-frequency part of extracellular potentials. Standard CSD theory is based on the assumption that all extracellular currents are purely ohmic, and thus neglects the possible impact from ionic diffusion on recorded potentials. However, it has previously been shown that in physiological conditions with large ion-concentration gradients, diffusive currents can evoke slow shifts in extracellular potentials. Using computer simulations, we here show that diffusion-evoked potential shifts can introduce errors in standard CSD analysis, and can lead to prediction of spurious current sources. Further, we here show that the diffusion-evoked prediction errors can be removed by using an improved CSD estimator which accounts for concentration-dependent effects. NEW & NOTEWORTHY Standard CSD analysis does not account for ionic diffusion. Using biophysically realistic computer simulations, we show that unaccounted-for diffusive currents can lead to the prediction of spurious current sources. This finding may be of strong interest for in vivo electrophysiologists doing extracellular recordings in general, and CSD analysis in particular. Copyright © 2017 the American Physiological Society.
Random errors in interferometry with the least-squares method
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang Qi
2011-01-20
This investigation analyzes random errors in interferometric surface profilers using the least-squares method when random noise is present. Two types of random noise are considered here: intensity noise and position noise. Two formulas have been derived for estimating the standard deviations of the surface height measurements: one is for estimating the standard deviation when only intensity noise is present, and the other is for estimating the standard deviation when only position noise is present. Measurements on simulated noisy interferometric data have been performed, and standard deviations of the simulated measurements have been compared with those theoretically derived. The relationships between random error and the wavelength of the light source, and between random error and the amplitude of the interference fringe, have also been discussed.
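A Monte Carlo sketch of how intensity noise propagates into a least-squares phase (and height) estimate in phase-shifting interferometry. The four-step algorithm, noise level, and wavelength are illustrative assumptions; the paper derives analytic formulas for these standard deviations, which such a simulation can be compared against.

```python
# Simulate phase-shifted intensity frames with additive noise, recover the
# phase by least squares, and report the empirical height standard deviation.
import numpy as np

rng = np.random.default_rng(1)
wavelength = 632.8e-9                                   # HeNe, metres
deltas = np.array([0, np.pi / 2, np.pi, 3 * np.pi / 2])  # known phase shifts
a, b, true_phase = 1.0, 0.5, 0.7                        # background, modulation, test phase
sigma_I = 0.01                                          # additive intensity noise

heights = []
for _ in range(5000):
    frames = a + b * np.cos(true_phase + deltas) + sigma_I * rng.standard_normal(4)
    # least-squares solution of I_k = c0 + c1*cos(delta_k) + c2*sin(delta_k)
    A = np.column_stack([np.ones_like(deltas), np.cos(deltas), np.sin(deltas)])
    c0, c1, c2 = np.linalg.lstsq(A, frames, rcond=None)[0]
    phase = np.arctan2(-c2, c1)
    heights.append(phase * wavelength / (4 * np.pi))

print("empirical height standard deviation (nm):", 1e9 * np.std(heights))
```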
Measuring the uncertainties of discharge measurements: interlaboratory experiments in hydrometry
NASA Astrophysics Data System (ADS)
Le Coz, Jérôme; Blanquart, Bertrand; Pobanz, Karine; Dramais, Guillaume; Pierrefeu, Gilles; Hauet, Alexandre; Despax, Aurélien
2015-04-01
Quantifying the uncertainty of streamflow data is key for hydrological sciences. The conventional uncertainty analysis based on error propagation techniques is restricted by the absence of traceable discharge standards and by the weight of difficult-to-predict errors related to the operator, procedure and measurement environment. Field interlaboratory experiments recently emerged as an efficient, standardized method to 'measure' the uncertainties of a given streamgauging technique in given measurement conditions. Both uncertainty approaches are compatible and should be developed jointly in the field of hydrometry. In the recent years, several interlaboratory experiments have been reported by different hydrological services. They involved different streamgauging techniques, including acoustic profilers (ADCP), current-meters and handheld radars (SVR). Uncertainty analysis was not always their primary goal: most often, testing the proficiency and homogeneity of instruments, makes and models, procedures and operators was the original motivation. When interlaboratory experiments are processed for uncertainty analysis, once outliers have been discarded all participants are assumed to be equally skilled and to apply the same streamgauging technique in equivalent conditions. A universal requirement is that all participants simultaneously measure the same discharge, which shall be kept constant within negligible variations. To our best knowledge, we were the first to apply the interlaboratory method for computing the uncertainties of streamgauging techniques, according to the authoritative international documents (ISO standards). Several specific issues arise due to the measurements conditions in outdoor canals and rivers. The main limitation is that the best available river discharge references are usually too uncertain to quantify the bias of the streamgauging technique, i.e. the systematic errors that are common to all participants in the experiment. A reference or a sensitivity analysis to the fixed parameters of the streamgauging technique remain very useful for estimating the uncertainty related to the (non quantified) bias correction. In the absence of a reference, the uncertainty estimate is referenced to the average of all discharge measurements in the interlaboratory experiment, ignoring the technique bias. Simple equations can be used to assess the uncertainty of the uncertainty results, as a function of the number of participants and of repeated measurements. The interlaboratory method was applied to several interlaboratory experiments on ADCPs and currentmeters mounted on wading rods, in streams of different sizes and aspects, with 10 to 30 instruments, typically. The uncertainty results were consistent with the usual expert judgment and highly depended on the measurement environment. Approximately, the expanded uncertainties (within the 95% probability interval) were ±5% to ±10% for ADCPs in good or poor conditions, and ±10% to ±15% for currentmeters in shallow creeks. Due to the specific limitations related to a slow measurement process and to small, natural streams, uncertainty results for currentmeters were more uncertain than for ADCPs, for which the site-specific errors were significantly evidenced. The proposed method can be applied to a wide range of interlaboratory experiments conducted in contrasted environments for different streamgauging techniques, in a standardized way. 
Ideally, an international open database would enhance the investigation of hydrological data uncertainties, according to the characteristics of the measurement conditions and procedures. Such a dataset could be used for implementing and validating uncertainty propagation methods in hydrometry.
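A simplified sketch of the interlaboratory computation outlined above, in the spirit of the ISO documents: every participant repeatedly gauges the same constant discharge, the grand mean serves as the reference (technique bias ignored, as noted), and the expanded uncertainty is taken as roughly twice the reproducibility standard deviation. The data and the k = 2 coverage factor are illustrative assumptions.

```python
# ISO 5725-style decomposition of repeated gaugings of one constant discharge
# into repeatability (within-participant) and between-participant components.
import numpy as np

# rows = participants (instrument + operator), columns = repeated gaugings (m3/s)
q = np.array([
    [102.1, 100.8, 101.5],
    [ 98.7,  99.5,  99.0],
    [104.2, 103.1, 103.8],
    [ 97.9,  98.8,  98.1],
    [101.0, 100.2, 100.6],
])
m, n = q.shape
grand_mean = q.mean()
s_r2 = q.var(axis=1, ddof=1).mean()                    # repeatability variance (within)
s_L2 = max(q.mean(axis=1).var(ddof=1) - s_r2 / n, 0)   # between-participant variance
s_R = np.sqrt(s_L2 + s_r2)                             # reproducibility standard deviation
U_percent = 2 * s_R / grand_mean * 100                 # expanded uncertainty, k = 2
print(f"expanded uncertainty ~ +/-{U_percent:.1f}% of the mean discharge")
```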
Effects of Listening Conditions, Error Types, and Ensemble Textures on Error Detection Skills
ERIC Educational Resources Information Center
Waggoner, Dori T.
2011-01-01
This study was designed with three main purposes: (a) to investigate the effects of two listening conditions on error detection accuracy, (b) to compare error detection responses for rhythm errors and pitch errors, and (c) to examine the influences of texture on error detection accuracy. Undergraduate music education students (N = 18) listened to…
Impacts of motivational valence on the error-related negativity elicited by full and partial errors.
Maruo, Yuya; Schacht, Annekathrin; Sommer, Werner; Masaki, Hiroaki
2016-02-01
Affect and motivation influence the error-related negativity (ERN) elicited by full errors; however, it is unknown whether they also influence ERNs to correct responses accompanied by covert incorrect response activation (partial errors). Here we compared a neutral condition with conditions, where correct responses were rewarded or where incorrect responses were punished with gains and losses of small amounts of money, respectively. Data analysis distinguished ERNs elicited by full and partial errors. In the reward and punishment conditions, ERN amplitudes to both full and partial errors were larger than in the neutral condition, confirming participants' sensitivity to the significance of errors. We also investigated the relationships between ERN amplitudes and the behavioral inhibition and activation systems (BIS/BAS). Regardless of reward/punishment condition, participants scoring higher on BAS showed smaller ERN amplitudes in full error trials. These findings provide further evidence that the ERN is related to motivational valence and that similar relationships hold for both full and partial errors. Copyright © 2015 The Authors. Published by Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Wu, Bo; Zhou, Xian; Ma, Yanan; Luo, Jun; Zhong, Kangping; Qiu, Shaofeng; Feng, Zhiyong; Luo, Yazhi; Agustin, Mikel; Ledentsov, Nikolay; Kropp, Joerg; Shchukin, Vitaly; Ledentsov, Nikolay N.; Eddie, Iain; Chao, Lu
2016-03-01
Discrete Multitone (DMT) transmission over standard multimode fiber (MMF) using high-speed single-mode (SM) and multimode (MM) Vertical-Cavity Surface-Emitting Lasers (VCSELs) is studied. Transmission speeds in the range of 72 Gbps to 82 Gbps over 300 m to 100 m of OM4 fiber are realized, respectively, at a Bit-Error-Ratio (BER) <5e-3 and a received optical power of only -5 dBm. Such a BER condition requires only 7% overhead for conversion to error-free operation using single Bose-Chaudhuri-Hocquenghem forward error correction (BCH-FEC) coding and decoding. The SM VCSEL is demonstrated to provide a much higher data transmission capacity over MMF. For 100 m MMF transmission, the SM VCSEL allows 82 Gbps, compared with only 34 Gbps for the MM VCSEL at the same power (-5 dBm). Furthermore, the MM VCSEL link at 0 dBm is still restricted to 63 Gbps at 100 m distance, while the SM VCSEL can exceed 100 Gbps at such power levels. We believe that with further improvement in SM VCSELs and fiber coupling, >100 Gbps data transmission over >300 m MMF distances at BER levels matching the industry standards will become possible.
Total ozone trend significance from space time variability of daily Dobson data
NASA Technical Reports Server (NTRS)
Wilcox, R. W.
1981-01-01
Estimates of standard errors of total ozone time and area means, as derived from ozone's natural temporal and spatial variability and autocorrelation in middle latitudes determined from daily Dobson data are presented. Assessing the significance of apparent total ozone trends is equivalent to assessing the standard error of the means. Standard errors of time averages depend on the temporal variability and correlation of the averaged parameter. Trend detectability is discussed, both for the present network and for satellite measurements.
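One standard way to express how temporal autocorrelation inflates the standard error of a time mean is the AR(1) effective-sample-size relation shown below; this is a textbook illustration of the statement above, not necessarily the exact formulation used in the report.

```latex
% Standard error of the mean of n equally spaced observations with variance
% \sigma^2 and lag-one autocorrelation \rho (AR(1) assumption):
\mathrm{SE}(\bar{x}) \;\approx\; \frac{\sigma}{\sqrt{n}}\sqrt{\frac{1+\rho}{1-\rho}},
\qquad\text{equivalently}\qquad
n_{\mathrm{eff}} \;\approx\; n\,\frac{1-\rho}{1+\rho}.
```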
Elsaid, K; Truong, T; Monckeberg, M; McCarthy, H; Butera, J; Collins, C
2013-12-01
To evaluate the impact of electronic standardized chemotherapy templates on incidence and types of prescribing errors. A quasi-experimental interrupted time series with segmented regression. A 700-bed multidisciplinary tertiary care hospital with an ambulatory cancer center. A multidisciplinary team including oncology physicians, nurses, pharmacists and information technologists. Standardized, regimen-specific, chemotherapy prescribing forms were developed and implemented over a 32-month period. Trend of monthly prevented prescribing errors per 1000 chemotherapy doses during the pre-implementation phase (30 months), immediate change in the error rate from pre-implementation to implementation and trend of errors during the implementation phase. Errors were analyzed according to their types: errors in communication or transcription, errors in dosing calculation and errors in regimen frequency or treatment duration. Relative risk (RR) of errors in the post-implementation phase (28 months) compared with the pre-implementation phase was computed with 95% confidence interval (CI). Baseline monthly error rate was stable with 16.7 prevented errors per 1000 chemotherapy doses. A 30% reduction in prescribing errors was observed with initiating the intervention. With implementation, a negative change in the slope of prescribing errors was observed (coefficient = -0.338; 95% CI: -0.612 to -0.064). The estimated RR of transcription errors was 0.74; 95% CI (0.59-0.92). The estimated RR of dosing calculation errors was 0.06; 95% CI (0.03-0.10). The estimated RR of chemotherapy frequency/duration errors was 0.51; 95% CI (0.42-0.62). Implementing standardized chemotherapy-prescribing templates significantly reduced all types of prescribing errors and improved chemotherapy safety.
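A minimal sketch of the segmented-regression (interrupted time series) model used above: the monthly error rate is regressed on time, an indicator for the post-implementation period (immediate level change), and time since implementation (slope change). The monthly data below are synthetic, generated to resemble the reported baseline and trend, not the study's data.

```python
# Segmented regression: rate ~ time + post + time_after
import numpy as np
import statsmodels.api as sm

months = np.arange(58)                                # 30 pre + 28 post implementation
post = (months >= 30).astype(float)                   # immediate level change
time_after = np.clip(months - 30, 0, None)            # change in slope after implementation

rng = np.random.default_rng(2)
rate = 16.7 - 5.0 * post - 0.34 * time_after + rng.normal(0, 1.0, months.size)

X = sm.add_constant(np.column_stack([months, post, time_after]))
fit = sm.OLS(rate, X).fit()
print(fit.params)   # [baseline level, baseline trend, level change, slope change]
```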
Evaluation of lens distortion errors in video-based motion analysis
NASA Technical Reports Server (NTRS)
Poliner, Jeffrey; Wilmington, Robert; Klute, Glenn K.; Micocci, Angelo
1993-01-01
In an effort to study lens distortion errors, a grid of points of known dimensions was constructed and videotaped using a standard and a wide-angle lens. Recorded images were played back on a VCR and stored on a personal computer. Using these stored images, two experiments were conducted. Errors were calculated as the difference in distance from the known coordinates of the points to the calculated coordinates. The purposes of this project were as follows: (1) to develop the methodology to evaluate errors introduced by lens distortion; (2) to quantify and compare errors introduced by use of both a 'standard' and a wide-angle lens; (3) to investigate techniques to minimize lens-induced errors; and (4) to determine the most effective use of calibration points when using a wide-angle lens with a significant amount of distortion. It was seen that when using a wide-angle lens, errors from lens distortion could be as high as 10 percent of the size of the entire field of view. Even with a standard lens, there was a small amount of lens distortion. It was also found that the choice of calibration points influenced the lens distortion error. By properly selecting the calibration points and avoidance of the outermost regions of a wide-angle lens, the error from lens distortion can be kept below approximately 0.5 percent with a standard lens and 1.5 percent with a wide-angle lens.
Two Cultures in Modern Science and Technology: For Safety and Validity Does Medicine Have to Update?
Becker, Robert E
2016-01-11
Two different scientific cultures go unreconciled in modern medicine. Each culture accepts that scientific knowledge and technologies are vulnerable to and easily invalidated by methods and conditions of acquisition, interpretation, and application. How these vulnerabilities are addressed separates the 2 cultures and potentially explains medicine's difficulties in eradicating errors. A traditional culture, dominant in medicine, leaves error control in the hands of individual and group investigators and practitioners. A competing modern scientific culture accepts errors as inevitable, pernicious, and pervasive sources of adverse events throughout medical research and patient care, too malignant for individuals or groups to control. Error risks to the validity of scientific knowledge and safety in patient care require systemwide programming able to support a culture in medicine grounded in tested, continually updated, widely promulgated, and uniformly implemented standards of practice for research and patient care. Experiences from successes in other sciences and industries strongly support the need for leadership from the Institute of Medicine's recommended Center for Patient Safety within the Federal Executive branch of government.
Intravenous Chemotherapy Compounding Errors in a Follow-Up Pan-Canadian Observational Study.
Gilbert, Rachel E; Kozak, Melissa C; Dobish, Roxanne B; Bourrier, Venetia C; Koke, Paul M; Kukreti, Vishal; Logan, Heather A; Easty, Anthony C; Trbovich, Patricia L
2018-05-01
Intravenous (IV) compounding safety has garnered recent attention as a result of high-profile incidents, awareness efforts from the safety community, and increasingly stringent practice standards. New research with more-sensitive error detection techniques continues to reinforce that error rates with manual IV compounding are unacceptably high. In 2014, our team published an observational study that described three types of previously unrecognized and potentially catastrophic latent chemotherapy preparation errors in Canadian oncology pharmacies that would otherwise be undetectable. We expand on this research and explore whether additional potential human failures are yet to be addressed by practice standards. Field observations were conducted in four cancer center pharmacies in four Canadian provinces from January 2013 to February 2015. Human factors specialists observed and interviewed pharmacy managers, oncology pharmacists, pharmacy technicians, and pharmacy assistants as they carried out their work. Emphasis was on latent errors (potential human failures) that could lead to outcomes such as wrong drug, dose, or diluent. Given the relatively short observational period, no active failures or actual errors were observed. However, 11 latent errors in chemotherapy compounding were identified. In terms of severity, all 11 errors create the potential for a patient to receive the wrong drug or dose, which in the context of cancer care, could lead to death or permanent loss of function. Three of the 11 practices were observed in our previous study, but eight were new. Applicable Canadian and international standards and guidelines do not explicitly address many of the potentially error-prone practices observed. We observed a significant degree of risk for error in manual mixing practice. These latent errors may exist in other regions where manual compounding of IV chemotherapy takes place. Continued efforts to advance standards, guidelines, technological innovation, and chemical quality testing are needed.
Intimate Partner Violence, 1993-2010
Aquatic habitat mapping with an acoustic doppler current profiler: Considerations for data quality
Gaeuman, David; Jacobson, Robert B.
2005-01-01
When mounted on a boat or other moving platform, acoustic Doppler current profilers (ADCPs) can be used to map a wide range of ecologically significant phenomena, including measures of fluid shear, turbulence, vorticity, and near-bed sediment transport. However, the instrument movement necessary for mapping applications can generate significant errors, many of which have not been adequately described. This report focuses on the mechanisms by which moving-platform errors are generated, and quantifies their magnitudes under typical habitat-mapping conditions. The potential for velocity errors caused by misalignment of the instrument's internal compass is widely recognized, but it has not previously been quantified for moving instruments. Numerical analyses show that even relatively minor compass misalignments can produce significant velocity errors, depending on the ratio of absolute instrument velocity to the target velocity and on the relative directions of instrument and target motion. A maximum absolute instrument velocity of about 1 m/s is recommended for most mapping applications. Lower velocities are appropriate when making bed velocity measurements, an emerging application that makes use of ADCP bottom-tracking to measure the velocity of sediment particles at the bed. The mechanisms by which heterogeneities in the flow velocity field generate horizontal velocity errors are also quantified, and some basic limitations in the effectiveness of standard error-detection criteria for identifying these errors are described. Bed velocity measurements may be particularly vulnerable to errors caused by spatial variability in the sediment transport field.
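A numeric sketch of why compass misalignment produces velocity errors that scale with platform speed, for one common configuration in which the platform velocity comes from a reference unaffected by the compass (e.g., GPS): the relative velocity measured by the instrument is rotated by the heading error before the platform velocity is added back. The configuration and numbers are illustrative, not the report's analysis.

```python
# Water-velocity error from a small compass misalignment on a moving platform.
import numpy as np

def rot(deg):
    a = np.radians(deg)
    return np.array([[np.cos(a), -np.sin(a)], [np.sin(a), np.cos(a)]])

true_water = np.array([0.30, 0.0])        # target velocity, m/s (east)
boat = np.array([0.0, 1.0])               # instrument velocity, m/s (north)
misalignment_deg = 3.0                    # compass error

relative = true_water - boat              # what the instrument actually senses
measured_water = rot(misalignment_deg) @ relative + boat
error = measured_water - true_water
print("velocity error (m/s):", error, " magnitude:", np.linalg.norm(error))
# magnitude ~ 2*sin(delta/2)*|relative|: it grows with boat speed, which is why
# keeping instrument velocity near or below ~1 m/s limits the error.
```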
Calibration procedure for a laser triangulation scanner with uncertainty evaluation
NASA Astrophysics Data System (ADS)
Genta, Gianfranco; Minetola, Paolo; Barbato, Giulio
2016-11-01
Most of low cost 3D scanning devices that are nowadays available on the market are sold without a user calibration procedure to correct measurement errors related to changes in environmental conditions. In addition, there is no specific international standard defining a procedure to check the performance of a 3D scanner along time. This paper aims at detailing a thorough methodology to calibrate a 3D scanner and assess its measurement uncertainty. The proposed procedure is based on the use of a reference ball plate and applied to a triangulation laser scanner. Experimental results show that the metrological performance of the instrument can be greatly improved by the application of the calibration procedure that corrects systematic errors and reduces the device's measurement uncertainty.
NASA Astrophysics Data System (ADS)
Morgan, A. M.; Aird, E. G. A.; Aukett, R. J.; Duane, S.; Jenkins, N. H.; Mayles, W. P. M.; Moretti, C.; Thwaites, D. I.
2000-09-01
United Kingdom dosimetry codes of practice have traditionally specified one electrometer for use as a secondary standard, namely the Nuclear Enterprises (NE) 2560 NPL secondary standard therapy level exposure meter. The NE2560 will become obsolete in the foreseeable future. This report provides guidelines to assist physicists following the United Kingdom dosimetry codes of practice in the selection of an electrometer to replace the NE2560 when necessary. Using an internationally accepted standard (BS EN 60731:1997) as a basis, estimated error analyses demonstrate that the uncertainty (one standard deviation) in a charge measurement associated with the NE2560 alone is approximately 0.3% under specified conditions. Following a review of manufacturers' literature, it is considered that modern electrometers should be capable of equalling this performance. Additional constructural and operational requirements not specified in the international standard but considered essential in a modern electrometer to be used as a secondary standard are presented.
Estimating extreme stream temperatures by the standard deviate method
NASA Astrophysics Data System (ADS)
Bogan, Travis; Othmer, Jonathan; Mohseni, Omid; Stefan, Heinz
2006-02-01
It is now widely accepted that global climate warming is taking place on the earth. Among many other effects, a rise in air temperatures is expected to increase stream temperatures. However, due to evaporative cooling, stream temperatures do not rise indefinitely in linear proportion to increasing air temperatures. Within the anticipated bounds of climate warming, extreme stream temperatures may therefore not rise substantially. With this concept in mind, past extreme temperatures measured at 720 USGS stream gauging stations were analyzed by the standard deviate method. In this method the highest stream temperatures are expressed as the mean temperature of a measured partial maximum stream temperature series plus its standard deviation multiplied by a factor KE (standard deviate). Various KE-values were explored; values of KE larger than 8 were found physically unreasonable. It is concluded that the value of KE should be in the range from 7 to 8. A unit error in estimating KE translates into a typical stream temperature error of about 0.5 °C. Using a logistic model for the stream temperature/air temperature relationship, a one-degree error in air temperature gives a typical error of 0.16 °C in stream temperature. With a projected error in the enveloping standard deviate dKE = 1.0 (range 0.5-1.5) and an error in projected high air temperature dTa = 2 °C (range 0-4 °C), the total projected stream temperature error is estimated as dTs = 0.8 °C.
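A minimal sketch of the standard deviate estimate described above: the extreme stream temperature is the mean of a partial maximum temperature series plus KE times its standard deviation. The example series is illustrative; the sensitivity to a unit error in KE is simply the standard deviation of the series, consistent with the roughly 0.5 °C quoted above.

```python
# Extreme stream temperature from the standard deviate method.
import numpy as np

partial_maxima = np.array([26.1, 26.8, 25.9, 27.2, 26.5, 27.0, 26.3])  # deg C, illustrative
mean, sd = partial_maxima.mean(), partial_maxima.std(ddof=1)

for k_e in (7.0, 8.0):                    # recommended range of the standard deviate
    print(f"K_E = {k_e}: extreme temperature estimate = {mean + k_e * sd:.1f} deg C")

print(f"error per unit error in K_E: {sd:.2f} deg C")
```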
Decreasing patient identification band errors by standardizing processes.
Walley, Susan Chu; Berger, Stephanie; Harris, Yolanda; Gallizzi, Gina; Hayes, Leslie
2013-04-01
Patient identification (ID) bands are an essential component in patient ID. Quality improvement methodology has been applied as a model to reduce ID band errors although previous studies have not addressed standardization of ID bands. Our specific aim was to decrease ID band errors by 50% in a 12-month period. The Six Sigma DMAIC (define, measure, analyze, improve, and control) quality improvement model was the framework for this study. ID bands at a tertiary care pediatric hospital were audited from January 2011 to January 2012 with continued audits to June 2012 to confirm the new process was in control. After analysis, the major improvement strategy implemented was standardization of styles of ID bands and labels. Additional interventions included educational initiatives regarding the new ID band processes and disseminating institutional and nursing unit data. A total of 4556 ID bands were audited with a preimprovement ID band error average rate of 9.2%. Significant variation in the ID band process was observed, including styles of ID bands. Interventions were focused on standardization of the ID band and labels. The ID band error rate improved to 5.2% in 9 months (95% confidence interval: 2.5-5.5; P < .001) and was maintained for 8 months. Standardization of ID bands and labels in conjunction with other interventions resulted in a statistical decrease in ID band error rates. This decrease in ID band error rates was maintained over the subsequent 8 months.
Methods for estimating flood frequency in Montana based on data through water year 1998
Parrett, Charles; Johnson, Dave R.
2004-01-01
Annual peak discharges having recurrence intervals of 2, 5, 10, 25, 50, 100, 200, and 500 years (T-year floods) were determined for 660 gaged sites in Montana and in adjacent areas of Idaho, Wyoming, and Canada, based on data through water year 1998. The updated flood-frequency information was subsequently used in regression analyses, either ordinary or generalized least squares, to develop equations relating T-year floods to various basin and climatic characteristics, equations relating T-year floods to active-channel width, and equations relating T-year floods to bankfull width. The equations can be used to estimate flood frequency at ungaged sites. Montana was divided into eight regions, within which flood characteristics were considered to be reasonably homogeneous, and the three sets of regression equations were developed for each region. A measure of the overall reliability of the regression equations is the average standard error of prediction. The average standard errors of prediction for the equations based on basin and climatic characteristics ranged from 37.4 percent to 134.1 percent. Average standard errors of prediction for the equations based on active-channel width ranged from 57.2 percent to 141.3 percent. Average standard errors of prediction for the equations based on bankfull width ranged from 63.1 percent to 155.5 percent. In most regions, the equations based on basin and climatic characteristics generally had smaller average standard errors of prediction than equations based on active-channel or bankfull width. An exception was the Southeast Plains Region, where all equations based on active-channel width had smaller average standard errors of prediction than equations based on basin and climatic characteristics or bankfull width. Methods for weighting estimates derived from the basin- and climatic-characteristic equations and the channel-width equations also were developed. The weights were based on the cross correlation of residuals from the different methods and the average standard errors of prediction. When all three methods were combined, the average standard errors of prediction ranged from 37.4 percent to 120.2 percent. Weighting of estimates reduced the standard errors of prediction for all T-year flood estimates in four regions, reduced the standard errors of prediction for some T-year flood estimates in two regions, and provided no reduction in average standard error of prediction in two regions. A computer program for solving the regression equations, weighting estimates, and determining reliability of individual estimates was developed and placed on the USGS Montana District World Wide Web page. A new regression method, termed Region of Influence regression, also was tested. Test results indicated that the Region of Influence method was not as reliable as the regional equations based on generalized least squares regression. Two additional methods for estimating flood frequency at ungaged sites located on the same streams as gaged sites also are described. The first method, based on a drainage-area-ratio adjustment, is intended for use on streams where the ungaged site of interest is located near a gaged site. The second method, based on interpolation between gaged sites, is intended for use on streams that have two or more streamflow-gaging stations.
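A simplified sketch of combining T-year flood estimates from the different methods by weighting with their average standard errors of prediction. Independence of the methods is assumed here for clarity; the report's weights also account for the cross correlation of residuals. The estimates and standard errors below are illustrative, not values from the report.

```python
# Inverse-variance-style weighting of flood estimates from three methods.
import numpy as np

# 100-year flood estimates (cfs) and average standard errors of prediction (%)
estimates = np.array([12000.0, 9500.0, 10800.0])   # basin-char., active-channel, bankfull
sep_percent = np.array([45.0, 70.0, 80.0])

weights = 1.0 / sep_percent**2                     # inverse-variance-style weights
weights /= weights.sum()
combined = np.sum(weights * estimates)
combined_sep = 1.0 / np.sqrt(np.sum(1.0 / sep_percent**2))
print(f"weighted estimate: {combined:.0f} cfs, approx. SEP: {combined_sep:.0f}%")
```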
Human operator response to error-likely situations in complex engineering systems
NASA Technical Reports Server (NTRS)
Morris, Nancy M.; Rouse, William B.
1988-01-01
The causes of human error in complex systems are examined. First, a conceptual framework is provided in which two broad categories of error are discussed: errors of action, or slips, and errors of intention, or mistakes. Conditions in which slips and mistakes might be expected to occur are identified, based on existing theories of human error. Regarding the role of workload, it is hypothesized that workload may act as a catalyst for error. Two experiments are presented in which humans' responses to error-likely situations were examined. Subjects controlled PLANT under a variety of conditions and periodically provided subjective ratings of mental effort. A complex pattern of results was obtained, which was not consistent with predictions. Generally, the results of this research indicate that: (1) humans respond to conditions in which errors might be expected by attempting to reduce the possibility of error, and (2) adaptation to conditions is a potent influence on human behavior in discretionary situations. Subjects' explanations for changes in effort ratings are also explored.
Schoenberg, Mike R; Rum, Ruba S
2017-11-01
Rapid, clear and efficient communication of neuropsychological results is essential to benefit patient care. Errors in communication are a leading cause of medical errors; nevertheless, there remains a lack of consistency in how neuropsychological scores are communicated. A major limitation in the communication of neuropsychological results is the inconsistent use of qualitative descriptors for standardized test scores and the use of vague terminology. A PubMed search from 1 January 2007 to 1 August 2016 was conducted to identify guidelines or consensus statements on the qualitative terms used to communicate neuropsychological test scores. The review found the use of confusing and overlapping terms to describe various ranges of percentile standardized test scores. In response, we propose a simplified set of qualitative descriptors for normalized test scores (Q-Simple) as a means to reduce errors in communicating test results. The Q-Simple qualitative terms are: 'very superior', 'superior', 'high average', 'average', 'low average', 'borderline' and 'abnormal/impaired'. A case example illustrates the proposed Q-Simple qualitative classification system to communicate neuropsychological results for neurosurgical planning. The Q-Simple qualitative descriptor system is aimed as a means to improve and standardize communication of standardized neuropsychological test scores. Further research is needed to evaluate neuropsychological communication errors. Conveying the clinical implications of neuropsychological results in a manner that minimizes risk for communication errors is a quintessential component of evidence-based practice. Copyright © 2017 Elsevier B.V. All rights reserved.
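For illustration only, a mapping of a standardized score onto the seven Q-Simple descriptors listed above. The score cutoffs below are placeholder standard-score ranges (mean 100, SD 15) chosen for the example; they are not the specific cutoffs proposed by the authors.

```python
# Illustrative Q-Simple descriptor lookup; cutoffs are placeholder assumptions.
def q_simple_descriptor(standard_score: float) -> str:
    cutoffs = [
        (130, "very superior"),
        (120, "superior"),
        (110, "high average"),
        (90,  "average"),
        (80,  "low average"),
        (70,  "borderline"),
    ]
    for lower, label in cutoffs:
        if standard_score >= lower:
            return label
    return "abnormal/impaired"

print(q_simple_descriptor(84))   # -> low average (under the assumed cutoffs)
```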
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yan, H; Chen, Z; Nath, R
Purpose: kV fluoroscopic imaging combined with MV treatment beam imaging has been investigated for intrafractional motion monitoring and correction. It is, however, subject to additional kV imaging dose to normal tissue. To balance tracking accuracy and imaging dose, we previously proposed an adaptive imaging strategy to dynamically decide future imaging type and moments based on motion tracking uncertainty. kV imaging may be used continuously for maximal accuracy or only when the position uncertainty (probability of out of threshold) is high if a preset imaging dose limit is considered. In this work, we propose more accurate methods to estimate tracking uncertainty by analyzing acquired data in real time. Methods: We simulated the motion tracking process based on a previously developed imaging framework (MV + initial seconds of kV imaging) using real-time breathing data from 42 patients. Motion tracking errors for each time point were collected together with the time point's corresponding features, such as tumor motion speed and 2D tracking error of previous time points, etc. We tested three methods for error uncertainty estimation based on the features: conditional probability distribution, logistic regression modeling, and support vector machine (SVM) classification to detect errors exceeding a threshold. Results: For the conditional probability distribution, polynomial regressions on three features (previous tracking error, prediction quality, and cosine of the angle between the trajectory and the treatment beam) showed strong correlation with the variation (uncertainty) of the mean 3D tracking error and its standard deviation: R-square = 0.94 and 0.90, respectively. The logistic regression and SVM classification successfully identified about 95% of tracking errors exceeding the 2.5 mm threshold. Conclusion: The proposed methods can reliably estimate the motion tracking uncertainty in real time, which can be used to guide adaptive additional imaging to confirm the tumor is within the margin or initialize motion compensation if it is out of the margin.
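A sketch of the classification step described above: from per-time-point features (previous tracking error, prediction quality, cosine of the trajectory/beam angle), predict whether the 3D tracking error exceeds the 2.5 mm threshold, using both logistic regression and an SVM. All data here are synthetic placeholders, not the patient data used in the work.

```python
# Classify time points whose 3D tracking error exceeds 2.5 mm from simple features.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC

rng = np.random.default_rng(3)
n = 2000
prev_error = rng.gamma(2.0, 0.8, n)                    # mm
pred_quality = rng.uniform(0, 1, n)
cos_angle = rng.uniform(-1, 1, n)
X = np.column_stack([prev_error, pred_quality, cos_angle])

# synthetic "true" 3D error, larger for big previous errors and poor prediction quality
error_3d = 0.8 * prev_error + 2.0 * (1 - pred_quality) + 0.3 * np.abs(cos_angle) \
           + rng.normal(0, 0.3, n)
y = (error_3d > 2.5).astype(int)

logit = LogisticRegression().fit(X, y)
svm = SVC(kernel="rbf").fit(X, y)
print("logistic accuracy:", logit.score(X, y), " SVM accuracy:", svm.score(X, y))
```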
NASA Astrophysics Data System (ADS)
Winiarek, Victor; Bocquet, Marc; Duhanyan, Nora; Roustan, Yelva; Saunier, Olivier; Mathieu, Anne
2014-01-01
Inverse modelling techniques can be used to estimate the amount of radionuclides and the temporal profile of the source term released in the atmosphere during the accident of the Fukushima Daiichi nuclear power plant in March 2011. In Winiarek et al. (2012b), the lower bounds of the caesium-137 and iodine-131 source terms were estimated with such techniques, using activity concentration measurements. The importance of an objective assessment of prior errors (the observation errors and the background errors) was emphasised for a reliable inversion. In such critical context where the meteorological conditions can make the source term partly unobservable and where only a few observations are available, such prior estimation techniques are mandatory, the retrieved source term being very sensitive to this estimation. We propose to extend the use of these techniques to the estimation of prior errors when assimilating observations from several data sets. The aim is to compute an estimate of the caesium-137 source term jointly using all available data about this radionuclide, such as activity concentrations in the air, but also daily fallout measurements and total cumulated fallout measurements. It is crucial to properly and simultaneously estimate the background errors and the prior errors relative to each data set. A proper estimation of prior errors is also a necessary condition to reliably estimate the a posteriori uncertainty of the estimated source term. Using such techniques, we retrieve a total released quantity of caesium-137 in the interval 11.6-19.3 PBq with an estimated standard deviation range of 15-20% depending on the method and the data sets. The “blind” time intervals of the source term have also been strongly mitigated compared to the first estimations with only activity concentration data.
A comparison of exact tests for trend with binary endpoints using Bartholomew's statistic.
Consiglio, J D; Shan, G; Wilding, G E
2014-01-01
Tests for trend are important in a number of scientific fields when trends associated with binary variables are of interest. Implementing the standard Cochran-Armitage trend test requires an arbitrary choice of scores assigned to represent the grouping variable. Bartholomew proposed a test for qualitatively ordered samples using asymptotic critical values, but type I error control can be problematic in finite samples. To our knowledge, use of the exact probability distribution has not been explored, and we study its use in the present paper. Specifically we consider an approach based on conditioning on both sets of marginal totals and three unconditional approaches where only the marginal totals corresponding to the group sample sizes are treated as fixed. While slightly conservative, all four tests are guaranteed to have actual type I error rates below the nominal level. The unconditional tests are found to exhibit far less conservatism than the conditional test and thereby gain a power advantage.
Qibo, Feng; Bin, Zhang; Cunxing, Cui; Cuifang, Kuang; Yusheng, Zhai; Fenglin, You
2013-11-04
A simple method for simultaneously measuring the 6DOF geometric motion errors of the linear guide was proposed. The mechanisms for measuring straightness and angular errors and for enhancing their resolution are described in detail. A common-path method for measuring the laser beam drift was proposed and it was used to compensate the errors produced by the laser beam drift in the 6DOF geometric error measurements. A compact 6DOF system was built. Calibration experiments with certain standard measurement meters showed that our system has a standard deviation of 0.5 µm in a range of ± 100 µm for the straightness measurements, and standard deviations of 0.5", 0.5", and 1.0" in the range of ± 100" for pitch, yaw, and roll measurements, respectively.
Comparing interval estimates for small sample ordinal CFA models
Natesan, Prathiba
2015-01-01
Robust maximum likelihood (RML) and asymptotically generalized least squares (AGLS) methods have been recommended for fitting ordinal structural equation models. Studies show that some of these methods underestimate standard errors. However, these studies have not investigated the coverage and bias of interval estimates. An estimate with a reasonable standard error could still be severely biased. This can only be known by systematically investigating the interval estimates. The present study compares Bayesian, RML, and AGLS interval estimates of factor correlations in ordinal confirmatory factor analysis models (CFA) for small sample data. Six sample sizes, 3 factor correlations, and 2 factor score distributions (multivariate normal and multivariate mildly skewed) were studied. Two Bayesian prior specifications, informative and relatively less informative were studied. Undercoverage of confidence intervals and underestimation of standard errors was common in non-Bayesian methods. Underestimated standard errors may lead to inflated Type-I error rates. Non-Bayesian intervals were more positive biased than negatively biased, that is, most intervals that did not contain the true value were greater than the true value. Some non-Bayesian methods had non-converging and inadmissible solutions for small samples and non-normal data. Bayesian empirical standard error estimates for informative and relatively less informative priors were closer to the average standard errors of the estimates. The coverage of Bayesian credibility intervals was closer to what was expected with overcoverage in a few cases. Although some Bayesian credibility intervals were wider, they reflected the nature of statistical uncertainty that comes with the data (e.g., small sample). Bayesian point estimates were also more accurate than non-Bayesian estimates. The results illustrate the importance of analyzing coverage and bias of interval estimates, and how ignoring interval estimates can be misleading. Therefore, editors and policymakers should continue to emphasize the inclusion of interval estimates in research. PMID:26579002
Bootstrap Standard Errors for Maximum Likelihood Ability Estimates When Item Parameters Are Unknown
ERIC Educational Resources Information Center
Patton, Jeffrey M.; Cheng, Ying; Yuan, Ke-Hai; Diao, Qi
2014-01-01
When item parameter estimates are used to estimate the ability parameter in item response models, the standard error (SE) of the ability estimate must be corrected to reflect the error carried over from item calibration. For maximum likelihood (ML) ability estimates, a corrected asymptotic SE is available, but it requires a long test and the…
Cost-effectiveness of the stream-gaging program in Nebraska
Engel, G.B.; Wahl, K.L.; Boohar, J.A.
1984-01-01
This report documents the results of a study of the cost-effectiveness of the streamflow information program in Nebraska. Presently, 145 continuous surface-water stations are operated in Nebraska on a budget of $908,500. Data uses and funding sources are identified for each of the 145 stations. Data from most stations have multiple uses. All stations have sufficient justification for continuation, but two stations primarily are used in short-term research studies; their continued operation needs to be evaluated when the research studies end. The present measurement frequency produces an average standard error for instantaneous discharges of about 12 percent, including periods when stage data are missing. Altering the travel routes and the measurement frequency will allow a reduction in standard error of about 1 percent with the present budget. Standard error could be reduced to about 8 percent if lost record could be eliminated. A minimum budget of $822,000 is required to operate the present network, but operations at that funding level would result in an increase in standard error to about 16 percent. The maximum budget analyzed was $1,363,000, which would result in an average standard error of 6 percent. (USGS)
Prepopulated radiology report templates: a prospective analysis of error rate and turnaround time.
Hawkins, C M; Hall, S; Hardin, J; Salisbury, S; Towbin, A J
2012-08-01
Current speech recognition software allows exam-specific standard reports to be prepopulated into the dictation field based on the radiology information system procedure code. While it is thought that prepopulating reports can decrease the time required to dictate a study and the overall number of errors in the final report, this hypothesis has not been studied in a clinical setting. A prospective study was performed. During the first week, radiologists dictated all studies using prepopulated standard reports. During the second week, all studies were dictated after prepopulated reports had been disabled. Final radiology reports were evaluated for 11 different types of errors. Each error within a report was classified individually. The median time required to dictate an exam was compared between the 2 weeks. There were 12,387 reports dictated during the study, of which, 1,173 randomly distributed reports were analyzed for errors. There was no difference in the number of errors per report between the 2 weeks; however, radiologists overwhelmingly preferred using a standard report both weeks. Grammatical errors were by far the most common error type, followed by missense errors and errors of omission. There was no significant difference in the median dictation time when comparing studies performed each week. The use of prepopulated reports does not alone affect the error rate or dictation time of radiology reports. While it is a useful feature for radiologists, it must be coupled with other strategies in order to decrease errors.
Momen, Awad A; Zachariadis, George A; Anthemidis, Aristidis N; Stratis, John A
2007-01-15
Two digestion procedures have been tested on nut samples for application in the determination of essential (Cr, Cu, Fe, Mg, Mn, Zn) and non-essential (Al, Ba, Cd, Pb) elements by inductively coupled plasma-optical emission spectrometry (ICP-OES). These included wet digestions with HNO(3)/H(2)SO(4) and HNO(3)/H(2)SO(4)/H(2)O(2). The latter is recommended for better analyte recoveries (relative error <11%). Two calibration procedures (aqueous standard and standard addition) were studied, and standard addition proved preferable for all analytes. Experimental designs for seven factors (HNO(3), H(2)SO(4) and H(2)O(2) volumes, digestion time, pre-digestion time, temperature of the hot plate and sample weight) were used for optimization of the sample digestion procedures. For this purpose a Plackett-Burman fractional factorial design, which involves eight experiments, was adopted. The factors HNO(3) and H(2)O(2) volume, and the digestion time, were found to be the most important parameters. The instrumental conditions were also optimized (using a peanut matrix rather than aqueous standard solutions) considering radio-frequency (rf) incident power, nebulizer argon gas flow rate and sample uptake flow rate. The analytical performance, such as limits of detection (LOD <0.74 µg g(-1)), precision of the overall procedures (relative standard deviation between 2.0 and 8.2%) and accuracy (relative errors between 0.4 and 11%), was assessed statistically to evaluate the developed analytical procedures. The good agreement between measured and certified values for all analytes (relative error <11%) with respect to IAEA-331 (spinach leaves) and IAEA-359 (cabbage) indicates that the developed analytical method is well suited for further studies on the fate of major elements in nuts and possibly similar matrices.
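A Plackett-Burman design of the size described (seven factors in eight runs) can be generated from cyclic shifts of the standard N = 8 generator row; the sketch below is a generic illustration, and the mapping of columns to the factors named in the abstract is an assumed ordering.

```python
import numpy as np

def plackett_burman_8():
    """Eight-run Plackett-Burman design for up to seven two-level factors,
    built from cyclic shifts of the standard N=8 generator plus an all-minus run."""
    generator = np.array([1, 1, 1, -1, 1, -1, -1])
    rows = [np.roll(generator, k) for k in range(7)]
    rows.append(-np.ones(7, dtype=int))
    return np.array(rows, dtype=int)

# assumed column-to-factor assignment, following the factors listed in the abstract
factors = ["HNO3 vol", "H2SO4 vol", "H2O2 vol", "digestion time",
           "pre-digestion time", "hot-plate temperature", "sample weight"]
design = plackett_burman_8()
for run, levels in enumerate(design, start=1):
    settings = ", ".join(f"{f}={'+' if s > 0 else '-'}" for f, s in zip(factors, levels))
    print(f"run {run}: {settings}")

# Each main effect is then estimated as the difference between the mean response
# at the + level and the mean response at the - level of its column.
```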
NASA Astrophysics Data System (ADS)
Bowen, S. R.; Nyflot, M. J.; Herrmann, C.; Groh, C. M.; Meyer, J.; Wollenweber, S. D.; Stearns, C. W.; Kinahan, P. E.; Sandison, G. A.
2015-05-01
Effective positron emission tomography / computed tomography (PET/CT) guidance in radiotherapy of lung cancer requires estimation and mitigation of errors due to respiratory motion. An end-to-end workflow was developed to measure patient-specific motion-induced uncertainties in imaging, treatment planning, and radiation delivery with respiratory motion phantoms and dosimeters. A custom torso phantom with inserts mimicking normal lung tissue and lung lesion was filled with [18F]FDG. The lung lesion insert was driven by six different patient-specific respiratory patterns or kept stationary. PET/CT images were acquired under motionless ground truth, tidal breathing motion-averaged (3D), and respiratory phase-correlated (4D) conditions. Target volumes were estimated by standardized uptake value (SUV) thresholds that accurately defined the ground-truth lesion volume. Non-uniform dose-painting plans using volumetrically modulated arc therapy were optimized for fixed normal lung and spinal cord objectives and variable PET-based target objectives. Resulting plans were delivered to a cylindrical diode array at rest, in motion on a platform driven by the same respiratory patterns (3D), or motion-compensated by a robotic couch with an infrared camera tracking system (4D). Errors were estimated relative to the static ground truth condition for mean target-to-background (T/Bmean) ratios, target volumes, planned equivalent uniform target doses, and 2%-2 mm gamma delivery passing rates. Relative to motionless ground truth conditions, PET/CT imaging errors were on the order of 10-20%, treatment planning errors were 5-10%, and treatment delivery errors were 5-30% without motion compensation. Errors from residual motion following compensation methods were reduced to 5-10% in PET/CT imaging, <5% in treatment planning, and <2% in treatment delivery. We have demonstrated that estimation of respiratory motion uncertainty and its propagation from PET/CT imaging to RT planning, and RT delivery under a dose painting paradigm is feasible within an integrated respiratory motion phantom workflow. For a limited set of cases, the magnitude of errors was comparable during PET/CT imaging and treatment delivery without motion compensation. Errors were moderately mitigated during PET/CT imaging and significantly mitigated during RT delivery with motion compensation. This dynamic motion phantom end-to-end workflow provides a method for quality assurance of 4D PET/CT-guided radiotherapy, including evaluation of respiratory motion compensation methods during imaging and treatment delivery.
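The 2%-2 mm gamma passing rate used as a delivery metric above can be illustrated with a simplified one-dimensional, global-normalization gamma index. The real analysis uses a diode array and clinical software, and conventions differ on which distribution is searched, so this is only a sketch of the underlying calculation with invented dose profiles.

```python
import numpy as np

def gamma_pass_rate_1d(ref_pos, ref_dose, eval_pos, eval_dose,
                       dose_crit=0.02, dist_crit=2.0):
    """Simplified 1-D global gamma analysis (e.g., 2%/2 mm).
    Positions are in mm; the dose criterion is a fraction of the maximum reference dose."""
    dose_norm = dose_crit * ref_dose.max()
    gammas = []
    for xe, de in zip(eval_pos, eval_dose):
        # distance and dose-difference terms against every reference point
        dist_term = (ref_pos - xe) / dist_crit
        dose_term = (ref_dose - de) / dose_norm
        gammas.append(np.sqrt(dist_term**2 + dose_term**2).min())
    gammas = np.asarray(gammas)
    return (gammas <= 1.0).mean(), gammas

# toy usage: a Gaussian dose profile delivered with a 1 mm shift and 1% scaling error
x = np.linspace(-30, 30, 241)
ref = 2.0 * np.exp(-x**2 / (2 * 8.0**2))
meas = 1.01 * 2.0 * np.exp(-(x - 1.0)**2 / (2 * 8.0**2))
rate, _ = gamma_pass_rate_1d(x, ref, x, meas)
print(f"gamma passing rate: {100 * rate:.1f}%")
```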
Bowen, S R; Nyflot, M J; Herrmann, C; Groh, C M; Meyer, J; Wollenweber, S D; Stearns, C W; Kinahan, P E; Sandison, G A
2015-05-07
Effective positron emission tomography / computed tomography (PET/CT) guidance in radiotherapy of lung cancer requires estimation and mitigation of errors due to respiratory motion. An end-to-end workflow was developed to measure patient-specific motion-induced uncertainties in imaging, treatment planning, and radiation delivery with respiratory motion phantoms and dosimeters. A custom torso phantom with inserts mimicking normal lung tissue and lung lesion was filled with [(18)F]FDG. The lung lesion insert was driven by six different patient-specific respiratory patterns or kept stationary. PET/CT images were acquired under motionless ground truth, tidal breathing motion-averaged (3D), and respiratory phase-correlated (4D) conditions. Target volumes were estimated by standardized uptake value (SUV) thresholds that accurately defined the ground-truth lesion volume. Non-uniform dose-painting plans using volumetrically modulated arc therapy were optimized for fixed normal lung and spinal cord objectives and variable PET-based target objectives. Resulting plans were delivered to a cylindrical diode array at rest, in motion on a platform driven by the same respiratory patterns (3D), or motion-compensated by a robotic couch with an infrared camera tracking system (4D). Errors were estimated relative to the static ground truth condition for mean target-to-background (T/Bmean) ratios, target volumes, planned equivalent uniform target doses, and 2%-2 mm gamma delivery passing rates. Relative to motionless ground truth conditions, PET/CT imaging errors were on the order of 10-20%, treatment planning errors were 5-10%, and treatment delivery errors were 5-30% without motion compensation. Errors from residual motion following compensation methods were reduced to 5-10% in PET/CT imaging, <5% in treatment planning, and <2% in treatment delivery. We have demonstrated that estimation of respiratory motion uncertainty and its propagation from PET/CT imaging to RT planning, and RT delivery under a dose painting paradigm is feasible within an integrated respiratory motion phantom workflow. For a limited set of cases, the magnitude of errors was comparable during PET/CT imaging and treatment delivery without motion compensation. Errors were moderately mitigated during PET/CT imaging and significantly mitigated during RT delivery with motion compensation. This dynamic motion phantom end-to-end workflow provides a method for quality assurance of 4D PET/CT-guided radiotherapy, including evaluation of respiratory motion compensation methods during imaging and treatment delivery.
Bowen, S R; Nyflot, M J; Hermann, C; Groh, C; Meyer, J; Wollenweber, S D; Stearns, C W; Kinahan, P E; Sandison, G A
2015-01-01
Effective positron emission tomography/computed tomography (PET/CT) guidance in radiotherapy of lung cancer requires estimation and mitigation of errors due to respiratory motion. An end-to-end workflow was developed to measure patient-specific motion-induced uncertainties in imaging, treatment planning, and radiation delivery with respiratory motion phantoms and dosimeters. A custom torso phantom with inserts mimicking normal lung tissue and lung lesion was filled with [18F]FDG. The lung lesion insert was driven by 6 different patient-specific respiratory patterns or kept stationary. PET/CT images were acquired under motionless ground truth, tidal breathing motion-averaged (3D), and respiratory phase-correlated (4D) conditions. Target volumes were estimated by standardized uptake value (SUV) thresholds that accurately defined the ground-truth lesion volume. Non-uniform dose-painting plans using volumetrically modulated arc therapy (VMAT) were optimized for fixed normal lung and spinal cord objectives and variable PET-based target objectives. Resulting plans were delivered to a cylindrical diode array at rest, in motion on a platform driven by the same respiratory patterns (3D), or motion-compensated by a robotic couch with an infrared camera tracking system (4D). Errors were estimated relative to the static ground truth condition for mean target-to-background (T/Bmean) ratios, target volumes, planned equivalent uniform target doses (EUD), and 2%-2mm gamma delivery passing rates. Relative to motionless ground truth conditions, PET/CT imaging errors were on the order of 10–20%, treatment planning errors were 5–10%, and treatment delivery errors were 5–30% without motion compensation. Errors from residual motion following compensation methods were reduced to 5–10% in PET/CT imaging, < 5% in treatment planning, and < 2% in treatment delivery. We have demonstrated that estimation of respiratory motion uncertainty and its propagation from PET/CT imaging to RT planning, and RT delivery under a dose painting paradigm is feasible within an integrated respiratory motion phantom workflow. For a limited set of cases, the magnitude of errors was comparable during PET/CT imaging and treatment delivery without motion compensation. Errors were moderately mitigated during PET/CT imaging and significantly mitigated during RT delivery with motion compensation. This dynamic motion phantom end-to-end workflow provides a method for quality assurance of 4D PET/CT-guided radiotherapy, including evaluation of respiratory motion compensation methods during imaging and treatment delivery. PMID:25884892
NASA Astrophysics Data System (ADS)
Pernot, Pascal; Savin, Andreas
2018-06-01
Benchmarking studies in computational chemistry use reference datasets to assess the accuracy of a method through error statistics. The commonly used error statistics, such as the mean signed and mean unsigned errors, do not inform end-users on the expected amplitude of prediction errors attached to these methods. We show that, because the distributions of model errors are neither normal nor zero-centered, these error statistics cannot be used to infer prediction error probabilities. To overcome this limitation, we advocate for the use of more informative statistics, based on the empirical cumulative distribution function of unsigned errors, namely, (1) the probability for a new calculation to have an absolute error below a chosen threshold and (2) the maximal amplitude of errors one can expect with a chosen high confidence level. Those statistics are also shown to be well suited for benchmarking and ranking studies. Moreover, the standard error on all benchmarking statistics depends on the size of the reference dataset. Systematic publication of these standard errors would be very helpful to assess the statistical reliability of benchmarking conclusions.
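A minimal sketch of the two ECDF-based statistics advocated here, together with bootstrap standard errors that reflect the size of the reference set; the threshold, confidence level, and error distribution are arbitrary example values.

```python
import numpy as np

def benchmark_statistics(errors, threshold=1.0, confidence=0.95, n_boot=2000, seed=0):
    """ECDF-based statistics for a set of signed model errors:
    C(threshold): probability that a new |error| falls below the threshold;
    Q(confidence): unsigned-error amplitude not exceeded with the chosen confidence."""
    abs_err = np.abs(np.asarray(errors, dtype=float))
    c = (abs_err < threshold).mean()
    q = np.quantile(abs_err, confidence)

    # bootstrap standard errors: both statistics depend on the reference-set size
    rng = np.random.default_rng(seed)
    boot = rng.choice(abs_err, size=(n_boot, abs_err.size), replace=True)
    se_c = (boot < threshold).mean(axis=1).std(ddof=1)
    se_q = np.quantile(boot, confidence, axis=1).std(ddof=1)
    return {"C": c, "se_C": se_c, "Q": q, "se_Q": se_q}

# toy usage with skewed, non-zero-centered errors (in arbitrary energy units)
rng = np.random.default_rng(1)
errs = rng.gamma(shape=2.0, scale=0.6, size=150) - 0.5
print(benchmark_statistics(errs, threshold=1.0, confidence=0.95))
```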
Cost effectiveness of the US Geological Survey stream-gaging program in Alabama
Jeffcoat, H.H.
1987-01-01
A study of the cost effectiveness of the stream gaging program in Alabama identified data uses and funding sources for 72 surface water stations (including dam stations, slope stations, and continuous-velocity stations) operated by the U.S. Geological Survey in Alabama with a budget of $393,600. Of these, 58 gaging stations were used in all phases of the analysis at a funding level of $328,380. For the current policy of operation of the 58-station program, the average standard error of estimation of instantaneous discharge is 29.3%. This overall level of accuracy can be maintained with a budget of $319,800 by optimizing routes and implementing some policy changes. The maximum budget considered in the analysis was $361,200, which gave an average standard error of estimation of 20.6%. The minimum budget considered was $299,360, with an average standard error of estimation of 36.5%. The study indicates that a major source of error in the stream gaging records is lost or missing data that are the result of streamside equipment failure. If perfect equipment were available, the standard error in estimating instantaneous discharge under the current program and budget could be reduced to 18.6%. This can also be interpreted to mean that the streamflow data records have a standard error of this magnitude during times when the equipment is operating properly. (Author's abstract)
Survey of Header Compression Techniques
NASA Technical Reports Server (NTRS)
Ishac, Joseph
2001-01-01
This report provides a summary of several different header compression techniques. The different techniques included are: (1) Van Jacobson's header compression (RFC 1144); (2) SCPS (Space Communications Protocol Standards) header compression (SCPS-TP, SCPS-NP); (3) Robust header compression (ROHC); and (4) The header compression techniques in RFC2507 and RFC2508. The methodology for compression and error correction for these schemes is described in the remainder of this document. All of the header compression schemes support compression over simplex links, provided that the end receiver has some means of sending data back to the sender. However, if that return path does not exist, then neither Van Jacobson's nor SCPS can be used, since both rely on TCP (Transmission Control Protocol). In addition, under link conditions of low delay and low error, all of the schemes perform as expected. However, based on the methodology of the schemes, each scheme is likely to behave differently as conditions degrade. Van Jacobson's header compression relies heavily on the TCP retransmission timer and would suffer an increase in loss propagation should the link possess a high delay and/or bit error rate (BER). The SCPS header compression scheme protects against high delay environments by avoiding delta encoding between packets. Thus, loss propagation is avoided. However, SCPS is still affected by an increased BER, since the lack of delta encoding results in larger header sizes. Next, the schemes found in RFC2507 and RFC2508 perform well for non-TCP connections in poor conditions. RFC2507 performance with TCP connections is improved by various techniques over Van Jacobson's, but still suffers a performance hit with poor link properties. Also, RFC2507 offers the ability to send TCP data without delta encoding, similar to what SCPS offers. ROHC is similar to the previous two schemes, but adds additional CRCs (cyclic redundancy check) into headers and improves compression schemes, which provides better tolerances in conditions with a high BER.
The proposed coding standard at GSFC
NASA Technical Reports Server (NTRS)
Morakis, J. C.; Helgert, H. J.
1977-01-01
As part of the continuing effort to introduce standardization of spacecraft and ground equipment in satellite systems, NASA's Goddard Space Flight Center and other NASA facilities have supported the development of a set of standards for the use of error control coding in telemetry subsystems. These standards are intended to ensure compatibility between spacecraft and ground encoding equipment, while allowing sufficient flexibility to meet all anticipated mission requirements. The standards which have been developed to date cover the application of block codes in error detection and error correction modes, as well as short and long constraint length convolutional codes decoded via the Viterbi and sequential decoding algorithms, respectively. Included are detailed specifications of the codes, and their implementation. Current effort is directed toward the development of standards covering channels with burst noise characteristics, channels with feedback, and code concatenation.
Automatic Error Analysis Using Intervals
ERIC Educational Resources Information Center
Rothwell, E. J.; Cloud, M. J.
2012-01-01
A technique for automatic error analysis using interval mathematics is introduced. A comparison to standard error propagation methods shows that in cases involving complicated formulas, the interval approach gives comparable error estimates with much less effort. Several examples are considered, and numerical errors are computed using the INTLAB…
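The ERIC record above is truncated; as a generic illustration of the comparison it describes, the sketch below evaluates a simple formula both with a tiny interval-arithmetic class and with first-order error propagation. INTLAB itself is MATLAB software, so this Python stand-in is an assumption for illustration only; note also that naive interval evaluation can overestimate the range when a variable appears more than once.

```python
import numpy as np

class Interval:
    """Minimal interval-arithmetic type: every operation returns an interval
    guaranteed to contain all values attainable from the operand ranges."""
    def __init__(self, lo, hi):
        self.lo, self.hi = min(lo, hi), max(lo, hi)
    def __add__(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)
    def __mul__(self, other):
        p = [self.lo * other.lo, self.lo * other.hi, self.hi * other.lo, self.hi * other.hi]
        return Interval(min(p), max(p))
    def __truediv__(self, other):
        assert other.lo > 0 or other.hi < 0, "division by an interval containing 0"
        return self * Interval(1.0 / other.hi, 1.0 / other.lo)
    def __repr__(self):
        return f"[{self.lo:.4f}, {self.hi:.4f}]"

# f(a, b) = a*b / (a + b), with a = 10 +/- 0.5 and b = 20 +/- 0.5
a, b, da, db = 10.0, 20.0, 0.5, 0.5

# interval approach: feed the worst-case ranges straight through the formula
A, B = Interval(a - da, a + da), Interval(b - db, b + db)
print("interval result:    ", (A * B) / (A + B))

# standard first-order propagation: df = sqrt((df/da * da)^2 + (df/db * db)^2)
f = a * b / (a + b)
dfda = (b / (a + b)) ** 2
dfdb = (a / (a + b)) ** 2
df = np.hypot(dfda * da, dfdb * db)
print(f"first-order result:  {f:.4f} +/- {df:.4f}")
```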
H∞ output tracking control of discrete-time nonlinear systems via standard neural network models.
Liu, Meiqin; Zhang, Senlin; Chen, Haiyang; Sheng, Weihua
2014-10-01
This brief proposes an output tracking control for a class of discrete-time nonlinear systems with disturbances. A standard neural network model is used to represent discrete-time nonlinear systems whose nonlinearity satisfies the sector conditions. H∞ control performance for the closed-loop system, including the standard neural network model, the reference model, and the state feedback controller, is analyzed using the Lyapunov-Krasovskii stability theorem and a linear matrix inequality (LMI) approach. The H∞ controller, whose parameters are obtained by solving LMIs, guarantees that the output of the closed-loop system closely tracks the output of a given reference model and reduces the influence of disturbances on the tracking error. Three numerical examples are provided to show the effectiveness of the proposed H∞ output tracking design approach.
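The full H∞ tracking synthesis in this brief is posed as LMIs over the standard neural network model with sector-bounded nonlinearities; as a much smaller illustration of the Lyapunov ingredient behind such conditions, the sketch below solves the discrete-time Lyapunov equation for an assumed example system and checks positive definiteness. Full LMI feasibility problems would typically go to a semidefinite-programming solver instead.

```python
import numpy as np
from scipy.linalg import solve_discrete_lyapunov

# Stability ingredient of LMI-based analyses: for a discrete-time linear system
# x[k+1] = A x[k], find P = P^T > 0 satisfying A^T P A - P = -Q with Q > 0.
A = np.array([[0.8, 0.3],
              [-0.2, 0.7]])   # assumed example system, not from the brief
Q = np.eye(2)

# scipy solves a X a^H - X + q = 0, so pass a = A^T to obtain A^T P A - P + Q = 0
P = solve_discrete_lyapunov(A.T, Q)

eigs = np.linalg.eigvalsh(P)
print("P =\n", P)
print("P positive definite (system Schur-stable)?", bool(np.all(eigs > 0)))
```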
Derivation of an analytic expression for the error associated with the noise reduction rating
NASA Astrophysics Data System (ADS)
Murphy, William J.
2005-04-01
Hearing protection devices are assessed using the Real Ear Attenuation at Threshold (REAT) measurement procedure for the purpose of estimating the amount of noise reduction provided when worn by a subject. The rating number provided on the protector label is a function of the mean and standard deviation of the REAT results achieved by the test subjects. If a group of subjects have a large variance, then it follows that the certainty of the rating should be correspondingly lower. No estimate of the error of a protector's rating is given by existing standards or regulations. Propagation of errors was applied to the Noise Reduction Rating to develop an analytic expression for the hearing protector rating error term. Comparison of the analytic expression for the error to the standard deviation estimated from Monte Carlo simulation of subject attenuations yielded a linear relationship across several protector types and assumptions for the variance of the attenuations.
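The actual NRR involves A- and C-weighted levels; as a simplified illustration of the propagation-of-errors idea, the sketch below derives the standard error of a rating of the form mean attenuation minus two standard deviations and checks it against a Monte Carlo simulation over subject panels. All numbers are invented.

```python
import numpy as np

def analytic_se_rating(sigma, n, k=2.0):
    """First-order propagation of errors for a simplified rating R = mean - k*SD
    from n subjects with between-subject SD sigma (normal data):
    Var(mean) = sigma^2/n, Var(SD) ~= sigma^2 / (2(n-1)), mean and SD independent."""
    return np.sqrt(sigma**2 / n + k**2 * sigma**2 / (2 * (n - 1)))

def monte_carlo_se_rating(mu, sigma, n, k=2.0, n_panels=20000, seed=0):
    """SE of the same rating estimated by simulating many panels of n subjects."""
    rng = np.random.default_rng(seed)
    panels = rng.normal(mu, sigma, size=(n_panels, n))
    ratings = panels.mean(axis=1) - k * panels.std(axis=1, ddof=1)
    return ratings.std(ddof=1)

# example: 10-subject panels, mean attenuation 30 dB, between-subject SD 6 dB
print("analytic SE:   ", round(analytic_se_rating(sigma=6, n=10), 2), "dB")
print("Monte Carlo SE:", round(monte_carlo_se_rating(mu=30, sigma=6, n=10), 2), "dB")
```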
Federal Register 2010, 2011, 2012, 2013, 2014
2013-03-20
The Food and Drug Administration (FDA or we) is correcting the preamble to a proposed rule that published in the Federal Register of January 16, 2013. That proposed rule would establish science-based minimum standards for the safe growing, harvesting, packing, and holding of produce, meaning fruits and vegetables grown for human consumption. FDA proposed these standards as part of our implementation of the FDA Food Safety Modernization Act. The document published with several technical errors, including some errors in cross references, as well as several errors in reference numbers cited throughout the document. This document corrects those errors. We are also placing a corrected copy of the proposed rule in the docket.
Davis, Edward T; Pagkalos, Joseph; Gallie, Price A M; Macgroarty, Kelly; Waddell, James P; Schemitsch, Emil H
2015-01-01
Optimal component alignment in total knee arthroplasty has been associated with better functional outcome as well as improved implant longevity. The ability to align components optimally during minimally invasive (MIS) total knee replacement (TKR) has been a cause of concern. Computer navigation is a useful aid in achieving the desired alignment, although it is limited by the error during the manual registration of landmarks. Our study aims to compare the registration process error between a standard and an MIS surgical approach. We hypothesized that performing the registration process via an MIS approach would increase the registration process error. Five fresh frozen lower limbs were routinely prepared and draped. The registration process was performed through an MIS approach. This was then extended to the standard approach and the registration was performed again. Two surgeons performed the registration process five times with each approach. Performing the registration process through the MIS approach was not associated with higher error compared to the standard approach in the alignment parameters of interest. This rejects our hypothesis. Image-free navigated MIS TKR does not appear to carry a higher risk of component malalignment due to the registration process error. Navigation can be used during MIS TKR to improve alignment without reduced accuracy due to the approach.
The impact of social threat cues on a card sorting task with attentional-shifting demands.
Mohlman, Jan; DeVito, Alyssa
2017-12-01
The current study investigated social anxiety and attentional control using two versions of a task designed to tap intentional shifting of attention and set switching: the standard Wisconsin Card Sorting Test (WCST; Heaton, 1981) and a modified version that included emotionally salient pictorial stimuli, the Emotional Faces Card Sorting Test (EFCST). A Group (lower-, higher-SPS) by Condition (WCST, EFCST) by Sorting Rule (color, form, number) interaction was expected in which the higher-SPS EFCST group would have worse overall performance and make more perseverative errors than the other groups. No differences were predicted on nonperseverative errors, which are typically caused by brief attentional lapses. Participants were 80 undergraduate students who scored in the upper or lower quartile of the distribution on the Social Phobia Scale (SPS; Mattick & Clarke, 1998) and were randomly assigned to complete either the WCST or EFCST. On the WCST, the higher-SPS group showed performance similar to that of the lower-SPS group. On the EFCST, the higher-SPS group evidenced more perseverative errors in the condition that depicted angry faces. Interpretations based on a non-clinical sample limit the generalisability of the conclusions. Reliability of this new measure has yet to be established. Successful completion of the WCST requires more than set-shifting processes. These results suggest that the higher-SPS group in the EFCST condition might have had trouble disengaging attention from threat-related cues, despite ongoing corrective feedback. Copyright © 2017. Published by Elsevier Ltd.
Lee, Jin H.; Howell, David R.; Meehan, William P.; Iverson, Grant L.; Gardner, Andrew J.
2017-01-01
Background: The Sport Concussion Assessment Tool–Third Edition (SCAT3) is currently considered the standard sideline assessment for concussions. In-game exercise, however, may affect SCAT3 performance and the diagnosis of concussions. Purpose: To examine the influence of exercise on SCAT3 performance in professional male athletes. Study Design: Controlled laboratory study. Methods: We examined the SCAT3 performance of 82 professional male athletes under 2 conditions: at rest and after exercise. Results: Athletes reported significantly fewer total symptoms (mean, 1.0 ± 1.5 vs 1.6 ± 2.3 total symptoms, respectively; P = .008; Cohen d = 0.34), committed significantly fewer errors on the modified Balance Error Scoring System (mean, 3.5 ± 3.5 vs 4.6 ± 4.1 errors, respectively; P = .017; d = 0.31), and required significantly less time to complete the tandem gait test (mean, 9.5 ± 1.4 vs 9.9 ± 1.7 seconds, respectively; P = .02; d = 0.30) during the at-rest condition compared with the postexercise condition. Conclusion: The interpretation of in-game (sideline) SCAT3 results should consider the effects of postexercise fatigue levels on an athlete’s performance, particularly if preseason baseline data have been collected when the athlete was well rested. Clinical Relevance: Exercise appears to affect symptom burden and physical abilities, such as balance and tandem gait, more so than the cognitive components of the SCAT3. PMID:28944251
The Effect of Information Level on Human-Agent Interaction for Route Planning
2015-12-01
Fig. 4 (Experiment 1) shows regression results for time spent at the decision point (DP) predicting posttest trust group membership for the high LOI ... decision time (DT) at DP by pretest trust group membership; bars denote standard error (SE). DT at DP was evaluated to see if it predicted posttest trust group. Linear regression indicated that DT at DP was not a significant predictor of posttest trust for the Low or the Medium LOI conditions.
Role-modeling and medical error disclosure: a national survey of trainees.
Martinez, William; Hickson, Gerald B; Miller, Bonnie M; Doukas, David J; Buckley, John D; Song, John; Sehgal, Niraj L; Deitz, Jennifer; Braddock, Clarence H; Lehmann, Lisa Soleymani
2014-03-01
To measure trainees' exposure to negative and positive role-modeling for responding to medical errors and to examine the association between that exposure and trainees' attitudes and behaviors regarding error disclosure. Between May 2011 and June 2012, 435 residents at two large academic medical centers and 1,187 medical students from seven U.S. medical schools received anonymous, electronic questionnaires. The questionnaire asked respondents about (1) experiences with errors, (2) training for responding to errors, (3) behaviors related to error disclosure, (4) exposure to role-modeling for responding to errors, and (5) attitudes regarding disclosure. Using multivariate regression, the authors analyzed whether frequency of exposure to negative and positive role-modeling independently predicted two primary outcomes: (1) attitudes regarding disclosure and (2) nontransparent behavior in response to a harmful error. The response rate was 55% (884/1,622). Training on how to respond to errors had the largest independent, positive effect on attitudes (standardized effect estimate, 0.32, P < .001); negative role-modeling had the largest independent, negative effect (standardized effect estimate, -0.26, P < .001). Positive role-modeling had a positive effect on attitudes (standardized effect estimate, 0.26, P < .001). Exposure to negative role-modeling was independently associated with an increased likelihood of trainees' nontransparent behavior in response to an error (OR 1.37, 95% CI 1.15-1.64; P < .001). Exposure to role-modeling predicts trainees' attitudes and behavior regarding the disclosure of harmful errors. Negative role models may be a significant impediment to disclosure among trainees.
Sandblom, Gabriel; Granroth, Sofie; Rasmussen, Ib Christian
2008-01-01
Although numerous tumour markers are available for periampullary tumours, including pancreatic cancer, their specificity and sensitivity have been questioned. To assess the diagnostic and prognostic values of tissue polypeptide specific antigen (TPS), carbohydrate antigen 19-9 (CA 19-9), vascular endothelial growth factor (VEGF-A), and carcinoembryonic antigen (CEA), we took serum samples from 56 patients with mass lesions in the pancreatic head. Among these patients, further investigations revealed pancreatic cancer in 20 patients, other malignant diseases in 12 and benign conditions in 24. Median CEA in all patients was 3.4 microg/L (range 0.5-585.0), median CA 19-9 was 105 kU/L (range 0.6-1 300 00), median TPS was 123.5 U/L (range 15.0-3350) and median VEGF-A was 132.5 ng/L (range 60.0-4317). Area under the curve was 0.747 (standard error [SE] = 0.075) for CEA, 0.716 (SE = 0.078) for CA 19-9 and 0.822 (SE = 0.086) for TPS in ROC plots based on the ability of the markers to distinguish between benign and malignant conditions. None of the markers significantly predicted survival in the subgroup of patients with pancreatic cancer. Our study shows that the markers may be used as fairly reliable diagnostic tools, but cannot be used to predict survival.
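As an illustration of how AUC-and-SE pairs like those reported above can be computed for a single marker, the sketch below uses scikit-learn for the ROC area and the Hanley-McNeil formula for its standard error; the group sizes and marker values are invented.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def auc_with_hanley_mcneil_se(labels, marker_values):
    """ROC area under the curve for a tumour marker plus the Hanley-McNeil (1982)
    standard error; labels are 1 for malignant and 0 for benign."""
    labels = np.asarray(labels)
    auc = roc_auc_score(labels, marker_values)
    n1, n2 = (labels == 1).sum(), (labels == 0).sum()   # diseased, non-diseased
    q1 = auc / (2 - auc)
    q2 = 2 * auc**2 / (1 + auc)
    var = (auc * (1 - auc) + (n1 - 1) * (q1 - auc**2) + (n2 - 1) * (q2 - auc**2)) / (n1 * n2)
    return auc, np.sqrt(var)

# toy usage: 32 malignant vs 24 benign lesions with a log-normal marker
rng = np.random.default_rng(2)
y = np.r_[np.ones(32, dtype=int), np.zeros(24, dtype=int)]
marker = np.r_[rng.lognormal(5.0, 1.0, 32), rng.lognormal(4.2, 1.0, 24)]
print(auc_with_hanley_mcneil_se(y, marker))
```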
Velikina, Julia V; Samsonov, Alexey A
2015-11-01
To accelerate dynamic MR imaging through development of a novel image reconstruction technique using low-rank temporal signal models preestimated from training data. We introduce the model consistency condition (MOCCO) technique, which utilizes temporal models to regularize reconstruction without constraining the solution to be low-rank, as is performed in related techniques. This is achieved by using a data-driven model to design a transform for compressed sensing-type regularization. The enforcement of general compliance with the model without excessively penalizing deviating signal allows recovery of a full-rank solution. Our method was compared with a standard low-rank approach utilizing model-based dimensionality reduction in phantoms and patient examinations for time-resolved contrast-enhanced angiography (CE-MRA) and cardiac CINE imaging. We studied the sensitivity of all methods to rank reduction and temporal subspace modeling errors. MOCCO demonstrated reduced sensitivity to modeling errors compared with the standard approach. Full-rank MOCCO solutions showed significantly improved preservation of temporal fidelity and aliasing/noise suppression in highly accelerated CE-MRA (acceleration up to 27) and cardiac CINE (acceleration up to 15) data. MOCCO overcomes several important deficiencies of previously proposed methods based on pre-estimated temporal models and allows high quality image restoration from highly undersampled CE-MRA and cardiac CINE data. © 2014 Wiley Periodicals, Inc.
Velikina, Julia V.; Samsonov, Alexey A.
2014-01-01
Purpose To accelerate dynamic MR imaging through development of a novel image reconstruction technique using low-rank temporal signal models pre-estimated from training data. Theory We introduce the MOdel Consistency COndition (MOCCO) technique that utilizes temporal models to regularize the reconstruction without constraining the solution to be low-rank as performed in related techniques. This is achieved by using a data-driven model to design a transform for compressed sensing-type regularization. The enforcement of general compliance with the model without excessively penalizing deviating signal allows recovery of a full-rank solution. Methods Our method was compared to standard low-rank approach utilizing model-based dimensionality reduction in phantoms and patient examinations for time-resolved contrast-enhanced angiography (CE MRA) and cardiac CINE imaging. We studied sensitivity of all methods to rank-reduction and temporal subspace modeling errors. Results MOCCO demonstrated reduced sensitivity to modeling errors compared to the standard approach. Full-rank MOCCO solutions showed significantly improved preservation of temporal fidelity and aliasing/noise suppression in highly accelerated CE MRA (acceleration up to 27) and cardiac CINE (acceleration up to 15) data. Conclusions MOCCO overcomes several important deficiencies of previously proposed methods based on pre-estimated temporal models and allows high quality image restoration from highly undersampled CE-MRA and cardiac CINE data. PMID:25399724
Technology utilization to prevent medication errors.
Forni, Allison; Chu, Hanh T; Fanikos, John
2010-01-01
Medication errors have been increasingly recognized as a major cause of iatrogenic illness, and system-wide improvements have been the focus of prevention efforts. Critically ill patients are particularly vulnerable to injury resulting from medication errors because of the severity of illness, the need for high-risk medications with a narrow therapeutic index, and frequent use of intravenous infusions. Health information technology has been identified as a method to reduce medication errors as well as improve the efficiency and quality of care; however, few studies regarding the impact of health information technology have focused on patients in the intensive care unit. Computerized physician order entry and clinical decision support systems can play a crucial role in decreasing errors in the ordering stage of the medication use process through improving the completeness and legibility of orders, alerting physicians to medication allergies and drug interactions and providing a means for standardization of practice. Electronic surveillance, reminders and alerts identify patients susceptible to an adverse event, communicate critical changes in a patient's condition, and facilitate timely and appropriate treatment. Bar code technology, intravenous infusion safety systems, and electronic medication administration records can target prevention of errors in medication dispensing and administration where other technologies would not be able to intercept a preventable adverse event. Systems integration and compliance are vital components in the implementation of health information technology and achievement of a safe medication use process.
Correct consideration of the index of refraction using blackbody radiation.
Hartmann, Jurgen
2006-09-04
The correct consideration of the index of refraction when using blackbody radiators as standard sources for optical radiation is derived and discussed. It is shown that simply using the index of refraction of air at laboratory conditions is not sufficient. A combination of the index of refraction of the media used inside the blackbody radiator and for the optical path between blackbody and detector has to be used instead. A worst case approximation for the introduced error when neglecting these effects is presented, showing that the error is below 0.1% for wavelengths above 200 nm. Nevertheless, for the determination of the spectral radiance for the purpose of radiation temperature measurements, the correct consideration of the refractive index is mandatory. The worst case estimation reveals that the introduced error in temperature at a blackbody temperature of 3000 degrees C can be as high as 400 mK at a wavelength of 650 nm and even higher at longer wavelengths.
Robust Methods for Moderation Analysis with a Two-Level Regression Model.
Yang, Miao; Yuan, Ke-Hai
2016-01-01
Moderation analysis has many applications in social sciences. Most widely used estimation methods for moderation analysis assume that errors are normally distributed and homoscedastic. When these assumptions are not met, the results from a classical moderation analysis can be misleading. For more reliable moderation analysis, this article proposes two robust methods with a two-level regression model when the predictors do not contain measurement error. One method is based on maximum likelihood with Student's t distribution and the other is based on M-estimators with Huber-type weights. An algorithm for obtaining the robust estimators is developed. Consistent estimates of standard errors of the robust estimators are provided. The robust approaches are compared against normal-distribution-based maximum likelihood (NML) with respect to power and accuracy of parameter estimates through a simulation study. Results show that the robust approaches outperform NML under various distributional conditions. Application of the robust methods is illustrated through a real data example. An R program is developed and documented to facilitate the application of the robust methods.
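The article's estimators are built for a two-level moderation model; as a single-level illustration of the Huber-type M-estimation ingredient, the sketch below fits a moderated regression with statsmodels' robust estimator and compares it with ordinary least squares under heavy-tailed errors. Variable names and the data-generating process are illustrative.

```python
import numpy as np
import statsmodels.api as sm

# simulate a moderation model y = b0 + b1*X + b2*W + b3*X*W + e with heavy-tailed errors
rng = np.random.default_rng(3)
n = 200
X, W = rng.normal(size=n), rng.normal(size=n)
e = rng.standard_t(df=3, size=n)          # non-normal, heavy-tailed errors
y = 0.5 + 0.4 * X + 0.3 * W + 0.25 * X * W + e

design = sm.add_constant(np.column_stack([X, W, X * W]))

ols_fit = sm.OLS(y, design).fit()
huber_fit = sm.RLM(y, design, M=sm.robust.norms.HuberT()).fit()

# the interaction coefficient (index 3) is the moderation effect of interest
print("OLS   b3 = %.3f (SE %.3f)" % (ols_fit.params[3], ols_fit.bse[3]))
print("Huber b3 = %.3f (SE %.3f)" % (huber_fit.params[3], huber_fit.bse[3]))
```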
Combining forecast weights: Why and how?
NASA Astrophysics Data System (ADS)
Yin, Yip Chee; Kok-Haur, Ng; Hock-Eam, Lim
2012-09-01
This paper proposes a procedure called forecast weight averaging, a specific combination of the forecast weights obtained from different methods of constructing forecast weights, for the purpose of improving the accuracy of pseudo out-of-sample forecasting. It is found that under certain specified conditions, forecast weight averaging can lower the mean squared forecast error obtained from model averaging. In addition, we show that in a linear and homoskedastic environment, this superior predictive ability of forecast weight averaging holds true irrespective of whether the coefficients are tested by the t statistic or the z statistic, provided the significance level is within the 10% range. By theoretical proofs and a simulation study, we have shown that model averaging methods such as variance model averaging, simple model averaging and standard error model averaging each produce a mean squared forecast error larger than that of forecast weight averaging. Finally, this result also holds true marginally when applied to business and economic empirical data sets: the Gross Domestic Product (GDP) growth rate, Consumer Price Index (CPI) and Average Lending Rate (ALR) of Malaysia.
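One reading of the procedure described above, sketched with invented data: weights for two candidate forecasts are formed by two different weighting methods, and forecast weight averaging simply averages those weight vectors before combining. The weighting methods and evaluation split below are assumptions for illustration, not the paper's exact estimators.

```python
import numpy as np

rng = np.random.default_rng(4)
T = 400
x1, x2 = rng.normal(size=T), rng.normal(size=T)
y = 0.6 * x1 + 0.3 * x2 + rng.normal(scale=1.0, size=T)

# two (deliberately misspecified) candidate forecasts of y
f1, f2 = 0.7 * x1, 0.5 * x2
train, test = slice(0, 300), slice(300, T)

def msfe(forecast, target):
    """Mean squared forecast error."""
    return np.mean((target - forecast) ** 2)

# weighting method A: simple model averaging (equal weights)
w_equal = np.array([0.5, 0.5])
# weighting method B: weights inversely proportional to in-sample MSE
mse = np.array([msfe(f1[train], y[train]), msfe(f2[train], y[train])])
w_inv_mse = (1 / mse) / (1 / mse).sum()
# forecast weight averaging: average the weight vectors from the two methods
w_fwa = 0.5 * (w_equal + w_inv_mse)

for name, w in [("equal weights", w_equal), ("inverse-MSE weights", w_inv_mse),
                ("forecast weight averaging", w_fwa)]:
    combined = w[0] * f1[test] + w[1] * f2[test]
    print(f"{name:27s} out-of-sample MSFE = {msfe(combined, y[test]):.4f}")
```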
Computation of infrared cooling rates in the water vapor bands
NASA Technical Reports Server (NTRS)
Chou, M. D.; Arking, A.
1978-01-01
A fast but accurate method for calculating the infrared radiative terms due to water vapor has been developed. It makes use of the far wing approximation to scale transmission along an inhomogeneous path to an equivalent homogeneous path. Rather than using standard conditions for scaling, the reference temperatures and pressures are chosen in this study to correspond to the regions where cooling is most significant. This greatly increased the accuracy of the new method. Compared to line by line calculations, the new method has errors up to 4% of the maximum cooling rate, while a commonly used method based upon the Goody band model (Rodgers and Walshaw, 1966) introduces errors up to 11%. The effect of temperature dependence of transmittance has also been evaluated; the cooling rate errors range up to 11% when the temperature dependence is ignored. In addition to being more accurate, the new method is much faster than those based upon the Goody band model.
Bailey, Stephanie L.; Bono, Rose S.; Nash, Denis; Kimmel, April D.
2018-01-01
Background Spreadsheet software is increasingly used to implement systems science models informing health policy decisions, both in academia and in practice where technical capacity may be limited. However, spreadsheet models are prone to unintentional errors that may not always be identified using standard error-checking techniques. Our objective was to illustrate, through a methodologic case study analysis, the impact of unintentional errors on model projections by implementing parallel model versions. Methods We leveraged a real-world need to revise an existing spreadsheet model designed to inform HIV policy. We developed three parallel versions of a previously validated spreadsheet-based model; versions differed by the spreadsheet cell-referencing approach (named single cells; column/row references; named matrices). For each version, we implemented three model revisions (re-entry into care; guideline-concordant treatment initiation; immediate treatment initiation). After standard error-checking, we identified unintentional errors by comparing model output across the three versions. Concordant model output across all versions was considered error-free. We calculated the impact of unintentional errors as the percentage difference in model projections between model versions with and without unintentional errors, using +/-5% difference to define a material error. Results We identified 58 original and 4,331 propagated unintentional errors across all model versions and revisions. Over 40% (24/58) of original unintentional errors occurred in the column/row reference model version; most (23/24) were due to incorrect cell references. Overall, >20% of model spreadsheet cells had material unintentional errors. When examining error impact along the HIV care continuum, the percentage difference between versions with and without unintentional errors ranged from +3% to +16% (named single cells), +26% to +76% (column/row reference), and 0% (named matrices). Conclusions Standard error-checking techniques may not identify all errors in spreadsheet-based models. Comparing parallel model versions can aid in identifying unintentional errors and promoting reliable model projections, particularly when resources are limited. PMID:29570737
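A minimal sketch of the comparison logic described: outputs from parallel model versions are compared entry by entry against a reference version, and discordance beyond the ±5% threshold is flagged as a material error. The outcome names and numbers are invented, not the study's HIV care-continuum quantities.

```python
from pprint import pprint

MATERIAL_THRESHOLD = 0.05   # +/-5% difference defines a material error, as in the study

def compare_versions(outputs_by_version, reference="named_matrices"):
    """Compare projections from parallel spreadsheet-model versions.
    Concordant output across versions is treated as error-free; a discordant
    version is summarized by its percentage difference from the reference."""
    ref = outputs_by_version[reference]
    report = {}
    for version, values in outputs_by_version.items():
        pct_diff = {k: (values[k] - ref[k]) / ref[k] for k in ref}
        report[version] = {k: {"pct_diff": round(100 * d, 1),
                               "material": abs(d) > MATERIAL_THRESHOLD}
                           for k, d in pct_diff.items()}
    return report

# toy usage with three parallel versions of one projected outcome set
outputs = {
    "named_single_cells": {"in_care": 5200, "on_treatment": 4100},
    "column_row_refs":    {"in_care": 6300, "on_treatment": 4900},
    "named_matrices":     {"in_care": 5000, "on_treatment": 4000},  # reference version
}
pprint(compare_versions(outputs))
```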
Bailey, Stephanie L; Bono, Rose S; Nash, Denis; Kimmel, April D
2018-01-01
Spreadsheet software is increasingly used to implement systems science models informing health policy decisions, both in academia and in practice where technical capacity may be limited. However, spreadsheet models are prone to unintentional errors that may not always be identified using standard error-checking techniques. Our objective was to illustrate, through a methodologic case study analysis, the impact of unintentional errors on model projections by implementing parallel model versions. We leveraged a real-world need to revise an existing spreadsheet model designed to inform HIV policy. We developed three parallel versions of a previously validated spreadsheet-based model; versions differed by the spreadsheet cell-referencing approach (named single cells; column/row references; named matrices). For each version, we implemented three model revisions (re-entry into care; guideline-concordant treatment initiation; immediate treatment initiation). After standard error-checking, we identified unintentional errors by comparing model output across the three versions. Concordant model output across all versions was considered error-free. We calculated the impact of unintentional errors as the percentage difference in model projections between model versions with and without unintentional errors, using +/-5% difference to define a material error. We identified 58 original and 4,331 propagated unintentional errors across all model versions and revisions. Over 40% (24/58) of original unintentional errors occurred in the column/row reference model version; most (23/24) were due to incorrect cell references. Overall, >20% of model spreadsheet cells had material unintentional errors. When examining error impact along the HIV care continuum, the percentage difference between versions with and without unintentional errors ranged from +3% to +16% (named single cells), +26% to +76% (column/row reference), and 0% (named matrices). Standard error-checking techniques may not identify all errors in spreadsheet-based models. Comparing parallel model versions can aid in identifying unintentional errors and promoting reliable model projections, particularly when resources are limited.
Chow, Gary C C; Yam, Timothy T T; Chung, Joanne W Y; Fong, Shirley S M
2017-02-01
This single-blinded, three-armed randomized controlled trial aimed to compare the effects of postexercise ice-water immersion (IWI), room-temperature water immersion (RWI), and no water immersion on the balance performance and knee joint proprioception of amateur rugby players. Fifty-three eligible amateur rugby players (mean age ± standard deviation: 21.6 ± 2.9 years) were randomly assigned to the IWI group (5.3 °C), RWI group (25.0 °C), or the no immersion control group. The participants in each group underwent the same fatigue protocol followed by their allocated recovery intervention, which lasted for 1 minute. Measurements were taken before and after the fatigue-recovery intervention. The primary outcomes were the sensory organization test (SOT) composite equilibrium score (ES) and the condition-specific ES, which were measured using a computerized dynamic posturography machine. The secondary outcome was the knee joint repositioning error. Two-way repeated measures analysis of variance was used to test the effect of water immersion on each outcome variable. There were no significant within- and between-group differences in the SOT composite ESs or the condition-specific ESs. However, there was a group-by-time interaction effect on the knee joint repositioning error. It seems that participants in the RWI group had lower errors over time, but those in the IWI and control groups had increased errors over time. The RWI group had a significantly lower error score than the IWI group at postintervention. One minute of postexercise IWI or RWI did not impair rugby players' sensory organization of balance control. RWI had a less detrimental effect on knee joint proprioception than IWI at postintervention.
Chow, Gary C.C.; Yam, Timothy T.T.; Chung, Joanne W.Y.; Fong, Shirley S.M.
2017-01-01
Abstract Background: This single-blinded, three-armed randomized controlled trial aimed to compare the effects of postexercise ice-water immersion (IWI), room-temperature water immersion (RWI), and no water immersion on the balance performance and knee joint proprioception of amateur rugby players. Methods: Fifty-three eligible amateur rugby players (mean age ± standard deviation: 21.6 ± 2.9 years) were randomly assigned to the IWI group (5.3 °C), RWI group (25.0 °C), or the no immersion control group. The participants in each group underwent the same fatigue protocol followed by their allocated recovery intervention, which lasted for 1 minute. Measurements were taken before and after the fatigue-recovery intervention. The primary outcomes were the sensory organization test (SOT) composite equilibrium score (ES) and the condition-specific ES, which were measured using a computerized dynamic posturography machine. The secondary outcome was the knee joint repositioning error. Two-way repeated measures analysis of variance was used to test the effect of water immersion on each outcome variable. Results: There were no significant within- and between-group differences in the SOT composite ESs or the condition-specific ESs. However, there was a group-by-time interaction effect on the knee joint repositioning error. It seems that participants in the RWI group had lower errors over time, but those in the IWI and control groups had increased errors over time. The RWI group had a significantly lower error score than the IWI group at postintervention. Conclusion: One minute of postexercise IWI or RWI did not impair rugby players' sensory organization of balance control. RWI had a less detrimental effect on knee joint proprioception than IWI at postintervention. PMID:28207546
[Improvement of team competence in the operating room : Training programs from aviation].
Schmidt, C E; Hardt, F; Möller, J; Malchow, B; Schmidt, K; Bauer, M
2010-08-01
Growing attention has been drawn to patient safety during recent months due to media reports of clinical errors. To date only clinical incident reporting systems have been implemented in acute care hospitals as instruments of risk management. However, these systems only have a limited impact on human factors which account for the majority of all errors in medicine. Crew resource management (CRM) starts here. For the commissioning of a new hospital in Minden, training programs were installed in order to maintain patient safety in a new complex environment. The training was planned in three parts: All relevant processes were defined as standard operating procedures (SOP), visualized and then simulated in the new building. In addition, staff members (trainers) in leading positions were trained in CRM in order to train the complete staff. The training programs were analyzed by questionnaires. Selection of topics, relevance for practice and mode of presentation were rated as very good by 73% of the participants. The staff members ranked the topics communication in crisis situations, individual errors and compensating measures as most important followed by case studies and teamwork. Employees improved in compliance to the SOP, team competence and communication. In high technology environments with escalating workloads and interdisciplinary organization, staff members are confronted with increasing demands in knowledge and skills. To reduce errors under such working conditions relevant processes should be standardized and trained for the emergency situation. Human performance can be supported by well-trained interpersonal skills which are evolved in CRM training. In combination these training programs make a significant contribution to maintaining patient safety.
A fuzzy logic-based model for noise control at industrial workplaces.
Aluclu, I; Dalgic, A; Toprak, Z F
2008-05-01
Ergonomics is a broad science encompassing the wide variety of working conditions that can affect worker comfort and health, including factors such as lighting, noise, temperature, vibration, workstation design, tool design, machine design, etc. This paper describes the noise-human response and a fuzzy logic model developed from comprehensive field studies on noise measurements (including atmospheric parameters) and control measures. The model has two subsystems constructed on noise reduction quantity in dB. The first subsystem of the fuzzy model, which depends on 549 linguistic rules, comprises acoustical features of all materials used in any workplace. In total, 984 patterns were used: 503 patterns for model development and the remaining 481 patterns for testing the model. The second subsystem deals with atmospheric parameter interactions with noise and has 52 linguistic rules. Similarly, 94 field patterns were obtained; 68 patterns were used for the training stage of the model and the remaining 26 patterns for testing the model. These rules were determined by taking into consideration formal standards, the experience of specialists and the measurement patterns. The results of the model were compared with various statistics (correlation coefficients, max-min, standard deviation, average and coefficient of skewness) and error measures (root mean square error and relative error). The correlation coefficients were significantly high, the error measures were quite low and the other statistics were very close to the data, indicating the validity of the model. Therefore, the model can be used for noise control in any workplace and is helpful to the designer in the planning stage of a workplace.
NASA Astrophysics Data System (ADS)
Graus, Matthew S.; Neumann, Aaron K.; Timlin, Jerilyn A.
2017-01-01
Fungi in the Candida genus are the most common fungal pathogens. They not only cause high morbidity and mortality but can also cost billions of dollars in healthcare expenditures. To alleviate this burden, early and accurate identification of Candida species is necessary. However, standard identification procedures can take days and have a large false-negative error. The method described in this study takes advantage of hyperspectral confocal fluorescence microscopy, which makes it possible to quickly and accurately identify and characterize the unique autofluorescence spectra of different Candida species with up to 84% accuracy when grown in conditions that closely mimic physiological conditions.
NASA Technical Reports Server (NTRS)
Jentink, Thomas Neil; Usab, William J., Jr.
1990-01-01
An explicit multigrid algorithm was written to solve the Euler and Navier-Stokes equations, with special consideration given to the coarse-mesh boundary conditions. These are formulated in a manner consistent with the interior solution, utilizing forcing terms to prevent coarse-mesh truncation error from affecting the fine-mesh solution. A four-stage hybrid Runge-Kutta scheme is used to advance the solution in time, and multigrid convergence is further enhanced by using local time-stepping and implicit residual smoothing. Details of the algorithm are presented along with a description of Jameson's standard multigrid method and a new approach to formulating the multigrid equations.
Improved pressure measurement system for calibration of the NASA LeRC 10x10 supersonic wind tunnel
NASA Technical Reports Server (NTRS)
Blumenthal, Philip Z.; Helland, Stephen M.
1994-01-01
This paper discusses a method used to provide a significant improvement in the accuracy of the Electronically Scanned Pressure (ESP) Measurement System by means of a fully automatic floating pressure generating system for the ESP calibration and reference pressures. This system was used to obtain test section Mach number and flow angularity measurements over the full envelope of test conditions for the 10 x 10 Supersonic Wind Tunnel. The uncertainty analysis and actual test data demonstrated that, for most test conditions, this method could reduce errors to about one-third to one-half that obtained with the standard system.
Zhang, Shuai; Li, PeiPei; Yan, Zhongyong; Long, Ju; Zhang, Xiaojun
2017-03-01
An ultraperformance liquid chromatography-quadrupole time-of-flight high-resolution mass spectrometry method was developed and validated for the determination of nitrofurazone metabolites. Precolumn derivatization with 2,4-dinitrophenylhydrazine and p-dimethylaminobenzaldehyde as an internal standard was used successfully to determine the biomarker 5-nitro-2-furaldehyde. In negative electrospray ionization mode, the precise molecular weights of the derivatives were 320.0372 for the biomarker and 328.1060 for the internal standard (relative error 1.08 ppm). The matrix effect was evaluated and the analytical characteristics of the method and derivatization reaction conditions were validated. For comparison purposes, spiked samples were tested by both internal and external standard methods. The results show high precision can be obtained with p-dimethylaminobenzaldehyde as an internal standard for the identification and quantification of nitrofurazone metabolites in complex biological samples. Graphical Abstract A simplified preparation strategy for biological samples.
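The 1.08 ppm figure above is a relative mass error; the calculation is shown below with an assumed exact m/z for the derivative, so the theoretical value is illustrative only.

```python
def ppm_error(measured_mz, theoretical_mz):
    """Relative mass error in parts per million, as reported by high-resolution MS."""
    return (measured_mz - theoretical_mz) / theoretical_mz * 1e6

# illustrative values: derivative observed at 320.0372 vs an assumed exact mass
theoretical = 320.03685   # assumed exact m/z, for illustration only
measured = 320.0372
print(f"mass error: {ppm_error(measured, theoretical):.2f} ppm")
```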
Pressure control in interfacial systems: Atomistic simulations of vapor nucleation
NASA Astrophysics Data System (ADS)
Marchio, S.; Meloni, S.; Giacomello, A.; Valeriani, C.; Casciola, C. M.
2018-02-01
A large number of phenomena of scientific and technological interest involve multiple phases and occur at constant pressure of one of the two phases, e.g., the liquid phase in vapor nucleation. It is therefore of great interest to be able to reproduce such conditions in atomistic simulations. Here we study how popular barostats, originally devised for homogeneous systems, behave when applied straightforwardly to heterogeneous systems. We focus on vapor nucleation from a super-heated Lennard-Jones liquid, studied via hybrid restrained Monte Carlo simulations. The results show a departure from the trends predicted for the case of constant liquid pressure, i.e., from the conditions of classical nucleation theory. Artifacts deriving from standard (global) barostats are shown to depend on the size of the simulation box. In particular, for Lennard-Jones liquid systems of 7000 and 13 500 atoms, at conditions typically found in the literature, we have estimated an error of 10-15 kBT on the free-energy barrier, corresponding to an error of 10^4-10^6 s^-1 σ^-3 on the nucleation rate. A mechanical (local) barostat is proposed which heals the artifacts for the considered case of vapor nucleation.
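The connection between the quoted barrier error and the rate error follows from the exponential dependence of the nucleation rate on the barrier; the short sketch below runs that arithmetic, assuming only the classical-nucleation-theory scaling.

```python
import numpy as np

# Classical-nucleation-theory scaling: rate J ~ J0 * exp(-dG_barrier / kBT), so an
# error dE in the barrier (in units of kBT) multiplies the predicted rate by exp(dE).
for barrier_error_kBT in (10.0, 15.0):
    rate_factor = np.exp(barrier_error_kBT)
    print(f"barrier error of {barrier_error_kBT:>4.1f} kBT -> rate off by ~{rate_factor:.1e}x")
# Factors of e^10 to e^15 (~2e4 to ~3e6) are broadly consistent with the
# 10^4-10^6 error scale on the nucleation rate quoted above.
```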
Rosenblum, Uri; Melzer, Itshak
2017-01-01
About 90% of people with multiple sclerosis (PwMS) have gait instability and 50% fall. Reliable and clinically feasible methods of gait instability assessment are needed. The study investigated the reliability and validity of the Narrow Path Walking Test (NPWT) under single-task (ST) and dual-task (DT) conditions for PwMS. Thirty PwMS performed the NPWT on 2 different occasions, a week apart. Number of Steps, Trial Time, Trial Velocity, Step Length, Number of Step Errors, Number of Cognitive Task Errors, and Number of Balance Losses were measured. Intraclass correlation coefficients (ICC(2,1)) were calculated from the average values of NPWT parameters. Absolute reliability was quantified from standard error of measurement (SEM) and smallest real difference (SRD). Concurrent validity of NPWT with Functional Reach Test, Four Square Step Test (FSST), 12-item Multiple Sclerosis Walking Scale (MSWS-12), and 2 Minute Walking Test (2MWT) was determined using partial correlations. Intraclass correlation coefficients (ICCs) for most NPWT parameters during ST and DT ranged from 0.46-0.94 and 0.55-0.95, respectively. The highest relative reliability was found for Number of Step Errors (ICC = 0.94 and 0.93, for ST and DT, respectively) and Trial Velocity (ICC = 0.83 and 0.86, for ST and DT, respectively). Absolute reliability was high for Number of Step Errors in ST (SEM% = 19.53%) and DT (SEM% = 18.14%) and low for Trial Velocity in ST (SEM% = 6.88%) and DT (SEM% = 7.29%). Significant correlations for Number of Step Errors and Trial Velocity were found with FSST, MSWS-12, and 2MWT. In PwMS performing the NPWT, Number of Step Errors and Trial Velocity were highly reliable parameters. Based on correlations with other measures of gait instability, Number of Step Errors was the most valid parameter of dynamic balance under the conditions of our test. Video Abstract available for more insights from the authors (see Supplemental Digital Content 1, available at: http://links.lww.com/JNPT/A159).
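For readers unfamiliar with the absolute-reliability statistics used here, the sketch below computes SEM, SEM%, and SRD from test-retest scores and an ICC, using the common formulas SEM = SD·√(1 − ICC) and SRD = 1.96·√2·SEM. It is illustrative only, not the study's analysis code.

```python
import numpy as np

def absolute_reliability(test, retest, icc):
    """Compute SEM, SEM%, and SRD from test-retest scores and a given ICC.

    test, retest : arrays of scores from the two sessions
    icc          : intraclass correlation coefficient (e.g., ICC(2,1))
    """
    scores = np.concatenate([np.asarray(test, float), np.asarray(retest, float)])
    sd = scores.std(ddof=1)                 # pooled sample standard deviation
    sem = sd * np.sqrt(1.0 - icc)           # standard error of measurement
    sem_pct = 100.0 * sem / scores.mean()   # SEM expressed as % of the mean
    srd = 1.96 * np.sqrt(2.0) * sem         # smallest real difference (95% level)
    return sem, sem_pct, srd
```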
An optimized network for phosphorus load monitoring for Lake Okeechobee, Florida
Gain, W.S.
1997-01-01
Phosphorus load data were evaluated for Lake Okeechobee, Florida, for water years 1982 through 1991. Standard errors for load estimates were computed from available phosphorus concentration and daily discharge data. Components of error were associated with uncertainty in concentration and discharge data and were calculated for existing conditions and for 6 alternative load-monitoring scenarios for each of 48 distinct inflows. Benefit-cost ratios were computed for each alternative monitoring scenario at each site by dividing estimated reductions in load uncertainty by the 5-year average costs of each scenario in 1992 dollars. Absolute and marginal benefit-cost ratios were compared in an iterative optimization scheme to determine the most cost-effective combination of discharge and concentration monitoring scenarios for the lake. If the current (1992) discharge-monitoring network around the lake is maintained, the water-quality sampling at each inflow site twice each year is continued, and the nature of loading remains the same, the standard error of computed mean-annual load is estimated at about 98 metric tons per year compared to an absolute loading rate (inflows and outflows) of 530 metric tons per year. This produces a relative uncertainty of nearly 20 percent. The standard error in load can be reduced to about 20 metric tons per year (4 percent) by adopting an optimized set of monitoring alternatives at a cost of an additional $200,000 per year. The final optimized network prescribes changes to improve both concentration and discharge monitoring. These changes include the addition of intensive sampling with automatic samplers at 11 sites, the initiation of event-based sampling by observers at another 5 sites, the continuation of periodic sampling 12 times per year at 1 site, the installation of acoustic velocity meters to improve discharge gaging at 9 sites, and the improvement of a discharge rating at 1 site.
NASA Astrophysics Data System (ADS)
Jutebring Sterte, Elin; Johansson, Emma; Sjöberg, Ylva; Huseby Karlsen, Reinert; Laudon, Hjalmar
2018-05-01
Groundwater and surface-water interactions are regulated by catchment characteristics and complex inter- and intra-annual variations in climatic conditions that are not yet fully understood. Our objective was to investigate the influence of catchment characteristics and freeze-thaw processes on surface and groundwater interactions in a boreal landscape, the Krycklan catchment in Sweden. We used a numerical modelling approach and sub-catchment evaluation method to identify and evaluate fundamental catchment characteristics and processes. The model reproduced observed stream discharge patterns of the 14 sub-catchments and the dynamics of the 15 groundwater wells with an average accumulated discharge error of 1% (15% standard deviation) and an average groundwater-level mean error of 0.1 m (0.23 m standard deviation). We show how peatland characteristics dampen the effect of intense rain, and how soil freeze-thaw processes regulate surface and groundwater partitioning during snowmelt. With these results, we demonstrate the importance of defining, understanding and quantifying the role of landscape heterogeneity and sub-catchment characteristics for accurately representing catchment hydrological functioning.
ERIC Educational Resources Information Center
Jeptarus, Kipsamo E.; Ngene, Patrick K.
2016-01-01
The purpose of this research was to study the Lexico-semantic errors of the Keiyo-speaking standard seven primary school learners of English as a Second Language (ESL) in Keiyo District, Kenya. This study was guided by two related theories: Error Analysis Theory/Approach by Corder (1971) which approaches L2 learning through a detailed analysis of…
Human Error In Complex Systems
NASA Technical Reports Server (NTRS)
Morris, Nancy M.; Rouse, William B.
1991-01-01
Report presents results of research aimed at understanding causes of human error in such complex systems as aircraft, nuclear powerplants, and chemical processing plants. Research considered both slips (errors of action) and mistakes (errors of intention), and the influence of workload on them. Results indicated that humans respond to conditions in which errors are expected by attempting to reduce the incidence of errors, and that adaptation to conditions is a potent influence on human behavior in discretionary situations.
49 CFR Appendix F to Part 240 - Medical Standards Guidelines
Code of Federal Regulations, 2010 CFR
2010-10-01
... greater guidance on the procedures that should be employed in administering the vision and hearing... more errors on plates 1-15. MULTIFUNCTION VISION TESTER Keystone Orthoscope Any error. OPTEC 2000 Any error. Titmus Vision Tester Any error. Titmus II Vision Tester Any error. (3) In administering any of...
49 CFR Appendix F to Part 240 - Medical Standards Guidelines
Code of Federal Regulations, 2011 CFR
2011-10-01
... greater guidance on the procedures that should be employed in administering the vision and hearing... more errors on plates 1-15. MULTIFUNCTION VISION TESTER Keystone Orthoscope Any error. OPTEC 2000 Any error. Titmus Vision Tester Any error. Titmus II Vision Tester Any error. (3) In administering any of...
Comparison of Optimal Design Methods in Inverse Problems
2011-05-11
The corresponding FIM can be estimated by F̂(τ) = F̂(τ, θ̂_OLS) = (Σ̂^N(θ̂_OLS))⁻¹ (Eq. 13). The asymptotic standard errors are given by SE_k(θ_0) = √((Σ^N_0)_kk), k = 1, …, p (Eq. 14). These standard errors are estimated in practice (when θ_0 and σ_0 are not known) by SE_k(θ̂_OLS) = √((Σ̂^N(θ̂_OLS))_kk), k = 1, …; SE_k(θ̂_boot) = √(Cov(θ̂_boot)_kk). We will compare the optimal design methods using the standard errors resulting from the optimal time points each…
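A hedged sketch of the usual OLS route to these asymptotic standard errors: take the covariance estimate as σ̂²(χᵀχ)⁻¹ (the inverse of the estimated FIM) and read SE_k off its diagonal. The Jacobian (sensitivity matrix) and residuals are assumed inputs, not quantities defined in this excerpt.

```python
import numpy as np

def asymptotic_standard_errors(jacobian, residuals):
    """Asymptotic standard errors for OLS parameter estimates.

    jacobian  : (N x p) sensitivity matrix of model outputs w.r.t. parameters,
                evaluated at the OLS estimate
    residuals : (N,) vector of data-minus-model residuals at the estimate
    """
    n, p = jacobian.shape
    sigma2 = residuals @ residuals / (n - p)               # error-variance estimate
    cov = sigma2 * np.linalg.inv(jacobian.T @ jacobian)    # estimated covariance (inverse FIM)
    return np.sqrt(np.diag(cov))                           # SE_k = sqrt of k-th diagonal entry
```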
NASA Astrophysics Data System (ADS)
Cooper, W. A.; Spuler, S. M.; Spowart, M.; Lenschow, D. H.; Friesen, R. B.
2014-03-01
A new laser air-motion sensor measures the true airspeed with an uncertainty of less than 0.1 m s⁻¹ (standard error) and so reduces uncertainty in the measured component of the relative wind along the longitudinal axis of the aircraft to about the same level. The calculated pressure expected from that airspeed at the inlet of a pitot tube then provides a basis for calibrating the measurements of dynamic and static pressure, reducing standard-error uncertainty in those measurements to less than 0.3 hPa and the precision applicable to steady flight conditions to about 0.1 hPa. These improved measurements of pressure, combined with high-resolution measurements of geometric altitude from the Global Positioning System, then indicate (via integrations of the hydrostatic equation during climbs and descents) that the offset and uncertainty in temperature measurement for one research aircraft are +0.3 ± 0.3 °C. For airspeed, pressure and temperature these are significant reductions in uncertainty vs. those obtained from calibrations using standard techniques. Finally, it is shown that the new laser air-motion sensor, combined with parametrized fits to correction factors for the measured dynamic and ambient pressure, provides a measurement of temperature that is independent of any other temperature sensor.
Feasibility study and quality assessment of unmanned aircraft system-derived multispectral images
NASA Astrophysics Data System (ADS)
Chang, Kuo-Jen
2017-04-01
The purpose of this study is to explore the precision and applicability of UAS-derived multispectral images. In this study, a Micro-MCA6 multispectral camera was mounted on a quadcopter. The Micro-MCA6 acquires synchronized images of each single band. By means of geotagged images and control points, orthomosaic images of each single band were first generated at 14 cm resolution. The complete multispectral image was then merged from the six bands. To improve the spatial resolution, the six-band image was fused with the 9 cm resolution image taken from an RGB camera. Image quality was verified for each single band using control points and check points. The standard deviations of the errors are within 1 to 2 pixels for each band. The quality of the multispectral image is also compared with the 3 cm resolution orthomosaic RGB image gathered from the UAV in the same mission. The standard deviations of the errors are within 2 to 3 pixels. The results show that the errors arise from blurring and band dislocation in object edge identification. Finally, the normalized difference vegetation index (NDVI) was extracted from the image to explore the condition of vegetation and the nature of the environment. This study demonstrates the feasibility and capability of high-resolution multispectral images.
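Since the NDVI extraction is the one computation spelled out in the abstract, a minimal sketch is given below; the band arrays are assumed to be co-registered reflectance (or digital-number) rasters.

```python
import numpy as np

def ndvi(nir, red, eps=1e-10):
    """Normalized difference vegetation index: NDVI = (NIR - Red) / (NIR + Red).

    nir, red : co-registered arrays for the near-infrared and red bands
    eps      : small constant to avoid division by zero over dark pixels
    """
    nir = np.asarray(nir, dtype=np.float64)
    red = np.asarray(red, dtype=np.float64)
    return (nir - red) / (nir + red + eps)
```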
ERIC Educational Resources Information Center
Longford, Nicholas T.
Large scale surveys usually employ a complex sampling design and as a consequence, no standard methods for estimation of the standard errors associated with the estimates of population means are available. Resampling methods, such as jackknife or bootstrap, are often used, with reference to their properties of robustness and reduction of bias. A…
Vajda, E G; Skedros, J G; Bloebaum, R D
1998-10-01
Backscattered electron (BSE) imaging has proven to be a useful method for analyzing the mineral distribution in microscopic regions of bone. However, an accepted method of standardization has not been developed, limiting the utility of BSE imaging for truly quantitative analysis. Previous work has suggested that BSE images can be standardized by energy-dispersive x-ray spectrometry (EDX). Unfortunately, EDX-standardized BSE images tend to underestimate the mineral content of bone when compared with traditional ash measurements. The goal of this study is to investigate the nature of the deficit between EDX-standardized BSE images and ash measurements. A series of analytical standards, ashed bone specimens, and unembedded bone specimens were investigated to determine the source of the deficit previously reported. The primary source of error was found to be inaccurate ZAF corrections to account for the organic phase of the bone matrix. Conductive coatings, methylmethacrylate embedding media, and minor elemental constituents in bone mineral introduced negligible errors. It is suggested that the errors would remain constant and an empirical correction could be used to account for the deficit. However, extensive preliminary testing of the analysis equipment is essential.
Morjaria, Priya; Bastawrous, Andrew; Murthy, Gudlavalleti Venkata Satyanarayana; Evans, Jennifer; Gilbert, Clare
2017-04-08
Uncorrected refractive errors are the commonest cause of visual loss in children despite spectacle correction being highly cost-effective. Many affected children do not benefit from correction as a high proportion do not wear their spectacles. Reasons for non-wear include parental attitudes, overprescribing and children being teased/bullied. Most school programmes do not provide health education for affected children, their peers, teachers or parents. The Portable Eye Examination Kit (Peek) will be used in this study. Peek has applications for measuring visual acuity with software for data entry and sending automated messages to inform providers and parents. Peek also has an application which simulates the visual blur of uncorrected refractive error (SightSim). The hypothesis is that a higher proportion of children with uncorrected refractive errors in schools allocated to the Peek educational package will wear their spectacles 3-4 months after they are dispensed, and a higher proportion of children identified with other eye conditions will access services, compared with schools receiving standard school screening. Cluster randomized, double-masked trial of children with and without uncorrected refractive errors or other eye conditions. Government schools in Hyderabad, India, will be allocated to intervention (Peek) or comparator (standard programme) arms before vision screening. In the intervention arm Peek will be used for vision screening, SightSim images will be used in classroom teaching and will be taken home by children, and voice messages will be sent to parents of children requiring spectacles or referral. In both arms the same criteria for recruitment, prescribing and dispensing spectacles will be used. After 3-4 months children dispensed spectacles will be followed up to assess spectacle wear, and uptake of referrals will be ascertained. The cost of developing and delivering the Peek package will be assessed. The cost per child wearing their spectacles or accessing services will be compared. Educating parents, teachers and children about refractive errors and the importance of wearing spectacles has the potential to increase spectacle wear amongst children. Innovative, potentially scalable mobile technology (Peek) will be used to screen, provide health education, track spectacle wear and adherence to follow-up amongst children referred. Controlled-Trials.com, ISRCTN78134921. Registered on 29 June 2016.
Task motivation influences alpha suppression following errors.
Compton, Rebecca J; Bissey, Bryn; Worby-Selim, Sharoda
2014-07-01
The goal of the present research is to examine the influence of motivation on a novel error-related neural marker, error-related alpha suppression (ERAS). Participants completed an attentionally demanding flanker task under conditions that emphasized either speed or accuracy or under conditions that manipulated the monetary value of errors. Conditions in which errors had greater motivational value produced greater ERAS, that is, greater alpha suppression following errors compared to correct trials. A second study found that a manipulation of task difficulty did not affect ERAS. Together, the results confirm that ERAS is both a robust phenomenon and one that is sensitive to motivational factors. Copyright © 2014 Society for Psychophysiological Research.
Analysis of DGPS/INS and MLS/INS final approach navigation errors and control performance data
NASA Technical Reports Server (NTRS)
Hueschen, Richard M.; Spitzer, Cary R.
1992-01-01
Flight tests were conducted jointly by NASA Langley Research Center and Honeywell, Inc., on a B-737 research aircraft to record a data base for evaluating the performance of a differential GPS (DGPS)/inertial navigation system (INS) which used GPS Course/Acquisition code receivers. Estimates from the DGPS/INS and a Microwave Landing System (MLS)/INS, and various aircraft parameter data were recorded in real time aboard the aircraft while flying along the final approach path to landing. This paper presents the mean and standard deviation of the DGPS/INS and MLS/INS navigation position errors computed relative to the laser tracker system and of the difference between the DGPS/INS and MLS/INS velocity estimates. RMS errors are presented for DGPS/INS and MLS/INS guidance errors (localizer and glideslope). The mean navigation position errors and standard deviation of the x position coordinate of the DGPS/INS and MLS/INS systems were found to be of similar magnitude, while the standard deviations of the y and z position coordinate errors were significantly larger for DGPS/INS compared to MLS/INS.
Jensen, Jonas; Olesen, Jacob Bjerring; Stuart, Matthias Bo; Hansen, Peter Møller; Nielsen, Michael Bachmann; Jensen, Jørgen Arendt
2016-08-01
A method for vector velocity volume flow estimation is presented, along with an investigation of its sources of error and correction of actual volume flow measurements. Volume flow errors are quantified theoretically by numerical modeling, through flow phantom measurements, and studied in vivo. This paper investigates errors from estimating volumetric flow using a commercial ultrasound scanner and the common assumptions made in the literature. The theoretical model shows, e.g. that volume flow is underestimated by 15%, when the scan plane is off-axis with the vessel center by 28% of the vessel radius. The error sources were also studied in vivo under realistic clinical conditions, and the theoretical results were applied for correcting the volume flow errors. Twenty dialysis patients with arteriovenous fistulas were scanned to obtain vector flow maps of fistulas. When fitting an ellipsis to cross-sectional scans of the fistulas, the major axis was on average 10.2mm, which is 8.6% larger than the minor axis. The ultrasound beam was on average 1.5mm from the vessel center, corresponding to 28% of the semi-major axis in an average fistula. Estimating volume flow with an elliptical, rather than circular, vessel area and correcting the ultrasound beam for being off-axis, gave a significant (p=0.008) reduction in error from 31.2% to 24.3%. The error is relative to the Ultrasound Dilution Technique, which is considered the gold standard for volume flow estimation for dialysis patients. The study shows the importance of correcting for volume flow errors, which are often made in clinical practice. Copyright © 2016 Elsevier B.V. All rights reserved.
What to use to express the variability of data: Standard deviation or standard error of mean?
Barde, Mohini P; Barde, Prajakt J
2012-07-01
Statistics plays a vital role in biomedical research. It helps present data precisely and draw meaningful conclusions. While presenting data, one should be aware of using adequate statistical measures. In biomedical journals, the Standard Error of the Mean (SEM) and Standard Deviation (SD) are used interchangeably to express variability, though they measure different parameters. SEM quantifies uncertainty in the estimate of the mean, whereas SD indicates dispersion of the data from the mean. As readers are generally interested in knowing the variability within the sample, descriptive data should be precisely summarized with SD. Use of SEM should be limited to computing confidence intervals (CI), which measure the precision of the population estimate. Journals can avoid such errors by requiring authors to adhere to their guidelines.
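A small numerical illustration of the distinction: SD describes the spread of the individual observations, while SEM = SD/√n describes the precision of the sample mean and feeds the confidence interval.

```python
import numpy as np

def describe(sample, z=1.96):
    """Illustrate SD vs. SEM for a sample.

    SD measures dispersion of the data; SEM = SD / sqrt(n) measures the
    precision of the sample mean and is used to build a ~95% CI for the mean.
    """
    x = np.asarray(sample, dtype=float)
    n = x.size
    sd = x.std(ddof=1)
    sem = sd / np.sqrt(n)
    ci = (x.mean() - z * sem, x.mean() + z * sem)
    return {"mean": x.mean(), "sd": sd, "sem": sem, "ci95": ci}
```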
Evaluation and optimization of sampling errors for the Monte Carlo Independent Column Approximation
NASA Astrophysics Data System (ADS)
Räisänen, Petri; Barker, W. Howard
2004-07-01
The Monte Carlo Independent Column Approximation (McICA) method for computing domain-average broadband radiative fluxes is unbiased with respect to the full ICA, but its flux estimates contain conditional random noise. McICA's sampling errors are evaluated here using a global climate model (GCM) dataset and a correlated-k distribution (CKD) radiation scheme. Two approaches to reduce McICA's sampling variance are discussed. The first is to simply restrict all of McICA's samples to cloudy regions. This avoids wasting precious few samples on essentially homogeneous clear skies. Clear-sky fluxes need to be computed separately for this approach, but this is usually done in GCMs for diagnostic purposes anyway. Second, accuracy can be improved by repeated sampling and averaging of those CKD terms with large cloud radiative effects. Although this naturally increases computational costs over the standard CKD model, random errors for fluxes and heating rates are reduced by typically 50% to 60%, for the present radiation code, when the total number of samples is increased by 50%. When both variance reduction techniques are applied simultaneously, globally averaged flux and heating rate random errors are reduced by a factor of about 3.
Atmospheric density determination using high-accuracy satellite GPS data
NASA Astrophysics Data System (ADS)
Tingling, R.; Miao, J.; Liu, S.
2017-12-01
Atmospheric drag is the main error source in the orbit determination and prediction of low Earth orbit (LEO) satellites; however, the empirical models used to account for the atmosphere often exhibit density errors of around 15-30%. Atmospheric density determination has thus become an important topic for atmospheric researchers. Based on the relation between atmospheric drag force and the decay of the orbit semi-major axis, we derived atmospheric density along the trajectory of CHAMP with its Rapid Science Orbit (RSO) data. Three primary parameters are calculated, including the ratio of cross-sectional area to mass, the drag coefficient, and the decay of the semi-major axis caused by atmospheric drag. We also analyzed the sources of error and made a comparison between GPS-derived and reference density. Results for 2 Dec 2008 show that the mean error of GPS-derived density decreases from 29.21% to 9.20% when the time span adopted in the computation increases from 10 min to 50 min. Results for the whole of December indicate that when the time span meets the condition that the amplitude of the decay of the semi-major axis is much greater than its standard deviation, a density precision of 10% can be achieved.
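Assuming a near-circular orbit and neglecting atmospheric co-rotation, the drag-decay relation reduces to da/dt = -C_d (A/m) ρ √(μa), which can be inverted for density as in the sketch below. This is a simplified stand-in for the derivation actually used with the CHAMP RSO data, not that derivation itself.

```python
import numpy as np

MU_EARTH = 3.986004418e14  # Earth's gravitational parameter, m^3/s^2

def density_from_decay(da_dt, a, cd, area_to_mass):
    """Estimate atmospheric density from the drag-induced decay of the semi-major axis.

    Simplifying assumptions: near-circular orbit, non-rotating atmosphere, so
    da/dt = -Cd * (A/m) * rho * sqrt(mu * a).

    da_dt        : decay rate of the semi-major axis caused by drag [m/s] (negative)
    a            : semi-major axis [m]
    cd           : drag coefficient
    area_to_mass : cross-sectional area to mass ratio [m^2/kg]
    """
    return -da_dt / (cd * area_to_mass * np.sqrt(MU_EARTH * a))
```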
Protective Prevention Effects on the Association of Poverty With Brain Development.
Brody, Gene H; Gray, Joshua C; Yu, Tianyi; Barton, Allen W; Beach, Steven R H; Galván, Adrianna; MacKillop, James; Windle, Michael; Chen, Edith; Miller, Gregory E; Sweet, Lawrence H
2017-01-01
This study was designed to determine whether a preventive intervention focused on enhancing supportive parenting could ameliorate the association between exposure to poverty and brain development in low socioeconomic status African American individuals from the rural South. To determine whether participation in an efficacious prevention program designed to enhance supportive parenting for rural African American children will ameliorate the association between living in poverty and reduced hippocampal and amygdalar volumes in adulthood. In the rural southeastern United States, African American parents and their 11-year-old children were assigned randomly to the Strong African American Families randomized prevention trial or to a control condition. Parents provided data used to calculate income-to-needs ratios when children were aged 11 to 13 years and 16 to 18 years. When the participants were aged 25 years, hippocampal and amygdalar volumes were measured using magnetic resonance imaging. Household poverty was measured by income-to-needs ratios. Young adults' whole hippocampal, dentate gyrus, and CA3 hippocampal subfields as well as amygdalar volumes were assessed using magnetic resonance imaging. Of the 667 participants in the Strong African American Families randomized prevention trial, 119 right-handed African American individuals aged 25 years living in rural areas were recruited. Years lived in poverty across ages 11 to 18 years forecasted diminished left dentate gyrus (simple slope, -14.20; standard error, 5.22; P = .008) and CA3 (simple slope, -6.42; standard error, 2.42; P = .009) hippocampal subfields and left amygdalar (simple slope, -34.62; standard error, 12.74; P = .008) volumes among young adults in the control condition (mean [SD] time, 2.04 [1.88] years) but not among those who participated in the Strong African American Families program (mean [SD] time, 2.61 [1.77] years). In this study, we described how participation in a randomized clinical trial designed to enhance supportive parenting ameliorated the association of years lived in poverty with left dentate gyrus and CA3 hippocampal subfields and left amygdalar volumes. These findings are consistent with a possible role for supportive parenting and suggest a strategy for narrowing social disparities.
Waddell, George; Williamon, Aaron
2017-01-01
Judgments of music performance quality are commonly employed in music practice, education, and research. However, previous studies have demonstrated the limited reliability of such judgments, and there is now evidence that extraneous visual, social, and other “non-musical” features can unduly influence them. The present study employed continuous measurement techniques to examine how the process of forming a music quality judgment is affected by the manipulation of temporally specific visual cues. Video footage comprising an appropriate stage entrance and error-free performance served as the standard condition (Video 1). This footage was manipulated to provide four additional conditions, each identical save for a single variation: an inappropriate stage entrance (Video 2); the presence of an aural performance error midway through the piece (Video 3); the same error accompanied by a negative facial reaction by the performer (Video 4); the facial reaction with no corresponding aural error (Video 5). The participants were 53 musicians and 52 non-musicians (N = 105) who individually assessed the performance quality of one of the five randomly assigned videos via a digital continuous measurement interface and headphones. The results showed that participants viewing the “inappropriate” stage entrance made judgments significantly more quickly than those viewing the “appropriate” entrance, and while the poor entrance caused significantly lower initial scores among those with musical training, the effect did not persist long into the performance. The aural error caused an immediate drop in quality judgments that persisted to a lower final score only when accompanied by the frustrated facial expression from the pianist; the performance error alone caused a temporary drop only in the musicians' ratings, and the negative facial reaction alone caused no reaction regardless of participants' musical experience. These findings demonstrate the importance of visual information in forming evaluative and aesthetic judgments in musical contexts and highlight how visual cues dynamically influence those judgments over time. PMID:28487662
The Effect of Interruptions on Part 121 Air Carrier Operations
NASA Technical Reports Server (NTRS)
Damos, Diane L.
1998-01-01
The primary purpose of this study was to determine the relative priorities of various events and activities by examining the probability that a given activity was interrupted by a given event. The analysis will begin by providing frequency of interruption data by crew position (captain versus first officer) and event type. Any differences in the pattern of interruptions between the first officers and the captains will be explored and interpreted in terms of standard operating procedures. Subsequent data analyses will focus on comparing the frequency of interruptions for different types of activities and for the same activities under normal versus emergency conditions. Briefings and checklists will receive particular attention. The frequency with which specific activities are interrupted under multiple- versus single-task conditions also will be examined; because the majority of multiple-task data were obtained under laboratory conditions, LOFT-type tapes offer a unique opportunity to examine concurrent task performance under 'real-world' conditions. A second purpose of this study is to examine the effects of the interruptions on performance. More specifically, when possible, the time to resume specific activities will be compared to determine if pilots are slower to resume certain types of activities. Errors in resumption or failures to resume specific activities will be noted and any patterns in these errors will be identified. Again, particular attention will be given to the effects of interruptions on the completion of checklists and briefings. Other types of errors and missed events (i.e., the crew should have responded to the event but did not) will be examined. Any methodology using interruptions to examine task prioritization must be able to identify when an interruption has occurred and describe the ongoing activities that were interrupted. Both of these methodological problems are discussed in detail in the following section.
Wang, Hue-Yu; Wen, Ching-Feng; Chiu, Yu-Hsien; Lee, I-Nong; Kao, Hao-Yun; Lee, I-Chen; Ho, Wen-Hsien
2013-01-01
An adaptive-network-based fuzzy inference system (ANFIS) was compared with an artificial neural network (ANN) in terms of accuracy in predicting the combined effects of temperature (10.5 to 24.5°C), pH level (5.5 to 7.5), sodium chloride level (0.25% to 6.25%) and sodium nitrite level (0 to 200 ppm) on the growth rate of Leuconostoc mesenteroides under aerobic and anaerobic conditions. The ANFIS and ANN models were compared in terms of six statistical indices calculated by comparing their prediction results with actual data: mean absolute percentage error (MAPE), root mean square error (RMSE), standard error of prediction percentage (SEP), bias factor (Bf), accuracy factor (Af), and absolute fraction of variance (R²). Graphical plots were also used for model comparison. The learning-based systems obtained encouraging prediction results. Sensitivity analyses of the four environmental factors showed that temperature and, to a lesser extent, NaCl had the most influence on accuracy in predicting the growth rate of Leuconostoc mesenteroides under aerobic and anaerobic conditions. The observed effectiveness of ANFIS for modeling microbial kinetic parameters confirms its potential use as a supplemental tool in predictive mycology. Comparisons between growth rates predicted by ANFIS and actual experimental data also confirmed the high accuracy of the Gaussian membership function in ANFIS. Comparisons of the six statistical indices under both aerobic and anaerobic conditions also showed that the ANFIS model was better than all ANN models in predicting the four kinetic parameters. Therefore, the ANFIS model is a valuable tool for quickly predicting the growth rate of Leuconostoc mesenteroides under aerobic and anaerobic conditions.
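For reference, the six comparison indices can be computed as in the sketch below. The definitions follow their usual forms in the predictive-modeling literature and may differ in detail from those used in the study.

```python
import numpy as np

def fit_indices(obs, pred):
    """Common indices for comparing predicted vs. observed growth rates.

    obs, pred : arrays of observed and predicted values (positive, same length)
    """
    obs, pred = np.asarray(obs, float), np.asarray(pred, float)
    mape = 100.0 * np.mean(np.abs(obs - pred) / obs)          # mean absolute % error
    rmse = np.sqrt(np.mean((obs - pred) ** 2))                # root mean square error
    sep = 100.0 * rmse / obs.mean()                           # standard error of prediction %
    bf = 10 ** np.mean(np.log10(pred / obs))                  # bias factor
    af = 10 ** np.mean(np.abs(np.log10(pred / obs)))          # accuracy factor
    r2 = 1.0 - np.sum((obs - pred) ** 2) / np.sum(obs ** 2)   # absolute fraction of variance
    return {"MAPE": mape, "RMSE": rmse, "SEP": sep, "Bf": bf, "Af": af, "R2": r2}
```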
Wang, Hue-Yu; Wen, Ching-Feng; Chiu, Yu-Hsien; Lee, I-Nong; Kao, Hao-Yun; Lee, I-Chen; Ho, Wen-Hsien
2013-01-01
Background An adaptive-network-based fuzzy inference system (ANFIS) was compared with an artificial neural network (ANN) in terms of accuracy in predicting the combined effects of temperature (10.5 to 24.5°C), pH level (5.5 to 7.5), sodium chloride level (0.25% to 6.25%) and sodium nitrite level (0 to 200 ppm) on the growth rate of Leuconostoc mesenteroides under aerobic and anaerobic conditions. Methods The ANFIS and ANN models were compared in terms of six statistical indices calculated by comparing their prediction results with actual data: mean absolute percentage error (MAPE), root mean square error (RMSE), standard error of prediction percentage (SEP), bias factor (Bf), accuracy factor (Af), and absolute fraction of variance (R 2). Graphical plots were also used for model comparison. Conclusions The learning-based systems obtained encouraging prediction results. Sensitivity analyses of the four environmental factors showed that temperature and, to a lesser extent, NaCl had the most influence on accuracy in predicting the growth rate of Leuconostoc mesenteroides under aerobic and anaerobic conditions. The observed effectiveness of ANFIS for modeling microbial kinetic parameters confirms its potential use as a supplemental tool in predictive mycology. Comparisons between growth rates predicted by ANFIS and actual experimental data also confirmed the high accuracy of the Gaussian membership function in ANFIS. Comparisons of the six statistical indices under both aerobic and anaerobic conditions also showed that the ANFIS model was better than all ANN models in predicting the four kinetic parameters. Therefore, the ANFIS model is a valuable tool for quickly predicting the growth rate of Leuconostoc mesenteroides under aerobic and anaerobic conditions. PMID:23705023
Cost-effectiveness of the stream-gaging program in Kentucky
Ruhl, K.J.
1989-01-01
This report documents the results of a study of the cost-effectiveness of the stream-gaging program in Kentucky. The total surface-water program includes 97 daily-discharge stations, 12 stage-only stations, and 35 crest-stage stations and is operated on a budget of $950,700. One station used for research lacks an adequate source of funding and should be discontinued when the research ends. Most stations in the network are multiple-use, with 65 stations operated for the purpose of defining hydrologic systems, 48 for project operation, 47 for definition of regional hydrology, and 43 for hydrologic forecasting purposes. Eighteen stations support water quality monitoring activities, one station is used for planning and design, and one station is used for research. The average standard error of estimation of streamflow records was determined only for stations in the Louisville Subdistrict. Under current operating policy, with a budget of $223,500, the average standard error of estimation is 28.5%. Altering the travel routes and measurement frequency to reduce the amount of lost stage record would allow a slight decrease in standard error to 26.9%. The results indicate that the collection of streamflow records in the Louisville Subdistrict is cost effective in its present mode of operation. In the Louisville Subdistrict, a minimum budget of $214,200 is required to operate the current network at an average standard error of 32.7%. A budget less than this does not permit proper service and maintenance of the gages and recorders. The maximum budget analyzed was $268,200, which would result in an average standard error of 16.9%, indicating that if the budget were increased by 20 percent, the standard error would be reduced by about 40 percent. (USGS)
Pleil, Joachim D
2016-01-01
This commentary is the second of a series outlining one specific concept in interpreting biomarkers data. In the first, an observational method was presented for assessing the distribution of measurements before making parametric calculations. Here, the discussion revolves around the next step, the choice of using standard error of the mean or the calculated standard deviation to compare or predict measurement results.
ERIC Educational Resources Information Center
Nozari, Nazbanou; Dell, Gary S.; Schwartz, Myrna F.
2011-01-01
Despite the existence of speech errors, verbal communication is successful because speakers can detect (and correct) their errors. The standard theory of speech-error detection, the perceptual-loop account, posits that the comprehension system monitors production output for errors. Such a comprehension-based monitor, however, cannot explain the…
Multi-Dimensional Asymptotically Stable 4th Order Accurate Schemes for the Diffusion Equation
NASA Technical Reports Server (NTRS)
Abarbanel, Saul; Ditkowski, Adi
1996-01-01
An algorithm is presented which solves the multi-dimensional diffusion equation on complex shapes to 4th-order accuracy and is asymptotically stable in time. This bounded-error result is achieved by constructing, on a rectangular grid, a differentiation matrix whose symmetric part is negative definite. The differentiation matrix accounts for the Dirichlet boundary condition by imposing penalty-like terms. Numerical examples in 2-D show that the method is effective even where standard schemes, stable by traditional definitions, fail.
General Aviation Avionics Statistics.
1980-12-01
designed to produce standard errors on these variables at levels specified by the FAA. No controls were placed on the standard errors of the non-design... Transponder Encoding Requirement and Mode C Automatic Altitude Reporting Capability (has been deleted); Two-way Radio; VOR or TACAN Receiver. Remaining 42...
ERIC Educational Resources Information Center
Schretlen, David; And Others
1994-01-01
Composite reliability and standard errors of measurement were computed for prorated Verbal, Performance, and Full-Scale intelligence quotient (IQ) scores from a seven-subtest short form of the Wechsler Adult Intelligence Scale-Revised. Results with 1,880 adults (standardization sample) indicate that this form is as reliable as the complete test.…
A Brief Look at: Test Scores and the Standard Error of Measurement. E&R Report No. 10.13
ERIC Educational Resources Information Center
Holdzkom, David; Sumner, Brian; McMillen, Brad
2010-01-01
In the context of standardized testing, the standard error of measurement (SEM) is a measure of the factors other than the student's actual knowledge of the tested material that may affect the student's test score. Such factors may include distractions in the testing environment, fatigue, hunger, or even luck. This means that a student's observed…
Toward a new culture in verified quantum operations
NASA Astrophysics Data System (ADS)
Flammia, Steve
Measuring error rates of quantum operations has become an indispensable component in any aspiring platform for quantum computation. As the quality of controlled quantum operations increases, the demands on the accuracy and precision with which we measure these error rates also grow. However, well-meaning scientists who report these error measures are faced with a sea of non-standardized methodologies and are often asked during publication for only coarse information about how their estimates were obtained. Moreover, there are serious incentives to use methodologies and measures that will continually produce numbers that improve with time to show progress. These problems will only be exacerbated as our typical error rates go from 1 in 100 to 1 in 1000 or less. This talk will survey existing challenges presented by the current paradigm and offer some suggestions for solutions that can help us move toward fair and standardized methods for error metrology in quantum computing experiments, and toward a culture that values full disclosure of methodologies and higher standards for data analysis.
Kletting, P; Schimmel, S; Kestler, H A; Hänscheid, H; Luster, M; Fernández, M; Bröer, J H; Nosske, D; Lassmann, M; Glatting, G
2013-10-01
Calculation of the time-integrated activity coefficient (residence time) is a crucial step in dosimetry for molecular radiotherapy. However, available software is deficient in that it is either not tailored for the use in molecular radiotherapy and/or does not include all required estimation methods. The aim of this work was therefore the development and programming of an algorithm which allows for an objective and reproducible determination of the time-integrated activity coefficient and its standard error. The algorithm includes the selection of a set of fitting functions from predefined sums of exponentials and the choice of an error model for the used data. To estimate the values of the adjustable parameters an objective function, depending on the data, the parameters of the error model, the fitting function and (if required and available) Bayesian information, is minimized. To increase reproducibility and user-friendliness the starting values are automatically determined using a combination of curve stripping and random search. Visual inspection, the coefficient of determination, the standard error of the fitted parameters, and the correlation matrix are provided to evaluate the quality of the fit. The functions which are most supported by the data are determined using the corrected Akaike information criterion. The time-integrated activity coefficient is estimated by analytically integrating the fitted functions. Its standard error is determined assuming Gaussian error propagation. The software was implemented using MATLAB. To validate the proper implementation of the objective function and the fit functions, the results of NUKFIT and SAAM numerical, a commercially available software tool, were compared. The automatic search for starting values was successfully tested for reproducibility. The quality criteria applied in conjunction with the Akaike information criterion allowed the selection of suitable functions. Function fit parameters and their standard error estimated by using SAAM numerical and NUKFIT showed differences of <1%. The differences for the time-integrated activity coefficients were also <1% (standard error between 0.4% and 3%). In general, the application of the software is user-friendly and the results are mathematically correct and reproducible. An application of NUKFIT is presented for three different clinical examples. The software tool with its underlying methodology can be employed to objectively and reproducibly estimate the time integrated activity coefficient and its standard error for most time activity data in molecular radiotherapy.
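As a rough illustration of the core computation described above (fitting a sum of exponentials, integrating it analytically, and propagating the fit covariance), here is a minimal Python sketch. The biexponential model, the unweighted least squares, and the absence of model selection or Bayesian priors are simplifications; this is not the NUKFIT implementation.

```python
import numpy as np
from scipy.optimize import curve_fit

def biexp(t, a1, l1, a2, l2):
    """Example fit function: a sum of two exponentials."""
    return a1 * np.exp(-l1 * t) + a2 * np.exp(-l2 * t)

def time_integrated_coefficient(t, activity, p0):
    """Fit a biexponential and integrate it analytically from 0 to infinity.

    Returns the time-integrated activity coefficient and its standard error via
    first-order (Gaussian) error propagation through the fit covariance.
    """
    popt, pcov = curve_fit(biexp, t, activity, p0=p0)
    a1, l1, a2, l2 = popt
    tia = a1 / l1 + a2 / l2                              # integral of the fitted sum
    grad = np.array([1 / l1, -a1 / l1**2, 1 / l2, -a2 / l2**2])  # d(tia)/d(params)
    se = np.sqrt(grad @ pcov @ grad)                     # Gaussian error propagation
    return tia, se
```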
Lewis, Matthew S; Maruff, Paul; Silbert, Brendan S; Evered, Lis A; Scott, David A
2007-02-01
The reliable change index (RCI) expresses change relative to its associated error, and is useful in the identification of postoperative cognitive dysfunction (POCD). This paper examines four common RCIs that each account for error in different ways. Three rules incorporate a constant correction for practice effects and are contrasted with the standard RCI that had no correction for practice. These rules are applied to 160 patients undergoing coronary artery bypass graft (CABG) surgery who completed neuropsychological assessments preoperatively and 1 week postoperatively using error and reliability data from a comparable healthy nonsurgical control group. The rules all identify POCD in a similar proportion of patients, but the use of the within-subject standard deviation (WSD), expressing the effects of random error, as an error estimate is a theoretically appropriate denominator when a constant error correction, removing the effects of systematic error, is deducted from the numerator in a RCI.
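The following sketch contrasts a standard RCI with one practice-corrected variant, using a healthy control group to estimate the practice effect and the within-subject standard deviation (WSD). The formulas below follow common conventions; the exact error terms of the four rules compared in the paper may differ.

```python
import numpy as np

def reliable_change(x1, x2, ctrl_t1, ctrl_t2):
    """Standard and practice-corrected reliable change indices (illustrative sketch).

    x1, x2           : a patient's pre- and post-operative scores
    ctrl_t1, ctrl_t2 : control-group scores at the two assessments, used to
                       estimate reliability, practice effect, and error terms
    """
    ctrl_t1 = np.asarray(ctrl_t1, float)
    ctrl_t2 = np.asarray(ctrl_t2, float)
    r = np.corrcoef(ctrl_t1, ctrl_t2)[0, 1]             # test-retest reliability
    se_diff = ctrl_t1.std(ddof=1) * np.sqrt(2 * (1 - r))  # SE of the difference
    practice = (ctrl_t2 - ctrl_t1).mean()                # constant practice effect
    wsd = (ctrl_t2 - ctrl_t1).std(ddof=1) / np.sqrt(2)   # within-subject SD (random error)
    rci_standard = (x2 - x1) / se_diff                   # no practice correction
    rci_corrected = (x2 - x1 - practice) / wsd           # practice-corrected, WSD denominator
    return rci_standard, rci_corrected
```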
McClure, Foster D; Lee, Jung K
2005-01-01
Sample size formulas are developed to estimate the repeatability and reproducibility standard deviations (s_r and s_R) such that the actual errors in s_r and s_R relative to their respective true values, σ_r and σ_R, are at predefined levels. The statistical consequences associated with the AOAC INTERNATIONAL required sample size to validate an analytical method are discussed. In addition, formulas to estimate the uncertainties of s_r and s_R were derived and are provided as supporting documentation. Formula for the Number of Replicates Required for a Specified Margin of Relative Error in the Estimate of the Repeatability Standard Deviation.
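A back-of-the-envelope version of the sample-size question can be written using the normal approximation SE(s)/σ ≈ 1/√(2(n-1)); the paper derives exact formulas, so the sketch below is only indicative of the idea.

```python
import math

def replicates_for_relative_error(delta, z=1.96):
    """Approximate replicates needed so the estimated repeatability SD is within
    a relative margin `delta` of the true value at roughly 95% confidence.

    Uses the normal approximation SE(s)/sigma ~= 1/sqrt(2(n-1)); the exact
    formulas in the cited paper may differ.
    """
    return math.ceil(1 + (z / delta) ** 2 / 2.0)

# e.g., a 20% relative margin: replicates_for_relative_error(0.20) -> 50
```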
NASA Astrophysics Data System (ADS)
Charonko, John J.; Vlachos, Pavlos P.
2013-06-01
Numerous studies have established firmly that particle image velocimetry (PIV) is a robust method for non-invasive, quantitative measurements of fluid velocity, and that when carefully conducted, typical measurements can accurately detect displacements in digital images with a resolution well below a single pixel (in some cases well below a hundredth of a pixel). However, to date, these estimates have only been able to provide guidance on the expected error for an average measurement under specific image quality and flow conditions. This paper demonstrates a new method for estimating the uncertainty bounds to within a given confidence interval for a specific, individual measurement. Here, cross-correlation peak ratio, the ratio of primary to secondary peak height, is shown to correlate strongly with the range of observed error values for a given measurement, regardless of flow condition or image quality. This relationship is significantly stronger for phase-only generalized cross-correlation PIV processing, while the standard correlation approach showed weaker performance. Using an analytical model of the relationship derived from synthetic data sets, the uncertainty bounds at a 95% confidence interval are then computed for several artificial and experimental flow fields, and the resulting errors are shown to match closely to the predicted uncertainties. While this method stops short of being able to predict the true error for a given measurement, knowledge of the uncertainty level for a PIV experiment should provide great benefits when applying the results of PIV analysis to engineering design studies and computational fluid dynamics validation efforts. Moreover, this approach is exceptionally simple to implement and requires negligible additional computational cost.
NASA Astrophysics Data System (ADS)
Meier, Walter Neil
This thesis demonstrates the applicability of data assimilation methods to improve observed and modeled ice motion fields and to demonstrate the effects of assimilated motion on Arctic processes important to the global climate and of practical concern to human activities. Ice motions derived from 85 GHz and 37 GHz SSM/I imagery and estimated from two-dimensional dynamic-thermodynamic sea ice models are compared to buoy observations. Mean error, error standard deviation, and correlation with buoys are computed for the model domain. SSM/I motions generally have a lower bias, but higher error standard deviations and lower correlation with buoys than model motions. There are notable variations in the statistics depending on the region of the Arctic, season, and ice characteristics. Assimilation methods are investigated and blending and optimal interpolation strategies are implemented. Blending assimilation improves error statistics slightly, but the effect of the assimilation is reduced due to noise in the SSM/I motions and is thus not an effective method to improve ice motion estimates. However, optimal interpolation assimilation reduces motion errors by 25--30% over modeled motions and 40--45% over SSM/I motions. Optimal interpolation assimilation is beneficial in all regions, seasons and ice conditions, and is particularly effective in regimes where modeled and SSM/I errors are high. Assimilation alters annual average motion fields. Modeled ice products of ice thickness, ice divergence, Fram Strait ice volume export, transport across the Arctic and interannual basin averages are also influenced by assimilated motions. Assimilation improves estimates of pollutant transport and corrects synoptic-scale errors in the motion fields caused by incorrect forcings or errors in model physics. The portability of the optimal interpolation assimilation method is demonstrated by implementing the strategy in an ice thickness distribution (ITD) model. This research presents an innovative method of combining a new data set of SSM/I-derived ice motions with three different sea ice models via two data assimilation methods. The work described here is the first example of assimilating remotely-sensed data within high-resolution and detailed dynamic-thermodynamic sea ice models. The results demonstrate that assimilation is a valuable resource for determining accurate ice motion in the Arctic.
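A minimal sketch of one optimal-interpolation analysis step, of the kind used to blend modeled and SSM/I-derived motions; the observation operator H and the covariance matrices B and R are placeholders for whatever the assimilation system actually specifies.

```python
import numpy as np

def optimal_interpolation(background, obs, H, B, R):
    """One optimal-interpolation analysis step (simplified sketch).

    background : (n,) model ice-motion state
    obs        : (m,) observed motions (e.g., SSM/I-derived)
    H          : (m, n) observation operator mapping the state to observation space
    B, R       : background-error and observation-error covariance matrices
    """
    innovation = obs - H @ background                   # observation-minus-model departures
    K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)        # gain weighting obs vs. model errors
    return background + K @ innovation                  # analysis (blended) motion state
```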
Bootstrap Estimates of Standard Errors in Generalizability Theory
ERIC Educational Resources Information Center
Tong, Ye; Brennan, Robert L.
2007-01-01
Estimating standard errors of estimated variance components has long been a challenging task in generalizability theory. Researchers have speculated about the potential applicability of the bootstrap for obtaining such estimates, but they have identified problems (especially bias) in using the bootstrap. Using Brennan's bias-correcting procedures…
Eppenhof, Koen A J; Pluim, Josien P W
2018-04-01
Error estimation in nonlinear medical image registration is a nontrivial problem that is important for validation of registration methods. We propose a supervised method for estimation of registration errors in nonlinear registration of three-dimensional (3-D) images. The method is based on a 3-D convolutional neural network that learns to estimate registration errors from a pair of image patches. By applying the network to patches centered around every voxel, we construct registration error maps. The network is trained using a set of representative images that have been synthetically transformed to construct a set of image pairs with known deformations. The method is evaluated on deformable registrations of inhale-exhale pairs of thoracic CT scans. Using ground truth target registration errors on manually annotated landmarks, we evaluate the method's ability to estimate local registration errors. Estimation of full domain error maps is evaluated using a gold standard approach. The two evaluation approaches show that we can train the network to robustly estimate registration errors in a predetermined range, with subvoxel accuracy. We achieved a root-mean-square deviation of 0.51 mm from gold standard registration errors and of 0.66 mm from ground truth landmark registration errors.
Hejl, H.R.
1989-01-01
The precipitation-runoff modeling system was applied to the 8.21-square-mile drainage area of the Ah-shi-sle-pah Wash watershed in northwestern New Mexico. The calibration periods were May to September of 1981 and 1982, and the verification period was May to September 1983. Twelve storms were available for calibration and 8 storms were available for verification. For calibration A (hydraulic conductivity estimated from onsite data and other storm-mode parameters optimized), the computed standard error of estimate was 50% for runoff volumes and 72% for peak discharges. Calibration B included hydraulic conductivity in the optimization, which reduced the standard error of estimate to 28% for runoff volumes and 50% for peak discharges. Optimized values for hydraulic conductivity resulted in reductions from 1.00 to 0.26 in/h and 0.20 to 0.03 in/h for the 2 general soils groups in the calibrations. Simulated runoff volumes using 7 of 8 storms occurring during the verification period had a standard error of estimate of 40% for verification A and 38% for verification B. Simulated peak discharge had a standard error of estimate of 120% for verification A and 56% for verification B. Including the eighth storm, which had a relatively small magnitude, in the verification analysis more than doubled the standard errors of estimate for volumes and peaks. (USGS)
Hess, G.W.; Bohman, L.R.
1996-01-01
Techniques for estimating monthly mean streamflow at gaged sites and monthly streamflow duration characteristics at ungaged sites in central Nevada were developed using streamflow records at six gaged sites and basin physical and climatic characteristics. Streamflow data at gaged sites were related by regression techniques to concurrent flows at nearby gaging stations so that monthly mean streamflows for periods of missing or no record can be estimated for gaged sites in central Nevada. The standard error of estimate for relations at these sites ranged from 12 to 196 percent. Also, monthly streamflow data for selected percent exceedence levels were used in regression analyses with basin and climatic variables to determine relations for ungaged basins for annual and monthly percent exceedence levels. Analyses indicate that the drainage area and percent of drainage area at altitudes greater than 10,000 feet are the most significant variables. For the annual percent exceedence, the standard error of estimate of the relations for ungaged sites ranged from 51 to 96 percent and standard error of prediction for ungaged sites ranged from 96 to 249 percent. For the monthly percent exceedence values, the standard error of estimate of the relations ranged from 31 to 168 percent, and the standard error of prediction ranged from 115 to 3,124 percent. Reliability and limitations of the estimating methods are described.
Dukić, Lora; Kopčinović, Lara Milevoj; Dorotić, Adrijana; Baršić, Ivana
2016-10-15
Blood gas analysis (BGA) is exposed to risks of errors caused by improper sampling, transport and storage conditions. The Clinical and Laboratory Standards Institute (CLSI) generated documents with recommendations for avoidance of potential errors caused by sample mishandling. Two main documents related to BGA issued by the CLSI are GP43-A4 (former H11-A4) Procedures for the collection of arterial blood specimens; approved standard - fourth edition, and C46-A2 Blood gas and pH analysis and related measurements; approved guideline - second edition. Practices related to processing of blood gas samples are not standardized in the Republic of Croatia. Each institution has its own protocol for ordering, collection and analysis of blood gases. Although many laboratories use state of the art analyzers, still many preanalytical procedures remain unchanged. The objective of the Croatian Society of Medical Biochemistry and Laboratory Medicine (CSMBLM) is to standardize the procedures for BGA based on CLSI recommendations. The Working Group for Blood Gas Testing as part of the Committee for the Scientific Professional Development of the CSMBLM prepared a set of recommended protocols for sampling, transport, storage and processing of blood gas samples based on relevant CLSI documents, relevant literature search and on the results of Croatian survey study on practices and policies in acid-base testing. Recommendations are intended for laboratory professionals and all healthcare workers involved in blood gas processing.
Dukić, Lora; Kopčinović, Lara Milevoj; Dorotić, Adrijana; Baršić, Ivana
2016-01-01
Blood gas analysis (BGA) is exposed to risks of errors caused by improper sampling, transport and storage conditions. The Clinical and Laboratory Standards Institute (CLSI) generated documents with recommendations for avoidance of potential errors caused by sample mishandling. Two main documents related to BGA issued by the CLSI are GP43-A4 (former H11-A4) Procedures for the collection of arterial blood specimens; approved standard – fourth edition, and C46-A2 Blood gas and pH analysis and related measurements; approved guideline – second edition. Practices related to processing of blood gas samples are not standardized in the Republic of Croatia. Each institution has its own protocol for ordering, collection and analysis of blood gases. Although many laboratories use state of the art analyzers, still many preanalytical procedures remain unchanged. The objective of the Croatian Society of Medical Biochemistry and Laboratory Medicine (CSMBLM) is to standardize the procedures for BGA based on CLSI recommendations. The Working Group for Blood Gas Testing as part of the Committee for the Scientific Professional Development of the CSMBLM prepared a set of recommended protocols for sampling, transport, storage and processing of blood gas samples based on relevant CLSI documents, relevant literature search and on the results of Croatian survey study on practices and policies in acid-base testing. Recommendations are intended for laboratory professionals and all healthcare workers involved in blood gas processing. PMID:27812301
Kim, Yoonsang; Huang, Jidong; Emery, Sherry
2016-02-26
Social media have transformed the communications landscape. People increasingly obtain news and health information online and via social media. Social media platforms also serve as novel sources of rich observational data for health research (including infodemiology, infoveillance, and digital disease detection). While the number of studies using social data is growing rapidly, very few of these studies transparently outline their methods for collecting, filtering, and reporting those data. Keywords and search filters applied to social data form the lens through which researchers may observe what and how people communicate about a given topic. Without a properly focused lens, research conclusions may be biased or misleading. Standards of reporting data sources and quality are needed so that data scientists and consumers of social media research can evaluate and compare methods and findings across studies. We aimed to develop and apply a framework of social media data collection and quality assessment and to propose a reporting standard, which researchers and reviewers may use to evaluate and compare the quality of social data across studies. We propose a conceptual framework consisting of three major steps in collecting social media data: develop, apply, and validate search filters. This framework is based on two criteria: retrieval precision (how much of retrieved data is relevant) and retrieval recall (how much of the relevant data is retrieved). We then discuss two conditions that estimation of retrieval precision and recall relies on--accurate human coding and full data collection--and how to calculate these statistics in cases that deviate from the two ideal conditions. We then apply the framework on a real-world example using approximately 4 million tobacco-related tweets collected from the Twitter firehose. We developed and applied a search filter to retrieve e-cigarette-related tweets from the archive based on three keyword categories: devices, brands, and behavior. The search filter retrieved 82,205 e-cigarette-related tweets from the archive and was validated. Retrieval precision was calculated above 95% in all cases. Retrieval recall was 86% assuming ideal conditions (no human coding errors and full data collection), 75% when unretrieved messages could not be archived, 86% assuming no false negative errors by coders, and 93% allowing both false negative and false positive errors by human coders. This paper sets forth a conceptual framework for the filtering and quality evaluation of social data that addresses several common challenges and moves toward establishing a standard of reporting social data. Researchers should clearly delineate data sources, how data were accessed and collected, and the search filter building process and how retrieval precision and recall were calculated. The proposed framework can be adapted to other public social media platforms.
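Under the two ideal conditions named above (error-free human coding, full data collection), retrieval precision and recall reduce to simple ratios, as in the sketch below; the counts passed in are hypothetical.

```python
def retrieval_precision_recall(retrieved_relevant, retrieved_total, relevant_total):
    """Retrieval precision and recall for a search filter, assuming ideal conditions.

    retrieved_relevant : retrieved messages judged relevant by human coders
    retrieved_total    : all messages the filter retrieved
    relevant_total     : relevant messages in the full archive (estimated)

    The paper derives adjustments for cases where coding errors occur or the
    data collection is incomplete; those are not modeled here.
    """
    precision = retrieved_relevant / retrieved_total
    recall = retrieved_relevant / relevant_total
    return precision, recall
```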
NASA Astrophysics Data System (ADS)
Pathiraja, S. D.; Moradkhani, H.; Marshall, L. A.; Sharma, A.; Geenens, G.
2016-12-01
Effective combination of model simulations and observations through Data Assimilation (DA) depends heavily on uncertainty characterisation. Many traditional methods for quantifying model uncertainty in DA require some level of subjectivity (by way of tuning parameters or by assuming Gaussian statistics). Furthermore, the focus is typically on only estimating the first and second moments. We propose a data-driven methodology to estimate the full distributional form of model uncertainty, i.e. the transition density p(xt|xt-1). All sources of uncertainty associated with the model simulations are considered collectively, without needing to devise stochastic perturbations for individual components (such as model input, parameter and structural uncertainty). A training period is used to derive the distribution of errors in observed variables conditioned on hidden states. Errors in hidden states are estimated from the conditional distribution of observed variables using non-linear optimization. The theory behind the framework and case study applications are discussed in detail. Results demonstrate improved predictions and more realistic uncertainty bounds compared to a standard perturbation approach.
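The abstract does not specify the estimator, so the following is only a rough sketch of the general idea: derive an empirical error distribution from a training period and use it to dress new model simulations with non-Gaussian uncertainty. A Gaussian kernel density estimate stands in for the paper's data-driven transition-density estimation; the data and variable names are synthetic.

```python
# Minimal sketch: estimate a non-parametric distribution of model errors from a
# training period and sample from it to form uncertainty bounds. A Gaussian kernel
# density estimate is used as a simple stand-in for the data-driven approach in
# the abstract; the synthetic data are illustrative only.
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)

# Synthetic training period: model simulations and matching observations.
simulated = rng.normal(10.0, 2.0, size=500)
observed = simulated + rng.standard_t(df=4, size=500) * 0.8   # non-Gaussian errors

errors = observed - simulated
error_density = gaussian_kde(errors)    # full distributional form, no Gaussian assumption

# Forecast step: dress a new model simulation with sampled errors to get an
# ensemble and empirical uncertainty bounds.
new_simulation = 11.3
ensemble = new_simulation + error_density.resample(2000)[0]
lower, upper = np.percentile(ensemble, [5, 95])
print(f"90% uncertainty bound: [{lower:.2f}, {upper:.2f}]")
```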
Feng, Kaiqiang; Li, Jie; Zhang, Xiaoming; Shen, Chong; Bi, Yu; Zheng, Tao; Liu, Jun
2017-09-19
In order to reduce the computational complexity, and improve the pitch/roll estimation accuracy of the low-cost attitude heading reference system (AHRS) under conditions of magnetic-distortion, a novel linear Kalman filter, suitable for nonlinear attitude estimation, is proposed in this paper. The new algorithm is the combination of two-step geometrically-intuitive correction (TGIC) and the Kalman filter. In the proposed algorithm, the sequential two-step geometrically-intuitive correction scheme is used to make the current estimation of pitch/roll immune to magnetic distortion. Meanwhile, the TGIC produces a computed quaternion input for the Kalman filter, which avoids the linearization error of measurement equations and reduces the computational complexity. Several experiments have been carried out to validate the performance of the filter design. The results demonstrate that the mean time consumption and the root mean square error (RMSE) of pitch/roll estimation under magnetic disturbances are reduced by 45.9% and 33.8%, respectively, when compared with a standard filter. In addition, the proposed filter is applicable for attitude estimation under various dynamic conditions.
Meta-regression approximations to reduce publication selection bias.
Stanley, T D; Doucouliagos, Hristos
2014-03-01
Publication selection bias is a serious challenge to the integrity of all empirical sciences. We derive meta-regression approximations to reduce this bias. Our approach employs Taylor polynomial approximations to the conditional mean of a truncated distribution. A quadratic approximation without a linear term, precision-effect estimate with standard error (PEESE), is shown to have the smallest bias and mean squared error in most cases and to outperform conventional meta-analysis estimators, often by a great deal. Monte Carlo simulations also demonstrate how a new hybrid estimator that conditionally combines PEESE and the Egger regression intercept can provide a practical solution to publication selection bias. PEESE is easily expanded to accommodate systematic heterogeneity along with complex and differential publication selection bias that is related to moderator variables. By providing an intuitive reason for these approximations, we can also explain why the Egger regression works so well and when it does not. These meta-regression methods are applied to several policy-relevant areas of research including antidepressant effectiveness, the value of a statistical life, the minimum wage, and nicotine replacement therapy. Copyright © 2013 John Wiley & Sons, Ltd.
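A minimal sketch of the PEESE regression follows: each study's effect estimate is regressed on its squared standard error with inverse-variance weights, and the intercept is taken as the selection-adjusted effect. The effect sizes and standard errors are invented for illustration.

```python
# Sketch of the PEESE meta-regression: effect estimates are regressed on their squared
# standard errors (a quadratic approximation without a linear term), weighted by inverse
# variance; the intercept is the selection-adjusted effect. Toy data for illustration.
import numpy as np
import statsmodels.api as sm

effects = np.array([0.42, 0.35, 0.28, 0.15, 0.10, 0.07, 0.05])
ses     = np.array([0.30, 0.25, 0.20, 0.12, 0.08, 0.06, 0.05])

X = sm.add_constant(ses**2)                     # columns: [1, SE_i^2]
fit = sm.WLS(effects, X, weights=1.0 / ses**2).fit()

b0, b1 = fit.params
print(f"PEESE corrected effect (intercept): {b0:.3f}")
print(f"Small-sample/selection term (SE^2 slope): {b1:.3f}")

# The Egger regression mentioned in the abstract instead uses a linear SE term;
# the hybrid estimator applies PEESE conditionally on that intercept test.
```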
NASA Astrophysics Data System (ADS)
Suparman, Yusep; Folmer, Henk; Oud, Johan H. L.
2014-01-01
Omitted variables and measurement errors in explanatory variables frequently occur in hedonic price models. Ignoring these problems leads to biased estimators. In this paper, we develop a constrained autoregression-structural equation model (ASEM) to handle both types of problems. Standard panel data models to handle omitted variables bias are based on the assumption that the omitted variables are time-invariant. ASEM allows handling of both time-varying and time-invariant omitted variables by constrained autoregression. In the case of measurement error, standard approaches require additional external information which is usually difficult to obtain. ASEM exploits the fact that panel data are repeatedly measured which allows decomposing the variance of a variable into the true variance and the variance due to measurement error. We apply ASEM to estimate a hedonic housing model for urban Indonesia. To get insight into the consequences of measurement error and omitted variables, we compare the ASEM estimates with the outcomes of (1) a standard SEM, which does not account for omitted variables, (2) a constrained autoregression model, which does not account for measurement error, and (3) a fixed effects hedonic model, which ignores measurement error and time-varying omitted variables. The differences between the ASEM estimates and the outcomes of the three alternative approaches are substantial.
da Cunha, Antonio Ribeiro
2015-05-01
This study aimed to assess measurements of temperature and relative humidity obtained with a HOBO data logger, under various conditions of exposure to solar radiation, comparing them with those obtained through the use of a temperature/relative humidity probe and a copper-constantan thermocouple psychrometer, which are considered the standards for obtaining such measurements. Data were collected over a 6-day period (from 25 March to 1 April, 2010), during which the equipment was monitored continuously and simultaneously. We employed the following combinations of equipment and conditions: a HOBO data logger in full sunlight; a HOBO data logger shielded within a white plastic cup with windows for air circulation; a HOBO data logger shielded within a gill-type shelter (multi-plate prototype plastic); a copper-constantan thermocouple psychrometer exposed to natural ventilation and protected from sunlight; and a temperature/relative humidity probe under a commercial, multi-plate radiation shield. Comparisons between the measurements obtained with the various devices were made on the basis of statistical indicators: linear regression, with coefficient of determination; index of agreement; maximum absolute error; and mean absolute error. The prototype multi-plate shelter (gill-type) used to protect the HOBO data logger was found to provide the best protection against the effects of solar radiation on measurements of temperature and relative humidity. The precision and accuracy of a device that measures temperature and relative humidity depend on an efficient shelter that minimizes the interference caused by solar radiation, thereby avoiding erroneous analysis of the data obtained.
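The comparison statistics named above (linear regression with coefficient of determination, index of agreement, mean absolute error, and maximum absolute error) can be computed as in the following sketch; the short temperature series are synthetic placeholders, and Willmott's formulation of the index of agreement is assumed.

```python
# Sketch of the agreement statistics used to compare a test sensor with a reference:
# linear regression with R^2, Willmott's index of agreement, mean absolute error,
# and maximum absolute error. The two short series are synthetic.
import numpy as np

reference = np.array([21.3, 24.8, 28.1, 30.4, 27.2, 23.5])   # e.g., psychrometer (deg C)
test      = np.array([21.9, 25.6, 29.4, 31.8, 27.9, 24.1])   # e.g., HOBO logger (deg C)

slope, intercept = np.polyfit(reference, test, 1)
r2 = np.corrcoef(reference, test)[0, 1] ** 2

resid = test - reference
mae = np.mean(np.abs(resid))
max_abs_error = np.max(np.abs(resid))

# Willmott's index of agreement
obar = reference.mean()
d = 1 - np.sum(resid**2) / np.sum((np.abs(test - obar) + np.abs(reference - obar)) ** 2)

print(f"regression: test = {slope:.2f} * ref + {intercept:.2f}, R^2 = {r2:.3f}")
print(f"MAE = {mae:.2f} C, max abs error = {max_abs_error:.2f} C, index of agreement = {d:.3f}")
```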
Ing, Alex; Schwarzbauer, Christian
2014-01-01
Functional connectivity has become an increasingly important area of research in recent years. At a typical spatial resolution, approximately 300 million connections link each voxel in the brain with every other. This pattern of connectivity is known as the functional connectome. Connectivity is often compared between experimental groups and conditions. Standard methods used to control the type 1 error rate are likely to be insensitive when comparisons are carried out across the whole connectome, due to the huge number of statistical tests involved. To address this problem, two new cluster based methods--the cluster size statistic (CSS) and cluster mass statistic (CMS)--are introduced to control the family wise error rate across all connectivity values. These methods operate within a statistical framework similar to the cluster based methods used in conventional task based fMRI. Both methods are data driven, permutation based and require minimal statistical assumptions. Here, the performance of each procedure is evaluated in a receiver operator characteristic (ROC) analysis, utilising a simulated dataset. The relative sensitivity of each method is also tested on real data: BOLD (blood oxygen level dependent) fMRI scans were carried out on twelve subjects under normal conditions and during the hypercapnic state (induced through the inhalation of 6% CO2 in 21% O2 and 73%N2). Both CSS and CMS detected significant changes in connectivity between normal and hypercapnic states. A family wise error correction carried out at the individual connection level exhibited no significant changes in connectivity.
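A heavily simplified sketch of a cluster-size statistic with permutation-based family-wise error control is given below; the published CSS and CMS procedures differ in detail, and the dimensions, threshold, and data here are illustrative only.

```python
# Simplified cluster-size statistic (CSS) with permutation-based family-wise error
# control over a connectome: edges whose group difference exceeds a primary threshold
# are kept, clusters are connected components of the suprathreshold graph, and the
# maximal cluster size is compared with its permutation null. Illustrative data only.
import numpy as np
from scipy import sparse, stats

rng = np.random.default_rng(1)
n_nodes, n_a, n_b = 30, 12, 12
iu = np.triu_indices(n_nodes, k=1)            # upper-triangular edge index

def edge_tstats(conn_a, conn_b):
    """Two-sample t statistic for every edge (subjects x edges arrays)."""
    return stats.ttest_ind(conn_a, conn_b, axis=0).statistic

def max_cluster_size(tvals, threshold):
    """Edge count of the largest connected component of suprathreshold edges."""
    mask = np.abs(tvals) > threshold
    if not mask.any():
        return 0
    rows, cols = iu[0][mask], iu[1][mask]
    adj = sparse.coo_matrix((np.ones(rows.size), (rows, cols)), shape=(n_nodes, n_nodes))
    _, labels = sparse.csgraph.connected_components(adj, directed=False)
    return int(np.bincount(labels[rows]).max())

# Synthetic per-subject edge vectors for groups A and B, with a weak planted effect.
edges_a = rng.normal(0, 1, (n_a, iu[0].size))
edges_b = rng.normal(0, 1, (n_b, iu[0].size))
edges_b[:, :40] += 1.2

observed = max_cluster_size(edge_tstats(edges_a, edges_b), threshold=2.5)

# Permutation null of the maximal cluster size (controls the family-wise error rate).
pooled = np.vstack([edges_a, edges_b])
null = []
for _ in range(200):                          # more permutations would be used in practice
    perm = rng.permutation(n_a + n_b)
    null.append(max_cluster_size(edge_tstats(pooled[perm[:n_a]], pooled[perm[n_a:]]), 2.5))
p_value = (1 + np.sum(np.array(null) >= observed)) / (1 + len(null))
print(f"largest cluster: {observed} edges, family-wise p = {p_value:.3f}")
```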
Quantifying the uncertainty of regional and national estimates of soil carbon stocks
NASA Astrophysics Data System (ADS)
Papritz, Andreas
2013-04-01
At regional and national scales, carbon (C) stocks are frequently estimated by means of regression models. Such statistical models link measurements of carbon stocks, recorded for a set of soil profiles or soil cores, to covariates that characterize soil formation conditions and land management. A prerequisite is that these covariates are available for any location within a region of interest G because they are used along with the fitted regression coefficients to predict the carbon stocks at the nodes of a fine-meshed grid that is laid over G. The mean C stock in G is then estimated by the arithmetic mean of the stock predictions for the grid nodes. Apart from the mean stock, the precision of the estimate is often also of interest, for example to judge whether the mean C stock has changed significantly between two inventories. The standard error of the estimated mean stock in G can be computed from the regression results as well. Two issues are thereby important: (i) How large is the area of G relative to the support of the measurements? (ii) Are the residuals of the regression model spatially auto-correlated or is the assumption of statistical independence tenable? Both issues are correctly handled if one adopts a geostatistical block kriging approach for estimating the mean C stock within a region and its standard error. In the presentation I shall summarize the main ideas of external drift block kriging. To compute the standard error of the mean stock, one has in principle to sum the elements of a potentially very large covariance matrix of point prediction errors, but I shall show that the required term can be approximated very well by Monte Carlo techniques. I shall further illustrate with a few examples how the standard error of the mean stock estimate changes with the size of G and with the strength of the auto-correlation of the regression residuals. As an application, a robust variant of block kriging is used to quantify the mean carbon stock stored in the soils of Swiss forests (Nussbaum et al., 2012). Nussbaum, M., Papritz, A., Baltensweiler, A., and Walthert, L. (2012). Organic carbon stocks of Swiss forest soils. Final report, Institute of Terrestrial Ecosystems, ETH Zürich and Swiss Federal Institute for Forest, Snow and Landscape Research (WSL), pp. 51, http://e-collection.library.ethz.ch/eserv/eth:6027/eth-6027-01.pdf
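The Monte Carlo shortcut mentioned above can be sketched as follows: rather than forming the full covariance matrix of point prediction errors, its mean element is estimated from random pairs of grid nodes. The exponential covariance function and its parameters are placeholders, not those of the Swiss forest-soil application.

```python
# Sketch: Monte Carlo approximation of the mean element of an N x N covariance matrix
# of point prediction errors, the term needed for the block standard error, without
# ever building the matrix. Covariance model and parameters are illustrative.
import numpy as np

rng = np.random.default_rng(42)

# Fine prediction grid over the region G (coordinates in km).
grid = rng.uniform(0, 100, size=(200_000, 2))

def cov(h, sill=1.0, range_par=15.0):
    """Isotropic exponential covariance of the prediction errors at lag distance h."""
    return sill * np.exp(-h / range_par)

# Estimate mean_{i,j} C(x_i, x_j) from random pairs of grid nodes.
n_pairs = 500_000
i = rng.integers(0, grid.shape[0], n_pairs)
j = rng.integers(0, grid.shape[0], n_pairs)
lags = np.linalg.norm(grid[i] - grid[j], axis=1)
mean_cov = cov(lags).mean()

block_se = np.sqrt(mean_cov)   # approximate standard error of the estimated mean stock over G
print(f"approximate block standard error: {block_se:.4f}")
```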
Cole, Sindy; McNally, Gavan P
2007-10-01
Three experiments studied temporal-difference (TD) prediction errors during Pavlovian fear conditioning. In Stage I, rats received conditioned stimulus A (CSA) paired with shock. In Stage II, they received pairings of CSA and CSB with shock that blocked learning to CSB. In Stage III, a serial overlapping compound, CSB --> CSA, was followed by shock. The change in intratrial durations supported fear learning to CSB but reduced fear of CSA, revealing the operation of TD prediction errors. N-methyl-D-aspartate (NMDA) receptor antagonism prior to Stage III prevented learning, whereas opioid receptor antagonism selectively affected predictive learning. These findings support a role for TD prediction errors in fear conditioning. They suggest that NMDA receptors contribute to fear learning by acting on the product of predictive error, whereas opioid receptors contribute to predictive error. (PsycINFO Database Record (c) 2007 APA, all rights reserved).
Best practices to optimize intraoperative photography.
Gaujoux, Sébastien; Ceribelli, Cecilia; Goudard, Geoffrey; Khayat, Antoine; Leconte, Mahaut; Massault, Pierre-Philippe; Balagué, Julie; Dousset, Bertrand
2016-04-01
Intraoperative photography is used extensively for communication, research, or teaching. The objective of the present work was to define, using a standardized methodology and literature review, the best technical conditions for intraoperative photography. Using either a smartphone camera, a bridge camera, or a single-lens reflex (SLR) camera, photographs were taken under various standard conditions by a professional photographer. All images were independently assessed, blinded to technical conditions, to define the best shooting conditions and methods. For better photographs, an SLR camera with manual settings should be used. Photographs should be centered and taken vertically and orthogonal to the surgical field with a linear scale to avoid error in perspective. The shooting distance should be about 75 cm using an 80-100-mm focal lens. Flash should be avoided and scialytic low-powered light should be used without focus. The operative field should be clean, wet surfaces should be avoided, and metal instruments should be hidden to avoid reflections. For an SLR camera, the International Organization for Standardization (ISO) speed should be as low as possible, the autofocus area selection mode should be single-point AF, shutter speed should be above 1/100 second, and aperture should be as narrow as possible, above f/8. For smartphones, the high dynamic range setting should be used if available; use of flash, digital filters, effect apps, and digital zoom is not recommended. If a few basic technical rules are known and applied, high-quality photographs can be taken by amateur photographers and fit the standards accepted in clinical practice, academic communication, and publications. Copyright © 2016 Elsevier Inc. All rights reserved.
The computation of equating errors in international surveys in education.
Monseur, Christian; Berezner, Alla
2007-01-01
Since the IEA's Third International Mathematics and Science Study, one of the major objectives of international surveys in education has been to report trends in achievement. The names of the two current IEA surveys reflect this growing interest: Trends in International Mathematics and Science Study (TIMSS) and Progress in International Reading Literacy Study (PIRLS). Similarly a central concern of the OECD's PISA is with trends in outcomes over time. To facilitate trend analyses these studies link their tests using common item equating in conjunction with item response modelling methods. IEA and PISA policies differ in terms of reporting the error associated with trends. In IEA surveys, the standard errors of the trend estimates do not include the uncertainty associated with the linking step while PISA does include a linking error component in the standard errors of trend estimates. In other words, PISA implicitly acknowledges that trend estimates partly depend on the selected common items, while the IEA's surveys do not recognise this source of error. Failing to recognise the linking error leads to an underestimation of the standard errors and thus increases the Type I error rate, thereby resulting in reporting of significant changes in achievement when in fact these are not significant. The growing interest of policy makers in trend indicators and the impact of the evaluation of educational reforms appear to be incompatible with such underestimation. However, the procedure implemented by PISA raises a few issues about the underlying assumptions for the computation of the equating error. After a brief introduction, this paper will describe the procedure PISA implemented to compute the linking error. The underlying assumptions of this procedure will then be discussed. Finally an alternative method based on replication techniques will be presented, based on a simulation study and then applied to the PISA 2000 data.
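One common formulation treats the link as the mean shift of the common items' difficulty estimates between two assessments, with the linking error as the standard error of that mean under the strong assumption of independently sampled items. The sketch below illustrates this simplified view, not the exact PISA computation; the item difficulties are invented.

```python
# Illustrative common-item linking error: the link constant is the mean shift of
# common-item difficulties between two assessment cycles, and the linking error is
# the standard error of that mean (simplified; not the exact PISA procedure).
import numpy as np

b_cycle1 = np.array([-1.20, -0.45, 0.10, 0.35, 0.80, 1.25, 1.60, -0.90])  # logits
b_cycle2 = np.array([-1.05, -0.55, 0.22, 0.30, 0.95, 1.18, 1.72, -0.78])  # logits

shifts = b_cycle2 - b_cycle1
m = shifts.size

link_constant = shifts.mean()
linking_error = shifts.std(ddof=1) / np.sqrt(m)

print(f"link constant = {link_constant:.3f} logits, linking error = {linking_error:.3f}")
# The linking error would then be combined (in quadrature) with the sampling and
# measurement error components of a trend estimate's standard error.
```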
WASP (Write a Scientific Paper) using Excel - 6: Standard error and confidence interval.
Grech, Victor
2018-03-01
The calculation of descriptive statistics includes the calculation of standard error and confidence interval, an inevitable component of data analysis in inferential statistics. This paper provides pointers as to how to do this in Microsoft Excel™. Copyright © 2018 Elsevier B.V. All rights reserved.
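The quantities discussed in the paper can be mirrored outside Excel as a check; the sketch below computes the standard error of the mean and a t-based 95% confidence interval for an arbitrary small sample (the Excel equivalents would be along the lines of STDEV.S, COUNT and CONFIDENCE.T).

```python
# Standard error of the mean and a t-based 95% confidence interval for a small sample;
# the values are arbitrary.
import numpy as np
from scipy import stats

x = np.array([4.2, 5.1, 3.8, 4.9, 5.6, 4.4, 5.0, 4.7])
n = x.size

mean = x.mean()
se = x.std(ddof=1) / np.sqrt(n)                   # standard error of the mean
t_crit = stats.t.ppf(0.975, df=n - 1)             # t-based, appropriate for small n
ci_low, ci_high = mean - t_crit * se, mean + t_crit * se

print(f"mean = {mean:.2f}, SE = {se:.2f}, 95% CI = ({ci_low:.2f}, {ci_high:.2f})")
```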
Determinants of Standard Errors of MLEs in Confirmatory Factor Analysis
ERIC Educational Resources Information Center
Yuan, Ke-Hai; Cheng, Ying; Zhang, Wei
2010-01-01
This paper studies changes of standard errors (SE) of the normal-distribution-based maximum likelihood estimates (MLE) for confirmatory factor models as model parameters vary. Using logical analysis, simplified formulas and numerical verification, monotonic relationships between SEs and factor loadings as well as unique variances are found.…
How does Socio-Economic Factors Influence Interest to Go to Vocational High Schools?
NASA Astrophysics Data System (ADS)
Utomo, N. F.; Wonggo, D.
2018-02-01
This study aimed to reveal the interest of junior high school students in the Sangihe Islands, Indonesia, in going to vocational high schools and the factors affecting it. The study used a quantitative method with an ex-post facto approach. The population consisted of 332 students, and a sample of 178 students was established using proportional random sampling with the Isaac table at a 5% margin of error. The results show that the family's socio-economic condition contributes positively (26%) to interest in going to vocational high schools, indicating that the family's socio-economic condition influences junior high school students' interest in going to vocational high schools.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Graus, Matthew S.; Neumann, Aaron K.; Timlin, Jerilyn A.
Fungi in the Candida genus are the most common fungal pathogens. They not only cause high morbidity and mortality but can also cost billions of dollars in healthcare. To alleviate this burden, early and accurate identification of Candida species is necessary. However, standard identification procedures can take days and have a large false negative error. The method described in this study takes advantage of hyperspectral confocal fluorescence microscopy, which makes it possible to quickly and accurately identify and characterize the unique autofluorescence spectra of different Candida species with up to 84% accuracy when grown in conditions that closely mimic physiological conditions.
Prevalence of amblyopia and refractive errors in an unscreened population of children.
Polling, Jan-Roelof; Loudon, Sjoukje E; Klaver, Caroline C W
2012-11-01
To describe the frequency of refractive errors and amblyopia in unscreened children aged 2 months to 12 years from a rural town in Poland. Five hundred ninety-one children were identified by medical records and examined in a standardized manner. Visual acuity was measured using LogMAR charts; refractive error was determined using retinoscopy or autorefraction after cycloplegia. Myopia was defined as spherical equivalent (SE) ≤ -0.50 D, emmetropia as SE between -0.5 D and +0.5 D, mild hyperopia as SE between +0.5 D and +2.0 D, and high hyperopia as SE ≥ +2.0 D. Amblyopia was classified as best-corrected visual acuity ≥0.3 (≤ 20/40) LogMAR, in combination with a 2 LogMAR line difference between the two eyes and the presence of an amblyogenic factor. Refractive errors ranged from 84.2% in children aged up to 2 years to 75.5% in those aged 10 to 12 years. Refractive error showed a myopic shift with age; myopia prevalence increased from 2.2% in those aged 6 to 7 years to 6.3% in those aged 10 to 12 years. Of the examined children, 77 (16.3%) had refractive errors with visual loss; of these, 60 (78%) did not use corrections. The prevalence of amblyopia was 3.1%, and refractive error was attributed as the cause of amblyopia in 9 of 13 (69%) children. Refractive errors are common in Caucasian children and often remain undiagnosed. The prevalence of amblyopia was three times higher in this unscreened population compared with screened populations. Greater awareness of these common treatable visual conditions in children is warranted.
Galerkin v. discrete-optimal projection in nonlinear model reduction
DOE Office of Scientific and Technical Information (OSTI.GOV)
Carlberg, Kevin Thomas; Barone, Matthew Franklin; Antil, Harbir
Discrete-optimal model-reduction techniques such as the Gauss-Newton with Approximated Tensors (GNAT) method have shown promise, as they have generated stable, accurate solutions for large-scale turbulent, compressible flow problems where standard Galerkin techniques have failed. However, there has been limited comparative analysis of the two approaches. This is due in part to difficulties arising from the fact that Galerkin techniques perform projection at the time-continuous level, while discrete-optimal techniques do so at the time-discrete level. This work provides a detailed theoretical and experimental comparison of the two techniques for two common classes of time integrators: linear multistep schemes and Runge-Kutta schemes. We present a number of new findings, including conditions under which the discrete-optimal ROM has a time-continuous representation, conditions under which the two techniques are equivalent, and time-discrete error bounds for the two approaches. Perhaps most surprisingly, we demonstrate both theoretically and experimentally that decreasing the time step does not necessarily decrease the error for the discrete-optimal ROM; instead, the time step should be 'matched' to the spectral content of the reduced basis. In numerical experiments carried out on a turbulent compressible-flow problem with over one million unknowns, we show that increasing the time step to an intermediate value decreases both the error and the simulation time of the discrete-optimal reduced-order model by an order of magnitude.
Human eyes do not need monochromatic aberrations for dynamic accommodation.
Bernal-Molina, Paula; Marín-Franch, Iván; Del Águila-Carrasco, Antonio J; Esteve-Taboada, Jose J; López-Gil, Norberto; Kruger, Philip B; Montés-Micó, Robert
2017-09-01
To determine if human accommodation uses the eye's own monochromatic aberrations to track dynamic accommodative stimuli. Wavefront aberrations were measured while subjects monocularly viewed a monochromatic Maltese cross moving sinusoidally around 2D of accommodative demand with 1D amplitude at 0.2 Hz. The amplitude and phase (delay) of the accommodation response were compared to the actual vergence of the stimulus to obtain gain and temporal phase, calculated from wavefront aberrations recorded over time during experimental trials. The tested conditions were as follows: Correction of all the subject's aberrations except defocus (C); Correction of all the subject's aberrations except defocus and habitual second-order astigmatism (AS); Correction of all the subject's aberrations except defocus and odd higher-order aberrations (HOAs); Correction of all the subject's aberrations except defocus and even HOAs (E); Natural aberrations of the subject's eye, i.e., the adaptive-optics system only corrected the optical system's aberrations (N); Correction of all the subject's aberrations except defocus and fourth-order spherical aberration (SA). The correction was performed at 20 Hz and each condition was repeated six times in randomised order. Average gain (±2 standard errors of the mean) varied little across conditions; between 0.55 ± 0.06 (SA), and 0.62 ± 0.06 (AS). Average phase (±2 standard errors of the mean) also varied little; between 0.41 ± 0.02 s (E), and 0.47 ± 0.02 s (O). After Bonferroni correction, no statistically significant differences in gain or phase were found in the presence of specific monochromatic aberrations or in their absence. These results show that the eye's monochromatic aberrations are not necessary for accommodation to track dynamic accommodative stimuli. © 2017 The Authors. Ophthalmic and Physiological Optics published by John Wiley & Sons Ltd on behalf of College of Optometrists.
Thermal effects on electronic properties of CO/Pt(111) in water.
Duan, Sai; Xu, Xin; Luo, Yi; Hermansson, Kersti; Tian, Zhong-Qun
2013-08-28
Structure and adsorption energy of carbon monoxide molecules adsorbed on the Pt(111) surfaces with various CO coverages in water as well as work function of the whole systems at room temperature of 298 K were studied by means of a hybrid method that combines classical molecular dynamics and density functional theory. We found that when the coverage of CO is around half monolayer, i.e. 50%, there is no obvious peak of the oxygen density profile appearing in the first water layer. This result reveals that, in this case, the external force applied to water molecules from the CO/Pt(111) surface almost vanishes as a result of the competitive adsorption between CO and water molecules on the Pt(111) surface. This coverage is also the critical point of the wetting/non-wetting conditions for the CO/Pt(111) surface. Averaged work function and adsorption energy from current simulations are consistent with those of previous studies, which show that thermal average is required for direct comparisons between theoretical predictions and experimental measurements. Meanwhile, the statistical behaviors of work function and adsorption energy at room temperature have also been calculated. The standard errors of the calculated work function for the water-CO/Pt(111) interfaces are around 0.6 eV at all CO coverages, while the standard error decreases from 1.29 to 0.05 eV as the CO coverage increases from 4% to 100% for the calculated adsorption energy. Moreover, the critical points for these electronic properties are the same as those for the wetting/non-wetting conditions. These findings provide a better understanding about the interfacial structure under specific adsorption conditions, which can have important applications on the structure of electric double layers and therefore offer a useful perspective for the design of the electrochemical catalysts.
Mieritz, Rune M; Bronfort, Gert; Jakobsen, Markus D; Aagaard, Per; Hartvigsen, Jan
2014-09-01
A basic premise for any instrument measuring spinal motion is that reliable outcomes can be obtained on a relevant sample under standardized conditions. The purpose of this study was to assess the overall reliability and measurement error of regional spinal sagittal plane motion in patients with chronic low back pain (LBP), and then to evaluate the influence of body mass index, examiner, gender, stability of pain, and pain distribution on reliability and measurement error. This study comprises a test-retest design separated by 7 to 14 days. The patient cohort consisted of 220 individuals with chronic LBP. Kinematics of the lumbar spine were sampled during standardized spinal extension-flexion testing using a 6-df instrumented spatial linkage system. Test-retest reliability and measurement error were evaluated using intraclass correlation coefficients (ICC(1,1)) and Bland-Altman limits of agreement (LOAs). The overall test-retest reliability (ICC(1,1)) for various motion parameters ranged from 0.51 to 0.70, and relatively wide LOAs were observed for all parameters. Reliability measures in patient subgroups (ICC(1,1)) ranged between 0.34 and 0.77. In general, greater ICC(1,1) coefficients and smaller LOAs were found in subgroups with patients examined by the same examiner, patients with a stable pain level, patients with a body mass index below 30 kg/m(2), patients who were men, and patients in the Quebec Task Force classification Group 1. This study shows that sagittal plane kinematic data from patients with chronic LBP may be sufficiently reliable in measurements of groups of patients. However, because of the large LOAs, this test procedure appears unusable at the individual patient level. Furthermore, reliability and measurement error vary substantially among subgroups of patients. Copyright © 2014 Elsevier Inc. All rights reserved.
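For reference, the two statistics used above can be computed as in the following sketch: ICC(1,1) from a one-way random-effects ANOVA and Bland-Altman 95% limits of agreement for a two-session test-retest design. The motion values are synthetic.

```python
# Test-retest reliability sketch: ICC(1,1) via one-way random-effects ANOVA and
# Bland-Altman 95% limits of agreement; the range-of-motion data are synthetic.
import numpy as np

rng = np.random.default_rng(7)
true_rom = rng.normal(60, 10, size=40)              # "true" range of motion (deg)
session1 = true_rom + rng.normal(0, 6, size=40)
session2 = true_rom + rng.normal(0, 6, size=40)
data = np.column_stack([session1, session2])        # n subjects x k sessions

n, k = data.shape
grand = data.mean()
ms_between = k * np.sum((data.mean(axis=1) - grand) ** 2) / (n - 1)
ms_within = np.sum((data - data.mean(axis=1, keepdims=True)) ** 2) / (n * (k - 1))
icc_1_1 = (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)

diff = session2 - session1
loa_low = diff.mean() - 1.96 * diff.std(ddof=1)
loa_high = diff.mean() + 1.96 * diff.std(ddof=1)

print(f"ICC(1,1) = {icc_1_1:.2f}")
print(f"Bland-Altman LOA: {loa_low:.1f} to {loa_high:.1f} deg")
```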
Towards First Principles-Based Prediction of Highly Accurate Electrochemical Pourbaix Diagrams
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zeng, Zhenhua; Chan, Maria K. Y.; Zhao, Zhi-Jian
2015-08-13
Electrochemical potential/pH (Pourbaix) diagrams underpin many aqueous electrochemical processes and are central to the identification of stable phases of metals for processes ranging from electrocatalysis to corrosion. Even though standard DFT calculations are potentially powerful tools for the prediction of such diagrams, inherent errors in the description of transition metal (hydroxy)oxides, together with neglect of van der Waals interactions, have limited the reliability of such predictions for even the simplest pure metal bulk compounds, and corresponding predictions for more complex alloy or surface structures are even more challenging. In the present work, through synergistic use of a Hubbard U correction, a state-of-the-art dispersion correction, and a water-based bulk reference state for the calculations, these errors are systematically corrected. The approach describes the weak binding that occurs between hydroxyl-containing functional groups in certain compounds in Pourbaix diagrams, corrects for self-interaction errors in transition metal compounds, and reduces residual errors on oxygen atoms by preserving a consistent oxidation state between the reference state, water, and the relevant bulk phases. The strong performance is illustrated on a series of bulk transition metal (Mn, Fe, Co and Ni) hydroxides, oxyhydroxides, binary, and ternary oxides, where the corresponding thermodynamics of redox and (de)hydration are described with standard errors of 0.04 eV per (reaction) formula unit. The approach further preserves accurate descriptions of the overall thermodynamics of electrochemically-relevant bulk reactions, such as water formation, which is an essential condition for facilitating accurate analysis of reaction energies for electrochemical processes on surfaces. The overall generality and transferability of the scheme suggests that it may find useful application in the construction of a broad array of electrochemical phase diagrams, including both bulk Pourbaix diagrams and surface phase diagrams of interest for corrosion and electrocatalysis.
Errors in Bibliographic Citations: A Continuing Problem.
ERIC Educational Resources Information Center
Sweetland, James H.
1989-01-01
Summarizes studies examining citation errors and illustrates errors resulting from a lack of standardization, misunderstanding of foreign languages, failure to examine the document cited, and general lack of training in citation norms. It is argued that the failure to detect and correct citation errors is due to diffusion of responsibility in the…
Topping, David J.; Rubin, David M.; Wright, Scott A.; Melis, Theodore S.
2011-01-01
Several common methods for measuring suspended-sediment concentration in rivers in the United States use depth-integrating samplers to collect a velocity-weighted suspended-sediment sample in a subsample of a river cross section. Because depth-integrating samplers are always moving through the water column as they collect a sample, and can collect only a limited volume of water and suspended sediment, they collect only minimally time-averaged data. Four sources of error exist in the field use of these samplers: (1) bed contamination, (2) pressure-driven inrush, (3) inadequate sampling of the cross-stream spatial structure in suspended-sediment concentration, and (4) inadequate time averaging. The first two of these errors arise from misuse of suspended-sediment samplers, and the third has been the subject of previous study using data collected in the sand-bedded Middle Loup River in Nebraska. Of these four sources of error, the least understood source of error arises from the fact that depth-integrating samplers collect only minimally time-averaged data. To evaluate this fourth source of error, we collected suspended-sediment data between 1995 and 2007 at four sites on the Colorado River in Utah and Arizona, using a P-61 suspended-sediment sampler deployed in both point- and one-way depth-integrating modes, and D-96-A1 and D-77 bag-type depth-integrating suspended-sediment samplers. These data indicate that the minimal duration of time averaging during standard field operation of depth-integrating samplers leads to an error that is comparable in magnitude to that arising from inadequate sampling of the cross-stream spatial structure in suspended-sediment concentration. This random error arising from inadequate time averaging is positively correlated with grain size and does not largely depend on flow conditions or, for a given size class of suspended sediment, on elevation above the bed. Averaging over time scales >1 minute is the likely minimum duration required to result in substantial decreases in this error. During standard two-way depth integration, a depth-integrating suspended-sediment sampler collects a sample of the water-sediment mixture during two transits at each vertical in a cross section: one transit while moving from the water surface to the bed, and another transit while moving from the bed to the water surface. As the number of transits is doubled at an individual vertical, this error is reduced by ~30 percent in each size class of suspended sediment. For a given size class of suspended sediment, the error arising from inadequate sampling of the cross-stream spatial structure in suspended-sediment concentration depends only on the number of verticals collected, whereas the error arising from inadequate time averaging depends on both the number of verticals collected and the number of transits collected at each vertical. Summing these two errors in quadrature yields a total uncertainty in an equal-discharge-increment (EDI) or equal-width-increment (EWI) measurement of the time-averaged velocity-weighted suspended-sediment concentration in a river cross section (exclusive of any laboratory-processing errors). By virtue of how the number of verticals and transits influences the two individual errors within this total uncertainty, the error arising from inadequate time averaging slightly dominates that arising from inadequate sampling of the cross-stream spatial structure in suspended-sediment concentration. 
Adding verticals to an EDI or EWI measurement is slightly more effective in reducing the total uncertainty than adding transits only at each vertical, because a new vertical contributes both temporal and spatial information. However, because collection of depth-integrated samples at more transits at each vertical is generally easier and faster than at more verticals, addition of a combination of verticals and transits is likely a more practical approach to reducing the total uncertainty in most field situations.
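The quadrature combination described above reduces to a short calculation; the sketch below uses placeholder percentages for the spatial and temporal error components and, as in the text, excludes laboratory-processing errors.

```python
# Quadrature combination of the spatial (cross-stream structure) and temporal
# (time-averaging) error components of an EDI/EWI measurement, treated as independent.
# The example percentages are placeholders.
import math

def total_uncertainty(spatial_error_pct: float, temporal_error_pct: float) -> float:
    """Combined relative uncertainty (percent) of a cross-section concentration measurement."""
    return math.hypot(spatial_error_pct, temporal_error_pct)

# Doubling transits at each vertical reduces only the temporal component (by roughly
# 30% per doubling, per the text); adding verticals reduces both components.
print(total_uncertainty(10.0, 12.0))          # baseline
print(total_uncertainty(10.0, 12.0 * 0.7))    # transits doubled at each vertical
```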
Graf, Alexandra C; Bauer, Peter
2011-06-30
We calculate the maximum type 1 error rate that the pre-planned conventional fixed sample size test for comparing the means of independent normal distributions (with common known variance) can yield when the sample size and the allocation rate to the treatment arms are modified in an interim analysis. It is assumed that the experimenter fully exploits knowledge of the unblinded interim estimates of the treatment effects in order to maximize the conditional type 1 error rate. The 'worst-case' strategies require knowledge of the unknown common treatment effect under the null hypothesis. Although this is a rather hypothetical scenario, it may be approached in practice when using a standard control treatment for which precise estimates are available from historical data. The maximum inflation of the type 1 error rate is substantially larger than derived by Proschan and Hunsberger (Biometrics 1995; 51:1315-1324) for design modifications applying balanced samples before and after the interim analysis. Corresponding upper limits for the maximum type 1 error rate are calculated for a number of situations arising from practical considerations (e.g. restricting the maximum sample size, not allowing sample size to decrease, allowing only increases in the sample size of the experimental treatment). The application is discussed for a motivating example. Copyright © 2011 John Wiley & Sons, Ltd.
Spatial compression impairs prism adaptation in healthy individuals.
Scriven, Rachel J; Newport, Roger
2013-01-01
Neglect patients typically present with gross inattention to one side of space following damage to the contralateral hemisphere. While prism-adaptation (PA) is effective in ameliorating some neglect behaviors, the mechanisms involved and their relationship to neglect remain unclear. Recent studies have shown that conscious strategic control (SC) processes in PA may be impaired in neglect patients, who are also reported to show extraordinarily long aftereffects compared to healthy participants. Determining the underlying cause of these effects may be the key to understanding therapeutic benefits. Alternative accounts suggest that reduced SC might result from a failure to detect prism-induced reaching errors properly either because (a) the size of the error is underestimated in compressed visual space or (b) pathologically increased error-detection thresholds reduce the requirement for error correction. The purpose of this study was to model these two alternatives in healthy participants and to examine whether SC and subsequent aftereffects were abnormal compared to standard PA. Each participant completed three PA procedures within a MIRAGE mediated reality environment with direction errors recorded before, during and after adaptation. During PA, visual feedback of the reach could be compressed, perturbed by noise, or represented veridically. Compressed visual space significantly reduced SC and aftereffects compared to control and noise conditions. These results support recent observations in neglect patients, suggesting that a distortion of spatial representation may successfully model neglect and explain neglect performance while adapting to prisms.
Lobach, David F; Kawamoto, Kensaku; Anstrom, Kevin J; Russell, Michael L; Woods, Peter; Smith, Dwight
2007-01-01
Clinical decision support is recognized as one potential remedy for the growing crisis in healthcare quality in the United States and other industrialized nations. While decision support systems have been shown to improve care quality and reduce errors, these systems are not widely available. This lack of availability arises in part because most decision support systems are not portable or scalable. The Health Level 7 international standard development organization recently adopted a draft standard known as the Decision Support Service standard to facilitate the implementation of clinical decision support systems using software services. In this paper, we report the first implementation of a clinical decision support system using this new standard. This system provides point-of-care chronic disease management for diabetes and other conditions and is deployed throughout a large regional health system. We also report process measures and usability data concerning the system. Use of the Decision Support Service standard provides a portable and scalable approach to clinical decision support that could facilitate the more extensive use of decision support systems.
Evaluating concentration estimation errors in ELISA microarray experiments
DOE Office of Scientific and Technical Information (OSTI.GOV)
Daly, Don S.; White, Amanda M.; Varnum, Susan M.
Enzyme-linked immunosorbent assay (ELISA) is a standard immunoassay to predict a protein concentration in a sample. Deploying ELISA in a microarray format permits simultaneous prediction of the concentrations of numerous proteins in a small sample. These predictions, however, are uncertain due to processing error and biological variability. Evaluating prediction error is critical to interpreting biological significance and improving the ELISA microarray process. Evaluating prediction error must be automated to realize a reliable high-throughput ELISA microarray system. Methods: In this paper, we present a statistical method based on propagation of error to evaluate prediction errors in the ELISA microarray process. Although propagation of error is central to this method, it is effective only when comparable data are available. Therefore, we briefly discuss the roles of experimental design, data screening, normalization and statistical diagnostics when evaluating ELISA microarray prediction errors. We use an ELISA microarray investigation of breast cancer biomarkers to illustrate the evaluation of prediction errors. The illustration begins with a description of the design and resulting data, followed by a brief discussion of data screening and normalization. In our illustration, we fit a standard curve to the screened and normalized data, review the modeling diagnostics, and apply propagation of error.
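As a hedged illustration of the propagation-of-error step, the sketch below uses the classical linear-calibration formula for inverse prediction from a fitted standard curve; the study's ELISA curves may well be nonlinear (e.g., four-parameter logistic), and the data are invented.

```python
# Propagation of error for a concentration predicted from a fitted standard curve,
# using the classical linear-calibration formula on log-transformed data as an
# illustration of the general approach; all numbers are invented.
import numpy as np

conc = np.array([0.1, 0.3, 1.0, 3.0, 10.0, 30.0])        # known standards, ng/mL
signal = np.array([120, 300, 900, 2500, 7800, 21000])    # measured intensities (a.u.)

x, y = np.log(conc), np.log(signal)
b, a = np.polyfit(x, y, 1)                                # fitted curve: y = a + b*x
resid_var = np.var(y - (a + b * x), ddof=2)               # scatter about the curve, s^2
n, x_bar, sxx = x.size, x.mean(), np.sum((x - x.mean()) ** 2)

def predict_conc(signal_new, m_replicates=1):
    """Inverse prediction with its propagated standard error (delta method on log scale)."""
    x0 = (np.log(signal_new) - a) / b
    se_x0 = (np.sqrt(resid_var) / abs(b)) * np.sqrt(
        1.0 / m_replicates + 1.0 / n + (x0 - x_bar) ** 2 / sxx)
    return np.exp(x0), np.exp(x0) * se_x0                 # back-transform; SE via delta method

c_hat, c_se = predict_conc(1500.0, m_replicates=2)
print(f"predicted concentration: {c_hat:.2f} ng/mL (SE {c_se:.2f})")
```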
WE-H-BRC-05: Catastrophic Error Metrics for Radiation Therapy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Murphy, S; Molloy, J
Purpose: Intuitive evaluation of complex radiotherapy treatments is impractical, while data transfer anomalies create the potential for catastrophic treatment delivery errors. Contrary to prevailing wisdom, logical scrutiny can be applied to patient-specific machine settings. Such tests can be automated, applied at the point of treatment delivery and can be dissociated from prior states of the treatment plan, potentially revealing errors introduced early in the process. Methods: Analytical metrics were formulated for conventional and intensity modulated RT (IMRT) treatments. These were designed to assess consistency between monitor unit settings, wedge values, prescription dose and leaf positioning (IMRT). Institutional metric averages for 218 clinical plans were stratified over multiple anatomical sites. Treatment delivery errors were simulated using a commercial treatment planning system and metric behavior assessed via receiver-operator-characteristic (ROC) analysis. A positive result was returned if the erred plan metric value exceeded a given number of standard deviations, e.g. 2. The finding was declared true positive if the dosimetric impact exceeded 25%. ROC curves were generated over a range of metric standard deviations. Results: Data for the conventional treatment metric indicated standard deviations of 3%, 12%, 11%, 8%, and 5% for brain, pelvis, abdomen, lung and breast sites, respectively. Optimum error declaration thresholds yielded true positive rates (TPR) between 0.7 and 1, and false positive rates (FPR) between 0 and 0.2. Two proposed IMRT metrics possessed standard deviations of 23% and 37%. The superior metric returned TPR and FPR of 0.7 and 0.2, respectively, when both leaf position and MUs were modelled. Isolation to only leaf position errors yielded TPR and FPR values of 0.9 and 0.1. Conclusion: Logical tests can reveal treatment delivery errors and prevent large, catastrophic errors. Analytical metrics are able to identify errors in monitor units, wedging and leaf positions with favorable sensitivity and specificity. In part by Varian.
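The error-declaration logic can be sketched as follows: a plan is flagged when its consistency metric deviates from the institutional mean by more than a chosen number of standard deviations, and each threshold is characterised by the resulting true- and false-positive rates. All values below are synthetic stand-ins for the clinical data.

```python
# Threshold-based error declaration and its ROC-style characterisation: flag a plan
# when its metric deviates from the institutional mean by more than k standard
# deviations; sweep k and report true- and false-positive rates. Synthetic data.
import numpy as np

rng = np.random.default_rng(3)

site_mean, site_sd = 1.00, 0.08                      # institutional metric statistics
metrics_ok = rng.normal(site_mean, site_sd, 400)     # plans without significant errors
metrics_err = rng.normal(1.45, 0.20, 40)             # plans with injected transfer errors

def rates(threshold_sd):
    flag_ok = np.abs(metrics_ok - site_mean) > threshold_sd * site_sd
    flag_err = np.abs(metrics_err - site_mean) > threshold_sd * site_sd
    return flag_err.mean(), flag_ok.mean()           # (TPR, FPR)

for k in (1.0, 2.0, 3.0):                            # sweep thresholds, as in an ROC analysis
    tpr, fpr = rates(k)
    print(f"{k:.0f} SD threshold: TPR = {tpr:.2f}, FPR = {fpr:.2f}")
```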
The effect of monetary punishment on error evaluation in a Go/No-go task.
Maruo, Yuya; Sommer, Werner; Masaki, Hiroaki
2017-10-01
Little is known about the effects of the motivational significance of errors in Go/No-go tasks. We investigated the impact of monetary punishment on the error-related negativity (ERN) and error positivity (Pe) for both overt errors and partial errors, that is, no-go trials without overt responses but with covert muscle activities. We compared high and low punishment conditions where errors were penalized with 50 or 5 yen, respectively, and a control condition without monetary consequences for errors. Because we hypothesized that the partial-error ERN might overlap with the no-go N2, we compared ERPs between correct rejections (i.e., successful no-go trials) and partial errors in no-go trials. We also expected that Pe amplitudes should increase with the severity of the penalty for errors. Mean error rates were significantly lower in the high punishment than in the control condition. Monetary punishment did not influence the overt-error ERN and partial-error ERN in no-go trials. The ERN in no-go trials did not differ between partial errors and overt errors; in addition, ERPs for correct rejections in no-go trials without partial errors were of the same size as in go-trial. Therefore the overt-error ERN and the partial-error ERN may share similar error monitoring processes. Monetary punishment increased Pe amplitudes for overt errors, suggesting enhanced error evaluation processes. For partial errors an early Pe was observed, presumably representing inhibition processes. Interestingly, even partial errors elicited the Pe, suggesting that covert erroneous activities could be detected in Go/No-go tasks. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.
Contingent negative variation (CNV) associated with sensorimotor timing error correction.
Jang, Joonyong; Jones, Myles; Milne, Elizabeth; Wilson, Daniel; Lee, Kwang-Hyuk
2016-02-15
Detection and subsequent correction of sensorimotor timing errors are fundamental to adaptive behavior. Using scalp-recorded event-related potentials (ERPs), we sought to find ERP components that are predictive of error correction performance during rhythmic movements. Healthy right-handed participants were asked to synchronize their finger taps to a regular tone sequence (every 600 ms), while EEG data were continuously recorded. Data from 15 participants were analyzed. Occasional irregularities were built into stimulus presentation timing: 90 ms before (advances: negative shift) or after (delays: positive shift) the expected time point. A tapping condition alternated with a listening condition in which identical stimulus sequence was presented but participants did not tap. Behavioral error correction was observed immediately following a shift, with a degree of over-correction with positive shifts. Our stimulus-locked ERP data analysis revealed, 1) increased auditory N1 amplitude for the positive shift condition and decreased auditory N1 modulation for the negative shift condition; and 2) a second enhanced negativity (N2) in the tapping positive condition, compared with the tapping negative condition. In response-locked epochs, we observed a CNV (contingent negative variation)-like negativity with earlier latency in the tapping negative condition compared with the tapping positive condition. This CNV-like negativity peaked at around the onset of subsequent tapping, with the earlier the peak, the better the error correction performance with the negative shifts while the later the peak, the better the error correction performance with the positive shifts. This study showed that the CNV-like negativity was associated with the error correction performance during our sensorimotor synchronization study. Auditory N1 and N2 were differentially involved in negative vs. positive error correction. However, we did not find evidence for their involvement in behavioral error correction. Overall, our study provides the basis from which further research on the role of the CNV in perceptual and motor timing can be developed. Copyright © 2015 Elsevier Inc. All rights reserved.
Huang, Kuo-Chen; Wang, Hsiu-Feng; Chen, Chun-Ching
2010-06-01
Effects of shape, size, and chromaticity of stimuli on participants' errors when estimating the size of simultaneously presented standard and comparison stimuli were examined. 48 Taiwanese college students ages 20 to 24 years old (M = 22.3, SD = 1.3) participated. Analysis showed that the error for estimated size was significantly greater for those in the low-vision group than for those in the normal-vision and severe-myopia groups. The errors were significantly greater with green and blue stimuli than with red stimuli. Circular stimuli produced smaller mean errors than did square stimuli. The actual size of the standard stimulus significantly affected the error for estimated size. Errors for estimations using smaller sizes were significantly higher than when the sizes were larger. Implications of the results for graphics-based interface design, particularly when taking account of visually impaired users, are discussed.
Mansourian, Robert; Mutch, David M; Antille, Nicolas; Aubert, Jerome; Fogel, Paul; Le Goff, Jean-Marc; Moulin, Julie; Petrov, Anton; Rytz, Andreas; Voegel, Johannes J; Roberts, Matthew-Alan
2004-11-01
Microarray technology has become a powerful research tool in many fields of study; however, the cost of microarrays often results in the use of a low number of replicates (k). Under circumstances where k is low, it becomes difficult to perform standard statistical tests to extract the most biologically significant experimental results. Other more advanced statistical tests have been developed; however, their use and interpretation often remain difficult to implement in routine biological research. The present work outlines a method that achieves sufficient statistical power for selecting differentially expressed genes under conditions of low k, while remaining as an intuitive and computationally efficient procedure. The present study describes a Global Error Assessment (GEA) methodology to select differentially expressed genes in microarray datasets, and was developed using an in vitro experiment that compared control and interferon-gamma treated skin cells. In this experiment, up to nine replicates were used to confidently estimate error, thereby enabling methods of different statistical power to be compared. Gene expression results of a similar absolute expression are binned, so as to enable a highly accurate local estimate of the mean squared error within conditions. The model then relates variability of gene expression in each bin to absolute expression levels and uses this in a test derived from the classical ANOVA. The GEA selection method is compared with both the classical and permutational ANOVA tests, and demonstrates an increased stability, robustness and confidence in gene selection. A subset of the selected genes were validated by real-time reverse transcription-polymerase chain reaction (RT-PCR). All these results suggest that GEA methodology is (i) suitable for selection of differentially expressed genes in microarray data, (ii) intuitive and computationally efficient and (iii) especially advantageous under conditions of low k. The GEA code for R software is freely available upon request to authors.
Fan, Rong; He, Tao; Qiu, Yan; Di, Yu-Lan; Xu, Su-yun; Li, Yao-yu
2012-01-01
To evaluate the differences of wavefront aberrations under cycloplegic, scotopic and photopic conditions. A total of 174 eyes of 105 patients were measured using the wavefront sensor (WaveScan® 3.62) under different pupil conditions: cycloplegic 8.58 ± 0.54 mm (6.4 mm - 9.5 mm), scotopic 7.53 ± 0.69 mm (5.7 mm - 9.1 mm) and photopic 6.08 ± 1.14 mm (4.1 mm - 8.8 mm). The pupil diameter, standard Zernike coefficients, root mean square of higher-order aberrations and dominant aberrations were compared between cycloplegic and scotopic conditions, and between scotopic and photopic conditions. The pupil diameter was 7.53 ± 0.69 mm under the scotopic condition, which reached the requirement of about 6.5 mm optical zone design in the wavefront-guided surgery and prevented measurement error due to the pupil centroid shift caused by mydriatics. Pharmacological pupil dilation induced increase of standard Zernike coefficients Z(3)(-3), Z(4)(0) and Z(5)(-5). The higher-order aberrations, third-order aberration, fourth-order aberration, fifth-order aberration, sixth-order aberration, and spherical aberration increased statistically significantly, compared to the scotopic condition (P<0.010). When the scotopic condition shifted to the photopic condition, the standard Zernike coefficients Z(4)(0), Z(4)(2), Z(6)(-4), Z(6)(-2), Z(6)(2) decreased and all the higher-order aberrations decreased statistically significantly (P<0.010), demonstrating that accommodative miosis can significantly improve vision under the photopic condition. Under the three conditions, the vertical coma aberration appears the most frequently within the dominant aberrations without significant effect by pupil size variance, and the proportion of spherical aberrations decreased with the decrease of the pupil size. The wavefront aberrations are significantly different under cycloplegic, scotopic and photopic conditions. Using the wavefront sensor (VISX WaveScan) to measure scotopic wavefront aberrations is feasible for the wavefront-guided refractive surgery.
Degrees of Freedom for Allan Deviation Estimates of Multiple Clocks
2016-04-01
Allan deviation will be represented by σ and standard deviation will be represented by δ. In practice, when the Allan deviation of a … the Allan deviation of standard noise types. Once the number of degrees of freedom is known, an approximate confidence interval can be assigned by … measurement errors from paired difference data. We extend this approach by using the Allan deviation to estimate the error in a frequency standard
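As background for the snippet above, a minimal sketch of the (non-overlapping) Allan deviation computation is given below; the sampling rate, averaging times, and test signal are assumptions, and production analyses would normally use overlapping estimators or an established library such as allantools.

```python
import numpy as np

def allan_deviation(y, taus, rate=1.0):
    """Non-overlapping Allan deviation of fractional-frequency data y sampled at `rate` Hz.

    For each averaging time tau the data are averaged in contiguous blocks and
    sigma_y^2(tau) = 1/(2(M-1)) * sum((ybar[i+1] - ybar[i])^2) over the M block means."""
    y = np.asarray(y, dtype=float)
    out = []
    for tau in taus:
        m = int(round(tau * rate))          # samples per averaging block
        M = len(y) // m                     # number of block averages
        if M < 2:
            out.append(np.nan)
            continue
        ybar = y[:M * m].reshape(M, m).mean(axis=1)
        avar = np.mean(np.diff(ybar) ** 2) / 2.0
        out.append(np.sqrt(avar))
    return np.array(out)

# Example: white frequency noise should show sigma_y(tau) ~ tau**-0.5
rng = np.random.default_rng(0)
print(allan_deviation(rng.normal(size=100_000), taus=[1, 10, 100]))
```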
Disclosure of Medical Errors: What Factors Influence How Patients Respond?
Mazor, Kathleen M; Reed, George W; Yood, Robert A; Fischer, Melissa A; Baril, Joann; Gurwitz, Jerry H
2006-01-01
BACKGROUND Disclosure of medical errors is encouraged, but research on how patients respond to specific practices is limited. OBJECTIVE This study sought to determine whether full disclosure, an existing positive physician-patient relationship, an offer to waive associated costs, and the severity of the clinical outcome influenced patients' responses to medical errors. PARTICIPANTS Four hundred and seven health plan members participated in a randomized experiment in which they viewed video depictions of medical error and disclosure. DESIGN Subjects were randomly assigned to experimental condition. Conditions varied in type of medication error, level of disclosure, reference to a prior positive physician-patient relationship, an offer to waive costs, and clinical outcome. MEASURES Self-reported likelihood of changing physicians and of seeking legal advice; satisfaction, trust, and emotional response. RESULTS Nondisclosure increased the likelihood of changing physicians, and reduced satisfaction and trust in both error conditions. Nondisclosure increased the likelihood of seeking legal advice and was associated with a more negative emotional response in the missed allergy error condition, but did not have a statistically significant impact on seeking legal advice or emotional response in the monitoring error condition. Neither the existence of a positive relationship nor an offer to waive costs had a statistically significant impact. CONCLUSIONS This study provides evidence that full disclosure is likely to have a positive effect or no effect on how patients respond to medical errors. The clinical outcome also influences patients' responses. The impact of an existing positive physician-patient relationship, or of waiving costs associated with the error remains uncertain. PMID:16808770
Impact of Standardized Communication Techniques on Errors during Simulated Neonatal Resuscitation.
Yamada, Nicole K; Fuerch, Janene H; Halamek, Louis P
2016-03-01
Current patterns of communication in high-risk clinical situations, such as resuscitation, are imprecise and prone to error. We hypothesized that the use of standardized communication techniques would decrease the errors committed by resuscitation teams during neonatal resuscitation. In a prospective, single-blinded, matched pairs design with block randomization, 13 subjects performed as a lead resuscitator in two simulated complex neonatal resuscitations. Two nurses assisted each subject during the simulated resuscitation scenarios. In one scenario, the nurses used nonstandard communication; in the other, they used standardized communication techniques. The performance of the subjects was scored to determine errors committed (defined relative to the Neonatal Resuscitation Program algorithm), time to initiation of positive pressure ventilation (PPV), and time to initiation of chest compressions (CC). In scenarios in which subjects were exposed to standardized communication techniques, there was a trend toward decreased error rate, time to initiation of PPV, and time to initiation of CC. While not statistically significant, there was a 1.7-second improvement in time to initiation of PPV and a 7.9-second improvement in time to initiation of CC. Should these improvements in human performance be replicated in the care of real newborn infants, they could improve patient outcomes and enhance patient safety.
Visuomotor adaptation needs a validation of prediction error by feedback error
Gaveau, Valérie; Prablanc, Claude; Laurent, Damien; Rossetti, Yves; Priot, Anne-Emmanuelle
2014-01-01
The processes underlying short-term plasticity induced by visuomotor adaptation to a shifted visual field are still debated. Two main sources of error can induce motor adaptation: reaching feedback errors, which correspond to visually perceived discrepancies between hand and target positions, and errors between predicted and actual visual reafferences of the moving hand. These two sources of error are closely intertwined and difficult to disentangle, as both the target and the reaching limb are simultaneously visible. Accordingly, the goal of the present study was to clarify the relative contributions of these two types of errors during a pointing task under prism-displaced vision. In the “terminal feedback error” condition, viewing of the hand by subjects was allowed only at movement end, simultaneously with viewing of the target. In the “movement prediction error” condition, viewing of the hand was limited to movement duration, in the absence of any visual target, and error signals arose solely from comparisons between predicted and actual reafferences of the hand. In order to prevent intentional corrections of errors, a subthreshold, progressive stepwise increase in prism deviation was used, so that subjects remained unaware of the visual deviation applied in both conditions. An adaptive aftereffect was observed in the “terminal feedback error” condition only. As long as subjects remained unaware of the optical deviation and self-assigned their pointing errors, prediction error alone was insufficient to induce adaptation. These results indicate a critical role of hand-to-target feedback error signals in visuomotor adaptation; consistent with recent neurophysiological findings, they suggest that a combination of feedback and prediction error signals is necessary for eliciting aftereffects. They also suggest that feedback error updates the prediction of reafferences when a visual perturbation is introduced gradually and cognitive factors are eliminated or strongly attenuated. PMID:25408644
Learning to Fail in Aphasia: An Investigation of Error Learning in Naming
Middleton, Erica L.; Schwartz, Myrna F.
2013-01-01
Purpose To determine if the naming impairment in aphasia is influenced by error learning and if error learning is related to the type of retrieval strategy. Method Nine participants with aphasia and ten neurologically-intact controls named familiar proper noun concepts. When experiencing tip-of-the-tongue naming failure (TOT) in an initial TOT-elicitation phase, participants were instructed to adopt phonological or semantic self-cued retrieval strategies. In the error learning manipulation, items evoking TOT states during TOT-elicitation were randomly assigned to a short or long time condition where participants were encouraged to continue to try to retrieve the name for either 20 seconds (short interval) or 60 seconds (long interval). The incidence of TOT on the same items was measured on a post-test after 48 hours. Error learning was defined as a higher rate of recurrent TOTs (TOT at both TOT-elicitation and post-test) for items assigned to the long (versus short) time condition. Results In the phonological condition, participants with aphasia showed error learning whereas controls showed a pattern opposite to error learning. There was no evidence for error learning in the semantic condition for either group. Conclusion Error learning is operative in aphasia, but dependent on the type of strategy employed during naming failure. PMID:23816662
Arifin, Nooranida; Abu Osman, Noor Azuan; Wan Abas, Wan Abu Bakar
2014-04-01
The measurements of postural balance often involve measurement error, which affects the analysis and interpretation of the outcomes. In most of the existing clinical rehabilitation research, the ability to produce reliable measures is a prerequisite for an accurate assessment of an intervention after a period of time. Although clinical balance assessment has been performed in previous studies, none has determined the intrarater test-retest reliability of static and dynamic stability indexes during dominant single stance. In this study, one rater examined 20 healthy university students (female=12, male=8) in two sessions separated by a 7-day interval. Three stability indexes--the overall stability index (OSI), anterior/posterior stability index (APSI), and medial/lateral stability index (MLSI) in static and dynamic conditions--were measured during single dominant stance. Intraclass correlation coefficient (ICC), standard error of measurement (SEM) and 95% confidence interval (95% CI) were calculated. Test-retest ICCs for OSI, APSI, and MLSI were 0.85, 0.78, and 0.84 during the static condition and were 0.77, 0.77, and 0.65 during the dynamic condition, respectively. We concluded that the postural stability assessment using the Biodex stability system demonstrates good-to-excellent test-retest reliability over a 1-week interval.
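A small sketch of how a standard error of measurement and minimal detectable change can be derived from test-retest data and an ICC is shown below; the formula choice (SEM = SD·sqrt(1 − ICC)) and variable names are common conventions rather than the exact procedure used in the study.

```python
import numpy as np

def sem_from_icc(scores_session1, scores_session2, icc):
    """Standard error of measurement from test-retest data and an ICC value.

    A common formula is SEM = SD * sqrt(1 - ICC), with SD the between-subject
    standard deviation pooled over the two sessions. The minimal detectable
    change at 95% confidence is MDC95 = 1.96 * sqrt(2) * SEM. The ICC itself
    is assumed to come from an ANOVA / mixed-model routine (e.g. pingouin.intraclass_corr)."""
    scores = np.concatenate([scores_session1, scores_session2])
    sd = scores.std(ddof=1)
    sem = sd * np.sqrt(1.0 - icc)
    mdc95 = 1.96 * np.sqrt(2.0) * sem
    return sem, mdc95
```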
Calvo, Roque; D’Amato, Roberto; Gómez, Emilio; Domingo, Rosario
2016-01-01
Coordinate measuring machines (CMM) are main instruments of measurement in laboratories and in industrial quality control. A compensation error model has been formulated (Part I). It integrates error and uncertainty in the feature measurement model. Experimental implementation for the verification of this model is carried out based on the direct testing on a moving bridge CMM. The regression results by axis are quantified and compared to CMM indication with respect to the assigned values of the measurand. Next, testing of selected measurements of length, flatness, dihedral angle, and roundness features are accomplished. The measurement of calibrated gauge blocks for length or angle, flatness verification of the CMM granite table and roundness of a precision glass hemisphere are presented under a setup of repeatability conditions. The results are analysed and compared with alternative methods of estimation. The overall performance of the model is endorsed through experimental verification, as well as the practical use and the model capability to contribute in the improvement of current standard CMM measuring capabilities. PMID:27754441
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tarazona, David; Berz, Martin; Hipple, Robert
The main goal of the Muon g-2 Experiment (g-2) at Fermilab is to measure the muon anomalous magnetic moment to unprecedented precision. This new measurement will allow testing of the completeness of the Standard Model (SM) and validation of other theoretical models beyond the SM. The close interplay of the understanding of particle beam dynamics and the preparation of the beam properties with the experimental measurement is paramount to the reduction of systematic errors in the determination of the muon anomalous magnetic moment. We describe progress in developing detailed calculations and modeling of the muon beam delivery system in order to obtain a better understanding of spin-orbit correlations, nonlinearities, and more realistic aspects that contribute to the systematic errors of the g-2 measurement. Our simulation is meant to provide statistical studies of error effects and quick analyses of running conditions for when g-2 is taking beam, among others. We are using COSY, a differential algebra solver developed at Michigan State University that will also serve as an alternative to compare results obtained by other simulation teams of the g-2 Collaboration.
Adjoint-Based Mesh Adaptation for the Sonic Boom Signature Loudness
NASA Technical Reports Server (NTRS)
Rallabhandi, Sriram K.; Park, Michael A.
2017-01-01
The mesh adaptation functionality of FUN3D is utilized to obtain a mesh optimized to calculate sonic boom ground signature loudness. During this process, the coupling between the discrete-adjoints of the computational fluid dynamics tool FUN3D and the atmospheric propagation tool sBOOM is exploited to form the error estimate. This new mesh adaptation methodology will allow generation of suitable meshes adapted to reduce the estimated errors in the ground loudness, which is an optimization metric employed in supersonic aircraft design. This new output-based adaptation could allow new insights into meshing for sonic boom analysis and design, and complements existing output-based adaptation techniques such as adaptation to reduce estimated errors in off-body pressure functional. This effort could also have implications for other coupled multidisciplinary adjoint capabilities (e.g., aeroelasticity) as well as inclusion of propagation specific parameters such as prevailing winds or non-standard atmospheric conditions. Results are discussed in the context of existing methods and appropriate conclusions are drawn as to the efficacy and efficiency of the developed capability.
NASA Technical Reports Server (NTRS)
Knox, C. E.
1984-01-01
A simple airborne flight management descent algorithm designed to define a flight profile subject to the constraints of using idle thrust, a clean airplane configuration (landing gear up, flaps zero, and speed brakes retracted), and fixed-time end conditions was developed and flight tested in the NASA TSRV B-737 research airplane. The research test flights, conducted in the Denver ARTCC automated time-based metering LFM/PD ATC environment, demonstrated that time guidance and control in the cockpit was acceptable to the pilots and ATC controllers and resulted in arrival of the airplane over the metering fix with standard deviations in airspeed error of 6.5 knots, in altitude error of 23.7 m (77.8 ft), and in arrival time accuracy of 12 sec. These accuracies indicated a good representation of airplane performance and wind modeling. Fuel savings will be obtained on a fleet-wide basis through a reduction of the time error dispersions at the metering fix and on a single-airplane basis by presenting the pilot with guidance for a fuel-efficient descent.
Blöchliger, Nicolas; Keller, Peter M; Böttger, Erik C; Hombach, Michael
2017-09-01
The procedure for setting clinical breakpoints (CBPs) for antimicrobial susceptibility has been poorly standardized with respect to population data, pharmacokinetic parameters and clinical outcome. Tools to standardize CBP setting could result in improved antibiogram forecast probabilities. We propose a model to estimate probabilities for methodological categorization errors and defined zones of methodological uncertainty (ZMUs), i.e. ranges of zone diameters that cannot reliably be classified. The impact of ZMUs on methodological error rates was used for CBP optimization. The model distinguishes theoretical true inhibition zone diameters from observed diameters, which suffer from methodological variation. True diameter distributions are described with a normal mixture model. The model was fitted to observed inhibition zone diameters of clinical Escherichia coli strains. Repeated measurements for a quality control strain were used to quantify methodological variation. For 9 of 13 antibiotics analysed, our model predicted error rates of < 0.1% applying current EUCAST CBPs. Error rates were > 0.1% for ampicillin, cefoxitin, cefuroxime and amoxicillin/clavulanic acid. Increasing the susceptible CBP (cefoxitin) and introducing ZMUs (ampicillin, cefuroxime, amoxicillin/clavulanic acid) decreased error rates to < 0.1%. ZMUs contained low numbers of isolates for ampicillin and cefuroxime (3% and 6%), whereas the ZMU for amoxicillin/clavulanic acid contained 41% of all isolates and was considered not practical. We demonstrate that CBPs can be improved and standardized by minimizing methodological categorization error rates. ZMUs may be introduced if an intermediate zone is not appropriate for pharmacokinetic/pharmacodynamic or drug dosing reasons. Optimized CBPs will provide a standardized antibiotic susceptibility testing interpretation at a defined level of probability. © The Author 2017. Published by Oxford University Press on behalf of the British Society for Antimicrobial Chemotherapy. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
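The following Monte Carlo sketch illustrates the general idea of the model described above (true diameters from a fitted normal mixture, observed diameters with added methodological noise, and the resulting categorization error rate at a breakpoint); the mixture settings, breakpoint, and noise level are illustrative assumptions, not the published model.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def categorization_error_rate(diameters_mm, breakpoint_mm, method_sd_mm,
                              n_components=2, n_sim=200_000, seed=0):
    """Monte Carlo sketch: true inhibition-zone diameters follow a normal
    mixture, observed diameters add methodological noise, and the error rate is
    the probability that the true and observed diameters fall on opposite sides
    of the susceptible breakpoint. Illustrative only."""
    rng = np.random.default_rng(seed)
    gmm = GaussianMixture(n_components=n_components, random_state=seed)
    gmm.fit(np.asarray(diameters_mm, dtype=float).reshape(-1, 1))
    true_d, _ = gmm.sample(n_sim)                      # draw "true" diameters
    true_d = true_d.ravel()
    observed = true_d + rng.normal(0.0, method_sd_mm, size=n_sim)
    miscategorized = (true_d >= breakpoint_mm) != (observed >= breakpoint_mm)
    return miscategorized.mean()
```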
Cost-effectiveness of the Federal stream-gaging program in Virginia
Carpenter, D.H.
1985-01-01
Data uses and funding sources were identified for the 77 continuous stream gages currently being operated in Virginia by the U.S. Geological Survey with a budget of $446,000. Two stream gages were identified as not being used sufficiently to warrant continuing their operation. Operation of these stations should be considered for discontinuation. Data collected at two other stations were identified as having uses primarily related to short-term studies; these stations should also be considered for discontinuation at the end of the data collection phases of the studies. The remaining 73 stations should be kept in the program for the foreseeable future. The current policy for operation of the 77-station program requires a budget of $446,000/yr. The average standard error of estimation of streamflow records is 10.1%. It was shown that this overall level of accuracy at the 77 sites could be maintained with a budget of $430,500 if resources were redistributed among the gages. A minimum budget of $428,500 is required to operate the 77-gage program; a smaller budget would not permit proper service and maintenance of the gages and recorders. At the minimum budget, with optimized operation, the average standard error would be 10.4%. The maximum budget analyzed was $650,000, which resulted in an average standard error of 5.5%. The study indicates that a major component of error is caused by lost or missing data. If perfect equipment were available, the standard error for the current program and budget could be reduced to 7.6%. This also can be interpreted to mean that the streamflow data have a standard error of this magnitude during times when the equipment is operating properly. (Author's abstract)
Bootstrap Standard Error Estimates in Dynamic Factor Analysis
ERIC Educational Resources Information Center
Zhang, Guangjian; Browne, Michael W.
2010-01-01
Dynamic factor analysis summarizes changes in scores on a battery of manifest variables over repeated measurements in terms of a time series in a substantially smaller number of latent factors. Algebraic formulae for standard errors of parameter estimates are more difficult to obtain than in the usual intersubject factor analysis because of the…
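Although the abstract is truncated, the core idea of a bootstrap standard error can be illustrated with the short sketch below; the i.i.d. resampling shown is the simplest case and would need block or model-based resampling for the dependent data used in dynamic factor analysis.

```python
import numpy as np

def bootstrap_se(data, estimator, n_boot=2000, seed=0):
    """Generic nonparametric bootstrap standard error of a parameter estimate:
    resample the observations with replacement, recompute the estimate, and take
    the standard deviation of the bootstrap replicates. (Time-series / dynamic
    factor models need block or model-based resampling to respect dependence.)"""
    rng = np.random.default_rng(seed)
    data = np.asarray(data)
    n = len(data)
    replicates = np.array([estimator(data[rng.integers(0, n, n)]) for _ in range(n_boot)])
    return replicates.std(ddof=1)

# Example: bootstrap SE of a sample mean vs. the analytic s/sqrt(n)
x = np.random.default_rng(1).normal(size=200)
print(bootstrap_se(x, np.mean), x.std(ddof=1) / np.sqrt(len(x)))
```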
Standard Errors of Equating Differences: Prior Developments, Extensions, and Simulations
ERIC Educational Resources Information Center
Moses, Tim; Zhang, Wenmin
2011-01-01
The purpose of this article was to extend the use of standard errors for equated score differences (SEEDs) to traditional equating functions. The SEEDs are described in terms of their original proposal for kernel equating functions and extended so that SEEDs for traditional linear and traditional equipercentile equating functions can be computed.…
Progress in the improved lattice calculation of direct CP-violation in the Standard Model
NASA Astrophysics Data System (ADS)
Kelly, Christopher
2018-03-01
We discuss the ongoing effort by the RBC & UKQCD collaborations to improve our lattice calculation of the measure of Standard Model direct CP violation, ɛ', with physical kinematics. We present our progress in decreasing the (dominant) statistical error and discuss other related activities aimed at reducing the systematic errors.
The Development of MST Test Information for the Prediction of Test Performances
ERIC Educational Resources Information Center
Park, Ryoungsun; Kim, Jiseon; Chung, Hyewon; Dodd, Barbara G.
2017-01-01
The current study proposes novel methods to predict multistage testing (MST) performance without conducting simulations. This method, called MST test information, is based on analytic derivation of standard errors of ability estimates across theta levels. We compared standard errors derived analytically to the simulation results to demonstrate the…
ERIC Educational Resources Information Center
National Center for Education Statistics, 2010
2010-01-01
This paper presents the supplemental figures, tables, and standard error tables for the report "Student Financing of Undergraduate Education: 2007-08. Web Tables. NCES 2010-162." (Contains 6 figures and 10 tables.) [For the main report, see ED511828.]
DOE Office of Scientific and Technical Information (OSTI.GOV)
Newman, Jennifer F.; Clifton, Andrew
Currently, cup anemometers on meteorological towers are used to measure wind speeds and turbulence intensity to make decisions about wind turbine class and site suitability; however, as modern turbine hub heights increase and wind energy expands to complex and remote sites, it becomes more difficult and costly to install meteorological towers at potential sites. As a result, remote-sensing devices (e.g., lidars) are now commonly used by wind farm managers and researchers to estimate the flow field at heights spanned by a turbine. Although lidars can accurately estimate mean wind speeds and wind directions, there is still a large amount of uncertainty surrounding the measurement of turbulence using these devices. Errors in lidar turbulence estimates are caused by a variety of factors, including instrument noise, volume averaging, and variance contamination, in which the magnitude of these factors is highly dependent on measurement height and atmospheric stability. As turbulence has a large impact on wind power production, errors in turbulence measurements will translate into errors in wind power prediction. The impact of using lidars rather than cup anemometers for wind power prediction must be understood if lidars are to be considered a viable alternative to cup anemometers. In this poster, the sensitivity of power prediction error to typical lidar turbulence measurement errors is assessed. Turbulence estimates from a vertically profiling WINDCUBE v2 lidar are compared to high-resolution sonic anemometer measurements at field sites in Oklahoma and Colorado to determine the degree of lidar turbulence error that can be expected under different atmospheric conditions. These errors are then incorporated into a power prediction model to estimate the sensitivity of power prediction error to turbulence measurement error. Power prediction models, including the standard binning method and a random forest method, were developed using data from the aeroelastic simulator FAST for a 1.5 MW turbine. The impact of lidar turbulence error on the predicted power from these different models is examined to determine the degree of turbulence measurement accuracy needed for accurate power prediction.
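As an illustration of the "standard binning method" mentioned above, the sketch below builds an IEC-style method-of-bins power curve and uses it for prediction; the bin width, the interpolation step, and the absence of a turbulence-intensity dimension are simplifying assumptions relative to the poster's models.

```python
import numpy as np

def binned_power_curve(wind_speed, power, bin_width=0.5):
    """Method-of-bins power curve: average the measured power within fixed
    wind-speed bins. Bin width and turbine data are assumptions for illustration."""
    edges = np.arange(wind_speed.min(), wind_speed.max() + bin_width, bin_width)
    centers = 0.5 * (edges[:-1] + edges[1:])
    idx = np.digitize(wind_speed, edges) - 1
    curve = np.array([power[idx == i].mean() if np.any(idx == i) else np.nan
                      for i in range(len(centers))])
    return centers, curve

def predict_power(centers, curve, wind_speed):
    """Predict power by interpolating the binned power curve at measured speeds,
    skipping any empty (NaN) bins."""
    ok = ~np.isnan(curve)
    return np.interp(wind_speed, centers[ok], curve[ok])
```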
Model Error Estimation for the CPTEC Eta Model
NASA Technical Reports Server (NTRS)
Tippett, Michael K.; daSilva, Arlindo
1999-01-01
Statistical data assimilation systems require the specification of forecast and observation error statistics. Forecast error is due to model imperfections and differences between the initial condition and the actual state of the atmosphere. Practical four-dimensional variational (4D-Var) methods try to fit the forecast state to the observations and assume that the model error is negligible. Here, with a number of simplifying assumptions, a framework is developed for isolating the model error given the forecast error at two lead-times. Two definitions are proposed for the Talagrand ratio tau, the fraction of the forecast error due to model error rather than initial condition error. Data from the CPTEC Eta Model running operationally over South America are used to calculate forecast error statistics and lower bounds for tau.
Hashimoto, S; Murakami, Y; Taniguchi, K; Nagai, M
1999-12-01
Our purpose was to determine the number of monitoring stations (medical institutions) necessary for estimating incidence rates in the surveillance system of infectious diseases in Japan. Infectious diseases were selected by the type of monitoring stations: 15 diseases in pediatrics stations, influenza in influenza stations, 3 diseases in ophthalmology stations and 5 diseases in the stations of sexually transmitted diseases (STD). For each type of monitoring station, 5 cases of the number of monitoring stations in each health center, including the number determined from presently established standards and the actual number in 1997, were given. It was assumed that monitoring stations were randomly selected among medical institutions in health centers. For each infectious disease, each case and each type of monitoring station, standard error rates of estimated numbers of incidence cases in the whole country were calculated in 1993-1997 using the data of the surveillance of infectious diseases. Among the 5 cases of monitoring stations, the case that satisfied the condition that those standard error rates were lower than the critical values was selected. The critical values were 5% in pediatrics and influenza stations, and 10% in ophthalmology and STD stations. The numbers of monitoring stations in the selected cases were 3,000 in pediatrics stations, 5,000 in influenza stations (including all pediatrics stations), 605 in ophthalmology stations and 900 in STD stations.
Failures of Perception in the Low-Prevalence Effect: Evidence From Active and Passive Visual Search
Hout, Michael C.; Walenchok, Stephen C.; Goldinger, Stephen D.; Wolfe, Jeremy M.
2017-01-01
In visual search, rare targets are missed disproportionately often. This low-prevalence effect (LPE) is a robust problem with demonstrable societal consequences. What is the source of the LPE? Is it a perceptual bias against rare targets or a later process, such as premature search termination or motor response errors? In 4 experiments, we examined the LPE using standard visual search (with eye tracking) and 2 variants of rapid serial visual presentation (RSVP) in which observers made present/absent decisions after sequences ended. In all experiments, observers looked for 2 target categories (teddy bear and butterfly) simultaneously. To minimize simple motor errors, caused by repetitive absent responses, we held overall target prevalence at 50%, with 1 low-prevalence and 1 high-prevalence target type. Across conditions, observers either searched for targets among other real-world objects or searched for specific bears or butterflies among within-category distractors. We report 4 main results: (a) In standard search, high-prevalence targets were found more quickly and accurately than low-prevalence targets. (b) The LPE persisted in RSVP search, even though observers never terminated search on their own. (c) Eye-tracking analyses showed that high-prevalence targets elicited better attentional guidance and faster perceptual decisions. And (d) even when observers looked directly at low-prevalence targets, they often (12%–34% of trials) failed to detect them. These results strongly argue that low-prevalence misses represent failures of perception when early search termination or motor errors are controlled. PMID:25915073
Tirilazad mesylate protects stored erythrocytes against osmotic fragility.
Epps, D E; Knechtel, T J; Bacznskyj, O; Decker, D; Guido, D M; Buxser, S E; Mathews, W R; Buffenbarger, S L; Lutzke, B S; McCall, J M
1994-12-01
The hypoosmotic lysis curve of freshly collected human erythrocytes is consistent with a single Gaussian error function with a mean of 46.5 +/- 0.25 mM NaCl and a standard deviation of 5.0 +/- 0.4 mM NaCl. After extended storage of RBCs under standard blood bank conditions, the lysis curve conforms to the sum of two error functions rather than to a single error function with a shifted mean and broadened width. Thus, two distinct sub-populations with different fragilities are present instead of a single, broadly distributed population. One population is identical to the freshly collected erythrocytes, whereas the other population consists of osmotically fragile cells. The rate of generation of the new, osmotically fragile, population of cells was used to probe the hypothesis that lipid peroxidation is responsible for the induction of membrane fragility. If so, then the antioxidant tirilazad mesylate (U-74,006f) should protect against this degradation of stored erythrocytes. We found that tirilazad mesylate, at 17 microM (1.5 mol% with respect to membrane lecithin), significantly retards the formation of the osmotically fragile RBCs. Concomitantly, the concentration of free hemoglobin which accumulates during storage is markedly reduced by the drug. Since the presence of the drug also decreases the amount of F2-isoprostanes formed during the storage period, an antioxidant mechanism must be operative. These results demonstrate that tirilazad mesylate significantly decreases the number of fragile erythrocytes formed during storage in the blood bank.
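The two-population description of the stored-cell lysis curve can be expressed as a weighted sum of two cumulative Gaussian (error function) curves; the hedged sketch below fits such a model with scipy, using synthetic data, starting values, and bounds that are purely illustrative.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

def two_population_lysis(c_nacl, w, mu1, sd1, mu2, sd2):
    """Fraction of cells lysed at NaCl concentration c, modeled as the weighted
    sum of two cumulative Gaussian ('error function') curves: a normal population
    and an osmotically fragile one that lyses at higher NaCl concentrations."""
    return w * norm.cdf(mu1 - c_nacl, scale=sd1) + (1 - w) * norm.cdf(mu2 - c_nacl, scale=sd2)

# Illustrative fit on synthetic data; values are hypothetical, not from the study
c = np.linspace(20, 80, 25)                       # mM NaCl
observed = two_population_lysis(c, 0.8, 46.5, 5.0, 60.0, 5.0) + \
           np.random.default_rng(0).normal(0, 0.01, c.size)
p0 = [0.9, 46.5, 5.0, 60.0, 5.0]
bounds = ([0, 30, 1, 45, 1], [1, 60, 15, 80, 15])
params, _ = curve_fit(two_population_lysis, c, observed, p0=p0, bounds=bounds)
print(dict(zip(["w", "mu1", "sd1", "mu2", "sd2"], params)))
```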
Custom map projections for regional groundwater models
Kuniansky, Eve L.
2017-01-01
For regional groundwater flow models (areas greater than 100,000 km2), improper choice of map projection parameters can result in model error for boundary conditions dependent on area (recharge or evapotranspiration simulated by application of a rate using cell area from model discretization) and length (rivers simulated with head-dependent flux boundary). Smaller model areas can use local map coordinates, such as State Plane (United States) or Universal Transverse Mercator (correct zone) without introducing large errors. Map projections vary in order to preserve one or more of the following properties: area, shape, distance (length), or direction. Numerous map projections are developed for different purposes as all four properties cannot be preserved simultaneously. Preservation of area and length are most critical for groundwater models. The Albers equal-area conic projection with custom standard parallels, selected by dividing the length north to south by 6 and selecting standard parallels 1/6th above or below the southern and northern extent, preserves both area and length for continental areas in mid latitudes oriented east-west. Custom map projection parameters can also minimize area and length error in non-ideal projections. Additionally, one must also use consistent vertical and horizontal datums for all geographic data. The generalized polygon for the Floridan aquifer system study area (306,247.59 km2) is used to provide quantitative examples of the effect of map projections on length and area with different projections and parameter choices. Use of improper map projection is one model construction problem easily avoided.
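A short sketch of the "1/6 rule" for custom Albers standard parallels is given below using pyproj; the datum, bounding box, and example coordinates are assumptions for illustration, not values from the study.

```python
from pyproj import CRS, Transformer

def albers_crs_for_extent(lat_south, lat_north, lon_west, lon_east):
    """Build a custom Albers equal-area conic CRS using the 1/6 rule described
    above: standard parallels placed 1/6 of the north-south extent inside the
    southern and northern limits of the model area. Datum and units are assumptions."""
    span = lat_north - lat_south
    lat_1 = lat_south + span / 6.0       # first standard parallel
    lat_2 = lat_north - span / 6.0       # second standard parallel
    lon_0 = (lon_west + lon_east) / 2.0  # central meridian
    lat_0 = (lat_south + lat_north) / 2.0
    proj4 = (f"+proj=aea +lat_1={lat_1} +lat_2={lat_2} +lat_0={lat_0} "
             f"+lon_0={lon_0} +datum=NAD83 +units=m")
    return CRS.from_proj4(proj4)

# Example: rough, illustrative bounding box for a Floridan-aquifer-sized area
crs = albers_crs_for_extent(24.5, 34.0, -88.0, -79.5)
transformer = Transformer.from_crs("EPSG:4269", crs, always_xy=True)
print(transformer.transform(-82.0, 29.0))    # lon, lat -> x, y in meters
```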
Error model for the SAO 1969 standard earth.
NASA Technical Reports Server (NTRS)
Martin, C. F.; Roy, N. A.
1972-01-01
A method is developed for estimating an error model for geopotential coefficients using satellite tracking data. A single station's apparent timing error for each pass is attributed to geopotential errors. The root sum of the residuals for each station also depends on the geopotential errors, and these are used to select an error model. The model chosen is 1/4 of the difference between the SAO M1 and the APL 3.5 geopotential.
The clinical significance of 10-m walk test standardizations in Parkinson's disease.
Lindholm, Beata; Nilsson, Maria H; Hansson, Oskar; Hagell, Peter
2018-06-06
The 10-m walk test (10MWT) is a widely used measure of gait speed in Parkinson's disease (PD). However, it is unclear if different standardizations of its conduct impact test results. We examined the clinical significance of two aspects of the standardization of the 10MWT in mild PD: static vs. dynamic start, and a single vs. repeated trials. Implications for fall prediction were also explored. 151 people with PD (mean age and PD duration, 68 and 4 years, respectively) completed the 10MWT in comfortable gait speed with static and dynamic start (two trials each), and gait speed (m/s) was recorded. Participants then registered all prospective falls for 6 months. Absolute mean differences between outcomes from the various test conditions ranged between 0.016 and 0.040 m/s (effect sizes, 0.06-0.14) with high levels of agreement (intra-class correlation coefficients, 0.932-0.987) and small standard errors of measurement (0.032-0.076 m/s). Receiver operating characteristic curves showed similar discriminate abilities for prediction of future falls across conditions (areas under curves, 0.70-0.73). Cut-off points were estimated at 1.1-1.2 m/s. Different 10MWT standardizations yield very similar results, suggesting that there is no practical need for an acceleration distance or repeated trials when conducting this test in mild PD.
Bioelectrical impedance analysis: A new tool for assessing fish condition
Hartman, Kyle J.; Margraf, F. Joseph; Hafs, Andrew W.; Cox, M. Keith
2015-01-01
Bioelectrical impedance analysis (BIA) is commonly used in human health and nutrition fields but has only recently been considered as a potential tool for assessing fish condition. Once BIA is calibrated, it estimates fat/moisture levels and energy content without the need to kill fish. Despite the promise held by BIA, published studies have been divided on whether BIA can provide accurate estimates of body composition in fish. In cases where BIA was not successful, the models lacked the range of fat levels or sample sizes we determined were needed for model success (range of dry fat levels of 29%, n = 60, yielding an R2 of 0.8). Reduced range of fat levels requires an increased sample size to achieve that benchmark; therefore, standardization of methods is needed. Here we discuss standardized methods based on a decade of research, identify sources of error, discuss where BIA is headed, and suggest areas for future research.
Space charge enhanced plasma gradient effects on satellite electric field measurements
NASA Technical Reports Server (NTRS)
Diebold, Dan; Hershkowitz, Noah; Dekock, J.; Intrator, T.; Hsieh, M-K.
1991-01-01
It has been recognized that plasma gradients can cause error in magnetospheric electric field measurements made by double probes. Space charge enhanced Plasma Gradient Induced Error (PGIE) is discussed in general terms, the results of a laboratory experiment designed to demonstrate this error are presented, and a simple expression that quantifies this error is derived. Experimental conditions were not identical to magnetospheric conditions, although efforts were made to ensure the relevant physics applied to both cases. The experimental data demonstrate some of the possible errors in electric field measurements made by strongly emitting probes due to space charge effects in the presence of plasma gradients. Probe errors in space and laboratory conditions are discussed, as well as experimental error. In the final section, theoretical aspects are examined and an expression is derived for the maximum steady state space charge enhanced PGIE taken by two identical current-biased probes.
Mathes, Tim; Klaßen, Pauline; Pieper, Dawid
2017-11-28
Our objective was to assess the frequency of data extraction errors and its potential impact on results in systematic reviews. Furthermore, we evaluated the effect of different extraction methods, reviewer characteristics and reviewer training on error rates and results. We performed a systematic review of methodological literature in PubMed, Cochrane methodological registry, and by manual searches (12/2016). Studies were selected by two reviewers independently. Data were extracted in standardized tables by one reviewer and verified by a second. The analysis included six studies; four studies on extraction error frequency, one study comparing different reviewer extraction methods and two studies comparing different reviewer characteristics. We did not find a study on reviewer training. There was a high rate of extraction errors (up to 50%). Errors often had an influence on effect estimates. Different data extraction methods and reviewer characteristics had moderate effect on extraction error rates and effect estimates. The evidence base for established standards of data extraction seems weak despite the high prevalence of extraction errors. More comparative studies are needed to get deeper insights into the influence of different extraction methods.
Defining the Relationship Between Human Error Classes and Technology Intervention Strategies
NASA Technical Reports Server (NTRS)
Wiegmann, Douglas A.; Rantanen, Eas M.
2003-01-01
The modus operandi in addressing human error in aviation systems is predominantly that of technological interventions or fixes. Such interventions exhibit considerable variability both in terms of sophistication and application. Some technological interventions address human error directly while others do so only indirectly. Some attempt to eliminate the occurrence of errors altogether whereas others look to reduce the negative consequences of these errors. In any case, technological interventions add to the complexity of the systems and may interact with other system components in unforeseeable ways and often create opportunities for novel human errors. Consequently, there is a need to develop standards for evaluating the potential safety benefit of each of these intervention products so that resources can be effectively invested to produce the biggest benefit to flight safety as well as to mitigate any adverse ramifications. The purpose of this project was to help define the relationship between human error and technological interventions, with the ultimate goal of developing a set of standards for evaluating or measuring the potential benefits of new human error fixes.
Comparative study of anatomical normalization errors in SPM and 3D-SSP using digital brain phantom.
Onishi, Hideo; Matsutake, Yuki; Kawashima, Hiroki; Matsutomo, Norikazu; Amijima, Hizuru
2011-01-01
In single photon emission computed tomography (SPECT) cerebral blood flow studies, two major algorithms are widely used: statistical parametric mapping (SPM) and three-dimensional stereotactic surface projections (3D-SSP). The aim of this study is to compare an SPM algorithm-based easy Z score imaging system (eZIS) and a 3D-SSP system in the errors of anatomical standardization using 3D-digital brain phantom images. We developed a 3D-brain digital phantom based on MR images to simulate the effects of head tilt, perfusion defective region size, and count value reduction rate on the SPECT images. This digital phantom was used to compare the errors of anatomical standardization by the eZIS and the 3D-SSP algorithms. While the eZIS allowed accurate standardization of the images of the phantom simulating a head in rotation, lateroflexion, anteflexion, or retroflexion without angle dependency, the standardization by 3D-SSP was not accurate enough at approximately 25° or more head tilt. When the simulated head contained perfusion defective regions, one of the 3D-SSP images showed an error of 6.9% from the true value. Meanwhile, one of the eZIS images showed an error as large as 63.4%, revealing a significant underestimation. When required to evaluate regions with decreased perfusion due to such causes as hemodynamic cerebral ischemia, the 3D-SSP is desirable. In statistical image analysis, the image should always be reconfirmed after anatomical standardization.
Cost effectiveness of the stream-gaging program in South Carolina
Barker, A.C.; Wright, B.C.; Bennett, C.S.
1985-01-01
The cost effectiveness of the stream-gaging program in South Carolina was documented for the 1983 water yr. Data uses and funding sources were identified for the 76 continuous stream gages currently being operated in South Carolina. The budget of $422,200 for collecting and analyzing streamflow data also includes the cost of operating stage-only and crest-stage stations. The streamflow records for one stream gage can be determined by alternate, less costly methods, and should be discontinued. The remaining 75 stations should be maintained in the program for the foreseeable future. The current policy for the operation of the 75 stations including the crest-stage and stage-only stations would require a budget of $417,200/yr. The average standard error of estimation of streamflow records is 16.9% for the present budget with missing record included. However, the standard error of estimation would decrease to 8.5% if complete streamflow records could be obtained. It was shown that the average standard error of estimation of 16.9% could be obtained at the 75 sites with a budget of approximately $395,000 if the gaging resources were redistributed among the gages. A minimum budget of $383,500 is required to operate the program; a budget less than this does not permit proper service and maintenance of the gages and recorders. At the minimum budget, the average standard error is 18.6%. The maximum budget analyzed was $850,000, which resulted in an average standard error of 7.6%. (Author's abstract)
Medical students' experiences with medical errors: an analysis of medical student essays.
Martinez, William; Lo, Bernard
2008-07-01
This study aimed to examine medical students' experiences with medical errors. In 2001 and 2002, 172 fourth-year medical students wrote an anonymous description of a significant medical error they had witnessed or committed during their clinical clerkships. The assignment represented part of a required medical ethics course. We analysed 147 of these essays using thematic content analysis. Many medical students made or observed significant errors. In either situation, some students experienced distress that seemingly went unaddressed. Furthermore, this distress was sometimes severe and persisted after the initial event. Some students also experienced considerable uncertainty as to whether an error had occurred and how to prevent future errors. Many errors may not have been disclosed to patients, and some students who desired to discuss or disclose errors were apparently discouraged from doing so by senior doctors. Some students criticised senior doctors who attempted to hide errors or avoid responsibility. By contrast, students who witnessed senior doctors take responsibility for errors and candidly disclose errors to patients appeared to recognise the importance of honesty and integrity and said they aspired to these standards. There are many missed opportunities to teach students how to respond to and learn from errors. Some faculty members and housestaff may at times respond to errors in ways that appear to contradict professional standards. Medical educators should increase exposure to exemplary responses to errors and help students to learn from and cope with errors.
Robust Optimal Adaptive Control Method with Large Adaptive Gain
NASA Technical Reports Server (NTRS)
Nguyen, Nhan T.
2009-01-01
In the presence of large uncertainties, a control system needs to be able to adapt rapidly to regain performance. Fast adaptation refers to the implementation of adaptive control with a large adaptive gain to reduce the tracking error rapidly. However, a large adaptive gain can lead to high-frequency oscillations which can adversely affect the robustness of an adaptive control law. A new adaptive control modification is presented that can achieve robust adaptation with a large adaptive gain without incurring high-frequency oscillations as with the standard model-reference adaptive control. The modification is based on the minimization of the L2 norm of the tracking error, which is formulated as an optimal control problem. The optimality condition is used to derive the modification using the gradient method. The optimal control modification results in a stable adaptation and allows a large adaptive gain to be used for better tracking while providing sufficient stability robustness. Simulations were conducted for a damaged generic transport aircraft with both standard adaptive control and the adaptive optimal control modification technique. The results demonstrate the effectiveness of the proposed modification in tracking a reference model while maintaining a sufficient time delay margin.
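To illustrate the trade-off described above, the sketch below simulates a scalar standard model-reference adaptive control loop and reports the RMS tracking error for several adaptive gains; it implements only the standard MRAC baseline, not the optimal control modification itself, and the plant, reference model, and gain values are illustrative assumptions.

```python
import numpy as np

def simulate_mrac(gamma, a_true=3.0, a_m=-2.0, b_m=2.0, dt=1e-3, t_end=10.0):
    """Scalar standard MRAC (not the optimal control modification): plant
    x' = a*x + u with unknown a, reference model xm' = a_m*xm + b_m*r, control
    u = k*x + b_m*r, and Lyapunov-based adaptive law k' = -gamma * x * e with
    tracking error e = x - xm. Larger gamma speeds adaptation but tends to
    produce high-frequency oscillation in k and u."""
    n = int(t_end / dt)
    x = xm = k = 0.0
    errs = np.empty(n)
    for i in range(n):
        t = i * dt
        r = 1.0 if (t % 2.0) < 1.0 else -1.0          # square-wave reference
        e = x - xm
        u = k * x + b_m * r
        x += dt * (a_true * x + u)
        xm += dt * (a_m * xm + b_m * r)
        k += dt * (-gamma * x * e)                    # adaptive law
        errs[i] = e
    return np.sqrt(np.mean(errs ** 2))                # RMS tracking error

for gamma in (1.0, 10.0, 100.0):
    print(gamma, simulate_mrac(gamma))
```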
Tracked ultrasound calibration studies with a phantom made of LEGO bricks
NASA Astrophysics Data System (ADS)
Soehl, Marie; Walsh, Ryan; Rankin, Adam; Lasso, Andras; Fichtinger, Gabor
2014-03-01
In this study, spatial calibration of tracked ultrasound was compared between a calibration phantom made of LEGO® bricks and two 3-D printed N-wire phantoms. METHODS: The accuracy and variance of calibrations were compared under a variety of operating conditions. Twenty trials were performed using an electromagnetic tracking device with a linear probe and three trials were performed using varied probes, varied tracking devices and the three aforementioned phantoms. The accuracy and variance of spatial calibrations found through the standard deviation and error of the 3-D image reprojection were used to compare the calibrations produced from the phantoms. RESULTS: This study found no significant difference between the measured variables of the calibrations. The average standard deviation of multiple 3-D image reprojections with the highest performing printed phantom and those from the phantom made of LEGO® bricks differed by 0.05 mm and the error of the reprojections differed by 0.13 mm. CONCLUSION: Given that the phantom made of LEGO® bricks is significantly less expensive, more readily available, and more easily modified than precision-machined N-wire phantoms, it promises to be a viable calibration tool especially for quick laboratory research and proof-of-concept implementations of tracked ultrasound navigation.
Cardiorespiratory system monitoring using a developed acoustic sensor.
Abbasi-Kesbi, Reza; Valipour, Atefeh; Imani, Khadije
2018-02-01
This Letter proposes a wireless acoustic sensor for monitoring heartbeat and respiration rate based on phonocardiogram (PCG). The developed sensor comprises a processor, a transceiver operating in the industrial, scientific and medical band at a frequency of 2.54 GHz, and two capacitor microphones, one for recording the heartbeat and the other for the respiration rate. To evaluate the precision of the presented sensor in estimating heartbeat and respiration rate, the sensor is tested on different volunteers and the obtained results are compared with a gold standard as a reference. The results reveal that the root-mean-square errors are determined to be <2.27 beats/min and 0.92 breaths/min for the heartbeat and respiration rate, respectively, while the standard deviation of the error is <1.26 and 0.63 for the heartbeat and respiration rate, respectively. Also, the sensor detects sounds of [Formula: see text] to [Formula: see text] in the obtained PCG signal with sensitivity and specificity of 98.1% and 98.3%, respectively, a 3% improvement over previous works. The results prove that the sensor can be an appropriate candidate for recognising abnormal conditions in the cardiorespiratory system.
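A generic sketch of how heart rate might be extracted from a PCG signal (band-pass filtering, envelope, peak detection) is shown below; the filter band, envelope window, and peak-spacing constraint are common choices and are not taken from the sensor's published algorithm.

```python
import numpy as np
from scipy.signal import butter, filtfilt, find_peaks

def heart_rate_from_pcg(pcg, fs):
    """Estimate heart rate (beats/min) from a phonocardiogram by band-pass
    filtering the heart-sound band (assumed 25-150 Hz), smoothing the rectified
    signal into an envelope, and detecting one dominant peak per cardiac cycle.
    All parameters are generic assumptions; fs is the sampling rate in Hz."""
    b, a = butter(4, [25 / (fs / 2), 150 / (fs / 2)], btype="band")
    filtered = filtfilt(b, a, pcg)
    win = max(1, int(0.05 * fs))
    envelope = np.convolve(np.abs(filtered), np.ones(win) / win, mode="same")
    # Require at least 0.4 s between detected peaks (i.e. <= 150 beats/min)
    peaks, _ = find_peaks(envelope, distance=int(0.4 * fs), prominence=envelope.std())
    if len(peaks) < 2:
        return float("nan")
    return 60.0 / np.median(np.diff(peaks) / fs)
```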
Gray, Christine L; Robinson, Whitney R
2014-07-01
In childhood obesity research, the appearance of height loss, or "shrinkage," indicates measurement error. It is unclear whether a common response--excluding "shrinkers" from analysis--reduces bias. Using data from the National Longitudinal Study of Adolescent Health, we sampled 816 female adolescents (≥17 years) who had attained adult height by 1996 and for whom adult height was consistently measured in 2001 and 2008 ("gold-standard" height). We estimated adolescent obesity prevalence and the association of maternal education with adolescent obesity under 3 conditions: excluding shrinkers (for whom gold-standard height was less than recorded height in 1996), retaining shrinkers, and retaining shrinkers but substituting their gold-standard height. When we estimated obesity prevalence, excluding shrinkers decreased precision without improving validity. When we regressed obesity on maternal education, excluding shrinkers produced less valid and less precise estimates. In some circumstances, ignoring shrinkage is a better strategy than excluding shrinkers.
Asquith, William H.
2014-01-01
A database containing more than 16,300 discharge values and ancillary hydraulic attributes was assembled from summaries of discharge measurement records for 391 USGS streamflow-gauging stations (streamgauges) in Texas. Each discharge is between the 40th- and 60th-percentile daily mean streamflow as determined by period-of-record, streamgauge-specific, flow-duration curves. Each discharge therefore is assumed to represent a discharge measurement made for near-median streamflow conditions, and such conditions are conceptualized as representative of midrange to baseflow conditions in much of the state. The hydraulic attributes of each discharge measurement included concomitant cross-section flow area, water-surface top width, and reported mean velocity. Two regression equations are presented: (1) an expression for discharge and (2) an expression for mean velocity, both as functions of selected hydraulic attributes and watershed characteristics. Specifically, the discharge equation uses cross-sectional area, water-surface top width, contributing drainage area of the watershed, and mean annual precipitation of the location; the equation has an adjusted R-squared of approximately 0.95 and residual standard error of approximately 0.23 base-10 logarithm (cubic meters per second). The mean velocity equation uses discharge, water-surface top width, contributing drainage area, and mean annual precipitation; the equation has an adjusted R-squared of approximately 0.50 and residual standard error of approximately 0.087 third root (meters per second). Residual plots from both equations indicate that reliable estimates of discharge and mean velocity at ungauged stream sites are possible. Further, the relation between contributing drainage area and main-channel slope (a measure of whole-watershed slope) is depicted to aid analyst judgment of equation applicability for ungauged sites. Example applications and computations are provided and discussed within a real-world, discharge-measurement scenario, and an illustration of the development of a preliminary stage-discharge relation using the discharge equation is given.
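The general form of such a regression, fitted by ordinary least squares in log10 space, is sketched below; the predictor set mirrors the description above, but the code is illustrative and the published report should be consulted for the actual coefficients.

```python
import numpy as np

def fit_log_regression(Q, area, width, drainage_area, precip):
    """Ordinary least squares in log10 space, the general form of the discharge
    equation described above:
        log10(Q) = b0 + b1*log10(A) + b2*log10(W) + b3*log10(DA) + b4*log10(P)
    Coefficients and the exact functional form here are illustrative only."""
    X = np.column_stack([np.ones(len(Q)),
                         np.log10(area), np.log10(width),
                         np.log10(drainage_area), np.log10(precip)])
    y = np.log10(Q)
    beta, _, _, _ = np.linalg.lstsq(X, y, rcond=None)
    fitted = X @ beta
    resid_se = np.sqrt(np.sum((y - fitted) ** 2) / (len(y) - X.shape[1]))
    return beta, resid_se   # residual standard error in log10(cubic meters per second)
```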
Disturbance accommodating control design for wind turbines using solvability conditions
Wang, Na; Wright, Alan D.; Balas, Mark J.
2017-02-07
In this study, solvability conditions for disturbance accommodating control (DAC) have been discussed and applied on wind turbine controller design in above-rated wind speed to regulate rotor speed and to mitigate turbine structural loads. DAC incorporates a predetermined waveform model and uses it as part of the state-space formulation, which is known as the internal model principle to reduce or minimize the wind disturbance effects on the outputs of the wind turbine. An asymptotically stabilizing DAC controller with disturbance impact on the wind turbine being totally canceled out can be found if certain conditions are fulfilled. Designing a rotor speed regulation controller without steady-state error is important for applying linear control methodology such as DAC on wind turbines. Therefore, solvability conditions of DAC without steady-state error are attractive and can be taken as examples when designing a multitask turbine controller. DAC controllers solved via Moore-Penrose Pseudoinverse and the Kronecker product are discussed, and solvability conditions of using them are given. Additionally, a new solvability condition based on inverting the feed-through D term is proposed for the sake of reducing computational burden in the Kronecker product. Applications of designing collective pitch and independent pitch controllers based on DAC are presented. Recommendations of designing a DAC-based wind turbine controller are given. A DAC controller motivated by the proposed solvability condition that utilizes the inverse of feed-through D term is developed to mitigate the blade flapwise once-per-revolution bending moment together with a standard proportional integral controller in the control loop to assist rotor speed regulation. Simulation studies verify the discussed solvability conditions of DAC and show the effectiveness of the proposed DAC control design methodology.
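One of the solvability ideas mentioned above, solving for a disturbance-cancelling feedforward gain with the Moore-Penrose pseudoinverse, can be sketched as follows; the matrices and the reduction to a single static gain are illustrative simplifications, not the full DAC formulation.

```python
import numpy as np

def dac_disturbance_gain(B, Gamma):
    """Sketch of one piece of disturbance accommodating control: find a gain Gd
    such that B @ Gd + Gamma ≈ 0, so the control input u = Kx @ x + Gd @ d_hat
    cancels the modeled disturbance estimate d_hat. With a non-square B the
    least-squares solution uses the Moore-Penrose pseudoinverse; exact
    cancellation requires Gamma to lie in the column space of B (one of the
    solvability conditions discussed above)."""
    Gd = -np.linalg.pinv(B) @ Gamma
    residual = np.linalg.norm(B @ Gd + Gamma)   # zero iff cancellation is exact
    return Gd, residual

# Hypothetical 3-state, 1-input, 1-disturbance example
B = np.array([[0.0], [1.0], [0.5]])
Gamma = np.array([[0.0], [2.0], [1.0]])         # lies in span(B) -> exact cancellation
print(dac_disturbance_gain(B, Gamma))
```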
NASA Technical Reports Server (NTRS)
Prive, Nikki C.; Errico, Ronald M.
2013-01-01
A series of experiments that explore the roles of model and initial condition error in numerical weather prediction are performed using an observing system simulation experiment (OSSE) framework developed at the National Aeronautics and Space Administration Global Modeling and Assimilation Office (NASA/GMAO). The use of an OSSE allows the analysis and forecast errors to be explicitly calculated, and different hypothetical observing networks can be tested with ease. In these experiments, both a full global OSSE framework and an 'identical twin' OSSE setup are utilized to compare the behavior of the data assimilation system and evolution of forecast skill with and without model error. The initial condition error is manipulated by varying the distribution and quality of the observing network and the magnitude of observation errors. The results show that model error has a strong impact on both the quality of the analysis field and the evolution of forecast skill, including both systematic and unsystematic model error components. With a realistic observing network, the analysis state retains a significant quantity of error due to systematic model error. If errors of the analysis state are minimized, model error acts to rapidly degrade forecast skill during the first 24-48 hours of forward integration. In the presence of model error, the impact of observation errors on forecast skill is small, but in the absence of model error, observation errors cause a substantial degradation of the skill of medium range forecasts.
Radiative flux and forcing parameterization error in aerosol-free clear skies
Pincus, Robert; Mlawer, Eli J.; Oreopoulos, Lazaros; ...
2015-07-03
This article reports on the accuracy in aerosol- and cloud-free conditions of the radiation parameterizations used in climate models. Accuracy is assessed relative to observationally validated reference models for fluxes under present-day conditions and forcing (flux changes) from quadrupled concentrations of carbon dioxide. Agreement among reference models is typically within 1 W/m2, while parameterized calculations are roughly half as accurate in the longwave and even less accurate, and more variable, in the shortwave. Absorption of shortwave radiation is underestimated by most parameterizations in the present day and has relatively large errors in forcing. Error in present-day conditions is essentially unrelated to error in forcing calculations. Recent revisions to parameterizations have reduced error in most cases. As a result, a dependence on atmospheric conditions, including integrated water vapor, means that global estimates of parameterization error relevant for the radiative forcing of climate change will require much more ambitious calculations.
de Cueto, Marina; Ceballos, Esther; Martinez-Martinez, Luis; Perea, Evelio J.; Pascual, Alvaro
2004-01-01
In order to further decrease the time lapse between initial inoculation of blood culture media and the reporting of results of identification and antimicrobial susceptibility tests for microorganisms causing bacteremia, we performed a prospective study in which specially processed fluid from positive blood culture bottles from Bactec 9240 (Becton Dickinson, Cockeysville, Md.) containing aerobic media were directly inoculated into Vitek 2 system cards (bio-Mérieux, France). Organism identification and susceptibility results were compared with those obtained from cards inoculated with a standardized bacterial suspension obtained following subculture to agar; 100 consecutive positive monomicrobic blood cultures, consisting of 50 gram-negative rods and 50 gram-positive cocci, were included in the study. For gram-negative organisms, 31 of the 50 (62%) showed complete agreement with the standard method for species identification, while none of the 50 gram-positive cocci were correctly identified by the direct method. For gram-negative rods, there were 50% categorical agreements between the direct and standard methods for all drugs tested. The very major error rate was 2.4%, and the major error rate was 0.6%. The overall error rate for gram-negatives was 6.6%. Complete agreement in clinical categories of all antimicrobial agents evaluated was obtained for 19 of 50 (38%) gram-positive cocci evaluated; the overall error rate was 8.4%, with 2.8% minor errors, 2.4% major errors, and 3.2% very major errors. These findings suggest that the Vitek 2 cards inoculated directly from positive Bactec 9240 bottles do not provide acceptable bacterial identification or susceptibility testing in comparison with corresponding cards tested by a standard method. PMID:15297523
[The quality of medication orders--can it be improved?].
Vaknin, Ofra; Wingart-Emerel, Efrat; Stern, Zvi
2003-07-01
Medication errors are a common cause of morbidity and mortality among patients. Medication administration in hospitals is a complicated procedure with the possibility of error at each step. Errors are most commonly found at the prescription and transcription stages, although it is known that most errors can easily be avoided through strict adherence to standardized procedure guidelines. In an examination of medication errors reported in the hospital in the year 2000, we found that 38% were reported to have resulted from transcription errors. In the year 2001, the hospital initiated a program designed to identify faulty order processing in an effort to improve the quality and effectiveness of the medication administration process. As part of this program, it was decided to check and evaluate the quality of the written doctors' orders and the transcription of those orders to the nursing cadre, in various hospital units. The study was conducted using a questionnaire which checked compliance with hospital standards with regard to the medication administration process, as applied to 6 units over the course of 8 weeks. Results of the survey showed poor compliance with guidelines on the part of doctors and nurses. Only 18% of doctors' orders in the study and 37% of the nurses' transcriptions were written according to standards. The Emergency Department showed even lower compliance, with only 3% of doctors' orders and 25% of nurses' transcriptions complying with standards. As a result of this study, it was decided to initiate an intensive in-service teaching course to refresh the staff's knowledge of medication administration guidelines. In the future it is recommended that handwritten orders be replaced by computerized orders in an effort to limit the chance of error.
Comparison of photogrammetric and astrometric data reduction results for the wild BC-4 camera
NASA Technical Reports Server (NTRS)
Hornbarger, D. H.; Mueller, I. I.
1971-01-01
The results of astrometric and photogrammetric plate reduction techniques for a short focal length camera are compared. Several astrometric models are tested on entire and limited plate areas to analyze their ability to remove systematic errors from interpolated satellite directions using a rigorous photogrammetric reduction as a standard. Residual plots are employed to graphically illustrate the analysis. Conclusions are made as to what conditions will permit the astrometric reduction to achieve comparable accuracies to those of photogrammetric reduction when applied for short focal length ballistic cameras.
Improving patient safety through quality assurance.
Raab, Stephen S
2006-05-01
Anatomic pathology laboratories use several quality assurance tools to detect errors and to improve patient safety. To review some of the anatomic pathology laboratory patient safety quality assurance practices. Different standards and measures in anatomic pathology quality assurance and patient safety were reviewed. Frequency of anatomic pathology laboratory error, variability in the use of specific quality assurance practices, and use of data for error reduction initiatives. Anatomic pathology error frequencies vary according to the detection method used. Based on secondary review, a College of American Pathologists Q-Probes study showed that the mean laboratory error frequency was 6.7%. A College of American Pathologists Q-Tracks study measuring frozen section discrepancy found that laboratories improved the longer they monitored and shared data. There is a lack of standardization across laboratories even for governmentally mandated quality assurance practices, such as cytologic-histologic correlation. The National Institutes of Health funded a consortium of laboratories to benchmark laboratory error frequencies, perform root cause analysis, and design error reduction initiatives, using quality assurance data. Based on the cytologic-histologic correlation process, these laboratories found an aggregate nongynecologic error frequency of 10.8%. Based on gynecologic error data, the laboratory at my institution used Toyota production system processes to lower gynecologic error frequencies and to improve Papanicolaou test metrics. Laboratory quality assurance practices have been used to track error rates, and laboratories are starting to use these data for error reduction initiatives.
Characteristics of advanced hydrogen maser frequency standards
NASA Technical Reports Server (NTRS)
Peters, H. E.
1973-01-01
Measurements with several operational atomic hydrogen maser standards have been made which illustrate the fundamental characteristics of the maser as well as the analysability of the corrections which are made to relate the oscillation frequency to the free, unperturbed, hydrogen standard transition frequency. Sources of the most important perturbations, and the magnitude of the associated errors, are discussed. A variable volume storage bulb hydrogen maser is also illustrated which can provide on the order of 2 parts in 10 to the 14th power or better accuracy in evaluating the wall shift. Since the other basic error sources combined contribute no more than approximately 1 part in 10 to the 14th power uncertainty, the variable volume storage bulb hydrogen maser will have net intrinsic accuracy capability of the order of 2 parts in 10 to the 14th power or better. This is an order of magnitude less error than anticipated with cesium standards and is comparable to the basic limit expected for a free atom hydrogen beam resonance standard.
On a more rigorous gravity field processing for future LL-SST type gravity satellite missions
NASA Astrophysics Data System (ADS)
Daras, I.; Pail, R.; Murböck, M.
2013-12-01
In order to meet the growing demands of the user community concerning the accuracy of temporal gravity field models, future gravity missions of the low-low satellite-to-satellite tracking (LL-SST) type are planned to carry more precise sensors than their predecessors. A breakthrough is planned with the improved LL-SST measurement link, where the traditional K-band microwave instrument of 1 μm accuracy will be complemented by an inter-satellite ranging instrument of several nm accuracy. This study focuses on investigations concerning the potential performance of the new sensors and their impact on gravity field solutions. The processing methods for gravity field recovery have to meet the new sensor standards and be able to take full advantage of the new accuracies that they provide. We use full-scale simulations in a realistic environment to investigate whether the standard processing techniques suffice to fully exploit the new sensor standards. We achieve that by performing full numerical closed-loop simulations based on the Integral Equation approach. In our simulation scheme, we simulate dynamic orbits in a conventional tracking analysis to compute pseudo inter-satellite ranges or range-rates that serve as observables. Each part of the processing is validated separately, with special emphasis on numerical errors and their impact on gravity field solutions. We demonstrate that processing with standard precision may be a limiting factor for taking full advantage of the new-generation sensors that future satellite missions will carry. Therefore we have created versions of our simulator with enhanced processing precision, with the primary aim of minimizing round-off errors. Results using the enhanced precision show a large reduction of the system errors that were present in the standard-precision processing even for the error-free scenario, and reveal the improvements the new sensors will bring to the gravity field solutions. As a next step, we analyze the contribution of individual error sources to the system's error budget. More specifically, we analyze sensor noise from the laser interferometer and the accelerometers, errors in the kinematic orbits and the background fields, as well as temporal and spatial aliasing errors. We take special care in the assessment of error sources with stochastic behavior, such as the laser interferometer and the accelerometers, and their consistent stochastic modeling in the frame of the adjustment process.
NASA Astrophysics Data System (ADS)
Luce, C. H.; Tonina, D.; Applebee, R.; DeWeese, T.
2017-12-01
Two common refrains about using the one-dimensional advection diffusion equation to estimate fluid fluxes, thermal conductivity, or bed surface elevation from temperature time series in streambeds are that the solution assumes that 1) the surface boundary condition is a sine wave or nearly so, and 2) there is no gradient in mean temperature with depth. Concerns on these subjects are phrased in various ways, including non-stationarity in frequency, amplitude, or phase. Although the mathematical posing of the original solution to the problem might lead one to believe these constraints exist, the perception that they are a source of error is a fallacy. Here we re-derive the inverse solution of the 1-D advection-diffusion equation starting with an arbitrary surface boundary condition for temperature. In doing so, we demonstrate the frequency-independence of the solution, meaning any single frequency can be used in the frequency-domain solutions to estimate thermal diffusivity and 1-D fluid flux in streambeds, even if the forcing has multiple frequencies. This means that diurnal variations with asymmetric shapes, gradients in the mean temperature with depth, or `non-stationary' amplitude and frequency (or phase) do not actually represent violations of assumptions, and they should not cause errors in estimates when using one of the suite of existing solution methods derived based on a single frequency. Misattribution of errors to these issues constrains progress on solving real sources of error. Numerical and physical experiments are used to verify this conclusion and consider the utility of information at `non-standard' frequencies and multiple frequencies to augment the information derived from time series of temperature.
Quantifying soil carbon loss and uncertainty from a peatland wildfire using multi-temporal LiDAR
Reddy, Ashwan D.; Hawbaker, Todd J.; Wurster, F.; Zhu, Zhiliang; Ward, S.; Newcomb, Doug; Murray, R.
2015-01-01
Peatlands are a major reservoir of global soil carbon, yet account for just 3% of global land cover. Human impacts like draining can hinder the ability of peatlands to sequester carbon and expose their soils to fire under dry conditions. Estimating soil carbon loss from peat fires can be challenging due to uncertainty about pre-fire surface elevations. This study uses multi-temporal LiDAR to obtain pre- and post-fire elevations and estimate soil carbon loss caused by the 2011 Lateral West fire in the Great Dismal Swamp National Wildlife Refuge, VA, USA. We also determine how LiDAR elevation error affects uncertainty in our carbon loss estimate by randomly perturbing the LiDAR point elevations and recalculating elevation change and carbon loss, iterating this process 1000 times. We calculated a total loss using LiDAR of 1.10 Tg C across the 25 km2 burned area. The fire burned an average of 47 cm deep, equivalent to 44 kg C/m2, a value larger than the 1997 Indonesian peat fires (29 kg C/m2). Carbon loss via the First-Order Fire Effects Model (FOFEM) was estimated to be 0.06 Tg C. Propagating the LiDAR elevation error to the carbon loss estimates, we calculated a standard deviation of 0.00009 Tg C, equivalent to 0.008% of total carbon loss. We conclude that LiDAR elevation error is not a significant contributor to uncertainty in soil carbon loss under severe fire conditions with substantial peat consumption. However, uncertainties may be more substantial when soil elevation loss is of a similar or smaller magnitude than the reported LiDAR error.
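The perturbation approach described above can be sketched in a few lines; the following is a hypothetical illustration (not the study's code), with the grid size, bulk carbon density, and the assumed 1-sigma LiDAR elevation error all invented for the example:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical inputs: pre/post-fire elevations (m) on a 1-m grid and an
# assumed LiDAR vertical error (1-sigma, m); values are illustrative only.
pre_z = rng.normal(2.0, 0.1, size=(500, 500))
post_z = pre_z - rng.uniform(0.3, 0.6, size=pre_z.shape)   # burned-away peat
sigma_z = 0.15                                             # per-point elevation error
cell_area = 1.0                                            # m^2
carbon_density = 94.0                                      # kg C per m^3 of peat (assumed)

def carbon_loss(pre, post):
    """Total soil C loss (kg) from elevation change, clipped at zero."""
    depth_burned = np.clip(pre - post, 0.0, None)
    return (depth_burned * cell_area * carbon_density).sum()

base = carbon_loss(pre_z, post_z)

# Propagate elevation error: perturb both surfaces and recompute 1000 times.
losses = np.empty(1000)
for i in range(1000):
    losses[i] = carbon_loss(pre_z + rng.normal(0, sigma_z, pre_z.shape),
                            post_z + rng.normal(0, sigma_z, post_z.shape))

print(f"carbon loss ~ {base:.3e} kg C, sd from LiDAR error ~ {losses.std():.3e} kg C")
```

The standard deviation of the perturbed totals plays the role of the elevation-error contribution to carbon-loss uncertainty reported in the abstract.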
Intermittent nocturnal hypoxia and metabolic risk in obese adolescents with obstructive sleep apnea.
Narang, Indra; McCrindle, Brian W; Manlhiot, Cedric; Lu, Zihang; Al-Saleh, Suhail; Birken, Catherine S; Hamilton, Jill
2018-01-22
There is conflicting data regarding the independent associations of obstructive sleep apnea (OSA) with metabolic risk in obese youth. Previous studies have not consistently addressed central adiposity, specifically elevated waist to height ratio (WHtR), which is associated with metabolic risk independent of body mass index. The objective of this study was to determine the independent effects of the obstructive apnea-hypopnea index (OAHI) and associated indices of nocturnal hypoxia on metabolic function in obese youth after adjusting for WHtR. Subjects had standardized anthropometric measurements. Fasting blood included insulin, glucose, glycated hemoglobin, alanine transferase, and aspartate transaminase. Insulin resistance was quantified with the homeostatic model assessment. Overnight polysomnography determined the OAHI and nocturnal oxygenation indices. Of the 75 recruited subjects, 23% were diagnosed with OSA. Adjusting for age, gender, and WHtR in multivariable linear regression models, a higher oxygen desaturation index was associated with a higher fasting insulin (coefficient [standard error] = 48.076 [11.255], p < 0.001), higher glycated hemoglobin (coefficient [standard error] = 0.097 [0.041], p = 0.02), higher insulin resistance (coefficient [standard error] = 1.516 [0.364], p < 0.001), elevated alanine transferase (coefficient [standard error] = 11.631 [2.770], p < 0.001), and aspartate transaminase (coefficient [standard error] = 4.880 [1.444], p = 0.001). However, there were no significant associations between OAHI, glucose metabolism, and liver enzymes. Intermittent nocturnal hypoxia rather than the OAHI was associated with metabolic risk in obese youth after adjusting for WHtR. Measures of abdominal adiposity such as WHtR should be considered in future studies that evaluate the impact of OSA on metabolic health.
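The adjusted analysis described above amounts to a multivariable linear model with the desaturation index as the exposure and age, sex, and WHtR as covariates. A minimal sketch using statsmodels follows; the column names and synthetic data are illustrative assumptions, not the study's variables:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 75
df = pd.DataFrame({
    "age": rng.uniform(10, 18, n),
    "male": rng.integers(0, 2, n),
    "whtr": rng.normal(0.6, 0.05, n),          # waist-to-height ratio
    "odi": rng.gamma(2.0, 2.0, n),             # oxygen desaturation index
})
df["fasting_insulin"] = 30 + 4.0 * df["odi"] + rng.normal(0, 15, n)

# Multivariable linear model: insulin ~ ODI, adjusted for age, sex, and WHtR.
X = sm.add_constant(df[["odi", "age", "male", "whtr"]])
fit = sm.OLS(df["fasting_insulin"], X).fit()
print(fit.params["odi"], fit.bse["odi"], fit.pvalues["odi"])  # coefficient, SE, p-value
```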
Asymptotic Standard Errors for Item Response Theory True Score Equating of Polytomous Items
ERIC Educational Resources Information Center
Cher Wong, Cheow
2015-01-01
Building on previous works by Lord and Ogasawara for dichotomous items, this article proposes an approach to derive the asymptotic standard errors of item response theory true score equating involving polytomous items, for equivalent and nonequivalent groups of examinees. This analytical approach could be used in place of empirical methods like…
Standard Error Estimation of 3PL IRT True Score Equating with an MCMC Method
ERIC Educational Resources Information Center
Liu, Yuming; Schulz, E. Matthew; Yu, Lei
2008-01-01
A Markov chain Monte Carlo (MCMC) method and a bootstrap method were compared in the estimation of standard errors of item response theory (IRT) true score equating. Three test form relationships were examined: parallel, tau-equivalent, and congeneric. Data were simulated based on Reading Comprehension and Vocabulary tests of the Iowa Tests of…
ERIC Educational Resources Information Center
Doppelt, Jerome E.
1956-01-01
The standard error of measurement as a means for estimating the margin of error that should be allowed for in test scores is discussed. The true score measures the performance that is characteristic of the person tested; the variations, plus and minus, around the true score describe a characteristic of the test. When the standard deviation is used…
ERIC Educational Resources Information Center
Sachse, Karoline A.; Haag, Nicole
2017-01-01
Standard errors computed according to the operational practices of international large-scale assessment studies such as the Programme for International Student Assessment's (PISA) or the Trends in International Mathematics and Science Study (TIMSS) may be biased when cross-national differential item functioning (DIF) and item parameter drift are…
ERIC Educational Resources Information Center
Zu, Jiyun; Yuan, Ke-Hai
2012-01-01
In the nonequivalent groups with anchor test (NEAT) design, the standard error of linear observed-score equating is commonly estimated by an estimator derived assuming multivariate normality. However, real data are seldom normally distributed, causing this normal estimator to be inconsistent. A general estimator, which does not rely on the…
Standard Errors of Estimated Latent Variable Scores with Estimated Structural Parameters
ERIC Educational Resources Information Center
Hoshino, Takahiro; Shigemasu, Kazuo
2008-01-01
The authors propose a concise formula to evaluate the standard error of the estimated latent variable score when the true values of the structural parameters are not known and must be estimated. The formula can be applied to factor scores in factor analysis or ability parameters in item response theory, without bootstrap or Markov chain Monte…
40 CFR 1065.1005 - Symbols, abbreviations, acronyms, and units of measure.
Code of Federal Regulations, 2014 CFR
2014-07-01
[Garbled excerpt from the regulation's table of symbols, abbreviations, acronyms, and units of measure; recoverable entries include ratio of diameters (m/m), atomic oxygen-to-carbon ratio (mol/mol), error between a quantity and its reference, brake-specific emission or fuel consumption, Sutherland constant (K), standard estimate of error (SEE), and absolute temperature (T).]
Standard errors in forest area
Joseph McCollum
2002-01-01
I trace the development of standard error equations for forest area, beginning with the theory behind double sampling and the variance of a product. The discussion shifts to the particular problem of forest area - at which time the theory becomes relevant. There are subtle difficulties in figuring out which variance of a product equation should be used. The equations...
ERIC Educational Resources Information Center
Rocconi, Louis M.
2011-01-01
Hierarchical linear models (HLM) solve the problems associated with the unit of analysis problem such as misestimated standard errors, heterogeneity of regression and aggregation bias by modeling all levels of interest simultaneously. Hierarchical linear modeling resolves the problem of misestimated standard errors by incorporating a unique random…
A Sandwich-Type Standard Error Estimator of SEM Models with Multivariate Time Series
ERIC Educational Resources Information Center
Zhang, Guangjian; Chow, Sy-Miin; Ong, Anthony D.
2011-01-01
Structural equation models are increasingly used as a modeling tool for multivariate time series data in the social and behavioral sciences. Standard error estimators of SEM models, originally developed for independent data, require modifications to accommodate the fact that time series data are inherently dependent. In this article, we extend a…
Patient Safety: Moving the Bar in Prison Health Care Standards
Greifinger, Robert B.; Mellow, Jeff
2010-01-01
Improvements in community health care quality through error reduction have been slow to transfer to correctional settings. We convened a panel of correctional experts, which recommended 60 patient safety standards focusing on such issues as creating safety cultures at organizational, supervisory, and staff levels through changes to policy and training and by ensuring staff competency, reducing medication errors, encouraging the seamless transfer of information between and within practice settings, and developing mechanisms to detect errors or near misses and to shift the emphasis from blaming staff to fixing systems. To our knowledge, this is the first published set of standards focusing on patient safety in prisons, adapted from the emerging literature on quality improvement in the community. PMID:20864714
Kappa statistic for the clustered dichotomous responses from physicians and patients
Kang, Chaeryon; Qaqish, Bahjat; Monaco, Jane; Sheridan, Stacey L.; Cai, Jianwen
2013-01-01
The bootstrap method for estimating the standard error of the kappa statistic in the presence of clustered data is evaluated. Such data arise, for example, in assessing agreement between physicians and their patients regarding their understanding of the physician-patient interaction and discussions. We propose a computationally efficient procedure for generating correlated dichotomous responses for physicians and assigned patients for simulation studies. The simulation results demonstrate that the proposed bootstrap method produces a better estimate of the standard error and better coverage performance than the asymptotic standard error estimate that ignores dependence among patients within physicians, provided there is at least a moderately large number of clusters. An example of an application to a coronary heart disease prevention study is presented. PMID:23533082
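A cluster bootstrap of the kappa standard error, as evaluated above, can be sketched as follows; the kappa formula is the usual two-rater agreement statistic, and the synthetic data, cluster sizes, and number of resamples are illustrative assumptions rather than the paper's simulation design:

```python
import numpy as np

rng = np.random.default_rng(1)

def kappa(x, y):
    """Cohen's kappa for paired dichotomous (0/1) ratings."""
    po = np.mean(x == y)
    pe = np.mean(x) * np.mean(y) + (1 - np.mean(x)) * (1 - np.mean(y))
    return (po - pe) / (1 - pe)

# Hypothetical clustered data: each physician (cluster) rates several patients,
# and each patient provides a matching dichotomous response.
clusters = [(rng.integers(0, 2, m), rng.integers(0, 2, m))
            for m in rng.integers(3, 8, size=50)]            # 50 physicians

obs = kappa(np.concatenate([c[0] for c in clusters]),
            np.concatenate([c[1] for c in clusters]))

# Cluster bootstrap: resample physicians (not individual patients) with replacement.
boot = np.empty(2000)
for b in range(2000):
    idx = rng.integers(0, len(clusters), len(clusters))
    x = np.concatenate([clusters[i][0] for i in idx])
    y = np.concatenate([clusters[i][1] for i in idx])
    boot[b] = kappa(x, y)

print(f"kappa = {obs:.3f}, cluster-bootstrap SE = {boot.std(ddof=1):.3f}")
```

Resampling whole physicians preserves the within-cluster dependence that the naive asymptotic standard error ignores.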
ERIC Educational Resources Information Center
Bouck, Emily C.; Bouck, Mary K.; Joshi, Gauri S.; Johnson, Linley
2016-01-01
Students with learning disabilities struggle with word problems in mathematics classes. Understanding the type of errors students make when working through such mathematical problems can further describe student performance and highlight student difficulties. Through the use of error codes, researchers analyzed the type of errors made by 14 sixth…
Balderson, Michael; Brown, Derek; Johnson, Patricia; Kirkby, Charles
2016-01-01
The purpose of this work was to compare static gantry intensity-modulated radiation therapy (IMRT) with volume-modulated arc therapy (VMAT) in terms of tumor control probability (TCP) under scenarios involving large geometric misses, i.e., those beyond what are accounted for when margin expansion is determined. Using a planning approach typical for these treatments, a linear-quadratic-based model for TCP was used to compare mean TCP values for a population of patients who experiences a geometric miss (i.e., systematic and random shifts of the clinical target volume within the planning target dose distribution). A Monte Carlo approach was used to account for the different biological sensitivities of a population of patients. Interestingly, for errors consisting of coplanar systematic target volume offsets and three-dimensional random offsets, static gantry IMRT appears to offer an advantage over VMAT in that larger shift errors are tolerated for the same mean TCP. For example, under the conditions simulated, erroneous systematic shifts of 15mm directly between or directly into static gantry IMRT fields result in mean TCP values between 96% and 98%, whereas the same errors on VMAT plans result in mean TCP values between 45% and 74%. Random geometric shifts of the target volume were characterized using normal distributions in each Cartesian dimension. When the standard deviations were doubled from those values assumed in the derivation of the treatment margins, our model showed a 7% drop in mean TCP for the static gantry IMRT plans but a 20% drop in TCP for the VMAT plans. Although adding a margin for error to a clinical target volume is perhaps the best approach to account for expected geometric misses, this work suggests that static gantry IMRT may offer a treatment that is more tolerant to geometric miss errors than VMAT. Copyright © 2016 American Association of Medical Dosimetrists. Published by Elsevier Inc. All rights reserved.
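A hedged sketch of the Monte Carlo TCP calculation under geometric misses is shown below, using a Poisson linear-quadratic model on a one-dimensional dose profile; the dose levels, radiosensitivity distribution, clonogen number, and shift statistics are all assumed values, not those of the study:

```python
import numpy as np

rng = np.random.default_rng(2)

# 1-D dose profile (Gy) across a planning target volume: flat prescription dose
# inside the PTV, falling off outside.  Values are illustrative only.
x = np.linspace(-60, 60, 241)                  # mm
dose = np.where(np.abs(x) <= 30, 60.0, 60.0 * np.exp(-(np.abs(x) - 30) / 5.0))
ctv = np.abs(x) <= 20                          # clinical target volume cells
n_frac, clonogens_per_cell = 30, 1e5
beta = 0.03                                    # Gy^-2 (assumed)

def tcp(shift_mm, alpha):
    """Poisson TCP for the CTV shifted rigidly by shift_mm within the dose profile."""
    d = np.interp(x[ctv] + shift_mm, x, dose) / n_frac        # dose per fraction
    sf = np.exp(-(alpha * d + beta * d**2) * n_frac)          # LQ survival over the course
    return np.exp(-clonogens_per_cell * sf).prod()

# Population-averaged TCP: sample radiosensitivity per patient plus a random
# geometric shift, and average over many simulated patients.
alphas = rng.normal(0.3, 0.05, 5000).clip(0.05)
shifts = rng.normal(0.0, 10.0, 5000)           # random rigid target shifts (mm)
mean_tcp = np.mean([tcp(s, a) for s, a in zip(shifts, alphas)])
print(f"mean TCP under random shifts: {mean_tcp:.2%}")
```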
NASA Astrophysics Data System (ADS)
Muir, J.; Phinn, S. R.; Armston, J.; Scarth, P.; Eyre, T.
2014-12-01
Coarse woody debris (CWD) provides important habitat for many species and plays a vital role in nutrient cycling within an ecosystem. In addition, CWD makes an important contribution to forest biomass and fuel loads. Airborne or space based remote sensing instruments typically do not detect CWD beneath the forest canopy. Terrestrial laser scanning (TLS) provides a ground based method for three-dimensional (3-D) reconstruction of surface features and CWD. This research produced a 3-D reconstruction of the ground surface and automatically classified coarse woody debris from registered TLS scans. The outputs will be used to inform the development of a site-based index for the assessment of forest condition, and quantitative assessments of biomass and fuel loads. A survey grade terrestrial laser scanner (Riegl VZ400) was used to scan 13 positions, in an open eucalypt woodland site at Karawatha Forest Park, near Brisbane, Australia. Scans were registered, and a digital surface model (DSM) produced using an intensity threshold and an iterative morphological filter. The DSMs produced from single scans were compared to the registered multi-scan point cloud using standard error metrics including: Root Mean Squared Error (RMSE), Mean Squared Error (MSE), range, absolute error and signed error. In addition the DSM was compared to a Digital Elevation Model (DEM) produced from Airborne Laser Scanning (ALS). Coarse woody debris was subsequently classified from the DSM using laser pulse properties, including: width and amplitude, as well as point spatial relationships (e.g. nearest neighbour slope vectors). Validation of the coarse woody debris classification was completed using true-colour photographs co-registered to the TLS point cloud. The volume and length of the coarse woody debris was calculated from the classified point cloud. A representative network of TLS sites will allow for up-scaling to large area assessment using airborne or space based sensors to monitor forest condition, biomass and fuel loads.
Analysis of the PLL phase error in presence of simulated ionospheric scintillation events
NASA Astrophysics Data System (ADS)
Forte, B.
2012-01-01
The functioning of standard phase locked loops (PLL), including those used to track radio signals from Global Navigation Satellite Systems (GNSS), is based on a linear approximation which holds in presence of small phase errors. Such an approximation represents a reasonable assumption in most of the propagation channels. However, in presence of a fading channel the phase error may become large, making the linear approximation no longer valid. The PLL is then expected to operate in a non-linear regime. As PLLs are generally designed and expected to operate in their linear regime, whenever the non-linear regime comes into play, they will experience a serious limitation in their capability to track the corresponding signals. The phase error and the performance of a typical PLL embedded into a commercial multiconstellation GNSS receiver were analyzed in presence of simulated ionospheric scintillation. Large phase errors occurred during scintillation-induced signal fluctuations although cycle slips only occurred during the signal re-acquisition after a loss of lock. Losses of lock occurred whenever the signal faded below the minimum C/N0 threshold allowed for tracking. The simulations were performed for different signals (GPS L1C/A, GPS L2C, GPS L5 and Galileo L1). L5 and L2C proved to be weaker than L1. It appeared evident that the conditions driving the PLL phase error in the specific case of GPS receivers in presence of scintillation-induced signal perturbations need to be evaluated in terms of the combination of the minimum C/N0 tracking threshold, lock detector thresholds, possible cycle slips in the tracking PLL and accuracy of the observables (i.e. the error propagation onto the observables stage).
Defining the Relationship Between Human Error Classes and Technology Intervention Strategies
NASA Technical Reports Server (NTRS)
Wiegmann, Douglas A.; Rantanen, Esa; Crisp, Vicki K. (Technical Monitor)
2002-01-01
One of the main factors in all aviation accidents is human error. The NASA Aviation Safety Program (AvSP), therefore, has identified several human-factors safety technologies to address this issue. Some technologies directly address human error either by attempting to reduce the occurrence of errors or by mitigating the negative consequences of errors. However, new technologies and system changes may also introduce new error opportunities or even induce different types of errors. Consequently, a thorough understanding of the relationship between error classes and technology "fixes" is crucial for the evaluation of intervention strategies outlined in the AvSP, so that resources can be effectively directed to maximize the benefit to flight safety. The purpose of the present project, therefore, was to examine the repositories of human factors data to identify the possible relationship between different error class and technology intervention strategies. The first phase of the project, which is summarized here, involved the development of prototype data structures or matrices that map errors onto "fixes" (and vice versa), with the hope of facilitating the development of standards for evaluating safety products. Possible follow-on phases of this project are also discussed. These additional efforts include a thorough and detailed review of the literature to fill in the data matrix and the construction of a complete database and standards checklists.
Translating Radiometric Requirements for Satellite Sensors to Match International Standards
Pearlman, Aaron; Datla, Raju; Kacker, Raghu; Cao, Changyong
2014-01-01
International scientific standards organizations created standards on evaluating uncertainty in the early 1990s. Although scientists from many fields use these standards, they are not consistently implemented in the remote sensing community, where traditional error analysis framework persists. For a satellite instrument under development, this can create confusion in showing whether requirements are met. We aim to create a methodology for translating requirements from the error analysis framework to the modern uncertainty approach using the product level requirements of the Advanced Baseline Imager (ABI) that will fly on the Geostationary Operational Environmental Satellite R-Series (GOES-R). In this paper we prescribe a method to combine several measurement performance requirements, written using a traditional error analysis framework, into a single specification using the propagation of uncertainties formula. By using this approach, scientists can communicate requirements in a consistent uncertainty framework leading to uniform interpretation throughout the development and operation of any satellite instrument. PMID:26601032
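The propagation-of-uncertainties step referred to above, for independent components with unit sensitivity coefficients, reduces to a root-sum-of-squares combination. The component names and values below are hypothetical, chosen only to illustrate the calculation:

```python
import math

# Illustrative component requirements for a radiometric product, each expressed
# as a standard uncertainty in percent; names and values are hypothetical.
components = {
    "calibration":    0.6,
    "detector_noise": 0.3,
    "nonlinearity":   0.2,
    "stray_light":    0.4,
}

# GUM-style propagation of uncertainties for independent terms with unit
# sensitivity coefficients: combine in quadrature into a single standard uncertainty.
u_combined = math.sqrt(sum(u**2 for u in components.values()))
u_expanded = 2.0 * u_combined          # expanded uncertainty, coverage factor k = 2
print(f"combined standard uncertainty: {u_combined:.2f} %  (k=2: {u_expanded:.2f} %)")
```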
Zupanc, Christine M; Burgess-Limerick, Robin J; Wallis, Guy
2007-08-01
To investigate error and reaction time consequences of alternating compatible and incompatible steering arrangements during a simulated obstacle avoidance task. Underground coal mine shuttle cars provide an example of a vehicle in which operators are required to alternate between compatible and incompatible steering configurations. This experiment examines the performance of 48 novice participants in a virtual analogue of an underground coal mine shuttle car. Participants were randomly assigned to a compatible condition, an incompatible condition, an alternating condition in which compatibility alternated within and between hands, or an alternating condition in which compatibility alternated between hands. Participants made fewer steering direction errors and made correct steering responses more quickly in the compatible condition. Error rate decreased over time in the incompatible condition. A compatibility effect for both errors and reaction time was also found when the control-response relationship alternated; however, performance improvements over time were not consistent. Isolating compatibility to a hand resulted in reduced error rate and faster reaction time than when compatibility alternated within and between hands. The consequences of alternating control-response relationships are higher error rates and slower responses, at least in the early stages of learning. This research highlights the importance of ensuring consistently compatible human-machine directional control-response relationships.
Downward longwave surface radiation from sun-synchronous satellite data - Validation of methodology
NASA Technical Reports Server (NTRS)
Darnell, W. L.; Gupta, S. K.; Staylor, W. F.
1986-01-01
An extensive study has been carried out to validate a satellite technique for estimating downward longwave radiation at the surface. The technique, mostly developed earlier, uses operational sun-synchronous satellite data and a radiative transfer model to provide the surface flux estimates. The satellite-derived fluxes were compared directly with corresponding ground-measured fluxes at four different sites in the United States for a common one-year period. This provided a study of seasonal variations as well as a diversity of meteorological conditions. Dome heating errors in the ground-measured fluxes were also investigated and were corrected prior to the comparisons. Comparison of the monthly averaged fluxes from the satellite and ground sources for all four sites for the entire year showed a correlation coefficient of 0.98 and a standard error of estimate of 10 W/sq m. A brief description of the technique is provided, and the results validating the technique are presented.
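The two summary statistics quoted above (correlation coefficient and standard error of estimate) can be computed from paired monthly fluxes as sketched below; the synthetic values simply stand in for the satellite-derived and ground-measured data:

```python
import numpy as np

rng = np.random.default_rng(3)

# Illustrative monthly-mean fluxes (W/m^2): ground truth plus satellite estimates
# with a small random error; values are synthetic, not the study's data.
ground = rng.uniform(250, 420, 48)                  # e.g., 4 sites x 12 months
satellite = ground + rng.normal(0, 10, ground.size)

r = np.corrcoef(ground, satellite)[0, 1]

# Standard error of estimate from the linear regression of satellite on ground flux.
slope, intercept = np.polyfit(ground, satellite, 1)
resid = satellite - (slope * ground + intercept)
see = np.sqrt(np.sum(resid**2) / (resid.size - 2))
print(f"r = {r:.3f}, SEE = {see:.1f} W/m^2")
```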
High accuracy diffuse horizontal irradiance measurements without a shadowband
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schlemmer, J.A; Michalsky, J.J.
1995-12-31
The standard method for measuring diffuse horizontal irradiance uses a fixed shadowband to block direct solar radiation. This method requires a correction for the excess skylight blocked by the band, and this correction varies with sky conditions. Alternately, diffuse horizontal irradiance may be calculated from total horizontal and direct normal irradiance. This method is in error because of angular (cosine) response of the total horizontal pyranometer to direct beam irradiance. This paper describes an improved calculation of diffuse horizontal irradiance from total horizontal and direct normal irradiance using a predetermination of the angular response of the total horizontal pyranometer. We compare these diffuse horizontal irradiance calculations with measurements made with a shading-disk pyranometer that shields direct irradiance using a tracking disk. Results indicate significant improvement in most cases. Remaining disagreement most likely arises from undetected tracking errors and instrument leveling.
High accuracy diffuse horizontal irradiance measurements without a shadowband
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schlemmer, J.A.; Michalsky, J.J.
1995-10-01
The standard method for measuring diffuse horizontal irradiance uses a fixed shadowband to block direct solar radiation. This method requires a correction for the excess skylight blocked by the band, and this correction varies with sky conditions. Alternately, diffuse horizontal irradiance may be calculated from the total horizontal and direct normal irradiance. This method is in error because of the angular (often referred to as cosine) response of the total horizontal pyranometer to direct beam irradiance. This paper describes an improved calculation of diffuse horizontal irradiance from total horizontal and direct normal irradiance using a predetermination of the angular response of the total horizontal pyranometer. The authors compare these diffuse horizontal irradiance calculations with measurements made with a shading-disk pyranometer that shields direct irradiance using a tracking disk. The results indicate significant improvement in most cases. The remaining disagreement most likely arises from undetected tracking errors and instrument leveling.
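A minimal sketch of the corrected calculation, assuming a hypothetical pyranometer angular-response function rather than the instrument characterization used in the paper:

```python
import numpy as np

def diffuse_horizontal(ghi, dni, sza_deg, cosine_response):
    """
    Diffuse horizontal irradiance from total horizontal (GHI) and direct normal
    (DNI) irradiance, correcting the direct-beam term for the pyranometer's
    imperfect angular (cosine) response.  cosine_response(sza_deg) returns the
    instrument's relative response at that incidence angle (1.0 = ideal).
    """
    mu = np.cos(np.radians(sza_deg))
    # The pyranometer sees the direct beam weighted by its real angular response,
    # so subtract DNI * mu * R(sza) rather than the ideal DNI * mu.
    return ghi - dni * mu * cosine_response(sza_deg)

# Hypothetical angular response: a few percent low at large incidence angles.
resp = lambda sza: 1.0 - 0.04 * (sza / 90.0) ** 3

print(diffuse_horizontal(ghi=650.0, dni=820.0, sza_deg=40.0, cosine_response=resp))
```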
Parvin, C A
1993-03-01
The error detection characteristics of quality-control (QC) rules that use control observations within a single analytical run are investigated. Unlike the evaluation of QC rules that span multiple analytical runs, most of the fundamental results regarding the performance of QC rules applied within a single analytical run can be obtained from statistical theory, without the need for simulation studies. The case of two control observations per run is investigated for ease of graphical display, but the conclusions can be extended to more than two control observations per run. Results are summarized in a graphical format that offers many interesting insights into the relations among the various QC rules. The graphs provide heuristic support to the theoretical conclusions that no QC rule is best under all error conditions, but the multirule that combines the mean rule and a within-run standard deviation rule offers an attractive compromise.
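One way to picture a within-run multirule of the kind discussed above, combining a mean rule with a within-run dispersion rule on two control observations, is sketched below; the control limits and target values are illustrative assumptions, not recommendations from the paper:

```python
import numpy as np

def within_run_multirule(controls, mean, sd, z_mean_limit=2.0, range_limit=4.0):
    """
    Evaluate one analytical run with two (or more) control observations using a
    multirule: a mean rule on the average z-score plus a within-run dispersion
    (range) rule.  Limits here are illustrative, not recommended values.
    """
    z = (np.asarray(controls, dtype=float) - mean) / sd
    mean_rule = abs(z.mean()) * np.sqrt(z.size) > z_mean_limit   # systematic shift
    range_rule = (z.max() - z.min()) > range_limit               # within-run imprecision
    return {"reject": bool(mean_rule or range_rule),
            "mean_rule": bool(mean_rule), "range_rule": bool(range_rule)}

# Two control observations in a run, against an assumed target of 100 +/- 2.
print(within_run_multirule([103.5, 104.1], mean=100.0, sd=2.0))
```

The mean rule responds mainly to systematic error, the range rule to random error, which is why no single rule is best under all error conditions.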
Pilot age and error in air taxi crashes.
Rebok, George W; Qiang, Yandong; Baker, Susan P; Li, Guohua
2009-07-01
The associations of pilot error with the type of flight operations and basic weather conditions are well documented. The correlation between pilot characteristics and error is less clear. This study aims to examine whether pilot age is associated with the prevalence and patterns of pilot error in air taxi crashes. Investigation reports from the National Transportation Safety Board for crashes involving non-scheduled Part 135 operations (i.e., air taxis) in the United States between 1983 and 2002 were reviewed to identify pilot error and other contributing factors. Crash circumstances and the presence and type of pilot error were analyzed in relation to pilot age using Chi-square tests. Of the 1751 air taxi crashes studied, 28% resulted from mechanical failure, 25% from loss of control at landing or takeoff, 7% from visual flight rule conditions into instrument meteorological conditions, 7% from fuel starvation, 5% from taxiing, and 28% from other causes. Crashes among older pilots were more likely to occur during the daytime rather than at night and off airport than on airport. The patterns of pilot error in air taxi crashes were similar across age groups. Of the errors identified, 27% were flawed decisions, 26% were inattentiveness, 23% mishandled aircraft kinetics, 15% mishandled wind and/or runway conditions, and 11% were others. Pilot age is associated with crash circumstances but not with the prevalence and patterns of pilot error in air taxi crashes. Lack of age-related differences in pilot error may be attributable to the "safe worker effect."
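The chi-square comparison of error patterns across age groups can be reproduced with a standard contingency-table test; the counts below are invented for illustration and do not come from the study:

```python
import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical contingency table: rows = pilot age group, columns = error type
# (decision, inattentiveness, aircraft kinetics, wind/runway, other).
table = np.array([
    [60, 55, 50, 35, 25],   # younger pilots
    [55, 52, 47, 30, 22],   # middle group
    [48, 46, 42, 28, 20],   # older pilots
])

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.3f}")
```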
Do errors matter? Errorless and errorful learning in anomic picture naming.
McKissock, Stephen; Ward, Jamie
2007-06-01
Errorless training methods significantly improve learning in memory-impaired patients relative to errorful training procedures. However, the validity of this technique for acquiring linguistic information in aphasia has rarely been studied. This study contrasts three different treatment conditions over an 8 week period for rehabilitating picture naming in anomia: (1) errorless learning in which pictures are shown and the experimenter provides the name, (2) errorful learning with feedback in which the patient is required to generate a name but the correct name is then supplied by the experimenter, and (3) errorful learning in which no feedback is given. These conditions are compared to an untreated set of matched words. Both errorless and errorful learning with feedback conditions led to significant improvement at a 2-week and 12-14-week retest (errorful without feedback and untreated words were similar). The results suggest that it does not matter whether anomic patients are allowed to make errors in picture naming or not (unlike in memory impaired individuals). What does matter is that a correct response is given as feedback. The results also question the widely held assumption that it is beneficial for a patient to attempt to retrieve a word, given that our errorless condition involved no retrieval effort and had the greatest benefits.
Induced mood and selective attention.
Brand, N; Verspui, L; Oving, A
1997-04-01
Subjects (N = 60) were randomly assigned to an elated, depressed, or neutral mood-induction condition to assess the effect of mood state on cognitive functioning. In the elated condition film fragments expressing happiness and euphoria were shown. In the depressed condition some frightening and distressing film fragments were presented. The neutral group watched no film. Mood states were measured using the Profile of Mood States, and a Stroop task assessed selective attention. Both were presented by computer. The induction groups differed significantly in the expected direction on the mood subscales Anger, Tension, Depression, Vigour, and Fatigue, and also in the mean scale response times, i.e., slower responses for the depressed condition and faster for the elated one. Differences between conditions were also found in the errors on the Stroop: the depressed condition produced the fewest errors and significantly longer error reaction times. Speed of error responding was associated with self-reported fatigue.
Rank score and permutation testing alternatives for regression quantile estimates
Cade, B.S.; Richards, J.D.; Mielke, P.W.
2006-01-01
Performance of quantile rank score tests used for hypothesis testing and constructing confidence intervals for linear quantile regression estimates (0 ≤ τ ≤ 1) was evaluated by simulation for models with p = 2 and 6 predictors, moderate collinearity among predictors, homogeneous and heterogeneous errors, small to moderate samples (n = 20–300), and central to upper quantiles (0.50–0.99). Test statistics evaluated were the conventional quantile rank score T statistic distributed as a χ2 random variable with q degrees of freedom (where q parameters are constrained by H0) and an F statistic with its sampling distribution approximated by permutation. The permutation F-test maintained better Type I errors than the T-test for homogeneous error models with smaller n and more extreme quantiles τ. An F distributional approximation of the F statistic provided some improvements in Type I errors over the T-test for models with > 2 parameters, smaller n, and more extreme quantiles, but not as much improvement as the permutation approximation. Both rank score tests required weighting to maintain correct Type I errors when heterogeneity under the alternative model increased to 5 standard deviations across the domain of X. A double permutation procedure was developed to provide valid Type I errors for the permutation F-test when null models were forced through the origin. Power was similar for conditions where both T- and F-tests maintained correct Type I errors, but the F-test provided some power at smaller n and extreme quantiles when the T-test had no power because of excessively conservative Type I errors. When the double permutation scheme was required for the permutation F-test to maintain valid Type I errors, power was less than for the T-test with decreasing sample size and increasing quantiles. Confidence intervals on parameters and tolerance intervals for future predictions were constructed based on test inversion for an example application relating trout densities to stream channel width:depth.
Iqbal, Mohammad Asif; Kim, Ki-Hyun
2014-12-19
In the analysis of biogenic volatile organic compounds (BVOCs) in ambient air, preparation of a sub-ppb level standard is an important factor. This task is very challenging as most BVOCs (e.g., monoterpenes) are highly volatile and reactive in nature. As a means to produce sub-ppb gaseous standards for BVOCs, we investigated the dynamic headspace (HS) extraction technique through which their vapors are generated from a liquid standard (mixture of 10 BVOCs: (1) α-pinene, (2) β-pinene, (3) 3-carene, (4) myrcene, (5) α-phellandrene, (6) α-terpinene, (7) R-limonene, (8) γ-terpinene, (9) p-cymene, and (10) camphene) spiked into a chamber-style impinger. The quantification of BVOCs was made by collection on multiple-bed sorbent tubes (STs) and subsequent analysis by thermal desorption-gas chromatography-mass spectrometry (TD-GC-MS). Using this approach, sub-ppb level mixtures of gaseous BVOCs were generated at different sweep cycles. The mean concentrations of 10 BVOCs generated from the most stable conditions (i.e., in the third sweep cycle) varied in the range of 0.37±0.05 to 7.27±0.86 ppb depending on the initial concentration of liquid standard spiked into the system. The reproducibility of the gaseous BVOCs generated as mixture standards, if expressed in terms of relative standard error using the concentration datasets acquired under stable conditions, ranged from 1.64% (α-phellandrene) to 9.67% (R-limonene). Copyright © 2014 Elsevier B.V. All rights reserved.
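The reproducibility metric quoted above (relative standard error of replicate concentrations) is straightforward to compute; the replicate values below are hypothetical:

```python
import numpy as np

def relative_standard_error(replicates):
    """Relative standard error (%) of the mean of replicate concentration values."""
    x = np.asarray(replicates, dtype=float)
    se = x.std(ddof=1) / np.sqrt(x.size)
    return 100.0 * se / x.mean()

# Hypothetical replicate ppb concentrations for one monoterpene under stable conditions.
print(f"RSE = {relative_standard_error([7.1, 7.4, 7.3, 7.2]):.2f} %")
```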
Fast state estimation subject to random data loss in discrete-time nonlinear stochastic systems
NASA Astrophysics Data System (ADS)
Mahdi Alavi, S. M.; Saif, Mehrdad
2013-12-01
This paper focuses on the design of the standard observer in discrete-time nonlinear stochastic systems subject to random data loss. Under the assumption that the system response is incrementally bounded, two sufficient conditions are derived that guarantee exponential mean-square stability and fast convergence of the estimation error for the problem at hand. An efficient algorithm is also presented to obtain the observer gain. Finally, the proposed methodology is employed for monitoring a Continuous Stirred Tank Reactor (CSTR) via a wireless communication network. The effectiveness of the designed observer is extensively assessed using an experimental test-bed fabricated for performance evaluation of estimation over wireless networks under realistic radio channel conditions.
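A toy sketch of a standard observer under Bernoulli-distributed measurement loss, in the spirit of the formulation above; the dynamics, observer gain, and loss probability are invented for illustration and are not the paper's CSTR model or design conditions:

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy discrete-time nonlinear system x_{k+1} = f(x_k) + w_k, y_k = h(x_k) + v_k.
f = lambda x: np.array([0.9 * x[0] + 0.1 * np.sin(x[1]), 0.95 * x[1] + 0.05 * x[0]])
h = lambda x: np.array([x[0]])
L = np.array([[0.6], [0.3]])          # observer gain (assumed to satisfy the conditions)
p_loss = 0.3                          # probability a measurement packet is dropped

x, x_hat = np.array([1.0, -1.0]), np.zeros(2)
for k in range(200):
    y = h(x) + rng.normal(0, 0.01, 1)
    gamma = 0.0 if rng.random() < p_loss else 1.0     # random data-loss indicator
    # Standard observer: apply the correction only when the measurement arrives.
    x_hat = f(x_hat) + gamma * (L @ (y - h(x_hat)))
    x = f(x) + rng.normal(0, 0.01, 2)

print("final estimation error:", np.abs(x - x_hat))
```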
NASA Astrophysics Data System (ADS)
Sin, Kuek Jia; Cheong, Chin Wen; Hooi, Tan Siow
2017-04-01
This study aims to investigate crude oil volatility using a two-component autoregressive conditional heteroscedasticity (ARCH) model with the inclusion of an abrupt-jump feature. The model is able to capture abrupt jumps, news impact, volatility clustering, long-persistence volatility and heavy-tailed error distributions, which are commonly observed in crude oil time series. For the empirical study, we selected the WTI crude oil index from 2000 to 2016. The results show that by including multiple abrupt jumps in the ARCH model, there are significant improvements in the estimation evaluations compared with the standard ARCH models. The outcomes of this study can provide useful information for risk management and portfolio analysis in the crude oil markets.
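For illustration only, the sketch below simulates a single-component ARCH(1) process with an additive jump term, a simplification of the two-component model described above; all parameter values are assumptions, not estimates for the WTI series:

```python
import numpy as np

rng = np.random.default_rng(5)

# Simulate returns from an ARCH(1) process with occasional abrupt jumps.
T, omega, alpha = 2000, 0.05, 0.30
jump_prob, jump_scale = 0.01, 5.0

r = np.zeros(T)
h = np.full(T, omega / (1 - alpha))        # conditional variance
for t in range(1, T):
    h[t] = omega + alpha * r[t - 1] ** 2   # ARCH(1) recursion
    # Heavy-tailed innovation (Student t, scaled to unit variance) plus a rare jump.
    jump = jump_scale * rng.standard_t(4) if rng.random() < jump_prob else 0.0
    r[t] = np.sqrt(h[t]) * rng.standard_t(4) / np.sqrt(2.0) + jump

# Volatility clustering shows up as autocorrelated squared returns.
sq = r**2
acf1 = np.corrcoef(sq[1:], sq[:-1])[0, 1]
print(f"lag-1 autocorrelation of squared returns: {acf1:.3f}")
```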
Neural evidence for enhanced error detection in major depressive disorder.
Chiu, Pearl H; Deldin, Patricia J
2007-04-01
Anomalies in error processing have been implicated in the etiology and maintenance of major depressive disorder. In particular, depressed individuals exhibit heightened sensitivity to error-related information and negative environmental cues, along with reduced responsivity to positive reinforcers. The authors examined the neural activation associated with error processing in individuals diagnosed with and without major depression and the sensitivity of these processes to modulation by monetary task contingencies. The error-related negativity and error-related positivity components of the event-related potential were used to characterize error monitoring in individuals with major depressive disorder and the degree to which these processes are sensitive to modulation by monetary reinforcement. Nondepressed comparison subjects (N=17) and depressed individuals (N=18) performed a flanker task under two external motivation conditions (i.e., monetary reward for correct responses and monetary loss for incorrect responses) and a nonmonetary condition. After each response, accuracy feedback was provided. The error-related negativity component assessed the degree of anomaly in initial error detection, and the error positivity component indexed recognition of errors. Across all conditions, the depressed participants exhibited greater amplitude of the error-related negativity component, relative to the comparison subjects, and equivalent error positivity amplitude. In addition, the two groups showed differential modulation by task incentives in both components. These data implicate exaggerated early error-detection processes in the etiology and maintenance of major depressive disorder. Such processes may then recruit excessive neural and cognitive resources that manifest as symptoms of depression.
1980-03-14
[Garbled excerpt from a program listing; the recoverable inputs include the probability of element failure (Sigmar), the standard deviation of the relative error of the weights (Sigmap), the standard deviation of the phase error, the weight structures in the x and y coordinates (Q), and the number of elements.]
Liao, J. G.; Mcmurry, Timothy; Berg, Arthur
2014-01-01
Empirical Bayes methods have been extensively used for microarray data analysis by modeling the large number of unknown parameters as random effects. Empirical Bayes allows borrowing information across genes and can automatically adjust for multiple testing and selection bias. However, the standard empirical Bayes model can perform poorly if the assumed working prior deviates from the true prior. This paper proposes a new rank-conditioned inference in which the shrinkage and confidence intervals are based on the distribution of the error conditioned on rank of the data. Our approach is in contrast to a Bayesian posterior, which conditions on the data themselves. The new method is almost as efficient as standard Bayesian methods when the working prior is close to the true prior, and it is much more robust when the working prior is not close. In addition, it allows a more accurate (but also more complex) non-parametric estimate of the prior to be easily incorporated, resulting in improved inference. The new method’s prior robustness is demonstrated via simulation experiments. Application to a breast cancer gene expression microarray dataset is presented. Our R package rank.Shrinkage provides a ready-to-use implementation of the proposed methodology. PMID:23934072
Avulsion research using flume experiments and highly accurate and temporal-rich SfM datasets
NASA Astrophysics Data System (ADS)
Javernick, L.; Bertoldi, W.; Vitti, A.
2017-12-01
SfM's ability to produce high-quality, large-scale digital elevation models (DEMs) of complicated and rapidly evolving systems has made it a valuable technique for low-budget researchers and practitioners. While SfM has provided valuable datasets that capture single-flood event DEMs, there is an increasing scientific need to capture higher temporal resolution datasets that can quantify the evolutionary processes instead of pre- and post-flood snapshots. However, flood events' dangerous field conditions and image matching challenges (e.g. wind, rain) prevent quality SfM-image acquisition. Conversely, flume experiments offer opportunities to document flood events, but achieving consistent and accurate DEMs to detect subtle changes in dry and inundated areas remains a challenge for SfM (e.g. parabolic error signatures). This research aimed at investigating the impact of naturally occurring and manipulated avulsions on braided river morphology and on the encroachment of floodplain vegetation, using laboratory experiments. This required DEMs with millimeter accuracy and precision and at a temporal resolution sufficient to capture the processes. SfM was chosen as it offered the most practical method. Through redundant local network design and a meticulous ground control point (GCP) survey with a Leica Total Station in red laser configuration (reported 2 mm accuracy), the SfM models, compared to separate ground-truthing data, produced mean errors of 1.5 mm (accuracy) and standard deviations of 1.4 mm (precision) without parabolic error signatures. Lighting conditions in the flume were limited to uniform, oblique, and filtered LED strips, which removed glint and thus improved bed elevation mean errors to 4 mm; errors were further reduced by means of open-source software for refraction correction. The obtained datasets have provided the ability to quantify how small flood events with avulsion can have similar morphologic and vegetation impacts as large flood events without avulsion. Further, this research highlights the potential application of SfM in the laboratory and its ability to document physical and biological processes at greater spatial and temporal resolution. Marie Sklodowska-Curie Individual Fellowship: River-HMV, 656917
Error image aware content restoration
NASA Astrophysics Data System (ADS)
Choi, Sungwoo; Lee, Moonsik; Jung, Byunghee
2015-12-01
As the resolution of TV has significantly increased, content consumers have become increasingly sensitive to the subtlest defects in TV content. This rising standard in quality demanded by consumers has posed a new challenge in today's context where the tape-based process has transitioned to the file-based process: the transition necessitated digitizing old archives, a process which inevitably produces errors such as disordered pixel blocks, scattered white noise, or totally missing pixels. Unsurprisingly, detecting and fixing such errors requires a substantial amount of time and human labor to meet the standard demanded by today's consumers. In this paper, we introduce a novel, automated error restoration algorithm which can be applied to different types of classic errors by utilizing adjacent images while preserving the undamaged parts of an error image as much as possible. We tested our method on error images detected by our quality check system in the KBS (Korean Broadcasting System) video archive. We are also implementing the algorithm as a plugin for a well-known NLE (non-linear editing system), which is a familiar tool for quality control agents.
ASME B89.4.19 Performance Evaluation Tests and Geometric Misalignments in Laser Trackers
Muralikrishnan, B.; Sawyer, D.; Blackburn, C.; Phillips, S.; Borchardt, B.; Estler, W. T.
2009-01-01
Small and unintended offsets, tilts, and eccentricity of the mechanical and optical components in laser trackers introduce systematic errors in the measured spherical coordinates (angles and range readings) and possibly in the calculated lengths of reference artifacts. It is desirable that the tests described in the ASME B89.4.19 Standard [1] be sensitive to these geometric misalignments so that any resulting systematic errors are identified during performance evaluation. In this paper, we present some analysis, using error models and numerical simulation, of the sensitivity of the length measurement system tests and two-face system tests in the B89.4.19 Standard to misalignments in laser trackers. We highlight key attributes of the testing strategy adopted in the Standard and propose new length measurement system tests that demonstrate improved sensitivity to some misalignments. Experimental results with a tracker that is not properly error corrected for the effects of the misalignments validate claims regarding the proposed new length tests. PMID:27504211
NASA GPM GV Science Requirements
NASA Technical Reports Server (NTRS)
Smith, E.
2003-01-01
An important scientific objective of the NASA portion of the GPM Mission is to generate quantitatively-based error characterization information along with the rainrate retrievals emanating from the GPM constellation of satellites. These data must serve four main purposes: (1) they must be of sufficient quality, uniformity, and timeliness to govern the observation weighting schemes used in the data assimilation modules of numerical weather prediction models; (2) they must extend over that portion of the globe accessible by the GPM core satellite to which the NASA GV program is focused (approximately 65° inclination); (3) they must have sufficient specificity to enable detection of physically-formulated microphysical and meteorological weaknesses in the standard physical level 2 rainrate algorithms to be used in the GPM Precipitation Processing System (PPS), i.e., algorithms which will have evolved from the TRMM standard physical level 2 algorithms; and (4) they must support the use of physical error modeling as a primary validation tool and as the eventual replacement of the conventional GV approach of statistically intercomparing surface rainrates from ground and satellite measurements. This approach to ground validation research represents a paradigm shift vis-à-vis the program developed for the TRMM mission, which conducted ground validation largely as a statistical intercomparison process between raingauge-derived or radar-derived rainrates and the TRMM satellite rainrate retrievals -- long after the original satellite retrievals were archived. This approach has been able to quantify averaged rainrate differences between the satellite algorithms and the ground instruments, but has not been able to explain causes of algorithm failures or produce error information directly compatible with the cost functions of data assimilation schemes. These schemes require periodic and near-realtime bias uncertainty (i.e., global space-time distributed conditional accuracy of the retrieved rainrates) and local error covariance structure (i.e., global space-time distributed error correlation information for the local 4-dimensional space-time domain -- or in simpler terms, the matrix form of precision error). This can only be accomplished by establishing a network of high-quality, heavily instrumented supersites selectively distributed at a few oceanic, continental, and coastal sites. Economics and pragmatics dictate that the network must be made up of a relatively small number of sites (6-8) created through international cooperation. This presentation will address some of the details of the methodology behind the error characterization approach, some proposed solutions for expanding site-developed error properties to regional scales, a data processing and communications concept that would enable rapid implementation of algorithm improvement by the algorithm developers, and the likely available options for developing the supersite network.
Chen, Yi-Ching; Lin, Yen-Ting; Chang, Gwo-Ching; Hwang, Ing-Shiou
2017-01-01
The detection of error information is an essential prerequisite of a feedback-based movement. This study investigated the differential behavior and neurophysiological mechanisms of a cyclic force-tracking task using error-reducing and error-enhancing feedback. The discharge patterns of a relatively large number of motor units (MUs) were assessed with custom-designed multi-channel surface electromyography following mathematical decomposition of the experimentally-measured signals. Force characteristics, force-discharge relation, and phase-locking cortical activities in the contralateral motor cortex to individual MUs were contrasted among the low (LSF), normal (NSF), and high scaling factor (HSF) conditions, in which the sizes of online execution errors were displayed with various amplification ratios. Along with a spectral shift of the force output toward a lower band, force output with more phase lead became less irregular, and tracking accuracy was worse in the LSF condition than in the HSF condition. The coherent discharge of high phasic (HP) MUs with the target signal was greater, and inter-spike intervals were larger, in the LSF condition than in the HSF condition. Force-tracking in the LSF condition manifested with stronger phase-locked EEG activity in the contralateral motor cortex to discharge of the HP MUs (LSF > NSF, HSF). The coherent discharge of the HP MUs during the cyclic force-tracking predominated the force-discharge relation, which increased inversely to the error scaling factor. In conclusion, the size of visualized error gates motor unit discharge, force-discharge relation, and the relative influences of the feedback and feedforward processes on force control. A smaller visualized error size favors voluntary force control using a feedforward process, in relation to a selective central modulation that enhances the coherent discharge of HP MUs. PMID:28348530
The Cut-Score Operating Function: A New Tool to Aid in Standard Setting
ERIC Educational Resources Information Center
Grabovsky, Irina; Wainer, Howard
2017-01-01
In this essay, we describe the construction and use of the Cut-Score Operating Function in aiding standard setting decisions. The Cut-Score Operating Function shows the relation between the cut-score chosen and the consequent error rate. It allows error rates to be defined by multiple loss functions and will show the behavior of each loss…
A Hands-On Exercise Improves Understanding of the Standard Error of the Mean
ERIC Educational Resources Information Center
Ryan, Robert S.
2006-01-01
One of the most difficult concepts for statistics students is the standard error of the mean. To improve understanding of this concept, 1 group of students used a hands-on procedure to sample from small populations representing either a true or false null hypothesis. The distribution of 120 sample means (n = 3) from each population had standard…
ERIC Educational Resources Information Center
Li, Deping; Oranje, Andreas
2007-01-01
Two versions of a general method for approximating standard error of regression effect estimates within an IRT-based latent regression model are compared. The general method is based on Binder's (1983) approach, accounting for complex samples and finite populations by Taylor series linearization. In contrast, the current National Assessment of…
The Measurement and Correction of the Periodic Error of the LX200-16 Telescope Driving System
NASA Astrophysics Data System (ADS)
Jeong, Jang Hae; Lee, Young Sam; Lee, Chung Uk
2000-06-01
We examined and corrected the periodic error of the LX200-16 Telescope driving system of Chungbuk National University Campus Observatory. Before correcting, the standard deviation of the periodic error in the east-west direction was 7.″2. After correcting, we found that the periodic error was reduced to 1.″2.
ERIC Educational Resources Information Center
Hodgson, Catherine; Lambon Ralph, Matthew A.
2008-01-01
Semantic errors are commonly found in semantic dementia (SD) and some forms of stroke aphasia and provide insights into semantic processing and speech production. Low error rates are found in standard picture naming tasks in normal controls. In order to increase error rates and thus provide an experimental model of aphasic performance, this study…
NASA Astrophysics Data System (ADS)
Li, Xiongwei; Wang, Zhe; Lui, Siu-Lung; Fu, Yangting; Li, Zheng; Liu, Jianming; Ni, Weidou
2013-10-01
A bottleneck of the wide commercial application of laser-induced breakdown spectroscopy (LIBS) technology is its relatively high measurement uncertainty. A partial least squares (PLS) based normalization method was proposed to improve pulse-to-pulse measurement precision for LIBS based on our previous spectrum standardization method. The proposed model utilized multi-line spectral information of the measured element and characterized the signal fluctuations due to the variation of plasma characteristic parameters (plasma temperature, electron number density, and total number density) for signal uncertainty reduction. The model was validated by the application of copper concentration prediction in 29 brass alloy samples. The results demonstrated an improvement on both measurement precision and accuracy over the generally applied normalization as well as our previously proposed simplified spectrum standardization method. The average relative standard deviation (RSD), average of the standard error (error bar), the coefficient of determination (R2), the root-mean-square error of prediction (RMSEP), and average value of the maximum relative error (MRE) were 1.80%, 0.23%, 0.992, 1.30%, and 5.23%, respectively, while those for the generally applied spectral area normalization were 3.72%, 0.71%, 0.973, 1.98%, and 14.92%, respectively.
Cost-effectiveness of the streamflow-gaging program in Wyoming
Druse, S.A.; Wahl, K.L.
1988-01-01
This report documents the results of a cost-effectiveness study of the streamflow-gaging program in Wyoming. Regression analysis or hydrologic flow-routing techniques were considered for 24 combinations of stations from a 139-station network operated in 1984 to investigate the suitability of techniques for simulating streamflow records. Only one station was determined to have sufficient accuracy in the regression analysis to consider discontinuance of the gage. The evaluation of the gaging-station network, which included the use of associated uncertainty in streamflow records, is limited to the nonwinter operation of the 47 stations operated by the Riverton Field Office of the U.S. Geological Survey. The current (1987) travel routes and measurement frequencies require a budget of $264,000 and result in an average standard error in streamflow records of 13.2%. Changes in routes and station visits using the same budget could optimally reduce the standard error by 1.6%. Budgets evaluated ranged from $235,000 to $400,000. A $235,000 budget increased the optimal average standard error per station from 11.6 to 15.5%, and a $400,000 budget could reduce it to 6.6%. For all budgets considered, lost record accounts for about 40% of the average standard error. (USGS)
Incorporating Measurement Error from Modeled Air Pollution Exposures into Epidemiological Analyses.
Samoli, Evangelia; Butland, Barbara K
2017-12-01
Outdoor air pollution exposures used in epidemiological studies are commonly predicted from spatiotemporal models incorporating limited measurements, temporal factors, geographic information system variables, and/or satellite data. Measurement error in these exposure estimates leads to imprecise estimation of health effects and their standard errors. We reviewed methods for measurement error correction that have been applied in epidemiological studies that use model-derived air pollution data. We identified seven cohort studies and one panel study that have employed measurement error correction methods. These methods included regression calibration, risk set regression calibration, regression calibration with instrumental variables, the simulation extrapolation approach (SIMEX), and methods under the non-parametric or parametric bootstrap. Corrections resulted in small increases in the absolute magnitude of the health effect estimate and its standard error under most scenarios. Limited application of measurement error correction methods in air pollution studies may be attributed to the absence of exposure validation data and the methodological complexity of the proposed methods. Future epidemiological studies should consider in their design phase the requirements for the measurement error correction method to be applied later, while methodological advances are needed in the multi-pollutant setting.
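As an illustration of the simplest of the reviewed approaches, regression calibration replaces the error-prone modeled exposure with its expected value given a validation subset in which a gold-standard measurement is available, and the health model is then refitted with the calibrated exposure. The sketch below is a minimal, hypothetical example (synthetic data, numpy only), not a reconstruction of any of the reviewed studies; in practice the standard error of the corrected estimate is usually obtained by bootstrap.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: z is the modeled (error-prone) exposure for the full cohort,
# x_val / z_val are gold-standard and modeled exposures in a small validation subset.
n, n_val = 5000, 300
x_true = rng.normal(10, 3, n)            # unobserved true exposure
z = x_true + rng.normal(0, 2, n)         # modeled exposure with classical-type error
y = 0.05 * x_true + rng.normal(0, 1, n)  # health outcome (linear for simplicity)

val = rng.choice(n, n_val, replace=False)
x_val, z_val = x_true[val], z[val]

# Stage 1: calibration model E[X | Z] fitted in the validation subset.
A = np.column_stack([np.ones(n_val), z_val])
gamma, *_ = np.linalg.lstsq(A, x_val, rcond=None)

# Stage 2: health model refitted with the calibrated exposure.
x_cal = gamma[0] + gamma[1] * z
B = np.column_stack([np.ones(n), x_cal])
beta_cal, *_ = np.linalg.lstsq(B, y, rcond=None)

# Naive fit with the uncalibrated exposure, for comparison (attenuated slope expected).
B_naive = np.column_stack([np.ones(n), z])
beta_naive, *_ = np.linalg.lstsq(B_naive, y, rcond=None)

print("naive slope:", beta_naive[1], "calibrated slope:", beta_cal[1], "true slope: 0.05")
```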
Galerkin v. least-squares Petrov–Galerkin projection in nonlinear model reduction
Carlberg, Kevin Thomas; Barone, Matthew F.; Antil, Harbir
2016-10-20
Least-squares Petrov–Galerkin (LSPG) model-reduction techniques such as the Gauss–Newton with Approximated Tensors (GNAT) method have shown promise, as they have generated stable, accurate solutions for large-scale turbulent, compressible flow problems where standard Galerkin techniques have failed. However, there has been limited comparative analysis of the two approaches. This is due in part to difficulties arising from the fact that Galerkin techniques perform optimal projection associated with residual minimization at the time-continuous level, while LSPG techniques do so at the time-discrete level. This work provides a detailed theoretical and computational comparison of the two techniques for two common classes of time integrators: linear multistep schemes and Runge–Kutta schemes. We present a number of new findings, including conditions under which the LSPG ROM has a time-continuous representation, conditions under which the two techniques are equivalent, and time-discrete error bounds for the two approaches. Perhaps most surprisingly, we demonstrate both theoretically and computationally that decreasing the time step does not necessarily decrease the error for the LSPG ROM; instead, the time step should be 'matched' to the spectral content of the reduced basis. In numerical experiments carried out on a turbulent compressible-flow problem with over one million unknowns, we show that increasing the time step to an intermediate value decreases both the error and the simulation time of the LSPG reduced-order model by an order of magnitude.
Effect of retinal defocus on basketball free throw shooting performance.
Bulson, Ryan C; Ciuffreda, Kenneth J; Hayes, John; Ludlam, Diana P
2015-07-01
Vision plays a critical role in athletic performance; however, previous studies have demonstrated that a variety of simulated athletic sensorimotor tasks can be surprisingly resilient to retinal defocus (blurred vision). The purpose of the present study was to extend this work to determine the effect of retinal defocus on overall basketball free throw performance, as well as for the factors gender, refractive error and experience. Forty-four young adult participants of both genders were recruited. They had a range of refractive errors and basketball experience. Each performed 20 standard basketball free throws under five lens defocus conditions in a randomised manner: plano, +1.50 D, +3.00 D, +4.50 D and +10.00 D. Overall, free throw performance was significantly reduced under the +10.00 D lens defocus condition only. Previous experience, but neither refractive error nor gender, yielded a statistically significant difference in performance. Consistent with previous studies of complex sensorimotor tasks, basketball free throw performance was resilient to low and moderate levels of retinal defocus. Thus, for a relatively non-dynamic motor task at a fixed far distance, such as the basketball free throw, precise visual clarity was not critical. Other factors such as motor memory may be important. However, in the dynamic athletic competitive environment it is likely that visual clarity plays a more critical role in one's performance level, at least for specific task demands. © 2015 The Authors. Clinical and Experimental Optometry © 2015 Optometry Australia.
Radiology's Achilles' heel: error and variation in the interpretation of the Röntgen image.
Robinson, P J
1997-11-01
The performance of the human eye and brain has failed to keep pace with the enormous technical progress in the first full century of radiology. Errors and variations in interpretation now represent the weakest aspect of clinical imaging. Those interpretations which differ from the consensus view of a panel of "experts" may be regarded as errors; where experts fail to achieve consensus, differing reports are regarded as "observer variation". Errors arise from poor technique, failures of perception, lack of knowledge and misjudgments. Observer variation is substantial and should be taken into account when different diagnostic methods are compared; in many cases the difference between observers outweighs the difference between techniques. Strategies for reducing error include attention to viewing conditions, training of the observers, availability of previous films and relevant clinical data, dual or multiple reporting, standardization of terminology and report format, and assistance from computers. Digital acquisition and display will probably not affect observer variation but the performance of radiologists, as measured by receiver operating characteristic (ROC) analysis, may be improved by computer-directed search for specific image features. Other current developments show that where image features can be comprehensively described, computer analysis can replace the perception function of the observer, whilst the function of interpretation can in some cases be performed better by artificial neural networks. However, computer-assisted diagnosis is still in its infancy and complete replacement of the human observer is as yet a remote possibility.
Comparative study of standard space and real space analysis of quantitative MR brain data.
Aribisala, Benjamin S; He, Jiabao; Blamire, Andrew M
2011-06-01
To compare the robustness of region of interest (ROI) analysis of magnetic resonance imaging (MRI) brain data in real space with analysis in standard space and to test the hypothesis that standard space image analysis introduces more partial volume effect errors compared to analysis of the same dataset in real space. Twenty healthy adults with no history or evidence of neurological diseases were recruited; high-resolution T(1)-weighted, quantitative T(1), and B(0) field-map measurements were collected. Algorithms were implemented to perform analysis in real and standard space and used to apply a simple standard ROI template to quantitative T(1) datasets. Regional relaxation values and histograms for both gray and white matter tissues classes were then extracted and compared. Regional mean T(1) values for both gray and white matter were significantly lower using real space compared to standard space analysis. Additionally, regional T(1) histograms were more compact in real space, with smaller right-sided tails indicating lower partial volume errors compared to standard space analysis. Standard space analysis of quantitative MRI brain data introduces more partial volume effect errors biasing the analysis of quantitative data compared to analysis of the same dataset in real space. Copyright © 2011 Wiley-Liss, Inc.
Effectiveness of Toyota process redesign in reducing thyroid gland fine-needle aspiration error.
Raab, Stephen S; Grzybicki, Dana Marie; Sudilovsky, Daniel; Balassanian, Ronald; Janosky, Janine E; Vrbin, Colleen M
2006-10-01
Our objective was to determine whether the Toyota Production System process redesign resulted in diagnostic error reduction for patients who underwent cytologic evaluation of thyroid nodules. In this longitudinal, nonconcurrent cohort study, we compared the diagnostic error frequency of a thyroid aspiration service before and after implementation of error reduction initiatives consisting of adoption of a standardized diagnostic terminology scheme and an immediate interpretation service. A total of 2,424 patients underwent aspiration. Following terminology standardization, the false-negative rate decreased from 41.8% to 19.1% (P = .006), the specimen nondiagnostic rate increased from 5.8% to 19.8% (P < .001), and the sensitivity increased from 70.2% to 90.6% (P < .001). Cases with an immediate interpretation had a lower noninterpretable specimen rate than those without immediate interpretation (P < .001). Toyota process change led to significantly fewer diagnostic errors for patients who underwent thyroid fine-needle aspiration.
Huang, Jidong; Emery, Sherry
2016-01-01
Background: Social media have transformed the communications landscape. People increasingly obtain news and health information online and via social media. Social media platforms also serve as novel sources of rich observational data for health research (including infodemiology, infoveillance, and digital disease detection). While the number of studies using social data is growing rapidly, very few of these studies transparently outline their methods for collecting, filtering, and reporting those data. Keywords and search filters applied to social data form the lens through which researchers may observe what and how people communicate about a given topic. Without a properly focused lens, research conclusions may be biased or misleading. Standards of reporting data sources and quality are needed so that data scientists and consumers of social media research can evaluate and compare methods and findings across studies. Objective: We aimed to develop and apply a framework of social media data collection and quality assessment and to propose a reporting standard, which researchers and reviewers may use to evaluate and compare the quality of social data across studies. Methods: We propose a conceptual framework consisting of three major steps in collecting social media data: develop, apply, and validate search filters. This framework is based on two criteria: retrieval precision (how much of retrieved data is relevant) and retrieval recall (how much of the relevant data is retrieved). We then discuss two conditions that estimation of retrieval precision and recall rely on—accurate human coding and full data collection—and how to calculate these statistics in cases that deviate from the two ideal conditions. We then apply the framework on a real-world example using approximately 4 million tobacco-related tweets collected from the Twitter firehose. Results: We developed and applied a search filter to retrieve e-cigarette–related tweets from the archive based on three keyword categories: devices, brands, and behavior. The search filter retrieved 82,205 e-cigarette–related tweets from the archive and was validated. Retrieval precision was calculated above 95% in all cases. Retrieval recall was 86% assuming ideal conditions (no human coding errors and full data collection), 75% when unretrieved messages could not be archived, 86% assuming no false negative errors by coders, and 93% allowing both false negative and false positive errors by human coders. Conclusions: This paper sets forth a conceptual framework for the filtering and quality evaluation of social data that addresses several common challenges and moves toward establishing a standard of reporting social data. Researchers should clearly delineate data sources, how data were accessed and collected, and the search filter building process and how retrieval precision and recall were calculated. The proposed framework can be adapted to other public social media platforms. PMID:26920122
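The two criteria at the core of the framework can be written down directly. The sketch below uses hypothetical coded counts (not the study's tobacco data) to show how retrieval precision and retrieval recall are computed once samples of retrieved and unretrieved messages have been human-coded.

```python
# Minimal sketch of the framework's two criteria, with hypothetical counts.

def retrieval_precision(coded_retrieved_relevant: int, coded_retrieved_total: int) -> float:
    """Share of retrieved messages that coders judged relevant."""
    return coded_retrieved_relevant / coded_retrieved_total

def retrieval_recall(relevant_retrieved: int, relevant_total: int) -> float:
    """Share of all relevant messages (retrieved + missed) that the filter captured.
    Estimating relevant_total requires coding a sample of the unretrieved archive."""
    return relevant_retrieved / relevant_total

# Hypothetical coded sample: 500 retrieved tweets coded, 478 judged relevant;
# coding a slice of the unretrieved archive suggests ~90 relevant tweets were missed.
precision = retrieval_precision(478, 500)
recall = retrieval_recall(478, 478 + 90)
print(f"precision={precision:.1%}, recall={recall:.1%}")
```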
Observing human movements helps decoding environmental forces.
Zago, Myrka; La Scaleia, Barbara; Miller, William L; Lacquaniti, Francesco
2011-11-01
Vision of human actions can affect several features of visual motion processing, as well as the motor responses of the observer. Here, we tested the hypothesis that action observation helps decoding environmental forces during the interception of a decelerating target within a brief time window, a task intrinsically very difficult. We employed a factorial design to evaluate the effects of scene orientation (normal or inverted) and target gravity (normal or inverted). Button-press triggered the motion of a bullet, a piston, or a human arm. We found that the timing errors were smaller for upright scenes irrespective of gravity direction in the Bullet group, while the errors were smaller for the standard condition of normal scene and gravity in the Piston group. In the Arm group, instead, performance was better when the directions of scene and target gravity were concordant, irrespective of whether both were upright or inverted. These results suggest that the default viewer-centered reference frame is used with inanimate scenes, such as those of the Bullet and Piston protocols. Instead, the presence of biological movements in animate scenes (as in the Arm protocol) may help processing target kinematics under the ecological conditions of coherence between scene and target gravity directions.
The influence of LED lighting on task accuracy: time of day, gender and myopia effects
NASA Astrophysics Data System (ADS)
Rao, Feng; Chan, A. H. S.; Zhu, Xi-Fang
2017-07-01
In this research, task errors were obtained during performance of a marker location task in which the markers were shown on a computer screen under nine LED lighting conditions; three illuminances (100, 300 and 500 lx) and three color temperatures (3000, 4500 and 6500 K). A total of 47 students participated voluntarily in these tasks. The results showed that task errors in the morning were small and nearly constant across the nine lighting conditions. However in the afternoon, the task errors were significantly larger and varied across lighting conditions. The largest errors for the afternoon session occurred when the color temperature was 4500 K and illuminance 500 lx. There were significant differences between task errors in the morning and afternoon sessions. No significant difference between females and males was found. Task errors for high myopia students were significantly larger than for the low myopia students under the same lighting conditions. In summary, the influence of LED lighting on task accuracy during office hours was not gender dependent, but was time of day and myopia dependent.
Scaglioni-Solano, Pietro; Aragón-Vargas, Luis F
2014-06-01
Standing balance is an important motor task. Postural instability associated with age typically arises from deterioration of peripheral sensory systems. The modified Clinical Test of Sensory Integration for Balance and the Tandem test have been used to screen for balance. Timed tests present some limitations, whereas quantification of the motions of the center of pressure (CoP) with portable and inexpensive equipment may help to improve the sensitivity of these tests and give the possibility of widespread use. This study determines the validity and reliability of the Wii Balance Board (Wii BB) to quantify CoP motions during the mentioned tests. Thirty-seven older adults completed three repetitions of five balance conditions: eyes open, eyes closed, eyes open on a compliant surface, eyes closed on a compliant surface, and tandem stance, all performed on a force plate and a Wii BB simultaneously. Twenty participants repeated the trials for reliability purposes. CoP displacement was the main outcome measure. Regression analysis indicated that the Wii BB has excellent concurrent validity, and Bland-Altman plots showed good agreement between devices with small mean differences and no relationship between the difference and the mean. Intraclass correlation coefficients (ICCs) indicated modest-to-excellent test-retest reliability (ICC=0.64-0.85). Standard error of measurement and minimal detectable change were similar for both devices, except the 'eyes closed' condition, with greater standard error of measurement for the Wii BB. In conclusion, the Wii BB is shown to be a valid and reliable method to quantify CoP displacement in older adults.
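For readers wanting to reproduce the reliability quantities mentioned above, the conventional test-retest formulas are SEM = SD·sqrt(1 − ICC) and MDC95 = 1.96·sqrt(2)·SEM. The abstract does not state which exact variants the authors used, so the sketch below is purely illustrative, with hypothetical numbers.

```python
import math

def sem_from_icc(sd_between_subjects: float, icc: float) -> float:
    # Standard error of measurement: SEM = SD * sqrt(1 - ICC)
    return sd_between_subjects * math.sqrt(1.0 - icc)

def mdc95(sem: float) -> float:
    # Minimal detectable change at 95% confidence: MDC95 = 1.96 * sqrt(2) * SEM
    return 1.96 * math.sqrt(2.0) * sem

# Hypothetical numbers: between-subject SD of CoP displacement = 12 mm, ICC = 0.85
sem = sem_from_icc(12.0, 0.85)
print(f"SEM = {sem:.2f} mm, MDC95 = {mdc95(sem):.2f} mm")
```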
NASA Astrophysics Data System (ADS)
Moustris, Konstantinos; Tsiros, Ioannis X.; Tseliou, Areti; Nastos, Panagiotis
2018-04-01
The present study deals with the development and application of artificial neural network models (ANNs) to estimate the values of a complex human thermal comfort-discomfort index associated with urban heat and cool island conditions inside various urban clusters using as only inputs air temperature data from a standard meteorological station. The index used in the study is the Physiologically Equivalent Temperature (PET) index which requires as inputs, among others, air temperature, relative humidity, wind speed, and radiation (short- and long-wave components). For the estimation of PET hourly values, ANN models were developed, appropriately trained, and tested. Model results are compared to values calculated by the PET index based on field monitoring data for various urban clusters (street, square, park, courtyard, and gallery) in the city of Athens (Greece) during an extreme hot weather summer period. For the evaluation of the predictive ability of the developed ANN models, several statistical evaluation indices were applied: the mean bias error, the root mean square error, the index of agreement, the coefficient of determination, the true predictive rate, the false alarm rate, and the Success Index. According to the results, it seems that ANNs present a remarkable ability to estimate hourly PET values within various urban clusters using only hourly values of air temperature. This is very important in cases where the human thermal comfort-discomfort conditions have to be analyzed and the only available parameter is air temperature.
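Several of the evaluation indices named above have standard definitions. The sketch below shows the usual formulas for mean bias error, root-mean-square error, and Willmott's index of agreement, applied to hypothetical hourly PET values; the exact formulations used by the authors are not given in the abstract.

```python
import numpy as np

def evaluation_indices(pred, obs):
    """Mean bias error, root-mean-square error, and Willmott's index of agreement."""
    pred, obs = np.asarray(pred, float), np.asarray(obs, float)
    mbe = np.mean(pred - obs)
    rmse = np.sqrt(np.mean((pred - obs) ** 2))
    obar = obs.mean()
    ia = 1.0 - np.sum((pred - obs) ** 2) / np.sum(
        (np.abs(pred - obar) + np.abs(obs - obar)) ** 2)
    return mbe, rmse, ia

# Hypothetical hourly PET values (degrees C): ANN estimates vs. field-based calculations
pet_ann = [31.2, 33.8, 36.1, 37.4, 35.9]
pet_obs = [30.5, 34.2, 36.8, 38.0, 35.1]
print(evaluation_indices(pet_ann, pet_obs))
```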
Ensemble-type numerical uncertainty information from single model integrations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rauser, Florian, E-mail: florian.rauser@mpimet.mpg.de; Marotzke, Jochem; Korn, Peter
2015-07-01
We suggest an algorithm that quantifies the discretization error of time-dependent physical quantities of interest (goals) for numerical models of geophysical fluid dynamics. The goal discretization error is estimated using a sum of weighted local discretization errors. The key feature of our algorithm is that these local discretization errors are interpreted as realizations of a random process. The random process is determined by the model and the flow state. From a class of local error random processes we select a suitable specific random process by integrating the model over a short time interval at different resolutions. The weights of the influences of the local discretization errors on the goal are modeled as goal sensitivities, which are calculated via automatic differentiation. The integration of the weighted realizations of local error random processes yields a posterior ensemble of goal approximations from a single run of the numerical model. From the posterior ensemble we derive the uncertainty information of the goal discretization error. This algorithm bypasses the requirement of detailed knowledge about the model's discretization to generate numerical error estimates. The algorithm is evaluated for the spherical shallow-water equations. For two standard test cases we successfully estimate the error of regional potential energy, track its evolution, and compare it to standard ensemble techniques. The posterior ensemble shares linear-error-growth properties with ensembles of multiple model integrations when comparably perturbed. The posterior ensemble numerical error estimates are of comparable size as those of a stochastic physics ensemble.
Neurochemical enhancement of conscious error awareness.
Hester, Robert; Nandam, L Sanjay; O'Connell, Redmond G; Wagner, Joe; Strudwick, Mark; Nathan, Pradeep J; Mattingley, Jason B; Bellgrove, Mark A
2012-02-22
How the brain monitors ongoing behavior for performance errors is a central question of cognitive neuroscience. Diminished awareness of performance errors limits the extent to which humans engage in corrective behavior and has been linked to loss of insight in a number of psychiatric syndromes (e.g., attention deficit hyperactivity disorder, drug addiction). These conditions share alterations in monoamine signaling that may influence the neural mechanisms underlying error processing, but our understanding of the neurochemical drivers of these processes is limited. We conducted a randomized, double-blind, placebo-controlled, cross-over design of the influence of methylphenidate, atomoxetine, and citalopram on error awareness in 27 healthy participants. The error awareness task, a go/no-go response inhibition paradigm, was administered to assess the influence of monoaminergic agents on performance errors during fMRI data acquisition. A single dose of methylphenidate, but not atomoxetine or citalopram, significantly improved the ability of healthy volunteers to consciously detect performance errors. Furthermore, this behavioral effect was associated with a strengthening of activation differences in the dorsal anterior cingulate cortex and inferior parietal lobe during the methylphenidate condition for errors made with versus without awareness. Our results have implications for the understanding of the neurochemical underpinnings of performance monitoring and for the pharmacological treatment of a range of disparate clinical conditions that are marked by poor awareness of errors.
Jarvis, Stuart; Kovacs, Caroline; Briggs, Jim; Meredith, Paul; Schmidt, Paul E; Featherstone, Peter I; Prytherch, David R; Smith, Gary B
2015-08-01
Although the weightings to be summed in an early warning score (EWS) calculation are small, calculation and other errors occur frequently, potentially impacting on hospital efficiency and patient care. Use of a simpler EWS has the potential to reduce errors. We truncated 36 published 'standard' EWSs so that, for each component, only two scores were possible: 0 when the standard EWS scored 0 and 1 when the standard EWS scored greater than 0. Using 1,564,153 vital signs observation sets from 68,576 patient care episodes, we compared the discrimination (measured using the area under the receiver operating characteristic curve, AUROC) of each standard EWS and its truncated 'binary' equivalent. The binary EWSs had lower AUROCs than the standard EWSs in most cases, although for some the difference was not significant. One system, the binary form of the National Early Warning Score (NEWS), had significantly better discrimination than all standard EWSs, except for NEWS. Overall, Binary NEWS at a trigger value of 3 would detect as many adverse outcomes as are detected by NEWS using a trigger of 5, but would require a 15% higher triggering rate. The performance of Binary NEWS is only exceeded by that of standard NEWS. It may be that Binary NEWS, as a simplified system, can be used with fewer errors. However, its introduction could lead to significant increases in workload for ward and rapid response team staff. The balance between fewer errors and a potentially greater workload needs further investigation. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
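The truncation rule is simple enough to state in a few lines of code. The sketch below is a generic illustration with placeholder component scores, not the actual NEWS weightings or thresholds.

```python
# Minimal sketch of the truncation described above: each EWS component that would
# score > 0 under the standard scheme contributes 1 to the binary score, otherwise 0.
# The component scores below are placeholders, not the published NEWS weightings.

def binary_ews(standard_component_scores):
    """Collapse per-parameter standard EWS scores (0-3) to 0/1 and sum them."""
    return sum(1 for s in standard_component_scores if s > 0)

standard_scores = [2, 0, 1, 0, 3, 0, 1]   # hypothetical per-vital-sign scores
print("standard EWS =", sum(standard_scores), "binary EWS =", binary_ews(standard_scores))
# The study found a binary trigger of 3 detected as many adverse outcomes as
# standard NEWS at a trigger of 5, at the cost of a higher triggering rate.
```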
Kappa statistic for clustered dichotomous responses from physicians and patients.
Kang, Chaeryon; Qaqish, Bahjat; Monaco, Jane; Sheridan, Stacey L; Cai, Jianwen
2013-09-20
The bootstrap method for estimating the standard error of the kappa statistic in the presence of clustered data is evaluated. Such data arise, for example, in assessing agreement between physicians and their patients regarding their understanding of the physician-patient interaction and discussions. We propose a computationally efficient procedure for generating correlated dichotomous responses for physicians and assigned patients for simulation studies. The simulation result demonstrates that the proposed bootstrap method produces better estimate of the standard error and better coverage performance compared with the asymptotic standard error estimate that ignores dependence among patients within physicians with at least a moderately large number of clusters. We present an example of an application to a coronary heart disease prevention study. Copyright © 2013 John Wiley & Sons, Ltd.
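A minimal sketch of the idea, assuming dichotomous ratings and hypothetical data: Cohen's kappa is computed on the pooled responses, and its standard error is estimated by resampling whole clusters (physicians with their assigned patients) with replacement. This illustrates the general cluster-bootstrap approach rather than the authors' exact procedure.

```python
import numpy as np

def cohen_kappa(r1, r2):
    """Unweighted Cohen's kappa for two dichotomous ratings (0/1)."""
    r1, r2 = np.asarray(r1), np.asarray(r2)
    po = np.mean(r1 == r2)                       # observed agreement
    p1, p2 = r1.mean(), r2.mean()
    pe = p1 * p2 + (1 - p1) * (1 - p2)           # chance agreement
    return (po - pe) / (1 - pe)

def cluster_bootstrap_se(clusters, n_boot=2000, seed=1):
    """Bootstrap SE of kappa, resampling whole clusters (e.g., physicians).
    `clusters` is a list of (physician_ratings, patient_ratings) arrays per cluster."""
    rng = np.random.default_rng(seed)
    stats = []
    for _ in range(n_boot):
        idx = rng.integers(0, len(clusters), len(clusters))
        r1 = np.concatenate([clusters[i][0] for i in idx])
        r2 = np.concatenate([clusters[i][1] for i in idx])
        stats.append(cohen_kappa(r1, r2))
    return np.std(stats, ddof=1)

# Hypothetical data: 3 physicians, each with several patients; 0/1 = "topic discussed?"
clusters = [(np.array([1, 1, 0, 1]), np.array([1, 0, 0, 1])),
            (np.array([0, 1, 1]),    np.array([0, 1, 1])),
            (np.array([1, 0, 0, 0]), np.array([1, 0, 1, 0]))]
print("kappa SE (cluster bootstrap):", cluster_bootstrap_se(clusters))
```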
Multicollinearity and Regression Analysis
NASA Astrophysics Data System (ADS)
Daoud, Jamal I.
2017-12-01
In regression analysis it is expected that the response correlates with the predictor(s), but correlation among the predictors themselves is undesirable. The number of predictors included in the regression model depends on many factors, such as historical data and experience. Ultimately, the selection of the most important predictors is a judgment left to the researcher. Multicollinearity is a phenomenon in which two or more predictors are correlated; when this happens, the standard errors of the coefficients increase [8]. Increased standard errors mean that the coefficients for some or all independent variables may be found not to be significantly different from zero. In other words, by overinflating the standard errors, multicollinearity makes some variables statistically insignificant when they should be significant. In this paper we focus on multicollinearity, its causes, and its consequences for the reliability of the regression model.
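A common way to diagnose the problem described above is the variance inflation factor, VIF_j = 1/(1 − R_j²), where R_j² comes from regressing predictor j on the remaining predictors. The sketch below (synthetic data) shows the calculation; values above roughly 5-10 are usually taken as a warning sign. This diagnostic is standard practice rather than something specific to this paper.

```python
import numpy as np

def variance_inflation_factors(X):
    """VIF_j = 1 / (1 - R_j^2), where R_j^2 comes from regressing predictor j
    on the remaining predictors."""
    X = np.asarray(X, float)
    n, p = X.shape
    vifs = []
    for j in range(p):
        y = X[:, j]
        others = np.delete(X, j, axis=1)
        A = np.column_stack([np.ones(n), others])
        coef, *_ = np.linalg.lstsq(A, y, rcond=None)
        resid = y - A @ coef
        r2 = 1.0 - resid.var() / y.var()
        vifs.append(1.0 / (1.0 - r2))
    return vifs

# Hypothetical predictors: x3 is nearly a linear combination of x1 and x2,
# so its VIF (and those of x1, x2) should be inflated.
rng = np.random.default_rng(0)
x1, x2 = rng.normal(size=200), rng.normal(size=200)
x3 = 0.9 * x1 + 0.1 * x2 + rng.normal(scale=0.05, size=200)
print(variance_inflation_factors(np.column_stack([x1, x2, x3])))
```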
Fujioka, Toru; Takiguchi, Shinichiro; Yatsuga, Chiho; Hiratani, Michio; Hong, Kang-E M; Shin, Min-Sup; Cho, Sungzoon; Kosaka, Hirotaka; Tomoda, Akemi
2016-01-01
Objective: This study was conducted to validate the Advanced Test of Attention (ATA) of the visual attention version for Japanese children with attention deficit/hyperactivity disorder (ADHD) and to evaluate the efficacy of methylphenidate (OROS-MPH) and atomoxetine medications. Methods: To assess pharmacotherapy efficacy, the visual version of ATA was administered to 42 children with ADHD. Results were assessed using discriminant analysis, ANOVA for indices of ATA before and after medication treatment, and correlation analysis between the improvement of indices of ATA and clinical symptoms during medication treatment. Results: Discriminant analysis showed that 69.0% of ADHD children were assigned correctly. The T score of commission errors increased as the trial progressed in the medication-off condition. T scores of commission errors and standard deviation of response times in the medication-on condition were low compared to the medication-off condition. A few significant correlations were found between the improvements of indices of ATA and the ADHD-Rating Scale (RS) during treatment. Conclusion: The performance of the visual version of ATA in the medication-off condition reflected the features of ADHD. Furthermore, the medication treatment effects were confirmed sufficiently. In addition, results suggest that indices of ATA reflected aspects of ADHD symptoms that are difficult to elucidate with the ADHD-RS. For assessing symptoms and effects of medical treatment in children with ADHD, the ATA might be a useful assessment tool. PMID:26792044
NASA Technical Reports Server (NTRS)
Troy, B. E., Jr.; Maier, E. J.
1975-01-01
The effects of the grid transparency and finite collector size on the values of thermal ion density and temperature determined by the standard RPA (retarding potential analyzer) analysis method are investigated. The current-voltage curves calculated for varying RPA parameters and a given ion mass, temperature, and density are analyzed by the standard RPA method. It is found that only small errors in temperature and density are introduced for an RPA with typical dimensions, and that even when the density error is substantial for nontypical dimensions, the temperature error remains minimum.
Pilot Age and Error in Air-Taxi Crashes
Rebok, George W.; Qiang, Yandong; Baker, Susan P.; Li, Guohua
2010-01-01
Introduction: The associations of pilot error with the type of flight operations and basic weather conditions are well documented. The correlation between pilot characteristics and error is less clear. This study aims to examine whether pilot age is associated with the prevalence and patterns of pilot error in air-taxi crashes. Methods: Investigation reports from the National Transportation Safety Board for crashes involving non-scheduled Part 135 operations (i.e., air taxis) in the United States between 1983 and 2002 were reviewed to identify pilot error and other contributing factors. Crash circumstances and the presence and type of pilot error were analyzed in relation to pilot age using Chi-square tests. Results: Of the 1751 air-taxi crashes studied, 28% resulted from mechanical failure, 25% from loss of control at landing or takeoff, 7% from visual flight rule conditions into instrument meteorological conditions, 7% from fuel starvation, 5% from taxiing, and 28% from other causes. Crashes among older pilots were more likely to occur during the daytime rather than at night and off airport than on airport. The patterns of pilot error in air-taxi crashes were similar across age groups. Of the errors identified, 27% were flawed decisions, 26% were inattentiveness, 23% mishandled aircraft kinetics, 15% mishandled wind and/or runway conditions, and 11% were others. Conclusions: Pilot age is associated with crash circumstances but not with the prevalence and patterns of pilot error in air-taxi crashes. Lack of age-related differences in pilot error may be attributable to the “safe worker effect.” PMID:19601508
Gait performance is not influenced by working memory when walking at a self-selected pace.
Grubaugh, Jordan; Rhea, Christopher K
2014-02-01
Gait performance exhibits patterns within the stride-to-stride variability that can be indexed using detrended fluctuation analysis (DFA). Previous work employing DFA has shown that gait patterns can be influenced by constraints, such as natural aging or disease, and they are informative regarding a person's functional ability. Many activities of daily living require concurrent performance in the cognitive and gait domains; specifically working memory is commonly engaged while walking, which is considered dual-tasking. It is unknown if taxing working memory while walking influences gait performance as assessed by DFA. This study used a dual-tasking paradigm to determine if performance decrements are observed in gait or working memory when performed concurrently. Healthy young participants (N = 16) performed a working memory task (automated operation span task) and a gait task (walking at a self-selected speed on a treadmill) in single- and dual-task conditions. A second dual-task condition (reading while walking) was included to control for visual attention, but also introduced a task that taxed working memory over the long term. All trials involving gait lasted at least 10 min. Performance in the working memory task was indexed using five dependent variables (absolute score, partial score, speed error, accuracy error, and math error), while gait performance was indexed by quantifying the mean, standard deviation, and DFA α of the stride interval time series. Two multivariate analyses of variance (one for gait and one for working memory) were used to examine performance in the single- and dual-task conditions. No differences were observed in any of the gait or working memory dependent variables as a function of task condition. The results suggest the locomotor system is adaptive enough to complete a working memory task without compromising gait performance when walking at a self-selected pace.
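For readers unfamiliar with the gait metric used here, the sketch below shows a textbook implementation of detrended fluctuation analysis: the stride-interval series is integrated, divided into boxes, locally detrended, and the scaling exponent α is read off the log-log slope of the fluctuation function. Box sizes and the test signal are illustrative choices, not the study's settings.

```python
import numpy as np

def dfa_alpha(x, box_sizes=None):
    """Detrended fluctuation analysis of a stride-interval series.
    Returns the scaling exponent alpha (slope of log F(n) vs. log n)."""
    x = np.asarray(x, float)
    y = np.cumsum(x - x.mean())                 # integrated, mean-centered profile
    n_samples = len(x)
    if box_sizes is None:
        box_sizes = np.unique(
            np.logspace(np.log10(4), np.log10(n_samples // 4), 12).astype(int))
    fluctuations = []
    for n in box_sizes:
        n_boxes = n_samples // n
        resid_sq = []
        for b in range(n_boxes):
            seg = y[b * n:(b + 1) * n]
            t = np.arange(n)
            coef = np.polyfit(t, seg, 1)        # local linear trend within the box
            resid_sq.append(np.mean((seg - np.polyval(coef, t)) ** 2))
        fluctuations.append(np.sqrt(np.mean(resid_sq)))
    alpha = np.polyfit(np.log(box_sizes), np.log(fluctuations), 1)[0]
    return alpha

# Sanity check on synthetic data: white noise should give alpha near 0.5
rng = np.random.default_rng(0)
print(dfa_alpha(rng.normal(size=1024)))
```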
Discrepancy-based error estimates for Quasi-Monte Carlo III. Error distributions and central limits
NASA Astrophysics Data System (ADS)
Hoogland, Jiri; Kleiss, Ronald
1997-04-01
In Quasi-Monte Carlo integration, the integration error is believed to be generally smaller than in classical Monte Carlo with the same number of integration points. Using an appropriate definition of an ensemble of quasi-random point sets, we derive various results on the probability distribution of the integration error, which can be compared to the standard Central Limit Theorem for normal stochastic sampling. In many cases, a Gaussian error distribution is obtained.
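For comparison, the classical baseline referred to above is the Central Limit Theorem for the ordinary Monte Carlo integration error; in generic notation (not taken from the paper):

```latex
\[
  \hat{I}_N = \frac{1}{N}\sum_{k=1}^{N} f(x_k), \qquad
  \eta_N = \hat{I}_N - \int f(x)\,dx, \qquad
  \sqrt{N}\,\eta_N \xrightarrow{\;d\;} \mathcal{N}\!\left(0,\sigma^2\right),
  \qquad
  \sigma^2 = \int f^2(x)\,dx - \left(\int f(x)\,dx\right)^{\!2},
\]
```

so the classical Monte Carlo error shrinks as σ/√N, whereas quasi-random point sets can converge faster at a rate governed by their discrepancy.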
Significant and Sustained Reduction in Chemotherapy Errors Through Improvement Science.
Weiss, Brian D; Scott, Melissa; Demmel, Kathleen; Kotagal, Uma R; Perentesis, John P; Walsh, Kathleen E
2017-04-01
A majority of children with cancer are now cured with highly complex chemotherapy regimens incorporating multiple drugs and demanding monitoring schedules. The risk for error is high, and errors can occur at any stage in the process, from order generation to pharmacy formulation to bedside drug administration. Our objective was to describe a program to eliminate errors in chemotherapy use among children. To increase reporting of chemotherapy errors, we supplemented the hospital reporting system with a new chemotherapy near-miss reporting system. After the model for improvement, we then implemented several interventions, including a daily chemotherapy huddle, improvements to the preparation and delivery of intravenous therapy, headphones for clinicians ordering chemotherapy, and standards for chemotherapy administration throughout the hospital. Twenty-two months into the project, we saw a centerline shift in our U chart of chemotherapy errors that reached the patient from a baseline rate of 3.8 to 1.9 per 1,000 doses. This shift has been sustained for > 4 years. In Poisson regression analyses, we found an initial increase in error rates, followed by a significant decline in errors after 16 months of improvement work ( P < .001). After the model for improvement, our improvement efforts were associated with significant reductions in chemotherapy errors that reached the patient. Key drivers for our success included error vigilance through a huddle, standardization, and minimization of interruptions during ordering.
Safe and effective error rate monitors for SS7 signaling links
NASA Astrophysics Data System (ADS)
Schmidt, Douglas C.
1994-04-01
This paper describes SS7 error monitor characteristics, discusses the existing SUERM (Signal Unit Error Rate Monitor), and develops the recently proposed EIM (Error Interval Monitor) for higher speed SS7 links. An SS7 error monitor is considered safe if it ensures acceptable link quality and is considered effective if it is tolerant to short-term phenomena. Formal criteria for safe and effective error monitors are formulated in this paper. This paper develops models of changeover transients, the unstable component of queue length resulting from errors. These models are in the form of recursive digital filters. Time is divided into sequential intervals. The filter's input is the number of errors which have occurred in each interval. The output is the corresponding change in transmit queue length. Engineered EIMs are constructed by comparing an estimated changeover transient with a threshold T using a transient model modified to enforce SS7 standards. When this estimate exceeds T, a changeover will be initiated and the link will be removed from service. EIMs can be differentiated from SUERMs by the fact that EIMs monitor errors over an interval while SUERMs count errored messages. EIMs offer several advantages over SUERMs, including the fact that they are safe and effective, impose uniform standards in link quality, are easily implemented, and make minimal use of real-time resources.
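To make the mechanism concrete, the sketch below implements a toy error interval monitor: per-interval error counts drive a recursive first-order filter whose output stands in for the estimated changeover transient, and a changeover is triggered when the estimate crosses a threshold T. The filter coefficients and threshold are hypothetical placeholders, not the engineered values derived in the paper.

```python
# Illustrative sketch (not the engineered EIM from the paper) of an error interval
# monitor built around a recursive digital filter over per-interval error counts.

def error_interval_monitor(errors_per_interval, gain=0.5, leak=0.9, threshold=4.0):
    """Yield (estimate, changeover) per interval; gain/leak/threshold are hypothetical."""
    estimate = 0.0
    for errors in errors_per_interval:
        estimate = leak * estimate + gain * errors   # first-order recursive filter
        yield estimate, estimate >= threshold

# A burst of errors drives the estimate over the threshold; isolated errors do not.
counts = [0, 1, 0, 0, 3, 4, 5, 0, 0]
for i, (est, tripped) in enumerate(error_interval_monitor(counts)):
    print(f"interval {i}: estimate={est:.2f} changeover={'yes' if tripped else 'no'}")
```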
The performance of projective standardization for digital subtraction radiography.
Mol, André; Dunn, Stanley M
2003-09-01
We sought to test the performance and robustness of projective standardization in preserving invariant properties of subtraction images in the presence of irreversible projection errors. Study design: Twenty bone chips (1-10 mg each) were placed on dentate dry mandibles. Follow-up images were obtained without the bone chips, and irreversible projection errors of up to 6 degrees were introduced. Digitized image intensities were normalized, and follow-up images were geometrically reconstructed by 2 operators using anatomical and fiduciary landmarks. Subtraction images were analyzed by 3 observers. Regression analysis revealed a linear relationship between radiographic estimates of mineral loss and actual mineral loss (R² = 0.99; P < .05). The effect of projection error was not significant (general linear model [GLM]: P > .05). There was no difference between the radiographic estimates from images standardized with anatomical landmarks and those standardized with fiduciary landmarks (Wilcoxon signed rank test: P > .05). Operator variability was low for image analysis alone (R² = 0.99; P < .05), as well as for the entire procedure (R² = 0.98; P < .05). The predicted detection limit was smaller than 1 mg. Subtraction images registered by projective standardization yield estimates of osseous change that are invariant to irreversible projection errors of up to 6 degrees. Within these limits, operator precision is high and anatomical landmarks can be used to establish correspondence.
Consistency and convergence for numerical radiation conditions
NASA Technical Reports Server (NTRS)
Hagstrom, Thomas
1990-01-01
The problem of imposing radiation conditions at artificial boundaries for the numerical simulation of wave propagation is considered. Emphasis is on the behavior and analysis of the error which results from the restriction of the domain. The theory of error estimation is briefly outlined for boundary conditions. Use is made of the asymptotic analysis of propagating wave groups to derive and analyze boundary operators. For dissipative problems this leads to local, accurate conditions, but falls short in the hyperbolic case. A numerical experiment on the solution of the wave equation with cylindrical symmetry is described. A unified presentation of a number of conditions which have been proposed in the literature is given and the time dependence of the error which results from their use is displayed. The results are in qualitative agreement with theoretical considerations. It was found, however, that for this model problem it is particularly difficult to force the error to decay rapidly in time.
Second Chance: If at First You Do Not Succeed, Set up a Plan and Try, Try Again
ERIC Educational Resources Information Center
Poulsen, John
2012-01-01
Student teachers make errors in their practicum. Then, they learn and fix those errors. This is the standard arc within a successful practicum. Some students make errors that they do not fix and then make more errors that again remain unfixed. This downward spiral increases in pace until the classroom becomes chaos. These students at the…
David W. MacFarlane; Neil R. Ver Planck
2012-01-01
Data from hardwood trees in Michigan were analyzed to investigate how differences in whole-tree form and wood density between trees of different stem diameter relate to residual error in standard-type biomass equations. The results suggested that whole-tree wood density, measured at breast height, explained a significant proportion of residual error in standard-type...
ERIC Educational Resources Information Center
Pan, Tianshu; Yin, Yue
2012-01-01
In the discussion of mean square difference (MSD) and standard error of measurement (SEM), Barchard (2012) concluded that the MSD between 2 sets of test scores is greater than 2(SEM)² and SEM underestimates the score difference between 2 tests when the 2 tests are not parallel. This conclusion has limitations for 2 reasons. First,…
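For orientation, the quantity under discussion can be written out. Under classical test theory, with observed scores x = τ + e and errors independent across the two forms, a standard decomposition (not taken from either of the papers being discussed) is:

```latex
\mathrm{MSD} \;=\; \frac{1}{N}\sum_{i=1}^{N}\bigl(x_{1i}-x_{2i}\bigr)^{2},
\qquad
E[\mathrm{MSD}] \;=\; \sigma_{e_1}^{2}+\sigma_{e_2}^{2}
\;+\;\operatorname{Var}\!\bigl(\tau_{1}-\tau_{2}\bigr)
\;+\;\bigl(\mu_{\tau_1}-\mu_{\tau_2}\bigr)^{2}.
```

For strictly parallel forms the last two terms vanish and the two error variances equal SEM², giving E[MSD] = 2(SEM)²; any departure from parallelism adds nonnegative terms, which is the sense in which MSD exceeds 2(SEM)² and SEM alone understates the expected score difference.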
ERIC Educational Resources Information Center
Burns, Matthew K.; Taylor, Crystal N.; Warmbold-Brann, Kristy L.; Preast, June L.; Hosp, John L.; Ford, Jeremy W.
2017-01-01
Intervention researchers often use curriculum-based measurement of reading fluency (CBM-R) with a brief experimental analysis (BEA) to identify an effective intervention for individual students. The current study synthesized data from 22 studies that used CBM-R data within a BEA by computing the standard error of measure (SEM) for the median data…
ERIC Educational Resources Information Center
Choi, Sae Il
2009-01-01
This study used simulation (a) to compare the kernel equating method to traditional equipercentile equating methods under the equivalent-groups (EG) design and the nonequivalent-groups with anchor test (NEAT) design and (b) to apply the parametric bootstrap method for estimating standard errors of equating. A two-parameter logistic item response…
Missing portion sizes in FFQ--alternatives to use of standard portions.
Køster-Rasmussen, Rasmus; Siersma, Volkert; Halldorsson, Thorhallur I; de Fine Olivarius, Niels; Henriksen, Jan E; Heitmann, Berit L
2015-08-01
Standard portions or substitution of missing portion sizes with medians may generate bias when quantifying the dietary intake from FFQ. The present study compared four different methods to include portion sizes in FFQ. We evaluated three stochastic methods for imputation of portion sizes based on information about anthropometry, sex, physical activity and age. Energy intakes computed with standard portion sizes, defined as sex-specific medians (median), or with portion sizes estimated with multinomial logistic regression (MLR), 'comparable categories' (Coca) or k-nearest neighbours (KNN) were compared with a reference based on self-reported portion sizes (quantified by a photographic food atlas embedded in the FFQ). The Danish Health Examination Survey 2007-2008. The study included 3728 adults with complete portion size data. Compared with the reference, the root-mean-square errors of the mean daily total energy intake (in kJ) computed with portion sizes estimated by the four methods were (men; women): median (1118; 1061), MLR (1060; 1051), Coca (1230; 1146), KNN (1281; 1181). The equivalent biases (mean error) were (in kJ): median (579; 469), MLR (248; 178), Coca (234; 188), KNN (-340; 218). The methods MLR and Coca provided the best agreement with the reference. The stochastic methods allowed for estimation of meaningful portion sizes by conditioning on information about physiology and they were suitable for multiple imputation. We propose to use MLR or Coca to substitute missing portion size values or when portion sizes need to be included in FFQ without portion size data.
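For orientation, the two comparison metrics quoted above (root-mean-square error and bias, i.e. mean error) can be computed as in the sketch below; the energy-intake values are invented and this is not the study's code.

```python
import numpy as np

# Hypothetical daily total energy intakes (kJ) for the same subjects:
# one vector per portion-size method, plus the photographic-atlas reference.
reference = np.array([9500., 11200., 8700., 10400., 9900.])
imputed = {
    "median": np.array([10100., 11950., 9300., 10900., 10650.]),
    "MLR":    np.array([9700., 11350., 8900., 10650., 10100.]),
}

for method, values in imputed.items():
    err = values - reference
    rmse = np.sqrt(np.mean(err ** 2))   # root-mean-square error
    bias = np.mean(err)                 # mean error, as reported in the abstract
    print(f"{method}: RMSE = {rmse:.0f} kJ, bias = {bias:.0f} kJ")
```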
The Effect of Divided Attention on Inhibiting the Gravity Error
ERIC Educational Resources Information Center
Hood, Bruce M.; Wilson, Alice; Dyson, Sally
2006-01-01
Children who could overcome the gravity error on Hood's (1995) tubes task were tested in a condition where they had to monitor two falling balls. This condition significantly impaired search performance with the majority of mistakes being gravity errors. In a second experiment, the effect of monitoring two balls was compared in the tubes task and…
NASA Astrophysics Data System (ADS)
Servilla, M. S.; O'Brien, M.; Costa, D.
2013-12-01
Considerable ecological research performed today occurs through the analysis of data downloaded from various repositories and archives, often resulting in derived or synthetic products generated by automated workflows. These data are only meaningful for research if they are well documented by metadata, lest semantic or data type errors may occur in interpretation or processing. The Long Term Ecological Research (LTER) Network now screens all data packages entering its long-term archive to ensure that each package contains metadata that is complete, of high quality, and accurately describes the structure of its associated data entity and the data are structurally congruent to the metadata. Screening occurs prior to the upload of a data package into the Provenance Aware Synthesis Tracking Architecture (PASTA) data management system through a series of quality checks, thus preventing ambiguously or incorrectly documented data packages from entering the system. The quality checks within PASTA are designed to work specifically with the Ecological Metadata Language (EML), the metadata standard adopted by the LTER Network to describe data generated by their 26 research sites. Each quality check is codified in Java as part of the ecological community-supported Data Manager Library, which is a resource of the EML specification and used as a component of the PASTA software stack. Quality checks test for metadata quality, data integrity, or metadata-data congruence. Quality checks are further classified as either conditional or informational. Conditional checks issue a 'valid', 'warning' or 'error' response. Only an 'error' response blocks the data package from upload into PASTA. Informational checks only provide descriptive content pertaining to a particular facet of the data package. Quality checks are designed by a group of LTER information managers and reviewed by the LTER community before deploying into PASTA. A total of 32 quality checks have been deployed to date. Quality checks can be customized through a configurable template, which includes turning checks 'on' or 'off' and setting the severity of conditional checks. This feature is important to other potential users of the Data Manager Library who wish to configure its quality checks in accordance with the standards of their community. Executing the complete set of quality checks produces a report that describes the result of each check. The report is an XML document that is stored by PASTA for future reference.
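Purely as an illustration of the conditional/informational distinction and the on/off and severity configuration described above, a quality check could be modelled as in the sketch below; the class, field names, and logic are hypothetical and are not the Data Manager Library API.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class QualityCheck:
    name: str
    kind: str                      # "conditional" or "informational"
    severity: str                  # "error", "warning", or "info" (configurable)
    enabled: bool                  # checks can be switched on or off
    run: Callable[[dict], bool]    # returns True if the data package passes

def screen(package: dict, checks: List[QualityCheck]) -> bool:
    """Return False (block the upload) only when an enabled conditional check
    with severity 'error' fails, mirroring the behaviour described above."""
    ok = True
    for check in checks:
        if not check.enabled:
            continue
        passed = check.run(package)
        if check.kind == "conditional" and check.severity == "error" and not passed:
            ok = False
        print(f"{check.name}: {'valid' if passed else check.severity}")
    return ok

checks = [
    QualityCheck("entityDescribed", "conditional", "error", True,
                 lambda p: "metadata" in p),
]
print(screen({"metadata": "..."}, checks))
```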
Brannon, Timothy S.
2006-01-01
Continuous infusion intravenous (IV) drugs in neonatal intensive care are usually prepared based on patient weight so that the dose is readable as a simple multiple of the infusion pump rate. New safety guidelines propose that hospitals switch to using standardized admixtures of these drugs to prevent calculation errors during ad hoc preparation. Extended hierarchical task analysis suggests that switching to standardized admixtures may lead to more errors in programming the pump at the bedside. PMID:17238482
Standard representation and unified stability analysis for dynamic artificial neural network models.
Kim, Kwang-Ki K; Patrón, Ernesto Ríos; Braatz, Richard D
2018-02-01
An overview is provided of dynamic artificial neural network models (DANNs) for nonlinear dynamical system identification and control problems, and convex stability conditions are proposed that are less conservative than past results. The three most popular classes of dynamic artificial neural network models are described, with their mathematical representations and architectures followed by transformations based on their block diagrams that are convenient for stability and performance analyses. Classes of nonlinear dynamical systems that are universally approximated by such models are characterized, which include rigorous upper bounds on the approximation errors. A unified framework and linear matrix inequality-based stability conditions are described for different classes of dynamic artificial neural network models that take additional information into account such as local slope restrictions and whether the nonlinearities within the DANNs are odd. A theoretical example shows reduced conservatism obtained by the conditions. Copyright © 2017. Published by Elsevier Ltd.
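The LMI-based certificates discussed above reduce, in the degenerate case of a purely linear model with no nonlinear activations, to the familiar discrete-time Lyapunov condition; the sketch below checks that condition with SciPy for a made-up system matrix, as a simple stand-in for the more general semidefinite-programming tests described in the paper.

```python
import numpy as np
from scipy.linalg import solve_discrete_lyapunov

# Hypothetical linear model x[k+1] = A x[k] (a "DANN" with identity activations).
A = np.array([[0.5, 0.2],
              [0.0, 0.8]])
Q = np.eye(2)

# Solve A^T P A - P = -Q; a positive-definite P certifies asymptotic stability.
P = solve_discrete_lyapunov(A.T, Q)
eigenvalues = np.linalg.eigvalsh(P)
print("P =", P)
print("stability certified:", bool(np.all(eigenvalues > 0)))
```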
NASA Astrophysics Data System (ADS)
Kärhä, Petri; Vaskuri, Anna; Mäntynen, Henrik; Mikkonen, Nikke; Ikonen, Erkki
2017-08-01
Spectral irradiance data are often used to calculate colorimetric properties, such as color coordinates and color temperatures of light sources by integration. The spectral data may contain unknown correlations that should be accounted for in the uncertainty estimation. We propose a new method for estimating uncertainties in such cases. The method goes through all possible scenarios of deviations using Monte Carlo analysis. Varying spectral error functions are produced by combining spectral base functions, and the distorted spectra are used to calculate the colorimetric quantities. Standard deviations of the colorimetric quantities at different scenarios give uncertainties assuming no correlations, uncertainties assuming full correlation, and uncertainties for an unfavorable case of unknown correlations, which turn out to be a significant source of uncertainty. With 1% standard uncertainty in spectral irradiance, the expanded uncertainty of the correlated color temperature of a source corresponding to the CIE Standard Illuminant A may reach as high as 37.2 K in unfavorable conditions, when calculations assuming full correlation give zero uncertainty, and calculations assuming no correlations yield the expanded uncertainties of 5.6 K and 12.1 K, with wavelength steps of 1 nm and 5 nm used in spectral integrations, respectively. We also show that there is an absolute limit of 60.2 K in the error of the correlated color temperature for Standard Illuminant A when assuming 1% standard uncertainty in the spectral irradiance. A comparison of our uncorrelated uncertainties with those obtained using analytical methods by other research groups shows good agreement. We re-estimated the uncertainties for the colorimetric properties of our 1 kW photometric standard lamps using the new method. The revised uncertainty of color temperature is a factor of 2.5 higher than the uncertainty assuming no correlations.
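The contrast between the uncorrelated and fully correlated scenarios can be illustrated with a toy Monte Carlo propagation onto a single spectrally integrated quantity; the flat test spectrum, Gaussian weighting function, and 1 % standard uncertainty are placeholders, and a real calculation would integrate to chromaticity coordinates and colour temperature instead.

```python
import numpy as np

rng = np.random.default_rng(0)
wavelengths = np.arange(380, 781, 5)           # nm, 5 nm steps as in the abstract
spectrum = np.ones_like(wavelengths, float)    # placeholder spectral irradiance
weight = np.exp(-0.5 * ((wavelengths - 555) / 50.0) ** 2)  # placeholder weighting
u_rel = 0.01                                   # 1 % relative standard uncertainty

def integral(s):
    # Simple rectangle-rule spectral integration with the 5 nm step.
    return float(np.sum(weight * s) * 5.0)

trials = 10000
uncorrelated = np.empty(trials)
correlated = np.empty(trials)
for k in range(trials):
    uncorrelated[k] = integral(spectrum * (1 + u_rel * rng.standard_normal(wavelengths.size)))
    correlated[k] = integral(spectrum * (1 + u_rel * rng.standard_normal()))

print("relative uncertainty, no correlation  :", uncorrelated.std() / integral(spectrum))
print("relative uncertainty, full correlation:", correlated.std() / integral(spectrum))
```

In this toy integral the fully correlated case keeps the full 1 % uncertainty while uncorrelated errors average down; the zero uncertainty reported above for the fully correlated colour-temperature case arises additionally because a common scale factor on the spectrum does not move the chromaticity.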
Accuracy of acoustic velocity metering systems for measurement of low velocity in open channels
Laenen, Antonius; Curtis, R. E.
1989-01-01
Acoustic velocity meter (AVM) accuracy depends on equipment limitations, the accuracy of acoustic-path length and angle determination, and the stability of the mean velocity to acoustic-path velocity relation. Equipment limitations depend on path length and angle, transducer frequency, timing oscillator frequency, and signal-detection scheme. Typically, the velocity error from this source is about ±1 to ±10 mm/s. Error in acoustic-path angle or length will result in a proportional measurement bias. Typically, an angle error of one degree will result in a velocity error of 2%, and a path-length error of one meter in 100 meters will result in an error of 1%. Ray bending (signal refraction) depends on path length and density gradients present in the stream. Any deviation from a straight acoustic path between transducers will change the unique relation between path velocity and mean velocity. These deviations will then introduce error in the mean velocity computation. Typically, for a 200-meter path length, the resultant error is less than one percent, but for a 1,000 meter path length, the error can be greater than 10%. Recent laboratory and field tests have substantiated assumptions of equipment limitations. Tow-tank tests of an AVM system with a 4.69-meter path length yielded an average standard deviation error of 9.3 mm/s, and the field tests of an AVM system with a 20.5-meter path length yielded an average standard deviation error of 4 mm/s. (USGS)
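The quoted 2 % figure for a one-degree angle error can be reproduced from the usual first-order sensitivity of the path-to-mean velocity conversion, V = V_path / cos θ, so dV/V ≈ tan θ · dθ; the 45° nominal path angle below is an assumed typical value, not stated in the abstract.

```python
import numpy as np

theta = np.radians(45.0)     # assumed nominal angle between acoustic path and flow
d_theta = np.radians(1.0)    # one-degree error in the path-angle determination

relative_velocity_error = np.tan(theta) * d_theta
print(f"approximate velocity error: {100 * relative_velocity_error:.1f} %")  # about 1.7 %
```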
Lee, Julia Ai Cheng; Otaiba, Stephanie Al
2017-01-01
In this article, the authors examined the spelling performance of 430 kindergarteners, which included a high-risk sample, to determine the relations between end-of-kindergarten reading and spelling in a high-quality language arts setting. The spelling outcomes, including the spelling errors of the good and the poor readers, were described, analyzed, and compared. The findings suggest that not all the children have acquired the desired standard as outlined by the Common Core State Standards. In addition, not every good reader is a good speller and not every poor speller is a poor reader. The study shows that spelling tasks accompanied by spelling error analysis provide a powerful window for making instructional sense of children's spelling errors and for individualizing spelling instructional strategies.
Hasty, Robert T; Garbalosa, Ryan C; Barbato, Vincenzo A; Valdes, Pedro J; Powers, David W; Hernandez, Emmanuel; John, Jones S; Suciu, Gabriel; Qureshi, Farheen; Popa-Radu, Matei; San Jose, Sergio; Drexler, Nathaniel; Patankar, Rohan; Paz, Jose R; King, Christopher W; Gerber, Hilary N; Valladares, Michael G; Somji, Alyaz A
2014-05-01
Since its launch in 2001, Wikipedia has become the most popular general reference site on the Internet and a popular source of health care information. To evaluate the accuracy of this resource, the authors compared Wikipedia articles on the most costly medical conditions with standard, evidence-based, peer-reviewed sources. The top 10 most costly conditions in terms of public and private expenditure in the United States were identified, and a Wikipedia article corresponding to each topic was chosen. In a blinded process, 2 randomly assigned investigators independently reviewed each article and identified all assertions (ie, implication or statement of fact) made in it. The reviewer then conducted a literature search to determine whether each assertion was supported by evidence. The assertions found by each reviewer were compared and analyzed to determine whether assertions made by Wikipedia for these conditions were supported by peer-reviewed sources. For commonly identified assertions, there was statistically significant discordance between 9 of the 10 selected Wikipedia articles (coronary artery disease, lung cancer, major depressive disorder, osteoarthritis, chronic obstructive pulmonary disease, hypertension, diabetes mellitus, back pain, and hyperlipidemia) and their corresponding peer-reviewed sources (P<.05) and for all assertions made by Wikipedia for these medical conditions (P<.05 for all 9). Most Wikipedia articles representing the 10 most costly medical conditions in the United States contain many errors when checked against standard peer-reviewed sources. Caution should be used when using Wikipedia to answer questions regarding patient care.
Improving estimates of streamflow characteristics by using Landsat-1 imagery
Hollyday, Este F.
1976-01-01
Imagery from the first Earth Resources Technology Satellite (renamed Landsat-1) was used to discriminate physical features of drainage basins in an effort to improve equations used to estimate streamflow characteristics at gaged and ungaged sites. Records of 20 gaged basins in the Delmarva Peninsula of Maryland, Delaware, and Virginia were analyzed for 40 statistical streamflow characteristics. Equations relating these characteristics to basin characteristics were obtained by a technique of multiple linear regression. A control group of equations contains basin characteristics derived from maps. An experimental group of equations contains basin characteristics derived from maps and imagery. Characteristics from imagery were forest, riparian (streambank) vegetation, water, and combined agricultural and urban land use. These basin characteristics were isolated photographically by techniques of film-density discrimination. The area of each characteristic in each basin was measured photometrically. Comparison of equations in the control group with corresponding equations in the experimental group reveals that for 12 out of 40 equations the standard error of estimate was reduced by more than 10 percent. As an example, the standard error of estimate of the equation for the 5-year recurrence-interval flood peak was reduced from 46 to 32 percent. Similarly, the standard error of the equation for the mean monthly flow for September was reduced from 32 to 24 percent, the standard error for the 7-day, 2-year recurrence low flow was reduced from 136 to 102 percent, and the standard error for the 3-day, 2-year flood volume was reduced from 30 to 12 percent. It is concluded that data from Landsat imagery can substantially improve the accuracy of estimates of some streamflow characteristics at sites in the Delmarva Peninsula.
Study of chromatic adaptation using memory color matches, Part I: neutral illuminants.
Smet, Kevin A G; Zhai, Qiyan; Luo, Ming R; Hanselaer, Peter
2017-04-03
Twelve corresponding color data sets have been obtained using the long-term memory colors of familiar objects as target stimuli. Data were collected for familiar objects with neutral, red, yellow, green and blue hues under 4 approximately neutral illumination conditions on or near the blackbody locus. The advantages of the memory color matching method are discussed in light of other more traditional asymmetric matching techniques. Results were compared to eight corresponding color data sets available in literature. The corresponding color data was used to test several linear (von Kries, RLAB, etc.) and nonlinear (Hunt & Nayatani) chromatic adaptation transforms (CAT). It was found that a simple two-step von Kries, whereby the degree of adaptation D is optimized to minimize the DEu'v' prediction errors, outperformed all other tested models for both memory color and literature corresponding color sets, whereby prediction errors were lower for the memory color sets. The predictive errors were substantially smaller than the standard uncertainty on the average observer and were comparable to what are considered just-noticeable-differences in the CIE u'v' chromaticity diagram, supporting the use of memory color based internal references to study chromatic adaptation mechanisms.
NASA Technical Reports Server (NTRS)
Sutliff, Daniel L.; Remington, Paul J.; Walker, Bruce E.
2003-01-01
A test program to demonstrate simplification of Active Noise Control (ANC) systems relative to standard techniques was performed on the NASA Glenn Active Noise Control Fan from May through September 2001. The target mode was the m = 2 circumferential mode generated by the rotor-stator interaction at 2BPF. Seven radials (combined inlet and exhaust) were present at this condition. Several different error-sensing strategies were implemented. Integration of the error-sensors with passive treatment was investigated. These were: (i) an in-duct linear axial array, (ii) an in-duct steering array, (iii) a pylon-mounted array, and (iv) a near-field boom array. The effect of incorporating passive treatment was investigated as well as reducing the actuator count. These simplified systems were compared to a fully specified ANC system. Modal data acquired using the Rotating Rake are presented for a range of corrected fan rpm. Simplified control has been demonstrated to be possible but requires a well-known and dominant mode signature. The documented results herein are part III of a three-part series of reports with the same base title. Parts I and II document the control system and error-sensing design and implementation.
Polanka, Brittanny M.; Vrany, Elizabeth A.; Patel, Jay; Stewart, Jesse C.
2017-01-01
Abstract We compared the relative importance of atypical major depressive disorder (MDD), nonatypical MDD, and dysthymic disorder in predicting 3-year obesity incidence and change in body mass index and determined whether race/ethnicity moderated these relationships. We examined data from 17,787 initially nonobese adults in the National Epidemiologic Survey on Alcohol and Related Conditions waves 1 (2001–2002) and 2 (2004–2005) who were representative of the US population. Lifetime subtypes of depressive disorders were determined using a structured interview, and obesity outcomes were computed from self-reported height and weight. Atypical MDD (odds ratio (OR) = 1.68, 95% confidence interval (CI): 1.43, 1.97; P < 0.001) and dysthymic disorder (OR = 1.66, 95% CI: 1.29, 2.12; P < 0.001) were stronger predictors of incident obesity than were nonatypical MDD (OR = 1.11, 95% CI: 1.01, 1.22; P = 0.027) and no history of depressive disorder. Atypical MDD (B = 0.41 (standard error, 0.15); P = 0.007) was a stronger predictor of increases in body mass index than were dysthymic disorder (B = −0.31 (standard error, 0.21); P = 0.142), nonatypical MDD (B = 0.007 (standard error, 0.06); P = 0.911), and no history of depressive disorder. Race/ethnicity was a moderator; atypical MDD was a stronger predictor of incident obesity in Hispanics/Latinos (OR = 1.97, 95% CI: 1.73, 2.24; P < 0.001) than in non-Hispanic whites (OR = 1.54, 95% CI: 1.25, 1.91; P < 0.001) and blacks (OR = 1.72, 95% CI: 1.31, 2.26; P < 0.001). US adults with atypical MDD are at particularly high risk of weight gain and obesity, and Hispanics/Latinos may be especially vulnerable to the obesogenic consequences of depressions. PMID:28369312
Acceptance sampling for attributes via hypothesis testing and the hypergeometric distribution
NASA Astrophysics Data System (ADS)
Samohyl, Robert Wayne
2017-10-01
This paper questions some aspects of attribute acceptance sampling in light of the original concepts of hypothesis testing from Neyman and Pearson (NP). Attribute acceptance sampling in industry, as developed by Dodge and Romig (DR), generally follows the international standards of ISO 2859, and similarly the Brazilian standards NBR 5425 to NBR 5427 and the United States Standards ANSI/ASQC Z1.4. The paper evaluates and extends the area of acceptance sampling in two directions. First, by suggesting the use of the hypergeometric distribution to calculate the parameters of sampling plans avoiding the unnecessary use of approximations such as the binomial or Poisson distributions. We show that, under usual conditions, discrepancies can be large. The conclusion is that the hypergeometric distribution, ubiquitously available in commonly used software, is more appropriate than other distributions for acceptance sampling. Second, and more importantly, we elaborate the theory of acceptance sampling in terms of hypothesis testing rigorously following the original concepts of NP. By offering a common theoretical structure, hypothesis testing from NP can produce a better understanding of applications even beyond the usual areas of industry and commerce such as public health and political polling. With the new procedures, both sample size and sample error can be reduced. What is unclear in traditional acceptance sampling is the necessity of linking the acceptable quality limit (AQL) exclusively to the producer and the lot quality percent defective (LTPD) exclusively to the consumer. In reality, the consumer should also be preoccupied with a value of AQL, as should the producer with LTPD. Furthermore, we can also question why type I error is always uniquely associated with the producer as producer risk, and likewise, the same question arises with consumer risk which is necessarily associated with type II error. The resolution of these questions is new to the literature. The article presents R code throughout.
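The paper's first point, that exact hypergeometric calculations are preferable to binomial or Poisson approximations when the sampling fraction is not small, can be sketched with scipy.stats; the lot size, defect count, sample size, and acceptance number below are invented, and this is not the article's R code.

```python
from scipy.stats import hypergeom, binom, poisson

N = 200    # lot size (hypothetical)
D = 16     # defective items in the lot (hypothetical)
n = 50     # sample size
c = 2      # acceptance number: accept the lot if defectives found <= c

p_accept_exact   = hypergeom.cdf(c, N, D, n)    # exact: sampling without replacement
p_accept_binom   = binom.cdf(c, n, D / N)       # binomial approximation
p_accept_poisson = poisson.cdf(c, n * D / N)    # Poisson approximation

print(f"hypergeometric: {p_accept_exact:.3f}")
print(f"binomial      : {p_accept_binom:.3f}")
print(f"Poisson       : {p_accept_poisson:.3f}")
```

With a 25 % sampling fraction as here, the approximations visibly overstate the acceptance probability relative to the exact hypergeometric value, which is the discrepancy the paper highlights.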
Carbapenem Susceptibility Testing Errors Using Three Automated...
2011-10-01
...Phoenix, and Vitek 2 systems). Discordant results were categorized as very major errors (VME), major errors (ME), and minor errors (mE). DNA sequences... The Vitek 2 method was the only automated susceptibility method in our study that satisfied the FDA standards required for device approval (11).
Laboratory errors and patient safety.
Miligy, Dawlat A
2015-01-01
Laboratory data are extensively used in medical practice; consequently, laboratory errors have a tremendous impact on patient safety. Programs designed to identify and reduce laboratory errors, together with specific strategies, are therefore required to minimize these errors and improve patient safety. The purpose of this paper is to identify some of the laboratory errors commonly encountered in our laboratory practice, their hazards for patient health care, and some measures and recommendations to minimize or eliminate them. Laboratory errors encountered during May 2008 were recorded and statistically evaluated (using simple percent distribution) in the laboratory department of one of the private hospitals in Egypt. Errors were classified according to the laboratory phases and according to their implications for patient health. Data obtained from 1,600 testing procedures revealed a total of 14 errors (0.87 percent of total testing procedures). Most of the encountered errors lay in the pre- and post-analytic phases of the testing cycle (representing 35.7 and 50 percent of total errors, respectively), while errors in the analytic phase represented only 14.3 percent of total errors. About 85.7 percent of total errors had no significant implication for patient health, being detected before test reports were submitted to the patients. On the other hand, errors in reports that had already been submitted to patients and reached the physician represented 14.3 percent of total errors; only 7.1 percent of the errors could have had an impact on patient diagnosis. The findings of this study were concomitant with those published from the USA and other countries, which shows that laboratory problems are universal and need general standardization and benchmarking measures. This is among the first data published from Arab countries evaluating encountered laboratory errors, and it highlights the great need for universal standardization and benchmarking measures to control laboratory work.
Precision limits of the twin-beam multiband URSULA
NASA Technical Reports Server (NTRS)
Debiase, G. A.; Paterno, L.; Fedel, B.; Santagati, G.; Ventura, R.
1988-01-01
URSULA is a multiband astronomical photoelectric photometer which minimizes errors introduced by the presence of the atmosphere. It operates with two identical channels, one for the star to be measured and the other for a reference star. After a technical description of the present version of the apparatus, some measurements of stellar sources of different brightness, and in different atmospheric conditions are presented. These measurements, based on observations made with the 91 cm Cassegrain telescope of the Catania Astrophysical Observatory, are used to check the photometer accuracy and compare its performance with that of standard photometers.
Adaptive Flight Control Design with Optimal Control Modification on an F-18 Aircraft Model
NASA Technical Reports Server (NTRS)
Burken, John J.; Nguyen, Nhan T.; Griffin, Brian J.
2010-01-01
In the presence of large uncertainties, a control system needs to be able to adapt rapidly to regain performance. Fast adaptation is referred to as the implementation of adaptive control with a large adaptive gain to reduce the tracking error rapidly; however, a large adaptive gain can lead to high-frequency oscillations which can adversely affect the robustness of an adaptive control law. A new adaptive control modification is presented that can achieve robust adaptation with a large adaptive gain without incurring high-frequency oscillations as with the standard model-reference adaptive control. The modification is based on the minimization of the L2 norm of the tracking error, which is formulated as an optimal control problem. The optimality condition is used to derive the modification using the gradient method. The optimal control modification results in a stable adaptation and allows a large adaptive gain to be used for better tracking while providing sufficient robustness. A damping term (v) is added in the modification to increase damping as needed. Simulations were conducted on a damaged F-18 aircraft (McDonnell Douglas, now The Boeing Company, Chicago, Illinois) with both the standard baseline dynamic inversion controller and the adaptive optimal control modification technique. The results demonstrate the effectiveness of the proposed modification in tracking a reference model.
A feasibility study of color flow Doppler vectorization for automated blood flow monitoring.
Schorer, R; Badoual, A; Bastide, B; Vandebrouck, A; Licker, M; Sage, D
2017-12-01
An ongoing issue in vascular medicine is the measure of the blood flow. Catheterization remains the gold standard measurement method, although non-invasive techniques are an area of intense research. We hereby present a computational method for real-time measurement of the blood flow from color flow Doppler data, with a focus on simplicity and monitoring instead of diagnostics. We then analyze the performance of a proof-of-principle software implementation. We imagined a geometrical model geared towards blood flow computation from a color flow Doppler signal, and we developed a software implementation requiring only a standard diagnostic ultrasound device. Detection performance was evaluated by computing flow and its determinants (flow speed, vessel area, and ultrasound beam angle of incidence) on purposely designed synthetic and phantom-based arterial flow simulations. Flow was appropriately detected in all cases. Errors on synthetic images ranged from nonexistent to substantial depending on experimental conditions. Mean errors on measurements from our phantom flow simulation ranged from 1.2 to 40.2% for angle estimation, and from 3.2 to 25.3% for real-time flow estimation. This study is a proof of concept showing that accurate measurement can be done from automated color flow Doppler signal extraction, providing the industry the opportunity for further optimization using raw ultrasound data.
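For reference, the flow determinants named above (flow speed, vessel area, and beam angle of incidence) combine as Q = A · v_measured / cos(θ) under an idealized single-angle, circular-vessel geometry; the numbers below are arbitrary and this is not the authors' algorithm.

```python
import numpy as np

v_measured = 0.10          # m/s, speed measured along the ultrasound beam (hypothetical)
angle = np.radians(60.0)   # beam angle of incidence relative to the flow axis
diameter = 6e-3            # m, vessel diameter (hypothetical)

v_flow = v_measured / np.cos(angle)     # angle-corrected flow speed
area = np.pi * (diameter / 2) ** 2      # circular cross-sectional area
flow = v_flow * area                    # volumetric flow, m^3/s

print(f"flow = {flow * 1e6 * 60:.0f} mL/min")
```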
Computer simulation of storm runoff for three watersheds in Albuquerque, New Mexico
Knutilla, R.L.; Veenhuis, J.E.
1994-01-01
Rainfall-runoff data from three watersheds were selected for calibration and verification of the U.S. Geological Survey's Distributed Routing Rainfall-Runoff Model. The watersheds chosen are residentially developed. The conceptually based model uses an optimization process that adjusts selected parameters to achieve the best fit between measured and simulated runoff volumes and peak discharges. Three of these optimization parameters represent soil-moisture conditions, three represent infiltration, and one accounts for effective impervious area. Each watershed modeled was divided into overland-flow segments and channel segments. The overland-flow segments were further subdivided to reflect pervious and impervious areas. Each overland-flow and channel segment was assigned representative values of area, slope, percentage of imperviousness, and roughness coefficients. Rainfall-runoff data for each watershed were separated into two sets for use in calibration and verification. For model calibration, seven input parameters were optimized to attain a best fit of the data. For model verification, parameter values were set using values from model calibration. The standard error of estimate for calibration of runoff volumes ranged from 19 to 34 percent, and for peak discharge calibration ranged from 27 to 44 percent. The standard error of estimate for verification of runoff volumes ranged from 26 to 31 percent, and for peak discharge verification ranged from 31 to 43 percent.
Luque-Fernandez, Miguel Angel; Belot, Aurélien; Quaresma, Manuela; Maringe, Camille; Coleman, Michel P; Rachet, Bernard
2016-10-01
In population-based cancer research, piecewise exponential regression models are used to derive adjusted estimates of excess mortality due to cancer using the Poisson generalized linear modelling framework. However, the assumption that the conditional mean and variance of the rate parameter given the set of covariates x_i are equal is strong and may fail to account for overdispersion given the variability of the rate parameter (the variance exceeds the mean). Using an empirical example, we aimed to describe simple methods to test and correct for overdispersion. We used a regression-based score test for overdispersion under the relative survival framework and proposed different approaches to correct for overdispersion including a quasi-likelihood, robust standard errors estimation, negative binomial regression and flexible piecewise modelling. All piecewise exponential regression models showed the presence of significant inherent overdispersion (p-value <0.001). However, the flexible piecewise exponential model showed the smallest overdispersion parameter (3.2 versus 21.3) compared with the non-flexible piecewise exponential models. We showed that there were no major differences between methods. However, using flexible piecewise regression modelling, with either quasi-likelihood or robust standard errors, was the best approach as it deals with both overdispersion due to model misspecification and true or inherent overdispersion.
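A minimal sketch of two of the corrections listed above (quasi-likelihood scaling and robust sandwich standard errors) using statsmodels on simulated overdispersed event counts with a person-time offset; the piecewise-exponential splitting, the relative-survival framework, and the score test are not reproduced, and the data are synthetic.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 500
x = rng.binomial(1, 0.5, n)                       # hypothetical covariate
person_time = rng.uniform(0.5, 5.0, n)            # follow-up time per record
rate = np.exp(-1.0 + 0.4 * x)
# Negative-binomial counts with mean rate*person_time, i.e. overdispersed Poisson.
deaths = rng.negative_binomial(2, 2 / (2 + rate * person_time))

X = sm.add_constant(x)
model = sm.GLM(deaths, X, family=sm.families.Poisson(), offset=np.log(person_time))

naive = model.fit()                 # assumes variance = mean
quasi = model.fit(scale="X2")       # quasi-likelihood (Pearson-based scale)
robust = model.fit(cov_type="HC1")  # robust (sandwich) standard errors

for name, res in [("naive", naive), ("quasi", quasi), ("robust", robust)]:
    print(f"{name:6s} SE(x) = {res.bse[1]:.3f}")
```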
Closed-loop stability of linear quadratic optimal systems in the presence of modeling errors
NASA Technical Reports Server (NTRS)
Toda, M.; Patel, R.; Sridhar, B.
1976-01-01
The well-known stabilizing property of linear quadratic state feedback design is utilized to evaluate the robustness of a linear quadratic feedback design in the presence of modeling errors. Two general conditions are obtained for allowable modeling errors such that the resulting closed-loop system remains stable. One of these conditions is applied to obtain two more particular conditions which are readily applicable to practical situations where a designer has information on the bounds of modeling errors. Relations are established between the allowable parameter uncertainty and the weighting matrices of the quadratic performance index, thereby enabling the designer to select appropriate weighting matrices to attain a robust feedback design.
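A small numerical companion to this setting (not the paper's conditions themselves): compute the LQ state-feedback gain for a nominal model with SciPy, then check whether the closed loop stays stable when the plant matrix is perturbed; the system, weights, and perturbation are all invented.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Nominal model x' = A x + B u and quadratic weights (hypothetical values).
A = np.array([[0.0, 1.0],
              [-2.0, -0.5]])
B = np.array([[0.0],
              [1.0]])
Q = np.eye(2)
R = np.array([[1.0]])

P = solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ P)          # optimal state-feedback gain, u = -K x

# Modeling error: perturb the plant and check whether stability is retained.
A_true = A + np.array([[0.0, 0.0],
                       [0.6, 0.2]])
eigs = np.linalg.eigvals(A_true - B @ K)
print("closed-loop eigenvalues:", eigs)
print("still stable:", bool(np.all(eigs.real < 0)))
```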
Use of units of measurement error in anthropometric comparisons.
Lucas, Teghan; Henneberg, Maciej
2017-09-01
Anthropometrists attempt to minimise measurement errors, however, errors cannot be eliminated entirely. Currently, measurement errors are simply reported. Measurement errors should be included into analyses of anthropometric data. This study proposes a method which incorporates measurement errors into reported values, replacing metric units with 'units of technical error of measurement (TEM)' by applying these to forensics, industrial anthropometry and biological variation. The USA armed forces anthropometric survey (ANSUR) contains 132 anthropometric dimensions of 3982 individuals. Concepts of duplication and Euclidean distance calculations were applied to the forensic-style identification of individuals in this survey. The National Size and Shape Survey of Australia contains 65 anthropometric measurements of 1265 women. This sample was used to show how a woman's body measurements expressed in TEM could be 'matched' to standard clothing sizes. Euclidean distances show that two sets of repeated anthropometric measurements of the same person cannot be matched (> 0) on measurements expressed in millimetres but can in units of TEM (= 0). Only 81 women can fit into any standard clothing size when matched using centimetres, with units of TEM, 1944 women fit. The proposed method can be applied to all fields that use anthropometry. Units of TEM are considered a more reliable unit of measurement for comparisons.
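A schematic version of the proposed rescaling: divide each dimension by its technical error of measurement, then match repeated measurement sets by Euclidean distance, treating sub-TEM differences as indistinguishable; all numbers are invented and the rounding rule is one possible reading of the method, not the authors' exact procedure.

```python
import numpy as np

tem = np.array([3.0, 2.0, 4.0])                     # hypothetical TEMs (mm) per dimension

first_session = np.array([1752.0, 310.0, 905.0])    # repeated measurements of one person (mm)
second_session = np.array([1753.0, 310.5, 903.5])

def distance_mm(a, b):
    return np.linalg.norm(a - b)

def distance_tem(a, b):
    # Express each dimension in units of TEM before computing the distance,
    # rounding sub-TEM differences to zero so they count as identical.
    diff = np.rint((a - b) / tem)
    return np.linalg.norm(diff)

print("distance in mm :", distance_mm(first_session, second_session))   # > 0
print("distance in TEM:", distance_tem(first_session, second_session))  # 0 here
```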
Estimating Flow-Duration and Low-Flow Frequency Statistics for Unregulated Streams in Oregon
Risley, John; Stonewall, Adam J.; Haluska, Tana
2008-01-01
Flow statistical datasets, basin-characteristic datasets, and regression equations were developed to provide decision makers with surface-water information needed for activities such as water-quality regulation, water-rights adjudication, biological habitat assessment, infrastructure design, and water-supply planning and management. The flow statistics, which included annual and monthly period of record flow durations (5th, 10th, 25th, 50th, and 95th percent exceedances) and annual and monthly 7-day, 10-year (7Q10) and 7-day, 2-year (7Q2) low flows, were computed at 466 streamflow-gaging stations at sites with unregulated flow conditions throughout Oregon and adjacent areas of neighboring States. Regression equations, created from the flow statistics and basin characteristics of the stations, can be used to estimate flow statistics at ungaged stream sites in Oregon. The study area was divided into 10 regression modeling regions based on ecological, topographic, geologic, hydrologic, and climatic criteria. In total, 910 annual and monthly regression equations were created to predict the 7 flow statistics in the 10 regions. Equations to predict the five flow-duration exceedance percentages and the two low-flow frequency statistics were created with Ordinary Least Squares and Generalized Least Squares regression, respectively. The standard errors of estimate of the equations created to predict the 5th and 95th percent exceedances had medians of 42.4 and 64.4 percent, respectively. The standard errors of prediction of the equations created to predict the 7Q2 and 7Q10 low-flow statistics had medians of 51.7 and 61.2 percent, respectively. Standard errors for regression equations for sites in western Oregon were smaller than those in eastern Oregon partly because of a greater density of available streamflow-gaging stations in western Oregon than eastern Oregon. High-flow regression equations (such as the 5th and 10th percent exceedances) also generally were more accurate than the low-flow regression equations (such as the 95th percent exceedance and 7Q10 low-flow statistic). The regression equations predict unregulated flow conditions in Oregon. Flow estimates need to be adjusted if they are used at ungaged sites that are regulated by reservoirs or affected by water-supply and agricultural withdrawals if actual flow conditions are of interest. The regression equations are installed in the USGS StreamStats Web-based tool (http://water.usgs.gov/osw/streamstats/index.html, accessed July 16, 2008). StreamStats provides users with a set of annual and monthly flow-duration and low-flow frequency estimates for ungaged sites in Oregon in addition to the basin characteristics for the sites. Prediction intervals at the 90-percent confidence level also are automatically computed.
Tully, Mary P; Ashcroft, Darren M; Dornan, Tim; Lewis, Penny J; Taylor, David; Wass, Val
2009-01-01
Prescribing errors are common, they result in adverse events and harm to patients and it is unclear how best to prevent them because recommendations are more often based on surmized rather than empirically collected data. The aim of this systematic review was to identify all informative published evidence concerning the causes of and factors associated with prescribing errors in specialist and non-specialist hospitals, collate it, analyse it qualitatively and synthesize conclusions from it. Seven electronic databases were searched for articles published between 1985-July 2008. The reference lists of all informative studies were searched for additional citations. To be included, a study had to be of handwritten prescriptions for adult or child inpatients that reported empirically collected data on the causes of or factors associated with errors. Publications in languages other than English and studies that evaluated errors for only one disease, one route of administration or one type of prescribing error were excluded. Seventeen papers reporting 16 studies, selected from 1268 papers identified by the search, were included in the review. Studies from the US and the UK in university-affiliated hospitals predominated (10/16 [62%]). The definition of a prescribing error varied widely and the included studies were highly heterogeneous. Causes were grouped according to Reason's model of accident causation into active failures, error-provoking conditions and latent conditions. The active failure most frequently cited was a mistake due to inadequate knowledge of the drug or the patient. Skills-based slips and memory lapses were also common. Where error-provoking conditions were reported, there was at least one per error. These included lack of training or experience, fatigue, stress, high workload for the prescriber and inadequate communication between healthcare professionals. Latent conditions included reluctance to question senior colleagues and inadequate provision of training. Prescribing errors are often multifactorial, with several active failures and error-provoking conditions often acting together to cause them. In the face of such complexity, solutions addressing a single cause, such as lack of knowledge, are likely to have only limited benefit. Further rigorous study, seeking potential ways of reducing error, needs to be conducted. Multifactorial interventions across many parts of the system are likely to be required.
Gaze Compensation as a Technique for Improving Hand–Eye Coordination in Prosthetic Vision
Titchener, Samuel A.; Shivdasani, Mohit N.; Fallon, James B.; Petoe, Matthew A.
2018-01-01
Purpose Shifting the region-of-interest within the input image to compensate for gaze shifts (“gaze compensation”) may improve hand–eye coordination in visual prostheses that incorporate an external camera. The present study investigated the effects of eye movement on hand-eye coordination under simulated prosthetic vision (SPV), and measured the coordination benefits of gaze compensation. Methods Seven healthy-sighted subjects performed a target localization-pointing task under SPV. Three conditions were tested, modeling: retinally stabilized phosphenes (uncompensated); gaze compensation; and no phosphene movement (center-fixed). The error in pointing was quantified for each condition. Results Gaze compensation yielded a significantly smaller pointing error than the uncompensated condition for six of seven subjects, and a similar or smaller pointing error than the center-fixed condition for all subjects (two-way ANOVA, P < 0.05). Pointing error eccentricity and gaze eccentricity were moderately correlated in the uncompensated condition (azimuth: R2 = 0.47; elevation: R2 = 0.51) but not in the gaze-compensated condition (azimuth: R2 = 0.01; elevation: R2 = 0.00). Increased variability in gaze at the time of pointing was correlated with greater reduction in pointing error in the center-fixed condition compared with the uncompensated condition (R2 = 0.64). Conclusions Eccentric eye position impedes hand–eye coordination in SPV. While limiting eye eccentricity in uncompensated viewing can reduce errors, gaze compensation is effective in improving coordination for subjects unable to maintain fixation. Translational Relevance The results highlight the present necessity for suppressing eye movement and support the use of gaze compensation to improve hand–eye coordination and localization performance in prosthetic vision. PMID:29321945
An affordable cuff-less blood pressure estimation solution.
Jain, Monika; Kumar, Niranjan; Deb, Sujay
2016-08-01
This paper presents a cuff-less hypertension pre-screening device that non-invasively monitors the Blood Pressure (BP) and Heart Rate (HR) continuously. The proposed device simultaneously records two clinically significant and highly correlated biomedical signals, viz., Electrocardiogram (ECG) and Photoplethysmogram (PPG). The device provides a common data acquisition platform that can interface with a PC/laptop, smartphone/tablet, Raspberry Pi, etc. The hardware stores and processes the recorded ECG and PPG in order to extract the real-time BP and HR using a kernel regression approach. The BP and HR estimation error is measured in terms of normalized mean square error, Error Standard Deviation (ESD) and Mean Absolute Error (MAE), with respect to a clinically proven digital BP monitor (OMRON HBP1300). The computed error falls under the maximum allowable error specified by the Association for the Advancement of Medical Instrumentation: MAE < 5 mmHg and ESD < 8 mmHg. The results are also validated using a two-tailed dependent-sample t-test. The proposed device is a portable, low-cost, home- and clinic-based solution for continuous health monitoring.
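The acceptance thresholds quoted above can be checked with a few lines once paired readings from the prototype and the reference monitor are available; the values below are illustrative, not study data.

```python
import numpy as np

reference = np.array([118., 126., 134., 109., 141., 122.])   # reference monitor readings (mmHg), hypothetical
estimated = np.array([121., 124., 138., 111., 137., 125.])   # cuff-less estimates (mmHg), hypothetical

errors = estimated - reference
mae = np.mean(np.abs(errors))            # mean absolute error
esd = np.std(errors, ddof=1)             # error standard deviation
nmse = np.mean(errors ** 2) / np.mean(reference ** 2)

print(f"MAE  = {mae:.1f} mmHg (limit quoted in the abstract: 5 mmHg)")
print(f"ESD  = {esd:.1f} mmHg (limit quoted in the abstract: 8 mmHg)")
print(f"NMSE = {nmse:.5f}")
```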
Cost effectiveness of the U.S. Geological Survey's stream-gaging program in Wisconsin
Walker, J.F.; Osen, L.L.; Hughes, P.E.
1987-01-01
A minimum budget of $510,000 is required to operate the program; a budget less than this does not permit proper service and maintenance of the gaging stations. At this minimum budget, the theoretical average standard error of instantaneous discharge is 14.4%. The maximum budget analyzed was $650,000 and resulted in an average standard error of instantaneous discharge of 7.2%.
ERIC Educational Resources Information Center
Goedeme, Tim
2013-01-01
If estimates are based on samples, they should be accompanied by appropriate standard errors and confidence intervals. This is true for scientific research in general, and is even more important if estimates are used to inform and evaluate policy measures such as those aimed at attaining the Europe 2020 poverty reduction target. In this article I…
Comparison of Optimal Design Methods in Inverse Problems
Banks, H. T.; Holm, Kathleen; Kappel, Franz
2011-01-01
Typical optimal design methods for inverse or parameter estimation problems are designed to choose optimal sampling distributions through minimization of a specific cost function related to the resulting error in parameter estimates. It is hoped that the inverse problem will produce parameter estimates with increased accuracy using data collected according to the optimal sampling distribution. Here we formulate the classical optimal design problem in the context of general optimization problems over distributions of sampling times. We present a new Prohorov metric based theoretical framework that permits one to treat succinctly and rigorously any optimal design criteria based on the Fisher Information Matrix (FIM). A fundamental approximation theory is also included in this framework. A new optimal design, SE-optimal design (standard error optimal design), is then introduced in the context of this framework. We compare this new design criteria with the more traditional D-optimal and E-optimal designs. The optimal sampling distributions from each design are used to compute and compare standard errors; the standard errors for parameters are computed using asymptotic theory or bootstrapping and the optimal mesh. We use three examples to illustrate ideas: the Verhulst-Pearl logistic population model [13], the standard harmonic oscillator model [13] and a popular glucose regulation model [16, 19, 29]. PMID:21857762
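To make the connection between a sampling distribution, the Fisher Information Matrix, and parameter standard errors concrete, the sketch below evaluates asymptotic SEs for the Verhulst-Pearl logistic model at one candidate set of sampling times; the parameter values, noise level, and times are placeholders, and the SE-optimal search over sampling distributions is not implemented.

```python
import numpy as np

def logistic(t, K, r, x0):
    # Verhulst-Pearl logistic growth curve x(t).
    return K * x0 * np.exp(r * t) / (K + x0 * (np.exp(r * t) - 1.0))

theta = np.array([17.5, 0.7, 0.1])        # hypothetical true (K, r, x0)
times = np.linspace(0.0, 25.0, 15)        # one candidate sampling distribution
sigma = 0.5                               # assumed constant observation noise

# Sensitivity matrix d f / d theta via central finite differences.
S = np.zeros((times.size, theta.size))
for j in range(theta.size):
    h = 1e-6 * max(abs(theta[j]), 1.0)
    up, down = theta.copy(), theta.copy()
    up[j] += h
    down[j] -= h
    S[:, j] = (logistic(times, *up) - logistic(times, *down)) / (2.0 * h)

fim = S.T @ S / sigma**2                  # Fisher Information Matrix
covariance = np.linalg.inv(fim)           # asymptotic covariance of the estimator
standard_errors = np.sqrt(np.diag(covariance))
print("asymptotic SEs for (K, r, x0):", standard_errors)
```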
Enhanced Oceanic Operations Human-In-The-Loop In-Trail Procedure Validation Simulation Study
NASA Technical Reports Server (NTRS)
Murdoch, Jennifer L.; Bussink, Frank J. L.; Chamberlain, James P.; Chartrand, Ryan C.; Palmer, Michael T.; Palmer, Susan O.
2008-01-01
The Enhanced Oceanic Operations Human-In-The-Loop In-Trail Procedure (ITP) Validation Simulation Study investigated the viability of an ITP designed to enable oceanic flight level changes that would not otherwise be possible. Twelve commercial airline pilots with current oceanic experience flew a series of simulated scenarios involving either standard or ITP flight level change maneuvers and provided subjective workload ratings, assessments of ITP validity and acceptability, and objective performance measures associated with the appropriate selection, request, and execution of ITP flight level change maneuvers. In the majority of scenarios, subject pilots correctly assessed the traffic situation, selected an appropriate response (i.e., either a standard flight level change request, an ITP request, or no request), and executed their selected flight level change procedure, if any, without error. Workload ratings for ITP maneuvers were acceptable and not substantially higher than for standard flight level change maneuvers, and, for the majority of scenarios and subject pilots, subjective acceptability ratings and comments for ITP were generally high and positive. Qualitatively, the ITP was found to be valid and acceptable. However, the error rates for ITP maneuvers were higher than for standard flight level changes, and these errors may have design implications for both the ITP and the study's prototype traffic display. These errors and their implications are discussed.
Cost-effectiveness of the stream-gaging program in North Carolina
Mason, R.R.; Jackson, N.M.
1985-01-01
This report documents the results of a study of the cost-effectiveness of the stream-gaging program in North Carolina. Data uses and funding sources are identified for the 146 gaging stations currently operated in North Carolina with a budget of $777,600 (1984). As a result of the study, eleven stations are nominated for discontinuance and five for conversion from recording to partial-record status. Large parts of North Carolina's Coastal Plain are identified as having sparse streamflow data. This sparsity should be remedied as funds become available. Efforts should also be directed toward defining the effects of drainage improvements on local hydrology and streamflow characteristics. The average standard error of streamflow records in North Carolina is 18.6 percent. This level of accuracy could be improved without increasing cost by increasing the frequency of field visits and streamflow measurements at stations with high standard errors and reducing the frequency at stations with low standard errors. A minimum budget of $762,000 is required to operate the 146-gage program. A budget less than this does not permit proper service and maintenance of the gages and recorders. At the minimum budget, and with the optimum allocation of field visits, the average standard error is 17.6 percent.
Infrared thermal imaging of the inner canthus of the eye as an estimator of body core temperature.
Teunissen, L P J; Daanen, H A M
2011-01-01
Several studies suggest that the temperature of the inner canthus of the eye (T(ca)), determined with infrared thermal imaging, is an appropriate method for core temperature estimation in mass screening of fever. However, these studies used the error prone tympanic temperature as a reference. Therefore, we compared T(ca) to oesophageal temperature (T(es)) as gold standard in 10 subjects during four conditions: rest, exercise, recovery and passive heating. T(ca) and T(es) differed significantly during all conditions (mean ΔT(es) - T(ca) 1.80 ± 0.89°C) and their relationship was inconsistent between conditions. Also within the rest condition alone, intersubject variability was too large for a reliable estimation of core temperature. This poses doubts on the use of T(ca) as a technique for core temperature estimation, although generalization of these results to fever detection should be verified experimentally using febrile patients.
Physician's error: medical or legal concept?
Mujovic-Zornic, Hajrija M
2010-06-01
This article deals with the common term covering the different physician's errors that often occur in the daily practice of health care. The author begins with the term medical malpractice, defined broadly as the practice of unjustified acts or failures to act on the part of a physician or other health care professionals, which results in harm to the patient. It is a common term that includes many types of medical errors, especially physician's errors. The author also discusses the concept of physician's error in particular, which is no longer understood in the traditional way only as a classic error of doing something manually wrong without the necessary skills (the medical concept), but as an error that violates the patient's basic rights and has a final legal consequence (the legal concept). In every case the essential element of liability is to establish this error as a breach of the physician's duty. The first point to note is that the standard of procedure and the standard of due care against which the physician will be judged are not going to be those of the ordinary reasonable man who enjoys no medical expertise. The court's decision should give the final answer and legal qualification in each concrete case. The author's conclusion is that stronger protection of human rights in the area of health equally demands a broader concept of physician's error, with the accent on its legal subject matter.
High-Accuracy Surface Figure Measurement of Silicon Mirrors at 80 K
NASA Technical Reports Server (NTRS)
Blake, Peter; Mink, Ronald G.; Chambers, John; Davila, Pamela; Robinson, F. David
2004-01-01
This report describes the equipment, experimental methods, and first results at a new facility for interferometric measurement of cryogenically-cooled spherical mirrors at the Goddard Space Flight Center Optics Branch. The procedure, using standard phase-shifting interferometry, has a standard combined uncertainty of 3.6 nm rms in its representation of the two-dimensional surface figure error at 80 K, and an uncertainty of plus or minus 1 nm in the rms statistic itself. The first mirror tested was a concave spherical silicon foam-core mirror, with a clear aperture of 120 mm. The optic surface was measured at room temperature using standard absolute techniques, and then the change in surface figure error from room temperature to 80 K was measured. The mirror was cooled within a cryostat, and its surface figure error was measured through a fused-silica window. The facility and techniques will be used to measure the surface figure error at 20 K of prototype lightweight silicon carbide and Cesic mirrors developed by Galileo Avionica (Italy) for the European Space Agency (ESA).
Xiao, Yongling; Abrahamowicz, Michal
2010-03-30
We propose two bootstrap-based methods to correct the standard errors (SEs) from Cox's model for within-cluster correlation of right-censored event times. The cluster-bootstrap method resamples, with replacement, only the clusters, whereas the two-step bootstrap method resamples (i) the clusters, and (ii) individuals within each selected cluster, with replacement. In simulations, we evaluate both methods and compare them with the existing robust variance estimator and the shared gamma frailty model, which are available in statistical software packages. We simulate clustered event time data, with latent cluster-level random effects, which are ignored in the conventional Cox's model. For cluster-level covariates, both proposed bootstrap methods yield accurate SEs, and type I error rates, and acceptable coverage rates, regardless of the true random effects distribution, and avoid serious variance under-estimation by conventional Cox-based standard errors. However, the two-step bootstrap method over-estimates the variance for individual-level covariates. We also apply the proposed bootstrap methods to obtain confidence bands around flexible estimates of time-dependent effects in a real-life analysis of cluster event times.
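A sketch of the simpler of the two schemes, resampling whole clusters with replacement and taking the standard deviation of the refitted Cox coefficients; the lifelines package and the column names are assumptions, and the two-step variant would add a within-cluster resampling step.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

def cluster_bootstrap_se(df, cluster_col, n_boot=200, seed=0):
    """Cluster-bootstrap standard errors for Cox regression coefficients."""
    rng = np.random.default_rng(seed)
    clusters = df[cluster_col].unique()
    estimates = []
    for _ in range(n_boot):
        sampled = rng.choice(clusters, size=len(clusters), replace=True)
        # Concatenate the selected clusters (a cluster may appear several times).
        boot = pd.concat([df[df[cluster_col] == c] for c in sampled], ignore_index=True)
        cph = CoxPHFitter()
        cph.fit(boot.drop(columns=cluster_col), duration_col="time", event_col="event")
        estimates.append(cph.params_)
    return pd.DataFrame(estimates).std(ddof=1)   # bootstrap SE per covariate

# Example usage with an invented clustered data frame:
# se = cluster_bootstrap_se(my_df, cluster_col="center")
```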
The effect of divided attention on novices and experts in laparoscopic task performance.
Ghazanfar, Mudassar Ali; Cook, Malcolm; Tang, Benjie; Tait, Iain; Alijani, Afshin
2015-03-01
Attention is important for the skilful execution of surgery. The surgeon's attention during surgery is divided between surgery and outside distractions. The effect of this divided attention has not been well studied previously. We aimed to compare the effect of dividing attention of novices and experts on a laparoscopic task performance. Following ethical approval, 25 novices and 9 expert surgeons performed a standardised peg transfer task in a laboratory setup under three randomly assigned conditions: silent as control condition and two standardised auditory distracting tasks requiring response (easy and difficult) as study conditions. Human reliability assessment was used for surgical task analysis. Primary outcome measures were correct auditory responses, task time, number of surgical errors and instrument movements. Secondary outcome measures included error rate, error probability and hand specific differences. Non-parametric statistics were used for data analysis. 21109 movements and 9036 total errors were analysed. Novices had increased mean task completion time (seconds) (171 ± 44SD vs. 149 ± 34, p < 0.05), number of total movements (227 ± 27 vs. 213 ± 26, p < 0.05) and number of errors (127 ± 51 vs. 96 ± 28, p < 0.05) during difficult study conditions compared to control. The correct responses to auditory stimuli were less frequent in experts (68 %) compared to novices (80 %). There was a positive correlation between error rate and error probability in novices (r (2) = 0.533, p < 0.05) but not in experts (r (2) = 0.346, p > 0.05). Divided attention conditions in theatre environment require careful consideration during surgical training as the junior surgeons are less able to focus their attention during these conditions.
Strength conditions for the elastic structures with a stress error
NASA Astrophysics Data System (ADS)
Matveev, A. D.
2017-10-01
As is known, constraints (strength conditions) are established for the safety factor of elastic structures and design details of a particular class, e.g. aviation structures: the safety factor values of such structures must lie within a given range. It should be noted that these constraints are set for safety factors corresponding to analytical (exact) solutions of the elasticity problems posed for the structures. Developing analytical solutions for most structures, especially those of irregular shape, involves great difficulties. Approximate approaches to solving the elasticity problems, e.g. the technical theories of deformation of homogeneous and composite plates, beams and shells, are widely used for a great number of structures. Technical theories based on hypotheses give rise to approximate (technical) solutions with an irreducible error whose exact value is difficult to determine. In static calculations of structural strength with a specified small range for the safety factor, applying technical (Strength of Materials) solutions is therefore difficult. However, there are numerical methods for developing approximate solutions of elasticity problems with arbitrarily small errors. In the present paper, adjusted reference (specified) strength conditions are proposed for the structural safety factor corresponding to an approximate solution of the elasticity problem. The proposed strength conditions take the stress error estimate into account. It is shown that, to fulfill the specified strength conditions for the safety factor of a given structure corresponding to an exact solution, adjusted strength conditions for the structural safety factor corresponding to an approximate solution are required. The stress error estimate, which is the basis for developing the adjusted strength conditions, has been determined for the specified strength conditions. Adjusted strength conditions expressed in terms of allowable stresses are also suggested. The adjusted strength conditions make it possible to determine the set of approximate solutions that satisfy the specified strength conditions. Examples are given of specified strength conditions satisfied using technical (Strength of Materials) solutions and strength conditions, as well as examples of stress conditions satisfied using approximate solutions with a small error.
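One plausible way to read the adjustment, stated here only as an illustrative sketch and not necessarily the author's exact formulation: if the safety factor is the ratio of allowable to computed stress and the approximate stress carries a known relative error bound, the specified band for the exact safety factor translates into a narrower band for the approximate one.

```latex
% Illustrative sketch only; the paper's notation and exact adjustment may differ.
% n   : safety factor from the exact stress,        n   = \sigma_{\mathrm{adm}}/\sigma
% n_h : safety factor from the approximate stress,  n_h = \sigma_{\mathrm{adm}}/\sigma_h
\[
  n_1 \le n \le n_2, \qquad
  \lvert \sigma_h - \sigma \rvert \le \delta\,\sigma \;\;(0<\delta<1)
  \;\Longrightarrow\;
  n_h(1-\delta) \le n \le n_h(1+\delta),
\]
\[
  \text{so the adjusted condition }\;
  \frac{n_1}{1-\delta} \;\le\; n_h \;\le\; \frac{n_2}{1+\delta}
  \;\text{ guarantees the specified one.}
\]
```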
DOE Office of Scientific and Technical Information (OSTI.GOV)
Farah, J., E-mail: jad.farah@irsn.fr; Clairand, I.; Huet, C.
2015-07-15
Purpose: To investigate the optimal use of XR-RV3 GafChromic® films to assess patient skin dose in interventional radiology while addressing the means to reduce uncertainties in dose assessment. Methods: XR-Type R GafChromic films have been shown to represent the most efficient and suitable solution to determine patient skin dose in interventional procedures. As film dosimetry can be associated with high uncertainty, this paper presents the EURADOS WG 12 initiative to carry out a comprehensive study of film characteristics with a multisite approach. The considered sources of uncertainties include scanner, film, and fitting-related errors. The work focused on studying film behavior with clinical high-dose-rate pulsed beams (previously unavailable in the literature) together with reference standard laboratory beams. Results: First, the performance analysis of six different scanner models has shown that scan uniformity perpendicular to the lamp motion axis and long term stability are the main sources of scanner-related uncertainties. These could induce errors of up to 7% on the film readings unless regularly checked and corrected. Typically, scan uniformity correction matrices and reading normalization to the scanner-specific and daily background reading should be done. In addition, the analysis on multiple film batches has shown that XR-RV3 films have generally good uniformity within one batch (<1.5%), require 24 h to stabilize after the irradiation and their response is roughly independent of dose rate (<5%). However, XR-RV3 films showed large variations (up to 15%) with radiation quality both in standard laboratory and in clinical conditions. As such, and prior to conducting patient skin dose measurements, it is mandatory to choose the appropriate calibration beam quality depending on the characteristics of the x-ray systems that will be used clinically. In addition, yellow side film irradiations should be preferentially used since they showed a lower dependence on beam parameters compared to white side film irradiations. Finally, among the six different fit equations tested in this work, typically used third order polynomials and more rational and simplistic equations, of the form dose inversely proportional to pixel value, were both found to provide satisfactory results. Fitting-related uncertainty was clearly identified as a major contributor to the overall film dosimetry uncertainty with up to 40% error on the dose estimate. Conclusions: The overall uncertainty associated with the use of XR-RV3 films to determine skin dose in the interventional environment can realistically be estimated to be around 20% (k = 1). This uncertainty can be reduced to within 5% if carefully monitoring scanner, film, and fitting-related errors or it can easily increase to over 40% if minimal care is not taken. This work demonstrates the importance of appropriate calibration, reading, fitting, and other film-related and scan-related processes, which will help improve the accuracy of skin dose measurements in interventional procedures.
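The two families of fit equations mentioned above can be illustrated with a short, hedged sketch: a third-order polynomial and a "dose inversely proportional to pixel value" form are fitted to a hypothetical set of calibration points and then used to convert a film reading to dose. The data values, initial guesses and exact functional forms are illustrative assumptions, not those of the EURADOS study.

```python
# Hedged sketch of the two calibration-fit families, on hypothetical calibration data.
import numpy as np
from scipy.optimize import curve_fit

pv   = np.array([0.92, 0.81, 0.70, 0.61, 0.53, 0.47])   # normalized net pixel values (hypothetical)
dose = np.array([0.0,  0.5,  1.0,  1.5,  2.0,  2.5])    # delivered calibration doses in Gy (hypothetical)

def poly3(x, a0, a1, a2, a3):
    # third-order polynomial dose-response
    return a0 + a1 * x + a2 * x**2 + a3 * x**3

def rational(x, a, b, c):
    # "dose inversely proportional to pixel value" type response
    return a + b / (x - c)

p_poly, _ = curve_fit(poly3, pv, dose)
p_rat,  _ = curve_fit(rational, pv, dose, p0=(0.0, 1.0, 1.2),
                      bounds=([-np.inf, 0.0, 1.0], [np.inf, np.inf, np.inf]))

film_pv = 0.66                                           # reading on an irradiated film (hypothetical)
print("poly3 dose estimate [Gy]:   ", poly3(film_pv, *p_poly))
print("rational dose estimate [Gy]:", rational(film_pv, *p_rat))
```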
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jaques, A.
The data presented in these tables were gathered with the use of the Fortran program FLUIDS, which was provided by the National Bureau of Standards. Fluid properties at transitional boundaries and points are those obtained with the best fit equation or formula for that particular fluid. Consequently, at such divergent points as the triple and critical points, the accuracy of the properties given by FLUIDS can be off by up to 10% in some cases. In listing the critical and triple point conditions within, values were taken from the National Bureau of Standards' publication "Thermodynamic Properties of Argon", not from FLUIDS. Outside of these two points, however, the error in FLUIDS is minimal, thus all other data in these tables were obtained through FLUIDS. The Temperature-Entropy Chart for Argon is also taken from NBS' "Thermodynamic Properties of Argon".
Factors affecting measurement of channel thickness in asymmetrical flow field-flow fractionation.
Dou, Haiyang; Jung, Euo Chang; Lee, Seungho
2015-05-08
Asymmetrical flow field-flow fractionation (AF4) has been considered a useful tool for the simultaneous separation and characterization of polydisperse macromolecules or colloidal nanoparticles. AF4 analysis requires knowledge of the channel thickness (w), which is usually measured by injecting a standard with a known diffusion coefficient (D) or hydrodynamic diameter (dh). An accurate determination of w is a challenge because of uncertainties arising from the membrane's compressibility, which may vary with experimental conditions. In the present study, the influence of factors including the size and type of the standard on the measurement of w was systematically investigated. The results revealed that steric effects and particle-membrane interactions by van der Waals or electrostatic forces may result in errors in the w measurement. Copyright © 2015 Elsevier B.V. All rights reserved.
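The following sketch shows how w is commonly back-calculated from the retention time of a size standard, assuming the widely quoted AF4 approximation t_r ≈ (w²/6D)·ln(1 + Vc/Vout) together with Stokes-Einstein for D. The flow rates, temperature and the standard's size below are illustrative values, not those used in the paper, and the relation itself should be treated as the textbook approximation rather than the authors' exact working equation.

```python
# Hedged sketch: channel thickness from the retention time of a standard (illustrative values).
import numpy as np

kB = 1.380649e-23          # J/K
T = 298.15                 # K
eta = 0.89e-3              # Pa*s, viscosity of water near 25 degC
dh = 60e-9                 # m, nominal hydrodynamic diameter of the standard (hypothetical)

D = kB * T / (3 * np.pi * eta * dh)          # Stokes-Einstein diffusion coefficient

t_r = 12.5 * 60            # s, measured retention time of the standard (hypothetical)
Vc = 1.0                   # mL/min, cross flow (hypothetical)
Vout = 0.5                 # mL/min, detector/outlet flow (hypothetical)

# t_r ~ (w^2 / 6D) * ln(1 + Vc/Vout)  =>  solve for w
w = np.sqrt(6 * D * t_r / np.log(1 + Vc / Vout))
print(f"estimated channel thickness w = {w * 1e6:.0f} um")
```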
Evaluation of RSA set-up from a clinical biplane fluoroscopy system for 3D joint kinematic analysis.
Bonanzinga, Tommaso; Signorelli, Cecilia; Bontempi, Marco; Russo, Alessandro; Zaffagnini, Stefano; Marcacci, Maurilio; Bragonzoni, Laura
2016-01-01
Dynamic roentgen stereophotogrammetric analysis (RSA), a technique currently based only on customized radiographic equipment, has been shown to be a very accurate method for detecting three-dimensional (3D) joint motion. The aim of the present work was to evaluate the applicability of an innovative RSA set-up for in vivo knee kinematic analysis, using a biplane fluoroscopic image system. To this end, the authors describe the set-up as well as a possible protocol for clinical knee joint evaluation, and the accuracy of the kinematic measurements is assessed. The authors evaluated the accuracy of 3D kinematic analysis of the knee in a new RSA set-up, based on a commercial biplane fluoroscopy system integrated into the clinical environment. The study was organized in three main phases: an in vitro test under static conditions, an in vitro test under dynamic conditions reproducing a flexion-extension range of motion (ROM), and an in vivo analysis of the flexion-extension ROM. For each test, the following were calculated as indications of the tracking accuracy: the mean, minimum and maximum values and the standard deviation of the error of rigid body fitting. In terms of rigid body fitting, in vivo test errors were found to be 0.10±0.05 mm. Phantom tests in static and kinematic conditions showed precision levels, for translations and rotations, below 0.1 mm/0.2° and below 0.5 mm/0.3° respectively, for all directions. The results of this study suggest that kinematic RSA can be successfully performed using a standard clinical biplane fluoroscopy system for the acquisition of slow movements of the lower limb. A kinematic RSA set-up using a clinical biplane fluoroscopy system is potentially applicable and provides a useful method for obtaining a better characterization of joint biomechanics.
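A rigid-body-fitting error of the kind reported above is typically the residual RMS distance between measured marker positions and the best-fit rigid transform of their reference positions. The sketch below computes such a statistic with a standard least-squares (Kabsch/SVD) fit; the marker coordinates are hypothetical and the clinical RSA software's exact implementation may differ.

```python
# Sketch of a rigid-body-fitting error (RMS residual after least-squares rigid registration).
import numpy as np

def rigid_body_fit_error(ref, meas):
    """ref, meas: (N, 3) arrays of corresponding marker coordinates in mm."""
    ref_c = ref - ref.mean(axis=0)
    meas_c = meas - meas.mean(axis=0)
    U, _, Vt = np.linalg.svd(ref_c.T @ meas_c)
    d = np.sign(np.linalg.det(Vt.T @ U.T))            # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T           # optimal rotation (Kabsch)
    residuals = meas_c - ref_c @ R.T
    return np.sqrt((residuals ** 2).sum(axis=1).mean())   # RMS fitting error in mm

# Hypothetical marker set: a near-rigid rotation plus a small translation
ref = np.array([[0.0, 0.0, 0.0], [10.0, 0.0, 0.0], [0.0, 12.0, 0.0], [0.0, 0.0, 8.0]])
rot = np.array([[0.998, -0.06, 0.0], [0.06, 0.998, 0.0], [0.0, 0.0, 1.0]])
meas = ref @ rot.T + 0.05
print(f"rigid body fitting error: {rigid_body_fit_error(ref, meas):.3f} mm")
```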
NASA Astrophysics Data System (ADS)
Pokhrel, Samir; Saha, Subodh Kumar; Dhakate, Ashish; Rahman, Hasibur; Chaudhari, Hemantkumar S.; Salunke, Kiran; Hazra, Anupam; Sujith, K.; Sikka, D. R.
2016-04-01
A detailed analysis of sensitivity to the initial condition for the simulation of the Indian summer monsoon using retrospective forecasts by the latest version of the Climate Forecast System version 2 (CFSv2) is carried out. This study primarily focuses on the tropical region of the Indian and Pacific Ocean basins, with special emphasis on the Indian land region. The simulated seasonal mean and the inter-annual standard deviations of rainfall, upper and lower level atmospheric circulations and Sea Surface Temperature (SST) tend to be more skillful as the lead forecast time decreases (5 month lead to 0 month lead time, i.e. L5-L0). In general, the spatial correlation (bias) increases (decreases) as the forecast lead time decreases. This is further substantiated by their averaged value over the selected study regions over the Indian and Pacific Ocean basins. The tendency of increase (decrease) of model bias with increasing (decreasing) forecast lead time also indicates the dynamical drift of the model. Large scale lower level circulation (850 hPa) shows enhancement of anomalous westerlies (easterlies) over the tropical region of the Indian Ocean (Western Pacific Ocean), which indicates the enhancement of model error with the decrease in lead time. At the upper level circulation (200 hPa), biases in both the tropical easterly jet and the subtropical westerly jet tend to decrease as the lead time decreases. Despite enhancement of the prediction skill, the mean SST bias seems to be insensitive to the initialization. All these biases are significant and together they make CFSv2 vulnerable to seasonal uncertainties in all the lead times. Overall, the zeroth lead (L0) seems to have the best skill; however, in the case of Indian summer monsoon rainfall (ISMR), the 3 month lead forecast time (L3) has the maximum prediction skill. This holds for different independent datasets, with maximum skill scores at L3 of 0.64, 0.42 and 0.57 with respect to the Global Precipitation Climatology Project, the CPC Merged Analysis of Precipitation and the India Meteorological Department precipitation dataset, respectively. Despite a significant El Niño-Southern Oscillation (ENSO) spring predictability barrier at L3, the ISMR skill score is highest at L3. Further, the skill for the large scale zonal wind shear (Webster-Yang index) and SST over the Niño3.4 region is best at L1 and L0. This implies that the predictability of ISMR is controlled by factors other than ENSO and the Indian Ocean Dipole. Also, the model error (forecast error) outruns the error arising from inadequacies in the initial conditions (predictability error). Thus, model deficiency has more serious consequences for the seasonal forecast than initial condition error. All the model parameters show an increase in the predictability error over the equatorial eastern Pacific basin as the lead decreases, peaking at L2 and then decreasing. The dynamical consistency of both the forecast and the predictability error among all the variables indicates that these biases are purely systematic in nature, and improvement of the physical processes in CFSv2 may enhance the overall predictability.
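For readers less familiar with lead-dependent skill scores of the kind quoted above, the toy sketch below computes an anomaly correlation against observations separately for each initialization lead. The arrays are random placeholders standing in for hindcast and observed seasonal-mean anomalies; no CFSv2 or observational data are reproduced here.

```python
# Toy illustration of lead-dependent anomaly correlation skill (placeholder data).
import numpy as np

rng = np.random.default_rng(1)
n_years, n_leads = 30, 6
obs = rng.standard_normal(n_years)                        # observed anomalies (placeholder)
fcst = obs[None, :] * np.linspace(0.7, 0.2, n_leads)[:, None] \
       + rng.standard_normal((n_leads, n_years))          # skill degrading with lead (toy)

def anomaly_correlation(f, o):
    fa, oa = f - f.mean(), o - o.mean()
    return (fa * oa).sum() / np.sqrt((fa ** 2).sum() * (oa ** 2).sum())

for lead in range(n_leads):
    print(f"L{lead}: ACC = {anomaly_correlation(fcst[lead], obs):.2f}")
```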
NASA Technical Reports Server (NTRS)
Moore, H. J.; Wu, S. C.
1973-01-01
The effect of reading error on two hypothetical slope frequency distributions and on two slope frequency distributions from actual lunar data was examined in order to ensure that these errors do not cause excessive overestimates of algebraic standard deviations for the slope frequency distributions. The errors introduced are insignificant when the reading error is small and the slope length is large. A method for correcting the errors in slope frequency distributions is presented and applied to 11 distributions obtained from Apollo 15, 16, and 17 panoramic camera photographs and Apollo 16 metric camera photographs.
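A simple way to see why the effect vanishes for small reading errors and long slope baselines is a variance-subtraction argument: if slopes are tangents of elevation differences over a baseline L and each endpoint elevation carries an independent reading error, the reading-error variance scales as (reading error / L)². The sketch below illustrates that idea with hypothetical numbers; it is not necessarily the exact correction of Moore and Wu.

```python
# Hedged sketch of a variance-subtraction correction for reading error in slope SDs.
import numpy as np

def corrected_slope_sd(observed_sd, reading_error, slope_length):
    """Inputs in consistent units; slopes expressed as tangents (dimensionless)."""
    error_var = 2.0 * (reading_error / slope_length) ** 2   # two independent endpoint readings
    true_var = observed_sd ** 2 - error_var
    return np.sqrt(true_var) if true_var > 0 else 0.0

# e.g. 2 m reading error on 25 m vs. 200 m baselines (hypothetical numbers)
print(corrected_slope_sd(0.12, 2.0, 25.0))    # noticeable correction
print(corrected_slope_sd(0.12, 2.0, 200.0))   # nearly unchanged
```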
Error-Based Design Space Windowing
NASA Technical Reports Server (NTRS)
Papila, Melih; Papila, Nilay U.; Shyy, Wei; Haftka, Raphael T.; Fitz-Coy, Norman
2002-01-01
Windowing of the design space is considered in order to reduce the bias errors due to low-order polynomial response surfaces (RS). Standard design space windowing (DSW) selects a region of interest by setting a requirement on the response level and checks it using global RS predictions over the design space. This approach, however, is vulnerable because RS modeling errors may lead to zooming in on the wrong region. The approach is modified by introducing an eigenvalue error measure based on a point-to-point mean squared error criterion. Two examples are presented to demonstrate the benefit of the error-based DSW.
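The sketch below is a simplified stand-in for that workflow: fit a low-order polynomial RS, mark the candidate window from a response-level requirement, and screen out points where a prediction-error estimate is large before zooming in. The error screen here is plain leave-one-out error, not the eigenvalue-based measure of the paper, and the response function and thresholds are toy assumptions.

```python
# Simplified, hedged illustration of error-screened design space windowing.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 25)
y = np.sin(3 * x) + 0.05 * rng.standard_normal(x.size)   # toy "true" response with noise

def loo_error(x, y, deg):
    # leave-one-out prediction errors of a degree-`deg` polynomial RS
    errs = []
    for i in range(x.size):
        mask = np.arange(x.size) != i
        coef = np.polyfit(x[mask], y[mask], deg)
        errs.append(np.polyval(coef, x[i]) - y[i])
    return np.asarray(errs)

coef = np.polyfit(x, y, 2)                  # low-order global RS
pred = np.polyval(coef, x)
err = loo_error(x, y, 2)

requirement = pred >= 0.7                   # response-level requirement defining the window
trustworthy = np.abs(err) <= 2 * err.std()  # error-based screen before zooming in
window = x[requirement & trustworthy]
print("windowed design range:", window.min(), "-", window.max())
```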
Evaluating a medical error taxonomy.
Brixey, Juliana; Johnson, Todd R; Zhang, Jiajie
2002-01-01
Healthcare has been slow in using human factors principles to reduce medical errors. The Center for Devices and Radiological Health (CDRH) recognizes that a lack of attention to human factors during product development may lead to errors that have the potential for patient injury, or even death. In response to the need for reducing medication errors, the National Coordinating Council for Medication Error Reporting and Prevention (NCC MERP) released the NCC MERP taxonomy that provides a standard language for reporting medication errors. This project maps the NCC MERP taxonomy of medication error to MedWatch medical errors involving infusion pumps. Of particular interest are human factors associated with medical device errors. The NCC MERP taxonomy of medication errors is limited in mapping information from MedWatch because of the focus on the medical device and the format of reporting.
NASA Astrophysics Data System (ADS)
Duan, Y.; Wilson, A. M.; Barros, A. P.
2014-10-01
A diagnostic analysis of the space-time structure of error in Quantitative Precipitation Estimates (QPE) from the Precipitation Radar (PR) on the Tropical Rainfall Measurement Mission (TRMM) satellite is presented here in preparation for the Integrated Precipitation and Hydrology Experiment (IPHEx) in 2014. IPHEx is the first NASA ground-validation field campaign after the launch of the Global Precipitation Measurement (GPM) satellite. In anticipation of GPM, a science-grade high-density raingauge network was deployed at mid to high elevations in the Southern Appalachian Mountains, USA since 2007. This network allows for direct comparison between ground-based measurements from raingauges and satellite-based QPE (specifically, PR 2A25 V7 using 5 years of data 2008-2013). Case studies were conducted to characterize the vertical profiles of reflectivity and rain rate retrievals associated with large discrepancies with respect to ground measurements. The spatial and temporal distribution of detection errors (false alarm, FA, and missed detection, MD) and magnitude errors (underestimation, UND, and overestimation, OVR) for stratiform and convective precipitation are examined in detail toward elucidating the physical basis of retrieval error. The diagnostic error analysis reveals that detection errors are linked to persistent stratiform light rainfall in the Southern Appalachians, which explains the high occurrence of FAs throughout the year, as well as the diurnal MD maximum at midday in the cold season (fall and winter), and especially in the inner region. Although UND dominates the magnitude error budget, underestimation of heavy rainfall conditions accounts for less than 20% of the total consistent with regional hydrometeorology. The 2A25 V7 product underestimates low level orographic enhancement of rainfall associated with fog, cap clouds and cloud to cloud feeder-seeder interactions over ridges, and overestimates light rainfall in the valleys by large amounts, though this behavior is strongly conditioned by the coarse spatial resolution (5 km) of the terrain topography mask used to remove ground clutter effects. Precipitation associated with small-scale systems (< 25 km2) and isolated deep convection tends to be underestimated, which we attribute to non-uniform beam-filling effects due to spatial averaging of reflectivity at the PR resolution. Mixed precipitation events (i.e., cold fronts and snow showers) fall into OVR or FA categories, but these are also the types of events for which observations from standard ground-based raingauge networks are more likely subject to measurement uncertainty, that is raingauge underestimation errors due to under-catch and precipitation phase. Overall, the space-time structure of the errors shows strong links among precipitation, envelope orography, landform (ridge-valley contrasts), and local hydrometeorological regime that is strongly modulated by the diurnal cycle, pointing to three major error causes that are inter-related: (1) representation of concurrent vertically and horizontally varying microphysics; (2) non uniform beam filling (NUBF) effects and ambiguity in the detection of bright band position; and (3) spatial resolution and ground clutter correction.
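The detection and magnitude categories used in this analysis (FA, MD, UND, OVR) can be illustrated with a short sketch that classifies paired satellite and gauge rain-rate samples. The 0.1 mm/h detection threshold, the category labels for non-events, and the sample values are illustrative choices, not those of the study.

```python
# Sketch of FA/MD/UND/OVR classification for paired satellite and gauge rain rates.
import numpy as np

def classify_qpe_errors(satellite, gauge, threshold=0.1):
    sat_rain, gauge_rain = satellite >= threshold, gauge >= threshold
    categories = np.full(satellite.shape, "CN", dtype=object)   # correct negatives (both dry)
    categories[sat_rain & ~gauge_rain] = "FA"                   # false alarm
    categories[~sat_rain & gauge_rain] = "MD"                   # missed detection
    hits = sat_rain & gauge_rain
    categories[hits & (satellite < gauge)] = "UND"              # underestimation
    categories[hits & (satellite > gauge)] = "OVR"              # overestimation
    categories[hits & (satellite == gauge)] = "HIT"             # exact agreement (rare)
    return categories

sat   = np.array([0.0, 1.2, 0.0, 3.5, 0.4])    # satellite QPE, mm/h (hypothetical)
gauge = np.array([0.3, 0.0, 0.0, 6.0, 0.4])    # gauge rain rate, mm/h (hypothetical)
print(classify_qpe_errors(sat, gauge))
```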
NASA Astrophysics Data System (ADS)
Duan, Y.; Wilson, A. M.; Barros, A. P.
2015-03-01
A diagnostic analysis of the space-time structure of error in quantitative precipitation estimates (QPEs) from the precipitation radar (PR) on the Tropical Rainfall Measurement Mission (TRMM) satellite is presented here in preparation for the Integrated Precipitation and Hydrology Experiment (IPHEx) in 2014. IPHEx is the first NASA ground-validation field campaign after the launch of the Global Precipitation Measurement (GPM) satellite. In anticipation of GPM, a science-grade high-density raingauge network was deployed at mid to high elevations in the southern Appalachian Mountains, USA, since 2007. This network allows for direct comparison between ground-based measurements from raingauges and satellite-based QPE (specifically, PR 2A25 Version 7 using 5 years of data 2008-2013). Case studies were conducted to characterize the vertical profiles of reflectivity and rain rate retrievals associated with large discrepancies with respect to ground measurements. The spatial and temporal distribution of detection errors (false alarm, FA; missed detection, MD) and magnitude errors (underestimation, UND; overestimation, OVR) for stratiform and convective precipitation are examined in detail toward elucidating the physical basis of retrieval error. The diagnostic error analysis reveals that detection errors are linked to persistent stratiform light rainfall in the southern Appalachians, which explains the high occurrence of FAs throughout the year, as well as the diurnal MD maximum at midday in the cold season (fall and winter) and especially in the inner region. Although UND dominates the error budget, underestimation of heavy rainfall conditions accounts for less than 20% of the total, consistent with regional hydrometeorology. The 2A25 V7 product underestimates low-level orographic enhancement of rainfall associated with fog, cap clouds and cloud to cloud feeder-seeder interactions over ridges, and overestimates light rainfall in the valleys by large amounts, though this behavior is strongly conditioned by the coarse spatial resolution (5 km) of the topography mask used to remove ground-clutter effects. Precipitation associated with small-scale systems (< 25 km2) and isolated deep convection tends to be underestimated, which we attribute to non-uniform beam-filling effects due to spatial averaging of reflectivity at the PR resolution. Mixed precipitation events (i.e., cold fronts and snow showers) fall into OVR or FA categories, but these are also the types of events for which observations from standard ground-based raingauge networks are more likely subject to measurement uncertainty, that is raingauge underestimation errors due to undercatch and precipitation phase. Overall, the space-time structure of the errors shows strong links among precipitation, envelope orography, landform (ridge-valley contrasts), and a local hydrometeorological regime that is strongly modulated by the diurnal cycle, pointing to three major error causes that are inter-related: (1) representation of concurrent vertically and horizontally varying microphysics; (2) non-uniform beam filling (NUBF) effects and ambiguity in the detection of bright band position; and (3) spatial resolution and ground-clutter correction.
Federal Register 2010, 2011, 2012, 2013, 2014
2013-07-19
... correction of wording and typographical errors, and further aligns the FIPS with Key Cryptography Standard... Cryptography Standard (PKCS) 1. NIST published a Federal Register Notice (77 FR 21538) on April 10, 2012 to...
NASA Astrophysics Data System (ADS)
Daly, S.; Rainford, L.; Butler, M. L.
2014-03-01
Several studies have demonstrated the importance of environmental conditions in the radiology reporting environment, with many indicating that incorrect parameters could lead to error and misinterpretation. Literature is available with recommendations as to the levels that should be achieved in clinical practice, but evidence of adherence to these guidelines in radiology reporting environments is absent. This study audited the reporting environments of four teleradiology and eight hospital-based radiology reporting areas. The audit aimed to quantify adherence to guidelines and identify differences between the locations with respect to layout and design, monitor distance and angle, as well as the ambient factors of the reporting environments. In line with international recommendations, an audit tool was designed to assess the layout and design of reporting environments, the monitor angles and distances used by radiologists when reporting, and ambient factors such as noise, light and temperature. The review of conditions was carried out by the same independent auditor for consistency. The results obtained were compared against international standards and current research. Each radiology environment was given an overall compliance score to establish whether or not its environment was in line with recommended guidelines. Poor compliance with international recommendations and standards among radiology reporting environments was identified. Teleradiology reporting environments demonstrated greater compliance than hospital environments. The findings of this study identify a need for greater awareness of environmental and perceptual issues in the clinical setting. Further work involving a larger number of clinical centres is recommended.
Role of memory errors in quantum repeaters
NASA Astrophysics Data System (ADS)
Hartmann, L.; Kraus, B.; Briegel, H.-J.; Dür, W.
2007-03-01
We investigate the influence of memory errors in the quantum repeater scheme for long-range quantum communication. We show that the communication distance is limited in standard operation mode due to memory errors resulting from unavoidable waiting times for classical signals. We show how to overcome these limitations by (i) improving local memory and (ii) introducing two operational modes of the quantum repeater. In both operational modes, the repeater is run blindly, i.e., without waiting for classical signals to arrive. In the first scheme, entanglement purification protocols based on one-way classical communication are used, allowing communication over arbitrary distances; however, the error thresholds for noise in local control operations are very stringent. The second scheme makes use of entanglement purification protocols with two-way classical communication and inherits the favorable error thresholds of the repeater run in standard mode. One can increase the possible communication distance by an order of magnitude with reasonable overhead in physical resources. We outline the architecture of a quantum repeater that can possibly ensure intercontinental quantum communication.
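The distance limitation in standard operation mode can be pictured with a toy model: stored pairs decohere while the heralding classical signal travels back over the segment, so the memory-limited fidelity drops with distance. The exponential memory model and every parameter value below are illustrative assumptions, not numbers from the paper.

```python
# Toy illustration: fidelity of a stored pair after waiting for classical heralding signals.
import numpy as np

c_fiber = 2e8          # m/s, signal speed in fiber (assumption)
T_mem = 5e-3           # s, memory coherence time (assumption)
F0 = 0.98              # fidelity right after entanglement generation (assumption)

def fidelity_after_wait(distance_m):
    t_wait = 2 * distance_m / c_fiber                      # round-trip classical signalling time
    return 0.25 + (F0 - 0.25) * np.exp(-t_wait / T_mem)    # depolarizing-memory toy model

for d_km in (10, 100, 500, 1000):
    print(f"{d_km:5d} km : F = {fidelity_after_wait(d_km * 1e3):.3f}")
```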
NASA Astrophysics Data System (ADS)
Hernández, Mario R.; Francés, Félix
2015-04-01
One phase of the hydrological model implementation process that contributes significantly to the uncertainty of hydrological predictions is the calibration phase, in which values of the unknown model parameters are tuned by optimizing an objective function. An unsuitable error model (e.g. Standard Least Squares, SLS) introduces noise into the estimation of the parameters. The main sources of this noise are input errors and hydrological model structural deficiencies. The biased calibrated parameters thus cause the model divergence phenomenon, in which the error variance of the (spatially and temporally) forecasted flows far exceeds the error variance in the fitting period, and they provoke the loss of part or all of the physical meaning of the modeled processes. In other words, the result is a calibrated hydrological model that works well, but not for the right reasons. Besides, an unsuitable error model yields a non-reliable predictive uncertainty assessment. Hence, with the aim of preventing all these undesirable effects, this research focuses on the Bayesian joint inference (BJI) of both the hydrological and error model parameters, considering a general additive (GA) error model that allows for correlation, non-stationarity (in variance and bias) and non-normality of model residuals. As the hydrological model, a conceptual distributed model called TETIS has been used, with a particular split structure of the effective model parameters. Bayesian inference has been performed with the aid of a Markov Chain Monte Carlo (MCMC) algorithm called DREAM-ZS. The MCMC algorithm quantifies the uncertainty of the hydrological and error model parameters by obtaining the joint posterior probability distribution, conditioned on the observed flows. The BJI methodology is a very powerful and reliable tool, but it must be used correctly: that is, if non-stationarity in error variance and bias is modeled, the Total Laws must be taken into account. The results of this research show that the application of BJI with a GA error model improves the robustness of the hydrological parameters (diminishing the model divergence phenomenon) and improves the reliability of the streamflow predictive distribution, compared with the results of an unsuitable error model such as SLS. Finally, the most likely prediction in a validation period shows similar performance for both the BJI+GA and SLS error models.
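The sketch below shows what a log-likelihood for a general additive residual-error model of this kind can look like: bias and standard deviation that vary linearly with the simulated flow, plus AR(1) autocorrelation of the standardized residuals. The parameter names, the linear forms, and the synthetic data are illustrative assumptions, not necessarily the exact formulation used with TETIS in the study.

```python
# Hedged sketch of a GA error-model log-likelihood (heteroscedastic, biased, AR(1) residuals).
import numpy as np

def ga_error_loglik(obs, sim, mu0, mu1, sigma0, sigma1, phi):
    """obs, sim: observed and simulated flow series; phi: AR(1) coefficient with |phi| < 1."""
    bias = mu0 + mu1 * sim                        # flow-dependent bias (non-stationary mean)
    sigma = sigma0 + sigma1 * sim                 # flow-dependent spread (heteroscedasticity)
    eta = (obs - sim - bias) / sigma              # standardized residuals, marginal N(0, 1)
    # AR(1) with unit marginal variance: eta_t = phi * eta_{t-1} + eps_t, eps ~ N(0, 1 - phi^2)
    ll = -0.5 * (np.log(2 * np.pi) + eta[0] ** 2) - np.log(sigma[0])
    innov = eta[1:] - phi * eta[:-1]
    s2 = 1.0 - phi ** 2
    ll += np.sum(-0.5 * (np.log(2 * np.pi * s2) + innov ** 2 / s2) - np.log(sigma[1:]))
    return ll

# Toy usage with synthetic series (placeholder data, not TETIS output)
rng = np.random.default_rng(2)
sim = rng.gamma(2.0, 5.0, size=200)
obs = sim + 0.5 + 0.1 * sim + (0.2 + 0.1 * sim) * rng.standard_normal(sim.size)
print(ga_error_loglik(obs, sim, mu0=0.5, mu1=0.1, sigma0=0.2, sigma1=0.1, phi=0.3))
```

In a BJI setting, this log-likelihood would be combined with priors on both the hydrological and error-model parameters and explored with an MCMC sampler such as DREAM-ZS.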
Lee, Julia Ai Cheng; Otaiba, Stephanie Al
2016-01-01
In this article, the authors examined the spelling performance of 430 kindergarteners, which included a high-risk sample, to determine the relations between end-of-kindergarten reading and spelling in a high quality language arts setting. The spelling outcomes, including the spelling errors of the good and the poor readers, were described, analyzed, and compared. The findings suggest that not all the children had acquired the desired standard as outlined by the Common Core State Standards. In addition, not every good reader is a good speller, and not every poor speller is a poor reader. The study shows that spelling tasks that are accompanied by spelling error analysis provide a powerful window for making instructional sense of children's spelling errors and for individualizing spelling instructional strategies. PMID:28706433