A Simple Exact Error Rate Analysis for DS-CDMA with Arbitrary Pulse Shape in Flat Nakagami Fading
NASA Astrophysics Data System (ADS)
Rahman, Mohammad Azizur; Sasaki, Shigenobu; Kikuchi, Hisakazu; Harada, Hiroshi; Kato, Shuzo
A simple exact error rate analysis is presented for random binary direct sequence code division multiple access (DS-CDMA) considering a general pulse shape and flat Nakagami fading channel. First of all, a simple model is developed for the multiple access interference (MAI). Based on this, a simple exact expression of the characteristic function (CF) of MAI is developed in a straightforward manner. Finally, an exact expression of error rate is obtained following the CF method of error rate analysis. The exact error rate so obtained can be evaluated much more easily than the only reliable approximate error rate expression currently available, which is based on the Improved Gaussian Approximation (IGA).
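As a rough illustration of the CF method of error rate analysis mentioned above (a minimal Python sketch, not the paper's derivation): the error probability is obtained by Gil-Pelaez inversion of the characteristic function of the decision statistic. A Gaussian decision statistic is used here only as a toy stand-in so the result can be checked against the Q-function.

```python
import numpy as np
from scipy.integrate import quad
from scipy.stats import norm

def error_prob_from_cf(cf, upper=50.0):
    """P(D < 0) via Gil-Pelaez inversion of the CF of the decision statistic D."""
    integrand = lambda t: np.imag(cf(t)) / t
    val, _ = quad(integrand, 0.0, upper, limit=200)
    return 0.5 - val / np.pi

# Toy check: D ~ N(mu, sigma^2), for which the exact error probability is Q(mu/sigma)
mu, sigma = 1.0, 1.0
cf = lambda t: np.exp(1j * mu * t - 0.5 * (sigma * t) ** 2)
print(error_prob_from_cf(cf), norm.sf(mu / sigma))  # both ~0.1587
```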
NASA Technical Reports Server (NTRS)
Levy, G.; Brown, R. A.
1986-01-01
A simple economical objective analysis scheme is devised and tested on real scatterometer data. It is designed to treat dense data such as those of the Seasat A Satellite Scatterometer (SASS) for individual or multiple passes, and preserves subsynoptic scale features. Errors are evaluated with the aid of sampling ('bootstrap') statistical methods. In addition, sensitivity tests have been performed which establish qualitative confidence in calculated fields of divergence and vorticity. The SASS wind algorithm could be improved; however, the data at this point are limited by instrument errors rather than analysis errors. The analysis error is typically negligible in comparison with the instrument error, but amounts to 30 percent of the instrument error in areas of strong wind shear. The scheme is very economical, and thus suitable for large volumes of dense data such as SASS data.
Error Analysis of Present Simple Tense in the Interlanguage of Adult Arab English Language Learners
ERIC Educational Resources Information Center
Muftah, Muneera; Rafik-Galea, Shameem
2013-01-01
The present study analyses errors on present simple tense among adult Arab English language learners. It focuses on the error on 3sg "-s" (the third person singular present tense agreement morpheme "-s"). The learners are undergraduate adult Arabic speakers learning English as a foreign language. The study gathered data from…
Popa, Laurentiu S.; Streng, Martha L.
2017-01-01
Most hypotheses of cerebellar function emphasize a role in real-time control of movements. However, the cerebellum’s use of current information to adjust future movements and its involvement in sequencing, working memory, and attention argue for predicting and maintaining information over extended time windows. The present study examines the time course of Purkinje cell discharge modulation in the monkey (Macaca mulatta) during manual, pseudo-random tracking. Analysis of the simple spike firing from 183 Purkinje cells during tracking reveals modulation up to 2 s before and after kinematics and position error. Modulation significance was assessed against trial shuffled firing, which decoupled simple spike activity from behavior and abolished long-range encoding while preserving data statistics. Position, velocity, and position errors have the most frequent and strongest long-range feedforward and feedback modulations, with less common, weaker long-term correlations for speed and radial error. Position, velocity, and position errors can be decoded from the population simple spike firing with considerable accuracy for even the longest predictive (-2000 to -1500 ms) and feedback (1500 to 2000 ms) epochs. Separate analysis of the simple spike firing in the initial hold period preceding tracking shows similar long-range feedforward encoding of the upcoming movement and in the final hold period feedback encoding of the just completed movement, respectively. Complex spike analysis reveals little long-term modulation with behavior. We conclude that Purkinje cell simple spike discharge includes short- and long-range representations of both upcoming and preceding behavior that could underlie cerebellar involvement in error correction, working memory, and sequencing. PMID:28413823
Nonparametric Estimation of Standard Errors in Covariance Analysis Using the Infinitesimal Jackknife
ERIC Educational Resources Information Center
Jennrich, Robert I.
2008-01-01
The infinitesimal jackknife provides a simple general method for estimating standard errors in covariance structure analysis. Beyond its simplicity and generality what makes the infinitesimal jackknife method attractive is that essentially no assumptions are required to produce consistent standard error estimates, not even the requirement that the…
Algebra Students' Difficulty with Fractions: An Error Analysis
ERIC Educational Resources Information Center
Brown, George; Quinn, Robert J.
2006-01-01
An analysis of the 1990 National Assessment of Educational Progress (NAEP) found that only 46 percent of all high school seniors demonstrated success with a grasp of decimals, percentages, fractions and simple algebra. This article investigates error patterns that emerge as students attempt to answer questions involving the ability to apply…
Elliott, Rachel A; Putman, Koen D; Franklin, Matthew; Annemans, Lieven; Verhaeghe, Nick; Eden, Martin; Hayre, Jasdeep; Rodgers, Sarah; Sheikh, Aziz; Avery, Anthony J
2014-06-01
We recently showed that a pharmacist-led information technology-based intervention (PINCER) was significantly more effective in reducing medication errors in general practices than providing simple feedback on errors, with cost per error avoided at £79 (US$131). We aimed to estimate cost effectiveness of the PINCER intervention by combining effectiveness in error reduction and intervention costs with the effect of the individual errors on patient outcomes and healthcare costs, to estimate the effect on costs and QALYs. We developed Markov models for each of six medication errors targeted by PINCER. Clinical event probability, treatment pathway, resource use and costs were extracted from literature and costing tariffs. A composite probabilistic model combined patient-level error models with practice-level error rates and intervention costs from the trial. Cost per extra QALY and cost-effectiveness acceptability curves were generated from the perspective of NHS England, with a 5-year time horizon. The PINCER intervention generated £2,679 less cost and 0.81 more QALYs per practice [incremental cost-effectiveness ratio (ICER): -£3,037 per QALY] in the deterministic analysis. In the probabilistic analysis, PINCER generated 0.001 extra QALYs per practice compared with simple feedback, at £4.20 less per practice. Despite this extremely small set of differences in costs and outcomes, PINCER dominated simple feedback with a mean ICER of -£3,936 (standard error £2,970). At a ceiling 'willingness-to-pay' of £20,000/QALY, PINCER reaches 59 % probability of being cost effective. PINCER produced marginal health gain at slightly reduced overall cost. Results are uncertain due to the poor quality of data to inform the effect of avoiding errors.
Error analysis of multi-needle Langmuir probe measurement technique.
Barjatya, Aroh; Merritt, William
2018-04-01
Multi-needle Langmuir probe is a fairly new instrument technique that has been flown on several recent sounding rockets and is slated to fly on a subset of QB50 CubeSat constellation. This paper takes a fundamental look into the data analysis procedures used for this instrument to derive absolute electron density. Our calculations suggest that while the technique remains promising, the current data analysis procedures could easily result in errors of 50% or more. We present a simple data analysis adjustment that can reduce errors by at least a factor of five in typical operation.
Error analysis of multi-needle Langmuir probe measurement technique
NASA Astrophysics Data System (ADS)
Barjatya, Aroh; Merritt, William
2018-04-01
Multi-needle Langmuir probe is a fairly new instrument technique that has been flown on several recent sounding rockets and is slated to fly on a subset of QB50 CubeSat constellation. This paper takes a fundamental look into the data analysis procedures used for this instrument to derive absolute electron density. Our calculations suggest that while the technique remains promising, the current data analysis procedures could easily result in errors of 50% or more. We present a simple data analysis adjustment that can reduce errors by at least a factor of five in typical operation.
NASA Astrophysics Data System (ADS)
Ma, Chen-xi; Ding, Guo-qing
2017-10-01
Simple harmonic waves and synthesized simple harmonic waves are widely used in instrument testing. However, because of errors caused by gear clearance and the time-delay error of the FPGA, it is difficult to keep a servo electric cylinder in precise simple harmonic motion under high-speed, high-frequency and large-load conditions. To solve this problem, an error compensation method is proposed in this paper. In the method, a displacement sensor is fitted on the piston rod of the electric cylinder. The real-time displacement of the piston rod measured by this sensor is fed back to the input of the servo motor, realizing closed-loop control, and compensation pulses are added in the next period of the synthesized waves. The system uses an FPGA as the processing core. The software mainly comprises a waveform generator, an Ethernet module, a memory module, a pulse generator, a pulse selector, a protection module and an error compensation module. A shock absorber durability test rig is used as the testing platform; it mainly comprises a single electric cylinder, a servo motor driving the electric cylinder, and the servo motor driver.
Optimal interpolation analysis of leaf area index using MODIS data
Gu, Yingxin; Belair, Stephane; Mahfouf, Jean-Francois; Deblonde, Godelieve
2006-01-01
A simple data analysis technique for vegetation leaf area index (LAI) using Moderate Resolution Imaging Spectroradiometer (MODIS) data is presented. The objective is to generate LAI data that is appropriate for numerical weather prediction. A series of techniques and procedures which includes data quality control, time-series data smoothing, and simple data analysis is applied. The LAI analysis is an optimal combination of the MODIS observations and derived climatology, depending on their associated errors σo and σc. The “best estimate” LAI is derived from a simple three-point smoothing technique combined with a selection of maximum LAI (after data quality control) values to ensure a higher quality. The LAI climatology is a time smoothed mean value of the “best estimate” LAI during the years of 2002–2004. The observation error is obtained by comparing the MODIS observed LAI with the “best estimate” of the LAI, and the climatological error is obtained by comparing the “best estimate” of LAI with the climatological LAI value. The LAI analysis is the result of a weighting between these two errors. Demonstration of the method described in this paper is presented for the 15-km grid of Meteorological Service of Canada (MSC)'s regional version of the numerical weather prediction model. The final LAI analyses have a relatively smooth temporal evolution, which makes them more appropriate for environmental prediction than the original MODIS LAI observation data. They are also more realistic than the LAI data currently used operationally at the MSC which is based on land-cover databases.
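The weighting between observation and climatology described above reduces, for a single grid point, to the standard scalar optimal-interpolation update; the sketch below is a generic illustration under that assumption, not the authors' operational code.

```python
def lai_analysis(lai_obs, lai_clim, sigma_o, sigma_c):
    """Scalar optimal interpolation: weight the observation and the climatology
    inversely to their error variances (generic OI form assumed here)."""
    gain = sigma_c**2 / (sigma_c**2 + sigma_o**2)
    return lai_clim + gain * (lai_obs - lai_clim)

# Example: a noisy MODIS LAI retrieval pulled toward the climatology
print(lai_analysis(lai_obs=3.8, lai_clim=2.9, sigma_o=1.0, sigma_c=0.5))  # 3.08
```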
The Accuracy of Webcams in 2D Motion Analysis: Sources of Error and Their Control
ERIC Educational Resources Information Center
Page, A.; Moreno, R.; Candelas, P.; Belmar, F.
2008-01-01
In this paper, we show the potential of webcams as precision measuring instruments in a physics laboratory. Various sources of error appearing in 2D coordinate measurements using low-cost commercial webcams are discussed, quantifying their impact on accuracy and precision, and simple procedures to control these sources of error are presented.…
Bootstrap Methods: A Very Leisurely Look.
ERIC Educational Resources Information Center
Hinkle, Dennis E.; Winstead, Wayland H.
The Bootstrap method, a computer-intensive statistical method of estimation, is illustrated using a simple and efficient Statistical Analysis System (SAS) routine. The utility of the method for generating unknown parameters, including standard errors for simple statistics, regression coefficients, discriminant function coefficients, and factor…
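The abstract's SAS routine is not reproduced here, but the idea is compact enough to sketch in a few lines of Python: resample the data with replacement, recompute the statistic, and take the standard deviation of the replicates as the standard error.

```python
import numpy as np

rng = np.random.default_rng(0)

def bootstrap_se(data, stat, n_boot=2000):
    """Bootstrap standard error of `stat`: resample rows with replacement."""
    data = np.asarray(data)
    reps = [stat(data[rng.integers(0, len(data), len(data))]) for _ in range(n_boot)]
    return np.std(reps, ddof=1)

x = rng.normal(10.0, 2.0, size=50)
print(bootstrap_se(x, np.mean))          # close to the analytic SE of the mean
print(x.std(ddof=1) / np.sqrt(len(x)))
```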
Analysis of case-only studies accounting for genotyping error.
Cheng, K F
2007-03-01
The case-only design provides one approach to assess possible interactions between genetic and environmental factors. It has been shown that if these factors are conditionally independent, then a case-only analysis is not only valid but also very efficient. However, a drawback of the case-only approach is that its conclusions may be biased by genotyping errors. In this paper, our main aim is to propose a method for analysis of case-only studies when these errors occur. We show that the bias can be adjusted through the use of internal validation data, which are obtained by genotyping some sampled individuals twice. Our analysis is based on a simple and yet highly efficient conditional likelihood approach. Simulation studies considered in this paper confirm that the new method has acceptable performance under genotyping errors.
Hartmann Testing of X-Ray Telescopes
NASA Technical Reports Server (NTRS)
Saha, Timo T.; Biskasch, Michael; Zhang, William W.
2013-01-01
Hartmann testing of x-ray telescopes is a simple test method to retrieve and analyze alignment errors and low-order circumferential errors of x-ray telescopes and their components. A narrow slit is scanned along the circumference of the telescope in front of the mirror and the centroids of the images are calculated. From the centroid data, alignment errors, radius variation errors, and cone-angle variation errors can be calculated. Mean cone angle, mean radial height (average radius), and the focal length of the telescope can also be estimated if the centroid data is measured at multiple focal plane locations. In this paper we present the basic equations that are used in the analysis process. These equations can be applied to full circumference or segmented x-ray telescopes. We use the Optical Surface Analysis Code (OSAC) to model a segmented x-ray telescope and show that the derived equations and accompanying analysis retrieves the alignment errors and low order circumferential errors accurately.
More Powerful Tests of Simple Interaction Contrasts in the Two-Way Factorial Design
ERIC Educational Resources Information Center
Hancock, Gregory R.; McNeish, Daniel M.
2017-01-01
For the two-way factorial design in analysis of variance, the current article explicates and compares three methods for controlling the Type I error rate for all possible simple interaction contrasts following a statistically significant interaction, including a proposed modification to the Bonferroni procedure that increases the power of…
Troutman, Brent M.
1982-01-01
Errors in runoff prediction caused by input data errors are analyzed by treating precipitation-runoff models as regression (conditional expectation) models. Independent variables of the regression consist of precipitation and other input measurements; the dependent variable is runoff. In models using erroneous input data, prediction errors are inflated and estimates of expected storm runoff for given observed input variables are biased. This bias in expected runoff estimation results in biased parameter estimates if these parameter estimates are obtained by a least squares fit of predicted to observed runoff values. The problems of error inflation and bias are examined in detail for a simple linear regression of runoff on rainfall and for a nonlinear U.S. Geological Survey precipitation-runoff model. Some implications for flood frequency analysis are considered. A case study using a set of data from Turtle Creek near Dallas, Texas illustrates the problems of model input errors.
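A toy errors-in-variables simulation (a generic illustration, not the USGS model) shows the two effects described above: regressing runoff on rainfall measured with error both attenuates (biases) the fitted slope and inflates the prediction error.

```python
import numpy as np

rng = np.random.default_rng(1)
n, true_slope = 2000, 0.6

rainfall = rng.gamma(shape=2.0, scale=20.0, size=n)       # "true" storm rainfall
runoff = true_slope * rainfall + rng.normal(0.0, 5.0, n)  # simple linear runoff model
rainfall_meas = rainfall + rng.normal(0.0, 15.0, n)       # erroneous input data

for label, x in [("true input", rainfall), ("measured input", rainfall_meas)]:
    slope, intercept = np.polyfit(x, runoff, 1)
    rmse = np.std(runoff - (slope * x + intercept))
    print(f"{label:15s} slope={slope:.3f}  prediction RMSE={rmse:.2f}")
# The erroneous input gives a smaller (biased) slope and a larger prediction error.
```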
Small, J R
1993-01-01
This paper is a study into the effects of experimental error on the estimated values of flux control coefficients obtained using specific inhibitors. Two possible techniques for analysing the experimental data are compared: a simple extrapolation method (the so-called graph method) and a non-linear function fitting method. For these techniques, the sources of systematic errors are identified and the effects of systematic and random errors are quantified, using both statistical analysis and numerical computation. It is shown that the graph method is very sensitive to random errors and, under all conditions studied, that the fitting method, even under conditions where the assumptions underlying the fitted function do not hold, outperformed the graph method. Possible ways of designing experiments to minimize the effects of experimental errors are analysed and discussed. PMID:8257434
When Practice Doesn't Lead to Retrieval: An Analysis of Children's Errors with Simple Addition
ERIC Educational Resources Information Center
de Villiers, Celéste; Hopkins, Sarah
2013-01-01
Counting strategies initially used by young children to perform simple addition are often replaced by more efficient counting strategies, decomposition strategies and rule-based strategies until most answers are encoded in memory and can be directly retrieved. Practice is thought to be the key to developing fluent retrieval of addition facts. This…
An Empirical State Error Covariance Matrix for Batch State Estimation
NASA Technical Reports Server (NTRS)
Frisbee, Joseph H., Jr.
2011-01-01
State estimation techniques serve effectively to provide mean state estimates. However, the state error covariance matrices provided as part of these techniques suffer from some lack of confidence in their ability to adequately describe the uncertainty in the estimated states. A specific problem with the traditional form of state error covariance matrices is that they represent only a mapping of the assumed observation error characteristics into the state space. Any errors that arise from other sources (environment modeling, precision, etc.) are not directly represented in a traditional, theoretical state error covariance matrix. Consider that an actual observation contains only measurement error and that an estimated observation contains all other errors, known and unknown. It then follows that a measurement residual (the difference between expected and observed measurements) contains all errors for that measurement. Therefore, a direct and appropriate inclusion of the actual measurement residuals in the state error covariance matrix will result in an empirical state error covariance matrix. This empirical state error covariance matrix will fully account for the error in the state estimate. By way of a literal reinterpretation of the equations involved in the weighted least squares estimation algorithm, it is possible to arrive at an appropriate, and formally correct, empirical state error covariance matrix. The first specific step of the method is to use the average form of the weighted measurement residual variance performance index rather than its usual total weighted residual form. Next, it is helpful to interpret the solution to the normal equations as the average of a collection of sample vectors drawn from a hypothetical parent population. From here, using a standard statistical analysis approach, it directly follows how to determine the standard empirical state error covariance matrix. This matrix will contain the total uncertainty in the state estimate, regardless of the source of the uncertainty. Also, in its most straightforward form, the technique only requires supplemental calculations to be added to existing batch algorithms. The generation of this direct, empirical form of the state error covariance matrix is independent of the dimensionality of the observations. Mixed degrees of freedom for an observation set are allowed. As is the case with any simple empirical sample variance problem, the presented approach offers an opportunity (at least in the case of weighted least squares) to investigate confidence interval estimates for the error covariance matrix elements. The diagonal or variance terms of the error covariance matrix have a particularly simple form to associate with either a multiple degree of freedom chi-square distribution (more approximate) or with a gamma distribution (less approximate). The off diagonal or covariance terms of the matrix are less clear in their statistical behavior. However, the off diagonal covariance matrix elements still lend themselves to standard confidence interval error analysis. The distributional forms associated with the off diagonal terms are more varied and, perhaps, more approximate than those associated with the diagonal terms. Using a simple weighted least squares sample problem, results obtained through use of the proposed technique are presented. The example consists of a simple, two observer, triangulation problem with range-only measurements.
Variations of this problem reflect an ideal case (perfect knowledge of the range errors) and a mismodeled case (incorrect knowledge of the range errors).
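As a rough sketch of the general idea (not the author's exact reinterpretation of the normal equations), one common way to let the residuals inform the covariance is to scale the theoretical weighted-least-squares covariance by the average weighted residual variance; the specific scaling used below is an assumption for illustration only.

```python
import numpy as np

def wls_with_empirical_covariance(H, y, R):
    """Weighted least squares plus a residual-scaled ("empirical") covariance.

    H : (m, n) measurement matrix, y : (m,) observations,
    R : (m, m) assumed observation error covariance.
    """
    W = np.linalg.inv(R)
    P_theory = np.linalg.inv(H.T @ W @ H)   # maps assumed observation errors only
    x_hat = P_theory @ (H.T @ W @ y)
    r = y - H @ x_hat                       # residuals carry all remaining errors
    m, n = H.shape
    scale = (r @ W @ r) / (m - n)           # average weighted residual variance
    return x_hat, P_theory, scale * P_theory
```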
NASA Astrophysics Data System (ADS)
Kaggwa, G. B.; Kilpatrick, J. I.; Sader, J. E.; Jarvis, S. P.
2008-07-01
We present definitive interaction measurements of a simple confined liquid (octamethylcyclotetrasiloxane) using artifact-free frequency modulation atomic force microscopy. We use existing theory to decouple the conservative and dissipative components of the interaction for a known phase offset from resonance (90° phase shift) that has been deliberately introduced into the experiment. Further, we show the qualitative influence on the conservative and dissipative components of the interaction of a phase error deliberately introduced into the measurement, highlighting that artifacts, such as oscillatory dissipation, can be readily observed when the phase error is not compensated for in the force analysis.
Error analysis of stochastic gradient descent ranking.
Chen, Hong; Tang, Yi; Li, Luoqing; Yuan, Yuan; Li, Xuelong; Tang, Yuanyan
2013-06-01
Ranking is always an important task in machine learning and information retrieval, e.g., collaborative filtering, recommender systems, drug discovery, etc. A kernel-based stochastic gradient descent algorithm with the least squares loss is proposed for ranking in this paper. The implementation of this algorithm is simple, and an expression of the solution is derived via a sampling operator and an integral operator. An explicit convergence rate for learning a ranking function is given in terms of the suitable choices of the step size and the regularization parameter. The analysis technique used here is capacity independent and is novel in error analysis of ranking learning. Experimental results on real-world data have shown the effectiveness of the proposed algorithm in ranking tasks, which verifies the theoretical analysis in ranking error.
Exploring Measurement Error with Cookies: A Real and Virtual Approach via Interactive Excel
ERIC Educational Resources Information Center
Sinex, Scott A; Gage, Barbara A.; Beck, Peggy J.
2007-01-01
A simple, guided-inquiry investigation using stacked sandwich cookies is employed to develop a simple linear mathematical model and to explore measurement error by incorporating errors as part of the investigation. Both random and systematic errors are presented. The model and errors are then investigated further by engaging with an interactive…
A simple randomisation procedure for validating discriminant analysis: a methodological note.
Wastell, D G
1987-04-01
Because the goal of discriminant analysis (DA) is to optimise classification, it designedly exaggerates between-group differences. This bias complicates validation of DA. Jack-knifing has been used for validation but is inappropriate when stepwise selection (SWDA) is employed. A simple randomisation test is presented which is shown to give correct decisions for SWDA. The general superiority of randomisation tests over orthodox significance tests is discussed. Current work on non-parametric methods of estimating the error rates of prediction rules is briefly reviewed.
ERIC Educational Resources Information Center
Brown, Simon
2009-01-01
Many students have some difficulty with calculations. Simple dimensional analysis provides a systematic means of checking for errors and inconsistencies and for developing both new insight and new relationships between variables. Teaching dimensional analysis at even the most basic level strengthens the insight and confidence of students, and…
Barton, Lorna; Futtermenger, Judith; Gaddi, Yash; Kang, Angela; Rivers, Jon; Spriggs, David; Jenkins, Paul F; Thompson, Campbell H; Thomas, Josephine S
2012-04-01
This study aimed to quantify and compare the prevalence of simple prescribing errors made by clinicians in the first 24 hours of a general medical patient's hospital admission. Four public or private acute care hospitals across Australia and New Zealand each audited 200 patients' drug charts. Patient demographics, pharmacist review and pre-defined prescribing errors were recorded. At least one simple error was present on the medication charts of 672/715 patients, with a linear relationship between the number of medications prescribed and the number of errors (r = 0.571, p < 0.001). The four sites differed significantly in the prevalence of different types of simple prescribing errors. Pharmacists were more likely to review patients aged ≥ 75 years (39.9% vs 26.0%; p < 0.001) and those with more than 10 drug prescriptions (39.4% vs 25.7%; p < 0.001). Patients reviewed by a pharmacist were less likely to have inadequate documentation of allergies (13.5% vs 29.4%, p < 0.001). Simple prescribing errors are common, although their nature differs from site to site. Clinical pharmacists target patients with the most complex health situations, and their involvement leads to improved documentation.
A simple finite element method for linear hyperbolic problems
Mu, Lin; Ye, Xiu
2017-09-14
Here, we introduce a simple finite element method for solving first order hyperbolic equations with easy implementation and analysis. Our new method, with a symmetric, positive definite system, is designed to use discontinuous approximations on finite element partitions consisting of polygons/polyhedra of arbitrary shape. An error estimate is established. Extensive numerical examples are tested that demonstrate the robustness and flexibility of the method.
A simple finite element method for linear hyperbolic problems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mu, Lin; Ye, Xiu
Here, we introduce a simple finite element method for solving first order hyperbolic equations with easy implementation and analysis. Our new method, with a symmetric, positive definite system, is designed to use discontinuous approximations on finite element partitions consisting of polygons/polyhedra of arbitrary shape. An error estimate is established. Extensive numerical examples are tested that demonstrate the robustness and flexibility of the method.
Schwantes-An, Tae-Hwi; Sung, Heejong; Sabourin, Jeremy A; Justice, Cristina M; Sorant, Alexa J M; Wilson, Alexander F
2016-01-01
In this study, the effects of (a) the minor allele frequency of the single nucleotide variant (SNV), (b) the degree of departure from normality of the trait, and (c) the position of the SNVs on type I error rates were investigated in the Genetic Analysis Workshop (GAW) 19 whole exome sequence data. To test the distribution of the type I error rate, 5 simulated traits were considered: standard normal and gamma distributed traits; 2 transformed versions of the gamma trait (log 10 and rank-based inverse normal transformations); and trait Q1 provided by GAW 19. Each trait was tested with 313,340 SNVs. Tests of association were performed with simple linear regression and average type I error rates were determined for minor allele frequency classes. Rare SNVs (minor allele frequency < 0.05) showed inflated type I error rates for non-normally distributed traits that increased as the minor allele frequency decreased. The inflation of average type I error rates increased as the significance threshold decreased. Normally distributed traits did not show inflated type I error rates with respect to the minor allele frequency for rare SNVs. There was no consistent effect of transformation on the uniformity of the distribution of the location of SNVs with a type I error.
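A small simulation in the spirit of the analysis above (a hedged sketch with arbitrary choices of sample size, minor allele frequency and trait distributions, not the GAW 19 data) illustrates the reported inflation for rare variants with a skewed trait.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n, maf, alpha, n_sims = 1000, 0.005, 0.01, 5000

def type1_rate(trait_sampler):
    hits = tests = 0
    while tests < n_sims:
        g = rng.binomial(2, maf, n)          # rare SNV genotypes, no true effect
        if g.max() == g.min():               # skip degenerate all-zero draws
            continue
        y = trait_sampler(n)                 # trait simulated under the null
        p = stats.linregress(g, y).pvalue    # simple linear regression test
        hits += p < alpha
        tests += 1
    return hits / tests

print("normal trait:", type1_rate(lambda n: rng.normal(size=n)))
print("gamma trait :", type1_rate(lambda n: rng.gamma(0.5, size=n)))
# The skewed trait typically shows an inflated type I error rate for the rare SNV.
```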
Huh, Yeamin; Smith, David E.; Feng, Meihau Rose
2014-01-01
Human clearance prediction for small- and macro-molecule drugs was evaluated and compared using various scaling methods and statistical analysis. Human clearance is generally well predicted using single or multiple species simple allometry for macro- and small-molecule drugs excreted renally. The prediction error is higher for hepatically eliminated small-molecules using single or multiple species simple allometry scaling, and it appears that the prediction error is mainly associated with drugs with low hepatic extraction ratio (Eh). The error in human clearance prediction for hepatically eliminated small-molecules was reduced using scaling methods with a correction of maximum life span (MLP) or brain weight (BRW). Human clearance of both small- and macro-molecule drugs is well predicted using the monkey liver blood flow method. Predictions using liver blood flow from other species did not work as well, especially for the small-molecule drugs. PMID:21892879
Brannon, Timothy S
2006-01-01
Continuous infusion intravenous (IV) drugs in neonatal intensive care are usually prepared based on patient weight so that the dose is readable as a simple multiple of the infusion pump rate. New safety guidelines propose that hospitals switch to using standardized admixtures of these drugs to prevent calculation errors during ad hoc preparation. Extended hierarchical task analysis suggests that switching to standardized admixtures may lead to more errors in programming the pump at the bedside.
Brannon, Timothy S.
2006-01-01
Continuous infusion intravenous (IV) drugs in neonatal intensive care are usually prepared based on patient weight so that the dose is readable as a simple multiple of the infusion pump rate. New safety guidelines propose that hospitals switch to using standardized admixtures of these drugs to prevent calculation errors during ad hoc preparation. Extended hierarchical task analysis suggests that switching to standardized admixtures may lead to more errors in programming the pump at the bedside. PMID:17238482
Wald Sequential Probability Ratio Test for Analysis of Orbital Conjunction Data
NASA Technical Reports Server (NTRS)
Carpenter, J. Russell; Markley, F. Landis; Gold, Dara
2013-01-01
We propose a Wald Sequential Probability Ratio Test for analysis of commonly available predictions associated with spacecraft conjunctions. Such predictions generally consist of a relative state and relative state error covariance at the time of closest approach, under the assumption that prediction errors are Gaussian. We show that under these circumstances, the likelihood ratio of the Wald test reduces to an especially simple form, involving the current best estimate of collision probability, and a similar estimate of collision probability that is based on prior assumptions about the likelihood of collision.
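A generic Wald SPRT skeleton (a hedged sketch; the paper's specific likelihood-ratio form for conjunction assessment is not reproduced here) accumulates log-likelihood-ratio increments and stops at the classic Wald thresholds.

```python
import numpy as np

def wald_sprt(log_lr_increments, alpha=0.01, beta=0.01):
    """Generic Wald SPRT: accumulate log-likelihood ratios of H1 vs H0 and stop
    at the thresholds A = (1 - beta)/alpha (accept H1) and B = beta/(1 - alpha)."""
    upper, lower = np.log((1 - beta) / alpha), np.log(beta / (1 - alpha))
    llr, k = 0.0, 0
    for inc in log_lr_increments:
        k += 1
        llr += inc
        if llr >= upper:
            return "accept H1", k
        if llr <= lower:
            return "accept H0", k
    return "continue sampling", k

# Toy example: Gaussian data with unit variance, H0: mean 0 vs H1: mean 1,
# so each sample contributes a log-likelihood-ratio increment of x - 0.5.
x = np.random.default_rng(3).normal(1.0, 1.0, size=200)
print(wald_sprt(x - 0.5))   # typically accepts H1 after a handful of samples
```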
Assessing Gaussian Assumption of PMU Measurement Error Using Field Data
Wang, Shaobu; Zhao, Junbo; Huang, Zhenyu; ...
2017-10-13
Gaussian PMU measurement error has been assumed for many power system applications, such as state estimation, oscillatory mode monitoring and voltage stability analysis, to name a few. This letter proposes a simple yet effective approach to assess this assumption by using the stability property of a probability distribution and the concept of redundant measurement. Extensive results using field PMU data from the WECC system reveal that the Gaussian assumption is questionable.
Error analysis and prevention of cosmic ion-induced soft errors in static CMOS RAMs
NASA Astrophysics Data System (ADS)
Diehl, S. E.; Ochoa, A., Jr.; Dressendorfer, P. V.; Koga, P.; Kolasinski, W. A.
1982-12-01
Cosmic ray interactions with memory cells are known to cause temporary, random, bit errors in some designs. The sensitivity of polysilicon gate CMOS static RAM designs to logic upset by impinging ions has been studied using computer simulations and experimental heavy ion bombardment. Results of the simulations are confirmed by experimental upset cross-section data. Analytical models have been extended to determine and evaluate design modifications which reduce memory cell sensitivity to cosmic ions. A simple design modification, the addition of decoupling resistance in the feedback path, is shown to produce static RAMs immune to cosmic ray-induced bit errors.
NASA Technical Reports Server (NTRS)
Whitmore, Stephen A.; Moes, Timothy R.
1994-01-01
Presented is a feasibility and error analysis for a hypersonic flush airdata system on a hypersonic flight experiment (HYFLITE). HYFLITE heating loads make intrusive airdata measurement impractical. Although this analysis is specifically for the HYFLITE vehicle and trajectory, the problems analyzed are generally applicable to hypersonic vehicles. A layout of the flush-port matrix is shown. Surface pressures are related to airdata parameters using a simple aerodynamic model. The model is linearized using small perturbations and inverted using nonlinear least-squares. Effects of various error sources on the overall uncertainty are evaluated using an error simulation. Error sources modeled include boundary-layer/viscous interactions, pneumatic lag, thermal transpiration in the sensor pressure tubing, misalignment in the matrix layout, thermal warping of the vehicle nose, sampling resolution, and transducer error. Using simulated pressure data for input to the estimation algorithm, effects caused by various error sources are analyzed by comparing estimator outputs with the original trajectory. To obtain ensemble averages, the simulation is run repeatedly and output statistics are compiled. Output errors resulting from the various error sources are presented as a function of Mach number. Final uncertainties with all modeled error sources included are presented as a function of Mach number.
Lock-in amplifier error prediction and correction in frequency sweep measurements.
Sonnaillon, Maximiliano Osvaldo; Bonetto, Fabian Jose
2007-01-01
This article proposes an analytical algorithm for predicting errors in lock-in amplifiers (LIAs) working with time-varying reference frequency. Furthermore, a simple method for correcting such errors is presented. The reference frequency can be swept in order to measure the frequency response of a system within a given spectrum. The continuous variation of the reference frequency produces a measurement error that depends on three factors: the sweep speed, the LIA low-pass filters, and the frequency response of the measured system. The proposed error prediction algorithm is based on the final value theorem of the Laplace transform. The correction method uses a double-sweep measurement. A mathematical analysis is presented and validated with computational simulations and experimental measurements.
Xu, Chonggang; Gertner, George
2013-01-01
Fourier Amplitude Sensitivity Test (FAST) is one of the most popular uncertainty and sensitivity analysis techniques. It uses a periodic sampling approach and a Fourier transformation to decompose the variance of a model output into partial variances contributed by different model parameters. Until now, the FAST analysis is mainly confined to the estimation of partial variances contributed by the main effects of model parameters, but does not allow for those contributed by specific interactions among parameters. In this paper, we theoretically show that FAST analysis can be used to estimate partial variances contributed by both main effects and interaction effects of model parameters using different sampling approaches (i.e., traditional search-curve based sampling, simple random sampling and random balance design sampling). We also analytically calculate the potential errors and biases in the estimation of partial variances. Hypothesis tests are constructed to reduce the effect of sampling errors on the estimation of partial variances. Our results show that compared to simple random sampling and random balance design sampling, sensitivity indices (ratios of partial variances to variance of a specific model output) estimated by search-curve based sampling generally have higher precision but larger underestimations. Compared to simple random sampling, random balance design sampling generally provides higher estimation precision for partial variances contributed by the main effects of parameters. The theoretical derivation of partial variances contributed by higher-order interactions and the calculation of their corresponding estimation errors in different sampling schemes can help us better understand the FAST method and provide a fundamental basis for FAST applications and further improvements. PMID:24143037
Xu, Chonggang; Gertner, George
2011-01-01
Fourier Amplitude Sensitivity Test (FAST) is one of the most popular uncertainty and sensitivity analysis techniques. It uses a periodic sampling approach and a Fourier transformation to decompose the variance of a model output into partial variances contributed by different model parameters. Until now, the FAST analysis is mainly confined to the estimation of partial variances contributed by the main effects of model parameters, but does not allow for those contributed by specific interactions among parameters. In this paper, we theoretically show that FAST analysis can be used to estimate partial variances contributed by both main effects and interaction effects of model parameters using different sampling approaches (i.e., traditional search-curve based sampling, simple random sampling and random balance design sampling). We also analytically calculate the potential errors and biases in the estimation of partial variances. Hypothesis tests are constructed to reduce the effect of sampling errors on the estimation of partial variances. Our results show that compared to simple random sampling and random balance design sampling, sensitivity indices (ratios of partial variances to variance of a specific model output) estimated by search-curve based sampling generally have higher precision but larger underestimations. Compared to simple random sampling, random balance design sampling generally provides higher estimation precision for partial variances contributed by the main effects of parameters. The theoretical derivation of partial variances contributed by higher-order interactions and the calculation of their corresponding estimation errors in different sampling schemes can help us better understand the FAST method and provide a fundamental basis for FAST applications and further improvements.
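A compact implementation of classic first-order FAST (a sketch of the standard search-curve algorithm, not the authors' extended method; the driving frequencies below are assumed interference-free up to the fourth harmonic) decomposes the output variance from the Fourier coefficients computed along the search curve.

```python
import numpy as np

def fast_first_order(model, freqs, n_samples=1025, harmonics=4):
    """Classic FAST first-order sensitivity indices.

    model : callable taking an (n_samples, k) array of inputs in [0, 1]
    freqs : one integer driving frequency per parameter, assumed free of
            interferences up to `harmonics`
    """
    freqs = np.asarray(freqs, dtype=int)
    s = np.pi * (2.0 * np.arange(1, n_samples + 1) - n_samples - 1) / n_samples
    # Search curve x_i(s) = 1/2 + arcsin(sin(w_i * s)) / pi maps s into [0, 1]
    x = 0.5 + np.arcsin(np.sin(np.outer(s, freqs))) / np.pi
    y = model(x)
    # Fourier coefficients of the model output along the search curve
    p = np.arange(1, (n_samples - 1) // 2 + 1)
    A = np.cos(np.outer(p, s)) @ y / n_samples
    B = np.sin(np.outer(p, s)) @ y / n_samples
    total_var = 2.0 * np.sum(A**2 + B**2)
    S = np.empty(len(freqs))
    for i, w in enumerate(freqs):
        idx = w * np.arange(1, harmonics + 1) - 1     # harmonics of w_i
        S[i] = 2.0 * np.sum(A[idx]**2 + B[idx]**2) / total_var
    return S

# Toy model y = x1 + 3*x2 with x_i ~ U(0, 1): analytic indices are [0.1, 0.9]
print(fast_first_order(lambda x: x @ np.array([1.0, 3.0]), freqs=[11, 35]))
```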
1983-03-01
Fragments recovered from this report include flow-chart listings (Decision Tree; PACKAGE unitrep Action/Area Selection Flow Chart; PACKAGE unitrep Control Flow Chart), a design alternative in which the originator would manually draft simple, readable, formatted messages using predefined forms and decision logic trees, and a study-analysis table of data content errors as a percentage of all errors (Character Type 2.1; Calculations/Associations 14.3; Message Identification 4.?; Value Mismatch 22.?).
Sensorimotor Learning of Acupuncture Needle Manipulation Using Visual Feedback
Jung, Won-Mo; Lim, Jinwoong; Lee, In-Seon; Park, Hi-Joon; Wallraven, Christian; Chae, Younbyoung
2015-01-01
Objective: Humans can acquire a wide variety of motor skills using sensory feedback pertaining to discrepancies between intended and actual movements. Acupuncture needle manipulation involves sophisticated hand movements and represents a fundamental skill for acupuncturists. We investigated whether untrained students could improve their motor performance during acupuncture needle manipulation using visual feedback (VF). Methods: Twenty-one untrained medical students were included, randomly divided into concurrent (n = 10) and post-trial (n = 11) VF groups. Both groups were trained in simple lift/thrusting techniques during session 1, and in complicated lift/thrusting techniques in session 2 (eight training trials per session). We compared the motion patterns and error magnitudes of pre- and post-training tests. Results: During motion pattern analysis, both the concurrent and post-trial VF groups exhibited greater improvements in motion patterns during the complicated lifting/thrusting session. In the magnitude error analysis, both groups also exhibited reduced error magnitudes during the simple lifting/thrusting session. For the training period, the concurrent VF group exhibited reduced error magnitudes across all training trials, whereas the post-trial VF group was characterized by greater error magnitudes during initial trials, which gradually reduced during later trials. Conclusions: Our findings suggest that novices can improve the sophisticated hand movements required for acupuncture needle manipulation using sensorimotor learning with VF. Use of two types of VF can be beneficial for untrained students in terms of learning how to manipulate acupuncture needles, using either automatic or cognitive processes. PMID:26406248
A simple finite element method for the Stokes equations
Mu, Lin; Ye, Xiu
2017-03-21
The goal of this paper is to introduce a simple finite element method to solve the Stokes equations. This method is in primal velocity-pressure formulation and is so simple that both velocity and pressure are approximated by piecewise constant functions. Implementation issues as well as error analysis are investigated. A basis for a divergence free subspace of the velocity field is constructed so that the original saddle point problem can be reduced to a symmetric and positive definite system with much fewer unknowns. The numerical experiments indicate that the method is accurate.
A simple finite element method for the Stokes equations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mu, Lin; Ye, Xiu
The goal of this paper is to introduce a simple finite element method to solve the Stokes equations. This method is in primal velocity-pressure formulation and is so simple that both velocity and pressure are approximated by piecewise constant functions. Implementation issues as well as error analysis are investigated. A basis for a divergence free subspace of the velocity field is constructed so that the original saddle point problem can be reduced to a symmetric and positive definite system with much fewer unknowns. The numerical experiments indicate that the method is accurate.
Accuracy limitations of hyperbolic multilateration systems
DOT National Transportation Integrated Search
1973-03-22
The report is an analysis of the accuracy limitations of hyperbolic multilateration systems. A central result is a demonstration that the inverse of the covariance matrix for positional errors corresponds to the moment of inertia matrix of a simple m...
A new SAS program for behavioral analysis of Electrical Penetration Graph (EPG) data
USDA-ARS?s Scientific Manuscript database
A new program is introduced that uses SAS software to duplicate output of descriptive statistics from the Sarria Excel workbook for EPG waveform analysis. Not only are publishable means and standard errors or deviations output, but the user is also guided through four relatively simple sub-programs for ...
Popa, Laurentiu S.; Hewitt, Angela L.; Ebner, Timothy J.
2012-01-01
The cerebellum has been implicated in processing motor errors required for online control of movement and motor learning. The dominant view is that Purkinje cell complex spike discharge signals motor errors. This study investigated whether errors are encoded in the simple spike discharge of Purkinje cells in monkeys trained to manually track a pseudo-randomly moving target. Four task error signals were evaluated based on cursor movement relative to target movement. Linear regression analyses based on firing residuals ensured that the modulation with a specific error parameter was independent of the other error parameters and kinematics. The results demonstrate that simple spike firing in lobules IV–VI is significantly correlated with position, distance and directional errors. Independent of the error signals, the same Purkinje cells encode kinematics. The strongest error modulation occurs at feedback timing. However, in 72% of cells at least one of the R2 temporal profiles resulting from regressing firing with individual errors exhibit two peak R2 values. For these bimodal profiles, the first peak is at a negative τ (lead) and a second peak at a positive τ (lag), implying that Purkinje cells encode both prediction and feedback about an error. For the majority of the bimodal profiles, the signs of the regression coefficients or preferred directions reverse at the times of the peaks. The sign reversal results in opposing simple spike modulation for the predictive and feedback components. Dual error representations may provide the signals needed to generate sensory prediction errors used to update a forward internal model. PMID:23115173
Analysis and Calibration of Sources of Electronic Error in PSD Sensor Response.
Rodríguez-Navarro, David; Lázaro-Galilea, José Luis; Bravo-Muñoz, Ignacio; Gardel-Vicente, Alfredo; Tsirigotis, Georgios
2016-04-29
In order to obtain very precise measurements of the position of agents located at a considerable distance using a sensor system based on position sensitive detectors (PSD), it is necessary to analyze and mitigate the factors that generate substantial errors in the system's response. These sources of error can be divided into electronic and geometric factors. The former stem from the nature and construction of the PSD as well as the performance, tolerances and electronic response of the system, while the latter are related to the sensor's optical system. Here, we focus solely on the electrical effects, since the study, analysis and correction of these are a prerequisite for subsequently addressing geometric errors. A simple calibration method is proposed, which considers PSD response, component tolerances, temperature variations, signal frequency used, signal to noise ratio (SNR), suboptimal operational amplifier parameters, and analog to digital converter (ADC) quantitation SNRQ, etc. Following an analysis of these effects and calibration of the sensor, it was possible to correct the errors, thus rendering the effects negligible, as reported in the results section.
Analysis and Calibration of Sources of Electronic Error in PSD Sensor Response
Rodríguez-Navarro, David; Lázaro-Galilea, José Luis; Bravo-Muñoz, Ignacio; Gardel-Vicente, Alfredo; Tsirigotis, Georgios
2016-01-01
In order to obtain very precise measurements of the position of agents located at a considerable distance using a sensor system based on position sensitive detectors (PSD), it is necessary to analyze and mitigate the factors that generate substantial errors in the system’s response. These sources of error can be divided into electronic and geometric factors. The former stem from the nature and construction of the PSD as well as the performance, tolerances and electronic response of the system, while the latter are related to the sensor’s optical system. Here, we focus solely on the electrical effects, since the study, analysis and correction of these are a prerequisite for subsequently addressing geometric errors. A simple calibration method is proposed, which considers PSD response, component tolerances, temperature variations, signal frequency used, signal to noise ratio (SNR), suboptimal operational amplifier parameters, and analog to digital converter (ADC) quantitation SNRQ, etc. Following an analysis of these effects and calibration of the sensor, it was possible to correct the errors, thus rendering the effects negligible, as reported in the results section. PMID:27136562
Peppas, Kostas P; Lazarakis, Fotis; Alexandridis, Antonis; Dangakis, Kostas
2012-08-01
In this Letter we investigate the error performance of multiple-input multiple-output free-space optical communication systems employing intensity modulation/direct detection and operating over strong atmospheric turbulence channels. Atmospheric-induced strong turbulence fading is modeled using the negative exponential distribution. For the considered system, an approximate yet accurate analytical expression for the average bit error probability is derived and an efficient method for its numerical evaluation is proposed. Numerically evaluated and computer simulation results are further provided to demonstrate the validity of the proposed mathematical analysis.
Use of autocorrelation scanning in DNA copy number analysis.
Zhang, Liangcai; Zhang, Li
2013-11-01
Data quality is a critical issue in the analyses of DNA copy number alterations obtained from microarrays. It is commonly assumed that copy number alteration data can be modeled as piecewise constant and the measurement errors of different probes are independent. However, these assumptions do not always hold in practice. In some published datasets, we find that measurement errors are highly correlated between probes that interrogate nearby genomic loci, and the piecewise-constant model does not fit the data well. The correlated errors cause problems in downstream analysis, leading to a large number of DNA segments falsely identified as having copy number gains and losses. We developed a simple tool, called autocorrelation scanning profile, to assess the dependence of measurement error between neighboring probes. Autocorrelation scanning profile can be used to check data quality and refine the analysis of DNA copy number data, which we demonstrate in some typical datasets.
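A minimal sketch of an autocorrelation scanning profile (an assumed implementation consistent with the description above, not the authors' code): the lag-1 autocorrelation of probe-level log-ratios is computed in a window sliding along the genome, so correlated measurement errors show up as elevated values.

```python
import numpy as np

def autocorrelation_scanning_profile(log_ratios, window=100, lag=1):
    """Lag-`lag` autocorrelation of probe log-ratios within a sliding window."""
    x = np.asarray(log_ratios, dtype=float)
    prof = np.full(x.size, np.nan)
    half = window // 2
    for i in range(half, x.size - half):
        w = x[i - half:i + half] - x[i - half:i + half].mean()
        denom = np.dot(w, w)
        if denom > 0:
            prof[i] = np.dot(w[:-lag], w[lag:]) / denom
    return prof

# Independent vs. correlated probe noise of similar variance
rng = np.random.default_rng(4)
white = rng.normal(size=2000)
ar1 = np.zeros(2000)
for i in range(1, 2000):
    ar1[i] = 0.6 * ar1[i - 1] + 0.8 * rng.normal()
print(np.nanmean(autocorrelation_scanning_profile(white)),   # near 0
      np.nanmean(autocorrelation_scanning_profile(ar1)))      # clearly positive
```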
Reanalysis, compatibility and correlation in analysis of modified antenna structures
NASA Technical Reports Server (NTRS)
Levy, R.
1989-01-01
A simple computational procedure is synthesized to process changes in the microwave-antenna pathlength-error measure when there are changes in the antenna structure model. The procedure employs structural modification reanalysis methods combined with new extensions of correlation analysis to provide the revised rms pathlength error. Mainframe finite-element-method processing of the structure model is required only for the initial unmodified structure, and elementary postprocessor computations develop and deal with the effects of the changes. Several illustrative computational examples are included. The procedure adapts readily to processing spectra of changes for parameter studies or sensitivity analyses.
Eigenvector method for umbrella sampling enables error analysis
Thiede, Erik H.; Van Koten, Brian; Weare, Jonathan; Dinner, Aaron R.
2016-01-01
Umbrella sampling efficiently yields equilibrium averages that depend on exploring rare states of a model by biasing simulations to windows of coordinate values and then combining the resulting data with physical weighting. Here, we introduce a mathematical framework that casts the step of combining the data as an eigenproblem. The advantage to this approach is that it facilitates error analysis. We discuss how the error scales with the number of windows. Then, we derive a central limit theorem for averages that are obtained from umbrella sampling. The central limit theorem suggests an estimator of the error contributions from individual windows, and we develop a simple and computationally inexpensive procedure for implementing it. We demonstrate this estimator for simulations of the alanine dipeptide and show that it emphasizes low free energy pathways between stable states in comparison to existing approaches for assessing error contributions. Our work suggests the possibility of using the estimator and, more generally, the eigenvector method for umbrella sampling to guide adaptation of the simulation parameters to accelerate convergence. PMID:27586912
NASA Astrophysics Data System (ADS)
Allen, J. Icarus; Holt, Jason T.; Blackford, Jerry; Proctor, Roger
2007-12-01
Marine systems models are becoming increasingly complex and sophisticated, but far too little attention has been paid to model errors and the extent to which model outputs actually relate to ecosystem processes. Here we describe the application of summary error statistics to a complex 3D model (POLCOMS-ERSEM) run for the period 1988-1989 in the southern North Sea utilising information from the North Sea Project, which collected a wealth of observational data. We demonstrate that to understand model data misfit and the mechanisms creating errors, we need to use a hierarchy of techniques, including simple correlations, model bias, model efficiency, binary discriminator analysis and the distribution of model errors to assess model errors spatially and temporally. We also demonstrate that a linear cost function is an inappropriate measure of misfit. This analysis indicates that the model has some skill for all variables analysed. A summary plot of model performance indicates that model performance deteriorates as we move through the ecosystem from the physics, to the nutrients and plankton.
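The summary statistics named above have standard textbook definitions that are easy to compute; the sketch below uses commonly used forms (Nash-Sutcliffe model efficiency and a cost function of mean absolute error scaled by the observational standard deviation), which may differ in detail from the paper's exact formulations.

```python
import numpy as np

def summary_error_stats(model, obs):
    """Simple model-observation misfit statistics for one variable."""
    model, obs = np.asarray(model, float), np.asarray(obs, float)
    bias = np.mean(model - obs)
    corr = np.corrcoef(model, obs)[0, 1]
    # Model efficiency (Nash-Sutcliffe): 1 = perfect, <= 0 = no better than the obs mean
    mef = 1.0 - np.sum((model - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)
    # Linear cost function: mean absolute error scaled by the observed std deviation
    cost = np.mean(np.abs(model - obs)) / np.std(obs)
    return {"bias": bias, "correlation": corr, "model_efficiency": mef, "cost": cost}
```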
On the impact of relatedness on SNP association analysis.
Gross, Arnd; Tönjes, Anke; Scholz, Markus
2017-12-06
When testing for SNP (single nucleotide polymorphism) associations in related individuals, observations are not independent. Simple linear regression assuming independent normally distributed residuals results in an increased type I error and the power of the test is also affected in a more complicated manner. Inflation of type I error is often successfully corrected by genomic control. However, this reduces the power of the test when relatedness is of concern. In the present paper, we derive explicit formulae to investigate how heritability and strength of relatedness contribute to variance inflation of the effect estimate of the linear model. Further, we study the consequences of variance inflation on hypothesis testing and compare the results with those of genomic control correction. We apply the developed theory to the publicly available HapMap trio data (N=129), the Sorbs (a self-contained population with N=977 characterised by a cryptic relatedness structure) and synthetic family studies with different sample sizes (ranging from N=129 to N=999) and different degrees of relatedness. We derive explicit and easy-to-apply approximation formulae to estimate the impact of relatedness on the variance of the effect estimate of the linear regression model. Variance inflation increases with increasing heritability. Relatedness structure also impacts the degree of variance inflation as shown for the example family structures. Variance inflation is smallest for HapMap trios, followed by a synthetic family study corresponding to the trio data but with larger sample size than HapMap. The next strongest inflation is observed for the Sorbs, and finally, for a synthetic family study with a more extreme relatedness structure but with a sample size similar to that of the Sorbs. Type I error increases rapidly with increasing inflation. However, for smaller significance levels, power increases with increasing inflation while the opposite holds for larger significance levels. When genomic control is applied, type I error is preserved while power decreases rapidly with increasing variance inflation. Stronger relatedness as well as higher heritability result in increased variance of the effect estimate of simple linear regression analysis. While type I error rates are generally inflated, the behaviour of power is more complex since power can be increased or reduced depending on relatedness and the heritability of the phenotype. Genomic control cannot be recommended to deal with inflation due to relatedness. Although it preserves type I error, the loss in power can be considerable. We provide a simple formula for estimating variance inflation given the relatedness structure and the heritability of a trait of interest. As a rule of thumb, variance inflation below 1.05 does not require correction and simple linear regression analysis is still appropriate.
The application of sensitivity analysis to models of large scale physiological systems
NASA Technical Reports Server (NTRS)
Leonard, J. I.
1974-01-01
A survey of the literature on sensitivity analysis as it applies to biological systems is reported, along with a brief development of sensitivity theory. A simple population model and a more complex thermoregulatory model illustrate the investigatory techniques and interpretation of parameter sensitivity analysis. The roles of sensitivity analysis in validating and verifying models, in identifying relative parameter influence, and in estimating errors in model behavior due to uncertainty in input data are presented. This analysis is valuable to the simulationist and the experimentalist in allocating resources for data collection. A method for reducing highly complex, nonlinear models to simple linear algebraic models that could be useful for making rapid, first-order calculations of system behavior is presented.
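The kind of parameter sensitivity analysis described above can be illustrated with a short sketch: normalized (fractional) sensitivity coefficients of a simple logistic population model, computed by central finite differences. The model, parameter values and step size are illustrative assumptions, not taken from the report.

```python
import numpy as np

def logistic_growth(t, r, K, n0):
    """Closed-form logistic population trajectory n(t)."""
    return K / (1.0 + (K / n0 - 1.0) * np.exp(-r * t))

def normalized_sensitivity(f, t, params, name, rel_step=1e-4):
    """(p / f) * df/dp by central finite differences: fractional change in the
    output per fractional change in one parameter."""
    p = dict(params)
    h = rel_step * abs(p[name])
    p[name] += h
    up = f(t, **p)
    p[name] -= 2 * h
    down = f(t, **p)
    base = f(t, **params)
    return (params[name] / base) * (up - down) / (2 * h)

params = {"r": 0.1, "K": 1000.0, "n0": 10.0}
t = np.linspace(0, 100, 6)
for name in params:
    print(name, np.round(normalized_sensitivity(logistic_growth, t, params, name), 3))
```

Ranking the coefficients over time shows which parameter dominates the output uncertainty at each stage of the trajectory, which is the basis for allocating data-collection effort.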
Fore, Amanda M; Sculli, Gary L; Albee, Doreen; Neily, Julia
2013-01-01
To implement the sterile cockpit principle to decrease interruptions and distractions during high-volume medication administration and reduce the number of medication errors. While some studies have described the importance of reducing interruptions as a tactic to reduce medication errors, work is needed to assess the impact on patient outcomes. Data regarding the type and frequency of distractions were collected during the first 11 weeks of implementation. Medication error rates were tracked for 1 year before and for 1 year after implementation. Simple regression analysis showed a decrease in the mean number of distractions (β = -0.193, P = 0.02) over time. The medication error rate decreased by 42.78% (P = 0.04) after implementation of the sterile cockpit principle. The use of crew resource management techniques, including the sterile cockpit principle, applied to medication administration has a significant impact on patient safety. Applying the sterile cockpit principle to inpatient medical units is a feasible approach to reduce the number of distractions during the administration of medication, thus reducing the likelihood of medication error. 'Do Not Disturb' signs and vests are inexpensive, simple interventions that can be used as reminders to decrease distractions. © 2012 Blackwell Publishing Ltd.
Omorczyk, Jarosław; Nosiadek, Leszek; Ambroży, Tadeusz; Nosiadek, Andrzej
2015-01-01
The main aim of this study was to verify the usefulness of selected simple methods of recording and fast biomechanical analysis performed by judges of artistic gymnastics in assessing a gymnast's movement technique. The study participants comprised six artistic gymnastics judges, who assessed back handsprings using two methods: a real-time observation method and a frame-by-frame video analysis method. They also determined flexion angles of the knee and hip joints using a computer program. In the case of the real-time observation method, the judges gave a total of 5.8 error points with an arithmetic mean of 0.16 points for the flexion of the knee joints. In the frame-by-frame video analysis method, the total amounted to 8.6 error points and the mean value amounted to 0.24 error points. For the excessive flexion of the hip joints, the sum of the error values was 2.2 error points and the arithmetic mean was 0.06 error points during real-time observation. The sum obtained using the frame-by-frame analysis method equaled 10.8 and the mean equaled 0.30 error points. Error values obtained through the frame-by-frame video analysis of movement technique were higher than those obtained through the real-time observation method. The judges were able to indicate the number of the frame in which the maximal joint flexion occurred with good accuracy. Both the real-time observation method and high-speed video analysis performed without determining the exact joint angles were found to be insufficient tools for improving the quality of judging.
Effect of lethality on the extinction and on the error threshold of quasispecies.
Tejero, Hector; Marín, Arturo; Montero, Francisco
2010-02-21
In this paper the effect of lethality on the error threshold and extinction has been studied in a population of error-prone self-replicating molecules. For given lethality and a simple fitness landscape, three dynamic regimes can be obtained: quasispecies, error catastrophe, and extinction. Using a simple model in which molecules are classified as master, lethal and non-lethal mutants, it is possible to obtain the mutation rates of the transitions between the three regimes analytically. The numerical solution of the extended model, in which molecules are classified depending on their Hamming distance to the master sequence, confirms the results obtained in the simple model and shows how the error catastrophe regime changes when lethality is taken into account. (c) 2009 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Khamukhin, A. A.
2017-02-01
Simple navigation algorithms are needed for small autonomous unmanned aerial vehicles (UAVs). These algorithms can be implemented in a small microprocessor with low power consumption. This will help to reduce the weight of the UAV's computing equipment and to increase the flight range. The proposed algorithm uses only the number of opaque channels (ommatidia in bees) through which a target can be seen when the observer moves from location 1 to location 2 toward the target. The distance estimate is given relative to the distance between locations 1 and 2. A simple scheme of an appositional compound eye is proposed to develop the calculation formula. The distance estimation error analysis shows that the error decreases with an increase of the total number of opaque channels up to a certain limit. An acceptable error of about 2% is achieved with an angle of view from 3 to 10° when the total number of opaque channels is 21600.
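A hedged reconstruction of the geometry suggests one plausible form of such an estimate (an assumption for illustration, not necessarily the paper's exact formula): if the target covers n1 channels at location 1 and n2 > n1 channels at location 2 after advancing a baseline b toward it, small-angle geometry gives the remaining distance as roughly b*n1/(n2-n1), i.e. relative to the baseline, and the quantization error shrinks as the total number of channels grows.

```python
import math

def relative_distance(n1, n2):
    """Distance from location 2 to the target, in units of the baseline moved
    between the two locations, assuming small angles and n2 > n1."""
    if n2 <= n1:
        raise ValueError("target must subtend more channels after the approach")
    return n1 / (n2 - n1)

def simulate(total_channels, target_width, d1, baseline):
    """Quantize the true angular sizes onto a compound eye with uniformly
    spaced channels and report the resulting relative-distance error."""
    channel_angle = 2 * math.pi / total_channels          # rad per ommatidium
    d2 = d1 - baseline
    n1 = round((target_width / d1) / channel_angle)        # channels covered at location 1
    n2 = round((target_width / d2) / channel_angle)        # channels covered at location 2
    est = relative_distance(n1, n2) * baseline
    return est, abs(est - d2) / d2 * 100.0

for channels in (3600, 21600):
    est, err = simulate(channels, target_width=2.0, d1=60.0, baseline=10.0)
    print(channels, "channels -> estimate", round(est, 2), f"(error {err:.1f}%)")
```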
Representations of Intervals and Optimal Error Bounds.
1980-07-01
(Scanned report; the abstract is not machine-readable. Recoverable keywords: geometric and harmonic means, excess width. Work Unit Number 3, Numerical Analysis and Computer Science; sponsored by the United States Army. Legible text fragment: "An example of an optimal point and error bound.")
Multiple-generator errors are unavoidable under model misspecification.
Jewett, D L; Zhang, Z
1995-08-01
Model misspecification poses a major problem for dipole source localization (DSL) because it causes insidious multiple-generator errors (MulGenErrs) to occur in the fitted dipole parameters. This paper describes how and why this occurs, based upon simple algebraic considerations. MulGenErrs must occur, to some degree, in any DSL analysis of real data because there is model misspecification and mathematically the equations used for the simultaneously active generators must be of a different form than the equations for each generator active alone.
Evaluation of a native vegetation masking technique
NASA Technical Reports Server (NTRS)
Kinsler, M. C.
1984-01-01
A crop masking technique based on Ashburn's vegetative index (AVI) was used to evaluate native vegetation as an indicator of crop moisture condition. A mask of the range areas (native vegetation) was generated for each of thirteen Great Plains LANDSAT MSS sample segments. These masks were compared to the digitized ground truth and accuracies were computed. An analysis of the types of errors indicates a consistency in errors among the segments. The mask represents a simple quick-look technique for evaluating vegetative cover.
Simulation of wave propagation in three-dimensional random media
NASA Technical Reports Server (NTRS)
Coles, William A.; Filice, J. P.; Frehlich, R. G.; Yadlowsky, M.
1993-01-01
A quantitative error analysis for simulation of wave propagation in three-dimensional random media assuming narrow angular scattering is presented for the plane wave and spherical wave geometries. This includes the errors resulting from finite grid size, finite simulation dimensions, and the separation of the two-dimensional screens along the propagation direction. Simple error scalings are determined for power-law spectra of the random refractive index of the media. The effects of a finite inner scale are also considered. The spatial spectra of the intensity errors are calculated and compared to the spatial spectra of intensity. The numerical requirements for a simulation of given accuracy are determined for realizations of the field. The numerical requirements for accurate estimation of higher moments of the field are less stringent.
An investigation of error characteristics and coding performance
NASA Technical Reports Server (NTRS)
Ebel, William J.; Ingels, Frank M.
1992-01-01
The performance of forward error correcting coding schemes on errors anticipated for the Earth Observation System (EOS) Ku-band downlink is studied. The EOS transmits picture frame data to the ground via the Telemetry Data Relay Satellite System (TDRSS) to a ground-based receiver at White Sands. Due to unintentional RF interference from other systems operating in the Ku band, the noise at the receiver is non-Gaussian, which may result in non-random errors at the demodulator output. That is, the downlink channel cannot be modeled by a simple memoryless Gaussian-noise channel. From previous experience, it is believed that these errors are bursty. The research proceeded by developing a computer-based simulation, called Communication Link Error ANalysis (CLEAN), to model the downlink errors, forward error correcting schemes, and interleavers used with TDRSS. To date, the bulk of CLEAN has been written, documented, debugged, and verified. The procedures for utilizing CLEAN to investigate code performance were established and are discussed.
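The report does not specify its burst-error model here, but a two-state Gilbert-Elliott channel is a common way to represent the kind of bursty, non-random demodulator errors described; the sketch below (with illustrative transition probabilities and bit error rates, not values from the study) generates such an error pattern and shows how the errors cluster within code blocks.

```python
import numpy as np

def gilbert_elliott(n_bits, p_g2b, p_b2g, ber_good, ber_bad, seed=0):
    """Generate a bursty error pattern from a two-state Markov channel.

    p_g2b, p_b2g : transition probabilities good->bad and bad->good.
    ber_good, ber_bad : bit-error probability within each state.
    Returns a boolean array where True marks a bit error.
    """
    rng = np.random.default_rng(seed)
    errors = np.zeros(n_bits, dtype=bool)
    bad = False
    for i in range(n_bits):
        if bad:
            bad = rng.random() >= p_b2g        # stay in the bad state?
        else:
            bad = rng.random() < p_g2b         # fall into the bad state?
        errors[i] = rng.random() < (ber_bad if bad else ber_good)
    return errors

err = gilbert_elliott(200_000, p_g2b=1e-4, p_b2g=0.05, ber_good=1e-6, ber_bad=0.2)
print("overall BER:", err.mean())
# Burstiness check: bit errors per 2040-bit block, roughly what a block code
# with an interleaver would see.
blocks = err[: (err.size // 2040) * 2040].reshape(-1, 2040).sum(axis=1)
print("blocks with >16 bit errors:", int((blocks > 16).sum()), "of", blocks.size)
```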
In large epidemiological studies, many researchers use surrogates of air pollution exposure such as geographic information system (GIS)-based characterizations of traffic or simple housing characteristics. It is important to validate these surrogates against measured pollutant co...
Measurement-based reliability/performability models
NASA Technical Reports Server (NTRS)
Hsueh, Mei-Chen
1987-01-01
Measurement-based models based on real error-data collected on a multiprocessor system are described. Model development from the raw error-data to the estimation of cumulative reward is also described. A workload/reliability model is developed based on low-level error and resource usage data collected on an IBM 3081 system during its normal operation in order to evaluate the resource usage/error/recovery process in a large mainframe system. Thus, both normal and erroneous behavior of the system are modeled. The results provide an understanding of the different types of errors and recovery processes. The measured data show that the holding times in key operational and error states are not simple exponentials and that a semi-Markov process is necessary to model the system behavior. A sensitivity analysis is performed to investigate the significance of using a semi-Markov process, as opposed to a Markov process, to model the measured system.
A simplified satellite navigation system for an autonomous Mars roving vehicle.
NASA Technical Reports Server (NTRS)
Janosko, R. E.; Shen, C. N.
1972-01-01
The use of a retroreflecting satellite and a laser rangefinder to navigate a Martian roving vehicle is considered in this paper. It is shown that a simple system can be employed to perform this task. An error analysis is performed on the navigation equations and it is shown that the error inherent in the proposed scheme can be minimized by the proper choice of measurement geometry. A nonlinear programming approach is used to minimize the navigation error subject to constraints that are due to geometric and laser requirements. The problem is solved for a particular set of laser parameters and the optimal solution is presented.
Avery, Anthony J; Rodgers, Sarah; Cantrill, Judith A; Armstrong, Sarah; Elliott, Rachel; Howard, Rachel; Kendrick, Denise; Morris, Caroline J; Murray, Scott A; Prescott, Robin J; Cresswell, Kathrin; Sheikh, Aziz
2009-01-01
Background Medication errors are an important cause of morbidity and mortality in primary care. The aims of this study are to determine the effectiveness, cost effectiveness and acceptability of a pharmacist-led information-technology-based complex intervention compared with simple feedback in reducing proportions of patients at risk from potentially hazardous prescribing and medicines management in general (family) practice. Methods Research subject group: "At-risk" patients registered with computerised general practices in two geographical regions in England. Design: Parallel group pragmatic cluster randomised trial. Interventions: Practices will be randomised to either: (i) Computer-generated feedback; or (ii) Pharmacist-led intervention comprising computer-generated feedback, educational outreach and dedicated support. Primary outcome measures: The proportion of patients in each practice at six and 12 months post intervention: - with a computer-recorded history of peptic ulcer being prescribed non-selective non-steroidal anti-inflammatory drugs - with a computer-recorded diagnosis of asthma being prescribed beta-blockers - aged 75 years and older receiving long-term prescriptions for angiotensin converting enzyme inhibitors or loop diuretics without a recorded assessment of renal function and electrolytes in the preceding 15 months. Secondary outcome measures: These relate to a number of other examples of potentially hazardous prescribing and medicines management. Economic analysis: An economic evaluation will be done of the cost per error avoided, from the perspective of the UK National Health Service (NHS), comparing the pharmacist-led intervention with simple feedback. Qualitative analysis: A qualitative study will be conducted to explore the views and experiences of health care professionals and NHS managers concerning the interventions, and investigate possible reasons why the interventions prove effective, or conversely prove ineffective. Sample size: 34 practices in each of the two treatment arms would provide at least 80% power (two-tailed alpha of 0.05) to demonstrate a 50% reduction in error rates for each of the three primary outcome measures in the pharmacist-led intervention arm compared with an 11% reduction in the simple feedback arm. Discussion At the time of submission of this article, 72 general practices have been recruited (36 in each arm of the trial) and the interventions have been delivered. Analysis has not yet been undertaken. Trial registration Current controlled trials ISRCTN21785299 PMID:19409095
Interpolation Error Estimates for Mean Value Coordinates over Convex Polygons
Rand, Alexander; Gillette, Andrew; Bajaj, Chandrajit
2012-01-01
In a similar fashion to estimates shown for Harmonic, Wachspress, and Sibson coordinates in [Gillette et al., AiCM, to appear], we prove interpolation error estimates for the mean value coordinates on convex polygons suitable for standard finite element analysis. Our analysis is based on providing a uniform bound on the gradient of the mean value functions for all convex polygons of diameter one satisfying certain simple geometric restrictions. This work makes rigorous an observed practical advantage of the mean value coordinates: unlike Wachspress coordinates, the gradient of the mean value coordinates does not become large as interior angles of the polygon approach π. PMID:24027379
Interpolation Error Estimates for Mean Value Coordinates over Convex Polygons.
Rand, Alexander; Gillette, Andrew; Bajaj, Chandrajit
2013-08-01
In a similar fashion to estimates shown for Harmonic, Wachspress, and Sibson coordinates in [Gillette et al., AiCM, to appear], we prove interpolation error estimates for the mean value coordinates on convex polygons suitable for standard finite element analysis. Our analysis is based on providing a uniform bound on the gradient of the mean value functions for all convex polygons of diameter one satisfying certain simple geometric restrictions. This work makes rigorous an observed practical advantage of the mean value coordinates: unlike Wachspress coordinates, the gradient of the mean value coordinates does not become large as interior angles of the polygon approach π.
Pattern of refractive errors among patients at a tertiary hospital in Kathmandu.
Rizyal, A; Ghising, R; Shrestha, R K; Kansakar, I
2011-09-01
A hospital-based cross-sectional study was carried out to determine the pattern of refractive errors among patients attending the outpatient department of the Department of Ophthalmology, Nepal Medical College Teaching Hospital. A total of 1100 patients were evaluated (43.67% male; 56.33% female). Simple myopic astigmatism was the most prevalent type of refractive error, accounting for 27.18%, followed by simple myopia (21.66%) and compound myopic astigmatism (19.48%). Simple hypermetropia (15.03%) and mixed astigmatism (4.3%) were also noted. Simple myopia was prevalent among the younger age group in the first to third decades, whereas hypermetropia was seen in the older patients in the third to fifth decades.
Design of the Detector II: A CMOS Gate Array for the Study of Concurrent Error Detection Techniques.
1987-07-01
(Scanned report; the abstract is only partially legible.) The circuit consists of six different adders with concurrent error detection (CED) schemes, designed for the study of CED techniques and temporary failures. The error detection schemes are simple duplication, duplication with functional dual implementation, duplication with different implementations, and two-rail encoding.
NASA Astrophysics Data System (ADS)
Reis, D. S.; Stedinger, J. R.; Martins, E. S.
2005-10-01
This paper develops a Bayesian approach to analysis of a generalized least squares (GLS) regression model for regional analyses of hydrologic data. The new approach allows computation of the posterior distributions of the parameters and the model error variance using a quasi-analytic approach. Two regional skew estimation studies illustrate the value of the Bayesian GLS approach for regional statistical analysis of a shape parameter and demonstrate that regional skew models can be relatively precise with effective record lengths in excess of 60 years. With Bayesian GLS the marginal posterior distribution of the model error variance and the corresponding mean and variance of the parameters can be computed directly, thereby providing a simple but important extension of the regional GLS regression procedures popularized by Tasker and Stedinger (1989), which is sensitive to the likely values of the model error variance when it is small relative to the sampling error in the at-site estimator.
Covariate Measurement Error Correction Methods in Mediation Analysis with Failure Time Data
Zhao, Shanshan
2014-01-01
Mediation analysis is important for understanding the mechanisms whereby one variable causes changes in another. Measurement error could obscure the ability of the potential mediator to explain such changes. This paper focuses on developing correction methods for measurement error in the mediator with failure time outcomes. We consider a broad definition of measurement error, including technical error and error associated with temporal variation. The underlying model with the ‘true’ mediator is assumed to be of the Cox proportional hazards model form. The induced hazard ratio for the observed mediator no longer has a simple form independent of the baseline hazard function, due to the conditioning event. We propose a mean-variance regression calibration approach and a follow-up time regression calibration approach, to approximate the partial likelihood for the induced hazard function. Both methods demonstrate value in assessing mediation effects in simulation studies. These methods are generalized to multiple biomarkers and to both case-cohort and nested case-control sampling designs. We apply these correction methods to the Women's Health Initiative hormone therapy trials to understand the mediation effect of several serum sex hormone measures on the relationship between postmenopausal hormone therapy and breast cancer risk. PMID:25139469
Covariate measurement error correction methods in mediation analysis with failure time data.
Zhao, Shanshan; Prentice, Ross L
2014-12-01
Mediation analysis is important for understanding the mechanisms whereby one variable causes changes in another. Measurement error could obscure the ability of the potential mediator to explain such changes. This article focuses on developing correction methods for measurement error in the mediator with failure time outcomes. We consider a broad definition of measurement error, including technical error, and error associated with temporal variation. The underlying model with the "true" mediator is assumed to be of the Cox proportional hazards model form. The induced hazard ratio for the observed mediator no longer has a simple form independent of the baseline hazard function, due to the conditioning event. We propose a mean-variance regression calibration approach and a follow-up time regression calibration approach, to approximate the partial likelihood for the induced hazard function. Both methods demonstrate value in assessing mediation effects in simulation studies. These methods are generalized to multiple biomarkers and to both case-cohort and nested case-control sampling designs. We apply these correction methods to the Women's Health Initiative hormone therapy trials to understand the mediation effect of several serum sex hormone measures on the relationship between postmenopausal hormone therapy and breast cancer risk. © 2014, The International Biometric Society.
ERIC Educational Resources Information Center
Language Teaching, 2013
2013-01-01
In 1975 (Vol. 8.4, 201-218), S. P. Corder, during the course of his state-of-the-art review "Error analysis, interlanguage and second language acquisition," focused on the recent literature on simplified linguistic systems and suggested that research had shown that interlanguage systems often resemble other simple codes such as pidgins…
Extraction of the proton radius from electron-proton scattering data
Lee, Gabriel; Arrington, John R.; Hill, Richard J.
2015-07-27
We perform a new analysis of electron-proton scattering data to determine the proton electric and magnetic radii, enforcing model-independent constraints from form factor analyticity. A wide-ranging study of possible systematic effects is performed. An improved analysis is developed that rebins data taken at identical kinematic settings and avoids a scaling assumption of systematic errors with statistical errors. Employing standard models for radiative corrections, our improved analysis of the 2010 Mainz A1 Collaboration data yields a proton electric radius r_E = 0.895(20) fm and magnetic radius r_M = 0.776(38) fm. A similar analysis applied to world data (excluding Mainz data) implies r_E = 0.916(24) fm and r_M = 0.914(35) fm. The Mainz and world values of the charge radius are consistent, and a simple combination yields a value r_E = 0.904(15) fm that is 4σ larger than the CREMA Collaboration muonic hydrogen determination. The Mainz and world values of the magnetic radius differ by 2.7σ, and a simple average yields r_M = 0.851(26) fm. As a result, the circumstances under which published muonic hydrogen and electron scattering data could be reconciled are discussed, including a possible deficiency in the standard radiative correction model which requires further analysis.
Modeling gene expression measurement error: a quasi-likelihood approach
Strimmer, Korbinian
2003-01-01
Background Using suitable error models for gene expression measurements is essential in the statistical analysis of microarray data. However, the true probabilistic model underlying gene expression intensity readings is generally not known. Instead, in currently used approaches some simple parametric model is assumed (usually a transformed normal distribution) or the empirical distribution is estimated. However, both these strategies may not be optimal for gene expression data, as the non-parametric approach ignores known structural information whereas the fully parametric models run the risk of misspecification. A further related problem is the choice of a suitable scale for the model (e.g. observed vs. log-scale). Results Here a simple semi-parametric model for gene expression measurement error is presented. In this approach, inference is based on an approximate likelihood function (the extended quasi-likelihood). Only partial knowledge about the unknown true distribution is required to construct this function. In the case of gene expression, this information is available in the form of the postulated (e.g. quadratic) variance structure of the data. As the quasi-likelihood behaves (almost) like a proper likelihood, it allows for the estimation of calibration and variance parameters, and it is also straightforward to obtain corresponding approximate confidence intervals. Unlike most other frameworks, it also allows analysis on any preferred scale, i.e. both on the original linear scale as well as on a transformed scale. It can also be employed in regression approaches to model systematic (e.g. array or dye) effects. Conclusions The quasi-likelihood framework provides a simple and versatile approach to analyze gene expression data that does not make any strong distributional assumptions about the underlying error model. For several simulated as well as real data sets it provides a better fit to the data than competing models. In an example it also improved the power of tests to identify differential expression. PMID:12659637
Multi-hole pressure probes to wind tunnel experiments and air data systems
NASA Astrophysics Data System (ADS)
Shevchenko, A. M.; Shmakov, A. S.
2017-10-01
The problems of developing a multi-hole pressure system to measure flow angularity, Mach number and dynamic head for wind tunnel experiments or air data systems are discussed. A simple analytical model with separation of variables is derived for the multi-hole spherical pressure probe. The proposed model is uniform for small subsonic and supersonic speeds. An error analysis was performed. The error functions are obtained, allowing estimation of the influence of the Mach number, the pitch angle, and the location of the pressure ports on the uncertainty in determining the flow parameters.
MO-FG-202-06: Improving the Performance of Gamma Analysis QA with Radiomics- Based Image Analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wootton, L; Nyflot, M; Ford, E
2016-06-15
Purpose: The use of gamma analysis for IMRT quality assurance has well-known limitations. Traditionally, a simple thresholding technique is used to evaluate passing criteria. However, like any image, the gamma distribution is rich in information, which thresholding mostly discards. We therefore propose a novel method of analyzing gamma images that uses quantitative image features borrowed from radiomics, with the goal of improving error detection. Methods: 368 gamma images were generated from 184 clinical IMRT beams. For each beam the dose to a phantom was measured with EPID dosimetry and compared to the TPS dose calculated with and without normally distributed (2 mm sigma) errors in MLC positions. The magnitude of 17 intensity histogram and size-zone radiomic features was derived from each image. The features that differed most significantly between image sets were determined with ROC analysis. A linear machine-learning model was trained on these features from 180 gamma images to classify images as with or without errors. The model was then applied to an independent validation set of 188 additional gamma distributions, half with and half without errors. Results: The most significant features for detecting errors were histogram kurtosis (p=0.007) and three size-zone metrics (p<1e-6 for each). The size-zone metrics detected clusters of high gamma-value pixels under mispositioned MLCs. The model applied to the validation set had an AUC of 0.8, compared to 0.56 for traditional gamma analysis with the decision threshold restricted to 98% or less. Conclusion: A radiomics-based image analysis method was developed that is more effective in detecting errors than traditional gamma analysis. Though the pilot study here considers only MLC position errors, radiomics-based methods for other error types are being developed, which may provide better error detection and useful information on the source of detected errors. This work was partially supported by a grant from the Agency for Healthcare Research and Quality, grant number R18 HS022244-01.
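A rough sketch of the kind of features named above (intensity-histogram kurtosis and a size-zone style statistic based on connected regions of high gamma values) is given below. It is not the authors' pipeline; the synthetic gamma maps, the gamma > 1 zone definition and the feature set are illustrative assumptions.

```python
import numpy as np
from scipy import ndimage, stats

def gamma_image_features(gamma, high=1.0):
    """Simple radiomics-style features of a 2-D gamma distribution."""
    values = gamma.ravel()
    kurt = stats.kurtosis(values)                  # excess kurtosis of the intensities
    labels, n_zones = ndimage.label(gamma > high)  # connected "failing" regions
    if n_zones == 0:
        largest = 0
    else:
        largest = int(np.max(ndimage.sum(gamma > high, labels, range(1, n_zones + 1))))
    return {"kurtosis": float(kurt), "largest_high_zone": largest,
            "pass_rate": float((values <= 1.0).mean())}

# Toy comparison: a clean gamma map versus one with a clustered failure,
# e.g. under a mispositioned MLC leaf.
rng = np.random.default_rng(2)
clean = np.abs(rng.normal(0.3, 0.15, size=(128, 128)))
faulty = clean.copy()
faulty[60:68, 20:90] += 1.0                        # stripe of high gamma values
print("clean :", gamma_image_features(clean))
print("faulty:", gamma_image_features(faulty))
```

The clustered failure barely moves the simple pass rate but strongly changes the size-zone feature, which is the qualitative point of the abstract.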
NASA Astrophysics Data System (ADS)
Shinnaka, Shinji
This paper presents a new unified analysis of estimate errors by model-matching extended-back-EMF estimation methods for sensorless drive of permanent-magnet synchronous motors. Analytical solutions about estimate errors, whose validity is confirmed by numerical experiments, are rich in universality and applicability. As an example of universality and applicability, a new trajectory-oriented vector control method is proposed, which can realize directly quasi-optimal strategy minimizing total losses with no additional computational loads by simply orienting one of vector-control coordinates to the associated quasi-optimal trajectory. The coordinate orientation rule, which is analytically derived, is surprisingly simple. Consequently the trajectory-oriented vector control method can be applied to a number of conventional vector control systems using model-matching extended-back-EMF estimation methods.
Modeling and simulation for fewer-axis grinding of complex surface
NASA Astrophysics Data System (ADS)
Li, Zhengjian; Peng, Xiaoqiang; Song, Ci
2017-10-01
As the basis of fewer-axis grinding of complex surface, the grinding mathematical model is of great importance. A mathematical model of the grinding wheel was established, and then coordinate and normal vector of the wheel profile could be calculated. Through normal vector matching at the cutter contact point and the coordinate system transformation, the grinding mathematical model was established to work out the coordinate of the cutter location point. Based on the model, interference analysis was simulated to find out the right position and posture of workpiece for grinding. Then positioning errors of the workpiece including the translation positioning error and the rotation positioning error were analyzed respectively, and the main locating datum was obtained. According to the analysis results, the grinding tool path was planned and generated to grind the complex surface, and good form accuracy was obtained. The grinding mathematical model is simple, feasible and can be widely applied.
Sampson, Maureen L; Gounden, Verena; van Deventer, Hendrik E; Remaley, Alan T
2016-02-01
The main drawback of the periodic analysis of quality control (QC) material is that test performance is not monitored in time periods between QC analyses, potentially leading to the reporting of faulty test results. The objective of this study was to develop a patient based QC procedure for the more timely detection of test errors. Results from a Chem-14 panel measured on the Beckman LX20 analyzer were used to develop the model. Each test result was predicted from the other 13 members of the panel by multiple regression, which resulted in correlation coefficients between the predicted and measured result of >0.7 for 8 of the 14 tests. A logistic regression model, which utilized the measured test result, the predicted test result, the day of the week and time of day, was then developed for predicting test errors. The output of the logistic regression was tallied by a daily CUSUM approach and used to predict test errors, with a fixed specificity of 90%. The mean average run length (ARL) before error detection by CUSUM-Logistic Regression (CSLR) was 20 with a mean sensitivity of 97%, which was considerably shorter than the mean ARL of 53 (sensitivity 87.5%) for a simple prediction model that only used the measured result for error detection. A CUSUM-Logistic Regression analysis of patient laboratory data can be an effective approach for the rapid and sensitive detection of clinical laboratory errors. Published by Elsevier Inc.
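A simplified sketch of the core idea follows: predict each analyte from the other panel members, then accumulate evidence of a systematic shift with a tabular CUSUM of the standardized prediction residuals. The full CUSUM-Logistic Regression procedure (which also feeds the predicted result, day of week and time of day into a logistic model) is not reproduced; the synthetic panel, shift size and CUSUM parameters are illustrative assumptions.

```python
import numpy as np

def fit_panel_predictor(panel, target_col):
    """Least-squares model predicting one analyte from the other panel members."""
    y = panel[:, target_col]
    X = np.column_stack([np.ones(len(panel)), np.delete(panel, target_col, axis=1)])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ coef
    return coef, resid.std(ddof=X.shape[1])

def cusum_flag(panel, target_col, coef, sigma, k=0.5, h=5.0):
    """Two one-sided tabular CUSUMs of standardized prediction residuals.
    Returns the first sample index at which either CUSUM exceeds h, or -1."""
    y = panel[:, target_col]
    X = np.column_stack([np.ones(len(panel)), np.delete(panel, target_col, axis=1)])
    z = (y - X @ coef) / sigma
    s_hi = s_lo = 0.0
    for i, zi in enumerate(z):
        s_hi = max(0.0, s_hi + zi - k)
        s_lo = max(0.0, s_lo - zi - k)
        if s_hi > h or s_lo > h:
            return i
    return -1

# Toy example: 3 correlated "analytes"; a shift (error condition) is added to
# analyte 0 halfway through the monitoring period.
rng = np.random.default_rng(3)
latent = rng.normal(size=(800, 1))
train = latent @ [[1.0, 0.8, 0.6]] + rng.normal(0, 0.3, size=(800, 3))
coef, sigma = fit_panel_predictor(train, target_col=0)

monitor = rng.normal(size=(200, 1)) @ [[1.0, 0.8, 0.6]] + rng.normal(0, 0.3, size=(200, 3))
monitor[100:, 0] += 0.8                      # simulated assay shift
print("error flagged at sample:", cusum_flag(monitor, 0, coef, sigma))
```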
Stekel, Dov J.; Sarti, Donatella; Trevino, Victor; Zhang, Lihong; Salmon, Mike; Buckley, Chris D.; Stevens, Mark; Pallen, Mark J.; Penn, Charles; Falciani, Francesco
2005-01-01
A key step in the analysis of microarray data is the selection of genes that are differentially expressed. Ideally, such experiments should be properly replicated in order to infer both technical and biological variability, and the data should be subjected to rigorous hypothesis tests to identify the differentially expressed genes. However, in microarray experiments involving the analysis of very large numbers of biological samples, replication is not always practical. Therefore, there is a need for a method to select differentially expressed genes in a rational way from insufficiently replicated data. In this paper, we describe a simple method that uses bootstrapping to generate an error model from a replicated pilot study that can be used to identify differentially expressed genes in subsequent large-scale studies on the same platform, but in which there may be no replicated arrays. The method builds a stratified error model that includes array-to-array variability, feature-to-feature variability and the dependence of error on signal intensity. We apply this model to the characterization of the host response in a model of bacterial infection of human intestinal epithelial cells. We demonstrate the effectiveness of error model based microarray experiments and propose this as a general strategy for a microarray-based screening of large collections of biological samples. PMID:15800204
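The general strategy can be sketched as follows: build a null distribution of between-replicate log-ratio differences from the replicated pilot, stratified by signal intensity, and flag genes in a subsequent unreplicated comparison whose log-ratio falls outside the corresponding bootstrap interval. This is a simplified stand-in for the stratified error model in the paper; the intensity-dependent noise model, strata count and thresholds are illustrative assumptions.

```python
import numpy as np

def intensity_strata(mean_log_intensity, n_strata=4):
    """Assign each gene to an intensity stratum by quantile."""
    edges = np.quantile(mean_log_intensity, np.linspace(0, 1, n_strata + 1))
    return np.clip(np.searchsorted(edges, mean_log_intensity, side="right") - 1,
                   0, n_strata - 1)

def bootstrap_null(pilot_rep1, pilot_rep2, strata, n_boot=2000, seed=0):
    """Per-stratum bootstrap samples of null log-ratios (replicate vs replicate)."""
    rng = np.random.default_rng(seed)
    null_ratio = pilot_rep1 - pilot_rep2          # log-scale replicate difference
    return {s: rng.choice(null_ratio[strata == s], size=n_boot, replace=True)
            for s in np.unique(strata)}

def flag_genes(log_ratio, strata, null, alpha=0.01):
    """Flag genes outside the stratum-specific two-sided bootstrap interval."""
    flags = np.zeros(log_ratio.size, dtype=bool)
    for s, pool in null.items():
        lo, hi = np.quantile(pool, [alpha / 2, 1 - alpha / 2])
        idx = strata == s
        flags[idx] = (log_ratio[idx] < lo) | (log_ratio[idx] > hi)
    return flags

# Toy data: 5000 genes; the pilot has two technical replicates of one condition.
rng = np.random.default_rng(4)
n_genes = 5000
intensity = rng.normal(8, 2, n_genes)
noise_sd = 0.2 + 1.5 / np.sqrt(np.exp2(intensity - 4))   # noisier at low intensity
rep1, rep2 = (intensity + rng.normal(0, noise_sd) for _ in range(2))
strata = intensity_strata((rep1 + rep2) / 2)
null = bootstrap_null(rep1, rep2, strata)

# Unreplicated study: the first 100 genes truly change by 1.5 log2 units.
study_ratio = rng.normal(0, noise_sd * np.sqrt(2))
study_ratio[:100] += 1.5
flags = flag_genes(study_ratio, strata, null, alpha=0.01)
print("flagged:", int(flags.sum()), "- true positives among first 100:", int(flags[:100].sum()))
```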
Differential processing of melodic, rhythmic and simple tone deviations in musicians--an MEG study.
Lappe, Claudia; Lappe, Markus; Pantev, Christo
2016-01-01
Rhythm and melody are two basic characteristics of music. Performing musicians have to pay attention to both, and avoid errors in either aspect of their performance. To investigate the neural processes involved in detecting melodic and rhythmic errors from auditory input we tested musicians on both kinds of deviations in a mismatch negativity (MMN) design. We found that MMN responses to a rhythmic deviation occurred at shorter latencies than MMN responses to a melodic deviation. Beamformer source analysis showed that the melodic deviation activated superior temporal, inferior frontal and superior frontal areas whereas the activation pattern of the rhythmic deviation focused more strongly on inferior and superior parietal areas, in addition to superior temporal cortex. Activation in the supplementary motor area occurred for both types of deviations. We also recorded responses to similar pitch and tempo deviations in a simple, non-musical repetitive tone pattern. In this case, there was no latency difference between the MMNs and cortical activation was smaller and mostly limited to auditory cortex. The results suggest that prediction and error detection of musical stimuli in trained musicians involve a broad cortical network and that rhythmic and melodic errors are processed in partially different cortical streams. Copyright © 2015 Elsevier Inc. All rights reserved.
Executive Council lists and general practitioner files
Farmer, R. D. T.; Knox, E. G.; Cross, K. W.; Crombie, D. L.
1974-01-01
An investigation of the accuracy of general practitioner and Executive Council files was approached by a comparison of the two. High error rates were found, including both file errors and record errors. On analysis it emerged that file error rates could not be satisfactorily expressed except in a time-dimensioned way, and we were unable to do this within the context of our study. Record error rates and field error rates were expressible as proportions of the number of records on both the lists; 79·2% of all records exhibited non-congruencies and particular information fields had error rates ranging from 0·8% (assignation of sex) to 68·6% (assignation of civil state). Many of the errors, both field errors and record errors, were attributable to delayed updating of mutable information. It is concluded that the simple transfer of Executive Council lists to a computer filing system would not solve all the inaccuracies and would not in itself permit Executive Council registers to be used for any health care applications requiring high accuracy. For this it would be necessary to design and implement a purpose designed health care record system which would include, rather than depend upon, the general practitioner remuneration system. PMID:4816588
On-line estimation of error covariance parameters for atmospheric data assimilation
NASA Technical Reports Server (NTRS)
Dee, Dick P.
1995-01-01
A simple scheme is presented for on-line estimation of covariance parameters in statistical data assimilation systems. The scheme is based on a maximum-likelihood approach in which estimates are produced on the basis of a single batch of simultaneous observations. Single-sample covariance estimation is reasonable as long as the number of available observations exceeds the number of tunable parameters by two or three orders of magnitude. Not much is known at present about model error associated with actual forecast systems. Our scheme can be used to estimate some important statistical model error parameters such as regionally averaged variances or characteristic correlation length scales. The advantage of the single-sample approach is that it does not rely on any assumptions about the temporal behavior of the covariance parameters: time-dependent parameter estimates can be continuously adjusted on the basis of current observations. This is of practical importance since it is likely to be the case that both model error and observation error strongly depend on the actual state of the atmosphere. The single-sample estimation scheme can be incorporated into any four-dimensional statistical data assimilation system that involves explicit calculation of forecast error covariances, including optimal interpolation (OI) and the simplified Kalman filter (SKF). The computational cost of the scheme is high but not prohibitive; on-line estimation of one or two covariance parameters in each analysis box of an operational boxed-OI system is currently feasible. A number of numerical experiments performed with an adaptive SKF and an adaptive version of OI, using a linear two-dimensional shallow-water model and artificially generated model error, are described. The performance of the nonadaptive versions of these methods turns out to depend rather strongly on correct specification of model error parameters. These parameters are estimated under a variety of conditions, including uniformly distributed model error and time-dependent model error statistics.
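A minimal example of the maximum-likelihood idea for a single batch of innovations is sketched below: if the innovation covariance is assumed to be a known structure matrix S scaled by an unknown factor alpha, the single-sample ML estimate is alpha = d'S^-1 d / N. The full scheme estimates several covariance parameters (for example variances and correlation length scales) and is not reproduced; the correlation model and numbers here are illustrative assumptions.

```python
import numpy as np

def ml_variance_scaling(d, S):
    """Maximum-likelihood estimate of alpha in Cov(d) = alpha * S, for one batch
    of innovations d (observation minus forecast) with known structure matrix S."""
    d = np.asarray(d, float)
    return float(d @ np.linalg.solve(S, d) / d.size)

def correlation_matrix(coords, length_scale):
    """Gaussian-shaped spatial correlation between observation locations."""
    r = np.abs(coords[:, None] - coords[None, :])
    return np.exp(-0.5 * (r / length_scale) ** 2)

# Synthetic single-batch experiment
rng = np.random.default_rng(5)
coords = np.sort(rng.uniform(0, 1000, size=400))            # e.g. km along a line
S = correlation_matrix(coords, length_scale=150.0) + 0.25 * np.eye(coords.size)
true_alpha = 2.5
L = np.linalg.cholesky(true_alpha * S)
d = L @ rng.standard_normal(coords.size)                     # simulated innovations
print("true alpha:", true_alpha, " ML estimate:", round(ml_variance_scaling(d, S), 3))
```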
Analysis of Slug Tests in Formations of High Hydraulic Conductivity
Butler, J.J.; Garnett, E.J.; Healey, J.M.
2003-01-01
A new procedure is presented for the analysis of slug tests performed in partially penetrating wells in formations of high hydraulic conductivity. This approach is a simple, spreadsheet-based implementation of existing models that can be used for analysis of tests from confined or unconfined aquifers. Field examples of tests exhibiting oscillatory and nonoscillatory behavior are used to illustrate the procedure and to compare results with estimates obtained using alternative approaches. The procedure is considerably simpler than recently proposed methods for this hydrogeologic setting. Although the simplifications required by the approach can introduce error into hydraulic-conductivity estimates, this additional error becomes negligible when appropriate measures are taken in the field. These measures are summarized in a set of practical field guidelines for slug tests in highly permeable aquifers.
Robust LOD scores for variance component-based linkage analysis.
Blangero, J; Williams, J T; Almasy, L
2000-01-01
The variance component method is now widely used for linkage analysis of quantitative traits. Although this approach offers many advantages, the importance of the underlying assumption of multivariate normality of the trait distribution within pedigrees has not been studied extensively. Simulation studies have shown that traits with leptokurtic distributions yield linkage test statistics that exhibit excessive Type I error when analyzed naively. We derive analytical formulae relating the deviation from the expected asymptotic distribution of the lod score to the kurtosis and total heritability of the quantitative trait. A simple correction constant yields a robust lod score for any deviation from normality and for any pedigree structure, and effectively eliminates the problem of inflated Type I error due to misspecification of the underlying probability model in variance component-based linkage analysis.
Analysis of slug tests in formations of high hydraulic conductivity.
Butler, James J; Garnett, Elizabeth J; Healey, John M
2003-01-01
A new procedure is presented for the analysis of slug tests performed in partially penetrating wells in formations of high hydraulic conductivity. This approach is a simple, spreadsheet-based implementation of existing models that can be used for analysis of tests from confined or unconfined aquifers. Field examples of tests exhibiting oscillatory and nonoscillatory behavior are used to illustrate the procedure and to compare results with estimates obtained using alternative approaches. The procedure is considerably simpler than recently proposed methods for this hydrogeologic setting. Although the simplifications required by the approach can introduce error into hydraulic-conductivity estimates, this additional error becomes negligible when appropriate measures are taken in the field. These measures are summarized in a set of practical field guidelines for slug tests in highly permeable aquifers.
The Length of a Pestle: A Class Exercise in Measurement and Statistical Analysis.
ERIC Educational Resources Information Center
O'Reilly, James E.
1986-01-01
Outlines the simple exercise of measuring the length of an object as a concrete paradigm of the entire process of making chemical measurements and treating the resulting data. Discusses the procedure, significant figures, measurement error, spurious data, rejection of results, precision and accuracy, and student responses. (TW)
Kitchen Physics: Lessons in Fluid Pressure and Error Analysis
ERIC Educational Resources Information Center
Vieyra, Rebecca Elizabeth; Vieyra, Chrystian; Macchia, Stefano
2017-01-01
Although the advent and popularization of the "flipped classroom" tends to center around at-home video lectures, teachers are increasingly turning to at-home labs for enhanced student engagement. This paper describes two simple at-home experiments that can be accomplished in the kitchen. The first experiment analyzes the density of four…
Atlas of the Light Curves and Phase Plane Portraits of Selected Long-Period Variables
NASA Astrophysics Data System (ADS)
Kudashkina, L. S.; Andronov, I. L.
2017-12-01
For a group of Mira-type stars, semi-regular variables and some RV Tau-type stars, the limit cycles were computed and plotted using phase plane diagrams. As generalized coordinates x and x', we have used the brightness of the star as a function of phase φ and its phase derivative. We have used mean phase light curves based on observations of various authors from the AAVSO, AFOEV, VSOLJ and ASAS databases, approximated by a trigonometric polynomial of statistically optimal degree. For a simple sine-like light curve, the limit cycle is a simple ellipse. In the case of a more complicated light curve, in which the harmonics are statistically significant, the limit cycle deviates from an ellipse. In addition to the classical analysis, we use the error estimates of the smoothing function and its derivative to constrain an "error corridor" in the phase plane.
NASA Technical Reports Server (NTRS)
Bell, Thomas L.; Kundu, Prasun K.; Kummerow, Christian D.; Einaudi, Franco (Technical Monitor)
2000-01-01
Quantitative use of satellite-derived maps of monthly rainfall requires some measure of the accuracy of the satellite estimates. The rainfall estimate for a given map grid box is subject to both remote-sensing error and, in the case of low-orbiting satellites, sampling error due to the limited number of observations of the grid box provided by the satellite. A simple model of rain behavior predicts that Root-mean-square (RMS) random error in grid-box averages should depend in a simple way on the local average rain rate, and the predicted behavior has been seen in simulations using surface rain-gauge and radar data. This relationship was examined using satellite SSM/I data obtained over the western equatorial Pacific during TOGA COARE. RMS error inferred directly from SSM/I rainfall estimates was found to be larger than predicted from surface data, and to depend less on local rain rate than was predicted. Preliminary examination of TRMM microwave estimates shows better agreement with surface data. A simple method of estimating rms error in satellite rainfall estimates is suggested, based on quantities that can be directly computed from the satellite data.
Comparative study of standard space and real space analysis of quantitative MR brain data.
Aribisala, Benjamin S; He, Jiabao; Blamire, Andrew M
2011-06-01
To compare the robustness of region of interest (ROI) analysis of magnetic resonance imaging (MRI) brain data in real space with analysis in standard space and to test the hypothesis that standard space image analysis introduces more partial volume effect errors compared to analysis of the same dataset in real space. Twenty healthy adults with no history or evidence of neurological diseases were recruited; high-resolution T(1)-weighted, quantitative T(1), and B(0) field-map measurements were collected. Algorithms were implemented to perform analysis in real and standard space and used to apply a simple standard ROI template to quantitative T(1) datasets. Regional relaxation values and histograms for both gray and white matter tissues classes were then extracted and compared. Regional mean T(1) values for both gray and white matter were significantly lower using real space compared to standard space analysis. Additionally, regional T(1) histograms were more compact in real space, with smaller right-sided tails indicating lower partial volume errors compared to standard space analysis. Standard space analysis of quantitative MRI brain data introduces more partial volume effect errors biasing the analysis of quantitative data compared to analysis of the same dataset in real space. Copyright © 2011 Wiley-Liss, Inc.
2013-01-01
A new approach, the projective system approach, is proposed to realize modified projective synchronization between two different chaotic systems. By simple analysis of trajectories in the phase space, a projective system of the original chaotic systems is obtained to replace the error system in judging the occurrence of modified projective synchronization. Theoretical analysis and numerical simulations show that, although the projective system may not be unique, modified projective synchronization can be achieved provided that the origin of any of the projective systems is asymptotically stable. Furthermore, an example is presented to illustrate that even a necessary and sufficient condition for modified projective synchronization can be derived by using the projective system approach. PMID:24187522
Hanson, Sonya M.; Ekins, Sean; Chodera, John D.
2015-01-01
All experimental assay data contains error, but the magnitude, type, and primary origin of this error is often not obvious. Here, we describe a simple set of assay modeling techniques based on the bootstrap principle that allow sources of error and bias to be simulated and propagated into assay results. We demonstrate how deceptively simple operations—such as the creation of a dilution series with a robotic liquid handler—can significantly amplify imprecision and even contribute substantially to bias. To illustrate these techniques, we review an example of how the choice of dispensing technology can impact assay measurements, and show how large contributions to discrepancies between assays can be easily understood and potentially corrected for. These simple modeling techniques—illustrated with an accompanying IPython notebook—can allow modelers to understand the expected error and bias in experimental datasets, and even help experimentalists design assays to more effectively reach accuracy and imprecision goals. PMID:26678597
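In the spirit of the simulation techniques described, the sketch below propagates both random imprecision (a per-transfer CV) and a systematic transfer bias through a serial dilution by Monte Carlo, showing how imprecision accumulates and bias compounds down the series. The dispensing parameters are illustrative assumptions, and this is not the paper's accompanying IPython notebook.

```python
import numpy as np

def simulate_dilution_series(c0, n_points, dilution, transfer_cv, transfer_bias,
                             n_mc=10000, seed=0):
    """Monte Carlo concentrations of a serial dilution prepared by repeated transfers.

    transfer_cv   : random imprecision (coefficient of variation) per transfer.
    transfer_bias : fractional systematic error per transfer (e.g. -0.03 means
                    3% under-delivery of the transferred volume).
    Returns an (n_mc, n_points) array of realized concentrations.
    """
    rng = np.random.default_rng(seed)
    conc = np.full(n_mc, c0, dtype=float)
    out = np.empty((n_mc, n_points))
    out[:, 0] = conc
    for i in range(1, n_points):
        # realized transferred fraction for this step, per Monte Carlo sample
        frac = (1.0 / dilution) * (1.0 + transfer_bias) \
               * (1.0 + transfer_cv * rng.standard_normal(n_mc))
        conc = conc * frac
        out[:, i] = conc
    return out

nominal = 100.0 / 2.0 ** np.arange(8)
series = simulate_dilution_series(100.0, 8, dilution=2.0,
                                  transfer_cv=0.02, transfer_bias=-0.03)
rel_bias = series.mean(axis=0) / nominal - 1.0
rel_sd = series.std(axis=0) / series.mean(axis=0)
for i, (b, s) in enumerate(zip(rel_bias, rel_sd)):
    print(f"point {i}: bias {100*b:+.1f}%  CV {100*s:.1f}%")
```

Even a small per-step bias compounds geometrically down the series, which is the deceptively simple amplification effect the abstract refers to.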
NASA Astrophysics Data System (ADS)
Song, Huixu; Shi, Zhaoyao; Chen, Hongfang; Sun, Yanqiang
2018-01-01
This paper presents a novel experimental approach and a simple model for verifying that the spherical mirror of a laser tracking system can lessen the effect of rotation errors of the gimbal mount axes, based on relative motion thinking. Enough material and evidence are provided to support the claim that this simple model can replace the complex optical system in a laser tracking system. This experimental approach and model interchange the kinematic relationship between the spherical mirror and the gimbal mount axes in the laser tracking system. With the gimbal mount axes fixed stably, their rotation error motions are replaced by spatial micro-displacements of the spherical mirror. These motions are simulated by driving the spherical mirror along the optical axis and in the vertical direction with the use of a precision positioning platform. The effect on laser ranging measurement accuracy of the displacement caused by the rotation errors of the gimbal mount axes can be recorded from the output of the laser interferometer. The experimental results show that the laser ranging measurement error caused by the rotation errors is less than 0.1 μm if the radial error motion and axial error motion are under 10 μm. The method based on relative motion thinking not only simplifies the experimental procedure but also shows that the spherical mirror is able to reduce the effect of rotation errors of the gimbal mount axes in a laser tracking system.
[Medical errors: inevitable but preventable].
Giard, R W
2001-10-27
Medical errors are increasingly reported in the lay press. Studies have shown dramatic error rates of 10 percent or even higher. From a methodological point of view, studying the frequency and causes of medical errors is far from simple. Clinical decisions on diagnostic or therapeutic interventions are always taken within a clinical context. Reviewing outcomes of interventions without taking into account both the intentions and the arguments for a particular action will limit the conclusions from a study on the rate and preventability of errors. The interpretation of the preventability of medical errors is fraught with difficulties and probably highly subjective. Blaming the doctor personally does not do justice to the actual situation and especially the organisational framework. Attention to and improvement of the organisational aspects of error are far more important than litigating against the person. To err is and will remain human, and if we want to reduce the incidence of faults we must be able to learn from our mistakes. That requires an open attitude towards medical mistakes, a continuous effort in their detection, a sound analysis and, where feasible, the institution of preventive measures.
NASA Astrophysics Data System (ADS)
Hinton, Courtney; Punjabi, Alkesh; Ali, Halima
2008-11-01
The simple map is the simplest map that has the topology of divertor tokamaks [1]. Recently, the action-angle coordinates for the simple map were analytically calculated, and the simple map was constructed in action-angle coordinates [2]. Action-angle coordinates for the simple map cannot be inverted to real space coordinates (R,Z). Because there is a logarithmic singularity on the ideal separatrix, trajectories cannot cross the separatrix [2]. The simple map in action-angle coordinates is applied to calculate stochastic broadening due to magnetic noise and field errors. Mode numbers for noise + field errors from the DIII-D tokamak are used. Mode numbers are (m,n)=(3,1), (4,1), (6,2), (7,2), (8,2), (9,3), (10,3), (11,3), (12,3) [3]. The common amplitude δ is varied from 0.8×10^-5 to 2.0×10^-5. For this noise and these field errors, the width of the stochastic layer in the simple map is calculated. This work is supported by US Department of Energy grants DE-FG02-07ER54937, DE-FG02-01ER54624 and DE-FG02-04ER54793 1. A. Punjabi, H. Ali, T. Evans, and A. Boozer, Phys. Let. A 364, 140--145 (2007). 2. O. Kerwin, A. Punjabi, and H. Ali, to appear in Physics of Plasmas. 3. A. Punjabi and H. Ali, P1.012, 35^th EPS Conference on Plasma Physics, June 9-13, 2008, Hersonissos, Crete, Greece.
Avery, Anthony J; Rodgers, Sarah; Cantrill, Judith A; Armstrong, Sarah; Elliott, Rachel; Howard, Rachel; Kendrick, Denise; Morris, Caroline J; Murray, Scott A; Prescott, Robin J; Cresswell, Kathrin; Sheikh, Aziz
2009-05-01
Medication errors are an important cause of morbidity and mortality in primary care. The aims of this study are to determine the effectiveness, cost effectiveness and acceptability of a pharmacist-led information-technology-based complex intervention compared with simple feedback in reducing proportions of patients at risk from potentially hazardous prescribing and medicines management in general (family) practice. RESEARCH SUBJECT GROUP: "At-risk" patients registered with computerised general practices in two geographical regions in England. Parallel group pragmatic cluster randomised trial. Practices will be randomised to either: (i) Computer-generated feedback; or (ii) Pharmacist-led intervention comprising computer-generated feedback, educational outreach and dedicated support. The proportion of patients in each practice at six and 12 months post intervention: - with a computer-recorded history of peptic ulcer being prescribed non-selective non-steroidal anti-inflammatory drugs; - with a computer-recorded diagnosis of asthma being prescribed beta-blockers; - aged 75 years and older receiving long-term prescriptions for angiotensin converting enzyme inhibitors or loop diuretics without a recorded assessment of renal function and electrolytes in the preceding 15 months. SECONDARY OUTCOME MEASURES: These relate to a number of other examples of potentially hazardous prescribing and medicines management. An economic evaluation will be done of the cost per error avoided, from the perspective of the UK National Health Service (NHS), comparing the pharmacist-led intervention with simple feedback. A qualitative study will be conducted to explore the views and experiences of health care professionals and NHS managers concerning the interventions, and investigate possible reasons why the interventions prove effective, or conversely prove ineffective. 34 practices in each of the two treatment arms would provide at least 80% power (two-tailed alpha of 0.05) to demonstrate a 50% reduction in error rates for each of the three primary outcome measures in the pharmacist-led intervention arm compared with an 11% reduction in the simple feedback arm. At the time of submission of this article, 72 general practices have been recruited (36 in each arm of the trial) and the interventions have been delivered. Analysis has not yet been undertaken.
Evaluation of dynamic electromagnetic tracking deviation
NASA Astrophysics Data System (ADS)
Hummel, Johann; Figl, Michael; Bax, Michael; Shahidi, Ramin; Bergmann, Helmar; Birkfellner, Wolfgang
2009-02-01
Electromagnetic tracking systems (EMTS's) are widely used in clinical applications. Many reports have evaluated their static behavior and examined errors caused by metallic objects. Although there are some publications concerning the dynamic behavior of EMTS's, the measurement protocols are either difficult to reproduce with respect to the movement path or can only be accomplished at high technical effort. Because dynamic behavior is of major interest with respect to clinical applications, we established a simple but effective measurement protocol that is easy to repeat at other laboratories. We built a simple pendulum on which the sensor of our EMTS (Aurora, NDI, CA) could be mounted. The pendulum was mounted on a special bearing to guarantee that the pendulum path is planar; this assumption was tested before starting the measurements. All relevant parameters defining the pendulum motion, such as rotation center and length, were determined by static measurement with satisfactory accuracy. Position and orientation data were then gathered over a time period of 8 seconds and timestamps were recorded. Data analysis provided a positioning error and an overall error combining both position and orientation. All errors were calculated by means of the well-known equations describing pendulum movement. Additionally, latency - the elapsed time from input motion until the immediate consequences of that input are available - was calculated using well-known equations for mechanical pendulums at different velocities. We repeated the measurements with different metal objects (rods made of stainless steel types 303 and 416) between the field generator and the pendulum. We found a root mean square error (eRMS) of 1.02 mm with respect to the distance of the sensor position from the fit plane (maximum error emax = 2.31 mm, minimum error emin = -2.36 mm). The eRMS for the positional error amounted to 1.32 mm, while the overall error was 3.24 mm. The latency at a pendulum angle of 0° (vertical) was 7.8 ms.
NASA Astrophysics Data System (ADS)
Heavens, A. F.; Seikel, M.; Nord, B. D.; Aich, M.; Bouffanais, Y.; Bassett, B. A.; Hobson, M. P.
2014-12-01
The Fisher Information Matrix formalism (Fisher 1935) is extended to cases where the data are divided into two parts (X, Y), where the expectation value of Y depends on X according to some theoretical model, and X and Y both have errors with arbitrary covariance. In the simplest case, (X, Y) represent data pairs of abscissa and ordinate, in which case the analysis deals with the case of data pairs with errors in both coordinates, but X can be any measured quantities on which Y depends. The analysis applies for arbitrary covariance, provided all errors are Gaussian, and provided the errors in X are small, both in comparison with the scale over which the expected signal Y changes, and with the width of the prior distribution. This generalizes the Fisher Matrix approach, which normally only considers errors in the `ordinate' Y. In this work, we include errors in X by marginalizing over latent variables, effectively employing a Bayesian hierarchical model, and deriving the Fisher Matrix for this more general case. The methods here also extend to likelihood surfaces which are not Gaussian in the parameter space, and so techniques such as DALI (Derivative Approximation for Likelihoods) can be generalized straightforwardly to include arbitrary Gaussian data error covariances. For simple mock data and theoretical models, we compare to Markov Chain Monte Carlo experiments, illustrating the method with cosmological supernova data. We also include the new method in the FISHER4CAST software.
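As context for the generalization described above, the baseline being extended is the standard Fisher matrix for Gaussian data with errors only in the ordinate Y and a parameter-independent covariance C (a textbook result, not a formula quoted from this paper):

```latex
F_{ij} \;=\; \frac{\partial \boldsymbol{\mu}^{\mathsf{T}}}{\partial \theta_i}\,
\mathsf{C}^{-1}\,
\frac{\partial \boldsymbol{\mu}}{\partial \theta_j},
\qquad
\boldsymbol{\mu}(\boldsymbol{\theta}) \equiv \langle \mathbf{Y} \rangle .
```

The paper then derives the analogue of this expression when Gaussian errors in the abscissa X are marginalized over as latent variables.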
Spatial interpolation of solar global radiation
NASA Astrophysics Data System (ADS)
Lussana, C.; Uboldi, F.; Antoniazzi, C.
2010-09-01
Solar global radiation is defined as the radiant flux incident onto an area element of the terrestrial surface. Direct knowledge of it plays a crucial role in many applications, from agrometeorology to environmental meteorology. The ARPA Lombardia meteorological network includes about one hundred pyranometers, mostly distributed on the southern side of the Alps and in the centre of the Po Plain. A statistical interpolation method based on an implementation of Optimal Interpolation is applied to the hourly averages of the solar global radiation observations measured by the ARPA Lombardia network. The background field is obtained using SMARTS (the Simple Model of the Atmospheric Radiative Transfer of Sunshine, Gueymard, 2001). The model is initialised by assuming clear-sky conditions and takes into account the solar position and orography-related effects (shade and reflection). The interpolation of pyranometric observations introduces information about cloud presence and influence into the analysis fields. A particular effort is devoted to preventing observations affected by large errors of different kinds (representativity errors, systematic errors, gross errors) from entering the analysis procedure. The inclusion of direct cloud information from satellite observations is also planned.
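The abstract does not spell out the analysis step, but the generic Optimal Interpolation update that such schemes implement has the following form (a standard statement; the specific covariance models used by the authors are not given here):

```latex
\mathbf{x}_a \;=\; \mathbf{x}_b \;+\; \mathbf{B}\mathbf{H}^{\mathsf{T}}
\left(\mathbf{H}\mathbf{B}\mathbf{H}^{\mathsf{T}} + \mathbf{R}\right)^{-1}
\left(\mathbf{y} - \mathbf{H}\mathbf{x}_b\right),
```

where x_b is the SMARTS clear-sky background field, y the hourly pyranometer observations, H the observation operator, and B and R the background and observation error covariance matrices.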
NASA Technical Reports Server (NTRS)
Kalton, G.
1983-01-01
A number of surveys were conducted to study the relationship between the level of aircraft or traffic noise exposure experienced by people living in a particular area and their annoyance with it. These surveys generally employ a clustered sample design, which affects the precision of the survey estimates. Regression analysis of annoyance on noise measures and other variables is often an important component of the survey analysis. Formulae are presented for estimating the standard errors of regression coefficients and of ratios of regression coefficients that are applicable with a two- or three-stage clustered sample design. Using a simple cost function, the optimum allocation of the sample across the stages of the sample design is also determined for the estimation of a regression coefficient.
Kitchen Physics: Lessons in Fluid Pressure and Error Analysis
NASA Astrophysics Data System (ADS)
Vieyra, Rebecca Elizabeth; Vieyra, Chrystian; Macchia, Stefano
2017-02-01
Although the advent and popularization of the "flipped classroom" tends to center around at-home video lectures, teachers are increasingly turning to at-home labs for enhanced student engagement. This paper describes two simple at-home experiments that can be accomplished in the kitchen. The first experiment analyzes the density of four liquids using a waterproof case and a smartphone barometer in a container, sink, or tub. The second experiment determines the relationship between pressure and temperature of an ideal gas in a constant-volume container placed momentarily in a refrigerator freezer. These experiences provide a ripe opportunity both for learning fundamental physics concepts and for investigating a variety of error analysis techniques that are frequently overlooked in introductory physics courses.
25+ Years of the Hubble Space Telescope and a Simple Error That Cost Millions
ERIC Educational Resources Information Center
Shakerin, Said
2016-01-01
A simple mistake in properly setting up a measuring device caused millions of dollars to be spent in correcting the initial optical failure of the Hubble Space Telescope (HST). This short article is intended as a lesson for a physics laboratory and discussion of errors in measurement.
Examining Errors in Simple Spreadsheet Modeling from Different Research Perspectives
ERIC Educational Resources Information Center
Kadijevich, Djordje M.
2012-01-01
By using a sample of 1st-year undergraduate business students, this study dealt with the development of simple (deterministic and non-optimization) spreadsheet models of income statements within an introductory course on business informatics. The study examined students' errors in doing this for business situations of their choice and found three…
Automatic-repeat-request error control schemes
NASA Technical Reports Server (NTRS)
Lin, S.; Costello, D. J., Jr.; Miller, M. J.
1983-01-01
Error detection incorporated with automatic-repeat-request (ARQ) is widely used for error control in data communication systems. This method of error control is simple and provides high system reliability. If a properly chosen code is used for error detection, virtually error-free data transmission can be attained. Various types of ARQ and hybrid ARQ schemes, and error detection using linear block codes are surveyed.
A novel approach to solve nonlinear Fredholm integral equations of the second kind.
Li, Hu; Huang, Jin
2016-01-01
In this paper, we present a novel approach to solve nonlinear Fredholm integral equations of the second kind. The algorithm is constructed from the integral mean value theorem and Newton iteration. Convergence and error analysis of the numerical solutions are given. Moreover, numerical examples show that the algorithm is very effective and simple.
Analysis of Calibration Errors for Both Short and Long Stroke White Light Experiments
NASA Technical Reports Server (NTRS)
Pan, Xaiopei
2006-01-01
This work will analyze focusing and tilt variations introduced by thermal changes in calibration processes. In particular the accuracy limits are presented for common short- and long-stroke experiments. A new, simple, practical calibration scheme is proposed and analyzed based on the SIM PlanetQuest's Micro-Arcsecond Metrology (MAM) testbed experiments.
Adaptive reduction of constitutive model-form error using a posteriori error estimation techniques
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bishop, Joseph E.; Brown, Judith Alice
In engineering practice, models are typically kept as simple as possible for ease of setup and use, computational efficiency, maintenance, and overall reduced complexity to achieve robustness. In solid mechanics, a simple and efficient constitutive model may be favored over one that is more predictive, but is difficult to parameterize, is computationally expensive, or is simply not available within a simulation tool. In order to quantify the modeling error due to the choice of a relatively simple and less predictive constitutive model, we adopt the use of a posteriori model-form error-estimation techniques. Based on local error indicators in the energy norm, an algorithm is developed for reducing the modeling error by spatially adapting the material parameters in the simpler constitutive model. The resulting material parameters are not material properties per se, but depend on the given boundary-value problem. As a first step to the more general nonlinear case, we focus here on linear elasticity in which the "complex" constitutive model is general anisotropic elasticity and the chosen simpler model is isotropic elasticity. As a result, the algorithm for adaptive error reduction is demonstrated using two examples: (1) a transversely-isotropic plate with hole subjected to tension, and (2) a transversely-isotropic tube with two side holes subjected to torsion.
Calibration of a stack of NaI scintillators at the Berkeley Bevalac
NASA Technical Reports Server (NTRS)
Schindler, S. M.; Buffington, A.; Lau, K.; Rasmussen, I. L.
1983-01-01
An analysis of the carbon and argon data reveals that essentially all of the charge-changing fragmentation reactions within the stack can be identified and removed by imposing the simple criteria relating the observed energy deposition profiles to the expected Bragg curve depositions. It is noted that these criteria are even capable of identifying approximately one-third of the expected neutron-stripping interactions, which in these cases have anomalous deposition profiles. The contribution of mass error from uncertainty in delta E has an upper limit of 0.25 percent for Mn; this produces an associated mass error for the experiment of about 0.14 amu. It is believed that this uncertainty will change little with changing gamma. Residual errors in the mapping produce even smaller mass errors for lighter isotopes, whereas photoelectron fluctuations and delta-ray effects are approximately the same independent of the charge and energy deposition.
Phillips, A. M.; Birch, N. C.; Ribbans, W. J.
1997-01-01
Twenty-five orthopaedic surgeons underwent eight motor and sensory tests while using four different glove combinations and without gloves. As well as single and double latex, surgeons wore a simple Kevlar glove with latex inside and outside and then wore a Kevlar and Medak glove with latex inside and outside, as recommended by the manufacturers. The effect of learning with each sequence was neutralised by randomising the glove order. The time taken to complete each test was recorded and, where appropriate, error rates were noted. Simple sensory tests took progressively longer to perform so that using the thickest glove combination led to the completion times being doubled. Error rates increased significantly. Tests of stereognosis also took longer and use of the thickest glove combination caused these tests to take three times as long on average. Error rates again increased significantly. However, prolongation of motor tasks was less marked. We conclude that, armed with this quantitative analysis of sensitivity and dexterity impairment, surgeons can judge the relative difficulties that may be incurred as a result of wearing the gloves against the benefits that they offer in protection. PMID:9135240
Schoenberg, Mike R; Rum, Ruba S
2017-11-01
Rapid, clear and efficient communication of neuropsychological results is essential to benefit patient care. Errors in communication are a leading cause of medical errors; nevertheless, there remains a lack of consistency in how neuropsychological scores are communicated. A major limitation in the communication of neuropsychological results is the inconsistent use of qualitative descriptors for standardized test scores and the use of vague terminology. A PubMed search from 1 Jan 2007 to 1 Aug 2016 was conducted to identify guidelines or consensus statements for the description and reporting of qualitative terms used to communicate neuropsychological test scores. The review found the use of confusing and overlapping terms to describe various ranges of percentile standardized test scores. In response, we propose a simplified set of qualitative descriptors for normalized test scores (Q-Simple) as a means to reduce errors in communicating test results. The Q-Simple qualitative terms are: 'very superior', 'superior', 'high average', 'average', 'low average', 'borderline' and 'abnormal/impaired'. A case example illustrates the proposed Q-Simple qualitative classification system to communicate neuropsychological results for neurosurgical planning. The Q-Simple qualitative descriptor system is intended to improve and standardize communication of standardized neuropsychological test scores. Further research is needed to evaluate neuropsychological communication errors. Conveying the clinical implications of neuropsychological results in a manner that minimizes the risk of communication errors is a quintessential component of evidence-based practice. Copyright © 2017 Elsevier B.V. All rights reserved.
[Quantitative analysis of nucleotide mixtures with terahertz time domain spectroscopy].
Zhang, Zeng-yan; Xiao, Ti-qiao; Zhao, Hong-wei; Yu, Xiao-han; Xi, Zai-jun; Xu, Hong-jie
2008-09-01
Adenosine, thymidine, guanosine, cytidine and uridine form the building blocks of ribonucleic acid (RNA) and deoxyribonucleic acid (DNA). Nucleosides and their derivatives all have biological activities. Some of them can be used as medicines directly or as materials to synthesize other medicines. It is therefore meaningful to determine the components and their contents in nucleoside mixtures. In the present paper, the components and contents of mixtures of adenosine, thymidine, guanosine, cytidine and uridine were analyzed. THz absorption spectra of pure nucleosides were taken as standard spectra. The mixtures' absorption spectra were analyzed by linear regression with a non-negative constraint to identify the components and their relative contents. The experimental and analytical results show that it is simple and effective to obtain the components and their relative percentages in the mixtures by terahertz time domain spectroscopy, with a relative error of less than 10%. A component that is absent can be excluded exactly by this method, and the error sources were also analyzed. All the experiments and analyses confirm that this method causes no damage or contamination to the sample. This suggests that it will be a simple and effective new method for biochemical material analysis, which extends the application field of THz-TDS.
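The decomposition described above (linear regression with a non-negative constraint against pure-component reference spectra) can be sketched with an off-the-shelf non-negative least-squares solver. The sketch below is illustrative only; the reference spectra and mixture are synthetic stand-ins.

```python
# Decompose a mixture THz absorption spectrum into non-negative contributions
# of pure-component reference spectra via non-negative least squares (NNLS).
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(1)
A = rng.random((200, 5))                 # columns: reference spectra of the 5 nucleosides
true_w = np.array([0.4, 0.0, 0.3, 0.3, 0.0])
b = A @ true_w + 0.01 * rng.random(200)  # measured mixture spectrum (with a little noise)

coeffs, _residual = nnls(A, b)           # non-negatively constrained fit
print(coeffs / coeffs.sum())             # relative contents; absent components come out near zero
```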
How to test validity in orthodontic research: a mixed dentition analysis example.
Donatelli, Richard E; Lee, Shin-Jae
2015-02-01
The data used to test the validity of a prediction method should be different from the data used to generate the prediction model. In this study, we explored whether an independent data set is mandatory for testing the validity of a new prediction method and how validity can be tested without independent new data. Several validation methods were compared in an example using the data from a mixed dentition analysis with a regression model. The validation errors of real mixed dentition analysis data and simulation data were analyzed for increasingly large data sets. The validation results of both the real and the simulation studies demonstrated that the leave-1-out cross-validation method had the smallest errors. The largest errors occurred in the traditional simple validation method. The differences between the validation methods diminished as the sample size increased. The leave-1-out cross-validation method seems to be an optimal validation method for improving the prediction accuracy in a data set with limited sample sizes. Copyright © 2015 American Association of Orthodontists. Published by Elsevier Inc. All rights reserved.
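As an illustration of the comparison above, leave-one-out cross-validation of a regression-based prediction model can be written in a few lines. This is a generic sketch with synthetic stand-in data, not the mixed dentition data or model from the study.

```python
# Leave-one-out cross-validation of a linear regression prediction model.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import LeaveOneOut

rng = np.random.default_rng(0)
X = rng.random((40, 3))                                     # placeholder predictors
y = X @ np.array([1.5, 0.8, 2.0]) + rng.normal(0, 0.1, 40)  # placeholder response

errors = []
for train_idx, test_idx in LeaveOneOut().split(X):
    model = LinearRegression().fit(X[train_idx], y[train_idx])
    errors.append(y[test_idx][0] - model.predict(X[test_idx])[0])

print("mean absolute validation error:", np.mean(np.abs(errors)))
```

In contrast, the traditional simple validation approach fits once on a training subset and scores once on the held-out remainder, which is wasteful when the sample size is limited.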
NASA Astrophysics Data System (ADS)
Berthet, Lionel; Marty, Renaud; Bourgin, François; Viatgé, Julie; Piotte, Olivier; Perrin, Charles
2017-04-01
An increasing number of operational flood forecasting centres assess the predictive uncertainty associated with their forecasts and communicate it to the end users. This information can match the end users' needs (i.e., prove useful for efficient crisis management) only if it is reliable: reliability is therefore a key quality for operational flood forecasts. In 2015, the French flood forecasting national and regional services (Vigicrues network; www.vigicrues.gouv.fr) implemented a framework to compute quantitative discharge and water level forecasts and to assess the predictive uncertainty. Among the possible technical options to achieve this goal, a statistical analysis of past forecasting errors of deterministic models has been selected (QUOIQUE method, Bourgin, 2014). It is a data-based and non-parametric approach based on as few assumptions as possible about the mathematical structure of the forecasting errors. In particular, a very simple assumption is made regarding the predictive uncertainty distributions for large events outside the range of the calibration data: the multiplicative error distribution is assumed to be constant, whatever the magnitude of the flood. Indeed, the predictive distributions may not be reliable in extrapolation. However, estimating the predictive uncertainty for these rare events is crucial when major floods are of concern. In order to improve forecast reliability for major floods, an attempt is made to combine the operational strength of the empirical statistical analysis with a simple error model. Since the heteroscedasticity of forecast errors can considerably weaken the predictive reliability for large floods, this error modelling is based on the log-sinh transformation, which has been shown to significantly reduce the heteroscedasticity of the transformed error in a simulation context, even for flood peaks (Wang et al., 2012). Exploratory tests on some operational forecasts issued during the recent floods experienced in France (major spring floods in June 2016 on the Loire river tributaries and flash floods in fall 2016) will be shown and discussed. References: Bourgin, F. (2014). How to assess the predictive uncertainty in hydrological modelling? An exploratory work on a large sample of watersheds, AgroParisTech. Wang, Q. J., Shrestha, D. L., Robertson, D. E. and Pokhrel, P. (2012). A log-sinh transformation for data normalization and variance stabilization. Water Resources Research, W05514, doi:10.1029/2011WR010973.
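For reference, the log-sinh transformation of Wang et al. (2012) cited above has, up to parameterization conventions, the form

```latex
z \;=\; \frac{1}{b}\,\ln\!\bigl(\sinh(a + b\,q)\bigr),
```

where q is the discharge (or the forecast error variable) and a and b are fitted transformation parameters; the statistical error analysis is then carried out on the transformed variable z, whose errors are closer to homoscedastic.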
NASA Astrophysics Data System (ADS)
Forsyth, J. M.
1983-02-01
In this document the authors summarize their investigation of the reflecting properties of X-ray multilayers. The breadth of this investigation indicates the utility of the difference equation formalism in the analysis of such structures. The formalism is particularly useful in analyzing multilayers whose structure is not a simple periodic bilayer. The complexity in structure can be either intentional, as in multilayers made by in-situ reflectance monitoring, or a consequence of a degradation mechanism, such as random thickness errors or interlayer diffusion. Both the analysis of thickness errors and the analysis of interlayer diffusion are conceptually simple, effectively one-dimensional problems that are straightforward to pose. In their analysis of in-situ reflectance monitoring, the authors provide a quantitative understanding of an experimentally successful process that has not previously been treated theoretically. As X-ray multilayers come into wider use, there will undoubtedly be an increasing need for a more precise understanding of their reflecting properties. Thus, it is expected that in the future more detailed modeling will be undertaken of less easily specified structures than those above. The authors believe that their formalism will continue to prove useful in the modeling of these more complex structures. One such structure that may be of interest is that of a multilayer degraded by interfacial roughness.
SU-E-T-88: Comprehensive Automated Daily QA for Hypo- Fractionated Treatments
DOE Office of Scientific and Technical Information (OSTI.GOV)
McGuinness, C; Morin, O
2014-06-01
Purpose: The trend towards more SBRT treatments with fewer high-dose fractions places increased importance on daily QA. Patient plan-specific QA with 3%/3mm gamma analysis and daily output constancy checks may not be enough to guarantee the level of accuracy required for SBRT treatments. But increasing the already extensive amount of required QA procedures is a daunting proposition. We performed a feasibility study for more comprehensive automated daily QA that could improve the diagnostic capabilities of QA without increasing workload. Methods: We performed the study on a Siemens Artiste linear accelerator using the integrated flat-panel EPID. We included square fields, a picket fence, overlap and representative IMRT fields to measure output, flatness, symmetry, beam center, and percent difference from the standard. We also imposed a set of machine errors: MLC leaf position, machine output, and beam steering to compare with the standard. Results: Daily output was consistent within +/- 1%. Changes in steering current of 1.4% and 2.4% resulted in 3.2% and 6.3% changes in flatness, respectively. MLC leaf offset errors of 1 and 2 mm were visibly obvious in difference plots, but passed a 3%/3mm gamma analysis. A simple test of transmission in a picket fence can catch an offset of a single leaf by 1 mm. The entire morning QA sequence is performed in less than 30 minutes and images are automatically analyzed. Conclusion: Automated QA procedures could be used to provide more comprehensive information about the machine with less time and human involvement. We have also shown that other simple tests are better able to catch MLC leaf position errors than the 3%/3mm gamma analysis commonly used for IMRT and modulated arc treatments. Finally, this information could be used to watch trends of the machine and predict problems before they lead to costly machine downtime.
NASA Astrophysics Data System (ADS)
Shinnaka, Shinji; Sano, Kousuke
This paper presents a new unified analysis of the estimation errors of model-matching phase-estimation methods such as rotor-flux state observers, back-EMF state observers, and back-EMF disturbance observers for sensorless drive of permanent-magnet synchronous motors. The analytical solutions for the estimation errors, whose validity is confirmed by numerical experiments, are broadly applicable. As an example of this applicability, a new trajectory-oriented vector control method is proposed, which can directly realize a quasi-optimal strategy minimizing total losses with no additional computational load, by simply orienting one of the vector-control coordinates to the associated quasi-optimal trajectory. The coordinate orientation rule, which is derived analytically, is surprisingly simple. Consequently, the trajectory-oriented vector control method can be applied to a number of conventional vector control systems that use one of the model-matching phase-estimation methods.
Distribution of the two-sample t-test statistic following blinded sample size re-estimation.
Lu, Kaifeng
2016-05-01
We consider blinded sample size re-estimation based on the simple one-sample variance estimator at an interim analysis. We characterize the exact distribution of the standard two-sample t-test statistic at the final analysis. We describe a simulation algorithm for evaluating the probability of rejecting the null hypothesis at a given treatment effect. We compare the blinded sample size re-estimation method with two unblinded methods with respect to the empirical type I error, the empirical power, and the empirical distribution of the standard deviation estimator and the final sample size. We characterize the type I error inflation across the range of standardized non-inferiority margins for non-inferiority trials, and derive the adjusted significance level to ensure type I error control for a given sample size of the internal pilot study. We show that the adjusted significance level increases as the sample size of the internal pilot study increases. Copyright © 2016 John Wiley & Sons, Ltd.
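The simulation algorithm mentioned above can be sketched as follows. This is a rough illustration under simplifying assumptions (superiority test, equal allocation, normal data, a z-based sample size formula), not the exact algorithm or parameterization of the paper.

```python
# Simulate one trial with blinded sample size re-estimation using the simple
# one-sample variance estimator at the interim analysis, then apply the
# standard two-sample t-test at the final analysis.
import numpy as np
from scipy import stats

def one_trial(delta_true, delta_plan=1.0, sigma=2.0, n_pilot=40,
              alpha=0.05, power=0.90, seed=None):
    rng = np.random.default_rng(seed)
    z_a, z_b = stats.norm.ppf(1 - alpha / 2), stats.norm.ppf(power)

    # Internal pilot: n_pilot per arm, variance estimated blinded (arms pooled as one sample)
    x1 = rng.normal(0.0, sigma, n_pilot)
    x2 = rng.normal(delta_true, sigma, n_pilot)
    s2_blinded = np.var(np.concatenate([x1, x2]), ddof=1)

    # Re-estimated per-arm sample size for the planned effect delta_plan
    n_final = max(n_pilot, int(np.ceil(2 * s2_blinded * (z_a + z_b) ** 2 / delta_plan ** 2)))

    # Second-stage data, then the standard two-sample t-test on all data
    x1 = np.concatenate([x1, rng.normal(0.0, sigma, n_final - n_pilot)])
    x2 = np.concatenate([x2, rng.normal(delta_true, sigma, n_final - n_pilot)])
    return stats.ttest_ind(x1, x2).pvalue < alpha

# Empirical type I error under the null hypothesis (no treatment effect):
print(np.mean([one_trial(0.0, seed=i) for i in range(2000)]))
```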
Combating omission errors through task analysis and good reminders.
Reason, J
2002-03-01
Leaving out necessary task steps is the single most common human error type. Certain task steps possess characteristics that are more likely to provoke omissions than others, and these can be identified in advance. The paper reports two studies. The first, involving a simple photocopier, established that failing to remove the last page of the original is the commonest omission. This step possesses four distinct error-provoking features that combine their effects in an additive fashion. The second study examined the degree to which everyday memory aids satisfy five features of a good reminder: conspicuity, contiguity, content, context, and countability. A close correspondence was found between the percentage use of strategies and the degree to which they satisfied these five criteria. A three-stage omission management programme was outlined: task analysis (identifying discrete task steps) of some safety-critical activity; assessment of the omission likelihood of each step; and the choice and application of a suitable reminder. Such a programme is applicable to a variety of healthcare procedures.
Modal cost analysis for simple continua
NASA Technical Reports Server (NTRS)
Hu, A.; Skelton, R. E.; Yang, T. Y.
1988-01-01
The most popular finite element codes are based upon appealing theories of convergence of modal frequencies. For example, the popularity of cubic elements for beam-like structures is due to the rapid convergence of modal frequencies and stiffness properties. However, for those problems in which the primary consideration is the accuracy of response of the structure at specified locations, it is more important to obtain accuracy in the modal costs than in the modal frequencies. The modal cost represents the contribution of a mode in the norm of the response vector. This paper provides a complete modal cost analysis for simple continua such as beam-like structures. Upper bounds are developed for mode truncation errors in the model reduction process and modal cost analysis dictates which modes to retain in order to reduce the model for control design purposes.
An improved semi-implicit method for structural dynamics analysis
NASA Technical Reports Server (NTRS)
Park, K. C.
1982-01-01
A semi-implicit algorithm is presented for direct time integration of the structural dynamics equations. The algorithm avoids the factoring of the implicit difference solution matrix and mitigates the unacceptable accuracy losses which plagued previous semi-implicit algorithms. This substantial accuracy improvement is achieved by augmenting the solution matrix with two simple diagonal matrices of the order of the integration truncation error.
Characterizing error distributions for MISR and MODIS optical depth data
NASA Astrophysics Data System (ADS)
Paradise, S.; Braverman, A.; Kahn, R.; Wilson, B.
2008-12-01
The Multi-angle Imaging SpectroRadiometer (MISR) and Moderate Resolution Imaging Spectroradiometer (MODIS) on NASA's EOS satellites collect massive, long-term data records on aerosol amounts and particle properties. MISR and MODIS have different but complementary sampling characteristics. In order to realize maximum scientific benefit from these data, the nature of their error distributions must be quantified and understood so that discrepancies between them can be rectified and their information combined in the most beneficial way. By 'error' we mean all sources of discrepancy between the true value of the quantity of interest and the measured value, including instrument measurement errors, artifacts of retrieval algorithms, and differential spatial and temporal sampling characteristics. Previously in [Paradise et al., Fall AGU 2007: A12A-05] we presented a unified, global analysis and comparison of MISR and MODIS measurement biases and variances over the lives of the missions. We used AErosol RObotic NETwork (AERONET) data as ground truth and evaluated MISR and MODIS optical depth distributions relative to AERONET using simple linear regression. However, AERONET data are themselves instrumental measurements subject to sources of uncertainty. In this talk, we discuss results from an improved analysis of MISR and MODIS error distributions that uses errors-in-variables regression, accounting for uncertainties in both the dependent and independent variables. We demonstrate on optical depth data, but the method is generally applicable to other aerosol properties as well.
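Errors-in-variables regression of the kind described above can be illustrated with an orthogonal distance regression fit, which accounts for stated uncertainties in both variables. The sketch below uses scipy.odr with synthetic placeholder data and assumed error levels; it is not the study's actual analysis.

```python
# Errors-in-variables (orthogonal distance) regression of satellite optical depth
# on AERONET optical depth, with 1-sigma uncertainties assumed for both.
import numpy as np
from scipy import odr

rng = np.random.default_rng(0)
x = rng.random(100) * 0.5                       # placeholder "AERONET" optical depths
y = 0.02 + 1.1 * x + rng.normal(0, 0.02, 100)   # placeholder "satellite" retrievals
sx = np.full_like(x, 0.01)                      # assumed AERONET 1-sigma errors
sy = np.full_like(y, 0.03)                      # assumed satellite 1-sigma errors

linear = odr.Model(lambda beta, xx: beta[0] + beta[1] * xx)
fit = odr.ODR(odr.RealData(x, y, sx=sx, sy=sy), linear, beta0=[0.0, 1.0]).run()
print("intercept, slope:", fit.beta, " standard errors:", fit.sd_beta)
```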
Specificity of Atmosphere Correction of Satellite Ocean Color Data in Far-Eastern Region
NASA Astrophysics Data System (ADS)
Trusenkova, O.; Kachur, V.; Aleksanin, A. I.
2016-02-01
An error analysis of satellite reflectance coefficients (Rrs) from MODIS/AQUA colour data was carried out for two atmospheric correction algorithms (NIR, MUMM) in the Far-Eastern region. Several unique sets of collocated in situ and satellite measurements were analysed; each set contains measurements made with an ASD spectroradiometer for each satellite pass. The measurement locations were selected so that the chlorophyll-a concentration has high variability. Analysis of an arbitrary set demonstrated that the main error component is systematic and depends in a simple way on the Rrs values. The reasons for such error behaviour are considered. The most probable explanation for the large errors in ocean colour parameters in the Far-Eastern region is the possible presence of high concentrations of continental aerosol. A comparison of satellite and in situ measurements at AERONET stations in the USA and South Korea has also been made. It was shown that, for NIR correction of the atmospheric influence, the errors in the two regions differ by up to a factor of 10 for almost the same water turbidity and relatively good accuracy of the computed aerosol optical thickness. The study was supported by Russian Science Foundation grant No. 14-50-00034, Russian Foundation for Basic Research grant No. 15-35-21032-mol-a-ved, and the Program of Basic Research "Far East" of the Far Eastern Branch of the Russian Academy of Sciences.
The Effect of Error Correlation on Interfactor Correlation in Psychometric Measurement
ERIC Educational Resources Information Center
Westfall, Peter H.; Henning, Kevin S. S.; Howell, Roy D.
2012-01-01
This article shows how interfactor correlation is affected by error correlations. Theoretical and practical justifications for error correlations are given, and a new equivalence class of models is presented to explain the relationship between interfactor correlation and error correlations. The class allows simple, parsimonious modeling of error…
A Third Moment Adjusted Test Statistic for Small Sample Factor Analysis.
Lin, Johnny; Bentler, Peter M
2012-01-01
Goodness of fit testing in factor analysis is based on the assumption that the test statistic is asymptotically chi-square; but this property may not hold in small samples even when the factors and errors are normally distributed in the population. Robust methods such as Browne's asymptotically distribution-free method and Satorra Bentler's mean scaling statistic were developed under the presumption of non-normality in the factors and errors. This paper finds new application to the case where factors and errors are normally distributed in the population but the skewness of the obtained test statistic is still high due to sampling error in the observed indicators. An extension of Satorra Bentler's statistic is proposed that not only scales the mean but also adjusts the degrees of freedom based on the skewness of the obtained test statistic in order to improve its robustness under small samples. A simple simulation study shows that this third moment adjusted statistic asymptotically performs on par with previously proposed methods, and at a very small sample size offers superior Type I error rates under a properly specified model. Data from Mardia, Kent and Bibby's study of students tested for their ability in five content areas that were either open or closed book were used to illustrate the real-world performance of this statistic.
NASA Technical Reports Server (NTRS)
Ulvestad, J. S.
1992-01-01
A consider error covariance analysis was performed in order to investigate the orbit-determination performance attainable using two-way (coherent) 8.4-GHz (X-band) Doppler data for two segments of the planned Mars Observer trajectory. The analysis includes the effects of the current level of calibration errors in tropospheric delay, ionospheric delay, and station locations, with particular emphasis placed on assessing the performance of several candidate elevation-dependent data-weighting functions. One weighting function was found that yields good performance for a variety of tracking geometries. This weighting function is simple and robust; it reduces the danger of error that might exist if an analyst had to select one of several different weighting functions that are highly sensitive to the exact choice of parameters and to the tracking geometry. Orbit-determination accuracy improvements that may be obtained through the use of calibration data derived from Global Positioning System (GPS) satellites also were investigated, and can be as much as a factor of three in some components of the spacecraft state vector. Assuming that both station-location errors and troposphere calibration errors are reduced simultaneously, the recommended data-weighting function need not be changed when GPS calibrations are incorporated in the orbit-determination process.
Crowding with conjunctions of simple features.
Põder, Endel; Wagemans, Johan
2007-11-20
Several recent studies have related crowding with the feature integration stage in visual processing. In order to understand the mechanisms involved in this stage, it is important to use stimuli that have several features to integrate, and these features should be clearly defined and measurable. In this study, Gabor patches were used as target and distractor stimuli. The stimuli differed in three dimensions: spatial frequency, orientation, and color. A group of 3, 5, or 7 objects was presented briefly at 4 deg eccentricity of the visual field. The observers' task was to identify the object located in the center of the group. A strong effect of the number of distractors was observed, consistent with various spatial pooling models. The analysis of incorrect responses revealed that these were a mix of feature errors and mislocalizations of the target object. Feature errors were not purely random, but biased by the features of distractors. We propose a simple feature integration model that predicts most of the observed regularities.
Bonetti, Jennifer; Quarino, Lawrence
2014-05-01
This study has shown that the combination of simple techniques with the use of multivariate statistics offers the potential for the comparative analysis of soil samples. Five samples were obtained from each of twelve state parks across New Jersey in both the summer and fall seasons. Each sample was examined using particle-size distribution, pH analysis in both water and 1 M CaCl2 , and a loss on ignition technique. Data from each of the techniques were combined, and principal component analysis (PCA) and canonical discriminant analysis (CDA) were used for multivariate data transformation. Samples from different locations could be visually differentiated from one another using these multivariate plots. Hold-one-out cross-validation analysis showed error rates as low as 3.33%. Ten blind study samples were analyzed resulting in no misclassifications using Mahalanobis distance calculations and visual examinations of multivariate plots. Seasonal variation was minimal between corresponding samples, suggesting potential success in forensic applications. © 2014 American Academy of Forensic Sciences.
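A minimal sketch of this multivariate workflow is shown below, using linear discriminant analysis as a stand-in for canonical discriminant analysis (the two are closely related) and leave-one-out cross-validation for the error rate. The feature matrix and labels are placeholders, not the study's soil data.

```python
# PCA for dimension reduction followed by discriminant analysis, with a
# hold-one-out (leave-one-out) cross-validated classification error rate.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import LeaveOneOut, cross_val_score
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
X = rng.random((120, 8))             # e.g. particle-size fractions, pH, loss on ignition
y = np.repeat(np.arange(12), 10)     # 12 collection sites, 10 samples each

pipe = make_pipeline(PCA(n_components=5), LinearDiscriminantAnalysis())
scores = cross_val_score(pipe, X, y, cv=LeaveOneOut())
print("hold-one-out error rate:", 1 - scores.mean())
```

With the random stand-in data the error rate is near chance; with real, structured soil measurements the study reports rates as low as 3.33%.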
The problem with simple lumped parameter models: Evidence from tritium mean transit times
NASA Astrophysics Data System (ADS)
Stewart, Michael; Morgenstern, Uwe; Gusyev, Maksym; Maloszewski, Piotr
2017-04-01
Simple lumped parameter models (LPMs) based on assuming homogeneity and stationarity in catchments and groundwater bodies are widely used to model and predict hydrological system outputs. However, most systems are not homogeneous or stationary, and the errors resulting from disregarding the real heterogeneity and non-stationarity of such systems are not well understood and rarely quantified. As an example, mean transit times (MTTs) of streamflow are usually estimated from tracer data using simple LPMs. The MTT or transit time distribution of water in a stream reveals basic catchment properties such as water flow paths, storage and mixing. Importantly however, Kirchner (2016a) has shown that there can be large (several hundred percent) aggregation errors in MTTs inferred from seasonal cycles in conservative tracers such as chloride or stable isotopes when they are interpreted using simple LPMs (i.e. a range of gamma models or GMs). Here we show that MTTs estimated using tritium concentrations are similarly affected by aggregation errors due to heterogeneity and non-stationarity when interpreted using simple LPMs (e.g. GMs). The tritium aggregation error arises from the strong nonlinearity between tritium concentrations and MTT, whereas for seasonal tracer cycles it is due to the nonlinearity between tracer cycle amplitudes and MTT. In effect, water from young subsystems in the catchment outweighs water from old subsystems. The main difference between the aggregation errors with the different tracers is that with tritium the error applies at much greater ages than it does with seasonal tracer cycles. We stress that the aggregation errors arise when simple LPMs are applied (with simple LPMs the hydrological system is assumed to be a homogeneous whole, with parameters representing averages for the system). With well-chosen compound LPMs (which are combinations of simple LPMs), on the other hand, aggregation errors are very much smaller because young and old water flows are treated separately. "Well-chosen" means that the compound LPM is based on hydrologically and geologically validated information, and the choice can be assisted by matching simulations to time series of tritium measurements. References: Kirchner, J.W. (2016a): Aggregation in environmental systems - Part 1: Seasonal tracer cycles quantify young water fractions, but not mean transit times, in spatially heterogeneous catchments. Hydrol. Earth Syst. Sci. 20, 279-297. Stewart, M.K., Morgenstern, U., Gusyev, M.A., Maloszewski, P. (2016): Aggregation effects on tritium-based mean transit times and young water fractions in spatially heterogeneous catchments and groundwater systems, and implications for past and future applications of tritium. Submitted to Hydrol. Earth Syst. Sci., 10 October 2016, doi:10.5194/hess-2016-532.
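The aggregation effect described above can be illustrated with a deliberately idealized calculation: assuming a constant tritium input at steady state, an exponential transit time distribution with mean tau yields an output concentration C = C_in / (1 + lambda*tau). The toy sketch below (not the papers' method, which uses the real time-varying tritium input) shows how a single lumped model fitted to the mixed signal of a young and an old subsystem returns an apparent MTT far below the true mean.

```python
# Toy demonstration of tritium MTT aggregation error for two equal subcatchments.
import numpy as np

lam = np.log(2) / 12.32            # tritium decay constant (1/yr), half-life ~12.32 yr
C_in = 1.0                         # arbitrary constant input concentration
tau_young, tau_old = 2.0, 100.0    # true MTTs (yr) of the two subsystems

C_mix = 0.5 * (C_in / (1 + lam * tau_young) + C_in / (1 + lam * tau_old))
tau_apparent = (C_in / C_mix - 1) / lam     # MTT a single lumped model would infer
print(f"apparent MTT {tau_apparent:.1f} yr vs true mean {(tau_young + tau_old) / 2:.1f} yr")
# The young, tritium-rich water dominates the mixture, biasing the lumped MTT low.
```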
Zhang, Wei; Regterschot, G Ruben H; Wahle, Fabian; Geraedts, Hilde; Baldus, Heribert; Zijlstra, Wiebren
2014-01-01
Falls result in substantial disability, morbidity, and mortality among older people. Early detection of fall risks and timely intervention can prevent falls and injuries due to falls. Simple field tests, such as repeated chair rise, are used in clinical assessment of fall risks in older people. Development of on-body sensors introduces potential beneficial alternatives for traditional clinical methods. In this article, we present a pendant sensor based chair rise detection and analysis algorithm for fall risk assessment in older people. The recall and the precision of the transfer detection were 85% and 87% in standard protocol, and 61% and 89% in daily life activities. Estimation errors of chair rise performance indicators: duration, maximum acceleration, peak power and maximum jerk were tested in over 800 transfers. Median estimation error in transfer peak power ranged from 1.9% to 4.6% in various tests. Among all the performance indicators, maximum acceleration had the lowest median estimation error of 0% and duration had the highest median estimation error of 24% over all tests. The developed algorithm might be feasible for continuous fall risk assessment in older people.
Qibo, Feng; Bin, Zhang; Cunxing, Cui; Cuifang, Kuang; Yusheng, Zhai; Fenglin, You
2013-11-04
A simple method for simultaneously measuring the 6DOF geometric motion errors of the linear guide was proposed. The mechanisms for measuring straightness and angular errors and for enhancing their resolution are described in detail. A common-path method for measuring the laser beam drift was proposed and it was used to compensate the errors produced by the laser beam drift in the 6DOF geometric error measurements. A compact 6DOF system was built. Calibration experiments with certain standard measurement meters showed that our system has a standard deviation of 0.5 µm in a range of ± 100 µm for the straightness measurements, and standard deviations of 0.5", 0.5", and 1.0" in the range of ± 100" for pitch, yaw, and roll measurements, respectively.
Jember, Abebaw; Hailu, Mignote; Messele, Anteneh; Demeke, Tesfaye; Hassen, Mohammed
2018-01-01
A medication error (ME) is any preventable event that may cause or lead to inappropriate medication use or patient harm. Voluntary reporting has a principal role in appreciating the extent and impact of medication errors. Thus, exploring the proportion of medication error reporting and its associated factors among nurses is important to inform service providers and program implementers so as to improve the quality of healthcare services. An institution-based quantitative cross-sectional study was conducted among 397 nurses from March 6 to May 10, 2015. Stratified sampling followed by simple random sampling was used to select the study participants. The data were collected using a structured self-administered questionnaire adapted from studies conducted in Australia and Jordan. A pilot study was carried out to validate the questionnaire before data collection for this study. Bivariate and multivariate logistic regression models were fitted to identify factors associated with the proportion of medication error reporting among nurses. An adjusted odds ratio with 95% confidence interval was computed to determine the level of significance. The proportion of medication error reporting among nurses was found to be 57.4%. Regression analysis showed that sex, marital status, having made a medication error, and medication error experience were significantly associated with medication error reporting. The proportion of medication error reporting among nurses in this study was found to be higher than in other studies.
NASA Astrophysics Data System (ADS)
Waeldele, F.
1983-01-01
The influence of sample shape deviations on the measurement uncertainties and the optimization of computer aided coordinate measurement were investigated for a circle and a cylinder. Using the complete error propagation law in matrix form the parameter uncertainties are calculated, taking the correlation between the measurement points into account. Theoretical investigations show that the measuring points have to be equidistantly distributed and that for a cylindrical body a measuring point distribution along a cross section is better than along a helical line. The theoretically obtained expressions to calculate the uncertainties prove to be a good estimation basis. The simple error theory is not satisfactory for estimation. The complete statistical data analysis theory helps to avoid aggravating measurement errors and to adjust the number of measuring points to the required measuring uncertainty.
Reconstruction of regional mean temperature for East Asia since 1900s and its uncertainties
NASA Astrophysics Data System (ADS)
Hua, W.
2017-12-01
Regional average surface air temperature (SAT) is one of the key variables used to investigate climate change. Unfortunately, because of the limited observations over East Asia, there are gaps in the observational sampling available for regional mean SAT analysis, which is important for estimating past climate change. In this study, the regional average temperature of East Asia since the 1900s is calculated by an Empirical Orthogonal Function (EOF)-based optimal interpolation (OI) method that takes the data errors into account. The results show that our estimate is more precise and robust than results from a simple average, which provides a better way to reconstruct past climate. In addition to the reconstructed regional average SAT anomaly time series, we also estimated the uncertainties of the reconstruction. The root mean square error (RMSE) results show that the error decreases with time and is not sufficiently large to alter the conclusions on the persistent warming in East Asia during the twenty-first century. Moreover, a test of the influence of data error on the reconstruction clearly shows the sensitivity of the reconstruction to the size of the data error.
Evaluation of Bayesian Sequential Proportion Estimation Using Analyst Labels
NASA Technical Reports Server (NTRS)
Lennington, R. K.; Abotteen, K. M. (Principal Investigator)
1980-01-01
The author has identified the following significant results. A total of ten Large Area Crop Inventory Experiment Phase 3 blind sites and analyst-interpreter labels were used in a study to compare proportional estimates obtained by the Bayes sequential procedure with estimates obtained from simple random sampling and from Procedure 1. The analyst error rate using the Bayes technique was shown to be no greater than that for the simple random sampling. Also, the segment proportion estimates produced using this technique had smaller bias and mean squared errors than the estimates produced using either simple random sampling or Procedure 1.
Practical Session: Simple Linear Regression
NASA Astrophysics Data System (ADS)
Clausel, M.; Grégoire, G.
2014-12-01
Two exercises are proposed to illustrate simple linear regression. The first one is based on the famous Galton data set on heredity. We use the lm R command and obtain the coefficient estimates, the standard error of the error term, R2, the residuals, etc. In the second example, devoted to data on the vapor tension of mercury, we fit a simple linear regression, predict values, and anticipate multiple linear regression. This practical session is an excerpt from practical exercises proposed by A. Dalalyan at ENPC (see Exercises 1 and 2 of http://certis.enpc.fr/~dalalyan/Download/TP_ENPC_4.pdf).
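For readers working outside R, a rough Python analogue of the lm() workflow in the first exercise is sketched below (the exercise itself uses R; the statsmodels calls and the synthetic stand-in data are assumptions).

```python
# Simple linear regression: coefficient estimates, standard errors, R^2,
# residuals, and predictions, mirroring the R lm() outputs described above.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
x = rng.uniform(60, 75, 100)                    # placeholder predictor (e.g. parent height)
y = 24 + 0.65 * x + rng.normal(0, 2.2, 100)     # placeholder response (e.g. child height)

model = sm.OLS(y, sm.add_constant(x)).fit()
print(model.params)        # intercept and slope
print(model.bse)           # their standard errors
print(model.rsquared)      # R^2
print(model.resid[:5])     # first few residuals
print(model.predict(sm.add_constant(np.array([64.0, 70.0]))))  # predictions at new x values
```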
Use of total electron content data to analyze ionosphere electron density gradients
NASA Astrophysics Data System (ADS)
Nava, B.; Radicella, S. M.; Leitinger, R.; Coïsson, P.
In the presence of electron density gradients the thin shell approximation for the ionosphere, used together with a simple mapping function to convert slant total electron content (TEC) to vertical TEC, could lead to TEC conversion errors. These "mapping function errors" can therefore be used to detect the electron density gradients in the ionosphere. In the present work GPS derived slant TEC data have been used to investigate the effects of the electron density gradients in the middle and low latitude ionosphere under geomagnetic quiet and disturbed conditions. In particular the data corresponding to the geographic area of the American Sector for the days 5-7 April 2000 have been used to perform a complete analysis of mapping function errors based on the "coinciding pierce point technique". The results clearly illustrate the electron density gradient effects according to the locations considered and to the actual levels of disturbance of the ionosphere. In addition, the possibility to assess an ionospheric shell height able to minimize the mapping function errors has been verified.
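For context, the thin shell approximation referred to above converts slant to vertical TEC with a simple geometric mapping function; the sketch below is a generic illustration (the shell height and example values are assumptions, not those of the study).

```python
# Single-layer (thin shell) mapping: vTEC = sTEC * cos(z'), where z' is the
# zenith angle at the ionospheric pierce point.
import numpy as np

R_E = 6371.0      # mean Earth radius, km
H_SHELL = 350.0   # assumed shell height, km

def slant_to_vertical(stec, elevation_deg):
    z = np.radians(90.0 - elevation_deg)           # zenith angle at the receiver
    sin_zp = R_E * np.sin(z) / (R_E + H_SHELL)     # sine of zenith angle at the pierce point
    return stec * np.sqrt(1.0 - sin_zp ** 2)

print(slant_to_vertical(50.0, 30.0))   # e.g. 50 TECU slant TEC observed at 30 deg elevation
```

Horizontal electron density gradients violate the assumptions behind this mapping, which is exactly what makes the resulting conversion errors a useful gradient diagnostic.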
Atmospheric refraction effects on baseline error in satellite laser ranging systems
NASA Technical Reports Server (NTRS)
Im, K. E.; Gardner, C. S.
1982-01-01
Because of the mathematical complexities involved in exact analyses of baseline errors, it is not easy to isolate atmospheric refraction effects; however, by making certain simplifying assumptions about the ranging system geometry, relatively simple expressions can be derived which relate the baseline errors directly to the refraction errors. The results indicate that even in the absence of other errors, the baseline error for intercontinental baselines can be more than an order of magnitude larger than the refraction error.
The cerebellum for jocks and nerds alike.
Popa, Laurentiu S; Hewitt, Angela L; Ebner, Timothy J
2014-01-01
Historically the cerebellum has been implicated in the control of movement. However, the cerebellum's role in non-motor functions, including cognitive and emotional processes, has also received increasing attention. Starting from the premise that the uniform architecture of the cerebellum underlies a common mode of information processing, this review examines recent electrophysiological findings on the motor signals encoded in the cerebellar cortex and then relates these signals to observations in the non-motor domain. Simple spike firing of individual Purkinje cells encodes performance errors, both predicting upcoming errors as well as providing feedback about those errors. Further, this dual temporal encoding of prediction and feedback involves a change in the sign of the simple spike modulation. Therefore, Purkinje cell simple spike firing both predicts and responds to feedback about a specific parameter, consistent with computing sensory prediction errors in which the predictions about the consequences of a motor command are compared with the feedback resulting from the motor command execution. These new findings are in contrast with the historical view that complex spikes encode errors. Evaluation of the kinematic coding in the simple spike discharge shows the same dual temporal encoding, suggesting this is a common mode of signal processing in the cerebellar cortex. Decoding analyses show the considerable accuracy of the predictions provided by Purkinje cells across a range of times. Further, individual Purkinje cells encode linearly and independently a multitude of signals, both kinematic and performance errors. Therefore, the cerebellar cortex's capacity to make associations across different sensory, motor and non-motor signals is large. The results from studying how Purkinje cells encode movement signals suggest that the cerebellar cortex circuitry can support associative learning, sequencing, working memory, and forward internal models in non-motor domains.
Monitoring robot actions for error detection and recovery
NASA Technical Reports Server (NTRS)
Gini, M.; Smith, R.
1987-01-01
Reliability is a serious problem in computer-controlled robot systems. Although robots serve successfully in relatively simple applications such as painting and spot welding, their potential in areas such as automated assembly is hampered by programming problems. A program for assembling parts may be logically correct, execute correctly on a simulator, and even execute correctly on a robot most of the time, yet still fail unexpectedly in the face of real-world uncertainties. Recovery from such errors is far more complicated than recovery from simple controller errors, since even expected errors can often manifest themselves in unexpected ways. Here, a novel approach is presented for improving robot reliability. Instead of anticipating errors, the researchers use knowledge-based programming techniques so that the robot can autonomously exploit knowledge about its task and environment to detect and recover from failures. They describe a preliminary experiment with a system that they designed and constructed.
NASA Technical Reports Server (NTRS)
Abbott, Mark R.
1996-01-01
Our first activity is based on delivery of code to Bob Evans (University of Miami) for integration and eventual delivery to the MODIS Science Data Support Team. As we noted in our previous semi-annual report, coding required the development and analysis of an end-to-end model of fluorescence line height (FLH) errors and sensitivity. This model is described in a paper in press in Remote Sensing of Environment. Since the code was delivered to Miami, we have continued to use this error analysis to evaluate proposed changes in MODIS sensor specifications and performance. Simply evaluating such changes on a band-by-band basis may obscure the true impacts of changes in sensor performance that are manifested in the complete algorithm. This is especially true for FLH, which is sensitive to band placement and width. The error model will be used by Howard Gordon (Miami) to evaluate the effects of absorbing aerosols on FLH algorithm performance. Presently, FLH relies only on simple corrections for atmospheric effects (viewing geometry, Rayleigh scattering) without correcting for aerosols. Our analysis suggests that aerosols should have a small impact relative to changes in the quantum yield of fluorescence in phytoplankton. However, absorbing aerosols are a new consideration and will be evaluated by Gordon.
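For orientation, a commonly quoted form of the FLH baseline-subtraction algorithm uses MODIS bands 13 (667 nm), 14 (678 nm), and 15 (748 nm); the exact formulation in the delivered code may differ, so treat this as a generic statement rather than the delivered algorithm:

```latex
\mathrm{FLH} \;=\; L_{14} \;-\;
\left[\, L_{15} + \bigl(L_{13} - L_{15}\bigr)\,
\frac{\lambda_{15} - \lambda_{14}}{\lambda_{15} - \lambda_{13}} \right],
```

i.e., the band-14 radiance minus a baseline linearly interpolated between bands 13 and 15 and evaluated at the band-14 wavelength.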
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mather, Barry
The increasing deployment of distribution-connected photovoltaic (DPV) systems requires utilities to complete complex interconnection studies. Relatively simple interconnection study methods worked well for low penetrations of photovoltaic systems, but more complicated quasi-static time-series (QSTS) analysis is required to make better interconnection decisions as DPV penetration levels increase. Tools and methods must be developed to support this. This paper presents a variable-time-step solver for QSTS analysis that significantly shortens the computational time and effort to complete a detailed analysis of the operation of a distribution circuit with many DPV systems. Specifically, it demonstrates that the proposed variable-time-step solver can reduce the required computational time by as much as 84% without introducing any important errors to metrics such as the highest and lowest voltage occurring on the feeder, the number of voltage regulator tap operations, and the total amount of losses realized in the distribution circuit during a 1-yr period. Further improvement in computational speed is possible with the introduction of only modest errors in these metrics, such as a 91% reduction with less than 5% error when predicting voltage regulator operations.
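The abstract does not include the solver itself, but the core idea of a variable-time-step QSTS analysis can be sketched: advance through the year reusing the previous solution and only re-run the power flow when the circuit inputs have moved far enough to matter. In the sketch below, run_power_flow is a toy placeholder rather than a real distribution power-flow engine, and the tolerance and load/PV profiles are invented for illustration.

```python
import numpy as np

def run_power_flow(net_load_kw):
    """Placeholder for a full distribution power-flow solve.
    Here we just map net load to a feeder-end voltage with a toy model."""
    return 1.0 - 4e-5 * net_load_kw  # per-unit voltage, illustrative only

def qsts_fixed(net_load):
    """Baseline: solve the power flow at every time step."""
    return np.array([run_power_flow(p) for p in net_load]), len(net_load)

def qsts_variable(net_load, tol_kw=25.0):
    """Variable-time-step QSTS: reuse the last solution while the input
    stays within tol_kw of the last solved operating point."""
    voltages = np.empty(len(net_load))
    solves = 0
    last_p = last_v = None
    for i, p in enumerate(net_load):
        if last_p is None or abs(p - last_p) > tol_kw:
            last_v = run_power_flow(p)
            last_p = p
            solves += 1
        voltages[i] = last_v
    return voltages, solves

if __name__ == "__main__":
    t = np.arange(0, 24 * 60)                         # one day, 1-minute resolution
    load = 800 + 300 * np.sin(2 * np.pi * t / 1440)   # kW, synthetic demand profile
    pv = np.clip(600 * np.sin(2 * np.pi * (t - 360) / 1440), 0, None)
    net = load - pv
    v_ref, n_ref = qsts_fixed(net)
    v_var, n_var = qsts_variable(net)
    print(f"solves: {n_var}/{n_ref}, max |dV|: {np.max(np.abs(v_var - v_ref)):.5f} pu")
```

Tightening tol_kw trades computation for accuracy, which mirrors the speed-versus-error trade-off described above.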
Learning a locomotor task: with or without errors?
Marchal-Crespo, Laura; Schneider, Jasmin; Jaeger, Lukas; Riener, Robert
2014-03-04
Robotic haptic guidance is the most commonly used robotic training strategy to reduce performance errors while training. However, research on motor learning has emphasized that errors are a fundamental neural signal that drives motor adaptation. Thus, researchers have proposed robotic therapy algorithms that amplify movement errors rather than decrease them. However, to date, no study has analyzed with precision which training strategy is the most appropriate for learning an especially simple task. In this study, the impact of robotic training strategies that amplify or reduce errors on muscle activation and motor learning of a simple locomotor task was investigated in twenty-two healthy subjects. The experiment was conducted with the MAgnetic Resonance COmpatible Stepper (MARCOS), a special robotic device developed for investigations in the MR scanner. The robot moved the dominant leg passively, and the subject was requested to actively synchronize the non-dominant leg to achieve an alternating stepping-like movement. Learning with four different training strategies that reduce or amplify errors was evaluated: (i) Haptic guidance: errors were eliminated by passively moving the limbs; (ii) No guidance: no robot disturbances were presented; (iii) Error amplification: existing errors were amplified with repulsive forces; (iv) Noise disturbance: errors were evoked intentionally with a randomly varying force disturbance on top of the no guidance strategy. Additionally, the activation of four lower limb muscles was measured by means of surface electromyography (EMG). Strategies that reduce or do not amplify errors limit muscle activation during training and result in poor learning gains. Adding random disturbing forces during training seems to increase attention, and therefore improve motor learning. Error amplification seems to be the most suitable strategy for initially less skilled subjects, perhaps because subjects could better detect their errors and correct them. Error strategies have great potential to evoke higher muscle activation and provoke better motor learning of simple tasks. Neuroimaging evaluation of brain regions involved in learning can provide valuable information on observed behavioral outcomes related to learning processes. The impact of these strategies on neurological patients needs further investigation.
Stress free configuration of the human eye.
Elsheikh, Ahmed; Whitford, Charles; Hamarashid, Rosti; Kassem, Wael; Joda, Akram; Büchler, Philippe
2013-02-01
Numerical simulations of eye globes often rely on topographies that have been measured in vivo using devices such as the Pentacam or OCT. The topographies, which represent the form of the already stressed eye under the existing intraocular pressure, introduce approximations in the analysis. The accuracy of the simulations could be improved if either the stress state of the eye under the effect of intraocular pressure is determined, or the stress-free form of the eye is estimated prior to conducting the analysis. This study reviews earlier attempts to address this problem and assesses the performance of an iterative technique proposed by Pandolfi and Holzapfel [1], which is simple to implement and promises high accuracy in estimating the eye's stress-free form. A parametric study has been conducted and demonstrated that the error level depends on the flexibility of the eye model, especially in the corneal region. However, in all cases considered, 3-4 analysis iterations were sufficient to produce a stress-free form with average errors in node location below 10^-6 mm and a maximal error below 10^-4 mm. This error level, which is similar to what has been achieved with other methods and orders of magnitude lower than the accuracy of current clinical topography systems, justifies the use of the technique as a pre-processing step in ocular numerical simulations. Crown Copyright © 2012. Published by Elsevier Ltd. All rights reserved.
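The iterative technique assessed here can be illustrated with a minimal sketch: guess a stress-free geometry, load it with the intraocular pressure through a forward model, and correct the guess by the residual between the inflated and the measured geometry. In the sketch below a thin-walled spherical shell stands in for the finite-element eye model, and the pressure and material values are illustrative assumptions, not corneal properties.

```python
def inflate(r0, pressure=2.0e3, thickness=0.55e-3, E=0.3e6, nu=0.49):
    """Toy forward model: linear-elastic thin spherical shell under pressure.
    Returns the deformed (inflated) radius for a stress-free radius r0.
    Material values are purely illustrative."""
    hoop_strain = pressure * r0 * (1.0 - nu) / (2.0 * thickness * E)
    return r0 * (1.0 + hoop_strain)

def stress_free_radius(r_measured, n_iter=6):
    """Fixed-point iteration in the spirit of the reviewed scheme: correct the
    stress-free guess by the residual between the inflated geometry and the
    in vivo (measured) geometry."""
    r0 = r_measured                      # start from the measured topography
    for k in range(n_iter):
        residual = inflate(r0) - r_measured
        r0 -= residual                   # update the stress-free estimate
        print(f"iteration {k + 1}: |residual| = {abs(residual):.3e} m")
    return r0

if __name__ == "__main__":
    r0 = stress_free_radius(r_measured=7.8e-3)   # 7.8 mm measured radius
    print(f"estimated stress-free radius: {r0 * 1e3:.4f} mm")
```

In a full simulation the same update is applied node by node, with the finite-element solve playing the role of inflate.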
NASA Technical Reports Server (NTRS)
Todling, Ricardo
2015-01-01
Recently, this author studied an approach to the estimation of system error based on combining observation residuals derived from a sequential filter and a fixed lag-1 smoother. While the methodology was being extended to a variational formulation, with experiments on simple models confirming consistency between the sequential and variational formulations, the limitations of the residual-based approach came clearly to the surface. This note uses the sequential assimilation application to simple nonlinear dynamics to highlight the issue. Only when some of the underlying error statistics are assumed known is it possible to estimate the unknown component. In general, when considerable uncertainties exist in the underlying statistics as a whole, attempts to obtain separate estimates of the various error covariances are bound to lead to misrepresentation of errors. The conclusions are particularly relevant to present-day attempts to estimate observation-error correlations from observation residual statistics. A brief illustration of the issue is also provided by comparing estimates of error correlations derived from a quasi-operational assimilation system and a corresponding Observing System Simulation Experiments framework.
[Practical aspects regarding sample size in clinical research].
Vega Ramos, B; Peraza Yanes, O; Herrera Correa, G; Saldívar Toraya, S
1996-01-01
Knowledge of the right sample size lets us judge whether the results published in medical papers come from a suitable design and whether their conclusions are properly supported by the statistical analysis. To estimate the sample size we must consider the type I error, the type II error, the variance, the size of the effect, and the significance level and power of the test. To decide which mathematical formula to use, we must define the kind of study at hand: a prevalence study, an estimation of means, or a comparative study. In this paper we explain some basic statistical topics and describe four simple examples of sample size estimation.
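As a minimal illustration of two of the cases mentioned (estimating a prevalence and comparing two means), the standard normal-approximation formulas can be written down directly; the numerical inputs below are invented examples, not values from the paper.

```python
from scipy.stats import norm

def n_for_prevalence(p_expected, precision, alpha=0.05):
    """n = z^2 * p * (1 - p) / d^2 for estimating a proportion
    with absolute precision d at confidence level 1 - alpha."""
    z = norm.ppf(1 - alpha / 2)
    return (z ** 2) * p_expected * (1 - p_expected) / precision ** 2

def n_per_group_two_means(delta, sd, alpha=0.05, power=0.80):
    """n per group = 2 * (z_{1-alpha/2} + z_{power})^2 * sd^2 / delta^2
    for detecting a difference delta between two group means."""
    z_a = norm.ppf(1 - alpha / 2)
    z_b = norm.ppf(power)
    return 2 * (z_a + z_b) ** 2 * sd ** 2 / delta ** 2

if __name__ == "__main__":
    # 20% expected prevalence, +/-5% precision, 95% confidence
    print("prevalence study:", round(n_for_prevalence(0.20, 0.05)), "subjects")
    # detect a 5-unit difference with SD 10, alpha 0.05, power 80%
    print("two-mean comparison:", round(n_per_group_two_means(5.0, 10.0)), "per group")
```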
Calibrated Bayes Factors Should Not Be Used: A Reply to Hoijtink, van Kooten, and Hulsker.
Morey, Richard D; Wagenmakers, Eric-Jan; Rouder, Jeffrey N
2016-01-01
Hoijtink, van Kooten, and Hulsker (2016) present a method for choosing the prior distribution for an analysis with Bayes factors that is based on controlling error rates, which they advocate as an alternative to our more subjective methods (Morey & Rouder, 2014; Rouder, Speckman, Sun, Morey, & Iverson, 2009; Wagenmakers, Wetzels, Borsboom, & van der Maas, 2011). We show that the method they advocate amounts to a simple significance test, and that the resulting Bayes factors are not interpretable. Additionally, their method fails in common circumstances and has the potential to yield arbitrarily high Type II error rates. After critiquing their method, we outline the position on subjectivity that underlies our advocacy of Bayes factors.
A Third Moment Adjusted Test Statistic for Small Sample Factor Analysis
Lin, Johnny; Bentler, Peter M.
2012-01-01
Goodness-of-fit testing in factor analysis is based on the assumption that the test statistic is asymptotically chi-square, but this property may not hold in small samples even when the factors and errors are normally distributed in the population. Robust methods such as Browne's asymptotically distribution-free method and Satorra-Bentler's mean scaling statistic were developed under the presumption of non-normality in the factors and errors. This paper finds a new application to the case where factors and errors are normally distributed in the population but the skewness of the obtained test statistic is still high due to sampling error in the observed indicators. An extension of Satorra-Bentler's statistic is proposed that not only scales the mean but also adjusts the degrees of freedom based on the skewness of the obtained test statistic in order to improve its robustness in small samples. A simple simulation study shows that this third moment adjusted statistic asymptotically performs on par with previously proposed methods, and at a very small sample size offers superior Type I error rates under a properly specified model. Data from Mardia, Kent and Bibby's study of students tested in five content areas, under either open-book or closed-book conditions, were used to illustrate the real-world performance of this statistic. PMID:23144511
Preventing medical errors by designing benign failures.
Grout, John R
2003-07-01
One way to successfully reduce medical errors is to design health care systems that are more resistant to the tendencies of human beings to err. One interdisciplinary approach entails creating design changes, mitigating human errors, and making human error irrelevant to outcomes. This approach is intended to facilitate the creation of benign failures, which have been called mistake-proofing devices and forcing functions elsewhere. USING FAULT TREES TO DESIGN FORCING FUNCTIONS: A fault tree is a graphical tool used to understand the relationships that either directly cause or contribute to the cause of a particular failure. A careful analysis of a fault tree enables the analyst to anticipate how the process will behave after the change. EXAMPLE OF AN APPLICATION: A scenario in which a patient is scalded while bathing can serve as an example of how multiple fault trees can be used to design forcing functions. The first fault tree shows the undesirable event--patient scalded while bathing. The second fault tree has a benign event--no water. Adding a scald valve changes the outcome from the undesirable event ("patient scalded while bathing") to the benign event ("no water"). Analysis of fault trees does not ensure or guarantee that the changes necessary to eliminate error actually occur. Most mistake-proofing is used to prevent simple errors and to create well-defended processes, but complex errors can also result. The utilization of mistake-proofing or forcing functions can be thought of as changing the logic of a process. Errors that formerly caused undesirable failures can be converted into the causes of benign failures. The use of fault trees can provide a variety of insights into the design of forcing functions that will improve patient safety.
Fault-tolerant simple quantum-bit commitment unbreakable by individual attacks
NASA Astrophysics Data System (ADS)
Shimizu, Kaoru; Imoto, Nobuyuki
2002-03-01
This paper proposes a simple scheme for quantum-bit commitment that is secure against individual particle attacks, where a sender is unable to use quantum logical operations to manipulate multiparticle entanglement for performing quantum collective and coherent attacks. Our scheme employs a cryptographic quantum communication channel defined in a four-dimensional Hilbert space and can be implemented by using single-photon interference. For the ideal case of zero-loss and noiseless quantum channels, our basic scheme relies only on the physical features of quantum states. Moreover, as long as the bit-flip error rates are sufficiently small (less than a few percent), we can improve our scheme and make it fault tolerant by adopting simple error-correcting codes of short length. Compared with the well-known Brassard-Crepeau-Jozsa-Langlois 1993 (BCJL93) protocol, our scheme is mathematically far simpler, more efficient in terms of transmitted photon number, and more tolerant of bit-flip errors.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wahl, D.E.; Jakowatz, C.V. Jr.; Ghiglia, D.C.
1991-01-01
Autofocus methods in SAR and self-survey techniques in SONAR have a common mathematical basis in that they both involve estimation and correction of phase errors introduced by sensor position uncertainties. Time delay estimation and correlation methods have been shown to be effective in solving the self-survey problem for towed SONAR arrays. Since it can be shown that platform motion errors introduce similar time-delay estimation problems in SAR imaging, the question arises as to whether such techniques could be effectively employed for autofocus of SAR imagery. With a simple mathematical model for motion errors in SAR, we will show why such correlation/time-delay techniques are not nearly as effective as established SAR autofocus algorithms such as phase gradient autofocus or sub-aperture based methods. This analysis forms an important bridge between signal processing methodologies for SAR and SONAR. 5 refs., 4 figs.
An analysis of estimation of pulmonary blood flow by the single-breath method
NASA Technical Reports Server (NTRS)
Srinivasan, R.
1986-01-01
The single-breath method represents a simple noninvasive technique for the assessment of capillary blood flow across the lung. However, this method has not gained widespread acceptance, because its accuracy is still being questioned. A rigorous procedure is described for estimating pulmonary blood flow (PBF) using data obtained with the aid of the single-breath method. Attention is given to the minimization of data-processing errors in the presence of measurement errors and to questions regarding a correction for possible loss of CO2 in the lung tissue. It is pointed out that the estimations are based on the exact solution of the underlying differential equations which describe the dynamics of gas exchange in the lung. The reported study demonstrates the feasibility of obtaining highly reliable estimates of PBF from expiratory data in the presence of random measurement errors.
Controlling Reflections from Mesh Refinement Interfaces in Numerical Relativity
NASA Technical Reports Server (NTRS)
Baker, John G.; Van Meter, James R.
2005-01-01
A leading approach to improving the accuracy of numerical relativity simulations of black hole systems is through fixed or adaptive mesh refinement techniques. We describe a generic numerical error which manifests as slowly converging, artificial reflections from refinement boundaries in a broad class of mesh-refinement implementations, potentially limiting the effectiveness of mesh-refinement techniques for some numerical relativity applications. We elucidate this numerical effect by presenting a model problem which exhibits the phenomenon, but which is simple enough that its numerical error can be understood analytically. Our analysis shows that the effect is caused by variations in finite differencing error generated across low and high resolution regions, and that its slow convergence is caused by the presence of dramatic speed differences among propagation modes typical of 3+1 relativity. Lastly, we resolve the problem, presenting a class of finite-differencing stencil modifications which eliminate this pathology in both our model problem and in numerical relativity examples.
Claumann, Carlos Alberto; Wüst Zibetti, André; Bolzan, Ariovaldo; Machado, Ricardo A F; Pinto, Leonel Teixeira
2015-12-18
For this work, an analysis of parameter estimation for the retention factor in a GC model was performed, considering two different criteria: the sum of squared errors and the maximum error in absolute value; the relevant statistics are described for each case. The main contribution of this work is the implementation of a specialized initialization scheme for the estimated parameters, which features fast convergence (low computational time) and is based on knowledge of the surface of the error criterion. In an application to a series of alkanes, the specialized initialization resulted in a significant reduction in the number of evaluations of the objective function (and hence in computational time) during parameter estimation. The reduction obtained was between one and two orders of magnitude compared with simple random initialization. Copyright © 2015 Elsevier B.V. All rights reserved.
Use of Total Electron Content data to analyze ionosphere electron density gradients
NASA Astrophysics Data System (ADS)
Nava, B.; Radicella, S. M.; Leitinger, R.; Coisson, P.
In the presence of electron density gradients, the thin-shell approximation for the ionosphere, used together with a simple mapping function to convert slant Total Electron Content (TEC) to vertical TEC, can lead to TEC conversion errors. These mapping function errors can therefore be used to identify the effects of electron density gradients in the ionosphere. In the present work, high-precision GPS-derived slant TEC data have been used to investigate the effects of electron density gradients in the middle- and low-latitude ionosphere under geomagnetically quiet and disturbed conditions. In particular, the data corresponding to the geographic area of the American sector for the days 5-7 April 2000 have been used to perform a complete analysis of mapping function errors based on the coinciding pierce point technique. The results clearly illustrate the electron density gradient effects according to the locations considered and to the actual levels of disturbance of the ionosphere.
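The mapping function itself is not reproduced in the abstract, but its standard single-layer (thin-shell) form can be sketched: the ionosphere is collapsed onto a shell at an assumed height, and slant TEC is divided by the secant of the zenith angle at the ionospheric pierce point. Departures of the real, gradient-bearing ionosphere from this assumption are exactly what show up as the conversion errors studied here. The shell height and elevations below are illustrative.

```python
import numpy as np

R_EARTH_KM = 6371.0

def thin_shell_mapping(elevation_deg, shell_height_km=350.0):
    """Single-layer mapping function M(e): slant TEC = M(e) * vertical TEC,
    with M(e) = 1 / cos(z'), z' being the zenith angle at the pierce point."""
    e = np.radians(elevation_deg)
    sin_zp = R_EARTH_KM * np.cos(e) / (R_EARTH_KM + shell_height_km)
    return 1.0 / np.sqrt(1.0 - sin_zp ** 2)

def vertical_tec(slant_tec, elevation_deg, shell_height_km=350.0):
    """Convert slant TEC (TECU) to vertical TEC with the thin-shell model."""
    return slant_tec / thin_shell_mapping(elevation_deg, shell_height_km)

if __name__ == "__main__":
    for elev in (10, 30, 60, 90):
        m = thin_shell_mapping(elev)
        print(f"elevation {elev:2d} deg: M = {m:.3f}, "
              f"50 TECU slant -> {vertical_tec(50.0, elev):.1f} TECU vertical")
```

At coinciding pierce points, two receivers viewing the same shell location should yield the same vertical TEC; any disagreement measures the mapping error produced by horizontal gradients.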
Different grades MEMS accelerometers error characteristics
NASA Astrophysics Data System (ADS)
Pachwicewicz, M.; Weremczuk, J.
2017-08-01
The paper presents the calibration of two MEMS accelerometers of different price and quality grades and discusses the different types of accelerometer errors. Calibration for error determination is performed against reference centrifuge measurements. The design and measurement errors of the centrifuge are discussed as well. It is shown that the error characteristics of the two sensors are very different and that the simple calibration methods presented in the literature cannot be used in both cases.
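A minimal version of the calibration step, fitting a scale factor and a bias against the known centrifuge reference accelerations by least squares and then reporting the residual errors, is sketched below on synthetic data; the noise levels standing in for the two quality grades are assumptions, not the sensors tested in the paper.

```python
import numpy as np

def calibrate(a_ref, a_meas):
    """Least-squares fit of a_meas = scale * a_ref + bias.
    Returns (scale, bias) and the post-calibration residuals."""
    A = np.column_stack([a_ref, np.ones_like(a_ref)])
    (scale, bias), *_ = np.linalg.lstsq(A, a_meas, rcond=None)
    residuals = a_meas - (scale * a_ref + bias)
    return scale, bias, residuals

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    a_ref = np.linspace(0.0, 10.0, 21) * 9.81          # centrifuge set points, m/s^2
    grades = {
        "lower-grade sensor":  (0.97, 0.30, 0.10),      # scale, bias, noise (assumed)
        "higher-grade sensor": (0.999, 0.02, 0.01),
    }
    for grade, (true_scale, true_bias, noise) in grades.items():
        a_meas = true_scale * a_ref + true_bias + rng.normal(0, noise, a_ref.size)
        scale, bias, res = calibrate(a_ref, a_meas)
        print(f"{grade}: scale={scale:.4f}, bias={bias:.3f} m/s^2, "
              f"residual RMS={np.sqrt(np.mean(res**2)):.3f} m/s^2")
```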
Cao, Zhaoliang; Mu, Quanquan; Hu, Lifa; Lu, Xinghai; Xuan, Li
2009-09-28
A simple method for evaluating the wavefront compensation error of diffractive liquid-crystal wavefront correctors (DLCWFCs) for atmospheric turbulence correction is reported. A simple formula that describes the relationship between pixel number, DLCWFC aperture, quantization level, and atmospheric coherence length was derived from atmospheric turbulence wavefronts calculated using Kolmogorov atmospheric turbulence theory. It was found that the pixel number across the DLCWFC aperture is a linear function of the telescope aperture and the quantization level, and an exponential function of the atmospheric coherence length. These results are useful for people using DLCWFCs in atmospheric turbulence correction for large-aperture telescopes.
A correlated meta-analysis strategy for data mining "OMIC" scans.
Province, Michael A; Borecki, Ingrid B
2013-01-01
Meta-analysis is becoming an increasingly popular and powerful tool to integrate findings across studies and OMIC dimensions. But there is the danger that hidden dependencies between putatively "independent" studies can cause inflation of type I error, due to reinforcement of the evidence from false-positive findings. We present here a simple method for conducting meta-analyses that automatically estimates the degree of any such non-independence between OMIC scans and corrects the inference for it, retaining the proper type I error structure. The method does not require the original data from the source studies, but operates only on the summary analysis results from these OMIC scans. The method is applicable in a wide variety of situations, including combining GWAS and/or sequencing scan results across studies with dependencies due to overlapping subjects, as well as scans of correlated traits in a meta-analysis scan for pleiotropic genetic effects. The method correctly detects when scans are actually independent, in which case it yields the traditional meta-analysis, so it may safely be used whenever there is even a suspicion of correlation amongst scans.
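The flavor of the approach can be sketched as follows: estimate the hidden correlation between two scans directly from their scan-wide summary statistics (which are dominated by null markers) and then down-weight the combined evidence accordingly. This is a simplified correlated Z-score combination written to illustrate the idea, not the authors' exact estimator.

```python
import numpy as np
from scipy.stats import norm

def correlated_meta_z(z1, z2):
    """Combine two vectors of per-marker Z-scores from possibly dependent
    scans.  The inter-scan correlation is estimated from the scan-wide
    Z-scores themselves (mostly null markers) and used in the combined
    statistic Z = (z1 + z2) / sqrt(2 + 2r)."""
    r = np.corrcoef(z1, z2)[0, 1]          # estimated hidden dependency
    z_meta = (z1 + z2) / np.sqrt(2.0 + 2.0 * r)
    p_meta = 2.0 * norm.sf(np.abs(z_meta))
    return r, z_meta, p_meta

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    n_markers, true_r = 50_000, 0.4         # e.g. partial subject overlap (assumed)
    shared = rng.normal(size=n_markers)
    z1 = np.sqrt(true_r) * shared + np.sqrt(1 - true_r) * rng.normal(size=n_markers)
    z2 = np.sqrt(true_r) * shared + np.sqrt(1 - true_r) * rng.normal(size=n_markers)
    r, z_meta, _ = correlated_meta_z(z1, z2)
    print(f"estimated correlation r = {r:.3f}")
    print(f"variance of combined Z = {z_meta.var():.3f} (should be ~1 under the null)")
```

When r is estimated near zero the statistic reduces to the usual independent meta-analysis, which is the safety property stressed in the abstract.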
Automated SEM Modal Analysis Applied to the Diogenites
NASA Technical Reports Server (NTRS)
Bowman, L. E.; Spilde, M. N.; Papike, James J.
1996-01-01
Analysis of volume proportions of minerals, or modal analysis, is routinely accomplished by point counting on an optical microscope, but the process, particularly on brecciated samples such as the diogenite meteorites, is tedious and prone to error by misidentification of very small fragments, which may make up a significant volume of the sample. Precise volume percentage data can be gathered on a scanning electron microscope (SEM) utilizing digital imaging and an energy dispersive spectrometer (EDS). This form of automated phase analysis reduces error, and at the same time provides more information than could be gathered using simple point counting alone, such as particle morphology statistics and chemical analyses. We have previously studied major, minor, and trace-element chemistry of orthopyroxene from a suite of diogenites. This abstract describes the method applied to determine the modes on this same suite of meteorites and the results of that research. The modal abundances thus determined add additional information on the petrogenesis of the diogenites. In addition, low-abundance phases such as spinels were located for further analysis by this method.
Instrumental variables vs. grouping approach for reducing bias due to measurement error.
Batistatou, Evridiki; McNamee, Roseanne
2008-01-01
Attenuation of the exposure-response relationship due to exposure measurement error is often encountered in epidemiology. Given that error cannot be totally eliminated, bias correction methods of analysis are needed. Many methods require more than one exposure measurement per person to be made, but the 'group mean OLS method,' in which subjects are grouped into several a priori defined groups followed by ordinary least squares (OLS) regression on the group means, can be applied with one measurement. An alternative approach is to use an instrumental variable (IV) method in which both the single error-prone measure and an IV are used in IV analysis. In this paper we show that the 'group mean OLS' estimator is equal to an IV estimator with the group mean used as IV, but that the variance estimators for the two methods are different. We derive a simple expression for the bias in the common estimator, which is a simple function of group size, reliability and contrast of exposure between groups, and show that the bias can be very small when group size is large. We compare this method with a new proposal (the group mean ranking method), also applicable with a single exposure measurement, in which the IV is the rank of the group means. When there are two independent exposure measurements per subject, we propose a new IV method (EVROS IV) and compare it with Carroll and Stefanski's (CS IV) proposal in which the second measure is used as an IV; the new IV estimator combines aspects of the 'group mean' and 'CS' strategies. All methods are evaluated in terms of bias, precision and root mean square error via simulations and a dataset from occupational epidemiology. The 'group mean ranking method' does not offer much improvement over the 'group mean method.' Compared with the 'CS' method, the 'EVROS' method is less affected by low reliability of exposure. We conclude that the group IV methods we propose may provide a useful way to handle mismeasured exposures in epidemiology with or without replicate measurements. Our finding may also have implications for the use of aggregate variables in epidemiology to control for unmeasured confounding.
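The central identity, that OLS on the group means coincides with an IV regression of the outcome on the error-prone exposure using the subject's group mean as the instrument, is easy to check numerically. The simulation below uses an invented exposure-outcome model and a plain two-stage least squares implementation of the IV estimator; it is a sketch of the idea, not the paper's variance calculations.

```python
import numpy as np

def group_mean_ols(y, w, groups):
    """OLS of group-mean outcome on group-mean (error-prone) exposure."""
    gm_w = np.array([w[groups == g].mean() for g in np.unique(groups)])
    gm_y = np.array([y[groups == g].mean() for g in np.unique(groups)])
    X = np.column_stack([np.ones_like(gm_w), gm_w])
    return np.linalg.lstsq(X, gm_y, rcond=None)[0][1]

def iv_group_mean(y, w, groups):
    """Two-stage least squares with the subject's group mean of w as instrument."""
    z = np.array([w[groups == g].mean() for g in groups])   # instrument per subject
    X1 = np.column_stack([np.ones_like(z), z])
    w_hat = X1 @ np.linalg.lstsq(X1, w, rcond=None)[0]      # first stage
    X2 = np.column_stack([np.ones_like(w_hat), w_hat])
    return np.linalg.lstsq(X2, y, rcond=None)[0][1]         # second stage slope

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    n_groups, group_size, beta = 10, 200, 0.5
    groups = np.repeat(np.arange(n_groups), group_size)
    x_true = rng.normal(groups * 1.0, 1.0)        # true exposure, contrast across groups
    w = x_true + rng.normal(0, 2.0, x_true.size)  # single error-prone measurement
    y = beta * x_true + rng.normal(0, 1.0, x_true.size)
    print(f"naive OLS slope (attenuated):  {np.polyfit(w, y, 1)[0]:.3f}")
    print(f"group-mean OLS slope:          {group_mean_ols(y, w, groups):.3f}")
    print(f"IV (group mean as instrument): {iv_group_mean(y, w, groups):.3f}")
```

With equal group sizes the last two point estimates agree exactly, while both recover the true slope far better than the attenuated naive regression.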
Potas, Jason Robert; de Castro, Newton Gonçalves; Maddess, Ted; de Souza, Marcio Nogueira
2015-01-01
Experimental electrophysiological assessment of evoked responses from regenerating nerves is challenging due to the typical complex response of events dispersed over various latencies and poor signal-to-noise ratio. Our objective was to automate the detection of compound action potential events and derive their latencies and magnitudes using a simple cross-correlation template comparison approach. For this, we developed an algorithm called Waveform Similarity Analysis. To test the algorithm, challenging signals were generated in vivo by stimulating sural and sciatic nerves, whilst recording evoked potentials at the sciatic nerve and tibialis anterior muscle, respectively, in animals recovering from sciatic nerve transection. Our template for the algorithm was generated based on responses evoked from the intact side. We also simulated noisy signals and examined the output of the Waveform Similarity Analysis algorithm with imperfect templates. Signals were detected and quantified using Waveform Similarity Analysis, which was compared to event detection, latency and magnitude measurements of the same signals performed by a trained observer, a process we called Trained Eye Analysis. The Waveform Similarity Analysis algorithm could successfully detect and quantify simple or complex responses from nerve and muscle compound action potentials of intact or regenerated nerves. Incorrectly specifying the template outperformed Trained Eye Analysis for predicting signal amplitude, but produced consistent latency errors for the simulated signals examined. Compared to the trained eye, Waveform Similarity Analysis is automatic, objective, does not rely on the observer to identify and/or measure peaks, and can detect small clustered events even when signal-to-noise ratio is poor. Waveform Similarity Analysis provides a simple, reliable and convenient approach to quantify latencies and magnitudes of complex waveforms and therefore serves as a useful tool for studying evoked compound action potentials in neural regeneration studies. PMID:26325291
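The core of the approach, sliding a template taken from the intact side along the recorded sweep and reporting latencies and magnitudes wherever the normalized cross-correlation exceeds a threshold, can be sketched as below. The synthetic waveform, template shape and threshold are invented for illustration and do not reproduce the authors' implementation.

```python
import numpy as np

def waveform_similarity(signal, template, fs, threshold=0.7):
    """Slide a template along the sweep; report (latency_s, magnitude,
    similarity) wherever the normalized cross-correlation peaks above
    the threshold."""
    t = template - template.mean()
    t_norm = np.linalg.norm(t)
    n = len(t)
    sims = np.zeros(len(signal) - n)
    for i in range(len(sims)):
        seg = signal[i:i + n] - signal[i:i + n].mean()
        denom = np.linalg.norm(seg) * t_norm
        sims[i] = (seg @ t) / denom if denom > 0 else 0.0
    events = []
    for i in np.where(sims > threshold)[0]:
        # keep only local maxima of the similarity trace as event onsets
        if sims[i] == sims[max(0, i - n // 2): i + n // 2].max():
            seg = signal[i:i + n]
            events.append((i / fs, seg.max() - seg.min(), sims[i]))
    return events

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    fs = 10_000.0                                      # sampling rate, Hz
    tt = np.arange(0, 0.004, 1.0 / fs)                 # 4 ms biphasic template
    template = np.sin(2 * np.pi * 500 * tt) * np.exp(-tt / 0.002)
    trace = rng.normal(0.0, 0.05, int(0.05 * fs))      # 50 ms noisy sweep
    for latency_ms, amp in [(8.0, 1.0), (23.0, 0.4)]:  # two dispersed events
        i0 = int(latency_ms / 1000.0 * fs)
        trace[i0:i0 + len(template)] += amp * template
    for lat, mag, sim in waveform_similarity(trace, template, fs):
        print(f"event at {lat * 1e3:5.2f} ms, magnitude {mag:.2f}, similarity {sim:.2f}")
```

Because the similarity is normalized, small late events riding on noise can still be detected as long as their shape matches the template, which is the property emphasized in the abstract.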
Olives, Casey; Pagano, Marcello; Deitchler, Megan; Hedt, Bethany L; Egge, Kari; Valadez, Joseph J
2009-04-01
Traditional lot quality assurance sampling (LQAS) methods require simple random sampling to guarantee valid results. However, cluster sampling has been proposed to reduce the number of random starting points. This study uses simulations to examine the classification error of two such designs, a 67x3 (67 clusters of three observations) and a 33x6 (33 clusters of six observations) sampling scheme to assess the prevalence of global acute malnutrition (GAM). Further, we explore the use of a 67x3 sequential sampling scheme for LQAS classification of GAM prevalence. Results indicate that, for independent clusters with moderate intracluster correlation for the GAM outcome, the three sampling designs maintain approximate validity for LQAS analysis. Sequential sampling can substantially reduce the average sample size that is required for data collection. The presence of intercluster correlation can impact dramatically the classification error that is associated with LQAS analysis.
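The type of simulation used can be sketched as follows: draw clustered observations with a chosen intracluster correlation (here via a beta-binomial model), apply an LQAS decision rule to the total number of cases, and count how often surveys are misclassified. The decision rule and prevalence thresholds below are illustrative, not the ones evaluated in the study.

```python
import numpy as np

def simulate_lqas(prev, n_clusters=67, cluster_size=3, decision_rule=30,
                  icc=0.1, n_sims=5000, seed=4):
    """Fraction of simulated cluster surveys classified as 'above threshold'
    (total GAM cases > decision_rule).  Intracluster correlation is induced
    with a beta-binomial model (icc = 1 / (a + b + 1))."""
    rng = np.random.default_rng(seed)
    s = (1.0 - icc) / icc
    a, b = prev * s, (1.0 - prev) * s
    cluster_prev = rng.beta(a, b, size=(n_sims, n_clusters))
    cases = rng.binomial(cluster_size, cluster_prev).sum(axis=1)
    return np.mean(cases > decision_rule)

if __name__ == "__main__":
    # classification errors for a rule meant to separate 10% from 20% GAM
    print("67x3 design, decision rule d = 30:")
    print("  alpha (false 'high' when true prevalence = 10%):",
          round(simulate_lqas(0.10), 3))
    print("  beta  (false 'low'  when true prevalence = 20%):",
          round(1 - simulate_lqas(0.20), 3))
```

Rerunning with cluster_size=6 and n_clusters=33 (or with a sequential stopping rule) reproduces the kind of design comparison described above.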
Precise and Scalable Static Program Analysis of NASA Flight Software
NASA Technical Reports Server (NTRS)
Brat, G.; Venet, A.
2005-01-01
Recent NASA mission failures (e.g., Mars Polar Lander and Mars Orbiter) illustrate the importance of having an efficient verification and validation process for such systems. One software error, as simple as it may be, can cause the loss of an expensive mission, or lead to budget overruns and crunched schedules. Unfortunately, traditional verification methods cannot guarantee the absence of errors in software systems. Therefore, we have developed the CGS static program analysis tool, which can exhaustively analyze large C programs. CGS analyzes the source code and identifies statements in which arrays are accessed out of bounds, or, pointers are used outside the memory region they should address. This paper gives a high-level description of CGS and its theoretical foundations. It also reports on the use of CGS on real NASA software systems used in Mars missions (from Mars PathFinder to Mars Exploration Rover) and on the International Space Station.
Statistical analysis of the determinations of the Sun's Galactocentric distance
NASA Astrophysics Data System (ADS)
Malkin, Zinovy
2013-02-01
Based on several tens of R0 measurements made during the past two decades, several studies have been performed to derive the best estimate of R0. Some used just simple averaging to derive a result, whereas others provided comprehensive analyses of possible errors in published results. In either case, detailed statistical analyses of data used were not performed. However, a computation of the best estimates of the Galactic rotation constants is not only an astronomical but also a metrological task. Here we perform an analysis of 53 R0 measurements (published in the past 20 years) to assess the consistency of the data. Our analysis shows that they are internally consistent. It is also shown that any trend in the R0 estimates from the last 20 years is statistically negligible, which renders the presence of a bandwagon effect doubtful. On the other hand, the formal errors in the published R0 estimates improve significantly with time.
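A minimal version of the two checks described, the internal consistency of the weighted mean (via the reduced chi-square, sometimes called the Birge ratio) and a weighted test for a temporal trend, is sketched below on invented values; these are not the 53 published measurements analysed in the paper.

```python
import numpy as np

def weighted_mean(values, errors):
    w = 1.0 / np.asarray(errors) ** 2
    mean = np.sum(w * values) / np.sum(w)
    err = 1.0 / np.sqrt(np.sum(w))
    # Birge ratio (reduced chi-square): ~1 if the quoted errors are consistent
    birge = np.sum(w * (values - mean) ** 2) / (len(values) - 1)
    return mean, err, birge

def weighted_trend(years, values, errors):
    """Weighted least-squares slope of R0 against publication year."""
    w = 1.0 / np.asarray(errors) ** 2
    W = np.diag(w)
    X = np.column_stack([np.ones_like(years), years - np.mean(years)])
    cov = np.linalg.inv(X.T @ W @ X)
    beta = cov @ X.T @ W @ values
    return beta[1], np.sqrt(cov[1, 1])          # slope and its formal error

if __name__ == "__main__":
    rng = np.random.default_rng(5)
    years = np.arange(1993, 2013, dtype=float)
    errors = np.linspace(0.6, 0.25, len(years))           # kpc, shrinking with time
    values = 8.2 + rng.normal(0, 1, len(years)) * errors   # kpc, no built-in trend
    mean, err, birge = weighted_mean(values, errors)
    slope, slope_err = weighted_trend(years, values, errors)
    print(f"weighted mean R0 = {mean:.2f} +/- {err:.2f} kpc, Birge ratio = {birge:.2f}")
    print(f"trend = {slope:+.3f} +/- {slope_err:.3f} kpc/yr "
          f"({abs(slope) / slope_err:.1f} sigma)")
```

A Birge ratio near one supports internal consistency, and a slope within a sigma or two of zero corresponds to the statistically negligible trend reported above.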
A second-order all-digital phase-locked loop
NASA Technical Reports Server (NTRS)
Holmes, J. K.; Tegnelia, C. R.
1974-01-01
A simple second-order digital phase-locked loop has been designed to synchronize itself to a square-wave subcarrier. Analysis and experimental results are given for both acquisition behavior and steady-state phase error performance. In addition, the damping factor and the noise bandwidth are derived analytically. Although all the data are given for the square-wave subcarrier case, the results are applicable to arbitrary subcarriers that are odd symmetric about their transition region.
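The structure of such a loop, a phase detector followed by a proportional-plus-integral loop filter (which is what makes the loop second order) and a numerically controlled oscillator, can be sketched digitally as below. The sketch tracks a sinusoidal rather than a square-wave subcarrier and its gains are illustrative, so it shows the loop architecture rather than reproducing the reported design.

```python
import numpy as np

def second_order_dpll(x, fs, f0, kp=0.01, ki=2e-4):
    """All-digital second-order PLL: multiply-type phase detector,
    proportional-plus-integral loop filter, and an NCO whose phase
    increment is corrected every sample.  Returns the NCO phase history."""
    theta = 0.0          # NCO phase (rad)
    integ = 0.0          # integrator state of the loop filter
    thetas = np.zeros(len(x))
    for n, sample in enumerate(x):
        thetas[n] = theta
        # for small errors, 2*x*cos(theta) ~ sin(phase error) + double-frequency ripple
        err = 2.0 * sample * np.cos(theta)
        integ += ki * err
        theta += 2.0 * np.pi * f0 / fs + kp * err + integ
    return thetas

if __name__ == "__main__":
    fs, f0 = 8000.0, 100.0                            # Hz
    n = np.arange(int(2.0 * fs))                      # 2 s of signal
    true_phase = 2.0 * np.pi * f0 * n / fs + 0.8      # 0.8 rad initial phase offset
    x = np.sin(true_phase) + np.random.default_rng(6).normal(0.0, 0.1, n.size)
    residual = np.angle(np.exp(1j * (true_phase - second_order_dpll(x, fs, f0))))
    print(f"mean |phase error|, first 10 ms: {np.mean(np.abs(residual[:80])):.3f} rad")
    print(f"mean |phase error|, last 0.5 s:  {np.mean(np.abs(residual[-4000:])):.3f} rad")
```

The proportional gain sets the acquisition speed and the integral gain removes any steady frequency offset, which together determine the damping factor and noise bandwidth discussed in the abstract.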
Howling, D. H.; Fitzgerald, P. J.
1959-01-01
The Schwarzschild-Villiger effect has been experimentally demonstrated with the optical system used in this laboratory. Using a photographic mosaic specimen as a model, it has been shown that the conclusions of Naora are substantiated and that the SV effect, in large or small magnitude, is always present in optical systems. The theoretical transmission error arising from the presence of the SV effect has been derived for various optical conditions of measurement. The results have been experimentally confirmed. The SV contribution of the substage optics of microspectrophotometers has also been considered. A simple method of evaluating a flare function f(A) is advanced which provides a measure of the SV error present in a system. It is demonstrated that measurements of specimens of optical density less than unity can be made with less than 1 per cent error, when using illuminating beam diameter/specimen diameter ratios of unity and uncoated optical surfaces. For denser specimens it is shown that care must be taken to reduce the illuminating beam/specimen diameter ratio to a value dictated by the magnitude of a flare function f(A), evaluated for a particular optical system, in order to avoid excessive transmission error. It is emphasized that observed densities (transmissions) are not necessarily true densities (transmissions) because of the possibility of SV error. The ambiguity associated with an estimation of stray-light error by means of an opaque object has also been demonstrated. The errors illustrated are not necessarily restricted to microspectrophotometry but may possibly be found in such fields as spectral analysis, the interpretation of x-ray diffraction patterns, the determination of ionizing particle tracks and particle densities in photographic emulsions, and in many other types of photometric analysis. PMID:14403512
A Criterion to Control Nonlinear Error in the Mixed-Mode Bending Test
NASA Technical Reports Server (NTRS)
Reeder, James R.
2002-01-01
The mixed-mode bending test has been widely used to measure delamination toughness and was recently standardized by ASTM as Standard Test Method D6671-01. This simple test is a combination of the standard Mode I (opening) test and a Mode II (sliding) test. This test uses a unidirectional composite test specimen with an artificial delamination subjected to bending loads to characterize when a delamination will extend. When the displacements become large, the linear theory used to analyze the results of the test yields errors in the calculated toughness values. The current standard places no limit on the specimen loading, and therefore test data that are significantly in error can be created while following the standard. A method of limiting the error that can be incurred in the calculated toughness values is needed. In this paper, nonlinear models of the MMB test are refined. One of the nonlinear models is then used to develop a simple criterion for prescribing conditions under which the nonlinear error will remain below 5%.
NASA Astrophysics Data System (ADS)
Xu, Xianfeng; Cai, Luzhong; Li, Dailin; Mao, Jieying
2010-04-01
In phase-shifting interferometry (PSI) the reference wave is usually assumed to be an on-axis plane wave. But in practice a slight tilt of the reference wave often occurs, and this tilt introduces unexpected errors in the reconstructed object wavefront. Usually the iterative least-squares method, which is time consuming, is employed to analyze the phase errors caused by the tilt of the reference wave. Here a simple, effective algorithm is suggested to detect and then correct this kind of error. In this method only simple mathematical operations are used, avoiding the least-squares equations needed in most previously reported methods. It can be used for generalized phase-shifting interferometry with two or more frames for both smooth and diffusing objects, and its excellent performance has been verified by computer simulations. The numerical simulations show that the wavefront reconstruction errors can be reduced by two orders of magnitude.
Gill, Andrew B; Anandappa, Gayathri; Patterson, Andrew J; Priest, Andrew N; Graves, Martin J; Janowitz, Tobias; Jodrell, Duncan I; Eisen, Tim; Lomas, David J
2015-02-01
This study introduces the use of 'error-category mapping' in the interpretation of pharmacokinetic (PK) model parameter results derived from dynamic contrast-enhanced (DCE-) MRI data. Eleven patients with metastatic renal cell carcinoma were enrolled in a multiparametric study of the treatment effects of bevacizumab. For the purposes of the present analysis, DCE-MRI data from two identical pre-treatment examinations were analysed by application of the extended Tofts model (eTM), using in turn a model arterial input function (AIF), an individually-measured AIF and a sample-average AIF. PK model parameter maps were calculated. Errors in the signal-to-gadolinium concentration ([Gd]) conversion process and the model-fitting process itself were assigned to category codes on a voxel-by-voxel basis, thereby forming a colour-coded 'error-category map' for each imaged slice. These maps were found to be repeatable between patient visits and showed that the eTM converged adequately in the majority of voxels in all the tumours studied. However, the maps also clearly indicated sub-regions of low Gd uptake and of non-convergence of the model in nearly all tumours. The non-physical condition ve ≥ 1 was the most frequently indicated error category and appeared sensitive to the form of AIF used. This simple method for visualisation of errors in DCE-MRI could be used as a routine quality-control technique and also has the potential to reveal otherwise hidden patterns of failure in PK model applications. Copyright © 2015 The Authors. Published by Elsevier Inc. All rights reserved.
Why Does a Method That Fails Continue To Be Used: The Answer
Templeton, Alan R.
2009-01-01
It has been claimed that hundreds of researchers use nested clade phylogeographic analysis (NCPA) based on what the method promises rather than requiring objective validation of the method. The supposed failure of NCPA is based upon the argument that validating it by using positive controls ignored type I error, and that computer simulations have shown a high type I error. The first argument is factually incorrect: the previously published validation analysis fully accounted for both type I and type II errors. The simulations that indicate a 75% type I error rate have serious flaws and only evaluate outdated versions of NCPA. These outdated type I error rates fall precipitously when the 2003 version of single-locus NCPA is used or when the 2002 multi-locus version of NCPA is used. It is shown that the treewise type I errors in single-locus NCPA can be corrected to the desired nominal level by a simple statistical procedure, and that multilocus NCPA reconstructs a simulated scenario used to discredit NCPA with 100% accuracy. Hence, NCPA is not a failed method at all, but rather has been validated both by actual data and by simulated data in a manner that satisfies the published criteria given by its critics. The critics have come to different conclusions because they have focused on the pre-2002 versions of NCPA and have failed to take into account the extensive developments in NCPA since 2002. Hence, researchers can choose to use NCPA based upon objective critical validation that shows that NCPA delivers what it promises. PMID:19335340
Simple Forest Canopy Thermal Exitance Model
NASA Technical Reports Server (NTRS)
Smith, J. A.; Goltz, S. M.
1999-01-01
We describe a model to calculate brightness temperature and surface energy balance for a forest canopy system. The model is an extension of an earlier vegetation-only model by inclusion of a simple soil layer. The root mean square error in brightness temperature for a dense forest canopy was 2.5 C. Surface energy balance predictions were also in good agreement. The corresponding root mean square errors for net radiation, latent, and sensible heat were 38.9, 30.7, and 41.4 W/sq m, respectively.
Zhou, Mu; Xu, Yu Bin; Ma, Lin; Tian, Shuo
2012-01-01
The expected errors of RADAR sensor networks that use linear probabilistic location fingerprints inside buildings with varying Gaussian Wi-Fi signal strength are discussed. As far as we know, the statistical errors of equally and unequally weighted RADAR networks have been suggested as a better way to evaluate the behavior of different system parameters and the deployment of reference points (RPs). However, up to now there has not been enough related work on the relations between the statistical errors, the system parameters, and the number and interval of the RPs, let alone on deriving the corresponding analytical expressions. Therefore, in response to this problem, and under a simple linear distribution model, much attention is paid to the mathematical relations between the expected linear errors, the number of neighbors, the number and interval of RPs, the parameters of the logarithmic attenuation model, and the variations of radio signal strength (RSS) at the test point (TP), with the purpose of constructing more practical and reliable RADAR location sensor networks (RLSNs) and guaranteeing the accuracy requirements of location-based services in future ubiquitous context-awareness environments. Moreover, numerical results and some real experimental evaluations of the error theories addressed in this paper are presented for our future extended analysis. PMID:22737027
Postural control model interpretation of stabilogram diffusion analysis
NASA Technical Reports Server (NTRS)
Peterka, R. J.
2000-01-01
Collins and De Luca [Collins JJ. De Luca CJ (1993) Exp Brain Res 95: 308-318] introduced a new method known as stabilogram diffusion analysis that provides a quantitative statistical measure of the apparently random variations of center-of-pressure (COP) trajectories recorded during quiet upright stance in humans. This analysis generates a stabilogram diffusion function (SDF) that summarizes the mean square COP displacement as a function of the time interval between COP comparisons. SDFs have a characteristic two-part form that suggests the presence of two different control regimes: a short-term open-loop control behavior and a longer-term closed-loop behavior. This paper demonstrates that a very simple closed-loop control model of upright stance can generate realistic SDFs. The model consists of an inverted pendulum body with torque applied at the ankle joint. This torque includes a random disturbance torque and a control torque. The control torque is a function of the deviation (error signal) between the desired upright body position and the actual body position, and is generated in proportion to the error signal, the derivative of the error signal, and the integral of the error signal [i.e. a proportional, integral and derivative (PID) neural controller]. The control torque is applied with a time delay representing conduction, processing, and muscle activation delays. Variations in the PID parameters and the time delay generate variations in SDFs that mimic real experimental SDFs. This model analysis allows one to interpret experimentally observed changes in SDFs in terms of variations in neural controller and time delay parameters rather than in terms of open-loop versus closed-loop behavior.
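A minimal version of that model, an inverted pendulum driven by a random disturbance torque and a delayed PID control torque, with the resulting COP trace summarised by its stabilogram diffusion function, is sketched below. The body parameters, gains, noise level and delay are illustrative values chosen to give a stable sway, not fits to experimental data.

```python
import numpy as np

def simulate_cop(duration=60.0, dt=0.001, delay=0.1,
                 kp=1200.0, kd=300.0, ki=50.0, noise_torque=10.0, seed=7):
    """Inverted-pendulum stance model with a delayed PID ankle controller.
    Returns the centre-of-pressure (COP) trajectory in metres."""
    m, h, g = 70.0, 0.9, 9.81            # body mass, COM height (illustrative)
    J = m * h ** 2                       # point-mass moment of inertia about ankle
    n, d = int(duration / dt), int(delay / dt)
    rng = np.random.default_rng(seed)
    theta = np.zeros(n)                  # body sway angle (rad)
    omega, integ = 0.0, 0.0
    cop = np.zeros(n)
    dist = np.zeros(n)                   # low-pass filtered disturbance torque
    for i in range(1, n):
        dist[i] = 0.999 * dist[i - 1] + noise_torque * np.sqrt(dt) * rng.normal()
    for i in range(1, n):
        err = theta[i - 1 - d] if i - 1 - d >= 0 else 0.0       # delayed error signal
        derr = (theta[i - 1 - d] - theta[i - 2 - d]) / dt if i - 2 - d >= 0 else 0.0
        integ += err * dt
        t_control = -(kp * err + kd * derr + ki * integ)        # PID control torque
        t_total = m * g * h * np.sin(theta[i - 1]) + dist[i] + t_control
        omega += (t_total / J) * dt
        theta[i] = theta[i - 1] + omega * dt
        cop[i] = -t_control / (m * g)    # quasi-static ankle torque mapped to COP
    return cop

def stabilogram_diffusion(cop, dt, max_lag_s=5.0):
    """Mean square COP displacement as a function of time lag (the SDF)."""
    lags = np.arange(1, int(max_lag_s / dt), 50)
    msd = np.array([np.mean((cop[lag:] - cop[:-lag]) ** 2) for lag in lags])
    return lags * dt, msd

if __name__ == "__main__":
    cop = simulate_cop()
    lag_s, msd = stabilogram_diffusion(cop, dt=0.001)
    for t in (0.1, 0.5, 1.0, 2.0, 4.0):
        i = np.argmin(np.abs(lag_s - t))
        print(f"lag {lag_s[i]:4.1f} s: <dx^2> = {msd[i] * 1e6:8.2f} mm^2")
```

Varying the PID gains and the feedback delay reshapes the short- and long-term regions of the printed SDF, which is the interpretation the paper develops.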
Digital repeat analysis; setup and operation.
Nol, J; Isouard, G; Mirecki, J
2006-06-01
Since the emergence of digital imaging, there have been questions about the necessity of continuing reject analysis programs in imaging departments to evaluate performance and quality. As a marketing strategy, most suppliers of digital technology focus on the supremacy of the technology and its ability to reduce the number of repeats, resulting in lower radiation doses given to patients and increased productivity in the department. On the other hand, quality assurance radiographers and radiologists believe that repeats are mainly related to positioning skills, and that repeat analysis is the main tool to plan training needs to up-skill radiographers. A comparative study between conventional and digital imaging was undertaken to compare outcomes and evaluate the need for reject analysis. However, with digital technology still in its early development stages, setting up a credible reject analysis program became the major task of the study. It took the department, with the help of the suppliers of the computed radiography reader and the picture archiving and communication system, over 2 years of software enhancement to build a reliable digital repeat analysis system. The results were supportive of both philosophies: the number of repeats resulting from exposure factors was reduced dramatically; however, the percentage of repeats resulting from positioning skills increased slightly, for the simple reason that some rejects in the conventional system that qualified for both exposure and positioning errors had been classified as exposure errors. The ability to digitally adjust dark or light images reclassified some of those images as positioning errors.
Olson, Stephen M; Hussaini, Mohammad; Lewis, James S
2011-05-01
Frozen section analysis is an essential tool for assessing margins intra-operatively to assure complete resection. Many institutions evaluate surgical defect edge tissue provided by the surgeon after the main lesion has been removed. With the increasing use of transoral laser microsurgery, this method is becoming even more prevalent. We sought to evaluate error rates at our large academic institution and to see if sampling errors could be reduced by the simple method change of taking an additional third section on these specimens. All head and neck tumor resection cases from January 2005 through August 2008 with margins evaluated by frozen section were identified by database search. These cases were analyzed by cutting two levels during frozen section and a third permanent section later. All resection cases from August 2008 through July 2009 were identified as well. These were analyzed by cutting three levels during frozen section (the third a 'much deeper' level) and a fourth permanent section later. Error rates for both of these periods were determined. Errors were separated into sampling and interpretation types. There were 4976 total frozen section specimens from 848 patients. The overall error rate was 2.4% for all frozen sections where just two levels were evaluated and 2.5% when three levels were evaluated (P=0.67). The sampling error rate was 1.6% for two-level sectioning and 1.2% for three-level sectioning (P=0.42). However, when considering only the frozen section cases where tumor was ultimately identified (either at the time of frozen section or on permanent sections), the sampling error rate for two-level sectioning was 15.3% versus 7.4% for three-level sectioning. This difference was statistically significant (P=0.006). Cutting a single additional 'deeper' level at the time of frozen section identifies more tumor-bearing specimens and may reduce the number of sampling errors.
Improved methods for the measurement and analysis of stellar magnetic fields
NASA Technical Reports Server (NTRS)
Saar, Steven H.
1988-01-01
The paper presents several improved methods for the measurement of magnetic fields on cool stars which take into account simple radiative transfer effects and the exact Zeeman patterns. Using these methods, high-resolution, low-noise data can be fitted with theoretical line profiles to determine the mean magnetic field strength in stellar active regions and a model-dependent fraction of the stellar surface (filling factor) covered by these regions. Random errors in the derived field strength and filling factor are parameterized in terms of signal-to-noise ratio, wavelength, spectral resolution, stellar rotation rate, and the magnetic parameters themselves. Weak line blends, if left uncorrected, can have significant systematic effects on the derived magnetic parameters, and thus several methods are developed to compensate partially for them. The magnetic parameters determined by previous methods likely have systematic errors because of such line blends and because of line saturation effects. Other sources of systematic error are explored in detail. These sources of error currently make it difficult to determine the magnetic parameters of individual stars to better than about + or - 20 percent.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Qiu, Dong, E-mail: d.qiu@uq.edu.au; Zhang, Mingxing
2014-08-15
A simple and inclusive method is proposed for accurate determination of the habit plane between bicrystals in the transmission electron microscope. Whilst this method can be regarded as a variant of surface trace analysis, the major innovation lies in the improved accuracy and efficiency of foil thickness measurement, which involves a simple tilt of the thin foil about a permanent tilting axis of the specimen holder, rather than a cumbersome tilt about the surface trace of the habit plane. An experimental study has been done to validate this proposed method in determining the habit plane between lamellar α2 plates and the γ matrix in a Ti–Al–Nb alloy. Both high accuracy (± 1°) and high precision (± 1°) have been achieved by using the new method. The source of the experimental errors as well as the applicability of this method is discussed. Some tips to minimise the experimental errors are also suggested. - Highlights: • An improved algorithm is formulated to measure the foil thickness. • Habit plane can be determined with a single tilt holder based on the new algorithm. • Better accuracy and precision within ± 1° are achievable using the proposed method. • The data for multi-facet determination can be collected simultaneously.
Chernyshev, Boris V; Bryzgalov, Dmitri V; Lazarev, Ivan E; Chernysheva, Elena G
2016-08-03
Current understanding of feature binding remains controversial. Studies involving mismatch negativity (MMN) measurement show a low level of binding, whereas behavioral experiments suggest a higher level. We examined the possibility that the two levels of feature binding coexist and may be shown within one experiment. The electroencephalogram was recorded while participants were engaged in an auditory two-alternative choice task, which was a combination of the oddball and the condensation tasks. Two types of deviant target stimuli were used - complex stimuli, which required feature conjunction to be identified, and simple stimuli, which differed from standard stimuli in a single feature. Two behavioral outcomes - correct responses and errors - were analyzed separately. Responses to complex stimuli were slower and less accurate than responses to simple stimuli. MMN was prominent and its amplitude was similar for simple and complex stimuli, which differed from standards in one and two features, respectively. Errors in response only to complex stimuli were associated with decreased MMN amplitude. P300 amplitude was greater for complex stimuli than for simple stimuli. Our data are compatible with the explanation that feature binding in the auditory modality depends on two concurrent levels of processing. We speculate that the earlier level, related to MMN generation, is an essential and critical stage. Yet a later analysis is also carried out, affecting P300 amplitude and response time. The current findings provide resolution to conflicting views on the nature of feature binding and show that feature binding is a distributed multilevel process.
A method to map errors in the deformable registration of 4DCT images
Vaman, Constantin; Staub, David; Williamson, Jeffrey; Murphy, Martin J.
2010-01-01
Purpose: To present a new approach to the problem of estimating errors in deformable image registration (DIR) applied to sequential phases of a 4DCT data set. Methods: A set of displacement vector fields (DVFs) are made by registering a sequence of 4DCT phases. The DVFs are assumed to display anatomical movement, with the addition of errors due to the imaging and registration processes. The positions of physical landmarks in each CT phase are measured as ground truth for the physical movement in the DVF. Principal component analysis of the DVFs and the landmarks is used to identify and separate the eigenmodes of physical movement from the error eigenmodes. By subtracting the physical modes from the principal components of the DVFs, the registration errors are exposed and reconstructed as DIR error maps. The method is demonstrated via a simple numerical model of 4DCT DVFs that combines breathing movement with simulated maps of spatially correlated DIR errors. Results: The principal components of the simulated DVFs were observed to share the basic properties of principal components for actual 4DCT data. The simulated error maps were accurately recovered by the estimation method. Conclusions: Deformable image registration errors can have complex spatial distributions. Consequently, point-by-point landmark validation can give unrepresentative results that do not accurately reflect the registration uncertainties away from the landmarks. The authors are developing a method for mapping the complete spatial distribution of DIR errors using only a small number of ground truth validation landmarks. PMID:21158288
A Very Low Cost BCH Decoder for High Immunity of On-Chip Memories
NASA Astrophysics Data System (ADS)
Seo, Haejun; Han, Sehwan; Heo, Yoonseok; Cho, Taewon
BCH (Bose-Chaudhuri-Hocquenghem) codes, a class of cyclic block codes, have very strong error-correcting ability, which is vital for error protection in memory systems. Among the many decoding algorithms for BCH codes, the PGZ (Peterson-Gorenstein-Zierler) algorithm is advantageous in that it corrects errors through simple calculations for small values of t. However, it becomes problematic when a division by zero occurs in the case ν ≠ t. In this paper, the circuit is simplified by a multi-mode hardware architecture that handles the cases ν = 0 to 3. First, production cost is reduced thanks to the smaller number of gates. Second, lower power consumption can lengthen the recharging period. The very low cost and simple datapath make our design a good choice as the ECC (error correction code/circuit) in small-footprint SoC (system-on-chip) memory systems.
Fisher classifier and its probability of error estimation
NASA Technical Reports Server (NTRS)
Chittineni, C. B.
1979-01-01
Computationally efficient expressions are derived for estimating the probability of error using the leave-one-out method. The optimal threshold for the classification of patterns projected onto Fisher's direction is derived. A simple generalization of the Fisher classifier to multiple classes is presented. Computational expressions are developed for estimating the probability of error of the multiclass Fisher classifier.
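For context, a minimal brute-force sketch of the leave-one-out error estimate for a two-class Fisher classifier is given below; it is exactly the repeated refitting that the paper's computationally efficient expressions avoid, and the midpoint threshold and two-class setup are assumptions of this example rather than the optimal threshold derived in the paper.

```python
import numpy as np

def fisher_direction(X0, X1):
    """Fisher's discriminant direction for two classes (rows = samples)."""
    m0, m1 = X0.mean(axis=0), X1.mean(axis=0)
    Sw = np.cov(X0, rowvar=False) * (len(X0) - 1) + np.cov(X1, rowvar=False) * (len(X1) - 1)
    return np.linalg.solve(Sw, m1 - m0)

def loo_error(X0, X1):
    """Brute-force leave-one-out estimate of the probability of error."""
    X = np.vstack([X0, X1])
    y = np.r_[np.zeros(len(X0)), np.ones(len(X1))]
    errors = 0
    for i in range(len(X)):
        mask = np.arange(len(X)) != i
        Xtr, ytr = X[mask], y[mask]
        w = fisher_direction(Xtr[ytr == 0], Xtr[ytr == 1])
        # simple midpoint threshold on the projected class means (an assumption)
        t = 0.5 * (Xtr[ytr == 0].mean(axis=0) + Xtr[ytr == 1].mean(axis=0)) @ w
        errors += (X[i] @ w > t) != y[i]
    return errors / len(X)
```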
Discrete Tchebycheff orthonormal polynomials and applications
NASA Technical Reports Server (NTRS)
Lear, W. M.
1980-01-01
Discrete Tchebycheff orthonormal polynomials offer a convenient way to make least squares polynomial fits of uniformly spaced discrete data. Computer programs to do so are simple and fast, and appear to be less affected by computer roundoff error, for the higher order fits, than conventional least squares programs. They are useful for any application of polynomial least squares fits: approximation of mathematical functions, noise analysis of radar data, and real time smoothing of noisy data, to name a few.
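A hedged sketch of the underlying idea: fitting uniformly spaced data with a discrete orthonormal polynomial basis so that each coefficient is computed independently. Here the basis is built with a QR factorization of the Vandermonde matrix rather than the Tchebycheff recurrence, an implementation shortcut assumed for brevity.

```python
import numpy as np

def discrete_orthonormal_fit(y, order):
    """Least-squares polynomial fit of uniformly spaced data using a
    discrete orthonormal basis (built here by QR for brevity)."""
    n = len(y)
    x = np.arange(n)
    V = np.vander(x, order + 1, increasing=True).astype(float)
    Q, R = np.linalg.qr(V)          # columns of Q are orthonormal over the grid
    c = Q.T @ y                     # independent coefficients, one per order
    return Q @ c, c                 # fitted values and basis coefficients

# Because the coefficients are independent, raising the fit order only adds
# terms instead of re-solving an increasingly ill-conditioned normal system.
```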
Location error uncertainties - an advanced using of probabilistic inverse theory
NASA Astrophysics Data System (ADS)
Debski, Wojciech
2016-04-01
The spatial location of sources of seismic waves is one of the first tasks when transient waves from natural (uncontrolled) sources are analyzed in many branches of physics, including seismology and oceanology. Source activity and its spatial variability in time, the geometry of the recording network, and the complexity and heterogeneity of the wave velocity distribution are all factors influencing the performance of location algorithms and the accuracy of the achieved results. While estimating the location of earthquake foci is relatively simple, a quantitative estimation of the location accuracy is a challenging task even if the probabilistic inverse method is used, because it requires knowledge of the statistics of observational, modelling, and a priori uncertainties. In this presentation we address this task when the statistics of observational and/or modelling errors are unknown. This common situation requires the introduction of a priori constraints on the likelihood (misfit) function which significantly influence the estimated errors. Based on the results of an analysis of 120 seismic events from the Rudna copper mine operating in southwestern Poland, we illustrate an approach based on an analysis of Shannon's entropy calculated for the a posteriori distribution. We show that this meta-characteristic of the a posteriori distribution carries some information on the uncertainties of the solution found.
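A minimal sketch of the meta-characteristic mentioned above, assuming the a posteriori location distribution is available on a regular spatial grid; the uniform cell volume and natural-log units are assumptions of the example.

```python
import numpy as np

def posterior_entropy(pdf, cell_volume=1.0):
    """Shannon entropy of a gridded a posteriori location distribution.

    pdf         : array of posterior probability density values on a grid
    cell_volume : volume of one grid cell (uniform grid assumed)
    """
    p = pdf * cell_volume
    p = p / p.sum()                      # normalize to a probability mass function
    p = p[p > 0]
    return -np.sum(p * np.log(p))        # nats; broader posteriors give larger entropy
```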
Multiframe video coding for improved performance over wireless channels.
Budagavi, M; Gibson, J D
2001-01-01
We propose and evaluate a multi-frame extension to block motion compensation (BMC) coding of videoconferencing-type video signals for wireless channels. The multi-frame BMC (MF-BMC) coder makes use of the redundancy that exists across multiple frames in typical videoconferencing sequences to achieve additional compression over that obtained by using the single frame BMC (SF-BMC) approach, such as in the base-level H.263 codec. The MF-BMC approach also has an inherent ability to overcome some transmission errors and is thus more robust than the SF-BMC approach. We model the error propagation process in MF-BMC coding as a multiple Markov chain and use Markov chain analysis to infer that the use of multiple frames in motion compensation increases robustness. The Markov chain analysis is also used to devise a simple scheme which randomizes the selection of the frame (amongst the multiple previous frames) used in BMC to achieve additional robustness. The MF-BMC coders proposed are a multi-frame extension of the base level H.263 coder and are found to be more robust than the base level H.263 coder when subjected to simulated errors commonly encountered on wireless channels.
Performance of concatenated Reed-Solomon/Viterbi channel coding
NASA Technical Reports Server (NTRS)
Divsalar, D.; Yuen, J. H.
1982-01-01
The concatenated Reed-Solomon (RS)/Viterbi coding system is reviewed. The performance of the system is analyzed and results are derived with a new simple approach. A functional model for the input RS symbol error probability is presented. Based on this new functional model, we compute the performance of a concatenated system in terms of RS word error probability, output RS symbol error probability, bit error probability due to decoding failure, and bit error probability due to decoding error. Finally we analyze the effects of the noisy carrier reference and the slow fading on the system performance.
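As a hedged illustration of one ingredient of such an analysis, the standard RS word error probability under the assumption of independent input symbol errors (the idealized case that the paper's functional model refines) can be computed as below; the (255, 223), t = 16 parameters and the example symbol error probability are just illustrative values.

```python
import numpy as np
from scipy.special import comb

def rs_word_error_prob(ps, n=255, t=16):
    """Word error probability of an (n, k) Reed-Solomon code correcting up to
    t symbol errors, given input symbol error probability ps and assuming
    independent symbol errors (perfect interleaving)."""
    i = np.arange(t + 1, n + 1)
    return np.sum(comb(n, i) * ps**i * (1.0 - ps)**(n - i))

# Example: ps = 0.01 at the input of the common (255, 223), t = 16 code
print(rs_word_error_prob(0.01))
```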
Quantifying errors without random sampling.
Phillips, Carl V; LaPole, Luwanna M
2003-06-12
All quantifications of mortality, morbidity, and other health measures involve numerous sources of error. The routine quantification of random sampling error makes it easy to forget that other sources of error can and should be quantified. When a quantification does not involve sampling, error is almost never quantified and results are often reported in ways that dramatically overstate their precision. We argue that the precision implicit in typical reporting is problematic and sketch methods for quantifying the various sources of error, building up from simple examples that can be solved analytically to more complex cases. There are straightforward ways to partially quantify the uncertainty surrounding a parameter that is not characterized by random sampling, such as limiting reported significant figures. We present simple methods for doing such quantifications, and for incorporating them into calculations. More complicated methods become necessary when multiple sources of uncertainty must be combined. We demonstrate that Monte Carlo simulation, using available software, can estimate the uncertainty resulting from complicated calculations with many sources of uncertainty. We apply the method to the current estimate of the annual incidence of foodborne illness in the United States. Quantifying uncertainty from systematic errors is practical. Reporting this uncertainty would more honestly represent study results, help show the probability that estimated values fall within some critical range, and facilitate better targeting of further research.
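A minimal sketch of the Monte Carlo approach described above, with entirely hypothetical input distributions standing in for the real foodborne-illness inputs; the point is simply that each uncertain input is drawn from a distribution and the result is reported as an interval rather than a single over-precise number.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100_000

# Hypothetical example: incidence = cases_per_visit * visits * underreporting_factor,
# with each input's uncertainty expressed as a distribution rather than a point value.
cases_per_visit = rng.normal(0.12, 0.02, N)          # measurement uncertainty
visits = rng.triangular(0.9e6, 1.0e6, 1.2e6, N)      # plausible range, best guess 1.0e6
underreporting = rng.uniform(1.5, 3.0, N)            # expert-judgment interval

incidence = cases_per_visit * visits * underreporting
print(np.percentile(incidence, [2.5, 50, 97.5]))     # report an interval, not one number
```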
Decodoku: Quantum error correction as a simple puzzle game
NASA Astrophysics Data System (ADS)
Wootton, James
To build quantum computers, we need to detect and manage any noise that occurs. This will be done using quantum error correction. At the hardware level, QEC is a multipartite system that stores information non-locally. Certain measurements are made which do not disturb the stored information, but which do allow signatures of errors to be detected. Then there is a software problem. How to take these measurement outcomes and determine: (a) the errors that caused them, and (b) how to remove their effects. For qubit error correction, the algorithms required to do this are well known. For qudits, however, current methods are far from optimal. We consider the error correction problem of qudit surface codes. At the most basic level, this is a problem that can be expressed in terms of a grid of numbers. Using this fact, we take the inherent problem at the heart of quantum error correction, remove it from its quantum context, and present it in terms of simple grid-based puzzle games. We have developed three versions of these puzzle games, focussing on different aspects of the required algorithms. These have been released as iOS and Android apps, allowing the public to try their hand at developing good algorithms to solve the puzzles. For more information, see www.decodoku.com. Funding from the NCCR QSIT.
A geometric model for initial orientation errors in pigeon navigation.
Postlethwaite, Claire M; Walker, Michael M
2011-01-21
All mobile animals respond to gradients in signals in their environment, such as light, sound, odours and magnetic and electric fields, but it remains controversial how they might use these signals to navigate over long distances. The Earth's surface is essentially two-dimensional, so two stimuli are needed to act as coordinates for navigation. However, no environmental fields are known to be simple enough to act as perpendicular coordinates on a two-dimensional grid. Here, we propose a model for navigation in which we assume that an animal has a simplified 'cognitive map' in which environmental stimuli act as perpendicular coordinates. We then investigate how systematic deviation of the contour lines of the environmental signals from a simple orthogonal arrangement can cause errors in position determination and lead to systematic patterns of directional errors in initial homing directions taken by pigeons. The model reproduces patterns of initial orientation errors seen in previously collected data from homing pigeons, predicts that errors should increase with distance from the loft, and provides a basis for efforts to identify further sources of orientation errors made by homing pigeons. Copyright © 2010 Elsevier Ltd. All rights reserved.
Langarika-Rocafort, Argia; Emparanza, José Ignacio; Aramendi, José F; Castellano, Julen; Calleja-González, Julio
2017-01-01
To examine the intra-observer reliability and agreement between five methods of measurement for dorsiflexion during the Weight Bearing Dorsiflexion Lunge Test, and to assess the degree of agreement between three methods in female athletes. Repeated measurements study design. Volleyball club. Twenty-five volleyball players. Dorsiflexion was evaluated using five methods: heel-wall distance, first toe-wall distance, inclinometer at the tibia, inclinometer at the Achilles tendon, and the dorsiflexion angle obtained by a simple trigonometric function. For the statistical analysis, agreement was studied using the Bland-Altman method, the Standard Error of Measurement and the Minimum Detectable Change. Reliability analysis was performed using the Intraclass Correlation Coefficient. Measurement methods using the inclinometer had more than 6° of measurement error. The angle calculated by the trigonometric function had 3.28° error. Inclinometer-based methods had ICC values < 0.90. Distance-based methods and the trigonometric angle measurement had ICC values > 0.90. Concerning the agreement between methods, there was between 1.93° and 14.42° of bias, and between 4.24° and 7.96° of random error. To assess the DF angle in the WBLT, the angle calculated by a trigonometric function is the most repeatable method. The methods of measurement cannot be used interchangeably.
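For illustration only, a sketch of the agreement and measurement-error statistics named above (Bland-Altman bias and limits of agreement, SEM, MDC95); the use of a pooled SD for the SEM and the 1.96 multipliers are conventional assumptions, not values taken from the paper.

```python
import numpy as np

def agreement_stats(a, b, icc):
    """Bland-Altman bias and 95% limits of agreement between two measurement
    methods, plus SEM and MDC95 from a given ICC (pooled SD assumed)."""
    diff = a - b
    bias = diff.mean()
    loa = 1.96 * diff.std(ddof=1)            # random error (95% limits of agreement)
    sd = np.concatenate([a, b]).std(ddof=1)
    sem = sd * np.sqrt(1.0 - icc)            # standard error of measurement
    mdc95 = 1.96 * np.sqrt(2.0) * sem        # minimum detectable change
    return bias, loa, sem, mdc95
```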
Space-Time Error Representation and Estimation in Navier-Stokes Calculations
NASA Technical Reports Server (NTRS)
Barth, Timothy J.
2006-01-01
The mathematical framework for a-posteriori error estimation of functionals elucidated by Eriksson et al. [7] and Becker and Rannacher [3] is revisited in a space-time context. Using these theories, a hierarchy of exact and approximate error representation formulas are presented for use in error estimation and mesh adaptivity. Numerical space-time results for simple model problems as well as compressible Navier-Stokes flow at Re = 300 over a 2D circular cylinder are then presented to demonstrate elements of the error representation theory for time-dependent problems.
Min, Hua; Zheng, Ling; Perl, Yehoshua; Halper, Michael; De Coronado, Sherri; Ochs, Christopher
2017-05-18
Ontologies are knowledge structures that lend support to many health-information systems. A study is carried out to assess the quality of ontological concepts based on a measure of their complexity. The results show a relation between complexity of concepts and error rates of concepts. A measure of lateral complexity defined as the number of exhibited role types is used to distinguish between more complex and simpler concepts. Using a framework called an area taxonomy, a kind of abstraction network that summarizes the structural organization of an ontology, concepts are divided into two groups along these lines. Various concepts from each group are then subjected to a two-phase QA analysis to uncover and verify errors and inconsistencies in their modeling. A hierarchy of the National Cancer Institute thesaurus (NCIt) is used as our test-bed. A hypothesis pertaining to the expected error rates of the complex and simple concepts is tested. Our study was done on the NCIt's Biological Process hierarchy. Various errors, including missing roles, incorrect role targets, and incorrectly assigned roles, were discovered and verified in the two phases of our QA analysis. The overall findings confirmed our hypothesis by showing a statistically significant difference between the amounts of errors exhibited by more laterally complex concepts vis-à-vis simpler concepts. QA is an essential part of any ontology's maintenance regimen. In this paper, we reported on the results of a QA study targeting two groups of ontology concepts distinguished by their level of complexity, defined in terms of the number of exhibited role types. The study was carried out on a major component of an important ontology, the NCIt. The findings suggest that more complex concepts tend to have a higher error rate than simpler concepts. These findings can be utilized to guide ongoing efforts in ontology QA.
An Empirical State Error Covariance Matrix Orbit Determination Example
NASA Technical Reports Server (NTRS)
Frisbee, Joseph H., Jr.
2015-01-01
State estimation techniques serve effectively to provide mean state estimates. However, the state error covariance matrices provided as part of these techniques suffer from some degree of lack of confidence in their ability to adequately describe the uncertainty in the estimated states. A specific problem with the traditional form of state error covariance matrices is that they represent only a mapping of the assumed observation error characteristics into the state space. Any errors that arise from other sources (environment modeling, precision, etc.) are not directly represented in a traditional, theoretical state error covariance matrix. First, consider that an actual observation contains only measurement error and that an estimated observation contains all other errors, known and unknown. Then it follows that a measurement residual (the difference between expected and observed measurements) contains all errors for that measurement. Therefore, a direct and appropriate inclusion of the actual measurement residuals in the state error covariance matrix of the estimate will result in an empirical state error covariance matrix. This empirical state error covariance matrix will fully include all of the errors in the state estimate. The empirical error covariance matrix is determined from a literal reinterpretation of the equations involved in the weighted least squares estimation algorithm. It is a formally correct, empirical state error covariance matrix obtained through use of the average form of the weighted measurement residual variance performance index rather than the usual total weighted residual form. Based on its formulation, this matrix will contain the total uncertainty in the state estimate, regardless of the source of the uncertainty and whether or not that source is anticipated. It is expected that the empirical error covariance matrix will give a better statistical representation of the state error in poorly modeled systems or when sensor performance is suspect. In its most straightforward form, the technique only requires supplemental calculations to be added to existing batch estimation algorithms. In the current problem being studied, a truth model making use of gravity with spherical, J2 and J4 terms plus a standard exponential type atmosphere with simple diurnal and random walk components is used. The ability of the empirical state error covariance matrix to account for errors is investigated under four scenarios during orbit estimation. These scenarios are: exact modeling under known measurement errors, exact modeling under corrupted measurement errors, inexact modeling under known measurement errors, and inexact modeling under corrupted measurement errors. For this problem a simple analog of a distributed space surveillance network is used. The sensors in this network make only range measurements, with simple normally distributed measurement errors. The sensors are assumed to have full horizon to horizon viewing at any azimuth. For definiteness, an orbit at the approximate altitude and inclination of the International Space Station is used for the study. The comparison analyses of the data involve only total vectors. No investigation of specific orbital elements is undertaken. The total vector analyses will look at the chi-square values of the error in the difference between the estimated state and the true modeled state using both the empirical and theoretical error covariance matrices for each scenario.
Dawdy, M R; Munter, D W; Gilmore, R A
1997-03-01
This study was designed to examine the relationship between patient entry rates (a measure of physician work load) and documentation errors/omissions in both handwritten and dictated emergency treatment records. The study was carried out in two phases. Phase I examined handwritten records and Phase II examined dictated and transcribed records. A total of 838 charts for three common chief complaints (chest pain, abdominal pain, asthma/chronic obstructive pulmonary disease) were retrospectively reviewed and scored for the presence or absence of 11 predetermined criteria. Patient entry rates were determined by reviewing the emergency department patient registration logs. The data were analyzed using simple correlation and linear regression analysis. A positive correlation was found between patient entry rates and documentation errors in handwritten charts. No such correlation was found in the dictated charts. We conclude that work load may negatively affect documentation accuracy when charts are handwritten. However, the use of dictation services may minimize or eliminate this effect.
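A hedged sketch of the simple correlation and linear regression analysis described above, using SciPy and made-up illustrative numbers rather than the study's actual chart-review data.

```python
import numpy as np
from scipy import stats

# entry_rate: patients registered per hour; error_count: documentation errors/omissions
entry_rate = np.array([2.1, 3.4, 4.0, 5.2, 6.1, 6.8])      # illustrative values only
error_count = np.array([1, 2, 2, 4, 5, 6])

r, p = stats.pearsonr(entry_rate, error_count)              # simple correlation
fit = stats.linregress(entry_rate, error_count)             # linear regression
print(f"r = {r:.2f} (p = {p:.3f}), slope = {fit.slope:.2f} errors per extra patient/h")
```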
Zheng, Hai-ming; Li, Guang-jie; Wu, Hao
2015-06-01
Differential optical absorption spectroscopy (DOAS) is a commonly used atmospheric pollution monitoring method. Denoising the monitored spectral data improves the inversion accuracy. The Fourier transform filtering method can effectively filter out the noise in the spectral data, but the algorithm itself can introduce errors. In this paper, a chirp-z transform method is put forward. By locally refining the Fourier transform spectrum, it retains the denoising effect of the Fourier transform while compensating for the algorithm's error, which further improves the inversion accuracy. The paper studies the concentration retrieval of SO2 and NO2. The results show that simple division causes larger errors and is not very stable. The chirp-z transform is shown to be more accurate than the Fourier transform. Frequency spectrum analysis shows that the Fourier transform cannot resolve the distortion and weakening of the characteristic absorption spectrum, whereas the chirp-z transform can finely reconstruct the specific frequency band.
Improving Histopathology Laboratory Productivity: Process Consultancy and A3 Problem Solving.
Yörükoğlu, Kutsal; Özer, Erdener; Alptekin, Birsen; Öcal, Cem
2017-01-01
The ISO 17020 quality program has been run in our pathology laboratory for four years to establish an action plan for correction and prevention of identified errors. In this study, we aimed to evaluate the errors that we could not identify through ISO 17020 and/or solve by means of process consulting. Process consulting is carefully intervening in a group or team to help it to accomplish its goals. The A3 problem solving process was run under the leadership of a 'workflow, IT and consultancy manager'. An action team was established consisting of technical staff. A root cause analysis was applied for target conditions, and the 6-S method was implemented for solution proposals. Applicable proposals were activated and the results were rated by six-sigma analysis. Non-applicable proposals were reported to the laboratory administrator. Mislabelling errors were the most frequently complained-about issue and triggered all of the pre-analytical errors. There were 21 non-value added steps grouped into 8 main targets on the fishbone graphic (transporting, recording, moving, individual, waiting, over-processing, over-transaction and errors). Unnecessary redundant requests, missing slides, archiving issues, redundant activities, and mislabelling errors were proposed to be solved by improving visibility and fixing spaghetti problems. Spatial re-organization, organizational marking, re-defining some operations, and labeling activities raised the six sigma score from 24% to 68% for all phases. Operational transactions such as implementation of a pathology laboratory system were suggested for long-term improvement. Laboratory management is a complex process. Quality control is an effective method to improve productivity. Systematic checking in a quality program may not always find and/or solve the problems. External observation may reveal crucial indicators of system failures, providing very simple solutions.
Enhancing the sensitivity to new physics in the tt¯ invariant mass distribution
NASA Astrophysics Data System (ADS)
Álvarez, Ezequiel
2012-08-01
We propose selection cuts on the LHC tt¯ production sample which should enhance the sensitivity to new physics signals in the study of the tt¯ invariant mass distribution. We show that selecting events in which the tt¯ object has little transverse and large longitudinal momentum enlarges the quark-fusion fraction of the sample and therefore increases its sensitivity to new physics which couples to quarks and not to gluons. We find that systematic error bars play a fundamental role and assume a simple model for them. We check how a non-visible new particle would become visible after the selection cuts enhance its resonance bump. A final realistic analysis should be done by the experimental groups with a correct evaluation of the systematic error bars.
Applying Jlint to Space Exploration Software
NASA Technical Reports Server (NTRS)
Artho, Cyrille; Havelund, Klaus
2004-01-01
Java is a very successful programming language which is also becoming widespread in embedded systems, where software correctness is critical. Jlint is a simple but highly efficient static analyzer that checks a Java program for several common errors, such as null pointer exceptions and overflow errors. It also includes checks for multi-threading problems, such as deadlocks and data races. The case study described here shows the effectiveness of Jlint in finding such errors; an analysis of the false positives among the multi-threading warnings gives an insight into design patterns commonly used in multi-threaded code. The results show that a few analysis techniques are sufficient to avoid almost all false positives. These techniques include investigating all possible callers and a few code idioms. Verifying the correct application of these patterns is still crucial, because their correct usage is not trivial.
NASA Astrophysics Data System (ADS)
Cavallari, Francesca; de Gruttola, Michele; Di Guida, Salvatore; Govi, Giacomo; Innocente, Vincenzo; Pfeiffer, Andreas; Pierro, Antonio
2011-12-01
Automatic, synchronous and reliable population of the condition databases is critical for the correct operation of the online selection as well as of the offline reconstruction and analysis of data. In this complex infrastructure, monitoring and fast detection of errors is a very challenging task. In this paper, we describe the CMS experiment system to process and populate the Condition Databases and make condition data promptly available both online for the high-level trigger and offline for reconstruction. The data are automatically collected using centralized jobs or are "dropped" by the users in dedicated services (offline and online drop-box), which synchronize them and take care of writing them into the online database. Then they are automatically streamed to the offline database, and thus are immediately accessible offline worldwide. The condition data are managed by different users using a wide range of applications. In normal operation the database monitor is used to provide simple timing information and the history of all transactions for all database accounts, and in the case of faults it is used to return simple error messages and more complete debugging information.
Linear and Order Statistics Combiners for Pattern Classification
NASA Technical Reports Server (NTRS)
Tumer, Kagan; Ghosh, Joydeep; Lau, Sonie (Technical Monitor)
2001-01-01
Several researchers have experimentally shown that substantial improvements can be obtained in difficult pattern recognition problems by combining or integrating the outputs of multiple classifiers. This chapter provides an analytical framework to quantify the improvements in classification results due to combining. The results apply to both linear combiners and order statistics combiners. We first show that to a first order approximation, the error rate obtained over and above the Bayes error rate is directly proportional to the variance of the actual decision boundaries around the Bayes optimum boundary. Combining classifiers in output space reduces this variance, and hence reduces the 'added' error. If N unbiased classifiers are combined by simple averaging, the added error rate can be reduced by a factor of N if the individual errors in approximating the decision boundaries are uncorrelated. Expressions are then derived for linear combiners which are biased or correlated, and the effect of output correlations on ensemble performance is quantified. For order statistics based non-linear combiners, we derive expressions that indicate how much the median, the maximum and in general the i-th order statistic can improve classifier performance. The analysis presented here facilitates the understanding of the relationships among error rates, classifier boundary distributions, and combining in output space. Experimental results on several public domain data sets are provided to illustrate the benefits of combining and to support the analytical results.
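A small simulation, under the stated assumptions of unbiased and uncorrelated classifiers, illustrating the 1/N reduction in decision-boundary variance obtained from simple averaging; the Gaussian noise model for each classifier's boundary is an assumption of this sketch.

```python
import numpy as np

rng = np.random.default_rng(1)
true_boundary = 0.0
n_trials, sigma = 100_000, 1.0

for N in (1, 5, 25):
    # Each classifier's decision boundary = truth + unbiased, uncorrelated noise
    boundaries = true_boundary + sigma * rng.standard_normal((n_trials, N))
    combined = boundaries.mean(axis=1)           # simple averaging in output space
    print(N, combined.var())                     # variance shrinks roughly as 1/N
```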
The use of self checks and voting in software error detection - An empirical study
NASA Technical Reports Server (NTRS)
Leveson, Nancy G.; Cha, Stephen S.; Knight, John C.; Shimeall, Timothy J.
1990-01-01
The results of an empirical study of software error detection using self checks and N-version voting are presented. Working independently, each of 24 programmers first prepared a set of self checks using just the requirements specification of an aerospace application, and then each added self checks to an existing implementation of that specification. The modified programs were executed to measure the error-detection performance of the checks and to compare this with error detection using simple voting among multiple versions. The analysis of the checks revealed that there are great differences in the ability of individual programmers to design effective checks. It was found that some checks that might have been effective failed to detect an error because they were badly placed, and there were numerous instances of checks signaling nonexistent errors. In general, specification-based checks alone were not as effective as specification-based checks combined with code-based checks. Self checks made it possible to identify faults that had not been detected previously by voting 28 versions of the program over a million randomly generated inputs. This appeared to result from the fact that the self checks could examine the internal state of the executing program, whereas voting examines only final results of computations. If internal states had to be identical in N-version voting systems, then there would be no reason to write multiple versions.
Indirect measurements of hydrogen: The deficit method for a many-component system
NASA Astrophysics Data System (ADS)
Levine, Timothy E.; Yu, Ning; Kodali, Padma; Walter, Kevin C.; Nastasi, Michael; Tesmer, Joseph R.; Maggiore, Carl J.; Mayer, James W.
We have developed a simple technique for determining hydrogen atomic fraction from the ion backscattering spectrometry (IBS) signals of the remaining species. This technique uses the surface heights of various IBS signals in the form of a linear matrix equation. We apply this technique to in situ analysis of ion-beam-induced densification of sol-gel zirconia thin films, where hydrogen is the most volatile species during irradiation. Attendant errors are discussed with an emphasis on stopping powers and Bragg's rule.
Practical uncertainty reduction and quantification in shock physics measurements
Akin, M. C.; Nguyen, J. H.
2015-04-20
We report the development of a simple error analysis sampling method for identifying intersections and inflection points to reduce total uncertainty in experimental data. This technique was used to reduce uncertainties in sound speed measurements by 80% over conventional methods. Here, we focused on its impact on a previously published set of Mo sound speed data and possible implications for phase transition and geophysical studies. However, this technique's application can be extended to a wide range of experimental data.
Caffeine enhances real-world language processing: evidence from a proofreading task.
Brunyé, Tad T; Mahoney, Caroline R; Rapp, David N; Ditman, Tali; Taylor, Holly A
2012-03-01
Caffeine has become the most prevalently consumed psychostimulant in the world, but its influences on daily real-world functioning are relatively unknown. The present work investigated the effects of caffeine (0 mg, 100 mg, 200 mg, 400 mg) on a commonplace language task that required readers to identify and correct 4 error types in extended discourse: simple local errors (misspelling 1- to 2-syllable words), complex local errors (misspelling 3- to 5-syllable words), simple global errors (incorrect homophones), and complex global errors (incorrect subject-verb agreement and verb tense). In 2 placebo-controlled, double-blind studies using repeated-measures designs, we found higher detection and repair rates for complex global errors, asymptoting at 200 mg in low consumers (Experiment 1) and peaking at 400 mg in high consumers (Experiment 2). In both cases, covariate analyses demonstrated that arousal state mediated the relationship between caffeine consumption and the detection and repair of complex global errors. Detection and repair rates for the other 3 error types were not affected by caffeine consumption. Taken together, we demonstrate that caffeine has differential effects on error detection and repair as a function of dose and error type, and this relationship is closely tied to caffeine's effects on subjective arousal state. These results support the notion that central nervous system stimulants may enhance global processing of language-based materials and suggest that such effects may originate in caffeine-related right hemisphere brain processes. Implications for understanding the relationships between caffeine consumption and real-world cognitive functioning are discussed. PsycINFO Database Record (c) 2012 APA, all rights reserved.
Multifractal diffusion entropy analysis: Optimal bin width of probability histograms
NASA Astrophysics Data System (ADS)
Jizba, Petr; Korbel, Jan
2014-11-01
In the framework of Multifractal Diffusion Entropy Analysis we propose a method for choosing an optimal bin-width in histograms generated from underlying probability distributions of interest. The method presented uses techniques of Rényi’s entropy and the mean squared error analysis to discuss the conditions under which the error in the multifractal spectrum estimation is minimal. We illustrate the utility of our approach by focusing on a scaling behavior of financial time series. In particular, we analyze the S&P500 stock index as sampled at a daily rate in the time period 1950-2013. In order to demonstrate a strength of the method proposed we compare the multifractal δ-spectrum for various bin-widths and show the robustness of the method, especially for large values of q. For such values, other methods in use, e.g., those based on moment estimation, tend to fail for heavy-tailed data or data with long correlations. Connection between the δ-spectrum and Rényi’s q parameter is also discussed and elucidated on a simple example of multiscale time series.
Error compensation for thermally induced errors on a machine tool
DOE Office of Scientific and Technical Information (OSTI.GOV)
Krulewich, D.A.
1996-11-08
Heat flow from internal and external sources and the environment creates machine deformations, resulting in positioning errors between the tool and workpiece. There is no industrially accepted method for thermal error compensation. A simple model has been selected that linearly relates discrete temperature measurements to the deflection. The biggest problem is how to locate the temperature sensors and how many temperature sensors are required. This research develops a method to determine the number and location of temperature measurements.
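A minimal sketch of the kind of linear model described above, relating discrete temperature readings to measured deflection by ordinary least squares; the design matrix and intercept term are assumptions of this example, not the report's specific model.

```python
import numpy as np

def fit_thermal_model(T, d):
    """Fit a linear model d ~ c0 + C @ T relating discrete temperature
    readings to measured tool-workpiece deflection.

    T : (n_samples, n_sensors) temperature measurements
    d : (n_samples,) measured deflection
    """
    A = np.column_stack([np.ones(len(T)), T])
    coef, *_ = np.linalg.lstsq(A, d, rcond=None)
    return coef                      # intercept plus one sensitivity per sensor

# Sensor selection can then be framed as keeping only those sensors whose
# coefficients contribute significantly to the predicted deflection.
```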
Management of high-risk perioperative systems.
Dain, Steven
2006-06-01
The perioperative system is a complex system that requires people, materials, and processes to come together in a highly ordered and timely manner. However, when working in this high-risk system, even well-organized, knowledgeable, vigilant, and well-intentioned individuals will eventually make errors. All systems need to be evaluated on a continual basis to reduce the risk of errors, make errors more easily recognizable, and provide methods for error mitigation. A simple approach to risk management that may be applied in clinical medicine is discussed.
Correcting for Measurement Error in Time-Varying Covariates in Marginal Structural Models.
Kyle, Ryan P; Moodie, Erica E M; Klein, Marina B; Abrahamowicz, Michał
2016-08-01
Unbiased estimation of causal parameters from marginal structural models (MSMs) requires a fundamental assumption of no unmeasured confounding. Unfortunately, the time-varying covariates used to obtain inverse probability weights are often error-prone. Although substantial measurement error in important confounders is known to undermine control of confounders in conventional unweighted regression models, this issue has received comparatively limited attention in the MSM literature. Here we propose a novel application of the simulation-extrapolation (SIMEX) procedure to address measurement error in time-varying covariates, and we compare 2 approaches. The direct approach to SIMEX-based correction targets outcome model parameters, while the indirect approach corrects the weights estimated using the exposure model. We assess the performance of the proposed methods in simulations under different clinically plausible assumptions. The simulations demonstrate that measurement errors in time-dependent covariates may induce substantial bias in MSM estimators of causal effects of time-varying exposures, and that both proposed SIMEX approaches yield practically unbiased estimates in scenarios featuring low-to-moderate degrees of error. We illustrate the proposed approach in a simple analysis of the relationship between sustained virological response and liver fibrosis progression among persons infected with hepatitis C virus, while accounting for measurement error in γ-glutamyltransferase, using data collected in the Canadian Co-infection Cohort Study from 2003 to 2014. © The Author 2016. Published by Oxford University Press on behalf of the Johns Hopkins Bloomberg School of Public Health. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
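A hedged sketch of the SIMEX idea for a plain linear outcome model (not the MSM weighting context of the paper): simulate extra measurement error at several levels, track the naive estimate, and extrapolate back to the zero-error level λ = -1; the quadratic extrapolant and the error levels are common but assumed choices.

```python
import numpy as np

def simex_slope(w, y, sigma_u, lambdas=(0.5, 1.0, 1.5, 2.0), B=200, seed=0):
    """Direct SIMEX sketch for a simple linear model y = b0 + b1*x, where
    w = x + U observes x with known measurement-error SD sigma_u."""
    rng = np.random.default_rng(seed)
    lam = np.array([0.0, *lambdas])
    slopes = []
    for l in lam:
        est = []
        for _ in range(B if l > 0 else 1):
            w_l = w + np.sqrt(l) * sigma_u * rng.standard_normal(len(w))
            est.append(np.polyfit(w_l, y, 1)[0])   # naive slope at error level (1+l)*sigma_u**2
        slopes.append(np.mean(est))
    # Extrapolate the trend back to lambda = -1, i.e. zero measurement error
    quad = np.polyfit(lam, slopes, 2)
    return np.polyval(quad, -1.0)
```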
Kedir, Jafer; Girma, Abonesh
2014-10-01
Refractive error is one of the major causes of blindness and visual impairment in children, but community-based studies are scarce, especially in rural parts of Ethiopia. This study therefore aims to assess the prevalence of refractive error and its magnitude as a cause of visual impairment among school-age children of a rural community. This community-based cross-sectional descriptive study was conducted from March 1 to April 30, 2009 in rural villages of Goro district of Gurage Zone, located southwest of Addis Ababa, the capital of Ethiopia. A multistage cluster sampling method was used with simple random selection of representative villages in the district. Chi-square and t-tests were used in the data analysis. A total of 570 school-age children (age 7-15) were evaluated, 54% boys and 46% girls. The prevalence of refractive error was 3.5% (myopia 2.6% and hyperopia 0.9%). Refractive error was the major cause of visual impairment, accounting for 54% of all causes in the study group. No child was found wearing corrective spectacles during the study period. Refractive error was the commonest cause of visual impairment in children of the district, but no measures were taken to reduce the burden in the community. Therefore, large-scale community-level screening for refractive error should be conducted and integrated with regular school eye screening programs. Effective strategies need to be devised to provide low cost corrective spectacles in the rural community.
Statistical methods for astronomical data with upper limits. I - Univariate distributions
NASA Technical Reports Server (NTRS)
Feigelson, E. D.; Nelson, P. I.
1985-01-01
The statistical treatment of univariate censored data is discussed. A heuristic derivation of the Kaplan-Meier maximum-likelihood estimator from first principles is presented which results in an expression amenable to analytic error analysis. Methods for comparing two or more censored samples are given along with simple computational examples, stressing the fact that most astronomical problems involve upper limits while the standard mathematical methods require lower limits. The application of univariate survival analysis to six data sets in the recent astrophysical literature is described, and various aspects of the use of survival analysis in astronomy, such as the limitations of various two-sample tests and the role of parametric modelling, are discussed.
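A minimal sketch of the product-limit (Kaplan-Meier) estimator for a censored univariate sample, ignoring ties for brevity; the flipping needed to turn astronomical upper limits (left-censoring) into the right-censored form handled here is noted in the docstring and is an assumption of the example.

```python
import numpy as np

def kaplan_meier(values, detected):
    """Product-limit (Kaplan-Meier) survival estimate, ignoring ties.

    values   : measured values (detections) or limits (non-detections)
    detected : boolean array, True where the value is an actual detection
    Astronomical upper limits are left-censored, so in practice one flips
    the variable (e.g. uses max(values) - values) before calling this.
    """
    order = np.argsort(values)
    v, d = np.asarray(values)[order], np.asarray(detected)[order]
    n = len(v)
    at_risk = n - np.arange(n)                       # n, n-1, ..., 1
    # multiply (1 - 1/at_risk) at detections only; censored points leave S unchanged
    surv = np.cumprod(np.where(d, 1.0 - 1.0 / at_risk, 1.0))
    return v, surv
```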
Wind scatterometry with improved ambiguity selection and rain modeling
NASA Astrophysics Data System (ADS)
Draper, David Willis
Although generally accurate, the quality of SeaWinds on QuikSCAT scatterometer ocean vector winds is compromised by certain natural phenomena and retrieval algorithm limitations. This dissertation addresses three main contributors to scatterometer estimate error: poor ambiguity selection, estimate uncertainty at low wind speeds, and rain corruption. A quality assurance (QA) analysis performed on SeaWinds data suggests that about 5% of SeaWinds data contain ambiguity selection errors and that scatterometer estimation error is correlated with low wind speeds and rain events. Ambiguity selection errors are partly due to the "nudging" step (initialization from outside data). A sophisticated new non-nudging ambiguity selection approach produces generally more consistent wind than the nudging method in moderate wind conditions. The non-nudging method selects 93% of the same ambiguities as the nudged data, validating both techniques, and indicating that ambiguity selection can be accomplished without nudging. Variability at low wind speeds is analyzed using tower-mounted scatterometer data. According to theory, below a threshold wind speed, the wind fails to generate the surface roughness necessary for wind measurement. A simple analysis suggests the existence of the threshold in much of the tower-mounted scatterometer data. However, the backscatter does not "go to zero" beneath the threshold in an uncontrolled environment as theory suggests, but rather has a mean drop and higher variability below the threshold. Rain is the largest weather-related contributor to scatterometer error, affecting approximately 4% to 10% of SeaWinds data. A simple model formed via comparison of co-located TRMM PR and SeaWinds measurements characterizes the average effect of rain on SeaWinds backscatter. The model is generally accurate to within 3 dB over the tropics. The rain/wind backscatter model is used to simultaneously retrieve wind and rain from SeaWinds measurements. The simultaneous wind/rain (SWR) estimation procedure can improve wind estimates during rain, while providing a scatterometer-based rain rate estimate. SWR also affords improved rain flagging for low to moderate rain rates. QuikSCAT-retrieved rain rates correlate well with TRMM PR instantaneous measurements and TMI monthly rain averages. SeaWinds rain measurements can be used to supplement data from other rain-measuring instruments, filling spatial and temporal gaps in coverage.
Decision aids for multiple-decision disease management as affected by weather input errors.
Pfender, W F; Gent, D H; Mahaffee, W F; Coop, L B; Fox, A D
2011-06-01
Many disease management decision support systems (DSSs) rely, exclusively or in part, on weather inputs to calculate an indicator for disease hazard. Error in the weather inputs, typically due to forecasting, interpolation, or estimation from off-site sources, may affect model calculations and management decision recommendations. The extent to which errors in weather inputs affect the quality of the final management outcome depends on a number of aspects of the disease management context, including whether management consists of a single dichotomous decision, or of a multi-decision process extending over the cropping season(s). Decision aids for multi-decision disease management typically are based on simple or complex algorithms of weather data which may be accumulated over several days or weeks. It is difficult to quantify accuracy of multi-decision DSSs due to temporally overlapping disease events, existence of more than one solution to optimizing the outcome, opportunities to take later recourse to modify earlier decisions, and the ongoing, complex decision process in which the DSS is only one component. One approach to assessing importance of weather input errors is to conduct an error analysis in which the DSS outcome from high-quality weather data is compared with that from weather data with various levels of bias and/or variance from the original data. We illustrate this analytical approach for two types of DSS, an infection risk index for hop powdery mildew and a simulation model for grass stem rust. Further exploration of analysis methods is needed to address problems associated with assessing uncertainty in multi-decision DSSs.
Quantum error-correction failure distributions: Comparison of coherent and stochastic error models
NASA Astrophysics Data System (ADS)
Barnes, Jeff P.; Trout, Colin J.; Lucarelli, Dennis; Clader, B. D.
2017-06-01
We compare failure distributions of quantum error correction circuits for stochastic errors and coherent errors. We utilize a fully coherent simulation of a fault-tolerant quantum error correcting circuit for a d =3 Steane and surface code. We find that the output distributions are markedly different for the two error models, showing that no simple mapping between the two error models exists. Coherent errors create very broad and heavy-tailed failure distributions. This suggests that they are susceptible to outlier events and that mean statistics, such as pseudothreshold estimates, may not provide the key figure of merit. This provides further statistical insight into why coherent errors can be so harmful for quantum error correction. These output probability distributions may also provide a useful metric that can be utilized when optimizing quantum error correcting codes and decoding procedures for purely coherent errors.
NASA Technical Reports Server (NTRS)
Migneault, G. E.
1979-01-01
Emulation techniques are proposed as a solution to a difficulty arising in the analysis of the reliability of highly reliable computer systems for future commercial aircraft. The difficulty, viz., the lack of credible precision in reliability estimates obtained by analytical modeling techniques, is established. The difficulty is shown to be an unavoidable consequence of: (1) a high reliability requirement so demanding as to make system evaluation by use testing infeasible, (2) a complex system design technique, fault tolerance, (3) system reliability dominated by errors due to flaws in the system definition, and (4) elaborate analytical modeling techniques whose precision outputs are quite sensitive to errors of approximation in their input data. The technique of emulation is described, indicating how its input is a simple description of the logical structure of a system and its output is the consequent behavior. The use of emulation techniques is discussed for pseudo-testing systems to evaluate bounds on the parameter values needed for the analytical techniques.
Olives, Casey; Pagano, Marcello; Deitchler, Megan; Hedt, Bethany L; Egge, Kari; Valadez, Joseph J
2009-01-01
Traditional lot quality assurance sampling (LQAS) methods require simple random sampling to guarantee valid results. However, cluster sampling has been proposed to reduce the number of random starting points. This study uses simulations to examine the classification error of two such designs, a 67×3 (67 clusters of three observations) and a 33×6 (33 clusters of six observations) sampling scheme to assess the prevalence of global acute malnutrition (GAM). Further, we explore the use of a 67×3 sequential sampling scheme for LQAS classification of GAM prevalence. Results indicate that, for independent clusters with moderate intracluster correlation for the GAM outcome, the three sampling designs maintain approximate validity for LQAS analysis. Sequential sampling can substantially reduce the average sample size that is required for data collection. The presence of intercluster correlation can impact dramatically the classification error that is associated with LQAS analysis. PMID:20011037
An IMU-to-Body Alignment Method Applied to Human Gait Analysis.
Vargas-Valencia, Laura Susana; Elias, Arlindo; Rocon, Eduardo; Bastos-Filho, Teodiano; Frizera, Anselmo
2016-12-10
This paper presents a novel calibration procedure as a simple, yet powerful, method to place and align inertial sensors with body segments. The calibration can be easily replicated without the need of any additional tools. The proposed method is validated in three different applications: a computer mathematical simulation; a simplified joint composed of two semi-spheres interconnected by a universal goniometer; and a real gait test with five able-bodied subjects. Simulation results demonstrate that, after the calibration method is applied, the joint angles are correctly measured independently of previous sensor placement on the joint, thus validating the proposed procedure. In the cases of a simplified joint and a real gait test with human volunteers, the method also performs correctly, although secondary plane errors appear when compared with the simulation results. We believe that such errors are caused by limitations of the current inertial measurement unit (IMU) technology and fusion algorithms. In conclusion, the presented calibration procedure is an interesting option to solve the alignment problem when using IMUs for gait analysis.
A ground track control algorithm for the Topographic Mapping Laser Altimeter (TMLA)
NASA Technical Reports Server (NTRS)
Blaes, V.; Mcintosh, R.; Roszman, L.; Cooley, J.
1993-01-01
The results are presented of an analysis of an algorithm that will provide autonomous onboard orbit control using orbits determined with Global Positioning System (GPS) data. The algorithm uses the GPS data to (1) compute the ground track error relative to a fixed longitude grid, and (2) determine the altitude adjustment required to correct the longitude error. A program was written on a personal computer (PC) to test the concept for numerous altitudes and values of solar flux using a simplified orbit model including only the J2 zonal harmonic and simple orbit decay computations. The algorithm was then implemented in a precision orbit propagation program having a full range of perturbations. The analysis showed that, even with all perturbations (including actual time histories of solar flux variation), the algorithm could effectively control the spacecraft ground track and yield more than 99 percent Earth coverage in the time required to complete one coverage cycle on the fixed grid (220 to 230 days depending on altitude and overlap allowance).
Avery, Anthony J; Rodgers, Sarah; Cantrill, Judith A; Armstrong, Sarah; Cresswell, Kathrin; Eden, Martin; Elliott, Rachel A; Howard, Rachel; Kendrick, Denise; Morris, Caroline J; Prescott, Robin J; Swanwick, Glen; Franklin, Matthew; Putman, Koen; Boyd, Matthew; Sheikh, Aziz
2012-01-01
Summary Background Medication errors are common in primary care and are associated with considerable risk of patient harm. We tested whether a pharmacist-led, information technology-based intervention was more effective than simple feedback in reducing the number of patients at risk of measures related to hazardous prescribing and inadequate blood-test monitoring of medicines 6 months after the intervention. Methods In this pragmatic, cluster randomised trial general practices in the UK were stratified by research site and list size, and randomly assigned by a web-based randomisation service in block sizes of two or four to one of two groups. The practices were allocated to either computer-generated simple feedback for at-risk patients (control) or a pharmacist-led information technology intervention (PINCER), composed of feedback, educational outreach, and dedicated support. The allocation was masked to general practices, patients, pharmacists, researchers, and statisticians. Primary outcomes were the proportions of patients at 6 months after the intervention who had had any of three clinically important errors: non-selective non-steroidal anti-inflammatory drugs (NSAIDs) prescribed to those with a history of peptic ulcer without co-prescription of a proton-pump inhibitor; β blockers prescribed to those with a history of asthma; long-term prescription of angiotensin converting enzyme (ACE) inhibitor or loop diuretics to those 75 years or older without assessment of urea and electrolytes in the preceding 15 months. The cost per error avoided was estimated by incremental cost-effectiveness analysis. This study is registered with Controlled-Trials.com, number ISRCTN21785299. Findings 72 general practices with a combined list size of 480 942 patients were randomised. At 6 months' follow-up, patients in the PINCER group were significantly less likely to have been prescribed a non-selective NSAID if they had a history of peptic ulcer without gastroprotection (OR 0·58, 95% CI 0·38–0·89); a β blocker if they had asthma (0·73, 0·58–0·91); or an ACE inhibitor or loop diuretic without appropriate monitoring (0·51, 0·34–0·78). PINCER has a 95% probability of being cost effective if the decision-maker's ceiling willingness to pay reaches £75 per error avoided at 6 months. Interpretation The PINCER intervention is an effective method for reducing a range of medication errors in general practices with computerised clinical records. Funding Patient Safety Research Portfolio, Department of Health, England. PMID:22357106
An Empirical State Error Covariance Matrix for the Weighted Least Squares Estimation Method
NASA Technical Reports Server (NTRS)
Frisbee, Joseph H., Jr.
2011-01-01
State estimation techniques effectively provide mean state estimates. However, the theoretical state error covariance matrices provided as part of these techniques often suffer from a lack of confidence in their ability to describe the uncertainty in the estimated states. By a reinterpretation of the equations involved in the weighted least squares algorithm, it is possible to directly arrive at an empirical state error covariance matrix. This proposed empirical state error covariance matrix will contain the effect of all error sources, known or not. Results based on the proposed technique will be presented for a simple, two observer, measurement error only problem.
An, Zhao; Wen-Xin, Zhang; Zhong, Yao; Yu-Kuan, Ma; Qing, Liu; Hou-Lang, Duan; Yi-di, Shang
2016-06-29
To optimize and simplify the survey method for Oncomelania hupensis snails in marshland endemic regions of schistosomiasis, and to increase the precision, efficiency and economy of the snail survey. A 50 m × 50 m experimental plot was selected in Chayegang marshland near Henghu farm in the Poyang Lake region, and a whole-coverage method was adopted to survey the snails. Simple random sampling, systematic sampling and stratified random sampling methods were applied to calculate the minimum sample size, relative sampling error and absolute sampling error. The minimum sample sizes of the simple random sampling, systematic sampling and stratified random sampling methods were 300, 300 and 225, respectively. The relative sampling errors of the three methods were all less than 15%. The absolute sampling errors were 0.2217, 0.3024 and 0.0478, respectively. Spatially stratified sampling with altitude as the stratum variable is an efficient approach, with lower cost and higher precision, for the snail survey.
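For illustration, a standard minimum-sample-size calculation for estimating a prevalence, with a design-effect multiplier for cluster or stratified designs; the prevalence, precision, and design-effect values used here are assumptions of the example, not the survey's actual parameters.

```python
import math

def min_sample_size(p=0.05, d=0.03, z=1.96, deff=1.0):
    """Minimum sample size for estimating a prevalence p with absolute
    precision d at 95% confidence, inflated by a design effect for
    cluster or stratified designs."""
    n = (z**2) * p * (1.0 - p) / d**2
    return math.ceil(n * deff)

# e.g. simple random sampling vs. a cluster design with an assumed design effect of 2
print(min_sample_size(deff=1.0), min_sample_size(deff=2.0))
```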
Li, Zhengqiang; Li, Kaitao; Li, Donghui; Yang, Jiuchun; Xu, Hua; Goloub, Philippe; Victori, Stephane
2016-09-20
The Cimel new technologies allow both daytime and nighttime aerosol optical depth (AOD) measurements. Although the daytime AOD calibration protocols are well established, accurate and simple nighttime calibration is still a challenging task. Standard lunar-Langley and intercomparison calibration methods both require specific conditions in terms of atmospheric stability and site condition. Additionally, the lunar irradiance model also has some known limits on its uncertainty. This paper presents a simple calibration method that transfers the direct-Sun calibration constant, V0,Sun, to the lunar irradiance calibration coefficient, CMoon. Our approach is a pure calculation method, independent of site limits, e.g., Moon phase. The method is also not affected by the lunar irradiance model limitations, which is the largest error source of traditional calibration methods. Besides, this new transfer calibration approach is easy to use in the field since CMoon can be obtained directly once V0,Sun is known. Error analysis suggests that the average uncertainty of CMoon over the 440-1640 nm bands obtained with the transfer method is 2.4%-2.8%, depending on the V0,Sun approach (Langley or intercomparison), which is comparable with that of lunar-Langley approach, theoretically. In this paper, the Sun-Moon transfer and the Langley methods are compared based on site measurements in Beijing, and the day-night measurement continuity and performance are analyzed.
I'll take that to go: Big data bags and minimal identifiers for exchange of large, complex datasets
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chard, Kyle; D'Arcy, Mike; Heavner, Benjamin D.
Big data workflows often require the assembly and exchange of complex, multi-element datasets. For example, in biomedical applications, the input to an analytic pipeline can be a dataset consisting of thousands of images and genome sequences assembled from diverse repositories, requiring a description of the contents of the dataset in a concise and unambiguous form. Typical approaches to creating datasets for big data workflows assume that all data reside in a single location, requiring costly data marshaling and permitting errors of omission and commission because dataset members are not explicitly specified. We address these issues by proposing simple methods and tools for assembling, sharing, and analyzing large and complex datasets that scientists can easily integrate into their daily workflows. These tools combine a simple and robust method for describing data collections (BDBags), data descriptions (Research Objects), and simple persistent identifiers (Minids) to create a powerful ecosystem of tools and services for big data analysis and sharing. We present these tools and use biomedical case studies to illustrate their use for the rapid assembly, sharing, and analysis of large datasets.
Explicit bounds for the positive root of classes of polynomials with applications
NASA Astrophysics Data System (ADS)
Herzberger, Jürgen
2003-03-01
We consider a certain type of polynomial equation for which, according to Descartes' rule of signs, there exists only one simple positive root. These equations occur in numerical analysis when calculating or estimating the R-order or Q-order of convergence of certain iterative processes with an error recursion of special form. On the other hand, these polynomial equations are very common as defining equations for the effective rate of return of certain cashflows, such as bonds or annuities, in finance. The effective rate of interest i* for those cashflows is i* = q* - 1, where q* is the unique positive root of such a polynomial. We construct bounds for i* for a special problem concerning an ordinary simple annuity, obtained by changing the conditions of such an annuity with given data according to the German rule (Preisangabeverordnung, or PAngV for short). Moreover, we consider a number of results for such polynomial roots in numerical analysis and show that, by a simple variable transformation, several formulas can be derived from earlier results. The same is possible in finance in order to generalize results to more complicated cashflows.
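As an illustration of the kind of defining equation involved, the sketch below finds the unique positive root of a standard ordinary-annuity present-value polynomial by simple bracketing; the polynomial form, the crude bracket (1, 2] and the loan numbers are assumptions for illustration, and the paper's explicit bounds would replace the crude bracket.

```python
import numpy as np
from scipy.optimize import brentq

def effective_rate(price, payment, n):
    """Unique positive root q* of the annuity polynomial
    price*q**n*(q-1) - payment*(q**n - 1) = 0; returns i* = q* - 1.
    By Descartes' rule of signs this polynomial has exactly one positive root."""
    f = lambda q: price * q**n * (q - 1.0) - payment * (q**n - 1.0)
    return brentq(f, 1.0 + 1e-9, 2.0) - 1.0

# Hypothetical example: a loan of 1000 repaid by 12 payments of 90
i_star = effective_rate(1000.0, 90.0, 12)   # roughly 1.2% per period
```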
Global horizontal irradiance clear sky models : implementation and analysis.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stein, Joshua S.; Hansen, Clifford W.; Reno, Matthew J.
2012-03-01
Clear sky models estimate the terrestrial solar radiation under a cloudless sky as a function of the solar elevation angle, site altitude, aerosol concentration, water vapor, and various atmospheric conditions. This report provides an overview of a number of global horizontal irradiance (GHI) clear sky models, from very simple to complex. Validation of clear-sky models requires comparison of model results to measured irradiance during clear-sky periods. To facilitate validation, we present a new algorithm for automatically identifying clear-sky periods in a time series of GHI measurements. We evaluate the performance of selected clear-sky models using measured data from 30 different sites, totaling about 300 site-years of data. We analyze the variation of these errors across time and location. In terms of error averaged over all locations and times, we found that complex models that correctly account for all the atmospheric parameters are slightly more accurate than other models, but, primarily at low elevations, comparable accuracy can be obtained from some simpler models. However, simpler models often exhibit errors that vary with time of day and season, whereas the errors for complex models vary less over time.
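One of the simplest zenith-angle-only clear-sky formulations commonly used as a baseline in such comparisons is the Haurwitz model; a minimal sketch follows, with coefficients as commonly cited in the literature. Whether this exact model is among those evaluated in the report is not stated here.

```python
import numpy as np

def haurwitz_ghi(zenith_deg):
    """Haurwitz clear-sky GHI (W/m^2), a zenith-angle-only baseline model;
    coefficient values as commonly cited in the literature."""
    cosz = np.cos(np.radians(np.asarray(zenith_deg, dtype=float)))
    ghi = np.zeros_like(cosz)
    up = cosz > 0                       # sun above the horizon
    ghi[up] = 1098.0 * cosz[up] * np.exp(-0.059 / cosz[up])
    return ghi

print(haurwitz_ghi([0.0, 30.0, 60.0, 85.0]))
```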
Degradation data analysis based on a generalized Wiener process subject to measurement error
NASA Astrophysics Data System (ADS)
Li, Junxing; Wang, Zhihua; Zhang, Yongbo; Fu, Huimin; Liu, Chengrui; Krishnaswamy, Sridhar
2017-09-01
Wiener processes have received considerable attention in degradation modeling over the last two decades. In this paper, we propose a generalized Wiener process degradation model that takes unit-to-unit variation, time-correlated structure and measurement error into consideration simultaneously. The constructed methodology subsumes a series of models studied in the literature as limiting cases. A simple method is given to determine the transformed time scale forms of the Wiener process degradation model. Model parameters can then be estimated by maximum likelihood estimation (MLE). The cumulative distribution function (CDF) and the probability density function (PDF) of the Wiener process with measurement errors are given based on the concept of the first hitting time (FHT). The percentiles of performance degradation (PD) and the failure time distribution (FTD) are also obtained. Finally, a comprehensive simulation study is conducted to demonstrate the necessity of incorporating measurement errors in the degradation model and the efficiency of the proposed model. Two illustrative real applications involving the degradation of carbon-film resistors and the wear of sliding metal are given. The comparative results show that the constructed approach yields reasonable results with enhanced inference precision.
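A minimal simulation sketch of the modeling idea, assuming a power-law transformed time scale and additive Gaussian measurement error; unit-to-unit variation and the paper's estimation machinery (MLE, FHT-based CDF/PDF) are not reproduced, and all parameter values are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_degradation(t, mu, sigma_b, sigma_eps, q=1.0, n_units=5):
    """Wiener degradation paths on a transformed time scale Lambda(t) = t**q,
    observed with additive Gaussian measurement error (illustrative only)."""
    lam = t ** q
    dlam = np.diff(lam, prepend=0.0)
    paths = []
    for _ in range(n_units):
        increments = mu * dlam + sigma_b * np.sqrt(dlam) * rng.standard_normal(len(t))
        x = np.cumsum(increments)                        # latent degradation
        y = x + sigma_eps * rng.standard_normal(len(t))  # observed with error
        paths.append(y)
    return np.array(paths)

obs = simulate_degradation(np.linspace(0.1, 10, 50),
                           mu=0.8, sigma_b=0.3, sigma_eps=0.2, q=1.2)
```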
Barringer, J.L.; Johnsson, P.A.
1996-01-01
Titrations for alkalinity and acidity using the technique described by Gran (1952, Determination of the equivalence point in potentiometric titrations, Part II: The Analyst, v. 77, p. 661-671) have been employed in the analysis of low-pH natural waters. This report includes a synopsis of the theory and calculations associated with Gran's technique and presents a simple and inexpensive method for performing alkalinity and acidity determinations. However, potential sources of error introduced by the chemical character of some waters may limit the utility of Gran's technique. Therefore, the cost- and time-efficient method for performing alkalinity and acidity determinations described in this report is useful for exploring the suitability of Gran's technique in studies of water chemistry.
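A minimal sketch of the alkalinity branch of a Gran-type calculation, assuming post-equivalence titration data and ignoring activity corrections and the acidity branch; the numbers are hypothetical but internally consistent (equivalence volume near 2 mL), so the fit recovers an alkalinity of roughly 8e-4 eq/L.

```python
import numpy as np

def gran_alkalinity(v0_ml, acid_conc, v_acid_ml, pH):
    """Beyond the equivalence point the Gran function F = (V0 + V) * 10**(-pH)
    is approximately linear in titrant volume V; its zero-intercept gives the
    equivalence volume Ve, and alkalinity = C * Ve / V0 (simplified sketch)."""
    F = (v0_ml + v_acid_ml) * 10.0 ** (-pH)
    slope, intercept = np.polyfit(v_acid_ml, F, 1)
    ve = -intercept / slope              # equivalence volume, mL
    return acid_conc * ve / v0_ml        # alkalinity, same units as acid_conc

alk = gran_alkalinity(50.0, 0.02,
                      np.array([3.0, 3.5, 4.0, 4.5]),
                      np.array([3.42, 3.25, 3.13, 3.04]))
```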
NASA Astrophysics Data System (ADS)
Mahmood, Ehab A.; Rana, Sohel; Hussin, Abdul Ghapor; Midi, Habshah
2016-06-01
The circular regression model may contain one or more data points that appear to be peculiar or inconsistent with the main part of the model. This may occur due to recording errors, sudden short events, sampling under abnormal conditions, etc. The existence of these data points, "outliers", in the data set causes many problems for research results and conclusions. Therefore, we should identify them before applying statistical analysis. In this article, we aim to propose a statistic to identify outliers in both the response and explanatory variables of the simple circular regression model. Our proposed statistic is the robust circular distance RCDxy, and it is justified by three robust measures: the proportion of detected outliers, and the masking and swamping rates.
Preparing data for analysis using microsoft Excel.
Elliott, Alan C; Hynan, Linda S; Reisch, Joan S; Smith, Janet P
2006-09-01
A critical component essential to good research is the accurate and efficient collection and preparation of data for analysis. Most medical researchers have little or no training in data management, often causing not only excessive time spent cleaning data but also a risk that the data set contains collection or recording errors. The implementation of simple guidelines based on techniques used by professional data management teams will save researchers time and money and result in a data set better suited to answer research questions. Because Microsoft Excel is often used by researchers to collect data, specific techniques that can be implemented in Excel are presented.
Systematic errors of EIT systems determined by easily-scalable resistive phantoms.
Hahn, G; Just, A; Dittmar, J; Hellige, G
2008-06-01
We present a simple method to determine systematic errors that will occur in the measurements by EIT systems. The approach is based on very simple scalable resistive phantoms for EIT systems using a 16 electrode adjacent drive pattern. The output voltage of the phantoms is constant for all combinations of current injection and voltage measurements and the trans-impedance of each phantom is determined by only one component. It can be chosen independently from the input and output impedance, which can be set in order to simulate measurements on the human thorax. Additional serial adapters allow investigation of the influence of the contact impedance at the electrodes on resulting errors. Since real errors depend on the dynamic properties of an EIT system, the following parameters are accessible: crosstalk, the absolute error of each driving/sensing channel and the signal to noise ratio in each channel. Measurements were performed on a Goe-MF II EIT system under four different simulated operational conditions. We found that systematic measurement errors always exceeded the error level of stochastic noise since the Goe-MF II system had been optimized for a sufficient signal to noise ratio but not for accuracy. In time difference imaging and functional EIT (f-EIT) systematic errors are reduced to a minimum by dividing the raw data by reference data. This is not the case in absolute EIT (a-EIT) where the resistivity of the examined object is determined on an absolute scale. We conclude that a reduction of systematic errors has to be one major goal in future system design.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Takahashi, R; Kamima, T; Tachibana, H
2016-06-15
Purpose: To investigate the effect of the trajectory files from the linear accelerator on Clarkson-based independent dose verification in IMRT and VMAT plans. Methods: A CT-based independent dose verification software (Simple MU Analysis: SMU, Triangle Products, Japan) with a Clarkson-based algorithm was modified to calculate dose using the trajectory log files. Eclipse with the three techniques of step and shoot (SS), sliding window (SW) and RapidArc (RA) was used as the treatment planning system (TPS). In this study, clinically approved IMRT and VMAT plans for prostate and head and neck (HN) at two institutions were retrospectively analyzed to assess the dose deviation between the DICOM-RT plan (PL) and the trajectory log file (TJ). An additional analysis was performed to evaluate the MLC error detection capability of SMU when the trajectory log files were modified by adding systematic errors (0.2, 0.5, 1.0 mm) and random errors (5, 10, 30 mm) to the actual MLC positions. Results: The dose deviations for prostate and HN at the two sites were 0.0% and 0.0% in SS, 0.1±0.0% and 0.1±0.1% in SW, and 0.6±0.5% and 0.7±0.9% in RA, respectively. The MLC error detection capability showed that the HN IMRT plans were the most sensitive, with a 0.2 mm systematic error producing a 0.7% dose deviation on average. The MLC random errors did not affect the dose error. Conclusion: The use of trajectory log files, which include the actual MLC positions, gantry angles, etc., should be more effective for an independent verification. The tolerance level for the secondary check using the trajectory file may be similar to that of the verification using the DICOM-RT plan file. In terms of the resolution of MLC positional error detection, the secondary check could detect MLC position errors corresponding to the treatment sites and techniques. This research is partially supported by the Japan Agency for Medical Research and Development (AMED).
Neural network for photoplethysmographic respiratory rate monitoring
NASA Astrophysics Data System (ADS)
Johansson, Anders
2001-10-01
The photoplethysmographic signal (PPG) includes respiratory components seen as frequency modulation of the heart rate (respiratory sinus arrhythmia, RSA), amplitude modulation of the cardiac pulse, and respiratory induced intensity variations (RIIV) in the PPG baseline. The aim of this study was to evaluate the accuracy of these components in determining respiratory rate, and to combine the components in a neural network for improved accuracy. The primary goal is to design a PPG ventilation monitoring system. PPG signals were recorded from 15 healthy subjects. From these signals, the systolic waveform, diastolic waveform, respiratory sinus arrhythmia, pulse amplitude and RIIV were extracted. By using simple algorithms, the rates of false positive and false negative detection of breaths were calculated for each of the five components in a separate analysis. Furthermore, a simple neural network (NN) was tried out in a combined pattern recognition approach. In the separate analysis, the error rates (sum of false positives and false negatives) ranged from 9.7% (pulse amplitude) to 14.5% (systolic waveform). The corresponding value of the NN analysis was 9.5-9.6%.
An approximate methods approach to probabilistic structural analysis
NASA Technical Reports Server (NTRS)
Mcclung, R. C.; Millwater, H. R.; Wu, Y.-T.; Thacker, B. H.; Burnside, O. H.
1989-01-01
A major research and technology program in Probabilistic Structural Analysis Methods (PSAM) is currently being sponsored by the NASA Lewis Research Center with Southwest Research Institute as the prime contractor. This program is motivated by the need to accurately predict structural response in an environment where the loadings, the material properties, and even the structure may be considered random. The heart of PSAM is a software package which combines advanced structural analysis codes with a fast probability integration (FPI) algorithm for the efficient calculation of stochastic structural response. The basic idea of PSAM is simple: make an approximate calculation of system response, including calculation of the associated probabilities, with minimal computation time and cost, based on a simplified representation of the geometry, loads, and material. The resulting deterministic solution should give a reasonable and realistic description of performance-limiting system responses, although some error will be inevitable. If the simple model has correctly captured the basic mechanics of the system, however, including the proper functional dependence of stress, frequency, etc. on design parameters, then the response sensitivities calculated may be of significantly higher accuracy.
Lina, Xu; Feng, Li; Yanyun, Zhang; Nan, Gao; Mingfang, Hu
2016-12-01
To explore the phonological characteristics and rehabilitation training of abnormal velar articulation in patients with functional articulation disorders (FAD). The phonological characteristics of velars were observed in 87 patients with FAD, and 72 patients with abnormal velars received speech training. Correlation and simple linear regression analyses were carried out on abnormal velar articulation and age. The articulation disorder of /g/ mainly showed replacement by /d/ or /b/, or omission; /k/ mainly showed replacement by /d/, /t/, /g/, /p/ or /b/; and /h/ mainly showed replacement by /g/, /f/, /p/ or /b/, or omission. The common erroneous articulation forms of /g/, /k/ and /h/ were fronting of the tongue and replacement by bilabial consonants. When velars were combined with vowels containing /a/ and /e/, the main error was fronting of the tongue; when velars were combined with vowels containing /u/, the errors tended to be replacement by bilabial consonants. After 3 to 10 sessions of speech training, the number of erroneous words decreased from 40.28±6.08 before training to 6.24±2.61, a statistically significant difference (Z=-7.379, P=0.000). The number of erroneous words was negatively correlated with age (r=-0.691, P=0.000). Simple linear regression analysis gave a coefficient of determination of 0.472. The articulation disorder of velars mainly shows replacement and varies with the accompanying vowels. The targeted rehabilitation training established here is significantly effective. Age plays an important role in the outcome of velar articulation.
Design of Complex BPF with Automatic Digital Tuning Circuit for Low-IF Receivers
NASA Astrophysics Data System (ADS)
Kondo, Hideaki; Sawada, Masaru; Murakami, Norio; Masui, Shoichi
This paper describes the architecture and implementations of an automatic digital tuning circuit for a complex bandpass filter (BPF) in a low-power and low-cost transceiver for applications such as personal authentication and wireless sensor network systems. The architectural design analysis demonstrates that an active RC filter in a low-IF architecture can be at least 47.7% smaller in area than a conventional gm-C filter; in addition, it features a simple implementation of an associated tuning circuit. The principle of simultaneous tuning of both the center frequency and the bandwidth through calibration of a capacitor array is illustrated based on an analysis of the filter characteristics, and a scalable automatic digital tuning circuit with simple analog blocks and control logic of only 835 gates is introduced. The developed capacitor tuning technique can achieve a tuning error of less than ±3.5% and reduce peaking in the passband filter characteristics. An experimental complex BPF using 0.18 µm CMOS technology successfully reduces the tuning error from an initial value of -20% to less than ±2.5% after tuning. The filter block dimensions are 1.22 mm × 1.01 mm; in measurements of the developed complex BPF with the automatic digital tuning circuit, the current consumption is 705 µA and the image rejection ratio is 40.3 dB. Complete evaluation of the BPF indicates that this technique can be applied to low-power, low-cost transceivers.
Process yield improvements with process control terminal for varian serial ion implanters
NASA Astrophysics Data System (ADS)
Higashi, Harry; Soni, Ameeta; Martinez, Larry; Week, Ken
Implant processes in a modern wafer production fab are extremely complex. There can be several types of misprocessing, i.e. wrong dose or species, double implants and missed implants. Process Control Terminals (PCT) for Varian 350Ds installed at Intel fabs were found to substantially reduce the number of misprocessing steps. This paper describes those misprocessing steps and their subsequent reduction with use of PCTs. Reliable and simple process control with serial process ion implanters has been in increasing demand. A well designed process control terminal greatly increases device yield by monitoring all pertinent implanter functions and enabling process engineering personnel to set up process recipes for simple and accurate system operation. By programming user-selectable interlocks, implant errors are reduced and those that occur are logged for further analysis and prevention. A process control terminal should also be compatible with office personal computers for greater flexibility in system use and data analysis. The impact from the capability of a process control terminal is increased productivity, ergo higher device yield.
Risk-Based Sampling: I Don't Want to Weight in Vain.
Powell, Mark R
2015-12-01
Recently, there has been considerable interest in developing risk-based sampling for food safety and animal and plant health for efficient allocation of inspection and surveillance resources. The problem of risk-based sampling allocation presents a challenge similar to financial portfolio analysis. Markowitz (1952) laid the foundation for modern portfolio theory based on mean-variance optimization. However, a persistent challenge in implementing portfolio optimization is the problem of estimation error, leading to false "optimal" portfolios and unstable asset weights. In some cases, portfolio diversification based on simple heuristics (e.g., equal allocation) has better out-of-sample performance than complex portfolio optimization methods due to estimation uncertainty. Even for portfolios with a modest number of assets, the estimation window required for true optimization may imply an implausibly long stationary period. The implications for risk-based sampling are illustrated by a simple simulation model of lot inspection for a small, heterogeneous group of producers. © 2015 Society for Risk Analysis.
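A toy illustration (not the paper's lot-inspection model) of why estimation error can make simple equal allocation competitive: with independent, equal-variance "assets", equal weights are truly optimal, while minimum-variance weights built from a short estimation window inherit sampling noise. All sizes and the covariance structure are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical "true" risk structure for 5 producers: independent, equal variance.
n_assets, n_obs = 5, 24
true_cov = np.eye(n_assets)

# Estimate the covariance from a short window and build minimum-variance weights.
sample = rng.multivariate_normal(np.zeros(n_assets), true_cov, size=n_obs)
S = np.cov(sample, rowvar=False)
w_opt = np.linalg.solve(S, np.ones(n_assets))
w_opt /= w_opt.sum()
w_equal = np.ones(n_assets) / n_assets

# Out-of-sample (true) variance of each allocation: estimation error makes the
# "optimized" weights noisier than the equal-allocation heuristic (true optimum 0.2).
var_opt = w_opt @ true_cov @ w_opt
var_equal = w_equal @ true_cov @ w_equal
```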
Empirical State Error Covariance Matrix for Batch Estimation
NASA Technical Reports Server (NTRS)
Frisbee, Joe
2015-01-01
State estimation techniques effectively provide mean state estimates. However, the theoretical state error covariance matrices provided as part of these techniques often suffer from a lack of confidence in their ability to describe the uncertainty in the estimated states. By a reinterpretation of the equations involved in the weighted batch least squares algorithm, it is possible to arrive directly at an empirical state error covariance matrix. The proposed empirical state error covariance matrix will contain the effect of all error sources, known or not. This empirical error covariance matrix may be calculated as a side computation for each unique batch solution. Results based on the proposed technique will be presented for a simple, two-observer, measurement-error-only problem.
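A minimal sketch of the weighted batch least squares quantities involved, showing the formal covariance (H'WH)^-1 and, for contrast, one common residual-based rescaling; the paper's empirical covariance comes from its own reinterpretation of these equations and is not reproduced here.

```python
import numpy as np

def batch_wls(H, y, W):
    """Weighted batch least squares with the formal covariance and a simple
    residual-scaled variant (illustrative; not the paper's empirical construction)."""
    HtW = H.T @ W
    P_formal = np.linalg.inv(HtW @ H)          # formal covariance (H^T W H)^-1
    x_hat = P_formal @ HtW @ y                 # state estimate
    r = y - H @ x_hat                          # post-fit residuals
    dof = H.shape[0] - H.shape[1]
    P_scaled = P_formal * (r @ W @ r) / dof    # residual-based rescaling
    return x_hat, P_formal, P_scaled
```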
Explicitly solvable complex Chebyshev approximation problems related to sine polynomials
NASA Technical Reports Server (NTRS)
Freund, Roland
1989-01-01
Explicitly solvable real Chebyshev approximation problems on the unit interval are typically characterized by simple error curves. A similar principle is presented for complex approximation problems with error curves induced by sine polynomials. As an application, some new explicit formulae for complex best approximations are derived.
Testing Intelligently Includes Double-Checking Wechsler IQ Scores
ERIC Educational Resources Information Center
Kuentzel, Jeffrey G.; Hetterscheidt, Lesley A.; Barnett, Douglas
2011-01-01
The rigors of standardized testing make for numerous opportunities for examiner error, including simple computational mistakes in scoring. Although experts recommend that test scoring be double-checked, the extent to which independent double-checking would reduce scoring errors is not known. A double-checking procedure was established at a…
Evaluation of the CEAS model for barley yields in North Dakota and Minnesota
NASA Technical Reports Server (NTRS)
Barnett, T. L. (Principal Investigator)
1981-01-01
The CEAS yield model is based upon multiple regression analysis at the CRD and state levels. For the historical time series, yield is regressed on a set of variables derived from monthly mean temperature and monthly precipitation. Technological trend is represented by piecewise linear and/or quadratic functions of year. Indicators of yield reliability obtained from a ten-year bootstrap test (1970-79) demonstrated that biases are small and that performance, as indicated by the root mean square errors, is acceptable for the intended application; however, model response for individual years, particularly unusual years, is not very reliable and shows some large errors. The model is objective, adequate, timely, simple and not costly. It considers scientific knowledge on a broad scale but not in detail, and does not provide a good current measure of modeled yield reliability.
An Analysis of Website Accessibility in Higher Education in Indonesia Based on WCAG 2.0 Guidelines
NASA Astrophysics Data System (ADS)
Arasid, W.; Abdullah, A. G.; Wahyudin, D.; Abdullah, C. U.; Widiaty, I.; Zakaria, D.; Amelia, N.; Juhana, A.
2018-02-01
Website accessibility means that a website can easily be accessed by everyone, so that the information on it can be readily understood. This study aims to improve the accessibility of university websites by analyzing their accessibility problems based on the WCAG 2.0 guidelines. It analyzed 13 university websites in West Java, Indonesia, using TAW as the evaluation tool. The evaluation results are presented in a graph showing the error rate of each university's website. The errors that occurred in almost all websites concerned: non-text content, info and relationships, page title, link purpose, language of page, on input, labels and instructions, parsing, and name, role, value criteria. This study is expected to provide information to the universities and to serve as a guideline for website accessibility improvements.
Credit Risk Evaluation Using a C-Variable Least Squares Support Vector Classification Model
NASA Astrophysics Data System (ADS)
Yu, Lean; Wang, Shouyang; Lai, K. K.
Credit risk evaluation is one of the most important issues in financial risk management. In this paper, a C-variable least squares support vector classification (C-VLSSVC) model is proposed for credit risk analysis. The main idea of this model is based on the prior knowledge that different classes may have different importance for modeling and that more weight should be given to classes with greater importance. The C-VLSSVC model can be constructed by a simple modification of the regularization parameter in LSSVC, whereby more weight is given to the least squares classification errors of important classes than to those of unimportant classes, while keeping the regularized terms in their original form. For illustration purposes, a real-world credit dataset is used to test the effectiveness of the C-VLSSVC model.
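The following is only an analogue, not the authors' C-VLSSVC: it illustrates the same per-class error-weighting idea using a standard (non-least-squares) SVM via scikit-learn's class_weight argument, on synthetic imbalanced data; the class weights and data parameters are assumptions.

```python
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.datasets import make_classification

# Synthetic, imbalanced "credit" data: class 1 (default) is rarer but more costly.
X, y = make_classification(n_samples=2000, n_features=10, weights=[0.9, 0.1],
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# Give a larger error weight to the costly class, echoing the C-variable idea
# (the paper uses a least squares SVM; this standard SVC is only an analogue).
clf = SVC(C=1.0, kernel="rbf", class_weight={0: 1.0, 1: 5.0}).fit(X_tr, y_tr)
print(clf.score(X_te, y_te))
```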
NASA Astrophysics Data System (ADS)
Reem, Daniel; De Pierro, Alvaro
2017-04-01
Many problems in science and engineering involve, as part of their solution process, the consideration of a separable function which is the sum of two convex functions, one of them possibly non-smooth. Recently a few works have discussed inexact versions of several accelerated proximal methods aiming at solving this minimization problem. This paper shows that inexact versions of a method of Beck and Teboulle (the fast iterative shrinkage-thresholding algorithm, FISTA) preserve, in a Hilbert space setting, the same (non-asymptotic) rate of convergence under some assumptions on the decay rate of the error terms. The notion of inexactness discussed here seems to be rather simple, but, interestingly, when comparing to related works, closely related decay rates of the error terms yield closely related convergence rates. The derivation sheds some light on the somewhat mysterious origin of some parameters which appear in various accelerated methods. A consequence of the analysis is that the accelerated method is perturbation resilient, making it suitable, in principle, for the superiorization methodology. By taking this into account, we re-examine the superiorization methodology and significantly extend its scope. This work was supported by FAPESP 2013/19504-9. The second author was also supported by CNPq grant 306030/2014-4.
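A minimal sketch of the exact (error-free) Beck-Teboulle iteration for an l1-regularized least squares problem; the paper's contribution concerns what happens when the gradient and proximal steps carry errors with a prescribed decay rate, which is not modeled here, and the problem sizes are assumptions.

```python
import numpy as np

def fista_lasso(A, b, lam, step, n_iter=200):
    """FISTA for (1/2)||Ax - b||^2 + lam*||x||_1 with exact proximal steps."""
    x = np.zeros(A.shape[1])
    z, t = x.copy(), 1.0
    soft = lambda v, thr: np.sign(v) * np.maximum(np.abs(v) - thr, 0.0)
    for _ in range(n_iter):
        grad = A.T @ (A @ z - b)                   # smooth-part gradient at z
        x_new = soft(z - step * grad, step * lam)  # proximal (soft-threshold) step
        t_new = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        z = x_new + ((t - 1.0) / t_new) * (x_new - x)   # momentum extrapolation
        x, t = x_new, t_new
    return x

A = np.random.default_rng(2).standard_normal((50, 100))
b = A[:, :5] @ np.ones(5)                           # sparse ground truth
x_hat = fista_lasso(A, b, lam=0.1, step=1.0 / np.linalg.norm(A, 2) ** 2)
```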
NASA Astrophysics Data System (ADS)
Debski, Wojciech
2015-06-01
The spatial location of sources of seismic waves is one of the first tasks when transient waves from natural (uncontrolled) sources are analysed in many branches of physics, including seismology and oceanology, to name a few. Source activity and its spatial variability in time, the geometry of the recording network, and the complexity and heterogeneity of the wave velocity distribution are all factors influencing the performance of location algorithms and the accuracy of the achieved results. Although estimating the earthquake focus location is relatively simple, a quantitative estimation of the location accuracy is a truly challenging task, even if the probabilistic inverse method is used, because it requires knowledge of the statistics of observational, modelling and a priori uncertainties. In this paper, we address this task when the statistics of observational and/or modelling errors are unknown. This common situation requires the introduction of a priori constraints on the likelihood (misfit) function, which significantly influence the estimated errors. Based on the results of an analysis of 120 seismic events from the Rudna copper mine operating in southwestern Poland, we propose an approach based on an analysis of Shannon's entropy calculated for the a posteriori distribution. We show that this meta-characteristic of the a posteriori distribution carries some information on the uncertainties of the solution found.
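A minimal sketch of the entropy meta-characteristic, assuming the a posteriori location PDF is available on a regular grid; the probabilistic location step itself and the mine data are not reproduced, and the grid, misfit and conversion to a posterior are illustrative assumptions.

```python
import numpy as np

def posterior_entropy(posterior, cell_volume=1.0):
    """Shannon entropy of a gridded a posteriori location PDF (in nats).
    Sharper, better-constrained solutions give lower entropy; broader,
    more uncertain solutions give higher entropy."""
    p = posterior / (posterior.sum() * cell_volume)     # normalize to a PDF
    mask = p > 0
    return -np.sum(p[mask] * np.log(p[mask])) * cell_volume

# e.g. a 3-D grid of misfit values turned into a posterior via exp(-misfit)
misfit = np.random.default_rng(3).random((20, 20, 10))
entropy = posterior_entropy(np.exp(-misfit))
```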
Fluorescence errors in integrating sphere measurements of remote phosphor type LED light sources
NASA Astrophysics Data System (ADS)
Keppens, A.; Zong, Y.; Podobedov, V. B.; Nadal, M. E.; Hanselaer, P.; Ohno, Y.
2011-05-01
The relative spectral radiant flux error caused by phosphor fluorescence during integrating sphere measurements is investigated both theoretically and experimentally. Integrating sphere and goniophotometer measurements are compared and used for model validation, while a case study provides additional clarification. Criteria for reducing fluorescence errors to a degree of negligibility as well as a fluorescence error correction method based on simple matrix algebra are presented. Only remote phosphor type LED light sources are studied because of their large phosphor surfaces and high application potential in general lighting.
An Activation-Based Model of Routine Sequence Errors
2015-04-01
part of the ACT-R framework (e.g., Anderson, 1983), we adopt a newer, richer notion of priming as part of our approach (Harrison & Trafton, 2010...2014). Other models of routine sequence errors, such as the interactive activation network (IAN) model (Cooper & Shallice, 2006) and the simple...error patterns that results from an interface layout shift. The ideas behind our expanded priming approach, however, could apply to IAN, which uses
Erratum: A Simple, Analytical Model of Collisionless Magnetic Reconnection in a Pair Plasma
NASA Technical Reports Server (NTRS)
Hesse, Michael; Zenitani, Seiji; Kuznetsova, Masha; Klimas, Alex
2011-01-01
The following describes a list of errata in our paper, "A simple, analytical model of collisionless magnetic reconnection in a pair plasma." It supersedes an earlier erratum. We recently discovered an error in the derivation of the outflow-to-inflow density ratio.
Performance Metrics, Error Modeling, and Uncertainty Quantification
NASA Technical Reports Server (NTRS)
Tian, Yudong; Nearing, Grey S.; Peters-Lidard, Christa D.; Harrison, Kenneth W.; Tang, Ling
2016-01-01
A common set of statistical metrics has been used to summarize the performance of models or measurements- the most widely used ones being bias, mean square error, and linear correlation coefficient. They assume linear, additive, Gaussian errors, and they are interdependent, incomplete, and incapable of directly quantifying uncertainty. The authors demonstrate that these metrics can be directly derived from the parameters of the simple linear error model. Since a correct error model captures the full error information, it is argued that the specification of a parametric error model should be an alternative to the metrics-based approach. The error-modeling methodology is applicable to both linear and nonlinear errors, while the metrics are only meaningful for linear errors. In addition, the error model expresses the error structure more naturally, and directly quantifies uncertainty. This argument is further explained by highlighting the intrinsic connections between the performance metrics, the error model, and the joint distribution between the data and the reference.
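A short numerical check of the stated connection, assuming the simple linear error model data = a + b·truth + ε with Gaussian noise: the conventional metrics computed from samples agree with closed-form expressions in terms of (a, b, σ) and the reference's mean and variance. The parameter values and sample size are assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)

# Simple linear error model: data = a + b * truth + eps, eps ~ N(0, sigma^2)
a, b, sigma = 0.5, 0.9, 1.2
truth = rng.normal(10.0, 3.0, size=10000)
data = a + b * truth + rng.normal(0.0, sigma, size=truth.size)

# Conventional metrics computed directly from samples
bias = np.mean(data - truth)
mse = np.mean((data - truth) ** 2)
corr = np.corrcoef(data, truth)[0, 1]

# The same metrics expressed through the error-model parameters
mu_t, var_t = truth.mean(), truth.var()
bias_model = a + (b - 1.0) * mu_t
mse_model = bias_model ** 2 + (b - 1.0) ** 2 * var_t + sigma ** 2
corr_model = b * np.sqrt(var_t) / np.sqrt(b ** 2 * var_t + sigma ** 2)
```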
Report of the Odyssey FPGA Independent Assessment Team
NASA Technical Reports Server (NTRS)
Mayer, Donald C.; Katz, Richard B.; Osborn, Jon V.; Soden, Jerry M.; Barto, R.; Day, John H. (Technical Monitor)
2001-01-01
An independent assessment team (IAT) was formed and met on April 2, 2001, at Lockheed Martin in Denver, Colorado, to aid in understanding a technical issue for the Mars Odyssey spacecraft scheduled for launch on April 7, 2001. An RP1280A field-programmable gate array (FPGA) from a lot of parts common to the SIRTF, Odyssey, and Genesis missions had failed on a SIRTF printed circuit board. A second FPGA from an earlier Odyssey circuit board was also known to have failed and was also included in the analysis by the IAT. Observations indicated an abnormally high failure rate for flight RP1280A devices (the first flight lot produced using this flow) at Lockheed Martin and the causes of these failures were not determined. Standard failure analysis techniques were applied to these parts, however, additional diagnostic techniques unique for devices of this class were not used, and the parts were prematurely submitted to a destructive physical analysis, making a determination of the root cause of failure difficult. Any of several potential failure scenarios may have caused these failures, including electrostatic discharge, electrical overstress, manufacturing defects, board design errors, board manufacturing errors, FPGA design errors, or programmer errors. Several of these mechanisms would have relatively benign consequences for disposition of the parts currently installed on boards in the Odyssey spacecraft if established as the root cause of failure. However, other potential failure mechanisms could have more dire consequences. As there is no simple way to determine the likely failure mechanisms with reasonable confidence before Odyssey launch, it is not possible for the IAT to recommend a disposition for the other parts on boards in the Odyssey spacecraft based on sound engineering principles.
NASA Astrophysics Data System (ADS)
Hacker, Joshua; Vandenberghe, Francois; Jung, Byoung-Jo; Snyder, Chris
2017-04-01
Effective assimilation of cloud-affected radiance observations from space-borne imagers, with the aim of improving cloud analysis and forecasting, has proven to be difficult. Large observation biases, nonlinear observation operators, and non-Gaussian innovation statistics present many challenges. Ensemble-variational data assimilation (EnVar) systems offer the benefits of flow-dependent background error statistics from an ensemble, and the ability of variational minimization to handle nonlinearity. The specific benefits of ensemble statistics, relative to the static background errors more commonly used in variational systems, have not been quantified for the problem of assimilating cloudy radiances. A simple experiment framework is constructed with a regional NWP model and an operational variational data assimilation system, to provide the basis for understanding the importance of ensemble statistics in cloudy radiance assimilation. Restricting the observations to those corresponding to clouds in the background forecast leads to innovations that are more Gaussian. The number of large innovations is reduced compared to the more general case of all observations, but not eliminated. The Huber norm is investigated to handle the fat tails of the distributions and to allow more observations to be assimilated without the need for strict background checks that eliminate them. Comparing assimilation using only ensemble background error statistics with assimilation using only static background error statistics elucidates the importance of the ensemble statistics. Although the cost functions in both experiments converge to similar values after sufficient outer-loop iterations, the resulting cloud water, ice, and snow content are greater in the ensemble-based analysis. The subsequent forecasts from the ensemble-based analysis also retain more condensed water species, indicating that the local environment is more supportive of clouds. In this presentation we provide details that explain the apparent benefit of using ensembles for cloudy radiance assimilation in an EnVar context.
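A minimal sketch of the Huber observation cost referred to above, which keeps the quadratic form for small normalized innovations and grows only linearly for the fat-tailed large departures; the threshold value is a hypothetical choice, not taken from the experiments.

```python
import numpy as np

def huber_cost(innovations, delta=2.0):
    """Huber cost for normalized innovations: quadratic inside |d| <= delta,
    linear beyond it, so a few very large cloud-affected departures do not
    dominate the variational analysis."""
    a = np.abs(innovations)
    quad = 0.5 * innovations ** 2
    lin = delta * (a - 0.5 * delta)
    return np.where(a <= delta, quad, lin)
```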
[Medication errors in a neonatal unit: One of the main adverse events].
Esqué Ruiz, M T; Moretones Suñol, M G; Rodríguez Miguélez, J M; Sánchez Ortiz, E; Izco Urroz, M; de Lamo Camino, M; Figueras Aloy, J
2016-04-01
Neonatal units are one of the hospital areas most exposed to the commission of treatment errors. A medication error (ME) is defined as an avoidable incident secondary to drug misuse that causes or may cause harm to the patient. The aim of this paper is to present the incidence of MEs (including feeding errors) reported in our neonatal unit, their characteristics and possible causal factors, together with a list of the strategies implemented for prevention. An analysis was performed on the MEs reported in a neonatal unit. A total of 511 MEs were reported over a period of seven years in the neonatal unit. The incidence in the critical care unit was 32.2 per 1000 hospital days, or 20 per 100 patients, of which 0.22 per 1000 days had serious repercussions. Of the MEs reported, 39.5% were prescribing errors, 68.1% administration errors, and 0.6% adverse drug reactions. Around two-thirds (65.4%) involved drugs, and 17% were intercepted. The large majority (89.4%) had no impact on the patient, but 0.6% caused permanent damage or death. Nurses reported 65.4% of MEs. The most commonly implicated causal factor was distraction (59%). Simple (alerts), intermediate (protocols, clinical sessions and courses) and complex (causal analysis, monographs) corrective actions were performed. It is essential to determine the current state of MEs in order to establish preventive measures and, together with teamwork and good practices, promote a climate of safety. Copyright © 2015 Asociación Española de Pediatría. Published by Elsevier España, S.L.U. All rights reserved.
Field evaluation of distance-estimation error during wetland-dependent bird surveys
Nadeau, Christopher P.; Conway, Courtney J.
2012-01-01
Context: The most common methods to estimate detection probability during avian point-count surveys involve recording a distance between the survey point and individual birds detected during the survey period. Accurately measuring or estimating distance is an important assumption of these methods; however, this assumption is rarely tested in the context of aural avian point-count surveys. Aims: We expand on recent bird-simulation studies to document the error associated with estimating distance to calling birds in a wetland ecosystem. Methods: We used two approaches to estimate the error associated with five surveyor's distance estimates between the survey point and calling birds, and to determine the factors that affect a surveyor's ability to estimate distance. Key results: We observed biased and imprecise distance estimates when estimating distance to simulated birds in a point-count scenario (x̄error = -9 m, s.d.error = 47 m) and when estimating distances to real birds during field trials (x̄error = 39 m, s.d.error = 79 m). The amount of bias and precision in distance estimates differed among surveyors; surveyors with more training and experience were less biased and more precise when estimating distance to both real and simulated birds. Three environmental factors were important in explaining the error associated with distance estimates, including the measured distance from the bird to the surveyor, the volume of the call and the species of bird. Surveyors tended to make large overestimations to birds close to the survey point, which is an especially serious error in distance sampling. Conclusions: Our results suggest that distance-estimation error is prevalent, but surveyor training may be the easiest way to reduce distance-estimation error. Implications: The present study has demonstrated how relatively simple field trials can be used to estimate the error associated with distance estimates used to estimate detection probability during avian point-count surveys. Evaluating distance-estimation errors will allow investigators to better evaluate the accuracy of avian density and trend estimates. Moreover, investigators who evaluate distance-estimation errors could employ recently developed models to incorporate distance-estimation error into analyses. We encourage further development of such models, including the inclusion of such models into distance-analysis software.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shumilin, V. P.; Shumilin, A. V.; Shumilin, N. V., E-mail: vladimirshumilin@yahoo.com
2015-11-15
The paper is devoted to comparison of experimental data with theoretical predictions concerning the dependence of the current of accelerated ions on the operating voltage of a Hall thruster with an anode layer. The error made in the paper published by the authors in Plasma Phys. Rep. 40, 229 (2014) occurred because of a misprint in the Encyclopedia of Low-Temperature Plasma. In the present paper, this error is corrected. It is shown that the simple model proposed in the above-mentioned paper is in qualitative and quantitative agreement with experimental results.
Detection of IMRT delivery errors based on a simple constancy check of transit dose by using an EPID
NASA Astrophysics Data System (ADS)
Baek, Tae Seong; Chung, Eun Ji; Son, Jaeman; Yoon, Myonggeun
2015-11-01
Beam delivery errors during intensity modulated radiotherapy (IMRT) were detected based on a simple constancy check of the transit dose by using an electronic portal imaging device (EPID). Twenty-one IMRT plans were selected from various treatment sites, and the transit doses during treatment were measured by using an EPID. Transit doses were measured 11 times for each course of treatment, and the constancy check was based on gamma index (3%/3 mm) comparisons between a reference dose map (the first measured transit dose) and test dose maps (the following ten measured dose maps). In a simulation using an anthropomorphic phantom, the average passing rate of the tested transit dose was 100% for three representative treatment sites (head & neck, chest, and pelvis), indicating that IMRT was highly constant for normal beam delivery. The average passing rate of the transit dose for 1224 IMRT fields from 21 actual patients was 97.6% ± 2.5%, with the lower rate possibly being due to inaccuracies in patient positioning or to anatomic changes. An EPID-based simple constancy check may provide information about IMRT beam delivery errors during treatment.
NASA Astrophysics Data System (ADS)
Smith, Gennifer T.; Dwork, Nicholas; Khan, Saara A.; Millet, Matthew; Magar, Kiran; Javanmard, Mehdi; Bowden, Audrey K.
2017-03-01
Urinalysis dipsticks were designed to revolutionize urine-based medical diagnosis. They are cheap, extremely portable, and have multiple assays patterned on a single platform. They were also meant to be incredibly easy to use. Unfortunately, there are many aspects in both the preparation and the analysis of the dipsticks that are plagued by user error. This high error is one reason that dipsticks have failed to flourish in both the at-home market and in low-resource settings. Sources of error include: inaccurate volume deposition, varying lighting conditions, inconsistent timing measurements, and misinterpreted color comparisons. We introduce a novel manifold and companion software for dipstick urinalysis that eliminates the aforementioned error sources. A micro-volume slipping manifold ensures precise sample delivery, an opaque acrylic box guarantees consistent lighting conditions, a simple sticker-based timing mechanism maintains accurate timing, and custom software that processes video data captured by a mobile phone ensures proper color comparisons. We show that the results obtained with the proposed device are as accurate and consistent as a properly executed dip-and-wipe method, the industry gold-standard, suggesting the potential for this strategy to enable confident urinalysis testing. Furthermore, the proposed all-acrylic slipping manifold is reusable and low in cost, making it a potential solution for at-home users and low-resource settings.
Nutt, John G.; Horak, Fay B.
2011-01-01
Background. This study asked whether older adults were more likely than younger adults to err in the initial direction of their anticipatory postural adjustment (APA) prior to a step (indicating a motor program error), whether initial motor program errors accounted for reaction time differences for step initiation, and whether initial motor program errors were linked to inhibitory failure. Methods. In a stepping task with choice reaction time and simple reaction time conditions, we measured forces under the feet to quantify APA onset and step latency and we used body kinematics to quantify forward movement of center of mass and length of first step. Results. Trials with APA errors were almost three times as common for older adults as for younger adults, and they were nine times more likely in choice reaction time trials than in simple reaction time trials. In trials with APA errors, step latency was delayed, correlation between APA onset and step latency was diminished, and forward motion of the center of mass prior to the step was increased. Participants with more APA errors tended to have worse Stroop interference scores, regardless of age. Conclusions. The results support the hypothesis that findings of slow choice reaction time step initiation in older adults are attributable to inclusion of trials with incorrect initial motor preparation and that these errors are caused by deficits in response inhibition. By extension, the results also suggest that mixing of trials with correct and incorrect initial motor preparation might explain apparent choice reaction time slowing with age in upper limb tasks. PMID:21498431
Regression-based model of skin diffuse reflectance for skin color analysis
NASA Astrophysics Data System (ADS)
Tsumura, Norimichi; Kawazoe, Daisuke; Nakaguchi, Toshiya; Ojima, Nobutoshi; Miyake, Yoichi
2008-11-01
A simple regression-based model of skin diffuse reflectance is developed based on reflectance samples calculated by Monte Carlo simulation of light transport in a two-layered skin model. This reflectance model includes the values of spectral reflectance in the visible spectra for Japanese women. The modified Lambert Beer law holds in the proposed model with a modified mean free path length in non-linear density space. The averaged RMS and maximum errors of the proposed model were 1.1 and 3.1%, respectively, in the above range.
NASA Technical Reports Server (NTRS)
Ortega, J. M.
1984-01-01
Several short summaries of the work performed during this reporting period are presented. Topics discussed in this document include: (1) resilient seeded errors via simple techniques; (2) knowledge representation for engineering design; (3) analysis of faults in a multiversion software experiment; (4) implementation of parallel programming environment; (5) symbolic execution of concurrent programs; (6) two computer graphics systems for visualization of pressure distribution and convective density particles; (7) design of a source code management system; (8) vectorizing incomplete conjugate gradient on the Cyber 203/205; (9) extensions of domain testing theory and; (10) performance analyzer for the pisces system.
One Step Quantum Key Distribution Based on EPR Entanglement.
Li, Jian; Li, Na; Li, Lei-Lei; Wang, Tao
2016-06-30
A novel quantum key distribution protocol is presented, based on entanglement and dense coding and allowing asymptotically secure key distribution. Considering the storage time limit of quantum bits, a grouping quantum key distribution protocol is proposed, which overcomes the vulnerability of the first protocol and improves maneuverability. Moreover, a security analysis is given: a simple type of eavesdropping attack would introduce an error rate of at least 46.875%. Compared with the "Ping-pong" protocol, which involves two steps, the proposed protocol does not need to store the qubit and involves only one step.
NASA Astrophysics Data System (ADS)
Langousis, Andreas; Kaleris, Vassilios; Xeygeni, Vagia; Magkou, Foteini
2017-04-01
Assessing the availability of groundwater reserves at a regional level requires accurate and robust hydraulic head estimation at multiple locations of an aquifer. To that end, one needs groundwater observation networks that can provide sufficient information to estimate the hydraulic head at unobserved locations. The density of such networks is largely influenced by the spatial distribution of the hydraulic conductivity in the aquifer, and it is usually determined through trial and error, by solving the groundwater flow for a properly selected set of alternative but physically plausible geologic structures. In this work, we use: a) dimensional analysis, and b) a pulse-based stochastic model for simulation of synthetic aquifer structures, to calculate the distribution of the absolute error in hydraulic head estimation as a function of the standardized distance from the nearest measuring locations. The resulting distributions are shown to encompass all possible small-scale structural dependencies, exhibiting characteristics (bounds, multi-modal features, etc.) that can be explained using simple geometric arguments. The obtained results are promising, pointing towards the direction of establishing design criteria based on large-scale geologic maps.
Attitude control with realization of linear error dynamics
NASA Technical Reports Server (NTRS)
Paielli, Russell A.; Bach, Ralph E.
1993-01-01
An attitude control law is derived to realize linear unforced error dynamics with the attitude error defined in terms of rotation group algebra (rather than vector algebra). Euler parameters are used in the rotational dynamics model because they are globally nonsingular, but only the minimal three Euler parameters are used in the error dynamics model because they have no nonlinear mathematical constraints to prevent the realization of linear error dynamics. The control law is singular only when the attitude error angle is exactly pi rad about any eigenaxis, and a simple intuitive modification at the singularity allows the control law to be used globally. The forced error dynamics are nonlinear but stable. Numerical simulation tests show that the control law performs robustly for both initial attitude acquisition and attitude control.
Children's Task-Switching Efficiency: Missing Our Cue?
ERIC Educational Resources Information Center
Holt, Anna E.; Deák, Gedeon
2015-01-01
In simple rule-switching tests, 3- and 4-year-olds can follow each of two sorting rules but sometimes make perseverative errors when switching. Older children make few errors but respond slowly when switching. These age-related changes might reflect the maturation of executive functions (e.g., inhibition). However, they might also reflect…
NASA Technical Reports Server (NTRS)
Jaeger, R. J.; Agarwal, G. C.; Gottlieb, G. L.
1978-01-01
Subjects can correct their own errors of movement more quickly than they can react to external stimuli by using three general categories of feedback: (1) knowledge of results, primarily visually mediated; (2) proprioceptive or kinaesthetic such as from muscle spindles and joint receptors, and (3) corollary discharge or efference copy within the central nervous system. The effects of these feedbacks on simple reaction time, choice reaction time, and error correction time were studied in four normal human subjects. The movement used was plantarflexion and dorsiflexion of the ankle joint. The feedback loops were modified, by changing the sign of the visual display to alter the subject's perception of results, and by applying vibration at 100 Hz simultaneously to both the agonist and antagonist muscles of the ankle joint. The central processing was interfered with when the subjects were given moderate doses of alcohol (blood alcohol concentration levels of up to 0.07%). Vibration and alcohol increase both the simple and choice reaction times but not the error correction time.
Moment expansion for ionospheric range error
NASA Technical Reports Server (NTRS)
Mallinckrodt, A.; Reich, R.; Parker, H.; Berbert, J.
1972-01-01
On a plane earth, the ionospheric or tropospheric range error depends only on the total refractivity content, or zeroth moment, of the refracting layer and the elevation angle. On a spherical earth, however, the dependence is more complex, so for more accurate results it has been necessary to resort to complex ray-tracing calculations. A simple, high-accuracy alternative to the ray-tracing calculation is presented. By appropriate expansion of the angular dependence in the ray-tracing integral in a power series in height, an expression is obtained for the range error in terms of a simple function of the elevation angle, E, at the expansion height and of the mth moment of the refractivity, N, distribution about the expansion height. The rapidity of convergence is heavily dependent on the choice of expansion height. For expansion heights in the neighborhood of the centroid of the layer (300-490 km), the expansion to m = 2 (three terms) gives results accurate to about 0.4% at E = 10 deg. As an analytic tool, the expansion affords some insight into the influence of layer shape on range errors in special problems.
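Schematically (with notation assumed here rather than taken from the paper), the expansion has the form of elevation-dependent coefficients multiplying height moments of the refractivity profile about the expansion height h_0:

```latex
\Delta R \;\approx\; \sum_{m=0}^{M} f_m(E)\,\mu_m ,
\qquad
\mu_m \;=\; \int N(h)\,(h - h_0)^m \,\mathrm{d}h ,
```

with the quoted 0.4% accuracy at E = 10 deg corresponding to truncation at m = 2 (three terms).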
Pan, Shuguo; Chen, Weirong; Jin, Xiaodong; Shi, Xiaofei; He, Fan
2015-07-22
Satellite orbit error and clock bias are the keys to precise point positioning (PPP). The traditional PPP algorithm requires precise satellite products based on worldwide permanent reference stations. Such an algorithm requires considerable work and hardly achieves real-time performance. However, real-time positioning service will be the dominant mode in the future. IGS is providing such an operational service (RTS) and there are also commercial systems like Trimble RTX in operation. On the basis of the regional Continuous Operational Reference System (CORS), a real-time PPP algorithm is proposed to apply the coupling estimation of clock bias and orbit error. The projection of orbit error onto the satellite-receiver range has the same effects on positioning accuracy with clock bias. Therefore, in satellite clock estimation, part of the orbit error can be absorbed by the clock bias and the effects of residual orbit error on positioning accuracy can be weakened by the evenly distributed satellite geometry. In consideration of the simple structure of pseudorange equations and the high precision of carrier-phase equations, the clock bias estimation method coupled with orbit error is also improved. Rovers obtain PPP results by receiving broadcast ephemeris and real-time satellite clock bias coupled with orbit error. By applying the proposed algorithm, the precise orbit products provided by GNSS analysis centers are rendered no longer necessary. On the basis of previous theoretical analysis, a real-time PPP system was developed. Some experiments were then designed to verify this algorithm. Experimental results show that the newly proposed approach performs better than the traditional PPP based on International GNSS Service (IGS) real-time products. The positioning accuracies of the rovers inside and outside the network are improved by 38.8% and 36.1%, respectively. The PPP convergence speeds are improved by up to 61.4% and 65.9%. The new approach can change the traditional PPP mode because of its advantages of independence, high positioning precision, and real-time performance. It could be an alternative solution for regional positioning service before global PPP service comes into operation.
Sokolenko, Stanislav; Aucoin, Marc G
2015-09-04
The growing ubiquity of metabolomic techniques has facilitated high frequency time-course data collection for an increasing number of applications. While the concentration trends of individual metabolites can be modeled with common curve fitting techniques, a more accurate representation of the data needs to consider effects that act on more than one metabolite in a given sample. To this end, we present a simple algorithm that uses nonparametric smoothing carried out on all observed metabolites at once to identify and correct systematic error from dilution effects. In addition, we develop a simulation of metabolite concentration time-course trends to supplement available data and explore algorithm performance. Although we focus on nuclear magnetic resonance (NMR) analysis in the context of cell culture, a number of possible extensions are discussed. Realistic metabolic data was successfully simulated using a 4-step process. Starting with a set of metabolite concentration time-courses from a metabolomic experiment, each time-course was classified as either increasing, decreasing, concave, or approximately constant. Trend shapes were simulated from generic functions corresponding to each classification. The resulting shapes were then scaled to simulated compound concentrations. Finally, the scaled trends were perturbed using a combination of random and systematic errors. To detect systematic errors, a nonparametric fit was applied to each trend and percent deviations calculated at every timepoint. Systematic errors could be identified at time-points where the median percent deviation exceeded a threshold value, determined by the choice of smoothing model and the number of observed trends. Regardless of model, increasing the number of observations over a time-course resulted in more accurate error estimates, although the improvement was not particularly large between 10 and 20 samples per trend. The presented algorithm was able to identify systematic errors as small as 2.5 % under a wide range of conditions. Both the simulation framework and error correction method represent examples of time-course analysis that can be applied to further developments in (1)H-NMR methodology and the more general application of quantitative metabolomics.
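A minimal sketch of the detection step only, assuming spline smoothing and a fixed percent-deviation threshold; the published algorithm derives the threshold from the smoothing model and the number of observed trends and also applies the subsequent correction, which is omitted here. Function names and the threshold value are assumptions.

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

def flag_systematic_errors(times, concentrations, threshold=2.5):
    """Fit a smooth curve to every metabolite trend, compute percent deviations,
    and flag time points where the median deviation across all metabolites
    exceeds a threshold (a simplified sketch of the dilution-error idea)."""
    deviations = []
    for y in concentrations:                       # one row per metabolite
        fit = UnivariateSpline(times, y)(times)    # nonparametric smooth
        deviations.append(100.0 * (y - fit) / fit)
    median_dev = np.median(np.vstack(deviations), axis=0)
    return np.where(np.abs(median_dev) > threshold)[0]
```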
Models for forecasting hospital bed requirements in the acute sector.
Farmer, R D; Emami, J
1990-01-01
STUDY OBJECTIVE--The aim was to evaluate the current approach to forecasting hospital bed requirements. DESIGN--The study was a time series and regression analysis. The time series of mean duration of stay for general surgery in the age group 15-44 years (1969-1982) was used to evaluate different methods of forecasting future values of mean duration of stay and its subsequent use in estimating hospital bed requirements. RESULTS--It has been suggested that the simple trend fitting approach suffers from model specification error and imposes unjustified restrictions on the data. The time series approach (Box-Jenkins method) was shown to be a more appropriate way of modelling the data. CONCLUSION--The simple trend fitting approach is inferior to the time series approach in modelling hospital bed requirements. PMID:2277253
Kaufmann, Jost; Roth, Bernhard; Engelhardt, Thomas; Lechleuthner, Alex; Laschat, Michael; Hadamitzky, Christoph; Wappler, Frank; Hellmich, Martin
2018-01-01
Drug dosing errors pose a particular threat to children in prehospital emergency care. With the Pediatric emergency ruler (PaedER), we developed a simple height-based dose recommendation system and evaluated its effectiveness in a pre-post interventional trial, as the Ethics Committee disapproved of randomization because of the expected positive effect of the PaedER on outcome. Pre-interventional data were retrospectively retrieved from the electronic records and medical protocols of the Cologne Emergency Medical Service over a two-year period prior to the introduction of the PaedER. Post-interventional data were collected prospectively over a six-year period in a federal state-wide open trial. The administered doses of intravenous or intraosseous fentanyl, midazolam, ketamine, or epinephrine were recorded. The primary outcome measure was the number and severity of drug dose deviations from the recommended dose (DRD) based on the patient's weight. Fifty-nine pre-interventional and 91 post-interventional prehospital drug administrations in children were analyzed. The rate of DRD > 300% across all medications was 22.0% in the pre- and 2.2% in the post-interventional group (p < 0.001). All epinephrine administrations were excessive (DRD > 300%) in pre-interventional patients and none in post-interventional patients (p < 0.001). The use of the PaedER resulted in a 90% reduction of medication errors (95% CI: 57% to 98%; p < 0.001) and prevented all potentially life-threatening errors associated with epinephrine administration. There is an urgent need to increase the safety of emergency drug dosing in children during emergencies. A simple height-based system can support health care providers and helps to avoid life-threatening medication errors.
Ohta, Megumi; Midorikawa, Taishi; Hikihara, Yuki; Masuo, Yoshihisa; Sakamoto, Shizuo; Torii, Suguru; Kawakami, Yasuo; Fukunaga, Tetsuo; Kanehisa, Hiroaki
2017-02-01
This study examined the validity of segmental bioelectrical impedance (BI) analysis for predicting the fat-free masses (FFMs) of the whole body and body segments in children, including overweight individuals. The FFM and impedance (Z) values of the arms, trunk, legs, and whole body were determined using dual-energy X-ray absorptiometry and segmental BI analyses, respectively, in 149 boys and girls aged 6 to 12 years, who were divided into model-development (n = 74), cross-validation (n = 35), and overweight (n = 40) groups. Simple regression analysis was applied to (length)²/Z (BI index) for each of the whole body and the 3 segments to develop prediction equations for the measured FFM of the related body part. In the model-development group, the BI index of each of the 3 segments and the whole body was significantly correlated with the measured FFM (R² = 0.867-0.932, standard error of estimation = 0.18-1.44 kg (5.9%-8.7%)). There was no significant difference between the measured and predicted FFM values, and no systematic error. The application of each equation derived in the model-development group to the cross-validation and overweight groups did not produce significant differences between the measured and predicted FFM values or systematic errors, with the exception that the arm FFM in the overweight group was overestimated. Segmental bioelectrical impedance analysis is useful for predicting the FFM of each of the whole body and body segments in children, including overweight individuals, although its application for estimating arm FFM in overweight individuals requires a certain modification.
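For readers unfamiliar with the BI index, the sketch below shows the form of the regression, FFM against (length)²/Z, together with a standard error of the estimate. All numbers are invented for illustration and are not the study's data.

```python
import numpy as np

# Illustrative regression of measured FFM on the BI index (length**2 / Z)
# for one body segment; the values below are made up, not the study's data.
length_cm = np.array([55.0, 58.0, 60.0, 63.0, 66.0])      # segment length (cm)
impedance = np.array([310.0, 295.0, 280.0, 260.0, 245.0])  # ohms
ffm_kg    = np.array([1.9, 2.1, 2.3, 2.6, 2.9])            # DXA fat-free mass (kg)

bi_index = length_cm**2 / impedance
slope, intercept = np.polyfit(bi_index, ffm_kg, 1)          # simple linear model
pred = slope * bi_index + intercept
see = np.sqrt(np.sum((ffm_kg - pred)**2) / (len(ffm_kg) - 2))  # standard error of estimate
print(f"FFM ~= {slope:.3f} * L^2/Z + {intercept:.3f}, SEE = {see:.2f} kg")
```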
NASA Astrophysics Data System (ADS)
Schultz, Michael; Verbesselt, Jan; Herold, Martin; Avitabile, Valerio
2013-10-01
Researchers who use remotely sensed data can spend half of their total effort on preparing the data prior to analysis. If this data preprocessing does not match the application, the time spent on data analysis can increase considerably and can lead to inaccuracies. Despite the existence of a number of methods for pre-processing Landsat time series, each method has shortcomings, particularly for mapping forest changes under varying illumination, data availability and atmospheric conditions. Based on the requirements of mapping forest changes as defined by the United Nations (UN) Reducing Emissions from Deforestation and Forest Degradation (REDD) program, accurate reporting of the spatio-temporal properties of these changes is necessary. We compared the impact of three fundamentally different radiometric preprocessing techniques, Moderate Resolution Atmospheric TRANsmission (MODTRAN), Second Simulation of a Satellite Signal in the Solar Spectrum (6S) and simple Dark Object Subtraction (DOS), on mapping forest changes using Landsat time series data. A modification of Breaks For Additive Season and Trend (BFAST) monitor was used to jointly map the spatial and temporal agreement of forest changes at test sites in Ethiopia and Viet Nam. The suitability of the pre-processing methods for the occurring forest change drivers was assessed using recently captured ground truth and high-resolution data (1000 points). A method for creating robust generic forest maps used for the sampling design is presented. An assessment of error sources identified haze as a major source of commission error in the time series analysis.
Uematsu, Masahiro; Ito, Makiko; Hama, Yukihiro; Inomata, Takayuki; Fujii, Masahiro; Nishio, Teiji; Nakamura, Naoki; Nakagawa, Keiichi
2012-01-01
In this paper, we suggest a new method for verifying the motion of a binary multileaf collimator (MLC) in helical tomotherapy. For this we used a combination of a cylindrical scintillator and a general-purpose camcorder. The camcorder records the light from the scintillator following photon irradiation, which we use to track the motion of the binary MLC. The purpose of this study is to demonstrate the feasibility of this method as a binary MLC quality assurance (QA) tool. First, verification was performed using a simple binary MLC pattern with a constant leaf open time; second, verification was also performed using a binary MLC pattern used in a clinical setting. For the sinograms of the simple binary MLC patterns, the fraction of open leaves detected as "open" from the measured light defines the sensitivity, which in this case was 1.000. On the other hand, the specificity, which gives the fraction of closed leaves detected as "closed", was 0.919. The leaf open error identified by our method was -1.3±7.5%, and 68.6% of the observed leaves were within ±3% relative error. The leaf open error was expressed by the relative errors calculated on the sinogram. For the clinical binary MLC pattern, the sensitivity and specificity were 0.994 and 0.997, respectively. The measurement could be performed with a leaf open error of -3.4±8.0%, and 77.5% of the observed leaves were within ±3% relative error. With this method, we can easily verify the motion of the binary MLC, and the measurement unit developed was found to be an effective QA tool. PACS numbers: 87.56.Fc, 87.56.nk PMID:22231222
Simulation of wave propagation in three-dimensional random media
NASA Astrophysics Data System (ADS)
Coles, Wm. A.; Filice, J. P.; Frehlich, R. G.; Yadlowsky, M.
1995-04-01
Quantitative error analyses for the simulation of wave propagation in three-dimensional random media, when narrow angular scattering is assumed, are presented for plane-wave and spherical-wave geometry. This includes the errors that result from finite grid size, finite simulation dimensions, and the separation of the two-dimensional screens along the propagation direction. Simple error scalings are determined for power-law spectra of the random refractive indices of the media. The effects of a finite inner scale are also considered. The spatial spectra of the intensity errors are calculated and compared with the spatial spectra of
Photometric method for determination of acidity constants through integral spectra analysis
NASA Astrophysics Data System (ADS)
Zevatskiy, Yuriy Eduardovich; Ruzanov, Daniil Olegovich; Samoylov, Denis Vladimirovich
2015-04-01
An express method for the determination of acidity constants of organic acids, based on the analysis of the integral transmittance vs. pH dependence, is developed. The integral value is registered as the photocurrent of a photometric device simultaneously with potentiometric titration. The proposed method allows one to obtain pKa using only simple and low-cost instrumentation. The optical part of the experimental setup has been optimized by excluding the monochromator device. Thus it takes only 10-15 min to obtain one pKa value, with an absolute error of less than 0.15 pH units. The application limitations and reliability of the method have been tested for a series of organic acids of various nature.
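The paper's integral-spectrum analysis is not reproduced here, but the hedged sketch below shows how a pKa could be read off a transmittance-vs-pH titration curve under a simple two-state (Henderson-Hasselbalch type) model, with made-up photocurrent readings; the model, data and names are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import curve_fit

def two_state_transmittance(ph, t_acid, t_base, pka):
    """Integral transmittance as a pH-weighted mix of the acid and base forms
    (simple two-state model; the paper's own analysis may differ)."""
    f_base = 1.0 / (1.0 + 10.0**(pka - ph))
    return t_acid + (t_base - t_acid) * f_base

# Made-up photocurrent readings recorded alongside a potentiometric titration
ph = np.array([2.0, 2.8, 3.4, 4.0, 4.6, 5.2, 6.0, 7.0])
t  = np.array([0.31, 0.33, 0.38, 0.47, 0.58, 0.66, 0.71, 0.72])

popt, _ = curve_fit(two_state_transmittance, ph, t, p0=(0.3, 0.7, 4.2))
print(f"estimated pKa ~= {popt[2]:.2f}")
```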
A simple method for processing data with least square method
NASA Astrophysics Data System (ADS)
Wang, Chunyan; Qi, Liqun; Chen, Yongxiang; Pang, Guangning
2017-08-01
The least squares method is widely used in data processing and error estimation. It has become an essential technique for parameter estimation, data processing, regression analysis and experimental data fitting, and a criterion tool for statistical inference. In measurement data analysis, the fitting of complex relationships is usually based on the least squares principle, i.e., matrices are used to solve for the final estimates and to improve their accuracy. In this paper, a new way of solving the least squares problem is presented that is based on algebraic computation and is relatively straightforward and easy to understand. The practicability of this method is described by a concrete example.
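The paper's algebraic shortcut is not spelled out in the abstract, so the sketch below only shows the standard matrix formulation it is compared against: a straight-line fit obtained from the normal equations, with a residual standard error. The data are invented.

```python
import numpy as np

# Fit y = a + b*x by least squares using the normal equations
# (A^T A) p = A^T y, solved directly with a matrix routine.
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([1.1, 2.9, 5.2, 6.8, 9.1])

A = np.column_stack([np.ones_like(x), x])      # design matrix
p, *_ = np.linalg.lstsq(A, y, rcond=None)      # or: np.linalg.solve(A.T @ A, A.T @ y)
residuals = y - A @ p
sigma = np.sqrt(residuals @ residuals / (len(y) - 2))  # residual standard error
print(f"intercept={p[0]:.3f}, slope={p[1]:.3f}, s={sigma:.3f}")
```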
Moessfit. A free Mössbauer fitting program
NASA Astrophysics Data System (ADS)
Kamusella, Sirko; Klauss, Hans-Henning
2016-12-01
A free data analysis program for Mössbauer spectroscopy was developed to solve commonly faced problems such as simultaneous fitting of multiple data sets, the Maximum Entropy Method, and proper error estimation. The program is written in C++ using the Qt application framework and the GNU Scientific Library. Moessfit makes use of multithreading to take reasonable advantage of the multi-core CPU capacity of modern PCs. The whole fit is specified in a text input file, which simplifies the workflow for the user and provides a simple entry point into Mössbauer data analysis for beginners. However, the possibility of defining arbitrary parameter dependencies and distributions as well as relaxation spectra makes Moessfit interesting for advanced users as well.
Chin, Sang Hoon; Kim, Young Jae; Song, Ho Seong; Kim, Dug Young
2006-10-10
We propose a simple but powerful scheme for the complete analysis of the frequency chirp of a gain-switched optical pulse using a fringe-resolved interferometric two-photon absorption autocorrelator. The frequency chirp imposed on the gain-switched pulse from a laser diode was retrieved from both the intensity autocorrelation trace and the envelope of the second-harmonic interference fringe pattern. To verify the accuracy of the proposed phase retrieval method, we performed an optical pulse compression experiment using dispersion-compensating fibers of different lengths. We obtained close agreement, to within 1% error, between the measured compressed pulse widths and the numerically calculated pulse widths.
Asymptotic boundary conditions for dissipative waves: General theory
NASA Technical Reports Server (NTRS)
Hagstrom, Thomas
1990-01-01
An outstanding issue in the computational analysis of time dependent problems is the imposition of appropriate radiation boundary conditions at artificial boundaries. Accurate conditions are developed which are based on the asymptotic analysis of wave propagation over long ranges. Employing the method of steepest descents, dominant wave groups are identified and simple approximations to the dispersion relation are considered in order to derive local boundary operators. The existence of a small number of dominant wave groups may be expected for systems with dissipation. Estimates of the error as a function of domain size are derived under general hypotheses, leading to convergence results. Some practical aspects of the numerical construction of the asymptotic boundary operators are also discussed.
The Analysis of Fluorescence Decay by a Method of Moments
Isenberg, Irvin; Dyson, Robert D.
1969-01-01
The fluorescence decay of the excited state of most biopolymers, and biopolymer conjugates and complexes, is not, in general, a simple exponential. The method of moments is used to establish a means of analyzing such multi-exponential decays. The method is tested by the use of computer simulated data, assuming that the limiting error is determined by noise generated by a pseudorandom number generator. Multi-exponential systems with relatively closely spaced decay constants may be successfully analyzed. The analyses show the requirements, in terms of precision, that data must meet. The results may be used both as an aid in the design of equipment and in the analysis of data subsequently obtained. PMID:5353139
Analytical Assessment of Simultaneous Parallel Approach Feasibility from Total System Error
NASA Technical Reports Server (NTRS)
Madden, Michael M.
2014-01-01
In a simultaneous paired approach to closely-spaced parallel runways, a pair of aircraft flies in close proximity on parallel approach paths. The aircraft pair must maintain a longitudinal separation within a range that avoids wake encounters and, if one of the aircraft blunders, avoids collision. Wake avoidance defines the rear gate of the longitudinal separation. The lead aircraft generates a wake vortex that, with the aid of crosswinds, can travel laterally onto the path of the trail aircraft. As runway separation decreases, the wake has less distance to traverse to reach the path of the trail aircraft. The total system error of each aircraft further reduces this distance. The total system error is often modeled as a probability distribution function. Therefore, Monte-Carlo simulations are a favored tool for assessing a "safe" rear-gate. However, safety for paired approaches typically requires that a catastrophic wake encounter be a rare one-in-a-billion event during normal operation. Using a Monte-Carlo simulation to assert this event rarity with confidence requires a massive number of runs. Such large runs do not lend themselves to rapid turn-around during the early stages of investigation when the goal is to eliminate the infeasible regions of the solution space and to perform trades among the independent variables in the operational concept. One can employ statistical analysis using simplified models more efficiently to narrow the solution space and identify promising trades for more in-depth investigation using Monte-Carlo simulations. These simple, analytical models not only have to address the uncertainty of the total system error but also the uncertainty in navigation sources used to alert an abort of the procedure. This paper presents a method for integrating total system error, procedure abort rates, avionics failures, and surveillance errors into a statistical analysis that identifies the likely feasible runway separations for simultaneous paired approaches.
NASA Astrophysics Data System (ADS)
Yang, Shuang-Long; Liang, Li-Ping; Liu, Hou-De; Xu, Ke-Jun
2018-03-01
Aiming at reducing the estimation error of the sensor frequency response function (FRF) estimated by the commonly used window-based spectral estimation method, the error models of interpolation and transient errors are derived in the form of non-parametric models. Accordingly, window effects on the errors are analyzed, revealing that the commonly used Hanning window leads to a smaller interpolation error, which can also be largely eliminated by cubic spline interpolation when estimating the FRF from step response data, and that a window with a smaller front-end value can suppress more of the transient error. Thus, a new dual-cosine window with its non-zero discrete Fourier transform bins at -3, -1, 0, 1, and 3 is constructed for FRF estimation. Compared with the Hanning window, the new dual-cosine window has equivalent interpolation error suppression capability and better transient error suppression capability when estimating the FRF from the step response; specifically, it improves the asymptotic decay of the transient error from O(N⁻²) for the Hanning window method to O(N⁻⁴), while only slightly increasing the uncertainty (by about 0.4 dB). One direction of a wind tunnel strain gauge balance, which is a high-order, small-damping, non-minimum-phase system, is then employed as an example to verify the new dual-cosine window-based spectral estimation method. The model simulation results show that the new dual-cosine window method is better than the Hanning window method for FRF estimation and, compared with the Gans method and the LPM method, has the advantages of simple computation, less time consumption, and a short data requirement; the calculation results for the actual balance FRF data are consistent with the simulation results. Thus, the new dual-cosine window is effective and practical for FRF estimation.
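A cosine-sum window whose DFT is confined to bins 0, ±1 and ±3 can be written down directly. The sketch below does so with placeholder coefficients, chosen only so the front-end value is small, not the values derived in the paper, and checks the spectrum numerically.

```python
import numpy as np

def dual_cosine_window(n_samples, a0=0.5, a1=0.4, a3=0.1):
    """Cosine-sum window whose DFT is non-zero only at bins 0, +/-1 and +/-3.
    The coefficients here are illustrative placeholders (not the values
    derived in the paper); they are picked so the front-end value
    w[0] = a0 - a1 - a3 is small, which is what suppresses transient error."""
    n = np.arange(n_samples)
    return (a0
            - a1 * np.cos(2 * np.pi * n / n_samples)
            - a3 * np.cos(2 * np.pi * 3 * n / n_samples))

w = dual_cosine_window(1024)
spectrum = np.fft.rfft(w)
print(np.abs(spectrum[:6]).round(3))   # energy confined to bins 0, 1 and 3
```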
Avery, Anthony J; Rodgers, Sarah; Cantrill, Judith A; Armstrong, Sarah; Cresswell, Kathrin; Eden, Martin; Elliott, Rachel A; Howard, Rachel; Kendrick, Denise; Morris, Caroline J; Prescott, Robin J; Swanwick, Glen; Franklin, Matthew; Putman, Koen; Boyd, Matthew; Sheikh, Aziz
2012-04-07
Medication errors are common in primary care and are associated with considerable risk of patient harm. We tested whether a pharmacist-led, information technology-based intervention was more effective than simple feedback in reducing the number of patients at risk of measures related to hazardous prescribing and inadequate blood-test monitoring of medicines 6 months after the intervention. In this pragmatic, cluster randomised trial general practices in the UK were stratified by research site and list size, and randomly assigned by a web-based randomisation service in block sizes of two or four to one of two groups. The practices were allocated to either computer-generated simple feedback for at-risk patients (control) or a pharmacist-led information technology intervention (PINCER), composed of feedback, educational outreach, and dedicated support. The allocation was masked to researchers and statisticians involved in processing and analysing the data. The allocation was not masked to general practices, pharmacists, patients, or researchers who visited practices to extract data. Primary outcomes were the proportions of patients at 6 months after the intervention who had had any of three clinically important errors: non-selective non-steroidal anti-inflammatory drugs (NSAIDs) prescribed to those with a history of peptic ulcer without co-prescription of a proton-pump inhibitor; β blockers prescribed to those with a history of asthma; long-term prescription of angiotensin converting enzyme (ACE) inhibitor or loop diuretics to those 75 years or older without assessment of urea and electrolytes in the preceding 15 months. The cost per error avoided was estimated by incremental cost-effectiveness analysis. This study is registered with Controlled-Trials.com, number ISRCTN21785299. 72 general practices with a combined list size of 480,942 patients were randomised. At 6 months' follow-up, patients in the PINCER group were significantly less likely to have been prescribed a non-selective NSAID if they had a history of peptic ulcer without gastroprotection (OR 0·58, 95% CI 0·38-0·89); a β blocker if they had asthma (0·73, 0·58-0·91); or an ACE inhibitor or loop diuretic without appropriate monitoring (0·51, 0·34-0·78). PINCER has a 95% probability of being cost effective if the decision-maker's ceiling willingness to pay reaches £75 per error avoided at 6 months. The PINCER intervention is an effective method for reducing a range of medication errors in general practices with computerised clinical records. Patient Safety Research Portfolio, Department of Health, England. Copyright © 2012 Elsevier Ltd. All rights reserved.
Refractive Status at Birth: Its Relation to Newborn Physical Parameters at Birth and Gestational Age
Varghese, Raji Mathew; Sreenivas, Vishnubhatla; Puliyel, Jacob Mammen; Varughese, Sara
2009-01-01
Background: Refractive status at birth is related to gestational age. Preterm babies have myopia which decreases as gestational age increases, and term babies are known to be hypermetropic. This study looked at the correlation of refractive status with birth weight in term and preterm babies, and with physical indicators of intra-uterine growth such as the head circumference and length of the baby at birth. Methods: All babies delivered at St. Stephens Hospital and admitted to the nursery were eligible for the study. Refraction was performed within the first week of life. 0.8% tropicamide with 0.5% phenylephrine was used to achieve cycloplegia and paralysis of accommodation. 599 newborn babies participated in the study. Data pertaining to the right eye were used for all the analyses except that for anisometropia, where the two eyes were compared. Growth parameters were measured soon after birth. Simple linear regression analysis was performed to assess the association of refractive status (mean spherical equivalent (MSE), astigmatism and anisometropia) with each of the study variables, namely gestation, length, weight and head circumference. Subsequently, multiple linear regression was carried out to identify the independent predictors of each of the outcome parameters. Results: Simple linear regression showed a significant relation between all 4 study variables and refractive error, but in multiple regression only gestational age and weight were related to refractive error. The partial correlation of weight with MSE adjusted for gestation was 0.28 and that of gestation with MSE adjusted for weight was 0.10. Birth weight had a higher correlation with MSE than gestational age. Conclusion: This is the first study to look at refractive error against all these growth parameters in preterm and term babies at birth. It would appear from this study that birth weight rather than gestation should be used as the criterion for screening for refractive error, especially in developing countries where the incidence of intrauterine malnutrition is higher. PMID:19214228
Design of a microbial contamination detector and analysis of error sources in its optical path.
Zhang, Chao; Yu, Xiang; Liu, Xingju; Zhang, Lei
2014-05-01
Microbial contamination is a growing concern in food safety today. To effectively control the types and degree of microbial contamination during food production, this paper introduces a design for a microbial contamination detector that can be used for quick in-situ examination. The designed detector can identify the category of microbial contamination by locating its characteristic absorption peak and can then calculate the concentration of the microbial contamination by fitting the absorbance vs. concentration lines of standard samples with gradient concentrations. Building on a traditional scanning grating detection system, this design improves the light-splitting unit to expand the scanning range and enhance the accuracy of the output wavelength. The motor rotation angle φ is designed to have a linear relationship with the output wavelength λ, which simplifies the conversion of output spectral curves into wavelength vs. light intensity curves. In this study, we also derive the relationship between the device's major sources of error and the cumulative error of the output wavelengths, and suggest a simple correction for these errors. The proposed design was applied to test pigments and volatile basic nitrogen (VBN), which evaluated the degree of microbial contamination of meats, and the deviations between the measured values and the pre-set values were only in the range of 1.15%-1.27%.
NASA Astrophysics Data System (ADS)
Halomoan Siregar, Budi; Dewi, Izwita; Andriani, Ade
2018-03-01
The purpose of this study is to analyse the types of errors students make in solving pedagogic problems and the causes of those errors. This research is qualitative descriptive, conducted with 34 mathematics education students in the 2017-2018 academic year. The data were obtained through interviews and tests, and were then analyzed in three stages: 1) data reduction, 2) data description, and 3) drawing conclusions. The data were reduced by organizing and classifying them in order to obtain meaningful information, then presented in simple narrative, graphical, and tabular form to illustrate the students' errors clearly, and conclusions were drawn from this information. The results of this study indicate that the students made various errors: 1) they answered something other than what the problem asked, because they misunderstood the problem; 2) they failed to plan a learning process based on constructivism, due to a lack of understanding of how to design such learning; and 3) they selected inappropriate learning tools, because they did not understand which tools are relevant to use.
Analysis of imperfections in the coherent optical excitation of single atoms to Rydberg states
NASA Astrophysics Data System (ADS)
de Léséleuc, Sylvain; Barredo, Daniel; Lienhard, Vincent; Browaeys, Antoine; Lahaye, Thierry
2018-05-01
We study experimentally various physical limitations and technical imperfections that lead to damping and finite contrast of optically driven Rabi oscillations between ground and Rydberg states of a single atom. Finite contrast is due to preparation and detection errors, and we show how to model and measure them accurately. Part of these errors originates from the finite lifetime of Rydberg states, and we observe its n³ scaling with the principal quantum number n. To explain the damping of Rabi oscillations, we use simple numerical models taking into account independently measured experimental imperfections and show that the observed damping actually results from the accumulation of several small effects, each at the level of a few percent. We discuss prospects for improving the coherence of ground-Rydberg Rabi oscillations in view of applications in quantum simulation and quantum information processing with arrays of single Rydberg atoms.
Development of single cell lithium ion battery model using Scilab/Xcos
NASA Astrophysics Data System (ADS)
Arianto, Sigit; Yunaningsih, Rietje Y.; Astuti, Edi Tri; Hafiz, Samsul
2016-02-01
In this research, a lithium battery model, as a component in a simulation environment, was developed and implemented using the Scicos/Xcos graphical programming language. The Scicos used in this research was actually Xcos, a variant of Scicos embedded in Scilab. The equivalent circuit used to model the battery was the Double Polarization (DP) model, which consists of one open circuit voltage (VOC), one internal resistance (Ri), and two parallel RC circuits. The parameters of the battery were extracted using Hybrid Pulse Power Characterization (HPPC) testing. In this experiment, the DP electrical circuit model was used to describe the lithium battery's dynamic behavior. The results of the simulation were validated against the experimental results. Using simple error analysis, it was found that the largest error was 0.275 V, occurring mostly at the low end of the state of charge (SOC).
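As a sketch of how the DP equivalent circuit produces a terminal-voltage response, the code below integrates the two polarization branches with a forward-Euler step. VOC is held constant for simplicity (in the full model it varies with SOC), and none of the parameter values are from the paper.

```python
import numpy as np

def simulate_dp_model(i_load, dt, voc, r_i, r1, c1, r2, c2):
    """Terminal voltage of the Double Polarization equivalent circuit:
    VOC in series with Ri and two parallel RC branches.  Parameter values
    would normally come from HPPC-type tests; the numbers used below are
    illustrative only.
    i_load : array of load current samples (A, discharge positive)
    """
    v1 = v2 = 0.0
    v_t = np.empty_like(i_load, dtype=float)
    for k, i in enumerate(i_load):
        # forward-Euler update of each polarization branch
        v1 += dt * (-v1 / (r1 * c1) + i / c1)
        v2 += dt * (-v2 / (r2 * c2) + i / c2)
        v_t[k] = voc - i * r_i - v1 - v2   # VOC held constant for simplicity
    return v_t

current = np.concatenate([np.zeros(10), 2.0 * np.ones(300), np.zeros(200)])
v = simulate_dp_model(current, dt=1.0, voc=3.7, r_i=0.05,
                      r1=0.02, c1=2000.0, r2=0.04, c2=20000.0)
```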
On the Likely Utility of Hybrid Weights Optimized for Variances in Hybrid Error Covariance Models
NASA Astrophysics Data System (ADS)
Satterfield, E.; Hodyss, D.; Kuhl, D.; Bishop, C. H.
2017-12-01
Because of imperfections in ensemble data assimilation schemes, one cannot assume that the ensemble covariance is equal to the true error covariance of a forecast. Previous work demonstrated how information about the distribution of true error variances given an ensemble sample variance can be revealed from an archive of (observation-minus-forecast, ensemble-variance) data pairs. Here, we derive a simple and intuitively compelling formula to obtain the mean of this distribution of true error variances given an ensemble sample variance from (observation-minus-forecast, ensemble-variance) data pairs produced by a single run of a data assimilation system. This formula takes the form of a Hybrid weighted average of the climatological forecast error variance and the ensemble sample variance. Here, we test the extent to which these readily obtainable weights can be used to rapidly optimize the covariance weights used in Hybrid data assimilation systems that employ weighted averages of static covariance models and flow-dependent ensemble based covariance models. Univariate data assimilation and multi-variate cycling ensemble data assimilation are considered. In both cases, it is found that our computationally efficient formula gives Hybrid weights that closely approximate the optimal weights found through the simple but computationally expensive process of testing every plausible combination of weights.
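The hybrid form itself is a one-line weighted average. The weight-estimation function below is only an assumed illustration (a regression of squared innovations on ensemble variances, with observation error neglected), not the formula derived by the authors; all names are ours.

```python
import numpy as np

def hybrid_error_variance(s2_ens, s2_clim, w):
    """Mean of the true error variance given an ensemble sample variance,
    written as a weighted average of the climatological forecast error
    variance and the ensemble sample variance (the hybrid form described
    above; how w is obtained in the paper is not reproduced here)."""
    return w * s2_clim + (1.0 - w) * s2_ens

def estimate_weight(innovations, s2_ens):
    """One illustrative (assumed) way to pick w: regress squared
    observation-minus-forecast innovations on the ensemble sample
    variances, neglecting observation error for simplicity."""
    s2_clim = np.var(innovations)            # rough climatological variance
    x = s2_ens - s2_clim
    y = innovations**2 - s2_clim
    slope = np.dot(x, y) / np.dot(x, x)      # slope ~ (1 - w)
    return float(np.clip(1.0 - slope, 0.0, 1.0))
```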
In Search of Grid Converged Solutions
NASA Technical Reports Server (NTRS)
Lockard, David P.
2010-01-01
Assessing solution error continues to be a formidable task when numerically solving practical flow problems. Currently, grid refinement is the primary method used for error assessment. The minimum grid spacing requirements to achieve design order accuracy for a structured-grid scheme are determined for several simple examples using truncation error evaluations on a sequence of meshes. For certain methods and classes of problems, obtaining design order may not be sufficient to guarantee low error. Furthermore, some schemes can require much finer meshes to obtain design order than would be needed to reduce the error to acceptable levels. Results are then presented from realistic problems that further demonstrate the challenges associated with using grid refinement studies to assess solution accuracy.
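A common ingredient of such grid-refinement studies is the observed order of accuracy computed from solutions on three systematically refined meshes; a minimal sketch follows, with invented drag-coefficient values.

```python
import math

def observed_order(f_coarse, f_medium, f_fine, refinement_ratio=2.0):
    """Observed order of accuracy from solutions on three systematically
    refined grids (constant refinement ratio r), using the standard
    p = log(|f3 - f2| / |f2 - f1|) / log(r) relation."""
    return math.log(abs(f_coarse - f_medium) / abs(f_medium - f_fine)) \
           / math.log(refinement_ratio)

# Illustrative drag-coefficient values on coarse/medium/fine meshes (made up):
print(observed_order(0.02871, 0.02809, 0.02794))   # ~2 for a second-order scheme
```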
DOE Office of Scientific and Technical Information (OSTI.GOV)
Okura, Yuki; Futamase, Toshifumi, E-mail: yuki.okura@riken.jp
We improve the ellipticity of re-smeared artificial image (ERA) method of point-spread function (PSF) correction in a weak lensing shear analysis in order to treat the realistic shape of galaxies and the PSF. This is done by re-smearing the PSF and the observed galaxy image using a re-smearing function (RSF) and allows us to use a new PSF with a simple shape and to correct the PSF effect without any approximations or assumptions. We perform a numerical test to show that the method applied for galaxies and PSF with some complicated shapes can correct the PSF effect with a systematic error of less than 0.1%. We also apply the ERA method for real data of the Abell 1689 cluster to confirm that it is able to detect the systematic weak lensing shear pattern. The ERA method requires less than 0.1 or 1 s to correct the PSF for each object in a numerical test and a real data analysis, respectively.
NASA Technical Reports Server (NTRS)
Migneault, G. E.
1979-01-01
Emulation techniques applied to the analysis of the reliability of highly reliable computer systems for future commercial aircraft are described. The lack of credible precision in reliability estimates obtained by analytical modeling techniques is first established. The difficulty is shown to be an unavoidable consequence of: (1) a high reliability requirement so demanding as to make system evaluation by use testing infeasible; (2) a complex system design technique, fault tolerance; (3) system reliability dominated by errors due to flaws in the system definition; and (4) elaborate analytical modeling techniques whose precision outputs are quite sensitive to errors of approximation in their input data. Next, the technique of emulation is described, indicating how its input is a simple description of the logical structure of a system and its output is the consequent behavior. Use of emulation techniques is discussed for pseudo-testing systems to evaluate bounds on the parameter values needed for the analytical techniques. Finally an illustrative example is presented to demonstrate from actual use the promise of the proposed application of emulation.
An IMU-to-Body Alignment Method Applied to Human Gait Analysis
Vargas-Valencia, Laura Susana; Elias, Arlindo; Rocon, Eduardo; Bastos-Filho, Teodiano; Frizera, Anselmo
2016-01-01
This paper presents a novel calibration procedure as a simple, yet powerful, method to place and align inertial sensors with body segments. The calibration can be easily replicated without the need of any additional tools. The proposed method is validated in three different applications: a computer mathematical simulation; a simplified joint composed of two semi-spheres interconnected by a universal goniometer; and a real gait test with five able-bodied subjects. Simulation results demonstrate that, after the calibration method is applied, the joint angles are correctly measured independently of previous sensor placement on the joint, thus validating the proposed procedure. In the cases of a simplified joint and a real gait test with human volunteers, the method also performs correctly, although secondary plane errors appear when compared with the simulation results. We believe that such errors are caused by limitations of the current inertial measurement unit (IMU) technology and fusion algorithms. In conclusion, the presented calibration procedure is an interesting option to solve the alignment problem when using IMUs for gait analysis. PMID:27973406
CBrowse: a SAM/BAM-based contig browser for transcriptome assembly visualization and analysis.
Li, Pei; Ji, Guoli; Dong, Min; Schmidt, Emily; Lenox, Douglas; Chen, Liangliang; Liu, Qi; Liu, Lin; Zhang, Jie; Liang, Chun
2012-09-15
To address the impending need to explore the rapidly increasing transcriptomics data generated for non-model organisms, we developed CBrowse, an AJAX-based web browser for visualizing and analyzing transcriptome assemblies and contigs. Designed in a standard three-tier architecture with a data pre-processing pipeline, CBrowse is essentially a Rich Internet Application that offers many seamlessly integrated web interfaces and allows users to navigate, sort, filter, search and visualize data smoothly. The pre-processing pipeline takes the contig sequence file in FASTA format and its relevant SAM/BAM file as the input; detects putative polymorphisms, simple sequence repeats and sequencing errors in contigs; and generates image, JSON and database-compatible CSV text files that are directly utilized by different web interfaces. CBrowse is a generic visualization and analysis tool that facilitates close examination of assembly quality, genetic polymorphisms, sequence repeats and/or sequencing errors in transcriptome sequencing projects. CBrowse is distributed under the GNU General Public License, available at http://bioinfolab.muohio.edu/CBrowse/ liangc@muohio.edu or liangc.mu@gmail.com; glji@xmu.edu.cn Supplementary data are available at Bioinformatics online.
Breakdown of Spatial Parallel Coding in Children's Drawing
ERIC Educational Resources Information Center
De Bruyn, Bart; Davis, Alyson
2005-01-01
When drawing real scenes or copying simple geometric figures young children are highly sensitive to parallel cues and use them effectively. However, this sensitivity can break down in surprisingly simple tasks such as copying a single line where robust directional errors occur despite the presence of parallel cues. Before we can conclude that this…
Hughes, Paul; Deng, Wenjie; Olson, Scott C; Coombs, Robert W; Chung, Michael H; Frenkel, Lisa M
2016-03-01
Accurate analysis of minor populations of drug-resistant HIV requires analysis of a sufficient number of viral templates. We assessed the effect of experimental conditions on the analysis of HIV pol 454 pyrosequences generated from plasma using (1) the "Insertion-deletion (indel) and Carry Forward Correction" (ICC) pipeline, which clusters sequence reads using a nonsubstitution approach and can correct for indels and carry forward errors, and (2) the "Primer Identification (ID)" method, which facilitates construction of a consensus sequence to correct for sequencing errors and allelic skewing. The Primer ID and ICC methods produced similar estimates of viral diversity, but differed in the number of sequence variants generated. Sequence preparation for ICC was comparably simple, but was limited by an inability to assess the number of templates analyzed and allelic skewing. The more costly Primer ID method corrected for allelic skewing and provided the number of viral templates analyzed, which revealed that amplifiable HIV templates varied across specimens and did not correlate with clinical viral load. This latter observation highlights the value of the Primer ID method, which by determining the number of templates amplified, enables more accurate assessment of minority species in the virus population, which may be relevant to prescribing effective antiretroviral therapy.
ERIC Educational Resources Information Center
Kane, Michael
2011-01-01
Errors don't exist in our data, but they serve a vital function. Reality is complicated, but our models need to be simple in order to be manageable. We assume that attributes are invariant over some conditions of observation, and once we do that we need some way of accounting for the variability in observed scores over these conditions of…
Viète's Formula and an Error Bound without Taylor's Theorem
ERIC Educational Resources Information Center
Boucher, Chris
2018-01-01
This note presents a derivation of Viète's classic product approximation of pi that relies on only the Pythagorean Theorem. We also give a simple error bound for the approximation that, while not optimal, still reveals the exponential convergence of the approximation and whose derivation does not require Taylor's Theorem.
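Viète's product is easy to evaluate numerically; the sketch below truncates it after n terms and prints the error, whose roughly fourfold reduction per extra term reflects the exponential convergence mentioned above (the note's specific error bound is not reproduced here).

```python
import math

def viete_pi(n_terms):
    """Viete's product: 2/pi = prod_{k>=1} c_k / 2, with c_1 = sqrt(2) and
    c_{k+1} = sqrt(2 + c_k).  Truncating after n_terms approximates pi."""
    c = math.sqrt(2.0)
    product = c / 2.0
    for _ in range(n_terms - 1):
        c = math.sqrt(2.0 + c)
        product *= c / 2.0
    return 2.0 / product

for n in (5, 10, 20):
    approx = viete_pi(n)
    print(n, approx, abs(math.pi - approx))  # error drops by roughly 4x per extra term
```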
Data-driven region-of-interest selection without inflating Type I error rate.
Brooks, Joseph L; Zoumpoulaki, Alexia; Bowman, Howard
2017-01-01
In ERP and other large multidimensional neuroscience data sets, researchers often select regions of interest (ROIs) for analysis. The method of ROI selection can critically affect the conclusions of a study by causing the researcher to miss effects in the data or to detect spurious effects. In practice, to avoid inflating Type I error rate (i.e., false positives), ROIs are often based on a priori hypotheses or independent information. However, this can be insensitive to experiment-specific variations in effect location (e.g., latency shifts) reducing power to detect effects. Data-driven ROI selection, in contrast, is nonindependent and uses the data under analysis to determine ROI positions. Therefore, it has potential to select ROIs based on experiment-specific information and increase power for detecting effects. However, data-driven methods have been criticized because they can substantially inflate Type I error rate. Here, we demonstrate, using simulations of simple ERP experiments, that data-driven ROI selection can indeed be more powerful than a priori hypotheses or independent information. Furthermore, we show that data-driven ROI selection using the aggregate grand average from trials (AGAT), despite being based on the data at hand, can be safely used for ROI selection under many circumstances. However, when there is a noise difference between conditions, using the AGAT can inflate Type I error and should be avoided. We identify critical assumptions for use of the AGAT and provide a basis for researchers to use, and reviewers to assess, data-driven methods of ROI localization in ERP and other studies. © 2016 Society for Psychophysiological Research.
Clarke, D L; Kong, V Y; Naidoo, L C; Furlong, H; Aldous, C
2013-01-01
Acute surgical patients are particularly vulnerable to human error. The Acute Physiological Support Team (APST) was created with the twin objectives of identifying high-risk acute surgical patients in the general wards and reducing both the incidence of error and the impact of error on these patients. A number of error taxonomies were used to understand the causes of human error, and a simple risk stratification system was adopted to identify patients who are particularly at risk of error. During the period November 2012-January 2013 a total of 101 surgical patients were cared for by the APST at Edendale Hospital. The average age was forty years. There were 36 females and 65 males, comprising 66 general surgical patients and 35 trauma patients. Fifty-six patients were referred on the day of their admission. The average length of stay in the APST was four days. Eleven patients were haemodynamically unstable on presentation and twelve were clinically septic. The reasons for referral were sepsis (4), respiratory distress (3), acute kidney injury (AKI) (38), post-operative monitoring (39), pancreatitis (3), ICU down-referral (7), hypoxia (5), low GCS (1), and coagulopathy (1). The mortality rate was 13%. A total of thirty-six patients experienced 56 errors, and a total of 143 interventions were initiated by the APST. These included institution or adjustment of intravenous fluids (101), blood transfusion (12), antibiotics (9), management of neutropenic sepsis (1), central line insertion (3), optimization of oxygen therapy (7), correction of electrolyte abnormality (8), and correction of coagulopathy (2). CONCLUSION: Our intervention combined current taxonomies of error with a simple risk stratification system and is a variant of the defence-in-depth strategy of error reduction. We effectively identified and corrected a significant number of human errors in high-risk acute surgical patients. This audit has helped us understand the common sources of error in the general surgical wards and will inform ongoing error reduction initiatives. Copyright © 2013 Surgical Associates Ltd. Published by Elsevier Ltd. All rights reserved.
Model studies of the beam-filling error for rain-rate retrieval with microwave radiometers
NASA Technical Reports Server (NTRS)
Ha, Eunho; North, Gerald R.
1995-01-01
Low-frequency (less than 20 GHz) single-channel microwave retrievals of rain rate encounter the problem of beam-filling error. This error stems from the fact that the relationship between microwave brightness temperature and rain rate is nonlinear, coupled with the fact that the field of view is large or comparable to important scales of variability of the rain field. This means that one may not simply insert the area average of the brightness temperature into the formula for rain rate without incurring both bias and random error. The statistical heterogeneity of the rain-rate field in the footprint of the instrument is key to determining the nature of these errors. This paper makes use of a series of random rain-rate fields to study the size of the bias and random error associated with beam filling. A number of examples are analyzed in detail: the binomially distributed field, the gamma, the Gaussian, the mixed gamma, the lognormal, and the mixed lognormal ('mixed' here means there is a finite probability of no rain rate at a point of space-time). Of particular interest are the applicability of a simple error formula due to Chiu and collaborators and a formula that might hold in the large field of view limit. It is found that the simple formula holds for Gaussian rain-rate fields but begins to fail for highly skewed fields such as the mixed lognormal. While not conclusively demonstrated here, it is suggested that the notion of climatologically adjusting the retrievals to remove the beam-filling bias is a reasonable proposition.
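The mechanism is essentially a Jensen's-inequality effect: averaging a concave brightness-temperature response over a heterogeneous footprint and then inverting it underestimates the footprint-mean rain rate. The toy demonstration below uses a mixed-lognormal field and an invented saturating Tb(R) curve, not a calibrated radiative-transfer relation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Mixed-lognormal rain field within one footprint: dry pixels plus lognormal wet pixels
n_pix = 10_000
wet = rng.random(n_pix) < 0.4
rain = np.where(wet, rng.lognormal(mean=0.5, sigma=1.0, size=n_pix), 0.0)  # mm/h

def brightness_temp(r):
    """Illustrative saturating (nonlinear) brightness-temperature response to
    rain rate; not a calibrated radiative-transfer relation."""
    return 180.0 + 100.0 * (1.0 - np.exp(-0.08 * r))

tb_fov = brightness_temp(rain).mean()            # what the radiometer measures

def invert(tb):
    """Invert the same Tb(R) relation, as a single-channel retrieval would."""
    return -np.log(1.0 - (tb - 180.0) / 100.0) / 0.08

print("true footprint-mean rain :", rain.mean())
print("retrieved from mean Tb   :", invert(tb_fov))   # biased low: beam-filling error
```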
Paediatric Refractive Errors in an Eye Clinic in Osogbo, Nigeria.
Michaeline, Isawumi; Sheriff, Agboola; Bimbo, Ayegoro
2016-03-01
Paediatric ophthalmology is an emerging subspecialty in Nigeria, and as such there is a paucity of data on refractive errors in the country. This study set out to determine the pattern of refractive errors in children attending an eye clinic in South West Nigeria. A descriptive study of 180 consecutive subjects seen over a 2-year period. Presenting complaints, presenting visual acuity (PVA), age and sex were recorded. Clinical examination of the anterior and posterior segments of the eyes, extraocular muscle assessment and refraction were done. The types of refractive errors and their grades were determined. Corrected VA was obtained. Data were analysed using descriptive statistics in proportions and chi square, with a p value <0.05 considered significant. The age range of subjects was between 3 and 16 years, with mean age 11.7 (SD 0.51); males made up 33.9%. The commonest presenting complaint was blurring of distant vision (40%) and the commonest presenting visual acuity was 6/9 (33.9%); normal vision constituted >75.0%, visual impairment 20%, and low vision 23.3%. Low-grade spherical and cylindrical errors occurred most frequently (35.6% and 59.9%, respectively). Regular astigmatism was significantly more common, p < 0.001. The commonest diagnosis was simple myopic astigmatism (41.1%). Four cases of strabismus were seen. Simple spherical and cylindrical errors were the commonest types of refractive errors seen. Visual impairment and low vision occurred and could be a cause of absenteeism from school. A low-cost spectacle production or dispensing unit and health education are advocated for the prevention of visual impairment in a hospital set-up.
CCD image sensor induced error in PIV applications
NASA Astrophysics Data System (ADS)
Legrand, M.; Nogueira, J.; Vargas, A. A.; Ventas, R.; Rodríguez-Hidalgo, M. C.
2014-06-01
The readout procedure of charge-coupled device (CCD) cameras is known to generate some image degradation in different scientific imaging fields, especially in astrophysics. In the particular field of particle image velocimetry (PIV), widely extended in the scientific community, the readout procedure of the interline CCD sensor induces a bias in the registered position of particle images. This work proposes simple procedures to predict the magnitude of the associated measurement error. Generally, there are differences in the position bias for the different images of a certain particle at each PIV frame. This leads to a substantial bias error in the PIV velocity measurement (˜0.1 pixels). This is the order of magnitude that other typical PIV errors such as peak-locking may reach. Based on modern CCD technology and architecture, this work offers a description of the readout phenomenon and proposes a modeling for the CCD readout bias error magnitude. This bias, in turn, generates a velocity measurement bias error when there is an illumination difference between two successive PIV exposures. The model predictions match the experiments performed with two 12-bit-depth interline CCD cameras (MegaPlus ES 4.0/E incorporating the Kodak KAI-4000M CCD sensor with 4 megapixels). For different cameras, only two constant values are needed to fit the proposed calibration model and predict the error from the readout procedure. Tests by different researchers using different cameras would allow verification of the model, that can be used to optimize acquisition setups. Simple procedures to obtain these two calibration values are also described.
NASA Astrophysics Data System (ADS)
Clark, Martyn P.; Kavetski, Dmitri
2010-10-01
A major neglected weakness of many current hydrological models is the numerical method used to solve the governing model equations. This paper thoroughly evaluates several classes of time stepping schemes in terms of numerical reliability and computational efficiency in the context of conceptual hydrological modeling. Numerical experiments are carried out using 8 distinct time stepping algorithms and 6 different conceptual rainfall-runoff models, applied in a densely gauged experimental catchment, as well as in 12 basins with diverse physical and hydroclimatic characteristics. Results show that, over vast regions of the parameter space, the numerical errors of fixed-step explicit schemes commonly used in hydrology routinely dwarf the structural errors of the model conceptualization. This substantially degrades model predictions, but also, disturbingly, generates fortuitously adequate performance for parameter sets where numerical errors compensate for model structural errors. Simply running fixed-step explicit schemes with shorter time steps provides a poor balance between accuracy and efficiency: in some cases daily-step adaptive explicit schemes with moderate error tolerances achieved comparable or higher accuracy than 15 min fixed-step explicit approximations but were nearly 10 times more efficient. From the range of simple time stepping schemes investigated in this work, the fixed-step implicit Euler method and the adaptive explicit Heun method emerge as good practical choices for the majority of simulation scenarios. In combination with the companion paper, where impacts on model analysis, interpretation, and prediction are assessed, this two-part study vividly highlights the impact of numerical errors on critical performance aspects of conceptual hydrological models and provides practical guidelines for robust numerical implementation.
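The contrast between a fixed-step explicit scheme and an adaptive explicit Heun scheme can be seen even on a toy nonlinear storage equation. The sketch below uses a simple step-doubling error controller as an illustrative stand-in for the schemes actually benchmarked; the model and tolerances are invented.

```python
import numpy as np

def dsdt(s, p=2.0, k=0.5, m=2.0):
    """Toy nonlinear storage: inflow p minus outflow k*s**m."""
    return p - k * s**m

def euler_fixed(s0, t_end, dt):
    """Fixed-step explicit Euler (no error control)."""
    s, t = s0, 0.0
    while t < t_end:
        s += dt * dsdt(s)
        t += dt
    return s

def heun_adaptive(s0, t_end, dt=1.0, tol=1e-4):
    """Explicit Heun with a simple embedded-Euler error controller
    (an illustrative controller, not the one used in the paper)."""
    s, t = s0, 0.0
    while t < t_end:
        dt = min(dt, t_end - t)
        k1 = dsdt(s)
        k2 = dsdt(s + dt * k1)
        s_heun = s + 0.5 * dt * (k1 + k2)
        err = 0.5 * dt * abs(k2 - k1)            # Euler vs Heun discrepancy
        if err <= tol:
            s, t = s_heun, t + dt                # accept step
        dt *= min(2.0, max(0.2, 0.9 * (tol / max(err, 1e-12))**0.5))
    return s

# The fixed-step result oscillates around the equilibrium; the adaptive Heun converges.
print(euler_fixed(0.1, 100.0, dt=1.0), heun_adaptive(0.1, 100.0))
```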
Assessing uncertainty in high-resolution spatial climate data across the US Northeast.
Bishop, Daniel A; Beier, Colin M
2013-01-01
Local and regional-scale knowledge of climate change is needed to model ecosystem responses, assess vulnerabilities and devise effective adaptation strategies. High-resolution gridded historical climate (GHC) products address this need, but come with multiple sources of uncertainty that are typically not well understood by data users. To better understand this uncertainty in a region with a complex climatology, we conducted a ground-truthing analysis of two 4 km GHC temperature products (PRISM and NRCC) for the US Northeast using 51 Cooperative Network (COOP) weather stations utilized by both GHC products. We estimated GHC prediction error for monthly temperature means and trends (1980-2009) across the US Northeast and evaluated any landscape effects (e.g., elevation, distance from coast) on those prediction errors. Results indicated that station-based prediction errors for the two GHC products were similar in magnitude, but on average, the NRCC product predicted cooler than observed temperature means and trends, while PRISM was cooler for means and warmer for trends. We found no evidence for systematic sources of uncertainty across the US Northeast, although errors were largest at high elevations. Errors in the coarse-scale (4 km) digital elevation models used by each product were correlated with temperature prediction errors, more so for NRCC than PRISM. In summary, uncertainty in spatial climate data has many sources and we recommend that data users develop an understanding of uncertainty at the appropriate scales for their purposes. To this end, we demonstrate a simple method for utilizing weather stations to assess local GHC uncertainty and inform decisions among alternative GHC products.
Pan, Hong-Wei; Li, Wei; Li, Rong-Guo; Li, Yong; Zhang, Yi; Sun, En-Hua
2018-01-01
Rapid identification and determination of the antibiotic susceptibility profiles of the infectious agents in patients with bloodstream infections are critical steps in choosing an effective targeted antibiotic for treatment. However, there has been minimal effort focused on developing combined methods for the simultaneous direct identification and antibiotic susceptibility determination of bacteria in positive blood cultures. In this study, we constructed a lysis-centrifugation-wash procedure to prepare a bacterial pellet from positive blood cultures, which can be used directly for identification by matrix-assisted laser desorption/ionization-time-of-flight mass spectrometry (MALDI-TOF MS) and antibiotic susceptibility testing by the Vitek 2 system. The method was evaluated using a total of 129 clinical bacteria-positive blood cultures. The whole sample preparation process could be completed in <15 min. The rate of correct direct MALDI-TOF MS identification was 96.49% for gram-negative bacteria and 97.22% for gram-positive bacteria. Vitek 2 antimicrobial susceptibility testing of gram-negative bacteria showed a category agreement rate of 96.89%, with minor error, major error, and very major error rates of 2.63%, 0.24%, and 0.24%, respectively. Category agreement of antimicrobials against gram-positive bacteria was 92.81%, with minor error, major error, and very major error rates of 4.51%, 1.22%, and 1.46%, respectively. These results indicated that our direct antibiotic susceptibility analysis method worked well compared to the conventional culture-dependent laboratory method. Overall, this fast, easy, and accurate method can facilitate the direct identification and antibiotic susceptibility testing of bacteria in positive blood cultures.
Climbing fibers predict movement kinematics and performance errors.
Streng, Martha L; Popa, Laurentiu S; Ebner, Timothy J
2017-09-01
Requisite for understanding cerebellar function is a complete characterization of the signals provided by complex spike (CS) discharge of Purkinje cells, the output neurons of the cerebellar cortex. Numerous studies have provided insights into CS function, with the most predominant view being that they are evoked by error events. However, several reports suggest that CSs encode other aspects of movements and do not always respond to errors or unexpected perturbations. Here, we evaluated CS firing during a pseudo-random manual tracking task in the monkey ( Macaca mulatta ). This task provides extensive coverage of the work space and relative independence of movement parameters, delivering a robust data set to assess the signals that activate climbing fibers. Using reverse correlation, we determined feedforward and feedback CSs firing probability maps with position, velocity, and acceleration, as well as position error, a measure of tracking performance. The direction and magnitude of the CS modulation were quantified using linear regression analysis. The major findings are that CSs significantly encode all three kinematic parameters and position error, with acceleration modulation particularly common. The modulation is not related to "events," either for position error or kinematics. Instead, CSs are spatially tuned and provide a linear representation of each parameter evaluated. The CS modulation is largely predictive. Similar analyses show that the simple spike firing is modulated by the same parameters as the CSs. Therefore, CSs carry a broader array of signals than previously described and argue for climbing fiber input having a prominent role in online motor control. NEW & NOTEWORTHY This article demonstrates that complex spike (CS) discharge of cerebellar Purkinje cells encodes multiple parameters of movement, including motor errors and kinematics. The CS firing is not driven by error or kinematic events; instead it provides a linear representation of each parameter. In contrast with the view that CSs carry feedback signals, the CSs are predominantly predictive of upcoming position errors and kinematics. Therefore, climbing fibers carry multiple and predictive signals for online motor control. Copyright © 2017 the American Physiological Society.
Simple method for quick estimation of aquifer hydrogeological parameters
NASA Astrophysics Data System (ADS)
Ma, C.; Li, Y. Y.
2017-08-01
The development of simple and accurate methods to determine aquifer hydrogeological parameters is of importance for groundwater resources assessment and management. To address the problem of estimating aquifer parameters from unsteady pumping-test data, a fitting function for the Theis well function was proposed using a fitting optimization method, and a univariate linear regression equation was then established. The aquifer parameters can be obtained by solving for the coefficients of the regression equation. The application of the proposed method is illustrated using two published data sets. Error statistics and analysis of the pumping drawdown show that the method proposed in this paper yields quick and accurate estimates of the aquifer parameters, and that it can reliably identify the aquifer parameters from long-distance observed drawdowns and from early drawdowns. It is hoped that the proposed method will be helpful for practicing hydrogeologists and hydrologists.
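The paper's fitting function for the Theis well function is not given in the abstract; as a classical analogue of the same regression idea, the sketch below recovers transmissivity and storativity from the Cooper-Jacob straight-line approximation using invented late-time data.

```python
import numpy as np

def cooper_jacob_estimate(t_s, drawdown_m, q_m3s, r_m):
    """Estimate transmissivity T and storativity S from late-time drawdown
    using the Cooper-Jacob straight-line approximation to the Theis solution,
    s ~ (Q / 4*pi*T) * ln(2.25*T*t / (r**2 * S)),
    i.e. a simple linear regression of s on ln(t).  This is a classical
    analogue of the regression idea described above, not the paper's own
    fitting function."""
    b, a = np.polyfit(np.log(t_s), drawdown_m, 1)   # s = b*ln(t) + a
    T = q_m3s / (4.0 * np.pi * b)
    S = 2.25 * T / (r_m**2 * np.exp(a / b))
    return T, S

# Made-up late-time pumping-test data
t = np.array([600.0, 1200.0, 2400.0, 4800.0, 9600.0])   # s
s = np.array([0.52, 0.64, 0.76, 0.88, 1.00])            # m
print(cooper_jacob_estimate(t, s, q_m3s=0.01, r_m=30.0))
```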
Pillai, Anilkumar; Medford, Andrew R L
2013-01-01
Correct coding is essential for accurate reimbursement for clinical activity. Published data confirm that significant aberrations in coding occur, leading to considerable financial inaccuracies especially in interventional procedures such as endobronchial ultrasound-guided transbronchial needle aspiration (EBUS-TBNA). Previous data reported a 15% coding error for EBUS-TBNA in a U.K. service. We hypothesised that greater physician involvement with coders would reduce EBUS-TBNA coding errors and financial disparity. The study was done as a prospective cohort study in the tertiary EBUS-TBNA service in Bristol. 165 consecutive patients between October 2009 and March 2012 underwent EBUS-TBNA for evaluation of unexplained mediastinal adenopathy on computed tomography. The chief coder was prospectively electronically informed of all procedures and cross-checked on a prospective database and by Trust Informatics. Cost and coding analysis was performed using the 2010-2011 tariffs. All 165 procedures (100%) were coded correctly as verified by Trust Informatics. This compares favourably with the 14.4% coding inaccuracy rate for EBUS-TBNA in a previous U.K. prospective cohort study [odds ratio 201.1 (1.1-357.5), p = 0.006]. Projected income loss was GBP 40,000 per year in the previous study, compared to a GBP 492,195 income here with no coding-attributable loss in revenue. Greater physician engagement with coders prevents coding errors and financial losses which can be significant especially in interventional specialties. The intervention can be as cheap, quick and simple as a prospective email to the coding team with cross-checks by Trust Informatics and against a procedural database. We suggest that all specialties should engage more with their coders using such a simple intervention to prevent revenue losses. Copyright © 2013 S. Karger AG, Basel.
Working Memory Load Strengthens Reward Prediction Errors.
Collins, Anne G E; Ciullo, Brittany; Frank, Michael J; Badre, David
2017-04-19
Reinforcement learning (RL) in simple instrumental tasks is usually modeled as a monolithic process in which reward prediction errors (RPEs) are used to update expected values of choice options. This modeling ignores the different contributions of different memory and decision-making systems thought to contribute even to simple learning. In an fMRI experiment, we investigated how working memory (WM) and incremental RL processes interact to guide human learning. WM load was manipulated by varying the number of stimuli to be learned across blocks. Behavioral results and computational modeling confirmed that learning was best explained as a mixture of two mechanisms: a fast, capacity-limited, and delay-sensitive WM process together with slower RL. Model-based analysis of fMRI data showed that striatum and lateral prefrontal cortex were sensitive to RPE, as shown previously, but, critically, these signals were reduced when the learning problem was within capacity of WM. The degree of this neural interaction related to individual differences in the use of WM to guide behavioral learning. These results indicate that the two systems do not process information independently, but rather interact during learning. SIGNIFICANCE STATEMENT Reinforcement learning (RL) theory has been remarkably productive at improving our understanding of instrumental learning as well as dopaminergic and striatal network function across many mammalian species. However, this neural network is only one contributor to human learning and other mechanisms such as prefrontal cortex working memory also play a key role. Our results also show that these other players interact with the dopaminergic RL system, interfering with its key computation of reward prediction errors. Copyright © 2017 the authors 0270-6474/17/374332-11$15.00/0.
Errors made by animals in memory paradigms are not always due to failure of memory.
Wilkie, D M; Willson, R J; Carr, J A
1999-01-01
It is commonly assumed that errors in animal memory paradigms such as delayed matching to sample, radial mazes, and food-cache recovery are due to failures in memory for information necessary to perform the task successfully. A body of research, reviewed here, suggests that this is not always the case: animals sometimes make errors despite apparently being able to remember the appropriate information. In this paper a case study of this phenomenon is described, along with a demonstration of a simple procedural modification that successfully reduced these non-memory errors, thereby producing a better measure of memory.
ERIC Educational Resources Information Center
Peterson, Karen I.
2008-01-01
The experiment developed in this article addresses the concept of equipment calibration for reducing systematic error. It also suggests simple student-prepared sucrose solutions for which accurate densities are known, but not readily available to students. Densities are measured with simple glassware that has been calibrated using the density of…
AIS spectra of desert shrub canopies
NASA Technical Reports Server (NTRS)
Murray, R.; Isaacson, D. L.; Schrumpf, B. J.; Ripple, W. J.; Lewis, A. J.
1986-01-01
Airborne Imaging Spectrometer (AIS) data were collected 30 August 1985 from a desert shrub community in central Oregon. Spectra from artificial targets placed on the test site and from bare soil, big sagebrush (Artemisia tridentata wyomingensis), silver sagebrush (Artemisia cana bolander), and exposed volcanic rocks were studied. Spectral data from grating position 3 (tree mode) were selected from 25 ground positions for analysis by Principal Factor Analysis (PFA). In this grating position, as many as six factors were identified as significant in contributing to spectral structure. Channels 74 through 84 (tree mode) best characterized between-class differences. Other channels were identified as nondiscriminating and as associated with such errors as excessive atmospheric absorption and grating position changes. The test site was relatively simple with the two species (A. tridentata and A. cana) representing nearly 95% of biomass and with only two mineral backgrounds, a montmorillonitic soil and volcanic rocks. If, as in this study, six factors of spectral structure can be extracted from a single grating position from data acquired over a simple vegetation community, then AIS data must be considered rich in information-gathering potential.
Fantuzzo, J. A.; Mirabella, V. R.; Zahn, J. D.
2017-01-01
Abstract Synapse formation analyses can be performed by imaging and quantifying fluorescent signals of synaptic markers. Traditionally, these analyses are done using simple or multiple thresholding and segmentation approaches or by labor-intensive manual analysis by a human observer. Here, we describe Intellicount, a high-throughput, fully-automated synapse quantification program which applies a novel machine learning (ML)-based image processing algorithm to systematically improve region of interest (ROI) identification over simple thresholding techniques. Through processing large datasets from both human and mouse neurons, we demonstrate that this approach allows image processing to proceed independently of carefully set thresholds, thus reducing the need for human intervention. As a result, this method can efficiently and accurately process large image datasets with minimal interaction by the experimenter, making it less prone to bias and less liable to human error. Furthermore, Intellicount is integrated into an intuitive graphical user interface (GUI) that provides a set of valuable features, including automated and multifunctional figure generation, routine statistical analyses, and the ability to run full datasets through nested folders, greatly expediting the data analysis process. PMID:29218324
Thieler, E. Robert; Himmelstoss, Emily A.; Zichichi, Jessica L.; Ergul, Ayhan
2009-01-01
The Digital Shoreline Analysis System (DSAS) version 4.0 is a software extension to ESRI ArcGIS v.9.2 and above that enables a user to calculate shoreline rate-of-change statistics from multiple historic shoreline positions. A user-friendly interface of simple buttons and menus guides the user through the major steps of shoreline change analysis. Components of the extension and user guide include (1) instruction on the proper way to define a reference baseline for measurements, (2) automated and manual generation of measurement transects and metadata based on user-specified parameters, and (3) output of calculated rates of shoreline change and other statistical information. DSAS computes shoreline rates of change using four different methods: (1) endpoint rate, (2) simple linear regression, (3) weighted linear regression, and (4) least median of squares. The standard error, correlation coefficient, and confidence interval are also computed for the simple and weighted linear-regression methods. The results of all rate calculations are output to a table that can be linked to the transect file by a common attribute field. DSAS is intended to facilitate the shoreline change-calculation process and to provide rate-of-change information and the statistical data necessary to establish the reliability of the calculated results. The software is also suitable for any generic application that calculates positional change over time, such as assessing rates of change of glacier limits in sequential aerial photos, river edge boundaries, land-cover changes, and so on.
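As a rough illustration of two of the four rate-of-change methods listed above, the sketch below computes an end point rate and a simple linear-regression rate (with the standard error of the slope) for one transect; the shoreline dates and baseline distances are invented and the code is not part of DSAS.

```python
# Minimal sketch of two shoreline rate-of-change measures used by DSAS-style
# analyses: end point rate (EPR) and simple linear regression rate (LRR).
# The transect data below are invented example values (year, distance in m
# from the baseline to the shoreline along the transect).
import numpy as np

years = np.array([1950.0, 1972.0, 1986.0, 1998.0, 2009.0])
dist_m = np.array([120.0, 112.5, 104.0, 99.5, 92.0])

# End point rate: change between oldest and most recent shoreline / elapsed time.
epr = (dist_m[-1] - dist_m[0]) / (years[-1] - years[0])

# Simple linear regression rate: slope of distance vs. time by least squares,
# with the standard error of the slope as a basic reliability measure.
A = np.vstack([years, np.ones_like(years)]).T
(slope, intercept), res, *_ = np.linalg.lstsq(A, dist_m, rcond=None)
n = len(years)
sigma2 = res[0] / (n - 2)                                   # residual variance
se_slope = np.sqrt(sigma2 / np.sum((years - years.mean()) ** 2))

print(f"EPR = {epr:.2f} m/yr, LRR = {slope:.2f} +/- {se_slope:.2f} m/yr")
```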
On-board error correction improves IR earth sensor accuracy
NASA Astrophysics Data System (ADS)
Alex, T. K.; Kasturirangan, K.; Shrivastava, S. K.
1989-10-01
Infra-red earth sensors are used in satellites for attitude sensing. Their accuracy is limited by systematic and random errors. The sources of errors in a scanning infra-red earth sensor are analyzed in this paper. The systematic errors arising from seasonal variation of infra-red radiation, the oblate shape of the earth, the ambient temperature of the sensor, and changes in scan/spin rates have been analyzed. Simple relations are derived using least-squares curve fitting for on-board correction of these errors. Random errors arising from detector and amplifier noise, alignment instability and localized radiance anomalies are analyzed and possible correction methods are suggested. Sun and Moon interference on earth sensor performance has seriously affected a number of missions. The on-board processor detects Sun/Moon interference and corrects the errors on-board. An eightfold improvement in sensing accuracy is achievable, comparable with ground-based post facto attitude refinement.
Space charge enhanced plasma gradient effects on satellite electric field measurements
NASA Technical Reports Server (NTRS)
Diebold, Dan; Hershkowitz, Noah; Dekock, J.; Intrator, T.; Hsieh, M-K.
1991-01-01
It has been recognized that plasma gradients can cause error in magnetospheric electric field measurements made by double probes. This paper discusses space charge enhanced Plasma Gradient Induced Error (PGIE) in general terms, presents the results of a laboratory experiment designed to demonstrate this error, and derives a simple expression that quantifies it. Experimental conditions were not identical to magnetospheric conditions, although efforts were made to ensure that the relevant physics applied to both cases. The experimental data demonstrate some of the possible errors in electric field measurements made by strongly emitting probes due to space charge effects in the presence of plasma gradients. Probe errors under space and laboratory conditions are discussed, as well as experimental error. In the final section, theoretical aspects are examined and an expression is derived for the maximum steady-state space charge enhanced PGIE measured by two identical current-biased probes.
Simulation of rare events in quantum error correction
NASA Astrophysics Data System (ADS)
Bravyi, Sergey; Vargo, Alexander
2013-12-01
We consider the problem of calculating the logical error probability for a stabilizer quantum code subject to random Pauli errors. To access the regime of large code distances where logical errors are extremely unlikely we adopt the splitting method widely used in Monte Carlo simulations of rare events and Bennett's acceptance ratio method for estimating the free energy difference between two canonical ensembles. To illustrate the power of these methods in the context of error correction, we calculate the logical error probability PL for the two-dimensional surface code on a square lattice with a pair of holes for all code distances d≤20 and all error rates p below the fault-tolerance threshold. Our numerical results confirm the expected exponential decay PL ∼ exp[-α(p)d] and provide a simple fitting formula for the decay rate α(p). Both noiseless and noisy syndrome readout circuits are considered.
Mitchell, W G; Chavez, J M; Baker, S A; Guzman, B L; Azen, S P
1990-07-01
Maturation of sustained attention was studied in a group of 52 hyperactive elementary school children and 152 controls using a microcomputer-based test formatted to resemble a video game. In nonhyperactive children, both simple and complex reaction time decreased with age, as did variability of response time. Omission errors were extremely infrequent on simple reaction time and decreased with age on the more complex tasks. Commission errors had an inconsistent relationship with age. Hyperactive children were slower, more variable, and made more errors on all segments of the game than did controls. Both motor speed and calculated mental speed were slower in hyperactive children, with greater discrepancy for responses directed to the nondominant hand, suggesting that a selective right hemisphere deficit may be present in hyperactives. A summary score (number of individual game scores above the 95th percentile) of 4 or more detected 60% of hyperactive subjects with a false positive rate of 5%. Agreement with the Matching Familiar Figures Test was 75% in the hyperactive group.
A novel algorithm for laser self-mixing sensors used with the Kalman filter to measure displacement
NASA Astrophysics Data System (ADS)
Sun, Hui; Liu, Ji-Gou
2018-07-01
This paper proposes a simple and effective method for estimating the feedback level factor C in a self-mixing interferometric sensor. It is used with a Kalman filter to retrieve the displacement. Without the complicated and onerous calculation process of the general C estimation method, a final equation is obtained. Thus, the estimation of C only involves a few simple calculations. It successfully retrieves the sinusoidal and aleatory displacement by means of simulated self-mixing signals in both weak and moderate feedback regimes. To deal with the errors resulting from noise and the estimation bias of C, and to further improve the retrieval precision, a Kalman filter is employed following the general phase unwrapping method. The simulation and experiment results show that the displacement retrieved using the C obtained with the proposed method is comparable to that obtained with the joint estimation of C and α. In addition, the Kalman filter can significantly decrease measurement errors, especially the error caused by incorrectly locating the peak and valley positions of the signal.
How does negative emotion cause false memories?
Brainerd, C J; Stein, L M; Silveira, R A; Rohenkohl, G; Reyna, V F
2008-09-01
Remembering negative events can stimulate high levels of false memory, relative to remembering neutral events. In experiments in which the emotional valence of encoded materials was manipulated with their arousal levels controlled, valence produced a continuum of memory falsification. Falsification was highest for negative materials, intermediate for neutral materials, and lowest for positive materials. Conjoint-recognition analysis produced a simple process-level explanation: As one progresses from positive to neutral to negative valence, false memory increases because (a) the perceived meaning resemblance between false and true items increases and (b) subjects are less able to use verbatim memories of true items to suppress errors.
Evidence flow graph methods for validation and verification of expert systems
NASA Technical Reports Server (NTRS)
Becker, Lee A.; Green, Peter G.; Bhatnagar, Jayant
1989-01-01
The results of an investigation into the use of evidence flow graph techniques for performing validation and verification of expert systems are given. A translator to convert horn-clause rule bases into evidence flow graphs, a simulation program, and methods of analysis were developed. These tools were then applied to a simple rule base which contained errors. It was found that the method was capable of identifying a variety of problems, for example that the order of presentation of input data or small changes in critical parameters could affect the output from a set of rules.
One Step Quantum Key Distribution Based on EPR Entanglement
Li, Jian; Li, Na; Li, Lei-Lei; Wang, Tao
2016-01-01
A novel quantum key distribution protocol is presented, based on entanglement and dense coding and allowing asymptotically secure key distribution. Considering the storage time limit of quantum bits, a grouping quantum key distribution protocol is proposed, which overcomes the vulnerability of the first protocol and improves the maneuverability. Moreover, a security analysis is given, showing that a simple type of eavesdropper's attack would introduce an error rate of at least 46.875%. Compared with the "Ping-pong" protocol involving two steps, the proposed protocol does not need to store the qubit and only involves one step. PMID:27357865
NASA Technical Reports Server (NTRS)
Carter, Gregory A.; Spiering, Bruce A.
2000-01-01
The present study utilized regression analysis to identify wavebands and band ratios within the 400-850 nm range that could be used to estimate total chlorophyll concentration with minimal error, and the simple regression models that were most effective in estimating chlorophyll concentration. Spectra and chlorophyll concentrations were measured for two broadleaved species, a broadleaved vine, a needle-leaved conifer, and a representative of the grass family. Overall, reflectance, transmittance, and absorptance corresponded most precisely with chlorophyll concentration at wavelengths near 700 nm, although regressions were also strong in the 550-625 nm range.
First Accelerator Test of the Kinematic Lightweight Energy Meter (KLEM) Prototype
NASA Technical Reports Server (NTRS)
Bashindzhagyan, G.; Adams, J. H.; Bashindzhagyan, P.; Chilingarian, A.; Donnelly, J.; Drury, L.; Egorov, N.; Golubkov, S.; Grebenyuk, V.; Kalinin, A.;
2002-01-01
The essence of the KLEM (Kinematic Lightweight Energy Meter) instrument is to directly measure the elemental energy spectra of high-energy cosmic rays by determining the angular distribution of secondary particles produced in a target. The first test of the simple KLEM prototype has been performed at the CERN SPS test-beam with 180 GeV pions during 2001. The results of the first test analysis confirm that, using the KLEM method, the energy of 180 GeV pions can be measured with a relative error of about 67%, which is very close to the results of the simulation (65%).
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang Yongpeng; Northwest Institute of Nuclear Technology, P.O. Box 69-13, Xi'an 710024; Liu Guozhi
In this paper, the Child-Langmuir law and Langmuir-Blodgett law are generalized to the relativistic regime by a simple method. Two classical laws suitable for the nonrelativistic regime are modified to simple approximate expressions applicable for calculating the space-charge-limited currents of one-dimensional steady-state planar diodes and coaxial diodes under the relativistic regime. The simple approximate expressions, extending the Child-Langmuir law and Langmuir-Blodgett law to fit the full range of voltage, have small relative errors less than 1% for one-dimensional planar diodes and less than 5% for coaxial diodes.
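For reference, the classical nonrelativistic Child-Langmuir law that the paper generalizes gives the space-charge-limited current density of a one-dimensional planar diode with gap spacing $d$ and applied voltage $V$ (the relativistic corrections derived in the paper are not reproduced here):

$$ J_{\mathrm{CL}} = \frac{4\varepsilon_0}{9}\sqrt{\frac{2e}{m_e}}\,\frac{V^{3/2}}{d^{2}}. $$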
Atmospheric Tracer Inverse Modeling Using Markov Chain Monte Carlo (MCMC)
NASA Astrophysics Data System (ADS)
Kasibhatla, P.
2004-12-01
In recent years, there has been an increasing emphasis on the use of Bayesian statistical estimation techniques to characterize the temporal and spatial variability of atmospheric trace gas sources and sinks. The applications have been varied in terms of the particular species of interest, as well as in terms of the spatial and temporal resolution of the estimated fluxes. However, one common characteristic has been the use of relatively simple statistical models for describing the measurement and chemical transport model error statistics and prior source statistics. For example, multivariate normal probability distribution functions (pdfs) are commonly used to model these quantities and inverse source estimates are derived for fixed values of pdf parameters. While the advantage of this approach is that closed-form analytical solutions for the a posteriori pdfs of interest are available, it is worth exploring Bayesian analysis approaches which allow for a more general treatment of error and prior source statistics. Here, we present an application of the Markov Chain Monte Carlo (MCMC) methodology to an atmospheric tracer inversion problem to demonstrate how more general statistical models for errors can be incorporated into the analysis in a relatively straightforward manner. The MCMC approach to Bayesian analysis, which has found wide application in a variety of fields, is a statistical simulation approach that involves computing moments of interest of the a posteriori pdf by efficiently sampling this pdf. The specific inverse problem that we focus on is the annual mean CO2 source/sink estimation problem considered by the TransCom3 project. TransCom3 was a collaborative effort involving various modeling groups and followed a common modeling and analysis protocol. As such, this problem provides a convenient case study to demonstrate the applicability of the MCMC methodology to atmospheric tracer source/sink estimation problems.
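As a schematic illustration of the approach (not the TransCom3 configuration), the sketch below draws posterior samples of two source strengths in a toy linear-transport inversion with Gaussian observation errors and Gaussian priors, using a random-walk Metropolis sampler; the transport matrix, noise levels, and priors are all invented.

```python
# Toy Bayesian tracer inversion with a random-walk Metropolis sampler.
# Illustrative only: a 2-source, 5-observation linear problem with Gaussian
# errors and priors, not the TransCom3 CO2 configuration.
import numpy as np

rng = np.random.default_rng(0)

H = np.array([[0.8, 0.1],        # transport (Jacobian) matrix: obs = H @ sources
              [0.5, 0.4],
              [0.3, 0.6],
              [0.1, 0.9],
              [0.6, 0.3]])
true_s = np.array([2.0, -1.0])
obs = H @ true_s + rng.normal(0.0, 0.2, size=5)    # synthetic observations

obs_sigma, prior_sigma = 0.2, 3.0                  # error and prior std devs

def log_post(s):
    """Log posterior: Gaussian likelihood plus zero-mean Gaussian prior."""
    resid = obs - H @ s
    return (-0.5 * np.sum(resid**2) / obs_sigma**2
            - 0.5 * np.sum(s**2) / prior_sigma**2)

samples, s = [], np.zeros(2)
lp = log_post(s)
for _ in range(20000):
    prop = s + rng.normal(0.0, 0.1, size=2)        # random-walk proposal
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:       # Metropolis accept/reject
        s, lp = prop, lp_prop
    samples.append(s)

samples = np.array(samples[5000:])                 # drop burn-in
print("posterior mean:", samples.mean(axis=0))
print("posterior std: ", samples.std(axis=0))
```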
Intercomparison of Tropical Indian Ocean features in different ocean reanalysis products
NASA Astrophysics Data System (ADS)
Karmakar, Ananya; Parekh, Anant; Chowdary, J. S.; Gnanaseelan, C.
2017-09-01
This study makes an intercomparison of the ocean state of the Tropical Indian Ocean (TIO) in different ocean reanalyses, namely the global ocean data assimilation system (GODAS), ensemble coupled data assimilation (ECDA), ocean reanalysis system 4 (ORAS4) and simple ocean data assimilation (SODA), with reference to in-situ buoy observations, satellite observed sea surface temperature (SST), the EN4 analysis and ocean surface current analysis real time (OSCAR). Analysis of the mean state of SST and sea surface salinity (SSS) reveals that ORAS4 compares best with both satellite observations and the EN4 analysis, followed by SODA, ECDA and GODAS. The surface circulation in ORAS4 is closer to OSCAR than that of the other reanalyses. However, mixed layer depth (MLD) is best simulated by SODA, followed by ECDA, ORAS4 and GODAS. The seasonal evolution of error indicates that the highest deviation in SST and MLD over the TIO occurs during spring and summer in GODAS. Statistical analysis with concurrent EN4 data for the period 1980-2010 shows that the difference and standard deviation (variability strength) ratio for SSS and MLD is mostly greater than one. In general, the strength of variability is overestimated by all the reanalyses. Further comparison with in-situ buoy observations shows that MLD errors over the equatorial Indian Ocean (EIO) and the Bay of Bengal are higher than with the EN4 analysis. Overall, ORAS4 displays the highest correlation and lowest error among all reanalyses with respect to both the EN4 analysis and buoy observations. Major issues in the reanalyses are the underestimation of upper ocean stability in the TIO, underestimation of surface current in the EIO, overestimation of vertical shear of the current and improper variability in different oceanic variables. To improve the skill of the reanalyses over the TIO, the salinity vertical structure and upper ocean circulation need to be better represented.
Drewniak, Elizabeth I.; Jay, Gregory D.; Fleming, Braden C.; Crisco, Joseph J.
2009-01-01
In attempts to better understand the etiology of osteoarthritis, a debilitating joint disease that results in the degeneration of articular cartilage in synovial joints, researchers have focused on joint tribology, the study of joint friction, lubrication, and wear. Several different approaches have been used to investigate the frictional properties of articular cartilage. In this study, we examined two analysis methods for calculating the coefficient of friction (μ) using a simple pendulum system and BL6 murine knee joints (n=10) as the fulcrum. A Stanton linear decay model (Lin μ) and an exponential model that accounts for viscous damping (Exp μ) were fit to the decaying pendulum oscillations. Root mean square error (RMSE), asymptotic standard error (ASE), and coefficient of variation (CV) were calculated to evaluate the fit and measurement precision of each model. This investigation demonstrated that while Lin μ was more repeatable, based on CV (5.0% for Lin μ; 18% for Exp μ), Exp μ provided a better fitting model, based on RMSE (0.165° for Exp μ; 0.391° for Lin μ) and ASE (0.033 for Exp μ; 0.185 for Lin μ), and had a significantly lower coefficient of friction value (0.022±0.007 for Exp μ; 0.042±0.016 for Lin μ) (p=0.001). This study details the use of a simple pendulum for examining cartilage properties in situ that will have applications investigating cartilage mechanics in a variety of species. The Exp μ model provided a more accurate fit to the experimental data for predicting the frictional properties of intact joints in pendulum systems. PMID:19632680
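As a rough sketch of the comparison described above, the code below fits a linear (Stanton-type) and an exponential amplitude-decay model to a synthetic series of pendulum peak amplitudes and reports each model's RMSE; the functional forms and numbers are simplified assumptions, not the authors' exact formulation or data.

```python
# Minimal sketch: compare a linear (Stanton-type friction) and an exponential
# (viscous damping) amplitude-decay model for a pendulum experiment.
# Synthetic data; simplified model forms, not the authors' exact equations.
import numpy as np
from scipy.optimize import curve_fit

cycle = np.arange(20)
amp = 10.0 * np.exp(-0.08 * cycle) + np.random.default_rng(1).normal(0, 0.05, 20)

def linear_decay(n, a0, k):      # amplitude falls by a constant amount per cycle
    return a0 - k * n

def exp_decay(n, a0, c):         # amplitude falls by a constant fraction per cycle
    return a0 * np.exp(-c * n)

for name, model in [("linear", linear_decay), ("exponential", exp_decay)]:
    p, _ = curve_fit(model, cycle, amp, p0=[10.0, 0.1])
    rmse = np.sqrt(np.mean((amp - model(cycle, *p)) ** 2))
    print(f"{name:12s} fit params = {p.round(3)}, RMSE = {rmse:.3f} deg")
```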
Interactions of timing and prediction error learning.
Kirkpatrick, Kimberly
2014-01-01
Timing and prediction error learning have historically been treated as independent processes, but growing evidence has indicated that they are not orthogonal. Timing emerges at the earliest time point when conditioned responses are observed, and temporal variables modulate prediction error learning in both simple conditioning and cue competition paradigms. In addition, prediction errors, through changes in reward magnitude or value, alter the timing of behavior. Thus, there appears to be a bi-directional interaction between timing and prediction error learning. Modern theories have attempted to integrate the two processes with mixed success. A neurocomputational approach to theory development is espoused, which draws on neurobiological evidence to guide and constrain computational model development. Heuristics for future model development are presented with the goal of sparking new approaches to theory development in the timing and prediction error fields. Copyright © 2013 Elsevier B.V. All rights reserved.
Bacon, Dave; Flammia, Steven T
2009-09-18
The difficulty in producing precisely timed and controlled quantum gates is a significant source of error in many physical implementations of quantum computers. Here we introduce a simple universal primitive, adiabatic gate teleportation, which is robust to timing errors and many control errors and maintains a constant energy gap throughout the computation above a degenerate ground state space. This construction allows for geometric robustness based upon the control of two independent qubit interactions. Further, our piecewise adiabatic evolution easily relates to the quantum circuit model, enabling the use of standard methods from fault-tolerance theory for establishing thresholds.
The thinking doctor: clinical decision making in contemporary medicine.
Trimble, Michael; Hamilton, Paul
2016-08-01
Diagnostic errors are responsible for a significant number of adverse events. Logical reasoning and good decision-making skills are key factors in reducing such errors, but little emphasis has traditionally been placed on how these thought processes occur, and how errors could be minimised. In this article, we explore key cognitive ideas that underpin clinical decision making and suggest that by employing some simple strategies, physicians might be better able to understand how they make decisions and how the process might be optimised. © 2016 Royal College of Physicians.
Lobach, Iryna; Mallick, Bani; Carroll, Raymond J
2011-01-01
Case-control studies are widely used to detect gene-environment interactions in the etiology of complex diseases. Many variables that are of interest to biomedical researchers are difficult to measure on an individual level, e.g. nutrient intake, cigarette smoking exposure, long-term toxic exposure. Measurement error causes bias in parameter estimates, thus masking key features of data and leading to loss of power and spurious/masked associations. We develop a Bayesian methodology for analysis of case-control studies for the case when measurement error is present in an environmental covariate and the genetic variable has missing data. This approach offers several advantages. It allows prior information to enter the model to make estimation and inference more precise. The environmental covariates measured exactly are modeled completely nonparametrically. Further, information about the probability of disease can be incorporated in the estimation procedure to improve the quality of parameter estimates, which cannot be done in conventional case-control studies. A unique feature of the procedure under investigation is that the analysis is based on a pseudo-likelihood function; therefore, conventional Bayesian techniques may not be technically correct. We propose an approach using Markov Chain Monte Carlo sampling as well as a computationally simple method based on an asymptotic posterior distribution. Simulation experiments demonstrated that our method produced parameter estimates that are nearly unbiased even for small sample sizes. An application of our method is illustrated using a population-based case-control study of the association between calcium intake and the risk of colorectal adenoma development.
AMPLISAS: a web server for multilocus genotyping using next-generation amplicon sequencing data.
Sebastian, Alvaro; Herdegen, Magdalena; Migalska, Magdalena; Radwan, Jacek
2016-03-01
Next-generation sequencing (NGS) technologies are revolutionizing the fields of biology and medicine as powerful tools for amplicon sequencing (AS). Using combinations of primers and barcodes, it is possible to sequence targeted genomic regions with deep coverage for hundreds, even thousands, of individuals in a single experiment. This is extremely valuable for the genotyping of gene families in which locus-specific primers are often difficult to design, such as the major histocompatibility complex (MHC). The utility of AS is, however, limited by the high intrinsic sequencing error rates of NGS technologies and other sources of error such as polymerase amplification or chimera formation. Correcting these errors requires extensive bioinformatic post-processing of NGS data. Amplicon Sequence Assignment (AMPLISAS) is a tool that performs analysis of AS results in a simple and efficient way, while offering customization options for advanced users. AMPLISAS is designed as a three-step pipeline consisting of (i) read demultiplexing, (ii) unique sequence clustering and (iii) erroneous sequence filtering. Allele sequences and frequencies are retrieved in excel spreadsheet format, making them easy to interpret. AMPLISAS performance has been successfully benchmarked against previously published genotyped MHC data sets obtained with various NGS technologies. © 2015 John Wiley & Sons Ltd.
On Accuracy of Adaptive Grid Methods for Captured Shocks
NASA Technical Reports Server (NTRS)
Yamaleev, Nail K.; Carpenter, Mark H.
2002-01-01
The accuracy of two grid adaptation strategies, grid redistribution and local grid refinement, is examined by solving the 2-D Euler equations for the supersonic steady flow around a cylinder. Second- and fourth-order linear finite difference shock-capturing schemes, based on the Lax-Friedrichs flux splitting, are used to discretize the governing equations. The grid refinement study shows that for the second-order scheme, neither grid adaptation strategy improves the numerical solution accuracy compared to that calculated on a uniform grid with the same number of grid points. For the fourth-order scheme, the dominant first-order error component is reduced by the grid adaptation, while the design-order error component drastically increases because of the grid nonuniformity. As a result, both grid adaptation techniques improve the numerical solution accuracy only on the coarsest mesh or on very fine grids that are seldom found in practical applications because of the computational cost involved. Similar error behavior has been obtained for the pressure integral across the shock. A simple analysis shows that both grid adaptation strategies are not without penalties in the numerical solution accuracy. Based on these results, a new grid adaptation criterion for captured shocks is proposed.
NASA Technical Reports Server (NTRS)
Tai, Chang-Kou
1988-01-01
A new method (constrained sinusoidal crossover adjustment) for removing the orbit error in satellite altimetry is tested (using crossovers accumulated in the first 91 days of the Geosat non-repeat era in the tropical Pacific) and found to have excellent qualities. Two features distinguish the new method from the conventional bias-and-tilt crossover adjustment. First, a sine wave (with wavelength equaling the circumference of the Earth) is used to represent the orbit error for each satellite revolution, instead of the bias-and-tilt (and curvature, if necessary) approach for each segment of the satellite ground track. Secondly, the indeterminacy of the adjustment process is removed by a simple constraint minimizing the amplitudes of the sine waves, rather than by fixing selected tracks. Overall the new method is more accurate, more efficient, and much less cumbersome than the old. The idea of restricting the crossover adjustment to crossovers between tracks that are less than certain days apart in order to preserve the large-scale long-term oceanic variability is also tested with inconclusive results because the orbit error was unusually nonstationary in the initial 91 days of the GEOSAT mission.
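Schematically, and only as a reading of the abstract rather than the paper's exact formulation, the orbit error on revolution i might be written as a once-per-revolution sinusoid whose amplitude is penalized to remove the indeterminacy of the crossover adjustment:

$$ e_i(t) = a_i \sin\!\left(\frac{2\pi t}{T}\right) + b_i \cos\!\left(\frac{2\pi t}{T}\right), \qquad \min \sum_i \left(a_i^{2} + b_i^{2}\right), $$

where $T$ is the orbital period and the coefficients $a_i$, $b_i$ are chosen to minimize crossover differences subject to the amplitude constraint.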
Erdeniz, Burak; Rohe, Tim; Done, John; Seidler, Rachael D
2013-01-01
Conventional neuroimaging techniques provide information about condition-related changes of the BOLD (blood-oxygen-level dependent) signal, indicating only where and when the underlying cognitive processes occur. Recently, with the help of a new approach called "model-based" functional neuroimaging (fMRI), researchers are able to visualize changes in the internal variables of a time varying learning process, such as the reward prediction error or the predicted reward value of a conditional stimulus. However, despite being extremely beneficial to the imaging community in understanding the neural correlates of decision variables, a model-based approach to brain imaging data is also methodologically challenging due to the multicollinearity problem in statistical analysis. There are multiple sources of multicollinearity in functional neuroimaging including investigations of closely related variables and/or experimental designs that do not account for this. The source of multicollinearity discussed in this paper occurs due to correlation between different subjective variables that are calculated very close in time. Here, we review methodological approaches to analyzing such data by discussing the special case of separating the reward prediction error signal from reward outcomes.
NASA Astrophysics Data System (ADS)
1996-05-01
Some of the structures in Figure 1 from "Preparing "Chameleon Balls" from Natural Plants: Simple Handmade pH Indicator and Teaching Material for Chemical Education"[Kanda, N.; Asano, T.; Itoh, T.; Onoda, M. J. Chem. Educ. 1995, 72, 1131] were incorrect due to a staff error. The correct figure appears below.
A Simple Geometric Method of Estimating the Error in Using Vieta's Product for [pi]
ERIC Educational Resources Information Center
Osler, T. J.
2007-01-01
Vieta's famous product using factors that are nested radicals is the oldest infinite product as well as the first non-iterative method for finding [pi]. In this paper a simple geometric construction intimately related to this product is described. The construction provides the same approximations to [pi] as are given by partial products from…
Erratum: Simple Seismic Tests of the Solar Core
NASA Astrophysics Data System (ADS)
Kennedy, Dallas C.
2000-12-01
In the article ``Simple Seismic Tests of the Solar Core'' by Dallas C. Kennedy (ApJ, 540, 1109 [2000]), Figures 1, 2, and 3 in the print edition of the Journal were unreadable because of problems with the electronic file format. The figures in the electronic edition were unaffected. The figures should have appeared as below. The Press sincerely regrets this error.
Boyle, John J.; Kume, Maiko; Wyczalkowski, Matthew A.; Taber, Larry A.; Pless, Robert B.; Xia, Younan; Genin, Guy M.; Thomopoulos, Stavros
2014-01-01
When mechanical factors underlie growth, development, disease or healing, they often function through local regions of tissue where deformation is highly concentrated. Current optical techniques to estimate deformation can lack precision and accuracy in such regions due to challenges in distinguishing a region of concentrated deformation from an error in displacement tracking. Here, we present a simple and general technique for improving the accuracy and precision of strain estimation and an associated technique for distinguishing a concentrated deformation from a tracking error. The strain estimation technique improves accuracy relative to other state-of-the-art algorithms by directly estimating strain fields without first estimating displacements, resulting in a very simple method and low computational cost. The technique for identifying local elevation of strain enables for the first time the successful identification of the onset and consequences of local strain concentrating features such as cracks and tears in a highly strained tissue. We apply these new techniques to demonstrate a novel hypothesis in prenatal wound healing. More generally, the analytical methods we have developed provide a simple tool for quantifying the appearance and magnitude of localized deformation from a series of digital images across a broad range of disciplines. PMID:25165601
Using an Integrative Approach To Teach Hebrew Grammar in an Elementary Immersion Class.
ERIC Educational Resources Information Center
Eckstein, Peter
The 12-week program described here was designed to improve a Hebrew language immersion class' ability to correctly use the simple past and present tenses. The target group was a sixth-grade class that achieved a 65.68 percent error-free rate on a pre-test; the project's objective was to achieve 90 percent error free tests, using student…
Serafini, A; Troiano, G; Franceschini, E; Calzoni, P; Nante, N; Scapellato, C
2016-01-01
Risk management is a set of actions to recognize or identify risks, errors and their consequences and to take steps to counter them. The aim of our study was to apply FMECA (Failure Mode, Effects and Criticality Analysis) to the Activated Protein C resistance (APCR) test in order to detect and avoid mistakes in this process. We created a team and the process was divided into phases and sub-phases. For each phase we calculated the probability of occurrence (O) of an error, the detectability score (D) and the severity (S). The product of these three indexes yields the RPN (Risk Priority Number). Phases with a higher RPN need corrective actions with a higher priority. The calculation of RPN showed that more than 20 activities have a score higher than 150 and need important preventive actions; 8 have a score between 100 and 150. Only 23 actions obtained an acceptable score lower than 100. This was one of the first applications of FMECA analysis to a laboratory process, and the first to apply this technique to the identification of factor V Leiden; our results confirm that FMECA can be a simple, powerful and useful tool in risk management that helps to quickly identify criticalities in a laboratory process.
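The FMECA bookkeeping described above reduces to multiplying the three indexes for each phase and ranking by the result. A minimal sketch, with invented activities and scores rather than the study's actual worksheet, is shown below.

```python
# Minimal FMECA sketch: RPN = Occurrence (O) x Detectability (D) x Severity (S).
# Phases with RPN > 150 need priority corrective actions; the activities and
# scores below are invented examples, not the APCR study data.
phases = [
    # (activity, O, D, S) -- each index typically scored on a 1-10 scale
    ("sample labelling",       4, 6, 8),
    ("reagent preparation",    3, 4, 5),
    ("instrument calibration", 2, 7, 9),
    ("result transcription",   5, 5, 7),
]

for activity, o, d, s in sorted(phases, key=lambda p: p[1] * p[2] * p[3], reverse=True):
    rpn = o * d * s
    action = "priority action" if rpn > 150 else ("review" if rpn >= 100 else "acceptable")
    print(f"{activity:24s} RPN = {rpn:4d} -> {action}")
```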
Du, Jiaying; Gerdtman, Christer; Lindén, Maria
2018-04-06
Motion sensors such as MEMS gyroscopes and accelerometers are characterized by a small size, light weight, high sensitivity, and low cost. They are used in an increasing number of applications. However, they are easily influenced by environmental effects such as temperature change, shock, and vibration. Thus, signal processing is essential for minimizing errors and improving signal quality and system stability. The aim of this work is to investigate and present a systematic review of different signal error reduction algorithms that are used for MEMS gyroscope-based motion analysis systems for human motion analysis or have the potential to be used in this area. A systematic search was performed with the search engines/databases of the ACM Digital Library, IEEE Xplore, PubMed, and Scopus. Sixteen papers that focus on MEMS gyroscope-related signal processing and were published in journals or conference proceedings in the past 10 years were found and fully reviewed. Seventeen algorithms were categorized into four main groups: Kalman-filter-based algorithms, adaptive-based algorithms, simple filter algorithms, and compensation-based algorithms. The algorithms were analyzed and presented along with their characteristics such as advantages, disadvantages, and time limitations. A user guide to the most suitable signal processing algorithms within this area is presented.
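As a minimal illustration of the first algorithm family in the review (Kalman-filter-based error reduction), the sketch below applies a scalar Kalman filter to a synthetic noisy angular-rate signal; the signal, noise variances, and tuning are invented and not taken from any of the reviewed papers.

```python
# Scalar Kalman filter sketch for smoothing a noisy MEMS gyroscope rate signal.
# Illustrative only: synthetic signal, hand-picked process/measurement noise.
import numpy as np

rng = np.random.default_rng(2)
t = np.linspace(0.0, 10.0, 1000)
true_rate = np.sin(0.5 * t)                        # "true" angular rate [rad/s]
meas = true_rate + rng.normal(0.0, 0.3, t.size)    # noisy gyro output

q, r = 1e-3, 0.3**2        # process and measurement noise variances (tuning)
x, p = 0.0, 1.0            # state estimate and its variance
est = np.empty_like(meas)

for k, z in enumerate(meas):
    p = p + q                      # predict (random-walk state model)
    K = p / (p + r)                # Kalman gain
    x = x + K * (z - x)            # update with measurement z
    p = (1.0 - K) * p
    est[k] = x

print("RMS error raw     :", np.sqrt(np.mean((meas - true_rate) ** 2)))
print("RMS error filtered:", np.sqrt(np.mean((est - true_rate) ** 2)))
```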
NASA Astrophysics Data System (ADS)
Duffy, Alan; Yates, Brian; Takacs, Peter
2012-09-01
The Optical Metrology Facility at the Canadian Light Source (CLS) has recently purchased MountainsMap surface analysis software from Digital Surf and we report here our experiences with this package and its usefulness as a tool for examining metrology data of synchrotron x-ray mirrors. The package has a number of operators that are useful for determining surface roughness and slope error, including compliance with ISO standards (viz. ISO 4287 and ISO 25178). The software is extensible with MATLAB scripts, either by loading an m-file or by a user-written script. This makes it possible to apply a custom operator to measurement data sets. Using this feature we have applied the simple six-line MATLAB code for the direct least-squares fitting of ellipses developed by Fitzgibbon et al. to investigate the residual slope error of elliptical mirrors upon the removal of the best-fit ellipse. The software includes support for many instruments (e.g., Zygo, MicroMap, etc.) and can import ASCII data (e.g., LTP data). The stitching module allows the user to assemble overlapping images and we report on our experiences with this feature applied to MicroMap surface roughness data. The power spectral density function was determined for the stitched and unstitched data and compared.
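The direct least-squares ellipse fit referred to above is a published algorithm that is widely reimplemented; a plain Python transliteration of the basic generalized-eigenvalue formulation (without the numerical refinements of later variants, and applied here to synthetic points rather than mirror profiles) might look like this:

```python
# Direct least-squares ellipse fit (Fitzgibbon-style generalized eigenproblem).
# Returns conic coefficients [a, b, c, d, e, f] for ax^2+bxy+cy^2+dx+ey+f=0.
# Applied to synthetic data here, not to mirror metrology profiles.
import numpy as np

def fit_ellipse(x, y):
    D = np.column_stack([x * x, x * y, y * y, x, y, np.ones_like(x)])
    S = D.T @ D                               # scatter matrix
    C = np.zeros((6, 6))                      # constraint 4ac - b^2 = 1
    C[0, 2] = C[2, 0] = 2.0
    C[1, 1] = -1.0
    eigval, eigvec = np.linalg.eig(np.linalg.solve(S, C))
    # The ellipse solution is the eigenvector with the single positive eigenvalue.
    k = np.argmax(eigval.real)
    return eigvec[:, k].real

# Synthetic noisy ellipse: semi-axes 3 and 1, rotated 30 degrees.
rng = np.random.default_rng(3)
t = np.linspace(0.0, 2.0 * np.pi, 200)
X, Y = 3.0 * np.cos(t), 1.0 * np.sin(t)
ang = np.deg2rad(30.0)
x = X * np.cos(ang) - Y * np.sin(ang) + rng.normal(0, 0.02, t.size)
y = X * np.sin(ang) + Y * np.cos(ang) + rng.normal(0, 0.02, t.size)

coef = fit_ellipse(x, y)
print("conic coefficients:", np.round(coef / np.linalg.norm(coef), 4))
```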
Schoenberg, Mike R; Osborn, Katie E; Mahone, E Mark; Feigon, Maia; Roth, Robert M; Pliskin, Neil H
2017-11-08
Errors in communication are a leading cause of medical errors. A potential source of error in communicating neuropsychological results is confusion in the qualitative descriptors used to describe standardized neuropsychological data. This study sought to evaluate the extent to which medical consumers of neuropsychological assessments believed that results/findings were not clearly communicated. In addition, preference data for a variety of qualitative descriptors commonly used to communicate normative neuropsychological test scores were obtained. Preference data were obtained for five qualitative descriptor systems as part of a larger 36-item internet-based survey of physician satisfaction with neuropsychological services. A new qualitative descriptor system termed the Simplified Qualitative Classification System (Q-Simple) was proposed to reduce the potential for communication errors using seven terms: very superior, superior, high average, average, low average, borderline, and abnormal/impaired. A non-random convenience sample of 605 clinicians identified from four United States academic medical centers from January 1, 2015 through January 7, 2016 was invited to participate. A total of 182 surveys were completed. A minority of clinicians (12.5%) indicated that neuropsychological study results were not clearly communicated. When communicating neuropsychological standardized scores, the two most preferred qualitative descriptor systems were that of Heaton and colleagues (Comprehensive norms for an extended Halstead-Reitan battery: Demographic corrections, research findings, and clinical applications. Odessa, TX: Psychological Assessment Resources) (26%) and the newly proposed Q-Simple system (22%). Initial findings highlight the need to improve and standardize communication of neuropsychological results. These data offer initial guidance for preferred terms to communicate test results and form a foundation for more standardized practice among neuropsychologists. © The Author 2017. Published by Oxford University Press. All rights reserved.
Jeyasingh, Suganthi; Veluchamy, Malathi
2017-05-01
Early diagnosis of breast cancer is essential to save the lives of patients. Usually, medical datasets include a large variety of data that can lead to confusion during diagnosis. The Knowledge Discovery on Database (KDD) process helps to improve efficiency. It requires elimination of inappropriate and repeated data from the dataset before final diagnosis. This can be done using any of the feature selection algorithms available in data mining. Feature selection is considered a vital step to increase classification accuracy. This paper proposes a Modified Bat Algorithm (MBA) for feature selection to eliminate irrelevant features from an original dataset. The Bat algorithm was modified using simple random sampling to select random instances from the dataset. Ranking was performed with the global best features to identify the predominant features in the dataset. The selected features are used to train a Random Forest (RF) classification algorithm. The MBA feature selection algorithm enhanced the classification accuracy of RF in identifying the occurrence of breast cancer. The Wisconsin Diagnosis Breast Cancer Dataset (WDBC) was used for estimating the performance of the proposed MBA feature selection algorithm. The proposed algorithm achieved better performance in terms of Kappa statistic, Matthews Correlation Coefficient, Precision, F-measure, Recall, Mean Absolute Error (MAE), Root Mean Square Error (RMSE), Relative Absolute Error (RAE) and Root Relative Squared Error (RRSE).
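The Modified Bat Algorithm itself is not specified in enough detail in the abstract to reproduce. As a minimal sketch of only the final classification stage described above, with scikit-learn's bundled copy of the WDBC data standing in for the authors' setup and no MBA feature selection applied, one might write:

```python
# Minimal sketch of the classification stage only: a Random Forest trained on
# the Wisconsin Diagnostic Breast Cancer data. The MBA feature-selection step
# is NOT implemented here; all features are used as-is.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, matthews_corrcoef

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_tr, y_tr)
pred = clf.predict(X_te)

print("accuracy:", round(accuracy_score(y_te, pred), 3))
print("Matthews correlation coefficient:", round(matthews_corrcoef(y_te, pred), 3))
```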
Fang, Cheng; Butler, David Lee
2013-05-01
In this paper, an innovative method for CMM (Coordinate Measuring Machine) self-calibration is proposed. In contrast to conventional CMM calibration, which relies heavily on a high-precision reference standard such as a laser interferometer, the proposed calibration method is based on a low-cost artefact fabricated from commercially available precision ball bearings. By optimizing the mathematical model and rearranging the data sampling positions, the experimental process and data analysis can be simplified. Mathematically, the number of samples can be minimized by eliminating the redundant equations among those configured by the experimental data array. The section lengths of the artefact are measured at arranged positions, with which an equation set can be configured to determine the measurement errors at the corresponding positions. With the proposed method, the equation set is short of one equation, which can be supplemented either by measuring the total length of the artefact with a higher-precision CMM or by calibrating the single-point error at the extreme position with a laser interferometer. In this paper, the latter is selected. With spline interpolation, the error compensation curve can be determined. To verify the proposed method, a simple calibration system was set up on a commercial CMM. Experimental results showed that, with the error compensation curve, the uncertainty of the measurement can be reduced to 50%.
NASA Technical Reports Server (NTRS)
Davidson, Frederic M.; Sun, Xiaoli; Field, Christopher T.
1995-01-01
Laser altimeters measure the time of flight of laser pulses to determine the range to the target. The simplest altimeter receiver consists of a photodetector followed by a leading edge detector. A time interval unit (TIU) measures the time from the transmitted laser pulse to the leading edge of the received pulse as it crosses a preset threshold. However, the ranging error of this simple detection scheme depends on the received pulse amplitude, the pulse shape, and the threshold. In practice, the pulse shape and amplitude are determined by the target characteristics, which have to be assumed unknown prior to the measurement. The ranging error can be reduced if one also measures the pulse width and uses the average of the leading and trailing edges (half pulse width) as the pulse arrival time. The ranging error then becomes independent of the received pulse amplitude and the pulse width as long as the pulse shape is symmetric. The pulse width also gives the slope of the target. The ultimate detection scheme is to digitize the received waveform and calculate the centroid as the pulse arrival time. The centroid detection always gives an unbiased measurement, even for asymmetric pulses. In this report, we analyze the laser altimeter ranging errors for these three detection schemes using the Mars Orbiter Laser Altimeter (MOLA) as an example.
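As a simple numerical illustration of the point made above, the sketch below compares the three arrival-time estimators on a synthetic asymmetric pulse as its amplitude changes with a fixed threshold; the pulse shape, sample interval, and threshold are invented example values, not MOLA parameters.

```python
# Illustrate amplitude dependence of three arrival-time estimators for a laser
# altimeter receiver: leading-edge threshold, half-pulse-width midpoint, and
# waveform centroid. Asymmetric synthetic pulse; all numbers are examples.
import numpy as np

dt = 0.5e-9                                   # sample interval [s]
t = np.arange(0.0, 400e-9, dt)
t0, rise, fall = 150e-9, 6e-9, 14e-9          # asymmetric pulse parameters

def estimates(amplitude, threshold=0.2):
    wave = amplitude * np.where(t < t0,
                                np.exp(-0.5 * ((t - t0) / rise) ** 2),
                                np.exp(-0.5 * ((t - t0) / fall) ** 2))
    above = np.nonzero(wave > threshold)[0]   # fixed (preset) threshold
    t_lead = t[above[0]]                      # leading-edge crossing
    t_half = 0.5 * (t[above[0]] + t[above[-1]])   # midpoint of crossings
    t_cent = np.sum(t * wave) / np.sum(wave)      # waveform centroid
    return t_lead, t_half, t_cent

# The centroid stays fixed as the amplitude changes; the threshold-based
# estimates shift, which is the amplitude-dependent ranging error.
for amp in (0.5, 1.0, 2.0):
    lead, half, cent = estimates(amp)
    print(f"amplitude {amp:3.1f}: lead {lead*1e9:6.2f} ns, "
          f"half {half*1e9:6.2f} ns, centroid {cent*1e9:6.2f} ns")
```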
Liquid crystal point diffraction interferometer. Ph.D. Thesis - Arizona Univ., 1995
NASA Technical Reports Server (NTRS)
Mercer, Carolyn R.
1995-01-01
A new instrument, the liquid crystal point diffraction interferometer (LCPDI), has been developed for the measurement of phase objects. This instrument maintains the compact, robust design of Linnik's point diffraction interferometer (PDI) and adds to it phase-stepping capability for quantitative interferogram analysis. The result is a compact, simple-to-align, environmentally insensitive interferometer capable of accurately measuring optical wavefronts with very high data density and with automated data reduction. This dissertation describes the theory of both the PDI and liquid crystal phase control. The design considerations for the LCPDI are presented, including manufacturing considerations. The operation and performance of the LCPDI are discussed, including sections regarding alignment, calibration, and amplitude modulation effects. The LCPDI is then demonstrated using two phase objects: a defocus difference wavefront and a temperature distribution across a heated chamber filled with silicone oil. The measured results are compared to theoretical or independently measured results and show excellent agreement. A computer simulation of the LCPDI was performed to verify the source of an observed periodic phase measurement error. The error stems from intensity variations caused by dye molecules rotating within the liquid crystal layer. Methods are discussed for reducing this error. Algorithms are presented which reduce this error; they are also useful for any phase-stepping interferometer that has unwanted intensity fluctuations, such as those caused by unregulated lasers.
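The dissertation's specific phase-stepping reduction is not given in the abstract. As a generic illustration of quantitative interferogram analysis with phase steps, the sketch below recovers a wavefront from four synthetic interferograms shifted by 90 degrees (the standard four-step formula), which is an assumption rather than the LCPDI's actual algorithm.

```python
# Generic four-step phase-shifting reconstruction: given interferograms with
# phase steps of 0, 90, 180 and 270 degrees, the wrapped phase follows from
# phi = atan2(I4 - I2, I1 - I3). Synthetic data; not the LCPDI's own algorithm.
import numpy as np

n = 256
y, x = np.mgrid[0:n, 0:n] / n - 0.5
phi_true = 2.0 * np.pi * 3.0 * (x**2 + y**2)        # defocus-like test wavefront

steps = [0.0, np.pi / 2, np.pi, 3 * np.pi / 2]
frames = [1.0 + 0.8 * np.cos(phi_true + d) for d in steps]   # I = A + B*cos(phi + d)

I1, I2, I3, I4 = frames
phi_wrapped = np.arctan2(I4 - I2, I1 - I3)

# Unwrap along one row as a quick check against the known wavefront.
row = n // 2
err = np.unwrap(phi_wrapped[row]) - phi_true[row]
err -= err.mean()                                    # remove constant offset
print("max residual along centre row:", np.abs(err).max(), "rad")
```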
Croft, Stephen; Burr, Thomas Lee; Favalli, Andrea; ...
2015-12-10
We report that the declared linear density of 238U and 235U in fresh low enriched uranium light water reactor fuel assemblies can be verified for nuclear safeguards purposes using a neutron coincidence counter collar in passive and active mode, respectively. The active mode calibration of the Uranium Neutron Collar – Light water reactor fuel (UNCL) instrument is normally performed using a non-linear fitting technique. The fitting technique relates the measured neutron coincidence rate (the predictor) to the linear density of 235U (the response) in order to estimate model parameters of the nonlinear Padé equation, which traditionally is used to model the calibration data. Alternatively, following a simple data transformation, the fitting can also be performed using standard linear fitting methods. This paper compares the performance of the nonlinear technique to that of the linear technique, using a range of possible error variance magnitudes in the measured neutron coincidence rate. We develop the required formalism and then apply the traditional (nonlinear) and alternative (linear) approaches to the same experimental and corresponding simulated representative datasets. We find that, in this context, because of the magnitude of the errors in the predictor, it is preferable not to transform to a linear model, and it is preferable not to adjust for the errors in the predictor when inferring the model parameters.
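The exact Padé form is not stated in the abstract; assuming a commonly used first-order Padé relation between a response $y$ and a predictor $x$, the data transformation that permits standard linear fitting is

$$ y = \frac{a\,x}{1 + b\,x} \quad\Longrightarrow\quad \frac{x}{y} = \frac{1}{a} + \frac{b}{a}\,x, $$

so ordinary least squares on the transformed pair $(x,\; x/y)$ recovers $a$ and $b$; the transformation also changes the error structure of the data, which is the effect the paper's error-variance comparison addresses.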
Comparative analysis of techniques for evaluating the effectiveness of aircraft computing systems
NASA Technical Reports Server (NTRS)
Hitt, E. F.; Bridgman, M. S.; Robinson, A. C.
1981-01-01
Performability analysis is a technique developed for evaluating the effectiveness of fault-tolerant computing systems in multiphase missions. Performability was evaluated for its accuracy, practical usefulness, and relative cost. The evaluation was performed by applying performability and the fault tree method to a set of sample problems ranging from simple to moderately complex. The problems involved as many as five outcomes, two to five mission phases, permanent faults, and some functional dependencies. Transient faults and software errors were not considered. A different analyst was responsible for each technique. Significantly more time and effort were required to learn performability analysis than the fault tree method. Performability is inherently as accurate as fault tree analysis. For the sample problems, fault trees were more practical and less time consuming to apply, while performability required less ingenuity and was more checkable. Performability offers some advantages for evaluating very complex problems.
Automatic network coupling analysis for dynamical systems based on detailed kinetic models.
Lebiedz, Dirk; Kammerer, Julia; Brandt-Pollmann, Ulrich
2005-10-01
We introduce a numerical complexity reduction method for the automatic identification and analysis of dynamic network decompositions in (bio)chemical kinetics based on error-controlled computation of a minimal model dimension represented by the number of (locally) active dynamical modes. Our algorithm exploits a generalized sensitivity analysis along state trajectories and subsequent singular value decomposition of sensitivity matrices for the identification of these dominant dynamical modes. It allows for a dynamic coupling analysis of (bio)chemical species in kinetic models that can be exploited for the piecewise computation of a minimal model on small time intervals and offers valuable functional insight into highly nonlinear reaction mechanisms and network dynamics. We present results for the identification of network decompositions in a simple oscillatory chemical reaction, time scale separation based model reduction in a Michaelis-Menten enzyme system and network decomposition of a detailed model for the oscillatory peroxidase-oxidase enzyme system.
Mesh refinement in finite element analysis by minimization of the stiffness matrix trace
NASA Technical Reports Server (NTRS)
Kittur, Madan G.; Huston, Ronald L.
1989-01-01
Most finite element packages provide means to generate meshes automatically. However, the user is usually confronted with the problem of not knowing whether the generated mesh is appropriate for the problem at hand. Since the accuracy of the finite element results is mesh dependent, mesh selection forms a very important step in the analysis. Indeed, in accurate analyses, meshes need to be refined or rezoned until the solution converges and the error falls below a predetermined tolerance. A-posteriori methods use error indicators, developed using interpolation and approximation theory, for mesh refinement. Some use other criteria, such as strain energy density variation and stress contours, to obtain near-optimal meshes. Although these methods are adaptive, they are expensive. Alternatively, the a-priori methods available until now use geometrical parameters, for example element aspect ratio, and are therefore not adaptive by nature. An adaptive a-priori method is developed. The criterion is that minimization of the trace of the stiffness matrix with respect to the nodal coordinates leads to minimization of the potential energy and, as a consequence, provides a good starting mesh. In a few examples the method is shown to provide the optimal mesh. The method is also shown to be relatively simple and amenable to the development of computer algorithms. When the procedure is used in conjunction with a-posteriori methods of grid refinement, it is shown that fewer refinement iterations and fewer degrees of freedom are required for convergence than when the procedure is not used. The mesh obtained is shown to have a uniform distribution of stiffness among the nodes and elements which, as a consequence, leads to a uniform error distribution. Thus the mesh obtained meets the optimality criterion of uniform error distribution.
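A toy version of the trace criterion, under the assumption of a one-dimensional bar with two rod elements and a single movable interior node (not one of the paper's examples): each rod element of length Le contributes 2EA/Le to the stiffness trace, so minimizing the total trace places the node at the midpoint, i.e., the uniform mesh.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# 1D bar of length L with one movable interior node at position x (illustrative properties).
E, A, L = 1.0, 1.0, 1.0

# Global stiffness trace = 2*E*A*(1/x + 1/(L - x)) for the two rod elements.
trace = lambda x: 2 * E * A * (1.0 / x + 1.0 / (L - x))

res = minimize_scalar(trace, bounds=(0.01, 0.99), method="bounded")
print("trace-minimizing node position:", res.x)   # ~0.5, the uniform mesh
```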
Szilágyi, N; Kovács, R; Kenyeres, I; Csikor, Zs
2013-01-01
Biofilm development in a fixed bed biofilm reactor system performing municipal wastewater treatment was monitored with the aim of accumulating colonization and maximum biofilm mass data usable in engineering practice for process design purposes. Initially, a 6 month experimental period was selected for the investigations, during which the biofilm formation and the performance of the reactors were monitored. The results were analyzed by two methods: for simple, steady-state process design purposes, the maximum biofilm mass on carriers versus influent load and a time constant of the biofilm growth were determined, whereas for design approaches using dynamic models, a simple biofilm mass prediction model including attachment and detachment mechanisms was selected and fitted to the experimental data. According to a detailed statistical analysis, the collected data did not allow us to determine both the time constant of biofilm growth and the maximum biofilm mass on carriers at the same time. The observed maximum biofilm mass could be determined with a reasonable error and ranged between 438 gTS/m² and 843 gTS/m² of carrier surface, depending on influent load and hydrodynamic conditions. The parallel analysis of the attachment-detachment model showed that the experimental data set allowed us to determine the attachment rate coefficient, which was in the range of 0.05-0.4 m d⁻¹, depending on influent load and hydrodynamic conditions.
Performability modeling based on real data: A case study
NASA Technical Reports Server (NTRS)
Hsueh, M. C.; Iyer, R. K.; Trivedi, K. S.
1988-01-01
Described is a measurement-based performability model based on error and resource usage data collected on a multiprocessor system. A method for identifying the model structure is introduced and the resulting model is validated against real data. Model development from the collection of raw data to the estimation of the expected reward is described. Both normal and error behavior of the system are characterized. The measured data show that the holding times in key operational and error states are not simple exponentials and that a semi-Markov process is necessary to model system behavior. A reward function, based on the service rate and the error rate in each state, is then defined in order to estimate the performability of the system and to depict the cost of different types of errors.
Performability modeling based on real data: A case study
NASA Technical Reports Server (NTRS)
Hsueh, M. C.; Iyer, R. K.; Trivedi, K. S.
1987-01-01
Described is a measurement-based performability model based on error and resource usage data collected on a multiprocessor system. A method for identifying the model structure is introduced and the resulting model is validated against real data. Model development from the collection of raw data to the estimation of the expected reward is described. Both normal and error behavior of the system are characterized. The measured data show that the holding times in key operational and error states are not simple exponentials and that a semi-Markov process is necessary to model the system behavior. A reward function, based on the service rate and the error rate in each state, is then defined in order to estimate the performability of the system and to depict the cost of different types of errors.
Optimal wavefront control for adaptive segmented mirrors
NASA Technical Reports Server (NTRS)
Downie, John D.; Goodman, Joseph W.
1989-01-01
A ground-based astronomical telescope with a segmented primary mirror will suffer image-degrading wavefront aberrations from at least two sources: (1) atmospheric turbulence and (2) segment misalignment or figure errors of the mirror itself. This paper describes the derivation of a mirror control feedback matrix that assumes the presence of both types of aberration and is optimum in the sense that it minimizes the mean-squared residual wavefront error. Assumptions of the statistical nature of the wavefront measurement errors, atmospheric phase aberrations, and segment misalignment errors are made in the process of derivation. Examples of the degree of correlation are presented for three different types of wavefront measurement data and compared to results of simple corrections.
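One standard way to obtain a feedback matrix that minimizes the mean-squared residual under assumed statistics is the linear minimum-mean-square-error (Wiener) estimator. The sketch below uses an invented influence matrix, invented covariances, and invented dimensions; it illustrates the general construction rather than the paper's specific derivation.

```python
import numpy as np

# Measurements m = A @ c_true + n, with prior covariance C_c for segment errors and
# noise covariance C_n for the wavefront sensor (all values hypothetical).
rng = np.random.default_rng(1)
n_meas, n_seg = 12, 6
A = rng.normal(size=(n_meas, n_seg))          # hypothetical measurement (influence) matrix
C_c = 0.5 * np.eye(n_seg)                     # prior covariance of segment misalignments
C_n = 0.1 * np.eye(n_meas)                    # wavefront measurement noise covariance

# MMSE (Wiener) feedback matrix: B = C_c A^T (A C_c A^T + C_n)^(-1)
B = C_c @ A.T @ np.linalg.inv(A @ C_c @ A.T + C_n)

c_true = rng.normal(0, np.sqrt(0.5), n_seg)
m = A @ c_true + rng.normal(0, np.sqrt(0.1), n_meas)
print("estimated segment errors:", B @ m)
```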
An improved VSS NLMS algorithm for active noise cancellation
NASA Astrophysics Data System (ADS)
Sun, Yunzhuo; Wang, Mingjiang; Han, Yufei; Zhang, Congyan
2017-08-01
In this paper, an improved variable step size NLMS algorithm is proposed. NLMS has a fast convergence rate and low steady-state error compared with other traditional adaptive filtering algorithms, but there is a trade-off between convergence speed and steady-state error that limits the performance of the NLMS algorithm. We propose a new variable step size NLMS algorithm that dynamically changes the step size according to the current error and the iteration count. The proposed algorithm has a simple formulation and easily set parameters, and it effectively resolves this trade-off in NLMS. The simulation results show that the proposed algorithm simultaneously achieves good tracking ability, a fast convergence rate and low steady-state error.
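A minimal sketch of a variable-step-size NLMS filter follows. The specific step-size rule here (driven by the instantaneous error magnitude and the iteration index) is an invented stand-in for the paper's rule, and the unknown path, signal lengths, and bounds are illustrative.

```python
import numpy as np

def vss_nlms(x, d, order=8, mu_max=1.0, mu_min=0.05, eps=1e-6):
    """NLMS with an illustrative variable step size based on the error and iteration index."""
    w = np.zeros(order)
    e = np.zeros(len(x))
    for n in range(order - 1, len(x)):
        u = x[n - order + 1:n + 1][::-1]          # most recent input samples, newest first
        e[n] = d[n] - w @ u
        # step size grows with |error| and decays slowly with the iteration count
        mu = mu_min + (mu_max - mu_min) * abs(e[n]) / ((1 + abs(e[n])) * np.sqrt(1 + n / 500))
        w += mu * e[n] * u / (eps + u @ u)        # normalized LMS update
    return w, e

rng = np.random.default_rng(2)
x = rng.normal(size=4000)                          # reference noise input
h = np.array([0.6, -0.3, 0.2, 0.1, 0.05, 0.0, 0.0, 0.0])   # hypothetical unknown path
d = np.convolve(x, h)[:len(x)] + 0.01 * rng.normal(size=len(x))
w, e = vss_nlms(x, d)
print("identified weights:", np.round(w, 3))
```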
In vivo measurement of mechanical properties of human long bone by using sonic sound
NASA Astrophysics Data System (ADS)
Hossain, M. Jayed; Rahman, M. Moshiur; Alam, Morshed
2016-07-01
Vibration analysis has been evaluated as a non-invasive technique for the in vivo assessment of bone mechanical properties. The relation between the resonant frequencies, long bone geometry and mechanical properties can be obtained by vibration analysis. In vivo measurements were performed on the human ulna, treated as a simple beam model, with an experimental technique and associated apparatus. The resonant frequency of the ulna was obtained by Fast Fourier Transform (FFT) analysis of the vibration response of a piezoelectric accelerometer. Both the elastic modulus and the speed of sound were inferred from the resonant frequency. Measurement error in the improved experimental setup was comparable with that of previous work. The in vivo determination of bone elastic response has potential value in screening programs for metabolic bone disease, early detection of osteoporosis and evaluation of the skeletal effects of various therapeutic modalities.
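For a uniform beam model, the first bending resonance fixes the ratio EI/(ρA), so an effective modulus and sound speed can be backed out once the geometry is idealized. Every number in the sketch below (frequency, length, density, cross-section, boundary condition) is an illustrative assumption, not a value from the paper.

```python
import numpy as np

# Idealized hollow circular beam standing in for the ulna (free-free boundary conditions assumed).
f1 = 400.0            # measured first resonant frequency, Hz (hypothetical)
L = 0.26              # beam length, m
rho = 1800.0          # effective density, kg/m^3
r_out, r_in = 0.008, 0.005
A = np.pi * (r_out**2 - r_in**2)            # cross-sectional area
I = np.pi * (r_out**4 - r_in**4) / 4.0      # second moment of area
lam1 = 4.730          # first free-free mode constant (beta1 * L)

# f1 = (lam1^2 / (2*pi*L^2)) * sqrt(E*I / (rho*A))  =>  solve for E
E = (2 * np.pi * f1 * L**2 / lam1**2) ** 2 * rho * A / I
c = np.sqrt(E / rho)  # longitudinal sound speed implied by E and rho
print(f"E ~ {E/1e9:.1f} GPa, sound speed ~ {c:.0f} m/s")
```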
Yago, Martín
2017-05-01
QC planning based on risk management concepts can reduce the probability of harming patients due to an undetected out-of-control error condition. It does this by selecting appropriate QC procedures to decrease the number of erroneous results reported. The selection can be easily made by using published nomograms for simple QC rules when the out-of-control condition results in increased systematic error. However, increases in random error also occur frequently and are difficult to detect, which can result in erroneously reported patient results. A statistical model was used to construct charts for the 1ks and X̄/χ² rules. The charts relate the increase in the number of unacceptable patient results reported due to an increase in random error to the capability of the measurement procedure. They thus allow for QC planning based on the risk of patient harm due to the reporting of erroneous results. 1ks rules are simple, all-around rules. Their ability to deal with increases in within-run imprecision is minimally affected by the possible presence of significant, stable, between-run imprecision. X̄/χ² rules perform better when the number of controls analyzed during each QC event is increased to improve QC performance. Using nomograms simplifies the selection of statistical QC procedures to limit the number of erroneous patient results reported due to an increase in analytical random error. The selection largely depends on the presence or absence of stable between-run imprecision. © 2017 American Association for Clinical Chemistry.
NASA Astrophysics Data System (ADS)
Laiolo, P.; Gabellani, S.; Campo, L.; Silvestro, F.; Delogu, F.; Rudari, R.; Pulvirenti, L.; Boni, G.; Fascetti, F.; Pierdicca, N.; Crapolicchio, R.; Hasenauer, S.; Puca, S.
2016-06-01
The reliable estimation of hydrological variables in space and time is of fundamental importance in operational hydrology to improve flood predictions and the description of the hydrological cycle. Nowadays, remotely sensed data offer a chance to improve hydrological models, especially in environments with scarce ground-based data. The aim of this work is to update the state variables of a physically based, distributed and continuous hydrological model using four different satellite-derived products (three soil moisture products and a land surface temperature measurement) and one soil moisture analysis, in order to evaluate, even with a non-optimal technique, the impact on the hydrological cycle. The experiments were carried out for a small catchment in the northern part of Italy for the period July 2012-June 2013. The products were pre-processed according to their own characteristics and then assimilated into the model using a simple nudging technique. The benefits to the model predictions of discharge were tested against observations. The analysis showed a general improvement of the model discharge predictions, even with a simple assimilation technique, for all the assimilation experiments; the Nash-Sutcliffe model efficiency coefficient increased from 0.6 (for the model without assimilation) to 0.7, and errors on discharge were reduced by up to 10%. An added value to the model was found in the rainy season (autumn): all the assimilation experiments reduced the errors by up to 20%. This demonstrates that the discharge predictions of a distributed hydrological model, which works at fine scale resolution in a small basin, can be improved by the assimilation of coarse-scale satellite-derived data.
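The two ingredients named in the abstract, a nudging update and the Nash-Sutcliffe efficiency, are simple to write down. The gain and the discharge series below are invented for illustration and are not the paper's catchment data.

```python
import numpy as np

def nudge(model_state, observation, gain=0.3):
    """Simple nudging: relax a model state (e.g., soil moisture) toward the satellite estimate."""
    return model_state + gain * (observation - model_state)

def nash_sutcliffe(sim, obs):
    """Nash-Sutcliffe model efficiency coefficient."""
    sim, obs = np.asarray(sim, float), np.asarray(obs, float)
    return 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

# Illustrative discharge series only (m^3/s), not observed data.
obs_q   = np.array([1.0, 1.4, 3.2, 2.5, 1.8, 1.2])
sim_q   = np.array([0.8, 1.1, 2.4, 2.9, 2.1, 1.5])   # open-loop model discharge
sim_q_a = np.array([0.9, 1.3, 2.9, 2.6, 1.9, 1.3])   # discharge after assimilation
print("NSE open loop:", round(nash_sutcliffe(sim_q, obs_q), 2),
      "NSE with assimilation:", round(nash_sutcliffe(sim_q_a, obs_q), 2))
```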
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yu, Juan; Beltran, Chris J., E-mail: beltran.chris@mayo.edu; Herman, Michael G.
Purpose: To quantitatively and systematically assess dosimetric effects induced by spot positioning error as a function of spot spacing (SS) on intensity-modulated proton therapy (IMPT) plan quality and to facilitate evaluation of safety tolerance limits on spot position. Methods: Spot position errors (PE) ranging from 1 to 2 mm were simulated. Simple plans were created on a water phantom, and IMPT plans were calculated on two pediatric patients with a brain tumor of 28 and 3 cc, respectively, using a commercial planning system. For the phantom, a uniform dose was delivered to targets located at different depths from 10 to 20 cm with various field sizes from 2² to 15² cm². Two nominal spot sizes, 4.0 and 6.6 mm of 1σ in water at isocenter, were used for treatment planning. The SS ranged from 0.5σ to 1.5σ, which is 2-6 mm for the small spot size and 3.3-9.9 mm for the large spot size. Various perturbation scenarios of a single spot error and of systematic and random multiple spot errors were studied. To quantify the dosimetric effects, percent dose error (PDE) depth profiles and the value of the percent dose error at the maximum dose difference (PDE[ΔDmax]) were used for evaluation. Results: A pair of hot and cold spots was created per spot shift. PDE[ΔDmax] is found to be a complex function of PE, SS, spot size, depth, and global spot distribution that can be well defined in simple models. For volumetric targets, PDE[ΔDmax] is not noticeably affected by the change of field size or target volume within the studied ranges. In general, reducing SS decreased the dose error. For the facility studied, given a single spot error with a PE of 1.2 mm and for both spot sizes, a SS of 1σ resulted in a 2% maximum dose error; a SS larger than 1.25σ substantially increased the dose error and its sensitivity to PE. A similar trend was observed for multiple spot errors (both systematic and random). Systematic PE can lead to noticeable hot spots along the field edges, which may be near critical structures. However, random PE showed minimal dose error. Conclusions: Dose error dependence on PE was quantitatively and systematically characterized, and an analytic tool was built to simulate systematic and random errors for patient-specific IMPT. This information facilitates the determination of facility-specific spot position error thresholds.
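The single-spot perturbation can be illustrated with a one-dimensional toy model: build a nominally uniform field from equally spaced Gaussian spots, shift one spot, and report the maximum percent dose error. The spot size, spacing, shift, and evaluation region below are illustrative choices, not the study's planning system or beam data.

```python
import numpy as np

sigma = 4.0                      # spot size (1 sigma), mm
spacing = 1.0 * sigma            # spot spacing SS
shift = 1.2                      # single spot position error, mm
x = np.linspace(-40, 40, 2001)
centers = np.arange(-30, 30 + 1e-9, spacing)

def dose(spot_centers):
    """Sum of identical Gaussian spots centred at spot_centers."""
    return sum(np.exp(-(x - c) ** 2 / (2 * sigma ** 2)) for c in spot_centers)

nominal = dose(centers)
perturbed_centers = centers.copy()
perturbed_centers[len(centers) // 2] += shift     # shift one spot near the field centre
perturbed = dose(perturbed_centers)

core = np.abs(x) < 20                             # evaluate away from the field edges
pde = 100 * (perturbed - nominal) / nominal.max() # percent dose error relative to max dose
print("max |percent dose error| in the core:", round(np.max(np.abs(pde[core])), 2))
```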
A strategy for reducing gross errors in the generalized Born models of implicit solvation
Onufriev, Alexey V.; Sigalov, Grigori
2011-01-01
The “canonical” generalized Born (GB) formula [C. Still, A. Tempczyk, R. C. Hawley, and T. Hendrickson, J. Am. Chem. Soc. 112, 6127 (1990)] is known to provide accurate estimates for total electrostatic solvation energies ΔGel of biomolecules if the corresponding effective Born radii are accurate. Here we show that even if the effective Born radii are perfectly accurate, the canonical formula still exhibits a significant number of gross errors (errors larger than 2kBT relative to the numerical Poisson equation reference) in pairwise interactions between individual atomic charges. Analysis of exact analytical solutions of the Poisson equation (PE) for several idealized nonspherical geometries reveals two distinct spatial modes of the PE solution; these modes are also found in realistic biomolecular shapes. The canonical GB Green function misses one of the two modes seen in the exact PE solution, which explains the observed gross errors. To address the problem and reduce gross errors of the GB formalism, we have used exact PE solutions for idealized nonspherical geometries to suggest an alternative analytical Green function to replace the canonical GB formula. The proposed functional form is mathematically nearly as simple as the original, but depends not only on the effective Born radii but also on their gradients, which allows for a better representation of the details of nonspherical molecular shapes. In particular, the proposed functional form captures both modes of the PE solution seen in nonspherical geometries. Tests on realistic biomolecular structures ranging from small peptides to medium size proteins show that the proposed functional form reduces gross pairwise errors in all cases, with the amount of reduction varying from more than an order of magnitude for small structures to a factor of 2 for the largest ones. PMID:21528947
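For reference, the canonical Still et al. pairwise form discussed above can be written in a few lines. The charges, separation, effective radii, and dielectric constants in the sketch are placeholders, and the authors' proposed gradient-dependent Green function is not reproduced here.

```python
import numpy as np

def f_gb(r, Ri, Rj):
    """Canonical generalized Born function of Still et al. for a pair of atoms."""
    return np.sqrt(r**2 + Ri * Rj * np.exp(-r**2 / (4.0 * Ri * Rj)))

def gb_pair_energy(qi, qj, r, Ri, Rj, eps_in=1.0, eps_out=78.5):
    """Pairwise GB solvation interaction (units with Coulomb's constant set to 1)."""
    return -(1.0 / eps_in - 1.0 / eps_out) * qi * qj / f_gb(r, Ri, Rj)

# Illustrative charges, separation (Angstrom), and effective Born radii.
print(gb_pair_energy(0.5, -0.5, 4.0, 2.0, 1.5))
```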
O'Connor, Annette M; Totton, Sarah C; Cullen, Jonah N; Ramezani, Mahmood; Kalivarapu, Vijay; Yuan, Chaohui; Gilbert, Stephen B
2018-01-01
Systematic reviews are increasingly using data from preclinical animal experiments in evidence networks. Further, there are ever-increasing efforts to automate aspects of the systematic review process. When assessing systematic bias and unit-of-analysis errors in preclinical experiments, it is critical to understand the study design elements employed by investigators. Such information can also inform prioritization of automation efforts that allow the identification of the most common issues. The aim of this study was to identify the design elements used by investigators in preclinical research in order to inform unique aspects of assessment of bias and error in preclinical research. Using 100 preclinical experiments each related to brain trauma and toxicology, we assessed the design elements described by the investigators. We evaluated the Methods and Materials sections of reports for descriptions of the following design elements: 1) use of a comparison group, 2) unit of allocation of the interventions to study units, 3) arrangement of factors, 4) method of factor allocation to study units, 5) concealment of the factors during allocation and outcome assessment, 6) independence of study units, and 7) nature of factors. Many investigators reported using design elements that suggested the potential for unit-of-analysis errors, i.e., descriptions of repeated measurements of the outcome (94/200) and descriptions of potential for pseudo-replication (99/200). Use of complex factor arrangements was common, with 112 experiments using some form of factorial design (complete, incomplete or split-plot-like). In the toxicology dataset, 20 of the 100 experiments appeared to use a split-plot-like design, although no investigators used this term. The common use of repeated measures and factorial designs means that understanding bias and error in preclinical experimental design might require greater expertise than is needed for simple parallel designs. Similarly, the use of complex factor arrangements creates novel challenges for accurate automation of data extraction and bias and error assessment in preclinical experiments.
Nickerson, Naomi H; Li, Ying; Benjamin, Simon C
2013-01-01
A scalable quantum computer could be built by networking together many simple processor cells, thus avoiding the need to create a single complex structure. The difficulty is that realistic quantum links are very error prone. A solution is for cells to repeatedly communicate with each other and so purify any imperfections; however, prior studies suggest that the cells themselves must then have prohibitively low internal error rates. Here we describe a method by which even error-prone cells can perform purification: groups of cells generate shared resource states, which then enable stabilization of topologically encoded data. Given a realistically noisy network (≥10% error rate), we find that our protocol can succeed provided that intra-cell error rates for initialisation, state manipulation and measurement are below 0.82%. This level of fidelity is already achievable in several laboratory systems.
Expectancy-related changes in firing of dopamine neurons depend on orbitofrontal cortex.
Takahashi, Yuji K; Roesch, Matthew R; Wilson, Robert C; Toreson, Kathy; O'Donnell, Patricio; Niv, Yael; Schoenbaum, Geoffrey
2011-10-30
The orbitofrontal cortex has been hypothesized to carry information regarding the value of expected rewards. Such information is essential for associative learning, which relies on comparisons between expected and obtained reward for generating instructive error signals. These error signals are thought to be conveyed by dopamine neurons. To test whether orbitofrontal cortex contributes to these error signals, we recorded from dopamine neurons in orbitofrontal-lesioned rats performing a reward learning task. Lesions caused marked changes in dopaminergic error signaling. However, the effect of lesions was not consistent with a simple loss of information regarding expected value. Instead, without orbitofrontal input, dopaminergic error signals failed to reflect internal information about the impending response that distinguished externally similar states leading to differently valued future rewards. These results are consistent with current conceptualizations of orbitofrontal cortex as supporting model-based behavior and suggest an unexpected role for this information in dopaminergic error signaling.
Backward-gazing method for measuring solar concentrators shape errors.
Coquand, Mathieu; Henault, François; Caliot, Cyril
2017-03-01
This paper describes a backward-gazing method for measuring the optomechanical errors of solar concentrating surfaces. It makes use of four cameras placed near the solar receiver and simultaneously recording images of the sun reflected by the optical surfaces. Simple data processing then allows reconstructing the slope and shape errors of the surfaces. The originality of the method is enforced by the use of generalized quad-cell formulas and approximate mathematical relations between the slope errors of the mirrors and their reflected wavefront in the case of sun-tracking heliostats at high-incidence angles. Numerical simulations demonstrate that the measurement accuracy is compliant with standard requirements of solar concentrating optics in the presence of noise or calibration errors. The method is suited to fine characterization of the optical and mechanical errors of heliostats and their facets, or to provide better control for real-time sun tracking.
A novel body frame based approach to aerospacecraft attitude tracking.
Ma, Carlos; Chen, Michael Z Q; Lam, James; Cheung, Kie Chung
2017-09-01
In the common practice of designing an attitude tracker for an aerospacecraft, one transforms the Newton-Euler rotation equations to obtain the dynamic equations of some chosen inertial frame based attitude metrics, such as Euler angles and unit quaternions. A Lyapunov approach is then used to design a controller which ensures asymptotic convergence of the attitude to the desired orientation. Although this design methodology is standard, it usually involves singularity-prone coordinate transformations which complicate the analysis process and controller design. A new, singularity-free error feedback method is proposed in this paper to provide simple and intuitive stability analysis and controller synthesis. This new body frame based method utilizes the concept of Euler axis and angle to generate the smallest error angles from a body frame perspective, without coordinate transformations. Global tracking convergence is illustrated with the use of a feedback linearizing PD tracker, a sliding mode controller, and a model reference adaptive controller. Experimental results are also obtained on a quadrotor platform with unknown system parameters and disturbances, using a boundary layer approximated sliding mode controller, a PIDD controller, and a unit sliding mode controller. Significant tracking quality is attained. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
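The body-frame error idea can be sketched numerically: form the rotation from the current to the desired attitude, extract its Euler axis and angle as a rotation vector, and feed that error vector to a PD law. The quaternions, rates, gains, and the particular composition convention below are illustrative assumptions, not the paper's controllers.

```python
import numpy as np
from scipy.spatial.transform import Rotation as R

# Current and desired attitudes (hypothetical values).
r_current = R.from_euler("xyz", [10, -5, 30], degrees=True)
r_desired = R.from_euler("xyz", [0, 0, 0], degrees=True)

# Error rotation expressed relative to the body frame: R_err = R_current^-1 * R_desired
# (one common convention); its rotation vector is Euler axis * angle, with no singular coordinates.
r_err = r_current.inv() * r_desired
err_vec = r_err.as_rotvec()            # rad

omega = np.array([0.01, -0.02, 0.0])   # body angular rate, rad/s (hypothetical measurement)
Kp, Kd = 2.0, 0.8
torque = Kp * err_vec - Kd * omega     # simple PD attitude-tracking command
print("axis-angle error:", err_vec, "commanded torque:", torque)
```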
Roentgen stereophotogrammetric analysis of metal-backed hemispherical cups without attached markers.
Valstar, E R; Spoor, C W; Nelissen, R G; Rozing, P M
1997-11-01
A method for the detection of micromotion of a metal-backed hemispherical acetabular cup is presented and tested. Unlike in conventional roentgen stereophotogrammetric analysis, the cup does not have to be marked with tantalum markers; the micromotion is calculated from the contours of the hemispherical part and the base circle of the cup. In this way, two rotations (tilt and anteversion) and the translations along the three cardinal axes are obtained. In a phantom study, the maximum error in the position of the cup's centre was 0.04 mm. The mean error in the orientation of the cup was 0.41 degree, with a 95% confidence interval of 0.28-0.54 degree. The in vivo accuracy was tested by repeated measurement of 21 radiographs from seven patients. The upper bound of the 95% tolerance interval for the translations along the transversal, longitudinal, and sagittal axes was 0.09, 0.07, and 0.34 mm, respectively; for the rotations, this upper bound was 0.39 degree. These results show that the new method, in which the position and orientation of a metal-backed hemispherical cup are calculated from its projected contours, is a simple and accurate alternative to attaching markers to the cup.
Not Normal: the uncertainties of scientific measurements
NASA Astrophysics Data System (ADS)
Bailey, David C.
2017-01-01
Judging the significance and reproducibility of quantitative research requires a good understanding of relevant uncertainties, but it is often unclear how well these have been evaluated and what they imply. Reported scientific uncertainties were studied by analysing 41 000 measurements of 3200 quantities from medicine, nuclear and particle physics, and interlaboratory comparisons ranging from chemistry to toxicology. Outliers are common, with 5σ disagreements up to five orders of magnitude more frequent than naively expected. Uncertainty-normalized differences between multiple measurements of the same quantity are consistent with heavy-tailed Student's t-distributions that are often almost Cauchy, far from a Gaussian Normal bell curve. Medical research uncertainties are generally as well evaluated as those in physics, but physics uncertainty improves more rapidly, making feasible simple significance criteria such as the 5σ discovery convention in particle physics. Contributions to measurement uncertainty from mistakes and unknown problems are not completely unpredictable. Such errors appear to have power-law distributions consistent with how designed complex systems fail, and how unknown systematic errors are constrained by researchers. This better understanding may help improve analysis and meta-analysis of data, and help scientists and the public have more realistic expectations of what scientific results imply.
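A quick way to see how much heavier a nearly Cauchy Student's t tail is than a Gaussian tail at 5σ is to compare the two survival functions directly. The degrees of freedom below are illustrative choices, not the fits reported in the paper.

```python
from scipy import stats

z = 5.0
p_normal = 2 * stats.norm.sf(z)          # two-sided Gaussian tail probability at 5 sigma
for nu in (2, 3, 10):                    # illustrative degrees of freedom
    p_t = 2 * stats.t.sf(z, df=nu)
    print(f"nu={nu}: P(|t|>5) / P(|z|>5) = {p_t / p_normal:.1e}")
```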
Performance Analysis of Local Ensemble Kalman Filter
NASA Astrophysics Data System (ADS)
Tong, Xin T.
2018-03-01
Ensemble Kalman filter (EnKF) is an important data assimilation method for high-dimensional geophysical systems. Efficient implementation of EnKF in practice often involves the localization technique, which updates each component using only information within a local radius. This paper rigorously analyzes the local EnKF (LEnKF) for linear systems and shows that the filter error can be dominated by the ensemble covariance, as long as (1) the sample size exceeds the logarithm of the state dimension and a constant that depends only on the local radius; (2) the forecast covariance matrix admits a stable localized structure. In particular, this indicates that with small system and observation noises, the filter will remain accurate over long times even if the initialization is not. The analysis also reveals an intrinsic inconsistency caused by the localization technique, and a stable localized structure is necessary to control this inconsistency. While this structure is usually taken for granted for the operation of LEnKF, it can also be rigorously proved for linear systems with sparse local observations and weak local interactions. These theoretical results are also validated by numerical implementation of LEnKF on a simple stochastic turbulence in two dynamical regimes.
Bankfull characteristics of Ohio streams and their relation to peak streamflows
Sherwood, James M.; Huitger, Carrie A.
2005-01-01
Regional curves, simple-regression equations, and multiple-regression equations were developed to estimate bankfull width, bankfull mean depth, bankfull cross-sectional area, and bankfull discharge of rural, unregulated streams in Ohio. The methods are based on geomorphic, basin, and flood-frequency data collected at 50 study sites on unregulated natural alluvial streams in Ohio, of which 40 sites are near streamflow-gaging stations. The regional curves and simple-regression equations relate the bankfull characteristics to drainage area. The multiple-regression equations relate the bankfull characteristics to drainage area, main-channel slope, main-channel elevation index, median bed-material particle size, bankfull cross-sectional area, and local-channel slope. Average standard errors of prediction for bankfull width equations range from 20.6 to 24.8 percent; for bankfull mean depth, 18.8 to 20.6 percent; for bankfull cross-sectional area, 25.4 to 30.6 percent; and for bankfull discharge, 27.0 to 78.7 percent. The simple-regression (drainage-area only) equations have the highest average standard errors of prediction. The multiple-regression equations in which the explanatory variables included drainage area, main-channel slope, main-channel elevation index, median bed-material particle size, bankfull cross-sectional area, and local-channel slope have the lowest average standard errors of prediction. Field surveys were done at each of the 50 study sites to collect the geomorphic data. Bankfull indicators were identified and evaluated, cross-section and longitudinal profiles were surveyed, and bed- and bank-material were sampled. Field data were analyzed to determine various geomorphic characteristics such as bankfull width, bankfull mean depth, bankfull cross-sectional area, bankfull discharge, streambed slope, and bed- and bank-material particle-size distribution. The various geomorphic characteristics were analyzed by means of a combination of graphical and statistical techniques. The logarithms of the annual peak discharges for the 40 gaged study sites were fit by a Pearson Type III frequency distribution to develop flood-peak discharges associated with recurrence intervals of 2, 5, 10, 25, 50, and 100 years. The peak-frequency data were related to geomorphic, basin, and climatic variables by multiple-regression analysis. Simple-regression equations were developed to estimate 2-, 5-, 10-, 25-, 50-, and 100-year flood-peak discharges of rural, unregulated streams in Ohio from bankfull channel cross-sectional area. The average standard errors of prediction are 31.6, 32.6, 35.9, 41.5, 46.2, and 51.2 percent, respectively. The study and methods developed are intended to improve understanding of the relations between geomorphic, basin, and flood characteristics of streams in Ohio and to aid in the design of hydraulic structures, such as culverts and bridges, where stability of the stream and structure is an important element of the design criteria. The study was done in cooperation with the Ohio Department of Transportation and the U.S. Department of Transportation, Federal Highway Administration.
Polarimeter calibration error gets far out of control
NASA Astrophysics Data System (ADS)
Chipman, Russell A.
2015-09-01
This is a sad story about a polarization calibration error gone amok. A simple laboratory mistake was mistaken for a new phenomenon. Aggressive management did their job and sold the flawed idea very effectively, and substantial funding followed. Questions were raised, and a Government lab tried but could not recreate the breakthrough. The results were unpleasant, and the field of infrared polarimetry developed a bad reputation for several years.
How Short and Light Can a Simple Pendulum Be for Classroom Use?
ERIC Educational Resources Information Center
Oliveira, V.
2014-01-01
We compare the period of oscillation of an ideal simple pendulum with the period of a more "real" pendulum constituted of a rigid sphere and a rigid slender rod. We determine the relative error in the calculation of the local acceleration of gravity if the period of the ideal pendulum is used instead of the period of this real pendulum.
How short and light can a simple pendulum be for classroom use?
NASA Astrophysics Data System (ADS)
Oliveira, V.
2014-07-01
We compare the period of oscillation of an ideal simple pendulum with the period of a more ‘real’ pendulum constituted of a rigid sphere and a rigid slender rod. We determine the relative error in the calculation of the local acceleration of gravity if the period of the ideal pendulum is used instead of the period of this real pendulum.
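A numerical version of this comparison is easy to set up: compute the physical pendulum's period from its moment of inertia and centre of mass, then see what value of g the ideal simple-pendulum formula would return. All dimensions and masses below are invented classroom-scale values, not those analysed in the paper.

```python
import numpy as np

g_true = 9.81
M, R = 0.05, 0.02        # rigid sphere: 50 g, 2 cm radius (illustrative)
m, l = 0.01, 0.30        # slender rod: 10 g, 30 cm (illustrative)

d_sphere = l + R                                    # pivot to sphere centre
I = (2/5) * M * R**2 + M * d_sphere**2 + (1/3) * m * l**2   # moment of inertia about the pivot
d_cm = (M * d_sphere + m * l / 2) / (M + m)         # centre of mass below the pivot
T = 2 * np.pi * np.sqrt(I / ((M + m) * g_true * d_cm))      # physical pendulum period

L_eff = l + R                                       # length used in the ideal formula
g_est = 4 * np.pi**2 * L_eff / T**2                 # g inferred from the simple-pendulum model
print("relative error in g: {:.2%}".format((g_est - g_true) / g_true))
```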
Measuring the Accuracy of Simple Evolving Connectionist System with Varying Distance Formulas
NASA Astrophysics Data System (ADS)
Al-Khowarizmi; Sitompul, O. S.; Suherman; Nababan, E. B.
2017-12-01
Simple Evolving Connectionist System (SECoS) is a minimal implementation of Evolving Connectionist Systems (ECoS) in artificial neural networks. The three-layer network architecture of SECoS can be built based on the given input. In this study, the activation value for the SECoS learning process, which is commonly calculated using the normalized Hamming distance, is also calculated using the normalized Manhattan distance and the normalized Euclidean distance in order to compare the smallest error value and the best learning rate obtained. The accuracy of the measurements produced by the three distance formulas is calculated using the mean absolute percentage error. In the training phase, with several parameters such as sensitivity threshold, error threshold, first learning rate, and second learning rate, it was found that the normalized Euclidean distance is more accurate than both the normalized Hamming distance and the normalized Manhattan distance. In the case of beta fibrinogen gene -455 G/A polymorphism patients used as training data, the highest mean absolute percentage error value is obtained with the normalized Manhattan distance compared to the normalized Euclidean distance and the normalized Hamming distance. However, the differences are so small that it can be concluded that the three distance formulas used in SECoS do not have a significant effect on the accuracy of the training results.
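The three distances and the accuracy measure can be sketched as below. The normalizations shown are one plausible reading of the names used in the abstract (the exact formulas in the paper may differ), and the vectors are invented.

```python
import numpy as np

def norm_hamming(a, b):
    # count of mismatching components / vector length (best suited to binary or discretized inputs)
    return np.sum(a != b) / a.size

def norm_manhattan(a, b):
    return np.sum(np.abs(a - b)) / a.size

def norm_euclidean(a, b):
    return np.sqrt(np.sum((a - b) ** 2)) / a.size

def mape(actual, predicted):
    """Mean absolute percentage error."""
    actual, predicted = np.asarray(actual, float), np.asarray(predicted, float)
    return 100 * np.mean(np.abs((actual - predicted) / actual))

x = np.array([0.2, 0.7, 0.1, 0.9])     # illustrative normalized input vector
w = np.array([0.3, 0.6, 0.1, 0.8])     # an evolving-layer weight vector
print(norm_hamming(x, w), norm_manhattan(x, w), norm_euclidean(x, w))
print("MAPE:", mape([1.0, 2.0, 3.0], [1.1, 1.9, 3.3]))
```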
Orhan, A Emin; Ma, Wei Ji
2017-07-26
Animals perform near-optimal probabilistic inference in a wide range of psychophysical tasks. Probabilistic inference requires trial-to-trial representation of the uncertainties associated with task variables and subsequent use of this representation. Previous work has implemented such computations using neural networks with hand-crafted and task-dependent operations. We show that generic neural networks trained with a simple error-based learning rule perform near-optimal probabilistic inference in nine common psychophysical tasks. In a probabilistic categorization task, error-based learning in a generic network simultaneously explains a monkey's learning curve and the evolution of qualitative aspects of its choice behavior. In all tasks, the number of neurons required for a given level of performance grows sublinearly with the input population size, a substantial improvement on previous implementations of probabilistic inference. The trained networks develop a novel sparsity-based probabilistic population code. Our results suggest that probabilistic inference emerges naturally in generic neural networks trained with error-based learning rules. Behavioural tasks often require probability distributions to be inferred about task-specific variables. Here, the authors demonstrate that generic neural networks can be trained using a simple error-based learning rule to perform such probabilistic computations efficiently without any need for task-specific operations.
NASA Technical Reports Server (NTRS)
Mace, Gerald G.; Ackerman, Thomas P.
1996-01-01
A topic of current practical interest is the accurate characterization of the synoptic-scale atmospheric state from wind profiler and radiosonde network observations. We have examined several related and commonly applied objective analysis techniques for performing this characterization and considered their associated level of uncertainty both from a theoretical and a practical standpoint. A case study is presented where two wind profiler triangles with nearly identical centroids and no common vertices produced strikingly different results during a 43-h period. We conclude that the uncertainty in objectively analyzed quantities can easily be as large as the expected synoptic-scale signal. In order to quantify the statistical precision of the algorithms, we conducted a realistic observing system simulation experiment using output from a mesoscale model. A simple parameterization for estimating the uncertainty in horizontal gradient quantities in terms of known errors in the objectively analyzed wind components and temperature is developed from these results.
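The core of such objective analyses is the estimation of horizontal gradients from a small number of stations, for example by fitting a linear wind field to a profiler triangle and forming divergence and vorticity from the fitted slopes. The station coordinates and winds below are invented, and the uncertainty parameterization developed in the paper is not reproduced.

```python
import numpy as np

# Fit u(x, y) = a0 + a1*x + a2*y (and likewise v) to three stations; with exactly three
# stations the fit is an exact solve.
xy = np.array([[0.0, 0.0], [100e3, 10e3], [40e3, 90e3]])   # station coordinates, m (hypothetical)
u = np.array([5.0, 7.5, 6.0])                              # zonal wind, m/s
v = np.array([1.0, 0.0, 2.5])                              # meridional wind, m/s

G = np.column_stack([np.ones(3), xy])                      # design matrix [1, x, y]
au = np.linalg.solve(G, u)
av = np.linalg.solve(G, v)
divergence = au[1] + av[2]                                 # du/dx + dv/dy
vorticity = av[1] - au[2]                                  # dv/dx - du/dy
print(f"divergence {divergence:.2e} 1/s, vorticity {vorticity:.2e} 1/s")
```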
NASA Technical Reports Server (NTRS)
Varnai, Tamas; Marshak, Alexander
2000-01-01
This paper presents a simple approach to estimate the uncertainties that arise in satellite retrievals of cloud optical depth when the retrievals use one-dimensional radiative transfer theory for heterogeneous clouds that have variations in all three dimensions. For the first time, preliminary error bounds are set to estimate the uncertainty of cloud optical depth retrievals. These estimates can help us better understand the nature of uncertainties that three-dimensional effects can introduce into retrievals of this important product of the MODIS instrument. The probability distribution of resulting retrieval errors is examined through theoretical simulations of shortwave cloud reflection for a wide variety of cloud fields. The results are used to illustrate how retrieval uncertainties change with observable and known parameters, such as solar elevation or cloud brightness. Furthermore, the results indicate that a tendency observed in an earlier study, clouds appearing thicker for oblique sun, is indeed caused by three-dimensional radiative effects.
Error Reduction Program. [combustor performance evaluation codes
NASA Technical Reports Server (NTRS)
Syed, S. A.; Chiappetta, L. M.; Gosman, A. D.
1985-01-01
The details of a study to select, incorporate and evaluate the best available finite difference scheme to reduce numerical error in combustor performance evaluation codes are described. The combustor performance computer programs chosen were the two-dimensional and three-dimensional versions of Pratt & Whitney's TEACH code. The criteria used to select schemes required that the difference equations mirror the properties of the governing differential equation, be more accurate than the current hybrid difference scheme, be stable and economical, be compatible with TEACH codes, use only modest amounts of additional storage, and be relatively simple. The methods of assessment used in the selection process consisted of examination of the difference equation, evaluation of the properties of the coefficient matrix, Taylor series analysis, and performance on model problems. Five schemes from the literature and three schemes developed during the course of the study were evaluated. This effort resulted in the incorporation of a scheme in 3D-TEACH which is usually more accurate than the hybrid differencing method and never less accurate.
Estimates of runoff using water-balance and atmospheric general circulation models
Wolock, D.M.; McCabe, G.J.
1999-01-01
The effects of potential climate change on mean annual runoff in the conterminous United States (U.S.) are examined using a simple water-balance model and output from two atmospheric general circulation models (GCMs). The two GCMs are from the Canadian Centre for Climate Prediction and Analysis (CCC) and the Hadley Centre for Climate Prediction and Research (HAD). In general, the CCC GCM climate results in decreases in runoff for the conterminous U.S., and the HAD GCM climate produces increases in runoff. These estimated changes in runoff primarily are the result of estimated changes in precipitation. The changes in mean annual runoff, however, mostly are smaller than the decade-to-decade variability in GCM-based mean annual runoff and errors in GCM-based runoff. The differences in simulated runoff between the two GCMs, together with decade-to-decade variability and errors in GCM-based runoff, cause the estimates of changes in runoff to be uncertain and unreliable.
Error control techniques for satellite and space communications
NASA Technical Reports Server (NTRS)
Costello, Daniel J., Jr.
1990-01-01
An expurgated upper bound on the event error probability of trellis coded modulation is presented. This bound is used to derive a lower bound on the minimum achievable free Euclidean distance d_free of trellis codes. It is shown that the dominant parameters for both bounds, the expurgated error exponent and the asymptotic d_free growth rate, respectively, can be obtained from the cutoff rate R_0 of the transmission channel by a simple geometric construction, making R_0 the central parameter for finding good trellis codes. Several constellations are optimized with respect to the bounds.
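For a memoryless AWGN channel with equiprobable signal points, the cutoff rate has a closed form that is easy to evaluate for any constellation. The sketch below computes it for an assumed unit-energy 8-PSK constellation and an assumed noise level; neither is taken from the report.

```python
import numpy as np

def cutoff_rate(points, N0=0.5):
    """Cutoff rate R_0 (bits/symbol) for an equiprobable constellation on an AWGN channel:
    R_0 = -log2( (1/M^2) * sum_ij exp(-|s_i - s_j|^2 / (4*N0)) )."""
    M = len(points)
    d2 = np.abs(points[:, None] - points[None, :]) ** 2
    return -np.log2(np.sum(np.exp(-d2 / (4 * N0))) / M**2)

M = 8
psk = np.exp(2j * np.pi * np.arange(M) / M)   # unit-energy 8-PSK points in the complex plane
print("R_0 for 8-PSK:", round(cutoff_rate(psk), 3), "bits/symbol")
```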
Latent error detection: A golden two hours for detection.
Saward, Justin R E; Stanton, Neville A
2017-03-01
Undetected error in safety critical contexts generates a latent condition that can contribute to a future safety failure. The detection of latent errors post-task completion was observed in naval air engineers using a diary to record work-related latent error detection (LED) events. A systems view is combined with multi-process theories to explore the sociotechnical factors associated with LED. Perception of cues in different environments facilitates successful LED, for which the deliberate review of past tasks within two hours of the error occurring, and whilst remaining in the same or a similar sociotechnical environment to that in which the error occurred, appears most effective. Identified ergonomic interventions offer potential mitigation for latent errors, particularly in simple everyday habitual tasks. It is thought safety critical organisations should look to engineer further resilience through the application of LED techniques that engage with system cues across the entire sociotechnical environment, rather than relying on consistent human performance. Crown Copyright © 2016. Published by Elsevier Ltd. All rights reserved.
Procedural error monitoring and smart checklists
NASA Technical Reports Server (NTRS)
Palmer, Everett
1990-01-01
Human beings make and usually detect errors routinely. The same mental processes that allow humans to cope with novel problems can also lead to error. Bill Rouse has argued that errors are not inherently bad but their consequences may be. He proposes the development of error-tolerant systems that detect errors and take steps to prevent the consequences of the error from occurring. Research should be done on self and automatic detection of random and unanticipated errors. For self detection, displays should be developed that make the consequences of errors immediately apparent. For example, electronic map displays graphically show the consequences of horizontal flight plan entry errors. Vertical profile displays should be developed to make apparent vertical flight planning errors. Other concepts such as energy circles could also help the crew detect gross flight planning errors. For automatic detection, systems should be developed that can track pilot activity, infer pilot intent and inform the crew of potential errors before their consequences are realized. Systems that perform a reasonableness check on flight plan modifications by checking route length and magnitude of course changes are simple examples. Another example would be a system that checked the aircraft's planned altitude against a data base of world terrain elevations. Information is given in viewgraph form.
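As a toy illustration of the kind of reasonableness check described above, the sketch below flags a flight-plan modification whose route length or course changes look implausible. The thresholds, waypoint coordinates, and flat x-y geometry are invented for illustration and are not any real system's logic.

```python
import math

def leg_length(a, b):
    return math.dist(a, b)

def course(a, b):
    """Bearing from point a to point b, degrees clockwise from north (points are (east, north))."""
    return math.degrees(math.atan2(b[0] - a[0], b[1] - a[1])) % 360

def check_modification(old_route, new_route, max_stretch=1.3, max_turn_deg=90):
    old_len = sum(leg_length(a, b) for a, b in zip(old_route, old_route[1:]))
    new_len = sum(leg_length(a, b) for a, b in zip(new_route, new_route[1:]))
    warnings = []
    if new_len > max_stretch * old_len:
        warnings.append(f"route length grew by {100 * (new_len / old_len - 1):.0f}%")
    for p, q, r in zip(new_route, new_route[1:], new_route[2:]):
        turn = abs((course(q, r) - course(p, q) + 180) % 360 - 180)
        if turn > max_turn_deg:
            warnings.append(f"{turn:.0f} deg course change at {q}")
    return warnings

old = [(0, 0), (0, 50), (0, 100)]          # original straight route (arbitrary units)
new = [(0, 0), (60, 50), (0, 100)]         # waypoint entry error pulls the route sideways
print(check_modification(old, new))
```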
DOE Office of Scientific and Technical Information (OSTI.GOV)
Elliott, C.J.; McVey, B.; Quimby, D.C.
The level of field errors in an FEL is an important determinant of its performance. We have computed the 3D performance of a large laser subsystem subjected to field errors of various types. These calculations have been guided by simple models such as SWOOP. The technique of choice is utilization of the FELEX free electron laser code, which now possesses extensive engineering capabilities. Modeling includes the ability to establish tolerances of various types: fast and slow scale field bowing, field error level, beam position monitor error level, gap errors, defocusing errors, energy slew, displacement and pointing errors. Many effects of these errors on relative gain and relative power extraction are displayed and are the essential elements of determining an error budget. The random errors also depend on the particular random number seed used in the calculation. The simultaneous display of the performance versus error level for cases with multiple seeds illustrates the variations attributable to the stochasticity of this model. All these errors are evaluated numerically for comprehensive engineering of the system. In particular, gap errors are found to place requirements beyond mechanical tolerances of ±25 μm, and these may be ameliorated by a procedure utilizing direct measurement of the magnetic fields at assembly time. 4 refs., 12 figs.
NASA Astrophysics Data System (ADS)
Ramadhan, Rifqi; Prabowo, Rian Gilang; Aprilliyani, Ria; Basari
2018-02-01
Victims of acute cancer and tumors are growing in number each year, and cancer has become one of the leading causes of human death in the world. Cancer or tumor tissue cells are cells that grow abnormally and take over and damage the surrounding tissues. Cancers or tumors do not have definite symptoms in their early stages and can even attack tissues deep inside the body, which makes them unidentifiable by visual human observation. Therefore, an early detection system that is cheap, quick, simple, and portable is essentially required to anticipate the further development of cancer or tumor. Among all of the modalities, microwave imaging is considered to be a cheaper, simpler, and more portable method. There are at least two simple image reconstruction algorithms, i.e. Filtered Back Projection (FBP) and the Algebraic Reconstruction Technique (ART), which have been adopted in some common modalities. In this paper, both algorithms are compared by reconstructing the image of an artificial tissue model (i.e. phantom) that has two different dielectric distributions. We addressed two performance comparisons, namely qualitative and quantitative analysis. Qualitative analysis includes the smoothness of the image and the success in distinguishing dielectric differences when observing the image by eye. Quantitative analysis includes histogram, Structural Similarity Index (SSIM), Mean Squared Error (MSE), and Peak Signal-to-Noise Ratio (PSNR) calculations. As a result, the quantitative parameters of FBP may show better values than those of ART; however, ART is likely more capable of distinguishing two different dielectric values than FBP, owing to the higher contrast and wider grayscale distribution in ART.
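The ART side of the comparison is essentially the Kaczmarz iteration: sweep the ray-sum equations one at a time and project the current image estimate onto each. The tiny 2x2 "phantom", ray geometry, relaxation factor, and iteration count below are invented to keep the sketch self-contained; they are not the paper's phantom or measurement setup.

```python
import numpy as np

def art(A, b, iterations=50, relax=0.5):
    """Algebraic Reconstruction Technique (Kaczmarz): row-by-row corrections of A @ x = b."""
    x = np.zeros(A.shape[1])
    for _ in range(iterations):
        for i in range(A.shape[0]):
            a = A[i]
            x += relax * (b[i] - a @ x) / (a @ a) * a
    return x

# 2x2 phantom flattened as [p00, p01, p10, p11], with two dielectric levels.
phantom = np.array([1.0, 1.0, 1.0, 3.0])
A = np.array([[1, 1, 0, 0],    # row sums
              [0, 0, 1, 1],
              [1, 0, 1, 0],    # column sums
              [0, 1, 0, 1],
              [1, 0, 0, 1],    # diagonal sums
              [0, 1, 1, 0]], float)
b = A @ phantom                # hypothetical noiseless "ray" measurements
print("ART reconstruction:", np.round(art(A, b), 2))
```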
Comparison of rigorous and simple vibrational models for the CO2 gasdynamic laser
NASA Technical Reports Server (NTRS)
Monson, D. J.
1977-01-01
The accuracy of a simple vibrational model for computing the gain in a CO2 gasdynamic laser is assessed by comparing results computed from it with results computed from a rigorous vibrational model. The simple model is that of Anderson et al. (1971), in which the vibrational kinetics are modeled by grouping the nonequilibrium vibrational degrees of freedom into two modes, to each of which there corresponds an equation describing vibrational relaxation. The two models agree fairly well in the computed gain at low temperatures, but the simple model predicts too high a gain at the higher temperatures of current interest. The sources of error contributing to the overestimation given by the simple model are determined by examining the simplified relaxation equations.
The Most Common Geometric and Semantic Errors in CityGML Datasets
NASA Astrophysics Data System (ADS)
Biljecki, F.; Ledoux, H.; Du, X.; Stoter, J.; Soon, K. H.; Khoo, V. H. S.
2016-10-01
To be used as input in most simulation and modelling software, 3D city models should be geometrically and topologically valid, and semantically rich. We investigate in this paper what is the quality of currently available CityGML datasets, i.e. we validate the geometry/topology of the 3D primitives (Solid and MultiSurface), and we validate whether the semantics of the boundary surfaces of buildings is correct or not. We have analysed all the CityGML datasets we could find, both from portals of cities and on different websites, plus a few that were made available to us. We have thus validated 40M surfaces in 16M 3D primitives and 3.6M buildings found in 37 CityGML datasets originating from 9 countries, and produced by several companies with diverse software and acquisition techniques. The results indicate that CityGML datasets without errors are rare, and those that are nearly valid are mostly simple LOD1 models. We report on the most common errors we have found, and analyse them. One main observation is that many of these errors could be automatically fixed or prevented with simple modifications to the modelling software. Our principal aim is to highlight the most common errors so that these are not repeated in the future. We hope that our paper and the open-source software we have developed will help raise awareness for data quality among data providers and 3D GIS software producers.
NASA Technical Reports Server (NTRS)
Goodrich, John W.
2017-01-01
This paper presents results from numerical experiments for controlling the error caused by a damping layer boundary treatment when simulating the propagation of an acoustic signal from a continuous pressure source. The computations are with the 2D Linearized Euler Equations (LEE) for both a uniform mean flow and a steady parallel jet. The numerical experiments are with algorithms that are third, fifth, seventh and ninth order accurate in space and time. The numerical domain is enclosed in a damping layer boundary treatment. The damping is implemented in a time accurate manner, with simple polynomial damping profiles of second, fourth, sixth and eighth power. At the outer boundaries of the damping layer the propagating solution is uniformly set to zero. The complete boundary treatment is remarkably simple and intrinsically independent of the dimension of the spatial domain. The reported results show the relative effect on the error from the boundary treatment when varying the damping layer width, damping profile power, damping amplitude, propagation time, grid resolution and algorithm order. The issue that is being addressed is not the accuracy of the numerical solution when compared to a mathematical solution, but the effect of the complete boundary treatment on the numerical solution, and to what degree the error in the numerical solution from the complete boundary treatment can be controlled. We report maximum relative absolute errors from just the boundary treatment that range from O(10⁻²) to O(10⁻⁷).
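The boundary treatment's structure (a polynomial damping profile of selectable power and width, applied in a time-accurate way, with the solution zeroed at the outer edge) can be sketched in one dimension. The profile parameters, stand-in signal, and time step below are illustrative; the LEE solver itself is not reproduced.

```python
import numpy as np

def damping_profile(x, layer_start, layer_width, amplitude=1.0, power=4):
    """Polynomial damping coefficient: zero inside the domain, rising to 'amplitude' at the outer edge."""
    s = np.clip((x - layer_start) / layer_width, 0.0, 1.0)
    return amplitude * s ** power

x = np.linspace(0, 10, 501)
sigma = damping_profile(x, layer_start=8.0, layer_width=2.0, amplitude=2.0, power=4)

u = np.sin(2 * np.pi * x / 2.5)          # stand-in for a propagating acoustic signal
dt = 0.05
u_damped = u * np.exp(-sigma * dt)       # one time-accurate damping step for du/dt = -sigma*u
u_damped[-1] = 0.0                       # outer boundary of the layer set uniformly to zero
print("max damping factor applied this step:", round(float(np.exp(-sigma.max() * dt)), 3))
```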
Online automatic tuning and control for fed-batch cultivation
van Straten, Gerrit; van der Pol, Leo A.; van Boxtel, Anton J. B.
2007-01-01
Performance of controllers applied in biotechnological production is often below expectation. Online automatic tuning has the capability to improve control performance by adjusting control parameters. This work presents automatic tuning approaches for model reference specific growth rate control during fed-batch cultivation. The approaches are direct methods that use the error between observed specific growth rate and its set point; systematic perturbations of the cultivation are not necessary. Two automatic tuning methods proved to be efficient, in which the adaptation rate is based on a combination of the error, squared error and integral error. These methods are relatively simple and robust against disturbances, parameter uncertainties, and initialization errors. Application of the specific growth rate controller yields a stable system. The controller and automatic tuning methods are qualified by simulations and laboratory experiments with Bordetella pertussis. PMID:18157554
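One way to picture the direct tuning idea is a per-sample gain update driven by a weighted combination of the error, the (signed) squared error, and the integral error between the observed specific growth rate and its set point. The weights, bounds, and growth-rate values below are invented and do not reproduce the paper's adaptation laws.

```python
def autotune_step(gain, error, integral_error, w1=0.05, w2=0.02, w3=0.01,
                  gain_min=0.1, gain_max=10.0):
    """Adjust a controller gain using error, signed squared error, and integral error."""
    adaptation = w1 * error + w2 * error * abs(error) + w3 * integral_error
    return min(max(gain + adaptation, gain_min), gain_max)

gain, integral = 1.0, 0.0
setpoint = 0.10                                    # target specific growth rate, 1/h (hypothetical)
observed = [0.06, 0.07, 0.085, 0.095, 0.10, 0.102] # hypothetical observed growth rates
for mu in observed:
    e = setpoint - mu
    integral += e
    gain = autotune_step(gain, e, integral)
print("tuned gain:", round(gain, 3))
```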
Learning to classify in large committee machines
NASA Astrophysics Data System (ADS)
O'kane, Dominic; Winther, Ole
1994-10-01
The ability of a two-layer neural network to learn a specific non-linearly-separable classification task, the proximity problem, is investigated using a statistical mechanics approach. Both the tree and fully connected architectures are investigated in the limit where the number K of hidden units is large, but still much smaller than the number N of inputs. Both have continuous weights. Within the replica symmetric ansatz, we find that for zero temperature training, the tree architecture exhibits a strong overtraining effect. For nonzero temperature the asymptotic error is lowered, but it is still higher than the corresponding value for the simple perceptron. The fully connected architecture is considered for two regimes. First, for a finite number of examples we find a symmetry among the hidden units as each performs equally well. The asymptotic generalization error is finite, and minimal for T → ∞, where it goes to the same value as for the simple perceptron. For a large number of examples we find a continuous transition to a phase with broken hidden-unit symmetry, which has an asymptotic generalization error equal to zero.
NASA Technical Reports Server (NTRS)
Herman, G. C.
1986-01-01
A lateral guidance algorithm which controls the location of the line of intersection between the actual and desired orbital planes (the hinge line) is developed for the aerobraking phase of a lift-modulated orbital transfer vehicle. The on-board targeting algorithm associated with this lateral guidance algorithm is simple and concise, which is very desirable since computation time and space are limited on an on-board flight computer. A variational equation which describes the movement of the hinge line is derived. Simple relationships between the plane error, the desired hinge line position, the position out-of-plane error, and the velocity out-of-plane error are found. A computer simulation is developed to test the lateral guidance algorithm for a variety of operating conditions. The algorithm does reduce the total burn magnitude needed to achieve the desired orbit by allowing the plane correction and perigee-raising burn to be combined in a single maneuver. The algorithm performs well under vacuum perigee dispersions, pot-hole density disturbances, and thick atmospheres. The results for many different operating conditions are presented.
A Simple Method to Improve Autonomous GPS Positioning for Tractors
Gomez-Gil, Jaime; Alonso-Garcia, Sergio; Gómez-Gil, Francisco Javier; Stombaugh, Tim
2011-01-01
Error is always present in the GPS guidance of a tractor along a desired trajectory. One way to reduce GPS guidance error is by improving the tractor positioning. The most commonly used ways to do this are either by employing more precise GPS receivers and differential corrections or by employing GPS together with some other local positioning systems such as electronic compasses or Inertial Navigation Systems (INS). However, both are complex and expensive solutions. In contrast, this article presents a simple and low cost method to improve tractor positioning when only a GPS receiver is used as the positioning sensor. The method is based on placing the GPS receiver ahead of the tractor, and on applying kinematic laws of tractor movement, or a geometric approximation, to obtain the midpoint position and orientation of the tractor rear axle more precisely. This precision improvement is produced by the fusion of the GPS data with tractor kinematic control laws. Our results reveal that the proposed method effectively reduces the guidance GPS error along a straight trajectory. PMID:22163917
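As a rough illustration of the geometric approximation described above, the sketch below (Python, not from the authors) estimates the rear-axle midpoint from an antenna mounted a known distance ahead of it, taking the heading from consecutive GPS fixes; the offset distance and the toy track are assumptions.

```python
import numpy as np

def rear_axle_from_forward_gps(xy, d):
    """Estimate rear-axle midpoint positions from a GPS antenna mounted a
    distance d ahead of the rear axle along the tractor's longitudinal axis.

    xy : (N, 2) array of antenna positions (local metric coordinates, metres)
    d  : antenna offset ahead of the rear-axle midpoint (metres)

    Simplified geometric approximation: the heading at each fix is taken from
    the direction of travel between consecutive antenna positions, and the
    rear-axle midpoint is placed a distance d behind the antenna along it.
    """
    xy = np.asarray(xy, dtype=float)
    # Heading from consecutive fixes (forward difference, last value repeated)
    delta = np.diff(xy, axis=0)
    heading = np.arctan2(delta[:, 1], delta[:, 0])
    heading = np.append(heading, heading[-1])
    # Step back from the antenna along the heading
    rear = xy - d * np.column_stack((np.cos(heading), np.sin(heading)))
    return rear, heading

# Example: straight northward track with a 2 m forward antenna offset
track = np.column_stack((np.zeros(10), np.arange(10.0)))
rear_axle, heading = rear_axle_from_forward_gps(track, d=2.0)
print(rear_axle[:3])
```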
Read Code quality assurance: from simple syntax to semantic stability.
Schulz, E B; Barrett, J W; Price, C
1998-01-01
As controlled clinical vocabularies assume an increasing role in modern clinical information systems, so the issue of their quality demands greater attention. In order to meet the resulting stringent criteria for completeness and correctness, a quality assurance system comprising a database of more than 500 rules is being developed and applied to the Read Thesaurus. The authors discuss the requirement to apply quality assurance processes to their dynamic editing database in order to ensure the quality of exported products. Sources of errors include human, hardware, and software factors as well as new rules and transactions. The overall quality strategy includes prevention, detection, and correction of errors. The quality assurance process encompasses simple data specification, internal consistency, inspection procedures and, eventually, field testing. The quality assurance system is driven by a small number of tables and UNIX scripts, with "business rules" declared explicitly as Structured Query Language (SQL) statements. Concurrent authorship, client-server technology, and an initial failure to implement robust transaction control have all provided valuable lessons. The feedback loop for error management needs to be short.
Evidence flow graph methods for validation and verification of expert systems
NASA Technical Reports Server (NTRS)
Becker, Lee A.; Green, Peter G.; Bhatnagar, Jayant
1988-01-01
This final report describes the results of an investigation into the use of evidence flow graph techniques for performing validation and verification of expert systems. This was approached by developing a translator to convert horn-clause rule bases into evidence flow graphs, a simulation program, and methods of analysis. These tools were then applied to a simple rule base which contained errors. It was found that the method was capable of identifying a variety of problems, for example that the order of presentation of input data or small changes in critical parameters could affect the output from a set of rules.
NASA Technical Reports Server (NTRS)
Gordon, Howard R.; Castano, Diego J.
1989-01-01
A method for studying aerosols over the ocean using Nimbus-7 CZCS data is proposed which circumvents having to perform radiative transfer computations involving the aerosol properties. The method is applied to the CZCS band 4 at 670 nm, and yields the total radiance (L sub t) backscattered from the top of a stratified atmosphere containing both stratospheric and tropospheric aerosols and the Rayleigh scattered radiance (L sub r). The radiance which the aerosol would produce in the single scattering approximation is retrieved from (L sub t) - (L sub r) with an error of no greater than 5-7 percent.
A Virtual Instrument System for Determining Sugar Degree of Honey
Wu, Qijun; Gong, Xun
2015-01-01
This study established a LabVIEW-based virtual instrument system to measure optical activity by interfacing a conventional optical instrument with a computer via an RS232 port. The system provides automatic acquisition, real-time display, data processing, results playback, and related functions. It therefore improved the accuracy of the measurement results by avoiding manual operation, cumbersome data processing, and human error in optical activity measurement. The system was applied to batch inspection of the sugar degree of honey, and the results obtained were satisfactory. Moreover, it offers advantages such as a friendly man-machine interface, simple operation, and easily extended functions. PMID:26504615
Experimental test of the variability of G using Viking lander ranging data
NASA Technical Reports Server (NTRS)
Hellings, R. W.; Adams, P. J.; Anderson, J. D.; Keesey, M. S.; Lau, E. L.; Standish, E. M.; Canuto, V. M.; Goldman, I.
1983-01-01
Results are presented from the analysis of solar-system astrometric data, notably the range data to the Viking landers on Mars. A least-squares fit of the parameters of the solar system model to these data limits a simple time variation in the effective Newtonian gravitational constant to (2 ± 4) × 10^-12/yr and a rate of drift of atomic clocks relative to the implicit clock of relativistic dynamics to (1 ± 8) × 10^-12/yr. The error limits quoted are the result of uncertainties in the masses of the asteroids.
Mapping cumulative noise from shipping to inform marine spatial planning.
Erbe, Christine; MacGillivray, Alexander; Williams, Rob
2012-11-01
Including ocean noise in marine spatial planning requires predictions of noise levels on large spatiotemporal scales. Based on a simple sound transmission model and ship track data (Automatic Identification System, AIS), cumulative underwater acoustic energy from shipping was mapped throughout 2008 in the west Canadian Exclusive Economic Zone, showing high noise levels in critical habitats for endangered resident killer whales, exceeding limits of "good conservation status" under the EU Marine Strategy Framework Directive. Error analysis proved that rough calculations of noise occurrence and propagation can form a basis for management processes, because spending resources on unnecessary detail is wasteful and delays remedial action.
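A minimal sketch of the kind of bookkeeping such a cumulative noise map involves: point sources summed on a grid with a simple spreading-loss term. The 15 log10(r) practical-spreading law, the source levels, and the grid are illustrative assumptions, not the authors' transmission model.

```python
import numpy as np

def cumulative_noise_map(ship_positions, source_levels, grid_x, grid_y):
    """Accumulate acoustic energy from point-like ship positions on a grid.

    ship_positions : (N, 2) array of ship x, y coordinates (metres)
    source_levels  : (N,) broadband source levels (dB re 1 uPa @ 1 m)
    Transmission loss is approximated as practical spreading, 15*log10(r),
    an illustrative assumption rather than the authors' propagation model.
    """
    X, Y = np.meshgrid(grid_x, grid_y)
    energy = np.zeros_like(X)
    for (sx, sy), sl in zip(ship_positions, source_levels):
        r = np.hypot(X - sx, Y - sy)
        r = np.maximum(r, 1.0)                  # avoid log of zero at the source
        rl = sl - 15.0 * np.log10(r)            # received level, dB
        energy += 10.0 ** (rl / 10.0)           # sum energy, not dB
    return 10.0 * np.log10(energy)              # back to dB (cumulative map)

# Toy example: two ships on a 10 km x 10 km grid
ships = np.array([[2000.0, 3000.0], [7000.0, 8000.0]])
levels = np.array([180.0, 175.0])
grid = np.linspace(0.0, 10000.0, 101)
noise_db = cumulative_noise_map(ships, levels, grid, grid)
print(noise_db.max())
```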
Tunnel ionization of atoms and molecules: How accurate are the weak-field asymptotic formulas?
NASA Astrophysics Data System (ADS)
Labeye, Marie; Risoud, François; Maquet, Alfred; Caillat, Jérémie; Taïeb, Richard
2018-05-01
Weak-field asymptotic formulas for the tunnel ionization rate of atoms and molecules in strong laser fields are often used for the analysis of strong field recollision experiments. We investigate their accuracy and domain of validity for different model systems by comparing them with exact numerical results, obtained by solving the time-dependent Schrödinger equation. We find that corrections that take the dc-Stark shift into account are a simple and efficient way to improve the formula. Furthermore, analyzing the different approximations used, we show that error compensation plays a crucial role in the fair agreement between exact and analytical results.
Determination of the Cryolite Ratio of KF-NaF-AlF3 Electrolyte by Conductivity Method
NASA Astrophysics Data System (ADS)
Yan, Hengwei; Yang, Jianhong; Liu, Zhanwei; Wang, Chengzhi; Ma, Wenhui
2018-05-01
The cryolite ratio (CR) is an important parameter for the electrolyte in aluminum reduction cells. A method for measuring the CR of acid (CR < 3) electrolytes in the KF-NaF-AlF3 system by means of electrical conductivity was developed, and the formula for calculating the CR was deduced. This method has the advantages of simple operation and high precision. In addition, the relative standard deviations (RSD) of the measurement are < 1.2 pct, and the analysis error of the NaF or KF content has little effect on the determination of the CR.
Divya, O; Mishra, Ashok K
2007-05-29
Quantitative determination of the kerosene fraction present in diesel has been carried out based on excitation emission matrix fluorescence (EEMF) along with parallel factor analysis (PARAFAC) and N-way partial least squares regression (N-PLS). EEMF is a simple, sensitive and nondestructive method suitable for the analysis of multifluorophoric mixtures. Calibration models consisting of varying compositions of diesel and kerosene were constructed and validated using the leave-one-out cross-validation method. The accuracy of the models was evaluated through the root mean square error of prediction (RMSEP) for the PARAFAC, N-PLS and unfold PLS methods. N-PLS was found to be a better method than PARAFAC and unfold PLS because of its lower RMSEP values.
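The RMSEP-with-leave-one-out-cross-validation step can be sketched generically as below; a plain least-squares calibration stands in for PARAFAC/N-PLS (which require specialised multiway tools), and the synthetic mixture data are purely illustrative.

```python
import numpy as np

def rmsep_loocv(X, y, fit, predict):
    """Root mean square error of prediction via leave-one-out cross-validation."""
    errors = []
    n = len(y)
    for i in range(n):
        mask = np.arange(n) != i
        model = fit(X[mask], y[mask])
        errors.append(y[i] - predict(model, X[i:i + 1])[0])
    return float(np.sqrt(np.mean(np.square(errors))))

# Stand-in calibration model: ordinary least squares on unfolded spectra.
def fit_ols(X, y):
    A = np.column_stack((np.ones(len(X)), X))
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef

def predict_ols(coef, X):
    A = np.column_stack((np.ones(len(X)), X))
    return A @ coef

# Synthetic example: 12 mixtures, 5 "spectral" channels, kerosene fraction y
rng = np.random.default_rng(0)
y = np.linspace(0.0, 0.5, 12)                      # kerosene volume fraction
X = np.outer(y, rng.normal(size=5)) + 0.01 * rng.normal(size=(12, 5))
print("RMSEP:", rmsep_loocv(X, y, fit_ols, predict_ols))
```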
Photometric method for determination of acidity constants through integral spectra analysis.
Zevatskiy, Yuriy Eduardovich; Ruzanov, Daniil Olegovich; Samoylov, Denis Vladimirovich
2015-04-15
An express method for the determination of acidity constants of organic acids, based on the analysis of the integral transmittance vs. pH dependence, is developed. The integral value is registered as the photocurrent of a photometric device simultaneously with potentiometric titration. The proposed method makes it possible to obtain pKa using only simple and low-cost instrumentation. The optical part of the experimental setup has been simplified by excluding the monochromator. Thus it takes only 10-15 min to obtain one pKa value with an absolute error of less than 0.15 pH units. The application limits and reliability of the method have been tested for a series of organic acids of various nature. Copyright © 2015 Elsevier B.V. All rights reserved.
Design and analysis of control system for VCSEL of atomic interference magnetometer
NASA Astrophysics Data System (ADS)
Zhang, Xiao-nan; Sun, Xiao-jie; Kou, Jun; Yang, Feng; Li, Jie; Ren, Zhang; Wei, Zong-kang
2016-11-01
Magnetic field detection is an important means of exploring the deep-space environment. Benefiting from a simple structure and low power consumption, the atomic interference magnetometer has become one of the most promising detector payloads. A Vertical Cavity Surface Emitting Laser (VCSEL) is usually used as the light source in an atomic interference magnetometer, and its frequency stability directly affects the stability and sensitivity of the magnetometer. In this paper, a closed-loop control strategy for the VCSEL was designed and analyzed, the controller parameters were selected, and the feedback error algorithm was optimized. According to experiments performed on a hardware-in-the-loop simulation platform, the designed closed-loop control system is sound and effectively improves the laser frequency stability during actual operation of the magnetometer.
NASA Astrophysics Data System (ADS)
Lin, Yinwei
2018-06-01
A three-dimensional model of a fish school, computed by a modified Adomian decomposition method (ADM) discretized with the finite difference method, is proposed. To our knowledge, few studies of fish schools are documented, owing to the high cost of numerical computation and tedious three-dimensional data analysis. Here, we propose a simple model based on the Adomian decomposition method to estimate the energy-saving efficiency of the flow motion of the fish school. First, analytic solutions of the Navier-Stokes equations are used for numerical validation. The influence of the distance between two side-by-side fish on the energy efficiency of the school is then studied. In addition, a complete error analysis for the method is presented.
The measurement of linear frequency drift in oscillators
NASA Astrophysics Data System (ADS)
Barnes, J. A.
1985-04-01
A linear drift in frequency is an important element in most stochastic models of oscillator performance. Quartz crystal oscillators often have drifts in excess of a part in ten to the tenth power per day. Even commercial cesium beam devices often show drifts of a few parts in ten to the thirteenth per year. There are many ways to estimate the drift rates from data samples (e.g., regress the phase on a quadratic; regress the frequency on a linear function of time; compute the simple mean of the first difference of frequency; use Kalman filters with a drift term as one element in the state vector; and others). Although most of these estimators are unbiased, they vary in efficiency (i.e., confidence intervals). Further, the estimation of confidence intervals using the standard analysis of variance (typically associated with the specific estimating technique) can give amazingly optimistic results. The source of these problems is not an error in, say, the regression techniques, but rather the problems arise from correlations within the residuals. That is, the oscillator model is often not consistent with constraints on the analysis technique or, in other words, some specific analysis techniques are often inappropriate for the task at hand. The appropriateness of a specific analysis technique is critically dependent on the oscillator model and can often be checked with a simple whiteness test on the residuals.
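Three of the drift estimators listed above can be sketched as follows, assuming equally spaced samples with interval tau0; the synthetic drift and noise levels are illustrative, and the naive confidence intervals the abstract warns about are deliberately not computed here.

```python
import numpy as np

def drift_from_phase(phase, tau0):
    """Drift rate from a quadratic fit to phase: x(t) ~ a + b*t + (D/2)*t^2."""
    t = np.arange(len(phase)) * tau0
    coeffs = np.polyfit(t, phase, 2)      # highest power first
    return 2.0 * coeffs[0]                # D = 2 * quadratic coefficient

def drift_from_freq_regression(freq, tau0):
    """Drift rate from a linear fit to fractional frequency y(t) ~ c + D*t."""
    t = np.arange(len(freq)) * tau0
    coeffs = np.polyfit(t, freq, 1)
    return coeffs[0]

def drift_from_first_difference(freq, tau0):
    """Drift rate from the simple mean of the first difference of frequency."""
    return np.mean(np.diff(freq)) / tau0

# Synthetic oscillator: drift D plus white frequency noise (illustrative levels)
rng = np.random.default_rng(1)
tau0, D, n = 1.0, 1e-13, 1000
t = np.arange(n) * tau0
freq = D * t + 1e-12 * rng.normal(size=n)
phase = np.cumsum(freq) * tau0

print(drift_from_phase(phase, tau0),
      drift_from_freq_regression(freq, tau0),
      drift_from_first_difference(freq, tau0))
```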
Correcting For Seed-Particle Lag In LV Measurements
NASA Technical Reports Server (NTRS)
Jones, Gregory S.; Gartrell, Luther R.; Kamemoto, Derek Y.
1994-01-01
Two experiments were conducted to evaluate the effects of seed-particle size on errors in LV measurements of mean flows. Both theoretical and conventional experimental methods were used to evaluate the errors. The first experiment focused on measurement of the decelerating stagnation streamline of a low-speed flow around a circular cylinder with a two-dimensional afterbody. The second was performed in transonic flow and involved measurement of the decelerating stagnation streamline of a hemisphere with a cylindrical afterbody. It was concluded that mean-quantity LV measurements are subject to large errors directly attributable to particle size. Predictions of particle-response theory showed good agreement with the experimental results, indicating that the velocity-error-correction technique used in the study is viable for increasing the accuracy of laser velocimetry measurements. The technique is simple and useful in any research facility in which flow velocities are measured.
NASA Astrophysics Data System (ADS)
Seeto, Wen Jun; Lipke, Elizabeth Ann
2016-03-01
Tracking of rolling cells via in vitro experiment is now commonly performed using customized computer programs. In most cases, two critical challenges continue to limit analysis of cell rolling data: long computation times due to the complexity of tracking algorithms and difficulty in accurately correlating a given cell with itself from one frame to the next, which is typically due to errors caused by cells that either come close in proximity to each other or come in contact with each other. In this paper, we have developed a sophisticated, yet simple and highly effective, rolling cell tracking system to address these two critical problems. This optical cell tracking analysis (OCTA) system first employs ImageJ for cell identification in each frame of a cell rolling video. A custom MATLAB code was written to use the geometric and positional information of all cells as the primary parameters for matching each individual cell with itself between consecutive frames and to avoid errors when tracking cells that come within close proximity to one another. Once the cells are matched, rolling velocity can be obtained for further analysis. The use of ImageJ for cell identification eliminates the need for high level MATLAB image processing knowledge. As a result, only fundamental MATLAB syntax is necessary for cell matching. OCTA has been implemented in the tracking of endothelial colony forming cell (ECFC) rolling under shear. The processing time needed to obtain tracked cell data from a 2 min ECFC rolling video recorded at 70 frames per second with a total of over 8000 frames is less than 6 min using a computer with an Intel® Core™ i7 CPU 2.80 GHz (8 CPUs). This cell tracking system benefits cell rolling analysis by substantially reducing the time required for post-acquisition data processing of high frame rate video recordings and preventing tracking errors when individual cells come in close proximity to one another.
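A hedged sketch of the frame-to-frame matching step (in Python rather than the authors' ImageJ/MATLAB pipeline): centroids are paired by a globally optimal assignment on distance with a displacement gate, which is one simple way to avoid mismatches when cells come close; the authors additionally use geometric features, which are omitted here.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def match_cells(prev, curr, max_disp):
    """Match cell centroids between consecutive frames by minimising the total
    displacement (optimal assignment), then reject pairs that moved farther
    than max_disp; this guards against mismatches when cells come close.

    prev, curr : (N, 2) and (M, 2) arrays of centroid coordinates (pixels)
    Returns a list of (index_in_prev, index_in_curr) pairs.
    """
    dists = np.linalg.norm(prev[:, None, :] - curr[None, :, :], axis=2)
    rows, cols = linear_sum_assignment(dists)
    return [(i, j) for i, j in zip(rows, cols) if dists[i, j] <= max_disp]

# Toy example: three cells rolling ~5 px to the right, two in close proximity
prev = np.array([[10.0, 20.0], [12.0, 22.0], [50.0, 80.0]])
curr = np.array([[15.0, 20.5], [17.2, 22.1], [55.1, 79.8]])
print(match_cells(prev, curr, max_disp=8.0))   # -> [(0, 0), (1, 1), (2, 2)]
```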
Legleiter, Carl J.; Kinzel, Paul J.; Overstreet, Brandon T.
2011-01-01
This study examined the possibility of mapping depth from optical image data in turbid, sediment-laden channels. Analysis of hyperspectral images from the Platte River indicated that depth retrieval in these environments is feasible, but might not be highly accurate. Four methods of calibrating image-derived depth estimates were evaluated. The first involved extracting image spectra at survey point locations throughout the reach. These paired observations of depth and reflectance were subjected to optimal band ratio analysis (OBRA) to relate (R2 = 0.596) a spectrally based quantity to flow depth. Two other methods were based on OBRA of data from individual cross sections. A fourth strategy used ground-based reflectance measurements to derive an OBRA relation (R2 = 0.944) that was then applied to the image. Depth retrieval accuracy was assessed by visually inspecting cross sections and calculating various error metrics. Calibration via field spectroscopy resulted in a shallow bias but provided relative accuracies similar to image-based methods. Reach-aggregated OBRA was marginally superior to calibrations based on individual cross sections, and depth retrieval accuracy varied considerably along each reach. Errors were lower and observed versus predicted regression R2 values higher for a relatively simple, deeper site than a shallower, braided reach; errors were 1/3 and 1/2 the mean depth for the two reaches. Bathymetric maps were coherent and hydraulically reasonable, however, and might be more reliable than implied by numerical metrics. As an example application, linear discriminant analysis was used to produce a series of depth threshold maps for characterizing shallow-water habitat for roosting cranes.
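Optimal band ratio analysis (OBRA) as described above can be sketched as an exhaustive search over band pairs for the log-ratio that best predicts depth by linear regression; the synthetic reflectances below are illustrative, not Platte River data.

```python
import numpy as np

def obra(reflectance, depth):
    """Optimal Band Ratio Analysis: for every band pair (i, j) compute
    X = ln(R_i / R_j), regress depth on X, and keep the pair with the
    highest R^2.  Returns (best_pair, slope, intercept, best_r2).

    reflectance : (n_points, n_bands) array of positive reflectances
    depth       : (n_points,) array of surveyed flow depths
    """
    n_bands = reflectance.shape[1]
    best = (None, None, None, -np.inf)
    for i in range(n_bands):
        for j in range(n_bands):
            if i == j:
                continue
            x = np.log(reflectance[:, i] / reflectance[:, j])
            slope, intercept = np.polyfit(x, depth, 1)
            pred = slope * x + intercept
            ss_res = np.sum((depth - pred) ** 2)
            ss_tot = np.sum((depth - depth.mean()) ** 2)
            r2 = 1.0 - ss_res / ss_tot
            if r2 > best[3]:
                best = ((i, j), slope, intercept, r2)
    return best

# Synthetic example: depth attenuates band 0 faster than band 1
rng = np.random.default_rng(2)
d = rng.uniform(0.1, 1.5, 200)                      # depths in metres
r0 = 0.30 * np.exp(-2.0 * d) + 0.02
r1 = 0.25 * np.exp(-0.5 * d) + 0.02
r2_band = 0.20 + 0.01 * rng.normal(size=d.size)     # depth-insensitive band
R = np.column_stack((r0, r1, r2_band))
print(obra(R, d))
```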
Simon, Steven L; Hoffman, F Owen; Hofer, Eduard
2015-01-01
Retrospective dose estimation, particularly dose reconstruction that supports epidemiological investigations of health risk, relies on various strategies that include models of physical processes and exposure conditions with detail ranging from simple to complex. Quantification of dose uncertainty is an essential component of assessments for health risk studies since, as is well understood, it is impossible to retrospectively determine the true dose for each person. To address uncertainty in dose estimation, numerical simulation tools have become commonplace and there is now an increased understanding about the needs and what is required for models used to estimate cohort doses (in the absence of direct measurement) to evaluate dose response. It now appears that for dose-response algorithms to derive the best, unbiased estimate of health risk, we need to understand the type, magnitude and interrelationships of the uncertainties of model assumptions, parameters and input data used in the associated dose estimation models. Heretofore, uncertainty analysis of dose estimates did not always properly distinguish between categories of errors, e.g., uncertainty that is specific to each subject (i.e., unshared error), and uncertainty of doses from a lack of understanding and knowledge about parameter values that are shared to varying degrees by numbers of subsets of the cohort. While mathematical propagation of errors by Monte Carlo simulation methods has been used for years to estimate the uncertainty of an individual subject's dose, it was almost always conducted without consideration of dependencies between subjects. In retrospect, these types of simple analyses are not suitable for studies with complex dose models, particularly when important input data are missing or otherwise not available. The dose estimation strategy presented here is a simulation method that corrects the previous deficiencies of analytical or simple Monte Carlo error propagation methods and is termed, due to its capability to maintain separation between shared and unshared errors, the two-dimensional Monte Carlo (2DMC) procedure. Simply put, the 2DMC method simulates alternative, possibly true, sets (or vectors) of doses for an entire cohort rather than a single set that emerges when each individual's dose is estimated independently from other subjects. Moreover, estimated doses within each simulated vector maintain proper inter-relationships such that the estimated doses for members of a cohort subgroup that share common lifestyle attributes and sources of uncertainty are properly correlated. The 2DMC procedure simulates inter-individual variability of possibly true doses within each dose vector and captures the influence of uncertainty in the values of dosimetric parameters across multiple realizations of possibly true vectors of cohort doses. The primary characteristic of the 2DMC approach, as well as its strength, are defined by the proper separation between uncertainties shared by members of the entire cohort or members of defined cohort subsets, and uncertainties that are individual-specific and therefore unshared.
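A minimal sketch of the 2DMC idea: an outer loop draws parameter values whose uncertainty is shared by the whole cohort, and an inner step draws subject-specific (unshared) values, so each realization is a full dose vector in which shared uncertainty induces correlation across subjects. The toy dose model and distributions are assumptions, not the authors' dosimetry.

```python
import numpy as np

def two_dimensional_mc(n_subjects, n_vectors, intakes, rng=None):
    """Simulate alternative 'possibly true' dose vectors for a cohort.

    Outer loop: draw parameters whose uncertainty is SHARED by the cohort
    (here, a single dose-per-unit-intake factor).  Inner step: draw the
    UNSHARED, subject-specific uncertainty.  Each row of the result is one
    candidate dose vector; correlation across subjects within a row comes
    only from the shared draw.  The toy dose model is illustrative.
    """
    rng = np.random.default_rng(rng)
    dose_vectors = np.empty((n_vectors, n_subjects))
    for k in range(n_vectors):
        # Shared uncertainty: one draw per vector, applied to every subject
        dose_factor = rng.lognormal(mean=np.log(0.05), sigma=0.4)
        # Unshared uncertainty: one independent draw per subject
        subject_error = rng.lognormal(mean=0.0, sigma=0.3, size=n_subjects)
        dose_vectors[k] = intakes * dose_factor * subject_error
    return dose_vectors

# Toy cohort of 5 subjects with known intakes; 1000 alternative dose vectors
intakes = np.array([10.0, 25.0, 5.0, 40.0, 15.0])
doses = two_dimensional_mc(n_subjects=5, n_vectors=1000, intakes=intakes, rng=3)
# Shared uncertainty shows up as positive correlation between subjects' doses
print(np.corrcoef(doses[:, 0], doses[:, 1])[0, 1])
```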
Nelson, Jonathan M.; Kinzel, Paul J.; Schmeeckle, Mark Walter; McDonald, Richard R.; Minear, Justin T.
2016-01-01
Noncontact methods for measuring water-surface elevation and velocity in laboratory flumes and rivers are presented with examples. Water-surface elevations are measured using an array of acoustic transducers in the laboratory and using laser scanning in field situations. Water-surface velocities are based on using particle image velocimetry or other machine vision techniques on infrared video of the water surface. Using spatial and temporal averaging, results from these methods provide information that can be used to develop estimates of discharge for flows over known bathymetry. Making such estimates requires relating water-surface velocities to vertically averaged velocities; the methods here use standard relations. To examine where these relations break down, laboratory data for flows over simple bumps of three amplitudes are evaluated. As anticipated, discharges determined from surface information can have large errors where nonhydrostatic effects are large. In addition to investigating and characterizing this potential error in estimating discharge, a simple method for correction of the issue is presented. With a simple correction based on bed gradient along the flow direction, remotely sensed estimates of discharge appear to be viable.
Technical Note: Introduction of variance component analysis to setup error analysis in radiotherapy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Matsuo, Yukinori, E-mail: ymatsuo@kuhp.kyoto-u.ac.
Purpose: The purpose of this technical note is to introduce variance component analysis to the estimation of systematic and random components in setup error of radiotherapy. Methods: Balanced data according to the one-factor random effect model were assumed. Results: Analysis-of-variance (ANOVA)-based computation was applied to estimate the values and their confidence intervals (CIs) for systematic and random errors and the population mean of setup errors. The conventional method overestimates systematic error, especially in hypofractionated settings. The CI for systematic error becomes much wider than that for random error. The ANOVA-based estimation can be extended to a multifactor model considering multiple causes of setup errors (e.g., interpatient, interfraction, and intrafraction). Conclusions: Variance component analysis may lead to novel applications to setup error analysis in radiotherapy.
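A sketch of the ANOVA (method-of-moments) estimation under the balanced one-factor random-effects model, with patients as the random factor and fractions as replicates; mapping the between-patient and within-patient standard deviations onto systematic and random setup error follows the usual convention, and the synthetic data are illustrative.

```python
import numpy as np

def variance_components(setup_errors):
    """ANOVA (method-of-moments) estimates for a balanced one-factor
    random-effects model of setup error.

    setup_errors : (n_patients, n_fractions) array, one setup-error component
                   (e.g. the left-right shift in mm) per patient and fraction.
    Returns (population_mean, Sigma, sigma):
      Sigma -- systematic error, the between-patient standard deviation
      sigma -- random error, the within-patient standard deviation
    """
    y = np.asarray(setup_errors, dtype=float)
    p, n = y.shape
    patient_means = y.mean(axis=1)
    grand_mean = y.mean()
    ms_between = n * np.sum((patient_means - grand_mean) ** 2) / (p - 1)
    ms_within = np.sum((y - patient_means[:, None]) ** 2) / (p * (n - 1))
    var_between = max((ms_between - ms_within) / n, 0.0)   # truncate at zero
    return grand_mean, np.sqrt(var_between), np.sqrt(ms_within)

# Synthetic example: 20 patients, 5 fractions, true Sigma = 2 mm, sigma = 3 mm
rng = np.random.default_rng(4)
data = rng.normal(0.0, 2.0, size=(20, 1)) + rng.normal(0.0, 3.0, size=(20, 5))
print(variance_components(data))
```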
In vivo measurement of mechanical properties of human long bone by using sonic sound
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hossain, M. Jayed, E-mail: zed.hossain06@gmail.com; Rahman, M. Moshiur, E-mail: razib-121@yahoo.com; Alam, Morshed
Vibration analysis has been evaluated as a non-invasive technique for the in vivo assessment of bone mechanical properties. The relation between the resonant frequencies, long-bone geometry, and mechanical properties can be obtained by vibration analysis. In vivo measurements were performed on the human ulna, treated as a simple beam model, with an experimental technique and associated apparatus. The resonant frequency of the ulna was obtained by Fast Fourier Transform (FFT) analysis of the vibration response of a piezoelectric accelerometer. Both the elastic modulus and the speed of sound were inferred from the resonant frequency. Measurement error in the improved experimental setup was comparable with that of previous work. The in vivo determination of bone elastic response has potential value in screening programs for metabolic bone disease, early detection of osteoporosis, and evaluation of the skeletal effects of various therapeutic modalities.
NASA Technical Reports Server (NTRS)
1973-01-01
An analysis of Very Low Frequency propagation in the atmosphere in the 10-14 kHz range leads to a discussion of some of the more significant causes of phase perturbation. The method of generating sky-wave corrections to predict the Omega phase is discussed. Composite Omega is considered as a means of lane identification and of reducing Omega navigation error. A simple technique for generating trapezoidal model (T-model) phase prediction is presented and compared with the Navy predictions and actual phase measurements. The T-model prediction analysis illustrates the ability to account for the major phase shift created by the diurnal effects on the lower ionosphere. An analysis of the Navy sky-wave correction table is used to provide information about spatial and temporal correlation of phase correction relative to the differential mode of operation.
An efficient current-based logic cell model for crosstalk delay analysis
NASA Astrophysics Data System (ADS)
Nazarian, Shahin; Das, Debasish
2013-04-01
Logic cell modelling is an important component in the analysis and design of CMOS integrated circuits, mostly due to nonlinear behaviour of CMOS cells with respect to the voltage signal at their input and output pins. A current-based model for CMOS logic cells is presented, which can be used for effective crosstalk noise and delta delay analysis in CMOS VLSI circuits. Existing current source models are expensive and need a new set of Spice-based characterisation, which is not compatible with typical EDA tools. In this article we present Imodel, a simple nonlinear logic cell model that can be derived from the typical cell libraries such as NLDM, with accuracy much higher than NLDM-based cell delay models. In fact, our experiments show an average error of 3% compared to Spice. This level of accuracy comes with a maximum runtime penalty of 19% compared to NLDM-based cell delay models on medium-sized industrial designs.
Honeywell Technical Order Transfer Tests.
1987-06-12
of simple corrections, a reasonable reproduction of the original could be generated. The quality was not good enough for a production environment. Lack of automated quality control (AQC) tools could account for the errors.
On Time/Space Aggregation of Fine-Scale Error Estimates (Invited)
NASA Astrophysics Data System (ADS)
Huffman, G. J.
2013-12-01
Estimating errors inherent in fine time/space-scale satellite precipitation data sets is still an on-going problem and a key area of active research. Complicating features of these data sets include the intrinsic intermittency of the precipitation in space and time and the resulting highly skewed distribution of precipitation rates. Additional issues arise from the subsampling errors that satellites introduce, the errors due to retrieval algorithms, and the correlated error that retrieval and merger algorithms sometimes introduce. Several interesting approaches have been developed recently that appear to make progress on these long-standing issues. At the same time, the monthly averages over 2.5°x2.5° grid boxes in the Global Precipitation Climatology Project (GPCP) Satellite-Gauge (SG) precipitation data set follow a very simple sampling-based error model (Huffman 1997) with coefficients that are set using coincident surface and GPCP SG data. This presentation outlines the unsolved problem of how to aggregate the fine-scale errors (discussed above) to an arbitrary time/space averaging volume for practical use in applications, reducing in the limit to simple Gaussian expressions at the monthly 2.5°x2.5° scale. Scatter diagrams with different time/space averaging show that the relationship between the satellite and validation data improves due to the reduction in random error. One of the key, and highly non-linear, issues is that fine-scale estimates tend to have large numbers of cases with points near the axes on the scatter diagram (one of the values is exactly or nearly zero, while the other value is higher). Averaging 'pulls' the points away from the axes and towards the 1:1 line, which usually happens for higher precipitation rates before lower rates. Given this qualitative observation of how aggregation affects error, we observe that existing aggregation rules, such as the Steiner et al. (2003) power law, only depend on the aggregated precipitation rate. Is this sufficient, or is it necessary to aggregate the precipitation error estimates across the time/space data cube used for averaging? At least for small time/space data cubes it would seem that the detailed variables that affect each precipitation error estimate in the aggregation, such as sensor type, land/ocean surface type, convective/stratiform type, and so on, drive variations that must be accounted for explicitly.
A Simple Test of Class-Level Genetic Association Can Reveal Novel Cardiometabolic Trait Loci.
Qian, Jing; Nunez, Sara; Reed, Eric; Reilly, Muredach P; Foulkes, Andrea S
2016-01-01
Characterizing the genetic determinants of complex diseases can be further augmented by incorporating knowledge of underlying structure or classifications of the genome, such as newly developed mappings of protein-coding genes, epigenetic marks, enhancer elements and non-coding RNAs. We apply a simple class-level testing framework, termed Genetic Class Association Testing (GenCAT), to identify protein-coding gene association with 14 cardiometabolic (CMD) related traits across 6 publicly available genome wide association (GWA) meta-analysis data resources. GenCAT uses SNP-level meta-analysis test statistics across all SNPs within a class of elements, as well as the size of the class and its unique correlation structure, to determine if the class is statistically meaningful. The novelty of findings is evaluated through investigation of regional signals. A subset of findings are validated using recently updated, larger meta-analysis resources. A simulation study is presented to characterize overall performance with respect to power, control of family-wise error and computational efficiency. All analysis is performed using the GenCAT package, R version 3.2.1. We demonstrate that class-level testing complements the common first stage minP approach that involves individual SNP-level testing followed by post-hoc ascribing of statistically significant SNPs to genes and loci. GenCAT suggests 54 protein-coding genes at 41 distinct loci for the 13 CMD traits investigated in the discovery analysis, that are beyond the discoveries of minP alone. An additional application to biological pathways demonstrates flexibility in defining genetic classes. We conclude that it would be prudent to include class-level testing as standard practice in GWA analysis. GenCAT, for example, can be used as a simple, complementary and efficient strategy for class-level testing that leverages existing data resources, requires only summary level data in the form of test statistics, and adds significant value with respect to its potential for identifying multiple novel and clinically relevant trait associations.
Theoretical performance analysis of doped optical fibers based on pseudo parameters
NASA Astrophysics Data System (ADS)
Karimi, Maryam; Seraji, Faramarz E.
2010-09-01
Characterization of doped optical fibers (DOFs) is an essential first stage in the design of DOF-based devices. This paper presents the design of novel measurement techniques to determine DOF parameters using mono-beam propagation in a low-loss medium by generating pseudo parameters for the DOFs. The designed techniques can simultaneously characterize the absorption and emission cross-sections (ACS and ECS) and the dopant concentration of DOFs. In both proposed techniques, we assume pseudo parameters for the DOFs instead of their actual values and show that the choice of these pseudo parameter values for the design of DOF-based devices, such as the erbium-doped fiber amplifier (EDFA), is appropriate and that the resulting error is negligible compared with the actual parameter values. Utilization of pseudo ACS and ECS values in the design procedure of EDFAs does not require measurement of the background loss coefficient (BLC) and simplifies the rate equation of the DOFs. It is shown that, using the pseudo parameter values obtained by the proposed techniques, the error in the gain of a designed EDFA with a BLC of about 1 dB/km is about 0.08 dB. It is further indicated that the same scenario holds for BLCs lower than 5 dB/m and higher than 12 dB/m. The proposed characterization techniques involve simple procedures and are low in cost, which can be advantageous in the manufacturing of DOFs.
On the calibration process of film dosimetry: OLS inverse regression versus WLS inverse prediction.
Crop, F; Van Rompaye, B; Paelinck, L; Vakaet, L; Thierens, H; De Wagter, C
2008-07-21
The purpose of this study was both to put forward a statistically correct model for film calibration and to optimize this process. A reliable calibration is needed in order to perform accurate reference dosimetry with radiographic (Gafchromic) film. Sometimes, an ordinary least squares simple linear (in the parameters) regression is applied to the dose-optical-density (OD) curve, with dose as a function of OD (inverse regression) or OD as a function of dose (inverse prediction). The application of a simple linear regression fit is an invalid method because heteroscedasticity of the data is not taken into account. This could lead to erroneous results originating from the calibration process itself and thus to a lower accuracy. In this work, we compare the ordinary least squares (OLS) inverse regression method with the correct weighted least squares (WLS) inverse prediction method to create calibration curves. We found that the OLS inverse regression method could lead to a prediction bias of up to 7.3 cGy at 300 cGy and total prediction errors of 3% or more for Gafchromic EBT film. Application of the WLS inverse prediction method resulted in a maximum prediction bias of 1.4 cGy and total prediction errors below 2% in a 0-400 cGy range. We developed a Monte-Carlo-based process to optimize calibrations, depending on the needs of the experiment. This type of thorough analysis can lead to a higher accuracy for film dosimetry.
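The contrast between the two calibration strategies can be sketched as below: OLS inverse regression fits dose directly on OD, while WLS inverse prediction fits OD on dose with weights inversely proportional to the OD variance and then inverts the fit. A linear calibration form and the noise model are illustrative assumptions; real Gafchromic response curves are nonlinear.

```python
import numpy as np

def ols_inverse_regression(od, dose):
    """Fit dose = a + b * OD by ordinary least squares (ignores heteroscedasticity)."""
    b, a = np.polyfit(od, dose, 1)
    return lambda od_new: a + b * np.asarray(od_new)

def wls_inverse_prediction(od, dose, od_var):
    """Fit OD = c + d * dose by weighted least squares, then invert the fitted
    line to predict dose from a measured OD."""
    # np.polyfit expects weights proportional to 1/sigma, not 1/sigma**2
    d, c = np.polyfit(dose, od, 1, w=1.0 / np.sqrt(np.asarray(od_var)))
    return lambda od_new: (np.asarray(od_new) - c) / d

# Synthetic calibration: OD noise grows with dose (heteroscedastic)
rng = np.random.default_rng(5)
dose = np.repeat(np.arange(0.0, 401.0, 50.0), 5)             # cGy
sigma = 0.005 + 0.00005 * dose                                # OD standard deviation
od = 0.10 + 0.0012 * dose + rng.normal(0.0, sigma)
ols = ols_inverse_regression(od, dose)
wls = wls_inverse_prediction(od, dose, sigma ** 2)
print(ols(0.46), wls(0.46))    # predicted dose (cGy) for a measured OD of 0.46
```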
NASA Astrophysics Data System (ADS)
Hernández-Pajares, Manuel; Roma-Dollase, David; Krankowski, Andrzej; García-Rigo, Alberto; Orús-Pérez, Raül
2017-12-01
A summary of the main concepts of global ionospheric maps [hereinafter GIM(s)] of vertical total electron content (VTEC), with special emphasis on their assessment, is presented in this paper. It is based on the experience accumulated during almost two decades of collaborative work in the context of the international global navigation satellite systems (GNSS) service (IGS) ionosphere working group. A representative comparison of the two main assessments of ionospheric electron content models (VTEC-altimeter and difference of slant TEC based on independent global positioning system (GPS) data, dSTEC-GPS) is performed. It is based on 26 GPS receivers distributed worldwide and mostly placed on islands, covering the last quarter of 2010 to the end of 2016. The consistency between dSTEC-GPS and VTEC-altimeter assessments for one of the most accurate IGS GIMs (the tomographic-kriging GIM `UQRG' computed by UPC) is shown. Typical RMS error values of 2 TECU for the VTEC-altimeter and 0.5 TECU for the dSTEC-GPS assessments are found. As expected from a simple random model, there is a significant correlation between both RMS and especially relative errors, most evident when a large enough number of observations per pass is considered. The authors expect that this manuscript will be useful for new analysis contributor centres and in general for the scientific and technical community interested in simple and truly external ways of validating electron content models of the ionosphere.
Slide Set: Reproducible image analysis and batch processing with ImageJ.
Nanes, Benjamin A
2015-11-01
Most imaging studies in the biological sciences rely on analyses that are relatively simple. However, manual repetition of analysis tasks across multiple regions in many images can complicate even the simplest analysis, making record keeping difficult, increasing the potential for error, and limiting reproducibility. While fully automated solutions are necessary for very large data sets, they are sometimes impractical for the small- and medium-sized data sets common in biology. Here we present the Slide Set plugin for ImageJ, which provides a framework for reproducible image analysis and batch processing. Slide Set organizes data into tables, associating image files with regions of interest and other relevant information. Analysis commands are automatically repeated over each image in the data set, and multiple commands can be chained together for more complex analysis tasks. All analysis parameters are saved, ensuring transparency and reproducibility. Slide Set includes a variety of built-in analysis commands and can be easily extended to automate other ImageJ plugins, reducing the manual repetition of image analysis without the set-up effort or programming expertise required for a fully automated solution.
NASA Technical Reports Server (NTRS)
Prive, N. C.; Errico, R. M.; Tai, K.-S.
2013-01-01
The Global Modeling and Assimilation Office (GMAO) observing system simulation experiment (OSSE) framework is used to explore the response of analysis error and forecast skill to observation quality. In an OSSE, synthetic observations may be created that have much smaller error than real observations, and precisely quantified error may be applied to these synthetic observations. Three experiments are performed in which synthetic observations with magnitudes of applied observation error that vary from zero to twice the estimated realistic error are ingested into the Goddard Earth Observing System Model (GEOS-5) with Gridpoint Statistical Interpolation (GSI) data assimilation for a one-month period representing July. The analysis increment and observation innovation are strongly impacted by observation error, with much larger variances for increased observation error. The analysis quality is degraded by increased observation error, but the change in root-mean-square error of the analysis state is small relative to the total analysis error. Surprisingly, in the 120 hour forecast increased observation error only yields a slight decline in forecast skill in the extratropics, and no discernable degradation of forecast skill in the tropics.
Nuclear Quadrupole Moments and Nuclear Shell Structure
DOE R&D Accomplishments Database
Townes, C. H.; Foley, H. M.; Low, W.
1950-06-23
Describes a simple model, based on nuclear shell considerations, which leads to the proper behavior of known nuclear quadrupole moments, although predictions of the magnitudes of some quadrupole moments are seriously in error.
Human iris three-dimensional imaging at micron resolution by a micro-plenoptic camera
Chen, Hao; Woodward, Maria A.; Burke, David T.; Jeganathan, V. Swetha E.; Demirci, Hakan; Sick, Volker
2017-01-01
A micro-plenoptic system was designed to capture the three-dimensional (3D) topography of the anterior iris surface by simple single-shot imaging. Within a depth-of-field of 2.4 mm, depth resolution of 10 µm can be achieved with accuracy (systematic errors) and precision (random errors) below 20%. We demonstrated the application of our micro-plenoptic imaging system on two healthy irides, an iris with naevi, and an iris with melanoma. The ridges and folds, with height differences of 10~80 µm, on the healthy irides can be effectively captured. The front surface on the iris naevi was flat, and the iris melanoma was 50 ± 10 µm higher than the surrounding iris. The micro-plenoptic imaging system has great potential to be utilized for iris disease diagnosis and continuing, simple monitoring. PMID:29082081
Cognitive Load Differentially Impacts Response Control in Girls and Boys with ADHD
Mostofsky, Stewart H.; Rosch, Keri S.
2015-01-01
Children with attention-deficit hyperactivity disorder (ADHD) consistently show impaired response control, including deficits in response inhibition and increased intrasubject variability (ISV) compared to typically-developing (TD) children. However, significantly less research has examined factors that may influence response control in individuals with ADHD, such as task or participant characteristics. The current study extends the literature by examining the impact of increasing cognitive demands on response control in a large sample of 81 children with ADHD (40 girls) and 100 TD children (47 girls), ages 8–12 years. Participants completed a simple Go/No-Go (GNG) task with minimal cognitive demands, and a complex GNG task with increased cognitive load. Results showed that increasing cognitive load differentially impacted response control (commission error rate and tau, an ex-Gaussian measure of ISV) for girls, but not boys, with ADHD compared to same-sex TD children. Specifically, a sexually dimorphic pattern emerged such that boys with ADHD demonstrated higher commission error rate and tau on both the simple and complex GNG tasks as compared to TD boys, whereas girls with ADHD did not differ from TD girls on the simple GNG task, but showed higher commission error rate and tau on the complex GNG task. These findings suggest that task complexity influences response control in children with ADHD in a sexually dimorphic manner. The findings have substantive implications for the pathophysiology of ADHD in boys versus girls with ADHD. PMID:25624066
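The ex-Gaussian tau measure of intrasubject variability can be estimated from a reaction-time distribution as sketched below, using scipy's exponentially modified normal distribution; the parameter mapping (tau = K * scale) and the synthetic reaction times are noted in the comments, and this is not the authors' analysis code.

```python
import numpy as np
from scipy import stats

def ex_gaussian_params(reaction_times):
    """Fit an ex-Gaussian (exponentially modified Gaussian) to reaction times
    by maximum likelihood and return (mu, sigma, tau).

    scipy parameterises the distribution with (K, loc, scale), where
    mu = loc, sigma = scale and tau = K * scale.
    """
    K, loc, scale = stats.exponnorm.fit(reaction_times)
    return loc, scale, K * scale

# Synthetic RTs: mu = 350 ms, sigma = 40 ms, tau = 120 ms (illustrative values)
rng = np.random.default_rng(6)
rts = rng.normal(350.0, 40.0, size=2000) + rng.exponential(120.0, size=2000)
mu, sigma, tau = ex_gaussian_params(rts)
print(round(mu), round(sigma), round(tau))
```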
Weights and measures: a new look at bisection behaviour in neglect.
McIntosh, Robert D; Schindler, Igor; Birchall, Daniel; Milner, A David
2005-12-01
Horizontal line bisection is a ubiquitous task in the investigation of visual neglect. Patients with left neglect typically make rightward errors that increase with line length and for lines at more leftward positions. For short lines, or for lines presented in right space, these errors may 'cross over' to become leftward. We have taken a new approach to these phenomena by employing a different set of dependent and independent variables for their description. Rather than recording bisection error, we record the lateral position of the response within the workspace. We have studied how this varies when the locations of the left and right endpoints are manipulated independently. Across 30 patients with left neglect, we have observed a characteristic asymmetry between the 'weightings' accorded to the two endpoints, such that responses are less affected by changes in the location of the left endpoint than by changes in the location of the right. We show that a simple endpoint weightings analysis accounts readily for the effects of line length and spatial position, including cross-over effects, and leads to an index of neglect that is more sensitive than the standard measure. We argue that this novel approach is more parsimonious than the standard model and yields fresh insights into the nature of neglect impairment.
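One plausible reading of the endpoint-weightings analysis is a regression of the response position on the two endpoint locations, with the fitted coefficients as the weightings; the sketch below follows that reading with synthetic data and is not the authors' exact model.

```python
import numpy as np

def endpoint_weightings(left_end, right_end, response):
    """Regress response position on the left and right endpoint positions:
        response ~ w0 + wL * left_end + wR * right_end
    Returns (wL, wR, asymmetry), where asymmetry = wR - wL.
    A symmetric (non-neglect) bisector would have wL and wR close to 0.5.
    """
    A = np.column_stack((np.ones_like(left_end), left_end, right_end))
    coef, *_ = np.linalg.lstsq(A, response, rcond=None)
    w0, wL, wR = coef
    return wL, wR, wR - wL

# Synthetic "neglect-like" data: responses track the right endpoint more
rng = np.random.default_rng(7)
left = rng.uniform(-200.0, 0.0, 300)       # mm, workspace coordinates
right = rng.uniform(0.0, 200.0, 300)
resp = 0.15 * left + 0.55 * right + rng.normal(0.0, 5.0, 300)
print(endpoint_weightings(left, right, resp))   # ~ (0.15, 0.55, 0.40)
```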
Novel space-time trellis codes for free-space optical communications using transmit laser selection.
García-Zambrana, Antonio; Boluda-Ruiz, Rubén; Castillo-Vázquez, Carmen; Castillo-Vázquez, Beatriz
2015-09-21
In this paper, the deployment of novel space-time trellis codes (STTCs) with transmit laser selection (TLS) for free-space optical (FSO) communication systems using intensity modulation and direct detection (IM/DD) over atmospheric turbulence and misalignment fading channels is presented. Combining TLS and STTC with rate 1 bit/(s · Hz), a new code design criterion based on the use of the largest order statistics is here proposed for multiple-input/single-output (MISO) FSO systems in order to improve the diversity order gain by properly choosing the transmit lasers out of the available L lasers. Based on a pairwise error probability (PEP) analysis, closed-form asymptotic bit error-rate (BER) expressions in the range from low to high signal-to-noise ratio (SNR) are derived when the irradiance of the transmitted optical beam is susceptible to moderate-to-strong turbulence conditions, following a gamma-gamma (GG) distribution, and pointing error effects, following a misalignment fading model where the effect of beam width, detector size and jitter variance is considered. Obtained results show diversity orders of 2L and 3L when simple two-state and four-state STTCs are considered, respectively. Simulation results are further demonstrated to confirm the analytical results.
Interpolation for de-Dopplerisation
NASA Astrophysics Data System (ADS)
Graham, W. R.
2018-05-01
'De-Dopplerisation' is one aspect of a problem frequently encountered in experimental acoustics: deducing an emitted source signal from received data. It is necessary when source and receiver are in relative motion, and requires interpolation of the measured signal. This introduces error. In acoustics, typical current practice is to employ linear interpolation and reduce error by over-sampling. In other applications, more advanced approaches with better performance have been developed. Associated with this work is a large body of theoretical analysis, much of which is highly specialised. Nonetheless, a simple and compact performance metric is available: the Fourier transform of the 'kernel' function underlying the interpolation method. Furthermore, in the acoustics context, it is a more appropriate indicator than other, more abstract, candidates. On this basis, interpolators from three families previously identified as promising (piecewise-polynomial, windowed-sinc, and B-spline-based) are compared. The results show that significant improvements over linear interpolation can straightforwardly be obtained. The recommended approach is B-spline-based interpolation, which performs best irrespective of accuracy specification. Its only drawback is a pre-filtering requirement, which represents an additional implementation cost compared to other methods. If this cost is unacceptable, and aliasing errors (on re-sampling) up to approximately 1% can be tolerated, a family of piecewise-cubic interpolators provides the best alternative.
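The comparison the abstract reports can be illustrated by resampling a band-limited signal at non-uniform times with linear interpolation versus a cubic interpolating spline (a stand-in for the B-spline-based family, where solving for the spline coefficients plays the role of the pre-filter); the signal, sample rate, and query times are illustrative.

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Uniformly sampled "received" signal: a tone sampled a few times per period
fs = 400.0                              # sample rate, Hz
t = np.arange(0.0, 1.0, 1.0 / fs)
f0 = 50.0
signal = np.sin(2.0 * np.pi * f0 * t)

# Non-uniform retrieval times, as produced by de-Dopplerisation (illustrative)
rng = np.random.default_rng(8)
t_query = np.sort(rng.uniform(t[0], t[-1], 2000))
truth = np.sin(2.0 * np.pi * f0 * t_query)

# Linear interpolation (current common practice) versus cubic-spline
# interpolation (stand-in for the B-spline family)
linear = np.interp(t_query, t, signal)
spline = CubicSpline(t, signal)(t_query)

rms = lambda e: np.sqrt(np.mean(e ** 2))
print("linear RMS error:", rms(linear - truth))
print("cubic-spline RMS error:", rms(spline - truth))
```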
Development of virtual environments for training skills and reducing errors in laparoscopic surgery
NASA Astrophysics Data System (ADS)
Tendick, Frank; Downes, Michael S.; Cavusoglu, Murat C.; Gantert, Walter A.; Way, Lawrence W.
1998-06-01
In every surgical procedure there are key steps and skills that, if performed incorrectly, can lead to complications. In conjunction with efforts, based on task and error analysis, in the Videoscopic Training Center at UCSF to identify these key elements in laparoscopic surgical procedures, the authors are developing virtual environments and modeling methods to train the elements. Laparoscopic surgery is particularly demanding of the surgeon's spatial skills, requiring the ability to create 3D mental models and plans while viewing a 2D image. For example, operating a laparoscope with the objective lens angled from the scope axis is a skill that some surgeons have difficulty mastering, even after using the instrument in many procedures. Virtual environments are a promising medium for teaching spatial skills. A kinematically accurate model of an angled laparoscope in an environment of simple targets is being tested in courses for novice and experienced surgeons. Errors in surgery are often due to a misinterpretation of local anatomy compounded with inadequate procedural knowledge. Methods to avoid bile duct injuries in cholecystectomy are being integrated into a deformable environment consisting of the liver, gallbladder, and biliary tree. Novel deformable tissue modeling algorithms based on finite element methods will be used to improve the response of the anatomical models.
Network reconstruction via graph blending
NASA Astrophysics Data System (ADS)
Estrada, Rolando
2016-05-01
Graphs estimated from empirical data are often noisy and incomplete due to the difficulty of faithfully observing all the components (nodes and edges) of the true graph. This problem is particularly acute for large networks where the number of components may far exceed available surveillance capabilities. Errors in the observed graph can render subsequent analyses invalid, so it is vital to develop robust methods that can minimize these observational errors. Errors in the observed graph may include missing and spurious components, as well as fused (multiple nodes are merged into one) and split (a single node is misinterpreted as many) nodes. Traditional graph reconstruction methods are only able to identify missing or spurious components (primarily edges, and to a lesser degree nodes), so we developed a novel graph blending framework that allows us to cast the full estimation problem as a simple edge addition/deletion problem. Armed with this framework, we systematically investigate the viability of various topological graph features, such as the degree distribution or the clustering coefficients, and existing graph reconstruction methods for tackling the full estimation problem. Our experimental results suggest that incorporating any topological feature as a source of information actually hinders reconstruction accuracy. We provide a theoretical analysis of this phenomenon and suggest several avenues for improving this estimation problem.
Antenna Deployment for the Localization of Partial Discharges in Open-Air Substations
Robles, Guillermo; Fresno, José Manuel; Sánchez-Fernández, Matilde; Martínez-Tarifa, Juan Manuel
2016-01-01
Partial discharges are ionization processes inside or on the surface of dielectrics that can unveil insulation problems in electrical equipment. The charge accumulated is released under certain environmental and voltage conditions attacking the insulation both physically and chemically. The final consequence of a continuous occurrence of these events is the breakdown of the dielectric. The electron avalanche provokes a derivative of the electric field with respect to time, creating an electromagnetic impulse that can be detected with antennas. The localization of the source helps in the identification of the piece of equipment that has to be decommissioned. This can be done by deploying antennas and calculating the time difference of arrival (TDOA) of the electromagnetic pulses. However, small errors in this parameter can lead to great displacements of the calculated position of the source. Usually, four antennas are used to find the source but the array geometry has to be correctly deployed to have minimal errors in the localization. This paper demonstrates, by an analysis based on simulation and also experimentally, that the most common layouts are not always the best options and proposes a simple antenna layout to reduce the systematic error in the TDOA calculation due to the positions of the antennas in the array. PMID:27092501
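A minimal sketch of the localisation step: measured TDOAs relative to a reference antenna are fitted to a candidate source position by nonlinear least squares. The square antenna layout, source position, and timing-noise level are illustrative; perturbing the TDOAs shows how the position error depends on the array geometry.

```python
import numpy as np
from scipy.optimize import least_squares

C = 3e8  # propagation speed of the electromagnetic pulse, m/s

def tdoa_residuals(source_xy, antennas, tdoas):
    """Residuals between measured TDOAs (w.r.t. antenna 0) and those predicted
    for a candidate 2-D source position."""
    d = np.linalg.norm(antennas - source_xy, axis=1)
    predicted = (d[1:] - d[0]) / C
    return predicted - tdoas

def locate_source(antennas, tdoas, x0=(0.0, 0.0)):
    """Least-squares TDOA localisation in the plane."""
    sol = least_squares(tdoa_residuals, x0, args=(antennas, tdoas))
    return sol.x

# Illustrative 4-antenna layout (metres) and a partial-discharge source
antennas = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
source = np.array([3.0, 7.0])
true_tdoas = tdoa_residuals(source, antennas, np.zeros(3))  # noise-free TDOAs

# Small timing errors (here 0.2 ns) translate into a position error whose size
# depends strongly on the array geometry
rng = np.random.default_rng(9)
noisy_tdoas = true_tdoas + rng.normal(0.0, 0.2e-9, size=3)
print(locate_source(antennas, noisy_tdoas))
```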
Honest Importance Sampling with Multiple Markov Chains
Tan, Aixin; Doss, Hani; Hobert, James P.
2017-01-01
Importance sampling is a classical Monte Carlo technique in which a random sample from one probability density, π1, is used to estimate an expectation with respect to another, π. The importance sampling estimator is strongly consistent and, as long as two simple moment conditions are satisfied, it obeys a central limit theorem (CLT). Moreover, there is a simple consistent estimator for the asymptotic variance in the CLT, which makes for routine computation of standard errors. Importance sampling can also be used in the Markov chain Monte Carlo (MCMC) context. Indeed, if the random sample from π1 is replaced by a Harris ergodic Markov chain with invariant density π1, then the resulting estimator remains strongly consistent. There is a price to be paid however, as the computation of standard errors becomes more complicated. First, the two simple moment conditions that guarantee a CLT in the iid case are not enough in the MCMC context. Second, even when a CLT does hold, the asymptotic variance has a complex form and is difficult to estimate consistently. In this paper, we explain how to use regenerative simulation to overcome these problems. Actually, we consider a more general set up, where we assume that Markov chain samples from several probability densities, π1, …, πk, are available. We construct multiple-chain importance sampling estimators for which we obtain a CLT based on regeneration. We show that if the Markov chains converge to their respective target distributions at a geometric rate, then under moment conditions similar to those required in the iid case, the MCMC-based importance sampling estimator obeys a CLT. Furthermore, because the CLT is based on a regenerative process, there is a simple consistent estimator of the asymptotic variance. We illustrate the method with two applications in Bayesian sensitivity analysis. The first concerns one-way random effects models under different priors. The second involves Bayesian variable selection in linear regression, and for this application, importance sampling based on multiple chains enables an empirical Bayes approach to variable selection. PMID:28701855
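The iid case described above can be sketched as a self-normalised importance sampling estimator with its CLT-based standard error; as the abstract stresses, this naive standard error is not valid once the draws come from a Markov chain. The target, proposal, and test function below are illustrative.

```python
import numpy as np
from scipy import stats

def importance_sampling(f, target_pdf, proposal_pdf, proposal_sample, n, rng):
    """Self-normalised importance sampling estimate of E_pi[f(X)] using draws
    from a proposal density pi_1, with a CLT-based standard error that is
    valid for iid draws (NOT for Markov chain draws)."""
    x = proposal_sample(n, rng)
    w = target_pdf(x) / proposal_pdf(x)
    est = np.sum(w * f(x)) / np.sum(w)
    # Delta-method standard error for the ratio estimator (iid case)
    z = w * (f(x) - est)
    se = np.sqrt(np.sum(z ** 2)) / np.sum(w)
    return est, se

# Toy example: target pi = N(0, 1), proposal pi_1 = Student-t with 3 dof
rng = np.random.default_rng(10)
est, se = importance_sampling(
    f=lambda x: x ** 2,                                  # estimate E[X^2] = 1
    target_pdf=lambda x: stats.norm.pdf(x),
    proposal_pdf=lambda x: stats.t.pdf(x, df=3),
    proposal_sample=lambda n, r: r.standard_t(3, size=n),
    n=50_000, rng=rng)
print(est, "+/-", 1.96 * se)
```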
Kasuga, Shoko; Kurata, Makiko; Liu, Meigen; Ushiba, Junichi
2015-05-01
The human motor learning system, for all its sophistication, paradoxically interferes with motor performance when visual information is mirror-reversed (MR), because normal movement error correction further aggravates the error. This error-increasing mechanism makes even a simple reaching task difficult to perform, but it is overcome by alterations in the error correction rule during the trials. To isolate the factors that trigger learners to change the error correction rule, we manipulated the gain of visual angular errors when participants made arm-reaching movements with mirror-reversed visual feedback, and compared the timing of the rule alteration between groups with normal or reduced gain. Trial-by-trial changes in the visual angular error were tracked to explain the timing of the change in the error correction rule. Under both gain conditions, visual angular errors increased under the MR transformation and suddenly decreased after 3-5 trials of increase. The increase leveled off at different amplitudes in the two groups, nearly proportional to the visual gain. The findings suggest that the alteration of the error-correction rule does not depend on the amplitude of visual angular errors and is possibly determined by the number of trials over which the errors increased or by a statistical property of the environment. The current results encourage future intensive studies focusing on the exact rule-change mechanism. Copyright © 2014 Elsevier Ireland Ltd and the Japan Neuroscience Society. All rights reserved.
Specificity of Atmospheric Correction of Satellite Data on Ocean Color in the Far East
NASA Astrophysics Data System (ADS)
Aleksanin, A. I.; Kachur, V. A.
2017-12-01
Calculation errors in ocean-brightness coefficients in the Far East are analyzed for two atmospheric correction algorithms (NIR and MUMM). Daylight measurements in different water types show that the main error component is systematic and has a simple dependence on the magnitudes of the coefficients. The causes of this error behavior are considered. The most probable explanation for the large errors in ocean-color parameters in the Far East is a high concentration of light-absorbing continental aerosol. A comparison between satellite and in situ measurements at AERONET stations in the United States and South Korea has been made. It is shown that the errors in these two regions differ by up to a factor of 10 for similar water turbidity and relatively high aerosol optical-depth computation precision when the NIR atmospheric correction is used.
Prevalence of refraction errors and color blindness in heavy vehicle drivers
Erdoğan, Haydar; Özdemir, Levent; Arslan, Seher; Çetin, Ilhan; Özeç, Ayşe Vural; Çetinkaya, Selma; Sümer, Haldun
2011-01-01
AIM To investigate the frequency of eye disorders in heavy vehicle drivers. METHODS A cross-sectional study was conducted between November 2004 and September 2006 in 200 drivers and 200 non-drivers. A complete ophthalmologic examination was performed, including visual acuity and dilated examination of the posterior segment. An auto refractometer was used to determine refractive errors. RESULTS According to the eye examination results, the prevalence of refractive error was 21.5% and 31.3% in the study and control groups, respectively (P<0.05). The most common type of refractive error in the study group was myopic astigmatism (8.3%), while in the control group it was simple myopia (12.8%). The prevalence of dyschromatopsia in the drivers, the control group, and the total group was 2.2%, 2.8%, and 2.6%, respectively. CONCLUSION A considerable number of drivers lack optimal visual acuity. Refractive errors in drivers may impair traffic safety. PMID:22553671
NASA Astrophysics Data System (ADS)
Gerck, Ed
We present a new, comprehensive framework to qualitatively improve election outcome trustworthiness, where voting is modeled as an information transfer process. Although voting is deterministic (all ballots are counted), information is treated stochastically using Information Theory. Error considerations, including faults, attacks, and threats by adversaries, are explicitly included. The influence of errors may be corrected to achieve an election outcome error as close to zero as desired (error-free), with a provably optimal design that is applicable to any type of voting, with or without ballots. Sixteen voting system requirements, including functional, performance, environmental and non-functional considerations, are derived and rated, meeting or exceeding current public-election requirements. The voter and the vote are unlinkable (secret ballot) although each is identifiable. The Witness-Voting System (Gerck, 2001) is extended as a conforming implementation of the provably optimal design that is error-free, transparent, simple, scalable, robust, receipt-free, universally-verifiable, 100% voter-verified, and end-to-end audited.
Prediction error induced motor contagions in human behaviors.
Ikegami, Tsuyoshi; Ganesh, Gowrishankar; Takeuchi, Tatsuya; Nakamoto, Hiroki
2018-05-29
Motor contagions refer to implicit effects on one's actions induced by observed actions. Motor contagions are believed to be induced simply by action observation and cause an observer's action to become similar to the action observed. In contrast, here we report a new motor contagion that is induced only when the observation is accompanied by prediction errors - differences between actions one observes and those he/she predicts or expects. In two experiments, one on whole-body baseball pitching and another on simple arm reaching, we show that the observation of the same action induces distinct motor contagions, depending on whether prediction errors are present or not. In the absence of prediction errors, as in previous reports, participants' actions changed to become similar to the observed action, while in the presence of prediction errors, their actions changed to diverge away from it, suggesting distinct effects of action observation and action prediction on human actions. © 2018, Ikegami et al.
Random synaptic feedback weights support error backpropagation for deep learning
NASA Astrophysics Data System (ADS)
Lillicrap, Timothy P.; Cownden, Daniel; Tweed, Douglas B.; Akerman, Colin J.
2016-11-01
The brain processes information through multiple layers of neurons. This deep architecture is representationally powerful, but complicates learning because it is difficult to identify the responsible neurons when a mistake is made. In machine learning, the backpropagation algorithm assigns blame by multiplying error signals with all the synaptic weights on each neuron's axon and further downstream. However, this involves a precise, symmetric backward connectivity pattern, which is thought to be impossible in the brain. Here we demonstrate that this strong architectural constraint is not required for effective error propagation. We present a surprisingly simple mechanism that assigns blame by multiplying errors by even random synaptic weights. This mechanism can transmit teaching signals across multiple layers of neurons and performs as effectively as backpropagation on a variety of tasks. Our results help reopen questions about how the brain could use error signals and dispel long-held assumptions about algorithmic constraints on learning.
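A minimal sketch of the idea, assuming a two-layer network on a toy regression task (the architecture and task are not from the paper): output errors are propagated through a fixed random matrix B instead of the transpose of the forward weights.

```python
# Feedback-alignment-style sketch: a two-layer network trained with a
# fixed random feedback matrix B in place of W2.T in the backward pass.
import numpy as np

rng = np.random.default_rng(1)
n_in, n_hid, n_out, lr = 4, 32, 1, 0.05
W1 = rng.normal(0, 0.5, (n_hid, n_in))
W2 = rng.normal(0, 0.5, (n_out, n_hid))
B = rng.normal(0, 0.5, (n_hid, n_out))    # random, fixed feedback weights

X = rng.normal(size=(256, n_in))
y = X[:, :1] * X[:, 1:2]                  # toy regression target

for _ in range(2000):
    h = np.tanh(X @ W1.T)
    out = h @ W2.T
    e = out - y                           # output error
    dW2 = e.T @ h / len(X)
    dh = (e @ B.T) * (1 - h**2)           # error sent back through B, not W2.T
    dW1 = dh.T @ X / len(X)
    W1 -= lr * dW1
    W2 -= lr * dW2

print(float(np.mean((np.tanh(X @ W1.T) @ W2.T - y) ** 2)))  # final training error
```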
Error Mitigation for Short-Depth Quantum Circuits
NASA Astrophysics Data System (ADS)
Temme, Kristan; Bravyi, Sergey; Gambetta, Jay M.
2017-11-01
Two schemes are presented that mitigate the effect of errors and decoherence in short-depth quantum circuits. The size of the circuits for which these techniques can be applied is limited by the rate at which the errors in the computation are introduced. Near-term applications of early quantum devices, such as quantum simulations, rely on accurate estimates of expectation values to become relevant. Decoherence and gate errors lead to wrong estimates of the expectation values of observables used to evaluate the noisy circuit. The two schemes we discuss are deliberately simple and do not require additional qubit resources, so to be as practically relevant in current experiments as possible. The first method, extrapolation to the zero noise limit, subsequently cancels powers of the noise perturbations by an application of Richardson's deferred approach to the limit. The second method cancels errors by resampling randomized circuits according to a quasiprobability distribution.
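The first scheme can be illustrated with a generic Richardson extrapolation to zero noise; the synthetic noise model and amplification factors below are illustrative assumptions rather than the paper's experimental setup.

```python
# Zero-noise extrapolation sketch: given expectation values measured at
# several artificially amplified noise levels, Richardson-extrapolate to
# zero noise. The noisy model below is synthetic, for illustration only.
import numpy as np

def richardson_zero_noise(scales, values):
    """Fit a polynomial through (scale, value) pairs and return its value
    at scale 0; with n points this cancels noise terms up to order n-1."""
    coeffs = np.polyfit(scales, values, deg=len(scales) - 1)
    return np.polyval(coeffs, 0.0)

exact = 0.8
scales = np.array([1.0, 2.0, 3.0])        # noise amplification factors
noisy = exact * np.exp(-0.1 * scales)     # toy noisy expectation values
print(richardson_zero_noise(scales, noisy), exact)
```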
Financial model calibration using consistency hints.
Abu-Mostafa, Y S
2001-01-01
We introduce a technique for forcing the calibration of a financial model to produce valid parameters. The technique is based on learning from hints. It converts simple curve fitting into genuine calibration, where broad conclusions can be inferred from parameter values. The technique augments the error function of curve fitting with consistency hint error functions based on the Kullback-Leibler distance. We introduce an efficient EM-type optimization algorithm tailored to this technique. We also introduce other consistency hints, and balance their weights using canonical errors. We calibrate the correlated multifactor Vasicek model of interest rates, and apply it successfully to Japanese Yen swaps market and US dollar yield market.
Simple measurement of lenticular lens quality for autostereoscopic displays
NASA Astrophysics Data System (ADS)
Gray, Stuart; Boudreau, Robert A.
2013-03-01
Lenticular lens based autostereoscopic 3D displays are finding many applications in digital signage and consumer electronics devices. A high quality 3D viewing experience requires the lenticular lens be properly aligned with the pixels on the display device so that each eye views the correct image. This work presents a simple and novel method for rapidly assessing the quality of a lenticular lens to be used in autostereoscopic displays. Errors in lenticular alignment across the entire display are easily observed with a simple test pattern where adjacent views are programmed to display different colors.
Mortensen, Kim I; Tassone, Chiara; Ehrlich, Nicky; Andresen, Thomas L; Flyvbjerg, Henrik
2018-05-09
Nanosize lipid vesicles are used extensively at the interface between nanotechnology and biology, e.g., as containers for chemical reactions at minute concentrations and vehicles for targeted delivery of pharmaceuticals. Typically, vesicle samples are heterogeneous as regards vesicle size and structural properties. Consequently, vesicles must be characterized individually to ensure correct interpretation of experimental results. Here we do that using dual-color fluorescence labeling of vesicles: their lipid bilayers and lumens are labeled separately. A vesicle then images as two spots, one in each color channel. A simple image analysis determines the total intensity and width of each spot. These four data all depend on the vesicle radius in a simple manner for vesicles that are spherical, unilamellar, and optimal encapsulators of molecular cargo. This permits identification of such ideal vesicles. They in turn enable calibration of the dual-color fluorescence microscopy images they appear in. Since this calibration is not a separate experiment but an analysis of images of vesicles to be characterized, it eliminates the potential source of error that a separate calibration experiment would have been. Nonideal vesicles in the same images were characterized by how their four data violate the calibrated relationship established for ideal vesicles. In this way, our method yields size, shape, lamellarity, and encapsulation efficiency of each imaged vesicle. Applying this procedure to extruded samples of vesicles, we found that, contrary to common assumptions, only a fraction of vesicles are ideal.
Development of analysis technique to predict the material behavior of blowing agent
NASA Astrophysics Data System (ADS)
Hwang, Ji Hoon; Lee, Seonggi; Hwang, So Young; Kim, Naksoo
2014-11-01
In order to numerically simulate the foaming behavior of mastic sealer containing a blowing agent, foaming and driving-force models are needed that incorporate the foaming characteristics. In addition, an elastic stress model is required to represent the material behavior of the co-existing phase of liquid and cured polymer. It is important to determine thermal properties such as thermal conductivity and specific heat because foaming behavior is heavily influenced by temperature change. In this study, three models are proposed to explain the foaming process and the material behavior during and after the process. To obtain the material parameters of each model, the following experiments and corresponding numerical simulations are performed: a thermal test, a simple shear test, and a foaming test. Error functions are defined as the differences between the experimental measurements and the numerical simulation results, and the parameters are determined by minimizing these error functions. To verify the obtained parameters, a confirmation simulation for each model is conducted by applying the determined parameters. Cross-verification is performed by measuring the foaming/shrinkage force; the cross-verification results tended to follow the experimental results. Interestingly, it was possible to estimate the micro-deformation occurring in an automobile roof surface by applying the proposed model to an oven process analysis. The application of the developed analysis technique will contribute to designs with minimized micro-deformation.
Baek, Hyun Jae; Shin, JaeWook; Jin, Gunwoo; Cho, Jaegeol
2017-10-24
Photoplethysmographic signals are useful for heart rate variability analysis in practical ambulatory applications. While reducing the sampling rate of signals is an important consideration for modern wearable devices that enable 24/7 continuous monitoring, there have not been many studies that have investigated how to compensate the low timing resolution of low-sampling-rate signals for accurate heart rate variability analysis. In this study, we utilized the parabola approximation method and measured it against the conventional cubic spline interpolation method for the time, frequency, and nonlinear domain variables of heart rate variability. For each parameter, the intra-class correlation, standard error of measurement, Bland-Altman 95% limits of agreement and root mean squared relative error were presented. Also, elapsed time taken to compute each interpolation algorithm was investigated. The results indicated that parabola approximation is a simple, fast, and accurate algorithm-based method for compensating the low timing resolution of pulse beat intervals. In addition, the method showed comparable performance with the conventional cubic spline interpolation method. Even though the absolute value of the heart rate variability variables calculated using a signal sampled at 20 Hz were not exactly matched with those calculated using a reference signal sampled at 250 Hz, the parabola approximation method remains a good interpolation method for assessing trends in HRV measurements for low-power wearable applications.
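A minimal sketch of parabolic peak refinement, assuming a synthetic pulse sampled at a low rate; this is not the authors' implementation, only the standard three-point parabola fit around a sampled maximum.

```python
# Parabola-approximation sketch: refine the timing of a sampled pulse peak
# by fitting a parabola through the maximum sample and its two neighbors.
import numpy as np

def parabolic_peak_time(signal, fs):
    i = int(np.argmax(signal))
    y0, y1, y2 = signal[i - 1], signal[i], signal[i + 1]
    # vertex offset (in samples) of the parabola through the three points
    delta = 0.5 * (y0 - y2) / (y0 - 2 * y1 + y2)
    return (i + delta) / fs

fs = 20.0                                   # low sampling rate, Hz
t = np.arange(0, 2, 1 / fs)
pulse = np.exp(-((t - 0.9037) ** 2) / (2 * 0.05**2))  # true peak at 0.9037 s
print(parabolic_peak_time(pulse, fs))       # sub-sample estimate of the peak time
```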
NASA Astrophysics Data System (ADS)
Brandstetter, M.; Volgger, L.; Genner, A.; Jungbauer, C.; Lendl, B.
2013-02-01
This work reports on a compact sensor for fast and reagent-free point-of-care determination of glucose, lactate and triglycerides in blood serum based on a tunable (1030-1230 cm-1) external-cavity quantum cascade laser (EC-QCL). For simple and robust operation a single beam set-up was designed and only thermoelectric cooling was used for the employed laser and detector. Full computer control of the analysis, including liquid handling and data analysis, facilitated routine measurements. A high optical pathlength (>100 μm) is a prerequisite for robust measurements in clinical practice. Hence, the optimum optical pathlength for transmission measurements in aqueous solution was considered in theory and experiment. The pathlength giving the maximum signal-to-noise ratio (SNR) was experimentally determined to be around 140 μm for the QCL blood sensor and around 50 μm for a standard FT-IR spectrometer employing a liquid-nitrogen-cooled mercury cadmium telluride (MCT) detector. A single absorption spectrum was used to calculate the analyte concentrations simultaneously using partial least squares (PLS) regression analysis. Glucose was determined in blood serum with a prediction error (RMSEP) of 6.9 mg/dl and triglycerides with a root mean square error of cross-validation (RMSECV) of 17.5 mg/dl in a set of 42 different patients. In spiked serum samples the lactate concentration could be determined with an RMSECV of 8.9 mg/dl.
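The PLS step can be sketched generically with scikit-learn on synthetic spectra and concentrations; the data, number of latent variables, and cross-validation folds below are illustrative assumptions, not the study's calibration set.

```python
# PLS regression sketch for predicting an analyte concentration from
# absorption spectra; spectra and concentrations are synthetic placeholders.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(2)
n_samples, n_wavenumbers = 42, 200
conc = rng.uniform(50, 250, n_samples)                    # e.g. glucose, mg/dl
basis = rng.normal(size=n_wavenumbers)
spectra = np.outer(conc, basis) + rng.normal(0, 5, (n_samples, n_wavenumbers))

pls = PLSRegression(n_components=5)
pred = cross_val_predict(pls, spectra, conc, cv=7).ravel()
rmsecv = np.sqrt(np.mean((pred - conc) ** 2))             # RMSECV, mg/dl
print(rmsecv)
```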
NASA Technical Reports Server (NTRS)
Gupta, Hoshin V.; Kling, Harald; Yilmaz, Koray K.; Martinez-Baquero, Guillermo F.
2009-01-01
The mean squared error (MSE) and the related normalization, the Nash-Sutcliffe efficiency (NSE), are the two criteria most widely used for calibration and evaluation of hydrological models with observed data. Here, we present a diagnostically interesting decomposition of NSE (and hence MSE), which facilitates analysis of the relative importance of its different components in the context of hydrological modelling, and show how model calibration problems can arise due to interactions among these components. The analysis is illustrated by calibrating a simple conceptual precipitation-runoff model to daily data for a number of Austrian basins having a broad range of hydro-meteorological characteristics. Evaluation of the results clearly demonstrates the problems that can be associated with any calibration based on the NSE (or MSE) criterion. While we propose and test an alternative criterion that can help to reduce model calibration problems, the primary purpose of this study is not to present an improved measure of model performance. Instead, we seek to show that there are systematic problems inherent with any optimization based on formulations related to the MSE. The analysis and results have implications for the manner in which we calibrate and evaluate environmental models; we discuss these and suggest possible ways forward that may move us towards an improved and diagnostically meaningful approach to model performance evaluation and identification.
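A minimal sketch of the kind of decomposition discussed above, assuming the form NSE = 2*alpha*r - alpha^2 - beta_n^2, with alpha the ratio of simulated to observed standard deviations, beta_n the bias normalized by the observed standard deviation, and r the linear correlation; the hydrological data below are synthetic.

```python
# NSE decomposition sketch: express NSE in terms of correlation, variability
# ratio, and normalized bias (assumed form: NSE = 2*alpha*r - alpha**2 - beta_n**2).
import numpy as np

def nse_components(sim, obs):
    r = np.corrcoef(sim, obs)[0, 1]
    alpha = sim.std() / obs.std()
    beta_n = (sim.mean() - obs.mean()) / obs.std()
    nse = 2 * alpha * r - alpha**2 - beta_n**2
    return nse, r, alpha, beta_n

rng = np.random.default_rng(3)
obs = rng.gamma(2.0, 1.5, 365)                      # daily runoff, arbitrary units
sim = 0.9 * obs + rng.normal(0, 0.5, obs.size)      # imperfect model output
print(nse_components(sim, obs))
```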
Reliable Channel-Adapted Error Correction: Bacon-Shor Code Recovery from Amplitude Damping
NASA Astrophysics Data System (ADS)
Piedrafita, Álvaro; Renes, Joseph M.
2017-12-01
We construct two simple error correction schemes adapted to amplitude damping noise for Bacon-Shor codes and investigate their prospects for fault-tolerant implementation. Both consist solely of Clifford gates and require far fewer qubits, relative to the standard method, to achieve exact correction to a desired order in the damping rate. The first, employing one-bit teleportation and single-qubit measurements, needs only one-fourth as many physical qubits, while the second, using just stabilizer measurements and Pauli corrections, needs only half. The improvements stem from the fact that damping events need only be detected, not corrected, and that effective phase errors arising due to undamped qubits occur at a lower rate than damping errors. For error correction that is itself subject to damping noise, we show that existing fault-tolerance methods can be employed for the latter scheme, while the former can be made to avoid potential catastrophic errors and can easily cope with damping faults in ancilla qubits.
Choose and choose again: appearance-reality errors, pragmatics and logical ability.
Deák, Gedeon O; Enright, Brian
2006-05-01
In the Appearance/Reality (AR) task some 3- and 4-year-old children make perseverative errors: they choose the same word for the appearance and the function of a deceptive object. Are these errors specific to the AR task, or signs of a general question-answering problem? Preschoolers completed five tasks: AR; simple successive forced-choice question pairs (QP); flexible naming of objects (FN); working memory (WM) span; and indeterminacy detection (ID). AR errors correlated with QP errors. Insensitivity to indeterminacy predicted perseveration in both tasks. Neither WM span nor flexible naming predicted other measures. Age predicted sensitivity to indeterminacy. These findings suggest that AR tests measure a pragmatic understanding; specifically, different questions about a topic usually call for different answers. This understanding is related to the ability to detect indeterminacy of each question in a series. AR errors are unrelated to the ability to represent an object as belonging to multiple categories, to working memory span, or to inhibiting previously activated words.
NASA Technical Reports Server (NTRS)
Truong, T. K.; Hsu, I. S.; Eastman, W. L.; Reed, I. S.
1987-01-01
It is well known that the Euclidean algorithm or its equivalent, continued fractions, can be used to find the error locator polynomial and the error evaluator polynomial in Berlekamp's key equation needed to decode a Reed-Solomon (RS) code. A simplified procedure is developed and proved to correct erasures as well as errors by replacing the initial condition of the Euclidean algorithm by the erasure locator polynomial and the Forney syndrome polynomial. By this means, the errata locator polynomial and the errata evaluator polynomial can be obtained, simultaneously and simply, by the Euclidean algorithm only. With this improved technique the complexity of time domain RS decoders for correcting both errors and erasures is reduced substantially from previous approaches. As a consequence, decoders for correcting both errors and erasures of RS codes can be made more modular, regular, simple, and naturally suitable for both VLSI and software implementation. An example illustrating this modified decoding procedure is given for a (15, 9) RS code.
Simplified Estimation and Testing in Unbalanced Repeated Measures Designs.
Spiess, Martin; Jordan, Pascal; Wendt, Mike
2018-05-07
In this paper we propose a simple estimator for unbalanced repeated measures design models where each unit is observed at least once in each cell of the experimental design. The estimator does not require a model of the error covariance structure. Thus, circularity of the error covariance matrix and estimation of correlation parameters and variances are not necessary. Together with a weak assumption about the reason for the varying number of observations, the proposed estimator and its variance estimator are unbiased. As an alternative to confidence intervals based on the normality assumption, a bias-corrected and accelerated bootstrap technique is considered. We also propose the naive percentile bootstrap for Wald-type tests, where the standard Wald test may break down when the number of observations is small relative to the number of parameters to be estimated. In a simulation study we illustrate the properties of the estimator and the bootstrap techniques to calculate confidence intervals and conduct hypothesis tests in small and large samples under normality and non-normality of the errors. The results imply that the simple estimator is only slightly less efficient than an estimator that correctly assumes a block structure of the error correlation matrix, a special case of which is an equi-correlation matrix. Application of the estimator and the bootstrap technique is illustrated using data from a task-switch experiment based on a within-subjects experimental design with 32 cells and 33 participants.
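The naive percentile bootstrap mentioned above can be sketched for a simple per-subject effect; the data, number of subjects, and resample count are illustrative assumptions, not the task-switch data set.

```python
# Naive percentile bootstrap sketch for a condition-mean difference in a
# repeated measures setting, as one alternative to normal-theory intervals.
import numpy as np

rng = np.random.default_rng(4)
n_subjects = 33
diff = rng.normal(0.03, 0.08, n_subjects)   # per-subject effect (e.g. a switch cost)

boot = np.array([rng.choice(diff, size=n_subjects, replace=True).mean()
                 for _ in range(5000)])
ci = np.percentile(boot, [2.5, 97.5])       # 95% percentile bootstrap CI
print(diff.mean(), ci)
```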
Correction of Refractive Errors in Rhesus Macaques (Macaca mulatta) Involved in Visual Research
Mitchell, Jude F; Boisvert, Chantal J; Reuter, Jon D; Reynolds, John H; Leblanc, Mathias
2014-01-01
Macaques are the most common animal model for studies in vision research, and due to their high value as research subjects, often continue to participate in studies well into old age. As is true in humans, visual acuity in macaques is susceptible to refractive errors. Here we report a case study in which an aged macaque demonstrated clear impairment in visual acuity according to performance on a demanding behavioral task. Refraction demonstrated bilateral myopia that significantly affected behavioral and visual tasks. Using corrective lenses, we were able to restore visual acuity. After correction of myopia, the macaque's performance on behavioral tasks was comparable to that of a healthy control. We screened 20 other male macaques to assess the incidence of refractive errors and ocular pathologies in a larger population. Hyperopia was the most frequent ametropia but was mild in all cases. A second macaque had mild myopia and astigmatism in one eye. There were no other pathologies observed on ocular examination. We developed a simple behavioral task that visual research laboratories could use to test visual acuity in macaques. The test was reliable and easily learned by the animals in 1 d. This case study stresses the importance of screening macaques involved in visual science for refractive errors and ocular pathologies to ensure the quality of research; we also provide simple methodology for screening visual acuity in these animals. PMID:25427343
Regression dilution bias: tools for correction methods and sample size calculation.
Berglund, Lars
2012-08-01
Random errors in measurement of a risk factor will introduce downward bias of an estimated association to a disease or a disease marker. This phenomenon is called regression dilution bias. A bias correction may be made with data from a validity study or a reliability study. In this article we give a non-technical description of designs of reliability studies with emphasis on selection of individuals for a repeated measurement, assumptions of measurement error models, and correction methods for the slope in a simple linear regression model where the dependent variable is a continuous variable. Also, we describe situations where correction for regression dilution bias is not appropriate. The methods are illustrated with the association between insulin sensitivity measured with the euglycaemic insulin clamp technique and fasting insulin, where measurement of the latter variable carries noticeable random error. We provide software tools for estimation of a corrected slope in a simple linear regression model assuming data for a continuous dependent variable and a continuous risk factor from a main study and an additional measurement of the risk factor in a reliability study. Also, we supply programs for estimation of the number of individuals needed in the reliability study and for choice of its design. Our conclusion is that correction for regression dilution bias is seldom applied in epidemiological studies. This may cause important effects of risk factors with large measurement errors to be neglected.
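A minimal sketch of the correction, assuming a simulated main study plus a reliability study with two replicate measurements; the corrected slope is the naive slope divided by the estimated reliability ratio. This illustrates the general idea only, not the authors' software tools.

```python
# Regression dilution sketch: estimate a slope from an error-prone risk
# factor, then correct it using a reliability ratio from repeated measurements.
import numpy as np

rng = np.random.default_rng(5)
n = 500
x_true = rng.normal(0, 1, n)
y = 2.0 * x_true + rng.normal(0, 1, n)            # true slope = 2
x_obs = x_true + rng.normal(0, 0.7, n)            # error-prone measurement

naive_slope = np.polyfit(x_obs, y, 1)[0]          # attenuated estimate

# reliability study: two replicate measurements on a subsample
x1 = x_true[:200] + rng.normal(0, 0.7, 200)
x2 = x_true[:200] + rng.normal(0, 0.7, 200)
within_var = np.var(x1 - x2, ddof=1) / 2          # measurement error variance
total_var = np.var(np.r_[x1, x2], ddof=1)
reliability = 1 - within_var / total_var          # estimated reliability ratio

print(naive_slope, naive_slope / reliability)     # corrected slope, close to 2
```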
Can we improve patient safety?
Corbally, Martin Thomas
2014-01-01
Despite greater awareness of patient safety issues, especially in the operating room, and the widespread implementation of the World Health Organization (WHO) surgical time out, errors, especially wrong-site surgery, continue. Most such errors are due to lapses in communication in which decision makers fail to consult or confirm operative findings or, worryingly, in which parental concerns over the planned procedure are ignored or not followed through. The WHO Surgical Pause/Time Out aims to capture and prevent these errors, but the combination of human error and complex hospital environments can overwhelm even robust safety structures and simple common sense. Parents are the ultimate repository of information on their child's condition and planned surgery but are traditionally excluded from the Surgical Pause and Time Out, perhaps to avoid additional stress. In addition, surgeons, like pilots, are subject to the phenomenon of "plan-continue-fail", with potentially disastrous outcomes. If we wish to improve patient safety during surgery and avoid wrong-site errors, then we must include parents in the Surgical Pause/Time Out. A recent pilot study has shown that neither staff nor parents found that this added to their stress; moreover, 100% of parents considered that it should be a mandatory component of the Surgical Pause. Surgeons should be required to confirm that the planned procedure is in keeping with the operative findings, especially in extirpative surgery, and this "step back" should be incorporated into the standard Surgical Pause. It is clear that we must improve patient safety further, and these simple measures should add to that potential.
Pinzon Morales, Ruben Dario; Hirata, Yutaka
2016-12-20
Motor learning in the cerebellum is believed to entail plastic changes at synapses between parallel fibers and Purkinje cells, induced by the teaching signal conveyed in the climbing fiber (CF) input. Despite the abundant research on the cerebellum, the nature of this signal is still a matter of debate. Two types of movement error information have been proposed as plausible teaching signals: sensory error (SE) and motor command error (ME); however, their plausibility has not been tested in the real world. Here, we conducted a comparison of different types of CF teaching signals in real-world engineering applications by using a realistic neuronal network model of the cerebellum. We employed a direct current motor (simple task) and a two-wheeled balancing robot (difficult task). We demonstrate that SE, ME, or a linear combination of the two is sufficient to yield comparable performance in a simple task. When the task is more difficult, although SE slightly outperformed ME, these types of error information are all able to adequately control the robot. We categorize granule cells according to their inputs and the error signal, revealing that different granule cells are preferentially engaged for SE, ME, or their combination. Thus, unlike previous theoretical and simulation studies that support either SE or ME, it is demonstrated for the first time in a real-world engineering application that both SE and ME are adequate as the CF teaching signal in a realistic computational cerebellar model, even when the control task is as difficult as stabilizing a two-wheeled balancing robot.
NASA Astrophysics Data System (ADS)
Mermin, N. David
2007-08-01
Preface; 1. Cbits and Qbits; 2. General features and some simple examples; 3. Breaking RSA encryption with a quantum computer; 4. Searching with a quantum computer; 5. Quantum error correction; 6. Protocols that use just a few Qbits; Appendices; Index.
Effect of signal intensity and camera quantization on laser speckle contrast analysis
Song, Lipei; Elson, Daniel S.
2012-01-01
Laser speckle contrast analysis (LASCA) is limited to being a qualitative method for the measurement of blood flow and tissue perfusion as it is sensitive to the measurement configuration. The signal intensity is one of the parameters that can affect the contrast values due to the quantization of the signals by the camera and analog-to-digital converter (ADC). In this paper we deduce the theoretical relationship between signal intensity and contrast values based on the probability density function (PDF) of the speckle pattern and simplify it to a rational function. A simple method to correct this contrast error is suggested. The experimental results demonstrate that this relationship can effectively compensate the bias in contrast values induced by the quantized signal intensity and correct for bias induced by signal intensity variations across the field of view. PMID:23304650
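For reference, local speckle contrast is the windowed standard deviation divided by the windowed mean; the sketch below computes it on a synthetic, quantized "speckle" image. The exponential intensity model and 8-bit quantization are illustrative assumptions, and the paper's bias-correction function itself is not reproduced.

```python
# Local speckle contrast sketch (K = sigma / mean over a small window),
# the quantity whose dependence on signal intensity the paper corrects.
import numpy as np
from scipy.ndimage import uniform_filter

def local_contrast(img, win=7):
    img = img.astype(float)
    mean = uniform_filter(img, win)
    mean_sq = uniform_filter(img**2, win)
    var = np.clip(mean_sq - mean**2, 0, None)
    return np.sqrt(var) / mean

rng = np.random.default_rng(6)
speckle = rng.exponential(scale=100.0, size=(128, 128))   # fully developed speckle model
quantized = np.clip(np.round(speckle), 0, 255)            # 8-bit camera + ADC
print(local_contrast(quantized).mean())                   # approaches 1 for this model
```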
Modeling learning in brain stem and cerebellar sites responsible for VOR plasticity
NASA Technical Reports Server (NTRS)
Quinn, K. J.; Didier, A. J.; Baker, J. F.; Peterson, B. W.
1998-01-01
A simple model of vestibuloocular reflex (VOR) function was used to analyze several hypotheses currently held concerning the characteristics of VOR plasticity. The network included a direct vestibular pathway and an indirect path via the cerebellum. An optimization analysis of this model suggests that regulation of brain stem sites is critical for the proper modification of VOR gain. A more physiologically plausible learning rule was also applied to this network. Analysis of these simulation results suggests that the preferred error correction signal controlling gain modification of the VOR is the direct output of the accessory optic system (AOS) to the vestibular nuclei vs. a signal relayed through the cerebellum via floccular Purkinje cells. The potential anatomical and physiological basis for this conclusion is discussed, in relation to our current understanding of the latency of the adapted VOR response.
Oppliger, R A; Nielsen, D H; Shetler, A C; Crowley, E T; Albright, J P
1992-01-01
The need for simple, valid techniques of body composition assessment among athletes is a growing concern of the physical therapist. This paper reports on several common methods applied to university football players. Body composition analysis was conducted on 28 Division IA football players using three different bioelectrical impedance analysis (BIA) systems, skinfolds (SF), and hydrostatic weighing (HYDRO). Correlations for all methods with HYDRO were high (>.88), but BIA significantly overpredicted body fatness. In contrast, three SF equations showed small differences with HYDRO and reasonable measurement error. Clinicians should exercise caution when using BIA based on the existing manufacturers' equations with athletic populations. Adjustments to BIA regression equations by including modifying or anthropometric variables could enhance the predictive accuracy of these methods with lean, athletic males. J Orthop Sports Phys Ther 1992;15(4):187-192.
Accelerating the weighted histogram analysis method by direct inversion in the iterative subspace.
Zhang, Cheng; Lai, Chun-Liang; Pettitt, B Montgomery
The weighted histogram analysis method (WHAM) for free energy calculations is a valuable tool for producing free energy differences with minimal statistical error. Given multiple simulations, WHAM obtains from the distribution overlaps the optimal statistical estimator of the density of states, from which the free energy differences can be computed. The WHAM equations are often solved by an iterative procedure. In this work, we use a well-known linear algebra algorithm which allows for more rapid convergence to the solution. We find that the computational complexity of the iterative solution to WHAM and the closely related multiple Bennett acceptance ratio (MBAR) method can be improved by using the method of direct inversion in the iterative subspace. We give examples from a lattice model, a simple liquid and an aqueous protein solution.
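A minimal sketch of the underlying self-consistent iteration (plain fixed-point updates, without the DIIS acceleration the paper adds), assuming toy harmonic bias potentials and Gaussian samples in place of real simulation data.

```python
# Binless WHAM/MBAR-style self-consistent iteration for the free energies
# of K biased windows; toy harmonic biases and synthetic samples only.
import numpy as np

rng = np.random.default_rng(7)
centers, kspring, beta = np.array([-1.0, 0.0, 1.0]), 10.0, 1.0
samples = [rng.normal(c, 1 / np.sqrt(beta * kspring), 2000) for c in centers]
x = np.concatenate(samples)                      # pooled samples
N = np.array([len(s) for s in samples])

# reduced bias energies u[k, n] of every pooled sample in every window
u = beta * 0.5 * kspring * (x[None, :] - centers[:, None]) ** 2

f = np.zeros(len(centers))                       # window free energies (up to a constant)
for _ in range(200):                             # plain fixed-point iteration
    denom = np.sum(N[:, None] * np.exp(f[:, None] - u), axis=0)
    f_new = -np.log(np.sum(np.exp(-u) / denom, axis=1))
    f_new -= f_new[0]                            # fix the arbitrary offset
    if np.max(np.abs(f_new - f)) < 1e-8:
        f = f_new
        break
    f = f_new
print(f)                                         # relative free energies of the windows
```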
Abreu, P C; Greenberg, D A; Hodge, S E
1999-09-01
Several methods have been proposed for linkage analysis of complex traits with unknown mode of inheritance. These methods include the LOD score maximized over disease models (MMLS) and the "nonparametric" linkage (NPL) statistic. In previous work, we evaluated the increase of type I error when maximizing over two or more genetic models, and we compared the power of MMLS to detect linkage, in a number of complex modes of inheritance, with analysis assuming the true model. In the present study, we compare MMLS and NPL directly. We simulated 100 data sets with 20 families each, using 26 generating models: (1) 4 intermediate models (penetrance of the heterozygote between those of the two homozygotes); (2) 6 two-locus additive models; and (3) 16 two-locus heterogeneity models (admixture alpha = 1.0, 0.7, 0.5, and 0.3; alpha = 1.0 replicates simple Mendelian models). For LOD scores, we assumed dominant and recessive inheritance with 50% penetrance. We took the higher of the two maximum LOD scores and subtracted 0.3 to correct for multiple tests (MMLS-C). We compared expected maximum LOD scores and power, using MMLS-C and NPL as well as the true model. Since NPL uses only the affected family members, we also performed an affecteds-only analysis using MMLS-C. MMLS-C was uniformly more powerful than NPL in most cases we examined, except when linkage information was low, and was close to the results for the true model under locus heterogeneity. We still found better power for MMLS-C compared with NPL in the affecteds-only analysis. The results show that use of two simple modes of inheritance at a fixed penetrance can have more power than NPL when the trait mode of inheritance is complex and when there is heterogeneity in the data set.
Analysis of lard in meatball broth using Fourier transform infrared spectroscopy and chemometrics.
Kurniawati, Endah; Rohman, Abdul; Triyana, Kuwat
2014-01-01
Meatball is one of the favorite foods in Indonesia. For economic reasons (due to the price difference), beef may be substituted with pork. In this study, FTIR spectroscopy in combination with the chemometric methods of partial least squares (PLS) regression and principal component analysis (PCA) was used for analysis of pork fat (lard) in meatball broth. Lard in meatball broth was quantitatively determined in the wavenumber region of 1018-1284 cm(-1). The coefficient of determination (R(2)) and root mean square error of calibration (RMSEC) values obtained were 0.9975 and 1.34% (v/v), respectively. Furthermore, the classification of lard and beef fat in meatball broth, as well as in commercial samples, was performed in the wavenumber region of 1200-1000 cm(-1). The results showed that FTIR spectroscopy coupled with chemometrics can be used for quantitative analysis and classification of lard in meatball broth for Halal verification studies. The developed method is simple to operate, rapid, and does not involve extensive sample preparation. © 2013.
NASA Technical Reports Server (NTRS)
Waszak, Martin R.; Fung, Jimmy
1998-01-01
This report describes the development of transfer function models for the trailing-edge and upper and lower spoiler actuators of the Benchmark Active Control Technology (BACT) wind tunnel model for application to control system analysis and design. A simple nonlinear least-squares parameter estimation approach is applied to determine transfer function parameters from frequency response data. Unconstrained quasi-Newton minimization of weighted frequency response error was employed to estimate the transfer function parameters. An analysis of the behavior of the actuators over time to assess the effects of wear and aerodynamic load by using the transfer function models is also presented. The frequency responses indicate consistent actuator behavior throughout the wind tunnel test and only slight degradation in effectiveness due to aerodynamic hinge loading. The resulting actuator models have been used in design, analysis, and simulation of controllers for the BACT to successfully suppress flutter over a wide range of conditions.
Assessment of the relative merits of a few methods to detect evolutionary trends.
Laurin, Michel
2010-12-01
Some of the most basic questions about the history of life concern evolutionary trends. These include determining whether or not metazoans have become more complex over time, whether or not body size tends to increase over time (the Cope-Depéret rule), or whether or not brain size has increased over time in various taxa, such as mammals and birds. Despite the proliferation of studies on such topics, assessment of the reliability of results in this field is hampered by the variability of techniques used and the lack of statistical validation of these methods. To solve this problem, simulations are performed using a variety of evolutionary models (gradual Brownian motion, speciational Brownian motion, and Ornstein-Uhlenbeck), with or without a drift of variable amplitude, with variable variance of tips, and with bounds placed close or far from the starting values and final means of simulated characters. These are used to assess the relative merits (power, Type I error rate, bias, and mean absolute value of error on slope estimate) of several statistical methods that have recently been used to assess the presence of evolutionary trends in comparative data. Results show widely divergent performance of the methods. The simple, nonphylogenetic regression (SR) and variance partitioning using phylogenetic eigenvector regression (PVR) with a broken stick selection procedure have greatly inflated Type I error rate (0.123-0.180 at a 0.05 threshold), which invalidates their use in this context. However, they have the greatest power. Most variants of Felsenstein's independent contrasts (FIC; five of which are presented) have adequate Type I error rate, although two have a slightly inflated Type I error rate with at least one of the two reference trees (0.064-0.090 error rate at a 0.05 threshold). The power of all contrast-based methods is always much lower than that of SR and PVR, except under Brownian motion with a strong trend and distant bounds. Mean absolute value of error on slope of all FIC methods is slightly higher than that of phylogenetic generalized least squares (PGLS), SR, and PVR. PGLS performs well, with low Type I error rate, low error on regression coefficient, and power comparable with some FIC methods. Four variants of skewness analysis are examined, and a new method to assess significance of results is presented. However, all have consistently low power, except in rare combinations of trees, trend strength, and distance between final means and bounds. Globally, the results clearly show that FIC-based methods and PGLS are globally better than nonphylogenetic methods and variance partitioning with PVR. FIC methods and PGLS are sensitive to the model of evolution (and, hence, to branch length errors). Our results suggest that regressing raw character contrasts against raw geological age contrasts yields a good combination of power and Type I error rate. New software to facilitate batch analysis is presented.
Studies in Astronomical Time Series Analysis. VI. Bayesian Block Representations
NASA Technical Reports Server (NTRS)
Scargle, Jeffrey D.; Norris, Jay P.; Jackson, Brad; Chiang, James
2013-01-01
This paper addresses the problem of detecting and characterizing local variability in time series and other forms of sequential data. The goal is to identify and characterize statistically significant variations, at the same time suppressing the inevitable corrupting observational errors. We present a simple nonparametric modeling technique and an algorithm implementing it (an improved and generalized version of Bayesian Blocks [Scargle 1998]) that finds the optimal segmentation of the data in the observation interval. The structure of the algorithm allows it to be used in either a real-time trigger mode or a retrospective mode. Maximum likelihood or marginal posterior functions to measure model fitness are presented for events, binned counts, and measurements at arbitrary times with known error distributions. Problems addressed include those connected with data gaps, variable exposure, extension to piecewise linear and piecewise exponential representations, multivariate time series data, analysis of variance, data on the circle, other data modes, and dispersed data. Simulations provide evidence that the detection efficiency for weak signals is close to a theoretical asymptotic limit derived by [Arias-Castro, Donoho and Huo 2003]. In the spirit of Reproducible Research [Donoho et al. (2008)] all of the code and data necessary to reproduce all of the figures in this paper are included as auxiliary material.
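For readers who want to try the segmentation, a related implementation is available as astropy.stats.bayesian_blocks; the usage sketch below assumes that function and synthetic event times with a brief flare, and is not the auxiliary code distributed with the paper.

```python
# Bayesian Blocks usage sketch on synthetic event data: a constant
# background rate plus a short burst of events around t = 40-45.
import numpy as np
from astropy.stats import bayesian_blocks

rng = np.random.default_rng(8)
quiet = rng.uniform(0, 100, 300)                 # background events
flare = rng.uniform(40, 45, 120)                 # short burst of events
t = np.sort(np.concatenate([quiet, flare]))

edges = bayesian_blocks(t, fitness='events', p0=0.01)
print(edges)                                     # change points of the optimal segmentation
```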
Whittington, James C. R.; Bogacz, Rafal
2017-01-01
To efficiently learn from feedback, cortical networks need to update synaptic weights on multiple levels of cortical hierarchy. An effective and well-known algorithm for computing such changes in synaptic weights is the error backpropagation algorithm. However, in this algorithm, the change in synaptic weights is a complex function of weights and activities of neurons not directly connected with the synapse being modified, whereas the changes in biological synapses are determined only by the activity of presynaptic and postsynaptic neurons. Several models have been proposed that approximate the backpropagation algorithm with local synaptic plasticity, but these models require complex external control over the network or relatively complex plasticity rules. Here we show that a network developed in the predictive coding framework can efficiently perform supervised learning fully autonomously, employing only simple local Hebbian plasticity. Furthermore, for certain parameters, the weight change in the predictive coding model converges to that of the backpropagation algorithm. This suggests that it is possible for cortical networks with simple Hebbian synaptic plasticity to implement efficient learning algorithms in which synapses in areas on multiple levels of hierarchy are modified to minimize the error on the output. PMID:28333583
NASA Astrophysics Data System (ADS)
Kang, Pilsang; Koo, Changhoi; Roh, Hokyu
2017-11-01
Since simple linear regression theory was established at the beginning of the 1900s, it has been used in a variety of fields. Unfortunately, it cannot be used directly for calibration. In practical calibrations, the observed measurements (the inputs) are subject to errors, and hence they vary, thus violating the assumption that the inputs are fixed. Therefore, in the case of calibration, the regression line fitted using the method of least squares is not consistent with the statistical properties of simple linear regression as already established based on this assumption. To resolve this problem, "classical regression" and "inverse regression" have been proposed. However, they do not completely resolve the problem. As a fundamental solution, we introduce "reversed inverse regression" along with a new methodology for deriving its statistical properties. In this study, the statistical properties of this regression are derived using the "error propagation rule" and the "method of simultaneous error equations" and are compared with those of the existing regression approaches. The accuracy of the statistical properties thus derived is investigated in a simulation study. We conclude that the newly proposed regression and methodology constitute the complete regression approach for univariate linear calibrations.
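For context, the two existing approaches the abstract contrasts can be sketched on simulated calibration data as below; the proposed reversed inverse regression and its error equations are not reproduced here.

```python
# Calibration sketch contrasting classical and inverse regression for
# estimating an unknown x from a new measured response y_new.
import numpy as np

rng = np.random.default_rng(9)
x_std = np.linspace(0, 10, 12)                       # known standards
y_obs = 0.5 + 2.0 * x_std + rng.normal(0, 0.2, 12)   # measured responses

# classical calibration: fit y = a + b*x, then invert for an unknown sample
b, a = np.polyfit(x_std, y_obs, 1)
y_new = 12.3                                         # response of an unknown sample
x_classical = (y_new - a) / b

# inverse regression: regress x on y and predict directly
d, c = np.polyfit(y_obs, x_std, 1)
x_inverse = c + d * y_new
print(x_classical, x_inverse)
```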
An improved pi/4-QPSK with nonredundant error correction for satellite mobile broadcasting
NASA Technical Reports Server (NTRS)
Feher, Kamilo; Yang, Jiashi
1991-01-01
An improved pi/4-quadrature phase-shift keying (QPSK) receiver that incorporates a simple nonredundant error correction (NEC) structure is proposed for satellite and land-mobile digital broadcasting. The bit-error-rate (BER) performance of pi/4-QPSK with NEC is analyzed and evaluated in a fast Rician fading and additive white Gaussian noise (AWGN) environment using computer simulation. It is demonstrated that, with simple electronics, the performance of a noncoherently detected pi/4-QPSK signal in both AWGN and fast Rician fading can be improved. When the K-factor (the ratio of direct-path power to the average power of the multipath signal) of the Rician channel decreases, the improvement increases. An improvement of 1.2 dB could be obtained at a BER of 0.0001 in the AWGN channel. This performance gain is achieved without requiring any signal redundancy or additional bandwidth. Three types of noncoherent detection schemes for pi/4-QPSK with the NEC structure are discussed: IF-band differential detection, baseband differential detection, and FM discriminator detection. It is concluded that pi/4-QPSK with NEC is an attractive scheme for power-limited satellite land-mobile broadcasting systems.
Nelson, Jonathan M.; Kinzel, Paul J.; McDonald, Richard R.; Schmeeckle, Mark
2016-01-01
Recently developed optical and videographic methods for measuring water-surface properties in a noninvasive manner hold great promise for extracting river hydraulic and bathymetric information. This paper describes such a technique, concentrating on the method of infrared videography for measuring surface velocities and both acoustic (laboratory-based) and laser-scanning (field-based) techniques for measuring water-surface elevations. In ideal laboratory situations with simple flows, appropriate spatial and temporal averaging results in accurate water-surface elevations and water-surface velocities. In test cases, this accuracy is sufficient to allow direct inversion of the governing equations of motion to produce estimates of depth and discharge. Unlike other optical techniques for determining local depth that rely on transmissivity of the water column (bathymetric lidar, multi/hyperspectral correlation), this method uses only water-surface information, so even deep and/or turbid flows can be investigated. However, significant errors arise in areas of nonhydrostatic spatial accelerations, such as those associated with flow over bedforms or other relatively steep obstacles. Using laboratory measurements for test cases, the cause of these errors is examined and both a simple semi-empirical method and computational results are presented that can potentially reduce bathymetric inversion errors.
Error analysis of mathematical problems on TIMSS: A case of Indonesian secondary students
NASA Astrophysics Data System (ADS)
Priyani, H. A.; Ekawati, R.
2018-01-01
Indonesian students' competence in solving mathematical problems is still considered weak, as indicated by the results of international assessments such as TIMSS. This may be caused by the various types of errors students make. Hence, this study aimed to identify students' errors in solving TIMSS mathematical problems on the topic of numbers, which is considered a fundamental concept in mathematics. The study applied descriptive qualitative analysis. The subjects were the three students, drawn from a class of 34 eighth graders, who made the most errors on the test indicators. Data were obtained through a paper-and-pencil test and student interviews. The error analysis indicated that in solving the applying-level problem, the students made operational errors. For the reasoning-level problem, three types of errors were made: conceptual errors, operational errors, and principle errors. Meanwhile, analysis of the causes of students' errors showed that students did not comprehend the mathematical problems given.
NASA Astrophysics Data System (ADS)
Mednova, Olga; Kirsanov, Dmitry; Rudnitskaya, Alisa; Kilmartin, Paul; Legin, Andrey
2009-05-01
The present study deals with a potentiometric electronic tongue (ET) multisensor system applied for the simultaneous determination of several chemical parameters for white wines produced in New Zealand. Methods in use for wine quality control are often expensive and require considerable time and skilled operation. The ET approach usually offers a simple and fast measurement protocol and allows automation for on-line analysis under industrial conditions. The ET device developed in this research is capable of quantifying the free and total SO2 content, total acids and some polyphenolic compounds in white wines with acceptable analytical errors.
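The abstract does not specify the calibration model used; the sketch below shows one common way to map a potentiometric sensor array to several chemical parameters at once, here with partial least squares regression on synthetic data. The sensor responses, parameter count, and number of components are all assumptions, not the authors' setup.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_samples, n_sensors, n_params = 60, 26, 3    # e.g. free SO2, total SO2, total acidity (assumed)

# Synthetic wines: latent chemical parameters drive the sensor potentials plus noise.
true_params = rng.uniform(0.0, 1.0, (n_samples, n_params))
mixing = rng.normal(0.0, 1.0, (n_params, n_sensors))
sensor_potentials = true_params @ mixing + rng.normal(0.0, 0.05, (n_samples, n_sensors))

X_tr, X_te, y_tr, y_te = train_test_split(sensor_potentials, true_params, random_state=0)
model = PLSRegression(n_components=5).fit(X_tr, y_tr)
rmse = np.sqrt(np.mean((model.predict(X_te) - y_te) ** 2, axis=0))
print("per-parameter RMSE:", rmse)
```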
Toward an affordable and user-friendly visual motion capture system.
Bonnet, V; Sylla, N; Cherubini, A; Gonzáles, A; Azevedo Coste, C; Fraisse, P; Venture, G
2014-01-01
The present study aims at designing and evaluating a low-cost, simple and portable system for arm joint angle estimation during grasping-like motions. The system is based on a single RGB-D camera and three customized markers. The automatically detected and tracked marker positions were used as inputs to an offline inverse kinematic process based on biomechanical constraints to reduce noise effects and handle marker occlusion. The method was validated on four subjects performing different motions. The joint angles were estimated with both the proposed low-cost system and a stereophotogrammetric system. Comparative analysis shows good accuracy, with a high correlation coefficient (r = 0.92) and a low average RMS error (3.8 deg).
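A minimal sketch of the inverse-kinematic idea: estimate joint angles by fitting a kinematic model to tracked marker positions under joint-limit constraints. The planar two-degree-of-freedom arm, segment lengths, and limits below are assumptions, not the authors' biomechanical model.

```python
import numpy as np
from scipy.optimize import least_squares

L1, L2 = 0.30, 0.25                           # upper-arm / forearm lengths in metres (assumed)

def markers_from_angles(q):
    """Forward kinematics: elbow and wrist marker positions for joint angles q (rad)."""
    shoulder, elbow = q
    p_elbow = np.array([L1 * np.cos(shoulder), L1 * np.sin(shoulder)])
    p_wrist = p_elbow + L2 * np.array([np.cos(shoulder + elbow), np.sin(shoulder + elbow)])
    return np.concatenate([p_elbow, p_wrist])

def estimate_angles(measured_markers, q0=(0.5, 0.5)):
    # Least-squares fit of the model markers to the measured ones, within joint limits.
    res = least_squares(
        lambda q: markers_from_angles(q) - measured_markers,
        x0=q0,
        bounds=([0.0, 0.0], [np.pi, 2.6]),    # crude joint-limit constraints (assumed)
    )
    return res.x

# Synthetic, slightly noisy marker measurement for true angles (1.0, 0.8) rad.
truth = markers_from_angles(np.array([1.0, 0.8]))
noisy = truth + np.random.default_rng(0).normal(0.0, 0.005, truth.size)
print("estimated joint angles [deg]:", np.degrees(estimate_angles(noisy)))
```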
Bloom, Guillaume; Larat, Christian; Lallier, Eric; Lee-Bouhours, Mane-Si Laure; Loiseaux, Brigitte; Huignard, Jean-Pierre
2011-02-10
We have designed a high-efficiency array generator composed of subwavelength grooves etched in a GaAs substrate for operation at 4.5 μm. The method used combines rigorous coupled wave analysis with an optimization algorithm. The optimized beam splitter has both a high efficiency (∼96%) and a good intensity uniformity (∼0.2%). The fabrication error tolerances are numerically calculated, and it is shown that this subwavelength array generator could be fabricated with current electron beam writers and inductively coupled plasma etching. Finally, we studied the effect of a simple and realistic antireflection coating on the performance of the beam splitter.
NASA Technical Reports Server (NTRS)
LaValley, Brian W.; Little, Phillip D.; Walter, Chris J.
2011-01-01
This report documents the capabilities of the EDICT tools for error modeling and error propagation analysis when operating with models defined in the Architecture Analysis & Design Language (AADL). We discuss our experience using the EDICT error analysis capabilities on a model of the Scalable Processor-Independent Design for Enhanced Reliability (SPIDER) architecture that uses the Reliable Optical Bus (ROBUS). Based on these experiences we draw some initial conclusions about model based design techniques for error modeling and analysis of highly reliable computing architectures.
Simple and Accurate Method for Central Spin Problems
NASA Astrophysics Data System (ADS)
Lindoy, Lachlan P.; Manolopoulos, David E.
2018-06-01
We describe a simple quantum mechanical method that can be used to obtain accurate numerical results over long timescales for the spin correlation tensor of an electron spin that is hyperfine coupled to a large number of nuclear spins. This method does not suffer from the statistical errors that accompany a Monte Carlo sampling of the exact eigenstates of the central spin Hamiltonian obtained from the algebraic Bethe ansatz, or from the growth of the truncation error with time in the time-dependent density matrix renormalization group (TDMRG) approach. As a result, it can be applied to larger central spin problems than the algebraic Bethe ansatz, and for longer times than the TDMRG algorithm. It is therefore an ideal method to use to solve central spin problems, and we expect that it will also prove useful for a variety of related problems that arise in a number of different research fields.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Qian, S.; Jark, W.; Takacs, P.Z.
1995-02-01
Metrology requirements for optical components for third generation synchrotron sources are taxing the state of the art in manufacturing technology. We have investigated a number of error sources in a commercial figure measurement instrument, the Long Trace Profiler II (LTP II), and have demonstrated that, with some simple modifications, we can significantly reduce the effect of these error sources and improve the accuracy and reliability of the measurement. By keeping the optical head stationary and moving a penta prism along the translation stage, the stability of the optical system is greatly improved, and the remaining error signals can be corrected by a simple reference-beam subtraction. We illustrate the performance of the modified system by investigating the distortion produced by gravity on a typical synchrotron mirror and demonstrate the repeatability of the instrument despite relaxed tolerances on the translation stage.
NASA Astrophysics Data System (ADS)
Hu, Junbao; Meng, Xin; Wei, Qi; Kong, Yan; Jiang, Zhilong; Xue, Liang; Liu, Fei; Liu, Cheng; Wang, Shouyu
2018-03-01
Wide-field microscopy is commonly used for sample observations in biological research and medical diagnosis. However, the tilting error induced by oblique placement of the image recorder or the sample, as well as inclination of the optical path, often degrades imaging quality. To eliminate this tilting, a numerical tilting-compensation technique based on wavefront sensing with the transport-of-intensity equation is proposed in this paper. Both numerical simulations and practical experiments show that the proposed technique not only determines the tilting angle accurately with a simple setup and procedure, but also compensates the tilting error and improves imaging quality even for large tilts. Given its simple system and operation, as well as its capacity to improve image quality, the proposed method is expected to find application in tilting compensation for optical microscopy.
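The sketch below illustrates only the tilt-estimation and numerical-compensation step: given a wavefront phase map recovered by any wavefront-sensing method (the transport-of-intensity retrieval itself is not reproduced), fit a plane to the phase, convert its slopes to tilt angles, and subtract the plane. The wavelength, pixel size, and synthetic phase are assumptions.

```python
import numpy as np

wavelength = 532e-9                           # assumed
k = 2.0 * np.pi / wavelength
pixel = 2e-6                                  # assumed camera pixel pitch
N = 256

# Synthetic recovered phase: a tilted plane plus noise (tilts of 1.0 and -0.5 mrad).
y, x = np.mgrid[0:N, 0:N] * pixel
phi = k * (1e-3 * x - 0.5e-3 * y) + 0.3 * np.random.default_rng(0).normal(size=(N, N))

# Least-squares plane fit: phi ~ a*x + b*y + c.
A = np.column_stack([x.ravel(), y.ravel(), np.ones(x.size)])
(a, b, c), *_ = np.linalg.lstsq(A, phi.ravel(), rcond=None)

# Phase slopes -> wavefront tilt angles (small-angle approximation).
tilt_x = np.arctan(a / k)
tilt_y = np.arctan(b / k)
print("estimated tilts [mrad]:", 1e3 * tilt_x, 1e3 * tilt_y)

# Numerical compensation: subtract the fitted plane from the phase
# (equivalently, multiply the complex field by exp(-1j * (a*x + b*y + c))).
phi_compensated = phi - (a * x + b * y + c)
```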
Modeling and predicting historical volatility in exchange rate markets
NASA Astrophysics Data System (ADS)
Lahmiri, Salim
2017-04-01
Volatility modeling and forecasting for currency exchange rates is important in several business risk management tasks, including treasury risk management, derivatives pricing, and portfolio risk evaluation. The purpose of this study is to present a simple and effective approach for predicting the historical volatility of currency exchange rates. The approach is based on a limited set of technical indicators used as inputs to an artificial neural network (ANN). To show the effectiveness of the proposed approach, it was applied to forecasting US/Canada and US/Euro exchange rate volatilities. The forecasting results show that this simple approach outperformed the conventional GARCH and EGARCH models with different distribution assumptions, as well as the hybrid GARCH and EGARCH with ANN, in terms of mean absolute error, mean squared error, and Theil's inequality coefficient. Because of its simplicity and effectiveness, the approach is promising for US currency volatility prediction tasks.
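A minimal sketch in the spirit of the approach above: a few simple technical indicators feed a small neural network that predicts next-period historical volatility. The indicators, window lengths, network size, and synthetic exchange-rate series are assumptions, not the authors' exact setup.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import mean_absolute_error, mean_squared_error

rng = np.random.default_rng(0)
n = 2000
returns = 0.002 * rng.standard_normal(n) * (1.0 + 0.5 * np.sin(np.arange(n) / 50.0))
rate = 1.3 * np.exp(np.cumsum(returns))       # synthetic exchange-rate series

def rolling(arr, w, fn):
    # fn applied to the w observations strictly before index i, for i = w .. len(arr)-1
    return np.array([fn(arr[i - w:i]) for i in range(w, len(arr))])

w = 20
vol = rolling(returns, w, np.std)             # historical volatility of the last w returns
sma = rolling(rate, w, np.mean)               # simple moving average of the rate
mom = rate[w - 1:-1] - rate[:-w]              # w-period momentum

features = np.column_stack([sma, mom, vol])[:-1]   # indicators known at time t
target = vol[1:]                                   # historical volatility one step ahead

split = int(0.8 * len(target))
model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000, random_state=0))
model.fit(features[:split], target[:split])
pred = model.predict(features[split:])
print("MAE:", mean_absolute_error(target[split:], pred))
print("MSE:", mean_squared_error(target[split:], pred))
```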
Synchronizing movements with the metronome: nonlinear error correction and unstable periodic orbits.
Engbert, Ralf; Krampe, Ralf Th; Kurths, Jürgen; Kliegl, Reinhold
2002-02-01
The control of human hand movements is investigated in a simple synchronization task. We propose and analyze a stochastic model based on nonlinear error correction, a mechanism which implies the existence of unstable periodic orbits. This prediction is tested in an experiment with human subjects. We find that our experimental data are in good agreement with numerical simulations of our theoretical model. These results suggest that feedback control of the human motor system shows nonlinear behavior. Copyright 2001 Elsevier Science (USA).
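The sketch below simulates one plausible instance of such a model: each cycle, the asynchrony between tap and metronome is reduced by a correction term that is nonlinear in the asynchrony, plus timing noise. The cubic form of the correction, the gains, and the noise level are assumptions for illustration, not the authors' fitted model.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(n_taps=1000, alpha=0.3, beta=8.0, sigma=0.01):
    """Asynchrony series A_n with a linear-plus-cubic (nonlinear) error correction."""
    A = np.zeros(n_taps)                      # tap-metronome asynchronies in seconds
    for n in range(n_taps - 1):
        correction = alpha * A[n] + beta * A[n] ** 3
        A[n + 1] = A[n] - correction + sigma * rng.standard_normal()
    return A

A = simulate()
print("mean asynchrony:", A.mean(), "  SD:", A.std())
# Lag-1 autocorrelation of the asynchronies, a quantity commonly compared
# between synchronization models and tapping data.
print("lag-1 autocorrelation:", np.corrcoef(A[:-1], A[1:])[0, 1])
```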
Relative entropy as a universal metric for multiscale errors
NASA Astrophysics Data System (ADS)
Chaimovich, Aviel; Shell, M. Scott
2010-06-01
We show that the relative entropy, Srel, suggests a fundamental indicator of the success of multiscale studies, in which coarse-grained (CG) models are linked to first-principles (FP) ones. We demonstrate that Srel inherently measures fluctuations in the differences between CG and FP potential energy landscapes, and develop a theory that tightly and generally links it to errors associated with coarse graining. We consider two simple case studies substantiating these results, and suggest that Srel has important ramifications for evaluating and designing coarse-grained models.
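As a concrete illustration, the sketch below estimates the configurational form of the relative entropy, Srel = sum_x p_FP(x) ln(p_FP(x)/p_CG(x)), from histogrammed samples of a reference and a coarse-grained ensemble; the one-dimensional Gaussian distributions are synthetic stand-ins for the paper's case studies.

```python
import numpy as np

rng = np.random.default_rng(0)
fp_samples = rng.normal(0.0, 1.0, 100_000)    # reference ("first-principles") ensemble
cg_samples = rng.normal(0.1, 1.2, 100_000)    # coarse-grained ensemble

bins = np.linspace(-6.0, 6.0, 121)
p, _ = np.histogram(fp_samples, bins=bins, density=True)
q, _ = np.histogram(cg_samples, bins=bins, density=True)
dx = bins[1] - bins[0]

mask = (p > 0) & (q > 0)                      # avoid log(0); adequate for a sketch
s_rel = np.sum(p[mask] * np.log(p[mask] / q[mask])) * dx
print("estimated S_rel:", s_rel)

# Analytical check for two Gaussians: KL( N(0,1) || N(0.1, 1.2^2) ).
mu1, s1, mu2, s2 = 0.0, 1.0, 0.1, 1.2
exact = np.log(s2 / s1) + (s1**2 + (mu1 - mu2)**2) / (2.0 * s2**2) - 0.5
print("exact value:", exact)
```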
EOS radiometer concepts for soil moisture remote sensing
NASA Technical Reports Server (NTRS)
Carr, J.
1986-01-01
Preliminary work with aperture synthesis concepts for EOS is reported. The effects of nonvanishing bandwidths on image reconstruction in aperture synthesis systems were studied. It is found that nonvanishing bandwidths introduce errors in off-axis pixels when naive Fourier processing is used. The net effect is for bandwidth to limit the sensor field of view. To quantify this effect, a computer program was written, which is documented here. Example runs are included which illustrate the resultant radiometric errors and effective fields of view for a plausible simple sensor.
A software simulation study of a (255,223) Reed-Solomon encoder-decoder
NASA Technical Reports Server (NTRS)
Pollara, F.
1985-01-01
A set of software programs that simulates a (255,223) Reed-Solomon encoder/decoder pair is described. The transform decoder algorithm uses a modified Euclid algorithm and closely follows the pipeline architecture proposed for the hardware decoder. Uncorrectable error patterns are detected by a simple test, and the inverse transform is computed by a finite-field FFT. Numerical examples of the decoder operation are given for some test codewords, with and without errors. The use of the software package is briefly described.
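The sketch below demonstrates a (255,223) Reed-Solomon code end to end using the third-party reedsolo Python package as a stand-in codec; it is not the transform decoder or software package described above, and the decode() return format differs between package versions, as noted in the comments.

```python
# A (255,223) Reed-Solomon round trip with the `reedsolo` package. With 32
# parity symbols the code corrects up to 16 symbol errors per 255-byte codeword.
import os
from reedsolo import RSCodec

rsc = RSCodec(32)                             # 255 - 223 = 32 parity symbols

message = bytearray(os.urandom(223))
codeword = bytearray(rsc.encode(message))     # 255 bytes: message followed by parity

# Corrupt 16 symbols (the maximum correctable number for this code).
corrupted = bytearray(codeword)
for pos in range(0, 160, 10):
    corrupted[pos] ^= 0xFF

decoded = rsc.decode(corrupted)
# Depending on the reedsolo version, decode() returns either the message or a
# (message, full_codeword, error_positions) tuple; normalize here.
recovered = bytearray(decoded[0] if isinstance(decoded, tuple) else decoded)
print("recovered correctly:", recovered == message)
```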