Science.gov

Sample records for additional systematic error

  1. Error correction in adders using systematic subcodes.

    NASA Technical Reports Server (NTRS)

    Rao, T. R. N.

    1972-01-01

    A generalized theory is presented for the construction of a systematic subcode for a given AN code in such a way that the error control properties of the AN code are preserved in this new code. The 'systematic weight' and 'systematic distance' functions in this new code depend not only on its number representation system but also on its addition structure. Finally, to illustrate this theory, a simple error-correcting adder organization using a systematic subcode of a 29N code is sketched in some detail.

  2. A Low-Cost Environmental Monitoring System: How to Prevent Systematic Errors in the Design Phase through the Combined Use of Additive Manufacturing and Thermographic Techniques.

    PubMed

    Salamone, Francesco; Danza, Ludovico; Meroni, Italo; Pollastro, Maria Cristina

    2017-04-11

    nEMoS (nano Environmental Monitoring System) is a 3D-printed device built following the Do-It-Yourself (DIY) approach. It can be connected to the web and it can be used to assess indoor environmental quality (IEQ). It is built using some low-cost sensors connected to an Arduino microcontroller board. The device is assembled in a small-sized case and both thermohygrometric sensors used to measure the air temperature and relative humidity, and the globe thermometer used to measure the radiant temperature, can be subject to thermal effects due to overheating of some nearby components. A thermographic analysis was made to rule out this possibility. The paper shows how the pervasive technique of additive manufacturing can be combined with the more traditional thermographic techniques to redesign the case and to verify the accuracy of the optimized system in order to prevent instrumental systematic errors in terms of the difference between experimental and actual values of the above-mentioned environmental parameters.

  3. Modeling Errors in Daily Precipitation Measurements: Additive or Multiplicative?

    NASA Technical Reports Server (NTRS)

    Tian, Yudong; Huffman, George J.; Adler, Robert F.; Tang, Ling; Sapiano, Matthew; Maggioni, Viviana; Wu, Huan

    2013-01-01

    The definition and quantification of uncertainty depend on the error model used. For uncertainties in precipitation measurements, two types of error models have been widely adopted: the additive error model and the multiplicative error model. This leads to incompatible specifications of uncertainties and impedes intercomparison and application. In this letter, we assess the suitability of both models for satellite-based daily precipitation measurements in an effort to clarify the uncertainty representation. Three criteria were employed to evaluate the applicability of either model: (1) better separation of the systematic and random errors; (2) applicability to the large range of variability in daily precipitation; and (3) better predictive skills. It is found that the multiplicative error model is a much better choice under all three criteria. It extracted the systematic errors more cleanly, was more consistent with the large variability of precipitation measurements, and produced superior predictions of the error characteristics. The additive error model had several weaknesses, such as non-constant variance resulting from systematic errors leaking into random errors, and the lack of prediction capability. Therefore, the multiplicative error model is a better choice.
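
    As a rough illustration of the two conventions compared in this letter, the sketch below fits an additive and a multiplicative error model to synthetic truth-estimate pairs and checks whether the residual spread stays constant across rain-rate bins (criterion 2). All data, parameter values and the simple log-space fit are illustrative assumptions, not the authors' procedure.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "truth" (e.g., gauge-based daily precipitation, mm/day) and a
# satellite-like estimate whose error grows with rain rate -- the situation in
# which a multiplicative model tends to fit better.
truth = rng.gamma(shape=0.5, scale=10.0, size=5000)
estimate = 0.8 * truth * np.exp(rng.normal(0.0, 0.3, size=truth.size))

# Additive model:        estimate = truth + e,       e ~ N(mu_a, sigma_a^2)
resid_add = estimate - truth

# Multiplicative model:  estimate = truth * exp(e),  e ~ N(mu_m, sigma_m^2)
# (fitted in log space; restricted to strictly positive pairs)
pos = (truth > 0) & (estimate > 0)
resid_mul = np.log(estimate[pos]) - np.log(truth[pos])

# Criterion (2): does the residual spread depend on rain rate?  A roughly
# constant spread across bins favours that error model.
for lo, hi in [(0.1, 1.0), (1.0, 10.0), (10.0, np.inf)]:
    sel = (truth[pos] >= lo) & (truth[pos] < hi)
    print(f"{lo:5.1f}-{hi:5.1f} mm/day | additive std {resid_add[pos][sel].std():6.2f}"
          f" | multiplicative std {resid_mul[sel].std():5.2f}")
```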

  4. Measuring Systematic Error with Curve Fits

    ERIC Educational Resources Information Center

    Rupright, Mark E.

    2011-01-01

    Systematic errors are often unavoidable in the introductory physics laboratory. As has been demonstrated in many papers in this journal, such errors can present a fundamental problem for data analysis, particularly when comparing the data to a given model. In this paper I give three examples in which my students use popular curve-fitting software…

  5. Antenna pointing systematic error model derivations

    NASA Technical Reports Server (NTRS)

    Guiar, C. N.; Lansing, F. L.; Riggs, R.

    1987-01-01

    The pointing model used to represent and correct systematic errors for the Deep Space Network (DSN) antennas is presented. Analytical expressions are given in both azimuth-elevation (az-el) and hour angle-declination (ha-dec) mounts for RF axis collimation error, encoder offset, nonorthogonality of axes, axis plane tilt, and structural flexure due to gravity loading. While the residual pointing errors (rms) after correction appear to be within the ten percent of the half-power beamwidth criterion commonly set for good pointing accuracy, the DSN has embarked on an extensive pointing improvement and modeling program aiming toward an order of magnitude higher pointing precision.
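
    To make the fitting procedure concrete, here is a minimal sketch of how the coefficients of an az-el systematic pointing model can be estimated by linear least squares from observed pointing offsets. The term names and functional forms are simplified illustrations, not the actual DSN model derived in the report.

```python
import numpy as np

def design_matrix(az, el):
    """Columns of a toy az-el systematic pointing model (illustrative terms only):
    encoder offset, gravity flexure, a collimation-like term, and tilt terms."""
    return np.column_stack([
        np.ones_like(az),   # elevation encoder offset
        np.cos(el),         # gravitational flexure ~ cos(el)
        np.sin(el),         # collimation / axis non-orthogonality-like term
        np.sin(az),         # axis tilt terms
        np.cos(az),
    ])

rng = np.random.default_rng(1)
az = rng.uniform(0.0, 2.0 * np.pi, 200)                       # radians
el = rng.uniform(np.radians(10.0), np.radians(85.0), 200)

true_coeff = np.array([0.010, 0.004, -0.002, 0.003, -0.001])  # degrees (hypothetical)
offsets = design_matrix(az, el) @ true_coeff + rng.normal(0.0, 5e-4, az.size)

# Least-squares estimate of the model coefficients from the measured offsets
A = design_matrix(az, el)
coeff, *_ = np.linalg.lstsq(A, offsets, rcond=None)
rms = np.sqrt(np.mean((A @ coeff - offsets) ** 2))
print("estimated coefficients [deg]:", np.round(coeff, 4))
print("post-fit residual rms  [deg]: %.5f" % rms)
```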

  6. Systematic errors in long baseline oscillation experiments

    SciTech Connect

    Harris, Deborah A.; /Fermilab

    2006-02-01

    This article gives a brief overview of long baseline neutrino experiments and their goals, and then describes the different kinds of systematic errors that are encountered in these experiments. Particular attention is paid to the uncertainties that come about because of imperfect knowledge of neutrino cross sections and more generally how neutrinos interact in nuclei. Near detectors are planned for most of these experiments, and the extent to which certain uncertainties can be reduced by the presence of near detectors is also discussed.

  7. Empirical Analysis of Systematic Communication Errors.

    DTIC Science & Technology

    1981-09-01

    ...human components in communication systems. (Systematic errors were defined to be those that occur regularly in human communication links.) ...phase of the human communication process and focuses on the linkage between a specific piece of information (and the receiver) and the transmission... communication flow. (2) Exchange. Exchange is the next phase in human communication and entails a concerted effort on the part of the sender and receiver to share...

  8. Systematic errors in strong lens modeling

    NASA Astrophysics Data System (ADS)

    Johnson, Traci Lin; Sharon, Keren; Bayliss, Matthew B.

    2015-08-01

    The lensing community has made great strides in quantifying the statistical errors associated with strong lens modeling. However, we are just now beginning to understand the systematic errors. Quantifying these errors is pertinent to Frontier Fields science, as number counts and luminosity functions are highly sensitive to the value of the magnifications of background sources across the entire field of view. We are aware that models can be very different when modelers change their assumptions about the parameterization of the lensing potential (i.e., parametric vs. non-parametric models). However, models built while utilizing a single methodology can lead to inconsistent outcomes for different quantities, distributions, and qualities of redshift information regarding the multiple images used as constraints in the lens model. We investigate how varying the number of multiple image constraints and the available redshift information for those constraints (e.g., spectroscopic vs. photometric vs. no redshift) can influence the outputs of our parametric strong lens models, specifically the mass distribution and magnifications of background sources. We make use of the simulated clusters by M. Meneghetti et al. and the first two Frontier Fields clusters, which have a high number of multiply imaged galaxies with spectroscopically measured redshifts (or input redshifts, in the case of simulated clusters). This work will inform not only Frontier Fields science but also work on the growing collection of strong-lensing galaxy clusters, most of which are less massive, are capable of lensing only a handful of galaxies, and are more prone to these systematic errors.

  9. More on Systematic Error in a Boyle's Law Experiment

    NASA Astrophysics Data System (ADS)

    McCall, Richard P.

    2012-01-01

    A recent article in The Physics Teacher describes a method for analyzing a systematic error in a Boyle's law laboratory activity. Systematic errors are important to consider in physics labs because they tend to bias the results of measurements. There are numerous laboratory examples and resources that discuss this common source of error [2-7].

  10. More on Systematic Error in a Boyle's Law Experiment

    ERIC Educational Resources Information Center

    McCall, Richard P.

    2012-01-01

    A recent article in "The Physics Teacher" describes a method for analyzing a systematic error in a Boyle's law laboratory activity. Systematic errors are important to consider in physics labs because they tend to bias the results of measurements. There are numerous laboratory examples and resources that discuss this common source of error.

  11. The effect of uncertainty and systematic errors in hydrological modelling

    NASA Astrophysics Data System (ADS)

    Steinsland, I.; Engeland, K.; Johansen, S. S.; Øverleir-Petersen, A.; Kolberg, S. A.

    2014-12-01

    The aims of hydrological model identification and calibration are to find the best possible set of process parametrizations and parameter values that transform inputs (e.g. precipitation and temperature) into outputs (e.g. streamflow). These models enable us to make predictions of streamflow. Several sources of uncertainty have the potential to hamper a robust model calibration and identification. In order to grasp the interaction between model parameters, inputs and streamflow, it is important to account for both systematic and random errors in inputs (e.g. precipitation and temperatures) and streamflows. By random errors we mean errors that are independent from time step to time step, whereas by systematic errors we mean errors that persist for a longer period. Both random and systematic errors are important in the observation and interpolation of precipitation and temperature inputs. Important random errors come from the measurements themselves and from the network of gauges. Important systematic errors originate from the under-catch in precipitation gauges and from unknown spatial trends that are approximated in the interpolation. For streamflow observations, the water level recordings might give random errors whereas the rating curve contributes mainly a systematic error. In this study we want to answer the question "What is the effect of random and systematic errors in inputs and observed streamflow on estimated model parameters and streamflow predictions?". To answer it, we systematically test the effect of including uncertainties in inputs and streamflow during model calibration and simulation in a distributed HBV model operating on daily time steps for the Osali catchment in Norway. The case study is based on observations whose uncertainty has been carefully quantified, and increased uncertainties and systematic errors are introduced realistically, for example by removing a precipitation gauge from the network. We find that the systematic errors in…
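
    The distinction between the two error types can be made concrete with a small sketch of how perturbed inputs for such a calibration experiment might be generated; the precipitation series, noise level and under-catch factor below are invented for illustration and are not the study's data.

```python
import numpy as np

rng = np.random.default_rng(42)
n_days = 365
precip = rng.gamma(0.6, 8.0, n_days)          # synthetic daily precipitation, mm

# Random errors: independent from time step to time step (e.g., gauge noise)
random_error = rng.normal(0.0, 0.1 * precip)  # 10 % relative noise, uncorrelated
precip_random = np.clip(precip + random_error, 0.0, None)

# Systematic errors: persist over long periods (e.g., gauge under-catch), here
# modelled as a constant multiplicative bias over the whole record
undercatch_factor = 0.9
precip_systematic = undercatch_factor * precip

# Either perturbed series would then be fed to the hydrological model
# (the HBV model in the study) during calibration and simulation.
print(precip.sum(), precip_random.sum(), precip_systematic.sum())
```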

  12. Phylogenomics of Lophotrochozoa with Consideration of Systematic Error.

    PubMed

    Kocot, Kevin M; Struck, Torsten H; Merkel, Julia; Waits, Damien S; Todt, Christiane; Brannock, Pamela M; Weese, David A; Cannon, Johanna T; Moroz, Leonid L; Lieb, Bernhard; Halanych, Kenneth M

    2017-03-01

    Phylogenomic studies have improved understanding of deep metazoan phylogeny and show promise for resolving incongruences among analyses based on limited numbers of loci. One region of the animal tree that has been especially difficult to resolve, even with phylogenomic approaches, is relationships within Lophotrochozoa (the animal clade that includes molluscs, annelids, and flatworms among others). Lack of resolution in phylogenomic analyses could be due to insufficient phylogenetic signal, limitations in taxon and/or gene sampling, or systematic error. Here, we investigated why lophotrochozoan phylogeny has been such a difficult question to answer by identifying and reducing sources of systematic error. We supplemented existing data with 32 new transcriptomes spanning the diversity of Lophotrochozoa and constructed a new set of Lophotrochozoa-specific core orthologs. Of these, 638 orthologous groups (OGs) passed strict screening for paralogy using a tree-based approach. In order to reduce possible sources of systematic error, we calculated branch-length heterogeneity, evolutionary rate, percent missing data, compositional bias, and saturation for each OG and analyzed increasingly stricter subsets of only the most stringent (best) OGs for these five variables. Principal component analysis of the values for each factor examined for each OG revealed that compositional heterogeneity and average patristic distance contributed most to the variance observed along the first principal component while branch-length heterogeneity and, to a lesser extent, saturation contributed most to the variance observed along the second. Missing data did not strongly contribute to either. Additional sensitivity analyses examined effects of removing taxa with heterogeneous branch lengths, large amounts of missing data, and compositional heterogeneity. Although our analyses do not unambiguously resolve lophotrochozoan phylogeny, we advance the field by reducing the list of viable hypotheses
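
    The per-orthologous-group screening described here boils down to computing a handful of metrics per OG and running a principal component analysis over them; the sketch below shows that step with random stand-in values for the five metrics (no real transcriptome data involved).

```python
import numpy as np

# Hypothetical per-orthologous-group (OG) screening metrics, one row per OG:
# branch-length heterogeneity, evolutionary rate, % missing data,
# compositional bias, saturation (columns in that order).
rng = np.random.default_rng(7)
metrics = rng.normal(size=(638, 5))

# Standardize, then PCA via SVD
z = (metrics - metrics.mean(axis=0)) / metrics.std(axis=0)
u, s, vt = np.linalg.svd(z, full_matrices=False)

explained = s**2 / np.sum(s**2)
print("variance explained by PC1, PC2:", np.round(explained[:2], 3))
print("PC1 loadings:", np.round(vt[0], 3))   # which metrics drive the first axis
print("PC2 loadings:", np.round(vt[1], 3))
```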

  13. Improved Systematic Pointing Error Model for the DSN Antennas

    NASA Technical Reports Server (NTRS)

    Rochblatt, David J.; Withington, Philip M.; Richter, Paul H.

    2011-01-01

    New pointing models have been developed for large reflector antennas whose construction is founded on elevation over azimuth mount. At JPL, the new models were applied to the Deep Space Network (DSN) 34-meter antennas' subnet for corrections of their systematic pointing errors; it achieved significant improvement in performance at Ka-band (32-GHz) and X-band (8.4-GHz). The new models provide pointing improvements relative to the traditional models by a factor of two to three, which translate to approximately 3-dB performance improvement at Ka-band. For radio science experiments where blind pointing performance is critical, the new innovation provides a new enabling technology. The model extends the traditional physical models with higher-order mathematical terms, thereby increasing the resolution of the model for a better fit to the underlying systematic imperfections that are the cause of antenna pointing errors. The philosophy of the traditional model was that all mathematical terms in the model must be traced to a physical phenomenon causing antenna pointing errors. The traditional physical terms are: antenna axis tilts, gravitational flexure, azimuth collimation, azimuth encoder fixed offset, azimuth and elevation skew, elevation encoder fixed offset, residual refraction, azimuth encoder scale error, and antenna pointing de-rotation terms for beam waveguide (BWG) antennas. Besides the addition of spherical harmonics terms, the new models differ from the traditional ones in that the coefficients for the cross-elevation and elevation corrections are completely independent and may be different, while in the traditional model, some of the terms are identical. In addition, the new software allows for all-sky or mission-specific model development, and can utilize the previously used model as an a priori estimate for the development of the updated models.

  14. A Systematic Error Correction Method for TOVS Radiances

    NASA Technical Reports Server (NTRS)

    Joiner, Joanna; Rokke, Laurie; Einaudi, Franco (Technical Monitor)

    2000-01-01

    Treatment of systematic errors is crucial for the successful use of satellite data in a data assimilation system. Systematic errors in TOVS radiance measurements and radiative transfer calculations can be as large or larger than random instrument errors. The usual assumption in data assimilation is that observational errors are unbiased. If biases are not effectively removed prior to assimilation, the impact of satellite data will be lessened and can even be detrimental. Treatment of systematic errors is important for short-term forecast skill as well as the creation of climate data sets. A systematic error correction algorithm has been developed as part of a 1D radiance assimilation. This scheme corrects for spectroscopic errors, errors in the instrument response function, and other biases in the forward radiance calculation for TOVS. Such algorithms are often referred to as tuning of the radiances. The scheme is able to account for the complex, air-mass dependent biases that are seen in the differences between TOVS radiance observations and forward model calculations. We will show results of systematic error correction applied to the NOAA 15 Advanced TOVS as well as its predecessors. We will also discuss the ramifications of inter-instrument bias with a focus on stratospheric measurements.
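
    A generic sketch of an air-mass-dependent bias (tuning) correction in the spirit described here: observed-minus-calculated radiance departures are regressed on a few air-mass predictors, and the predicted bias is subtracted before assimilation. The predictor choices, values and single-channel setup are assumptions for illustration, not the actual TOVS scheme.

```python
import numpy as np

rng = np.random.default_rng(3)
n_obs = 2000

# Hypothetical air-mass predictors for each observation (the real predictor
# set in such schemes is instrument- and channel-specific):
thickness_1000_300 = rng.normal(8.6, 0.3, n_obs)   # layer thickness proxy, km
thickness_200_50   = rng.normal(11.0, 0.4, n_obs)
scan_angle         = rng.uniform(-50, 50, n_obs)   # degrees

X = np.column_stack([np.ones(n_obs),                # constant (air-mass-independent) bias
                     thickness_1000_300,
                     thickness_200_50,
                     np.cos(np.radians(scan_angle))])

# Observed-minus-calculated brightness temperature departures for one channel
omc = 0.4 + 0.15 * (thickness_1000_300 - 8.6) + rng.normal(0, 0.2, n_obs)

# Bias coefficients by least squares; the predicted bias is removed from the
# departures before they are assimilated.
beta, *_ = np.linalg.lstsq(X, omc, rcond=None)
corrected = omc - X @ beta
print("mean departure before/after correction: %.3f / %.3f K"
      % (omc.mean(), corrected.mean()))
```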

  15. Systematic errors for a Mueller matrix dual rotating compensator ellipsometer.

    PubMed

    Broch, Laurent; En Naciri, Aotmane; Johann, Luc

    2008-06-09

    The characterization of anisotropic materials and complex systems by ellipsometry has pushed the design of instruments to require the measurement of the full reflection Mueller matrix of the sample with great precision. Therefore Mueller matrix ellipsometers have emerged over the past twenty years. The values of some coefficients of the matrix can be very small, and errors due to noise or systematic errors can induce a distorted analysis. We present a detailed characterization of the systematic errors for a Mueller matrix ellipsometer in the dual-rotating compensator configuration. Starting from a general formalism, we derive explicit first-order expressions for the errors on all the coefficients of the Mueller matrix of the sample. The errors caused by inaccuracy of the azimuthal arrangement of the optical components and residual ellipticity introduced by imperfect optical elements are shown. A new method based on a four-zone averaging measurement is proposed to cancel the systematic errors.

  16. Analysis and Correction of Systematic Height Model Errors

    NASA Astrophysics Data System (ADS)

    Jacobsen, K.

    2016-06-01

    The geometry of digital height models (DHM) determined with optical satellite stereo combinations depends upon the image orientation, influenced by the satellite camera, the system calibration and attitude registration. As a standard these days, the image orientation is available in the form of rational polynomial coefficients (RPC). Usually a bias correction of the RPC based on ground control points is required. In most cases the bias correction requires an affine transformation, sometimes only shifts, in image or object space. For some satellites and some cases, as caused by a small base length, such an image orientation does not lead to the possible accuracy of height models. As reported e.g. by Yong-hua et al. 2015 and Zhang et al. 2015, especially the Chinese stereo satellite ZiYuan-3 (ZY-3) has a limited calibration accuracy and just an attitude recording of 4 Hz, which may not be satisfying. Zhang et al. 2015 tried to improve the attitude based on the color sensor bands of ZY-3, but the color images are not always available, nor is detailed satellite orientation information. There is a tendency toward systematic deformation in a Pléiades tri-stereo combination with small base length. The small base length amplifies small systematic errors in object space. But also in some other satellite stereo combinations systematic height model errors have been detected. The largest influence is the unsatisfactory leveling of height models, but low-frequency height deformations can also be seen. A tilt of the DHM can in theory be eliminated by ground control points (GCP), but often the GCP accuracy and distribution are not optimal, not allowing a correct leveling of the height model. In addition, a model deformation at GCP locations may lead to suboptimal DHM leveling. Supported by reference height models, better accuracy has been reached. As reference height models, the Shuttle Radar Topography Mission (SRTM) digital surface model (DSM) or the new AW3D30 DSM, based on ALOS PRISM images, are…

  17. Improved arrayed-waveguide-grating layout avoiding systematic phase errors.

    PubMed

    Ismail, Nur; Sun, Fei; Sengo, Gabriel; Wörhoff, Kerstin; Driessen, Alfred; de Ridder, René M; Pollnau, Markus

    2011-04-25

    We present a detailed description of an improved arrayed-waveguide-grating (AWG) layout for both low and high diffraction orders. The novel layout presents identical bends across the entire array; in this way systematic phase errors arising from different bends, which are inherent to conventional AWG designs, are completely eliminated. In addition, for high-order AWGs our design results in more than 50% reduction of the occupied area on the wafer. We present an experimental characterization of a low-order device fabricated according to this geometry. The device has a resolution of 5.5 nm, low intrinsic losses (< 2 dB) in the wavelength region of interest for the application, and is polarization insensitive over a wide spectral range of 215 nm.

  18. Systematic error analysis and correction in quadriwave lateral shearing interferometer

    NASA Astrophysics Data System (ADS)

    Zhu, Wenhua; Li, Jinpeng; Chen, Lei; Zheng, Donghui; Yang, Ying; Han, Zhigang

    2016-12-01

    To obtain high-precision and high-resolution measurement of dynamic wavefront, the systematic error of the quadriwave lateral shearing interferometer (QWLSI) is analyzed and corrected. The interferometer combines a chessboard grating with an order selection mask to select four replicas of the wavefront under test. A collimating lens is introduced to collimate the replicas, which not only eliminates the coma induced by the shear between each two replicas, but also avoids the astigmatism and defocus caused by CCD tilt. Besides, this configuration permits the shear amount to vary from zero, which benefits calibrating the systematic errors. A practical transmitted wavefront was measured by the QWLSI with different shear amounts. The systematic errors of reconstructed wavefronts are well suppressed. The standard deviation of root mean square is 0.8 nm, which verifies the stability and reliability of QWLSI for dynamic wavefront measurement.

  19. Optimal input design for aircraft instrumentation systematic error estimation

    NASA Technical Reports Server (NTRS)

    Morelli, Eugene A.

    1991-01-01

    A new technique for designing optimal flight test inputs for accurate estimation of instrumentation systematic errors was developed and demonstrated. A simulation model of the F-18 High Angle of Attack Research Vehicle (HARV) aircraft was used to evaluate the effectiveness of the optimal input compared to an input recorded during flight test. Instrumentation systematic error parameter estimates and their standard errors were compared. It was found that the optimal input design improved the error parameter estimates and their accuracies for an input of fixed duration. Pilot acceptability of the optimal input design was demonstrated using a six degree-of-freedom fixed base piloted simulation of the F-18 HARV. The technique described in this work provides a practical, optimal procedure for designing inputs for data compatibility experiments.

  20. Neutrino spectrum at the far detector - systematic errors

    SciTech Connect

    Szleper, M.; Para, A.

    2001-10-01

    Neutrino oscillation experiments often employ two identical detectors to minimize errors due to an inadequately known neutrino beam. We examine various systematic effects related to the prediction of the neutrino spectrum in the 'far' detector on the basis of the spectrum observed at the 'near' detector. We propose a novel method for deriving the far detector spectrum. This method is less sensitive to the details of the understanding of the neutrino beam line and the hadron production spectra than the commonly used 'double ratio' method, thus allowing the systematic errors to be reduced.

  1. Analysis of error-prone survival data under additive hazards models: measurement error effects and adjustments.

    PubMed

    Yan, Ying; Yi, Grace Y

    2016-07-01

    Covariate measurement error occurs commonly in survival analysis. Under the proportional hazards model, measurement error effects have been well studied, and various inference methods have been developed to correct for error effects under such a model. In contrast, error-contaminated survival data under the additive hazards model have received relatively less attention. In this paper, we investigate this problem by exploring measurement error effects on parameter estimation and the change of the hazard function. New insights of measurement error effects are revealed, as opposed to well-documented results for the Cox proportional hazards model. We propose a class of bias correction estimators that embraces certain existing estimators as special cases. In addition, we exploit the regression calibration method to reduce measurement error effects. Theoretical results for the developed methods are established, and numerical assessments are conducted to illustrate the finite sample performance of our methods.
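
    As a toy illustration of the regression calibration idea mentioned at the end of the abstract (using an ordinary linear outcome rather than the authors' additive hazards estimator), the sketch below shows how a calibrated covariate recovers a regression coefficient that the error-prone surrogate attenuates. All numbers and the validation-subsample design are assumptions.

```python
import numpy as np

rng = np.random.default_rng(11)
n = 5000

# True covariate X, error-prone surrogate W = X + U (classical additive error),
# and an outcome that depends on X with true coefficient 1.0.
x = rng.normal(0.0, 1.0, n)
w = x + rng.normal(0.0, 0.7, n)
y = 1.0 * x + rng.normal(0.0, 1.0, n)

# A validation subsample (first 500 subjects) observes both X and W,
# which lets us estimate the calibration function E[X | W] = a*W + b.
val = slice(0, 500)
a, b = np.polyfit(w[val], x[val], deg=1)     # slope, intercept
x_hat = a * w + b                            # calibrated covariate for everyone

naive_slope = np.polyfit(w, y, deg=1)[0]     # attenuated towards zero
calib_slope = np.polyfit(x_hat, y, deg=1)[0] # roughly recovers the true value
print("naive slope:     ", round(float(naive_slope), 3))
print("calibrated slope:", round(float(calib_slope), 3))
```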

  2. Reducing systematic error in weak lensing cluster surveys

    SciTech Connect

    Utsumi, Yousuke; Miyazaki, Satoshi; Hamana, Takashi; Geller, Margaret J.; Kurtz, Michael J.; Fabricant, Daniel G.; Dell'Antonio, Ian P.; Oguri, Masamune

    2014-05-10

    Weak lensing provides an important route toward collecting samples of clusters of galaxies selected by mass. Subtle systematic errors in image reduction can compromise the power of this technique. We use the B-mode signal to quantify this systematic error and to test methods for reducing this error. We show that two procedures are efficient in suppressing systematic error in the B-mode: (1) refinement of the mosaic CCD warping procedure to conform to absolute celestial coordinates and (2) truncation of the smoothing procedure on a scale of 10'. Application of these procedures reduces the systematic error to 20% of its original amplitude. We provide an analytic expression for the distribution of the highest peaks in noise maps that can be used to estimate the fraction of false peaks in the weak-lensing κ-signal-to-noise ratio (S/N) maps as a function of the detection threshold. Based on this analysis, we select a threshold S/N = 4.56 for identifying an uncontaminated set of weak-lensing peaks in two test fields covering a total area of ∼3 deg². Taken together these fields contain seven peaks above the threshold. Among these, six are probable systems of galaxies and one is a superposition. We confirm the reliability of these peaks with dense redshift surveys, X-ray, and imaging observations. The systematic error reduction procedures we apply are general and can be applied to future large-area weak-lensing surveys. Our high-peak analysis suggests that with an S/N threshold of 4.5, there should be only 2.7 spurious weak-lensing peaks even in an area of 1000 deg², where we expect ∼2000 peaks based on our Subaru fields.

  3. Tune shifts due to systematic errors in bend magnets

    SciTech Connect

    Douglas, D.

    1983-12-01

    The presence of systematic error multipoles in bend magnets, persistent currents at low magnet excitation, and saturation effects at high magnet excitation may all lead to tune shifts which could prove detrimental to the operation of the SSC. It is the purpose of this note to report estimates of the magnitude of these tune shifts and the corrector strengths required to circumvent them.

  4. Students' Systematic Errors When Solving Kinetic and Chemical Equilibrium Problems.

    ERIC Educational Resources Information Center

    BouJaoude, Saouma

    Although students' misconceptions about the concept of chemical equilibrium has been the focus of numerous investigations, few have investigated students' systematic errors when solving equilibrium problems at the college level. Students (n=189) enrolled in the second semester of a first year chemistry course for science and engineering majors at…

  5. Medication Errors in the Southeast Asian Countries: A Systematic Review

    PubMed Central

    Salmasi, Shahrzad; Khan, Tahir Mehmood; Hong, Yet Hoi; Ming, Long Chiau; Wong, Tin Wui

    2015-01-01

    Background Medication error (ME) is a worldwide issue, but most studies on ME have been undertaken in developed countries and very little is known about ME in Southeast Asian countries. This study aimed to systematically identify and review research done on ME in Southeast Asian countries in order to identify common types of ME and estimate its prevalence in this region. Methods The literature relating to MEs in Southeast Asian countries was systematically reviewed in December 2014 using Embase, Medline, PubMed, ProQuest Central and CINAHL. Inclusion criteria were studies (in any language) that investigated the incidence and the contributing factors of ME in patients of all ages. Results The 17 included studies reported data from six of the eleven Southeast Asian countries: five studies in Singapore, four in Malaysia, three in Thailand, three in Vietnam, one in the Philippines and one in Indonesia. There were no data on MEs in Brunei, Laos, Cambodia, Myanmar and Timor. Of the seventeen included studies, eleven measured administration errors, four focused on prescribing errors, three on preparation errors, three on dispensing errors and two on transcribing errors. There was only one study of reconciliation error. Three studies were interventional. Discussion The most frequently reported types of administration error were incorrect time, omission error and incorrect dose. Staff shortages, and hence heavy workload for nurses, doctor/nurse distraction, and misinterpretation of the prescription/medication chart were identified as contributing factors of ME. There is a serious lack of studies on this topic in this region, which needs to be addressed if the issue of ME is to be fully understood and addressed. PMID:26340679

  6. The Effect of Systematic Error in Forced Oscillation Testing

    NASA Technical Reports Server (NTRS)

    Williams, Brianne Y.; Landman, Drew; Flory, Isaac L., IV; Murphy, Patrick C.

    2012-01-01

    One of the fundamental problems in flight dynamics is the formulation of aerodynamic forces and moments acting on an aircraft in arbitrary motion. Classically, conventional stability derivatives are used for the representation of aerodynamic loads in the aircraft equations of motion. However, for modern aircraft with highly nonlinear and unsteady aerodynamic characteristics undergoing maneuvers at high angle of attack and/or angular rates the conventional stability derivative model is no longer valid. Attempts to formulate aerodynamic model equations with unsteady terms are based on several different wind tunnel techniques: for example, captive, wind tunnel single degree-of-freedom, and wind tunnel free-flying techniques. One of the most common techniques is forced oscillation testing. However, the forced oscillation testing method does not address the systematic and systematic correlation errors from the test apparatus that cause inconsistencies in the measured oscillatory stability derivatives. The primary objective of this study is to identify the possible sources and magnitude of systematic error in representative dynamic test apparatuses. Sensitivities of the longitudinal stability derivatives to systematic errors are computed, using a high fidelity simulation of a forced oscillation test rig, and assessed using both Design of Experiments and Monte Carlo methods.

  7. Investigation of systematic CD distribution error on intrafield

    NASA Astrophysics Data System (ADS)

    Kim, Keunjun; Kim, Daewoo; Kang, Junghyun; Jeong, Inseok; Lee, Sungkoo; Kim, Hyeongsoo

    2016-03-01

    As feature size shrinks, better critical dimension uniformity (CDU) is highly demanded from the standpoint of device characteristics. Intra-field CDU is one of the main contributors to the total CD variation budget. In particular, the systematic CD distribution at shot, bank and MAT boundaries should be strongly considered to minimize repeated errors and guarantee high yield, even though it is not prominent in the overall CDU value. In this paper, we investigated several factors that affect the systematic CD distribution error within the field. First of all, localized mask CD variation caused by electron-beam scattering over a local region, development loading and etch loading effects is directly printed on the wafer. Appropriate mask fabrication suppresses CD variation at the boundary region. Secondly, the chemical flare effect is expected to create a CD gradient at the boundary region. A change of photo-acid concentration by sub-resolution assist features (SRAF) can reduce the CD gradient. We demonstrated the SRAF size dependency in the positive tone develop (PTD) and negative tone develop (NTD) cases. Thirdly, out-of-field stray light (OOFSL) due to adjacent exposed fields causes a CD gradient at the field boundary. Exposure dose reduction is expected as a solution in this case. Even if we perfectly control CDU at the boundary region after mask patterning, other process issues such as etch and CMP loading effects also worsen the CD distribution at the boundary region. Through consideration of the above factors, we optimized the systematic CD distribution error at the boundary region before etch. Furthermore, we compared several techniques to compensate for the post-etch systematic CD distribution.

  8. Spatial reasoning in the treatment of systematic sensor errors

    SciTech Connect

    Beckerman, M.; Jones, J.P.; Mann, R.C.; Farkas, L.A.; Johnston, S.E.

    1988-01-01

    In processing ultrasonic and visual sensor data acquired by mobile robots, systematic errors can occur. The sonar errors include distortions in size and surface orientation due to the beam resolution, and false echoes. The vision errors include, among others, ambiguities in discriminating depth discontinuities from intensity gradients generated by variations in surface brightness. In this paper we present a methodology for the removal of systematic errors using data from the sonar sensor domain to guide the processing of information in the vision domain, and vice versa. During the sonar data processing some errors are removed from 2D navigation maps through pattern analyses and consistent-labelling conditions, using spatial reasoning about the sonar beam and object characteristics. Others are removed using visual information. In the vision data processing vertical edge segments are extracted using a Canny-like algorithm, and are labelled. Object edge features are then constructed from the segments using statistical and spatial analyses. A least-squares method is used during the statistical analysis, and sonar range data are used in the spatial analysis. 7 refs., 10 figs.

  9. Sensitivity analysis of geometric errors in additive manufacturing medical models.

    PubMed

    Pinto, Jose Miguel; Arrieta, Cristobal; Andia, Marcelo E; Uribe, Sergio; Ramos-Grez, Jorge; Vargas, Alex; Irarrazaval, Pablo; Tejos, Cristian

    2015-03-01

    Additive manufacturing (AM) models are used in medical applications for surgical planning, prosthesis design and teaching. For these applications, the accuracy of the AM models is essential. Unfortunately, this accuracy is compromised due to errors introduced by each of the building steps: image acquisition, segmentation, triangulation, printing and infiltration. However, the contribution of each step to the final error remains unclear. We performed a sensitivity analysis comparing errors obtained from a reference with those obtained modifying parameters of each building step. Our analysis considered global indexes to evaluate the overall error, and local indexes to show how this error is distributed along the surface of the AM models. Our results show that the standard building process tends to overestimate the AM models, i.e. models are larger than the original structures. They also show that the triangulation resolution and the segmentation threshold are critical factors, and that the errors are concentrated at regions with high curvatures. Errors could be reduced choosing better triangulation and printing resolutions, but there is an important need for modifying some of the standard building processes, particularly the segmentation algorithms.

  10. Systematic Errors in GNSS Radio Occultation Data - Part 2

    NASA Astrophysics Data System (ADS)

    Foelsche, Ulrich; Danzer, Julia; Scherllin-Pirscher, Barbara; Schwärz, Marc

    2014-05-01

    The Global Navigation Satellite System (GNSS) Radio Occultation (RO) technique has the potential to deliver climate benchmark measurements of the upper troposphere and lower stratosphere (UTLS), since RO data can be traced, in principle, to the international standard for the second. Climatologies derived from RO data from different satellites indeed show an amazing consistency (better than 0.1 K). The value of RO data for climate monitoring is therefore increasingly recognized by the scientific community, but there is also concern about potential residual systematic errors in RO climatologies, which might be common to data from all satellites. We have analyzed different potential error sources and present results on two of them. (1) If temperature is calculated from observed refractivity with the assumption that water vapor is zero, the product is called "dry temperature", which is commonly used to study the Earth's atmosphere, e.g., when analyzing temperature trends due to global warming. Dry temperature is a useful quantity, since it does not need additional background information in its retrieval. Concurrent trends in water vapor could, however, mimic false trends in dry temperature. We analyzed this effect and identified the regions in the atmosphere where it is safe to take dry temperature as a proxy for physical temperature. We found that the heights where specified values of differences between dry and physical temperature are encountered increase by about 150 m per decade, with little difference among the 38 climate models under investigation. (2) All current RO retrievals use a "classic" set of (measured) constants relating atmospheric microwave refractivity to temperature, pressure, and water vapor partial pressure. With the steadily increasing quality of RO climatologies, errors in these constants are no longer negligible. We show how these parameters can be related to more fundamental physical quantities (fundamental constants, the…
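
    For readers unfamiliar with the "dry temperature" product discussed in point (1), the sketch below shows the basic retrieval chain under the zero-water-vapor assumption: refractivity plus hydrostatic balance gives pressure, and the dry-air refractivity relation then gives temperature. The refractivity profile, top boundary temperature and constants are illustrative assumptions, not an operational RO retrieval.

```python
import numpy as np

# Minimal "dry temperature" retrieval sketch.  With water vapour set to zero,
# refractivity obeys N = k1 * p / T, and hydrostatic balance dp/dz = -rho*g
# closes the system, so p follows from a downward integration and
# T_dry = k1 * p / N.
k1 = 77.6        # K / hPa  ("classic" dry-air refractivity constant)
R_d = 287.06     # J / (kg K)
g = 9.80665      # m / s^2

z = np.arange(40000.0, 4999.0, -200.0)            # heights [m], top to bottom
N = 300.0 * np.exp(-z / 7500.0)                   # synthetic refractivity [N-units]

p = np.empty_like(z)                              # pressure [hPa]
p[0] = N[0] * 250.0 / k1                          # top boundary: assumed T_top = 250 K
for i in range(len(z) - 1):
    n_mid = 0.5 * (N[i] + N[i + 1])
    # rho = p/(R_d*T) and p/T = N/k1  =>  dp/dz = -g*N/(k1*R_d)   (p in hPa)
    p[i + 1] = p[i] + g * n_mid / (k1 * R_d) * (z[i] - z[i + 1])

T_dry = k1 * p / N                                # dry temperature [K]
print("dry temperature at 10 km: %.1f K" % T_dry[np.argmin(np.abs(z - 10000.0))])
```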

  11. Systematic tests for position-dependent additive shear bias

    NASA Astrophysics Data System (ADS)

    van Uitert, Edo; Schneider, Peter

    2016-11-01

    We present new tests to identify stationary position-dependent additive shear biases in weak gravitational lensing data sets. These tests are important diagnostics for currently ongoing and planned cosmic shear surveys, as such biases induce coherent shear patterns that can mimic and potentially bias the cosmic shear signal. The central idea of these tests is to determine the average ellipticity of all galaxies with shape measurements in a grid in the pixel plane. The distribution of the absolute values of these averaged ellipticities can be compared to randomised catalogues; a difference points to systematics in the data. In addition, we introduce a method to quantify the spatial correlation of the additive bias, which suppresses the contribution from cosmic shear and therefore eases the identification of a position-dependent additive shear bias in the data. We apply these tests to the publicly available shear catalogues from the Canada-France-Hawaii Telescope Lensing Survey (CFHTLenS) and the Kilo Degree Survey (KiDS) and find evidence for a small but non-negligible residual additive bias at small scales. As this residual bias is smaller than the error on the shear correlation signal at those scales, it is highly unlikely that it causes a significant bias in the published cosmic shear results of CFHTLenS. In CFHTLenS, the amplitude of this systematic signal is consistent with zero in fields where the number of stars used to model the point spread function (PSF) is higher than average, suggesting that the position-dependent additive shear bias originates from undersampled PSF variations across the image.
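
    A minimal numerical sketch of the grid test described here: average the ellipticities in cells of the pixel plane, take the absolute values of the cell means, and compare with a catalogue whose positions have been randomised. The field size, grid, noise level and injected corner bias are all invented for illustration and are unrelated to CFHTLenS or KiDS.

```python
import numpy as np

rng = np.random.default_rng(5)
n_gal = 200_000
x = rng.uniform(0, 20000, n_gal)        # pixel coordinates
y = rng.uniform(0, 20000, n_gal)
e1 = rng.normal(0, 0.28, n_gal)         # intrinsic ellipticity noise
e2 = rng.normal(0, 0.28, n_gal)
# inject a toy position-dependent additive bias in one corner of the field
corner = (x < 4000) & (y < 4000)
e1[corner] += 0.01

def cell_mean_abs_e(x, y, e1, e2, n_cells=20):
    """Average ellipticity per grid cell in the pixel plane; return |<e>| values."""
    xi = np.minimum((x / x.max() * n_cells).astype(int), n_cells - 1)
    yi = np.minimum((y / y.max() * n_cells).astype(int), n_cells - 1)
    idx = xi * n_cells + yi
    counts = np.bincount(idx, minlength=n_cells**2)
    m1 = np.bincount(idx, weights=e1, minlength=n_cells**2) / np.maximum(counts, 1)
    m2 = np.bincount(idx, weights=e2, minlength=n_cells**2) / np.maximum(counts, 1)
    return np.hypot(m1, m2)[counts > 0]

observed = cell_mean_abs_e(x, y, e1, e2)
# Randomised catalogue: shuffle positions so any position-dependent bias is destroyed
perm = rng.permutation(n_gal)
randomised = cell_mean_abs_e(x[perm], y[perm], e1, e2)

print("mean |<e>| per cell, data vs randomised: %.4f vs %.4f"
      % (observed.mean(), randomised.mean()))
```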

  12. Quality assessment of speckle patterns for DIC by consideration of both systematic errors and random errors

    NASA Astrophysics Data System (ADS)

    Su, Yong; Zhang, Qingchuan; Xu, Xiaohai; Gao, Zeren

    2016-11-01

    The performance of digital image correlation (DIC) is influenced significantly by the quality of speckle patterns. Thus, it is crucial to present a valid and practical method to assess the quality of speckle patterns. However, existing assessment methods either lack a solid theoretical foundation or fail to consider the errors due to interpolation. In this work, it is proposed to assess the quality of speckle patterns by estimating the root mean square error (RMSE) of DIC, which is the square root of the sum of the squares of the systematic error and the random error. Two performance evaluation parameters, respectively the maximum and the quadratic mean of the RMSE, are proposed to characterize the total error. An efficient algorithm is developed to estimate these parameters, and the correctness of this algorithm is verified by numerical experiments for both a 1-dimensional signal and actual speckle images. The influences of the correlation criterion, shape function order, and sub-pixel registration algorithm are briefly discussed. Compared to existing methods, the method presented in this paper is more valid due to the consideration of both measurement accuracy and precision.
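
    The error decomposition used here is easy to state in code: from repeated measurements at known imposed shifts, the bias gives the systematic error, the scatter gives the random error, and their quadrature sum gives the RMSE; the maximum and quadratic mean of the RMSE over shifts are then the two evaluation parameters. The numbers below are synthetic stand-ins, not the paper's data.

```python
import numpy as np

# Suppose a translation test (real or simulated) applied known sub-pixel shifts
# and DIC returned measured displacements; the data here are illustrative.
rng = np.random.default_rng(9)
true_shifts = np.linspace(0.0, 1.0, 11)                           # pixels
bias = true_shifts + 0.01 * np.sin(2 * np.pi * true_shifts)       # interpolation-induced bias
measured = bias[:, None] + rng.normal(0, 0.005, (11, 50))         # 50 noisy repeats per shift

errors = measured - true_shifts[:, None]
systematic = errors.mean(axis=1)            # bias at each imposed shift
random = errors.std(axis=1)                 # random error at each imposed shift
rmse = np.sqrt(systematic**2 + random**2)   # RMSE = sqrt(systematic^2 + random^2)

# The two evaluation parameters proposed in the paper: the maximum and the
# quadratic mean of the RMSE over the imposed shifts.
print("max RMSE:            %.4f px" % rmse.max())
print("quadratic-mean RMSE: %.4f px" % np.sqrt(np.mean(rmse**2)))
```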

  13. More systematic errors in the measurement of power spectral density

    NASA Astrophysics Data System (ADS)

    Mack, Chris A.

    2015-07-01

    Power spectral density (PSD) analysis is an important part of understanding line-edge and linewidth roughness in lithography. But uncertainty in the measured PSD, both random and systematic, complicates interpretation. It is essential to understand and quantify the sources of the measured PSD's uncertainty and to develop mitigation strategies. Both analytical derivations and simulations of rough features are used to evaluate data window functions for reducing spectral leakage and to understand the impact of data detrending on biases in PSD, autocovariance function (ACF), and height-to-height covariance function measurement. A generalized Welch window was found to be best among the windows tested. Linear detrending for line-edge roughness measurement results in underestimation of the low-frequency PSD and errors in the ACF and height-to-height covariance function. Measuring multiple edges per scanning electron microscope image reduces this detrending bias.
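
    A small sketch of the measurement choices discussed here, computing a line-edge PSD with and without a taper window and with mean versus linear detrending (a Hann window stands in for the paper's generalized Welch window; the edge profile is synthetic).

```python
import numpy as np

rng = np.random.default_rng(13)
n, dx = 2048, 1.0                        # edge points and sampling step (nm)
edge = np.cumsum(rng.normal(0, 0.1, n))  # synthetic rough edge (random-walk-like)

def psd(profile, window=None, detrend="mean"):
    """One-sided PSD of a line-edge profile with optional window and detrending."""
    t = np.arange(len(profile))
    if detrend == "mean":
        y = profile - profile.mean()
    else:  # linear detrend
        y = profile - np.polyval(np.polyfit(t, profile, 1), t)
    if window is not None:
        w = window(len(y))
        y = y * w / np.sqrt(np.mean(w**2))   # preserve total power
    spec = np.abs(np.fft.rfft(y))**2 * dx / len(y)
    freqs = np.fft.rfftfreq(len(y), dx)
    return freqs, spec

f_rect, p_rect = psd(edge)                        # rectangular window (leaky)
f_hann, p_hann = psd(edge, window=np.hanning)     # tapered window reduces leakage
f_lin,  p_lin  = psd(edge, detrend="linear")      # linear detrend biases low frequencies
print("lowest nonzero-frequency PSD (mean / Hann / linear-detrend):",
      p_rect[1], p_hann[1], p_lin[1])
```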

  14. TRAINING ERRORS AND RUNNING RELATED INJURIES: A SYSTEMATIC REVIEW

    PubMed Central

    Buist, Ida; Sørensen, Henrik; Lind, Martin; Rasmussen, Sten

    2012-01-01

    Purpose: The purpose of this systematic review was to examine the link between training characteristics (volume, duration, frequency, and intensity) and running related injuries. Methods: A systematic search was performed in PubMed, Web of Science, Embase, and SportDiscus. Studies were included if they examined novice, recreational, or elite runners between the ages of 18 and 65. Exposure variables were training characteristics defined as volume, distance or mileage, time or duration, frequency, intensity, speed or pace, or similar terms. The outcome of interest was Running Related Injuries (RRI) in general or specific RRI in the lower extremity or lower back. Methodological quality was evaluated using quality assessment tools of 11 to 16 items. Results: After examining 4561 titles and abstracts, 63 articles were identified as potentially relevant. Finally, nine retrospective cohort studies, 13 prospective cohort studies, six case-control studies, and three randomized controlled trials were included. The mean quality score was 44.1%. Conflicting results were reported on the relationships between volume, duration, intensity, and frequency and RRI. Conclusion: It was not possible to identify which training errors were related to running related injuries. Still, well supported data on which training errors relate to or cause running related injuries is highly important for determining proper prevention strategies. If methodological limitations in measuring training variables can be resolved, more work can be conducted to define training and the interactions between different training variables, create several hypotheses, test the hypotheses in a large scale prospective study, and explore cause and effect relationships in randomized controlled trials. Level of evidence: 2a PMID:22389869

  15. Using Laser Scanners to Augment the Systematic Error Pointing Model

    NASA Astrophysics Data System (ADS)

    Wernicke, D. R.

    2016-08-01

    The antennas of the Deep Space Network (DSN) rely on precise pointing algorithms to communicate with spacecraft that are billions of miles away. Although the existing systematic error pointing model is effective at reducing blind pointing errors due to static misalignments, several of its terms have a strong dependence on seasonal and even daily thermal variation and are thus not easily modeled. Changes in the thermal state of the structure create a separation from the model and introduce a varying pointing offset. Compensating for this varying offset is possible by augmenting the pointing model with laser scanners. In this approach, laser scanners mounted to the alidade measure structural displacements while a series of transformations generate correction angles. Two sets of experiments were conducted in August 2015 using commercially available laser scanners. When compared with historical monopulse corrections under similar conditions, the computed corrections are within 3 mdeg of the mean. However, although the results show promise, several key challenges relating to the sensitivity of the optical equipment to sunlight render an implementation of this approach impractical. Other measurement devices such as inclinometers may be implementable at a significantly lower cost.

  16. The effect of horizontal resolution on systematic errors of the GLA forecast model

    NASA Technical Reports Server (NTRS)

    Chen, Tsing-Chang; Chen, Jau-Ming; Pfaendtner, James

    1990-01-01

    Systematic prediction errors of the Goddard Laboratory for Atmospheres (GLA) forecast system are reduced when the higher-resolution (2 x 2.5 deg) model version is used. Based on a budget analysis of the 200-mb eddy streamfunction, the improvement of stationary eddy forecasting is seen to be caused by the following mechanism: by increasing the horizontal spatial resolution of the forecast model, atmospheric diabatic heating over the three tropical continents is changed in a way that intensifies the planetary-scale divergent circulations associated with the three pairs of divergent-convergent centers over these continents. The intensified divergent circulation results in an enhancement of vorticity sources in the Northern Hemisphere. The additional vorticity is advected eastward by a stationary wave train along 30 deg N, thereby reducing systematic errors in the lower-resolution (4 x 5 deg) GLA model.

  17. Minor Planet Observations to Identify Reference System Systematic Errors

    NASA Astrophysics Data System (ADS)

    Hemenway, Paul D.; Duncombe, R. L.; Castelaz, M. W.

    2011-04-01

    In the 1930's Brouwer proposed using minor planets to correct the Fundamental System of celestial coordinates. Since then, many projects have used or proposed to use visual, photographic, photo detector, and space based observations to that end. From 1978 to 1990, a project was undertaken at the University of Texas utilizing the long focus and attendant advantageous plate scale (c. 7.37"/mm) of the 2.1m Otto Struve reflector's Cassegrain focus. The project followed precepts given in 1979. The program had several potential advantages over previous programs including high inclination orbits to cover half the celestial sphere, and, following Kristensen, the use of crossing points to remove entirely systematic star position errors from some observations. More than 1000 plates were obtained of 34 minor planets as part of this project. In July 2010 McDonald Observatory donated the plates to the Pisgah Astronomical Research Institute (PARI) in North Carolina. PARI is in the process of renovating the Space Telescope Science Institute GAMMA II modified PDS microdensitometer to scan the plates in the archives. We plan to scan the minor planet plates, reduce the plates to the densified ICRS using the UCAC4 positions (or the best available positions at the time of the reductions), and then determine the utility of attempting to find significant systematic corrections. Here we report the current status of various aspects of the project. Support from the National Science Foundation in the last millennium is gratefully acknowledged, as is help from Judit Ries and Wayne Green in packing and transporting the plates.

  18. Systematics for checking geometric errors in CNC lathes

    NASA Astrophysics Data System (ADS)

    Araújo, R. P.; Rolim, T. L.

    2015-10-01

    Non-idealities present in machine tools directly compromise both the geometry and the dimensions of machined parts, generating deviations from the design. Given the competitive scenario among different companies, it is necessary to have knowledge of the geometric behavior of these machines in order to establish their processing capability, avoiding waste of time and materials as well as satisfying customer requirements. Yet despite the fact that geometric tests are important and necessary to verify that the machine is operating correctly, thereby preventing future damage, most users do not apply such tests to their machines for lack of knowledge or lack of proper motivation, basically due to two factors: the long time required and the high cost of testing. This work proposes a systematic procedure for checking straightness and perpendicularity errors in CNC lathes that demands little time and cost while retaining high metrological reliability, to be used on the factory floors of small and medium-size businesses to ensure the quality of their products and make them competitive.

  19. Effect of Time Step On Atmospheric Model Systematic Errors

    NASA Astrophysics Data System (ADS)

    Williamson, D. L.

    Semi-Lagrangian approximations are becoming more common in operational Numerical Weather Prediction models because of the efficiency allowed by their long time steps. The early work that demonstrated that semi-Lagrangian forecasts were comparable to Eulerian in accuracy was based on mid-latitude short-range forecasts which were dominated by dynamical processes. These indicated no significant loss of accuracy with semi-Lagrangian approximations and long time steps. Today, subgrid-scale parameterizations play a larger role in even short range forecasts. While not ignored, the effect of a longer time step on the parameterizations has been less thoroughly studied. We present results from the NCAR CCM3 that indicate that the systematic errors in tropical precipitation patterns can depend on the time step. The actual dependency depends on the parameterization suite of the model. We identify the dependency in aqua-planet integrations. With the CCM3 parameterization suite, longer time steps result in double precipitation maxima straddling the SST maximum while shorter time steps result in a single precipitation maximum over the SST maximum. Other parameterization suites behave differently. The cause of the dependency will be discussed.

  20. A study of systematic errors in the PMD CamBoard nano

    NASA Astrophysics Data System (ADS)

    Chow, Jacky C. K.; Lichti, Derek D.

    2013-04-01

    Time-of-flight-based three-dimensional cameras are the state-of-the-art imaging modality for acquiring rapid 3D position information. Unlike any other technology on the market, they can deliver 2D images co-located with distance information at every pixel location, without any shadows. Recent technological advancements have begun miniaturizing such technology to be more suitable for laptops and eventually cellphones. This paper explores the systematic errors inherent to the new PMD CamBoard nano camera. As the world's most compact 3D time-of-flight camera, it has applications in a wide domain, such as gesture control and facial recognition. To model the systematic errors, a one-step point-based and plane-based bundle adjustment method is used. It simultaneously estimates all systematic errors and unknown parameters by minimizing the residuals of image measurements, distance measurements, and amplitude measurements in a least-squares sense. The presented self-calibration method only requires a standard checkerboard target on a flat plane, making it a suitable candidate for on-site calibration. In addition, because distances are only constrained to lie on a plane, the raw pixel-by-pixel distance observations can be used. This makes it possible to increase the number of distance observations in the adjustment with ease. The results from this paper indicate that amplitude-dependent range errors are the dominant error source for the nano under low-scattering imaging configurations. After user self-calibration, the RMSE of the range observations was reduced by almost 50%, delivering range measurements at a precision of approximately 2.5 cm within a 70 cm interval.

  1. Systematic errors in two-dimensional digital image correlation due to lens distortion

    NASA Astrophysics Data System (ADS)

    Pan, Bing; Yu, Liping; Wu, Dafang; Tang, Liqun

    2013-02-01

    Lens distortion is practically always present in a real optical imaging system, causing non-uniform geometric distortion in the recorded images and giving rise to additional errors in the displacement and strain results measured by two-dimensional digital image correlation (2D-DIC). In this work, the systematic errors in the displacement and strain results measured by 2D-DIC due to lens distortion are investigated theoretically, using the radial lens distortion model, and experimentally, through easy-to-implement rigid-body in-plane translation tests. Theoretical analysis shows that the displacement and strain errors at an interrogated image point are not only in linear proportion to the distortion coefficient of the camera lens used, but also depend on the point's distance from the distortion center and the magnitude of its displacement. To eliminate the systematic errors caused by lens distortion, a simple linear least-squares algorithm is proposed to estimate the distortion coefficient from the distorted displacement results of rigid-body in-plane translation tests, which can then be used to correct the distorted displacement fields to obtain unbiased displacement and strain fields. Experimental results verify the correctness of the theoretical derivation and the effectiveness of the proposed lens distortion correction method.
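
    The least-squares estimation step has a simple structure that can be sketched directly: under a first-order radial distortion model, the displacement error of a rigid translation is linear in the distortion coefficient, so a single normal equation recovers it. The model form, coordinates and noise level below are illustrative assumptions rather than the paper's exact formulation.

```python
import numpy as np

# Toy rigid-body translation test (synthetic data, hypothetical values):
# first-order radial distortion x_d = x * (1 + k * r^2), coordinates measured
# from the distortion centre, r^2 = x^2 + y^2 in pixels.
k_true = 1.0e-8          # distortion coefficient [1/px^2]
u0 = 5.0                 # imposed rigid translation along x [px]

rng = np.random.default_rng(17)
x1 = rng.uniform(-1000, 1000, 500)       # interrogated points before translation
y1 = rng.uniform(-1000, 1000, 500)
x2, y2 = x1 + u0, y1                     # true positions after translation

def distort_x(x, y, k):
    return x * (1.0 + k * (x**2 + y**2))

# "Measured" DIC displacement = difference of distorted x-coordinates (+ noise)
u_meas = distort_x(x2, y2, k_true) - distort_x(x1, y1, k_true) \
         + rng.normal(0, 0.005, x1.size)

# Displacement error is linear in k:  u_meas - u0 = k * [x2*r2^2 - x1*r1^2]
basis = x2 * (x2**2 + y2**2) - x1 * (x1**2 + y1**2)
k_est = np.sum(basis * (u_meas - u0)) / np.sum(basis**2)   # linear least squares
print("true k = %.2e, estimated k = %.2e" % (k_true, k_est))
```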

  2. Drug Administration Errors in Hospital Inpatients: A Systematic Review

    PubMed Central

    Berdot, Sarah; Gillaizeau, Florence; Caruba, Thibaut; Prognon, Patrice; Durieux, Pierre; Sabatier, Brigitte

    2013-01-01

    Context Drug administration in the hospital setting is the last barrier before a possible error reaches the patient. Objectives We aimed to analyze the prevalence and nature of administration error rates detected by the observation method. Data Sources Embase, MEDLINE, Cochrane Library from 1966 to December 2011 and reference lists of included studies. Study Selection Observational studies, cross-sectional studies, before-and-after studies, and randomized controlled trials that measured the rate of administration errors in inpatients were included. Data Extraction Two reviewers (senior pharmacists) independently identified studies for inclusion. One reviewer extracted the data; the second reviewer checked the data. The main outcome was the error rate, calculated as the number of errors excluding wrong-time errors divided by the Total Opportunity for Errors (TOE, the sum of the total number of doses ordered plus the unordered doses given), multiplied by 100. For studies that reported it, clinical impact was reclassified into four categories from fatal to minor or no impact. Due to large heterogeneity, results were expressed as median values (interquartile range, IQR), according to study design. Results Among 2088 studies, a total of 52 reported TOE. Most of the studies were cross-sectional studies (N=46). The median error rate excluding wrong-time errors for the cross-sectional studies using TOE was 10.5% [IQR: 7.3%-21.7%]. No fatal error was observed and most errors were classified as minor in the 18 studies in which clinical impact was analyzed. We did not find any evidence of publication bias. Conclusions Administration errors are frequent among inpatients. The median error rate excluding wrong-time errors for the cross-sectional studies using TOE was about 10%. A standardization of the administration error rate using the same denominator (TOE), numerator and types of errors is essential for further publications. PMID:23818992
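
    The outcome definition reduces to one line of arithmetic, shown below with hypothetical counts (the numbers are not from any of the included studies).

```python
# Error rate convention used in the review (wrong-time errors excluded):
doses_ordered = 1200                    # hypothetical counts for one observation study
unordered_doses_given = 25
errors_excluding_wrong_time = 130

toe = doses_ordered + unordered_doses_given            # Total Opportunity for Errors
error_rate = 100.0 * errors_excluding_wrong_time / toe
print(f"administration error rate: {error_rate:.1f}% of {toe} opportunities")
```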

  3. Assessment of systematic measurement errors for acoustic travel-time tomography of the atmosphere.

    PubMed

    Vecherin, Sergey N; Ostashev, Vladimir E; Wilson, D Keith

    2013-09-01

    Two algorithms are described for assessing systematic errors in acoustic travel-time tomography of the atmosphere, the goal of which is to reconstruct the temperature and wind velocity fields given the transducers' locations and the measured travel times of sound propagating between each speaker-microphone pair. The first algorithm aims at assessing the errors simultaneously with the mean field reconstruction. The second algorithm uses the results of the first algorithm to identify the ray paths corrupted by the systematic errors and then estimates these errors more accurately. Numerical simulations show that the first algorithm can improve the reconstruction when relatively small systematic errors are present in all paths. The second algorithm significantly improves the reconstruction when systematic errors are present in a few, but not all, ray paths. The developed algorithms were applied to experimental data obtained at the Boulder Atmospheric Observatory.

  4. Pressure Measurements Using an Airborne Differential Absorption Lidar. Part 1; Analysis of the Systematic Error Sources

    NASA Technical Reports Server (NTRS)

    Flamant, Cyrille N.; Schwemmer, Geary K.; Korb, C. Laurence; Evans, Keith D.; Palm, Stephen P.

    1999-01-01

    Remote airborne measurements of the vertical and horizontal structure of the atmospheric pressure field in the lower troposphere are made with an oxygen differential absorption lidar (DIAL). A detailed analysis of this measurement technique is provided which includes corrections for imprecise knowledge of the detector background level, the oxygen absorption line parameters, and variations in the laser output energy. In addition, we analyze other possible sources of systematic errors, including spectral effects related to aerosol and molecular scattering, interference by rotational Raman scattering, and interference by isotopic oxygen lines.

  5. An analysis of the least-squares problem for the DSN systematic pointing error model

    NASA Technical Reports Server (NTRS)

    Alvarez, L. S.

    1991-01-01

    A systematic pointing error model is used to calibrate antennas in the Deep Space Network. The least-squares problem is described and analyzed, along with the solution methods used to determine the model's parameters. Specifically studied are the rank degeneracy problems that result from beam pointing error measurement sets with inadequate sky coverage. A least-squares parameter subset selection method is described and its applicability to the systematic error modeling process is demonstrated on a Voyager 2 measurement distribution.
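
    The rank-degeneracy issue can be seen with a generic least-squares setup. The sketch below uses an invented azimuth/elevation harmonic model (not the actual DSN pointing model) and shows how limited sky coverage degrades the conditioning of the design matrix.

```python
# Sketch: a generic pointing-error design matrix fit by least squares.
# With measurements clustered near one azimuth the columns become nearly
# dependent, which shows up in the reported rank and condition number.
import numpy as np

def design_matrix(az, el):
    # columns: constant offset, sin/cos of azimuth, sin of elevation, cross term
    return np.column_stack([np.ones_like(az), np.sin(az), np.cos(az),
                            np.sin(el), az * el])

rng = np.random.default_rng(0)
true_params = np.array([0.01, 0.005, -0.003, 0.002, 0.001])   # invented model terms

# good coverage: azimuth and elevation spread over the visible sky
az_good = rng.uniform(0.0, 2.0 * np.pi, 60)
el_good = rng.uniform(0.2, 1.4, 60)

# poor coverage: all measurements near a single azimuth
az_poor = np.full(60, 1.0) + rng.normal(0.0, 1e-3, 60)
el_poor = rng.uniform(0.2, 1.4, 60)

for name, az, el in [("good coverage", az_good, el_good),
                     ("poor coverage", az_poor, el_poor)]:
    A = design_matrix(az, el)
    y = A @ true_params + rng.normal(0.0, 1e-4, az.size)
    _, _, rank, sv = np.linalg.lstsq(A, y, rcond=None)
    print(f"{name}: rank = {rank}/5, condition number = {sv[0] / sv[-1]:.1e}")
```

    Parameter subset selection then amounts to dropping or constraining the combinations of parameters that such poorly distributed measurements cannot resolve.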

  6. The systematic and random errors determination using realtime 3D surface tracking system in breast cancer

    NASA Astrophysics Data System (ADS)

    Kanphet, J.; Suriyapee, S.; Dumrongkijudom, N.; Sanghangthum, T.; Kumkhwao, J.; Wisetrintong, M.

    2016-03-01

    The purpose of this study was to determine the patient setup uncertainties in deep inspiration breath-hold (DIBH) radiation therapy for left breast cancer patients using a real-time 3D surface tracking system. Six breast cancer patients treated with 6 MV photon beams from a TrueBeam linear accelerator were selected. The patient setup errors and motion during treatment were observed and calculated for interfraction and intrafraction motion. The systematic and random errors were calculated in the vertical, longitudinal and lateral directions. From 180 images tracked before and during treatment, the maximum systematic errors of interfraction and intrafraction motion were 0.56 mm and 0.23 mm, and the maximum random errors of interfraction and intrafraction motion were 1.18 mm and 0.53 mm, respectively. Interfraction motion was more pronounced than intrafraction motion, and the systematic error had less impact than the random error. In conclusion, the intrafraction motion error from patient setup uncertainty is about half of the interfraction motion error, reflecting the stability of organ position achieved with DIBH. The systematic error is likewise about half of the random error, because the reproducibility of a modern linac reduces the systematic uncertainty effectively, while the random errors remain uncontrollable.
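
    One common convention for separating the two error components (which may differ in detail from the definitions used in this study) is to take the systematic error as the spread of the per-patient mean deviations and the random error as the root-mean-square of the per-patient standard deviations, as in the sketch below with invented data.

```python
# Sketch of a common setup-error decomposition: Sigma (systematic) = SD of the
# per-patient mean deviations; sigma (random) = RMS of the per-patient SDs.
# The deviations below are simulated, not the measurements from the study.
import numpy as np

rng = np.random.default_rng(1)
# hypothetical per-fraction setup deviations (mm): 6 patients x 15 fractions,
# each patient with an individual mean offset (systematic part) plus daily scatter
patient_offsets = rng.normal(0.0, 0.5, size=(6, 1))
deviations = rng.normal(loc=patient_offsets, scale=1.0, size=(6, 15))

per_patient_mean = deviations.mean(axis=1)
per_patient_sd = deviations.std(axis=1, ddof=1)

systematic_error = per_patient_mean.std(ddof=1)        # Sigma
random_error = np.sqrt(np.mean(per_patient_sd ** 2))   # sigma
print(f"systematic (Sigma) = {systematic_error:.2f} mm, random (sigma) = {random_error:.2f} mm")
```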

  7. Systematic biases and Type I error accumulation in tests of the race model inequality.

    PubMed

    Kiesel, Andrea; Miller, Jeff; Ulrich, Rolf

    2007-08-01

    In simple, go/no-go, and choice reaction time (RT) tasks, responses are faster to two redundant targets than to a single target. This redundancy gain has been explained in terms of a race model assuming that whichever target is processed faster determines RT (Raab, 1962). Miller (1982) presented a race model inequality to test the race model by comparing the RT distributions of single and redundant target conditions. Here, we present simulations indicating that the standard tests of this inequality (for a description of the testing algorithm, see Ulrich, Miller, & Schröter, 2007) are afflicted with systematic biases and Type I error accumulation. Systematic biases tend to produce violations of the race model inequality, but they decrease as the numbers of observations increase. Reasonably unbiased tests of the race model inequality are obtained for sample sizes of at least 20 for each target condition. In addition, Type I error accumulates because of testing the inequality at multiple percentiles. To reduce Type I error, the race model inequality should be tested in a restricted range of percentiles, preferably in the percentile range 10% to 25%.
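
    The inequality itself is straightforward to evaluate on empirical distribution functions. The sketch below uses simulated reaction times (not data from the article) and checks the inequality only in the restricted 10-25% percentile range recommended above.

```python
# Sketch of the race model inequality check: at each tested point, the
# redundant-target CDF must not exceed the sum of the two single-target CDFs.
import numpy as np

rng = np.random.default_rng(2)
rt_single1 = rng.normal(400.0, 50.0, 200)    # hypothetical RTs in ms
rt_single2 = rng.normal(410.0, 50.0, 200)
rt_redundant = rng.normal(370.0, 45.0, 200)

def ecdf(sample, points):
    return np.searchsorted(np.sort(sample), points, side="right") / sample.size

percentiles = np.arange(10, 26, 5)                    # restricted 10-25% range
t = np.percentile(rt_redundant, percentiles)          # evaluation points

violation = ecdf(rt_redundant, t) - (ecdf(rt_single1, t) + ecdf(rt_single2, t))
for p, v in zip(percentiles, violation):
    print(f"{p:>2d}th percentile: violation = {v:+.3f} (positive means the inequality is violated)")
```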

  8. An improved regularization method for estimating near real-time systematic errors suitable for medium-long GPS baseline solutions

    NASA Astrophysics Data System (ADS)

    Luo, X.; Ou, J.; Yuan, Y.; Gao, J.; Jin, X.; Zhang, K.; Xu, H.

    2008-08-01

    It is well known that the key problem associated with network-based real-time kinematic (RTK) positioning is the estimation of systematic errors of GPS observations, such as residual ionospheric delays, tropospheric delays, and orbit errors, particularly for medium-long baselines. Existing methods dealing with these systematic errors are either not applicable for making estimations in real time or require additional observations in the computation; in both cases, rapid positioning is difficult. We have developed a new strategy for estimating the systematic errors for near real-time applications. In this approach, only two epochs of observations are used each time to estimate the parameters. In order to overcome severe ill-conditioning of the normal equation, the Tikhonov regularization method is used. We suggest that the regularization matrix be constructed by combining the a priori information of the known coordinates of the reference stations, followed by the determination of the corresponding regularization parameter. A series of systematic error estimates can be obtained from a session of GPS observations, and the new process can assist in resolving the integer ambiguities of medium-long baselines and in constructing the virtual observations for the virtual reference station. A number of tests using three medium- to long-range baselines (from tens of kilometers to longer than 1000 kilometers) are used to validate the new approach. Test results indicate that the derived coordinates for the three baselines agree to within several centimeters once the systematic errors are successfully removed. Our results demonstrate that the proposed method can effectively estimate systematic errors in near real time for medium-long GPS baseline solutions.
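
    The role of the regularization can be illustrated with a generic ill-conditioned normal equation; the regularization matrix and parameter below are placeholder choices, not the construction from reference-station coordinates described in the paper.

```python
# Sketch of Tikhonov regularization for an ill-conditioned normal equation
# N x = b: two nearly dependent columns make the plain solution unstable,
# while the regularized solution stays close to the truth.
import numpy as np

rng = np.random.default_rng(3)
A = rng.normal(size=(8, 6))
A[:, 5] = A[:, 4] + 1e-6 * rng.normal(size=8)    # nearly dependent columns
x_true = np.arange(1.0, 7.0)
y = A @ x_true + 1e-4 * rng.normal(size=8)       # small observation noise

N, b = A.T @ A, A.T @ y
alpha = 1e-3                                     # regularization parameter (assumed)
R = np.eye(6)                                    # regularization matrix (identity here)

x_plain = np.linalg.solve(N, b)                  # amplifies noise when N is ill-conditioned
x_reg = np.linalg.solve(N + alpha * R, b)        # Tikhonov-regularized solution
print(f"condition number of N: {np.linalg.cond(N):.1e}")
print(f"plain solution error:       {np.linalg.norm(x_plain - x_true):.2f}")
print(f"regularized solution error: {np.linalg.norm(x_reg - x_true):.2f}")
```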

  9. Analysis of possible systematic errors in the Oslo method

    SciTech Connect

    Larsen, A. C.; Guttormsen, M.; Buerger, A.; Goergen, A.; Nyhus, H. T.; Rekstad, J.; Siem, S.; Toft, H. K.; Tveten, G. M.; Wikan, K.; Krticka, M.; Betak, E.; Schiller, A.; Voinov, A. V.

    2011-03-15

    In this work, we have reviewed the Oslo method, which enables the simultaneous extraction of the level density and γ-ray transmission coefficient from a set of particle-γ coincidence data. Possible errors and uncertainties have been investigated. Typical data sets from various mass regions as well as simulated data have been tested against the assumptions behind the data analysis.

  10. Strategies for minimizing the impact of systematic errors on land data assimilation

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Data assimilation concerns itself primarily with the impact of random stochastic errors on state estimation. However, the developers of land data assimilation systems are commonly faced with systematic errors arising from both the parameterization of a land surface model and the need to pre-process ...

  11. A permutation test to analyse systematic bias and random measurement errors of medical devices via boosting location and scale models.

    PubMed

    Mayr, Andreas; Schmid, Matthias; Pfahlberg, Annette; Uter, Wolfgang; Gefeller, Olaf

    2015-04-24

    Measurement errors of medico-technical devices can be separated into systematic bias and random error. We propose a new method to address both simultaneously via generalized additive models for location, scale and shape (GAMLSS) in combination with permutation tests. More precisely, we extend a recently proposed boosting algorithm for GAMLSS to provide a test procedure to analyse potential device effects on the measurements. We carried out a large-scale simulation study to provide empirical evidence that our method is able to identify possible sources of systematic bias as well as random error under different conditions. Finally, we apply our approach to compare measurements of skin pigmentation from two different devices in an epidemiological study.
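
    Stripped of the boosting and GAMLSS machinery, the permutation idea can be shown on its own; the sketch below tests hypothetical readings from two devices for differences in both location and scale, which is a strong simplification of the method described above.

```python
# Simplified permutation test for a device effect on location (mean) and
# scale (SD); the paper embeds this logic in boosted GAMLSS models, which
# are not reproduced here.
import numpy as np

rng = np.random.default_rng(4)
device_a = rng.normal(50.0, 2.0, 80)      # hypothetical pigmentation readings
device_b = rng.normal(50.8, 2.6, 80)      # slight bias and larger spread

def stats(x, y):
    return abs(x.mean() - y.mean()), abs(x.std(ddof=1) - y.std(ddof=1))

obs_loc, obs_scale = stats(device_a, device_b)
pooled = np.concatenate([device_a, device_b])

n_perm, hits_loc, hits_scale = 5000, 0, 0
for _ in range(n_perm):
    perm = rng.permutation(pooled)            # relabel measurements at random
    loc, scale = stats(perm[:80], perm[80:])
    hits_loc += loc >= obs_loc
    hits_scale += scale >= obs_scale

print(f"permutation p-value (location) ~ {hits_loc / n_perm:.3f}")
print(f"permutation p-value (scale)    ~ {hits_scale / n_perm:.3f}")
```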

  12. COBE Differential Microwave Radiometers - Preliminary systematic error analysis

    NASA Technical Reports Server (NTRS)

    Kogut, A.; Smoot, G. F.; Bennett, C. L.; Wright, E. L.; Aymon, J.; De Amici, G.; Hinshaw, G.; Jackson, P. D.; Kaita, E.; Keegstra, P.

    1992-01-01

    The techniques available for the identification and subtraction of sources of dynamic uncertainty from data of the Differential Microwave Radiometer (DMR) instrument aboard COBE are discussed. Preliminary limits on the magnitude of systematic effects in the DMR 1 yr maps are presented. Residual uncertainties in the best DMR sky maps, after correcting the raw data for systematic effects, are less than 6 micro-K for the pixel rms variation, less than 3 micro-K for the rms quadrupole amplitude of a spherical harmonic expansion, and less than 30 micro-(K-squared) for the correlation function.

  13. COBE Differential Microwave Radiometers - Preliminary systematic error analysis

    NASA Astrophysics Data System (ADS)

    Kogut, A.; Smoot, G. F.; Bennett, C. L.; Wright, E. L.; Aymon, J.; de Amici, G.; Hinshaw, G.; Jackson, P. D.; Kaita, E.; Keegstra, P.; Lineweaver, C.; Loewenstein, K.; Rokke, L.; Tenorio, L.; Boggess, N. W.; Cheng, E. S.; Gulkis, S.; Hauser, M. G.; Janssen, M. A.; Kelsall, T.; Mather, J. C.; Meyer, S.; Moseley, S. H.; Murdock, T. L.; Shafer, R. A.; Silverberg, R. F.; Weiss, R.; Wilkinson, D. T.

    1992-12-01

    The techniques available for the identification and subtraction of sources of dynamic uncertainty from data of the Differential Microwave Radiometer (DMR) instrument aboard COBE are discussed. Preliminary limits on the magnitude of systematic effects in the DMR 1 yr maps are presented. Residual uncertainties in the best DMR sky maps, after correcting the raw data for systematic effects, are less than 6 micro-K for the pixel rms variation, less than 3 micro-K for the rms quadrupole amplitude of a spherical harmonic expansion, and less than 30 micro-(K-squared) for the correlation function.

  14. Using ridge regression in systematic pointing error corrections

    NASA Technical Reports Server (NTRS)

    Guiar, C. N.

    1988-01-01

    A pointing error model is used in the antenna calibration process. Data from spacecraft or radio star observations are used to determine the parameters in the model. However, the regression variables are not truly independent, displaying a condition known as multicollinearity. Ridge regression, a biased estimation technique, is used to combat the multicollinearity problem. Two data sets pertaining to Voyager 1 spacecraft tracking (days 105 and 106 of 1987) were analyzed using both linear least squares and ridge regression methods. The advantages and limitations of employing the technique are presented. The problem is not yet fully resolved.
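
    The stabilizing effect of the ridge penalty on collinear regressors is easy to demonstrate with toy data (the example below is generic and does not use the Voyager tracking sets).

```python
# Sketch: ordinary least squares versus ridge regression when two regressors
# are nearly collinear; the ridge penalty trades a little bias for a large
# reduction in the variance of the individual coefficients.
import numpy as np

rng = np.random.default_rng(5)
n = 50
x1 = rng.normal(size=n)
x2 = x1 + 1e-3 * rng.normal(size=n)          # nearly collinear with x1
X = np.column_stack([np.ones(n), x1, x2])
beta_true = np.array([1.0, 2.0, -1.0])
y = X @ beta_true + 0.1 * rng.normal(size=n)

beta_ols = np.linalg.lstsq(X, y, rcond=None)[0]

lam = 0.1                                    # ridge parameter (assumed)
beta_ridge = np.linalg.solve(X.T @ X + lam * np.eye(3), X.T @ y)

print("OLS coefficients:  ", np.round(beta_ols, 2))
print("ridge coefficients:", np.round(beta_ridge, 2))
```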

  15. Second-order systematic errors in Mueller matrix dual rotating compensator ellipsometry.

    PubMed

    Broch, Laurent; En Naciri, Aotmane; Johann, Luc

    2010-06-10

    We investigate the second-order systematic errors for a Mueller matrix ellipsometer in the dual rotating compensator configuration. Starting from a general formalism, we derive explicit second-order errors in the Mueller matrix coefficients of a given sample. We present the errors caused by the azimuthal inaccuracy of the optical components and their influence on the measurements. We demonstrate that methods based on four-zone or two-zone averaging measurements are effective in eliminating the errors due to the compensators. For the other elements, it is shown that the second-order systematic errors can be canceled only for some coefficients of the Mueller matrix. A calibration step for the analyzer and the polarizer is developed; this important step is necessary to avoid azimuthal inaccuracy in these elements. Numerical simulations and experimental measurements are presented and discussed.

  16. A study for systematic errors of the GLA forecast model in tropical regions

    NASA Technical Reports Server (NTRS)

    Chen, Tsing-Chang; Baker, Wayman E.; Pfaendtner, James; Corrigan, Martin

    1988-01-01

    From the sensitivity studies performed with the Goddard Laboratory for Atmospheres (GLA) analysis/forecast system, it was revealed that the forecast errors in the tropics affect the ability to forecast midlatitude weather in some cases. Apparently, the forecast errors occurring in the tropics can propagate to midlatitudes. Therefore, the systematic error analysis of the GLA forecast system becomes a necessary step in improving the model's forecast performance. The major effort of this study is to examine the possible impact of the hydrological-cycle forecast error on dynamical fields in the GLA forecast system.

  17. Theoretical estimation of systematic errors in local deformation measurements using digital image correlation

    NASA Astrophysics Data System (ADS)

    Xu, Xiaohai; Su, Yong; Zhang, Qingchuan

    2017-01-01

    The measurement accuracy of the digital image correlation (DIC) method in local deformations, such as Portevin-Le Chatelier bands, deformations near a gap, and crack tips, has raised major concerns. The measured displacement and strain results are heavily affected by the calculation parameters (such as the subset size, the grid step, and the strain window size) due to under-matched shape functions (for displacement measurement) and surface fitting functions (for strain calculation). To evaluate the systematic errors in local deformations, theoretical estimations and approximations of the displacement and strain systematic errors have been deduced for the case where first-order shape functions and quadric surface fitting functions are employed. The main results are as follows: (1) the approximate displacement systematic errors are proportional to the second-order displacement gradients, and the ratio is determined only by the subset size; (2) the approximate strain systematic errors are functions of the third-order displacement gradients, and the coefficients depend on the subset size, the grid step and the strain window size. Simulated experiments have been carried out to verify the reliability of these results. In addition, a convenient way of approximately evaluating the displacement systematic errors, by comparing displacement results measured by the DIC method with different subset sizes, is proposed.
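
    Result (1) can be reproduced in a one-dimensional toy setting: fitting a first-order (affine) shape function to a locally quadratic displacement field leaves a centre-point error that scales with the second-order gradient, with a ratio fixed by the subset size. The sketch below is only this 1-D simplification, not the paper's 2-D derivation.

```python
# Toy 1-D illustration of the under-matched shape-function bias: fit
# u(x) = c0 + c1 * x to a quadratic field over a subset and report the
# error at the subset centre.
import numpy as np

def centre_bias(second_gradient, half_width):
    x = np.arange(-half_width, half_width + 1, dtype=float)   # subset pixels
    u_true = 0.5 * second_gradient * x ** 2                   # local quadratic field
    coeffs = np.polyfit(x, u_true, deg=1)                      # first-order shape function
    return np.polyval(coeffs, 0.0) - u_true[x == 0.0][0]       # error at the centre

for half_width in (10, 20, 40):
    for grad in (1e-4, 2e-4):
        bias = centre_bias(grad, half_width)
        print(f"half-width {half_width:>2d} px, d2u/dx2 = {grad:.0e}: "
              f"bias = {bias:.2e}, bias/gradient = {bias / grad:.1f}")
```

    For a given half-width the ratio bias/gradient is constant, mirroring the statement that the displacement error is proportional to the second-order gradient with a factor set by the subset size alone.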

  18. SYSTEMATIC ERROR REDUCTION: NON-TILTED REFERENCE BEAM METHOD FOR LONG TRACE PROFILER.

    SciTech Connect

    QIAN,S.; QIAN, K.; HONG, Y.; SENG, L.; HO, T.; TAKACS, P.

    2007-08-25

    Systematic error in the Long Trace Profiler (LTP) has become the major error source as measurement accuracy enters the nanoradian and nanometer regime. Great efforts have been made to reduce the systematic error at a number of synchrotron radiation laboratories around the world. Generally, the LTP reference beam has to be tilted away from the optical axis in order to avoid fringe overlap between the sample and reference beams. However, a tilted reference beam results in considerable systematic error due to optical system imperfections, which is difficult to correct. Six methods of implementing a non-tilted reference beam in the LTP are introduced: (1) application of an external precision angle device to measure and remove slide pitch error without a reference beam, (2) an independent slide pitch test using a non-tilted reference beam, (3) a non-tilted reference test combined with a tilted sample, (4) a penta-prism scanning mode without reference beam correction, (5) a non-tilted reference using a second optical head, and (6) alternate switching of data acquisition between the sample and reference beams. With a non-tilted reference method, the measurement accuracy can be improved significantly. Some measurement results are presented. Systematic error in the sample beam arm is not addressed in this paper and should be treated separately.

  19. Systematic diffuse optical image errors resulting from uncertainty in the background optical properties

    NASA Astrophysics Data System (ADS)

    Cheng, Xuefeng; Boas, David A.

    1999-04-01

    We investigated the diffuse optical image errors resulting from systematic errors in the background scattering and absorption coefficients, Gaussian noise in the measurements, and the depth at which the image is reconstructed when using a 2D linear reconstruction algorithm for a 3D object. The fourth Born perturbation approach was used to generate reflectance measurements and k-space tomography was used for the reconstruction. Our simulations using both single and dual wavelengths show large systematic errors in the absolute reconstructed absorption coefficients and corresponding hemoglobin concentrations, while the errors in the relative oxy- and deoxy-hemoglobin concentrations are acceptable. The greatest difference arises from a systematic error in the depth at which an image is reconstructed. While an absolute reconstruction of the hemoglobin concentrations can deviate by 100% for a depth error of ±1 mm, the error in the relative concentrations is less than 5%. These results demonstrate that while quantitative diffuse optical tomography is difficult, images of the relative concentrations of oxy- and deoxy-hemoglobin are accurate and robust. Other results, not presented, confirm that these findings hold for other linear reconstruction techniques (i.e. SVD and SIRT) as well as for transmission through slab geometries.

  20. Dynamically correcting two-qubit gates against any systematic logical error

    NASA Astrophysics Data System (ADS)

    Calderon Vargas, Fernando Antonio

    The reliability of quantum information processing depends on the ability to deal with noise and error in an efficient way. A significant source of error in many settings is coherent, systematic gate error. This work introduces a set of composite pulse sequences that generate maximally entangling gates and correct all systematic errors within the logical subspace to arbitrary order. These sequences are applicable for any two-qubit interaction Hamiltonian, and make no assumptions about the underlying noise mechanism except that it is constant on the timescale of the operation. The prime use for our results will be in cases where one has limited knowledge of the underlying physical noise and control mechanisms, highly constrained control, or both. In particular, we apply these composite pulse sequences to the quantum system formed by two capacitively coupled singlet-triplet qubits, which is characterized by having constrained control and noise sources that are low frequency and of a non-Markovian nature.

  1. Accuracy of image-plane holographic tomography with filtered backprojection: random and systematic errors.

    PubMed

    Belashov, A V; Petrov, N V; Semenova, I V

    2016-01-01

    This paper explores the concept of image-plane holographic tomography applied to measurements of laser-induced thermal gradients in an aqueous solution of a photosensitizer, with respect to the reconstruction accuracy of three-dimensional variations of the refractive index. A least-squares estimation algorithm is used to reconstruct the refractive index variations in each holographic projection. Together with a bitelecentric optical system, which transfers the focused projection to the sensor plane, this facilitates the elimination of diffraction artifacts and noise suppression. This work estimates the influence of typical random and systematic errors in experiments and concludes that random errors, such as accidental measurement errors or the presence of noise, can be significantly suppressed by increasing the number of recorded digital holograms. On the contrary, even comparatively small systematic errors, such as a displacement of the rotation axis projection in the course of the reconstruction procedure, can significantly distort the results.

  2. Patient disclosure of medical errors in paediatrics: A systematic literature review

    PubMed Central

    Koller, Donna; Rummens, Anneke; Le Pouesard, Morgane; Espin, Sherry; Friedman, Jeremy; Coffey, Maitreya; Kenneally, Noah

    2016-01-01

    Medical errors are common within paediatrics; however, little research has examined the process of disclosing medical errors in paediatric settings. The present systematic review of current research and policy initiatives examined evidence regarding the disclosure of medical errors involving paediatric patients. Peer-reviewed research from a range of scientific journals from the past 10 years is presented, and an overview of Canadian and international policies regarding disclosure in paediatric settings is provided. The purpose of the present review was to scope the existing literature and policy, and to synthesize findings into an integrated and accessible report. Future research priorities and policy implications are then identified. PMID:27429578

  3. Patient disclosure of medical errors in paediatrics: A systematic literature review.

    PubMed

    Koller, Donna; Rummens, Anneke; Le Pouesard, Morgane; Espin, Sherry; Friedman, Jeremy; Coffey, Maitreya; Kenneally, Noah

    2016-05-01

    Medical errors are common within paediatrics; however, little research has examined the process of disclosing medical errors in paediatric settings. The present systematic review of current research and policy initiatives examined evidence regarding the disclosure of medical errors involving paediatric patients. Peer-reviewed research from a range of scientific journals from the past 10 years is presented, and an overview of Canadian and international policies regarding disclosure in paediatric settings is provided. The purpose of the present review was to scope the existing literature and policy, and to synthesize findings into an integrated and accessible report. Future research priorities and policy implications are then identified.

  4. Systematic Magnus-Based Approach for Suppressing Leakage and Nonadiabatic Errors in Quantum Dynamics

    NASA Astrophysics Data System (ADS)

    Ribeiro, Hugo; Baksic, Alexandre; Clerk, Aashish A.

    2017-01-01

    We present a systematic, perturbative method for correcting quantum gates to suppress errors that take the target system out of a chosen subspace. Our method addresses the generic problem of nonadiabatic errors in adiabatic evolution and state preparation, as well as general leakage errors due to spurious couplings to undesirable states. The method is based on the Magnus expansion: By correcting control pulses, we modify the Magnus expansion of an initially given, imperfect unitary in such a way that the desired evolution is obtained. Applications to adiabatic quantum state transfer, superconducting qubits, and generalized Landau-Zener problems are discussed.

  5. A wire spark chamber capacitive readout system with low leakage current and small systematic error

    NASA Astrophysics Data System (ADS)

    Anderhub, H. B.; Boecklin, J.; von Gunten, H. P.; Koenig, H.; Le Coultre, P.; Makowiecki, D.; Seiler, P. G.

    1983-02-01

    A wire spark chamber capacitive readout system with analog FET switch multiplexing and CAMAC interface is described. Two wire planes per chamber are read out. The information of each plane is sequentially digitized in one ADC. This and the low leakage current of the FET switches guarantee a small systematic error of the measurement of the spark position.

  6. GREAT3 results - I. Systematic errors in shear estimation and the impact of real galaxy morphology

    SciTech Connect

    Mandelbaum, R.; Rowe, B.; Armstrong, R.; Bard, D.; Bertin, E.; Bosch, J.; Boutigny, D.; Courbin, F.; Dawson, W. A.; Donnarumma, A.; Fenech Conti, I.; Gavazzi, R.; Gentile, M.; Gill, M. S. S.; Hogg, D. W.; Huff, E. M.; Jee, M. J.; Kacprzak, T.; Kilbinger, M.; Kuntzer, T.; Lang, D.; Luo, W.; March, M. C.; Marshall, P. J.; Meyers, J. E.; Miller, L.; Miyatake, H.; Nakajima, R.; Ngole Mboula, F. M.; Nurbaeva, G.; Okura, Y.; Paulin-Henriksson, S.; Rhodes, J.; Schneider, M. D.; Shan, H.; Sheldon, E. S.; Simet, M.; Starck, J. -L.; Sureau, F.; Tewes, M.; Zarb Adami, K.; Zhang, J.; Zuntz, J.

    2015-05-01

    We present first results from the third GRavitational lEnsing Accuracy Testing (GREAT3) challenge, the third in a sequence of challenges for testing methods of inferring weak gravitational lensing shear distortions from simulated galaxy images. GREAT3 was divided into experiments to test three specific questions, and included simulated space- and ground-based data with constant or cosmologically varying shear fields. The simplest (control) experiment included parametric galaxies with a realistic distribution of signal-to-noise, size, and ellipticity, and a complex point spread function (PSF). The other experiments tested the additional impact of realistic galaxy morphology, multiple exposure imaging, and the uncertainty about a spatially varying PSF; the last two questions will be explored in Paper II. The 24 participating teams competed to estimate lensing shears to within systematic error tolerances for upcoming Stage-IV dark energy surveys, making 1525 submissions overall. GREAT3 saw considerable variety and innovation in the types of methods applied. Several teams now meet or exceed the targets in many of the tests conducted (to within the statistical errors). We conclude that the presence of realistic galaxy morphology in simulations changes shear calibration biases by ~1 per cent for a wide range of methods. Other effects such as truncation biases due to finite galaxy postage stamps, and the impact of galaxy type as measured by the Sérsic index, are quantified for the first time. Our results generalize previous studies regarding sensitivities to galaxy size and signal-to-noise, and to PSF properties such as seeing and defocus. Almost all methods’ results support the simple model in which additive shear biases depend linearly on PSF ellipticity.

  7. GREAT3 results - I. Systematic errors in shear estimation and the impact of real galaxy morphology

    SciTech Connect

    Mandelbaum, Rachel; Rowe, Barnaby; Armstrong, Robert; Bard, Deborah; Bertin, Emmanuel; Bosch, James; Boutigny, Dominique; Courbin, Frederic; Dawson, William A.; Donnarumma, Annamaria; Fenech Conti, Ian; Gavazzi, Raphael; Gentile, Marc; Gill, Mandeep S. S.; Hogg, David W.; Huff, Eric M.; Jee, M. James; Kacprzak, Tomasz; Kilbinger, Martin; Kuntzer, Thibault; Lang, Dustin; Luo, Wentao; March, Marisa C.; Marshall, Philip J.; Meyers, Joshua E.; Miller, Lance; Miyatake, Hironao; Nakajima, Reiko; Ngole Mboula, Fred Maurice; Nurbaeva, Guldariya; Okura, Yuki; Paulin-Henriksson, Stephane; Rhodes, Jason; Schneider, Michael D.; Shan, Huanyuan; Sheldon, Erin S.; Simet, Melanie; Starck, Jean -Luc; Sureau, Florent; Tewes, Malte; Zarb Adami, Kristian; Zhang, Jun; Zuntz, Joe

    2015-05-11

    The study presents first results from the third GRavitational lEnsing Accuracy Testing (GREAT3) challenge, the third in a sequence of challenges for testing methods of inferring weak gravitational lensing shear distortions from simulated galaxy images. GREAT3 was divided into experiments to test three specific questions, and included simulated space- and ground-based data with constant or cosmologically varying shear fields. The simplest (control) experiment included parametric galaxies with a realistic distribution of signal-to-noise, size, and ellipticity, and a complex point spread function (PSF). The other experiments tested the additional impact of realistic galaxy morphology, multiple exposure imaging, and the uncertainty about a spatially varying PSF; the last two questions will be explored in Paper II. The 24 participating teams competed to estimate lensing shears to within systematic error tolerances for upcoming Stage-IV dark energy surveys, making 1525 submissions overall. GREAT3 saw considerable variety and innovation in the types of methods applied. Several teams now meet or exceed the targets in many of the tests conducted (to within the statistical errors). We conclude that the presence of realistic galaxy morphology in simulations changes shear calibration biases by ~1 per cent for a wide range of methods. Other effects such as truncation biases due to finite galaxy postage stamps, and the impact of galaxy type as measured by the Sérsic index, are quantified for the first time. Our results generalize previous studies regarding sensitivities to galaxy size and signal-to-noise, and to PSF properties such as seeing and defocus. Almost all methods’ results support the simple model in which additive shear biases depend linearly on PSF ellipticity.

  8. GREAT3 results - I. Systematic errors in shear estimation and the impact of real galaxy morphology

    DOE PAGES

    Mandelbaum, Rachel; Rowe, Barnaby; Armstrong, Robert; ...

    2015-05-11

    The study presents first results from the third GRavitational lEnsing Accuracy Testing (GREAT3) challenge, the third in a sequence of challenges for testing methods of inferring weak gravitational lensing shear distortions from simulated galaxy images. GREAT3 was divided into experiments to test three specific questions, and included simulated space- and ground-based data with constant or cosmologically varying shear fields. The simplest (control) experiment included parametric galaxies with a realistic distribution of signal-to-noise, size, and ellipticity, and a complex point spread function (PSF). The other experiments tested the additional impact of realistic galaxy morphology, multiple exposure imaging, and the uncertainty about a spatially varying PSF; the last two questions will be explored in Paper II. The 24 participating teams competed to estimate lensing shears to within systematic error tolerances for upcoming Stage-IV dark energy surveys, making 1525 submissions overall. GREAT3 saw considerable variety and innovation in the types of methods applied. Several teams now meet or exceed the targets in many of the tests conducted (to within the statistical errors). We conclude that the presence of realistic galaxy morphology in simulations changes shear calibration biases by ~1 per cent for a wide range of methods. Other effects such as truncation biases due to finite galaxy postage stamps, and the impact of galaxy type as measured by the Sérsic index, are quantified for the first time. Our results generalize previous studies regarding sensitivities to galaxy size and signal-to-noise, and to PSF properties such as seeing and defocus. Almost all methods’ results support the simple model in which additive shear biases depend linearly on PSF ellipticity.

  9. Minimizing systematic errors in phytoplankton pigment concentration derived from satellite ocean color measurements

    SciTech Connect

    Martin, D.L.

    1992-01-01

    Water-leaving radiances and phytoplankton pigment concentrations are calculated from Coastal Zone Color Scanner (CZCS) total radiance measurements by separating atmospheric Rayleigh and aerosol radiances from the total radiance signal measured at the satellite. Multiple scattering interactions between Rayleigh and aerosol components together with other meteorologically-moderated radiances cause systematic errors in calculated water-leaving radiances and produce errors in retrieved phytoplankton pigment concentrations. This thesis developed techniques which minimize the effects of these systematic errors in Level IIA CZCS imagery. Results of previous radiative transfer modeling by Gordon and Castano are extended to predict the pixel-specific magnitude of systematic errors caused by Rayleigh-aerosol multiple scattering interactions. CZCS orbital passes in which the ocean is viewed through a modeled, physically realistic atmosphere are simulated mathematically and radiance-retrieval errors are calculated for a range of aerosol optical depths. Pixels which exceed an error threshold in the simulated CZCS image are rejected in a corresponding actual image. Meteorological phenomena also cause artifactual errors in CZCS-derived phytoplankton pigment concentration imagery. Unless data contaminated with these effects are masked and excluded from analysis, they will be interpreted as containing valid biological information and will contribute significantly to erroneous estimates of phytoplankton temporal and spatial variability. A method is developed which minimizes these errors through a sequence of quality-control procedures including the calculation of variable cloud-threshold radiances, the computation of the extent of electronic overshoot from bright reflectors, and the imposition of a buffer zone around clouds to exclude contaminated data.

  10. On Round-Off Error of Floating-Point Addition with Guard Digits,

    DTIC Science & Technology

    Some recent computers, such as those in the IBM 360 series, use radix 16 and single precision with a guard digit in floating-point addition. In this paper, a bound on the round-off error for floating-point addition in single precision with guard digits is derived. Comparison with double precision addition is made. (Author)
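
    The loss of low-order digits in floating-point addition can be shown in a few lines; the example below uses IEEE single and double precision in NumPy rather than the IBM 360 radix-16 arithmetic analysed in the report, so the magnitudes differ, but the mechanism is the same.

```python
# Adding a small term to a large one and subtracting the large one back
# exposes the round-off: single precision loses the small term entirely here.
import numpy as np

big, small = 1.0e7, 0.25
for dtype in (np.float32, np.float64):
    a, b = dtype(big), dtype(small)
    recovered = (a + b) - a                    # ideally equal to b
    print(f"{np.dtype(dtype).name}: (big + small) - big = {recovered} "
          f"(round-off error = {abs(float(recovered) - small):.3g})")
```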

  11. Statistical and systematic errors in redshift-space distortion measurements from large surveys

    NASA Astrophysics Data System (ADS)

    Bianchi, D.; Guzzo, L.; Branchini, E.; Majerotto, E.; de la Torre, S.; Marulli, F.; Moscardini, L.; Angulo, R. E.

    2012-12-01

    We investigate the impact of statistical and systematic errors on measurements of linear redshift-space distortions (RSD) in future cosmological surveys by analysing large catalogues of dark matter haloes from the baryonic acoustic oscillation simulations at the Institute for Computational Cosmology. These allow us to estimate the dependence of errors on typical survey properties, such as volume, galaxy density and mass (i.e. bias factor) of the adopted tracer. We find that measures of the specific growth rate β = f/b using the Hamilton/Kaiser harmonic expansion of the redshift-space correlation function ξ(r_p, π) on scales larger than 3 h^-1 Mpc are typically underestimated by up to 10 per cent for galaxy-sized haloes. This is significantly larger than the corresponding statistical errors, which amount to a few per cent, indicating the importance of non-linear improvements to the Kaiser model to obtain accurate measurements of the growth rate. The systematic error shows a diminishing trend with increasing bias value (i.e. mass) of the haloes considered. We compare the amplitude and trends of statistical errors as a function of survey parameters to predictions obtained with the Fisher information matrix technique, which is what is usually adopted to produce RSD forecasts, based on the Feldman-Kaiser-Peacock prescription for the errors on the power spectrum. We show that this produces parameter errors fairly similar to the standard deviations from the halo catalogues, provided it is applied to strictly linear scales in Fourier space (k < 0.2 h Mpc^-1). Finally, we combine our measurements to define and calibrate an accurate scaling formula for the relative error on β as a function of the same parameters, which closely matches the simulation results in all explored regimes. This provides a handy and plausibly more realistic alternative to the Fisher matrix approach, to quickly and accurately predict statistical errors on RSD expected from future surveys.

  12. A hybrid variational-ensemble data assimilation scheme with systematic error correction for limited-area ocean models

    NASA Astrophysics Data System (ADS)

    Oddo, Paolo; Storto, Andrea; Dobricic, Srdjan; Russo, Aniello; Lewis, Craig; Onken, Reiner; Coelho, Emanuel

    2016-10-01

    A hybrid variational-ensemble data assimilation scheme to estimate the vertical and horizontal parts of the background error covariance matrix for an ocean variational data assimilation system is presented and tested in a limited-area ocean model implemented in the western Mediterranean Sea. An extensive data set collected during the Recognized Environmental Picture Experiments conducted in June 2014 by the Centre for Maritime Research and Experimentation has been used for assimilation and validation. The hybrid scheme is used both to correct the systematic error introduced into the system from the external forcing (initialisation, lateral and surface open boundary conditions) and model parameterisation, and to improve the representation of small-scale errors in the background error covariance matrix. An ensemble system, generated through perturbation of the assimilated observations, is run offline for further use in the hybrid scheme. Results of four different experiments have been compared. The reference experiment uses the classical stationary formulation of the background error covariance matrix and has no systematic error correction. The other three experiments account, or not, for systematic error correction and for a hybrid background error covariance matrix combining the static and the ensemble-derived errors of the day. Results show that the hybrid scheme, when used in conjunction with the systematic error correction, reduces the mean absolute error of the temperature and salinity misfits by 55 and 42 % respectively, compared with statistics arising from standard climatological covariances without systematic error correction.
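
    The core of the hybrid idea, blending a static covariance with one estimated from an ensemble, can be written down schematically; the weights, dimensions and correlation shapes below are placeholders, not the paper's configuration.

```python
# Schematic hybrid background-error covariance: a static (climatological)
# part blended with a flow-dependent part computed from ensemble anomalies.
import numpy as np

rng = np.random.default_rng(6)
n_state, n_members = 20, 10

# static covariance: simple exponential decay with grid-point separation
dist = np.abs(np.subtract.outer(np.arange(n_state), np.arange(n_state)))
B_static = np.exp(-dist / 5.0)

# ensemble covariance ("errors of the day") from perturbed members
ensemble = rng.normal(size=(n_members, n_state)).cumsum(axis=1)   # toy correlated states
anomalies = ensemble - ensemble.mean(axis=0)
B_ens = anomalies.T @ anomalies / (n_members - 1)

alpha = 0.5                                   # hybrid weight (assumed)
B_hybrid = alpha * B_static + (1.0 - alpha) * B_ens
print("hybrid covariance shape:", B_hybrid.shape)
```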

  13. Accounting for systematic errors in bioluminescence imaging to improve quantitative accuracy

    NASA Astrophysics Data System (ADS)

    Taylor, Shelley L.; Perry, Tracey A.; Styles, Iain B.; Cobbold, Mark; Dehghani, Hamid

    2015-07-01

    Bioluminescence imaging (BLI) is a widely used pre-clinical imaging technique, but there are a number of limitations to its quantitative accuracy. This work uses an animal model to demonstrate some significant limitations of BLI and presents processing methods and algorithms which overcome these limitations, increasing the quantitative accuracy of the technique. The position of the imaging subject and source depth are both shown to affect the measured luminescence intensity. Free Space Modelling is used to eliminate the systematic error due to the camera/subject geometry, removing the dependence of luminescence intensity on animal position. Bioluminescence tomography (BLT) is then used to provide additional information about the depth and intensity of the source. A substantial limitation in the number of sources identified using BLI is also presented. It is shown that when a given source is at a significant depth, it can appear as multiple sources when imaged using BLI, while the use of BLT recovers the true number of sources present.

  14. Analysis and reduction of tropical systematic errors through a unified modelling strategy

    NASA Astrophysics Data System (ADS)

    Copsey, D.; Marshall, A.; Martin, G.; Milton, S.; Senior, C.; Sellar, A.; Shelly, A.

    2009-04-01

    Systematic errors in climate models are usually addressed in a number of ways, but current methods often make use of model climatological fields as a starting point for model modification. This approach has limitations due to non-linear feedback mechanisms which occur over longer timescales and make the source of the errors difficult to identify. In a unified modelling environment, short-range (1-5 day) weather forecasts are readily available from NWP models with very similar dynamical and physical formulations to the climate models, but often increased horizontal (and vertical) resolution. Where such forecasts exhibit similar systematic errors to their climate model counterparts, there is much to be gained from combined analysis and sensitivity testing. For example, the Met Office Hadley Centre climate model HadGEM1 (Johns et al 2007) exhibits precipitation errors in the Asian summer monsoon, with too little rainfall over the Indian peninsula and too much over the equatorial Indian Ocean to the southwest of the peninsula (Martin et al., 2004). Examination of the development of precipitation errors in the Asian summer monsoon region in Met Office NWP forecasts shows that different parts of the error pattern evolve on different timescales. Excessive rainfall over the equatorial Indian Ocean to the southwest of the Indian peninsula develops rapidly, over the first day or two of the forecast, while a dry bias over the Indian land area takes ~10 days to develop. Such information is invaluable for understanding the processes involved and how to tackle them. Other examples of the use of this approach will be discussed, including analysis of the sensitivity of the representation of the Madden-Julian Oscillation (MJO) to the convective parametrisation, and the reduction of systematic tropical temperature and moisture biases in both climate and NWP models through improved representation of convective detrainment.

  15. Validity and systematic error in measuring carotenoid consumption with dietary self-report instruments.

    PubMed

    Natarajan, Loki; Flatt, Shirley W; Sun, Xiaoying; Gamst, Anthony C; Major, Jacqueline M; Rock, Cheryl L; Al-Delaimy, Wael; Thomson, Cynthia A; Newman, Vicky A; Pierce, John P

    2006-04-15

    Vegetables and fruits are rich in carotenoids, a group of compounds thought to protect against cancer. Studies of diet-disease associations need valid and reliable instruments for measuring dietary intake. The authors present a measurement error model to estimate the validity (defined as correlation between self-reported intake and "true" intake), systematic error, and reliability of two self-report dietary assessment methods. Carotenoid exposure is measured by repeated 24-hour recalls, a food frequency questionnaire (FFQ), and a plasma marker. The model is applied to 1,013 participants assigned between 1995 and 2000 to the nonintervention arm of the Women's Healthy Eating and Living Study, a randomized trial assessing the impact of a low-fat, high-vegetable/fruit/fiber diet on preventing new breast cancer events. Diagnostics including graphs are used to assess the goodness of fit. The validity of the instruments was 0.44 for the 24-hour recalls and 0.39 for the FFQ. Systematic error accounted for over 22% and 50% of measurement error variance for the 24-hour recalls and FFQ, respectively. The use of either self-report method alone in diet-disease studies could lead to substantial bias and error. Multiple methods of dietary assessment may provide more accurate estimates of true dietary intake.
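
    The distinction between validity, systematic error, and random error in a self-report instrument can be mimicked in a small simulation; the error structure and numbers below are invented for illustration and are not the Women's Healthy Eating and Living data or the authors' measurement error model.

```python
# Toy simulation: self-reported intake = true intake + person-specific
# systematic error + random error. Validity is the correlation between
# self-report and true intake; the systematic share is the fraction of
# measurement-error variance that is person-specific.
import numpy as np

rng = np.random.default_rng(7)
n = 1000
true_intake = rng.normal(10.0, 2.0, n)      # "true" carotenoid exposure (arbitrary units)
systematic = rng.normal(0.0, 1.5, n)        # person-specific reporting bias
random_err = rng.normal(0.0, 2.5, n)        # occasion-to-occasion noise
self_report = true_intake + systematic + random_err

validity = np.corrcoef(self_report, true_intake)[0, 1]
systematic_share = systematic.var() / (systematic.var() + random_err.var())
print(f"validity (corr with true intake)   ~ {validity:.2f}")
print(f"systematic share of error variance ~ {systematic_share:.2f}")
```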

  16. Systematic Errors in Resistivity and IP Data Acquisition: Are We Interpreting the Earth or the Instrument?

    NASA Astrophysics Data System (ADS)

    La Brecque, D. J.

    2006-12-01

    For decades, resistivity and induced polarization (IP) measurements have been important tools for near-surface geophysical investigations. Recently, sophisticated, multi-channel, multi-electrode acquisition systems have displaced older, simpler systems, allowing collection of large, complex, three-dimensional data series. Generally, these new digital acquisition systems are better than their analog ancestors at dealing with noise from external sources. However, they are prone to a number of systematic errors. Since these errors are non-random and repeatable, the field geophysicist may be blissfully unaware that while his/her field data may be very precise, they may not be particularly accurate. We have begun the second phase of a research project to improve our understanding of these types of errors. The objective of the research is not to indict any particular manufacturer's instrument but to understand the magnitude of systematic errors in typical, modern data acquisition. One important source of noise results from the tendency of these systems both to send the source current and to monitor potentials through common multiplexer circuits and along the same cable bundle. Often, the source current is transmitted at hundreds of volts while the potentials measured are a few tens of millivolts. Thus, even tiny amounts of leakage from the transmitter wires/circuits to the receiver wires/circuits can corrupt or overwhelm the data. For example, in a recent survey, we found that a number of substantial anomalies correlated better with the multi-conductor cable used than with the subsurface. Leakage errors in cables are roughly proportional to the length of the cable and the contact impedance of the electrodes, but vary dramatically with the construction and type of wire insulation. Polyvinyl chloride (PVC) insulation, the type used in most inexpensive wire and cables, is extremely noisy. Not only does PVC tend to leak current from conductor to conductor, but the leakage currents tend to have large

  17. Treatment of systematic errors in the processing of wide angle sonar sensor data for robotic navigation

    SciTech Connect

    Beckerman, M.; Oblow, E.M.

    1988-04-01

    A methodology has been developed for the treatment of systematic errors which arise in the processing of sparse sensor data. We present a detailed application of this methodology to the construction from wide-angle sonar sensor data of navigation maps for use in autonomous robotic navigation. In the methodology we introduce a four-valued labelling scheme and a simple logic for label combination. The four labels, conflict, occupied, empty and unknown, are used to mark the cells of the navigation maps; the logic allows for the rapid updating of these maps as new information is acquired. The systematic errors are treated by relabelling conflicting pixel assignments. Most of the new labels are obtained from analyses of the characteristic patterns of conflict which arise during the information processing. The remaining labels are determined by imposing an elementary consistent-labelling condition. 26 refs., 9 figs.
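
    A label-combination rule of the kind sketched above can be written as a small function; the table below (unknown yields to any information, agreement is kept, disagreement becomes conflict) is an illustrative guess, since the report's exact logic may differ.

```python
# Sketch of a four-valued label combination for updating navigation-map cells.
UNKNOWN, EMPTY, OCCUPIED, CONFLICT = "unknown", "empty", "occupied", "conflict"

def combine(old, new):
    if old == UNKNOWN:
        return new
    if new == UNKNOWN or old == new:
        return old
    return CONFLICT          # e.g. occupied vs empty, or anything vs conflict

# update a small navigation map with one fresh sonar reading per cell
nav_map = [UNKNOWN, EMPTY, OCCUPIED, EMPTY]
reading = [EMPTY, EMPTY, EMPTY, OCCUPIED]
nav_map = [combine(old, new) for old, new in zip(nav_map, reading)]
print(nav_map)   # ['empty', 'empty', 'conflict', 'conflict']
```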

  18. A constant altitude flight survey method for mapping atmospheric ambient pressures and systematic radar errors

    NASA Technical Reports Server (NTRS)

    Larson, T. J.; Ehernberger, L. J.

    1985-01-01

    The flight test technique described uses controlled survey runs to determine horizontal atmospheric pressure variations and systematic altitude errors that result from space positioning measurements. The survey data can be used not only for improved air data calibrations, but also for studies of atmospheric structure and space positioning accuracy performance. The examples presented cover a wide range of radar tracking conditions for both subsonic and supersonic flight to an altitude of 42,000 ft.

  19. The effect of systematic errors on the hybridization of optical critical dimension measurements

    NASA Astrophysics Data System (ADS)

    Henn, Mark-Alexander; Barnes, Bryan M.; Zhang, Nien Fan; Zhou, Hui; Silver, Richard M.

    2015-06-01

    In hybrid metrology two or more measurements of the same measurand are combined to provide a more reliable result that ideally incorporates the individual strengths of each of the measurement methods. While these multiple measurements may come from dissimilar metrology methods such as optical critical dimension microscopy (OCD) and scanning electron microscopy (SEM), we investigated the hybridization of similar OCD methods featuring a focus-resolved simulation study of systematic errors performed at orthogonal polarizations. Specifically, errors due to line edge and line width roughness (LER, LWR) and their superposition (LEWR) are known to contribute a systematic bias with inherent correlated errors. In order to investigate the sensitivity of the measurement to LEWR, we follow a modeling approach proposed by Kato et al. who studied the effect of LEWR on extreme ultraviolet (EUV) and deep ultraviolet (DUV) scatterometry. Similar to their findings, we have observed that LEWR leads to a systematic bias in the simulated data. Since the critical dimensions (CDs) are determined by fitting the respective model data to the measurement data by minimizing the difference measure or chi square function, a proper description of the systematic bias is crucial to obtaining reliable results and to successful hybridization. In scatterometry, an analytical expression for the influence of LEWR on the measured orders can be derived, and accounting for this effect leads to a modification of the model function that not only depends on the critical dimensions but also on the magnitude of the roughness. For finite arrayed structures however, such an analytical expression cannot be derived. We demonstrate how to account for the systematic bias and that, if certain conditions are met, a significant improvement of the reliability of hybrid metrology for combining both dissimilar and similar measurement tools can be achieved.

  20. Systematic lossy error protection for video transmission over wireless ad hoc networks

    NASA Astrophysics Data System (ADS)

    Zhu, Xiaoqing; Rane, Shantanu; Girod, Bernd

    2005-07-01

    Wireless ad hoc networks present a challenge for error-resilient video transmission, since node mobility and multipath fading result in time-varying link qualities in terms of packet loss ratio and available bandwidth. In this paper, we propose to use a systematic lossy error protection (SLEP) scheme for video transmission over wireless ad hoc networks. The transmitted video signal has two parts: a systematic portion consisting of a video sequence transmitted without channel coding over an error-prone channel, and error protection information consisting of a bitstream generated by Wyner-Ziv encoding of the video sequence. Using an end-to-end video distortion model in conjunction with online estimates of packet loss ratio and available bandwidth, the optimal Wyner-Ziv description can be selected dynamically according to current channel conditions. The scheme can also be applied to choose one path for transmission from amongst multiple candidate routes with varying available bandwidths and packet loss ratios, so that the expected end-to-end video distortion is minimized. Experimental results of video transmission over a simulated ad hoc wireless network show that the proposed SLEP scheme outperforms the conventional application-layer FEC approach in that it provides graceful degradation of received video quality over a wider range of packet loss ratios and is less susceptible to inaccuracy in the packet loss ratio estimation.

  1. The systematic error in digital image correlation induced by self-heating of a digital camera

    NASA Astrophysics Data System (ADS)

    Ma, Shaopeng; Pang, Jiazhi; Ma, Qinwei

    2012-02-01

    The systematic strain measurement error in digital image correlation (DIC) induced by self-heating of digital CCD and CMOS cameras was studied extensively; an experimental and data analysis procedure is proposed, and two parameters are suggested to examine and evaluate this error. Six digital cameras of four different types were tested to quantify the strain errors. It was found that each camera needed between 1 and 2 h to reach a stable heat balance, with a measured temperature increase of around 10 °C. During the temperature increase, the virtual image expansion causes a 70-230 µɛ strain error in the DIC measurement, which is large enough to be noticed in most DIC experiments and hence should be eliminated.

  2. Local and Global Views of Systematic Errors of Atmosphere-Ocean General Circulation Models

    NASA Astrophysics Data System (ADS)

    Mechoso, C. Roberto; Wang, Chunzai; Lee, Sang-Ki; Zhang, Liping; Wu, Lixin

    2014-05-01

    Coupled Atmosphere-Ocean General Circulation Models (CGCMs) have serious systematic errors that challenge the reliability of climate predictions. One major reason for such biases is the misrepresentations of physical processes, which can be amplified by feedbacks among climate components especially in the tropics. Much effort, therefore, is dedicated to the better representation of physical processes in coordination with intense process studies. The present paper starts with a presentation of these systematic CGCM errors with an emphasis on the sea surface temperature (SST) in simulations by 22 participants in the Coupled Model Intercomparison Project phase 5 (CMIP5). Different regions are considered for discussion of model errors, including the one around the equator, the one covered by the stratocumulus decks off Peru and Namibia, and the confluence between the Angola and Benguela currents. Hypotheses on the reasons for the errors are reviewed, with particular attention on the parameterization of low-level marine clouds, model difficulties in the simulation of the ocean heat budget under the stratocumulus decks, and location of strong SST gradients. Next the presentation turns to a global perspective of the errors and their causes. It is shown that a simulated weak Atlantic Meridional Overturning Circulation (AMOC) tends to be associated with cold biases in the entire Northern Hemisphere with an atmospheric pattern that resembles the Northern Hemisphere annular mode. The AMOC weakening is also associated with a strengthening of Antarctic bottom water formation and warm SST biases in the Southern Ocean. It is also shown that cold biases in the tropical North Atlantic and West African/Indian monsoon regions during the warm season in the Northern Hemisphere have interhemispheric links with warm SST biases in the tropical southeastern Pacific and Atlantic, respectively. The results suggest that improving the simulation of regional processes may not suffice for a more

  3. Influence of Additive and Multiplicative Structure and Direction of Comparison on the Reversal Error

    ERIC Educational Resources Information Center

    González-Calero, José Antonio; Arnau, David; Laserna-Belenguer, Belén

    2015-01-01

    An empirical study has been carried out to evaluate the potential of word order matching and static comparison as explanatory models of reversal error. Data was collected from 214 undergraduate students who translated a set of additive and multiplicative comparisons expressed in Spanish into algebraic language. In these multiplicative comparisons…

  4. Systematic errors in the measurement of the permanent electric dipole moment (EDM) of the 199Hg atom

    NASA Astrophysics Data System (ADS)

    Chen, Yi; Graner, Brent; Lindahl, Eric; Heckel, Blayne

    2016-03-01

    This talk provides a discussion of the systematic errors that were encountered in the 199Hg experiment described earlier in this session. The dominant systematic error, unseen in previous 199Hg EDM experiments, arose from small motions of the Hg vapor cells due to forces exerted by the applied electric field. Methods used to understand this effect, as well as the anticipated sources of systematic errors such as leakage currents, parameter correlations, and E2 and v × E / c effects, will be presented. The total systematic error was found to be 72% as large as the statistical error of the EDM measurement. This work was supported by NSF Grant 1306743 and by DOE Grant DE-FG02-97ER41020.

  5. Systematic errors in Monsoon simulation: importance of the equatorial Indian Ocean processes

    NASA Astrophysics Data System (ADS)

    Annamalai, H.; Taguchi, B.; McCreary, J. P., Jr.; Nagura, M.; Miyama, T.

    2015-12-01

    H. Annamalai (1), B. Taguchi (2), J. P. McCreary (1), J. Hafner (1), M. Nagura (2), and T. Miyama (2); (1) International Pacific Research Center, University of Hawaii, USA; (2) Application Laboratory, JAMSTEC, Japan. In climate models, simulating the monsoon precipitation climatology remains a grand challenge. Compared to CMIP3, the multi-model-mean (MMM) errors for Asian-Australian monsoon (AAM) precipitation climatology in CMIP5, relative to GPCP observations, have shown little improvement. One of the implications is that uncertainties in the future projections of time-mean changes to AAM rainfall may not have reduced from CMIP3 to CMIP5. Despite dedicated efforts by the modeling community, the progress in monsoon modeling is rather slow. This leads us to wonder: Has the scientific community reached a "plateau" in modeling mean monsoon precipitation? Our focus here is to better understand the coupled air-sea interactions and moist processes that govern the precipitation characteristics over the tropical Indian Ocean, where large-scale errors persist. A series of idealized coupled model experiments is performed to test the hypothesis that errors in the coupled processes along the equatorial Indian Ocean during inter-monsoon seasons could potentially influence systematic errors during the monsoon season. Moist static energy budget diagnostics have been performed to identify the leading moist and radiative processes that account for the large-scale errors in the simulated precipitation. As a way forward, we propose three coordinated efforts: (i) idealized coupled model experiments; (ii) process-based diagnostics; and (iii) direct observations to constrain model physics. We will argue that a systematic and coordinated approach in the identification of the various interactive processes that shape the precipitation basic state needs to be carried out, and that high-quality observations over the data-sparse monsoon region are needed to validate models and further improve model physics.

  6. Minimizing critical layer systematic alignment errors during non-dedicated processing

    NASA Astrophysics Data System (ADS)

    Jekauc, Igor; Roberts, William R.

    2004-05-01

    For the 150 nm and smaller half-pitch geometries, many DRAM manufacturers frequently employ a dedicated exposure tool strategy for processing of the most critical layers. Individual die tolerances of less than 40 nm are not uncommon for such compact geometries, and a method is needed to reduce systematic overlay errors. The dedication strategy relies on the premise that a component of the systematic error induced by the inefficiencies in the exposure tool encountered at a specific layer can be diminished by re-exposing subsequent layer(s) on the same tool, thus canceling out a large component of this error. In the past this strategy has, in general, resulted in better overall alignment performance, better exposure tool modeling, and decreased residual modeling errors. Increased alignment performance due to dedication does not come without its price. In such a dedicated strategy, wafers are committed to processing on the same tool at subsequent lithographic layers, thus decreasing manufacturing flexibility and in turn affecting cost through increased processing cycle time. Tool down-events and equipment upgrades requiring significant downtime can also have a significant negative impact on the running of a factory. This paper presents volume results for the 140 nm and 110 nm half-pitch geometries using state-of-the-art systems with 248 nm and 193 nm exposure wavelengths, respectively, that show that dedicated processing still produces superior overlay and device performance results when compared blindly against non-dedicated processing. Results also show that at a given time an acceptable match may be found, producing near-equivalent results for non-dedicated processing. Changes in alignment capability are also observed after major equipment maintenance and component replacement. A point-in-time predictor strategy utilizing residual modeling errors and a set of modified performance specifications is directly compared against measured overlay data after patterning, against within field AFOV

  7. A systematic impact assessment of GRACE error correlation on data assimilation in hydrological models

    NASA Astrophysics Data System (ADS)

    Schumacher, Maike; Kusche, Jürgen; Döll, Petra

    2016-06-01

    Recently, ensemble Kalman filters (EnKF) have found increasing application for merging hydrological models with total water storage anomaly (TWSA) fields from the Gravity Recovery And Climate Experiment (GRACE) satellite mission. Previous studies have disregarded the effect of spatially correlated errors of GRACE TWSA products in their investigations. Here, for the first time, we systematically assess the impact of the GRACE error correlation structure on EnKF data assimilation into a hydrological model, i.e. on estimated compartmental and total water storages and model parameter values. Our investigations include (1) assimilating gridded GRACE-derived TWSA into the WaterGAP Global Hydrology Model and, simultaneously, calibrating its parameters; (2) introducing GRACE observations on different spatial scales; (3) modelling observation errors as either spatially white or correlated in the assimilation procedure, and (4) replacing the standard EnKF algorithm by the square root analysis scheme or, alternatively, the singular evolutive interpolated Kalman filter. Results of a synthetic experiment designed for the Mississippi River Basin indicate that the hydrological parameters are sensitive to TWSA assimilation if spatial resolution of the observation data is sufficiently high. We find a significant influence of spatial error correlation on the adjusted water states and model parameters for all implemented filter variants, in particular for subbasins with a large discrepancy between observed and initially simulated TWSA and for north-south elongated sub-basins. Considering these correlated errors, however, does not generally improve results: while some metrics indicate that it is helpful to consider the full GRACE error covariance matrix, it appears to have an adverse effect on others. We conclude that considering the characteristics of GRACE error correlation is at least as important as the selection of the spatial discretisation of TWSA observations, while the choice
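
    The effect being tested in this record can be illustrated with a stochastic ensemble Kalman filter update in which the observation error covariance R is either fully populated (spatially correlated errors) or reduced to its diagonal (white errors). The sketch below is only a minimal stand-in, not the WaterGAP/GRACE implementation: the five-cell TWSA state, the exponential covariance model, the ensemble size and all numbers are invented for illustration.

      import numpy as np

      rng = np.random.default_rng(42)

      def enkf_update(ensemble, obs, H, R):
          """Stochastic EnKF analysis step with perturbed observations.

          ensemble : (n_state, n_members) prior ensemble
          obs      : (n_obs,) observation vector
          H        : (n_obs, n_state) linear observation operator
          R        : (n_obs, n_obs) observation error covariance
          """
          n_state, n_members = ensemble.shape
          x_mean = ensemble.mean(axis=1, keepdims=True)
          X = ensemble - x_mean                      # state anomalies
          Y = H @ X                                  # anomalies in observation space
          P_yy = Y @ Y.T / (n_members - 1) + R       # innovation covariance
          P_xy = X @ Y.T / (n_members - 1)           # state-observation cross covariance
          K = P_xy @ np.linalg.inv(P_yy)             # Kalman gain
          # perturb observations consistently with R (correlated or white)
          obs_pert = rng.multivariate_normal(obs, R, size=n_members).T
          return ensemble + K @ (obs_pert - H @ ensemble)

      # toy example: 5 grid cells of TWSA, observed directly, 50 ensemble members
      n, m = 5, 50
      prior = 10.0 + 3.0 * rng.standard_normal((n, m))
      truth = np.full(n, 12.0)
      H = np.eye(n)

      # spatially correlated observation error (exponential decay with cell separation)
      dist = np.abs(np.subtract.outer(np.arange(n), np.arange(n)))
      R_corr = 4.0 * np.exp(-dist / 2.0)
      R_white = np.diag(np.diag(R_corr))             # same variances, no correlation

      obs = rng.multivariate_normal(truth, R_corr)

      post_corr = enkf_update(prior.copy(), obs, H, R_corr)
      post_white = enkf_update(prior.copy(), obs, H, R_white)
      print("analysis mean, correlated R:", post_corr.mean(axis=1).round(2))
      print("analysis mean, white R:     ", post_white.mean(axis=1).round(2))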

  8. An examination of the southern California field test for the systematic accumulation of the optical refraction error in geodetic leveling.

    USGS Publications Warehouse

    Castle, R.O.; Brown, B.W.; Gilmore, T.D.; Mark, R.K.; Wilson, R.C.

    1983-01-01

    Appraisals of the two levelings that formed the southern California field test for the accumulation of the atmospheric refraction error indicate that random error and systematic error unrelated to refraction competed with the systematic refraction error and severely complicate any analysis of the test results. If the fewer than one-third of the sections that met less than second-order, class I standards are dropped, the divergence virtually disappears between the presumably more refraction contaminated long-sight-length survey and the less contaminated short-sight-length survey. -Authors

  9. Sherborn’s Index Animalium: New names, systematic errors and availability of names in the light of modern nomenclature

    PubMed Central

    Welter-Schultes, Francisco; Görlich, Angela; Lutze, Alexandra

    2016-01-01

    Abstract This study aims to shed light on the reliability of Sherborn’s Index Animalium in terms of modern usage. The AnimalBase project spent several years’ worth of teamwork dedicated to extracting new names from original sources in the period ranging from 1757 to the mid-1790s. This allowed us to closely analyse Sherborn’s work and verify the completeness and correctness of his record. We found the reliability of Sherborn’s resource generally very high, but in some special situations the reliability was reduced due to systematic errors or incompleteness in source material. Index Animalium is commonly used by taxonomists today who rely strongly on Sherborn’s record; our study is directed most pointedly at those users. We recommend paying special attention to the situations where we found that Sherborn’s data should be read with caution. In addition to some categories of systematic errors and mistakes that were Sherborn’s own responsibility, readers should also take into account that nomenclatural rules have been changed or refined in the past 100 years, and that Sherborn’s resource may in some cases present outdated information. One of our main conclusions is that error rates in nomenclatural compilations tend to be lower if a single, highly experienced person such as Sherborn carries out the work than if a team attempts the task. Based on our experience with extracting names from original sources we came to the conclusion that error rates in such manual compilation of name lists are difficult to reduce below 2–4%. We suggest this is a natural limit and a point of diminishing returns for projects of this nature. PMID:26877658

  10. Correction of non-additive errors in variational and ensemble data assimilation using image registration

    NASA Astrophysics Data System (ADS)

    Landelius, Tomas; Bojarova, Jelena; Gustafsson, Nils; Lindskog, Magnus

    2013-04-01

    It is hard to forecast the position of localized weather phenomena such as clouds, precipitation, and fronts. Moreover, cloudy areas are important since this is where most of the active weather occurs. Position errors, also known as phase, alignment, or displacement errors, can have several causes: timing errors, deficient model physics, inadequate model resolution, etc. Furthermore, position errors have been shown to be non-additive and non-Gaussian, which violates the error model most data assimilation methods rely on. Remote sensing data contain coherent information on the weather development in time and space. By comparing structures in radar or satellite images with the forecast model state, it is possible to get information about position errors. We use an image registration (optical flow) method to find a transformation, in terms of a displacement field, that aligns the model state with the corresponding remote sensing data. In particular, we surmise that assimilation of radiances in cloudy areas will benefit from a better aligned first guess. Analysis perturbations should become smaller and be easier to handle by the linearizations in the observation operator. In the variational setting, the displacement field is used as a mapping function to obtain a new, better aligned, first guess from the old one by means of interpolation (warping). To reduce the effect of imbalances, the aligned first guess is not used as is. Instead it is used for generation of pseudo observations that are assimilated in a first step to get an aligned and balanced first guess. This step reduces the non-additive errors due to mis-alignment and is followed by a second step with a standard variational assimilation to compensate for the remaining additive errors. In ensemble data assimilation, a displacement field is estimated for each ensemble member and is used as a distance measure. In areas where a member has a smaller displacement (smaller position error) than the control it is given
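
    The warping step described above, using a displacement field to re-align the first guess by interpolation, can be sketched with standard array tools. The example below uses scipy's map_coordinates on a toy "cloud band" that the first guess places three grid points too far east; the displacement field, grid and all numbers are invented, and the pseudo-observation and balancing steps of the actual assimilation are not reproduced.

      import numpy as np
      from scipy.ndimage import map_coordinates

      def warp_field(field, dx, dy):
          """Warp a 2-D model field with a displacement field (dx, dy), in grid units.

          The warped value at (i, j) is interpolated from the original field at
          (i - dy[i, j], j - dx[i, j]), i.e. features are moved *by* (dx, dy).
          """
          ny, nx = field.shape
          jj, ii = np.meshgrid(np.arange(nx), np.arange(ny))
          coords = np.array([ii - dy, jj - dx])      # source coordinates for each target pixel
          return map_coordinates(field, coords, order=1, mode="nearest")

      # toy example: a "cloud band" that the first guess places 3 grid points too far east
      ny, nx = 60, 80
      x = np.arange(nx)
      first_guess = np.exp(-((x - 45.0) ** 2) / 30.0)[None, :].repeat(ny, axis=0)
      dx = np.full((ny, nx), -3.0)                   # move everything 3 points west
      dy = np.zeros((ny, nx))

      aligned_guess = warp_field(first_guess, dx, dy)
      print("peak column before:", first_guess[0].argmax(), "after:", aligned_guess[0].argmax())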

  11. Efficient Solar Scene Wavefront Estimation with Reduced Systematic and RMS Errors: Summary

    NASA Astrophysics Data System (ADS)

    Anugu, N.; Garcia, P.

    2016-04-01

    Wave front sensing for solar telescopes is commonly implemented with Shack-Hartmann sensors. Correlation algorithms are usually used to estimate the extended scene Shack-Hartmann sub-aperture image shifts or slopes. The image shift is computed by correlating a reference sub-aperture image with the target distorted sub-aperture image. The pixel position where the maximum correlation is located gives the image shift in integer pixel coordinates. Sub-pixel precision image shifts are computed by applying a peak-finding algorithm to the correlation peak Poyneer (2003); Löfdahl (2010). However, the peak-finding algorithm results are usually biased towards the integer pixels; these errors are called systematic bias errors Sjödahl (1994). These errors are caused by the low pixel sampling of the images. The amplitude of these errors depends on the type of correlation algorithm and the type of peak-finding algorithm being used. To study the systematic errors in detail, solar sub-aperture synthetic images are constructed by using a Swedish Solar Telescope solar granulation image. The performance of the cross-correlation algorithm in combination with different peak-finding algorithms is investigated. The studied peak-finding algorithms are: parabola Poyneer (2003); quadratic polynomial Löfdahl (2010); threshold center of gravity Bailey (2003); Gaussian Nobach & Honkanen (2005) and Pyramid Bailey (2003). The systematic error study reveals that the pyramid fit is the most robust to pixel locking effects. The RMS error analysis study reveals that the threshold center of gravity behaves better at low SNR, although the systematic errors in the measurement are large. It is found that no algorithm is best for both the systematic and the RMS error reduction. To overcome the above problem, a new solution is proposed. In this solution, the image sampling is increased prior to the actual correlation matching. The method is realized in two steps to improve its
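
    The integer-pixel correlation plus sub-pixel peak fit described above can be sketched as follows, using only the parabola fit of Poyneer (2003) and a synthetic low-pass-filtered random scene instead of a granulation image. The small difference between the true and estimated shift in the printout is precisely the pixel-locking (systematic bias) effect discussed in this record; the scene, shift and grid size are invented.

      import numpy as np

      def subpixel_shift(reference, target):
          """Estimate the (dy, dx) shift of `target` relative to `reference` via
          FFT cross-correlation plus a separable 3-point parabolic peak fit."""
          corr = np.fft.ifft2(np.fft.fft2(target) * np.conj(np.fft.fft2(reference))).real
          corr = np.fft.fftshift(corr)
          iy, ix = np.unravel_index(np.argmax(corr), corr.shape)

          def parabola_offset(cm, c0, cp):
              # vertex of the parabola through three equally spaced samples
              denom = cm - 2.0 * c0 + cp
              return 0.0 if denom == 0 else 0.5 * (cm - cp) / denom

          dy = iy - reference.shape[0] // 2 + parabola_offset(corr[iy - 1, ix], corr[iy, ix], corr[iy + 1, ix])
          dx = ix - reference.shape[1] // 2 + parabola_offset(corr[iy, ix - 1], corr[iy, ix], corr[iy, ix + 1])
          return dy, dx

      # synthetic smooth scene (stand-in for a granulation sub-aperture image)
      rng = np.random.default_rng(1)
      n = 64
      fy, fx = np.meshgrid(np.fft.fftfreq(n), np.fft.fftfreq(n), indexing="ij")
      lowpass = np.exp(-(fy**2 + fx**2) / (2 * 0.05**2))
      scene = np.fft.ifft2(np.fft.fft2(rng.standard_normal((n, n))) * lowpass).real

      # shift the scene by a known sub-pixel amount using a Fourier phase ramp
      true_dy, true_dx = 1.3, -0.7
      shifted = np.fft.ifft2(np.fft.fft2(scene) *
                             np.exp(-2j * np.pi * (fy * true_dy + fx * true_dx))).real

      print("true shift:     ", (true_dy, true_dx))
      print("estimated shift:", tuple(round(s, 2) for s in subpixel_shift(scene, shifted)))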

  12. Systematic Errors in the Measurement of Emissivity Caused by Directional Effects

    NASA Astrophysics Data System (ADS)

    Kribus, Abraham; Vishnevetsky, Irna; Rotenberg, Eyal; Yakir, Dan

    2003-04-01

    Accurate knowledge of surface emissivity is essential for applications in remote sensing (remote temperature measurement), radiative transport, and modeling of environmental energy balances. Direct measurements of surface emissivity are difficult when there is considerable background radiation at the same wavelength as the emitted radiation. This occurs, for example, when objects at temperatures near room temperature are measured in a terrestrial environment by use of the infrared 8–14 μm band. This problem is usually treated by assumption of a perfectly diffuse surface or of diffuse background radiation. However, real surfaces and actual background radiation are not diffuse; therefore there will be a systematic measurement error. It is demonstrated that, in some cases, the deviations from a diffuse behavior lead to large errors in the measured emissivity. Past measurements made with simplifying assumptions should therefore be reevaluated and corrected. Recommendations are presented for improving experimental procedures in emissivity measurement.
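
    Under the diffuse assumptions that this record cautions about, the measured band radiance is a two-component balance, L_meas = ε·B(T_surface) + (1 − ε)·B(T_background), which can be inverted for ε. The sketch below integrates the Planck function over the 8–14 μm band and shows how a background that is effectively warmer in the reflection direction than assumed biases the retrieved emissivity; the temperatures and emissivity are invented.

      import numpy as np

      H, C, KB = 6.626e-34, 2.998e8, 1.381e-23  # Planck, speed of light, Boltzmann (SI)

      def band_radiance(T, lam_lo=8e-6, lam_hi=14e-6, n=2000):
          """Band-integrated blackbody radiance (W m^-2 sr^-1) over [lam_lo, lam_hi]."""
          lam = np.linspace(lam_lo, lam_hi, n)
          spectral = 2 * H * C**2 / lam**5 / np.expm1(H * C / (lam * KB * T))
          return spectral.sum() * (lam[1] - lam[0])

      def emissivity(L_meas, T_surface, T_background):
          """Invert L_meas = eps*B(T_s) + (1 - eps)*B(T_bg), assuming diffuse behaviour."""
          B_s, B_bg = band_radiance(T_surface), band_radiance(T_background)
          return (L_meas - B_bg) / (B_s - B_bg)

      # invented example: surface at 300 K, true emissivity 0.95, sky background at 260 K
      eps_true, T_s, T_bg = 0.95, 300.0, 260.0
      L = eps_true * band_radiance(T_s) + (1 - eps_true) * band_radiance(T_bg)
      print("recovered emissivity:", round(emissivity(L, T_s, T_bg), 3))

      # a non-diffuse background (effectively warmer in the reflection direction)
      # biases the retrieval, which is the systematic error discussed above
      L_actual = eps_true * band_radiance(T_s) + (1 - eps_true) * band_radiance(275.0)
      print("biased emissivity:   ", round(emissivity(L_actual, T_s, T_bg), 3))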

  13. A posteriori compensation of the systematic error due to polynomial interpolation in digital image correlation

    NASA Astrophysics Data System (ADS)

    Baldi, Antonio; Bertolino, Filippo

    2013-10-01

    It is well known that displacement components estimated using digital image correlation are affected by a systematic error due to the polynomial interpolation required by the numerical algorithm. The magnitude of bias depends on the characteristics of the speckle pattern (i.e., the frequency content of the image), on the fractional part of the displacements, and on the type of polynomial used for intensity interpolation. In the literature, B-Spline polynomials are pointed out as introducing the smallest errors, whereas bilinear and cubic interpolants generally give the worst results. However, the small bias of B-Spline polynomials is partially counterbalanced by a somewhat larger execution time. We will try to improve the accuracy of lower order polynomials by a posteriori correcting their results so as to obtain a faster and more accurate analysis.

  14. Assessment of Systematic Measurement Errors for Acoustic Travel-Time Tomography of the Atmosphere

    DTIC Science & Technology

    2013-01-01

    [Only extraction fragments of this abstract are available: figure-caption text indicating that reconstructions of temperature and wind fields (u0 + u, v0 + v) become relatively accurate when the systematic errors are estimated with Algorithm 3, plus partial references on acoustic travel-time tomographic monitoring of the atmospheric surface layer.]

  15. First Year Wilkinson Microwave Anisotropy Probe(WMAP) Observations: Data Processing Methods and Systematic Errors Limits

    NASA Technical Reports Server (NTRS)

    Hinshaw, G.; Barnes, C.; Bennett, C. L.; Greason, M. R.; Halpern, M.; Hill, R. S.; Jarosik, N.; Kogut, A.; Limon, M.; Meyer, S. S.

    2003-01-01

    We describe the calibration and data processing methods used to generate full-sky maps of the cosmic microwave background (CMB) from the first year of Wilkinson Microwave Anisotropy Probe (WMAP) observations. Detailed limits on residual systematic errors are assigned based largely on analyses of the flight data supplemented, where necessary, with results from ground tests. The data are calibrated in flight using the dipole modulation of the CMB due to the observatory's motion around the Sun. This constitutes a full-beam calibration source. An iterative algorithm simultaneously fits the time-ordered data to obtain calibration parameters and pixelized sky map temperatures. The noise properties are determined by analyzing the time-ordered data with this sky signal estimate subtracted. Based on this, we apply a pre-whitening filter to the time-ordered data to remove a low level of 1/f noise. We infer and correct for a small (approx. 1%) transmission imbalance between the two sky inputs to each differential radiometer, and we subtract a small sidelobe correction from the 23 GHz (K band) map prior to further analysis. No other systematic error corrections are applied to the data. Calibration and baseline artifacts, including the response to environmental perturbations, are negligible. Systematic uncertainties are comparable to statistical uncertainties in the characterization of the beam response. Both are accounted for in the covariance matrix of the window function and are propagated to uncertainties in the final power spectrum. We characterize the combined upper limits to residual systematic uncertainties through the pixel covariance matrix.

  16. Synoptic scale forecast skill and systematic errors in the MASS 2.0 model. [Mesoscale Atmospheric Simulation System

    NASA Technical Reports Server (NTRS)

    Koch, S. E.; Skillman, W. C.; Kocin, P. J.; Wetzel, P. J.; Brill, K. F.

    1985-01-01

    The synoptic scale performance characteristics of MASS 2.0 are determined by comparing filtered 12-24 hr model forecasts to same-case forecasts made by the National Meteorological Center's synoptic-scale Limited-area Fine Mesh model. Characteristics of the two systems are contrasted, and the analysis methodology used to determine statistical skill scores and systematic errors is described. The overall relative performance of the two models in the sample is documented, and important systematic errors uncovered are presented.

  17. Suppression of Systematic Errors of Electronic Distance Meters for Measurement of Short Distances.

    PubMed

    Braun, Jaroslav; Štroner, Martin; Urban, Rudolf; Dvořáček, Filip

    2015-08-06

    In modern industrial geodesy, high demands are placed on the final accuracy, with expectations currently falling below 1 mm. The measurement methodology and surveying instruments used have to be adjusted to meet these stringent requirements, especially the total stations as the most often used instruments. A standard deviation of the measured distance is the accuracy parameter, commonly between 1 and 2 mm. This parameter is often discussed in conjunction with the determination of the real accuracy of measurements at very short distances (5-50 m) because it is generally known that this accuracy cannot be increased by simply repeating the measurement because a considerable part of the error is systematic. This article describes the detailed testing of electronic distance meters to determine the absolute size of their systematic errors, their stability over time, their repeatability and the real accuracy of their distance measurement. Twenty instruments (total stations) have been tested, and more than 60,000 distances in total were measured to determine the accuracy and precision parameters of the distance meters. Based on the experiments' results, calibration procedures were designed, including a special correction function for each instrument, whose usage reduces the standard deviation of the measurement of distance by at least 50%.

  18. Suppression of Systematic Errors of Electronic Distance Meters for Measurement of Short Distances

    PubMed Central

    Braun, Jaroslav; Štroner, Martin; Urban, Rudolf; Dvořáček, Filip

    2015-01-01

    In modern industrial geodesy, high demands are placed on the final accuracy, with expectations currently falling below 1 mm. The measurement methodology and surveying instruments used have to be adjusted to meet these stringent requirements, especially the total stations as the most often used instruments. A standard deviation of the measured distance is the accuracy parameter, commonly between 1 and 2 mm. This parameter is often discussed in conjunction with the determination of the real accuracy of measurements at very short distances (5–50 m) because it is generally known that this accuracy cannot be increased by simply repeating the measurement because a considerable part of the error is systematic. This article describes the detailed testing of electronic distance meters to determine the absolute size of their systematic errors, their stability over time, their repeatability and the real accuracy of their distance measurement. Twenty instruments (total stations) have been tested, and more than 60,000 distances in total were measured to determine the accuracy and precision parameters of the distance meters. Based on the experiments’ results, calibration procedures were designed, including a special correction function for each instrument, whose usage reduces the standard deviation of the measurement of distance by at least 50%. PMID:26258777
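
    The per-instrument correction function mentioned above is derived in the paper from dense baseline calibration measurements; a common functional form for the short-period cyclic error of a phase distance meter is a low-order sinusoidal series in the measured distance. The sketch below assumes that form, a 10 m unit length and invented residuals, so it is only an illustration of the fitting idea, not the authors' actual calibration model.

      import numpy as np
      from scipy.optimize import curve_fit

      def cyclic_correction(d, a1, p1, a2, p2):
          """Cyclic error model: two sine terms tied to the distance meter's unit length U."""
          U = 10.0  # assumed unit length, m
          return (a1 * np.sin(2 * np.pi * d / U + p1)
                  + a2 * np.sin(4 * np.pi * d / U + p2))

      # invented calibration data: residuals (measured - reference) in mm on a 5-50 m baseline
      rng = np.random.default_rng(0)
      d_ref = np.linspace(5.0, 50.0, 90)
      true_resid = cyclic_correction(d_ref, 0.6, 0.4, 0.25, 1.1)           # systematic part
      observed_resid = true_resid + 0.15 * rng.standard_normal(d_ref.size) # plus random noise

      params, _ = curve_fit(cyclic_correction, d_ref, observed_resid, p0=[0.5, 0.0, 0.2, 0.0])
      corrected = observed_resid - cyclic_correction(d_ref, *params)

      print("std before correction: %.2f mm" % observed_resid.std())
      print("std after correction:  %.2f mm" % corrected.std())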

  19. Improving Photometry and Stellar Signal Preservation with Pixel-Level Systematic Error Correction

    NASA Technical Reports Server (NTRS)

    Kolodzijczak, Jeffrey J.; Smith, Jeffrey C.; Jenkins, Jon M.

    2013-01-01

    The Kepler Mission has demonstrated that excellent stellar photometric performance can be achieved using apertures constructed from optimally selected CCD pixels. The clever methods used to correct for systematic errors, while very successful, still have some limitations in their ability to extract long-term trends in stellar flux. They also leave poorly correlated bias sources, such as drifting moiré pattern, uncorrected. We will illustrate several approaches where applying systematic error correction algorithms to the pixel time series, rather than the co-added raw flux time series, provides significant advantages. Examples include spatially localized determination of time-varying moiré pattern biases, greater sensitivity to radiation-induced pixel sensitivity drops (SPSDs), improved precision of co-trending basis vectors (CBV), and a means of distinguishing the stellar variability from co-trending terms even when they are correlated. For the last item, the approach enables physical interpretation of appropriately scaled coefficients derived in the fit of pixel time series to the CBV as linear combinations of various spatial derivatives of the pixel response function (PRF). We demonstrate that the residuals of a fit of so-derived pixel coefficients to various PRF-related components can be deterministically interpreted in terms of physically meaningful quantities, such as the component of the stellar flux time series which is correlated with the CBV, as well as relative pixel gain, proper motion and parallax. The approach also enables us to parameterize and assess the limiting factors in the uncertainties in these quantities.

  20. Comparison of two stochastic techniques for reliable urban runoff prediction by modeling systematic errors

    NASA Astrophysics Data System (ADS)

    Del Giudice, Dario; Löwe, Roland; Madsen, Henrik; Mikkelsen, Peter Steen; Rieckermann, Jörg

    2015-07-01

    In urban rainfall-runoff, commonly applied statistical techniques for uncertainty quantification mostly ignore systematic output errors originating from simplified models and erroneous inputs. Consequently, the resulting predictive uncertainty is often unreliable. Our objective is to present two approaches which use stochastic processes to describe systematic deviations and to discuss their advantages and drawbacks for urban drainage modeling. The two methodologies are an external bias description (EBD) and an internal noise description (IND, also known as stochastic gray-box modeling). They emerge from different fields and have not yet been compared in environmental modeling. To compare the two approaches, we develop a unifying terminology, evaluate them theoretically, and apply them to conceptual rainfall-runoff modeling in the same drainage system. Our results show that both approaches can provide probabilistic predictions of wastewater discharge in a similarly reliable way, both for periods ranging from a few hours up to more than 1 week ahead of time. The EBD produces more accurate predictions on long horizons but relies on computationally heavy MCMC routines for parameter inferences. These properties make it more suitable for off-line applications. The IND can help in diagnosing the causes of output errors and is computationally inexpensive. It produces best results on short forecast horizons that are typical for online applications.

  1. Carers' Medication Administration Errors in the Domiciliary Setting: A Systematic Review

    PubMed Central

    Garfield, Sara; Vincent, Charles; Franklin, Bryony Dean

    2016-01-01

    Purpose Medications are mostly taken in patients’ own homes, increasingly administered by carers, yet studies of medication safety have been largely conducted in the hospital setting. We aimed to review studies of how carers cause and/or prevent medication administration errors (MAEs) within the patient’s home; to identify types, prevalence and causes of these MAEs and any interventions to prevent them. Methods A narrative systematic review of literature published between 1 Jan 1946 and 23 Sep 2013 was carried out across the databases EMBASE, MEDLINE, PSYCHINFO, COCHRANE and CINAHL. Empirical studies were included where carers were responsible for preventing/causing MAEs in the home, and standardised tools were used for data extraction and quality assessment. Results Thirty-six papers met the criteria for narrative review, 33 of which included parents caring for children, two predominantly comprised adult children and spouses caring for older parents/partners, and one focused on paid carers mostly looking after older adults. The carer administration error rate ranged from 1.9 to 33% of medications administered and from 12 to 92.7% of carers administering medication. These included dosage errors, omitted administration, wrong medication and wrong time or route of administration. Contributory factors included individual carer factors (e.g. carer age), environmental factors (e.g. storage), medication factors (e.g. number of medicines), prescription communication factors (e.g. comprehensibility of instructions), psychosocial factors (e.g. carer-to-carer communication), and care-recipient factors (e.g. recipient age). The few interventions effective in preventing MAEs involved carer training and tailored equipment. Conclusion This review shows that home medication administration errors made by carers are a potentially serious patient safety issue. Carers made similar errors to those made by professionals in other contexts and a wide variety of contributory factors were

  2. SU-D-BRD-07: Evaluation of the Effectiveness of Statistical Process Control Methods to Detect Systematic Errors For Routine Electron Energy Verification

    SciTech Connect

    Parker, S

    2015-06-15

    Purpose: To evaluate the ability of statistical process control methods to detect systematic errors when using a two dimensional (2D) detector array for routine electron beam energy verification. Methods: Electron beam energy constancy was measured using an aluminum wedge and a 2D diode array on four linear accelerators. Process control limits were established. Measurements were recorded in control charts and compared with both calculated process control limits and TG-142 recommended specification limits. The data was tested for normality, process capability and process acceptability. Additional measurements were recorded while systematic errors were intentionally introduced. Systematic errors included shifts in the alignment of the wedge, incorrect orientation of the wedge, and incorrect array calibration. Results: Control limits calculated for each beam were smaller than the recommended specification limits. Process capability and process acceptability ratios were greater than one in all cases. All data was normally distributed. Shifts in the alignment of the wedge were most apparent for low energies. The smallest shift (0.5 mm) was detectable using process control limits in some cases, while the largest shift (2 mm) was detectable using specification limits in only one case. The wedge orientation tested did not affect the measurements as this did not affect the thickness of aluminum over the detectors of interest. Array calibration dependence varied with energy and selected array calibration. 6 MeV was the least sensitive to array calibration selection while 16 MeV was the most sensitive. Conclusion: Statistical process control methods demonstrated that the data distribution was normally distributed, the process was capable of meeting specifications, and that the process was centered within the specification limits. Though not all systematic errors were distinguishable from random errors, process control limits increased the ability to detect systematic errors
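
    The control-chart logic used in this work, deriving control limits from baseline measurements and comparing them with the wider specification limits, can be sketched with a Shewhart individuals chart. The baseline data, the ±2% specification limits and the size of the injected shift below are invented placeholders, not the TG-142 values or the measured linac data.

      import numpy as np

      def individuals_limits(baseline):
          """Shewhart individuals (I-MR) chart limits from a baseline sample."""
          baseline = np.asarray(baseline, dtype=float)
          center = baseline.mean()
          moving_range = np.abs(np.diff(baseline)).mean()
          sigma_est = moving_range / 1.128          # d2 constant for subgroups of 2
          return center, center - 3 * sigma_est, center + 3 * sigma_est

      # invented baseline: daily electron-energy constancy results (% difference from reference)
      rng = np.random.default_rng(3)
      baseline = 0.2 * rng.standard_normal(30)
      center, lcl, ucl = individuals_limits(baseline)
      spec_lo, spec_hi = -2.0, 2.0                  # placeholder specification limits

      # process capability against the (wider) specification limits
      sigma = (ucl - center) / 3
      cp = (spec_hi - spec_lo) / (6 * sigma)
      print(f"control limits: [{lcl:.2f}, {ucl:.2f}] %, Cp = {cp:.1f}")

      # a small systematic shift is caught by the control limits long before the spec limits
      new_points = 0.2 * rng.standard_normal(10) + 0.8
      for i, x in enumerate(new_points):
          if not lcl <= x <= ucl:
              print(f"point {i}: {x:+.2f} % outside control limits (still inside spec)")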

  3. Generalized additive models and Lucilia sericata growth: assessing confidence intervals and error rates in forensic entomology.

    PubMed

    Tarone, Aaron M; Foran, David R

    2008-07-01

    Forensic entomologists use blow fly development to estimate a postmortem interval. Although accurate, fly age estimates can be imprecise for older developmental stages and no standard means of assigning confidence intervals exists. Presented here is a method for modeling growth of the forensically important blow fly Lucilia sericata, using generalized additive models (GAMs). Eighteen GAMs were created to predict the extent of juvenile fly development, encompassing developmental stage, length, weight, strain, and temperature data, collected from 2559 individuals. All measures were informative, explaining up to 92.6% of the deviance in the data, though strain and temperature exerted negligible influences. Predictions made with an independent data set allowed for a subsequent examination of error. Estimates using length and developmental stage were within 5% of true development percent during the feeding portion of the larval life cycle, while predictions for postfeeding third instars were less precise, but within expected error.
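
    Generalized additive models of this kind are straightforward to reproduce with off-the-shelf libraries. The sketch below assumes the pygam package and entirely synthetic rearing data (length and temperature predicting development percentage), so the fitted smooths, intervals and error rates carry no forensic meaning.

      import numpy as np
      from pygam import LinearGAM, s  # assumes pygam is installed

      rng = np.random.default_rng(7)

      # synthetic rearing data: development % as a smooth function of length and temperature
      n = 400
      length = rng.uniform(2, 17, n)              # larval length, mm
      temp = rng.uniform(20, 33, n)               # rearing temperature, deg C
      dev_pct = (100 / (1 + np.exp(-(length - 9) / 2))   # sigmoidal growth with length
                 + 0.5 * (temp - 26)                      # weak temperature effect
                 + 3 * rng.standard_normal(n))
      X, y = np.column_stack([length, temp]), dev_pct

      # one smooth term per predictor, as in a typical GAM formulation
      gam = LinearGAM(s(0) + s(1)).fit(X, y)

      # predictions with 95% prediction intervals for new individuals
      X_new = np.array([[6.0, 25.0], [12.0, 30.0]])
      pred = gam.predict(X_new)
      intervals = gam.prediction_intervals(X_new, width=0.95)
      for xr, p, (lo, hi) in zip(X_new, pred, intervals):
          print(f"length {xr[0]:4.1f} mm, {xr[1]:4.1f} C -> {p:5.1f}% dev ({lo:.1f}-{hi:.1f})")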

  4. The Thirty Gigahertz Instrument Receiver for the QUIJOTE Experiment: Preliminary Polarization Measurements and Systematic-Error Analysis.

    PubMed

    Casas, Francisco J; Ortiz, David; Villa, Enrique; Cano, Juan L; Cagigas, Jaime; Pérez, Ana R; Aja, Beatriz; Terán, J Vicente; de la Fuente, Luisa; Artal, Eduardo; Hoyland, Roger; Génova-Santos, Ricardo

    2015-08-05

    This paper presents preliminary polarization measurements and systematic-error characterization of the Thirty Gigahertz Instrument receiver developed for the QUIJOTE experiment. The instrument has been designed to measure the polarization of Cosmic Microwave Background radiation from the sky, obtaining the Q, U, and I Stokes parameters of the incoming signal simultaneously. Two kinds of linearly polarized input signals have been used as excitations in the polarimeter measurement tests in the laboratory; these show consistent results in terms of the Stokes parameters obtained. A measurement-based systematic-error characterization technique has been used in order to determine the possible sources of instrumental errors and to assist in the polarimeter calibration process.

  5. An Examination of the Spatial Distribution of Carbon Dioxide and Systematic Errors

    NASA Technical Reports Server (NTRS)

    Coffey, Brennan; Gunson, Mike; Frankenberg, Christian; Osterman, Greg

    2011-01-01

    The industrial period and modern age are characterized by combustion of coal, oil, and natural gas for primary energy and transportation, leading to rising levels of atmospheric CO2. This increase, which is being carefully measured, has ramifications throughout the biological world. Through remote sensing, it is possible to measure how many molecules of CO2 lie in a defined column of air. However, other gases and particles are present in the atmosphere, such as aerosols and water, which make such measurements more complicated. Understanding the detailed geometry and path length of the observation is vital to computing the concentration of CO2. By comparing these satellite readings with ground-truth data (TCCON), the systematic errors arising from these sources can be assessed. Once the error is understood, it can be corrected for in the retrieval algorithms to create a data set that is closer to the TCCON measurements. Using this process, the algorithms are being developed to reduce bias to within 0.1% of the true value worldwide. At this stage, the accuracy is within 1%, but through correcting small errors contained in the algorithms, such as accounting for the scattering of sunlight, the desired accuracy can be achieved.

  6. Assessment of Systematic Chromatic Errors that Impact Sub-1% Photometric Precision in Large-Area Sky Surveys

    SciTech Connect

    Li, T. S.

    2016-05-27

    Meeting the science goals for many current and future ground-based optical large-area sky surveys requires that the calibrated broadband photometry is stable in time and uniform over the sky to 1% precision or better. Past surveys have achieved photometric precision of 1-2% by calibrating the survey's stellar photometry with repeated measurements of a large number of stars observed in multiple epochs. The calibration techniques employed by these surveys only consider the relative frame-by-frame photometric zeropoint offset and the focal plane position-dependent illumination corrections, which are independent of the source color. However, variations in the wavelength dependence of the atmospheric transmission and the instrumental throughput induce source color-dependent systematic errors. These systematic errors must also be considered to achieve the most precise photometric measurements. In this paper, we examine such systematic chromatic errors using photometry from the Dark Energy Survey (DES) as an example. We define a natural magnitude system for DES and calculate the systematic errors on stellar magnitudes, when the atmospheric transmission and instrumental throughput deviate from the natural system. We conclude that the systematic chromatic errors caused by the change of airmass in each exposure, the change of the precipitable water vapor and aerosol in the atmosphere over time, and the non-uniformity of instrumental throughput over the focal plane, can be up to 2% in some bandpasses. We compare the calculated systematic chromatic errors with the observed DES data. For the test sample data, we correct these errors using measurements of the atmospheric transmission and instrumental throughput. The residual after correction is less than 0.3%. We also find that the errors for non-stellar objects are redshift-dependent and can be larger than those for stars at certain redshifts.
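
    The size of such a chromatic error can be estimated by synthetic photometry: integrate a source spectrum through the natural-system passband and through a perturbed passband, then difference the magnitudes. The sketch below uses blackbody spectra and a toy top-hat band with a 5% linear tilt; none of it reproduces the DES passbands or atmospheric model, and the offsets printed are illustrative only.

      import numpy as np

      def synth_mag(wave_nm, flux, throughput):
          """Relative broadband magnitude from photon-weighted synthetic photometry."""
          signal = np.sum(flux * throughput * wave_nm)   # photon-counting weighting ~ lambda
          norm = np.sum(throughput * wave_nm)
          return -2.5 * np.log10(signal / norm)

      def blackbody(wave_nm, T):
          lam = wave_nm * 1e-9
          return 1.0 / lam**5 / np.expm1(1.4388e-2 / (lam * T))   # spectral shape, arbitrary units

      wave = np.linspace(500, 600, 400)                            # toy "band", nm
      natural = np.where((wave > 520) & (wave < 580), 1.0, 0.0)
      # perturbed throughput: a 5% linear tilt across the band (e.g. aerosol or coating drift)
      perturbed = natural * (1.0 + 0.05 * (wave - 550) / 50)

      for T in (4000, 6000, 10000):                                # red to blue stars
          dm = (synth_mag(wave, blackbody(wave, T), perturbed)
                - synth_mag(wave, blackbody(wave, T), natural))
          print(f"T = {T:5d} K: chromatic offset = {1000 * dm:+.1f} mmag")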

  7. Systematic evaluation of autoregressive error models as post-processors for a probabilistic streamflow forecast system

    NASA Astrophysics Data System (ADS)

    Morawietz, Martin; Xu, Chong-Yu; Gottschalk, Lars; Tallaksen, Lena

    2010-05-01

    A post-processor is necessary in a probabilistic streamflow forecast system to account for the hydrologic uncertainty introduced by the hydrological model. In this study, different variants of an autoregressive error model that can be used as a post-processor for short- to medium-range streamflow forecasts are evaluated. The deterministic HBV model is used to form the basis for the streamflow forecast. The general structure of the error models then used as post-processor is a first-order autoregressive model of the form d_t = α·d_(t-1) + σ·ε_t, where d_t is the model error (observed minus simulated streamflow) at time t, α and σ are the parameters of the error model, and ε_t is the residual error described through a probability distribution. The following aspects are investigated: (1) Use of constant parameters α and σ versus the use of state-dependent parameters. The state-dependent parameters vary depending on the states of temperature, precipitation, snow water equivalent and simulated streamflow. (2) Use of a standard normal distribution for ε_t versus use of an empirical distribution function constituted through the normalized residuals of the error model in the calibration period. (3) Comparison of two different transformations, i.e. logarithmic versus square root, that are applied to the streamflow data before the error model is applied. The reason for applying a transformation is to make the residuals of the error model homoscedastic over the range of streamflow values of different magnitudes. Through combination of these three characteristics, eight variants of the autoregressive post-processor are generated. These are calibrated and validated in 55 catchments throughout Norway. The discrete ranked probability score with 99 flow percentiles as standardized thresholds is used for evaluation. In addition, a non-parametric bootstrap is used to construct confidence intervals and evaluate the significance of the results. The main
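
    A minimal sketch of the post-processor variant with constant parameters, a standard normal residual distribution and a logarithmic transformation is given below; the HBV-type simulation values, the initial error and the parameters α and σ are invented, so the intervals printed are purely illustrative.

      import numpy as np

      rng = np.random.default_rng(11)

      def ar1_forecast_ensemble(sim_future, last_error, alpha, sigma, n_members=500):
          """Propagate the AR(1) error model d_t = alpha*d_{t-1} + sigma*eps_t forward
          and add the sampled errors to the (transformed) deterministic simulation."""
          horizon = len(sim_future)
          members = np.empty((n_members, horizon))
          for m in range(n_members):
              d = last_error
              for t in range(horizon):
                  d = alpha * d + sigma * rng.standard_normal()
                  members[m, t] = sim_future[t] + d
          return members

      # work in log space so residuals are roughly homoscedastic (the transformation above)
      obs_now, sim_now = 42.0, 35.0                                   # m3/s at issue time
      sim_future = np.log(np.array([36.0, 40.0, 47.0, 52.0, 50.0]))   # deterministic simulation
      alpha, sigma = 0.85, 0.08                                       # invented parameters

      log_members = ar1_forecast_ensemble(sim_future, np.log(obs_now) - np.log(sim_now), alpha, sigma)
      members = np.exp(log_members)                                   # back-transform to discharge

      for t in range(len(sim_future)):
          lo, med, hi = np.percentile(members[:, t], [5, 50, 95])
          print(f"lead {t+1}: median {med:5.1f} m3/s, 90% interval [{lo:5.1f}, {hi:5.1f}]")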

  8. A table of integrals of the error function. II - Additions and corrections.

    NASA Technical Reports Server (NTRS)

    Geller, M.; Ng, E. W.

    1971-01-01

    Integrals of products of error functions with other functions are presented, taking into account a combination of the error function with powers, a combination of the error function with exponentials and powers, a combination of the error function with exponentials of more complicated arguments, definite integrals from Laplace transforms, and a combination of the error function with trigonometric functions. Other integrals considered include a combination of the error function with logarithms and powers, a combination of two error functions, and a combination of the error function with other special functions.

  9. Calibration and systematic error analysis for the COBE(1) DMR 4year sky maps

    SciTech Connect

    Kogut, A.; Banday, A.J.; Bennett, C.L.; Gorski, K.M.; Hinshaw,G.; Jackson, P.D.; Keegstra, P.; Lineweaver, C.; Smoot, G.F.; Tenorio,L.; Wright, E.L.

    1996-01-04

    The Differential Microwave Radiometers (DMR) instrument aboard the Cosmic Background Explorer (COBE) has mapped the full microwave sky to mean sensitivity 26 μK per 7° field of view. The absolute calibration is determined to 0.7 percent with drifts smaller than 0.2 percent per year. We have analyzed both the raw differential data and the pixelized sky maps for evidence of contaminating sources such as solar system foregrounds, instrumental susceptibilities, and artifacts from data recovery and processing. Most systematic effects couple only weakly to the sky maps. The largest uncertainties in the maps result from the instrument susceptibility to Earth's magnetic field, microwave emission from Earth, and upper limits to potential effects at the spacecraft spin period. Systematic effects in the maps are small compared to either the noise or the celestial signal: the 95 percent confidence upper limit for the pixel-pixel rms from all identified systematics is less than 6 μK in the worst channel. A power spectrum analysis of the (A-B)/2 difference maps shows no evidence for additional undetected systematic effects.

  10. Calibration and Systematic Error Analysis for the COBE DMR 4 Year Sky Maps

    NASA Astrophysics Data System (ADS)

    Kogut, A.; Banday, A. J.; Bennett, C. L.; Gorski, K. M.; Hinshaw, G.; Jackson, P. D.; Keegstra, P.; Lineweaver, C.; Smoot, G. F.; Tenorio, L.; Wright, E. L.

    1996-10-01

    The Differential Microwave Radiometers (DMR) instrument aboard the Cosmic Background Explorer (COBE) has mapped the full microwave sky to mean sensitivity 26 μK per 7° field of view. The absolute calibration is determined to 0.7% with drifts smaller than 0.2% per year. We have analyzed both the raw differential data and the pixelized sky maps for evidence of contaminating sources such as solar system foregrounds, instrumental susceptibilities, and artifacts from data recovery and processing. Most systematic effects couple only weakly to the sky maps. The largest uncertainties in the maps result from the instrument susceptibility to Earth's magnetic field, microwave emission from Earth, and upper limits to potential effects at the spacecraft spin period. Systematic effects in the maps are small compared to either the noise or the celestial signal: the 95% confidence upper limit for the pixel-pixel rms from all identified systematics is less than 6 μK in the worst channel. A power spectrum analysis of the (A - B)/2 difference maps shows no evidence for additional undetected systematic effects.

  11. Modelling non-linear redshift-space distortions in the galaxy clustering pattern: systematic errors on the growth rate parameter

    NASA Astrophysics Data System (ADS)

    de la Torre, Sylvain; Guzzo, Luigi

    2012-11-01

    We investigate the ability of state-of-the-art redshift-space distortion models for the galaxy anisotropic two-point correlation function, ξ(r⊥, r∥), to recover precise and unbiased estimates of the linear growth rate of structure f, when applied to catalogues of galaxies characterized by a realistic bias relation. To this aim, we make use of a set of simulated catalogues at z = 0.1 and 1 with different luminosity thresholds, obtained by populating dark matter haloes from a large N-body simulation using halo occupation prescriptions. We examine the most recent developments in redshift-space distortion modelling, which account for non-linearities on both small and intermediate scales produced, respectively, by randomized motions in virialized structures and non-linear coupling between the density and velocity fields. We consider the possibility of including the linear component of galaxy bias as a free parameter and directly estimate the growth rate of structure f. Results are compared to those obtained using the standard dispersion model, over different ranges of scales. We find that the model of Taruya et al., the most sophisticated one considered in this analysis, provides in general the most unbiased estimates of the growth rate of structure, with systematic errors within ±4 per cent over a wide range of galaxy populations spanning luminosities between L > L* and L > 3L*. The scale dependence of galaxy bias plays a role on recovering unbiased estimates of f when fitting quasi-non-linear scales. Its effect is particularly severe for most luminous galaxies, for which systematic effects in the modelling might be more difficult to mitigate and have to be further investigated. Finally, we also test the impact of neglecting the presence of non-negligible velocity bias with respect to mass in the galaxy catalogues. This can produce an additional systematic error of the order of 1-3 per cent depending on the redshift, comparable to the statistical errors the we

  12. Systematic and Statistical Errors Associated with Nuclear Decay Constant Measurements Using the Counting Technique

    NASA Astrophysics Data System (ADS)

    Koltick, David; Wang, Haoyu; Liu, Shih-Chieh; Heim, Jordan; Nistor, Jonathan

    2016-03-01

    Typical nuclear decay constants are measured at the accuracy level of 10^-2. There are numerous applications, such as tests of unconventional theories, dating of materials, and long-term inventory evolution, which require decay constant accuracy at a level of 10^-4 to 10^-5. The statistical and systematic errors associated with precision measurements of decays using the counting technique are presented. Precision requires high count rates, which introduces time-dependent dead time and pile-up corrections. An approach to overcome these issues is presented by continuous recording of the detector current. Other systematic corrections include the time-dependent dead time due to background radiation, control of target motion and radiation flight-path variation due to environmental conditions, and time-dependent effects caused by scattered events. The incorporation of blind experimental techniques can help make the measurement independent of past results. A spectrometer design and data analysis approach that can accomplish these goals is reviewed. The author would like to thank TechSource, Inc. and Advanced Physics Technologies, LLC. for their support in this work.
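
    The dead-time correction referred to above is, in its simplest non-paralyzable form, n_true = n_measured / (1 − n_measured·τ). The sketch below applies it to invented rates and shows why, at high count rates, even a small uncertainty in the dead time itself produces a rate bias far above a 10^-4 to 10^-5 accuracy goal.

      import numpy as np

      def deadtime_correct(rate_measured, tau):
          """Non-paralyzable dead-time correction: n_true = n_meas / (1 - n_meas * tau)."""
          return rate_measured / (1.0 - rate_measured * tau)

      tau = 2.0e-6                      # detector dead time, s (invented)
      rate_meas = 5.0e4                 # measured count rate, counts/s (invented "high rate")

      rate_true = deadtime_correct(rate_meas, tau)
      lost_fraction = 1.0 - rate_meas / rate_true
      print(f"true rate {rate_true:.1f} c/s, {100 * lost_fraction:.1f}% of events lost to dead time")

      # sensitivity: an error in the assumed dead time propagates into a rate bias,
      # which is why it must be tightly controlled for 1e-4 level decay-constant work
      for dtau_rel in (0.01, 0.001):
          biased = deadtime_correct(rate_meas, tau * (1 + dtau_rel))
          print(f"{100 * dtau_rel:.1f}% dead-time error -> {abs(biased / rate_true - 1):.2e} rate bias")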

  13. Mapping systematic errors in helium abundance determinations using Markov Chain Monte Carlo

    SciTech Connect

    Aver, Erik; Olive, Keith A.; Skillman, Evan D. E-mail: olive@umn.edu

    2011-03-01

    Monte Carlo techniques have been used to evaluate the statistical and systematic uncertainties in the helium abundances derived from extragalactic H II regions. The helium abundance is sensitive to several physical parameters associated with the H II region. In this work, we introduce Markov Chain Monte Carlo (MCMC) methods to efficiently explore the parameter space and determine the helium abundance, the physical parameters, and the uncertainties derived from observations of metal poor nebulae. Experiments with synthetic data show that the MCMC method is superior to previous implementations (based on flux perturbation) in that it is not affected by biases due to non-physical parameter space. The MCMC analysis allows a detailed exploration of degeneracies, and, in particular, a false minimum that occurs at large values of optical depth in the He I emission lines. We demonstrate that introducing the electron temperature derived from the [O III] emission lines as a prior, in a very conservative manner, produces negligible bias and effectively eliminates the false minima occurring at large optical depth. We perform a frequentist analysis on data from several ''high quality'' systems. Likelihood plots illustrate degeneracies, asymmetries, and limits of the determination. In agreement with previous work, we find relatively large systematic errors, limiting the precision of the primordial helium abundance for currently available spectra.

  14. Impact of instrumental systematic errors on fine-structure constant measurements with quasar spectra

    NASA Astrophysics Data System (ADS)

    Whitmore, Jonathan B.; Murphy, Michael T.

    2015-02-01

    We present a new `supercalibration' technique for measuring systematic distortions in the wavelength scales of high-resolution spectrographs. By comparing spectra of `solar twin' stars or asteroids with a reference laboratory solar spectrum, distortions in the standard thorium-argon calibration can be tracked with ~10 m s^-1 precision over the entire optical wavelength range on scales of both echelle orders (~50-100 Å) and entire spectrograph arms (~1000-3000 Å). Using archival spectra from the past 20 yr, we have probed the supercalibration history of the Very Large Telescope-Ultraviolet and Visible Echelle Spectrograph (VLT-UVES) and Keck-High Resolution Echelle Spectrograph (HIRES) spectrographs. We find that systematic errors in their wavelength scales are ubiquitous and substantial, with long-range distortions varying between typically ±200 m s^-1 per 1000 Å. We apply a simple model of these distortions to simulated spectra that characterize the large UVES and HIRES quasar samples which previously indicated possible evidence for cosmological variations in the fine-structure constant, α. The spurious deviations in α produced by the model closely match important aspects of the VLT-UVES quasar results at all redshifts and partially explain the HIRES results, though not self-consistently at all redshifts. That is, the apparent ubiquity, size and general characteristics of the distortions are capable of significantly weakening the evidence for variations in α from quasar absorption lines.
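
    The "simple model of these distortions" can be pictured as a velocity offset that varies linearly with wavelength on the calibration scale. The sketch below applies such a distortion, with slopes of the magnitude quoted above, to two placeholder line wavelengths and prints the differential shift between them, which is the kind of error that maps onto a spurious variation in α; the wavelengths and slopes are illustrative, not taken from the paper.

      import numpy as np

      C_KMS = 299792.458  # speed of light, km/s

      def distorted_wavelength(lam, slope_ms_per_1000A, lam_ref=5000.0):
          """Apply a long-range linear velocity distortion v(lambda) to a wavelength scale.

          slope is in m/s per 1000 Angstrom, the units used for the supercalibration results.
          """
          v_ms = slope_ms_per_1000A * (lam - lam_ref) / 1000.0
          return lam * (1.0 + v_ms / (C_KMS * 1000.0))

      # two placeholder absorption-line wavelengths as observed in a quasar spectrum
      lam_obs = np.array([3875.0, 6500.0])            # Angstrom, invented
      for slope in (100.0, 200.0):                    # m/s per 1000 A, within the quoted range
          lam_dist = distorted_wavelength(lam_obs, slope)
          dv = (lam_dist / lam_obs - 1.0) * C_KMS * 1000.0
          print(f"slope {slope:5.1f} m/s/1000A -> line shifts {dv.round(1)} m/s, "
                f"differential {dv[1] - dv[0]:.1f} m/s")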

  15. X-ray optics metrology limited by random noise, instrumental drifts, and systematic errors

    SciTech Connect

    Yashchuk, Valeriy V.; Anderson, Erik H.; Barber, Samuel K.; Cambie, Rossana; Celestre, Richard; Conley, Raymond; Goldberg, Kenneth A.; McKinney, Wayne R.; Morrison, Gregory; Takacs, Peter Z.; Voronov, Dmitriy L.; Yuan, Sheng; Padmore, Howard A.

    2010-07-09

    Continuous, large-scale efforts to improve and develop third- and fourth-generation synchrotron radiation light sources for unprecedented high-brightness, low emittance, and coherent x-ray beams demand diffracting and reflecting x-ray optics suitable for micro- and nano-focusing, brightness preservation, and super high resolution. One of the major impediments to the development of x-ray optics with the required beamline performance comes from the inadequate present level of optical and at-wavelength metrology and insufficient integration of the metrology into the fabrication process and into beamlines. Based on our experience at the ALS Optical Metrology Laboratory, we review the experimental methods and techniques that allow us to mitigate significant optical metrology problems related to random, systematic, and drift errors with super-high-quality x-ray optics. Measurement errors below 0.2 μrad have become routine. We present recent results from the ALS of temperature-stabilized nano-focusing optics and dedicated at-wavelength metrology. The international effort to develop a next-generation Optical Slope Measuring System (OSMS) to address these problems is also discussed. Finally, we analyze the remaining obstacles to further improvement of beamline x-ray optics and dedicated metrology, and highlight the ways we see to overcome the problems.

  16. IMRT optimization including random and systematic geometric errors based on the expectation of TCP and NTCP.

    PubMed

    Witte, Marnix G; van der Geer, Joris; Schneider, Christoph; Lebesque, Joos V; Alber, Markus; van Herk, Marcel

    2007-09-01

    The purpose of this work was the development of a probabilistic planning method with biological cost functions that does not require the definition of margins. Geometrical uncertainties were integrated in tumor control probability (TCP) and normal tissue complication probability (NTCP) objective functions for inverse planning. For efficiency reasons random errors were included by blurring the dose distribution and systematic errors by shifting structures with respect to the dose. Treatment plans were made for 19 prostate patients following four inverse strategies: Conformal with homogeneous dose to the planning target volume (PTV), a simultaneous integrated boost using a second PTV, optimization using TCP and NTCP functions together with a PTV, and probabilistic TCP and NTCP optimization for the clinical target volume without PTV. The resulting plans were evaluated by independent Monte Carlo simulation of many possible treatment histories including geometrical uncertainties. The results showed that the probabilistic optimization technique reduced the rectal wall volume receiving high dose, while at the same time increasing the dose to the clinical target volume. Without sacrificing the expected local control rate, the expected rectum toxicity could be reduced by 50% relative to the boost technique. The improvement over the conformal technique was larger yet. The margin based biological technique led to toxicity in between the boost and probabilistic techniques, but its control rates were very variable and relatively low. During evaluations, the sensitivity of the local control probability to variations in biological parameters appeared similar for all four strategies. The sensitivity to variations of the geometrical error distributions was strongest for the probabilistic technique. It is concluded that probabilistic optimization based on tumor control probability and normal tissue complication probability is feasible. It results in robust prostate treatment plans
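
    The two mechanisms used in the probabilistic objective, blurring the dose for random errors and shifting structures relative to the dose for systematic errors, can be sketched with standard array operations. The 1-D dose profile, error magnitudes and the simple expected-mean-dose figure of merit below are invented stand-ins for the TCP/NTCP machinery of the paper.

      import numpy as np
      from scipy.ndimage import gaussian_filter1d, shift as nd_shift

      # toy 1-D dose profile (Gy) on a 2 mm grid, with a 70 Gy plateau over the target region
      grid_mm = 2.0
      dose = np.zeros(100)
      dose[40:60] = 70.0
      dose = gaussian_filter1d(dose, sigma=2.0)          # beam penumbra

      # random (execution) errors: blur the dose with the SD of the daily set-up error
      sigma_random_mm = 3.0
      dose_blurred = gaussian_filter1d(dose, sigma=sigma_random_mm / grid_mm)

      # systematic (preparation) errors: shift the structure relative to the dose and average
      target_mask = np.zeros(100)
      target_mask[45:55] = 1.0
      sigma_syst_mm = 3.0
      shifts_mm = np.linspace(-9, 9, 61)                 # sample the systematic-error distribution
      weights = np.exp(-0.5 * (shifts_mm / sigma_syst_mm) ** 2)
      weights /= weights.sum()

      expected_mean_dose = 0.0
      for s_mm, w in zip(shifts_mm, weights):
          shifted_mask = nd_shift(target_mask, s_mm / grid_mm, order=1, mode="constant")
          expected_mean_dose += w * (dose_blurred * shifted_mask).sum() / shifted_mask.sum()

      print(f"static target mean dose: {(dose * target_mask).sum() / target_mask.sum():.1f} Gy")
      print(f"expected mean dose (random + systematic errors): {expected_mean_dose:.1f} Gy")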

  17. Defense Additive Manufacturing: DOD Needs to Systematically Track Department-wide 3D Printing Efforts

    DTIC Science & Technology

    2015-10-01

    [Only extraction fragments of this report are available: "The Navy installed a 3D printer aboard the USS Essex to demonstrate the ability to additively develop and produce..."; "the Navy plans to install 3D printers on two additional..."; the remaining fragments repeat the report title.]

  18. The Additional Error of Inertial Sensors Induced by Hypersonic Flight Conditions

    PubMed Central

    Karachun, Volodimir; Mel’nick, Viktorij; Korobiichuk, Igor; Nowicki, Michał; Szewczyk, Roman; Kobzar, Svitlana

    2016-01-01

    The emergence of hypersonic technology poses a new challenge for inertial navigation sensors, widely used in the aerospace industry. The main problems are extremely high temperatures, vibration of the fuselage, penetrating acoustic radiation and shock N-waves. The nature of the additional errors of the gyroscopic inertial sensor with hydrostatic suspension components under operating conditions generated by forced precession of the movable part of the suspension due to diffraction phenomena in acoustic fields is explained. The cause of the disturbing moments in the form of the Coriolis inertia forces during the transition of the suspension surface into the category of impedance is revealed. The boundaries of occurrence of the features on the resonance wave match are described. The values of the “false” angular velocity as a result of the elastic-stress state of suspension in the acoustic fields are determined. PMID:26927122

  19. The Additional Error of Inertial Sensors Induced by Hypersonic Flight Conditions.

    PubMed

    Karachun, Volodimir; Mel'nick, Viktorij; Korobiichuk, Igor; Nowicki, Michał; Szewczyk, Roman; Kobzar, Svitlana

    2016-02-26

    The emergence of hypersonic technology poses a new challenge for inertial navigation sensors, widely used in the aerospace industry. The main problems are extremely high temperatures, vibration of the fuselage, penetrating acoustic radiation and shock N-waves. The nature of the additional errors of the gyroscopic inertial sensor with hydrostatic suspension components under operating conditions generated by forced precession of the movable part of the suspension due to diffraction phenomena in acoustic fields is explained. The cause of the disturbing moments in the form of the Coriolis inertia forces during the transition of the suspension surface into the category of impedance is revealed. The boundaries of occurrence of the features on the resonance wave match are described. The values of the "false" angular velocity as a result of the elastic-stress state of suspension in the acoustic fields are determined.

  20. Systematic Errors in Low-latency Gravitational Wave Parameter Estimation Impact Electromagnetic Follow-up Observations

    NASA Astrophysics Data System (ADS)

    Littenberg, Tyson B.; Farr, Ben; Coughlin, Scott; Kalogera, Vicky

    2016-03-01

    Among the most eagerly anticipated opportunities made possible by Advanced LIGO/Virgo are multimessenger observations of compact mergers. Optical counterparts may be short-lived, so rapid characterization of gravitational wave (GW) events is paramount for discovering electromagnetic signatures. One way to meet the demand for rapid GW parameter estimation is to trade off accuracy for speed, using waveform models with simplified treatment of the compact objects’ spin. We report on the systematic errors in GW parameter estimation suffered when using different spin approximations to recover generic signals. Component mass measurements can be biased by > 5σ using simple-precession waveforms and in excess of 20σ when non-spinning templates are employed. This suggests that electromagnetic observing campaigns should not take a strict approach to selecting which LIGO/Virgo candidates warrant follow-up observations based on low-latency mass estimates. For sky localization, we find that searched areas are up to a factor of ∼ 2 larger for non-spinning analyses, and are systematically larger for any of the simplified waveforms considered in our analysis. Distance biases for the non-precessing waveforms can be in excess of 100% and are largest when the spin angular momenta are in the orbital plane of the binary. We confirm that spin-aligned waveforms should be used for low-latency parameter estimation at the minimum. Including simple precession, though more computationally costly, mitigates biases except for signals with extreme precession effects. Our results shine a spotlight on the critical need for development of computationally inexpensive precessing waveforms and/or massively parallel algorithms for parameter estimation.

  1. Towards eliminating systematic errors caused by the experimental conditions in Biochemical Methane Potential (BMP) tests

    SciTech Connect

    Strömberg, Sten; Nistor, Mihaela; Liu, Jing

    2014-11-15

Highlights: • The evaluated factors introduce significant systematic errors (10–38%) in BMP tests. • Ambient temperature (T) has the most substantial impact (∼10%) at low altitude. • Ambient pressure (p) has the most substantial impact (∼68%) at high altitude. • Continuous monitoring of T and p is not necessary for kinetic calculations. - Abstract: The Biochemical Methane Potential (BMP) test is increasingly recognised as a tool for selecting and pricing biomass material for production of biogas. However, the results for the same substrate often differ between laboratories and much work to standardise such tests is still needed. In the current study, the effects from four environmental factors (i.e. ambient temperature and pressure, water vapour content and initial gas composition of the reactor headspace) on the degradation kinetics and the determined methane potential were evaluated with a 2⁴ full factorial design. Four substrates, with different biodegradation profiles, were investigated and the ambient temperature was found to be the most significant contributor to errors in the methane potential. Concerning the kinetics of the process, the environmental factors’ impact on the calculated rate constants was negligible. The impact of the environmental factors on the kinetic parameters and methane potential from performing a BMP test at different geographical locations around the world was simulated by adjusting the data according to the ambient temperature and pressure of some chosen model sites. The largest effect on the methane potential was registered from tests performed at high altitudes due to a low ambient pressure. The results from this study illustrate the importance of considering the environmental factors’ influence on volumetric gas measurement in BMP tests. This is essential to achieve trustworthy and standardised results that can be used by researchers and end users from all over the world.
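
    To make the design concrete, the sketch below (not taken from the study; factor names, levels, and response values are invented) enumerates the 16 runs of a 2⁴ full factorial design and estimates main effects in the usual way:

      # Illustrative sketch only: 2^4 full factorial design and main-effect estimates.
      # The synthetic response stands in for measured methane potentials.
      from itertools import product
      import numpy as np

      factors = ["ambient_temperature", "ambient_pressure",
                 "water_vapour_content", "headspace_composition"]

      # Coded levels: -1 (low) and +1 (high) for each factor -> 16 runs
      design = np.array(list(product([-1, 1], repeat=len(factors))))

      rng = np.random.default_rng(0)
      # Fake BMP responses with assumed effects of the first two factors
      response = 300 + 15 * design[:, 0] + 5 * design[:, 1] + rng.normal(0, 2, 16)

      # Main effect of a factor = mean(response at +1) - mean(response at -1)
      for j, name in enumerate(factors):
          effect = response[design[:, j] == 1].mean() - response[design[:, j] == -1].mean()
          print(f"{name:24s} main effect: {effect:+.1f}")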

  2. Derivation and Application of a Global Albedo yielding an Optical Brightness To Physical Size Transformation Free of Systematic Errors

    NASA Technical Reports Server (NTRS)

    Mulrooney, Dr. Mark K.; Matney, Dr. Mark J.

    2007-01-01

    Orbital object data acquired via optical telescopes can play a crucial role in accurately defining the space environment. Radar systems probe the characteristics of small debris by measuring the reflected electromagnetic energy from an object of the same order of size as the wavelength of the radiation. This signal is affected by electrical conductivity of the bulk of the debris object, as well as its shape and orientation. Optical measurements use reflected solar radiation with wavelengths much smaller than the size of the objects. Just as with radar, the shape and orientation of an object are important, but we only need to consider the surface electrical properties of the debris material (i.e., the surface albedo), not the bulk electromagnetic properties. As a result, these two methods are complementary in that they measure somewhat independent physical properties to estimate the same thing, debris size. Short arc optical observations such as are typical of NASA's Liquid Mirror Telescope (LMT) give enough information to estimate an Assumed Circular Orbit (ACO) and an associated range. This information, combined with the apparent magnitude, can be used to estimate an "absolute" brightness (scaled to a fixed range and phase angle). This absolute magnitude is what is used to estimate debris size. However, the shape and surface albedo effects make the size estimates subject to systematic and random errors, such that it is impossible to ascertain the size of an individual object with any certainty. However, as has been shown with radar debris measurements, that does not preclude the ability to estimate the size distribution of a number of objects statistically. After systematic errors have been eliminated (range errors, phase function assumptions, photometry) there remains a random geometric albedo distribution that relates object size to absolute magnitude. Measurements by the LMT of a subset of tracked debris objects with sizes estimated from their radar cross

  3. Standard addition/absorption detection microfluidic system for salt error-free nitrite determination.

    PubMed

    Ahn, Jae-Hoon; Jo, Kyoung Ho; Hahn, Jong Hoon

    2015-07-30

A continuous-flow microfluidic chip-based standard addition/absorption detection system has been developed for accurate determination of nitrite in water of varying salinity. The absorption detection of nitrite is made via color development using the Griess reaction. We have found the yield of the reaction is significantly affected by salinity (e.g., -12% error for a 30‰ NaCl, 50.0 μg L⁻¹ N-NO₂⁻ solution). The microchip has been designed to perform standard addition, color development, and absorbance detection in sequence. To effectively block stray light, the microchip made from black poly(dimethylsiloxane) is placed on the top of a compact housing that accommodates a light-emitting diode, a photomultiplier tube, and an interference filter, where the light source and the detector are optically isolated. An 80-mm liquid-core waveguide mounted on the chip externally has been employed as the absorption detection flow cell. These designs for optics secure a wide linear response range (up to 500 μg L⁻¹ N-NO₂⁻) and a low detection limit (0.12 μg L⁻¹ N-NO₂⁻ = 8.6 nM N-NO₂⁻, S/N = 3). From determination of nitrite in standard samples and real samples collected from an estuary, it has been demonstrated that our microfluidic system is highly accurate (<1% RSD, n = 3) and precise (<1% RSD, n = 3).
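
    As a purely illustrative aside, the standard-addition principle that the chip automates can be sketched in a few lines; the spike levels and absorbances below are made up, and the microfluidic hardware itself is not modelled:

      # Standard-addition sketch: fit absorbance vs. spiked concentration, then
      # extrapolate to zero absorbance; the x-intercept magnitude gives the
      # nitrite already present in the sample.
      import numpy as np

      added = np.array([0.0, 50.0, 100.0, 150.0])           # spiked N-NO2-, ug/L (assumed)
      absorbance = np.array([0.082, 0.163, 0.245, 0.328])   # after Griess reaction (assumed)

      slope, intercept = np.polyfit(added, absorbance, 1)
      c_sample = intercept / slope
      print(f"estimated nitrite concentration: {c_sample:.1f} ug/L")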

  4. Evaluating IMRT and VMAT dose accuracy: Practical examples of failure to detect systematic errors when applying a commonly used metric and action levels

    SciTech Connect

    Nelms, Benjamin E.; Chan, Maria F.; Jarry, Geneviève; Lemire, Matthieu; Lowden, John; Hampton, Carnell

    2013-11-15

    correctable after detection and diagnosis, and the uncorrectable errors provided useful information about system limitations, which is another key element of system commissioning.Conclusions: Many forms of relevant systematic errors can go undetected when the currently prevalent metrics for IMRT/VMAT commissioning are used. If alternative methods and metrics are used instead of (or in addition to) the conventional metrics, these errors are more likely to be detected, and only once they are detected can they be properly diagnosed and rooted out of the system. Removing systematic errors should be a goal not only of commissioning by the end users but also product validation by the manufacturers. For any systematic errors that cannot be removed, detecting and quantifying them is important as it will help the physicist understand the limits of the system and work with the manufacturer on improvements. In summary, IMRT and VMAT commissioning, along with product validation, would benefit from the retirement of the 3%/3 mm passing rates as a primary metric of performance, and the adoption instead of tighter tolerances, more diligent diagnostics, and more thorough analysis.

  5. UNDERSTANDING SYSTEMATIC MEASUREMENT ERROR IN THERMAL-OPTICAL ANALYSIS FOR PM BLACK CARBON USING RESPONSE SURFACES AND SURFACE CONFIDENCE INTERVALS

    EPA Science Inventory

    Results from a NIST-EPA Interagency Agreement on Understanding Systematic Measurement Error in Thermal-Optical Analysis for PM Black Carbon Using Response Surfaces and Surface Confidence Intervals will be presented at the American Association for Aerosol Research (AAAR) 24th Annu...

  6. Analysis of systematic errors of the ASM/RXTE monitor and GT-48 γ-ray telescope

    NASA Astrophysics Data System (ADS)

    Fidelis, V. V.

    2011-06-01

Observational data on variations in the light curves of the supernova remnants Crab Nebula, Cassiopeia A, and Tycho Brahe, and of the Vela pulsar, on a 14-day scale, that may be attributed to systematic errors of the ASM/RXTE monitor are presented. The experimental systematic errors of the GT-48 γ-ray telescope in the mono mode of operation were also determined. For this, the observational data of TeV J2032+4130 (Cyg γ-2, according to the Crimean version) were used; the stationary nature of its γ-ray emission has been confirmed by long-term observations performed with HEGRA and MAGIC. The results allow us to draw the following conclusions: (1) light curves of supernova remnants averaged over long observing periods show false statistically significant flux variations, (2) the level of systematic errors is proportional to the registered flux and decreases with increasing temporal scale of averaging, (3) the light curves of sources may be modulated with a one-year period, and (4) the systematic errors of the GT-48 γ-ray telescope, caused by observations in the mono mode and data processing with the stereo algorithm, amount to 0.12 min⁻¹.

  7. Measurements of Intrahost Viral Diversity Are Extremely Sensitive to Systematic Errors in Variant Calling

    PubMed Central

    McCrone, John T.

    2016-01-01

    ABSTRACT With next-generation sequencing technologies, it is now feasible to efficiently sequence patient-derived virus populations at a depth of coverage sufficient to detect rare variants. However, each sequencing platform has characteristic error profiles, and sample collection, target amplification, and library preparation are additional processes whereby errors are introduced and propagated. Many studies account for these errors by using ad hoc quality thresholds and/or previously published statistical algorithms. Despite common usage, the majority of these approaches have not been validated under conditions that characterize many studies of intrahost diversity. Here, we use defined populations of influenza virus to mimic the diversity and titer typically found in patient-derived samples. We identified single-nucleotide variants using two commonly employed variant callers, DeepSNV and LoFreq. We found that the accuracy of these variant callers was lower than expected and exquisitely sensitive to the input titer. Small reductions in specificity had a significant impact on the number of minority variants identified and subsequent measures of diversity. We were able to increase the specificity of DeepSNV to >99.95% by applying an empirically validated set of quality thresholds. When applied to a set of influenza virus samples from a household-based cohort study, these changes resulted in a 10-fold reduction in measurements of viral diversity. We have made our sequence data and analysis code available so that others may improve on our work and use our data set to benchmark their own bioinformatics pipelines. Our work demonstrates that inadequate quality control and validation can lead to significant overestimation of intrahost diversity. IMPORTANCE Advances in sequencing technology have made it feasible to sequence patient-derived viral samples at a level sufficient for detection of rare mutations. These high-throughput, cost-effective methods are revolutionizing

  8. Basis set limit and systematic errors in local-orbital based all-electron DFT

    NASA Astrophysics Data System (ADS)

    Blum, Volker; Behler, Jörg; Gehrke, Ralf; Reuter, Karsten; Scheffler, Matthias

    2006-03-01

    With the advent of efficient integration schemes,^1,2 numeric atom-centered orbitals (NAO's) are an attractive basis choice in practical density functional theory (DFT) calculations of nanostructured systems (surfaces, clusters, molecules). Though all-electron, the efficiency of practical implementations promises to be on par with the best plane-wave pseudopotential codes, while having a noticeably higher accuracy if required: Minimal-sized effective tight-binding like calculations and chemically accurate all-electron calculations are both possible within the same framework; non-periodic and periodic systems can be treated on equal footing; and the localized nature of the basis allows in principle for O(N)-like scaling. However, converging an observable with respect to the basis set is less straightforward than with competing systematic basis choices (e.g., plane waves). We here investigate the basis set limit of optimized NAO basis sets in all-electron calculations, using as examples small molecules and clusters (N2, Cu2, Cu4, Cu10). meV-level total energy convergence is possible using <=50 basis functions per atom in all cases. We also find a clear correlation between the errors which arise from underconverged basis sets, and the system geometry (interatomic distance). ^1 B. Delley, J. Chem. Phys. 92, 508 (1990), ^2 J.M. Soler et al., J. Phys.: Condens. Matter 14, 2745 (2002).

  9. Comparison of weak lensing by NFW and Einasto halos and systematic errors

    SciTech Connect

    Sereno, Mauro; Moscardini, Lauro

    2016-01-01

Recent N-body simulations have shown that Einasto radial profiles provide the most accurate description of dark matter halos. Predictions based on the traditional NFW functional form may fail to describe the structural properties of cosmic objects at the percent level required by precision cosmology. We computed the systematic errors expected for weak lensing analyses of clusters of galaxies if one wrongly models the lens density profile. Even though the NFW fits of observed tangential shear profiles can be excellent, virial masses and concentrations of very massive halos (≳ 10^15 M⊙/h) can be over- and underestimated by 0∼ 1 per cent, respectively. Misfitting effects also steepen the observed mass-concentration relation, as observed in multi-wavelength observations of galaxy groups and clusters. Based on shear analyses, Einasto and NFW halos can be set apart either with deep observations of exceptionally massive structures (≳ 2×10^15 M⊙/h) or by stacking the shear profiles of thousands of group-sized lenses (≳ 10^14 M⊙/h).

  10. An online model correction method based on an inverse problem: Part II—systematic model error correction

    NASA Astrophysics Data System (ADS)

    Xue, Haile; Shen, Xueshun; Chou, Jifan

    2015-11-01

    An online systematic error correction is presented and examined as a technique to improve the accuracy of real-time numerical weather prediction, based on the dataset of model errors (MEs) in past intervals. Given the analyses, the ME in each interval (6 h) between two analyses can be iteratively obtained by introducing an unknown tendency term into the prediction equation, shown in Part I of this two-paper series. In this part, after analyzing the 5-year (2001-2005) GRAPES-GFS (Global Forecast System of the Global and Regional Assimilation and Prediction System) error patterns and evolution, a systematic model error correction is given based on the least-squares approach by firstly using the past MEs. To test the correction, we applied the approach in GRAPES-GFS for July 2009 and January 2010. The datasets associated with the initial condition and SST used in this study were based on NCEP (National Centers for Environmental Prediction) FNL (final) data. The results indicated that the Northern Hemispheric systematically underestimated equator-to-pole geopotential gradient and westerly wind of GRAPES-GFS were largely enhanced, and the biases of temperature and wind in the tropics were strongly reduced. Therefore, the correction results in a more skillful forecast with lower mean bias and root-mean-square error and higher anomaly correlation coefficient.
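
    A minimal sketch of the underlying idea, under the simplifying assumption that the systematic error is a constant tendency per 6-h interval; the grid size, archive length, and fields below are placeholders, not GRAPES-GFS data:

      # Estimate a constant per-gridpoint systematic tendency by least squares from
      # archived model-minus-analysis errors, then subtract it from a new forecast.
      import numpy as np

      n_intervals, n_grid = 200, 5000                 # hypothetical archive dimensions
      rng = np.random.default_rng(1)
      true_bias = rng.normal(0, 0.5, n_grid)          # per-gridpoint systematic tendency
      past_errors = true_bias + rng.normal(0, 1.0, (n_intervals, n_grid))

      # For a constant-in-time error model, the least-squares estimate is the time mean
      bias_hat = past_errors.mean(axis=0)

      raw_forecast = rng.normal(280, 10, n_grid)      # stand-in temperature field
      corrected = raw_forecast - bias_hat             # online correction for this interval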

  11. The Thirty Gigahertz Instrument Receiver for the QUIJOTE Experiment: Preliminary Polarization Measurements and Systematic-Error Analysis

    PubMed Central

    Casas, Francisco J.; Ortiz, David; Villa, Enrique; Cano, Juan L.; Cagigas, Jaime; Pérez, Ana R.; Aja, Beatriz; Terán, J. Vicente; de la Fuente, Luisa; Artal, Eduardo; Hoyland, Roger; Génova-Santos, Ricardo

    2015-01-01

    This paper presents preliminary polarization measurements and systematic-error characterization of the Thirty Gigahertz Instrument receiver developed for the QUIJOTE experiment. The instrument has been designed to measure the polarization of Cosmic Microwave Background radiation from the sky, obtaining the Q, U, and I Stokes parameters of the incoming signal simultaneously. Two kinds of linearly polarized input signals have been used as excitations in the polarimeter measurement tests in the laboratory; these show consistent results in terms of the Stokes parameters obtained. A measurement-based systematic-error characterization technique has been used in order to determine the possible sources of instrumental errors and to assist in the polarimeter calibration process. PMID:26251906

  12. Assessment of Systematic Chromatic Errors that Impact Sub-1% Photometric Precision in Large-area Sky Surveys

    NASA Astrophysics Data System (ADS)

    Li, T. S.; DePoy, D. L.; Marshall, J. L.; Tucker, D.; Kessler, R.; Annis, J.; Bernstein, G. M.; Boada, S.; Burke, D. L.; Finley, D. A.; James, D. J.; Kent, S.; Lin, H.; Marriner, J.; Mondrik, N.; Nagasawa, D.; Rykoff, E. S.; Scolnic, D.; Walker, A. R.; Wester, W.; Abbott, T. M. C.; Allam, S.; Benoit-Lévy, A.; Bertin, E.; Brooks, D.; Capozzi, D.; Carnero Rosell, A.; Carrasco Kind, M.; Carretero, J.; Crocce, M.; Cunha, C. E.; D'Andrea, C. B.; da Costa, L. N.; Desai, S.; Diehl, H. T.; Doel, P.; Flaugher, B.; Fosalba, P.; Frieman, J.; Gaztanaga, E.; Goldstein, D. A.; Gruen, D.; Gruendl, R. A.; Gutierrez, G.; Honscheid, K.; Kuehn, K.; Kuropatkin, N.; Maia, M. A. G.; Melchior, P.; Miller, C. J.; Miquel, R.; Mohr, J. J.; Neilsen, E.; Nichol, R. C.; Nord, B.; Ogando, R.; Plazas, A. A.; Romer, A. K.; Roodman, A.; Sako, M.; Sanchez, E.; Scarpine, V.; Schubnell, M.; Sevilla-Noarbe, I.; Smith, R. C.; Soares-Santos, M.; Sobreira, F.; Suchyta, E.; Tarle, G.; Thomas, D.; Vikram, V.; DES Collaboration

    2016-06-01

    Meeting the science goals for many current and future ground-based optical large-area sky surveys requires that the calibrated broadband photometry is both stable in time and uniform over the sky to 1% precision or better. Past and current surveys have achieved photometric precision of 1%-2% by calibrating the survey’s stellar photometry with repeated measurements of a large number of stars observed in multiple epochs. The calibration techniques employed by these surveys only consider the relative frame-by-frame photometric zeropoint offset and the focal plane position-dependent illumination corrections, which are independent of the source color. However, variations in the wavelength dependence of the atmospheric transmission and the instrumental throughput induce source color-dependent systematic errors. These systematic errors must also be considered to achieve the most precise photometric measurements. In this paper, we examine such systematic chromatic errors (SCEs) using photometry from the Dark Energy Survey (DES) as an example. We first define a natural magnitude system for DES and calculate the systematic errors on stellar magnitudes when the atmospheric transmission and instrumental throughput deviate from the natural system. We conclude that the SCEs caused by the change of airmass in each exposure, the change of the precipitable water vapor and aerosol in the atmosphere over time, and the non-uniformity of instrumental throughput over the focal plane can be up to 2% in some bandpasses. We then compare the calculated SCEs with the observed DES data. For the test sample data, we correct these errors using measurements of the atmospheric transmission and instrumental throughput from auxiliary calibration systems. The residual after correction is less than 0.3%. Moreover, we calculate such SCEs for Type Ia supernovae and elliptical galaxies and find that the chromatic errors for non-stellar objects are redshift-dependent and can be larger than those for

  13. SU-E-T-550: Range Effects in Proton Therapy Caused by Systematic Errors in the Stoichiometric Calibration

    SciTech Connect

    Doolan, P; Dias, M; Collins Fekete, C; Seco, J

    2014-06-01

    Purpose: The procedure for proton treatment planning involves the conversion of the patient's X-ray CT from Hounsfield units into relative stopping powers (RSP), using a stoichiometric calibration curve (Schneider 1996). In clinical practice a 3.5% margin is added to account for the range uncertainty introduced by this process and other errors. RSPs for real tissues are calculated using composition data and the Bethe-Bloch formula (ICRU 1993). The purpose of this work is to investigate the impact that systematic errors in the stoichiometric calibration have on the proton range. Methods: Seven tissue inserts of the Gammex 467 phantom were imaged using our CT scanner. Their known chemical compositions (Watanabe 1999) were then used to calculate the theoretical RSPs, using the same formula as would be used for human tissues in the stoichiometric procedure. The actual RSPs of these inserts were measured using a Bragg peak shift measurement in the proton beam at our institution. Results: The theoretical calculation of the RSP was lower than the measured RSP values, by a mean/max error of - 1.5/-3.6%. For all seven inserts the theoretical approach underestimated the RSP, with errors variable across the range of Hounsfield units. Systematic errors for lung (average of two inserts), adipose and cortical bone were - 3.0/-2.1/-0.5%, respectively. Conclusion: There is a systematic underestimation caused by the theoretical calculation of RSP; a crucial step in the stoichiometric calibration procedure. As such, we propose that proton calibration curves should be based on measured RSPs. Investigations will be made to see if the same systematic errors exist for biological tissues. The impact of these differences on the range of proton beams, for phantoms and patient scenarios, will be investigated. This project was funded equally by the Engineering and Physical Sciences Research Council (UK) and Ion Beam Applications (Louvain-La-Neuve, Belgium)

  14. Assessment of Systematic Chromatic Errors that Impact Sub-1% Photometric Precision in Large-Area Sky Surveys

    DOE PAGES

    Li, T. S.; DePoy, D. L.; Marshall, J. L.; ...

    2016-06-01

Here, we report that meeting the science goals for many current and future ground-based optical large-area sky surveys requires that the calibrated broadband photometry is both stable in time and uniform over the sky to 1% precision or better. Past and current surveys have achieved photometric precision of 1%–2% by calibrating the survey's stellar photometry with repeated measurements of a large number of stars observed in multiple epochs. The calibration techniques employed by these surveys only consider the relative frame-by-frame photometric zeropoint offset and the focal plane position-dependent illumination corrections, which are independent of the source color. However, variations in the wavelength dependence of the atmospheric transmission and the instrumental throughput induce source color-dependent systematic errors. These systematic errors must also be considered to achieve the most precise photometric measurements. In this paper, we examine such systematic chromatic errors (SCEs) using photometry from the Dark Energy Survey (DES) as an example. We first define a natural magnitude system for DES and calculate the systematic errors on stellar magnitudes when the atmospheric transmission and instrumental throughput deviate from the natural system. We conclude that the SCEs caused by the change of airmass in each exposure, the change of the precipitable water vapor and aerosol in the atmosphere over time, and the non-uniformity of instrumental throughput over the focal plane can be up to 2% in some bandpasses. We then compare the calculated SCEs with the observed DES data. For the test sample data, we correct these errors using measurements of the atmospheric transmission and instrumental throughput from auxiliary calibration systems. In conclusion, the residual after correction is less than 0.3%. Moreover, we calculate such SCEs for Type Ia supernovae and elliptical galaxies and find that the chromatic errors for non-stellar objects are redshift

  15. Assessment of Systematic Chromatic Errors that Impact Sub-1% Photometric Precision in Large-Area Sky Surveys

    SciTech Connect

    Li, T. S.; DePoy, D. L.; Marshall, J. L.; Tucker, D.; Kessler, R.; Annis, J.; Bernstein, G. M.; Boada, S.; Burke, D. L.; Finley, D. A.; James, D. J.; Kent, S.; Lin, H.; Marriner, J.; Mondrik, N.; Nagasawa, D.; Rykoff, E. S.; Scolnic, D.; Walker, A. R.; Wester, W.; Abbott, T. M. C.; Allam, S.; Benoit-Lévy, A.; Bertin, E.; Brooks, D.; Capozzi, D.; Rosell, A. Carnero; Kind, M. Carrasco; Carretero, J.; Crocce, M.; Cunha, C. E.; D’Andrea, C. B.; Costa, L. N. da; Desai, S.; Diehl, H. T.; Doel, P.; Flaugher, B.; Fosalba, P.; Frieman, J.; Gaztanaga, E.; Goldstein, D. A.; Gruen, D.; Gruendl, R. A.; Gutierrez, G.; Honscheid, K.; Kuehn, K.; Kuropatkin, N.; Maia, M. A. G.; Melchior, P.; Miller, C. J.; Miquel, R.; Mohr, J. J.; Neilsen, E.; Nichol, R. C.; Nord, B.; Ogando, R.; Plazas, A. A.; Romer, A. K.; Roodman, A.; Sako, M.; Sanchez, E.; Scarpine, V.; Schubnell, M.; Sevilla-Noarbe, I.; Smith, R. C.; Soares-Santos, M.; Sobreira, F.; Suchyta, E.; Tarle, G.; Thomas, D.; Vikram, V.

    2016-06-01

    Here, we report that meeting the science goals for many current and future ground-based optical large-area sky surveys requires that the calibrated broadband photometry is both stable in time and uniform over the sky to 1% precision or better. Past and current surveys have achieved photometric precision of 1%–2% by calibrating the survey's stellar photometry with repeated measurements of a large number of stars observed in multiple epochs. The calibration techniques employed by these surveys only consider the relative frame-by-frame photometric zeropoint offset and the focal plane position-dependent illumination corrections, which are independent of the source color. However, variations in the wavelength dependence of the atmospheric transmission and the instrumental throughput induce source color-dependent systematic errors. These systematic errors must also be considered to achieve the most precise photometric measurements. In this paper, we examine such systematic chromatic errors (SCEs) using photometry from the Dark Energy Survey (DES) as an example. We first define a natural magnitude system for DES and calculate the systematic errors on stellar magnitudes when the atmospheric transmission and instrumental throughput deviate from the natural system. We conclude that the SCEs caused by the change of airmass in each exposure, the change of the precipitable water vapor and aerosol in the atmosphere over time, and the non-uniformity of instrumental throughput over the focal plane can be up to 2% in some bandpasses. We then compare the calculated SCEs with the observed DES data. For the test sample data, we correct these errors using measurements of the atmospheric transmission and instrumental throughput from auxiliary calibration systems. In conclusion, the residual after correction is less than 0.3%. Moreover, we calculate such SCEs for Type Ia supernovae and elliptical galaxies and find that the chromatic errors for non-stellar objects are redshift

  16. Minimizing systematic errors from atmospheric multiple scattering and satellite viewing geometry in coastal zone color scanner level IIA imagery

    NASA Technical Reports Server (NTRS)

    Martin, D. L.; Perry, M. J.

    1994-01-01

Water-leaving radiances and phytoplankton pigment concentrations are calculated from coastal zone color scanner (CZCS) radiance measurements by removing atmospheric Rayleigh and aerosol radiances from the total radiance signal measured at the satellite. The single greatest source of error in CZCS atmospheric correction algorithms is the assumption that these Rayleigh and aerosol radiances are separable. Multiple-scattering interactions between Rayleigh and aerosol components cause systematic errors in calculated aerosol radiances, and the magnitude of these errors is dependent on aerosol type and optical depth and on satellite viewing geometry. A technique was developed which extends the results of previous radiative transfer modeling by Gordon and Castano to predict the magnitude of these systematic errors for simulated CZCS orbital passes in which the ocean is viewed through a modeled, physically realistic atmosphere. The simulated image mathematically duplicates the exact satellite, Sun, and pixel locations of an actual CZCS image. Errors in the aerosol radiance at 443 nm are calculated for a range of aerosol optical depths. When pixels in the simulated image exceed an error threshold, the corresponding pixels in the actual CZCS image are flagged and excluded from further analysis or from use in image compositing or compilation of pigment concentration databases. Studies based on time series analyses or compositing of CZCS imagery which do not address Rayleigh-aerosol multiple scattering should be interpreted cautiously, since the fundamental assumption used in their atmospheric correction algorithm is flawed.

  17. Assessment of the accuracy of global geodetic satellite laser ranging observations and estimated impact on ITRF scale: estimation of systematic errors in LAGEOS observations 1993-2014

    NASA Astrophysics Data System (ADS)

    Appleby, Graham; Rodríguez, José; Altamimi, Zuheir

    2016-12-01

    Satellite laser ranging (SLR) to the geodetic satellites LAGEOS and LAGEOS-2 uniquely determines the origin of the terrestrial reference frame and, jointly with very long baseline interferometry, its scale. Given such a fundamental role in satellite geodesy, it is crucial that any systematic errors in either technique are at an absolute minimum as efforts continue to realise the reference frame at millimetre levels of accuracy to meet the present and future science requirements. Here, we examine the intrinsic accuracy of SLR measurements made by tracking stations of the International Laser Ranging Service using normal point observations of the two LAGEOS satellites in the period 1993 to 2014. The approach we investigate in this paper is to compute weekly reference frame solutions solving for satellite initial state vectors, station coordinates and daily Earth orientation parameters, estimating along with these weekly average range errors for each and every one of the observing stations. Potential issues in any of the large number of SLR stations assumed to have been free of error in previous realisations of the ITRF may have been absorbed in the reference frame, primarily in station height. Likewise, systematic range errors estimated against a fixed frame that may itself suffer from accuracy issues will absorb network-wide problems into station-specific results. Our results suggest that in the past two decades, the scale of the ITRF derived from the SLR technique has been close to 0.7 ppb too small, due to systematic errors either or both in the range measurements and their treatment. We discuss these results in the context of preparations for ITRF2014 and additionally consider the impact of this work on the currently adopted value of the geocentric gravitational constant, GM.

  18. Effects of Systematic and Random Errors on the Retrieval of Particle Microphysical Properties from Multiwavelength Lidar Measurements Using Inversion with Regularization

    NASA Technical Reports Server (NTRS)

    Ramirez, Daniel Perez; Whiteman, David N.; Veselovskii, Igor; Kolgotin, Alexei; Korenskiy, Michael; Alados-Arboledas, Lucas

    2013-01-01

    In this work we study the effects of systematic and random errors on the inversion of multiwavelength (MW) lidar data using the well-known regularization technique to obtain vertically resolved aerosol microphysical properties. The software implementation used here was developed at the Physics Instrumentation Center (PIC) in Troitsk (Russia) in conjunction with the NASA/Goddard Space Flight Center. Its applicability to Raman lidar systems based on backscattering measurements at three wavelengths (355, 532 and 1064 nm) and extinction measurements at two wavelengths (355 and 532 nm) has been demonstrated widely. The systematic error sensitivity is quantified by first determining the retrieved parameters for a given set of optical input data consistent with three different sets of aerosol physical parameters. Then each optical input is perturbed by varying amounts and the inversion is repeated. Using bimodal aerosol size distributions, we find a generally linear dependence of the retrieved errors in the microphysical properties on the induced systematic errors in the optical data. For the retrievals of effective radius, number/surface/volume concentrations and fine-mode radius and volume, we find that these results are not significantly affected by the range of the constraints used in inversions. But significant sensitivity was found to the allowed range of the imaginary part of the particle refractive index. Our results also indicate that there exists an additive property for the deviations induced by the biases present in the individual optical data. This property permits the results here to be used to predict deviations in retrieved parameters when multiple input optical data are biased simultaneously as well as to study the influence of random errors on the retrievals. The above results are applied to questions regarding lidar design, in particular for the spaceborne multiwavelength lidar under consideration for the upcoming ACE mission.
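
    The "additive property" noted above can be checked numerically with a toy retrieval operator; the quadratic stand-in function below is an assumption for illustration only and is unrelated to the actual regularization code:

      # Toy check of first-order additivity: deviations caused by biasing each optical
      # channel separately should roughly sum to the deviation when all are biased together.
      import numpy as np

      def retrieval(optical_data):
          # stand-in mapping from 5 optical inputs (3 beta + 2 alpha) to one retrieved quantity
          w = np.array([0.3, 0.2, 0.1, 0.25, 0.15])
          return float(w @ optical_data + 0.05 * optical_data.sum() ** 2)

      x0 = np.array([1.0, 0.8, 0.5, 0.12, 0.07])      # unbiased optical inputs (invented)
      base = retrieval(x0)

      biases = 0.1 * x0                                # 10% systematic bias per channel
      individual = [retrieval(x0 + np.eye(5)[i] * biases[i]) - base for i in range(5)]
      combined = retrieval(x0 + biases) - base
      print(sum(individual), combined)                 # close, i.e. near-additive to first order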

  19. Multiplicative errors in the galaxy power spectrum: self-calibration of unknown photometric systematics for precision cosmology

    NASA Astrophysics Data System (ADS)

    Shafer, Daniel L.; Huterer, Dragan

    2015-03-01

    We develop a general method to `self-calibrate' observations of galaxy clustering with respect to systematics associated with photometric calibration errors. We first point out the danger posed by the multiplicative effect of calibration errors, where large-angle error propagates to small scales and may be significant even if the large-scale information is cleaned or not used in the cosmological analysis. We then propose a method to measure the arbitrary large-scale calibration errors and use these measurements to correct the small-scale (high-multipole) power which is most useful for constraining the majority of cosmological parameters. We demonstrate the effectiveness of our approach on synthetic examples and briefly discuss how it may be applied to real data.

  20. Systematic estimation of forecast and observation error covariances in four-dimensional data assimilation

    NASA Technical Reports Server (NTRS)

    Dee, D. P.; Cohn, S. E.; Ghil, M.

    1985-01-01

A two-part algorithm is presented for reliably computing weather forecast model and observational error covariances during data assimilation. Data errors arise from instrumental inaccuracies and sub-grid scale variability, whereas forecast errors occur because of modeling errors and the propagation of previous analysis errors. A Kalman filter is defined as the primary algorithm for estimating the forecast and analysis error covariance matrices. A second algorithm is described for quantifying the noise covariance matrices of any degree to obtain accurate values for the observational error covariances. Numerical results are provided from a linearized one-dimensional shallow-water model. The results cover observational noise covariances, initial instrumental errors and erroneous model values.
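
    For orientation, the covariance recursion at the core of such a Kalman filter can be written compactly; the two-variable model, observation operator, and noise levels below are invented and are not the shallow-water system used in the paper:

      # Schematic Kalman-filter covariance recursion (toy 2-variable system).
      import numpy as np

      M = np.array([[1.0, 0.1], [0.0, 1.0]])   # linear forecast model
      H = np.array([[1.0, 0.0]])               # observe the first variable only
      Q = 0.01 * np.eye(2)                     # model-error covariance (assumed)
      R = np.array([[0.25]])                   # observational-error covariance (assumed)
      Pa = np.eye(2)                           # initial analysis-error covariance

      for _ in range(10):
          Pf = M @ Pa @ M.T + Q                             # forecast error covariance
          K = Pf @ H.T @ np.linalg.inv(H @ Pf @ H.T + R)    # Kalman gain
          Pa = (np.eye(2) - K @ H) @ Pf                     # analysis error covariance
      print(Pa)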

  1. Procedures for dealing with certain types of noise and systematic errors common to many Hadamard transform optical systems

    NASA Technical Reports Server (NTRS)

    Harwit, M.

    1977-01-01

Sources of noise and error correcting procedures characteristic of Hadamard transform optical systems were investigated. Reduction of spectral noise due to noise spikes in the data, the effect of random errors, the relative performance of Fourier and Hadamard transform spectrometers operated under identical detector-noise-limited conditions, and systematic means for dealing with mask defects are among the topics discussed. The distortion in Hadamard transform optical instruments caused by moving masks, incorrect mask alignment, missing measurements, and diffraction is analyzed and techniques for reducing or eliminating this distortion are described.

  2. A Novel, Physics-Based Data Analytics Framework for Reducing Systematic Model Errors

    NASA Astrophysics Data System (ADS)

    Wu, W.; Liu, Y.; Vandenberghe, F. C.; Knievel, J. C.; Hacker, J.

    2015-12-01

Most climate and weather models exhibit systematic biases, such as underpredicted diurnal temperatures in the WRF (Weather Research and Forecasting) model. General approaches to alleviate the systematic biases include improving model physics and numerics, improving data assimilation, and bias correction through post-processing. In this study, we developed a novel, physics-based data analytics framework in post-processing by taking advantage of ever-growing high-resolution (spatial and temporal) observational and modeling data. In the framework, a spatiotemporal PCA (Principal Component Analysis) is first applied on the observational data to filter out noise and information on scales that a model may not be able to resolve. The filtered observations are then used to establish regression relationships with archived model forecasts in the same spatiotemporal domain. The regressions along with the model forecasts predict the projected observations in the forecasting period. The pre-regression PCA procedure strengthens regressions, and enhances predictive skills. We then combine the projected observations with the past observations to apply PCA iteratively to derive the final forecasts. This post-regression PCA reconstructs variances and scales of information that are lost in the regression. The framework was examined and validated with 24 days of 5-minute observational data and archives from the WRF model at 27 stations near Dugway Proving Ground, Utah. The validation shows significant bias reduction in the diurnal cycle of predicted surface air temperature compared to the direct output from the WRF model. Additionally, unlike other post-processing bias correction schemes, the data analytics framework does not require long-term historic data and model archives. A week or two of the data is enough to take into account changes in weather regimes. The program, written in python, is also computationally efficient.
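
    A rough sketch of the pre-regression PCA plus regression step, using generic tools; the station count, window length, and synthetic data below are placeholders rather than the authors' implementation:

      # Filter observations with PCA, regress archived forecasts onto the filtered
      # observations, then project observations for a new forecast period.
      import numpy as np
      from sklearn.decomposition import PCA
      from sklearn.linear_model import LinearRegression

      rng = np.random.default_rng(0)
      obs = rng.normal(size=(288, 27))                 # 24 h of 5-min observations, 27 stations
      fcst = obs + 1.5 + rng.normal(size=obs.shape)    # biased archived forecasts (synthetic)

      pca = PCA(n_components=5)                        # drop scales the model cannot resolve
      obs_filtered = pca.inverse_transform(pca.fit_transform(obs))

      reg = LinearRegression().fit(fcst, obs_filtered) # forecast -> filtered observations
      new_fcst = fcst[-12:]                            # stand-in for the next forecast period
      projected_obs = reg.predict(new_fcst)            # bias-corrected projection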

  3. Optimizing MRI-targeted fusion prostate biopsy: the effect of systematic error and anisotropy on tumor sampling

    NASA Astrophysics Data System (ADS)

    Martin, Peter R.; Cool, Derek W.; Romagnoli, Cesare; Fenster, Aaron; Ward, Aaron D.

    2015-03-01

    Magnetic resonance imaging (MRI)-targeted, 3D transrectal ultrasound (TRUS)-guided "fusion" prostate biopsy aims to reduce the 21-47% false negative rate of clinical 2D TRUS-guided sextant biopsy. Although it has been reported to double the positive yield, MRI-targeted biopsy still has a substantial false negative rate. Therefore, we propose optimization of biopsy targeting to meet the clinician's desired tumor sampling probability, optimizing needle targets within each tumor and accounting for uncertainties due to guidance system errors, image registration errors, and irregular tumor shapes. As a step toward this optimization, we obtained multiparametric MRI (mpMRI) and 3D TRUS images from 49 patients. A radiologist and radiology resident contoured 81 suspicious regions, yielding 3D surfaces that were registered to 3D TRUS. We estimated the probability, P, of obtaining a tumor sample with a single biopsy, and investigated the effects of systematic errors and anisotropy on P. Our experiments indicated that a biopsy system's lateral and elevational errors have a much greater effect on sampling probabilities, relative to its axial error. We have also determined that for a system with RMS error of 3.5 mm, tumors of volume 1.9 cm3 and smaller may require more than one biopsy core to ensure 95% probability of a sample with 50% core involvement, and tumors 1.0 cm3 and smaller may require more than two cores.
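
    As a simplified illustration of the sampling-probability calculation, the Monte Carlo sketch below treats the tumor as a sphere and the guidance error as an isotropic Gaussian; it ignores core geometry, registration anisotropy, and real contoured surfaces, so it is not the authors' method:

      # Probability that a needle with Gaussian guidance error lands inside a
      # spherical tumor of 1.0 cm^3 volume, as a function of total RMS error.
      import numpy as np

      rng = np.random.default_rng(0)
      tumor_radius = (3 * 1.0e3 / (4 * np.pi)) ** (1 / 3)   # radius in mm for 1.0 cm^3

      def hit_probability(rms_error_mm, n=100_000):
          sigma = rms_error_mm / np.sqrt(3)                 # per-axis sigma for isotropic error
          offsets = rng.normal(0, sigma, size=(n, 3))
          return np.mean(np.linalg.norm(offsets, axis=1) < tumor_radius)

      for rms in (2.0, 3.5, 5.0):
          print(f"RMS error {rms} mm -> P(hit) ~ {hit_probability(rms):.2f}")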

  4. Nature versus nurture: A systematic approach to elucidate gene-environment interactions in the development of myopic refractive errors.

    PubMed

    Miraldi Utz, Virginia

    2017-01-01

    Myopia is the most common eye disorder and major cause of visual impairment worldwide. As the incidence of myopia continues to rise, the need to further understand the complex roles of molecular and environmental factors controlling variation in refractive error is of increasing importance. Tkatchenko and colleagues applied a systematic approach using a combination of gene set enrichment analysis, genome-wide association studies, and functional analysis of a murine model to identify a myopia susceptibility gene, APLP2. Differential expression of refractive error was associated with time spent reading for those with low frequency variants in this gene. This provides support for the longstanding hypothesis of gene-environment interactions in refractive error development.

  5. Doppler imaging of chemical spots on magnetic Ap/Bp stars. Numerical tests and assessment of systematic errors

    NASA Astrophysics Data System (ADS)

    Kochukhov, O.

    2017-01-01

    Context. Doppler imaging (DI) is a powerful spectroscopic inversion technique that enables conversion of a line profile time series into a two-dimensional map of the stellar surface inhomogeneities. DI has been repeatedly applied to reconstruct chemical spot topologies of magnetic Ap/Bp stars with the goal of understanding variability of these objects and gaining an insight into the physical processes responsible for spot formation. Aims: In this paper we investigate the accuracy of chemical abundance DI and assess the impact of several different systematic errors on the reconstructed spot maps. Methods: We have simulated spectroscopic observational data for two different Fe spot distributions with a surface abundance contrast of 1.5 dex in the presence of a moderately strong dipolar magnetic field. We then reconstructed chemical maps using different sets of spectral lines and making different assumptions about line formation in the inversion calculations. Results: Our numerical experiments demonstrate that a modern DI code successfully recovers the input chemical spot distributions comprised of multiple circular spots at different latitudes or an element overabundance belt at the magnetic equator. For the optimal reconstruction based on half a dozen spectral intervals, the average reconstruction errors do not exceed 0.10 dex. The errors increase to about 0.15 dex when abundance distributions are recovered from a few and/or blended spectral lines. Ignoring a 2.5 kG dipolar magnetic field in chemical abundance DI leads to an average relative error of 0.2 dex and maximum errors of 0.3 dex. Similar errors are encountered if a DI inversion is carried out neglecting a non-uniform continuum brightness distribution and variation of the local atmospheric structure. None of the considered systematic effects lead to major spurious features in the recovered abundance maps. Conclusions: This series of numerical DI simulations proves that inversions based on one or two spectral

  6. Systematic evaluation of errors occurring during the preparation of intravenous medication

    PubMed Central

    Parshuram, Christopher S.; To, Teresa; Seto, Winnie; Trope, Angela; Koren, Gideon; Laupacis, Andreas

    2008-01-01

Introduction Errors in the concentration of intravenous medications are not uncommon. We evaluated steps in the infusion-preparation process to identify factors associated with preventable medication errors. Methods We included 118 health care professionals who would be involved in the preparation of intravenous medication infusions as part of their regular clinical activities. Participants performed 5 infusion-preparation tasks (drug-volume calculation, rounding, volume measurement, dose-volume calculation, mixing) and prepared 4 morphine infusions to specified concentrations. The primary outcome was the occurrence of error (deviation of > 5% for volume measurement and > 10% for other measures). The secondary outcome was the magnitude of error. Results Participants performed 1180 drug-volume calculations, 1180 rounding calculations and made 1767 syringe-volume measurements, and they prepared 464 morphine infusions. We detected errors in 58 (4.9%, 95% confidence interval [CI] 3.7% to 6.2%) drug-volume calculations, 30 (2.5%, 95% CI 1.6% to 3.4%) rounding calculations and 29 (1.6%, 95% CI 1.1% to 2.2%) volume measurements. We found 7 errors (1.6%, 95% CI 0.4% to 2.7%) in drug mixing. Of the 464 infusion preparations, 161 (34.7%, 95% CI 30.4% to 39%) contained concentration errors. Calculator use was associated with fewer errors in dose-volume calculations (4% v. 10%, p = 0.001). Four factors were positively associated with the occurrence of a concentration error: fewer infusions prepared in the previous week (p = 0.007), increased number of years of professional experience (p = 0.01), the use of the more concentrated stock solution (p < 0.001) and the preparation of smaller dose volumes (p < 0.001). Larger magnitude errors were associated with fewer hours of sleep in the previous 24 hours (p = 0.02), the use of more concentrated solutions (p < 0.001) and preparation of smaller infusion doses (p < 0.001). Interpretation Our data suggest that the reduction of provider
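
    The confidence intervals quoted above are consistent with a simple normal-approximation binomial interval, as the quick check below shows (the authors' exact statistical method is not stated in this abstract):

      # Wald (normal-approximation) 95% confidence intervals for error proportions.
      import math

      def wald_ci(errors, total, z=1.96):
          p = errors / total
          se = math.sqrt(p * (1 - p) / total)
          return p, p - z * se, p + z * se

      for errors, total in [(58, 1180), (30, 1180), (29, 1767), (161, 464)]:
          p, lo, hi = wald_ci(errors, total)
          print(f"{errors}/{total}: {100*p:.1f}% (95% CI {100*lo:.1f}% to {100*hi:.1f}%)")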

  7. Systematic sparse matrix error control for linear scaling electronic structure calculations.

    PubMed

    Rubensson, Emanuel H; Sałek, Paweł

    2005-11-30

    Efficient truncation criteria used in multiatom blocked sparse matrix operations for ab initio calculations are proposed. As system size increases, so does the need to stay on top of errors and still achieve high performance. A variant of a blocked sparse matrix algebra to achieve strict error control with good performance is proposed. The presented idea is that the condition to drop a certain submatrix should depend not only on the magnitude of that particular submatrix, but also on which other submatrices that are dropped. The decision to remove a certain submatrix is based on the contribution the removal would cause to the error in the chosen norm. We study the effect of an accumulated truncation error in iterative algorithms like trace correcting density matrix purification. One way to reduce the initial exponential growth of this error is presented. The presented error control for a sparse blocked matrix toolbox allows for achieving optimal performance by performing only necessary operations needed to maintain the requested level of accuracy.
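
    A schematic of the truncation rule described above, in which whether a block may be dropped depends on the accumulated norm of everything already dropped; the block layout and tolerance are invented, and this is not the authors' toolbox code:

      # Drop the smallest sub-matrix blocks first, but only while the Frobenius norm
      # of everything dropped so far stays below the requested tolerance.
      import numpy as np

      def truncate_blocked(blocks, tol):
          """blocks: dict {(i, j): ndarray}; returns the blocks that are kept."""
          norms = sorted((np.linalg.norm(b, "fro"), key) for key, b in blocks.items())
          dropped_sq, kept = 0.0, dict(blocks)
          for nrm, key in norms:
              # the decision depends on what has already been dropped, not just this block
              if np.sqrt(dropped_sq + nrm ** 2) <= tol:
                  dropped_sq += nrm ** 2
                  del kept[key]
              else:
                  break
          return kept

      rng = np.random.default_rng(0)
      blocks = {(i, j): rng.normal(0.0, 10.0 ** -abs(i - j), (4, 4))
                for i in range(6) for j in range(6)}
      kept = truncate_blocked(blocks, tol=1e-3)
      print(f"{len(blocks)} blocks -> {len(kept)} kept")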

  8. Evaluation of the systematic error in using 3D dose calculation in scanning beam proton therapy for lung cancer.

    PubMed

    Li, Heng; Liu, Wei; Park, Peter; Matney, Jason; Liao, Zhongxing; Chang, Joe; Zhang, Xiaodong; Li, Yupeng; Zhu, Ronald X

    2014-09-08

    The objective of this study was to evaluate and understand the systematic error between the planned three-dimensional (3D) dose and the delivered dose to patient in scanning beam proton therapy for lung tumors. Single-field and multifield optimized scanning beam proton therapy plans were generated for ten patients with stage II-III lung cancer with a mix of tumor motion and size. 3D doses in CT datasets for different respiratory phases and the time-weighted average CT, as well as the four-dimensional (4D) doses were computed for both plans. The 3D and 4D dose differences for the targets and different organs at risk were compared using dose-volume histogram (DVH) and voxel-based techniques, and correlated with the extent of tumor motion. The gross tumor volume (GTV) dose was maintained in all 3D and 4D doses, using the internal GTV override technique. The DVH and voxel-based techniques are highly correlated. The mean dose error and the standard deviation of dose error for all target volumes were both less than 1.5% for all but one patient. However, the point dose difference between the 3D and 4D doses was up to 6% for the GTV and greater than 10% for the clinical and planning target volumes. Changes in the 4D and 3D doses were not correlated with tumor motion. The planning technique (single-field or multifield optimized) did not affect the observed systematic error. In conclusion, the dose error in 3D dose calculation varies from patient to patient and does not correlate with lung tumor motion. Therefore, patient-specific evaluation of the 4D dose is important for scanning beam proton therapy for lung tumors.

9. Constituent quarks and systematic errors in mid-rapidity charged multiplicity (dNch/dη) distributions

    NASA Astrophysics Data System (ADS)

    Tannenbaum, Michael

    2017-01-01

Although it was demonstrated more than 13 years ago that the increase in midrapidity dNch/dη with increasing centrality of Au+Au collisions at RHIC was linearly proportional to the number of constituent quark participants (or "wounded quarks", QW) in the collision, it was only in the last few years that generating the spatial positions of the three quarks in a nucleon according to the Fourier transform of the measured electric charge form factor of the proton could be used to connect dNch/dη/QW as a function of centrality in p(d)+A and A+A collisions with the same value of dNch/dη/QW determined in p+p collisions. One calculation, which only compared its calculated dNch/dη/QW in p+p at √s_NN = 200 GeV to the least central of 12 centrality-bin measurements in Au+Au by PHENIX, claimed that the p+p value was higher by "about 30%" than the band of measurements vs. centrality. However, the clearly quoted systematic errors were ignored; a 1-standard-deviation systematic shift would move all 12 Au+Au data points to within 1.3 standard deviations of the p+p value, or, if the statistical and systematic errors are added in quadrature, to a difference of 35 +/- 21%. Research supported by U.S. Department of Energy, Contract No. DE-SC0012704.
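
    For readers unfamiliar with the quadrature combination mentioned above, the snippet below shows the arithmetic; the individual statistical and systematic components are not quoted in the abstract, so the values used here are placeholders only:

      # Combining statistical and systematic uncertainties in quadrature.
      import math

      difference = 0.35            # central value of the quoted p+p vs Au+Au difference
      stat, syst = 0.07, 0.20      # hypothetical 1-sigma components (placeholders)
      total = math.sqrt(stat ** 2 + syst ** 2)
      print(f"{difference:.2f} +/- {total:.2f}")   # e.g. 0.35 +/- 0.21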

  10. Effect of critical care pharmacist's intervention on medication errors: A systematic review and meta-analysis of observational studies.

    PubMed

    Wang, Tiansheng; Benedict, Neal; Olsen, Keith M; Luan, Rong; Zhu, Xi; Zhou, Ningning; Tang, Huilin; Yan, Yingying; Peng, Yao; Shi, Luwen

    2015-10-01

    Pharmacists are integral members of the multidisciplinary team for critically ill patients. Multiple nonrandomized controlled studies have evaluated the outcomes of pharmacist interventions in the intensive care unit (ICU). This systematic review focuses on controlled clinical trials evaluating the effect of pharmacist intervention on medication errors (MEs) in ICU settings. Two independent reviewers searched Medline, Embase, and Cochrane databases. The inclusion criteria were nonrandomized controlled studies that evaluated the effect of pharmacist services vs no intervention on ME rates in ICU settings. Four studies were included in the meta-analysis. Results suggest that pharmacist intervention has no significant contribution to reducing general MEs, although pharmacist intervention may significantly reduce preventable adverse drug events and prescribing errors. This meta-analysis highlights the need for high-quality studies to examine the effect of the critical care pharmacist.

  11. Cluster Monte Carlo: Scaling of systematic errors in the two-dimensional Ising model

    SciTech Connect

    Shchur, L.N.; Bloete, H.W.

    1997-05-01

We present an extensive analysis of systematic deviations in Wolff cluster simulations of the critical Ising model, using random numbers generated by binary shift registers. We investigate how these deviations depend on the lattice size, the shift-register length, and the number of bits correlated by the production rule. They appear to satisfy scaling relations. © 1997 The American Physical Society.

  12. Systematic Errors in Stereo PIV When Imaging through a Glass Window

    NASA Technical Reports Server (NTRS)

    Green, Richard; McAlister, Kenneth W.

    2004-01-01

This document assesses the magnitude of velocity measurement errors that may arise when performing stereo particle image velocimetry (PIV) with cameras viewing through thick, refractive window and where the calibration is performed in one plane only. The effect of the window is to introduce a refractive error that increases with window thickness and the camera angle of incidence. The calibration should be performed while viewing through the test section window, otherwise a potentially significant error may be introduced that affects each velocity component differently. However, even when the calibration is performed correctly, another error may arise during the stereo reconstruction if the perspective angle determined for each camera does not account for the displacement of the light rays as they refract through the thick window. Care should be exercised when applying a single-plane calibration, since certain implicit assumptions may in fact require conditions that are extremely difficult to meet in a practical laboratory environment. It is suggested that the effort expended to ensure this accuracy may be better expended in performing a more lengthy volumetric calibration procedure, which does not rely upon the assumptions implicit in the single plane method and avoids the need for the perspective angle to be calculated.

  13. Reduction of Systematic Errors in Diagnostic Receivers Through the Use of Balanced Dicke-Switching and Y-Factor Noise Calibrations

    SciTech Connect

    John Musson, Trent Allison, Roger Flood, Jianxun Yan

    2009-05-01

Receivers designed for diagnostic applications range from those having moderate sensitivity to those possessing large dynamic range. Digital receivers have a dynamic range which is a function of the number of bits represented by the ADC and subsequent processing. If some of this range is sacrificed for extreme sensitivity, noise power can then be used to perform two-point load calibrations. Since load temperatures can be precisely determined, the receiver can be quickly and accurately characterized; minute changes in system gain can then be detected, and systematic errors corrected. In addition, using receiver pairs in a balanced approach to measuring X+, X-, Y+, Y- reduces systematic offset errors from non-identical system gains, and changes in system performance. This paper describes and demonstrates a balanced BPM-style diagnostic receiver, employing Dicke-switching to establish and maintain real-time system calibration. Benefits of such a receiver include wide bandwidth, solid absolute accuracy, improved position accuracy, and phase-sensitive measurements. System description, static and dynamic modelling, and measurement data are presented.
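
    The two-point noise calibration referred to above follows the textbook Y-factor relation; the load temperatures, powers, and bandwidth below are illustrative values, not measurements from this receiver:

      # Y-factor (hot/cold load) calibration: receiver noise temperature and gain.
      import math

      T_hot, T_cold = 290.0, 77.0          # K, e.g. ambient and liquid-nitrogen loads (assumed)
      P_hot, P_cold = 2.4e-9, 0.9e-9       # measured noise powers, W (placeholders)
      bandwidth = 10e6                     # Hz (assumed)
      k_B = 1.380649e-23                   # J/K

      Y = P_hot / P_cold
      T_receiver = (T_hot - Y * T_cold) / (Y - 1.0)              # receiver noise temperature
      gain = P_hot / (k_B * bandwidth * (T_hot + T_receiver))    # linear power gain
      print(f"Y = {Y:.2f}, T_rx = {T_receiver:.1f} K, gain = {10 * math.log10(gain):.1f} dB")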

  14. Modeling Systematic Error Effects for a Sensitive Storage Ring EDM Polarimeter

    NASA Astrophysics Data System (ADS)

    Stephenson, Edward; Imig, Astrid

    2009-10-01

The Storage Ring EDM Collaboration has obtained a set of measurements detailing the sensitivity of a storage ring polarimeter for deuterons to small geometrical and rate changes. Various schemes, such as the calculation of the cross ratio [1], can cancel effects due to detector acceptance differences and luminosity differences for states of opposite polarization. Such schemes fail at second-order in the errors, becoming sensitive to geometrical changes, polarization magnitude differences between opposite polarization states, and changes to the detector response with changing data rates. An expansion of the polarimeter response in a Taylor series based on small errors about the polarimeter operating point can parametrize such effects, primarily in terms of the logarithmic derivatives of the cross section and analyzing power. A comparison will be made to measurements obtained with the EDDA detector at COSY-Jülich. [1] G.G. Ohlsen and P.W. Keaton, Jr., NIM 109, 41 (1973).
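
    The cross ratio of Ohlsen and Keaton cancels acceptance and luminosity differences to first order; a minimal numerical form is sketched below with synthetic counts:

      # Cross-ratio asymmetry from left/right detector counts for two spin states.
      import math

      L_up, R_up = 10500, 9400       # left/right counts, spin "up" (synthetic)
      L_dn, R_dn = 9300, 10600       # left/right counts, spin "down" (synthetic)

      r = math.sqrt((L_up * R_dn) / (L_dn * R_up))
      epsilon = (r - 1.0) / (r + 1.0)      # equals polarization x analyzing power to first order
      print(f"asymmetry = {epsilon:.4f}")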

  15. Diagnostic errors in older patients: a systematic review of incidence and potential causes in seven prevalent diseases

    PubMed Central

    Skinner, Thomas R; Scott, Ian A; Martin, Jennifer H

    2016-01-01

    Background Misdiagnosis, either over- or underdiagnosis, exposes older patients to increased risk of inappropriate or omitted investigations and treatments, psychological distress, and financial burden. Objective To evaluate the frequency and nature of diagnostic errors in seven conditions prevalent in older patients by undertaking a systematic literature review. Data sources and study selection Cohort studies, cross-sectional studies, or systematic reviews of such studies published in Medline between September 1993 and May 2014 were searched using key search terms of “diagnostic error”, “misdiagnosis”, “accuracy”, “validity”, or “diagnosis” and terms relating to each disease. Data synthesis A total of 938 articles were retrieved. Diagnostic error rates of >10% for both over- and underdiagnosis were seen in chronic obstructive pulmonary disease, dementia, Parkinson’s disease, heart failure, stroke/transient ischemic attack, and acute myocardial infarction. Diabetes was overdiagnosed in <5% of cases. Conclusion Over- and underdiagnosis are common in older patients. Explanations for over-diagnosis include subjective diagnostic criteria and the use of criteria not validated in older patients. Underdiagnosis was associated with long preclinical phases of disease or lack of sensitive diagnostic criteria. Factors that predispose to misdiagnosis in older patients must be emphasized in education and clinical guidelines. PMID:27284262

  16. No additional value of fusion techniques on anterior discectomy for neck pain: a systematic review.

    PubMed

    van Middelkoop, Marienke; Rubinstein, Sidney M; Ostelo, Raymond; van Tulder, Maurits W; Peul, Wilco; Koes, Bart W; Verhagen, Arianne P

    2012-11-01

    We aimed to assess the effects of additional fusion on surgical interventions to the cervical spine for patients with neck pain with or without radiculopathy or myelopathy by performing a systematic review. The search strategy outlined by the Cochrane Back Review Group (CBRG) was followed. The primary search was conducted in MEDLINE, EMBASE, CINAHL, CENTRAL and PEDro up to June 2011. Only randomised, controlled trials of adults with neck pain that evaluated at least one clinically relevant primary outcome measure (pain, functional status, recovery) were included. Two authors independently assessed the risk of bias by using the criteria recommended by the CBRG and extracted the data. Data were pooled using a random effects model. The quality of the evidence was rated using the GRADE method. In total, 10 randomised, controlled trials were identified comparing the addition of fusion to anterior decompression techniques, including 2 studies with a low risk of bias. Results revealed no clinically relevant differences in recovery: the pooled risk difference in the short-term follow-up was -0.06 (95% confidence interval -0.22 to 0.10) and -0.07 (95% confidence interval -0.14 to 0.00) in the long-term follow-up. Pooled risk differences for pain and return to work all demonstrated no differences. There is no additional benefit of fusion techniques applied within an anterior discectomy procedure on pain, recovery and return to work.
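
    The pooled risk differences reported above come from a random effects model; a minimal DerSimonian-Laird sketch of that kind of pooling is shown below. It assumes per-study event counts for the two surgical arms and is a generic illustration, not the CBRG/GRADE workflow used in the review.

        import numpy as np

        def pooled_risk_difference(events_a, n_a, events_b, n_b):
            """DerSimonian-Laird random-effects pooling of per-study risk
            differences; returns the pooled estimate and a 95% CI."""
            e1, n1 = np.asarray(events_a, float), np.asarray(n_a, float)
            e2, n2 = np.asarray(events_b, float), np.asarray(n_b, float)
            p1, p2 = e1 / n1, e2 / n2
            rd = p1 - p2
            var = p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2
            w = 1.0 / var                                   # fixed-effect weights
            rd_fixed = np.sum(w * rd) / np.sum(w)
            q = np.sum(w * (rd - rd_fixed) ** 2)            # Cochran's Q
            k = rd.size
            tau2 = max(0.0, (q - (k - 1)) / (np.sum(w) - np.sum(w**2) / np.sum(w)))
            w_re = 1.0 / (var + tau2)                       # random-effects weights
            rd_re = np.sum(w_re * rd) / np.sum(w_re)
            se = 1.0 / np.sqrt(np.sum(w_re))
            return rd_re, (rd_re - 1.96 * se, rd_re + 1.96 * se)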

  17. Diagnostic and therapeutic errors in trigeminal autonomic cephalalgias and hemicrania continua: a systematic review

    PubMed Central

    2013-01-01

    Trigeminal autonomic cephalalgias (TACs) and hemicrania continua (HC) are relatively rare but clinically rather well-defined primary headaches. Despite the existence of clear-cut diagnostic criteria (The International Classification of Headache Disorders, 2nd edition - ICHD-II) and several therapeutic guidelines, errors in workup and treatment of these conditions are frequent in clinical practice. We set out to review all available published data on mismanagement of TACs and HC patients in order to understand and avoid its causes. The search strategy identified 22 published studies. The most frequent errors described in the management of patients with TACs and HC are: referral to wrong type of specialist, diagnostic delay, misdiagnosis, and the use of treatments without overt indication. Migraine with and without aura, trigeminal neuralgia, sinus infection, dental pain and temporomandibular dysfunction are the disorders most frequently overdiagnosed. Even when the clinical picture is clear-cut, TACs and HC are frequently not recognized and/or mistaken for other disorders, not only by general physicians, dentists and ENT surgeons, but also by neurologists and headache specialists. This seems to be due to limited knowledge of the specific characteristics and variants of these disorders, and it results in the unnecessary prescription of ineffective and sometimes invasive treatments which may have negative consequences for patients. Greater knowledge of and education about these disorders, among both primary care physicians and headache specialists, might contribute to improving the quality of life of TACs and HC patients. PMID:23565739

  18. The curious anomaly of skewed judgment distributions and systematic error in the wisdom of crowds.

    PubMed

    Nash, Ulrik W

    2014-01-01

    Judgment distributions are often skewed and we know little about why. This paper explains the phenomenon of skewed judgment distributions by introducing the augmented quincunx (AQ) model of sequential and probabilistic cue categorization by neurons of judges. In the process of developing inferences about true values, when neurons categorize cues better than chance, and when the particular true value is extreme compared to what is typical and anchored upon, then populations of judges form skewed judgment distributions with high probability. Moreover, the collective error made by these people can be inferred from how skewed their judgment distributions are, and in what direction they tilt. This implies not just that judgment distributions are shaped by cues, but that judgment distributions are cues themselves for the wisdom of crowds. The AQ model also predicts that judgment variance correlates positively with collective error, thereby challenging what is commonly believed about how diversity and collective intelligence relate. Data from 3053 judgment surveys about US macroeconomic variables obtained from the Federal Reserve Bank of Philadelphia and the Wall Street Journal provide strong support, and implications are discussed with reference to three central ideas on collective intelligence, these being Galton's conjecture on the distribution of judgments, Muth's rational expectations hypothesis, and Page's diversity prediction theorem.
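
    The qualitative mechanism described above can be illustrated with a deliberately simplified Galton-board toy (this is not the authors' AQ model, and all parameter values are hypothetical): when each cue is categorized correctly with probability well above chance, the resulting binomial-like judgment distribution is shifted toward the true value and skewed back toward the anchor.

        import numpy as np
        from scipy.stats import skew

        rng = np.random.default_rng(0)

        def toy_quincunx(n_judges=10000, n_cues=12, p_correct=0.85,
                         anchor=100.0, step=5.0):
            """Toy biased quincunx: each correctly categorized cue nudges a
            judge's estimate toward the (extreme) true value, otherwise away
            from it, starting from a common anchor."""
            correct = rng.random((n_judges, n_cues)) < p_correct
            steps = np.where(correct, step, -step)
            return anchor + steps.sum(axis=1)

        judgments = toy_quincunx()
        print(judgments.mean(), skew(judgments))   # mean pulled toward the truth, skew tilted back toward the anchor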

  19. The Curious Anomaly of Skewed Judgment Distributions and Systematic Error in the Wisdom of Crowds

    PubMed Central

    Nash, Ulrik W.

    2014-01-01

    Judgment distributions are often skewed and we know little about why. This paper explains the phenomenon of skewed judgment distributions by introducing the augmented quincunx (AQ) model of sequential and probabilistic cue categorization by neurons of judges. In the process of developing inferences about true values, when neurons categorize cues better than chance, and when the particular true value is extreme compared to what is typical and anchored upon, then populations of judges form skewed judgment distributions with high probability. Moreover, the collective error made by these people can be inferred from how skewed their judgment distributions are, and in what direction they tilt. This implies not just that judgment distributions are shaped by cues, but that judgment distributions are cues themselves for the wisdom of crowds. The AQ model also predicts that judgment variance correlates positively with collective error, thereby challenging what is commonly believed about how diversity and collective intelligence relate. Data from 3053 judgment surveys about US macroeconomic variables obtained from the Federal Reserve Bank of Philadelphia and the Wall Street Journal provide strong support, and implications are discussed with reference to three central ideas on collective intelligence, these being Galton's conjecture on the distribution of judgments, Muth's rational expectations hypothesis, and Page's diversity prediction theorem. PMID:25406078

  20. Random and systematic errors in case-control studies calculating the injury risk of driving under the influence of psychoactive substances.

    PubMed

    Houwing, Sjoerd; Hagenzieker, Marjan; Mathijssen, René P M; Legrand, Sara-Ann; Verstraete, Alain G; Hels, Tove; Bernhoft, Inger Marie; Simonsen, Kirsten Wiese; Lillsunde, Pirjo; Favretto, Donata; Ferrara, Santo D; Caplinskiene, Marija; Movig, Kris L L; Brookhuis, Karel A

    2013-03-01

    Between 2006 and 2010, six population based case-control studies were conducted as part of the European research-project DRUID (DRiving Under the Influence of Drugs, alcohol and medicines). The aim of these case-control studies was to calculate odds ratios indicating the relative risk of serious injury in car crashes. The calculated odds ratios in these studies showed large variations, despite the use of uniform guidelines for the study designs. The main objective of the present article is to provide insight into the presence of random and systematic errors in the six DRUID case-control studies. Relevant information was gathered from the DRUID-reports for eleven indicators for errors. The results showed that differences between the odds ratios in the DRUID case-control studies may indeed be (partially) explained by random and systematic errors. Selection bias and errors due to small sample sizes and cell counts were the most frequently observed errors in the six DRUID case-control studies. Therefore, it is recommended that epidemiological studies that assess the risk of psychoactive substances in traffic pay specific attention to avoid these potential sources of random and systematic errors. The list of indicators that was identified in this study is useful both as guidance for systematic reviews and meta-analyses and for future epidemiological studies in the field of driving under the influence to minimize sources of errors already at the start of the study.
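
    Several of the error indicators mentioned above (small sample sizes, small cell counts) show up directly as wide confidence intervals around the odds ratio. A minimal sketch of the standard 2x2-table calculation with a Woolf-type interval is given below; the counts are hypothetical.

        import math

        def odds_ratio_ci(exposed_cases, unexposed_cases,
                          exposed_controls, unexposed_controls):
            """Odds ratio and Woolf 95% confidence interval from a 2x2 table."""
            a, b = exposed_cases, unexposed_cases
            c, d = exposed_controls, unexposed_controls
            or_ = (a * d) / (b * c)
            se_log = math.sqrt(1/a + 1/b + 1/c + 1/d)
            lo = math.exp(math.log(or_) - 1.96 * se_log)
            hi = math.exp(math.log(or_) + 1.96 * se_log)
            return or_, (lo, hi)

        # Hypothetical counts: substance-positive/negative among seriously
        # injured drivers (cases) and roadside controls
        print(odds_ratio_ci(30, 270, 40, 960))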

  1. Systematic identification and correction of annotation errors in the genetic interaction map of Saccharomyces cerevisiae

    PubMed Central

    Atias, Nir; Kupiec, Martin; Sharan, Roded

    2016-01-01

    The yeast mutant collections are a fundamental tool in deciphering genomic organization and function. Over the last decade, they have been used for the systematic exploration of ∼6 000 000 double gene mutants, identifying and cataloging genetic interactions among them. Here we studied the extent to which these data are prone to neighboring gene effects (NGEs), a phenomenon by which the deletion of a gene affects the expression of adjacent genes along the genome. Analyzing ∼90,000 negative genetic interactions observed to date, we found that more than 10% of them are incorrectly annotated due to NGEs. We developed a novel algorithm, GINGER, to identify and correct erroneous interaction annotations. We validated the algorithm using a comparative analysis of interactions from Schizosaccharomyces pombe. We further showed that our predictions are significantly more concordant with diverse biological data compared to their mis-annotated counterparts. Our work uncovered about 9500 new genetic interactions in yeast. PMID:26602688

  2. A review of sources of systematic errors and uncertainties in observations and simulations at 183 GHz

    NASA Astrophysics Data System (ADS)

    Brogniez, Helene; English, Stephen; Mahfouf, Jean-Francois; Behrendt, Andreas; Berg, Wesley; Boukabara, Sid; Buehler, Stefan Alexander; Chambon, Philippe; Gambacorta, Antonia; Geer, Alan; Ingram, William; Kursinski, E. Robert; Matricardi, Marco; Odintsova, Tatyana A.; Payne, Vivienne H.; Thorne, Peter W.; Tretyakov, Mikhail Yu.; Wang, Junhong

    2016-05-01

    Several recent studies have observed systematic differences between measurements in the 183.31 GHz water vapor line by space-borne sounders and calculations using radiative transfer models, with inputs from either radiosondes (radiosonde observations, RAOBs) or short-range forecasts by numerical weather prediction (NWP) models. This paper discusses all the relevant categories of observation-based or model-based data, quantifies their uncertainties and separates biases that could be common to all causes from those attributable to a particular cause. Reference observations from radiosondes, Global Navigation Satellite System (GNSS) receivers, differential absorption lidar (DIAL) and Raman lidar are thus overviewed. Biases arising from their calibration procedures, NWP models and data assimilation, instrument biases and radiative transfer models (both the models themselves and the underlying spectroscopy) are presented and discussed. Although presently no single process in the comparisons seems capable of explaining the observed structure of bias, recommendations are made in order to better understand the causes.

  3. SU-E-T-405: Robustness of Volumetric-Modulated Arc Therapy (VMAT) Plans to Systematic MLC Positional Errors

    SciTech Connect

    Qi, P; Xia, P

    2014-06-01

    Purpose: To evaluate the dosimetric impact of systematic MLC positional errors (PEs) on the quality of volumetric-modulated arc therapy (VMAT) plans. Methods: Five patients with head-and-neck cancer (HN) and five patients with prostate cancer were randomly chosen for this study. The clinically approved VMAT plans were designed with 2–4 coplanar arc beams with non-zero collimator angles in the Pinnacle planning system. The systematic MLC PEs of 0.5, 1.0, and 2.0 mm on both MLC banks were introduced into the original VMAT plans using an in-house program, and recalculated with the same planned Monitor Units in the Pinnacle system. For each patient, the original VMAT plans and plans with MLC PEs were evaluated according to the dose-volume histogram information and Gamma index analysis. Results: For one primary target, the ratio of V100 in the plans with 0.5, 1.0, and 2.0 mm MLC PEs to those in the clinical plans was 98.8 ± 2.2%, 97.9 ± 2.1%, 90.1 ± 9.0% for HN cases and 99.5 ± 3.2%, 98.9 ± 1.0%, 97.0 ± 2.5% for prostate cases. For all OARs, the relative difference of Dmean in all plans was less than 1.5%. With 2mm/2% criteria for Gamma analysis, the passing rates were 99.0 ± 1.5% for HN cases and 99.7 ± 0.3% for prostate cases between the planar doses from the original plans and the plans with 1.0 mm MLC errors. The corresponding Gamma passing rates dropped to 88.9 ± 5.3% for HN cases and 83.4 ± 3.2% for prostate cases when comparing planar doses from the original plans and the plans with 2.0 mm MLC errors. Conclusion: For VMAT plans, systematic MLC PEs up to 1.0 mm did not affect the plan quality in terms of target coverage, OAR sparing, and Gamma analysis with 2mm/2% criteria.
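
    The 2%/2 mm Gamma analysis used above combines a dose-difference and a distance-to-agreement criterion. The sketch below is a simplified one-dimensional, globally normalised version of that metric, not the Pinnacle implementation used in the study.

        import numpy as np

        def gamma_pass_rate(dose_ref, dose_eval, positions_mm,
                            dd_percent=2.0, dta_mm=2.0):
            """Simplified 1-D global gamma analysis: for each reference point,
            take the minimum combined dose-difference/distance metric over all
            evaluated points; gamma <= 1 counts as a pass."""
            dose_ref = np.asarray(dose_ref, float)
            dose_eval = np.asarray(dose_eval, float)
            x = np.asarray(positions_mm, float)
            dd_abs = dd_percent / 100.0 * dose_ref.max()    # global dose criterion
            gammas = np.empty_like(dose_ref)
            for i, (xi, di) in enumerate(zip(x, dose_ref)):
                dist2 = ((x - xi) / dta_mm) ** 2
                dose2 = ((dose_eval - di) / dd_abs) ** 2
                gammas[i] = np.sqrt(np.min(dist2 + dose2))
            return np.mean(gammas <= 1.0) * 100.0           # passing rate in percent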

  4. The application of SHERPA (Systematic Human Error Reduction and Prediction Approach) in the development of compensatory cognitive rehabilitation strategies for stroke patients with left and right brain damage.

    PubMed

    Hughes, Charmayne M L; Baber, Chris; Bienkiewicz, Marta; Worthington, Andrew; Hazell, Alexa; Hermsdörfer, Joachim

    2015-01-01

    Approximately 33% of stroke patients have difficulty performing activities of daily living, often committing errors during the planning and execution of such activities. The objective of this study was to evaluate the ability of the human error identification (HEI) technique SHERPA (Systematic Human Error Reduction and Prediction Approach) to predict errors during the performance of daily activities in stroke patients with left and right hemisphere lesions. Using SHERPA we successfully predicted 36 of the 38 observed errors, with analysis indicating that the proportion of predicted and observed errors was similar for all sub-tasks and severity levels. HEI results were used to develop compensatory cognitive strategies that clinicians could employ to reduce or prevent errors from occurring. This study provides evidence for the reliability and validity of SHERPA in the design of cognitive rehabilitation strategies in stroke populations.

  5. Resilience to emotional distress in response to failure, error or mistakes: A systematic review.

    PubMed

    Johnson, Judith; Panagioti, Maria; Bass, Jennifer; Ramsey, Lauren; Harrison, Reema

    2017-03-01

    Perceptions of failure have been implicated in a range of psychological disorders, and even a single experience of failure can heighten anxiety and depression. However, not all individuals experience significant emotional distress following failure, indicating the presence of resilience. The current systematic review synthesised studies investigating resilience factors to emotional distress resulting from the experience of failure. For the definition of resilience we used the Bi-Dimensional Framework for resilience research (BDF) which suggests that resilience factors are those which buffer the impact of risk factors, and outlines criteria a variable should meet in order to be considered as conferring resilience. Studies were identified through electronic searches of PsycINFO, MEDLINE, EMBASE and Web of Knowledge. Forty-six relevant studies reported in 38 papers met the inclusion criteria. These provided evidence of the presence of factors which confer resilience to emotional distress in response to failure. The strongest support was found for the factors of higher self-esteem, more positive attributional style, and lower socially-prescribed perfectionism. Weaker evidence was found for the factors of lower trait reappraisal, lower self-oriented perfectionism and higher emotional intelligence. The majority of studies used experimental or longitudinal designs. These results identify specific factors which should be targeted by resilience-building interventions. Keywords: resilience; failure; stress; self-esteem; attributional style; perfectionism.

  6. Bad Analogies as the Source of Systematic Errors in Problem Solving Skills

    DTIC Science & Technology

    1987-09-29

    to the student as "Define a function called numline. It takes one argument that is a number and returns a two-element list. The first element of the...to do is unsystematic. Some students abort the inference: some try subtraction, some try addition, some try substitution, etc. This is the only high

  7. Variations in Learning Rate: Student Classification Based on Systematic Residual Error Patterns across Practice Opportunities

    ERIC Educational Resources Information Center

    Liu, Ran; Koedinger, Kenneth R.

    2015-01-01

    A growing body of research suggests that accounting for student specific variability in educational data can improve modeling accuracy and may have implications for individualizing instruction. The Additive Factors Model (AFM), a logistic regression model used to fit educational data and discover/refine skill models of learning, contains a…

  8. Diagnosis of Systematic Errors in Atmospheric River Forecasts Using Satellite Observations of Integrated Water Vapor

    NASA Astrophysics Data System (ADS)

    Wick, G. A.; Neiman, P. J.; Ralph, F. M.

    2011-12-01

    Narrow regions of strong water vapor transport in the atmosphere, termed atmospheric rivers, have been observed to be present and an important contributor to recent major winter flooding events along the west coast of the United States. CalWater research objectives include documenting the influence of atmospheric river (AR) events and assessing the uncertainty of their representation in numerical weather prediction forecast and reanalysis models. Understanding how well AR events are reproduced in model forecast fields is particularly important as we look forward to how their frequency and intensity might evolve in a changing climate. To support these goals, previous work defined objective characteristics for the identification of AR events in satellite-based integrated water vapor (IWV) retrievals. These techniques have been extended in the development of an automated tool to identify and characterize AR events in both satellite-derived and model fields of IWV. To evaluate how accurately present models reproduce the occurrence and representation of AR events, forecasts and analyses of IWV from multiple models are compared with corresponding satellite-based retrievals over several cool seasons. The automated AR detection procedure is applied to compare the representation of such characteristics as the frequency, size, position, and intensity of AR events in both the analyses and forecasts. Forecast fields are taken from several of the operational models included in the THORPEX Interactive Grand Global Ensemble (TIGGE). Results are presented as a function of forecast lead time in terms of quantities including probability of detection and false alarm rate. Overall the frequency and timing of events is generally well forecast but the size and position are subject to larger errors, particularly at longer forecast lead times.
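
    The detection-oriented scores mentioned at the end of the abstract are simple contingency-table quantities; a minimal sketch is shown below (here the "false alarm rate" is computed as the false alarm ratio, and the counts are hypothetical).

        def detection_scores(hits, misses, false_alarms):
            """Probability of detection and false alarm ratio for forecast AR
            events matched against satellite-detected events."""
            pod = hits / (hits + misses)
            far = false_alarms / (hits + false_alarms)
            return pod, far

        # Hypothetical verification counts at one forecast lead time
        pod, far = detection_scores(hits=41, misses=7, false_alarms=12)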

  9. DtaRefinery: a software tool for elimination of systematic errors from parent ion mass measurements in tandem mass spectra datasets

    SciTech Connect

    Petyuk, Vladislav A.; Mayampurath, Anoop M.; Monroe, Matthew E.; Polpitiya, Ashoka D.; Purvine, Samuel O.; Anderson, Gordon A.; Camp, David G.; Smith, Richard D.

    2009-12-16

    Hybrid two-stage mass spectrometers capable of both highly accurate mass measurement and MS/MS fragmentation have become widely available in recent years and have allowed for significantly better discrimination between true and false MS/MS peptide identifications by applying relatively narrow windows for maximum allowable deviations for parent ion mass measurements. To fully gain the advantage of highly accurate parent ion mass measurements, it is important to limit systematic mass measurement errors. The DtaRefinery software tool can correct systematic errors in parent ion masses by reading a set of fragmentation spectra, searching for MS/MS peptide identifications, then fitting a model that can estimate systematic errors, and removing them. This results in a new fragmentation spectrum file with updated parent ion masses.
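
    A minimal analogue of this kind of correction is sketched below: estimate the systematic part of the parent-ion mass error (in ppm) from confidently identified peptides and remove it. DtaRefinery itself fits richer, multidimensional error models; the linear-in-m/z model here is only an assumption for illustration.

        import numpy as np

        def correct_parent_masses(mz_observed, mz_theoretical):
            """Fit a linear-in-m/z model of the systematic mass error (ppm)
            from identified peptides, then return corrected observed masses."""
            mz_obs = np.asarray(mz_observed, float)
            mz_theo = np.asarray(mz_theoretical, float)
            ppm_err = (mz_obs - mz_theo) / mz_theo * 1e6
            slope, intercept = np.polyfit(mz_obs, ppm_err, deg=1)  # systematic component
            predicted_ppm = slope * mz_obs + intercept
            return mz_obs / (1.0 + predicted_ppm * 1e-6)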

  10. Additives

    NASA Technical Reports Server (NTRS)

    Smalheer, C. V.

    1973-01-01

    The chemistry of lubricant additives is discussed to show what the additives are chemically and what functions they perform in the lubrication of various kinds of equipment. Current theories regarding the mode of action of lubricant additives are presented. The additive groups discussed include the following: (1) detergents and dispersants, (2) corrosion inhibitors, (3) antioxidants, (4) viscosity index improvers, (5) pour point depressants, and (6) antifouling agents.

  11. Contrast evaluation of the polarimetric images of different targets in turbid medium: possible sources of systematic errors

    NASA Astrophysics Data System (ADS)

    Novikova, T.; Bénière, A.; Goudail, F.; De Martino, A.

    2010-04-01

    Subsurface polarimetric (differential polarization, degree of polarization or Mueller matrix) imaging of various targets in turbid media shows image contrast enhancement compared with total intensity measurements. The image contrast depends on the target immersion depth and on both target and background medium optical properties, such as scattering coefficient, absorption coefficient and anisotropy. The differential polarization image contrast is usually not the same for circularly and linearly polarized light. With linearly and circularly polarized light we acquired the orthogonal state contrast (OSC) images of reflecting, scattering and absorbing targets. The targets were positioned at various depths within the container filled with polystyrene particle suspension in water. We also performed numerical Monte Carlo modelling of backscattering Mueller matrix images of the experimental set-up. Quite often the dimensions of the container, its shape and the optical properties of the container walls are not reported for similar experiments and numerical simulations. However, we found that, depending on the photon transport mean free path in the scattering medium, the above-mentioned parameters, as well as multiple target design, could all be sources of significant systematic errors in the evaluation of polarimetric image contrast. Thus, proper design of experiment geometry is of prime importance in order to remove the sources of possible artefacts in the image contrast evaluation and to make a correct choice between linear and circular polarization of the light for better target detection.
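
    The orthogonal state contrast (OSC) images referred to above are built from co- and cross-polarized intensity images; a minimal sketch of that quantity, together with a simple target/background contrast figure, is given below (the exact contrast definition used in the study may differ in detail).

        import numpy as np

        def orthogonal_state_contrast(i_co, i_cross):
            """OSC image from co- and cross-polarized intensity images
            (applicable to either linear or circular illumination)."""
            i_co = np.asarray(i_co, float)
            i_cross = np.asarray(i_cross, float)
            return (i_co - i_cross) / (i_co + i_cross)

        def target_contrast(osc_target_mean, osc_background_mean):
            """Michelson-style contrast between mean OSC over the target
            region and mean OSC over the surrounding background."""
            return abs(osc_target_mean - osc_background_mean) / (
                osc_target_mean + osc_background_mean)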

  12. Statistical tests against systematic errors in data sets based on the equality of residual means and variances from control samples: theory and applications.

    PubMed

    Henn, Julian; Meindl, Kathrin

    2015-03-01

    Statistical tests are applied for the detection of systematic errors in data sets from least-squares refinements or other residual-based reconstruction processes. Samples of the residuals of the data are tested against the hypothesis that they belong to the same distribution. For this it is necessary that they show the same mean values and variances within the limits given by statistical fluctuations. When the samples differ significantly from each other, they are not from the same distribution within the limits set by the significance level. Therefore they cannot originate from a single Gaussian function in this case. It is shown that a significance cutoff results in exactly this case. Significance cutoffs are still frequently used in charge-density studies. The tests are applied to artificial data with and without systematic errors and to experimental data from the literature.
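
    In the same spirit, a minimal sketch of such a comparison on two control samples of residuals is shown below, using Welch's t-test for the means and Levene's test for the variances; the specific tests employed in the paper may differ.

        from scipy import stats

        def compare_residual_samples(res_a, res_b, alpha=0.01):
            """Test whether two control samples of residuals are compatible
            with one distribution: equal means (Welch's t-test) and equal
            variances (Levene's test).  Rejection of either hypothesis flags
            possible systematic errors."""
            _, p_mean = stats.ttest_ind(res_a, res_b, equal_var=False)
            _, p_var = stats.levene(res_a, res_b)
            return {"p_equal_means": p_mean,
                    "p_equal_variances": p_var,
                    "flag_systematic": (p_mean < alpha) or (p_var < alpha)}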

  13. The apparent British sea slope is caused by systematic errors in the levelling-based vertical datum

    NASA Astrophysics Data System (ADS)

    Penna, N. T.; Featherstone, W. E.; Gazeaux, J.; Bingham, R. J.

    2013-08-01

    The spirit-levelling-based British vertical datum (Ordnance Datum Newlyn) implies a south-north apparent slope in mean sea level of up to 53 mm deg-1 latitude, due to the datum surface falling as one heads northwards. Although this apparent slope has been investigated since the 1960s, explanations of its origin have remained inconclusive. It has also been suggested that, rather than a slope, the British vertical datum includes a step of about 240 mm affecting all sites north of about 53°N. In either case, the British vertical datum may be of limited use for any study requiring accurate heights or changes in heights, such as testing geoid models, groundwater and hydrocarbon extraction, the calibration and validation of satellite-based digital terrain models, and the unification of vertical datums internationally. Within the last decade, however, based on an apparent reduction in the slope to only -12 mm deg-1 latitude with respect to recent geoid models, it has been claimed that the British vertical datum does provide a physically meaningful surface for use in scientific applications. In this paper, we reinvestigate the presence of apparent south-north sea slopes around Britain and reported slopes in the vertical datum, using the EGM2008 global gravitational model, together with mean sea level and GPS data from British tide gauges, GPS ellipsoidal heights of 178 fundamental benchmarks across mainland Britain, and vertical deflection observations at 192 stations. We demonstrate a south-north slope in the British vertical datum of -(20-25) mm deg-1 latitude with respect to both mean sea level (corrected for the ocean's mean dynamic topography and the inverse barometer response to atmospheric pressure loading) and the EGM2008 quasigeoid model, while EGM2008 is shown to exhibit a negligible slope of (2 ± 4) mm deg-1 with respect to mean sea level. It is clear, therefore, that the slope can only arise from systematic errors in the levelling, although we are unable to isolate

  14. Edge profile analysis of Joint European Torus (JET) Thomson scattering data: Quantifying the systematic error due to edge localised mode synchronisation.

    PubMed

    Leyland, M J; Beurskens, M N A; Flanagan, J C; Frassinetti, L; Gibson, K J; Kempenaars, M; Maslov, M; Scannell, R

    2016-01-01

    The Joint European Torus (JET) high resolution Thomson scattering (HRTS) system measures radial electron temperature and density profiles. One of the key capabilities of this diagnostic is measuring the steep pressure gradient, termed the pedestal, at the edge of JET plasmas. The pedestal is susceptible to limiting instabilities, such as Edge Localised Modes (ELMs), characterised by a periodic collapse of the steep gradient region. A common method to extract the pedestal width, gradient, and height, used on numerous machines, is by performing a modified hyperbolic tangent (mtanh) fit to overlaid profiles selected from the same region of the ELM cycle. This process of overlaying profiles, termed ELM synchronisation, maximises the number of data points defining the pedestal region for a given phase of the ELM cycle. When fitting to HRTS profiles, it is necessary to incorporate the diagnostic radial instrument function, particularly important when considering the pedestal width. A deconvolved fit is determined by a forward convolution method requiring knowledge of only the instrument function and profiles. The systematic error due to the deconvolution technique incorporated into the JET pedestal fitting tool has been documented by Frassinetti et al. [Rev. Sci. Instrum. 83, 013506 (2012)]. This paper seeks to understand and quantify the systematic error introduced to the pedestal width due to ELM synchronisation. Synthetic profiles, generated with error bars and point-to-point variation characteristic of real HRTS profiles, are used to evaluate the deviation from the underlying pedestal width. We find on JET that the ELM synchronisation systematic error is negligible in comparison to the statistical error when assuming ten overlaid profiles (typical for a pre-ELM fit to HRTS profiles). This confirms that fitting a mtanh to ELM synchronised profiles is a robust and practical technique for extracting the pedestal structure.
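
    The modified hyperbolic tangent (mtanh) pedestal fit mentioned above is commonly written in terms of a pedestal height, position, width, offset and core slope; a sketch of one such parameterization is given below. The exact functional form and starting values used by the JET fitting tool may differ, and the commented numbers are hypothetical.

        import numpy as np
        from scipy.optimize import curve_fit

        def mtanh(x, core_slope):
            """Modified tanh: a tanh step with a linear slope added on the core side."""
            return ((1.0 + core_slope * x) * np.exp(x) - np.exp(-x)) / (np.exp(x) + np.exp(-x))

        def pedestal_profile(r, height, position, width, offset, core_slope):
            """mtanh pedestal parameterization of an edge profile."""
            x = (position - r) / (2.0 * width)
            return offset + 0.5 * height * (mtanh(x, core_slope) + 1.0)

        # With ELM-synchronised, overlaid radii r (m) and temperatures te (keV):
        # popt, pcov = curve_fit(pedestal_profile, r, te,
        #                        p0=[1.0, 3.75, 0.02, 0.05, 0.05])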

  15. Statistical and systematic errors in the measurement of weak-lensing Minkowski functionals: Application to the Canada-France-Hawaii Lensing Survey

    SciTech Connect

    Shirasaki, Masato; Yoshida, Naoki

    2014-05-01

    The measurement of cosmic shear using weak gravitational lensing is a challenging task that involves a number of complicated procedures. We study in detail the systematic errors in the measurement of weak-lensing Minkowski Functionals (MFs). Specifically, we focus on systematics associated with galaxy shape measurements, photometric redshift errors, and shear calibration correction. We first generate mock weak-lensing catalogs that directly incorporate the actual observational characteristics of the Canada-France-Hawaii Lensing Survey (CFHTLenS). We then perform a Fisher analysis using the large set of mock catalogs for various cosmological models. We find that the statistical error associated with the observational effects degrades the cosmological parameter constraints by a factor of a few. The Subaru Hyper Suprime-Cam (HSC) survey with a sky coverage of ∼1400 deg^2 will constrain the dark energy equation-of-state parameter with an error of Δw_0 ∼ 0.25 by the lensing MFs alone, but biases induced by the systematics can be comparable to the 1σ error. We conclude that the lensing MFs are powerful statistics beyond the two-point statistics only if well-calibrated measurement of both the redshifts and the shapes of source galaxies is performed. Finally, we analyze the CFHTLenS data to explore the ability of the MFs to break degeneracies between a few cosmological parameters. Using a combined analysis of the MFs and the shear correlation function, we derive the matter density Ω_m0 = 0.256 +0.054/-0.046.

  16. Single-lens 3D digital image correlation system based on a bilateral telecentric lens and a bi-prism: Systematic error analysis and correction

    NASA Astrophysics Data System (ADS)

    Wu, Lifu; Zhu, Jianguo; Xie, Huimin; Zhou, Mengmeng

    2016-12-01

    Recently, we proposed a single-lens 3D digital image correlation (3D DIC) method and established a measurement system on the basis of a bilateral telecentric lens (BTL) and a bi-prism. This system can retrieve the 3D morphology of a target and measure its deformation using a single BTL with relatively high accuracy. Nevertheless, the system still suffers from systematic errors caused by manufacturing deficiency of the bi-prism and distortion of the BTL. In this study, in-depth evaluations of these errors and their effects on the measurement results are performed experimentally. The bi-prism deficiency and the BTL distortion are characterized by two in-plane rotation angles and several distortion coefficients, respectively. These values are obtained from a calibration process using a chessboard placed into the field of view of the system; this process is conducted after the measurement of the tested specimen. A modified mathematical model is proposed, which takes these systematic errors into account and corrects them during 3D reconstruction. Experiments on retrieving the 3D positions of the chessboard grid corners and the morphology of a ceramic plate specimen are performed. The results of the experiments reveal that ignoring the bi-prism deficiency will induce an attitude error in the retrieved morphology, and the BTL distortion can lead to pseudo out-of-plane deformation. Correcting these problems can further improve the measurement accuracy of the bi-prism-based single-lens 3D DIC system.

  17. Performance analysis for time-frequency MUSIC algorithm in presence of both additive noise and array calibration errors

    NASA Astrophysics Data System (ADS)

    Khodja, Mohamed; Belouchrani, Adel; Abed-Meraim, Karim

    2012-12-01

    This article deals with the application of Spatial Time-Frequency Distribution (STFD) to the direction finding problem using the Multiple Signal Classification (MUSIC) algorithm. A comparative performance analysis is performed for the method under consideration with respect to that using the data covariance matrix when the received array signals are subject to calibration errors in a non-stationary environment. A unified analytical expression of the Direction Of Arrival (DOA) estimation error is derived for both methods. Numerical results show the effect of the parameters intervening in the derived expression on the algorithm performance. It is particularly observed that for low Signal to Noise Ratio (SNR) and high Signal to sensor Perturbation Ratio (SPR) the STFD method gives better performance, while for high SNR and for the same SPR both methods give similar performance.
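
    For orientation, a sketch of the conventional covariance-based MUSIC pseudospectrum for a uniform linear array is given below; the STFD variant analysed in the article replaces the sample covariance matrix with a spatial time-frequency distribution matrix. The array geometry and half-wavelength element spacing are assumptions for illustration.

        import numpy as np

        def music_spectrum(snapshots, n_sources, angles_deg, d_over_lambda=0.5):
            """Covariance-based MUSIC pseudospectrum for a uniform linear array;
            `snapshots` has shape (n_sensors, n_snapshots)."""
            n_sensors = snapshots.shape[0]
            r = snapshots @ snapshots.conj().T / snapshots.shape[1]  # sample covariance
            eigvals, eigvecs = np.linalg.eigh(r)                     # ascending eigenvalues
            noise_subspace = eigvecs[:, :n_sensors - n_sources]
            k = np.arange(n_sensors)
            spectrum = []
            for theta in np.radians(angles_deg):
                a = np.exp(-2j * np.pi * d_over_lambda * k * np.sin(theta))  # steering vector
                proj = noise_subspace.conj().T @ a
                spectrum.append(1.0 / np.real(proj.conj().T @ proj))
            return np.array(spectrum)   # DOA estimates are the peak locations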

  18. Systematics of the family Plectopylidae in Vietnam with additional information on Chinese taxa (Gastropoda, Pulmonata, Stylommatophora)

    PubMed Central

    Páll-Gergely, Barna; Hunyadi, András; Ablett, Jonathan; Lương, Hào Văn; Naggs, Fred; Asami, Takahiro

    2015-01-01

    Abstract Vietnamese species from the family Plectopylidae are revised based on the type specimens of all known taxa, more than 600 historical non-type museum lots, and almost 200 newly-collected samples. Altogether more than 7000 specimens were investigated. The revision has revealed that species diversity of the Vietnamese Plectopylidae was previously overestimated. Overall, thirteen species names (anterides Gude, 1909, bavayi Gude, 1901, congesta Gude, 1898, fallax Gude, 1909, gouldingi Gude, 1909, hirsuta Möllendorff, 1901, jovia Mabille, 1887, moellendorffi Gude, 1901, persimilis Gude, 1901, pilsbryana Gude, 1901, soror Gude, 1908, tenuis Gude, 1901, verecunda Gude, 1909) were synonymised with other species. In addition to these, Gudeodiscus hemmeni sp. n. and Gudeodiscus messageri raheemi ssp. n. are described from north-western Vietnam. Sixteen species and two subspecies are recognized from Vietnam. The reproductive anatomy of eight taxa is described. Based on anatomical information, Halongella gen. n. is erected to include Plectopylis schlumbergeri and Plectopylis fruhstorferi. Additionally, the genus Gudeodiscus is subdivided into two subgenera (Gudeodiscus and Veludiscus subgen. n.) on the basis of the morphology of the reproductive anatomy and the radula. The Chinese Gudeodiscus phlyarius werneri Páll-Gergely, 2013 is moved to synonymy of Gudeodiscus phlyarius. A spermatophore was found in the organ situated next to the gametolytic sac in one specimen. This suggests that this organ in the Plectopylidae is a diverticulum. Statistically significant evidence is presented for the presence of calcareous hook-like granules inside the penis being associated with the absence of embryos in the uterus in four genera. This suggests that these probably play a role in mating periods before disappearing when embryos develop. Sicradiscus mansuyi is reported from China for the first time. PMID:25632253

  19. Measuring water vapor isotopes using Cavity Ring-Down Spectroscopy: improving data quality by understanding systematic errors and calibration techniques

    NASA Astrophysics Data System (ADS)

    Dennis, Kate J.; Jacobson, Gloria

    2014-05-01

    do exist when making ambient water vapor isotope measurements via CRDS and these should be addressed to ensure data integrity. Here we review a number of systematic errors introduced when making ambient water vapor measurements using CRDS, and where appropriate, provide suggestions for how to correct for them. These include: the dependence of reported delta values on water vapor concentration, the interference of CH4 on water spectra, achieving reliable low humidity measurements ([H2O] < 5,000 ppm), and calibration for both absolute accuracy and instrument drift. We will also demonstrate the relationship between calibration frequency and precision, and make recommendations for ongoing calibration and maintenance. Our aim is to improve the quality of data collected and support the continued use of water vapor isotope measurements by the research community. [1] Noone, D., Galewsky, J., et al. (2011), JGR, 116, D22113. [2] Galewsky, J., Rella, C., et al. (2011), GRL, 38, L17803. [3] Tremoy, G., Vimeux, F., et al. (2012), GRL, 39, L08805. [4] Sturm, C., Zhang, Q. and Noone, D. (2010), Clim. Past, 6, 115-129.

  20. Impacts of nitrogen addition on plant biodiversity in mountain grasslands depend on dose, application duration and climate: a systematic review.

    PubMed

    Humbert, Jean-Yves; Dwyer, John M; Andrey, Aline; Arlettaz, Raphaël

    2016-01-01

    Although the influence of nitrogen (N) addition on grassland plant communities has been widely studied, it is still unclear whether observed patterns and underlying mechanisms are constant across biomes. In this systematic review, we use meta-analysis and metaregression to investigate the influence of N addition (here referring mostly to fertilization) upon the biodiversity of temperate mountain grasslands (including montane, subalpine and alpine zones). Forty-two studies met our criteria of inclusion, resulting in 134 measures of effect size. The main general responses of mountain grasslands to N addition were increases in phytomass and reductions in plant species richness, as observed in lowland grasslands. More specifically, the analysis reveals that negative effects on species richness were exacerbated by dose (kg N ha-1 year-1) and duration of N application (years) in an additive manner. Thus, sustained application of low to moderate levels of N over time had effects similar to short-term application of high N doses. The climatic context also played an important role: the overall effects of N addition on plant species richness and diversity (Shannon index) were less pronounced in mountain grasslands experiencing cool rather than warm summers. Furthermore, the relative negative effect of N addition on species richness was more pronounced in managed communities and was strongly negatively related to N-induced increases in phytomass, that is the greater the phytomass response to N addition, the greater the decline in richness. Altogether, this review not only establishes that plant biodiversity of mountain grasslands is negatively affected by N addition, but also demonstrates that several local management and abiotic factors interact with N addition to drive plant community changes. This synthesis yields essential information for a more sustainable management of mountain grasslands, emphasizing the importance of preserving and restoring grasslands with both low

  1. A systematic approach of tracking and reporting medication errors at a tertiary care university hospital, Karachi, Pakistan

    PubMed Central

    Khowaja, Khurshid; Nizar, Rozmin; Merchant, Rashida J; Dias, Jacqueline; Bustamante-Gavino, Irma; Malik, Amina

    2008-01-01

    Introduction: Administering medication is one of the high risk areas for any health professional. It is a multidisciplinary process, which begins with the doctor’s prescription, followed by review and provision by a pharmacist, and ends with preparation and administration by a nurse. Several studies have highlighted a high medication incident rate at several healthcare institutions. Methods: Our study design was exploratory and evaluative and used methodological triangulation. Sample size was of two types. First, a convenient sample of 1000 medication dosages to estimate the medication error (95% CI). We took another sample from subjects involved in medication usage processes such as physicians, nurses, pharmacists, and patients. Two sets of instruments were designed via extensive literature review: a medication tracking error form and a focus group interview questionnaire. Results: Our study findings revealed 100% compliance with a computerized physician order entry (CPOE) system by physicians, nurses, and pharmacists. The overall error rate was 5.5%, with pharmacists contributing a higher error rate of 2.6%, followed by nurses (1.1%) and physicians (1%). Major areas for improvement in error rates were identified: delay in medication delivery, and lab results reviewed electronically before prescription, dispensing, and administration. PMID:19209247

  2. Meta-analysis of gene-environment-wide association scans accounting for education level identifies additional loci for refractive error.

    PubMed

    Fan, Qiao; Verhoeven, Virginie J M; Wojciechowski, Robert; Barathi, Veluchamy A; Hysi, Pirro G; Guggenheim, Jeremy A; Höhn, René; Vitart, Veronique; Khawaja, Anthony P; Yamashiro, Kenji; Hosseini, S Mohsen; Lehtimäki, Terho; Lu, Yi; Haller, Toomas; Xie, Jing; Delcourt, Cécile; Pirastu, Mario; Wedenoja, Juho; Gharahkhani, Puya; Venturini, Cristina; Miyake, Masahiro; Hewitt, Alex W; Guo, Xiaobo; Mazur, Johanna; Huffman, Jenifer E; Williams, Katie M; Polasek, Ozren; Campbell, Harry; Rudan, Igor; Vatavuk, Zoran; Wilson, James F; Joshi, Peter K; McMahon, George; St Pourcain, Beate; Evans, David M; Simpson, Claire L; Schwantes-An, Tae-Hwi; Igo, Robert P; Mirshahi, Alireza; Cougnard-Gregoire, Audrey; Bellenguez, Céline; Blettner, Maria; Raitakari, Olli; Kähönen, Mika; Seppala, Ilkka; Zeller, Tanja; Meitinger, Thomas; Ried, Janina S; Gieger, Christian; Portas, Laura; van Leeuwen, Elisabeth M; Amin, Najaf; Uitterlinden, André G; Rivadeneira, Fernando; Hofman, Albert; Vingerling, Johannes R; Wang, Ya Xing; Wang, Xu; Tai-Hui Boh, Eileen; Ikram, M Kamran; Sabanayagam, Charumathi; Gupta, Preeti; Tan, Vincent; Zhou, Lei; Ho, Candice E H; Lim, Wan'e; Beuerman, Roger W; Siantar, Rosalynn; Tai, E-Shyong; Vithana, Eranga; Mihailov, Evelin; Khor, Chiea-Chuen; Hayward, Caroline; Luben, Robert N; Foster, Paul J; Klein, Barbara E K; Klein, Ronald; Wong, Hoi-Suen; Mitchell, Paul; Metspalu, Andres; Aung, Tin; Young, Terri L; He, Mingguang; Pärssinen, Olavi; van Duijn, Cornelia M; Jin Wang, Jie; Williams, Cathy; Jonas, Jost B; Teo, Yik-Ying; Mackey, David A; Oexle, Konrad; Yoshimura, Nagahisa; Paterson, Andrew D; Pfeiffer, Norbert; Wong, Tien-Yin; Baird, Paul N; Stambolian, Dwight; Wilson, Joan E Bailey; Cheng, Ching-Yu; Hammond, Christopher J; Klaver, Caroline C W; Saw, Seang-Mei; Rahi, Jugnoo S; Korobelnik, Jean-François; Kemp, John P; Timpson, Nicholas J; Smith, George Davey; Craig, Jamie E; Burdon, Kathryn P; Fogarty, Rhys D; Iyengar, Sudha K; Chew, Emily; Janmahasatian, Sarayut; Martin, Nicholas G; MacGregor, Stuart; Xu, Liang; Schache, Maria; Nangia, Vinay; Panda-Jonas, Songhomitra; Wright, Alan F; Fondran, Jeremy R; Lass, Jonathan H; Feng, Sheng; Zhao, Jing Hua; Khaw, Kay-Tee; Wareham, Nick J; Rantanen, Taina; Kaprio, Jaakko; Pang, Chi Pui; Chen, Li Jia; Tam, Pancy O; Jhanji, Vishal; Young, Alvin L; Döring, Angela; Raffel, Leslie J; Cotch, Mary-Frances; Li, Xiaohui; Yip, Shea Ping; Yap, Maurice K H; Biino, Ginevra; Vaccargiu, Simona; Fossarello, Maurizio; Fleck, Brian; Yazar, Seyhan; Tideman, Jan Willem L; Tedja, Milly; Deangelis, Margaret M; Morrison, Margaux; Farrer, Lindsay; Zhou, Xiangtian; Chen, Wei; Mizuki, Nobuhisa; Meguro, Akira; Mäkelä, Kari Matti

    2016-03-29

    Myopia is the most common human eye disorder and it results from complex genetic and environmental causes. The rapidly increasing prevalence of myopia poses a major public health challenge. Here, the CREAM consortium performs a joint meta-analysis to test single-nucleotide polymorphism (SNP) main effects and SNP × education interaction effects on refractive error in 40,036 adults from 25 studies of European ancestry and 10,315 adults from 9 studies of Asian ancestry. In European ancestry individuals, we identify six novel loci (FAM150B-ACP1, LINC00340, FBN1, DIS3L-MAP2K1, ARID2-SNAT1 and SLC14A2) associated with refractive error. In Asian populations, three genome-wide significant loci AREG, GABRR1 and PDE10A also exhibit strong interactions with education (P<8.5 × 10(-5)), whereas the interactions are less evident in Europeans. The discovery of these loci represents an important advance in understanding how gene and environment interactions contribute to the heterogeneity of myopia.

  3. Meta-analysis of gene–environment-wide association scans accounting for education level identifies additional loci for refractive error

    PubMed Central

    Fan, Qiao; Verhoeven, Virginie J. M.; Wojciechowski, Robert; Barathi, Veluchamy A.; Hysi, Pirro G.; Guggenheim, Jeremy A.; Höhn, René; Vitart, Veronique; Khawaja, Anthony P.; Yamashiro, Kenji; Hosseini, S Mohsen; Lehtimäki, Terho; Lu, Yi; Haller, Toomas; Xie, Jing; Delcourt, Cécile; Pirastu, Mario; Wedenoja, Juho; Gharahkhani, Puya; Venturini, Cristina; Miyake, Masahiro; Hewitt, Alex W.; Guo, Xiaobo; Mazur, Johanna; Huffman, Jenifer E.; Williams, Katie M.; Polasek, Ozren; Campbell, Harry; Rudan, Igor; Vatavuk, Zoran; Wilson, James F.; Joshi, Peter K.; McMahon, George; St Pourcain, Beate; Evans, David M.; Simpson, Claire L.; Schwantes-An, Tae-Hwi; Igo, Robert P.; Mirshahi, Alireza; Cougnard-Gregoire, Audrey; Bellenguez, Céline; Blettner, Maria; Raitakari, Olli; Kähönen, Mika; Seppala, Ilkka; Zeller, Tanja; Meitinger, Thomas; Ried, Janina S.; Gieger, Christian; Portas, Laura; van Leeuwen, Elisabeth M.; Amin, Najaf; Uitterlinden, André G.; Rivadeneira, Fernando; Hofman, Albert; Vingerling, Johannes R.; Wang, Ya Xing; Wang, Xu; Tai-Hui Boh, Eileen; Ikram, M. Kamran; Sabanayagam, Charumathi; Gupta, Preeti; Tan, Vincent; Zhou, Lei; Ho, Candice E. H.; Lim, Wan'e; Beuerman, Roger W.; Siantar, Rosalynn; Tai, E-Shyong; Vithana, Eranga; Mihailov, Evelin; Khor, Chiea-Chuen; Hayward, Caroline; Luben, Robert N.; Foster, Paul J.; Klein, Barbara E. K.; Klein, Ronald; Wong, Hoi-Suen; Mitchell, Paul; Metspalu, Andres; Aung, Tin; Young, Terri L.; He, Mingguang; Pärssinen, Olavi; van Duijn, Cornelia M.; Jin Wang, Jie; Williams, Cathy; Jonas, Jost B.; Teo, Yik-Ying; Mackey, David A.; Oexle, Konrad; Yoshimura, Nagahisa; Paterson, Andrew D.; Pfeiffer, Norbert; Wong, Tien-Yin; Baird, Paul N.; Stambolian, Dwight; Wilson, Joan E. Bailey; Cheng, Ching-Yu; Hammond, Christopher J.; Klaver, Caroline C. W.; Saw, Seang-Mei; Rahi, Jugnoo S.; Korobelnik, Jean-François; Kemp, John P.; Timpson, Nicholas J.; Smith, George Davey; Craig, Jamie E.; Burdon, Kathryn P.; Fogarty, Rhys D.; Iyengar, Sudha K.; Chew, Emily; Janmahasatian, Sarayut; Martin, Nicholas G.; MacGregor, Stuart; Xu, Liang; Schache, Maria; Nangia, Vinay; Panda-Jonas, Songhomitra; Wright, Alan F.; Fondran, Jeremy R.; Lass, Jonathan H.; Feng, Sheng; Zhao, Jing Hua; Khaw, Kay-Tee; Wareham, Nick J.; Rantanen, Taina; Kaprio, Jaakko; Pang, Chi Pui; Chen, Li Jia; Tam, Pancy O.; Jhanji, Vishal; Young, Alvin L.; Döring, Angela; Raffel, Leslie J.; Cotch, Mary-Frances; Li, Xiaohui; Yip, Shea Ping; Yap, Maurice K.H.; Biino, Ginevra; Vaccargiu, Simona; Fossarello, Maurizio; Fleck, Brian; Yazar, Seyhan; Tideman, Jan Willem L.; Tedja, Milly; Deangelis, Margaret M.; Morrison, Margaux; Farrer, Lindsay; Zhou, Xiangtian; Chen, Wei; Mizuki, Nobuhisa; Meguro, Akira; Mäkelä, Kari Matti

    2016-01-01

    Myopia is the most common human eye disorder and it results from complex genetic and environmental causes. The rapidly increasing prevalence of myopia poses a major public health challenge. Here, the CREAM consortium performs a joint meta-analysis to test single-nucleotide polymorphism (SNP) main effects and SNP × education interaction effects on refractive error in 40,036 adults from 25 studies of European ancestry and 10,315 adults from 9 studies of Asian ancestry. In European ancestry individuals, we identify six novel loci (FAM150B-ACP1, LINC00340, FBN1, DIS3L-MAP2K1, ARID2-SNAT1 and SLC14A2) associated with refractive error. In Asian populations, three genome-wide significant loci AREG, GABRR1 and PDE10A also exhibit strong interactions with education (P<8.5 × 10−5), whereas the interactions are less evident in Europeans. The discovery of these loci represents an important advance in understanding how gene and environment interactions contribute to the heterogeneity of myopia. PMID:27020472

  4. A New Approach to Detection of Systematic Errors in Secondary Substation Monitoring Equipment Based on Short Term Load Forecasting

    PubMed Central

    Moriano, Javier; Rodríguez, Francisco Javier; Martín, Pedro; Jiménez, Jose Antonio; Vuksanovic, Branislav

    2016-01-01

    In recent years, Secondary Substations (SSs) are being provided with equipment that allows their full management. This is particularly useful not only for monitoring and planning purposes but also for detecting erroneous measurements, which could negatively affect the performance of the SS. On the other hand, load forecasting is extremely important since it helps electricity companies to make crucial decisions regarding purchasing and generating electric power, load switching, and infrastructure development. In this regard, Short Term Load Forecasting (STLF) allows the electric power load to be predicted over an interval ranging from one hour to one week. However, important issues concerning error detection by employing STLF have not been specifically addressed until now. This paper proposes a novel STLF-based approach to the detection of gain and offset errors introduced by the measurement equipment. The implemented system has been tested against real power load data provided by electricity suppliers. Different gain and offset error levels are successfully detected. PMID:26771613
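
    As a minimal sketch of the idea (not the STLF model or thresholds used in the paper), one can regress the measured load against the forecast load over a window and flag gains far from 1 or offsets far from 0; all numbers below are hypothetical.

        import numpy as np

        def estimate_gain_offset(forecast_load, measured_load):
            """Least-squares fit of measured = gain * forecast + offset; with a
            trustworthy short-term load forecast, a gain far from 1 or a large
            offset points to a metering error rather than a real load change."""
            A = np.vstack([np.asarray(forecast_load, float),
                           np.ones(len(forecast_load))]).T
            (gain, offset), *_ = np.linalg.lstsq(A, np.asarray(measured_load, float),
                                                 rcond=None)
            return gain, offset

        gain, offset = estimate_gain_offset(forecast_load=[310, 295, 280, 330, 360],
                                            measured_load=[341, 326, 309, 362, 395])
        suspect = abs(gain - 1.0) > 0.05 or abs(offset) > 20.0   # hypothetical thresholds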

  5. Systematic analysis of video data from different human–robot interaction studies: a categorization of social signals during error situations

    PubMed Central

    Giuliani, Manuel; Mirnig, Nicole; Stollnberger, Gerald; Stadler, Susanne; Buchner, Roland; Tscheligi, Manfred

    2015-01-01

    Human–robot interactions are often affected by error situations that are caused by either the robot or the human. Therefore, robots would profit from the ability to recognize when error situations occur. We investigated the verbal and non-verbal social signals that humans show when error situations occur in human–robot interaction experiments. For that, we analyzed 201 videos of five human–robot interaction user studies with varying tasks from four independent projects. The analysis shows that there are two types of error situations: social norm violations and technical failures. Social norm violations are situations in which the robot does not adhere to the underlying social script of the interaction. Technical failures are caused by technical shortcomings of the robot. The results of the video analysis show that the study participants use many head movements and very few gestures, but they often smile, when in an error situation with the robot. Another result is that the participants sometimes stop moving at the beginning of error situations. We also found that the participants talked more in the case of social norm violations and less during technical failures. Finally, the participants use fewer non-verbal social signals (for example smiling, nodding, and head shaking), when they are interacting with the robot alone and no experimenter or other human is present. The results suggest that participants do not see the robot as a social interaction partner with comparable communication skills. Our findings have implications for builders and evaluators of human–robot interaction systems. The builders need to consider including modules for recognition and classification of head movements to the robot input channels. The evaluators need to make sure that the presence of an experimenter does not skew the results of their user studies. PMID:26217266

  6. Systematic analysis of video data from different human-robot interaction studies: a categorization of social signals during error situations.

    PubMed

    Giuliani, Manuel; Mirnig, Nicole; Stollnberger, Gerald; Stadler, Susanne; Buchner, Roland; Tscheligi, Manfred

    2015-01-01

    Human-robot interactions are often affected by error situations that are caused by either the robot or the human. Therefore, robots would profit from the ability to recognize when error situations occur. We investigated the verbal and non-verbal social signals that humans show when error situations occur in human-robot interaction experiments. For that, we analyzed 201 videos of five human-robot interaction user studies with varying tasks from four independent projects. The analysis shows that there are two types of error situations: social norm violations and technical failures. Social norm violations are situations in which the robot does not adhere to the underlying social script of the interaction. Technical failures are caused by technical shortcomings of the robot. The results of the video analysis show that the study participants use many head movements and very few gestures, but they often smile, when in an error situation with the robot. Another result is that the participants sometimes stop moving at the beginning of error situations. We also found that the participants talked more in the case of social norm violations and less during technical failures. Finally, the participants use fewer non-verbal social signals (for example smiling, nodding, and head shaking), when they are interacting with the robot alone and no experimenter or other human is present. The results suggest that participants do not see the robot as a social interaction partner with comparable communication skills. Our findings have implications for builders and evaluators of human-robot interaction systems. The builders need to consider including modules for recognition and classification of head movements to the robot input channels. The evaluators need to make sure that the presence of an experimenter does not skew the results of their user studies.

  7. Fast and robust population transfer in two-level quantum systems with dephasing noise and/or systematic frequency errors

    NASA Astrophysics Data System (ADS)

    Lu, Xiao-Jing; Chen, Xi; Ruschhaupt, A.; Alonso, D.; Guérin, S.; Muga, J. G.

    2013-09-01

    We design, by invariant-based inverse engineering, driving fields that invert the population of a two-level atom in a given time, robustly with respect to dephasing noise and/or systematic frequency shifts. Without imposing constraints, optimal protocols are insensitive to the perturbations but need an infinite energy. For a constrained value of the Rabi frequency, a flat π pulse is the least sensitive protocol to phase noise but not to systematic frequency shifts, for which we describe and optimize a family of protocols.
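
    The sensitivity of a flat π pulse to a systematic frequency error can be read off the standard Rabi formula; the sketch below evaluates the residual population-transfer error versus detuning (no dephasing is included, so this illustrates only the frequency-offset part of the problem).

        import numpy as np

        def pi_pulse_inversion(delta_over_omega):
            """Excited-state population after a flat pi pulse as a function of
            the systematic frequency error Delta in units of the Rabi
            frequency Omega (resonant Rabi formula, no noise)."""
            d = np.asarray(delta_over_omega, float)
            omega_eff = np.sqrt(1.0 + d ** 2)          # generalized Rabi frequency / Omega
            return np.sin(np.pi * omega_eff / 2.0) ** 2 / omega_eff ** 2

        # A 10% systematic frequency error already costs about 1% of the transfer
        print(1.0 - pi_pulse_inversion(0.1))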

  8. Toward reducing systematic errors in NWP - cross-evaluation of common physics from 6h-regional to 6d-global to 6mon-coupled applications

    NASA Astrophysics Data System (ADS)

    Benjamin, S.

    2015-12-01

    An integrated evaluation system based on gridded data and observations is being applied to global models (FIM, GFS) and regional models (WRF-ARW applications for RAP/HRRR). An overview will be presented on wind, relative humidity, and temperature model errors as measured against rawinsonde and aircraft observations in common at 12h forecast duration for global and regional models. Systematic errors common to both applications will be presented. A common problem with deficient cloud cover has been evident in both 6h (3km HRRR-WRF-ARW) regional forecasts and 6-month coupled-global (FIM-HYCOM) forecasts, allowing improvements in a common deep/shallow convection scheme (Grell-Freitas) with subgrid-scale clouds to be evaluated across time scales.

  9. Impact of random and systematic recall errors and selection bias in case-control studies on mobile phone use and brain tumors in adolescents (CEFALO study).

    PubMed

    Aydin, Denis; Feychting, Maria; Schüz, Joachim; Andersen, Tina Veje; Poulsen, Aslak Harbo; Prochazka, Michaela; Klaeboe, Lars; Kuehni, Claudia E; Tynes, Tore; Röösli, Martin

    2011-07-01

    Whether the use of mobile phones is a risk factor for brain tumors in adolescents is currently being studied. Case-control studies investigating this possible relationship are prone to recall error and selection bias. We assessed the potential impact of random and systematic recall error and selection bias on odds ratios (ORs) by performing simulations based on real data from an ongoing case-control study of mobile phones and brain tumor risk in children and adolescents (CEFALO study). Simulations were conducted for two mobile phone exposure categories: regular and heavy use. Our choice of levels of recall error was guided by a validation study that compared objective network operator data with the self-reported amount of mobile phone use in CEFALO. In our validation study, cases overestimated their number of calls by 9% on average and controls by 34%. Cases also overestimated their duration of calls by 52% on average and controls by 163%. The participation rates in CEFALO were 83% for cases and 71% for controls. In a variety of scenarios, the combined impact of recall error and selection bias on the estimated ORs was complex. These simulations are useful for the interpretation of previous case-control studies on brain tumor and mobile phone use in adults as well as for the interpretation of future studies on adolescents.

  10. The effectiveness of computerized order entry at reducing preventable adverse drug events and medication errors in hospital settings: a systematic review and meta-analysis

    PubMed Central

    2014-01-01

    Background The Health Information Technology for Economic and Clinical Health (HITECH) Act subsidizes implementation by hospitals of electronic health records with computerized provider order entry (CPOE), which may reduce patient injuries caused by medication errors (preventable adverse drug events, pADEs). Effects on pADEs have not been rigorously quantified, and effects on medication errors have been variable. The objectives of this analysis were to assess the effectiveness of CPOE at reducing pADEs in hospital-related settings, and examine reasons for heterogeneous effects on medication errors. Methods Articles were identified using MEDLINE, Cochrane Library, Econlit, web-based databases, and bibliographies of previous systematic reviews (September 2013). Eligible studies compared CPOE with paper-order entry in acute care hospitals, and examined diverse pADEs or medication errors. Studies on children or with limited event-detection methods were excluded. Two investigators extracted data on events and factors potentially associated with effectiveness. We used random effects models to pool data. Results Sixteen studies addressing medication errors met pooling criteria; six also addressed pADEs. Thirteen studies used pre-post designs. Compared with paper-order entry, CPOE was associated with half as many pADEs (pooled risk ratio (RR) = 0.47, 95% CI 0.31 to 0.71) and medication errors (RR = 0.46, 95% CI 0.35 to 0.60). Regarding reasons for heterogeneous effects on medication errors, five intervention factors and two contextual factors were sufficiently reported to support subgroup analyses or meta-regression. Differences between commercial versus homegrown systems, presence and sophistication of clinical decision support, hospital-wide versus limited implementation, and US versus non-US studies were not significant, nor was timing of publication. Higher baseline rates of medication errors predicted greater reductions (P < 0.001). Other context and
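
    For readers unfamiliar with the pooling step, the sketch below illustrates a DerSimonian-Laird random-effects pooling of risk ratios of the kind used in such meta-analyses. The per-study event counts are invented placeholders, not data from this review.

        import math

        # Hypothetical per-study (events, total) for intervention and comparison arms.
        studies = [
            ((12, 1000), (30, 1000)),
            ((8, 500), (15, 500)),
            ((20, 2000), (35, 1800)),
        ]

        log_rr, var = [], []
        for (e1, n1), (e0, n0) in studies:
            log_rr.append(math.log((e1 / n1) / (e0 / n0)))
            var.append(1/e1 - 1/n1 + 1/e0 - 1/n0)   # approximate variance of log RR

        # Fixed-effect estimate and heterogeneity statistic Q
        w = [1/v for v in var]
        fixed = sum(wi*yi for wi, yi in zip(w, log_rr)) / sum(w)
        q = sum(wi*(yi - fixed)**2 for wi, yi in zip(w, log_rr))

        # DerSimonian-Laird between-study variance tau^2
        c = sum(w) - sum(wi**2 for wi in w) / sum(w)
        tau2 = max(0.0, (q - (len(studies) - 1)) / c)

        # Random-effects pooled risk ratio with 95% CI
        w_re = [1/(v + tau2) for v in var]
        mu = sum(wi*yi for wi, yi in zip(w_re, log_rr)) / sum(w_re)
        se = math.sqrt(1/sum(w_re))
        print("pooled RR = %.2f (95%% CI %.2f to %.2f)"
              % (math.exp(mu), math.exp(mu - 1.96*se), math.exp(mu + 1.96*se)))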

  11. Impact of contacting study authors to obtain additional data for systematic reviews: diagnostic accuracy studies for hepatic fibrosis

    PubMed Central

    2014-01-01

    Background Seventeen of 172 included studies in a recent systematic review of blood tests for hepatic fibrosis or cirrhosis reported diagnostic accuracy results discordant from 2 × 2 tables, and 60 studies reported inadequate data to construct 2 × 2 tables. This study explores the yield of contacting authors of diagnostic accuracy studies and impact on the systematic review findings. Methods Sixty-six corresponding authors were sent letters requesting additional information or clarification of data from 77 studies. Data received from the authors were synthesized with data included in the previous review, and diagnostic accuracy sensitivities, specificities, and positive and likelihood ratios were recalculated. Results Of the 66 authors, 68% were successfully contacted and 42% provided additional data for 29 out of 77 studies (38%). All authors who provided data at all did so by the third emailed request (ten authors provided data after one request). Authors of more recent studies were more likely to be located and provide data compared to authors of older studies. The effects of requests for additional data on the conclusions regarding the utility of blood tests to identify patients with clinically significant fibrosis or cirrhosis were generally small for ten out of 12 tests. Additional data resulted in reclassification (using median likelihood ratio estimates) from less useful to moderately useful or vice versa for the remaining two blood tests and enabled the calculation of an estimate for a third blood test for which previously the data had been insufficient to do so. We did not identify a clear pattern for the directional impact of additional data on estimates of diagnostic accuracy. Conclusions We successfully contacted and received results from 42% of authors who provided data for 38% of included studies. Contacting authors of studies evaluating the diagnostic accuracy of serum biomarkers for hepatic fibrosis and cirrhosis in hepatitis C patients

  12. The additional effect of orthotic devices on exercise therapy for patients with patellofemoral pain syndrome: a systematic review.

    PubMed

    Swart, Nynke M; van Linschoten, Robbart; Bierma-Zeinstra, Sita M A; van Middelkoop, Marienke

    2012-06-01

    The aim of the study is to determine the additional effect of orthotic devices over exercise therapy on pain and function in patients with patellofemoral pain syndrome (PFPS). A systematic literature search was conducted in MEDLINE, CINAHL, EMBASE, Cochrane and PEDro. Randomised controlled trials and controlled clinical trials of patients diagnosed with PFPS evaluating a clinically relevant outcome were included. Treatment had to include exercise therapy combined with orthotics, compared with an identical exercise programme with or without sham orthotics. Data were summarised using a best evidence synthesis. Eight trials fulfilled the inclusion criteria, of which three had a low risk of bias. There is moderate evidence for no additive effectiveness of knee braces to exercise therapy on pain (effect sizes (ES) varied from -0.14 to 0.04) and conflicting evidence on function (ES -0.33). There is moderate evidence for no difference between knee braces and exercise therapy versus placebo knee braces and exercise therapy on pain and function (ES -0.10 to 0.10). More studies of high methodological quality are needed to draw definitive conclusions.

  13. Elimination of 'ghost'-effect-related systematic error in metrology of X-ray optics with a long trace profiler

    SciTech Connect

    Yashchuk, Valeriy V.; Irick, Steve C.; MacDowell, Alastair A.

    2005-04-28

    A data acquisition technique and relevant program for suppression of one of the systematic effects, namely the 'ghost' effect, of a second generation long trace profiler (LTP) is described. The 'ghost' effect arises when there is an unavoidable cross-contamination of the LTP sample and reference signals into one another, leading to a systematic perturbation in the recorded interference patterns and, therefore, a systematic variation of the measured slope trace. Perturbations of about 1-2 µrad have been observed with a cylindrically shaped X-ray mirror. Even stronger 'ghost' effects show up in an LTP measurement with a mirror having a toroidal surface figure. The developed technique employs separate measurement of the 'ghost'-effect-related interference patterns in the sample and the reference arms and then subtraction of the 'ghost' patterns from the sample and the reference interference patterns. The procedure preserves the advantage of simultaneously measuring the sample and reference signals. The effectiveness of the technique is illustrated with LTP metrology of a variety of X-ray mirrors.

  14. Effectiveness of Barcoding for Reducing Patient Specimen and Laboratory Testing Identification Errors: A Laboratory Medicine Best Practices Systematic Review and Meta-Analysis

    PubMed Central

    Snyder, Susan R.; Favoretto, Alessandra M.; Derzon, James H.; Christenson, Robert; Kahn, Stephen; Shaw, Colleen; Baetz, Rich Ann; Mass, Diana; Fantz, Corrine; Raab, Stephen; Tanasijevic, Milenko; Liebow, Edward B.

    2015-01-01

    Objectives This is the first systematic review of the effectiveness of barcoding practices for reducing patient specimen and laboratory testing identification errors. Design and Methods The CDC-funded Laboratory Medicine Best Practices Initiative systematic review methods for quality improvement practices were used. Results A total of 17 observational studies reporting on barcoding systems are included in the body of evidence; 10 for patient specimens and 7 for point-of-care testing. All 17 studies favored barcoding, with meta-analysis mean odds ratios for barcoding systems of 4.39 (95% CI: 3.05 – 6.32) and for point-of-care testing of 5.93 (95% CI: 5.28 – 6.67). Conclusions Barcoding is effective for reducing patient specimen and laboratory testing identification errors in diverse hospital settings and is recommended as an evidence-based “best practice.” The overall strength of evidence rating is high and the effect size rating is substantial. Unpublished studies made an important contribution comprising almost half of the body of evidence. PMID:22750145

  15. Toward a Framework for Systematic Error Modeling of NASA Spaceborne Radar with NOAA/NSSL Ground Radar-Based National Mosaic QPE

    NASA Technical Reports Server (NTRS)

    Kirstetter, Pierre-Emmanuel; Hong, Y.; Gourley, J. J.; Chen, S.; Flamig, Z.; Zhang, J.; Howard, K.; Schwaller, M.; Petersen, W.; Amitai, E.

    2011-01-01

    Characterization of the error associated with satellite rainfall estimates is a necessary component of deterministic and probabilistic frameworks involving spaceborne passive and active microwave measurements for applications ranging from water budget studies to forecasting natural hazards related to extreme rainfall events. We focus here on the error structure of NASA's Tropical Rainfall Measurement Mission (TRMM) Precipitation Radar (PR) quantitative precipitation estimation (QPE) at ground level. The problem is addressed by comparison of PR QPEs with reference values derived from ground-based measurements using the NOAA/NSSL ground radar-based National Mosaic and QPE system (NMQ/Q2). A preliminary investigation of this subject has been carried out at the PR estimation scale (instantaneous and 5 km) using a three-month data sample in the southern part of the US. The primary contribution of this study is the presentation of the detailed steps required to derive a trustworthy reference rainfall dataset from Q2 at the PR pixel resolution. It relies on a bias correction and a radar quality index, both of which provide a basis to filter out the less trustworthy Q2 values. Several aspects of PR errors are revealed and quantified, including sensitivity to the processing steps with the reference rainfall, comparisons of rainfall detectability and rainfall rate distributions, spatial representativeness of error, and separation of systematic biases and random errors. The methodology and framework developed herein apply more generally to rainfall rate estimates from other sensors onboard low-earth orbiting satellites, such as microwave imagers and dual-wavelength radars like those of the Global Precipitation Measurement (GPM) mission.
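
    The abstract does not spell out how systematic biases and random errors are separated; a common convention, sketched below with synthetic placeholder data, is to treat the systematic part as an overall multiplicative bias and the random part as the residual scatter once that bias is removed.

        import numpy as np

        rng = np.random.default_rng(0)

        # Synthetic reference rain rates (mm/h) and satellite estimates with a
        # built-in 20% overestimation plus random scatter (placeholder data only).
        reference = rng.gamma(shape=2.0, scale=3.0, size=500)
        estimate = 1.2 * reference * rng.lognormal(mean=0.0, sigma=0.3, size=500)

        # Systematic component: overall multiplicative bias
        bias = estimate.sum() / reference.sum()

        # Random component: scatter of the bias-corrected estimates
        residual = estimate / bias - reference
        print(f"multiplicative bias ~ {bias:.2f}")
        print(f"random error (std of residuals) ~ {residual.std():.2f} mm/h")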

  16. A Learning-Based Wrapper Method to Correct Systematic Errors in Automatic Image Segmentation: Consistently Improved Performance in Hippocampus, Cortex and Brain Segmentation

    PubMed Central

    Wang, Hongzhi; Das, Sandhitsu R.; Suh, Jung Wook; Altinay, Murat; Pluta, John; Craige, Caryne; Avants, Brian; Yushkevich, Paul A.

    2011-01-01

    We propose a simple but generally applicable approach to improving the accuracy of automatic image segmentation algorithms relative to manual segmentations. The approach is based on the hypothesis that a large fraction of the errors produced by automatic segmentation are systematic, i.e., occur consistently from subject to subject, and serves as a wrapper method around a given host segmentation method. The wrapper method attempts to learn the intensity, spatial and contextual patterns associated with systematic segmentation errors produced by the host method on training data for which manual segmentations are available. The method then attempts to correct such errors in segmentations produced by the host method on new images. One practical use of the proposed wrapper method is to adapt existing segmentation tools, without explicit modification, to imaging data and segmentation protocols that are different from those on which the tools were trained and tuned. An open-source implementation of the proposed wrapper method is provided, and can be applied to a wide range of image segmentation problems. The wrapper method is evaluated with four host brain MRI segmentation methods: hippocampus segmentation using FreeSurfer (Fischl et al., 2002); hippocampus segmentation using multi-atlas label fusion (Artaechevarria et al., 2009); brain extraction using BET (Smith, 2002); and brain tissue segmentation using FAST (Zhang et al., 2001). The wrapper method generates 72%, 14%, 29% and 21% fewer erroneously segmented voxels than the respective host segmentation methods. In the hippocampus segmentation experiment with multi-atlas label fusion as the host method, the average Dice overlap between reference segmentations and segmentations produced by the wrapper method is 0.908 for normal controls and 0.893 for patients with mild cognitive impairment. Average Dice overlaps of 0.964, 0.905 and 0.951 are obtained for brain extraction, white matter segmentation and gray matter

  17. Adenomyomatosis of the gallbladder in childhood: A systematic review of the literature and an additional case report

    PubMed Central

    Parolini, Filippo; Indolfi, Giuseppe; Magne, Miguel Garcia; Salemme, Marianna; Cheli, Maurizio; Boroni, Giovanni; Alberti, Daniele

    2016-01-01

    AIM: To investigate the diagnostic and therapeutic assessment in children with adenomyomatosis of the gallbladder (AMG). METHODS: AMG is a degenerative disease characterized by a proliferation of the mucosal epithelium which deeply invaginates and extends into the thickened muscular layer of the gallbladder, causing intramural diverticula. Although AMG is found in up to 5% of cholecystectomy specimens in adult populations, this condition in childhood is extremely uncommon. The authors provide a detailed systematic review of the pediatric literature according to PRISMA guidelines, focusing on diagnostic and therapeutic assessment. An additional case of AMG is also presented. RESULTS: Five studies were finally included, encompassing 5 children with AMG. Analysis was extended to our additional 11-year-old patient, who presented with diffuse AMG and pancreatic acinar metaplasia of the gallbladder mucosa and was successfully managed with laparoscopic cholecystectomy. Mean age at presentation was 7.2 years. Unspecific abdominal pain was the commonest symptom. Abdominal ultrasound was performed on all patients, with a diagnostic accuracy of 100%. Five patients underwent cholecystectomy, and at follow-up were asymptomatic. In the remaining patient, completely asymptomatic at diagnosis, a conservative approach with monthly monitoring via ultrasonography was undertaken. CONCLUSION: Considering the remote but possible degeneration leading to cancer and the feasibility of laparoscopic cholecystectomy even in small children, evidence suggests that elective laparoscopic cholecystectomy represents the treatment of choice. Pre-operative evaluation of the extrahepatic biliary tree anatomy with cholangio-MRI is strongly recommended. PMID:27170933

  18. Towards a systematic assessment of errors in diffusion Monte Carlo calculations of semiconductors: Case study of zinc selenide and zinc oxide

    SciTech Connect

    Yu, Jaehyung; Wagner, Lucas K.; Ertekin, Elif

    2015-12-14

    The fixed node diffusion Monte Carlo (DMC) method has attracted interest in recent years as a way to calculate properties of solid materials with high accuracy. However, the framework for the calculation of properties such as total energies, atomization energies, and excited state energies is not yet fully established. Several outstanding questions remain as to the effect of pseudopotentials, the magnitude of the fixed node error, and the size of supercell finite size effects. Here, we consider in detail the semiconductors ZnSe and ZnO and carry out systematic studies to assess the magnitude of the energy differences arising from controlled and uncontrolled approximations in DMC. The former include time step errors and supercell finite size effects for ground and optically excited states, and the latter include pseudopotentials, the pseudopotential localization approximation, and the fixed node approximation. We find that for these compounds, the errors can be controlled to good precision using modern computational resources and that quantum Monte Carlo calculations using Dirac-Fock pseudopotentials can offer good estimates of both cohesive energy and the gap of these systems. We do however observe differences in calculated optical gaps that arise when different pseudopotentials are used.

  19. Understanding human management of automation errors

    PubMed Central

    McBride, Sara E.; Rogers, Wendy A.; Fisk, Arthur D.

    2013-01-01

    Automation has the potential to aid humans with a diverse set of tasks and support overall system performance. Automated systems are not always reliable, and when automation errs, humans must engage in error management, which is the process of detecting, understanding, and correcting errors. However, this process of error management in the context of human-automation interaction is not well understood. Therefore, we conducted a systematic review of the variables that contribute to error management. We examined relevant research in human-automation interaction and human error to identify critical automation, person, task, and emergent variables. We propose a framework for management of automation errors to incorporate and build upon previous models. Further, our analysis highlights variables that may be addressed through design and training to positively influence error management. Additional efforts to understand the error management process will contribute to automation designed and implemented to support safe and effective system performance. PMID:25383042

  20. Kinematic GPS solutions for aircraft trajectories: Identifying and minimizing systematic height errors associated with atmospheric propagation delays

    USGS Publications Warehouse

    Shan, S.; Bevis, M.; Kendrick, E.; Mader, G.L.; Raleigh, D.; Hudnut, K.; Sartori, M.; Phillips, D.

    2007-01-01

    When kinematic GPS processing software is used to estimate the trajectory of an aircraft, unless the delays imposed on the GPS signals by the atmosphere are either estimated or calibrated via external observations, then vertical height errors of decimeters can occur. This problem is clearly manifested when the aircraft is positioned against multiple base stations in areas of pronounced topography because the aircraft height solutions obtained using different base stations will tend to be mutually offset, or biased, in proportion to the elevation differences between the base stations. When performing kinematic surveys in areas with significant topography it should be standard procedure to use multiple base stations, and to separate them vertically to the maximum extent possible, since it will then be much easier to detect mis-modeling of the atmosphere. Copyright 2007 by the American Geophysical Union.

  1. Analysis and mitigation of systematic errors in spectral shearing interferometry of pulses approaching the single-cycle limit [Invited]

    SciTech Connect

    Birge, Jonathan R.; Kaertner, Franz X.

    2008-06-15

    We derive an analytical approximation for the measured pulse width error in spectral shearing methods, such as spectral phase interferometry for direct electric-field reconstruction (SPIDER), caused by an anomalous delay between the two sheared pulse components. This analysis suggests that, as pulses approach the single-cycle limit, the resulting requirements on the calibration and stability of this delay become significant, requiring precision orders of magnitude higher than the scale of a wavelength. This is demonstrated by numerical simulations of SPIDER pulse reconstruction using actual data from a sub-two-cycle laser. We briefly propose methods to minimize the effects of this sensitivity in SPIDER and review variants of spectral shearing that attempt to avoid this difficulty.

  2. Benzodiazepine Use During Hospitalization: Automated Identification of Potential Medication Errors and Systematic Assessment of Preventable Adverse Events

    PubMed Central

    Niedrig, David Franklin; Hoppe, Liesa; Mächler, Sarah; Russmann, Heike; Russmann, Stefan

    2016-01-01

    Objective Benzodiazepines and “Z-drug” GABA-receptor modulators (BDZ) are among the most frequently used drugs in hospitals. Adverse drug events (ADE) associated with BDZ can be the result of preventable medication errors (ME) related to dosing, drug interactions and comorbidities. The present study evaluated inpatient use of BDZ and related ME and ADE. Methods We conducted an observational study within a pharmacoepidemiological database derived from the clinical information system of a tertiary care hospital. We developed algorithms that identified dosing errors and interacting comedication for all administered BDZ. Associated ADE and risk factors were validated in medical records. Results Among 53,081 patients contributing 495,813 patient-days BDZ were administered to 25,626 patients (48.3%) on 115,150 patient-days (23.2%). We identified 3,372 patient-days (2.9%) with comedication that inhibits BDZ metabolism, and 1,197 (1.0%) with lorazepam administration in severe renal impairment. After validation we classified 134, 56, 12, and 3 cases involving lorazepam, zolpidem, midazolam and triazolam, respectively, as clinically relevant ME. Among those there were 23 cases with associated adverse drug events, including severe CNS-depression, falls with subsequent injuries and severe dyspnea. Causality for BDZ was formally assessed as ‘possible’ or ‘probable’ in 20 of those cases. Four cases with ME and associated severe ADE required administration of the BDZ antagonist flumazenil. Conclusions BDZ use was remarkably high in the studied setting, frequently involved potential ME related to dosing, co-medication and comorbidities, and rarely cases with associated ADE. We propose the implementation of automated ME screening and validation for the prevention of BDZ-related ADE. PMID:27711224

  3. Systematic errors in detecting biased agonism: Analysis of current methods and development of a new model-free approach

    PubMed Central

    Onaran, H. Ongun; Ambrosio, Caterina; Uğur, Özlem; Madaras Koncz, Erzsebet; Grò, Maria Cristina; Vezzi, Vanessa; Rajagopal, Sudarshan; Costa, Tommaso

    2017-01-01

    Discovering biased agonists requires a method that can reliably distinguish the bias in signalling due to unbalanced activation of diverse transduction proteins from that of differential amplification inherent to the system being studied, which invariably results from the non-linear nature of biological signalling networks and their measurement. We have systematically compared the performance of seven methods of bias diagnostics, all of which are based on the analysis of concentration-response curves of ligands according to classical receptor theory. We computed bias factors for a number of β-adrenergic agonists by comparing BRET assays of receptor-transducer interactions with Gs, Gi and arrestin. Using the same ligands, we also compared responses at signalling steps originated from the same receptor-transducer interaction, among which no biased efficacy is theoretically possible. In either case, we found a high level of false positive results and a general lack of correlation among methods. Altogether this analysis shows that all tested methods, including some of the most widely used in the literature, fail to distinguish true ligand bias from “system bias” with confidence. We also propose two novel semi quantitative methods of bias diagnostics that appear to be more robust and reliable than currently available strategies. PMID:28290478

  4. Systematic errors in detecting biased agonism: Analysis of current methods and development of a new model-free approach.

    PubMed

    Onaran, H Ongun; Ambrosio, Caterina; Uğur, Özlem; Madaras Koncz, Erzsebet; Grò, Maria Cristina; Vezzi, Vanessa; Rajagopal, Sudarshan; Costa, Tommaso

    2017-03-14

    Discovering biased agonists requires a method that can reliably distinguish the bias in signalling due to unbalanced activation of diverse transduction proteins from that of differential amplification inherent to the system being studied, which invariably results from the non-linear nature of biological signalling networks and their measurement. We have systematically compared the performance of seven methods of bias diagnostics, all of which are based on the analysis of concentration-response curves of ligands according to classical receptor theory. We computed bias factors for a number of β-adrenergic agonists by comparing BRET assays of receptor-transducer interactions with Gs, Gi and arrestin. Using the same ligands, we also compared responses at signalling steps originated from the same receptor-transducer interaction, among which no biased efficacy is theoretically possible. In either case, we found a high level of false positive results and a general lack of correlation among methods. Altogether this analysis shows that all tested methods, including some of the most widely used in the literature, fail to distinguish true ligand bias from "system bias" with confidence. We also propose two novel semi quantitative methods of bias diagnostics that appear to be more robust and reliable than currently available strategies.

  5. Adaptation to random and systematic errors: Comparison of amputee and non-amputee control interfaces with varying levels of process noise

    PubMed Central

    Kording, Konrad P.; Hargrove, Levi J.; Sensinger, Jonathon W.

    2017-01-01

    The objective of this study was to understand how people adapt to errors when using a myoelectric control interface. We compared adaptation across 1) non-amputee subjects using joint angle, joint torque, and myoelectric control interfaces, and 2) amputee subjects using myoelectric control interfaces with residual and intact limbs (five total control interface conditions). We measured trial-by-trial adaptation to self-generated errors and random perturbations during a virtual, single degree-of-freedom task with two levels of feedback uncertainty, and evaluated adaptation by fitting a hierarchical Kalman filter model. We have two main results. First, adaptation to random perturbations was similar across all control interfaces, whereas adaptation to self-generated errors differed. These patterns matched predictions of our model, which was fit to each control interface by changing the process noise parameter that represented system variability. Second, in amputee subjects, we found similar adaptation rates and error levels between residual and intact limbs. These results link prosthesis control to broader areas of motor learning and adaptation and provide a useful model of adaptation with myoelectric control. The model of adaptation will help us understand and solve prosthesis control challenges, such as providing additional sensory feedback. PMID:28301512
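
    The paper's hierarchical Kalman filter is not reproduced here; the sketch below is a minimal single-state Kalman filter of trial-by-trial adaptation in which the process-noise variance q plays the role described above for system variability. All parameter values are illustrative only.

        import random

        def adaptation_rate(q, r, n_trials=200, seed=1):
            # q: process-noise variance (system variability)
            # r: feedback-noise variance (feedback uncertainty)
            random.seed(seed)
            x_hat, p = 0.0, 1.0            # state estimate and its variance
            target, k = 0.0, 0.0
            for _ in range(n_trials):
                target += random.gauss(0.0, q ** 0.5)      # hidden drift of the system
                y = target + random.gauss(0.0, r ** 0.5)   # noisy error feedback
                p += q                                     # predict
                k = p / (p + r)                            # Kalman gain = adaptation rate
                x_hat += k * (y - x_hat)                   # correct using the observed error
                p *= (1 - k)
            return k

        # A more variable interface (larger process noise) yields a larger steady-state
        # gain, i.e. stronger trial-by-trial adaptation to self-generated errors.
        print(round(adaptation_rate(q=0.01, r=0.5), 3))
        print(round(adaptation_rate(q=0.10, r=0.5), 3))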

  6. Bacterial Cooperation Causes Systematic Errors in Pathogen Risk Assessment due to the Failure of the Independent Action Hypothesis

    PubMed Central

    Cornforth, Daniel M.; Matthews, Andrew; Brown, Sam P.; Raymond, Ben

    2015-01-01

    The Independent Action Hypothesis (IAH) states that pathogenic individuals (cells, spores, virus particles etc.) behave independently of each other, so that each has an independent probability of causing systemic infection or death. The IAH is not just of basic scientific interest; it forms the basis of our current estimates of infectious disease risk in humans. Despite the important role of the IAH in managing disease interventions for food and water-borne pathogens, experimental support for the IAH in bacterial pathogens is indirect at best. Moreover since the IAH was first proposed, cooperative behaviors have been discovered in a wide range of microorganisms, including many pathogens. A fundamental principle of cooperation is that the fitness of individuals is affected by the presence and behaviors of others, which is contrary to the assumption of independent action. In this paper, we test the IAH in Bacillus thuringiensis (B.t), a widely occurring insect pathogen that releases toxins that benefit others in the inoculum, infecting the diamondback moth, Plutella xylostella. By experimentally separating B.t. spores from their toxins, we demonstrate that the IAH fails because there is an interaction between toxin and spore effects on mortality, where the toxin effect is synergistic and cannot be accommodated by independence assumptions. Finally, we show that applying recommended IAH dose-response models to high dose data leads to systematic overestimation of mortality risks at low doses, due to the presence of synergistic pathogen interactions. Our results show that cooperative secretions can easily invalidate the IAH, and that such mechanistic details should be incorporated into pathogen risk analysis. PMID:25909384

  7. Additive Synergism between Asbestos and Smoking in Lung Cancer Risk: A Systematic Review and Meta-Analysis

    PubMed Central

    Ngamwong, Yuwadee; Tangamornsuksan, Wimonchat; Lohitnavy, Ornrat; Chaiyakunapruk, Nathorn; Scholfield, C. Norman; Reisfeld, Brad; Lohitnavy, Manupat

    2015-01-01

    Smoking and asbestos exposure are important risks for lung cancer. Several epidemiological studies have linked asbestos exposure and smoking to lung cancer. To reconcile and unify these results, we conducted a systematic review and meta-analysis to provide a quantitative estimate of the increased risk of lung cancer associated with asbestos exposure and cigarette smoking and to classify their interaction. Five electronic databases were searched from inception to May 2015 for observational studies on lung cancer. All case-control (N = 10) and cohort (N = 7) studies were included in the analysis. We calculated pooled odds ratios (ORs), relative risks (RRs) and 95% confidence intervals (CIs) using a random-effects model for the association of asbestos exposure and smoking with lung cancer. Lung cancer patients who were not exposed to asbestos and non-smoking (A-S-) were compared with: (i) asbestos-exposed and non-smoking (A+S-), (ii) non-exposure to asbestos and smoking (A-S+), and (iii) asbestos-exposed and smoking (A+S+). Our meta-analysis showed a significant difference in risk of developing lung cancer among asbestos-exposed and/or smoking workers compared to controls (A-S-); odds ratios for the disease (95% CI) were (i) 1.70 (A+S-, 1.31–2.21), (ii) 5.65 (A-S+, 3.38–9.42), (iii) 8.70 (A+S+, 5.8–13.10). The additive interaction index of synergy was 1.44 (95% CI = 1.26–1.77) and the multiplicative index = 0.91 (95% CI = 0.63–1.30). Corresponding values for cohort studies were 1.11 (95% CI = 1.00–1.28) and 0.51 (95% CI = 0.31–0.85). Our results point to an additive synergism for lung cancer with co-exposure to asbestos and cigarette smoking. Assessments of industrial health risks should take smoking and other airborne health risks into account when setting occupational asbestos exposure limits. PMID:26274395
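
    The two interaction indices quoted above can be reproduced from the pooled case-control odds ratios, assuming the usual definitions of Rothman's additive synergy index S and the multiplicative interaction ratio V:

        S = \frac{OR_{A+S+} - 1}{(OR_{A+S-} - 1) + (OR_{A-S+} - 1)}
          = \frac{8.70 - 1}{(1.70 - 1) + (5.65 - 1)} = \frac{7.70}{5.35} \approx 1.44

        V = \frac{OR_{A+S+}}{OR_{A+S-} \times OR_{A-S+}} = \frac{8.70}{1.70 \times 5.65} \approx 0.91

    S > 1 indicates a more-than-additive joint effect, while V ≈ 0.91 is consistent with no departure from a multiplicative model, matching the reported conclusion of additive synergism.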

  8. Additive Synergism between Asbestos and Smoking in Lung Cancer Risk: A Systematic Review and Meta-Analysis.

    PubMed

    Ngamwong, Yuwadee; Tangamornsuksan, Wimonchat; Lohitnavy, Ornrat; Chaiyakunapruk, Nathorn; Scholfield, C Norman; Reisfeld, Brad; Lohitnavy, Manupat

    2015-01-01

    Smoking and asbestos exposure are important risks for lung cancer. Several epidemiological studies have linked asbestos exposure and smoking to lung cancer. To reconcile and unify these results, we conducted a systematic review and meta-analysis to provide a quantitative estimate of the increased risk of lung cancer associated with asbestos exposure and cigarette smoking and to classify their interaction. Five electronic databases were searched from inception to May 2015 for observational studies on lung cancer. All case-control (N = 10) and cohort (N = 7) studies were included in the analysis. We calculated pooled odds ratios (ORs), relative risks (RRs) and 95% confidence intervals (CIs) using a random-effects model for the association of asbestos exposure and smoking with lung cancer. Lung cancer patients who were not exposed to asbestos and non-smoking (A-S-) were compared with: (i) asbestos-exposed and non-smoking (A+S-), (ii) non-exposure to asbestos and smoking (A-S+), and (iii) asbestos-exposed and smoking (A+S+). Our meta-analysis showed a significant difference in risk of developing lung cancer among asbestos-exposed and/or smoking workers compared to controls (A-S-); odds ratios for the disease (95% CI) were (i) 1.70 (A+S-, 1.31-2.21), (ii) 5.65 (A-S+, 3.38-9.42), (iii) 8.70 (A+S+, 5.8-13.10). The additive interaction index of synergy was 1.44 (95% CI = 1.26-1.77) and the multiplicative index = 0.91 (95% CI = 0.63-1.30). Corresponding values for cohort studies were 1.11 (95% CI = 1.00-1.28) and 0.51 (95% CI = 0.31-0.85). Our results point to an additive synergism for lung cancer with co-exposure to asbestos and cigarette smoking. Assessments of industrial health risks should take smoking and other airborne health risks into account when setting occupational asbestos exposure limits.

  9. Error Analysis

    NASA Astrophysics Data System (ADS)

    Scherer, Philipp O. J.

    Input data as well as the results of elementary operations have to be represented by machine numbers, the subset of real numbers which is used by the arithmetic unit of today's computers. Generally this generates rounding errors. This kind of numerical error can be avoided in principle by using arbitrary-precision arithmetic or symbolic algebra programs. But this is impractical in many cases due to the increase in computing time and memory requirements. Results from more complex operations like square roots or trigonometric functions can have even larger errors since series expansions have to be truncated and iterations accumulate the errors of the individual steps. In addition, the precision of input data from an experiment is limited. In this chapter we study the influence of numerical errors on the uncertainties of the calculated results and the stability of simple algorithms.
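
    A minimal, generic illustration of the rounding-error accumulation described in this chapter summary (the example is ours, not taken from the chapter):

        from decimal import Decimal

        # 0.1 has no exact binary floating-point representation, so repeatedly
        # adding it accumulates rounding error relative to the exact decimal sum.
        n = 10_000_000
        naive = 0.0
        for _ in range(n):
            naive += 0.1

        exact = Decimal("0.1") * n
        print(naive)                    # e.g. 999999.9998... rather than 1000000.0
        print(float(exact) - naive)     # accumulated rounding error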

  10. The SDSS-IV eBOSS: emission line galaxy catalogues at z ≈ 0.8 and study of systematic errors in the angular clustering

    NASA Astrophysics Data System (ADS)

    Delubac, T.; Raichoor, A.; Comparat, J.; Jouvel, S.; Kneib, J.-P.; Yèche, C.; Zou, H.; Brownstein, J. R.; Abdalla, F. B.; Dawson, K.; Jullo, E.; Myers, A. D.; Newman, J. A.; Percival, W. J.; Prada, F.; Ross, A. J.; Schneider, D. P.; Zhou, X.; Zhou, Z.; Zhu, G.

    2017-02-01

    We present two wide-field catalogues of photometrically selected emission line galaxies (ELGs) at z ≈ 0.8 covering about 2800 deg² over the south galactic cap. The catalogues were obtained using a Fisher discriminant technique described in a companion paper. The two catalogues differ by the imaging used to define the Fisher discriminant: the first catalogue includes imaging from the Sloan Digital Sky Survey and the Wide-field Infrared Survey Explorer, the second also includes information from the South Galactic Cap U-band Sky Survey. Containing respectively 560 045 and 615 601 objects, they represent the largest ELG catalogues available today and were designed for the ELG programme of the extended Baryon Oscillation Spectroscopic Survey (eBOSS). We study potential sources of systematic variation in the angular distribution of the selected ELGs due to fluctuations of the observational parameters. We model the influence of the observational parameters using a multivariate regression and implement a weighting scheme which allows effective removal of all of the systematic errors induced by the observational parameters. We show that fluctuations in the imaging zero-points of the photometric bands have minor impact on the angular distribution of objects in our catalogues. We compute the angular clustering of both catalogues and show that our weighting procedure effectively removes spurious clustering on large scales. We fit a model to the small-scale angular clustering, showing that the selections have similar biases of 1.35/Da(z) and 1.28/Da(z). Both catalogues are publicly available.
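
    The regression and weighting details are in the paper rather than the abstract; as a hedged sketch of the general idea, one can regress the observed target density in sky pixels on maps of the observational parameters and weight each object by the inverse of the predicted modulation. All variable names and data below are placeholders.

        import numpy as np

        rng = np.random.default_rng(42)

        # Placeholder per-pixel observational parameters (e.g. stellar density,
        # extinction, seeing) and observed galaxy counts with a built-in dependence.
        n_pix = 5000
        systematics = rng.normal(size=(n_pix, 3))
        counts = rng.poisson(20 * (1 + 0.05 * systematics[:, 0] - 0.03 * systematics[:, 2]))

        # Multivariate linear regression of normalized density on the systematics maps
        density = counts / counts.mean()
        X = np.column_stack([np.ones(n_pix), systematics])
        coef, *_ = np.linalg.lstsq(X, density, rcond=None)

        # Weight = 1 / predicted modulation; weighting flattens the spurious dependence
        weights = 1.0 / (X @ coef)
        before = np.corrcoef(density, systematics[:, 0])[0, 1]
        after = np.corrcoef(density * weights, systematics[:, 0])[0, 1]
        print(f"correlation with a systematic before/after weighting: {before:.3f} / {after:.3f}")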

  11. Effects of Active Student Response during Error Correction on the Acquisition, Maintenance, and Generalization of Science Vocabulary by Elementary Students: A Systematic Replication.

    ERIC Educational Resources Information Center

    Drevno, Gregg E.; And Others

    1994-01-01

    This study compared active student response (ASR) error correction and no-response (NR) error correction while teaching science terms to five elementary students. When a student erred, the teacher modeled the definition and the student either repeated it (ASR) or not (NR). ASR error correction was superior on each of seven dependent variables.…

  12. Addition of Ezetimibe to statins for patients at high cardiovascular risk: Systematic review of patient-important outcomes.

    PubMed

    Fei, Yutong; Guyatt, Gordon Henry; Alexander, Paul Elias; El Dib, Regina; Siemieniuk, Reed A C; Vandvik, Per Olav; Nunnally, Mark E; Gomaa, Huda; Morgan, Rebecca L; Agarwal, Arnav; Zhang, Ying; Bhatnagar, Neera; Spencer, Frederick A

    2017-01-16

    Ezetimibe is widely used in combination with statins to reduce low-density lipoprotein. We sought to examine the impact of ezetimibe when added to statins on patient-important outcomes. Medline, EMBASE, CINAHL, and CENTRAL were searched through July 2016. Randomized controlled trials (RCTs) of ezetimibe combined with statins versus statins alone that followed patients for at least 6 months and reported on at least one of all-cause mortality, cardiovascular deaths, non-fatal myocardial infarctions (MI), and non-fatal strokes were included. Pairs of reviewers extracted study data and assessed risk of bias independently and in duplicate. Quality of evidence was assessed using the GRADE approach. We conducted a narrative review with complementary subgroup and sensitivity analyses. The IMPROVE-IT study enrolled 93% of all patients enrolled in the 8 included trials. Our analysis of the IMPROVE-IT study results showed that in patients at high risk of cardiovascular events, ezetimibe added to statins was associated with i) a likely reduction in non-fatal MI (17 fewer/1000 treated over 6 years, moderate certainty in evidence); ii) a possible reduction in non-fatal stroke (6 fewer/1000 treated over 6 years, low certainty); iii) no impact on myopathy (moderate certainty); iv) potentially no impact on all-cause mortality and cardiovascular death (both moderate certainty); and v) possibly no impact on cancer (low certainty). Addition of ezetimibe to moderate-dose statins is likely to result in 17 fewer MIs and possibly 6 fewer strokes/1000 treated over 6 years but is unlikely to reduce all-cause mortality or cardiovascular death. Patients who place a high value on a small absolute reduction in MI and are not averse to the use of an additional medication over a long duration may opt for ezetimibe in addition to statin therapy. Our analysis revealed no increased specific harms associated with addition of ezetimibe to statins.

  13. Preventive zinc supplementation for children, and the effect of additional iron: a systematic review and meta-analysis

    PubMed Central

    Mayo-Wilson, Evan; Imdad, Aamer; Junior, Jean; Dean, Sohni; Bhutta, Zulfiqar A

    2014-01-01

    Objective Zinc deficiency is widespread, and preventive supplementation may have benefits in young children. Effects for children over 5 years of age, and effects when coadministered with other micronutrients are uncertain. These are obstacles to scale-up. This review seeks to determine if preventive supplementation reduces mortality and morbidity for children aged 6 months to 12 years. Design Systematic review conducted with the Cochrane Developmental, Psychosocial and Learning Problems Group. Two reviewers independently assessed studies. Meta-analyses were performed for mortality, illness and side effects. Data sources We searched multiple databases, including CENTRAL and MEDLINE in January 2013. Authors were contacted for missing information. Eligibility criteria for selecting studies Randomised trials of preventive zinc supplementation. Hospitalised children and children with chronic diseases were excluded. Results 80 randomised trials with 205 401 participants were included. There was a small but non-significant effect on all-cause mortality (risk ratio (RR) 0.95 (95% CI 0.86 to 1.05)). Supplementation may reduce incidence of all-cause diarrhoea (RR 0.87 (0.85 to 0.89)), but there was evidence of reporting bias. There was no evidence of an effect of incidence or prevalence of respiratory infections or malaria. There was moderate quality evidence of a very small effect on linear growth (standardised mean difference 0.09 (0.06 to 0.13)) and an increase in vomiting (RR 1.29 (1.14 to 1.46)). There was no evidence of an effect on iron status. Comparing zinc with and without iron cosupplementation and direct comparisons of zinc plus iron versus zinc administered alone favoured cointervention for some outcomes and zinc alone for other outcomes. Effects may be larger for children over 1 year of age, but most differences were not significant. Conclusions Benefits of preventive zinc supplementation may outweigh any potentially adverse effects in areas where

  14. Systematic Dissection of Coding Exons at Single Nucleotide Resolution Supports an Additional Role in Cell-Specific Transcriptional Regulation

    PubMed Central

    Kim, Mee J.; Findlay, Gregory M.; Martin, Beth; Zhao, Jingjing; Bell, Robert J. A.; Smith, Robin P.; Ku, Angel A.; Shendure, Jay; Ahituv, Nadav

    2014-01-01

    In addition to their protein coding function, exons can also serve as transcriptional enhancers. Mutations in these exonic-enhancers (eExons) could alter both protein function and transcription. However, the functional consequence of eExon mutations is not well known. Here, using massively parallel reporter assays, we dissect the enhancer activity of three liver eExons (SORL1 exon 17, TRAF3IP2 exon 2, PPARG exon 6) at single nucleotide resolution in the mouse liver. We find that both synonymous and non-synonymous mutations have similar effects on enhancer activity and many of the deleterious mutation clusters overlap known liver-associated transcription factor binding sites. Carrying out a similar massively parallel reporter assay with these three eExons in HeLa cells revealed differences in their mutation profiles compared to the liver, suggesting that enhancers could have distinct operating profiles in different tissues. Our results demonstrate that eExon mutations could lead to multiple phenotypes by disrupting both the protein sequence and enhancer activity and that enhancers can have distinct mutation profiles in different cell types. PMID:25340400

  15. Systematic errors in digital volume correlation due to the self-heating effect of a laboratory x-ray CT scanner

    NASA Astrophysics Data System (ADS)

    Wang, B.; Pan, B.; Tao, R.; Lubineau, G.

    2017-04-01

    The use of digital volume correlation (DVC) in combination with laboratory x-ray computed tomography (CT) for full-field internal 3D deformation measurement of opaque materials has flourished in recent years. During x-ray tomographic imaging, the heat generated by the x-ray tube changes the imaging geometry of the x-ray scanner and further introduces noticeable errors in DVC measurements. In this work, to provide practical guidance for high-accuracy DVC measurement, the errors in displacements and strains measured by DVC due to the self-heating effect of a commercially available x-ray scanner were experimentally investigated. The errors were characterized by performing simple rescan tests with different scan durations. The results indicate that the maximum strain errors associated with the self-heating of the x-ray scanner exceed 400 µε. Possible approaches for minimizing or correcting these displacement and strain errors are discussed. Finally, a series of translation and uniaxial compression tests were performed, in which strain errors were detected and then removed using a pre-established artificial dilatational strain-time curve. Experimental results demonstrate the efficacy and accuracy of the proposed strain error correction approach.
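
    The correction described in the last sentences amounts to subtracting a previously characterized self-heating drift from the measured strain history. A schematic version with made-up numbers is sketched below; the actual correction curve in the paper is established from rescan tests on the specific scanner, not from the values used here.

        import numpy as np

        # Placeholder drift characterization from repeated rescans of an undeformed
        # sample: apparent strain (microstrain) versus time since the tube warmed up.
        scan_time = np.array([0.0, 10.0, 20.0, 30.0, 40.0, 50.0, 60.0])               # minutes
        apparent_strain = np.array([0.0, 150.0, 260.0, 330.0, 375.0, 400.0, 415.0])   # microstrain

        def drift(t_minutes):
            # Self-heating strain drift interpolated from the rescan curve
            return np.interp(t_minutes, scan_time, apparent_strain)

        # Later DVC measurement on a loaded sample at t = 35 min (placeholder value)
        measured = 900.0
        corrected = measured - drift(35.0)
        print(f"drift-corrected strain ~ {corrected:.0f} microstrain")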

  16. Uncertainty quantification and error analysis

    SciTech Connect

    Higdon, Dave M; Anderson, Mark C; Habib, Salman; Klein, Richard; Berliner, Mark; Covey, Curt; Ghattas, Omar; Graziani, Carlo; Seager, Mark; Sefcik, Joseph; Stark, Philip

    2010-01-01

    UQ studies all sources of error and uncertainty, including: systematic and stochastic measurement error; ignorance; limitations of theoretical models; limitations of numerical representations of those models; limitations on the accuracy and reliability of computations, approximations, and algorithms; and human error. A more precise definition for UQ is suggested below.

  17. Precise accounting of bit errors in floating-point computations

    NASA Astrophysics Data System (ADS)

    Schmalz, Mark S.

    2009-08-01

    Floating-point computation generates errors at the bit level through four processes, namely, overflow, underflow, truncation, and rounding. Overflow and underflow can be detected electronically, and represent systematic errors that are not of interest in this study. Truncation occurs during shifting toward the least-significant bit (herein called right-shifting), and rounding error occurs at the least significant bit. Such errors are not easy to track precisely using published means. Statistical error propagation theory typically yields conservative estimates that are grossly inadequate for deep computational cascades. Forward error analysis theory developed for image and signal processing or matrix operations can yield a more realistic typical case, but the error of the estimate tends to be high in relationship to the estimated error. In this paper, we discuss emerging technology for forward error analysis, which allows an algorithm designer to precisely estimate the output error of a given operation within a computational cascade, under a prespecified set of constraints on input error and computational precision. This technique, called bit accounting, precisely tracks the number of rounding and truncation errors in each bit position of interest to the algorithm designer. Because all errors associated with specific bit positions are tracked, and because integer addition only is involved in error estimation, the error of the estimate is zero. The technique of bit accounting is evaluated for its utility in image and signal processing. Complexity analysis emphasizes the relationship between the work and space estimates of the algorithm being analyzed, and its error estimation algorithm. Because of the significant overhead involved in error representation, it is shown that bit accounting is less useful for real-time error estimation, but is well suited to analysis in support of algorithm design.
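
    Bit accounting itself is not detailed in the abstract; as a loose stand-in that illustrates forward error tracking through a computational cascade, the sketch below compares a float32 multiply-accumulate chain against exact rational arithmetic applied to the same rounded inputs and reports the accumulated error in float32 units in the last place (ulps). It is a simplified illustration, not the authors' method.

        from fractions import Fraction
        import numpy as np

        coeffs = [0.1, 0.2, 0.3, 0.7, 0.9]
        x = np.float32(1.3)

        acc32 = np.float32(0.0)
        acc_exact = Fraction(0)
        for c in coeffs:
            c32 = np.float32(c)                        # rounding already occurs on input
            acc32 = np.float32(acc32 + c32 * x)        # rounding on every operation
            acc_exact += Fraction(float(c32)) * Fraction(float(x))

        error = float(acc_exact) - float(acc32)
        ulps = abs(error) / float(np.spacing(acc32))   # error expressed in float32 ulps
        print(f"accumulated rounding error ~ {error:.3e} ({ulps:.1f} ulps)")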

  18. Systematic review of ERP and fMRI studies investigating inhibitory control and error processing in people with substance dependence and behavioural addictions

    PubMed Central

    Luijten, Maartje; Machielsen, Marise W.J.; Veltman, Dick J.; Hester, Robert; de Haan, Lieuwe; Franken, Ingmar H.A.

    2014-01-01

    Background Several current theories emphasize the role of cognitive control in addiction. The present review evaluates neural deficits in the domains of inhibitory control and error processing in individuals with substance dependence and in those showing excessive addiction-like behaviours. The combined evaluation of event-related potential (ERP) and functional magnetic resonance imaging (fMRI) findings in the present review offers unique information on neural deficits in addicted individuals. Methods We selected 19 ERP and 22 fMRI studies using stop-signal, go/no-go or Flanker paradigms based on a search of PubMed and Embase. Results The most consistent findings in addicted individuals relative to healthy controls were lower N2, error-related negativity and error positivity amplitudes as well as hypoactivation in the anterior cingulate cortex (ACC), inferior frontal gyrus and dorsolateral prefrontal cortex. These neural deficits, however, were not always associated with impaired task performance. With regard to behavioural addictions, some evidence has been found for similar neural deficits; however, studies are scarce and results are not yet conclusive. Differences among the major classes of substances of abuse were identified and involve stronger neural responses to errors in individuals with alcohol dependence versus weaker neural responses to errors in other substance-dependent populations. Limitations Task design and analysis techniques vary across studies, thereby reducing comparability among studies and the potential of clinical use of these measures. Conclusion Current addiction theories were supported by identifying consistent abnormalities in prefrontal brain function in individuals with addiction. An integrative model is proposed, suggesting that neural deficits in the dorsal ACC may constitute a hallmark neurocognitive deficit underlying addictive behaviours, such as loss of control. PMID:24359877

  19. Error Sources in Asteroid Astrometry

    NASA Technical Reports Server (NTRS)

    Owen, William M., Jr.

    2000-01-01

    Asteroid astrometry, like any other scientific measurement process, is subject to both random and systematic errors, not all of which are under the observer's control. To design an astrometric observing program or to improve an existing one requires knowledge of the various sources of error, how different errors affect one's results, and how various errors may be minimized by careful observation or data reduction techniques.

  20. Emergency department discharge prescription errors in an academic medical center

    PubMed Central

    Belanger, April; Devine, Lauren T.; Lane, Aaron; Condren, Michelle E.

    2017-01-01

    This study described discharge prescription medication errors written for emergency department patients. This study used content analysis in a cross-sectional design to systematically categorize prescription errors found in a report of 1000 discharge prescriptions submitted in the electronic medical record in February 2015. Two pharmacy team members reviewed the discharge prescription list for errors. Open-ended data were coded by an additional rater for agreement on coding categories. Coding was based upon majority rule. Descriptive statistics were used to address the study objective. Categories evaluated were patient age, provider type, drug class, and type and time of error. The discharge prescription error rate out of 1000 prescriptions was 13.4%, with “incomplete or inadequate prescription” being the most commonly detected error (58.2%). The adult and pediatric error rates were 11.7% and 22.7%, respectively. The antibiotics reviewed had the highest number of errors. The highest within-class error rates were with antianginal medications, antiparasitic medications, antacids, appetite stimulants, and probiotics. Emergency medicine residents wrote the highest percentage of prescriptions (46.7%) and had an error rate of 9.2%. Residents of other specialties wrote 340 prescriptions and had an error rate of 20.9%. Errors occurred most often between 10:00 am and 6:00 pm.

  1. ALTIMETER ERRORS,

    DTIC Science & Technology

    CIVIL AVIATION, *ALTIMETERS, FLIGHT INSTRUMENTS, RELIABILITY, ERRORS, PERFORMANCE(ENGINEERING), BAROMETERS, BAROMETRIC PRESSURE, ATMOSPHERIC TEMPERATURE, ALTITUDE, CORRECTIONS, AVIATION SAFETY, USSR.

  2. NLO error propagation exercise: statistical results

    SciTech Connect

    Pack, D.J.; Downing, D.J.

    1985-09-01

    Error propagation is the extrapolation and cumulation of uncertainty (variance) about total amounts of special nuclear material, for example, uranium or ²³⁵U, that are present in a defined location at a given time. The uncertainty results from the inevitable inexactness of individual measurements of weight, uranium concentration, ²³⁵U enrichment, etc. The extrapolated and cumulated uncertainty leads directly to quantified limits of error on inventory differences (LEIDs) for such material. The NLO error propagation exercise was planned as a field demonstration of the utilization of statistical error propagation methodology at the Feed Materials Production Center in Fernald, Ohio, from April 1 to July 1, 1983, in a single material balance area formed specially for the exercise. Major elements of the error propagation methodology were: variance approximation by Taylor Series expansion; variance cumulation by uncorrelated primary error sources as suggested by Jaech; random effects ANOVA model estimation of variance effects (systematic error); provision for inclusion of process variance in addition to measurement variance; and exclusion of static material. The methodology was applied to material balance area transactions from the indicated time period through a FORTRAN computer code developed specifically for this purpose on the NLO HP-3000 computer. This paper contains a complete description of the error propagation methodology and a full summary of the numerical results of applying the methodology in the field demonstration. The error propagation LEIDs did encompass the actual uranium and ²³⁵U inventory differences. Further, one can see that error propagation actually provides guidance for reducing inventory differences and LEIDs in future time periods.
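
    As a generic illustration of variance approximation by Taylor series expansion and cumulation over uncorrelated error sources (this is not the NLO FORTRAN code; all measurement values below are placeholders), consider propagating measurement variances through an item's ²³⁵U content and then over many independent items:

        import math

        # Placeholder measurements with 1-sigma uncertainties
        W,  sW  = 120.0, 0.3        # net weight, kg
        cU, scU = 0.85, 0.004       # uranium concentration, kg U per kg material
        e,  se  = 0.030, 0.0002     # 235U enrichment, kg 235U per kg U

        m235 = W * cU * e           # kg of 235U in the item

        # First-order Taylor (delta-method) propagation for a product of
        # uncorrelated measurements: relative variances add.
        rel_var = (sW / W) ** 2 + (scU / cU) ** 2 + (se / e) ** 2
        sigma_item = m235 * math.sqrt(rel_var)

        # Cumulation over uncorrelated items: variances (not sigmas) add.
        n_items = 200
        sigma_inventory = math.sqrt(n_items) * sigma_item
        print(f"item: {m235:.4f} +/- {sigma_item:.5f} kg 235U")
        print(f"{n_items} similar items: +/- {sigma_inventory:.4f} kg 235U")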

  3. Determining the isotopic abundance of a labeled compound by mass spectrometry and how correcting for natural abundance distribution using analogous data from the unlabeled compound leads to a systematic error.

    PubMed

    Schenk, David J; Lockley, William J S; Elmore, Charles S; Hesk, Dave; Roberts, Drew

    2016-04-01

    When the isotopic abundance or specific activity of a labeled compound is determined by mass spectrometry (MS), it is necessary to correct the raw MS data to eliminate ion intensity contributions, which arise from the presence of heavy isotopes at natural abundance (e.g., a typical carbon compound contains ~1.1% 13C per carbon atom). The most common approach is to employ a correction in which the mass-to-charge distribution of the corresponding unlabeled compound is used to subtract the natural abundance contributions from the raw mass-to-charge distribution pattern of the labeled compound. Following this correction, the residual intensities should be due to the presence of the newly introduced labeled atoms only. However, this will only be the case when the natural abundance mass isotopomer distribution of the unlabeled compound is the same as that of the labeled species. Although this may be a good approximation, it cannot be accurate in all cases. The implications of this approximation for the determination of isotopic abundance and specific activity have been examined in practice. Isotopically mixed stable-atom labeled valine batches were produced, and both these and [14C6]carbamazepine were analyzed by MS to determine the extent of the error introduced by the approach. Our studies revealed that significant errors are possible for small highly-labeled compounds, such as valine, under some circumstances. In the case of [14C6]carbamazepine, the errors introduced were minor but could be significant for 14C-labeled compounds with particular isotopic distributions. This source of systematic error can be minimized, although not eliminated, by the selection of an appropriate isotopic correction pattern or by the use of a program that varies the natural abundance distribution throughout the correction.
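
    The standard correction described above can be sketched as a sequential subtraction of the unlabeled compound's natural-abundance envelope from the labeled compound's mass-isotopomer pattern. The snippet below is a generic illustration with made-up intensities for a hypothetical two-carbon fragment; note that it embodies exactly the approximation the authors examine, namely that every labeled species is assumed to carry the same natural-abundance pattern as the unlabeled compound.

```python
import numpy as np

def correct_natural_abundance(observed, natural_pattern):
    """Strip natural-abundance contributions from a labeled compound's
    mass-isotopomer intensities using the pattern of the unlabeled compound.

    observed        : raw intensities at M+0, M+1, M+2, ...
    natural_pattern : unlabeled-compound intensities at M+0, M+1, ... (same length)
    """
    natural = np.asarray(natural_pattern, float)
    natural = natural / natural[0]              # scale so the monoisotopic peak is 1
    corrected = np.array(observed, float)
    for i in range(len(corrected)):
        # everything remaining at M+i is attributed to the M+i labeled species;
        # subtract the natural-abundance envelope it drags into higher masses
        # (same envelope assumed for all labeled species -- the approximation at issue)
        for j in range(1, len(corrected) - i):
            corrected[i + j] -= corrected[i] * natural[j]
    return np.clip(corrected, 0, None)

natural  = [100.0, 2.2, 0.01]                   # ~2.2% M+1 from natural 13C (illustrative)
observed = [50.0, 31.1, 20.4]                   # mixture of unlabeled, 13C1 and 13C2 species
print(correct_natural_abundance(observed, natural))
```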

  4. Empathy and error processing.

    PubMed

    Larson, Michael J; Fair, Joseph E; Good, Daniel A; Baldwin, Scott A

    2010-05-01

    Recent research suggests a relationship between empathy and error processing. Error processing is an evaluative control function that can be measured using post-error response time slowing and the error-related negativity (ERN) and post-error positivity (Pe) components of the event-related potential (ERP). Thirty healthy participants completed two measures of empathy, the Interpersonal Reactivity Index (IRI) and the Empathy Quotient (EQ), and a modified Stroop task. Post-error slowing was associated with increased empathic personal distress on the IRI. ERN amplitude was related to overall empathy score on the EQ and the fantasy subscale of the IRI. The Pe and measures of empathy were not related. Results remained consistent when negative affect was controlled via partial correlation, with an additional relationship between ERN amplitude and empathic concern on the IRI. Findings support a connection between empathy and error processing mechanisms.

  5. Improving regional ozone modeling through systematic evaluation of errors using the aircraft observations during the International Consortium for Atmospheric Research on Transport and Transformation

    NASA Astrophysics Data System (ADS)

    Mena-Carrasco, Marcelo; Tang, Youhua; Carmichael, Gregory R.; Chai, Tianfeng; Thongbongchoo, Narisara; Campbell, J. Elliott; Kulkarni, Sarika; Horowitz, Larry; Vukovich, Jeffrey; Avery, Melody; Brune, William; Dibb, Jack E.; Emmons, Louisa; Flocke, Frank; Sachse, Glen W.; Tan, David; Shetter, Rick; Talbot, Robert W.; Streets, David G.; Frost, Gregory; Blake, Donald

    2007-06-01

    During the operational phase of the ICARTT field experiment in 2004, the regional air quality model STEM showed a strong positive surface bias and a negative upper troposphere bias (compared with DC-8 and WP-3 aircraft observations) with respect to ozone. After updating emissions from NEI 1999 to NEI 2001 (with a 2004 large point sources inventory update), and modifying boundary conditions, low-level model bias decreases from 11.21 to 1.45 ppbv for the NASA DC-8 observations and from 8.26 to -0.34 ppbv for the NOAA WP-3. Improvements in boundary conditions provided by global models decrease the upper troposphere negative ozone bias, while accounting for biomass burning emissions improved model performance for CO. The covariances of ozone bias were highly correlated to NOz, NOy, and HNO3 biases. Interpolation of bias information through kriging showed that decreasing emissions in the southeastern United States would reduce regional ozone model bias and improve model correlation coefficients. The spatial distribution of forecast errors was analyzed using kriging, which identified distinct features that, when compared with errors in post-analysis simulations, helped document improvements. Changes in dry deposition to crops were shown to substantially reduce the high bias in the forecasts in the Midwest, while updated emissions were shown to account for decreases in bias in the eastern United States. Observed and modeled ozone production efficiencies for the DC-8 were calculated and shown to be very similar (7.8), suggesting that recurring ozone bias is due to overestimation of NOx emissions. Sensitivity studies showed that ozone formation in the United States is most sensitive to NOx emissions, followed by VOCs and CO. PAN as a reservoir of NOx can contribute a significant amount of surface ozone through thermal decomposition.

  6. Rapid mapping of volumetric errors

    SciTech Connect

    Krulewich, D.; Hale, L.; Yordy, D.

    1995-09-13

    This paper describes a relatively inexpensive, fast, and easy to execute approach to mapping the volumetric errors of a machine tool, coordinate measuring machine, or robot. An error map is used to characterize a machine or to improve its accuracy by compensating for the systematic errors. The method consists of three steps: (1) modeling the relationship between the volumetric error and the current state of the machine; (2) acquiring error data based on length measurements throughout the work volume; and (3) optimizing the model to the particular machine.
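
    As a rough illustration of the three steps, the sketch below fits a deliberately simple volumetric error model (per-axis scale errors plus one squareness term, all hypothetical) to simulated length measurements by linear least squares; a real machine map would use a much richer kinematic model and measured artifact data.

```python
import numpy as np

# Step 1: simple volumetric error model -- per-axis scale errors (sx, sy, sz)
# and one xy squareness term (wxy). Names and values are illustrative only.
def length_error_row(direction):
    dx, dy, dz = direction                       # unit vector of the measured line
    return [dx * dx, dy * dy, dz * dz, dx * dy]  # sensitivity of relative length error

rng = np.random.default_rng(0)
true_params = np.array([50e-6, -30e-6, 20e-6, 40e-6])   # the "unknown" machine errors

# Step 2: simulated length measurements along many directions in the work volume
A, dL = [], []
for _ in range(40):
    d = rng.normal(size=3)
    d /= np.linalg.norm(d)
    L_nom = rng.uniform(100.0, 500.0)                    # nominal artifact length, mm
    sens = np.array(length_error_row(d)) * L_nom         # dL = sens . params
    A.append(sens)
    dL.append(sens @ true_params + rng.normal(scale=0.5e-3))  # observed length error, mm

# Step 3: optimize (fit) the model to the observed length errors
est, *_ = np.linalg.lstsq(np.array(A), np.array(dL), rcond=None)
print("true  :", true_params)
print("fitted:", est)
```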

  7. Exercise training alone or with the addition of activity counseling improves physical activity levels in COPD: a systematic review and meta-analysis of randomized controlled trials

    PubMed Central

    Lahham, Aroub; McDonald, Christine F; Holland, Anne E

    2016-01-01

    Background Physical inactivity is associated with poor outcomes in COPD, and as a result, interventions to improve physical activity (PA) are a current research focus. However, many trials have been small and inconclusive. Objective The aim of this systematic review and meta-analysis was to study the effects of randomized controlled trials (RCTs) targeting PA in COPD. Methods Databases (Physiotherapy Evidence Database [PEDro], Embase, MEDLINE, CINAHL and the Cochrane Central Register for Controlled Trials) were searched using the following keywords: “COPD”, “intervention” and “physical activity” from inception to May 20, 2016; published RCTs that aimed to increase PA in individuals with COPD were included. The PEDro scale was used to rate study quality. Standardized mean differences (effect sizes, ESs) with 95% confidence intervals (CIs) were determined. Effects of included interventions were also measured according to the minimal important difference (MID) in daily steps for COPD (599 daily steps). Results A total of 37 RCTs with 4,314 participants (mean forced expiratory volume in one second (FEV1) % predicted 50.5 [SD=10.4]) were identified. Interventions including exercise training (ET; n=3 studies, 103 participants) significantly increased PA levels in COPD compared to standard care (ES [95% CI]; 0.84 [0.44–1.25]). The addition of activity counseling to pulmonary rehabilitation (PR; n=4 studies, 140 participants) showed important effects on PA levels compared to PR alone (0.47 [0.02–0.92]), achieving significant increases that exceeded the MID for daily steps in COPD (mean difference [95% CI], 1,452 daily steps [549–2,356]). Reporting of methodological quality was poor in most included RCTs. Conclusion Interventions that included ET and PA counseling during PR were effective strategies to improve PA in COPD. PMID:27994451

  8. Evidence for axis-aligned motion bias: football axis-trajectory misalignment causes systematic error in projected final destinations of thrown American footballs.

    PubMed

    Dolgov, Igor; McBeath, Michael K; Sugar, Thomas

    2009-01-01

    The axis-aligned motion (AAM) bias is the tendency of observers to assume that symmetric moving objects maintain axis-trajectory alignment and to bias their judgments of trajectory toward the axis when they are misaligned. We tested whether humans exhibit an AAM bias in a realistic, cue-rich, 3-D setting by examining the impact of axis-trajectory misalignment on estimates of final destinations of thrown American footballs. In experiments 1 and 2 we show that observers are significantly worse in judging destinations of footballs than those of volleyballs and basketballs. This difference in performance is due to the deviation of the football's axis from trajectory in flight, as shown by the correspondence of participants' lateral judgment error and the football's lateral axial deviation from trajectory, which was predicted by passer handedness. Nearly all animals exhibit bilateral symmetry and maintain axis-trajectory alignment during locomotion, and we argue that the AAM bias is complementary mental attunement to the natural regularity of this axis-aligned motion. Furthermore, this bias is also a prototypical example of a perceptual regularity that is a mixed blessing-advantageous in perceptual judgment tasks of axis trajectory-aligned moving entities like most living creatures, and disadvantageous in tasks demanding judgments of axis-trajectory-misaligned moving objects which are typically artifacts.

  9. Modeling the glucose sensor error.

    PubMed

    Facchinetti, Andrea; Del Favero, Simone; Sparacino, Giovanni; Castle, Jessica R; Ward, W Kenneth; Cobelli, Claudio

    2014-03-01

    Continuous glucose monitoring (CGM) sensors are portable devices, employed in the treatment of diabetes, able to measure glucose concentration in the interstitium almost continuously for several days. However, CGM sensors are not as accurate as standard blood glucose (BG) meters. Studies comparing CGM versus BG demonstrated that CGM is affected by distortion due to diffusion processes and by time-varying systematic under/overestimations due to calibrations and sensor drifts. In addition, measurement noise is also present in CGM data. A reliable model of the different components of CGM inaccuracy with respect to BG (briefly, "sensor error") is important in several applications, e.g., design of optimal digital filters for denoising of CGM data, real-time glucose prediction, insulin dosing, and artificial pancreas control algorithms. The aim of this paper is to propose an approach to describe CGM sensor error by exploiting n multiple simultaneous CGM recordings. The model of sensor error description includes a model of blood-to-interstitial glucose diffusion process, a linear time-varying model to account for calibration and sensor drift-in-time, and an autoregressive model to describe the additive measurement noise. Model orders and parameters are identified from the n simultaneous CGM sensor recordings and BG references. While the model is applicable to any CGM sensor, here, it is used on a database of 36 datasets of type 1 diabetic adults in which n = 4 Dexcom SEVEN Plus CGM time series and frequent BG references were available simultaneously. Results demonstrate that multiple simultaneous sensor data and proper modeling allow dissecting the sensor error into its different components, distinguishing those related to physiology from those related to technology.
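
    A toy simulation of the three error components named above (blood-to-interstitium diffusion lag, slowly drifting calibration gain/offset, and autoregressive additive noise) is sketched below. All parameter values and the BG profile are illustrative assumptions, not the model orders or parameters identified in the paper.

```python
import numpy as np

def simulate_cgm(bg, dt=5.0, tau=10.0, a0=1.0, a1=2e-4, b0=5.0, b1=-1e-3,
                 ar_coef=0.7, noise_sd=2.0, seed=0):
    """Generate a synthetic CGM trace from a reference BG profile using a
    first-order diffusion lag, a linearly drifting gain/offset, and AR(1) noise."""
    rng = np.random.default_rng(seed)
    n = len(bg)
    ig = np.empty(n); ig[0] = bg[0]
    for k in range(1, n):                         # blood-to-interstitium diffusion lag
        ig[k] = ig[k - 1] + dt / tau * (bg[k - 1] - ig[k - 1])
    t = np.arange(n) * dt
    gain, offset = a0 + a1 * t, b0 + b1 * t       # time-varying calibration error
    noise = np.empty(n); noise[0] = rng.normal(scale=noise_sd)
    for k in range(1, n):                         # AR(1) additive measurement noise
        noise[k] = ar_coef * noise[k - 1] + rng.normal(scale=noise_sd)
    return gain * ig + offset + noise

bg = 120 + 40 * np.sin(np.linspace(0, 4 * np.pi, 288))   # fake 24 h BG profile, 5-min grid
cgm = simulate_cgm(bg)
print("mean CGM - BG discrepancy: %.1f mg/dL" % np.mean(cgm - bg))
```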

  10. Statistical errors and systematic biases in the calibration of the convective core overshooting with eclipsing binaries. A case study: TZ Fornacis

    NASA Astrophysics Data System (ADS)

    Valle, G.; Dell'Omodarme, M.; Prada Moroni, P. G.; Degl'Innocenti, S.

    2017-03-01

    Context. Recently published work has made high-precision fundamental parameters available for the binary system TZ Fornacis, making it an ideal target for the calibration of stellar models. Aims: Relying on these observations, we attempt to constrain the initial helium abundance, the age and the efficiency of the convective core overshooting. Our main aim is to point out the biases in the results due to not accounting for some sources of uncertainty. Methods: We adopt the SCEPtER pipeline, a maximum likelihood technique based on fine grids of stellar models computed for various values of metallicity, initial helium abundance and overshooting efficiency by means of two independent stellar evolutionary codes, namely FRANEC and MESA. Results: Besides the degeneracy between the estimated age and overshooting efficiency, we found the existence of multiple independent groups of solutions. The best one suggests a system of age 1.10 ± 0.07 Gyr composed of a primary star in the central helium burning stage and a secondary in the sub-giant branch (SGB). The resulting initial helium abundance is consistent with a helium-to-metal enrichment ratio of ΔY/ ΔZ = 1; the core overshooting parameter is β = 0.15 ± 0.01 for FRANEC and fov = 0.013 ± 0.001 for MESA. The second class of solutions, characterised by a worse goodness-of-fit, still suggests a primary star in the central helium-burning stage but a secondary in the overall contraction phase, at the end of the main sequence (MS). In this case, the FRANEC grid provides an age of Gyr and a core overshooting parameter , while the MESA grid gives 1.23 ± 0.03 Gyr and fov = 0.025 ± 0.003. We analyse the impact on the results of a larger, but typical, mass uncertainty and of neglecting the uncertainty in the initial helium content of the system. We show that very precise mass determinations with uncertainties of a few thousandths of a solar mass are required to obtain reliable determinations of stellar parameters, as mass errors

  11. Image pre-filtering for measurement error reduction in digital image correlation

    NASA Astrophysics Data System (ADS)

    Zhou, Yihao; Sun, Chen; Song, Yuntao; Chen, Jubing

    2015-02-01

    In digital image correlation, the sub-pixel intensity interpolation causes a systematic error in the measured displacements. The error increases toward the high-frequency components of the speckle pattern. In practice, a captured image is usually corrupted by additive white noise. The noise introduces additional energy in the high frequencies and therefore raises the systematic error. Meanwhile, the noise also elevates the random error, which increases with the noise power. In order to reduce the systematic error and the random error of the measurements, we apply pre-filtering to the images prior to correlation so that the high-frequency contents are suppressed. Two spatial-domain filters (binomial and Gaussian) and two frequency-domain filters (Butterworth and Wiener) are tested on speckle images undergoing both simulated and real-world translations. By evaluating the errors of the various combinations of speckle patterns, interpolators, noise levels, and filter configurations, we come to the following conclusions. All the four filters are able to reduce the systematic error. Meanwhile, the random error can also be reduced if the signal power is mainly distributed around DC. For high-frequency speckle patterns, the low-pass filters (binomial, Gaussian and Butterworth) slightly increase the random error and Butterworth filter produces the lowest random error among them. By using Wiener filter with over-estimated noise power, the random error can be reduced but the resultant systematic error is higher than that of low-pass filters. In general, Butterworth filter is recommended for error reduction due to its flexibility of passband selection and maximal preservation of the allowed frequencies. Binomial filter enables efficient implementation and thus becomes a good option if computational cost is a critical issue. When used together with pre-filtering, B-spline interpolator produces lower systematic error than bicubic interpolator and similar level of the random
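
    A minimal sketch of the pre-filtering step on a synthetic speckle image is shown below, using a spatial-domain Gaussian filter and a frequency-domain Butterworth filter (NumPy/SciPy assumed); the speckle generation, noise level, and cutoff values are illustrative only, and the pre-filtered images would then be passed to the DIC correlation step.

```python
import numpy as np
from scipy import ndimage

def butterworth_lowpass(img, cutoff=0.25, order=4):
    """Frequency-domain Butterworth low-pass filter (cutoff in cycles/pixel)."""
    fy = np.fft.fftfreq(img.shape[0])[:, None]
    fx = np.fft.fftfreq(img.shape[1])[None, :]
    r = np.sqrt(fx**2 + fy**2)
    H = 1.0 / (1.0 + (r / cutoff) ** (2 * order))
    return np.real(np.fft.ifft2(np.fft.fft2(img) * H))

rng = np.random.default_rng(1)
speckle = ndimage.gaussian_filter(rng.random((256, 256)), 1.5)   # synthetic speckle pattern
noisy = speckle + rng.normal(scale=0.02, size=speckle.shape)     # additive white noise

pre_gauss = ndimage.gaussian_filter(noisy, sigma=1.0)            # spatial-domain option
pre_butter = butterworth_lowpass(noisy, cutoff=0.2)              # frequency-domain option
# Correlating pre_gauss / pre_butter instead of `noisy` suppresses the high-frequency
# content that drives both the interpolation-induced systematic error and the noise.
print(noisy.std(), pre_gauss.std(), pre_butter.std())
```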

  12. Software error detection

    NASA Technical Reports Server (NTRS)

    Buechler, W.; Tucker, A. G.

    1981-01-01

    Several methods were employed to detect both the occurrence and source of errors in the operational software of the AN/SLQ-32, a large embedded real-time electronic warfare command and control system for the ROLM 1606 computer. The ROLM computer provides information about invalid addressing, improper use of privileged instructions, stack overflows, and unimplemented instructions. Additionally, software techniques were developed to detect invalid jumps, indices out of range, infinite loops, stack underflows, and field size errors. Finally, data are saved to provide information about the status of the system when an error is detected. This information includes I/O buffers, interrupt counts, stack contents, and recently passed locations. The various errors detected, techniques to assist in debugging problems, and segment simulation on a nontarget computer are discussed. These error detection techniques were a major factor in the success of finding the primary cause of error in 98% of over 500 system dumps.

  13. Error Analysis in Mathematics Education.

    ERIC Educational Resources Information Center

    Rittner, Max

    1982-01-01

    The article reviews the development of mathematics error analysis as a means of diagnosing students' cognitive reasoning. Errors specific to addition, subtraction, multiplication, and division are described, and suggestions for remediation are provided. (CL)

  14. Medication Errors

    MedlinePlus

  15. Discretization errors in particle tracking

    NASA Astrophysics Data System (ADS)

    Carmon, G.; Mamman, N.; Feingold, M.

    2007-03-01

    High precision video tracking of microscopic particles is limited by systematic and random errors. Systematic errors are partly due to the discretization process both in position and in intensity. We study the behavior of such errors in a simple tracking algorithm designed for the case of symmetric particles. This symmetry algorithm uses interpolation to estimate the value of the intensity at arbitrary points in the image plane. We show that the discretization error is composed of two parts: (1) the error due to the discretization of the intensity, bD, and (2) that due to interpolation, bI. While bD behaves asymptotically like N^-1, where N is the number of intensity gray levels, bI is small when using cubic spline interpolation.

  16. Error Analysis and the EFL Classroom Teaching

    ERIC Educational Resources Information Center

    Xie, Fang; Jiang, Xue-mei

    2007-01-01

    This paper makes a study of error analysis and its implementation in the EFL (English as Foreign Language) classroom teaching. It starts by giving a systematic review of the concepts and theories concerning EA (Error Analysis); the various reasons causing errors are then comprehensively explored. The author proposes that teachers should employ…

  17. Refractive error blindness.

    PubMed Central

    Dandona, R.; Dandona, L.

    2001-01-01

    Recent data suggest that a large number of people are blind in different parts of the world due to high refractive error because they are not using appropriate refractive correction. Refractive error as a cause of blindness has been recognized only recently with the increasing use of presenting visual acuity for defining blindness. In addition to blindness due to naturally occurring high refractive error, inadequate refractive correction of aphakia after cataract surgery is also a significant cause of blindness in developing countries. Blindness due to refractive error in any population suggests that eye care services in general in that population are inadequate since treatment of refractive error is perhaps the simplest and most effective form of eye care. Strategies such as vision screening programmes need to be implemented on a large scale to detect individuals suffering from refractive error blindness. Sufficient numbers of personnel to perform reasonable quality refraction need to be trained in developing countries. Also adequate infrastructure has to be developed in underserved areas of the world to facilitate the logistics of providing affordable reasonable-quality spectacles to individuals suffering from refractive error blindness. Long-term success in reducing refractive error blindness worldwide will require attention to these issues within the context of comprehensive approaches to reduce all causes of avoidable blindness. PMID:11285669

  18. Systematic approach for simultaneously correcting the band-gap and p-d separation errors of common cation III-V or II-VI binaries in density functional theory calculations within a local density approximation

    DOE PAGES

    Wang, Jianwei; Zhang, Yong; Wang, Lin-Wang

    2015-07-31

    We propose a systematic approach that can empirically correct three major errors typically found in a density functional theory (DFT) calculation within the local density approximation (LDA) simultaneously for a set of common cation binary semiconductors, such as III-V compounds, (Ga or In)X with X = N, P, As, Sb, and II-VI compounds, (Zn or Cd)X with X = O, S, Se, Te. By correcting (1) the binary band gaps at the high-symmetry points Γ, L, and X, (2) the separation of p- and d-orbital-derived valence bands, and (3) conduction band effective masses to experimental values, and doing so simultaneously for common cation binaries, the resulting DFT-LDA-based quasi-first-principles method can be used to predict the electronic structure of complex materials involving multiple binaries with comparable accuracy but much less computational cost than a GW-level theory. This approach provides an efficient way to evaluate the electronic structures and other material properties of complex systems, much needed for material discovery and design.

  19. Systematizing Trial and Error Using Spreadsheets.

    ERIC Educational Resources Information Center

    Sgroi, Richard J.

    1992-01-01

    Presents two spreadsheets for middle school students applying Polya's heuristic to help develop number sense, reasoning abilities, and problem-solving skills. Spreadsheet 1, "the coin problem," allows students to vary coin quantities to total $8.32. Spreadsheet 2, "ratios," develops number relationships while finding 3 3-digit…

  20. Error Modeling of Multi-baseline Optical Truss. Part II; Application to SIM Metrology Truss Field Dependent Error

    NASA Technical Reports Server (NTRS)

    Zhang, Liwei Dennis; Milman, Mark; Korechoff, Robert

    2004-01-01

    The current design of the Space Interferometry Mission (SIM) employs a 19 laser-metrology-beam system (also called L19 external metrology truss) to monitor changes of distances between the fiducials of the flight system's multiple baselines. The function of the external metrology truss is to aid in the determination of the time-variations of the interferometer baseline. The largest contributor to truss error occurs in SIM wide-angle observations when the articulation of the siderostat mirrors (in order to gather starlight from different sky coordinates) brings to light systematic errors due to offsets at levels of instrument components (which include corner cube retro-reflectors, etc.). This error is labeled external metrology wide-angle field-dependent error. A physics-based model of field-dependent error at the single metrology gauge level is developed and linearly propagated to errors in interferometer delay. In this manner delay error sensitivity to various error parameters or their combination can be studied using eigenvalue/eigenvector analysis. Validation of the physics-based field-dependent model on the SIM testbed also lends support to the present approach. As a first example, a dihedral error model is developed for the corner cubes (CC) attached to the siderostat mirrors. Then the delay errors due to this effect can be characterized using the eigenvectors of composite CC dihedral error. The essence of the linear error model is contained in an error-mapping matrix. A corresponding Zernike component matrix approach is developed in parallel, first for convenience of describing the RMS of errors across the field-of-regard (FOR), and second for convenience of combining with additional models. Average and worst case residual errors are computed when various orders of field-dependent terms are removed from the delay error. Results of the residual errors are important in arriving at external metrology system component requirements. Double CCs with ideally co-incident vertices

  1. The need for annual echocardiography to detect cabergoline-associated valvulopathy in patients with prolactinoma: a systematic review and additional clinical data.

    PubMed

    Caputo, Carmela; Prior, David; Inder, Warrick J

    2015-11-01

    Present recommendations by the US Food and Drug Administration advise that patients with prolactinoma treated with cabergoline should have an annual echocardiogram to screen for valvular heart disease. Here, we present new clinical data and a systematic review of the scientific literature showing that the prevalence of cabergoline-associated valvulopathy is very low. We prospectively assessed 40 patients with prolactinoma taking cabergoline. Cardiovascular examination before echocardiography detected an audible systolic murmur in 10% of cases (all were functional murmurs), and no clinically significant valvular lesion was shown on echocardiogram in the 90% of patients without a murmur. Our systematic review identified 21 studies that assessed the presence of valvular abnormalities in patients with prolactinoma treated with cabergoline. Including our new clinical data, only two (0·11%) of 1811 patients were confirmed to have cabergoline-associated valvulopathy (three [0·17%] if possible cases were included). The probability of clinically significant valvular heart disease is low in the absence of a murmur. On the basis of these findings, we challenge the present recommendations to do routine echocardiography in all patients taking cabergoline for prolactinoma every 12 months. We propose that such patients should be screened by a clinical cardiovascular examination and that echocardiogram should be reserved for those patients with an audible murmur, those treated for more than 5 years at a dose of more than 3 mg per week, or those who maintain cabergoline treatment after the age of 50 years.

  2. Cardinality Balanced Multi-Target Multi-Bernoulli Filter with Error Compensation

    PubMed Central

    He, Xiangyu; Liu, Guixi

    2016-01-01

    The cardinality balanced multi-target multi-Bernoulli (CBMeMBer) filter developed recently has been proved an effective multi-target tracking (MTT) algorithm based on the random finite set (RFS) theory, and it can jointly estimate the number of targets and their states from a sequence of sensor measurement sets. However, because of the existence of systematic errors in sensor measurements, the CBMeMBer filter can easily produce different levels of performance degradation. In this paper, an extended CBMeMBer filter, in which the joint probability density function of target state and systematic error is recursively estimated, is proposed to address the MTT problem based on the sensor measurements with systematic errors. In addition, an analytic implementation of the extended CBMeMBer filter is also presented for linear Gaussian models. Simulation results confirm that the proposed algorithm can track multiple targets with better performance. PMID:27589764

  3. Skylab water balance error analysis

    NASA Technical Reports Server (NTRS)

    Leonard, J. I.

    1977-01-01

    Estimates of the precision of the net water balance were obtained for the entire Skylab preflight and inflight phases as well as for the first two weeks of flight. Quantitative estimates of both total sampling errors and instrumentation errors were obtained. It was shown that measurement error is minimal in comparison to biological variability and little can be gained from improvement in analytical accuracy. In addition, a propagation of error analysis demonstrated that total water balance error could be accounted for almost entirely by the errors associated with body mass changes. Errors due to interaction between terms in the water balance equation (covariances) represented less than 10% of the total error. Overall, the analysis provides evidence that daily measurements of body water changes obtained from the indirect balance technique are reasonable, precise, and reliable. The method is not biased toward net retention or loss.

  4. Efficacy of additional psychosocial intervention in reducing low birth weight and preterm birth in teenage pregnancy: A systematic review and meta-analysis.

    PubMed

    Sukhato, Kanokporn; Wongrathanandha, Chathaya; Thakkinstian, Ammarin; Dellow, Alan; Horsuwansak, Pornpot; Anothaisintawee, Thunyarat

    2015-10-01

    This systematic review aimed to assess the efficacy of psychosocial interventions in reducing risk of low birth weight (LBW) and preterm birth (PTB) in teenage pregnancy. Relevant studies were identified from Medline, Scopus, CINAHL, and CENTRAL databases. Randomized controlled trials investigating effect of psychosocial interventions on risk of LBW and PTB, compared to routine antenatal care (ANC) were eligible. Relative risks (RR) of LBW and PTB were pooled using inverse variance method. Mean differences of birth weight (BW) between intervention and control groups were pooled using unstandardized mean difference (USMD). Five studies were included in the review. Compared with routine ANC, psychosocial interventions significantly reduced risk of LBW by 40% (95%CI: 8%,62%) but not for PTB (pooled RR = 0.67, 95%CI: 0.42,1.05). Mean BW of the intervention group was significantly higher than that of the control group with USMD of 200.63 g (95% CI: 21.02, 380.25). Results of our study suggest that psychosocial interventions significantly reduced risk of LBW in teenage pregnancy.
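
    For reference, the inverse-variance pooling of relative risks used in meta-analyses of this kind can be sketched as below (fixed-effect version; the study-level numbers are made up for illustration and are not the trials included in this review).

```python
import numpy as np

def pool_relative_risks(rr, ci_low, ci_high):
    """Fixed-effect inverse-variance pooling of relative risks on the log scale;
    per-study standard errors are recovered from the reported 95% CIs."""
    log_rr = np.log(rr)
    se = (np.log(ci_high) - np.log(ci_low)) / (2 * 1.96)
    w = 1.0 / se**2                                   # inverse-variance weights
    pooled = np.sum(w * log_rr) / np.sum(w)
    pooled_se = np.sqrt(1.0 / np.sum(w))
    lo, hi = np.exp(pooled + np.array([-1.96, 1.96]) * pooled_se)
    return np.exp(pooled), lo, hi

# Hypothetical study-level relative risks of LBW (illustrative values only)
rr, lo, hi = pool_relative_risks([0.55, 0.70, 0.62], [0.30, 0.45, 0.35], [0.95, 1.10, 1.05])
print(f"pooled RR = {rr:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```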

  5. Neural network calibration of a snapshot birefringent Fourier transform spectrometer with periodic phase errors.

    PubMed

    Luo, David; Kudenov, Michael W

    2016-05-16

    Systematic phase errors in Fourier transform spectroscopy can severely degrade the calculated spectra. Compensation of these errors is typically accomplished using post-processing techniques, such as Fourier deconvolution, linear unmixing, or iterative solvers. This results in increased computational complexity when reconstructing and calibrating many parallel interference patterns. In this paper, we describe a new method of calibrating a Fourier transform spectrometer based on the use of artificial neural networks (ANNs). In this way, it is demonstrated that a simpler and more straightforward reconstruction process can be achieved at the cost of additional calibration equipment. To this end, we provide a theoretical model for general systematic phase errors in a polarization birefringent interferometer. This is followed by a discussion of our experimental setup and a demonstration of our technique, as applied to data with and without phase error. The technique's utility is then supported by comparison to alternative reconstruction techniques using fast Fourier transforms (FFTs) and linear unmixing.

  6. Theory of Test Translation Error

    ERIC Educational Resources Information Center

    Solano-Flores, Guillermo; Backhoff, Eduardo; Contreras-Nino, Luis Angel

    2009-01-01

    In this article, we present a theory of test translation whose intent is to provide the conceptual foundation for effective, systematic work in the process of test translation and test translation review. According to the theory, translation error is multidimensional; it is not simply the consequence of defective translation but an inevitable fact…

  7. Stopping the error cascade: a report on ameliorators from the ASIPS collaborative

    PubMed Central

    Parnes, Bennett; Fernald, Douglas; Quintela, Javán; Araya‐Guerra, Rodrigo; Westfall, John; Harris, Daniel; Pace, Wilson

    2007-01-01

    Objective To present a novel examination of how error cascades are stopped (ameliorated) before they affect patients. Design Qualitative analysis of reported errors in primary care. Setting Over a three‐year period, clinicians and staff in two practice‐based research networks voluntarily reported medical errors to a primary care patient safety reporting system, Applied Strategies for Improving Patient Safety (ASIPS). The authors found a number of reports where the error was corrected before it had an adverse impact on the patient. Results Of 754 codeable reported events, 60 were classified as ameliorated events. In these events, a participant stopped the progression of the event before it reached or affected the patient. Ameliorators included doctors, nurses, pharmacists, diagnostic laboratories and office staff. Additionally, patients or family members may be ameliorators by recognising the error and taking action. Ameliorating an event after an initial error requires an opportunity to catch the error by systems, chance or attentiveness. Correcting the error before it affects the patient requires action either directed by protocols and systems or by vigilance, power to change course and perseverance on the part of the ameliorator. Conclusion Despite numerous individual and systematic methods to prevent errors, a system to prevent all potential errors is not feasible. However, a more pervasive culture of safety that builds on simple acts in addition to more costly and complex electronic systems may improve patient outcomes. Medical staff and patients who are encouraged to be vigilant, ask questions and seek solutions may correct otherwise inevitable wrongs. PMID:17301195

  8. Quantum Error Correction

    NASA Astrophysics Data System (ADS)

    Lidar, Daniel A.; Brun, Todd A.

    2013-09-01

    Prologue; Preface; Part I. Background: 1. Introduction to decoherence and noise in open quantum systems Daniel Lidar and Todd Brun; 2. Introduction to quantum error correction Dave Bacon; 3. Introduction to decoherence-free subspaces and noiseless subsystems Daniel Lidar; 4. Introduction to quantum dynamical decoupling Lorenza Viola; 5. Introduction to quantum fault tolerance Panos Aliferis; Part II. Generalized Approaches to Quantum Error Correction: 6. Operator quantum error correction David Kribs and David Poulin; 7. Entanglement-assisted quantum error-correcting codes Todd Brun and Min-Hsiu Hsieh; 8. Continuous-time quantum error correction Ognyan Oreshkov; Part III. Advanced Quantum Codes: 9. Quantum convolutional codes Mark Wilde; 10. Non-additive quantum codes Markus Grassl and Martin Rötteler; 11. Iterative quantum coding systems David Poulin; 12. Algebraic quantum coding theory Andreas Klappenecker; 13. Optimization-based quantum error correction Andrew Fletcher; Part IV. Advanced Dynamical Decoupling: 14. High order dynamical decoupling Zhen-Yu Wang and Ren-Bao Liu; 15. Combinatorial approaches to dynamical decoupling Martin Rötteler and Pawel Wocjan; Part V. Alternative Quantum Computation Approaches: 16. Holonomic quantum computation Paolo Zanardi; 17. Fault tolerance for holonomic quantum computation Ognyan Oreshkov, Todd Brun and Daniel Lidar; 18. Fault tolerant measurement-based quantum computing Debbie Leung; Part VI. Topological Methods: 19. Topological codes Héctor Bombín; 20. Fault tolerant topological cluster state quantum computing Austin Fowler and Kovid Goyal; Part VII. Applications and Implementations: 21. Experimental quantum error correction Dave Bacon; 22. Experimental dynamical decoupling Lorenza Viola; 23. Architectures Jacob Taylor; 24. Error correction in quantum communication Mark Wilde; Part VIII. Critical Evaluation of Fault Tolerance: 25. Hamiltonian methods in QEC and fault tolerance Eduardo Novais, Eduardo Mucciolo and

  9. Twenty questions about student errors

    NASA Astrophysics Data System (ADS)

    Fisher, Kathleen M.; Lipson, Joseph Isaac

    Errors in science learning (errors in expression of organized, purposeful thought within the domain of science) provide a window through which glimpses of mental functioning can be obtained. Errors are valuable and normal occurrences in the process of learning science. A student can use his/her errors to develop a deeper understanding of a concept as long as the error can be recognized and appropriate, informative feedback can be obtained. A safe, non-threatening, and nonpunitive environment which encourages dialogue helps students to express their conceptions and to risk making errors. Pedagogical methods that systematically address common student errors produce significant gains in student learning. Just as the nature-nurture interaction is integral to the development of living things, so the individual-environment interaction is basic to thought processes. At a minimum, four systems interact: (1) the individual problem solver (who has a worldview, relatively stable cognitive characteristics, relatively malleable mental states and conditions, and aims or intentions), (2) task to be performed (including relative importance and nature of the task), (3) knowledge domain in which task is contained, and (4) the environment (including orienting conditions and the social and physical context). Several basic assumptions underlie research on errors and alternative conceptions. Among these are: Knowledge and thought involve active, constructive processes; there are many ways to acquire, organize, store, retrieve, and think about a given concept or event; and understanding is achieved by successive approximations. Application of these ideas will require a fundamental change in how science is taught.

  10. Discretization vs. Rounding Error in Euler's Method

    ERIC Educational Resources Information Center

    Borges, Carlos F.

    2011-01-01

    Euler's method for solving initial value problems is an excellent vehicle for observing the relationship between discretization error and rounding error in numerical computation. Reductions in stepsize, in order to decrease discretization error, necessarily increase the number of steps and so introduce additional rounding error. The problem is…
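
    The trade-off can be made concrete with a short numerical experiment; single precision is used so that rounding error becomes visible at modest step counts. The test problem (y' = y on [0, 1]) and the precision choice are illustrative assumptions, not taken from the article.

```python
import numpy as np

def euler_error(n_steps, dtype=np.float32):
    """Global error at t = 1 of Euler's method for y' = y, y(0) = 1, using
    low precision so that accumulated rounding error becomes visible."""
    h = dtype(1.0 / n_steps)
    y = dtype(1.0)
    for _ in range(n_steps):
        y = dtype(y + h * y)                  # one Euler step, rounded each time
    return abs(float(y) - np.e)

# Discretization error shrinks like h, but rounding error accumulates with the
# number of steps, so the total error eventually stops improving.
for n in [10, 100, 1_000, 100_000, 1_000_000]:
    print(f"n = {n:>9d}   |error| = {euler_error(n):.3e}")
```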

  11. Additions and corrections to the systematics of mayfly species assigned to the genus Callibaetis Eaton 1881 (Ephemeroptera: Baetidae) from South America.

    PubMed

    Cruz, Paulo Vilela; Salles, Frederico Falcão; Hamada, Neusa

    2017-02-13

    Due to historical taxonomic impediments, species of Callibaetis Eaton are difficult to identify. Recent studies have attempted to resolve this problem, although many species still lack complete descriptions; nymphs of several species remain undetermined; and type specimens are lost or poorly known. Given these hindrances, the aim of this study is to review some of the type specimens of Callibaetis from South America. This review provides a series of taxonomic additions and corrections supported by improved morphological evaluations, illustrations and photographs of Callibaetis camposi Navás, C. (Abaetetuba) capixaba Cruz, Salles & Hamada, C. gregarius Navás, C. (C.) guttatus Navás, C. jaffueli Navás, C. (C.) jocosus Navás, C. nigrivenosus Banks, C. (A.) pollens Needham & Murphy, C. (C.) radiatus Navás, C. (A.) sellacki (Weyenbergh), C. stictogaster Navás, C. (C.) viviparus Needham & Murphy, C. (C.) willineri Navás, and C. (C.) zonalis Navás. From among these species, C. stictogaster and C. jaffueli are revalidated; C. nigrivenosus and C. gregarius are designated as nomina dubia; C. (C.) fluminensis Cruz, Salles & Hamada is proposed as a junior subjective synonym of C. (C.) zonalis; and C. gloriosus Navás is proposed as a junior subjective synonym of C. (A.) sellacki (Weyenbergh). Lectotypes are designated for C. camposi, C. jaffueli, C. (C.) radiatus and C. stictogaster.

  12. Error handling strategies in multiphase inverse modeling

    SciTech Connect

    Finsterle, S.; Zhang, Y.

    2010-12-01

    Parameter estimation by inverse modeling involves the repeated evaluation of a function of residuals. These residuals represent both errors in the model and errors in the data. In practical applications of inverse modeling of multiphase flow and transport, the error structure of the final residuals often significantly deviates from the statistical assumptions that underlie standard maximum likelihood estimation using the least-squares method. Large random or systematic errors are likely to lead to convergence problems, biased parameter estimates, misleading uncertainty measures, or poor predictive capabilities of the calibrated model. The multiphase inverse modeling code iTOUGH2 supports strategies that identify and mitigate the impact of systematic or non-normal error structures. We discuss these approaches and provide an overview of the error handling features implemented in iTOUGH2.

  13. Coping with model error in variational data assimilation using optimal mass transport

    NASA Astrophysics Data System (ADS)

    Ning, Lipeng; Carli, Francesca P.; Ebtehaj, Ardeshir Mohammad; Foufoula-Georgiou, Efi; Georgiou, Tryphon T.

    2014-07-01

    Classical variational data assimilation methods address the problem of optimally combining model predictions with observations in the presence of zero-mean Gaussian random errors. However, in many natural systems, uncertainty in model structure and/or model parameters often results in systematic errors or biases. Prior knowledge about such systematic model error for parametric removal is not always feasible in practice, limiting the efficient use of observations for improved prediction. The main contribution of this work is to advocate the relevance of transportation metrics for quantifying nonrandom model error in variational data assimilation for nonnegative natural states and fluxes. Transportation metrics (also known as Wasserstein metrics) originate in the theory of Optimal Mass Transport (OMT) and provide a nonparametric way to compare distributions which is natural in the sense that it penalizes mismatch in the values and relative position of "masses" in the two distributions. We demonstrate the promise of the proposed methodology using 1-D and 2-D advection-diffusion dynamics with systematic error in the velocity and diffusivity parameters. Moreover, we combine this methodology with additional regularization functionals, such as the ℓ1-norm of the state in a properly chosen domain, to incorporate both model error and potential prior information in the presence of sparsity or sharp fronts in the underlying state of interest.
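
    A small illustration of why a transportation metric is attractive for position-type systematic errors is given below (SciPy assumed; the 1-D fields are synthetic): once two features no longer overlap, a squared-error norm saturates, whereas the Wasserstein distance keeps growing with the displacement.

```python
import numpy as np
from scipy.stats import wasserstein_distance

# Two nonnegative 1-D "states" (e.g., a flux profile along a transect): the model
# field has the right shape and amplitude but is displaced by 15 units.
x = np.linspace(0.0, 100.0, 501)
obs      = np.exp(-0.5 * ((x - 40.0) / 5.0) ** 2)
forecast = np.exp(-0.5 * ((x - 55.0) / 5.0) ** 2)    # systematic position error only

# A squared-error measure saturates once the two features stop overlapping ...
l2_mismatch = np.linalg.norm(obs - forecast)
# ... while the Wasserstein (optimal mass transport) metric tracks the displacement,
# because it measures how far "mass" must be moved to match the two fields.
w1_mismatch = wasserstein_distance(x, x, u_weights=obs, v_weights=forecast)

print(f"L2 mismatch: {l2_mismatch:.2f}   Wasserstein-1 mismatch: {w1_mismatch:.2f}")
```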

  14. Formal Estimation of Errors in Computed Absolute Interaction Energies of Protein-ligand Complexes

    PubMed Central

    Faver, John C.; Benson, Mark L.; He, Xiao; Roberts, Benjamin P.; Wang, Bing; Marshall, Michael S.; Kennedy, Matthew R.; Sherrill, C. David; Merz, Kenneth M.

    2011-01-01

    A largely unsolved problem in computational biochemistry is the accurate prediction of binding affinities of small ligands to protein receptors. We present a detailed analysis of the systematic and random errors present in computational methods through the use of error probability density functions, specifically for computed interaction energies between chemical fragments comprising a protein-ligand complex. An HIV-II protease crystal structure with a bound ligand (indinavir) was chosen as a model protein-ligand complex. The complex was decomposed into twenty-one (21) interacting fragment pairs, which were studied using a number of computational methods. The chemically accurate complete basis set coupled cluster theory (CCSD(T)/CBS) interaction energies were used as reference values to generate our error estimates. In our analysis we observed significant systematic and random errors in most methods, which was surprising especially for parameterized classical and semiempirical quantum mechanical calculations. After propagating these fragment-based error estimates over the entire protein-ligand complex, our total error estimates for many methods are large compared to the experimentally determined free energy of binding. Thus, we conclude that statistical error analysis is a necessary addition to any scoring function attempting to produce reliable binding affinity predictions. PMID:21666841
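
    The propagation step can be illustrated with a toy calculation: if each fragment-pair interaction energy carries an error drawn from a normal density with a systematic (mean) and a random (standard deviation) component, the systematic parts add linearly over the complex while the random parts add in quadrature. The per-pair numbers below are made up for illustration, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical per-fragment-pair error model for an approximate method relative
# to a CCSD(T)/CBS reference, in kcal/mol (illustrative values only).
n_pairs = 21
mean_err, std_err = 0.35, 0.60

# Propagation over the whole complex: systematic errors add linearly,
# random errors add in quadrature.
total_systematic = n_pairs * mean_err
total_random = np.sqrt(n_pairs) * std_err
print(f"expected total error: {total_systematic:.1f} +/- {total_random:.1f} kcal/mol")

# Monte Carlo check of the same propagation
samples = rng.normal(mean_err, std_err, size=(100_000, n_pairs)).sum(axis=1)
print(f"Monte Carlo        : {samples.mean():.1f} +/- {samples.std():.1f} kcal/mol")
```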

  15. Errors in CT colonography.

    PubMed

    Trilisky, Igor; Ward, Emily; Dachman, Abraham H

    2015-10-01

    CT colonography (CTC) is a colorectal cancer screening modality which is becoming more widely implemented and has shown polyp detection rates comparable to those of optical colonoscopy. CTC has the potential to improve population screening rates due to its minimal invasiveness, no sedation requirement, potential for reduced cathartic examination, faster patient throughput, and cost-effectiveness. Proper implementation of a CTC screening program requires careful attention to numerous factors, including patient preparation prior to the examination, the technical aspects of image acquisition, and post-processing of the acquired data. A CTC workstation with dedicated software is required with integrated CTC-specific display features. Many workstations include computer-aided detection software which is designed to decrease errors of detection by detecting and displaying polyp-candidates to the reader for evaluation. There are several pitfalls which may result in false-negative and false-positive reader interpretation. We present an overview of the potential errors in CTC and a systematic approach to avoid them.

  16. Mal-Adaptation of Event-Related EEG Responses Preceding Performance Errors

    PubMed Central

    Eichele, Heike; Juvodden, Hilde T.; Ullsperger, Markus; Eichele, Tom

    2010-01-01

    Recent EEG and fMRI evidence suggests that behavioral errors are foreshadowed by systematic changes in brain activity preceding the outcome by seconds. In order to further characterize this type of error precursor activity, we investigated single-trial event-related EEG activity from 70 participants performing a modified Eriksen flanker task, in particular focusing on the trial-by-trial dynamics of a fronto-central independent component that previously has been associated with error and feedback processing. The stimulus-locked peaks in the N2 and P3 latency range in the event-related averages showed expected compatibility and error-related modulations. In addition, a small pre-stimulus negative slow wave was present at erroneous trials. Significant error-preceding activity was found in local stimulus sequences with decreased conflict in the form of less negativity at the N2 latency (310–350 ms) accumulating across five trials before errors; concomitantly response times were speeding across trials. These results illustrate that error-preceding activity in event-related EEG is associated with the performance monitoring system and we conclude that the dynamics of performance monitoring contribute to the generation of error-prone states in addition to the more remote and indirect effects in ongoing activity such as posterior alpha power in EEG and default mode drifts in fMRI. PMID:20740080

  17. Correcting image placement errors using registration control (RegC®) technology in the photomask periphery

    NASA Astrophysics Data System (ADS)

    Cohen, Avi; Lange, Falk; Ben-Zvi, Guy; Graitzer, Erez; Vladimir, Dmitriev

    2012-11-01

    The ITRS roadmap specifies wafer overlay control as one of the major tasks for the sub 40 nm nodes in addition to CD control and defect control. Wafer overlay is strongly dependent on mask image placement error (registration errors or Reg errors)1. The specifications for registration or mask placement accuracy are significantly tighter in some of the double patterning techniques (DPT). This puts a heavy challenge on mask manufacturers (mask shops) to comply with advanced node registration specifications. The conventional methods of feeding back the systematic registration error to the E-beam writer and re-writing the mask are becoming difficult, expensive and not sufficient for the advanced nodes, especially for double patterning technologies. Six production masks were measured on a standard registration metrology tool and the registration errors were calculated and plotted. A specially developed algorithm, along with the RegC Wizard (dedicated software), was used to compute a corrective lateral strain field that would minimize the registration errors. This strain field was then implemented in the photomask bulk material using an ultra short pulse laser based system. Finally, the post-process registration error maps were measured and the resulting residual registration error field, with and without scale and orthogonal errors removal, was calculated. In this paper we present a robust process flow in the mask shop which leads to up to a 32% improvement in registration 3sigma, bringing some out-of-spec masks into spec, utilizing the RegC® process in the photomask periphery while leaving the exposure field optically unaffected.

  18. Modular error embedding

    DOEpatents

    Sandford, II, Maxwell T.; Handel, Theodore G.; Ettinger, J. Mark

    1999-01-01

    A method of embedding auxiliary information into the digital representation of host data containing noise in the low-order bits. The method applies to digital data representing analog signals, for example digital images. The method reduces the error introduced by other methods that replace the low-order bits with auxiliary information. By a substantially reverse process, the embedded auxiliary data can be retrieved easily by an authorized user through use of a digital key. The modular error embedding method includes a process to permute the order in which the host data values are processed. The method doubles the amount of auxiliary information that can be added to host data values, in comparison with bit-replacement methods for high bit-rate coding. The invention preserves human perception of the meaning and content of the host data, permitting the addition of auxiliary data in the amount of 50% or greater of the original host data.
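
    For orientation only, the sketch below shows the generic idea of hiding auxiliary bits in the low-order bits of host samples, visited in a key-dependent pseudorandom order. It is a plain LSB illustration and deliberately does not reproduce the patented modular-error scheme, its error-reduction property, or its doubled capacity; all names and values are assumptions.

```python
import numpy as np

def embed_bits(host, bits, key):
    """Hide auxiliary bits in the least-significant bits of host samples,
    visiting the samples in a key-dependent pseudorandom order (plain LSB demo)."""
    out = host.copy()
    order = np.random.default_rng(key).permutation(host.size)[:len(bits)]
    out.flat[order] = (out.flat[order] & 0xFE) | np.asarray(bits, dtype=out.dtype)
    return out

def extract_bits(stego, n_bits, key):
    """Recover the auxiliary bits using the same key-derived visiting order."""
    order = np.random.default_rng(key).permutation(stego.size)[:n_bits]
    return (stego.flat[order] & 1).astype(np.uint8)

host = np.random.default_rng(0).integers(0, 256, size=(64, 64), dtype=np.uint8)  # fake image
payload = np.frombuffer(b"hi", dtype=np.uint8)
bits = np.unpackbits(payload)

stego = embed_bits(host, bits, key=1234)
assert bytes(np.packbits(extract_bits(stego, bits.size, key=1234))) == b"hi"
print("max per-pixel change:", int(np.max(np.abs(stego.astype(int) - host.astype(int)))))
```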

  19. Using data assimilation for systematic model improvement

    NASA Astrophysics Data System (ADS)

    Lang, Matthew S.; van Leeuwen, Peter Jan; Browne, Phil

    2016-04-01

    In Numerical Weather Prediction parameterisations are used to simulate missing physics in the model. These can be due to a lack of scientific understanding or a lack of computing power available to address all the known physical processes. Parameterisations are sources of large uncertainty in a model as parameter values used in these parameterisations cannot be measured directly and hence are often not well known, and the parameterisations themselves are approximations of the processes present in the true atmosphere. Whilst there are many efficient and effective methods for combined state/parameter estimation in data assimilation, such as state augmentation, these are not effective at estimating the structure of parameterisations. A new method of parameterisation estimation is proposed that uses sequential data assimilation methods to estimate errors in the numerical models at each space-time point for each model equation. These errors are then fitted to predetermined functional forms of missing physics or parameterisations that are based upon prior information. The method picks out the functional form, or the combination of functional forms, that best fits the error structure. The prior information typically takes the form of expert knowledge. We applied the method to a one-dimensional advection model with additive model error, and it is shown that the method can accurately estimate parameterisations, with consistent error estimates. It is also demonstrated that state augmentation is not successful. The results indicate that this new method is a powerful tool in systematic model improvement.
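
    A toy version of the fitting step is sketched below: per-grid-point model-error estimates (here simulated, with a diffusion-like "missing physics" term) are regressed against candidate functional forms proposed from prior knowledge, and the candidate with the smallest residual is selected. The candidate set, the synthetic state, and the coefficients are all illustrative assumptions rather than the authors' configuration.

```python
import numpy as np

rng = np.random.default_rng(3)

# Per-grid-point model-error estimates, as a sequential DA scheme might produce
# (simulated here): the "true" missing physics is a diffusion-like term.
x = np.linspace(0, 2 * np.pi, 200)
state = np.sin(x) + 0.3 * np.sin(3 * x)
d2state = np.gradient(np.gradient(state, x), x)
model_error = 0.05 * d2state + rng.normal(scale=0.005, size=x.size)

# Candidate functional forms proposed from expert knowledge
candidates = {
    "linear damping  (-u)":      -state,
    "advection       (-du/dx)":  -np.gradient(state, x),
    "diffusion       (d2u/dx2)": d2state,
}

# Least-squares fit of each candidate to the error estimates; the form with the
# smallest residual best explains the missing physics.
for name, basis in candidates.items():
    coef, *_ = np.linalg.lstsq(basis[:, None], model_error, rcond=None)
    rms = np.sqrt(np.mean((model_error - coef[0] * basis) ** 2))
    print(f"{name:28s} coef = {coef[0]:+.3f}   residual RMS = {rms:.4f}")
```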

  20. A Systematic Methodology for Verifying Superscalar Microprocessors

    NASA Technical Reports Server (NTRS)

    Srivas, Mandayam; Hosabettu, Ravi; Gopalakrishnan, Ganesh

    1999-01-01

    We present a systematic approach to decompose and incrementally build the proof of correctness of pipelined microprocessors. The central idea is to construct the abstraction function by using completion functions, one per unfinished instruction, each of which specifies the effect (on the observables) of completing the instruction. In addition to avoiding the term size and case explosion problem that limits the pure flushing approach, our method helps localize errors, and also handles stages with interactive loops. The technique is illustrated on pipelined and superscalar pipelined implementations of a subset of the DLX architecture. It has also been applied to a processor with out-of-order execution.

  1. Systematic reviews.

    PubMed

    Milner, Kerry A

    2015-01-01

    Systematic reviews are a type of literature review in which authors systematically search for, critically appraise, and synthesize evidence from several studies on the same topic (Grant & Booth, 2009). The precise and systematic method differentiates systematic reviews from traditional reviews (Khan, Kunz, Kleijnen, & Antes, 2003). In all types of systematic reviews, a quality assessment is done of the individual studies that meet inclusion criteria. These individual assessments are synthesized, and aggregated results are reported. Systematic reviews are considered the highest level of evidence in evidence-based health care because the reviewers strive to use transparent, rigorous methods that minimize bias.

  2. Global Warming Estimation from MSU: Correction for Drift and Calibration Errors

    NASA Technical Reports Server (NTRS)

    Prabhakara, C.; Iacovazzi, R., Jr.; Yoo, J.-M.; Einaudi, Franco (Technical Monitor)

    2000-01-01

    Microwave Sounding Unit (MSU) radiometer observations in Ch 2 (53.74 GHz), made in the nadir direction from sequential, sun-synchronous, polar-orbiting NOAA morning satellites (NOAA 6, 10 and 12 that have about 7am/7pm orbital geometry) and afternoon satellites (NOAA 7, 9, 11 and 14 that have about 2am/2pm orbital geometry) are analyzed in this study to derive the global temperature trend from 1980 to 1998. In order to remove the discontinuities between the data of the successive satellites and to get a continuous time series, first we have used the shortest possible time record of each satellite. In this way we get a preliminary estimate of the global temperature trend of 0.21 K/decade. However, this estimate is affected by systematic time-dependent errors. One such error is the instrument calibration error. This error can be inferred whenever there are overlapping measurements made by two satellites over an extended period of time. From the available successive satellite data we have taken the longest possible time record of each satellite to form the time series during the period 1980 to 1998 in order to account for this error. We find this error can decrease the global temperature trend by about 0.07 K/decade. In addition there are systematic time-dependent errors present in the data that are introduced by the drift in the satellite orbital geometry: one arises from the diurnal cycle in temperature, and the other is the drift-related change in the calibration of the MSU. In order to analyze the nature of these drift-related errors the multi-satellite Ch 2 data set is partitioned into am and pm subsets to create two independent time series. The diurnal-cycle error can be assessed in the am and pm data of Ch 2 on land and can be eliminated. Observations made in the MSU Ch 1 (50.3 GHz) support this approach. The calibration-drift error is obvious only in the difference between the pm and am observations of Ch 2 over the ocean. We have followed two different paths to assess the impact of the errors on the global temperature trend. In one path the

  3. Global Warming Estimation from MSU: Correction for Drift and Calibration Errors

    NASA Technical Reports Server (NTRS)

    Prabhakara, C.; Iacovazzi, R., Jr.; Yoo, J.-M.

    2000-01-01

    Microwave Sounding Unit (MSU) radiometer observations in Ch 2 (53.74 GHz), made in the nadir direction from sequential, sun-synchronous, polar-orbiting NOAA morning satellites (NOAA 6, 10 and 12 that have approximately 7am/7pm orbital geometry) and afternoon satellites (NOAA 7, 9, 11 and 14 that have approximately 2am/2pm orbital geometry) are analyzed in this study to derive the global temperature trend from 1980 to 1998. In order to remove the discontinuities between the data of the successive satellites and to get a continuous time series, we first used the shortest possible time record of each satellite. In this way we get a preliminary estimate of the global temperature trend of 0.21 K/decade. However, this estimate is affected by systematic time-dependent errors. One such error is the instrument calibration error eo. This error can be inferred whenever there are overlapping measurements made by two satellites over an extended period of time. From the available successive satellite data we have taken the longest possible time record of each satellite to form the time series during the period 1980 to 1998 and to account for this error eo. We find eo can decrease the global temperature trend by approximately 0.07 K/decade. In addition there are systematic time-dependent errors ed and ec present in the data that are introduced by the drift in the satellite orbital geometry. ed arises from the diurnal cycle in temperature and ec is the drift-related change in the calibration of the MSU. In order to analyze the nature of these drift-related errors the multi-satellite Ch 2 data set is partitioned into am and pm subsets to create two independent time series. The error ed can be assessed in the am and pm data of Ch 2 on land and can be eliminated. Observations made in the MSU Ch 1 (50.3 GHz) support this approach. The error ec is obvious only in the difference between the pm and am observations of Ch 2 over the ocean. We have followed two different paths to assess the impact of the
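    As an illustration of the am/pm partitioning step, the sketch below builds a synthetic multi-satellite record (the trend, drift term and noise level are assumptions, not MSU Ch 2 values), splits it into morning and afternoon series, and compares their fitted linear trends; a nonzero trend in the pm-minus-am difference is the signature of drift-related errors such as ed and ec.

    ```python
    # Minimal sketch (synthetic data, not MSU brightness temperatures) of partitioning a
    # record into "am" and "pm" time series and comparing their linear trends.
    import numpy as np

    rng = np.random.default_rng(1)
    months = np.arange(228)                      # 1980-1998, monthly
    t_years = months / 12.0
    true_trend = 0.14                            # K/decade, assumed for the synthetic series
    signal = true_trend / 10.0 * t_years

    am = signal + 0.1 * rng.standard_normal(months.size)                          # morning satellites
    pm = signal + 0.07 / 10.0 * t_years + 0.1 * rng.standard_normal(months.size)  # afternoon, with drift error

    def trend_per_decade(t, y):
        slope = np.polyfit(t, y, 1)[0]           # K per year
        return 10.0 * slope

    print("am trend (K/decade):", round(trend_per_decade(t_years, am), 3))
    print("pm trend (K/decade):", round(trend_per_decade(t_years, pm), 3))
    print("pm - am  (K/decade):", round(trend_per_decade(t_years, pm - am), 3))
    ```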

  4. Error probability performance of unbalanced QPSK receivers

    NASA Technical Reports Server (NTRS)

    Simon, M. K.

    1978-01-01

    A simple technique for calculating the error probability performance and associated noisy reference loss of practical unbalanced QPSK receivers is presented. The approach is based on expanding the error probability conditioned on the loop phase error in a power series in the loop phase error and then, keeping only the first few terms of this series, averaging this conditional error probability over the probability density function of the loop phase error. Doing so results in an expression for the average error probability which is in the form of a leading term representing the ideal (perfect synchronization references) performance plus a term proportional to the mean-squared crosstalk. Thus, the additional error probability due to noisy synchronization references occurs as an additive term proportional to the mean-squared phase jitter directly associated with the receiver's tracking loop. Similar arguments are advanced to give closed-form results for the noisy reference loss itself.
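    A minimal numeric sketch of the same recipe, applied for simplicity to coherent BPSK rather than the paper's unbalanced QPSK receiver (the SNR and rms phase jitter below are assumed values): expand the conditional error probability in the loop phase error, keep the leading terms, and compare with a direct numerical average over the Gaussian jitter density.

    ```python
    # Series-expansion average of the conditional error probability over phase jitter.
    import numpy as np
    from scipy.stats import norm
    from scipy.integrate import quad

    ebn0_db = 7.0                         # assumed SNR
    a = np.sqrt(2.0 * 10 ** (ebn0_db / 10.0))
    sigma_phi = 0.15                      # assumed rms loop phase jitter (radians)

    def p_cond(phi):
        """Conditional bit error probability given a static phase error phi."""
        return norm.sf(a * np.cos(phi))

    # Exact average: integrate P(E | phi) over the Gaussian jitter density.
    p_exact, _ = quad(lambda p: p_cond(p) * norm.pdf(p, scale=sigma_phi), -np.pi, np.pi)

    # Series approximation: P(0) + 0.5 * P''(0) * E[phi^2]  (the odd term averages to zero).
    h = 1e-4
    p2 = (p_cond(h) - 2 * p_cond(0.0) + p_cond(-h)) / h**2
    p_series = p_cond(0.0) + 0.5 * p2 * sigma_phi**2

    print(f"ideal (perfect reference): {p_cond(0.0):.3e}")
    print(f"series approximation:      {p_series:.3e}")
    print(f"numerical average:         {p_exact:.3e}")
    ```

    The leading term is the ideal performance and the correction is an additive term proportional to the mean-squared phase jitter, mirroring the structure described in the abstract.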

  5. Violation of Heisenberg's error-disturbance uncertainty relation in neutron-spin measurements

    NASA Astrophysics Data System (ADS)

    Sulyok, Georg; Sponar, Stephan; Erhart, Jacqueline; Badurek, Gerald; Ozawa, Masanao; Hasegawa, Yuji

    2013-08-01

    In its original formulation, Heisenberg's uncertainty principle dealt with the relationship between the error of a quantum measurement and the thereby induced disturbance on the measured object. Meanwhile, Heisenberg's heuristic arguments have turned out to be correct only for special cases. An alternative universally valid relation was derived by Ozawa in 2003. Here, we demonstrate that Ozawa's predictions hold for projective neutron-spin measurements. The experimental inaccessibility of error and disturbance claimed elsewhere has been overcome using a tomographic method. By a systematic variation of experimental parameters in the entire configuration space, the physical behavior of error and disturbance for projective spin-1/2 measurements is illustrated comprehensively. The violation of Heisenberg's original relation, as well as the validity of Ozawa's relation, becomes manifest. In addition, our results demonstrate that the widespread assumption of a reciprocal relation between error and disturbance is not valid in general.
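    For reference, a commonly quoted form of the two relations compared in the experiment is shown below, with ε(A) the measurement error on A, η(B) the disturbance on B, and σ the standard deviation in the prepared state; this is a standard textbook rendering, not a formula quoted from the paper itself.

    ```latex
    \begin{align}
      \text{Heisenberg-type:}\quad &
      \varepsilon(A)\,\eta(B) \;\ge\; \tfrac{1}{2}\,\bigl|\langle[\hat A,\hat B]\rangle\bigr| \\
      \text{Ozawa (2003):}\quad &
      \varepsilon(A)\,\eta(B) + \varepsilon(A)\,\sigma(B) + \sigma(A)\,\eta(B)
      \;\ge\; \tfrac{1}{2}\,\bigl|\langle[\hat A,\hat B]\rangle\bigr|
    \end{align}
    ```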

  6. Sources of Error in Mammalian Genetic Screens

    PubMed Central

    Sack, Laura Magill; Davoli, Teresa; Xu, Qikai; Li, Mamie Z.; Elledge, Stephen J.

    2016-01-01

    Genetic screens are invaluable tools for dissection of biological phenomena. Optimization of such screens to enhance discovery of candidate genes and minimize false positives is thus a critical aim. Here, we report several sources of error common to pooled genetic screening techniques used in mammalian cell culture systems, and demonstrate methods to eliminate these errors. We find that reverse transcriptase-mediated recombination during retroviral replication can lead to uncoupling of molecular tags, such as DNA barcodes (BCs), from their associated library elements, leading to chimeric proviral genomes in which BCs are paired to incorrect ORFs, shRNAs, etc. This effect depends on the length of homologous sequence between unique elements, and can be minimized with careful vector design. Furthermore, we report that residual plasmid DNA from viral packaging procedures can contaminate transduced cells. These plasmids serve as additional copies of the PCR template during library amplification, resulting in substantial inaccuracies in measurement of initial reference populations for screen normalization. The overabundance of template in some samples causes an imbalance between PCR cycles of contaminated and uncontaminated samples, which results in a systematic artifactual depletion of GC-rich library elements. Elimination of contaminating plasmid DNA using the bacterial endonuclease Benzonase can restore faithful measurements of template abundance and minimize GC bias. PMID:27402361

  7. Evaluation of Topographic Error and Quality with Stereophotoclinometry

    NASA Astrophysics Data System (ADS)

    Palmer, Eric; Weirich, John; Campbell, Tanner; Lambert, Diane; Drozd, Kristofer

    2016-10-01

    One of the primary means to evaluate the accuracy of a shape model is to measure the deviation between a truth model (if available) and the shape model. Typically, this is done by calculating the square root of the average error squared of all the points, i.e., the root mean squared error (RMS). This technique provides valuable insight into the error distribution of a shape model, as well as providing an objective measurement of deviations. However, it does not fully explain the error and especially the quality of a digital terrain model. Systematic errors can obscure poorly performing regions and may over-report errors. We have begun an extensive analysis of using normalized cross-correlation to evaluate the quality of shape models compared to truth topography, as well as the agreement of images rendered from the model with the original images. This technique provides a tool to differentiate between local accuracy and global accuracy. It also provides an effective way to decompose the error vector into horizontal and vertical displacements. It is especially useful for stereophotoclinometry (SPC) because it allows a clear determination of the quality of the model at the resolution of the source images (i.e., if the source images have a 5 cm pixel size, it shows how well the SPC solution performs at 5 cm). Additionally, it demonstrates how essential a good imaging plan is to the quality of the shape model. We are using these techniques in support of the OSIRIS-REx mission to the asteroid Bennu.
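    The sketch below illustrates the contrast on hypothetical arrays (synthetic terrain, not OSIRIS-REx or truth data): a single global RMS figure versus a map of normalized cross-correlation over local tiles, which separates well-matched regions from poorly performing ones. Tile size and noise level are arbitrary assumptions.

    ```python
    # Global RMS error versus a local normalized cross-correlation (NCC) quality map.
    import numpy as np

    rng = np.random.default_rng(0)
    truth = rng.standard_normal((128, 128)).cumsum(axis=0).cumsum(axis=1)  # synthetic terrain
    model = truth + 0.5 * rng.standard_normal(truth.shape)                  # shape model with error

    rms = np.sqrt(np.mean((model - truth) ** 2))
    print(f"global RMS error: {rms:.3f}")

    def ncc(a, b):
        """Normalized cross-correlation of two equally sized patches."""
        a = a - a.mean()
        b = b - b.mean()
        return float(np.sum(a * b) / np.sqrt(np.sum(a * a) * np.sum(b * b)))

    # Local quality: NCC over non-overlapping 32x32 tiles.
    tile = 32
    for i in range(0, truth.shape[0], tile):
        row = [ncc(truth[i:i+tile, j:j+tile], model[i:i+tile, j:j+tile])
               for j in range(0, truth.shape[1], tile)]
        print(" ".join(f"{v:5.2f}" for v in row))
    ```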

  8. Errors Associated with the Direct Measurement of Radionuclides in Wounds

    SciTech Connect

    Hickman, D P

    2006-03-02

    Work in radiation areas can occasionally result in accidental wounds containing radioactive materials. When a wound is incurred within a radiological area, the presence of radioactivity in the wound needs to be confirmed to determine if additional remedial action needs to be taken. Commonly used radiation area monitoring equipment is poorly suited for measurement of radioactive material buried within the tissue of the wound. The Lawrence Livermore National Laboratory (LLNL) In Vivo Measurement Facility has constructed a portable wound counter that provides sufficient detection of radioactivity in wounds as shown in Fig. 1. The LLNL wound measurement system is specifically designed to measure low energy photons that are emitted from uranium and transuranium radionuclides. The portable wound counting system uses a 2.5 cm diameter by 1 mm thick NaI(Tl) detector. The detector is connected to a Canberra NaI InSpector (trademark). The InSpector interfaces with an IBM ThinkPad laptop computer, which operates under Genie 2000 software. The wound counting system is maintained and used at the LLNL In Vivo Measurement Facility. The hardware is designed to be portable and is occasionally deployed to respond to the LLNL Health Services facility or local hospitals for examination of personnel who may have radioactive materials within a wound. The typical detection level using the LLNL portable wound counter in a low-background area is 0.4 nCi to 0.6 nCi assuming a near zero mass source. This paper documents the systematic errors associated with in vivo measurement of radioactive materials buried within wounds using the LLNL portable wound measurement system. These errors are divided into two basic categories, calibration errors and in vivo wound measurement errors. Within these categories, there are errors associated with particle self-absorption of photons, overlying tissue thickness, source distribution within the wound, and count errors. These errors have been examined and

  9. Single Antenna Phase Errors for NAVSPASUR Receivers

    DTIC Science & Technology

    1988-11-30

    with data from the Kickapoo transmitter are larger than the errors from the low-power transmitters (i.e., Gila River and Jordan Lake). Further, the...errors in the phase data associated with the Kickapoo transmitter show significant variability among data taken on different days. We have applied a...a clear systematic bias in the derived chirp for targets illuminated by the Kickapoo transmitter. Near-field effects probably account for the larger

  10. The effectiveness of selected feed and water additives for reducing Salmonella spp. of public health importance in broiler chickens: a systematic review, meta-analysis, and meta-regression approach.

    PubMed

    Totton, Sarah C; Farrar, Ashley M; Wilkins, Wendy; Bucher, Oliver; Waddell, Lisa A; Wilhelm, Barbara J; McEwen, Scott A; Rajić, Andrijana

    2012-10-01

    Eating inappropriately prepared poultry meat is a major cause of foodborne salmonellosis. Our objectives were to determine the efficacy of feed and water additives (other than competitive exclusion and antimicrobials) on reducing Salmonella prevalence or concentration in broiler chickens using systematic review-meta-analysis and to explore sources of heterogeneity found in the meta-analysis through meta-regression. Six electronic databases were searched (Current Contents (1999-2009), Agricola (1924-2009), MEDLINE (1860-2009), Scopus (1960-2009), Centre for Agricultural Bioscience (CAB) (1913-2009), and CAB Global Health (1971-2009)), five topic experts were contacted, and the bibliographies of review articles and a topic-relevant textbook were manually searched to identify all relevant research. Study inclusion criteria comprised: English-language primary research investigating the effects of feed and water additives on the Salmonella prevalence or concentration in broiler chickens. Data extraction and study methodological assessment were conducted by two reviewers independently using pretested forms. Seventy challenge studies (n=910 unique treatment-control comparisons), seven controlled studies (n=154), and one quasi-experiment (n=1) met the inclusion criteria. Compared to an assumed control group prevalence of 44 of 1000 broilers, random-effects meta-analysis indicated that the Salmonella cecal colonization in groups with prebiotics (fructooligosaccharide, lactose, whey, dried milk, lactulose, lactosucrose, sucrose, maltose, mannanoligosaccharide) added to feed or water was 15 out of 1000 broilers; with lactose added to feed or water it was 10 out of 1000 broilers; with experimental chlorate product (ECP) added to feed or water it was 21 out of 1000. For ECP the concentration of Salmonella in the ceca was decreased by 0.61 log(10)cfu/g in the treated group compared to the control group. Significant heterogeneity (Cochran's Q-statistic p≤0.10) was observed

  11. Errors of measurement by laser goniometer

    NASA Astrophysics Data System (ADS)

    Agapov, Mikhail Y.; Bournashev, Milhail N.

    2000-11-01

    The report is dedicated to research on the systematic errors of angle measurement by a dynamic laser goniometer (DLG) based on a ring laser (RL), intended for certification of optical angle encoders (OE), and to the development of methods for separating errors of different types and compensating them algorithmically. The OE was of the absolute photoelectric angle encoder type with an informational capacity of 14 bits. Kinematic connection with a rotary platform was made through a mechanical connection unit (CU). The measurement and separation of the systematic error into components was carried out by applying a cross-calibration method with mutual rotations of the OE relative to the DLG base and of the CU relative to the OE rotor, followed by a Fourier analysis of the observed data. Dynamic errors of angle measurement were investigated using the dependence, on the angular rate of rotation, of the measured angle between the reference direction assigned by an interference null-indicator (NI) with an 8-faced optical polygon (OP) and the direction defined by the OE. The results obtained allow algorithmic compensation of the systematic error and, overall, a considerable reduction of the total measurement error.
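    A minimal sketch of the Fourier step on synthetic data (the harmonic amplitudes and noise level are assumptions, not the DLG/OE measurements): express the encoder error as a function of shaft angle, identify the dominant harmonics in its spectrum, and subtract that reconstructed systematic component.

    ```python
    # Fourier separation of the angle-dependent systematic error of an encoder.
    import numpy as np

    n = 2 ** 14                                   # samples over one full revolution (14-bit encoder)
    theta = 2 * np.pi * np.arange(n) / n

    rng = np.random.default_rng(2)
    systematic = 3.0 * np.sin(theta) + 1.2 * np.sin(2 * theta + 0.4) + 0.5 * np.sin(8 * theta)
    error = systematic + 0.3 * rng.standard_normal(n)   # arcsec, with random noise

    spectrum = np.fft.rfft(error) / n
    dominant = np.argsort(np.abs(spectrum[1:]))[::-1][:3] + 1   # strongest harmonics
    print("dominant harmonics per revolution:", sorted(dominant.tolist()))

    # Reconstruct the systematic part from those harmonics and subtract it.
    mask = np.zeros_like(spectrum)
    mask[dominant] = spectrum[dominant]
    compensation = np.fft.irfft(mask * n, n)
    residual = error - compensation
    print(f"rms before compensation: {error.std():.2f}  after: {residual.std():.2f}")
    ```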

  12. Errors in general practice: development of an error classification and pilot study of a method for detecting errors

    PubMed Central

    Rubin, G; George, A; Chinn, D; Richardson, C

    2003-01-01

    Objective: To describe a classification of errors and to assess the feasibility and acceptability of a method for recording staff reported errors in general practice. Design: An iterative process in a pilot practice was used to develop a classification of errors. This was incorporated in an anonymous self-report form which was then used to collect information on errors during June 2002. The acceptability of the reporting process was assessed using a self-completion questionnaire. Setting: UK general practice. Participants: Ten general practices in the North East of England. Main outcome measures: Classification of errors, frequency of errors, error rates per 1000 appointments, acceptability of the process to participants. Results: 101 events were used to create an initial error classification. This contained six categories: prescriptions, communication, appointments, equipment, clinical care, and "other" errors. Subsequently, 940 errors were recorded in a single 2 week period from 10 practices, providing additional information. 42% (397/940) were related to prescriptions, although only 6% (22/397) of these were medication errors. Communication errors accounted for 30% (282/940) of errors and clinical errors 3% (24/940). The overall error rate was 75.6/1000 appointments (95% CI 71 to 80). The method of error reporting was found to be acceptable by 68% (36/53) of respondents with only 8% (4/53) finding the process threatening. Conclusion: We have developed a classification of errors and described a practical and acceptable method for reporting them that can be used as part of the process of risk management. Errors are common and, although all have the potential to lead to an adverse event, most are administrative. PMID:14645760

  13. Single antenna phase errors for NAVSPASUR receivers

    NASA Astrophysics Data System (ADS)

    Andrew, M. D.; Wadiak, E. J.

    1988-11-01

    Interferometrics Inc. has investigated the phase errors on single antenna NAVSPASUR data. We find that the single antenna phase errors are well modeled as a function of signal strength only. The phase errors associated with data from the Kickapoo transmitter are larger than the errors from the low-power transmitters (i.e., Gila River and Jordan Lake). Further, the errors in the phase data associated with the Kickapoo transmitter show significant variability among data taken on different days. We have applied a quadratic polynomial fit to the single antenna phases to derive the Doppler shift and chirp, and we have estimated the formal errors associated with these quantities. These formal errors have been parameterized as a function of peak signal strength and number of data frames. We find that for a typical satellite observation the derived Doppler shift has a formal error of approx. 0.2 Hz and the derived chirp has a formal error of less than or approx. 1 Hz/sec. There is a clear systematic bias in the derived chirp for targets illuminated by the Kickapoo transmitter. Near-field effects probably account for the larger phase errors and the chirp bias of the Kickapoo transmitter.
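    The quadratic-fit step can be sketched as follows on synthetic, unwrapped single-antenna phases (the frame rate, Doppler and chirp values are assumptions, not NAVSPASUR parameters): fit the phase with a second-order polynomial and read the Doppler shift and chirp off the first- and second-order coefficients.

    ```python
    # Quadratic polynomial fit to phase data to derive Doppler shift and chirp.
    import numpy as np

    rng = np.random.default_rng(3)
    frame_rate = 50.0                               # assumed frames per second
    t = np.arange(200) / frame_rate                 # 4 s of data
    f_dop, chirp = 812.3, -0.7                      # true Doppler (Hz) and chirp (Hz/s), assumed

    phase = 2 * np.pi * (f_dop * t + 0.5 * chirp * t ** 2)      # radians
    phase += 0.05 * rng.standard_normal(t.size)                  # single-antenna phase noise

    a2, a1, a0 = np.polyfit(t, phase, 2)
    print(f"Doppler estimate: {a1 / (2 * np.pi):8.2f} Hz   (true {f_dop})")
    print(f"chirp estimate:   {a2 / np.pi:8.3f} Hz/s  (true {chirp})")
    ```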

  14. The effects of recall errors and of selection bias in epidemiologic studies of mobile phone use and cancer risk.

    PubMed

    Vrijheid, Martine; Deltour, Isabelle; Krewski, Daniel; Sanchez, Marie; Cardis, Elisabeth

    2006-07-01

    This paper examines the effects of systematic and random errors in recall and of selection bias in case-control studies of mobile phone use and cancer. These sensitivity analyses are based on Monte-Carlo computer simulations and were carried out within the INTERPHONE Study, an international collaborative case-control study in 13 countries. Recall error scenarios simulated plausible values of random and systematic, non-differential and differential recall errors in amount of mobile phone use reported by study subjects. Plausible values for the recall error were obtained from validation studies. Selection bias scenarios assumed varying selection probabilities for cases and controls, mobile phone users, and non-users. Where possible these selection probabilities were based on existing information from non-respondents in INTERPHONE. Simulations used exposure distributions based on existing INTERPHONE data and assumed varying levels of the true risk of brain cancer related to mobile phone use. Results suggest that random recall errors of plausible levels can lead to a large underestimation in the risk of brain cancer associated with mobile phone use. Random errors were found to have larger impact than plausible systematic errors. Differential errors in recall had very little additional impact in the presence of large random errors. Selection bias resulting from underselection of unexposed controls led to J-shaped exposure-response patterns, with risk apparently decreasing at low to moderate exposure levels. The present results, in conjunction with those of the validation studies conducted within the INTERPHONE study, will play an important role in the interpretation of existing and future case-control studies of mobile phone use and cancer risk, including the INTERPHONE study.
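    A minimal sketch of the kind of simulation involved, with hypothetical parameters rather than the INTERPHONE exposure distributions: nondifferential random misclassification of recalled exposure attenuates the estimated odds ratio toward 1, which is the underestimation effect described above.

    ```python
    # Monte Carlo illustration of recall-error attenuation of an odds ratio.
    import numpy as np

    rng = np.random.default_rng(4)
    n = 200_000
    true_or = 1.5                                   # assumed true odds ratio for "exposed"

    exposed = rng.random(n) < 0.4                   # true exposure status
    baseline_odds = 0.01
    odds = baseline_odds * np.where(exposed, true_or, 1.0)
    case = rng.random(n) < odds / (1 + odds)

    # Nondifferential misclassification of recalled exposure (same error rate for cases/controls).
    flip = rng.random(n) < 0.15
    reported = np.where(flip, ~exposed, exposed)

    def odds_ratio(exp_flag, case_flag):
        a = np.sum(exp_flag & case_flag)            # exposed cases
        b = np.sum(exp_flag & ~case_flag)           # exposed controls
        c = np.sum(~exp_flag & case_flag)           # unexposed cases
        d = np.sum(~exp_flag & ~case_flag)          # unexposed controls
        return (a * d) / (b * c)

    print(f"OR from true exposure:     {odds_ratio(exposed, case):.2f}")
    print(f"OR from recalled exposure: {odds_ratio(reported, case):.2f}   (attenuated toward 1)")
    ```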

  15. Studies of Error Sources in Geodetic VLBI

    NASA Technical Reports Server (NTRS)

    Rogers, A. E. E.; Niell, A. E.; Corey, B. E.

    1996-01-01

    Achieving the goal of millimeter uncertainty in three dimensional geodetic positioning on a global scale requires significant improvement in the precision and accuracy of both random and systematic error sources. For this investigation we proposed to study errors due to instrumentation in Very Long Base Interferometry (VLBI) and due to the atmosphere. After the inception of this work we expanded the scope to include assessment of error sources in GPS measurements, especially as they affect the vertical component of site position and the measurement of water vapor in the atmosphere. The atmosphere correction 'improvements described below are of benefit to both GPS and VLBI.

  16. Performance Metrics, Error Modeling, and Uncertainty Quantification

    NASA Technical Reports Server (NTRS)

    Tian, Yudong; Nearing, Grey S.; Peters-Lidard, Christa D.; Harrison, Kenneth W.; Tang, Ling

    2016-01-01

    A common set of statistical metrics has been used to summarize the performance of models or measurements, the most widely used ones being bias, mean square error, and linear correlation coefficient. They assume linear, additive, Gaussian errors, and they are interdependent, incomplete, and incapable of directly quantifying uncertainty. The authors demonstrate that these metrics can be directly derived from the parameters of the simple linear error model. Since a correct error model captures the full error information, it is argued that the specification of a parametric error model should be an alternative to the metrics-based approach. The error-modeling methodology is applicable to both linear and nonlinear errors, while the metrics are only meaningful for linear errors. In addition, the error model expresses the error structure more naturally, and directly quantifies uncertainty. This argument is further explained by highlighting the intrinsic connections between the performance metrics, the error model, and the joint distribution between the data and the reference.
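    The central claim can be illustrated with a short sketch, assuming the simple additive linear error model y = a + b*x + e with e independent of x: bias, mean square error, and correlation computed directly from simulated data agree with the values predicted from the model parameters (a, b, sigma) alone.

    ```python
    # Deriving the common performance metrics from a fitted linear error model.
    import numpy as np

    rng = np.random.default_rng(5)
    x = rng.gamma(shape=2.0, scale=3.0, size=100_000)          # "truth"
    a, b, sigma = 0.8, 0.9, 1.5                                 # assumed error-model parameters
    y = a + b * x + sigma * rng.standard_normal(x.size)         # "measurement"

    mu_x, var_x = x.mean(), x.var()

    # Metrics computed directly from the data ...
    bias_d = np.mean(y - x)
    mse_d = np.mean((y - x) ** 2)
    corr_d = np.corrcoef(x, y)[0, 1]

    # ... and the same metrics predicted from (a, b, sigma) alone.
    bias_m = a + (b - 1) * mu_x
    mse_m = bias_m ** 2 + (b - 1) ** 2 * var_x + sigma ** 2
    corr_m = b * np.sqrt(var_x) / np.sqrt(b ** 2 * var_x + sigma ** 2)

    print(f"bias: data {bias_d:.3f}  model {bias_m:.3f}")
    print(f"MSE:  data {mse_d:.3f}  model {mse_m:.3f}")
    print(f"corr: data {corr_d:.3f}  model {corr_m:.3f}")
    ```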

  17. Error management in blood establishments: results of eight years of experience (2003–2010) at the Croatian Institute of Transfusion Medicine

    PubMed Central

    Vuk, Tomislav; Barišić, Marijan; Očić, Tihomir; Mihaljević, Ivanka; Šarlija, Dorotea; Jukić, Irena

    2012-01-01

    Background. Continuous and efficient error management, including procedures from error detection to their resolution and prevention, is an important part of quality management in blood establishments. At the Croatian Institute of Transfusion Medicine (CITM), error management has been systematically performed since 2003. Materials and methods. Data derived from error management at the CITM during an 8-year period (2003–2010) formed the basis of this study. Throughout the study period, errors were reported to the Department of Quality Assurance. In addition to surveys and the necessary corrective activities, errors were analysed and classified according to the Medical Event Reporting System for Transfusion Medicine (MERS-TM). Results. During the study period, a total of 2,068 errors were recorded, including 1,778 (86.0%) in blood bank activities and 290 (14.0%) in blood transfusion services. As many as 1,744 (84.3%) errors were detected before issue of the product or service. Among the 324 errors identified upon release from the CITM, 163 (50.3%) errors were detected by customers and reported as complaints. In only five cases was an error detected after blood product transfusion however without any harmful consequences for the patients. All errors were, therefore, evaluated as “near miss” and “no harm” events. Fifty-two (2.5%) errors were evaluated as high-risk events. With regards to blood bank activities, the highest proportion of errors occurred in the processes of labelling (27.1%) and blood collection (23.7%). With regards to blood transfusion services, errors related to blood product issuing prevailed (24.5%). Conclusion. This study shows that comprehensive management of errors, including near miss errors, can generate data on the functioning of transfusion services, which is a precondition for implementation of efficient corrective and preventive actions that will ensure further improvement of the quality and safety of transfusion treatment. PMID

  18. A spectral filter for ESMR's sidelobe errors

    NASA Technical Reports Server (NTRS)

    Chesters, D.

    1979-01-01

    Fourier analysis was used to remove periodic errors from a series of NIMBUS-5 electronically scanned microwave radiometer brightness temperatures. The observations were all taken from the midnight orbits over fixed sites in the Australian grasslands. The angular dependence of the data indicates calibration errors consisted of broad sidelobes and some miscalibration as a function of beam position. Even though an angular recalibration curve cannot be derived from the available data, the systematic errors can be removed with a spectral filter. The 7 day cycle in the drift of the orbit of NIMBUS-5, coupled to the look-angle biases, produces an error pattern with peaks in its power spectrum at the weekly harmonics. About plus or minus 4 K of error is removed by simply blocking the variations near two- and three-cycles-per-week.
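    A minimal sketch of such a spectral filter on a synthetic daily series (amplitudes and frequencies are assumptions, not the ESMR brightness temperatures): zero the Fourier bins near two and three cycles per week and transform back.

    ```python
    # Blocking variations near the weekly harmonics with an FFT-based notch filter.
    import numpy as np

    rng = np.random.default_rng(6)
    days = np.arange(364)                                # one year of daily samples
    freq_per_week = np.fft.rfftfreq(days.size, d=1.0) * 7.0

    signal = 2.0 * np.sin(2 * np.pi * days / 60.0)       # slow geophysical variation
    artifact = 3.0 * np.sin(2 * np.pi * 2 * days / 7.0) + 2.0 * np.sin(2 * np.pi * 3 * days / 7.0)
    series = signal + artifact + 0.5 * rng.standard_normal(days.size)

    spec = np.fft.rfft(series)
    block = (np.abs(freq_per_week - 2.0) < 0.1) | (np.abs(freq_per_week - 3.0) < 0.1)
    spec[block] = 0.0
    filtered = np.fft.irfft(spec, days.size)

    print(f"rms error vs clean signal, before: {np.std(series - signal):.2f} K")
    print(f"rms error vs clean signal, after:  {np.std(filtered - signal):.2f} K")
    ```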

  19. Field error lottery

    SciTech Connect

    Elliott, C.J.; McVey, B.; Quimby, D.C.

    1990-01-01

    The level of field errors in an FEL is an important determinant of its performance. We have computed 3D performance of a large laser subsystem subjected to field errors of various types. These calculations have been guided by simple models such as SWOOP. The technique of choice is utilization of the FELEX free electron laser code that now possesses extensive engineering capabilities. Modeling includes the ability to establish tolerances of various types: fast and slow scale field bowing, field error level, beam position monitor error level, gap errors, defocusing errors, energy slew, displacement and pointing errors. Many effects of these errors on relative gain and relative power extraction are displayed and are the essential elements of determining an error budget. The random errors also depend on the particular random number seed used in the calculation. The simultaneous display of the performance versus error level of cases with multiple seeds illustrates the variations attributable to stochasticity of this model. All these errors are evaluated numerically for comprehensive engineering of the system. In particular, gap errors are found to place requirements beyond mechanical tolerances of ±25 μm, and amelioration of these may occur by a procedure utilizing direct measurement of the magnetic fields at assembly time. 4 refs., 12 figs.

  20. How social is error observation? The neural mechanisms underlying the observation of human and machine errors

    PubMed Central

    Deschrijver, Eliane; Brass, Marcel

    2014-01-01

    Recently, it has been shown that the medial prefrontal cortex (MPFC) is involved in error execution as well as error observation. Based on this finding, it has been argued that recognizing each other’s mistakes might rely on motor simulation. In the current functional magnetic resonance imaging (fMRI) study, we directly tested this hypothesis by investigating whether medial prefrontal activity in error observation is restricted to situations that enable simulation. To this aim, we compared brain activity related to the observation of errors that can be simulated (human errors) with brain activity related to errors that cannot be simulated (machine errors). We show that medial prefrontal activity is not only restricted to the observation of human errors but also occurs when observing errors of a machine. In addition, our data indicate that the MPFC reflects a domain general mechanism of monitoring violations of expectancies. PMID:23314011

  1. Inborn errors of metabolism

    MedlinePlus

    Metabolism - inborn errors of ... Bodamer OA. Approach to inborn errors of metabolism. In: Goldman L, Schafer AI, eds. Goldman's Cecil Medicine. 25th ed. Philadelphia, PA: Elsevier Saunders; 2015:chap 205. Rezvani I, Rezvani G. An ...

  2. Prescribing Errors Involving Medication Dosage Forms

    PubMed Central

    Lesar, Timothy S

    2002-01-01

    CONTEXT Prescribing errors involving medication dose formulations have been reported to occur frequently in hospitals. No systematic evaluations of the characteristics of errors related to medication dosage formulation have been performed. OBJECTIVE To quantify the characteristics, frequency, and potential adverse patient effects of prescribing errors involving medication dosage forms . DESIGN Evaluation of all detected medication prescribing errors involving or related to medication dosage forms in a 631-bed tertiary care teaching hospital. MAIN OUTCOME MEASURES Type, frequency, and potential for adverse effects of prescribing errors involving or related to medication dosage forms. RESULTS A total of 1,115 clinically significant prescribing errors involving medication dosage forms were detected during the 60-month study period. The annual number of detected errors increased throughout the study period. Detailed analysis of the 402 errors detected during the last 16 months of the study demonstrated the most common errors to be: failure to specify controlled release formulation (total of 280 cases; 69.7%) both when prescribing using the brand name (148 cases; 36.8%) and when prescribing using the generic name (132 cases; 32.8%); and prescribing controlled delivery formulations to be administered per tube (48 cases; 11.9%). The potential for adverse patient outcome was rated as potentially “fatal or severe” in 3 cases (0.7%), and “serious” in 49 cases (12.2%). Errors most commonly involved cardiovascular agents (208 cases; 51.7%). CONCLUSIONS Hospitalized patients are at risk for adverse outcomes due to prescribing errors related to inappropriate use of medication dosage forms. This information should be considered in the development of strategies to prevent adverse patient outcomes resulting from such errors. PMID:12213138

  3. Programming Errors in APL.

    ERIC Educational Resources Information Center

    Kearsley, Greg P.

    This paper discusses and provides some preliminary data on errors in APL programming. Data were obtained by analyzing listings of 148 complete and partial APL sessions collected from student terminal rooms at the University of Alberta. Frequencies of errors for the various error messages are tabulated. The data, however, are limited because they…

  4. Military veterans with mental health problems: a protocol for a systematic review to identify whether they have an additional risk of contact with criminal justice systems compared with other veterans groups

    PubMed Central

    2012-01-01

    Background There is concern that some veterans of armed forces, in particular those with mental health, drug or alcohol problems, experience difficulty returning to a civilian way of life and may subsequently come into contact with criminal justice services and imprisonment. The aim of this review is to examine whether military veterans with mental health problems, including substance use, have an additional risk of contact with criminal justice systems when compared with veterans who do not have such problems. The review will also seek to identify veterans’ views and experiences on their contact with criminal justice services, what contributed to or influenced their contact and whether there are any differences, including international and temporal, in incidence, contact type, veteran type, their presenting health needs and reported experiences. Methods/design In this review we will adopt a methodological model similar to that previously used by other researchers when reviewing intervention studies. The model, which we will use as a framework for conducting a review of observational and qualitative studies, consists of two parallel synthesis stages within the review process; one for quantitative research and the other for qualitative research. The third stage involves a cross study synthesis, enabling a deeper understanding of the results of the quantitative synthesis. A range of electronic databases, including MEDLINE, PsychINFO, CINAHL, will be systematically searched, from 1939 to present day, using a broad range of search terms that cover four key concepts: mental health, military veterans, substance misuse, and criminal justice. Studies will be screened against topic specific inclusion/exclusion criteria and then against a smaller subset of design specific inclusion/exclusion criteria. Data will be extracted for those studies that meet the inclusion criteria, and all eligible studies will be critically appraised. Included studies, both quantitative and

  5. Exploring Measurement Error with Cookies: A Real and Virtual Approach via Interactive Excel

    ERIC Educational Resources Information Center

    Sinex, Scott A; Gage, Barbara A.; Beck, Peggy J.

    2007-01-01

    A simple, guided-inquiry investigation using stacked sandwich cookies is employed to develop a simple linear mathematical model and to explore measurement error by incorporating errors as part of the investigation. Both random and systematic errors are presented. The model and errors are then investigated further by engaging with an interactive…

  6. Some considerations of reduction of reference phase error in phase-stepping interferometry.

    PubMed

    Schwider, J; Dresel, T; Manzke, B

    1999-02-01

    Positioning errors and miscalibrations of the phase-stepping device in a phase-stepping interferometer lead to systematic errors proportional to twice the measured phase distribution. We discuss the historical development of various error-compensating phase-shift algorithms from a unified mathematical point of view. Furthermore, we demonstrate experimentally that systematic errors can also be removed a posteriori. A Twyman-Green-type microlens test interferometer was used for the experiments.
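    As one concrete example of an error-compensating algorithm, the sketch below uses the classical five-frame (Hariharan-type) formula, which remains accurate when the nominal π/2 step is miscalibrated, and compares it with the plain four-step algorithm under the same 5% step error. This is an illustrative choice, not necessarily the algorithm used in the experiment described above.

    ```python
    # Error-compensating five-frame phase-shift algorithm versus the four-step algorithm.
    import numpy as np

    rng = np.random.default_rng(7)
    phi = 2 * np.pi * rng.random(1000) - np.pi          # test phases to recover
    step = np.pi / 2 * 1.05                              # 5% miscalibrated phase step
    frames = [1.0 + 0.7 * np.cos(phi + (k - 2) * step) for k in range(5)]
    I1, I2, I3, I4, I5 = frames

    phi_est = np.arctan2(2.0 * (I2 - I4), 2.0 * I3 - I1 - I5)
    err = np.angle(np.exp(1j * (phi_est - phi)))         # wrap the residual to [-pi, pi)
    print(f"rms phase error, five-frame algorithm: {np.std(err):.4f} rad")

    # For comparison, the plain four-step algorithm with the same miscalibrated step:
    J = [1.0 + 0.7 * np.cos(phi + k * step) for k in range(4)]
    phi4 = np.arctan2(J[3] - J[1], J[0] - J[2])
    err4 = np.angle(np.exp(1j * (phi4 - phi)))
    print(f"rms phase error, four-step algorithm:  {np.std(err4):.4f} rad")
    ```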

  7. Food additives

    PubMed Central

    Spencer, Michael

    1974-01-01

    Food additives are discussed from the food technology point of view. The reasons for their use are summarized: (1) to protect food from chemical and microbiological attack; (2) to even out seasonal supplies; (3) to improve their eating quality; (4) to improve their nutritional value. The various types of food additives are considered, e.g. colours, flavours, emulsifiers, bread and flour additives, preservatives, and nutritional additives. The paper concludes with consideration of those circumstances in which the use of additives is (a) justified and (b) unjustified. PMID:4467857

  8. Error analysis using organizational simulation.

    PubMed Central

    Fridsma, D. B.

    2000-01-01

    Organizational simulations have been used by project organizations in civil and aerospace industries to identify work processes and organizational structures that are likely to fail under certain conditions. Using a simulation system based on Galbraith's information-processing theory and Simon's notion of bounded-rationality, we retrospectively modeled a chemotherapy administration error that occurred in a hospital setting. Our simulation suggested that when there is a high rate of unexpected events, the oncology fellow was differentially backlogged with work when compared with other organizational members. Alternative scenarios suggested that providing more knowledge resources to the oncology fellow improved her performance more effectively than adding additional staff to the organization. Although it is not possible to know whether this might have prevented the error, organizational simulation may be an effective tool to prospectively evaluate organizational "weak links", and explore alternative scenarios to correct potential organizational problems before they generate errors. PMID:11079885

  9. Aircraft system modeling error and control error

    NASA Technical Reports Server (NTRS)

    Kulkarni, Nilesh V. (Inventor); Kaneshige, John T. (Inventor); Krishnakumar, Kalmanje S. (Inventor); Burken, John J. (Inventor)

    2012-01-01

    A method for modeling error-driven adaptive control of an aircraft. Normal aircraft plant dynamics is modeled, using an original plant description in which a controller responds to a tracking error e(k) to drive the component to a normal reference value according to an asymptote curve. Where the system senses that (1) at least one aircraft plant component is experiencing an excursion and (2) the return of this component value toward its reference value is not proceeding according to the expected controller characteristics, neural network (NN) modeling of aircraft plant operation may be changed. However, if (1) is satisfied but the error component is returning toward its reference value according to expected controller characteristics, the NN will continue to model operation of the aircraft plant according to an original description.

  10. Search, Memory, and Choice Error: An Experiment.

    PubMed

    Sanjurjo, Adam

    2015-01-01

    Multiple attribute search is a central feature of economic life: we consider much more than price when purchasing a home, and more than wage when choosing a job. An experiment is conducted in order to explore the effects of cognitive limitations on choice in these rich settings, in accordance with the predictions of a new model of search memory load. In each task, subjects are made to search the same information in one of two orders, which differ in predicted memory load. Despite standard models of choice treating such variations in order of acquisition as irrelevant, lower predicted memory load search orders are found to lead to substantially fewer choice errors. An implication of the result for search behavior, more generally, is that in order to reduce memory load (thus choice error) a limited memory searcher ought to deviate from the search path of an unlimited memory searcher in predictable ways-a mechanism that can explain the systematic deviations from optimal sequential search that have recently been discovered in peoples' behavior. Further, as cognitive load is induced endogenously (within the task), and found to affect choice behavior, this result contributes to the cognitive load literature (in which load is induced exogenously), as well as the cognitive ability literature (in which cognitive ability is measured in a separate task). In addition, while the information overload literature has focused on the detrimental effects of the quantity of information on choice, this result suggests that, holding quantity constant, the order that information is observed in is an essential determinant of choice failure.

  11. Search, Memory, and Choice Error: An Experiment

    PubMed Central

    Sanjurjo, Adam

    2015-01-01

    Multiple attribute search is a central feature of economic life: we consider much more than price when purchasing a home, and more than wage when choosing a job. An experiment is conducted in order to explore the effects of cognitive limitations on choice in these rich settings, in accordance with the predictions of a new model of search memory load. In each task, subjects are made to search the same information in one of two orders, which differ in predicted memory load. Despite standard models of choice treating such variations in order of acquisition as irrelevant, lower predicted memory load search orders are found to lead to substantially fewer choice errors. An implication of the result for search behavior, more generally, is that in order to reduce memory load (thus choice error) a limited memory searcher ought to deviate from the search path of an unlimited memory searcher in predictable ways-a mechanism that can explain the systematic deviations from optimal sequential search that have recently been discovered in peoples' behavior. Further, as cognitive load is induced endogenously (within the task), and found to affect choice behavior, this result contributes to the cognitive load literature (in which load is induced exogenously), as well as the cognitive ability literature (in which cognitive ability is measured in a separate task). In addition, while the information overload literature has focused on the detrimental effects of the quantity of information on choice, this result suggests that, holding quantity constant, the order that information is observed in is an essential determinant of choice failure. PMID:26121356

  12. A Cross-Linguistic Speech Error Investigation of Functional Complexity

    ERIC Educational Resources Information Center

    Wells-Jensen, Sheri

    2007-01-01

    This work is a systematic, cross-linguistic examination of speech errors in English, Hindi, Japanese, Spanish and Turkish. It first describes a methodology for the generation of parallel corpora of error data, then uses these data to examine three general hypotheses about the relationship between language structure and the speech production…

  13. Error Analysis: Past, Present, and Future

    ERIC Educational Resources Information Center

    McCloskey, George

    2017-01-01

    This commentary will take an historical perspective on the Kaufman Test of Educational Achievement (KTEA) error analysis, discussing where it started, where it is today, and where it may be headed in the future. In addition, the commentary will compare and contrast the KTEA error analysis procedures that are rooted in psychometric methodology and…

  14. QUANTIFIERS UNDONE: REVERSING PREDICTABLE SPEECH ERRORS IN COMPREHENSION

    PubMed Central

    Frazier, Lyn; Clifton, Charles

    2015-01-01

    Speakers predictably make errors during spontaneous speech. Listeners may identify such errors and repair the input, or their analysis of the input, accordingly. Two written questionnaire studies investigated error compensation mechanisms in sentences with doubled quantifiers such as Many students often turn in their assignments late. Results show a considerable number of undoubled interpretations for all items tested (though fewer for sentences containing doubled negation than for sentences containing many-often, every-always or few-seldom.) This evidence shows that the compositional form-meaning pairing supplied by the grammar is not the only systematic mapping between form and meaning. Implicit knowledge of the workings of the performance systems provides an additional mechanism for pairing sentence form and meaning. Alternate accounts of the data based on either a concord interpretation or an emphatic interpretation of the doubled quantifier don’t explain why listeners fail to apprehend the ‘extra meaning’ added by the potentially redundant material only in limited circumstances. PMID:26478637

  15. Error detection method

    DOEpatents

    Olson, Eric J.

    2013-06-11

    An apparatus, program product, and method that run an algorithm on a hardware based processor, generate a hardware error as a result of running the algorithm, generate an algorithm output for the algorithm, compare the algorithm output to another output for the algorithm, and detect the hardware error from the comparison. The algorithm is designed to cause the hardware based processor to heat to a degree that increases the likelihood of hardware errors to manifest, and the hardware error is observable in the algorithm output. As such, electronic components may be sufficiently heated and/or sufficiently stressed to create better conditions for generating hardware errors, and the output of the algorithm may be compared at the end of the run to detect a hardware error that occurred anywhere during the run that may otherwise not be detected by traditional methodologies (e.g., due to cooling, insufficient heat and/or stress, etc.).
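    A minimal sketch of the compare-the-outputs idea (illustrative only; the method described above additionally relies on a workload designed to heat the processor): run the same deterministic, compute-heavy algorithm twice and flag any mismatch, since a fault-free run must be bit-for-bit repeatable.

    ```python
    # Detect hardware errors by comparing hashed outputs of identical deterministic runs.
    import hashlib
    import struct

    def stress_algorithm(n: int = 200_000) -> bytes:
        """Deterministic floating-point workload whose full output is hashed."""
        h = hashlib.sha256()
        x = 1.2345
        for i in range(1, n):
            x = (x * 1.000001 + 1.0 / i) % 1000.0
            h.update(struct.pack("<d", x))
        return h.digest()

    reference = stress_algorithm()
    check = stress_algorithm()

    if reference != check:
        print("hardware error detected: outputs of identical runs differ")
    else:
        print("no error detected: both runs produced identical output")
    ```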

  16. A Fully Bayesian Approach to Improved Calibration and Prediction of Groundwater Models With Structure Error

    NASA Astrophysics Data System (ADS)

    Xu, T.; Valocchi, A. J.

    2014-12-01

    Effective water resource management typically relies on numerical models to analyse groundwater flow and solute transport processes. These models are usually subject to model structure error due to simplification and/or misrepresentation of the real system. As a result, the model outputs may systematically deviate from measurements, thus violating a key assumption for traditional regression-based calibration and uncertainty analysis. On the other hand, model structure error induced bias can be described statistically in an inductive, data-driven way based on historical model-to-measurement misfit. We adopt a fully Bayesian approach that integrates a Gaussian process error model to account for model structure error to the calibration, prediction and uncertainty analysis of groundwater models. The posterior distributions of parameters of the groundwater model and the Gaussian process error model are jointly inferred using DREAM, an efficient Markov chain Monte Carlo sampler. We test the usefulness of the fully Bayesian approach towards a synthetic case study of surface-ground water interaction under changing pumping conditions. We first illustrate through this example that traditional least squares regression without accounting for model structure error yields biased parameter estimates due to parameter compensation as well as biased predictions. In contrast, the Bayesian approach gives less biased parameter estimates. Moreover, the integration of a Gaussian process error model significantly reduces predictive bias and leads to prediction intervals that are more consistent with observations. The results highlight the importance of explicit treatment of model structure error especially in circumstances where subsequent decision-making and risk analysis require accurate prediction and uncertainty quantification. In addition, the data-driven error modelling approach is capable of extracting more information from observation data than using a groundwater model alone.

  17. Model Error Budgets

    NASA Technical Reports Server (NTRS)

    Briggs, Hugh C.

    2008-01-01

    An error budget is a commonly used tool in design of complex aerospace systems. It represents system performance requirements in terms of allowable errors and flows these down through a hierarchical structure to lower assemblies and components. The requirements may simply be 'allocated' based upon heuristics or experience, or they may be designed through use of physics-based models. This paper presents a basis for developing an error budget for models of the system, as opposed to the system itself. The need for model error budgets arises when system models are a principal design agent, as is increasingly common for poorly testable high-performance space systems.

  18. Error coding simulations

    NASA Technical Reports Server (NTRS)

    Noble, Viveca K.

    1993-01-01

    There are various elements such as radio frequency interference (RFI) which may induce errors in data being transmitted via a satellite communication link. When a transmission is affected by interference or other error-causing elements, the transmitted data becomes indecipherable. It becomes necessary to implement techniques to recover from these disturbances. The objective of this research is to develop software which simulates error control circuits and evaluate the performance of these modules in various bit error rate environments. The results of the evaluation provide the engineer with information which helps determine the optimal error control scheme. The Consultative Committee for Space Data Systems (CCSDS) recommends the use of Reed-Solomon (RS) and convolutional encoders and Viterbi and RS decoders for error correction. The use of forward error correction techniques greatly reduces the received signal to noise needed for a certain desired bit error rate. The use of concatenated coding, e.g. inner convolutional code and outer RS code, provides even greater coding gain. The 16-bit cyclic redundancy check (CRC) code is recommended by CCSDS for error detection.
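    As a small concrete piece of such a simulation, the sketch below implements the 16-bit CRC in its common CCITT form (polynomial 0x1021, initial value 0xFFFF; these parameters are an assumption, since the abstract does not spell out the exact CCSDS settings) and shows that a single flipped bit in a frame is detected.

    ```python
    # CRC-16-CCITT error-detection check over a telemetry frame.
    def crc16_ccitt(data: bytes, crc: int = 0xFFFF) -> int:
        for byte in data:
            crc ^= byte << 8
            for _ in range(8):
                if crc & 0x8000:
                    crc = ((crc << 1) ^ 0x1021) & 0xFFFF
                else:
                    crc = (crc << 1) & 0xFFFF
        return crc

    frame = b"telemetry frame payload"
    tag = crc16_ccitt(frame)
    print(f"CRC-16: 0x{tag:04X}")

    # A single flipped bit changes the checksum, so the corruption is detected.
    corrupted = bytes([frame[0] ^ 0x01]) + frame[1:]
    print("corruption detected:", crc16_ccitt(corrupted) != tag)
    ```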

  19. Homography-Based Correction of Positional Errors in MRT Survey

    NASA Astrophysics Data System (ADS)

    Nayak, A.; Daiboo, S.; Udaya Shankar, N.

    2009-09-01

    The Mauritius Radio Telescope (MRT) images show systematics in the positional errors of sources when compared to source positions in the Molonglo Reference Catalogue (MRC). We have applied two-dimensional homography to correct positional errors in the image domain and avoid re-processing the visibility data. Positions of bright (above 15 σ) sources, common to MRT and MRC catalogues, are used to set up an over-determined system to solve for the 2-D homography matrix. After correction, the errors are found to be within 10% of the beamwidth for these bright sources and the systematics are eliminated from the images.
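    The estimation step can be sketched with the direct linear transform (DLT) on synthetic coordinates (the distortion matrix, source count and noise level are assumptions, not MRT/MRC values): build the over-determined system from matched positions, solve it by SVD, and apply the resulting homography to correct the measured positions.

    ```python
    # 2-D homography estimation (DLT) and image-domain correction of positional errors.
    import numpy as np

    def fit_homography(src, dst):
        """Least-squares 3x3 homography mapping src -> dst (each an (N, 2) array, N >= 4)."""
        rows = []
        for (x, y), (u, v) in zip(src, dst):
            rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
            rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
        _, _, vt = np.linalg.svd(np.asarray(rows))
        return vt[-1].reshape(3, 3)

    def apply_homography(h, pts):
        p = np.hstack([pts, np.ones((len(pts), 1))]) @ h.T
        return p[:, :2] / p[:, 2:3]

    rng = np.random.default_rng(8)
    true_h = np.array([[1.0, 0.002, 3.0],
                       [-0.001, 1.0, -2.0],
                       [1e-6, -2e-6, 1.0]])          # assumed systematic distortion
    catalogue = rng.uniform(0, 2000, size=(40, 2))   # reference positions (catalogue-like)
    measured = apply_homography(true_h, catalogue) + 0.2 * rng.standard_normal((40, 2))

    h_est = fit_homography(measured, catalogue)      # maps measured -> reference frame
    corrected = apply_homography(h_est, measured)

    before = np.sqrt(np.mean(np.sum((measured - catalogue) ** 2, axis=1)))
    after = np.sqrt(np.mean(np.sum((corrected - catalogue) ** 2, axis=1)))
    print(f"rms positional error before: {before:.2f}   after: {after:.2f}")
    ```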

  20. Food additives

    MedlinePlus

    ... or natural. Natural food additives include: Herbs or spices to add flavor to foods Vinegar for pickling ... Certain colors improve the appearance of foods. Many spices, as well as natural and man-made flavors, ...

  1. Addressing Medical Errors in Hand Surgery

    PubMed Central

    Johnson, Shepard P.; Adkinson, Joshua M.; Chung, Kevin C.

    2014-01-01

    Influential think-tanks such as the Institute of Medicine have raised awareness about the implications of medical errors. In response, organizations, medical societies, and institutions have initiated programs to decrease the incidence and effects of these errors. Surgeons deal with the direct implications of adverse events involving patients. In addition to managing the physical consequences, they are confronted with ethical and social issues when caring for a harmed patient. Although there is considerable effort to implement system-wide changes, there is little guidance for hand surgeons on how to address medical errors. Admitting an error is difficult, but a transparent environment where patients are notified of errors and offered consolation and compensation is essential to maintain trust. Further, equipping hand surgeons with a guide for addressing medical errors will promote compassionate patient interaction, help identify system failures, provide learning points for safety improvement, and demonstrate a commitment to ethically responsible medical care. PMID:25154576

  2. Extending the Error Correction Capability of Linear Codes,

    DTIC Science & Technology

    be made to tolerate and correct up to (k-1) bit failures. Thus if the classical error correction bounds are assumed, a linear transmission code used...in digital circuitry is under-utilized. For example, the single-error-correction, double-error-detection Hamming code could be used to correct up to...two bit failures with some additional error correction circuitry. A simple algorithm for correcting these extra errors in linear codes is presented. (Author)

  3. Twenty Questions about Student Errors.

    ERIC Educational Resources Information Center

    Fisher, Kathleen M.; Lipson, Joseph Isaac

    1986-01-01

    Discusses the value of studying errors made by students in the process of learning science. Addresses 20 research questions dealing with student learning errors. Attempts to characterize errors made by students and clarify some terms used in error research. (TW)

  4. Action errors, error management, and learning in organizations.

    PubMed

    Frese, Michael; Keith, Nina

    2015-01-03

    Every organization is confronted with errors. Most errors are corrected easily, but some may lead to negative consequences. Organizations often focus on error prevention as a single strategy for dealing with errors. Our review suggests that error prevention needs to be supplemented by error management--an approach directed at effectively dealing with errors after they have occurred, with the goal of minimizing negative and maximizing positive error consequences (examples of the latter are learning and innovations). After defining errors and related concepts, we review research on error-related processes affected by error management (error detection, damage control). Empirical evidence on positive effects of error management in individuals and organizations is then discussed, along with emotional, motivational, cognitive, and behavioral pathways of these effects. Learning from errors is central, but like other positive consequences, learning occurs under certain circumstances--one being the development of a mind-set of acceptance of human error.

  5. Teacher-Induced Errors.

    ERIC Educational Resources Information Center

    Richmond, Kent C.

    Students of English as a second language (ESL) often come to the classroom with little or no experience in writing in any language and with inaccurate assumptions about writing. Rather than correct these assumptions, teachers often seem to unwittingly reinforce them, actually inducing errors into their students' work. Teacher-induced errors occur…

  6. Investigation of error sources in regional inverse estimates of greenhouse gas emissions in Canada

    NASA Astrophysics Data System (ADS)

    Chan, E.; Chan, D.; Ishizawa, M.; Vogel, F.; Brioude, J.; Delcloo, A.; Wu, Y.; Jin, B.

    2015-08-01

    Inversion models can use atmospheric concentration measurements to estimate surface fluxes. This study is an evaluation of the errors in a regional flux inversion model for different provinces of Canada, Alberta (AB), Saskatchewan (SK) and Ontario (ON). Using CarbonTracker model results as the target, the synthetic data experiment analyses examined the impacts of the errors from the Bayesian optimisation method, prior flux distribution and the atmospheric transport model, as well as their interactions. The scaling factors for different sub-regions were estimated by the Markov chain Monte Carlo (MCMC) simulation and cost function minimization (CFM) methods. The CFM method results are sensitive to the relative size of the assumed model-observation mismatch and prior flux error variances. Experiment results show that the estimation error increases with the number of sub-regions using the CFM method. For the region definitions that lead to realistic flux estimates, the numbers of sub-regions for the western region of AB/SK combined and the eastern region of ON are 11 and 4 respectively. The corresponding annual flux estimation errors for the western and eastern regions using the MCMC (CFM) method are -7 and -3 % (0 and 8 %) respectively, when there is only prior flux error. The estimation errors increase to 36 and 94 % (40 and 232 %) resulting from transport model error alone. When prior and transport model errors co-exist in the inversions, the estimation errors become 5 and 85 % (29 and 201 %). This result indicates that estimation errors are dominated by the transport model error and can in fact cancel each other and propagate to the flux estimates non-linearly. In addition, it is possible for the posterior flux estimates to have larger differences from the target fluxes than the prior does, and the posterior uncertainty estimates can be unrealistically small, failing to cover the target. The systematic evaluation of the different components of the inversion
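    A minimal sketch of the Bayesian synthesis step behind both estimators, with toy numbers rather than the CarbonTracker configuration: the analytic posterior below is the minimum of the usual cost function, and it makes explicit how the assumed prior error covariance B and model-observation mismatch R control the scaling-factor estimates.

    ```python
    # Analytic Bayesian inversion for sub-regional scaling factors.
    import numpy as np

    rng = np.random.default_rng(9)
    n_regions, n_obs = 4, 60

    H = rng.uniform(0.0, 1.0, size=(n_obs, n_regions))       # transport/footprint operator
    x_true = np.array([1.2, 0.8, 1.5, 1.0])                  # target scaling factors
    x_prior = np.ones(n_regions)

    B = 0.25 ** 2 * np.eye(n_regions)                         # prior error covariance
    R = 0.05 ** 2 * np.eye(n_obs)                             # model-observation mismatch
    y = H @ x_true + 0.05 * rng.standard_normal(n_obs)        # synthetic observations

    # Posterior mean: x_post = x_prior + B H^T (H B H^T + R)^-1 (y - H x_prior)
    K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)
    x_post = x_prior + K @ (y - H @ x_prior)
    A_post = (np.eye(n_regions) - K @ H) @ B                  # posterior covariance

    print("prior       :", x_prior)
    print("posterior   :", np.round(x_post, 3))
    print("target      :", x_true)
    print("posterior sd:", np.round(np.sqrt(np.diag(A_post)), 3))
    ```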

  7. The Crucial Role of Error Correlation for Uncertainty Modeling of CFD-Based Aerodynamics Increments

    NASA Technical Reports Server (NTRS)

    Hemsch, Michael J.; Walker, Eric L.

    2011-01-01

    The Ares I ascent aerodynamics database for Design Cycle 3 (DAC-3) was built from wind-tunnel test results and CFD solutions. The wind tunnel results were used to build the baseline response surfaces for wind-tunnel Reynolds numbers at power-off conditions. The CFD solutions were used to build increments to account for Reynolds number effects. We calculate the validation errors for the primary CFD code results at wind tunnel Reynolds number power-off conditions and would like to be able to use those errors to predict the validation errors for the CFD increments. However, the validation errors are large compared to the increments. We suggest a way forward that is consistent with common practice in wind tunnel testing which is to assume that systematic errors in the measurement process and/or the environment will subtract out when increments are calculated, thus making increments more reliable with smaller uncertainty than absolute values of the aerodynamic coefficients. A similar practice has arisen for the use of CFD to generate aerodynamic database increments. The basis of this practice is the assumption of strong correlation of the systematic errors inherent in each of the results used to generate an increment. The assumption of strong correlation is the inferential link between the observed validation uncertainties at wind-tunnel Reynolds numbers and the uncertainties to be predicted for flight. In this paper, we suggest a way to estimate the correlation coefficient and demonstrate the approach using code-to-code differences that were obtained for quality control purposes during the Ares I CFD campaign. Finally, since we can expect the increments to be relatively small compared to the baseline response surface and to be typically of the order of the baseline uncertainty, we find that it is necessary to be able to show that the correlation coefficients are close to unity to avoid overinflating the overall database uncertainty with the addition of the increments.
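    The role of the correlation coefficient can be stated compactly. If the two results used to form an increment carry systematic errors with standard deviations sigma_1 and sigma_2 and correlation rho, then (a standard error-propagation result, not a formula quoted from the paper)

    ```latex
    \[
      \operatorname{Var}(\Delta) \;=\; \sigma_1^{2} + \sigma_2^{2} - 2\rho\,\sigma_1\sigma_2 ,
    \]
    ```

    which for sigma_1 approximately equal to sigma_2 = sigma reduces to 2*sigma^2*(1 - rho); the increment uncertainty collapses only as rho approaches unity, which is why estimating the correlation coefficient is central to the argument above.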

  8. Geometric calibration of a terrestrial laser scanner with local additional parameters: An automatic strategy

    NASA Astrophysics Data System (ADS)

    García-San-Miguel, D.; Lerma, J. L.

    2013-05-01

    Terrestrial laser scanning systems are increasingly used in many fields of engineering, geoscience and architecture, notably for fast data acquisition, 3-D modeling and mapping. Like other precision instruments, these systems provide measurements with inherent systematic errors. Systematic errors are physically corrected by manufacturers before delivery and sporadically afterwards. The approach presented herein treats the raw observables acquired by a laser scanner with additional parameters, a set of geometric calibration parameters that model the systematic error of the instrument, to achieve the most accurate point cloud outputs and to improve the downstream workflow through less filtering, better registration and better 3-D modeling. This paper presents a fully automatic strategy to calibrate terrestrial laser scanning datasets geometrically. The strategy is tested with multiple scans taken by a FARO FOCUS 3D, a phase-based terrestrial laser scanner. A calibration with local parameters for the datasets is undertaken to improve the raw observables, and a weighted mathematical index is proposed to select the most significant set of additional parameters. The improvements achieved are presented, highlighting the necessity of correcting the terrestrial laser scanner before handling multiple datasets.

  9. Uncorrected refractive errors.

    PubMed

    Naidoo, Kovin S; Jaggernath, Jyoti

    2012-01-01

    Global estimates indicate that more than 2.3 billion people in the world suffer from poor vision due to refractive error, of whom 670 million are considered visually impaired because they do not have access to corrective treatment. Refractive errors, if uncorrected, result in an impaired quality of life for millions of people worldwide, irrespective of their age, sex and ethnicity. Over the past decade, a series of studies using a survey methodology, referred to as the Refractive Error Study in Children (RESC), were performed in populations with different ethnic origins and cultural settings. These studies confirmed that the prevalence of uncorrected refractive errors is considerably high among children in low- and middle-income countries. Furthermore, uncorrected refractive error has been noted to have extensive social and economic impacts, such as limiting the educational and employment opportunities of economically active persons, healthy individuals and communities. The key public health challenges presented by uncorrected refractive errors, the leading cause of vision impairment across the world, require urgent attention. To address these issues, it is critical to focus on the development of human resources and sustainable methods of service delivery. This paper discusses three core pillars for addressing the challenges posed by uncorrected refractive errors: Human Resource (HR) Development, Service Development and Social Entrepreneurship.

  10. Potlining Additives

    SciTech Connect

    Rudolf Keller

    2004-08-10

    In this project, a concept to improve the performance of aluminum production cells by introducing potlining additives was examined and tested. Boron oxide was added to cathode blocks, and titanium was dissolved in the metal pool; this resulted in the formation of titanium diboride and caused the molten aluminum to wet the carbonaceous cathode surface. Such wetting reportedly leads to operational improvements and extended cell life. In addition, boron oxide suppresses cyanide formation. This final report presents and discusses the results of this project. Substantial economic benefits for the practical implementation of the technology are projected, especially for modern cells with graphitized blocks. For example, with an energy savings of about 5% and an increase in pot life from 1500 to 2500 days, a cost savings of $0.023 per pound of aluminum produced is projected for a 200 kA pot.

  11. Phosphazene additives

    DOEpatents

    Harrup, Mason K; Rollins, Harry W

    2013-11-26

    An additive comprising a phosphazene compound that has at least two reactive functional groups and at least one capping functional group bonded to phosphorus atoms of the phosphazene compound. One of the at least two reactive functional groups is configured to react with cellulose and the other of the at least two reactive functional groups is configured to react with a resin, such as an amine resin or a polycarboxylic acid resin. The at least one capping functional group is selected from the group consisting of a short chain ether group, an alkoxy group, or an aryloxy group. Also disclosed are an additive-resin admixture, a method of treating a wood product, and a wood product.

  12. Error Prevention Aid

    NASA Technical Reports Server (NTRS)

    1987-01-01

    In a complex computer environment there is ample opportunity for error: a mistake by a programmer or a software-induced undesirable side effect. In insurance, errors can cost a company heavily, so protection against inadvertent change is a must for the efficient firm. The data processing center at Transport Life Insurance Company has taken a step to guard against accidental changes by adopting a software package called EQNINT (Equations Interpreter Program). EQNINT cross-checks the basic formulas in a program against the formulas that make up the major production system. EQNINT assures that formulas are coded correctly and helps catch errors before they affect customer service or profitability.

  13. Systematic approach for simultaneously correcting the band-gap and p-d separation errors of common cation III-V or II-VI binaries in density functional theory calculations within a local density approximation

    SciTech Connect

    Wang, Jianwei; Zhang, Yong; Wang, Lin-Wang

    2015-07-31

    We propose a systematic approach that can empirically correct three major errors typically found in a density functional theory (DFT) calculation within the local density approximation (LDA), simultaneously, for a set of common cation binary semiconductors, such as III-V compounds (Ga or In)X with X = N, P, As, Sb, and II-VI compounds (Zn or Cd)X with X = O, S, Se, Te. By correcting (1) the binary band gaps at the high-symmetry points Γ, L, X, (2) the separation of the p- and d-orbital-derived valence bands, and (3) the conduction band effective masses to experimental values, and doing so simultaneously for common cation binaries, the resulting DFT-LDA-based quasi-first-principles method can be used to predict the electronic structure of complex materials involving multiple binaries with comparable accuracy but much less computational cost than a GW level theory. This approach provides an efficient way to evaluate the electronic structures and other material properties of complex systems, much needed for material discovery and design.

  14. Classification and reduction of pilot error

    NASA Technical Reports Server (NTRS)

    Rogers, W. H.; Logan, A. L.; Boley, G. D.

    1989-01-01

    Human error is a primary or contributing factor in about two-thirds of commercial aviation accidents worldwide. With the ultimate goal of reducing pilot error accidents, this contract effort is aimed at understanding the factors underlying error events and reducing the probability of certain types of errors by modifying underlying factors such as flight deck design and procedures. A review of the literature relevant to error classification was conducted. Classification includes categorizing types of errors, the information processing mechanisms and factors underlying them, and identifying factor-mechanism-error relationships. The classification scheme developed by Jens Rasmussen was adopted because it provided a comprehensive yet basic error classification shell or structure that could easily accommodate addition of details on domain-specific factors. For these purposes, factors specific to the aviation environment were incorporated. Hypotheses concerning the relationship of a small number of underlying factors, information processing mechanisms, and error types identified in the classification scheme were formulated. ASRS data were reviewed and a simulation experiment was performed to evaluate and quantify the hypotheses.

  15. Slowing after Observed Error Transfers across Tasks

    PubMed Central

    Wang, Lijun; Pan, Weigang; Tan, Jinfeng; Liu, Congcong; Chen, Antao

    2016-01-01

    . Moreover, the PES effect appears across tasksets with distinct stimuli and response rules in the context of observed errors, reflecting a generic process. Additionally, the slowing effect and improved accuracy in the post-observed error trial do not occur together, suggesting that they are independent behavioral adjustments in the context of observed errors. PMID:26934579

  16. Simulation of systematic errors in the SLC magnets

    SciTech Connect

    Jaeger, J.

    1983-08-08

    The distance (iron to iron) between a focusing and a defocusing magnet in the SLC-arcs is 6.7056 cm and the iron length of each of them is 2.52914 m. To represent these magnets by a hard-edge model in computer codes TRANSPORT or TURTLE the magnetic length rather than the core length of these magnets is of interest. In the present lattice the magnetic length for the field and the gradient of each of these magnets is assumed to be 2.5462 m.

  17. Errors from Image Analysis

    SciTech Connect

    Wood, William Monford

    2015-02-23

    Presenting a systematic study of the standard analysis of rod-pinch radiographs for obtaining quantitative measurements of areal mass densities, and making suggestions for improving the methodology of obtaining quantitative information from radiographed objects.

  18. Estimating Bias Error Distributions

    NASA Technical Reports Server (NTRS)

    Liu, Tian-Shu; Finley, Tom D.

    2001-01-01

    This paper formulates the general methodology for estimating the bias error distribution of a device in a measuring domain from less accurate measurements when a minimal number of standard values (typically two values) are available. A new perspective is that the bias error distribution can be found as a solution of an intrinsic functional equation in a domain. Based on this theory, the scaling- and translation-based methods for determining the bias error distribution are developed. These methods are applicable to virtually any device as long as the bias error distribution of the device can be sufficiently described by a power series (a polynomial) or a Fourier series in a domain. These methods have been validated through computational simulations and laboratory calibration experiments for a number of different devices.

  19. Diagnostic errors in interactive telepathology.

    PubMed

    Stauch, G; Schweppe, K W; Kayser, K

    2000-01-01

    Telepathology (TP) as a service in pathology at a distance is now widely used. It is integrated in the daily workflow of numerous pathologists. Meanwhile, in Germany 15 departments of pathology are using the telepathology technique for frozen section service; however, a commonly recognised quality standard in diagnostic accuracy is still missing. In a first step, the working group Aurich uses a TP system for frozen section service in order to analyse the frequency and sources of errors in TP frozen section diagnoses, to evaluate the quality of frozen section slides, the important components of image quality and their influence on diagnostic accuracy. The authors point to the necessity of an optimal training program for all participants in this service in order to reduce the risk of diagnostic errors. In addition, there is a need for optimal cooperation of all partners involved in the TP service.

  20. The surveillance error grid.

    PubMed

    Klonoff, David C; Lias, Courtney; Vigersky, Robert; Clarke, William; Parkes, Joan Lee; Sacks, David B; Kirkman, M Sue; Kovatchev, Boris

    2014-07-01

    Currently used error grids for assessing clinical accuracy of blood glucose monitors are based on out-of-date medical practices. Error grids have not been widely embraced by regulatory agencies for clearance of monitors, but this type of tool could be useful for surveillance of the performance of cleared products. Diabetes Technology Society together with representatives from the Food and Drug Administration, the American Diabetes Association, the Endocrine Society, and the Association for the Advancement of Medical Instrumentation, and representatives of academia, industry, and government, have developed a new error grid, called the surveillance error grid (SEG) as a tool to assess the degree of clinical risk from inaccurate blood glucose (BG) monitors. A total of 206 diabetes clinicians were surveyed about the clinical risk of errors of measured BG levels by a monitor. The impact of such errors on 4 patient scenarios was surveyed. Each monitor/reference data pair was scored and color-coded on a graph per its average risk rating. Using modeled data representative of the accuracy of contemporary meters, the relationships between clinical risk and monitor error were calculated for the Clarke error grid (CEG), Parkes error grid (PEG), and SEG. SEG action boundaries were consistent across scenarios, regardless of whether the patient was type 1 or type 2 or using insulin or not. No significant differences were noted between responses of adult/pediatric or 4 types of clinicians. Although small specific differences in risk boundaries between US and non-US clinicians were noted, the panel felt they did not justify separate grids for these 2 types of clinicians. The data points of the SEG were classified in 15 zones according to their assigned level of risk, which allowed for comparisons with the classic CEG and PEG. Modeled glucose monitor data with realistic self-monitoring of blood glucose errors derived from meter testing experiments plotted on the SEG when compared to

  1. Dinosaur Systematics

    NASA Astrophysics Data System (ADS)

    Carpenter, Kenneth; Currie, Philip J.

    1992-07-01

    In recent years dinosaurs have captured the attention of the public at an unprecedented level. At the heart of this resurgence in popular interest is an increased level of research activity, much of which is innovative in the field of paleontology. For instance, whereas earlier paleontological studies emphasized basic morphologic description and taxonomic classification, modern studies attempt to examine the role and nature of dinosaurs as living animals. More than ever before, we understand how these extinct species functioned, behaved, interacted with each other and the environment, and evolved. Nevertheless, these studies rely on certain basic building blocks of knowledge, including facts about dinosaur anatomy and taxonomic relationships. One of the purposes of this volume is to unravel some of the problems surrounding dinosaur systematics and to increase our understanding of dinosaurs as a biological species. Dinosaur Systematics presents a current overview of dinosaur systematics using various examples to explore what is a species in a dinosaur, what separates genders in dinosaurs, what morphological changes occur with maturation of a species, and what morphological variations occur within a species.

  2. Smoothing error pitfalls

    NASA Astrophysics Data System (ADS)

    von Clarmann, T.

    2014-04-01

    The difference due to the content of a priori information between a constrained retrieval and the true atmospheric state is usually represented by the so-called smoothing error. In this paper it is shown that the concept of the smoothing error is questionable because it is not compliant with Gaussian error propagation. The reason for this is that the smoothing error does not represent the expected deviation of the retrieval from the true state but the expected deviation of the retrieval from the atmospheric state sampled on an arbitrary grid, which is itself a smoothed representation of the true state. The idea of a sufficiently fine sampling of this reference atmospheric state is untenable because atmospheric variability occurs on all scales, implying that there is no limit beyond which the sampling is fine enough. Even the idealization of infinitesimally fine sampling of the reference state does not help because the smoothing error is applied to quantities which are only defined in a statistical sense, which implies that a finite volume of sufficient spatial extent is needed to meaningfully talk about temperature or concentration. Smoothing differences, however, which play a role when measurements are compared, are still a useful quantity if the involved a priori covariance matrix has been evaluated on the comparison grid rather than resulting from interpolation. This is because the undefined component of the smoothing error, which is the effect of smoothing implied by the finite grid on which the measurements are compared, cancels out when the difference is calculated.

  3. Thermodynamics of Error Correction

    NASA Astrophysics Data System (ADS)

    Sartori, Pablo; Pigolotti, Simone

    2015-10-01

    Information processing at the molecular scale is limited by thermal fluctuations. This can cause undesired consequences in copying information since thermal noise can lead to errors that can compromise the functionality of the copy. For example, a high error rate during DNA duplication can lead to cell death. Given the importance of accurate copying at the molecular scale, it is fundamental to understand its thermodynamic features. In this paper, we derive a universal expression for the copy error as a function of entropy production and work dissipated by the system during wrong incorporations. Its derivation is based on the second law of thermodynamics; hence, its validity is independent of the details of the molecular machinery, be it any polymerase or artificial copying device. Using this expression, we find that information can be copied in three different regimes. In two of them, work is dissipated to either increase or decrease the error. In the third regime, the protocol extracts work while correcting errors, reminiscent of a Maxwell demon. As a case study, we apply our framework to study a copy protocol assisted by kinetic proofreading, and show that it can operate in any of these three regimes. We finally show that, for any effective proofreading scheme, error reduction is limited by the chemical driving of the proofreading reaction.

  4. Understanding the impact of RapidArc therapy delivery errors for prostate cancer.

    PubMed

    Oliver, Michael; Bush, Karl; Zavgorodni, Sergei; Ansbacher, Will; Beckham, Wayne A

    2011-05-20

    The purpose of this study is to simulate random and systematic RapidArc delivery errors for external beam prostate radiotherapy plans in order to determine the dose sensitivity for each error type. Ten prostate plans were created with a single 360° arc. The DICOM files for these treatment plans were then imported into an in-house computer program that introduced delivery errors. Random and systematic gantry position (0.25°, 0.5°, 1°), monitor unit (MU) (1.25%, 2.5%, 5%), and multileaf collimator (MLC) position (0.5, 1, 2 mm) errors were introduced. The MLC errors were either random or one of three types of systematic errors, where the MLC banks moved in the same (MLC gaps remain unchanged) or opposing directions (increasing or decreasing the MLC gaps). The generalized equivalent uniform dose (gEUD) was calculated for the original plan and all treatment plans with errors introduced. The dose sensitivity for the cohort was calculated using linear regression for the gantry position, MU, and MLC position errors. Because there was a large amount of variability for systematic MLC position errors, the dose sensitivity of each plan was calculated and correlated with plan MU, mean MLC gap, and the percentage of MLC leaf gaps less than 1 and 2 cm for each individual plan. We found that random and systematic gantry position errors were relatively insignificant (< 0.1% gEUD change) for gantry errors up to 1°. Random MU errors were also insignificant, and systematic MU increases caused a systematic increase in gEUD. For MLC position errors, random MLC errors were relatively insignificant up to 2 mm as had been determined in previous IMRT studies. Systematic MLC shift errors caused a decrease of approximately -1% in the gEUD per mm. For systematic MLC gap open errors, the dose sensitivity was 8.2%/mm and for MLC gap close errors the dose sensitivity was -7.2%/mm. There was a large variability for MLC gap open/close errors for the ten RapidArc plans which correlated strongly

  5. Neuroimaging measures of error-processing: Extracting reliable signals from event-related potentials and functional magnetic resonance imaging.

    PubMed

    Steele, Vaughn R; Anderson, Nathaniel E; Claus, Eric D; Bernat, Edward M; Rao, Vikram; Assaf, Michal; Pearlson, Godfrey D; Calhoun, Vince D; Kiehl, Kent A

    2016-05-15

    Error-related brain activity has become an increasingly important focus of cognitive neuroscience research utilizing both event-related potentials (ERPs) and functional magnetic resonance imaging (fMRI). Given the significant time and resources required to collect these data, it is important for researchers to plan their experiments such that stable estimates of error-related processes can be achieved efficiently. Reliability of error-related brain measures will vary as a function of the number of error trials and the number of participants included in the averages. Unfortunately, systematic investigations of the number of events and participants required to achieve stability in error-related processing are sparse, and none have addressed variability in sample size. Our goal here is to provide data compiled from a large sample of healthy participants (n=180) performing a Go/NoGo task, resampled iteratively to demonstrate the relative stability of measures of error-related brain activity given a range of sample sizes and event numbers included in the averages. We examine ERP measures of error-related negativity (ERN/Ne) and error positivity (Pe), as well as event-related fMRI measures locked to False Alarms. We find that achieving stable estimates of ERP measures required four to six error trials and approximately 30 participants; fMRI measures required six to eight trials and approximately 40 participants. Fewer trials and participants were required for measures where additional data reduction techniques (i.e., principal component analysis and independent component analysis) were implemented. Ranges of reliability statistics for various sample sizes and numbers of trials are provided. We intend this to be a useful resource for those planning or evaluating ERP or fMRI investigations with tasks designed to measure error-processing.
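
    A generic resampling sketch of the kind of stability analysis described above (illustrative only; the simulated data, trial counts and amplitude scales are made up and the authors' pipeline is not reproduced):

    ```python
    # Illustrative resampling sketch: how stable is a trial-averaged error-related
    # amplitude for a given number of participants (n) and error trials (k)?
    import numpy as np

    def stability(data, n, k, n_iter=500, rng=None):
        """data: array (participants, trials) of single-trial amplitudes (e.g. ERN).
        Returns the across-iteration SD of the grand-average estimate."""
        rng = rng if rng is not None else np.random.default_rng(0)
        estimates = []
        for _ in range(n_iter):
            subj = rng.choice(data.shape[0], size=n, replace=False)
            trials = rng.choice(data.shape[1], size=k, replace=False)
            estimates.append(data[np.ix_(subj, trials)].mean())
        return np.std(estimates)

    # Toy data: 180 participants x 40 error trials with participant-level effects.
    rng = np.random.default_rng(1)
    true_effect = rng.normal(-5.0, 2.0, size=(180, 1))          # microvolt-scale effect
    data = true_effect + rng.normal(0.0, 8.0, size=(180, 40))   # single-trial noise
    for n, k in [(10, 4), (30, 6), (40, 8)]:
        print(n, k, round(stability(data, n, k), 3))            # SD shrinks as n and k grow
    ```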

  6. Interactions of timing and prediction error learning.

    PubMed

    Kirkpatrick, Kimberly

    2014-01-01

    Timing and prediction error learning have historically been treated as independent processes, but growing evidence has indicated that they are not orthogonal. Timing emerges at the earliest time point when conditioned responses are observed, and temporal variables modulate prediction error learning in both simple conditioning and cue competition paradigms. In addition, prediction errors, through changes in reward magnitude or value, alter the timing of behavior. Thus, there appears to be a bi-directional interaction between timing and prediction error learning. Modern theories have attempted to integrate the two processes with mixed success. A neurocomputational approach to theory development is espoused, which draws on neurobiological evidence to guide and constrain computational model development. Heuristics for future model development are presented with the goal of sparking new approaches to theory development in the timing and prediction error fields.

  7. Error monitoring in musicians

    PubMed Central

    Maidhof, Clemens

    2013-01-01

    To err is human, and hence even professional musicians make errors occasionally during their performances. This paper summarizes recent work investigating error monitoring in musicians, i.e., the processes and their neural correlates associated with the monitoring of ongoing actions and the detection of deviations from intended sounds. Electroencephalography (EEG) studies reported an early component of the event-related potential (ERP) occurring before the onsets of pitch errors. This component, which can be altered in musicians with focal dystonia, likely reflects processes of error detection and/or error compensation, i.e., attempts to cancel the undesired sensory consequence (a wrong tone) a musician is about to perceive. Thus, auditory feedback seems not to be a prerequisite for error detection, consistent with previous behavioral results. In contrast, when auditory feedback is externally manipulated and thus unexpected, motor performance can be severely distorted, although not all feedback alterations result in performance impairments. Recent studies investigating the neural correlates of feedback processing showed that unexpected feedback elicits an ERP component after note onsets, which shows larger amplitudes during music performance than during mere perception of the same musical sequences. Hence, these results stress the role of motor actions for the processing of auditory information. Furthermore, recent methodological advances like the combination of 3D motion capture techniques with EEG will be discussed. Such combinations of different measures can potentially help to disentangle the roles of different feedback types, such as proprioceptive and auditory feedback, and in general to arrive at a better understanding of the complex interactions between the motor and auditory domains during error monitoring. Finally, outstanding questions and future directions in this context will be discussed. PMID:23898255

  8. Errata: Papers in Error Analysis.

    ERIC Educational Resources Information Center

    Svartvik, Jan, Ed.

    Papers presented at the symposium of error analysis in Lund, Sweden, in September 1972, approach error analysis specifically in its relation to foreign language teaching and second language learning. Error analysis is defined as having three major aspects: (1) the description of the errors, (2) the explanation of errors by means of contrastive…

  9. A Parametric Analysis of Errors of Commission during Discrete-Trial Training

    ERIC Educational Resources Information Center

    DiGennaro Reed, Florence D.; Reed, Derek D.; Baez, Cynthia N.; Maguire, Helena

    2011-01-01

    We investigated the effects of systematic changes in levels of treatment integrity by altering errors of commission during error-correction procedures as part of discrete-trial training. We taught 3 students with autism receptive nonsense shapes under 3 treatment integrity conditions (0%, 50%, or 100% errors of commission). Participants exhibited…

  10. A Long-Term Memory Competitive Process Model of a Common Procedural Error

    DTIC Science & Technology

    2013-08-01

    A novel computational cognitive model explains human procedural error in terms of declarative memory processes. This is an early version of a process model intended to predict and explain multiple classes of procedural error a priori. We begin with postcompletion error (PCE), a type of systematic

  11. Properties of a Proposed Approximation to the Standard Error of Measurement.

    ERIC Educational Resources Information Center

    Nitko, Anthony J.

    An approximation formula for the standard error of measurement was recently proposed by Garvin. The properties of this approximation to the standard error of measurement are described in this paper and illustrated with hypothetical data. It is concluded that the approximation is a systematic overestimate of the standard error of measurement…

  12. Discretization error estimation and exact solution generation using the method of nearby problems.

    SciTech Connect

    Sinclair, Andrew J.; Raju, Anil; Kurzen, Matthew J.; Roy, Christopher John; Phillips, Tyrone S.

    2011-10-01

    The Method of Nearby Problems (MNP), a form of defect correction, is examined as a method for generating exact solutions to partial differential equations and as a discretization error estimator. For generating exact solutions, four-dimensional spline fitting procedures were developed and implemented into a MATLAB code for generating spline fits on structured domains with arbitrary levels of continuity between spline zones. For discretization error estimation, MNP/defect correction only requires a single additional numerical solution on the same grid (as compared to Richardson extrapolation which requires additional numerical solutions on systematically-refined grids). When used for error estimation, it was found that continuity between spline zones was not required. A number of cases were examined including 1D and 2D Burgers equation, the 2D compressible Euler equations, and the 2D incompressible Navier-Stokes equations. The discretization error estimation results compared favorably to Richardson extrapolation and had the advantage of only requiring a single grid to be generated.
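
    For comparison, the Richardson-extrapolation estimate that MNP is weighed against above can be written in a few lines; this is a generic sketch under stated assumptions (formal order of accuracy p, grid refinement ratio r), not code from the report:

    ```python
    # Generic Richardson-extrapolation discretization error estimate (illustrative).
    def richardson_error_estimate(f_h, f_2h, p=2, r=2.0):
        """Estimated discretization error (exact minus fine-grid value) in f_h,
        given the same quantity computed on a grid coarsened by factor r."""
        return (f_h - f_2h) / (r**p - 1.0)

    # Example: a quantity converging as f(h) = 1.0 + 0.3*h**2 toward the exact value 1.0
    f = lambda h: 1.0 + 0.3 * h**2
    err_est = richardson_error_estimate(f(0.1), f(0.2))
    print(err_est, 1.0 - f(0.1))   # estimate matches the true error for this smooth case
    ```

    The practical point made in the abstract is that this estimate needs a second, systematically refined grid, whereas the MNP/defect-correction estimate needs only one additional solution on the same grid.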

  13. Error analysis of compensation cutting technique for wavefront error of KH2PO4 crystal.

    PubMed

    Tie, Guipeng; Dai, Yifan; Guan, Chaoliang; Zhu, Dengchao; Song, Bing

    2013-09-20

    Because the wavefront error of a KH(2)PO(4) (KDP) crystal is difficult to control in the face fly-cutting process owing to surface shape deformation during vacuum suction, an error compensation technique based on a spiral turning method is put forward. An in situ measurement device is applied to measure the deformed surface shape after vacuum suction, and the initial surface figure error, which is obtained off-line, is added to the in situ surface shape to obtain the final surface figure to be compensated. Then a three-axis servo technique is utilized to cut the final surface shape. In traditional cutting processes, in addition to common error sources such as errors in the straightness of the guide ways, spindle rotation error, and errors caused by variations in the ambient environment, three other errors, the in situ measurement error, the position deviation error, and the servo-following error, are the main sources affecting compensation accuracy. This paper discusses the effect of these three errors on compensation accuracy and provides strategies to improve the final surface quality. Experimental verification was carried out on one piece of KDP crystal with a size of Φ270 mm × 11 mm. After one compensation process, the peak-to-valley value of the transmitted wavefront error dropped from 1.9λ (λ=632.8 nm) to approximately 1/3λ, and the mid-spatial-frequency error did not become worse when the frequency of the cutting tool trajectory was controlled by use of a low-pass filter.

  14. Smoothing error pitfalls

    NASA Astrophysics Data System (ADS)

    von Clarmann, T.

    2014-09-01

    The difference due to the content of a priori information between a constrained retrieval and the true atmospheric state is usually represented by a diagnostic quantity called smoothing error. In this paper it is shown that, regardless of the usefulness of the smoothing error as a diagnostic tool in its own right, the concept of the smoothing error as a component of the retrieval error budget is questionable because it is not compliant with Gaussian error propagation. The reason for this is that the smoothing error does not represent the expected deviation of the retrieval from the true state but the expected deviation of the retrieval from the atmospheric state sampled on an arbitrary grid, which is itself a smoothed representation of the true state; in other words, to characterize the full loss of information with respect to the true atmosphere, the effect of the representation of the atmospheric state on a finite grid also needs to be considered. The idea of a sufficiently fine sampling of this reference atmospheric state is problematic because atmospheric variability occurs on all scales, implying that there is no limit beyond which the sampling is fine enough. Even the idealization of infinitesimally fine sampling of the reference state does not help, because the smoothing error is applied to quantities which are only defined in a statistical sense, which implies that a finite volume of sufficient spatial extent is needed to meaningfully discuss temperature or concentration. Smoothing differences, however, which play a role when measurements are compared, are still a useful quantity if the covariance matrix involved has been evaluated on the comparison grid rather than resulting from interpolation and if the averaging kernel matrices have been evaluated on a grid fine enough to capture all atmospheric variations that the instruments are sensitive to. This is, under the assumptions stated, because the undefined component of the smoothing error, which is the

  15. Error-finding and error-correcting methods for the start-up of the SLC

    SciTech Connect

    Lee, M.J.; Clearwater, S.H.; Kleban, S.D.; Selig, L.J.

    1987-02-01

    During the commissioning of an accelerator, storage ring, or beam transfer line, one of the important tasks of an accelerator physicist is to check the first-order optics of the beam line and to look for errors in the system. Conceptually, it is important to distinguish between techniques for finding the machine errors that are the cause of the problem and techniques for correcting the beam errors that are the result of the machine errors. In this paper we limit our presentation to certain applications of these two methods for finding or correcting beam-focus errors and beam-kick errors, which affect the profile and trajectory of the beam, respectively. Many of these methods have been used successfully in the commissioning of SLC systems. In order not to waste expensive beam time we have developed and used a beam-line simulator to test the ideas that have not been tested experimentally. To save valuable physicists' time we have further automated the beam-kick error-finding procedures by adopting methods from the field of artificial intelligence to develop a prototype expert system. Our experience with this prototype has demonstrated the usefulness of expert systems in solving accelerator control problems. The expert system is able to find the same solutions as an expert physicist but in a more systematic fashion. The methods used in these procedures and some of the recent applications are described in this paper.

  16. Improved model predictive control of resistive wall modes by error field estimator in EXTRAP T2R

    NASA Astrophysics Data System (ADS)

    Setiadi, A. C.; Brunsell, P. R.; Frassinetti, L.

    2016-12-01

    Many implementations of a model-based approach to toroidal plasma control have shown better performance than conventional feedback controllers. One prerequisite of model-based control is the availability of a control-oriented model. Such a model can be obtained empirically through a systematic procedure called system identification, and a model of this kind is used in this work to design a model predictive controller that stabilizes multiple resistive wall modes in the EXTRAP T2R reversed-field pinch. Model predictive control is an advanced control method that can optimize the future behaviour of a system. Furthermore, this paper discusses an additional use of the empirical model, which is to estimate the error field in EXTRAP T2R. Two potential methods that can estimate the error field are discussed. The error field estimator is then combined with the model predictive controller and yields better radial magnetic field suppression.
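
    As an illustration of the system-identification step mentioned above, the sketch below fits a simple ARX model by least squares; it is a generic example, and the model orders, signals and noise levels are hypothetical rather than those of the EXTRAP T2R system:

    ```python
    # Generic least-squares (ARX) system-identification sketch; illustrative only.
    import numpy as np

    def identify_arx(u, y, na=2, nb=2):
        """Fit y[t] = a1*y[t-1]+...+a_na*y[t-na] + b1*u[t-1]+...+b_nb*u[t-nb] by least squares."""
        n = max(na, nb)
        rows, targets = [], []
        for t in range(n, len(y)):
            rows.append(np.concatenate([y[t-na:t][::-1], u[t-nb:t][::-1]]))
            targets.append(y[t])
        theta, *_ = np.linalg.lstsq(np.array(rows), np.array(targets), rcond=None)
        return theta[:na], theta[na:]          # AR coefficients, input coefficients

    # Toy data from a known first-order plant y[t] = 0.9*y[t-1] + 0.5*u[t-1] + noise
    rng = np.random.default_rng(0)
    u = rng.normal(size=500)
    y = np.zeros(500)
    for t in range(1, 500):
        y[t] = 0.9 * y[t-1] + 0.5 * u[t-1] + 0.01 * rng.normal()
    a, b = identify_arx(u, y, na=1, nb=1)
    print(a, b)   # ~[0.9], [0.5]; a model of this kind can then drive an MPC design
    ```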

  17. A posteriori error estimates for Maxwell equations

    NASA Astrophysics Data System (ADS)

    Schoeberl, Joachim

    2008-06-01

    Maxwell equations are posed as variational boundary value problems in the function space H(curl) and are discretized by Nedelec finite elements. In Beck et al., 2000, a residual-type a posteriori error estimator was proposed and analyzed under certain conditions on the domain. In the present paper, we prove the reliability of that error estimator on Lipschitz domains. The key is to establish new error estimates for the commuting quasi-interpolation operators recently introduced in J. Schoeberl, Commuting quasi-interpolation operators for mixed finite elements. Similar estimates are required for additive Schwarz preconditioning. To incorporate boundary conditions, we establish a new extension result.

  18. Reduction of Orifice-Induced Pressure Errors

    NASA Technical Reports Server (NTRS)

    Plentovich, Elizabeth B.; Gloss, Blair B.; Eves, John W.; Stack, John P.

    1987-01-01

    Use of porous-plug orifice reduces or eliminates errors, induced by orifice itself, in measuring static pressure on airfoil surface in wind-tunnel experiments. Piece of sintered metal press-fitted into static-pressure orifice so it matches surface contour of model. Porous material reduces orifice-induced pressure error associated with conventional orifice of same or smaller diameter. Also reduces or eliminates additional errors in pressure measurement caused by orifice imperfections. Provides more accurate measurements in regions with very thin boundary layers.

  19. Multijoint error compensation mediates unstable object control.

    PubMed

    Cluff, Tyler; Manos, Aspasia; Lee, Timothy D; Balasubramaniam, Ramesh

    2012-08-01

    A key feature of skilled object control is the ability to correct performance errors. This process is not straightforward for unstable objects (e.g., inverted pendulum or "stick" balancing) because the mechanics of the object are sensitive to small control errors, which can lead to rapid performance changes. In this study, we have characterized joint recruitment and coordination processes in an unstable object control task. Our objective was to determine whether skill acquisition involves changes in the recruitment of individual joints or distributed error compensation. To address this problem, we monitored stick-balancing performance across four experimental sessions. We confirmed that subjects learned the task by showing an increase in the stability and length of balancing trials across training sessions. We demonstrated that motor learning led to the development of a multijoint error compensation strategy such that after training, subjects preferentially constrained joint angle variance that jeopardized task performance. The selective constraint of destabilizing joint angle variance was an important metric of motor learning. Finally, we performed a combined uncontrolled manifold-permutation analysis to ensure the variance structure was not confounded by differences in the variance of individual joint angles. We showed that reliance on multijoint error compensation increased, whereas individual joint variation (primarily at the wrist joint) decreased systematically with training. We propose a learning mechanism that is based on the accurate estimation of sensory states.

  20. Testing Scientific Software: A Systematic Literature Review

    PubMed Central

    Kanewala, Upulee; Bieman, James M.

    2014-01-01

    Context Scientific software plays an important role in critical decision making, for example making weather predictions based on climate models, and computation of evidence for research publications. Recently, scientists have had to retract publications due to errors caused by software faults. Systematic testing can identify such faults in code. Objective This study aims to identify specific challenges, proposed solutions, and unsolved problems faced when testing scientific software. Method We conducted a systematic literature survey to identify and analyze relevant literature. We identified 62 studies that provided relevant information about testing scientific software. Results We found that challenges faced when testing scientific software fall into two main categories: (1) testing challenges that occur due to characteristics of scientific software such as oracle problems and (2) testing challenges that occur due to cultural differences between scientists and the software engineering community such as viewing the code and the model that it implements as inseparable entities. In addition, we identified methods to potentially overcome these challenges and their limitations. Finally we describe unsolved challenges and how software engineering researchers and practitioners can help to overcome them. Conclusions Scientific software presents special challenges for testing. Specifically, cultural differences between scientist developers and software engineers, along with the characteristics of the scientific software make testing more difficult. Existing techniques such as code clone detection can help to improve the testing process. Software engineers should consider special challenges posed by scientific software such as oracle problems when developing testing techniques. PMID:25125798

  1. Diagnosis of inborn errors of metabolism.

    PubMed

    Velázquez, A; Vela-Amieva, M; Cicerón-Arellano, I; Ibarra-González, I; Pérez-Andrade, M E; Olivares-Sandoval, Z; Jiménez-Sánchez, G

    2000-01-01

    Systematic detection of inborn errors of metabolism (IEM) has usually encountered difficulties in developing countries. We present our experience in a high-risk population in Mexico between 1973 and 1998, with particular reference to the last 10 years, during which time infrastructure and support were considerably improved. Only disorders of intermediary metabolism were sought. The total number of patients studied is not available, but in the last 10 years, patients numbered 5,186. Routine metabolic screening was performed on all patients, with additional tests according to the clinical picture and screening results. The referral criteria have increasingly diversified, one-third being neurological conditions. Of the referrals, 33.8% were from pediatricians (31.1% of whom were at critical medicine departments) and the remainder from specialists. The number of diagnosed patients has increased to 1 per 43.9 patients studied. Amino acid defects have been the most prevalent, the proportion of organic acid and carbohydrate disorders having increased in the last 10 years, associated with improved diagnostic facilities. The most frequently diagnosed diseases were PKU, type 1a glycogen storage disease, and maple syrup urine disease (MSUD), their frequency apparently varying among different regions of Mexico. Other results of our program include the training of specialists and technicians, development of the Latin American Metabolic Information Network, a procedure to locally prepare a special food product low in phenylalanine for the treatment of PKU patients, and extension of the approaches used for these disorders to the investigation of the metabolic derangements of infant malnutrition. This work demonstrates that inherited metabolic diseases constitute a significant load in pediatric pathology and that their study can and should be pursued in developing nations.

  2. Error Sensitivity Model.

    DTIC Science & Technology

    1980-04-01

    Philosophy: The Positioning/Error Model has been defined in three distinct phases: I - Error Sensitivity Model, II - Operational Positioning Model, III - ...

  3. Error Free Software

    NASA Technical Reports Server (NTRS)

    1985-01-01

    A mathematical theory for development of "higher order" software to catch computer mistakes resulted from a Johnson Space Center contract for Apollo spacecraft navigation. Two women who were involved in the project formed Higher Order Software, Inc. to develop and market the system of error analysis and correction. They designed software which is logically error-free, which, in one instance, was found to increase productivity by 600%. USE.IT defines its objectives using AXES -- a user can write in English and the system converts to computer languages. It is employed by several large corporations.

  4. Enhanced orbit determination filter: Inclusion of ground system errors as filter parameters

    NASA Technical Reports Server (NTRS)

    Masters, W. C.; Scheeres, D. J.; Thurman, S. W.

    1994-01-01

    The theoretical aspects of an orbit determination filter that incorporates ground-system error sources as model parameters for use in interplanetary navigation are presented in this article. This filter, which is derived from sequential filtering theory, allows a systematic treatment of errors in calibrations of transmission media, station locations, and earth orientation models associated with ground-based radio metric data, in addition to the modeling of the spacecraft dynamics. The discussion includes a mathematical description of the filter and an analytical comparison of its characteristics with more traditional filtering techniques used in this application. The analysis in this article shows that this filter has the potential to generate navigation products of substantially greater accuracy than more traditional filtering procedures.
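
    The sketch below illustrates the general idea of carrying a ground-system error as a filter parameter: a constant measurement bias (standing in for, e.g., a station-location or media-calibration error) is appended to the state vector of a toy Kalman filter and estimated along with the dynamics. It is not the navigation filter described in the article; all names and values are hypothetical.

    ```python
    # Toy Kalman filter with an augmented bias state; illustrative only.
    import numpy as np

    # State: [position, velocity, measurement_bias]
    dt = 1.0
    F = np.array([[1.0, dt, 0.0],
                  [0.0, 1.0, 0.0],
                  [0.0, 0.0, 1.0]])         # bias modeled as a constant
    H = np.array([[1.0, 0.0, 0.0],          # station 1: unbiased measurement
                  [1.0, 0.0, 1.0]])         # station 2: same quantity plus its calibration bias
    Q = np.diag([1e-4, 1e-4, 0.0])
    R = np.diag([0.25, 0.25])

    x = np.array([0.0, 0.0, 0.0])           # initial estimate (bias unknown)
    P = np.diag([1.0, 1.0, 4.0])

    rng = np.random.default_rng(0)
    true_bias, pos, vel = 1.5, 0.0, 0.4
    for _ in range(200):
        pos += vel * dt
        z = np.array([pos, pos + true_bias]) + rng.normal(scale=0.5, size=2)
        # predict
        x = F @ x
        P = F @ P @ F.T + Q
        # update
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ (z - H @ x)
        P = (np.eye(3) - K @ H) @ P
    print(x)   # the bias component converges toward 1.5 while position/velocity stay consistent
    ```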

  5. Test models for improving filtering with model errors through stochastic parameter estimation

    SciTech Connect

    Gershgorin, B.; Harlim, J.; Majda, A.J.

    2010-01-01

    The filtering skill for turbulent signals from nature is often limited by model errors created by utilizing an imperfect model for filtering. Updating the parameters in the imperfect model through stochastic parameter estimation is one way to increase filtering skill and model performance. Here a suite of stringent test models for filtering with stochastic parameter estimation is developed based on the Stochastic Parameterization Extended Kalman Filter (SPEKF). These new SPEKF-algorithms systematically correct both multiplicative and additive biases and involve exact formulas for propagating the mean and covariance including the parameters in the test model. A comprehensive study is presented of robust parameter regimes for increasing filtering skill through stochastic parameter estimation for turbulent signals as the observation time and observation noise are varied and even when the forcing is incorrectly specified. The results here provide useful guidelines for filtering turbulent signals in more complex systems with significant model errors.

  6. Feature-binding errors after eye movements and shifts of attention.

    PubMed

    Golomb, Julie D; L'heureux, Zara E; Kanwisher, Nancy

    2014-05-01

    When people move their eyes, the eye-centered (retinotopic) locations of objects must be updated to maintain world-centered (spatiotopic) stability. Here, we demonstrated that the attentional-updating process temporarily distorts the fundamental ability to bind object locations with their features. Subjects were simultaneously presented with four colors after a saccade, one in a precued spatiotopic target location, and were instructed to report the target's color using a color wheel. Subjects' reports were systematically shifted in color space toward the color of the distractor in the retinotopic location of the cue. Probabilistic modeling exposed both crude swapping errors and subtler feature mixing (as if the retinotopic color had blended into the spatiotopic percept). Additional experiments conducted without saccades revealed that the two types of errors stemmed from different attentional mechanisms (attention shifting vs. splitting). Feature mixing not only reflects a new perceptual phenomenon, but also provides novel insight into how attention is remapped across saccades.

  7. Rapid mapping of volumetric machine errors using distance measurements

    SciTech Connect

    Krulewich, D.A.

    1998-04-01

    This paper describes a relatively inexpensive, fast, and easy-to-execute approach to mapping the volumetric errors of a machine tool, coordinate measuring machine, or robot. An error map is used to characterize a machine or to improve its accuracy by compensating for the systematic errors. The method consists of three steps: (1) modeling the relationship between the volumetric error and the current state of the machine; (2) acquiring error data based on distance measurements throughout the work volume; and (3) fitting the error model using the nonlinear equation for the distance. The error model is formulated from the kinematic relationship among the six degrees of freedom of error on each moving axis. Each parametric error is expressed as a function of position, and these are combined to predict the error between the functional point and the workpiece, also as a function of position. A series of distances between several fixed base locations and various functional points in the work volume is measured using a Laser Ball Bar (LBB). Each measured distance is a non-linear function of the commanded location of the machine, the machine error, and the base locations. Using the error model, the non-linear equation is solved, producing a fit for the error model. Note also that, given approximate distances between each pair of base locations, the exact base locations in the machine coordinate system are determined during the non-linear fitting procedure. Furthermore, with the use of more than three base locations, bias error in the measuring instrument can be removed. The volumetric errors of a three-axis commercial machining center have been mapped using this procedure. In this study, only errors associated with the nominal position of the machine were considered. Other errors, such as thermally induced and load-induced errors, were not considered, although the mathematical model has the ability to account for these errors. Due to the proprietary nature of the projects we are
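
    A simplified illustration of step (3), offered as a hedged sketch rather than the LBB software: a toy volumetric error model (per-axis scale errors only, instead of the full six-degree-of-freedom kinematic model) is fitted to simulated base-to-point distance measurements with nonlinear least squares. All values are hypothetical.

    ```python
    # Fit toy volumetric-error parameters to base-to-point distances; illustrative only.
    import numpy as np
    from scipy.optimize import least_squares

    def actual_position(p_cmd, scale):
        return p_cmd * (1.0 + scale)                  # commanded -> actual under scale errors

    def predicted_distances(scale, bases, points):
        d = []
        for b in bases:
            for p in points:
                d.append(np.linalg.norm(actual_position(p, scale) - b))
        return np.array(d)

    rng = np.random.default_rng(0)
    bases = rng.uniform(-0.2, 0.0, size=(3, 3))       # fixed base locations [m]
    points = rng.uniform(0.0, 0.8, size=(12, 3))      # commanded functional points [m]
    true_scale = np.array([120e-6, -80e-6, 40e-6])    # ppm-level axis scale errors
    measured = predicted_distances(true_scale, bases, points)
    measured += rng.normal(scale=1e-6, size=measured.size)   # ~1 micron measurement noise

    fit = least_squares(lambda q: predicted_distances(q, bases, points) - measured,
                        x0=np.zeros(3))
    print(fit.x)        # recovers the assumed scale errors from distance data alone
    ```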

  8. [Sources of error in the European Pharmacopoeia assay of halide salts of organic bases by titration with alkali].

    PubMed

    Kószeginé, S H; Ráfliné, R Z; Paál, T; Török, I

    2000-01-01

    A short overview has been given by the authors on the titrimetric assay methods of halide salts of organic bases in the pharmacopoeias of greatest importance. The alternative procedures introduced by the European Pharmacopoeia Commission some years ago to replace the non-aqueous titration with perchloric acid in the presence of mercuric acetate have also been presented and evaluated. The authors investigated the limits of applicability and the sources of systematic errors (bias) of the strongly preferred titration with sodium hydroxide in an alcoholic medium. To assess the bias due to the differences between the results calculated from the two inflexion points of the titration curves and the two real endpoints corresponding to the strong and weak acids, respectively, a mathematical analysis of the titration curve function was carried out. This bias, generally negligible when the pH change near the endpoint of the titration is more than 1 unit, is a function of the concentration, the apparent pK of the analyte and the ionic product of water (ethanol) in the alcohol-water mixtures. Using the validation data obtained for the method with the titration of ephedrine hydrochloride, the authors analysed the impact of carbon dioxide in the titration medium on the additive and proportional systematic errors of the method. The newly introduced standardisation procedure of the European Pharmacopoeia for the sodium hydroxide titrant, intended to decrease the systematic errors caused by carbon dioxide, has also been evaluated.

  9. Orwell's Instructive Errors

    ERIC Educational Resources Information Center

    Julian, Liam

    2009-01-01

    In this article, the author talks about George Orwell, his instructive errors, and the manner in which Orwell pierced worthless theory, faced facts and defended decency (with fluctuating success), and largely ignored the tradition of accumulated wisdom that has rendered him a timeless teacher--one whose inadvertent lessons, while infrequently…

  10. Microdensitometer errors: Their effect on photometric data reduction

    NASA Technical Reports Server (NTRS)

    Bozyan, E. P.; Opal, C. B.

    1984-01-01

    The performance of densitometers used for photometric data reduction of high dynamic range electrographic plate material is analyzed. Densitometer repeatability is tested by comparing two scans of one plate. Internal densitometer errors are examined by constructing histograms of digitized densities and finding inoperative bits and differential nonlinearity in the analog to digital converter. Such problems appear common to the four densitometers used in this investigation and introduce systematic algorithm dependent errors in the results. Strategies to improve densitometer performance are suggested.
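
    One simple way to look for the inoperative-bit problem mentioned above (an illustrative check, not necessarily the exact procedure used in the report) is to test whether any bit of the digitized density values ever toggles:

    ```python
    # Detect stuck analog-to-digital converter bits in digitized values; illustrative only.
    import numpy as np

    def stuck_bits(values, n_bits=12):
        """Return lists of bit positions that are always 0 or always 1 in `values`."""
        v = np.asarray(values, dtype=np.int64)
        always0, always1 = [], []
        for b in range(n_bits):
            col = (v >> b) & 1
            if col.max() == 0:
                always0.append(b)
            elif col.min() == 1:
                always1.append(b)
        return always0, always1

    # Simulated 12-bit scan data with bit 3 forced to zero (an inoperative bit)
    rng = np.random.default_rng(0)
    raw = rng.integers(0, 4096, size=100_000)
    faulty = raw & ~(1 << 3)
    print(stuck_bits(faulty))       # ([3], []) -- bit 3 never set
    ```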

  11. Identification and Minimization of Errors in Doppler Global Velocimetry Measurements

    NASA Technical Reports Server (NTRS)

    Meyers, James F.; Lee, Joseph W.

    2000-01-01

    A systematic laboratory investigation was conducted to identify potential measurement error sources in Doppler Global Velocimetry technology. Once identified, methods were developed to eliminate or at least minimize the effects of these errors. The areas considered included the Iodine vapor cell, optical alignment, scattered light characteristics, noise sources, and the laser. Upon completion the demonstrated measurement uncertainty was reduced to 0.5 m/sec.

  12. Breast Patient Setup Error Assessment: Comparison of Electronic Portal Image Devices and Cone-Beam Computed Tomography Matching Results

    SciTech Connect

    Topolnjak, Rajko; Sonke, Jan-Jakob; Nijkamp, Jasper; Rasch, Coen; Minkema, Danny; Remeijer, Peter; Vliet-Vroegindeweij, Corine van

    2010-11-15

    Purpose: To quantify the differences in setup errors measured with cone-beam computed tomography (CBCT) and electronic portal image devices (EPID) in breast cancer patients. Methods and Materials: Repeat CBCT scans were acquired for routine offline setup verification in 20 breast cancer patients. During the CBCT imaging fractions, EPID images of the treatment beams were recorded. Registrations of the bony anatomy for CBCT to planning CT and for EPID to digitally reconstructed radiographs (DRRs) were compared. In addition, similar measurements of an anthropomorphic thorax phantom were acquired. Bland-Altman and linear regression analyses were performed for the clinical and phantom registrations. Systematic and random setup errors were quantified for the CBCT- and EPID-driven correction protocols in the EPID coordinate system (U, V), with V parallel to the cranial-caudal axis and U perpendicular to V and the central beam axis. Results: Bland-Altman analysis of the clinical EPID and CBCT registrations yielded 4- to 6-mm limits of agreement, indicating that the two methods were not compatible. The EPID-based setup errors were smaller than the CBCT-based setup errors. Phantom measurements showed that CBCT accurately measures setup error, whereas EPID underestimates setup errors in the cranial-caudal direction. In the clinical measurements, the residual bony anatomy setup errors after offline CBCT-based corrections were Σ_U = 1.4 mm, Σ_V = 1.7 mm, and σ_U = 2.6 mm, σ_V = 3.1 mm. Residual setup errors of EPID-driven corrections, corrected for underestimation, were estimated at Σ_U = 2.2 mm, Σ_V = 3.3 mm, and σ_U = 2.9 mm, σ_V = 2.9 mm. Conclusion: EPID registration underestimated the actual bony anatomy setup error in breast cancer patients by 20% to 50%. Using CBCT decreased setup uncertainties significantly.
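
    A generic Bland-Altman sketch of the type of agreement analysis described above (illustrative toy data, not the study's measurements):

    ```python
    # Bland-Altman bias and limits of agreement for paired measurements; illustrative only.
    import numpy as np

    def bland_altman(a, b):
        diff = np.asarray(a) - np.asarray(b)
        bias = diff.mean()
        loa = 1.96 * diff.std(ddof=1)
        return bias, (bias - loa, bias + loa)

    # Toy paired setup errors [mm]: the second system underestimates the first by ~30% plus noise
    rng = np.random.default_rng(0)
    cbct = rng.normal(0.0, 3.0, size=200)
    epid = 0.7 * cbct + rng.normal(0.0, 1.0, size=200)
    bias, limits = bland_altman(epid, cbct)
    print(round(bias, 2), np.round(limits, 2))   # wide limits flag the two methods as not interchangeable
    ```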

  13. Report of the Subpanel on Error Characterization and Error Budgets

    NASA Technical Reports Server (NTRS)

    1982-01-01

    The state of knowledge of both user positioning requirements and error models of current and proposed satellite systems is reviewed. In particular the error analysis models for LANDSAT D are described. Recommendations are given concerning the geometric error model for the thematic mapper; interactive user involvement in system error budgeting and modeling and verification on real data sets; and the identification of a strawman mission for modeling key error sources.

  14. SIP: Systematics-Insensitive Periodograms

    NASA Astrophysics Data System (ADS)

    Angus, Ruth

    2016-09-01

    SIP (Systematics-Insensitive Periodograms) extends the generative model used to create traditional sine-fitting periodograms for finding the frequency of a sinusoid: in addition to a sum of sine and cosine functions over a grid of frequencies, the generative model includes systematic trends based on a set of eigen light curves, producing periodograms with vastly reduced systematic features. Acoustic oscillations in giant stars and measurements of stellar rotation periods can be recovered from SIP periodograms without detrending. The code can also be applied to the detection of other periodic phenomena, including eclipsing binaries and short-period exoplanet candidates.
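
    A minimal sketch of this kind of model, assuming a simple least-squares formulation: at each trial frequency, a sine/cosine pair plus a set of basis ("eigen") light curves is fitted simultaneously, and the periodogram records how much of the variance the fit removes. The function name, the synthetic data, and the single stand-in basis vector are assumptions for illustration, not the SIP code itself.

      import numpy as np

      def sip_power(t, flux, freqs, basis):
          """At each trial frequency, fit a sine/cosine pair plus a set of basis
          ('eigen') light curves by linear least squares and record the fractional
          reduction in squared residuals attributable to the fit."""
          power = np.empty(len(freqs))
          baseline = np.sum((flux - flux.mean()) ** 2)
          for i, f in enumerate(freqs):
              A = np.column_stack([np.sin(2 * np.pi * f * t),
                                   np.cos(2 * np.pi * f * t),
                                   np.ones_like(t),
                                   basis])
              coef, *_ = np.linalg.lstsq(A, flux, rcond=None)
              power[i] = 1.0 - np.sum((flux - A @ coef) ** 2) / baseline
          return power

      # Synthetic light curve: a 0.3-day sinusoid plus a shared systematic trend.
      rng = np.random.default_rng(0)
      t = np.linspace(0.0, 30.0, 1500)
      trend = 0.02 * np.sin(2 * np.pi * t / 15.0)       # stand-in eigen light curve
      flux = 0.01 * np.sin(2 * np.pi * t / 0.3) + trend + 0.005 * rng.standard_normal(t.size)
      freqs = np.linspace(0.5, 10.0, 2000)
      power = sip_power(t, flux, freqs, basis=trend[:, None])
      print("recovered period ~", round(1.0 / freqs[np.argmax(power)], 3), "days")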

  15. Systematic neutron guide misalignment for an accelerator-driven spallation neutron source

    NASA Astrophysics Data System (ADS)

    Zendler, C.; Bentley, P. M.

    2016-08-01

    The European Spallation Source (ESS) is a long-pulse spallation neutron source currently under construction in Lund, Sweden. A considerable fraction of the 22 planned instruments extend as far as 75-150 m from the source. In such long beam lines, misalignment between neutron guide segments can decrease the neutron transmission significantly. In addition to random misalignment from installation tolerances, the ground on which the ESS is built can be expected to sink with time, shifting the neutron guide segments further away from the ideal alignment axis in a systematic way. These systematic errors are correlated with the ground structure, the position of buildings, and the shielding installation. Since the largest deformation is expected close to the target, even short instruments might be noticeably affected. In this study, the effect of this systematic misalignment on short and long ESS beam lines is analyzed, and a possible mitigation by overillumination of subsequent guide sections is investigated.

  16. A Review of the Literature on Computational Errors With Whole Numbers. Mathematics Education Diagnostic and Instructional Centre (MEDIC).

    ERIC Educational Resources Information Center

    Burrows, J. K.

    Research on error patterns associated with whole number computation is reviewed. Details of the results of some of the individual studies cited are given in the appendices. In Appendix A, 33 addition errors, 27 subtraction errors, 41 multiplication errors, and 41 division errors are identified, and the frequency of these errors made by 352…

  17. Target Uncertainty Mediates Sensorimotor Error Correction

    PubMed Central

    Vijayakumar, Sethu; Wolpert, Daniel M.

    2017-01-01

    Human movements are prone to errors that arise from inaccuracies in both our perceptual processing and execution of motor commands. We can reduce such errors by both improving our estimates of the state of the world and through online error correction of the ongoing action. Two prominent frameworks that explain how humans solve these problems are Bayesian estimation and stochastic optimal feedback control. Here we examine the interaction between estimation and control by asking if uncertainty in estimates affects how subjects correct for errors that may arise during the movement. Unbeknownst to participants, we randomly shifted the visual feedback of their finger position as they reached to indicate the center of mass of an object. Even though participants were given ample time to compensate for this perturbation, they only fully corrected for the induced error on trials with low uncertainty about center of mass, with correction only partial in trials involving more uncertainty. The analysis of subjects’ scores revealed that participants corrected for errors just enough to avoid significant decrease in their overall scores, in agreement with the minimal intervention principle of optimal feedback control. We explain this behavior with a term in the loss function that accounts for the additional effort of adjusting one’s response. By suggesting that subjects’ decision uncertainty, as reflected in their posterior distribution, is a major factor in determining how their sensorimotor system responds to error, our findings support theoretical models in which the decision making and control processes are fully integrated. PMID:28129323

  18. Target Uncertainty Mediates Sensorimotor Error Correction.

    PubMed

    Acerbi, Luigi; Vijayakumar, Sethu; Wolpert, Daniel M

    2017-01-01

    Human movements are prone to errors that arise from inaccuracies in both our perceptual processing and execution of motor commands. We can reduce such errors by both improving our estimates of the state of the world and through online error correction of the ongoing action. Two prominent frameworks that explain how humans solve these problems are Bayesian estimation and stochastic optimal feedback control. Here we examine the interaction between estimation and control by asking if uncertainty in estimates affects how subjects correct for errors that may arise during the movement. Unbeknownst to participants, we randomly shifted the visual feedback of their finger position as they reached to indicate the center of mass of an object. Even though participants were given ample time to compensate for this perturbation, they only fully corrected for the induced error on trials with low uncertainty about center of mass, with correction only partial in trials involving more uncertainty. The analysis of subjects' scores revealed that participants corrected for errors just enough to avoid significant decrease in their overall scores, in agreement with the minimal intervention principle of optimal feedback control. We explain this behavior with a term in the loss function that accounts for the additional effort of adjusting one's response. By suggesting that subjects' decision uncertainty, as reflected in their posterior distribution, is a major factor in determining how their sensorimotor system responds to error, our findings support theoretical models in which the decision making and control processes are fully integrated.

  19. Financial errors in dementia: testing a neuroeconomic conceptual framework.

    PubMed

    Chiong, Winston; Hsu, Ming; Wudka, Danny; Miller, Bruce L; Rosen, Howard J

    2014-08-01

    Financial errors by patients with dementia can have devastating personal and family consequences. We developed and evaluated a neuroeconomic conceptual framework for understanding financial errors across different dementia syndromes, using a systematic, retrospective, blinded chart review of demographically-balanced cohorts of patients with Alzheimer's disease (AD, n=100) and behavioral variant frontotemporal dementia (bvFTD, n=50). Reviewers recorded specific reports of financial errors according to a conceptual framework identifying patient cognitive and affective characteristics, and contextual influences, conferring susceptibility to each error. Specific financial errors were reported for 49% of AD and 70% of bvFTD patients (p = 0.012). AD patients were more likely than bvFTD patients to make amnestic errors (p < 0.001), while bvFTD patients were more likely to spend excessively (p = 0.004) and to exhibit other behaviors consistent with diminished sensitivity to losses and other negative outcomes (p < 0.001). Exploratory factor analysis identified a social/affective vulnerability factor associated with errors in bvFTD, and a cognitive vulnerability factor associated with errors in AD. Our findings highlight the frequency and functional importance of financial errors as symptoms of AD and bvFTD. A conceptual model derived from neuroeconomic literature identifies factors that influence vulnerability to different types of financial error in different dementia syndromes, with implications for early diagnosis and subsequent risk prevention.

  20. Automatic Error Analysis Using Intervals

    ERIC Educational Resources Information Center

    Rothwell, E. J.; Cloud, M. J.

    2012-01-01

    A technique for automatic error analysis using interval mathematics is introduced. A comparison to standard error propagation methods shows that in cases involving complicated formulas, the interval approach gives comparable error estimates with much less effort. Several examples are considered, and numerical errors are computed using the INTLAB…
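
    The interval approach described above can be imitated with a few lines of operator overloading. The sketch below is a minimal Python stand-in, not INTLAB (which is a MATLAB toolbox), and it ignores the directed (outward) rounding that a real interval package must apply.

      from dataclasses import dataclass

      @dataclass
      class Interval:
          lo: float
          hi: float
          def __add__(self, o):
              return Interval(self.lo + o.lo, self.hi + o.hi)
          def __sub__(self, o):
              return Interval(self.lo - o.hi, self.hi - o.lo)
          def __mul__(self, o):
              p = [self.lo * o.lo, self.lo * o.hi, self.hi * o.lo, self.hi * o.hi]
              return Interval(min(p), max(p))

      # Measured quantities carried as intervals (value +/- uncertainty).
      x = Interval(2.0 - 0.01, 2.0 + 0.01)
      y = Interval(3.0 - 0.02, 3.0 + 0.02)
      z = x * y + x              # the result interval encloses every possible exact value
      print(z)                   # its width bounds the propagated error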

  1. Control by model error estimation

    NASA Technical Reports Server (NTRS)

    Likins, P. W.; Skelton, R. E.

    1976-01-01

    Modern control theory relies upon the fidelity of the mathematical model of the system. Truncated modes, external disturbances, and parameter errors in linear system models are corrected by augmenting to the original system of equations an 'error system' which is designed to approximate the effects of such model errors. A Chebyshev error system is developed for application to the Large Space Telescope (LST).

  2. Imagery of Errors in Typing

    ERIC Educational Resources Information Center

    Rieger, Martina; Martinez, Fanny; Wenke, Dorit

    2011-01-01

    Using a typing task we investigated whether insufficient imagination of errors and error corrections is related to duration differences between execution and imagination. In Experiment 1 spontaneous error imagination was investigated, whereas in Experiment 2 participants were specifically instructed to imagine errors. Further, in Experiment 2 we…

  3. Speech Errors across the Lifespan

    ERIC Educational Resources Information Center

    Vousden, Janet I.; Maylor, Elizabeth A.

    2006-01-01

    Dell, Burger, and Svec (1997) proposed that the proportion of speech errors classified as anticipations (e.g., "moot and mouth") can be predicted solely from the overall error rate, such that the greater the error rate, the lower the anticipatory proportion (AP) of errors. We report a study examining whether this effect applies to changes in error…

  4. Interpolation Errors in Thermistor Calibration Equations

    NASA Astrophysics Data System (ADS)

    White, D. R.

    2017-04-01

    Thermistors are widely used temperature sensors capable of measurement uncertainties approaching those of standard platinum resistance thermometers. However, the extreme nonlinearity of thermistors means that complicated calibration equations are required to minimize the effects of interpolation errors and achieve low uncertainties. This study investigates the magnitude of interpolation errors as a function of temperature range and the number of terms in the calibration equation. Approximation theory is used to derive an expression for the interpolation error and indicates that the temperature range and the number of terms in the calibration equation are the key influence variables. Numerical experiments based on published resistance-temperature data confirm these conclusions and additionally give guidelines on the maximum and minimum interpolation error likely to occur for a given temperature range and number of terms in the calibration equation.
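
    The dependence of interpolation error on temperature range and number of terms can be explored numerically. The sketch below assumes a simple beta-model thermistor with a slowly drifting beta as the "true" device, and fits the usual 1/T polynomial in ln R through exactly as many calibration points as terms; the parameter values and function names are illustrative assumptions, not the study's data.

      import numpy as np

      # Synthetic "true" thermistor: a beta model whose beta drifts slowly with
      # temperature, so no low-order polynomial in ln R is exact.
      R25 = 10_000.0
      def resistance(T):                          # T in kelvin
          beta = 3950.0 * (1.0 + 4e-4 * (T - 298.15))
          return R25 * np.exp(beta * (1.0 / T - 1.0 / 298.15))

      def max_interpolation_error(t_lo, t_hi, n_terms):
          """Fit 1/T = sum_i a_i (ln R)^i through n_terms calibration points over
          [t_lo, t_hi] and return the worst-case interpolation error (K) on a fine
          grid between the calibration points."""
          T_cal = np.linspace(t_lo, t_hi, n_terms)
          coeffs = np.polyfit(np.log(resistance(T_cal)), 1.0 / T_cal, n_terms - 1)
          T_fine = np.linspace(t_lo, t_hi, 2001)
          T_fit = 1.0 / np.polyval(coeffs, np.log(resistance(T_fine)))
          return np.max(np.abs(T_fit - T_fine))

      for n in (2, 3, 4, 5):
          err = max_interpolation_error(273.15, 373.15, n)
          print(f"{n} terms over 0-100 C: max interpolation error ~ {err * 1000:.2f} mK")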

  5. TOA/FOA geolocation error analysis.

    SciTech Connect

    Mason, John Jeffrey

    2008-08-01

    This paper describes how confidence intervals can be calculated for radiofrequency emitter position estimates based on time-of-arrival and frequency-of-arrival measurements taken at several satellites. These confidence intervals take the form of 50th and 95th percentile circles and ellipses to convey horizontal error and linear intervals to give vertical error. We consider both cases where an assumed altitude is and is not used. Analysis of velocity errors is also considered. We derive confidence intervals for horizontal velocity magnitude and direction including the case where the emitter velocity is assumed to be purely horizontal, i.e., parallel to the ellipsoid. Additionally, we derive an algorithm that we use to combine multiple position fixes to reduce location error. The algorithm uses all available data, after more than one location estimate for an emitter has been made, in a mathematically optimal way.
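
    The paper derives its own combination algorithm; the sketch below shows only the generic inverse-covariance (minimum-variance) fusion of independent horizontal fixes, which is the standard result such an algorithm typically reduces to. All numbers and names are illustrative assumptions.

      import numpy as np

      def combine_fixes(positions, covariances):
          """Fuse independent position estimates by inverse-covariance weighting,
          the minimum-variance linear combination for independent Gaussian fixes."""
          info = np.zeros((2, 2))
          vec = np.zeros(2)
          for p, C in zip(positions, covariances):
              W = np.linalg.inv(C)
              info += W
              vec += W @ p
          cov = np.linalg.inv(info)
          return cov @ vec, cov

      # Two hypothetical horizontal fixes (km, east/north) with different error ellipses.
      fixes = [np.array([10.2, 4.9]), np.array([10.8, 5.3])]
      covs = [np.diag([0.4, 1.6]), np.diag([1.0, 0.3])]
      estimate, cov = combine_fixes(fixes, covs)
      print("combined fix:", estimate.round(2))
      print("combined 1-sigma:", np.sqrt(np.diag(cov)).round(2))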

  6. Hyponatremia: management errors.

    PubMed

    Seo, Jang Won; Park, Tae Jin

    2006-11-01

    Rapid correction of hyponatremia is frequently associated with increased morbidity and mortality. Therefore, it is important to estimate the proper volume and type of infusate required to increase the serum sodium concentration predictably. The major common management errors during the treatment of hyponatremia are inadequate investigation, treatment with fluid restriction for diuretic-induced hyponatremia and treatment with fluid restriction plus intravenous isotonic saline simultaneously. We present two cases of management errors. One is about the problem of rapid correction of hyponatremia in a patient with sepsis and acute renal failure during continuous renal replacement therapy in the intensive care unit. The other is the case of hypothyroidism in which hyponatremia was aggravated by intravenous infusion of dextrose water and isotonic saline infusion was erroneously used to increase serum sodium concentration.

  7. Error-Free Software

    NASA Technical Reports Server (NTRS)

    1989-01-01

    001 is an integrated tool suited for automatically developing ultrareliable models, simulations and software systems. Developed and marketed by Hamilton Technologies, Inc. (HTI), it has been applied in engineering, manufacturing, banking and software tools development. The software provides the ability to simplify the complex. A system developed with 001 can be a prototype or fully developed with production quality code. It is free of interface errors, consistent, logically complete and has no data or control flow errors. Systems can be designed, developed and maintained with maximum productivity. Margaret Hamilton, President of Hamilton Technologies, also directed the research and development of USE.IT, an earlier product which was the first computer-aided software engineering product in the industry to concentrate on automatically supporting the development of an ultrareliable system throughout its life cycle. Both products originated in NASA technology developed under a Johnson Space Center contract.

  8. Error-correction coding

    NASA Technical Reports Server (NTRS)

    Hinds, Erold W. (Principal Investigator)

    1996-01-01

    This report describes the progress made towards the completion of a specific task on error-correcting coding. The proposed research consisted of investigating the use of modulation block codes as the inner code of a concatenated coding system in order to improve the overall space link communications performance. The study proposed to identify and analyze candidate codes that will complement the performance of the overall coding system which uses the interleaved RS (255,223) code as the outer code.

  9. Surface temperature measurement errors

    SciTech Connect

    Keltner, N.R.; Beck, J.V.

    1983-05-01

    Mathematical models are developed for the response of surface mounted thermocouples on a thick wall. These models account for the significant causes of errors in both the transient and steady-state response to changes in the wall temperature. In many cases, closed form analytical expressions are given for the response. The cases for which analytical expressions are not obtained can be easily evaluated on a programmable calculator or a small computer.

  10. Laser Doppler anemometer measurements using nonorthogonal velocity components - Error estimates

    NASA Technical Reports Server (NTRS)

    Orloff, K. L.; Snyder, P. K.

    1982-01-01

    Laser Doppler anemometers (LDAs) that are arranged to measure nonorthogonal velocity components (from which orthogonal components are computed through transformation equations) are more susceptible to calibration and sampling errors than are systems with uncoupled channels. In this paper uncertainty methods and estimation theory are used to evaluate, respectively, the systematic and statistical errors that are present when such devices are applied to the measurement of mean velocities in turbulent flows. Statistical errors are estimated for two-channel LDA data that are either correlated or uncorrelated. For uncorrelated data the directional uncertainty of the measured velocity vector is considered for applications where mean streamline patterns are desired.
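
    The coupling referred to above comes from inverting the projection of the velocity onto two nonorthogonal beam directions. A minimal sketch, assuming channels at angles theta1 and theta2 to the x axis: the condition number of the projection matrix indicates how strongly calibration and sampling errors in the channels are amplified in the computed orthogonal components. The angles and values are illustrative assumptions.

      import numpy as np

      def to_orthogonal(u1, u2, theta1, theta2):
          """Convert two nonorthogonal channel measurements (projections of the
          velocity onto directions theta1 and theta2, in radians) into orthogonal
          components by inverting the projection matrix."""
          A = np.array([[np.cos(theta1), np.sin(theta1)],
                        [np.cos(theta2), np.sin(theta2)]])
          return np.linalg.solve(A, np.array([u1, u2]))

      print(to_orthogonal(3.0, 4.0, 0.0, np.radians(45.0)))

      # Error amplification grows as the two measured directions approach each other.
      for sep in (90.0, 45.0, 20.0):
          A = np.array([[1.0, 0.0],
                        [np.cos(np.radians(sep)), np.sin(np.radians(sep))]])
          print(f"{sep:4.0f} deg separation: error amplification factor ~ {np.linalg.cond(A):.1f}")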

  11. Laser Doppler anemometer measurements using nonorthogonal velocity components: error estimates.

    PubMed

    Orloff, K L; Snyder, P K

    1982-01-15

    Laser Doppler anemometers (LDAs) that are arranged to measure nonorthogonal velocity components (from which orthogonal components are computed through transformation equations) are more susceptible to calibration and sampling errors than are systems with uncoupled channels. In this paper uncertainty methods and estimation theory are used to evaluate, respectively, the systematic and statistical errors that are present when such devices are applied to the measurement of mean velocities in turbulent flows. Statistical errors are estimated for two-channel LDA data that are either correlated or uncorrelated. For uncorrelated data the directional uncertainty of the measured velocity vector is considered for applications where mean streamline patterns are desired.

  12. Bayesian Error Estimation Functionals

    NASA Astrophysics Data System (ADS)

    Jacobsen, Karsten W.

    The challenge of approximating the exchange-correlation functional in Density Functional Theory (DFT) has led to the development of numerous different approximations of varying accuracy on different calculated properties. There is therefore a need for reliable estimation of prediction errors within the different approximation schemes to DFT. The Bayesian Error Estimation Functionals (BEEF) have been developed with this in mind. The functionals are constructed by fitting to experimental and high-quality computational databases for molecules and solids, including chemisorption and van der Waals systems. This leads to reasonably accurate general-purpose functionals with particular focus on surface science. The fitting procedure involves considerations on how to combine different types of data, and applies Tikhonov regularization and bootstrap cross validation. The methodology has been applied to construct GGA and metaGGA functionals with and without inclusion of long-ranged van der Waals contributions. The error estimation is made possible by the generation of not only a single functional but a probability distribution of functionals represented by a functional ensemble. The use of the functional ensemble is illustrated on compound heats of formation and by investigations of the reliability of calculated catalytic ammonia synthesis rates.

  13. Sensitivity errors in interferometric deformation metrology.

    PubMed

    Farrant, David I; Petzing, Jon N

    2003-10-01

    Interferometric measurement techniques such as holographic interferometry and electronic speckle-pattern interferometry are valuable for measuring the deformation of objects. Conventional theoretical models of deformation measurement assume collimated illumination and telecentric imaging, which are usually only practical for small objects. Large objects often require divergent illumination, for which the models are valid only when the object is planar, and then only in the paraxial region. We present an analysis and discussion of the three-dimensional systematic sensitivity errors for both in-plane and out-of-plane interferometer configurations, where it is shown that the errors can be significant. A dimensionless approach is adopted to make the analysis generic and hence scalable to a system of any size.

  14. Measurement process error determination and control

    SciTech Connect

    Everhart, J.

    1992-01-01

    Traditional production processes have required repeated inspection activities to assure product quality. A typical production process follows this pattern: production makes the product; production inspects the product; Quality Control (QC) inspects the product to ensure production inspected it properly; QC then inspects the product on a different gage to verify the performance of the production gage; and QC often inspects on a different day to determine environmental effects. All of these costly inspection activities are due to a lack of confidence in the initial production measurement. The Process Measurement Assurance Program (PMAP) is a method of determining and controlling measurement error in design, development, and production. It is a preventive rather than an appraisal method that determines, improves, and controls the error in the measurement process, including measurement equipment, environment, procedure, and personnel. PMAP expands the concept of the Measurement Assurance Program developed in the 1960s by the National Bureau of Standards (NBS), today known as the National Institute of Standards and Technology (NIST). PMAP bridges the gap between the metrology laboratory and the production environment by introducing standards (or certified parts) into the production process. These certified control standards are then measured as part of the production process. A control system examines the measurement results of the control standards before, during, and after the manufacturing and measuring of the product. The results of the PMAP control charts determine the random uncertainty and the systematic error (bias from the standard) of the measurement process. The combination of these uncertainties determines the margin of error of the measurement process. The total measurement process error is determined by combining the margin of error and the uncertainty in the control standard.
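
    A minimal sketch of the kind of bookkeeping described, in Python. The combination rule shown (absolute bias plus two standard deviations, plus the standard's own uncertainty) is one common convention and is an assumption here, since the abstract does not spell out PMAP's exact formula; all numbers are hypothetical.

      import numpy as np

      # Repeated production-line measurements of a certified control standard (mm).
      readings = np.array([25.003, 25.006, 25.002, 25.005, 25.004, 25.007])
      certified_value = 25.000      # value assigned by the metrology laboratory
      standard_uncert = 0.001       # uncertainty of the certified value (1 sigma)

      bias = readings.mean() - certified_value          # systematic error vs. the standard
      random_u = readings.std(ddof=1)                   # random uncertainty of the process
      margin_of_error = abs(bias) + 2.0 * random_u      # one simple way to combine them
      total_error = margin_of_error + standard_uncert   # add the standard's own uncertainty

      print(f"bias = {bias * 1000:.1f} um, random = {random_u * 1000:.1f} um, "
            f"total process error ~ {total_error * 1000:.1f} um")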

  15. Measurement process error determination and control

    SciTech Connect

    Everhart, J.

    1992-11-01

    Traditional production processes have required repeated inspection activities to assure product quality. A typical production process follows this pattern: production makes the product; production inspects the product; Quality Control (QC) inspects the product to ensure production inspected it properly; QC then inspects the product on a different gage to verify the performance of the production gage; and QC often inspects on a different day to determine environmental effects. All of these costly inspection activities are due to a lack of confidence in the initial production measurement. The Process Measurement Assurance Program (PMAP) is a method of determining and controlling measurement error in design, development, and production. It is a preventive rather than an appraisal method that determines, improves, and controls the error in the measurement process, including measurement equipment, environment, procedure, and personnel. PMAP expands the concept of the Measurement Assurance Program developed in the 1960s by the National Bureau of Standards (NBS), today known as the National Institute of Standards and Technology (NIST). PMAP bridges the gap between the metrology laboratory and the production environment by introducing standards (or certified parts) into the production process. These certified control standards are then measured as part of the production process. A control system examines the measurement results of the control standards before, during, and after the manufacturing and measuring of the product. The results of the PMAP control charts determine the random uncertainty and the systematic error (bias from the standard) of the measurement process. The combination of these uncertainties determines the margin of error of the measurement process. The total measurement process error is determined by combining the margin of error and the uncertainty in the control standard.

  16. Prevention of wrong-site and wrong-patient surgical errors.

    PubMed

    2013-01-01

    Surgical errors recorded between 2002 and 2008 in a US medical liability insurance database have been analysed. Twenty-five wrong-patient procedures were recorded, resulting in 5 serious adverse events: three unnecessary prostatectomies were performed after prostate biopsy samples were mislabelled; vitrectomy was performed on the wrong patient in an ophthalmology department after confusion between two patients with identical names; and a child scheduled for adenoidectomy received a tympanic drain. There were also 107 wrong-site procedures, with one death resulting from implantation of a pleural drain on the wrong side. Another 38 patients experienced significant harm: 5 patients had surgery on the wrong vertebrae; 4 had chest tubes placed on the wrong side; 4 underwent vascular surgery at the wrong site; and 4 underwent resection of the wrong segment of the intestine. In addition, there were: 4 organ resection errors; 6 wrong-site or wrong-sided limb surgeries; 2 wrong-sided ovariectomies; 2 wrong-sided eye operations; 2 wrong-sided craniotomies; 2 wrong-sided ureteric procedures; 1 wrong-sided maxillofacial operation; and 2 radiation therapy field errors. Most errors were due to poor communication, incorrect diagnosis, or failure to implement a final set of preoperative checks. Other studies conducted in the United Kingdom and the United States have provided similar results, while data are lacking in France. The World Health Organization Surgical Safety Checklist is an effective way of preventing such errors but its adoption by healthcare professionals is variable. In practice, surgical errors involving the wrong patient or wrong body site are preventable. Final pre-operative checks must be applied methodically and systematically. This includes asking the patient to confirm his/her identity and the intended site of the operation. Healthcare staff must be aware of these measures.

  17. Understanding error generation in fused deposition modeling

    NASA Astrophysics Data System (ADS)

    Bochmann, Lennart; Bayley, Cindy; Helu, Moneer; Transchel, Robert; Wegener, Konrad; Dornfeld, David

    2015-03-01

    Additive manufacturing offers completely new possibilities for the manufacturing of parts. The advantages of flexibility and convenience of additive manufacturing have had a significant impact on many industries, and optimizing part quality is crucial for expanding its utilization. This research aims to determine the sources of imprecision in fused deposition modeling (FDM). Process errors in terms of surface quality, accuracy and precision are identified and quantified, and an error-budget approach is used to characterize errors of the machine tool. It was determined that accuracy and precision in the y direction (0.08-0.30 mm) are generally greater than in the x direction (0.12-0.62 mm) and the z direction (0.21-0.57 mm). Furthermore, accuracy and precision tend to decrease at increasing axis positions. The results of this work can be used to identify possible process improvements in the design and control of FDM technology.

  18. A Guideline for Applying Systematic Reviews to Child Language Intervention

    ERIC Educational Resources Information Center

    Hargrove, Patricia; Lund, Bonnie; Griffer, Mona

    2005-01-01

    This article focuses on applying systematic reviews to the Early Intervention (EI) literature. Systematic reviews are defined and differentiated from traditional, or narrative, reviews and from meta-analyses. In addition, the steps involved in critiquing systematic reviews and an illustration of a systematic review from the EI literature are…

  19. Error Management Behavior in Classrooms: Teachers' Responses to Student Mistakes

    ERIC Educational Resources Information Center

    Tulis, Maria

    2013-01-01

    Only a few studies have focused on how teachers deal with mistakes in actual classroom settings. Teachers' error management behavior was analyzed based on data obtained from direct (Study 1) and videotaped systematic observation (Study 2), and students' self-reports. In Study 3 associations between students' and teachers' attitudes towards…

  20. Conceptual Bases of Arithmetic Errors: The Case of Decimal Fractions.

    ERIC Educational Resources Information Center

    Resnick, Lauren B.; And Others

    Considered is a conceptual analog of buggy algorithms and rule-based mathematical development. The investigations consider whether children's efforts to make conceptual sense of new mathematics instruction in terms of their available knowledge may sometimes lead them to make systematic errors. In particular, the possibility is explored that…

  1. Pitch Error Analysis of Young Piano Students' Music Reading Performances

    ERIC Educational Resources Information Center

    Rut Gudmundsdottir, Helga

    2010-01-01

    This study analyzed the music reading performances of 6-13-year-old piano students (N = 35) in their second year of piano study. The stimuli consisted of three piano pieces, systematically constructed to vary in terms of left-hand complexity and input simultaneity. The music reading performances were recorded digitally and a code of error analysis…

  2. A hardware error estimate for floating-point computations

    NASA Astrophysics Data System (ADS)

    Lang, Tomás; Bruguera, Javier D.

    2008-08-01

    The relative error is the natural choice for such an estimate; however, it has some anomalies that make it difficult to use. We propose a scaled absolute error, whose value is close to the relative error but does not have these anomalies. The main cost issue might be the additional storage and the narrow datapath required for the estimate computation. We evaluate our proposal, compare it with other alternatives, and conclude that the proposed approach might be beneficial.

  3. Design the algorithm compensation of vignetting error at optical-electronic autoreflection system by modelling vignetted image

    NASA Astrophysics Data System (ADS)

    Konyakhin, Igor A.; Sakhariyanova, Aiganym M.; Li, Renpu

    2016-04-01

    One current problem in metrology is the measurement of angular quantities, in particular angular deformations at critical points of oversized objects. Optoelectronic autoreflection systems are effective for solving this problem. An autoreflection system measures the turning angle of a mirror, used as the sensitive element at a point of angular deformation, with a potential accuracy of up to 0.05". In practice the error can considerably exceed this value because of systematic errors, one of whose main components is due to vignetting of the working beam. The component of systematic error due to vignetting of the beam can be eliminated if an analytical description of the changes in the irradiance distribution of the analyzed image is available. Because of the complexity of an analytical description of the vignetting process, the use of computer models is proposed. Based on the obtained dependence, the systematic error due to vignetting amounts to about D = 30 arcsec. As this systematic measurement error is unacceptably large, it must be compensated. To design the compensation algorithm, three cases of displacement of the vignetting field on the matrix analyzer due to rotation of the control element were considered. With the compensation algorithm, the error due to vignetting is reduced to a negligible 0.4 arcsec. The designed compensation algorithm for the systematic error due to vignetting makes it possible to increase the working distance of autoreflection measurements.

  4. Advancing the research agenda for diagnostic error reduction.

    PubMed

    Zwaan, Laura; Schiff, Gordon D; Singh, Hardeep

    2013-10-01

    Diagnostic errors remain an underemphasised and understudied area of patient safety research. We briefly summarise the methods that have been used to conduct research on epidemiology, contributing factors and interventions related to diagnostic error and outline directions for future research. Research methods that have studied epidemiology of diagnostic error provide some estimate on diagnostic error rates. However, there appears to be a large variability in the reported rates due to the heterogeneity of definitions and study methods used. Thus, future methods should focus on obtaining more precise estimates in different settings of care. This would lay the foundation for measuring error rates over time to evaluate improvements. Research methods have studied contributing factors for diagnostic error in both naturalistic and experimental settings. Both approaches have revealed important and complementary information. Newer conceptual models from outside healthcare are needed to advance the depth and rigour of analysis of systems and cognitive insights of causes of error. While the literature has suggested many potentially fruitful interventions for reducing diagnostic errors, most have not been systematically evaluated and/or widely implemented in practice. Research is needed to study promising intervention areas such as enhanced patient involvement in diagnosis, improving diagnosis through the use of electronic tools and identification and reduction of specific diagnostic process 'pitfalls' (eg, failure to conduct appropriate diagnostic evaluation of a breast lump after a 'normal' mammogram). The last decade of research on diagnostic error has made promising steps and laid a foundation for more rigorous methods to advance the field.

  5. Non-Gaussian Error Distributions of LMC Distance Moduli Measurements

    NASA Astrophysics Data System (ADS)

    Crandall, Sara; Ratra, Bharat

    2015-12-01

    We construct error distributions for a compilation of 232 Large Magellanic Cloud (LMC) distance moduli values from de Grijs et al. that give an LMC distance modulus of (m - M)0 = 18.49 ± 0.13 mag (median and 1σ symmetrized error). Central estimates found from weighted mean and median statistics are used to construct the error distributions. The weighted mean error distribution is non-Gaussian—flatter and broader than Gaussian—with more (less) probability in the tails (center) than is predicted by a Gaussian distribution; this could be the consequence of unaccounted-for systematic uncertainties. The median statistics error distribution, which does not make use of the individual measurement errors, is also non-Gaussian—more peaked than Gaussian—with less (more) probability in the tails (center) than is predicted by a Gaussian distribution; this could be the consequence of publication bias and/or the non-independence of the measurements. We also construct the error distributions of 247 SMC distance moduli values from de Grijs & Bono. We find a central estimate of (m - M)0 = 18.94 ± 0.14 mag (median and 1σ symmetrized error), and similar probabilities for the error distributions.
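
    The two central estimates can be reproduced with a few lines of Python. The sketch below uses made-up measurements; attaching a spread to the median via a bootstrap is an assumption for illustration, not necessarily the error-construction procedure of the paper.

      import numpy as np

      rng = np.random.default_rng(1)
      # Hypothetical compilation of distance-modulus measurements and quoted 1-sigma errors.
      mu = np.array([18.45, 18.52, 18.40, 18.55, 18.49, 18.60, 18.47, 18.51])
      sig = np.array([0.05, 0.08, 0.10, 0.06, 0.04, 0.12, 0.07, 0.09])

      # The weighted mean uses the individual errors ...
      w = 1.0 / sig**2
      wmean = np.sum(w * mu) / np.sum(w)
      wmean_err = 1.0 / np.sqrt(np.sum(w))

      # ... median statistics deliberately ignores them; bootstrap the median's spread.
      meds = np.median(rng.choice(mu, size=(10_000, mu.size), replace=True), axis=1)
      median, med_err = np.median(mu), meds.std()

      print(f"weighted mean: {wmean:.3f} +/- {wmean_err:.3f} mag")
      print(f"median stats : {median:.3f} +/- {med_err:.3f} mag")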

  6. Geographically correlated errors observed from a laser-based short-arc technique

    NASA Astrophysics Data System (ADS)

    Bonnefond, P.; Exertier, P.; Barlier, F.

    1999-07-01

    The laser-based short-arc technique has been developed in order to avoid local errors which affect the dynamical orbit computation, such as those due to mismodeling in the geopotential. It is based on a geometric method and consists in fitting short arcs (about 4000 km), issued from a global orbit, to satellite laser ranging tracking measurements from a ground station network. Ninety-two TOPEX/Poseidon (T/P) cycles of laser-based short-arc orbits were then compared to JGM-2 and JGM-3 T/P orbits computed by the Precise Orbit Determination (POD) teams (Service d'Orbitographie Doris/Centre National d'Etudes Spatiales and Goddard Space Flight Center/NASA) over two areas: (1) the Mediterranean area and (2) a part of the Pacific (including California and Hawaii), hereafter called the U.S. area. Geographically correlated orbit errors in these areas are clearly evidenced: for example, -2.6 cm and +0.7 cm for the Mediterranean and U.S. areas, respectively, relative to JGM-3 orbits. However, geographically correlated errors (GCE), which are commonly linked to errors in the gravity model, can also be due to systematic errors in the reference frame and/or to biases in the tracking measurements. The short-arc technique is very sensitive to such error sources; our analysis demonstrates, however, that the induced geographical systematic effects are at the level of 1-2 cm on the radial orbit component. Results are also compared with those obtained with the GPS-based reduced-dynamic technique. The time-dependent part of GCE has also been studied. Over 6 years of T/P data, coherent signals in the radial component of the T/P Precise Orbit Ephemeris (POE) are clearly evidenced with a period of about 6 months. In addition, the impact of time-varying error sources coming from the reference frame and the tracking data accuracy has been analyzed, showing a possible linear trend of about 0.5-1 mm/yr in the radial component of the T/P POE.

  7. The propagation of inventory-based positional errors into statistical landslide susceptibility models

    NASA Astrophysics Data System (ADS)

    Steger, Stefan; Brenning, Alexander; Bell, Rainer; Glade, Thomas

    2016-12-01

    There is unanimous agreement that a precise spatial representation of past landslide occurrences is a prerequisite to produce high quality statistical landslide susceptibility models. Even though perfectly accurate landslide inventories rarely exist, investigations of how landslide inventory-based errors propagate into subsequent statistical landslide susceptibility models are scarce. The main objective of this research was to systematically examine whether and how inventory-based positional inaccuracies of different magnitudes influence modelled relationships, validation results, variable importance and the visual appearance of landslide susceptibility maps. The study was conducted for a landslide-prone site located in the districts of Amstetten and Waidhofen an der Ybbs, eastern Austria, where an earth-slide point inventory was available. The methodological approach comprised an artificial introduction of inventory-based positional errors into the present landslide data set and an in-depth evaluation of subsequent modelling results. Positional errors were introduced by artificially changing the original landslide position by a mean distance of 5, 10, 20, 50 and 120 m. The resulting differently precise response variables were separately used to train logistic regression models. Odds ratios of predictor variables provided insights into modelled relationships. Cross-validation and spatial cross-validation enabled an assessment of predictive performances and permutation-based variable importance. All analyses were additionally carried out with synthetically generated data sets to further verify the findings under rather controlled conditions. The results revealed that an increasing positional inventory-based error was generally related to increasing distortions of modelling and validation results. However, the findings also highlighted that interdependencies between inventory-based spatial inaccuracies and statistical landslide susceptibility models are complex. The
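
    A minimal sketch of the perturbation experiment, assuming a synthetic terrain predictor and scikit-learn's logistic regression as stand-ins for the study's real inventory and modelling setup; the displacement model (random direction, exponentially distributed length with the stated mean) is also an assumption. The attenuation of the fitted odds ratio with growing positional error mirrors the kind of distortion the study describes.

      import numpy as np
      from sklearn.linear_model import LogisticRegression

      rng = np.random.default_rng(7)

      def slope(x, y):                       # smooth synthetic "terrain slope" predictor
          return 30.0 + 15.0 * np.sin(x / 300.0) * np.cos(y / 200.0)

      # Synthetic inventory: presences drawn from steep cells, absences at random.
      cand = rng.uniform(0, 3000, size=(2000, 2))
      xy_pres = cand[slope(cand[:, 0], cand[:, 1]) > 38.0][:150]
      xy_abs = rng.uniform(0, 3000, size=(150, 2))

      def slope_odds_ratio(offset_m):
          """Shift presence coordinates by random offsets of the given mean length,
          re-extract the predictor, and return the fitted odds ratio for slope."""
          ang = rng.uniform(0, 2 * np.pi, len(xy_pres))
          r = rng.exponential(offset_m, len(xy_pres)) if offset_m else np.zeros(len(xy_pres))
          shifted = xy_pres + np.column_stack([r * np.cos(ang), r * np.sin(ang)])
          X = np.concatenate([slope(shifted[:, 0], shifted[:, 1]),
                              slope(xy_abs[:, 0], xy_abs[:, 1])])[:, None]
          y = np.concatenate([np.ones(len(shifted)), np.zeros(len(xy_abs))])
          return np.exp(LogisticRegression().fit(X, y).coef_[0, 0])

      for d in (0, 5, 10, 20, 50, 120):
          print(f"mean positional error {d:3d} m -> slope odds ratio {slope_odds_ratio(d):.2f}")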

  8. (Errors in statistical tests)3.

    PubMed

    Phillips, Carl V; MacLehose, Richard F; Kaufman, Jay S

    2008-07-14

    departure from uniformity, not just its test statistics. We found variation in digit frequencies in the additional data and describe the distinctive pattern of these results. Furthermore, we found that the combined data diverge unambiguously from a uniform distribution. The explanation for this divergence seems unlikely to be that suggested by the previous authors: errors in calculations and transcription.

  9. Wavefront error sensing

    NASA Technical Reports Server (NTRS)

    Tubbs, Eldred F.

    1986-01-01

    A two-step approach to wavefront sensing for the Large Deployable Reflector (LDR) was examined as part of an effort to define wavefront-sensing requirements and to determine particular areas for more detailed study. A Hartmann test for coarse alignment, particularly segment tilt, seems feasible if LDR can operate at 5 microns or less. The direct measurement of the point spread function in the diffraction limited region may be a way to determine piston error, but this can only be answered by a detailed software model of the optical system. The question of suitable astronomical sources for either test must also be addressed.

  10. Detecting Errors in Programs

    DTIC Science & Technology

    1979-02-01

    Fosdick, Lloyd D. ... from a finite set of tests [35,36]. Recently Howden [37] presented a result showing that for a particular class of Lindenmayer grammars it was possible ... Howden, W.E.: Lindenmayer grammars and symbolic testing. Information Processing Letters 7(1) (Jan. 1978), 36-39.

  11. DNA systematics. Volume II

    SciTech Connect

    Dutta, S.K.

    1986-01-01

    This book discusses the following topics: PLANTS: PLANT DNA: Contents and Systematics. Repeated DNA Sequences and Polyploidy in Cereal Crops. Homology of Nonrepeated DNA Sequences in Phylogeny of Fungal Species. Chloroplast DNA and Phylogenetic Relationships. rDNA: Evolution Over a Billion Years. 23S rRNA-derived Small Ribosomal RNAs: Their Structure and Evolution with Reference to Plant Phylogeny. Molecular Analysis of Plant DNA Genomes: Conserved and Diverged DNA Sequences. A Critical Review of Some Terminologies Used for Additional DNA in Plant Chromosomes and Index.

  12. Magnetic Nanoparticle Thermometer: An Investigation of Minimum Error Transmission Path and AC Bias Error

    PubMed Central

    Du, Zhongzhou; Su, Rijian; Liu, Wenzhong; Huang, Zhixing

    2015-01-01

    The signal transmission module of a magnetic nanoparticle thermometer (MNPT) was established in this study to analyze the error sources introduced during the signal flow in the hardware system. The underlying error sources that significantly affected the precision of the MNPT were determined through mathematical modeling and simulation. A transfer module path with the minimum error in the hardware system was then proposed through the analysis of the variations of the system error caused by the significant error sources when the signal flowed through the signal transmission module. In addition, a system parameter, named the signal-to-AC bias ratio (i.e., the ratio between the signal and AC bias), was identified as a direct determinant of the precision of the measured temperature. The temperature error was below 0.1 K when the signal-to-AC bias ratio was higher than 80 dB, and other system errors were not considered. The temperature error was below 0.1 K in the experiments with a commercial magnetic fluid (Sample SOR-10, Ocean Nanotechnology, Springdale, AR, USA) when the hardware system of the MNPT was designed with the aforementioned method. PMID:25875188

  13. Magnetic nanoparticle thermometer: an investigation of minimum error transmission path and AC bias error.

    PubMed

    Du, Zhongzhou; Su, Rijian; Liu, Wenzhong; Huang, Zhixing

    2015-04-14

    The signal transmission module of a magnetic nanoparticle thermometer (MNPT) was established in this study to analyze the error sources introduced during the signal flow in the hardware system. The underlying error sources that significantly affected the precision of the MNPT were determined through mathematical modeling and simulation. A transfer module path with the minimum error in the hardware system was then proposed through the analysis of the variations of the system error caused by the significant error sources when the signal flowed through the signal transmission module. In addition, a system parameter, named the signal-to-AC bias ratio (i.e., the ratio between the signal and AC bias), was identified as a direct determinant of the precision of the measured temperature. The temperature error was below 0.1 K when the signal-to-AC bias ratio was higher than 80 dB, and other system errors were not considered. The temperature error was below 0.1 K in the experiments with a commercial magnetic fluid (Sample SOR-10, Ocean Nanotechnology, Springdale, AR, USA) when the hardware system of the MNPT was designed with the aforementioned method.

  14. Power Measurement Errors on a Utility Aircraft

    NASA Technical Reports Server (NTRS)

    Bousman, William G.

    2002-01-01

    Extensive flight test data obtained from two recent performance tests of a UH-60A aircraft are reviewed. A power difference is calculated from the power balance equation and is used to examine power measurement errors. It is shown that the baseline measurement errors are highly non-Gaussian in their frequency distribution and are therefore influenced by additional, unquantified variables. Linear regression is used to examine the influence of other variables, and it is shown that a substantial portion of the variance depends upon measurements of atmospheric parameters. Correcting for temperature dependence, although reducing the variance in the measurement errors, still leaves unquantified effects. Examination of the power difference over individual test runs indicates significant errors from drift, although it is unclear how these may be corrected. In an idealized case, where the drift is correctable, it is shown that the power measurement errors are significantly reduced and the error distribution is Gaussian. A new flight test program is recommended that will quantify the thermal environment for all torque measurements on the UH-60. Subsequently, the torque measurement systems will be recalibrated based on the measured thermal environment and a new power measurement assessment performed.
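
    A sketch of the kind of regression described, assuming synthetic flight records in which part of the power-difference variance is driven by outside-air temperature and altitude; all variable names and values are assumptions, not the UH-60A flight-test data.

      import numpy as np

      rng = np.random.default_rng(3)
      n = 200
      # Hypothetical flight-test records: power difference from the power balance (hp)
      # alongside outside-air temperature (C) and pressure altitude (ft).
      oat = rng.uniform(-10, 35, n)
      alt = rng.uniform(0, 10_000, n)
      power_diff = 12.0 + 0.8 * oat - 0.0005 * alt + rng.normal(0, 5, n)

      # Least-squares regression of the power difference on atmospheric parameters.
      A = np.column_stack([np.ones(n), oat, alt])
      coef, res, *_ = np.linalg.lstsq(A, power_diff, rcond=None)
      explained = 1.0 - res[0] / np.sum((power_diff - power_diff.mean()) ** 2)
      print("coefficients (intercept, per degC, per ft):", np.round(coef, 4))
      print(f"fraction of variance explained by atmospheric terms: {explained:.2f}")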

  15. Speech Errors, Error Correction, and the Construction of Discourse.

    ERIC Educational Resources Information Center

    Linde, Charlotte

    Speech errors have been used in the construction of production models of the phonological and semantic components of language, and for a model of interactional processes. Errors also provide insight into how speakers plan discourse and syntactic structure. Different types of discourse exhibit different types of error. The present data are taken…

  16. On the undetected error probability of a concatenated coding scheme for error control

    NASA Technical Reports Server (NTRS)

    Deng, H.; Costello, D. J., Jr.

    1984-01-01

    Consider a concatenated coding scheme for error control on a binary symmetric channel, called the inner channel. The bit error rate (BER) of the channel is correspondingly called the inner BER and is denoted by epsilon_i. Two linear block codes, C_f and C_b, are used. The inner code C_f, called the frame code, is an (n, k) systematic binary block code with minimum distance d_f. The frame code is designed to correct t or fewer errors and simultaneously detect gamma (gamma >= t) or fewer errors, where t + gamma + 1 <= d_f. The outer code C_b is either an (n_b, k_b) binary block code with n_b = mk, or an (n_b, k_b) maximum-distance-separable (MDS) code with symbols from GF(q), where q = 2^b and the code length n_b satisfies n_b = mk. The integer m is the number of frames. The outer code is designed for error detection only.
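
    The frame-code design condition quoted above (correct up to t errors while simultaneously detecting up to gamma >= t errors whenever t + gamma + 1 <= d_f) is easy to tabulate. A small sketch, with the example minimum distance chosen arbitrarily:

      def max_detectable(d_f, t):
          """Largest gamma such that a code of minimum distance d_f can correct up to
          t errors while simultaneously detecting up to gamma >= t errors
          (standard condition: t + gamma + 1 <= d_f)."""
          return d_f - 1 - t if 2 * t + 1 <= d_f else None

      d_f = 8   # example frame-code minimum distance
      for t in range(d_f):
          gamma = max_detectable(d_f, t)
          if gamma is not None:
              print(f"correct <= {t} errors while detecting <= {gamma} errors")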

  17. Inborn Errors in Immunity

    PubMed Central

    Lionakis, M.S.; Hajishengallis, G.

    2015-01-01

    In recent years, the study of genetic defects arising from inborn errors in immunity has resulted in the discovery of new genes involved in the function of the immune system and in the elucidation of the roles of known genes whose importance was previously unappreciated. With the recent explosion in the field of genomics and the increasing number of genetic defects identified, the study of naturally occurring mutations has become a powerful tool for gaining mechanistic insight into the functions of the human immune system. In this concise perspective, we discuss emerging evidence that inborn errors in immunity constitute real-life models that are indispensable both for the in-depth understanding of human biology and for obtaining critical insights into common diseases, such as those affecting oral health. In the field of oral mucosal immunity, through the study of patients with select gene disruptions, the interleukin-17 (IL-17) pathway has emerged as a critical element in oral immune surveillance and susceptibility to inflammatory disease, with disruptions in the IL-17 axis now strongly linked to mucosal fungal susceptibility, whereas overactivation of the same pathways is linked to inflammatory periodontitis. PMID:25900229

  18. Prospective issues for error detection.

    PubMed

    Blavier, Adélaïde; Rouy, Emmanuelle; Nyssen, Anne-Sophie; de Keyser, Véronique

    2005-06-10

    From the literature on error detection, the authors select several concepts relating error detection mechanisms and prospective memory features. They emphasize the central role of intention in the classification of the errors into slips/lapses/mistakes, in the error handling process and in the usual distinction between action-based and outcome-based detection. Intention is again a core concept in their investigation of prospective memory theory, where they point out the contribution of intention retrievals, intention persistence and output monitoring in the individual's possibilities for detecting their errors. The involvement of the frontal lobes in prospective memory and in error detection is also analysed. From the chronology of a prospective memory task, the authors finally suggest a model for error detection also accounting for neural mechanisms highlighted by studies on error-related brain activity.

  19. Precision of circular systematic sampling.

    PubMed

    Cruz-Orive, L M; Gual-Arnau, X

    2002-09-01

    In design stereology, many estimators require isotropic orientation of a test probe relative to the object in order to attain unbiasedness. In such cases, systematic sampling of orientations becomes imperative on grounds of efficiency and practical applicability. For instance, the planar nucleator and the vertical rotator imply systematic sampling on the circle, whereas the Buffon-Steinhaus method to estimate curve length in the plane, or the vertical designs to estimate surface area and curve length, imply systematic sampling on the semicircle. This leads to the need for predicting the precision of systematic sampling on the circle and the semicircle from a single sample. There are two main prediction approaches, namely the classical one of G. Matheron for non-necessarily periodic measurement functions, and a recent approach based on a global symmetric model of the covariogram, more specific for periodic measurement functions. The latter approach seems at least as satisfactory as the former for small sample sizes, and it is developed here incorporating local errors. Detailed examples illustrating common stereological tools are included.

  20. Dose error analysis for a scanned proton beam delivery system

    NASA Astrophysics Data System (ADS)

    Coutrakon, G.; Wang, N.; Miller, D. W.; Yang, Y.

    2010-12-01

    All particle beam scanning systems are subject to dose delivery errors due to errors in position, energy and intensity of the delivered beam. In addition, finite scan speeds, beam spill non-uniformities, and delays in detector, detector electronics and magnet responses all contribute errors in delivery. In this paper, we present dose errors for an 8 × 10 × 8 cm3 target of uniform water-equivalent density with an 8 cm spread-out Bragg peak and a prescribed dose of 2 Gy. Lower doses are also analyzed and presented later in the paper. Beam energy errors and errors due to limitations of scanning system hardware have been included in the analysis. By using Gaussian-shaped pencil beams derived from measurements in the research room of the James M Slater Proton Treatment and Research Center at Loma Linda, CA, and executing treatment simulations multiple times, statistical dose errors have been calculated in each 2.5 mm cubic voxel in the target. These errors were calculated by delivering multiple treatments to the same volume and calculating the rms variation in delivered dose at each voxel in the target. The variations in dose were the result of random beam delivery errors such as proton energy, spot position and intensity fluctuations. The results show that with reasonable assumptions of random beam delivery errors, the spot scanning technique yielded an rms dose error in each voxel of less than 2% or 3% of the 2 Gy prescribed dose. These calculated errors are within acceptable clinical limits for radiation therapy.
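
    A one-dimensional analogue of the simulation described conveys the idea: sum Gaussian pencil beams whose spot positions and weights are randomly perturbed on each simulated delivery, then take the per-point rms of the delivered dose over many deliveries. The beam width, error magnitudes, and geometry below are illustrative assumptions, not the Loma Linda beam model.

      import numpy as np

      rng = np.random.default_rng(5)
      x = np.arange(0.0, 80.0, 2.5)           # 2.5 mm grid across an 8 cm target (mm)
      spots = np.arange(0.0, 80.0, 5.0)       # nominal pencil-beam spot centres (mm)
      sigma = 5.0                             # pencil-beam Gaussian width (mm)

      def delivered_dose(pos_err_mm, intens_err):
          """Sum Gaussian pencil beams with random spot-position and intensity errors."""
          centres = spots + rng.normal(0.0, pos_err_mm, spots.size)
          weights = 1.0 + rng.normal(0.0, intens_err, spots.size)
          return np.sum(weights[:, None] *
                        np.exp(-(x[None, :] - centres[:, None]) ** 2 / (2 * sigma ** 2)),
                        axis=0)

      nominal = delivered_dose(0.0, 0.0)
      runs = np.array([delivered_dose(1.0, 0.01) for _ in range(200)])   # 1 mm, 1% errors
      rms_pct = 100.0 * runs.std(axis=0) / nominal
      print(f"max per-point rms dose error: {rms_pct.max():.1f}% of the local dose")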

  1. Error analysis and system optimization of non-null aspheric testing system

    NASA Astrophysics Data System (ADS)

    Luo, Yongjie; Yang, Yongying; Liu, Dong; Tian, Chao; Zhuo, Yongmo

    2010-10-01

    A non-null aspheric testing system, which employs a partial null lens (PNL for short) and a reverse iterative optimization reconstruction (ROR for short) technique, is proposed in this paper. Based on system modeling in ray-tracing software, the parameters of each optical element are optimized, which makes the system model more precise. The systematic error of the non-null aspheric testing system is analyzed and can be categorized into two types: the error due to the surface parameters of the PNL in the system model, and the remainder from the non-null interferometer, obtained by the approach of error-storage subtraction. Experimental results show that, after the systematic error is removed from the testing result of the non-null aspheric testing system, the aspheric surface is precisely reconstructed by the ROR technique, and consideration of the systematic error greatly increases the test accuracy of the non-null aspheric testing system.

  2. Error Patterns in Problem Solving.

    ERIC Educational Resources Information Center

    Babbitt, Beatrice C.

    Although many common problem-solving errors within the realm of school mathematics have been previously identified, a compilation of such errors is not readily available within learning disabilities textbooks, mathematics education texts, or teacher's manuals for school mathematics texts. Using data on error frequencies drawn from both the Fourth…

  3. Measurement Error. For Good Measure....

    ERIC Educational Resources Information Center

    Johnson, Stephen; Dulaney, Chuck; Banks, Karen

    No test, however well designed, can measure a student's true achievement because numerous factors interfere with the ability to measure achievement. These factors are sources of measurement error, and the goal in creating tests is to have as little measurement error as possible. Error can result from the test design, factors related to individual…

  4. Feature Referenced Error Correction Apparatus.

    DTIC Science & Technology

    A feature referenced error correction apparatus utilizing the multiple images of the interstage level image format to compensate for positional...images and by the generation of an error correction signal in response to the sub-frame registration errors. (Author)

  5. Consequences of leaf calibration errors on IMRT delivery

    NASA Astrophysics Data System (ADS)

    Sastre-Padro, M.; Welleweerd, J.; Malinen, E.; Eilertsen, K.; Olsen, D. R.; van der Heide, U. A.

    2007-02-01

    IMRT treatments using multi-leaf collimators may involve a large number of segments in order to spare the organs at risk. When a large proportion of these segments are small, leaf positioning errors may become relevant and have therapeutic consequences. The performance of four head and neck IMRT treatments under eight different cases of leaf positioning errors has been studied. Systematic leaf pair offset errors in the range of ±2.0 mm were introduced, thus modifying the segment sizes of the original IMRT plans. Thirty-six films were irradiated with the original and modified segments. The dose difference and the gamma index (with 2%/2 mm criteria) were used for evaluating the discrepancies between the irradiated films. The median dose differences were linearly related to the simulated leaf pair errors. In the worst case, a 2.0 mm error generated a median dose difference of 1.5%. Following the gamma analysis, two out of the 32 modified plans were not acceptable. In conclusion, small systematic leaf bank positioning errors have a measurable impact on the delivered dose and may have consequences for the therapeutic outcome of IMRT.
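
    The 2%/2 mm gamma evaluation used above has a compact one-dimensional form. The sketch below compares a nominal field-edge profile with one narrowed by a hypothetical 1 mm leaf offset; the profile shapes and numbers are assumptions for illustration only.

      import numpy as np

      def gamma_1d(dose_ref, dose_eval, x, dose_tol=0.02, dist_tol=2.0):
          """Simplified 1-D gamma index (2%/2 mm by default): for each reference point,
          search all evaluated points for the minimum combined dose/distance metric."""
          dd = (dose_eval[None, :] - dose_ref[:, None]) / (dose_tol * dose_ref.max())
          dx = (x[None, :] - x[:, None]) / dist_tol
          return np.sqrt(dd ** 2 + dx ** 2).min(axis=1)

      # Hypothetical profile: a 1 mm systematic leaf offset narrows the field slightly.
      x = np.arange(-40.0, 40.0, 0.5)                        # mm
      ref = 1.0 / (1.0 + np.exp((np.abs(x) - 30.0) / 2.0))   # nominal field edge at 30 mm
      shifted = 1.0 / (1.0 + np.exp((np.abs(x) - 29.0) / 2.0))
      gamma = gamma_1d(ref, shifted, x)
      print(f"gamma pass rate (gamma <= 1): {100 * np.mean(gamma <= 1):.1f}%")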

  6. Structured error recovery for code-word-stabilized quantum codes

    SciTech Connect

    Li Yunfan; Dumer, Ilya; Grassl, Markus; Pryadko, Leonid P.

    2010-05-15

    Code-word-stabilized (CWS) codes are, in general, nonadditive quantum codes that can correct errors by an exhaustive search of different error patterns, similar to the way that we decode classical nonlinear codes. For an n-qubit quantum code correcting errors on up to t qubits, this brute-force approach consecutively tests different errors of weight t or less and employs a separate n-qubit measurement in each test. In this article, we suggest an error grouping technique that allows one to simultaneously test large groups of errors in a single measurement. This structured error recovery technique exponentially reduces the number of measurements by about 3^t times. While it still leaves exponentially many measurements for a generic CWS code, the technique is equivalent to syndrome-based recovery for the special case of additive CWS codes.
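
    As a rough illustration of the scaling stated above (not the authors' construction), the brute-force count of correctable Pauli errors and the approximate factor-of-3^t reduction can be worked out for example values of n and t:

```python
from math import comb

def num_pauli_errors(n, t):
    """Number of n-qubit Pauli errors of weight at most t (identity included)."""
    return sum(comb(n, w) * 3**w for w in range(t + 1))

n, t = 10, 2                                  # example code length and correctable weight
brute_force = num_pauli_errors(n, t)          # one measurement per error pattern
grouped = brute_force / 3**t                  # approximate count after grouping ~3^t errors per test
print(brute_force, round(grouped))            # 436 vs ~48 measurements
```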

  7. Structured error recovery for code-word-stabilized quantum codes

    NASA Astrophysics Data System (ADS)

    Li, Yunfan; Dumer, Ilya; Grassl, Markus; Pryadko, Leonid P.

    2010-05-01

    Code-word-stabilized (CWS) codes are, in general, nonadditive quantum codes that can correct errors by an exhaustive search of different error patterns, similar to the way that we decode classical nonlinear codes. For an n-qubit quantum code correcting errors on up to t qubits, this brute-force approach consecutively tests different errors of weight t or less and employs a separate n-qubit measurement in each test. In this article, we suggest an error grouping technique that allows one to simultaneously test large groups of errors in a single measurement. This structured error recovery technique exponentially reduces the number of measurements by about 3^t times. While it still leaves exponentially many measurements for a generic CWS code, the technique is equivalent to syndrome-based recovery for the special case of additive CWS codes.

  8. Identification of Error Patterns in Terminal-Area ATC Communications

    NASA Technical Reports Server (NTRS)

    Quinn, Cheryl; Walter, Kim E.; Rosekind, Mark R. (Technical Monitor)

    1997-01-01

    Advancing air traffic management technologies have enabled a greater number of aircraft to use the same airspace more effectively. As aircraft separations are reduced and final approaches are more finely timed, there is less room for error. The present study examined 122 terminal-area, loss-of-separation and procedure-violation incidents reported to the Aviation Safety Reporting System (ASRS) by air traffic controllers. Narrative descriptions of the incidents were coded for type of violation, contributing factors, recovery strategies, and consequences. Usually multiple errors occurred prior to the violation. Error sequences were analyzed and common patterns of errors were identified. In half of the incidents, errors were noticed in time to correct mistakes; in almost 43% of these, additional errors were committed during the recovery attempt. This analysis shows that redundancies in the present air traffic control system may not be sufficient to support large increases in traffic density. Error prevention and design considerations for air traffic management systems are discussed.

  9. On typographical errors.

    PubMed

    Hamilton, J W

    1993-09-01

    In his overall assessment of parapraxes in 1901, Freud included typographical mistakes but did not elaborate on or study this subject nor did he have anything to say about it in his later writings. This paper lists textual errors from a variety of current literary sources and explores the dynamic importance of their execution and the failure to make necessary corrections during the editorial process. While there has been a deemphasis of the role of unconscious determinants in the genesis of all slips as a result of recent findings in cognitive psychology, the examples offered suggest that, with respect to motivation, lapses in compulsivity contribute to their original commission while thematic compliance and voyeuristic issues are important in their not being discovered prior to publication.

  10. Register file soft error recovery

    DOEpatents

    Fleischer, Bruce M.; Fox, Thomas W.; Wait, Charles D.; Muff, Adam J.; Watson, III, Alfred T.

    2013-10-15

    Register file soft error recovery including a system that includes a first register file and a second register file that mirrors the first register file. The system also includes an arithmetic pipeline for receiving data read from the first register file, and error detection circuitry to detect whether the data read from the first register file includes corrupted data. The system further includes error recovery circuitry to insert an error recovery instruction into the arithmetic pipeline in response to detecting the corrupted data. The inserted error recovery instruction replaces the corrupted data in the first register file with a copy of the data from the second register file.

  11. Error-budget paradigms and laser mask pattern generator evolution

    NASA Astrophysics Data System (ADS)

    Hamaker, H. Christopher; Jolley, Matthew J.; Berwick, Andrew D.

    2009-01-01

    The evolution of the ALTA® series of laser mask pattern generators has increased the relative contribution of intensity errors to critical-dimension (CD) control compared with that of placement errors. This paradigm shift has driven a change in rasterization strategy wherein aerial image sharpness is improved at the cost of a slight decrease in the averaging of column-to-column placement errors. Print performance evaluations using small-area CD test patterns show improvements in stripe-axis local CD uniformity (CDU) 3σ values of 15-25% using the new strategy, and systematic brush-error contributions were reduced by 50%. The increased importance of intensity errors, coupled with the improvement of ALTA system performance, has also made mask-blank and process-induced errors a more significant part of the overall error budget. A simple model based on two components, a pattern-invariant footprint and one related to the exposure density ρ(x, y), is shown to describe adequately the errors induced by these sources. The first component is modeled by a fourth-order, two-dimensional polynomial, whereas the second is modeled as a convolution of ρ(x, y) with one or more Gaussian kernels. Implementation of this model on the ALTA 4700 system shows improvements in global CDU of 50%.
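
    A minimal sketch of the two-component error model described above (pattern-invariant fourth-order polynomial plus exposure density convolved with Gaussian kernels). The polynomial coefficients, kernel widths, weights, and the toy exposure density are hypothetical placeholders, not fitted values from the paper.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def cd_error_model(x, y, rho, poly_coeffs, kernel_sigmas_px, kernel_weights):
    """Two-component CD error model (sketch).

    Component 1: pattern-invariant footprint, a 4th-order 2-D polynomial in (x, y).
    Component 2: pattern-dependent term, exposure density rho(x, y) convolved with
                 one or more Gaussian kernels.
    """
    footprint = np.zeros_like(rho)
    k = 0
    for i in range(5):                     # total polynomial order <= 4
        for j in range(5 - i):
            footprint += poly_coeffs[k] * (x ** i) * (y ** j)
            k += 1
    loading = sum(w * gaussian_filter(rho, s)
                  for s, w in zip(kernel_sigmas_px, kernel_weights))
    return footprint + loading

# Hypothetical 256 x 256 mask area with ~50% exposure density
ny, nx = 256, 256
y, x = np.mgrid[0:ny, 0:nx] / 256.0
rho = 0.5 + 0.1 * np.sin(2 * np.pi * x)                  # toy exposure density
n_poly = sum(1 for i in range(5) for j in range(5 - i))  # 15 polynomial terms
err = cd_error_model(x, y, rho, np.full(n_poly, 1e-3), [4.0, 16.0], [0.5, 0.2])
print(err.shape)
```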

  12. A Simple Error Formula for the Lunar Ephemeris of Regiomontanus

    NASA Astrophysics Data System (ADS)

    Brosche, P.; Kokott, W.

    The errors of the lunar ephemeris of Regiomontanus are a function mainly of lunar age. This is because the "variation" in the modern theory has no counterpart in Ptolemaic theory. In addition to this sinusoidal error constituent (with zero point at the syzygies and an amplitude of ±0.66° at the first and last quarters), there is a constant error due to longitude inconsistencies and a random part of ±0.5° from various sources.
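
    The description above can be summarized in a short formula. The sin D dependence is an assumption inferred only from the stated zero at the syzygies and the extrema at the quarters; it is a sketch, not the authors' expression.

```latex
% Sketch reconstructed from the abstract's description
\Delta\lambda(D) \;\approx\; \Delta_0 \;+\; 0.66^{\circ}\,\sin D \;+\; \varepsilon,
\qquad |\varepsilon| \lesssim 0.5^{\circ}
```

    where D is the lunar age (elongation), Δ0 the constant offset from longitude inconsistencies, and ε the random part.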

  13. Systematic Alternatives to Proposal Preparation.

    ERIC Educational Resources Information Center

    Knirk, Frederick G.; And Others

    Educators who have to develop proposals must be concerned with making effective decisions. This paper discusses a number of educational systems management tools which can be used to reduce the time and effort in developing a proposal. In addition, ways are introduced to systematically increase the quality of the proposal through the development of…

  14. Towards error-free interaction.

    PubMed

    Tsoneva, Tsvetomira; Bieger, Jordi; Garcia-Molina, Gary

    2010-01-01

    Human-machine interaction (HMI) relies on pattern recognition algorithms that are not perfect. To improve the performance and usability of these systems we can utilize the neural mechanisms in the human brain dealing with error awareness. This study aims at designing a practical error detection algorithm using electroencephalogram signals that can be integrated in an HMI system. Thus, real-time operation, customization, and operation convenience are important. We address these requirements in an experimental framework simulating machine errors. Our results confirm the presence of brain potentials related to processing of machine errors. These are used to implement an error detection algorithm emphasizing the differences in error processing on a per-subject basis. The proposed algorithm uses the individual best bipolar combination of electrode sites and requires short calibration. The single-trial error detection performance on six subjects, characterized by the area under the ROC curve, ranges from 0.75 to 0.98.

  15. Estimating standard errors in feature network models.

    PubMed

    Frank, Laurence E; Heiser, Willem J

    2007-05-01

    Feature network models are graphical structures that represent proximity data in a discrete space while using the same formalism that is the basis of least squares methods employed in multidimensional scaling. Existing methods to derive a network model from empirical data only give the best-fitting network and yield no standard errors for the parameter estimates. The additivity properties of networks make it possible to consider the model as a univariate (multiple) linear regression problem with positivity restrictions on the parameters. In the present study, both theoretical and empirical standard errors are obtained for the constrained regression parameters of a network model with known features. The performance of both types of standard error is evaluated using Monte Carlo techniques.
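
    An illustration of the idea above: treating the network model as a positivity-constrained linear regression and obtaining empirical standard errors by bootstrap. The design matrix stands in for a feature incidence matrix, and the theoretical standard errors derived in the paper are not reproduced; all values are hypothetical.

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(7)
n_pairs, n_features = 60, 5
X = rng.integers(0, 2, size=(n_pairs, n_features)).astype(float)  # hypothetical feature matrix
beta_true = np.array([0.8, 0.0, 1.5, 0.3, 0.6])                    # nonnegative edge lengths
y = X @ beta_true + rng.normal(0, 0.1, n_pairs)                    # observed dissimilarities

beta_hat, _ = nnls(X, y)                                           # constrained least squares

# Empirical standard errors via a simple nonparametric bootstrap
boot = np.empty((500, n_features))
for b in range(boot.shape[0]):
    idx = rng.integers(0, n_pairs, n_pairs)
    boot[b], _ = nnls(X[idx], y[idx])
se = boot.std(axis=0)
print("estimates:", np.round(beta_hat, 3))
print("bootstrap SEs:", np.round(se, 3))
```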

  16. Quantum Error Correction with Biased Noise

    NASA Astrophysics Data System (ADS)

    Brooks, Peter

    Quantum computing offers powerful new techniques for speeding up the calculation of many classically intractable problems. Quantum algorithms can allow for the efficient simulation of physical systems, with applications to basic research, chemical modeling, and drug discovery; other algorithms have important implications for cryptography and internet security. At the same time, building a quantum computer is a daunting task, requiring the coherent manipulation of systems with many quantum degrees of freedom while preventing environmental noise from interacting too strongly with the system. Fortunately, we know that, under reasonable assumptions, we can use the techniques of quantum error correction and fault tolerance to achieve an arbitrary reduction in the noise level. In this thesis, we look at how additional information about the structure of noise, or "noise bias," can improve or alter the performance of techniques in quantum error correction and fault tolerance. In Chapter 2, we explore the possibility of designing certain quantum gates to be extremely robust with respect to errors in their operation. This naturally leads to structured noise where certain gates can be implemented in a protected manner, allowing the user to focus their protection on the noisier unprotected operations. In Chapter 3, we examine how to tailor error-correcting codes and fault-tolerant quantum circuits in the presence of dephasing biased noise, where dephasing errors are far more common than bit-flip errors. By using an appropriately asymmetric code, we demonstrate the ability to improve the amount of error reduction and decrease the physical resources required for error correction. In Chapter 4, we analyze a variety of protocols for distilling magic states, which enable universal quantum computation, in the presence of faulty Clifford operations. Here again there is a hierarchy of noise levels, with a fixed error rate for faulty gates, and a second rate for errors in the distilled

  17. Estimation of Model Error Variances During Data Assimilation

    NASA Technical Reports Server (NTRS)

    Dee, Dick

    2003-01-01

    Data assimilation is all about understanding the error characteristics of the data and models that are used in the assimilation process. Reliable error estimates are needed to implement observational quality control, bias correction of observations and model fields, and intelligent data selection. Meaningful covariance specifications are obviously required for the analysis as well, since the impact of any single observation strongly depends on the assumed structure of the background errors. Operational atmospheric data assimilation systems still rely primarily on climatological background error covariances. To obtain error estimates that reflect both the character of the flow and the current state of the observing system, it is necessary to solve three problems: (1) how to account for the short-term evolution of errors in the initial conditions; (2) how to estimate the additional component of error caused by model defects; and (3) how to compute the error reduction in the analysis due to observational information. Various approaches are now available that provide approximate solutions to the first and third of these problems. However, the useful accuracy of these solutions very much depends on the size and character of the model errors and the ability to account for them. Model errors represent the real-world forcing of the error evolution in a data assimilation system. Clearly, meaningful model error estimates and/or statistics must be based on information external to the model itself. The most obvious information source is observational, and since the volume of available geophysical data is growing rapidly, there is some hope that a purely statistical approach to model error estimation can be viable. This requires that the observation errors themselves are well understood and quantifiable. We will discuss some of these challenges and present a new sequential scheme for estimating model error variances from observations in the context of an atmospheric data

  18. Improved Error Thresholds for Measurement-Free Error Correction

    NASA Astrophysics Data System (ADS)

    Crow, Daniel; Joynt, Robert; Saffman, M.

    2016-09-01

    Motivated by limitations and capabilities of neutral atom qubits, we examine whether measurement-free error correction can produce practical error thresholds. We show that this can be achieved by extracting redundant syndrome information, giving our procedure extra fault tolerance and eliminating the need for ancilla verification. The procedure is particularly favorable when multiqubit gates are available for the correction step. Simulations of the bit-flip, Bacon-Shor, and Steane codes indicate that coherent error correction can produce threshold error rates that are on the order of 10^-3 to 10^-4, comparable with or better than measurement-based values, and much better than previous results for other coherent error correction schemes. This indicates that coherent error correction is worthy of serious consideration for achieving protected logical qubits.

  19. Improved Error Thresholds for Measurement-Free Error Correction.

    PubMed

    Crow, Daniel; Joynt, Robert; Saffman, M

    2016-09-23

    Motivated by limitations and capabilities of neutral atom qubits, we examine whether measurement-free error correction can produce practical error thresholds. We show that this can be achieved by extracting redundant syndrome information, giving our procedure extra fault tolerance and eliminating the need for ancilla verification. The procedure is particularly favorable when multiqubit gates are available for the correction step. Simulations of the bit-flip, Bacon-Shor, and Steane codes indicate that coherent error correction can produce threshold error rates that are on the order of 10^{-3} to 10^{-4}-comparable with or better than measurement-based values, and much better than previous results for other coherent error correction schemes. This indicates that coherent error correction is worthy of serious consideration for achieving protected logical qubits.

  20. Model of glucose sensor error components: identification and assessment for new Dexcom G4 generation devices.

    PubMed

    Facchinetti, Andrea; Del Favero, Simone; Sparacino, Giovanni; Cobelli, Claudio

    2015-12-01

    It is clinically well-established that minimally invasive subcutaneous continuous glucose monitoring (CGM) sensors can significantly improve diabetes treatment. However, CGM readings are still not as reliable as those provided by standard fingerprick blood glucose (BG) meters. In addition to unavoidable random measurement noise, other components of sensor error are distortions due to the blood-to-interstitial glucose kinetics and systematic under-/overestimations associated with the sensor calibration process. A quantitative assessment of these components, and the ability to simulate them with precision, is of paramount importance in the design of CGM-based applications, e.g., the artificial pancreas (AP), and in their in silico testing. In the present paper, we identify and assess a model of sensor error for two sensors, i.e., the G4 Platinum (G4P) and the advanced G4 for artificial pancreas studies (G4AP), both belonging to the recently presented "fourth" generation of Dexcom CGM sensors but differing in their data processing. Results are also compared with those obtained with a sensor belonging to the previous, "third," generation by the same manufacturer, the SEVEN Plus (7P). For each sensor, the error model is derived from 12-h CGM recordings of two sensors used simultaneously and BG samples collected in parallel every 15 ± 5 min. Thanks to technological innovations, G4P outperforms 7P, with an average mean absolute relative difference (MARD) of 11.1% versus 14.2%, respectively, and lowers the error of each component by about 30%. Thanks to its more sophisticated data processing algorithms, G4AP proved more reliable than G4P, with a MARD of 10.0% and a further decrease of about 20% in the error due to blood-to-interstitial glucose kinetics.
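
    MARD, the headline figure above, is simply the mean of |CGM − BG| / BG over paired samples. A minimal sketch with hypothetical readings:

```python
import numpy as np

def mard(cgm, bg):
    """Mean absolute relative difference (%) between CGM readings and reference BG."""
    cgm, bg = np.asarray(cgm, float), np.asarray(bg, float)
    return 100.0 * np.mean(np.abs(cgm - bg) / bg)

# Hypothetical paired samples (mg/dL), reference BG drawn every ~15 min
bg  = [110, 145, 180, 95, 70, 200]
cgm = [118, 138, 195, 90, 78, 214]
print(f"MARD = {mard(cgm, bg):.1f}%")
```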

  1. The Role of Supralexical Prosodic Units in Speech Production: Evidence from the Distribution of Speech Errors

    ERIC Educational Resources Information Center

    Choe, Wook Kyung

    2013-01-01

    The current dissertation represents one of the first systematic studies of the distribution of speech errors within supralexical prosodic units. Four experiments were conducted to gain insight into the specific role of these units in speech planning and production. The first experiment focused on errors in adult English. These were found to be…

  2. Empirical Error Analysis of GPS RO Atmospheric Profiles

    NASA Astrophysics Data System (ADS)

    Scherllin-Pirscher, B.; Steiner, A. K.; Foelsche, U.; Kirchengast, G.; Kuo, Y.

    2010-12-01

    height. Seasonal and latitudinal characteristics of the stratospheric observational error are accounted for. All parameters are modeled separately for bending angle, refractivity, dry pressure, dry geopotential height, and dry temperature. We also briefly address the current status of upper bound estimates for residual systematic errors in climate-averaged profiles.

  3. Contour Error Map Algorithm

    NASA Technical Reports Server (NTRS)

    Merceret, Francis; Lane, John; Immer, Christopher; Case, Jonathan; Manobianco, John

    2005-01-01

    The contour error map (CEM) algorithm and the software that implements the algorithm are means of quantifying correlations between sets of time-varying data that are binarized and registered on spatial grids. The present version of the software is intended for use in evaluating numerical weather forecasts against observational sea-breeze data. In cases in which observational data come from off-grid stations, it is necessary to preprocess the observational data to transform them into gridded data. First, the wind direction is gridded and binarized so that D(i,j;n) is the input to CEM based on forecast data and d(i,j;n) is the input to CEM based on gridded observational data. Here, i and j are spatial indices representing 1.25-km intervals along the west-to-east and south-to-north directions, respectively; and n is a time index representing 5-minute intervals. A binary value of D or d = 0 corresponds to an offshore wind, whereas a value of D or d = 1 corresponds to an onshore wind. CEM includes two notable subalgorithms: One identifies and verifies sea-breeze boundaries; the other, which can be invoked optionally, performs an image-erosion function for the purpose of attempting to eliminate river-breeze contributions in the wind fields.
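
    A minimal sketch of the binarization step described above, with a naive gridwise agreement score standing in for the full CEM comparison. The onshore wind sector, grid size, and wind fields are hypothetical, and the boundary-verification and image-erosion subalgorithms are not reproduced.

```python
import numpy as np

def binarize_onshore(wind_dir_deg, onshore_min_deg, onshore_max_deg):
    """Binarize gridded wind direction: 1 = onshore, 0 = offshore.

    wind_dir_deg has shape (n_times, ny, nx); the onshore sector is an assumption
    (e.g., winds from 45-135 degrees for an east-facing coastline).
    """
    return ((wind_dir_deg >= onshore_min_deg) &
            (wind_dir_deg <= onshore_max_deg)).astype(np.uint8)

def gridwise_agreement(D, d):
    """Fraction of grid cells and times where forecast D and observation d agree."""
    return float(np.mean(D == d))

# Hypothetical 5-minute fields on a 1.25 km grid (8 times, 20 x 20 cells)
rng = np.random.default_rng(0)
fcst_dir = rng.uniform(0, 360, size=(8, 20, 20))
obs_dir = fcst_dir + rng.normal(0, 20, size=fcst_dir.shape)   # observations close to forecast
D = binarize_onshore(fcst_dir % 360, 45, 135)
d = binarize_onshore(obs_dir % 360, 45, 135)
print(f"agreement = {gridwise_agreement(D, d):.2%}")
```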

  4. Comparison of analytical error and sampling error for contaminated soil.

    PubMed

    Gustavsson, Björn; Luthbom, Karin; Lagerkvist, Anders

    2006-11-16

    Investigation of soil from contaminated sites requires several sample handling steps that, most likely, will induce uncertainties in the sample. The theory of sampling describes seven sampling errors that can be calculated, estimated or discussed in order to get an idea of the size of the sampling uncertainties. With the aim of comparing the size of the analytical error to the total sampling error, these seven errors were applied, estimated and discussed for a case study of a contaminated site. The manageable errors were summarized, showing a range of three orders of magnitude between the examples. The comparisons show that the quotient between the total sampling error and the analytical error is larger than 20 in most calculation examples. Exceptions were samples taken in hot spots, where some components of the total sampling error get small and the analytical error gets large in comparison. Low concentration of contaminant, small extracted sample size and large particles in the sample contribute to the extent of uncertainty.

  5. Grammatical Errors Produced by English Majors: The Translation Task

    ERIC Educational Resources Information Center

    Mohaghegh, Hamid; Zarandi, Fatemeh Mahmoudi; Shariati, Mohammad

    2011-01-01

    This study investigated the frequency of the grammatical errors related to the four categories of preposition, relative pronoun, article, and tense using the translation task. In addition, the frequencies of these grammatical errors in different categories and in each category were examined. The quantitative component of the study further looked…

  6. Estimating errors in least-squares fitting

    NASA Technical Reports Server (NTRS)

    Richter, P. H.

    1995-01-01

    While least-squares fitting procedures are commonly used in data analysis and are extensively discussed in the literature devoted to this subject, the proper assessment of errors resulting from such fits has received relatively little attention. The present work considers statistical errors in the fitted parameters, as well as in the values of the fitted function itself, resulting from random errors in the data. Expressions are derived for the standard error of the fit, as a function of the independent variable, for the general nonlinear and linear fitting problems. Additionally, closed-form expressions are derived for some examples commonly encountered in the scientific and engineering fields, namely ordinary polynomial and Gaussian fitting functions. These results have direct application to the assessment of the antenna gain and system temperature characteristics, in addition to a broad range of problems in data analysis. The effects of the nature of the data and the choice of fitting function on the ability to accurately model the system under study are discussed, and some general rules are deduced to assist workers intent on maximizing the amount of information obtained from a given set of measurements.
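
    For the ordinary polynomial case mentioned above, parameter standard errors and the standard error of the fitted function can be obtained from the parameter covariance matrix. A minimal sketch with synthetic data (not the paper's closed-form derivation):

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(0, 10, 50)
y_true = 2.0 + 0.5 * x - 0.03 * x**2
y = y_true + rng.normal(0, 0.2, x.size)          # random errors in the data

# Quadratic fit; cov=True returns the parameter covariance matrix
coeffs, cov = np.polyfit(x, y, deg=2, cov=True)
param_se = np.sqrt(np.diag(cov))                  # standard errors of fitted parameters

# Standard error of the fitted function itself, via the Jacobian J = [x^2, x, 1]
J = np.vander(x, 3)
fit_se = np.sqrt(np.einsum('ij,jk,ik->i', J, cov, J))

print("parameter standard errors:", param_se)
print("max standard error of the fit:", fit_se.max())
```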

  7. Nutrition Informatics Applications in Clinical Practice: a Systematic Review.

    PubMed

    North, Jennifer C; Jordan, Kristine C; Metos, Julie; Hurdle, John F

    2015-01-01

    Nutrition care and metabolic control contribute to clinical patient outcomes. Biomedical informatics applications represent a way to potentially improve quality and efficiency of nutrition management. We performed a systematic literature review to identify clinical decision support and computerized provider order entry systems used to manage nutrition care. Online research databases were searched using a specific set of keywords. Additionally, bibliographies were referenced for supplemental citations. Four independent reviewers selected sixteen studies out of 364 for review. These papers described adult and neonatal nutrition support applications, blood glucose management applications, and other nutrition applications. Overall, results indicated that computerized interventions could contribute to improved patient outcomes and provider performance. Specifically, computer systems in the clinical setting improved nutrient delivery, rates of malnutrition, weight loss, blood glucose values, clinician efficiency, and error rates. In conclusion, further investigation of informatics applications on nutritional and performance outcomes utilizing rigorous study designs is recommended.

  8. Nutrition Informatics Applications in Clinical Practice: a Systematic Review

    PubMed Central

    North, Jennifer C.; Jordan, Kristine C.; Metos, Julie; Hurdle, John F.

    2015-01-01

    Nutrition care and metabolic control contribute to clinical patient outcomes. Biomedical informatics applications represent a way to potentially improve quality and efficiency of nutrition management. We performed a systematic literature review to identify clinical decision support and computerized provider order entry systems used to manage nutrition care. Online research databases were searched using a specific set of keywords. Additionally, bibliographies were referenced for supplemental citations. Four independent reviewers selected sixteen studies out of 364 for review. These papers described adult and neonatal nutrition support applications, blood glucose management applications, and other nutrition applications. Overall, results indicated that computerized interventions could contribute to improved patient outcomes and provider performance. Specifically, computer systems in the clinical setting improved nutrient delivery, rates of malnutrition, weight loss, blood glucose values, clinician efficiency, and error rates. In conclusion, further investigation of informatics applications on nutritional and performance outcomes utilizing rigorous study designs is recommended. PMID:26958233

  9. Estimating Measurement Error of the Patient Activation Measure for Respondents with Partially Missing Data.

    PubMed

    Linden, Ariel

    2015-01-01

    The patient activation measure (PAM) is an increasingly popular instrument used as the basis for interventions to improve patient engagement and as an outcome measure to assess intervention effect. However, a PAM score may be calculated when there are missing responses, which could lead to substantial measurement error. In this paper, measurement error is systematically estimated across the full possible range of missing items (one to twelve), using simulation in which populated items were randomly replaced with missing data for each of 1,138 complete surveys obtained in a randomized controlled trial. The PAM score was then calculated, followed by comparisons of overall simulated average mean, minimum, and maximum PAM scores to the true PAM score in order to assess the absolute percentage error (APE) for each comparison. With only one missing item, the average APE was 2.5% comparing the true PAM score to the simulated minimum score and 4.3% compared to the simulated maximum score. APEs increased with additional missing items, such that surveys with 12 missing items had average APEs of 29.7% (minimum) and 44.4% (maximum). Several suggestions and alternative approaches are offered that could be pursued to improve measurement accuracy when responses are missing.
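
    A sketch of the simulation design described above. Since the PAM's Rasch-based raw-to-activation conversion is not reproduced here, a simple 0-100 rescaling of the item mean stands in for the true scoring, so the numbers are purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(42)
n_surveys, n_items = 1138, 13
complete = rng.integers(1, 5, size=(n_surveys, n_items)).astype(float)  # hypothetical 1-4 responses

def pam_like_score(items):
    """Stand-in for the PAM score: rescale the mean of answered items to 0-100."""
    answered = items[~np.isnan(items)]
    return (answered.mean() - 1.0) / 3.0 * 100.0

true_scores = np.array([pam_like_score(row) for row in complete])

for n_missing in (1, 6, 12):
    ape = []
    for row, true in zip(complete, true_scores):
        sim = row.copy()
        sim[rng.choice(n_items, n_missing, replace=False)] = np.nan   # drop items at random
        ape.append(abs(pam_like_score(sim) - true) / max(true, 1e-6) * 100.0)
    print(f"{n_missing:2d} missing items: mean APE = {np.mean(ape):.1f}%")
```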

  10. Dopamine reward prediction error coding.

    PubMed

    Schultz, Wolfram

    2016-03-01

    Reward prediction errors consist of the differences between received and predicted rewards. They are crucial for basic forms of learning about rewards and make us strive for more rewards-an evolutionary beneficial trait. Most dopamine neurons in the midbrain of humans, monkeys, and rodents signal a reward prediction error; they are activated by more reward than predicted (positive prediction error), remain at baseline activity for fully predicted rewards, and show depressed activity with less reward than predicted (negative prediction error). The dopamine signal increases nonlinearly with reward value and codes formal economic utility. Drugs of addiction generate, hijack, and amplify the dopamine reward signal and induce exaggerated, uncontrolled dopamine effects on neuronal plasticity. The striatum, amygdala, and frontal cortex also show reward prediction error coding, but only in subpopulations of neurons. Thus, the important concept of reward prediction errors is implemented in neuronal hardware.

  11. Dopamine reward prediction error coding

    PubMed Central

    Schultz, Wolfram

    2016-01-01

    Reward prediction errors consist of the differences between received and predicted rewards. They are crucial for basic forms of learning about rewards and make us strive for more rewards—an evolutionary beneficial trait. Most dopamine neurons in the midbrain of humans, monkeys, and rodents signal a reward prediction error; they are activated by more reward than predicted (positive prediction error), remain at baseline activity for fully predicted rewards, and show depressed activity with less reward than predicted (negative prediction error). The dopamine signal increases nonlinearly with reward value and codes formal economic utility. Drugs of addiction generate, hijack, and amplify the dopamine reward signal and induce exaggerated, uncontrolled dopamine effects on neuronal plasticity. The striatum, amygdala, and frontal cortex also show reward prediction error coding, but only in subpopulations of neurons. Thus, the important concept of reward prediction errors is implemented in neuronal hardware. PMID:27069377

  12. Error correction for IFSAR

    DOEpatents

    Doerry, Armin W.; Bickel, Douglas L.

    2002-01-01

    IFSAR images of a target scene are generated by compensating for variations in vertical separation between collection surfaces defined for each IFSAR antenna by adjusting the baseline projection during image generation. In addition, height information from all antennas is processed before processing range and azimuth information in a normal fashion to create the IFSAR image.

  13. The Attraction Effect Modulates Reward Prediction Errors and Intertemporal Choices.

    PubMed

    Gluth, Sebastian; Hotaling, Jared M; Rieskamp, Jörg

    2017-01-11

    Classical economic theory contends that the utility of a choice option should be independent of other options. This view is challenged by the attraction effect, in which the relative preference between two options is altered by the addition of a third, asymmetrically dominated option. Here, we leveraged the attraction effect in the context of intertemporal choices to test whether both decisions and reward prediction errors (RPE) in the absence of choice violate the independence of irrelevant alternatives principle. We first demonstrate that intertemporal decision making is prone to the attraction effect in humans. In an independent group of participants, we then investigated how this affects the neural and behavioral valuation of outcomes using a novel intertemporal lottery task and fMRI. Participants' behavioral responses (i.e., satisfaction ratings) were modulated systematically by the attraction effect and this modulation was correlated across participants with the respective change of the RPE signal in the nucleus accumbens. Furthermore, we show that, because exponential and hyperbolic discounting models are unable to account for the attraction effect, recently proposed sequential sampling models might be more appropriate to describe intertemporal choices. Our findings demonstrate for the first time that the attraction effect modulates subjective valuation even in the absence of choice. The findings also challenge the prospect of using neuroscientific methods to measure utility in a context-free manner and have important implications for theories of reinforcement learning and delay discounting.

  14. Processor register error correction management

    SciTech Connect

    Bose, Pradip; Cher, Chen-Yong; Gupta, Meeta S.

    2016-12-27

    Processor register protection management is disclosed. In embodiments, a method of processor register protection management can include determining a sensitive logical register for executable code generated by a compiler, generating an error-correction table identifying the sensitive logical register, and storing the error-correction table in a memory accessible by a processor. The processor can be configured to generate a duplicate register of the sensitive logical register identified by the error-correction table.

  15. The Role of Model and Initial Condition Error in Numerical Weather Forecasting Investigated with an Observing System Simulation Experiment

    NASA Technical Reports Server (NTRS)

    Prive, Nikki C.; Errico, Ronald M.

    2013-01-01

    A series of experiments that explore the roles of model and initial condition error in numerical weather prediction are performed using an observing system simulation experiment (OSSE) framework developed at the National Aeronautics and Space Administration Global Modeling and Assimilation Office (NASA/GMAO). The use of an OSSE allows the analysis and forecast errors to be explicitly calculated, and different hypothetical observing networks can be tested with ease. In these experiments, both a full global OSSE framework and an 'identical twin' OSSE setup are utilized to compare the behavior of the data assimilation system and evolution of forecast skill with and without model error. The initial condition error is manipulated by varying the distribution and quality of the observing network and the magnitude of observation errors. The results show that model error has a strong impact on both the quality of the analysis field and the evolution of forecast skill, including both systematic and unsystematic model error components. With a realistic observing network, the analysis state retains a significant quantity of error due to systematic model error. If errors of the analysis state are minimized, model error acts to rapidly degrade forecast skill during the first 24-48 hours of forward integration. In the presence of model error, the impact of observation errors on forecast skill is small, but in the absence of model error, observation errors cause a substantial degradation of the skill of medium range forecasts.

  16. Error compensation in a pointing system based on Risley prisms.

    PubMed

    Bravo-Medina, Beethoven; Strojnik, Marija; Garcia-Torales, Guillermo; Torres-Ortega, Hector; Estrada-Marmolejo, Ruben; Beltrán-González, Anuar; Flores, Jorge L

    2017-03-10

    Risley prisms are widely used for beam pointing in several optical systems. No exact closed-form solution exists for the inverse problem, which must instead be solved by numerical methods. However, the errors introduced by misalignment are usually greater than the approximation errors. We present a novel method to compensate for alignment errors in pointing systems based on Risley prisms. The prism model that we used is based on the paraxial approximation with an additional vector to compensate for typical alignment errors. Simulation and experimental results show that the improvement in pointing accuracy is achievable even in comparison with exact ray tracing methods.

  17. Compensating For GPS Ephemeris Error

    NASA Technical Reports Server (NTRS)

    Wu, Jiun-Tsong

    1992-01-01

    A method of computing the position of a user station receiving signals from the Global Positioning System (GPS) of navigational satellites compensates for most of the GPS ephemeris error. The present method enables the user station to reduce the error in its computed position substantially. The user station must have access to two or more reference stations at precisely known positions several hundred kilometers apart and must be in the neighborhood of those reference stations. The method is based on the fact that, when GPS data are used to compute the baseline between a reference station and the user station, the vector error in the computed baseline is proportional to the ephemeris error and to the length of the baseline.
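
    The proportionality stated above is often written as a rule of thumb; the following is a sketch of that form, not the article's derivation.

```latex
% Baseline error induced by satellite ephemeris error (rule-of-thumb sketch)
\delta b \;\approx\; \frac{b}{\rho}\,\delta r
```

    where b is the baseline length, ρ ≈ 2×10^4 km is the distance to a GPS satellite, and δr is the ephemeris error; for example, a 5 m orbit error over a 500 km baseline contributes roughly (500/20000)·5 m ≈ 0.13 m.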

  18. A theory of human error

    NASA Technical Reports Server (NTRS)

    Mcruer, D. T.; Clement, W. F.; Allen, R. W.

    1980-01-01

    Human error, a significant contributing factor in a very high proportion of civil transport, general aviation, and rotorcraft accidents, is investigated. Correction of the sources of human error requires that one attempt to reconstruct underlying and contributing causes of error from the circumstantial causes cited in official investigative reports. A validated analytical theory of the input-output behavior of human operators involving manual control, communication, supervisory, and monitoring tasks which are relevant to aviation operations is presented. This theory of behavior, both appropriate and inappropriate, provides an insightful basis for investigating, classifying, and quantifying the needed cause-effect relationships governing propagation of human error.

  19. Confidence limits and their errors

    SciTech Connect

    Rajendran Raja

    2002-03-22

    Confidence limits are commonplace in physics analysis. Great care must be taken in their calculation and use, especially in cases of limited statistics. We introduce the concept of statistical errors of confidence limits and argue that not only should limits be calculated but also their errors, in order to represent the results of the analysis to the fullest. We show that comparison of two different limits from two different experiments becomes easier when their errors are also quoted. Use of errors of confidence limits will lead to abatement of the debate on which method is best suited to calculate confidence limits.

  20. Systematic Clustering of Transcription Start Site Landscapes

    PubMed Central

    Zhao, Xiaobei; Valen, Eivind; Parker, Brian J.; Sandelin, Albin

    2011-01-01

    Genome-wide, high-throughput methods for transcription start site (TSS) detection have shown that most promoters have an array of neighboring TSSs where some are used more than others, forming a distribution of initiation propensities. TSS distributions (TSSDs) vary widely between promoters and earlier studies have shown that the TSSDs have biological implications in both regulation and function. However, no systematic study has been made to explore how many types of TSSDs and by extension core promoters exist and to understand which biological features distinguish them. In this study, we developed a new non-parametric dissimilarity measure and clustering approach to explore the similarities and stabilities of clusters of TSSDs. Previous studies have used arbitrary thresholds to arrive at two general classes: broad and sharp. We demonstrated that in addition to the previous broad/sharp dichotomy an additional category of promoters exists. Unlike typical TATA-driven sharp TSSDs, where the TSS position can vary by a few nucleotides, in this category virtually all TSSs originate from the same genomic position. These promoters lack epigenetic signatures of typical mRNA promoters and a substantial subset of them map upstream of ribosomal protein pseudogenes. We present evidence that these are likely mapping errors, which have confounded earlier analyses, due to the high similarity of ribosomal gene promoters in combination with known G addition bias in the CAGE libraries. Thus, previous two-class separations of promoters based on TSS distributions are motivated, but the ultra-sharp TSS distributions will confound downstream analyses if not removed. PMID:21887249

  1. The Drop Volume Method for Interfacial Tension Determination: An Error Analysis.

    PubMed

    Earnshaw; Johnson; Carroll; Doyle

    1996-01-15

    An error analysis of the drop volume method of determination of surface or interfacial tension is presented. It is shown that the presence of the empirical correction term may lead to either a decrease or an increase in the final uncertainty of the calculated tension. Recommendations to maximize the precision of measurement are made. It is further shown that the systematic error due to the correction term is less than 0.04%; under the conditions recommended to minimize the statistical uncertainty, the systematic error should be less than half this figure. Tabulations of recommended values of the correction function are given.

  2. Measurement Error and Equating Error in Power Analysis

    ERIC Educational Resources Information Center

    Phillips, Gary W.; Jiang, Tao

    2016-01-01

    Power analysis is a fundamental prerequisite for conducting scientific research. Without power analysis the researcher has no way of knowing whether the sample size is large enough to detect the effect he or she is looking for. This paper demonstrates how psychometric factors such as measurement error and equating error affect the power of…

  3. Correcting numerical integration errors caused by small aliasing errors

    SciTech Connect

    Smallwood, D.O.

    1997-11-01

    Small sampling errors can have a large effect on numerically integrated waveforms. An example is the integration of acceleration to compute velocity and displacement waveforms. These large integration errors complicate checking the suitability of the acceleration waveform for reproduction on shakers. For waveforms typically used for shaker reproduction, the errors become significant when the frequency content of the waveform spans a large frequency range. It is shown that these errors are essentially independent of the numerical integration method used, and are caused by small aliasing errors from the frequency components near the Nyquist frequency. A method to repair the integrated waveforms is presented. The method involves using a model of the acceleration error, and fitting this model to the acceleration, velocity, and displacement waveforms to force the waveforms to fit the assumed initial and final values. The correction is then subtracted from the acceleration before integration. The method is effective where the errors are isolated to a small section of the time history. It is shown that the common method to repair these errors using a high pass filter is sometimes ineffective for this class of problem.
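
    A minimal sketch of the repair idea described above: fit a simple error model so that the integrated velocity and displacement meet assumed zero end values, then subtract it from the acceleration before integrating. The linear-in-time error model and the test waveform are stand-ins, not the paper's model.

```python
import numpy as np
from scipy.integrate import cumulative_trapezoid

def repair_and_integrate(acc, dt):
    """Integrate acceleration to velocity/displacement, forcing zero end values.

    A low-order correction (a hypothetical stand-in for the paper's acceleration-error
    model) is fitted so that the final velocity and displacement return to zero,
    then subtracted from the acceleration before integration.
    """
    t = np.arange(acc.size) * dt
    T = t[-1]
    vel = cumulative_trapezoid(acc, t, initial=0.0)
    disp = cumulative_trapezoid(vel, t, initial=0.0)
    # Correction acc_err = c0 + c1*t chosen so the corrected final vel and disp vanish:
    #   integral over [0,T]        : c0*T     + c1*T^2/2 = vel[-1]
    #   double integral over [0,T] : c0*T^2/2 + c1*T^3/6 = disp[-1]
    A = np.array([[T, T**2 / 2.0], [T**2 / 2.0, T**3 / 6.0]])
    c0, c1 = np.linalg.solve(A, [vel[-1], disp[-1]])
    acc_fixed = acc - (c0 + c1 * t)
    vel_fixed = cumulative_trapezoid(acc_fixed, t, initial=0.0)
    disp_fixed = cumulative_trapezoid(vel_fixed, t, initial=0.0)
    return acc_fixed, vel_fixed, disp_fixed

# Hypothetical waveform: a decaying burst plus a small offset that ruins the integrals
dt = 1.0 / 2000.0
t = np.arange(0, 2.0, dt)
acc = np.sin(2 * np.pi * 50 * t) * np.exp(-3 * t) + 1e-3      # small error term
_, vel_fixed, disp_fixed = repair_and_integrate(acc, dt)
print(abs(vel_fixed[-1]) < 1e-6, abs(disp_fixed[-1]) < 1e-6)  # end conditions restored
```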

  4. Anxiety and Error Monitoring: Increased Error Sensitivity or Altered Expectations?

    ERIC Educational Resources Information Center

    Compton, Rebecca J.; Carp, Joshua; Chaddock, Laura; Fineman, Stephanie L.; Quandt, Lorna C.; Ratliff, Jeffrey B.

    2007-01-01

    This study tested the prediction that the error-related negativity (ERN), a physiological measure of error monitoring, would be enhanced in anxious individuals, particularly in conditions with threatening cues. Participants made gender judgments about faces whose expressions were either happy, angry, or neutral. Replicating prior studies, midline…

  5. FRamework Assessing Notorious Contributing Influences for Error (FRANCIE).

    PubMed

    Haney, L N

    2000-09-01

    FRamework Assessing Notorious Contributing Influences for Error (FRANCIE) is a framework and methodology for the systematic analysis, characterization, and prediction of human error. It was developed in a NASA Advanced Concepts Project by Idaho National Engineering and Environmental Laboratory, NASA Ames Research Center, Boeing, and America West Airlines, with input from United Airlines and Idaho State University. It was hypothesized that development of a comprehensive taxonomy of error-type and contributing-influences, in a framework and methodology addressing issues important for error analysis, would result in a useful tool for human error analysis. The development method included capturing expertise of human factors and domain experts in the framework, and ensuring that the approach addressed issues important for future human error analysis. This development resulted in creation of a FRANCIE taxonomy for airline maintenance, and a FRANCIE framework and approach that addresses important issues: proactive and reactive, comprehensive error-type and contributing-influences taxonomy, meaningful error reduction strategies, multilevel analyses, multiple user types, compatible with existing methods, applied in design phase or throughout system life cycle, capture of lessons learned, and ease of application. FRANCIE was designed to apply to any domain, given taxonomy refinement. This is demonstrated by its application for an aviation operations scenario for a new precision landing aid. Representative error-types and contributing-influences, two example analyses, and a case study are presented. In conclusion, FRANCIE is useful for analysis of human error, and the taxonomy is a starting point for development of taxonomies allowing application to other domains, such as spacecraft maintenance, operations, medicine, process control, and other transportation industries.

  6. Error studies for SNS Linac. Part 1: Transverse errors

    SciTech Connect

    Crandall, K.R.

    1998-12-31

    The SNS linac consists of a radio-frequency quadrupole (RFQ), a drift-tube linac (DTL), a coupled-cavity drift-tube linac (CCDTL) and a coupled-cavity linac (CCL). The RFQ and DTL are operated at 402.5 MHz; the CCDTL and CCL are operated at 805 MHz. Between the RFQ and DTL is a medium-energy beam-transport system (MEBT). This error study is concerned with the DTL, CCDTL and CCL, and each will be analyzed separately. In fact, the CCL is divided into two sections, and each of these will be analyzed separately. The types of errors considered here are those that affect the transverse characteristics of the beam. The errors that cause the beam center to be displaced from the linac axis are quad displacements and quad tilts. The errors that cause mismatches are quad gradient errors and quad rotations (roll).

  7. Peeling Away Timing Error in NetFlow Data

    NASA Astrophysics Data System (ADS)

    Trammell, Brian; Tellenbach, Bernhard; Schatzmann, Dominik; Burkhart, Martin

    In this paper, we characterize, quantify, and correct timing errors introduced into network flow data by collection and export via Cisco NetFlow version 9. We find that while some of these sources of error (clock skew, export delay) are generally implementation-dependent and known in the literature, there is an additional cyclic error of up to one second that is inherent to the design of the export protocol. We present a method for correcting this cyclic error in the presence of clock skew and export delay. In an evaluation using traffic with known timing collected from a national-scale network, we show that this method can successfully correct the cyclic error. However, there can also be other implementation-specific errors for which insufficient information remains for correction. On the routers we have deployed in our network, this limits the accuracy to about 70ms, reinforcing the point that implementation matters when conducting research on network measurement data.

  8. Disarming smiles: irrelevant happy faces slow post-error responses.

    PubMed

    Gupta, Rashmi; Deák, Gedeon O

    2015-11-01

    When we make errors, we tend to experience a negative emotional state. In addition, if our errors are witnessed by other people, we might expect those observers to respond negatively. However, little is known about how implicit social feedback like facial expressions influences error processing. We explored this using the cognitive control phenomenon of post-error slowing: the tendency to slow the response immediately following an error. Adult participants performed a difficult perceptual task: estimating which of two lines (horizontal or vertical) was longer. The background showed an irrelevant distractor face with a happy, sad, or neutral expression. Participants slowed after errors only when the subsequent distractor face was happy, but not when the subsequent distractor was sad or neutral nor when a happy face followed a correct response. This suggests that information about others' affect, even non-interactive, task-irrelevant information, has performance- and valence-dependent effects on adaptive cognitive control.

  9. Errors in visuo-haptic and haptic-haptic location matching are stable over long periods of time.

    PubMed

    Kuling, Irene A; Brenner, Eli; Smeets, Jeroen B J

    2016-05-01

    People make systematic errors when they move their unseen dominant hand to a visual target (visuo-haptic matching) or to their other unseen hand (haptic-haptic matching). Why they make such errors is still unknown. A key question in determining the reason is to what extent individual participants' errors are stable over time. To examine this, we developed a method to quantify the consistency. With this method, we studied the stability of systematic matching errors across time intervals of at least a month. Within this time period, individual subjects' matches were as consistent as one could expect on the basis of the variability in the individual participants' performance within each session. Thus individual participants make quite different systematic errors, but in similar circumstances they make the same errors across long periods of time.

  10. Error begat error: design error analysis and prevention in social infrastructure projects.

    PubMed

    Love, Peter E D; Lopez, Robert; Edwards, David J; Goh, Yang M

    2012-09-01

    Design errors contribute significantly to cost and schedule growth in social infrastructure projects and to engineering failures, which can result in accidents and loss of life. Despite considerable research addressing error causation in construction projects, design errors remain prevalent. This paper identifies the underlying conditions that contribute to design errors in social infrastructure projects (e.g. hospitals, education, law and order type buildings). A systemic model of error causation is propagated and subsequently used to develop a learning framework for design error prevention. The research suggests that a multitude of strategies should be adopted in congruence to prevent design errors from occurring and so ensure that safety and project performance are ameliorated.

  11. MEMS IMU Error Mitigation Using Rotation Modulation Technique.

    PubMed

    Du, Shuang; Sun, Wei; Gao, Yang

    2016-11-29

    Micro-electro-mechanical-systems (MEMS) inertial measurement unit (IMU) outputs are corrupted by significant sensor errors. The navigation errors of a MEMS-based inertial navigation system will therefore accumulate very quickly over time. This requires aiding from other sensors such as Global Navigation Satellite Systems (GNSS). However, it will still remain a significant challenge in the presence of GNSS outages, which are typically in urban canopies. This paper proposed a rotary inertial navigation system (INS) to mitigate navigation errors caused by MEMS inertial sensor errors when external aiding information is not available. A rotary INS is an inertial navigator in which the IMU is installed on a rotation platform. Application of proper rotation schemes can effectively cancel and reduce sensor errors. A rotary INS has the potential to significantly increase the time period that INS can bridge GNSS outages and make MEMS IMU possible to maintain longer autonomous navigation performance when there is no external aiding. In this research, several IMU rotation schemes (rotation about X-, Y- and Z-axes) are analyzed to mitigate the navigation errors caused by MEMS IMU sensor errors. As the IMU rotation induces additional sensor errors, a calibration process is proposed to remove the induced errors. Tests are further conducted with two MEMS IMUs installed on a tri-axial rotation table to verify the error mitigation by IMU rotations.
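
    A toy illustration of the rotation-modulation idea described above: spinning the IMU about its Z-axis averages a constant horizontal accelerometer bias toward zero in the navigation frame, whereas the bias along the rotation axis is unaffected. The bias values, rotation rate, and duration are hypothetical, and real schemes also handle gyro biases and the additional errors induced by the rotation itself.

```python
import numpy as np

def nav_frame_bias(bias_body, rate_dps, dt, duration):
    """Average a constant body-frame accelerometer bias into the navigation frame
    while the IMU platform rotates about the Z-axis at rate_dps (deg/s)."""
    t = np.arange(0.0, duration, dt)
    acc = np.zeros(3)
    for ti in t:
        a = np.deg2rad(rate_dps * ti)
        c, s = np.cos(a), np.sin(a)
        R = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])   # body -> nav
        acc += R @ bias_body
    return acc / t.size

bias = np.array([5e-3, -3e-3, 1e-3])           # hypothetical accelerometer bias, m/s^2
static = nav_frame_bias(bias, rate_dps=0.0, dt=0.01, duration=60.0)
rotated = nav_frame_bias(bias, rate_dps=6.0, dt=0.01, duration=60.0)  # one full turn per minute
print("static  mean bias:", static)             # x/y bias persists
print("rotated mean bias:", rotated)            # x/y bias averages toward zero; z is unchanged
```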

  12. Error-related electrocorticographic activity in humans during continuous movements

    NASA Astrophysics Data System (ADS)

    Milekovic, Tomislav; Ball, Tonio; Schulze-Bonhage, Andreas; Aertsen, Ad; Mehring, Carsten

    2012-04-01

    Brain-machine interface (BMI) devices make errors in decoding. Detecting these errors online from neuronal activity can improve BMI performance by modifying the decoding algorithm and by correcting the errors made. Here, we study the neuronal correlates of two different types of errors which can both be employed in BMI: (i) the execution error, due to inaccurate decoding of the subjects’ movement intention; (ii) the outcome error, due to not achieving the goal of the movement. We demonstrate that, in electrocorticographic (ECoG) recordings from the surface of the human brain, strong error-related neural responses (ERNRs) for both types of errors can be observed. ERNRs were present in the low and high frequency components of the ECoG signals, with both signal components carrying partially independent information. Moreover, the observed ERNRs can be used to discriminate between error types, with high accuracy (≥83%) obtained already from single electrode signals. We found ERNRs in multiple cortical areas, including motor and somatosensory cortex. As the motor cortex is the primary target area for recording control signals for a BMI, an adaptive motor BMI utilizing these error signals may not require additional electrode implants in other brain areas.

  13. MEMS IMU Error Mitigation Using Rotation Modulation Technique

    PubMed Central

    Du, Shuang; Sun, Wei; Gao, Yang

    2016-01-01

    Micro-electro-mechanical-systems (MEMS) inertial measurement unit (IMU) outputs are corrupted by significant sensor errors. The navigation errors of a MEMS-based inertial navigation system will therefore accumulate very quickly over time. This requires aiding from other sensors such as Global Navigation Satellite Systems (GNSS). However, it will still remain a significant challenge in the presence of GNSS outages, which are typically in urban canopies. This paper proposed a rotary inertial navigation system (INS) to mitigate navigation errors caused by MEMS inertial sensor errors when external aiding information is not available. A rotary INS is an inertial navigator in which the IMU is installed on a rotation platform. Application of proper rotation schemes can effectively cancel and reduce sensor errors. A rotary INS has the potential to significantly increase the time period that INS can bridge GNSS outages and make MEMS IMU possible to maintain longer autonomous navigation performance when there is no external aiding. In this research, several IMU rotation schemes (rotation about X-, Y- and Z-axes) are analyzed to mitigate the navigation errors caused by MEMS IMU sensor errors. As the IMU rotation induces additional sensor errors, a calibration process is proposed to remove the induced errors. Tests are further conducted with two MEMS IMUs installed on a tri-axial rotation table to verify the error mitigation by IMU rotations. PMID:27916852

  14. Medical error and related factors during internship and residency.

    PubMed

    Ahmadipour, Habibeh; Nahid, Mortazavi

    2015-01-01

    It is difficult to determine the real incidence of medical errors due to the lack of a precise definition of errors, as well as the failure to report them under certain circumstances. We carried out a cross-sectional study in Kerman University of Medical Sciences, Iran in 2013. The participants were selected through the census method. The data were collected using a self-administered questionnaire, which consisted of questions on the participants' demographic data and questions on the medical errors committed. The data were analysed by SPSS 19. It was found that 270 participants had committed medical errors. There was no significant difference in the frequency of errors committed by interns and residents. In the case of residents, the most common error was misdiagnosis and in that of interns, errors related to history-taking and physical examination. Considering that medical errors are common in the clinical setting, the education system should train interns and residents to prevent the occurrence of errors. In addition, the system should develop a positive attitude among them so that they can deal better with medical errors.

  15. Automatic pronunciation error detection in non-native speech: the case of vowel errors in Dutch.

    PubMed

    van Doremalen, Joost; Cucchiarini, Catia; Strik, Helmer

    2013-08-01

    This research is aimed at analyzing and improving automatic pronunciation error detection in a second language. Dutch vowels spoken by adult non-native learners of Dutch are used as a test case. A first study on Dutch pronunciation by L2 learners with different L1s revealed that vowel pronunciation errors are relatively frequent and often concern subtle acoustic differences between the realization and the target sound. In a second study, automatic pronunciation error detection experiments were conducted to compare existing measures with a metric that takes the observed error patterns into account in order to capture the relevant acoustic differences. The results of the two studies show that error patterns carry information that can be usefully employed in weighted automatic measures of pronunciation quality. In addition, combining such a weighted metric with existing measures improves the equal error rate by 6.1 percentage points, from 0.297 for the Goodness of Pronunciation (GOP) algorithm to 0.236.
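
    A rough sketch of a GOP-style score is given below, assuming per-frame log-likelihoods are available from an acoustic model; the array shapes, phone inventory and data are illustrative, and the error-pattern weighting described in the abstract is only hinted at in the final comment.

```python
import numpy as np

# Hedged sketch of a Goodness of Pronunciation (GOP)-style score computed
# from per-frame log-likelihoods for one aligned segment.

def gop_score(frame_loglik, target_phone):
    """frame_loglik: (n_frames, n_phones) log-likelihoods for one segment."""
    n_frames = frame_loglik.shape[0]
    target = frame_loglik[:, target_phone].sum()   # forced-alignment path (simplified)
    best = frame_loglik.max(axis=1).sum()          # unconstrained best path
    return (target - best) / n_frames              # <= 0; closer to 0 is better

rng = np.random.default_rng(0)
loglik = rng.normal(-5.0, 1.0, size=(40, 45))      # 40 frames, 45 phones (made up)
print(gop_score(loglik, target_phone=12))
# A weighted variant could scale the score per phone pair using the observed
# error patterns, which is the direction the abstract describes.
```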

  16. SU-E-I-83: Error Analysis of Multi-Modality Image-Based Volumes of Rodent Solid Tumors Using a Preclinical Multi-Modality QA Phantom

    SciTech Connect

    Lee, Y; Fullerton, G; Goins, B

    2015-06-15

    Purpose: In our previous study a preclinical multi-modality quality assurance (QA) phantom that contains five tumor-simulating test objects with 2, 4, 7, 10 and 14 mm diameters was developed for accurate tumor size measurement by researchers during cancer drug development and testing. This study analyzed the errors during tumor volume measurement from preclinical magnetic resonance (MR), micro-computed tomography (micro-CT) and ultrasound (US) images acquired in a rodent tumor model using the preclinical multi-modality QA phantom. Methods: Using preclinical 7-Tesla MR, US and micro-CT scanners, images were acquired of subcutaneous SCC4 tumor xenografts in nude rats (3–4 rats per group; 5 groups) along with the QA phantom using the same imaging protocols. After tumors were excised, in-air micro-CT imaging was performed to determine reference tumor volume. Volumes measured for the rat tumors and phantom test objects were calculated using the formula V = (π/6)*a*b*c, where a, b and c are the maximum diameters in three perpendicular dimensions determined by the three imaging modalities. Then linear regression analysis was performed to compare image-based tumor volumes with the reference tumor volume and the known test object volume for the rats and the phantom, respectively. Results: The slopes of regression lines for in-vivo tumor volumes measured by the three imaging modalities were 1.021, 1.101 and 0.862 for MRI, micro-CT and US respectively. For the phantom, the slopes were 0.9485, 0.9971 and 0.9734 for MRI, micro-CT and US respectively. Conclusion: For both animal and phantom studies, random and systematic errors were observed. Random errors were observer-dependent and systematic errors were mainly due to the selected imaging protocols and/or measurement method. In the animal study, there were additional systematic errors attributed to the ellipsoidal assumption for tumor shape. The systematic errors measured using the QA phantom need to be taken into account to reduce measurement errors.
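
    As a small illustration of the volume formula and the regression step described above, the sketch below computes ellipsoidal volumes from three orthogonal diameters and fits a through-origin regression slope; all numbers are made up, and the through-origin fit is a simplification of the linear regression reported.

```python
import numpy as np

# Ellipsoidal volume from three orthogonal maximum diameters, and a simple
# regression of image-based against reference volumes.

def ellipsoid_volume(a, b, c):
    """V = (pi/6) * a * b * c, with a, b, c the maximum diameters (mm)."""
    return np.pi / 6.0 * a * b * c

# Hypothetical paired measurements: image-based vs. reference volumes (mm^3)
reference = np.array([ellipsoid_volume(d, d, d) for d in (2, 4, 7, 10, 14)])
image_based = reference * 1.02 + np.random.default_rng(1).normal(0, 5, 5)

# Slope of a regression line through the origin as the systematic scale error
slope = np.sum(image_based * reference) / np.sum(reference**2)
print("regression slope:", slope)   # ~1.0 means no systematic volume bias
```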

  17. Children's Scale Errors with Tools

    ERIC Educational Resources Information Center

    Casler, Krista; Eshleman, Angelica; Greene, Kimberly; Terziyan, Treysi

    2011-01-01

    Children sometimes make "scale errors," attempting to interact with tiny object replicas as though they were full size. Here, we demonstrate that instrumental tools provide special insight into the origins of scale errors and, moreover, into the broader nature of children's purpose-guided reasoning and behavior with objects. In Study 1, 1.5- to…

  18. Dual Processing and Diagnostic Errors

    ERIC Educational Resources Information Center

    Norman, Geoff

    2009-01-01

    In this paper, I review evidence from two theories in psychology relevant to diagnosis and diagnostic errors. "Dual Process" theories of thinking, frequently mentioned with respect to diagnostic error, propose that categorization decisions can be made with either a fast, unconscious, contextual process called System 1 or a slow, analytical,…

  19. Explaining Errors in Children's Questions

    ERIC Educational Resources Information Center

    Rowland, Caroline F.

    2007-01-01

    The ability to explain the occurrence of errors in children's speech is an essential component of successful theories of language acquisition. The present study tested some generativist and constructivist predictions about error on the questions produced by ten English-learning children between 2 and 5 years of age. The analyses demonstrated that,…

  20. Error Estimates for Mixed Methods.

    DTIC Science & Technology

    1979-03-01

    This paper presents abstract error estimates for mixed methods for the approximate solution of elliptic boundary value problems. These estimates are then applied to obtain quasi-optimal error estimates in the usual Sobolev norms for four examples: three mixed methods for the biharmonic problem and a mixed method for 2nd order elliptic problems. (Author)

  1. Operational Interventions to Maintenance Error

    NASA Technical Reports Server (NTRS)

    Kanki, Barbara G.; Walter, Diane; Dulchinos, VIcki

    1997-01-01

    A significant proportion of aviation accidents and incidents are known to be tied to human error. However, research of flight operational errors has shown that so-called pilot error often involves a variety of human factors issues and not a simple lack of individual technical skills. In aircraft maintenance operations, there is similar concern that maintenance errors which may lead to incidents and accidents are related to a large variety of human factors issues. Although maintenance error data and research are limited, industry initiatives involving human factors training in maintenance have become increasingly accepted as one type of maintenance error intervention. Conscientious efforts have been made in re-inventing the 'team' concept for maintenance operations and in tailoring programs to fit the needs of technical operations. Nevertheless, there remains a dual challenge: 1) to develop human factors interventions which are directly supported by reliable human error data, and 2) to integrate human factors concepts into the procedures and practices of everyday technical tasks. In this paper, we describe several varieties of human factors interventions and focus on two specific alternatives which target problems related to procedures and practices; namely, 1) structured on-the-job training and 2) procedure re-design. We hope to demonstrate that the key to leveraging the impact of these solutions comes from focused interventions; that is, interventions which are derived from a clear understanding of specific maintenance errors, their operational context and human factors components.

  2. Error Correction, Revision, and Learning

    ERIC Educational Resources Information Center

    Truscott, John; Hsu, Angela Yi-ping

    2008-01-01

    Previous research has shown that corrective feedback on an assignment helps learners reduce their errors on that assignment during the revision process. Does this finding constitute evidence that learning resulted from the feedback? Differing answers play an important role in the ongoing debate over the effectiveness of error correction,…

  3. Human Error: A Concept Analysis

    NASA Technical Reports Server (NTRS)

    Hansen, Frederick D.

    2007-01-01

    Human error is the subject of research in almost every industry and profession of our times. This term is part of our daily language and intuitively understood by most people; however, it would be premature to assume that everyone's understanding of human error is the same. For example, human error is used to describe the outcome or consequence of human action, the causal factor of an accident, deliberate violations, and the actual action taken by a human being. As a result, researchers rarely agree on either a specific definition or how to prevent human error. The purpose of this article is to explore the specific concept of human error using Concept Analysis as described by Walker and Avant (1995). The concept of human error is examined as currently used in the literature of a variety of industries and professions. Defining attributes and examples of model, borderline, and contrary cases are described. The antecedents and consequences of human error are also discussed and a definition of human error is offered.

  4. Application of human error analysis to aviation and space operations

    SciTech Connect

    Nelson, W.R.

    1998-03-01

    For the past several years at the Idaho National Engineering and Environmental Laboratory (INEEL), the authors have been working to apply methods of human error analysis to the design of complex systems. They have focused on adapting human reliability analysis (HRA) methods that were developed for Probabilistic Safety Assessment (PSA) for application to system design. They are developing methods so that human errors can be systematically identified during system design, the potential consequences of each error can be assessed, and potential corrective actions (e.g. changes to system design or procedures) can be identified. The primary vehicle the authors have used to develop and apply these methods has been a series of projects sponsored by the National Aeronautics and Space Administration (NASA) to apply human error analysis to aviation operations. They are currently adapting their methods and tools of human error analysis to the domain of air traffic management (ATM) systems. Under the NASA-sponsored Advanced Air Traffic Technologies (AATT) program they are working to address issues of human reliability in the design of ATM systems to support the development of a free flight environment for commercial air traffic in the US. They are also currently testing the application of their human error analysis approach for space flight operations. They have developed a simplified model of the critical habitability functions for the space station Mir, and have used this model to assess the effects of system failures and human errors that have occurred in the wake of the collision incident last year. They are developing an approach so that lessons learned from Mir operations can be systematically applied to design and operation of long-term space missions such as the International Space Station (ISS) and the manned Mars mission.

  5. Onorbit IMU alignment error budget

    NASA Technical Reports Server (NTRS)

    Corson, R. W.

    1980-01-01

    The Star Tracker, Crew Optical Alignment Sight (COAS), and Inertial Measurement Unit (IMU) form a complex navigation system with a multitude of error sources. A complete list of the system errors is presented. The errors were combined in a rational way to yield an estimate of the IMU alignment accuracy for STS-1. The expected standard deviation in the IMU alignment error for STS-1 type alignments was determined to be 72 arc seconds per axis for star tracker alignments and 188 arc seconds per axis for COAS alignments. These estimates are based on current knowledge of the star tracker, COAS, IMU, and navigation base error specifications, and were partially verified by preliminary Monte Carlo analysis.
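
    A common way to combine such a list of independent 1-sigma error contributions into a per-axis budget is a root-sum-square, sketched below with purely illustrative contributor names and magnitudes (these are not the STS-1 values, and the abstract does not state the exact combination rule used).

```python
import math

# Root-sum-square combination of independent 1-sigma error contributors
# into a single per-axis alignment error budget.

contributors_arcsec = {
    "star tracker noise": 30.0,                        # hypothetical values
    "IMU gyro drift over alignment interval": 40.0,
    "navigation base misalignment": 25.0,
    "COAS sighting error": 45.0,
}

total = math.sqrt(sum(v**2 for v in contributors_arcsec.values()))
print(f"combined 1-sigma alignment error: {total:.0f} arcsec per axis")
```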

  6. Angle interferometer cross axis errors

    SciTech Connect

    Bryan, J.B.; Carter, D.L.; Thompson, S.L.

    1994-01-01

    Angle interferometers are commonly used to measure surface plate flatness. An error can exist when the centerline of the double corner cube mirror assembly is not square to the surface plate and the guide bar for the mirror sled is curved. Typical errors can be one to two microns per meter. A similar error can exist in the calibration of rotary tables when the centerline of the double corner cube mirror assembly is not square to the axes of rotation of the angle calibrator and the calibrator axis is not parallel to the rotary table axis. Commercial double corner cube assemblies typically have non-parallelism errors of ten milli-radians between their centerlines and their sides and similar values for non-squareness between their centerlines and end surfaces. The authors have developed a simple method for measuring these errors and correcting them by remachining the reference surfaces.

  7. Angle interferometer cross axis errors

    NASA Astrophysics Data System (ADS)

    Bryan, J. B.; Carter, D. L.; Thompson, S. L.

    1994-01-01

    Angle interferometers are commonly used to measure surface plate flatness. An error can exist when the centerline of the double corner cube mirror assembly is not square to the surface plate and the guide bar for the mirror sled is curved. Typical errors can be one to two microns per meter. A similar error can exist in the calibration of rotary tables when the centerline of the double corner cube mirror assembly is not square to the axes of rotation of the angle calibrator and the calibrator axis is not parallel to the rotary table axis. Commercial double corner cube assemblies typically have non-parallelism errors of ten milli-radians between their centerlines and their sides and similar values for non-squareness between their centerlines and end surfaces. The authors have developed a simple method for measuring these errors and correcting them.

  8. A theory of human error

    NASA Technical Reports Server (NTRS)

    Mcruer, D. T.; Clement, W. F.; Allen, R. W.

    1981-01-01

    Human errors tend to be treated in terms of clinical and anecdotal descriptions, from which remedial measures are difficult to derive. Correction of the sources of human error requires an attempt to reconstruct underlying and contributing causes of error from the circumstantial causes cited in official investigative reports. A comprehensive analytical theory of the cause-effect relationships governing propagation of human error is indispensable to a reconstruction of the underlying and contributing causes. A validated analytical theory of the input-output behavior of human operators involving manual control, communication, supervisory, and monitoring tasks which are relevant to aviation, maritime, automotive, and process control operations is highlighted. This theory of behavior, both appropriate and inappropriate, provides an insightful basis for investigating, classifying, and quantifying the needed cause-effect relationships governing propagation of human error.

  9. [Medication errors and medication reconciliation from a hospital pharmacist's perspective].

    PubMed

    Amann, Steffen; Kantelhardt, Pamela

    2012-01-01

    To reduce medication errors and other drug-related problems, their systematic discovery, documentation and evaluation is essential. The web-based documentation database ADKA-DokuPIK enables both the documentation and the publication of annotated individual cases and, moreover, systematic errors or accumulations of risk drugs may be determined. Medication reconciliation is another important component to increase safety in drug therapy. Hospital pharmacists may support and significantly improve this process. In Germany some initial information from various projects is available. Medication reconciliation performed by hospital pharmacists may significantly increase the completeness and accuracy of medication regimens. Patient counselling together with the necessary drug supply at discharge improves patients' knowledge, closes supply gaps and improves the satisfaction of all parties.

  10. Effects of Setup Errors and Shape Changes on Breast Radiotherapy

    SciTech Connect

    Mourik, Anke van; Kranen, Simon van; Hollander, Suzanne den; Sonke, Jan-Jakob; Herk, Marcel van; Vliet-Vroegindeweij, Corine van

    2011-04-01

    Purpose: The purpose of the present study was to quantify the robustness of the dose distributions from three whole-breast radiotherapy (RT) techniques involving different levels of intensity modulation against whole patient setup inaccuracies and breast shape changes. Methods and Materials: For 19 patients (one computed tomography scan and five cone beam computed tomography scans each), three treatment plans were made (wedge, simple intensity-modulated RT [IMRT], and full IMRT). For each treatment plan, four dose distributions were calculated. The first dose distribution was the original plan. The other three included the effects of patient setup errors (rigid displacement of the bony anatomy) or breast errors (e.g., rotations and shape changes of the breast with respect to the bony anatomy), or both, and were obtained through deformable image registration and dose accumulation. Subsequently, the effects of the plan type and error sources on target volume coverage, mean lung dose, and excess dose were determined. Results: Systematic errors of 1-2 mm and random errors of 2-3 mm (standard deviation) were observed for both patient- and breast-related errors. Planning techniques involving glancing fields (wedge and simple IMRT) were primarily affected by patient errors (≈6% loss of coverage near the dorsal field edge and ≈2% near the skin). In contrast, plan deterioration due to breast errors was primarily observed in planning techniques without glancing fields (full IMRT, ≈2% loss of coverage near the dorsal field edge and ≈4% near the skin). Conclusion: The influences of patient and breast errors on the dose distributions are comparable in magnitude for whole breast RT plans, including glancing open fields, rendering simple IMRT the preferred technique. Dose distributions from planning techniques without glancing open fields were more seriously affected by shape changes of the breast, demanding specific attention in partial breast irradiation.

  11. Flux control coefficients determined by inhibitor titration: the design and analysis of experiments to minimize errors.

    PubMed Central

    Small, J R

    1993-01-01

    This paper is a study into the effects of experimental error on the estimated values of flux control coefficients obtained using specific inhibitors. Two possible techniques for analysing the experimental data are compared: a simple extrapolation method (the so-called graph method) and a non-linear function fitting method. For these techniques, the sources of systematic errors are identified and the effects of systematic and random errors are quantified, using both statistical analysis and numerical computation. It is shown that the graph method is very sensitive to random errors and, under all conditions studied, that the fitting method, even under conditions where the assumptions underlying the fitted function do not hold, outperformed the graph method. Possible ways of designing experiments to minimize the effects of experimental errors are analysed and discussed. PMID:8257434

  12. Elimination of Abbe error method of large-scale laser comparator

    NASA Astrophysics Data System (ADS)

    Li, Jianshuang; Zhang, Manshan; He, Mingzhao; Miao, Dongjing; Deng, Xiangrui; Li, Lianfu

    2015-02-01

    Abbe error is the inherent systematic error in all large-scale laser comparators because the standard laser axis is not in line with the measured optical axis. Any angular error of the moving platform will result in an offset between the measured optical axis and the standard laser axis. This paper describes an algorithm that can be used to calculate the displacement of an equivalent standard laser interferometer and to eliminate the Abbe error. The algorithm can also be used to reduce the Abbe error of a large-scale laser comparator. Experimental results indicated that the uncertainty of displacement measurement due to Abbe error can be effectively reduced when the position error of the measured optical axis is taken into account.
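
    The sketch below shows the basic geometry behind an Abbe-error correction: an angular error of the moving platform, multiplied by the offset between the standard laser axis and the measured optical axis, gives the displacement error to subtract from the raw reading. The offset and pitch values are illustrative assumptions, not the algorithm or data from the paper.

```python
import numpy as np

# Abbe error: displacement error = axis offset * tan(angular error).

offset_m = 0.15                       # hypothetical Abbe offset between axes (m)
pitch_rad = np.deg2rad(5.0 / 3600.0)  # 5 arcsec platform pitch error

abbe_error_m = offset_m * np.tan(pitch_rad)
print(f"Abbe error: {abbe_error_m * 1e9:.1f} nm")

# A comparator correction would subtract this term from the raw reading,
# using the angular error measured at each platform position.
corrected = lambda raw_m, theta: raw_m - offset_m * np.tan(theta)
```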

  13. Errors, error detection, error correction and hippocampal-region damage: data and theories.

    PubMed

    MacKay, Donald G; Johnson, Laura W

    2013-11-01

    This review and perspective article outlines 15 observational constraints on theories of errors, error detection, and error correction, and their relation to hippocampal-region (HR) damage. The core observations come from 10 studies with H.M., an amnesic with cerebellar and HR damage but virtually no neocortical damage. Three studies examined the detection of errors planted in visual scenes (e.g., a bird flying in a fish bowl in a school classroom) and sentences (e.g., I helped themselves to the birthday cake). In all three experiments, H.M. detected reliably fewer errors than carefully matched memory-normal controls. Other studies examined the detection and correction of self-produced errors, with controls for comprehension of the instructions, impaired visual acuity, temporal factors, motoric slowing, forgetting, excessive memory load, lack of motivation, and deficits in visual scanning or attention. In these studies, H.M. corrected reliably fewer errors than memory-normal and cerebellar controls, and his uncorrected errors in speech, object naming, and reading aloud exhibited two consistent features: omission and anomaly. For example, in sentence production tasks, H.M. omitted one or more words in uncorrected encoding errors that rendered his sentences anomalous (incoherent, incomplete, or ungrammatical) reliably more often than controls. Besides explaining these core findings, the theoretical principles discussed here explain H.M.'s retrograde amnesia for once-familiar episodic and semantic information; his anterograde amnesia for novel information; his deficits in visual cognition, sentence comprehension, sentence production, sentence reading, and object naming; and effects of aging on his ability to read isolated low-frequency words aloud. These theoretical principles also explain a wide range of other data on error detection and correction and generate new predictions for future tests.

  14. Mars gravitational field estimation error

    NASA Technical Reports Server (NTRS)

    Compton, H. R.; Daniels, E. F.

    1972-01-01

    The error covariance matrices associated with a weighted least-squares differential correction process have been analyzed for accuracy in determining the gravitational coefficients through degree and order five in the Mars gravitational potential function. The results are presented in terms of standard deviations for the assumed estimated parameters. The covariance matrices were calculated by assuming Doppler tracking data from a Mars orbiter, a priori statistics for the estimated parameters, and model error uncertainties for tracking-station locations, the Mars ephemeris, the astronomical unit, the Mars gravitational constant (G sub M), and the gravitational coefficients of degrees six and seven. Model errors were treated by using the concept of consider parameters.

  15. Stochastic Models of Human Errors

    NASA Technical Reports Server (NTRS)

    Elshamy, Maged; Elliott, Dawn M. (Technical Monitor)

    2002-01-01

    Humans play an important role in the overall reliability of engineering systems. More often than not, accidents and system failures are traced to human errors. Therefore, in order to have a meaningful system risk analysis, the reliability of the human element must be taken into consideration. Describing the human error process by mathematical models is a key to analyzing contributing factors. The objective of this research effort is therefore to establish stochastic models, substantiated by a sound theoretical foundation, to address the occurrence of human errors in the processing of the Space Shuttle.

  16. Error bounds in cascading regressions

    USGS Publications Warehouse

    Karlinger, M.R.; Troutman, B.M.

    1985-01-01

    Cascading regressions is a technique for predicting a value of a dependent variable when no paired measurements exist to perform a standard regression analysis. Biases in coefficients of a cascaded-regression line as well as error variance of points about the line are functions of the correlation coefficient between dependent and independent variables. Although this correlation cannot be computed because of the lack of paired data, bounds can be placed on errors through the required properties of the correlation coefficient. The potential mean-squared error of a cascaded-regression prediction can be large, as illustrated through an example using geomorphologic data. © 1985 Plenum Publishing Corporation.

  17. A cross-linguistic speech error investigation of functional complexity.

    PubMed

    Wells-Jensen, Sheri

    2007-03-01

    This work is a systematic, cross-linguistic examination of speech errors in English, Hindi, Japanese, Spanish and Turkish. It first describes a methodology for the generation of parallel corpora of error data, then uses these data to examine three general hypotheses about the relationship between language structure and the speech production system. All of the following hypotheses were supported by the data. Languages are equally complex. No overall differences were found in the numbers of errors made by speakers of the five languages in the study. Languages are processed in similar ways. English-based generalizations about language production were tested to see to what extent they would hold true across languages. It was found that, to a large degree, languages follow similar patterns. However, the relative numbers of phonological anticipations and perseverations in other languages did not follow the English pattern. Languages differ in that speech errors tend to cluster around loci of complexity within each language. Languages such as Turkish and Spanish, which have more inflectional morphology, exhibit more errors involving inflected forms, while languages such as Japanese, with rich systems of closed-class forms, tend to have more errors involving closed-class items.

  18. Exploring Discretization Error in Simulation-Based Aerodynamic Databases

    NASA Technical Reports Server (NTRS)

    Aftosmis, Michael J.; Nemec, Marian

    2010-01-01

    This work examines the level of discretization error in simulation-based aerodynamic databases and introduces strategies for error control. Simulations are performed using a parallel, multi-level Euler solver on embedded-boundary Cartesian meshes. Discretization errors in user-selected outputs are estimated using the method of adjoint-weighted residuals, and we use adaptive mesh refinement to reduce these errors to specified tolerances. Using this framework, we examine the behavior of discretization error throughout a token database computed for a NACA 0012 airfoil consisting of 120 cases. We compare the cost and accuracy of two approaches for aerodynamic database generation. In the first approach, mesh adaptation is used to compute all cases in the database to a prescribed level of accuracy. The second approach conducts all simulations using the same computational mesh without adaptation. We quantitatively assess the error landscape and computational costs in both databases. This investigation highlights sensitivities of the database under a variety of conditions. The presence of transonic shocks or the stiffness in the governing equations near the incompressible limit are shown to dramatically increase discretization error, requiring additional mesh resolution to control. Results show that such pathologies lead to error levels that vary by over a factor of 40 when using a fixed mesh throughout the database. Alternatively, controlling this sensitivity through mesh adaptation leads to mesh sizes which span two orders of magnitude. We propose strategies to minimize simulation cost in sensitive regions and discuss the role of error estimation in database quality.
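
    The paper estimates discretization error with adjoint-weighted residuals; as a simpler stand-in that conveys the same idea of output-based error estimation, the sketch below applies Richardson extrapolation to an output computed on two mesh levels. The lift-coefficient values and convergence order are illustrative assumptions, not results from the paper.

```python
import numpy as np

# Richardson extrapolation: estimate the discretization error remaining in a
# fine-mesh output from the change between two mesh levels.

def richardson_error(f_coarse, f_fine, refinement_ratio=2.0, order=2.0):
    """Estimated discretization error of the fine-mesh output."""
    return (f_fine - f_coarse) / (refinement_ratio**order - 1.0)

cl_coarse, cl_fine = 0.3412, 0.3397      # hypothetical lift coefficients
err = richardson_error(cl_coarse, cl_fine)
print(f"estimated fine-mesh error: {err:+.2e}")
print(f"extrapolated value:        {cl_fine + err:.5f}")
```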

  19. Radiometric error and re-calibration of the MGS TES spectra

    NASA Astrophysics Data System (ADS)

    Pankine, Alexey A.

    2016-12-01

    Several sources of systematic error were identified in the spectra of the Thermal Emission Spectrometer (TES) onboard the Mars Global Surveyor (MGS) spacecraft during its mission. Some of these errors were corrected; others still remain and contaminate the spectra. One of the most significant remaining errors is a time-variable systematic radiometric error. This error significantly affects nighttime and polar spectra, and spectra of the Martian limb. The existence of this error hampered analysis of roughly half of the data collected by the TES spectrometer. The error arises due to a periodic sampling error of TES interferograms, which is a common problem in Fourier-transform interferometers. The error negatively affects calibrated TES spectra in two ways: it introduces an error into estimates of the Instrument Response Functions (IRF) and the instrument's radiances that are used to calibrate TES spectra, and it introduces an error into the TES spectra themselves. This paper presents a new approach to calibrating TES spectra that enables removing the error from the calibration functions. The new approach utilizes long-term averages of uncalibrated TES spectra of deep space to estimate the true shape of the TES IRF and its dependence on instrument temperature. This, together with a parameterization of the radiometric error spectral shape, enables removing the error from the calibration. Examples of re-calibrated spectra are presented. The largest improvement in the quality of the spectra is observed for nighttime and polar spectra, and spectra of the Martian limb. Re-calibration would significantly improve retrievals of aerosol abundances and surface temperatures from these spectra.

  20. Highly porous thermal protection materials: Modelling and prediction of the methodical experimental errors

    NASA Astrophysics Data System (ADS)

    Cherepanov, Valery V.; Alifanov, Oleg M.; Morzhukhina, Alena V.; Budnik, Sergey A.

    2016-11-01

    The formation mechanisms and the main factors affecting the systematic error of thermocouples were investigated. According to the results of experimental studies and mathematical modelling, it was established that in highly porous heat-resistant materials for aerospace applications the thermocouple errors are determined by two competing mechanisms, with the errors correlated with the difference between the radiative and conductive heat fluxes. A comparative analysis was carried out and some features of the methodical error formation related to the distance from the heated surface were established.

  1. Error-trellis syndrome decoding techniques for convolutional codes

    NASA Technical Reports Server (NTRS)

    Reed, I. S.; Truong, T. K.

    1985-01-01

    An error-trellis syndrome decoding technique for convolutional codes is developed. This algorithm is then applied to the entire class of systematic convolutional codes and to the high-rate, Wyner-Ash convolutional codes. A special example of the one-error-correcting Wyner-Ash code, a rate 3/4 code, is treated. The error-trellis syndrome decoding method applied to this example shows in detail how much more efficient syndrome decoding is than Viterbi decoding if applied to the same problem. For standard Viterbi decoding, 64 states are required, whereas in the example only 7 states are needed. Also, within the 7 states required for decoding, many fewer transitions are needed between the states.

  2. Error-trellis Syndrome Decoding Techniques for Convolutional Codes

    NASA Technical Reports Server (NTRS)

    Reed, I. S.; Truong, T. K.

    1984-01-01

    An error-trellis syndrome decoding technique for convolutional codes is developed. This algorithm is then applied to the entire class of systematic convolutional codes and to the high-rate, Wyner-Ash convolutional codes. A special example of the one-error-correcting Wyner-Ash code, a rate 3/4 code, is treated. The error-trellis syndrome decoding method applied to this example shows in detail how much more efficient syndrome decoding is than Viterbi decoding if applied to the same problem. For standard Viterbi decoding, 64 states are required, whereas in the example only 7 states are needed. Also, within the 7 states required for decoding, many fewer transitions are needed between the states.

  3. Learning (from) the errors of a systems biology model

    NASA Astrophysics Data System (ADS)

    Engelhardt, Benjamin; Frőhlich, Holger; Kschischo, Maik

    2016-02-01

    Mathematical modelling is a labour intensive process involving several iterations of testing on real data and manual model modifications. In biology, the domain knowledge guiding model development is in many cases itself incomplete and uncertain. A major problem in this context is that biological systems are open. Missed or unknown external influences as well as erroneous interactions in the model could thus lead to severely misleading results. Here we introduce the dynamic elastic-net, a data driven mathematical method which automatically detects such model errors in ordinary differential equation (ODE) models. We demonstrate for real and simulated data, how the dynamic elastic-net approach can be used to automatically (i) reconstruct the error signal, (ii) identify the target variables of model error, and (iii) reconstruct the true system state even for incomplete or preliminary models. Our work provides a systematic computational method facilitating modelling of open biological systems under uncertain knowledge.

  4. Aging transition by random errors

    PubMed Central

    Sun, Zhongkui; Ma, Ning; Xu, Wei

    2017-01-01

    In this paper, the effects of random errors on oscillating behavior have been studied theoretically and numerically in a prototypical coupled nonlinear oscillator. Two kinds of noise have been employed to represent the measurement errors accompanying the parameter specifying the distance from a Hopf bifurcation in the Stuart-Landau model. It has been demonstrated that when the random errors are uniform random noise, changing the noise intensity can effectively increase the robustness of the system. When the random errors are normal random noise, increasing the variance can also enhance the robustness of the system, provided that the probability that an aging transition occurs reaches a certain threshold; the opposite conclusion is obtained when the probability is below the threshold. These findings provide an alternative candidate for controlling the critical value of the aging transition in a coupled oscillator system composed, in practice, of active and inactive oscillators. PMID:28198430
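
    A hedged sketch of the class of model described in the abstract is given below: globally coupled Stuart-Landau oscillators with a fraction of inactive units and random errors added to the bifurcation parameter. The parameter values, noise model and integration scheme are illustrative choices, not those of the paper.

```python
import numpy as np

# Globally coupled Stuart-Landau oscillators; a fraction is inactive (a < 0)
# and "random errors" perturb the bifurcation parameter a.

rng = np.random.default_rng(2)
N, ratio_inactive = 100, 0.6
a = np.where(np.arange(N) < ratio_inactive * N, -1.0, 1.0)   # inactive vs active
a = a + rng.uniform(-0.2, 0.2, N)                             # random parameter errors
omega, K, dt, steps = 3.0, 4.0, 0.01, 20000

z = rng.normal(0, 0.1, N) + 1j * rng.normal(0, 0.1, N)
for _ in range(steps):                                        # explicit Euler integration
    mean_field = z.mean()
    dz = (a + 1j * omega - np.abs(z)**2) * z + K * (mean_field - z)
    z = z + dt * dz

# An order parameter |<z>| near zero indicates loss of global oscillation,
# i.e. the aging transition.
print("order parameter |<z>| =", abs(z.mean()))
```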

  5. Robust characterization of leakage errors

    NASA Astrophysics Data System (ADS)

    Wallman, Joel J.; Barnhill, Marie; Emerson, Joseph

    2016-04-01

    Leakage errors arise when the quantum state leaks out of some subspace of interest, for example, the two-level subspace of a multi-level system defining a computational ‘qubit’, the logical code space of a quantum error-correcting code, or a decoherence-free subspace. Leakage errors pose a distinct challenge to quantum control relative to the more well-studied decoherence errors and can be a limiting factor to achieving fault-tolerant quantum computation. Here we present a scalable and robust randomized benchmarking protocol for quickly estimating the leakage rate due to an arbitrary Markovian noise process on a larger system. We illustrate the reliability of the protocol through numerical simulations.

  6. Static Detection of Disassembly Errors

    SciTech Connect

    Krishnamoorthy, Nithya; Debray, Saumya; Fligg, Alan K

    2009-10-13

    Static disassembly is a crucial first step in reverse engineering executable files, and there is a considerable body of work in reverse-engineering of binaries, as well as areas such as semantics-based security analysis, that assumes that the input executable has been correctly disassembled. However, disassembly errors, e.g., arising from binary obfuscations, can render this assumption invalid. This work describes a machine-learning-based approach, using decision trees, for statically identifying possible errors in a static disassembly; such potential errors may then be examined more closely, e.g., using dynamic analyses. Experimental results using a variety of input executables indicate that our approach performs well, correctly identifying most disassembly errors with relatively few false positives.

  7. Prospective errors determine motor learning

    PubMed Central

    Takiyama, Ken; Hirashima, Masaya; Nozaki, Daichi

    2015-01-01

    Diverse features of motor learning have been reported by numerous studies, but no single theoretical framework concurrently accounts for these features. Here, we propose a model for motor learning to explain these features in a unified way by extending a motor primitive framework. The model assumes that the recruitment pattern of motor primitives is determined by the predicted movement error of an upcoming movement (prospective error). To validate this idea, we perform a behavioural experiment to examine the model’s novel prediction: after experiencing an environment in which the movement error is more easily predictable, subsequent motor learning should become faster. The experimental results support our prediction, suggesting that the prospective error might be encoded in the motor primitives. Furthermore, we demonstrate that this model has a strong explanatory power to reproduce a wide variety of motor-learning-related phenomena that have been separately explained by different computational models. PMID:25635628

  8. Aging transition by random errors

    NASA Astrophysics Data System (ADS)

    Sun, Zhongkui; Ma, Ning; Xu, Wei

    2017-02-01

    In this paper, the effects of random errors on oscillating behavior have been studied theoretically and numerically in a prototypical coupled nonlinear oscillator. Two kinds of noise have been employed to represent the measurement errors accompanying the parameter specifying the distance from a Hopf bifurcation in the Stuart-Landau model. It has been demonstrated that when the random errors are uniform random noise, changing the noise intensity can effectively increase the robustness of the system. When the random errors are normal random noise, increasing the variance can also enhance the robustness of the system, provided that the probability that an aging transition occurs reaches a certain threshold; the opposite conclusion is obtained when the probability is below the threshold. These findings provide an alternative candidate for controlling the critical value of the aging transition in a coupled oscillator system composed, in practice, of active and inactive oscillators.

  9. Algorithmic Error Correction of Impedance Measuring Sensors

    PubMed Central

    Starostenko, Oleg; Alarcon-Aquino, Vicente; Hernandez, Wilmar; Sergiyenko, Oleg; Tyrsa, Vira

    2009-01-01

    This paper describes novel design concepts and some advanced techniques proposed for increasing the accuracy of low-cost impedance measuring devices without reducing operational speed. The proposed structural method for algorithmic error correction and the iterating correction method provide linearization of the transfer functions of the measuring sensor and the signal conditioning converter, which introduce the principal additive and relative measurement errors. Some measuring systems have been implemented in order to estimate in practice the performance of the proposed methods. In particular, a measuring system for the analysis of C-V and G-V characteristics has been designed and constructed. It has been tested during technological process control of charge-coupled device (CCD) manufacturing. The obtained results are discussed in order to define a reasonable range of application of the methods, their utility, and their performance. PMID:22303177

  10. Detecting Soft Errors in Stencil based Computations

    SciTech Connect

    Sharma, V.; Gopalkrishnan, G.; Bronevetsky, G.

    2015-05-06

    Given the growing emphasis on system resilience, it is important to develop software-level error detectors that help trap hardware-level faults with reasonable accuracy while minimizing false alarms as well as the performance overhead introduced. We present a technique that approaches this idea by taking stencil computations as our target, and synthesizing detectors based on machine learning. In particular, we employ linear regression to generate computationally inexpensive models which form the basis for error detection. Our technique has been incorporated into a new open-source library called SORREL. In addition to reporting encouraging experimental results, we demonstrate techniques that help reduce the size of training data. We also discuss the efficacy of various detectors synthesized, as well as our future plans.
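
    The sketch below illustrates the idea in the abstract with a toy 5-point stencil: fit an inexpensive linear model that predicts each updated point from its inputs, then flag unusually large residuals as possible soft errors. The stencil, threshold and injected fault are illustrative assumptions; the actual SORREL detectors are not reproduced here.

```python
import numpy as np

# Linear-regression-based soft-error detector for a toy stencil computation.

rng = np.random.default_rng(3)
u = rng.normal(size=(64, 64))

# 5-point Jacobi-style stencil update (the "true" computation)
def stencil(u):
    return 0.25 * (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
                   np.roll(u, 1, 1) + np.roll(u, -1, 1))

v = stencil(u)
v_corrupt = v.copy()
v_corrupt[10, 20] += 5.0            # injected bit-flip-like fault

# Train a linear regression on clean samples: predict each output from its
# four neighbouring inputs.
X = np.stack([np.roll(u, 1, 0), np.roll(u, -1, 0),
              np.roll(u, 1, 1), np.roll(u, -1, 1)], axis=-1).reshape(-1, 4)
coef, *_ = np.linalg.lstsq(X, v.reshape(-1), rcond=None)

# Flag points whose residual is unusually large as possible soft errors.
residual = np.abs(v_corrupt.reshape(-1) - X @ coef)
suspects = np.argwhere(residual > 10 * residual.std())
print("flagged indices:", [divmod(int(i), 64) for i in suspects.ravel()])
```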

  11. Interpolation Errors in Spectrum Analyzers

    NASA Technical Reports Server (NTRS)

    Martin, J. L.

    1996-01-01

    To obtain the proper measurement amplitude with a spectrum analyzer, the correct frequency-dependent transducer factor must be added to the voltage measured by the transducer. This report examines how entering transducer factors into a spectrum analyzer can cause significant errors in field amplitude due to the misunderstanding of the analyzer's interpolation methods. It also discusses how to reduce these errors to obtain a more accurate field amplitude reading.
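
    As an illustration of how the interpolation convention matters, the sketch below interpolates a hypothetical transducer (antenna) factor between calibration points on a linear versus a logarithmic frequency axis; the calibration table and measurement values are made up and do not come from the report.

```python
import numpy as np

# Interpolating a transducer factor between calibration frequencies on a
# linear vs. logarithmic frequency axis gives different corrections.

cal_freq_mhz = np.array([30.0, 100.0, 300.0, 1000.0])
cal_factor_db = np.array([18.0, 10.0, 15.0, 24.0])    # hypothetical antenna factor

f = 550.0  # measurement frequency between calibration points (MHz)
af_linear = np.interp(f, cal_freq_mhz, cal_factor_db)
af_log = np.interp(np.log10(f), np.log10(cal_freq_mhz), cal_factor_db)

measured_dbuv = 40.0   # analyzer reading
print("field (linear-f interp):", measured_dbuv + af_linear, "dBuV/m")
print("field (log-f interp):   ", measured_dbuv + af_log, "dBuV/m")
# If the analyzer interpolates differently from the calibration laboratory,
# the difference between these two results appears as a systematic amplitude error.
```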

  12. Systematic review automation technologies.

    PubMed

    Tsafnat, Guy; Glasziou, Paul; Choong, Miew Keen; Dunn, Adam; Galgani, Filippo; Coiera, Enrico

    2014-07-09

    Systematic reviews, a cornerstone of evidence-based medicine, are not produced quickly enough to support clinical practice. The cost of production, availability of the requisite expertise and timeliness are often quoted as major contributors for the delay. This detailed survey of the state of the art of information systems designed to support or automate individual tasks in the systematic review, and in particular systematic reviews of randomized controlled clinical trials, reveals trends that see the convergence of several parallel research projects. We surveyed literature describing informatics systems that support or automate the processes of systematic review or each of the tasks of the systematic review. Several projects focus on automating, simplifying and/or streamlining specific tasks of the systematic review. Some tasks are already fully automated while others are still largely manual. In this review, we describe each task and the effect that its automation would have on the entire systematic review process, summarize the existing information system support for each task, and highlight where further research is needed for realizing automation for the task. Integration of the systems that automate systematic review tasks may lead to a revised systematic review workflow. We envisage the optimized workflow will lead to system in which each systematic review is described as a computer program that automatically retrieves relevant trials, appraises them, extracts and synthesizes data, evaluates the risk of bias, performs meta-analysis calculations, and produces a report in real time.

  13. Systematic review automation technologies

    PubMed Central

    2014-01-01

    Systematic reviews, a cornerstone of evidence-based medicine, are not produced quickly enough to support clinical practice. The cost of production, availability of the requisite expertise and timeliness are often quoted as major contributors for the delay. This detailed survey of the state of the art of information systems designed to support or automate individual tasks in the systematic review, and in particular systematic reviews of randomized controlled clinical trials, reveals trends that see the convergence of several parallel research projects. We surveyed literature describing informatics systems that support or automate the processes of systematic review or each of the tasks of the systematic review. Several projects focus on automating, simplifying and/or streamlining specific tasks of the systematic review. Some tasks are already fully automated while others are still largely manual. In this review, we describe each task and the effect that its automation would have on the entire systematic review process, summarize the existing information system support for each task, and highlight where further research is needed for realizing automation for the task. Integration of the systems that automate systematic review tasks may lead to a revised systematic review workflow. We envisage the optimized workflow will lead to system in which each systematic review is described as a computer program that automatically retrieves relevant trials, appraises them, extracts and synthesizes data, evaluates the risk of bias, performs meta-analysis calculations, and produces a report in real time. PMID:25005128

  14. Correcting the optimal resampling-based error rate by estimating the error rate of wrapper algorithms.

    PubMed

    Bernau, Christoph; Augustin, Thomas; Boulesteix, Anne-Laure

    2013-09-01

    High-dimensional binary classification tasks, for example, the classification of microarray samples into normal and cancer tissues, usually involve a tuning parameter. By reporting the performance of the best tuning parameter value only, over-optimistic prediction errors are obtained. For correcting this tuning bias, we develop a new method which is based on a decomposition of the unconditional error rate involving the tuning procedure, that is, we estimate the error rate of wrapper algorithms as introduced in the context of internal cross-validation (ICV) by Varma and Simon (2006, BMC Bioinformatics 7, 91). Our subsampling-based estimator can be written as a weighted mean of the errors obtained using the different tuning parameter values, and thus can be interpreted as a smooth version of ICV, which is the standard approach for avoiding tuning bias. In contrast to ICV, our method guarantees intuitive bounds for the corrected error. Additionally, we suggest to use bias correction methods also to address the conceptually similar method selection bias that results from the optimal choice of the classification method itself when evaluating several methods successively. We demonstrate the performance of our method on microarray and simulated data and compare it to ICV. This study suggests that our approach yields competitive estimates at a much lower computational price.
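
    A rough sketch of the underlying idea follows, under the assumption that the weights are taken as the fraction of subsamples on which each tuning value wins; this illustrates a weighted-mean error estimate in the spirit described above, not the authors' exact estimator.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedShuffleSplit

# Estimate errors on subsamples for each tuning value, then report a weighted
# mean over tuning values instead of the (over-optimistic) minimum.

X, y = make_classification(n_samples=200, n_features=50, random_state=0)
Cs = [0.01, 0.1, 1.0, 10.0]                      # candidate tuning values

splitter = StratifiedShuffleSplit(n_splits=20, test_size=0.3, random_state=0)
errors = np.zeros((20, len(Cs)))
for i, (train, test) in enumerate(splitter.split(X, y)):
    for j, C in enumerate(Cs):
        clf = LogisticRegression(C=C, max_iter=1000).fit(X[train], y[train])
        errors[i, j] = 1.0 - clf.score(X[test], y[test])

wins = np.bincount(errors.argmin(axis=1), minlength=len(Cs))
weights = wins / wins.sum()                      # how often each C is selected
naive = errors.mean(axis=0).min()                # tuning-biased estimate
weighted = float(weights @ errors.mean(axis=0))  # smoothed, bias-corrected estimate
print(f"naive (biased): {naive:.3f}   weighted: {weighted:.3f}")
```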

  15. The Importance of Medication Errors Reporting in Improving the Quality of Clinical Care Services

    PubMed Central

    Elden, Nesreen Mohamed Kamal; Ismail, Amira

    2016-01-01

    Introduction: Medication errors have significant implications on patient safety. Error detection through an active management and effective reporting system discloses medication errors and encourages safe practices. Objectives: To improve patient safety through determining and reducing the major causes of medication errors (MEs), after applying tailored preventive strategies. Methodology: A pre-test, post-test study was conducted on all inpatients at a 177 bed hospital where all medication procedures in each ward were monitored by a clinical pharmacist. The patient files were reviewed, as well. Error reports were submitted to a hospital multidisciplinary committee to identify major causes of errors. Accordingly, corrective interventions that consisted of targeted training programs for nurses and physicians were conducted. Results: Medication errors were higher during ordering/prescription stage (38.1%), followed by administration phase (20.9%). About 45% of errors reached the patients: 43.5% were harmless and 1.4% harmful. 7.7% were potential errors and more than 47% could be prevented. After the intervention, error rates decreased from (6.7%) to (3.6%) (P≤0.001). Conclusion: The role of a ward based clinical pharmacist with a hospital multidisciplinary committee was effective in recognizing, designing and implementing tailored interventions for reduction of medication errors. A systematic approach is urgently needed to decrease organizational susceptibility to errors, through providing required resources to monitor, analyze and implement effective interventions. PMID:27045415

  16. Quantum error correction for beginners.

    PubMed

    Devitt, Simon J; Munro, William J; Nemoto, Kae

    2013-07-01

    Quantum error correction (QEC) and fault-tolerant quantum computation represent one of the most vital theoretical aspects of quantum information processing. It was well known from the early developments of this exciting field that the fragility of coherent quantum systems would be a catastrophic obstacle to the development of large-scale quantum computers. The introduction of quantum error correction in 1995 showed that active techniques could be employed to mitigate this fatal problem. However, quantum error correction and fault-tolerant computation is now a much larger field and many new codes, techniques, and methodologies have been developed to implement error correction for large-scale quantum algorithms. In response, we have attempted to summarize the basic aspects of quantum error correction and fault-tolerance, not as a detailed guide, but rather as a basic introduction. The development in this area has been so pronounced that many in the field of quantum information, specifically researchers who are new to quantum information or people focused on the many other important issues in quantum computation, have found it difficult to keep up with the general formalisms and methodologies employed in this area. Rather than introducing these concepts from a rigorous mathematical and computer science framework, we instead examine error correction and fault-tolerance largely through detailed examples, which are more relevant to experimentalists today and in the near future.

  17. Error image aware content restoration

    NASA Astrophysics Data System (ADS)

    Choi, Sungwoo; Lee, Moonsik; Jung, Byunghee

    2015-12-01

    As the resolution of TV has significantly increased, content consumers have become increasingly sensitive to the subtlest defects in TV content. This rising standard of quality has posed a new challenge in today's context, where the tape-based process has transitioned to the file-based process: the transition necessitated digitizing old archives, a process which inevitably produces errors such as disordered pixel blocks, scattered white noise, or totally missing pixels. Unsurprisingly, detecting and fixing such errors requires a substantial amount of time and human labor to meet the standard demanded by today's consumers. In this paper, we introduce a novel, automated error restoration algorithm which can be applied to different types of classic errors by utilizing adjacent images while preserving the undamaged parts of an error image as much as possible. We tested our method on error images detected by our quality check system in the KBS (Korean Broadcasting System) video archive. We are also implementing the algorithm as a plugin for a well-known NLE (non-linear editing system), which is a familiar tool for quality control agents.

  18. Dominant modes via model error

    NASA Technical Reports Server (NTRS)

    Yousuff, A.; Breida, M.

    1992-01-01

    Obtaining a reduced model of a stable mechanical system with proportional damping is considered. Such systems can be conveniently represented in modal coordinates. Two popular schemes, the modal cost analysis and the balancing method, offer simple means of identifying dominant modes for retention in the reduced model. The dominance is measured via the modal costs in the case of modal cost analysis and via the singular values of the Gramian-product in the case of balancing. Though these measures do not exactly reflect the more appropriate model error, which is the H2 norm of the output-error between the full and the reduced models, they do lead to simple computations. Normally, the model error is computed after the reduced model is obtained, since it is believed that, in general, the model error cannot be easily computed a priori. The authors point out that the model error can also be calculated a priori, just as easily as the above measures. Hence, the model error itself can be used to determine the dominant modes. Moreover, the simplicity of the computations does not presume any special properties of the system, such as small damping, orthogonal symmetry, etc.
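
    The model error mentioned above can be computed as sketched below: the H2 norm of the error system formed by driving the full and reduced models with the same input and differencing their outputs, evaluated through a Lyapunov equation. The two-mode system is an illustrative example, not one taken from the paper.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# For a stable system (A, B, C): ||G||_H2^2 = trace(C P C^T), where P solves
# the Lyapunov equation A P + P A^T + B B^T = 0 (controllability Gramian).

def h2_norm(A, B, C):
    P = solve_continuous_lyapunov(A, -B @ B.T)
    return float(np.sqrt(np.trace(C @ P @ C.T)))

def mode(wn, zeta):                   # one proportionally damped mode
    return np.array([[0.0, 1.0], [-wn**2, -2.0 * zeta * wn]])

# Full model: two modes. Reduced model: mode 1 only.
A1, A2 = mode(1.0, 0.02), mode(5.0, 0.02)
B1, B2 = np.array([[0.0], [1.0]]), np.array([[0.0], [0.3]])
C1, C2 = np.array([[1.0, 0.0]]), np.array([[0.5, 0.0]])

# Error system: drive full and reduced models with the same input and take
# the difference of their outputs, y_full - y_reduced.
A_err = np.block([[A1, np.zeros((2, 2)), np.zeros((2, 2))],
                  [np.zeros((2, 2)), A2, np.zeros((2, 2))],
                  [np.zeros((2, 2)), np.zeros((2, 2)), A1]])
B_err = np.vstack([B1, B2, B1])
C_err = np.hstack([C1, C2, -C1])

print("model error of dropping mode 2:", h2_norm(A_err, B_err, C_err))
print("H2 norm of mode 2 alone:       ", h2_norm(A2, B2, C2))   # identical
```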

  19. Error localization in RHIC by fitting difference orbits

    SciTech Connect

    Liu C.; Minty, M.; Ptitsyn, V.

    2012-05-20

    The presence of realistic errors in an accelerator, or in the model used to describe the accelerator, is such that a measurement of the beam trajectory may deviate from prediction. Comparison of measurements to the model can be used to detect such errors. To do so, the initial conditions (phase-space parameters at any point) must be determined, which can be achieved by fitting the difference orbit against the model prediction using only a few beam position measurements. Using these initial conditions, the fitted orbit can be propagated along the beam line based on the optics model. Measurement and model will agree up to the point of an error. The error source can be better localized by additionally fitting the difference orbit using downstream BPMs and back-propagating the solution. If one dominating error source exists in the machine, the fitted orbit will deviate from the difference orbit at the same point.
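
    A hedged sketch of the fitting idea: reconstruct the initial phase-space coordinates of a difference orbit from a few upstream BPMs using the optics model, propagate them, and look for the point where the propagated orbit departs from the measurements. The drift-only optics, BPM noise and injected kick below are illustrative assumptions, not the RHIC model.

```python
import numpy as np

# Fit (x0, x0') from a few BPMs, propagate with the optics model, and look
# for the point where residuals start to grow (the error location).

rng = np.random.default_rng(4)

def drift(L):                                   # simple drift-space transfer matrix
    return np.array([[1.0, L], [0.0, 1.0]])

segments = [drift(2.0) for _ in range(10)]      # 10 identical sections

# "True" machine: an unknown dipole kick after segment 5
x = np.array([0.5e-3, 0.1e-3])                  # true initial (x [m], x' [rad])
measured = []
for i, M in enumerate(segments):
    x = M @ x
    if i == 5:
        x = x + np.array([0.0, 0.2e-3])         # unknown corrector/kick error
    measured.append(x[0] + rng.normal(0, 5e-6)) # BPM reads position only
measured = np.array(measured)

# Model prediction: position row of the cumulative transfer matrix at each BPM
rows, M_acc = [], np.eye(2)
for M in segments:
    M_acc = M @ M_acc
    rows.append(M_acc[0])
rows = np.array(rows)

# Fit the initial conditions from the first few BPMs, where the model is valid
fit, *_ = np.linalg.lstsq(rows[:4], measured[:4], rcond=None)
predicted = rows @ fit
print("BPM residuals (um):", np.round((measured - predicted) * 1e6, 1))
# Residuals stay at the noise level up to the kick location and grow
# afterwards, which localizes the error source.
```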

  20. Seismic Station Installation Orientation Errors at ANSS and IRIS/USGS Stations

    USGS Publications Warehouse

    Ringler, Adam T.; Hutt, Charles R.; Persfield, K.; Gee, Lind S.

    2013-01-01

    the vault (e.g., GSN station WCI in Wyandotte Cave, Indiana). Finally, the third source of error