Science.gov

Sample records for additional systematic error

  1. Systematic Errors in an Air Track Experiment.

    ERIC Educational Resources Information Center

    Ramirez, Santos A.; Ham, Joe S.

    1990-01-01

    Errors found in a common physics experiment to measure acceleration resulting from gravity using a linear air track are investigated. Glider position at release and initial velocity are shown to be sources of systematic error. (CW)

  2. Modeling Errors in Daily Precipitation Measurements: Additive or Multiplicative?

    NASA Technical Reports Server (NTRS)

    Tian, Yudong; Huffman, George J.; Adler, Robert F.; Tang, Ling; Sapiano, Matthew; Maggioni, Viviana; Wu, Huan

    2013-01-01

    The definition and quantification of uncertainty depend on the error model used. For uncertainties in precipitation measurements, two types of error models have been widely adopted: the additive error model and the multiplicative error model. This leads to incompatible specifications of uncertainties and impedes intercomparison and application. In this letter, we assess the suitability of both models for satellite-based daily precipitation measurements in an effort to clarify the uncertainty representation. Three criteria were employed to evaluate the applicability of either model: (1) better separation of the systematic and random errors; (2) applicability to the large range of variability in daily precipitation; and (3) better predictive skills. It is found that the multiplicative error model is a much better choice under all three criteria. It extracted the systematic errors more cleanly, was more consistent with the large variability of precipitation measurements, and produced superior predictions of the error characteristics. The additive error model had several weaknesses, such as non-constant variance resulting from systematic errors leaking into random errors, and a lack of prediction capability. Therefore, the multiplicative error model is the better choice.
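
    To make the comparison concrete, the sketch below fits both models to synthetic gauge/satellite pairs whose error grows with rain rate, then checks the constant-variance criterion in rain-rate bins. All numbers and names are illustrative assumptions, not values from the letter.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Synthetic "truth" (reference) and a satellite estimate whose error grows
    # with rain rate -- the situation that favors a multiplicative model.
    truth = rng.gamma(shape=0.5, scale=10.0, size=5000)        # mm/day
    sat = truth * np.exp(rng.normal(0.1, 0.4, truth.size))     # multiplicative error

    wet = (truth > 0.1) & (sat > 0.1)                          # avoid log(0)
    x, y = truth[wet], sat[wet]

    # Additive model:        y = a + b*x + eps
    b_add, a_add = np.polyfit(x, y, 1)
    resid_add = y - (a_add + b_add * x)

    # Multiplicative model:  ln y = a + b*ln x + eps
    b_mul, a_mul = np.polyfit(np.log(x), np.log(y), 1)
    resid_mul = np.log(y) - (a_mul + b_mul * np.log(x))

    # Constant-variance check: residual spread per rain-rate bin.
    for lo, hi in [(0.1, 1.0), (1.0, 10.0), (10.0, 100.0)]:
        sel = (x >= lo) & (x < hi)
        print(f"{lo:5.1f}-{hi:<5.1f} mm/day   additive std: {resid_add[sel].std():6.2f}"
              f"   multiplicative std: {resid_mul[sel].std():.2f}")
    ```

    With intensity-proportional error, the additive residual spread grows across the bins while the log-space spread stays roughly constant, which is the behavior criterion (2) probes.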

  3. Protecting weak measurements against systematic errors

    NASA Astrophysics Data System (ADS)

    Pang, Shengshi; Alonso, Jose Raul Gonzalez; Brun, Todd A.; Jordan, Andrew N.

    2016-07-01

    In this work, we consider the systematic error of quantum metrology by weak measurements under decoherence. We derive the systematic error of maximum likelihood estimation in general to the first-order approximation of a small deviation in the probability distribution and study the robustness of standard weak measurement and postselected weak measurements against systematic errors. We show that, with a large weak value, the systematic error of a postselected weak measurement when the probe undergoes decoherence can be significantly lower than that of a standard weak measurement. This indicates another advantage of weak-value amplification in improving the performance of parameter estimation. We illustrate the results by an exact numerical simulation of decoherence arising from a bosonic mode and compare it to the first-order analytical result we obtain.
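
    The first-order result mentioned here has a compact generic form. As a sketch in generic notation (the paper's exact conditions and conventions may differ): if the outcome distribution is perturbed to q(x) = p_theta(x) + delta p(x), expanding the maximum-likelihood equation around the true parameter gives

    ```latex
    \delta\theta \;\approx\; \frac{1}{F(\theta)} \sum_x \delta p(x)\, \partial_\theta \ln p_\theta(x),
    \qquad
    F(\theta) = \sum_x p_\theta(x) \bigl[\partial_\theta \ln p_\theta(x)\bigr]^2 ,
    ```

    so the bias is suppressed where the Fisher information F(theta) is large; weak-value amplification, which concentrates the Fisher information in the postselected events, can therefore reduce the systematic error relative to a standard weak measurement.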

  4. Measuring Systematic Error with Curve Fits

    ERIC Educational Resources Information Center

    Rupright, Mark E.

    2011-01-01

    Systematic errors are often unavoidable in the introductory physics laboratory. As has been demonstrated in many papers in this journal, such errors can present a fundamental problem for data analysis, particularly when comparing the data to a given model. In this paper I give three examples in which my students use popular curve-fitting software…
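
    A typical version of this exercise is to include the suspected systematic as an extra fit parameter, so that the curve fit itself measures it instead of letting it corrupt the physical parameter. A minimal sketch with SciPy, using a free-fall example and invented numbers (not necessarily one of the article's three examples):

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    np.random.seed(0)

    # Fall distance vs time with a hidden timing offset t0 (systematic error).
    t = np.linspace(0.1, 1.0, 10)
    g_true, t0_true = 9.81, 0.05
    d = 0.5 * g_true * (t + t0_true) ** 2 + np.random.normal(0, 0.002, t.size)

    def naive(t, g):                 # model that ignores the systematic
        return 0.5 * g * t ** 2

    def with_offset(t, g, t0):       # nuisance parameter absorbs the systematic
        return 0.5 * g * (t + t0) ** 2

    (g_naive,), _ = curve_fit(naive, t, d)
    (g_fit, t0_fit), _ = curve_fit(with_offset, t, d, p0=[9.0, 0.0])
    print(f"naive g = {g_naive:.2f} m/s^2, with offset term g = {g_fit:.2f} m/s^2, "
          f"recovered t0 = {t0_fit:.3f} s")
    ```

    The naive fit returns a biased g, while the augmented model recovers both g and the size of the systematic offset.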

  5. Antenna pointing systematic error model derivations

    NASA Technical Reports Server (NTRS)

    Guiar, C. N.; Lansing, F. L.; Riggs, R.

    1987-01-01

    The pointing model used to represent and correct systematic errors for the Deep Space Network (DSN) antennas is presented. Analytical expressions are given for both azimuth-elevation (az-el) and hour angle-declination (ha-dec) mounts for RF axis collimation error, encoder offset, nonorthogonality of axes, axis plane tilt, and structural flexure due to gravity loading. While the residual pointing errors (rms) after correction appear to be within the criterion of ten percent of the half-power beamwidth commonly set for good pointing accuracy, the DSN has embarked on an extensive pointing improvement and modeling program aiming toward an order-of-magnitude higher pointing precision.
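
    Models of this kind are linear in their coefficients, so they can be fitted to observed pointing offsets by ordinary least squares. The sketch below shows a generic az-el correction with the physical terms listed above; the coefficient names, signs, and exact trigonometric forms are common textbook conventions, not the article's DSN formulation:

    ```python
    import numpy as np

    def pointing_correction(az, el, c):
        """Generic az-el systematic pointing model (angles in radians).

        Schematic only: ia/ie encoder offsets, ca RF collimation, npae axis
        non-orthogonality, an/aw azimuth-axis tilt, tf gravitational flexure.
        """
        d_az = (c['ia']
                + c['ca'] / np.cos(el)                # collimation
                + c['npae'] * np.tan(el)              # non-orthogonality of axes
                + c['an'] * np.sin(az) * np.tan(el)   # axis tilt, N-S
                + c['aw'] * np.cos(az) * np.tan(el))  # axis tilt, E-W
        d_el = (c['ie']
                + c['an'] * np.cos(az)
                - c['aw'] * np.sin(az)
                + c['tf'] * np.cos(el))               # flexure ~ cos(el)
        return d_az, d_el
    ```

    Because every term is linear in its coefficient, fitting the model to measured offsets reduces to a standard linear least-squares problem.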

  6. Systematic errors in long baseline oscillation experiments

    SciTech Connect

    Harris, Deborah A.; /Fermilab

    2006-02-01

    This article gives a brief overview of long baseline neutrino experiments and their goals, and then describes the different kinds of systematic errors that are encountered in these experiments. Particular attention is paid to the uncertainties that come about because of imperfect knowledge of neutrino cross sections and more generally how neutrinos interact in nuclei. Near detectors are planned for most of these experiments, and the extent to which certain uncertainties can be reduced by the presence of near detectors is also discussed.

  7. Systematic errors in strong lens modeling

    NASA Astrophysics Data System (ADS)

    Johnson, Traci Lin; Sharon, Keren; Bayliss, Matthew B.

    2015-08-01

    The lensing community has made great strides in quantifying the statistical errors associated with strong lens modeling. However, we are just now beginning to understand the systematic errors. Quantifying these errors is pertinent to Frontier Fields science, as number counts and luminosity functions are highly sensitive to the value of the magnifications of background sources across the entire field of view. We are aware that models can be very different when modelers change their assumptions about the parameterization of the lensing potential (i.e., parametric vs. non-parametric models). However, models built with a single methodology can also lead to inconsistent outcomes for different quantities, distributions, and qualities of redshift information regarding the multiple images used as constraints in the lens model. We investigate how varying the number of multiple-image constraints and the available redshift information for those constraints (e.g., spectroscopic vs. photometric vs. no redshift) can influence the outputs of our parametric strong lens models, specifically the mass distribution and the magnifications of background sources. We make use of the simulated clusters by M. Meneghetti et al. and the first two Frontier Fields clusters, which have a high number of multiply imaged galaxies with spectroscopically measured redshifts (or input redshifts, in the case of the simulated clusters). This work will inform not only Frontier Fields science, but also work on the growing collection of strong-lensing galaxy clusters, most of which are less massive, are capable of lensing only a handful of galaxies, and are more prone to these systematic errors.

  8. More on Systematic Error in a Boyle's Law Experiment

    ERIC Educational Resources Information Center

    McCall, Richard P.

    2012-01-01

    A recent article in "The Physics Teacher" describes a method for analyzing a systematic error in a Boyle's law laboratory activity. Systematic errors are important to consider in physics labs because they tend to bias the results of measurements. There are numerous laboratory examples and resources that discuss this common source of error.

  9. More on Systematic Error in a Boyle's Law Experiment

    NASA Astrophysics Data System (ADS)

    McCall, Richard P.

    2012-01-01

    A recent article in The Physics Teacher describes a method for analyzing a systematic error in a Boyle's law laboratory activity. Systematic errors are important to consider in physics labs because they tend to bias the results of measurements. There are numerous laboratory examples and resources that discuss this common source of error.

  10. Systematic error analysis of rotating coil using computer simulation

    SciTech Connect

    Li, Wei-chuan; Coles, M.

    1993-04-01

    This report describes a study of the systematic and random measurement uncertainties of magnetic multipoles which are due to construction errors, rotational speed variation, and electronic noise in a digitally bucked tangential coil assembly with dipole bucking windings. The sensitivities of the systematic multipole uncertainty to construction errors are estimated analytically and using a computer simulation program.

  11. Improved Systematic Pointing Error Model for the DSN Antennas

    NASA Technical Reports Server (NTRS)

    Rochblatt, David J.; Withington, Philip M.; Richter, Paul H.

    2011-01-01

    New pointing models have been developed for large reflector antennas whose construction is founded on an elevation-over-azimuth mount. At JPL, the new models were applied to the Deep Space Network (DSN) 34-meter antenna subnet for correction of their systematic pointing errors; this achieved significant improvement in performance at Ka-band (32 GHz) and X-band (8.4 GHz). The new models provide pointing improvements relative to the traditional models by a factor of two to three, which translates to approximately a 3-dB performance improvement at Ka-band. For radio science experiments where blind pointing performance is critical, this innovation provides an enabling technology. The model extends the traditional physical models with higher-order mathematical terms, thereby increasing the resolution of the model for a better fit to the underlying systematic imperfections that are the cause of antenna pointing errors. The philosophy of the traditional model was that all mathematical terms in the model must be traced to a physical phenomenon causing antenna pointing errors. The traditional physical terms are: antenna axis tilts, gravitational flexure, azimuth collimation, azimuth encoder fixed offset, azimuth and elevation skew, elevation encoder fixed offset, residual refraction, azimuth encoder scale error, and antenna pointing de-rotation terms for beam waveguide (BWG) antennas. Besides the addition of spherical harmonics terms, the new models differ from the traditional ones in that the coefficients for the cross-elevation and elevation corrections are completely independent and may be different, while in the traditional model some of the terms are identical. In addition, the new software allows for all-sky or mission-specific model development, and can utilize the previously used model as an a priori estimate for the development of the updated models.

  12. Identifying and Reducing Systematic Errors in Chromosome Conformation Capture Data

    PubMed Central

    Hahn, Seungsoo; Kim, Dongsup

    2015-01-01

    Chromosome conformation capture (3C)-based techniques have recently been used to uncover the mysterious genomic architecture in the nucleus. These techniques yield indirect data on the distances between genomic loci in the form of contact frequencies that must be normalized to remove various errors. This normalization process determines the quality of data analysis. In this study, we describe two systematic errors that result from the heterogeneous local density of restriction sites and from different local chromatin states, methods to identify and remove those artifacts, and three previously described sources of systematic errors in 3C-based data: fragment length, mappability, and local DNA composition. To explain the effect of systematic errors on the results, we used three different published data sets to show the dependence of the results on restriction enzymes and experimental methods. Comparison of the results from different restriction enzymes shows a higher correlation after removing systematic errors. In contrast, using different methods with the same restriction enzymes shows a lower correlation after removing systematic errors. Notably, the improved correlation in the latter case caused by systematic errors indicates that a higher correlation between results does not ensure the validity of the normalization methods. Finally, we suggest a method to analyze random error and provide guidance for the maximum reproducibility of contact frequency maps. PMID:26717152
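
    Normalization schemes of this kind usually model the observed count as the true contact frequency multiplied by a per-locus bias factor, observed_ij = b_i * b_j * true_ij, and remove the biases by matrix balancing. The sketch below is the widely used iterative-correction (ICE-style) scheme, shown as a generic illustration rather than the authors' exact procedure:

    ```python
    import numpy as np

    def iterative_correction(contacts, n_iter=50):
        """Balance a symmetric contact matrix so all loci get equal coverage.

        Assumes multiplicative per-locus biases (fragment length, mappability,
        GC content, local restriction-site density, chromatin state, ...).
        """
        m = contacts.astype(float).copy()
        bias = np.ones(m.shape[0])
        for _ in range(n_iter):
            cov = m.sum(axis=1)
            s = cov / cov[cov > 0].mean()   # relative coverage per locus
            s[s == 0] = 1.0                 # leave empty rows untouched
            m /= np.outer(s, s)
            bias *= s
        return m, bias                      # normalized map, estimated biases
    ```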

  13. A Systematic Error Correction Method for TOVS Radiances

    NASA Technical Reports Server (NTRS)

    Joiner, Joanna; Rokke, Laurie; Einaudi, Franco (Technical Monitor)

    2000-01-01

    Treatment of systematic errors is crucial for the successful use of satellite data in a data assimilation system. Systematic errors in TOVS radiance measurements and radiative transfer calculations can be as large or larger than random instrument errors. The usual assumption in data assimilation is that observational errors are unbiased. If biases are not effectively removed prior to assimilation, the impact of satellite data will be lessened and can even be detrimental. Treatment of systematic errors is important for short-term forecast skill as well as the creation of climate data sets. A systematic error correction algorithm has been developed as part of a 1D radiance assimilation. This scheme corrects for spectroscopic errors, errors in the instrument response function, and other biases in the forward radiance calculation for TOVS. Such algorithms are often referred to as tuning of the radiances. The scheme is able to account for the complex, air-mass dependent biases that are seen in the differences between TOVS radiance observations and forward model calculations. We will show results of systematic error correction applied to the NOAA 15 Advanced TOVS as well as its predecessors. We will also discuss the ramifications of inter-instrument bias with a focus on stratospheric measurements.
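
    Radiance "tuning" of this kind is commonly implemented as a regression of observed-minus-calculated differences on a small set of air-mass predictors, which is then removed before assimilation. A minimal per-channel sketch (the predictor choice is an illustrative assumption, not taken from the abstract):

    ```python
    import numpy as np

    def fit_bias(omc, predictors):
        """Fit b(x) = beta0 + sum_i beta_i * x_i to observed-minus-calculated
        radiances for one channel.  predictors: (n_obs, n_pred) array of
        air-mass terms, e.g. layer thicknesses or scan-angle powers."""
        X = np.column_stack([np.ones(len(omc)), predictors])
        beta, *_ = np.linalg.lstsq(X, omc, rcond=None)
        return beta

    def correct(obs, predictors, beta):
        """Subtract the predicted air-mass-dependent bias from the radiances."""
        X = np.column_stack([np.ones(len(obs)), predictors])
        return obs - X @ beta
    ```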

  14. Systematic errors for a Mueller matrix dual rotating compensator ellipsometer.

    PubMed

    Broch, Laurent; En Naciri, Aotmane; Johann, Luc

    2008-06-01

    The characterization of anisotropic materials and complex systems by ellipsometry has pushed the design of instruments to require the measurement of the full reflection Mueller matrix of the sample with great precision. Therefore Mueller matrix ellipsometers have emerged over the past twenty years. The values of some coefficients of the matrix can be very small, and errors due to noise or systematic errors can distort the analysis. We present a detailed characterization of the systematic errors for a Mueller matrix ellipsometer in the dual-rotating-compensator configuration. Starting from a general formalism, we derive explicit first-order expressions for the errors on all the coefficients of the Mueller matrix of the sample. The errors caused by inaccuracy of the azimuthal arrangement of the optical components and by the residual ellipticity introduced by imperfect optical elements are shown. A new method based on a four-zone averaging measurement is proposed to cancel the systematic errors.

  15. Straightness error evaluation of additional constraints

    NASA Astrophysics Data System (ADS)

    Pei, Ling; Wang, Shenghuai; Liu, Yong

    2011-05-01

    The new generation of the Dimensional and Geometrical Product Specifications (GPS) and Verification standard system is based on both mathematical structure and metrology, and determining the eligibility of a product should be adapted to modern digital measuring instruments. When a geometric tolerance specification carries an additional constraint, such as straightness with an additional form requirement, the feature must be qualified against that additional requirement within the tolerance zone. Knowing how closely the geometrical specification matches the functional specification determines the correctness of the measurement results. A methodology that evaluates the various forms, including the ideal features, the extracted features, and their combinations, under an additional form constraint on straightness within the tolerance zone was found to yield correct acceptance decisions for products. The results show that different combinations of these forms affect the acceptance decision on product qualification, and that appropriate form matching can satisfy the additional form requirements for product features.

  16. Tackling systematic errors in quantum logic gates with composite rotations

    SciTech Connect

    Cummins, Holly K.; Llewellyn, Gavin; Jones, Jonathan A.

    2003-04-01

    We describe the use of composite rotations to combat systematic errors in single-qubit quantum logic gates and discuss three families of composite rotations which can be used to correct off-resonance and pulse length errors. Although developed and described within the context of nuclear magnetic resonance quantum computing, these sequences should be applicable to any implementation of quantum computation.
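
    As a concrete instance, the BB1 family replaces a single rotation with a short sequence whose extra phases are chosen so that pulse-length errors cancel to second order. Below is a minimal simulation sketch (generic SU(2) algebra; the BB1 construction is standard, but the pulse ordering and error model here are illustrative conventions):

    ```python
    import numpy as np

    sx = np.array([[0, 1], [1, 0]], dtype=complex)
    sy = np.array([[0, -1j], [1j, 0]], dtype=complex)

    def R(theta, phi):
        """Rotation by theta about an axis at azimuth phi in the xy-plane."""
        axis = np.cos(phi) * sx + np.sin(phi) * sy
        return np.cos(theta / 2) * np.eye(2) - 1j * np.sin(theta / 2) * axis

    def infidelity(U, V):
        return 1 - abs(np.trace(U.conj().T @ V)) / 2

    theta = np.pi                          # target: a pi rotation about x
    target = R(theta, 0)
    phi = np.arccos(-theta / (4 * np.pi))  # BB1 phase for this target angle

    for eps in [0.01, 0.05, 0.10]:         # fractional pulse-length error
        naive = R(theta * (1 + eps), 0)
        bb1 = (R(np.pi * (1 + eps), phi)
               @ R(2 * np.pi * (1 + eps), 3 * phi)
               @ R(np.pi * (1 + eps), phi)
               @ R(theta * (1 + eps), 0))
        print(f"eps={eps:.2f}  naive: {infidelity(target, naive):.1e}  "
              f"BB1: {infidelity(target, bb1):.1e}")
    ```

    The naive pulse's infidelity grows quadratically with the fractional error, while the composite sequence stays several orders of magnitude lower.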

  17. Systematic Error Estimation for Chemical Reaction Energies.

    PubMed

    Simm, Gregor N; Reiher, Markus

    2016-06-14

    For a theoretical understanding of the reactivity of complex chemical systems, accurate relative energies between intermediates and transition states are required. Despite its popularity, density functional theory (DFT) often fails to provide sufficiently accurate data, especially for molecules containing transition metals. Due to the huge number of intermediates that need to be studied for all but the simplest chemical processes, DFT is, to date, the only method that is computationally feasible. Here, we present a Bayesian framework for DFT that allows for error estimation of calculated properties. Since the optimal choice of parameters in present-day density functionals is strongly system dependent, we advocate for a system-focused reparameterization. While, at first sight, this approach conflicts with the first-principles character of DFT that should make it, in principle, system independent, we deliberately introduce system dependence to be able to assign a stochastically meaningful error to the system-dependent parametrization, which makes it nonarbitrary. By reparameterizing a functional that was derived on a sound physical basis to a chemical system of interest, we obtain a functional that yields reliable confidence intervals for reaction energies. We demonstrate our approach on the example of catalytic nitrogen fixation.

  18. Analysis and Correction of Systematic Height Model Errors

    NASA Astrophysics Data System (ADS)

    Jacobsen, K.

    2016-06-01

    The geometry of digital height models (DHM) determined with optical satellite stereo combinations depends upon the image orientation, which is influenced by the satellite camera, the system calibration and the attitude registration. As a standard these days, the image orientation is available in the form of rational polynomial coefficients (RPC). Usually a bias correction of the RPC based on ground control points is required. In most cases the bias correction requires an affine transformation, sometimes only shifts, in image or object space. For some satellites and some cases, for example caused by a small base length, such an image orientation does not lead to the achievable accuracy of height models. As reported, e.g., by Yong-hua et al. 2015 and Zhang et al. 2015, especially the Chinese stereo satellite ZiYuan-3 (ZY-3) has a limited calibration accuracy and an attitude recording of just 4 Hz, which may not be satisfying. Zhang et al. 2015 tried to improve the attitude based on the color sensor bands of ZY-3, but the color images are not always available, nor is detailed satellite orientation information. There is a tendency toward systematic deformation in a Pléiades tri-stereo combination with small base length: the small base length enlarges small systematic errors in object space. But systematic height model errors have also been detected in some other satellite stereo combinations. The largest influence is the unsatisfactory leveling of the height models, but low-frequency height deformations can also be seen. A tilt of the DHM can in theory be eliminated by ground control points (GCP), but often the GCP accuracy and distribution are not optimal, not allowing a correct leveling of the height model. In addition, a model deformation at GCP locations may lead to suboptimal DHM leveling. Supported by reference height models, better accuracy has been reached. As reference height models, the Shuttle Radar Topography Mission (SRTM) digital surface model (DSM) or the new AW3D30 DSM, based on ALOS PRISM images, are used.

  19. Systematic Parameter Errors in Inspiraling Neutron Star Binaries

    NASA Astrophysics Data System (ADS)

    Favata, Marc

    2014-03-01

    The coalescence of two neutron stars is an important gravitational wave source for LIGO and other detectors. Numerous studies have considered the precision with which binary parameters (masses, spins, Love numbers) can be measured. Here I consider the accuracy with which these parameters can be determined in the presence of systematic errors due to waveform approximations. These approximations include truncation of the post-Newtonian (PN) series and neglect of neutron star (NS) spin, tidal deformation, or orbital eccentricity. All of these effects can yield systematic errors that exceed statistical errors for plausible parameter values. In particular, neglecting spin, eccentricity, or high-order PN terms causes a significant bias in the NS Love number. Tidal effects will not be measurable with PN inspiral waveforms if these systematic errors are not controlled.

  1. Optimal input design for aircraft instrumentation systematic error estimation

    NASA Technical Reports Server (NTRS)

    Morelli, Eugene A.

    1991-01-01

    A new technique for designing optimal flight test inputs for accurate estimation of instrumentation systematic errors was developed and demonstrated. A simulation model of the F-18 High Angle of Attack Research Vehicle (HARV) aircraft was used to evaluate the effectiveness of the optimal input compared to input recorded during flight test. Instrumentation systematic error parameter estimates and their standard errors were compared. It was found that, for a fixed input duration, the optimal input design improved the error parameter estimates and their accuracies. Pilot acceptability of the optimal input design was demonstrated using a six degree-of-freedom fixed-base piloted simulation of the F-18 HARV. The technique described in this work provides a practical, optimal procedure for designing inputs for data compatibility experiments.

  2. Reducing Systematic Error in Weak Lensing Cluster Surveys

    NASA Astrophysics Data System (ADS)

    Utsumi, Yousuke; Miyazaki, Satoshi; Geller, Margaret J.; Dell'Antonio, Ian P.; Oguri, Masamune; Kurtz, Michael J.; Hamana, Takashi; Fabricant, Daniel G.

    2014-05-01

    Weak lensing provides an important route toward collecting samples of clusters of galaxies selected by mass. Subtle systematic errors in image reduction can compromise the power of this technique. We use the B-mode signal to quantify this systematic error and to test methods for reducing this error. We show that two procedures are efficient in suppressing systematic error in the B-mode: (1) refinement of the mosaic CCD warping procedure to conform to absolute celestial coordinates and (2) truncation of the smoothing procedure on a scale of 10'. Application of these procedures reduces the systematic error to 20% of its original amplitude. We provide an analytic expression for the distribution of the highest peaks in noise maps that can be used to estimate the fraction of false peaks in the weak-lensing κ-signal-to-noise ratio (S/N) maps as a function of the detection threshold. Based on this analysis, we select a threshold S/N = 4.56 for identifying an uncontaminated set of weak-lensing peaks in two test fields covering a total area of ~3 deg2. Taken together these fields contain seven peaks above the threshold. Among these, six are probable systems of galaxies and one is a superposition. We confirm the reliability of these peaks with dense redshift surveys, X-ray, and imaging observations. The systematic error reduction procedures we apply are general and can be applied to future large-area weak-lensing surveys. Our high-peak analysis suggests that with an S/N threshold of 4.5, there should be only 2.7 spurious weak-lensing peaks even in an area of 1000 deg2, where we expect ~2000 peaks based on our Subaru fields. Based in part on data collected at Subaru Telescope and obtained from the SMOKA, which is operated by the Astronomy Data Center, National Astronomical Observatory of Japan.

  3. Treatment of systematic errors in land data assimilation systems

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Data assimilation systems are generally designed to minimize the influence of random error on the estimation of system states. Yet, experience with land data assimilation systems has also revealed the presence of large systematic differences between model-derived and remotely-sensed estimates of land surface states.

  4. Bayesian conformity assessment in presence of systematic measurement errors

    NASA Astrophysics Data System (ADS)

    Carobbi, Carlo; Pennecchi, Francesca

    2016-04-01

    Conformity assessment of the distribution of the values of a quantity is investigated using a Bayesian approach. The effect of systematic, non-negligible measurement errors is taken into account. The analysis is general, in the sense that the probability distribution of the quantity can be of any kind, that is, even different from the ubiquitous normal distribution, and the measurement model function, linking the measurand with the observable and non-observable influence quantities, can be non-linear. Further, any joint probability density function can be used to model the available knowledge about the systematic errors. It is demonstrated that the result of the Bayesian analysis developed here reduces to the standard result (obtained through a frequentist approach) when the systematic measurement errors are negligible. A consolidated frequentist extension of that standard result, aimed at including the effect of a systematic measurement error, is directly compared with the Bayesian result, whose superiority is demonstrated. Application of the results obtained here to the derivation of the operating characteristic curves used for sampling plans for inspection by variables is also introduced.
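
    One way to read this setup is as a Monte Carlo computation of the posterior probability of conformity: draw the systematic error from the distribution encoding the available knowledge, draw the random error, and count how often the inferred measurand meets the specification. A toy sketch (all distributions and limits are invented for illustration):

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    n = 200_000

    spec_upper = 10.0                  # upper specification limit
    y_obs = 9.4                        # observed indication

    # Model: y_obs = x + delta + noise, with prior knowledge of each error term.
    delta = rng.normal(0.3, 0.2, n)    # systematic error, e.g. from calibration
    noise = rng.normal(0.0, 0.25, n)   # random measurement error
    x_post = y_obs - delta - noise     # measurand samples given the indication
                                       # (flat prior on x assumed)

    print(f"P(conforming | data) = {np.mean(x_post <= spec_upper):.3f}")
    ```

    Setting the spread of `delta` to zero recovers the standard random-error-only result, mirroring the limit demonstrated in the paper.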

  5. The "diagonal effect": a systematic error in oblique antisaccades.

    PubMed

    Koehn, John D; Roy, Elizabeth; Barton, Jason J S

    2008-08-01

    Antisaccades are known to show greater variable error and also a systematic hypometria in their amplitude compared with visually guided prosaccades. In this study, we examined whether their accuracy in direction (as opposed to amplitude) also showed a systematic error. We had human subjects perform prosaccades and antisaccades to goals located at a variety of polar angles. In the first experiment, subjects made prosaccades or antisaccades to one of eight equidistant locations in each block, whereas in the second, they made saccades to one of two equidistant locations per block. In the third, they made antisaccades to one of two locations at different distances but with the same polar angle in each block. Regardless of block design, the results consistently showed a systematic saccadic error, in that oblique antisaccades (but not prosaccades) requiring unequal vertical and horizontal vector components were deviated toward the 45° diagonal meridians. This finding could not be attributed to range effects in either Cartesian or polar coordinates. A perceptual origin of the diagonal effect is suggested by similar systematic errors in other studies of memory-guided manual reaching or perceptual estimation of direction, and may indicate a common spatial bias when information about spatial location is uncertain.

  6. Medication Errors in the Southeast Asian Countries: A Systematic Review

    PubMed Central

    Salmasi, Shahrzad; Khan, Tahir Mehmood; Hong, Yet Hoi; Ming, Long Chiau; Wong, Tin Wui

    2015-01-01

    Background Medication error (ME) is a worldwide issue, but most studies on ME have been undertaken in developed countries and very little is known about ME in Southeast Asian countries. This study aimed to systematically identify and review research done on ME in Southeast Asian countries in order to identify common types of ME and estimate its prevalence in this region. Methods The literature relating to MEs in Southeast Asian countries was systematically reviewed in December 2014 using Embase, Medline, PubMed, ProQuest Central and CINAHL. Inclusion criteria were studies (in any language) that investigated the incidence and the contributing factors of ME in patients of all ages. Results The 17 included studies reported data from six of the eleven Southeast Asian countries: five studies in Singapore, four in Malaysia, three in Thailand, three in Vietnam, one in the Philippines and one in Indonesia. There were no data on MEs in Brunei, Laos, Cambodia, Myanmar and Timor. Of the seventeen included studies, eleven measured administration errors, four focused on prescribing errors, three on preparation errors, three on dispensing errors and two on transcribing errors. There was only one study of reconciliation error. Three studies were interventional. Discussion The most frequently reported types of administration error were incorrect time, omission error and incorrect dose. Staff shortages, and hence heavy workload for nurses, doctor/nurse distraction, and misinterpretation of the prescription/medication chart were identified as contributing factors of ME. There is a serious lack of studies on this topic in this region, which needs to be addressed if the issue of ME is to be fully understood and addressed. PMID:26340679

  7. The Effect of Systematic Error in Forced Oscillation Testing

    NASA Technical Reports Server (NTRS)

    Williams, Brianne Y.; Landman, Drew; Flory, Isaac L., IV; Murphy, Patrick C.

    2012-01-01

    One of the fundamental problems in flight dynamics is the formulation of aerodynamic forces and moments acting on an aircraft in arbitrary motion. Classically, conventional stability derivatives are used for the representation of aerodynamic loads in the aircraft equations of motion. However, for modern aircraft with highly nonlinear and unsteady aerodynamic characteristics undergoing maneuvers at high angle of attack and/or angular rates the conventional stability derivative model is no longer valid. Attempts to formulate aerodynamic model equations with unsteady terms are based on several different wind tunnel techniques: for example, captive, wind tunnel single degree-of-freedom, and wind tunnel free-flying techniques. One of the most common techniques is forced oscillation testing. However, the forced oscillation testing method does not address the systematic and systematic correlation errors from the test apparatus that cause inconsistencies in the measured oscillatory stability derivatives. The primary objective of this study is to identify the possible sources and magnitude of systematic error in representative dynamic test apparatuses. Sensitivities of the longitudinal stability derivatives to systematic errors are computed, using a high fidelity simulation of a forced oscillation test rig, and assessed using both Design of Experiments and Monte Carlo methods.

  8. Investigation of systematic CD distribution error on intrafield

    NASA Astrophysics Data System (ADS)

    Kim, Keunjun; Kim, Daewoo; Kang, Junghyun; Jeong, Inseok; Lee, Sungkoo; Kim, Hyeongsoo

    2016-03-01

    As feature sizes shrink, better critical dimension uniformity (CDU) is increasingly demanded to maintain device characteristics. Intra-field CDU is one of the main contributors to the total CD variation budget. In particular, the systematic CD distribution at shot, bank, and MAT boundaries should be considered carefully to minimize repeated errors and guarantee high yield, even though it is not prominent in the overall CDU value. In this paper, we investigated several factors that affect the systematic CD distribution error within the field. First, localized mask CD variation, caused by electron-beam scattering over a local region and by development and etch loading effects, is printed directly on the wafer; appropriate mask fabrication suppresses this CD variation at the boundary region. Second, the chemical flare effect is expected to create a CD gradient at the boundary region; changing the photo-acid concentration by means of sub-resolution assist features (SRAF) can reduce this gradient, and we demonstrated the SRAF size dependency for both positive tone develop (PTD) and negative tone develop (NTD) cases. Third, out-of-field stray light (OOFSL) from adjacent exposed fields causes a CD gradient at the field boundary; exposure dose reduction is expected to be a solution in this case. Even if CDU at the boundary region is perfectly controlled after mask patterning, other process issues such as etch and CMP loading effects also degrade the CD distribution at the boundary region. By considering the above factors, we optimized the systematic CD distribution error at the boundary region before etch. Furthermore, we compared several techniques for compensating the post-etch systematic CD distribution.

  9. Spatial reasoning in the treatment of systematic sensor errors

    SciTech Connect

    Beckerman, M.; Jones, J.P.; Mann, R.C.; Farkas, L.A.; Johnston, S.E.

    1988-01-01

    In processing ultrasonic and visual sensor data acquired by mobile robots, systematic errors can occur. The sonar errors include distortions in size and surface orientation due to the beam resolution, and false echoes. The vision errors include, among others, ambiguities in discriminating depth discontinuities from intensity gradients generated by variations in surface brightness. In this paper we present a methodology for the removal of systematic errors using data from the sonar sensor domain to guide the processing of information in the vision domain, and vice versa. During the sonar data processing some errors are removed from 2D navigation maps through pattern analyses and consistent-labelling conditions, using spatial reasoning about the sonar beam and object characteristics. Others are removed using visual information. In the vision data processing vertical edge segments are extracted using a Canny-like algorithm and labelled. Object edge features are then constructed from the segments using statistical and spatial analyses. A least-squares method is used during the statistical analysis, and sonar range data are used in the spatial analysis. 7 refs., 10 figs.

  10. Systematic Errors in GNSS Radio Occultation Data - Part 2

    NASA Astrophysics Data System (ADS)

    Foelsche, Ulrich; Danzer, Julia; Scherllin-Pirscher, Barbara; Schwärz, Marc

    2014-05-01

    The Global Navigation Satellite System (GNSS) Radio Occultation (RO) technique has the potential to deliver climate benchmark measurements of the upper troposphere and lower stratosphere (UTLS), since RO data can be traced, in principle, to the international standard for the second. Climatologies derived from RO data from different satellites indeed show an amazing consistency (better than 0.1 K). The value of RO data for climate monitoring is therefore increasingly recognized by the scientific community, but there is also concern about potential residual systematic errors in RO climatologies, which might be common to the data from all satellites. We have analyzed different potential error sources and present results on two of them. (1) If temperature is calculated from observed refractivity with the assumption that water vapor is zero, the product is called "dry temperature", which is commonly used to study the Earth's atmosphere, e.g., when analyzing temperature trends due to global warming. Dry temperature is a useful quantity, since its retrieval does not need additional background information. Concurrent trends in water vapor could, however, induce spurious trends in dry temperature. We analyzed this effect and identified the regions in the atmosphere where it is safe to take dry temperature as a proxy for physical temperature. We found that the heights where specified values of the difference between dry and physical temperature are encountered increase by about 150 m per decade, with little difference among all 38 climate models under investigation. (2) All current RO retrievals use a "classic" set of (measured) constants relating atmospheric microwave refractivity to temperature, pressure, and water vapor partial pressure. With the steadily increasing quality of RO climatologies, errors in these constants are no longer negligible. We show how these parameters can be related to more fundamental physical quantities (fundamental constants, the
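
    For reference, the "classic" constants enter through the Smith-Weintraub refractivity relation, and dry temperature follows by neglecting the water vapor term. A sketch with rounded constant values (p and e in hPa, T in K):

    ```latex
    N = k_1\,\frac{p}{T} + k_2\,\frac{e}{T^2},
    \qquad k_1 \approx 77.6~\mathrm{K\,hPa^{-1}},
    \quad  k_2 \approx 3.73\times 10^{5}~\mathrm{K^{2}\,hPa^{-1}},
    \qquad e \equiv 0 \;\Rightarrow\; T_{\mathrm{dry}} = k_1\,\frac{p}{N}.
    ```

    Since the pressure itself comes from hydrostatic integration of the refractivity profile, an error in k1 or k2, or a neglected moist term, propagates systematically into all retrieved temperatures.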

  11. Sensitivity analysis of geometric errors in additive manufacturing medical models.

    PubMed

    Pinto, Jose Miguel; Arrieta, Cristobal; Andia, Marcelo E; Uribe, Sergio; Ramos-Grez, Jorge; Vargas, Alex; Irarrazaval, Pablo; Tejos, Cristian

    2015-03-01

    Additive manufacturing (AM) models are used in medical applications for surgical planning, prosthesis design and teaching. For these applications, the accuracy of the AM models is essential. Unfortunately, this accuracy is compromised due to errors introduced by each of the building steps: image acquisition, segmentation, triangulation, printing and infiltration. However, the contribution of each step to the final error remains unclear. We performed a sensitivity analysis comparing errors obtained from a reference with those obtained modifying parameters of each building step. Our analysis considered global indexes to evaluate the overall error, and local indexes to show how this error is distributed along the surface of the AM models. Our results show that the standard building process tends to overestimate the AM models, i.e. models are larger than the original structures. They also show that the triangulation resolution and the segmentation threshold are critical factors, and that the errors are concentrated at regions with high curvatures. Errors could be reduced choosing better triangulation and printing resolutions, but there is an important need for modifying some of the standard building processes, particularly the segmentation algorithms.

  12. Systematic errors in cosmic microwave background polarization measurements

    NASA Astrophysics Data System (ADS)

    O'Dea, Daniel; Challinor, Anthony; Johnson, Bradley R.

    2007-04-01

    We investigate the impact of instrumental systematic errors on the potential of cosmic microwave background polarization experiments targeting primordial B-modes. To do so, we introduce spin-weighted Müller matrix-valued fields describing the linear response of the imperfect optical system and receiver, and give a careful discussion of the behaviour of the induced systematic effects under rotation of the instrument. We give the correspondence between the matrix components and known optical and receiver imperfections, and compare the likely performance of pseudo-correlation receivers and those that modulate the polarization with a half-wave plate. The latter is shown to have the significant advantage of not coupling the total intensity into polarization for perfect optics, but potential effects like optical distortions that may be introduced by the quasi-optical wave plate warrant further investigation. A fast method for tolerancing time-invariant systematic effects is presented, which propagates errors through to power spectra and cosmological parameters. The method extends previous studies to an arbitrary scan strategy, and eliminates the need for time-consuming Monte Carlo simulations in the early phases of instrument and survey design. We illustrate the method with both simple parametrized forms for the systematics and with beams based on physical-optics simulations. Example results are given in the context of next-generation experiments targeting tensor-to-scalar ratios r ~ 0.01.

  13. SU-E-T-613: Dosimetric Consequences of Systematic MLC Leaf Positioning Errors

    SciTech Connect

    Kathuria, K; Siebers, J

    2014-06-01

    Purpose: The purpose of this study is to determine the dosimetric consequences of systematic MLC leaf positioning errors for clinical IMRT patient plans so as to establish detection tolerances for quality assurance programs. Materials and Methods: Dosimetric consequences were simulated by extracting MLC delivery instructions from the TPS, altering the file by the specified error, reloading the delivery instructions into the TPS, recomputing dose, and extracting dose-volume metrics for one head-and-neck and one prostate patient. Machine error was simulated by offsetting MLC leaves in Pinnacle in a systematic way. Three different algorithms were followed for these systematic offsets: a systematic sequential one-leaf offset (one leaf offset in one segment per beam), a systematic uniform one-leaf offset (the same one leaf offset per segment per beam), and a systematic offset of a given number of leaves picked uniformly at random from a given number of segments (5 out of 10 total); a sketch of these three perturbations appears below. Dose to the PTV and normal tissue was simulated. Results: A systematic 5 mm offset of 1 leaf for all delivery segments of all beams resulted in a maximum PTV D98 deviation of 1%. Results showed very low dose error in all reasonably possible machine configurations, rare or otherwise, that could be simulated. Very low error in dose to the PTV and OARs was shown in all possible cases of one leaf per beam per segment being offset (<1%), or of only one leaf per beam being offset (<0.2%). The errors resulting from a high number of adjacent leaves (maximum of 5 out of 60 total leaf-pairs) being simultaneously offset in many (5) of the control points (10-18 total in all beams) per beam, in both the PTV and the OARs analyzed, were similarly low (<2-3%). Conclusions: The above results show that patient shifts and anatomical changes are the main source of errors in dose delivered, not machine delivery. These two sources of error are "visually complementary" and uncorrelated.
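
    The three offset algorithms are straightforward to express as perturbations of a leaf-position array. A schematic sketch of the perturbations only (the (beams, segments, leaves) array layout and all parameters are assumptions; TPS file handling is omitted):

    ```python
    import numpy as np

    rng = np.random.default_rng(42)

    def offset_sequential(pos, delta):
        """One leaf offset in one segment per beam (a different leaf per beam).
        pos: leaf positions in mm, shape (n_beams, n_segments, n_leaves)."""
        out = pos.copy()
        for b in range(pos.shape[0]):
            out[b, b % pos.shape[1], b % pos.shape[2]] += delta
        return out

    def offset_uniform(pos, leaf, delta):
        """The same single leaf offset in every segment of every beam."""
        out = pos.copy()
        out[:, :, leaf] += delta
        return out

    def offset_random(pos, n_leaves_off, n_segs_off, delta):
        """delta applied to n_leaves_off leaves in n_segs_off randomly chosen
        segments of each beam (e.g. 5 of 10 segments)."""
        out = pos.copy()
        for b in range(pos.shape[0]):
            segs = rng.choice(pos.shape[1], n_segs_off, replace=False)
            leaves = rng.choice(pos.shape[2], n_leaves_off, replace=False)
            out[np.ix_([b], segs, leaves)] += delta
        return out
    ```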

  14. Quality assessment of speckle patterns for DIC by consideration of both systematic errors and random errors

    NASA Astrophysics Data System (ADS)

    Su, Yong; Zhang, Qingchuan; Xu, Xiaohai; Gao, Zeren

    2016-11-01

    The performance of digital image correlation (DIC) is significantly influenced by the quality of the speckle patterns. Thus, it is crucial to present a valid and practical method to assess the quality of speckle patterns. However, existing assessment methods either lack a solid theoretical foundation or fail to consider the errors due to interpolation. In this work, we propose to assess the quality of speckle patterns by estimating the root mean square error (RMSE) of DIC, which is the square root of the sum of the squares of the systematic error and the random error. Two performance evaluation parameters, the maximum and the quadratic mean of the RMSE, are proposed to characterize the total error. An efficient algorithm is developed to estimate these parameters, and the correctness of this algorithm is verified by numerical experiments on both one-dimensional signals and actual speckle images. The influences of the correlation criterion, shape function order, and sub-pixel registration algorithm are briefly discussed. Compared to existing methods, the method presented in this paper is more valid because it considers both measurement accuracy and precision.
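
    In symbols, the quantities defined in this abstract combine as follows, with u_i the evaluated positions (notation mine):

    ```latex
    \mathrm{RMSE}(u_i) = \sqrt{\,e_{\mathrm{sys}}(u_i)^2 + e_{\mathrm{rand}}(u_i)^2\,},
    \qquad
    P_{\max} = \max_i \mathrm{RMSE}(u_i),
    \qquad
    P_{\mathrm{qm}} = \sqrt{\frac{1}{n}\sum_{i=1}^{n} \mathrm{RMSE}(u_i)^2}.
    ```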

  15. Newborn screening for inborn errors of metabolism: a systematic review.

    PubMed

    Seymour, C A; Thomason, M J; Chalmers, R A; Addison, G M; Bain, M D; Cockburn, F; Littlejohns, P; Lord, J; Wilcox, A H

    1997-01-01

    OBJECTIVES. To establish a database of literature and other evidence on neonatal screening programmes and technologies for inborn errors of metabolism. To undertake a systematic review of the data as a basis for evaluation of newborn screening for inborn errors of metabolism. To prepare an objective summary of the evidence on the appropriateness of and need for various existing and possible neonatal screening programmes for inborn errors of metabolism in relation to the natural history of these diseases. To identify gaps in existing knowledge and make recommendations for required primary research. To make recommendations for the future development and organisation of neonatal screening for inborn errors of metabolism in the UK. HOW THE RESEARCH WAS CONDUCTED. There were three parts to the research: (1) a systematic review of the literature on inborn errors of metabolism, neonatal screening programmes, new technologies for screening and economic factors, in which inclusion and exclusion criteria were applied, a working database of relevant papers was established, and all selected papers were read by two or three experts and critically appraised using a standard format; (2) a questionnaire sent to all newborn screening laboratories in the UK; and (3) site visits to assess new methodologies for newborn screening. Seven criteria for a screening programme, based on the principles formulated by Wilson and Jungner (WHO, 1968), were used to summarise the evidence: a clinically and biochemically well-defined disorder; a known incidence in populations relevant to the UK; a disorder associated with significant morbidity or mortality; effective treatment available; a period before onset during which intervention improves outcome; an ethical, safe, simple and robust screening test; and cost-effectiveness of screening. The classical definition of an inborn error of metabolism was used (i.e., a monogenic disease resulting in deficient activity in a single enzyme in a pathway of

  16. TRAINING ERRORS AND RUNNING RELATED INJURIES: A SYSTEMATIC REVIEW

    PubMed Central

    Buist, Ida; Sørensen, Henrik; Lind, Martin; Rasmussen, Sten

    2012-01-01

    Purpose: The purpose of this systematic review was to examine the link between training characteristics (volume, duration, frequency, and intensity) and running related injuries. Methods: A systematic search was performed in PubMed, Web of Science, Embase, and SportDiscus. Studies were included if they examined novice, recreational, or elite runners between the ages of 18 and 65. Exposure variables were training characteristics defined as volume, distance or mileage, time or duration, frequency, intensity, speed or pace, or similar terms. The outcome of interest was Running Related Injuries (RRI) in general or specific RRI in the lower extremity or lower back. Methodological quality was evaluated using quality assessment tools of 11 to 16 items. Results: After examining 4561 titles and abstracts, 63 articles were identified as potentially relevant. Finally, nine retrospective cohort studies, 13 prospective cohort studies, six case-control studies, and three randomized controlled trials were included. The mean quality score was 44.1%. Conflicting results were reported on the relationships between volume, duration, intensity, and frequency and RRI. Conclusion: It was not possible to identify which training errors were related to running related injuries. Still, well-supported data on which training errors relate to or cause running related injuries are highly important for determining proper prevention strategies. If methodological limitations in measuring training variables can be resolved, more work can be conducted to define training and the interactions between different training variables, create several hypotheses, test the hypotheses in a large-scale prospective study, and explore cause and effect relationships in randomized controlled trials. Level of evidence: 2a PMID:22389869

  17. Jason-2 systematic error analysis in the GPS derived orbits

    NASA Astrophysics Data System (ADS)

    Melachroinos, S.; Lemoine, F. G.; Zelensky, N. P.; Rowlands, D. D.; Luthcke, S. B.; Chinn, D. S.

    2011-12-01

    Several results related to global or regional sea level change still too often rely on the assumption that orbit errors arising from the adoption of station coordinates can be neglected in the total error budget (Cerri et al. 2010). In particular, instantaneous crust-fixed coordinates are obtained by adding the geophysical high-frequency variations to the linear ITRF model. In principle, geocenter motion should also be included in this computation, in order to reference these coordinates to the center of mass of the whole Earth. This correction is currently not applied when computing GDR orbits. Cerri et al. (2010) performed an analysis of systematic errors common to all coordinates along the North/South direction, as this type of bias, also known as a Z-shift, has a clear impact on MSL estimates due to the unequal distribution of continental surface between the northern and southern hemispheres. The goal of this paper is to study specifically the main source of error, which comes from the current imprecision in the Z-axis realization of the frame. We focus here on the time variability of this Z-shift, which we can decompose into a drift and a periodic component due to the presumably omitted geocenter motion. A series of Jason-2 GPS-only orbits have been computed at NASA GSFC, using both IGS05 and IGS08. These orbits have been shown to agree radially at less than 1 cm RMS with our SLR/DORIS std0905 and std1007 reduced-dynamic orbits and with orbits produced by other analysis centers (Melachroinos et al. 2011). Our GPS-only Jason-2 orbit accuracy is assessed using a number of tests, including analysis of independent SLR and altimeter crossover residuals, orbit overlap differences, and direct comparison to orbits generated at GSFC using SLR and DORIS tracking and to orbits generated externally at other centers. Tests based on SLR-crossover residuals provide the best performance indicator for independent validation of the NASA/GSFC GPS-only reduced-dynamic orbits. Reduced

  18. Using Laser Scanners to Augment the Systematic Error Pointing Model

    NASA Astrophysics Data System (ADS)

    Wernicke, D. R.

    2016-08-01

    The antennas of the Deep Space Network (DSN) rely on precise pointing algorithms to communicate with spacecraft that are billions of miles away. Although the existing systematic error pointing model is effective at reducing blind pointing errors due to static misalignments, several of its terms have a strong dependence on seasonal and even daily thermal variation and are thus not easily modeled. Changes in the thermal state of the structure create a separation from the model and introduce a varying pointing offset. Compensating for this varying offset is possible by augmenting the pointing model with laser scanners. In this approach, laser scanners mounted to the alidade measure structural displacements while a series of transformations generate correction angles. Two sets of experiments were conducted in August 2015 using commercially available laser scanners. When compared with historical monopulse corrections under similar conditions, the computed corrections are within 3 mdeg of the mean. However, although the results show promise, several key challenges relating to the sensitivity of the optical equipment to sunlight render an implementation of this approach impractical. Other measurement devices such as inclinometers may be implementable at a significantly lower cost.

  19. Quantifying Systematic Errors and Total Uncertainties in Satellite-based Precipitation Measurements

    NASA Astrophysics Data System (ADS)

    Tian, Y.; Peters-Lidard, C. D.

    2010-12-01

    Determining the uncertainties in precipitation measurements by satellite remote sensing is of fundamental importance to many applications. These uncertainties result mostly from the interplay of systematic errors and random errors. In this presentation, we will summarize our recent efforts in quantifying the error characteristics in satellite-based precipitation estimates. Both systematic errors and total uncertainties have been analyzed for six different TRMM-era precipitation products (3B42, 3B42RT, CMORPH, PERSIANN, NRL and GSMaP). For systematic errors, we devised an error decomposition to separate errors in precipitation estimates into three independent components, hit biases, missed precipitation and false precipitation. This decomposition scheme reveals more error features and provides a better link to the error sources than conventional analysis, because in the latter these error components tend to cancel one another when aggregated or averaged in space or time. Our analysis reveals that the six different products share many error features. For example, they all detected strong precipitation (> 40 mm/day) well, but with various biases. They tend to over-estimate in summer and under-estimate in winter. They miss a significant amount of light precipitation (< 10 mm/day). In addition, hit biases and missed precipitation are the two leading error sources. However, their systematic errors also exhibit substantial differences, especially in winter and over rough topography, which greatly contribute to the uncertainties. To estimate the measurement uncertainties, we calculated the measurement spread from the ensemble of these six quasi-independent products. A global map of measurement uncertainties was thus produced. The map yields a global view of the error characteristics and their regional and seasonal variations, and reveals many undocumented error features over areas with no validation data available. The uncertainties are relatively small (40-60%) over the
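
    The decomposition referred to splits the total bias at each point into hit bias (both products see rain), missed precipitation, and false precipitation, so that total = hit - missed + false (approximately). A minimal sketch over paired time series (the 0.1 mm/day rain/no-rain threshold is an illustrative assumption):

    ```python
    import numpy as np

    def decompose_bias(est, ref, wet=0.1):
        """Split the total bias (est - ref) into hit, missed and false parts.

        est, ref: paired precipitation series (mm/day); wet: rain threshold.
        Returns time-mean total, hit bias, missed and false precipitation.
        """
        est, ref = np.asarray(est, float), np.asarray(ref, float)
        hits = (est >= wet) & (ref >= wet)
        miss = (est < wet) & (ref >= wet)
        false = (est >= wet) & (ref < wet)

        hit_bias = np.where(hits, est - ref, 0.0).mean()
        missed = np.where(miss, ref, 0.0).mean()    # rain the product missed
        false_p = np.where(false, est, 0.0).mean()  # rain it invented
        total = est.mean() - ref.mean()
        # total ~= hit_bias - missed + false_p, up to below-threshold amounts;
        # kept separate, the three parts cannot cancel one another the way
        # they do in a raw aggregate bias.
        return total, hit_bias, missed, false_p
    ```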

  20. Treatment of systematic errors in land data assimilation systems

    NASA Astrophysics Data System (ADS)

    Crow, W. T.; Yilmaz, M.

    2012-12-01

    Data assimilation systems are generally designed to minimize the influence of random error on the estimation of system states. Yet, experience with land data assimilation systems has also revealed the presence of large systematic differences between model-derived and remotely-sensed estimates of land surface states. Such differences are commonly resolved prior to data assimilation through implementation of a pre-processing rescaling step whereby observations are scaled (or non-linearly transformed) to somehow "match" comparable predictions made by an assimilation model. While the rationale for removing systematic differences in means (i.e., bias) between models and observations is well-established, relatively little theoretical guidance is currently available to determine the appropriate treatment of higher-order moments during rescaling. This talk presents a simple analytical argument to define an optimal linear-rescaling strategy for observations prior to their assimilation into a land surface model. While a technique based on triple collocation theory is shown to replicate this optimal strategy, commonly-applied rescaling techniques (e.g., so called "least-squares regression" and "variance matching" approaches) are shown to represent only sub-optimal approximations to it. Since the triple collocation approach is likely infeasible in many real-world circumstances, general advice for deciding between various feasible (yet sub-optimal) rescaling approaches will be presented with an emphasis of the implications of this work for the case of directly assimilating satellite radiances. While the bulk of the analysis will deal with linear rescaling techniques, its extension to nonlinear cases will also be discussed.
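
    The rescaling options under comparison differ only in the slope applied to the observation anomalies before assimilation. A compact sketch of the three slope estimators (standard forms; the notation is mine and the talk's derivation is not reproduced):

    ```python
    import numpy as np

    def rescale(obs, model, slope):
        """Linearly map observations into the model's climatology."""
        return model.mean() + slope * (obs - obs.mean())

    def slope_regression(obs, model):
        """'Least-squares regression' choice."""
        return np.cov(model, obs, ddof=1)[0, 1] / np.var(obs, ddof=1)

    def slope_variance_matching(obs, model):
        """'Variance matching' choice."""
        return model.std(ddof=1) / obs.std(ddof=1)

    def slope_triple_collocation(obs, model, third):
        """Cross-covariance ratio isolates the true-signal variance, assuming
        the three products have mutually independent errors."""
        cov = lambda a, b: np.cov(a, b, ddof=1)[0, 1]
        return cov(model, third) / cov(obs, third)
    ```

    When a third independent product is available, `rescale(obs, model, slope_triple_collocation(obs, model, third))` implements the optimal strategy; the other two slopes are the sub-optimal approximations discussed in the abstract.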

  1. Minor Planet Observations to Identify Reference System Systematic Errors

    NASA Astrophysics Data System (ADS)

    Hemenway, Paul D.; Duncombe, R. L.; Castelaz, M. W.

    2011-04-01

    In the 1930's Brouwer proposed using minor planets to correct the Fundamental System of celestial coordinates. Since then, many projects have used or proposed to use visual, photographic, photo detector, and space based observations to that end. From 1978 to 1990, a project was undertaken at the University of Texas utilizing the long focus and attendant advantageous plate scale (c. 7.37"/mm) of the 2.1m Otto Struve reflector's Cassegrain focus. The project followed precepts given in 1979. The program had several potential advantages over previous programs including high inclination orbits to cover half the celestial sphere, and, following Kristensen, the use of crossing points to remove entirely systematic star position errors from some observations. More than 1000 plates were obtained of 34 minor planets as part of this project. In July 2010 McDonald Observatory donated the plates to the Pisgah Astronomical Research Institute (PARI) in North Carolina. PARI is in the process of renovating the Space Telescope Science Institute GAMMA II modified PDS microdensitometer to scan the plates in the archives. We plan to scan the minor planet plates, reduce the plates to the densified ICRS using the UCAC4 positions (or the best available positions at the time of the reductions), and then determine the utility of attempting to find significant systematic corrections. Here we report the current status of various aspects of the project. Support from the National Science Foundation in the last millennium is gratefully acknowledged, as is help from Judit Ries and Wayne Green in packing and transporting the plates.

  2. Systematics for checking geometric errors in CNC lathes

    NASA Astrophysics Data System (ADS)

    Araújo, R. P.; Rolim, T. L.

    2015-10-01

    Non-idealities present in machine tools directly compromise both the geometry and the dimensions of machined parts, generating distortions in the project. Given the competitive scenario among different companies, it is necessary to know the geometric behavior of these machines in order to establish their processing capability, avoiding waste of time and materials as well as satisfying customer requirements. Although geometric tests are important and necessary to verify that a machine is operating correctly, thereby preventing future damage, most users do not apply such tests, whether for lack of knowledge or lack of motivation, essentially because of two factors: the long testing time and the high cost of testing. This work proposes a systematics for checking straightness and perpendicularity errors in CNC lathes that demands little time and cost while offering high metrological reliability, to be used on the factory floors of small and medium-size businesses to ensure the quality of their products and make them competitive.

  3. Note on apparent systematic and periodic errors in Geosat orbits

    NASA Technical Reports Server (NTRS)

    Sirkes, Ziv; Wunsch, Carl

    1990-01-01

    Apparent errors in Geosat orbits are estimated directly from the measurements. There are technical difficulties in making such estimates from quasi-periodically gapped data. The dominant orbit errors display a line spectrum, in which the once-per-orbit error peak is split in a complex way into a series of narrow lines, with other errors present as well. The spatial pattern of the errors is not random, displaying differences between mean ascending and descending orbits which are coherent over thousands of kilometers. Orbit errors do not decorrelate within a few orbit periods.

  4. A study of systematic errors in the PMD CamBoard nano

    NASA Astrophysics Data System (ADS)

    Chow, Jacky C. K.; Lichti, Derek D.

    2013-04-01

    Time-of-flight-based three-dimensional cameras are the state-of-the-art imaging modality for acquiring rapid 3D position information. Unlike any other technology on the market, they can deliver 2D images co-located with distance information at every pixel location, without any shadows. Recent technological advancements have begun miniaturizing such technology to be more suitable for laptops and eventually cellphones. This paper explores the systematic errors inherent to the new PMD CamBoard nano camera. As the world's most compact 3D time-of-flight camera, it has applications in a wide range of domains, such as gesture control and facial recognition. To model the systematic errors, a one-step point-based and plane-based bundle adjustment method is used. It simultaneously estimates all systematic errors and unknown parameters by minimizing the residuals of image measurements, distance measurements, and amplitude measurements in a least-squares sense. The presented self-calibration method only requires a standard checkerboard target on a flat plane, making it a suitable candidate for on-site calibration. In addition, because distances are only constrained to lie on a plane, the raw pixel-by-pixel distance observations can be used. This makes it possible to increase the number of distance observations in the adjustment with ease. The results from this paper indicate that amplitude-dependent range errors are the dominant error source for the nano under low-scattering imaging configurations. After user self-calibration, the RMSE of the range observations is reduced by almost 50%, delivering range measurements at a precision of approximately 2.5 cm within a 70 cm interval.

  5. Application of Bayesian Systematic Error Correction to Kepler Photometry

    NASA Astrophysics Data System (ADS)

    Van Cleve, Jeffrey E.; Jenkins, J. M.; Twicken, J. D.; Smith, J. C.; Fanelli, M. N.

    2011-01-01

    In a companion talk (Jenkins et al.), we present a Bayesian Maximum A Posteriori (MAP) approach to systematic error removal in Kepler photometric data, in which a subset of intrinsically quiet and highly correlated stars is used to establish the range of "reasonable" robust fit parameters, and hence mitigate the loss of astrophysical signal and noise injection on transit time scales (<3 d), which afflict Least Squares (LS) fitting. In this poster, we illustrate the concept in detail by applying MAP to publicly available Kepler data, and give an overview of its application to all Kepler data collected through June 2010. We define the correlation function between normalized, mean-removed light curves and select a subset of highly correlated stars. This ensemble of light curves can then be combined with ancillary engineering data and image motion polynomials to form a design matrix from which the principal components are extracted by reduced-rank SVD decomposition. MAP is then represented in the resulting orthonormal basis, and applied to the set of all light curves. We show that the correlation matrix after treatment is diagonal, and present diagnostics such as correlation coefficient histograms, singular value spectra, and principal component plots. We then show the benefits of MAP applied to variable stars with RR Lyrae, harmonic, chaotic, and eclipsing binary waveforms, and examine the impact of MAP on transit waveforms and detectability. After high-pass filtering the MAP output, we show that MAP does not increase noise on transit time scales, compared to LS. We conclude with a discussion of current work selecting input vectors for the design matrix, representing and numerically solving MAP for non-Gaussian probability distribution functions (PDFs), and suppressing high-frequency noise injection with Lagrange multipliers. Funding for this mission is provided by NASA, Science Mission Directorate.
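
    As a schematic of the MAP idea (not the Kepler pipeline itself), one can extract a co-trend basis from quiet, highly correlated light curves by reduced-rank SVD and remove a prior-regularized projection of that basis from a target light curve. In the sketch below a zero-mean Gaussian prior on the coefficients reduces MAP to ridge regression; the real pipeline builds empirical priors from the ensemble fits, and all names and parameter values here are illustrative.

```python
import numpy as np

def map_detrend(target, quiet_stars, rank=8, prior_var=1e-2):
    """target: (n_cadence,) flux; quiet_stars: (n_stars, n_cadence).
    Removes a MAP-regularized projection of the co-trend basis."""
    X = quiet_stars - quiet_stars.mean(axis=1, keepdims=True)
    # Reduced-rank SVD: rows of Vt are orthonormal co-trend basis vectors.
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    B = Vt[:rank]                              # (rank, n_cadence)
    y = target - target.mean()
    # MAP with Gaussian prior N(0, prior_var) on the coefficients minimizes
    # ||y - B.T c||^2 + ||c||^2 / prior_var, i.e. ridge regression.
    A = B @ B.T + np.eye(rank) / prior_var
    c = np.linalg.solve(A, B @ y)
    return y - B.T @ c                         # systematics-removed light curve
```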

  6. An analysis of the least-squares problem for the DSN systematic pointing error model

    NASA Technical Reports Server (NTRS)

    Alvarez, L. S.

    1991-01-01

    A systematic pointing error model is used to calibrate antennas in the Deep Space Network. The least squares problem is described and analyzed along with the solution methods used to determine the model's parameters. Specifically studied are the rank degeneracy problems resulting from beam pointing error measurement sets that incorporate inadequate sky coverage. A least squares parameter subset selection method is described and its applicability to the systematic error modeling process is demonstrated on a Voyager 2 measurement distribution.
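
    One standard way to realize least-squares parameter subset selection, sketched below as a generic stand-in rather than the report's exact procedure, is column-pivoted QR: keep the columns whose pivoted R diagonal stays above a tolerance and freeze the remaining parameters at zero.

```python
import numpy as np
from scipy.linalg import qr, lstsq

def subset_least_squares(A, b, tol=1e-8):
    """Solve a rank-degenerate least-squares problem by fitting only a
    well-conditioned subset of parameters: column-pivoted QR orders the
    columns, and those whose diagonal R entry falls below tol*|R[0,0]|
    are frozen at zero."""
    Q, R, piv = qr(A, pivoting=True)
    diag = np.abs(np.diag(R))
    k = int(np.sum(diag > tol * diag[0]))   # numerical rank of A
    sol, *_ = lstsq(A[:, piv[:k]], b)
    x = np.zeros(A.shape[1])
    x[piv[:k]] = sol
    return x, piv[:k]                       # estimate and selected columns
```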

  7. Pressure Measurements Using an Airborne Differential Absorption Lidar. Part 1; Analysis of the Systematic Error Sources

    NASA Technical Reports Server (NTRS)

    Flamant, Cyrille N.; Schwemmer, Geary K.; Korb, C. Laurence; Evans, Keith D.; Palm, Stephen P.

    1999-01-01

    Remote airborne measurements of the vertical and horizontal structure of the atmospheric pressure field in the lower troposphere are made with an oxygen differential absorption lidar (DIAL). A detailed analysis of this measurement technique is provided, which includes corrections for imprecise knowledge of the detector background level, the oxygen absorption line parameters, and variations in the laser output energy. In addition, we analyze other possible sources of systematic errors, including spectral effects related to aerosol and molecular scattering, interference by rotational Raman scattering, and interference by isotopic oxygen lines.

  8. The systematic and random errors determination using realtime 3D surface tracking system in breast cancer

    NASA Astrophysics Data System (ADS)

    Kanphet, J.; Suriyapee, S.; Dumrongkijudom, N.; Sanghangthum, T.; Kumkhwao, J.; Wisetrintong, M.

    2016-03-01

    The purpose of this study was to determine patient setup uncertainties in deep inspiration breath-hold (DIBH) radiation therapy for left breast cancer patients using a real-time 3D surface tracking system. Six breast cancer patients treated with 6 MV photon beams from a TrueBeam linear accelerator were selected. The patient setup errors and motion during treatment were observed and calculated for interfraction and intrafraction motions. The systematic and random errors were calculated in the vertical, longitudinal and lateral directions. From 180 images tracked before and during treatment, the maximum systematic errors of interfraction and intrafraction motion were 0.56 mm and 0.23 mm, and the maximum random errors of interfraction and intrafraction motion were 1.18 mm and 0.53 mm, respectively. Interfraction motion was more pronounced than intrafraction motion, while the systematic error had less impact than the random error. In conclusion, the intrafraction motion error from patient setup uncertainty is about half of the interfraction motion error, which has less impact due to the stability of organ movement under DIBH. The systematic reproducibility is also half of the random error because the high efficiency of a modern linac can reduce the systematic uncertainty effectively, while the random errors are uncontrollable.
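
    Population statistics of this kind are conventionally computed as follows (the common convention is assumed here, not necessarily the authors' exact procedure): the systematic error is the standard deviation of the per-patient mean setup errors, and the random error is the root mean square of the per-patient standard deviations.

```python
import numpy as np

def setup_error_stats(errors_by_patient):
    """errors_by_patient: list of 1-D arrays, one array of per-fraction
    setup errors (mm, one axis) per patient. Returns (systematic, random):
    systematic = SD of per-patient means, random = RMS of per-patient SDs."""
    means = np.array([np.mean(e) for e in errors_by_patient])
    sds = np.array([np.std(e, ddof=1) for e in errors_by_patient])
    systematic = np.std(means, ddof=1)
    random_ = np.sqrt(np.mean(sds**2))
    return systematic, random_
```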

  9. Investigating the epidemiology of medication errors and error-related adverse drug events (ADEs) in primary care, ambulatory care and home settings: a systematic review protocol

    PubMed Central

    Assiri, Ghadah Asaad; Grant, Liz; Aljadhey, Hisham; Sheikh, Aziz

    2016-01-01

    Introduction There is a need to better understand the epidemiology of medication errors and error-related adverse events in community care contexts. Methods and analysis We will systematically search the following databases: Cumulative Index to Nursing and Allied Health Literature (CINAHL), EMBASE, Eastern Mediterranean Regional Office of the WHO (EMRO), MEDLINE, PsycINFO and Web of Science. In addition, we will search Google Scholar and contact an international panel of experts to search for unpublished and in progress work. The searches will cover the time period January 1990–December 2015 and will yield data on the incidence or prevalence of and risk factors for medication errors and error-related adverse drug events in adults living in community settings (ie, primary care, ambulatory and home). Study quality will be assessed using the Critical Appraisal Skills Program quality assessment tool for cohort and case–control studies, and cross-sectional studies will be assessed using the Joanna Briggs Institute Critical Appraisal Checklist for Descriptive Studies. Meta-analyses will be undertaken using random-effects modelling using STATA (V.14) statistical software. Ethics and dissemination This protocol will be registered with PROSPERO, an international prospective register of systematic reviews, and the systematic review will be reported in the peer-reviewed literature using Preferred Reporting Items for Systematic Reviews and Meta-Analyses. PMID:27580826

  10. A review on the impact of systematic safety processes for the control of error in medicine.

    PubMed

    Damiani, Gianfranco; Pinnarelli, Luigi; Scopelliti, Lucia; Sommella, Lorenzo; Ricciardi, Walter

    2009-07-01

    Among risk management initiatives, systematic safety processes (SSPs), implemented within health care organizations, could be useful in managing patient safety. The purpose of this article is to conduct a systematic literature review assessing the impact of SSPs on different error categories. Articles that investigated the relation between SSPs and clinical and organizational outcomes were selected from the scientific literature. The proportion and impact of proactive and reactive SSPs (PSSPs and RSSPs) were calculated across five error categories. Proactive interventions had a more positive impact than reactive ones in reducing medication errors, technical errors and errors due to personnel. PSSPs and RSSPs had similar effects in reducing errors related to a wrong procedure. A single reactive study reported a non-positive effect on communication errors. A clear predominance of the impact of proactive processes over reactive ones is reported. This article can help decision makers identify which SSP is the most appropriate against specific error categories. PMID:19564841

  11. Experimental investigation of the systematic error on photomechanic methods induced by camera self-heating.

    PubMed

    Ma, Qinwei; Ma, Shaopeng

    2013-03-25

    The systematic error in photomechanic methods caused by self-heating-induced image expansion when using a digital camera was systematically studied, and a new physical model to explain the mechanism has been proposed and verified. The experimental results showed that the thermal expansion of the camera outer case and lens mount, rather than mechanical components within the camera, was the main reason for image expansion. The corresponding systematic errors for both image-analysis and fringe-analysis based photomechanic methods were analyzed and measured, and error compensation techniques were then proposed and verified.

  12. When should systematic patient positioning errors in radiotherapy be corrected?

    PubMed

    Bortfeld, Thomas; van Herk, Marcel; Jiang, Steve B

    2002-12-01

    One way to reduce patient set-up errors in radiotherapy is to measure the position during the first N treatment fractions, and to do an unconditional correction of the set-up position once at the (N + 1)th fraction. This strategy is known as the 'no action level' protocol. The question is when to do the correction, i.e. what is the optimum value of N? We determine N by minimizing the expectation value of the total quadratic set-up error taken over all fractions. A central assumption that we make is that there is no time trend in the patient set-up. The result is a simple formula for the value of N, which is proportional to the square root of the total number of fractions, and to the ratio of the execution (delivery) error and preparation error. We also provide a formula for cases where the measurement error is not negligible. For typical cases the optimum value is N = 4. Because the optimum is shallow, the exact choice of N is uncritical.
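
    Taking the stated proportionality at face value with a unit constant (an assumption for illustration; the paper derives the exact expression), the rule of thumb can be coded directly, and it reproduces the quoted typical optimum N = 4 for, e.g., 30 fractions with a 2 mm execution error and a 3 mm preparation error.

```python
import numpy as np

def optimal_correction_fraction(n_fractions, sigma_exec, sigma_prep):
    """'No action level' protocol: measure the first N fractions, then apply
    one unconditional correction at fraction N+1. Assumes
    N ~ sqrt(n_fractions) * sigma_exec / sigma_prep with unit constant."""
    return max(1, int(round(np.sqrt(n_fractions) * sigma_exec / sigma_prep)))

# e.g. 30 fractions, 2 mm execution error, 3 mm preparation error -> N = 4
print(optimal_correction_fraction(30, 2.0, 3.0))
```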

  13. Analysis of possible systematic errors in the Oslo method

    SciTech Connect

    Larsen, A. C.; Guttormsen, M.; Buerger, A.; Goergen, A.; Nyhus, H. T.; Rekstad, J.; Siem, S.; Toft, H. K.; Tveten, G. M.; Wikan, K.; Krticka, M.; Betak, E.; Schiller, A.; Voinov, A. V.

    2011-03-15

    In this work, we have reviewed the Oslo method, which enables the simultaneous extraction of the level density and γ-ray transmission coefficient from a set of particle-γ coincidence data. Possible errors and uncertainties have been investigated. Typical data sets from various mass regions as well as simulated data have been tested against the assumptions behind the data analysis.

  14. On causes of the origin of systematic errors in latitude determination with the Moscow PZT.

    NASA Astrophysics Data System (ADS)

    Volchkov, A. A.; Gutsalo, G. A.

    Peculiarities of eye response during visual measurements of star positions on photographic plates are considered. It is shown that variations of the plate background density can be a source of systematic errors during latitude determinations with a PZT.

  15. Strategies for minimizing the impact of systematic errors on land data assimilation

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Data assimilation concerns itself primarily with the impact of random stochastic errors on state estimation. However, the developers of land data assimilation systems are commonly faced with systematic errors arising from both the parameterization of a land surface model and the need to pre-process ...

  16. On systematic errors in spectral line parameters retrieved with the Voigt line profile

    NASA Astrophysics Data System (ADS)

    Kochanov, V. P.

    2012-08-01

    Systematic errors inherent in the Voigt line profile are analyzed. Molecular spectrum processing with the Voigt profile is shown to underestimate line intensities by 1-4%, with the errors in line positions being 0.0005 cm-1 and the decrease in pressure broadening coefficients varying from 5% to 55%.
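
    For reference, the Voigt profile at issue is the convolution of a Gaussian (Doppler) component with a Lorentzian (pressure-broadened) component, conveniently evaluated through the Faddeeva function w(z); fitting this shape to lines affected by collisional narrowing is what biases the retrieved parameters. A standard area-normalized implementation:

```python
import numpy as np
from scipy.special import wofz

def voigt(x, sigma, gamma):
    """Voigt profile: Gaussian (std sigma) convolved with a Lorentzian
    (HWHM gamma), evaluated via the Faddeeva function. Unit area."""
    z = (x + 1j * gamma) / (sigma * np.sqrt(2.0))
    return np.real(wofz(z)) / (sigma * np.sqrt(2.0 * np.pi))
```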

  17. Using ridge regression in systematic pointing error corrections

    NASA Technical Reports Server (NTRS)

    Guiar, C. N.

    1988-01-01

    A pointing error model is used in the antenna calibration process. Data from spacecraft or radio star observations are used to determine the parameters in the model. However, the regression variables are not truly independent, displaying a condition known as multicollinearity. Ridge regression, a biased estimation technique, is used to combat the multicollinearity problem. Two data sets pertaining to Voyager 1 spacecraft tracking (days 105 and 106 of 1987) were analyzed using both linear least squares and ridge regression methods. The advantages and limitations of employing the technique are presented. The problem is not yet fully resolved.
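
    Ridge regression replaces the ordinary least-squares normal equations with a damped version, trading a small, controlled bias for a large variance reduction when regressors are nearly collinear, as with the correlated pointing-model variables described above. A minimal sketch (choosing the ridge parameter lam is exactly the open issue the report discusses):

```python
import numpy as np

def ridge_fit(A, b, lam):
    """Biased (ridge) estimate: solve (A^T A + lam*I) x = A^T b. For lam = 0
    this is ordinary least squares; lam > 0 damps the variance inflation
    caused by multicollinearity among the columns of A."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)
```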

  18. Second-order systematic errors in Mueller matrix dual rotating compensator ellipsometry.

    PubMed

    Broch, Laurent; En Naciri, Aotmane; Johann, Luc

    2010-06-10

    We investigate the systematic errors at the second order for a Mueller matrix ellipsometer in the dual rotating compensator configuration. Starting from a general formalism, we derive explicit second-order errors in the Mueller matrix coefficients of a given sample. We present the errors caused by the azimuthal inaccuracy of the optical components and their influences on the measurements. We demonstrate that methods based on four-zone or two-zone averaging measurements are effective in cancelling the errors due to the compensators. For the other elements, it is shown that the systematic errors at the second order can be cancelled only for some coefficients of the Mueller matrix. The calibration step for the analyzer and the polarizer is developed. This important step is necessary to avoid azimuthal inaccuracy in these elements. Numerical simulations and experimental measurements are presented and discussed.

  19. MAXIMUM LIKELIHOOD ANALYSIS OF SYSTEMATIC ERRORS IN INTERFEROMETRIC OBSERVATIONS OF THE COSMIC MICROWAVE BACKGROUND

    SciTech Connect

    Zhang Le; Timbie, Peter; Karakci, Ata; Korotkov, Andrei; Tucker, Gregory S.; Sutter, Paul M.; Wandelt, Benjamin D.; Bunn, Emory F.

    2013-06-01

    We investigate the impact of instrumental systematic errors in interferometric measurements of the cosmic microwave background (CMB) temperature and polarization power spectra. We simulate interferometric CMB observations to generate mock visibilities and estimate power spectra using the statistically optimal maximum likelihood technique. We define a quadratic error measure to determine allowable levels of systematic error that do not induce power spectrum errors beyond a given tolerance. As an example, in this study we focus on differential pointing errors. The effects of other systematics can be simulated by this pipeline in a straightforward manner. We find that, in order to accurately recover the underlying B-modes for r = 0.01 at 28 < l < 384, Gaussian-distributed pointing errors must be controlled to 0.7° root mean square for an interferometer with an antenna configuration similar to QUBIC, in agreement with analytical estimates. Only the statistical uncertainty for 28 < l < 88 would be changed at the ~10% level. With the same instrumental configuration, we find that the pointing errors would slightly bias the 2σ upper limit of the tensor-to-scalar ratio r by ~10%. We also show that the impact of pointing errors on the TB and EB measurements is negligibly small.

  20. A study for systematic errors of the GLA forecast model in tropical regions

    NASA Technical Reports Server (NTRS)

    Chen, Tsing-Chang; Baker, Wayman E.; Pfaendtner, James; Corrigan, Martin

    1988-01-01

    From the sensitivity studies performed with the Goddard Laboratory for Atmospheres (GLA) analysis/forecast system, it was revealed that the forecast errors in the tropics affect the ability to forecast midlatitude weather in some cases. Apparently, the forecast errors occurring in the tropics can propagate to midlatitudes. Therefore, the systematic error analysis of the GLA forecast system becomes a necessary step in improving the model's forecast performance. The major effort of this study is to examine the possible impact of the hydrological-cycle forecast error on dynamical fields in the GLA forecast system.

  1. Reducing Systematic Centroid Errors Induced by Fiber Optic Faceplates in Intensified High-Accuracy Star Trackers

    PubMed Central

    Xiong, Kun; Jiang, Jie

    2015-01-01

    Compared with traditional star trackers, intensified high-accuracy star trackers equipped with an image intensifier exhibit overwhelmingly superior dynamic performance. However, the multiple-fiber-optic faceplate structure in the image intensifier complicates the optoelectronic detecting system of star trackers and may cause considerable systematic centroid errors and poor attitude accuracy. All the sources of systematic centroid errors related to fiber optic faceplates (FOFPs) throughout the detection process of the optoelectronic system were analyzed. Based on the general expression of the systematic centroid error deduced in the frequency domain and the FOFP modulation transfer function, an accurate expression that described the systematic centroid error of FOFPs was obtained. Furthermore, reduction of the systematic error between the optical lens and the input FOFP of the intensifier, the one among multiple FOFPs and the one between the output FOFP of the intensifier and the imaging chip of the detecting system were discussed. Two important parametric constraints were acquired from the analysis. The correctness of the analysis on the optoelectronic detecting system was demonstrated through simulation and experiment. PMID:26016920

  2. SYSTEMATIC ERROR REDUCTION: NON-TILTED REFERENCE BEAM METHOD FOR LONG TRACE PROFILER.

    SciTech Connect

    QIAN,S.; QIAN, K.; HONG, Y.; SENG, L.; HO, T.; TAKACS, P.

    2007-08-25

    Systematic error in the Long Trace Profiler (LTP) has become the major error source as measurement accuracy enters the nanoradian and nanometer regime. Great efforts have been made to reduce the systematic error at a number of synchrotron radiation laboratories around the world. Generally, the LTP reference beam has to be tilted away from the optical axis in order to avoid fringe overlap between the sample and reference beams. However, a tilted reference beam will result in considerable systematic error due to optical system imperfections, which is difficult to correct. Six methods of implementing a non-tilted reference beam in the LTP are introduced: (1) application of an external precision angle device to measure and remove slide pitch error without a reference beam, (2) an independent slide pitch test by use of a non-tilted reference beam, (3) a non-tilted reference test combined with a tilted sample, (4) a penta-prism scanning mode without reference beam correction, (5) a non-tilted reference using a second optical head, and (6) alternate switching of data acquisition between the sample and reference beams. With a non-tilted reference method, the measurement accuracy can be improved significantly. Some measurement results are presented. Systematic error in the sample beam arm is not addressed in this paper and should be treated separately.

  3. The Origin of Systematic Errors in the GCM Simulation of ITCZ Precipitation over Oceans

    NASA Technical Reports Server (NTRS)

    Chao, Winston C.; Suarez, Max J.; Bacmeister, Julio T.; Chen, Baode; Takacs, Lawrence L.

    2006-01-01

    This study provides explanations for some of the experimental findings of Chao (2000) and Chao and Chen (2001) concerning the mechanisms responsible for the ITCZ in an aqua-planet model. These explanations are then applied to explain the origin of some of the systematic errors in the GCM simulation of ITCZ precipitation over oceans. The ITCZ systematic errors are highly sensitive to model physics and, by extension, to model horizontal resolution. The findings in this study, along with those of Chao (2000) and Chao and Chen (2001, 2004), contribute to building a theoretical foundation for ITCZ study. A few possible methods of alleviating the systematic errors in the GCM simulation of the ITCZ are discussed. This study uses a recent version of the Goddard Modeling and Assimilation Office's Goddard Earth Observing System (GEOS-5) GCM.

  4. Estimation of radiation risk in presence of classical additive and Berkson multiplicative errors in exposure doses.

    PubMed

    Masiuk, S V; Shklyar, S V; Kukush, A G; Carroll, R J; Kovgan, L N; Likhtarov, I A

    2016-07-01

    In this paper, the influence of measurement errors in exposure doses in a regression model with binary response is studied. Recently, it has been recognized that uncertainty in exposure dose is characterized by errors of two types: classical additive errors and Berkson multiplicative errors. The combination of classical additive and Berkson multiplicative errors has not been considered in the literature previously. In a simulation study based on data from radio-epidemiological research of thyroid cancer in Ukraine caused by the Chornobyl accident, it is shown that ignoring measurement errors in doses leads to overestimation of background prevalence and underestimation of excess relative risk. In the work, several methods to reduce these biases are proposed. They are new regression calibration, an additive version of efficient SIMEX, and novel corrected score methods.
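
    Of the corrections listed, SIMEX is the easiest to illustrate. The sketch below shows the generic additive-error version for a regression slope attenuated by classical measurement error, not the paper's efficient SIMEX or the combined Berkson case: extra noise of variance lambda*tau2 is added, the naive estimate is recorded as a function of lambda, and a quadratic fit is extrapolated back to lambda = -1, the error-free limit.

```python
import numpy as np

rng = np.random.default_rng(0)

def simex_slope(w, y, tau2, lambdas=(0.5, 1.0, 1.5, 2.0), n_sim=200):
    """SIMEX for a slope attenuated by classical additive error:
    w = x + u with u ~ N(0, tau2)."""
    def naive(wv):
        return np.cov(wv, y, bias=True)[0, 1] / np.var(wv)
    lams = np.concatenate(([0.0], lambdas))
    means = []
    for lam in lams:
        if lam == 0.0:
            means.append(naive(w))            # naive estimate, no added noise
        else:
            sims = [naive(w + rng.normal(0, np.sqrt(lam * tau2), w.size))
                    for _ in range(n_sim)]    # simulation step
            means.append(np.mean(sims))
    coeffs = np.polyfit(lams, means, 2)       # quadratic extrapolant
    return np.polyval(coeffs, -1.0)           # extrapolation to lambda = -1
```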

  5. Design of roundness measurement model with multi-systematic error for cylindrical components with large radius.

    PubMed

    Sun, Chuanzhi; Wang, Lei; Tan, Jiubin; Zhao, Bo; Tang, Yangchao

    2016-02-01

    This paper presents a roundness measurement model with multiple systematic errors, which takes eccentricity, probe offset, the radius of the probe tip, and tilt error into account for the roundness measurement of cylindrical components. The effects of the systematic errors and the component radius on the roundness measurement are analysed. The proposed method is built on an instrument with a high-precision rotating spindle. The effectiveness of the proposed method is verified by experiment with a standard cylindrical component measured on a roundness measuring machine. Compared to the traditional limacon measurement model, the accuracy of roundness measurement can be increased by about 2.2 μm using the proposed model for an object with a large radius of around 37 mm. The proposed method can improve the accuracy of roundness measurement and can be used for error separation, calibration, and comparison, especially for cylindrical components with a large radius. PMID:26931894
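
    For comparison, the traditional limacon model mentioned above fits only a mean radius plus a first-harmonic eccentricity term to the probe signal; the paper's model augments this with probe offset, tip radius and tilt terms. A minimal least-squares limacon fit (names illustrative):

```python
import numpy as np

def limacon_fit(theta, r):
    """Fit r(theta) ~ R + a*cos(theta) + b*sin(theta): mean radius R plus a
    first-harmonic term representing the part's eccentricity relative to
    the spindle axis. Returns the fit coefficients and the
    eccentricity-removed roundness residuals."""
    A = np.column_stack([np.ones_like(theta), np.cos(theta), np.sin(theta)])
    coef, *_ = np.linalg.lstsq(A, r, rcond=None)
    return coef, r - A @ coef
```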

  6. Accuracy of image-plane holographic tomography with filtered backprojection: random and systematic errors.

    PubMed

    Belashov, A V; Petrov, N V; Semenova, I V

    2016-01-01

    This paper explores the concept of image-plane holographic tomography applied to the measurements of laser-induced thermal gradients in an aqueous solution of a photosensitizer with respect to the reconstruction accuracy of three-dimensional variations of the refractive index. It uses the least-squares estimation algorithm to reconstruct refractive index variations in each holographic projection. Along with the bitelecentric optical system, transferring focused projection to the sensor plane, it facilitates the elimination of diffraction artifacts and noise suppression. This work estimates the influence of typical random and systematic errors in experiments and concludes that random errors such as accidental measurement errors or noise presence can be significantly suppressed by increasing the number of recorded digital holograms. On the contrary, even comparatively small systematic errors such as a displacement of the rotation axis projection in the course of a reconstruction procedure can significantly distort the results. PMID:26835625
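
    The random-versus-systematic distinction drawn above can be demonstrated in a few lines: averaging n holograms shrinks zero-mean noise roughly as 1/sqrt(n), while a fixed bias, such as a displaced rotation-axis projection, is untouched. All numbers below are arbitrary stand-ins.

```python
import numpy as np

rng = np.random.default_rng(1)
truth = 1.333                      # refractive index of the probed voxel
systematic = 5e-4                  # e.g. a shifted rotation-axis projection
for n in (1, 10, 100):
    frames = truth + systematic + rng.normal(0, 2e-3, n)
    err = frames.mean() - truth
    print(f"{n:4d} holograms: residual error = {err:+.2e}")
# Random noise shrinks roughly as 1/sqrt(n); the 5e-4 systematic floor remains.
```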

  7. Mechanical temporal fluctuation induced distance and force systematic errors in Casimir force experiments

    NASA Astrophysics Data System (ADS)

    Lamoreaux, Steve; Wong, Douglas

    2015-06-01

    The basic theory of temporal mechanical fluctuation induced systematic errors in Casimir force experiments is developed, and applications of this theory to several experiments are reviewed. This class of systematic error enters in a manner similar to the usual surface roughness correction, but unlike the treatment of surface roughness, for which an exact result requires an electromagnetic mode analysis, time-dependent fluctuations can be treated exactly, assuming the fluctuation times are much longer than the zero-point and thermal fluctuation correlation times of the electromagnetic field between the plates. An experimental method for measuring absolute distance with high bandwidth is also described and measurement data presented.

  8. The Origin of Systematic Errors in the GCM Simulation of ITCZ Precipitation

    NASA Technical Reports Server (NTRS)

    Chao, Winston C.; Suarez, M. J.; Bacmeister, J. T.; Chen, B.; Takacs, L. L.

    2006-01-01

    Previous GCM studies have found that the systematic errors in the GCM simulation of the seasonal mean ITCZ intensity and location could be substantially corrected by adding a suitable amount of rain re-evaporation or cumulus momentum transport. However, the reason(s) for these systematic errors, and for the effectiveness of these remedies, has remained a puzzle. In this work the knowledge gained from previous studies of the ITCZ in an aqua-planet model with zonally uniform SST is applied to solve this puzzle. The solution is supported by further aqua-planet and full-model experiments using the latest version of the Goddard Earth Observing System GCM.

  9. Patient disclosure of medical errors in paediatrics: A systematic literature review.

    PubMed

    Koller, Donna; Rummens, Anneke; Le Pouesard, Morgane; Espin, Sherry; Friedman, Jeremy; Coffey, Maitreya; Kenneally, Noah

    2016-05-01

    Medical errors are common within paediatrics; however, little research has examined the process of disclosing medical errors in paediatric settings. The present systematic review of current research and policy initiatives examined evidence regarding the disclosure of medical errors involving paediatric patients. Peer-reviewed research from a range of scientific journals from the past 10 years is presented, and an overview of Canadian and international policies regarding disclosure in paediatric settings are provided. The purpose of the present review was to scope the existing literature and policy, and to synthesize findings into an integrated and accessible report. Future research priorities and policy implications are then identified.

  10. Systematic errors analysis for a large dynamic range aberrometer based on aberration theory.

    PubMed

    Wu, Peng; Liu, Sheng; DeHoog, Edward; Schwiegerling, Jim

    2009-11-10

    In Ref. 1, it was demonstrated that the significant systematic errors of a type of large dynamic range aberrometer are strongly related to the power error (defocus) in the input wavefront. In this paper, a generalized theoretical analysis based on vector aberration theory is presented, and local shift errors of the SH spot pattern as a function of the lenslet position and the local wavefront tilt over the corresponding lenslet are derived. Three special cases, a spherical wavefront, a crossed cylindrical wavefront, and a cylindrical wavefront, are analyzed and the possibly affected Zernike terms in the wavefront reconstruction are investigated. The simulation and experimental results are illustrated to verify the theoretical predictions.

  11. SYSTEMATIC CONTINUUM ERRORS IN THE Lyα FOREST AND THE MEASURED TEMPERATURE-DENSITY RELATION

    SciTech Connect

    Lee, Khee-Gan

    2012-07-10

    Continuum fitting uncertainties are a major source of error in estimates of the temperature-density relation (usually parameterized as a power law, T ∝ Δ^(γ−1)) of the intergalactic medium through the flux probability distribution function (PDF) of the Lyα forest. Using a simple order-of-magnitude calculation, we show that few-percent-level systematic errors in the placement of the quasar continuum due to, e.g., a uniform low-absorption Gunn-Peterson component could lead to errors in γ of the order of unity. This is quantified further using a simple semi-analytic model of the Lyα forest flux PDF. We find that under(over)estimates in the continuum level can lead to a lower (higher) measured value of γ. By fitting models to mock data realizations generated with current observational errors, we find that continuum errors can cause a systematic bias in the estimated temperature-density relation of δ(γ) ≈ −0.1, while the error is increased to σ_γ ≈ 0.2, compared to σ_γ ≈ 0.1 in the absence of continuum errors.

  12. Effects of systematic phase errors on optimized quantum random-walk search algorithm

    NASA Astrophysics Data System (ADS)

    Zhang, Yu-Chao; Bao, Wan-Su; Wang, Xiang; Fu, Xiang-Qun

    2015-06-01

    This study investigates the effects of systematic errors in phase inversions on the success rate and number of iterations in the optimized quantum random-walk search algorithm. Using the geometric description of this algorithm, a model of the algorithm with phase errors is established, and the relationship between the success rate of the algorithm, the database size, the number of iterations, and the phase error is determined. For a given database size, we obtain both the maximum success rate of the algorithm and the required number of iterations when phase errors are present in the algorithm. Analyses and numerical simulations show that the optimized quantum random-walk search algorithm is more robust against phase errors than Grover’s algorithm. Project supported by the National Basic Research Program of China (Grant No. 2013CB338002).

  13. GREAT3 results - I. Systematic errors in shear estimation and the impact of real galaxy morphology

    DOE PAGES

    Mandelbaum, Rachel; Rowe, Barnaby; Armstrong, Robert; Bard, Deborah; Bertin, Emmanuel; Bosch, James; Boutigny, Dominique; Courbin, Frederic; Dawson, William A.; Donnarumma, Annamaria; et al

    2015-05-11

    This study presents first results from the third GRavitational lEnsing Accuracy Testing (GREAT3) challenge, the third in a sequence of challenges for testing methods of inferring weak gravitational lensing shear distortions from simulated galaxy images. GREAT3 was divided into experiments to test three specific questions, and included simulated space- and ground-based data with constant or cosmologically varying shear fields. The simplest (control) experiment included parametric galaxies with a realistic distribution of signal-to-noise, size, and ellipticity, and a complex point spread function (PSF). The other experiments tested the additional impact of realistic galaxy morphology, multiple exposure imaging, and the uncertainty about a spatially varying PSF; the last two questions will be explored in Paper II. The 24 participating teams competed to estimate lensing shears to within systematic error tolerances for upcoming Stage-IV dark energy surveys, making 1525 submissions overall. GREAT3 saw considerable variety and innovation in the types of methods applied. Several teams now meet or exceed the targets in many of the tests conducted (to within the statistical errors). We conclude that the presence of realistic galaxy morphology in simulations changes shear calibration biases by ~1 per cent for a wide range of methods. Other effects such as truncation biases due to finite galaxy postage stamps, and the impact of galaxy type as measured by the Sérsic index, are quantified for the first time. Our results generalize previous studies regarding sensitivities to galaxy size and signal-to-noise, and to PSF properties such as seeing and defocus. Almost all methods' results support the simple model in which additive shear biases depend linearly on PSF ellipticity.

  14. GREAT3 results - I. Systematic errors in shear estimation and the impact of real galaxy morphology

    SciTech Connect

    Mandelbaum, Rachel; Rowe, Barnaby; Armstrong, Robert; Bard, Deborah; Bertin, Emmanuel; Bosch, James; Boutigny, Dominique; Courbin, Frederic; Dawson, William A.; Donnarumma, Annamaria; Fenech Conti, Ian; Gavazzi, Raphael; Gentile, Marc; Gill, Mandeep S. S.; Hogg, David W.; Huff, Eric M.; Jee, M. James; Kacprzak, Tomasz; Kilbinger, Martin; Kuntzer, Thibault; Lang, Dustin; Luo, Wentao; March, Marisa C.; Marshall, Philip J.; Meyers, Joshua E.; Miller, Lance; Miyatake, Hironao; Nakajima, Reiko; Ngole Mboula, Fred Maurice; Nurbaeva, Guldariya; Okura, Yuki; Paulin-Henriksson, Stephane; Rhodes, Jason; Schneider, Michael D.; Shan, Huanyuan; Sheldon, Erin S.; Simet, Melanie; Starck, Jean -Luc; Sureau, Florent; Tewes, Malte; Zarb Adami, Kristian; Zhang, Jun; Zuntz, Joe

    2015-05-11

    This study presents first results from the third GRavitational lEnsing Accuracy Testing (GREAT3) challenge, the third in a sequence of challenges for testing methods of inferring weak gravitational lensing shear distortions from simulated galaxy images. GREAT3 was divided into experiments to test three specific questions, and included simulated space- and ground-based data with constant or cosmologically varying shear fields. The simplest (control) experiment included parametric galaxies with a realistic distribution of signal-to-noise, size, and ellipticity, and a complex point spread function (PSF). The other experiments tested the additional impact of realistic galaxy morphology, multiple exposure imaging, and the uncertainty about a spatially varying PSF; the last two questions will be explored in Paper II. The 24 participating teams competed to estimate lensing shears to within systematic error tolerances for upcoming Stage-IV dark energy surveys, making 1525 submissions overall. GREAT3 saw considerable variety and innovation in the types of methods applied. Several teams now meet or exceed the targets in many of the tests conducted (to within the statistical errors). We conclude that the presence of realistic galaxy morphology in simulations changes shear calibration biases by ~1 per cent for a wide range of methods. Other effects such as truncation biases due to finite galaxy postage stamps, and the impact of galaxy type as measured by the Sérsic index, are quantified for the first time. Our results generalize previous studies regarding sensitivities to galaxy size and signal-to-noise, and to PSF properties such as seeing and defocus. Almost all methods' results support the simple model in which additive shear biases depend linearly on PSF ellipticity.

  15. Statistical and systematic errors in redshift-space distortion measurements from large surveys

    NASA Astrophysics Data System (ADS)

    Bianchi, D.; Guzzo, L.; Branchini, E.; Majerotto, E.; de la Torre, S.; Marulli, F.; Moscardini, L.; Angulo, R. E.

    2012-12-01

    We investigate the impact of statistical and systematic errors on measurements of linear redshift-space distortions (RSD) in future cosmological surveys by analysing large catalogues of dark matter haloes from the baryonic acoustic oscillation simulations at the Institute for Computational Cosmology. These allow us to estimate the dependence of errors on typical survey properties, such as volume, galaxy density and mass (i.e. bias factor) of the adopted tracer. We find that measures of the specific growth rate β = f/b using the Hamilton/Kaiser harmonic expansion of the redshift-space correlation function ξ(rp, π) on scales larger than 3 h-1 Mpc are typically underestimated by up to 10 per cent for galaxy-sized haloes. This is significantly larger than the corresponding statistical errors, which amount to a few per cent, indicating the importance of non-linear improvements to the Kaiser model to obtain accurate measurements of the growth rate. The systematic error shows a diminishing trend with increasing bias value (i.e. mass) of the haloes considered. We compare the amplitude and trends of statistical errors as a function of survey parameters to predictions obtained with the Fisher information matrix technique. This is what is usually adopted to produce RSD forecasts, based on the Feldman-Kaiser-Peacock prescription for the errors on the power spectrum. We show that this produces parameter errors fairly similar to the standard deviations from the halo catalogues, provided it is applied to strictly linear scales in Fourier space (k<0.2 h Mpc-1). Finally, we combine our measurements to define and calibrate an accurate scaling formula for the relative error on β as a function of the same parameters, which closely matches the simulation results in all explored regimes. This provides a handy and plausibly more realistic alternative to the Fisher matrix approach, to quickly and accurately predict statistical errors on RSD expected from future surveys.

  16. Systematic errors in conductimetric instrumentation due to bubble adhesions on the electrodes: An experimental assessment

    NASA Astrophysics Data System (ADS)

    Neelakantaswamy, P. S.; Rajaratnam, A.; Kisdnasamy, S.; Das, N. P.

    1985-02-01

    Systematic errors in conductimetric measurements are often encountered due to partial screening of interelectrode current paths resulting from adhesion of bubbles on the electrode surfaces of the cell. A method of assessing this error quantitatively by a simulated electrolytic tank technique is proposed here. The experimental setup simulates the bubble-curtain effect in the electrolytic tank by means of a pair of electrodes partially covered by a monolayer of small polystyrene-foam spheres representing the bubble adhesions. By varying the number of spheres stuck on the electrode surface, the fractional area covered by the bubbles is controlled; and by measuring the interelectrode impedance, the systematic error is determined as a function of the fractional area covered by the simulated bubbles. A theoretical model which depicts the interelectrode resistance and, hence, the systematic error caused by bubble adhesions is calculated by considering the random dispersal of bubbles on the electrodes. Relevant computed results are compared with the measured impedance data obtained from the electrolytic tank experiment. Results due to other models are also presented and discussed. A time-domain measurement on the simulated cell to study the capacitive effects of the bubble curtain is also explained.

  17. SU-F-BRD-03: Determination of Plan Robustness for Systematic Setup Errors Using Trilinear Interpolation

    SciTech Connect

    Fix, MK; Volken, W; Frei, D; Terribilini, D; Dal Pra, A; Schmuecking, M; Manser, P

    2014-06-15

    Purpose: Treatment plan evaluations in radiotherapy currently ignore the dosimetric impact of setup uncertainties. The determination of robustness against systematic errors is rather computationally intensive. This work investigates interpolation schemes to quantify the robustness of treatment plans against systematic errors in terms of efficiency and accuracy. Methods: The impact of systematic errors on dose distributions for patient treatment plans is determined by using the Swiss Monte Carlo Plan (SMCP). Errors in all translational directions are considered, ranging from −3 to +3 mm in 1 mm steps. For each systematic error a full MC dose calculation is performed, leading to 343 dose calculations, which are used as benchmarks. The interpolation uses only a subset of the 343 calculations, namely 9, 15 or 27, and determines all dose distributions by trilinear interpolation. This procedure is applied to a prostate and a head-and-neck case using Volumetric Modulated Arc Therapy with 2 arcs. The relative differences of the dose volume histograms (DVHs) of the target and the organs at risk are compared. Finally, the interpolation schemes are used to compare the robustness of 4- versus 2-arc head-and-neck treatment plans. Results: Relative local differences of the DVHs increase with decreasing number of dose calculations used in the interpolation. The mean deviations are <1%, 3.5% and 6.5% for subsets of 27, 15 and 9 dose calculations, respectively. The dose computation times are thereby reduced by factors of 13, 25 and 43, respectively. The comparison of the 4- versus 2-arc plan shows a decrease in robustness; however, this is outweighed by the dosimetric improvements. Conclusion: The results of this study suggest that the use of trilinear interpolation to determine the robustness of treatment plans can remarkably reduce the number of dose calculations. This work was supported by Varian Medical Systems.
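
    The interpolation step itself is ordinary multilinear (here trilinear) interpolation on a regular grid of setup shifts. A sketch of the 27-point variant using SciPy (the 9- and 15-point subsets are not full grids and need a different scheme; the dose values below are random stand-ins for the Monte Carlo results):

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

# Stand-in dose metric (e.g. a DVH point) at the 27 coarse setup shifts;
# in the study these come from full Monte Carlo dose calculations.
coarse = np.array([-3.0, 0.0, 3.0])                  # mm, per axis
coarse_doses = np.random.rand(3, 3, 3)               # placeholder values

# The default method='linear' on a 3-D grid is exactly trilinear interpolation.
interp = RegularGridInterpolator((coarse, coarse, coarse), coarse_doses)

fine = np.linspace(-3.0, 3.0, 7)                     # 1 mm steps
pts = np.array(np.meshgrid(fine, fine, fine, indexing="ij")).reshape(3, -1).T
fine_doses = interp(pts).reshape(7, 7, 7)            # 343 values from 27 runs
```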

  18. A hybrid variational-ensemble data assimilation scheme with systematic error correction for limited-area ocean models

    NASA Astrophysics Data System (ADS)

    Oddo, Paolo; Storto, Andrea; Dobricic, Srdjan; Russo, Aniello; Lewis, Craig; Onken, Reiner; Coelho, Emanuel

    2016-10-01

    A hybrid variational-ensemble data assimilation scheme to estimate the vertical and horizontal parts of the background error covariance matrix for an ocean variational data assimilation system is presented and tested in a limited-area ocean model implemented in the western Mediterranean Sea. An extensive data set collected during the Recognized Environmental Picture Experiments conducted in June 2014 by the Centre for Maritime Research and Experimentation has been used for assimilation and validation. The hybrid scheme is used to both correct the systematic error introduced in the system from the external forcing (initialisation, lateral and surface open boundary conditions) and model parameterisation, and improve the representation of small-scale errors in the background error covariance matrix. An ensemble system is run offline for further use in the hybrid scheme, generated through perturbation of assimilated observations. Results of four different experiments have been compared. The reference experiment uses the classical stationary formulation of the background error covariance matrix and has no systematic error correction. The other three experiments account for, or not, systematic error correction and hybrid background error covariance matrix combining the static and the ensemble-derived errors of the day. Results show that the hybrid scheme when used in conjunction with the systematic error correction reduces the mean absolute error of temperature and salinity misfit by 55 and 42 % respectively, versus statistics arising from standard climatological covariances without systematic error correction.
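
    The covariance blend at the core of such hybrid schemes is commonly written B = alpha*B_static + (1 - alpha)*B_ens, with the ensemble part estimated from member anomalies. The sketch below shows only this generic form; the weights, localization and operator splitting of the paper's system are not reproduced.

```python
import numpy as np

def hybrid_B(B_static, ensemble, alpha=0.5):
    """Blend a static background-error covariance with the 'errors of the
    day' from an ensemble. ensemble: (n_members, n_state) model states."""
    X = ensemble - ensemble.mean(axis=0, keepdims=True)
    B_ens = X.T @ X / (ensemble.shape[0] - 1)   # sample covariance of anomalies
    return alpha * B_static + (1.0 - alpha) * B_ens
```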

  19. Accounting for systematic errors in bioluminescence imaging to improve quantitative accuracy

    NASA Astrophysics Data System (ADS)

    Taylor, Shelley L.; Perry, Tracey A.; Styles, Iain B.; Cobbold, Mark; Dehghani, Hamid

    2015-07-01

    Bioluminescence imaging (BLI) is a widely used pre-clinical imaging technique, but there are a number of limitations to its quantitative accuracy. This work uses an animal model to demonstrate some significant limitations of BLI and presents processing methods and algorithms which overcome these limitations, increasing the quantitative accuracy of the technique. The position of the imaging subject and source depth are both shown to affect the measured luminescence intensity. Free Space Modelling is used to eliminate the systematic error due to the camera/subject geometry, removing the dependence of luminescence intensity on animal position. Bioluminescence tomography (BLT) is then used to provide additional information about the depth and intensity of the source. A substantial limitation in the number of sources identified using BLI is also presented. It is shown that when a given source is at a significant depth, it can appear as multiple sources when imaged using BLI, while the use of BLT recovers the true number of sources present.

  20. Voigt profile introduces optical depth dependent systematic errors - Detected in high resolution laboratory spectra of water

    NASA Astrophysics Data System (ADS)

    Birk, Manfred; Wagner, Georg

    2016-02-01

    The Voigt profile commonly used in radiative transfer modeling of Earth's and planets' atmospheres for remote sensing/climate modeling produces systematic errors so far not accounted for. Saturated lines are systematically too narrow when calculated from pressure broadening parameters based on the analysis of laboratory data with the Voigt profile. This is caused by line narrowing effects leading to systematically too small fitted broadening parameters when applying the Voigt profile. These effective values are still valid to model non-saturated lines with sufficient accuracy. Saturated lines dominated by the wings of the line profile are sufficiently accurately modeled with a Voigt profile with the correct broadening parameters and are thus systematically too narrow when calculated with the effective values. The systematic error was quantified by mid infrared laboratory spectroscopy of the water ν2 fundamental. Correct Voigt profile based pressure broadening parameters for saturated lines were 3-4% larger than the effective ones in the spectroscopic database. Impacts on remote sensing and climate modeling are expected. Combination of saturated and non-saturated lines in the spectroscopic analysis will quantify line narrowing with unprecedented precision.

  1. On the Gas Optimization and Systematic Error for the Gas Pixel Detector

    NASA Astrophysics Data System (ADS)

    Feng, Hua; Costa, Enrico; Muleri, Fabio; Bellazzini, Ronaldo; Soffitta, Paolo; Zhang, Heng; Li, Hong

    2016-07-01

    The gas pixel detector (GPD) has been selected as the focal plane polarimeter for the X-ray Imaging Polarimetry Explorer (XIPE). We calculated the detection efficiency of different gas mixtures and simulated the electron tracks and degree of modulation at different X-ray energies using packages such as Geant4/Maxwell/Garfield. The simulation results are consistent with measurements. We will demonstrate how the choice of gas mixture influences the polarization sensitivity. We will also show test results for the systematic error, which is the response of the detector to unpolarized signals and determines the limiting sensitivity. Our measurements indicate that the systematic error is well below 1% in degree of polarization for the GPD.

  2. Treatment of systematic errors in the processing of wide angle sonar sensor data for robotic navigation

    SciTech Connect

    Beckerman, M.; Oblow, E.M.

    1988-04-01

    A methodology has been developed for the treatment of systematic errors which arise in the processing of sparse sensor data. We present a detailed application of this methodology to the construction from wide-angle sonar sensor data of navigation maps for use in autonomous robotic navigation. In the methodology we introduce a four-valued labelling scheme and a simple logic for label combination. The four labels, conflict, occupied, empty and unknown, are used to mark the cells of the navigation maps; the logic allows for the rapid updating of these maps as new information is acquired. The systematic errors are treated by relabelling conflicting pixel assignments. Most of the new labels are obtained from analyses of the characteristic patterns of conflict which arise during the information processing. The remaining labels are determined by imposing an elementary consistent-labelling condition. 26 refs., 9 figs.
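
    One plausible reading of the four-valued combination logic, offered as an illustration rather than the authors' exact rules: unknown acts as the identity, agreement is idempotent, and any genuine disagreement is marked as conflict for later relabelling.

```python
from enum import Enum

class Cell(Enum):
    UNKNOWN = 0
    EMPTY = 1
    OCCUPIED = 2
    CONFLICT = 3

def combine(old: Cell, new: Cell) -> Cell:
    """UNKNOWN is the identity, agreement is idempotent, and disagreement
    (EMPTY vs OCCUPIED, or anything meeting CONFLICT) yields CONFLICT,
    which is then resolved by the relabelling analyses."""
    if old is Cell.UNKNOWN:
        return new
    if new is Cell.UNKNOWN or new is old:
        return old
    return Cell.CONFLICT
```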

  3. A constant altitude flight survey method for mapping atmospheric ambient pressures and systematic radar errors

    NASA Technical Reports Server (NTRS)

    Larson, T. J.; Ehernberger, L. J.

    1985-01-01

    The flight test technique described uses controlled survey runs to determine horizontal atmospheric pressure variations and systematic altitude errors that result from space positioning measurements. The survey data can be used not only for improved air data calibrations, but also for studies of atmospheric structure and space positioning accuracy performance. The examples presented cover a wide range of radar tracking conditions for both subsonic and supersonic flight to an altitude of 42,000 ft.

  4. Effects of systematic errors on the mixing ratios of trace gases obtained from occultation spectra

    NASA Technical Reports Server (NTRS)

    Shaffer, W. A.; Shaw, J. H.; Farmer, C. B.

    1983-01-01

    The influence of systematic errors in the parameters of the models describing the geometry and the atmosphere on the profiles of trace gases retrieved from simulated solar occultation spectra, collected at satellite altitudes, is investigated. Because of smearing effects and other uncertainties, it may be preferable to calibrate the spectra internally by measuring absorption lines of an atmospheric gas such as CO2, whose vertical distribution is assumed, rather than to rely on externally supplied information.

  5. Local and Global Views of Systematic Errors of Atmosphere-Ocean General Circulation Models

    NASA Astrophysics Data System (ADS)

    Mechoso, C. Roberto; Wang, Chunzai; Lee, Sang-Ki; Zhang, Liping; Wu, Lixin

    2014-05-01

    Coupled Atmosphere-Ocean General Circulation Models (CGCMs) have serious systematic errors that challenge the reliability of climate predictions. One major reason for such biases is the misrepresentation of physical processes, which can be amplified by feedbacks among climate components, especially in the tropics. Much effort, therefore, is dedicated to the better representation of physical processes in coordination with intense process studies. The present paper starts with a presentation of these systematic CGCM errors, with an emphasis on the sea surface temperature (SST) in simulations by 22 participants in the Coupled Model Intercomparison Project phase 5 (CMIP5). Different regions are considered for the discussion of model errors, including the one around the equator, the one covered by the stratocumulus decks off Peru and Namibia, and the confluence between the Angola and Benguela currents. Hypotheses on the reasons for the errors are reviewed, with particular attention to the parameterization of low-level marine clouds, model difficulties in the simulation of the ocean heat budget under the stratocumulus decks, and the location of strong SST gradients. Next the presentation turns to a global perspective of the errors and their causes. It is shown that a simulated weak Atlantic Meridional Overturning Circulation (AMOC) tends to be associated with cold biases in the entire Northern Hemisphere, with an atmospheric pattern that resembles the Northern Hemisphere annular mode. The AMOC weakening is also associated with a strengthening of Antarctic bottom water formation and warm SST biases in the Southern Ocean. It is also shown that cold biases in the tropical North Atlantic and West African/Indian monsoon regions during the warm season in the Northern Hemisphere have interhemispheric links with warm SST biases in the tropical southeastern Pacific and Atlantic, respectively. The results suggest that improving the simulation of regional processes may not suffice for a more

  6. Improving SMOS retrieved salinity: characterization of systematic errors in reconstructed and modelled brightness temperature images

    NASA Astrophysics Data System (ADS)

    Gourrion, J.; Guimbard, S.; Sabia, R.; Portabella, M.; Gonzalez, V.; Turiel, A.; Ballabrera, J.; Gabarro, C.; Perez, F.; Martinez, J.

    2012-04-01

    The Microwave Imaging Radiometer using Aperture Synthesis (MIRAS) instrument onboard the Soil Moisture and Ocean Salinity (SMOS) mission was launched on November 2nd, 2009 with the aim of providing, over the oceans, synoptic sea surface salinity (SSS) measurements with spatial and temporal coverage adequate for large-scale oceanographic studies. For each single satellite overpass, SSS is retrieved after collecting, at fixed ground locations, a series of brightness temperatures from successive scenes corresponding to various geometrical and polarization conditions. SSS is inverted through minimization of the difference between reconstructed and modeled brightness temperatures. To meet the challenging mission requirements, retrieved SSS needs to achieve an accuracy of 0.1 psu after averaging over a 10- or 30-day period and 2°x2° or 1°x1° spatial boxes, respectively. It is expected that, at such scales, the high radiometric noise can be reduced to a level such that remaining errors and inconsistencies in the retrieved salinity fields can essentially be related to (1) systematic brightness temperature errors in the antenna reference frame, (2) systematic errors in the Geophysical Model Function (GMF, used to model the observations and retrieve salinity) for specific environmental conditions and/or particular auxiliary parameter values, and (3) errors in the auxiliary datasets used as input to the GMF. The present communication primarily aims at addressing point 1 above, and possibly point 2, for the whole polarimetric information, i.e. that issued from both co-polar and cross-polar measurements. Several factors may potentially produce systematic errors in the antenna reference frame: the unavoidable fact that the antennas are not all perfectly identical, the imperfect characterization of the instrument response (e.g. antenna patterns), the accounting for receiver temperatures in the reconstruction, calibration using flat sky scenes, and the implementation of ripple reduction algorithms at sharp

  7. Systematic errors in the measurement of the permanent electric dipole moment (EDM) of the 199 Hg atom

    NASA Astrophysics Data System (ADS)

    Chen, Yi; Graner, Brent; Heckel, Blayne; Lindahl, Eric

    2016-05-01

    This talk provides a discussion of the systematic errors that were encountered in the 199 Hg experiment described earlier in this session. The dominant systematic error, unseen in previous 199 Hg EDM experiments, arose from small motions of the Hg vapor cells due to forces exerted by the applied electric field. Methods used to understand this effect, as well as the anticipated sources of systematic errors such as leakage currents, parameter correlations, and E2 and v × E / c effects, will be presented. The total systematic error was found to be 72% as large as the statistical error of the EDM measurement. This work was supported by NSF Grant 1306743 and by DOE Grant DE-FG02-97ER41020.

  8. Systematic errors in the measurement of the permanent electric dipole moment (EDM) of the 199Hg atom

    NASA Astrophysics Data System (ADS)

    Chen, Yi; Graner, Brent; Lindahl, Eric; Heckel, Blayne

    2016-03-01

    This talk provides a discussion of the systematic errors that were encountered in the 199Hg experiment described earlier in this session. The dominant systematic error, unseen in previous 199Hg EDM experiments, arose from small motions of the Hg vapor cells due to forces exerted by the applied electric field. Methods used to understand this effect, as well as the anticipated sources of systematic errors such as leakage currents, parameter correlations, and E2 and v × E / c effects, will be presented. The total systematic error was found to be 72% as large as the statistical error of the EDM measurement. This work was supported by NSF Grant 1306743 and by DOE Grant DE-FG02-97ER41020.

  9. Random and systematic measurement errors in acoustic impedance as determined by the transmission line method

    NASA Technical Reports Server (NTRS)

    Parrott, T. L.; Smith, C. D.

    1977-01-01

    The effect of random and systematic errors associated with the measurement of normal incidence acoustic impedance in a zero-mean-flow environment was investigated by the transmission line method. The influence of random measurement errors in the reflection coefficients and pressure minima positions was investigated by computing fractional standard deviations of the normalized impedance. Both the standard techniques of random process theory and a simplified technique were used. Over a wavelength range of 68 to 10 cm random measurement errors in the reflection coefficients and pressure minima positions could be described adequately by normal probability distributions with standard deviations of 0.001 and 0.0098 cm, respectively. An error propagation technique based on the observed concentration of the probability density functions was found to give essentially the same results but with a computation time of about 1 percent of that required for the standard technique. The results suggest that careful experimental design reduces the effect of random measurement errors to insignificant levels for moderate ranges of test specimen impedance component magnitudes. Most of the observed random scatter can be attributed to lack of control by the mounting arrangement over mechanical boundary conditions of the test sample.

  10. A systematic impact assessment of GRACE error correlation on data assimilation in hydrological models

    NASA Astrophysics Data System (ADS)

    Schumacher, Maike; Kusche, Jürgen; Döll, Petra

    2016-06-01

    Recently, ensemble Kalman filters (EnKF) have found increasing application for merging hydrological models with total water storage anomaly (TWSA) fields from the Gravity Recovery And Climate Experiment (GRACE) satellite mission. Previous studies have disregarded the effect of spatially correlated errors of GRACE TWSA products in their investigations. Here, for the first time, we systematically assess the impact of the GRACE error correlation structure on EnKF data assimilation into a hydrological model, i.e. on estimated compartmental and total water storages and model parameter values. Our investigations include (1) assimilating gridded GRACE-derived TWSA into the WaterGAP Global Hydrology Model and, simultaneously, calibrating its parameters; (2) introducing GRACE observations on different spatial scales; (3) modelling observation errors as either spatially white or correlated in the assimilation procedure, and (4) replacing the standard EnKF algorithm by the square root analysis scheme or, alternatively, the singular evolutive interpolated Kalman filter. Results of a synthetic experiment designed for the Mississippi River Basin indicate that the hydrological parameters are sensitive to TWSA assimilation if spatial resolution of the observation data is sufficiently high. We find a significant influence of spatial error correlation on the adjusted water states and model parameters for all implemented filter variants, in particular for subbasins with a large discrepancy between observed and initially simulated TWSA and for north-south elongated sub-basins. Considering these correlated errors, however, does not generally improve results: while some metrics indicate that it is helpful to consider the full GRACE error covariance matrix, it appears to have an adverse effect on others. We conclude that considering the characteristics of GRACE error correlation is at least as important as the selection of the spatial discretisation of TWSA observations, while the choice
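
    As a toy illustration of the modelling choice examined above (this is not code from the study), the following numpy sketch performs one EnKF analysis step in which the observation-error covariance R can be passed either as a diagonal (spatially white) matrix or as a full (spatially correlated) matrix; the dimensions, the linear observation operator H, and the exponential correlation model are all hypothetical stand-ins.

    ```python
    import numpy as np

    def enkf_update(X, y, H, R, rng):
        """One EnKF analysis step with perturbed observations.
        X: (n_state, n_ens) state ensemble; y: (n_obs,) observations;
        H: (n_obs, n_state) observation operator; R: (n_obs, n_obs)
        observation-error covariance (diagonal = white, full = correlated)."""
        n_ens = X.shape[1]
        A = X - X.mean(axis=1, keepdims=True)        # ensemble anomalies
        Pf_Ht = A @ (H @ A).T / (n_ens - 1)          # P_f H^T
        S = H @ Pf_Ht + R                            # innovation covariance
        K = np.linalg.solve(S, Pf_Ht.T).T            # Kalman gain (S symmetric)
        # perturbed observations keep the analysis spread statistically correct
        Y = y[:, None] + rng.multivariate_normal(np.zeros(len(y)), R, n_ens).T
        return X + K @ (Y - H @ X)

    rng = np.random.default_rng(0)
    n_state, n_obs, n_ens = 50, 20, 30
    X = rng.normal(size=(n_state, n_ens))            # toy prior ensemble
    H = rng.normal(size=(n_obs, n_state)) / n_state  # toy observation operator
    # white vs spatially correlated observation-error covariance
    R_white = 0.1 * np.eye(n_obs)
    dist = np.abs(np.subtract.outer(np.arange(n_obs), np.arange(n_obs)))
    R_corr = 0.1 * np.exp(-dist / 5.0)               # exponential correlation
    y = rng.multivariate_normal(np.zeros(n_obs), R_corr)  # obs of a zero truth
    Xa = enkf_update(X, y, H, R_corr, rng)           # swap in R_white to compare
    ```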

  11. Influence of Additive and Multiplicative Structure and Direction of Comparison on the Reversal Error

    ERIC Educational Resources Information Center

    González-Calero, José Antonio; Arnau, David; Laserna-Belenguer, Belén

    2015-01-01

    An empirical study has been carried out to evaluate the potential of word order matching and static comparison as explanatory models of reversal error. Data was collected from 214 undergraduate students who translated a set of additive and multiplicative comparisons expressed in Spanish into algebraic language. In these multiplicative comparisons…

  12. An examination of the southern California field test for the systematic accumulation of the optical refraction error in geodetic leveling.

    USGS Publications Warehouse

    Castle, R.O.; Brown, B.W., Jr.; Gilmore, T.D.; Mark, R.K.; Wilson, R.C.

    1983-01-01

    Appraisals of the two levelings that formed the southern California field test for the accumulation of the atmospheric refraction error indicate that random error and systematic error unrelated to refraction competed with the systematic refraction error and severely complicated any analysis of the test results. If the fewer than one-third of the sections that failed to meet second-order, class I standards are dropped, the divergence between the presumably more refraction-contaminated long-sight-length survey and the less contaminated short-sight-length survey virtually disappears. -Authors

  13. Sherborn's Index Animalium: New names, systematic errors and availability of names in the light of modern nomenclature.

    PubMed

    Welter-Schultes, Francisco; Görlich, Angela; Lutze, Alexandra

    2016-01-01

    This study aims to shed light on the reliability of Sherborn's Index Animalium in terms of modern usage. The AnimalBase project spent several years' worth of teamwork dedicated to extracting new names from original sources in the period ranging from 1757 to the mid-1790s. This allowed us to closely analyse Sherborn's work and verify the completeness and correctness of his record. We found the reliability of Sherborn's resource generally very high, but in some special situations the reliability was reduced due to systematic errors or incompleteness in the source material. Index Animalium is commonly used by taxonomists today who rely strongly on Sherborn's record; our study is directed most pointedly at those users. We recommend paying special attention to the situations where we found that Sherborn's data should be read with caution. In addition to some categories of systematic errors and mistakes that were Sherborn's own responsibility, readers should also take into account that nomenclatural rules have been changed or refined in the past 100 years, and that Sherborn's resource may in places present outdated information. One of our main conclusions is that error rates in nomenclatural compilations tend to be lower if a single, highly experienced person such as Sherborn carries out the work than if a team attempts the task. Based on our experience with extracting names from original sources, we concluded that error rates in such manual work on names in a list are difficult to reduce below 2-4%. We suggest this is a natural limit and a point of diminishing returns for projects of this nature. PMID:26877658

  14. Sherborn’s Index Animalium: New names, systematic errors and availability of names in the light of modern nomenclature

    PubMed Central

    Welter-Schultes, Francisco; Görlich, Angela; Lutze, Alexandra

    2016-01-01

    This study aims to shed light on the reliability of Sherborn’s Index Animalium in terms of modern usage. The AnimalBase project spent several years’ worth of teamwork dedicated to extracting new names from original sources in the period ranging from 1757 to the mid-1790s. This allowed us to closely analyse Sherborn’s work and verify the completeness and correctness of his record. We found the reliability of Sherborn’s resource generally very high, but in some special situations the reliability was reduced due to systematic errors or incompleteness in the source material. Index Animalium is commonly used by taxonomists today who rely strongly on Sherborn’s record; our study is directed most pointedly at those users. We recommend paying special attention to the situations where we found that Sherborn’s data should be read with caution. In addition to some categories of systematic errors and mistakes that were Sherborn’s own responsibility, readers should also take into account that nomenclatural rules have been changed or refined in the past 100 years, and that Sherborn’s resource may in places present outdated information. One of our main conclusions is that error rates in nomenclatural compilations tend to be lower if a single, highly experienced person such as Sherborn carries out the work than if a team attempts the task. Based on our experience with extracting names from original sources we concluded that error rates in such manual work on names in a list are difficult to reduce below 2–4%. We suggest this is a natural limit and a point of diminishing returns for projects of this nature. PMID:26877658

  15. Efficient Solar Scene Wavefront Estimation with Reduced Systematic and RMS Errors: Summary

    NASA Astrophysics Data System (ADS)

    Anugu, N.; Garcia, P.

    2016-04-01

    Wavefront sensing for solar telescopes is commonly implemented with Shack-Hartmann sensors. Correlation algorithms are usually used to estimate the extended-scene Shack-Hartmann sub-aperture image shifts or slopes. The image shift is computed by correlating a reference sub-aperture image with the target distorted sub-aperture image. The pixel position where the maximum correlation is located gives the image shift in integer pixel coordinates. Sub-pixel precision image shifts are computed by applying a peak-finding algorithm to the correlation peak (Poyneer 2003; Löfdahl 2010). However, the peak-finding results are usually biased towards the integer pixels; these errors are known as systematic bias errors (Sjödahl 1994). They are caused by the low pixel sampling of the images. The amplitude of these errors depends on the type of correlation algorithm and the type of peak-finding algorithm being used. To study the systematic errors in detail, solar sub-aperture synthetic images are constructed from a Swedish Solar Telescope solar granulation image. The performance of the cross-correlation algorithm in combination with different peak-finding algorithms is investigated. The studied peak-finding algorithms are: parabola (Poyneer 2003); quadratic polynomial (Löfdahl 2010); threshold center of gravity (Bailey 2003); Gaussian (Nobach & Honkanen 2005); and pyramid (Bailey 2003). The systematic error study reveals that the pyramid fit is the most robust to pixel-locking effects. The RMS error analysis study reveals that the threshold center of gravity behaves better at low SNR, although the systematic errors in the measurement are large. It is found that no algorithm is best for both systematic and RMS error reduction. To overcome the above problem, a new solution is proposed. In this solution, the image sampling is increased prior to the actual correlation matching. The method is realized in two steps to improve its
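
    As a minimal sketch of the shift-measurement scheme described (assuming nothing about the paper's actual code), the following estimates the displacement between two sub-aperture images using FFT cross-correlation for the integer part and a 3-point parabola fit, one of the peak-finding algorithms listed, for the sub-pixel part; the toy granulation-like scene is hypothetical.

    ```python
    import numpy as np

    def parabola_vertex(ym1, y0, yp1):
        """Sub-pixel offset of the maximum from a 3-point parabola fit."""
        denom = ym1 - 2.0 * y0 + yp1
        return 0.5 * (ym1 - yp1) / denom if denom != 0 else 0.0

    def subpixel_shift(ref, img):
        """Shift of img relative to ref: circular FFT cross-correlation gives
        the integer pixel, a parabola fit around the peak gives the fraction."""
        n, m = ref.shape
        c = np.real(np.fft.ifft2(np.fft.fft2(img) * np.conj(np.fft.fft2(ref))))
        i, j = np.unravel_index(np.argmax(c), c.shape)
        di = parabola_vertex(c[(i - 1) % n, j], c[i, j], c[(i + 1) % n, j])
        dj = parabola_vertex(c[i, (j - 1) % m], c[i, j], c[i, (j + 1) % m])
        si = i + di if i <= n // 2 else i + di - n   # unwrap circular indices
        sj = j + dj if j <= m // 2 else j + dj - m
        return si, sj

    # toy scene: low-pass filtered noise, then shifted by a known amount
    rng = np.random.default_rng(1)
    spec = np.fft.rfft2(rng.normal(size=(64, 64)))
    scene = np.fft.irfft2(spec * np.exp(-np.arange(spec.shape[1]) / 4.0))
    print(subpixel_shift(scene, np.roll(scene, (3, -2), axis=(0, 1))))  # ~(3, -2)
    ```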

  16. Systematic errors in the measurement of emissivity caused by directional effects.

    PubMed

    Kribus, Abraham; Vishnevetsky, Irna; Rotenberg, Eyal; Yakir, Dan

    2003-04-01

    Accurate knowledge of surface emissivity is essential for applications in remote sensing (remote temperature measurement), radiative transport, and modeling of environmental energy balances. Direct measurements of surface emissivity are difficult when there is considerable background radiation at the same wavelength as the emitted radiation. This occurs, for example, when objects at temperatures near room temperature are measured in a terrestrial environment by use of the infrared 8-14-μm band. This problem is usually treated by assumption of a perfectly diffuse surface or of diffuse background radiation. However, real surfaces and actual background radiation are not diffuse; therefore there will be a systematic measurement error. It is demonstrated that, in some cases, the deviations from a diffuse behavior lead to large errors in the measured emissivity. Past measurements made with simplifying assumptions should therefore be reevaluated and corrected. Recommendations are presented for improving experimental procedures in emissivity measurement.

  17. Estimation of Systematic Errors for Deuteron Electric Dipole Moment Search at COSY

    NASA Astrophysics Data System (ADS)

    Chekmenev, Stanislav

    2016-02-01

    An experimental method aimed at finding the permanent EDM of a charged particle was proposed by the JEDI (Jülich Electric Dipole moment Investigations) collaboration. EDMs can be observed through their influence on spin motion. The only possible way to perform a direct measurement is to use a storage ring. For this purpose, it was decided to carry out a first precursor experiment at the Cooler Synchrotron (COSY). Since the EDM of a particle violates CP invariance, it is expected to be tiny; the treatment of the various sources of systematic errors must therefore be carried out with a great level of precision. One should clearly understand how misalignments of the magnets affect the beam and the spin motion. It is planned to use an RF Wien filter for the precursor experiment. In this paper, simulations of the systematic effects for the RF Wien filter method are discussed.

  18. Observing transiting exoplanets: Removing systematic errors to constrain atmospheric chemistry and dynamics

    NASA Astrophysics Data System (ADS)

    Zellem, Robert Thomas

    2015-03-01

    The >1500 confirmed exoplanets span a wide range of planetary masses (~1 M_Earth-20 M_Jupiter), radii (~0.3 R_Earth-2 R_Jupiter), semi-major axes (~0.005-100 AU), orbital periods (~0.3-1 x 10^5 days), and host star spectral types. The effects of this widely-varying parameter space on a planetary atmosphere's chemistry and dynamics can be determined through transiting exoplanet observations. An exoplanet's atmospheric signal, either in absorption or emission, is on the order of 0.1%, which is dwarfed by telescope-specific systematic error sources of up to 60%. This thesis explores some of the major sources of error and their removal from space- and ground-based observations, specifically Spitzer/IRAC single-object photometry, IRTF/SpeX and Palomar/TripleSpec low-resolution single-slit near-infrared spectroscopy, and Kuiper/Mont4k multi-object photometry. The errors include pointing-induced uncertainties, airmass variations, seeing-induced signal loss, telescope jitter, and system variability. They are treated with detector-efficiency pixel mapping, normalization routines, a principal component analysis, binning with the geometric mean in Fourier space, characterization by a comparison star, repeatability, and stellar monitoring, to get to within a few times the photon noise limit. As a result, these observations provide strong measurements of an exoplanet's dynamical day-to-night heat transport, constrain its CH4 abundance, investigate emission mechanisms, and develop an observing strategy with smaller telescopes. The reduction methods presented here can also be applied to other existing and future platforms to identify and remove systematic errors. Until such sources of uncertainty are characterized with bright systems with large planetary signals for platforms such as the James Webb Space Telescope, for example, one cannot resolve smaller objects with more subtle spectral features, as expected of exo-Earths.

  19. First-Year Wilkinson Microwave Anisotropy Probe (WMAP) Observations: Data Processing Methods and Systematic Error Limits

    NASA Technical Reports Server (NTRS)

    Hinshaw, G.; Barnes, C.; Bennett, C. L.; Greason, M. R.; Halpern, M.; Hill, R. S.; Jarosik, N.; Kogut, A.; Limon, M.; Meyer, S. S.

    2003-01-01

    We describe the calibration and data processing methods used to generate full-sky maps of the cosmic microwave background (CMB) from the first year of Wilkinson Microwave Anisotropy Probe (WMAP) observations. Detailed limits on residual systematic errors are assigned based largely on analyses of the flight data supplemented, where necessary, with results from ground tests. The data are calibrated in flight using the dipole modulation of the CMB due to the observatory's motion around the Sun. This constitutes a full-beam calibration source. An iterative algorithm simultaneously fits the time-ordered data to obtain calibration parameters and pixelized sky map temperatures. The noise properties are determined by analyzing the time-ordered data with this sky signal estimate subtracted. Based on this, we apply a pre-whitening filter to the time-ordered data to remove a low level of 1/f noise. We infer and correct for a small (approx. 1%) transmission imbalance between the two sky inputs to each differential radiometer, and we subtract a small sidelobe correction from the 23 GHz (K band) map prior to further analysis. No other systematic error corrections are applied to the data. Calibration and baseline artifacts, including the response to environmental perturbations, are negligible. Systematic uncertainties are comparable to statistical uncertainties in the characterization of the beam response. Both are accounted for in the covariance matrix of the window function and are propagated to uncertainties in the final power spectrum. We characterize the combined upper limits to residual systematic uncertainties through the pixel covariance matrix.

  20. Suppression of Systematic Errors of Electronic Distance Meters for Measurement of Short Distances.

    PubMed

    Braun, Jaroslav; Štroner, Martin; Urban, Rudolf; Dvořáček, Filip

    2015-08-06

    In modern industrial geodesy, high demands are placed on the final accuracy, with expectations currently falling below 1 mm. The measurement methodology and surveying instruments used have to be adjusted to meet these stringent requirements, especially the total stations as the most often used instruments. A standard deviation of the measured distance is the accuracy parameter, commonly between 1 and 2 mm. This parameter is often discussed in conjunction with the determination of the real accuracy of measurements at very short distances (5-50 m) because it is generally known that this accuracy cannot be increased by simply repeating the measurement because a considerable part of the error is systematic. This article describes the detailed testing of electronic distance meters to determine the absolute size of their systematic errors, their stability over time, their repeatability and the real accuracy of their distance measurement. Twenty instruments (total stations) have been tested, and more than 60,000 distances in total were measured to determine the accuracy and precision parameters of the distance meters. Based on the experiments' results, calibration procedures were designed, including a special correction function for each instrument, whose usage reduces the standard deviation of the measurement of distance by at least 50%.

  1. Suppression of Systematic Errors of Electronic Distance Meters for Measurement of Short Distances

    PubMed Central

    Braun, Jaroslav; Štroner, Martin; Urban, Rudolf; Dvořáček, Filip

    2015-01-01

    In modern industrial geodesy, high demands are placed on the final accuracy, with expectations currently falling below 1 mm. The measurement methodology and surveying instruments used have to be adjusted to meet these stringent requirements, especially the total stations as the most often used instruments. A standard deviation of the measured distance is the accuracy parameter, commonly between 1 and 2 mm. This parameter is often discussed in conjunction with the determination of the real accuracy of measurements at very short distances (5–50 m) because it is generally known that this accuracy cannot be increased by simply repeating the measurement because a considerable part of the error is systematic. This article describes the detailed testing of electronic distance meters to determine the absolute size of their systematic errors, their stability over time, their repeatability and the real accuracy of their distance measurement. Twenty instruments (total stations) have been tested, and more than 60,000 distances in total were measured to determine the accuracy and precision parameters of the distance meters. Based on the experiments’ results, calibration procedures were designed, including a special correction function for each instrument, whose usage reduces the standard deviation of the measurement of distance by at least 50%. PMID:26258777
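
    The exact correction-function form used by the authors is not given in the abstract; as a hedged sketch of the general approach, the example below assumes a common distance-meter error model with additive, scale, and cyclic terms (period U equal to an assumed unit length of the phase measurement), fitted by least squares against reference distances from a calibration baseline. All numbers are synthetic.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    U = 10.0                                   # assumed phase-measurement unit, m
    d_ref = np.linspace(5.0, 50.0, 200)        # reference (true) distances, m
    # synthetic instrument error: additive + scale + cyclic terms, plus noise
    err = 0.0008 + 2e-5 * d_ref + 0.0005 * np.sin(2 * np.pi * d_ref / U)
    d_meas = d_ref + err + rng.normal(0.0, 0.0003, d_ref.size)

    def design(d):
        """Regressors of the assumed correction model."""
        return np.column_stack([np.ones_like(d), d,
                                np.sin(2 * np.pi * d / U),
                                np.cos(2 * np.pi * d / U)])

    # fit the correction coefficients against the reference baseline
    coef, *_ = np.linalg.lstsq(design(d_meas), d_ref - d_meas, rcond=None)

    def correct(d):
        return d + design(d) @ coef

    print(np.std(d_ref - d_meas))              # before correction
    print(np.std(d_ref - correct(d_meas)))     # after: roughly the noise floor
    ```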

  2. Improving Photometry and Stellar Signal Preservation with Pixel-Level Systematic Error Correction

    NASA Technical Reports Server (NTRS)

    Kolodziejczak, Jeffrey J.; Smith, Jeffrey C.; Jenkins, Jon M.

    2013-01-01

    The Kepler Mission has demonstrated that excellent stellar photometric performance can be achieved using apertures constructed from optimally selected CCD pixels. The clever methods used to correct for systematic errors, while very successful, still have some limitations in their ability to extract long-term trends in stellar flux. They also leave poorly correlated bias sources, such as drifting moiré pattern, uncorrected. We will illustrate several approaches where applying systematic error correction algorithms to the pixel time series, rather than to the co-added raw flux time series, provides significant advantages. Examples include spatially localized determination of time-varying moiré pattern biases, greater sensitivity to radiation-induced pixel sensitivity drops (SPSDs), improved precision of co-trending basis vectors (CBV), and a means of distinguishing the stellar variability from co-trending terms even when they are correlated. For the last item, the approach enables physical interpretation of appropriately scaled coefficients derived in the fit of pixel time series to the CBV as linear combinations of various spatial derivatives of the pixel response function (PRF). We demonstrate that the residuals of a fit of so-derived pixel coefficients to various PRF-related components can be deterministically interpreted in terms of physically meaningful quantities, such as the component of the stellar flux time series which is correlated with the CBV, as well as relative pixel gain, proper motion, and parallax. The approach also enables us to parameterize and assess the limiting factors in the uncertainties in these quantities.

  3. An Examination of the Spatial Distribution of Carbon Dioxide and Systematic Errors

    NASA Technical Reports Server (NTRS)

    Coffey, Brennan; Gunson, Mike; Frankenberg, Christian; Osterman, Greg

    2011-01-01

    The industrial period and modern age are characterized by combustion of coal, oil, and natural gas for primary energy and transportation, leading to rising levels of atmospheric CO2. This increase, which is being carefully measured, has ramifications throughout the biological world. Through remote sensing, it is possible to measure how many molecules of CO2 lie in a defined column of air. However, other gases and particles are present in the atmosphere, such as aerosols and water, which make such measurements more complicated. Understanding the detailed geometry and path length of the observation is vital to computing the concentration of CO2. By comparing these satellite readings with ground-truth data (TCCON), the systematic errors arising from these sources can be assessed. Once the error is understood, it can be corrected for in the retrieval algorithms to create a set of data closer to the TCCON measurements. Using this process, the algorithms are being developed to reduce bias to within 0.1% of the true value worldwide. At this stage, the accuracy is within 1%, but by correcting small errors contained in the algorithms, such as accounting for the scattering of sunlight, the desired accuracy can be achieved.

  4. SU-D-BRD-07: Evaluation of the Effectiveness of Statistical Process Control Methods to Detect Systematic Errors For Routine Electron Energy Verification

    SciTech Connect

    Parker, S

    2015-06-15

    Purpose: To evaluate the ability of statistical process control methods to detect systematic errors when using a two dimensional (2D) detector array for routine electron beam energy verification. Methods: Electron beam energy constancy was measured using an aluminum wedge and a 2D diode array on four linear accelerators. Process control limits were established. Measurements were recorded in control charts and compared with both calculated process control limits and TG-142 recommended specification limits. The data was tested for normality, process capability and process acceptability. Additional measurements were recorded while systematic errors were intentionally introduced. Systematic errors included shifts in the alignment of the wedge, incorrect orientation of the wedge, and incorrect array calibration. Results: Control limits calculated for each beam were smaller than the recommended specification limits. Process capability and process acceptability ratios were greater than one in all cases. All data was normally distributed. Shifts in the alignment of the wedge were most apparent for low energies. The smallest shift (0.5 mm) was detectable using process control limits in some cases, while the largest shift (2 mm) was detectable using specification limits in only one case. The wedge orientation tested did not affect the measurements as this did not affect the thickness of aluminum over the detectors of interest. Array calibration dependence varied with energy and selected array calibration. 6 MeV was the least sensitive to array calibration selection while 16 MeV was the most sensitive. Conclusion: Statistical process control methods demonstrated that the data distribution was normally distributed, the process was capable of meeting specifications, and that the process was centered within the specification limits. Though not all systematic errors were distinguishable from random errors, process control limits increased the ability to detect systematic errors.

  5. The Thirty Gigahertz Instrument Receiver for the QUIJOTE Experiment: Preliminary Polarization Measurements and Systematic-Error Analysis.

    PubMed

    Casas, Francisco J; Ortiz, David; Villa, Enrique; Cano, Juan L; Cagigas, Jaime; Pérez, Ana R; Aja, Beatriz; Terán, J Vicente; de la Fuente, Luisa; Artal, Eduardo; Hoyland, Roger; Génova-Santos, Ricardo

    2015-08-05

    This paper presents preliminary polarization measurements and systematic-error characterization of the Thirty Gigahertz Instrument receiver developed for the QUIJOTE experiment. The instrument has been designed to measure the polarization of Cosmic Microwave Background radiation from the sky, obtaining the Q, U, and I Stokes parameters of the incoming signal simultaneously. Two kinds of linearly polarized input signals have been used as excitations in the polarimeter measurement tests in the laboratory; these show consistent results in terms of the Stokes parameters obtained. A measurement-based systematic-error characterization technique has been used in order to determine the possible sources of instrumental errors and to assist in the polarimeter calibration process.

  6. Comparison of the sensitivity to systematic errors between nonadiabatic non-Abelian geometric gates and their dynamical counterparts

    NASA Astrophysics Data System (ADS)

    Zheng, Shi-Biao; Yang, Chui-Ping; Nori, Franco

    2016-03-01

    We investigate the effects of systematic errors of the control parameters on single-qubit gates based on nonadiabatic non-Abelian geometric holonomies and those relying on purely dynamical evolution. It is explicitly shown that the systematic error in the Rabi frequency of the control fields affects these two kinds of gates in different ways. In the presence of this systematic error, the transformation produced by the nonadiabatic non-Abelian geometric gate is not unitary in the computational space, and the resulting gate infidelity is larger than that with the dynamical method. Our results provide a theoretical basis for choosing a suitable method for implementing elementary quantum gates in physical systems, where the systematic noises are the dominant noise source.
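
    For the dynamical half of the comparison only, here is a minimal numpy check (not from the paper) of how a fractional systematic error eps in the Rabi frequency translates into gate infidelity for a single-qubit rotation; the geometric-gate side requires the full holonomic construction and is not sketched here.

    ```python
    import numpy as np

    # Pauli X; a dynamical rotation about x by angle theta is exp(-i theta sx / 2)
    sx = np.array([[0, 1], [1, 0]], dtype=complex)

    def rot(theta):
        return np.cos(theta / 2) * np.eye(2) - 1j * np.sin(theta / 2) * sx

    # a systematic Rabi-frequency error eps makes the realized angle theta*(1+eps)
    theta, eps = np.pi / 2, 0.01
    U_ideal, U_err = rot(theta), rot(theta * (1 + eps))
    fidelity = abs(np.trace(U_ideal.conj().T @ U_err)) / 2
    print(1 - fidelity)   # ~ (theta * eps)**2 / 8 for small eps
    ```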

  7. Generalized additive models and Lucilia sericata growth: assessing confidence intervals and error rates in forensic entomology.

    PubMed

    Tarone, Aaron M; Foran, David R

    2008-07-01

    Forensic entomologists use blow fly development to estimate a postmortem interval. Although accurate, fly age estimates can be imprecise for older developmental stages and no standard means of assigning confidence intervals exists. Presented here is a method for modeling growth of the forensically important blow fly Lucilia sericata, using generalized additive models (GAMs). Eighteen GAMs were created to predict the extent of juvenile fly development, encompassing developmental stage, length, weight, strain, and temperature data, collected from 2559 individuals. All measures were informative, explaining up to 92.6% of the deviance in the data, though strain and temperature exerted negligible influences. Predictions made with an independent data set allowed for a subsequent examination of error. Estimates using length and developmental stage were within 5% of true development percent during the feeding portion of the larval life cycle, while predictions for postfeeding third instars were less precise, but within expected error.

  8. The Cosine Error: A Bayesian Procedure for Treating a Non-repetitive Systematic Effect

    NASA Astrophysics Data System (ADS)

    Lira, Ignacio; Grientschnig, Dieter

    2016-08-01

    An inconsistency with respect to variable transformations in our previous treatment of the cosine error example with repositioning (Metrologia, vol. 47, pp. R1-R14) is pointed out. The problem refers to the measurement of the vertical height of a column of liquid in a manometer. A systematic effect arises because of the possible deviation of the measurement axis from the vertical, which may be different each time the measurement is taken. A revised procedure for treating this problem is proposed; it consists in straightforward application of Bayesian statistics using a conditional reference prior with partial information. In most practical applications, the numerical differences between the two procedures will be negligible, so the interest of the revised one is mainly of conceptual nature. Nevertheless, similar measurement models may appear in other contexts, for example, in intercomparisons, so the present investigation may serve as a warning to analysts against applying the same methodology we used in our original approach to the present problem.

  9. Calibration and systematic error analysis for the COBE DMR 4-year sky maps

    SciTech Connect

    Kogut, A.; Banday, A.J.; Bennett, C.L.; Gorski, K.M.; Hinshaw,G.; Jackson, P.D.; Keegstra, P.; Lineweaver, C.; Smoot, G.F.; Tenorio,L.; Wright, E.L.

    1996-01-04

    The Differential Microwave Radiometers (DMR) instrument aboard the Cosmic Background Explorer (COBE) has mapped the full microwave sky to a mean sensitivity of 26 μK per 7° field of view. The absolute calibration is determined to 0.7 percent with drifts smaller than 0.2 percent per year. We have analyzed both the raw differential data and the pixelized sky maps for evidence of contaminating sources such as solar system foregrounds, instrumental susceptibilities, and artifacts from data recovery and processing. Most systematic effects couple only weakly to the sky maps. The largest uncertainties in the maps result from the instrument susceptibility to Earth's magnetic field, microwave emission from Earth, and upper limits to potential effects at the spacecraft spin period. Systematic effects in the maps are small compared to either the noise or the celestial signal: the 95 percent confidence upper limit for the pixel-pixel rms from all identified systematics is less than 6 μK in the worst channel. A power spectrum analysis of the (A-B)/2 difference maps shows no evidence for additional undetected systematic effects.

  10. Strategies for Assessing Diffusion Anisotropy on the Basis of Magnetic Resonance Images: Comparison of Systematic Errors

    PubMed Central

    Boujraf, Saïd

    2014-01-01

    Diffusion-weighted imaging uses the signal loss associated with the random thermal motion of water molecules in the presence of magnetic field gradients to derive a number of parameters that reflect the translational mobility of the water molecules in tissues. With a suitable experimental set-up, it is possible to calculate all the elements of the local diffusion tensor (DT) and derived parameters describing the behavior of the water molecules in each voxel. One of the emerging applications of the information obtained is an interpretation of the diffusion anisotropy in terms of the architecture of the underlying tissue. These interpretations can only be made provided the experimental data are sufficiently accurate. However, the DT results are susceptible to two systematic error sources: on the one hand, the presence of signal noise can lead to artificial divergence of the diffusivities; on the other hand, the use of a simplified model for the interaction of the protons with the diffusion-weighting and imaging field gradients (b-matrix calculation), common in the clinical setting, also leads to deviations in the derived diffusion characteristics. In this paper, we study the importance of these two sources of error on the basis of experimental data obtained on a clinical magnetic resonance imaging system for an isotropic phantom using a state-of-the-art single-shot echo planar imaging sequence. Our results show that optimal diffusion imaging requires combining a correct calculation of the b-matrix and a sufficiently large signal-to-noise ratio. PMID:24761372
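
    A small Monte-Carlo sketch of the first error source (this is not the paper's phantom analysis): magnitude (Rician-like) noise biases the apparent diffusion coefficient computed from a two-point measurement, because the noise floor inflates the weaker diffusion-weighted signal more than the b=0 signal. All values are hypothetical but typical.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    D_true, b = 1.0e-3, 1000.0           # mm^2/s and s/mm^2 (typical values)
    S0, n = 100.0, 100_000               # noise-free b=0 signal, sample count
    Sb = S0 * np.exp(-b * D_true)        # noise-free diffusion-weighted signal

    def magnitude(s, sigma):
        """Magnitude image value: complex signal with Gaussian noise per channel."""
        return np.hypot(s + rng.normal(0.0, sigma, n), rng.normal(0.0, sigma, n))

    for sigma in (1e-9, 2.0, 5.0):       # near-zero, moderate, strong noise
        adc = -np.log(magnitude(Sb, sigma) / magnitude(S0, sigma)) / b
        print(f"sigma={sigma:g}: mean ADC = {adc.mean():.4e} mm^2/s")
    ```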

  11. X-ray optics metrology limited by random noise, instrumental drifts, and systematic errors

    SciTech Connect

    Yashchuk, Valeriy V.; Anderson, Erik H.; Barber, Samuel K.; Cambie, Rossana; Celestre, Richard; Conley, Raymond; Goldberg, Kenneth A.; McKinney, Wayne R.; Morrison, Gregory; Takacs, Peter Z.; Voronov, Dmitriy L.; Yuan, Sheng; Padmore, Howard A.

    2010-07-09

    Continuous, large-scale efforts to improve and develop third- and fourth-generation synchrotron radiation light sources for unprecedented high-brightness, low-emittance, and coherent x-ray beams demand diffracting and reflecting x-ray optics suitable for micro- and nano-focusing, brightness preservation, and super-high resolution. One of the major impediments to the development of x-ray optics with the required beamline performance is the inadequate present level of optical and at-wavelength metrology and the insufficient integration of the metrology into the fabrication process and into beamlines. Based on our experience at the ALS Optical Metrology Laboratory, we review the experimental methods and techniques that allow us to mitigate significant optical metrology problems related to random, systematic, and drift errors with super-high-quality x-ray optics. Measurement errors below 0.2 μrad have become routine. We present recent results from the ALS on temperature-stabilized nano-focusing optics and dedicated at-wavelength metrology. The international effort to develop a next-generation Optical Slope Measuring System (OSMS) to address these problems is also discussed. Finally, we analyze the remaining obstacles to further improvement of beamline x-ray optics and dedicated metrology, and highlight the ways we see to overcome the problems.

  12. Refractive Errors and Concomitant Strabismus: A Systematic Review and Meta-analysis

    PubMed Central

    Tang, Shu Min; Chan, Rachel Y. T.; Bin Lin, Shi; Rong, Shi Song; Lau, Henry H. W.; Lau, Winnie W. Y.; Yip, Wilson W. K.; Chen, Li Jia; Ko, Simon T. C.; Yam, Jason C. S.

    2016-01-01

    This systematic review and meta-analysis evaluates the risk of development of concomitant strabismus due to refractive errors. Eligible studies published from 1946 to April 1, 2016 were identified from MEDLINE and EMBASE that evaluated any kind of refractive error (myopia, hyperopia, astigmatism and anisometropia) as an independent factor for concomitant exotropia and concomitant esotropia. In total, 5065 published records were retrieved for screening, 157 of which were eligible for detailed evaluation. Finally, 7 population-based studies involving 23,541 study subjects met our criteria for meta-analysis. The combined OR showed that myopia was a risk factor for exotropia (OR: 5.23, P = 0.0001). We found hyperopia had a dose-related effect for esotropia (OR for a spherical equivalent [SE] of 2–3 diopters [D]: 10.16, P = 0.01; OR for an SE of 3–4D: 17.83, P < 0.0001; OR for an SE of 4–5D: 41.01, P < 0.0001; OR for an SE of ≥5D: 162.68, P < 0.0001). Sensitivity analysis indicated our results were robust. The results of this study confirm myopia as a risk factor for concomitant exotropia and identify a dose-related effect of hyperopia as a risk factor for concomitant esotropia. PMID:27731389
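
    For readers unfamiliar with how a combined OR is formed, the sketch below shows generic fixed-effect inverse-variance pooling of log odds ratios; the per-study values are hypothetical, and the paper's own data and model (plausibly random-effects) are not reproduced.

    ```python
    import numpy as np

    # hypothetical per-study odds ratios with 95% confidence intervals
    or_i = np.array([4.1, 5.9, 6.5])
    lo_i = np.array([2.0, 3.1, 2.8])
    hi_i = np.array([8.4, 11.2, 15.1])

    log_or = np.log(or_i)
    se = (np.log(hi_i) - np.log(lo_i)) / (2 * 1.96)   # SE from the CI width
    w = 1.0 / se**2                                   # inverse-variance weights
    pooled = np.sum(w * log_or) / np.sum(w)           # fixed-effect pooled log-OR
    se_pooled = np.sqrt(1.0 / np.sum(w))
    print(np.exp(pooled),
          np.exp(pooled - 1.96 * se_pooled),          # pooled OR with 95% CI
          np.exp(pooled + 1.96 * se_pooled))
    ```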

  13. Systematic and Statistical Errors Associated with Nuclear Decay Constant Measurements Using the Counting Technique

    NASA Astrophysics Data System (ADS)

    Koltick, David; Wang, Haoyu; Liu, Shih-Chieh; Heim, Jordan; Nistor, Jonathan

    2016-03-01

    Typical nuclear decay constants are measured at the accuracy level of 10^-2. There are numerous applications, such as tests of unconventional theories, dating of materials, and long-term inventory evolution, which require decay-constant accuracies at the level of 10^-4 to 10^-5. The statistical and systematic errors associated with precision measurements of decays using the counting technique are presented. Precision requires high count rates, which introduces time-dependent dead-time and pile-up corrections. An approach to overcome these issues by continuously recording the detector current is presented. Other systematic corrections, including the time-dependent dead time due to background radiation, the control of target motion and of radiation flight-path variation due to environmental conditions, and the time-dependent effects caused by scattered events, are also discussed. The incorporation of blind experimental techniques can help make the measurement independent of past results. A spectrometer design and data analysis scheme that can accomplish these goals is reviewed. The author would like to thank TechSource, Inc. and Advanced Physics Technologies, LLC. for their support in this work.

  14. Impact of instrumental systematic errors on fine-structure constant measurements with quasar spectra

    NASA Astrophysics Data System (ADS)

    Whitmore, Jonathan B.; Murphy, Michael T.

    2015-02-01

    We present a new `supercalibration' technique for measuring systematic distortions in the wavelength scales of high-resolution spectrographs. By comparing spectra of `solar twin' stars or asteroids with a reference laboratory solar spectrum, distortions in the standard thorium-argon calibration can be tracked with ~10 m/s precision over the entire optical wavelength range, on scales of both echelle orders (~50-100 Å) and entire spectrograph arms (~1000-3000 Å). Using archival spectra from the past 20 yr, we have probed the supercalibration history of the Very Large Telescope-Ultraviolet and Visible Echelle Spectrograph (VLT-UVES) and Keck-High Resolution Echelle Spectrograph (HIRES) spectrographs. We find that systematic errors in their wavelength scales are ubiquitous and substantial, with long-range distortions varying between typically ±200 m/s per 1000 Å. We apply a simple model of these distortions to simulated spectra that characterize the large UVES and HIRES quasar samples which previously indicated possible evidence for cosmological variations in the fine-structure constant, α. The spurious deviations in α produced by the model closely match important aspects of the VLT-UVES quasar results at all redshifts and partially explain the HIRES results, though not self-consistently at all redshifts. That is, the apparent ubiquity, size and general characteristics of the distortions are capable of significantly weakening the evidence for variations in α from quasar absorption lines.
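
    A minimal sketch of the kind of distortion model described, under the assumption that the long-range distortion is a velocity shift varying linearly across a spectrograph arm; the slope and wavelength grid are hypothetical.

    ```python
    import numpy as np

    c = 299792458.0                          # speed of light, m/s
    wl = np.linspace(5000.0, 6000.0, 4096)   # hypothetical spectrograph arm, Å
    slope = 200.0 / 1000.0                   # m/s per Å (200 m/s per 1000 Å)
    v = slope * (wl - wl.mean())             # linear velocity distortion, ±100 m/s
    wl_distorted = wl * (1.0 + v / c)        # apply as a Doppler shift per pixel
    print(v.min(), v.max())
    ```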

  15. Mapping systematic errors in helium abundance determinations using Markov Chain Monte Carlo

    SciTech Connect

    Aver, Erik; Olive, Keith A.; Skillman, Evan D. E-mail: olive@umn.edu

    2011-03-01

    Monte Carlo techniques have been used to evaluate the statistical and systematic uncertainties in the helium abundances derived from extragalactic H II regions. The helium abundance is sensitive to several physical parameters associated with the H II region. In this work, we introduce Markov Chain Monte Carlo (MCMC) methods to efficiently explore the parameter space and determine the helium abundance, the physical parameters, and the uncertainties derived from observations of metal-poor nebulae. Experiments with synthetic data show that the MCMC method is superior to previous implementations (based on flux perturbation) in that it is not affected by biases due to non-physical parameter space. The MCMC analysis allows a detailed exploration of degeneracies and, in particular, of a false minimum that occurs at large values of optical depth in the He I emission lines. We demonstrate that introducing the electron temperature derived from the [O III] emission lines as a prior, in a very conservative manner, produces negligible bias and effectively eliminates the false minima occurring at large optical depth. We perform a frequentist analysis on data from several "high quality" systems. Likelihood plots illustrate degeneracies, asymmetries, and limits of the determination. In agreement with previous work, we find relatively large systematic errors, limiting the precision of the primordial helium abundance for currently available spectra.
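
    To make the method concrete, here is a minimal Metropolis MCMC sketch with a Gaussian prior on one parameter, in the spirit of the temperature prior described above; the two-parameter toy model and all numbers are hypothetical stand-ins, not the nebular emission model.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)

    def log_post(theta, y, sigma_y, prior_mu, prior_sd):
        """Toy log-posterior: Gaussian likelihood for data y given a
        degenerate 2-parameter model, plus a Gaussian prior on theta[1]
        that breaks the degeneracy (analogous to the temperature prior)."""
        model = theta[0] * np.exp(-theta[1])
        loglike = -0.5 * np.sum((y - model) ** 2 / sigma_y**2)
        logprior = -0.5 * ((theta[1] - prior_mu) / prior_sd) ** 2
        return loglike + logprior

    y, sigma_y = np.array([2.0, 2.1, 1.9]), 0.1
    theta, step = np.array([5.0, 1.0]), np.array([0.2, 0.1])
    lp = log_post(theta, y, sigma_y, 1.0, 0.5)
    chain = []
    for _ in range(20000):
        prop = theta + step * rng.normal(size=2)
        lp_prop = log_post(prop, y, sigma_y, 1.0, 0.5)
        if np.log(rng.uniform()) < lp_prop - lp:   # Metropolis acceptance
            theta, lp = prop, lp_prop
        chain.append(theta.copy())
    chain = np.asarray(chain)
    print(chain[5000:].mean(axis=0))               # posterior means, post burn-in
    ```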

  16. A table of integrals of the error function. II - Additions and corrections.

    NASA Technical Reports Server (NTRS)

    Geller, M.; Ng, E. W.

    1971-01-01

    Integrals of products of error functions with other functions are presented, taking into account a combination of the error function with powers, a combination of the error function with exponentials and powers, a combination of the error function with exponentials of more complicated arguments, definite integrals from Laplace transforms, and a combination of the error function with trigonometric functions. Other integrals considered include a combination of the error function with logarithms and powers, a combination of two error functions, and a combination of the error function with other special functions.
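
    As a quick numerical check of the kind of identity such tables collect (this particular entry is a standard result, not necessarily an entry from the paper):

    ```python
    import numpy as np
    from scipy.special import erf
    from scipy.integrate import quad

    # Check numerically: integral of erf(x) from 0 to a
    #   = a*erf(a) + (exp(-a**2) - 1) / sqrt(pi)
    a = 1.7
    numeric, _ = quad(erf, 0.0, a)
    closed = a * erf(a) + (np.exp(-a**2) - 1.0) / np.sqrt(np.pi)
    print(numeric, closed)   # agree to quadrature precision
    ```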

  17. The Shane-Wirtanen counts - Systematics and two-point correlation function [for astronomical map error analysis]

    NASA Technical Reports Server (NTRS)

    De Lapparent, V.; Kurtz, M. J.; Geller, M. J.

    1986-01-01

    Residual errors in the Seldner et al. (SSGP) map, which caused a break in both the correlation function (CF) and the filamentary appearance of the Shane-Wirtanen map, are examined. These errors, causing a residual rms fluctuation of 11 percent in the SSGP-corrected counts and a systematic rms offset of 8 percent in the mean count per plate, can be attributed to counting pattern and plate vignetting. Techniques for CF reconstruction in catalogs affected by plate-related systematic biases are examined, and it is concluded that accurate restoration may not be possible. Surveys designed to measure the CF at the depth of the SW counts on a scale of 2.5 deg must have systematic errors of less than or about 0.04 mag.

  18. Towards eliminating systematic errors caused by the experimental conditions in Biochemical Methane Potential (BMP) tests

    SciTech Connect

    Strömberg, Sten; Nistor, Mihaela; Liu, Jing

    2014-11-15

    Highlights: • The evaluated factors introduce significant systematic errors (10–38%) in BMP tests. • Ambient temperature (T) has the most substantial impact (∼10%) at low altitude. • Ambient pressure (p) has the most substantial impact (∼68%) at high altitude. • Continuous monitoring of T and p is not necessary for kinetic calculations. - Abstract: The Biochemical Methane Potential (BMP) test is increasingly recognised as a tool for selecting and pricing biomass material for production of biogas. However, the results for the same substrate often differ between laboratories and much work to standardise such tests is still needed. In the current study, the effects from four environmental factors (i.e. ambient temperature and pressure, water vapour content and initial gas composition of the reactor headspace) on the degradation kinetics and the determined methane potential were evaluated with a 2^4 full factorial design. Four substrates, with different biodegradation profiles, were investigated and the ambient temperature was found to be the most significant contributor to errors in the methane potential. Concerning the kinetics of the process, the environmental factors’ impact on the calculated rate constants was negligible. The impact of the environmental factors on the kinetic parameters and methane potential from performing a BMP test at different geographical locations around the world was simulated by adjusting the data according to the ambient temperature and pressure of some chosen model sites. The largest effect on the methane potential was registered from tests performed at high altitudes due to a low ambient pressure. The results from this study illustrate the importance of considering the environmental factors’ influence on volumetric gas measurement in BMP tests. This is essential to achieve trustworthy and standardised results that can be used by researchers and end users from all over the world.
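
    A generic sketch of a 2^4 full factorial analysis of the type described, with coded factor levels and hypothetical responses; main effects are estimated as the difference of means between the high and low levels of each factor.

    ```python
    import numpy as np
    from itertools import product

    # 2^4 full factorial design over the four environmental factors (coded ±1)
    factors = ["temperature", "pressure", "water_vapour", "headspace_gas"]
    design = np.array(list(product((-1, +1), repeat=4)))   # 16 runs

    # hypothetical responses: methane potential with a strong temperature effect
    rng = np.random.default_rng(5)
    y = 300 + 15 * design[:, 0] + 3 * design[:, 1] + rng.normal(0, 2, 16)

    # main effect = mean(y at +1) - mean(y at -1), i.e. twice the coded coefficient
    for k, name in enumerate(factors):
        effect = y[design[:, k] == 1].mean() - y[design[:, k] == -1].mean()
        print(f"{name:>14}: {effect:+.1f}")
    ```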

  19. Interventions to reduce wrong blood in tube errors in transfusion: a systematic review.

    PubMed

    Cottrell, Susan; Watson, Douglas; Eyre, Toby A; Brunskill, Susan J; Dorée, Carolyn; Murphy, Michael F

    2013-10-01

    This systematic review addresses the issue of wrong blood in tube (WBIT). The objective was to identify interventions that have been implemented, and their effectiveness, in reducing WBIT incidence in red blood cell transfusion. Eligible articles were identified through a comprehensive search of The Cochrane Library, MEDLINE, EMBASE, Cinahl, BNID, and the Transfusion Evidence Library to April 2013. Initial search criteria were wide, including primary intervention or observational studies, case reports, expert opinion, and guidelines. There was no restriction by study type, language, or status. Publications before 1995, reviews or reports of a secondary nature, studies of sampling errors outwith transfusion, and articles involving animals were excluded. The primary outcome was a reduction in errors. Study characteristics, outcomes measured, and methodological quality were extracted by 2 authors independently. The principal method of analysis was descriptive. A total of 12,703 references were initially identified. Preliminary secondary screening by 2 reviewers reduced the number of articles for detailed screening to 128. Eleven articles were eventually identified as eligible, resulting in 9 independent studies being included in the review. The overall finding was that all the identified interventions reduced WBIT incidence. Five studies measured the effect of a single intervention, for example, changes to blood sample labeling, weekly feedback, handwritten transfusion requests, or an electronic transfusion system. Four studies reported multiple interventions, including education, a second check of ID at sampling, and confirmatory sampling. It was not clear which intervention was the most effective. Sustainability of the effectiveness of interventions was also unclear. Targeted interventions, either single or multiple, can lead to a reduction in WBIT, but the sustainability of effectiveness is uncertain. Data on the pre- and postimplementation of

  20. Systematic Errors in Low-latency Gravitational Wave Parameter Estimation Impact Electromagnetic Follow-up Observations

    NASA Astrophysics Data System (ADS)

    Littenberg, Tyson B.; Farr, Ben; Coughlin, Scott; Kalogera, Vicky

    2016-03-01

    Among the most eagerly anticipated opportunities made possible by Advanced LIGO/Virgo are multimessenger observations of compact mergers. Optical counterparts may be short-lived, so rapid characterization of gravitational wave (GW) events is paramount for discovering electromagnetic signatures. One way to meet the demand for rapid GW parameter estimation is to trade off accuracy for speed, using waveform models with simplified treatment of the compact objects’ spin. We report on the systematic errors in GW parameter estimation suffered when using different spin approximations to recover generic signals. Component mass measurements can be biased by >5σ using simple-precession waveforms and in excess of 20σ when non-spinning templates are employed. This suggests that electromagnetic observing campaigns should not take a strict approach to selecting which LIGO/Virgo candidates warrant follow-up observations based on low-latency mass estimates. For sky localization, we find that searched areas are up to a factor of ~2 larger for non-spinning analyses, and are systematically larger for any of the simplified waveforms considered in our analysis. Distance biases for the non-precessing waveforms can be in excess of 100% and are largest when the spin angular momenta are in the orbital plane of the binary. We confirm that spin-aligned waveforms should be used for low-latency parameter estimation at the minimum. Including simple precession, though more computationally costly, mitigates biases except for signals with extreme precession effects. Our results shine a spotlight on the critical need for development of computationally inexpensive precessing waveforms and/or massively parallel algorithms for parameter estimation.

  1. The Additional Error of Inertial Sensors Induced by Hypersonic Flight Conditions.

    PubMed

    Karachun, Volodimir; Mel'nick, Viktorij; Korobiichuk, Igor; Nowicki, Michał; Szewczyk, Roman; Kobzar, Svitlana

    2016-01-01

    The emergence of hypersonic technology poses a new challenge for inertial navigation sensors, which are widely used in the aerospace industry. The main problems are extremely high temperatures, vibration of the fuselage, penetrating acoustic radiation, and shock N-waves. The nature of the additional errors of the gyroscopic inertial sensor with hydrostatic suspension components under operating conditions, generated by forced precession of the movable part of the suspension due to diffraction phenomena in acoustic fields, is explained. The cause of the disturbing moments in the form of the Coriolis inertia forces during the transition of the suspension surface into the category of impedance is revealed. The boundaries of occurrence of the features on the resonance wave match are described. The values of the "false" angular velocity resulting from the elastic-stress state of the suspension in the acoustic fields are determined. PMID:26927122

  2. The Additional Error of Inertial Sensors Induced by Hypersonic Flight Conditions

    PubMed Central

    Karachun, Volodimir; Mel’nick, Viktorij; Korobiichuk, Igor; Nowicki, Michał; Szewczyk, Roman; Kobzar, Svitlana

    2016-01-01

    The emergence of hypersonic technology poses a new challenge for inertial navigation sensors, which are widely used in the aerospace industry. The main problems are extremely high temperatures, vibration of the fuselage, penetrating acoustic radiation, and shock N-waves. The nature of the additional errors of the gyroscopic inertial sensor with hydrostatic suspension components under operating conditions, generated by forced precession of the movable part of the suspension due to diffraction phenomena in acoustic fields, is explained. The cause of the disturbing moments in the form of the Coriolis inertia forces during the transition of the suspension surface into the category of impedance is revealed. The boundaries of occurrence of the features on the resonance wave match are described. The values of the “false” angular velocity resulting from the elastic-stress state of the suspension in the acoustic fields are determined. PMID:26927122

  3. The Additional Error of Inertial Sensors Induced by Hypersonic Flight Conditions.

    PubMed

    Karachun, Volodimir; Mel'nick, Viktorij; Korobiichuk, Igor; Nowicki, Michał; Szewczyk, Roman; Kobzar, Svitlana

    2016-02-26

    The emergence of hypersonic technology pose a new challenge for inertial navigation sensors, widely used in aerospace industry. The main problems are: extremely high temperatures, vibration of the fuselage, penetrating acoustic radiation and shock N-waves. The nature of the additional errors of the gyroscopic inertial sensor with hydrostatic suspension components under operating conditions generated by forced precession of the movable part of the suspension due to diffraction phenomena in acoustic fields is explained. The cause of the disturbing moments in the form of the Coriolis inertia forces during the transition of the suspension surface into the category of impedance is revealed. The boundaries of occurrence of the features on the resonance wave match are described. The values of the "false" angular velocity as a result of the elastic-stress state of suspension in the acoustic fields are determined.

  2. Reducing impacts of systematic errors in the observation data on inversing ecosystem model parameters using different normalization methods

    NASA Astrophysics Data System (ADS)

    Zhang, L.; Xu, M.; Huang, M.; Yu, G.

    2009-11-01

    Modeling the ecosystem carbon cycle on regional and global scales is crucial to predicting future global atmospheric CO2 concentration, and thus global temperature, which features large uncertainties due mainly to the limitations in our knowledge and in the climate and ecosystem models. There is a growing body of research on parameter estimation against available carbon measurements to reduce model prediction uncertainty at regional and global scales. However, systematic errors in the observation data have rarely been investigated in the optimization procedures of previous studies. In this study, we examined the feasibility of reducing the impact of systematic errors on parameter estimation using normalization methods, and evaluated the effectiveness of three normalization methods (maximum normalization, min-max normalization, and z-score normalization) for inverting key parameters, for example the maximum carboxylation rate (Vcmax,25) at a reference temperature of 25°C, in a process-based ecosystem model for deciduous needle-leaf forests in northern China constrained by leaf area index (LAI) data. The LAI data used for parameter estimation were composed of the model output LAI (truth) plus designated systematic and random errors. We found that the estimation of Vcmax,25 could be severely biased with the composite LAI if no normalization was applied. Compared with the maximum normalization and min-max normalization methods, the z-score normalization method was the most robust in reducing the impact of systematic errors on parameter estimation. The most probable values of Vcmax,25 inverted from the z-score normalized LAI data were consistent with the true parameter values in the model inputs, though the estimation uncertainty increased with the magnitude of the random errors in the observations. We concluded that the z-score normalization method should be applied to observed or measured data to improve model parameter estimation.
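
    The three normalizations compared above are one-liners, and the key property is easy to verify: a gain-plus-offset systematic error in the observations cancels exactly under z-score normalization but not under the other two. A sketch with toy LAI values (not the paper's data):

    ```python
    import numpy as np

    def maximum_norm(x):
        return x / np.max(x)

    def min_max_norm(x):
        return (x - np.min(x)) / (np.max(x) - np.min(x))

    def z_score_norm(x):
        return (x - np.mean(x)) / np.std(x)

    lai_true = np.array([0.5, 1.8, 4.2, 5.1, 3.0, 1.2])   # toy "true" LAI
    lai_biased = 1.3 * lai_true + 0.4                     # gain + offset error

    print(np.allclose(z_score_norm(lai_true), z_score_norm(lai_biased)))  # True
    print(np.allclose(maximum_norm(lai_true), maximum_norm(lai_biased)))  # False
    ```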

  3. Medication errors in paediatric care: a systematic review of epidemiology and an evaluation of evidence supporting reduction strategy recommendations

    PubMed Central

    Miller, Marlene R; Robinson, Karen A; Lubomski, Lisa H; Rinke, Michael L; Pronovost, Peter J

    2007-01-01

    Background Although children are at the greatest risk for medication errors, little is known about the overall epidemiology of these errors, where the gaps in our knowledge are, and to what extent national medication error reduction strategies focus on children. Objective To synthesise peer-reviewed knowledge on children's medication errors and on recommendations to improve paediatric medication safety by a systematic literature review. Data sources PubMed, Embase and Cinahl from 1 January 2000 to 30 April 2005, and 11 national entities that have disseminated recommendations to improve medication safety. Study selection Inclusion criteria were peer-reviewed original data in the English language. Studies that did not separately report paediatric data were excluded. Data extraction Two reviewers screened articles for eligibility and for data extraction, and screened all national medication error reduction strategies for relevance to children. Data synthesis From 358 articles identified, 31 were included for data extraction. The definition of medication error was non-uniform across the studies. Dispensing and administering errors were the most poorly and non-uniformly evaluated. Overall, the distributional epidemiological estimates of the relative percentages of paediatric error types were: prescribing 3–37%, dispensing 5–58%, administering 72–75%, and documentation 17–21%. 26 unique recommendations for strategies to reduce medication errors were identified; none were based on paediatric evidence. Conclusions Medication errors occur across the entire spectrum of prescribing, dispensing, and administering, are common, and have a myriad of non-evidence-based potential reduction strategies. Further research in this area needs firmer standardisation of items such as dose ranges and definitions of medication errors, broader scope beyond inpatient prescribing errors, and prioritisation of the implementation of medication error reduction strategies. PMID:17403758

  4. Derivation and Application of a Global Albedo yielding an Optical Brightness To Physical Size Transformation Free of Systematic Errors

    NASA Technical Reports Server (NTRS)

    Mulrooney, Dr. Mark K.; Matney, Dr. Mark J.

    2007-01-01

    Orbital object data acquired via optical telescopes can play a crucial role in accurately defining the space environment. Radar systems probe the characteristics of small debris by measuring the reflected electromagnetic energy from an object of the same order of size as the wavelength of the radiation. This signal is affected by the electrical conductivity of the bulk of the debris object, as well as by its shape and orientation. Optical measurements use reflected solar radiation with wavelengths much smaller than the size of the objects. Just as with radar, the shape and orientation of an object are important, but we only need to consider the surface electrical properties of the debris material (i.e., the surface albedo), not the bulk electromagnetic properties. As a result, these two methods are complementary in that they measure somewhat independent physical properties to estimate the same thing: debris size. Short-arc optical observations, such as are typical of NASA's Liquid Mirror Telescope (LMT), give enough information to estimate an Assumed Circular Orbit (ACO) and an associated range. This information, combined with the apparent magnitude, can be used to estimate an "absolute" brightness (scaled to a fixed range and phase angle). This absolute magnitude is what is used to estimate debris size. However, shape and surface albedo effects make the size estimates subject to systematic and random errors, such that it is impossible to ascertain the size of an individual object with any certainty. As has been shown with radar debris measurements, though, that does not preclude the ability to estimate the size distribution of a number of objects statistically. After systematic errors have been eliminated (range errors, phase function assumptions, photometry), there remains a random geometric albedo distribution that relates object size to absolute magnitude. Measurements by the LMT of a subset of tracked debris objects with sizes estimated from their radar cross sections

  5. Evaluating IMRT and VMAT dose accuracy: Practical examples of failure to detect systematic errors when applying a commonly used metric and action levels

    SciTech Connect

    Nelms, Benjamin E.; Chan, Maria F.; Jarry, Geneviève; Lemire, Matthieu; Lowden, John; Hampton, Carnell

    2013-11-15

    …correctable after detection and diagnosis, and the uncorrectable errors provided useful information about system limitations, which is another key element of system commissioning. Conclusions: Many forms of relevant systematic errors can go undetected when the currently prevalent metrics for IMRT/VMAT commissioning are used. If alternative methods and metrics are used instead of (or in addition to) the conventional metrics, these errors are more likely to be detected, and only once they are detected can they be properly diagnosed and rooted out of the system. Removing systematic errors should be a goal not only of commissioning by the end users but also of product validation by the manufacturers. For any systematic errors that cannot be removed, detecting and quantifying them is important as it will help the physicist understand the limits of the system and work with the manufacturer on improvements. In summary, IMRT and VMAT commissioning, along with product validation, would benefit from the retirement of the 3%/3 mm passing rates as a primary metric of performance, and the adoption instead of tighter tolerances, more diligent diagnostics, and more thorough analysis.
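
    For context, the criticized 3%/3 mm passing rate is the fraction of points with gamma index <= 1. A minimal 1-D, global-normalization sketch (profiles and tolerances are illustrative, not the paper's cases):

    ```python
    import numpy as np

    def gamma_1d(ref_pos, ref_dose, eval_pos, eval_dose,
                 dose_tol=0.03, dist_tol=3.0):
        """1-D gamma index; dose_tol is fractional of global max, dist_tol in mm."""
        d_max = ref_dose.max()
        gammas = []
        for xr, dr in zip(ref_pos, ref_dose):
            dd = (eval_dose - dr) / (dose_tol * d_max)   # dose-difference term
            dx = (eval_pos - xr) / dist_tol              # distance-to-agreement term
            gammas.append(np.sqrt(dd**2 + dx**2).min())
        return np.array(gammas)

    x = np.linspace(0, 100, 201)                 # mm
    ref = np.exp(-((x - 50) / 20) ** 2)          # reference dose profile (toy)
    meas = 1.02 * np.exp(-((x - 51) / 20) ** 2)  # shifted and scaled "measurement"
    g = gamma_1d(x, ref, x, meas)
    print(f"3%/3 mm passing rate: {100 * (g <= 1).mean():.1f}%")
    ```

    A systematic 2% scaling plus a 1 mm shift can still pass nearly everywhere, which is exactly the insensitivity the abstract describes.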

  6. Sources of systematic error in calibrated BOLD based mapping of baseline oxygen extraction fraction.

    PubMed

    Blockley, Nicholas P; Griffeth, Valerie E M; Stone, Alan J; Hare, Hannah V; Bulte, Daniel P

    2015-11-15

    Recently a new class of calibrated blood oxygen level dependent (BOLD) functional magnetic resonance imaging (MRI) methods were introduced to quantitatively measure the baseline oxygen extraction fraction (OEF). These methods rely on two respiratory challenges and a mathematical model of the resultant changes in the BOLD functional MRI signal to estimate the OEF. However, this mathematical model does not include all of the effects that contribute to the BOLD signal, it relies on several physiological assumptions and it may be affected by intersubject physiological variability. The aim of this study was to investigate these sources of systematic error and their effect on estimating the OEF. This was achieved through simulation using a detailed model of the BOLD signal. Large ranges for intersubject variability in baseline physiological parameters such as haematocrit and cerebral blood volume were considered. Despite this the uncertainty in the relationship between the measured BOLD signals and the OEF was relatively low. Investigations of the physiological assumptions that underlie the mathematical model revealed that OEF measurements are likely to be overestimated if oxygen metabolism changes during hypercapnia or cerebral blood flow changes under hyperoxia. Hypoxic hypoxia was predicted to result in an underestimation of the OEF, whilst anaemic hypoxia was found to have only a minimal effect.

  7. The shrinking Sun: A systematic error in local correlation tracking of solar granulation

    NASA Astrophysics Data System (ADS)

    Löptien, B.; Birch, A. C.; Duvall, T. L.; Gizon, L.; Schou, J.

    2016-05-01

    Context. Local correlation tracking of granulation (LCT) is an important method for measuring horizontal flows in the photosphere. This method exhibits a systematic error that looks like a flow converging toward disk center, which is also known as the shrinking-Sun effect. Aims: We aim to study the nature of the shrinking-Sun effect for continuum intensity data and to derive a simple model that can explain its origin. Methods: We derived LCT flow maps by running the LCT code Fourier Local Correlation Tracking (FLCT) on tracked and remapped continuum intensity maps provided by the Helioseismic and Magnetic Imager (HMI) onboard the Solar Dynamics Observatory (SDO). We also computed flow maps from synthetic continuum images generated from STAGGER code simulations of solar surface convection. We investigated the origin of the shrinking-Sun effect by generating an average granule from synthetic data from the simulations. Results: The LCT flow maps derived from the HMI data and the simulations exhibit a shrinking-Sun effect of comparable magnitude. The origin of this effect is related to the apparent asymmetry of granulation originating from radiative transfer effects when observing with a viewing angle inclined from vertical. This causes, in combination with the expansion of the granules, an apparent motion toward disk center.

  8. Analysis of systematic errors of the ASM/RXTE monitor and GT-48 γ-ray telescope

    NASA Astrophysics Data System (ADS)

    Fidelis, V. V.

    2011-06-01

    The observational data concerning variations of the light curves of supernova remnants (the Crab Nebula, Cassiopeia A, Tycho Brahe) and the pulsar Vela on a 14-day timescale, which may be attributed to systematic errors of the ASM/RXTE monitor, are presented. The experimental systematic errors of the GT-48 γ-ray telescope in the mono mode of operation were also determined. For this, the observational data of TeV J2032+4130 (Cyg γ-2, according to the Crimean version) were used, and the stationary nature of its γ-ray emission was confirmed by long-term observations performed with HEGRA and MAGIC. The results of this research allow us to draw the following conclusions: (1) light curves of supernova remnants averaged over long observing periods have false statistically significant flux variations, (2) the level of systematic errors is proportional to the registered flux and decreases with increasing temporal scale of averaging, (3) the light curves of sources may be modulated with a one-year period, and (4) the systematic errors of the GT-48 γ-ray telescope, combining observations in the mono mode and data processing with the stereo algorithm, amount to 0.12 min⁻¹.

  9. UNDERSTANDING SYSTEMATIC MEASUREMENT ERROR IN THERMAL-OPTICAL ANALYSIS FOR PM BLACK CARBON USING RESPONSE SURFACES AND SURFACE CONFIDENCE INTERVALS

    EPA Science Inventory

    Results from a NIST-EPA Interagency Agreement on Understanding Systematic Measurement Error in Thermal-Optical Analysis for PM Black Carbon Using Response Surfaces and Surface Confidence Intervals will be presented at the American Association for Aerosol Research (AAAR) 24th Annu...

  10. An online model correction method based on an inverse problem: Part II—systematic model error correction

    NASA Astrophysics Data System (ADS)

    Xue, Haile; Shen, Xueshun; Chou, Jifan

    2015-11-01

    An online systematic error correction is presented and examined as a technique to improve the accuracy of real-time numerical weather prediction, based on a dataset of model errors (MEs) over past intervals. Given the analyses, the ME in each interval (6 h) between two analyses can be obtained iteratively by introducing an unknown tendency term into the prediction equation, as shown in Part I of this two-paper series. In this part, after analyzing the 5-year (2001-2005) GRAPES-GFS (Global Forecast System of the Global and Regional Assimilation and Prediction System) error patterns and evolution, a systematic model error correction based on a least-squares approach applied to the past MEs is given. To test the correction, we applied the approach in GRAPES-GFS for July 2009 and January 2010. The datasets for the initial condition and SST used in this study were based on NCEP (National Centers for Environmental Prediction) FNL (final) data. The results indicated that the systematically underestimated Northern Hemisphere equator-to-pole geopotential gradient and westerly winds of GRAPES-GFS were largely restored, and the biases of temperature and wind in the tropics were strongly reduced. The correction therefore results in a more skillful forecast, with lower mean bias and root-mean-square error and a higher anomaly correlation coefficient.

  11. The Thirty Gigahertz Instrument Receiver for the QUIJOTE Experiment: Preliminary Polarization Measurements and Systematic-Error Analysis

    PubMed Central

    Casas, Francisco J.; Ortiz, David; Villa, Enrique; Cano, Juan L.; Cagigas, Jaime; Pérez, Ana R.; Aja, Beatriz; Terán, J. Vicente; de la Fuente, Luisa; Artal, Eduardo; Hoyland, Roger; Génova-Santos, Ricardo

    2015-01-01

    This paper presents preliminary polarization measurements and systematic-error characterization of the Thirty Gigahertz Instrument receiver developed for the QUIJOTE experiment. The instrument has been designed to measure the polarization of Cosmic Microwave Background radiation from the sky, obtaining the Q, U, and I Stokes parameters of the incoming signal simultaneously. Two kinds of linearly polarized input signals have been used as excitations in the polarimeter measurement tests in the laboratory; these show consistent results in terms of the Stokes parameters obtained. A measurement-based systematic-error characterization technique has been used in order to determine the possible sources of instrumental errors and to assist in the polarimeter calibration process. PMID:26251906

  12. Standard addition/absorption detection microfluidic system for salt error-free nitrite determination.

    PubMed

    Ahn, Jae-Hoon; Jo, Kyoung Ho; Hahn, Jong Hoon

    2015-07-30

    A continuous-flow microfluidic chip-based standard addition/absorption detection system has been developed for accurate determination of nitrite in water of varying salinity. The absorption detection of nitrite is made via color development using the Griess reaction. We have found the yield of the reaction is significantly affected by salinity (e.g., -12% error for a 30‰ NaCl, 50.0 μg L⁻¹ N-NO₂⁻ solution). The microchip has been designed to perform standard addition, color development, and absorbance detection in sequence. To effectively block stray light, the microchip made from black poly(dimethylsiloxane) is placed on the top of a compact housing that accommodates a light-emitting diode, a photomultiplier tube, and an interference filter, where the light source and the detector are optically isolated. An 80-mm liquid-core waveguide mounted on the chip externally has been employed as the absorption detection flow cell. These designs for optics secure a wide linear response range (up to 500 μg L⁻¹ N-NO₂⁻) and a low detection limit (0.12 μg L⁻¹ N-NO₂⁻ = 8.6 nM N-NO₂⁻, S/N = 3). From determination of nitrite in standard samples and real samples collected from an estuary, it has been demonstrated that our microfluidic system is highly accurate (<1% RSD, n = 3) and precise (<1% RSD, n = 3). PMID:26320643
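
    The standard-addition step is what removes the salt error: the sample matrix rescales the sensitivity (slope) but not the extrapolated intercept, so the analyte concentration survives. A sketch with hypothetical absorbance readings:

    ```python
    import numpy as np

    added = np.array([0.0, 25.0, 50.0, 100.0])       # ug/L N-NO2- spiked in
    signal = np.array([0.051, 0.077, 0.102, 0.153])  # absorbances (hypothetical)

    slope, intercept = np.polyfit(added, signal, 1)
    conc = intercept / slope                  # magnitude of the x-intercept
    print(f"sample concentration ~ {conc:.1f} ug/L")
    ```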

  13. SU-E-T-550: Range Effects in Proton Therapy Caused by Systematic Errors in the Stoichiometric Calibration

    SciTech Connect

    Doolan, P; Dias, M; Collins Fekete, C; Seco, J

    2014-06-01

    Purpose: The procedure for proton treatment planning involves the conversion of the patient's X-ray CT from Hounsfield units into relative stopping powers (RSP), using a stoichiometric calibration curve (Schneider 1996). In clinical practice a 3.5% margin is added to account for the range uncertainty introduced by this process and other errors. RSPs for real tissues are calculated using composition data and the Bethe-Bloch formula (ICRU 1993). The purpose of this work is to investigate the impact that systematic errors in the stoichiometric calibration have on the proton range. Methods: Seven tissue inserts of the Gammex 467 phantom were imaged using our CT scanner. Their known chemical compositions (Watanabe 1999) were then used to calculate the theoretical RSPs, using the same formula as would be used for human tissues in the stoichiometric procedure. The actual RSPs of these inserts were measured using a Bragg peak shift measurement in the proton beam at our institution. Results: The theoretical calculation of the RSP was lower than the measured RSP values, by a mean/maximum error of −1.5%/−3.6%. For all seven inserts the theoretical approach underestimated the RSP, with errors variable across the range of Hounsfield units. Systematic errors for lung (average of two inserts), adipose, and cortical bone were −3.0%, −2.1%, and −0.5%, respectively. Conclusion: There is a systematic underestimation caused by the theoretical calculation of RSP, a crucial step in the stoichiometric calibration procedure. As such, we propose that proton calibration curves should be based on measured RSPs. Investigations will be made to see if the same systematic errors exist for biological tissues. The impact of these differences on the range of proton beams, for phantoms and patient scenarios, will be investigated. This project was funded equally by the Engineering and Physical Sciences Research Council (UK) and Ion Beam Applications (Louvain-La-Neuve, Belgium).

  14. Assessment of Systematic Chromatic Errors that Impact Sub-1% Photometric Precision in Large-area Sky Surveys

    NASA Astrophysics Data System (ADS)

    Li, T. S.; DePoy, D. L.; Marshall, J. L.; Tucker, D.; Kessler, R.; Annis, J.; Bernstein, G. M.; Boada, S.; Burke, D. L.; Finley, D. A.; James, D. J.; Kent, S.; Lin, H.; Marriner, J.; Mondrik, N.; Nagasawa, D.; Rykoff, E. S.; Scolnic, D.; Walker, A. R.; Wester, W.; Abbott, T. M. C.; Allam, S.; Benoit-Lévy, A.; Bertin, E.; Brooks, D.; Capozzi, D.; Carnero Rosell, A.; Carrasco Kind, M.; Carretero, J.; Crocce, M.; Cunha, C. E.; D’Andrea, C. B.; da Costa, L. N.; Desai, S.; Diehl, H. T.; Doel, P.; Flaugher, B.; Fosalba, P.; Frieman, J.; Gaztanaga, E.; Goldstein, D. A.; Gruen, D.; Gruendl, R. A.; Gutierrez, G.; Honscheid, K.; Kuehn, K.; Kuropatkin, N.; Maia, M. A. G.; Melchior, P.; Miller, C. J.; Miquel, R.; Mohr, J. J.; Neilsen, E.; Nichol, R. C.; Nord, B.; Ogando, R.; Plazas, A. A.; Romer, A. K.; Roodman, A.; Sako, M.; Sanchez, E.; Scarpine, V.; Schubnell, M.; Sevilla-Noarbe, I.; Smith, R. C.; Soares-Santos, M.; Sobreira, F.; Suchyta, E.; Tarle, G.; Thomas, D.; Vikram, V.; The DES Collaboration

    2016-06-01

    Meeting the science goals for many current and future ground-based optical large-area sky surveys requires that the calibrated broadband photometry be both stable in time and uniform over the sky to 1% precision or better. Past and current surveys have achieved photometric precision of 1%–2% by calibrating the survey’s stellar photometry with repeated measurements of a large number of stars observed in multiple epochs. The calibration techniques employed by these surveys only consider the relative frame-by-frame photometric zeropoint offset and the focal plane position-dependent illumination corrections, which are independent of the source color. However, variations in the wavelength dependence of the atmospheric transmission and the instrumental throughput induce source color-dependent systematic errors. These systematic errors must also be considered to achieve the most precise photometric measurements. In this paper, we examine such systematic chromatic errors (SCEs) using photometry from the Dark Energy Survey (DES) as an example. We first define a natural magnitude system for DES and calculate the systematic errors on stellar magnitudes when the atmospheric transmission and instrumental throughput deviate from the natural system. We conclude that the SCEs caused by the change of airmass in each exposure, the change of the precipitable water vapor and aerosol in the atmosphere over time, and the non-uniformity of instrumental throughput over the focal plane can be up to 2% in some bandpasses. We then compare the calculated SCEs with the observed DES data. For the test sample data, we correct these errors using measurements of the atmospheric transmission and instrumental throughput from auxiliary calibration systems. The residual after correction is less than 0.3%. Moreover, we calculate such SCEs for Type Ia supernovae and elliptical galaxies and find that the chromatic errors for non-stellar objects are redshift-dependent and can be larger than those for stars.
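
    A toy synthetic-photometry check of the effect: tilt a bandpass slightly, and sources with different spectral slopes shift by different amounts, leaving a color-dependent residual that no gray zeropoint can absorb. All curves below are made up; the DES analysis uses measured throughputs.

    ```python
    import numpy as np

    lam = np.linspace(400.0, 550.0, 500)              # nm, toy g-like band
    S0 = np.exp(-0.5 * ((lam - 475) / 40) ** 2)       # nominal throughput
    S1 = S0 * (1 + 0.002 * (lam - 475))               # slightly tilted throughput

    def syn_mag(flux, S):
        return -2.5 * np.log10(np.trapz(flux * S * lam, lam))

    blue = (lam / 475.0) ** -2.0                      # blue power-law source
    red = (lam / 475.0) ** 2.0                        # red power-law source
    dm_blue = syn_mag(blue, S1) - syn_mag(blue, S0)
    dm_red = syn_mag(red, S1) - syn_mag(red, S0)
    print(f"chromatic residual ~ {abs(dm_blue - dm_red) * 1000:.1f} mmag")
    ```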

  15. Minimizing systematic errors from atmospheric multiple scattering and satellite viewing geometry in coastal zone color scanner level IIA imagery

    NASA Technical Reports Server (NTRS)

    Martin, D. L.; Perry, M. J.

    1994-01-01

    Water-leaving radiances and phytoplankton pigment concentrations are calculated from coastal zone color scanner (CZCS) radiance measurements by removing atmospheric Rayleigh and aerosol radiances from the total radiance signal measured at the satellite. The single greatest source of error in CZCS atmospheric correction algorithms is the assumption that these Rayleigh and aerosol radiances are separable. Multiple-scattering interactions between Rayleigh and aerosol components cause systematic errors in calculated aerosol radiances, and the magnitude of these errors depends on aerosol type and optical depth and on satellite viewing geometry. A technique was developed which extends the results of previous radiative transfer modeling by Gordon and Castano to predict the magnitude of these systematic errors for simulated CZCS orbital passes in which the ocean is viewed through a modeled, physically realistic atmosphere. The simulated image mathematically duplicates the exact satellite, Sun, and pixel locations of an actual CZCS image. Errors in the aerosol radiance at 443 nm are calculated for a range of aerosol optical depths. When pixels in the simulated image exceed an error threshold, the corresponding pixels in the actual CZCS image are flagged and excluded from further analysis or from use in image compositing or compilation of pigment concentration databases. Studies based on time series analyses or compositing of CZCS imagery which do not address Rayleigh-aerosol multiple scattering should be interpreted cautiously, since the fundamental assumption used in their atmospheric correction algorithm is flawed.

  16. Assessment of the accuracy of global geodetic satellite laser ranging observations and estimated impact on ITRF scale: estimation of systematic errors in LAGEOS observations 1993-2014

    NASA Astrophysics Data System (ADS)

    Appleby, Graham; Rodríguez, José; Altamimi, Zuheir

    2016-06-01

    Satellite laser ranging (SLR) to the geodetic satellites LAGEOS and LAGEOS-2 uniquely determines the origin of the terrestrial reference frame and, jointly with very long baseline interferometry, its scale. Given such a fundamental role in satellite geodesy, it is crucial that any systematic errors in either technique be kept to an absolute minimum as efforts continue to realise the reference frame at millimetre levels of accuracy to meet present and future science requirements. Here, we examine the intrinsic accuracy of SLR measurements made by tracking stations of the International Laser Ranging Service, using normal point observations of the two LAGEOS satellites in the period 1993 to 2014. The approach we investigate in this paper is to compute weekly reference frame solutions, solving for satellite initial state vectors, station coordinates and daily Earth orientation parameters, and estimating along with these a weekly average range error for each and every one of the observing stations. Potential issues in any of the large number of SLR stations assumed to have been free of error in previous realisations of the ITRF may have been absorbed into the reference frame, primarily in station height. Likewise, systematic range errors estimated against a fixed frame that may itself suffer from accuracy issues will absorb network-wide problems into station-specific results. Our results suggest that in the past two decades the scale of the ITRF derived from the SLR technique has been close to 0.7 ppb too small, due to systematic errors in the range measurements, in their treatment, or in both. We discuss these results in the context of preparations for ITRF2014 and additionally consider the impact of this work on the currently adopted value of the geocentric gravitational constant, GM.

  17. Effects of Systematic and Random Errors on the Retrieval of Particle Microphysical Properties from Multiwavelength Lidar Measurements Using Inversion with Regularization

    NASA Technical Reports Server (NTRS)

    Ramirez, Daniel Perez; Whiteman, David N.; Veselovskii, Igor; Kolgotin, Alexei; Korenskiy, Michael; Alados-Arboledas, Lucas

    2013-01-01

    In this work we study the effects of systematic and random errors on the inversion of multiwavelength (MW) lidar data using the well-known regularization technique to obtain vertically resolved aerosol microphysical properties. The software implementation used here was developed at the Physics Instrumentation Center (PIC) in Troitsk (Russia) in conjunction with the NASA/Goddard Space Flight Center. Its applicability to Raman lidar systems based on backscattering measurements at three wavelengths (355, 532 and 1064 nm) and extinction measurements at two wavelengths (355 and 532 nm) has been demonstrated widely. The systematic error sensitivity is quantified by first determining the retrieved parameters for a given set of optical input data consistent with three different sets of aerosol physical parameters. Then each optical input is perturbed by varying amounts and the inversion is repeated. Using bimodal aerosol size distributions, we find a generally linear dependence of the retrieved errors in the microphysical properties on the induced systematic errors in the optical data. For the retrievals of effective radius, number/surface/volume concentrations and fine-mode radius and volume, we find that these results are not significantly affected by the range of the constraints used in inversions. But significant sensitivity was found to the allowed range of the imaginary part of the particle refractive index. Our results also indicate that there exists an additive property for the deviations induced by the biases present in the individual optical data. This property permits the results here to be used to predict deviations in retrieved parameters when multiple input optical data are biased simultaneously as well as to study the influence of random errors on the retrievals. The above results are applied to questions regarding lidar design, in particular for the spaceborne multiwavelength lidar under consideration for the upcoming ACE mission.

  1. Systematic estimation of forecast and observation error covariances in four-dimensional data assimilation

    NASA Technical Reports Server (NTRS)

    Dee, D. P.; Cohn, S. E.; Ghil, M.

    1985-01-01

    A two-part algorithm is presented for reliably computing weather forecast model and observational error covariances during data assimilation. Data errors arise from instrumental inaccuracies and sub-grid-scale variability, whereas forecast errors occur because of modeling errors and the propagation of previous analysis errors. A Kalman filter is defined as the primary algorithm for estimating the forecast and analysis error covariance matrices. A second algorithm is described for quantifying the noise covariance matrices of any degree to obtain accurate values for the observational error covariances. Numerical results are provided from a linearized one-dimensional shallow-water model. The results cover observational noise covariances, initial instrumental errors and erroneous model values.
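
    A generic linear-Gaussian Kalman filter cycle, to fix notation for the covariances being estimated (the matrix names are the usual textbook ones, not taken from the paper):

    ```python
    import numpy as np

    def kalman_step(x, P, z, F, H, Q, R):
        """One forecast/analysis cycle.
        Q: model-error covariance, R: observational-error covariance,
        P: forecast/analysis error covariance tracked by the filter."""
        # Forecast
        x = F @ x
        P = F @ P @ F.T + Q
        # Analysis (update with observation vector z)
        S = H @ P @ H.T + R                  # innovation covariance
        K = P @ H.T @ np.linalg.inv(S)       # Kalman gain
        x = x + K @ (z - H @ x)
        P = (np.eye(len(x)) - K @ H) @ P
        return x, P
    ```

    The quality of the analysis hinges on Q and R, which is precisely what the two-part algorithm above sets out to estimate from the data rather than prescribe.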

  2. Multiplicative errors in the galaxy power spectrum: self-calibration of unknown photometric systematics for precision cosmology

    NASA Astrophysics Data System (ADS)

    Shafer, Daniel L.; Huterer, Dragan

    2015-03-01

    We develop a general method to `self-calibrate' observations of galaxy clustering with respect to systematics associated with photometric calibration errors. We first point out the danger posed by the multiplicative effect of calibration errors, where large-angle error propagates to small scales and may be significant even if the large-scale information is cleaned or not used in the cosmological analysis. We then propose a method to measure the arbitrary large-scale calibration errors and use these measurements to correct the small-scale (high-multipole) power which is most useful for constraining the majority of cosmological parameters. We demonstrate the effectiveness of our approach on synthetic examples and briefly discuss how it may be applied to real data.

  3. Convolution method and CTV-to-PTV margins for finite fractions and small systematic errors

    NASA Astrophysics Data System (ADS)

    Gordon, J. J.; Siebers, J. V.

    2007-04-01

    The van Herk margin formula (VHMF) relies on the accuracy of the convolution method (CM) to determine clinical target volume (CTV) to planning target volume (PTV) margins. This work (1) evaluates the accuracy of the CM and VHMF as a function of the number of fractions N and other parameters, and (2) proposes an alternative margin algorithm which ensures target coverage for a wider range of parameter values. Dose coverage was evaluated for a spherical target with uniform margin, using the same simplified dose model and CTV coverage criterion as were used in development of the VHMF. Systematic and random setup errors were assumed to be normally distributed with standard deviations Σ and σ. For clinically relevant combinations of σ, Σ and N, margins were determined by requiring that 90% of treatment course simulations have a CTV minimum dose greater than or equal to the static PTV minimum dose. Simulation results were compared with the VHMF and the alternative margin algorithm. The CM and VHMF were found to be accurate for parameter values satisfying the approximate criterion: σ[1 - γN/25] < 0.2, where γ = Σ/σ. They were found to be inaccurate for σ[1 - γN/25] > 0.2, because they failed to account for the non-negligible dose variability associated with random setup errors. These criteria are applicable when σ ≳ σP, where σP = 0.32 cm is the standard deviation of the normal dose penumbra. (Qualitative behaviour of the CM and VHMF will remain the same, though the criteria might vary if σP takes values other than 0.32 cm.) When σ < σP, dose variability due to random setup errors becomes negligible, and the CM and VHMF are valid regardless of the values of Σ and N. When σ ≳ σP, consistent with the above criteria, it was found that the VHMF can underestimate margins for large σ, small Σ and small N. A potential consequence of this underestimate is that the CTV minimum dose can fall below its planned value in more than the prescribed 10% of treatment courses.
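
    For orientation, the VHMF under discussion is usually quoted in the simplified form M ≈ 2.5Σ + 0.7σ; the paper's point is that this recipe, derived in the many-fraction limit, can underestimate margins for small N. A sketch of the formula itself:

    ```python
    def vhmf_margin_cm(Sigma, sigma):
        """Simplified van Herk CTV-to-PTV margin: 2.5*Sigma + 0.7*sigma
        (systematic and random setup SDs, here in cm)."""
        return 2.5 * Sigma + 0.7 * sigma

    print(vhmf_margin_cm(Sigma=0.3, sigma=0.3))   # 0.96 cm
    ```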

  4. Reduction of systematic errors in regional climate simulations of the summer monsoon over East Asia and the western North Pacific by applying the spectral nudging technique

    NASA Astrophysics Data System (ADS)

    Cha, Dong-Hyun; Lee, Dong-Kyou

    2009-07-01

    In this study, the systematic errors in regional climate simulations of the 28-year summer monsoon over East Asia and the western North Pacific (WNP), and the impact of the spectral nudging technique (SNT) on reducing those errors, are investigated. The experiment in which the SNT is not applied (the CTL run) has large systematic errors in the seasonal mean climatology, such as overestimated precipitation, a weakened subtropical high, and an enhanced low-level southwesterly over the subtropical WNP, while the experiment using the SNT (the SP run) yields considerably smaller systematic errors. In the CTL run, the systematic error of simulated precipitation over the ocean increases significantly after mid-June, since the CTL run cannot reproduce the principal intraseasonal variation of summer monsoon precipitation. The SP run can appropriately capture the spatial distribution as well as the temporal variation of the principal empirical orthogonal function mode, and therefore the systematic error over the ocean does not increase after mid-June. The systematic error of simulated precipitation over the subtropical WNP in the CTL run results from an unreasonable positive feedback between precipitation and surface latent heat flux induced by the warm sea surface temperature anomaly. Since the SNT acts to weaken this positive feedback by improving the monsoon circulations, the SP run can considerably reduce the systematic errors of simulated precipitation as well as of the atmospheric fields over the subtropical WNP region.
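
    A minimal 1-D illustration of the SNT: relax only the large-scale spectral components of a model field toward the driving analysis, leaving the small scales free to develop. The cutoff wavenumber and nudging coefficient below are arbitrary choices, not the study's settings.

    ```python
    import numpy as np

    def spectral_nudge(field, analysis, cutoff_k=4, alpha=0.1):
        """Add alpha * (analysis - field) restricted to wavenumbers <= cutoff_k."""
        diff = np.fft.rfft(analysis) - np.fft.rfft(field)
        diff[cutoff_k + 1:] = 0.0                 # drop small-scale components
        return field + alpha * np.fft.irfft(diff, n=field.size)

    x = np.linspace(0, 2 * np.pi, 128, endpoint=False)
    model = 0.8 * np.sin(x) + 0.3 * np.sin(8 * x)   # drifted large + small scales
    analysis = 1.0 * np.sin(x)                      # large-scale driving field
    nudged = spectral_nudge(model, analysis)        # large scale pulled toward 1.0
    ```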

  5. Procedures for dealing with certain types of noise and systematic errors common to many Hadamard transform optical systems

    NASA Technical Reports Server (NTRS)

    Harwit, M.

    1977-01-01

    Sources of noise and error-correcting procedures characteristic of Hadamard transform optical systems were investigated. Reduction of spectral noise due to noise spikes in the data, the effect of random errors, the relative performance of Fourier and Hadamard transform spectrometers operated under identical detector-noise-limited conditions, and systematic means for dealing with mask defects are among the topics discussed. The distortion in Hadamard transform optical instruments caused by moving masks, incorrect mask alignment, missing measurements, and diffraction is analyzed, and techniques for reducing or eliminating this distortion are described.
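
    A toy illustration of why a single noise spike matters in a multiplexed instrument: the inverse transform smears one corrupted measurement across every recovered spectral element, which is the kind of artifact the procedures above target.

    ```python
    import numpy as np
    from scipy.linalg import hadamard

    n = 8
    H = hadamard(n)                                   # +-1 mask matrix
    x = np.array([0.0, 1.0, 0.0, 3.0, 0.0, 0.0, 2.0, 0.0])  # true spectrum (toy)
    y = H @ x                                         # multiplexed measurements
    y[2] += 10.0                                      # one noise spike
    x_rec = H.T @ y / n                               # inverse (H @ H.T = n*I)
    print(np.round(x_rec - x, 3))                     # error spread over all bins
    ```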

  6. A Novel, Physics-Based Data Analytics Framework for Reducing Systematic Model Errors

    NASA Astrophysics Data System (ADS)

    Wu, W.; Liu, Y.; Vandenberghe, F. C.; Knievel, J. C.; Hacker, J.

    2015-12-01

    Most climate and weather models exhibit systematic biases, such as underpredicted diurnal temperatures in the WRF (Weather Research and Forecasting) model. General approaches to alleviating systematic biases include improving model physics and numerics, improving data assimilation, and bias correction through post-processing. In this study, we developed a novel, physics-based data analytics framework for post-processing that takes advantage of ever-growing high-resolution (spatial and temporal) observational and modeling data. In the framework, a spatiotemporal PCA (Principal Component Analysis) is first applied to the observational data to filter out noise and information on scales that a model may not be able to resolve. The filtered observations are then used to establish regression relationships with archived model forecasts in the same spatiotemporal domain. The regressions, along with the model forecasts, predict the projected observations in the forecasting period. The pre-regression PCA procedure strengthens the regressions and enhances predictive skill. We then combine the projected observations with past observations and apply PCA iteratively to derive the final forecasts. This post-regression PCA reconstructs variances and scales of information that are lost in the regression. The framework was examined and validated with 24 days of 5-minute observational data and archives from the WRF model at 27 stations near Dugway Proving Ground, Utah. The validation shows significant bias reduction in the diurnal cycle of predicted surface air temperature compared to the direct output from the WRF model. Additionally, unlike other post-processing bias-correction schemes, the data analytics framework does not require long-term historical data and model archives; a week or two of data is enough to take into account changes in weather regimes. The program, written in Python, is also computationally efficient.
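
    A compact sketch of the pre-regression PCA filter followed by regression onto archived forecasts. Array shapes, the station count, and the number of retained modes are illustrative only.

    ```python
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.linear_model import LinearRegression

    rng = np.random.default_rng(42)
    obs = rng.standard_normal((288, 27))                     # (time, stations)
    fcst = obs + 1.5 + 0.3 * rng.standard_normal((288, 27))  # biased model archive

    pca = PCA(n_components=5)                                # leading modes
    obs_filtered = pca.inverse_transform(pca.fit_transform(obs))  # de-noised obs

    reg = LinearRegression().fit(fcst, obs_filtered)         # regression step
    corrected = reg.predict(fcst)                            # bias-corrected forecast
    print(np.abs(fcst - obs).mean(), np.abs(corrected - obs).mean())
    ```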

  7. Optimizing MRI-targeted fusion prostate biopsy: the effect of systematic error and anisotropy on tumor sampling

    NASA Astrophysics Data System (ADS)

    Martin, Peter R.; Cool, Derek W.; Romagnoli, Cesare; Fenster, Aaron; Ward, Aaron D.

    2015-03-01

    Magnetic resonance imaging (MRI)-targeted, 3D transrectal ultrasound (TRUS)-guided "fusion" prostate biopsy aims to reduce the 21-47% false negative rate of clinical 2D TRUS-guided sextant biopsy. Although it has been reported to double the positive yield, MRI-targeted biopsy still has a substantial false negative rate. Therefore, we propose optimization of biopsy targeting to meet the clinician's desired tumor sampling probability, optimizing needle targets within each tumor and accounting for uncertainties due to guidance system errors, image registration errors, and irregular tumor shapes. As a step toward this optimization, we obtained multiparametric MRI (mpMRI) and 3D TRUS images from 49 patients. A radiologist and radiology resident contoured 81 suspicious regions, yielding 3D surfaces that were registered to 3D TRUS. We estimated the probability, P, of obtaining a tumor sample with a single biopsy, and investigated the effects of systematic errors and anisotropy on P. Our experiments indicated that a biopsy system's lateral and elevational errors have a much greater effect on sampling probabilities, relative to its axial error. We have also determined that for a system with an RMS error of 3.5 mm, tumors of volume 1.9 cm³ and smaller may require more than one biopsy core to ensure 95% probability of a sample with 50% core involvement, and tumors 1.0 cm³ and smaller may require more than two cores.
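
    A simplified Monte Carlo version of the sampling-probability estimate, treating the needle tip as a point displaced by an isotropic 3-D Gaussian error; the paper's analysis additionally handles core geometry, registration error, and the anisotropy highlighted above.

    ```python
    import numpy as np

    def sample_probability(radius_mm, rms_err_mm, n_trials=100_000, seed=1):
        """P(displaced needle point still lands inside a spherical tumor)."""
        rng = np.random.default_rng(seed)
        sigma = rms_err_mm / np.sqrt(3.0)     # per-axis SD from total 3-D RMS
        err = rng.normal(0.0, sigma, size=(n_trials, 3))
        return (np.linalg.norm(err, axis=1) <= radius_mm).mean()

    r = (3 * 1000.0 / (4 * np.pi)) ** (1 / 3)   # radius (mm) of a 1 cm^3 sphere
    print(f"P(single-core hit) ~ {sample_probability(r, 3.5):.2f}")
    ```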

  8. Systematic Errors that are Due to the Monochromatic-Equivalent Radiative Transfer Approximation in Thermal Emission Problems.

    PubMed

    Turner, D S

    2000-11-01

    An underlying assumption of data assimilation models is that the radiative transfer model used by them can simulate observed radiances with zero bias and small error. For practical reasons a fast parameterized radiative transfer model is used instead of a highly accurate line-by-line model. These fast models usually replace the spectral integration of the product of the transmittance and the Planck function with a monochromatic equivalent, namely, the product of a spectrally averaged transmittance and a spectrally averaged Planck function. The error of using this equivalent form is commonly assumed to be negligible. However, this error is not necessarily negligible and introduces a systematic, height-dependent bias into the assimilation scheme. Although the bias could be corrected by a separate bias correction scheme, it is more effective to correct its source, the fast radiative transfer model. I examine the magnitude of the error when the monochromatic-equivalent approach is used and demonstrate how a fast parameterized radiative model with Planck-weighted mean transmittances can effectively reduce, if not eliminate, these errors at the source. I focus on channel 12 of the High-Resolution Infrared Radiation Sounder onboard the National Oceanic and Atmospheric Administration (NOAA)-14 satellite, which, among all the channels of this instrument, displays the largest error.
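
    The error in question is the gap between the band average of the product, mean(tau*B), and the product of the band averages, mean(tau)*mean(B). A toy numerical check (the spectra below are synthetic, not HIRS channel 12):

    ```python
    import numpy as np

    nu = np.linspace(650.0, 680.0, 300)             # wavenumbers, cm^-1 (toy band)
    tau = np.exp(-np.linspace(0.2, 2.0, nu.size))   # transmittance varying in band
    h, c, k = 6.626e-34, 2.998e10, 1.381e-23        # SI, with c in cm/s
    T = 250.0                                       # K
    B = 2 * h * c**2 * nu**3 / (np.exp(h * c * nu / (k * T)) - 1.0)  # Planck

    exact = np.mean(tau * B)                  # spectral integration
    approx = np.mean(tau) * np.mean(B)        # "monochromatic equivalent"
    print(f"relative error = {100 * (approx - exact) / exact:.2f} %")
    ```

    A Planck-weighted mean transmittance, sum(tau*B)/sum(B), makes the approximate product reproduce the exact mean by construction, which is the fix advocated above.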

  9. Matching post-Newtonian and numerical relativity waveforms: Systematic errors and a new phenomenological model for nonprecessing black hole binaries

    SciTech Connect

    Santamaria, L.; Ohme, F.; Dorband, N.; Moesta, P.; Robinson, E. L.; Krishnan, B.; Ajith, P.; Bruegmann, B.; Hannam, M.; Husa, S.; Pollney, D.; Reisswig, C.; Seiler, J.

    2010-09-15

    We present a new phenomenological gravitational waveform model for the inspiral and coalescence of nonprecessing spinning black hole binaries. Our approach is based on a frequency-domain matching of post-Newtonian inspiral waveforms with numerical-relativity-based binary black hole coalescence waveforms. We quantify the various possible sources of systematic error that arise in matching post-Newtonian and numerical relativity waveforms, and we use a matching criterion based on minimizing these errors; we find that the dominant source of error lies in the post-Newtonian waveforms near the merger. An analytical formula for the dominant mode of the gravitational radiation of nonprecessing black hole binaries is presented that captures the phenomenology of the hybrid waveforms. Its implementation in the current searches for gravitational waves should allow cross-checks of other inspiral-merger-ringdown waveform families and improve the reach of gravitational-wave searches.

  10. [Error structure and additivity of individual tree biomass model for four natural conifer species in Northeast China].

    PubMed

    Dong, Li-hu; Li, Feng-ri; Song, Yu-wen

    2015-03-01

    Based on the biomass data of 276 sample trees of Pinus koraiensis, Abies nephrolepis, Picea koraiensis and Larix gmelinii, mono-element and dual-element additive systems of biomass equations for the four conifer species were developed. The model error structure (additive vs. multiplicative) of the allometric equation was evaluated using likelihood analysis, while nonlinear seemingly unrelated regression was used to estimate the parameters in the additive system of biomass equations. The results indicated that the assumption of a multiplicative error structure was strongly supported for the biomass equations of the total and the tree components for the four conifer species. Thus, the additive system of log-transformed biomass equations was developed. The adjusted coefficient of determination (Ra²) of the additive system of biomass equations for the four conifer species was 0.85-0.99, the mean relative error was between -7.7% and 5.5%, and the mean absolute relative error was less than 30.5%. Adding total tree height to the additive systems of biomass equations significantly improved model fit and predictive precision, and the biomass equations for the total, aboveground, and stem components were better than those for the root, branch, foliage, and crown components. The precision of each biomass equation in the additive system varied from 77.0% to 99.7%, with a mean value of 92.3%, which would be suitable for predicting the biomass of the four natural conifer species.
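
    A sketch of the likelihood comparison of error structures on simulated allometric data, in the spirit of the analysis above. The data, coefficients, and the reuse of a single mean function for both candidates are simplifications; a full comparison refits each model.

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(7)
    x = rng.uniform(5, 40, 200)                             # e.g. diameter (cm)
    y = 0.08 * x**2.4 * np.exp(rng.normal(0, 0.3, x.size))  # multiplicative noise

    # Candidate 1: multiplicative (log-normal) errors, fit in log space.
    b, loga = np.polyfit(np.log(x), np.log(y), 1)
    r_log = np.log(y) - (loga + b * np.log(x))
    ll_mult = stats.norm.logpdf(r_log, 0, r_log.std()).sum() - np.log(y).sum()

    # Candidate 2: additive normal errors around the same mean function.
    r_add = y - np.exp(loga) * x**b
    ll_add = stats.norm.logpdf(r_add, 0, r_add.std()).sum()

    print("multiplicative error structure favored:", ll_mult > ll_add)
    ```

    The second term in ll_mult is the Jacobian of the log transform, which puts both likelihoods on the scale of y so they can be compared directly.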

  11. A Measuring System with an Additional Channel for Eliminating the Dynamic Error

    NASA Astrophysics Data System (ADS)

    Dichev, Dimitar; Koev, Hristofor; Louda, Petr

    2014-03-01

    This article presents a measuring system for determining the parameters of vessels. The system has high measurement accuracy when operating in both static and dynamic modes. It is designed on a gyro-free principle for plotting a vertical. High measurement accuracy is achieved by using a simplified design of the mechanical module as well as by minimizing the instrumental error. A new solution for improving the measurement accuracy in dynamic mode is offered. The approach presented is based on a method in which the dynamic error is eliminated in real time, unlike existing measurement methods and tools, which rely on stabilization of the vertical in inertial space. The results obtained from theoretical experiments, performed on the basis of the developed mathematical model, demonstrate the effectiveness of the suggested measurement approach.

  12. Systematic Errors in Stereo PIV When Imaging through a Glass Window

    NASA Technical Reports Server (NTRS)

    Green, Richard; McAlister, Kenneth W.

    2004-01-01

    This document assesses the magnitude of the velocity measurement errors that may arise when performing stereo particle image velocimetry (PIV) with cameras viewing through a thick, refractive window, where the calibration is performed in one plane only. The effect of the window is to introduce a refractive error that increases with window thickness and the camera angle of incidence. The calibration should be performed while viewing through the test section window; otherwise a potentially significant error may be introduced that affects each velocity component differently. However, even when the calibration is performed correctly, another error may arise during the stereo reconstruction if the perspective angle determined for each camera does not account for the displacement of the light rays as they refract through the thick window. Care should be exercised when applying a single-plane calibration, since certain implicit assumptions may in fact require conditions that are extremely difficult to meet in a practical laboratory environment. It is suggested that the effort expended to ensure this accuracy may be better spent performing a lengthier volumetric calibration procedure, which does not rely upon the assumptions implicit in the single-plane method and avoids the need for the perspective angle to be calculated.
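
    The refractive offset at the heart of this error has a closed form for a plane-parallel window: d = t sin(i) [1 - cos(i) / sqrt(n^2 - sin^2(i))]. A sketch (thickness, angle and index are illustrative):

    ```python
    import numpy as np

    def lateral_shift_mm(thickness_mm, incidence_deg, n_glass=1.5):
        """Lateral ray displacement through a plane-parallel window."""
        i = np.radians(incidence_deg)
        return thickness_mm * np.sin(i) * (
            1.0 - np.cos(i) / np.sqrt(n_glass**2 - np.sin(i) ** 2))

    # A 30 mm window viewed at 35 degrees shifts rays by about 7 mm
    print(f"{lateral_shift_mm(30.0, 35.0):.2f} mm")
    ```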

  13. Reduction of Systematic Errors in Diagnostic Receivers Through the Use of Balanced Dicke-Switching and Y-Factor Noise Calibrations

    SciTech Connect

    John Musson, Trent Allison, Roger Flood, Jianxun Yan

    2009-05-01

    Receivers designed for diagnostic applications range from those having moderate sensitivity to those possessing large dynamic range. Digital receivers have a dynamic range which is a function of the number of bits represented by the ADC and the subsequent processing. If some of this range is sacrificed for extreme sensitivity, noise power can then be used to perform two-point load calibrations. Since load temperatures can be precisely determined, the receiver can be quickly and accurately characterized; minute changes in system gain can then be detected, and systematic errors corrected. In addition, using receiver pairs in a balanced approach to measuring X+, X-, Y+, and Y- reduces systematic offset errors arising from non-identical system gains and changes in system performance. This paper describes and demonstrates a balanced BPM-style diagnostic receiver employing Dicke switching to establish and maintain real-time system calibration. Benefits of such a receiver include wide bandwidth, solid absolute accuracy, improved position accuracy, and phase-sensitive measurements. The system description, static and dynamic modelling, and measurement data are presented.
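
    For reference, the Y-factor calibration in the title reduces to a two-line computation once the hot and cold load temperatures are known; the powers below are hypothetical, and a 77 K cold load is assumed purely for illustration.

    ```python
    def y_factor_receiver_temp(p_hot, p_cold, t_hot=290.0, t_cold=77.0):
        """Receiver noise temperature (K) from two-point load measurements:
        Y = P_hot / P_cold, T_rx = (T_hot - Y * T_cold) / (Y - 1)."""
        y = p_hot / p_cold
        return (t_hot - y * t_cold) / (y - 1.0)

    # Gain drift shows up as a change in the calibrated result and can be
    # corrected in real time, which is the point of Dicke switching above.
    print(f"{y_factor_receiver_temp(p_hot=2.4e-9, p_cold=1.1e-9):.0f} K")
    ```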

  14. DtaRefinery, a Software Tool for Elimination of Systematic Errors from Parent Ion Mass Measurements in Tandem Mass Spectra Data Sets*

    PubMed Central

    Petyuk, Vladislav A.; Mayampurath, Anoop M.; Monroe, Matthew E.; Polpitiya, Ashoka D.; Purvine, Samuel O.; Anderson, Gordon A.; Camp, David G.; Smith, Richard D.

    2010-01-01

    Hybrid two-stage mass spectrometers capable of both highly accurate mass measurement and high throughput MS/MS fragmentation have become widely available in recent years, allowing for significantly better discrimination between true and false MS/MS peptide identifications by the application of a relatively narrow window for maximum allowable deviations of measured parent ion masses. To fully gain the advantage of highly accurate parent ion mass measurements, it is important to limit systematic mass measurement errors. Based on our previous studies of systematic biases in mass measurement errors, here, we have designed an algorithm and software tool that eliminates the systematic errors from the peptide ion masses in MS/MS data. We demonstrate that the elimination of the systematic mass measurement errors allows for the use of tighter criteria on the deviation of measured mass from theoretical monoisotopic peptide mass, resulting in a reduction of both false discovery and false negative rates of peptide identification. A software implementation of this algorithm called DtaRefinery reads a set of fragmentation spectra, searches for MS/MS peptide identifications using a FASTA file containing expected protein sequences, fits a regression model that can estimate systematic errors, and then corrects the parent ion mass entries by removing the estimated systematic error components. The output is a new file with fragmentation spectra with updated parent ion masses. The software is freely available. PMID:20019053
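
    A stripped-down sketch of the correction idea on synthetic data: fit a smooth regression of the parent-ion mass error against a covariate such as m/z, then subtract the fitted systematic component. DtaRefinery fits richer, multidimensional models; the polynomial below is only illustrative.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    mz = rng.uniform(400, 2000, 5000)                          # parent ion m/z
    err_ppm = 2.0 + 0.003 * mz + rng.normal(0, 1.5, mz.size)   # synthetic errors

    coeffs = np.polyfit(mz, err_ppm, deg=2)        # regression model of the bias
    corrected = err_ppm - np.polyval(coeffs, mz)   # residual ~ random error only
    print(f"SD before: {err_ppm.std():.2f} ppm, after: {corrected.std():.2f} ppm")
    ```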

  16. Systematic reduction of sign errors in many-body calculations of atoms and molecules

    SciTech Connect

    Bajdich, Michal; Tiago, Murilo L; Hood, Randolph Q.; Kent, Paul R; Reboredo, Fernando A

    2010-01-01

    The self-healing diffusion Monte Carlo algorithm (SHDMC) [Phys. Rev. B 79, 195117 (2009); ibid. 80, 125110 (2009)] is applied to the calculation of ground states of atoms and molecules. By direct comparison with accurate configuration interaction results we show that applying the SHDMC method to the oxygen atom leads to systematic convergence towards the exact ground-state wave function. We present results for the small but challenging N2 molecule, where results obtained via the energy minimization method and SHDMC agree within the experimental accuracy of 0.08 eV. Moreover, we demonstrate that the algorithm is robust enough to be used for calculations of systems at least as large as C20 starting from a set of random coefficients. SHDMC thus constitutes a practical method for systematically reducing the fermion sign problem in electronic structure calculations.

  17. Possible evidence for a variable fine-structure constant from QSO absorption lines: systematic errors

    NASA Astrophysics Data System (ADS)

    Murphy, M. T.; Webb, J. K.; Flambaum, V. V.; Churchill, C. W.; Prochaska, J. X.

    2001-11-01

    Comparison of quasar (QSO) absorption spectra with laboratory spectra allows us to probe possible variations in the fundamental constants over cosmological time-scales. In a companion paper we present an analysis of Keck/HIRES spectra and report possible evidence suggesting that the fine-structure constant, α, may have been smaller in the past: Δα/α = (-0.72 ± 0.18) × 10^-5 over the redshift range 0.5 < z < 3.5. Here we investigate possible systematic effects. Most of these do not significantly influence our results. When we correct for those which do produce a significant systematic effect in the data, the deviation of Δα/α from zero becomes more significant. We are led increasingly to the interpretation that α was slightly smaller in the past.

  18. Galaxy Cluster Shapes and Systematic Errors in the Hubble Constant as Determined by the Sunyaev-Zel'dovich Effect

    NASA Technical Reports Server (NTRS)

    Sulkanen, Martin E.; Joy, M. K.; Patel, S. K.

    1998-01-01

    Imaging of the Sunyaev-Zel'dovich (S-Z) effect in galaxy clusters combined with cluster plasma X-ray diagnostics can measure the cosmic distance scale to high accuracy. However, projecting the inverse-Compton scattering and X-ray emission along the cluster line of sight will introduce systematic errors in the Hubble constant, H_0, because the true shape of the cluster is not known. This effect remains present for clusters that are otherwise chosen to avoid complications for the S-Z and X-ray analysis, such as plasma temperature variations, cluster substructure, or cluster dynamical evolution. In this paper we present a study of the systematic errors in the value of H_0, as determined by the X-ray and S-Z properties of a theoretical sample of triaxial isothermal 'beta-model' clusters, caused by projection effects and observer orientation relative to the model clusters' principal axes. The model clusters are not generated as ellipsoids of rotation, but have three independent 'core radii', as well as a random orientation to the plane of the sky.

  19. Avoiding a Systematic Error in Assessing Fat Graft Survival in the Breast with Repeated Magnetic Resonance Imaging

    PubMed Central

    Herly, Mikkel; Müller, Felix C.; Elberg, Jens J.; Kølle, Stig-Frederik T.; Fischer-Nielsen, Anne; Thomsen, Carsten; Drzewiecki, Krzysztof T.

    2016-01-01

    Summary: Several techniques for measuring breast volume (BV) are based on examining the breast on magnetic resonance imaging. However, when techniques designed to measure total BV are used to quantify BV changes, for example, after fat grafting, a systematic error is introduced because BV changes lead to contour alterations of the breast. The volume of the altered breast includes not only the injected volume but also tissue previously surrounding the breast. Therefore, the quantitative difference in BV before and after augmentation will differ from the injected volume. Here, we present a new technique to measure BV changes that compensates for this systematic error by defining the boundaries of the breast to immovable osseous pointers. This approach avoids the misinterpretation of tissue included within the expanded boundaries as graft tissue. This new method of analysis may be a reliable tool for assessing BV changes to determine fat graft retention and may be useful for evaluating and comparing available surgical techniques for breast augmentation and reconstruction using fat grafting. PMID:27757341

  20. Diagnostic errors in older patients: a systematic review of incidence and potential causes in seven prevalent diseases

    PubMed Central

    Skinner, Thomas R; Scott, Ian A; Martin, Jennifer H

    2016-01-01

    Background Misdiagnosis, either over- or underdiagnosis, exposes older patients to increased risk of inappropriate or omitted investigations and treatments, psychological distress, and financial burden. Objective To evaluate the frequency and nature of diagnostic errors in 16 conditions prevalent in older patients by undertaking a systematic literature review. Data sources and study selection Cohort studies, cross-sectional studies, or systematic reviews of such studies published in Medline between September 1993 and May 2014 were searched using key search terms of “diagnostic error”, “misdiagnosis”, “accuracy”, “validity”, or “diagnosis” and terms relating to each disease. Data synthesis A total of 938 articles were retrieved. Diagnostic error rates of >10% for both over- and underdiagnosis were seen in chronic obstructive pulmonary disease, dementia, Parkinson’s disease, heart failure, stroke/transient ischemic attack, and acute myocardial infarction. Diabetes was overdiagnosed in <5% of cases. Conclusion Over- and underdiagnosis are common in older patients. Explanations for over-diagnosis include subjective diagnostic criteria and the use of criteria not validated in older patients. Underdiagnosis was associated with long preclinical phases of disease or lack of sensitive diagnostic criteria. Factors that predispose to misdiagnosis in older patients must be emphasized in education and clinical guidelines. PMID:27284262

  1. Diagnostic and therapeutic errors in trigeminal autonomic cephalalgias and hemicrania continua: a systematic review.

    PubMed

    Viana, Michele; Tassorelli, Cristina; Allena, Marta; Nappi, Giuseppe; Sjaastad, Ottar; Antonaci, Fabio

    2013-02-18

    Trigeminal autonomic cephalalgias (TACs) and hemicrania continua (HC) are relatively rare but clinically rather well-defined primary headaches. Despite the existence of clear-cut diagnostic criteria (The International Classification of Headache Disorders, 2nd edition - ICHD-II) and several therapeutic guidelines, errors in workup and treatment of these conditions are frequent in clinical practice. We set out to review all available published data on mismanagement of TACs and HC patients in order to understand and avoid its causes. The search strategy identified 22 published studies. The most frequent errors described in the management of patients with TACs and HC are: referral to wrong type of specialist, diagnostic delay, misdiagnosis, and the use of treatments without overt indication. Migraine with and without aura, trigeminal neuralgia, sinus infection, dental pain and temporomandibular dysfunction are the disorders most frequently overdiagnosed. Even when the clinical picture is clear-cut, TACs and HC are frequently not recognized and/or mistaken for other disorders, not only by general physicians, dentists and ENT surgeons, but also by neurologists and headache specialists. This seems to be due to limited knowledge of the specific characteristics and variants of these disorders, and it results in the unnecessary prescription of ineffective and sometimes invasive treatments which may have negative consequences for patients. Greater knowledge of and education about these disorders, among both primary care physicians and headache specialists, might contribute to improving the quality of life of TACs and HC patients.

  2. The curious anomaly of skewed judgment distributions and systematic error in the wisdom of crowds.

    PubMed

    Nash, Ulrik W

    2014-01-01

    Judgment distributions are often skewed and we know little about why. This paper explains the phenomenon of skewed judgment distributions by introducing the augmented quincunx (AQ) model of sequential and probabilistic cue categorization by neurons of judges. In the process of developing inferences about true values, when neurons categorize cues better than chance, and when the particular true value is extreme compared to what is typical and anchored upon, then populations of judges form skewed judgment distributions with high probability. Moreover, the collective error made by these people can be inferred from how skewed their judgment distributions are, and in what direction they tilt. This implies not just that judgment distributions are shaped by cues, but that judgment distributions are cues themselves for the wisdom of crowds. The AQ model also predicts that judgment variance correlates positively with collective error, thereby challenging what is commonly believed about how diversity and collective intelligence relate. Data from 3053 judgment surveys about US macroeconomic variables obtained from the Federal Reserve Bank of Philadelphia and the Wall Street Journal provide strong support, and implications are discussed with reference to three central ideas on collective intelligence, these being Galton's conjecture on the distribution of judgments, Muth's rational expectations hypothesis, and Page's diversity prediction theorem. PMID:25406078

  3. The Curious Anomaly of Skewed Judgment Distributions and Systematic Error in the Wisdom of Crowds

    PubMed Central

    Nash, Ulrik W.

    2014-01-01

    Judgment distributions are often skewed and we know little about why. This paper explains the phenomenon of skewed judgment distributions by introducing the augmented quincunx (AQ) model of sequential and probabilistic cue categorization by neurons of judges. In the process of developing inferences about true values, when neurons categorize cues better than chance, and when the particular true value is extreme compared to what is typical and anchored upon, then populations of judges form skewed judgment distributions with high probability. Moreover, the collective error made by these people can be inferred from how skewed their judgment distributions are, and in what direction they tilt. This implies not just that judgment distributions are shaped by cues, but that judgment distributions are cues themselves for the wisdom of crowds. The AQ model also predicts that judgment variance correlates positively with collective error, thereby challenging what is commonly believed about how diversity and collective intelligence relate. Data from 3053 judgment surveys about US macroeconomic variables obtained from the Federal Reserve Bank of Philadelphia and the Wall Street Journal provide strong support, and implications are discussed with reference to three central ideas on collective intelligence, these being Galton's conjecture on the distribution of judgments, Muth's rational expectations hypothesis, and Page's diversity prediction theorem. PMID:25406078

  4. A review of sources of systematic errors and uncertainties in observations and simulations at 183 GHz

    NASA Astrophysics Data System (ADS)

    Brogniez, Helene; English, Stephen; Mahfouf, Jean-Francois; Behrendt, Andreas; Berg, Wesley; Boukabara, Sid; Buehler, Stefan Alexander; Chambon, Philippe; Gambacorta, Antonia; Geer, Alan; Ingram, William; Kursinski, E. Robert; Matricardi, Marco; Odintsova, Tatyana A.; Payne, Vivienne H.; Thorne, Peter W.; Tretyakov, Mikhail Yu.; Wang, Junhong

    2016-05-01

    Several recent studies have observed systematic differences between measurements in the 183.31 GHz water vapor line by space-borne sounders and calculations using radiative transfer models, with inputs from either radiosondes (radiosonde observations, RAOBs) or short-range forecasts by numerical weather prediction (NWP) models. This paper discusses all the relevant categories of observation-based or model-based data, quantifies their uncertainties and separates biases that could be common to all causes from those attributable to a particular cause. Reference observations from radiosondes, Global Navigation Satellite System (GNSS) receivers, differential absorption lidar (DIAL) and Raman lidar are thus overviewed. Biases arising from their calibration procedures, NWP models and data assimilation, instrument biases and radiative transfer models (both the models themselves and the underlying spectroscopy) are presented and discussed. Although presently no single process in the comparisons seems capable of explaining the observed structure of bias, recommendations are made in order to better understand the causes.

  5. Systematic identification and correction of annotation errors in the genetic interaction map of Saccharomyces cerevisiae

    PubMed Central

    Atias, Nir; Kupiec, Martin; Sharan, Roded

    2016-01-01

    The yeast mutant collections are a fundamental tool in deciphering genomic organization and function. Over the last decade, they have been used for the systematic exploration of ∼6,000,000 double gene mutants, identifying and cataloging genetic interactions among them. Here we studied the extent to which these data are prone to neighboring gene effects (NGEs), a phenomenon by which the deletion of a gene affects the expression of adjacent genes along the genome. Analyzing ∼90,000 negative genetic interactions observed to date, we found that more than 10% of them are incorrectly annotated due to NGEs. We developed a novel algorithm, GINGER, to identify and correct erroneous interaction annotations. We validated the algorithm using a comparative analysis of interactions from Schizosaccharomyces pombe. We further showed that our predictions are significantly more concordant with diverse biological data compared to their mis-annotated counterparts. Our work uncovered about 9500 new genetic interactions in yeast. PMID:26602688

  6. SU-E-T-405: Robustness of Volumetric-Modulated Arc Therapy (VMAT) Plans to Systematic MLC Positional Errors

    SciTech Connect

    Qi, P; Xia, P

    2014-06-01

    Purpose: To evaluate the dosimetric impact of systematic MLC positional errors (PEs) on the quality of volumetric-modulated arc therapy (VMAT) plans. Methods: Five patients with head-and-neck cancer (HN) and five patients with prostate cancer were randomly chosen for this study. The clinically approved VMAT plans were designed with 2–4 coplanar arc beams with non-zero collimator angles in the Pinnacle planning system. Systematic MLC PEs of 0.5, 1.0, and 2.0 mm on both MLC banks were introduced into the original VMAT plans using an in-house program, and the plans were recalculated with the same planned Monitor Units in the Pinnacle system. For each patient, the original VMAT plans and the plans with MLC PEs were evaluated according to dose-volume histogram information and Gamma index analysis. Results: For one primary target, the ratio of V100 in the plans with 0.5, 1.0, and 2.0 mm MLC PEs to that in the clinical plans was 98.8 ± 2.2%, 97.9 ± 2.1%, and 90.1 ± 9.0% for HN cases and 99.5 ± 3.2%, 98.9 ± 1.0%, and 97.0 ± 2.5% for prostate cases. For all OARs, the relative difference of Dmean in all plans was less than 1.5%. With 2 mm/2% criteria for Gamma analysis, the passing rates were 99.0 ± 1.5% for HN cases and 99.7 ± 0.3% for prostate cases between the planar doses from the original plans and the plans with 1.0 mm MLC errors. The corresponding Gamma passing rates dropped to 88.9 ± 5.3% for HN cases and 83.4 ± 3.2% for prostate cases when comparing planar doses from the original plans and the plans with 2.0 mm MLC errors. Conclusion: For VMAT plans, systematic MLC PEs of up to 1.0 mm did not affect plan quality in terms of target coverage, OAR sparing, and Gamma analysis with 2 mm/2% criteria.
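
    For orientation, a minimal 1D global Gamma analysis in the spirit of the evaluation above; clinical analyses run on 2D/3D dose grids, and the profile, shift, and criteria below are invented for illustration.

      import numpy as np

      def gamma_pass_rate(dose_ref, dose_eval, x, dd=0.02, dta=2.0):
          """1D global gamma: dd = dose criterion (fraction of the reference
          maximum), dta = distance-to-agreement in the units of x."""
          d_norm = dd * dose_ref.max()
          passed = 0
          for xi, di in zip(x, dose_ref):
              cap = ((x - xi) / dta) ** 2 + ((dose_eval - di) / d_norm) ** 2
              if np.sqrt(cap.min()) <= 1.0:
                  passed += 1
          return 100.0 * passed / len(dose_ref)

      # invented profile with a 1 mm shift standing in for an MLC offset
      x = np.arange(0.0, 100.0, 0.5)                 # position (mm)
      ref = np.exp(-((x - 50.0) / 15.0) ** 2)        # reference dose profile
      shifted = np.exp(-((x - 51.0) / 15.0) ** 2)    # evaluated profile
      print(f"pass rate (2 mm / 2%): {gamma_pass_rate(ref, shifted, x):.1f}%")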

  7. The application of SHERPA (Systematic Human Error Reduction and Prediction Approach) in the development of compensatory cognitive rehabilitation strategies for stroke patients with left and right brain damage.

    PubMed

    Hughes, Charmayne M L; Baber, Chris; Bienkiewicz, Marta; Worthington, Andrew; Hazell, Alexa; Hermsdörfer, Joachim

    2015-01-01

    Approximately 33% of stroke patients have difficulty performing activities of daily living, often committing errors during the planning and execution of such activities. The objective of this study was to evaluate the ability of the human error identification (HEI) technique SHERPA (Systematic Human Error Reduction and Prediction Approach) to predict errors during the performance of daily activities in stroke patients with left and right hemisphere lesions. Using SHERPA we successfully predicted 36 of the 38 observed errors, with analysis indicating that the proportion of predicted and observed errors was similar for all sub-tasks and severity levels. HEI results were used to develop compensatory cognitive strategies that clinicians could employ to reduce or prevent errors from occurring. This study provides evidence for the reliability and validity of SHERPA in the design of cognitive rehabilitation strategies in stroke populations.

  8. Refractive Error and Risk of Early or Late Age-Related Macular Degeneration: A Systematic Review and Meta-Analysis

    PubMed Central

    Li, Ying; Wang, JiWen; Zhong, XiaoJing; Tian, Zhen; Wu, Peipei; Zhao, Wenbo; Jin, Chenjin

    2014-01-01

    Objective To summarize relevant evidence investigating the associations between refractive error and age-related macular degeneration (AMD). Design Systematic review and meta-analysis. Methods We searched Medline, Web of Science, and Cochrane databases as well as the reference lists of retrieved articles to identify studies that met the inclusion criteria. Extracted data were combined using a random-effects meta-analysis. Studies that were pertinent to our topic but did not meet the criteria for quantitative analysis were reported in a systematic review instead. Main outcome measures Pooled odds ratios (ORs) and 95% confidence intervals (CIs) for the associations between refractive error (hyperopia, myopia, per-diopter increase in spherical equivalent [SE] toward hyperopia, per-millimeter increase in axial length [AL]) and AMD (early and late, prevalent and incident). Results Fourteen studies comprising over 5800 patients were eligible. Significant associations were found between hyperopia, myopia, per-diopter increase in SE, per-millimeter increase in AL, and prevalent early AMD. The pooled ORs and 95% CIs were 1.13 (1.06–1.20), 0.75 (0.56–0.94), 1.10 (1.07–1.14), and 0.79 (0.73–0.85), respectively. The per-diopter increase in SE was also significantly associated with early AMD incidence (OR, 1.06; 95% CI, 1.02–1.10). However, no significant association was found between hyperopia or myopia and early AMD incidence. Furthermore, neither prevalent nor incident late AMD was associated with refractive error. Considerable heterogeneity was found among studies investigating the association between myopia and prevalent early AMD (P = 0.001, I2 = 72.2%). Geographic location might play a role; the heterogeneity became non-significant after stratifying these studies into Asian and non-Asian subgroups. Conclusion Refractive error is associated with early AMD but not with late AMD. More large-scale longitudinal studies are needed to further investigate such

  9. Evaluation of the ability of a 2D ionisation chamber array and an EPID to detect systematic delivery errors in IMRT plans

    NASA Astrophysics Data System (ADS)

    Bawazeer, Omemh; Gray, Alison; Arumugam, Sankar; Vial, Philip; Thwaites, David; Descallar, Joseph; Holloway, Lois

    2014-03-01

    Two clinical intensity modulated radiotherapy plans were selected. Eleven plan variations were created with systematic errors introduced: Multi-Leaf Collimator (MLC) positional errors with all leaf pairs shifted in the same or opposite directions, and collimator rotation offsets. Plans were measured using an Electronic Portal Imaging Device (EPID) and an ionisation chamber array. The plans were evaluated using gamma analysis with different criteria. The gamma pass rates remained around 95% or higher for most cases with MLC positional errors of 1 mm and 2 mm under 3%/3 mm criteria. The ability of both devices to detect delivery errors was similar.

  10. The focus-to-detector distance as a source of systematical errors in the measurement of Chaoul therapy units.

    PubMed

    Zaránd, P

    1980-09-01

    The skin exposure rates measured on 22 Chaoul units in two consecutive years were compared and their variance was analysed. The statistical fluctuation of the ionization method was 3.1%, a factor of about 2 to 2.5 smaller than the variations due to the limited reproducibility of the Chaoul units. The authors observed systematic errors among exposure rate measurements performed at different focus-to-detector distances. The effective source-to-detector distance differs between ionization chambers: it is the sum of the nominal focus-to-detector distance plus a geometrical constant. For a particular chamber, the geometrical constant depends only to a small extent on the front-wall thickness and on the focus-to-detector distance. Sufficient standardization of both the calibration procedure and the construction of ionization chambers may help avoid this effect.
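
    The effective-distance model lends itself to a simple check: fit the geometrical constant from exposure rates measured at several nominal distances. A sketch with invented readings follows.

      import numpy as np
      from scipy.optimize import curve_fit

      # Exposure rate ~ k / (d_nominal + c)^2, where c is the chamber's
      # geometrical constant; readings at four nominal distances (invented).
      def inverse_square(d, k, c):
          return k / (d + c) ** 2

      d_nominal = np.array([10.0, 15.0, 20.0, 30.0])   # cm
      rate = np.array([96.1, 43.3, 24.5, 11.0])        # invented exposure rates
      (k, c), _ = curve_fit(inverse_square, d_nominal, rate, p0=(1.0e4, 0.0))
      print(f"geometrical constant c = {c:.2f} cm")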

  11. Variations in Learning Rate: Student Classification Based on Systematic Residual Error Patterns across Practice Opportunities

    ERIC Educational Resources Information Center

    Liu, Ran; Koedinger, Kenneth R.

    2015-01-01

    A growing body of research suggests that accounting for student specific variability in educational data can improve modeling accuracy and may have implications for individualizing instruction. The Additive Factors Model (AFM), a logistic regression model used to fit educational data and discover/refine skill models of learning, contains a…

  12. Systematic errors in respiratory gating due to intrafraction deformations of the liver

    SciTech Connect

    Siebenthal, Martin von; Szekely, Gabor; Lomax, Antony J.; Cattin, Philippe C.

    2007-09-15

    This article shows the limitations of respiratory gating due to intrafraction deformations of the right liver lobe. The variability of organ shape and motion over tens of minutes was taken into account for this evaluation, which closes the gap between short-term analysis of a few regular cycles, as it is possible with 4DCT, and long-term analysis of interfraction motion. Time resolved MR volumes (4D MR sequences) were reconstructed for 12 volunteers and subsequent non-rigid registration provided estimates of the 3D trajectories of points within the liver over time. The full motion during free breathing and its distribution over the liver were quantified and respiratory gating was simulated to determine the gating accuracy for different gating signals, duty cycles, and different intervals between patient setup and treatment. Gating effectively compensated for the respiratory motion within short sequences (3 min), but deformations, mainly in the anterior inferior part (Couinaud segments IVb and V), led to systematic deviations from the setup position of more than 5 mm in 7 of 12 subjects after 20 min. We conclude that measurements over a few breathing cycles should not be used as a proof of accurate reproducibility of motion, not even within the same fraction, if it is longer than a few minutes. Although the diaphragm shows the largest magnitude of motion, it should not be used to assess the gating accuracy over the entire liver because the reproducibility is typically much more limited in inferior parts. Simple gating signals, such as the trajectory of skin motion, can detect the exhalation phase, but do not allow for an absolute localization of the complete liver over longer periods because the drift of these signals does not necessarily correlate with the internal drift.

  13. Volumetric apparatus for hydrogen adsorption and diffusion measurements: Sources of systematic error and impact of their experimental resolutions

    SciTech Connect

    Policicchio, Alfonso; Maccallini, Enrico; Kalantzopoulos, Georgios N.; Cataldi, Ugo; Abate, Salvatore; Desiderio, Giovanni

    2013-10-15

    The development of a volumetric apparatus (also known as a Sieverts' apparatus) for accurate and reliable hydrogen adsorption measurements is described. The instrument minimizes the sources of systematic error, which are mainly due to inner-volume calibration, stability and uniformity of the temperatures, precise evaluation of the skeletal volume of the measured samples, and the thermodynamic properties of the gas species. A series of hardware and software solutions were designed and introduced in the apparatus, which we will indicate as f-PcT, in order to deal with these aspects. The results are represented in terms of an accurate evaluation of the equilibrium and dynamical characteristics of molecular hydrogen adsorption on two well-known porous media. The contribution of each experimental solution to the error propagation of the adsorbed moles is assessed. The developed volumetric apparatus for gas storage capacity measurements allows an accurate evaluation over a four-order-of-magnitude pressure range (from 1 kPa to 8 MPa) and at temperatures between 77 K and 470 K. The acquired results are in good agreement with the values reported in the literature.
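
    As a sketch of the volumetric bookkeeping and the error propagation mentioned above, the snippet below computes the moles adsorbed in a single expansion step (ideal-gas form, Z = 1) and propagates assumed instrument resolutions to the result; the single-step formula, names, and numbers are illustrative, not the f-PcT implementation.

      import numpy as np

      R_GAS = 8.314  # J mol^-1 K^-1

      def adsorbed_moles(p1, p2, v_ref, v_cell, v_skel, temp):
          """One expansion step: gas initially in the reference volume at p1
          expands into the cell; gas missing at p2 was adsorbed (Z = 1)."""
          n_initial = p1 * v_ref / (R_GAS * temp)
          n_final = p2 * (v_ref + v_cell - v_skel) / (R_GAS * temp)
          return n_initial - n_final

      def propagate(f, args, sigmas, rel_step=1e-6):
          """First-order error propagation with numerical derivatives."""
          args = np.asarray(args, dtype=float)
          var = 0.0
          for i, sigma in enumerate(sigmas):
              h = rel_step * max(abs(args[i]), 1.0)
              up, dn = args.copy(), args.copy()
              up[i] += h
              dn[i] -= h
              var += ((f(*up) - f(*dn)) / (2 * h) * sigma) ** 2
          return np.sqrt(var)

      # invented operating point: pressures in Pa, volumes in m^3, T in K
      args = (2.0e5, 1.2e5, 50e-6, 30e-6, 2e-6, 298.0)
      sigmas = (100.0, 100.0, 0.05e-6, 0.05e-6, 0.1e-6, 0.1)
      n = adsorbed_moles(*args)
      print(f"n_ads = {n:.4e} +/- {propagate(adsorbed_moles, args, sigmas):.1e} mol")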

  14. The Systematic Error Test for PSF Correction in Weak Gravitational Lensing Shear Measurement By the ERA Method By Idealizing PSF

    NASA Astrophysics Data System (ADS)

    Okura, Yuki; Futamase, Toshifumi

    2016-08-01

    We improve the ellipticity of re-smeared artificial image (ERA) method of point-spread function (PSF) correction in a weak lensing shear analysis in order to treat the realistic shape of galaxies and the PSF. This is done by re-smearing the PSF and the observed galaxy image using a re-smearing function (RSF) and allows us to use a new PSF with a simple shape and to correct the PSF effect without any approximations or assumptions. We perform a numerical test to show that the method applied for galaxies and PSF with some complicated shapes can correct the PSF effect with a systematic error of less than 0.1%. We also apply the ERA method for real data of the Abell 1689 cluster to confirm that it is able to detect the systematic weak lensing shear pattern. The ERA method requires less than 0.1 or 1 s to correct the PSF for each object in a numerical test and a real data analysis, respectively.

  15. Error-compensation measurements on polarization qubits

    NASA Astrophysics Data System (ADS)

    Hou, Zhibo; Zhu, Huangjun; Xiang, Guo-Yong; Li, Chuan-Feng; Guo, Guang-Can

    2016-06-01

    Systematic errors are inevitable in most measurements performed in real life because of imperfect measurement devices. Reducing systematic errors is crucial to ensuring the accuracy and reliability of measurement results. To this end, delicate error-compensation design is often necessary in addition to device calibration to reduce the dependence of the systematic error on the imperfection of the devices. The art of error-compensation design is well appreciated in nuclear magnetic resonance systems through the use of composite pulses. In contrast, there are few works on reducing systematic errors in quantum optical systems. Here we propose an error-compensation design to reduce the systematic error in projective measurements on a polarization qubit. It can reduce the systematic error to the second order of the phase errors of both the half-wave plate (HWP) and the quarter-wave plate (QWP), as well as the angle error of the HWP. This technique is then applied to experiments on quantum state tomography on polarization qubits, leading to a 20-fold reduction in the systematic error. Our study may find applications in high-precision tasks in polarization optics and quantum optics.

  16. A Treatment Planning Investigation Into the Dosimetric Effects of Systematic Prostate Patient Rotational Set-Up Errors

    SciTech Connect

    Cranmer-Sargison, Gavin

    2008-10-01

    The purpose of this study was to investigate the potential dosimetric effects of systematic rotational setup errors on prostate patients planned according to the RTOG P-0126 protocol, and to identify rotational tolerances about either the anterior-posterior (AP) or left-right (LR) axis, under which no correction in setup is required. Eight 3-dimensional conformal radiation therapy (3D-CRT) treatment plans were included in the study, half planned to give 7020 cGy in 39 fractions (P-0126 Arm 1) and the other half planned to give 7920 cGy in 44 fractions (P-0126 Arm 2). Systematic rotations of the pelvic anatomy were simulated in a commercial treatment planning system by rotating opposing apertures in the opposite direction to the simulated anatomy rotation. Rotations were incremented in steps of 2.5 deg. to maxima of ±5.0 deg. and ±10.0 deg. about the AP and LR axes, respectively. Dose distributions were evaluated with respect to the planning objectives set out in the P-0126 protocol. For patients on Arm 2 of the study, maintaining the prescribed dose to 98% of the PTV was found to be problematic for superior-end-posterior rotations beyond 5.0 deg. The results also show that maintaining a rectal dose of less than 7500 cGy to 15% of the volume can become problematic for cases of small rectal volume and large superior-end-anterior rotations. We found that setting rotational tolerances will depend on which Arm of the protocol the patient is on, and how well the initial plan meets the protocol objectives. In general, we conclude that for rotations about the AP axis, no tolerance level is required; however, cases presenting extreme rotations should be investigated as routine practice. For rotations about the LR axis, we conclude that a tolerance level for patients on Arm 2 of the protocol should be set at ±5.0 deg. This tolerance represents the systematic setup error which would require correction if a variation to the initial plan was deemed unacceptable.

  17. DtaRefinery: a software tool for elimination of systematic errors from parent ion mass measurements in tandem mass spectra datasets

    SciTech Connect

    Petyuk, Vladislav A.; Mayampurath, Anoop M.; Monroe, Matthew E.; Polpitiya, Ashoka D.; Purvine, Samuel O.; Anderson, Gordon A.; Camp, David G.; Smith, Richard D.

    2009-12-16

    Hybrid two-stage mass spectrometers capable of both highly accurate mass measurement and MS/MS fragmentation have become widely available in recent years and have allowed for significantly better discrimination between true and false MS/MS peptide identifications by applying relatively narrow windows for maximum allowable deviations for parent ion mass measurements. To fully gain the advantage of highly accurate parent ion mass measurements, it is important to limit systematic mass measurement errors. The DtaRefinery software tool can correct systematic errors in parent ion masses by reading a set of fragmentation spectra, searching for MS/MS peptide identifications, then fitting a model that can estimate systematic errors, and removing them. This results in a new fragmentation spectrum file with updated parent ion masses.

  18. Probiotics as Additives on Therapy in Allergic Airway Diseases: A Systematic Review of Benefits and Risks

    PubMed Central

    Das, Rashmi Ranjan; Naik, Sushree Samiksha; Singh, Meenu

    2013-01-01

    Background. We conducted a systematic review to find out the role of probiotics in the treatment of allergic airway diseases. Methods. A comprehensive search of the major electronic databases was done till March 2013. Trials comparing the effect of probiotics versus placebo were included. A predefined set of outcome measures were assessed. Continuous data were expressed as standardized mean differences with 95% CI. Dichotomous data were expressed as odds ratios with 95% CI. A P value < 0.05 was considered significant. Results. A total of 12 studies were included. Probiotic intake was associated with a significantly improved quality-of-life score in patients with allergic rhinitis (SMD −1.9 (95% CI −3.62, −0.19); P = 0.03), though there was a high degree of heterogeneity. No improvement in quality-of-life score was noted in asthmatics. Probiotic intake also improved the following parameters: longer time free from episodes of asthma and rhinitis, and a decrease in the number of episodes of rhinitis per year. Adverse events were not significant. Conclusion. As the current evidence was generated from few trials with a high degree of heterogeneity, routine use of probiotics as an additive on therapy in subjects with allergic airway diseases cannot be recommended. PMID:23956972
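
    As a rough illustration of the pooling behind such estimates, a DerSimonian-Laird random-effects combination of log odds ratios; the three input studies are invented, not data from this review.

      import numpy as np

      # DerSimonian-Laird random-effects pooling of study log odds ratios.
      def pool_random_effects(log_or, se):
          w = 1.0 / se**2                               # fixed-effect weights
          fixed = np.sum(w * log_or) / np.sum(w)
          q = np.sum(w * (log_or - fixed) ** 2)         # Cochran's Q
          c = np.sum(w) - np.sum(w**2) / np.sum(w)
          tau2 = max(0.0, (q - (len(log_or) - 1)) / c)  # between-study variance
          w_re = 1.0 / (se**2 + tau2)                   # random-effects weights
          est = np.sum(w_re * log_or) / np.sum(w_re)
          se_est = np.sqrt(1.0 / np.sum(w_re))
          return np.exp([est, est - 1.96 * se_est, est + 1.96 * se_est])

      # invented example: three studies, OR and standard error of log(OR)
      log_or = np.log(np.array([0.8, 0.6, 1.1]))
      se = np.array([0.25, 0.30, 0.40])
      print(pool_random_effects(log_or, se))  # pooled OR and 95% CI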

  19. Additives

    NASA Technical Reports Server (NTRS)

    Smalheer, C. V.

    1973-01-01

    The chemistry of lubricant additives is discussed to show what the additives are chemically and what functions they perform in the lubrication of various kinds of equipment. Current theories regarding the mode of action of lubricant additives are presented. The additive groups discussed include the following: (1) detergents and dispersants, (2) corrosion inhibitors, (3) antioxidants, (4) viscosity index improvers, (5) pour point depressants, and (6) antifouling agents.

  20. On the systematic errors of cosmological-scale gravity tests using redshift-space distortion: non-linear effects and the halo bias

    NASA Astrophysics Data System (ADS)

    Ishikawa, Takashi; Totani, Tomonori; Nishimichi, Takahiro; Takahashi, Ryuichi; Yoshida, Naoki; Tonegawa, Motonari

    2014-10-01

    Redshift-space distortion (RSD) observed in galaxy redshift surveys is a powerful tool to test gravity theories on cosmological scales, but the systematic uncertainties must carefully be examined for future surveys with large statistics. Here we employ various analytic models of RSD and estimate the systematic errors on measurements of the structure growth-rate parameter, fσ8, induced by non-linear effects and the halo bias with respect to the dark matter distribution, by using halo catalogues from 40 realizations of 3.4 × 10^8 comoving h^-3 Mpc^3 cosmological N-body simulations. We consider hypothetical redshift surveys at redshifts z = 0.5, 1.35 and 2, and different minimum halo mass thresholds in the range of 5.0 × 10^11 to 2.0 × 10^13 h^-1 M⊙. We find that the systematic error of fσ8 is greatly reduced to the ~5 per cent level when a recently proposed analytical formula of RSD that takes into account the higher order coupling between the density and velocity fields is adopted, with a scale-dependent parametric bias model. Dependence of the systematic error on the halo mass, the redshift and the maximum wavenumber used in the analysis is discussed. We also find that the Wilson-Hilferty transformation is useful to improve the accuracy of likelihood analysis when only a small number of modes are available in power spectrum measurements.
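
    On the last point, the Wilson-Hilferty transformation maps a chi-squared variable with k degrees of freedom to an approximately standard normal one, which is what makes likelihood analyses with few power spectrum modes better behaved; a quick numerical check:

      import numpy as np

      def wilson_hilferty(q, k):
          """Map Q ~ chi^2_k to an approximately standard normal variable."""
          mean = 1.0 - 2.0 / (9.0 * k)
          std = np.sqrt(2.0 / (9.0 * k))
          return ((q / k) ** (1.0 / 3.0) - mean) / std

      rng = np.random.default_rng(1)
      q = rng.chisquare(df=5, size=100_000)   # e.g. power summed over 5 modes
      z = wilson_hilferty(q, 5)
      print(f"mean = {z.mean():.3f}, std = {z.std():.3f}")  # close to 0 and 1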

  1. Meta-code for systematic analysis of chemical addition (SACHA): application to fluorination of C70 and carbon nanostructure growth.

    PubMed

    Ewels, Christopher P; Lier, Gregory Van; Geerlings, Paul; Charlier, Jean-Christophe

    2007-01-01

    We present a new computer program able to systematically study chemical addition to and growth or evolution of carbon nanostructures. SACHA is a meta-code able to exploit a wide variety of pre-existing molecular structure codes, automating the otherwise onerous task of constructing, running, and analyzing the large number of input files that are required when exploring structural isomers and addition paths. By way of examples we consider fluorination of the fullerene cage C70 and carbon nanostructure growth through C2 addition. We discuss the possibilities for extension of this technique to rapidly and efficiently explore structural energy landscapes and application to other areas of chemical and materials research.

  2. Statistical tests against systematic errors in data sets based on the equality of residual means and variances from control samples: theory and applications.

    PubMed

    Henn, Julian; Meindl, Kathrin

    2015-03-01

    Statistical tests are applied for the detection of systematic errors in data sets from least-squares refinements or other residual-based reconstruction processes. Samples of the residuals of the data are tested against the hypothesis that they belong to the same distribution. For this it is necessary that they show the same mean values and variances within the limits given by statistical fluctuations. When the samples differ significantly from each other, they are not from the same distribution within the limits set by the significance level. Therefore they cannot originate from a single Gaussian function in this case. It is shown that a significance cutoff results in exactly this case. Significance cutoffs are still frequently used in charge-density studies. The tests are applied to artificial data with and without systematic errors and to experimental data from the literature.
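
    A minimal sketch of such a test on two control samples of residuals, using a Welch t-test for equality of means and Levene's test for equality of variances; the synthetic shift stands in for a systematic error.

      import numpy as np
      from scipy import stats

      # Two control samples of residuals from a refinement; equal means and
      # variances are expected if no systematic error is present.
      rng = np.random.default_rng(2)
      sample_a = rng.normal(0.0, 1.0, size=500)
      sample_b = rng.normal(0.3, 1.0, size=500)  # shifted mean: systematic bias

      t_stat, p_mean = stats.ttest_ind(sample_a, sample_b, equal_var=False)
      w_stat, p_var = stats.levene(sample_a, sample_b)
      print(f"equal means: p = {p_mean:.2g}; equal variances: p = {p_var:.2f}")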

  3. The apparent British sea slope is caused by systematic errors in the levelling-based vertical datum

    NASA Astrophysics Data System (ADS)

    Penna, N. T.; Featherstone, W. E.; Gazeaux, J.; Bingham, R. J.

    2013-08-01

    The spirit-levelling-based British vertical datum (Ordnance Datum Newlyn) implies a south-north apparent slope in mean sea level of up to 53 mm per degree of latitude, due to the datum falling on heading northwards. Although this apparent slope has been investigated since the 1960s, explanations of its origin have remained inconclusive. It has also been suggested that, rather than a slope, the British vertical datum includes a step of about 240 mm affecting all sites north of about 53°N. In either case, the British vertical datum may be of limited use for any study requiring accurate heights or changes in heights, such as testing geoid models, groundwater and hydrocarbon extraction, the calibration and validation of satellite-based digital terrain models, and the unification of vertical datums internationally. Within the last decade, however, based on an apparent reduction in the slope to only -12 mm per degree of latitude with respect to recent geoid models, it has been claimed that the British vertical datum does provide a physically meaningful surface for use in scientific applications. In this paper, we reinvestigate the presence of apparent south-north sea slopes around Britain and reported slopes in the vertical datum, using the EGM2008 global gravitational model, together with mean sea level and GPS data from British tide gauges, GPS ellipsoidal heights of 178 fundamental benchmarks across mainland Britain, and vertical deflection observations at 192 stations. We demonstrate a south-north slope in the British vertical datum of -(20-25) mm per degree of latitude with respect to both mean sea level (corrected for the ocean's mean dynamic topography and the inverse barometer response to atmospheric pressure loading) and the EGM2008 quasigeoid model, while EGM2008 is shown to exhibit a negligible slope of (2 ± 4) mm per degree with respect to mean sea level. It is clear, therefore, that the slope can only arise from systematic errors in the levelling, although we are unable to isolate
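
    A toy version of the slope estimate: regress benchmark-wise differences between levelled heights and GPS/quasigeoid heights on latitude. The numbers are invented, chosen only to give a slope in the reported range.

      import numpy as np

      # difference = levelled height minus GPS/quasigeoid height, per benchmark
      lat = np.array([50.5, 52.0, 54.5, 56.0, 57.5])            # deg N, invented
      diff_mm = np.array([5.0, -25.0, -85.0, -120.0, -150.0])   # mm, invented

      slope, intercept = np.polyfit(lat, diff_mm, 1)
      print(f"apparent slope: {slope:.1f} mm per degree of latitude")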

  4. A Simple Approach to Experimental Errors

    ERIC Educational Resources Information Center

    Phillips, M. D.

    1972-01-01

    Classifies experimental error into two main groups: systematic error (instrument, personal, inherent, and variational errors) and random errors (reading and setting errors) and presents mathematical treatments for the determination of random errors. (PR)

  5. A systematic study of well-known electrolyte additives in LiCoO2/graphite pouch cells

    NASA Astrophysics Data System (ADS)

    Wang, David Yaohui; Sinha, N. N.; Petibon, R.; Burns, J. C.; Dahn, J. R.

    2014-04-01

    The effectiveness of well-known electrolyte additives, singly or in combination, in LiCoO2/graphite pouch cells has been systematically investigated and compared using the ultra-high-precision charger (UHPC) at Dalhousie University and electrochemical impedance spectroscopy (EIS). UHPC studies are believed to identify the best electrolyte additives, singly or in combination, within a short time period (several weeks). Three parameters were measured for LiCoO2/graphite pouch cells with different electrolyte additives, singly or in combination: 1) the coulombic efficiency (CE); 2) the charge endpoint capacity slippage (slippage); and 3) the charge-transfer resistance (Rct). The results for over 55 additive sets are compared. The experimental results suggest that a combination of electrolyte additives can be more effective than a single electrolyte additive. However, of all the additive sets tested, simply using 2 wt.% vinylene carbonate yielded cells very competitive in CE, slippage, and Rct. It is hoped that this comprehensive report can be used as a guide and reference for the study of other electrolyte additives singly or in combination.
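
    As a sketch of the first two metrics, assuming per-cycle charge and discharge capacities from a high-precision cycler are available (values invented; the per-cycle change in charge capacity is used here as a simple proxy for charge endpoint slippage):

      import numpy as np

      # Per-cycle charge/discharge capacities (mAh); invented illustration.
      q_charge = np.array([240.00, 240.62, 241.15, 241.70])
      q_discharge = np.array([238.90, 239.48, 240.02, 240.55])

      ce = q_discharge / q_charge      # coulombic efficiency per cycle
      slippage = np.diff(q_charge)     # charge endpoint movement (mAh/cycle)
      print("CE:", np.round(ce, 5))
      print("slippage:", np.round(slippage, 2), "mAh/cycle")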

  6. A systematic approach of tracking and reporting medication errors at a tertiary care university hospital, Karachi, Pakistan

    PubMed Central

    Khowaja, Khurshid; Nizar, Rozmin; Merchant, Rashida J; Dias, Jacqueline; Bustamante-Gavino, Irma; Malik, Amina

    2008-01-01

    Introduction: Administering medication is one of the high-risk areas for any health professional. It is a multidisciplinary process, which begins with the doctor's prescription, followed by review and provision by a pharmacist, and ends with preparation and administration by a nurse. Several studies have highlighted high medication incident rates at several healthcare institutions. Methods: Our study design was exploratory and evaluative and used methodological triangulation. Two samples were used: first, a convenience sample of 1000 medication dosages to estimate the medication error rate (95% CI); second, a sample of subjects involved in the medication usage process, such as physicians, nurses, pharmacists, and patients. Two sets of instruments were designed via extensive literature review: a medication error tracking form and a focus group interview questionnaire. Results: Our study findings revealed 100% compliance with a computerized physician order entry (CPOE) system by physicians, nurses, and pharmacists. The overall error rate was 5.5%; pharmacists contributed the highest error rate (2.6%), followed by nurses (1.1%) and physicians (1%). Major areas for improvement in error rates were identified: delays in medication delivery, and electronic review of lab results before prescription, dispensing, and administration. PMID:19209247
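
    For the headline error rate, a Wilson score interval is one standard way to attach a 95% CI to an observed proportion; a sketch using 55 errors in 1000 dosages as input:

      from math import sqrt

      def wilson_ci(errors, n, z=1.96):
          """Wilson score interval for an observed error proportion."""
          p = errors / n
          denom = 1 + z**2 / n
          centre = (p + z**2 / (2 * n)) / denom
          half = z * sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
          return centre - half, centre + half

      print(wilson_ci(55, 1000))  # approximately (0.042, 0.071)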

  7. Performance analysis for time-frequency MUSIC algorithm in presence of both additive noise and array calibration errors

    NASA Astrophysics Data System (ADS)

    Khodja, Mohamed; Belouchrani, Adel; Abed-Meraim, Karim

    2012-12-01

    This article deals with the application of Spatial Time-Frequency Distributions (STFD) to the direction-finding problem using the Multiple Signal Classification (MUSIC) algorithm. A comparative performance analysis is performed for the method under consideration with respect to the conventional method based on the data covariance matrix, when the received array signals are subject to calibration errors in a non-stationary environment. A unified analytical expression of the Direction of Arrival (DOA) estimation error is derived for both methods. Numerical results show the effect of the parameters appearing in the derived expression on the algorithm performance. It is particularly observed that for low Signal-to-Noise Ratio (SNR) and high Signal-to-sensor-Perturbation Ratio (SPR) the STFD method gives better performance, while for high SNR and the same SPR both methods give similar performance.
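
    For orientation, a minimal covariance-based MUSIC sketch (the conventional method, not the STFD variant analyzed in the article, and with an ideally calibrated array): simulate a uniform linear array, take the noise subspace of the sample covariance, and scan the pseudospectrum for DOA peaks.

      import numpy as np

      rng = np.random.default_rng(0)
      M, N, d = 8, 200, 0.5               # sensors, snapshots, spacing (wavelengths)
      doas = np.deg2rad([-20.0, 25.0])    # true directions of arrival
      K = len(doas)

      def steering(theta):
          """M x len(theta) steering matrix of a uniform linear array."""
          return np.exp(2j * np.pi * d * np.arange(M)[:, None] * np.sin(theta))

      A = steering(doas)
      S = (rng.standard_normal((K, N)) + 1j * rng.standard_normal((K, N))) / np.sqrt(2)
      noise = 0.1 * (rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N)))
      X = A @ S + noise                   # received snapshots

      R = X @ X.conj().T / N              # sample covariance matrix
      _, V = np.linalg.eigh(R)            # eigenvectors, ascending eigenvalues
      En = V[:, : M - K]                  # noise subspace
      P_n = En @ En.conj().T

      grid = np.deg2rad(np.linspace(-90.0, 90.0, 1801))
      a = steering(grid)
      pseudo = 1.0 / np.einsum('mg,mn,ng->g', a.conj(), P_n, a).real

      # pick the K largest local maxima of the pseudospectrum
      is_peak = (pseudo[1:-1] > pseudo[:-2]) & (pseudo[1:-1] > pseudo[2:])
      idx = np.where(is_peak)[0] + 1
      best = idx[np.argsort(pseudo[idx])[-K:]]
      print('estimated DOAs (deg):', np.sort(np.rad2deg(grid[best])))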

  8. A New Approach to Detection of Systematic Errors in Secondary Substation Monitoring Equipment Based on Short Term Load Forecasting.

    PubMed

    Moriano, Javier; Rodríguez, Francisco Javier; Martín, Pedro; Jiménez, Jose Antonio; Vuksanovic, Branislav

    2016-01-12

    In recent years, Secondary Substations (SSs) are being provided with equipment that allows their full management. This is particularly useful not only for monitoring and planning purposes but also for detecting erroneous measurements, which could negatively affect the performance of the SS. On the other hand, load forecasting is extremely important since it helps electricity companies to make crucial decisions regarding purchasing and generating electric power, load switching, and infrastructure development. In this regard, Short Term Load Forecasting (STLF) allows the electric power load to be predicted over an interval ranging from one hour to one week. However, important issues concerning error detection by employing STLF have not been specifically addressed until now. This paper proposes a novel STLF-based approach to the detection of gain and offset errors introduced by the measurement equipment. The implemented system has been tested against real power load data provided by electricity suppliers. Different gain and offset error levels are successfully detected.
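
    One simple realization of the idea, under the assumption that hourly forecasts and meter readings are aligned: regress measured load on the forecast and flag a fitted gain far from 1 or an offset far from 0. Thresholds and data are illustrative, not the paper's method.

      import numpy as np

      rng = np.random.default_rng(3)
      # one week of hourly load: synthetic forecast and faulty meter readings
      forecast = 100 + 30 * np.sin(np.linspace(0, 6 * np.pi, 168))
      measured = 1.08 * forecast + 5.0 + rng.normal(0, 2.0, forecast.size)

      gain, offset = np.polyfit(forecast, measured, 1)   # measured ~ g*f + b
      flag = abs(gain - 1) > 0.02 or abs(offset) > 3.0
      print(f"gain={gain:.3f}, offset={offset:.2f}, error suspected: {flag}")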

  9. A New Approach to Detection of Systematic Errors in Secondary Substation Monitoring Equipment Based on Short Term Load Forecasting

    PubMed Central

    Moriano, Javier; Rodríguez, Francisco Javier; Martín, Pedro; Jiménez, Jose Antonio; Vuksanovic, Branislav

    2016-01-01

    In recent years, Secondary Substations (SSs) are being provided with equipment that allows their full management. This is particularly useful not only for monitoring and planning purposes but also for detecting erroneous measurements, which could negatively affect the performance of the SS. On the other hand, load forecasting is extremely important since it helps electricity companies to make crucial decisions regarding purchasing and generating electric power, load switching, and infrastructure development. In this regard, Short Term Load Forecasting (STLF) allows the electric power load to be predicted over an interval ranging from one hour to one week. However, important issues concerning error detection by employing STLF have not been specifically addressed until now. This paper proposes a novel STLF-based approach to the detection of gain and offset errors introduced by the measurement equipment. The implemented system has been tested against real power load data provided by electricity suppliers. Different gain and offset error levels are successfully detected. PMID:26771613

  10. Systematic analysis of video data from different human–robot interaction studies: a categorization of social signals during error situations

    PubMed Central

    Giuliani, Manuel; Mirnig, Nicole; Stollnberger, Gerald; Stadler, Susanne; Buchner, Roland; Tscheligi, Manfred

    2015-01-01

    Human–robot interactions are often affected by error situations that are caused by either the robot or the human. Therefore, robots would profit from the ability to recognize when error situations occur. We investigated the verbal and non-verbal social signals that humans show when error situations occur in human–robot interaction experiments. For that, we analyzed 201 videos of five human–robot interaction user studies with varying tasks from four independent projects. The analysis shows that there are two types of error situations: social norm violations and technical failures. Social norm violations are situations in which the robot does not adhere to the underlying social script of the interaction. Technical failures are caused by technical shortcomings of the robot. The results of the video analysis show that the study participants use many head movements and very few gestures, but they often smile, when in an error situation with the robot. Another result is that the participants sometimes stop moving at the beginning of error situations. We also found that the participants talked more in the case of social norm violations and less during technical failures. Finally, the participants use fewer non-verbal social signals (for example smiling, nodding, and head shaking), when they are interacting with the robot alone and no experimenter or other human is present. The results suggest that participants do not see the robot as a social interaction partner with comparable communication skills. Our findings have implications for builders and evaluators of human–robot interaction systems. The builders need to consider including modules for recognition and classification of head movements to the robot input channels. The evaluators need to make sure that the presence of an experimenter does not skew the results of their user studies. PMID:26217266

  11. Systematic analysis of video data from different human-robot interaction studies: a categorization of social signals during error situations.

    PubMed

    Giuliani, Manuel; Mirnig, Nicole; Stollnberger, Gerald; Stadler, Susanne; Buchner, Roland; Tscheligi, Manfred

    2015-01-01

    Human-robot interactions are often affected by error situations that are caused by either the robot or the human. Therefore, robots would profit from the ability to recognize when error situations occur. We investigated the verbal and non-verbal social signals that humans show when error situations occur in human-robot interaction experiments. For that, we analyzed 201 videos of five human-robot interaction user studies with varying tasks from four independent projects. The analysis shows that there are two types of error situations: social norm violations and technical failures. Social norm violations are situations in which the robot does not adhere to the underlying social script of the interaction. Technical failures are caused by technical shortcomings of the robot. The results of the video analysis show that the study participants use many head movements and very few gestures, but they often smile, when in an error situation with the robot. Another result is that the participants sometimes stop moving at the beginning of error situations. We also found that the participants talked more in the case of social norm violations and less during technical failures. Finally, the participants use fewer non-verbal social signals (for example smiling, nodding, and head shaking), when they are interacting with the robot alone and no experimenter or other human is present. The results suggest that participants do not see the robot as a social interaction partner with comparable communication skills. Our findings have implications for builders and evaluators of human-robot interaction systems. The builders need to consider including modules for recognition and classification of head movements to the robot input channels. The evaluators need to make sure that the presence of an experimenter does not skew the results of their user studies.

  12. Systematics of the family Plectopylidae in Vietnam with additional information on Chinese taxa (Gastropoda, Pulmonata, Stylommatophora)

    PubMed Central

    Páll-Gergely, Barna; Hunyadi, András; Ablett, Jonathan; Lương, Hào Văn; Naggs, Fred; Asami, Takahiro

    2015-01-01

    Vietnamese species from the family Plectopylidae are revised based on the type specimens of all known taxa, more than 600 historical non-type museum lots, and almost 200 newly-collected samples. Altogether more than 7000 specimens were investigated. The revision has revealed that species diversity of the Vietnamese Plectopylidae was previously overestimated. Overall, thirteen species names (anterides Gude, 1909, bavayi Gude, 1901, congesta Gude, 1898, fallax Gude, 1909, gouldingi Gude, 1909, hirsuta Möllendorff, 1901, jovia Mabille, 1887, moellendorffi Gude, 1901, persimilis Gude, 1901, pilsbryana Gude, 1901, soror Gude, 1908, tenuis Gude, 1901, verecunda Gude, 1909) were synonymised with other species. In addition to these, Gudeodiscus hemmeni sp. n. and Gudeodiscus messageri raheemi ssp. n. are described from north-western Vietnam. Sixteen species and two subspecies are recognized from Vietnam. The reproductive anatomy of eight taxa is described. Based on anatomical information, Halongella gen. n. is erected to include Plectopylis schlumbergeri and Plectopylis fruhstorferi. Additionally, the genus Gudeodiscus is subdivided into two subgenera (Gudeodiscus and Veludiscus subgen. n.) on the basis of the morphology of the reproductive anatomy and the radula. The Chinese Gudeodiscus phlyarius werneri Páll-Gergely, 2013 is moved to synonymy of Gudeodiscus phlyarius. A spermatophore was found in the organ situated next to the gametolytic sac in one specimen. This suggests that this organ in the Plectopylidae is a diverticulum. Statistically significant evidence is presented for the presence of calcareous hook-like granules inside the penis being associated with the absence of embryos in the uterus in four genera. This suggests that these probably play a role in mating periods before disappearing when embryos develop. Sicradiscus mansuyi is reported from China for the first time. PMID:25632253

  13. Toward reducing systematic errors in NWP - cross-evaluation of common physics from 6h-regional to 6d-global to 6mon-coupled applications

    NASA Astrophysics Data System (ADS)

    Benjamin, S.

    2015-12-01

    An integrated evaluation system against gridded data and observations is being applied to global models (FIM, GFS) and regional models (WRF-ARW applications for RAP/HRRR). An overview will be presented of wind, relative humidity, and temperature model errors, measured against rawinsonde and aircraft observations common to both global and regional models at 12-h forecast duration. Systematic errors common to both applications will be presented. A common problem with deficient cloud cover has been evident in both 6-h (3-km HRRR-WRF-ARW) regional forecasts and 6-month coupled-global (FIM-HYCOM) forecasts, allowing improvements in a common deep/shallow convection scheme (Grell-Freitas) with subgrid-scale clouds to be evaluated across time scales.
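
    A minimal sketch of the systematic/random error split behind such evaluations, with synthetic forecast-observation pairs standing in for rawinsonde matches: the bias is the systematic part, and the debiased RMSE is the random part.

      import numpy as np

      rng = np.random.default_rng(4)
      obs = rng.normal(250.0, 5.0, 1000)                  # synthetic observations
      fcst = obs + 1.2 + rng.normal(0.0, 2.5, obs.size)   # +1.2 systematic bias

      err = fcst - obs
      bias = err.mean()                                   # systematic component
      rmse = np.sqrt((err ** 2).mean())
      random_part = np.sqrt(max(rmse ** 2 - bias ** 2, 0.0))
      print(f"bias={bias:.2f}, rmse={rmse:.2f}, random={random_part:.2f}")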

  14. The effectiveness of computerized order entry at reducing preventable adverse drug events and medication errors in hospital settings: a systematic review and meta-analysis

    PubMed Central

    2014-01-01

    Background The Health Information Technology for Economic and Clinical Health (HITECH) Act subsidizes implementation by hospitals of electronic health records with computerized provider order entry (CPOE), which may reduce patient injuries caused by medication errors (preventable adverse drug events, pADEs). Effects on pADEs have not been rigorously quantified, and effects on medication errors have been variable. The objectives of this analysis were to assess the effectiveness of CPOE at reducing pADEs in hospital-related settings, and examine reasons for heterogeneous effects on medication errors. Methods Articles were identified using MEDLINE, Cochrane Library, Econlit, web-based databases, and bibliographies of previous systematic reviews (September 2013). Eligible studies compared CPOE with paper-order entry in acute care hospitals, and examined diverse pADEs or medication errors. Studies on children or with limited event-detection methods were excluded. Two investigators extracted data on events and factors potentially associated with effectiveness. We used random effects models to pool data. Results Sixteen studies addressing medication errors met pooling criteria; six also addressed pADEs. Thirteen studies used pre-post designs. Compared with paper-order entry, CPOE was associated with half as many pADEs (pooled risk ratio (RR) = 0.47, 95% CI 0.31 to 0.71) and medication errors (RR = 0.46, 95% CI 0.35 to 0.60). Regarding reasons for heterogeneous effects on medication errors, five intervention factors and two contextual factors were sufficiently reported to support subgroup analyses or meta-regression. Differences between commercial versus homegrown systems, presence and sophistication of clinical decision support, hospital-wide versus limited implementation, and US versus non-US studies were not significant, nor was timing of publication. Higher baseline rates of medication errors predicted greater reductions (P < 0.001). Other context and

  15. Impacts of nitrogen addition on plant biodiversity in mountain grasslands depend on dose, application duration and climate: a systematic review.

    PubMed

    Humbert, Jean-Yves; Dwyer, John M; Andrey, Aline; Arlettaz, Raphaël

    2016-01-01

    Although the influence of nitrogen (N) addition on grassland plant communities has been widely studied, it is still unclear whether observed patterns and underlying mechanisms are constant across biomes. In this systematic review, we use meta-analysis and metaregression to investigate the influence of N addition (here referring mostly to fertilization) upon the biodiversity of temperate mountain grasslands (including montane, subalpine and alpine zones). Forty-two studies met our criteria of inclusion, resulting in 134 measures of effect size. The main general responses of mountain grasslands to N addition were increases in phytomass and reductions in plant species richness, as observed in lowland grasslands. More specifically, the analysis reveals that negative effects on species richness were exacerbated by dose (ha(-1) year(-1)) and duration of N application (years) in an additive manner. Thus, sustained application of low to moderate levels of N over time had effects similar to short-term application of high N doses. The climatic context also played an important role: the overall effects of N addition on plant species richness and diversity (Shannon index) were less pronounced in mountain grasslands experiencing cool rather than warm summers. Furthermore, the relative negative effect of N addition on species richness was more pronounced in managed communities and was strongly negatively related to N-induced increases in phytomass, that is, the greater the phytomass response to N addition, the greater the decline in richness. Altogether, this review not only establishes that plant biodiversity of mountain grasslands is negatively affected by N addition, but also demonstrates that several local management and abiotic factors interact with N addition to drive plant community changes. This synthesis yields essential information for a more sustainable management of mountain grasslands, emphasizing the importance of preserving and restoring grasslands with both low

  17. Meta-analysis of gene–environment-wide association scans accounting for education level identifies additional loci for refractive error

    PubMed Central

    Fan, Qiao; Verhoeven, Virginie J. M.; Wojciechowski, Robert; Barathi, Veluchamy A.; Hysi, Pirro G.; Guggenheim, Jeremy A.; Höhn, René; Vitart, Veronique; Khawaja, Anthony P.; Yamashiro, Kenji; Hosseini, S Mohsen; Lehtimäki, Terho; Lu, Yi; Haller, Toomas; Xie, Jing; Delcourt, Cécile; Pirastu, Mario; Wedenoja, Juho; Gharahkhani, Puya; Venturini, Cristina; Miyake, Masahiro; Hewitt, Alex W.; Guo, Xiaobo; Mazur, Johanna; Huffman, Jenifer E.; Williams, Katie M.; Polasek, Ozren; Campbell, Harry; Rudan, Igor; Vatavuk, Zoran; Wilson, James F.; Joshi, Peter K.; McMahon, George; St Pourcain, Beate; Evans, David M.; Simpson, Claire L.; Schwantes-An, Tae-Hwi; Igo, Robert P.; Mirshahi, Alireza; Cougnard-Gregoire, Audrey; Bellenguez, Céline; Blettner, Maria; Raitakari, Olli; Kähönen, Mika; Seppala, Ilkka; Zeller, Tanja; Meitinger, Thomas; Ried, Janina S.; Gieger, Christian; Portas, Laura; van Leeuwen, Elisabeth M.; Amin, Najaf; Uitterlinden, André G.; Rivadeneira, Fernando; Hofman, Albert; Vingerling, Johannes R.; Wang, Ya Xing; Wang, Xu; Tai-Hui Boh, Eileen; Ikram, M. Kamran; Sabanayagam, Charumathi; Gupta, Preeti; Tan, Vincent; Zhou, Lei; Ho, Candice E. H.; Lim, Wan'e; Beuerman, Roger W.; Siantar, Rosalynn; Tai, E-Shyong; Vithana, Eranga; Mihailov, Evelin; Khor, Chiea-Chuen; Hayward, Caroline; Luben, Robert N.; Foster, Paul J.; Klein, Barbara E. K.; Klein, Ronald; Wong, Hoi-Suen; Mitchell, Paul; Metspalu, Andres; Aung, Tin; Young, Terri L.; He, Mingguang; Pärssinen, Olavi; van Duijn, Cornelia M.; Jin Wang, Jie; Williams, Cathy; Jonas, Jost B.; Teo, Yik-Ying; Mackey, David A.; Oexle, Konrad; Yoshimura, Nagahisa; Paterson, Andrew D.; Pfeiffer, Norbert; Wong, Tien-Yin; Baird, Paul N.; Stambolian, Dwight; Wilson, Joan E. Bailey; Cheng, Ching-Yu; Hammond, Christopher J.; Klaver, Caroline C. W.; Saw, Seang-Mei; Rahi, Jugnoo S.; Korobelnik, Jean-François; Kemp, John P.; Timpson, Nicholas J.; Smith, George Davey; Craig, Jamie E.; Burdon, Kathryn P.; Fogarty, Rhys D.; Iyengar, Sudha K.; Chew, Emily; Janmahasatian, Sarayut; Martin, Nicholas G.; MacGregor, Stuart; Xu, Liang; Schache, Maria; Nangia, Vinay; Panda-Jonas, Songhomitra; Wright, Alan F.; Fondran, Jeremy R.; Lass, Jonathan H.; Feng, Sheng; Zhao, Jing Hua; Khaw, Kay-Tee; Wareham, Nick J.; Rantanen, Taina; Kaprio, Jaakko; Pang, Chi Pui; Chen, Li Jia; Tam, Pancy O.; Jhanji, Vishal; Young, Alvin L.; Döring, Angela; Raffel, Leslie J.; Cotch, Mary-Frances; Li, Xiaohui; Yip, Shea Ping; Yap, Maurice K.H.; Biino, Ginevra; Vaccargiu, Simona; Fossarello, Maurizio; Fleck, Brian; Yazar, Seyhan; Tideman, Jan Willem L.; Tedja, Milly; Deangelis, Margaret M.; Morrison, Margaux; Farrer, Lindsay; Zhou, Xiangtian; Chen, Wei; Mizuki, Nobuhisa; Meguro, Akira; Mäkelä, Kari Matti

    2016-01-01

    Myopia is the most common human eye disorder and it results from complex genetic and environmental causes. The rapidly increasing prevalence of myopia poses a major public health challenge. Here, the CREAM consortium performs a joint meta-analysis to test single-nucleotide polymorphism (SNP) main effects and SNP × education interaction effects on refractive error in 40,036 adults from 25 studies of European ancestry and 10,315 adults from 9 studies of Asian ancestry. In European ancestry individuals, we identify six novel loci (FAM150B-ACP1, LINC00340, FBN1, DIS3L-MAP2K1, ARID2-SNAT1 and SLC14A2) associated with refractive error. In Asian populations, three genome-wide significant loci AREG, GABRR1 and PDE10A also exhibit strong interactions with education (P<8.5 × 10−5), whereas the interactions are less evident in Europeans. The discovery of these loci represents an important advance in understanding how gene and environment interactions contribute to the heterogeneity of myopia. PMID:27020472

  18. Meta-analysis of gene-environment-wide association scans accounting for education level identifies additional loci for refractive error.

    PubMed

    Fan, Qiao; Verhoeven, Virginie J M; Wojciechowski, Robert; Barathi, Veluchamy A; Hysi, Pirro G; Guggenheim, Jeremy A; Höhn, René; Vitart, Veronique; Khawaja, Anthony P; Yamashiro, Kenji; Hosseini, S Mohsen; Lehtimäki, Terho; Lu, Yi; Haller, Toomas; Xie, Jing; Delcourt, Cécile; Pirastu, Mario; Wedenoja, Juho; Gharahkhani, Puya; Venturini, Cristina; Miyake, Masahiro; Hewitt, Alex W; Guo, Xiaobo; Mazur, Johanna; Huffman, Jenifer E; Williams, Katie M; Polasek, Ozren; Campbell, Harry; Rudan, Igor; Vatavuk, Zoran; Wilson, James F; Joshi, Peter K; McMahon, George; St Pourcain, Beate; Evans, David M; Simpson, Claire L; Schwantes-An, Tae-Hwi; Igo, Robert P; Mirshahi, Alireza; Cougnard-Gregoire, Audrey; Bellenguez, Céline; Blettner, Maria; Raitakari, Olli; Kähönen, Mika; Seppala, Ilkka; Zeller, Tanja; Meitinger, Thomas; Ried, Janina S; Gieger, Christian; Portas, Laura; van Leeuwen, Elisabeth M; Amin, Najaf; Uitterlinden, André G; Rivadeneira, Fernando; Hofman, Albert; Vingerling, Johannes R; Wang, Ya Xing; Wang, Xu; Tai-Hui Boh, Eileen; Ikram, M Kamran; Sabanayagam, Charumathi; Gupta, Preeti; Tan, Vincent; Zhou, Lei; Ho, Candice E H; Lim, Wan'e; Beuerman, Roger W; Siantar, Rosalynn; Tai, E-Shyong; Vithana, Eranga; Mihailov, Evelin; Khor, Chiea-Chuen; Hayward, Caroline; Luben, Robert N; Foster, Paul J; Klein, Barbara E K; Klein, Ronald; Wong, Hoi-Suen; Mitchell, Paul; Metspalu, Andres; Aung, Tin; Young, Terri L; He, Mingguang; Pärssinen, Olavi; van Duijn, Cornelia M; Jin Wang, Jie; Williams, Cathy; Jonas, Jost B; Teo, Yik-Ying; Mackey, David A; Oexle, Konrad; Yoshimura, Nagahisa; Paterson, Andrew D; Pfeiffer, Norbert; Wong, Tien-Yin; Baird, Paul N; Stambolian, Dwight; Wilson, Joan E Bailey; Cheng, Ching-Yu; Hammond, Christopher J; Klaver, Caroline C W; Saw, Seang-Mei; Rahi, Jugnoo S; Korobelnik, Jean-François; Kemp, John P; Timpson, Nicholas J; Smith, George Davey; Craig, Jamie E; Burdon, Kathryn P; Fogarty, Rhys D; Iyengar, Sudha K; Chew, Emily; Janmahasatian, Sarayut; Martin, Nicholas G; MacGregor, Stuart; Xu, Liang; Schache, Maria; Nangia, Vinay; Panda-Jonas, Songhomitra; Wright, Alan F; Fondran, Jeremy R; Lass, Jonathan H; Feng, Sheng; Zhao, Jing Hua; Khaw, Kay-Tee; Wareham, Nick J; Rantanen, Taina; Kaprio, Jaakko; Pang, Chi Pui; Chen, Li Jia; Tam, Pancy O; Jhanji, Vishal; Young, Alvin L; Döring, Angela; Raffel, Leslie J; Cotch, Mary-Frances; Li, Xiaohui; Yip, Shea Ping; Yap, Maurice K H; Biino, Ginevra; Vaccargiu, Simona; Fossarello, Maurizio; Fleck, Brian; Yazar, Seyhan; Tideman, Jan Willem L; Tedja, Milly; Deangelis, Margaret M; Morrison, Margaux; Farrer, Lindsay; Zhou, Xiangtian; Chen, Wei; Mizuki, Nobuhisa; Meguro, Akira; Mäkelä, Kari Matti

    2016-03-29

    Myopia is the most common human eye disorder and it results from complex genetic and environmental causes. The rapidly increasing prevalence of myopia poses a major public health challenge. Here, the CREAM consortium performs a joint meta-analysis to test single-nucleotide polymorphism (SNP) main effects and SNP × education interaction effects on refractive error in 40,036 adults from 25 studies of European ancestry and 10,315 adults from 9 studies of Asian ancestry. In European ancestry individuals, we identify six novel loci (FAM150B-ACP1, LINC00340, FBN1, DIS3L-MAP2K1, ARID2-SNAT1 and SLC14A2) associated with refractive error. In Asian populations, three genome-wide significant loci AREG, GABRR1 and PDE10A also exhibit strong interactions with education (P<8.5 × 10(-5)), whereas the interactions are less evident in Europeans. The discovery of these loci represents an important advance in understanding how gene and environment interactions contribute to the heterogeneity of myopia.

  19. Elimination of 'ghost'-effect-related systematic error in metrology of X-ray optics with a long trace profiler

    SciTech Connect

    Yashchuk, Valeriy V.; Irick, Steve C.; MacDowell, Alastair A.

    2005-04-28

    A data acquisition technique and relevant program for suppression of one of the systematic effects, namely the "ghost" effect, of a second generation long trace profiler (LTP) is described. The "ghost" effect arises when there is an unavoidable cross-contamination of the LTP sample and reference signals into one another, leading to a systematic perturbation in the recorded interference patterns and, therefore, a systematic variation of the measured slope trace. Perturbations of about 1-2 µrad have been observed with a cylindrically shaped X-ray mirror. Even stronger "ghost" effects show up in an LTP measurement with a mirror having a toroidal surface figure. The developed technique employs separate measurement of the "ghost"-effect-related interference patterns in the sample and the reference arms and then subtraction of the "ghost" patterns from the sample and the reference interference patterns. The procedure preserves the advantage of simultaneously measuring the sample and reference signals. The effectiveness of the technique is illustrated with LTP metrology of a variety of X-ray mirrors.
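
    A minimal numpy sketch of the subtraction idea described above, assuming the ghost pattern in each arm can be recorded separately; the signal shapes below are illustrative, not LTP data.

        import numpy as np

        def correct_ghost(sample, reference, ghost_sample, ghost_reference):
            # Subtract the separately measured ghost patterns from the recorded
            # sample and reference interference patterns.
            return sample - ghost_sample, reference - ghost_reference

        x = np.linspace(0.0, 1.0, 512)                    # detector coordinate
        ghost_s = 0.05 * np.cos(37 * np.pi * x)           # cross-contamination terms
        ghost_r = 0.05 * np.cos(37 * np.pi * x + 0.1)
        sample = 1.0 + 0.5 * np.cos(40 * np.pi * x) + ghost_s
        reference = 1.0 + 0.5 * np.cos(40 * np.pi * x + 0.3) + ghost_r
        clean_sample, clean_reference = correct_ghost(sample, reference, ghost_s, ghost_r)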

  20. Toward a Framework for Systematic Error Modeling of NASA Spaceborne Radar with NOAA/NSSL Ground Radar-Based National Mosaic QPE

    NASA Technical Reports Server (NTRS)

    Kirstetter, Pierre-Emmanuel; Hong, Y.; Gourley, J. J.; Chen, S.; Flamig, Z.; Zhang, J.; Howard, K.; Schwaller, M.; Petersen, W.; Amitai, E.

    2011-01-01

    Characterization of the error associated with satellite rainfall estimates is a necessary component of deterministic and probabilistic frameworks involving space-borne passive and active microwave measurements, for applications ranging from water budget studies to forecasting natural hazards related to extreme rainfall events. We focus here on the error structure of NASA's Tropical Rainfall Measurement Mission (TRMM) Precipitation Radar (PR) quantitative precipitation estimation (QPE) at the ground. The problem is addressed by comparison of PR QPEs with reference values derived from ground-based measurements using the NOAA/NSSL ground radar-based National Mosaic and QPE system (NMQ/Q2). A preliminary investigation of this subject has been carried out at the PR estimation scale (instantaneous and 5 km) using a three-month data sample in the southern part of the US. The primary contribution of this study is the presentation of the detailed steps required to derive a trustworthy reference rainfall dataset from Q2 at the PR pixel resolution. It relies on a bias correction and a radar quality index, both of which provide a basis to filter out the less trustworthy Q2 values. Several aspects of PR errors are revealed and quantified, including sensitivity to the processing steps with the reference rainfall, comparisons of rainfall detectability and rainfall rate distributions, spatial representativeness of error, and separation of systematic biases and random errors. The methodology and framework developed herein apply more generally to rainfall rate estimates from other sensors onboard low-earth-orbiting satellites, such as microwave imagers and dual-wavelength radars such as those of the Global Precipitation Measurement (GPM) mission.
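
    A compact sketch of the filter-then-separate step described above, assuming matched PR/reference rain-rate arrays and a per-pixel radar quality index (all names, thresholds and numbers are hypothetical). The log-ratio statistics give a multiplicative counterpart to the additive bias.

        import numpy as np

        def compare_pr_to_reference(pr_rate, ref_rate, quality, q_min=0.9):
            # Keep pixels where the reference is trustworthy and both rates are
            # positive, then split errors into systematic and random components.
            keep = (quality >= q_min) & (ref_rate > 0) & (pr_rate > 0)
            diff = pr_rate[keep] - ref_rate[keep]               # additive errors
            log_ratio = np.log(pr_rate[keep] / ref_rate[keep])  # multiplicative errors
            return ((diff.mean(), diff.std(ddof=1)),
                    (log_ratio.mean(), log_ratio.std(ddof=1)))

        pr = np.array([2.1, 5.0, 0.8, 12.3, 3.3])     # PR rain rates (mm/h)
        ref = np.array([2.5, 4.2, 1.0, 15.0, 3.1])    # Q2 reference rates (mm/h)
        q = np.array([0.95, 0.99, 0.80, 0.97, 0.92])  # quality index
        print(compare_pr_to_reference(pr, ref, q))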

  1. Effectiveness of Barcoding for Reducing Patient Specimen and Laboratory Testing Identification Errors: A Laboratory Medicine Best Practices Systematic Review and Meta-Analysis

    PubMed Central

    Snyder, Susan R.; Favoretto, Alessandra M.; Derzon, James H.; Christenson, Robert; Kahn, Stephen; Shaw, Colleen; Baetz, Rich Ann; Mass, Diana; Fantz, Corrine; Raab, Stephen; Tanasijevic, Milenko; Liebow, Edward B.

    2015-01-01

    Objectives This is the first systematic review of the effectiveness of barcoding practices for reducing patient specimen and laboratory testing identification errors. Design and Methods The CDC-funded Laboratory Medicine Best Practices Initiative systematic review methods for quality improvement practices were used. Results A total of 17 observational studies reporting on barcoding systems are included in the body of evidence; 10 for patient specimens and 7 for point-of-care testing. All 17 studies favored barcoding, with meta-analysis mean odds ratios for barcoding systems of 4.39 (95% CI: 3.05 – 6.32) and for point-of-care testing of 5.93 (95% CI: 5.28 – 6.67). Conclusions Barcoding is effective for reducing patient specimen and laboratory testing identification errors in diverse hospital settings and is recommended as an evidence-based “best practice.” The overall strength of evidence rating is high and the effect size rating is substantial. Unpublished studies made an important contribution comprising almost half of the body of evidence. PMID:22750145

  2. Systematic analysis of the in situ crosstalk of tyrosine modifications reveals no additional natural selection on multiply modified residues

    PubMed Central

    Pan, Zhicheng; Liu, Zexian; Cheng, Han; Wang, Yongbo; Gao, Tianshun; Ullah, Shahid; Ren, Jian; Xue, Yu

    2014-01-01

    Recent studies have indicated that different post-translational modifications (PTMs) synergistically orchestrate specific biological processes by crosstalks. However, the preference of the crosstalk among different PTMs and the evolutionary constraint on the PTM crosstalk need further dissections. In this study, the in situ crosstalk at the same positions among three tyrosine PTMs including sulfation, nitration and phosphorylation were systematically analyzed. The experimentally identified sulfation, nitration and phosphorylation sites were collected and integrated with reliable predictions to perform large-scale analyses of in situ crosstalks. From the results, we observed that the in situ crosstalk between sulfation and nitration is significantly under-represented, whereas both sulfation and nitration prefer to co-occupy with phosphorylation at same tyrosines. Further analyses suggested that sulfation and nitration preferentially co-occur with phosphorylation at specific positions in proteins, and participate in distinct biological processes and functions. More interestingly, the long-term evolutionary analysis indicated that multi-PTM targeting tyrosines didn't show any higher conservation than singly modified ones. Also, the analysis of human genetic variations demonstrated that there is no additional functional constraint on inherited disease, cancer or rare mutations of multiply modified tyrosines. Taken together, our systematic analyses provided a better understanding of the in situ crosstalk among PTMs. PMID:25476580

  3. Impact of contacting study authors to obtain additional data for systematic reviews: diagnostic accuracy studies for hepatic fibrosis

    PubMed Central

    2014-01-01

    Background Seventeen of 172 included studies in a recent systematic review of blood tests for hepatic fibrosis or cirrhosis reported diagnostic accuracy results discordant from 2 × 2 tables, and 60 studies reported inadequate data to construct 2 × 2 tables. This study explores the yield of contacting authors of diagnostic accuracy studies and impact on the systematic review findings. Methods Sixty-six corresponding authors were sent letters requesting additional information or clarification of data from 77 studies. Data received from the authors were synthesized with data included in the previous review, and diagnostic accuracy sensitivities, specificities, and positive and likelihood ratios were recalculated. Results Of the 66 authors, 68% were successfully contacted and 42% provided additional data for 29 out of 77 studies (38%). All authors who provided data at all did so by the third emailed request (ten authors provided data after one request). Authors of more recent studies were more likely to be located and provide data compared to authors of older studies. The effects of requests for additional data on the conclusions regarding the utility of blood tests to identify patients with clinically significant fibrosis or cirrhosis were generally small for ten out of 12 tests. Additional data resulted in reclassification (using median likelihood ratio estimates) from less useful to moderately useful or vice versa for the remaining two blood tests and enabled the calculation of an estimate for a third blood test for which previously the data had been insufficient to do so. We did not identify a clear pattern for the directional impact of additional data on estimates of diagnostic accuracy. Conclusions We successfully contacted and received results from 42% of authors who provided data for 38% of included studies. Contacting authors of studies evaluating the diagnostic accuracy of serum biomarkers for hepatic fibrosis and cirrhosis in hepatitis C patients

  4. Towards a systematic assessment of errors in diffusion Monte Carlo calculations of semiconductors: Case study of zinc selenide and zinc oxide

    SciTech Connect

    Yu, Jaehyung; Wagner, Lucas K.; Ertekin, Elif

    2015-12-14

    The fixed node diffusion Monte Carlo (DMC) method has attracted interest in recent years as a way to calculate properties of solid materials with high accuracy. However, the framework for the calculation of properties such as total energies, atomization energies, and excited state energies is not yet fully established. Several outstanding questions remain as to the effect of pseudopotentials, the magnitude of the fixed node error, and the size of supercell finite size effects. Here, we consider in detail the semiconductors ZnSe and ZnO and carry out systematic studies to assess the magnitude of the energy differences arising from controlled and uncontrolled approximations in DMC. The former include time step errors and supercell finite size effects for ground and optically excited states, and the latter include pseudopotentials, the pseudopotential localization approximation, and the fixed node approximation. We find that for these compounds, the errors can be controlled to good precision using modern computational resources and that quantum Monte Carlo calculations using Dirac-Fock pseudopotentials can offer good estimates of both cohesive energy and the gap of these systems. We do however observe differences in calculated optical gaps that arise when different pseudopotentials are used.
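
    The time step error mentioned above is one of the controlled approximations; a standard way to control it is to compute energies at several time steps and extrapolate to zero. A minimal sketch under that assumption; the energies below are invented, not the paper's results.

        import numpy as np

        # Hypothetical DMC total energies (Ha) at several time steps
        tau = np.array([0.04, 0.02, 0.01, 0.005])
        e_dmc = np.array([-26.912, -26.904, -26.900, -26.898])

        # Linear fit E(tau) = E0 + a * tau; the intercept E0 estimates the
        # zero-time-step limit.
        a, e0 = np.polyfit(tau, e_dmc, 1)
        print(f"extrapolated E(tau -> 0) = {e0:.4f} Ha")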

  5. Towards a systematic assessment of errors in diffusion Monte Carlo calculations of semiconductors: Case study of zinc selenide and zinc oxide

    NASA Astrophysics Data System (ADS)

    Yu, Jaehyung; Wagner, Lucas K.; Ertekin, Elif

    2015-12-01

    The fixed node diffusion Monte Carlo (DMC) method has attracted interest in recent years as a way to calculate properties of solid materials with high accuracy. However, the framework for the calculation of properties such as total energies, atomization energies, and excited state energies is not yet fully established. Several outstanding questions remain as to the effect of pseudopotentials, the magnitude of the fixed node error, and the size of supercell finite size effects. Here, we consider in detail the semiconductors ZnSe and ZnO and carry out systematic studies to assess the magnitude of the energy differences arising from controlled and uncontrolled approximations in DMC. The former include time step errors and supercell finite size effects for ground and optically excited states, and the latter include pseudopotentials, the pseudopotential localization approximation, and the fixed node approximation. We find that for these compounds, the errors can be controlled to good precision using modern computational resources and that quantum Monte Carlo calculations using Dirac-Fock pseudopotentials can offer good estimates of both cohesive energy and the gap of these systems. We do however observe differences in calculated optical gaps that arise when different pseudopotentials are used.

  6. A statistical analysis of systematic errors in temperature and ram velocity estimates from satellite-borne retarding potential analyzers

    SciTech Connect

    Klenzing, J. H.; Earle, G. D.; Heelis, R. A.; Coley, W. R.

    2009-05-15

    The use of biased grids as energy filters for charged particles is common in satellite-borne instruments such as a planar retarding potential analyzer (RPA). Planar RPAs are currently flown on missions such as the Communications/Navigation Outage Forecast System and the Defense Meteorological Satellites Program to obtain estimates of geophysical parameters including ion velocity and temperature. It has been shown previously that the use of biased grids in such instruments creates a nonuniform potential in the grid plane, which leads to inherent errors in the inferred parameters. A simulation of ion interactions with various configurations of biased grids has been developed using a commercial finite-element analysis software package. Using a statistical approach, the simulation calculates collected flux from Maxwellian ion distributions with three-dimensional drift relative to the instrument. Perturbations in the performance of flight instrumentation relative to expectations from the idealized RPA flux equation are discussed. Both single grid and dual-grid systems are modeled to investigate design considerations. Relative errors in the inferred parameters for each geometry are characterized as functions of ion temperature and drift velocity.

  7. Analysis and mitigation of systematic errors in spectral shearing interferometry of pulses approaching the single-cycle limit [Invited]

    SciTech Connect

    Birge, Jonathan R.; Kaertner, Franz X.

    2008-06-15

    We derive an analytical approximation for the measured pulse width error in spectral shearing methods, such as spectral phase interferometry for direct electric-field reconstruction (SPIDER), caused by an anomalous delay between the two sheared pulse components. This analysis suggests that, as pulses approach the single-cycle limit, the resulting requirements on the calibration and stability of this delay become significant, requiring precision orders of magnitude higher than the scale of a wavelength. This is demonstrated by numerical simulations of SPIDER pulse reconstruction using actual data from a sub-two-cycle laser. We briefly propose methods to minimize the effects of this sensitivity in SPIDER and review variants of spectral shearing that attempt to avoid this difficulty.

  8. Kinematic GPS solutions for aircraft trajectories: Identifying and minimizing systematic height errors associated with atmospheric propagation delays

    USGS Publications Warehouse

    Shan, S.; Bevis, M.; Kendrick, E.; Mader, G.L.; Raleigh, D.; Hudnut, K.; Sartori, M.; Phillips, D.

    2007-01-01

    When kinematic GPS processing software is used to estimate the trajectory of an aircraft, unless the delays imposed on the GPS signals by the atmosphere are either estimated or calibrated via external observations, then vertical height errors of decimeters can occur. This problem is clearly manifested when the aircraft is positioned against multiple base stations in areas of pronounced topography because the aircraft height solutions obtained using different base stations will tend to be mutually offset, or biased, in proportion to the elevation differences between the base stations. When performing kinematic surveys in areas with significant topography it should be standard procedure to use multiple base stations, and to separate them vertically to the maximum extent possible, since it will then be much easier to detect mis-modeling of the atmosphere. Copyright 2007 by the American Geophysical Union.
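
    A minimal sketch of the detection idea: compute the aircraft trajectory against two base stations at different elevations and look for a mutual vertical offset between the two height solutions. The numbers below are illustrative.

        import numpy as np

        def relative_height_bias(height_a, height_b):
            # Median vertical offset between height solutions for the same
            # epochs, computed against two different base stations.
            return np.median(np.asarray(height_a) - np.asarray(height_b))

        h_low = np.array([1502.31, 1503.10, 1504.02, 1504.95])   # vs low station (m)
        h_high = np.array([1502.12, 1502.93, 1503.81, 1504.77])  # vs high station (m)
        print(f"mutual offset: {relative_height_bias(h_low, h_high):.2f} m")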

  9. Benzodiazepine Use During Hospitalization: Automated Identification of Potential Medication Errors and Systematic Assessment of Preventable Adverse Events

    PubMed Central

    Niedrig, David Franklin; Hoppe, Liesa; Mächler, Sarah; Russmann, Heike; Russmann, Stefan

    2016-01-01

    Objective Benzodiazepines and “Z-drug” GABA-receptor modulators (BDZ) are among the most frequently used drugs in hospitals. Adverse drug events (ADE) associated with BDZ can be the result of preventable medication errors (ME) related to dosing, drug interactions and comorbidities. The present study evaluated inpatient use of BDZ and related ME and ADE. Methods We conducted an observational study within a pharmacoepidemiological database derived from the clinical information system of a tertiary care hospital. We developed algorithms that identified dosing errors and interacting comedication for all administered BDZ. Associated ADE and risk factors were validated in medical records. Results Among 53,081 patients contributing 495,813 patient-days BDZ were administered to 25,626 patients (48.3%) on 115,150 patient-days (23.2%). We identified 3,372 patient-days (2.9%) with comedication that inhibits BDZ metabolism, and 1,197 (1.0%) with lorazepam administration in severe renal impairment. After validation we classified 134, 56, 12, and 3 cases involving lorazepam, zolpidem, midazolam and triazolam, respectively, as clinically relevant ME. Among those there were 23 cases with associated adverse drug events, including severe CNS-depression, falls with subsequent injuries and severe dyspnea. Causality for BDZ was formally assessed as ‘possible’ or ‘probable’ in 20 of those cases. Four cases with ME and associated severe ADE required administration of the BDZ antagonist flumazenil. Conclusions BDZ use was remarkably high in the studied setting, frequently involved potential ME related to dosing, co-medication and comorbidities, and rarely cases with associated ADE. We propose the implementation of automated ME screening and validation for the prevention of BDZ-related ADE. PMID:27711224
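
    A toy sketch in the spirit of the screening algorithms described above; the rule, threshold and record format are hypothetical, not the study's implementation.

        def flag_lorazepam_renal(drug, egfr_ml_min):
            # Hypothetical rule: flag lorazepam administered in severe renal
            # impairment (the eGFR threshold is illustrative only).
            return drug == "lorazepam" and egfr_ml_min < 30

        records = [("lorazepam", 25), ("zolpidem", 80), ("lorazepam", 55)]
        print([r for r in records if flag_lorazepam_renal(*r)])  # flags the first record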

  10. Understanding human management of automation errors

    PubMed Central

    McBride, Sara E.; Rogers, Wendy A.; Fisk, Arthur D.

    2013-01-01

    Automation has the potential to aid humans with a diverse set of tasks and support overall system performance. Automated systems are not always reliable, and when automation errs, humans must engage in error management, which is the process of detecting, understanding, and correcting errors. However, this process of error management in the context of human-automation interaction is not well understood. Therefore, we conducted a systematic review of the variables that contribute to error management. We examined relevant research in human-automation interaction and human error to identify critical automation, person, task, and emergent variables. We propose a framework for management of automation errors to incorporate and build upon previous models. Further, our analysis highlights variables that may be addressed through design and training to positively influence error management. Additional efforts to understand the error management process will contribute to automation designed and implemented to support safe and effective system performance. PMID:25383042

  11. Adenomyomatosis of the gallbladder in childhood: A systematic review of the literature and an additional case report

    PubMed Central

    Parolini, Filippo; Indolfi, Giuseppe; Magne, Miguel Garcia; Salemme, Marianna; Cheli, Maurizio; Boroni, Giovanni; Alberti, Daniele

    2016-01-01

    AIM: To investigate the diagnostic and therapeutic assessment in children with adenomyomatosis of the gallbladder (AMG). METHODS: AMG is a degenerative disease characterized by a proliferation of the mucosal epithelium which deeply invaginates and extends into the thickened muscular layer of the gallbladder, causing intramural diverticula. Although AMG is found in up to 5% of cholecystectomy specimens in adult populations, this condition in childhood is extremely uncommon. The authors provide a detailed systematic review of the pediatric literature according to PRISMA guidelines, focusing on diagnostic and therapeutic assessment. An additional case of AMG is also presented. RESULTS: Five studies were finally included, encompassing 5 children with AMG. Analysis was extended to our additional 11-year-old patient, who presented diffuse AMG and pancreatic acinar metaplasia of the gallbladder mucosa and was successfully managed with laparoscopic cholecystectomy. Mean age at presentation was 7.2 years. Unspecific abdominal pain was the commonest symptom. Abdominal ultrasound was performed on all patients, with a diagnostic accuracy of 100%. Five patients underwent cholecystectomy, and at follow-up were asymptomatic. In the remaining patient, completely asymptomatic at diagnosis, a conservative approach with monthly monitoring via ultrasonography was undertaken. CONCLUSION: Considering the remote but possible degeneration leading to cancer and the feasibility of laparoscopic cholecystectomy even in small children, evidence suggests that elective laparoscopic cholecystectomy represents the treatment of choice. Pre-operative evaluation of the extrahepatic biliary tree anatomy with cholangio-MRI is strongly recommended. PMID:27170933

  12. Medication Errors

    MedlinePlus

    ... to reduce the risk of medication errors to industry and others at FDA. Additionally, DMEPA prospectively reviews ... List of Abbreviations Regulations and Guidances Guidance for Industry: Safety Considerations for Product Design to Minimize Medication ...

  13. Bacterial Cooperation Causes Systematic Errors in Pathogen Risk Assessment due to the Failure of the Independent Action Hypothesis.

    PubMed

    Cornforth, Daniel M; Matthews, Andrew; Brown, Sam P; Raymond, Ben

    2015-04-01

    The Independent Action Hypothesis (IAH) states that pathogenic individuals (cells, spores, virus particles etc.) behave independently of each other, so that each has an independent probability of causing systemic infection or death. The IAH is not just of basic scientific interest; it forms the basis of our current estimates of infectious disease risk in humans. Despite the important role of the IAH in managing disease interventions for food and water-borne pathogens, experimental support for the IAH in bacterial pathogens is indirect at best. Moreover since the IAH was first proposed, cooperative behaviors have been discovered in a wide range of microorganisms, including many pathogens. A fundamental principle of cooperation is that the fitness of individuals is affected by the presence and behaviors of others, which is contrary to the assumption of independent action. In this paper, we test the IAH in Bacillus thuringiensis (B.t.), a widely occurring insect pathogen that releases toxins that benefit others in the inoculum, infecting the diamondback moth, Plutella xylostella. By experimentally separating B.t. spores from their toxins, we demonstrate that the IAH fails because there is an interaction between toxin and spore effects on mortality, where the toxin effect is synergistic and cannot be accommodated by independence assumptions. Finally, we show that applying recommended IAH dose-response models to high dose data leads to systematic overestimation of mortality risks at low doses, due to the presence of synergistic pathogen interactions. Our results show that cooperative secretions can easily invalidate the IAH, and that such mechanistic details should be incorporated into pathogen risk analysis.
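
    A small numerical sketch of the risk-assessment point, assuming the common exponential (one-hit) dose-response model of independent action; all numbers are illustrative. A per-pathogen risk r fitted only at high doses, where synergy inflates mortality, overstates risk when extrapolated to low doses.

        import numpy as np

        def p_infect_independent(dose, r):
            # One-hit model under independent action: each pathogen causes
            # infection independently with probability r.
            return 1.0 - np.exp(-r * dose)

        # Fit r from a single high-dose observation inflated by synergy...
        high_dose, mortality_high = 1e6, 0.9
        r_fit = -np.log(1.0 - mortality_high) / high_dose
        # ...then extrapolation to a low dose overestimates the true risk.
        print(p_infect_independent(100.0, r_fit))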

  15. Large-scale compensation of errors in pairwise-additive empirical force fields: comparison of AMBER intermolecular terms with rigorous DFT-SAPT calculations.

    PubMed

    Zgarbová, Marie; Otyepka, Michal; Sponer, Jirí; Hobza, Pavel; Jurecka, Petr

    2010-09-21

    The intermolecular interaction energy components for several molecular complexes were calculated using force fields available in the AMBER suite of programs and compared with Density Functional Theory-Symmetry Adapted Perturbation Theory (DFT-SAPT) values. The extent to which such comparison is meaningful is discussed. The comparability is shown to depend strongly on the intermolecular distance, which means that comparisons made at one distance only are of limited value. At large distances the coulombic and van der Waals 1/r(6) empirical terms correspond fairly well with the DFT-SAPT electrostatics and dispersion terms, respectively. At the onset of electronic overlap the empirical values deviate from the reference values considerably. However, the errors in the force fields tend to cancel out in a systematic manner at equilibrium distances. Thus, the overall performance of the force fields displays errors an order of magnitude smaller than those of the individual interaction energy components. The repulsive 1/r(12) component of the van der Waals expression seems to be responsible for a significant part of the deviation of the force field results from the reference values. We suggest that further improvement of the force fields for intermolecular interactions would require replacement of the nonphysical 1/r(12) term by an exponential function. Dispersion anisotropy and its effects are discussed. Our analysis is intended to show that although comparing the empirical and non-empirical interaction energy components is in general problematic, it might bring insights useful for the construction of new force fields. Our results are relevant to often performed force-field-based interaction energy decompositions.
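
    A minimal sketch contrasting the two repulsive forms discussed above: the 1/r(12) Lennard-Jones wall used in AMBER-type force fields and an exponential (Born-Mayer) wall of the kind the authors suggest. Parameters are illustrative, chosen only so the two walls roughly coincide near a typical contact distance.

        import numpy as np

        def lj_repulsion(r, a):
            # 1/r^12 repulsive term of the Lennard-Jones form
            return a / r ** 12

        def exp_repulsion(r, b, alpha):
            # Born-Mayer exponential repulsion
            return b * np.exp(-alpha * r)

        r = np.linspace(2.5, 5.0, 6)  # distances in angstroms
        print(lj_repulsion(r, a=4.0e6))
        print(exp_repulsion(r, b=4.2e4, alpha=3.0))  # similar near r ~ 3.5, softer wall below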

  16. Error Analysis

    NASA Astrophysics Data System (ADS)

    Scherer, Philipp O. J.

    Input data as well as the results of elementary operations have to be represented by machine numbers, the subset of real numbers which is used by the arithmetic unit of today's computers. Generally this generates rounding errors. This kind of numerical error can be avoided in principle by using arbitrary precision arithmetics or symbolic algebra programs. But this is impractical in many cases due to the increase in computing time and memory requirements. Results from more complex operations like square roots or trigonometric functions can have even larger errors since series expansions have to be truncated and iterations accumulate the errors of the individual steps. In addition, the precision of input data from an experiment is limited. In this chapter we study the influence of numerical errors on the uncertainties of the calculated results and the stability of simple algorithms.
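
    A short Python illustration of the rounding-error point: 0.1 has no exact binary representation, so repeated addition accumulates representation error, while compensated summation avoids it.

        import math

        total = sum(0.1 for _ in range(10))
        print(total == 1.0)   # False: ten individual rounding errors accumulate
        print(total)          # 0.9999999999999999
        print(math.fsum(0.1 for _ in range(10)) == 1.0)  # True: compensated summation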

  17. Additive Synergism between Asbestos and Smoking in Lung Cancer Risk: A Systematic Review and Meta-Analysis

    PubMed Central

    Ngamwong, Yuwadee; Tangamornsuksan, Wimonchat; Lohitnavy, Ornrat; Chaiyakunapruk, Nathorn; Scholfield, C. Norman; Reisfeld, Brad; Lohitnavy, Manupat

    2015-01-01

    Smoking and asbestos exposure are important risks for lung cancer. Several epidemiological studies have linked asbestos exposure and smoking to lung cancer. To reconcile and unify these results, we conducted a systematic review and meta-analysis to provide a quantitative estimate of the increased risk of lung cancer associated with asbestos exposure and cigarette smoking and to classify their interaction. Five electronic databases were searched from inception to May, 2015 for observational studies on lung cancer. All case-control (N = 10) and cohort (N = 7) studies were included in the analysis. We calculated pooled odds ratios (ORs), relative risks (RRs) and 95% confidence intervals (CIs) using a random-effects model for the association of asbestos exposure and smoking with lung cancer. Lung cancer patients who were not exposed to asbestos and non-smoking (A-S-) were compared with: (i) asbestos-exposed and non-smoking (A+S-), (ii) non-exposure to asbestos and smoking (A-S+), and (iii) asbestos-exposed and smoking (A+S+). Our meta-analysis showed a significant difference in risk of developing lung cancer among asbestos exposed and/or smoking workers compared to controls (A-S-); odds ratios for the disease (95% CI) were (i) 1.70 (A+S-, 1.31–2.21), (ii) 5.65; (A-S+, 3.38–9.42), (iii) 8.70 (A+S+, 5.8–13.10). The additive interaction index of synergy was 1.44 (95% CI = 1.26–1.77) and the multiplicative index = 0.91 (95% CI = 0.63–1.30). Corresponding values for cohort studies were 1.11 (95% CI = 1.00–1.28) and 0.51 (95% CI = 0.31–0.85). Our results point to an additive synergism for lung cancer with co-exposure of asbestos and cigarette smoking. Assessments of industrial health risks should take smoking and other airborne health risks into account when setting occupational asbestos exposure limits. PMID:26274395
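
    The interaction indices quoted above follow directly from the pooled odds ratios; a quick arithmetic check using the standard definitions:

        # Pooled odds ratios from the case-control studies above
        or_asbestos, or_smoking, or_both = 1.70, 5.65, 8.70

        # Additive synergy index: S = (OR11 - 1) / ((OR10 - 1) + (OR01 - 1))
        s_additive = (or_both - 1) / ((or_asbestos - 1) + (or_smoking - 1))

        # Multiplicative interaction index: OR11 / (OR10 * OR01)
        s_multiplicative = or_both / (or_asbestos * or_smoking)

        print(round(s_additive, 2), round(s_multiplicative, 2))  # 1.44 0.91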

  18. Additive Synergism between Asbestos and Smoking in Lung Cancer Risk: A Systematic Review and Meta-Analysis.

    PubMed

    Ngamwong, Yuwadee; Tangamornsuksan, Wimonchat; Lohitnavy, Ornrat; Chaiyakunapruk, Nathorn; Scholfield, C Norman; Reisfeld, Brad; Lohitnavy, Manupat

    2015-01-01

    Smoking and asbestos exposure are important risks for lung cancer. Several epidemiological studies have linked asbestos exposure and smoking to lung cancer. To reconcile and unify these results, we conducted a systematic review and meta-analysis to provide a quantitative estimate of the increased risk of lung cancer associated with asbestos exposure and cigarette smoking and to classify their interaction. Five electronic databases were searched from inception to May, 2015 for observational studies on lung cancer. All case-control (N = 10) and cohort (N = 7) studies were included in the analysis. We calculated pooled odds ratios (ORs), relative risks (RRs) and 95% confidence intervals (CIs) using a random-effects model for the association of asbestos exposure and smoking with lung cancer. Lung cancer patients who were not exposed to asbestos and non-smoking (A-S-) were compared with: (i) asbestos-exposed and non-smoking (A+S-), (ii) non-exposure to asbestos and smoking (A-S+), and (iii) asbestos-exposed and smoking (A+S+). Our meta-analysis showed a significant difference in risk of developing lung cancer among asbestos exposed and/or smoking workers compared to controls (A-S-); odds ratios for the disease (95% CI) were (i) 1.70 (A+S-, 1.31-2.21), (ii) 5.65; (A-S+, 3.38-9.42), (iii) 8.70 (A+S+, 5.8-13.10). The additive interaction index of synergy was 1.44 (95% CI = 1.26-1.77) and the multiplicative index = 0.91 (95% CI = 0.63-1.30). Corresponding values for cohort studies were 1.11 (95% CI = 1.00-1.28) and 0.51 (95% CI = 0.31-0.85). Our results point to an additive synergism for lung cancer with co-exposure of asbestos and cigarette smoking. Assessments of industrial health risks should take smoking and other airborne health risks into account when setting occupational asbestos exposure limits.

  19. Systematic errors in the simulation of european climate (1961-2000) with RegCM3 driven by NCEP/NCAR reanalysis

    NASA Astrophysics Data System (ADS)

    Bergant, Klemen; Belda, Michal; Halenka, Tomáš

    2007-03-01

    Systematic errors of a European climate simulation (1961-2000) with RegCM3 were analyzed. Model results were compared to Climatic Research Unit (CRU) observations. Average (AveB) and annual cycle biases (CycB) were evaluated for three surface variables: air temperature (TMP), water vapor pressure (VAP), and precipitation (PRE). The model shows a cold AveB over Europe with the exception of the northern part. It also shows a prevailing wet AveB. Annual PRE is underestimated only in regions with high average values, while VAP is overestimated over the entire European continent. The AveB is between -1.2 °C and +1.0 °C for TMP, 0.4 mb and 1.4 mb for VAP, and -15% and +33% for PRE on the annual/subcontinental scale. Most of the TMP and VAP CycB is related to the amplitude of the annual cycle (CycA). The CycA of the TMP is underestimated over most of Europe. The CycA of the VAP is underestimated in some coastal regions and overestimated over the continental regions. The distinction between coastal and inland regions can also be seen in the CycB of the PRE. In coastal regions, with a PRE maximum in late autumn/early winter and minimum in summer, the CycA is underestimated. In some continental regions, with a precipitation maximum in summer and minimum in late autumn/early winter, the CycA is overestimated. The annual cycle pattern is not captured well by RegCM3 over the Alps and European Russia. Most of the systematic errors in the RegCM3 simulation can be related to boundary conditions. Although the bias in the NCEP/NCAR reanalysis is reflected in the RegCM3 simulation, RegCM3 enriches large-scale information with regional details and with a more realistic description of annual cycles, especially for PRE. Because of these advantages and the overall relatively good performance of RegCM3, the model is seen as a valuable tool in regional projections of future climate change.
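
    A minimal sketch of AveB/CycB-style diagnostics from 12-month climatologies, taking the annual-cycle amplitude simply as max minus min (one simple definition; the data below are invented):

        import numpy as np

        def average_and_cycle_bias(model_monthly, obs_monthly):
            # Average bias and annual-cycle-amplitude bias of a 12-month climatology
            model = np.asarray(model_monthly, dtype=float)
            obs = np.asarray(obs_monthly, dtype=float)
            ave_bias = model.mean() - obs.mean()
            amp_bias = (model.max() - model.min()) - (obs.max() - obs.min())
            return ave_bias, amp_bias

        model = np.array([-3, -1, 3, 8, 13, 16, 18, 17, 13, 8, 3, -1])  # deg C
        obs = np.array([-2, 0, 4, 9, 14, 18, 20, 19, 14, 9, 4, 0])      # deg C
        print(average_and_cycle_bias(model, obs))  # cold bias, damped annual cycle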

  20. Errors in Viking Lander Atmospheric Profiles Discovered Using MOLA Topography

    NASA Technical Reports Server (NTRS)

    Withers, Paul; Lorenz, R. D.; Neumann, G. A.

    2002-01-01

    Each Viking lander measured a topographic profile during entry. Comparing to MOLA (Mars Orbiter Laser Altimeter), we find a vertical error of 1-2 km in the Viking trajectory. This introduces a systematic error of 10-20% in the Viking densities and pressures at a given altitude. Additional information is contained in the original extended abstract.

  1. Addition of dipeptidyl peptidase-4 inhibitors to sulphonylureas and risk of hypoglycaemia: systematic review and meta-analysis

    PubMed Central

    Moore, Nicholas; Arnaud, Mickael; Robinson, Philip; Raschi, Emanuel; De Ponti, Fabrizio; Bégaud, Bernard; Pariente, Antoine

    2016-01-01

    Objective To quantify the risk of hypoglycaemia associated with the concomitant use of dipeptidyl peptidase-4 (DPP-4) inhibitors and sulphonylureas compared with placebo and sulphonylureas. Design Systematic review and meta-analysis. Data sources Medline, ISI Web of Science, SCOPUS, Cochrane Central Register of Controlled Trials, and clinicaltrial.gov were searched without any language restriction. Study selection Placebo controlled randomised trials comprising at least 50 participants with type 2 diabetes treated with DPP-4 inhibitors and sulphonylureas. Review methods Risk of bias in each trial was assessed using the Cochrane Collaboration tool. The risk ratio of hypoglycaemia with 95% confidence intervals was computed for each study and then pooled using fixed effect models (Mantel Haenszel method) or random effect models, when appropriate. Subgroup analyses were also performed (eg, dose of DPP-4 inhibitors). The number needed to harm (NNH) was estimated according to treatment duration. Results 10 studies were included, representing a total of 6546 participants (4020 received DPP-4 inhibitors plus sulphonylureas, 2526 placebo plus sulphonylureas). The risk ratio of hypoglycaemia was 1.52 (95% confidence interval 1.29 to 1.80). The NNH was 17 (95% confidence interval 11 to 30) for a treatment duration of six months or less, 15 (9 to 26) for 6.1 to 12 months, and 8 (5 to 15) for more than one year. In subgroup analysis, no difference was found between full and low doses of DPP-4 inhibitors: the risk ratio related to full dose DPP-4 inhibitors was 1.66 (1.34 to 2.06), whereas the increased risk ratio related to low dose DPP-4 inhibitors did not reach statistical significance (1.33, 0.92 to 1.94). Conclusions Addition of DPP-4 inhibitors to sulphonylurea to treat people with type 2 diabetes is associated with a 50% increased risk of hypoglycaemia and with one excess case of hypoglycaemia for every 17 patients in the first six months of treatment. This
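
    The NNH figures above can be recovered from the pooled risk ratio together with a control-group event rate; in the sketch below the control-group rate is illustrative, chosen to reproduce the reported six-month NNH of 17.

        def number_needed_to_harm(risk_ratio, control_risk):
            # NNH = 1 / absolute risk increase, with the increase derived from
            # the risk ratio and the control-group event rate.
            return 1.0 / (control_risk * (risk_ratio - 1.0))

        print(round(number_needed_to_harm(1.52, 0.11)))  # ~17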

  2. Sources of the systematic errors in measurements of 214Po decay half-life time variations at the Baksan deep underground experiments

    NASA Astrophysics Data System (ADS)

    Alexeyev, E. N.; Gavrilyuk, Yu. M.; Gangapshev, A. M.; Kazalov, V. V.; Kuzminov, V. V.; Panasenko, S. I.; Ratkevich, S. S.

    2015-03-01

    Design changes to the Baksan low-background TAU-1 and TAU-2 set-ups, which improved the sensitivity of 214Po half-life (τ) measurements to 2.5 × 10⁻⁴, are described. Different possible sources of systematic errors influencing the τ-value are studied. An annual variation of the 214Po half-life measurements with an amplitude of A = (6.9 ± 3) × 10⁻⁴ and a phase of φ = 93 ± 10 days was found in a sequence of weekly collected τ-values obtained from the TAU-2 data sample with a total duration of 480 days. A 24-hour variation of the τ-value measurements with an amplitude of A = (10.0 ± 2.6) × 10⁻⁴ and a phase of φ = 1 ± 0.5 hours was found in a 1-hour-step τ-value sequence over the solar day formed from the same data sample. The 214Po half-life averaged over the 480 days is equal to 163.45 ± 0.04 μs.
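
    A minimal sketch of extracting such an annual modulation from a binned τ-value series by least-squares fitting of a sinusoid, assuming scipy is available; the series below is synthetic, generated with the reported amplitude and phase.

        import numpy as np
        from scipy.optimize import curve_fit

        def annual_wave(t_days, mean, amp, phase_days):
            # Sinusoidal annual modulation of the measured half-life
            return mean * (1.0 + amp * np.cos(2 * np.pi * (t_days - phase_days) / 365.25))

        t = np.arange(0.0, 480.0, 7.0)  # week-binned measurement times (days)
        rng = np.random.default_rng(1)
        tau = annual_wave(t, 163.45, 6.9e-4, 93.0) + rng.normal(0.0, 0.02, t.size)

        (mean, amp, phase), _ = curve_fit(annual_wave, t, tau, p0=[163.4, 1e-3, 80.0])
        print(mean, amp, phase)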

  3. [Medical device use errors].

    PubMed

    Friesdorf, Wolfgang; Marsolek, Ingo

    2008-01-01

    Medical devices define our everyday patient treatment processes. But despite their beneficial effect, every use can also lead to damage. Use errors are thus often attributed to human failure. But human error can never be completely eliminated, especially in such complex work processes as those in medicine, which often involve time pressure. Therefore we need error-tolerant work systems in which potential problems are identified and solved as early as possible. In this context human engineering uses the TOP principle: technological before organisational and then person-related solutions. But especially in everyday medical work we realise that error-prone usability concepts can often only be counterbalanced by organisational or person-related measures. Thus human failure is pre-programmed. In addition, many medical workplaces represent a somewhat chaotic accumulation of individual devices with totally different user interaction concepts. There is not only a lack of holistic workplace concepts, but of holistic process and system concepts as well. However, this can only be achieved through the co-operation of producers, healthcare providers and clinical users, by systematically analyzing and iteratively optimizing the underlying treatment processes from both a technological and organizational perspective. What we need is a joint platform like medilab V of the TU Berlin, in which the entire medical treatment chain can be simulated in order to discuss, experiment and model -- a key to a safe and efficient healthcare system of the future. PMID:19213452

  4. Reducing diagnostic errors in primary care. A systematic meta-review of computerized diagnostic decision support systems by the LINNEAUS collaboration on patient safety in primary care

    PubMed Central

    Nurek, Martine; Kostopoulou, Olga; Delaney, Brendan C; Esmail, Aneez

    2015-01-01

    ABSTRACT Background: Computerized diagnostic decision support systems (CDDSS) have the potential to support the cognitive task of diagnosis, which is one of the areas where general practitioners have greatest difficulty and which accounts for a significant proportion of adverse events recorded in the primary care setting. Objective: To determine the extent to which CDDSS may meet the requirements of supporting the cognitive task of diagnosis, and the currently perceived barriers that prevent the integration of CDDSS with electronic health record (EHR) systems. Methods: We conducted a meta-review of existing systematic reviews published in English, searching MEDLINE, Embase, PsycINFO and Web of Knowledge for articles on the features and effectiveness of CDDSS for medical diagnosis published since 2004. Eligibility criteria included systematic reviews where individual clinicians were primary end users. Outcomes of interest were the effectiveness of CDDSS on diagnostic performance and the identification of specific CDDSS features associated with that effectiveness. Results: We identified 1970 studies and excluded 1938 because they did not fit our inclusion criteria. A total of 45 articles were identified and 12 were found suitable for meta-review. Extraction of high-level requirements identified that a more standardized, computable approach to knowledge representation is needed, one that can be readily updated as new knowledge is gained. In addition, deep integration with the EHR is needed in order to trigger at appropriate points in the cognitive workflow. Conclusion: Developing a CDDSS that is able to utilize dynamic vocabulary tools to quickly capture and code relevant diagnostic findings, and coupling these with individualized diagnostic suggestions based on the best-available evidence has the potential to improve diagnostic accuracy, but requires evaluation. PMID:26339829

  5. Preventive zinc supplementation for children, and the effect of additional iron: a systematic review and meta-analysis

    PubMed Central

    Mayo-Wilson, Evan; Imdad, Aamer; Junior, Jean; Dean, Sohni; Bhutta, Zulfiqar A

    2014-01-01

    Objective Zinc deficiency is widespread, and preventive supplementation may have benefits in young children. Effects for children over 5 years of age, and effects when coadministered with other micronutrients are uncertain. These are obstacles to scale-up. This review seeks to determine if preventive supplementation reduces mortality and morbidity for children aged 6 months to 12 years. Design Systematic review conducted with the Cochrane Developmental, Psychosocial and Learning Problems Group. Two reviewers independently assessed studies. Meta-analyses were performed for mortality, illness and side effects. Data sources We searched multiple databases, including CENTRAL and MEDLINE in January 2013. Authors were contacted for missing information. Eligibility criteria for selecting studies Randomised trials of preventive zinc supplementation. Hospitalised children and children with chronic diseases were excluded. Results 80 randomised trials with 205 401 participants were included. There was a small but non-significant effect on all-cause mortality (risk ratio (RR) 0.95 (95% CI 0.86 to 1.05)). Supplementation may reduce incidence of all-cause diarrhoea (RR 0.87 (0.85 to 0.89)), but there was evidence of reporting bias. There was no evidence of an effect of incidence or prevalence of respiratory infections or malaria. There was moderate quality evidence of a very small effect on linear growth (standardised mean difference 0.09 (0.06 to 0.13)) and an increase in vomiting (RR 1.29 (1.14 to 1.46)). There was no evidence of an effect on iron status. Comparing zinc with and without iron cosupplementation and direct comparisons of zinc plus iron versus zinc administered alone favoured cointervention for some outcomes and zinc alone for other outcomes. Effects may be larger for children over 1 year of age, but most differences were not significant. Conclusions Benefits of preventive zinc supplementation may outweigh any potentially adverse effects in areas where

  6. Uncertainty quantification and error analysis

    SciTech Connect

    Higdon, Dave M; Anderson, Mark C; Habib, Salman; Klein, Richard; Berliner, Mark; Covey, Curt; Ghattas, Omar; Graziani, Carlo; Seager, Mark; Sefcik, Joseph; Stark, Philip

    2010-01-01

    UQ studies all sources of error and uncertainty, including: systematic and stochastic measurement error; ignorance; limitations of theoretical models; limitations of numerical representations of those models; limitations on the accuracy and reliability of computations, approximations, and algorithms; and human error. A more precise definition for UQ is suggested below.

  7. Systematic review of ERP and fMRI studies investigating inhibitory control and error processing in people with substance dependence and behavioural addictions

    PubMed Central

    Luijten, Maartje; Machielsen, Marise W.J.; Veltman, Dick J.; Hester, Robert; de Haan, Lieuwe; Franken, Ingmar H.A.

    2014-01-01

    Background Several current theories emphasize the role of cognitive control in addiction. The present review evaluates neural deficits in the domains of inhibitory control and error processing in individuals with substance dependence and in those showing excessive addiction-like behaviours. The combined evaluation of event-related potential (ERP) and functional magnetic resonance imaging (fMRI) findings in the present review offers unique information on neural deficits in addicted individuals. Methods We selected 19 ERP and 22 fMRI studies using stop-signal, go/no-go or Flanker paradigms based on a search of PubMed and Embase. Results The most consistent findings in addicted individuals relative to healthy controls were lower N2, error-related negativity and error positivity amplitudes as well as hypoactivation in the anterior cingulate cortex (ACC), inferior frontal gyrus and dorsolateral prefrontal cortex. These neural deficits, however, were not always associated with impaired task performance. With regard to behavioural addictions, some evidence has been found for similar neural deficits; however, studies are scarce and results are not yet conclusive. Differences among the major classes of substances of abuse were identified and involve stronger neural responses to errors in individuals with alcohol dependence versus weaker neural responses to errors in other substance-dependent populations. Limitations Task design and analysis techniques vary across studies, thereby reducing comparability among studies and the potential of clinical use of these measures. Conclusion Current addiction theories were supported by identifying consistent abnormalities in prefrontal brain function in individuals with addiction. An integrative model is proposed, suggesting that neural deficits in the dorsal ACC may constitute a hallmark neurocognitive deficit underlying addictive behaviours, such as loss of control. PMID:24359877

  8. Error Sources in Asteroid Astrometry

    NASA Technical Reports Server (NTRS)

    Owen, William M., Jr.

    2000-01-01

    Asteroid astrometry, like any other scientific measurement process, is subject to both random and systematic errors, not all of which are under the observer's control. To design an astrometric observing program or to improve an existing one requires knowledge of the various sources of error, how different errors affect one's results, and how various errors may be minimized by careful observation or data reduction techniques.

  9. Errors and Uncertainty in Physics Measurement.

    ERIC Educational Resources Information Center

    Blasiak, Wladyslaw

    1983-01-01

    Classifies errors as either systematic or blunder and uncertainties as either systematic or random. Discusses use of error/uncertainty analysis in direct/indirect measurement, describing the process of planning experiments to ensure lowest possible uncertainty. Also considers appropriate level of error analysis for high school physics students'…

  10. Refractive Errors

    MedlinePlus

    ... and lens of your eye helps you focus. Refractive errors are vision problems that happen when the shape ... cornea, or aging of the lens. Four common refractive errors are Myopia, or nearsightedness - clear vision close up ...

  11. Rapid mapping of volumetric errors

    SciTech Connect

    Krulewich, D.; Hale, L.; Yordy, D.

    1995-09-13

    This paper describes a relatively inexpensive, fast, and easy to execute approach to mapping the volumetric errors of a machine tool, coordinate measuring machine, or robot. An error map is used to characterize a machine or to improve its accuracy by compensating for the systematic errors. The method consists of three steps: (1) modeling the relationship between the volumetric error and the current state of the machine; (2) acquiring error data based on length measurements throughout the work volume; and (3) optimizing the model to the particular machine.
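
    For a linear error model, the optimization in step (3) reduces to ordinary least squares. The sketch below illustrates only that step and is not the authors' procedure: the four-parameter kinematic model in error_at (three axis scale errors and one squareness term), the point pairs and the noise level are all invented for the example.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical linear kinematic error model: positioning error at (x, y, z)
    # as a linear function of four unknown machine parameters p
    # (three axis scale errors sx, sy, sz and one xy squareness term wxy).
    def error_at(point, p):
        x, y, z = point
        sx, sy, sz, wxy = p
        return np.array([sx * x + wxy * y, sy * y, sz * z])

    true_p = np.array([20e-6, -15e-6, 10e-6, 30e-6])   # unknown in practice
    basis = np.eye(4)

    # Step 2: "measure" lengths between random point pairs in the work volume;
    # the observable is the component of the differential error along the line.
    pairs = rng.uniform(0.0, 500.0, size=(40, 2, 3))   # mm
    rows, rhs = [], []
    for a, b in pairs:
        u = (b - a) / np.linalg.norm(b - a)
        rows.append([u @ (error_at(b, e) - error_at(a, e)) for e in basis])
        meas = u @ (error_at(b, true_p) - error_at(a, true_p))
        rhs.append(meas + rng.normal(0.0, 0.5e-3))     # 0.5 um random noise

    # Step 3: optimize the model; for a linear model this is least squares.
    p_hat, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
    print("true:", true_p)
    print("fit :", p_hat)
    ```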

  12. [Dealing with errors in medicine].

    PubMed

    Schoenenberger, R A; Perruchoud, A P

    1998-12-24

    Iatrogenic disease is probably more often than assumed the consequence of errors and mistakes committed by physicians and other medical personnel. Traditionally, strategies to prevent errors in medicine focus on inspection and rely on the professional ethos of health care personnel. The increasingly complex nature of medical practice and the multitude of interventions that each patient receives increase the likelihood of error. More efficient approaches to dealing with errors have been developed. The methods include routine identification of errors (critical incident reporting), systematic monitoring of multiple-step processes in medical practice, system analysis, and system redesign. A search for underlying causes of errors (rather than distal causes) will enable organizations to collectively learn without denying the inevitable occurrence of human error. Errors and mistakes may become precious chances to increase the quality of medical care.

  13. Systematic reviews need systematic searchers

    PubMed Central

    McGowan, Jessie; Sampson, Margaret

    2005-01-01

    Purpose: This paper will provide a description of the methods, skills, and knowledge of expert searchers working on systematic review teams. Brief Description: Systematic reviews and meta-analyses are very important to health care practitioners, who need to keep abreast of the medical literature and make informed decisions. Searching is a critical part of conducting these systematic reviews, as errors made in the search process potentially result in a biased or otherwise incomplete evidence base for the review. Searches for systematic reviews need to be constructed to maximize recall and deal effectively with a number of potentially biasing factors. Librarians who conduct the searches for systematic reviews must be experts. Discussion/Conclusion: Expert searchers need to understand the specifics about data structure and functions of bibliographic and specialized databases, as well as the technical and methodological issues of searching. Search methodology must be based on research about retrieval practices, and it is vital that expert searchers keep informed about, advocate for, and, moreover, conduct research in information retrieval. Expert searchers are an important part of the systematic review team, crucial throughout the review process—from the development of the proposal and research question to publication. PMID:15685278

  14. Software error detection

    NASA Technical Reports Server (NTRS)

    Buechler, W.; Tucker, A. G.

    1981-01-01

    Several methods were employed to detect both the occurrence and source of errors in the operational software of the AN/SLQ-32, a large embedded real-time electronic warfare command and control system for the ROLM 1606 computer. The ROLM computer provides information about invalid addressing, improper use of privileged instructions, stack overflows, and unimplemented instructions. Additionally, software techniques were developed to detect invalid jumps, indices out of range, infinite loops, stack underflows, and field size errors. Finally, data are saved to provide information about the status of the system when an error is detected. This information includes I/O buffers, interrupt counts, stack contents, and recently passed locations. The various errors detected, techniques to assist in debugging problems, and segment simulation on a nontarget computer are discussed. These error detection techniques were a major factor in successfully finding the primary cause of error in 98% of over 500 system dumps.

  15. Medication Errors

    MedlinePlus

    Medicines cure infectious diseases, prevent problems from chronic diseases, and ease pain. But medicines can also cause harmful reactions if not used ... You can help prevent errors by Knowing your medicines. Keep a list of the names of your ...

  16. Refractive error blindness.

    PubMed Central

    Dandona, R.; Dandona, L.

    2001-01-01

    Recent data suggest that a large number of people are blind in different parts of the world due to high refractive error because they are not using appropriate refractive correction. Refractive error as a cause of blindness has been recognized only recently with the increasing use of presenting visual acuity for defining blindness. In addition to blindness due to naturally occurring high refractive error, inadequate refractive correction of aphakia after cataract surgery is also a significant cause of blindness in developing countries. Blindness due to refractive error in any population suggests that eye care services in general in that population are inadequate since treatment of refractive error is perhaps the simplest and most effective form of eye care. Strategies such as vision screening programmes need to be implemented on a large scale to detect individuals suffering from refractive error blindness. Sufficient numbers of personnel to perform reasonable quality refraction need to be trained in developing countries. Also adequate infrastructure has to be developed in underserved areas of the world to facilitate the logistics of providing affordable reasonable-quality spectacles to individuals suffering from refractive error blindness. Long-term success in reducing refractive error blindness worldwide will require attention to these issues within the context of comprehensive approaches to reduce all causes of avoidable blindness. PMID:11285669

  17. Error Analysis and the EFL Classroom Teaching

    ERIC Educational Resources Information Center

    Xie, Fang; Jiang, Xue-mei

    2007-01-01

    This paper makes a study of error analysis and its implementation in EFL (English as a Foreign Language) classroom teaching. It starts by giving a systematic review of the concepts and theories concerning EA (Error Analysis) and then comprehensively explores the various causes of errors. The author proposes that teachers should employ…

  18. Systematizing Trial and Error Using Spreadsheets.

    ERIC Educational Resources Information Center

    Sgroi, Richard J.

    1992-01-01

    Presents two spreadsheets for middle school students applying Polya's heuristic to help develop number sense, reasoning abilities, and problem-solving skills. Spreadsheet 1, "the coin problem," allows students to vary coin quantities to total $8.32. Spreadsheet 2, "ratios," develops number relationships while finding 3 3-digit numbers in the…

  19. Systematic approach for simultaneously correcting the band-gap and p-d separation errors of common cation III-V or II-VI binaries in density functional theory calculations within a local density approximation

    DOE PAGES

    Wang, Jianwei; Zhang, Yong; Wang, Lin-Wang

    2015-07-31

    We propose a systematic approach that can empirically correct three major errors typically found in a density functional theory (DFT) calculation within the local density approximation (LDA), simultaneously, for a set of common cation binary semiconductors, such as III-V compounds, (Ga or In)X with X = N, P, As, Sb, and II-VI compounds, (Zn or Cd)X with X = O, S, Se, Te. By correcting (1) the binary band gaps at the high-symmetry points Γ, L, X, (2) the separation of p- and d-orbital-derived valence bands, and (3) conduction band effective masses to experimental values, and doing so simultaneously for common cation binaries, the resulting DFT-LDA-based quasi-first-principles method can be used to predict the electronic structure of complex materials involving multiple binaries with comparable accuracy but much less computational cost than a GW level theory. This approach provides an efficient way to evaluate the electronic structures and other material properties of complex systems, much needed for material discovery and design.

  20. Error Modeling of Multi-baseline Optical Truss. Part II; Application to SIM Metrology Truss Field Dependent Error

    NASA Technical Reports Server (NTRS)

    Zhang, Liwei Dennis; Milman, Mark; Korechoff, Robert

    2004-01-01

    The current design of the Space Interferometry Mission (SIM) employs a 19 laser-metrology-beam system (also called the L19 external metrology truss) to monitor changes of distances between the fiducials of the flight system's multiple baselines. The function of the external metrology truss is to aid in the determination of the time-variations of the interferometer baseline. The largest contributor to truss error occurs in SIM wide-angle observations when the articulation of the siderostat mirrors (in order to gather starlight from different sky coordinates) brings to light systematic errors due to offsets at the level of instrument components (which include corner cube retro-reflectors, etc.). This error is labeled external metrology wide-angle field-dependent error. A physics-based model of the field-dependent error at the single-metrology-gauge level is developed and linearly propagated to errors in interferometer delay. In this manner delay error sensitivity to various error parameters or their combinations can be studied using eigenvalue/eigenvector analysis. Validation of the physics-based field-dependent model on the SIM testbed also lends support to the present approach. As a first example, a dihedral error model is developed for the corner cubes (CC) attached to the siderostat mirrors. The delay errors due to this effect can then be characterized using the eigenvectors of the composite CC dihedral error. The essence of the linear error model is contained in an error-mapping matrix. A corresponding Zernike component matrix approach is developed in parallel, first for convenience of describing the RMS of errors across the field-of-regard (FOR), and second for convenience of combining with additional models. Average and worst case residual errors are computed when various orders of field-dependent terms are removed from the delay error. Results of the residual errors are important in arriving at external metrology system component requirements. Double CCs with ideally co-incident vertices

  1. TU-C-BRE-08: IMRT QA: Selecting Meaningful Gamma Criteria Based On Error Detection Sensitivity

    SciTech Connect

    Steers, J; Fraass, B

    2014-06-15

    Purpose: To develop a strategy for defining meaningful tolerance limits and studying the sensitivity of IMRT QA gamma criteria by inducing known errors in QA plans. Methods: IMRT QA measurements (ArcCHECK, Sun Nuclear) were compared to QA plan calculations with induced errors. Many (>24) gamma comparisons between data and calculations were performed for each of several kinds of cases and classes of induced error types with varying magnitudes (e.g. MU errors ranging from -10% to +10%), resulting in over 3,000 comparisons. Gamma passing rates for each error class and case were graphed against error magnitude to create error curves in order to represent the range of missed errors in routine IMRT QA using various gamma criteria. Results: This study demonstrates that random, case-specific, and systematic errors can be detected by the error curve analysis. Depending on location of the peak of the error curve (e.g., not centered about zero), 3%/3mm threshold=10% criteria may miss MU errors of up to 10% and random MLC errors of up to 5 mm. Additionally, using larger dose thresholds for specific devices may increase error sensitivity (for the same X%/Ymm criteria) by up to a factor of two. This analysis will allow clinics to select more meaningful gamma criteria based on QA device, treatment techniques, and acceptable error tolerances. Conclusion: We propose a strategy for selecting gamma parameters based on the sensitivity of gamma criteria and individual QA devices to induced calculation errors in QA plans. Our data suggest large errors may be missed using conventional gamma criteria and that using stricter criteria with an increased dose threshold may reduce the range of missed errors. This approach allows quantification of gamma criteria sensitivity and is straightforward to apply to other combinations of devices and treatment techniques.
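
    The error-curve idea, a gamma passing rate plotted against the magnitude of an induced error, can be illustrated with a toy one-dimensional gamma computation on a synthetic profile. Everything below is an assumption-laden stand-in for the authors' full 3-D QA analysis: a Gaussian dose profile, a simplified global gamma with a 10% dose threshold, and MU scaling as the induced error class.

    ```python
    import numpy as np

    def gamma_pass_rate(ref, ev, dx, dose_tol=0.03, dta_tol=3.0, thresh=0.10):
        """Simplified 1-D global gamma; dx and dta_tol in mm."""
        gmax = ref.max()
        idx = np.arange(len(ref))
        passed = total = 0
        for i, d_ref in enumerate(ref):
            if d_ref < thresh * gmax:    # dose threshold: skip low-dose points
                continue
            dist = (idx - i) * dx
            gam2 = (dist / dta_tol) ** 2 + ((ev - d_ref) / (dose_tol * gmax)) ** 2
            passed += gam2.min() <= 1.0
            total += 1
        return 100.0 * passed / total

    x = np.linspace(-50, 50, 201)        # mm
    ref = np.exp(-x**2 / (2 * 15.0**2))  # toy dose profile

    # Error curve: passing rate versus induced MU (overall scaling) error.
    for mu_err in np.arange(-0.10, 0.11, 0.02):
        ev = ref * (1.0 + mu_err)
        rate = gamma_pass_rate(ref, ev, dx=x[1] - x[0])
        print(f"MU error {mu_err:+.0%}: pass rate {rate:5.1f}%")
    ```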

  2. The need for annual echocardiography to detect cabergoline-associated valvulopathy in patients with prolactinoma: a systematic review and additional clinical data.

    PubMed

    Caputo, Carmela; Prior, David; Inder, Warrick J

    2015-11-01

    Present recommendations by the US Food and Drug Administration advise that patients with prolactinoma treated with cabergoline should have an annual echocardiogram to screen for valvular heart disease. Here, we present new clinical data and a systematic review of the scientific literature showing that the prevalence of cabergoline-associated valvulopathy is very low. We prospectively assessed 40 patients with prolactinoma taking cabergoline. Cardiovascular examination before echocardiography detected an audible systolic murmur in 10% of cases (all were functional murmurs), and no clinically significant valvular lesion was shown on echocardiogram in the 90% of patients without a murmur. Our systematic review identified 21 studies that assessed the presence of valvular abnormalities in patients with prolactinoma treated with cabergoline. Including our new clinical data, only two (0·11%) of 1811 patients were confirmed to have cabergoline-associated valvulopathy (three [0·17%] if possible cases were included). The probability of clinically significant valvular heart disease is low in the absence of a murmur. On the basis of these findings, we challenge the present recommendations to do routine echocardiography in all patients taking cabergoline for prolactinoma every 12 months. We propose that such patients should be screened by a clinical cardiovascular examination and that echocardiogram should be reserved for those patients with an audible murmur, those treated for more than 5 years at a dose of more than 3 mg per week, or those who maintain cabergoline treatment after the age of 50 years.

  3. Skylab water balance error analysis

    NASA Technical Reports Server (NTRS)

    Leonard, J. I.

    1977-01-01

    Estimates of the precision of the net water balance were obtained for the entire Skylab preflight and inflight phases as well as for the first two weeks of flight. Quantitative estimates of both total sampling errors and instrumentation errors were obtained. It was shown that measurement error is minimal in comparison to biological variability and little can be gained from improvement in analytical accuracy. In addition, a propagation of error analysis demonstrated that total water balance error could be accounted for almost entirely by the errors associated with body mass changes. Errors due to interaction between terms in the water balance equation (covariances) represented less than 10% of the total error. Overall, the analysis provides evidence that daily measurements of body water changes obtained from the indirect balance technique are reasonable, precise, and reliable. The method is not biased toward net retention or loss.

  4. BFC: correcting Illumina sequencing errors

    PubMed Central

    2015-01-01

    Summary: BFC is a free, fast and easy-to-use sequencing error corrector designed for Illumina short reads. It uses a non-greedy algorithm but still maintains a speed comparable to implementations based on greedy methods. In evaluations on real data, BFC appears to correct more errors with fewer overcorrections in comparison to existing tools. It particularly does well in suppressing systematic sequencing errors, which helps to improve the base accuracy of de novo assemblies. Availability and implementation: https://github.com/lh3/bfc Contact: hengli@broadinstitute.org Supplementary information: Supplementary data are available at Bioinformatics online. PMID:25953801

  5. Cardinality Balanced Multi-Target Multi-Bernoulli Filter with Error Compensation

    PubMed Central

    He, Xiangyu; Liu, Guixi

    2016-01-01

    The cardinality balanced multi-target multi-Bernoulli (CBMeMBer) filter developed recently has proved to be an effective multi-target tracking (MTT) algorithm based on random finite set (RFS) theory, and it can jointly estimate the number of targets and their states from a sequence of sensor measurement sets. However, because of systematic errors in the sensor measurements, the CBMeMBer filter can suffer varying degrees of performance degradation. In this paper, an extended CBMeMBer filter, in which the joint probability density function of target state and systematic error is recursively estimated, is proposed to address the MTT problem based on sensor measurements with systematic errors. In addition, an analytic implementation of the extended CBMeMBer filter is presented for linear Gaussian models. Simulation results confirm that the proposed algorithm can track multiple targets with better performance. PMID:27589764
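
    As a rough illustration of the underlying idea, the sketch below augments the state of a plain single-target Kalman filter with an unknown constant sensor bias and estimates both jointly. None of the multi-Bernoulli machinery is reproduced; the motion model, the noise levels, and the assumption that the initial target position is well known (which is what makes the bias observable here) are all invented for the example.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    dt = 1.0
    F = np.array([[1.0, dt], [0.0, 1.0]])            # constant-velocity target
    Fa = np.block([[F, np.zeros((2, 1))],            # augmented with a bias
                   [np.zeros((1, 2)), np.eye(1)]])   # modelled as constant
    Ha = np.array([[1.0, 0.0, 1.0]])                 # z = position + bias + noise

    Q = np.diag([0.01, 0.01, 1e-8])                  # process noise (invented)
    R = np.array([[4.0]])                            # measurement noise (sd 2 m)

    # Tight prior on initial position, wide prior on the bias: this is what
    # lets the filter attribute the constant measurement offset to the bias.
    x = np.array([0.0, 1.0, 0.0])
    P = np.diag([0.01, 1.0, 25.0])

    truth, bias = np.array([0.0, 1.0]), 5.0          # 5 m systematic offset
    for _ in range(50):
        truth = F @ truth
        z = truth[0] + bias + rng.normal(0.0, 2.0)
        x = Fa @ x                                   # Kalman predict
        P = Fa @ P @ Fa.T + Q
        S = Ha @ P @ Ha.T + R                        # Kalman update
        K = P @ Ha.T @ np.linalg.inv(S)
        x = x + K[:, 0] * (z - (Ha @ x)[0])
        P = (np.eye(3) - K @ Ha) @ P

    print(f"estimated bias: {x[2]:.2f} m (true 5.00 m)")
    ```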

  8. Errors in airborne flux measurements

    NASA Astrophysics Data System (ADS)

    Mann, Jakob; Lenschow, Donald H.

    1994-07-01

    We present a general approach for estimating systematic and random errors in eddy correlation fluxes and flux gradients measured by aircraft in the convective boundary layer as a function of the length of the flight leg, or of the cutoff wavelength of a highpass filter. The estimates are obtained from empirical expressions for various length scales in the convective boundary layer and they are experimentally verified using data from the First ISLSCP (International Satellite Land Surface Climatology Experiment) Field Experiment (FIFE), the Air Mass Transformation Experiment (AMTEX), and the Electra Radome Experiment (ELDOME). We show that the systematic flux and flux gradient errors can be important if fluxes are calculated from a set of several short flight legs or if the vertical velocity and scalar time series are high-pass filtered. While the systematic error of the flux is usually negative, that of the flux gradient can change sign. For example, for temperature flux divergence the systematic error changes from negative to positive about a quarter of the way up in the convective boundary layer.
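
    The negative systematic flux error from short legs can be reproduced with synthetic data: give vertical velocity and a scalar a shared, correlated large-eddy component, then compute the eddy-correlation flux from legs of different lengths. The AR(1) signal, its 500 m integral scale and the noise levels below are illustrative choices, not the empirical expressions used in the paper.

    ```python
    import numpy as np
    from scipy.signal import lfilter

    rng = np.random.default_rng(2)

    # Synthetic large-eddy signal shared by vertical velocity w and scalar c:
    # an AR(1) series with integral scale L_int, sampled every dx metres.
    n, dx, L_int = 2**19, 5.0, 500.0
    alpha = np.exp(-dx / L_int)
    big = lfilter([np.sqrt(1 - alpha**2)], [1.0, -alpha], rng.normal(size=n))
    w = big + 0.5 * rng.normal(size=n)
    c = big + 0.5 * rng.normal(size=n)

    true_flux = np.cov(w, c)[0, 1]                   # long-record reference

    # Eddy-correlation flux from legs of different lengths: using the leg mean
    # removes part of the large eddies, biasing the flux systematically low.
    for leg_m in (1_000, 5_000, 25_000, 100_000):
        leg = int(leg_m / dx)
        segs = n // leg
        f = [np.cov(w[i*leg:(i+1)*leg], c[i*leg:(i+1)*leg])[0, 1]
             for i in range(segs)]
        print(f"leg {leg_m:>7} m: relative flux error "
              f"{(np.mean(f) - true_flux) / true_flux:+.1%}")
    ```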

  9. Theory of Test Translation Error

    ERIC Educational Resources Information Center

    Solano-Flores, Guillermo; Backhoff, Eduardo; Contreras-Nino, Luis Angel

    2009-01-01

    In this article, we present a theory of test translation whose intent is to provide the conceptual foundation for effective, systematic work in the process of test translation and test translation review. According to the theory, translation error is multidimensional; it is not simply the consequence of defective translation but an inevitable fact…

  10. Tropical errors and convection

    NASA Astrophysics Data System (ADS)

    Bechtold, P.; Bauer, P.; Engelen, R. J.

    2012-12-01

    Tropical convection is analysed in the ECMWF Integrated Forecast System (IFS) through tropical errors and their evolution during the last decade as a function of model resolution and model changes. As the characterization of these errors is particularly difficult over tropical oceans due to sparse in situ upper-air data, more weight compared to the middle latitudes is given in the analysis to the underlying forecast model. Therefore, special attention is paid to available near-surface observations and to comparison with analysis from other Centers. There is a systematic lack of low-level wind convergence in the Inner Tropical Convergence Zone (ITCZ) in the IFS, leading to a spindown of the Hadley cell. Critical areas with strong cross-equatorial flow and large wind errors are the Indian Ocean with large interannual variations in forecast errors, and the East Pacific with persistent systematic errors that have evolved little during the last decade. The analysis quality in the East Pacific is affected by observation errors inherent to the atmospheric motion vector wind product. The model's tropical climate and its variability and teleconnections are also evaluated, with a particular focus on the Madden-Julian Oscillation (MJO) during the Year of Tropical Convection (YOTC). The model is shown to reproduce the observed tropical large-scale wave spectra and teleconnections, but overestimates the precipitation during the South-East Asian summer monsoon. The recent improvements in tropical precipitation, convectively coupled wave and MJO predictability are shown to be strongly related to improvements in the convection parameterization that realistically represents the convection sensitivity to environmental moisture, and the large-scale forcing due to the use of strong entrainment and a variable adjustment time-scale. There is however a remaining slight moistening tendency and low-level wind imbalance in the model that is responsible for the Asian Monsoon bias and for too

  11. Quantum Error Correction

    NASA Astrophysics Data System (ADS)

    Lidar, Daniel A.; Brun, Todd A.

    2013-09-01

    Prologue; Preface; Part I. Background: 1. Introduction to decoherence and noise in open quantum systems Daniel Lidar and Todd Brun; 2. Introduction to quantum error correction Dave Bacon; 3. Introduction to decoherence-free subspaces and noiseless subsystems Daniel Lidar; 4. Introduction to quantum dynamical decoupling Lorenza Viola; 5. Introduction to quantum fault tolerance Panos Aliferis; Part II. Generalized Approaches to Quantum Error Correction: 6. Operator quantum error correction David Kribs and David Poulin; 7. Entanglement-assisted quantum error-correcting codes Todd Brun and Min-Hsiu Hsieh; 8. Continuous-time quantum error correction Ognyan Oreshkov; Part III. Advanced Quantum Codes: 9. Quantum convolutional codes Mark Wilde; 10. Non-additive quantum codes Markus Grassl and Martin Rötteler; 11. Iterative quantum coding systems David Poulin; 12. Algebraic quantum coding theory Andreas Klappenecker; 13. Optimization-based quantum error correction Andrew Fletcher; Part IV. Advanced Dynamical Decoupling: 14. High order dynamical decoupling Zhen-Yu Wang and Ren-Bao Liu; 15. Combinatorial approaches to dynamical decoupling Martin Rötteler and Pawel Wocjan; Part V. Alternative Quantum Computation Approaches: 16. Holonomic quantum computation Paolo Zanardi; 17. Fault tolerance for holonomic quantum computation Ognyan Oreshkov, Todd Brun and Daniel Lidar; 18. Fault tolerant measurement-based quantum computing Debbie Leung; Part VI. Topological Methods: 19. Topological codes Héctor Bombín; 20. Fault tolerant topological cluster state quantum computing Austin Fowler and Kovid Goyal; Part VII. Applications and Implementations: 21. Experimental quantum error correction Dave Bacon; 22. Experimental dynamical decoupling Lorenza Viola; 23. Architectures Jacob Taylor; 24. Error correction in quantum communication Mark Wilde; Part VIII. Critical Evaluation of Fault Tolerance: 25. Hamiltonian methods in QEC and fault tolerance Eduardo Novais, Eduardo Mucciolo and

  12. Neural network calibration of a snapshot birefringent Fourier transform spectrometer with periodic phase errors.

    PubMed

    Luo, David; Kudenov, Michael W

    2016-05-16

    Systematic phase errors in Fourier transform spectroscopy can severely degrade the calculated spectra. Compensation of these errors is typically accomplished using post-processing techniques, such as Fourier deconvolution, linear unmixing, or iterative solvers. This results in increased computational complexity when reconstructing and calibrating many parallel interference patterns. In this paper, we describe a new method of calibrating a Fourier transform spectrometer based on the use of artificial neural networks (ANNs). In this way, it is demonstrated that a simpler and more straightforward reconstruction process can be achieved at the cost of additional calibration equipment. To this end, we provide a theoretical model for general systematic phase errors in a polarization birefringent interferometer. This is followed by a discussion of our experimental setup and a demonstration of our technique, as applied to data with and without phase error. The technique's utility is then supported by comparison to alternative reconstruction techniques using fast Fourier transforms (FFTs) and linear unmixing.
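
    A minimal sketch of the idea, not the authors' setup: train a small multilayer perceptron to map phase-distorted interferograms directly to spectra, so that reconstruction needs no explicit phase correction. The cosine-transform forward model, the periodic phase error phi, and the Gaussian-line training spectra are all assumptions made for the example.

    ```python
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(3)
    n_nu, n_opd = 64, 64
    nu = np.linspace(0.05, 0.45, n_nu)     # frequency axis (cycles per step)
    opd = np.arange(n_opd)                 # optical path difference steps

    def interferogram(spec):
        # Cosine transform distorted by a hypothetical periodic phase error.
        phi = 0.3 * np.sin(2 * np.pi * opd / 17.0)
        return np.cos(2 * np.pi * np.outer(opd, nu) + phi[:, None]) @ spec

    # Training pairs: random Gaussian emission lines and their interferograms.
    X, Y = [], []
    for _ in range(2000):
        c = rng.uniform(0.1, 0.4)
        wdt = rng.uniform(0.01, 0.05)
        amp = rng.uniform(0.5, 2.0)
        spec = amp * np.exp(-(nu - c)**2 / (2 * wdt**2))
        X.append(interferogram(spec))
        Y.append(spec)

    net = MLPRegressor(hidden_layer_sizes=(128,), max_iter=1000, random_state=0)
    net.fit(np.array(X), np.array(Y))

    # The trained network maps a raw, phase-distorted interferogram directly
    # to a spectrum, standing in for FFT reconstruction plus phase correction.
    test = np.exp(-(nu - 0.25)**2 / (2 * 0.03**2))
    pred = net.predict(interferogram(test)[None, :])[0]
    print(f"max reconstruction error: {np.abs(pred - test).max():.3f}")
    ```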

  14. A systematic review of image segmentation methodology, used in the additive manufacture of patient-specific 3D printed models of the cardiovascular system

    PubMed Central

    Byrne, N; Velasco Forte, M; Tandon, A; Valverde, I

    2016-01-01

    Background Shortcomings in existing methods of image segmentation preclude the widespread adoption of patient-specific 3D printing as a routine decision-making tool in the care of those with congenital heart disease. We sought to determine the range of cardiovascular segmentation methods and how long each of these methods takes. Methods A systematic review of literature was undertaken. Medical imaging modality, segmentation methods, segmentation time, segmentation descriptive quality (SDQ) and segmentation software were recorded. Results In total, 136 studies met the inclusion criteria (1 clinical trial; 80 journal articles; 55 conference, technical and case reports). The most frequently used image segmentation methods were brightness thresholding, region growing and manual editing, as supported by the most popular piece of proprietary software: Mimics (Materialise NV, Leuven, Belgium, 1992–2015). The use of bespoke software developed by individual authors was not uncommon. SDQ indicated that reporting of image segmentation methods was generally poor, with only one in three accounts providing sufficient detail for their procedure to be reproduced. Conclusions and implications of key findings Predominantly anecdotal and case reporting precluded rigorous assessment of risk of bias and strength of evidence. This review finds a reliance on manual and semi-automated segmentation methods which demand a high level of expertise and a significant time commitment on the part of the operator. In light of the findings, we have made recommendations regarding reporting of 3D printing studies. We anticipate that these findings will encourage the development of advanced image segmentation methods. PMID:27170842

  15. Efficacy of additional psychosocial intervention in reducing low birth weight and preterm birth in teenage pregnancy: A systematic review and meta-analysis.

    PubMed

    Sukhato, Kanokporn; Wongrathanandha, Chathaya; Thakkinstian, Ammarin; Dellow, Alan; Horsuwansak, Pornpot; Anothaisintawee, Thunyarat

    2015-10-01

    This systematic review aimed to assess the efficacy of psychosocial interventions in reducing the risk of low birth weight (LBW) and preterm birth (PTB) in teenage pregnancy. Relevant studies were identified from the Medline, Scopus, CINAHL, and CENTRAL databases. Randomized controlled trials investigating the effect of psychosocial interventions on the risk of LBW and PTB, compared to routine antenatal care (ANC), were eligible. Relative risks (RR) of LBW and PTB were pooled using the inverse variance method. Mean differences in birth weight (BW) between intervention and control groups were pooled using the unstandardized mean difference (USMD). Five studies were included in the review. Compared with routine ANC, psychosocial interventions significantly reduced the risk of LBW by 40% (95% CI: 8%, 62%) but not of PTB (pooled RR = 0.67, 95% CI: 0.42, 1.05). Mean BW of the intervention group was significantly higher than that of the control group, with a USMD of 200.63 g (95% CI: 21.02, 380.25). Results of our study suggest that psychosocial interventions significantly reduced the risk of LBW in teenage pregnancy.
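
    The inverse variance pooling used for the relative risks works on the log scale, with each study weighted by the reciprocal of its variance. A minimal fixed-effect sketch with made-up study results (not the five trials in this review):

    ```python
    import numpy as np

    # Per-study risk ratios with 95% CIs (illustrative numbers only).
    rr = np.array([0.55, 0.70, 0.48, 0.81])
    lo = np.array([0.30, 0.45, 0.25, 0.55])
    hi = np.array([0.99, 1.09, 0.92, 1.19])

    log_rr = np.log(rr)
    se = (np.log(hi) - np.log(lo)) / (2 * 1.96)   # SE recovered from CI width
    w = 1.0 / se**2                               # inverse-variance weights

    pooled = np.sum(w * log_rr) / np.sum(w)       # fixed-effect pooled log RR
    se_pooled = np.sqrt(1.0 / np.sum(w))
    ci = np.exp(pooled + np.array([-1.96, 1.96]) * se_pooled)
    print(f"pooled RR {np.exp(pooled):.2f} (95% CI {ci[0]:.2f} to {ci[1]:.2f})")
    ```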

  16. Error growth in operational ECMWF forecasts

    NASA Technical Reports Server (NTRS)

    Kalnay, E.; Dalcher, A.

    1985-01-01

    A parameterization scheme used at the European Centre for Medium-Range Weather Forecasts (ECMWF) to model the average growth of the difference between forecasts on consecutive days was extended by including the effect of forecast model deficiencies on error growth. Error was defined as the difference between the forecast and analysis fields at verification time. Systematic and random errors were considered separately in calculating the error variance for a 10 day operational forecast. A good fit was obtained with measured forecast errors, and a satisfactory trend was achieved in the difference between forecasts. Fitting the six parameters to forecast errors and differences, performed separately for each wavenumber, revealed that the error growth rate grew with wavenumber. The saturation error decreased with the total wavenumber, and the limit of predictability, i.e., the time when error variance reaches 95 percent of saturation, decreased monotonically with the total wavenumber.
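
    Error-growth parameterizations of this kind are often written as a logistic law in the error variance E, with a growth-rate term and a source term for model deficiencies. A minimal sketch with illustrative coefficients (not the fitted ECMWF values), integrating dE/dt = (aE + s)(1 - E/E_inf) until the 95% saturation level that defines the limit of predictability:

    ```python
    # Logistic error-growth law: dE/dt = (a*E + s) * (1 - E/E_inf), where a is
    # the growth rate, s represents model deficiencies (error growth even from
    # a perfect initial state) and E_inf is the saturation variance.
    a, s, E_inf = 0.35, 0.02, 1.0    # per day; illustrative, not fitted values
    dt, E, t = 0.01, 0.01, 0.0       # time step (days), initial error, time

    while E < 0.95 * E_inf:          # 95% of saturation: predictability limit
        E += dt * (a * E + s) * (1 - E / E_inf)
        t += dt

    print(f"limit of predictability: {t:.1f} days")
    ```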

  17. Disclosure of "nonharmful" medical errors and other events: duty to disclose.

    PubMed

    Chamberlain, Catherine J; Koniaris, Leonidas G; Wu, Albert W; Pawlik, Timothy M

    2012-03-01

    An estimated 98 000 patients die in the United States each year because of medical errors. One million or more total medical errors are estimated to occur annually, which is far greater than the actual number of reported "harmful" mistakes. Although it is generally agreed that harmful errors must be disclosed to patients, when the error is deemed to have not resulted in a harmful event, physicians are less inclined to disclose it. Little has been written about the handling of near misses or "nonharmful" errors, and the issues related to disclosure of such events have rarely been discussed in medicine, although they are routinely addressed within the aviation industry. Herein, we elucidate the arguments for reporting nonharmful medical errors to patients and to reporting systems. A definition of what constitutes harm is explored, as well as the ethical issues underpinning disclosure of nonharmful errors. In addition, systematic institutional implications of reporting nonharmful errors are highlighted. Full disclosure of nonharmful errors is advocated, and recommendations on how to discuss errors with patients are provided. An argument that full error disclosure may improve future patient care is also outlined.

  18. Discretization vs. Rounding Error in Euler's Method

    ERIC Educational Resources Information Center

    Borges, Carlos F.

    2011-01-01

    Euler's method for solving initial value problems is an excellent vehicle for observing the relationship between discretization error and rounding error in numerical computation. Reductions in stepsize, in order to decrease discretization error, necessarily increase the number of steps and so introduce additional rounding error. The problem is…
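
    The trade-off is easy to reproduce: for y' = y with y(0) = 1, Euler's method in single precision shows the total error at t = 1 first shrinking with the stepsize h = 1/n and then stalling or growing as accumulated rounding error takes over. A minimal sketch (float32 is used only to make rounding visible at modest n):

    ```python
    import numpy as np

    # Euler's method for y' = y, y(0) = 1; the exact value at t = 1 is e.
    for n in (10, 100, 1_000, 10_000, 100_000, 1_000_000):
        h = np.float32(1.0 / n)
        y = np.float32(1.0)
        for _ in range(n):
            y = y + h * y              # one Euler step in single precision
        print(f"n = {n:>9}: |error at t=1| = {abs(float(y) - np.e):.2e}")
    ```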

  19. Error handling strategies in multiphase inverse modeling

    SciTech Connect

    Finsterle, S.; Zhang, Y.

    2010-12-01

    Parameter estimation by inverse modeling involves the repeated evaluation of a function of residuals. These residuals represent both errors in the model and errors in the data. In practical applications of inverse modeling of multiphase flow and transport, the error structure of the final residuals often significantly deviates from the statistical assumptions that underlie standard maximum likelihood estimation using the least-squares method. Large random or systematic errors are likely to lead to convergence problems, biased parameter estimates, misleading uncertainty measures, or poor predictive capabilities of the calibrated model. The multiphase inverse modeling code iTOUGH2 supports strategies that identify and mitigate the impact of systematic or non-normal error structures. We discuss these approaches and provide an overview of the error handling features implemented in iTOUGH2.
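
    One generic error handling strategy of this kind, not specifically iTOUGH2's implementation, is to replace the plain least-squares objective with a robust loss so that large, non-normal residuals stop dominating the fit. A sketch using SciPy's Huber option, with an invented exponential model and artificial outliers:

    ```python
    import numpy as np
    from scipy.optimize import least_squares

    rng = np.random.default_rng(4)

    # Synthetic exponential-decay data with Gaussian noise plus gross outliers,
    # i.e. residuals that violate the normality assumed by plain least squares.
    t = np.linspace(0.0, 10.0, 60)
    y = 5.0 * np.exp(-0.4 * t) + rng.normal(0.0, 0.05, t.size)
    y[::15] += 1.5                                 # a few systematic spikes

    def residuals(p):
        return p[0] * np.exp(-p[1] * t) - y

    plain = least_squares(residuals, x0=[1.0, 1.0])
    robust = least_squares(residuals, x0=[1.0, 1.0], loss="huber", f_scale=0.1)

    print("plain least squares:", plain.x)    # pulled away by the outliers
    print("Huber loss:         ", robust.x)   # close to the true (5.0, 0.4)
    ```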

  20. Efficacy and Safety Assessment of the Addition of Bevacizumab to Adjuvant Therapy Agents in Cancer Patients: A Systematic Review and Meta-Analysis of Randomized Controlled Trials

    PubMed Central

    Ahmadizar, Fariba; Onland-Moret, N. Charlotte; de Boer, Anthonius; Liu, Geoffrey; Maitland-van der Zee, Anke H.

    2015-01-01

    Aim To evaluate the efficacy and safety of bevacizumab in the adjuvant cancer therapy setting within different subsets of patients. Methods & Design/Results PubMed, EMBASE, Cochrane and ClinicalTrials.gov databases were searched for English language studies of randomized controlled trials comparing bevacizumab and adjuvant therapy with adjuvant therapy alone published from January 1966 to 7th of May 2014. Progression free survival, overall survival, overall response rate, safety and quality of life were analyzed using random- or fixed-effects models according to the PRISMA guidelines. We obtained data from 44 randomized controlled trials (30,828 patients). Combining bevacizumab with different adjuvant therapies resulted in significant improvement of progression free survival (log hazard ratio, 0.87; 95% confidence interval (CI), 0.84–0.89), overall survival (log hazard ratio, 0.96; 95% CI, 0.94–0.98) and overall response rate (relative risk, 1.46; 95% CI: 1.33–1.59) compared to adjuvant therapy alone in all studied tumor types. In subgroup analyses, there were no interactions of bevacizumab with baseline characteristics on progression free survival and overall survival, while overall response rate was influenced by tumor type and bevacizumab dose (p-value: 0.02). Although bevacizumab use resulted in additional expected adverse drug reactions except anemia and fatigue, it was not associated with a significant decline in quality of life. There was a trend towards a higher risk of several side effects in patients treated by high-dose bevacizumab compared to the low-dose e.g. all grade proteinuria (9.24; 95% CI: 6.60–12.94 vs. 2.64; 95% CI: 1.29–5.40). Conclusions Combining bevacizumab with different adjuvant therapies provides a survival benefit across all major subsets of patients, including by tumor type, type of adjuvant therapy, and duration and dose of bevacizumab therapy. Though bevacizumab was associated with increased risks of some adverse drug

  1. Additional use of an aldosterone antagonist in patients with mild to moderate chronic heart failure: a systematic review and meta-analysis

    PubMed Central

    Hu, Li-jun; Chen, Yun-qing; Deng, Song-bai; Du, Jian-lin; She, Qiang

    2013-01-01

    Aims Aldosterone antagonists (AldoAs) have been used to treat severe chronic heart failure (CHF). There is uncertainty regarding the efficacy of using AldoAs in mild to moderate CHF with New York Heart Association (NYHA) classifications of I to II. This study summarizes the evidence for the efficacy of spironolactone (SP), eplerenone (EP) and canrenone in mild to moderate CHF patients. Methods PubMed, MEDLINE, EMBASE and OVID databases were searched before June 2012 for randomized and quasi-randomized controlled trials assessing AldoA treatment in CHF patients with NYHA classes I to II. Data concerning the study's design, patients' characteristics and outcomes were extracted. Risk ratio (RR) and weighted mean differences (WMD) or standardized mean difference were calculated using either fixed or random effects models. Results Eight trials involving 3929 CHF patients were included. AldoAs were superior to the control in all cause mortality (RR 0.79, 95% CI 0.66, 0.95) and in re-hospitalization for cardiac causes (RR 0.62, 95% CI 0.52, 0.74); the left ventricular ejection fraction was improved by AldoA treatment (WMD 2.94%, P = 0.52). Moreover, AldoA therapy decreased the left ventricular end-diastolic volume (WMD −14.04 ml, P < 0.00001) and the left ventricular end-systolic volume (WMD −14.09 ml, P < 0.00001). A stratified analysis showed a statistical superiority in the benefits of SP over EP in reducing LVEDV and LVESV. AldoAs reduced B-type natriuretic peptide concentrations (WMD −37.76 pg ml−1, P < 0.00001), increased serum creatinine (WMD 8.69 μmol l−1, P = 0.0003) and the occurrence of hyperkalaemia (RR 1.78, 95% CI 1.43, 2.23). Conclusions Additional use of AldoAs in CHF patients may decrease mortality and re-hospitalization for cardiac reasons, improve cardiac function and simultaneously ameliorate LV reverse remodelling. PMID:23088367

  2. [Diagnostic Errors in Medicine].

    PubMed

    Buser, Claudia; Bankova, Andriyana

    2015-12-01

    The recognition of diagnostic errors in everyday practice can help improve patient safety. The most common diagnostic errors are the cognitive errors, followed by system-related errors and no fault errors. The cognitive errors often result from mental shortcuts, known as heuristics. The rate of cognitive errors can be reduced by a better understanding of heuristics and the use of checklists. The autopsy as a retrospective quality assessment of clinical diagnosis has a crucial role in learning from diagnostic errors. Diagnostic errors occur more often in primary care in comparison to hospital settings. On the other hand, the inpatient errors are more severe than the outpatient errors.

  4. Sepsis: Medical errors in Poland.

    PubMed

    Rorat, Marta; Jurek, Tomasz

    2016-01-01

    Health, safety and medical errors are currently the subject of worldwide discussion. The authors analysed medico-legal opinions trying to determine types of medical errors and their impact on the course of sepsis. The authors carried out a retrospective analysis of 66 medico-legal opinions issued by the Wroclaw Department of Forensic Medicine between 2004 and 2013 (at the request of the prosecutor or court) in cases examined for medical errors. Medical errors were confirmed in 55 of the 66 medico-legal opinions. The age of victims varied from 2 weeks to 68 years; 49 patients died. The analysis revealed medical errors committed by 113 health-care workers: 98 physicians, 8 nurses and 8 emergency medical dispatchers. In 33 cases, an error was made before hospitalisation. Hospital errors occurred in 35 victims. Diagnostic errors were discovered in 50 patients, including 46 cases of sepsis being incorrectly recognised and insufficient diagnoses in 37 cases. Therapeutic errors occurred in 37 victims, organisational errors in 9 and technical errors in 2. In addition to sepsis, 8 patients also had a severe concomitant disease and 8 had a chronic disease. In 45 cases, the authors observed glaring errors, which could incur criminal liability. There is an urgent need to introduce a system for reporting and analysing medical errors in Poland. The development and popularisation of standards for identifying and treating sepsis across basic medical professions is essential to improve patient safety and survival rates. Procedures should be introduced to prevent health-care workers from administering incorrect treatment in such cases.

  5. Quantifying errors without random sampling

    PubMed Central

    Phillips, Carl V; LaPole, Luwanna M

    2003-01-01

    Background All quantifications of mortality, morbidity, and other health measures involve numerous sources of error. The routine quantification of random sampling error makes it easy to forget that other sources of error can and should be quantified. When a quantification does not involve sampling, error is almost never quantified and results are often reported in ways that dramatically overstate their precision. Discussion We argue that the precision implicit in typical reporting is problematic and sketch methods for quantifying the various sources of error, building up from simple examples that can be solved analytically to more complex cases. There are straightforward ways to partially quantify the uncertainty surrounding a parameter that is not characterized by random sampling, such as limiting reported significant figures. We present simple methods for doing such quantifications, and for incorporating them into calculations. More complicated methods become necessary when multiple sources of uncertainty must be combined. We demonstrate that Monte Carlo simulation, using available software, can estimate the uncertainty resulting from complicated calculations with many sources of uncertainty. We apply the method to the current estimate of the annual incidence of foodborne illness in the United States. Summary Quantifying uncertainty from systematic errors is practical. Reporting this uncertainty would more honestly represent study results, help show the probability that estimated values fall within some critical range, and facilitate better targeting of further research. PMID:12892568
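
    The Monte Carlo approach described here propagates judgment-based input distributions through the calculation and reads the uncertainty off the output percentiles. A minimal sketch in the spirit of the foodborne-illness example; every number below is made up for illustration:

    ```python
    import numpy as np

    rng = np.random.default_rng(5)
    n = 100_000

    # Each input is a distribution that encodes a judgment about systematic
    # error; all numbers are invented for illustration.
    cases_reported = rng.normal(50_000, 5_000, n)   # surveillance count
    underreport = rng.uniform(10.0, 40.0, n)        # under-reporting multiplier
    attrib_frac = rng.beta(4, 6, n)                 # fraction truly attributable

    incidence = cases_reported * underreport * attrib_frac

    lo, med, hi = np.percentile(incidence, [2.5, 50, 97.5])
    print(f"median {med:,.0f} (95% uncertainty interval {lo:,.0f} to {hi:,.0f})")
    ```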

  6. Modular error embedding

    DOEpatents

    Sandford, II, Maxwell T.; Handel, Theodore G.; Ettinger, J. Mark

    1999-01-01

    A method of embedding auxiliary information into the digital representation of host data containing noise in the low-order bits. The method applies to digital data representing analog signals, for example digital images. The method reduces the error introduced by other methods that replace the low-order bits with auxiliary information. By a substantially reverse process, the embedded auxiliary data can be retrieved easily by an authorized user through use of a digital key. The modular error embedding method includes a process to permute the order in which the host data values are processed. The method doubles the amount of auxiliary information that can be added to host data values, in comparison with bit-replacement methods for high bit-rate coding. The invention preserves human perception of the meaning and content of the host data, permitting the addition of auxiliary data in the amount of 50% or greater of the original host data.
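
    The gain over plain low-bit replacement can be seen in a few lines: to embed a symbol m with modulus M, pick the value congruent to m (mod M) that is nearest to the host value, so the distortion is at most M/2 instead of up to M-1. This is an illustrative reading of the idea, not the patented algorithm (which also permutes the processing order and uses a digital key):

    ```python
    def embed(value, symbol, M=16):
        # Nearest integer congruent to `symbol` modulo M: error at most M//2,
        # versus up to M-1 when the low bits are simply replaced.
        base = value - (value % M) + symbol
        return min((base - M, base, base + M), key=lambda c: abs(c - value))

    def extract(value, M=16):
        return value % M

    host = [200, 37, 129, 255, 64]
    payload = [5, 14, 0, 9, 7]                     # 4-bit auxiliary symbols
    stego = [embed(v, s) for v, s in zip(host, payload)]

    print("stego values:  ", stego)
    print("recovered bits:", [extract(v) for v in stego])
    print("max distortion:", max(abs(a - b) for a, b in zip(host, stego)))
    ```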

  7. Sun compass error model

    NASA Technical Reports Server (NTRS)

    Blucker, T. J.; Ferry, W. W.

    1971-01-01

    An error model is described for the Apollo 15 sun compass, a contingency navigational device. Field test data are presented along with significant results of the test. The errors reported include a random error resulting from tilt in leveling the sun compass, a random error because of observer sighting inaccuracies, a bias error because of mean tilt in compass leveling, a bias error in the sun compass itself, and a bias error because the device is leveled to the local terrain slope.

  8. Unforced errors and error reduction in tennis

    PubMed Central

    Brody, H

    2006-01-01

    Only at the highest level of tennis is the number of winners comparable to the number of unforced errors. As the average player loses many more points due to unforced errors than due to winners by an opponent, if the rate of unforced errors can be reduced, it should lead to an increase in points won. This article shows how players can improve their game by understanding and applying the laws of physics to reduce the number of unforced errors. PMID:16632568

  9. Immediate error correction process following sleep deprivation.

    PubMed

    Hsieh, Shulan; Cheng, I-Chen; Tsai, Ling-Ling

    2007-06-01

    Previous studies have suggested that one night of sleep deprivation decreases frontal lobe metabolic activity, particularly in the anterior cingulate cortex (ACC), resulting in decreased performance in various executive function tasks. This study thus attempted to address whether sleep deprivation impaired the executive function of error detection and error correction. Sixteen young healthy college students (seven women, nine men, with ages ranging from 18 to 23 years) participated in this study. Participants performed a modified letter flanker task and were instructed to make immediate error corrections on detecting performance errors. Event-related potentials (ERPs) during the flanker task were obtained using a within-subject, repeated-measure design. The error negativity or error-related negativity (Ne/ERN) and the error positivity (Pe) seen immediately after errors were analyzed. The results show that the amplitude of the Ne/ERN was reduced significantly following sleep deprivation. Reduction also occurred for error trials with subsequent correction, indicating that sleep deprivation influenced error correction ability. This study further demonstrated that the impairment in immediate error correction following sleep deprivation was confined to specific stimulus types, with both Ne/ERN and behavioral correction rates being reduced only for trials in which flanker stimuli were incongruent with the target stimulus, while the response to the target was compatible with that of the flanker stimuli following sleep deprivation. The results thus warrant future systematic investigation of the interaction between stimulus type and error correction following sleep deprivation. PMID:17542943

  10. Exposure measurement error in time-series studies of air pollution: concepts and consequences.

    PubMed Central

    Zeger, S L; Thomas, D; Dominici, F; Samet, J M; Schwartz, J; Dockery, D; Cohen, A

    2000-01-01

    Misclassification of exposure is a well-recognized inherent limitation of epidemiologic studies of disease and the environment. For many agents of interest, exposures take place over time and in multiple locations; accurately estimating the relevant exposures for an individual participant in epidemiologic studies is often daunting, particularly within the limits set by feasibility, participant burden, and cost. Researchers have taken steps to deal with the consequences of measurement error by limiting the degree of error through a study's design, estimating the degree of error using a nested validation study, and by adjusting for measurement error in statistical analyses. In this paper, we address measurement error in observational studies of air pollution and health. Because measurement error may have substantial implications for interpreting epidemiologic studies on air pollution, particularly the time-series analyses, we developed a systematic conceptual formulation of the problem of measurement error in epidemiologic studies of air pollution and then considered the consequences within this formulation. When possible, we used available relevant data to make simple estimates of measurement error effects. This paper provides an overview of measurement errors in linear regression, distinguishing two extremes of a continuum, Berkson from classical type errors, and the univariate from the multivariate predictor case. We then propose one conceptual framework for the evaluation of measurement errors in the log-linear regression used for time-series studies of particulate air pollution and mortality and identify three main components of error. We present new simple analyses of data on exposures of particulate matter < 10 microm in aerodynamic diameter from the Particle Total Exposure Assessment Methodology Study. Finally, we summarize open questions regarding measurement error and suggest the kind of additional data necessary to address them.
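
    The Berkson/classical distinction matters because the two error types bias a fitted slope differently. A small simulation, with unit variances chosen only for illustration, shows the classical case attenuating the slope while the Berkson case leaves it unbiased:

    ```python
    import numpy as np

    rng = np.random.default_rng(6)
    n, beta = 50_000, 1.0
    x = rng.normal(0.0, 1.0, n)               # true personal exposure

    # Classical error: measurement = truth + independent noise (noisy monitor).
    z_classical = x + rng.normal(0.0, 1.0, n)
    y1 = beta * x + rng.normal(0.0, 1.0, n)

    # Berkson error: truth = assigned value + independent noise (everyone is
    # assigned the ambient monitor value for their area).
    z_berkson = rng.normal(0.0, 1.0, n)
    x_true = z_berkson + rng.normal(0.0, 1.0, n)
    y2 = beta * x_true + rng.normal(0.0, 1.0, n)

    slope = lambda z, y: np.cov(z, y)[0, 1] / np.var(z)
    print(f"classical-error slope: {slope(z_classical, y1):.2f} (true 1.0, attenuated)")
    print(f"Berkson-error slope:   {slope(z_berkson, y2):.2f} (true 1.0, unbiased)")
    ```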

  11. Error in radiology.

    PubMed

    Goddard, P; Leslie, A; Jones, A; Wakeley, C; Kabala, J

    2001-10-01

    The level of error in radiology has been tabulated from articles on error and on "double reporting" or "double reading". The level of error varies depending on the radiological investigation, but the range is 2-20% for clinically significant or major error. The greatest reduction in error rates will come from changes in systems.

  12. Using data assimilation for systematic model improvement

    NASA Astrophysics Data System (ADS)

    Lang, Matthew S.; van Leeuwen, Peter Jan; Browne, Phil

    2016-04-01

    In Numerical Weather Prediction parameterisations are used to simulate missing physics in the model. These can be due to a lack of scientific understanding or a lack of computing power available to address all the known physical processes. Parameterisations are sources of large uncertainty in a model as parameter values used in these parameterisations cannot be measured directly and hence are often not well known, and the parameterisations themselves are approximations of the processes present in the true atmosphere. Whilst there are many efficient and effective methods for combined state/parameter estimation in data assimilation, such as state augmentation, these are not effective at estimating the structure of parameterisations. A new method of parameterisation estimation is proposed that uses sequential data assimilation methods to estimate errors in the numerical models at each space-time point for each model equation. These errors are then fitted to predetermined functional forms of missing physics or parameterisations, that are based upon prior information. The method picks out the functional form, or the combination of functional forms, that best fits the error structure. The prior information typically takes the form of expert knowledge. We applied the method to a one-dimensional advection model with additive model error, and it is shown that the method can accurately estimate parameterisations, with consistent error estimates. It is also demonstrated that state augmentation is not successful. The results indicate that this new method is a powerful tool in systematic model improvement.
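    The core of the method, fitting per-grid-point error estimates to a library of candidate functional forms and ranking the candidates, can be sketched as follows. This is a minimal illustration with synthetic data, not the authors' code; the state, the injected "missing physics", and the candidate library are all assumptions made for the example.

    ```python
    import numpy as np

    # Hypothetical setup: u[k] is the model state on a periodic 1-D grid and
    # err[k] is the per-grid-point model error estimated by the assimilation.
    rng = np.random.default_rng(0)
    n = 200
    x = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    u = np.sin(x) + 0.1 * rng.standard_normal(n)

    # Synthetic "truth": the missing physics is diffusion, 0.3 * u_xx,
    # observed through noisy error estimates.
    dx = x[1] - x[0]
    u_xx = (np.roll(u, -1) - 2.0 * u + np.roll(u, 1)) / dx**2
    err = 0.3 * u_xx + 0.05 * rng.standard_normal(n)

    # Library of candidate functional forms suggested by prior expert knowledge.
    u_x = (np.roll(u, -1) - np.roll(u, 1)) / (2.0 * dx)
    library = {"u": u, "u_x": u_x, "u_xx": u_xx, "u^2": u**2}

    # Fit each single candidate by least squares and rank by residual norm.
    for name, f in library.items():
        coef = np.linalg.lstsq(f[:, None], err, rcond=None)[0]
        rms = np.sqrt(np.mean((err - coef[0] * f) ** 2))
        print(f"{name:4s}  coef = {coef[0]:+.3f}  residual rms = {rms:.4f}")
    # The diffusion term u_xx should win, with coefficient close to 0.3.
    ```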

  13. A Systematic Methodology for Verifying Superscalar Microprocessors

    NASA Technical Reports Server (NTRS)

    Srivas, Mandayam; Hosabettu, Ravi; Gopalakrishnan, Ganesh

    1999-01-01

    We present a systematic approach to decompose and incrementally build the proof of correctness of pipelined microprocessors. The central idea is to construct the abstraction function by using completion functions, one per unfinished instruction, each of which specifies the effect (on the observables) of completing the instruction. In addition to avoiding the term size and case explosion problem that limits the pure flushing approach, our method helps localize errors, and also handles stages with interactive loops. The technique is illustrated on pipelined and superscalar pipelined implementations of a subset of the DLX architecture. It has also been applied to a processor with out-of-order execution.
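    A toy version of the completion-function idea can be written down directly. The sketch below (hypothetical names, an ADD-only two-stage pipeline) checks the usual commutation property: flushing via completion functions and then taking an ISA step agrees with stepping the pipeline and then flushing.

    ```python
    import random

    def isa_step(regs, instr):
        """ISA-level effect of one instruction: rd <- rs1 + rs2."""
        rd, rs1, rs2 = instr
        out = dict(regs)
        out[rd] = regs[rs1] + regs[rs2]
        return out

    def complete(regs, instr):
        """Completion function: effect on the observables of finishing
        the (single) unfinished instruction sitting in the pipeline."""
        return isa_step(regs, instr) if instr is not None else regs

    def abstract(state):
        """Abstraction function: flush by composing completion functions."""
        regs, in_flight = state
        return complete(regs, in_flight)

    def pipe_step(state, instr):
        """One pipeline cycle: retire the in-flight instruction, issue a new one."""
        regs, in_flight = state
        return (complete(regs, in_flight), instr)

    # Commutation check: abstract(pipe_step(s, i)) == isa_step(abstract(s), i)
    random.seed(1)
    for _ in range(1000):
        regs = {r: random.randrange(100) for r in "abcd"}
        in_flight = random.choice([None, ("a", "b", "c")])
        s = (regs, in_flight)
        instr = ("d", "a", "b")
        assert abstract(pipe_step(s, instr)) == isa_step(abstract(s), instr)
    print("commutation diagram holds on all sampled states")
    ```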

  14. Global Warming Estimation from MSU: Correction for Drift and Calibration Errors

    NASA Technical Reports Server (NTRS)

    Prabhakara, C.; Iacovazzi, R., Jr.; Yoo, J.-M.

    2000-01-01

    Microwave Sounding Unit (MSU) radiometer observations in Ch 2 (53.74 GHz), made in the nadir direction from sequential, sun-synchronous, polar-orbiting NOAA morning satellites (NOAA 6, 10 and 12 that have approximately 7am/7pm orbital geometry) and afternoon satellites (NOAA 7, 9, 11 and 14 that have approximately 2am/2pm orbital geometry) are analyzed in this study to derive global temperature trend from 1980 to 1998. In order to remove the discontinuities between the data of the successive satellites and to get a continuous time series, first we have used shortest possible time record of each satellite. In this way we get a preliminary estimate of the global temperature trend of 0.21 K/decade. However, this estimate is affected by systematic time-dependent errors. One such error is the instrument calibration error eo. This error can be inferred whenever there are overlapping measurements made by two satellites over an extended period of time. From the available successive satellite data we have taken the longest possible time record of each satellite to form the time series during the period 1980 to 1998 and to infer this error eo. We find eo can decrease the global temperature trend by approximately 0.07 K/decade. In addition there are systematic time-dependent errors ed and ec present in the data that are introduced by the drift in the satellite orbital geometry. ed arises from the diurnal cycle in temperature and ec is the drift-related change in the calibration of the MSU. In order to analyze the nature of these drift-related errors the multi-satellite Ch 2 data set is partitioned into am and pm subsets to create two independent time series. The error ed can be assessed in the am and pm data of Ch 2 on land and can be eliminated. Observations made in the MSU Ch 1 (50.3 GHz) support this approach. The error ec is obvious only in the difference between the pm and am observations of Ch 2 over the ocean. We have followed two different paths to assess the impact of the
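    The overlap-based inference of the calibration error eo described above amounts to differencing two records where they coexist. Below is a minimal synthetic sketch of that idea (all numbers hypothetical): estimate a constant offset from the overlap, remove it, merge, and fit a trend.

    ```python
    import numpy as np

    # Two satellites observe the same series; satellite B carries a constant
    # calibration offset relative to satellite A.  Months 40-59 overlap.
    rng = np.random.default_rng(2)
    t = np.arange(100)
    truth = 250.0 + 0.002 * t          # slow warming trend (K per month)
    sat_a = truth[:60] + 0.1 * rng.standard_normal(60)          # months 0-59
    sat_b = truth[40:] + 0.35 + 0.1 * rng.standard_normal(60)   # months 40-99, +0.35 K bias

    # Infer the calibration offset from the overlapping months.
    overlap_a = sat_a[40:60]
    overlap_b = sat_b[:20]
    e_o = np.mean(overlap_b - overlap_a)
    print(f"estimated offset e_o = {e_o:+.3f} K (true +0.350 K)")

    # Merge the records after removing the offset, averaging in the overlap.
    merged = np.empty(100)
    merged[:40] = sat_a[:40]
    merged[40:60] = 0.5 * (sat_a[40:60] + (overlap_b - e_o))
    merged[60:] = sat_b[20:] - e_o
    trend = np.polyfit(t, merged, 1)[0] * 120.0   # K/decade from monthly data
    print(f"merged trend = {trend:.3f} K/decade")
    ```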

  15. Global Warming Estimation from MSU: Correction for Drift and Calibration Errors

    NASA Technical Reports Server (NTRS)

    Prabhakara, C.; Iacovazzi, R., Jr.; Yoo, J.-M.; Einaudi, Franco (Technical Monitor)

    2000-01-01

    Microwave Sounding Unit (MSU) radiometer observations in Ch 2 (53.74 GHz), made in the nadir direction from sequential, sun-synchronous, polar-orbiting NOAA morning satellites (NOAA 6, 10 and 12 that have about 7am/7pm orbital geometry) and afternoon satellites (NOAA 7, 9, 11 and 14 that have about 2am/2pm orbital geometry) are analyzed in this study to derive global temperature trend from 1980 to 1998. In order to remove the discontinuities between the data of the successive satellites and to get a continuous time series, first we have used shortest possible time record of each satellite. In this way we get a preliminary estimate of the global temperature trend of 0.21 K/decade. However, this estimate is affected by systematic time-dependent errors. One such error is the instrument calibration error. This error can be inferred whenever there are overlapping measurements made by two satellites over an extended period of time. From the available successive satellite data we have taken the longest possible time record of each satellite to form the time series during the period 1980 to 1998 and to infer this error. We find correcting it can decrease the global temperature trend by about 0.07 K/decade. In addition there are systematic time-dependent errors present in the data that are introduced by the drift in the satellite orbital geometry: one arises from the diurnal cycle in temperature, and the other is the drift-related change in the calibration of the MSU. In order to analyze the nature of these drift-related errors the multi-satellite Ch 2 data set is partitioned into am and pm subsets to create two independent time series. The former error can be assessed in the am and pm data of Ch 2 on land and can be eliminated. Observations made in the MSU Ch 1 (50.3 GHz) support this approach. The latter error is obvious only in the difference between the pm and am observations of Ch 2 over the ocean. We have followed two different paths to assess the impact of the errors on the global temperature trend. In one path the

  16. Sources of Error in Mammalian Genetic Screens.

    PubMed

    Sack, Laura Magill; Davoli, Teresa; Xu, Qikai; Li, Mamie Z; Elledge, Stephen J

    2016-01-01

    Genetic screens are invaluable tools for dissection of biological phenomena. Optimization of such screens to enhance discovery of candidate genes and minimize false positives is thus a critical aim. Here, we report several sources of error common to pooled genetic screening techniques used in mammalian cell culture systems, and demonstrate methods to eliminate these errors. We find that reverse transcriptase-mediated recombination during retroviral replication can lead to uncoupling of molecular tags, such as DNA barcodes (BCs), from their associated library elements, leading to chimeric proviral genomes in which BCs are paired to incorrect ORFs, shRNAs, etc. This effect depends on the length of homologous sequence between unique elements, and can be minimized with careful vector design. Furthermore, we report that residual plasmid DNA from viral packaging procedures can contaminate transduced cells. These plasmids serve as additional copies of the PCR template during library amplification, resulting in substantial inaccuracies in measurement of initial reference populations for screen normalization. The overabundance of template in some samples causes an imbalance between PCR cycles of contaminated and uncontaminated samples, which results in a systematic artifactual depletion of GC-rich library elements. Elimination of contaminating plasmid DNA using the bacterial endonuclease Benzonase can restore faithful measurements of template abundance and minimize GC bias. PMID:27402361

  17. Sources of Error in Mammalian Genetic Screens

    PubMed Central

    Sack, Laura Magill; Davoli, Teresa; Xu, Qikai; Li, Mamie Z.; Elledge, Stephen J.

    2016-01-01

    Genetic screens are invaluable tools for dissection of biological phenomena. Optimization of such screens to enhance discovery of candidate genes and minimize false positives is thus a critical aim. Here, we report several sources of error common to pooled genetic screening techniques used in mammalian cell culture systems, and demonstrate methods to eliminate these errors. We find that reverse transcriptase-mediated recombination during retroviral replication can lead to uncoupling of molecular tags, such as DNA barcodes (BCs), from their associated library elements, leading to chimeric proviral genomes in which BCs are paired to incorrect ORFs, shRNAs, etc. This effect depends on the length of homologous sequence between unique elements, and can be minimized with careful vector design. Furthermore, we report that residual plasmid DNA from viral packaging procedures can contaminate transduced cells. These plasmids serve as additional copies of the PCR template during library amplification, resulting in substantial inaccuracies in measurement of initial reference populations for screen normalization. The overabundance of template in some samples causes an imbalance between PCR cycles of contaminated and uncontaminated samples, which results in a systematic artifactual depletion of GC-rich library elements. Elimination of contaminating plasmid DNA using the bacterial endonuclease Benzonase can restore faithful measurements of template abundance and minimize GC bias. PMID:27402361

  18. Errors Associated with the Direct Measurement of Radionuclides in Wounds

    SciTech Connect

    Hickman, D P

    2006-03-02

    Work in radiation areas can occasionally result in accidental wounds containing radioactive materials. When a wound is incurred within a radiological area, the presence of radioactivity in the wound needs to be confirmed to determine if additional remedial action needs to be taken. Commonly used radiation area monitoring equipment is poorly suited for measurement of radioactive material buried within the tissue of the wound. The Lawrence Livermore National Laboratory (LLNL) In Vivo Measurement Facility has constructed a portable wound counter that provides sufficient detection of radioactivity in wounds, as shown in Fig. 1. The LLNL wound measurement system is specifically designed to measure low-energy photons that are emitted from uranium and transuranium radionuclides. The portable wound counting system uses a 2.5 cm diameter by 1 mm thick NaI(Tl) detector. The detector is connected to a Canberra NaI InSpector™. The InSpector interfaces with an IBM ThinkPad laptop computer, which operates under Genie 2000 software. The wound counting system is maintained and used at the LLNL In Vivo Measurement Facility. The hardware is designed to be portable and is occasionally deployed to respond to the LLNL Health Services facility or local hospitals for examination of personnel that may have radioactive materials within a wound. The typical detection levels using the LLNL portable wound counter in a low-background area are 0.4 nCi to 0.6 nCi, assuming a near-zero-mass source. This paper documents the systematic errors associated with in vivo measurement of radioactive materials buried within wounds using the LLNL portable wound measurement system. These errors are divided into two basic categories, calibration errors and in vivo wound measurement errors. Within these categories, there are errors associated with particle self-absorption of photons, overlying tissue thickness, source distribution within the wound, and count errors. These errors have been examined and

  19. Evaluation of Topographic Error and Quality with Stereophotoclinometry

    NASA Astrophysics Data System (ADS)

    Palmer, Eric; Weirich, John; Campbell, Tanner; Lambert, Diane; Drozd, Kristofer

    2016-10-01

    One of the primary means to evaluate the accuracy of a shape model is to measure the deviation between a truth model (if available) and the shape model. Typically, this is done by calculating the square root of the average squared error over all the points, i.e., the root mean squared error (RMS). This technique provides valuable insight into the error distribution of a shape model, as well as providing an objective measurement of deviations. However, it does not fully explain the error and especially the quality of a digital terrain model. Systematic errors can obscure poorly performing regions and may over-report errors. We have begun an extensive analysis of using normalized cross-correlation to evaluate the quality of shape models compared to truth topography, as well as the agreement between images rendered from the model and the original images. This technique provides a tool to differentiate between local accuracy and global accuracy. It also provides an effective way to decompose the error vector into horizontal and vertical displacements. It is especially useful for stereophotoclinometry (SPC) because it allows a clear determination of the quality of the model at the resolution of the source images (i.e., if the source images have a 5 cm pixel size, it shows how well the SPC solution performs at 5 cm). Additionally, it demonstrates how essential a good imaging plan is to the quality of the shape model. We are using these techniques in support of the OSIRIS-REx mission to the asteroid Bennu.
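    As a rough illustration of the two metrics discussed, the sketch below compares a synthetic truth terrain with a shifted, noisy copy: RMS summarizes the total deviation, while a normalized cross-correlation (NCC) search over small integer shifts isolates the horizontal displacement. This is a schematic stand-in for the authors' pipeline, not their implementation.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    truth = rng.standard_normal((64, 64)).cumsum(0).cumsum(1)  # smooth synthetic terrain
    model = np.roll(truth, (2, -1), axis=(0, 1)) + 0.05 * rng.standard_normal((64, 64))

    rms = np.sqrt(np.mean((model - truth) ** 2))

    def ncc(a, b):
        a = (a - a.mean()) / a.std()
        b = (b - b.mean()) / b.std()
        return np.mean(a * b)

    # Search NCC over small integer shifts to find the horizontal displacement.
    best = max(((dy, dx) for dy in range(-4, 5) for dx in range(-4, 5)),
               key=lambda s: ncc(np.roll(model, (-s[0], -s[1]), axis=(0, 1)), truth))
    print(f"RMS error = {rms:.2f}, NCC-estimated shift = {best} (true (2, -1))")
    ```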

  20. Determination of molar IR absorptivities and their errors

    NASA Astrophysics Data System (ADS)

    Staat, H.; Korte, E. H.

    1984-03-01

    Molar absorptivities of band maxima of acetonitrile, n-heptane, benzene, and toluene were determined from difference spectra. The statistical and most important systematic errors are given. Recently, we studied statistical and systematic errors occurring in the determination of IR absorptivities ε of liquids (ref. 1). Considerable systematic errors are caused by reflection losses at the outer and inner surfaces of the cell windows. It was shown that these are compensated for if the ratio of two transmittance spectra (T_1, T_2) due to different sample thicknesses (d_1, d_2) is used: in such a case the Bouguer-Lambert-Beer law leads to $\varepsilon = \log_{10}(T_2/T_1) / [c\,(d_1 - d_2)]$ (1), where c denotes the concentration. The reliability of the absorptivities derived in this way is mainly affected by the statistical error, comprising the standard deviations of the transmittance measurements, as well as by the systematic errors from multiple-beam interference within the cell (the fringes do not compensate for each other because of their different periods) and from the finite slit width. Experimental conditions can be chosen so that errors from beam convergence, polarization, temperature variations, and thermal emission are negligible. The influences on the transmittance measurement by drift, unwanted radiation, reliability of wavenumber reading, and non-linearity of the detector system are not considered. The molar absorptivities of band maxima of acetonitrile, n-heptane, benzene, and toluene have been determined using equation (1) and are listed in the Table. The values of Δd employed were on the order of 10 μm to 40 μm; therefore, the strongest bands could not be evaluated. The statistical error was calculated by propagating the standard deviations of T_1 and T_2 through equation (1), giving $\sigma_\varepsilon = \sqrt{(\sigma_{T_1}/T_1)^2 + (\sigma_{T_2}/T_2)^2} \,/\, [c\,(d_1 - d_2)\ln 10]$, and the systematic error due to the finite spectral slit width s from its relation to the band half-width 2γ (equation not reproduced here). The deviation of the cell from planoparallel shape has been taken into account quantitatively, unlike in the method used previously (ref. 1). If the cell is wedge shaped so that its thickness
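    A small numeric illustration of equation (1) and the statistical-error propagation above; all values are hypothetical, chosen only to show the arithmetic.

    ```python
    import numpy as np

    # Two-thickness ratio method: epsilon = log10(T2/T1) / (c * (d1 - d2))
    # eliminates the window reflection losses common to both measurements.
    c = 0.05                # mol L^-1 (hypothetical concentration)
    d1, d2 = 40e-4, 10e-4   # path lengths in cm (40 um and 10 um)
    T1, T2 = 0.30, 0.74     # transmittances at the band maximum
    sigma_T = 0.005         # standard deviation of a transmittance reading

    eps = np.log10(T2 / T1) / (c * (d1 - d2))
    # Propagating independent errors in T1 and T2 through the formula:
    sigma_eps = np.sqrt((sigma_T / T1) ** 2 + (sigma_T / T2) ** 2) \
                / (c * (d1 - d2) * np.log(10))
    print(f"epsilon = {eps:.0f} +/- {sigma_eps:.0f} L mol^-1 cm^-1")
    ```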

  1. Studies of Error Sources in Geodetic VLBI

    NASA Technical Reports Server (NTRS)

    Rogers, A. E. E.; Niell, A. E.; Corey, B. E.

    1996-01-01

    Achieving the goal of millimeter uncertainty in three dimensional geodetic positioning on a global scale requires significant improvement in the precision and accuracy of both random and systematic error sources. For this investigation we proposed to study errors due to instrumentation in Very Long Base Interferometry (VLBI) and due to the atmosphere. After the inception of this work we expanded the scope to include assessment of error sources in GPS measurements, especially as they affect the vertical component of site position and the measurement of water vapor in the atmosphere. The atmosphere correction improvements described below are of benefit to both GPS and VLBI.

  2. The effectiveness of selected feed and water additives for reducing Salmonella spp. of public health importance in broiler chickens: a systematic review, meta-analysis, and meta-regression approach.

    PubMed

    Totton, Sarah C; Farrar, Ashley M; Wilkins, Wendy; Bucher, Oliver; Waddell, Lisa A; Wilhelm, Barbara J; McEwen, Scott A; Rajić, Andrijana

    2012-10-01

    Eating inappropriately prepared poultry meat is a major cause of foodborne salmonellosis. Our objectives were to determine the efficacy of feed and water additives (other than competitive exclusion and antimicrobials) on reducing Salmonella prevalence or concentration in broiler chickens using systematic review-meta-analysis and to explore sources of heterogeneity found in the meta-analysis through meta-regression. Six electronic databases were searched (Current Contents (1999-2009), Agricola (1924-2009), MEDLINE (1860-2009), Scopus (1960-2009), Centre for Agricultural Bioscience (CAB) (1913-2009), and CAB Global Health (1971-2009)), five topic experts were contacted, and the bibliographies of review articles and a topic-relevant textbook were manually searched to identify all relevant research. Study inclusion criteria comprised: English-language primary research investigating the effects of feed and water additives on the Salmonella prevalence or concentration in broiler chickens. Data extraction and study methodological assessment were conducted by two reviewers independently using pretested forms. Seventy challenge studies (n=910 unique treatment-control comparisons), seven controlled studies (n=154), and one quasi-experiment (n=1) met the inclusion criteria. Compared to an assumed control group prevalence of 44 of 1000 broilers, random-effects meta-analysis indicated that the Salmonella cecal colonization in groups with prebiotics (fructooligosaccharide, lactose, whey, dried milk, lactulose, lactosucrose, sucrose, maltose, mannanoligosaccharide) added to feed or water was 15 out of 1000 broilers; with lactose added to feed or water it was 10 out of 1000 broilers; with experimental chlorate product (ECP) added to feed or water it was 21 out of 1000. For ECP the concentration of Salmonella in the ceca was decreased by 0.61 log(10)cfu/g in the treated group compared to the control group. Significant heterogeneity (Cochran's Q-statistic p≤0.10) was observed

  3. Measurement error in geometric morphometrics.

    PubMed

    Fruciano, Carmelo

    2016-06-01

    Geometric morphometrics, a set of methods for the statistical analysis of shape once hailed as a revolutionary advancement in the analysis of morphology, is now mature and routinely used in ecology and evolution. However, a factor often disregarded in empirical studies is the presence and the extent of measurement error. This is potentially a very serious issue because random measurement error can inflate the amount of variance and, since many statistical analyses are based on the amount of "explained" relative to "residual" variance, can result in loss of statistical power. On the other hand, systematic bias can affect statistical analyses by biasing the results (i.e., variation due to bias is incorporated in the analysis and treated as biologically meaningful variation). Here, I briefly review common sources of error in geometric morphometrics. I then review the most commonly used methods to measure and account for both random and non-random measurement error, providing a worked example using a real dataset.

  4. Motor and non-motor error and the influence of error magnitude on brain activity.

    PubMed

    Nadig, Karin Graziella; Jäncke, Lutz; Lüchinger, Roger; Lutz, Kai

    2010-04-01

    It has been shown that frontal cortical areas increase their activity during error perception and error processing. However, it is not yet clear whether perception of motor errors is processed in the same frontal areas as perception of errors in cognitive tasks. It is also unclear whether brain activity level is influenced by the magnitude of error. For this purpose, we conducted a study in which subjects were confronted with motor and non-motor errors, and had them perform a sensorimotor transformation task in which they were likely to commit motor errors of different magnitudes (internal errors). In addition to the internally committed motor errors, non-motor errors (external errors) were added to the feedback in some trials. We found that activity in the anterior insula, inferior frontal gyrus (IFG), cerebellum, precuneus, and posterior medial frontal cortex (pMFC) correlated positively with the magnitude of external errors. The middle frontal gyrus (MFG) and the pMFC cortex correlated positively with the magnitude of the total error fed back to subjects (internal plus external). No significant positive correlation between internal error and brain activity could be detected. These results indicate that motor errors have a differential effect on brain activity compared with non-motor errors.

  5. The effects of recall errors and of selection bias in epidemiologic studies of mobile phone use and cancer risk.

    PubMed

    Vrijheid, Martine; Deltour, Isabelle; Krewski, Daniel; Sanchez, Marie; Cardis, Elisabeth

    2006-07-01

    This paper examines the effects of systematic and random errors in recall and of selection bias in case-control studies of mobile phone use and cancer. These sensitivity analyses are based on Monte Carlo computer simulations and were carried out within the INTERPHONE Study, an international collaborative case-control study in 13 countries. Recall error scenarios simulated plausible values of random and systematic, non-differential and differential recall errors in amount of mobile phone use reported by study subjects. Plausible values for the recall error were obtained from validation studies. Selection bias scenarios assumed varying selection probabilities for cases and controls, mobile phone users, and non-users. Where possible these selection probabilities were based on existing information from non-respondents in INTERPHONE. Simulations used exposure distributions based on existing INTERPHONE data and assumed varying levels of the true risk of brain cancer related to mobile phone use. Results suggest that random recall errors of plausible levels can lead to a large underestimation in the risk of brain cancer associated with mobile phone use. Random errors were found to have larger impact than plausible systematic errors. Differential errors in recall had very little additional impact in the presence of large random errors. Selection bias resulting from underselection of unexposed controls led to J-shaped exposure-response patterns, with risk apparently decreasing at low to moderate exposure levels. The present results, in conjunction with those of the validation studies conducted within the INTERPHONE study, will play an important role in the interpretation of existing and future case-control studies of mobile phone use and cancer risk, including the INTERPHONE study.
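    The attenuating effect of non-differential exposure error reported above is easy to reproduce in a toy Monte Carlo. The sketch below (hypothetical prevalences and error rate, not the INTERPHONE simulation code) shows the odds ratio shrinking toward the null once a binary exposure is randomly misclassified.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)
    n, true_or, p_exp = 2_000_000, 1.5, 0.4
    exposed = rng.random(n) < p_exp
    # Baseline disease odds 0.01; exposure multiplies the odds by true_or.
    odds = np.where(exposed, 0.01 * true_or, 0.01)
    case = rng.random(n) < odds / (1 + odds)

    def odds_ratio(exp, case):
        a = np.sum(exp & case); b = np.sum(exp & ~case)
        c = np.sum(~exp & case); d = np.sum(~exp & ~case)
        return (a * d) / (b * c)

    print(f"no recall error:   OR = {odds_ratio(exposed, case):.2f}")
    # Flip 20% of exposure reports at random, identically for cases and controls.
    noisy = exposed ^ (rng.random(n) < 0.20)
    print(f"20% misclassified: OR = {odds_ratio(noisy, case):.2f}")
    ```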

  6. The Error Distribution of BATSE GRB Location

    NASA Technical Reports Server (NTRS)

    Briggs, Michael S.; Pendleton, Geoffrey N.; Kippen, R. Marc; Brainerd, J. J.; Hurley, Kevin; Connaughton, Valerie; Meegan, Charles A.

    1998-01-01

    We develop empirical probability models for BATSE GRB location errors by a Bayesian analysis of the separations between BATSE GRB locations and locations obtained with the InterPlanetary Network (IPN). Models are compared and their parameters estimated using 394 GRBs with single IPN annuli and 20 GRBs with intersecting IPN annuli. Most of the analysis is for the 4B (rev) BATSE catalog; earlier catalogs are also analyzed. The simplest model that provides a good representation of the error distribution has 78% of the locations in a 'core' term with a systematic error of 1.85 degrees and the remainder in an extended tail with a systematic error of 5.36 degrees, implying a 68% confidence region for bursts with negligible statistical errors of 2.3 degrees. There is some evidence for a more complicated model in which the error distribution depends on the BATSE datatype that was used to obtain the location. Bright bursts are typically located using the CONT datatype, and according to the more complicated model, the 68% confidence region for CONT-located bursts with negligible statistical errors is 2.0 degrees.
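    The quoted numbers are internally consistent under a simple reading of the model. Assuming the radial error within each component is Rayleigh-distributed and that each quoted systematic error is that component's own 68% containment radius (both assumptions for illustration, not statements from the paper), the mixture's 68% radius comes out near the quoted 2.3 degrees:

    ```python
    import numpy as np

    w_core, r_core = 0.78, 1.85   # weight and 68% radius of the core (degrees)
    w_tail, r_tail = 0.22, 5.36   # weight and 68% radius of the tail

    k = -np.log(1.0 - 0.68)       # Rayleigh CDF: P(<r) = 1 - exp(-k (r/r68)^2)

    def containment(r):
        return (w_core * (1 - np.exp(-k * (r / r_core) ** 2))
                + w_tail * (1 - np.exp(-k * (r / r_tail) ** 2)))

    # Bisection for the radius at which the mixture reaches 68%.
    lo, hi = 0.0, 10.0
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if containment(mid) < 0.68 else (lo, mid)
    print(f"mixture 68% radius = {lo:.2f} degrees")   # about 2.3, as quoted
    ```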

  7. Effects of Repeated Readings, Error Correction, and Performance Feedback on the Fluency and Comprehension of Middle School Students with Behavior Problems

    ERIC Educational Resources Information Center

    Alber-Morgan, Sheila R.; Ramp, Ellen Matheson; Anderson, Lara L.; Martin, Christa M.

    2007-01-01

    This study used a multiple-baseline-across-students design to examine the effects of repeated readings combined with systematic error correction and performance feedback on the reading fluency and comprehension of 4 middle school students attending an outpatient day treatment program for their behavior problems. Additionally, a brief prediction…

  8. Field error lottery

    NASA Astrophysics Data System (ADS)

    James Elliott, C.; McVey, Brian D.; Quimby, David C.

    1991-07-01

    The level of field errors in a free electron laser (FEL) is an important determinant of its performance. We have computed 3D performance of a large laser subsystem subjected to field errors of various types. These calculations have been guided by simple models such as SWOOP. The technique of choice is use of the FELEX free electron laser code that now possesses extensive engineering capabilities. Modeling includes the ability to establish tolerances of various types: fast and slow scale field bowing, field error level, beam position monitor error level, gap errors, defocusing errors, energy slew, displacement and pointing errors. Many effects of these errors on relative gain and relative power extraction are displayed and are the essential elements of determining an error budget. The random errors also depend on the particular random number seed used in the calculation. The simultaneous display of the performance versus error level of cases with multiple seeds illustrates the variations attributable to stochasticity of this model. All these errors are evaluated numerically for comprehensive engineering of the system. In particular, gap errors are found to place requirements beyond convenient mechanical tolerances of ± 25 μm, and amelioration of these may occur by a procedure using direct measurement of the magnetic fields at assembly time.

  9. Field error lottery

    NASA Astrophysics Data System (ADS)

    Elliott, C. James; McVey, Brian D.; Quimby, David C.

    1990-11-01

    The level of field errors in an FEL is an important determinant of its performance. We have computed 3D performance of a large laser subsystem subjected to field errors of various types. These calculations have been guided by simple models such as SWOOP. The technique of choice is utilization of the FELEX free electron laser code that now possesses extensive engineering capabilities. Modeling includes the ability to establish tolerances of various types: fast and slow scale field bowing, field error level, beam position monitor error level, gap errors, defocusing errors, energy slew, displacement, and pointing errors. Many effects of these errors on relative gain and relative power extraction are displayed and are the essential elements of determining an error budget. The random errors also depend on the particular random number seed used in the calculation. The simultaneous display of the performance versus error level of cases with multiple seeds illustrates the variations attributable to stochasticity of this model. All these errors are evaluated numerically for comprehensive engineering of the system. In particular, gap errors are found to place requirements beyond mechanical tolerances of ±25 μm, and amelioration of these may occur by a procedure utilizing direct measurement of the magnetic fields at assembly time.

  10. Field error lottery

    SciTech Connect

    Elliott, C.J.; McVey, B.; Quimby, D.C.

    1990-01-01

    The level of field errors in an FEL is an important determinant of its performance. We have computed 3D performance of a large laser subsystem subjected to field errors of various types. These calculations have been guided by simple models such as SWOOP. The technique of choice is utilization of the FELEX free electron laser code that now possesses extensive engineering capabilities. Modeling includes the ability to establish tolerances of various types: fast and slow scale field bowing, field error level, beam position monitor error level, gap errors, defocusing errors, energy slew, displacement and pointing errors. Many effects of these errors on relative gain and relative power extraction are displayed and are the essential elements of determining an error budget. The random errors also depend on the particular random number seed used in the calculation. The simultaneous display of the performance versus error level of cases with multiple seeds illustrates the variations attributable to stochasticity of this model. All these errors are evaluated numerically for comprehensive engineering of the system. In particular, gap errors are found to place requirements beyond mechanical tolerances of ±25 μm, and amelioration of these may occur by a procedure utilizing direct measurement of the magnetic fields at assembly time. 4 refs., 12 figs.

  11. Medical Error and Moral Luck.

    PubMed

    Hubbeling, Dieneke

    2016-09-01

    This paper addresses the concept of moral luck. Moral luck is discussed in the context of medical error, especially an error of omission that occurs frequently, but only rarely has adverse consequences. As an example, a failure to compare the label on a syringe with the drug chart results in the wrong medication being administered and the patient dies. However, this error may have previously occurred many times with no tragic consequences. Discussions on moral luck can highlight conflicting intuitions. Should perpetrators receive a harsher punishment because of an adverse outcome, or should they be dealt with in the same way as colleagues who have acted similarly, but with no adverse effects? An additional element to the discussion, specifically with medical errors, is that according to the evidence currently available, punishing individual practitioners does not seem to be effective in preventing future errors. The following discussion, using relevant philosophical and empirical evidence, posits a possible solution for the moral luck conundrum in the context of medical error: namely, making a distinction between the duty to make amends and assigning blame. Blame should be assigned on the basis of actual behavior, while the duty to make amends is dependent on the outcome. PMID:26662613

  12. Beta systems error analysis

    NASA Technical Reports Server (NTRS)

    1984-01-01

    The atmospheric backscatter coefficient, beta, measured with an airborne CO Laser Doppler Velocimeter (LDV) system operating in a continuous-wave, focused mode is discussed. The Single Particle Mode (SPM) algorithm was developed from concept through analysis of an extensive amount of data obtained with the system on board a NASA aircraft. The SPM algorithm is intended to be employed in situations where one particle at a time appears in the sensitive volume of the LDV. In addition to giving the backscatter coefficient, the SPM algorithm also produces as intermediate results the aerosol density and the aerosol backscatter cross section distribution. A second method, which measures only the atmospheric backscatter coefficient, is called the Volume Mode (VM) and was simultaneously employed. The results of these two methods differed by slightly less than an order of magnitude. The measurement uncertainties or other errors in the results of the two methods are examined.

  13. Addition of docetaxel or bisphosphonates to standard of care in men with localised or metastatic, hormone-sensitive prostate cancer: a systematic review and meta-analyses of aggregate data

    PubMed Central

    Vale, Claire L; Burdett, Sarah; Rydzewska, Larysa H M; Albiges, Laurence; Clarke, Noel W; Fisher, David; Fizazi, Karim; Gravis, Gwenaelle; James, Nicholas D; Mason, Malcolm D; Parmar, Mahesh K B; Sweeney, Christopher J; Sydes, Matthew R; Tombal, Bertrand; Tierney, Jayne F

    2016-01-01

    Summary Background Results from large randomised controlled trials combining docetaxel or bisphosphonates with standard of care in hormone-sensitive prostate cancer have emerged. In order to investigate the effects of these therapies and to respond to emerging evidence, we aimed to systematically review all relevant trials using a framework for adaptive meta-analysis. Methods For this systematic review and meta-analysis, we searched MEDLINE, Embase, LILACS, and the Cochrane Central Register of Controlled Trials, trial registers, conference proceedings, review articles, and reference lists of trial publications for all relevant randomised controlled trials (published, unpublished, and ongoing) comparing either standard of care with or without docetaxel or standard of care with or without bisphosphonates for men with high-risk localised or metastatic hormone-sensitive prostate cancer. For each trial, we extracted hazard ratios (HRs) of the effects of docetaxel or bisphosphonates on survival (time from randomisation until death from any cause) and failure-free survival (time from randomisation to biochemical or clinical failure or death from any cause) from published trial reports or presentations or obtained them directly from trial investigators. HRs were combined using the fixed-effect model (Mantel-Haenszel). Findings We identified five eligible randomised controlled trials of docetaxel in men with metastatic (M1) disease. Results from three (CHAARTED, GETUG-15, STAMPEDE) of these trials (2992 [93%] of 3206 men randomised) showed that the addition of docetaxel to standard of care improved survival. The HR of 0·77 (95% CI 0·68–0·87; p<0·0001) translates to an absolute improvement in 4-year survival of 9% (95% CI 5–14). Docetaxel in addition to standard of care also improved failure-free survival, with the HR of 0·64 (0·58–0·70; p<0·0001) translating into a reduction in absolute 4-year failure rates of 16% (95% CI 12–19). We identified 11 trials of

  14. Accepting error to make less error.

    PubMed

    Einhorn, H J

    1986-01-01

    In this article I argue that the clinical and statistical approaches rest on different assumptions about the nature of random error and the appropriate level of accuracy to be expected in prediction. To examine this, a case is made for each approach. The clinical approach is characterized as being deterministic, causal, and less concerned with prediction than with diagnosis and treatment. The statistical approach accepts error as inevitable and in so doing makes less error in prediction. This is illustrated using examples from probability learning and equal weighting in linear models. Thereafter, a decision analysis of the two approaches is proposed. Of particular importance are the errors that characterize each approach: myths, magic, and illusions of control in the clinical; lost opportunities and illusions of the lack of control in the statistical. Each approach represents a gamble with corresponding risks and benefits.

  15. Inborn errors of metabolism

    MedlinePlus

    Metabolism - inborn errors of ... Bodamer OA. Approach to inborn errors of metabolism. In: Goldman L, Schafer AI, eds. Goldman's Cecil Medicine . 25th ed. Philadelphia, PA: Elsevier Saunders; 2015:chap 205. Rezvani I, Rezvani G. An ...

  16. Decoding and synchronization of error correcting codes

    NASA Astrophysics Data System (ADS)

    Madkour, S. A.

    1983-01-01

    Decoding devices for hard quantization and soft decision error correcting codes are discussed. A Meggitt decoder for Reed-Solomon polynomial codes was implemented and tested. It uses 49 TTL logic ICs. A maximum binary frequency of 30 Mbit/sec is demonstrated. A soft decision decoding approach was applied to hard decision decoding, using the principles of threshold decoding. Simulation results indicate that the proposed scheme achieves satisfactory performance using only a small number of parity checks. The combined correction of substitution and synchronization errors is analyzed. The algorithm presented shows the capability of convolutional codes to correct synchronization errors as well as independent additive errors without any additional redundancy.
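    As a generic stand-in for the hard-decision decoding discussed (not the Meggitt hardware itself), the sketch below decodes a Hamming(7,4) codeword by syndrome lookup and corrects a single additive bit error:

    ```python
    import numpy as np

    H = np.array([[1, 0, 1, 0, 1, 0, 1],
                  [0, 1, 1, 0, 0, 1, 1],
                  [0, 0, 0, 1, 1, 1, 1]])  # column j is the binary encoding of j+1

    def decode(word):
        syndrome = H @ word % 2
        pos = int(syndrome[0] + 2 * syndrome[1] + 4 * syndrome[2])  # 0 means no error
        if pos:
            word = word.copy()
            word[pos - 1] ^= 1          # flip the bit the syndrome points at
        return word

    codeword = np.array([1, 0, 1, 1, 0, 1, 0])  # a valid codeword (H @ c % 2 == 0)
    received = codeword.copy()
    received[4] ^= 1                             # single channel error at position 5
    assert (decode(received) == codeword).all()
    print("single-bit error corrected")
    ```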

  17. Drug Errors in Anaesthesiology

    PubMed Central

    Jain, Rajnish Kumar; Katiyar, Sarika

    2009-01-01

    Summary Medication errors are a leading cause of morbidity and mortality in hospitalized patients. The incidence of these drug errors during anaesthesia is not certain. They impose a considerable financial burden to health care systems apart from the patient losses. Common causes of these errors and their prevention is discussed. PMID:20640103

  18. Issues in automatic OCR error classification

    SciTech Connect

    Esakov, J.; Lopresti, D.P.; Sandberg, J.S.; Zhou, J.

    1994-12-31

    In this paper we present the surprising result that OCR errors are not always uniformly distributed across a page. Under certain circumstances, 30% or more of the errors incurred can be attributed to a single, avoidable phenomenon in the scanning process. This observation has important ramifications for work that explicitly or implicitly assumes a uniform error distribution. In addition, our experiments show that not just the quantity but also the nature of the errors is affected. This could have an impact on strategies used for post-process error correction. Results such as these can be obtained only by analyzing large quantities of data in a controlled way. To this end, we also describe our algorithm for classifying OCR errors. This is based on a well-known dynamic programming approach for determining string edit distance which we have extended to handle the types of character segmentation errors inherent to OCR.
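    The extended dynamic-programming idea, standard edit distance plus 1:2 "split" and 2:1 "merge" operations that model character-segmentation errors, can be sketched as follows; the unit costs are a simplification, and the function name is hypothetical.

    ```python
    # Levenshtein distance extended with split (1 truth char -> 2 OCR chars)
    # and merge (2 truth chars -> 1 OCR char) operations, which model OCR
    # segmentation errors such as m -> rn or cl -> d.
    def ocr_edit_distance(truth, ocr):
        n, m = len(truth), len(ocr)
        d = [[0] * (m + 1) for _ in range(n + 1)]
        for i in range(n + 1):
            d[i][0] = i
        for j in range(m + 1):
            d[0][j] = j
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                best = min(d[i - 1][j] + 1,                      # deletion
                           d[i][j - 1] + 1,                      # insertion
                           d[i - 1][j - 1] + (truth[i - 1] != ocr[j - 1]))  # substitution
                if j >= 2:                                       # split
                    best = min(best, d[i - 1][j - 2] + 1)
                if i >= 2:                                       # merge
                    best = min(best, d[i - 2][j - 1] + 1)
                d[i][j] = best
        return d[n][m]

    # "modern" misread as "rnodern": one split error, not two separate edits.
    print(ocr_edit_distance("modern", "rnodern"))   # -> 1
    ```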

  19. Using positional observations of numbered minor planets for determination of star catalog errors

    NASA Astrophysics Data System (ADS)

    Medvedev, Y.; Kuznetsov, V.

    2015-08-01

    The systematic errors of star catalogs have been determined from the O-C (observed minus computed) residuals of asteroid positional observations. 102 760 633 positional observations of 404 941 numbered asteroids were used. Considerable systematic errors are found for the USNO A2.0 catalog. For this catalogue we also estimated the variation of the systematic errors over some areas of the celestial sphere.
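    A schematic version of the O-C approach, averaging residuals over sky regions so that random errors cancel and systematic catalog errors remain, might look like the following (synthetic residuals, declination bands only for brevity):

    ```python
    import numpy as np

    rng = np.random.default_rng(5)
    n = 100_000
    dec = np.degrees(np.arcsin(rng.uniform(-1, 1, n)))        # uniform on the sphere
    true_bias = 0.3 * np.sin(np.radians(2 * dec))             # arcsec, injected systematic
    o_minus_c = true_bias + 0.8 * rng.standard_normal(n)      # plus random error

    bands = np.arange(-90, 91, 10)
    idx = np.digitize(dec, bands) - 1
    for b in range(len(bands) - 1):
        sel = idx == b
        est = o_minus_c[sel].mean()
        print(f"dec [{int(bands[b]):+3d}, {int(bands[b + 1]):+3d}): "
              f"bias ~ {est:+.3f} arcsec ({int(sel.sum())} obs)")
    ```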

  1. [Medical errors in obstetrics].

    PubMed

    Marek, Z

    1984-08-01

    Errors in medicine may fall into 3 main categories: 1) medical errors made only by physicians, 2) technical errors made by physicians and other health care specialists, and 3) organizational errors associated with mismanagement of medical facilities. This classification of medical errors, as well as the definition and treatment of them, fully applies to obstetrics. However, the difference between obstetrics and other fields of medicine stems from the fact that an obstetrician usually deals with healthy women. Conversely, professional risk in obstetrics is very high, as errors and malpractice can lead to very serious complications. Observations show that the most frequent obstetrical errors occur in induced abortions, diagnosis of pregnancy, selection of optimal delivery techniques, treatment of hemorrhages, and other complications. Therefore, the obstetrician should be prepared to use intensive care procedures similar to those used for resuscitation.

  2. A comparison of some observations of the Galilean satellites with Sampson's tables. [position error analysis

    NASA Technical Reports Server (NTRS)

    Arlot, J.-E.

    1975-01-01

    Two series of photographic observations of the Galilean satellites are analyzed to determine systematic errors in the observations and errors in Sampson's (1921) theory. Satellite-satellite as well as planet-satellite positions are used in comparing theory with observation. Ten unknown errors are identified, and results are presented for three determinations of the unknown longitude error.

  3. Exploring Measurement Error with Cookies: A Real and Virtual Approach via Interactive Excel

    ERIC Educational Resources Information Center

    Sinex, Scott A; Gage, Barbara A.; Beck, Peggy J.

    2007-01-01

    A simple, guided-inquiry investigation using stacked sandwich cookies is employed to develop a simple linear mathematical model and to explore measurement error by incorporating errors as part of the investigation. Both random and systematic errors are presented. The model and errors are then investigated further by engaging with an interactive…

  4. Aircraft system modeling error and control error

    NASA Technical Reports Server (NTRS)

    Kulkarni, Nilesh V. (Inventor); Kaneshige, John T. (Inventor); Krishnakumar, Kalmanje S. (Inventor); Burken, John J. (Inventor)

    2012-01-01

    A method for modeling error-driven adaptive control of an aircraft. Normal aircraft plant dynamics is modeled, using an original plant description in which a controller responds to a tracking error e(k) to drive the component to a normal reference value according to an asymptote curve. Where the system senses that (1) at least one aircraft plant component is experiencing an excursion and (2) the return of this component value toward its reference value is not proceeding according to the expected controller characteristics, neural network (NN) modeling of aircraft plant operation may be changed. However, if (1) is satisfied but the error component is returning toward its reference value according to expected controller characteristics, the NN will continue to model operation of the aircraft plant according to an original description.

  5. Military veterans with mental health problems: a protocol for a systematic review to identify whether they have an additional risk of contact with criminal justice systems compared with other veterans groups

    PubMed Central

    2012-01-01

    Background There is concern that some veterans of armed forces, in particular those with mental health, drug or alcohol problems, experience difficulty returning to a civilian way of life and may subsequently come into contact with criminal justice services and imprisonment. The aim of this review is to examine whether military veterans with mental health problems, including substance use, have an additional risk of contact with criminal justice systems when compared with veterans who do not have such problems. The review will also seek to identify veterans’ views and experiences on their contact with criminal justice services, what contributed to or influenced their contact and whether there are any differences, including international and temporal, in incidence, contact type, veteran type, their presenting health needs and reported experiences. Methods/design In this review we will adopt a methodological model similar to that previously used by other researchers when reviewing intervention studies. The model, which we will use as a framework for conducting a review of observational and qualitative studies, consists of two parallel synthesis stages within the review process; one for quantitative research and the other for qualitative research. The third stage involves a cross study synthesis, enabling a deeper understanding of the results of the quantitative synthesis. A range of electronic databases, including MEDLINE, PsychINFO, CINAHL, will be systematically searched, from 1939 to present day, using a broad range of search terms that cover four key concepts: mental health, military veterans, substance misuse, and criminal justice. Studies will be screened against topic specific inclusion/exclusion criteria and then against a smaller subset of design specific inclusion/exclusion criteria. Data will be extracted for those studies that meet the inclusion criteria, and all eligible studies will be critically appraised. Included studies, both quantitative and

  6. Search, Memory, and Choice Error: An Experiment

    PubMed Central

    Sanjurjo, Adam

    2015-01-01

    Multiple attribute search is a central feature of economic life: we consider much more than price when purchasing a home, and more than wage when choosing a job. An experiment is conducted in order to explore the effects of cognitive limitations on choice in these rich settings, in accordance with the predictions of a new model of search memory load. In each task, subjects are made to search the same information in one of two orders, which differ in predicted memory load. Despite standard models of choice treating such variations in order of acquisition as irrelevant, lower predicted memory load search orders are found to lead to substantially fewer choice errors. An implication of the result for search behavior, more generally, is that in order to reduce memory load (thus choice error) a limited memory searcher ought to deviate from the search path of an unlimited memory searcher in predictable ways, a mechanism that can explain the systematic deviations from optimal sequential search that have recently been discovered in people's behavior. Further, as cognitive load is induced endogenously (within the task), and found to affect choice behavior, this result contributes to the cognitive load literature (in which load is induced exogenously), as well as the cognitive ability literature (in which cognitive ability is measured in a separate task). In addition, while the information overload literature has focused on the detrimental effects of the quantity of information on choice, this result suggests that, holding quantity constant, the order that information is observed in is an essential determinant of choice failure. PMID:26121356

  7. Comparing Absolute Error with Squared Error for Evaluating Empirical Models of Continuous Variables: Compositions, Implications, and Consequences

    NASA Astrophysics Data System (ADS)

    Gao, J.

    2014-12-01

    Reducing modeling error is often a major concern of empirical geophysical models. However, modeling errors can be defined in different ways: When the response variable is continuous, the most commonly used metrics are squared (SQ) and absolute (ABS) errors. For most applications, ABS error is the more natural, but SQ error is mathematically more tractable, so is often used as a substitute with little scientific justification. Existing literature has not thoroughly investigated the implications of using SQ error in place of ABS error, especially not geospatially. This study compares the two metrics through the lens of bias-variance decomposition (BVD). BVD breaks down the expected modeling error of each model evaluation point into bias (systematic error), variance (model sensitivity), and noise (observation instability). It offers a way to probe the composition of various error metrics. I analytically derived the BVD of ABS error and compared it with the well-known SQ error BVD, and found that not only the two metrics measure the characteristics of the probability distributions of modeling errors differently, but also the effects of these characteristics on the overall expected error are different. Most notably, under SQ error all bias, variance, and noise increase expected error, while under ABS error certain parts of the error components reduce expected error. Since manipulating these subtractive terms is a legitimate way to reduce expected modeling error, SQ error can never capture the complete story embedded in ABS error. I then empirically compared the two metrics with a supervised remote sensing model for mapping surface imperviousness. Pair-wise spatially-explicit comparison for each error component showed that SQ error overstates all error components in comparison to ABS error, especially variance-related terms. Hence, substituting ABS error with SQ error makes model performance appear worse than it actually is, and the analyst would more likely accept a
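    The additive decomposition that makes squared error tractable is easy to verify numerically; the sketch below (synthetic numbers) confirms E[(y − ŷ)²] = bias² + variance + noise at one evaluation point, while absolute error admits no such clean split:

    ```python
    import numpy as np

    rng = np.random.default_rng(6)
    truth, noise_sd = 3.0, 0.5
    y = truth + noise_sd * rng.standard_normal(1_000_000)   # noisy observations
    yhat = 3.2 + 0.3 * rng.standard_normal(1_000_000)       # predictions varying over training sets

    sq = np.mean((y - yhat) ** 2)
    bias2 = (np.mean(yhat) - truth) ** 2
    var = np.var(yhat)
    noise = noise_sd ** 2
    print(f"E[SQ] = {sq:.4f}  vs  bias^2 + var + noise = {bias2 + var + noise:.4f}")
    print(f"E[ABS] = {np.mean(np.abs(y - yhat)):.4f}  (no additive decomposition)")
    ```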

  8. Error detection method

    DOEpatents

    Olson, Eric J.

    2013-06-11

    An apparatus, program product, and method that run an algorithm on a hardware based processor, generate a hardware error as a result of running the algorithm, generate an algorithm output for the algorithm, compare the algorithm output to another output for the algorithm, and detect the hardware error from the comparison. The algorithm is designed to cause the hardware based processor to heat to a degree that increases the likelihood of hardware errors to manifest, and the hardware error is observable in the algorithm output. As such, electronic components may be sufficiently heated and/or sufficiently stressed to create better conditions for generating hardware errors, and the output of the algorithm may be compared at the end of the run to detect a hardware error that occurred anywhere during the run that may otherwise not be detected by traditional methodologies (e.g., due to cooling, insufficient heat and/or stress, etc.).

  9. The Error in Total Error Reduction

    PubMed Central

    Witnauer, James E.; Urcelay, Gonzalo P.; Miller, Ralph R.

    2013-01-01

    Most models of human and animal learning assume that learning is proportional to the discrepancy between a delivered outcome and the outcome predicted by all cues present during that trial (i.e., total error across a stimulus compound). This total error reduction (TER) view has been implemented in connectionist and artificial neural network models to describe the conditions under which weights between units change. Electrophysiological work has revealed that the activity of dopamine neurons is correlated with the total error signal in models of reward learning. Similar neural mechanisms presumably support fear conditioning, human contingency learning, and other types of learning. Using a computational modelling approach, we compared several TER models of associative learning to an alternative model that rejects the TER assumption in favor of local error reduction (LER), which assumes that learning about each cue is proportional to the discrepancy between the delivered outcome and the outcome predicted by that specific cue on that trial. The LER model provided a better fit to the reviewed data than the TER models. Given the superiority of the LER model with the present data sets, acceptance of TER should be tempered. PMID:23891930
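    The contrast between the two update rules can be made concrete with a two-cue compound; a minimal sketch with illustrative parameters, not the authors' model code:

    ```python
    import numpy as np

    # Two cues, both present on every trial, outcome lambda = 1.
    alpha, lam, trials = 0.2, 1.0, 50
    v_ter = np.zeros(2)   # total error reduction (Rescorla-Wagner style)
    v_ler = np.zeros(2)   # local error reduction

    for _ in range(trials):
        v_ter += alpha * (lam - v_ter.sum())   # one shared prediction error
        v_ler += alpha * (lam - v_ler)         # one error per cue

    print("TER weights:", v_ter.round(3), "sum:", v_ter.sum().round(3))  # cues share lambda
    print("LER weights:", v_ler.round(3), "sum:", v_ler.sum().round(3))  # each cue -> lambda
    ```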

  10. Responses and concerns of healthcare providers to medication errors.

    PubMed

    Wolf, Z R; Serembus, J F; Smetzer, J; Cohen, H; Cohen, M

    2000-11-01

    This descriptive, correlational study examined the responses and concerns of healthcare professionals about making medication errors and estimated patient harm from such errors. A systematic random sample of nurses, pharmacists, and physicians (N = 402) completed a self-report survey about a medication error they judged to be serious. Respondents felt guilty, nervous, and worried about the error. They feared for the safety of the patient, disciplinary action, and punishment. A few subjects indicated that they never reported the errors. The most frequent symptoms associated with errors were neurologically based. The injury suffered by patients was not severe overall according to the harm scales. Weak correlations were found for the harm scales and responses and concerns. The authors suggest a supportive environment for the provider following an error and continuous quality improvement efforts to eliminate system-based errors.

  11. Model Error Budgets

    NASA Technical Reports Server (NTRS)

    Briggs, Hugh C.

    2008-01-01

    An error budget is a commonly used tool in design of complex aerospace systems. It represents system performance requirements in terms of allowable errors and flows these down through a hierarchical structure to lower assemblies and components. The requirements may simply be 'allocated' based upon heuristics or experience, or they may be designed through use of physics-based models. This paper presents a basis for developing an error budget for models of the system, as opposed to the system itself. The need for model error budgets arises when system models are a principal design agent, as is increasingly common for poorly testable high-performance space systems.
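    A conventional roll-up of such a budget combines component allocations as a root sum of squares and checks the total against the requirement; a minimal sketch with invented numbers:

    ```python
    import numpy as np

    # Hypothetical error-budget roll-up: component allocations combine as
    # RSS (assuming independent error sources) against a system requirement.
    budget = {
        "pointing bias": 2.0,      # milliarcsec, illustrative numbers only
        "thermal drift": 1.5,
        "sensor noise": 1.0,
        "model error": 1.2,
    }
    requirement = 3.5
    total = np.sqrt(sum(v**2 for v in budget.values()))
    print(f"RSS total = {total:.2f} mas vs requirement {requirement} mas "
          f"-> {'OK' if total <= requirement else 'over budget'}")
    ```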

  12. Instrumental systematics and weak gravitational lensing

    NASA Astrophysics Data System (ADS)

    Mandelbaum, R.

    2015-05-01

    We present a pedagogical review of the weak gravitational lensing measurement process and its connection to major scientific questions such as dark matter and dark energy. Then we describe common ways of parametrizing systematic errors and understanding how they affect weak lensing measurements. Finally, we discuss several instrumental systematics and how they fit into this context, and conclude with some future perspective on how progress can be made in understanding the impact of instrumental systematics on weak lensing measurements.

  13. A Fully Bayesian Approach to Improved Calibration and Prediction of Groundwater Models With Structure Error

    NASA Astrophysics Data System (ADS)

    Xu, T.; Valocchi, A. J.

    2014-12-01

    Effective water resource management typically relies on numerical models to analyse groundwater flow and solute transport processes. These models are usually subject to model structure error due to simplification and/or misrepresentation of the real system. As a result, the model outputs may systematically deviate from measurements, thus violating a key assumption for traditional regression-based calibration and uncertainty analysis. On the other hand, model structure error induced bias can be described statistically in an inductive, data-driven way based on historical model-to-measurement misfit. We adopt a fully Bayesian approach that integrates a Gaussian process error model, accounting for model structure error, into the calibration, prediction and uncertainty analysis of groundwater models. The posterior distributions of the parameters of the groundwater model and the Gaussian process error model are jointly inferred using DREAM, an efficient Markov chain Monte Carlo sampler. We test the usefulness of the fully Bayesian approach on a synthetic case study of surface water-groundwater interaction under changing pumping conditions. We first illustrate through this example that traditional least squares regression without accounting for model structure error yields biased parameter estimates, due to parameter compensation, as well as biased predictions. In contrast, the Bayesian approach gives less biased parameter estimates. Moreover, the integration of a Gaussian process error model significantly reduces predictive bias and leads to prediction intervals that are more consistent with observations. The results highlight the importance of explicit treatment of model structure error, especially in circumstances where subsequent decision-making and risk analysis require accurate prediction and uncertainty quantification. In addition, the data-driven error modelling approach is capable of extracting more information from observation data than using a groundwater model alone.
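
    A minimal sketch (Python; one-dimensional, with a squared-exponential kernel and hypothetical hyperparameter names) of the statistical core of such an approach: the model-to-measurement residual is treated as a Gaussian-process bias plus white noise, and the resulting marginal likelihood is what a sampler such as DREAM evaluates at each proposed parameter set.

        import numpy as np

        def sq_exp_cov(x, sigma_b, ell):
            # Squared-exponential covariance for the GP structural-error term.
            d = x[:, None] - x[None, :]
            return sigma_b**2 * np.exp(-0.5 * (d / ell) ** 2)

        def log_likelihood(obs, model_out, x, sigma_b, ell, sigma_n):
            # Residual = GP bias + iid measurement noise with std sigma_n.
            resid = obs - model_out
            K = sq_exp_cov(x, sigma_b, ell) + sigma_n**2 * np.eye(len(x))
            alpha = np.linalg.solve(K, resid)
            _, logdet = np.linalg.slogdet(K)
            return -0.5 * (resid @ alpha + logdet + len(x) * np.log(2 * np.pi))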

  14. Analysis of Medication Error Reports

    SciTech Connect

    Whitney, Paul D.; Young, Jonathan; Santell, John; Hicks, Rodney; Posse, Christian; Fecht, Barbara A.

    2004-11-15

    In medicine, as in many areas of research, technological innovation and the shift from paper based information to electronic records has created a climate of ever increasing availability of raw data. There has been, however, a corresponding lag in our abilities to analyze this overwhelming mass of data, and classic forms of statistical analysis may not allow researchers to interact with data in the most productive way. This is true in the emerging area of patient safety improvement. Traditionally, a majority of the analysis of error and incident reports has been carried out based on an approach of data comparison, and starts with a specific question which needs to be answered. Newer data analysis tools have been developed which allow the researcher to not only ask specific questions but also to “mine” data: approach an area of interest without preconceived questions, and explore the information dynamically, allowing questions to be formulated based on patterns brought up by the data itself. Since 1991, United States Pharmacopeia (USP) has been collecting data on medication errors through voluntary reporting programs. USP’s MEDMARX(SM) reporting program is the largest national medication error database and currently contains well over 600,000 records. Traditionally, USP has conducted an annual quantitative analysis of data derived from “pick-lists” (i.e., items selected from a list of items) without an in-depth analysis of free-text fields. In this paper, the application of text analysis and data analysis tools used by Battelle to analyze the medication error reports already analyzed in the traditional way by USP is described. New insights and findings were revealed including the value of language normalization and the distribution of error incidents by day of the week. The motivation for this effort is to gain additional insight into the nature of medication errors to support improvements in medication safety.

  15. Addressing medical errors in hand surgery.

    PubMed

    Johnson, Shepard P; Adkinson, Joshua M; Chung, Kevin C

    2014-09-01

    Influential think tanks such as the Institute of Medicine have raised awareness about the implications of medical errors. In response, organizations, medical societies, and hospitals have initiated programs to decrease the incidence and prevent adverse effects of these errors. Surgeons deal with the direct implications of adverse events involving patients. In addition to managing the physical consequences, they are confronted with ethical and social issues when caring for a harmed patient. Although there is considerable effort to implement system-wide changes, there is little guidance for hand surgeons on how to address medical errors. Admitting an error by a physician is difficult, but a transparent environment where patients are notified of errors and offered consolation and compensation is essential to maintain physician-patient trust. Furthermore, equipping hand surgeons with a guide for addressing medical errors will help identify system failures, provide learning points for safety improvement, decrease litigation against physicians, and demonstrate a commitment to ethical and compassionate medical care.

  16. Preventing errors in laterality.

    PubMed

    Landau, Elliot; Hirschorn, David; Koutras, Iakovos; Malek, Alexander; Demissie, Seleshie

    2015-04-01

    An error in laterality is the reporting of a finding that is present on the right side as being on the left, or vice versa. While different medical and surgical specialties have implemented protocols to help prevent such errors, very few studies have been published that describe these errors in radiology reports and ways to prevent them. We devised a system that allows the radiologist to view reports in a separate window, displayed in a simple font and with all terms of laterality highlighted in separate colors. This allows the radiologist to correlate all detected laterality terms of the report with the images open in PACS and to correct them before the report is finalized. The system was monitored, and every detected error in laterality was recorded. The system detected 32 errors in laterality over a 7-month period (a rate of 0.0007 %), with CT showing the highest error detection rate of all modalities. Significantly more errors were detected in male patients than in female patients. In conclusion, our study demonstrated that with our system, laterality errors can be detected and corrected prior to finalizing reports.
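
    The highlighting step lends itself to a small text-processing routine. The sketch below (Python; the term list and the bracket markup are simplified stand-ins for the color-coded display described above) flags every laterality term so each can be checked against the images in PACS before sign-off.

        import re

        # Simplified stand-in for the color-coded report viewer.
        LATERALITY = re.compile(r"\b(left|right|bilateral)\b", re.IGNORECASE)

        def highlight_laterality(report: str) -> str:
            return LATERALITY.sub(lambda m: f"[{m.group(0).upper()}]", report)

        print(highlight_laterality("Mass in the left upper lobe; right kidney normal."))
        # Mass in the [LEFT] upper lobe; [RIGHT] kidney normal.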

  17. Everyday Scale Errors

    ERIC Educational Resources Information Center

    Ware, Elizabeth A.; Uttal, David H.; DeLoache, Judy S.

    2010-01-01

    Young children occasionally make "scale errors"--they attempt to fit their bodies into extremely small objects or attempt to fit a larger object into another, tiny, object. For example, a child might try to sit in a dollhouse-sized chair or try to stuff a large doll into it. Scale error research was originally motivated by parents' and…

  18. Action errors, error management, and learning in organizations.

    PubMed

    Frese, Michael; Keith, Nina

    2015-01-01

    Every organization is confronted with errors. Most errors are corrected easily, but some may lead to negative consequences. Organizations often focus on error prevention as a single strategy for dealing with errors. Our review suggests that error prevention needs to be supplemented by error management--an approach directed at effectively dealing with errors after they have occurred, with the goal of minimizing negative and maximizing positive error consequences (examples of the latter are learning and innovations). After defining errors and related concepts, we review research on error-related processes affected by error management (error detection, damage control). Empirical evidence on positive effects of error management in individuals and organizations is then discussed, along with emotional, motivational, cognitive, and behavioral pathways of these effects. Learning from errors is central, but like other positive consequences, learning occurs under certain circumstances--one being the development of a mind-set of acceptance of human error.

  19. Food additives

    PubMed Central

    Spencer, Michael

    1974-01-01

    Food additives are discussed from the food technology point of view. The reasons for their use are summarized: (1) to protect food from chemical and microbiological attack; (2) to even out seasonal supplies; (3) to improve their eating quality; (4) to improve their nutritional value. The various types of food additives are considered, e.g. colours, flavours, emulsifiers, bread and flour additives, preservatives, and nutritional additives. The paper concludes with consideration of those circumstances in which the use of additives is (a) justified and (b) unjustified. PMID:4467857

  20. Errors Disrupt Subsequent Early Attentional Processes

    PubMed Central

    Van der Borght, Liesbet; Schevernels, Hanne; Burle, Boris; Notebaert, Wim

    2016-01-01

    It has been demonstrated that target detection is impaired following an error in an unrelated flanker task. These findings support the idea that the occurrence or processing of unexpected, error-like events interferes with subsequent information processing. In the present study, we investigated the effect of errors on early visual ERP components. We therefore combined a flanker task and a visual discrimination task. Additionally, the intertrial interval between the two tasks was manipulated in order to investigate the duration of these negative after-effects. The results of the visual discrimination task indicated that the amplitude of the N1 component, which is related to endogenous attention, was significantly decreased following an error, irrespective of the intertrial interval. Additionally, P3 amplitude was attenuated after an erroneous trial, but only in the long-interval condition. These results indicate that low-level attentional processes are impaired after errors. PMID:27050303

  1. Proofreading for word errors.

    PubMed

    Pilotti, Maura; Chodorow, Martin; Agpawa, Ian; Krajniak, Marta; Mahamane, Salif

    2012-04-01

    Proofreading (i.e., reading text for the purpose of detecting and correcting typographical errors) is viewed as a component of the activity of revising text and thus is a necessary (albeit not sufficient) procedural step for enhancing the quality of a written product. The purpose of the present research was to test competing accounts of word-error detection which predict factors that may influence reading and proofreading differently. Word errors, which change a word into another word (e.g., from --> form), were selected for examination because they are unlikely to be detected by automatic spell-checking functions. Consequently, their detection still rests mostly in the hands of the human proofreader. Findings highlighted the weaknesses of existing accounts of proofreading and identified factors, such as length and frequency of the error in the English language relative to frequency of the correct word, which might play a key role in detection of word errors.

  2. Errors in radial velocity variance from Doppler wind lidar

    NASA Astrophysics Data System (ADS)

    Wang, H.; Barthelmie, R. J.; Doubrawa, P.; Pryor, S. C.

    2016-08-01

    A high-fidelity lidar turbulence measurement technique relies on accurate estimates of radial velocity variance that are subject to both systematic and random errors determined by the autocorrelation function of radial velocity, the sampling rate, and the sampling duration. Using both statistically simulated and observed data, this paper quantifies the effect of the volumetric averaging in lidar radial velocity measurements on the autocorrelation function and the dependence of the systematic and random errors on the sampling duration. For current-generation scanning lidars and sampling durations of about 30 min and longer, during which the stationarity assumption is valid for atmospheric flows, the systematic error is negligible but the random error exceeds about 10 %.
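
    A minimal sketch (Python; the AR(1) surrogate series and the scaling rule are assumptions for illustration, not taken from the paper) of the dependence described above: estimate the integral time scale from the autocorrelation function, then apply the standard large-sample result that the relative random error of a variance estimate scales roughly as sqrt(2*tau/T) for a sampling duration T much longer than tau.

        import numpy as np

        def integral_time_scale(x, dt):
            # Integrate the sample autocorrelation up to its first zero crossing.
            x = x - x.mean()
            acf = np.correlate(x, x, mode="full")[len(x) - 1:]
            acf = acf / acf[0]
            n = np.argmax(acf < 0) or len(acf)
            return dt * acf[:n].sum()

        rng = np.random.default_rng(0)
        dt, n = 1.0, 1800                    # 1 Hz samples over a 30 min record
        x = np.zeros(n)
        for i in range(1, n):                # AR(1) surrogate for radial velocity
            x[i] = 0.95 * x[i - 1] + rng.normal()
        tau = integral_time_scale(x, dt)
        rel_err = np.sqrt(2 * tau / (n * dt))
        print(f"tau ~ {tau:.1f} s, relative random error ~ {rel_err:.1%}")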

  3. Errors in neuroradiology.

    PubMed

    Caranci, Ferdinando; Tedeschi, Enrico; Leone, Giuseppe; Reginelli, Alfonso; Gatta, Gianluca; Pinto, Antonio; Squillaci, Ettore; Briganti, Francesco; Brunese, Luca

    2015-09-01

    Approximately 4 % of radiologic interpretations in daily practice contain errors, and discrepancies occur in 2-20 % of reports. Fortunately, most of them are minor errors or, if serious, are found and corrected with sufficient promptness; diagnostic errors become critical when misinterpretation or misidentification significantly delays medical or surgical treatment. Errors can be summarized into four main categories: observer errors, errors in interpretation, failure to suggest the next appropriate procedure, and failure to communicate in a timely and clinically appropriate manner. The misdiagnosis/misinterpretation rate rises in the emergency setting and in the first stages of the learning curve, as in residency. Para-physiological and pathological pitfalls in neuroradiology include calcification and brain stones, pseudofractures, enlargement of subarachnoid or epidural spaces, ventricular system abnormalities, vascular system abnormalities, intracranial lesions or pseudolesions, and finally neuroradiological emergencies. In order to minimize the possibility of error, it is important to be aware of the various presentations of pathology, obtain clinical information, know current practice guidelines, review after interpreting a diagnostic study, suggest follow-up studies when appropriate, and communicate significant abnormal findings appropriately, in a timely fashion, directly to the treatment team.

  4. Uncorrected refractive errors.

    PubMed

    Naidoo, Kovin S; Jaggernath, Jyoti

    2012-01-01

    Global estimates indicate that more than 2.3 billion people in the world suffer from poor vision due to refractive error, of whom 670 million people are considered visually impaired because they do not have access to corrective treatment. Refractive errors, if uncorrected, result in an impaired quality of life for millions of people worldwide, irrespective of their age, sex and ethnicity. Over the past decade, a series of studies using a survey methodology, referred to as Refractive Error Study in Children (RESC), were performed in populations with different ethnic origins and cultural settings. These studies confirmed that the prevalence of uncorrected refractive errors is considerably high for children in low- and middle-income countries. Furthermore, uncorrected refractive error has been noted to have extensive social and economic impacts, such as limiting the educational and employment opportunities of economically active persons, healthy individuals and communities. The key public health challenges presented by uncorrected refractive errors, the leading cause of vision impairment across the world, require urgent attention. To address these issues, it is critical to focus on the development of human resources and sustainable methods of service delivery. This paper discusses three core pillars to addressing the challenges posed by uncorrected refractive errors: Human Resource (HR) Development, Service Development and Social Entrepreneurship. PMID:22944755

  5. Investigation of error sources in regional inverse estimates of greenhouse gas emissions in Canada

    NASA Astrophysics Data System (ADS)

    Chan, E.; Chan, D.; Ishizawa, M.; Vogel, F.; Brioude, J.; Delcloo, A.; Wu, Y.; Jin, B.

    2015-08-01

    Inversion models can use atmospheric concentration measurements to estimate surface fluxes. This study is an evaluation of the errors in a regional flux inversion model for different provinces of Canada: Alberta (AB), Saskatchewan (SK) and Ontario (ON). Using CarbonTracker model results as the target, synthetic data experiments were used to examine the impacts of the errors from the Bayesian optimisation method, the prior flux distribution and the atmospheric transport model, as well as their interactions. The scaling factors for different sub-regions were estimated by the Markov chain Monte Carlo (MCMC) simulation and cost function minimization (CFM) methods. The CFM method results are sensitive to the relative size of the assumed model-observation mismatch and prior flux error variances. Experiment results show that the estimation error increases with the number of sub-regions using the CFM method. For the region definitions that lead to realistic flux estimates, the numbers of sub-regions for the western region of AB/SK combined and the eastern region of ON are 11 and 4 respectively. The corresponding annual flux estimation errors for the western and eastern regions using the MCMC (CFM) method are -7 and -3 % (0 and 8 %) respectively, when there is only prior flux error. The estimation errors increase to 36 and 94 % (40 and 232 %) resulting from transport model error alone. When prior and transport model errors co-exist in the inversions, the estimation errors become 5 and 85 % (29 and 201 %). This result indicates that estimation errors are dominated by the transport model error, and that errors can in fact cancel each other and propagate to the flux estimates non-linearly. In addition, the posterior flux estimates can differ more from the target fluxes than the prior estimates do, and the posterior uncertainty estimates can be unrealistically small, failing to cover the target. The systematic evaluation of the different components of the inversion
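
    For concreteness, here is a minimal sketch (Python; all matrices are hypothetical) of the cost-function-minimization step: with a linear transport operator H mapping sub-region scaling factors s to observations y, minimizing J(s) = (y - H s)' R^-1 (y - H s) + (s - s_prior)' B^-1 (s - s_prior) has the closed-form solution computed below. The sensitivity to the assumed model-observation mismatch and prior flux error variances noted above enters directly through R and B.

        import numpy as np

        def cfm_estimate(H, y, s_prior, R, B):
            # Solve the normal equations of the quadratic Bayesian cost function.
            Rinv = np.linalg.inv(R)   # model-observation mismatch covariance
            Binv = np.linalg.inv(B)   # prior flux error covariance
            A = H.T @ Rinv @ H + Binv
            b = H.T @ Rinv @ y + Binv @ s_prior
            return np.linalg.solve(A, b)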

  6. Subaperture test of wavefront error of large telescopes: error sources and stitching performance simulations

    NASA Astrophysics Data System (ADS)

    Chen, Shanyong; Li, Shengyi; Wang, Guilin

    2014-11-01

    The wavefront error of large telescopes must be measured to check the system quality and also to estimate the misalignment of the telescope optics, including the primary, the secondary and so on. It is usually measured with a focal plane interferometer and an autocollimator flat (ACF) of the same aperture as the telescope. However, this is challenging for meter-class telescopes due to the high cost and technological difficulty of producing a large ACF. A subaperture test with a smaller ACF is hence proposed, in combination with advanced stitching algorithms. Major error sources include the surface error of the ACF, misalignment of the ACF and measurement noise. Different error sources have different impacts on the wavefront error. Basically, the surface error of the ACF behaves like a systematic error, and its astigmatism accumulates and is enlarged if the azimuth of the subapertures remains fixed. It is difficult to calibrate the ACF accurately because it suffers considerable deformation induced by gravity or mechanical clamping force. Therefore a self-calibrated stitching algorithm is employed to separate the ACF surface error from the subaperture wavefront error. We suggest the ACF be rotated around the optical axis of the telescope during the subaperture test. The algorithm is also able to correct the subaperture tip-tilt based on the overlapping consistency. Since all subaperture measurements are obtained in the same imaging plane, the lateral shift of the subapertures is always known and the real overlapping points can be recognized in this plane. Therefore lateral positioning error of the subapertures has no impact on the stitched wavefront. In contrast, the angular positioning error changes the azimuth of the ACF and hence the systematic error. We propose an angularly uneven layout of subapertures to minimize the stitching error, which is counterintuitive. At last, measurement noise can never be corrected but can be suppressed by means of averaging and

  7. Error Prevention Aid

    NASA Technical Reports Server (NTRS)

    1987-01-01

    In a complex computer environment there is ample opportunity for error, a mistake by a programmer, or a software-induced undesirable side effect. In insurance, errors can cost a company heavily, so protection against inadvertent change is a must for the efficient firm. The data processing center at Transport Life Insurance Company has taken a step to guard against accidental changes by adopting a software package called EQNINT (Equations Interpreter Program). EQNINT cross checks the basic formulas in a program against the formulas that make up the major production system. EQNINT assures that formulas are coded correctly and helps catch errors before they affect the customer service or its profitability.

  8. The Crucial Role of Error Correlation for Uncertainty Modeling of CFD-Based Aerodynamics Increments

    NASA Technical Reports Server (NTRS)

    Hemsch, Michael J.; Walker, Eric L.

    2011-01-01

    The Ares I ascent aerodynamics database for Design Cycle 3 (DAC-3) was built from wind-tunnel test results and CFD solutions. The wind tunnel results were used to build the baseline response surfaces for wind-tunnel Reynolds numbers at power-off conditions. The CFD solutions were used to build increments to account for Reynolds number effects. We calculate the validation errors for the primary CFD code results at wind tunnel Reynolds number power-off conditions and would like to be able to use those errors to predict the validation errors for the CFD increments. However, the validation errors are large compared to the increments. We suggest a way forward that is consistent with common practice in wind tunnel testing which is to assume that systematic errors in the measurement process and/or the environment will subtract out when increments are calculated, thus making increments more reliable with smaller uncertainty than absolute values of the aerodynamic coefficients. A similar practice has arisen for the use of CFD to generate aerodynamic database increments. The basis of this practice is the assumption of strong correlation of the systematic errors inherent in each of the results used to generate an increment. The assumption of strong correlation is the inferential link between the observed validation uncertainties at wind-tunnel Reynolds numbers and the uncertainties to be predicted for flight. In this paper, we suggest a way to estimate the correlation coefficient and demonstrate the approach using code-to-code differences that were obtained for quality control purposes during the Ares I CFD campaign. Finally, since we can expect the increments to be relatively small compared to the baseline response surface and to be typically of the order of the baseline uncertainty, we find that it is necessary to be able to show that the correlation coefficients are close to unity to avoid overinflating the overall database uncertainty with the addition of the increments.
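
    The cancellation argument can be made explicit with the usual error-propagation formula. In the sketch below (Python; the sigma values are hypothetical), the uncertainty of an increment D = A - B shrinks toward zero as the correlation coefficient of the two systematic errors approaches unity.

        import math

        def increment_sigma(sigma_a, sigma_b, rho):
            # Standard propagation for D = A - B with correlated errors.
            return math.sqrt(sigma_a**2 + sigma_b**2 - 2 * rho * sigma_a * sigma_b)

        print(increment_sigma(1.0, 1.0, 0.0))    # 1.41: independent errors add
        print(increment_sigma(1.0, 1.0, 0.95))   # 0.32: strong correlation cancels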

  9. Classification and reduction of pilot error

    NASA Technical Reports Server (NTRS)

    Rogers, W. H.; Logan, A. L.; Boley, G. D.

    1989-01-01

    Human error is a primary or contributing factor in about two-thirds of commercial aviation accidents worldwide. With the ultimate goal of reducing pilot error accidents, this contract effort is aimed at understanding the factors underlying error events and reducing the probability of certain types of errors by modifying underlying factors such as flight deck design and procedures. A review of the literature relevant to error classification was conducted. Classification includes categorizing types of errors, the information processing mechanisms and factors underlying them, and identifying factor-mechanism-error relationships. The classification scheme developed by Jens Rasmussen was adopted because it provided a comprehensive yet basic error classification shell or structure that could easily accommodate addition of details on domain-specific factors. For these purposes, factors specific to the aviation environment were incorporated. Hypotheses concerning the relationship of a small number of underlying factors, information processing mechanisms, and error types identified in the classification scheme were formulated. ASRS data were reviewed and a simulation experiment was performed to evaluate and quantify the hypotheses.

  10. Simulation of systematic errors in the SLC magnets

    SciTech Connect

    Jaeger, J.

    1983-08-08

    The distance (iron to iron) between a focusing and a defocusing magnet in the SLC-arcs is 6.7056 cm and the iron length of each of them is 2.52914 m. To represent these magnets by a hard-edge model in computer codes TRANSPORT or TURTLE the magnetic length rather than the core length of these magnets is of interest. In the present lattice the magnetic length for the field and the gradient of each of these magnets is assumed to be 2.5462 m.

  11. Systematic approach for simultaneously correcting the band-gap and p-d separation errors of common cation III-V or II-VI binaries in density functional theory calculations within a local density approximation

    SciTech Connect

    Wang, Jianwei; Zhang, Yong; Wang, Lin-Wang

    2015-07-31

    We propose a systematic approach that can empirically correct three major errors typically found in a density functional theory (DFT) calculation within the local density approximation (LDA) simultaneously for a set of common cation binary semiconductors, such as III-V compounds, (Ga or In)X with X = N, P, As, Sb, and II-VI compounds, (Zn or Cd)X, with X = O, S, Se, Te. By correcting (1) the binary band gaps at the high-symmetry points Γ, L, X, (2) the separation of p- and d-orbital-derived valence bands, and (3) the conduction band effective masses to experimental values, and doing so simultaneously for common cation binaries, the resulting DFT-LDA-based quasi-first-principles method can be used to predict the electronic structure of complex materials involving multiple binaries with comparable accuracy but much less computational cost than a GW level theory. This approach provides an efficient way to evaluate the electronic structures and other material properties of complex systems, much needed for material discovery and design.

  12. Estimating Bias Error Distributions

    NASA Technical Reports Server (NTRS)

    Liu, Tian-Shu; Finley, Tom D.

    2001-01-01

    This paper formulates the general methodology for estimating the bias error distribution of a device in a measuring domain from less accurate measurements when a minimal number of standard values (typically two values) are available. A new perspective is that the bias error distribution can be found as a solution of an intrinsic functional equation in a domain. Based on this theory, the scaling- and translation-based methods for determining the bias error distribution are developed. These methods are virtually applicable to any device as long as the bias error distribution of the device can be sufficiently described by a power series (a polynomial) or a Fourier series in a domain. These methods have been validated through computational simulations and laboratory calibration experiments for a number of different devices.

  13. Slowing after Observed Error Transfers across Tasks

    PubMed Central

    Wang, Lijun; Pan, Weigang; Tan, Jinfeng; Liu, Congcong; Chen, Antao

    2016-01-01

    . Moreover, the PES effect appears across tasksets with distinct stimuli and response rules in the context of observed errors, reflecting a generic process. Additionally, the slowing effect and improved accuracy in the post-observed error trial do not occur together, suggesting that they are independent behavioral adjustments in the context of observed errors. PMID:26934579

  14. The surveillance error grid.

    PubMed

    Klonoff, David C; Lias, Courtney; Vigersky, Robert; Clarke, William; Parkes, Joan Lee; Sacks, David B; Kirkman, M Sue; Kovatchev, Boris

    2014-07-01

    Currently used error grids for assessing clinical accuracy of blood glucose monitors are based on out-of-date medical practices. Error grids have not been widely embraced by regulatory agencies for clearance of monitors, but this type of tool could be useful for surveillance of the performance of cleared products. Diabetes Technology Society together with representatives from the Food and Drug Administration, the American Diabetes Association, the Endocrine Society, and the Association for the Advancement of Medical Instrumentation, and representatives of academia, industry, and government, have developed a new error grid, called the surveillance error grid (SEG) as a tool to assess the degree of clinical risk from inaccurate blood glucose (BG) monitors. A total of 206 diabetes clinicians were surveyed about the clinical risk of errors of measured BG levels by a monitor. The impact of such errors on 4 patient scenarios was surveyed. Each monitor/reference data pair was scored and color-coded on a graph per its average risk rating. Using modeled data representative of the accuracy of contemporary meters, the relationships between clinical risk and monitor error were calculated for the Clarke error grid (CEG), Parkes error grid (PEG), and SEG. SEG action boundaries were consistent across scenarios, regardless of whether the patient was type 1 or type 2 or using insulin or not. No significant differences were noted between responses of adult/pediatric or 4 types of clinicians. Although small specific differences in risk boundaries between US and non-US clinicians were noted, the panel felt they did not justify separate grids for these 2 types of clinicians. The data points of the SEG were classified in 15 zones according to their assigned level of risk, which allowed for comparisons with the classic CEG and PEG. Modeled glucose monitor data with realistic self-monitoring of blood glucose errors derived from meter testing experiments plotted on the SEG when compared to

  15. Alcohol and error processing.

    PubMed

    Holroyd, Clay B; Yeung, Nick

    2003-08-01

    A recent study indicates that alcohol consumption reduces the amplitude of the error-related negativity (ERN), a negative deflection in the electroencephalogram associated with error commission. Here, we explore possible mechanisms underlying this result in the context of two recent theories about the neural system that produces the ERN - one based on principles of reinforcement learning and the other based on response conflict monitoring.

  16. Errors from Image Analysis

    SciTech Connect

    Wood, William Monford

    2015-02-23

    We present a systematic study of the standard analysis of rod-pinch radiographs for obtaining quantitative measurements of areal mass densities, and we make suggestions for improving the methodology of obtaining quantitative information from radiographed objects.

  17. Application of advanced shearing techniques to the calibration of autocollimators with small angle generators and investigation of error sources.

    PubMed

    Yandayan, T; Geckeler, R D; Aksulu, M; Akgoz, S A; Ozgur, B

    2016-05-01

    The application of advanced error-separating shearing techniques to the precise calibration of autocollimators with Small Angle Generators (SAGs) was carried out for the first time. The experimental realization was achieved using the High Precision Small Angle Generator (HPSAG) of TUBITAK UME under classical dimensional metrology laboratory environmental conditions. The standard uncertainty value of 5 mas (24.2 nrad) reached by classical calibration method was improved to the level of 1.38 mas (6.7 nrad). Shearing techniques, which offer a unique opportunity to separate the errors of devices without recourse to any external standard, were first adapted by Physikalisch-Technische Bundesanstalt (PTB) to the calibration of autocollimators with angle encoders. It has been demonstrated experimentally in a clean room environment using the primary angle standard of PTB (WMT 220). The application of the technique to a different type of angle measurement system extends the range of the shearing technique further and reveals other advantages. For example, the angular scales of the SAGs are based on linear measurement systems (e.g., capacitive nanosensors for the HPSAG). Therefore, SAGs show different systematic errors when compared to angle encoders. In addition to the error-separation of HPSAG and the autocollimator, detailed investigations on error sources were carried out. Apart from determination of the systematic errors of the capacitive sensor used in the HPSAG, it was also demonstrated that the shearing method enables the unique opportunity to characterize other error sources such as errors due to temperature drift in long term measurements. This proves that the shearing technique is a very powerful method for investigating angle measuring systems, for their improvement, and for specifying precautions to be taken during the measurements.

  18. Medication Errors: New EU Good Practice Guide on Risk Minimisation and Error Prevention.

    PubMed

    Goedecke, Thomas; Ord, Kathryn; Newbould, Victoria; Brosch, Sabine; Arlett, Peter

    2016-06-01

    A medication error is an unintended failure in the drug treatment process that leads to, or has the potential to lead to, harm to the patient. Reducing the risk of medication errors is a shared responsibility between patients, healthcare professionals, regulators and the pharmaceutical industry at all levels of healthcare delivery. In 2015, the EU regulatory network released a two-part good practice guide on medication errors to support both the pharmaceutical industry and regulators in the implementation of the changes introduced with the EU pharmacovigilance legislation. These changes included a modification of the 'adverse reaction' definition to include events associated with medication errors, and the requirement for national competent authorities responsible for pharmacovigilance in EU Member States to collaborate and exchange information on medication errors resulting in harm with national patient safety organisations. To facilitate reporting and learning from medication errors, a clear distinction has been made in the guidance between medication errors resulting in adverse reactions, medication errors without harm, intercepted medication errors and potential errors. This distinction is supported by an enhanced MedDRA(®) terminology that allows for coding all stages of the medication use process where the error occurred in addition to any clinical consequences. To better understand the causes and contributing factors, individual case safety reports involving an error should be followed-up with the primary reporter to gather information relevant for the conduct of root cause analysis where this may be appropriate. Such reports should also be summarised in periodic safety update reports and addressed in risk management plans. Any risk minimisation and prevention strategy for medication errors should consider all stages of a medicinal product's life-cycle, particularly the main sources and types of medication errors during product development. This article

  19. Thermodynamics of Error Correction

    NASA Astrophysics Data System (ADS)

    Sartori, Pablo; Pigolotti, Simone

    2015-10-01

    Information processing at the molecular scale is limited by thermal fluctuations. This can cause undesired consequences in copying information since thermal noise can lead to errors that can compromise the functionality of the copy. For example, a high error rate during DNA duplication can lead to cell death. Given the importance of accurate copying at the molecular scale, it is fundamental to understand its thermodynamic features. In this paper, we derive a universal expression for the copy error as a function of entropy production and work dissipated by the system during wrong incorporations. Its derivation is based on the second law of thermodynamics; hence, its validity is independent of the details of the molecular machinery, be it any polymerase or artificial copying device. Using this expression, we find that information can be copied in three different regimes. In two of them, work is dissipated to either increase or decrease the error. In the third regime, the protocol extracts work while correcting errors, reminiscent of a Maxwell demon. As a case study, we apply our framework to study a copy protocol assisted by kinetic proofreading, and show that it can operate in any of these three regimes. We finally show that, for any effective proofreading scheme, error reduction is limited by the chemical driving of the proofreading reaction.

  20. Error correction and degeneracy in surface codes suffering loss

    SciTech Connect

    Stace, Thomas M.; Barrett, Sean D.

    2010-02-15

    Many proposals for quantum information processing are subject to detectable loss errors. In this paper, we give a detailed account of recent results in which we showed that topological quantum memories can simultaneously tolerate both loss errors and computational errors, with a graceful tradeoff between the threshold for each. We further discuss a number of subtleties that arise when implementing error correction on topological memories. We particularly focus on the role played by degeneracy in the matching algorithms and present a systematic study of its effects on thresholds. We also discuss some of the implications of degeneracy for estimating phase transition temperatures in the random bond Ising model.

  1. Do calculation errors by nurses cause medication errors in clinical practice? A literature review.

    PubMed

    Wright, Kerri

    2010-01-01

    This review aims to examine the available literature to ascertain whether medication errors in clinical practice are the result of nurses miscalculating drug dosages. The research studies highlighting the poor calculation skills of nurses and student nurses have used written drug calculation tests in formal classroom settings [Kapborg, I., 1994. Calculation and administration of drug dosage by Swedish nurses, student nurses and physicians. International Journal for Quality in Health Care 6(4): 389-395; Hutton, M., 1998. Nursing Mathematics: the importance of application. Nursing Standard 13(11): 35-38; Weeks, K., Lynne, P., Torrance, C., 2000. Written drug dosage errors made by students: the threat to clinical effectiveness and the need for a new approach. Clinical Effectiveness in Nursing 4, 20-29; Wright, K., 2004. Investigation to find strategies to improve student nurses' maths skills. British Journal of Nursing 13(21), 1280-1287; Wright, K., 2005. An exploration into the most effective way to teach drug calculation skills to nursing students. Nurse Education Today 25, 430-436], but there have been no reviews of the literature on medication errors in practice that specifically look at whether the medication errors are caused by nurses' poor calculation skills. The databases Medline, CINAHL, British Nursing Index (BNI), the Journal of the American Medical Association (JAMA) and Archives, and Cochrane reviews were searched for research studies or systematic reviews which reported on the incidence or causes of drug errors in clinical practice. In total, 33 articles met the criteria for this review. There were no studies that examined nurses' drug calculation errors in practice. As a result, studies and systematic reviews that investigated the types and causes of drug errors were examined to establish whether miscalculations by nurses were the causes of errors. The review found insufficient evidence to suggest that medication errors are caused by nurses' poor

  2. Interactions of timing and prediction error learning.

    PubMed

    Kirkpatrick, Kimberly

    2014-01-01

    Timing and prediction error learning have historically been treated as independent processes, but growing evidence has indicated that they are not orthogonal. Timing emerges at the earliest time point when conditioned responses are observed, and temporal variables modulate prediction error learning in both simple conditioning and cue competition paradigms. In addition, prediction errors, through changes in reward magnitude or value alter timing of behavior. Thus, there appears to be a bi-directional interaction between timing and prediction error learning. Modern theories have attempted to integrate the two processes with mixed success. A neurocomputational approach to theory development is espoused, which draws on neurobiological evidence to guide and constrain computational model development. Heuristics for future model development are presented with the goal of sparking new approaches to theory development in the timing and prediction error fields. PMID:23962670

  3. Human error in aviation operations

    NASA Technical Reports Server (NTRS)

    Nagel, David C.

    1988-01-01

    The role of human error in commercial and general aviation accidents and the techniques used to evaluate it are reviewed from a human-factors perspective. Topics addressed include the general decline in accidents per million departures since the 1960s, the increase in the proportion of accidents due to human error, methods for studying error, theoretical error models, and the design of error-resistant systems. Consideration is given to information acquisition and processing errors, visually guided flight, disorientation, instrument-assisted guidance, communication errors, decision errors, debiasing, and action errors.

  4. Dysgraphia in dementia: a systematic investigation of graphemic buffer features in a case series.

    PubMed

    Haslam, Catherine; Kay, Janice; Tree, Jeremy; Baron, Rachel

    2009-08-01

    In this paper we report findings from a systematic investigation of spelling performance in three patients - PR, RH and AC - who despite their different medical diagnoses showed a very consistent pattern of dysgraphia, more typical of graphemic buffer disorder. Systematic investigation of the features characteristic of this disorder in Study 1 confirmed the presence of length effects in spelling, classic errors (i.e., letter substitution, omission, addition, transposition), a bow-shaped curve in the serial position of errors and consistency in substitution of consonants and vowels. However, in addition to this clear pattern of graphemic buffer impairment, evidence of regularity effects and phonologically plausible errors in spelling raised questions about the integrity of the lexical spelling route in each case. A second study was conducted, using a word and non-word immediate delay copy task, in an attempt to minimise the influence of orthographic representations on written output. Persistence of graphemic buffer errors would suggest an additional, independent source of damage. Two patients, PR and AC, took part and in both cases symptoms of graphemic buffer disorder persisted. Together, these findings suggest that damage to the graphemic buffer may be more common than currently suggested in the literature. PMID:19370478

  5. Error monitoring in musicians.

    PubMed

    Maidhof, Clemens

    2013-01-01

    To err is human, and hence even professional musicians make errors occasionally during their performances. This paper summarizes recent work investigating error monitoring in musicians, i.e., the processes and their neural correlates associated with the monitoring of ongoing actions and the detection of deviations from intended sounds. Electroencephalography (EEG) studies reported an early component of the event-related potential (ERP) occurring before the onsets of pitch errors. This component, which can be altered in musicians with focal dystonia, likely reflects processes of error detection and/or error compensation, i.e., attempts to cancel the undesired sensory consequence (a wrong tone) a musician is about to perceive. Thus, auditory feedback seems not to be a prerequisite for error detection, consistent with previous behavioral results. In contrast, when auditory feedback is externally manipulated and thus unexpected, motor performance can be severely distorted, although not all feedback alterations result in performance impairments. Recent studies investigating the neural correlates of feedback processing showed that unexpected feedback elicits an ERP component after note onsets, which shows larger amplitudes during music performance than during mere perception of the same musical sequences. Hence, these results stress the role of motor actions for the processing of auditory information. Furthermore, recent methodological advances like the combination of 3D motion capture techniques with EEG will be discussed. Such combinations of different measures can potentially help to disentangle the roles of different feedback types such as proprioceptive and auditory feedback, and in general to arrive at a better understanding of the complex interactions between the motor and auditory domain during error monitoring. Finally, outstanding questions and future directions in this context will be discussed. PMID:23898255

  6. Errata: Papers in Error Analysis.

    ERIC Educational Resources Information Center

    Svartvik, Jan, Ed.

    Papers presented at the symposium of error analysis in Lund, Sweden, in September 1972, approach error analysis specifically in its relation to foreign language teaching and second language learning. Error analysis is defined as having three major aspects: (1) the description of the errors, (2) the explanation of errors by means of contrastive…

  7. Neuroimaging measures of error-processing: Extracting reliable signals from event-related potentials and functional magnetic resonance imaging.

    PubMed

    Steele, Vaughn R; Anderson, Nathaniel E; Claus, Eric D; Bernat, Edward M; Rao, Vikram; Assaf, Michal; Pearlson, Godfrey D; Calhoun, Vince D; Kiehl, Kent A

    2016-05-15

    Error-related brain activity has become an increasingly important focus of cognitive neuroscience research utilizing both event-related potentials (ERPs) and functional magnetic resonance imaging (fMRI). Given the significant time and resources required to collect these data, it is important for researchers to plan their experiments such that stable estimates of error-related processes can be achieved efficiently. Reliability of error-related brain measures will vary as a function of the number of error trials and the number of participants included in the averages. Unfortunately, systematic investigations of the number of events and participants required to achieve stability in error-related processing are sparse, and none have addressed variability in sample size. Our goal here is to provide data compiled from a large sample of healthy participants (n=180) performing a Go/NoGo task, resampled iteratively to demonstrate the relative stability of measures of error-related brain activity given a range of sample sizes and event numbers included in the averages. We examine ERP measures of error-related negativity (ERN/Ne) and error positivity (Pe), as well as event-related fMRI measures locked to False Alarms. We find that achieving stable estimates of ERP measures required four to six error trials and approximately 30 participants; fMRI measures required six to eight trials and approximately 40 participants. Fewer trials and participants were required for measures where additional data reduction techniques (i.e., principal component analysis and independent component analysis) were implemented. Ranges of reliability statistics for various sample sizes and numbers of trials are provided. We intend this to be a useful resource for those planning or evaluating ERP or fMRI investigations with tasks designed to measure error-processing.
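
    A minimal sketch (Python; the effect size, noise levels, and population are simulated for illustration, not the study's data) of the resampling logic: draw subsamples of participants, average a fixed number of noisy trials per participant, and watch the spread of the group estimate shrink as both numbers grow.

        import numpy as np

        rng = np.random.default_rng(1)
        population = -5.0 + rng.normal(0, 2.0, size=500)   # per-subject true ERN means

        def group_estimate(n_subjects, n_trials, trial_sd=8.0):
            subjects = rng.choice(population, n_subjects, replace=False)
            # Each subject's measured value is their mean plus trial-averaged noise.
            measured = subjects + rng.normal(0, trial_sd / np.sqrt(n_trials), n_subjects)
            return measured.mean()

        for n_sub, n_tr in [(10, 4), (30, 6), (40, 8)]:
            estimates = [group_estimate(n_sub, n_tr) for _ in range(1000)]
            print(n_sub, n_tr, f"SD of group mean: {np.std(estimates):.2f}")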

  8. Computation of Standard Errors

    PubMed Central

    Dowd, Bryan E; Greene, William H; Norton, Edward C

    2014-01-01

    Objectives: We discuss the problem of computing the standard errors of functions involving estimated parameters and provide the relevant computer code for three different computational approaches using two popular computer packages. Study Design: We show how to compute the standard errors of several functions of interest: the predicted value of the dependent variable for a particular subject, and the effect of a change in an explanatory variable on the predicted value of the dependent variable, both for an individual subject and as an average effect for a sample of subjects. Empirical Application: Using a publicly available dataset, we explain three different methods of computing standard errors: the delta method, Krinsky–Robb, and bootstrapping. We provide computer code for Stata 12 and LIMDEP 10/NLOGIT 5. Conclusions: In most applications, the choice of computational method for standard errors of functions of estimated parameters is a matter of convenience. However, when computing standard errors of the sample average of functions that involve both estimated parameters and nonstochastic explanatory variables, it is important to consider the sources of variation in the function's values. PMID:24800304
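
    Two of the three approaches can be sketched in a few lines (Python; the coefficient vector, covariance matrix, and function g are hypothetical): the delta method propagates the parameter covariance through the gradient of g, while Krinsky-Robb draws parameter vectors from their estimated asymptotic distribution and takes the standard deviation of g over the draws.

        import numpy as np

        beta = np.array([0.5, -1.2])                   # estimated coefficients
        V = np.array([[0.04, 0.01], [0.01, 0.09]])     # their covariance matrix
        g = lambda b: np.exp(b[0] + b[1])              # function of interest

        # Delta method: se = sqrt(grad' V grad), gradient by finite differences.
        eps = 1e-6
        grad = np.array([(g(beta + eps * np.eye(2)[i]) - g(beta)) / eps
                         for i in range(2)])
        se_delta = np.sqrt(grad @ V @ grad)

        # Krinsky-Robb: simulate from the asymptotic distribution of the estimates.
        rng = np.random.default_rng(0)
        draws = rng.multivariate_normal(beta, V, size=10_000)
        se_kr = np.array([g(d) for d in draws]).std()
        print(se_delta, se_kr)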

  9. Compact disk error measurements

    NASA Technical Reports Server (NTRS)

    Howe, D.; Harriman, K.; Tehranchi, B.

    1993-01-01

    The objectives of this project are as follows: provide hardware and software that will perform simple, real-time, high resolution (single-byte) measurement of the error burst and good data gap statistics seen by a photoCD player read channel when recorded CD write-once discs of variable quality (i.e., condition) are being read; extend the above system to enable measurement of the hard decision (i.e., 1-bit error flags) and soft decision (i.e., 2-bit error flags) decoding information that is produced/used by the Cross Interleaved - Reed - Solomon - Code (CIRC) block decoder employed in the photoCD player read channel; construct a model that uses data obtained via the systems described above to produce meaningful estimates of output error rates (due to both uncorrected ECC words and misdecoded ECC words) when a CD disc having specific (measured) error statistics is read (completion date to be determined); and check the hypothesis that current adaptive CIRC block decoders are optimized for pressed (DAD/ROM) CD discs. If warranted, do a conceptual design of an adaptive CIRC decoder that is optimized for write-once CD discs.
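
    The first objective amounts to run-length accounting on a stream of per-byte error flags. A minimal sketch (Python; the flag stream is hypothetical) of that measurement:

        from itertools import groupby

        def burst_gap_stats(flags):
            # Run lengths of error bursts (flag = 1) and good-data gaps (flag = 0).
            bursts, gaps = [], []
            for value, run in groupby(flags):
                (bursts if value else gaps).append(len(list(run)))
            return bursts, gaps

        print(burst_gap_stats([0, 0, 1, 1, 1, 0, 1, 0, 0, 0]))
        # ([3, 1], [2, 1, 3])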

  10. Reduction of Orifice-Induced Pressure Errors

    NASA Technical Reports Server (NTRS)

    Plentovich, Elizabeth B.; Gloss, Blair B.; Eves, John W.; Stack, John P.

    1987-01-01

    Use of porous-plug orifice reduces or eliminates errors, induced by orifice itself, in measuring static pressure on airfoil surface in wind-tunnel experiments. Piece of sintered metal press-fitted into static-pressure orifice so it matches surface contour of model. Porous material reduces orifice-induced pressure error associated with conventional orifice of same or smaller diameter. Also reduces or eliminates additional errors in pressure measurement caused by orifice imperfections. Provides more accurate measurements in regions with very thin boundary layers.

  11. Dialogues on prediction errors.

    PubMed

    Niv, Yael; Schoenbaum, Geoffrey

    2008-07-01

    The recognition that computational ideas from reinforcement learning are relevant to the study of neural circuits has taken the cognitive neuroscience community by storm. A central tenet of these models is that discrepancies between actual and expected outcomes can be used for learning. Neural correlates of such prediction-error signals have been observed now in midbrain dopaminergic neurons, striatum, amygdala and even prefrontal cortex, and models incorporating prediction errors have been invoked to explain complex phenomena such as the transition from goal-directed to habitual behavior. Yet, like any revolution, the fast-paced progress has left an uneven understanding in its wake. Here, we provide answers to ten simple questions about prediction errors, with the aim of exposing both the strengths and the limitations of this active area of neuroscience research.

  12. Experimental Quantum Error Detection

    PubMed Central

    Jin, Xian-Min; Yi, Zhen-Huan; Yang, Bin; Zhou, Fei; Yang, Tao; Peng, Cheng-Zhi

    2012-01-01

    Faithful transmission of quantum information is a crucial ingredient in quantum communication networks. To overcome the unavoidable decoherence in a noisy channel, to date, many efforts have been made to transmit one state by consuming large numbers of time-synchronized ancilla states. However, such huge demands of quantum resources are hard to meet with current technology and this restricts practical applications. Here we experimentally demonstrate quantum error detection, an economical approach to reliably protecting a qubit against bit-flip errors. Arbitrary unknown polarization states of single photons and entangled photons are converted into time bins deterministically via a modified Franson interferometer. Noise arising in both 10 m and 0.8 km fiber, which induces associated errors on the reference frame of time bins, is filtered when photons are detected. The demonstrated resource efficiency and state independence make this protocol a promising candidate for implementing a real-world quantum communication network. PMID:22953047

  13. Discretization error estimation and exact solution generation using the method of nearby problems.

    SciTech Connect

    Sinclair, Andrew J.; Raju, Anil; Kurzen, Matthew J.; Roy, Christopher John; Phillips, Tyrone S.

    2011-10-01

    The Method of Nearby Problems (MNP), a form of defect correction, is examined as a method for generating exact solutions to partial differential equations and as a discretization error estimator. For generating exact solutions, four-dimensional spline fitting procedures were developed and implemented into a MATLAB code for generating spline fits on structured domains with arbitrary levels of continuity between spline zones. For discretization error estimation, MNP/defect correction only requires a single additional numerical solution on the same grid (as compared to Richardson extrapolation which requires additional numerical solutions on systematically-refined grids). When used for error estimation, it was found that continuity between spline zones was not required. A number of cases were examined including 1D and 2D Burgers equation, the 2D compressible Euler equations, and the 2D incompressible Navier-Stokes equations. The discretization error estimation results compared favorably to Richardson extrapolation and had the advantage of only requiring a single grid to be generated.
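
    For comparison with the single-grid MNP estimator, here is a minimal sketch (Python; the grid values, refinement ratio, and formal order are hypothetical) of the Richardson-extrapolation error estimate mentioned above, which requires solutions on two systematically refined grids.

        def richardson_error(f_coarse, f_fine, r=2.0, p=2.0):
            # Discretization-error estimate on the fine grid for a scheme of
            # formal order p and grid refinement ratio r.
            return (f_fine - f_coarse) / (r**p - 1.0)

        # e.g. values of some output functional on coarse and fine grids:
        print(richardson_error(1.052, 1.013))   # about -0.013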

  14. Error Free Software

    NASA Technical Reports Server (NTRS)

    1985-01-01

    A mathematical theory for the development of "higher order" software to catch computer mistakes resulted from a Johnson Space Center contract for Apollo spacecraft navigation. Two women who were involved in the project formed Higher Order Software, Inc. to develop and market the system of error analysis and correction. They designed software that is logically error-free and that, in one instance, was found to increase productivity by 600%. USE.IT defines its objectives using AXES -- a user can write in English and the system converts to computer languages. It is employed by several large corporations.

  16. Phosphazene additives

    DOEpatents

    Harrup, Mason K; Rollins, Harry W

    2013-11-26

    An additive comprising a phosphazene compound that has at least two reactive functional groups and at least one capping functional group bonded to phosphorus atoms of the phosphazene compound. One of the at least two reactive functional groups is configured to react with cellulose and the other of the at least two reactive functional groups is configured to react with a resin, such as an amine resin or a polycarboxylic acid resin. The at least one capping functional group is selected from the group consisting of a short chain ether group, an alkoxy group, or an aryloxy group. Also disclosed are an additive-resin admixture, a method of treating a wood product, and a wood product.

  17. Potlining Additives

    SciTech Connect

    Rudolf Keller

    2004-08-10

    In this project, a concept to improve the performance of aluminum production cells by introducing potlining additives was examined and tested. Boron oxide was added to cathode blocks, and titanium was dissolved in the metal pool; this resulted in the formation of titanium diboride and caused the molten aluminum to wet the carbonaceous cathode surface. Such wetting reportedly leads to operational improvements and extended cell life. In addition, boron oxide suppresses cyanide formation. This final report presents and discusses the results of this project. Substantial economic benefits for the practical implementation of the technology are projected, especially for modern cells with graphitized blocks. For example, with an energy savings of about 5% and an increase in pot life from 1500 to 2500 days, a cost savings of $0.023 per pound of aluminum produced is projected for a 200 kA pot.

  18. Improved model predictive control of resistive wall modes by error field estimator in EXTRAP T2R

    NASA Astrophysics Data System (ADS)

    Setiadi, A. C.; Brunsell, P. R.; Frassinetti, L.

    2016-12-01

    Many implementations of a model-based approach for toroidal plasmas have shown better control performance than the conventional type of feedback controller. One prerequisite of model-based control is the availability of a control-oriented model. This model can be obtained empirically through a systematic procedure called system identification. Such a model is used in this work to design a model predictive controller to stabilize multiple resistive wall modes in the EXTRAP T2R reversed-field pinch. Model predictive control is an advanced control method that can optimize the future behaviour of a system. Furthermore, this paper discusses an additional use of the empirical model, which is to estimate the error field in EXTRAP T2R. Two potential methods that can estimate the error field are discussed. The error field estimator is then combined with the model predictive control and yields better radial magnetic field suppression.
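
    A toy sketch of the two ingredients named above, under strongly simplified assumptions (a scalar linear plant and a quadratic cost; this is not the EXTRAP T2R implementation): least-squares system identification followed by a receding-horizon controller.

      import numpy as np

      # (1) identify x[k+1] = a*x[k] + b*u[k] from data by least squares
      rng = np.random.default_rng(1)
      a_true, b_true = 0.9, 0.5
      u = rng.standard_normal(200)
      x = np.zeros(201)
      for k in range(200):
          x[k + 1] = a_true * x[k] + b_true * u[k] + 0.01 * rng.standard_normal()
      a_hat, b_hat = np.linalg.lstsq(np.column_stack([x[:-1], u]), x[1:], rcond=None)[0]

      # (2) receding-horizon control: pick the input sequence minimizing
      #     sum(x^2 + rho*u^2) over a horizon, apply only the first input
      H, rho = 10, 0.1
      def mpc_step(x0):
          # predicted states are linear in the inputs: x = Phi @ u + free
          Phi = np.array([[b_hat * a_hat ** (i - j - 1) if j < i else 0.0
                           for j in range(H)] for i in range(1, H + 1)])
          free = np.array([a_hat ** i * x0 for i in range(1, H + 1)])
          M = np.vstack([Phi, np.sqrt(rho) * np.eye(H)])   # quadratic cost
          rhs = np.concatenate([-free, np.zeros(H)])       # as least squares
          return np.linalg.lstsq(M, rhs, rcond=None)[0][0]

      state = 5.0
      for k in range(20):
          state = a_true * state + b_true * mpc_step(state)
      print(round(state, 4))                # regulated toward zero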

  19. A new analysis of fine-structure constant measurements and modelling errors from quasar absorption lines

    NASA Astrophysics Data System (ADS)

    Wilczynska, Michael R.; Webb, John K.; King, Julian A.; Murphy, Michael T.; Bainbridge, Matthew B.; Flambaum, Victor V.

    2015-12-01

    We present an analysis of 23 absorption systems along the lines of sight towards 18 quasars in the redshift range of 0.4 ≤ zabs ≤ 2.3 observed on the Very Large Telescope (VLT) using the Ultraviolet and Visual Echelle Spectrograph (UVES). Considering both statistical and systematic error contributions we find a robust estimate of the weighted mean deviation of the fine-structure constant from its current, laboratory value of Δα/α = (0.22 ± 0.23) × 10-5, consistent with the dipole variation reported in Webb et al. and King et al. This paper also examines modelling methodologies and systematic effects. In particular, we focus on the consequences of fitting quasar absorption systems with too few absorbing components and of selectively fitting only the stronger components in an absorption complex. We show that using insufficient continuum regions around an absorption complex causes a significant increase in the scatter of a sample of Δα/α measurements, thus unnecessarily reducing the overall precision. We further show that fitting absorption systems with too few velocity components also results in a significant increase in the scatter of Δα/α measurements, and in addition causes Δα/α error estimates to be systematically underestimated. These results thus identify some of the potential pitfalls in analysis techniques and provide a guide for future analyses.
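
    The weighted-mean combination described above reduces to standard inverse-variance weighting, with statistical and systematic errors added in quadrature per system; a sketch with made-up numbers (the paper's measurements are not reproduced here):

      import numpy as np

      # inverse-variance weighted mean; all values below are illustrative
      da = np.array([0.1, 0.4, -0.2, 0.3]) * 1e-5        # da/a per absorber
      stat = np.array([0.2, 0.3, 0.25, 0.4]) * 1e-5      # statistical errors
      sys = np.array([0.1, 0.1, 0.15, 0.1]) * 1e-5       # systematic errors
      w = 1.0 / (stat**2 + sys**2)
      mean = np.sum(w * da) / np.sum(w)
      err = np.sqrt(1.0 / np.sum(w))
      print(f"weighted mean da/a = {mean:.2e} +/- {err:.2e}")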

  20. Orwell's Instructive Errors

    ERIC Educational Resources Information Center

    Julian, Liam

    2009-01-01

    In this article, the author talks about George Orwell, his instructive errors, and the manner in which Orwell pierced worthless theory, faced facts and defended decency (with fluctuating success), and largely ignored the tradition of accumulated wisdom that has rendered him a timeless teacher--one whose inadvertent lessons, while infrequently…

  1. Help prevent hospital errors

    MedlinePlus


  2. Rapid mapping of volumetric machine errors using distance measurements

    SciTech Connect

    Krulewich, D.A.

    1998-04-01

    This paper describes a relatively inexpensive, fast, and easy-to-execute approach to mapping the volumetric errors of a machine tool, coordinate measuring machine, or robot. An error map is used to characterize a machine or to improve its accuracy by compensating for the systematic errors. The method consists of three steps: (1) modeling the relationship between the volumetric error and the current state of the machine; (2) acquiring error data based on distance measurements throughout the work volume; and (3) fitting the error model using the nonlinear equation for the distance. The error model is formulated from the kinematic relationship among the six degrees of freedom of error on each moving axis. Each parametric error is expressed as a function of position, and these are combined to predict the error between the functional point and the workpiece, also as a function of position. A series of distances between several fixed base locations and various functional points in the work volume is measured using a Laser Ball Bar (LBB). Each measured distance is a nonlinear function of the commanded location of the machine, the machine error, and the base locations. Using the error model, the nonlinear equation is solved, producing a fit for the error model. Note that, given approximate distances between each pair of base locations, the exact base locations in the machine coordinate system are determined during the nonlinear fitting procedure. Furthermore, with the use of more than three base locations, bias error in the measuring instrument can be removed. The volumetric errors of a three-axis commercial machining center have been mapped using this procedure. In this study, only errors associated with the nominal position of the machine were considered. Other errors, such as thermally induced and load-induced errors, were not considered, although the mathematical model has the ability to account for them. Due to the proprietary nature of the projects we are
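
    A toy version of the fitting step, assuming a deliberately simple error model (a per-axis scale error) in place of the full kinematic model; scipy's least_squares stands in for whatever solver the authors used, and all numbers are illustrative.

      import numpy as np
      from scipy.optimize import least_squares

      rng = np.random.default_rng(2)
      bases = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])  # base points
      cmd = rng.uniform(0.2, 0.8, size=(30, 2))               # commanded XY
      scale_true = np.array([1.0e-3, -2.0e-3])                # per-axis scale

      def actual(pos, scale):
          # toy error model: actual = commanded * (1 + axis scale error)
          return pos * (1.0 + scale)

      d_meas = np.linalg.norm(actual(cmd, scale_true)[:, None, :] - bases, axis=2)
      d_meas += 1e-6 * rng.standard_normal(d_meas.shape)      # instrument noise

      def residuals(scale):
          d = np.linalg.norm(actual(cmd, scale)[:, None, :] - bases, axis=2)
          return (d - d_meas).ravel()       # nonlinear distance equations

      fit = least_squares(residuals, x0=np.zeros(2))
      print(fit.x)                          # recovers the per-axis scale errors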

  3. Challenge and Error: Critical Events and Attention-Related Errors

    ERIC Educational Resources Information Center

    Cheyne, James Allan; Carriere, Jonathan S. A.; Solman, Grayden J. F.; Smilek, Daniel

    2011-01-01

    Attention lapses resulting from reactivity to task challenges and their consequences constitute a pervasive factor affecting everyday performance errors and accidents. A bidirectional model of attention lapses (error <-> attention-lapse: Cheyne, Solman, Carriere, & Smilek, 2009) argues that errors beget errors by generating attention…

  4. Feature-binding errors after eye movements and shifts of attention.

    PubMed

    Golomb, Julie D; L'heureux, Zara E; Kanwisher, Nancy

    2014-05-01

    When people move their eyes, the eye-centered (retinotopic) locations of objects must be updated to maintain world-centered (spatiotopic) stability. Here, we demonstrated that the attentional-updating process temporarily distorts the fundamental ability to bind object locations with their features. Subjects were simultaneously presented with four colors after a saccade, one in a precued spatiotopic target location, and were instructed to report the target's color using a color wheel. Subjects' reports were systematically shifted in color space toward the color of the distractor in the retinotopic location of the cue. Probabilistic modeling exposed both crude swapping errors and subtler feature mixing (as if the retinotopic color had blended into the spatiotopic percept). Additional experiments conducted without saccades revealed that the two types of errors stemmed from different attentional mechanisms (attention shifting vs. splitting). Feature mixing not only reflects a new perceptual phenomenon, but also provides novel insight into how attention is remapped across saccades. PMID:24647672

  5. Enhanced orbit determination filter: Inclusion of ground system errors as filter parameters

    NASA Technical Reports Server (NTRS)

    Masters, W. C.; Scheeres, D. J.; Thurman, S. W.

    1994-01-01

    The theoretical aspects of an orbit determination filter that incorporates ground-system error sources as model parameters for use in interplanetary navigation are presented in this article. This filter, which is derived from sequential filtering theory, allows a systematic treatment of errors in calibrations of transmission media, station locations, and earth orientation models associated with ground-based radio metric data, in addition to the modeling of the spacecraft dynamics. The discussion includes a mathematical description of the filter and an analytical comparison of its characteristics with more traditional filtering techniques used in this application. The analysis in this article shows that this filter has the potential to generate navigation products of substantially greater accuracy than more traditional filtering procedures.
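
    The core idea can be sketched for a scalar toy problem: augment the filter state with a measurement bias (standing in for a ground-system error source) so that it is estimated along with the dynamics. This is not the flight filter, only the augmentation pattern, and every number below is made up.

      import numpy as np

      rng = np.random.default_rng(3)
      F = np.array([[0.95, 0.0],            # state = [position, bias]
                    [0.0, 1.0]])            # bias modeled as constant
      Hm = np.array([[1.0, 1.0]])           # measurement = position + bias
      Q = np.diag([1e-6, 1e-10])
      R = np.array([[0.05**2]])

      pos_true, bias_true = 5.0, 0.7
      x, P = np.zeros(2), np.eye(2)
      for k in range(300):
          pos_true *= 0.95                                   # true dynamics
          z = pos_true + bias_true + 0.05 * rng.standard_normal()
          x, P = F @ x, F @ P @ F.T + Q                      # predict
          S = Hm @ P @ Hm.T + R                              # innovation cov.
          K = P @ Hm.T @ np.linalg.inv(S)                    # gain
          x = x + (K @ (z - Hm @ x)).ravel()                 # update
          P = (np.eye(2) - K @ Hm) @ P
      print(x.round(3))                     # bias estimate approaches 0.7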

  6. Test models for improving filtering with model errors through stochastic parameter estimation

    SciTech Connect

    Gershgorin, B.; Harlim, J.; Majda, A.J.

    2010-01-01

    The filtering skill for turbulent signals from nature is often limited by model errors created by utilizing an imperfect model for filtering. Updating the parameters in the imperfect model through stochastic parameter estimation is one way to increase filtering skill and model performance. Here a suite of stringent test models for filtering with stochastic parameter estimation is developed based on the Stochastic Parameterization Extended Kalman Filter (SPEKF). These new SPEKF-algorithms systematically correct both multiplicative and additive biases and involve exact formulas for propagating the mean and covariance including the parameters in the test model. A comprehensive study is presented of robust parameter regimes for increasing filtering skill through stochastic parameter estimation for turbulent signals as the observation time and observation noise are varied and even when the forcing is incorrectly specified. The results here provide useful guidelines for filtering turbulent signals in more complex systems with significant model errors.

  7. Testing Scientific Software: A Systematic Literature Review

    PubMed Central

    Kanewala, Upulee; Bieman, James M.

    2014-01-01

    Context Scientific software plays an important role in critical decision making, for example making weather predictions based on climate models, and computation of evidence for research publications. Recently, scientists have had to retract publications due to errors caused by software faults. Systematic testing can identify such faults in code. Objective This study aims to identify specific challenges, proposed solutions, and unsolved problems faced when testing scientific software. Method We conducted a systematic literature survey to identify and analyze relevant literature. We identified 62 studies that provided relevant information about testing scientific software. Results We found that challenges faced when testing scientific software fall into two main categories: (1) testing challenges that occur due to characteristics of scientific software such as oracle problems and (2) testing challenges that occur due to cultural differences between scientists and the software engineering community such as viewing the code and the model that it implements as inseparable entities. In addition, we identified methods to potentially overcome these challenges and their limitations. Finally we describe unsolved challenges and how software engineering researchers and practitioners can help to overcome them. Conclusions Scientific software presents special challenges for testing. Specifically, cultural differences between scientist developers and software engineers, along with the characteristics of the scientific software make testing more difficult. Existing techniques such as code clone detection can help to improve the testing process. Software engineers should consider special challenges posed by scientific software such as oracle problems when developing testing techniques. PMID:25125798

  8. Inborn Errors of Metabolism.

    PubMed

    Ezgu, Fatih

    2016-01-01

    Inborn errors of metabolism are single gene disorders resulting from defects in the biochemical pathways of the body. Although these disorders are individually rare, collectively they account for a significant portion of childhood disability and deaths. Most of the disorders are inherited as autosomal recessive, whereas autosomal dominant and X-linked disorders are also present. The clinical signs and symptoms arise from the accumulation of the toxic substrate, deficiency of the product, or both. Depending on the residual activity of the deficient enzyme, the initiation of the clinical picture may vary, starting from the newborn period up until adulthood. Hundreds of disorders have been described, and there is considerable clinical overlap between certain inborn errors. As a result, the definitive diagnosis of inborn errors depends on enzyme assays or genetic tests. Especially during recent years, significant achievements have been made in the biochemical and genetic diagnosis of inborn errors. Techniques such as tandem mass spectrometry and gas chromatography for biochemical diagnosis, and microarrays and next-generation sequencing for genetic diagnosis, have enabled rapid and accurate diagnosis. These achievements have also enabled newborn screening and prenatal diagnosis. Parallel to the development of diagnostic methods, significant progress has also been made in treatment. Treatment approaches such as special diets, enzyme replacement therapy, substrate inhibition, and organ transplantation have been widely used. It is clear that, with the help of the preclinical and clinical research carried out for inborn errors, better diagnostic methods and better treatment approaches will very likely become available.

  9. Microdensitometer errors: Their effect on photometric data reduction

    NASA Technical Reports Server (NTRS)

    Bozyan, E. P.; Opal, C. B.

    1984-01-01

    The performance of densitometers used for photometric data reduction of high dynamic range electrographic plate material is analyzed. Densitometer repeatability is tested by comparing two scans of one plate. Internal densitometer errors are examined by constructing histograms of digitized densities and finding inoperative bits and differential nonlinearity in the analog to digital converter. Such problems appear common to the four densitometers used in this investigation and introduce systematic algorithm dependent errors in the results. Strategies to improve densitometer performance are suggested.
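
    The histogram check described above can be illustrated as follows: a stuck ADC bit shows up as empty histogram bins at every code where that bit should be set. The 8-bit digitizer and the stuck bit below are simulated, not taken from the paper.

      import numpy as np

      rng = np.random.default_rng(4)
      true_codes = rng.integers(0, 256, size=100_000)   # ideal 8-bit output
      observed = true_codes & ~0b00000100               # bit 2 stuck at zero
      hist = np.bincount(observed, minlength=256)
      empty = np.flatnonzero(hist == 0)                 # never-seen codes
      # every missing code has bit 2 set, exposing the inoperative bit
      print(((empty & 0b100) != 0).all(), len(empty))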

  10. Breast Patient Setup Error Assessment: Comparison of Electronic Portal Image Devices and Cone-Beam Computed Tomography Matching Results

    SciTech Connect

    Topolnjak, Rajko; Sonke, Jan-Jakob; Nijkamp, Jasper; Rasch, Coen; Minkema, Danny; Remeijer, Peter; Vliet-Vroegindeweij, Corine van

    2010-11-15

    Purpose: To quantify the differences in setup errors measured with cone-beam computed tomography (CBCT) and electronic portal image devices (EPID) in breast cancer patients. Methods and Materials: Repeat CBCT scans were acquired for routine offline setup verification in 20 breast cancer patients. During the CBCT imaging fractions, EPID images of the treatment beams were recorded. Registrations of the bony anatomy for CBCT to planning CT and EPID to digitally reconstructed radiographs (DRRs) were compared. In addition, similar measurements of an anthropomorphic thorax phantom were acquired. Bland-Altman and linear regression analysis were performed for clinical and phantom registrations. Systematic and random setup errors were quantified for CBCT- and EPID-driven correction protocols in the EPID coordinate system (U, V), with V parallel to the cranial-caudal axis and U perpendicular to V and the central beam axis. Results: Bland-Altman analysis of clinical EPID and CBCT registrations yielded 4- to 6-mm limits of agreement, indicating that the two methods were not compatible. The EPID-based setup errors were smaller than the CBCT-based setup errors. Phantom measurements showed that CBCT accurately measures setup error, whereas EPID underestimates setup errors in the cranial-caudal direction. In the clinical measurements, the residual bony anatomy setup errors after offline CBCT-based corrections were Σ_U = 1.4 mm, Σ_V = 1.7 mm, and σ_U = 2.6 mm, σ_V = 3.1 mm. Residual setup errors of EPID-driven corrections, corrected for underestimation, were estimated at Σ_U = 2.2 mm, Σ_V = 3.3 mm, and σ_U = 2.9 mm, σ_V = 2.9 mm. Conclusion: EPID registration underestimated the actual bony anatomy setup error in breast cancer patients by 20% to 50%. Using CBCT decreased setup uncertainties significantly.
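
    For reference, the Bland-Altman computation behind the limits of agreement, run on synthetic numbers rather than the clinical data:

      import numpy as np

      rng = np.random.default_rng(5)
      cbct = rng.normal(0.0, 2.0, 100)                  # setup errors, mm
      epid = 0.7 * cbct + rng.normal(0.0, 1.0, 100)     # underestimating EPID
      diff = epid - cbct
      bias = diff.mean()
      loa = 1.96 * diff.std(ddof=1)                     # 95% limits of agreement
      print(f"bias {bias:.2f} mm, limits {bias - loa:.2f} to {bias + loa:.2f} mm")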

  11. Speech Errors across the Lifespan

    ERIC Educational Resources Information Center

    Vousden, Janet I.; Maylor, Elizabeth A.

    2006-01-01

    Dell, Burger, and Svec (1997) proposed that the proportion of speech errors classified as anticipations (e.g., "moot and mouth") can be predicted solely from the overall error rate, such that the greater the error rate, the lower the anticipatory proportion (AP) of errors. We report a study examining whether this effect applies to changes in error…

  12. Control by model error estimation

    NASA Technical Reports Server (NTRS)

    Likins, P. W.; Skelton, R. E.

    1976-01-01

    Modern control theory relies upon the fidelity of the mathematical model of the system. Truncated modes, external disturbances, and parameter errors in linear system models are corrected by augmenting to the original system of equations an 'error system' which is designed to approximate the effects of such model errors. A Chebyshev error system is developed for application to the Large Space Telescope (LST).

  13. Marking Errors: A Simple Strategy

    ERIC Educational Resources Information Center

    Timmons, Theresa Cullen

    1987-01-01

    Indicates that using highlighters to mark errors produced a 76% class improvement in removing comma errors and a 95.5% improvement in removing apostrophe errors. Outlines two teaching procedures, to be followed before introducing this tool to the class, that enable students to remove errors at this effective rate. (JD)

  14. Automatic Error Analysis Using Intervals

    ERIC Educational Resources Information Center

    Rothwell, E. J.; Cloud, M. J.

    2012-01-01

    A technique for automatic error analysis using interval mathematics is introduced. A comparison to standard error propagation methods shows that in cases involving complicated formulas, the interval approach gives comparable error estimates with much less effort. Several examples are considered, and numerical errors are computed using the INTLAB…
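
    A tiny interval type conveys the idea (this is an illustration, not INTLAB): each arithmetic operation returns an interval guaranteed to contain the true result, so error bounds propagate automatically.

      # toy interval type: each operation returns bounds guaranteed to
      # contain the true result, so errors propagate without hand analysis
      class Interval:
          def __init__(self, lo, hi):
              self.lo, self.hi = lo, hi
          def __add__(self, o):
              return Interval(self.lo + o.lo, self.hi + o.hi)
          def __sub__(self, o):
              return Interval(self.lo - o.hi, self.hi - o.lo)
          def __mul__(self, o):
              p = [self.lo * o.lo, self.lo * o.hi, self.hi * o.lo, self.hi * o.hi]
              return Interval(min(p), max(p))
          def __repr__(self):
              return f"[{self.lo:.6g}, {self.hi:.6g}]"

      x = Interval(1.99, 2.01)      # x = 2.00 +/- 0.01
      y = Interval(2.98, 3.02)      # y = 3.00 +/- 0.02
      print(x * y + x - y)          # rigorous bounds on x*y + x - y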

  15. Neural Correlates of Reach Errors

    PubMed Central

    Hashambhoy, Yasmin; Rane, Tushar; Shadmehr, Reza

    2005-01-01

    Reach errors may be broadly classified into errors arising from unpredictable changes in target location, called target errors, and errors arising from miscalibration of internal models, called execution errors. Execution errors may be caused by miscalibration of dynamics (e.g., when a force field alters limb dynamics) or by miscalibration of kinematics (e.g., when prisms alter visual feedback). While all types of errors lead to similar online corrections, we found that the motor system showed strong trial-by-trial adaptation in response to random execution errors but not in response to random target errors. We used fMRI and a compatible robot to study brain regions involved in processing each kind of error. Both kinematic and dynamic execution errors activated regions along the central and the post-central sulci and in lobules V, VI, and VIII of the cerebellum, making these areas possible sites of plastic changes in internal models for reaching. Only activity related to kinematic errors extended into parietal area 5. These results are inconsistent with the idea that kinematics and dynamics of reaching are computed in separate neural entities. In contrast, only target errors caused increased activity in the striatum and the posterior superior parietal lobule. The cerebellum and motor cortex were as strongly activated as with execution errors. These findings indicate a neural and behavioral dissociation between errors that lead to switching of behavioral goals, and errors that lead to adaptation of internal models of limb dynamics and kinematics. PMID:16251440

  16. The Insufficiency of Error Analysis

    ERIC Educational Resources Information Center

    Hammarberg, B.

    1974-01-01

    The position here is that error analysis is inadequate, particularly from the language-teaching point of view. Non-errors must be considered in specifying the learner's current command of the language, its limits, and his learning tasks. A cyclic procedure of elicitation and analysis, to secure evidence of errors and non-errors, is outlined.…

  17. Perceptual learning eases crowding by reducing recognition errors but not position errors.

    PubMed

    Xiong, Ying-Zi; Yu, Cong; Zhang, Jun-Yun

    2015-08-01

    When an observer reports a letter flanked by additional letters in the visual periphery, the response errors (the crowding effect) may result from failure to recognize the target letter (recognition errors), from mislocating a correctly recognized target letter at a flanker location (target misplacement errors), or from reporting a flanker as the target letter (flanker substitution errors). Crowding can be reduced through perceptual learning. However, it is not known how perceptual learning operates to reduce crowding. In this study we trained observers with a partial-report task (Experiment 1), in which they reported the central target letter of a three-letter string presented in the visual periphery, or a whole-report task (Experiment 2), in which they reported all three letters in order. We then assessed the impact of training on recognition of both unflanked and flanked targets, with particular attention to how perceptual learning affected the types of errors. Our results show that training improved target recognition but not single-letter recognition, indicating that training indeed affected crowding. However, training did not reduce target misplacement errors or flanker substitution errors. This dissociation between target recognition and flanker substitution errors supports the view that flanker substitution may be more likely a by-product (due to response bias), rather than a cause, of crowding. Moreover, the dissociation is not consistent with hypothesized mechanisms of crowding that would predict reduced positional errors.

  18. The impact of data density and data error on the evolution of mesoscale forecast error

    NASA Technical Reports Server (NTRS)

    Warner, T. T.; Key, L. E.

    1985-01-01

    The effects of data density and data errors on the accuracy of mesoscale weather forecasts were assessed by simulations of a period in February 1979 when a snowstorm occurred along the U.S. eastern seaboard. The simulations were initiated every 12 hr and the growth of the rms error was tracked as a function of time and varying data densities. A uniform grid of instrumentation separation and perfect boundary conditions were assumed for all the simulations. A bias error representation was defined to account for systematic measurement errors. A brief summary of features of the somewhat complex storm is provided, together with the parameters of the 11 different simulations with high, medium and low data densities. The accuracy of the forecasts were proportional to the data density, although the error decreased for all the simulations over a 24 hr period. The density of the vertical data had a significant impact on the accuracy of the forecast, whereas the horizontal data density did not. Finally, pathways by which errors were transferred among variables were identified.

  19. Financial errors in dementia: Testing a neuroeconomic conceptual framework

    PubMed Central

    Chiong, Winston; Hsu, Ming; Wudka, Danny; Miller, Bruce L.; Rosen, Howard J.

    2013-01-01

    Financial errors by patients with dementia can have devastating personal and family consequences. We developed and evaluated a neuroeconomic conceptual framework for understanding financial errors across different dementia syndromes, using a systematic, retrospective, blinded chart review of demographically-balanced cohorts of patients with Alzheimer’s disease (AD, n=100) and behavioral variant frontotemporal dementia (bvFTD, n=50). Reviewers recorded specific reports of financial errors according to a conceptual framework identifying patient cognitive and affective characteristics, and contextual influences, conferring susceptibility to each error. Specific financial errors were reported for 49% of AD and 70% of bvFTD patients (p = 0.012). AD patients were more likely than bvFTD patients to make amnestic errors (p< 0.001), while bvFTD patients were more likely to spend excessively (p = 0.004) and to exhibit other behaviors consistent with diminished sensitivity to losses and other negative outcomes (p< 0.001). Exploratory factor analysis identified a social/affective vulnerability factor associated with errors in bvFTD, and a cognitive vulnerability factor associated with errors in AD. Our findings highlight the frequency and functional importance of financial errors as symptoms of AD and bvFTD. A conceptual model derived from neuroeconomic literature identifies factors that influence vulnerability to different types of financial error in different dementia syndromes, with implications for early diagnosis and subsequent risk prevention. PMID:23550884

  1. Manson's triple error.

    PubMed

    Delaporte, F.

    2008-09-01

    The author discusses the significance, implications and limitations of Manson's work. How did Patrick Manson resolve some of the major problems raised by the filarial worm life cycle? The Amoy physician showed that circulating embryos could only leave the blood via the percutaneous route, thereby requiring a bloodsucking insect. The discovery of a new autonomous, airborne, active host undoubtedly had a considerable impact on the history of parasitology, but the way in which Manson formulated and solved the problem of the transfer of filarial worms from the body of the mosquito to man resulted in failure. This article shows how the epistemological transformation operated by Manson was indissociably related to a series of errors and how a major breakthrough can be the result of a series of false proposals and, consequently, that the history of truth often involves a history of error. PMID:18814729

  2. Error-Free Software

    NASA Technical Reports Server (NTRS)

    1989-01-01

    001 is an integrated tool suited for automatically developing ultra reliable models, simulations and software systems. Developed and marketed by Hamilton Technologies, Inc. (HTI), it has been applied in engineering, manufacturing, banking and software tools development. The software provides the ability to simplify the complex. A system developed with 001 can be a prototype or fully developed with production quality code. It is free of interface errors, consistent, logically complete and has no data or control flow errors. Systems can be designed, developed and maintained with maximum productivity. Margaret Hamilton, President of Hamilton Technologies, also directed the research and development of USE.IT, an earlier product which was the first computer aided software engineering product in the industry to concentrate on automatically supporting the development of an ultrareliable system throughout its life cycle. Both products originated in NASA technology developed under a Johnson Space Center contract.

  3. TOA/FOA geolocation error analysis.

    SciTech Connect

    Mason, John Jeffrey

    2008-08-01

    This paper describes how confidence intervals can be calculated for radiofrequency emitter position estimates based on time-of-arrival and frequency-of-arrival measurements taken at several satellites. These confidence intervals take the form of 50th and 95th percentile circles and ellipses to convey horizontal error and linear intervals to give vertical error. We consider both cases where an assumed altitude is and is not used. Analysis of velocity errors is also considered. We derive confidence intervals for horizontal velocity magnitude and direction including the case where the emitter velocity is assumed to be purely horizontal, i.e., parallel to the ellipsoid. Additionally, we derive an algorithm that we use to combine multiple position fixes to reduce location error. The algorithm uses all available data, after more than one location estimate for an emitter has been made, in a mathematically optimal way.
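
    A sketch of how a horizontal-position covariance is turned into 50th- and 95th-percentile error ellipses; the covariance values are illustrative, and the chi-squared scaling assumes Gaussian errors.

      import numpy as np
      from scipy.stats import chi2

      cov = np.array([[4.0, 1.5],           # horizontal position covariance
                      [1.5, 2.0]])          # (km^2, illustrative)
      eigvals = np.linalg.eigvalsh(cov)     # principal-axis variances
      for p in (0.50, 0.95):
          k = chi2.ppf(p, df=2)             # 2-D Gaussian containment scale
          a, b = np.sqrt(k * eigvals[::-1]) # semi-major, semi-minor axes
          print(f"{p:.0%} ellipse: {a:.2f} km x {b:.2f} km")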

  4. Conditional Density Estimation in Measurement Error Problems.

    PubMed

    Wang, Xiao-Feng; Ye, Deping

    2015-01-01

    This paper is motivated by a wide range of background correction problems in gene array data analysis, where the raw gene expression intensities are measured with error. Estimating a conditional density function from the contaminated expression data is a key aspect of statistical inference and visualization in these studies. We propose re-weighted deconvolution kernel methods to estimate the conditional density function in an additive error model, when the error distribution is known as well as when it is unknown. Theoretical properties of the proposed estimators are investigated with respect to the mean absolute error from a "double asymptotic" view. Practical rules are developed for the selection of smoothing-parameters. Simulated examples and an application to an Illumina bead microarray study are presented to illustrate the viability of the methods. PMID:25284902

  5. SIP: Systematics-Insensitive Periodograms

    NASA Astrophysics Data System (ADS)

    Angus, Ruth

    2016-09-01

    SIP (Systematics-Insensitive Periodograms) extends the generative model behind traditional sine-fitting periodograms, which find the frequency of a sinusoid, by including systematic trends based on a set of eigen light curves alongside the sum of sine and cosine functions evaluated over a grid of frequencies. The result is periodograms with vastly reduced systematic features. Acoustic oscillations in giant stars and stellar rotation periods can be recovered from SIP periodograms without detrending. The code can also be applied to the detection of other periodic phenomena, including eclipsing binaries and short-period exoplanet candidates.
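
    A minimal SIP-style sketch: at each trial frequency the model fits a sinusoid and the systematics basis simultaneously by linear least squares. Random vectors stand in for the real eigen light curves, and the injected signal is made up.

      import numpy as np

      rng = np.random.default_rng(6)
      t = np.linspace(0.0, 30.0, 500)
      eigen = rng.standard_normal((3, t.size))   # stand-in systematics basis
      flux = (0.1 * np.sin(2 * np.pi * 0.7 * t)  # injected signal at 0.7
              + 0.5 * eigen[0]                   # strong systematic trend
              + 0.02 * rng.standard_normal(t.size))

      freqs = np.linspace(0.1, 2.0, 400)
      power = np.empty_like(freqs)
      for i, f in enumerate(freqs):
          A = np.column_stack([np.sin(2 * np.pi * f * t),
                               np.cos(2 * np.pi * f * t),
                               *eigen, np.ones_like(t)])
          coef = np.linalg.lstsq(A, flux, rcond=None)[0]
          power[i] = 1.0 - (flux - A @ coef).var() / flux.var()
      print(freqs[power.argmax()])               # peaks near the injected 0.7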

  6. Errors and mistakes in breast ultrasound diagnostics.

    PubMed

    Jakubowski, Wiesław; Dobruch-Sobczak, Katarzyna; Migda, Bartosz

    2012-09-01

    Sonomammography is often the first additional examination performed in the diagnostics of breast diseases. The development of ultrasound imaging techniques, particularly the introduction of high-frequency transducers, matrix transducers, harmonic imaging and, finally, elastography, has improved the diagnostics of breast disease. Nevertheless, as in every imaging method, there are errors and mistakes resulting from the technical limitations of the method, breast anatomy (fibrous remodeling), and insufficient sensitivity and, in particular, specificity. Errors in breast ultrasound diagnostics can be divided into those that are impossible to avoid and those that can potentially be reduced. In this article the most frequently made errors in ultrasound are presented, including those caused by artifacts resulting from volumetric averaging in the near and far field, artifacts in cysts or in dilated lactiferous ducts (reverberations, comet tail artifacts, lateral beam artifacts), and improper settings of general enhancement, the time gain curve, or the range. Errors dependent on the examiner, which result in a wrong BIRADS-usg classification, are divided into negative and positive errors. The sources of these errors are listed, and methods for minimizing their number are discussed, including those related to appropriate examination technique, taking into account data from the case history, and using the greatest possible number of additional options such as harmonic imaging, color and power Doppler, and elastography. The article presents examples of errors resulting from the technical conditions of the method, as well as examiner-dependent errors related to the great diversity and variation of ultrasound images of pathological breast lesions.

  7. Systematic neutron guide misalignment for an accelerator-driven spallation neutron source

    NASA Astrophysics Data System (ADS)

    Zendler, C.; Bentley, P. M.

    2016-08-01

    The European Spallation Source (ESS) is a long pulse spallation neutron source that is currently under construction in Lund, Sweden. A considerable fraction of the 22 planned instruments extend as far as 75-150 m from the source. In such long beam lines, misalignment between neutron guide segments can decrease the neutron transmission significantly. In addition to a random misalignment from installation tolerances, the ground on which ESS is built can be expected to sink with time, and thus shift the neutron guide segments further away from the ideal alignment axis in a systematic way. These systematic errors are correlated to the ground structure, position of buildings and shielding installation. Since the largest deformation is expected close to the target, even short instruments might be noticeably affected. In this study, the effect of this systematic misalignment on short and long ESS beam lines is analyzed, and a possible mitigation by overillumination of subsequent guide sections investigated.

  8. [The notion and classification of expert errors].

    PubMed

    Klevno, V A

    2012-01-01

    The author presents the analysis of the legal and forensic medical literature concerning currently accepted concepts and classification of expert malpractice. He proposes a new easy-to-remember definition of the expert error and considers the classification of such mistakes. The analysis of the cases of erroneous application of the medical criteria for estimation of the harm to health made it possible to reveal and systematize the causes accounting for the cases of expert malpractice committed by forensic medical experts and health providers when determining the degree of harm to human health. PMID:22686055

  9. A PARAMETRIC ANALYSIS OF ERRORS OF COMMISSION DURING DISCRETE-TRIAL TRAINING

    PubMed Central

    DiGennaro Reed, Florence D; Reed, Derek D; Baez, Cynthia N; Maguire, Helena

    2011-01-01

    We investigated the effects of systematic changes in levels of treatment integrity by altering errors of commission during error-correction procedures as part of discrete-trial training. We taught 3 students with autism receptive nonsense shapes under 3 treatment integrity conditions (0%, 50%, or 100% errors of commission). Participants exhibited higher levels of performance during perfect implementation (0% errors). For 2 of the 3 participants, performance was low and showed no differentiation in the remaining conditions. Findings suggest that 50% commission errors may be as detrimental as 100% commission errors on teaching outcomes. PMID:21941391

  10. Understanding error generation in fused deposition modeling

    NASA Astrophysics Data System (ADS)

    Bochmann, Lennart; Bayley, Cindy; Helu, Moneer; Transchel, Robert; Wegener, Konrad; Dornfeld, David

    2015-03-01

    Additive manufacturing offers completely new possibilities for the manufacturing of parts. The advantages of flexibility and convenience of additive manufacturing have had a significant impact on many industries, and optimizing part quality is crucial for expanding its utilization. This research aims to determine the sources of imprecision in fused deposition modeling (FDM). Process errors in terms of surface quality, accuracy and precision are identified and quantified, and an error-budget approach is used to characterize errors of the machine tool. It was determined that accuracy and precision in the y direction (0.08-0.30 mm) are generally greater than in the x direction (0.12-0.62 mm) and the z direction (0.21-0.57 mm). Furthermore, accuracy and precision tend to decrease at increasing axis positions. The results of this work can be used to identify possible process improvements in the design and control of FDM technology.

  11. Susceptibility of biallelic haplotype and genotype frequencies to genotyping error.

    PubMed

    Moskvina, Valentina; Schmidt, Karl Michael

    2006-12-01

    With the availability of fast genotyping methods and genomic databases, the search for statistical association of single nucleotide polymorphisms with a complex trait has become an important methodology in medical genetics. However, even fairly rare errors occurring during the genotyping process can lead to spurious association results and decrease in statistical power. We develop a systematic approach to study how genotyping errors change the genotype distribution in a sample. The general M-marker case is reduced to that of a single-marker locus by recognizing the underlying tensor-product structure of the error matrix. Both method and general conclusions apply to the general error model; we give detailed results for allele-based errors of size depending both on the marker locus and the allele present. Multiple errors are treated in terms of the associated diffusion process on the space of genotype distributions. We find that certain genotype and haplotype distributions remain unchanged under genotyping errors, and that genotyping errors generally render the distribution more similar to the stable one. In case-control association studies, this will lead to loss of statistical power for nondifferential genotyping errors and increase in type I error for differential genotyping errors. Moreover, we show that allele-based genotyping errors do not disturb Hardy-Weinberg equilibrium in the genotype distribution. In this setting we also identify maximally affected distributions. As they correspond to situations with rare alleles and marker loci in high linkage disequilibrium, careful checking for genotyping errors is advisable when significant association based on such alleles/haplotypes is observed in association studies.
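
    The tensor-product reduction can be made concrete with numpy's Kronecker product. The single-marker error matrix below assumes each allele is miscalled independently with a made-up rate e; none of the numbers come from the paper.

      import numpy as np

      e = 0.02                         # per-allele miscall rate (made up)
      # single-marker genotype error matrix (rows: true, cols: observed),
      # assuming the two alleles are miscalled independently
      E = np.array([[(1-e)**2,       2*e*(1-e),        e**2],
                    [e*(1-e), (1-e)**2 + e**2,      e*(1-e)],
                    [e**2,           2*e*(1-e),   (1-e)**2]])
      E2 = np.kron(E, E)               # two-marker error matrix, 9 x 9
      p1 = np.array([0.49, 0.42, 0.09])    # genotype freqs (HWE, p = 0.7)
      p2 = np.kron(p1, p1)                 # two-marker distribution
      print((p2 @ E2).round(4))            # distribution after errors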

  12. Non-Gaussian Error Distributions of LMC Distance Moduli Measurements

    NASA Astrophysics Data System (ADS)

    Crandall, Sara; Ratra, Bharat

    2015-12-01

    We construct error distributions for a compilation of 232 Large Magellanic Cloud (LMC) distance moduli values from de Grijs et al. that give an LMC distance modulus of (m - M)0 = 18.49 ± 0.13 mag (median and 1σ symmetrized error). Central estimates found from weighted mean and median statistics are used to construct the error distributions. The weighted mean error distribution is non-Gaussian—flatter and broader than Gaussian—with more (less) probability in the tails (center) than is predicted by a Gaussian distribution; this could be the consequence of unaccounted-for systematic uncertainties. The median statistics error distribution, which does not make use of the individual measurement errors, is also non-Gaussian—more peaked than Gaussian—with less (more) probability in the tails (center) than is predicted by a Gaussian distribution; this could be the consequence of publication bias and/or the non-independence of the measurements. We also construct the error distributions of 247 SMC distance moduli values from de Grijs & Bono. We find a central estimate of (m - M)0 = 18.94 ± 0.14 mag (median and 1σ symmetrized error), and similar probabilities for the error distributions.
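
    A sketch of the error-distribution construction: express each measurement's offset from the central estimate in units of its quoted error and compare the enclosed fractions with Gaussian expectations. The data below are synthetic and heavy-tailed, not the compilation used in the paper.

      import numpy as np

      rng = np.random.default_rng(7)
      mu = 18.49 + 0.13 * rng.standard_t(df=3, size=232)   # heavy-tailed fakes
      sig = np.full(232, 0.13)                             # quoted 1-sigma errors
      n_sigma = np.abs(mu - np.median(mu)) / sig           # offsets in sigma units
      for k, gauss in ((1, 0.683), (2, 0.954)):
          # heavy tails leave fewer points within k sigma than a Gaussian
          print(f"|N| <= {k}: {(n_sigma <= k).mean():.3f} (Gaussian {gauss})")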

  13. Advancing the research agenda for diagnostic error reduction.

    PubMed

    Zwaan, Laura; Schiff, Gordon D; Singh, Hardeep

    2013-10-01

    Diagnostic errors remain an underemphasised and understudied area of patient safety research. We briefly summarise the methods that have been used to conduct research on epidemiology, contributing factors and interventions related to diagnostic error and outline directions for future research. Research methods that have studied epidemiology of diagnostic error provide some estimate on diagnostic error rates. However, there appears to be a large variability in the reported rates due to the heterogeneity of definitions and study methods used. Thus, future methods should focus on obtaining more precise estimates in different settings of care. This would lay the foundation for measuring error rates over time to evaluate improvements. Research methods have studied contributing factors for diagnostic error in both naturalistic and experimental settings. Both approaches have revealed important and complementary information. Newer conceptual models from outside healthcare are needed to advance the depth and rigour of analysis of systems and cognitive insights of causes of error. While the literature has suggested many potentially fruitful interventions for reducing diagnostic errors, most have not been systematically evaluated and/or widely implemented in practice. Research is needed to study promising intervention areas such as enhanced patient involvement in diagnosis, improving diagnosis through the use of electronic tools and identification and reduction of specific diagnostic process 'pitfalls' (eg, failure to conduct appropriate diagnostic evaluation of a breast lump after a 'normal' mammogram). The last decade of research on diagnostic error has made promising steps and laid a foundation for more rigorous methods to advance the field.

  14. Magnetic nanoparticle thermometer: an investigation of minimum error transmission path and AC bias error.

    PubMed

    Du, Zhongzhou; Su, Rijian; Liu, Wenzhong; Huang, Zhixing

    2015-04-14

    The signal transmission module of a magnetic nanoparticle thermometer (MNPT) was established in this study to analyze the error sources introduced during the signal flow in the hardware system. The underlying error sources that significantly affected the precision of the MNPT were determined through mathematical modeling and simulation. A transfer module path with the minimum error in the hardware system was then proposed through the analysis of the variations of the system error caused by the significant error sources as the signal flowed through the signal transmission module. In addition, a system parameter, named the signal-to-AC bias ratio (i.e., the ratio between the signal and AC bias), was identified as a direct determinant of the precision of the measured temperature. The temperature error was below 0.1 K when the signal-to-AC bias ratio was higher than 80 dB, and other system errors were not considered. The temperature error was below 0.1 K in the experiments with a commercial magnetic fluid (Sample SOR-10, Ocean Nanotechnology, Springdale, AR, USA) when the hardware system of the MNPT was designed with the aforementioned method.

  15. Magnetic Nanoparticle Thermometer: An Investigation of Minimum Error Transmission Path and AC Bias Error

    PubMed Central

    Du, Zhongzhou; Su, Rijian; Liu, Wenzhong; Huang, Zhixing

    2015-01-01

    The signal transmission module of a magnetic nanoparticle thermometer (MNPT) was established in this study to analyze the error sources introduced during the signal flow in the hardware system. The underlying error sources that significantly affected the precision of the MNPT were determined through mathematical modeling and simulation. A transfer module path with the minimum error in the hardware system was then proposed through the analysis of the variations of the system error caused by the significant error sources as the signal flowed through the signal transmission module. In addition, a system parameter, named the signal-to-AC bias ratio (i.e., the ratio between the signal and AC bias), was identified as a direct determinant of the precision of the measured temperature. The temperature error was below 0.1 K when the signal-to-AC bias ratio was higher than 80 dB, and other system errors were not considered. The temperature error was below 0.1 K in the experiments with a commercial magnetic fluid (Sample SOR-10, Ocean Nanotechnology, Springdale, AR, USA) when the hardware system of the MNPT was designed with the aforementioned method. PMID:25875188

  16. Pitch Error Analysis of Young Piano Students' Music Reading Performances

    ERIC Educational Resources Information Center

    Rut Gudmundsdottir, Helga

    2010-01-01

    This study analyzed the music reading performances of 6-13-year-old piano students (N = 35) in their second year of piano study. The stimuli consisted of three piano pieces, systematically constructed to vary in terms of left-hand complexity and input simultaneity. The music reading performances were recorded digitally and a code of error analysis…

  17. Error Management Behavior in Classrooms: Teachers' Responses to Student Mistakes

    ERIC Educational Resources Information Center

    Tulis, Maria

    2013-01-01

    Only a few studies have focused on how teachers deal with mistakes in actual classroom settings. Teachers' error management behavior was analyzed based on data obtained from direct (Study 1) and videotaped systematic observation (Study 2), and students' self-reports. In Study 3 associations between students' and teachers' attitudes towards…

  18. Power Measurement Errors on a Utility Aircraft

    NASA Technical Reports Server (NTRS)

    Bousman, William G.

    2002-01-01

    Extensive flight test data obtained from two recent performance tests of a UH-60A aircraft are reviewed. A power difference is calculated from the power balance equation and is used to examine power measurement errors. It is shown that the baseline measurement errors are highly non-Gaussian in their frequency distribution and are therefore influenced by additional, unquantified variables. Linear regression is used to examine the influence of other variables and it is shown that a substantial portion of the variance depends upon measurements of atmospheric parameters. Correcting for temperature dependence, although reducing the variance in the measurement errors, still leaves unquantified effects. Examination of the power difference over individual test runs indicates significant errors from drift, although it is unclear how these may be corrected. In an idealized case, where the drift is correctable, it is shown that the power measurement errors are significantly reduced and the error distribution is Gaussian. A new flight test program is recommended that will quantify the thermal environment for all torque measurements on the UH-60. Subsequently, the torque measurement systems will be recalibrated based on the measured thermal environment and a new power measurement assessment performed.

  19. Medication Errors: New EU Good Practice Guide on Risk Minimisation and Error Prevention.

    PubMed

    Goedecke, Thomas; Ord, Kathryn; Newbould, Victoria; Brosch, Sabine; Arlett, Peter

    2016-06-01

    A medication error is an unintended failure in the drug treatment process that leads to, or has the potential to lead to, harm to the patient. Reducing the risk of medication errors is a shared responsibility between patients, healthcare professionals, regulators and the pharmaceutical industry at all levels of healthcare delivery. In 2015, the EU regulatory network released a two-part good practice guide on medication errors to support both the pharmaceutical industry and regulators in the implementation of the changes introduced with the EU pharmacovigilance legislation. These changes included a modification of the 'adverse reaction' definition to include events associated with medication errors, and the requirement for national competent authorities responsible for pharmacovigilance in EU Member States to collaborate and exchange information on medication errors resulting in harm with national patient safety organisations. To facilitate reporting and learning from medication errors, a clear distinction has been made in the guidance between medication errors resulting in adverse reactions, medication errors without harm, intercepted medication errors and potential errors. This distinction is supported by an enhanced MedDRA® terminology that allows for coding all stages of the medication use process where the error occurred in addition to any clinical consequences. To better understand the causes and contributing factors, individual case safety reports involving an error should be followed up with the primary reporter to gather information relevant for the conduct of root cause analysis where this may be appropriate. Such reports should also be summarised in periodic safety update reports and addressed in risk management plans. Any risk minimisation and prevention strategy for medication errors should consider all stages of a medicinal product's life-cycle, particularly the main sources and types of medication errors during product development. This article

  20. DNA systematics. Volume II

    SciTech Connect

    Dutta, S.K.

    1986-01-01

    This book discusses the following topics: PLANTS: PLANT DNA: Contents and Systematics. Repeated DNA Sequences and Polyploidy in Cereal Crops. Homology of Nonrepeated DNA Sequences in Phylogeny of Fungal Species. Chloroplast DNA and Phylogenetic Relationships. rDNA: Evolution Over a Billion Years. 23S rRNA-derived Small Ribosomal RNAs: Their Structure and Evolution with Reference to Plant Phylogeny. Molecular Analysis of Plant DNA Genomes: Conserved and Diverged DNA Sequences. A Critical Review of Some Terminologies Used for Additional DNA in Plant Chromosomes and Index.

  1. Effect of patient setup errors on simultaneously integrated boost head and neck IMRT treatment plans

    SciTech Connect

    Siebers, Jeffrey V. E-mail: jsiebers@vcu.edu; Keall, Paul J.; Wu, Qiuwen; Williamson, Jeffrey F.; Schmidt-Ullrich, Rupert K.

    2005-10-01

    Purpose: The purpose of this study is to determine dose delivery errors that could result from random and systematic setup errors for head-and-neck patients treated using the simultaneous integrated boost (SIB)-intensity-modulated radiation therapy (IMRT) technique. Methods and Materials: Twenty-four patients who participated in an intramural Phase I/II parotid-sparing IMRT dose-escalation protocol using the SIB treatment technique had their dose distributions reevaluated to assess the impact of random and systematic setup errors. The dosimetric effect of random setup error was simulated by convolving the two-dimensional fluence distribution of each beam with the random setup error probability density distribution. Random setup errors of σ = 1, 3, and 5 mm were simulated. Systematic setup errors were simulated by randomly shifting the patient isocenter along each of the three Cartesian axes, with each shift selected from a normal distribution. Systematic setup error distributions with Σ = 1.5 and 3.0 mm along each axis were simulated. Combined systematic and random setup errors were simulated for Σ = σ = 1.5 and 3.0 mm along each axis. For each dose calculation, the gross tumor volume (GTV) dose received by 98% of the volume (D98), clinical target volume (CTV) D90, nodes D90, cord D2, and parotid D50 and parotid mean dose were evaluated with respect to the plan used for treatment, both for the structure dose and for an effective planning target volume (PTV) with a 3-mm margin. Results: Simultaneous integrated boost-IMRT head-and-neck treatment plans were found to be less sensitive to random setup errors than to systematic setup errors. For random-only errors, dose errors exceeded 3% only when the random setup error σ exceeded 3 mm. Simulated systematic setup errors with Σ = 1.5 mm resulted in approximately 10% of plans having more than a 3% dose error, whereas Σ = 3.0 mm resulted in half of the plans having
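
    The random-error simulation described above is, in one dimension, just a convolution of the fluence profile with the setup-error density; an illustrative sketch with an idealized flat beam (not the clinical fluence maps):

      import numpy as np

      x = np.arange(-30.0, 30.0, 0.5)                   # position, mm
      fluence = ((x > -10) & (x < 10)).astype(float)    # idealized flat beam
      for sigma in (1.0, 3.0, 5.0):
          g = np.exp(-0.5 * (x / sigma) ** 2)
          g /= g.sum()                                  # setup-error density
          blurred = np.convolve(fluence, g, mode="same")
          # the field edges blur with sigma while the center changes little
          print(sigma, round(blurred[x == 0][0], 3))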

  2. Reducing nurse medicine administration errors.

    PubMed

    Ofosu, Rose; Jarrett, Patricia

    Errors in administering medicines are common and can compromise the safety of patients. This review discusses the causes of drug administration error in hospitals by student and registered nurses, and the practical measures educators and hospitals can take to improve nurses' knowledge and skills in medicines management, and reduce drug errors.

  3. Error Bounds for Interpolative Approximations.

    ERIC Educational Resources Information Center

    Gal-Ezer, J.; Zwas, G.

    1990-01-01

    Elementary error estimation in the approximation of functions by polynomials as a computational assignment, error-bounding functions and error bounds, and the choice of interpolation points are discussed. Precalculus and computer instruction are used on some of the calculations. (KR)

  4. Errors inducing radiation overdoses.

    PubMed

    Grammaticos, Philip C

    2013-01-01

    There is no doubt that equipment that emits radiation for therapeutic purposes should be checked often for possibly administering radiation overdoses to patients. Technologists, radiation safety officers, radiologists, medical physicists, healthcare providers and administrators should take proper care of this issue. "We must be beneficial and not harmful to the patients", according to the Hippocratic doctrine. Cases of radiation overdose are often reported. A series of cases of radiation overdoses has recently been reported, and the doctors who were responsible received heavy punishments. It is much better to prevent than to treat an error or a disease. A Personal Smart Card or Score Card has been suggested for every patient undergoing therapeutic and/or diagnostic procedures that use radiation. Taxonomy may also help. PMID:24251304

  5. Error suppression and error correction in adiabatic quantum computation: non-equilibrium dynamics

    NASA Astrophysics Data System (ADS)

    Sarovar, Mohan; Young, Kevin C.

    2013-12-01

    While adiabatic quantum computing (AQC) has some robustness to noise and decoherence, it is widely believed that encoding, error suppression and error correction will be required to scale AQC to large problem sizes. Previous works have established at least two different techniques for error suppression in AQC. In this paper we derive a model for describing the dynamics of encoded AQC and show that previous constructions for error suppression can be unified with this dynamical model. In addition, the model clarifies the mechanisms of error suppression and allows the identification of its weaknesses. In the second half of the paper, we utilize our description of non-equilibrium dynamics in encoded AQC to construct methods for error correction in AQC by cooling local degrees of freedom (qubits). While this is shown to be possible in principle, we also identify the key challenge to this approach: the requirement of high-weight Hamiltonians. Finally, we use our dynamical model to perform a simplified thermal stability analysis of concatenated-stabilizer-code encoded many-body systems for AQC or quantum memories. This work is a companion paper to ‘Error suppression and error correction in adiabatic quantum computation: techniques and challenges (2013 Phys. Rev. X 3 041013)’, which provides a quantum information perspective on the techniques and limitations of error suppression and correction in AQC. In this paper we couch the same results within a dynamical framework, which allows for a detailed analysis of the non-equilibrium dynamics of error suppression and correction in encoded AQC.

  6. Role of cognition in generating and mitigating clinical errors.

    PubMed

    Patel, Vimla L; Kannampallil, Thomas G; Shortliffe, Edward H

    2015-07-01

    Given the complexities of current clinical practice environments, strategies to reduce clinical error must appreciate that error detection and recovery are integral to the function of complex cognitive systems. In this review, while acknowledging that error elimination is an attractive notion, we use evidence to show that enhancing error detection and improving error recovery are also important goals. We further show how departures from clinical protocols or guidelines can yield innovative and appropriate solutions to unusual problems. This review addresses cognitive approaches to the study of human error and its recovery process, highlighting their implications in promoting patient safety and quality. In addition, we discuss methods for enhancing error recognition, and promoting suitable responses, through external cognitive support and virtual reality simulations for the training of clinicians. PMID:25935928

  7. Fixed-point error analysis of Winograd Fourier transform algorithms

    NASA Technical Reports Server (NTRS)

    Patterson, R. W.; Mcclellan, J. H.

    1978-01-01

    The quantization error introduced by the Winograd Fourier transform algorithm (WFTA) when implemented in fixed-point arithmetic is studied and compared with that of the fast Fourier transform (FFT). The effect of ordering the computational modules and the relative contributions of data quantization error and coefficient quantization error are determined. In addition, the quantization error introduced by the Good-Winograd (GW) algorithm, which uses Good's prime-factor decomposition for the discrete Fourier transform (DFT) together with Winograd's short length DFT algorithms, is studied. Error introduced by the WFTA is, in all cases, worse than that of the FFT. In general, the WFTA requires one or two more bits for data representation to give an error similar to that of the FFT. Error introduced by the GW algorithm is approximately the same as that of the FFT.
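
    The WFTA itself is not reproduced here, but the way data-quantization error is measured against a high-precision reference can be sketched as below; this is an assumption-laden illustration using the FFT, in which each added bit of data representation roughly halves the output error:

    ```python
    import numpy as np

    def quantize(x, bits):
        """Round to a fixed-point grid with the given word length (|x| <= 1)."""
        scale = 2.0 ** (bits - 1)
        return np.round(x * scale) / scale

    rng = np.random.default_rng(0)
    x = rng.uniform(-1, 1, 1024) + 1j * rng.uniform(-1, 1, 1024)
    reference = np.fft.fft(x)                     # double-precision "exact" DFT

    for bits in (8, 10, 12, 14, 16):
        xq = quantize(x.real, bits) + 1j * quantize(x.imag, bits)
        rms = np.sqrt(np.mean(np.abs(np.fft.fft(xq) - reference) ** 2))
        print(f"{bits:2d}-bit data: output RMS error = {rms:.3e}")
    ```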

  8. Identification of Error Patterns in Terminal-Area ATC Communications

    NASA Technical Reports Server (NTRS)

    Quinn, Cheryl; Walter, Kim E.; Rosekind, Mark R. (Technical Monitor)

    1997-01-01

    Advancing air traffic management technologies have enabled a greater number of aircraft to use the same airspace more effectively. As aircraft separations are reduced and final approaches are more finely timed, there is less room for error. The present study examined 122 terminal-area, loss-of-separation and procedure violation incidents reported to the Aviation Safety Reporting System (ASRS) by air traffic controllers. Narrative description codes were used for the incidents for type of violation, contributing factors, recovery strategies, and consequences. Usually multiple errors occurred prior to the violation. Error sequences were analyzed and common patterns of errors were identified. In half of the incidents, errors were noticed in time to correct mistakes. Of these, almost 43% committed additional errors during the recovery attempt. This analysis shows that redundancies in the present air traffic control system may not be sufficient to support large increases in traffic density. Error prevention and design considerations for air traffic management systems are discussed.

  9. Register file soft error recovery

    SciTech Connect

    Fleischer, Bruce M.; Fox, Thomas W.; Wait, Charles D.; Muff, Adam J.; Watson, III, Alfred T.

    2013-10-15

    Register file soft error recovery including a system that includes a first register file and a second register file that mirrors the first register file. The system also includes an arithmetic pipeline for receiving data read from the first register file, and error detection circuitry to detect whether the data read from the first register file includes corrupted data. The system further includes error recovery circuitry to insert an error recovery instruction into the arithmetic pipeline in response to detecting the corrupted data. The inserted error recovery instruction replaces the corrupted data in the first register file with a copy of the data from the second register file.
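
    A hypothetical software model of that recovery flow is sketched below; a parity bit stands in for the error detection circuitry, and a copy from the mirror stands in for the inserted error recovery instruction (names and structure are illustrative, not the patented hardware):

    ```python
    def parity(word: int) -> int:
        return bin(word).count("1") & 1

    class MirroredRegisterFile:
        def __init__(self, size: int):
            self.primary = [0] * size
            self.mirror = [0] * size      # second register file mirroring the first
            self.check = [0] * size       # parity bits for error detection

        def write(self, reg: int, value: int) -> None:
            self.primary[reg] = value
            self.mirror[reg] = value
            self.check[reg] = parity(value)

        def read(self, reg: int) -> int:
            value = self.primary[reg]
            if parity(value) != self.check[reg]:   # corrupted data detected
                value = self.mirror[reg]           # recovery: copy from the mirror
                self.primary[reg] = value
            return value

    rf = MirroredRegisterFile(32)
    rf.write(5, 0b1011)
    rf.primary[5] ^= 0b0100               # inject a single-bit soft error
    assert rf.read(5) == 0b1011           # the read detects and repairs it
    ```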

  10. Reviewing the literature, how systematic is systematic?

    PubMed

    MacLure, Katie; Paudyal, Vibhu; Stewart, Derek

    2016-06-01

    Introduction Professor Archibald Cochrane, after whom the Cochrane Collaboration is named, was influential in promoting evidence-based clinical practice. He called for "relevant, valid research" to underpin all aspects of healthcare. Systematic reviews of the literature are regarded as a high quality source of cumulative evidence but it is unclear how truly systematic they, or other review articles, are or 'how systematic is systematic?' Today's evidence-based review industry is a burgeoning mix of specialist terminology, collaborations and foundations, databases, portals, handbooks, tools, criteria and training courses. Aim of the review This study aims to identify uses and types of reviews, key issues in planning, conducting, reporting and critiquing reviews, and factors which limit claims to be systematic. Method A rapid review of review articles published in IJCP. Results This rapid review identified 17 review articles published in IJCP between 2010 and 2015 inclusive. It explored the use of different types of review article, the variation and widely available range of guidelines, checklists and criteria which, through systematic application, aim to promote best practice. It also identified common pitfalls in endeavouring to conduct reviews of the literature systematically. Discussion Although a limited set of IJCP reviews were identified, there is clear evidence of the variation in adoption and application of systematic methods. The burgeoning evidence industry offers the tools and guidelines required to conduct systematic reviews, and other types of review, systematically. This rapid review was limited to the database of one journal over a period of 6 years. Although this review was conducted systematically, it is not presented as a systematic review. Conclusion As a research community we have yet to fully engage with readily available guidelines and tools which would help to avoid the common pitfalls. Therefore the question remains, of not just IJCP but

  11. Error analysis and system optimization of non-null aspheric testing system

    NASA Astrophysics Data System (ADS)

    Luo, Yongjie; Yang, Yongying; Liu, Dong; Tian, Chao; Zhuo, Yongmo

    2010-10-01

    A non-null aspheric testing system, which employs a partial null lens (PNL for short) and a reverse iterative optimization reconstruction (ROR for short) technique, is proposed in this paper. Based on system modeling in ray-tracing software, the parameters of each optical element are optimized, which makes the system model more precise. The systematic error of the non-null aspheric testing system is analyzed and can be categorized into two types: the error due to the surface parameters of the PNL in the system model, and the remainder, attributed to the non-null interferometer, which is obtained by an error-storage subtraction approach. Experimental results show that, after the systematic error is removed from the test result, the aspheric surface is precisely reconstructed by the ROR technique, and accounting for the systematic error greatly increases the test accuracy of the non-null aspheric testing system.

  12. Influence of nonhomogeneous earth on the rms phase error and beam-pointing errors of large, sparse high-frequency receiving arrays

    NASA Astrophysics Data System (ADS)

    Weiner, M. M.

    1994-01-01

    The performance of ground-based high-frequency (HF) receiving arrays is reduced when the array elements have electrically small ground planes. The array rms phase error and beam-pointing errors, caused by multipath rays reflected from a nonhomogeneous Earth, are determined for a sparse array of elements that are modeled as Hertzian dipoles in close proximity to Earth with no ground planes. Numerical results are presented for cases of randomly distributed and systematically distributed Earth nonhomogeneities where one-half of the vertically polarized array elements are located in proximity to one type of Earth and the remaining half are located in proximity to a second type of Earth. The maximum rms phase errors, for the cases examined, are 18 deg and 9 deg for randomly distributed and systematically distributed nonhomogeneities, respectively. The maximum beam-pointing errors are 0 and 0.3 beam widths for randomly distributed and systematically distributed nonhomogeneities, respectively.

  13. Diagnostic Errors Study Findings

    MedlinePlus

  14. A systematic approach to riser VIV response reconstruction

    NASA Astrophysics Data System (ADS)

    Mukundan, H.; Hover, F. S.; Triantafyllou, M. S.

    2010-07-01

    Vortex-induced vibration (VIV) of long flexible cylindrical structures (e.g. risers, pipelines, tendons, mooring lines) exposed to ocean currents is ubiquitous in the offshore industry. Though significant effort has gone into understanding this complicated fluid-structure interaction problem, major challenges remain in modeling and predicting the response of such structures (for example, a riser). The work presented in this paper provides a systematic approach to estimating and analyzing the vortex-induced motions of a marine riser. A systematic framework is developed which allows reconstruction of the riser motion from a limited number of sensors placed along its length. A full-reconstruction criterion is developed, which classifies when the measurements from the sensors contain all information pertinent to the riser VIV response and when they do not, in which case additional, analytical methods must be employed. Reconstruction methods for both scenarios are developed and applied to experimental data. Finally, a systematic study of the error incurred during reconstruction is also undertaken. The methods developed in this paper can be applied to: improve understanding of the vortex shedding mechanisms, including the presence of traveling waves and higher-harmonic forces; develop tools for in-situ estimation of fatigue damage on marine risers; and estimate the vortex-induced forces on marine risers.
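
    One common way to reconstruct a distributed response from a few sensors, consistent with the framing above, is a least-squares fit of assumed mode shapes; the sketch below uses sinusoidal modes and illustrative sensor positions (not the paper's data or exact method):

    ```python
    import numpy as np

    L, n_modes = 100.0, 5                                        # riser length (m), active modes
    sensors = np.array([12.0, 31.0, 47.0, 62.0, 78.0, 91.0])     # sensor locations (m)

    def mode_shapes(z, n_modes=n_modes, length=L):
        return np.column_stack([np.sin(m * np.pi * np.asarray(z) / length)
                                for m in range(1, n_modes + 1)])

    true_amps = np.array([1.0, 0.3, 0.0, 0.15, 0.05])            # synthetic modal amplitudes
    measured = mode_shapes(sensors) @ true_amps                  # one snapshot of sensor data

    # Full reconstruction requires at least as many (well-placed) sensors as active modes.
    amps, *_ = np.linalg.lstsq(mode_shapes(sensors), measured, rcond=None)
    z = np.linspace(0.0, L, 200)
    displacement = mode_shapes(z) @ amps                         # reconstructed response
    ```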

  15. Experimental quantum error correction with high fidelity

    SciTech Connect

    Zhang Jingfu; Gangloff, Dorian; Moussa, Osama; Laflamme, Raymond

    2011-09-15

    More than ten years ago a first step toward quantum error correction (QEC) was implemented [Phys. Rev. Lett. 81, 2152 (1998)]. The work showed there was sufficient control in nuclear magnetic resonance to implement QEC, and demonstrated that the error rate changed from ε to ≈ε². In the current work we reproduce a similar experiment using control techniques that have since been developed, such as pulses generated by the gradient ascent pulse engineering algorithm. We show that the fidelity of the QEC gate sequence and the comparative advantage of QEC are appreciably improved. This advantage is maintained despite the errors introduced by the additional operations needed to protect the quantum states.

  16. Quantum Error Correction with Biased Noise

    NASA Astrophysics Data System (ADS)

    Brooks, Peter

    Quantum computing offers powerful new techniques for speeding up the calculation of many classically intractable problems. Quantum algorithms can allow for the efficient simulation of physical systems, with applications to basic research, chemical modeling, and drug discovery; other algorithms have important implications for cryptography and internet security. At the same time, building a quantum computer is a daunting task, requiring the coherent manipulation of systems with many quantum degrees of freedom while preventing environmental noise from interacting too strongly with the system. Fortunately, we know that, under reasonable assumptions, we can use the techniques of quantum error correction and fault tolerance to achieve an arbitrary reduction in the noise level. In this thesis, we look at how additional information about the structure of noise, or "noise bias," can improve or alter the performance of techniques in quantum error correction and fault tolerance. In Chapter 2, we explore the possibility of designing certain quantum gates to be extremely robust with respect to errors in their operation. This naturally leads to structured noise where certain gates can be implemented in a protected manner, allowing the user to focus their protection on the noisier unprotected operations. In Chapter 3, we examine how to tailor error-correcting codes and fault-tolerant quantum circuits in the presence of dephasing biased noise, where dephasing errors are far more common than bit-flip errors. By using an appropriately asymmetric code, we demonstrate the ability to improve the amount of error reduction and decrease the physical resources required for error correction. In Chapter 4, we analyze a variety of protocols for distilling magic states, which enable universal quantum computation, in the presence of faulty Clifford operations. Here again there is a hierarchy of noise levels, with a fixed error rate for faulty gates, and a second rate for errors in the distilled
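
    The payoff of tailoring a code to dephasing-biased noise can be seen in a toy Monte Carlo of the 3-qubit phase-flip repetition code (a standard textbook example, not one of the thesis' constructions): majority vote corrects any single Z error, while X errors are left unprotected, so the code helps exactly when pz is much larger than px.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    pz, px, trials = 1e-2, 1e-5, 200_000      # strongly dephasing-biased noise

    z_errs = rng.random((trials, 3)) < pz     # independent Z errors on 3 qubits
    x_errs = rng.random((trials, 3)) < px     # independent X errors on 3 qubits

    logical_z = z_errs.sum(axis=1) >= 2       # majority vote fails: ~3 * pz**2
    logical_x = x_errs.sum(axis=1) % 2 == 1   # any odd number of X errors is logical: ~3 * px

    print("logical Z rate:", logical_z.mean(), "(physical pz =", pz, ")")
    print("logical X rate:", logical_x.mean(), "(physical px =", px, ")")
    ```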

  17. Error modelling on burned area products

    NASA Astrophysics Data System (ADS)

    Padilla, M.; Chuvieco, E.

    2012-12-01

    In the last decade multiple efforts have been undertaken to map burned areas (BA) at global scale. Global BA projects usually carry along a validation phase, which aims to assess product quality. Errors are commonly measured in these validation exercises, but they frequently do not tackle error sources, which hampers the use of BA products as input to earth system models. In this study we present a method to assess the relationships between commission and omission errors on one side and landscape and burned-patch characteristics on the other. Errors were extracted by comparing global BA results and Landsat BA perimeters. The factors selected to explain the error distribution were related to landscape characteristics and to the quality of input data. The former included BA spatial properties, tree cover (from the MODIS Vegetation Continuous Fields product), and the land cover type (Globcover 2005). The latter were the number of cloud-free observations, the confidence level of the BA algorithm and the sub-pixel proportion of true BA. The relationship between explanatory variables and errors was estimated using Generalized Additive Models. This analysis was undertaken to assess global BA products within the framework of the fire_cci project (www.esa-fire-cci.org). This project is part of the European Space Agency's Climate Change Initiative, which aims to generate long-term global products of Essential Climate Variables (ECV). The fire_cci project aims to generate time series of global BA, merging data from three sensors: MERIS, (A)ATSR and VEGETATION. The error characterization exercise presented in this paper was based on MERIS BA results from 2005 in four study sites (Australia, Brazil, Canada and Kazakhstan). Results show that errors are more frequent on partially burned pixels, and tend to decrease for high and low tree cover (when areas have either 0 or 100%), as well as when the product confidence level is high. Detected burned pixels surrounded by other burned pixels were found less
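
    A hypothetical recreation of the modelling step, assuming the third-party pygam package and synthetic data (variable names and effect shapes are invented for illustration): the probability that a pixel is an error is regressed on smooth terms of the candidate explanatory factors.

    ```python
    import numpy as np
    from pygam import LogisticGAM, s   # third-party package (assumed available)

    rng = np.random.default_rng(0)
    n = 2000
    X = np.column_stack([
        rng.uniform(0, 100, n),        # tree cover (%)
        rng.uniform(0, 1, n),          # sub-pixel proportion of true burned area
        rng.integers(0, 30, n),        # number of cloud-free observations
    ])
    # Synthetic truth: errors peak for partially burned, mid-tree-cover pixels.
    logit = 1.0 - 4.0 * np.abs(X[:, 1] - 0.5) - 0.02 * np.abs(X[:, 0] - 50) - 0.03 * X[:, 2]
    y = (rng.random(n) < 1.0 / (1.0 + np.exp(-logit))).astype(int)

    gam = LogisticGAM(s(0) + s(1) + s(2)).fit(X, y)
    gam.summary()                      # smooth partial effect of each factor
    ```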

  18. Scaling prediction errors to reward variability benefits error-driven learning in humans

    PubMed Central

    Schultz, Wolfram

    2015-01-01

    Effective error-driven learning requires individuals to adapt learning to environmental reward variability. The adaptive mechanism may involve decays in learning rate across subsequent trials, as shown previously, and rescaling of reward prediction errors. The present study investigated the influence of prediction error scaling and, in particular, the consequences for learning performance. Participants explicitly predicted reward magnitudes that were drawn from different probability distributions with specific standard deviations. By fitting the data with reinforcement learning models, we found scaling of prediction errors, in addition to the learning rate decay shown previously. Importantly, the prediction error scaling was closely related to learning performance, defined as accuracy in predicting the mean of reward distributions, across individual participants. In addition, participants who scaled prediction errors relative to standard deviation also presented with more similar performance for different standard deviations, indicating that increases in standard deviation did not substantially decrease “adapters'” accuracy in predicting the means of reward distributions. However, exaggerated scaling beyond the standard deviation resulted in impaired performance. Thus efficient adaptation makes learning more robust to changing variability. PMID:26180123
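
    A delta-rule sketch of the mechanism (illustrative parameters, not the fitted models): dividing the prediction error by the reward standard deviation makes the effective step size, and hence the accuracy of the learned mean, comparable across reward distributions. The trial-wise learning-rate decay reported in the paper is omitted for brevity.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def learn_mean(mean, sd, alpha=0.3, n_trials=500, scale=True):
        V = 0.0                                # predicted reward magnitude
        for r in rng.normal(mean, sd, n_trials):
            delta = r - V                      # reward prediction error
            if scale:
                delta /= sd                    # rescale error to reward variability
            V += alpha * delta
        return V

    for sd in (1.0, 5.0, 15.0):
        print(f"sd={sd:5.1f}  scaled: {learn_mean(10.0, sd):6.2f}   "
              f"unscaled: {learn_mean(10.0, sd, scale=False):6.2f}")
    ```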

  19. Improved Error Thresholds for Measurement-Free Error Correction

    NASA Astrophysics Data System (ADS)

    Crow, Daniel; Joynt, Robert; Saffman, M.

    2016-09-01

    Motivated by limitations and capabilities of neutral atom qubits, we examine whether measurement-free error correction can produce practical error thresholds. We show that this can be achieved by extracting redundant syndrome information, giving our procedure extra fault tolerance and eliminating the need for ancilla verification. The procedure is particularly favorable when multiqubit gates are available for the correction step. Simulations of the bit-flip, Bacon-Shor, and Steane codes indicate that coherent error correction can produce threshold error rates that are on the order of 10^-3 to 10^-4, comparable with or better than measurement-based values, and much better than previous results for other coherent error correction schemes. This indicates that coherent error correction is worthy of serious consideration for achieving protected logical qubits.

  20. Errors in clinical laboratories or errors in laboratory medicine?

    PubMed

    Plebani, Mario

    2006-01-01

    Laboratory testing is a highly complex process and, although laboratory services are relatively safe, they are not as safe as they could or should be. Clinical laboratories have long focused their attention on quality control methods and quality assessment programs dealing with analytical aspects of testing. However, a growing body of evidence accumulated in recent decades demonstrates that quality in clinical laboratories cannot be assured by merely focusing on purely analytical aspects. The more recent surveys on errors in laboratory medicine conclude that in the delivery of laboratory testing, mistakes occur more frequently before (pre-analytical) and after (post-analytical) the test has been performed. Most errors are due to pre-analytical factors (46-68.2% of total errors), while a high error rate (18.5-47% of total errors) has also been found in the post-analytical phase. Errors due to analytical problems have been significantly reduced over time, but there is evidence that, particularly for immunoassays, interference may have a serious impact on patients. A description of the most frequent and risky pre-, intra- and post-analytical errors and advice on practical steps for measuring and reducing the risk of errors is therefore given in the present paper. Many mistakes in the Total Testing Process are called "laboratory errors", although these may be due to poor communication, action taken by others involved in the testing process (e.g., physicians, nurses and phlebotomists), or poorly designed processes, all of which are beyond the laboratory's control. Likewise, there is evidence that laboratory information is only partially utilized. A recent document from the International Organization for Standardization (ISO) recommends a new, broader definition of the term "laboratory error" and a classification of errors according to different criteria. In a modern approach to total quality, centered on patients' needs and satisfaction, the risk of errors and mistakes

  1. Contour Error Map Algorithm

    NASA Technical Reports Server (NTRS)

    Merceret, Francis; Lane, John; Immer, Christopher; Case, Jonathan; Manobianco, John

    2005-01-01

    The contour error map (CEM) algorithm and the software that implements the algorithm are means of quantifying correlations between sets of time-varying data that are binarized and registered on spatial grids. The present version of the software is intended for use in evaluating numerical weather forecasts against observational sea-breeze data. In cases in which observational data come from off-grid stations, it is necessary to preprocess the observational data to transform them into gridded data. First, the wind direction is gridded and binarized so that D(i,j;n) is the input to CEM based on forecast data and d(i,j;n) is the input to CEM based on gridded observational data. Here, i and j are spatial indices representing 1.25-km intervals along the west-to-east and south-to-north directions, respectively; and n is a time index representing 5-minute intervals. A binary value of D or d = 0 corresponds to an offshore wind, whereas a value of D or d = 1 corresponds to an onshore wind. CEM includes two notable subalgorithms: One identifies and verifies sea-breeze boundaries; the other, which can be invoked optionally, performs an image-erosion function for the purpose of attempting to eliminate river-breeze contributions in the wind fields.
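
    The gridding/binarization step and the optional erosion pass might look like the following sketch (the coast orientation, grid, and names are assumptions, not the CEM source):

    ```python
    import numpy as np
    from scipy.ndimage import binary_erosion

    def binarize_onshore(wind_dir_deg, onshore_deg=270.0):
        """Return 1 where the wind has an onshore component, else 0."""
        rel = np.deg2rad(wind_dir_deg - onshore_deg)
        return (np.cos(rel) > 0).astype(np.uint8)

    wind = np.random.default_rng(0).uniform(0, 360, (60, 60))      # one 5-minute frame
    d = binarize_onshore(wind)                                     # gridded, binarized input
    d_eroded = binary_erosion(d, iterations=1).astype(np.uint8)    # trims thin features,
                                                                   # e.g. river-breeze artifacts
    ```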

  2. Systematic Alternatives to Proposal Preparation.

    ERIC Educational Resources Information Center

    Knirk, Frederick G.; And Others

    Educators who have to develop proposals must be concerned with making effective decisions. This paper discusses a number of educational systems management tools which can be used to reduce the time and effort in developing a proposal. In addition, ways are introduced to systematically increase the quality of the proposal through the development of…

  3. Sensitivity of SLR baselines to errors in Earth orientation

    NASA Technical Reports Server (NTRS)

    Smith, D. E.; Christodoulidis, D. C.

    1984-01-01

    The sensitivity of inter station distances derived from Satellite Laser Ranging (SLR) to errors in Earth orientation is discussed. An analysis experiment is performed which imposes a known polar motion error on all of the arcs used over this interval. The effect of the averaging of the errors over the tracking periods of individual sites is assessed. Baselines between stations that are supported by a global network of tracking stations are only marginally affected by errors in Earth orientation. The global network of stations retains its integrity even in the presence of systematic changes in the coordinate frame. The effect of these coordinate frame changes on the relative locations of the stations is minimal.

  4. Model of glucose sensor error components: identification and assessment for new Dexcom G4 generation devices.

    PubMed

    Facchinetti, Andrea; Del Favero, Simone; Sparacino, Giovanni; Cobelli, Claudio

    2015-12-01

    It is clinically well established that minimally invasive subcutaneous continuous glucose monitoring (CGM) sensors can significantly improve diabetes treatment. However, CGM readings are still not as reliable as those provided by standard fingerprick blood glucose (BG) meters. In addition to unavoidable random measurement noise, other components of sensor error are distortions due to the blood-to-interstitial glucose kinetics and systematic under-/overestimations associated with the sensor calibration process. A quantitative assessment of these components, and the ability to simulate them with precision, is of paramount importance in the design of CGM-based applications, e.g., the artificial pancreas (AP), and in their in silico testing. In the present paper, we identify and assess a model of sensor error for two sensors, i.e., the G4 Platinum (G4P) and the advanced G4 for artificial pancreas studies (G4AP), both belonging to the recently presented "fourth" generation of Dexcom CGM sensors but differing in their data processing. Results are also compared with those obtained by a sensor belonging to the previous, "third" generation by the same manufacturer, the SEVEN Plus (7P). For each sensor, the error model is derived from 12-h CGM recordings of two sensors used simultaneously and BG samples collected in parallel every 15 ± 5 min. Thanks to technological innovations, G4P outperforms 7P, with an average mean absolute relative difference (MARD) of 11.1% versus 14.2%, respectively, and an error roughly 30% lower in each component. Thanks to more sophisticated data processing algorithms, G4AP proved more reliable than G4P, with a MARD of 10.0% and a further decrease of about 20% in the error due to blood-to-interstitial glucose kinetics.
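
    The headline accuracy metric quoted above, mean absolute relative difference (MARD), is straightforward to compute from paired sensor readings and reference samples (the data below are invented):

    ```python
    import numpy as np

    def mard(cgm, bg):
        """Mean absolute relative difference (%) of CGM readings vs reference BG."""
        cgm, bg = np.asarray(cgm, float), np.asarray(bg, float)
        return 100.0 * np.mean(np.abs(cgm - bg) / bg)

    bg  = [110, 150,  95, 200, 130]    # fingerprick references (mg/dL)
    cgm = [118, 138, 101, 188, 141]    # paired sensor readings (mg/dL)
    print(f"MARD = {mard(cgm, bg):.1f}%")
    ```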

  5. The Role of Supralexical Prosodic Units in Speech Production: Evidence from the Distribution of Speech Errors

    ERIC Educational Resources Information Center

    Choe, Wook Kyung

    2013-01-01

    The current dissertation represents one of the first systematic studies of the distribution of speech errors within supralexical prosodic units. Four experiments were conducted to gain insight into the specific role of these units in speech planning and production. The first experiment focused on errors in adult English. These were found to be…

  6. Estimating errors in least-squares fitting

    NASA Technical Reports Server (NTRS)

    Richter, P. H.

    1995-01-01

    While least-squares fitting procedures are commonly used in data analysis and are extensively discussed in the literature devoted to this subject, the proper assessment of errors resulting from such fits has received relatively little attention. The present work considers statistical errors in the fitted parameters, as well as in the values of the fitted function itself, resulting from random errors in the data. Expressions are derived for the standard error of the fit, as a function of the independent variable, for the general nonlinear and linear fitting problems. Additionally, closed-form expressions are derived for some examples commonly encountered in the scientific and engineering fields, namely ordinary polynomial and Gaussian fitting functions. These results have direct application to the assessment of antenna gain and system temperature characteristics, in addition to a broad range of problems in data analysis. The effects of the nature of the data and the choice of fitting function on the ability to accurately model the system under study are discussed, and some general rules are deduced to assist workers intent on maximizing the amount of information obtained from a given set of measurements.
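
    For the linear (in the parameters) case, the quantities discussed here follow from the parameter covariance matrix C = s²(AᵀA)⁻¹, with A the design matrix and s² the residual variance; the standard error of the fitted function at each point is then √(diag(A C Aᵀ)). A compact sketch for an ordinary polynomial fit (synthetic data, not the paper's examples):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    x = np.linspace(0.0, 10.0, 30)
    y = 2.0 + 0.5 * x - 0.03 * x**2 + rng.normal(0.0, 0.2, x.size)

    deg = 2
    A = np.vander(x, deg + 1)                     # polynomial design matrix
    coef, res, *_ = np.linalg.lstsq(A, y, rcond=None)
    s2 = res[0] / (x.size - (deg + 1))            # residual variance
    C = s2 * np.linalg.inv(A.T @ A)               # parameter covariance matrix

    param_se = np.sqrt(np.diag(C))                # standard errors of the coefficients
    fit_se = np.sqrt(np.einsum('ij,jk,ik->i', A, C, A))   # standard error of the fit at each x
    ```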

  7. Grammatical Errors Produced by English Majors: The Translation Task

    ERIC Educational Resources Information Center

    Mohaghegh, Hamid; Zarandi, Fatemeh Mahmoudi; Shariati, Mohammad

    2011-01-01

    This study investigated the frequency of the grammatical errors related to the four categories of preposition, relative pronoun, article, and tense using the translation task. In addition, the frequencies of these grammatical errors in different categories and in each category were examined. The quantitative component of the study further looked…

  8. Parameter Estimation In Ensemble Data Assimilation To Characterize Model Errors In Surface-Layer Schemes Over Complex Terrain

    NASA Astrophysics Data System (ADS)

    Hacker, Joshua; Lee, Jared; Lei, Lili

    2014-05-01

    Numerical weather prediction (NWP) models have deficiencies in surface and boundary layer parameterizations, which may be particularly acute over complex terrain. Structural and physical model deficiencies are often poorly understood, and can be difficult to identify. Uncertain model parameters can lead to one class of model deficiencies when they are mis-specified. By augmenting the model state variables with parameters, data assimilation can be used to estimate the parameter distributions as long as the forecasts for observed variables are linearly dependent on the parameters. Reduced forecast (background) error shows that the parameter is accounting for some component of model error. Ensemble data assimilation has the favorable characteristic of providing ensemble-mean parameter estimates, eliminating some noise in the estimates when additional constraints on the error dynamics are unknown. This study focuses on coupling the Weather Research and Forecasting (WRF) NWP model with the Data Assimilation Research Testbed (DART) to estimate the Zilitinkevich parameter (CZIL). CZIL controls the thermal 'roughness length' for a given momentum roughness, thereby controlling heat and moisture fluxes through the surface layer by specifying the (unobservable) aerodynamic surface temperature. Month-long data assimilation experiments with 96 ensemble members, and grid spacing down to 3.3 km, provide a data set for interpreting parametric model errors in complex terrain. Experiments are during fall 2012 over the western U.S., and radiosonde, aircraft, satellite wind, surface, and mesonet observations are assimilated every 3 hours. One ensemble has a globally constant value of CZIL=0.1 (the WRF default value), while a second ensemble allows CZIL to vary over the range [0.01, 0.99], with distributions updated via the assimilation. Results show that the CZIL estimates do vary in time and space. Most often, forecasts are more skillful with the updated parameter values, compared to the
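
    For a scalar parameter and a single observation, state augmentation reduces to updating each member's parameter value through the sampled parameter-observation covariance. A toy sketch of that update is below; the forward model is a stand-in, not WRF/DART:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n_ens = 96
    czil = rng.uniform(0.01, 0.99, n_ens)          # augmented parameter ensemble

    def forecast_t2m(czil):
        # Stand-in forward model: 2-m temperature responds to CZIL (invented physics).
        return 290.0 - 3.0 * czil + rng.normal(0.0, 0.3, czil.shape)

    obs, obs_err = 288.6, 0.5                      # one surface temperature observation (K)
    hx = forecast_t2m(czil)                        # predicted observations per member

    gain = np.cov(czil, hx)[0, 1] / (np.var(hx, ddof=1) + obs_err**2)
    czil += gain * (obs + rng.normal(0.0, obs_err, n_ens) - hx)   # perturbed-obs update
    czil = np.clip(czil, 0.01, 0.99)               # keep parameters in their prior range

    print("posterior CZIL mean:", czil.mean().round(3))
    ```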

  9. Dopamine reward prediction error coding.

    PubMed

    Schultz, Wolfram

    2016-03-01

    Reward prediction errors consist of the differences between received and predicted rewards. They are crucial for basic forms of learning about rewards and make us strive for more rewards-an evolutionary beneficial trait. Most dopamine neurons in the midbrain of humans, monkeys, and rodents signal a reward prediction error; they are activated by more reward than predicted (positive prediction error), remain at baseline activity for fully predicted rewards, and show depressed activity with less reward than predicted (negative prediction error). The dopamine signal increases nonlinearly with reward value and codes formal economic utility. Drugs of addiction generate, hijack, and amplify the dopamine reward signal and induce exaggerated, uncontrolled dopamine effects on neuronal plasticity. The striatum, amygdala, and frontal cortex also show reward prediction error coding, but only in subpopulations of neurons. Thus, the important concept of reward prediction errors is implemented in neuronal hardware.

  10. Error correction for IFSAR

    DOEpatents

    Doerry, Armin W.; Bickel, Douglas L.

    2002-01-01

    IFSAR images of a target scene are generated by compensating for variations in vertical separation between collection surfaces defined for each IFSAR antenna by adjusting the baseline projection during image generation. In addition, height information from all antennas is processed before processing range and azimuth information in a normal fashion to create the IFSAR image.

  11. Medication Errors in Outpatient Pediatrics.

    PubMed

    Berrier, Kyla

    2016-01-01

    Medication errors may occur during parental administration of prescription and over-the-counter medications in the outpatient pediatric setting. Misinterpretation of medication labels and dosing errors are two types of errors in medication administration. Health literacy may play an important role in parents' ability to safely manage their child's medication regimen. There are several proposed strategies for decreasing these medication administration errors, including using standardized dosing instruments, using strictly metric units for medication dosing, and providing parents and caregivers with picture-based dosing instructions. Pediatric healthcare providers should be aware of these strategies and seek to implement many of them into their practices. PMID:27537086

  12. A theory of human error

    NASA Technical Reports Server (NTRS)

    Mcruer, D. T.; Clement, W. F.; Allen, R. W.

    1980-01-01

    Human error, a significant contributing factor in a very high proportion of civil transport, general aviation, and rotorcraft accidents, is investigated. Correction of the sources of human error requires that one attempt to reconstruct the underlying and contributing causes of error from the circumstantial causes cited in official investigative reports. A validated analytical theory of the input-output behavior of human operators involving manual control, communication, supervisory, and monitoring tasks relevant to aviation operations is presented. This theory of behavior, both appropriate and inappropriate, provides an insightful basis for investigating, classifying, and quantifying the needed cause-effect relationships governing the propagation of human error.

  13. Measurement Error and Equating Error in Power Analysis

    ERIC Educational Resources Information Center

    Phillips, Gary W.; Jiang, Tao

    2016-01-01

    Power analysis is a fundamental prerequisite for conducting scientific research. Without power analysis the researcher has no way of knowing whether the sample size is large enough to detect the effect he or she is looking for. This paper demonstrates how psychometric factors such as measurement error and equating error affect the power of…

  14. Anxiety and Error Monitoring: Increased Error Sensitivity or Altered Expectations?

    ERIC Educational Resources Information Center

    Compton, Rebecca J.; Carp, Joshua; Chaddock, Laura; Fineman, Stephanie L.; Quandt, Lorna C.; Ratliff, Jeffrey B.

    2007-01-01

    This study tested the prediction that the error-related negativity (ERN), a physiological measure of error monitoring, would be enhanced in anxious individuals, particularly in conditions with threatening cues. Participants made gender judgments about faces whose expressions were either happy, angry, or neutral. Replicating prior studies, midline…

  15. Improving medication administration error reporting systems. Why do errors occur?

    PubMed

    Wakefield, B J; Wakefield, D S; Uden-Holman, T

    2000-01-01

    Monitoring medication administration errors (MAE) is often included as part of the hospital's risk management program. While observation of actual medication administration is the most accurate way to identify errors, hospitals typically rely on voluntary incident reporting processes. Although incident reporting systems are more economical than other methods of error detection, incident reporting can also be a time-consuming process depending on the complexity or "user-friendliness" of the reporting system. Accurate incident reporting systems are also dependent on the ability of the practitioner to: 1) recognize an error has actually occurred; 2) believe the error is significant enough to warrant reporting; and 3) overcome the embarrassment of having committed a MAE and the fear of punishment for reporting a mistake (either one's own or another's mistake).

  16. The Role of Model and Initial Condition Error in Numerical Weather Forecasting Investigated with an Observing System Simulation Experiment

    NASA Technical Reports Server (NTRS)

    Prive, Nikki C.; Errico, Ronald M.

    2013-01-01

    A series of experiments that explore the roles of model and initial condition error in numerical weather prediction are performed using an observing system simulation experiment (OSSE) framework developed at the National Aeronautics and Space Administration Global Modeling and Assimilation Office (NASA/GMAO). The use of an OSSE allows the analysis and forecast errors to be explicitly calculated, and different hypothetical observing networks can be tested with ease. In these experiments, both a full global OSSE framework and an 'identical twin' OSSE setup are utilized to compare the behavior of the data assimilation system and evolution of forecast skill with and without model error. The initial condition error is manipulated by varying the distribution and quality of the observing network and the magnitude of observation errors. The results show that model error has a strong impact on both the quality of the analysis field and the evolution of forecast skill, including both systematic and unsystematic model error components. With a realistic observing network, the analysis state retains a significant quantity of error due to systematic model error. If errors of the analysis state are minimized, model error acts to rapidly degrade forecast skill during the first 24-48 hours of forward integration. In the presence of model error, the impact of observation errors on forecast skill is small, but in the absence of model error, observation errors cause a substantial degradation of the skill of medium range forecasts.

  17. Nutrition Informatics Applications in Clinical Practice: a Systematic Review

    PubMed Central

    North, Jennifer C.; Jordan, Kristine C.; Metos, Julie; Hurdle, John F.

    2015-01-01

    Nutrition care and metabolic control contribute to clinical patient outcomes. Biomedical informatics applications represent a way to potentially improve quality and efficiency of nutrition management. We performed a systematic literature review to identify clinical decision support and computerized provider order entry systems used to manage nutrition care. Online research databases were searched using a specific set of keywords. Additionally, bibliographies were referenced for supplemental citations. Four independent reviewers selected sixteen studies out of 364 for review. These papers described adult and neonatal nutrition support applications, blood glucose management applications, and other nutrition applications. Overall, results indicated that computerized interventions could contribute to improved patient outcomes and provider performance. Specifically, computer systems in the clinical setting improved nutrient delivery, rates of malnutrition, weight loss, blood glucose values, clinician efficiency, and error rates. In conclusion, further investigation of informatics applications on nutritional and performance outcomes utilizing rigorous study designs is recommended. PMID:26958233

  18. A geometric model for initial orientation errors in pigeon navigation.

    PubMed

    Postlethwaite, Claire M; Walker, Michael M

    2011-01-21

    All mobile animals respond to gradients in signals in their environment, such as light, sound, odours and magnetic and electric fields, but it remains controversial how they might use these signals to navigate over long distances. The Earth's surface is essentially two-dimensional, so two stimuli are needed to act as coordinates for navigation. However, no environmental fields are known to be simple enough to act as perpendicular coordinates on a two-dimensional grid. Here, we propose a model for navigation in which we assume that an animal has a simplified 'cognitive map' in which environmental stimuli act as perpendicular coordinates. We then investigate how systematic deviation of the contour lines of the environmental signals from a simple orthogonal arrangement can cause errors in position determination and lead to systematic patterns of directional errors in initial homing directions taken by pigeons. The model reproduces patterns of initial orientation errors seen in previously collected data from homing pigeons, predicts that errors should increase with distance from the loft, and provides a basis for efforts to identify further sources of orientation errors made by homing pigeons.

  19. The Error Distribution of BATSE Gamma-Ray Burst Locations

    NASA Technical Reports Server (NTRS)

    Briggs, Michael S.; Pendleton, Geoffrey N.; Kippen, R. Marc; Brainerd, J. J.; Hurley, Kevin; Connaughton, Valerie; Meegan, Charles A.

    1999-01-01

    Empirical probability models for BATSE gamma-ray burst (GRB) location errors are developed via a Bayesian analysis of the separations between BATSE GRB locations and locations obtained with the Interplanetary Network (IPN). Models are compared and their parameters estimated using 392 GRBs with single IPN annuli and 19 GRBs with intersecting IPN annuli. Most of the analysis is for the 4Br BATSE catalog; earlier catalogs are also analyzed. The simplest model that provides a good representation of the error distribution has 78% of the probability in a "core" term with a systematic error of 1.85 deg and the remainder in an extended tail with a systematic error of 5.1 deg, which implies a 68% confidence radius for bursts with negligible statistical uncertainties of 2.2 deg. There is evidence for a more complicated model in which the error distribution depends on the BATSE data type that was used to obtain the location. Bright bursts are typically located using the CONT data type, and according to the more complicated model, the 68% confidence radius for CONT-located bursts with negligible statistical uncertainties is 2.0 deg.

  1. Effect of multiple error sources on the calibration uncertainty.

    PubMed

    Badocco, Denis; Lavagnini, Irma; Mondin, Andrea; Pastore, Paolo

    2015-06-15

    The calibration uncertainty associated with the determination of metals at trace levels in a drinking water sample by ICP-MS was estimated when signals were affected by two error contributions, namely instrumental errors and operational-condition errors. The calibration uncertainty was studied by using J concentration levels measured I times, as usual in experimental calibration procedures. The instrumental error was random in character, whilst the operational error was assumed systematic at each concentration level but random among the J levels. The presence or absence of the two error contributions was determined with an F-test between the ordinary least squares residual variance of the mean responses at each concentration and a pooled variance of the replicates. The theory was applied to the calibration of 30 elements present in a multi-standard solution and then to the analysis of boron, calcium, lithium, barium and manganese in a real drinking water sample. The need for the proposed calibration approach was evident for almost all the analyzed elements. When both contributions were present, the uncertainty determined using the two-component variance regression was greater than that obtainable from the one-variance regression.
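
    One common formulation of the F-test described above compares the lack-of-fit variance of the J level means with the pooled replicate variance, scaled to the variance of a mean of I replicates; a significant F suggests the extra, between-level "operational" component. A sketch with invented data:

    ```python
    import numpy as np
    from scipy import stats

    y = np.array([[10.1, 10.3, 10.2],     # I = 3 replicates at each of
                  [19.8, 20.4, 20.1],     # J = 5 concentration levels
                  [30.9, 30.5, 30.7],
                  [39.6, 40.5, 40.1],
                  [50.4, 50.0, 50.6]])
    conc = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
    J, I = y.shape

    means = y.mean(axis=1)
    resid = means - np.polyval(np.polyfit(conc, means, 1), conc)
    s2_means = np.sum(resid**2) / (J - 2)                     # OLS residual variance of the means
    s2_rep = np.sum((y - means[:, None])**2) / (J * (I - 1))  # pooled replicate variance

    F = s2_means / (s2_rep / I)   # under instrumental-only error, Var(mean) = sigma^2 / I
    p = stats.f.sf(F, J - 2, J * (I - 1))
    print(f"F = {F:.2f}, p = {p:.3f}")
    ```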

  2. Error coding simulations in C

    NASA Technical Reports Server (NTRS)

    Noble, Viveca K.

    1994-01-01

    When data is transmitted through a noisy channel, errors are produced within the data, rendering it indecipherable. Through the use of error control coding techniques, the bit error rate can be reduced to any desired level without sacrificing the transmission data rate. The Astrionics Laboratory at Marshall Space Flight Center has decided to use a modular, end-to-end telemetry data simulator to simulate the transmission of data from flight to ground and various methods of error control. The simulator includes modules for random data generation, data compression, Consultative Committee for Space Data Systems (CCSDS) transfer frame formation, error correction/detection, error generation and error statistics. The simulator utilizes a concatenated coding scheme which includes the CCSDS standard (255,223) Reed-Solomon (RS) code over GF(2^8) with interleave depth of 5 as the outermost code, a (7, 1/2) convolutional code as an inner code, and the CCSDS-recommended (n, n-16) cyclic redundancy check (CRC) code as the innermost code, where n is the number of information bits plus 16 parity bits. The signal-to-noise ratio required at the receiver for a desired bit error rate is greatly reduced through the use of forward error correction techniques. Even greater coding gain is provided through the use of a concatenated coding scheme. Interleaving/deinterleaving is necessary to randomize burst errors which may appear at the input of the RS decoder. The burst correction capability length is increased in proportion to the interleave depth. The modular nature of the simulator allows for inclusion or exclusion of modules as needed. This paper describes the development and operation of the simulator, the verification of a C-language Reed-Solomon code, and the possibility of using Comdisco SPW(tm) as a tool for determining optimal error control schemes.
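
    As a flavor of the innermost code, here is a minimal sketch of a CCSDS-style (n, n-16) CRC check, i.e., the CRC-16 with polynomial x^16 + x^12 + x^5 + 1 and all-ones preset (the frame contents are invented; this illustrates the code, not the simulator's C implementation):

    ```python
    def crc16_ccsds(data: bytes) -> int:
        """CRC-16 (poly 0x1021, init 0xFFFF, no reflection, no final XOR)."""
        crc = 0xFFFF
        for byte in data:
            crc ^= byte << 8
            for _ in range(8):
                crc = ((crc << 1) ^ 0x1021) & 0xFFFF if crc & 0x8000 else (crc << 1) & 0xFFFF
        return crc

    frame = b"telemetry payload"
    codeword = frame + crc16_ccsds(frame).to_bytes(2, "big")   # n info bits + 16 parity bits
    assert crc16_ccsds(codeword) == 0            # an uncorrupted codeword re-checks to zero
    assert crc16_ccsds(b"123456789") == 0x29B1   # published check value for this CRC
    ```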

  3. A review article on reducing errors in medical laboratories.

    PubMed

    Mohammedsaleh, Zuhair M; Mohammedsaleh, Fayez

    2014-07-29

    The current article examines modern practices for reducing errors in medical laboratories. The paper sought to examine the methods that different countries apply to reduce errors in medical laboratories, and the relationship between inadequate training of laboratory personnel and error causation. A total of 17 research articles were reviewed. The paper compares pathology laboratory practices in the US, Canada, the UK and Australia regarding laboratory staff skills and error reduction. It finds that, although some of the developed countries have employed advanced technology to reduce errors, there is still a great need for sophisticated medical equipment, and that levels of training for medical technicians remain too low to reduce errors to the required levels. The article recommends the application of advanced technology to error reduction, and the training of technicians in best practices for reducing errors.

  4. Medical error and related factors during internship and residency.

    PubMed

    Ahmadipour, Habibeh; Mortazavi, Nahid

    2015-01-01

    It is difficult to determine the real incidence of medical errors due to the lack of a precise definition of errors, as well as the failure to report them under certain circumstances. We carried out a cross-sectional study in Kerman University of Medical Sciences, Iran in 2013. The participants were selected through the census method. The data were collected using a self-administered questionnaire, which consisted of questions on the participants' demographic data and on the medical errors committed. The data were analysed with SPSS 19. It was found that 270 participants had committed medical errors; there was no significant difference in the frequency of errors committed by interns and residents. Among residents, the most common error was misdiagnosis; among interns, it was errors related to history-taking and physical examination. Considering that medical errors are common in the clinical setting, the education system should train interns and residents to prevent their occurrence. In addition, the system should develop a positive attitude among them so that they can deal better with medical errors.

  5. Error-related electrocorticographic activity in humans during continuous movements.

    PubMed

    Milekovic, Tomislav; Ball, Tonio; Schulze-Bonhage, Andreas; Aertsen, Ad; Mehring, Carsten

    2012-04-01

    Brain-machine interface (BMI) devices make errors in decoding. Detecting these errors online from neuronal activity can improve BMI performance by modifying the decoding algorithm and by correcting the errors made. Here, we study the neuronal correlates of two different types of errors which can both be employed in BMI: (i) the execution error, due to inaccurate decoding of the subjects' movement intention; (ii) the outcome error, due to not achieving the goal of the movement. We demonstrate that, in electrocorticographic (ECoG) recordings from the surface of the human brain, strong error-related neural responses (ERNRs) for both types of errors can be observed. ERNRs were present in the low and high frequency components of the ECoG signals, with both signal components carrying partially independent information. Moreover, the observed ERNRs can be used to discriminate between error types, with high accuracy (≥83%) obtained already from single electrode signals. We found ERNRs in multiple cortical areas, including motor and somatosensory cortex. As the motor cortex is the primary target area for recording control signals for a BMI, an adaptive motor BMI utilizing these error signals may not require additional electrode implants in other brain areas.

  6. Operational Interventions to Maintenance Error

    NASA Technical Reports Server (NTRS)

    Kanki, Barbara G.; Walter, Diane; Dulchinos, VIcki

    1997-01-01

    A significant proportion of aviation accidents and incidents are known to be tied to human error. However, research on flight operational errors has shown that so-called pilot error often involves a variety of human factors issues and not a simple lack of individual technical skills. In aircraft maintenance operations, there is similar concern that maintenance errors which may lead to incidents and accidents are related to a large variety of human factors issues. Although maintenance error data and research are limited, industry initiatives involving human factors training in maintenance have become increasingly accepted as one type of maintenance error intervention. Conscientious efforts have been made in re-inventing the 'team' concept for maintenance operations and in tailoring programs to fit the needs of technical operations. Nevertheless, there remains a dual challenge: 1) to develop human factors interventions which are directly supported by reliable human error data, and 2) to integrate human factors concepts into the procedures and practices of everyday technical tasks. In this paper, we describe several varieties of human factors interventions and focus on two specific alternatives which target problems related to procedures and practices: namely, 1) structured on-the-job training and 2) procedure re-design. We hope to demonstrate that the key to leveraging the impact of these solutions comes from focused interventions; that is, interventions which are derived from a clear understanding of specific maintenance errors, their operational context and human factors components.

  7. Human Error: A Concept Analysis

    NASA Technical Reports Server (NTRS)

    Hansen, Frederick D.

    2007-01-01

    Human error is the subject of research in almost every industry and profession of our times. The term is part of our daily language and intuitively understood by most people; however, it would be premature to assume that everyone's understanding of human error is the same. For example, human error is used to describe the outcome or consequence of human action, the causal factor of an accident, deliberate violations, and the actual action taken by a human being. As a result, researchers rarely agree on either a specific definition or how to prevent human error. The purpose of this article is to explore the specific concept of human error using Concept Analysis as described by Walker and Avant (1995). The concept of human error is examined as currently used in the literature of a variety of industries and professions. Defining attributes and examples of model, borderline, and contrary cases are described. The antecedents and consequences of human error are also discussed, and a definition of human error is offered.

  9. Explaining Errors in Children's Questions

    ERIC Educational Resources Information Center

    Rowland, Caroline F.

    2007-01-01

    The ability to explain the occurrence of errors in children's speech is an essential component of successful theories of language acquisition. The present study tested some generativist and constructivist predictions about error on the questions produced by ten English-learning children between 2 and 5 years of age. The analyses demonstrated that,…

  10. Dual Processing and Diagnostic Errors

    ERIC Educational Resources Information Center

    Norman, Geoff

    2009-01-01

    In this paper, I review evidence from two theories in psychology relevant to diagnosis and diagnostic errors. "Dual Process" theories of thinking, frequently mentioned with respect to diagnostic error, propose that categorization decisions can be made with either a fast, unconscious, contextual process called System 1 or a slow, analytical,…

  11. Quantifying error distributions in crowding.

    PubMed

    Hanus, Deborah; Vul, Edward

    2013-03-22

    When multiple objects are in close proximity, observers have difficulty identifying them individually. Two classes of theories aim to account for this crowding phenomenon: spatial pooling and spatial substitution. Variations of these accounts predict different patterns of errors in crowded displays. Here we aim to characterize the kinds of errors that people make during crowding by comparing a number of error models across three experiments in which we manipulate flanker spacing, display eccentricity, and precueing duration. We find that both spatial intrusions and individual letter confusions play a considerable role in errors. Moreover, we find no evidence that a naïve pooling model that predicts errors based on a nonadditive combination of target and flankers explains errors better than an independent intrusion model (indeed, in our data, an independent intrusion model is slightly, but significantly, better). Finally, we find that manipulating trial difficulty in any way (spacing, eccentricity, or precueing) produces homogeneous changes in error distributions. Together, these results provide quantitative baselines for predictive models of crowding errors, suggest that pooling and spatial substitution models are difficult to tease apart, and imply that manipulations of crowding all influence a common mechanism that impacts subject performance.

  12. Children's Scale Errors with Tools

    ERIC Educational Resources Information Center

    Casler, Krista; Eshleman, Angelica; Greene, Kimberly; Terziyan, Treysi

    2011-01-01

    Children sometimes make "scale errors," attempting to interact with tiny object replicas as though they were full size. Here, we demonstrate that instrumental tools provide special insight into the origins of scale errors and, moreover, into the broader nature of children's purpose-guided reasoning and behavior with objects. In Study 1, 1.5- to…

  13. Enhanced notification of infusion pump programming errors.

    PubMed

    Evans, R Scott; Carlson, Rick; Johnson, Kyle V; Palmer, Brent K; Lloyd, James F

    2010-01-01

    Hospitalized patients receive countless doses of medications through manually programmed infusion pumps. Many medication errors are the result of programming incorrect pump settings. When used appropriately, smart pumps have the potential to detect some programming errors. However, based on the current use of smart pumps, there are conflicting reports on their ability to prevent patient harm without additional capabilities and interfaces to electronic medical records (EMR). We developed a smart system that is connected to the EMR, including medication charting, that can detect and alert on potential pump programming errors. Acceptable programming limits of dose rate increases, in addition to initial drug doses, for 23 high-risk medications are monitored. During 22.5 months in a 24-bed ICU, 970 alerts (4% of 25,040 doses, 1.4 alerts per day) were generated for pump settings programmed outside acceptable limits, of which 137 (14%) were found to have prevented potential harm. Monitoring pump programming at the system level rather than at the pump provides access to additional patient data in the EMR, including previous dosage levels, other concurrent medications and caloric intake, age, gender, vitals and laboratory results.
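
    The system-level check described above can be pictured as a simple limits lookup against the programmed settings. Everything in the sketch below (drug name, limit values, field names) is hypothetical and only illustrates the idea of alerting on initial doses and dose-rate increases outside acceptable limits; it is not the interface of any real smart pump.

        # Hypothetical limits table; units and values are invented.
        LIMITS = {"heparin": {"max_initial": 25.0, "max_increase": 5.0}}  # units/kg/h

        def check_program(drug, programmed_rate, previous_rate=None):
            lim = LIMITS.get(drug)
            if lim is None:
                return None                           # drug not monitored
            if previous_rate is None:
                if programmed_rate > lim["max_initial"]:
                    return f"ALERT: initial {drug} rate {programmed_rate} > {lim['max_initial']}"
            elif programmed_rate - previous_rate > lim["max_increase"]:
                return f"ALERT: {drug} rate increase {programmed_rate - previous_rate} > {lim['max_increase']}"
            return None

        print(check_program("heparin", 30.0))         # initial dose above limit
        print(check_program("heparin", 22.0, 15.0))   # increase above limit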

  14. Challenge and error: critical events and attention-related errors.

    PubMed

    Cheyne, James Allan; Carriere, Jonathan S A; Solman, Grayden J F; Smilek, Daniel

    2011-12-01

    Attention lapses resulting from reactivity to task challenges and their consequences constitute a pervasive factor affecting everyday performance errors and accidents. A bidirectional model of attention lapses (error↔attention-lapse: Cheyne, Solman, Carriere, & Smilek, 2009) argues that errors beget errors by generating attention lapses: resource-depleting cognitions that interfere with attention to subsequent task challenges. Attention lapses lead to errors, and errors themselves are a potent consequence that often leads to further attention lapses, potentially initiating a spiral into more serious errors. We investigated this challenge-induced error↔attention-lapse model using the Sustained Attention to Response Task (SART), a GO-NOGO task requiring continuous attention and response to a number series and the withholding of responses to a rare NOGO digit. We found that response speed and increased commission errors following task challenges were a function of temporal distance from, and prior performance on, previous NOGO trials. We conclude by comparing and contrasting the present theory and findings to those based on choice paradigms and argue that the present findings have implications for the generality of conflict monitoring and control models.

  15. Human error in recreational boating.

    PubMed

    McKnight, A James; Becker, Wayne W; Pettit, Anthony J; McKnight, A Scott

    2007-03-01

    Each year over 600 people die and more than 4000 are reported injured in recreational boating accidents. As with most other accidents, human error is the major contributor. U.S. Coast Guard reports of 3358 accidents were analyzed to identify errors in each of the boat types by which statistics are compiled: auxiliary (motor) sailboats, cabin motorboats, canoes and kayaks, house boats, personal watercraft, open motorboats, pontoon boats, row boats, sail-only boats. The individual errors were grouped into categories on the basis of similarities in the behavior involved. Those presented here are the categories accounting for at least 5% of all errors when summed across boat types. The most revealing and significant finding is the extent to which the errors vary across types. Since boating is carried out with one or two types of boats for long periods of time, effective accident prevention measures, including safety instruction, need to be geared to individual boat types.

  16. Angle interferometer cross axis errors

    SciTech Connect

    Bryan, J.B.; Carter, D.L.; Thompson, S.L.

    1994-01-01

    Angle interferometers are commonly used to measure surface plate flatness. An error can exist when the centerline of the double corner cube mirror assembly is not square to the surface plate and the guide bar for the mirror sled is curved. Typical errors can be one to two microns per meter. A similar error can exist in the calibration of rotary tables when the centerline of the double corner cube mirror assembly is not square to the axes of rotation of the angle calibrator and the calibrator axis is not parallel to the rotary table axis. Commercial double corner cube assemblies typically have non-parallelism errors of ten milli-radians between their centerlines and their sides and similar values for non-squareness between their centerlines and end surfaces. The authors have developed a simple method for measuring these errors and correcting them by remachining the reference surfaces.

  17. Onorbit IMU alignment error budget

    NASA Technical Reports Server (NTRS)

    Corson, R. W.

    1980-01-01

    The Star Tracker, Crew Optical Alignment Sight (COAS), and Inertial Measurement Unit (IMU) form a complex navigation system with a multitude of error sources. A complete list of the system errors is presented. The errors were combined in a rational way to yield an estimate of the IMU alignment accuracy for STS-1. The expected standard deviation in the IMU alignment error for STS-1 type alignments was determined to be 72 arc seconds per axis for star tracker alignments and 188 arc seconds per axis for COAS alignments. These estimates are based on current knowledge of the star tracker, COAS, IMU, and navigation base error specifications, and were partially verified by preliminary Monte Carlo analysis.

  18. A theory of human error

    NASA Technical Reports Server (NTRS)

    Mcruer, D. T.; Clement, W. F.; Allen, R. W.

    1981-01-01

    Human errors tend to be treated in terms of clinical and anecdotal descriptions, from which remedial measures are difficult to derive. Correction of the sources of human error requires an attempt to reconstruct underlying and contributing causes of error from the circumstantial causes cited in official investigative reports. A comprehensive analytical theory of the cause-effect relationships governing propagation of human error is indispensable to a reconstruction of the underlying and contributing causes. A validated analytical theory of the input-output behavior of human operators involving manual control, communication, supervisory, and monitoring tasks which are relevant to aviation, maritime, automotive, and process control operations is highlighted. This theory of behavior, both appropriate and inappropriate, provides an insightful basis for investigating, classifying, and quantifying the needed cause-effect relationships governing propagation of human error.

  19. Error diffusion with a more symmetric error distribution

    NASA Astrophysics Data System (ADS)

    Fan, Zhigang

    1994-05-01

    In this paper a new error diffusion algorithm is presented that effectively eliminates the `worm' artifacts appearing in the standard methods. The new algorithm processes each scanline of the image in two passes, a forward pass followed by a backward one. This enables the error made at one pixel to be propagated to all the `future' pixels. A much more symmetric error distribution is achieved than that of the standard methods. The frequency response of the noise shaping filter associated with the new algorithm is mirror-symmetric in magnitude.
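
    As a rough illustration of the forward-then-backward idea, the sketch below diffuses part of each provisional quantization error rightward on the first pass and pushes the remaining error leftward on the second pass. The 0.5 weight and the exact error split are assumptions for illustration, not Fan's published filter.

        import numpy as np

        def two_pass_scanline(row):
            x = row.astype(float).copy()
            # Forward pass: diffuse half of each provisional quantization
            # error to the next pixel on the right.
            for i in range(len(x) - 1):
                e = x[i] - (1.0 if x[i] >= 0.5 else 0.0)
                x[i + 1] += 0.5 * e
            # Backward pass: binarize right-to-left, sending the remaining
            # error left, so errors reach 'future' pixels in both directions.
            out = np.zeros_like(x)
            for i in range(len(x) - 1, 0, -1):
                out[i] = 1.0 if x[i] >= 0.5 else 0.0
                x[i - 1] += x[i] - out[i]
            out[0] = 1.0 if x[0] >= 0.5 else 0.0
            return out

        print(two_pass_scanline(np.full(16, 0.3)))   # ~30% of pixels turn on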

  20. Errors as allies: error management training in health professions education.

    PubMed

    King, Aimee; Holder, Michael G; Ahmed, Rami A

    2013-06-01

    This paper adopts methods from the organisational team training literature to outline how health professions education can improve patient safety. We argue that health educators can improve training quality by intentionally encouraging errors during simulation-based team training. Preventable medical errors are inevitable, but encouraging errors in low-risk settings like simulation can allow teams to have better emotional control and foresight to manage the situation if it occurs again with live patients. Our paper outlines an innovative approach for delivering team training.

  1. Application of human error analysis to aviation and space operations

    SciTech Connect

    Nelson, W.R.

    1998-03-01

    For the past several years at the Idaho National Engineering and Environmental Laboratory (INEEL) the authors have been working to apply methods of human error analysis to the design of complex systems. They have focused on adapting human reliability analysis (HRA) methods that were developed for Probabilistic Safety Assessment (PSA) for application to system design. They are developing methods so that human errors can be systematically identified during system design, the potential consequences of each error can be assessed, and potential corrective actions (e.g. changes to system design or procedures) can be identified. The primary vehicle the authors have used to develop and apply these methods has been a series of projects sponsored by the National Aeronautics and Space Administration (NASA) to apply human error analysis to aviation operations. They are currently adapting their methods and tools of human error analysis to the domain of air traffic management (ATM) systems. Under the NASA-sponsored Advanced Air Traffic Technologies (AATT) program they are working to address issues of human reliability in the design of ATM systems to support the development of a free flight environment for commercial air traffic in the US. They are also currently testing the application of their human error analysis approach for space flight operations. They have developed a simplified model of the critical habitability functions for the space station Mir, and have used this model to assess the effects of system failures and human errors that have occurred in the wake of the collision incident last year. They are developing an approach so that lessons learned from Mir operations can be systematically applied to design and operation of long-term space missions such as the International Space Station (ISS) and the manned Mars mission.

  2. Systematics and limit calculations

    SciTech Connect

    Fisher, Wade (Fermilab)

    2006-12-01

    This note discusses the estimation of systematic uncertainties and their incorporation into upper limit calculations. Two different approaches to reducing systematics and their degrading impact on upper limits are introduced. An improved χ² function is defined which is useful in comparing Poisson distributed data with models marginalized by systematic uncertainties. Also, a technique using profile likelihoods is introduced which provides a means of constraining the degrading impact of systematic uncertainties on limit calculations.
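
    A minimal sketch of the profile-likelihood idea for a single Poisson count with one Gaussian-constrained nuisance parameter follows; all numbers (signal, background, the 10% systematic) are invented for illustration.

        import numpy as np
        from scipy.optimize import minimize_scalar

        # Toy setup: observed count n, expectation mu*s + b*(1 + sys_frac*theta),
        # with theta ~ N(0, 1) a Gaussian-constrained nuisance parameter.
        n, s, b, sys_frac = 12, 5.0, 8.0, 0.10

        def chi2(mu, theta):
            lam = mu * s + b * (1.0 + sys_frac * theta)
            # -2 ln L for a Poisson count (up to a constant) plus the constraint.
            return 2.0 * (lam - n * np.log(lam)) + theta ** 2

        def profile_chi2(mu):
            # Minimize over the nuisance parameter at each fixed mu.
            return minimize_scalar(lambda th: chi2(mu, th),
                                   bounds=(-5.0, 5.0), method="bounded").fun

        for mu in (0.0, 0.5, 1.0, 2.0):
            print(f"mu = {mu}: profile chi2 = {profile_chi2(mu):.3f}")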

  3. SU-E-I-83: Error Analysis of Multi-Modality Image-Based Volumes of Rodent Solid Tumors Using a Preclinical Multi-Modality QA Phantom

    SciTech Connect

    Lee, Y; Fullerton, G; Goins, B

    2015-06-15

    Purpose: In our previous study a preclinical multi-modality quality assurance (QA) phantom that contains five tumor-simulating test objects with 2, 4, 7, 10 and 14 mm diameters was developed for accurate tumor size measurement by researchers during cancer drug development and testing. This study analyzed the errors during tumor volume measurement from preclinical magnetic resonance (MR), micro-computed tomography (micro-CT) and ultrasound (US) images acquired in a rodent tumor model using the preclinical multi-modality QA phantom. Methods: Using preclinical 7-Tesla MR, US and micro-CT scanners, images were acquired of subcutaneous SCC4 tumor xenografts in nude rats (3–4 rats per group; 5 groups) along with the QA phantom using the same imaging protocols. After tumors were excised, in-air micro-CT imaging was performed to determine reference tumor volume. Volumes measured for the rat tumors and phantom test objects were calculated using the formula V = (π/6)*a*b*c, where a, b and c are the maximum diameters in three perpendicular dimensions determined by the three imaging modalities. Then linear regression analysis was performed to compare image-based tumor volumes with the reference tumor volume and known test object volume for the rats and the phantom, respectively. Results: The slopes of the regression lines for in-vivo tumor volumes measured by the three imaging modalities were 1.021, 1.101 and 0.862 for MRI, micro-CT and US, respectively. For the phantom, the slopes were 0.9485, 0.9971 and 0.9734 for MRI, micro-CT and US, respectively. Conclusion: For both the animal and phantom studies, random and systematic errors were observed. Random errors were observer-dependent and systematic errors were mainly due to the selected imaging protocols and/or measurement method. In the animal study, there were additional systematic errors attributed to the ellipsoidal assumption for tumor shape. The systematic errors measured using the QA phantom need to be taken into account to reduce measurement errors.
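
    As a toy illustration of the volume formula and the slope comparison, assuming perfectly spherical test objects and an invented 2% measurement bias:

        import numpy as np

        def ellipsoid_volume(a, b, c):
            # V = (pi/6) * a * b * c, with a, b, c the maximum diameters
            # in three perpendicular dimensions.
            return np.pi / 6.0 * a * b * c

        diameters = np.array([2.0, 4.0, 7.0, 10.0, 14.0])   # test objects (mm)
        reference = ellipsoid_volume(diameters, diameters, diameters)
        # Invented image-based volumes with a 2% systematic oversizing.
        measured = 1.02 * reference + np.random.default_rng(1).normal(0.0, 5.0, 5)

        slope, intercept = np.polyfit(reference, measured, 1)
        print(f"regression slope: {slope:.3f}")  # slope near 1 means unbiased sizing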

  4. Non-Systematic Complex Number RS Coded OFDM by Unique Word Prefix

    NASA Astrophysics Data System (ADS)

    Huemer, Mario; Hofbauer, Christian; Huber, Johannes B.

    2012-01-01

    In this paper we expand our recently introduced concept of UW-OFDM (unique word orthogonal frequency division multiplexing). In UW-OFDM the cyclic prefixes (CPs) are replaced by deterministic sequences, the so-called unique words (UWs). The UWs are generated by appropriately loading a set of redundant subcarriers. In this way a systematic complex-number Reed-Solomon (RS) code construction is introduced in a quite natural way, because an RS code may be defined as the set of vectors for which a block of successive zeros occurs in the other domain w.r.t. a discrete Fourier transform. (For a fixed block different from zero, i.e., a UW, a coset code of an RS code is generated.) A remaining problem in the original systematic coded UW-OFDM concept is the fact that the redundant subcarrier symbols disproportionately contribute to the mean OFDM symbol energy. In this paper we introduce the concept of non-systematic coded UW-OFDM, where the redundancy is no longer allocated to dedicated subcarriers, but distributed over all subcarriers. We derive optimum complex-valued code generator matrices matched to the BLUE (best linear unbiased estimator) and to the LMMSE (linear minimum mean square error) data estimator, respectively. With the help of simulations we highlight the advantageous spectral properties and the superior BER (bit error ratio) performance of non-systematic coded UW-OFDM compared to systematic coded UW-OFDM as well as to CP-OFDM in AWGN (additive white Gaussian noise) and in frequency selective environments.
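
    A toy numerical sketch of the UW mechanism that systematic coded UW-OFDM builds on: choose a set of redundant subcarriers and solve for their values so that the time-domain symbol ends in a fixed block (here a zero UW). The dimensions and the subcarrier choice below are arbitrary assumptions, not the paper's parameters.

        import numpy as np

        N, Nu = 16, 4
        # F[n, j] maps subcarrier j to time sample n (IDFT basis).
        F = np.fft.ifft(np.eye(N), axis=0)
        tail = F[-Nu:, :]                 # rows producing the last Nu samples

        red = np.array([2, 6, 10, 14])    # redundant subcarriers (a choice)
        data = np.setdiff1d(np.arange(N), red)

        rng = np.random.default_rng(0)
        Xd = rng.choice([-1.0, 1.0], size=N - Nu).astype(complex)  # BPSK data
        uw = np.zeros(Nu, dtype=complex)  # unique word (here: all zeros)

        # Load the redundant subcarriers so the time-domain tail equals the UW.
        Xr = np.linalg.solve(tail[:, red], uw - tail[:, data] @ Xd)
        X = np.zeros(N, dtype=complex)
        X[data], X[red] = Xd, Xr
        x = np.fft.ifft(X)
        print("max |tail - UW|:", np.abs(x[-Nu:] - uw).max())   # ~1e-16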

  5. The Effects of Computational Modeling Errors on the Estimation of Statistical Mechanical Variables.

    PubMed

    Faver, John C; Yang, Wei; Merz, Kenneth M

    2012-10-01

    Computational models used in the estimation of thermodynamic quantities of large chemical systems often require approximate energy models that rely on parameterization and cancellation of errors to yield agreement with experimental measurements. In this work, we show how energy function errors propagate when computing statistical mechanics-derived thermodynamic quantities. Assuming that each microstate included in a statistical ensemble has a measurable amount of error in its calculated energy, we derive low-order expressions for the propagation of these errors in free energy, average energy, and entropy. Through gedanken experiments we show the expected behavior of these error propagation formulas on hypothetical energy surfaces. For very large microstate energy errors, these low-order formulas disagree with estimates from Monte Carlo simulations of error propagation. Hence, such simulations of error propagation may be required when using poor potential energy functions. Propagated systematic errors predicted by these methods can be removed from computed quantities, while propagated random errors yield uncertainty estimates. Importantly, we find that end-point free energy methods maximize random errors and that local sampling of potential energy wells decreases random error significantly. Hence, end-point methods should be avoided in energy computations and should be replaced by methods that incorporate local sampling. The techniques described herein will be used in future work involving the calculation of free energies of biomolecular processes, where error corrections are expected to yield improved agreement with experiment.
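
    The Monte Carlo style of error propagation mentioned above can be sketched in a few lines: perturb each microstate energy with a random error and observe the resulting spread and bias of the free energy. All energies and the error magnitude below are invented.

        import numpy as np

        rng = np.random.default_rng(0)
        kT = 0.6                                  # arbitrary energy units
        E = rng.uniform(0.0, 3.0, size=50)        # hypothetical microstate energies

        def free_energy(energies):
            return -kT * np.log(np.sum(np.exp(-energies / kT)))

        # Perturb every microstate energy with a random error of spread sigma
        # (an assumption) and observe the spread and bias of the free energy.
        sigma = 0.2
        F = np.array([free_energy(E + rng.normal(0.0, sigma, E.size))
                      for _ in range(2000)])
        F0 = free_energy(E)
        print(f"F = {F0:.3f}, propagated sd = {F.std():.3f}, "
              f"bias = {F.mean() - F0:+.3f}")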

  6. Effect of MLC leaf position, collimator rotation angle, and gantry rotation angle errors on intensity-modulated radiotherapy plans for nasopharyngeal carcinoma

    SciTech Connect

    Bai, Sen; Li, Guangjun; Wang, Maojie; Jiang, Qinfeng; Zhang, Yingjie; Wei, Yuquan

    2013-07-01

    The purpose of this study was to investigate the effect of multileaf collimator (MLC) leaf position, collimator rotation angle, and accelerator gantry rotation angle errors on intensity-modulated radiotherapy plans for nasopharyngeal carcinoma. To compare dosimetric differences between the simulated plans and the clinical plans using evaluation parameters, 6 patients with nasopharyngeal carcinoma were selected for simulation of systematic and random MLC leaf position errors, collimator rotation angle errors, and accelerator gantry rotation angle errors. Dose distributions were highly sensitive to systematic MLC leaf position errors, with the effect depending on field size. When the systematic MLC position errors were 0.5, 1, and 2 mm, respectively, the maximum values of the mean dose deviation, observed in the parotid glands, were 4.63%, 8.69%, and 18.32%, respectively. The dosimetric effect was comparatively small for systematic MLC shift errors. For random MLC errors up to 2 mm and collimator and gantry rotation angle errors up to 0.5°, the dosimetric effect was negligible. We suggest that quality control be regularly conducted for MLC leaves, so as to ensure that systematic MLC leaf position errors are within 0.5 mm. Because the dosimetric effect of 0.5° collimator and gantry rotation angle errors is negligible, it can be concluded that setting a proper threshold for allowed errors of collimator and gantry rotation angle may increase treatment efficacy and reduce treatment time.

  7. Effects of Setup Errors and Shape Changes on Breast Radiotherapy

    SciTech Connect

    Mourik, Anke van; Kranen, Simon van; Hollander, Suzanne den; Sonke, Jan-Jakob; Herk, Marcel van; Vliet-Vroegindeweij, Corine van

    2011-04-01

    Purpose: The purpose of the present study was to quantify the robustness of the dose distributions from three whole-breast radiotherapy (RT) techniques involving different levels of intensity modulation against whole-patient setup inaccuracies and breast shape changes. Methods and Materials: For 19 patients (one computed tomography scan and five cone beam computed tomography scans each), three treatment plans were made (wedge, simple intensity-modulated RT [IMRT], and full IMRT). For each treatment plan, four dose distributions were calculated. The first dose distribution was the original plan. The other three included the effects of patient setup errors (rigid displacement of the bony anatomy) or breast errors (e.g., rotations and shape changes of the breast with respect to the bony anatomy), or both, and were obtained through deformable image registration and dose accumulation. Subsequently, the effects of the plan type and error sources on target volume coverage, mean lung dose, and excess dose were determined. Results: Systematic errors of 1-2 mm and random errors of 2-3 mm (standard deviation) were observed for both patient- and breast-related errors. Planning techniques involving glancing fields (wedge and simple IMRT) were primarily affected by patient errors (~6% loss of coverage near the dorsal field edge and ~2% near the skin). In contrast, plan deterioration due to breast errors was primarily observed in planning techniques without glancing fields (full IMRT, ~2% loss of coverage near the dorsal field edge and ~4% near the skin). Conclusion: The influences of patient and breast errors on the dose distributions are comparable in magnitude for whole-breast RT plans including glancing open fields, rendering simple IMRT the preferred technique. Dose distributions from planning techniques without glancing open fields were more seriously affected by shape changes of the breast, demanding specific attention in partial-breast irradiation.

  8. Error compensation for thermally induced errors on a machine tool

    SciTech Connect

    Krulewich, D.A.

    1996-11-08

    Heat flow from internal and external sources and the environment create machine deformations, resulting in positioning errors between the tool and workpiece. There is no industrially accepted method for thermal error compensation. A simple model has been selected that linearly relates discrete temperature measurements to the deflection. The biggest problem is how to locate the temperature sensors and to determine the number of required temperature sensors. This research develops a method to determine the number and location of temperature measurements.
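
    A minimal sketch of such a linear thermal-error model follows, with invented temperature data and coefficients. A fitted coefficient near zero would suggest the corresponding sensor location contributes little, which is one simple way to prune the number of required sensors.

        import numpy as np

        rng = np.random.default_rng(0)
        # Hypothetical data: m runs of k sensor temperatures (deg C) and
        # the measured tool-workpiece deflection (micrometers).
        m, k = 60, 4
        T = rng.normal(22.0, 2.0, size=(m, k))
        deflection = T @ np.array([1.5, -0.4, 0.0, 0.8]) + rng.normal(0, 0.5, m)

        # Linear model: deflection ~ c0 + sum_i c_i * T_i
        A = np.column_stack([T, np.ones(m)])
        coef, *_ = np.linalg.lstsq(A, deflection, rcond=None)
        rms = np.sqrt(np.mean((deflection - A @ coef) ** 2))
        print("coefficients:", coef.round(2), " residual rms:", rms.round(2))
        # The near-zero third coefficient flags sensor 3 as a removal candidate.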

  9. Errors, error detection, error correction and hippocampal-region damage: data and theories.

    PubMed

    MacKay, Donald G; Johnson, Laura W

    2013-11-01

    This review and perspective article outlines 15 observational constraints on theories of errors, error detection, and error correction, and their relation to hippocampal-region (HR) damage. The core observations come from 10 studies with H.M., an amnesic with cerebellar and HR damage but virtually no neocortical damage. Three studies examined the detection of errors planted in visual scenes (e.g., a bird flying in a fish bowl in a school classroom) and sentences (e.g., I helped themselves to the birthday cake). In all three experiments, H.M. detected reliably fewer errors than carefully matched memory-normal controls. Other studies examined the detection and correction of self-produced errors, with controls for comprehension of the instructions, impaired visual acuity, temporal factors, motoric slowing, forgetting, excessive memory load, lack of motivation, and deficits in visual scanning or attention. In these studies, H.M. corrected reliably fewer errors than memory-normal and cerebellar controls, and his uncorrected errors in speech, object naming, and reading aloud exhibited two consistent features: omission and anomaly. For example, in sentence production tasks, H.M. omitted one or more words in uncorrected encoding errors that rendered his sentences anomalous (incoherent, incomplete, or ungrammatical) reliably more often than controls. Besides explaining these core findings, the theoretical principles discussed here explain H.M.'s retrograde amnesia for once familiar episodic and semantic information; his anterograde amnesia for novel information; his deficits in visual cognition, sentence comprehension, sentence production, sentence reading, and object naming; and effects of aging on his ability to read isolated low frequency words aloud. These theoretical principles also explain a wide range of other data on error detection and correction and generate new predictions for future test.

  10. Performance characteristics of rules for internal quality control: probabilities for false rejection and error detection.

    PubMed

    Westgard, J O; Groth, T; Aronsson, T; Falk, H; de Verdier, C H

    1977-10-01

    When assessing the performance of an internal quality control system, it is useful to determine the probability for false rejections (pfr) and the probability for error detection (ped). These performance characteristics are estimated here by use of a computer simulation procedure. The control rules studied include those commonly employed with Shewhart-type control charts, a cumulative sum rule, and rules applicable when a series of control measurements are treated as a single control observation. The error situations studied include an increase in random error, a systematic shift, a systematic drift, and mixtures of these. The probability for error detection is very dependent on the number of control observations and the choice of control rules. No one rule is best for detecting all errors, thus combinations of rules are desirable. Some appropriate combinations are suggested and their performance characteristics are presented.
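
    The simulation approach can be sketched by drawing control observations under a null and an error condition and counting rule violations. The example below uses an illustrative 1_2s rule (reject if any observation exceeds ±2 SD) with two controls per run; the rule and shift size are assumptions, not the paper's full rule set.

        import numpy as np

        rng = np.random.default_rng(0)
        runs, n = 100_000, 2   # simulated runs, control observations per run

        def rejects_1_2s(x):
            # 1_2s rule: reject the run if any control exceeds +/- 2 SD.
            return np.any(np.abs(x) > 2.0, axis=1)

        # pfr: rejection probability with no analytical error present.
        pfr = rejects_1_2s(rng.normal(0.0, 1.0, (runs, n))).mean()
        # ped: detection probability for a 2-SD systematic shift.
        ped = rejects_1_2s(rng.normal(2.0, 1.0, (runs, n))).mean()
        print(f"1_2s rule, n={n}: pfr = {pfr:.3f}, ped = {ped:.3f}")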

  11. Exploring Discretization Error in Simulation-Based Aerodynamic Databases

    NASA Technical Reports Server (NTRS)

    Aftosmis, Michael J.; Nemec, Marian

    2010-01-01

    This work examines the level of discretization error in simulation-based aerodynamic databases and introduces strategies for error control. Simulations are performed using a parallel, multi-level Euler solver on embedded-boundary Cartesian meshes. Discretization errors in user-selected outputs are estimated using the method of adjoint-weighted residuals, and we use adaptive mesh refinement to reduce these errors to specified tolerances. Using this framework, we examine the behavior of discretization error throughout a token database of 120 cases computed for a NACA 0012 airfoil. We compare the cost and accuracy of two approaches for aerodynamic database generation. In the first approach, mesh adaptation is used to compute all cases in the database to a prescribed level of accuracy. The second approach conducts all simulations using the same computational mesh without adaptation. We quantitatively assess the error landscape and computational costs in both databases. This investigation highlights sensitivities of the database under a variety of conditions. The presence of transonic shocks or the stiffness in the governing equations near the incompressible limit are shown to dramatically increase discretization error, requiring additional mesh resolution to control. Results show that such pathologies lead to error levels that vary by over a factor of 40 when using a fixed mesh throughout the database. Alternatively, controlling this sensitivity through mesh adaptation leads to mesh sizes which span two orders of magnitude. We propose strategies to minimize simulation cost in sensitive regions and discuss the role of error-estimation in database quality.

  12. Reducing the risk of medication errors in women.

    PubMed

    Grissinger, Matthew C; Kelly, Kate

    2005-01-01

    We outline some of the causes of medication errors involving women and recommend ways that healthcare practitioners can prevent some of these errors. Patient safety has become a major concern since the November 1999 release of the Institute of Medicine (IOM) report, "To Err Is Human." Errors involving prescription medications are responsible for up to 7000 American deaths per year, and the financial costs of drug-related morbidity and mortality may be nearly $77 billion a year. The Institute for Safe Medication Practices (ISMP) collects and analyzes voluntary confidential medication error reports and makes recommendations on the prevention of such errors. This paper uses the expertise of ISMP in medication error prevention to make recommendations to prevent medication errors involving women. Healthcare practitioners should focus on areas of the medication use process that would have the greatest impact, including obtaining complete patient information, accurately communicating drug information, and properly educating patients. Although medication errors are not more common in women, there are some unique concerns with medications used for treating women. In addition, sharing of information about medication use and compliance with medication regimens have been identified as concerns. Through the sharing of information and improving the patient education process, healthcare practitioners should play a more active role in medication error reduction activities by working together toward the goal of improving medication safety and encouraging women to become active in their own care.

  13. Sources of error in picture naming under time pressure.

    PubMed

    Lloyd-Jones, Toby J; Nettlemill, Mandy

    2007-06-01

    We used a deadline procedure to investigate how time pressure may influence the processes involved in picture naming. The deadline exaggerated errors found under naming without a deadline. There were also category differences in performance between living and nonliving things and, in particular, for animals versus fruit and vegetables. The majority of errors were visually and semantically related to the target (e.g., celery-asparagus), and there was a greater proportion of these errors made to living things. Importantly, there were also more visual-semantic errors to animals than to fruit and vegetables. In addition, there were a smaller number of pure semantic errors (e.g., nut-bolt), which were made predominantly to nonliving things. The different kinds of error were correlated with different variables. Overall, visual-semantic errors were associated with visual complexity and visual similarity, whereas pure semantic errors were associated with imageability and age of acquisition. However, for animals, visual-semantic errors were associated with visual complexity, whereas for fruit and vegetables they were associated with visual similarity. We discuss these findings in terms of theories of category-specific semantic impairment and models of picture naming. PMID:17848037

  14. Strategies for reducing medication errors in the emergency department

    PubMed Central

    Weant, Kyle A; Bailey, Abby M; Baker, Stephanie N

    2014-01-01

    Medication errors are an all-too-common occurrence in emergency departments across the nation. This is largely secondary to a multitude of factors that create an almost ideal environment for medication errors to thrive. To limit and mitigate these errors, it is necessary to have a thorough knowledge of the medication-use process in the emergency department and develop strategies targeted at each individual step. Some of these strategies include medication-error analysis, computerized provider-order entry systems, automated dispensing cabinets, bar-coding systems, medication reconciliation, standardizing medication-use processes, education, and emergency-medicine clinical pharmacists. Special consideration also needs to be given to the development of strategies for the pediatric population, as they can be at an elevated risk of harm. Regardless of the strategies implemented, the prevention of medication errors begins and ends with the development of a culture that promotes the reporting of medication errors, and a systematic, nonpunitive approach to their elimination. PMID:27147879

  15. THE DISKMASS SURVEY. II. ERROR BUDGET

    SciTech Connect

    Bershady, Matthew A.; Westfall, Kyle B.; Verheijen, Marc A. W.; Martinsson, Thomas; Andersen, David R.; Swaters, Rob A.

    2010-06-10

    We present a performance analysis of the DiskMass Survey. The survey uses collisionless tracers in the form of disk stars to measure the surface density of spiral disks, to provide an absolute calibration of the stellar mass-to-light ratio (Υ*), and to yield robust estimates of the dark-matter halo density profile in the inner regions of galaxies. We find that a disk inclination range of 25°-35° is optimal for our measurements, consistent with our survey design to select nearly face-on galaxies. Uncertainties in disk scale heights are significant, but can be estimated from radial scale lengths to 25% now, and more precisely in the future. We detail the spectroscopic analysis used to derive line-of-sight velocity dispersions, precise at low surface-brightness, and accurate in the presence of composite stellar populations. Our methods take full advantage of large-grasp integral-field spectroscopy and an extensive library of observed stars. We show that the baryon-to-total mass fraction (F_bar) is not a well-defined observational quantity because it is coupled to the halo mass model. This remains true even when the disk mass is known and spatially extended rotation curves are available. In contrast, the fraction of the rotation speed supplied by the disk at 2.2 scale lengths (disk maximality) is a robust observational indicator of the baryonic disk contribution to the potential. We construct the error budget for the key quantities: dynamical disk mass surface density (Σ_dyn), disk stellar mass-to-light ratio (Υ*^disk), and disk maximality (F_*,max^disk ≡ V_*,max^disk / V_c). Random and systematic errors in these quantities for individual galaxies will be ~25%, while survey precision for sample quartiles is reduced to 10%, largely devoid of systematic errors outside of distance uncertainties.

  16. FORCE: FORtran for Cosmic Errors

    NASA Astrophysics Data System (ADS)

    Colombi, Stéphane; Szapudi, István

    We review the theory of cosmic errors we have recently developed for count-in-cells statistics. The corresponding FORCE package provides a simple and useful way to compute cosmic covariance on factorial moments and cumulants measured in galaxy catalogs.

  17. Human errors and measurement uncertainty

    NASA Astrophysics Data System (ADS)

    Kuselman, Ilya; Pennecchi, Francesca

    2015-04-01

    Evaluating the residual risk of human errors in a measurement and testing laboratory, remaining after the error reduction by the laboratory quality system, and quantifying the consequences of this risk for the quality of the measurement/test results are discussed based on expert judgments and Monte Carlo simulations. A procedure for evaluation of the contribution of the residual risk to the measurement uncertainty budget is proposed. Examples are provided using earlier published sets of expert judgments on human errors in pH measurement of groundwater, elemental analysis of geological samples by inductively coupled plasma mass spectrometry, and multi-residue analysis of pesticides in fruits and vegetables. The human error contribution to the measurement uncertainty budget in the examples was not negligible, yet also not dominant. This was assessed as a good risk management result.

  18. Quantile Regression With Measurement Error

    PubMed Central

    Wei, Ying; Carroll, Raymond J.

    2010-01-01

    Regression quantiles can be substantially biased when the covariates are measured with error. In this paper we propose a new method that produces consistent linear quantile estimation in the presence of covariate measurement error. The method corrects the measurement error induced bias by constructing joint estimating equations that simultaneously hold for all the quantile levels. An iterative EM-type estimation algorithm to obtain the solutions to such joint estimation equations is provided. The finite sample performance of the proposed method is investigated in a simulation study, and compared to the standard regression calibration approach. Finally, we apply our methodology to part of the National Collaborative Perinatal Project growth data, a longitudinal study with an unusual measurement error structure. PMID:20305802

  19. Robust characterization of leakage errors

    NASA Astrophysics Data System (ADS)

    Wallman, Joel J.; Barnhill, Marie; Emerson, Joseph

    2016-04-01

    Leakage errors arise when the quantum state leaks out of some subspace of interest, for example, the two-level subspace of a multi-level system defining a computational ‘qubit’, the logical code space of a quantum error-correcting code, or a decoherence-free subspace. Leakage errors pose a distinct challenge to quantum control relative to the more well-studied decoherence errors and can be a limiting factor to achieving fault-tolerant quantum computation. Here we present a scalable and robust randomized benchmarking protocol for quickly estimating the leakage rate due to an arbitrary Markovian noise process on a larger system. We illustrate the reliability of the protocol through numerical simulations.

  1. Static Detection of Disassembly Errors

    SciTech Connect

    Krishnamoorthy, Nithya; Debray, Saumya; Fligg, Alan K

    2009-10-13

    Static disassembly is a crucial first step in reverse engineering executable files, and there is a considerable body of work in reverse-engineering of binaries, as well as areas such as semantics-based security analysis, that assumes that the input executable has been correctly disassembled. However, disassembly errors, e.g., arising from binary obfuscations, can render this assumption invalid. This work describes a machine-learning-based approach, using decision trees, for statically identifying possible errors in a static disassembly; such potential errors may then be examined more closely, e.g., using dynamic analyses. Experimental results using a variety of input executables indicate that our approach performs well, correctly identifying most disassembly errors with relatively few false positives.
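
    A hedged sketch of the general approach (not the paper's feature set or model): train a decision tree on labeled per-instruction feature vectors and use it to flag suspect disassembly. The features and labels below are synthetic placeholders.

        import numpy as np
        from sklearn.model_selection import train_test_split
        from sklearn.tree import DecisionTreeClassifier

        rng = np.random.default_rng(0)
        # Synthetic stand-ins for per-instruction features a disassembler might
        # compute (e.g., opcode rarity, control-flow consistency); the feature
        # semantics and labels here are assumptions for illustration only.
        X = rng.normal(size=(1000, 5))
        y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(0.0, 0.5, 1000) > 1.0).astype(int)

        Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)
        tree = DecisionTreeClassifier(max_depth=4, random_state=0).fit(Xtr, ytr)
        print("held-out accuracy:", tree.score(Xte, yte))  # flags likely errors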

  2. Dual processing and diagnostic errors.

    PubMed

    Norman, Geoff

    2009-09-01

    In this paper, I review evidence from two theories in psychology relevant to diagnosis and diagnostic errors. "Dual Process" theories of thinking, frequently mentioned with respect to diagnostic error, propose that categorization decisions can be made with either a fast, unconscious, contextual process called System 1 or a slow, analytical, conscious, and conceptual process, called System 2. Exemplar theories of categorization propose that many category decisions in everyday life are made by unconscious matching to a particular example in memory, and these remain available and retrievable individually. I then review studies of clinical reasoning based on these theories, and show that the two processes are equally effective; System 1, despite its reliance on idiosyncratic, individual experience, is no more prone to cognitive bias or diagnostic error than System 2. Further, I review evidence that instructions directed at encouraging the clinician to explicitly use both strategies can lead to consistent reduction in error rates.

  3. Prospective errors determine motor learning

    PubMed Central

    Takiyama, Ken; Hirashima, Masaya; Nozaki, Daichi

    2015-01-01

    Diverse features of motor learning have been reported by numerous studies, but no single theoretical framework concurrently accounts for these features. Here, we propose a model for motor learning to explain these features in a unified way by extending a motor primitive framework. The model assumes that the recruitment pattern of motor primitives is determined by the predicted movement error of an upcoming movement (prospective error). To validate this idea, we perform a behavioural experiment to examine the model’s novel prediction: after experiencing an environment in which the movement error is more easily predictable, subsequent motor learning should become faster. The experimental results support our prediction, suggesting that the prospective error might be encoded in the motor primitives. Furthermore, we demonstrate that this model has a strong explanatory power to reproduce a wide variety of motor-learning-related phenomena that have been separately explained by different computational models. PMID:25635628

  4. Orbital and Geodetic Error Analysis

    NASA Technical Reports Server (NTRS)

    Felsentreger, T.; Maresca, P.; Estes, R.

    1985-01-01

    Results that previously required several runs are now determined in a more computer-efficient manner. The multiple runs are performed only once with GEODYN and stored on tape; ERODYN then performs the matrix partitioning and linear algebra required for each individual error-analysis run.

  5. Cirrus cloud retrieval using infrared sounding data: Multilevel cloud errors

    NASA Technical Reports Server (NTRS)

    Baum, Bryan A.; Wielicki, Bruce A.

    1994-01-01

    In this study we perform an error analysis for cloud-top pressure retrieval using the High-Resolution Infrared Radiometric Sounder (HIRS/2) 15-micron CO2 channels for the two-layer case of transmissive cirrus overlying an overcast, opaque stratiform cloud. This analysis includes standard deviation and bias error due to instrument noise and the presence of two cloud layers, the lower of which is opaque. Instantaneous cloud pressure retrieval errors are determined for a range of cloud amounts (0.1-1.0) and cloud-top pressures (850-250 mb). Large cloud-top pressure retrieval errors are found to occur when a lower opaque layer is present underneath an upper transmissive cloud layer in the satellite field of view (FOV). Errors tend to increase with decreasing upper-cloud effective cloud amount and with decreasing cloud height (increasing pressure). Errors in retrieved upper-cloud pressure result in corresponding errors in derived effective cloud amount. For the case in which a HIRS FOV has two distinct cloud layers, the difference between the retrieved and actual cloud-top pressure is positive in all cases, meaning that the retrieved upper-cloud height is lower than the actual upper-cloud height. In addition, errors in retrieved cloud pressure are found to depend upon the lapse rate between the low-level cloud top and the surface. We examined which sounder channel combinations would minimize the total errors in derived cirrus cloud height caused by instrument noise and by the presence of a lower-level cloud. We find that while the sounding channels that peak between 700 and 1000 mb minimize random errors, the sounding channels that peak at 300-500 mb minimize bias errors. For a cloud climatology, the bias errors are most critical.

  6. Interpolation Errors in Spectrum Analyzers

    NASA Technical Reports Server (NTRS)

    Martin, J. L.

    1996-01-01

    To obtain the proper measurement amplitude with a spectrum analyzer, the correct frequency-dependent transducer factor must be added to the voltage measured by the transducer. This report examines how entering transducer factors into a spectrum analyzer can cause significant errors in field amplitude due to the misunderstanding of the analyzer's interpolation methods. It also discusses how to reduce these errors to obtain a more accurate field amplitude reading.

  7. Relative-Error-Covariance Algorithms

    NASA Technical Reports Server (NTRS)

    Bierman, Gerald J.; Wolff, Peter J.

    1991-01-01

    Two algorithms compute error covariance of difference between optimal estimates, based on data acquired during overlapping or disjoint intervals, of state of discrete linear system. Provides quantitative measure of mutual consistency or inconsistency of estimates of states. Relative-error-covariance concept applied, to determine degree of correlation between trajectories calculated from two overlapping sets of measurements and construct real-time test of consistency of state estimates based upon recently acquired data.

  8. Algorithmic Error Correction of Impedance Measuring Sensors

    PubMed Central

    Starostenko, Oleg; Alarcon-Aquino, Vicente; Hernandez, Wilmar; Sergiyenko, Oleg; Tyrsa, Vira

    2009-01-01

    This paper describes novel design concepts and some advanced techniques proposed for increasing the accuracy of low cost impedance measuring devices without reduction of operational speed. The proposed structural method for algorithmic error correction and the iterative correction method provide linearization of the transfer functions of the measuring sensor and signal conditioning converter, which contribute the principal additive and relative measurement errors. Some measuring systems have been implemented in order to estimate in practice the performance of the proposed methods. Particularly, a measuring system for analysis of C-V, G-V characteristics has been designed and constructed. It has been tested during technological process control of charge-coupled device (CCD) manufacturing. The obtained results are discussed in order to define a reasonable range of applied methods, their utility, and performance. PMID:22303177

  9. Detecting Soft Errors in Stencil based Computations

    SciTech Connect

    Sharma, V.; Gopalkrishnan, G.; Bronevetsky, G.

    2015-05-06

    Given the growing emphasis on system resilience, it is important to develop software-level error detectors that help trap hardware-level faults with reasonable accuracy while minimizing false alarms as well as the performance overhead introduced. We present a technique that approaches this idea by taking stencil computations as our target, and synthesizing detectors based on machine learning. In particular, we employ linear regression to generate computationally inexpensive models which form the basis for error detection. Our technique has been incorporated into a new open-source library called SORREL. In addition to reporting encouraging experimental results, we demonstrate techniques that help reduce the size of training data. We also discuss the efficacy of various detectors synthesized, as well as our future plans.
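
    A minimal sketch of the regression-detector idea, assuming a 1D three-point stencil: fit an inexpensive linear model predicting each point from its neighbors, then flag points whose residual exceeds a threshold. The data, threshold, and feature choice are illustrative assumptions; this is not the SORREL implementation.

        import numpy as np

        rng = np.random.default_rng(0)
        u = np.cumsum(rng.normal(size=1000)) / 10.0   # smooth-ish 1D field

        def neighbors(v):
            return np.column_stack([v[:-2], v[2:], np.ones(len(v) - 2)])

        # Fit: center point ~ linear function of its two stencil neighbors.
        coef, *_ = np.linalg.lstsq(neighbors(u), u[1:-1], rcond=None)
        tau = 6.0 * np.abs(u[1:-1] - neighbors(u) @ coef).std()  # threshold (assumption)

        u_faulty = u.copy()
        u_faulty[500] += 5.0                          # inject a bit-flip-like corruption
        resid = u_faulty[1:-1] - neighbors(u_faulty) @ coef
        # The corrupted cell perturbs its own and its neighbors' residuals.
        print("flagged field indices:", np.where(np.abs(resid) > tau)[0] + 1)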

  10. Continuous quantum error correction through local operations

    SciTech Connect

    Mascarenhas, Eduardo; Franca Santos, Marcelo; Marques, Breno; Terra Cunha, Marcelo

    2010-09-15

    We propose local strategies to protect global quantum information. The protocols, which are quantum error-correcting codes for dissipative systems, are based on environment measurements, direct feedback control, and simple encoding of the logical qubits into physical qutrits whose decaying transitions are indistinguishable and equally probable. The simple addition of one extra level in the description of the subsystems allows for local actions to fully and deterministically protect global resources such as entanglement. We present codes for both quantum jump and quantum state diffusion measurement strategies and test them against several sources of inefficiency. The use of qutrits in information protocols suggests further characterization of qutrit-qutrit disentanglement dynamics, which we also give together with simple local environment measurement schemes able to prevent distillability sudden death and even enhance entanglement in situations in which our feedback error correction is not possible.

  11. Learning (from) the errors of a systems biology model.

    PubMed

    Engelhardt, Benjamin; Frőhlich, Holger; Kschischo, Maik

    2016-01-01

    Mathematical modelling is a labour intensive process involving several iterations of testing on real data and manual model modifications. In biology, the domain knowledge guiding model development is in many cases itself incomplete and uncertain. A major problem in this context is that biological systems are open. Missed or unknown external influences as well as erroneous interactions in the model could thus lead to severely misleading results. Here we introduce the dynamic elastic-net, a data driven mathematical method which automatically detects such model errors in ordinary differential equation (ODE) models. We demonstrate for real and simulated data, how the dynamic elastic-net approach can be used to automatically (i) reconstruct the error signal, (ii) identify the target variables of model error, and (iii) reconstruct the true system state even for incomplete or preliminary models. Our work provides a systematic computational method facilitating modelling of open biological systems under uncertain knowledge. PMID:26865316
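
    A drastically simplified sketch of the underlying idea, assuming a known linear model and densely observed states: the hidden error input is reconstructed as the residual between the observed derivative and the model prediction. The actual dynamic elastic-net adds regularization to localize the error signal; none of that is shown here, and all numbers are invented.

        import numpy as np

        A = np.array([[-1.0, 0.5],
                      [0.0, -0.5]])                  # assumed-known model dx/dt = A x
        dt = 0.01
        t = np.arange(0.0, 10.0, dt)
        w_true = np.where((t > 4.0) & (t < 6.0), 0.8, 0.0)  # hidden input on state 0

        x = np.zeros((t.size, 2))
        for i in range(t.size - 1):                  # Euler simulation of the true system
            x[i + 1] = x[i] + dt * (A @ x[i] + np.array([w_true[i], 0.0]))

        xdot = np.gradient(x, dt, axis=0)            # numerical derivative of the data
        w_hat = xdot - x @ A.T                       # residual = estimated error signal
        print("recovered input near t = 5:", round(w_hat[int(5.0 / dt), 0], 2))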

  12. Discreteness noise versus force errors in N-body simulations

    NASA Technical Reports Server (NTRS)

    Hernquist, Lars; Hut, Piet; Makino, Jun

    1993-01-01

    A low accuracy in the force calculation per time step of a few percent for each particle pair is sufficient for collisionless N-body simulations. Higher accuracy is made meaningless by the dominant discreteness noise in the form of two-body relaxation, which can be reduced only by increasing the number of particles. Since an N-body simulation is a Monte Carlo procedure in which each particle-particle force is essentially random, i.e., carries an error of about 1000 percent, the only requirement is a systematic averaging-out of these intrinsic errors. We illustrate these assertions with two specific examples in which individual pairwise forces are deliberately allowed to carry significant errors: tree-codes on supercomputers and algorithms on special-purpose machines with low-precision hardware.

  13. Learning (from) the errors of a systems biology model

    NASA Astrophysics Data System (ADS)

    Engelhardt, Benjamin; Frőhlich, Holger; Kschischo, Maik

    2016-02-01

    Mathematical modelling is a labour intensive process involving several iterations of testing on real data and manual model modifications. In biology, the domain knowledge guiding model development is in many cases itself incomplete and uncertain. A major problem in this context is that biological systems are open. Missed or unknown external influences as well as erroneous interactions in the model could thus lead to severely misleading results. Here we introduce the dynamic elastic-net, a data driven mathematical method which automatically detects such model errors in ordinary differential equation (ODE) models. We demonstrate for real and simulated data, how the dynamic elastic-net approach can be used to automatically (i) reconstruct the error signal, (ii) identify the target variables of model error, and (iii) reconstruct the true system state even for incomplete or preliminary models. Our work provides a systematic computational method facilitating modelling of open biological systems under uncertain knowledge.

  14. Learning (from) the errors of a systems biology model.

    PubMed

    Engelhardt, Benjamin; Frőhlich, Holger; Kschischo, Maik

    2016-02-11

    Mathematical modelling is a labour intensive process involving several iterations of testing on real data and manual model modifications. In biology, the domain knowledge guiding model development is in many cases itself incomplete and uncertain. A major problem in this context is that biological systems are open. Missed or unknown external influences as well as erroneous interactions in the model could thus lead to severely misleading results. Here we introduce the dynamic elastic-net, a data driven mathematical method which automatically detects such model errors in ordinary differential equation (ODE) models. We demonstrate for real and simulated data, how the dynamic elastic-net approach can be used to automatically (i) reconstruct the error signal, (ii) identify the target variables of model error, and (iii) reconstruct the true system state even for incomplete or preliminary models. Our work provides a systematic computational method facilitating modelling of open biological systems under uncertain knowledge.

  15. Error-trellis Syndrome Decoding Techniques for Convolutional Codes

    NASA Technical Reports Server (NTRS)

    Reed, I. S.; Truong, T. K.

    1984-01-01

    An error-trellis syndrome decoding technique for convolutional codes is developed. This algorithm is then applied to the entire class of systematic convolutional codes and to the high-rate, Wyner-Ash convolutional codes. A special example of the one-error-correcting Wyner-Ash code, a rate 3/4 code, is treated. The error-trellis syndrome decoding method applied to this example shows in detail how much more efficient syndrome decoding is than Viterbi decoding if applied to the same problem. For standard Viterbi decoding, 64 states are required, whereas in the example only 7 states are needed. Also, within the 7 states required for decoding, many fewer transitions are needed between the states.

  16. Error-trellis syndrome decoding techniques for convolutional codes

    NASA Technical Reports Server (NTRS)

    Reed, I. S.; Truong, T. K.

    1985-01-01

    An error-trellis syndrome decoding technique for convolutional codes is developed. This algorithm is then applied to the entire class of systematic convolutional codes and to the high-rate, Wyner-Ash convolutional codes. A special example of the one-error-correcting Wyner-Ash code, a rate 3/4 code, is treated. The error-trellis syndrome decoding method applied to this example shows in detail how much more efficient syndrome decoding is than Viterbi decoding if applied to the same problem. For standard Viterbi decoding, 64 states are required, whereas in the example only 7 states are needed. Also, within the 7 states required for decoding, many fewer transitions are needed between the states.

  17. Learning (from) the errors of a systems biology model

    PubMed Central

    Engelhardt, Benjamin; Fröhlich, Holger; Kschischo, Maik

    2016-01-01

    Mathematical modelling is a labour-intensive process involving several iterations of testing on real data and manual model modifications. In biology, the domain knowledge guiding model development is in many cases itself incomplete and uncertain. A major problem in this context is that biological systems are open: missed or unknown external influences, as well as erroneous interactions in the model, can lead to severely misleading results. Here we introduce the dynamic elastic-net, a data-driven mathematical method which automatically detects such model errors in ordinary differential equation (ODE) models. We demonstrate, for real and simulated data, how the dynamic elastic-net approach can be used to automatically (i) reconstruct the error signal, (ii) identify the target variables of the model error, and (iii) reconstruct the true system state even for incomplete or preliminary models. Our work provides a systematic computational method facilitating the modelling of open biological systems under uncertain knowledge. PMID:26865316

  18. Error image aware content restoration

    NASA Astrophysics Data System (ADS)

    Choi, Sungwoo; Lee, Moonsik; Jung, Byunghee

    2015-12-01

    As TV resolution has increased significantly, content consumers have become increasingly sensitive to even the subtlest defects in TV content. This rising quality standard poses a new challenge now that the tape-based process has transitioned to the file-based process: the transition necessitated digitizing old archives, a process which inevitably produces errors such as disordered pixel blocks, scattered white noise, or entirely missing pixels. Unsurprisingly, detecting and fixing such errors requires a substantial amount of time and human labor to meet the standard demanded by today's consumers. In this paper, we introduce a novel, automated error-restoration algorithm which can be applied to different types of classic errors by utilizing adjacent images while preserving the undamaged parts of an error image as much as possible. We tested our method on error images detected by our quality-check system in the KBS (Korean Broadcasting System) video archive. We are also implementing the algorithm as a plugin for a well-known NLE (non-linear editing) system, a familiar tool for quality-control agents.
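
    The following is a toy sketch of the temporal-replacement idea described above, not KBS's production algorithm: pixels that disagree strongly with both temporal neighbours are flagged as likely defects and filled from those neighbours, while undamaged pixels are left untouched. The threshold and the simple two-sided frame difference are illustrative assumptions:

```python
import numpy as np

def restore_from_neighbors(prev, cur, nxt, thresh=40):
    """Replace only pixels that disagree strongly with *both* temporal
    neighbours (likely defects), keeping undamaged pixels untouched."""
    prev, cur, nxt = (f.astype(np.int16) for f in (prev, cur, nxt))
    damaged = (np.abs(cur - prev) > thresh) & (np.abs(cur - nxt) > thresh)
    out = cur.copy()
    out[damaged] = (prev[damaged] + nxt[damaged]) // 2
    return out.astype(np.uint8), damaged

# Toy example: a static scene with an 8x8 block of dropped (zeroed) pixels.
rng = np.random.default_rng(1)
scene = rng.integers(100, 160, (64, 64), dtype=np.uint8)
cur = scene.copy()
cur[20:28, 30:38] = 0                          # simulated block error
fixed, mask = restore_from_neighbors(scene, cur, scene)
print(mask.sum(), np.abs(fixed.astype(int) - scene).max())  # 64 0
```

    Requiring disagreement with both neighbours is what preserves the undamaged parts: genuine motion usually differs from only one neighbour, while digitization defects differ from both.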

  19. Optimal measurement strategies for effective suppression of drift errors

    SciTech Connect

    Yashchuk, Valeriy V.

    2009-04-16

    Drift of experimental set-ups with changing temperature or other environmental conditions is the limiting factor of many, if not all, precision measurements. The measurement error due to drift is, in some sense, in between random noise and systematic error. In the general case, the error contribution of a drift cannot be averaged out by a number of measurements carried out identically over a reasonable time. In contrast to systematic errors, drifts are usually not stable enough for a precise calibration. Here, a rather general method for effective suppression of the spurious effects caused by slow drifts in a large variety of instruments and experimental set-ups is described. An analytical derivation of an identity describing the optimal measurement strategies for suppressing the contribution of a slow drift described by a polynomial of a given order is presented. A recursion rule and a general mathematical proof of the identity are given. The effectiveness of the method is illustrated by applying the derived optimal scanning strategies to precise surface-slope measurements with a surface profiler.
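
    A classic concrete instance of such a strategy — offered here as an illustration, without claiming it matches the paper's derived identity — is the Prouhet-Thue-Morse measurement ordering: alternating two measurands A and B according to the ±1 Thue-Morse sequence of length 2^(n+1) cancels any drift polynomial up to order n exactly. The drift function and measurand values below are illustrative assumptions:

```python
import numpy as np

def thue_morse_signs(order):
    """±1 Thue-Morse sequence of length 2**(order+1); the corresponding
    A/B measurement ordering cancels polynomial drift up to `order`."""
    s = np.array([1])
    for _ in range(order + 1):
        s = np.r_[s, -s]
    return s

def measure_difference(drift, order, A=5.0, B=3.0):
    """Estimate A - B from alternated measurements corrupted by drift(t)."""
    signs = thue_morse_signs(order)        # +1 -> measure A, -1 -> measure B
    t = np.arange(len(signs), dtype=float)
    readings = np.where(signs > 0, A, B) + drift(t)
    return 2 * np.sum(signs * readings) / len(signs)

drift = lambda t: 0.3 * t + 0.05 * t**2    # slow quadratic drift
for order in (0, 1, 2):
    print(order, measure_difference(drift, order))
```

    With a true difference of A − B = 2, the order-0 and order-1 sequences return drift-biased estimates (1.65 and 2.1), while the order-2 sequence returns exactly 2.0: the eight-point ordering cancels the quadratic drift term completely.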

  20. Highly porous thermal protection materials: Modelling and prediction of the methodical experimental errors

    NASA Astrophysics Data System (ADS)

    Cherepanov, Valery V.; Alifanov, Oleg M.; Morzhukhina, Alena V.; Budnik, Sergey A.

    2016-11-01

    The formation mechanisms and the main factors affecting the systematic error of thermocouples were investigated. Experimental studies and mathematical modelling established that, in the highly porous heat-resistant materials used in aerospace applications, thermocouple errors are governed by two competing mechanisms, with the error correlated with the difference between the radiative and conductive heat fluxes. A comparative analysis was carried out, and features of the methodical error formation related to the distance from the heated surface were established.
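
    As a purely illustrative toy calculation — not the paper's model, with all material parameters assumed — the sketch below shows how the sign of the radiation-conduction flux difference at a thermocouple junction can flip with junction temperature, which is the competition between mechanisms the abstract describes:

```python
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def flux_difference(T_env, T_tc, k_eff=0.5, dx=1e-3, emissivity=0.8):
    """Toy estimate of radiative minus conductive heat flux at a thermocouple
    junction inside a highly porous material (all parameters illustrative)."""
    q_rad = emissivity * SIGMA * (T_env**4 - T_tc**4)  # through-pore radiation
    q_cond = k_eff * (T_env - T_tc) / dx               # conduction over gap dx
    return q_rad - q_cond

for T_tc in (400.0, 800.0, 1400.0):
    dq = flux_difference(T_env=1500.0, T_tc=T_tc)
    regime = "radiation-dominated" if dq > 0 else "conduction-dominated"
    print(f"T_tc = {T_tc:6.0f} K: q_rad - q_cond = {dq:+.3e} W/m^2 ({regime})")
```

    With these assumed parameters the difference is conduction-dominated at low junction temperatures and flips to radiation-dominated near the environment temperature, so the sign and magnitude of the thermocouple error would vary with depth below the heated surface.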