Science.gov

Sample records for additional systematic error

  1. Statistical errors in Monte Carlo estimates of systematic errors

    NASA Astrophysics Data System (ADS)

    Roe, Byron P.

    2007-01-01

    For estimating the effects of a number of systematic errors on a data sample, one can generate Monte Carlo (MC) runs with systematic parameters varied and examine the change in the desired observed result. Two methods are often used. In the unisim method, the systematic parameters are varied one at a time by one standard deviation, each parameter corresponding to a MC run. In the multisim method, each MC run has all of the parameters varied; the amount of variation is chosen from the expected distribution of each systematic parameter, usually assumed to be a normal distribution. The variance of the overall systematic error determination is derived for each of the two methods and comparisons are made between them. If one focuses not on the error in the prediction of an individual systematic error, but on the overall error due to all systematic errors in the error matrix element in data bin m, the number of events needed is strongly reduced because of the averaging effect over all of the errors. For the simple models presented here, the multisim method was far better if the statistical error in the MC samples was larger than an individual systematic error, while for the reverse case the unisim method was better. Exact formulas and formulas for the simple toy models are presented so that realistic calculations can be made. The calculations in the present note are valid if the errors are in a linear region. If that region extends sufficiently far, one can have the unisims or multisims correspond to k standard deviations instead of one, which reduces the number of events required by a factor of k^2. The specific terms unisim and multisim were coined by Peter Meyers and Steve Brice, respectively, for the MiniBooNE experiment. However, the concepts have been developed over time and have been in general use for some time.
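
    A minimal numerical sketch of the two schemes described above, assuming a toy observable that responds linearly to five Gaussian-distributed systematic parameters (the model, parameter count, and values are illustrative, not taken from the record):

```python
import numpy as np

rng = np.random.default_rng(0)

n_sys = 5                      # number of systematic parameters (toy value)
sigma = np.full(n_sys, 1.0)    # one-standard-deviation size of each systematic
n_events = 10_000              # MC events per run

def observable(params, n_events, rng):
    """Toy observable: mean of events whose response depends linearly on the
    systematic parameters, plus statistical fluctuation from finite MC."""
    response = 1.0 + 0.1 * params.sum()
    events = rng.normal(loc=response, scale=1.0, size=n_events)
    return events.mean()

nominal = observable(np.zeros(n_sys), n_events, rng)

# --- unisim: vary one parameter at a time by one standard deviation ---
unisim_shifts = []
for i in range(n_sys):
    p = np.zeros(n_sys)
    p[i] = sigma[i]
    unisim_shifts.append(observable(p, n_events, rng) - nominal)
unisim_var = np.sum(np.square(unisim_shifts))   # quadrature sum of shifts

# --- multisim: vary all parameters at once, drawn from their distributions ---
n_multisims = 100
multi_obs = [
    observable(rng.normal(0.0, sigma), n_events, rng) for _ in range(n_multisims)
]
multisim_var = np.var(multi_obs, ddof=1)        # spread over multisim universes

print(f"unisim   variance estimate: {unisim_var:.4g}")
print(f"multisim variance estimate: {multisim_var:.4g}")
```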

  2. Protecting weak measurements against systematic errors

    NASA Astrophysics Data System (ADS)

    Pang, Shengshi; Alonso, Jose Raul Gonzalez; Brun, Todd A.; Jordan, Andrew N.

    2016-07-01

    In this work, we consider the systematic error of quantum metrology by weak measurements under decoherence. We derive the systematic error of maximum likelihood estimation in general to the first-order approximation of a small deviation in the probability distribution and study the robustness of standard weak measurement and postselected weak measurements against systematic errors. We show that, with a large weak value, the systematic error of a postselected weak measurement when the probe undergoes decoherence can be significantly lower than that of a standard weak measurement. This indicates another advantage of weak-value amplification in improving the performance of parameter estimation. We illustrate the results by an exact numerical simulation of decoherence arising from a bosonic mode and compare it to the first-order analytical result we obtain.

  3. Modeling Errors in Daily Precipitation Measurements: Additive or Multiplicative?

    NASA Technical Reports Server (NTRS)

    Tian, Yudong; Huffman, George J.; Adler, Robert F.; Tang, Ling; Sapiano, Matthew; Maggioni, Viviana; Wu, Huan

    2013-01-01

    The definition and quantification of uncertainty depend on the error model used. For uncertainties in precipitation measurements, two types of error models have been widely adopted: the additive error model and the multiplicative error model. This leads to incompatible specifications of uncertainties and impedes intercomparison and application. In this letter, we assess the suitability of both models for satellite-based daily precipitation measurements in an effort to clarify the uncertainty representation. Three criteria were employed to evaluate the applicability of either model: (1) better separation of the systematic and random errors; (2) applicability to the large range of variability in daily precipitation; and (3) better predictive skills. It is found that the multiplicative error model is a much better choice under all three criteria. It extracted the systematic errors more cleanly, was more consistent with the large variability of precipitation measurements, and produced superior predictions of the error characteristics. The additive error model had several weaknesses, such as nonconstant variance resulting from systematic errors leaking into random errors, and a lack of prediction capability. Therefore, the multiplicative error model is the better choice.
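
    A hedged sketch of what fitting the two error models looks like in practice: synthetic gauge-versus-satellite daily totals are generated here purely for illustration, the additive model is fit as a straight line, and the multiplicative model is fit as a log-linear relation.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic "truth" (e.g. gauge) and "estimate" (e.g. satellite) daily precipitation,
# generated with a multiplicative error structure for illustration only.
truth = rng.gamma(shape=0.8, scale=8.0, size=2000) + 0.1   # mm/day, kept positive
estimate = 1.3 * truth**0.9 * np.exp(rng.normal(0.0, 0.35, truth.size))

# Additive model:        estimate = a + b * truth + eps
A = np.vstack([np.ones_like(truth), truth]).T
(a_add, b_add), *_ = np.linalg.lstsq(A, estimate, rcond=None)
resid_add = estimate - (a_add + b_add * truth)

# Multiplicative model:  estimate = alpha * truth**beta * exp(eps)
# i.e. log(estimate) = log(alpha) + beta * log(truth) + eps  (linear in log space)
L = np.vstack([np.ones_like(truth), np.log(truth)]).T
(log_alpha, beta), *_ = np.linalg.lstsq(L, np.log(estimate), rcond=None)
resid_mul = np.log(estimate) - (log_alpha + beta * np.log(truth))

print("additive model residual std (mm/day):", resid_add.std())
print("multiplicative model residual std (log space):", resid_mul.std())
```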

  4. Systematic Error Modeling and Bias Estimation

    PubMed Central

    Zhang, Feihu; Knoll, Alois

    2016-01-01

    This paper analyzes the statistical properties of the systematic error in terms of range and bearing during the transformation process. Furthermore, we rely on a weighted nonlinear least-squares method to calculate the biases based on the proposed models. The results show the high performance of the proposed approach for error modeling and bias estimation. PMID:27213386

  5. Measuring Systematic Error with Curve Fits

    ERIC Educational Resources Information Center

    Rupright, Mark E.

    2011-01-01

    Systematic errors are often unavoidable in the introductory physics laboratory. As has been demonstrated in many papers in this journal, such errors can present a fundamental problem for data analysis, particularly when comparing the data to a given model. In this paper I give three examples in which my students use popular curve-fitting software…

  6. Systematic Error Modeling and Bias Estimation.

    PubMed

    Zhang, Feihu; Knoll, Alois

    2016-01-01

    This paper analyzes the statistical properties of the systematic error in terms of range and bearing during the transformation process. Furthermore, we rely on a weighted nonlinear least-squares method to calculate the biases based on the proposed models. The results show the high performance of the proposed approach for error modeling and bias estimation. PMID:27213386

  7. Antenna pointing systematic error model derivations

    NASA Technical Reports Server (NTRS)

    Guiar, C. N.; Lansing, F. L.; Riggs, R.

    1987-01-01

    The pointing model used to represent and correct systematic errors for the Deep Space Network (DSN) antennas is presented. Analytical expressions are given in both azimuth-elevation (az-el) and hour angle-declination (ha-dec) mounts for RF axis collimation error, encoder offset, nonorthogonality of axes, axis plane tilt, and structural flexure due to gravity loading. While the residual pointing errors (rms) after correction appear to be within the criterion of ten percent of the half-power beamwidth commonly set for good pointing accuracy, the DSN has embarked on an extensive pointing improvement and modeling program aiming toward an order of magnitude higher pointing precision.

  8. Systematic errors in strong lens modeling

    NASA Astrophysics Data System (ADS)

    Johnson, Traci Lin; Sharon, Keren; Bayliss, Matthew B.

    2015-08-01

    The lensing community has made great strides in quantifying the statistical errors associated with strong lens modeling. However, we are just now beginning to understand the systematic errors. Quantifying these errors is pertinent to Frontier Fields science, as number counts and luminosity functions are highly sensitive to the value of the magnifications of background sources across the entire field of view. We are aware that models can be very different when modelers change their assumptions about the parameterization of the lensing potential (i.e., parametric vs. non-parametric models). However, models built with a single methodology can also lead to inconsistent outcomes for different quantities, distributions, and qualities of redshift information regarding the multiple images used as constraints in the lens model. We investigate how varying the number of multiple-image constraints and the available redshift information for those constraints (e.g., spectroscopic vs. photometric vs. no redshift) can influence the outputs of our parametric strong lens models, specifically the mass distribution and magnifications of background sources. We make use of the simulated clusters by M. Meneghetti et al. and the first two Frontier Fields clusters, which have a high number of multiply imaged galaxies with spectroscopically measured redshifts (or input redshifts, in the case of simulated clusters). This work will inform not only Frontier Fields science but also work on the growing collection of strong-lensing galaxy clusters, most of which are less massive, are capable of lensing only a handful of galaxies, and are more prone to these systematic errors.

  9. More on Systematic Error in a Boyle's Law Experiment

    ERIC Educational Resources Information Center

    McCall, Richard P.

    2012-01-01

    A recent article in "The Physics Teacher" describes a method for analyzing a systematic error in a Boyle's law laboratory activity. Systematic errors are important to consider in physics labs because they tend to bias the results of measurements. There are numerous laboratory examples and resources that discuss this common source of error.

  10. Improved Systematic Pointing Error Model for the DSN Antennas

    NASA Technical Reports Server (NTRS)

    Rochblatt, David J.; Withington, Philip M.; Richter, Paul H.

    2011-01-01

    New pointing models have been developed for large reflector antennas whose construction is founded on elevation over azimuth mount. At JPL, the new models were applied to the Deep Space Network (DSN) 34-meter antenna subnet for corrections of their systematic pointing errors; it achieved significant improvement in performance at Ka-band (32-GHz) and X-band (8.4-GHz). The new models provide pointing improvements relative to the traditional models by a factor of two to three, which translate to approximately 3-dB performance improvement at Ka-band. For radio science experiments where blind pointing performance is critical, the new innovation provides a new enabling technology. The model extends the traditional physical models with higher-order mathematical terms, thereby increasing the resolution of the model for a better fit to the underlying systematic imperfections that are the cause of antenna pointing errors. The philosophy of the traditional model was that all mathematical terms in the model must be traced to a physical phenomenon causing antenna pointing errors. The traditional physical terms are: antenna axis tilts, gravitational flexure, azimuth collimation, azimuth encoder fixed offset, azimuth and elevation skew, elevation encoder fixed offset, residual refraction, azimuth encoder scale error, and antenna pointing de-rotation terms for beam waveguide (BWG) antennas. Besides the addition of spherical harmonics terms, the new models differ from the traditional ones in that the coefficients for the cross-elevation and elevation corrections are completely independent and may be different, while in the traditional model, some of the terms are identical. In addition, the new software allows for all-sky or mission-specific model development, and can utilize the previously used model as an a priori estimate for the development of the updated models.
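
    At its core, such a pointing model is a linear least-squares fit of correction terms to observed blind-pointing residuals. The sketch below uses a deliberately reduced, illustrative term set (not the DSN's actual model or coefficients) to show the mechanics:

```python
import numpy as np

def design_matrix(az, el):
    """Columns are classical pointing-model terms plus a low-order harmonic,
    evaluated at azimuth/elevation in radians (term selection is illustrative,
    not the DSN's actual term set)."""
    return np.column_stack([
        np.ones_like(az),          # encoder fixed offset
        np.cos(el),                # axis tilt projected into elevation
        np.sin(el),                # gravitational flexure (simple sag term)
        np.cos(2 * az),            # example low-order harmonic term
        np.sin(2 * az),
    ])

# Simulated blind-pointing residuals (mdeg) at observed az/el, illustrative only.
rng = np.random.default_rng(2)
az = rng.uniform(0, 2 * np.pi, 300)
el = rng.uniform(np.radians(10), np.radians(85), 300)
true_coeff = np.array([3.0, -2.0, 1.5, 0.4, -0.3])
residuals = design_matrix(az, el) @ true_coeff + rng.normal(0, 0.5, az.size)

# Least-squares fit of the pointing-model coefficients and the corrected residuals.
coeff, *_ = np.linalg.lstsq(design_matrix(az, el), residuals, rcond=None)
corrected = residuals - design_matrix(az, el) @ coeff
print("RMS before correction (mdeg):", residuals.std())
print("RMS after correction  (mdeg):", corrected.std())
```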

  11. Identifying and Reducing Systematic Errors in Chromosome Conformation Capture Data

    PubMed Central

    Hahn, Seungsoo; Kim, Dongsup

    2015-01-01

    Chromosome conformation capture (3C)-based techniques have recently been used to uncover the mysterious genomic architecture of the nucleus. These techniques yield indirect data on the distances between genomic loci in the form of contact frequencies that must be normalized to remove various errors. This normalization process determines the quality of data analysis. In this study, we describe two systematic errors that result from the heterogeneous local density of restriction sites and different local chromatin states, methods to identify and remove those artifacts, and three previously described sources of systematic errors in 3C-based data: fragment length, mappability, and local DNA composition. To explain the effect of systematic errors on the results, we used three different published data sets to show the dependence of the results on restriction enzymes and experimental methods. Comparison of the results from different restriction enzymes shows a higher correlation after removing systematic errors. In contrast, using different methods with the same restriction enzymes shows a lower correlation after removing systematic errors. Notably, the improved correlation of the latter case caused by systematic errors indicates that a higher correlation between results does not ensure the validity of the normalization methods. Finally, we suggest a method to analyze random error and provide guidance for the maximum reproducibility of contact frequency maps. PMID:26717152

  12. Identifying and Reducing Systematic Errors in Chromosome Conformation Capture Data.

    PubMed

    Hahn, Seungsoo; Kim, Dongsup

    2015-01-01

    Chromosome conformation capture (3C)-based techniques have recently been used to uncover the mysterious genomic architecture of the nucleus. These techniques yield indirect data on the distances between genomic loci in the form of contact frequencies that must be normalized to remove various errors. This normalization process determines the quality of data analysis. In this study, we describe two systematic errors that result from the heterogeneous local density of restriction sites and different local chromatin states, methods to identify and remove those artifacts, and three previously described sources of systematic errors in 3C-based data: fragment length, mappability, and local DNA composition. To explain the effect of systematic errors on the results, we used three different published data sets to show the dependence of the results on restriction enzymes and experimental methods. Comparison of the results from different restriction enzymes shows a higher correlation after removing systematic errors. In contrast, using different methods with the same restriction enzymes shows a lower correlation after removing systematic errors. Notably, the improved correlation of the latter case caused by systematic errors indicates that a higher correlation between results does not ensure the validity of the normalization methods. Finally, we suggest a method to analyze random error and provide guidance for the maximum reproducibility of contact frequency maps. PMID:26717152

  13. Simulation of Systematic Errors in Phase-Referenced VLBI Astrometry

    NASA Astrophysics Data System (ADS)

    Pradel, N.; Charlot, P.; Lestrade, J.-F.

    2005-12-01

    The astrometric accuracy in the relative coordinates of two angularly-close radio sources observed with the phase-referencing VLBI technique is limited by systematic errors. These include geometric errors and atmospheric errors. Based on simulation with the SPRINT software, we evaluate the impact of these errors in the estimated relative source coordinates for standard VLBA observations. Such evaluations are useful to estimate the actual accuracy of phase-referenced VLBI astrometry.

  14. Systematic errors for a Mueller matrix dual rotating compensator ellipsometer.

    PubMed

    Broch, Laurent; En Naciri, Aotmane; Johann, Luc

    2008-06-01

    The characterization of anisotropic materials and complex systems by ellipsometry has pushed the design of instruments to require the measurement of the full reflection Mueller matrix of the sample with great precision. Therefore, Mueller matrix ellipsometers have emerged over the past twenty years. The values of some coefficients of the matrix can be very small, and errors due to noise or systematic errors can distort the analysis. We present a detailed characterization of the systematic errors for a Mueller matrix ellipsometer in the dual-rotating-compensator configuration. Starting from a general formalism, we derive explicit first-order expressions for the errors on all the coefficients of the Mueller matrix of the sample. The errors caused by inaccuracy in the azimuthal arrangement of the optical components and by residual ellipticity introduced by imperfect optical elements are shown. A new method based on a four-zone averaging measurement is proposed to cancel the systematic errors. PMID:18545594
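
    The intuition behind zone averaging is that a first-order systematic contribution whose sign alternates between measurement zones cancels in the mean while the true value survives. The toy scalar model below is only a schematic of that cancellation, not the actual dual-rotating-compensator signal chain:

```python
import numpy as np

rng = np.random.default_rng(3)

# Schematic of zone averaging: a small azimuth misalignment contributes to a
# Mueller-matrix element at first order with a sign that flips between zones,
# so averaging the zones cancels it.  All values are invented for illustration.
m_true = 0.02           # true (small) Mueller-matrix element
delta = 0.01            # first-order contribution of an azimuth misalignment
noise = 1e-4

def measure(zone_sign):
    return m_true + zone_sign * delta + rng.normal(0.0, noise)

zones = [+1, -1, +1, -1]                     # assumed sign pattern over four zones
m_single = measure(+1)
m_avg = np.mean([measure(s) for s in zones])

print(f"single-zone estimate: {m_single:.5f}  (true {m_true})")
print(f"four-zone average:    {m_avg:.5f}")
```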

  15. Systematic errors in precipitation measurements with different rain gauge sensors

    NASA Astrophysics Data System (ADS)

    Sungmin, O.; Foelsche, Ulrich

    2015-04-01

    Ground-level rain gauges provide the most direct measurement of precipitation, and such precipitation datasets are therefore often used to evaluate precipitation estimates from remote sensing and climate model simulations. However, precipitation measured by national standard gauge networks is constrained by the networks' spatial density. For this reason, in order to measure precipitation accurately it is essential to understand the performance and reliability of rain gauges. This study aims to assess the systematic errors between measurements taken with different rain gauge sensors. We will mainly address extreme precipitation events, as these are connected with high uncertainties in the measurements. Precipitation datasets for the study are available from WegenerNet, a dense network of 151 meteorological stations within an area of about 20 km × 15 km centred near the city of Feldbach in the southeast of Austria. The WegenerNet has a horizontal resolution of about 1.4 km and employs 'tipping bucket' rain gauges for precipitation measurements with three different types of sensors; a reference station provides measurements from all types of sensors. The results will illustrate systematic errors via the comparison of the precipitation datasets obtained with the different types of sensors, carried out by direct comparison with the datasets from the reference station. In addition, the dependence of the systematic errors on meteorological conditions, e.g. precipitation intensity and wind speed, will be investigated to assess the feasibility of applying the WegenerNet datasets to the study of extreme precipitation events. The study can be regarded as preparatory research for further work in hydro-meteorological applications that require high-resolution precipitation datasets, such as satellite/radar-derived precipitation validation and hydrodynamic modelling.

  16. Systematic Error Estimation for Chemical Reaction Energies.

    PubMed

    Simm, Gregor N; Reiher, Markus

    2016-06-14

    For a theoretical understanding of the reactivity of complex chemical systems, accurate relative energies between intermediates and transition states are required. Despite its popularity, density functional theory (DFT) often fails to provide sufficiently accurate data, especially for molecules containing transition metals. Due to the huge number of intermediates that need to be studied for all but the simplest chemical processes, DFT is, to date, the only method that is computationally feasible. Here, we present a Bayesian framework for DFT that allows for error estimation of calculated properties. Since the optimal choice of parameters in present-day density functionals is strongly system dependent, we advocate for a system-focused reparameterization. While, at first sight, this approach conflicts with the first-principles character of DFT that should make it, in principle, system independent, we deliberately introduce system dependence to be able to assign a stochastically meaningful error to the system-dependent parametrization, which makes it nonarbitrary. By reparameterizing a functional that was derived on a sound physical basis to a chemical system of interest, we obtain a functional that yields reliable confidence intervals for reaction energies. We demonstrate our approach on the example of catalytic nitrogen fixation. PMID:27159007

  17. Analysis and Correction of Systematic Height Model Errors

    NASA Astrophysics Data System (ADS)

    Jacobsen, K.

    2016-06-01

    The geometry of digital height models (DHM) determined with optical satellite stereo combinations depends upon the image orientation, which is influenced by the satellite camera, the system calibration and the attitude registration. As a standard these days, the image orientation is available in the form of rational polynomial coefficients (RPC). Usually a bias correction of the RPC based on ground control points is required. In most cases the bias correction requires an affine transformation, sometimes only shifts, in image or object space. For some satellites and some cases, for example those caused by a small base length, such an image orientation does not lead to the possible accuracy of height models. As reported e.g. by Yong-hua et al. 2015 and Zhang et al. 2015, especially the Chinese stereo satellite ZiYuan-3 (ZY-3) has a limited calibration accuracy and an attitude recording of just 4 Hz, which may not be satisfactory. Zhang et al. 2015 tried to improve the attitude based on the color sensor bands of ZY-3, but the color images are not always available, nor is detailed satellite orientation information. There is a tendency toward systematic deformation in a Pléiades tri-stereo combination with small base length; the small base length enlarges small systematic errors into object space. But systematic height model errors have also been detected in some other satellite stereo combinations. The largest influence is the unsatisfactory leveling of the height models, but low-frequency height deformations can also be seen. A tilt of the DHM can in theory be eliminated by ground control points (GCP), but often the GCP accuracy and distribution are not optimal, not allowing a correct leveling of the height model. In addition, a model deformation at GCP locations may lead to suboptimal DHM leveling. Supported by reference height models, better accuracy has been reached. As reference height model the Shuttle Radar Topography Mission (SRTM) digital surface model (DSM) or the new AW3D30 DSM, based on ALOS PRISM images, are
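
    As a small illustration of the GCP-based leveling step mentioned above, the sketch below fits and removes a planar tilt from a synthetic DHM using a handful of control points (all surfaces and values are fabricated; a real workflow would also involve RPC bias correction and reference DSMs):

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy DHM on a 1 km x 1 km grid with a systematic tilt plus noise (illustrative).
x, y = np.meshgrid(np.linspace(0, 1000, 101), np.linspace(0, 1000, 101))
true_height = 50 + 5 * np.sin(x / 200.0)
tilt = 0.002 * x - 0.001 * y + 1.5          # systematic tilt and offset of the DHM
dhm = true_height + tilt + rng.normal(0, 0.3, x.shape)

# Ground control points: known heights at a handful of locations.
idx = rng.choice(x.size, size=12, replace=False)
gcp_x, gcp_y = x.ravel()[idx], y.ravel()[idx]
gcp_h = true_height.ravel()[idx]

# Fit a plane (tilt + offset) to the DHM-minus-GCP differences and remove it.
diff = dhm.ravel()[idx] - gcp_h
A = np.column_stack([gcp_x, gcp_y, np.ones_like(gcp_x)])
coeff, *_ = np.linalg.lstsq(A, diff, rcond=None)
dhm_leveled = dhm - (coeff[0] * x + coeff[1] * y + coeff[2])

print("RMS error before leveling:", np.sqrt(np.mean((dhm - true_height) ** 2)))
print("RMS error after  leveling:", np.sqrt(np.mean((dhm_leveled - true_height) ** 2)))
```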

  18. Systematic parameter errors in inspiraling neutron star binaries.

    PubMed

    Favata, Marc

    2014-03-14

    The coalescence of two neutron stars is an important gravitational wave source for LIGO and other detectors. Numerous studies have considered the precision with which binary parameters (masses, spins, Love numbers) can be measured. Here I consider the accuracy with which these parameters can be determined in the presence of systematic errors due to waveform approximations. These approximations include truncation of the post-Newtonian (PN) series and neglect of neutron star (NS) spin, tidal deformation, or orbital eccentricity. All of these effects can yield systematic errors that exceed statistical errors for plausible parameter values. In particular, neglecting spin, eccentricity, or high-order PN terms causes a significant bias in the NS Love number. Tidal effects will not be measurable with PN inspiral waveforms if these systematic errors are not controlled. PMID:24679276

  19. Systematic Parameter Errors in Inspiraling Neutron Star Binaries

    NASA Astrophysics Data System (ADS)

    Favata, Marc

    2014-03-01

    The coalescence of two neutron stars is an important gravitational wave source for LIGO and other detectors. Numerous studies have considered the precision with which binary parameters (masses, spins, Love numbers) can be measured. Here I consider the accuracy with which these parameters can be determined in the presence of systematic errors due to waveform approximations. These approximations include truncation of the post-Newtonian (PN) series and neglect of neutron star (NS) spin, tidal deformation, or orbital eccentricity. All of these effects can yield systematic errors that exceed statistical errors for plausible parameter values. In particular, neglecting spin, eccentricity, or high-order PN terms causes a significant bias in the NS Love number. Tidal effects will not be measurable with PN inspiral waveforms if these systematic errors are not controlled.

  20. Neutrino spectrum at the far detector systematic errors

    SciTech Connect

    Szleper, M.; Para, A.

    2001-10-01

    Neutrino oscillation experiments often employ two identical detectors to minimize errors due to an inadequately known neutrino beam. We examine various systematic effects related to the prediction of the neutrino spectrum in the `far' detector on the basis of the spectrum observed at the `near' detector. We propose a novel method for deriving the far-detector spectrum. This method is less sensitive to the details of the understanding of the neutrino beam line and the hadron production spectra than the commonly used `double ratio' method, thus allowing the systematic errors to be reduced.

  1. Systematic error analysis for 3D nanoprofiler tracing normal vector

    NASA Astrophysics Data System (ADS)

    Kudo, Ryota; Tokuta, Yusuke; Nakano, Motohiro; Yamamura, Kazuya; Endo, Katsuyoshi

    2015-10-01

    In recent years, demand for optical elements with a high degree of freedom in shape has increased. High-precision aspherical shapes are required for X-ray focusing mirrors, and free-form optical elements are used in devices such as head-mounted displays. Measurement technology is essential for fabricating such optical devices. We have developed a high-precision 3D nanoprofiler. With the nanoprofiler, normal-vector information of the sample surface is obtained on the basis of the linearity of light. Because the normal vector is the differential of the shape, the shape can be determined by integration. Sub-nanometer repeatability has been achieved with the nanoprofiler. To pursue shape accuracy, the systematic errors are analyzed; they consist of the figure error of the sample and the assembly errors of the device. The method uses the ideal shape of the sample to calculate the measurement point coordinates and normal vectors, but the measured figure deviates from the ideal shape because of the systematic errors. Therefore, the measurement point coordinates and normal vectors are calculated again by feeding back the measured figure, and error correction is attempted by re-deriving the figure. The effectiveness of this approach was confirmed theoretically by simulation. Applying the approach to the experiment confirmed the possibility of a figure correction of about 4 nm PV in the sample employed.

  2. Reducing systematic error in weak lensing cluster surveys

    SciTech Connect

    Utsumi, Yousuke; Miyazaki, Satoshi; Hamana, Takashi; Geller, Margaret J.; Kurtz, Michael J.; Fabricant, Daniel G.; Dell'Antonio, Ian P.; Oguri, Masamune

    2014-05-10

    Weak lensing provides an important route toward collecting samples of clusters of galaxies selected by mass. Subtle systematic errors in image reduction can compromise the power of this technique. We use the B-mode signal to quantify this systematic error and to test methods for reducing this error. We show that two procedures are efficient in suppressing systematic error in the B-mode: (1) refinement of the mosaic CCD warping procedure to conform to absolute celestial coordinates and (2) truncation of the smoothing procedure on a scale of 10'. Application of these procedures reduces the systematic error to 20% of its original amplitude. We provide an analytic expression for the distribution of the highest peaks in noise maps that can be used to estimate the fraction of false peaks in the weak-lensing κ-signal-to-noise ratio (S/N) maps as a function of the detection threshold. Based on this analysis, we select a threshold S/N = 4.56 for identifying an uncontaminated set of weak-lensing peaks in two test fields covering a total area of ∼3 deg^2. Taken together these fields contain seven peaks above the threshold. Among these, six are probable systems of galaxies and one is a superposition. We confirm the reliability of these peaks with dense redshift surveys, X-ray, and imaging observations. The systematic error reduction procedures we apply are general and can be applied to future large-area weak-lensing surveys. Our high-peak analysis suggests that with an S/N threshold of 4.5, there should be only 2.7 spurious weak-lensing peaks even in an area of 1000 deg^2, where we expect ∼2000 peaks based on our Subaru fields.

  3. Bayesian conformity assessment in presence of systematic measurement errors

    NASA Astrophysics Data System (ADS)

    Carobbi, Carlo; Pennecchi, Francesca

    2016-04-01

    Conformity assessment of the distribution of the values of a quantity is investigated using a Bayesian approach. The effect of systematic, non-negligible measurement errors is taken into account. The analysis is general, in the sense that the probability distribution of the quantity can be of any kind, that is, even different from the ubiquitous normal distribution, and the measurement model function, linking the measurand with the observable and non-observable influence quantities, can be non-linear. Further, any joint probability density function can be used to model the available knowledge about the systematic errors. It is demonstrated that the result of the Bayesian analysis developed here reduces to the standard result (obtained through a frequentist approach) when the systematic measurement errors are negligible. A consolidated frequentist extension of that standard result, aimed at including the effect of a systematic measurement error, is directly compared with the Bayesian result, whose superiority is demonstrated. Application of the results obtained here to the derivation of the operating characteristic curves used for sampling plans for inspection by variables is also introduced.
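
    A minimal sketch of the idea under simplifying assumptions not taken from the paper (Gaussian random error, a Gaussian prior on a constant systematic offset, and a flat prior on the measurand): marginalizing over the systematic error widens the posterior for the measurand and lowers the assessed conformity probability relative to ignoring it.

```python
import numpy as np

rng = np.random.default_rng(6)

# Tolerance interval for the quantity of interest (assumed values).
lower, upper = 9.0, 11.0

# Measurement model: y = x + b + e, with random error e ~ N(0, s_e)
# and systematic error b with prior N(0, s_b); flat (improper) prior on x.
y_obs = 10.6
s_e, s_b = 0.2, 0.3

# Monte Carlo marginalization over the systematic error: for each draw of b,
# the posterior of x is N(y_obs - b, s_e); average the conformity probability.
b_draws = rng.normal(0.0, s_b, 100_000)
x_draws = rng.normal(y_obs - b_draws, s_e)
p_conform = np.mean((x_draws >= lower) & (x_draws <= upper))

# Ignoring the systematic error overstates confidence in conformity.
x_naive = rng.normal(y_obs, s_e, 100_000)
p_naive = np.mean((x_naive >= lower) & (x_naive <= upper))

print(f"P(conforming), systematic error marginalized: {p_conform:.3f}")
print(f"P(conforming), systematic error ignored:      {p_naive:.3f}")
```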

  4. Seeing Your Error Alters My Pointing: Observing Systematic Pointing Errors Induces Sensori-Motor After-Effects

    PubMed Central

    Ronchi, Roberta; Revol, Patrice; Katayama, Masahiro; Rossetti, Yves; Farnè, Alessandro

    2011-01-01

    During the procedure of prism adaptation, subjects execute pointing movements to visual targets under a lateral optical displacement: as a consequence of the discrepancy between visual and proprioceptive inputs, their visuo-motor activity is characterized by pointing errors. The perception of such final errors triggers error-correction processes that eventually result in sensori-motor compensation, opposite to the prismatic displacement (i.e., after-effects). Here we tested whether the mere observation of erroneous pointing movements, similar to those executed during prism adaptation, is sufficient to produce adaptation-like after-effects. Neurotypical participants observed, from a first-person perspective, the examiner's arm making incorrect pointing movements that systematically overshot the visual target location to the right, thus simulating a rightward optical deviation. Three classical after-effect measures (proprioceptive, visual and visual-proprioceptive shift) were recorded before and after first-person-perspective observation of pointing errors. Results showed that mere visual exposure to an arm that systematically points to the right side of a target (i.e., without error correction) produces a leftward after-effect, which mostly affects the observer's proprioceptive estimation of her body midline. In addition, being exposed to such a constant visual error induced in the observer the illusion of "feeling" the seen movement. These findings indicate that it is possible to elicit sensori-motor after-effects by mere observation of movement errors. PMID:21731649

  5. Analysis of error-prone survival data under additive hazards models: measurement error effects and adjustments.

    PubMed

    Yan, Ying; Yi, Grace Y

    2016-07-01

    Covariate measurement error occurs commonly in survival analysis. Under the proportional hazards model, measurement error effects have been well studied, and various inference methods have been developed to correct for error effects under such a model. In contrast, error-contaminated survival data under the additive hazards model have received relatively little attention. In this paper, we investigate this problem by exploring measurement error effects on parameter estimation and the change of the hazard function. New insights into measurement error effects are revealed, as opposed to well-documented results for the Cox proportional hazards model. We propose a class of bias correction estimators that embraces certain existing estimators as special cases. In addition, we exploit the regression calibration method to reduce measurement error effects. Theoretical results for the developed methods are established, and numerical assessments are conducted to illustrate the finite sample performance of our methods. PMID:26328545
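
    The regression calibration idea can be illustrated in a stripped-down linear-model setting, used here only as a stand-in for the additive hazards estimator (which is not reproduced): the error-prone covariate is replaced by its conditional expectation given the surrogate, undoing the attenuation suffered by a naive fit.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 5000

# True covariate X and error-prone surrogate W = X + U (classical additive error).
x = rng.normal(2.0, 1.0, n)
u = rng.normal(0.0, 0.8, n)
w = x + u

# Outcome depends linearly on X (a toy stand-in for the covariate effect in a
# hazard model; the attenuation phenomenon is analogous).
beta = 0.5
y = 1.0 + beta * x + rng.normal(0.0, 0.5, n)

# Naive fit uses W directly and attenuates the estimated effect.
naive_beta = np.polyfit(w, y, 1)[0]

# Regression calibration: replace W by E[X | W] using the variance components,
# then refit.  The error variance is assumed known (e.g. from replicates).
var_w = w.var(ddof=1)
var_u = 0.8 ** 2
lam = (var_w - var_u) / var_w      # reliability ratio
x_hat = w.mean() + lam * (w - w.mean())
rc_beta = np.polyfit(x_hat, y, 1)[0]

print(f"true beta: {beta}, naive: {naive_beta:.3f}, regression calibration: {rc_beta:.3f}")
```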

  6. Medication Errors in the Southeast Asian Countries: A Systematic Review

    PubMed Central

    Salmasi, Shahrzad; Khan, Tahir Mehmood; Hong, Yet Hoi; Ming, Long Chiau; Wong, Tin Wui

    2015-01-01

    Background Medication error (ME) is a worldwide issue, but most studies on ME have been undertaken in developed countries and very little is known about ME in Southeast Asian countries. This study aimed to systematically identify and review research done on ME in Southeast Asian countries in order to identify common types of ME and estimate its prevalence in this region. Methods The literature relating to MEs in Southeast Asian countries was systematically reviewed in December 2014 using Embase, Medline, PubMed, ProQuest Central and CINAHL. Inclusion criteria were studies (in any language) that investigated the incidence and the contributing factors of ME in patients of all ages. Results The 17 included studies reported data from six of the eleven Southeast Asian countries: five studies in Singapore, four in Malaysia, three in Thailand, three in Vietnam, one in the Philippines and one in Indonesia. There were no data on MEs in Brunei, Laos, Cambodia, Myanmar and Timor. Of the seventeen included studies, eleven measured administration errors, four focused on prescribing errors, three were done on preparation errors, three on dispensing errors and two on transcribing errors. There was only one study of reconciliation error. Three studies were interventional. Discussion The most frequently reported types of administration error were incorrect time, omission error and incorrect dose. Staff shortages, and hence heavy workload for nurses, doctor/nurse distraction, and misinterpretation of the prescription/medication chart were identified as contributing factors of ME. There is a serious lack of studies on this topic in this region, which needs to be addressed if the issue of ME is to be fully understood and addressed. PMID:26340679

  7. Systematic lossy forward error protection for error-resilient digital video broadcasting

    NASA Astrophysics Data System (ADS)

    Rane, Shantanu D.; Aaron, Anne; Girod, Bernd

    2004-01-01

    We present a novel scheme for error-resilient digital video broadcasting, using the Wyner-Ziv coding paradigm. We apply the general framework of systematic lossy source-channel coding to generate a supplementary bitstream that can correct transmission errors in the decoded video waveform up to a certain residual distortion. The systematic portion consists of a conventional MPEG-coded bitstream, which is transmitted over the error-prone channel without forward error correction. The supplementary bitstream is a low rate representation of the transmitted video sequence generated using Wyner-Ziv encoding. We use the conventionally decoded error-concealed MPEG video sequence as side information to decode the Wyner-Ziv bits. The decoder combines the error-prone side information and the Wyner-Ziv description to yield an improved decoded video signal. Our results indicate that, over a large range of channel error probabilities, this scheme yields superior video quality when compared with traditional forward error correction techniques employed in digital video broadcasting.

  8. The Effect of Systematic Error in Forced Oscillation Testing

    NASA Technical Reports Server (NTRS)

    Williams, Brianne Y.; Landman, Drew; Flory, Isaac L., IV; Murphy, Patrick C.

    2012-01-01

    One of the fundamental problems in flight dynamics is the formulation of aerodynamic forces and moments acting on an aircraft in arbitrary motion. Classically, conventional stability derivatives are used for the representation of aerodynamic loads in the aircraft equations of motion. However, for modern aircraft with highly nonlinear and unsteady aerodynamic characteristics undergoing maneuvers at high angles of attack and/or high angular rates, the conventional stability derivative model is no longer valid. Attempts to formulate aerodynamic model equations with unsteady terms are based on several different wind tunnel techniques: for example, captive, wind tunnel single degree-of-freedom, and wind tunnel free-flying techniques. One of the most common techniques is forced oscillation testing. However, the forced oscillation testing method does not address the systematic and systematic correlation errors from the test apparatus that cause inconsistencies in the measured oscillatory stability derivatives. The primary objective of this study is to identify the possible sources and magnitude of systematic error in representative dynamic test apparatuses. Sensitivities of the longitudinal stability derivatives to systematic errors are computed, using a high fidelity simulation of a forced oscillation test rig, and assessed using both Design of Experiments and Monte Carlo methods.
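
    A hedged sketch of the Monte Carlo part of such a study: a toy forced-oscillation signal is generated, assumed systematic instrumentation errors (a balance-channel phase lag and a gain error, with invented magnitudes) are drawn repeatedly, and the resulting bias and spread of the extracted out-of-phase, damping-like coefficient are summarized.

```python
import numpy as np

rng = np.random.default_rng(8)

# Toy forced-oscillation test: pitch motion a(t) = A*sin(w*t).  The pitching
# moment has an in-phase (stiffness-like) and an out-of-phase (damping-like)
# component; the coefficients are recovered by Fourier projection.
A, w = np.radians(2.0), 2 * np.pi          # amplitude [rad], frequency [rad/s]
cm_in, cm_out = -0.8, -0.15                # "true" coefficients (illustrative)
t = np.linspace(0.0, 4.0, 4000)            # four full oscillation periods

def measured_coefficients(phase_err_deg=0.0, gain_err=0.0):
    """Simulate one test with assumed systematic errors (a phase lag in the
    balance channel and a gain error) and extract the two coefficients."""
    phi = np.radians(phase_err_deg)
    cm = (1 + gain_err) * (cm_in * A * np.sin(w * t + phi)
                           + cm_out * A * np.cos(w * t + phi))
    in_phase = 2 * np.mean(cm * np.sin(w * t)) / A
    out_phase = 2 * np.mean(cm * np.cos(w * t)) / A
    return in_phase, out_phase

# Monte Carlo propagation of the assumed systematic error distributions.
draws = np.array([measured_coefficients(rng.normal(0, 0.5), rng.normal(0, 0.01))
                  for _ in range(2000)])
print("out-of-phase coefficient: true %.3f, MC mean %.3f, MC std %.4f"
      % (cm_out, draws[:, 1].mean(), draws[:, 1].std()))
```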

  9. Weak gravitational lensing systematic errors in the dark energy survey

    NASA Astrophysics Data System (ADS)

    Plazas, Andres Alejandro

    Dark energy is one of the most important unsolved problems in modern physics, and weak gravitational lensing (WL) by mass structures along the line of sight ("cosmic shear") is a promising technique to learn more about its nature. However, WL is subject to numerous systematic errors which induce biases in measured cosmological parameters and prevent the development of its full potential. In this thesis, we advance the understanding of WL systematics in the context of the Dark Energy Survey (DES). We develop a testing suite to assess the performance of the shapelet-based DES WL measurement pipeline. We determine that the measurement bias of the parameters of our Point Spread Function (PSF) model scales as (S/N)^-2, implying that a PSF S/N > 75 is needed to satisfy DES requirements. PSF anisotropy suppression also satisfies the requirements for source galaxies with S/N ≳ 45. For low-noise, marginally-resolved exponential galaxies, the shear calibration errors are up to about 0.06% (for shear values ≲ 0.075). Galaxies with S/N ≳ 75 present about 1% errors, sufficient for first-year DES data. However, more work is needed to satisfy full-area DES requirements, especially in the high-noise regime. We then implement tests to validate the high accuracy of the map between pixel coordinates and sky coordinates (astrometric solution), which is crucial to detect the required number of galaxies for WL in stacked images. We also study the effect of atmospheric dispersion on cosmic shear experiments such as DES and the Large Synoptic Survey Telescope (LSST) in the four griz bands. For DES (LSST), we find systematics in the g and r (g, r, and i) bands that are larger than required. We find that a simple linear correction in galaxy color is accurate enough to reduce dispersion shear systematics to insignificant levels in the r (i) band for DES (LSST). More complex corrections will likely reduce the systematic cosmic-shear errors below statistical errors for LSST r band

  10. Spatial reasoning in the treatment of systematic sensor errors

    SciTech Connect

    Beckerman, M.; Jones, J.P.; Mann, R.C.; Farkas, L.A.; Johnston, S.E.

    1988-01-01

    In processing ultrasonic and visual sensor data acquired by mobile robots, systematic errors can occur. The sonar errors include distortions in size and surface orientation due to the beam resolution, and false echoes. The vision errors include, among others, ambiguities in discriminating depth discontinuities from intensity gradients generated by variations in surface brightness. In this paper we present a methodology for the removal of systematic errors using data from the sonar sensor domain to guide the processing of information in the vision domain, and vice versa. During the sonar data processing some errors are removed from 2D navigation maps through pattern analyses and consistent-labelling conditions, using spatial reasoning about the sonar beam and object characteristics. Others are removed using visual information. In the vision data processing vertical edge segments are extracted using a Canny-like algorithm, and are labelled. Object edge features are then constructed from the segments using statistical and spatial analyses. A least-squares method is used during the statistical analysis, and sonar range data are used in the spatial analysis. 7 refs., 10 figs.

  11. Systematic lossy error protection of video based on H.264/AVC redundant slices

    NASA Astrophysics Data System (ADS)

    Rane, Shantanu; Girod, Bernd

    2006-01-01

    We propose the use of H.264 redundant slices for Systematic Lossy Error Protection (SLEP) of a video signal transmitted over an error-prone channel. In SLEP, the video signal is transmitted to the decoder without channel coding. Additionally, a Wyner-Ziv encoded version of the video signal is transmitted in order to provide error-resilience. In the event of channel errors, the Wyner-Ziv description is decoded as a substitute for the error-prone portions of the primary video signal. Since the Wyner-Ziv description is typically coarser than the primary video signal, SLEP is a lossy error protection technique which trades off residual quantization distortion for improved error-resilience properties, such as graceful degradation of decoder picture quality. We describe how H.264 redundant slices can be used to generate the Wyner-Ziv description, and present simulation results to demonstrate the advantages of this method over traditional methods such as FEC.

  12. Systematic Errors of the FSU Global Spectral Model

    NASA Astrophysics Data System (ADS)

    Surgi, Naomi

    Three 20-day winter forecasts have been carried out using the Florida State University Global Spectral Model to examine the systematic errors of the model. Most GCMs and global forecast models exhibit the same kind of error patterns even though the model formulations vary somewhat between them. Some of the dominant errors are a breakdown of the trade winds in the low latitudes, an over-prediction of the subtropical jets accompanied by an upward and poleward shift of the jets, an error in the mean sea-level pressure with over-intensification of the quasi-stationary oceanic lows and continental highs, and a warming of the tropical mid and upper troposphere. In this study, a number of sensitivity experiments have been performed for which orography, model physics and initialization are considered as possible causes of these errors. A parameterization of the vertical distribution of momentum due to the sub-grid scale orography has been implemented in the model to address the model deficiencies associated with orographic forcing. This scheme incorporates the effects of moisture on the wave induced stress. The parameterization of gravity wave drag is shown to substantially reduce the large-scale wind and height errors in regions of direct forcing and well downstream of the mountainous regions. Also, a parameterization of the heat and moisture transport associated with shallow convection is found to have a positive impact on the errors, particularly in the tropics. This is accomplished by the increase of moisture supply from the subtropics into the deep tropics and a subsequent enhancement of the secondary circulations. A dynamic relaxation was carried out to examine the impact of the long wave errors on the shorter waves. By constraining the long wave error, improvement is shown for wavenumbers 5-7 on medium to extended range time intervals. Thus, improved predictability of the transient flow is expected by applying this initialization procedure.

  13. Sensitivity analysis of geometric errors in additive manufacturing medical models.

    PubMed

    Pinto, Jose Miguel; Arrieta, Cristobal; Andia, Marcelo E; Uribe, Sergio; Ramos-Grez, Jorge; Vargas, Alex; Irarrazaval, Pablo; Tejos, Cristian

    2015-03-01

    Additive manufacturing (AM) models are used in medical applications for surgical planning, prosthesis design and teaching. For these applications, the accuracy of the AM models is essential. Unfortunately, this accuracy is compromised due to errors introduced by each of the building steps: image acquisition, segmentation, triangulation, printing and infiltration. However, the contribution of each step to the final error remains unclear. We performed a sensitivity analysis comparing errors obtained from a reference with those obtained modifying parameters of each building step. Our analysis considered global indexes to evaluate the overall error, and local indexes to show how this error is distributed along the surface of the AM models. Our results show that the standard building process tends to overestimate the AM models, i.e. models are larger than the original structures. They also show that the triangulation resolution and the segmentation threshold are critical factors, and that the errors are concentrated at regions with high curvatures. Errors could be reduced choosing better triangulation and printing resolutions, but there is an important need for modifying some of the standard building processes, particularly the segmentation algorithms. PMID:25649961

  14. From Systematic Errors to Cosmology Using Large-Scale Structure

    NASA Astrophysics Data System (ADS)

    Huterer, Dragan

    We propose to carry out a two-pronged program to significantly improve links between galaxy surveys and constraints on primordial cosmology and fundamental physics. We will first develop the methodology to self-calibrate the survey, that is, determine the large-angle calibration systematics internally from the survey. We will use this information to correct biases that propagate from the largest to smaller angular scales. Our approach for tackling the systematics is very complementary to existing ones, in particular in the sense that it does not assume knowledge of specific systematic maps or templates. It is timely to undertake these analyses, since none of the currently known methods addresses the multiplicative effects of large-angle calibration errors that contaminate the small-scale signal and present one of the most significant sources of error in the large-scale structure. The second part of the proposal is to precisely quantify the statistical and systematic errors in the reconstruction of the Integrated Sachs-Wolfe (ISW) contribution to the cosmic microwave background (CMB) sky map using information from galaxy surveys. Unlike the ISW contributions to CMB power, the ISW map reconstruction has not been studied in detail to date. We will create a nimble plug-and-play pipeline to ascertain how reliably a map from an arbitrary LSS survey can be used to separate the late-time and early-time contributions to CMB anisotropy at large angular scales. We will pay particular attention to partial sky coverage, incomplete redshift information, finite redshift range, and imperfect knowledge of the selection function for the galaxy survey. Our work should serve as the departure point for a variety of implications in cosmology, including the physical origin of the large-angle CMB "anomalies".

  15. SU-E-T-613: Dosimetric Consequences of Systematic MLC Leaf Positioning Errors

    SciTech Connect

    Kathuria, K; Siebers, J

    2014-06-01

    Purpose: The purpose of this study is to determine the dosimetric consequences of systematic MLC leaf positioning errors for clinical IMRT patient plans so as to establish detection tolerances for quality assurance programs. Materials and Methods: Dosimetric consequences were simulated by extracting MLC delivery instructions from the TPS, altering the file by the specified error, reloading the delivery instructions into the TPS, recomputing dose, and extracting dose-volume metrics for one head-and-neck and one prostate patient. Machine error was simulated by offsetting MLC leaves in Pinnacle in a systematic way. Three different algorithms were followed for these systematic offsets, as follows: a systematic sequential one-leaf offset (one leaf offset in one segment per beam), a systematic uniform one-leaf offset (the same one leaf offset per segment per beam), and a systematic offset of a given number of leaves picked uniformly at random from a given number of segments (5 out of 10 total). Dose to the PTV and normal tissue was simulated. Results: A systematic 5 mm offset of 1 leaf for all delivery segments of all beams resulted in a maximum PTV D98 deviation of 1%. Results showed very low dose error in all reasonably possible machine configurations, rare or otherwise, which could be simulated. Very low error in dose to PTV and OARs was shown in all possible cases of one leaf per beam per segment being offset (<1%), or that of only one leaf per beam being offset (<0.2%). The errors resulting from a high number of adjacent leaves (maximum of 5 out of 60 total leaf-pairs) being simultaneously offset in many (5) of the control points (total 10–18 in all beams) per beam, in both the PTV and the OARs analyzed, were similarly low (<2–3%). Conclusions: The above results show that patient shifts and anatomical changes are the main source of errors in dose delivered, not machine delivery. These two sources of error are “visually complementary” and uncorrelated
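
    A simplified sketch of the leaf-offset perturbation described above, using aperture-area change as a crude surrogate for dosimetric impact (the study itself recomputed dose in the TPS); plan dimensions, leaf positions, and offsets are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(9)

# Toy plan: 7 beams x 10 segments x 60 leaf pairs; each leaf pair has a left and
# a right bank position (cm).  Aperture-area change is used as a crude surrogate
# for dosimetric impact (a real check would recompute dose in the TPS).
n_beams, n_segments, n_pairs, leaf_width = 7, 10, 60, 0.5
left = rng.uniform(-3.0, -0.5, (n_beams, n_segments, n_pairs))
right = rng.uniform(0.5, 3.0, (n_beams, n_segments, n_pairs))

def aperture_area(left, right):
    return np.sum(np.clip(right - left, 0.0, None)) * leaf_width

def offset_random_leaves(bank, offset_cm, leaves_per_segment, segments_per_beam):
    """Systematically offset a few adjacent leaves in a random subset of the
    segments of every beam (one of the error patterns described above)."""
    bank = bank.copy()
    for b in range(n_beams):
        for s in rng.choice(n_segments, segments_per_beam, replace=False):
            start = rng.integers(0, n_pairs - leaves_per_segment)
            bank[b, s, start:start + leaves_per_segment] += offset_cm
    return bank

baseline = aperture_area(left, right)
perturbed = aperture_area(left, offset_random_leaves(right, 0.5, 5, 5))
print(f"relative aperture-area change: {(perturbed - baseline) / baseline:.3%}")
```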

  16. TRAINING ERRORS AND RUNNING RELATED INJURIES: A SYSTEMATIC REVIEW

    PubMed Central

    Buist, Ida; Sørensen, Henrik; Lind, Martin; Rasmussen, Sten

    2012-01-01

    Purpose: The purpose of this systematic review was to examine the link between training characteristics (volume, duration, frequency, and intensity) and running related injuries. Methods: A systematic search was performed in PubMed, Web of Science, Embase, and SportDiscus. Studies were included if they examined novice, recreational, or elite runners between the ages of 18 and 65. Exposure variables were training characteristics defined as volume, distance or mileage, time or duration, frequency, intensity, speed or pace, or similar terms. The outcome of interest was Running Related Injuries (RRI) in general or specific RRI in the lower extremity or lower back. Methodological quality was evaluated using quality assessment tools of 11 to 16 items. Results: After examining 4561 titles and abstracts, 63 articles were identified as potentially relevant. Finally, nine retrospective cohort studies, 13 prospective cohort studies, six case-control studies, and three randomized controlled trials were included. The mean quality score was 44.1%. Conflicting results were reported on the relationships between volume, duration, intensity, and frequency and RRI. Conclusion: It was not possible to identify which training errors were related to running related injuries. Still, well supported data on which training errors relate to or cause running related injuries is highly important for determining proper prevention strategies. If methodological limitations in measuring training variables can be resolved, more work can be conducted to define training and the interactions between different training variables, create several hypotheses, test the hypotheses in a large scale prospective study, and explore cause and effect relationships in randomized controlled trials. Level of evidence: 2a PMID:22389869

  17. Jason-2 systematic error analysis in the GPS derived orbits

    NASA Astrophysics Data System (ADS)

    Melachroinos, S.; Lemoine, F. G.; Zelensky, N. P.; Rowlands, D. D.; Luthcke, S. B.; Chinn, D. S.

    2011-12-01

    Several results related to global or regional sea level changes still too often rely on the assumption that orbit errors coming from station coordinate adoption can be neglected in the total error budget (Cerri et al. 2010). In particular, instantaneous crust-fixed coordinates are obtained by adding to the linear ITRF model the geophysical high-frequency variations. In principle, geocenter motion should also be included in this computation, in order to reference these coordinates to the center of mass of the whole Earth. This correction is currently not applied when computing GDR orbits. Cerri et al. (2010) performed an analysis of systematic errors common to all coordinates along the North/South direction, as this type of bias, also known as a Z-shift, has a clear impact on MSL estimates due to the unequal distribution of continental surface in the northern and southern hemispheres. The goal of this paper is to specifically study the main source of errors, which comes from the current imprecision in the Z-axis realization of the frame. We focus here on the time variability of this Z-shift, which we can decompose into a drift and a periodic component due to the presumably omitted geocenter motion. A series of Jason-2 GPS-only orbits have been computed at NASA GSFC, using both IGS05 and IGS08. These orbits have been shown to agree radially at less than 1 cm RMS vs our SLR/DORIS std0905 and std1007 reduced-dynamic orbits and in comparison with orbits produced by other analysis centers (Melachroinos et al. 2011). Our GPS-only JASON-2 orbit accuracy is assessed using a number of tests including analysis of independent SLR and altimeter crossover residuals, orbit overlap differences, and direct comparison to orbits generated at GSFC using SLR and DORIS tracking, and to orbits generated externally at other centers. Tests based on SLR-crossover residuals provide the best performance indicator for independent validation of the NASA/GSFC GPS-only reduced dynamic orbits. Reduced

  18. Using Laser Scanners to Augment the Systematic Error Pointing Model

    NASA Astrophysics Data System (ADS)

    Wernicke, D. R.

    2016-08-01

    The antennas of the Deep Space Network (DSN) rely on precise pointing algorithms to communicate with spacecraft that are billions of miles away. Although the existing systematic error pointing model is effective at reducing blind pointing errors due to static misalignments, several of its terms have a strong dependence on seasonal and even daily thermal variation and are thus not easily modeled. Changes in the thermal state of the structure create a separation from the model and introduce a varying pointing offset. Compensating for this varying offset is possible by augmenting the pointing model with laser scanners. In this approach, laser scanners mounted to the alidade measure structural displacements while a series of transformations generate correction angles. Two sets of experiments were conducted in August 2015 using commercially available laser scanners. When compared with historical monopulse corrections under similar conditions, the computed corrections are within 3 mdeg of the mean. However, although the results show promise, several key challenges relating to the sensitivity of the optical equipment to sunlight render an implementation of this approach impractical. Other measurement devices such as inclinometers may be implementable at a significantly lower cost.

  19. Minor Planet Observations to Identify Reference System Systematic Errors

    NASA Astrophysics Data System (ADS)

    Hemenway, Paul D.; Duncombe, R. L.; Castelaz, M. W.

    2011-04-01

    In the 1930s Brouwer proposed using minor planets to correct the Fundamental System of celestial coordinates. Since then, many projects have used or proposed to use visual, photographic, photo detector, and space based observations to that end. From 1978 to 1990, a project was undertaken at the University of Texas utilizing the long focus and attendant advantageous plate scale (c. 7.37"/mm) of the 2.1m Otto Struve reflector's Cassegrain focus. The project followed precepts given in 1979. The program had several potential advantages over previous programs, including high inclination orbits to cover half the celestial sphere and, following Kristensen, the use of crossing points to remove systematic star position errors entirely from some observations. More than 1000 plates were obtained of 34 minor planets as part of this project. In July 2010 McDonald Observatory donated the plates to the Pisgah Astronomical Research Institute (PARI) in North Carolina. PARI is in the process of renovating the Space Telescope Science Institute GAMMA II modified PDS microdensitometer to scan the plates in the archives. We plan to scan the minor planet plates, reduce the plates to the densified ICRS using the UCAC4 positions (or the best available positions at the time of the reductions), and then determine the utility of attempting to find significant systematic corrections. Here we report the current status of various aspects of the project. Support from the National Science Foundation in the last millennium is gratefully acknowledged, as is help from Judit Ries and Wayne Green in packing and transporting the plates.

  20. Systematics for checking geometric errors in CNC lathes

    NASA Astrophysics Data System (ADS)

    Araújo, R. P.; Rolim, T. L.

    2015-10-01

    Non-idealities present in machine tools directly compromise both the geometry and the dimensions of machined parts, generating deviations from the design. Given the competitive scenario among different companies, knowledge of the geometric behavior of these machines is necessary in order to establish their processing capability, avoiding waste of time and materials and satisfying customer requirements. However, although geometric tests are important and necessary for verifying that a machine behaves correctly, and therefore for preventing future damage, most users do not apply such tests to their machines, whether for lack of knowledge or lack of proper motivation, essentially because of two factors: the long testing time and the high cost of testing. This work proposes a systematics for checking straightness and perpendicularity errors in CNC lathes that demands little time and cost while maintaining high metrological reliability, to be used on the factory floors of small and medium-sized businesses to ensure the quality of their products and keep them competitive.

  1. A study of systematic errors in the PMD CamBoard nano

    NASA Astrophysics Data System (ADS)

    Chow, Jacky C. K.; Lichti, Derek D.

    2013-04-01

    Time-of-flight-based three-dimensional cameras are the state-of-the-art imaging modality for acquiring rapid 3D position information. Unlike any other technology on the market, they can deliver 2D images co-located with distance information at every pixel location, without any shadows. Recent technological advancements have begun miniaturizing such technology to be more suitable for laptops and eventually cellphones. This paper explores the systematic errors inherent to the new PMD CamBoard nano camera. As the world's most compact 3D time-of-flight camera, it has applications in a wide domain, such as gesture control and facial recognition. To model the systematic errors, a one-step point-based and plane-based bundle adjustment method is used. It simultaneously estimates all systematic errors and unknown parameters by minimizing the residuals of image measurements, distance measurements, and amplitude measurements in a least-squares sense. The presented self-calibration method only requires a standard checkerboard target on a flat plane, making it a suitable candidate for on-site calibration. In addition, because distances are only constrained to lie on a plane, the raw pixel-by-pixel distance observations can be used. This makes it possible to increase the number of distance observations in the adjustment with ease. The results from this paper indicate that amplitude-dependent range errors are the dominant error source for the nano under low-scattering imaging configurations. After user self-calibration, the RMSE of the range observations was reduced by almost 50%, delivering range measurements at a precision of approximately 2.5 cm within a 70 cm interval.

  2. Systematic Error in UAV-derived Topographic Models: The Importance of Control

    NASA Astrophysics Data System (ADS)

    James, M. R.; Robson, S.; d'Oleire-Oltmanns, S.

    2014-12-01

    UAVs equipped with consumer cameras are increasingly being used to produce high-resolution digital elevation models (DEMs) for a wide variety of geoscience applications. Image processing and DEM generation are being facilitated by parallel increases in the use of software based on 'structure from motion' algorithms. However, recent work [1] has demonstrated that image networks from UAVs, for which camera pointing directions are generally near-parallel, are susceptible to producing systematic error in the resulting topographic surfaces (a vertical 'doming'). This issue primarily reflects error in the camera lens distortion model, which is dominated by the radial K1 term. Common data processing scenarios, in which self-calibration is used to refine the camera model within the bundle adjustment, can inherently result in such systematic error via poor K1 estimates. Incorporating oblique imagery into such data sets can mitigate error by enabling more accurate calculation of camera parameters [1]. Here, using a combination of simulated image networks and real imagery collected from a fixed-wing UAV, we explore the additional roles of external ground control and the precision of image measurements. We illustrate similarities and differences between a variety of structure from motion software, and underscore the importance of well-distributed and suitably accurate control for projects where a demonstrated high accuracy is required. [1] James & Robson (2014) Earth Surf. Proc. Landforms, 39, 1413-1420, doi: 10.1002/esp.3609
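
    The radial term that dominates the doming effect can be illustrated with a one-parameter (K1-only) radial distortion model. The sketch below shows how a small error in the recovered K1 displaces image points by a smooth, radially symmetric amount, which is the mechanism behind the dome-shaped DEM error; the grid of points and the K1 values are hypothetical.

        import numpy as np

        # One-parameter radial distortion (Brown model truncated to K1):
        # distorted position = (x, y) * (1 + K1 * r**2), with r**2 = x**2 + y**2.
        def radial_distort(xy, k1):
            r2 = np.sum(xy**2, axis=1, keepdims=True)
            return xy * (1.0 + k1 * r2)

        # Hypothetical grid of image points (normalized units) and a slightly
        # wrong K1 from self-calibration.
        gx, gy = np.meshgrid(np.linspace(-0.5, 0.5, 11), np.linspace(-0.35, 0.35, 9))
        pts = np.column_stack([gx.ravel(), gy.ravel()])
        k1_true, k1_est = -0.10, -0.08

        # The residual grows like r**3: a smooth, radially symmetric pattern that
        # maps into a dome- or bowl-shaped error in the reconstructed surface.
        residual = radial_distort(pts, k1_true) - radial_distort(pts, k1_est)
        print("maximum point displacement (normalized units):", np.abs(residual).max())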

  3. Systematic errors in two-dimensional digital image correlation due to lens distortion

    NASA Astrophysics Data System (ADS)

    Pan, Bing; Yu, Liping; Wu, Dafang; Tang, Liqun

    2013-02-01

    Lens distortion is practically unavoidable in a real optical imaging system; it causes non-uniform geometric distortion in the recorded images and gives rise to additional errors in the displacement and strain results measured by two-dimensional digital image correlation (2D-DIC). In this work, the systematic errors in the displacement and strain results measured by 2D-DIC due to lens distortion are investigated theoretically, using the radial lens distortion model, and experimentally, through easy-to-implement rigid-body in-plane translation tests. Theoretical analysis shows that the displacement and strain errors at an interrogated image point are not only linearly proportional to the distortion coefficient of the camera lens used, but also depend on the point's distance from the distortion center and on its displacement magnitude. To eliminate the systematic errors caused by lens distortion, a simple linear least-squares algorithm is proposed to estimate the distortion coefficient from the distorted displacement results of rigid-body in-plane translation tests; the estimate can then be used to correct the distorted displacement fields and obtain unbiased displacement and strain fields. Experimental results verify the correctness of the theoretical derivation and the effectiveness of the proposed lens distortion correction method.
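
    A simplified one-dimensional analogue can clarify why a linear least-squares estimate of the distortion coefficient is possible from a rigid-body translation; the model, the assumed known translation u, and all numbers below are illustrative assumptions, not the authors' exact algorithm.

        import numpy as np

        # 1-D toy model: with distortion x_d = x * (1 + K1 * x**2), a rigid
        # translation u gives measured displacements
        #   d(x) = (x + u) * (1 + K1 * (x + u)**2) - x * (1 + K1 * x**2)
        #        = u + K1 * ((x + u)**3 - x**3),
        # which is linear in K1, so K1 follows from ordinary least squares.
        rng = np.random.default_rng(1)
        k1_true, u = 8e-9, 20.0                       # hypothetical values (pixels)
        x = np.linspace(-1000.0, 1000.0, 201)         # subset locations along a line
        d_meas = u + k1_true * ((x + u)**3 - x**3) + rng.normal(0, 0.005, x.size)

        g = (x + u)**3 - x**3                         # regressor (translation u known)
        k1_est = g @ (d_meas - u) / (g @ g)
        print(f"estimated K1 = {k1_est:.3e} (true value {k1_true:.3e})")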

  4. Pressure Measurements Using an Airborne Differential Absorption Lidar. Part 1; Analysis of the Systematic Error Sources

    NASA Technical Reports Server (NTRS)

    Flamant, Cyrille N.; Schwemmer, Geary K.; Korb, C. Laurence; Evans, Keith D.; Palm, Stephen P.

    1999-01-01

    Remote airborne measurements of the vertical and horizontal structure of the atmospheric pressure field in the lower troposphere are made with an oxygen differential absorption lidar (DIAL). A detailed analysis of this measurement technique is provided which includes corrections for imprecise knowledge of the detector background level, the oxygen absorption line parameters, and variations in the laser output energy. In addition, we analyze other possible sources of systematic errors, including spectral effects related to aerosol and molecular scattering, interference by rotational Raman scattering, and interference by isotopic oxygen lines.

  5. An analysis of the least-squares problem for the DSN systematic pointing error model

    NASA Technical Reports Server (NTRS)

    Alvarez, L. S.

    1991-01-01

    A systematic pointing error model is used to calibrate antennas in the Deep Space Network. The least-squares problem is described and analyzed along with the solution methods used to determine the model's parameters. Specifically studied are the rank degeneracy problems resulting from beam pointing error measurement sets that incorporate inadequate sky coverage. A least-squares parameter subset selection method is described and its applicability to the systematic error modeling process is demonstrated on a Voyager 2 measurement distribution.
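
    A minimal sketch of how such rank degeneracy can be diagnosed, and how a column-subset selection might proceed, is given below; the pointing-model terms, the narrow sky coverage, and the use of column-pivoted QR are illustrative assumptions rather than the procedure used in the report.

        import numpy as np
        from scipy.linalg import qr

        # Hypothetical design matrix for a simple alt-az pointing model evaluated
        # on a measurement set with poor sky coverage (narrow az/el ranges).
        rng = np.random.default_rng(2)
        az = np.deg2rad(rng.uniform(140.0, 160.0, 40))
        el = np.deg2rad(rng.uniform(30.0, 40.0, 40))
        A = np.column_stack([np.ones_like(az), np.sin(az), np.cos(az),
                             np.sin(el), np.cos(el), np.sin(az) * np.sin(el)])

        s = np.linalg.svd(A, compute_uv=False)
        print("condition number:", s[0] / s[-1])       # large => near rank degeneracy

        # Column-pivoted QR as a simple subset-selection heuristic: keep the
        # best-conditioned columns and drop the rest before solving least squares.
        _, _, piv = qr(A, pivoting=True)
        print("model terms ordered by pivoting:", piv)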

  6. The systematic and random errors determination using realtime 3D surface tracking system in breast cancer

    NASA Astrophysics Data System (ADS)

    Kanphet, J.; Suriyapee, S.; Dumrongkijudom, N.; Sanghangthum, T.; Kumkhwao, J.; Wisetrintong, M.

    2016-03-01

    The purpose of this study was to determine the patient setup uncertainties in deep-inspiration breath-hold (DIBH) radiation therapy for left breast cancer patients using a real-time 3D surface tracking system. Six breast cancer patients treated with 6 MV photon beams from a TrueBeam linear accelerator were selected. The patient setup errors and motion during treatment were observed and calculated for interfraction and intrafraction motions. The systematic and random errors were calculated in the vertical, longitudinal, and lateral directions. From 180 images tracked before and during treatment, the maximum systematic errors of interfraction and intrafraction motion were 0.56 mm and 0.23 mm, and the maximum random errors of interfraction and intrafraction motion were 1.18 mm and 0.53 mm, respectively. Interfraction motion was more pronounced than intrafraction motion, while the systematic error had less impact than the random error. In conclusion, the intrafraction motion error from patient setup uncertainty is about half of the interfraction motion error, and it has less impact because of the stability of organ position under DIBH. The systematic error is likewise about half of the random error, because a modern linac can reduce systematic uncertainty effectively, whereas the random errors are uncontrollable.
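
    The abstract does not state how the group systematic and random errors were computed; a common convention (the spread of per-patient mean errors for the systematic component and the root mean square of per-patient standard deviations for the random component) is sketched below with hypothetical data.

        import numpy as np

        # errors[p, f]: setup error (mm) of patient p at fraction f (hypothetical).
        rng = np.random.default_rng(3)
        errors = rng.normal(loc=rng.normal(0.0, 0.5, (6, 1)), scale=1.0, size=(6, 20))

        patient_means = errors.mean(axis=1)
        patient_sds = errors.std(axis=1, ddof=1)

        group_systematic = patient_means.std(ddof=1)       # spread of per-patient means
        group_random = np.sqrt(np.mean(patient_sds**2))    # RMS of per-patient SDs
        print(f"systematic = {group_systematic:.2f} mm, random = {group_random:.2f} mm")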

  7. Investigating the epidemiology of medication errors and error-related adverse drug events (ADEs) in primary care, ambulatory care and home settings: a systematic review protocol

    PubMed Central

    Assiri, Ghadah Asaad; Grant, Liz; Aljadhey, Hisham; Sheikh, Aziz

    2016-01-01

    Introduction There is a need to better understand the epidemiology of medication errors and error-related adverse events in community care contexts. Methods and analysis We will systematically search the following databases: Cumulative Index to Nursing and Allied Health Literature (CINAHL), EMBASE, Eastern Mediterranean Regional Office of the WHO (EMRO), MEDLINE, PsycINFO and Web of Science. In addition, we will search Google Scholar and contact an international panel of experts to search for unpublished and in progress work. The searches will cover the time period January 1990–December 2015 and will yield data on the incidence or prevalence of and risk factors for medication errors and error-related adverse drug events in adults living in community settings (ie, primary care, ambulatory and home). Study quality will be assessed using the Critical Appraisal Skills Program quality assessment tool for cohort and case–control studies, and cross-sectional studies will be assessed using the Joanna Briggs Institute Critical Appraisal Checklist for Descriptive Studies. Meta-analyses will be undertaken using random-effects modelling using STATA (V.14) statistical software. Ethics and dissemination This protocol will be registered with PROSPERO, an international prospective register of systematic reviews, and the systematic review will be reported in the peer-reviewed literature using Preferred Reporting Items for Systematic Reviews and Meta-Analyses. PMID:27580826

  8. Multiscale Systematic Error Correction via Wavelet-Based Band Splitting and Bayesian Error Modeling in Kepler Light Curves

    NASA Astrophysics Data System (ADS)

    Stumpe, Martin C.; Smith, J. C.; Van Cleve, J.; Jenkins, J. M.; Barclay, T. S.; Fanelli, M. N.; Girouard, F.; Kolodziejczak, J.; McCauliff, S.; Morris, R. L.; Twicken, J. D.

    2012-05-01

    Kepler photometric data contain significant systematic and stochastic errors as they come from the Kepler spacecraft. The main causes of the systematic errors are changes in the photometer focus due to thermal changes in the instrument, and residual spacecraft pointing errors. It is the main purpose of the Presearch Data Conditioning (PDC) module of the Kepler science processing pipeline to remove these systematic errors from the light curves. While PDC has recently seen a dramatic performance improvement by means of a Bayesian approach to systematic error correction and improved discontinuity correction, there is still room for improvement. One problem of the current (Kepler 8.1) implementation of PDC is that injection of high-frequency noise can be observed in some light curves. Although this high-frequency noise does not negatively impact the general cotrending, an increased noise level can make detection of planet transits or other astrophysical signals more difficult. The origin of this noise injection is that high-frequency components of light curves sometimes get included in detrending basis vectors characterizing long-term trends. Similarly, small-scale features like edges can sometimes get included in basis vectors which otherwise describe low-frequency trends. As a side effect of removing the trends, detrending with these basis vectors can then also mistakenly introduce these small-scale features into the light curves. A solution to this problem is to perform a separation of scales, such that small-scale features and large-scale features are described by different basis vectors. We present our new multiscale approach that employs wavelet-based band splitting to decompose small-scale from large-scale features in the light curves. The PDC Bayesian detrending can then be performed on each band individually to correct small- and large-scale systematics independently. Funding for the Kepler Mission is provided by the NASA Science Mission Directorate.
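
    The band-splitting idea can be illustrated with an ordinary discrete wavelet transform; the sketch below (using PyWavelets on an invented light curve) is only a conceptual stand-in for the PDC implementation.

        import numpy as np
        import pywt

        # Hypothetical light curve: slow trend + small periodic dips + white noise.
        rng = np.random.default_rng(4)
        t = np.linspace(0.0, 90.0, 4096)
        flux = 1e4 + 30.0 * np.sin(2 * np.pi * t / 45.0) + rng.normal(0.0, 3.0, t.size)
        flux[t % 10.0 < 0.2] -= 20.0

        # Split the light curve into dyadic bands with a discrete wavelet transform;
        # each band could then be detrended separately before recombination.
        coeffs = pywt.wavedec(flux, 'db6', level=6)
        bands = []
        for i in range(len(coeffs)):
            keep = [c if j == i else np.zeros_like(c) for j, c in enumerate(coeffs)]
            bands.append(pywt.waverec(keep, 'db6')[:flux.size])

        # The bands sum back to the original light curve to numerical precision.
        print("max reconstruction error:", np.abs(sum(bands) - flux).max())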

  9. Analysis of possible systematic errors in the Oslo method

    SciTech Connect

    Larsen, A. C.; Guttormsen, M.; Buerger, A.; Goergen, A.; Nyhus, H. T.; Rekstad, J.; Siem, S.; Toft, H. K.; Tveten, G. M.; Wikan, K.; Krticka, M.; Betak, E.; Schiller, A.; Voinov, A. V.

    2011-03-15

    In this work, we have reviewed the Oslo method, which enables the simultaneous extraction of the level density and γ-ray transmission coefficient from a set of particle-γ coincidence data. Possible errors and uncertainties have been investigated. Typical data sets from various mass regions as well as simulated data have been tested against the assumptions behind the data analysis.

  10. Analysis of possible systematic errors in the Oslo method

    NASA Astrophysics Data System (ADS)

    Larsen, A. C.; Guttormsen, M.; Krtička, M.; Běták, E.; Bürger, A.; Görgen, A.; Nyhus, H. T.; Rekstad, J.; Schiller, A.; Siem, S.; Toft, H. K.; Tveten, G. M.; Voinov, A. V.; Wikan, K.

    2011-03-01

    In this work, we have reviewed the Oslo method, which enables the simultaneous extraction of the level density and γ-ray transmission coefficient from a set of particle-γ coincidence data. Possible errors and uncertainties have been investigated. Typical data sets from various mass regions as well as simulated data have been tested against the assumptions behind the data analysis.

  11. On causes of the origin of systematic errors in latitude determination with the Moscow PZT.

    NASA Astrophysics Data System (ADS)

    Volchkov, A. A.; Gutsalo, G. A.

    Peculiarities of eye response during visual measurements of star positions on photographic plates are considered. It is shown that variations of the plate background density can be a source of systematic errors during latitude determinations with a PZT.

  12. A concatenated coded modulation scheme for error control (addition 2)

    NASA Technical Reports Server (NTRS)

    Lin, Shu

    1988-01-01

    A concatenated coded modulation scheme for error control in data communications is described. The scheme is achieved by concatenating a Reed-Solomon outer code and a bandwidth efficient block inner code for M-ary PSK modulation. Error performance of the scheme is analyzed for an AWGN channel. It is shown that extremely high reliability can be attained by using a simple M-ary PSK modulation inner code and a relatively powerful Reed-Solomon outer code. Furthermore, if an inner code of high effective rate is used, the bandwidth expansion required by the scheme due to coding will be greatly reduced. The proposed scheme is particularly effective for high-speed satellite communications for large file transfer where high reliability is required. This paper also presents a simple method for constructing block codes for M-ary PSK modulation. Some short M-ary PSK codes with good minimum squared Euclidean distance are constructed. These codes have trellis structure and hence can be decoded with a soft-decision Viterbi decoding algorithm. Furthermore, some of these codes are phase invariant under multiples of 45 deg rotation.
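
    For context on the distance measure mentioned above, the sketch below computes the squared Euclidean distances of a plain unit-energy 8-PSK constellation; the block codes described in the paper are designed so that codewords are separated by a larger minimum squared Euclidean distance than the uncoded value shown here. The code is an illustration, not the paper's construction.

        import numpy as np

        # Unit-energy 8-PSK constellation; the squared Euclidean distance between
        # symbols k and m is |exp(2j*pi*k/8) - exp(2j*pi*m/8)|**2 = 2 - 2*cos(2*pi*(k-m)/8).
        M = 8
        symbols = np.exp(2j * np.pi * np.arange(M) / M)
        d2 = np.abs(symbols[:, None] - symbols[None, :])**2

        # Minimum nonzero squared distance of the uncoded constellation (~0.586);
        # codes for M-ary PSK aim to enlarge the minimum value between codewords.
        print("uncoded 8-PSK minimum squared Euclidean distance:", round(d2[d2 > 1e-12].min(), 4))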

  13. Difference image analysis: The interplay between the photometric scale factor and systematic photometric errors

    NASA Astrophysics Data System (ADS)

    Bramich, D. M.; Bachelet, E.; Alsubai, K. A.; Mislis, D.; Parley, N.

    2015-05-01

    Context. Understanding the source of systematic errors in photometry is essential for their calibration. Aims: We investigate how photometry performed on difference images can be influenced by errors in the photometric scale factor. Methods: We explore the equations for difference image analysis (DIA), and we derive an expression describing how errors in the difference flux, the photometric scale factor and the reference flux are propagated to the object photometry. Results: We find that the error in the photometric scale factor is important, and while a few studies have shown that it can be at a significant level, it is currently neglected by the vast majority of photometric surveys employing DIA. Conclusions: Minimising the error in the photometric scale factor, or compensating for it in a post-calibration model, is crucial for reducing the systematic errors in DIA photometry.
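
    A minimal sketch of the propagation is given below, assuming the commonly used DIA reconstruction in which the total flux is the reference flux plus the difference flux divided by the photometric scale factor p; this particular form and the numbers are assumptions for illustration.

        import numpy as np

        def dia_flux_and_error(f_ref, f_diff, p, sig_ref, sig_diff, sig_p):
            """Total flux and its uncertainty for f_tot = f_ref + f_diff / p,
            propagating independent errors in f_ref, f_diff and the scale factor p."""
            f_tot = f_ref + f_diff / p
            sig_tot = np.sqrt(sig_ref**2 + (sig_diff / p)**2 + (f_diff * sig_p / p**2)**2)
            return f_tot, sig_tot

        # Hypothetical numbers: a 1% error in p contributes noticeably when |f_diff| is large.
        print(dia_flux_and_error(f_ref=5.0e4, f_diff=-8.0e3, p=1.0,
                                 sig_ref=200.0, sig_diff=150.0, sig_p=0.01))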

  14. Strategies for minimizing the impact of systematic errors on land data assimilation

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Data assimilation concerns itself primarily with the impact of random stochastic errors on state estimation. However, the developers of land data assimilation systems are commonly faced with systematic errors arising from both the parameterization of a land surface model and the need to pre-process ...

  15. On systematic errors in spectral line parameters retrieved with the Voigt line profile

    NASA Astrophysics Data System (ADS)

    Kochanov, V. P.

    2012-08-01

    Systematic errors inherent in the Voigt line profile are analyzed. Molecular spectrum processing with the Voigt profile is shown to underestimate line intensities by 1-4%, with the errors in line positions being 0.0005 cm⁻¹ and the decrease in pressure broadening coefficients varying from 5% to 55%.
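
    For reference, the Voigt profile itself is usually evaluated through the Faddeeva function; a minimal sketch with hypothetical Gaussian and Lorentzian widths is shown below. The biases discussed in the abstract arise when this profile is fitted to lines whose true shapes deviate from it.

        import numpy as np
        from scipy.special import wofz

        def voigt(x, sigma, gamma):
            """Voigt profile: convolution of a Gaussian (standard deviation sigma)
            and a Lorentzian (half width gamma), via the Faddeeva function wofz."""
            z = (x + 1j * gamma) / (sigma * np.sqrt(2.0))
            return np.real(wofz(z)) / (sigma * np.sqrt(2.0 * np.pi))

        # The profile is area-normalized: summing over a wide grid gives close to 1
        # (a small part of the Lorentzian wings is cut off by the finite grid).
        x = np.linspace(-5.0, 5.0, 20001)                  # cm^-1, hypothetical grid
        area = np.sum(voigt(x, sigma=0.05, gamma=0.03)) * (x[1] - x[0])
        print("profile area:", round(area, 4))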

  16. Systematic error of lidar profiles caused by a polarization-dependent receiver transmission: quantification and error correction scheme.

    PubMed

    Mattis, Ina; Tesche, Matthias; Grein, Matthias; Freudenthaler, Volker; Müller, Detlef

    2009-05-10

    Signals of many types of aerosol lidars can be affected by a significant systematic error if depolarizing scatterers are present in the atmosphere. That error is caused by a polarization-dependent receiver transmission. In this contribution we present an estimation of the magnitude of this systematic error. We show that lidar signals can be biased by more than 20% if linearly polarized laser light is emitted, if both polarization components of the backscattered light are measured with a single detection channel, and if the receiver transmissions for these two polarization components differ by more than 50%. This signal bias increases with increasing ratio between the two transmission values (transmission ratio) or with the volume depolarization ratio of the scatterers. The resulting error of the particle backscatter coefficient increases with decreasing backscatter ratio. If the particle backscatter coefficients are to have an accuracy better than 5%, the transmission ratio has to be in the range between 0.85 and 1.15. We present a method to correct the measured signals for this bias. We demonstrate an experimental method for the determination of the transmission ratio. We use collocated measurements of a lidar system strongly affected by this signal bias and an unbiased reference system to verify the applicability of the correction scheme. The errors in the case of no correction are illustrated with example measurements of fresh Saharan dust. PMID:19424398
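
    A strongly simplified illustration of the bias mechanism is sketched below, assuming the total-power signal is proportional to T∥P∥ + T⊥P⊥ and normalizing by the equal-transmission case; this toy model and its numbers are assumptions and do not reproduce the paper's full formalism or correction scheme.

        import numpy as np

        def relative_signal_bias(trans_ratio, delta):
            """Simplified bias of a total-power channel with polarization-dependent
            receiver transmission: S_meas ~ T_par*P_par + T_perp*P_perp, normalized
            by the equal-transmission case.  trans_ratio = T_perp / T_par and
            delta = volume depolarization ratio = P_perp / P_par."""
            return (1.0 + trans_ratio * delta) / (1.0 + delta) - 1.0

        # Hypothetical transmission ratios at a fixed volume depolarization ratio.
        for ratio in (0.85, 1.15, 1.5, 2.0):
            print(f"T_perp/T_par = {ratio:4.2f}  ->  relative bias = {relative_signal_bias(ratio, 0.30):+.3f}")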

  17. Emitter location independent of systematic errors in direction finders

    NASA Astrophysics Data System (ADS)

    Mahapatra, P. R.

    1980-11-01

    A scheme is suggested for the passive location of a radio emitter using a mobile direction finder. The vehicle carrying the direction finder is made to maneuver such that the apparent direction of arrival is held constant. The resulting trajectory of the vehicle is a logarithmic spiral. The true direction of arrival can be obtained by monitoring the parameters of the spiral trajectory without using the value of the direction finder reading. Two specific algorithms to eliminate direction finder bias are presented and their sensitivity to random measurement errors is assessed.
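
    The geometric fact behind the scheme, that a constant apparent bearing produces an equiangular (logarithmic) spiral about the emitter, can be sketched as follows; the bearing value, ranges, and the recovery step are hypothetical illustrations rather than either of the paper's algorithms.

        import numpy as np

        # Equiangular (logarithmic) spiral about the emitter: if the angle alpha
        # between the vehicle velocity and the line of sight to the emitter is held
        # constant, the range obeys r(theta) = r0 * exp(theta / tan(alpha)), so the
        # constant bearing can be recovered from the radial growth rate alone.
        alpha = np.deg2rad(75.0)                      # constant apparent bearing (hypothetical)
        theta = np.linspace(0.0, 2.0 * np.pi, 200)    # polar angle of the vehicle about the emitter
        r = 10e3 * np.exp(theta / np.tan(alpha))      # range from emitter to vehicle (m)

        slope = np.polyfit(theta, np.log(r), 1)[0]    # slope = 1 / tan(alpha)
        print("recovered bearing (deg):", round(np.degrees(np.arctan(1.0 / slope)), 2))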

  18. Using ridge regression in systematic pointing error corrections

    NASA Technical Reports Server (NTRS)

    Guiar, C. N.

    1988-01-01

    A pointing error model is used in the antenna calibration process. Data from spacecraft or radio star observations are used to determine the parameters in the model. However, the regression variables are not truly independent, displaying a condition known as multicollinearity. Ridge regression, a biased estimation technique, is used to combat the multicollinearity problem. Two data sets pertaining to Voyager 1 spacecraft tracking (days 105 and 106 of 1987) were analyzed using both linear least squares and ridge regression methods. The advantages and limitations of employing the technique are presented. The problem is not yet fully resolved.
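
    A minimal sketch of the ridge estimator on a deliberately collinear design matrix is given below; the design matrix and regularization value are invented for illustration and are not the Voyager 1 data sets analyzed in the report.

        import numpy as np

        def ridge_fit(X, y, lam):
            """Closed-form ridge estimate: beta = (X^T X + lam * I)^(-1) X^T y."""
            return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)

        # Hypothetical, nearly collinear regressors (the pointing-model variables
        # are likewise not truly independent when sky coverage is limited).
        rng = np.random.default_rng(5)
        x1 = rng.normal(size=200)
        x2 = x1 + rng.normal(scale=0.01, size=200)     # almost a copy of x1
        X = np.column_stack([np.ones(200), x1, x2])
        y = 1.0 + 2.0 * x1 + 3.0 * x2 + rng.normal(scale=0.1, size=200)

        print("OLS  :", np.round(ridge_fit(X, y, 0.0), 2))   # unstable split between x1 and x2
        print("ridge:", np.round(ridge_fit(X, y, 1.0), 2))   # biased but stable coefficients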

  19. Second-order systematic errors in Mueller matrix dual rotating compensator ellipsometry.

    PubMed

    Broch, Laurent; En Naciri, Aotmane; Johann, Luc

    2010-06-10

    We investigate the systematic errors at the second order for a Mueller matrix ellipsometer in the dual rotating compensator configuration. Starting from a general formalism, we derive explicit second-order errors in the Mueller matrix coefficients of a given sample. We present the errors caused by the azimuthal inaccuracy of the optical components and their influence on the measurements. We demonstrate that methods based on four-zone or two-zone averaging measurements are effective in eliminating the errors due to the compensators. For the other elements, it is shown that the systematic errors at the second order can be canceled only for some coefficients of the Mueller matrix. The calibration step for the analyzer and the polarizer is developed. This important step is necessary to avoid azimuthal inaccuracy in these elements. Numerical simulations and experimental measurements are presented and discussed. PMID:20539341

  20. MAXIMUM LIKELIHOOD ANALYSIS OF SYSTEMATIC ERRORS IN INTERFEROMETRIC OBSERVATIONS OF THE COSMIC MICROWAVE BACKGROUND

    SciTech Connect

    Zhang Le; Timbie, Peter; Karakci, Ata; Korotkov, Andrei; Tucker, Gregory S.; Sutter, Paul M.; Wandelt, Benjamin D.; Bunn, Emory F.

    2013-06-01

    We investigate the impact of instrumental systematic errors in interferometric measurements of the cosmic microwave background (CMB) temperature and polarization power spectra. We simulate interferometric CMB observations to generate mock visibilities and estimate power spectra using the statistically optimal maximum likelihood technique. We define a quadratic error measure to determine allowable levels of systematic error that do not induce power spectrum errors beyond a given tolerance. As an example, in this study we focus on differential pointing errors. The effects of other systematics can be simulated by this pipeline in a straightforward manner. We find that, in order to accurately recover the underlying B-modes for r = 0.01 at 28 < l < 384, Gaussian-distributed pointing errors must be controlled to 0.7° root mean square for an interferometer with an antenna configuration similar to QUBIC, in agreement with analytical estimates. Only the statistical uncertainty for 28 < l < 88 would be changed at the ~10% level. With the same instrumental configuration, we find that the pointing errors would slightly bias the 2σ upper limit of the tensor-to-scalar ratio r by ~10%. We also show that the impact of pointing errors on the TB and EB measurements is negligibly small.

  1. Drug treatment of inborn errors of metabolism: a systematic review

    PubMed Central

    Alfadhel, Majid; Al-Thihli, Khalid; Moubayed, Hiba; Eyaid, Wafaa; Al-Jeraisy, Majed

    2013-01-01

    Background The treatment of inborn errors of metabolism (IEM) has seen significant advances over the last decade. Many medicines have been developed and the survival rates of some patients with IEM have improved. Dosages of drugs used for the treatment of various IEM can be obtained from a range of sources but tend to vary among these sources. Moreover, the published dosages are not usually supported by the level of existing evidence, and they are commonly based on personal experience. Methods A literature search was conducted to identify key material published in English in relation to the dosages of medicines used for specific IEM. Textbooks, peer-reviewed articles, papers and other journal items were identified. The PubMed and Embase databases were searched for material published since 1947 and 1974, respectively. The medications found and their respective dosages were graded according to their level of evidence, using the grading system of the Oxford Centre for Evidence-Based Medicine. Results 83 medicines used in various IEM were identified. The dosages of 17 medications (21%) had grade 1 level of evidence, 61 (74%) had grade 4, two medications were at grades 2 and 3, respectively, and three had grade 5. Conclusions To the best of our knowledge, this is the first review to address this matter and the authors hope that it will serve as a quickly accessible reference for medications used in this important clinical field. PMID:23532493

  2. A study for systematic errors of the GLA forecast model in tropical regions

    NASA Technical Reports Server (NTRS)

    Chen, Tsing-Chang; Baker, Wayman E.; Pfaendtner, James; Corrigan, Martin

    1988-01-01

    From the sensitivity studies performed with the Goddard Laboratory for Atmospheres (GLA) analysis/forecast system, it was revealed that the forecast errors in the tropics affect the ability to forecast midlatitude weather in some cases. Apparently, the forecast errors occurring in the tropics can propagate to midlatitudes. Therefore, the systematic error analysis of the GLA forecast system becomes a necessary step in improving the model's forecast performance. The major effort of this study is to examine the possible impact of the hydrological-cycle forecast error on dynamical fields in the GLA forecast system.

  3. Reducing Systematic Centroid Errors Induced by Fiber Optic Faceplates in Intensified High-Accuracy Star Trackers

    PubMed Central

    Xiong, Kun; Jiang, Jie

    2015-01-01

    Compared with traditional star trackers, intensified high-accuracy star trackers equipped with an image intensifier exhibit overwhelmingly superior dynamic performance. However, the multiple-fiber-optic-faceplate structure in the image intensifier complicates the optoelectronic detecting system of star trackers and may cause considerable systematic centroid errors and poor attitude accuracy. All the sources of systematic centroid error related to fiber optic faceplates (FOFPs) throughout the detection process of the optoelectronic system were analyzed. Based on the general expression of the systematic centroid error deduced in the frequency domain and on the FOFP modulation transfer function, an accurate expression that describes the systematic centroid error of FOFPs was obtained. Furthermore, reduction of the systematic error between the optical lens and the input FOFP of the intensifier, of the error among the multiple FOFPs, and of the error between the output FOFP of the intensifier and the imaging chip of the detecting system was discussed. Two important parametric constraints were acquired from the analysis. The correctness of the analysis of the optoelectronic detecting system was demonstrated through simulation and experiment. PMID:26016920

  4. Reducing systematic centroid errors induced by fiber optic faceplates in intensified high-accuracy star trackers.

    PubMed

    Xiong, Kun; Jiang, Jie

    2015-01-01

    Compared with traditional star trackers, intensified high-accuracy star trackers equipped with an image intensifier exhibit overwhelmingly superior dynamic performance. However, the multiple-fiber-optic-faceplate structure in the image intensifier complicates the optoelectronic detecting system of star trackers and may cause considerable systematic centroid errors and poor attitude accuracy. All the sources of systematic centroid error related to fiber optic faceplates (FOFPs) throughout the detection process of the optoelectronic system were analyzed. Based on the general expression of the systematic centroid error deduced in the frequency domain and on the FOFP modulation transfer function, an accurate expression that describes the systematic centroid error of FOFPs was obtained. Furthermore, reduction of the systematic error between the optical lens and the input FOFP of the intensifier, of the error among the multiple FOFPs, and of the error between the output FOFP of the intensifier and the imaging chip of the detecting system was discussed. Two important parametric constraints were acquired from the analysis. The correctness of the analysis of the optoelectronic detecting system was demonstrated through simulation and experiment. PMID:26016920

  5. Impact of radar systematic error on the orthogonal frequency division multiplexing chirp waveform orthogonality

    NASA Astrophysics Data System (ADS)

    Wang, Jie; Liang, Xingdong; Chen, Longyong; Ding, Chibiao

    2015-01-01

    Orthogonal frequency division multiplexing (OFDM) chirp waveform, which is composed of two successive identical linear frequency modulated subpulses, is a newly proposed orthogonal waveform scheme for multiple-input multiple-output synthetic aperture radar (SAR) systems. However, according to the waveform model, radar systematic error, which introduces phase or amplitude differences between the subpulses of the OFDM waveform, significantly degrades the orthogonality. The impact of radar systematic error on the waveform orthogonality is mainly caused by the systematic nonlinearity rather than by the thermal noise or the frequency-dependent systematic error. Due to the influence of the causal filters, the first subpulse leaks into the second one. The leaked signal interacts with the second subpulse in the nonlinear components of the transmitter. This interaction produces a dramatic phase distortion at the beginning of the second subpulse. The resulting distortion, which leads to a phase difference between the subpulses, seriously damages the waveform's orthogonality. The impact of radar systematic error on the waveform orthogonality is addressed. Moreover, the impact of the systematic nonlinearity on the waveform is avoided by adding a standby interval between the subpulses. The theoretical analysis is validated by practical experiments based on a C-band SAR system.

  6. SYSTEMATIC ERROR REDUCTION: NON-TILTED REFERENCE BEAM METHOD FOR LONG TRACE PROFILER.

    SciTech Connect

    QIAN,S.; QIAN, K.; HONG, Y.; SENG, L.; HO, T.; TAKACS, P.

    2007-08-25

    Systematic error in the Long Trace Profiler (LTP) has become the major error source as measurement accuracy enters the nanoradian and nanometer regime. Great efforts have been made to reduce the systematic error at a number of synchrotron radiation laboratories around the world. Generally, the LTP reference beam has to be tilted away from the optical axis in order to avoid fringe overlap between the sample and reference beams. However, a tilted reference beam will result in considerable systematic error due to optical system imperfections, which is difficult to correct. Six methods of implementing a non-tilted reference beam in the LTP are introduced: (1) application of an external precision angle device to measure and remove slide pitch error without a reference beam, (2) an independent slide pitch test by use of a non-tilted reference beam, (3) a non-tilted reference test combined with a tilted sample, (4) a penta-prism scanning mode without a reference beam correction, (5) a non-tilted reference using a second optical head, and (6) alternate switching of data acquisition between the sample and reference beams. With a non-tilted reference method, the measurement accuracy can be improved significantly. Some measurement results are presented. Systematic error in the sample beam arm is not addressed in this paper and should be treated separately.

  7. Sampling of systematic errors to estimate likelihood weights in nuclear data uncertainty propagation

    NASA Astrophysics Data System (ADS)

    Helgesson, P.; Sjöstrand, H.; Koning, A. J.; Rydén, J.; Rochman, D.; Alhassan, E.; Pomp, S.

    2016-01-01

    In methodologies for nuclear data (ND) uncertainty assessment and propagation based on random sampling, likelihood weights can be used to infer experimental information into the distributions for the ND. As the included number of correlated experimental points grows large, the computational time for the matrix inversion involved in obtaining the likelihood can become a practical problem. There are also other problems related to the conventional computation of the likelihood, e.g., the assumption that all experimental uncertainties are Gaussian. In this study, a way to estimate the likelihood which avoids matrix inversion is investigated; instead, the experimental correlations are included by sampling of systematic errors. It is shown that the model underlying the sampling methodology (using univariate normal distributions for random and systematic errors) implies a multivariate Gaussian for the experimental points (i.e., the conventional model). It is also shown that the likelihood estimates obtained through sampling of systematic errors approach the likelihood obtained with matrix inversion as the sample size for the systematic errors grows large. In studied practical cases, it is seen that the estimates for the likelihood weights converge impractically slowly with the sample size, compared to matrix inversion. The computational time is estimated to be greater than for matrix inversion in cases with more experimental points, too. Hence, the sampling of systematic errors has little potential to compete with matrix inversion in cases where the latter is applicable. Nevertheless, the underlying model and the likelihood estimates can be easier to intuitively interpret than the conventional model and the likelihood function involving the inverted covariance matrix. Therefore, this work can both have pedagogical value and be used to help motivating the conventional assumption of a multivariate Gaussian for experimental data. The sampling of systematic errors could also
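
    The equivalence described above can be sketched with a single fully correlated systematic component: the Monte Carlo average of the likelihood over sampled systematic shifts approaches the multivariate Gaussian likelihood with covariance σ_r²I + σ_s²J as the sample size grows. The data, predictions, and error sizes below are hypothetical.

        import numpy as np
        from scipy.stats import multivariate_normal, norm

        rng = np.random.default_rng(6)
        sig_r, sig_s = 0.10, 0.05                        # random and systematic std. devs.
        pred = np.array([1.00, 1.20, 0.90, 1.10, 1.05])  # model prediction (hypothetical)
        data = pred + 0.03 + rng.normal(0.0, sig_r, pred.size)  # data sharing a common offset

        # (a) Likelihood estimated by sampling the shared systematic shift.
        shifts = rng.normal(0.0, sig_s, size=20000)
        per_sample = norm.pdf(data[None, :], loc=pred[None, :] + shifts[:, None], scale=sig_r)
        lik_sampled = per_sample.prod(axis=1).mean()

        # (b) Equivalent multivariate Gaussian with covariance sig_r^2*I + sig_s^2*J.
        cov = sig_r**2 * np.eye(pred.size) + sig_s**2 * np.ones((pred.size, pred.size))
        lik_exact = multivariate_normal.pdf(data, mean=pred, cov=cov)

        print(lik_sampled, lik_exact)                    # agree as the sample size grows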

  8. Design of roundness measurement model with multi-systematic error for cylindrical components with large radius

    NASA Astrophysics Data System (ADS)

    Sun, Chuanzhi; Wang, Lei; Tan, Jiubin; Zhao, Bo; Tang, Yangchao

    2016-02-01

    The paper designs a roundness measurement model with multi-systematic error, which takes eccentricity, probe offset, radius of tip head of probe, and tilt error into account for roundness measurement of cylindrical components. The effects of the systematic errors and radius of components are analysed in the roundness measurement. The proposed method is built on the instrument with a high precision rotating spindle. The effectiveness of the proposed method is verified by experiment with the standard cylindrical component, which is measured on a roundness measuring machine. Compared to the traditional limacon measurement model, the accuracy of roundness measurement can be increased by about 2.2 μm using the proposed roundness measurement model for the object with a large radius of around 37 mm. The proposed method can improve the accuracy of roundness measurement and can be used for error separation, calibration, and comparison, especially for cylindrical components with a large radius.

  9. Design of roundness measurement model with multi-systematic error for cylindrical components with large radius.

    PubMed

    Sun, Chuanzhi; Wang, Lei; Tan, Jiubin; Zhao, Bo; Tang, Yangchao

    2016-02-01

    The paper designs a roundness measurement model with multi-systematic error, which takes eccentricity, probe offset, radius of tip head of probe, and tilt error into account for roundness measurement of cylindrical components. The effects of the systematic errors and radius of components are analysed in the roundness measurement. The proposed method is built on the instrument with a high precision rotating spindle. The effectiveness of the proposed method is verified by experiment with the standard cylindrical component, which is measured on a roundness measuring machine. Compared to the traditional limacon measurement model, the accuracy of roundness measurement can be increased by about 2.2 μm using the proposed roundness measurement model for the object with a large radius of around 37 mm. The proposed method can improve the accuracy of roundness measurement and can be used for error separation, calibration, and comparison, especially for cylindrical components with a large radius. PMID:26931894

  10. Accuracy of image-plane holographic tomography with filtered backprojection: random and systematic errors.

    PubMed

    Belashov, A V; Petrov, N V; Semenova, I V

    2016-01-01

    This paper explores the concept of image-plane holographic tomography applied to the measurements of laser-induced thermal gradients in an aqueous solution of a photosensitizer with respect to the reconstruction accuracy of three-dimensional variations of the refractive index. It uses the least-squares estimation algorithm to reconstruct refractive index variations in each holographic projection. Along with the bitelecentric optical system, transferring focused projection to the sensor plane, it facilitates the elimination of diffraction artifacts and noise suppression. This work estimates the influence of typical random and systematic errors in experiments and concludes that random errors such as accidental measurement errors or noise presence can be significantly suppressed by increasing the number of recorded digital holograms. On the contrary, even comparatively small systematic errors such as a displacement of the rotation axis projection in the course of a reconstruction procedure can significantly distort the results. PMID:26835625

  11. Mechanical temporal fluctuation induced distance and force systematic errors in Casimir force experiments

    NASA Astrophysics Data System (ADS)

    Lamoreaux, Steve; Wong, Douglas

    2015-06-01

    The basic theory of temporal mechanical fluctuation induced systematic errors in Casimir force experiments is developed and applications of this theory to several experiments is reviewed. This class of systematic error enters in a manner similar to the usual surface roughness correction, but unlike the treatment of surface roughness for which an exact result requires an electromagnetic mode analysis, time dependent fluctuations can be treated exactly, assuming the fluctuation times are much longer than the zero point and thermal fluctuation correlation times of the electromagnetic field between the plates. An experimental method for measuring absolute distance with high bandwidth is also described and measurement data presented.

  12. Mechanical temporal fluctuation induced distance and force systematic errors in Casimir force experiments.

    PubMed

    Lamoreaux, Steve; Wong, Douglas

    2015-06-01

    The basic theory of temporal mechanical fluctuation induced systematic errors in Casimir force experiments is developed and applications of this theory to several experiments is reviewed. This class of systematic error enters in a manner similar to the usual surface roughness correction, but unlike the treatment of surface roughness for which an exact result requires an electromagnetic mode analysis, time dependent fluctuations can be treated exactly, assuming the fluctuation times are much longer than the zero point and thermal fluctuation correlation times of the electromagnetic field between the plates. An experimental method for measuring absolute distance with high bandwidth is also described and measurement data presented. PMID:25965319

  13. Backtracing particle rays through magnetic spectrometers: avoiding systematic errors in the reconstruction of target coordinates

    NASA Astrophysics Data System (ADS)

    Veit, Th.; Friedrich, J.; Offermann, E. A. J. M.

    1993-12-01

    The procedures used to model [J. Friedrich, Nucl. Instr. and Meth. A 293 (1990) 575] or to determine [N. Voegler et al., Nucl. Instr. and Meth. A 249 (1986) 337, H. Blok et al., ibid., vol. A 262 (1987) 291, and E.A.J.M. Offermann et al., ibid., vol. A 262 (1987) 298] the mapping properties of a magnetic spectrometer are based on a minimization of the variance of target coordinates. We show that backtracing with matrix elements, determined in this way, may contain systematic errors. As alternative, we propose to minimize the variance of the detector coordinates. This procedure avoids these systematic errors.

  14. The Origin of Systematic Errors in the GCM Simulation of ITCZ Precipitation

    NASA Technical Reports Server (NTRS)

    Chao, Winston C.; Suarez, M. J.; Bacmeister, J. T.; Chen, B.; Takacs, L. L.

    2006-01-01

    Previous GCM studies have found that the systematic errors in the GCM simulation of the seasonal mean ITCZ intensity and location could be substantially corrected by adding a suitable amount of rain re-evaporation or cumulus momentum transport. However, the reasons for these systematic errors and for these solutions have remained a puzzle. In this work, the knowledge gained from previous studies of the ITCZ in an aqua-planet model with zonally uniform SST is applied to solve this puzzle. The solution is supported by further aqua-planet and full-model experiments using the latest version of the Goddard Earth Observing System GCM.

  15. ac driving amplitude dependent systematic error in scanning Kelvin probe microscope measurements: Detection and correction

    SciTech Connect

    Wu Yan; Shannon, Mark A.

    2006-04-15

    The dependence of the contact potential difference (CPD) reading on the ac driving amplitude in scanning Kelvin probe microscope (SKPM) measurements hinders researchers from quantifying true material properties. We show theoretically and demonstrate experimentally that an ac driving amplitude dependence in the SKPM measurement can come from a systematic error, and that it is common to all tip-sample systems as long as there is a nonzero tracking error in the feedback control loop of the instrument. We further propose a methodology to detect and to correct the ac driving amplitude dependent systematic error in SKPM measurements. The true contact potential difference can be found by applying a linear regression to the measured CPD versus the inverse of the ac driving amplitude. Two scenarios are studied: (a) when the surface being scanned by SKPM is not semiconducting and there is an ac driving amplitude dependent systematic error; (b) when a semiconductor surface is probed and asymmetric band bending occurs when the systematic error is present. Experiments are conducted using a commercial SKPM, and CPD measurement results for two systems, platinum-iridium/gap/gold and platinum-iridium/gap/thermal oxide/silicon, are discussed.
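
    A minimal sketch of the proposed extrapolation is shown below; the measured CPD values and the assumed linear-in-1/V_ac form are invented for illustration.

        import numpy as np

        # Hypothetical SKPM readings following CPD_meas ~ CPD_true + k / V_ac
        # (an assumed linear-in-1/V_ac form, used here only for illustration).
        v_ac = np.array([0.5, 1.0, 2.0, 3.0, 5.0, 8.0])             # driving amplitudes (V)
        cpd = np.array([0.620, 0.510, 0.455, 0.437, 0.421, 0.414])  # measured CPD (V)

        slope, intercept = np.polyfit(1.0 / v_ac, cpd, 1)
        print(f"extrapolated true CPD ~ {intercept:.3f} V  (slope {slope:.3f})")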

  16. Estimation of radiation risk in presence of classical additive and Berkson multiplicative errors in exposure doses.

    PubMed

    Masiuk, S V; Shklyar, S V; Kukush, A G; Carroll, R J; Kovgan, L N; Likhtarov, I A

    2016-07-01

    In this paper, the influence of measurement errors in exposure doses in a regression model with binary response is studied. Recently, it has been recognized that uncertainty in exposure dose is characterized by errors of two types: classical additive errors and Berkson multiplicative errors. The combination of classical additive and Berkson multiplicative errors has not been considered in the literature previously. In a simulation study based on data from radio-epidemiological research on thyroid cancer in Ukraine caused by the Chornobyl accident, it is shown that ignoring measurement errors in doses leads to overestimation of background prevalence and underestimation of excess relative risk. In this work, several methods to reduce these biases are proposed: a new regression calibration, an additive version of efficient SIMEX, and novel corrected score methods. PMID:26795191
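
    For orientation, the two error types can be written generatively as follows; this sketch only illustrates the standard definitions (classical additive: measured = true + independent error; Berkson multiplicative: true = assigned × independent multiplicative error) with invented numbers, and is not the paper's risk model or correction methods.

        import numpy as np

        rng = np.random.default_rng(7)
        n = 10000
        true_dose = rng.lognormal(mean=-1.0, sigma=0.8, size=n)   # hypothetical doses (Gy)

        # Classical additive error: the measurement scatters around the true dose.
        measured = true_dose + rng.normal(0.0, 0.1, size=n)

        # Berkson multiplicative error: the true dose scatters multiplicatively
        # around the dose assigned to the subject (here, a crude grouping).
        assigned = np.round(true_dose, 1)
        true_given_assigned = assigned * rng.lognormal(mean=0.0, sigma=0.3, size=n)

        print("classical: corr(true, measured) =", round(np.corrcoef(true_dose, measured)[0, 1], 3))
        print("Berkson:   corr(assigned, true) =", round(np.corrcoef(assigned, true_given_assigned)[0, 1], 3))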

  17. SYSTEMATIC CONTINUUM ERRORS IN THE Lyα FOREST AND THE MEASURED TEMPERATURE-DENSITY RELATION

    SciTech Connect

    Lee, Khee-Gan

    2012-07-10

    Continuum fitting uncertainties are a major source of error in estimates of the temperature-density relation (usually parameterized as a power law, T ∝ Δ^(γ-1)) of the intergalactic medium through the flux probability distribution function (PDF) of the Lyα forest. Using a simple order-of-magnitude calculation, we show that few-percent-level systematic errors in the placement of the quasar continuum due to, e.g., a uniform low-absorption Gunn-Peterson component could lead to errors in γ of the order of unity. This is quantified further using a simple semi-analytic model of the Lyα forest flux PDF. We find that under(over)estimates in the continuum level can lead to a lower (higher) measured value of γ. By fitting models to mock data realizations generated with current observational errors, we find that continuum errors can cause a systematic bias in the estimated temperature-density relation of δ(γ) ≈ -0.1, while the error is increased to σ_γ ≈ 0.2 compared to σ_γ ≈ 0.1 in the absence of continuum errors.

  18. GREAT3 results - I. Systematic errors in shear estimation and the impact of real galaxy morphology

    NASA Astrophysics Data System (ADS)

    Mandelbaum, Rachel; Rowe, Barnaby; Armstrong, Robert; Bard, Deborah; Bertin, Emmanuel; Bosch, James; Boutigny, Dominique; Courbin, Frederic; Dawson, William A.; Donnarumma, Annamaria; Fenech Conti, Ian; Gavazzi, Raphaël; Gentile, Marc; Gill, Mandeep S. S.; Hogg, David W.; Huff, Eric M.; Jee, M. James; Kacprzak, Tomasz; Kilbinger, Martin; Kuntzer, Thibault; Lang, Dustin; Luo, Wentao; March, Marisa C.; Marshall, Philip J.; Meyers, Joshua E.; Miller, Lance; Miyatake, Hironao; Nakajima, Reiko; Ngolé Mboula, Fred Maurice; Nurbaeva, Guldariya; Okura, Yuki; Paulin-Henriksson, Stéphane; Rhodes, Jason; Schneider, Michael D.; Shan, Huanyuan; Sheldon, Erin S.; Simet, Melanie; Starck, Jean-Luc; Sureau, Florent; Tewes, Malte; Zarb Adami, Kristian; Zhang, Jun; Zuntz, Joe

    2015-07-01

    We present first results from the third GRavitational lEnsing Accuracy Testing (GREAT3) challenge, the third in a sequence of challenges for testing methods of inferring weak gravitational lensing shear distortions from simulated galaxy images. GREAT3 was divided into experiments to test three specific questions, and included simulated space- and ground-based data with constant or cosmologically varying shear fields. The simplest (control) experiment included parametric galaxies with a realistic distribution of signal-to-noise, size, and ellipticity, and a complex point spread function (PSF). The other experiments tested the additional impact of realistic galaxy morphology, multiple exposure imaging, and the uncertainty about a spatially varying PSF; the last two questions will be explored in Paper II. The 24 participating teams competed to estimate lensing shears to within systematic error tolerances for upcoming Stage-IV dark energy surveys, making 1525 submissions overall. GREAT3 saw considerable variety and innovation in the types of methods applied. Several teams now meet or exceed the targets in many of the tests conducted (to within the statistical errors). We conclude that the presence of realistic galaxy morphology in simulations changes shear calibration biases by ˜1 per cent for a wide range of methods. Other effects such as truncation biases due to finite galaxy postage stamps, and the impact of galaxy type as measured by the Sérsic index, are quantified for the first time. Our results generalize previous studies regarding sensitivities to galaxy size and signal-to-noise, and to PSF properties such as seeing and defocus. Almost all methods' results support the simple model in which additive shear biases depend linearly on PSF ellipticity.
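
    The linear bias model referred to in the conclusion is commonly written as g_obs = (1 + m) g_true + c, with the additive term proportional to the PSF ellipticity; the sketch below fits the multiplicative bias and the PSF-ellipticity coefficient from simulated values, with all numbers invented for illustration.

        import numpy as np

        rng = np.random.default_rng(8)
        g_true = rng.uniform(-0.05, 0.05, 500)          # input shears
        e_psf = rng.uniform(-0.05, 0.05, 500)           # PSF ellipticities
        m_in, alpha_in = 0.01, 0.02                     # hypothetical biases

        g_obs = (1.0 + m_in) * g_true + alpha_in * e_psf + rng.normal(0.0, 1e-4, 500)

        # Fit g_obs = (1 + m) * g_true + alpha * e_psf + c0 by linear least squares.
        A = np.column_stack([g_true, e_psf, np.ones_like(g_true)])
        coef = np.linalg.lstsq(A, g_obs, rcond=None)[0]
        m_hat, alpha_hat = coef[0] - 1.0, coef[1]
        print(f"multiplicative bias m = {m_hat:.4f}, additive bias per unit PSF ellipticity = {alpha_hat:.4f}")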

  19. GREAT3 results - I. Systematic errors in shear estimation and the impact of real galaxy morphology

    SciTech Connect

    Mandelbaum, Rachel; Rowe, Barnaby; Armstrong, Robert; Bard, Deborah; Bertin, Emmanuel; Bosch, James; Boutigny, Dominique; Courbin, Frederic; Dawson, William A.; Donnarumma, Annamaria; Fenech Conti, Ian; Gavazzi, Raphael; Gentile, Marc; Gill, Mandeep S. S.; Hogg, David W.; Huff, Eric M.; Jee, M. James; Kacprzak, Tomasz; Kilbinger, Martin; Kuntzer, Thibault; Lang, Dustin; Luo, Wentao; March, Marisa C.; Marshall, Philip J.; Meyers, Joshua E.; Miller, Lance; Miyatake, Hironao; Nakajima, Reiko; Ngole Mboula, Fred Maurice; Nurbaeva, Guldariya; Okura, Yuki; Paulin-Henriksson, Stephane; Rhodes, Jason; Schneider, Michael D.; Shan, Huanyuan; Sheldon, Erin S.; Simet, Melanie; Starck, Jean -Luc; Sureau, Florent; Tewes, Malte; Zarb Adami, Kristian; Zhang, Jun; Zuntz, Joe

    2015-05-11

    The study presents first results from the third GRavitational lEnsing Accuracy Testing (GREAT3) challenge, the third in a sequence of challenges for testing methods of inferring weak gravitational lensing shear distortions from simulated galaxy images. GREAT3 was divided into experiments to test three specific questions, and included simulated space- and ground-based data with constant or cosmologically varying shear fields. The simplest (control) experiment included parametric galaxies with a realistic distribution of signal-to-noise, size, and ellipticity, and a complex point spread function (PSF). The other experiments tested the additional impact of realistic galaxy morphology, multiple exposure imaging, and the uncertainty about a spatially varying PSF; the last two questions will be explored in Paper II. The 24 participating teams competed to estimate lensing shears to within systematic error tolerances for upcoming Stage-IV dark energy surveys, making 1525 submissions overall. GREAT3 saw considerable variety and innovation in the types of methods applied. Several teams now meet or exceed the targets in many of the tests conducted (to within the statistical errors). We conclude that the presence of realistic galaxy morphology in simulations changes shear calibration biases by ~1 per cent for a wide range of methods. Other effects such as truncation biases due to finite galaxy postage stamps, and the impact of galaxy type as measured by the Sérsic index, are quantified for the first time. Our results generalize previous studies regarding sensitivities to galaxy size and signal-to-noise, and to PSF properties such as seeing and defocus. Almost all methods' results support the simple model in which additive shear biases depend linearly on PSF ellipticity.

  20. GREAT3 results - I. Systematic errors in shear estimation and the impact of real galaxy morphology

    DOE PAGES Beta

    Mandelbaum, Rachel; Rowe, Barnaby; Armstrong, Robert; Bard, Deborah; Bertin, Emmanuel; Bosch, James; Boutigny, Dominique; Courbin, Frederic; Dawson, William A.; Donnarumma, Annamaria; et al

    2015-05-11

    The study presents first results from the third GRavitational lEnsing Accuracy Testing (GREAT3) challenge, the third in a sequence of challenges for testing methods of inferring weak gravitational lensing shear distortions from simulated galaxy images. GREAT3 was divided into experiments to test three specific questions, and included simulated space- and ground-based data with constant or cosmologically varying shear fields. The simplest (control) experiment included parametric galaxies with a realistic distribution of signal-to-noise, size, and ellipticity, and a complex point spread function (PSF). The other experiments tested the additional impact of realistic galaxy morphology, multiple exposure imaging, and the uncertainty about a spatially varying PSF; the last two questions will be explored in Paper II. The 24 participating teams competed to estimate lensing shears to within systematic error tolerances for upcoming Stage-IV dark energy surveys, making 1525 submissions overall. GREAT3 saw considerable variety and innovation in the types of methods applied. Several teams now meet or exceed the targets in many of the tests conducted (to within the statistical errors). We conclude that the presence of realistic galaxy morphology in simulations changes shear calibration biases by ~1 per cent for a wide range of methods. Other effects such as truncation biases due to finite galaxy postage stamps, and the impact of galaxy type as measured by the Sérsic index, are quantified for the first time. Our results generalize previous studies regarding sensitivities to galaxy size and signal-to-noise, and to PSF properties such as seeing and defocus. Almost all methods' results support the simple model in which additive shear biases depend linearly on PSF ellipticity.

  1. Minimizing systematic errors in phytoplankton pigment concentration derived from satellite ocean color measurements

    SciTech Connect

    Martin, D.L.

    1992-01-01

    Water-leaving radiances and phytoplankton pigment concentrations are calculated from Coastal Zone Color Scanner (CZCS) total radiance measurements by separating atmospheric Rayleigh and aerosol radiances from the total radiance signal measured at the satellite. Multiple scattering interactions between Rayleigh and aerosol components together with other meteorologically-moderated radiances cause systematic errors in calculated water-leaving radiances and produce errors in retrieved phytoplankton pigment concentrations. This thesis developed techniques which minimize the effects of these systematic errors in Level IIA CZCS imagery. Results of previous radiative transfer modeling by Gordon and Castano are extended to predict the pixel-specific magnitude of systematic errors caused by Rayleigh-aerosol multiple scattering interactions. CZCS orbital passes in which the ocean is viewed through a modeled, physically realistic atmosphere are simulated mathematically and radiance-retrieval errors are calculated for a range of aerosol optical depths. Pixels which exceed an error threshold in the simulated CZCS image are rejected in a corresponding actual image. Meteorological phenomena also cause artifactual errors in CZCS-derived phytoplankton pigment concentration imagery. Unless data contaminated with these effects are masked and excluded from analysis, they will be interpreted as containing valid biological information and will contribute significantly to erroneous estimates of phytoplankton temporal and spatial variability. A method is developed which minimizes these errors through a sequence of quality-control procedures including the calculation of variable cloud-threshold radiances, the computation of the extent of electronic overshoot from bright reflectors, and the imposition of a buffer zone around clouds to exclude contaminated data.

  2. Random and systematic beam modulator errors in dynamic intensity modulated radiotherapy

    NASA Astrophysics Data System (ADS)

    Parsai, Homayon; Cho, Paul S.; Phillips, Mark H.; Giansiracusa, Robert S.; Axen, David

    2003-05-01

    This paper reports on the dosimetric effects of random and systematic modulator errors in the delivery of dynamic intensity-modulated beams. A sliding-window type delivery that utilizes a combination of multileaf collimators (MLCs) and backup diaphragms was examined. Gaussian functions with standard deviations ranging from 0.5 to 1.5 mm were used to simulate random positioning errors. A clinical example involving a clival meningioma was chosen, with the optic chiasm and brain stem as limiting critical structures in the vicinity of the tumour. Dose calculations for different modulator fluctuations were performed, and a quantitative analysis was carried out based on cumulative and differential dose volume histograms for the gross target volume and surrounding critical structures. The study indicated that random modulator errors have a strong tendency to reduce minimum target dose and homogeneity. Furthermore, it was shown that random perturbation of both MLCs and backup diaphragms on the order of σ = 1 mm can lead to 5% errors in prescribed dose. In comparison, when MLCs or backup diaphragms alone were perturbed, the system was more robust and modulator errors of at least σ = 1.5 mm were required to cause dose discrepancies greater than 5%. For systematic perturbation, even errors on the order of ±0.5 mm were shown to result in significant dosimetric deviations.

  3. Systematic errors in conductimetric instrumentation due to bubble adhesions on the electrodes: An experimental assessment

    NASA Astrophysics Data System (ADS)

    Neelakantaswamy, P. S.; Rajaratnam, A.; Kisdnasamy, S.; Das, N. P.

    1985-02-01

    Systematic errors in conductimetric measurements are often encountered due to partial screening of interelectrode current paths resulting from adhesion of bubbles on the electrode surfaces of the cell. A method of assessing this error quantitatively by a simulated electrolytic tank technique is proposed here. The experimental setup simulates the bubble-curtain effect in the electrolytic tank by means of a pair of electrodes partially covered by a monolayer of small polystyrene-foam spheres representing the bubble adhesions. By varying the number of spheres stuck on the electrode surface, the fractional area covered by the bubbles is controlled; and by measuring the interelectrode impedance, the systematic error is determined as a function of the fractional area covered by the simulated bubbles. A theoretical model which depicts the interelectrode resistance and, hence, the systematic error caused by bubble adhesions is calculated by considering the random dispersal of bubbles on the electrodes. Relevant computed results are compared with the measured impedance data obtained from the electrolytic tank experiment. Results due to other models are also presented and discussed. A time-domain measurement on the simulated cell to study the capacitive effects of the bubble curtain is also explained.

  4. SU-F-BRD-03: Determination of Plan Robustness for Systematic Setup Errors Using Trilinear Interpolation

    SciTech Connect

    Fix, MK; Volken, W; Frei, D; Terribilini, D; Dal Pra, A; Schmuecking, M; Manser, P

    2014-06-15

    Purpose: Treatment plan evaluations in radiotherapy currently ignore the dosimetric impact of setup uncertainties. The determination of the robustness for systematic errors is rather computationally intensive. This work investigates interpolation schemes to quantify the robustness of treatment plans for systematic errors in terms of efficiency and accuracy. Methods: The impact of systematic errors on dose distributions for patient treatment plans is determined by using the Swiss Monte Carlo Plan (SMCP). Errors in all translational directions are considered, ranging from −3 to +3 mm in 1 mm steps. For each systematic error a full MC dose calculation is performed, leading to 343 dose calculations, used as benchmarks. The interpolation uses only a subset of the 343 calculations, namely 9, 15 or 27, and determines all dose distributions by trilinear interpolation. This procedure is applied for a prostate and a head and neck case using Volumetric Modulated Arc Therapy with 2 arcs. The relative differences of the dose volume histograms (DVHs) of the target and the organs at risk are compared. Finally, the interpolation schemes are used to compare the robustness of 4-arc versus 2-arc head and neck treatment plans. Results: Relative local differences of the DVHs increase for decreasing numbers of dose calculations used in the interpolation. The mean deviations are <1%, 3.5% and 6.5% for subsets of 27, 15 and 9 dose calculations, respectively. Thereby the dose computation times are reduced by factors of 13, 25 and 43, respectively. The comparison of the 4-arc versus 2-arc plan shows a decrease in robustness; however, this is outweighed by the dosimetric improvements. Conclusion: The results of this study suggest that the use of trilinear interpolation to determine the robustness of treatment plans can remarkably reduce the number of dose calculations. This work was supported by Varian Medical Systems.
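
    The core idea, estimating quantities on the full 7x7x7 grid of setup shifts from a coarse subset of calculations by trilinear interpolation, can be sketched as follows. This is not the SMCP implementation: the "dose" here is a single hypothetical scalar per shift rather than a full 3-D dose distribution.

        # Illustrative sketch: trilinear interpolation over a 7x7x7 grid of setup
        # shifts (-3..+3 mm) from a 3x3x3 subset of calculations. The dose metric
        # below is a hypothetical stand-in for a full dose distribution.
        import numpy as np
        from scipy.interpolate import RegularGridInterpolator

        coarse = np.array([-3.0, 0.0, 3.0])            # shifts (mm) actually calculated
        fine = np.arange(-3.0, 3.1, 1.0)               # all shifts to be estimated

        def dose_metric(dx, dy, dz):
            # Hypothetical scalar dose metric as a function of the setup shift.
            return 60.0 - 0.2 * (dx**2 + dy**2 + dz**2)

        gx, gy, gz = np.meshgrid(coarse, coarse, coarse, indexing="ij")
        coarse_dose = dose_metric(gx, gy, gz)

        interp = RegularGridInterpolator((coarse, coarse, coarse), coarse_dose, method="linear")
        fx, fy, fz = np.meshgrid(fine, fine, fine, indexing="ij")
        estimated = interp(np.stack([fx, fy, fz], axis=-1))   # trilinear estimates
        print(estimated.shape)                                 # (7, 7, 7)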

  5. Accounting for systematic errors in bioluminescence imaging to improve quantitative accuracy

    NASA Astrophysics Data System (ADS)

    Taylor, Shelley L.; Perry, Tracey A.; Styles, Iain B.; Cobbold, Mark; Dehghani, Hamid

    2015-07-01

    Bioluminescence imaging (BLI) is a widely used pre-clinical imaging technique, but there are a number of limitations to its quantitative accuracy. This work uses an animal model to demonstrate some significant limitations of BLI and presents processing methods and algorithms which overcome these limitations, increasing the quantitative accuracy of the technique. The position of the imaging subject and source depth are both shown to affect the measured luminescence intensity. Free Space Modelling is used to eliminate the systematic error due to the camera/subject geometry, removing the dependence of luminescence intensity on animal position. Bioluminescence tomography (BLT) is then used to provide additional information about the depth and intensity of the source. A substantial limitation in the number of sources identified using BLI is also presented. It is shown that when a given source is at a significant depth, it can appear as multiple sources when imaged using BLI, while the use of BLT recovers the true number of sources present.

  6. Voigt profile introduces optical depth dependent systematic errors - Detected in high resolution laboratory spectra of water

    NASA Astrophysics Data System (ADS)

    Birk, Manfred; Wagner, Georg

    2016-02-01

    The Voigt profile commonly used in radiative transfer modeling of Earth's and planets' atmospheres for remote sensing/climate modeling produces systematic errors so far not accounted for. Saturated lines are systematically too narrow when calculated from pressure broadening parameters based on the analysis of laboratory data with the Voigt profile. This is caused by line narrowing effects leading to systematically too small fitted broadening parameters when applying the Voigt profile. These effective values are still valid to model non-saturated lines with sufficient accuracy. Saturated lines dominated by the wings of the line profile are sufficiently accurately modeled with a Voigt profile with the correct broadening parameters and are thus systematically too narrow when calculated with the effective values. The systematic error was quantified by mid infrared laboratory spectroscopy of the water ν2 fundamental. Correct Voigt profile based pressure broadening parameters for saturated lines were 3-4% larger than the effective ones in the spectroscopic database. Impacts on remote sensing and climate modeling are expected. Combination of saturated and non-saturated lines in the spectroscopic analysis will quantify line narrowing with unprecedented precision.
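
    For reference, the Voigt line shape discussed above can be evaluated from the Faddeeva function; a minimal sketch follows. The line-centre, Gaussian and Lorentzian widths below are hypothetical values, not the paper's fitted parameters.

        # Minimal sketch of the Voigt line-shape evaluation used in
        # radiative-transfer fitting, via the Faddeeva function.
        import numpy as np
        from scipy.special import wofz

        def voigt(nu, nu0, sigma, gamma):
            """Voigt profile: Gaussian (std sigma) convolved with Lorentzian (HWHM gamma)."""
            z = ((nu - nu0) + 1j * gamma) / (sigma * np.sqrt(2.0))
            return wofz(z).real / (sigma * np.sqrt(2.0 * np.pi))

        nu = np.linspace(-1.0, 1.0, 5)     # wavenumber offsets (cm^-1), hypothetical
        print(voigt(nu, 0.0, 0.05, 0.08))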

  7. Treatment of systematic errors in the processing of wide angle sonar sensor data for robotic navigation

    SciTech Connect

    Beckerman, M.; Oblow, E.M.

    1988-04-01

    A methodology has been developed for the treatment of systematic errors which arise in the processing of sparse sensor data. We present a detailed application of this methodology to the construction from wide-angle sonar sensor data of navigation maps for use in autonomous robotic navigation. In the methodology we introduce a four-valued labelling scheme and a simple logic for label combination. The four labels, conflict, occupied, empty and unknown, are used to mark the cells of the navigation maps; the logic allows for the rapid updating of these maps as new information is acquired. The systematic errors are treated by relabelling conflicting pixel assignments. Most of the new labels are obtained from analyses of the characteristic patterns of conflict which arise during the information processing. The remaining labels are determined by imposing an elementary consistent-labelling condition. 26 refs., 9 figs.
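
    A hedged sketch of the four-valued labelling idea is given below. The abstract does not spell out the combination table, so the rule shown (disagreeing evidence becomes conflict, to be resolved later by relabelling) is an assumed, plausible variant rather than Beckerman and Oblow's exact logic.

        # Illustrative sketch of a four-valued cell labelling with a simple,
        # assumed combination rule for updating navigation-map cells.
        from enum import Enum

        class Label(Enum):
            UNKNOWN = 0
            EMPTY = 1
            OCCUPIED = 2
            CONFLICT = 3

        def combine(old: Label, new: Label) -> Label:
            """Update a map cell with a new sensor-derived label."""
            if old is Label.UNKNOWN:
                return new
            if new is Label.UNKNOWN or old is new:
                return old
            return Label.CONFLICT   # disagreeing evidence is marked for later relabelling

        print(combine(Label.EMPTY, Label.OCCUPIED))   # Label.CONFLICT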

  8. Systematic vertical error in UAV-derived topographic models: Origins and solutions

    NASA Astrophysics Data System (ADS)

    James, Mike R.; Robson, Stuart

    2014-05-01

    Unmanned aerial vehicles (UAVs) equipped with consumer cameras are increasingly being used to produce high resolution digital elevation models (DEMs). However, although such DEMs may achieve centimetric detail, they can also display broad-scale systematic deformation (usually a vertical 'doming') that restricts their wider use. This effect can be particularly apparent in DEMs derived by structure-from-motion (SfM) processing, especially when control point data have not been incorporated in the bundle adjustment process. We illustrate that doming error results from a combination of inaccurate description of radial lens distortion and the use of imagery captured in near-parallel viewing directions. With such imagery, enabling camera self-calibration within the processing inherently leads to erroneous radial distortion values and associated DEM error. Using a simulation approach, we illustrate how existing understanding of systematic DEM error in stereo-pairs (from unaccounted radial distortion) up-scales in typical multiple-image blocks of UAV surveys. For image sets with dominantly parallel viewing directions, self-calibrating bundle adjustment (as normally used with images taken using consumer cameras) will not be able to derive radial lens distortion accurately, and will give associated systematic 'doming' DEM deformation. In the presence of image measurement noise (at levels characteristic of SfM software), and in the absence of control measurements, our simulations display domed deformation with amplitude of ~2 m over horizontal distances of ~100 m. We illustrate the sensitivity of this effect to variations in camera angle and flight height. Deformation will be reduced if suitable control points can be included within the bundle adjustment, but residual systematic vertical error may remain, accommodated by the estimated precision of the control measurements. Doming bias can be minimised by the inclusion of inclined images within the image set, for example

  9. A constant altitude flight survey method for mapping atmospheric ambient pressures and systematic radar errors

    NASA Technical Reports Server (NTRS)

    Larson, T. J.; Ehernberger, L. J.

    1985-01-01

    The flight test technique described uses controlled survey runs to determine horizontal atmospheric pressure variations and systematic altitude errors that result from space positioning measurements. The survey data can be used not only for improved air data calibrations, but also for studies of atmospheric structure and space positioning accuracy performance. The examples presented cover a wide range of radar tracking conditions for both subsonic and supersonic flight to an altitude of 42,000 ft.

  10. Effects of systematic errors on the mixing ratios of trace gases obtained from occultation spectra

    NASA Technical Reports Server (NTRS)

    Shaffer, W. A.; Shaw, J. H.; Farmer, C. B.

    1983-01-01

    The influence of systematic errors in the parameters of the models describing the geometry and the atmosphere on the profiles of trace gases retrieved from simulated solar occultation spectra, collected at satellite altitudes, is investigated. Because of smearing effects and other uncertainties, it may be preferable to calibrate the spectra internally by measuring absorption lines of an atmospheric gas such as CO2 whose vertical distribution is assumed, rather than to rely on externally supplied information.

  11. The effect of systematic errors on the hybridization of optical critical dimension measurements

    NASA Astrophysics Data System (ADS)

    Henn, Mark-Alexander; Barnes, Bryan M.; Zhang, Nien Fan; Zhou, Hui; Silver, Richard M.

    2015-06-01

    In hybrid metrology two or more measurements of the same measurand are combined to provide a more reliable result that ideally incorporates the individual strengths of each of the measurement methods. While these multiple measurements may come from dissimilar metrology methods such as optical critical dimension microscopy (OCD) and scanning electron microscopy (SEM), we investigated the hybridization of similar OCD methods featuring a focus-resolved simulation study of systematic errors performed at orthogonal polarizations. Specifically, errors due to line edge and line width roughness (LER, LWR) and their superposition (LEWR) are known to contribute a systematic bias with inherent correlated errors. In order to investigate the sensitivity of the measurement to LEWR, we follow a modeling approach proposed by Kato et al. who studied the effect of LEWR on extreme ultraviolet (EUV) and deep ultraviolet (DUV) scatterometry. Similar to their findings, we have observed that LEWR leads to a systematic bias in the simulated data. Since the critical dimensions (CDs) are determined by fitting the respective model data to the measurement data by minimizing the difference measure or chi square function, a proper description of the systematic bias is crucial to obtaining reliable results and to successful hybridization. In scatterometry, an analytical expression for the influence of LEWR on the measured orders can be derived, and accounting for this effect leads to a modification of the model function that not only depends on the critical dimensions but also on the magnitude of the roughness. For finite arrayed structures however, such an analytical expression cannot be derived. We demonstrate how to account for the systematic bias and that, if certain conditions are met, a significant improvement of the reliability of hybrid metrology for combining both dissimilar and similar measurement tools can be achieved.

  12. Systematic lossy error protection for video transmission over wireless ad hoc networks

    NASA Astrophysics Data System (ADS)

    Zhu, Xiaoqing; Rane, Shantanu; Girod, Bernd

    2005-07-01

    Wireless ad hoc networks present a challenge for error-resilient video transmission, since node mobility and multipath fading result in time-varying link qualities in terms of packet loss ratio and available bandwidth. In this paper, we propose to use a systematic lossy error protection (SLEP) scheme for video transmission over wireless ad hoc networks. The transmitted video signal has two parts: a systematic portion consisting of a video sequence transmitted without channel coding over an error-prone channel, and error protection information consisting of a bitstream generated by Wyner-Ziv encoding of the video sequence. Using an end-to-end video distortion model in conjunction with online estimates of packet loss ratio and available bandwidth, the optimal Wyner-Ziv description can be selected dynamically according to current channel conditions. The scheme can also be applied to choose one path for transmission from amongst multiple candidate routes with varying available bandwidths and packet loss ratios, so that the expected end-to-end video distortion is minimized. Experimental results of video transmission over a simulated ad hoc wireless network show that the proposed SLEP scheme outperforms the conventional application layer FEC approach in that it provides graceful degradation of received video quality over a wider range of packet loss ratios and is less susceptible to inaccuracy in the packet loss ratio estimation.
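
    The route-selection step can be illustrated with a very rough sketch: each candidate path is scored by an expected-distortion estimate built from its packet loss ratio and available bandwidth, and the minimizing path is chosen. The distortion model below is invented purely for illustration and is not the paper's end-to-end model.

        # Illustrative sketch (assumed, simplified distortion model): choose the
        # candidate route that minimizes expected end-to-end video distortion.
        def expected_distortion(loss_ratio, bandwidth_kbps, source_rate_kbps=500.0):
            # Hypothetical model: residual source distortion falls with usable rate,
            # while packet losses add a term proportional to the loss ratio.
            usable = min(bandwidth_kbps, source_rate_kbps)
            return 1000.0 / usable + 50.0 * loss_ratio

        routes = {"A": (0.02, 400.0), "B": (0.10, 800.0), "C": (0.05, 600.0)}
        best = min(routes, key=lambda r: expected_distortion(*routes[r]))
        print("selected route:", best)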

  13. Local and Global Views of Systematic Errors of Atmosphere-Ocean General Circulation Models

    NASA Astrophysics Data System (ADS)

    Mechoso, C. Roberto; Wang, Chunzai; Lee, Sang-Ki; Zhang, Liping; Wu, Lixin

    2014-05-01

    Coupled Atmosphere-Ocean General Circulation Models (CGCMs) have serious systematic errors that challenge the reliability of climate predictions. One major reason for such biases is the misrepresentations of physical processes, which can be amplified by feedbacks among climate components especially in the tropics. Much effort, therefore, is dedicated to the better representation of physical processes in coordination with intense process studies. The present paper starts with a presentation of these systematic CGCM errors with an emphasis on the sea surface temperature (SST) in simulations by 22 participants in the Coupled Model Intercomparison Project phase 5 (CMIP5). Different regions are considered for discussion of model errors, including the one around the equator, the one covered by the stratocumulus decks off Peru and Namibia, and the confluence between the Angola and Benguela currents. Hypotheses on the reasons for the errors are reviewed, with particular attention on the parameterization of low-level marine clouds, model difficulties in the simulation of the ocean heat budget under the stratocumulus decks, and location of strong SST gradients. Next the presentation turns to a global perspective of the errors and their causes. It is shown that a simulated weak Atlantic Meridional Overturning Circulation (AMOC) tends to be associated with cold biases in the entire Northern Hemisphere with an atmospheric pattern that resembles the Northern Hemisphere annular mode. The AMOC weakening is also associated with a strengthening of Antarctic bottom water formation and warm SST biases in the Southern Ocean. It is also shown that cold biases in the tropical North Atlantic and West African/Indian monsoon regions during the warm season in the Northern Hemisphere have interhemispheric links with warm SST biases in the tropical southeastern Pacific and Atlantic, respectively. The results suggest that improving the simulation of regional processes may not suffice for a more

  14. A novel approach to an old problem: analysis of systematic errors in two models of recognition memory.

    PubMed

    Dede, Adam J O; Squire, Larry R; Wixted, John T

    2014-01-01

    For more than a decade, the high threshold dual process (HTDP) model has served as a guide for studying the functional neuroanatomy of recognition memory. The HTDP model's utility has been that it provides quantitative estimates of recollection and familiarity, two processes thought to support recognition ability. Important support for the model has been the observation that it fits experimental data well. The continuous dual process (CDP) model also fits experimental data well. However, this model does not provide quantitative estimates of recollection and familiarity, making it less immediately useful for illuminating the functional neuroanatomy of recognition memory. These two models are incompatible and cannot both be correct, and an alternative method of model comparison is needed. We tested for systematic errors in each model's ability to fit recognition memory data from four independent data sets from three different laboratories. Across participants and across data sets, the HTDP model (but not the CDP model) exhibited systematic error. In addition, the pattern of errors exhibited by the HTDP model was predicted by the CDP model. We conclude that the CDP model provides a better account of recognition memory than the HTDP model. PMID:24184486

  15. Improving SMOS retrieved salinity: characterization of systematic errors in reconstructed and modelled brightness temperature images

    NASA Astrophysics Data System (ADS)

    Gourrion, J.; Guimbard, S.; Sabia, R.; Portabella, M.; Gonzalez, V.; Turiel, A.; Ballabrera, J.; Gabarro, C.; Perez, F.; Martinez, J.

    2012-04-01

    The Microwave Imaging Radiometer using Aperture Synthesis (MIRAS) instrument onboard the Soil Moisture and Ocean Salinity (SMOS) mission was launched on November 2nd, 2009 with the aim of providing, over the oceans, synoptic sea surface salinity (SSS) measurements with spatial and temporal coverage adequate for large-scale oceanographic studies. For each single satellite overpass, SSS is retrieved after collecting, at fixed ground locations, a series of brightness temperatures from successive scenes corresponding to various geometrical and polarization conditions. SSS is inverted through minimization of the difference between reconstructed and modeled brightness temperatures. To meet the challenging mission requirements, retrieved SSS needs to achieve an accuracy of 0.1 psu after averaging over a 10- or 30-day period and 2°x2° or 1°x1° spatial boxes, respectively. It is expected that, at such scales, the high radiometric noise can be reduced to a level such that remaining errors and inconsistencies in the retrieved salinity fields can essentially be related to (1) systematic brightness temperature errors in the antenna reference frame, (2) systematic errors in the Geophysical Model Function (GMF, used to model the observations and retrieve salinity) for specific environmental conditions and/or particular auxiliary parameter values, and (3) errors in the auxiliary datasets used as input to the GMF. The present communication primarily aims at addressing point 1 above, and possibly point 2, for the whole polarimetric information, i.e. issued from both co-polar and cross-polar measurements. Several factors may potentially produce systematic errors in the antenna reference frame: the unavoidable fact that all antennas are not perfectly identical, the imperfect characterization of the instrument response, e.g. antenna patterns, accounting for receiver temperatures in the reconstruction, calibration using flat sky scenes, implementation of ripple reduction algorithms at sharp

  16. Systematic errors in the measurement of the permanent electric dipole moment (EDM) of the 199Hg atom

    NASA Astrophysics Data System (ADS)

    Chen, Yi; Graner, Brent; Lindahl, Eric; Heckel, Blayne

    2016-03-01

    This talk provides a discussion of the systematic errors that were encountered in the 199Hg experiment described earlier in this session. The dominant systematic error, unseen in previous 199Hg EDM experiments, arose from small motions of the Hg vapor cells due to forces exerted by the applied electric field. Methods used to understand this effect, as well as the anticipated sources of systematic errors such as leakage currents, parameter correlations, and E2 and v × E / c effects, will be presented. The total systematic error was found to be 72% as large as the statistical error of the EDM measurement. This work was supported by NSF Grant 1306743 and by DOE Grant DE-FG02-97ER41020.

  17. Systematic errors in the measurement of the permanent electric dipole moment (EDM) of the 199 Hg atom

    NASA Astrophysics Data System (ADS)

    Chen, Yi; Graner, Brent; Heckel, Blayne; Lindahl, Eric

    2016-05-01

    This talk provides a discussion of the systematic errors that were encountered in the 199 Hg experiment described earlier in this session. The dominant systematic error, unseen in previous 199 Hg EDM experiments, arose from small motions of the Hg vapor cells due to forces exerted by the applied electric field. Methods used to understand this effect, as well as the anticipated sources of systematic errors such as leakage currents, parameter correlations, and E2 and v × E / c effects, will be presented. The total systematic error was found to be 72% as large as the statistical error of the EDM measurement. This work was supported by NSF Grant 1306743 and by DOE Grant DE-FG02-97ER41020.

  18. Precision calibration and systematic error reduction in the long trace profiler

    SciTech Connect

    Qian, Shinan; Sostero, Giovanni; Takacs, Peter Z.

    2000-01-01

    The long trace profiler (LTP) has become the instrument of choice for surface figure testing and slope error measurement of mirrors used for synchrotron radiation and x-ray astronomy optics. In order to achieve highly accurate measurements with the LTP, systematic errors need to be reduced by precise angle calibration and accurate focal plane position adjustment. A self-scanning method is presented to adjust the focal plane position of the detector with high precision by use of a pentaprism scanning technique. The focal plane position can be set to better than 0.25 mm for a 1250-mm-focal-length Fourier-transform lens using this technique. The use of a 0.03-arcsec-resolution theodolite combined with the sensitivity of the LTP detector system can be used to calibrate the angular linearity error very precisely. Some suggestions are introduced for reducing the system error. With these precision calibration techniques, accuracy in the measurement of figure and slope error on meter-long mirrors is now at a level of about 1 µrad rms over the whole testing range of the LTP. (c) 2000 Society of Photo-Optical Instrumentation Engineers.

  19. Random and systematic measurement errors in acoustic impedance as determined by the transmission line method

    NASA Technical Reports Server (NTRS)

    Parrott, T. L.; Smith, C. D.

    1977-01-01

    The effect of random and systematic errors associated with the measurement of normal incidence acoustic impedance in a zero-mean-flow environment was investigated by the transmission line method. The influence of random measurement errors in the reflection coefficients and pressure minima positions was investigated by computing fractional standard deviations of the normalized impedance. Both the standard techniques of random process theory and a simplified technique were used. Over a wavelength range of 68 to 10 cm random measurement errors in the reflection coefficients and pressure minima positions could be described adequately by normal probability distributions with standard deviations of 0.001 and 0.0098 cm, respectively. An error propagation technique based on the observed concentration of the probability density functions was found to give essentially the same results but with a computation time of about 1 percent of that required for the standard technique. The results suggest that careful experimental design reduces the effect of random measurement errors to insignificant levels for moderate ranges of test specimen impedance component magnitudes. Most of the observed random scatter can be attributed to lack of control by the mounting arrangement over mechanical boundary conditions of the test sample.

  20. The sensitivity of patient specific IMRT QC to systematic MLC leaf bank offset errors

    SciTech Connect

    Rangel, Alejandra; Palte, Gesa; Dunscombe, Peter

    2010-07-15

    Purpose: Patient specific IMRT QC is performed routinely in many clinics as a safeguard against errors and inaccuracies which may be introduced during the complex planning, data transfer, and delivery phases of this type of treatment. The purpose of this work is to evaluate the feasibility of detecting systematic errors in MLC leaf bank position with patient specific checks. Methods: 9 head and neck (H and N) and 14 prostate IMRT beams were delivered using MLC files containing systematic offsets (±1 mm in two banks, ±0.5 mm in two banks, and 1 mm in one bank of leaves). The beams were measured using both MAPCHECK (Sun Nuclear Corp., Melbourne, FL) and the aS1000 electronic portal imaging device (Varian Medical Systems, Palo Alto, CA). Comparisons with calculated fields, without offsets, were made using commonly adopted criteria including absolute dose (AD) difference, relative dose difference, distance to agreement (DTA), and the gamma index. Results: The criteria most sensitive to systematic leaf bank offsets were the 3% AD, 3 mm DTA for MAPCHECK and the gamma index with 2% AD and 2 mm DTA for the EPID. The criterion based on the relative dose measurements was the least sensitive to MLC offsets. More highly modulated fields, i.e., H and N, showed greater changes in the percentage of passing points due to systematic MLC inaccuracy than prostate fields. Conclusions: None of the techniques or criteria tested is sufficiently sensitive, with the population of IMRT fields, to detect a systematic MLC offset at a clinically significant level on an individual field. Patient specific QC cannot, therefore, substitute for routine QC of the MLC itself.
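
    The gamma-index criterion referred to above combines a dose-difference and a distance-to-agreement tolerance. A hedged, brute-force 1-D sketch is given below; real patient-specific QC compares 2-D or 3-D dose grids, so the profiles, criteria handling and global normalization here are simplifying assumptions.

        # Illustrative sketch: brute-force 1-D gamma index with dose-difference and
        # distance-to-agreement criteria (e.g. 3% / 3 mm), globally normalized.
        import numpy as np

        def gamma_index(x_mm, measured, calculated, dose_crit=0.03, dta_crit_mm=3.0):
            norm = calculated.max()                       # global normalization (assumption)
            gammas = np.empty_like(measured)
            for i, (xm, dm) in enumerate(zip(x_mm, measured)):
                dd = (dm - calculated) / (dose_crit * norm)
                dta = (xm - x_mm) / dta_crit_mm
                gammas[i] = np.sqrt(dd**2 + dta**2).min()
            return gammas

        x = np.linspace(0, 50, 101)
        calc = np.exp(-((x - 25) / 10.0) ** 2)
        meas = np.exp(-((x - 26) / 10.0) ** 2)            # profile shifted by 1 mm
        print("pass rate:", np.mean(gamma_index(x, meas, calc) <= 1.0))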

  1. A systematic impact assessment of GRACE error correlation on data assimilation in hydrological models

    NASA Astrophysics Data System (ADS)

    Schumacher, Maike; Kusche, Jürgen; Döll, Petra

    2016-06-01

    Recently, ensemble Kalman filters (EnKF) have found increasing application for merging hydrological models with total water storage anomaly (TWSA) fields from the Gravity Recovery And Climate Experiment (GRACE) satellite mission. Previous studies have disregarded the effect of spatially correlated errors of GRACE TWSA products in their investigations. Here, for the first time, we systematically assess the impact of the GRACE error correlation structure on EnKF data assimilation into a hydrological model, i.e. on estimated compartmental and total water storages and model parameter values. Our investigations include (1) assimilating gridded GRACE-derived TWSA into the WaterGAP Global Hydrology Model and, simultaneously, calibrating its parameters; (2) introducing GRACE observations on different spatial scales; (3) modelling observation errors as either spatially white or correlated in the assimilation procedure, and (4) replacing the standard EnKF algorithm by the square root analysis scheme or, alternatively, the singular evolutive interpolated Kalman filter. Results of a synthetic experiment designed for the Mississippi River Basin indicate that the hydrological parameters are sensitive to TWSA assimilation if spatial resolution of the observation data is sufficiently high. We find a significant influence of spatial error correlation on the adjusted water states and model parameters for all implemented filter variants, in particular for subbasins with a large discrepancy between observed and initially simulated TWSA and for north-south elongated sub-basins. Considering these correlated errors, however, does not generally improve results: while some metrics indicate that it is helpful to consider the full GRACE error covariance matrix, it appears to have an adverse effect on others. We conclude that considering the characteristics of GRACE error correlation is at least as important as the selection of the spatial discretisation of TWSA observations, while the choice
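
    Item (3) above, treating the observation errors as spatially white versus correlated, amounts to choosing a diagonal or a full observation error covariance matrix R in the filter update. A hedged sketch of a generic perturbed-observation EnKF update follows; it is not the WaterGAP/GRACE assimilation code, and all dimensions and values are toy examples.

        # Illustrative sketch: stochastic EnKF analysis step with either a diagonal
        # ("white") or full ("correlated") observation error covariance R.
        import numpy as np

        rng = np.random.default_rng(1)

        def enkf_update(X, y, H, R):
            """X: (n_state, n_ens) ensemble; y: (n_obs,) observation; H: (n_obs, n_state)."""
            n_ens = X.shape[1]
            A = X - X.mean(axis=1, keepdims=True)
            HA = H @ A
            P_yy = HA @ HA.T / (n_ens - 1) + R
            P_xy = A @ HA.T / (n_ens - 1)
            K = P_xy @ np.linalg.inv(P_yy)
            # Perturbed-observation variant: each member sees a noisy copy of y.
            Y = y[:, None] + rng.multivariate_normal(np.zeros(len(y)), R, size=n_ens).T
            return X + K @ (Y - H @ X)

        n_state, n_obs, n_ens = 6, 3, 30
        X = rng.normal(size=(n_state, n_ens))
        H = rng.normal(size=(n_obs, n_state))
        y = rng.normal(size=n_obs)
        R_white = 0.1 * np.eye(n_obs)
        R_corr = 0.1 * (0.5 * np.eye(n_obs) + 0.5 * np.ones((n_obs, n_obs)))  # correlated errors
        print(enkf_update(X, y, H, R_white).shape, enkf_update(X, y, H, R_corr).shape)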

  2. A systematic impact assessment of GRACE error correlation on data assimilation in hydrological models

    NASA Astrophysics Data System (ADS)

    Schumacher, Maike; Kusche, Jürgen; Döll, Petra

    2016-02-01

    Recently, ensemble Kalman filters (EnKF) have found increasing application for merging hydrological models with total water storage anomaly (TWSA) fields from the Gravity Recovery And Climate Experiment (GRACE) satellite mission. Previous studies have disregarded the effect of spatially correlated errors of GRACE TWSA products in their investigations. Here, for the first time, we systematically assess the impact of the GRACE error correlation structure on EnKF data assimilation into a hydrological model, i.e. on estimated compartmental and total water storages and model parameter values. Our investigations include (1) assimilating gridded GRACE-derived TWSA into the WaterGAP Global Hydrology Model and, simultaneously, calibrating its parameters; (2) introducing GRACE observations on different spatial scales; (3) modelling observation errors as either spatially white or correlated in the assimilation procedure, and (4) replacing the standard EnKF algorithm by the square root analysis scheme or, alternatively, the singular evolutive interpolated Kalman filter. Results of a synthetic experiment designed for the Mississippi River Basin indicate that the hydrological parameters are sensitive to TWSA assimilation if spatial resolution of the observation data is sufficiently high. We find a significant influence of spatial error correlation on the adjusted water states and model parameters for all implemented filter variants, in particular for subbasins with a large discrepancy between observed and initially simulated TWSA and for north-south elongated sub-basins. Considering these correlated errors, however, does not generally improve results: while some metrics indicate that it is helpful to consider the full GRACE error covariance matrix, it appears to have an adverse effect on others. We conclude that considering the characteristics of GRACE error correlation is at least as important as the selection of the spatial discretisation of TWSA observations, while the choice

  3. Systematic Errors Associated with the CPMG Pulse Sequence and Their Effect on Motional Analysis of Biomolecules

    NASA Astrophysics Data System (ADS)

    Ross, A.; Czisch, M.; King, G. C.

    1997-02-01

    A theoretical approach to calculate the time evolution of magnetization during a CPMG pulse sequence of arbitrary parameter settings is developed and verified by experiment. The analysis reveals that off-resonance effects can cause systematic reductions in measured peak amplitudes that commonly lie in the range 5-25%, reaching 50% in unfavorable circumstances. These errors, which are finely dependent upon frequency offset and CPMG parameter settings, are subsequently transferred into erroneous T2 values obtained by curve fitting, where they are reduced or amplified depending upon the magnitude of the relaxation time. Subsequent transfer to Lipari-Szabo model analysis can produce significant errors in derived motional parameters, with τe internal correlation times being affected somewhat more than S2 order parameters. A hazard of this off-resonance phenomenon is its oscillatory nature, so that strongly affected and unaffected signals can be found at various frequencies within a CPMG spectrum. Methods for the reduction of the systematic error are discussed. Relaxation studies on biomolecules, especially at high field strengths, should take account of potential off-resonance contributions.

  4. An examination of the southern California field test for the systematic accumulation of the optical refraction error in geodetic leveling.

    USGS Publications Warehouse

    Castle, R.O.; Brown, B.W., Jr.; Gilmore, T.D.; Mark, R.K.; Wilson, R.C.

    1983-01-01

    Appraisals of the two levelings that formed the southern California field test for the accumulation of the atmospheric refraction error indicate that random error and systematic error unrelated to refraction competed with the systematic refraction error and severely complicate any analysis of the test results. If the fewer than one-third of the sections that met less than second-order, class I standards are dropped, the divergence virtually disappears between the presumably more refraction contaminated long-sight-length survey and the less contaminated short-sight-length survey. -Authors

  5. Influence of Additive and Multiplicative Structure and Direction of Comparison on the Reversal Error

    ERIC Educational Resources Information Center

    González-Calero, José Antonio; Arnau, David; Laserna-Belenguer, Belén

    2015-01-01

    An empirical study has been carried out to evaluate the potential of word order matching and static comparison as explanatory models of reversal error. Data was collected from 214 undergraduate students who translated a set of additive and multiplicative comparisons expressed in Spanish into algebraic language. In these multiplicative comparisons…

  6. Efficient Solar Scene Wavefront Estimation with Reduced Systematic and RMS Errors: Summary

    NASA Astrophysics Data System (ADS)

    Anugu, N.; Garcia, P.

    2016-04-01

    Wavefront sensing for solar telescopes is commonly implemented with Shack-Hartmann sensors. Correlation algorithms are usually used to estimate the extended-scene Shack-Hartmann sub-aperture image shifts or slopes. The image shift is computed by correlating a reference sub-aperture image with the target distorted sub-aperture image. The pixel position where the maximum correlation is located gives the image shift in integer pixel coordinates. Sub-pixel precision image shifts are computed by applying a peak-finding algorithm to the correlation peak (Poyneer 2003; Löfdahl 2010). However, the peak-finding algorithm results are usually biased towards integer pixels; these errors are called systematic bias errors (Sjödahl 1994). These errors are caused by the low pixel sampling of the images. The amplitude of these errors depends on the type of correlation algorithm and the type of peak-finding algorithm being used. To study the systematic errors in detail, solar sub-aperture synthetic images are constructed from a Swedish Solar Telescope solar granulation image. The performance of the cross-correlation algorithm in combination with different peak-finding algorithms is investigated. The studied peak-finding algorithms are: parabola (Poyneer 2003); quadratic polynomial (Löfdahl 2010); threshold centre of gravity (Bailey 2003); Gaussian (Nobach & Honkanen 2005); and pyramid (Bailey 2003). The systematic error study reveals that the pyramid fit is the most robust to pixel-locking effects. The RMS error analysis reveals that the threshold centre of gravity behaves better at low SNR, although the systematic errors in the measurement are large. It is found that no algorithm is best for both systematic and RMS error reduction. To overcome this problem, a new solution is proposed in which the image sampling is increased prior to the actual correlation matching. The method is realized in two steps to improve its
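
    A hedged sketch of the basic shift estimator discussed above, cross-correlation followed by a 1-D parabolic sub-pixel refinement in each axis (one of the peak-finding methods compared), is given below. The images and the applied shift are synthetic random fields, not solar granulation data.

        # Illustrative sketch: integer-pixel shift from 2-D cross-correlation plus a
        # parabolic sub-pixel refinement of the correlation peak in each axis.
        import numpy as np
        from scipy.signal import correlate2d

        def parabolic_offset(ym, y0, yp):
            """Vertex offset of a parabola through three equally spaced samples."""
            denom = ym - 2.0 * y0 + yp
            return 0.0 if denom == 0 else 0.5 * (ym - yp) / denom

        def estimate_shift(reference, target):
            c = correlate2d(target - target.mean(), reference - reference.mean(), mode="same")
            iy, ix = np.unravel_index(np.argmax(c), c.shape)
            dy = parabolic_offset(c[iy - 1, ix], c[iy, ix], c[iy + 1, ix])
            dx = parabolic_offset(c[iy, ix - 1], c[iy, ix], c[iy, ix + 1])
            centre = np.array(c.shape) // 2               # zero-lag position for 'same' mode
            return iy + dy - centre[0], ix + dx - centre[1]

        rng = np.random.default_rng(2)
        ref = rng.normal(size=(32, 32))
        tgt = np.roll(ref, shift=(2, -1), axis=(0, 1))     # known shift
        print(estimate_shift(ref, tgt))                    # approximately (2, -1)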

  7. Sherborn's Index Animalium: New names, systematic errors and availability of names in the light of modern nomenclature.

    PubMed

    Welter-Schultes, Francisco; Görlich, Angela; Lutze, Alexandra

    2016-01-01

    This study is aimed to shed light on the reliability of Sherborn's Index Animalium in terms of modern usage. The AnimalBase project spent several years' worth of teamwork dedicated to extracting new names from original sources in the period ranging from 1757 to the mid-1790s. This allowed us to closely analyse Sherborn's work and verify the completeness and correctness of his record. We found the reliability of Sherborn's resource generally very high, but in some special situations the reliability was reduced due to systematic errors or incompleteness in source material. Index Animalium is commonly used by taxonomists today who rely strongly on Sherborn's record; our study is directed most pointedly at those users. We recommend paying special attention to the situations where we found that Sherborn's data should be read with caution. In addition to some categories of systematic errors and mistakes that were Sherborn's own responsibility, readers should also take into account that nomenclatural rules have been changed or refined in the past 100 years, and that Sherborn's resource could eventually present outdated information. One of our main conclusions is that error rates in nomenclatoral compilations tend to be lower if one single and highly experienced person such as Sherborn carries out the work, than if a team is trying to do the task. Based on our experience with extracting names from original sources we came to the conclusion that error rates in such a manual work on names in a list are difficult to reduce below 2-4%. We suggest this is a natural limit and a point of diminishing returns for projects of this nature. PMID:26877658

  8. Sherborn’s Index Animalium: New names, systematic errors and availability of names in the light of modern nomenclature

    PubMed Central

    Welter-Schultes, Francisco; Görlich, Angela; Lutze, Alexandra

    2016-01-01

    Abstract This study is aimed to shed light on the reliability of Sherborn’s Index Animalium in terms of modern usage. The AnimalBase project spent several years’ worth of teamwork dedicated to extracting new names from original sources in the period ranging from 1757 to the mid-1790s. This allowed us to closely analyse Sherborn’s work and verify the completeness and correctness of his record. We found the reliability of Sherborn’s resource generally very high, but in some special situations the reliability was reduced due to systematic errors or incompleteness in source material. Index Animalium is commonly used by taxonomists today who rely strongly on Sherborn’s record; our study is directed most pointedly at those users. We recommend paying special attention to the situations where we found that Sherborn’s data should be read with caution. In addition to some categories of systematic errors and mistakes that were Sherborn’s own responsibility, readers should also take into account that nomenclatural rules have been changed or refined in the past 100 years, and that Sherborn’s resource could eventually present outdated information. One of our main conclusions is that error rates in nomenclatoral compilations tend to be lower if one single and highly experienced person such as Sherborn carries out the work, than if a team is trying to do the task. Based on our experience with extracting names from original sources we came to the conclusion that error rates in such a manual work on names in a list are difficult to reduce below 2–4%. We suggest this is a natural limit and a point of diminishing returns for projects of this nature. PMID:26877658

  9. A Novel Systematic Error Compensation Algorithm Based on Least Squares Support Vector Regression for Star Sensor Image Centroid Estimation

    PubMed Central

    Yang, Jun; Liang, Bin; Zhang, Tao; Song, Jingyan

    2011-01-01

    The star centroid estimation is the most important operation, which directly affects the precision of attitude determination for star sensors. This paper presents a theoretical study of the systematic error introduced by the star centroid estimation algorithm. The systematic error is analyzed through a frequency domain approach and numerical simulations. It is shown that the systematic error consists of the approximation error and truncation error which result from the discretization approximation and sampling window limitations, respectively. A criterion for choosing the size of the sampling window to reduce the truncation error is given in this paper. The systematic error can be evaluated as a function of the actual star centroid positions under different Gaussian widths of star intensity distribution. In order to eliminate the systematic error, a novel compensation algorithm based on the least squares support vector regression (LSSVR) with Radial Basis Function (RBF) kernel is proposed. Simulation results show that when the compensation algorithm is applied to the 5-pixel star sampling window, the accuracy of star centroid estimation is improved from 0.06 to 6 × 10⁻⁵ pixels. PMID:22164021
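
    The compensation idea, learning the systematic centroid error as a smooth function of position from simulated data and subtracting it, can be sketched as follows. scikit-learn's KernelRidge with an RBF kernel is used here as a close stand-in for LS-SVR (the two are mathematically similar), and the sinusoidal bias and all numbers are synthetic assumptions, not the paper's results.

        # Illustrative sketch (not the paper's implementation): learn a correction
        # for the systematic centroid error from synthetic training data.
        import numpy as np
        from sklearn.kernel_ridge import KernelRidge

        rng = np.random.default_rng(3)
        true_pos = rng.uniform(-0.5, 0.5, size=200)                 # sub-pixel positions
        systematic_error = 0.06 * np.sin(2 * np.pi * true_pos)      # assumed S-curve bias
        measured = true_pos + systematic_error

        model = KernelRidge(kernel="rbf", gamma=10.0, alpha=1e-3)
        model.fit(measured.reshape(-1, 1), systematic_error)        # predict bias from measurement

        corrected = measured - model.predict(measured.reshape(-1, 1))
        print("rms before:", np.std(measured - true_pos),
              "after:", np.std(corrected - true_pos))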

  10. Estimation of Systematic Errors for Deuteron Electric Dipole Moment Search at COSY

    NASA Astrophysics Data System (ADS)

    Chekmenev, Stanislav

    2016-02-01

    An experimental method aimed at finding a permanent EDM of a charged particle was proposed by the JEDI (Jülich Electric Dipole moment Investigations) collaboration. EDMs can be observed by their influence on spin motion. The only possible way to perform a direct measurement is to use a storage ring. For this purpose, it was decided to carry out the first precursor experiment at the Cooler Synchrotron (COSY). Since the EDM of a particle violates CP invariance, it is expected to be tiny; treatment of all the various sources of systematic errors should therefore be done with a great level of precision. One should clearly understand how misalignments of the magnets affect the beam and the spin motion. It is planned to use an RF Wien filter for the precursor experiment. In this paper the simulations of the systematic effects for the RF Wien filter device method will be discussed.

  11. Systematic errors in the measurement of emissivity caused by directional effects.

    PubMed

    Kribus, Abraham; Vishnevetsky, Irna; Rotenberg, Eyal; Yakir, Dan

    2003-04-01

    Accurate knowledge of surface emissivity is essential for applications in remote sensing (remote temperature measurement), radiative transport, and modeling of environmental energy balances. Direct measurements of surface emissivity are difficult when there is considerable background radiation at the same wavelength as the emitted radiation. This occurs, for example, when objects at temperatures near room temperature are measured in a terrestrial environment by use of the infrared 8-14-μm band. This problem is usually treated by assumption of a perfectly diffuse surface or of diffuse background radiation. However, real surfaces and actual background radiation are not diffuse; therefore there will be a systematic measurement error. It is demonstrated that, in some cases, the deviations from a diffuse behavior lead to large errors in the measured emissivity. Past measurements made with simplifying assumptions should therefore be reevaluated and corrected. Recommendations are presented for improving experimental procedures in emissivity measurement. PMID:12683764

  12. First Year Wilkinson Microwave Anisotropy Probe(WMAP) Observations: Data Processing Methods and Systematic Errors Limits

    NASA Technical Reports Server (NTRS)

    Hinshaw, G.; Barnes, C.; Bennett, C. L.; Greason, M. R.; Halpern, M.; Hill, R. S.; Jarosik, N.; Kogut, A.; Limon, M.; Meyer, S. S.

    2003-01-01

    We describe the calibration and data processing methods used to generate full-sky maps of the cosmic microwave background (CMB) from the first year of Wilkinson Microwave Anisotropy Probe (WMAP) observations. Detailed limits on residual systematic errors are assigned based largely on analyses of the flight data supplemented, where necessary, with results from ground tests. The data are calibrated in flight using the dipole modulation of the CMB due to the observatory's motion around the Sun. This constitutes a full-beam calibration source. An iterative algorithm simultaneously fits the time-ordered data to obtain calibration parameters and pixelized sky map temperatures. The noise properties are determined by analyzing the time-ordered data with this sky signal estimate subtracted. Based on this, we apply a pre-whitening filter to the time-ordered data to remove a low level of 1/f noise. We infer and correct for a small (approx. 1%) transmission imbalance between the two sky inputs to each differential radiometer, and we subtract a small sidelobe correction from the 23 GHz (K band) map prior to further analysis. No other systematic error corrections are applied to the data. Calibration and baseline artifacts, including the response to environmental perturbations, are negligible. Systematic uncertainties are comparable to statistical uncertainties in the characterization of the beam response. Both are accounted for in the covariance matrix of the window function and are propagated to uncertainties in the final power spectrum. We characterize the combined upper limits to residual systematic uncertainties through the pixel covariance matrix.

  13. Synoptic scale forecast skill and systematic errors in the MASS 2.0 model. [Mesoscale Atmospheric Simulation System

    NASA Technical Reports Server (NTRS)

    Koch, S. E.; Skillman, W. C.; Kocin, P. J.; Wetzel, P. J.; Brill, K. F.

    1985-01-01

    The synoptic scale performance characteristics of MASS 2.0 are determined by comparing filtered 12-24 hr model forecasts to same-case forecasts made by the National Meteorological Center's synoptic-scale Limited-area Fine Mesh model. Characteristics of the two systems are contrasted, and the analysis methodology used to determine statistical skill scores and systematic errors is described. The overall relative performance of the two models in the sample is documented, and important systematic errors uncovered are presented.

  14. Suppression of Systematic Errors of Electronic Distance Meters for Measurement of Short Distances.

    PubMed

    Braun, Jaroslav; Štroner, Martin; Urban, Rudolf; Dvořáček, Filip

    2015-01-01

    In modern industrial geodesy, high demands are placed on the final accuracy, with expectations currently falling below 1 mm. The measurement methodology and surveying instruments used have to be adjusted to meet these stringent requirements, especially the total stations as the most often used instruments. A standard deviation of the measured distance is the accuracy parameter, commonly between 1 and 2 mm. This parameter is often discussed in conjunction with the determination of the real accuracy of measurements at very short distances (5-50 m) because it is generally known that this accuracy cannot be increased by simply repeating the measurement because a considerable part of the error is systematic. This article describes the detailed testing of electronic distance meters to determine the absolute size of their systematic errors, their stability over time, their repeatability and the real accuracy of their distance measurement. Twenty instruments (total stations) have been tested, and more than 60,000 distances in total were measured to determine the accuracy and precision parameters of the distance meters. Based on the experiments' results, calibration procedures were designed, including a special correction function for each instrument, whose usage reduces the standard deviation of the measurement of distance by at least 50%. PMID:26258777

  15. Suppression of Systematic Errors of Electronic Distance Meters for Measurement of Short Distances

    PubMed Central

    Braun, Jaroslav; Štroner, Martin; Urban, Rudolf; Dvořáček, Filip

    2015-01-01

    In modern industrial geodesy, high demands are placed on the final accuracy, with expectations currently falling below 1 mm. The measurement methodology and surveying instruments used have to be adjusted to meet these stringent requirements, especially the total stations as the most often used instruments. A standard deviation of the measured distance is the accuracy parameter, commonly between 1 and 2 mm. This parameter is often discussed in conjunction with the determination of the real accuracy of measurements at very short distances (5–50 m) because it is generally known that this accuracy cannot be increased by simply repeating the measurement because a considerable part of the error is systematic. This article describes the detailed testing of electronic distance meters to determine the absolute size of their systematic errors, their stability over time, their repeatability and the real accuracy of their distance measurement. Twenty instruments (total stations) have been tested, and more than 60,000 distances in total were measured to determine the accuracy and precision parameters of the distance meters. Based on the experiments’ results, calibration procedures were designed, including a special correction function for each instrument, whose usage reduces the standard deviation of the measurement of distance by at least 50%. PMID:26258777

  16. Improving Photometry and Stellar Signal Preservation with Pixel-Level Systematic Error Correction

    NASA Technical Reports Server (NTRS)

    Kolodzijczak, Jeffrey J.; Smith, Jeffrey C.; Jenkins, Jon M.

    2013-01-01

    The Kepler Mission has demonstrated that excellent stellar photometric performance can be achieved using apertures constructed from optimally selected CCD pixels. The clever methods used to correct for systematic errors, while very successful, still have some limitations in their ability to extract long-term trends in stellar flux. They also leave poorly correlated bias sources, such as drifting moiré pattern, uncorrected. We will illustrate several approaches where applying systematic error correction algorithms to the pixel time series, rather than the co-added raw flux time series, provides significant advantages. Examples include spatially localized determination of time varying moiré pattern biases, greater sensitivity to radiation-induced pixel sensitivity drops (SPSDs), improved precision of co-trending basis vectors (CBV), and a means of distinguishing the stellar variability from co-trending terms even when they are correlated. For the last item, the approach enables physical interpretation of appropriately scaled coefficients derived in the fit of pixel time series to the CBV as linear combinations of various spatial derivatives of the pixel response function (PRF). We demonstrate that the residuals of a fit of so-derived pixel coefficients to various PRF-related components can be deterministically interpreted in terms of physically meaningful quantities, such as the component of the stellar flux time series which is correlated with the CBV, as well as relative pixel gain, proper motion and parallax. The approach also enables us to parameterize and assess the limiting factors in the uncertainties in these quantities.
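
    The underlying operation, fitting a time series to co-trending basis vectors by linear least squares and subtracting the fitted systematics, can be sketched as follows. The CBVs, the stellar signal and the coefficients below are synthetic stand-ins, not Kepler data products.

        # Illustrative sketch: remove systematic trends from a time series by a
        # linear least-squares fit to co-trending basis vectors (CBVs).
        import numpy as np

        rng = np.random.default_rng(4)
        n = 500
        t = np.linspace(0, 1, n)
        cbv = np.column_stack([t, np.sin(4 * np.pi * t)])     # two hypothetical CBVs
        stellar = 1.0 + 0.002 * np.sin(20 * np.pi * t)        # intrinsic stellar signal
        pixel_series = stellar + cbv @ np.array([0.01, 0.005]) + 1e-4 * rng.normal(size=n)

        design = np.column_stack([np.ones(n), cbv])           # offset + CBVs
        coeffs, *_ = np.linalg.lstsq(design, pixel_series, rcond=None)
        detrended = pixel_series - design[:, 1:] @ coeffs[1:] # subtract fitted systematics only
        print("fitted CBV coefficients:", coeffs[1:])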

  17. Statistical uncertainties and systematic errors in weak lensing mass estimates of galaxy clusters

    NASA Astrophysics Data System (ADS)

    Köhlinger, F.; Hoekstra, H.; Eriksen, M.

    2015-11-01

    Upcoming and ongoing large area weak lensing surveys will also discover large samples of galaxy clusters. Accurate and precise masses of galaxy clusters are of major importance for cosmology, for example, in establishing well-calibrated observational halo mass functions for comparison with cosmological predictions. We investigate the level of statistical uncertainties and sources of systematic errors expected for weak lensing mass estimates. Future surveys that will cover large areas on the sky, such as Euclid or LSST and to lesser extent DES, will provide the largest weak lensing cluster samples with the lowest level of statistical noise regarding ensembles of galaxy clusters. However, the expected low level of statistical uncertainties requires us to scrutinize various sources of systematic errors. In particular, we investigate the bias due to cluster member galaxies which are erroneously treated as background source galaxies due to wrongly assigned photometric redshifts. We find that this effect is significant when referring to stacks of galaxy clusters. Finally, we study the bias due to miscentring, i.e. the displacement between any observationally defined cluster centre and the true minimum of its gravitational potential. The impact of this bias might be significant with respect to the statistical uncertainties. However, complementary future missions such as eROSITA will allow us to define stringent priors on miscentring parameters which will mitigate this bias significantly.

  18. Systematic Error in Hippocampal Volume Asymmetry Measurement is Minimal with a Manual Segmentation Protocol

    PubMed Central

    Rogers, Baxter P.; Sheffield, Julia M.; Luksik, Andrew S.; Heckers, Stephan

    2012-01-01

    Hemispheric asymmetry of hippocampal volume is a common finding that has biological relevance, including associations with dementia and cognitive performance. However, a recent study has reported the possibility of systematic error in measurements of hippocampal asymmetry by magnetic resonance volumetry. We manually traced the volumes of the anterior and posterior hippocampus in 40 healthy people to measure systematic error related to image orientation. We found a bias due to the side of the screen on which the hippocampus was viewed, such that hippocampal volume was larger when traced on the left side of the screen than when traced on the right (p = 0.05). However, this bias was smaller than the anatomical right > left asymmetry of the anterior hippocampus. We found right > left asymmetry of hippocampal volume regardless of image presentation (radiological versus neurological). We conclude that manual segmentation protocols can minimize the effect of image orientation in the study of hippocampal volume asymmetry, but our confirmation that such bias exists suggests strategies to avoid it in future studies. PMID:23248580

  19. Systematic errors inherent in the current modeling of the reflected downward flux term used by remote sensing models.

    PubMed

    Turner, David S

    2004-04-10

    An underlying assumption of satellite data assimilation systems is that the radiative transfer model used to simulate observed satellite radiances has no errors. For practical reasons a fast-forward radiative transfer model is used instead of a highly accurate line-by-line model. The fast model usually replaces the spectral integration of spectral quantities with their monochromatic equivalents, and the errors due to these approximations are assumed to be negligible. The reflected downward flux term contains many approximations of this nature, which are shown to introduce systematic errors. In addition, many fast-forward radiative transfer models simulate the downward flux as the downward radiance along a path defined by the secant of the mean emergent angle, the diffusivity factor. The diffusivity factor is commonly set to 1.66 or to the secant of the satellite zenith angle. Neither case takes into account that the diffusivity factor varies with optical depth, which introduces further errors. I review the two most commonly used methods for simulating reflected downward flux by fast-forward radiative transfer models and point out their inadequacies and limitations. An alternate method of simulating the reflected downward flux is proposed. This method transforms the surface-to-satellite transmittance profile to a transmittance profile suitable for simulating the reflected downward flux by raising the former transmittance to the power of kappa, where kappa itself is a function of channel, surface pressure, and satellite zenith angle. It is demonstrated that this method reduces the fast-forward model error for low to moderate reflectivities. PMID:15098841
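
    The proposed transformation is simple to state: the surface-to-satellite transmittance profile is raised to a power kappa before being used in the reflected downward flux term. A minimal sketch follows; the abstract notes that kappa depends on channel, surface pressure and satellite zenith angle, whereas the profile and fixed kappa value below are hypothetical placeholders.

        # Minimal sketch of the proposed transformation of the transmittance
        # profile for the reflected downward flux term.
        import numpy as np

        def downward_flux_transmittance(tau_surface_to_sat, kappa):
            """Raise a surface-to-satellite transmittance profile to the power kappa."""
            return np.power(tau_surface_to_sat, kappa)

        tau = np.linspace(1.0, 0.6, 6)          # hypothetical transmittance profile
        print(downward_flux_transmittance(tau, kappa=1.8))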

  20. [Second victims of medical errors: a systematic review of the literature].

    PubMed

    Panella, Massimiliano; Rinaldi, Carmela; Vanhaecht, Kris; Donnarumma, Chiara; Tozzi, Quinto; Di Stanislao, Francesco

    2014-01-01

    "Second victims" are health care providers who remain traumatized and suffer at the psycho-physical level after being involved in a patient adverse event. A systematic review of the literature was conducted to: a) estimate the prevalence of second victims among healthcare workers, b) describe personal and work outcomes of second victims, c) identify coping strategies used by second victims to face their problems, and d) describe current support strategies. Findings reveal that the prevalence of "second victims" of medical errors is high, ranging in four studies from 10.4% to 43.3%. Medical errors have a negative impact on healthcare providers involved, leading to physical, cognitive and behavioural symptoms including the practice of defensive medicine. Managers of health organizations need to be aware of the "second victim" phenomenon and ensure adequate support is given to healthcare providers involved. The best strategy seems to be the creation of networks of support at both the individual and organizational levels. More research is needed to evaluate the efficacy of support structures for second victims and to quantify the extent of the practice of defensive medicine following medical error. PMID:24770362

  1. An Examination of the Spatial Distribution of Carbon Dioxide and Systematic Errors

    NASA Technical Reports Server (NTRS)

    Coffey, Brennan; Gunson, Mike; Frankenberg, Christian; Osterman, Greg

    2011-01-01

    The industrial period and modern age are characterized by the combustion of coal, oil, and natural gas for primary energy and transportation, leading to rising levels of atmospheric CO2. This increase, which is being carefully measured, has ramifications throughout the biological world. Through remote sensing, it is possible to measure how many molecules of CO2 lie in a defined column of air. However, other gases and particles, such as aerosols and water, are present in the atmosphere and make such measurements more complicated. Understanding the detailed geometry and path length of the observation is vital to computing the concentration of CO2. By comparing these satellite readings with ground-truth data (TCCON), the systematic errors arising from these sources can be assessed. Once the error is understood, it can be accounted for in the retrieval algorithms to create a set of data which is closer to the TCCON measurements. Using this process, the algorithms are being developed to reduce the bias to within 0.1% of the true value worldwide. At this stage, the accuracy is within 1%, but by correcting small errors in the algorithms, such as accounting for the scattering of sunlight, the desired accuracy can be achieved.

  2. SU-D-BRD-07: Evaluation of the Effectiveness of Statistical Process Control Methods to Detect Systematic Errors For Routine Electron Energy Verification

    SciTech Connect

    Parker, S

    2015-06-15

    Purpose: To evaluate the ability of statistical process control methods to detect systematic errors when using a two dimensional (2D) detector array for routine electron beam energy verification. Methods: Electron beam energy constancy was measured using an aluminum wedge and a 2D diode array on four linear accelerators. Process control limits were established. Measurements were recorded in control charts and compared with both calculated process control limits and TG-142 recommended specification limits. The data was tested for normality, process capability and process acceptability. Additional measurements were recorded while systematic errors were intentionally introduced. Systematic errors included shifts in the alignment of the wedge, incorrect orientation of the wedge, and incorrect array calibration. Results: Control limits calculated for each beam were smaller than the recommended specification limits. Process capability and process acceptability ratios were greater than one in all cases. All data was normally distributed. Shifts in the alignment of the wedge were most apparent for low energies. The smallest shift (0.5 mm) was detectable using process control limits in some cases, while the largest shift (2 mm) was detectable using specification limits in only one case. The wedge orientation tested did not affect the measurements as this did not affect the thickness of aluminum over the detectors of interest. Array calibration dependence varied with energy and selected array calibration. 6 MeV was the least sensitive to array calibration selection while 16 MeV was the most sensitive. Conclusion: Statistical process control methods demonstrated that the data distribution was normally distributed, the process was capable of meeting specifications, and that the process was centered within the specification limits. Though not all systematic errors were distinguishable from random errors, process control limits increased the ability to detect systematic errors
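
    For readers unfamiliar with the statistical process control quantities mentioned above, the following sketch computes 3-sigma control limits and a simple Cp-style capability ratio for a set of repeated QA measurements. The measurement values and tolerance limits are assumed for illustration and are unrelated to the study's data.

```python
import numpy as np

def control_limits(x):
    """3-sigma control limits about the mean of repeated QA measurements."""
    mu, sigma = np.mean(x), np.std(x, ddof=1)
    return mu - 3 * sigma, mu + 3 * sigma

def capability_ratio(x, lsl, usl):
    """Simple Cp-style process capability: specification width / 6 sigma."""
    sigma = np.std(x, ddof=1)
    return (usl - lsl) / (6 * sigma)

# Hypothetical electron-energy constancy measurements (arbitrary units)
measurements = np.array([0.10, -0.05, 0.02, 0.08, -0.03, 0.04, 0.00, 0.06])
lcl, ucl = control_limits(measurements)
cp = capability_ratio(measurements, lsl=-0.5, usl=0.5)  # assumed tolerance
print(f"control limits: [{lcl:.3f}, {ucl:.3f}], Cp = {cp:.2f}")
```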

  3. Assessment of Systematic Chromatic Errors that Impact Sub-1% Photometric Precision in Large-Area Sky Surveys

    SciTech Connect

    Li, T.S.; et al.

    2016-01-01

    Meeting the science goals for many current and future ground-based optical large-area sky surveys requires that the calibrated broadband photometry is stable in time and uniform over the sky to 1% precision or better. Past surveys have achieved photometric precision of 1-2% by calibrating the survey's stellar photometry with repeated measurements of a large number of stars observed in multiple epochs. The calibration techniques employed by these surveys only consider the relative frame-by-frame photometric zeropoint offset and the focal plane position-dependent illumination corrections, which are independent of the source color. However, variations in the wavelength dependence of the atmospheric transmission and the instrumental throughput induce source color-dependent systematic errors. These systematic errors must also be considered to achieve the most precise photometric measurements. In this paper, we examine such systematic chromatic errors using photometry from the Dark Energy Survey (DES) as an example. We define a natural magnitude system for DES and calculate the systematic errors on stellar magnitudes, when the atmospheric transmission and instrumental throughput deviate from the natural system. We conclude that the systematic chromatic errors caused by the change of airmass in each exposure, the change of the precipitable water vapor and aerosol in the atmosphere over time, and the non-uniformity of instrumental throughput over the focal plane, can be up to 2% in some bandpasses. We compare the calculated systematic chromatic errors with the observed DES data. For the test sample data, we correct these errors using measurements of the atmospheric transmission and instrumental throughput. The residual after correction is less than 0.3%. We also find that the errors for non-stellar objects are redshift-dependent and can be larger than those for stars at certain redshifts.

  4. Quantifying and minimising systematic and random errors in X-ray micro-tomography based volume measurements

    NASA Astrophysics Data System (ADS)

    Lin, Q.; Neethling, S. J.; Dobson, K. J.; Courtois, L.; Lee, P. D.

    2015-04-01

    X-ray micro-tomography (XMT) is increasingly used for the quantitative analysis of the volumes of features within the 3D images. As with any measurement, there will be error and uncertainty associated with these measurements. In this paper a method for quantifying both the systematic and random components of this error in the measured volume is presented. The systematic error is the offset between the actual and measured volume which is consistent between different measurements and can therefore be eliminated by appropriate calibration. In XMT measurements this is often caused by an inappropriate threshold value. The random error is not associated with any systematic offset in the measured volume and could be caused, for instance, by variations in the location of the specific object relative to the voxel grid. It can be eliminated by repeated measurements. It was found that both the systematic and random components of the error are a strong function of the size of the object measured relative to the voxel size. The relative error in the volume was found to follow approximately a power law relationship with the volume of the object, but with an exponent that implied, unexpectedly, that the relative error was proportional to the radius of the object for small objects, though the exponent did imply that the relative error was approximately proportional to the surface area of the object for larger objects. In an example application involving the size of mineral grains in an ore sample, the uncertainty associated with the random error in the volume is larger than the object itself for objects smaller than about 8 voxels and is greater than 10% for any object smaller than about 260 voxels. A methodology is presented for reducing the random error by combining the results from either multiple scans of the same object or scans of multiple similar objects, with an uncertainty of less than 5% requiring 12 objects of 100 voxels or 600 objects of 4 voxels. As the systematic
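
    The reduction of the random error component by pooling repeated measurements follows the familiar 1/sqrt(N) scaling of the standard error. The snippet below illustrates that averaging step with an assumed single-measurement error; it is not the calibration procedure used in the paper.

```python
import numpy as np

def pooled_relative_error(rel_error_single, n_objects):
    """Relative random error after averaging n independent volume
    measurements, assuming uncorrelated errors (1/sqrt(N) scaling)."""
    return rel_error_single / np.sqrt(n_objects)

# Assumed single-measurement relative error for a small object
single = 0.17  # 17%, hypothetical
for n in (1, 4, 12, 50):
    print(n, f"{pooled_relative_error(single, n):.3f}")
```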

  5. RANDOM AND SYSTEMATIC FIELD ERRORS IN THE SNS RING: A STUDY OF THEIR EFFECTS AND COMPENSATION

    SciTech Connect

    GARDNER,C.J.; LEE,Y.Y.; WENG,W.T.

    1998-06-22

    The Accumulator Ring for the proposed Spallation Neutron Source (SNS) [1] is to accept a 1 ms beam pulse from a 1 GeV Proton Linac at a repetition rate of 60 Hz. For each beam pulse, 10^14 protons (some 1,000 turns) are to be accumulated via charge-exchange injection and then promptly extracted to an external target for the production of neutrons by spallation. At this very high intensity, stringent limits (less than two parts in 10,000 per pulse) on beam loss during accumulation must be imposed in order to keep activation of ring components at an acceptable level. To stay within the desired limit, the effects of random and systematic field errors in the ring require careful attention. This paper describes the authors' studies of these effects and the magnetic corrector schemes for their compensation.

  6. A multi-year methane inversion using SCIAMACHY, accounting for systematic errors using TCCON measurements

    NASA Astrophysics Data System (ADS)

    Houweling, S.; Krol, M.; Bergamaschi, P.; Frankenberg, C.; Dlugokencky, E. J.; Morino, I.; Notholt, J.; Sherlock, V.; Wunch, D.; Beck, V.; Gerbig, C.; Chen, H.; Kort, E. A.; Röckmann, T.; Aben, I.

    2013-10-01

    This study investigates the use of total column CH4 (XCH4) retrievals from the SCIAMACHY satellite instrument for quantifying large scale emissions of methane. A unique data set from SCIAMACHY is available spanning almost a decade of measurements, covering a period when the global CH4 growth rate showed a marked transition from stable to increasing mixing ratios. The TM5 4DVAR inverse modelling system has been used to infer CH4 emissions from a combination of satellite and surface measurements for the period 2003-2010. In contrast to earlier inverse modelling studies, the SCIAMACHY retrievals have been corrected for systematic errors using the TCCON network of ground based Fourier transform spectrometers. The aim is to further investigate the role of bias correction of satellite data in inversions. Methods for bias correction are discussed, and the sensitivity of the optimized emissions to alternative bias correction functions is quantified. It is found that the use of SCIAMACHY retrievals in TM5 4DVAR increases the estimated inter-annual variability of large-scale fluxes by 22% compared with the use of only surface observations. The difference in global methane emissions between two year periods before and after July 2006 is estimated at 27-35 Tg yr-1. The use of SCIAMACHY retrievals causes a shift in the emissions from the extra-tropics to the tropics of 50 ± 25 Tg yr-1. The large uncertainty in this value arises from the uncertainty in the bias correction functions. Using measurements from the HIPPO and BARCA aircraft campaigns, we show that systematic errors are a main factor limiting the performance of the inversions. To further constrain tropical emissions of methane using current and future satellite missions, extended validation capabilities in the tropics are of critical importance.

  7. A multi-year methane inversion using SCIAMACHY, accounting for systematic errors using TCCON measurements

    NASA Astrophysics Data System (ADS)

    Houweling, S.; Krol, M.; Bergamaschi, P.; Frankenberg, C.; Dlugokencky, E. J.; Morino, I.; Notholt, J.; Sherlock, V.; Wunch, D.; Beck, V.; Gerbig, C.; Chen, H.; Kort, E. A.; Röckmann, T.; Aben, I.

    2014-04-01

    This study investigates the use of total column CH4 (XCH4) retrievals from the SCIAMACHY satellite instrument for quantifying large-scale emissions of methane. A unique data set from SCIAMACHY is available spanning almost a decade of measurements, covering a period when the global CH4 growth rate showed a marked transition from stable to increasing mixing ratios. The TM5 4DVAR inverse modelling system has been used to infer CH4 emissions from a combination of satellite and surface measurements for the period 2003-2010. In contrast to earlier inverse modelling studies, the SCIAMACHY retrievals have been corrected for systematic errors using the TCCON network of ground-based Fourier transform spectrometers. The aim is to further investigate the role of bias correction of satellite data in inversions. Methods for bias correction are discussed, and the sensitivity of the optimized emissions to alternative bias correction functions is quantified. It is found that the use of SCIAMACHY retrievals in TM5 4DVAR increases the estimated inter-annual variability of large-scale fluxes by 22% compared with the use of only surface observations. The difference in global methane emissions between 2-year periods before and after July 2006 is estimated at 27-35 Tg yr-1. The use of SCIAMACHY retrievals causes a shift in the emissions from the extra-tropics to the tropics of 50 ± 25 Tg yr-1. The large uncertainty in this value arises from the uncertainty in the bias correction functions. Using measurements from the HIPPO and BARCA aircraft campaigns, we show that systematic errors in the SCIAMACHY measurements are a main factor limiting the performance of the inversions. To further constrain tropical emissions of methane using current and future satellite missions, extended validation capabilities in the tropics are of critical importance.
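
    The bias correction of satellite retrievals against a ground-based reference can, in its simplest form, be a least-squares fit of collocated differences to a chosen predictor, which is then subtracted from new retrievals. The sketch below shows that generic idea with invented numbers; it is not the correction function actually derived from the TCCON data.

```python
import numpy as np

# Hypothetical collocated differences: satellite XCH4 minus TCCON XCH4 (ppb),
# regressed against a simple predictor (here an air-mass-like proxy).
predictor = np.array([0.8, 0.9, 1.0, 1.1, 1.2, 1.3])
bias = np.array([12.0, 9.5, 8.1, 6.0, 4.2, 2.9])

# Least-squares linear bias-correction function b(x) = a*x + c
a, c = np.polyfit(predictor, bias, deg=1)

def corrected_xch4(xch4_retrieved, x):
    """Subtract the fitted bias model from a retrieval (illustrative only)."""
    return xch4_retrieved - (a * x + c)

print(corrected_xch4(1790.0, 1.05))
```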

  8. Calibration and systematic error analysis for the COBE DMR 4-year sky maps

    SciTech Connect

    Kogut, A.; Banday, A.J.; Bennett, C.L.; Gorski, K.M.; Hinshaw,G.; Jackson, P.D.; Keegstra, P.; Lineweaver, C.; Smoot, G.F.; Tenorio,L.; Wright, E.L.

    1996-01-04

    The Differential Microwave Radiometers (DMR) instrument aboard the Cosmic Background Explorer (COBE) has mapped the full microwave sky to a mean sensitivity of 26 μK per 7° field of view. The absolute calibration is determined to 0.7 percent with drifts smaller than 0.2 percent per year. We have analyzed both the raw differential data and the pixelized sky maps for evidence of contaminating sources such as solar system foregrounds, instrumental susceptibilities, and artifacts from data recovery and processing. Most systematic effects couple only weakly to the sky maps. The largest uncertainties in the maps result from the instrument's susceptibility to Earth's magnetic field, microwave emission from Earth, and upper limits to potential effects at the spacecraft spin period. Systematic effects in the maps are small compared to either the noise or the celestial signal: the 95 percent confidence upper limit for the pixel-pixel rms from all identified systematics is less than 6 μK in the worst channel. A power spectrum analysis of the (A-B)/2 difference maps shows no evidence for additional undetected systematic effects.

  9. Mapping systematic errors in helium abundance determinations using Markov Chain Monte Carlo

    SciTech Connect

    Aver, Erik; Olive, Keith A.; Skillman, Evan D. E-mail: olive@umn.edu

    2011-03-01

    Monte Carlo techniques have been used to evaluate the statistical and systematic uncertainties in the helium abundances derived from extragalactic H II regions. The helium abundance is sensitive to several physical parameters associated with the H II region. In this work, we introduce Markov Chain Monte Carlo (MCMC) methods to efficiently explore the parameter space and determine the helium abundance, the physical parameters, and the uncertainties derived from observations of metal-poor nebulae. Experiments with synthetic data show that the MCMC method is superior to previous implementations (based on flux perturbation) in that it is not affected by biases due to non-physical parameter space. The MCMC analysis allows a detailed exploration of degeneracies, and, in particular, a false minimum that occurs at large values of optical depth in the He I emission lines. We demonstrate that introducing the electron temperature derived from the [O III] emission lines as a prior, in a very conservative manner, produces negligible bias and effectively eliminates the false minima occurring at large optical depth. We perform a frequentist analysis on data from several "high quality" systems. Likelihood plots illustrate degeneracies, asymmetries, and limits of the determination. In agreement with previous work, we find relatively large systematic errors, limiting the precision of the primordial helium abundance for currently available spectra.
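
    A minimal random-walk Metropolis sampler gives the flavour of the MCMC exploration described above. It samples a toy one-dimensional Gaussian posterior and is unrelated to the actual helium-abundance likelihood or parameter set.

```python
import numpy as np

def metropolis(log_post, x0, n_steps=10000, step=0.02, rng=None):
    """Random-walk Metropolis sampler for a 1-D log-posterior (toy example)."""
    rng = rng or np.random.default_rng(0)
    x, lp, chain = x0, log_post(x0), []
    for _ in range(n_steps):
        x_new = x + step * rng.normal()
        lp_new = log_post(x_new)
        if np.log(rng.uniform()) < lp_new - lp:  # accept/reject step
            x, lp = x_new, lp_new
        chain.append(x)
    return np.array(chain)

# Toy Gaussian posterior standing in for a single abundance-like parameter
chain = metropolis(lambda y: -0.5 * ((y - 0.25) / 0.01) ** 2, x0=0.30)
print(chain[2000:].mean(), chain[2000:].std())  # discard burn-in
```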

  10. Systematic and Statistical Errors Associated with Nuclear Decay Constant Measurements Using the Counting Technique

    NASA Astrophysics Data System (ADS)

    Koltick, David; Wang, Haoyu; Liu, Shih-Chieh; Heim, Jordan; Nistor, Jonathan

    2016-03-01

    Typical nuclear decay constants are measured at an accuracy level of 10^-2. Numerous applications, including tests of unconventional theories, dating of materials, and long-term inventory evolution, require decay constant accuracies at the level of 10^-4 to 10^-5. The statistical and systematic errors associated with precision measurements of decays using the counting technique are presented. Precision requires high count rates, which introduces time-dependent dead time and pile-up corrections. An approach to overcome these issues by continuously recording the detector current is presented. Other systematic corrections are also discussed: the time-dependent dead time due to background radiation, control of target motion and radiation flight path variation due to environmental conditions, and time-dependent effects caused by scattered events. The incorporation of blind experimental techniques can help make the measurement independent of past results. A spectrometer design and data analysis that can accomplish these goals are reviewed. The author would like to thank TechSource, Inc. and Advanced Physics Technologies, LLC. for their support in this work.

  11. Impact of instrumental systematic errors on fine-structure constant measurements with quasar spectra

    NASA Astrophysics Data System (ADS)

    Whitmore, Jonathan B.; Murphy, Michael T.

    2015-02-01

    We present a new `supercalibration' technique for measuring systematic distortions in the wavelength scales of high-resolution spectrographs. By comparing spectra of `solar twin' stars or asteroids with a reference laboratory solar spectrum, distortions in the standard thorium-argon calibration can be tracked with ~10 m s^-1 precision over the entire optical wavelength range on scales of both echelle orders (~50-100 Å) and entire spectrograph arms (~1000-3000 Å). Using archival spectra from the past 20 yr, we have probed the supercalibration history of the Very Large Telescope-Ultraviolet and Visible Echelle Spectrograph (VLT-UVES) and Keck-High Resolution Echelle Spectrograph (HIRES) spectrographs. We find that systematic errors in their wavelength scales are ubiquitous and substantial, with long-range distortions varying between typically ±200 m s^-1 per 1000 Å. We apply a simple model of these distortions to simulated spectra that characterize the large UVES and HIRES quasar samples which previously indicated possible evidence for cosmological variations in the fine-structure constant, α. The spurious deviations in α produced by the model closely match important aspects of the VLT-UVES quasar results at all redshifts and partially explain the HIRES results, though not self-consistently at all redshifts. That is, the apparent ubiquity, size and general characteristics of the distortions are capable of significantly weakening the evidence for variations in α from quasar absorption lines.

  12. X-ray optics metrology limited by random noise, instrumental drifts, and systematic errors

    SciTech Connect

    Yashchuk, Valeriy V.; Anderson, Erik H.; Barber, Samuel K.; Cambie, Rossana; Celestre, Richard; Conley, Raymond; Goldberg, Kenneth A.; McKinney, Wayne R.; Morrison, Gregory; Takacs, Peter Z.; Voronov, Dmitriy L.; Yuan, Sheng; Padmore, Howard A.

    2010-07-09

    Continuous, large-scale efforts to improve and develop third- and fourth-generation synchrotron radiation light sources for unprecedented high-brightness, low-emittance, and coherent x-ray beams demand diffracting and reflecting x-ray optics suitable for micro- and nano-focusing, brightness preservation, and super-high resolution. One of the major impediments to the development of x-ray optics with the required beamline performance is the inadequate present level of optical and at-wavelength metrology and the insufficient integration of the metrology into the fabrication process and into beamlines. Based on our experience at the ALS Optical Metrology Laboratory, we review the experimental methods and techniques that allow us to mitigate significant optical metrology problems related to random, systematic, and drift errors with super-high-quality x-ray optics. Measurement errors below 0.2 μrad have become routine. We present recent results from the ALS of temperature-stabilized nano-focusing optics and dedicated at-wavelength metrology. The international effort to develop a next-generation Optical Slope Measuring System (OSMS) to address these problems is also discussed. Finally, we analyze the remaining obstacles to further improvement of beamline x-ray optics and dedicated metrology, and highlight the ways we see to overcome the problems.

  13. Strategies for Assessing Diffusion Anisotropy on the Basis of Magnetic Resonance Images: Comparison of Systematic Errors

    PubMed Central

    Boujraf, Saïd

    2014-01-01

    Diffusion weighted imaging uses the signal loss associated with the random thermal motion of water molecules in the presence of magnetic field gradients to derive a number of parameters that reflect the translational mobility of the water molecules in tissues. With a suitable experimental set-up, it is possible to calculate all the elements of the local diffusion tensor (DT) and derived parameters describing the behavior of the water molecules in each voxel. One of the emerging applications of the information obtained is an interpretation of the diffusion anisotropy in terms of the architecture of the underlying tissue. These interpretations can only be made provided the experimental data are sufficiently accurate. However, the DT results are susceptible to two systematic error sources: on the one hand, the presence of signal noise can lead to artificial divergence of the diffusivities; on the other hand, the use of a simplified model for the interaction of the protons with the diffusion weighting and imaging field gradients (b-matrix calculation), common in the clinical setting, also leads to deviations in the derived diffusion characteristics. In this paper, we study the importance of these two sources of error on the basis of experimental data obtained on a clinical magnetic resonance imaging system for an isotropic phantom using a state-of-the-art single-shot echo planar imaging sequence. Our results show that optimal diffusion imaging requires combining a correct calculation of the b-matrix with a sufficiently large signal-to-noise ratio. PMID:24761372

  14. Strategies for assessing diffusion anisotropy on the basis of magnetic resonance images: comparison of systematic errors.

    PubMed

    Boujraf, Saïd

    2014-04-01

    Diffusion weighted imaging uses the signal loss associated with the random thermal motion of water molecules in the presence of magnetic field gradients to derive a number of parameters that reflect the translational mobility of the water molecules in tissues. With a suitable experimental set-up, it is possible to calculate all the elements of the local diffusion tensor (DT) and derived parameters describing the behavior of the water molecules in each voxel. One of the emerging applications of the information obtained is an interpretation of the diffusion anisotropy in terms of the architecture of the underlying tissue. These interpretations can only be made provided the experimental data are sufficiently accurate. However, the DT results are susceptible to two systematic error sources: on the one hand, the presence of signal noise can lead to artificial divergence of the diffusivities; on the other hand, the use of a simplified model for the interaction of the protons with the diffusion weighting and imaging field gradients (b-matrix calculation), common in the clinical setting, also leads to deviations in the derived diffusion characteristics. In this paper, we study the importance of these two sources of error on the basis of experimental data obtained on a clinical magnetic resonance imaging system for an isotropic phantom using a state-of-the-art single-shot echo planar imaging sequence. Our results show that optimal diffusion imaging requires combining a correct calculation of the b-matrix with a sufficiently large signal-to-noise ratio. PMID:24761372
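
    One way to see the noise-induced divergence of diffusivities is to perturb the eigenvalues of an isotropic diffusion tensor and watch the fractional anisotropy rise above zero. The sketch below does this with assumed noise levels; it is a generic illustration, not the analysis performed in the paper.

```python
import numpy as np

def fractional_anisotropy(eigvals):
    """Fractional anisotropy of a diffusion tensor from its eigenvalues."""
    l = np.asarray(eigvals, dtype=float)
    md = l.mean()
    return np.sqrt(1.5 * np.sum((l - md) ** 2) / np.sum(l ** 2))

rng = np.random.default_rng(0)
true_adc = 2.0e-3  # mm^2/s, isotropic phantom value (assumed)
for noise in (0.0, 0.05, 0.15):  # relative eigenvalue noise (assumed)
    fa = np.mean([fractional_anisotropy(true_adc * (1 + noise * rng.normal(size=3)))
                  for _ in range(2000)])
    print(noise, round(fa, 3))  # mean FA > 0 even though the phantom is isotropic
```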

  15. A table of integrals of the error function. II - Additions and corrections.

    NASA Technical Reports Server (NTRS)

    Geller, M.; Ng, E. W.

    1971-01-01

    Integrals of products of error functions with other functions are presented, taking into account a combination of the error function with powers, a combination of the error function with exponentials and powers, a combination of the error function with exponentials of more complicated arguments, definite integrals from Laplace transforms, and a combination of the error function with trigonometric functions. Other integrals considered include a combination of the error function with logarithms and powers, a combination of two error functions, and a combination of the error function with other special functions.

  16. Systematic Errors in Low-latency Gravitational Wave Parameter Estimation Impact Electromagnetic Follow-up Observations

    NASA Astrophysics Data System (ADS)

    Littenberg, Tyson B.; Farr, Ben; Coughlin, Scott; Kalogera, Vicky

    2016-03-01

    Among the most eagerly anticipated opportunities made possible by Advanced LIGO/Virgo are multimessenger observations of compact mergers. Optical counterparts may be short-lived so rapid characterization of gravitational wave (GW) events is paramount for discovering electromagnetic signatures. One way to meet the demand for rapid GW parameter estimation is to trade off accuracy for speed, using waveform models with simplified treatment of the compact objects’ spin. We report on the systematic errors in GW parameter estimation suffered when using different spin approximations to recover generic signals. Component mass measurements can be biased by > 5σ using simple-precession waveforms and in excess of 20σ when non-spinning templates are employed. This suggests that electromagnetic observing campaigns should not take a strict approach to selecting which LIGO/Virgo candidates warrant follow-up observations based on low-latency mass estimates. For sky localization, we find that searched areas are up to a factor of ∼2 larger for non-spinning analyses, and are systematically larger for any of the simplified waveforms considered in our analysis. Distance biases for the non-precessing waveforms can be in excess of 100% and are largest when the spin angular momenta are in the orbital plane of the binary. We confirm that spin-aligned waveforms should be used for low-latency parameter estimation at the minimum. Including simple precession, though more computationally costly, mitigates biases except for signals with extreme precession effects. Our results shine a spotlight on the critical need for development of computationally inexpensive precessing waveforms and/or massively parallel algorithms for parameter estimation.

  17. Towards eliminating systematic errors caused by the experimental conditions in Biochemical Methane Potential (BMP) tests

    SciTech Connect

    Strömberg, Sten; Nistor, Mihaela; Liu, Jing

    2014-11-15

    Highlights: • The evaluated factors introduce significant systematic errors (10–38%) in BMP tests. • Ambient temperature (T) has the most substantial impact (∼10%) at low altitude. • Ambient pressure (p) has the most substantial impact (∼68%) at high altitude. • Continuous monitoring of T and p is not necessary for kinetic calculations. - Abstract: The Biochemical Methane Potential (BMP) test is increasingly recognised as a tool for selecting and pricing biomass material for production of biogas. However, the results for the same substrate often differ between laboratories and much work to standardise such tests is still needed. In the current study, the effects from four environmental factors (i.e. ambient temperature and pressure, water vapour content and initial gas composition of the reactor headspace) on the degradation kinetics and the determined methane potential were evaluated with a 2^4 full factorial design. Four substrates, with different biodegradation profiles, were investigated and the ambient temperature was found to be the most significant contributor to errors in the methane potential. Concerning the kinetics of the process, the environmental factors’ impact on the calculated rate constants was negligible. The impact of the environmental factors on the kinetic parameters and methane potential from performing a BMP test at different geographical locations around the world was simulated by adjusting the data according to the ambient temperature and pressure of some chosen model sites. The largest effect on the methane potential was registered from tests performed at high altitudes due to a low ambient pressure. The results from this study illustrate the importance of considering the environmental factors’ influence on volumetric gas measurement in BMP tests. This is essential to achieve trustworthy and standardised results that can be used by researchers and end users from all over the world.
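
    The sensitivity to ambient temperature and pressure enters largely through the normalization of measured gas volumes to dry standard conditions. A minimal ideal-gas correction is sketched below; all readings are assumed for illustration and the function is not taken from the cited study.

```python
def normalize_gas_volume(v_measured_ml, t_ambient_k, p_ambient_kpa, p_water_kpa):
    """Normalize a measured gas volume to dry standard conditions
    (0 degC, 101.325 kPa) using the ideal gas law (illustrative only)."""
    T0, P0 = 273.15, 101.325
    return v_measured_ml * ((p_ambient_kpa - p_water_kpa) / P0) * (T0 / t_ambient_k)

# Assumed readings: 250 mL of gas at 25 degC, 95 kPa (high altitude),
# with 3.17 kPa water vapour pressure in the headspace
print(normalize_gas_volume(250.0, 298.15, 95.0, 3.17))
```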

  18. Interventions to reduce wrong blood in tube errors in transfusion: a systematic review.

    PubMed

    Cottrell, Susan; Watson, Douglas; Eyre, Toby A; Brunskill, Susan J; Dorée, Carolyn; Murphy, Michael F

    2013-10-01

    This systematic review addresses the issue of wrong blood in tube (WBIT). The objective was to identify interventions that have been implemented and the effectiveness of these interventions to reduce WBIT incidence in red blood cell transfusion. Eligible articles were identified through a comprehensive search of The Cochrane Library, MEDLINE, EMBASE, Cinahl, BNID, and the Transfusion Evidence Library to April 2013. Initial search criteria were wide including primary intervention or observational studies, case reports, expert opinion, and guidelines. There was no restriction by study type, language, or status. Publications before 1995, reviews or reports of a secondary nature, studies of sampling errors outwith transfusion, and articles involving animals were excluded. The primary outcome was a reduction in errors. Study characteristics, outcomes measured, and methodological quality were extracted by 2 authors independently. The principal method of analysis was descriptive. A total of 12,703 references were initially identified. Preliminary secondary screening by 2 reviewers reduced articles for detailed screening to 128 articles. Eleven articles were eventually identified as eligible, resulting in 9 independent studies being included in the review. The overall finding was that all the identified interventions reduced WBIT incidence. Five studies measured the effect of a single intervention, for example, changes to blood sample labeling, weekly feedback, handwritten transfusion requests, and an electronic transfusion system. Four studies reported multiple interventions including education, second check of ID at sampling, and confirmatory sampling. It was not clear which intervention was the most effective. Sustainability of the effectiveness of interventions was also unclear. Targeted interventions, either single or multiple, can lead to a reduction in WBIT; but the sustainability of effectiveness is uncertain. Data on the pre- and postimplementation of

  19. Derivation and Application of a Global Albedo yielding an Optical Brightness To Physical Size Transformation Free of Systematic Errors

    NASA Technical Reports Server (NTRS)

    Mulrooney, Dr. Mark K.; Matney, Dr. Mark J.

    2007-01-01

    Orbital object data acquired via optical telescopes can play a crucial role in accurately defining the space environment. Radar systems probe the characteristics of small debris by measuring the reflected electromagnetic energy from an object of the same order of size as the wavelength of the radiation. This signal is affected by electrical conductivity of the bulk of the debris object, as well as its shape and orientation. Optical measurements use reflected solar radiation with wavelengths much smaller than the size of the objects. Just as with radar, the shape and orientation of an object are important, but we only need to consider the surface electrical properties of the debris material (i.e., the surface albedo), not the bulk electromagnetic properties. As a result, these two methods are complementary in that they measure somewhat independent physical properties to estimate the same thing, debris size. Short arc optical observations such as are typical of NASA's Liquid Mirror Telescope (LMT) give enough information to estimate an Assumed Circular Orbit (ACO) and an associated range. This information, combined with the apparent magnitude, can be used to estimate an "absolute" brightness (scaled to a fixed range and phase angle). This absolute magnitude is what is used to estimate debris size. However, the shape and surface albedo effects make the size estimates subject to systematic and random errors, such that it is impossible to ascertain the size of an individual object with any certainty. However, as has been shown with radar debris measurements, that does not preclude the ability to estimate the size distribution of a number of objects statistically. After systematic errors have been eliminated (range errors, phase function assumptions, photometry) there remains a random geometric albedo distribution that relates object size to absolute magnitude. Measurements by the LMT of a subset of tracked debris objects with sizes estimated from their radar cross

  20. Systematic errors in optical-flow velocimetry for turbulent flows and flames.

    PubMed

    Fielding, J; Long, M B; Fielding, G; Komiyama, M

    2001-02-20

    Optical-flow (OF) velocimetry is based on extracting velocity information from two-dimensional scalar images and represents an unseeded alternative to particle-image velocimetry in turbulent flows. The performance of the technique is examined by direct comparison with simultaneous particle-image velocimetry in both an isothermal turbulent flow and a turbulent flame by use of acetone-OH laser-induced fluorescence. Two representative region-based correlation OF algorithms are applied to assess the general accuracy of the technique. Systematic discrepancies between particle-image velocimetry and OF velocimetry are identified with increasing distance from the center line, indicating potential limitations of the current OF techniques. Directional errors are present at all radial positions, with differences in excess of 10 degrees being typical. An experimental measurement setup is described that allows the simultaneous measurement of Mie scattering from seed particles and laser-induced fluorescence on the same CCD camera at two distinct times for validation studies. PMID:18357055

  1. The shrinking Sun: A systematic error in local correlation tracking of solar granulation

    NASA Astrophysics Data System (ADS)

    Löptien, B.; Birch, A. C.; Duvall, T. L.; Gizon, L.; Schou, J.

    2016-05-01

    Context. Local correlation tracking of granulation (LCT) is an important method for measuring horizontal flows in the photosphere. This method exhibits a systematic error that looks like a flow converging toward disk center, which is also known as the shrinking-Sun effect. Aims: We aim to study the nature of the shrinking-Sun effect for continuum intensity data and to derive a simple model that can explain its origin. Methods: We derived LCT flow maps by running the LCT code Fourier Local Correlation Tracking (FLCT) on tracked and remapped continuum intensity maps provided by the Helioseismic and Magnetic Imager (HMI) onboard the Solar Dynamics Observatory (SDO). We also computed flow maps from synthetic continuum images generated from STAGGER code simulations of solar surface convection. We investigated the origin of the shrinking-Sun effect by generating an average granule from synthetic data from the simulations. Results: The LCT flow maps derived from the HMI data and the simulations exhibit a shrinking-Sun effect of comparable magnitude. The origin of this effect is related to the apparent asymmetry of granulation originating from radiative transfer effects when observing with a viewing angle inclined from vertical. This causes, in combination with the expansion of the granules, an apparent motion toward disk center.

  2. Multi-Period Many-Objective Groundwater Monitoring Design Given Systematic Model Errors and Uncertainty

    NASA Astrophysics Data System (ADS)

    Kollat, J. B.; Reed, P. M.

    2011-12-01

    This study demonstrates how many-objective long-term groundwater monitoring (LTGM) network design tradeoffs evolve across multiple management periods given systematic model errors (i.e., predictive bias), groundwater flow-and-transport forecasting uncertainties, and contaminant observation uncertainties. Our analysis utilizes the Adaptive Strategies for Sampling in Space and Time (ASSIST) framework, which is composed of three primary components: (1) bias-aware Ensemble Kalman Filtering, (2) many-objective hierarchical Bayesian optimization, and (3) interactive visual analytics for understanding spatiotemporal network design tradeoffs. A physical aquifer experiment is utilized to develop a severely challenging multi-period observation system simulation experiment (OSSE) that reflects the challenges and decisions faced in monitoring contaminated groundwater systems. The experimental aquifer OSSE shows both the influence and consequences of plume dynamics as well as alternative cost-savings strategies in shaping how LTGM many-objective tradeoffs evolve. Our findings highlight the need to move beyond least-cost, purely statistical monitoring frameworks to consider many-objective evaluations of LTGM tradeoffs. The ASSIST framework provides a highly flexible approach for measuring the value of observables that simultaneously improves how the data are used to inform decisions.

  3. Sources of systematic error in calibrated BOLD based mapping of baseline oxygen extraction fraction.

    PubMed

    Blockley, Nicholas P; Griffeth, Valerie E M; Stone, Alan J; Hare, Hannah V; Bulte, Daniel P

    2015-11-15

    Recently a new class of calibrated blood oxygen level dependent (BOLD) functional magnetic resonance imaging (MRI) methods were introduced to quantitatively measure the baseline oxygen extraction fraction (OEF). These methods rely on two respiratory challenges and a mathematical model of the resultant changes in the BOLD functional MRI signal to estimate the OEF. However, this mathematical model does not include all of the effects that contribute to the BOLD signal, it relies on several physiological assumptions and it may be affected by intersubject physiological variability. The aim of this study was to investigate these sources of systematic error and their effect on estimating the OEF. This was achieved through simulation using a detailed model of the BOLD signal. Large ranges for intersubject variability in baseline physiological parameters such as haematocrit and cerebral blood volume were considered. Despite this the uncertainty in the relationship between the measured BOLD signals and the OEF was relatively low. Investigations of the physiological assumptions that underlie the mathematical model revealed that OEF measurements are likely to be overestimated if oxygen metabolism changes during hypercapnia or cerebral blood flow changes under hyperoxia. Hypoxic hypoxia was predicted to result in an underestimation of the OEF, whilst anaemic hypoxia was found to have only a minimal effect. PMID:26254114

  4. The Additional Error of Inertial Sensors Induced by Hypersonic Flight Conditions.

    PubMed

    Karachun, Volodimir; Mel'nick, Viktorij; Korobiichuk, Igor; Nowicki, Michał; Szewczyk, Roman; Kobzar, Svitlana

    2016-01-01

    The emergence of hypersonic technology poses a new challenge for inertial navigation sensors, widely used in the aerospace industry. The main problems are: extremely high temperatures, vibration of the fuselage, penetrating acoustic radiation and shock N-waves. The nature of the additional errors of the gyroscopic inertial sensor with hydrostatic suspension components under operating conditions, generated by forced precession of the movable part of the suspension due to diffraction phenomena in acoustic fields, is explained. The cause of the disturbing moments in the form of the Coriolis inertia forces during the transition of the suspension surface into the category of impedance is revealed. The boundaries of occurrence of the features on the resonance wave match are described. The values of the "false" angular velocity as a result of the elastic-stress state of the suspension in the acoustic fields are determined. PMID:26927122

  5. The Additional Error of Inertial Sensors Induced by Hypersonic Flight Conditions

    PubMed Central

    Karachun, Volodimir; Mel’nick, Viktorij; Korobiichuk, Igor; Nowicki, Michał; Szewczyk, Roman; Kobzar, Svitlana

    2016-01-01

    The emergence of hypersonic technology poses a new challenge for inertial navigation sensors, widely used in the aerospace industry. The main problems are: extremely high temperatures, vibration of the fuselage, penetrating acoustic radiation and shock N-waves. The nature of the additional errors of the gyroscopic inertial sensor with hydrostatic suspension components under operating conditions, generated by forced precession of the movable part of the suspension due to diffraction phenomena in acoustic fields, is explained. The cause of the disturbing moments in the form of the Coriolis inertia forces during the transition of the suspension surface into the category of impedance is revealed. The boundaries of occurrence of the features on the resonance wave match are described. The values of the “false” angular velocity as a result of the elastic-stress state of the suspension in the acoustic fields are determined. PMID:26927122

  6. Analysis of systematic errors of the ASM/RXTE monitor and GT-48 γ-ray telescope

    NASA Astrophysics Data System (ADS)

    Fidelis, V. V.

    2011-06-01

    The observational data concerning variations in the light curves of supernova remnants (the Crab Nebula, Cassiopeia A, and Tycho Brahe) and the pulsar Vela over a 14-day scale, which may be attributed to systematic errors of the ASM/RXTE monitor, are presented. The experimental systematic errors of the GT-48 γ-ray telescope in the mono mode of operation were also determined. For this, the observational data of TeV J2032+4130 (Cyg γ-2, according to the Crimean version) were used, and the stationary nature of its γ-ray emission was confirmed by long-term observations performed with HEGRA and MAGIC. The results of this research allow us to draw the following conclusions: (1) light curves of supernova remnants averaged over long observing periods show false statistically significant flux variations, (2) the level of systematic errors is proportional to the registered flux and decreases with increasing temporal scale of averaging, (3) the light curves of sources may be modulated with a one-year period, and (4) the systematic errors of the GT-48 γ-ray telescope, caused jointly by observations in the mono mode and data processing with the stereo algorithm, amount to 0.12 min^-1.

  7. Evaluating IMRT and VMAT dose accuracy: Practical examples of failure to detect systematic errors when applying a commonly used metric and action levels

    SciTech Connect

    Nelms, Benjamin E.; Chan, Maria F.; Jarry, Geneviève; Lemire, Matthieu; Lowden, John; Hampton, Carnell

    2013-11-15

    correctable after detection and diagnosis, and the uncorrectable errors provided useful information about system limitations, which is another key element of system commissioning.Conclusions: Many forms of relevant systematic errors can go undetected when the currently prevalent metrics for IMRT/VMAT commissioning are used. If alternative methods and metrics are used instead of (or in addition to) the conventional metrics, these errors are more likely to be detected, and only once they are detected can they be properly diagnosed and rooted out of the system. Removing systematic errors should be a goal not only of commissioning by the end users but also product validation by the manufacturers. For any systematic errors that cannot be removed, detecting and quantifying them is important as it will help the physicist understand the limits of the system and work with the manufacturer on improvements. In summary, IMRT and VMAT commissioning, along with product validation, would benefit from the retirement of the 3%/3 mm passing rates as a primary metric of performance, and the adoption instead of tighter tolerances, more diligent diagnostics, and more thorough analysis.

  8. An online model correction method based on an inverse problem: Part II—systematic model error correction

    NASA Astrophysics Data System (ADS)

    Xue, Haile; Shen, Xueshun; Chou, Jifan

    2015-11-01

    An online systematic error correction is presented and examined as a technique to improve the accuracy of real-time numerical weather prediction, based on a dataset of model errors (MEs) over past intervals. Given the analyses, the ME in each interval (6 h) between two analyses can be iteratively obtained by introducing an unknown tendency term into the prediction equation, as shown in Part I of this two-paper series. In this part, after analyzing the 5-year (2001-2005) GRAPES-GFS (Global Forecast System of the Global and Regional Assimilation and Prediction System) error patterns and evolution, a systematic model error correction is derived with a least-squares approach using the past MEs. To test the correction, we applied the approach in GRAPES-GFS for July 2009 and January 2010. The datasets associated with the initial condition and SST used in this study were based on NCEP (National Centers for Environmental Prediction) FNL (final) data. The results indicated that the Northern Hemisphere equator-to-pole geopotential gradient and westerly winds, which GRAPES-GFS systematically underestimates, were largely enhanced, and the biases in temperature and wind in the tropics were strongly reduced. Therefore, the correction results in a more skillful forecast with lower mean bias and root-mean-square error and a higher anomaly correlation coefficient.
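
    In spirit, the correction estimates a bias term from an archive of past model errors by least squares and subtracts it from new forecasts. For a bias assumed constant in time, the least-squares estimate at each grid point reduces to the mean past error, as the toy sketch below illustrates; it is not the GRAPES-GFS implementation, and all numbers are synthetic.

```python
import numpy as np

# Archive of past 6-h model errors (forecast minus analysis) at each grid point:
# shape (n_past_intervals, n_gridpoints); values are synthetic for illustration.
past_errors = np.random.default_rng(1).normal(loc=0.8, scale=0.3, size=(60, 100))

# For a time-constant bias, the least-squares estimate per grid point is the mean error.
bias_estimate = past_errors.mean(axis=0)

def corrected_forecast(raw_forecast):
    """Subtract the estimated systematic error from a new forecast field."""
    return raw_forecast - bias_estimate

new_forecast = np.random.default_rng(2).normal(size=100)
print(corrected_forecast(new_forecast)[:5])
```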

  9. The Thirty Gigahertz Instrument Receiver for the QUIJOTE Experiment: Preliminary Polarization Measurements and Systematic-Error Analysis

    PubMed Central

    Casas, Francisco J.; Ortiz, David; Villa, Enrique; Cano, Juan L.; Cagigas, Jaime; Pérez, Ana R.; Aja, Beatriz; Terán, J. Vicente; de la Fuente, Luisa; Artal, Eduardo; Hoyland, Roger; Génova-Santos, Ricardo

    2015-01-01

    This paper presents preliminary polarization measurements and systematic-error characterization of the Thirty Gigahertz Instrument receiver developed for the QUIJOTE experiment. The instrument has been designed to measure the polarization of Cosmic Microwave Background radiation from the sky, obtaining the Q, U, and I Stokes parameters of the incoming signal simultaneously. Two kinds of linearly polarized input signals have been used as excitations in the polarimeter measurement tests in the laboratory; these show consistent results in terms of the Stokes parameters obtained. A measurement-based systematic-error characterization technique has been used in order to determine the possible sources of instrumental errors and to assist in the polarimeter calibration process. PMID:26251906

  10. Modeling systematic errors: polychromatic sources of Beer-Lambert deviations in HPLC/UV and nonchromatographic spectrophotometric assays.

    PubMed

    Galli, C

    2001-07-01

    It is well established that the use of polychromatic radiation in spectrophotometric assays leads to excursions from the Beer-Lambert limit. This Note models the resulting systematic error as a function of assay spectral width, slope of molecular extinction coefficient, and analyte concentration. The theoretical calculations are compared with recent experimental results; a parameter is introduced which can be used to estimate the magnitude of the systematic error in both chromatographic and nonchromatographic spectrophotometric assays. It is important to realize that the polychromatic radiation employed in common laboratory equipment can yield assay errors up to approximately 4%, even at absorption levels generally considered 'safe' (i.e. absorption <1). Thus careful consideration of instrumental spectral width, analyte concentration, and slope of molecular extinction coefficient is required to ensure robust analytical methods. PMID:11377063
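
    The deviation can be reproduced by averaging the transmitted intensity over a finite spectral bandpass with a wavelength-dependent extinction coefficient and comparing the resulting apparent absorbance with the monochromatic Beer-Lambert value. The sketch below assumes a flat source profile and a linear slope of the extinction coefficient; all numbers are illustrative, not taken from the Note.

```python
import numpy as np

def apparent_absorbance(conc, path_cm, wl_nm, eps_of_wl, source_of_wl):
    """Absorbance seen with a polychromatic source: -log10 of the
    band-averaged transmitted intensity over the band-averaged incident
    intensity (uniform wavelength grid assumed)."""
    i0 = source_of_wl(wl_nm)
    it = i0 * 10.0 ** (-eps_of_wl(wl_nm) * conc * path_cm)
    return -np.log10(it.mean() / i0.mean())

wl = np.linspace(250.0, 260.0, 201)            # 10 nm bandpass, assumed
eps = lambda w: 1.0e4 - 300.0 * (w - 255.0)    # L mol^-1 cm^-1, assumed linear slope
flat = lambda w: np.ones_like(w)               # flat source spectrum, assumed

for c in (1e-5, 5e-5, 1e-4):
    a_poly = apparent_absorbance(c, 1.0, wl, eps, flat)
    a_mono = 1.0e4 * c * 1.0                   # Beer-Lambert value at band centre
    print(f"c={c:.0e}  A_poly={a_poly:.4f}  A_mono={a_mono:.4f}")
```

    The band-averaged absorbance falls below the monochromatic value as concentration grows, which is the negative Beer-Lambert deviation discussed above.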

  11. The Thirty Gigahertz Instrument Receiver for the QUIJOTE Experiment: Preliminary Polarization Measurements and Systematic-Error Analysis.

    PubMed

    Casas, Francisco J; Ortiz, David; Villa, Enrique; Cano, Juan L; Cagigas, Jaime; Pérez, Ana R; Aja, Beatriz; Terán, J Vicente; de la Fuente, Luisa; Artal, Eduardo; Hoyland, Roger; Génova-Santos, Ricardo

    2015-01-01

    This paper presents preliminary polarization measurements and systematic-error characterization of the Thirty Gigahertz Instrument receiver developed for the QUIJOTE experiment. The instrument has been designed to measure the polarization of Cosmic Microwave Background radiation from the sky, obtaining the Q, U, and I Stokes parameters of the incoming signal simultaneously. Two kinds of linearly polarized input signals have been used as excitations in the polarimeter measurement tests in the laboratory; these show consistent results in terms of the Stokes parameters obtained. A measurement-based systematic-error characterization technique has been used in order to determine the possible sources of instrumental errors and to assist in the polarimeter calibration process. PMID:26251906

  12. Assessment of Systematic Chromatic Errors that Impact Sub-1% Photometric Precision in Large-area Sky Surveys

    NASA Astrophysics Data System (ADS)

    Li, T. S.; DePoy, D. L.; Marshall, J. L.; Tucker, D.; Kessler, R.; Annis, J.; Bernstein, G. M.; Boada, S.; Burke, D. L.; Finley, D. A.; James, D. J.; Kent, S.; Lin, H.; Marriner, J.; Mondrik, N.; Nagasawa, D.; Rykoff, E. S.; Scolnic, D.; Walker, A. R.; Wester, W.; Abbott, T. M. C.; Allam, S.; Benoit-Lévy, A.; Bertin, E.; Brooks, D.; Capozzi, D.; Carnero Rosell, A.; Carrasco Kind, M.; Carretero, J.; Crocce, M.; Cunha, C. E.; D’Andrea, C. B.; da Costa, L. N.; Desai, S.; Diehl, H. T.; Doel, P.; Flaugher, B.; Fosalba, P.; Frieman, J.; Gaztanaga, E.; Goldstein, D. A.; Gruen, D.; Gruendl, R. A.; Gutierrez, G.; Honscheid, K.; Kuehn, K.; Kuropatkin, N.; Maia, M. A. G.; Melchior, P.; Miller, C. J.; Miquel, R.; Mohr, J. J.; Neilsen, E.; Nichol, R. C.; Nord, B.; Ogando, R.; Plazas, A. A.; Romer, A. K.; Roodman, A.; Sako, M.; Sanchez, E.; Scarpine, V.; Schubnell, M.; Sevilla-Noarbe, I.; Smith, R. C.; Soares-Santos, M.; Sobreira, F.; Suchyta, E.; Tarle, G.; Thomas, D.; Vikram, V.; The DES Collaboration

    2016-06-01

    Meeting the science goals for many current and future ground-based optical large-area sky surveys requires that the calibrated broadband photometry is both stable in time and uniform over the sky to 1% precision or better. Past and current surveys have achieved photometric precision of 1%–2% by calibrating the survey’s stellar photometry with repeated measurements of a large number of stars observed in multiple epochs. The calibration techniques employed by these surveys only consider the relative frame-by-frame photometric zeropoint offset and the focal plane position-dependent illumination corrections, which are independent of the source color. However, variations in the wavelength dependence of the atmospheric transmission and the instrumental throughput induce source color-dependent systematic errors. These systematic errors must also be considered to achieve the most precise photometric measurements. In this paper, we examine such systematic chromatic errors (SCEs) using photometry from the Dark Energy Survey (DES) as an example. We first define a natural magnitude system for DES and calculate the systematic errors on stellar magnitudes when the atmospheric transmission and instrumental throughput deviate from the natural system. We conclude that the SCEs caused by the change of airmass in each exposure, the change of the precipitable water vapor and aerosol in the atmosphere over time, and the non-uniformity of instrumental throughput over the focal plane can be up to 2% in some bandpasses. We then compare the calculated SCEs with the observed DES data. For the test sample data, we correct these errors using measurements of the atmospheric transmission and instrumental throughput from auxiliary calibration systems. The residual after correction is less than 0.3%. Moreover, we calculate such SCEs for Type Ia supernovae and elliptical galaxies and find that the chromatic errors for non-stellar objects are redshift-dependent and can be larger than those for

  13. SU-E-T-550: Range Effects in Proton Therapy Caused by Systematic Errors in the Stoichiometric Calibration

    SciTech Connect

    Doolan, P; Dias, M; Collins Fekete, C; Seco, J

    2014-06-01

    Purpose: The procedure for proton treatment planning involves the conversion of the patient's X-ray CT from Hounsfield units into relative stopping powers (RSP), using a stoichiometric calibration curve (Schneider 1996). In clinical practice a 3.5% margin is added to account for the range uncertainty introduced by this process and other errors. RSPs for real tissues are calculated using composition data and the Bethe-Bloch formula (ICRU 1993). The purpose of this work is to investigate the impact that systematic errors in the stoichiometric calibration have on the proton range. Methods: Seven tissue inserts of the Gammex 467 phantom were imaged using our CT scanner. Their known chemical compositions (Watanabe 1999) were then used to calculate the theoretical RSPs, using the same formula as would be used for human tissues in the stoichiometric procedure. The actual RSPs of these inserts were measured using a Bragg peak shift measurement in the proton beam at our institution. Results: The theoretical calculation of the RSP was lower than the measured RSP values, by a mean/max error of -1.5%/-3.6%. For all seven inserts the theoretical approach underestimated the RSP, with errors variable across the range of Hounsfield units. Systematic errors for lung (average of two inserts), adipose and cortical bone were -3.0%, -2.1% and -0.5%, respectively. Conclusion: There is a systematic underestimation caused by the theoretical calculation of RSP, a crucial step in the stoichiometric calibration procedure. As such, we propose that proton calibration curves should be based on measured RSPs. Investigations will be made to see if the same systematic errors exist for biological tissues. The impact of these differences on the range of proton beams, for phantoms and patient scenarios, will be investigated. This project was funded equally by the Engineering and Physical Sciences Research Council (UK) and Ion Beam Applications (Louvain-La-Neuve, Belgium)

  14. Minimizing systematic errors from atmospheric multiple scattering and satellite viewing geometry in coastal zone color scanner level IIA imagery

    NASA Technical Reports Server (NTRS)

    Martin, D. L.; Perry, M. J.

    1994-01-01

    Water-leaving radiances and phytoplankton pigment concentrations are calculated from coastal zone color scanner (CZCS) radiance measurements by removing atmospheric Rayleigh and aerosol radiances from the total radiance signal measured at the satellite. The single greatest source of error in CZCS atmospheric correction algorithms is the assumption that these Rayleigh and aerosol radiances are separable. Multiple-scattering interactions between Rayleigh and aerosol components cause systematic errors in calculated aerosol radiances, and the magnitude of these errors is dependent on aerosol type and optical depth and on satellite viewing geometry. A technique was developed which extends the results of previous radiative transfer modeling by Gordon and Castano to predict the magnitude of these systematic errors for simulated CZCS orbital passes in which the ocean is viewed through a modeled, physically realistic atmosphere. The simulated image mathematically duplicates the exact satellite, Sun, and pixel locations of an actual CZCS image. Errors in the aerosol radiance at 443 nm are calculated for a range of aerosol optical depths. When pixels in the simulated image exceed an error threshold, the corresponding pixels in the actual CZCS image are flagged and excluded from further analysis or from use in image compositing or compilation of pigment concentration databases. Studies based on time series analyses or compositing of CZCS imagery which do not address Rayleigh-aerosol multiple scattering should be interpreted cautiously, since the fundamental assumption used in their atmospheric correction algorithm is flawed.

  15. Standard addition/absorption detection microfluidic system for salt error-free nitrite determination.

    PubMed

    Ahn, Jae-Hoon; Jo, Kyoung Ho; Hahn, Jong Hoon

    2015-07-30

    A continuous-flow microfluidic chip-based standard addition/absorption detection system has been developed for accurate determination of nitrite in water of varying salinity. The absorption detection of nitrite is made via color development using the Griess reaction. We have found the yield of the reaction is significantly affected by salinity (e.g., -12% error for 30‰ NaCl, 50.0 μg L⁻¹ N-NO₂⁻ solution). The microchip has been designed to perform standard addition, color development, and absorbance detection in sequence. To effectively block stray light, the microchip made from black poly(dimethylsiloxane) is placed on the top of a compact housing that accommodates a light-emitting diode, a photomultiplier tube, and an interference filter, where the light source and the detector are optically isolated. An 80-mm liquid-core waveguide mounted on the chip externally has been employed as the absorption detection flow cell. These designs for optics secure a wide linear response range (up to 500 μg L⁻¹ N-NO₂⁻) and a low detection limit (0.12 μg L⁻¹ N-NO₂⁻ = 8.6 nM N-NO₂⁻, S/N = 3). From determination of nitrite in standard samples and real samples collected from an estuary, it has been demonstrated that our microfluidic system is highly accurate (<1% RSD, n = 3) and precise (<1% RSD, n = 3). PMID:26320643
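
    A minimal sketch of the standard-addition evaluation implied above (assuming equal-volume mixing and a linear Griess response; the numbers are invented): fit absorbance against added nitrite and recover the sample concentration from the extrapolated x-intercept, so that a salinity-dependent change in the slope cancels out of the result.

```python
import numpy as np

added = np.array([0.0, 20.0, 40.0, 60.0])               # hypothetical added N-NO2- [ug/L]
absorbance = np.array([0.110, 0.152, 0.195, 0.238])     # hypothetical readings

slope, intercept = np.polyfit(added, absorbance, 1)

# Salinity changes the slope of the Griess response, but the unknown is read
# from the x-intercept, which is unaffected by a multiplicative yield change.
c_sample = intercept / slope
print(f"nitrite in the sample stream: {c_sample:.1f} ug/L as N")
```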

  16. Effects of systematic and random errors on the retrieval of particle microphysical properties from multiwavelength lidar measurements using inversion with regularization

    NASA Astrophysics Data System (ADS)

    Pérez-Ramírez, D.; Whiteman, D. N.; Veselovskii, I.; Kolgotin, A.; Korenskiy, M.; Alados-Arboledas, L.

    2013-11-01

    In this work we study the effects of systematic and random errors on the inversion of multiwavelength (MW) lidar data using the well-known regularization technique to obtain vertically resolved aerosol microphysical properties. The software implementation used here was developed at the Physics Instrumentation Center (PIC) in Troitsk (Russia) in conjunction with the NASA/Goddard Space Flight Center. Its applicability to Raman lidar systems based on backscattering measurements at three wavelengths (355, 532 and 1064 nm) and extinction measurements at two wavelengths (355 and 532 nm) has been demonstrated widely. The systematic error sensitivity is quantified by first determining the retrieved parameters for a given set of optical input data consistent with three different sets of aerosol physical parameters. Then each optical input is perturbed by varying amounts and the inversion is repeated. Using bimodal aerosol size distributions, we find a generally linear dependence of the retrieved errors in the microphysical properties on the induced systematic errors in the optical data. For the retrievals of effective radius, number/surface/volume concentrations and fine-mode radius and volume, we find that these results are not significantly affected by the range of the constraints used in inversions. But significant sensitivity was found to the allowed range of the imaginary part of the particle refractive index. Our results also indicate that there exists an additive property for the deviations induced by the biases present in the individual optical data. This property permits the results here to be used to predict deviations in retrieved parameters when multiple input optical data are biased simultaneously as well as to study the influence of random errors on the retrievals. The above results are applied to questions regarding lidar design, in particular for the spaceborne multiwavelength lidar under consideration for the upcoming ACE mission.
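
    The "additive property" of the retrieval deviations noted above can be illustrated with a toy linear forward model and a Tikhonov-regularized inversion (this is not the PIC/Goddard software): because the retrieval operator is linear in the optical data, the deviations caused by biasing two inputs separately sum to the deviation obtained when both are biased at once.

```python
import numpy as np

rng = np.random.default_rng(0)
K = rng.normal(size=(5, 3))            # toy forward model: 5 optical inputs, 3 microphysical parameters
x_true = np.array([1.0, 0.5, 2.0])
y = K @ x_true

def retrieve(y_obs, lam=0.1):
    """Tikhonov-regularized least squares, standing in for the full regularization scheme."""
    A = K.T @ K + lam * np.eye(K.shape[1])
    return np.linalg.solve(A, K.T @ y_obs)

x0 = retrieve(y)
bias1 = np.array([0.05 * y[0], 0.0, 0.0, 0.0, 0.0])    # 5% bias on the first optical input
bias2 = np.array([0.0, 0.0, -0.10 * y[2], 0.0, 0.0])   # -10% bias on the third optical input

d1 = retrieve(y + bias1) - x0
d2 = retrieve(y + bias2) - x0
d12 = retrieve(y + bias1 + bias2) - x0
print(np.allclose(d1 + d2, d12))       # True: deviations add for a linear retrieval
```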

  17. Effects of Systematic and Random Errors on the Retrieval of Particle Microphysical Properties from Multiwavelength Lidar Measurements Using Inversion with Regularization

    NASA Technical Reports Server (NTRS)

    Ramirez, Daniel Perez; Whiteman, David N.; Veselovskii, Igor; Kolgotin, Alexei; Korenskiy, Michael; Alados-Arboledas, Lucas

    2013-01-01

    In this work we study the effects of systematic and random errors on the inversion of multiwavelength (MW) lidar data using the well-known regularization technique to obtain vertically resolved aerosol microphysical properties. The software implementation used here was developed at the Physics Instrumentation Center (PIC) in Troitsk (Russia) in conjunction with the NASA/Goddard Space Flight Center. Its applicability to Raman lidar systems based on backscattering measurements at three wavelengths (355, 532 and 1064 nm) and extinction measurements at two wavelengths (355 and 532 nm) has been demonstrated widely. The systematic error sensitivity is quantified by first determining the retrieved parameters for a given set of optical input data consistent with three different sets of aerosol physical parameters. Then each optical input is perturbed by varying amounts and the inversion is repeated. Using bimodal aerosol size distributions, we find a generally linear dependence of the retrieved errors in the microphysical properties on the induced systematic errors in the optical data. For the retrievals of effective radius, number/surface/volume concentrations and fine-mode radius and volume, we find that these results are not significantly affected by the range of the constraints used in inversions. But significant sensitivity was found to the allowed range of the imaginary part of the particle refractive index. Our results also indicate that there exists an additive property for the deviations induced by the biases present in the individual optical data. This property permits the results here to be used to predict deviations in retrieved parameters when multiple input optical data are biased simultaneously as well as to study the influence of random errors on the retrievals. The above results are applied to questions regarding lidar design, in particular for the spaceborne multiwavelength lidar under consideration for the upcoming ACE mission.

  18. Assessment of the accuracy of global geodetic satellite laser ranging observations and estimated impact on ITRF scale: estimation of systematic errors in LAGEOS observations 1993-2014

    NASA Astrophysics Data System (ADS)

    Appleby, Graham; Rodríguez, José; Altamimi, Zuheir

    2016-06-01

    Satellite laser ranging (SLR) to the geodetic satellites LAGEOS and LAGEOS-2 uniquely determines the origin of the terrestrial reference frame and, jointly with very long baseline interferometry, its scale. Given such a fundamental role in satellite geodesy, it is crucial that any systematic errors in either technique are at an absolute minimum as efforts continue to realise the reference frame at millimetre levels of accuracy to meet the present and future science requirements. Here, we examine the intrinsic accuracy of SLR measurements made by tracking stations of the International Laser Ranging Service using normal point observations of the two LAGEOS satellites in the period 1993 to 2014. The approach we investigate in this paper is to compute weekly reference frame solutions solving for satellite initial state vectors, station coordinates and daily Earth orientation parameters, estimating, along with these, weekly average range errors for each of the observing stations. Potential issues in any of the large number of SLR stations assumed to have been free of error in previous realisations of the ITRF may have been absorbed in the reference frame, primarily in station height. Likewise, systematic range errors estimated against a fixed frame that may itself suffer from accuracy issues will absorb network-wide problems into station-specific results. Our results suggest that in the past two decades, the scale of the ITRF derived from the SLR technique has been close to 0.7 ppb too small, due to systematic errors in the range measurements, in their treatment, or both. We discuss these results in the context of preparations for ITRF2014 and additionally consider the impact of this work on the currently adopted value of the geocentric gravitational constant, GM.
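
    As a sense of scale (simple arithmetic, not part of the paper's analysis), a 0.7 ppb scale bias corresponds to a few millimetres of geocentric station height:

```python
EARTH_RADIUS_MM = 6.371e9   # mean Earth radius expressed in millimetres
SCALE_BIAS = 0.7e-9         # 0.7 parts per billion
print(f"equivalent radial (height) effect: {SCALE_BIAS * EARTH_RADIUS_MM:.1f} mm")  # about 4.5 mm
```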

  19. Multiplicative errors in the galaxy power spectrum: self-calibration of unknown photometric systematics for precision cosmology

    NASA Astrophysics Data System (ADS)

    Shafer, Daniel L.; Huterer, Dragan

    2015-03-01

    We develop a general method to `self-calibrate' observations of galaxy clustering with respect to systematics associated with photometric calibration errors. We first point out the danger posed by the multiplicative effect of calibration errors, where large-angle error propagates to small scales and may be significant even if the large-scale information is cleaned or not used in the cosmological analysis. We then propose a method to measure the arbitrary large-scale calibration errors and use these measurements to correct the small-scale (high-multipole) power which is most useful for constraining the majority of cosmological parameters. We demonstrate the effectiveness of our approach on synthetic examples and briefly discuss how it may be applied to real data.
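
    A toy 1-D illustration (not the authors' estimator) of the danger described: a purely large-scale multiplicative calibration pattern c, with δ_obs = (1 + c)(1 + δ) − 1, modulates the small-scale modes, so some error leaks into the high-wavenumber power even though c itself varies only on the largest scale.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4096
delta = rng.normal(0.0, 0.1, n)                  # toy "true" overdensity field
x = np.arange(n)
calib = 0.05 * np.sin(2 * np.pi * x / n)         # smooth, large-scale calibration error (5%)

delta_obs = (1.0 + calib) * (1.0 + delta) - 1.0  # multiplicative contamination

def power(field):
    fk = np.fft.rfft(field)
    return np.abs(fk) ** 2 / len(field)

p_true, p_obs = power(delta), power(delta_obs)

# Fractional power change on small scales (high wavenumbers), far away from the
# single large-scale mode of the calibration pattern itself.
hi = slice(len(p_true) // 2, None)
print(f"mean fractional small-scale power change: {np.mean(p_obs[hi] / p_true[hi] - 1):.3%}")
```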

  20. Tolerance analysis of optical telescopes using coherent addition of wavefront errors

    NASA Technical Reports Server (NTRS)

    Davenport, J. W.

    1982-01-01

    A near diffraction-limited telescope requires that tolerance analysis be done on the basis of system wavefront error. One method of analyzing the wavefront error is to represent the wavefront error function in terms of its Zernike polynomial expansion. A Ramsey-Korsch ray trace package, a computer program that simulates the tracing of rays through an optical telescope system, was expanded to include the Zernike polynomial expansion up through the fifth-order spherical term. An option to determine a three-dimensional plot of the wavefront error function was also included in the Ramsey-Korsch package. Several simulation runs were analyzed to determine the particular set of coefficients in the Zernike expansion that are affected by various errors such as tilt, decenter and despace. A three-dimensional plot of each error up through the fifth-order spherical term was also included in the study. Tolerance analysis data are presented.
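
    A minimal sketch of the underlying idea (not the Ramsey-Korsch package itself): sample a wavefront error map over the pupil and recover a few low-order Zernike coefficients, up through primary spherical, by least squares. The input map here is synthetic and the term set is deliberately small.

```python
import numpy as np

n = 128
y, x = np.mgrid[-1:1:1j * n, -1:1:1j * n]
rho, theta = np.hypot(x, y), np.arctan2(y, x)
pupil = rho <= 1.0

# A few (unnormalized) Zernike terms
terms = {
    "piston":    np.ones_like(rho),
    "tip":       rho * np.cos(theta),
    "tilt":      rho * np.sin(theta),
    "defocus":   2 * rho ** 2 - 1,
    "spherical": 6 * rho ** 4 - 6 * rho ** 2 + 1,
}

# Synthetic wavefront error (waves): mostly defocus plus a little spherical and noise
wfe = (0.3 * terms["defocus"] + 0.05 * terms["spherical"]
       + 0.01 * np.random.default_rng(2).normal(size=rho.shape))

A = np.column_stack([t[pupil] for t in terms.values()])
coeffs, *_ = np.linalg.lstsq(A, wfe[pupil], rcond=None)
for name, c in zip(terms, coeffs):
    print(f"{name:10s} {c:+.3f} waves")
```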

  1. Convolution method and CTV-to-PTV margins for finite fractions and small systematic errors

    NASA Astrophysics Data System (ADS)

    Gordon, J. J.; Siebers, J. V.

    2007-04-01

    The van Herk margin formula (VHMF) relies on the accuracy of the convolution method (CM) to determine clinical target volume (CTV) to planning target volume (PTV) margins. This work (1) evaluates the accuracy of the CM and VHMF as a function of the number of fractions N and other parameters, and (2) proposes an alternative margin algorithm which ensures target coverage for a wider range of parameter values. Dose coverage was evaluated for a spherical target with uniform margin, using the same simplified dose model and CTV coverage criterion as were used in development of the VHMF. Systematic and random setup errors were assumed to be normally distributed with standard deviations Σ and σ. For clinically relevant combinations of σ, Σ and N, margins were determined by requiring that 90% of treatment course simulations have a CTV minimum dose greater than or equal to the static PTV minimum dose. Simulation results were compared with the VHMF and the alternative margin algorithm. The CM and VHMF were found to be accurate for parameter values satisfying the approximate criterion: σ[1 - γN/25] < 0.2, where γ = Σ/σ. They were found to be inaccurate for σ[1 - γN/25] > 0.2, because they failed to account for the non-negligible dose variability associated with random setup errors. These criteria are applicable when σ ≳ σP, where σP = 0.32 cm is the standard deviation of the normal dose penumbra. (Qualitative behaviour of the CM and VHMF will remain the same, though the criteria might vary if σP takes values other than 0.32 cm.) When σ ≪ σP, dose variability due to random setup errors becomes negligible, and the CM and VHMF are valid regardless of the values of Σ and N. When σ ≳ σP, consistent with the above criteria, it was found that the VHMF can underestimate margins for large σ, small Σ and small N. A potential consequence of this underestimate is that the CTV minimum dose can fall below its planned value in more than the prescribed 10% of
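
    For reference, the VHMF discussed above is usually quoted, in its many-fraction form for 90% population coverage of the CTV by 95% of the prescribed dose, as M = 2.5Σ + 1.64(√(σ² + σP²) − σP) with σP ≈ 0.32 cm, often linearized to 2.5Σ + 0.7σ. A small sketch evaluating both follows; the abstract's point is precisely that this recipe can underestimate margins for small N and large σ, so the values below should be read as the many-fraction limit only.

```python
import math

def vhmf_margin_cm(sigma_sys_cm, sigma_rand_cm, sigma_p_cm=0.32):
    """Van Herk CTV-to-PTV margin (cm), many-fraction form, 90% population / 95% dose."""
    return 2.5 * sigma_sys_cm + 1.64 * (math.hypot(sigma_rand_cm, sigma_p_cm) - sigma_p_cm)

def vhmf_margin_linearized_cm(sigma_sys_cm, sigma_rand_cm):
    """The widely quoted linearized form of the same recipe."""
    return 2.5 * sigma_sys_cm + 0.7 * sigma_rand_cm

for big_sigma, small_sigma in [(0.2, 0.2), (0.3, 0.4), (0.5, 0.3)]:   # illustrative Sigma, sigma in cm
    print(big_sigma, small_sigma,
          round(vhmf_margin_cm(big_sigma, small_sigma), 2),
          round(vhmf_margin_linearized_cm(big_sigma, small_sigma), 2))
```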

  2. Convolution method and CTV-to-PTV margins for finite fractions and small systematic errors.

    PubMed

    Gordon, J J; Siebers, J V

    2007-04-01

    The van Herk margin formula (VHMF) relies on the accuracy of the convolution method (CM) to determine clinical target volume (CTV) to planning target volume (PTV) margins. This work (1) evaluates the accuracy of the CM and VHMF as a function of the number of fractions N and other parameters, and (2) proposes an alternative margin algorithm which ensures target coverage for a wider range of parameter values. Dose coverage was evaluated for a spherical target with uniform margin, using the same simplified dose model and CTV coverage criterion as were used in development of the VHMF. Systematic and random setup errors were assumed to be normally distributed with standard deviations Sigma and sigma. For clinically relevant combinations of sigma, Sigma and N, margins were determined by requiring that 90% of treatment course simulations have a CTV minimum dose greater than or equal to the static PTV minimum dose. Simulation results were compared with the VHMF and the alternative margin algorithm. The CM and VHMF were found to be accurate for parameter values satisfying the approximate criterion: sigma[1 - gammaN/25] < 0.2, where gamma = Sigma/sigma. They were found to be inaccurate for sigma[1 - gammaN/25] > 0.2, because they failed to account for the non-negligible dose variability associated with random setup errors. These criteria are applicable when sigma greater than or approximately equal sigma(P), where sigma(P) = 0.32 cm is the standard deviation of the normal dose penumbra. (Qualitative behaviour of the CM and VHMF will remain the same, though the criteria might vary if sigma(P) takes values other than 0.32 cm.) When sigma < sigma(P), dose variability due to random setup errors becomes negligible, and the CM and VHMF are valid regardless of the values of Sigma and N. When sigma greater than or approximately equal sigma(P), consistent with the above criteria, it was found that the VHMF can underestimate margins for large sigma, small Sigma and small N. A

  3. Impact of moisture divergence on systematic errors in precipitation around the Tibetan Plateau in a general circulation model

    NASA Astrophysics Data System (ADS)

    Zhang, Yi; Li, Jian

    2016-02-01

    Current state-of-the-art atmospheric general circulation models tend to strongly overestimate the amount of precipitation around steep mountains, which constitutes a stubborn systematic error that causes climate drift and hinders model performance. In this study, two contrasting model tests are performed to investigate the sensitivity of precipitation around steep slopes. The first model solves a true moisture advection equation, whereas the second solves an artificial advection equation with an additional moisture divergence term. It is shown that orographic precipitation can be strongly affected by this term. Excessive (insufficient) precipitation amounts at the high (low) parts of the steep slopes decrease (increase) when the moisture divergence term is added. The precipitation changes between the two models are primarily attributed to large-scale precipitation, which is directly associated with water vapor saturation and condensation. Numerical weather prediction experiments using these two models suggest that precipitation differences between the models emerge shortly after the model startup. The implications of the results are also discussed.
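
    The difference between the two formulations can be seen in one dimension with finite differences (a schematic, not the GCM's discretization): the flux-form operator equals the advective form plus an extra term, approximately −q ∂u/∂x, which becomes large where the flow decelerates over steep terrain.

```python
import numpy as np

x = np.linspace(0.0, 1.0e5, 501)                          # 100 km transect [m]
q = 0.01 * np.exp(-((x - 5.0e4) / 2.0e4) ** 2)            # toy specific humidity [kg/kg]
u = 10.0 - 8.0 * np.exp(-((x - 5.0e4) / 1.0e4) ** 2)      # wind decelerating over a "slope" [m/s]

advective_form = -u * np.gradient(q, x)                   # -u dq/dx (true moisture advection)
flux_form = -np.gradient(u * q, x)                        # -d(uq)/dx, i.e. advective form - q du/dx
extra_term = flux_form - advective_form                   # the additional moisture divergence term

print(f"peak magnitude of the extra term: {np.abs(extra_term).max():.2e} kg/kg/s")
```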

  4. Elimination of Systematic Mass Measurement Errors in Liquid Chromatography-Mass Spectrometry Based Proteomics using Regression Models and a priori Partial Knowledge of the Sample Content

    SciTech Connect

    Petyuk, Vladislav A.; Jaitly, Navdeep; Moore, Ronald J.; Ding, Jie; Metz, Thomas O.; Tang, Keqi; Monroe, Matthew E.; Tolmachev, Aleksey V.; Adkins, Joshua N.; Belov, Mikhail E.; Dabney, Alan R.; Qian, Weijun; Camp, David G.; Smith, Richard D.

    2008-02-01

    The high mass measurement accuracy and precision available with recently developed mass spectrometers is increasingly used in proteomics analyses to confidently identify tryptic peptides from complex mixtures of proteins, as well as post-translational modifications and peptides from non-annotated proteins. To take full advantage of high mass measurement accuracy instruments it is necessary to limit systematic mass measurement errors. It is well known that errors in the measurement of m/z can be affected by experimental parameters that include e.g., outdated calibration coefficients, ion intensity, and temperature changes during the measurement. Traditionally, these variations have been corrected through the use of internal calibrants (well-characterized standards introduced with the sample being analyzed). In this paper we describe an alternative approach where the calibration is provided through the use of a priori knowledge of the sample being analyzed. Such an approach has previously been demonstrated based on the dependence of systematic error on m/z alone. To incorporate additional explanatory variables, we employed multidimensional, nonparametric regression models, which were evaluated using several commercially available instruments. The applied approach is shown to remove any noticeable biases from the overall mass measurement errors, and decreases the overall standard deviation of the mass measurement error distribution by 1.2- to 2-fold, depending on instrument type. Subsequent reduction of the random errors based on multiple measurements over consecutive spectra further improves accuracy and results in an overall decrease of the standard deviation by 1.8- to 3.7-fold. This new procedure will decrease the false discovery rates for peptide identifications using high accuracy mass measurements.
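
    A one-dimensional stand-in for the regression approach described above (the published method regresses on several explanatory variables simultaneously, not just m/z): model the systematic component of the ppm error as a binned median versus m/z over confidently identified species, then subtract that model from all measurements. Variable names, bin count and the toy data are illustrative.

```python
import numpy as np

def systematic_error_model(mz, ppm_error, n_bins=50):
    """Binned-median model of systematic mass error vs. m/z (1-D stand-in for the full regression)."""
    edges = np.linspace(mz.min(), mz.max(), n_bins + 1)
    centers = 0.5 * (edges[:-1] + edges[1:])
    idx = np.clip(np.digitize(mz, edges) - 1, 0, n_bins - 1)
    medians = np.array([np.median(ppm_error[idx == b]) if np.any(idx == b) else 0.0
                        for b in range(n_bins)])
    return lambda q: np.interp(q, centers, medians)

# Toy data: a drifting systematic bias plus random scatter
rng = np.random.default_rng(3)
mz = rng.uniform(400, 1600, 5000)
true_bias = 2.0 + 0.003 * (mz - 400)              # ppm, slowly varying with m/z
ppm = true_bias + rng.normal(0, 1.5, mz.size)

model = systematic_error_model(mz, ppm)
corrected = ppm - model(mz)
print(f"std before: {ppm.std():.2f} ppm, after: {corrected.std():.2f} ppm")
```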

  5. Reduction of systematic errors in regional climate simulations of the summer monsoon over East Asia and the western North Pacific by applying the spectral nudging technique

    NASA Astrophysics Data System (ADS)

    Cha, Dong-Hyun; Lee, Dong-Kyou

    2009-07-01

    In this study, the systematic errors in regional climate simulation of 28-year summer monsoon over East Asia and the western North Pacific (WNP) and the impact of the spectral nudging technique (SNT) on the reduction of the systematic errors are investigated. The experiment in which the SNT is not applied (the CLT run) has large systematic errors in seasonal mean climatology such as overestimated precipitation, weakened subtropical high, and enhanced low-level southwesterly over the subtropical WNP, while in the experiment using the SNT (the SP run) considerably smaller systematic errors are resulted. In the CTL run, the systematic error of simulated precipitation over the ocean increases significantly after mid-June, since the CTL run cannot reproduce the principal intraseasonal variation of summer monsoon precipitation. The SP run can appropriately capture the spatial distribution as well as temporal variation of the principal empirical orthogonal function mode, and therefore, the systematic error over the ocean does not increase after mid-June. The systematic error of simulated precipitation over the subtropical WNP in the CTL run results from the unreasonable positive feedback between precipitation and surface latent heat flux induced by the warm sea surface temperature anomaly. Since the SNT plays a role in decreasing the positive feedback by improving monsoon circulations, the SP run can considerably reduce the systematic errors of simulated precipitation as well as atmospheric fields over the subtropical WNP region.
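
    The essence of the SNT can be shown in a one-dimensional toy (not the regional model's implementation): only the lowest wavenumbers of the regional field are relaxed toward the driving large-scale field, so the model remains free to develop its own small-scale detail.

```python
import numpy as np

def spectral_nudge(model_field, driving_field, n_keep=3, alpha=0.1):
    """Relax only wavenumbers k <= n_keep of model_field toward driving_field."""
    fm = np.fft.rfft(model_field)
    fd = np.fft.rfft(driving_field)
    low = np.arange(fm.size) <= n_keep
    fm[low] += alpha * (fd[low] - fm[low])        # nudging applied in spectral space only
    return np.fft.irfft(fm, n=model_field.size)

x = np.linspace(0, 2 * np.pi, 256, endpoint=False)
driving = np.sin(x)                               # large-scale "analysis"
model = 0.7 * np.sin(x) + 0.3 * np.sin(15 * x)    # drifted large scale plus small-scale detail

nudged = spectral_nudge(model, driving)
# The large-scale amplitude moves toward the driving field; the 15th harmonic is untouched.
```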

  6. Procedures for dealing with certain types of noise and systematic errors common to many Hadamard transform optical systems

    NASA Technical Reports Server (NTRS)

    Harwit, M.

    1977-01-01

    Sources of noise and error-correcting procedures characteristic of Hadamard transform optical systems were investigated. Reduction of spectral noise due to noise spikes in the data, the effect of random errors, the relative performance of Fourier and Hadamard transform spectrometers operated under identical detector-noise-limited conditions, and systematic means for dealing with mask defects are among the topics discussed. The distortion in Hadamard transform optical instruments caused by moving masks, incorrect mask alignment, missing measurements, and diffraction is analyzed and techniques for reducing or eliminating this distortion are described.
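
    One of the noise issues mentioned, a single spike in the encoded measurements, can be illustrated with an idealized ±1 weighing design (physical masks use 0/1 S-matrices, so the exact factors differ): after decoding, the spike does not corrupt a single spectral element but is spread, attenuated by 1/n, over all of them.

```python
import numpy as np
from scipy.linalg import hadamard

n = 64
H = hadamard(n)                          # idealized +/-1 encoding matrix
spectrum = np.zeros(n)
spectrum[20] = 1.0                       # a single spectral line

y = H @ spectrum                         # encoded (multiplexed) measurements
y[5] += 10.0                             # a noise spike in one measurement

recovered = H.T @ y / n                  # decode (H @ H.T = n * I)
# The spike appears as +/- 10/n in every channel rather than ruining one channel.
print(recovered[20], np.abs(recovered - spectrum).max())
```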

  7. Does computerized provider order entry reduce prescribing errors for hospital inpatients? A systematic review.

    PubMed

    Reckmann, Margaret H; Westbrook, Johanna I; Koh, Yvonne; Lo, Connie; Day, Richard O

    2009-01-01

    Previous reviews have examined evidence of the impact of CPOE on medication errors, but have used highly variable definitions of "error". We attempted to answer a very focused question, namely, what evidence exists that CPOE systems reduce prescribing errors among hospital inpatients? We identified 13 papers (reporting 12 studies) published between 1998 and 2007. Nine demonstrated a significant reduction in prescribing error rates for all or some drug types. Few studies examined changes in error severity, but minor errors were most often reported as decreasing. Several studies reported increases in the rate of duplicate orders and failures to discontinue drugs, often attributed to inappropriate selection from a dropdown menu or to an inability to view all active medication orders concurrently. The evidence-base reporting the effectiveness of CPOE to reduce prescribing errors is not compelling and is limited by modest study sample sizes and designs. Future studies should include larger samples including multiple sites, controlled study designs, and standardized error and severity reporting. The role of decision support in minimizing severe prescribing error rates also requires investigation. PMID:19567798

  8. Analysis of systematic errors in the calculation of renormalization constants of the topological susceptibility on the lattice

    SciTech Connect

    Alles, B.; D'Elia, M.; Di Giacomo, A.; Pica, C.

    2006-11-01

    A Ginsparg-Wilson based calibration of the topological charge is used to calculate the renormalization constants which appear in the field-theoretical determination of the topological susceptibility on the lattice. A systematic comparison is made with calculations based on cooling. The two methods agree within present statistical errors (3%-4%). We also discuss the independence of the multiplicative renormalization constant Z from the background topological charge used to determine it.

  9. Optimizing MRI-targeted fusion prostate biopsy: the effect of systematic error and anisotropy on tumor sampling

    NASA Astrophysics Data System (ADS)

    Martin, Peter R.; Cool, Derek W.; Romagnoli, Cesare; Fenster, Aaron; Ward, Aaron D.

    2015-03-01

    Magnetic resonance imaging (MRI)-targeted, 3D transrectal ultrasound (TRUS)-guided "fusion" prostate biopsy aims to reduce the 21-47% false negative rate of clinical 2D TRUS-guided sextant biopsy. Although it has been reported to double the positive yield, MRI-targeted biopsy still has a substantial false negative rate. Therefore, we propose optimization of biopsy targeting to meet the clinician's desired tumor sampling probability, optimizing needle targets within each tumor and accounting for uncertainties due to guidance system errors, image registration errors, and irregular tumor shapes. As a step toward this optimization, we obtained multiparametric MRI (mpMRI) and 3D TRUS images from 49 patients. A radiologist and radiology resident contoured 81 suspicious regions, yielding 3D surfaces that were registered to 3D TRUS. We estimated the probability, P, of obtaining a tumor sample with a single biopsy, and investigated the effects of systematic errors and anisotropy on P. Our experiments indicated that a biopsy system's lateral and elevational errors have a much greater effect on sampling probabilities, relative to its axial error. We have also determined that for a system with RMS error of 3.5 mm, tumors of volume 1.9 cm3 and smaller may require more than one biopsy core to ensure 95% probability of a sample with 50% core involvement, and tumors 1.0 cm3 and smaller may require more than two cores.
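
    A deliberately simplified Monte Carlo version of the sampling-probability estimate (the study models full core geometry against registered tumor surfaces; here the tumor is a sphere and the core is a straight segment along the needle axis), showing why lateral and elevational errors matter more than axial error: the core already spans the axial direction.

```python
import numpy as np

def sample_probability(tumor_volume_cm3, sigma_mm, core_half_len_mm=9.0, n=200_000, seed=0):
    """P(a biopsy core, modeled as a segment along the needle axis, intersects a spherical
    tumor) under anisotropic Gaussian targeting error (axial, lateral, elevational)."""
    r_mm = 10.0 * (3.0 * tumor_volume_cm3 / (4.0 * np.pi)) ** (1.0 / 3.0)
    rng = np.random.default_rng(seed)
    d = rng.normal(0.0, sigma_mm, size=(n, 3))             # offsets of the core centre [mm]
    ax = np.maximum(np.abs(d[:, 0]) - core_half_len_mm, 0.0)
    dist = np.sqrt(ax ** 2 + d[:, 1] ** 2 + d[:, 2] ** 2)  # closest approach to tumour centre
    return np.mean(dist < r_mm)

# Roughly the same total RMS targeting error, isotropic vs. concentrated off-axis:
print(sample_probability(1.0, sigma_mm=[2.0, 2.0, 2.0]))
print(sample_probability(1.0, sigma_mm=[0.5, 2.4, 2.4]))
```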

  10. Mitigating systematic errors in angular correlation function measurements from wide field surveys

    NASA Astrophysics Data System (ADS)

    Morrison, C. B.; Hildebrandt, H.

    2015-12-01

    We present an investigation into the effects of survey systematics such as varying depth, point spread function size, and extinction on the galaxy selection and correlation in photometric, multi-epoch, wide area surveys. We take the Canada-France-Hawaii Telescope Lensing Survey (CFHTLenS) as an example. Variations in galaxy selection due to systematics are found to cause density fluctuations of up to 10 per cent for some small fraction of the area for most galaxy redshift slices and as much as 50 per cent for some extreme cases of faint high-redshift samples. This results in correlations of galaxies against survey systematics of order ~1 per cent when averaged over the survey area. We present an empirical method for mitigating these systematic correlations from measurements of angular correlation functions using weighted random points. These weighted random catalogues are estimated from the observed galaxy overdensities by mapping these to survey parameters. We are able to model and mitigate the effect of systematic correlations allowing for non-linear dependences of density on systematics. Applied to CFHTLenS, we find that the method reduces spurious correlations in the data by a factor of 2 for most galaxy samples and as much as an order of magnitude in others. Such a treatment is particularly important for an unbiased estimation of very small correlation signals, as e.g. from weak gravitational lensing magnification bias. We impose a criterion for using a galaxy sample in a magnification measurement: the majority of the systematic correlations must show improvement and be less than 10 per cent of the expected magnification signal when combined in the galaxy cross-correlation. After correction the galaxy samples in CFHTLenS satisfy this criterion for zphot < 0.9 and will be used in a future analysis of magnification.
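
    A minimal sketch of the weighted-random idea (the paper maps overdensity against several systematics at once and allows non-linear dependences; this toy handles one systematic): estimate the mean galaxy density as a function of, say, limiting depth, and weight random points by that relation so the systematic trend divides out of the correlation-function estimator.

```python
import numpy as np

def random_weights(gal_sys, ran_sys, n_bins=20):
    """Weight randoms by the observed galaxy-density trend against one survey systematic."""
    edges = np.quantile(np.concatenate([gal_sys, ran_sys]), np.linspace(0, 1, n_bins + 1))
    g, _ = np.histogram(gal_sys, bins=edges)
    r, _ = np.histogram(ran_sys, bins=edges)
    density = np.where(r > 0, g / np.maximum(r, 1), 0.0)
    density /= density[r > 0].mean()                      # normalized density vs. systematic
    idx = np.clip(np.digitize(ran_sys, edges) - 1, 0, n_bins - 1)
    return density[idx]                                   # randoms now follow the systematic trend

# gal_sys / ran_sys would hold the systematic value (depth, seeing, extinction)
# at each galaxy and random point; the weights then enter the pair-count estimator.
```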

  11. Patients' willingness and ability to participate actively in the reduction of clinical errors: a systematic literature review.

    PubMed

    Doherty, Carole; Stavropoulou, Charitini

    2012-07-01

    This systematic review identifies the factors that both support and deter patients from being willing and able to participate actively in reducing clinical errors. Specifically, we add to our understanding of the safety culture in healthcare by engaging with the call for more focus on the relational and subjective factors which enable patients' participation (Iedema, Jorm, & Lum, 2009; Ovretveit, 2009). A systematic search of six databases, ten journals and seven healthcare organisations' web sites resulted in the identification of 2714 studies of which 68 were included in the review. These studies investigated initiatives involving patients in safety or studies of patients' perspectives of being actively involved in the safety of their care. The factors explored varied considerably depending on the scope, setting and context of the study. Using thematic analysis we synthesized the data to build an explanation of why, when and how patients are likely to engage actively in helping to reduce clinical errors. The findings show that the main factors for engaging patients in their own safety can be summarised in four categories: illness; individual cognitive characteristics; the clinician-patient relationship; and organisational factors. We conclude that illness and patients' perceptions of their role and status as subordinate to that of clinicians are the most important barriers to their involvement in error reduction. In sum, patients' fear of being labelled "difficult" and a consequent desire for clinicians' approbation may cause them to assume a passive role as a means of actively protecting their personal safety. PMID:22541799

  12. Analysis of systematic errors in lateral shearing interferometry for EUV optical testing

    SciTech Connect

    Miyakawa, Ryan; Naulleau, Patrick; Goldberg, Kenneth A.

    2009-02-24

    Lateral shearing interferometry (LSI) provides a simple means for characterizing the aberrations in optical systems at EUV wavelengths. In LSI, the test wavefront is incident on a low-frequency grating which causes the resulting diffracted orders to interfere on the CCD. Due to its simple experimental setup and high photon efficiency, LSI is an attractive alternative to point diffraction interferometry and other methods that require spatially filtering the wavefront through small pinholes which notoriously suffer from low contrast fringes and improper alignment. In order to demonstrate that LSI can be accurate and robust enough to meet industry standards, analytic models are presented to study the effects of unwanted grating and detector tilt on the system aberrations, and a method for identifying and correcting for these errors in alignment is proposed. The models are subsequently verified by numerical simulation. Finally, an analysis is performed of how errors in the identification and correction of grating and detector misalignment propagate to errors in fringe analysis.

  13. Systematic Errors and Combination of the Individual CRF Solutions in the Framework of the IVS ICRF Pilot Project

    NASA Astrophysics Data System (ADS)

    Sokolova, J. R.

    2006-08-01

    A new international Pilot Project for the redetermination of the ICRF was initiated by the International VLBI Service for Geodesy and Astrometry (IVS) in January 2005. The purpose of this project is to compare the individual CRF solutions and to analyse their systematic and random errors, with a focus on selecting the optimal strategy for the next ICRF realization. Eight CRF realizations provided by the IVS Analysis Centres (GA, SHAO, DGFI, GIUB-BKG, JPL, MAO NANU, GSFC, USNO) were analyzed. In the present study, four analytical models were used to investigate the systematic differences between solutions: solid rotation; rotation and deformation; and expansions in orthogonal functions (Legendre-Fourier polynomials and spherical functions). It was found that the expansions in orthogonal functions describe the differences between individual catalogues better than the two former models. Finally, a combined CRF was generated. Using the radio source positions from this combined catalogue for the estimation of EOP has shown an improvement in the uncertainty of universal time and nutation.

  14. Systematic Errors in Stereo PIV When Imaging through a Glass Window

    NASA Technical Reports Server (NTRS)

    Green, Richard; McAlister, Kenneth W.

    2004-01-01

    This document assesses the magnitude of velocity measurement errors that may arise when performing stereo particle image velocimetry (PIV) with cameras viewing through a thick, refractive window, with the calibration performed in one plane only. The effect of the window is to introduce a refractive error that increases with window thickness and the camera angle of incidence. The calibration should be performed while viewing through the test section window, otherwise a potentially significant error may be introduced that affects each velocity component differently. However, even when the calibration is performed correctly, another error may arise during the stereo reconstruction if the perspective angle determined for each camera does not account for the displacement of the light rays as they refract through the thick window. Care should be exercised when applying a single-plane calibration, since certain implicit assumptions may in fact require conditions that are extremely difficult to meet in a practical laboratory environment. It is suggested that the effort expended to ensure this accuracy may be better expended in performing a more lengthy volumetric calibration procedure, which does not rely upon the assumptions implicit in the single plane method and avoids the need for the perspective angle to be calculated.
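
    The refractive error described comes largely from the lateral displacement of rays passing through a thick plane-parallel window; a sketch of the standard plate-displacement relation, evaluated for a few viewing angles (window thickness and refractive index below are illustrative, not the report's test-section values):

```python
import numpy as np

def lateral_shift_mm(thickness_mm, incidence_deg, n_glass=1.5):
    """Lateral ray displacement through a plane-parallel plate with air on both sides."""
    th = np.radians(incidence_deg)
    return thickness_mm * np.sin(th) * (1.0 - np.cos(th) / np.sqrt(n_glass ** 2 - np.sin(th) ** 2))

for angle in (15.0, 30.0, 45.0):        # typical stereo-camera half-angles
    print(f"{angle:4.1f} deg: {lateral_shift_mm(25.0, angle):.2f} mm shift through a 25 mm window")
```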

  15. Reduction of Systematic Errors in Diagnostic Receivers Through the Use of Balanced Dicke-Switching and Y-Factor Noise Calibrations

    SciTech Connect

    John Musson, Trent Allison, Roger Flood, Jianxun Yan

    2009-05-01

    Receivers designed for diagnostic applications range from those having moderate sensitivity to those possessing large dynamic range. Digital receivers have a dynamic range which is a function of the number of bits represented by the ADC and subsequent processing. If some of this range is sacrificed for extreme sensitivity, noise power can then be used to perform two-point load calibrations. Since load temperatures can be precisely determined, the receiver can be quickly and accurately characterized; minute changes in system gain can then be detected, and systematic errors corrected. In addition, using receiver pairs in a balanced approach to measuring X+, X-, Y+, and Y- reduces systematic offset errors from non-identical system gains and changes in system performance. This paper describes and demonstrates a balanced BPM-style diagnostic receiver, employing Dicke-switching to establish and maintain real-time system calibration. Benefits of such a receiver include wide bandwidth, solid absolute accuracy, improved position accuracy, and phase-sensitive measurements. System description, static and dynamic modelling, and measurement data are presented.
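
    The two-point (Y-factor) load calibration mentioned above uses hot and cold noise loads at known temperatures; a minimal sketch of recovering receiver noise temperature and gain from the measured powers (the load temperatures and detected powers below are invented, not values from this system):

```python
def y_factor_cal(p_hot_w, p_cold_w, t_hot_k=9460.0, t_cold_k=290.0):
    """Two-point hot/cold load calibration: receiver noise temperature and gain."""
    y = p_hot_w / p_cold_w
    t_rx = (t_hot_k - y * t_cold_k) / (y - 1.0)    # receiver noise temperature [K]
    gain = p_hot_w / (t_hot_k + t_rx)              # detected power per kelvin of input noise [W/K]
    return t_rx, gain

t_rx, gain = y_factor_cal(p_hot_w=12.6e-9, p_cold_w=1.0e-9)   # hypothetical detected powers
print(f"T_rx = {t_rx:.0f} K, gain = {gain:.3e} W/K")
```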

  16. A systematic approach to identify the sources of tropical SST errors in coupled models using the adjustment of initialised experiments

    NASA Astrophysics Data System (ADS)

    Vannière, Benoît; Guilyardi, Eric; Toniazzo, Thomas; Madec, Gurvan; Woolnough, Steve

    2014-10-01

    Understanding the sources of systematic errors in climate models is challenging because of coupled feedbacks and errors compensation. The developing seamless approach proposes that the identification and the correction of short term climate model errors have the potential to improve the modeled climate on longer time scales. In previous studies, initialised atmospheric simulations of a few days have been used to compare fast physics processes (convection, cloud processes) among models. The present study explores how initialised seasonal to decadal hindcasts (re-forecasts) relate transient week-to-month errors of the ocean and atmospheric components to the coupled model long-term pervasive SST errors. A protocol is designed to attribute the SST biases to the source processes. It includes five steps: (1) identify and describe biases in a coupled stabilized simulation, (2) determine the time scale of the advent of the bias and its propagation, (3) find the geographical origin of the bias, (4) evaluate the degree of coupling in the development of the bias, (5) find the field responsible for the bias. This strategy has been implemented with a set of experiments based on the initial adjustment of initialised simulations and exploring various degrees of coupling. In particular, hindcasts give the time scale of biases advent, regionally restored experiments show the geographical origin and ocean-only simulations isolate the field responsible for the bias and evaluate the degree of coupling in the bias development. This strategy is applied to four prominent SST biases of the IPSLCM5A-LR coupled model in the tropical Pacific, that are largely shared by other coupled models, including the Southeast Pacific warm bias and the equatorial cold tongue bias. Using the proposed protocol, we demonstrate that the East Pacific warm bias appears in a few months and is caused by a lack of upwelling due to too weak meridional coastal winds off Peru. The cold equatorial bias, which

  17. DtaRefinery, a Software Tool for Elimination of Systematic Errors from Parent Ion Mass Measurements in Tandem Mass Spectra Data Sets*

    PubMed Central

    Petyuk, Vladislav A.; Mayampurath, Anoop M.; Monroe, Matthew E.; Polpitiya, Ashoka D.; Purvine, Samuel O.; Anderson, Gordon A.; Camp, David G.; Smith, Richard D.

    2010-01-01

    Hybrid two-stage mass spectrometers capable of both highly accurate mass measurement and high throughput MS/MS fragmentation have become widely available in recent years, allowing for significantly better discrimination between true and false MS/MS peptide identifications by the application of a relatively narrow window for maximum allowable deviations of measured parent ion masses. To fully gain the advantage of highly accurate parent ion mass measurements, it is important to limit systematic mass measurement errors. Based on our previous studies of systematic biases in mass measurement errors, here, we have designed an algorithm and software tool that eliminates the systematic errors from the peptide ion masses in MS/MS data. We demonstrate that the elimination of the systematic mass measurement errors allows for the use of tighter criteria on the deviation of measured mass from theoretical monoisotopic peptide mass, resulting in a reduction of both false discovery and false negative rates of peptide identification. A software implementation of this algorithm called DtaRefinery reads a set of fragmentation spectra, searches for MS/MS peptide identifications using a FASTA file containing expected protein sequences, fits a regression model that can estimate systematic errors, and then corrects the parent ion mass entries by removing the estimated systematic error components. The output is a new file with fragmentation spectra with updated parent ion masses. The software is freely available. PMID:20019053

  18. Enhanced flux pinning in MOCVD-YBCO films through Zr-additions:Systematic feasibility studies

    SciTech Connect

    Aytug, Tolga; Paranthaman, Mariappan Parans; Specht, Eliot D; Kim, Kyunghoon; Zhang, Yifei; Cantoni, Claudia; Zuev, Yuri L; Goyal, Amit; Christen, David K; Maroni, Victor A.

    2009-01-01

    Systematic effects of Zr additions on the structural and flux pinning properties of YBa₂Cu₃O₇₋δ (YBCO) films deposited by metal-organic chemical vapor deposition (MOCVD) have been investigated. Detailed characterization, conducted by coordinated transport, x-ray diffraction, scanning and transmission electron microscopy analyses, and imaging Raman microscopy have revealed trends in the resulting property/performance correlations of these films with respect to varying mole percentages (mol%) of added Zr. For compositions ≤ 7.5 mol%, Zr additions lead to improved in-field critical current density, as well as extra correlated pinning along the c-axis direction of the YBCO films via the formation of columnar, self-assembled stacks of BaZrO₃ nanodots.

  19. Enhanced flux pinning in MOCVD-YBCO films through Zr additions : systematic feasibility studies.

    SciTech Connect

    Aytug, T.; Paranthaman, M.; Specht, E. D.; Zhang, Y.; Kim, K.; Zuev, Y. L.; Cantoni, C.; Goyal, A.; Christen, D. K.; Maroni, V. A.; Chen, Y.; Selvamanickam, V.; ORNL; SuperPower, Inc.

    2010-01-01

    Systematic effects of Zr additions on the structural and flux pinning properties of YBa₂Cu₃O₇₋δ (YBCO) films deposited by metal-organic chemical vapor deposition (MOCVD) have been investigated. Detailed characterization, conducted by coordinated transport, x-ray diffraction, scanning and transmission electron microscopy analyses, and imaging Raman microscopy have revealed trends in the resulting property/performance correlations of these films with respect to varying mole percentages (mol%) of added Zr. For compositions ≤ 7.5 mol%, Zr additions lead to improved in-field critical current density, as well as extra correlated pinning along the c-axis direction of the YBCO films via the formation of columnar, self-assembled stacks of BaZrO₃ nanodots.

  20. Diagnostic errors in older patients: a systematic review of incidence and potential causes in seven prevalent diseases

    PubMed Central

    Skinner, Thomas R; Scott, Ian A; Martin, Jennifer H

    2016-01-01

    Background Misdiagnosis, either over- or underdiagnosis, exposes older patients to increased risk of inappropriate or omitted investigations and treatments, psychological distress, and financial burden. Objective To evaluate the frequency and nature of diagnostic errors in 16 conditions prevalent in older patients by undertaking a systematic literature review. Data sources and study selection Cohort studies, cross-sectional studies, or systematic reviews of such studies published in Medline between September 1993 and May 2014 were searched using key search terms of “diagnostic error”, “misdiagnosis”, “accuracy”, “validity”, or “diagnosis” and terms relating to each disease. Data synthesis A total of 938 articles were retrieved. Diagnostic error rates of >10% for both over- and underdiagnosis were seen in chronic obstructive pulmonary disease, dementia, Parkinson’s disease, heart failure, stroke/transient ischemic attack, and acute myocardial infarction. Diabetes was overdiagnosed in <5% of cases. Conclusion Over- and underdiagnosis are common in older patients. Explanations for over-diagnosis include subjective diagnostic criteria and the use of criteria not validated in older patients. Underdiagnosis was associated with long preclinical phases of disease or lack of sensitive diagnostic criteria. Factors that predispose to misdiagnosis in older patients must be emphasized in education and clinical guidelines. PMID:27284262

  1. Diagnostic and therapeutic errors in trigeminal autonomic cephalalgias and hemicrania continua: a systematic review.

    PubMed

    Viana, Michele; Tassorelli, Cristina; Allena, Marta; Nappi, Giuseppe; Sjaastad, Ottar; Antonaci, Fabio

    2013-01-01

    Trigeminal autonomic cephalalgias (TACs) and hemicrania continua (HC) are relatively rare but clinically rather well-defined primary headaches. Despite the existence of clear-cut diagnostic criteria (The International Classification of Headache Disorders, 2nd edition - ICHD-II) and several therapeutic guidelines, errors in workup and treatment of these conditions are frequent in clinical practice. We set out to review all available published data on mismanagement of TACs and HC patients in order to understand and avoid its causes. The search strategy identified 22 published studies. The most frequent errors described in the management of patients with TACs and HC are: referral to wrong type of specialist, diagnostic delay, misdiagnosis, and the use of treatments without overt indication. Migraine with and without aura, trigeminal neuralgia, sinus infection, dental pain and temporomandibular dysfunction are the disorders most frequently overdiagnosed. Even when the clinical picture is clear-cut, TACs and HC are frequently not recognized and/or mistaken for other disorders, not only by general physicians, dentists and ENT surgeons, but also by neurologists and headache specialists. This seems to be due to limited knowledge of the specific characteristics and variants of these disorders, and it results in the unnecessary prescription of ineffective and sometimes invasive treatments which may have negative consequences for patients. Greater knowledge of and education about these disorders, among both primary care physicians and headache specialists, might contribute to improving the quality of life of TACs and HC patients. PMID:23565739

  2. The Curious Anomaly of Skewed Judgment Distributions and Systematic Error in the Wisdom of Crowds

    PubMed Central

    Nash, Ulrik W.

    2014-01-01

    Judgment distributions are often skewed and we know little about why. This paper explains the phenomenon of skewed judgment distributions by introducing the augmented quincunx (AQ) model of sequential and probabilistic cue categorization by neurons of judges. In the process of developing inferences about true values, when neurons categorize cues better than chance, and when the particular true value is extreme compared to what is typical and anchored upon, then populations of judges form skewed judgment distributions with high probability. Moreover, the collective error made by these people can be inferred from how skewed their judgment distributions are, and in what direction they tilt. This implies not just that judgment distributions are shaped by cues, but that judgment distributions are cues themselves for the wisdom of crowds. The AQ model also predicts that judgment variance correlates positively with collective error, thereby challenging what is commonly believed about how diversity and collective intelligence relate. Data from 3053 judgment surveys about US macroeconomic variables obtained from the Federal Reserve Bank of Philadelphia and the Wall Street Journal provide strong support, and implications are discussed with reference to three central ideas on collective intelligence, these being Galton's conjecture on the distribution of judgments, Muth's rational expectations hypothesis, and Page's diversity prediction theorem. PMID:25406078

  3. Genome Size Variation in the Genus Carthamus (Asteraceae, Cardueae): Systematic Implications and Additive Changes During Allopolyploidization

    PubMed Central

    GARNATJE, TERESA; GARCIA, SÒNIA; VILATERSANA, ROSER; VALLÈS, JOAN

    2006-01-01

    • Background and Aims Plant genome size is an important biological characteristic, with relationships to systematics, ecology and distribution. Currently, there is no information regarding nuclear DNA content for any Carthamus species. In addition to improving the knowledge base, this research focuses on interspecific variation and its implications for the infrageneric classification of this genus. Genome size variation in the process of allopolyploid formation is also addressed. • Methods Nuclear DNA samples from 34 populations of 16 species of the genus Carthamus were assessed by flow cytometry using propidium iodide. • Key Results The 2C values ranged from 2·26 pg for C. leucocaulos to 7·46 pg for C. turkestanicus, and monoploid genome size (1Cx-value) ranged from 1·13 pg in C. leucocaulos to 1·53 pg in C. alexandrinus. Mean genome sizes differed significantly, based on sectional classification. Both allopolyploid species (C. creticus and C. turkestanicus) exhibited nuclear DNA contents in accordance with the sum of the putative parental C-values (in one case with a slight reduction, frequent in polyploids), supporting their hybrid origin. • Conclusions Genome size represents a useful tool in elucidating systematic relationships between closely related species. A considerable reduction in monoploid genome size, possibly due to the hybrid formation, is also reported within these taxa. PMID:16390843

  4. Systematic identification and correction of annotation errors in the genetic interaction map of Saccharomyces cerevisiae

    PubMed Central

    Atias, Nir; Kupiec, Martin; Sharan, Roded

    2016-01-01

    The yeast mutant collections are a fundamental tool in deciphering genomic organization and function. Over the last decade, they have been used for the systematic exploration of ∼6 000 000 double gene mutants, identifying and cataloging genetic interactions among them. Here we studied the extent to which these data are prone to neighboring gene effects (NGEs), a phenomenon by which the deletion of a gene affects the expression of adjacent genes along the genome. Analyzing ∼90,000 negative genetic interactions observed to date, we found that more than 10% of them are incorrectly annotated due to NGEs. We developed a novel algorithm, GINGER, to identify and correct erroneous interaction annotations. We validated the algorithm using a comparative analysis of interactions from Schizosaccharomyces pombe. We further showed that our predictions are significantly more concordant with diverse biological data compared to their mis-annotated counterparts. Our work uncovered about 9500 new genetic interactions in yeast. PMID:26602688

  5. A review of sources of systematic errors and uncertainties in observations and simulations at 183 GHz

    NASA Astrophysics Data System (ADS)

    Brogniez, Helene; English, Stephen; Mahfouf, Jean-Francois; Behrendt, Andreas; Berg, Wesley; Boukabara, Sid; Buehler, Stefan Alexander; Chambon, Philippe; Gambacorta, Antonia; Geer, Alan; Ingram, William; Kursinski, E. Robert; Matricardi, Marco; Odintsova, Tatyana A.; Payne, Vivienne H.; Thorne, Peter W.; Tretyakov, Mikhail Yu.; Wang, Junhong

    2016-05-01

    Several recent studies have observed systematic differences between measurements in the 183.31 GHz water vapor line by space-borne sounders and calculations using radiative transfer models, with inputs from either radiosondes (radiosonde observations, RAOBs) or short-range forecasts by numerical weather prediction (NWP) models. This paper discusses all the relevant categories of observation-based or model-based data, quantifies their uncertainties and separates biases that could be common to all causes from those attributable to a particular cause. Reference observations from radiosondes, Global Navigation Satellite System (GNSS) receivers, differential absorption lidar (DIAL) and Raman lidar are thus overviewed. Biases arising from their calibration procedures, NWP models and data assimilation, instrument biases and radiative transfer models (both the models themselves and the underlying spectroscopy) are presented and discussed. Although presently no single process in the comparisons seems capable of explaining the observed structure of bias, recommendations are made in order to better understand the causes.

  6. Systematic identification and correction of annotation errors in the genetic interaction map of Saccharomyces cerevisiae.

    PubMed

    Atias, Nir; Kupiec, Martin; Sharan, Roded

    2016-03-18

    The yeast mutant collections are a fundamental tool in deciphering genomic organization and function. Over the last decade, they have been used for the systematic exploration of ∼6 000 000 double gene mutants, identifying and cataloging genetic interactions among them. Here we studied the extent to which these data are prone to neighboring gene effects (NGEs), a phenomenon by which the deletion of a gene affects the expression of adjacent genes along the genome. Analyzing ∼90,000 negative genetic interactions observed to date, we found that more than 10% of them are incorrectly annotated due to NGEs. We developed a novel algorithm, GINGER, to identify and correct erroneous interaction annotations. We validated the algorithm using a comparative analysis of interactions from Schizosaccharomyces pombe. We further showed that our predictions are significantly more concordant with diverse biological data compared to their mis-annotated counterparts. Our work uncovered about 9500 new genetic interactions in yeast. PMID:26602688

  7. No additional value of fusion techniques on anterior discectomy for neck pain: a systematic review.

    PubMed

    van Middelkoop, Marienke; Rubinstein, Sidney M; Ostelo, Raymond; van Tulder, Maurits W; Peul, Wilco; Koes, Bart W; Verhagen, Arianne P

    2012-11-01

    We aimed to assess the effects of additional fusion on surgical interventions to the cervical spine for patients with neck pain with or without radiculopathy or myelopathy by performing a systematic review. The search strategy outlined by the Cochrane Back Review Group (CBRG) was followed. The primary search was conducted in MEDLINE, EMBASE, CINAHL, CENTRAL and PEDro up to June 2011. Only randomised, controlled trials of adults with neck pain that evaluated at least one clinically relevant primary outcome measure (pain, functional status, recovery) were included. Two authors independently assessed the risk of bias by using the criteria recommended by the CBRG and extracted the data. Data were pooled using a random effects model. The quality of the evidence was rated using the GRADE method. In total, 10 randomised, controlled trials were identified comparing additional fusion upon anterior decompression techniques, including 2 studies with a low risk of bias. Results revealed no clinically relevant differences in recovery: the pooled risk difference in the short-term follow-up was -0.06 (95% confidence interval -0.22 to 0.10) and -0.07 (95% confidence interval -0.14 to 0.00) in the long-term follow-up. Pooled risk differences for pain and return to work all demonstrated no differences. There is no additional benefit of fusion techniques applied within an anterior discectomy procedure on pain, recovery and return to work. PMID:22818181

  8. SU-E-T-405: Robustness of Volumetric-Modulated Arc Therapy (VMAT) Plans to Systematic MLC Positional Errors

    SciTech Connect

    Qi, P; Xia, P

    2014-06-01

    Purpose: To evaluate the dosimetric impact of systematic MLC positional errors (PEs) on the quality of volumetric-modulated arc therapy (VMAT) plans. Methods: Five patients with head-and-neck cancer (HN) and five patients with prostate cancer were randomly chosen for this study. The clinically approved VMAT plans were designed with 2–4 coplanar arc beams with non-zero collimator angles in the Pinnacle planning system. The systematic MLC PEs of 0.5, 1.0, and 2.0 mm on both MLC banks were introduced into the original VMAT plans using an in-house program, and recalculated with the same planned Monitor Units in the Pinnacle system. For each patient, the original VMAT plans and plans with MLC PEs were evaluated according to the dose-volume histogram information and Gamma index analysis. Results: For one primary target, the ratio of V100 in the plans with 0.5, 1.0, and 2.0 mm MLC PEs to those in the clinical plans was 98.8 ± 2.2%, 97.9 ± 2.1%, 90.1 ± 9.0% for HN cases and 99.5 ± 3.2%, 98.9 ± 1.0%, 97.0 ± 2.5% for prostate cases. For all OARs, the relative difference of Dmean in all plans was less than 1.5%. With 2mm/2% criteria for Gamma analysis, the passing rates were 99.0 ± 1.5% for HN cases and 99.7 ± 0.3% for prostate cases between the planar doses from the original plans and the plans with 1.0 mm MLC errors. The corresponding Gamma passing rates dropped to 88.9 ± 5.3% for HN cases and 83.4 ± 3.2% for prostate cases when comparing planar doses from the original plans and the plans with 2.0 mm MLC errors. Conclusion: For VMAT plans, systematic MLC PEs up to 1.0 mm did not affect the plan quality in terms of target coverage, OAR sparing, and Gamma analysis with 2mm/2% criteria.
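
    A simplified one-dimensional version of the Gamma analysis used above (the clinical tool works on 2-D planar doses; this sketch assumes both dose arrays share the grid and uses global normalization to the reference maximum):

```python
import numpy as np

def gamma_1d(dose_ref, dose_eval, x_mm, dose_crit=0.02, dta_mm=2.0):
    """1-D global Gamma index: for each reference point, minimize the combined
    dose-difference / distance-to-agreement metric over all evaluated points."""
    norm = dose_ref.max()                            # global normalization
    gam = np.empty_like(dose_ref)
    for i, (xr, dr) in enumerate(zip(x_mm, dose_ref)):
        dd = (dose_eval - dr) / (dose_crit * norm)
        dx = (x_mm - xr) / dta_mm
        gam[i] = np.sqrt(dd ** 2 + dx ** 2).min()
    return gam

x = np.arange(0.0, 100.0, 0.5)                       # mm
ref = np.exp(-0.5 * ((x - 50) / 15) ** 2)            # toy dose profile
shifted = np.exp(-0.5 * ((x - 51) / 15) ** 2)        # 1 mm systematic shift (e.g. an MLC offset)

gamma = gamma_1d(ref, shifted, x)
print(f"passing rate (gamma <= 1): {100 * np.mean(gamma <= 1.0):.1f}%")
```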

  9. Refractive Error and Risk of Early or Late Age-Related Macular Degeneration: A Systematic Review and Meta-Analysis

    PubMed Central

    Li, Ying; Wang, JiWen; Zhong, XiaoJing; Tian, Zhen; Wu, Peipei; Zhao, Wenbo; Jin, Chenjin

    2014-01-01

    Objective: To summarize relevant evidence investigating the associations between refractive error and age-related macular degeneration (AMD). Design: Systematic review and meta-analysis. Methods: We searched Medline, Web of Science, and Cochrane databases as well as the reference lists of retrieved articles to identify studies that met the inclusion criteria. Extracted data were combined using a random-effects meta-analysis. Studies that were pertinent to our topic but did not meet the criteria for quantitative analysis were reported in a systematic review instead. Main outcome measures: Pooled odds ratios (ORs) and 95% confidence intervals (CIs) for the associations between refractive error (hyperopia, myopia, per-diopter increase in spherical equivalent [SE] toward hyperopia, per-millimeter increase in axial length [AL]) and AMD (early and late, prevalent and incident). Results: Fourteen studies comprising over 5800 patients were eligible. Significant associations were found between hyperopia, myopia, per-diopter increase in SE, per-millimeter increase in AL, and prevalent early AMD. The pooled ORs and 95% CIs were 1.13 (1.06–1.20), 0.75 (0.56–0.94), 1.10 (1.07–1.14), and 0.79 (0.73–0.85), respectively. The per-diopter increase in SE was also significantly associated with early AMD incidence (OR, 1.06; 95% CI, 1.02–1.10). However, no significant association was found between hyperopia or myopia and early AMD incidence. Furthermore, neither prevalent nor incident late AMD was associated with refractive error. Considerable heterogeneity was found among studies investigating the association between myopia and prevalent early AMD (P = 0.001, I² = 72.2%). Geographic location might play a role; the heterogeneity became non-significant after stratifying these studies into Asian and non-Asian subgroups. Conclusion: Refractive error is associated with early AMD but not with late AMD. More large-scale longitudinal studies are needed to further investigate such associations.

  10. Towards eliminating systematic errors caused by the experimental conditions in Biochemical Methane Potential (BMP) tests.

    PubMed

    Strömberg, Sten; Nistor, Mihaela; Liu, Jing

    2014-11-01

    The Biochemical Methane Potential (BMP) test is increasingly recognised as a tool for selecting and pricing biomass material for production of biogas. However, the results for the same substrate often differ between laboratories and much work to standardise such tests is still needed. In the current study, the effects from four environmental factors (i.e. ambient temperature and pressure, water vapour content and initial gas composition of the reactor headspace) on the degradation kinetics and the determined methane potential were evaluated with a 2⁴ full factorial design. Four substrates, with different biodegradation profiles, were investigated and the ambient temperature was found to be the most significant contributor to errors in the methane potential. Concerning the kinetics of the process, the environmental factors' impact on the calculated rate constants was negligible. The impact of the environmental factors on the kinetic parameters and methane potential from performing a BMP test at different geographical locations around the world was simulated by adjusting the data according to the ambient temperature and pressure of some chosen model sites. The largest effect on the methane potential was registered from tests performed at high altitudes due to a low ambient pressure. The results from this study illustrate the importance of considering the environmental factors' influence on volumetric gas measurement in BMP tests. This is essential to achieve trustworthy and standardised results that can be used by researchers and end users from all over the world. PMID:25151444
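
    A large part of the ambient-condition sensitivity discussed above comes from converting a volumetrically measured gas volume to dry standard conditions. A minimal ideal-gas sketch, with made-up readings and a Magnus-type estimate of the water vapour pressure, shows how temperature and (altitude-dependent) pressure enter the result.

    ```python
    # Minimal sketch of normalising a measured biogas volume to dry standard
    # conditions (0 degC, 101.325 kPa) with the ideal gas law. Readings are made up.
    import math

    def saturation_vapour_pressure_kpa(t_c):
        """Approximate water vapour saturation pressure (Magnus formula), in kPa."""
        return 0.61094 * math.exp(17.625 * t_c / (t_c + 243.04))

    def normalise_volume(v_ml, t_c, p_kpa, saturated=True):
        """Convert a measured gas volume to dry volume at 273.15 K and 101.325 kPa."""
        p_dry = p_kpa - (saturation_vapour_pressure_kpa(t_c) if saturated else 0.0)
        return v_ml * (p_dry / 101.325) * (273.15 / (273.15 + t_c))

    # Same raw 250 mL reading at two sites: roughly sea level vs. a high-altitude lab
    print(round(normalise_volume(250.0, t_c=25.0, p_kpa=101.3), 1))
    print(round(normalise_volume(250.0, t_c=25.0, p_kpa=80.0), 1))
    ```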

  11. Variations in Learning Rate: Student Classification Based on Systematic Residual Error Patterns across Practice Opportunities

    ERIC Educational Resources Information Center

    Liu, Ran; Koedinger, Kenneth R.

    2015-01-01

    A growing body of research suggests that accounting for student specific variability in educational data can improve modeling accuracy and may have implications for individualizing instruction. The Additive Factors Model (AFM), a logistic regression model used to fit educational data and discover/refine skill models of learning, contains a…

  12. The Systematic Error Test for PSF Correction in Weak Gravitational Lensing Shear Measurement By the ERA Method By Idealizing PSF

    NASA Astrophysics Data System (ADS)

    Okura, Yuki; Futamase, Toshifumi

    2016-08-01

    We improve the ellipticity of re-smeared artificial image (ERA) method of point-spread function (PSF) correction in a weak lensing shear analysis in order to treat the realistic shape of galaxies and the PSF. This is done by re-smearing the PSF and the observed galaxy image using a re-smearing function (RSF) and allows us to use a new PSF with a simple shape and to correct the PSF effect without any approximations or assumptions. We perform a numerical test to show that the method applied for galaxies and PSF with some complicated shapes can correct the PSF effect with a systematic error of less than 0.1%. We also apply the ERA method for real data of the Abell 1689 cluster to confirm that it is able to detect the systematic weak lensing shear pattern. The ERA method requires less than 0.1 or 1 s to correct the PSF for each object in a numerical test and a real data analysis, respectively.

  13. DtaRefinery: a software tool for elimination of systematic errors from parent ion mass measurements in tandem mass spectra datasets

    SciTech Connect

    Petyuk, Vladislav A.; Mayampurath, Anoop M.; Monroe, Matthew E.; Polpitiya, Ashoka D.; Purvine, Samuel O.; Anderson, Gordon A.; Camp, David G.; Smith, Richard D.

    2009-12-16

    Hybrid two-stage mass spectrometers capable of both highly accurate mass measurement and MS/MS fragmentation have become widely available in recent years and have allowed for significantly better discrimination between true and false MS/MS peptide identifications by applying relatively narrow windows for maximum allowable deviations for parent ion mass measurements. To fully gain the advantage of highly accurate parent ion mass measurements, it is important to limit systematic mass measurement errors. The DtaRefinery software tool can correct systematic errors in parent ion masses by reading a set of fragmentation spectra, searching for MS/MS peptide identifications, then fitting a model that can estimate systematic errors, and removing them. This results in a new fragmentation spectrum file with updated parent ion masses.
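
    A minimal sketch of the kind of recalibration the abstract describes is to model the systematic parent-ion mass error as a smooth function of a quantity such as m/z, using confidently identified peptides, and subtract it. The data and the simple polynomial error model below are illustrative assumptions, not DtaRefinery's actual model.

    ```python
    # Illustrative sketch: fit a smooth systematic mass-error model (ppm vs m/z)
    # from identified peptides and subtract it from the observed errors.
    import numpy as np

    rng = np.random.default_rng(5)
    mz = rng.uniform(400.0, 2000.0, 300)                   # precursor m/z of identified peptides
    true_drift = 2.0 + 0.003 * (mz - 1200.0)               # synthetic systematic error (ppm)
    observed_error_ppm = true_drift + rng.normal(0.0, 1.0, mz.size)

    coeffs = np.polyfit(mz, observed_error_ppm, deg=2)     # smooth systematic-error model
    corrected = observed_error_ppm - np.polyval(coeffs, mz)

    print(f"before: mean={observed_error_ppm.mean():+.2f} ppm, sd={observed_error_ppm.std():.2f}")
    print(f"after : mean={corrected.mean():+.2f} ppm, sd={corrected.std():.2f}")
    ```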

  14. Contrast evaluation of the polarimetric images of different targets in turbid medium: possible sources of systematic errors

    NASA Astrophysics Data System (ADS)

    Novikova, T.; Bénière, A.; Goudail, F.; De Martino, A.

    2010-04-01

    Subsurface polarimetric (differential polarization, degree of polarization or Mueller matrix) imaging of various targets in turbid media shows image contrast enhancement compared with total intensity measurements. The image contrast depends on the target immersion depth and on both target and background medium optical properties, such as scattering coefficient, absorption coefficient and anisotropy. The differential polarization image contrast is usually not the same for circularly and linearly polarized light. With linearly and circularly polarized light we acquired the orthogonal state contrast (OSC) images of reflecting, scattering and absorbing targets. The targets were positioned at various depths within a container filled with a polystyrene particle suspension in water. We also performed numerical Monte Carlo modelling of backscattering Mueller matrix images of the experimental set-up. Quite often the dimensions of the container, its shape and the optical properties of the container walls are not reported for similar experiments and numerical simulations. However, we found that, depending on the photon transport mean free path in the scattering medium, the above-mentioned parameters, as well as the multiple-target design, could all be sources of significant systematic errors in the evaluation of polarimetric image contrast. Thus, proper design of experiment geometry is of prime importance in order to remove the sources of possible artefacts in the image contrast evaluation and to make a correct choice between linear and circular polarization of the light for better target detection.

  15. Noise-induced systematic errors in ratio imaging: serious artefacts and correction with multi-resolution denoising.

    PubMed

    Wang, Yu-Li

    2007-11-01

    Ratio imaging is playing an increasingly important role in modern cell biology. Combined with ratiometric dyes or fluorescence resonance energy transfer (FRET) biosensors, the approach allows the detection of conformational changes and molecular interactions in living cells. However, the approach is conducted increasingly under limited signal-to-noise ratio (SNR), where noise from multiple images can easily accumulate and lead to substantial uncertainty in ratio values. This study demonstrates that a far more serious concern is systematic errors that generate artificially high ratio values at low SNR. Thus, uneven SNR alone may lead to significant variations in ratios among different regions of a cell. Although correct average ratios may be obtained by applying conventional noise reduction filters, such as a Gaussian filter before calculating the ratio, these filters have a limited performance at low SNR and are prone to artefacts such as generating discrete domains not found in the correct ratio image. Much more reliable restoration may be achieved with multi-resolution denoising filters that take into account the actual noise characteristics of the detector. These filters are also capable of restoring structural details and photometric accuracy, and may serve as a general tool for retrieving reliable information from low-light live cell images. PMID:17970912
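
    The central point of the abstract, that noise alone inflates ratio values at low SNR, can be reproduced with a small Monte Carlo experiment: even zero-mean noise in the denominator biases the expected ratio upward. The signal levels and noise values below are illustrative.

    ```python
    # Small Monte Carlo: with zero-mean noise, the expected ratio of two noisy
    # signals is biased high when the denominator SNR is low. Values illustrative.
    import numpy as np

    rng = np.random.default_rng(0)
    true_num, true_den = 100.0, 100.0        # true ratio = 1.0
    n = 200_000

    for sigma in (2.0, 10.0, 30.0, 60.0):    # increasing noise = decreasing SNR
        num = true_num + rng.normal(0.0, sigma, n)
        den = true_den + rng.normal(0.0, sigma, n)
        ratio = num / den
        # Keep only "usable" pixels, as an analysis might; this does not remove the bias
        ratio = ratio[(den > 0) & (ratio > 0) & (ratio < 10)]
        print(f"sigma={sigma:5.1f}  SNR={true_den / sigma:5.1f}  mean ratio={ratio.mean():.3f}")
    ```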

  16. Save now, pay later? Multi-period many-objective groundwater monitoring design given systematic model errors and uncertainty

    NASA Astrophysics Data System (ADS)

    Reed, P. M.; Kollat, J. B.

    2012-01-01

    This study demonstrates how many-objective long-term groundwater monitoring (LTGM) network design tradeoffs evolve across multiple management periods given systematic model errors (i.e., predictive bias), groundwater flow-and-transport forecasting uncertainties, and contaminant observation uncertainties. Our analysis utilizes the Adaptive Strategies for Sampling in Space and Time (ASSIST) framework, which is composed of three primary components: (1) bias-aware Ensemble Kalman Filtering, (2) many-objective hierarchical Bayesian optimization, and (3) interactive visual analytics for understanding spatiotemporal network design tradeoffs. A physical aquifer experiment is utilized to develop a severely challenging multi-period observation system simulation experiment (OSSE) that reflects the challenges and decisions faced in monitoring contaminated groundwater systems. The experimental aquifer OSSE shows both the influence and consequences of plume dynamics as well as alternative cost-savings strategies in shaping how LTGM many-objective tradeoffs evolve. Our findings highlight the need to move beyond least-cost, purely statistical monitoring frameworks to consider many-objective evaluations of LTGM tradeoffs. The ASSIST framework provides a highly flexible approach for measuring the value of observables that simultaneously improves how the data are used to inform decisions.

  17. Additives

    NASA Technical Reports Server (NTRS)

    Smalheer, C. V.

    1973-01-01

    The chemistry of lubricant additives is discussed to show what the additives are chemically and what functions they perform in the lubrication of various kinds of equipment. Current theories regarding the mode of action of lubricant additives are presented. The additive groups discussed include the following: (1) detergents and dispersants, (2) corrosion inhibitors, (3) antioxidants, (4) viscosity index improvers, (5) pour point depressants, and (6) antifouling agents.

  18. Adjustment of wind-drift effect for real-time systematic error correction in radar rainfall data

    NASA Astrophysics Data System (ADS)

    Dai, Qiang; Han, Dawei; Zhuo, Lu; Huang, Jing; Islam, Tanvir; Zhang, Shuliang

    An effective bias correction procedure using gauge measurement is a significant step for radar data processing to reduce the systematic error in hydrological applications. In these bias correction methods, the spatial matching of precipitation patterns between radar and gauge networks is an important premise. However, the wind-drift effect on radar measurement induces an inconsistent spatial relationship between radar and gauge measurements as the raindrops observed by radar do not fall vertically to the ground. Consequently, a rain gauge does not correspond to the radar pixel based on the projected location of the radar beam. In this study, we introduce an adjustment method to incorporate the wind-drift effect into a bias correction scheme. We first simulate the trajectory of raindrops in the air using downscaled three-dimensional wind data from the Weather Research and Forecasting (WRF) model and calculate the final location of raindrops on the ground. The displacement of rainfall is then estimated and a radar-gauge spatial relationship is reconstructed. Based on this, the local real-time biases of the bin-average radar data were estimated for 12 selected events. Then, the reference mean local gauge rainfall, mean local bias, and adjusted radar rainfall calculated with and without consideration of the wind-drift effect are compared for different events and locations. There are considerable differences among the three estimators, indicating that wind drift has a considerable impact on the real-time radar bias correction. Based on these facts, we suggest that bias correction schemes based on the spatial correlation between radar and gauge measurements should account for the wind-drift effect, and the proposed adjustment method is a promising solution to achieve this.
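
    The basic drift calculation, integrating the horizontal wind over the raindrop's fall from beam height to the ground, can be sketched as below. The constant terminal fall speed and the two-layer wind profile are simplifying assumptions; the study itself uses WRF-downscaled three-dimensional winds.

    ```python
    # Minimal sketch of the wind-drift idea: integrate the horizontal wind over the
    # fall of a raindrop from beam height to ground to estimate where it lands.

    def drift_displacement(beam_height_m, fall_speed_ms, wind_profile, dz=50.0):
        """Return (dx, dy) ground displacement in metres.

        wind_profile(z) -> (u, v) horizontal wind components in m/s at height z.
        """
        dx = dy = 0.0
        z = beam_height_m
        while z > 0.0:
            step = min(dz, z)
            u, v = wind_profile(z)
            dt = step / fall_speed_ms          # time to fall through this layer
            dx += u * dt
            dy += v * dt
            z -= step
        return dx, dy

    def simple_wind(z):
        # Hypothetical profile: stronger westerly aloft, weaker near the surface
        return (12.0 if z > 1000.0 else 6.0, 2.0)

    print(drift_displacement(beam_height_m=2000.0, fall_speed_ms=6.0, wind_profile=simple_wind))
    ```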

  19. The apparent British sea slope is caused by systematic errors in the levelling-based vertical datum

    NASA Astrophysics Data System (ADS)

    Penna, N. T.; Featherstone, W. E.; Gazeaux, J.; Bingham, R. J.

    2013-08-01

    The spirit-levelling-based British vertical datum (Ordnance Datum Newlyn) implies a south-north apparent slope in mean sea level of up to 53 mm deg⁻¹ latitude, due to the datum falling as one heads northwards. Although this apparent slope has been investigated since the 1960s, explanations of its origin have remained inconclusive. It has also been suggested that, rather than a slope, the British vertical datum includes a step of about 240 mm affecting all sites north of about 53°N. In either case, the British vertical datum may be of limited use for any study requiring accurate heights or changes in heights, such as testing geoid models, groundwater and hydrocarbon extraction, the calibration and validation of satellite-based digital terrain models, and the unification of vertical datums internationally. Within the last decade, however, based on an apparent reduction in the slope to only -12 mm deg⁻¹ latitude with respect to recent geoid models, it has been claimed that the British vertical datum does provide a physically meaningful surface for use in scientific applications. In this paper, we reinvestigate the presence of apparent south-north sea slopes around Britain and reported slopes in the vertical datum, using the EGM2008 global gravitational model, together with mean sea level and GPS data from British tide gauges, GPS ellipsoidal heights of 178 fundamental benchmarks across mainland Britain, and vertical deflection observations at 192 stations. We demonstrate a south-north slope in the British vertical datum of -(20-25) mm deg⁻¹ latitude with respect to both mean sea level (corrected for the ocean's mean dynamic topography and the inverse barometer response to atmospheric pressure loading) and the EGM2008 quasigeoid model, while EGM2008 is shown to exhibit a negligible slope of (2 ± 4) mm deg⁻¹ with respect to mean sea level. It is clear, therefore, that the slope can only arise from systematic errors in the levelling, although we are unable to isolate

  20. Edge profile analysis of Joint European Torus (JET) Thomson scattering data: Quantifying the systematic error due to edge localised mode synchronisation.

    PubMed

    Leyland, M J; Beurskens, M N A; Flanagan, J C; Frassinetti, L; Gibson, K J; Kempenaars, M; Maslov, M; Scannell, R

    2016-01-01

    The Joint European Torus (JET) high resolution Thomson scattering (HRTS) system measures radial electron temperature and density profiles. One of the key capabilities of this diagnostic is measuring the steep pressure gradient, termed the pedestal, at the edge of JET plasmas. The pedestal is susceptible to limiting instabilities, such as Edge Localised Modes (ELMs), characterised by a periodic collapse of the steep gradient region. A common method to extract the pedestal width, gradient, and height, used on numerous machines, is by performing a modified hyperbolic tangent (mtanh) fit to overlaid profiles selected from the same region of the ELM cycle. This process of overlaying profiles, termed ELM synchronisation, maximises the number of data points defining the pedestal region for a given phase of the ELM cycle. When fitting to HRTS profiles, it is necessary to incorporate the diagnostic radial instrument function, particularly important when considering the pedestal width. A deconvolved fit is determined by a forward convolution method requiring knowledge of only the instrument function and profiles. The systematic error due to the deconvolution technique incorporated into the JET pedestal fitting tool has been documented by Frassinetti et al. [Rev. Sci. Instrum. 83, 013506 (2012)]. This paper seeks to understand and quantify the systematic error introduced to the pedestal width due to ELM synchronisation. Synthetic profiles, generated with error bars and point-to-point variation characteristic of real HRTS profiles, are used to evaluate the deviation from the underlying pedestal width. We find on JET that the ELM synchronisation systematic error is negligible in comparison to the statistical error when assuming ten overlaid profiles (typical for a pre-ELM fit to HRTS profiles). This confirms that fitting a mtanh to ELM synchronised profiles is a robust and practical technique for extracting the pedestal structure. PMID:26827321
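
    A minimal sketch of an mtanh pedestal fit to synthetic profile data is given below. The parameterisation (pedestal height, position, width, core slope) follows one common form of the mtanh fit; no instrument-function deconvolution or ELM synchronisation is included.

    ```python
    # Minimal sketch of fitting a modified hyperbolic tangent (mtanh) pedestal to
    # synthetic edge-profile data; one common parameterisation, not the JET tool.
    import numpy as np
    from scipy.optimize import curve_fit

    def mtanh(x, slope):
        return ((1.0 + slope * x) * np.exp(x) - np.exp(-x)) / (np.exp(x) + np.exp(-x))

    def pedestal(r, height, offset, r_ped, width, core_slope):
        x = np.clip((r_ped - r) / (width / 2.0), -30.0, 30.0)   # clip to avoid overflow
        return offset + 0.5 * (height - offset) * (mtanh(x, core_slope) + 1.0)

    rng = np.random.default_rng(1)
    r = np.linspace(3.70, 3.95, 120)                       # major radius (m), synthetic
    truth = pedestal(r, 1.0, 0.05, 3.86, 0.03, 0.05)       # "true" profile
    data = truth * (1.0 + 0.08 * rng.normal(size=r.size))  # add ~8% scatter

    p0 = [1.0, 0.1, 3.85, 0.05, 0.0]                       # initial guess
    popt, _ = curve_fit(pedestal, r, data, p0=p0)
    print("fitted pedestal position, width:", popt[2], popt[3])
    ```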

  1. Statistical and systematic errors in the measurement of weak-lensing Minkowski functionals: Application to the Canada-France-Hawaii Lensing Survey

    SciTech Connect

    Shirasaki, Masato; Yoshida, Naoki

    2014-05-01

    The measurement of cosmic shear using weak gravitational lensing is a challenging task that involves a number of complicated procedures. We study in detail the systematic errors in the measurement of weak-lensing Minkowski Functionals (MFs). Specifically, we focus on systematics associated with galaxy shape measurements, photometric redshift errors, and shear calibration correction. We first generate mock weak-lensing catalogs that directly incorporate the actual observational characteristics of the Canada-France-Hawaii Lensing Survey (CFHTLenS). We then perform a Fisher analysis using the large set of mock catalogs for various cosmological models. We find that the statistical error associated with the observational effects degrades the cosmological parameter constraints by a factor of a few. The Subaru Hyper Suprime-Cam (HSC) survey with a sky coverage of ∼1400 deg² will constrain the dark energy equation-of-state parameter with an error of Δw₀ ∼ 0.25 by the lensing MFs alone, but biases induced by the systematics can be comparable to the 1σ error. We conclude that the lensing MFs are powerful statistics beyond the two-point statistics only if well-calibrated measurement of both the redshifts and the shapes of source galaxies is performed. Finally, we analyze the CFHTLenS data to explore the ability of the MFs to break degeneracies between a few cosmological parameters. Using a combined analysis of the MFs and the shear correlation function, we derive the matter density Ω_m0 = 0.256 +0.054/−0.046.

  2. Systematics of the family Plectopylidae in Vietnam with additional information on Chinese taxa (Gastropoda, Pulmonata, Stylommatophora)

    PubMed Central

    Páll-Gergely, Barna; Hunyadi, András; Ablett, Jonathan; Lương, Hào Văn; Naggs, Fred; Asami, Takahiro

    2015-01-01

    Vietnamese species from the family Plectopylidae are revised based on the type specimens of all known taxa, more than 600 historical non-type museum lots, and almost 200 newly-collected samples. Altogether more than 7000 specimens were investigated. The revision has revealed that species diversity of the Vietnamese Plectopylidae was previously overestimated. Overall, thirteen species names (anterides Gude, 1909, bavayi Gude, 1901, congesta Gude, 1898, fallax Gude, 1909, gouldingi Gude, 1909, hirsuta Möllendorff, 1901, jovia Mabille, 1887, moellendorffi Gude, 1901, persimilis Gude, 1901, pilsbryana Gude, 1901, soror Gude, 1908, tenuis Gude, 1901, verecunda Gude, 1909) were synonymised with other species. In addition to these, Gudeodiscus hemmeni sp. n. and Gudeodiscus messageri raheemi ssp. n. are described from north-western Vietnam. Sixteen species and two subspecies are recognized from Vietnam. The reproductive anatomy of eight taxa is described. Based on anatomical information, Halongella gen. n. is erected to include Plectopylis schlumbergeri and Plectopylis fruhstorferi. Additionally, the genus Gudeodiscus is subdivided into two subgenera (Gudeodiscus and Veludiscus subgen. n.) on the basis of the morphology of the reproductive anatomy and the radula. The Chinese Gudeodiscus phlyarius werneri Páll-Gergely, 2013 is moved to synonymy of Gudeodiscus phlyarius. A spermatophore was found in the organ situated next to the gametolytic sac in one specimen. This suggests that this organ in the Plectopylidae is a diverticulum. Statistically significant evidence is presented for the presence of calcareous hook-like granules inside the penis being associated with the absence of embryos in the uterus in four genera. This suggests that these probably play a role in mating periods before disappearing when embryos develop. Sicradiscus mansuyi is reported from China for the first time. PMID:25632253

  3. A New Approach to Detection of Systematic Errors in Secondary Substation Monitoring Equipment Based on Short Term Load Forecasting

    PubMed Central

    Moriano, Javier; Rodríguez, Francisco Javier; Martín, Pedro; Jiménez, Jose Antonio; Vuksanovic, Branislav

    2016-01-01

    In recent years, Secondary Substations (SSs) are being provided with equipment that allows their full management. This is particularly useful not only for monitoring and planning purposes but also for detecting erroneous measurements, which could negatively affect the performance of the SS. On the other hand, load forecasting is extremely important since it helps electricity companies to make crucial decisions regarding purchasing and generating electric power, load switching, and infrastructure development. In this regard, Short Term Load Forecasting (STLF) allows the electric power load to be predicted over an interval ranging from one hour to one week. However, important issues concerning error detection by employing STLF have not been specifically addressed until now. This paper proposes a novel STLF-based approach to the detection of gain and offset errors introduced by the measurement equipment. The implemented system has been tested against real power load data provided by electricity suppliers. Different gain and offset error levels are successfully detected. PMID:26771613

  4. A New Approach to Detection of Systematic Errors in Secondary Substation Monitoring Equipment Based on Short Term Load Forecasting.

    PubMed

    Moriano, Javier; Rodríguez, Francisco Javier; Martín, Pedro; Jiménez, Jose Antonio; Vuksanovic, Branislav

    2016-01-01

    In recent years, Secondary Substations (SSs) are being provided with equipment that allows their full management. This is particularly useful not only for monitoring and planning purposes but also for detecting erroneous measurements, which could negatively affect the performance of the SS. On the other hand, load forecasting is extremely important since it helps electricity companies to make crucial decisions regarding purchasing and generating electric power, load switching, and infrastructure development. In this regard, Short Term Load Forecasting (STLF) allows the electric power load to be predicted over an interval ranging from one hour to one week. However, important issues concerning error detection by employing STLF have not been specifically addressed until now. This paper proposes a novel STLF-based approach to the detection of gain and offset errors introduced by the measurement equipment. The implemented system has been tested against real power load data provided by electricity suppliers. Different gain and offset error levels are successfully detected. PMID:26771613
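
    The core idea can be sketched as follows: regress the metered load against its short-term forecast, so that a gain error in the measurement chain shows up as a slope away from 1 and an offset error as a nonzero intercept. The data are synthetic and this is only an illustration of the principle, not the paper's algorithm.

    ```python
    # Sketch of STLF-based error detection: compare metered load with a forecast
    # stand-in; a gain error appears as slope != 1, an offset error as a nonzero
    # intercept. Synthetic data only.
    import numpy as np

    rng = np.random.default_rng(42)
    hours = np.arange(24 * 14)                                   # two weeks, hourly
    true_load = 400 + 150 * np.sin(2 * np.pi * hours / 24) + rng.normal(0, 15, hours.size)
    forecast = true_load + rng.normal(0, 10, hours.size)         # imperfect STLF stand-in

    gain, offset = 1.08, 25.0                                    # simulated faulty metering
    measured = gain * true_load + offset

    # Least-squares line; note that forecast error slightly attenuates the slope estimate
    slope, intercept = np.polyfit(forecast, measured, 1)
    print(f"estimated gain ~ {slope:.3f}, estimated offset ~ {intercept:.1f}")
    if abs(slope - 1.0) > 0.05 or abs(intercept) > 10.0:
        print("flag: possible systematic measurement error at this substation")
    ```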

  5. Systematic analysis of video data from different human-robot interaction studies: a categorization of social signals during error situations.

    PubMed

    Giuliani, Manuel; Mirnig, Nicole; Stollnberger, Gerald; Stadler, Susanne; Buchner, Roland; Tscheligi, Manfred

    2015-01-01

    Human-robot interactions are often affected by error situations that are caused by either the robot or the human. Therefore, robots would profit from the ability to recognize when error situations occur. We investigated the verbal and non-verbal social signals that humans show when error situations occur in human-robot interaction experiments. For that, we analyzed 201 videos of five human-robot interaction user studies with varying tasks from four independent projects. The analysis shows that there are two types of error situations: social norm violations and technical failures. Social norm violations are situations in which the robot does not adhere to the underlying social script of the interaction. Technical failures are caused by technical shortcomings of the robot. The results of the video analysis show that the study participants use many head movements and very few gestures, but they often smile, when in an error situation with the robot. Another result is that the participants sometimes stop moving at the beginning of error situations. We also found that the participants talked more in the case of social norm violations and less during technical failures. Finally, the participants use fewer non-verbal social signals (for example smiling, nodding, and head shaking), when they are interacting with the robot alone and no experimenter or other human is present. The results suggest that participants do not see the robot as a social interaction partner with comparable communication skills. Our findings have implications for builders and evaluators of human-robot interaction systems. The builders need to consider including modules for recognition and classification of head movements to the robot input channels. The evaluators need to make sure that the presence of an experimenter does not skew the results of their user studies. PMID:26217266

  6. Systematic analysis of video data from different human–robot interaction studies: a categorization of social signals during error situations

    PubMed Central

    Giuliani, Manuel; Mirnig, Nicole; Stollnberger, Gerald; Stadler, Susanne; Buchner, Roland; Tscheligi, Manfred

    2015-01-01

    Human–robot interactions are often affected by error situations that are caused by either the robot or the human. Therefore, robots would profit from the ability to recognize when error situations occur. We investigated the verbal and non-verbal social signals that humans show when error situations occur in human–robot interaction experiments. For that, we analyzed 201 videos of five human–robot interaction user studies with varying tasks from four independent projects. The analysis shows that there are two types of error situations: social norm violations and technical failures. Social norm violations are situations in which the robot does not adhere to the underlying social script of the interaction. Technical failures are caused by technical shortcomings of the robot. The results of the video analysis show that the study participants use many head movements and very few gestures, but they often smile, when in an error situation with the robot. Another result is that the participants sometimes stop moving at the beginning of error situations. We also found that the participants talked more in the case of social norm violations and less during technical failures. Finally, the participants use fewer non-verbal social signals (for example smiling, nodding, and head shaking), when they are interacting with the robot alone and no experimenter or other human is present. The results suggest that participants do not see the robot as a social interaction partner with comparable communication skills. Our findings have implications for builders and evaluators of human–robot interaction systems. The builders need to consider including modules for recognition and classification of head movements to the robot input channels. The evaluators need to make sure that the presence of an experimenter does not skew the results of their user studies. PMID:26217266

  7. Performance analysis for time-frequency MUSIC algorithm in presence of both additive noise and array calibration errors

    NASA Astrophysics Data System (ADS)

    Khodja, Mohamed; Belouchrani, Adel; Abed-Meraim, Karim

    2012-12-01

    This article deals with the application of Spatial Time-Frequency Distribution (STFD) to the direction finding problem using the Multiple Signal Classification (MUSIC) algorithm. A comparative performance analysis is performed for the method under consideration with respect to that using the data covariance matrix when the received array signals are subject to calibration errors in a non-stationary environment. A unified analytical expression of the Direction Of Arrival (DOA) estimation error is derived for both methods. Numerical results show the effect of the parameters intervening in the derived expression on the algorithm performance. It is particularly observed that for low Signal to Noise Ratio (SNR) and high Signal to sensor Perturbation Ratio (SPR) the STFD method gives better performance, while for high SNR and for the same SPR both methods give similar performance.
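
    For reference, a sketch of conventional covariance-based MUSIC for a uniform linear array is given below; the STFD variant studied in the article builds the same subspace step on a spatial time-frequency distribution matrix instead of the sample covariance, which is not reproduced here. All array and signal parameters are synthetic.

    ```python
    # Reference sketch of conventional (covariance-based) MUSIC DOA estimation
    # for a uniform linear array with two narrowband sources. Synthetic data.
    import numpy as np

    rng = np.random.default_rng(0)
    m, n_snap, d = 8, 400, 0.5                  # sensors, snapshots, spacing (wavelengths)
    true_doas = np.deg2rad([-20.0, 25.0])       # two sources

    def steering(theta):
        return np.exp(-2j * np.pi * d * np.arange(m)[:, None] * np.sin(theta))

    a = steering(np.asarray(true_doas))                       # m x 2 steering matrix
    s = (rng.normal(size=(2, n_snap)) + 1j * rng.normal(size=(2, n_snap))) / np.sqrt(2)
    noise = 0.1 * (rng.normal(size=(m, n_snap)) + 1j * rng.normal(size=(m, n_snap)))
    x = a @ s + noise                                         # array snapshots

    r = x @ x.conj().T / n_snap                               # sample covariance
    eigval, eigvec = np.linalg.eigh(r)                        # ascending eigenvalues
    en = eigvec[:, : m - 2]                                   # noise subspace (m - #sources)

    scan = np.deg2rad(np.linspace(-90, 90, 721))
    a_scan = steering(scan)
    p_music = 1.0 / np.sum(np.abs(en.conj().T @ a_scan) ** 2, axis=0)

    # Pick the two largest local maxima of the pseudospectrum
    is_peak = (p_music[1:-1] > p_music[:-2]) & (p_music[1:-1] > p_music[2:])
    peak_idx = np.where(is_peak)[0] + 1
    top2 = peak_idx[np.argsort(p_music[peak_idx])[-2:]]
    print("estimated DOAs (deg):", np.sort(np.rad2deg(scan[top2])))
    ```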

  8. Toward reducing systematic errors in NWP - cross-evaluation of common physics from 6h-regional to 6d-global to 6mon-coupled applications

    NASA Astrophysics Data System (ADS)

    Benjamin, S.

    2015-12-01

    An integrated evaluation system against gridded data and observations is being applied against global models (FIM, GFS) and regional models (WRF-ARW applications for RAP/HRRR). An overview will be presented on wind, relative humidity, and temperature model errors as measured against rawinsonde and aircraft observations in common at 12h forecast duration for global and regional models. Systematic errors common to both applications will be presented. A common problem with deficient cloud cover has been evident in both 6h (3km HRRR-WRF-ARW) regional forecasts and 6-month coupled-global (FIM-HYCOM) forecasts, allowing improvements in a common deep/shallow convection scheme (Grell-Freitas) with subgrid-scale clouds to be evaluated across time scales.

  9. Impact of random and systematic recall errors and selection bias in case-control studies on mobile phone use and brain tumors in adolescents (CEFALO study).

    PubMed

    Aydin, Denis; Feychting, Maria; Schüz, Joachim; Andersen, Tina Veje; Poulsen, Aslak Harbo; Prochazka, Michaela; Klaeboe, Lars; Kuehni, Claudia E; Tynes, Tore; Röösli, Martin

    2011-07-01

    Whether the use of mobile phones is a risk factor for brain tumors in adolescents is currently being studied. Case-control studies investigating this possible relationship are prone to recall error and selection bias. We assessed the potential impact of random and systematic recall error and selection bias on odds ratios (ORs) by performing simulations based on real data from an ongoing case-control study of mobile phones and brain tumor risk in children and adolescents (CEFALO study). Simulations were conducted for two mobile phone exposure categories: regular and heavy use. Our choice of levels of recall error was guided by a validation study that compared objective network operator data with the self-reported amount of mobile phone use in CEFALO. In our validation study, cases overestimated their number of calls by 9% on average and controls by 34%. Cases also overestimated their duration of calls by 52% on average and controls by 163%. The participation rates in CEFALO were 83% for cases and 71% for controls. In a variety of scenarios, the combined impact of recall error and selection bias on the estimated ORs was complex. These simulations are useful for the interpretation of previous case-control studies on brain tumor and mobile phone use in adults as well as for the interpretation of future studies on adolescents. PMID:21294138
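
    A toy version of such a simulation is sketched below: differential over-reporting (roughly the 9% for cases and 34% for controls quoted above) is applied to identical true usage distributions, and the exposure odds ratio for "heavy use" is recomputed. The distributions, cutoff and noise levels are illustrative, and this is not the CEFALO simulation framework.

    ```python
    # Toy simulation: differential recall error turns a true OR of ~1 into a
    # biased exposure odds ratio. Illustrative parameters only.
    import numpy as np

    rng = np.random.default_rng(7)
    n_cases, n_controls = 350, 650
    # True weekly call counts, assumed identically distributed (true OR ~ 1)
    true_cases = rng.lognormal(mean=2.0, sigma=0.8, size=n_cases)
    true_controls = rng.lognormal(mean=2.0, sigma=0.8, size=n_controls)

    def odds_ratio(exp_cases, exp_controls, cutoff):
        a = np.sum(exp_cases >= cutoff); b = np.sum(exp_cases < cutoff)
        c = np.sum(exp_controls >= cutoff); d = np.sum(exp_controls < cutoff)
        return (a * d) / (b * c)

    cutoff = np.quantile(np.r_[true_cases, true_controls], 0.75)   # "heavy use" threshold
    print("OR without recall error:", round(odds_ratio(true_cases, true_controls, cutoff), 2))

    # Differential over-reporting plus individual random recall noise
    rep_cases = true_cases * 1.09 * rng.lognormal(0.0, 0.3, n_cases)
    rep_controls = true_controls * 1.34 * rng.lognormal(0.0, 0.3, n_controls)
    print("OR with recall error:   ", round(odds_ratio(rep_cases, rep_controls, cutoff), 2))
    ```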

  10. The effectiveness of computerized order entry at reducing preventable adverse drug events and medication errors in hospital settings: a systematic review and meta-analysis

    PubMed Central

    2014-01-01

    Background: The Health Information Technology for Economic and Clinical Health (HITECH) Act subsidizes implementation by hospitals of electronic health records with computerized provider order entry (CPOE), which may reduce patient injuries caused by medication errors (preventable adverse drug events, pADEs). Effects on pADEs have not been rigorously quantified, and effects on medication errors have been variable. The objectives of this analysis were to assess the effectiveness of CPOE at reducing pADEs in hospital-related settings, and examine reasons for heterogeneous effects on medication errors. Methods: Articles were identified using MEDLINE, Cochrane Library, EconLit, web-based databases, and bibliographies of previous systematic reviews (September 2013). Eligible studies compared CPOE with paper-order entry in acute care hospitals, and examined diverse pADEs or medication errors. Studies on children or with limited event-detection methods were excluded. Two investigators extracted data on events and factors potentially associated with effectiveness. We used random effects models to pool data. Results: Sixteen studies addressing medication errors met pooling criteria; six also addressed pADEs. Thirteen studies used pre-post designs. Compared with paper-order entry, CPOE was associated with half as many pADEs (pooled risk ratio (RR) = 0.47, 95% CI 0.31 to 0.71) and medication errors (RR = 0.46, 95% CI 0.35 to 0.60). Regarding reasons for heterogeneous effects on medication errors, five intervention factors and two contextual factors were sufficiently reported to support subgroup analyses or meta-regression. Differences between commercial versus homegrown systems, presence and sophistication of clinical decision support, hospital-wide versus limited implementation, and US versus non-US studies were not significant, nor was timing of publication. Higher baseline rates of medication errors predicted greater reductions (P < 0.001). Other context and

  11. Influence of convective parameterization on the systematic errors of Climate Forecast System (CFS) model over the Indian monsoon region from an extended range forecast perspective

    NASA Astrophysics Data System (ADS)

    Pattnaik, S.; Abhilash, S.; De, S.; Sahai, A. K.; Phani, R.; Goswami, B. N.

    2013-07-01

    This study investigates the influence of Simplified Arakawa Schubert (SAS) and Relax Arakawa Schubert (RAS) cumulus parameterization schemes on coupled Climate Forecast System version 1 (CFS-1, T62L64) retrospective forecasts over the Indian monsoon region from an extended range forecast perspective. The forecast data sets comprise 45 days of model integrations based on 31 different initial conditions at pentad intervals starting from 1 May to 28 September for the years 2001 to 2007. It is found that mean climatological features of the Indian summer monsoon months (JJAS) are reasonably simulated by both versions (i.e. SAS and RAS) of the model; however, strong cross-equatorial flow and excess stratiform rainfall are noted in RAS compared to SAS. Both versions of the model overestimated the apparent heat source and moisture sink compared to the NCEP/NCAR reanalysis. The prognosis evaluation of the daily forecast climatology reveals robust systematic warming (moistening) biases in RAS and cooling (drying) biases in SAS, particularly at the middle and upper troposphere of the model. Using error energy/variance and root mean square error methodology, it is also established that the major contribution to the model's total error comes from the systematic component of the model error. It is also found that the forecast error growth of temperature in RAS is less than that of SAS; however, the scenario is reversed for moisture errors, although the difference of moisture errors between these two forecasts is not very large compared to that of temperature errors. Broadly, it is found that both versions of the model underestimate (overestimate) the rainfall area and amount over the Indian land region (and the neighbouring oceanic region). The rainfall forecast results at pentad intervals show that SAS and RAS have good prediction skill over the Indian monsoon core zone and the Arabian Sea. There is less excess rainfall particularly over the oceanic region in RAS up to 30 days of
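
    The attribution of most of the total error to its systematic component rests on the standard decomposition of the mean squared error into a squared bias plus the variance of the error. A minimal sketch with synthetic forecast-minus-analysis values:

    ```python
    # Minimal sketch of splitting forecast error into systematic and random parts:
    #   MSE = (mean error)^2 + variance of the error  (bias^2 + random component).
    # The forecast/analysis values are synthetic.
    import numpy as np

    rng = np.random.default_rng(3)
    analysis = 300.0 + 5.0 * rng.normal(size=1000)           # "truth" (e.g. reanalysis T, K)
    forecast = analysis + 1.5 + 2.0 * rng.normal(size=1000)  # warm bias plus random error

    err = forecast - analysis
    mse = np.mean(err ** 2)
    systematic = np.mean(err) ** 2        # squared bias
    random_part = np.var(err)             # variance about the bias
    print(f"MSE={mse:.2f}  systematic={systematic:.2f}  random={random_part:.2f}")
    print("systematic fraction of total error:", round(systematic / mse, 2))
    ```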

  12. Functional mapping of left parietal areas involved in simple addition and multiplication. A single-case study of qualitative analysis of errors.

    PubMed

    Della Puppa, Alessandro; De Pellegrin, Serena; Salillas, Elena; Grego, Alberto; Lazzarini, Anna; Vallesi, Antonino; Saladini, Marina; Semenza, Carlo

    2015-09-01

    All electrostimulation studies on arithmetic have so far solely reported general errors. Nonetheless, a classification of the errors during stimulation can inform us about underlying arithmetic processes. The present electrostimulation study was performed in a case of left parietal glioma. The patient's erroneous responses suggested that calculation was mainly applied for addition and a combination of retrieval and calculation was mainly applied for multiplication. The findings of the present single-case study encourage follow up with further data collection with the same paradigm. PMID:24646158

  13. Elimination of 'ghost'-effect-related systematic error in metrology of X-ray optics with a long trace profiler

    SciTech Connect

    Yashchuk, Valeriy V.; Irick, Steve C.; MacDowell, Alastair A.

    2005-04-28

    A data acquisition technique and relevant program for suppression of one of the systematic effects, namely the "ghost" effect, of a second generation long trace profiler (LTP) is described. The "ghost" effect arises when there is an unavoidable cross-contamination of the LTP sample and reference signals into one another, leading to a systematic perturbation in the recorded interference patterns and, therefore, a systematic variation of the measured slope trace. Perturbations of about 1-2 µrad have been observed with a cylindrically shaped X-ray mirror. Even stronger "ghost" effects show up in an LTP measurement with a mirror having a toroidal surface figure. The developed technique employs separate measurement of the "ghost"-effect-related interference patterns in the sample and the reference arms and then subtraction of the "ghost" patterns from the sample and the reference interference patterns. The procedure preserves the advantage of simultaneously measuring the sample and reference signals. The effectiveness of the technique is illustrated with LTP metrology of a variety of X-ray mirrors.

  14. Meta-analysis of gene–environment-wide association scans accounting for education level identifies additional loci for refractive error

    PubMed Central

    Fan, Qiao; Verhoeven, Virginie J. M.; Wojciechowski, Robert; Barathi, Veluchamy A.; Hysi, Pirro G.; Guggenheim, Jeremy A.; Höhn, René; Vitart, Veronique; Khawaja, Anthony P.; Yamashiro, Kenji; Hosseini, S Mohsen; Lehtimäki, Terho; Lu, Yi; Haller, Toomas; Xie, Jing; Delcourt, Cécile; Pirastu, Mario; Wedenoja, Juho; Gharahkhani, Puya; Venturini, Cristina; Miyake, Masahiro; Hewitt, Alex W.; Guo, Xiaobo; Mazur, Johanna; Huffman, Jenifer E.; Williams, Katie M.; Polasek, Ozren; Campbell, Harry; Rudan, Igor; Vatavuk, Zoran; Wilson, James F.; Joshi, Peter K.; McMahon, George; St Pourcain, Beate; Evans, David M.; Simpson, Claire L.; Schwantes-An, Tae-Hwi; Igo, Robert P.; Mirshahi, Alireza; Cougnard-Gregoire, Audrey; Bellenguez, Céline; Blettner, Maria; Raitakari, Olli; Kähönen, Mika; Seppala, Ilkka; Zeller, Tanja; Meitinger, Thomas; Ried, Janina S.; Gieger, Christian; Portas, Laura; van Leeuwen, Elisabeth M.; Amin, Najaf; Uitterlinden, André G.; Rivadeneira, Fernando; Hofman, Albert; Vingerling, Johannes R.; Wang, Ya Xing; Wang, Xu; Tai-Hui Boh, Eileen; Ikram, M. Kamran; Sabanayagam, Charumathi; Gupta, Preeti; Tan, Vincent; Zhou, Lei; Ho, Candice E. H.; Lim, Wan'e; Beuerman, Roger W.; Siantar, Rosalynn; Tai, E-Shyong; Vithana, Eranga; Mihailov, Evelin; Khor, Chiea-Chuen; Hayward, Caroline; Luben, Robert N.; Foster, Paul J.; Klein, Barbara E. K.; Klein, Ronald; Wong, Hoi-Suen; Mitchell, Paul; Metspalu, Andres; Aung, Tin; Young, Terri L.; He, Mingguang; Pärssinen, Olavi; van Duijn, Cornelia M.; Jin Wang, Jie; Williams, Cathy; Jonas, Jost B.; Teo, Yik-Ying; Mackey, David A.; Oexle, Konrad; Yoshimura, Nagahisa; Paterson, Andrew D.; Pfeiffer, Norbert; Wong, Tien-Yin; Baird, Paul N.; Stambolian, Dwight; Wilson, Joan E. Bailey; Cheng, Ching-Yu; Hammond, Christopher J.; Klaver, Caroline C. W.; Saw, Seang-Mei; Rahi, Jugnoo S.; Korobelnik, Jean-François; Kemp, John P.; Timpson, Nicholas J.; Smith, George Davey; Craig, Jamie E.; Burdon, Kathryn P.; Fogarty, Rhys D.; Iyengar, Sudha K.; Chew, Emily; Janmahasatian, Sarayut; Martin, Nicholas G.; MacGregor, Stuart; Xu, Liang; Schache, Maria; Nangia, Vinay; Panda-Jonas, Songhomitra; Wright, Alan F.; Fondran, Jeremy R.; Lass, Jonathan H.; Feng, Sheng; Zhao, Jing Hua; Khaw, Kay-Tee; Wareham, Nick J.; Rantanen, Taina; Kaprio, Jaakko; Pang, Chi Pui; Chen, Li Jia; Tam, Pancy O.; Jhanji, Vishal; Young, Alvin L.; Döring, Angela; Raffel, Leslie J.; Cotch, Mary-Frances; Li, Xiaohui; Yip, Shea Ping; Yap, Maurice K.H.; Biino, Ginevra; Vaccargiu, Simona; Fossarello, Maurizio; Fleck, Brian; Yazar, Seyhan; Tideman, Jan Willem L.; Tedja, Milly; Deangelis, Margaret M.; Morrison, Margaux; Farrer, Lindsay; Zhou, Xiangtian; Chen, Wei; Mizuki, Nobuhisa; Meguro, Akira; Mäkelä, Kari Matti

    2016-01-01

    Myopia is the most common human eye disorder and it results from complex genetic and environmental causes. The rapidly increasing prevalence of myopia poses a major public health challenge. Here, the CREAM consortium performs a joint meta-analysis to test single-nucleotide polymorphism (SNP) main effects and SNP × education interaction effects on refractive error in 40,036 adults from 25 studies of European ancestry and 10,315 adults from 9 studies of Asian ancestry. In European ancestry individuals, we identify six novel loci (FAM150B-ACP1, LINC00340, FBN1, DIS3L-MAP2K1, ARID2-SNAT1 and SLC14A2) associated with refractive error. In Asian populations, three genome-wide significant loci AREG, GABRR1 and PDE10A also exhibit strong interactions with education (P < 8.5 × 10⁻⁵), whereas the interactions are less evident in Europeans. The discovery of these loci represents an important advance in understanding how gene and environment interactions contribute to the heterogeneity of myopia. PMID:27020472

  15. Meta-analysis of gene-environment-wide association scans accounting for education level identifies additional loci for refractive error.

    PubMed

    Fan, Qiao; Verhoeven, Virginie J M; Wojciechowski, Robert; Barathi, Veluchamy A; Hysi, Pirro G; Guggenheim, Jeremy A; Höhn, René; Vitart, Veronique; Khawaja, Anthony P; Yamashiro, Kenji; Hosseini, S Mohsen; Lehtimäki, Terho; Lu, Yi; Haller, Toomas; Xie, Jing; Delcourt, Cécile; Pirastu, Mario; Wedenoja, Juho; Gharahkhani, Puya; Venturini, Cristina; Miyake, Masahiro; Hewitt, Alex W; Guo, Xiaobo; Mazur, Johanna; Huffman, Jenifer E; Williams, Katie M; Polasek, Ozren; Campbell, Harry; Rudan, Igor; Vatavuk, Zoran; Wilson, James F; Joshi, Peter K; McMahon, George; St Pourcain, Beate; Evans, David M; Simpson, Claire L; Schwantes-An, Tae-Hwi; Igo, Robert P; Mirshahi, Alireza; Cougnard-Gregoire, Audrey; Bellenguez, Céline; Blettner, Maria; Raitakari, Olli; Kähönen, Mika; Seppala, Ilkka; Zeller, Tanja; Meitinger, Thomas; Ried, Janina S; Gieger, Christian; Portas, Laura; van Leeuwen, Elisabeth M; Amin, Najaf; Uitterlinden, André G; Rivadeneira, Fernando; Hofman, Albert; Vingerling, Johannes R; Wang, Ya Xing; Wang, Xu; Tai-Hui Boh, Eileen; Ikram, M Kamran; Sabanayagam, Charumathi; Gupta, Preeti; Tan, Vincent; Zhou, Lei; Ho, Candice E H; Lim, Wan'e; Beuerman, Roger W; Siantar, Rosalynn; Tai, E-Shyong; Vithana, Eranga; Mihailov, Evelin; Khor, Chiea-Chuen; Hayward, Caroline; Luben, Robert N; Foster, Paul J; Klein, Barbara E K; Klein, Ronald; Wong, Hoi-Suen; Mitchell, Paul; Metspalu, Andres; Aung, Tin; Young, Terri L; He, Mingguang; Pärssinen, Olavi; van Duijn, Cornelia M; Jin Wang, Jie; Williams, Cathy; Jonas, Jost B; Teo, Yik-Ying; Mackey, David A; Oexle, Konrad; Yoshimura, Nagahisa; Paterson, Andrew D; Pfeiffer, Norbert; Wong, Tien-Yin; Baird, Paul N; Stambolian, Dwight; Wilson, Joan E Bailey; Cheng, Ching-Yu; Hammond, Christopher J; Klaver, Caroline C W; Saw, Seang-Mei; Rahi, Jugnoo S; Korobelnik, Jean-François; Kemp, John P; Timpson, Nicholas J; Smith, George Davey; Craig, Jamie E; Burdon, Kathryn P; Fogarty, Rhys D; Iyengar, Sudha K; Chew, Emily; Janmahasatian, Sarayut; Martin, Nicholas G; MacGregor, Stuart; Xu, Liang; Schache, Maria; Nangia, Vinay; Panda-Jonas, Songhomitra; Wright, Alan F; Fondran, Jeremy R; Lass, Jonathan H; Feng, Sheng; Zhao, Jing Hua; Khaw, Kay-Tee; Wareham, Nick J; Rantanen, Taina; Kaprio, Jaakko; Pang, Chi Pui; Chen, Li Jia; Tam, Pancy O; Jhanji, Vishal; Young, Alvin L; Döring, Angela; Raffel, Leslie J; Cotch, Mary-Frances; Li, Xiaohui; Yip, Shea Ping; Yap, Maurice K H; Biino, Ginevra; Vaccargiu, Simona; Fossarello, Maurizio; Fleck, Brian; Yazar, Seyhan; Tideman, Jan Willem L; Tedja, Milly; Deangelis, Margaret M; Morrison, Margaux; Farrer, Lindsay; Zhou, Xiangtian; Chen, Wei; Mizuki, Nobuhisa; Meguro, Akira; Mäkelä, Kari Matti

    2016-01-01

    Myopia is the most common human eye disorder and it results from complex genetic and environmental causes. The rapidly increasing prevalence of myopia poses a major public health challenge. Here, the CREAM consortium performs a joint meta-analysis to test single-nucleotide polymorphism (SNP) main effects and SNP × education interaction effects on refractive error in 40,036 adults from 25 studies of European ancestry and 10,315 adults from 9 studies of Asian ancestry. In European ancestry individuals, we identify six novel loci (FAM150B-ACP1, LINC00340, FBN1, DIS3L-MAP2K1, ARID2-SNAT1 and SLC14A2) associated with refractive error. In Asian populations, three genome-wide significant loci AREG, GABRR1 and PDE10A also exhibit strong interactions with education (P < 8.5 × 10⁻⁵), whereas the interactions are less evident in Europeans. The discovery of these loci represents an important advance in understanding how gene and environment interactions contribute to the heterogeneity of myopia. PMID:27020472

  16. Toward a Framework for Systematic Error Modeling of NASA Spaceborne Radar with NOAA/NSSL Ground Radar-Based National Mosaic QPE

    NASA Technical Reports Server (NTRS)

    Kirstetter, Pierre-Emmanuel; Hong, Y.; Gourley, J. J.; Chen, S.; Flamig, Z.; Zhang, J.; Howard, K.; Schwaller, M.; Petersen, W.; Amitai, E.

    2011-01-01

    Characterization of the error associated with satellite rainfall estimates is a necessary component of deterministic and probabilistic frameworks involving spaceborne passive and active microwave measurements for applications ranging from water budget studies to forecasting natural hazards related to extreme rainfall events. We focus here on the error structure of NASA's Tropical Rainfall Measuring Mission (TRMM) Precipitation Radar (PR) quantitative precipitation estimation (QPE) at ground. The problem is addressed by comparison of PR QPEs with reference values derived from ground-based measurements using the NOAA/NSSL ground radar-based National Mosaic and QPE system (NMQ/Q2). A preliminary investigation of this subject has been carried out at the PR estimation scale (instantaneous and 5 km) using a three-month data sample in the southern part of the US. The primary contribution of this study is the presentation of the detailed steps required to derive a trustworthy reference rainfall dataset from Q2 at the PR pixel resolution. It relies on a bias correction and a radar quality index, both of which provide a basis to filter out the less trustworthy Q2 values. Several aspects of PR errors are revealed and quantified including sensitivity to the processing steps with the reference rainfall, comparisons of rainfall detectability and rainfall rate distributions, spatial representativeness of error, and separation of systematic biases and random errors. The methodology and framework developed herein apply more generally to rainfall rate estimates from other sensors onboard low-earth orbiting satellites such as microwave imagers and dual-wavelength radars such as with the Global Precipitation Measurement (GPM) mission.

  17. Effectiveness of Barcoding for Reducing Patient Specimen and Laboratory Testing Identification Errors: A Laboratory Medicine Best Practices Systematic Review and Meta-Analysis

    PubMed Central

    Snyder, Susan R.; Favoretto, Alessandra M.; Derzon, James H.; Christenson, Robert; Kahn, Stephen; Shaw, Colleen; Baetz, Rich Ann; Mass, Diana; Fantz, Corrine; Raab, Stephen; Tanasijevic, Milenko; Liebow, Edward B.

    2015-01-01

    Objectives: This is the first systematic review of the effectiveness of barcoding practices for reducing patient specimen and laboratory testing identification errors. Design and Methods: The CDC-funded Laboratory Medicine Best Practices Initiative systematic review methods for quality improvement practices were used. Results: A total of 17 observational studies reporting on barcoding systems are included in the body of evidence; 10 for patient specimens and 7 for point-of-care testing. All 17 studies favored barcoding, with meta-analysis mean odds ratios for barcoding systems of 4.39 (95% CI: 3.05 – 6.32) and for point-of-care testing of 5.93 (95% CI: 5.28 – 6.67). Conclusions: Barcoding is effective for reducing patient specimen and laboratory testing identification errors in diverse hospital settings and is recommended as an evidence-based “best practice.” The overall strength of evidence rating is high and the effect size rating is substantial. Unpublished studies made an important contribution comprising almost half of the body of evidence. PMID:22750145

  18. Impact of contacting study authors to obtain additional data for systematic reviews: diagnostic accuracy studies for hepatic fibrosis

    PubMed Central

    2014-01-01

    Background: Seventeen of 172 included studies in a recent systematic review of blood tests for hepatic fibrosis or cirrhosis reported diagnostic accuracy results discordant from 2 × 2 tables, and 60 studies reported inadequate data to construct 2 × 2 tables. This study explores the yield of contacting authors of diagnostic accuracy studies and the impact on the systematic review findings. Methods: Sixty-six corresponding authors were sent letters requesting additional information or clarification of data from 77 studies. Data received from the authors were synthesized with data included in the previous review, and diagnostic accuracy sensitivities, specificities, and positive and negative likelihood ratios were recalculated. Results: Of the 66 authors, 68% were successfully contacted and 42% provided additional data for 29 out of 77 studies (38%). All authors who provided data at all did so by the third emailed request (ten authors provided data after one request). Authors of more recent studies were more likely to be located and provide data compared to authors of older studies. The effects of requests for additional data on the conclusions regarding the utility of blood tests to identify patients with clinically significant fibrosis or cirrhosis were generally small for ten out of 12 tests. Additional data resulted in reclassification (using median likelihood ratio estimates) from less useful to moderately useful or vice versa for the remaining two blood tests and enabled the calculation of an estimate for a third blood test for which previously the data had been insufficient to do so. We did not identify a clear pattern for the directional impact of additional data on estimates of diagnostic accuracy. Conclusions: We successfully contacted and received results from 42% of authors who provided data for 38% of included studies. Contacting authors of studies evaluating the diagnostic accuracy of serum biomarkers for hepatic fibrosis and cirrhosis in hepatitis C patients
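
    The quantities recalculated in the review follow directly from a completed 2 × 2 table. A minimal sketch with made-up counts:

    ```python
    # Minimal sketch of the 2x2-table quantities recalculated once authors supply
    # the missing counts: sensitivity, specificity, and likelihood ratios.

    def diagnostic_metrics(tp: int, fp: int, fn: int, tn: int):
        sens = tp / (tp + fn)
        spec = tn / (tn + fp)
        lr_pos = sens / (1.0 - spec)          # positive likelihood ratio
        lr_neg = (1.0 - sens) / spec          # negative likelihood ratio
        return sens, spec, lr_pos, lr_neg

    # Hypothetical blood-test results against a biopsy reference standard
    sens, spec, lr_pos, lr_neg = diagnostic_metrics(tp=45, fp=20, fn=15, tn=120)
    print(f"sensitivity={sens:.2f} specificity={spec:.2f} LR+={lr_pos:.2f} LR-={lr_neg:.2f}")
    ```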

  19. Systematic errors in temperature estimates from MODIS data covering the western Palearctic and their impact on a parasite development model.

    PubMed

    Alonso-Carné, Jorge; García-Martín, Alberto; Estrada-Peña, Agustin

    2013-11-01

    The modelling of habitat suitability for parasites is a growing area of research due to its association with climate change and ensuing shifts in the distribution of infectious diseases. Such models depend on remote sensing data and require accurate, high-resolution temperature measurements. The temperature is critical for accurate estimation of development rates and potential habitat ranges for a given parasite. The MODIS sensors aboard the Aqua and Terra satellites provide high-resolution temperature data for remote sensing applications. This paper describes comparative analysis of MODIS-derived temperatures relative to ground records of surface temperature in the western Palaearctic. The results show that MODIS overestimated maximum temperature values and underestimated minimum temperatures by up to 5-6 °C. The combined use of both Aqua and Terra datasets provided the most accurate temperature estimates around latitude 35-44° N, with an overestimation during spring-summer months and an underestimation in autumn-winter. Errors in temperature estimation were associated with specific ecological regions within the target area as well as technical limitations in the temporal and orbital coverage of the satellites (e.g. sensor limitations and satellite transit times). We estimated error propagation of temperature uncertainties in parasite habitat suitability models by comparing outcomes of published models. Error estimates reached 36% of annual respective measurements depending on the model used. Our analysis demonstrates the importance of adequate image processing and points out the limitations of MODIS temperature data as inputs into predictive models concerning parasite lifecycles. PMID:24258878

  20. Towards a systematic assessment of errors in diffusion Monte Carlo calculations of semiconductors: Case study of zinc selenide and zinc oxide

    SciTech Connect

    Yu, Jaehyung; Wagner, Lucas K.; Ertekin, Elif

    2015-12-14

    The fixed node diffusion Monte Carlo (DMC) method has attracted interest in recent years as a way to calculate properties of solid materials with high accuracy. However, the framework for the calculation of properties such as total energies, atomization energies, and excited state energies is not yet fully established. Several outstanding questions remain as to the effect of pseudopotentials, the magnitude of the fixed node error, and the size of supercell finite size effects. Here, we consider in detail the semiconductors ZnSe and ZnO and carry out systematic studies to assess the magnitude of the energy differences arising from controlled and uncontrolled approximations in DMC. The former include time step errors and supercell finite size effects for ground and optically excited states, and the latter include pseudopotentials, the pseudopotential localization approximation, and the fixed node approximation. We find that for these compounds, the errors can be controlled to good precision using modern computational resources and that quantum Monte Carlo calculations using Dirac-Fock pseudopotentials can offer good estimates of both cohesive energy and the gap of these systems. We do however observe differences in calculated optical gaps that arise when different pseudopotentials are used.

  1. Towards a systematic assessment of errors in diffusion Monte Carlo calculations of semiconductors: Case study of zinc selenide and zinc oxide

    NASA Astrophysics Data System (ADS)

    Yu, Jaehyung; Wagner, Lucas K.; Ertekin, Elif

    2015-12-01

    The fixed node diffusion Monte Carlo (DMC) method has attracted interest in recent years as a way to calculate properties of solid materials with high accuracy. However, the framework for the calculation of properties such as total energies, atomization energies, and excited state energies is not yet fully established. Several outstanding questions remain as to the effect of pseudopotentials, the magnitude of the fixed node error, and the size of supercell finite size effects. Here, we consider in detail the semiconductors ZnSe and ZnO and carry out systematic studies to assess the magnitude of the energy differences arising from controlled and uncontrolled approximations in DMC. The former include time step errors and supercell finite size effects for ground and optically excited states, and the latter include pseudopotentials, the pseudopotential localization approximation, and the fixed node approximation. We find that for these compounds, the errors can be controlled to good precision using modern computational resources and that quantum Monte Carlo calculations using Dirac-Fock pseudopotentials can offer good estimates of both cohesive energy and the gap of these systems. We do however observe differences in calculated optical gaps that arise when different pseudopotentials are used.

  2. Towards a systematic assessment of errors in diffusion Monte Carlo calculations of semiconductors: Case study of zinc selenide and zinc oxide.

    PubMed

    Yu, Jaehyung; Wagner, Lucas K; Ertekin, Elif

    2015-12-14

    The fixed node diffusion Monte Carlo (DMC) method has attracted interest in recent years as a way to calculate properties of solid materials with high accuracy. However, the framework for the calculation of properties such as total energies, atomization energies, and excited state energies is not yet fully established. Several outstanding questions remain as to the effect of pseudopotentials, the magnitude of the fixed node error, and the size of supercell finite size effects. Here, we consider in detail the semiconductors ZnSe and ZnO and carry out systematic studies to assess the magnitude of the energy differences arising from controlled and uncontrolled approximations in DMC. The former include time step errors and supercell finite size effects for ground and optically excited states, and the latter include pseudopotentials, the pseudopotential localization approximation, and the fixed node approximation. We find that for these compounds, the errors can be controlled to good precision using modern computational resources and that quantum Monte Carlo calculations using Dirac-Fock pseudopotentials can offer good estimates of both cohesive energy and the gap of these systems. We do however observe differences in calculated optical gaps that arise when different pseudopotentials are used. PMID:26671396

  3. A statistical analysis of systematic errors in temperature and ram velocity estimates from satellite-borne retarding potential analyzers

    SciTech Connect

    Klenzing, J. H.; Earle, G. D.; Heelis, R. A.; Coley, W. R.

    2009-05-15

    The use of biased grids as energy filters for charged particles is common in satellite-borne instruments such as a planar retarding potential analyzer (RPA). Planar RPAs are currently flown on missions such as the Communications/Navigation Outage Forecast System and the Defense Meteorological Satellites Program to obtain estimates of geophysical parameters including ion velocity and temperature. It has been shown previously that the use of biased grids in such instruments creates a nonuniform potential in the grid plane, which leads to inherent errors in the inferred parameters. A simulation of ion interactions with various configurations of biased grids has been developed using a commercial finite-element analysis software package. Using a statistical approach, the simulation calculates collected flux from Maxwellian ion distributions with three-dimensional drift relative to the instrument. Perturbations in the performance of flight instrumentation relative to expectations from the idealized RPA flux equation are discussed. Both single grid and dual-grid systems are modeled to investigate design considerations. Relative errors in the inferred parameters for each geometry are characterized as functions of ion temperature and drift velocity.

  4. Analysis and mitigation of systematic errors in spectral shearing interferometry of pulses approaching the single-cycle limit [Invited]

    SciTech Connect

    Birge, Jonathan R.; Kaertner, Franz X.

    2008-06-15

    We derive an analytical approximation for the measured pulse width error in spectral shearing methods, such as spectral phase interferometry for direct electric-field reconstruction (SPIDER), caused by an anomalous delay between the two sheared pulse components. This analysis suggests that, as pulses approach the single-cycle limit, the resulting requirements on the calibration and stability of this delay become significant, requiring precision orders of magnitude higher than the scale of a wavelength. This is demonstrated by numerical simulations of SPIDER pulse reconstruction using actual data from a sub-two-cycle laser. We briefly propose methods to minimize the effects of this sensitivity in SPIDER and review variants of spectral shearing that attempt to avoid this difficulty.

  5. Kinematic GPS solutions for aircraft trajectories: Identifying and minimizing systematic height errors associated with atmospheric propagation delays

    USGS Publications Warehouse

    Shan, S.; Bevis, M.; Kendrick, E.; Mader, G.L.; Raleigh, D.; Hudnut, K.; Sartori, M.; Phillips, D.

    2007-01-01

    When kinematic GPS processing software is used to estimate the trajectory of an aircraft, unless the delays imposed on the GPS signals by the atmosphere are either estimated or calibrated via external observations, then vertical height errors of decimeters can occur. This problem is clearly manifested when the aircraft is positioned against multiple base stations in areas of pronounced topography because the aircraft height solutions obtained using different base stations will tend to be mutually offset, or biased, in proportion to the elevation differences between the base stations. When performing kinematic surveys in areas with significant topography it should be standard procedure to use multiple base stations, and to separate them vertically to the maximum extent possible, since it will then be much easier to detect mis-modeling of the atmosphere. Copyright 2007 by the American Geophysical Union.

  6. Adenomyomatosis of the gallbladder in childhood: A systematic review of the literature and an additional case report

    PubMed Central

    Parolini, Filippo; Indolfi, Giuseppe; Magne, Miguel Garcia; Salemme, Marianna; Cheli, Maurizio; Boroni, Giovanni; Alberti, Daniele

    2016-01-01

    AIM: To investigate the diagnostic and therapeutic assessment in children with adenomyomatosis of the gallbladder (AMG). METHODS: AMG is a degenerative disease characterized by a proliferation of the mucosal epithelium which deeply invaginates and extends into the thickened muscular layer of the gallbladder, causing intramural diverticula. Although AMG is found in up to 5% of cholecystectomy specimens in adult populations, this condition in childhood is extremely uncommon. The authors provide a detailed systematic review of the pediatric literature according to PRISMA guidelines, focusing on diagnostic and therapeutic assessment. An additional case of AMG is also presented. RESULTS: Five studies were finally included, encompassing 5 children with AMG. Analysis was extended to our additional 11-year-old patient, who presented with diffuse AMG and pancreatic acinar metaplasia of the gallbladder mucosa and was successfully managed with laparoscopic cholecystectomy. Mean age at presentation was 7.2 years. Unspecific abdominal pain was the commonest symptom. Abdominal ultrasound was performed on all patients, with a diagnostic accuracy of 100%. Five patients underwent cholecystectomy, and at follow-up were asymptomatic. In the remaining patient, completely asymptomatic at diagnosis, a conservative approach with monthly monitoring via ultrasonography was undertaken. CONCLUSION: Considering the remote but possible degeneration leading to cancer and the feasibility of laparoscopic cholecystectomy even in small children, the evidence suggests that elective laparoscopic cholecystectomy represents the treatment of choice. Pre-operative evaluation of the extrahepatic biliary tree anatomy with cholangio-MRI is strongly recommended. PMID:27170933

  7. Understanding human management of automation errors

    PubMed Central

    McBride, Sara E.; Rogers, Wendy A.; Fisk, Arthur D.

    2013-01-01

    Automation has the potential to aid humans with a diverse set of tasks and support overall system performance. Automated systems are not always reliable, and when automation errs, humans must engage in error management, which is the process of detecting, understanding, and correcting errors. However, this process of error management in the context of human-automation interaction is not well understood. Therefore, we conducted a systematic review of the variables that contribute to error management. We examined relevant research in human-automation interaction and human error to identify critical automation, person, task, and emergent variables. We propose a framework for management of automation errors to incorporate and build upon previous models. Further, our analysis highlights variables that may be addressed through design and training to positively influence error management. Additional efforts to understand the error management process will contribute to automation designed and implemented to support safe and effective system performance. PMID:25383042

  8. Medication Errors

    MedlinePlus

    ... to reduce the risk of medication errors to industry and others at FDA. Additionally, DMEPA prospectively reviews ... List of Abbreviations Regulations and Guidances Guidance for Industry: Safety Considerations for Product Design to Minimize Medication ...

  9. Bacterial Cooperation Causes Systematic Errors in Pathogen Risk Assessment due to the Failure of the Independent Action Hypothesis.

    PubMed

    Cornforth, Daniel M; Matthews, Andrew; Brown, Sam P; Raymond, Ben

    2015-04-01

    The Independent Action Hypothesis (IAH) states that pathogenic individuals (cells, spores, virus particles etc.) behave independently of each other, so that each has an independent probability of causing systemic infection or death. The IAH is not just of basic scientific interest; it forms the basis of our current estimates of infectious disease risk in humans. Despite the important role of the IAH in managing disease interventions for food and water-borne pathogens, experimental support for the IAH in bacterial pathogens is indirect at best. Moreover since the IAH was first proposed, cooperative behaviors have been discovered in a wide range of microorganisms, including many pathogens. A fundamental principle of cooperation is that the fitness of individuals is affected by the presence and behaviors of others, which is contrary to the assumption of independent action. In this paper, we test the IAH in Bacillus thuringiensis (B.t), a widely occurring insect pathogen that releases toxins that benefit others in the inoculum, infecting the diamondback moth, Plutella xylostella. By experimentally separating B.t. spores from their toxins, we demonstrate that the IAH fails because there is an interaction between toxin and spore effects on mortality, where the toxin effect is synergistic and cannot be accommodated by independence assumptions. Finally, we show that applying recommended IAH dose-response models to high dose data leads to systematic overestimation of mortality risks at low doses, due to the presence of synergistic pathogen interactions. Our results show that cooperative secretions can easily invalidate the IAH, and that such mechanistic details should be incorporated into pathogen risk analysis. PMID:25909384
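
    The dose-response model implied by the IAH is the exponential (single-hit) form P(d) = 1 - exp(-p d), where p is the per-cell probability of causing death. The sketch below uses invented mortality data that mimic a synergistic, steeper-than-exponential response, and illustrates how fitting the IAH model to high-dose data overestimates risk at low doses:

        import numpy as np

        def iah_response(dose, p):
            """Exponential (independent-action) dose-response model."""
            return 1.0 - np.exp(-p * dose)

        # Invented 'observed' mortalities showing a synergistic, steeper-than-exponential response
        doses = np.array([1e2, 1e3, 1e4, 1e5])
        observed = np.array([0.001, 0.02, 0.40, 0.99])

        # Per-cell probability inferred from the highest dose only, as in high-dose risk assessment
        p_fit = -np.log(1.0 - observed[-1]) / doses[-1]

        for d, obs in zip(doses, observed):
            print(f"dose={d:8.0f}  observed={obs:.3f}  IAH extrapolation={iah_response(d, p_fit):.3f}")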

  10. Bacterial Cooperation Causes Systematic Errors in Pathogen Risk Assessment due to the Failure of the Independent Action Hypothesis

    PubMed Central

    Cornforth, Daniel M.; Matthews, Andrew; Brown, Sam P.; Raymond, Ben

    2015-01-01

    The Independent Action Hypothesis (IAH) states that pathogenic individuals (cells, spores, virus particles etc.) behave independently of each other, so that each has an independent probability of causing systemic infection or death. The IAH is not just of basic scientific interest; it forms the basis of our current estimates of infectious disease risk in humans. Despite the important role of the IAH in managing disease interventions for food and water-borne pathogens, experimental support for the IAH in bacterial pathogens is indirect at best. Moreover since the IAH was first proposed, cooperative behaviors have been discovered in a wide range of microorganisms, including many pathogens. A fundamental principle of cooperation is that the fitness of individuals is affected by the presence and behaviors of others, which is contrary to the assumption of independent action. In this paper, we test the IAH in Bacillus thuringiensis (B.t), a widely occurring insect pathogen that releases toxins that benefit others in the inoculum, infecting the diamondback moth, Plutella xylostella. By experimentally separating B.t. spores from their toxins, we demonstrate that the IAH fails because there is an interaction between toxin and spore effects on mortality, where the toxin effect is synergistic and cannot be accommodated by independence assumptions. Finally, we show that applying recommended IAH dose-response models to high dose data leads to systematic overestimation of mortality risks at low doses, due to the presence of synergistic pathogen interactions. Our results show that cooperative secretions can easily invalidate the IAH, and that such mechanistic details should be incorporated into pathogen risk analysis. PMID:25909384

  11. Effectiveness of additional supervised exercises compared with conventional treatment alone in patients with acute lateral ankle sprains: systematic review

    PubMed Central

    van Ochten, John; Luijsterburg, Pim A J; van Middelkoop, Marienke; Koes, Bart W; Bierma-Zeinstra, Sita M A

    2010-01-01

    Objective To summarise the effectiveness of adding supervised exercises to conventional treatment compared with conventional treatment alone in patients with acute lateral ankle sprains. Design Systematic review. Data sources Medline, Embase, Cochrane Central Register of Controlled Trials, Cinahl, and reference screening. Study selection Included studies were randomised controlled trials, quasi-randomised controlled trials, or clinical trials. Patients were adolescents or adults with an acute lateral ankle sprain. The treatment options were conventional treatment alone or conventional treatment combined with supervised exercises. Two reviewers independently assessed the risk of bias, and one reviewer extracted data. Because of clinical heterogeneity we analysed the data using a best evidence synthesis. Follow-up was classified as short term (up to two weeks), intermediate (two weeks to three months), and long term (more than three months). Results 11 studies were included. There was limited to moderate evidence to suggest that the addition of supervised exercises to conventional treatment leads to faster and better recovery and a faster return to sport at short term follow-up than conventional treatment alone. In specific populations (athletes, soldiers, and patients with severe injuries) this evidence was restricted to a faster return to work and sport only. There was no strong evidence of effectiveness for any of the outcome measures. Most of the included studies had a high risk of bias, with few having adequate statistical power to detect clinically relevant differences. Conclusion Additional supervised exercises compared with conventional treatment alone have some benefit for recovery and return to sport in patients with ankle sprain, though the evidence is limited or moderate and many studies are subject to bias. PMID:20978065

  12. Additive Synergism between Asbestos and Smoking in Lung Cancer Risk: A Systematic Review and Meta-Analysis

    PubMed Central

    Ngamwong, Yuwadee; Tangamornsuksan, Wimonchat; Lohitnavy, Ornrat; Chaiyakunapruk, Nathorn; Scholfield, C. Norman; Reisfeld, Brad; Lohitnavy, Manupat

    2015-01-01

    Smoking and asbestos exposure are important risks for lung cancer. Several epidemiological studies have linked asbestos exposure and smoking to lung cancer. To reconcile and unify these results, we conducted a systematic review and meta-analysis to provide a quantitative estimate of the increased risk of lung cancer associated with asbestos exposure and cigarette smoking and to classify their interaction. Five electronic databases were searched from inception to May, 2015 for observational studies on lung cancer. All case-control (N = 10) and cohort (N = 7) studies were included in the analysis. We calculated pooled odds ratios (ORs), relative risks (RRs) and 95% confidence intervals (CIs) using a random-effects model for the association of asbestos exposure and smoking with lung cancer. Lung cancer patients who were not exposed to asbestos and non-smoking (A-S-) were compared with: (i) asbestos-exposed and non-smoking (A+S-), (ii) non-exposed to asbestos and smoking (A-S+), and (iii) asbestos-exposed and smoking (A+S+). Our meta-analysis showed a significant difference in risk of developing lung cancer among asbestos-exposed and/or smoking workers compared to controls (A-S-); odds ratios for the disease (95% CI) were (i) 1.70 (A+S-, 1.31–2.21), (ii) 5.65 (A-S+, 3.38–9.42), and (iii) 8.70 (A+S+, 5.8–13.10). The additive interaction index of synergy was 1.44 (95% CI = 1.26–1.77) and the multiplicative index = 0.91 (95% CI = 0.63–1.30). Corresponding values for cohort studies were 1.11 (95% CI = 1.00–1.28) and 0.51 (95% CI = 0.31–0.85). Our results point to an additive synergism for lung cancer with co-exposure to asbestos and cigarette smoking. Assessments of industrial health risks should take smoking and other airborne health risks into account when setting occupational asbestos exposure limits. PMID:26274395
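
    The interaction indices quoted above follow directly from the pooled odds ratios. A minimal sketch using the case-control estimates reported in the abstract:

        def interaction_indices(or_a, or_b, or_ab):
            """Additive synergy index and multiplicative interaction index from odds ratios."""
            synergy_additive = (or_ab - 1.0) / ((or_a - 1.0) + (or_b - 1.0))
            multiplicative = or_ab / (or_a * or_b)
            return synergy_additive, multiplicative

        # Pooled ORs: asbestos only (A+S-), smoking only (A-S+), both (A+S+)
        s, m = interaction_indices(or_a=1.70, or_b=5.65, or_ab=8.70)
        print(f"additive synergy index = {s:.2f}, multiplicative index = {m:.2f}")   # ~1.44 and ~0.91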

  13. Addition of dipeptidyl peptidase-4 inhibitors to sulphonylureas and risk of hypoglycaemia: systematic review and meta-analysis

    PubMed Central

    Moore, Nicholas; Arnaud, Mickael; Robinson, Philip; Raschi, Emanuel; De Ponti, Fabrizio; Bégaud, Bernard; Pariente, Antoine

    2016-01-01

    Objective To quantify the risk of hypoglycaemia associated with the concomitant use of dipeptidyl peptidase-4 (DPP-4) inhibitors and sulphonylureas compared with placebo and sulphonylureas. Design Systematic review and meta-analysis. Data sources Medline, ISI Web of Science, SCOPUS, Cochrane Central Register of Controlled Trials, and clinicaltrial.gov were searched without any language restriction. Study selection Placebo controlled randomised trials comprising at least 50 participants with type 2 diabetes treated with DPP-4 inhibitors and sulphonylureas. Review methods Risk of bias in each trial was assessed using the Cochrane Collaboration tool. The risk ratio of hypoglycaemia with 95% confidence intervals was computed for each study and then pooled using fixed effect models (Mantel Haenszel method) or random effect models, when appropriate. Subgroup analyses were also performed (eg, dose of DPP-4 inhibitors). The number needed to harm (NNH) was estimated according to treatment duration. Results 10 studies were included, representing a total of 6546 participants (4020 received DPP-4 inhibitors plus sulphonylureas, 2526 placebo plus sulphonylureas). The risk ratio of hypoglycaemia was 1.52 (95% confidence interval 1.29 to 1.80). The NNH was 17 (95% confidence interval 11 to 30) for a treatment duration of six months or less, 15 (9 to 26) for 6.1 to 12 months, and 8 (5 to 15) for more than one year. In subgroup analysis, no difference was found between full and low doses of DPP-4 inhibitors: the risk ratio related to full dose DPP-4 inhibitors was 1.66 (1.34 to 2.06), whereas the increased risk ratio related to low dose DPP-4 inhibitors did not reach statistical significance (1.33, 0.92 to 1.94). Conclusions Addition of DPP-4 inhibitors to sulphonylurea to treat people with type 2 diabetes is associated with a 50% increased risk of hypoglycaemia and with one excess case of hypoglycaemia for every 17 patients in the first six months of treatment. This
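
    The NNH figures above are derived from the pooled risk ratio and the control-arm event rate over each treatment duration. A minimal sketch; the control-arm risk used here is an assumed, illustrative value, not one reported in the abstract:

        def number_needed_to_harm(risk_ratio, control_risk):
            """NNH = 1 / absolute risk increase, with ARI = control_risk * (RR - 1)."""
            return 1.0 / (control_risk * (risk_ratio - 1.0))

        # Pooled RR from the meta-analysis; an assumed ~11% control-arm hypoglycaemia risk at 6 months
        print(round(number_needed_to_harm(risk_ratio=1.52, control_risk=0.11)))   # ~17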

  14. Errors in Viking Lander Atmospheric Profiles Discovered Using MOLA Topography

    NASA Technical Reports Server (NTRS)

    Withers, Paul; Lorenz, R. D.; Neumann, G. A.

    2002-01-01

    Each Viking lander measured a topographic profile during entry. Comparing these profiles to MOLA (Mars Orbiter Laser Altimeter) topography, we find a vertical error of 1-2 km in the Viking trajectory. This introduces a systematic error of 10-20% in the Viking densities and pressures at a given altitude. Additional information is contained in the original extended abstract.
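
    The link between the vertical trajectory error and the density/pressure error can be sketched with an isothermal-atmosphere approximation, where density varies as exp(-z/H); a height offset dz then maps to a fractional error of about exp(dz/H) - 1. Assuming a Mars scale height of roughly 11 km (an assumed round value, not a figure from the abstract):

        import numpy as np

        H = 11.0                            # assumed Mars atmospheric scale height, km
        for dz in (1.0, 2.0):               # vertical trajectory error, km
            frac_err = np.expm1(dz / H)     # fractional density/pressure error at a fixed altitude
            print(f"dz = {dz:.0f} km  ->  ~{100 * frac_err:.0f}% error")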

  15. Effects of Active Student Response during Error Correction on the Acquisition, Maintenance, and Generalization of Science Vocabulary by Elementary Students: A Systematic Replication.

    ERIC Educational Resources Information Center

    Drevno, Gregg E.; And Others

    1994-01-01

    This study compared active student response (ASR) error correction and no-response (NR) error correction while teaching science terms to five elementary students. When a student erred, the teacher modeled the definition and the student either repeated it (ASR) or not (NR). ASR error correction was superior on each of seven dependent variables.…

  16. Reducing diagnostic errors in primary care. A systematic meta-review of computerized diagnostic decision support systems by the LINNEAUS collaboration on patient safety in primary care

    PubMed Central

    Nurek, Martine; Kostopoulou, Olga; Delaney, Brendan C; Esmail, Aneez

    2015-01-01

    ABSTRACT Background: Computerized diagnostic decision support systems (CDDSS) have the potential to support the cognitive task of diagnosis, which is one of the areas where general practitioners have greatest difficulty and which accounts for a significant proportion of adverse events recorded in the primary care setting. Objective: To determine the extent to which CDDSS may meet the requirements of supporting the cognitive task of diagnosis, and the currently perceived barriers that prevent the integration of CDDSS with electronic health record (EHR) systems. Methods: We conducted a meta-review of existing systematic reviews published in English, searching MEDLINE, Embase, PsycINFO and Web of Knowledge for articles on the features and effectiveness of CDDSS for medical diagnosis published since 2004. Eligibility criteria included systematic reviews where individual clinicians were primary end users. The outcomes of interest were the effectiveness of CDDSS and the identification of specific features affecting diagnostic performance. Results: We identified 1970 studies and excluded 1938 because they did not fit our inclusion criteria. A total of 45 articles were identified and 12 were found suitable for meta-review. Extraction of high-level requirements identified that a more standardized, computable approach to knowledge representation is needed, one that can be readily updated as new knowledge is gained. In addition, deep integration with the EHR is needed in order to trigger support at appropriate points in the cognitive workflow. Conclusion: Developing a CDDSS that is able to utilize dynamic vocabulary tools to quickly capture and code relevant diagnostic findings, and coupling these with individualized diagnostic suggestions based on the best available evidence, has the potential to improve diagnostic accuracy, but this requires evaluation. PMID:26339829

  17. Systematic errors in the simulation of the Asian summer monsoon: the role of rainfall variability on a range of time and space scales

    NASA Astrophysics Data System (ADS)

    Martin, Gill; Levine, Richard; Klingaman, Nicholas; Bush, Stephanie; Turner, Andrew; Woolnough, Steven

    2015-04-01

    -scale circulation than is indicated by the intensity and frequency of rainfall alone. Monsoon depressions and low pressure systems are important contributors to monsoon rainfall over central and northern India, areas where MetUM climate simulations typically show deficient monsoon rainfall. Analysis of MetUM climate simulations at resolutions ranging from N96 (~135km) to N512 (~25km) suggests that at lower resolution the numbers and intensities of monsoon depressions and low pressure systems and their associated rainfall are very low compared with re-analyses/observations. We show that there are substantial increases with horizontal resolution, but resolution is not the only factor. Idealised simulations, either using nudged atmospheric winds or initialised coupled hindcasts, which improve (strengthen) the mean state monsoon and cyclonic circulation over the Indian peninsula, also result in a substantial increase in monsoon depressions and associated rainfall. This suggests that a more realistic representation of monsoon depressions is possible even at lower resolution if the larger-scale systematic error pattern in the monsoon is improved.

  18. Preventive zinc supplementation for children, and the effect of additional iron: a systematic review and meta-analysis

    PubMed Central

    Mayo-Wilson, Evan; Imdad, Aamer; Junior, Jean; Dean, Sohni; Bhutta, Zulfiqar A

    2014-01-01

    Objective Zinc deficiency is widespread, and preventive supplementation may have benefits in young children. Effects for children over 5 years of age, and effects when coadministered with other micronutrients, are uncertain. These are obstacles to scale-up. This review seeks to determine if preventive supplementation reduces mortality and morbidity for children aged 6 months to 12 years. Design Systematic review conducted with the Cochrane Developmental, Psychosocial and Learning Problems Group. Two reviewers independently assessed studies. Meta-analyses were performed for mortality, illness and side effects. Data sources We searched multiple databases, including CENTRAL and MEDLINE, in January 2013. Authors were contacted for missing information. Eligibility criteria for selecting studies Randomised trials of preventive zinc supplementation. Hospitalised children and children with chronic diseases were excluded. Results 80 randomised trials with 205 401 participants were included. There was a small but non-significant effect on all-cause mortality (risk ratio (RR) 0.95 (95% CI 0.86 to 1.05)). Supplementation may reduce incidence of all-cause diarrhoea (RR 0.87 (0.85 to 0.89)), but there was evidence of reporting bias. There was no evidence of an effect on the incidence or prevalence of respiratory infections or malaria. There was moderate quality evidence of a very small effect on linear growth (standardised mean difference 0.09 (0.06 to 0.13)) and an increase in vomiting (RR 1.29 (1.14 to 1.46)). There was no evidence of an effect on iron status. Comparing zinc with and without iron cosupplementation, and direct comparisons of zinc plus iron versus zinc administered alone, favoured cointervention for some outcomes and zinc alone for other outcomes. Effects may be larger for children over 1 year of age, but most differences were not significant. Conclusions Benefits of preventive zinc supplementation may outweigh any potentially adverse effects in areas where

  19. Systematic Dissection of Coding Exons at Single Nucleotide Resolution Supports an Additional Role in Cell-Specific Transcriptional Regulation

    PubMed Central

    Kim, Mee J.; Findlay, Gregory M.; Martin, Beth; Zhao, Jingjing; Bell, Robert J. A.; Smith, Robin P.; Ku, Angel A.; Shendure, Jay; Ahituv, Nadav

    2014-01-01

    In addition to their protein coding function, exons can also serve as transcriptional enhancers. Mutations in these exonic-enhancers (eExons) could alter both protein function and transcription. However, the functional consequence of eExon mutations is not well known. Here, using massively parallel reporter assays, we dissect the enhancer activity of three liver eExons (SORL1 exon 17, TRAF3IP2 exon 2, PPARG exon 6) at single nucleotide resolution in the mouse liver. We find that both synonymous and non-synonymous mutations have similar effects on enhancer activity and many of the deleterious mutation clusters overlap known liver-associated transcription factor binding sites. Carrying out a similar massively parallel reporter assay on these three eExons in HeLa cells revealed differences in their mutation profiles compared to the liver, suggesting that enhancers could have distinct operating profiles in different tissues. Our results demonstrate that eExon mutations could lead to multiple phenotypes by disrupting both the protein sequence and enhancer activity and that enhancers can have distinct mutation profiles in different cell types. PMID:25340400

  20. Uncertainty quantification and error analysis

    SciTech Connect

    Higdon, Dave M; Anderson, Mark C; Habib, Salman; Klein, Richard; Berliner, Mark; Covey, Curt; Ghattas, Omar; Graziani, Carlo; Seager, Mark; Sefcik, Joseph; Stark, Philip

    2010-01-01

    UQ studies all sources of error and uncertainty, including: systematic and stochastic measurement error; ignorance; limitations of theoretical models; limitations of numerical representations of those models; limitations on the accuracy and reliability of computations, approximations, and algorithms; and human error. A more precise definition for UQ is suggested below.

  1. Systematic review of ERP and fMRI studies investigating inhibitory control and error processing in people with substance dependence and behavioural addictions

    PubMed Central

    Luijten, Maartje; Machielsen, Marise W.J.; Veltman, Dick J.; Hester, Robert; de Haan, Lieuwe; Franken, Ingmar H.A.

    2014-01-01

    Background Several current theories emphasize the role of cognitive control in addiction. The present review evaluates neural deficits in the domains of inhibitory control and error processing in individuals with substance dependence and in those showing excessive addiction-like behaviours. The combined evaluation of event-related potential (ERP) and functional magnetic resonance imaging (fMRI) findings in the present review offers unique information on neural deficits in addicted individuals. Methods We selected 19 ERP and 22 fMRI studies using stop-signal, go/no-go or Flanker paradigms based on a search of PubMed and Embase. Results The most consistent findings in addicted individuals relative to healthy controls were lower N2, error-related negativity and error positivity amplitudes as well as hypoactivation in the anterior cingulate cortex (ACC), inferior frontal gyrus and dorsolateral prefrontal cortex. These neural deficits, however, were not always associated with impaired task performance. With regard to behavioural addictions, some evidence has been found for similar neural deficits; however, studies are scarce and results are not yet conclusive. Differences among the major classes of substances of abuse were identified and involve stronger neural responses to errors in individuals with alcohol dependence versus weaker neural responses to errors in other substance-dependent populations. Limitations Task design and analysis techniques vary across studies, thereby reducing comparability among studies and the potential of clinical use of these measures. Conclusion Current addiction theories were supported by identifying consistent abnormalities in prefrontal brain function in individuals with addiction. An integrative model is proposed, suggesting that neural deficits in the dorsal ACC may constitute a hallmark neurocognitive deficit underlying addictive behaviours, such as loss of control. PMID:24359877

  2. Steroids Versus Steroids Plus Additional Agent in Frontline Treatment of Acute Graft-versus-Host Disease: A Systematic Review and Meta-Analysis of Randomized Trials.

    PubMed

    Rashidi, Armin; DiPersio, John F; Sandmaier, Brenda M; Colditz, Graham A; Weisdorf, Daniel J

    2016-06-01

    Despite extensive research in the last few decades, progress in treatment of acute graft-versus-host disease (aGVHD), a common complication of allogeneic hematopoietic cell transplantation (HCT), has been limited and steroids continue to be the standard frontline treatment. Randomized clinical trials (RCTs) have failed to find a beneficial effect of escalating immunosuppression using additional agents. Considering the small number of RCTs, limited sample sizes, and frequent early termination because of anticipated futility, we conducted a systematic review and an aggregate data meta-analysis to explore whether a true efficacy signal has been missed because of the limitations of individual RCTs. Seven reports met our inclusion criteria. The control arm in all studies was 2 mg/kg/day prednisone (or equivalent). The additional agent(s) used in the experimental arm(s) were higher-dose steroids, antithymocyte globulin, infliximab, anti-interleukin-2 receptor antibody (daclizumab and BT563), CD5-specific immunotoxin, and mycophenolate mofetil. Random effects meta-analysis revealed no efficacy signal in pooled response rates at various times points. Overall survival at 100 days was significantly worse in the experimental arm (relative risk [RR], .83; 95% confidence interval [CI], .74 to .94; P = .004, data from 3 studies) and showed a similar trend (albeit not statistically significantly) at 1 year as well (RR, .86; 95% CI, .68 to 1.09; P = .21, data from 5 studies). In conclusion, these results argue against the value of augmented generic immunosuppression beyond steroids for frontline treatment of aGVHD and emphasize the importance of developing alternative strategies. Novel forms of immunomodulation and targeted therapies against non-immune-related pathways may enhance the efficacy of steroids in this setting, and early predictive and prognostic biomarkers can help identify the subgroup of patients who would likely need treatments other than (or in addition to

  3. Determining the isotopic abundance of a labeled compound by mass spectrometry and how correcting for natural abundance distribution using analogous data from the unlabeled compound leads to a systematic error.

    PubMed

    Schenk, David J; Lockley, William J S; Elmore, Charles S; Hesk, Dave; Roberts, Drew

    2016-04-01

    When the isotopic abundance or specific activity of a labeled compound is determined by mass spectrometry (MS), it is necessary to correct the raw MS data to eliminate ion intensity contributions, which arise from the presence of heavy isotopes at natural abundance (e.g., a typical carbon compound contains ~1.1% (13)C per carbon atom). The most common approach is to employ a correction in which the mass-to-charge distribution of the corresponding unlabeled compound is used to subtract the natural abundance contributions from the raw mass-to-charge distribution pattern of the labeled compound. Following this correction, the residual intensities should be due to the presence of the newly introduced labeled atoms only. However, this will only be the case when the natural abundance mass isotopomer distribution of the unlabeled compound is the same as that of the labeled species. Although this may be a good approximation, it cannot be accurate in all cases. The implications of this approximation for the determination of isotopic abundance and specific activity have been examined in practice. Isotopically mixed stable-atom labeled valine batches were produced, and both these and [(14)C6]carbamazepine were analyzed by MS to determine the extent of the error introduced by the approach. Our studies revealed that significant errors are possible for small highly-labeled compounds, such as valine, under some circumstances. In the case of [(14)C6]carbamazepine, the errors introduced were minor but could be significant for (14)C-labeled compounds with particular isotopic distributions. This source of systematic error can be minimized, although not eliminated, by the selection of an appropriate isotopic correction pattern or by the use of a program that varies the natural abundance distribution throughout the correction. PMID:26916110
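
    The correction under discussion is, in effect, a deconvolution of the measured isotopologue pattern by the natural-abundance pattern of the unlabeled compound. A minimal sketch of the common sequential form of this correction (hypothetical intensities; this is the approximation whose residual error the paper quantifies):

        def correct_natural_abundance(labeled, unlabeled):
            """Strip natural-abundance contributions from a labeled compound's isotopologue
            pattern using the normalised pattern measured for the unlabeled compound."""
            tail = [x / unlabeled[0] for x in unlabeled]   # natural-abundance tail, M+0 normalised
            corrected = list(labeled)
            for i in range(len(labeled)):
                for j in range(i):
                    if i - j < len(tail):
                        corrected[i] -= corrected[j] * tail[i - j]
            return corrected

        # Hypothetical intensities at M+0, M+1, M+2, M+3
        print(correct_natural_abundance(labeled=[20.0, 3.0, 90.0, 11.0],
                                        unlabeled=[100.0, 12.0, 1.2, 0.1]))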

  4. Refractive Errors

    MedlinePlus

    ... and lens of your eye helps you focus. Refractive errors are vision problems that happen when the ... cornea, or aging of the lens. Four common refractive errors are Myopia, or nearsightedness - clear vision close ...

  5. NLO error propagation exercise: statistical results

    SciTech Connect

    Pack, D.J.; Downing, D.J.

    1985-09-01

    Error propagation is the extrapolation and cumulation of uncertainty (variance) about total amounts of special nuclear material, for example, uranium or 235U, that are present in a defined location at a given time. The uncertainty results from the inevitable inexactness of individual measurements of weight, uranium concentration, 235U enrichment, etc. The extrapolated and cumulated uncertainty leads directly to quantified limits of error on inventory differences (LEIDs) for such material. The NLO error propagation exercise was planned as a field demonstration of the utilization of statistical error propagation methodology at the Feed Materials Production Center in Fernald, Ohio from April 1 to July 1, 1983 in a single material balance area formed specially for the exercise. Major elements of the error propagation methodology were: variance approximation by Taylor Series expansion; variance cumulation by uncorrelated primary error sources as suggested by Jaech; random effects ANOVA model estimation of variance effects (systematic error); provision for inclusion of process variance in addition to measurement variance; and exclusion of static material. The methodology was applied to material balance area transactions from the indicated time period through a FORTRAN computer code developed specifically for this purpose on the NLO HP-3000 computer. This paper contains a complete description of the error propagation methodology and a full summary of the numerical results of applying the methodology in the field demonstration. The error propagation LEIDs did encompass the actual uranium and 235U inventory differences. Further, one can see that error propagation actually provides guidance for reducing inventory differences and LEIDs in future time periods.
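
    The Taylor-series variance approximation mentioned above reduces, for a product of independently measured factors (e.g., net weight x uranium concentration x enrichment), to adding relative variances. A minimal sketch with invented values, not plant data:

        import math

        # Invented measurement values with 1-sigma uncertainties
        weight, sig_w = 1000.0, 2.0       # kg of material
        conc,   sig_c = 0.85,   0.004     # uranium mass fraction
        enrich, sig_e = 0.030,  0.0002    # 235U fraction

        u235 = weight * conc * enrich
        # First-order Taylor-series propagation for a product of uncorrelated factors
        rel_var = (sig_w / weight) ** 2 + (sig_c / conc) ** 2 + (sig_e / enrich) ** 2
        print(f"235U = {u235:.2f} +/- {u235 * math.sqrt(rel_var):.2f} kg")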

  6. Systematic reviews need systematic searchers

    PubMed Central

    McGowan, Jessie; Sampson, Margaret

    2005-01-01

    Purpose: This paper will provide a description of the methods, skills, and knowledge of expert searchers working on systematic review teams. Brief Description: Systematic reviews and meta-analyses are very important to health care practitioners, who need to keep abreast of the medical literature and make informed decisions. Searching is a critical part of conducting these systematic reviews, as errors made in the search process potentially result in a biased or otherwise incomplete evidence base for the review. Searches for systematic reviews need to be constructed to maximize recall and deal effectively with a number of potentially biasing factors. Librarians who conduct the searches for systematic reviews must be experts. Discussion/Conclusion: Expert searchers need to understand the specifics about data structure and functions of bibliographic and specialized databases, as well as the technical and methodological issues of searching. Search methodology must be based on research about retrieval practices, and it is vital that expert searchers keep informed about, advocate for, and, moreover, conduct research in information retrieval. Expert searchers are an important part of the systematic review team, crucial throughout the review process—from the development of the proposal and research question to publication. PMID:15685278

  7. Rapid mapping of volumetric errors

    SciTech Connect

    Krulewich, D.; Hale, L.; Yordy, D.

    1995-09-13

    This paper describes a relatively inexpensive, fast, and easy to execute approach to mapping the volumetric errors of a machine tool, coordinate measuring machine, or robot. An error map is used to characterize a machine or to improve its accuracy by compensating for the systematic errors. The method consists of three steps: (1) modeling the relationship between the volumetric error and the current state of the machine; (2) acquiring error data based on length measurements throughout the work volume; and (3) optimizing the model to the particular machine.
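
    Step (3) above is essentially a least-squares problem: fit the parameters of the error model to the length-measurement residuals and use the fitted model for compensation. A minimal one-axis sketch with illustrative numbers and a deliberately simple scale-plus-offset model rather than a full volumetric model:

        import numpy as np

        # Step 2: nominal artifact lengths (mm) and measured lengths along one axis (illustrative)
        nominal  = np.array([100.0, 200.0, 300.0, 400.0, 500.0])
        measured = np.array([100.004, 200.010, 300.013, 400.019, 500.024])

        # Steps 1 and 3: error = scale * length + offset, fitted by least squares
        A = np.column_stack([nominal, np.ones_like(nominal)])
        (scale, offset), *_ = np.linalg.lstsq(A, measured - nominal, rcond=None)
        print(f"scale error = {scale * 1e6:.1f} ppm, offset = {offset * 1e3:.1f} um")

        def compensated(length):
            """Subtract the systematic error predicted by the fitted model."""
            return length - (scale * length + offset)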

  8. Error analysis of friction drive elements

    NASA Astrophysics Data System (ADS)

    Wang, Guomin; Yang, Shihai; Wang, Daxing

    2008-07-01

    Friction drive has been used in some large astronomical telescopes in recent years. Compared with direct drive, a friction drive train consists of more built-up parts. Usually, the friction drive train consists of a motor-tachometer unit, coupling, reducer, driving roller, big wheel, encoder and encoder coupling. Normally, these built-up parts introduce some error into the drive system. Some of these errors are random and some are systematic. For random errors, the effective approach is to estimate their contributions and find a proper way to decrease their influence. For systematic errors, the useful approach is to analyse and test them quantitatively, and then feed the errors back to the control system for correction. The main task of this paper is to analyse these error sources and determine their characteristics, such as whether they are random or systematic and what their contributions are. The methods and equations used in the analysis are also presented in detail.

  9. On the combination procedure of correlated errors

    NASA Astrophysics Data System (ADS)

    Erler, Jens

    2015-09-01

    When averages of different experimental determinations of the same quantity are computed, each with statistical and systematic error components, then frequently the statistical and systematic components of the combined error are quoted explicitly. These are important pieces of information since statistical errors scale differently and often more favorably with the sample size than most systematic or theoretical errors. In this communication we describe a transparent procedure by which the statistical and systematic error components of the combination uncertainty can be obtained. We develop a general method and derive a general formula for the case of Gaussian errors with or without correlations. The method can easily be applied to other error distributions, as well. For the case of two measurements, we also define disparity and misalignment angles, and discuss their relation to the combination weight factors.
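
    For the simplest case described above (uncorrelated Gaussian errors), the combined statistical and systematic components follow from propagating the weight factors through each component separately. A minimal two-measurement sketch of that idea with illustrative numbers; the paper's general formula also covers correlated errors:

        import numpy as np

        # Two measurements of the same quantity with statistical and systematic errors (illustrative)
        x        = np.array([10.2, 10.8])
        sig_stat = np.array([0.3, 0.5])
        sig_syst = np.array([0.4, 0.2])

        w = 1.0 / (sig_stat**2 + sig_syst**2)    # inverse-variance weights, uncorrelated case
        w /= w.sum()

        mean      = np.dot(w, x)
        stat_comb = np.sqrt(np.sum((w * sig_stat) ** 2))   # statistical component of combined error
        syst_comb = np.sqrt(np.sum((w * sig_syst) ** 2))   # systematic component (no correlation)
        print(f"{mean:.2f} +/- {stat_comb:.2f} (stat) +/- {syst_comb:.2f} (syst)")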

  10. Image pre-filtering for measurement error reduction in digital image correlation

    NASA Astrophysics Data System (ADS)

    Zhou, Yihao; Sun, Chen; Song, Yuntao; Chen, Jubing

    2015-02-01

    In digital image correlation, the sub-pixel intensity interpolation causes a systematic error in the measured displacements. The error increases toward the high-frequency components of the speckle pattern. In practice, a captured image is usually corrupted by additive white noise. The noise introduces additional energy in the high frequencies and therefore raises the systematic error. Meanwhile, the noise also elevates the random error, which increases with the noise power. In order to reduce the systematic error and the random error of the measurements, we apply a pre-filtering to the images prior to the correlation so that the high-frequency contents are suppressed. Two spatial-domain filters (binomial and Gaussian) and two frequency-domain filters (Butterworth and Wiener) are tested on speckle images undergoing both simulated and real-world translations. By evaluating the errors of the various combinations of speckle patterns, interpolators, noise levels, and filter configurations, we come to the following conclusions. All four filters are able to reduce the systematic error. Meanwhile, the random error can also be reduced if the signal power is mainly distributed around DC. For high-frequency speckle patterns, the low-pass filters (binomial, Gaussian and Butterworth) slightly increase the random error, and the Butterworth filter produces the lowest random error among them. By using the Wiener filter with over-estimated noise power, the random error can be reduced but the resultant systematic error is higher than that of the low-pass filters. In general, the Butterworth filter is recommended for error reduction due to its flexibility of passband selection and maximal preservation of the allowed frequencies. The binomial filter enables efficient implementation and thus becomes a good option if computational cost is a critical issue. When used together with pre-filtering, the B-spline interpolator produces lower systematic error than the bicubic interpolator and a similar level of the random
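
    As a concrete illustration of the pre-filtering step, the sketch below low-pass filters a noisy speckle image with a Gaussian kernel before it would be passed to a correlation routine (a binomial kernel is the separable [1, 2, 1]/4 filter applied along each axis); the image and parameter values are arbitrary:

        import numpy as np
        from scipy.ndimage import gaussian_filter

        rng = np.random.default_rng(0)
        speckle = rng.random((64, 64))                           # stand-in for a speckle pattern
        noisy = speckle + rng.normal(0.0, 0.02, speckle.shape)   # additive white noise

        # Low-pass pre-filtering prior to correlation; sigma controls the cut-off
        filtered = gaussian_filter(noisy, sigma=0.8)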

  11. Medication Errors

    MedlinePlus

    Medicines cure infectious diseases, prevent problems from chronic diseases, and ease pain. But medicines can also cause harmful reactions if not used ... You can help prevent errors by Knowing your medicines. Keep a list of the names of your ...

  12. Systematic approach for simultaneously correcting the band-gap and p-d separation errors of common cation III-V or II-VI binaries in density functional theory calculations within a local density approximation

    DOE PAGES Beta

    Wang, Jianwei; Zhang, Yong; Wang, Lin-Wang

    2015-07-31

    We propose a systematic approach that can empirically correct three major errors typically found in a density functional theory (DFT) calculation within the local density approximation (LDA) simultaneously for a set of common cation binary semiconductors, such as III-V compounds, (Ga or In)X with X = N,P,As,Sb, and II-VI compounds, (Zn or Cd)X, with X = O,S,Se,Te. By correcting (1) the binary band gaps at the high-symmetry points Γ, L, X, (2) the separation of p- and d-orbital-derived valence bands, and (3) conduction band effective masses to experimental values, and doing so simultaneously for common cation binaries, the resulting DFT-LDA-based quasi-first-principles method can be used to predict the electronic structure of complex materials involving multiple binaries with comparable accuracy but much less computational cost than a GW level theory. This approach provides an efficient way to evaluate the electronic structures and other material properties of complex systems, much needed for material discovery and design.

  13. Error Analysis and the EFL Classroom Teaching

    ERIC Educational Resources Information Center

    Xie, Fang; Jiang, Xue-mei

    2007-01-01

    This paper makes a study of error analysis and its implementation in EFL (English as a Foreign Language) classroom teaching. It starts by giving a systematic review of the concepts and theories concerning EA (Error Analysis); the various reasons causing errors are then comprehensively explored. The author proposes that teachers should employ…

  14. Random errors in egocentric networks.

    PubMed

    Almquist, Zack W

    2012-10-01

    The systematic errors that are induced by a combination of human memory limitations and common survey design and implementation have long been studied in the context of egocentric networks. Despite this, little if any work exists in the area of random error analysis on these same networks; this paper offers a perspective on the effects of random errors on egonet analysis, as well as the effects of using egonet measures as independent predictors in linear models. We explore the effects of false-positive and false-negative error in egocentric networks on both standard network measures and on linear models through simulation analysis on a ground truth egocentric network sample based on facebook-friendships. Results show that 5-20% error rates, which are consistent with error rates known to occur in ego network data, can cause serious misestimation of network properties and regression parameters. PMID:23878412
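
    The simulation analysis described above amounts to repeatedly perturbing a known (ground-truth) network with false-positive and false-negative ties and re-computing the measures of interest. A minimal sketch on a random symmetric adjacency matrix; the error rates and network size are arbitrary, not values from the study:

        import numpy as np

        def perturb_edges(adj, fp_rate, fn_rate, rng):
            """Apply false-negative errors to existing ties and false-positive errors to non-ties."""
            noisy = adj.copy()
            n = adj.shape[0]
            for i in range(n):
                for j in range(i + 1, n):
                    if adj[i, j] == 1 and rng.random() < fn_rate:
                        noisy[i, j] = noisy[j, i] = 0      # missed tie
                    elif adj[i, j] == 0 and rng.random() < fp_rate:
                        noisy[i, j] = noisy[j, i] = 1      # spurious tie
            return noisy

        rng = np.random.default_rng(42)
        true_adj = np.triu((rng.random((30, 30)) < 0.2).astype(int), 1)
        true_adj = true_adj + true_adj.T
        noisy = perturb_edges(true_adj, fp_rate=0.05, fn_rate=0.10, rng=rng)
        print("true mean degree", true_adj.sum(1).mean(), "noisy mean degree", noisy.sum(1).mean())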

  15. Random errors in egocentric networks

    PubMed Central

    Almquist, Zack W.

    2013-01-01

    The systematic errors that are induced by a combination of human memory limitations and common survey design and implementation have long been studied in the context of egocentric networks. Despite this, little if any work exists in the area of random error analysis on these same networks; this paper offers a perspective on the effects of random errors on egonet analysis, as well as the effects of using egonet measures as independent predictors in linear models. We explore the effects of false-positive and false-negative error in egocentric networks on both standard network measures and on linear models through simulation analysis on a ground truth egocentric network sample based on facebook-friendships. Results show that 5–20% error rates, which are consistent with error rates known to occur in ego network data, can cause serious misestimation of network properties and regression parameters. PMID:23878412

  16. Error Modeling of Multi-baseline Optical Truss. Part II; Application to SIM Metrology Truss Field Dependent Error

    NASA Technical Reports Server (NTRS)

    Zhang, Liwei Dennis; Milman, Mark; Korechoff, Robert

    2004-01-01

    The current design of the Space Interferometry Mission (SIM) employs a 19 laser-metrology-beam system (also called the L19 external metrology truss) to monitor changes of distances between the fiducials of the flight system's multiple baselines. The function of the external metrology truss is to aid in the determination of the time-variations of the interferometer baseline. The largest contributor to truss error occurs in SIM wide-angle observations when the articulation of the siderostat mirrors (in order to gather starlight from different sky coordinates) brings to light systematic errors due to offsets at the level of instrument components (which include corner cube retro-reflectors, etc.). This error is labeled external metrology wide-angle field-dependent error. A physics-based model of field-dependent error at the single metrology gauge level is developed and linearly propagated to errors in interferometer delay. In this manner delay error sensitivity to various error parameters or their combination can be studied using eigenvalue/eigenvector analysis. Validation of the physics-based field-dependent model on the SIM testbed also lends support to the present approach. As a first example, a dihedral error model is developed for the corner cubes (CC) attached to the siderostat mirrors. The delay errors due to this effect can then be characterized using the eigenvectors of the composite CC dihedral error. The essence of the linear error model is contained in an error-mapping matrix. A corresponding Zernike component matrix approach is developed in parallel, first for convenience of describing the RMS of errors across the field-of-regard (FOR), and second for convenience of combining with additional models. Average and worst case residual errors are computed when various orders of field-dependent terms are removed from the delay error. Results of the residual errors are important in arriving at external metrology system component requirements. Double CCs with ideally co-incident vertices

  17. The need for annual echocardiography to detect cabergoline-associated valvulopathy in patients with prolactinoma: a systematic review and additional clinical data.

    PubMed

    Caputo, Carmela; Prior, David; Inder, Warrick J

    2015-11-01

    Present recommendations by the US Food and Drug Administration advise that patients with prolactinoma treated with cabergoline should have an annual echocardiogram to screen for valvular heart disease. Here, we present new clinical data and a systematic review of the scientific literature showing that the prevalence of cabergoline-associated valvulopathy is very low. We prospectively assessed 40 patients with prolactinoma taking cabergoline. Cardiovascular examination before echocardiography detected an audible systolic murmur in 10% of cases (all were functional murmurs), and no clinically significant valvular lesion was shown on echocardiogram in the 90% of patients without a murmur. Our systematic review identified 21 studies that assessed the presence of valvular abnormalities in patients with prolactinoma treated with cabergoline. Including our new clinical data, only two (0·11%) of 1811 patients were confirmed to have cabergoline-associated valvulopathy (three [0·17%] if possible cases were included). The probability of clinically significant valvular heart disease is low in the absence of a murmur. On the basis of these findings, we challenge the present recommendations to do routine echocardiography in all patients taking cabergoline for prolactinoma every 12 months. We propose that such patients should be screened by a clinical cardiovascular examination and that echocardiogram should be reserved for those patients with an audible murmur, those treated for more than 5 years at a dose of more than 3 mg per week, or those who maintain cabergoline treatment after the age of 50 years. PMID:25466526

  18. TU-C-BRE-08: IMRT QA: Selecting Meaningful Gamma Criteria Based On Error Detection Sensitivity

    SciTech Connect

    Steers, J; Fraass, B

    2014-06-15

    Purpose: To develop a strategy for defining meaningful tolerance limits and studying the sensitivity of IMRT QA gamma criteria by inducing known errors in QA plans. Methods: IMRT QA measurements (ArcCHECK, Sun Nuclear) were compared to QA plan calculations with induced errors. Many (>24) gamma comparisons between data and calculations were performed for each of several kinds of cases and classes of induced error types with varying magnitudes (e.g. MU errors ranging from -10% to +10%), resulting in over 3,000 comparisons. Gamma passing rates for each error class and case were graphed against error magnitude to create error curves in order to represent the range of missed errors in routine IMRT QA using various gamma criteria. Results: This study demonstrates that random, case-specific, and systematic errors can be detected by the error curve analysis. Depending on location of the peak of the error curve (e.g., not centered about zero), 3%/3mm threshold=10% criteria may miss MU errors of up to 10% and random MLC errors of up to 5 mm. Additionally, using larger dose thresholds for specific devices may increase error sensitivity (for the same X%/Ymm criteria) by up to a factor of two. This analysis will allow clinics to select more meaningful gamma criteria based on QA device, treatment techniques, and acceptable error tolerances. Conclusion: We propose a strategy for selecting gamma parameters based on the sensitivity of gamma criteria and individual QA devices to induced calculation errors in QA plans. Our data suggest large errors may be missed using conventional gamma criteria and that using stricter criteria with an increased dose threshold may reduce the range of missed errors. This approach allows quantification of gamma criteria sensitivity and is straightforward to apply to other combinations of devices and treatment techniques.
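
    The error-curve idea can be reproduced in outline with a one-dimensional global gamma comparison and an induced overall (MU-like) scaling error. This is a highly simplified sketch on synthetic profiles, not the ArcCHECK/planning-system workflow used in the study:

        import numpy as np

        def gamma_pass_rate(meas, calc, x, dd=0.03, dta=3.0, threshold=0.10):
            """Simplified 1-D global gamma analysis (dd: dose-difference fraction, dta: mm)."""
            ref = calc.max()
            passing = evaluated = 0
            for xi, m in zip(x, meas):
                if m < threshold * ref:                  # low-dose threshold
                    continue
                evaluated += 1
                g2 = np.min(((x - xi) / dta) ** 2 + ((calc - m) / (dd * ref)) ** 2)
                passing += g2 <= 1.0
            return passing / evaluated

        x = np.linspace(-50.0, 50.0, 201)                # mm
        calc = np.exp(-x**2 / (2 * 15.0**2))             # synthetic calculated dose profile
        for mu_err in (-0.10, -0.05, 0.0, 0.05, 0.10):   # induced scaling ("MU") errors
            meas = calc * (1.0 + mu_err)
            print(f"MU error {mu_err:+.0%}: pass rate {gamma_pass_rate(meas, calc, x):.1%}")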

  19. Skylab water balance error analysis

    NASA Technical Reports Server (NTRS)

    Leonard, J. I.

    1977-01-01

    Estimates of the precision of the net water balance were obtained for the entire Skylab preflight and inflight phases as well as for the first two weeks of flight. Quantitative estimates of both total sampling errors and instrumentation errors were obtained. It was shown that measurement error is minimal in comparison to biological variability, and little can be gained from improvement in analytical accuracy. In addition, a propagation of error analysis demonstrated that total water balance error could be accounted for almost entirely by the errors associated with body mass changes. Errors due to interaction between terms in the water balance equation (covariances) represented less than 10% of the total error. Overall, the analysis provides evidence that daily measurements of body water changes obtained from the indirect balance technique are reasonable, precise, and reliable. The method is not biased toward net retention or loss.

  20. Cardinality Balanced Multi-Target Multi-Bernoulli Filter with Error Compensation.

    PubMed

    He, Xiangyu; Liu, Guixi

    2016-01-01

    The cardinality balanced multi-target multi-Bernoulli (CBMeMBer) filter developed recently has been proved an effective multi-target tracking (MTT) algorithm based on the random finite set (RFS) theory, and it can jointly estimate the number of targets and their states from a sequence of sensor measurement sets. However, because of the existence of systematic errors in sensor measurements, the CBMeMBer filter can suffer varying degrees of performance degradation. In this paper, an extended CBMeMBer filter, in which the joint probability density function of target state and systematic error is recursively estimated, is proposed to address the MTT problem based on the sensor measurements with systematic errors. In addition, an analytic implementation of the extended CBMeMBer filter is also presented for linear Gaussian models. Simulation results confirm that the proposed algorithm can track multiple targets with better performance. PMID:27589764
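
    The CBMeMBer recursion itself is beyond a short example, but the core idea of the extension, augmenting the estimated state with the sensor's systematic error so that both are updated recursively, can be illustrated with a plain Kalman filter for a single target observed by two sensors, one of which carries an unknown constant offset. The dynamics, noise levels, and two-sensor setup are assumptions made for this sketch; this is not the filter proposed in the paper.

```python
import numpy as np

# Toy single-target analogue of bias (systematic error) augmentation, not the CBMeMBer filter.
dt = 1.0
F = np.array([[1, dt, 0],
              [0,  1, 0],
              [0,  0, 1]])                 # state: [position, velocity, bias of sensor 2]
H = np.array([[1, 0, 0],                   # sensor 1: unbiased position measurement
              [1, 0, 1]])                  # sensor 2: position + constant systematic offset
Q = np.diag([0.01, 0.01, 1e-6])            # tiny process noise keeps the bias adjustable
R = np.diag([1.0, 1.0])

rng = np.random.default_rng(0)
true_bias = 5.0
p, v = 0.0, 1.0
x = np.zeros(3)
P = np.diag([10.0, 10.0, 25.0])

for _ in range(200):
    p += dt * v
    z = np.array([p, p + true_bias]) + rng.normal(scale=1.0, size=2)
    x = F @ x                               # predict
    P = F @ P @ F.T + Q
    S = H @ P @ H.T + R                     # update
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(3) - K @ H) @ P

print(f"estimated systematic offset = {x[2]:.2f} (true value {true_bias})")
```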

  1. A systematic review of image segmentation methodology, used in the additive manufacture of patient-specific 3D printed models of the cardiovascular system

    PubMed Central

    Byrne, N; Velasco Forte, M; Tandon, A; Valverde, I

    2016-01-01

    Background Shortcomings in existing methods of image segmentation preclude the widespread adoption of patient-specific 3D printing as a routine decision-making tool in the care of those with congenital heart disease. We sought to determine the range of cardiovascular segmentation methods and how long each of these methods takes. Methods A systematic review of literature was undertaken. Medical imaging modality, segmentation methods, segmentation time, segmentation descriptive quality (SDQ) and segmentation software were recorded. Results In total, 136 studies met the inclusion criteria (1 clinical trial; 80 journal articles; 55 conference, technical and case reports). The most frequently used image segmentation methods were brightness thresholding, region growing and manual editing, as supported by the most popular piece of proprietary software: Mimics (Materialise NV, Leuven, Belgium, 1992–2015). The use of bespoke software developed by individual authors was not uncommon. SDQ indicated that reporting of image segmentation methods was generally poor with only one in three accounts providing sufficient detail for their procedure to be reproduced. Conclusions and implication of key findings Predominantly anecdotal and case reporting precluded rigorous assessment of risk of bias and strength of evidence. This review finds a reliance on manual and semi-automated segmentation methods which demand a high level of expertise and a significant time commitment on the part of the operator. In light of the findings, we have made recommendations regarding reporting of 3D printing studies. We anticipate that these findings will encourage the development of advanced image segmentation methods. PMID:27170842

  2. Theory of Test Translation Error

    ERIC Educational Resources Information Center

    Solano-Flores, Guillermo; Backhoff, Eduardo; Contreras-Nino, Luis Angel

    2009-01-01

    In this article, we present a theory of test translation whose intent is to provide the conceptual foundation for effective, systematic work in the process of test translation and test translation review. According to the theory, translation error is multidimensional; it is not simply the consequence of defective translation but an inevitable fact…

  3. Neural network calibration of a snapshot birefringent Fourier transform spectrometer with periodic phase errors.

    PubMed

    Luo, David; Kudenov, Michael W

    2016-05-16

    Systematic phase errors in Fourier transform spectroscopy can severely degrade the calculated spectra. Compensation of these errors is typically accomplished using post-processing techniques, such as Fourier deconvolution, linear unmixing, or iterative solvers. This results in increased computational complexity when reconstructing and calibrating many parallel interference patterns. In this paper, we describe a new method of calibrating a Fourier transform spectrometer based on the use of artificial neural networks (ANNs). In this way, it is demonstrated that a simpler and more straightforward reconstruction process can be achieved at the cost of additional calibration equipment. To this end, we provide a theoretical model for general systematic phase errors in a polarization birefringent interferometer. This is followed by a discussion of our experimental setup and a demonstration of our technique, as applied to data with and without phase error. The technique's utility is then supported by comparison to alternative reconstruction techniques using fast Fourier transforms (FFTs) and linear unmixing. PMID:27409947

  4. Discretization vs. Rounding Error in Euler's Method

    ERIC Educational Resources Information Center

    Borges, Carlos F.

    2011-01-01

    Euler's method for solving initial value problems is an excellent vehicle for observing the relationship between discretization error and rounding error in numerical computation. Reductions in stepsize, in order to decrease discretization error, necessarily increase the number of steps and so introduce additional rounding error. The problem is…
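
    The trade-off the abstract describes can be reproduced in a few lines: for y' = y with y(0) = 1, Euler's method is run to t = 1 at several step sizes in both double and single precision. As the step size shrinks, discretization error falls, but the growing number of rounded arithmetic operations eventually dominates, which is most visible in the float32 column. The test equation and step sizes are just a convenient illustration, not taken from the article.

```python
import numpy as np

def euler_error(h, dtype):
    """Euler's method for y' = y, y(0) = 1, integrated to t = 1 in the given
    floating-point precision; returns |y_N - e|, the total error at t = 1."""
    n = int(round(1.0 / h))
    y = dtype(1.0)
    one_plus_h = dtype(1.0) + dtype(h)
    for _ in range(n):
        y = y * one_plus_h               # each step adds discretization AND rounding error
    return abs(float(y) - np.e)

print(f"{'h':>10} {'float64 error':>15} {'float32 error':>15}")
for h in [1e-1, 1e-2, 1e-3, 1e-4, 1e-5, 1e-6]:
    print(f"{h:>10.0e} {euler_error(h, np.float64):>15.3e} {euler_error(h, np.float32):>15.3e}")
```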

  5. New oral anticoagulants in addition to single or dual antiplatelet therapy after an acute coronary syndrome: a systematic review and meta-analysis

    PubMed Central

    Oldgren, Jonas; Wallentin, Lars; Alexander, John H.; James, Stefan; Jönelid, Birgitta; Steg, Gabriel; Sundström, Johan

    2013-01-01

    Background Oral anticoagulation in addition to antiplatelet treatment after an acute coronary syndrome might reduce ischaemic events but increase bleeding risk. We performed a meta-analysis to evaluate the efficacy and safety of adding direct thrombin or factor-Xa inhibition by any of the novel oral anticoagulants (apixaban, dabigatran, darexaban, rivaroxaban, and ximelagatran) to single (aspirin) or dual (aspirin and clopidogrel) antiplatelet therapy in this setting. Methods and results All seven published randomized, placebo-controlled phase II and III studies of novel oral anticoagulants in acute coronary syndromes were included. The database consisted of 30 866 patients, 4135 (13.4%) on single, and 26 731 (86.6%) on dual antiplatelet therapy, with a non-ST- or ST-elevation acute coronary syndrome within the last 7–14 days. We defined major adverse cardiovascular events (MACEs) as the composite of all-cause mortality, myocardial infarction, or stroke; and clinically significant bleeding as the composite of major and non-major bleeding requiring medical attention according to the study definitions. When compared with aspirin alone the combination of an oral anticoagulant and aspirin reduced the incidence of MACE [hazard ratio (HR) and 95% confidence interval 0.70; 0.59–0.84], but increased clinically significant bleeding (HR: 1.79; 1.54–2.09). Compared with dual antiplatelet therapy with aspirin and clopidogrel, adding an oral anticoagulant decreased the incidence of MACE modestly (HR: 0.87; 0.80–0.95), but more than doubled the bleeding (HR: 2.34; 2.06–2.66). Heterogeneity between studies was low, and results were similar when restricting the analysis to phase III studies. Conclusion In patients with a recent acute coronary syndrome, the addition of a new oral anticoagulant to antiplatelet therapy results in a modest reduction in cardiovascular events but a substantial increase in bleeding, most pronounced when new oral anticoagulants are combined with
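
    The pooling step behind summary figures such as HR 0.70 (0.59–0.84) can be sketched with inverse-variance fixed-effect pooling of log hazard ratios. The per-study hazard ratios and intervals below are invented, and the published analysis may have used a different fixed-effect weighting, so this only illustrates the arithmetic of combining HRs reported with 95% confidence intervals.

```python
import numpy as np

def pool_fixed_effect(hrs, ci_lower, ci_upper):
    """Inverse-variance fixed-effect pooling of hazard ratios reported as
    HR (95% CI); the CI half-width on the log scale gives each study's SE."""
    log_hr = np.log(hrs)
    se = (np.log(ci_upper) - np.log(ci_lower)) / (2 * 1.96)
    w = 1.0 / se**2
    pooled = np.sum(w * log_hr) / np.sum(w)
    pooled_se = 1.0 / np.sqrt(np.sum(w))
    lo, hi = pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se
    return np.exp(pooled), np.exp(lo), np.exp(hi)

# Illustrative per-trial hazard ratios (not the actual trial results)
hr, lo, hi = pool_fixed_effect(hrs=[0.84, 0.90, 0.87],
                               ci_lower=[0.72, 0.78, 0.75],
                               ci_upper=[0.98, 1.04, 1.01])
print(f"pooled HR = {hr:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```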

  6. Efficacy and Safety Assessment of the Addition of Bevacizumab to Adjuvant Therapy Agents in Cancer Patients: A Systematic Review and Meta-Analysis of Randomized Controlled Trials

    PubMed Central

    Ahmadizar, Fariba; Onland-Moret, N. Charlotte; de Boer, Anthonius; Liu, Geoffrey; Maitland-van der Zee, Anke H.

    2015-01-01

    Aim To evaluate the efficacy and safety of bevacizumab in the adjuvant cancer therapy setting within different subset of patients. Methods & Design/ Results PubMed, EMBASE, Cochrane and Clinical trials.gov databases were searched for English language studies of randomized controlled trials comparing bevacizumab and adjuvant therapy with adjuvant therapy alone published from January 1966 to 7th of May 2014. Progression free survival, overall survival, overall response rate, safety and quality of life were analyzed using random- or fixed-effects models according to the PRISMA guidelines. We obtained data from 44 randomized controlled trials (30,828 patients). Combining bevacizumab with different adjuvant therapies resulted in significant improvement of progression free survival (log hazard ratio, 0.87; 95% confidence interval (CI), 0.84–0.89), overall survival (log hazard ratio, 0.96; 95% CI, 0.94–0.98) and overall response rate (relative risk, 1.46; 95% CI: 1.33–1.59) compared to adjuvant therapy alone in all studied tumor types. In subgroup analyses, there were no interactions of bevacizumab with baseline characteristics on progression free survival and overall survival, while overall response rate was influenced by tumor type and bevacizumab dose (p-value: 0.02). Although bevacizumab use resulted in additional expected adverse drug reactions except anemia and fatigue, it was not associated with a significant decline in quality of life. There was a trend towards a higher risk of several side effects in patients treated by high-dose bevacizumab compared to the low-dose e.g. all grade proteinuria (9.24; 95% CI: 6.60–12.94 vs. 2.64; 95% CI: 1.29–5.40). Conclusions Combining bevacizumab with different adjuvant therapies provides a survival benefit across all major subsets of patients, including by tumor type, type of adjuvant therapy, and duration and dose of bevacizumab therapy. Though bevacizumab was associated with increased risks of some adverse drug

  7. Error handling strategies in multiphase inverse modeling

    SciTech Connect

    Finsterle, S.; Zhang, Y.

    2010-12-01

    Parameter estimation by inverse modeling involves the repeated evaluation of a function of residuals. These residuals represent both errors in the model and errors in the data. In practical applications of inverse modeling of multiphase flow and transport, the error structure of the final residuals often significantly deviates from the statistical assumptions that underlie standard maximum likelihood estimation using the least-squares method. Large random or systematic errors are likely to lead to convergence problems, biased parameter estimates, misleading uncertainty measures, or poor predictive capabilities of the calibrated model. The multiphase inverse modeling code iTOUGH2 supports strategies that identify and mitigate the impact of systematic or non-normal error structures. We discuss these approaches and provide an overview of the error handling features implemented in iTOUGH2.
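
    iTOUGH2's specific error-handling strategies are not reproduced here, but the general problem, residuals that violate the least-squares assumptions, can be illustrated with a generic robust fit: scipy's least_squares with a Huber-type loss downweights gross residuals that would otherwise bias the estimated parameters. The exponential test model, noise level, and injected outliers are all assumptions for this sketch.

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(1)
t = np.linspace(0, 10, 50)
true_params = (2.0, 0.3)
y = true_params[0] * np.exp(-true_params[1] * t) + rng.normal(scale=0.02, size=t.size)
y[::10] += 0.5                      # a few gross, systematic-looking outliers

def residuals(p):
    return p[0] * np.exp(-p[1] * t) - y

ls = least_squares(residuals, x0=[1.0, 0.1])                               # plain least squares
rob = least_squares(residuals, x0=[1.0, 0.1], loss='huber', f_scale=0.05)  # robust loss

print("least squares :", ls.x)
print("robust (Huber):", rob.x)
print("true          :", true_params)
```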

  8. Robustness and modeling error characterization

    NASA Technical Reports Server (NTRS)

    Lehtomaki, N. A.; Castanon, D. A.; Sandell, N. R., Jr.; Levy, B. C.; Athans, M.; Stein, G.

    1984-01-01

    The results on robustness theory presented here are extensions of those given in Lehtomaki et al. (1981). The basic innovation in these new results is that they utilize minimal additional information about the structure of the modeling error, as well as its magnitude, to assess the robustness of feedback systems for which robustness tests based on the magnitude of modeling error alone are inconclusive.

  9. [Diagnostic Errors in Medicine].

    PubMed

    Buser, Claudia; Bankova, Andriyana

    2015-12-01

    The recognition of diagnostic errors in everyday practice can help improve patient safety. The most common diagnostic errors are the cognitive errors, followed by system-related errors and no fault errors. The cognitive errors often result from mental shortcuts, known as heuristics. The rate of cognitive errors can be reduced by a better understanding of heuristics and the use of checklists. The autopsy as a retrospective quality assessment of clinical diagnosis has a crucial role in learning from diagnostic errors. Diagnostic errors occur more often in primary care in comparison to hospital settings. On the other hand, the inpatient errors are more severe than the outpatient errors. PMID:26649954

  10. Setup errors in patients treated with intensity-modulated whole pelvic radiation therapy for gynecological malignancies

    SciTech Connect

    Haslam, Joshua J.; Lujan, Anthony E.; Mundt, Arno J.; Bonta, Dacian V.; Roeske, John C. E-mail: Roeske@rover.uchicago.edu

    2005-03-31

    Intensity-modulated whole pelvic radiation therapy (IM-WPRT) has decreased the incidence of gastrointestinal complications by reducing the volume of normal tissue irradiated in gynecologic patients. However, IM-WPRT plans result in steep dose gradients around the target volume, and thus accurate patient setup is essential. To quantify the accuracy of our patient positioning, we examined the weekly portal films of 46 women treated with IM-WPRT at our institution. All patients were positioned using a customized immobilization device that was indexed to the treatment table. Setup errors were evaluated by comparing portal images to simulation images using an algorithm that registers user-defined open curve segments drawn on both sets of film. The setup errors, which were separated into systematic and random components, ranged from 1.9 to 3.7 mm for the translations and 1.3 deg to 4.4 deg for the 2 in-plane rotations. The systematic errors were all less than the respective random errors, with the largest error in the anterior/posterior direction. In addition, there was no correlation between the magnitude of these errors and patient-specific factors (age, weight, height). In the future, we will investigate the effect of these setup errors on the delivered dose distribution.
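
    A common convention for the decomposition mentioned above (assumed here; the paper may define its components slightly differently) is to take each patient's mean shift over the imaged fractions as that patient's systematic error, the spread of those means across patients as the population systematic error, and the root-mean-square of the per-patient standard deviations as the population random error. The synthetic shifts below stand in for the portal-film measurements.

```python
import numpy as np

# shifts[p, f] = measured setup shift (mm) of patient p at imaged fraction f (synthetic data)
rng = np.random.default_rng(2)
n_patients, n_fractions = 46, 5
patient_offsets = rng.normal(0.0, 2.0, n_patients)                 # per-patient systematic offsets
shifts = patient_offsets[:, None] + rng.normal(0.0, 3.0, (n_patients, n_fractions))

per_patient_mean = shifts.mean(axis=1)
per_patient_sd = shifts.std(axis=1, ddof=1)

group_mean = per_patient_mean.mean()              # overall bias of the population
Sigma = per_patient_mean.std(ddof=1)              # population systematic error
sigma = np.sqrt(np.mean(per_patient_sd**2))       # population random error (RMS of per-patient SDs)

print(f"group mean = {group_mean:.1f} mm, systematic = {Sigma:.1f} mm, random = {sigma:.1f} mm")
```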

  11. Modular error embedding

    DOEpatents

    Sandford, II, Maxwell T.; Handel, Theodore G.; Ettinger, J. Mark

    1999-01-01

    A method of embedding auxiliary information into the digital representation of host data containing noise in the low-order bits. The method applies to digital data representing analog signals, for example digital images. The method reduces the error introduced by other methods that replace the low-order bits with auxiliary information. By a substantially reverse process, the embedded auxiliary data can be retrieved easily by an authorized user through use of a digital key. The modular error embedding method includes a process to permute the order in which the host data values are processed. The method doubles the amount of auxiliary information that can be added to host data values, in comparison with bit-replacement methods for high bit-rate coding. The invention preserves human perception of the meaning and content of the host data, permitting the addition of auxiliary data in the amount of 50% or greater of the original host data.

  12. Using data assimilation for systematic model improvement

    NASA Astrophysics Data System (ADS)

    Lang, Matthew S.; van Leeuwen, Peter Jan; Browne, Phil

    2016-04-01

    In Numerical Weather Prediction parameterisations are used to simulate missing physics in the model. These can be due to a lack of scientific understanding or a lack of computing power available to address all the known physical processes. Parameterisations are sources of large uncertainty in a model as parameter values used in these parameterisations cannot be measured directly and hence are often not well known, and the parameterisations themselves are approximations of the processes present in the true atmosphere. Whilst there are many efficient and effective methods for combined state/parameter estimation in data assimilation, such as state augmentation, these are not effective at estimating the structure of parameterisations. A new method of parameterisation estimation is proposed that uses sequential data assimilation methods to estimate errors in the numerical models at each space-time point for each model equation. These errors are then fitted to predetermined functional forms of missing physics or parameterisations, that are based upon prior information. The method picks out the functional form, or that combination of functional forms, that best fits the error structure. The prior information typically takes the form of expert knowledge. We applied the method to a one-dimensional advection model with additive model error, and it is shown that the method can accurately estimate parameterisations, with consistent error estimates. It is also demonstrated that state augmentation is not successful. The results indicate that this new method is a powerful tool in systematic model improvement.
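
    The final fitting step can be sketched as a regression of the diagnosed model-error values onto a small library of candidate functional forms, keeping the combination that best explains the error structure under a complexity penalty. The toy error field, the candidate library, and the AIC-like score are assumptions for illustration; the method's implementation inside a sequential data assimilation system is more involved.

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(3)
x = np.linspace(0, 2 * np.pi, 200)
# Diagnosed model-error values at each grid point (toy truth: 0.8*sin(x) - 0.3*x plus noise)
model_error = 0.8 * np.sin(x) - 0.3 * x + rng.normal(scale=0.05, size=x.size)

# Candidate functional forms of the missing physics, e.g. from expert knowledge
library = {'sin(x)': np.sin(x), 'cos(x)': np.cos(x), 'x': x, 'x^2': x**2}

best = None
for k in range(1, len(library) + 1):
    for names in combinations(library, k):
        A = np.column_stack([library[n] for n in names])
        coef, *_ = np.linalg.lstsq(A, model_error, rcond=None)
        rss = np.sum((A @ coef - model_error) ** 2)
        score = x.size * np.log(rss / x.size) + 2 * k     # AIC-like penalty on complexity
        if best is None or score < best[0]:
            best = (score, names, coef)

print("selected terms:", dict(zip(best[1], np.round(best[2], 2))))
```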

  13. Sun compass error model

    NASA Technical Reports Server (NTRS)

    Blucker, T. J.; Ferry, W. W.

    1971-01-01

    An error model is described for the Apollo 15 sun compass, a contingency navigational device. Field test data are presented along with significant results of the test. The errors reported include a random error resulting from tilt in leveling the sun compass, a random error because of observer sighting inaccuracies, a bias error because of mean tilt in compass leveling, a bias error in the sun compass itself, and a bias error because the device is leveled to the local terrain slope.

  14. A Systematic Methodology for Verifying Superscalar Microprocessors

    NASA Technical Reports Server (NTRS)

    Srivas, Mandayam; Hosabettu, Ravi; Gopalakrishnan, Ganesh

    1999-01-01

    We present a systematic approach to decompose and incrementally build the proof of correctness of pipelined microprocessors. The central idea is to construct the abstraction function by using completion functions, one per unfinished instruction, each of which specifies the effect (on the observables) of completing the instruction. In addition to avoiding the term size and case explosion problem that limits the pure flushing approach, our method helps localize errors, and also handles stages with interactive loops. The technique is illustrated on pipelined and superscalar pipelined implementations of a subset of the DLX architecture. It has also been applied to a processor with out-of-order execution.

  15. Immediate error correction process following sleep deprivation.

    PubMed

    Hsieh, Shulan; Cheng, I-Chen; Tsai, Ling-Ling

    2007-06-01

    Previous studies have suggested that one night of sleep deprivation decreases frontal lobe metabolic activity, particularly in the anterior cingulate cortex (ACC), resulting in decreased performance in various executive function tasks. This study thus attempted to address whether sleep deprivation impaired the executive function of error detection and error correction. Sixteen young healthy college students (seven women, nine men, with ages ranging from 18 to 23 years) participated in this study. Participants performed a modified letter flanker task and were instructed to make immediate error corrections on detecting performance errors. Event-related potentials (ERPs) during the flanker task were obtained using a within-subject, repeated-measure design. The error negativity or error-related negativity (Ne/ERN) and the error positivity (Pe) seen immediately after errors were analyzed. The results show that the amplitude of the Ne/ERN was reduced significantly following sleep deprivation. Reduction also occurred for error trials with subsequent correction, indicating that sleep deprivation influenced error correction ability. This study further demonstrated that the impairment in immediate error correction following sleep deprivation was confined to specific stimulus types, with both Ne/ERN and behavioral correction rates being reduced only for trials in which flanker stimuli were incongruent with the target stimulus, while the response to the target was compatible with that of the flanker stimuli following sleep deprivation. The results thus warrant future systematic investigation of the interaction between stimulus type and error correction following sleep deprivation. PMID:17542943

  16. Remediating Common Math Errors.

    ERIC Educational Resources Information Center

    Wagner, Rudolph F.

    1981-01-01

    Explanations and remediation suggestions for five types of mathematics errors due either to perceptual or cognitive difficulties are given. Error types include directionality problems, mirror writing, visually misperceived signs, diagnosed directionality problems, and mixed process errors. (CL)

  17. Global Warming Estimation from MSU: Correction for Drift and Calibration Errors

    NASA Technical Reports Server (NTRS)

    Prabhakara, C.; Iacovazzi, R., Jr.; Yoo, J.-M.; Einaudi, Franco (Technical Monitor)

    2000-01-01

    Microwave Sounding Unit (MSU) radiometer observations in Ch 2 (53.74 GHz), made in the nadir direction from sequential, sun-synchronous, polar-orbiting NOAA morning satellites (NOAA 6, 10 and 12 that have about 7am/7pm orbital geometry) and afternoon satellites (NOAA 7, 9, 11 and 14 that have about 2am/2pm orbital geometry) are analyzed in this study to derive global temperature trend from 1980 to 1998. In order to remove the discontinuities between the data of the successive satellites and to get a continuous time series, first we have used the shortest possible time record of each satellite. In this way we get a preliminary estimate of the global temperature trend of 0.21 K/decade. However, this estimate is affected by systematic time-dependent errors. One such error is the instrument calibration error eo. This error can be inferred whenever there are overlapping measurements made by two satellites over an extended period of time. From the available successive satellite data we have taken the longest possible time record of each satellite to form the time series during the period 1980 to 1998 and to account for this error eo. We find we can decrease the global temperature trend by about 0.07 K/decade. In addition there are systematic time dependent errors ed and ec present in the data that are introduced by the drift in the satellite orbital geometry. ed arises from the diurnal cycle in temperature and ec is the drift related change in the calibration of the MSU. In order to analyze the nature of these drift related errors the multi-satellite Ch 2 data set is partitioned into am and pm subsets to create two independent time series. The error ed can be assessed in the am and pm data of Ch 2 on land and can be eliminated. Observations made in the MSU Ch 1 (50.3 GHz) support this approach. The error ec is obvious only in the difference between the pm and am observations of Ch 2 over the ocean. We have followed two different paths to assess the impact of the errors on the global temperature trend. In one path the

  18. Global Warming Estimation from MSU: Correction for Drift and Calibration Errors

    NASA Technical Reports Server (NTRS)

    Prabhakara, C.; Iacovazzi, R., Jr.; Yoo, J.-M.

    2000-01-01

    Microwave Sounding Unit (MSU) radiometer observations in Ch 2 (53.74 GHz), made in the nadir direction from sequential, sun-synchronous, polar-orbiting NOAA morning satellites (NOAA 6, 10 and 12 that have approximately 7am/7pm orbital geometry) and afternoon satellites (NOAA 7, 9, 11 and 14 that have approximately 2am/2pm orbital geometry) are analyzed in this study to derive global temperature trend from 1980 to 1998. In order to remove the discontinuities between the data of the successive satellites and to get a continuous time series, first we have used the shortest possible time record of each satellite. In this way we get a preliminary estimate of the global temperature trend of 0.21 K/decade. However, this estimate is affected by systematic time-dependent errors. One such error is the instrument calibration error eo. This error can be inferred whenever there are overlapping measurements made by two satellites over an extended period of time. From the available successive satellite data we have taken the longest possible time record of each satellite to form the time series during the period 1980 to 1998 and to account for this error eo. We find eo can decrease the global temperature trend by approximately 0.07 K/decade. In addition there are systematic time dependent errors ed and ec present in the data that are introduced by the drift in the satellite orbital geometry. ed arises from the diurnal cycle in temperature and ec is the drift related change in the calibration of the MSU. In order to analyze the nature of these drift related errors the multi-satellite Ch 2 data set is partitioned into am and pm subsets to create two independent time series. The error ed can be assessed in the am and pm data of Ch 2 on land and can be eliminated. Observations made in the MSU Ch 1 (50.3 GHz) support this approach. The error ec is obvious only in the difference between the pm and am observations of Ch 2 over the ocean. We have followed two different paths to assess the impact of the

  19. On the errors and stability of reference frames

    NASA Astrophysics Data System (ADS)

    Gubanov, V. S.; Kurdubov, S. L.

    2013-12-01

    The distribution of systematic errors in the coordinates of 1217 extragalactic radio sources included in the latest version of the ICRF2 (International Celestial Reference Frame) reference catalog has been mapped for the first time by processing VLBI observations from international astrometric and geodetic programs spanning the period 1980-2012. These errors are shown to reach ±1.0 mas (milliarcseconds). However, for a sample of 752 sources observed more than 100 times, these errors do not exceed ±0.2 mas, suggesting that ICRF2 is inhomogeneous. In addition, the individual stability of dozens of extragalactic radio sources and ground-based stations included in the latest version of the International Terrestrial Reference Frame, ITRF2005, has been investigated. Significant linear trends and anomalous shifts reaching ±20 µas (microarcseconds) have been detected for many of the sources. Significant systematic shifts have also been found for some of the reference stations. The results obtained stimulate a search for new methods of analyzing VLBI observations and ways of their global adjustment that would provide greater homogeneity and stability of the ICRF and ITRF. This is needed both to increase the accuracy of determining the astrometric, geodetic, and geodynamic parameters derived from these observations and to facilitate their physical interpretation. The work has been performed with the QUASAR multifunction software package developed at the Institute of Applied Astronomy of the Russian Academy of Sciences.

  20. Sources of Error in Mammalian Genetic Screens

    PubMed Central

    Sack, Laura Magill; Davoli, Teresa; Xu, Qikai; Li, Mamie Z.; Elledge, Stephen J.

    2016-01-01

    Genetic screens are invaluable tools for dissection of biological phenomena. Optimization of such screens to enhance discovery of candidate genes and minimize false positives is thus a critical aim. Here, we report several sources of error common to pooled genetic screening techniques used in mammalian cell culture systems, and demonstrate methods to eliminate these errors. We find that reverse transcriptase-mediated recombination during retroviral replication can lead to uncoupling of molecular tags, such as DNA barcodes (BCs), from their associated library elements, leading to chimeric proviral genomes in which BCs are paired to incorrect ORFs, shRNAs, etc. This effect depends on the length of homologous sequence between unique elements, and can be minimized with careful vector design. Furthermore, we report that residual plasmid DNA from viral packaging procedures can contaminate transduced cells. These plasmids serve as additional copies of the PCR template during library amplification, resulting in substantial inaccuracies in measurement of initial reference populations for screen normalization. The overabundance of template in some samples causes an imbalance between PCR cycles of contaminated and uncontaminated samples, which results in a systematic artifactual depletion of GC-rich library elements. Elimination of contaminating plasmid DNA using the bacterial endonuclease Benzonase can restore faithful measurements of template abundance and minimize GC bias. PMID:27402361

  1. Sources of Error in Mammalian Genetic Screens.

    PubMed

    Sack, Laura Magill; Davoli, Teresa; Xu, Qikai; Li, Mamie Z; Elledge, Stephen J

    2016-01-01

    Genetic screens are invaluable tools for dissection of biological phenomena. Optimization of such screens to enhance discovery of candidate genes and minimize false positives is thus a critical aim. Here, we report several sources of error common to pooled genetic screening techniques used in mammalian cell culture systems, and demonstrate methods to eliminate these errors. We find that reverse transcriptase-mediated recombination during retroviral replication can lead to uncoupling of molecular tags, such as DNA barcodes (BCs), from their associated library elements, leading to chimeric proviral genomes in which BCs are paired to incorrect ORFs, shRNAs, etc. This effect depends on the length of homologous sequence between unique elements, and can be minimized with careful vector design. Furthermore, we report that residual plasmid DNA from viral packaging procedures can contaminate transduced cells. These plasmids serve as additional copies of the PCR template during library amplification, resulting in substantial inaccuracies in measurement of initial reference populations for screen normalization. The overabundance of template in some samples causes an imbalance between PCR cycles of contaminated and uncontaminated samples, which results in a systematic artifactual depletion of GC-rich library elements. Elimination of contaminating plasmid DNA using the bacterial endonuclease Benzonase can restore faithful measurements of template abundance and minimize GC bias. PMID:27402361

  2. Errors Associated with the Direct Measurement of Radionuclides in Wounds

    SciTech Connect

    Hickman, D P

    2006-03-02

    Work in radiation areas can occasionally result in accidental wounds containing radioactive materials. When a wound is incurred within a radiological area, the presence of radioactivity in the wound needs to be confirmed to determine if additional remedial action needs to be taken. Commonly used radiation area monitoring equipment is poorly suited for measurement of radioactive material buried within the tissue of the wound. The Lawrence Livermore National Laboratory (LLNL) In Vivo Measurement Facility has constructed a portable wound counter that provides sufficient detection of radioactivity in wounds as shown in Fig. 1. The LLNL wound measurement system is specifically designed to measure low energy photons that are emitted from uranium and transuranium radionuclides. The portable wound counting system uses a 2.5 cm diameter by 1 mm thick NaI(Tl) detector. The detector is connected to a Canberra NaI InSpector™. The InSpector interfaces with an IBM ThinkPad laptop computer, which operates under Genie 2000 software. The wound counting system is maintained and used at the LLNL In Vivo Measurement Facility. The hardware is designed to be portable and is occasionally deployed to respond to the LLNL Health Services facility or local hospitals for examination of personnel that may have radioactive materials within a wound. The typical detection levels when using the LLNL portable wound counter in a low background area are 0.4 nCi to 0.6 nCi assuming a near zero mass source. This paper documents the systematic errors associated with in vivo measurement of radioactive materials buried within wounds using the LLNL portable wound measurement system. These errors are divided into two basic categories, calibration errors and in vivo wound measurement errors. Within these categories, there are errors associated with particle self-absorption of photons, overlying tissue thickness, source distribution within the wound, and count errors. These errors have been examined and

  3. Analysis of field errors in existing undulators

    SciTech Connect

    Kincaid, B.M.

    1990-01-01

    The Advanced Light Source (ALS) and other third generation synchrotron light sources have been designed for optimum performance with undulator insertion devices. The performance requirements for these new undulators are explored, with emphasis on the effects of errors on source spectral brightness. Analysis of magnetic field data for several existing hybrid undulators is presented, decomposing errors into systematic and random components. An attempt is made to identify the sources of these errors, and recommendations are made for designing future insertion devices. 12 refs., 16 figs.

  4. Measurement error in geometric morphometrics.

    PubMed

    Fruciano, Carmelo

    2016-06-01

    Geometric morphometrics, a set of methods for the statistical analysis of shape once saluted as a revolutionary advancement in the analysis of morphology, is now mature and routinely used in ecology and evolution. However, a factor often disregarded in empirical studies is the presence and the extent of measurement error. This is potentially a very serious issue because random measurement error can inflate the amount of variance and, since many statistical analyses are based on the amount of "explained" relative to "residual" variance, can result in loss of statistical power. On the other hand, systematic bias can affect statistical analyses by biasing the results (i.e. variation due to bias is incorporated in the analysis and treated as biologically-meaningful variation). Here, I briefly review common sources of error in geometric morphometrics. I then review the most commonly used methods to measure and account for both random and non-random measurement error, providing a worked example using a real dataset. PMID:27038025

  5. Addition of docetaxel or bisphosphonates to standard of care in men with localised or metastatic, hormone-sensitive prostate cancer: a systematic review and meta-analyses of aggregate data

    PubMed Central

    Vale, Claire L; Burdett, Sarah; Rydzewska, Larysa H M; Albiges, Laurence; Clarke, Noel W; Fisher, David; Fizazi, Karim; Gravis, Gwenaelle; James, Nicholas D; Mason, Malcolm D; Parmar, Mahesh K B; Sweeney, Christopher J; Sydes, Matthew R; Tombal, Bertrand; Tierney, Jayne F

    2016-01-01

    Summary Background Results from large randomised controlled trials combining docetaxel or bisphosphonates with standard of care in hormone-sensitive prostate cancer have emerged. In order to investigate the effects of these therapies and to respond to emerging evidence, we aimed to systematically review all relevant trials using a framework for adaptive meta-analysis. Methods For this systematic review and meta-analysis, we searched MEDLINE, Embase, LILACS, and the Cochrane Central Register of Controlled Trials, trial registers, conference proceedings, review articles, and reference lists of trial publications for all relevant randomised controlled trials (published, unpublished, and ongoing) comparing either standard of care with or without docetaxel or standard of care with or without bisphosphonates for men with high-risk localised or metastatic hormone-sensitive prostate cancer. For each trial, we extracted hazard ratios (HRs) of the effects of docetaxel or bisphosphonates on survival (time from randomisation until death from any cause) and failure-free survival (time from randomisation to biochemical or clinical failure or death from any cause) from published trial reports or presentations or obtained them directly from trial investigators. HRs were combined using the fixed-effect model (Mantel-Haenszel). Findings We identified five eligible randomised controlled trials of docetaxel in men with metastatic (M1) disease. Results from three (CHAARTED, GETUG-15, STAMPEDE) of these trials (2992 [93%] of 3206 men randomised) showed that the addition of docetaxel to standard of care improved survival. The HR of 0·77 (95% CI 0·68–0·87; p<0·0001) translates to an absolute improvement in 4-year survival of 9% (95% CI 5–14). Docetaxel in addition to standard of care also improved failure-free survival, with the HR of 0·64 (0·58–0·70; p<0·0001) translating into a reduction in absolute 4-year failure rates of 16% (95% CI 12–19). We identified 11 trials of

  6. The Error Distribution of BATSE GRB Location

    NASA Technical Reports Server (NTRS)

    Briggs, Michael S.; Pendleton, Geoffrey N.; Kippen, R. Marc; Brainerd, J. J.; Hurley, Kevin; Connaughton, Valerie; Meegan, Charles A.

    1998-01-01

    We develop empirical probability models for BATSE GRB location errors by a Bayesian analysis of the separations between BATSE GRB locations and locations obtained with the InterPlanetary Network (IPN). Models are compared and their parameters estimated using 394 GRBs with single IPN annuli and 20 GRBs with intersecting IPN annuli. Most of the analysis is for the 4B (rev) BATSE catalog; earlier catalogs are also analyzed. The simplest model that provides a good representation of the error distribution has 78% of the locations in a 'core' term with a systematic error of 1.85 degrees and the remainder in an extended tail with a systematic error of 5.36 degrees, implying a 68% confidence region for bursts with negligible statistical errors of 2.3 degrees. There is some evidence for a more complicated model in which the error distribution depends on the BATSE datatype that was used to obtain the location. Bright bursts are typically located using the CONT datatype, and according to the more complicated model, the 68% confidence region for CONT-located bursts with negligible statistical errors is 2.0 degrees.
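
    As a rough numerical check of the quoted 68% confidence region, one can assume each component of the core-plus-tail model is an isotropic Gaussian whose quoted systematic error is its own 68% containment radius, then solve for the radius containing 68% of the mixture. Both the radial form and that reading of the quoted numbers are assumptions made here, not the paper's stated model; they happen to reproduce the 2.3-degree figure approximately.

```python
import numpy as np
from scipy.optimize import brentq

def radius_68(frac_core=0.78, sys_core=1.85, sys_tail=5.36, stat=0.0):
    """68% containment radius of a two-component (core + tail) location-error model,
    assuming each component is an isotropic Gaussian whose quoted systematic error
    is its own 68% containment radius (an assumption, not the paper's definition)."""
    k = np.sqrt(2.0 * np.log(1.0 / 0.32))            # converts a 68% radius to a Rayleigh sigma
    s_core = np.hypot(sys_core, stat) / k
    s_tail = np.hypot(sys_tail, stat) / k
    contained = lambda r: (frac_core * (1 - np.exp(-r**2 / (2 * s_core**2)))
                           + (1 - frac_core) * (1 - np.exp(-r**2 / (2 * s_tail**2))) - 0.68)
    return brentq(contained, 1e-3, 50.0)

print(f"68% containment radius ~ {radius_68():.1f} degrees (negligible statistical error)")
```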

  7. Error management in blood establishments: results of eight years of experience (2003–2010) at the Croatian Institute of Transfusion Medicine

    PubMed Central

    Vuk, Tomislav; Barišić, Marijan; Očić, Tihomir; Mihaljević, Ivanka; Šarlija, Dorotea; Jukić, Irena

    2012-01-01

    Background. Continuous and efficient error management, including procedures from error detection to their resolution and prevention, is an important part of quality management in blood establishments. At the Croatian Institute of Transfusion Medicine (CITM), error management has been systematically performed since 2003. Materials and methods. Data derived from error management at the CITM during an 8-year period (2003–2010) formed the basis of this study. Throughout the study period, errors were reported to the Department of Quality Assurance. In addition to surveys and the necessary corrective activities, errors were analysed and classified according to the Medical Event Reporting System for Transfusion Medicine (MERS-TM). Results. During the study period, a total of 2,068 errors were recorded, including 1,778 (86.0%) in blood bank activities and 290 (14.0%) in blood transfusion services. As many as 1,744 (84.3%) errors were detected before issue of the product or service. Among the 324 errors identified upon release from the CITM, 163 (50.3%) errors were detected by customers and reported as complaints. In only five cases was an error detected after blood product transfusion however without any harmful consequences for the patients. All errors were, therefore, evaluated as “near miss” and “no harm” events. Fifty-two (2.5%) errors were evaluated as high-risk events. With regards to blood bank activities, the highest proportion of errors occurred in the processes of labelling (27.1%) and blood collection (23.7%). With regards to blood transfusion services, errors related to blood product issuing prevailed (24.5%). Conclusion. This study shows that comprehensive management of errors, including near miss errors, can generate data on the functioning of transfusion services, which is a precondition for implementation of efficient corrective and preventive actions that will ensure further improvement of the quality and safety of transfusion treatment. PMID

  8. Errors in paleomagnetism: Structural control on overlapped vectors - mathematical models

    NASA Astrophysics Data System (ADS)

    Rodríguez-Pintó, A.; Ramón, M. J.; Oliva-Urcia, B.; Pueyo, E. L.; Pocoví, A.

    2011-05-01

    The reliability of paleomagnetic data is a keystone to obtain trustworthy kinematic interpretations. The determination of the real paleomagnetic component recorded at a certain time in the geological evolution of a rock can be affected by several sources of errors: inclination shallowing, declination biases caused by incorrect restoration to the ancient field, internal deformation of rock volumes and lack of isolation of the paleomagnetic primary vector during the laboratory procedures (overlapping of components). These errors will limit or impede the validity of paleomagnetism as the only three-dimension reference. This paper presents the first systematic modeling of the effect of overlapped vectors referred to declination, inclination and stability tests taking into account the key variables: orientation of a primary and secondary (overlapped to the primary) vectors, degree of overlapping (intensity ratio of primary and secondary paleomagnetic vectors) and the fold axis orientation and dip of bedding plane. In this way, several scenarios of overlapping have been modeled in different fold geometries considering both polarities and all the variables aforementioned, allowing calculation of the deviations of the vector obtained in the laboratory (overlapped) with respect to the paleomagnetic reference (not overlapped). Observations from the models confirm that declination errors are larger than the inclination ones. In addition to the geometry factor, errors are mainly controlled by the relative magnitude of the primary with respect to the secondary component (P/S ratio). We observe larger asymmetries and bigger magnitudes of errors along the fold location if the primary and secondary records have different polarities. If the primary record (declination) and the fold axis orientation are perpendicular (Ω = 90°), errors reach maximum magnitudes and larger asymmetries along the fold surface (different dips). The effect of overlapping in the fold and reversal tests is also

  9. Estimation of discretization errors in contact pressure measurements.

    PubMed

    Fregly, Benjamin J; Sawyer, W Gregory

    2003-04-01

    Contact pressure measurements in total knee replacements are often made using a discrete sensor such as the Tekscan K-Scan sensor. However, no method currently exists for predicting the magnitude of sensor discretization errors in contact force, peak pressure, average pressure, and contact area, making it difficult to evaluate the accuracy of such measurements. This study identifies a non-dimensional area variable, defined as the ratio of the number of perimeter elements to the total number of elements with pressure, which can be used to predict these errors. The variable was evaluated by simulating discrete pressure sensors subjected to Hertzian and uniform pressure distributions with two different calibration procedures. The simulations systematically varied the size of the sensor elements, the contact ellipse aspect ratio, and the ellipse's location on the sensor grid. In addition, contact pressure measurements made with a K-Scan sensor on four different total knee designs were used to evaluate the magnitude of discretization errors under practical conditions. The simulations predicted a strong power law relationship (r² > 0.89) between worst-case discretization errors and the proposed non-dimensional area variable. In the total knee experiments, predicted discretization errors were on the order of 1-4% for contact force and peak pressure and 3-9% for average pressure and contact area. These errors are comparable to those arising from inserting a sensor into the joint space or truncating pressures with pressure sensitive film. The reported power law regression coefficients provide a simple way to estimate the accuracy of experimental measurements made with discrete pressure sensors when the contact patch is approximately elliptical. PMID:12600352
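
    The non-dimensional area variable is simple to compute from a discrete pressure map: count elements with pressure that have at least one empty 4-connected neighbour (perimeter elements) and divide by the total number of loaded elements. The grid size, the elliptical Hertz-like patch, and the 4-connected neighbourhood definition below are assumptions for illustration.

```python
import numpy as np

def perimeter_to_total_ratio(pressure, threshold=0.0):
    """Non-dimensional area variable: number of perimeter elements divided by the
    total number of elements registering pressure.  A perimeter element has pressure
    but at least one 4-connected neighbour without it."""
    mask = pressure > threshold
    padded = np.pad(mask, 1, constant_values=False)
    interior = (padded[:-2, 1:-1] & padded[2:, 1:-1] &
                padded[1:-1, :-2] & padded[1:-1, 2:])
    perimeter = mask & ~interior
    return perimeter.sum() / mask.sum()

# Toy elliptical Hertz-like contact patch on a 44 x 44 sensel grid
y, x = np.mgrid[-1:1:44j, -1:1:44j]
r2 = (x / 0.6) ** 2 + (y / 0.35) ** 2
pressure = np.where(r2 < 1, np.sqrt(np.clip(1 - r2, 0, None)), 0.0)

print(f"perimeter/total ratio = {perimeter_to_total_ratio(pressure):.2f}")
```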

  10. Beta systems error analysis

    NASA Technical Reports Server (NTRS)

    1984-01-01

    The atmospheric backscatter coefficient, beta, measured with an airborne CO Laser Doppler Velocimeter (LDV) system operating in a continuous wave, focussed mode is discussed. The Single Particle Mode (SPM) algorithm was developed from concept through analysis of an extensive amount of data obtained with the system on board a NASA aircraft. The SPM algorithm is intended to be employed in situations where one particle at a time appears in the sensitive volume of the LDV. In addition to giving the backscatter coefficient, the SPM algorithm also produces as intermediate results the aerosol density and the aerosol backscatter cross section distribution. A second method, which measures only the atmospheric backscatter coefficient, is called the Volume Mode (VM) and was simultaneously employed. The results of these two methods differed by slightly less than an order of magnitude. The measurement uncertainties or other errors in the results of the two methods are examined.

  11. Medical Error and Moral Luck.

    PubMed

    Hubbeling, Dieneke

    2016-09-01

    This paper addresses the concept of moral luck. Moral luck is discussed in the context of medical error, especially an error of omission that occurs frequently, but only rarely has adverse consequences. As an example, a failure to compare the label on a syringe with the drug chart results in the wrong medication being administered and the patient dies. However, this error may have previously occurred many times with no tragic consequences. Discussions on moral luck can highlight conflicting intuitions. Should perpetrators receive a harsher punishment because of an adverse outcome, or should they be dealt with in the same way as colleagues who have acted similarly, but with no adverse effects? An additional element to the discussion, specifically with medical errors, is that according to the evidence currently available, punishing individual practitioners does not seem to be effective in preventing future errors. The following discussion, using relevant philosophical and empirical evidence, posits a possible solution for the moral luck conundrum in the context of medical error: namely, making a distinction between the duty to make amends and assigning blame. Blame should be assigned on the basis of actual behavior, while the duty to make amends is dependent on the outcome. PMID:26662613

  12. Field error lottery

    SciTech Connect

    Elliott, C.J.; McVey, B.; Quimby, D.C.

    1990-01-01

    The level of field errors in an FEL is an important determinant of its performance. We have computed 3D performance of a large laser subsystem subjected to field errors of various types. These calculations have been guided by simple models such as SWOOP. The technique of choice is utilization of the FELEX free electron laser code that now possesses extensive engineering capabilities. Modeling includes the ability to establish tolerances of various types: fast and slow scale field bowing, field error level, beam position monitor error level, gap errors, defocusing errors, energy slew, displacement and pointing errors. Many effects of these errors on relative gain and relative power extraction are displayed and are the essential elements of determining an error budget. The random errors also depend on the particular random number seed used in the calculation. The simultaneous display of the performance versus error level of cases with multiple seeds illustrates the variations attributable to stochasticity of this model. All these errors are evaluated numerically for comprehensive engineering of the system. In particular, gap errors are found to place requirements beyond mechanical tolerances of ±25 μm, and amelioration of these may occur by a procedure utilizing direct measurement of the magnetic fields at assembly time. 4 refs., 12 figs.

  13. Inborn errors of metabolism

    MedlinePlus

    Metabolism - inborn errors of ... Bodamer OA. Approach to inborn errors of metabolism. In: Goldman L, Schafer AI, eds. Goldman's Cecil Medicine . 25th ed. Philadelphia, PA: Elsevier Saunders; 2015:chap 205. Rezvani I, Rezvani G. An ...

  14. Programming Errors in APL.

    ERIC Educational Resources Information Center

    Kearsley, Greg P.

    This paper discusses and provides some preliminary data on errors in APL programming. Data were obtained by analyzing listings of 148 complete and partial APL sessions collected from student terminal rooms at the University of Alberta. Frequencies of errors for the various error messages are tabulated. The data, however, are limited because they…

  15. Improved modelling of tool tracking errors by modelling dependent marker errors.

    PubMed

    Thompson, Stephen; Penney, Graeme; Dasgupta, Prokar; Hawkes, David

    2013-02-01

    Accurate understanding of equipment tracking error is essential for decision making in image guided surgery. For tools tracked using markers attached to a rigid body, existing error estimation methods use the assumption that the individual marker errors are independent random variables. This assumption is not valid for all tracking systems. This paper presents a method to estimate a more accurate tracking error function, consisting of a systematic and random component. The proposed method does not require detailed knowledge of the tracking system physics. Results from a pointer calibration are used to demonstrate that the proposed method provides a better match to observed results than the existing state of the art. A simulation of the pointer calibration process is then used to show that existing methods can underestimate the pointer calibration error by a factor of two. A further simulation of laparoscopic camera tracking is used to show that existing methods cannot model important variations in system performance due to the angular arrangement of the tracking markers. By arranging the markers such that the systematic errors are nearly identical for all markers, the rotational component of the tracking error can be reduced, resulting in a significant reduction in target tracking errors. PMID:22961298

  16. Issues in automatic OCR error classification

    SciTech Connect

    Esakov, J.; Lopresti, D.P.; Sandberg, J.S.; Zhou, J.

    1994-12-31

    In this paper we present the surprising result that OCR errors are not always uniformly distributed across a page. Under certain circumstances, 30% or more of the errors incurred can be attributed to a single, avoidable phenomenon in the scanning process. This observation has important ramifications for work that explicitly or implicitly assumes a uniform error distribution. In addition, our experiments show that not just the quantity but also the nature of the errors is affected. This could have an impact on strategies used for post-process error correction. Results such as these can be obtained only by analyzing large quantities of data in a controlled way. To this end, we also describe our algorithm for classifying OCR errors. This is based on a well-known dynamic programming approach for determining string edit distance which we have extended to handle the types of character segmentation errors inherent to OCR.
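
    The classification approach described, a dynamic-programming string alignment whose traceback is tallied by error type, can be sketched as a plain Levenshtein alignment. The paper's extension to OCR-specific character-segmentation errors (merges and splits such as 'm' read as 'rn') is not reproduced here, and the example strings are made up.

```python
def classify_errors(truth, ocr):
    """Levenshtein alignment with traceback, tallying substitution, insertion,
    and deletion errors (a simplification: merge/split segmentation errors are
    not modelled as distinct operations here)."""
    n, m = len(truth), len(ocr)
    d = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        d[i][0] = i
    for j in range(m + 1):
        d[0][j] = j
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = 0 if truth[i - 1] == ocr[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # match / substitution
    counts = {'substitution': 0, 'insertion': 0, 'deletion': 0}
    i, j = n, m
    while i > 0 or j > 0:
        if i > 0 and j > 0 and d[i][j] == d[i - 1][j - 1] + (truth[i - 1] != ocr[j - 1]):
            if truth[i - 1] != ocr[j - 1]:
                counts['substitution'] += 1
            i, j = i - 1, j - 1
        elif i > 0 and d[i][j] == d[i - 1][j] + 1:
            counts['deletion'] += 1
            i -= 1
        else:
            counts['insertion'] += 1
            j -= 1
    return counts

print(classify_errors("systematic error", "systernatic eror"))
```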

  17. Refractive errors in children.

    PubMed

    Tongue, A C

    1987-12-01

    Optical correction of refractive errors in infants and young children is indicated when the refractive errors are sufficiently large to cause unilateral or bilateral amblyopia, if they are impairing the child's ability to function normally, or if the child has accommodative strabismus. Screening for refractive errors is important and should be performed as part of the annual physical examination in all verbal children. Screening for significant refractive errors in preverbal children is more difficult; however, the red reflex test of Bruckner is useful for the detection of anisometropic refractive errors. The photorefraction test, which is an adaptation of Bruckner's red reflex test, may prove to be a useful screening device for detecting bilateral as well as unilateral refractive errors. Objective testing as well as subjective testing enables ophthalmologists to prescribe proper optical correction for refractive errors for infants and children of any age. PMID:3317238

  18. Error-prone signalling.

    PubMed

    Johnstone, R A; Grafen, A

    1992-06-22

    The handicap principle of Zahavi is potentially of great importance to the study of biological communication. Existing models of the handicap principle, however, make the unrealistic assumption that communication is error free. It seems possible, therefore, that Zahavi's arguments do not apply to real signalling systems, in which some degree of error is inevitable. Here, we present a general evolutionarily stable strategy (ESS) model of the handicap principle which incorporates perceptual error. We show that, for a wide range of error functions, error-prone signalling systems must be honest at equilibrium. Perceptual error is thus unlikely to threaten the validity of the handicap principle. Our model represents a step towards greater realism, and also opens up new possibilities for biological signalling theory. Concurrent displays, direct perception of quality, and the evolution of 'amplifiers' and 'attenuators' are all probable features of real signalling systems, yet handicap models based on the assumption of error-free communication cannot accommodate these possibilities. PMID:1354361

  19. Error analysis using organizational simulation.

    PubMed Central

    Fridsma, D. B.

    2000-01-01

    Organizational simulations have been used by project organizations in civil and aerospace industries to identify work processes and organizational structures that are likely to fail under certain conditions. Using a simulation system based on Galbraith's information-processing theory and Simon's notion of bounded-rationality, we retrospectively modeled a chemotherapy administration error that occurred in a hospital setting. Our simulation suggested that when there is a high rate of unexpected events, the oncology fellow was differentially backlogged with work when compared with other organizational members. Alternative scenarios suggested that providing more knowledge resources to the oncology fellow improved her performance more effectively than adding additional staff to the organization. Although it is not possible to know whether this might have prevented the error, organizational simulation may be an effective tool to prospectively evaluate organizational "weak links", and explore alternative scenarios to correct potential organizational problems before they generate errors. PMID:11079885

  20. Exploring Measurement Error with Cookies: A Real and Virtual Approach via Interactive Excel

    ERIC Educational Resources Information Center

    Sinex, Scott A; Gage, Barbara A.; Beck, Peggy J.

    2007-01-01

    A simple, guided-inquiry investigation using stacked sandwich cookies is employed to develop a simple linear mathematical model and to explore measurement error by incorporating errors as part of the investigation. Both random and systematic errors are presented. The model and errors are then investigated further by engaging with an interactive…
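
    A sketch of the kind of model and errors such an activity builds, under assumed numbers: stack height grows linearly with the number of cookies, a constant offset plays the role of a systematic error, and noisy readings play the role of random error. Fitting a line recovers the per-cookie thickness in the slope, while the systematic error is absorbed into the intercept.

```python
import numpy as np

rng = np.random.default_rng(5)
true_thickness = 11.0          # mm per cookie (assumed)
systematic_offset = 3.0        # mm, e.g. a zero error in how the ruler is read (assumed)

n_cookies = np.arange(1, 9)
# measured stack height = n * thickness + systematic offset + random reading error
heights = n_cookies * true_thickness + systematic_offset + rng.normal(0, 0.5, n_cookies.size)

slope, intercept = np.polyfit(n_cookies, heights, 1)
print(f"fitted thickness (slope) = {slope:.2f} mm  (true {true_thickness})")
print(f"fitted intercept         = {intercept:.2f} mm  (reveals the systematic error)")
```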

  1. A comparison of some observations of the Galilean satellites with Sampson's tables. [position error analysis

    NASA Technical Reports Server (NTRS)

    Arlot, J.-E.

    1975-01-01

    Two series of photographic observations of the Galilean satellites are analyzed to determine systematic errors in the observations and errors in Sampson's (1921) theory. Satellite-satellite as well as planet-satellite positions are used in comparing theory with observation. Ten unknown errors are identified, and results are presented for three determinations of the unknown longitude error.

  2. Aircraft system modeling error and control error

    NASA Technical Reports Server (NTRS)

    Kulkarni, Nilesh V. (Inventor); Kaneshige, John T. (Inventor); Krishnakumar, Kalmanje S. (Inventor); Burken, John J. (Inventor)

    2012-01-01

    A method for modeling error-driven adaptive control of an aircraft. Normal aircraft plant dynamics is modeled, using an original plant description in which a controller responds to a tracking error e(k) to drive the component to a normal reference value according to an asymptote curve. Where the system senses that (1) at least one aircraft plant component is experiencing an excursion and (2) the return of this component value toward its reference value is not proceeding according to the expected controller characteristics, neural network (NN) modeling of aircraft plant operation may be changed. However, if (1) is satisfied but the error component is returning toward its reference value according to expected controller characteristics, the NN will continue to model operation of the aircraft plant according to an original description.
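
    A schematic sketch of the kind of switching logic described above, with hypothetical function and parameter names; the geometric decay envelope below merely stands in for the patent's asymptote curve and is an assumption for illustration.

```python
def should_retrain_nn(error_history, expected_decay=0.9, tol=0.05):
    """Decide whether the neural-network plant model should be updated.

    error_history : recent tracking errors e(k), most recent last.
    expected_decay: per-step decay factor the nominal controller should achieve
                    (a simple geometric envelope used here as an assumption).
    Returns True when the component is excursing AND the error is not shrinking
    as fast as the expected controller characteristics predict.
    """
    if len(error_history) < 2:
        return False
    excursion = abs(error_history[-1]) > tol
    # expected envelope: |e(k)| <= expected_decay * |e(k-1)|
    returning_as_expected = abs(error_history[-1]) <= expected_decay * abs(error_history[-2])
    return excursion and not returning_as_expected

print(should_retrain_nn([1.0, 0.85, 0.80]))   # decaying too slowly -> True
print(should_retrain_nn([1.0, 0.80, 0.60]))   # decaying as expected -> False
```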

  3. Search, Memory, and Choice Error: An Experiment

    PubMed Central

    Sanjurjo, Adam

    2015-01-01

    Multiple attribute search is a central feature of economic life: we consider much more than price when purchasing a home, and more than wage when choosing a job. An experiment is conducted in order to explore the effects of cognitive limitations on choice in these rich settings, in accordance with the predictions of a new model of search memory load. In each task, subjects are made to search the same information in one of two orders, which differ in predicted memory load. Despite standard models of choice treating such variations in order of acquisition as irrelevant, lower predicted memory load search orders are found to lead to substantially fewer choice errors. An implication of the result for search behavior, more generally, is that in order to reduce memory load (thus choice error) a limited memory searcher ought to deviate from the search path of an unlimited memory searcher in predictable ways, a mechanism that can explain the systematic deviations from optimal sequential search that have recently been discovered in people's behavior. Further, as cognitive load is induced endogenously (within the task), and found to affect choice behavior, this result contributes to the cognitive load literature (in which load is induced exogenously), as well as the cognitive ability literature (in which cognitive ability is measured in a separate task). In addition, while the information overload literature has focused on the detrimental effects of the quantity of information on choice, this result suggests that, holding quantity constant, the order that information is observed in is an essential determinant of choice failure. PMID:26121356

  4. Search, Memory, and Choice Error: An Experiment.

    PubMed

    Sanjurjo, Adam

    2015-01-01

    Multiple attribute search is a central feature of economic life: we consider much more than price when purchasing a home, and more than wage when choosing a job. An experiment is conducted in order to explore the effects of cognitive limitations on choice in these rich settings, in accordance with the predictions of a new model of search memory load. In each task, subjects are made to search the same information in one of two orders, which differ in predicted memory load. Despite standard models of choice treating such variations in order of acquisition as irrelevant, lower predicted memory load search orders are found to lead to substantially fewer choice errors. An implication of the result for search behavior, more generally, is that in order to reduce memory load (thus choice error) a limited memory searcher ought to deviate from the search path of an unlimited memory searcher in predictable ways, a mechanism that can explain the systematic deviations from optimal sequential search that have recently been discovered in people's behavior. Further, as cognitive load is induced endogenously (within the task), and found to affect choice behavior, this result contributes to the cognitive load literature (in which load is induced exogenously), as well as the cognitive ability literature (in which cognitive ability is measured in a separate task). In addition, while the information overload literature has focused on the detrimental effects of the quantity of information on choice, this result suggests that, holding quantity constant, the order that information is observed in is an essential determinant of choice failure. PMID:26121356

  5. A Cross-Linguistic Speech Error Investigation of Functional Complexity

    ERIC Educational Resources Information Center

    Wells-Jensen, Sheri

    2007-01-01

    This work is a systematic, cross-linguistic examination of speech errors in English, Hindi, Japanese, Spanish and Turkish. It first describes a methodology for the generation of parallel corpora of error data, then uses these data to examine three general hypotheses about the relationship between language structure and the speech production…

  6. Digital correction of geometric and radiometric errors in ERTS data.

    NASA Technical Reports Server (NTRS)

    Bakis, R.; Wesley, M. A.; Will, P. M.

    1971-01-01

    The sensor systems of the ERTS-A satellite are discussed and sources of geometric and radiometric errors in the received images are identified. Digital algorithms are presented for detection of reseau and ground control points, for rapid implementation of geometric corrections, and for radiometric correction of errors caused by shading, image motion, modulation transfer function, and quantum and systematic noise.

  7. Error prevention and error management in medicine--adopting strategies from other professions.

    PubMed

    Thomeczek, C

    2003-12-01

    The report of the Institute of Medicine (IOM) 'To Err Is Human' received public interest. The simple term 'medical error' as it has been used in public so far does not describe the complex setting in medicine. The development of error management in industry (e.g. aviation) with an emphasis on human factors, communication, and systematic error is demonstrated in order to design similar approaches for medicine. Recommendations are based on the principles for designing safety systems in health care organisations published in the IOM report. PMID:14709928

  8. Effects of Structural Errors on Parameter Estimates

    NASA Technical Reports Server (NTRS)

    Hadaegh, F. Y.; Bekey, G. A.

    1987-01-01

    Paper introduces concept of near equivalence in probability between different parameters or mathematical models of physical system. One in series of papers, each establishes different part of rigorous theory of mathematical modeling based on concepts of structural error, identifiability, and equivalence. This installment focuses upon effects of additive structural errors on degree of bias in estimates of parameters.

  9. Error detection method

    DOEpatents

    Olson, Eric J.

    2013-06-11

    An apparatus, program product, and method that run an algorithm on a hardware based processor, generate a hardware error as a result of running the algorithm, generate an algorithm output for the algorithm, compare the algorithm output to another output for the algorithm, and detect the hardware error from the comparison. The algorithm is designed to cause the hardware based processor to heat to a degree that increases the likelihood of hardware errors to manifest, and the hardware error is observable in the algorithm output. As such, electronic components may be sufficiently heated and/or sufficiently stressed to create better conditions for generating hardware errors, and the output of the algorithm may be compared at the end of the run to detect a hardware error that occurred anywhere during the run that may otherwise not be detected by traditional methodologies (e.g., due to cooling, insufficient heat and/or stress, etc.).
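
    A minimal sketch of the compare-outputs idea, assuming a known-good reference digest obtained on trusted hardware; the simple Python loop below merely stands in for the patent's purpose-built, heat-generating algorithm.

```python
import hashlib

def stress_run(n_iters=200_000, seed=12345):
    """Deterministic, compute-heavy loop meant to load (and heat) the processor.

    Any hardware-induced bit flip during the run perturbs the running value and
    therefore the final digest.
    """
    x = seed
    for i in range(n_iters):
        x = (x * 6364136223846793005 + 1442695040888963407) & (2**64 - 1)
        x ^= (x >> 17) ^ i
    return hashlib.sha256(str(x).encode()).hexdigest()

def detect_hardware_error(reference_digest, n_iters=200_000):
    """Re-run the algorithm and compare against a known-good reference output."""
    return stress_run(n_iters) != reference_digest

reference = stress_run()   # assumed to come from trusted hardware
print("hardware error detected:", detect_hardware_error(reference))
```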

  10. The Error in Total Error Reduction

    PubMed Central

    Witnauer, James E.; Urcelay, Gonzalo P.; Miller, Ralph R.

    2013-01-01

    Most models of human and animal learning assume that learning is proportional to the discrepancy between a delivered outcome and the outcome predicted by all cues present during that trial (i.e., total error across a stimulus compound). This total error reduction (TER) view has been implemented in connectionist and artificial neural network models to describe the conditions under which weights between units change. Electrophysiological work has revealed that the activity of dopamine neurons is correlated with the total error signal in models of reward learning. Similar neural mechanisms presumably support fear conditioning, human contingency learning, and other types of learning. Using a computational modelling approach, we compared several TER models of associative learning to an alternative model that rejects the TER assumption in favor of local error reduction (LER), which assumes that learning about each cue is proportional to the discrepancy between the delivered outcome and the outcome predicted by that specific cue on that trial. The LER model provided a better fit to the reviewed data than the TER models. Given the superiority of the LER model with the present data sets, acceptance of TER should be tempered. PMID:23891930
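
    To make the contrast concrete, a minimal sketch of the two update rules (a Rescorla-Wagner-style total error rule versus a per-cue local error rule); the cue names, learning rate, and trial counts are arbitrary and not taken from the study.

```python
def ter_update(weights, cues, outcome, lr=0.1):
    """Total error reduction: every present cue learns from the discrepancy
    between the outcome and the *summed* prediction of all present cues."""
    prediction = sum(weights[c] for c in cues)
    for c in cues:
        weights[c] += lr * (outcome - prediction)

def ler_update(weights, cues, outcome, lr=0.1):
    """Local error reduction: each present cue learns from the discrepancy
    between the outcome and *its own* prediction only."""
    for c in cues:
        weights[c] += lr * (outcome - weights[c])

w_ter = {"A": 0.0, "B": 0.0}
w_ler = {"A": 0.0, "B": 0.0}
for _ in range(50):                      # compound AB paired with an outcome of 1
    ter_update(w_ter, ["A", "B"], 1.0)
    ler_update(w_ler, ["A", "B"], 1.0)
print(w_ter)   # weights share the prediction and sum to ~1
print(w_ler)   # each weight converges to ~1 independently
```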

  11. Comparing Absolute Error with Squared Error for Evaluating Empirical Models of Continuous Variables: Compositions, Implications, and Consequences

    NASA Astrophysics Data System (ADS)

    Gao, J.

    2014-12-01

    Reducing modeling error is often a major concern of empirical geophysical models. However, modeling errors can be defined in different ways: When the response variable is continuous, the most commonly used metrics are squared (SQ) and absolute (ABS) errors. For most applications, ABS error is the more natural, but SQ error is mathematically more tractable, so is often used as a substitute with little scientific justification. Existing literature has not thoroughly investigated the implications of using SQ error in place of ABS error, especially not geospatially. This study compares the two metrics through the lens of bias-variance decomposition (BVD). BVD breaks down the expected modeling error of each model evaluation point into bias (systematic error), variance (model sensitivity), and noise (observation instability). It offers a way to probe the composition of various error metrics. I analytically derived the BVD of ABS error and compared it with the well-known SQ error BVD, and found that not only the two metrics measure the characteristics of the probability distributions of modeling errors differently, but also the effects of these characteristics on the overall expected error are different. Most notably, under SQ error all bias, variance, and noise increase expected error, while under ABS error certain parts of the error components reduce expected error. Since manipulating these subtractive terms is a legitimate way to reduce expected modeling error, SQ error can never capture the complete story embedded in ABS error. I then empirically compared the two metrics with a supervised remote sensing model for mapping surface imperviousness. Pair-wise spatially-explicit comparison for each error component showed that SQ error overstates all error components in comparison to ABS error, especially variance-related terms. Hence, substituting ABS error with SQ error makes model performance appear worse than it actually is, and the analyst would more likely accept a
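
    A Monte Carlo sketch of the bias-variance-noise decomposition for squared error, alongside the corresponding mean absolute error, using a deliberately simple polynomial model as a stand-in; all model and noise settings are illustrative, not from the study.

```python
import numpy as np

rng = np.random.default_rng(0)

def true_fn(x):
    return np.sin(x)

def fit_and_predict(x_train, y_train, x0, degree=1):
    """Fit a simple polynomial model and predict at x0 (illustrative model)."""
    coeffs = np.polyfit(x_train, y_train, degree)
    return np.polyval(coeffs, x0)

x0, noise_sd, n_reps = 1.5, 0.3, 2000
preds, sq_errs, abs_errs = [], [], []
for _ in range(n_reps):
    x_train = rng.uniform(0, np.pi, 20)
    y_train = true_fn(x_train) + rng.normal(0, noise_sd, x_train.size)
    y0_obs = true_fn(x0) + rng.normal(0, noise_sd)   # noisy observation at x0
    pred = fit_and_predict(x_train, y_train, x0)
    preds.append(pred)
    sq_errs.append((y0_obs - pred) ** 2)
    abs_errs.append(abs(y0_obs - pred))

preds = np.array(preds)
bias2 = (preds.mean() - true_fn(x0)) ** 2       # systematic error squared
variance = preds.var()                          # model sensitivity
noise = noise_sd ** 2                           # observation instability
print("expected SQ error:", np.mean(sq_errs))
print("bias^2 + variance + noise:", bias2 + variance + noise)
print("expected ABS error (no additive split):", np.mean(abs_errs))
```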

  12. Food additives

    MedlinePlus

    Food additives are substances that become part of a food product when they are added during the processing or making of that food. "Direct" food additives are often added during processing to: Add nutrients ...

  13. Error coding simulations

    NASA Technical Reports Server (NTRS)

    Noble, Viveca K.

    1993-01-01

    There are various elements such as radio frequency interference (RFI) which may induce errors in data being transmitted via a satellite communication link. When a transmission is affected by interference or other error-causing elements, the transmitted data becomes indecipherable. It becomes necessary to implement techniques to recover from these disturbances. The objective of this research is to develop software which simulates error control circuits and evaluate the performance of these modules in various bit error rate environments. The results of the evaluation provide the engineer with information which helps determine the optimal error control scheme. The Consultative Committee for Space Data Systems (CCSDS) recommends the use of Reed-Solomon (RS) and convolutional encoders and Viterbi and RS decoders for error correction. The use of forward error correction techniques greatly reduces the received signal to noise needed for a certain desired bit error rate. The use of concatenated coding, e.g. inner convolutional code and outer RS code, provides even greater coding gain. The 16-bit cyclic redundancy check (CRC) code is recommended by CCSDS for error detection.
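
    Of the codes mentioned, the 16-bit CRC used for error detection is simple enough to sketch. The polynomial (0x1021) and initial value (0xFFFF) below are the common CRC-CCITT defaults, assumed here rather than taken from the CCSDS documents, and the frame contents are invented for the example.

```python
def crc16_ccitt(data: bytes, poly=0x1021, init=0xFFFF) -> int:
    """Bitwise 16-bit CRC for error detection (it flags, but cannot correct, errors)."""
    crc = init
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            crc = ((crc << 1) ^ poly) & 0xFFFF if crc & 0x8000 else (crc << 1) & 0xFFFF
    return crc

frame = b"telemetry packet"
check = crc16_ccitt(frame)
corrupted = b"telemetry pocket"                 # single corrupted byte
print(hex(check))
print("clean frame passes:", crc16_ccitt(frame) == check)
print("corrupted frame passes:", crc16_ccitt(corrupted) == check)
```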

  14. Model Error Budgets

    NASA Technical Reports Server (NTRS)

    Briggs, Hugh C.

    2008-01-01

    An error budget is a commonly used tool in the design of complex aerospace systems. It represents system performance requirements in terms of allowable errors and flows these down through a hierarchical structure to lower assemblies and components. The requirements may simply be 'allocated' based upon heuristics or experience, or they may be designed through use of physics-based models. This paper presents a basis for developing an error budget for models of the system, as opposed to the system itself. The need for model error budgets arises when system models are a principal design agent, as is increasingly common for poorly testable high performance space systems.
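
    A minimal sketch of one common way such a budget is flowed down, combining independent allocations by root-sum-square; the budget entries and numbers below are purely illustrative and not from the paper.

```python
import math

def rss(allocations):
    """Root-sum-square combination of independent error allocations."""
    return math.sqrt(sum(a ** 2 for a in allocations))

# Hypothetical flow-down of a top-level pointing requirement to subsystem
# allocations (all names and values are illustrative).
budget = {
    "thermal distortion":     1.2,   # arcsec
    "sensor noise":           0.8,
    "actuator quantization":  0.5,
    "model error allocation": 0.6,   # reserve held for modeling uncertainty
}
top_level_requirement = 2.0          # arcsec

combined = rss(budget.values())
print(f"combined error = {combined:.2f} arcsec (requirement {top_level_requirement})")
print("margin satisfied:", combined <= top_level_requirement)
```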

  15. Error coding simulations

    NASA Astrophysics Data System (ADS)

    Noble, Viveca K.

    1993-11-01

    There are various elements such as radio frequency interference (RFI) which may induce errors in data being transmitted via a satellite communication link. When a transmission is affected by interference or other error-causing elements, the transmitted data becomes indecipherable. It becomes necessary to implement techniques to recover from these disturbances. The objective of this research is to develop software which simulates error control circuits and evaluate the performance of these modules in various bit error rate environments. The results of the evaluation provide the engineer with information which helps determine the optimal error control scheme. The Consultative Committee for Space Data Systems (CCSDS) recommends the use of Reed-Solomon (RS) and convolutional encoders and Viterbi and RS decoders for error correction. The use of forward error correction techniques greatly reduces the received signal to noise needed for a certain desired bit error rate. The use of concatenated coding, e.g. inner convolutional code and outer RS code, provides even greater coding gain. The 16-bit cyclic redundancy check (CRC) code is recommended by CCSDS for error detection.

  16. QUANTIFIERS UNDONE: REVERSING PREDICTABLE SPEECH ERRORS IN COMPREHENSION

    PubMed Central

    Frazier, Lyn; Clifton, Charles

    2015-01-01

    Speakers predictably make errors during spontaneous speech. Listeners may identify such errors and repair the input, or their analysis of the input, accordingly. Two written questionnaire studies investigated error compensation mechanisms in sentences with doubled quantifiers such as Many students often turn in their assignments late. Results show a considerable number of undoubled interpretations for all items tested (though fewer for sentences containing doubled negation than for sentences containing many-often, every-always or few-seldom.) This evidence shows that the compositional form-meaning pairing supplied by the grammar is not the only systematic mapping between form and meaning. Implicit knowledge of the workings of the performance systems provides an additional mechanism for pairing sentence form and meaning. Alternate accounts of the data based on either a concord interpretation or an emphatic interpretation of the doubled quantifier don’t explain why listeners fail to apprehend the ‘extra meaning’ added by the potentially redundant material only in limited circumstances. PMID:26478637

  17. A Fully Bayesian Approach to Improved Calibration and Prediction of Groundwater Models With Structure Error

    NASA Astrophysics Data System (ADS)

    Xu, T.; Valocchi, A. J.

    2014-12-01

    Effective water resource management typically relies on numerical models to analyse groundwater flow and solute transport processes. These models are usually subject to model structure error due to simplification and/or misrepresentation of the real system. As a result, the model outputs may systematically deviate from measurements, thus violating a key assumption for traditional regression-based calibration and uncertainty analysis. On the other hand, model structure error induced bias can be described statistically in an inductive, data-driven way based on historical model-to-measurement misfit. We adopt a fully Bayesian approach that integrates a Gaussian process error model into the calibration, prediction and uncertainty analysis of groundwater models in order to account for model structure error. The posterior distributions of parameters of the groundwater model and the Gaussian process error model are jointly inferred using DREAM, an efficient Markov chain Monte Carlo sampler. We test the usefulness of the fully Bayesian approach on a synthetic case study of surface-ground water interaction under changing pumping conditions. We first illustrate through this example that traditional least squares regression without accounting for model structure error yields biased parameter estimates due to parameter compensation as well as biased predictions. In contrast, the Bayesian approach gives less biased parameter estimates. Moreover, the integration of a Gaussian process error model significantly reduces predictive bias and leads to prediction intervals that are more consistent with observations. The results highlight the importance of explicit treatment of model structure error especially in circumstances where subsequent decision-making and risk analysis require accurate prediction and uncertainty quantification. In addition, the data-driven error modelling approach is capable of extracting more information from observation data than using a groundwater model alone.
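
    The core ingredient of such a scheme is a likelihood in which the model-to-measurement residual is explained by a Gaussian-process bias term plus observation noise. The sketch below shows only that ingredient with a toy, deliberately misspecified simulator; it is not the authors' groundwater model or the DREAM sampler, and all names and values are assumptions.

```python
import numpy as np

def sq_exp_cov(x, sigma_b, ell):
    """Squared-exponential covariance for the Gaussian-process bias term."""
    d = x[:, None] - x[None, :]
    return sigma_b ** 2 * np.exp(-0.5 * (d / ell) ** 2)

def log_likelihood(theta, gp_params, x_obs, y_obs, simulator, noise_sd=0.05):
    """Gaussian likelihood of observations = simulator(theta) + GP bias + noise.

    `simulator` stands in for the process model; gp_params = (sigma_b, ell) are
    the bias-GP hyperparameters that would be inferred jointly with theta.
    """
    sigma_b, ell = gp_params
    mu = simulator(x_obs, theta)
    cov = sq_exp_cov(x_obs, sigma_b, ell) + noise_sd ** 2 * np.eye(x_obs.size)
    resid = y_obs - mu
    _, logdet = np.linalg.slogdet(cov)
    return -0.5 * (resid @ np.linalg.solve(cov, resid)
                   + logdet + x_obs.size * np.log(2 * np.pi))

# Toy usage: a deliberately misspecified linear simulator of a quadratic truth,
# so the GP bias term has something systematic to absorb.
simulator = lambda x, theta: theta * x
x = np.linspace(0.0, 1.0, 30)
y = 1.5 * x + 0.8 * x ** 2 + np.random.default_rng(1).normal(0, 0.05, x.size)
print(log_likelihood(1.5, (0.5, 0.3), x, y, simulator))
```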

  18. Food additives

    PubMed Central

    Spencer, Michael

    1974-01-01

    Food additives are discussed from the food technology point of view. The reasons for their use are summarized: (1) to protect food from chemical and microbiological attack; (2) to even out seasonal supplies; (3) to improve their eating quality; (4) to improve their nutritional value. The various types of food additives are considered, e.g. colours, flavours, emulsifiers, bread and flour additives, preservatives, and nutritional additives. The paper concludes with consideration of those circumstances in which the use of additives is (a) justified and (b) unjustified. PMID:4467857

  19. Analysis of Medication Error Reports

    SciTech Connect

    Whitney, Paul D.; Young, Jonathan; Santell, John; Hicks, Rodney; Posse, Christian; Fecht, Barbara A.

    2004-11-15

    In medicine, as in many areas of research, technological innovation and the shift from paper based information to electronic records has created a climate of ever increasing availability of raw data. There has been, however, a corresponding lag in our abilities to analyze this overwhelming mass of data, and classic forms of statistical analysis may not allow researchers to interact with data in the most productive way. This is true in the emerging area of patient safety improvement. Traditionally, a majority of the analysis of error and incident reports has been carried out based on an approach of data comparison, and starts with a specific question which needs to be answered. Newer data analysis tools have been developed which allow the researcher to not only ask specific questions but also to “mine” data: approach an area of interest without preconceived questions, and explore the information dynamically, allowing questions to be formulated based on patterns brought up by the data itself. Since 1991, United States Pharmacopeia (USP) has been collecting data on medication errors through voluntary reporting programs. USP’s MEDMARXsm reporting program is the largest national medication error database and currently contains well over 600,000 records. Traditionally, USP has conducted an annual quantitative analysis of data derived from “pick-lists” (i.e., items selected from a list of items) without an in-depth analysis of free-text fields. In this paper, the application of text analysis and data analysis tools used by Battelle to analyze the medication error reports already analyzed in the traditional way by USP is described. New insights and findings were revealed including the value of language normalization and the distribution of error incidents by day of the week. The motivation for this effort is to gain additional insight into the nature of medication errors to support improvements in medication safety.

  20. Quantum Computation with Phase Drift Errors

    NASA Astrophysics Data System (ADS)

    Miquel, César; Paz, Juan Pablo; Zurek, Wojciech Hubert

    1997-05-01

    We numerically simulate the evolution of an ion trap quantum computer made out of 18 ions subject to a sequence of nearly 15 000 laser pulses in order to find the prime factors of N = 15. We analyze the effect of random and systematic phase drift errors arising from inaccuracies in the laser pulses which induce over (under) rotation of the quantum state. Simple analytic estimates of the tolerance for the quality of driving pulses are presented. We examine the use of watchdog stabilization to partially correct phase drift errors concluding that, in the regime investigated, it is rather inefficient.
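
    A single-qubit toy version of the over/under-rotation idea: nominal phase rotations are perturbed by systematic and random drift and the resulting state is compared with the ideal one. This is not the 18-ion factoring simulation; the pulse angles and error magnitudes are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def rz(theta):
    """Single-qubit phase rotation about the z axis."""
    return np.array([[np.exp(-1j * theta / 2), 0.0],
                     [0.0, np.exp(1j * theta / 2)]])

def run_sequence(thetas, systematic=0.0, random_sd=0.0):
    """Apply a pulse train with over/under-rotation: each nominal angle theta
    becomes theta * (1 + systematic) plus Gaussian jitter."""
    psi = np.array([1.0, 1.0], dtype=complex) / np.sqrt(2)   # |+> state
    for th in thetas:
        actual = th * (1.0 + systematic) + rng.normal(0.0, random_sd)
        psi = rz(actual) @ psi
    return psi

thetas = rng.uniform(0, np.pi / 4, 1000)          # stand-in for a long pulse sequence
ideal = run_sequence(thetas)
drifted = run_sequence(thetas, systematic=1e-3, random_sd=1e-3)
fidelity = abs(np.vdot(ideal, drifted)) ** 2
print(f"state fidelity after 1000 pulses: {fidelity:.4f}")
```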

  1. Everyday Scale Errors

    ERIC Educational Resources Information Center

    Ware, Elizabeth A.; Uttal, David H.; DeLoache, Judy S.

    2010-01-01

    Young children occasionally make "scale errors"--they attempt to fit their bodies into extremely small objects or attempt to fit a larger object into another, tiny, object. For example, a child might try to sit in a dollhouse-sized chair or try to stuff a large doll into it. Scale error research was originally motivated by parents' and…

  2. Medical error and disclosure.

    PubMed

    White, Andrew A; Gallagher, Thomas H

    2013-01-01

    Errors occur commonly in healthcare and can cause significant harm to patients. Most errors arise from a combination of individual, system, and communication failures. Neurologists may be involved in harmful errors in any practice setting and should familiarize themselves with tools to prevent, report, and examine errors. Although physicians, patients, and ethicists endorse candid disclosure of harmful medical errors to patients, many physicians express uncertainty about how to approach these conversations. A growing body of research indicates physicians often fail to meet patient expectations for timely and open disclosure. Patients desire information about the error, an apology, and a plan for preventing recurrence of the error. To meet these expectations, physicians should participate in event investigations and plan thoroughly for each disclosure conversation, preferably with a disclosure coach. Physicians should also anticipate and attend to the ongoing medical and emotional needs of the patient. A cultural change towards greater transparency following medical errors is in motion. Substantial progress is still required, but neurologists can further this movement by promoting policies and environments conducive to open reporting, respectful disclosure to patients, and support for the healthcare workers involved. PMID:24182370

  3. Errors Disrupt Subsequent Early Attentional Processes

    PubMed Central

    Van der Borght, Liesbet; Schevernels, Hanne; Burle, Boris; Notebaert, Wim

    2016-01-01

    It has been demonstrated that target detection is impaired following an error in an unrelated flanker task. These findings support the idea that the occurrence or processing of unexpected error-like events interferes with subsequent information processing. In the present study, we investigated the effect of errors on early visual ERP components. We therefore combined a flanker task and a visual discrimination task. Additionally, the intertrial interval between both tasks was manipulated in order to investigate the duration of these negative after-effects. The results of the visual discrimination task indicated that the amplitude of the N1 component, which is related to endogenous attention, was significantly decreased following an error, irrespective of the intertrial interval. Additionally, P3 amplitude was attenuated after an erroneous trial, but only in the long-interval condition. These results indicate that low-level attentional processes are impaired after errors. PMID:27050303

  4. The characteristics of key analysis errors

    NASA Astrophysics Data System (ADS)

    Caron, Jean-Francois

    This thesis investigates the characteristics of the corrections to the initial state of the atmosphere. The technique employed is the key analysis error algorithm, recently developed to estimate the initial state errors responsible for poor short-range to medium-range numerical weather prediction (NWP) forecasts. The main goal of this work is to determine to which extent the initial corrections obtained with this method can be associated with analysis errors. A secondary goal is to understand their dynamics in improving the forecast. In the first part of the thesis, we examine the realism of the initial corrections obtained from the key analysis error algorithm in terms of dynamical balance and closeness to the observations. The result showed that the initial corrections are strongly out of balance and systematically increase the departure between the control analysis and the observations suggesting that the key analysis error algorithm produced initial corrections that represent more than analysis errors. Significant artificial correction to the initial state seems to be present. The second part of this work examines a few approaches to isolate the balanced component of the initial corrections from the key analysis error method. The best results were obtained with the nonlinear balance potential vorticity (PV) inversion technique. The removal of the imbalance part of the initial corrections makes the corrected analysis slightly closer to the observations, but remains systematically further away as compared to the control analysis. Thus the balanced part of the key analysis errors cannot justifiably be associated with analysis errors. In light of the results presented, some recommendations to improve the key analysis error algorithm were proposed. In the third and last part of the thesis, a diagnosis of the evolution of the initial corrections from the key analysis error method is presented using a PV approach. The initial corrections tend to grow rapidly in time

  5. Error analysis and data reduction for interferometric surface measurements

    NASA Astrophysics Data System (ADS)

    Zhou, Ping

    High-precision optical systems are generally tested using interferometry, since it often is the only way to achieve the desired measurement precision and accuracy. Interferometers can generally measure a surface to an accuracy of one hundredth of a wave. In order to achieve an accuracy to the next order of magnitude, one thousandth of a wave, each error source in the measurement must be characterized and calibrated. Errors in interferometric measurements are classified into random errors and systematic errors. An approach to estimate random errors in the measurement is provided, based on the variation in the data. Systematic errors, such as retrace error, imaging distortion, and error due to diffraction effects, are also studied in this dissertation. Methods to estimate the first order geometric error and errors due to diffraction effects are presented. Interferometer phase modulation transfer function (MTF) is another intrinsic error. The phase MTF of an infrared interferometer is measured with a phase Siemens star, and a Wiener filter is designed to recover the middle spatial frequency information. Map registration is required when there are two maps tested in different systems and one of these two maps needs to be subtracted from the other. Incorrect mapping causes wavefront errors. A smoothing filter method is presented which can reduce the sensitivity to registration error and improve the overall measurement accuracy. Interferometric optical testing with computer-generated holograms (CGH) is widely used for measuring aspheric surfaces. The accuracy of the drawn pattern on a hologram decides the accuracy of the measurement. Uncertainties in the CGH manufacturing process introduce errors in holograms and then the generated wavefront. An optimal design of the CGH is provided which can reduce the sensitivity to fabrication errors and give good diffraction efficiency for both chrome-on-glass and phase etched CGHs.
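
    For the random-error part, one approach consistent with the description above is to repeat the measurement and use the pixel-wise spread; the minimal sketch below (synthetic data, illustrative values) also shows why averaging leaves a systematic offset untouched.

```python
import numpy as np

def split_random_systematic(maps):
    """Split repeated surface measurements into repeatable and random parts.

    maps : array-like of shape (n_repeats, ny, nx).
    The mean map keeps the repeatable content (surface plus systematic errors);
    the pixel-wise standard deviation estimates the random measurement error.
    """
    maps = np.asarray(maps)
    mean_map = maps.mean(axis=0)
    random_error_map = maps.std(axis=0, ddof=1)
    return mean_map, random_error_map

rng = np.random.default_rng(2)
truth = np.outer(np.hanning(64), np.hanning(64))        # stand-in surface
systematic_offset = 0.02                                 # e.g. a retrace-like bias
measurements = [truth + systematic_offset + rng.normal(0, 0.005, truth.shape)
                for _ in range(16)]

mean_map, random_map = split_random_systematic(measurements)
print("estimated random error (mean of pixel-wise std):", random_map.mean())
print("residual bias left after averaging:", (mean_map - truth).mean())
```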

  6. Dinosaur Systematics

    NASA Astrophysics Data System (ADS)

    Carpenter, Kenneth; Currie, Philip J.

    1992-07-01

    In recent years dinosaurs have captured the attention of the public at an unprecedented level. At the heart of this resurgence in popular interest is an increased level of research activity, much of which is innovative in the field of paleontology. For instance, whereas earlier paleontological studies emphasized basic morphologic description and taxonomic classification, modern studies attempt to examine the role and nature of dinosaurs as living animals. More than ever before, we understand how these extinct species functioned, behaved, interacted with each other and the environment, and evolved. Nevertheless, these studies rely on certain basic building blocks of knowledge, including facts about dinosaur anatomy and taxonomic relationships. One of the purposes of this volume is to unravel some of the problems surrounding dinosaur systematics and to increase our understanding of dinosaurs as a biological species. Dinosaur Systematics presents a current overview of dinosaur systematics using various examples to explore what is a species in a dinosaur, what separates genders in dinosaurs, what morphological changes occur with maturation of a species, and what morphological variations occur within a species.

  7. Simulation of systematic errors in the SLC magnets

    SciTech Connect

    Jaeger, J.

    1983-08-08

    The distance (iron to iron) between a focusing and a defocusing magnet in the SLC-arcs is 6.7056 cm and the iron length of each of them is 2.52914 m. To represent these magnets by a hard-edge model in computer codes TRANSPORT or TURTLE the magnetic length rather than the core length of these magnets is of interest. In the present lattice the magnetic length for the field and the gradient of each of these magnets is assumed to be 2.5462 m.

  8. Uncorrected refractive errors

    PubMed Central

    Naidoo, Kovin S; Jaggernath, Jyoti

    2012-01-01

    Global estimates indicate that more than 2.3 billion people in the world suffer from poor vision due to refractive error, of which 670 million people are considered visually impaired because they do not have access to corrective treatment. Refractive errors, if uncorrected, result in an impaired quality of life for millions of people worldwide, irrespective of their age, sex and ethnicity. Over the past decade, a series of studies using a survey methodology, referred to as Refractive Error Study in Children (RESC), were performed in populations with different ethnic origins and cultural settings. These studies confirmed that the prevalence of uncorrected refractive errors is considerably high for children in low-and-middle-income countries. Furthermore, uncorrected refractive error has been noted to have extensive social and economic impacts, such as limiting educational and employment opportunities of economically active persons, healthy individuals and communities. The key public health challenges presented by uncorrected refractive errors, the leading cause of vision impairment across the world, require urgent attention. To address these issues, it is critical to focus on the development of human resources and sustainable methods of service delivery. This paper discusses three core pillars to addressing the challenges posed by uncorrected refractive errors: Human Resource (HR) Development, Service Development and Social Entrepreneurship. PMID:22944755

  9. Uncorrected refractive errors.

    PubMed

    Naidoo, Kovin S; Jaggernath, Jyoti

    2012-01-01

    Global estimates indicate that more than 2.3 billion people in the world suffer from poor vision due to refractive error, of which 670 million people are considered visually impaired because they do not have access to corrective treatment. Refractive errors, if uncorrected, result in an impaired quality of life for millions of people worldwide, irrespective of their age, sex and ethnicity. Over the past decade, a series of studies using a survey methodology, referred to as Refractive Error Study in Children (RESC), were performed in populations with different ethnic origins and cultural settings. These studies confirmed that the prevalence of uncorrected refractive errors is considerably high for children in low-and-middle-income countries. Furthermore, uncorrected refractive error has been noted to have extensive social and economic impacts, such as limiting educational and employment opportunities of economically active persons, healthy individuals and communities. The key public health challenges presented by uncorrected refractive errors, the leading cause of vision impairment across the world, require urgent attention. To address these issues, it is critical to focus on the development of human resources and sustainable methods of service delivery. This paper discusses three core pillars to addressing the challenges posed by uncorrected refractive errors: Human Resource (HR) Development, Service Development and Social Entrepreneurship. PMID:22944755

  10. Systematic approach for simultaneously correcting the band-gap and p-d separation errors of common cation III-V or II-VI binaries in density functional theory calculations within a local density approximation

    SciTech Connect

    Wang, Jianwei; Zhang, Yong; Wang, Lin-Wang

    2015-07-31

    We propose a systematic approach that can empirically correct three major errors typically found in a density functional theory (DFT) calculation within the local density approximation (LDA) simultaneously for a set of common cation binary semiconductors, such as III-V compounds, (Ga or In)X with X = N,P,As,Sb, and II-VI compounds, (Zn or Cd)X, with X = O,S,Se,Te. By correcting (1) the binary band gaps at the high-symmetry points Γ, L, X, (2) the separation of p- and d-orbital-derived valence bands, and (3) conduction band effective masses to experimental values and doing so simultaneously for common cation binaries, the resulting DFT-LDA-based quasi-first-principles method can be used to predict the electronic structure of complex materials involving multiple binaries with comparable accuracy but much less computational cost than a GW level theory. This approach provides an efficient way to evaluate the electronic structures and other material properties of complex systems, much needed for material discovery and design.

  11. Subaperture test of wavefront error of large telescopes: error sources and stitching performance simulations

    NASA Astrophysics Data System (ADS)

    Chen, Shanyong; Li, Shengyi; Wang, Guilin

    2014-11-01

    The wavefront error of large telescopes must be measured to check the system quality and also to estimate the misalignment of the telescope optics, including the primary, the secondary and so on. It is usually realized by a focal plane interferometer and an autocollimator flat (ACF) of the same aperture as the telescope. However, it is challenging for meter class telescopes due to high cost and technological challenges in producing the large ACF. Subaperture test with a smaller ACF is hence proposed in combination with advanced stitching algorithms. Major error sources include the surface error of the ACF, misalignment of the ACF and measurement noise. Different error sources have different impacts on the wavefront error. Basically, the surface error of the ACF behaves like a systematic error, and the astigmatism will accumulate and be enlarged if the azimuth of subapertures remains fixed. It is difficult to accurately calibrate the ACF because it suffers considerable deformation induced by gravity or mechanical clamping force. Therefore a self-calibrated stitching algorithm is employed to separate the ACF surface error from the subaperture wavefront error. We suggest the ACF be rotated around the optical axis of the telescope for the subaperture test. The algorithm is also able to correct the subaperture tip-tilt based on the overlapping consistency. Since all subaperture measurements are obtained in the same imaging plane, lateral shift of the subapertures is always known and the real overlapping points can be recognized in this plane. Therefore lateral positioning error of subapertures has no impact on the stitched wavefront. In contrast, the angular positioning error changes the azimuth of the ACF and finally changes the systematic error. We propose an angularly uneven layout of subapertures to minimize the stitching error, which is contrary to common intuition. Finally, measurement noise can never be corrected but can be suppressed by means of averaging and

  12. Insulin use: preventable errors.

    PubMed

    2014-01-01

    Insulin is vital for patients with type 1 diabetes and useful for certain patients with type 2 diabetes. The serious consequences of insulin-related medication errors are overdose, resulting in severe hypoglycaemia, causing seizures, coma and even death; or underdose, resulting in hyperglycaemia and sometimes ketoacidosis. Errors associated with the preparation and administration of insulin are often reported, both outside and inside the hospital setting. These errors are preventable. By analysing reports from organisations devoted to medication error prevention and from poison control centres, as well as a few studies and detailed case reports of medication errors, various types of error associated with insulin use have been identified, especially in the hospital setting. Generally, patients know more about the practicalities of their insulin treatment than healthcare professionals with intermittent involvement. Medication errors involving insulin can occur at each step of the medication-use process: prescribing, data entry, preparation, dispensing and administration. When prescribing insulin, wrong-dose errors have been caused by the use of abbreviations, especially "U" instead of the word "units" (often resulting in a 10-fold overdose because the "U" is read as a zero), or by failing to write the drug's name correctly or in full. In electronic prescribing, the sheer number of insulin products is a source of confusion and, ultimately, wrong-dose errors, and often overdose. Prescribing, dispensing or administration software is rarely compatible with insulin prescriptions in which the dose is adjusted on the basis of the patient's subsequent capillary blood glucose readings, and can therefore generate errors. When preparing and dispensing insulin, a tuberculin syringe is sometimes used instead of an insulin syringe, leading to overdose. Other errors arise from confusion created by similar packaging, between different insulin products or between insulin and other

  13. Error Prevention Aid

    NASA Technical Reports Server (NTRS)

    1987-01-01

    In a complex computer environment there is ample opportunity for error, a mistake by a programmer, or a software-induced undesirable side effect. In insurance, errors can cost a company heavily, so protection against inadvertent change is a must for the efficient firm. The data processing center at Transport Life Insurance Company has taken a step to guard against accidental changes by adopting a software package called EQNINT (Equations Interpreter Program). EQNINT cross checks the basic formulas in a program against the formulas that make up the major production system. EQNINT assures that formulas are coded correctly and helps catch errors before they affect the customer service or its profitability.

  14. The Crucial Role of Error Correlation for Uncertainty Modeling of CFD-Based Aerodynamics Increments

    NASA Technical Reports Server (NTRS)

    Hemsch, Michael J.; Walker, Eric L.

    2011-01-01

    The Ares I ascent aerodynamics database for Design Cycle 3 (DAC-3) was built from wind-tunnel test results and CFD solutions. The wind tunnel results were used to build the baseline response surfaces for wind-tunnel Reynolds numbers at power-off conditions. The CFD solutions were used to build increments to account for Reynolds number effects. We calculate the validation errors for the primary CFD code results at wind tunnel Reynolds number power-off conditions and would like to be able to use those errors to predict the validation errors for the CFD increments. However, the validation errors are large compared to the increments. We suggest a way forward that is consistent with common practice in wind tunnel testing which is to assume that systematic errors in the measurement process and/or the environment will subtract out when increments are calculated, thus making increments more reliable with smaller uncertainty than absolute values of the aerodynamic coefficients. A similar practice has arisen for the use of CFD to generate aerodynamic database increments. The basis of this practice is the assumption of strong correlation of the systematic errors inherent in each of the results used to generate an increment. The assumption of strong correlation is the inferential link between the observed validation uncertainties at wind-tunnel Reynolds numbers and the uncertainties to be predicted for flight. In this paper, we suggest a way to estimate the correlation coefficient and demonstrate the approach using code-to-code differences that were obtained for quality control purposes during the Ares I CFD campaign. Finally, since we can expect the increments to be relatively small compared to the baseline response surface and to be typically of the order of the baseline uncertainty, we find that it is necessary to be able to show that the correlation coefficients are close to unity to avoid overinflating the overall database uncertainty with the addition of the increments.
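
    The assumption of strongly correlated systematic errors can be made concrete with the usual variance formula for a difference of two correlated quantities; a small sketch follows, in which all numbers and the synthetic paired errors are hypothetical.

```python
import numpy as np

def increment_uncertainty(sigma1, sigma2, rho):
    """Standard deviation of an increment D = C1 - C2 when the two underlying
    results carry correlated errors with correlation coefficient rho."""
    return np.sqrt(sigma1 ** 2 + sigma2 ** 2 - 2 * rho * sigma1 * sigma2)

def estimate_rho(errors1, errors2):
    """Estimate the error correlation from paired results, e.g. code-to-code
    differences collected for quality control (illustrative estimator)."""
    return np.corrcoef(errors1, errors2)[0, 1]

sigma = 0.05          # hypothetical validation uncertainty of each solution
for rho in (0.0, 0.9, 0.99):
    print(f"rho = {rho:4.2f} -> increment sigma = {increment_uncertainty(sigma, sigma, rho):.4f}")

# Paired synthetic 'errors' from two solutions that share most of their systematic error:
rng = np.random.default_rng(3)
shared = rng.normal(0, 0.045, 200)
e1 = shared + rng.normal(0, 0.01, 200)
e2 = shared + rng.normal(0, 0.01, 200)
print("estimated rho:", estimate_rho(e1, e2))   # close to 1, so the increment is reliable
```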

  15. Classification and reduction of pilot error

    NASA Technical Reports Server (NTRS)

    Rogers, W. H.; Logan, A. L.; Boley, G. D.

    1989-01-01

    Human error is a primary or contributing factor in about two-thirds of commercial aviation accidents worldwide. With the ultimate goal of reducing pilot error accidents, this contract effort is aimed at understanding the factors underlying error events and reducing the probability of certain types of errors by modifying underlying factors such as flight deck design and procedures. A review of the literature relevant to error classification was conducted. Classification includes categorizing types of errors, the information processing mechanisms and factors underlying them, and identifying factor-mechanism-error relationships. The classification scheme developed by Jens Rasmussen was adopted because it provided a comprehensive yet basic error classification shell or structure that could easily accommodate addition of details on domain-specific factors. For these purposes, factors specific to the aviation environment were incorporated. Hypotheses concerning the relationship of a small number of underlying factors, information processing mechanisms, and error types identified in the classification scheme were formulated. ASRS data were reviewed and a simulation experiment was performed to evaluate and quantify the hypotheses.

  16. Facts about Refractive Errors

    MedlinePlus

    ... the lens can cause refractive errors. What is refraction? Refraction is the bending of light as it passes ... rays entering the eye, causing a more precise refraction or focus. In many cases, contact lenses provide ...

  17. Errors in prenatal diagnosis.

    PubMed

    Anumba, Dilly O C

    2013-08-01

    Prenatal screening and diagnosis are integral to antenatal care worldwide. Prospective parents are offered screening for common fetal chromosomal and structural congenital malformations. In most developed countries, prenatal screening is routinely offered in a package that includes ultrasound scan of the fetus and the assay in maternal blood of biochemical markers of aneuploidy. Mistakes can arise at any point of the care pathway for fetal screening and diagnosis, and may involve individual or corporate systemic or latent errors. Special clinical circumstances, such as maternal size, fetal position, and multiple pregnancy, contribute to the complexities of prenatal diagnosis and to the chance of error. Clinical interventions may lead to adverse outcomes not caused by operator error. In this review I discuss the scope of the errors in prenatal diagnosis, and highlight strategies for their prevention and diagnosis, as well as identify areas for further research and study to enhance patient safety. PMID:23725900

  18. Error mode prediction.

    PubMed

    Hollnagel, E; Kaarstad, M; Lee, H C

    1999-11-01

    The study of accidents ('human errors') has been dominated by efforts to develop 'error' taxonomies and 'error' models that enable the retrospective identification of likely causes. In the field of Human Reliability Analysis (HRA) there is, however, a significant practical need for methods that can predict the occurrence of erroneous actions--qualitatively and quantitatively. The present experiment tested an approach for qualitative performance prediction based on the Cognitive Reliability and Error Analysis Method (CREAM). Predictions of possible erroneous actions were made for operators using different types of alarm systems. The data were collected as part of a large-scale experiment using professional nuclear power plant operators in a full scope simulator. The analysis showed that the predictions were correct in more than 70% of the cases, and also that the coverage of the predictions depended critically on the comprehensiveness of the preceding task analysis. PMID:10582035

  19. Pronominal Case-Errors

    ERIC Educational Resources Information Center

    Kaper, Willem

    1976-01-01

    Contradicts a previous assertion by C. Tanz that children commit substitution errors usually using objective pronoun forms for nominative ones. Examples from Dutch and German provide evidence that substitutions are made in both directions. (CHK)

  20. Estimating Bias Error Distributions

    NASA Technical Reports Server (NTRS)

    Liu, Tian-Shu; Finley, Tom D.

    2001-01-01

    This paper formulates the general methodology for estimating the bias error distribution of a device in a measuring domain from less accurate measurements when a minimal number of standard values (typically two values) are available. A new perspective is that the bias error distribution can be found as a solution of an intrinsic functional equation in a domain. Based on this theory, the scaling- and translation-based methods for determining the bias error distribution are developed. These methods are virtually applicable to any device as long as the bias error distribution of the device can be sufficiently described by a power series (a polynomial) or a Fourier series in a domain. These methods have been validated through computational simulations and laboratory calibration experiments for a number of different devices.

  1. Slowing after Observed Error Transfers across Tasks

    PubMed Central

    Wang, Lijun; Pan, Weigang; Tan, Jinfeng; Liu, Congcong; Chen, Antao

    2016-01-01

    . Moreover, the PES effect appears across tasksets with distinct stimuli and response rules in the context of observed errors, reflecting a generic process. Additionally, the slowing effect and improved accuracy in the post-observed error trial do not occur together, suggesting that they are independent behavioral adjustments in the context of observed errors. PMID:26934579

  2. Application of advanced shearing techniques to the calibration of autocollimators with small angle generators and investigation of error sources.

    PubMed

    Yandayan, T; Geckeler, R D; Aksulu, M; Akgoz, S A; Ozgur, B

    2016-05-01

    The application of advanced error-separating shearing techniques to the precise calibration of autocollimators with Small Angle Generators (SAGs) was carried out for the first time. The experimental realization was achieved using the High Precision Small Angle Generator (HPSAG) of TUBITAK UME under classical dimensional metrology laboratory environmental conditions. The standard uncertainty value of 5 mas (24.2 nrad) reached by classical calibration method was improved to the level of 1.38 mas (6.7 nrad). Shearing techniques, which offer a unique opportunity to separate the errors of devices without recourse to any external standard, were first adapted by Physikalisch-Technische Bundesanstalt (PTB) to the calibration of autocollimators with angle encoders. It has been demonstrated experimentally in a clean room environment using the primary angle standard of PTB (WMT 220). The application of the technique to a different type of angle measurement system extends the range of the shearing technique further and reveals other advantages. For example, the angular scales of the SAGs are based on linear measurement systems (e.g., capacitive nanosensors for the HPSAG). Therefore, SAGs show different systematic errors when compared to angle encoders. In addition to the error-separation of HPSAG and the autocollimator, detailed investigations on error sources were carried out. Apart from determination of the systematic errors of the capacitive sensor used in the HPSAG, it was also demonstrated that the shearing method enables the unique opportunity to characterize other error sources such as errors due to temperature drift in long term measurements. This proves that the shearing technique is a very powerful method for investigating angle measuring systems, for their improvement, and for specifying precautions to be taken during the measurements. PMID:27250375

  3. Error-Compensated Telescope

    NASA Technical Reports Server (NTRS)

    Meinel, Aden B.; Meinel, Marjorie P.; Stacy, John E.

    1989-01-01

    Proposed reflecting telescope includes large, low-precision primary mirror stage and small, precise correcting mirror. Correcting mirror machined under computer control to compensate for error in primary mirror. Correcting mirror machined by diamond cutting tool. Computer analyzes interferometric measurements of primary mirror to determine shape of surface of correcting mirror needed to compensate for errors in wave front reflected from primary mirror and commands position and movement of cutting tool accordingly.

  4. Dysgraphia in dementia: a systematic investigation of graphemic buffer features in a case series.

    PubMed

    Haslam, Catherine; Kay, Janice; Tree, Jeremy; Baron, Rachel

    2009-08-01

    In this paper we report findings from a systematic investigation of spelling performance in three patients - PR, RH and AC - who despite their different medical diagnoses showed a very consistent pattern of dysgraphia, more typical of graphemic buffer disorder. Systematic investigation of the features characteristic of this disorder in Study 1 confirmed the presence of length effects in spelling, classic errors (i.e., letter substitution, omission, addition, transposition), a bow-shaped curve in the serial position of errors and consistency in substitution of consonants and vowels. However, in addition to this clear pattern of graphemic buffer impairment, evidence of regularity effects and phonologically plausible errors in spelling raised questions about the integrity of the lexical spelling route in each case. A second study was conducted, using a word and non-word immediate delay copy task, in an attempt to minimise the influence of orthographic representations on written output. Persistence of graphemic buffer errors would suggest an additional, independent source of damage. Two patients, PR and AC, took part and in both cases symptoms of graphemic buffer disorder persisted. Together, these findings suggest that damage to the graphemic buffer may be more common than currently suggested in the literature. PMID:19370478

  5. Food additives.

    PubMed

    Berglund, F

    1978-01-01

    The use of additives to food fulfils many purposes, as shown by the index issued by the Codex Committee on Food Additives: Acids, bases and salts; Preservatives, Antioxidants and antioxidant synergists; Anticaking agents; Colours; Emulsifiers; Thickening agents; Flour-treatment agents; Extraction solvents; Carrier solvents; Flavours (synthetic); Flavour enhancers; Non-nutritive sweeteners; Processing aids; Enzyme preparations. Many additives occur naturally in foods, but this does not exclude toxicity at higher levels. Some food additives are nutrients, or even essential nutrients, e.g. NaCl. Examples are known of food additives causing toxicity in man even when used according to regulations, e.g. cobalt in beer. In other instances, poisoning has been due to carry-over, e.g. by nitrate in cheese whey - when used for artificial feed for infants. Poisonings also occur as the result of the permitted substance being added at too high levels, by accident or carelessness, e.g. nitrite in fish. Finally, there are examples of hypersensitivity to food additives, e.g. to tartrazine and other food colours. The toxicological evaluation, based on animal feeding studies, may be complicated by impurities, e.g. orthotoluene-sulfonamide in saccharin; by transformation or disappearance of the additive in food processing or storage, e.g. bisulfite in raisins; by reaction products with food constituents, e.g. formation of ethylurethane from diethyl pyrocarbonate; by metabolic transformation products, e.g. formation in the gut of cyclohexylamine from cyclamate. Metabolic end products may differ in experimental animals and in man: guanylic acid and inosinic acid are metabolized to allantoin in the rat but to uric acid in man. The magnitude of the safety margin in man of the Acceptable Daily Intake (ADI) is not identical to the "safety factor" used when calculating the ADI. The symptoms of Chinese Restaurant Syndrome, although not hazardous, furthermore illustrate that the whole ADI

  6. Thermodynamics of Error Correction

    NASA Astrophysics Data System (ADS)

    Sartori, Pablo; Pigolotti, Simone

    2015-10-01

    Information processing at the molecular scale is limited by thermal fluctuations. This can cause undesired consequences in copying information since thermal noise can lead to errors that can compromise the functionality of the copy. For example, a high error rate during DNA duplication can lead to cell death. Given the importance of accurate copying at the molecular scale, it is fundamental to understand its thermodynamic features. In this paper, we derive a universal expression for the copy error as a function of entropy production and work dissipated by the system during wrong incorporations. Its derivation is based on the second law of thermodynamics; hence, its validity is independent of the details of the molecular machinery, be it any polymerase or artificial copying device. Using this expression, we find that information can be copied in three different regimes. In two of them, work is dissipated to either increase or decrease the error. In the third regime, the protocol extracts work while correcting errors, reminiscent of a Maxwell demon. As a case study, we apply our framework to study a copy protocol assisted by kinetic proofreading, and show that it can operate in any of these three regimes. We finally show that, for any effective proofreading scheme, error reduction is limited by the chemical driving of the proofreading reaction.

  7. Do calculation errors by nurses cause medication errors in clinical practice? A literature review.

    PubMed

    Wright, Kerri

    2010-01-01

    This review aims to examine the literature available to ascertain whether medication errors in clinical practice are the result of nurses' miscalculating drug dosages. The research studies highlighting poor calculation skills of nurses and student nurses have tested these skills using written drug calculation tests in formal classroom settings [Kapborg, I., 1994. Calculation and administration of drug dosage by Swedish nurses, student nurses and physicians. International Journal for Quality in Health Care 6(4): 389-395; Hutton, M., 1998. Nursing Mathematics: the importance of application. Nursing Standard 13(11): 35-38; Weeks, K., Lynne, P., Torrance, C., 2000. Written drug dosage errors made by students: the threat to clinical effectiveness and the need for a new approach. Clinical Effectiveness in Nursing 4, 20-29; Wright, K., 2004. Investigation to find strategies to improve student nurses' maths skills. British Journal of Nursing 13(21): 1280-1287; Wright, K., 2005. An exploration into the most effective way to teach drug calculation skills to nursing students. Nurse Education Today 25, 430-436], but there have been no reviews of the literature on medication errors in practice that specifically look to see whether the medication errors are caused by nurses' poor calculation skills. The databases Medline, CINAHL, British Nursing Index (BNI), Journal of American Medical Association (JAMA) and Archives and Cochrane reviews were searched for research studies or systematic reviews which reported on the incidence or causes of drug errors in clinical practice. In total 33 articles met the criteria for this review. There were no studies that examined nurses' drug calculation errors in practice. As a result, studies and systematic reviews that investigated the types and causes of drug errors were examined to establish whether miscalculations by nurses were the causes of errors. The review found insufficient evidence to suggest that medication errors are caused by nurses' poor

  8. Systematic Characterization of High Mass Accuracy Influence on False Discovery and Probability Scoring in Peptide Mass Fingerprinting

    PubMed Central

    Dodds, Eric D.; Clowers, Brian H.; Hagerman, Paul J.; Lebrilla, Carlito B.

    2009-01-01

    While the bearing of mass measurement error upon protein identification is sometimes underestimated, uncertainty in observed peptide masses unavoidably translates to ambiguity in subsequent protein identifications. While ongoing instrumental advances continue to make high accuracy mass spectrometry (MS) increasingly accessible, many proteomics experiments are still conducted with rather large mass error tolerances. Additionally, the ranking schemes of most protein identification algorithms do not include a meaningful incorporation of mass measurement error. This report provides a critical evaluation of mass error tolerance as it pertains to false positive peptide and protein associations resulting from peptide mass fingerprint (PMF) database searching. High accuracy, high resolution PMFs of several model proteins were obtained using matrix-assisted laser desorption/ionization Fourier transform ion cyclotron resonance mass spectrometry (MALDI-FTICR-MS). Varying levels of mass accuracy were simulated by systematically modulating the mass error tolerance of the PMF query and monitoring the effect on figures of merit indicating the PMF quality. Importantly, the benefits of decreased mass error tolerance are not manifest in Mowse scores when operating at tolerances in the low parts per million range, but become apparent with the consideration of additional metrics that are often overlooked. Furthermore, the outcomes of these experiments support the concept that false discovery is closely tied to mass measurement error in PMF analysis. Clear establishment of this relation demonstrates the need for mass error aware protein identification routines and argues for a more prominent contribution of high accuracy mass measurement to proteomic science. PMID:17980142
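
    The dependence of chance matching on mass tolerance can be illustrated with a toy Monte Carlo (a hedged sketch with randomly generated masses, not the paper's MALDI-FTICR data): a simulated peptide-mass database is queried with a single mass and the number of chance matches is counted as the tolerance tightens from 100 ppm to 1 ppm.

        # Toy Monte Carlo (hypothetical masses, uniform for simplicity): count chance
        # matches of one query mass against a simulated database as the PMF mass
        # error tolerance is tightened.
        import random

        random.seed(0)
        database = sorted(random.uniform(800.0, 3500.0) for _ in range(200000))

        def chance_matches(query_mass, tolerance_ppm, masses):
            """Count database masses within +/- tolerance_ppm of the query mass."""
            window = query_mass * tolerance_ppm * 1e-6
            return sum(query_mass - window <= m <= query_mass + window for m in masses)

        query = 1500.0  # Da, an arbitrary query mass
        for tol in (100, 50, 10, 5, 1):
            print(f"tolerance {tol:3d} ppm -> {chance_matches(query, tol, database)} chance matches")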

  9. Human error in aviation operations

    NASA Technical Reports Server (NTRS)

    Nagel, David C.

    1988-01-01

    The role of human error in commercial and general aviation accidents and the techniques used to evaluate it are reviewed from a human-factors perspective. Topics addressed include the general decline in accidents per million departures since the 1960s, the increase in the proportion of accidents due to human error, methods for studying error, theoretical error models, and the design of error-resistant systems. Consideration is given to information acquisition and processing errors, visually guided flight, disorientation, instrument-assisted guidance, communication errors, decision errors, debiasing, and action errors.

  10. Gross error detection and stage efficiency estimation in a separation process

    SciTech Connect

    Serth, R.W.; Srikanth, B. (Dept. of Chemical and Natural Gas Engineering); Maronga, S.J. (Dept. of Chemical and Process Engineering)

    1993-10-01

    Accurate process models are required for optimization and control in chemical plants and petroleum refineries. These models involve various equipment parameters, such as stage efficiencies in distillation columns, the values of which must be determined by fitting the models to process data. Since the data contain random and systematic measurement errors, some of which may be large (gross errors), they must be reconciled to obtain reliable estimates of equipment parameters. The problem thus involves parameter estimation coupled with gross error detection and data reconciliation. MacDonald and Howat (1988) studied the above problem for a single-stage flash distillation process. Their analysis was based on the definition of stage efficiency due to Hausen, which has some significant disadvantages in this context, as discussed below. In addition, they considered only data sets which contained no gross errors. The purpose of this article is to extend the above work by considering alternative definitions of stage efficiency and efficiency estimation in the presence of gross errors.
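
    As a hedged illustration of the general reconciliation and gross-error machinery (a linear flow-balance toy, not the authors' flash-distillation model), the sketch below reconciles measurements subject to a single balance constraint and applies a chi-square global test to the constraint residual.

        # Linear-constraint toy (not the authors' model): reconcile a splitter mass
        # balance, feed - product1 - product2 = 0, and test the residual for gross errors.
        import numpy as np
        from scipy import stats

        A = np.array([[1.0, -1.0, -1.0]])              # balance constraint A x = 0
        y = np.array([100.0, 64.0, 29.0])              # measured flows
        sigma = np.array([1.0, 1.0, 1.0])              # measurement standard deviations
        S = np.diag(sigma**2)                          # measurement error covariance

        r = A @ y                                      # constraint residual
        V = A @ S @ A.T                                # residual covariance
        chi2 = float(r @ np.linalg.solve(V, r))        # global (chi-square) test statistic
        print("global test:", round(chi2, 2), "critical (95%):", round(stats.chi2.ppf(0.95, A.shape[0]), 2))

        # Weighted least-squares reconciliation: x_hat = y - S A^T (A S A^T)^-1 A y
        x_hat = y - S @ A.T @ np.linalg.solve(V, r)
        print("reconciled flows:", x_hat)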

  11. Errata: Papers in Error Analysis.

    ERIC Educational Resources Information Center

    Svartvik, Jan, Ed.

    Papers presented at the symposium of error analysis in Lund, Sweden, in September 1972, approach error analysis specifically in its relation to foreign language teaching and second language learning. Error analysis is defined as having three major aspects: (1) the description of the errors, (2) the explanation of errors by means of contrastive…

  12. Neuroimaging measures of error-processing: Extracting reliable signals from event-related potentials and functional magnetic resonance imaging.

    PubMed

    Steele, Vaughn R; Anderson, Nathaniel E; Claus, Eric D; Bernat, Edward M; Rao, Vikram; Assaf, Michal; Pearlson, Godfrey D; Calhoun, Vince D; Kiehl, Kent A

    2016-05-15

    Error-related brain activity has become an increasingly important focus of cognitive neuroscience research utilizing both event-related potentials (ERPs) and functional magnetic resonance imaging (fMRI). Given the significant time and resources required to collect these data, it is important for researchers to plan their experiments such that stable estimates of error-related processes can be achieved efficiently. Reliability of error-related brain measures will vary as a function of the number of error trials and the number of participants included in the averages. Unfortunately, systematic investigations of the number of events and participants required to achieve stability in error-related processing are sparse, and none have addressed variability in sample size. Our goal here is to provide data compiled from a large sample of healthy participants (n=180) performing a Go/NoGo task, resampled iteratively to demonstrate the relative stability of measures of error-related brain activity given a range of sample sizes and event numbers included in the averages. We examine ERP measures of error-related negativity (ERN/Ne) and error positivity (Pe), as well as event-related fMRI measures locked to False Alarms. We find that achieving stable estimates of ERP measures required four to six error trials and approximately 30 participants; fMRI measures required six to eight trials and approximately 40 participants. Fewer trials and participants were required for measures where additional data reduction techniques (i.e., principal component analysis and independent component analysis) were implemented. Ranges of reliability statistics for various sample sizes and numbers of trials are provided. We intend this to be a useful resource for those planning or evaluating ERP or fMRI investigations with tasks designed to measure error-processing. PMID:26908319
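
    The resampling logic can be sketched with synthetic data (illustrative only, not the study's Go/NoGo recordings): per-participant amplitude estimates built from k randomly chosen trials are correlated with the full-trial estimates to see how quickly they stabilize as k grows.

        # Synthetic resampling sketch: how many error trials are needed before
        # per-participant amplitude estimates track the full-trial estimates?
        import numpy as np

        rng = np.random.default_rng(1)
        n_participants, n_trials = 180, 40
        true_amp = rng.normal(-6.0, 2.0, n_participants)        # "true" ERN-like amplitude
        trials = true_amp[:, None] + rng.normal(0.0, 8.0, (n_participants, n_trials))

        full_estimate = trials.mean(axis=1)
        for k in (2, 4, 6, 8, 12, 20):
            corrs = []
            for _ in range(200):                                 # resample k trials repeatedly
                subset = rng.choice(n_trials, size=k, replace=False)
                corrs.append(np.corrcoef(trials[:, subset].mean(axis=1), full_estimate)[0, 1])
            print(f"{k:2d} trials: mean r with full-trial estimate = {np.mean(corrs):.3f}")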

  13. Smoothing error pitfalls

    NASA Astrophysics Data System (ADS)

    von Clarmann, T.

    2014-09-01

    The difference due to the content of a priori information between a constrained retrieval and the true atmospheric state is usually represented by a diagnostic quantity called smoothing error. In this paper it is shown that, regardless of the usefulness of the smoothing error as a diagnostic tool in its own right, the concept of the smoothing error as a component of the retrieval error budget is questionable because it is not compliant with Gaussian error propagation. The reason for this is that the smoothing error does not represent the expected deviation of the retrieval from the true state but the expected deviation of the retrieval from the atmospheric state sampled on an arbitrary grid, which is itself a smoothed representation of the true state; in other words, to characterize the full loss of information with respect to the true atmosphere, the effect of the representation of the atmospheric state on a finite grid also needs to be considered. The idea of a sufficiently fine sampling of this reference atmospheric state is problematic because atmospheric variability occurs on all scales, implying that there is no limit beyond which the sampling is fine enough. Even the idealization of infinitesimally fine sampling of the reference state does not help, because the smoothing error is applied to quantities which are only defined in a statistical sense, which implies that a finite volume of sufficient spatial extent is needed to meaningfully discuss temperature or concentration. Smoothing differences, however, which play a role when measurements are compared, are still a useful quantity if the covariance matrix involved has been evaluated on the comparison grid rather than resulting from interpolation and if the averaging kernel matrices have been evaluated on a grid fine enough to capture all atmospheric variations that the instruments are sensitive to. This is, under the assumptions stated, because the undefined component of the smoothing error, which is the

  14. Compact disk error measurements

    NASA Technical Reports Server (NTRS)

    Howe, D.; Harriman, K.; Tehranchi, B.

    1993-01-01

    The objectives of this project are as follows: provide hardware and software that will perform simple, real-time, high resolution (single-byte) measurement of the error burst and good data gap statistics seen by a photoCD player read channel when recorded CD write-once discs of variable quality (i.e., condition) are being read; extend the above system to enable measurement of the hard decision (i.e., 1-bit error flags) and soft decision (i.e., 2-bit error flags) decoding information that is produced/used by the Cross-Interleaved Reed-Solomon Code (CIRC) block decoder employed in the photoCD player read channel; construct a model that uses data obtained via the systems described above to produce meaningful estimates of output error rates (due to both uncorrected ECC words and misdecoded ECC words) when a CD disc having specific (measured) error statistics is read (completion date to be determined); and check the hypothesis that current adaptive CIRC block decoders are optimized for pressed (DAD/ROM) CD discs. If warranted, do a conceptual design of an adaptive CIRC decoder that is optimized for write-once CD discs.

  15. Potlining Additives

    SciTech Connect

    Rudolf Keller

    2004-08-10

    In this project, a concept to improve the performance of aluminum production cells by introducing potlining additives was examined and tested. Boron oxide was added to cathode blocks, and titanium was dissolved in the metal pool; this resulted in the formation of titanium diboride and caused the molten aluminum to wet the carbonaceous cathode surface. Such wetting reportedly leads to operational improvements and extended cell life. In addition, boron oxide suppresses cyanide formation. This final report presents and discusses the results of this project. Substantial economic benefits for the practical implementation of the technology are projected, especially for modern cells with graphitized blocks. For example, with an energy savings of about 5% and an increase in pot life from 1500 to 2500 days, a cost savings of $ 0.023 per pound of aluminum produced is projected for a 200 kA pot.

  16. Phosphazene additives

    SciTech Connect

    Harrup, Mason K; Rollins, Harry W

    2013-11-26

    An additive comprising a phosphazene compound that has at least two reactive functional groups and at least one capping functional group bonded to phosphorus atoms of the phosphazene compound. One of the at least two reactive functional groups is configured to react with cellulose and the other of the at least two reactive functional groups is configured to react with a resin, such as an amine resin or a polycarboxylic acid resin. The at least one capping functional group is selected from the group consisting of a short chain ether group, an alkoxy group, or an aryloxy group. Also disclosed are an additive-resin admixture, a method of treating a wood product, and a wood product.

  17. A Parametric Analysis of Errors of Commission during Discrete-Trial Training

    ERIC Educational Resources Information Center

    DiGennaro Reed, Florence D.; Reed, Derek D.; Baez, Cynthia N.; Maguire, Helena

    2011-01-01

    We investigated the effects of systematic changes in levels of treatment integrity by altering errors of commission during error-correction procedures as part of discrete-trial training. We taught 3 students with autism receptive nonsense shapes under 3 treatment integrity conditions (0%, 50%, or 100% errors of commission). Participants exhibited…

  18. Performance of focused error control codes

    NASA Astrophysics Data System (ADS)

    Alajaji, Fady; Fuja, Thomas

    1994-02-01

    Consider an additive noise channel with inputs and outputs in the field GF(q), where q > 2; every time a symbol is transmitted over such a channel, there are q - 1 different errors that can occur, corresponding to the q - 1 non-zero elements that the channel can add to the transmitted symbol. In many data communication/storage systems, there are some errors that occur much more frequently than others; however, traditional error correcting codes - designed with respect to the Hamming metric - treat each of these q - 1 errors the same. Fuja and Heegard have designed a class of codes, called focused error control codes, that offer different levels of protection against common and uncommon errors; the idea is to define the level of protection in a way based not only on the number of errors, but the kind as well. In this paper, the performance of these codes is analyzed with respect to idealized 'skewed' channels as well as realistic non-binary modulation schemes. It is shown that focused codes, used in conjunction with PSK and QAM signaling, can provide more than 1.0 dB of additional coding gain when compared with Reed-Solomon codes for small blocklengths.
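
    The skewed-channel setting can be illustrated with a toy simulation (a hedged sketch of the channel model only, not the focused-code construction itself): one nonzero additive error value over GF(q) is made far more likely than the others, so "common" errors dominate the error record.

        # Toy skewed channel over GF(q) with q prime, so mod-q addition is the field
        # addition; one additive error value is "common", the rest are "uncommon".
        import random

        random.seed(2)
        q = 7                  # field size (q > 2, prime so mod-q addition suffices)
        p_error = 0.05         # probability a symbol is corrupted at all
        p_common = 0.9         # given corruption, probability of the common error value
        common_error = 1

        def transmit(symbol):
            """Return the received symbol after possible additive channel noise."""
            if random.random() >= p_error:
                return symbol
            if random.random() < p_common:
                e = common_error
            else:
                e = random.choice([v for v in range(1, q) if v != common_error])
            return (symbol + e) % q

        sent = [random.randrange(q) for _ in range(100000)]
        received = [transmit(s) for s in sent]
        errors = [(r - s) % q for s, r in zip(sent, received) if r != s]
        print(f"{len(errors)} symbol errors,"
              f" {errors.count(common_error) / len(errors):.1%} of them the common kind")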

  19. Error modes in implicit Monte Carlo

    SciTech Connect

    Martin, William Russell,; Brown, F. B.

    2001-01-01

    The Implicit Monte Carlo (IMC) method of Fleck and Cummings [1] has been used for years to analyze radiative transfer problems, such as those encountered in stellar atmospheres or inertial confinement fusion. Larsen and Mercier [2] have shown that the IMC method violates a maximum principle that is satisfied by the exact solution to the radiative transfer equation. Except for [2] and related papers regarding the maximum principle, there have been no other published results regarding the analysis of errors or convergence properties for the IMC method. This work presents an exact error analysis for the IMC method by using the analytical solutions for infinite medium geometry (0-D) to determine closed form expressions for the errors. The goal is to gain insight regarding the errors inherent in the IMC method by relating the exact 0-D errors to multi-dimensional geometry. Additional work (not described herein) has shown that adding a leakage term (i.e., a 'buckling' term) to the 0-D equations has relatively little effect on the IMC errors analyzed in this paper, so that the 0-D errors should provide useful guidance for the errors observed in multi-dimensional simulations.

  20. Testing Scientific Software: A Systematic Literature Review

    PubMed Central

    Kanewala, Upulee; Bieman, James M.

    2014-01-01

    Context Scientific software plays an important role in critical decision making, for example making weather predictions based on climate models, and computation of evidence for research publications. Recently, scientists have had to retract publications due to errors caused by software faults. Systematic testing can identify such faults in code. Objective This study aims to identify specific challenges, proposed solutions, and unsolved problems faced when testing scientific software. Method We conducted a systematic literature survey to identify and analyze relevant literature. We identified 62 studies that provided relevant information about testing scientific software. Results We found that challenges faced when testing scientific software fall into two main categories: (1) testing challenges that occur due to characteristics of scientific software such as oracle problems and (2) testing challenges that occur due to cultural differences between scientists and the software engineering community such as viewing the code and the model that it implements as inseparable entities. In addition, we identified methods to potentially overcome these challenges and their limitations. Finally we describe unsolved challenges and how software engineering researchers and practitioners can help to overcome them. Conclusions Scientific software presents special challenges for testing. Specifically, cultural differences between scientist developers and software engineers, along with the characteristics of the scientific software make testing more difficult. Existing techniques such as code clone detection can help to improve the testing process. Software engineers should consider special challenges posed by scientific software such as oracle problems when developing testing techniques. PMID:25125798

  1. Experimental Quantum Error Detection

    PubMed Central

    Jin, Xian-Min; Yi, Zhen-Huan; Yang, Bin; Zhou, Fei; Yang, Tao; Peng, Cheng-Zhi

    2012-01-01

    Faithful transmission of quantum information is a crucial ingredient in quantum communication networks. To overcome the unavoidable decoherence in a noisy channel, to date, many efforts have been made to transmit one state by consuming large numbers of time-synchronized ancilla states. However, such huge demands of quantum resources are hard to meet with current technology and this restricts practical applications. Here we experimentally demonstrate quantum error detection, an economical approach to reliably protecting a qubit against bit-flip errors. Arbitrary unknown polarization states of single photons and entangled photons are converted into time bins deterministically via a modified Franson interferometer. Noise arising in both 10 m and 0.8 km fiber, which induces associated errors on the reference frame of time bins, is filtered when photons are detected. The demonstrated resource efficiency and state independence make this protocol a promising candidate for implementing a real-world quantum communication network. PMID:22953047

  2. Measurement error revisited

    NASA Astrophysics Data System (ADS)

    Henderson, Robert K.

    1999-12-01

    It is widely accepted in the electronics industry that measurement gauge error variation should be no larger than 10% of the related specification window. In a previous paper, 'What Amount of Measurement Error is Too Much?', the author used a framework from the process industries to evaluate the impact of measurement error variation in terms of both customer and supplier risk (i.e., Non-conformance and Yield Loss). Application of this framework in its simplest form suggested that in many circumstances the 10% criterion might be more stringent than is reasonably necessary. This paper reviews the framework and results of the earlier work, then examines some of the possible extensions to this framework suggested in that paper, including variance component models and sampling plans applicable in the photomask and semiconductor businesses. The potential impact of imperfect process control practices will be examined as well.
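
    The 10% criterion is usually checked with a precision-to-tolerance ratio; a minimal sketch follows, with made-up specification limits and gauge standard deviations and the conventional k = 6 spread multiplier assumed.

        # Precision-to-tolerance check behind the 10% rule of thumb; spec limits and
        # gauge sigmas below are made-up illustrative numbers.
        def precision_to_tolerance(gauge_sigma, lsl, usl, k=6.0):
            """Gauge spread (k * sigma) as a fraction of the specification window."""
            return k * gauge_sigma / (usl - lsl)

        lsl, usl = 9.5, 10.5                    # specification limits (arbitrary units)
        for sigma in (0.01, 0.017, 0.03):
            pt = precision_to_tolerance(sigma, lsl, usl)
            verdict = "meets" if pt <= 0.10 else "exceeds"
            print(f"gauge sigma {sigma:.3f}: P/T = {pt:.1%} ({verdict} the 10% criterion)")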

  3. Discretization error estimation and exact solution generation using the method of nearby problems.

    SciTech Connect

    Sinclair, Andrew J.; Raju, Anil; Kurzen, Matthew J.; Roy, Christopher John; Phillips, Tyrone S.

    2011-10-01

    The Method of Nearby Problems (MNP), a form of defect correction, is examined as a method for generating exact solutions to partial differential equations and as a discretization error estimator. For generating exact solutions, four-dimensional spline fitting procedures were developed and implemented into a MATLAB code for generating spline fits on structured domains with arbitrary levels of continuity between spline zones. For discretization error estimation, MNP/defect correction only requires a single additional numerical solution on the same grid (as compared to Richardson extrapolation which requires additional numerical solutions on systematically-refined grids). When used for error estimation, it was found that continuity between spline zones was not required. A number of cases were examined including 1D and 2D Burgers equation, the 2D compressible Euler equations, and the 2D incompressible Navier-Stokes equations. The discretization error estimation results compared favorably to Richardson extrapolation and had the advantage of only requiring a single grid to be generated.
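
    For contrast, the Richardson-extrapolation estimate that MNP is compared against can be sketched on a toy problem (a stand-in trapezoid-rule "solver", not the paper's Burgers or Euler solutions); it requires solutions on systematically refined grids, whereas MNP needs only one additional solve on the same grid.

        # Richardson extrapolation on a stand-in problem: the "solver" is a trapezoid-
        # rule integral of x^3 on [0, 1] (exact value 0.25), computed on three
        # systematically refined grids with refinement ratio r = 2.
        import math

        def solve(h):
            n = round(1.0 / h)
            ys = [(i * h) ** 3 for i in range(n + 1)]
            return h * (sum(ys) - 0.5 * (ys[0] + ys[-1]))

        r = 2.0
        f1, f2, f3 = solve(0.1), solve(0.05), solve(0.025)       # coarse, medium, fine

        p = math.log(abs(f2 - f1) / abs(f3 - f2)) / math.log(r)  # observed order of accuracy
        error_estimate = (f3 - f2) / (r ** p - 1.0)              # estimate of (exact - fine)
        print(f"observed order p = {p:.2f}")
        print(f"estimated fine-grid error = {error_estimate:.3e}")
        print(f"actual fine-grid error    = {0.25 - f3:.3e}")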

  4. Surprise beyond prediction error

    PubMed Central

    Chumbley, Justin R; Burke, Christopher J; Stephan, Klaas E; Friston, Karl J; Tobler, Philippe N; Fehr, Ernst

    2014-01-01

    Surprise drives learning. Various neural “prediction error” signals are believed to underpin surprise-based reinforcement learning. Here, we report a surprise signal that reflects reinforcement learning but is neither un/signed reward prediction error (RPE) nor un/signed state prediction error (SPE). To exclude these alternatives, we measured surprise responses in the absence of RPE and accounted for a host of potential SPE confounds. This new surprise signal was evident in ventral striatum, primary sensory cortex, frontal poles, and amygdala. We interpret these findings via a normative model of surprise. PMID:24700400

  5. Evolution of error diffusion

    NASA Astrophysics Data System (ADS)

    Knox, Keith T.

    1999-10-01

    As we approach the new millennium, error diffusion is approaching the 25th anniversary of its invention. Because of its exceptionally high image quality, it continues to be a popular choice among digital halftoning algorithms. Over the last 24 years, many attempts have been made to modify and improve the algorithm--to eliminate unwanted textures and to extend it to printing media and color. Some of these modifications have been very successful and are in use today. This paper will review the history of the algorithm and its modifications. Three watershed events in the development of error diffusion will be described, together with the lessons learned along the way.
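
    For readers unfamiliar with the algorithm itself, a minimal grayscale sketch of the classic Floyd-Steinberg update is given below (canonical 7/16, 3/16, 5/16, 1/16 weights; no serpentine scanning or the later modifications the paper reviews).

        # Grayscale Floyd-Steinberg error diffusion with the canonical weights.
        import numpy as np

        def error_diffuse(image):
            """Binarize a float image in [0, 1], diffusing quantization error forward."""
            img = image.astype(float).copy()
            h, w = img.shape
            out = np.zeros_like(img)
            for y in range(h):
                for x in range(w):
                    new = 1.0 if img[y, x] >= 0.5 else 0.0
                    err = img[y, x] - new
                    out[y, x] = new
                    if x + 1 < w:
                        img[y, x + 1] += err * 7 / 16
                    if y + 1 < h and x > 0:
                        img[y + 1, x - 1] += err * 3 / 16
                    if y + 1 < h:
                        img[y + 1, x] += err * 5 / 16
                    if y + 1 < h and x + 1 < w:
                        img[y + 1, x + 1] += err * 1 / 16
            return out

        ramp = np.tile(np.linspace(0.0, 1.0, 64), (16, 1))       # simple test gradient
        halftone = error_diffuse(ramp)
        print("mean gray level preserved:", round(ramp.mean(), 3), "->", round(halftone.mean(), 3))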

  6. Evolution of error diffusion

    NASA Astrophysics Data System (ADS)

    Knox, Keith T.

    1998-12-01

    As we approach the new millennium, error diffusion is approaching the 25th anniversary of its invention. Because of its exceptionally high image quality, it continues to be a popular choice among digital halftoning algorithms. Over the last 24 years, many attempts have been made to modify and improve the algorithm - to eliminate unwanted textures and to extend it to printing media and color. Some of these modifications have been very successful and are in use today. This paper will review the history of the algorithm and its modifications. Three watershed events in the development of error diffusion will be described, together with the lessons learned along the way.

  7. Error Free Software

    NASA Technical Reports Server (NTRS)

    1985-01-01

    A mathematical theory for development of "higher order" software to catch computer mistakes resulted from a Johnson Space Center contract for Apollo spacecraft navigation. Two women who were involved in the project formed Higher Order Software, Inc. to develop and market the system of error analysis and correction. They designed software which is logically error-free, which, in one instance, was found to increase productivity by 600%. USE.IT defines its objectives using AXES -- a user can write in English and the system converts to computer languages. It is employed by several large corporations.

  8. Multijoint error compensation mediates unstable object control.

    PubMed

    Cluff, Tyler; Manos, Aspasia; Lee, Timothy D; Balasubramaniam, Ramesh

    2012-08-01

    A key feature of skilled object control is the ability to correct performance errors. This process is not straightforward for unstable objects (e.g., inverted pendulum or "stick" balancing) because the mechanics of the object are sensitive to small control errors, which can lead to rapid performance changes. In this study, we have characterized joint recruitment and coordination processes in an unstable object control task. Our objective was to determine whether skill acquisition involves changes in the recruitment of individual joints or distributed error compensation. To address this problem, we monitored stick-balancing performance across four experimental sessions. We confirmed that subjects learned the task by showing an increase in the stability and length of balancing trials across training sessions. We demonstrated that motor learning led to the development of a multijoint error compensation strategy such that after training, subjects preferentially constrained joint angle variance that jeopardized task performance. The selective constraint of destabilizing joint angle variance was an important metric of motor learning. Finally, we performed a combined uncontrolled manifold-permutation analysis to ensure the variance structure was not confounded by differences in the variance of individual joint angles. We showed that reliance on multijoint error compensation increased, whereas individual joint variation (primarily at the wrist joint) decreased systematically with training. We propose a learning mechanism that is based on the accurate estimation of sensory states. PMID:22623491

  9. A new analysis of fine-structure constant measurements and modelling errors from quasar absorption lines

    NASA Astrophysics Data System (ADS)

    Wilczynska, Michael R.; Webb, John K.; King, Julian A.; Murphy, Michael T.; Bainbridge, Matthew B.; Flambaum, Victor V.

    2015-12-01

    We present an analysis of 23 absorption systems along the lines of sight towards 18 quasars in the redshift range of 0.4 ≤ z_abs ≤ 2.3 observed on the Very Large Telescope (VLT) using the Ultraviolet and Visual Echelle Spectrograph (UVES). Considering both statistical and systematic error contributions we find a robust estimate of the weighted mean deviation of the fine-structure constant from its current, laboratory value of Δα/α = (0.22 ± 0.23) × 10^-5, consistent with the dipole variation reported in Webb et al. and King et al. This paper also examines modelling methodologies and systematic effects. In particular, we focus on the consequences of fitting quasar absorption systems with too few absorbing components and of selectively fitting only the stronger components in an absorption complex. We show that using insufficient continuum regions around an absorption complex causes a significant increase in the scatter of a sample of Δα/α measurements, thus unnecessarily reducing the overall precision. We further show that fitting absorption systems with too few velocity components also results in a significant increase in the scatter of Δα/α measurements, and in addition causes Δα/α error estimates to be systematically underestimated. These results thus identify some of the potential pitfalls in analysis techniques and provide a guide for future analyses.

  10. Help prevent hospital errors

    MedlinePlus


  11. Orwell's Instructive Errors

    ERIC Educational Resources Information Center

    Julian, Liam

    2009-01-01

    In this article, the author talks about George Orwell, his instructive errors, and the manner in which Orwell pierced worthless theory, faced facts and defended decency (with fluctuating success), and largely ignored the tradition of accumulated wisdom that has rendered him a timeless teacher--one whose inadvertent lessons, while infrequently…

  12. Enhanced orbit determination filter: Inclusion of ground system errors as filter parameters

    NASA Technical Reports Server (NTRS)

    Masters, W. C.; Scheeres, D. J.; Thurman, S. W.

    1994-01-01

    The theoretical aspects of an orbit determination filter that incorporates ground-system error sources as model parameters for use in interplanetary navigation are presented in this article. This filter, which is derived from sequential filtering theory, allows a systematic treatment of errors in calibrations of transmission media, station locations, and earth orientation models associated with ground-based radio metric data, in addition to the modeling of the spacecraft dynamics. The discussion includes a mathematical description of the filter and an analytical comparison of its characteristics with more traditional filtering techniques used in this application. The analysis in this article shows that this filter has the potential to generate navigation products of substantially greater accuracy than more traditional filtering procedures.

  13. Test models for improving filtering with model errors through stochastic parameter estimation

    SciTech Connect

    Gershgorin, B.; Harlim, J.; Majda, A.J.

    2010-01-01

    The filtering skill for turbulent signals from nature is often limited by model errors created by utilizing an imperfect model for filtering. Updating the parameters in the imperfect model through stochastic parameter estimation is one way to increase filtering skill and model performance. Here a suite of stringent test models for filtering with stochastic parameter estimation is developed based on the Stochastic Parameterization Extended Kalman Filter (SPEKF). These new SPEKF-algorithms systematically correct both multiplicative and additive biases and involve exact formulas for propagating the mean and covariance including the parameters in the test model. A comprehensive study is presented of robust parameter regimes for increasing filtering skill through stochastic parameter estimation for turbulent signals as the observation time and observation noise are varied and even when the forcing is incorrectly specified. The results here provide useful guidelines for filtering turbulent signals in more complex systems with significant model errors.

  14. Rapid mapping of volumetric machine errors using distance measurements

    SciTech Connect

    Krulewich, D.A.

    1998-04-01

    This paper describes a relatively inexpensive, fast, and easy to execute approach to mapping the volumetric errors of a machine tool, coordinate measuring machine, or robot. An error map is used to characterize a machine or to improve its accuracy by compensating for the systematic errors. The method consists of three steps: (1) modeling the relationship between the volumetric error and the current state of the machine; (2) acquiring error data based on distance measurements throughout the work volume; and (3) fitting the error model using the nonlinear equation for the distance. The error model is formulated from the kinematic relationship among the six degrees of freedom of error on each moving axis. Each parametric error, expressed as a function of position, is combined to predict the error between the functional point and the workpiece, also as a function of position. A series of distances between several fixed base locations and various functional points in the work volume is measured using a Laser Ball Bar (LBB). Each measured distance is a nonlinear function dependent on the commanded location of the machine, the machine error, and the base locations. Using the error model, the nonlinear equation is solved, producing a fit for the error model. Also note that, given approximate distances between each pair of base locations, the exact base locations in the machine coordinate system are determined during the nonlinear fitting procedure. Furthermore, with the use of more than three base locations, bias error in the measuring instrument can be removed. The volumetric errors of a three-axis commercial machining center have been mapped using this procedure. In this study, only errors associated with the nominal position of the machine were considered. Other errors, such as thermally induced and load-induced errors, were not considered, although the mathematical model has the ability to account for these errors. Due to the proprietary nature of the projects we are
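
    The fitting step can be sketched in two dimensions (a hedged toy with a simple per-axis scale-error model and known base locations, not the actual LBB procedure): nonlinear least squares on the base-to-point distance equations recovers the assumed error parameters.

        # 2-D toy: commanded positions are distorted by per-axis scale errors; the
        # scale errors are recovered from base-to-point distance measurements by
        # nonlinear least squares. Base locations are assumed known here.
        import numpy as np
        from scipy.optimize import least_squares

        rng = np.random.default_rng(4)
        true_scale = np.array([150e-6, -80e-6])                             # per-axis scale errors
        bases = np.array([[0.0, 0.0], [0.6, 0.0], [0.0, 0.6], [0.6, 0.6]])  # base locations (m)
        commanded = rng.uniform(0.05, 0.55, size=(30, 2))                   # commanded positions (m)

        def distances(scale):
            actual = commanded * (1.0 + scale)                              # simple linear error model
            return np.linalg.norm(actual[:, None, :] - bases[None, :, :], axis=2)

        measured = distances(true_scale) + rng.normal(0.0, 0.5e-6, (30, len(bases)))

        fit = least_squares(lambda s: (distances(s) - measured).ravel(), x0=np.zeros(2))
        print("recovered per-axis scale errors (ppm):", np.round(fit.x * 1e6, 1))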

  15. Challenge and Error: Critical Events and Attention-Related Errors

    ERIC Educational Resources Information Center

    Cheyne, James Allan; Carriere, Jonathan S. A.; Solman, Grayden J. F.; Smilek, Daniel

    2011-01-01

    Attention lapses resulting from reactivity to task challenges and their consequences constitute a pervasive factor affecting everyday performance errors and accidents. A bidirectional model of attention lapses (error ↔ attention-lapse: Cheyne, Solman, Carriere, & Smilek, 2009) argues that errors beget errors by generating attention…

  16. Microdensitometer errors: Their effect on photometric data reduction

    NASA Technical Reports Server (NTRS)

    Bozyan, E. P.; Opal, C. B.

    1984-01-01

    The performance of densitometers used for photometric data reduction of high dynamic range electrographic plate material is analyzed. Densitometer repeatability is tested by comparing two scans of one plate. Internal densitometer errors are examined by constructing histograms of digitized densities and finding inoperative bits and differential nonlinearity in the analog to digital converter. Such problems appear common to the four densitometers used in this investigation and introduce systematic algorithm dependent errors in the results. Strategies to improve densitometer performance are suggested.

  17. Identification and Minimization of Errors in Doppler Global Velocimetry Measurements

    NASA Technical Reports Server (NTRS)

    Meyers, James F.; Lee, Joseph W.

    2000-01-01

    A systematic laboratory investigation was conducted to identify potential measurement error sources in Doppler Global Velocimetry technology. Once identified, methods were developed to eliminate or at least minimize the effects of these errors. The areas considered included the Iodine vapor cell, optical alignment, scattered light characteristics, noise sources, and the laser. Upon completion the demonstrated measurement uncertainty was reduced to 0.5 m/sec.

  18. Breast Patient Setup Error Assessment: Comparison of Electronic Portal Image Devices and Cone-Beam Computed Tomography Matching Results

    SciTech Connect

    Topolnjak, Rajko; Sonke, Jan-Jakob; Nijkamp, Jasper; Rasch, Coen; Minkema, Danny; Remeijer, Peter; Vliet-Vroegindeweij, Corine van

    2010-11-15

    Purpose: To quantify the differences in setup errors measured with cone-beam computed tomography (CBCT) and electronic portal image devices (EPID) in breast cancer patients. Methods and Materials: Repeat CBCT scans were acquired for routine offline setup verification in 20 breast cancer patients. During the CBCT imaging fractions, EPID images of the treatment beams were recorded. Registrations of the bony anatomy for CBCT to planning CT and EPID to digitally reconstructed radiographs (DRRs) were compared. In addition, similar measurements of an anthropomorphic thorax phantom were acquired. Bland-Altman and linear regression analysis were performed for clinical and phantom registrations. Systematic and random setup errors were quantified for CBCT and EPID-driven correction protocols in the EPID coordinate system (U, V), with V parallel to the cranial-caudal axis and U perpendicular to V and the central beam axis. Results: Bland-Altman analysis of clinical EPID and CBCT registrations yielded 4- to 6-mm limits of agreement, indicating that the two methods were not compatible. The EPID-based setup errors were smaller than the CBCT-based setup errors. Phantom measurements showed that CBCT accurately measures setup error, whereas EPID underestimates setup errors in the cranial-caudal direction. In the clinical measurements, the residual bony anatomy setup errors after offline CBCT-based corrections were Σ_U = 1.4 mm, Σ_V = 1.7 mm, and σ_U = 2.6 mm, σ_V = 3.1 mm. Residual setup errors of EPID-driven corrections, corrected for underestimation, were estimated at Σ_U = 2.2 mm, Σ_V = 3.3 mm, and σ_U = 2.9 mm, σ_V = 2.9 mm. Conclusion: EPID registration underestimated the actual bony anatomy setup error in breast cancer patients by 20% to 50%. Using CBCT decreased setup uncertainties significantly.

  19. Systematic neutron guide misalignment for an accelerator-driven spallation neutron source

    NASA Astrophysics Data System (ADS)

    Zendler, C.; Bentley, P. M.

    2016-08-01

    The European Spallation Source (ESS) is a long pulse spallation neutron source that is currently under construction in Lund, Sweden. A considerable fraction of the 22 planned instruments extend as far as 75-150 m from the source. In such long beam lines, misalignment between neutron guide segments can decrease the neutron transmission significantly. In addition to a random misalignment from installation tolerances, the ground on which ESS is built can be expected to sink with time, and thus shift the neutron guide segments further away from the ideal alignment axis in a systematic way. These systematic errors are correlated to the ground structure, position of buildings and shielding installation. Since the largest deformation is expected close to the target, even short instruments might be noticeably affected. In this study, the effect of this systematic misalignment on short and long ESS beam lines is analyzed, and a possible mitigation by overillumination of subsequent guide sections investigated.

  20. [The error, source of learning].

    PubMed

    Joyeux, Stéphanie; Bohic, Valérie

    2016-05-01

    The error itself is not recognised as a fault. It is the intentionality which differentiates between an error and a fault. An error is unintentional while a fault is a failure to respect known rules. The risk of error is omnipresent in health institutions. Public authorities have therefore set out a series of measures to reduce this risk. PMID:27155272

  1. Imagery of Errors in Typing

    ERIC Educational Resources Information Center

    Rieger, Martina; Martinez, Fanny; Wenke, Dorit

    2011-01-01

    Using a typing task we investigated whether insufficient imagination of errors and error corrections is related to duration differences between execution and imagination. In Experiment 1 spontaneous error imagination was investigated, whereas in Experiment 2 participants were specifically instructed to imagine errors. Further, in Experiment 2 we…

  2. Neural Correlates of Reach Errors

    PubMed Central

    Hashambhoy, Yasmin; Rane, Tushar; Shadmehr, Reza

    2005-01-01

    Reach errors may be broadly classified into errors arising from unpredictable changes in target location, called target errors, and errors arising from miscalibration of internal models, called execution errors. Execution errors may be caused by miscalibration of dynamics (e.g., when a force field alters limb dynamics) or by miscalibration of kinematics (e.g., when prisms alter visual feedback). While all types of errors lead to similar online corrections, we found that the motor system showed strong trial-by-trial adaptation in response to random execution errors but not in response to random target errors. We used fMRI and a compatible robot to study brain regions involved in processing each kind of error. Both kinematic and dynamic execution errors activated regions along the central and the post-central sulci and in lobules V, VI, and VIII of the cerebellum, making these areas possible sites of plastic changes in internal models for reaching. Only activity related to kinematic errors extended into parietal area 5. These results are inconsistent with the idea that kinematics and dynamics of reaching are computed in separate neural entities. In contrast, only target errors caused increased activity in the striatum and the posterior superior parietal lobule. The cerebellum and motor cortex were as strongly activated as with execution errors. These findings indicate a neural and behavioral dissociation between errors that lead to switching of behavioral goals, and errors that lead to adaptation of internal models of limb dynamics and kinematics. PMID:16251440

  3. Automatic Error Analysis Using Intervals

    ERIC Educational Resources Information Center

    Rothwell, E. J.; Cloud, M. J.

    2012-01-01

    A technique for automatic error analysis using interval mathematics is introduced. A comparison to standard error propagation methods shows that in cases involving complicated formulas, the interval approach gives comparable error estimates with much less effort. Several examples are considered, and numerical errors are computed using the INTLAB…
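
    A minimal stand-in for the interval idea (INTLAB itself is a MATLAB toolbox; the tiny class below is only illustrative) shows how bounds propagate automatically through a formula.

        # Tiny interval class: arithmetic on [lo, hi] bounds propagates worst-case
        # error automatically through a formula (illustrative stand-in, not INTLAB).
        class Interval:
            def __init__(self, lo, hi):
                self.lo, self.hi = lo, hi

            def __add__(self, other):
                return Interval(self.lo + other.lo, self.hi + other.hi)

            def __sub__(self, other):
                return Interval(self.lo - other.hi, self.hi - other.lo)

            def __mul__(self, other):
                p = [self.lo * other.lo, self.lo * other.hi,
                     self.hi * other.lo, self.hi * other.hi]
                return Interval(min(p), max(p))

            def __repr__(self):
                return f"[{self.lo:.6g}, {self.hi:.6g}]"

        # Propagate measurement tolerances through z = x*y - x.
        x = Interval(1.99, 2.01)
        y = Interval(2.98, 3.02)
        print("z =", x * y - x)    # guaranteed enclosure of the true value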

  4. The Insufficiency of Error Analysis

    ERIC Educational Resources Information Center

    Hammarberg, B.

    1974-01-01

    The position here is that error analysis is inadequate, particularly from the language-teaching point of view. Non-errors must be considered in specifying the learner's current command of the language, its limits, and his learning tasks. A cyclic procedure of elicitation and analysis, to secure evidence of errors and non-errors, is outlined.…

  5. Control by model error estimation

    NASA Technical Reports Server (NTRS)

    Likins, P. W.; Skelton, R. E.

    1976-01-01

    Modern control theory relies upon the fidelity of the mathematical model of the system. Truncated modes, external disturbances, and parameter errors in linear system models are corrected by augmenting to the original system of equations an 'error system' which is designed to approximate the effects of such model errors. A Chebyshev error system is developed for application to the Large Space Telescope (LST).

  6. Financial errors in dementia: Testing a neuroeconomic conceptual framework

    PubMed Central

    Chiong, Winston; Hsu, Ming; Wudka, Danny; Miller, Bruce L.; Rosen, Howard J.

    2013-01-01

    Financial errors by patients with dementia can have devastating personal and family consequences. We developed and evaluated a neuroeconomic conceptual framework for understanding financial errors across different dementia syndromes, using a systematic, retrospective, blinded chart review of demographically-balanced cohorts of patients with Alzheimer’s disease (AD, n=100) and behavioral variant frontotemporal dementia (bvFTD, n=50). Reviewers recorded specific reports of financial errors according to a conceptual framework identifying patient cognitive and affective characteristics, and contextual influences, conferring susceptibility to each error. Specific financial errors were reported for 49% of AD and 70% of bvFTD patients (p = 0.012). AD patients were more likely than bvFTD patients to make amnestic errors (p< 0.001), while bvFTD patients were more likely to spend excessively (p = 0.004) and to exhibit other behaviors consistent with diminished sensitivity to losses and other negative outcomes (p< 0.001). Exploratory factor analysis identified a social/affective vulnerability factor associated with errors in bvFTD, and a cognitive vulnerability factor associated with errors in AD. Our findings highlight the frequency and functional importance of financial errors as symptoms of AD and bvFTD. A conceptual model derived from neuroeconomic literature identifies factors that influence vulnerability to different types of financial error in different dementia syndromes, with implications for early diagnosis and subsequent risk prevention. PMID:23550884

  7. Manson's triple error.

    PubMed

    Delaporte, F

    2008-09-01

    The author discusses the significance, implications and limitations of Manson's work. How did Patrick Manson resolve some of the major problems raised by the filarial worm life cycle? The Amoy physician showed that circulating embryos could only leave the blood via the percutaneous route, thereby requiring a bloodsucking insect. The discovery of a new autonomous, airborne, active host undoubtedly had a considerable impact on the history of parasitology, but the way in which Manson formulated and solved the problem of the transfer of filarial worms from the body of the mosquito to man resulted in failure. This article shows how the epistemological transformation operated by Manson was indissociably related to a series of errors and how a major breakthrough can be the result of a series of false proposals and, consequently, that the history of truth often involves a history of error. PMID:18814729

  8. Error-Free Software

    NASA Technical Reports Server (NTRS)

    1989-01-01

    001 is an integrated tool suited for automatically developing ultra reliable models, simulations and software systems. Developed and marketed by Hamilton Technologies, Inc. (HTI), it has been applied in engineering, manufacturing, banking and software tools development. The software provides the ability to simplify the complex. A system developed with 001 can be a prototype or fully developed with production quality code. It is free of interface errors, consistent, logically complete and has no data or control flow errors. Systems can be designed, developed and maintained with maximum productivity. Margaret Hamilton, President of Hamilton Technologies, also directed the research and development of USE.IT, an earlier product which was the first computer aided software engineering product in the industry to concentrate on automatically supporting the development of an ultrareliable system throughout its life cycle. Both products originated in NASA technology developed under a Johnson Space Center contract.

  9. The causes of horizontal-vertical (H-V) bias in optical lithography: dipole source errors

    NASA Astrophysics Data System (ADS)

    Biafore, John J.; Mack, Chris A.; Robertson, Stewart A.; Smith, Mark D.; Kapasi, Sanjay

    2007-03-01

    Horizontal-Vertical (H-V) bias is the systematic difference in linewidth between closely located horizontally and vertically oriented resist features that, other than orientation, should be identical. There are two major causes of H-V bias: astigmatism, which causes an H-V bias that varies through focus, and illumination source errors such as telecentricity error. In this paper, the effects of simple dipole source errors upon H-V bias and placement error through focus are explored through simulation.

  10. A Review of the Literature on Computational Errors With Whole Numbers. Mathematics Education Diagnostic and Instructional Centre (MEDIC).

    ERIC Educational Resources Information Center

    Burrows, J. K.

    Research on error patterns associated with whole number computation is reviewed. Details of the results of some of the individual studies cited are given in the appendices. In Appendix A, 33 addition errors, 27 subtraction errors, 41 multiplication errors, and 41 division errors are identified, and the frequency of these errors made by 352…

  11. Error-correction coding

    NASA Technical Reports Server (NTRS)

    Hinds, Erold W. (Principal Investigator)

    1996-01-01

    This report describes the progress made towards the completion of a specific task on error-correcting coding. The proposed research consisted of investigating the use of modulation block codes as the inner code of a concatenated coding system in order to improve the overall space link communications performance. The study proposed to identify and analyze candidate codes that will complement the performance of the overall coding system which uses the interleaved RS (255,223) code as the outer code.

  12. TOA/FOA geolocation error analysis.

    SciTech Connect

    Mason, John Jeffrey

    2008-08-01

    This paper describes how confidence intervals can be calculated for radiofrequency emitter position estimates based on time-of-arrival and frequency-of-arrival measurements taken at several satellites. These confidence intervals take the form of 50th and 95th percentile circles and ellipses to convey horizontal error and linear intervals to give vertical error. We consider both cases where an assumed altitude is and is not used. Analysis of velocity errors is also considered. We derive confidence intervals for horizontal velocity magnitude and direction including the case where the emitter velocity is assumed to be purely horizontal, i.e., parallel to the ellipsoid. Additionally, we derive an algorithm that we use to combine multiple position fixes to reduce location error. The algorithm uses all available data, after more than one location estimate for an emitter has been made, in a mathematically optimal way.
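
    One standard way to combine fixes optimally for Gaussian errors is inverse-covariance weighting; the sketch below (illustrative numbers, not the paper's algorithm) fuses two horizontal fixes and their covariances into a single estimate with reduced error.

        # Inverse-covariance fusion of two horizontal position fixes (east, north in km).
        import numpy as np

        fixes = [np.array([10.2, 4.8]), np.array([9.5, 5.6])]
        covs = [np.diag([4.0, 1.0]), np.diag([1.0, 9.0])]           # per-fix error covariance

        info = sum(np.linalg.inv(P) for P in covs)                  # combined information matrix
        fused_cov = np.linalg.inv(info)
        fused = fused_cov @ sum(np.linalg.inv(P) @ x for P, x in zip(covs, fixes))

        print("fused position:", np.round(fused, 2))
        print("fused 1-sigma errors:", np.round(np.sqrt(np.diag(fused_cov)), 2))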

  13. Bayesian Error Estimation Functionals

    NASA Astrophysics Data System (ADS)

    Jacobsen, Karsten W.

    The challenge of approximating the exchange-correlation functional in Density Functional Theory (DFT) has led to the development of numerous different approximations of varying accuracy on different calculated properties. There is therefore a need for reliable estimation of prediction errors within the different approximation schemes to DFT. The Bayesian Error Estimation Functionals (BEEF) have been developed with this in mind. The functionals are constructed by fitting to experimental and high-quality computational databases for molecules and solids including chemisorption and van der Waals systems. This leads to reasonably accurate general-purpose functionals with particular focus on surface science. The fitting procedure involves considerations on how to combine different types of data, and applies Tikhonov regularization and bootstrap cross validation. The methodology has been applied to construct GGA and metaGGA functionals with and without inclusion of long-ranged van der Waals contributions. The error estimation is made possible by generating not only a single functional but a probability distribution of functionals, represented by a functional ensemble. The use of the functional ensemble is illustrated on compound heats of formation and by investigations of the reliability of calculated catalytic ammonia synthesis rates.

  14. Human Error In Complex Systems

    NASA Technical Reports Server (NTRS)

    Morris, Nancy M.; Rouse, William B.

    1991-01-01

    Report presents results of research aimed at understanding causes of human error in such complex systems as aircraft, nuclear powerplants, and chemical processing plants. Research considered both slips (errors of action) and mistakes (errors of intention), and influence of workload on them. Results indicated that humans respond to conditions in which errors are expected by attempting to reduce the incidence of errors, and that adaptation to conditions is a potent influence on human behavior in discretionary situations.

  15. [The notion and classification of expert errors].

    PubMed

    Klevno, V A

    2012-01-01

    The author presents the analysis of the legal and forensic medical literature concerning currently accepted concepts and classification of expert malpractice. He proposes a new easy-to-remember definition of the expert error and considers the classification of such mistakes. The analysis of the cases of erroneous application of the medical criteria for estimation of the harm to health made it possible to reveal and systematize the causes accounting for the cases of expert malpractice committed by forensic medical experts and health providers when determining the degree of harm to human health. PMID:22686055

  16. Effects of the ephemeris error on effective pointing for a spaceborne SAR

    NASA Technical Reports Server (NTRS)

    Jin, Michael Y.

    1989-01-01

    Both Magellan SAR data acquisition and image processing require the knowledge of both the ephemeris (spacecraft position and velocity) and the radar pointing direction. Error in the knowledge of the radar pointing direction results in a loss of SNR in the image product. An error in the ephemeris data has a similar effect. To facilitate SNR performance analysis, an effective pointing error is defined to characterize the effect of the ephemeris error. A systematic approach to relate the ephemeris error to the effective pointing errors is described. Result of this analysis has led to a formal accuracy requirement levied on the Magellan navigation system.

  17. Instrumental error in chromotomosynthetic hyperspectral imaging.

    PubMed

    Bostick, Randall L; Perram, Glen P

    2012-07-20

    Chromotomosynthetic imaging (CTI) is a method of convolving spatial and spectral information that can be reconstructed into a hyperspectral image cube using the same transforms employed in medical tomosynthesis. A direct vision prism instrument operating in the visible (400-725 nm) with 0.6 mrad instantaneous field of view (IFOV) and 0.6-10 nm spectral resolution has been constructed and characterized. Reconstruction of hyperspectral data cubes requires an estimation of the instrument component properties that define the forward transform. We analyze the systematic instrumental error in collected projection data resulting from prism spectral dispersion, prism alignment, detector array position, and prism rotation angle. The shifting and broadening of both the spectral lineshape function and the spatial point spread function in the reconstructed hyperspectral imagery is compared with experimental results for monochromatic point sources. The shorter wavelength (λ<500 nm) region where the prism has the highest spectral dispersion suffers mostly from degradation of spectral resolution in the presence of systematic error, while longer wavelengths (λ>600 nm) suffer mostly from a shift of the spectral peaks. The quality of the reconstructed hyperspectral imagery is most sensitive to the misalignment of the prism rotation mount. With less than 1° total angular error in the two axes of freedom, spectral resolution was degraded by as much as a factor of 2 in the blue spectral region. For larger errors than this, spectral peaks begin to split into bimodal distributions, and spatial point response functions are reconstructed in rings with radii proportional to wavelength and spatial resolution. PMID:22858961

  18. A new approach to the assessment of stochastic errors of radio source position catalogues

    NASA Astrophysics Data System (ADS)

    Malkin, Zinovy

    2013-10-01

    Assessing the external stochastic errors of radio source position catalogues derived from VLBI observations is important for tasks such as estimating the quality of the catalogues and their weighting during combination. One of the widely used methods to estimate these errors is the three-cornered-hat technique, which can be extended to the N-cornered-hat technique. A critical point of this method is how to properly account for the correlations between the compared catalogues. We present a new approach to solving this problem that is suitable for simultaneous investigations of several catalogues. To compute the correlation between two catalogues A and B, the differences between these catalogues and a third arbitrary catalogue C are computed. Then the correlation between these differences is considered as an estimate of the correlation between catalogues A and B. The average value of these estimates over all catalogues C is taken as a final estimate of the target correlation. In this way, an exhaustive search of all possible combinations allows one to compute the paired correlations between all catalogues. As an additional refinement of the method, we introduce the concept of a weighted correlation coefficient. This technique was applied to nine recently published radio source position catalogues. We found large systematic differences between the catalogues that significantly impact the determination of their stochastic errors. Finally, we estimated the stochastic errors of the nine catalogues.
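
    A minimal sketch of the paired-correlation step described above: the correlation between catalogues A and B is estimated by correlating the differences (A - C) and (B - C) and averaging over every third catalogue C. The catalogue data, the use of a plain (unweighted) correlation, and the reduction of each catalogue to a single vector of position offsets are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def paired_correlation(cats, a, b):
    """Estimate corr(A, B) as the mean of corr(A - C, B - C) over all other catalogues C."""
    estimates = []
    for name, cat_c in cats.items():
        if name in (a, b):
            continue
        diff_a = cats[a] - cat_c
        diff_b = cats[b] - cat_c
        estimates.append(np.corrcoef(diff_a, diff_b)[0, 1])
    return float(np.mean(estimates))

# Illustrative data: mock catalogues of source position offsets sharing a
# common error term, so nonzero pairwise correlations are expected.
rng = np.random.default_rng(0)
common = rng.normal(0.0, 0.1, 500)
cats = {name: common + rng.normal(0.0, 0.2, 500) for name in "ABCDEFGHI"}
print(round(paired_correlation(cats, "A", "B"), 2))
```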

  19. Evaluating a medical error taxonomy.

    PubMed Central

    Brixey, Juliana; Johnson, Todd R.; Zhang, Jiajie

    2002-01-01

    Healthcare has been slow in using human factors principles to reduce medical errors. The Center for Devices and Radiological Health (CDRH) recognizes that a lack of attention to human factors during product development may lead to errors that have the potential for patient injury, or even death. In response to the need for reducing medication errors, the National Coordinating Council for Medication Errors Reporting and Prevention (NCC MERP) released the NCC MERP taxonomy that provides a standard language for reporting medication errors. This project maps the NCC MERP taxonomy of medication errors to MedWatch medical errors involving infusion pumps. Of particular interest are human factors associated with medical device errors. The NCC MERP taxonomy of medication errors is limited in mapping information from MedWatch because of the focus on the medical device and the format of reporting. PMID:12463789

  20. Understanding error generation in fused deposition modeling

    NASA Astrophysics Data System (ADS)

    Bochmann, Lennart; Bayley, Cindy; Helu, Moneer; Transchel, Robert; Wegener, Konrad; Dornfeld, David

    2015-03-01

    Additive manufacturing offers completely new possibilities for the manufacturing of parts. The advantages of flexibility and convenience of additive manufacturing have had a significant impact on many industries, and optimizing part quality is crucial for expanding its utilization. This research aims to determine the sources of imprecision in fused deposition modeling (FDM). Process errors in terms of surface quality, accuracy and precision are identified and quantified, and an error-budget approach is used to characterize errors of the machine tool. It was determined that accuracy and precision in the y direction (0.08-0.30 mm) are generally greater than in the x direction (0.12-0.62 mm) and the z direction (0.21-0.57 mm). Furthermore, accuracy and precision tend to decrease at increasing axis positions. The results of this work can be used to identify possible process improvements in the design and control of FDM technology.
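
    The per-axis accuracy and precision figures quoted above can be illustrated with a small sketch. The definitions below (accuracy as the magnitude of the mean deviation from nominal, precision as the standard deviation over repeated measurements) and all numbers are assumptions for illustration; the paper's actual error-budget procedure is not reproduced.

```python
import numpy as np

# Illustrative per-axis accuracy/precision from repeated measurements of
# printed test features; nominal and measured values are made up.
rng = np.random.default_rng(1)
nominal = np.array([10.0, 20.0, 30.0, 40.0])                # commanded positions (mm)
measured = nominal + rng.normal(0.15, 0.08, size=(12, 4))   # 12 repeated prints

error = measured - nominal
accuracy = np.abs(error.mean(axis=0))        # systematic deviation per position (mm)
precision = error.std(axis=0, ddof=1)        # repeatability per position (mm)
print("accuracy (mm): ", np.round(accuracy, 3))
print("precision (mm):", np.round(precision, 3))
```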

  1. A Guideline for Applying Systematic Reviews to Child Language Intervention

    ERIC Educational Resources Information Center

    Hargrove, Patricia; Lund, Bonnie; Griffer, Mona

    2005-01-01

    This article focuses on applying systematic reviews to the Early Intervention (EI) literature. Systematic reviews are defined and differentiated from traditional, or narrative, reviews and from meta-analyses. In addition, the steps involved in critiquing systematic reviews and an illustration of a systematic review from the EI literature are…

  2. Correction of Errors in Polarization Based Dynamic Phase Shifting Interferometers

    NASA Astrophysics Data System (ADS)

    Kimbrough, Brad

    2014-10-01

    Polarization based interferometers for single snap-shot measurements allow single frame, quantitative phase acquisition for vibration insensitive measurements of optical surfaces. Application of these polarization based phase sensors requires the test and reference beams of the interferometer to be orthogonally polarized. As with all polarization based interferometers, these systems can suffer from phase dependent errors resulting from systematic polarization aberrations. This type of measurement error presents a particular challenge because it varies in magnitude both spatially and temporally between each measurement. In this article, a general discussion of phase calculation error is presented. We then present an algorithm that is capable of mitigating phase-dependent measurement errors on-the-fly. The algorithm implementation is non-iterative providing sensor frame rate limited phase calculations. Finally, results are presented for both a high numerical aperture system, where the residual error is reduced to the shot noise limit, and a system with significant birefringence in the test arm.

  3. Error Management Behavior in Classrooms: Teachers' Responses to Student Mistakes

    ERIC Educational Resources Information Center

    Tulis, Maria

    2013-01-01

    Only a few studies have focused on how teachers deal with mistakes in actual classroom settings. Teachers' error management behavior was analyzed based on data obtained from direct (Study 1) and videotaped systematic observation (Study 2), and students' self-reports. In Study 3 associations between students' and teachers' attitudes towards…

  4. Pitch Error Analysis of Young Piano Students' Music Reading Performances

    ERIC Educational Resources Information Center

    Rut Gudmundsdottir, Helga

    2010-01-01

    This study analyzed the music reading performances of 6-13-year-old piano students (N = 35) in their second year of piano study. The stimuli consisted of three piano pieces, systematically constructed to vary in terms of left-hand complexity and input simultaneity. The music reading performances were recorded digitally and a code of error analysis…

  5. Non-Gaussian Error Distributions of LMC Distance Moduli Measurements

    NASA Astrophysics Data System (ADS)

    Crandall, Sara; Ratra, Bharat

    2015-12-01

    We construct error distributions for a compilation of 232 Large Magellanic Cloud (LMC) distance moduli values from de Grijs et al. that give an LMC distance modulus of (m - M)0 = 18.49 ± 0.13 mag (median and 1σ symmetrized error). Central estimates found from weighted mean and median statistics are used to construct the error distributions. The weighted mean error distribution is non-Gaussian—flatter and broader than Gaussian—with more (less) probability in the tails (center) than is predicted by a Gaussian distribution; this could be the consequence of unaccounted-for systematic uncertainties. The median statistics error distribution, which does not make use of the individual measurement errors, is also non-Gaussian—more peaked than Gaussian—with less (more) probability in the tails (center) than is predicted by a Gaussian distribution; this could be the consequence of publication bias and/or the non-independence of the measurements. We also construct the error distributions of 247 SMC distance moduli values from de Grijs & Bono. We find a central estimate of (m - M)0 = 18.94 ± 0.14 mag (median and 1σ symmetrized error), and similar probabilities for the error distributions.
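
    A brief sketch of the two central estimates used above and of the per-measurement deviations (in units of each quoted error) from which a weighted-mean error distribution can be built. The toy values, and the use of the 16th/84th percentiles for the symmetrized median-statistics width, are assumptions for illustration only.

```python
import numpy as np

# Toy compilation of distance moduli with quoted 1-sigma errors.
mu = np.array([18.45, 18.52, 18.60, 18.38, 18.49, 18.55])
sig = np.array([0.05, 0.10, 0.08, 0.12, 0.06, 0.09])

w = 1.0 / sig**2
wmean = np.sum(w * mu) / np.sum(w)
wmean_err = 1.0 / np.sqrt(np.sum(w))
n_sigma = (mu - wmean) / sig            # deviations that feed the error distribution

# Median statistics ignore the individual errors; a symmetrized width can be
# taken from the 16th/84th percentiles of the values themselves.
med = np.median(mu)
sym_err = 0.5 * (np.percentile(mu, 84.1) - np.percentile(mu, 15.9))
print(round(wmean, 3), round(wmean_err, 3), round(med, 3), round(sym_err, 3), np.round(n_sigma, 2))
```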

  6. Reviewing the literature, how systematic is systematic?

    PubMed

    MacLure, Katie; Paudyal, Vibhu; Stewart, Derek

    2016-06-01

    Introduction Professor Archibald Cochrane, after whom the Cochrane Collaboration is named, was influential in promoting evidence-based clinical practice. He called for "relevant, valid research" to underpin all aspects of healthcare. Systematic reviews of the literature are regarded as a high quality source of cumulative evidence but it is unclear how truly systematic they, or other review articles, are or 'how systematic is systematic?' Today's evidence-based review industry is a burgeoning mix of specialist terminology, collaborations and foundations, databases, portals, handbooks, tools, criteria and training courses. Aim of the review This study aims to identify uses and types of reviews, key issues in planning, conducting, reporting and critiquing reviews, and factors which limit claims to be systematic. Method A rapid review of review articles published in IJCP. Results This rapid review identified 17 review articles published in IJCP between 2010 and 2015 inclusive. It explored the use of different types of review article, the variation and widely available range of guidelines, checklists and criteria which, through systematic application, aim to promote best practice. It also identified common pitfalls in endeavouring to conduct reviews of the literature systematically. Discussion Although a limited set of IJCP reviews were identified, there is clear evidence of the variation in adoption and application of systematic methods. The burgeoning evidence industry offers the tools and guidelines required to conduct systematic reviews, and other types of review, systematically. This rapid review was limited to the database of one journal over a period of 6 years. Although this review was conducted systematically, it is not presented as a systematic review. Conclusion As a research community we have yet to fully engage with readily available guidelines and tools which would help to avoid the common pitfalls. Therefore the question remains, of not just IJCP but

  7. Speech Errors, Error Correction, and the Construction of Discourse.

    ERIC Educational Resources Information Center

    Linde, Charlotte

    Speech errors have been used in the construction of production models of the phonological and semantic components of language, and for a model of interactional processes. Errors also provide insight into how speakers plan discourse and syntactic structure. Different types of discourse exhibit different types of error. The present data are taken…

  8. Magnetic nanoparticle thermometer: an investigation of minimum error transmission path and AC bias error.

    PubMed

    Du, Zhongzhou; Su, Rijian; Liu, Wenzhong; Huang, Zhixing

    2015-01-01

    The signal transmission module of a magnetic nanoparticle thermometer (MNPT) was established in this study to analyze the error sources introduced during the signal flow in the hardware system. The underlying error sources that significantly affected the precision of the MNPT were determined through mathematical modeling and simulation. A transfer module path with the minimum error in the hardware system was then proposed through the analysis of the variations of the system error caused by the significant error sources when the signal flowed through the signal transmission module. In addition, a system parameter, named the signal-to-AC bias ratio (i.e., the ratio between the signal and AC bias), was identified as a direct determinant of the precision of the measured temperature. The temperature error was below 0.1 K when the signal-to-AC bias ratio was higher than 80 dB, and other system errors were not considered. The temperature error was below 0.1 K in the experiments with a commercial magnetic fluid (Sample SOR-10, Ocean Nanotechnology, Springdale, AR, USA) when the hardware system of the MNPT was designed with the aforementioned method. PMID:25875188
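
    The 80 dB threshold quoted above can be checked with a one-line calculation. The sketch assumes the ratio is an amplitude ratio (20·log10 convention); the paper may define it differently, so both the convention and the numbers are illustrative.

```python
import math

def signal_to_ac_bias_db(signal_amplitude, ac_bias_amplitude):
    """Signal-to-AC-bias ratio in dB, assuming the amplitude (20*log10) convention."""
    return 20.0 * math.log10(signal_amplitude / ac_bias_amplitude)

print(signal_to_ac_bias_db(1.0, 1e-4))   # 80.0 dB, the threshold quoted above
```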

  9. Magnetic Nanoparticle Thermometer: An Investigation of Minimum Error Transmission Path and AC Bias Error

    PubMed Central

    Du, Zhongzhou; Su, Rijian; Liu, Wenzhong; Huang, Zhixing

    2015-01-01

    The signal transmission module of a magnetic nanoparticle thermometer (MNPT) was established in this study to analyze the error sources introduced during the signal flow in the hardware system. The underlying error sources that significantly affected the precision of the MNPT were determined through mathematical modeling and simulation. A transfer module path with the minimum error in the hardware system was then proposed through the analysis of the variations of the system error caused by the significant error sources when the signal flowed through the signal transmission module. In addition, a system parameter, named the signal-to-AC bias ratio (i.e., the ratio between the signal and AC bias), was identified as a direct determinant of the precision of the measured temperature. The temperature error was below 0.1 K when the signal-to-AC bias ratio was higher than 80 dB, and other system errors were not considered. The temperature error was below 0.1 K in the experiments with a commercial magnetic fluid (Sample SOR-10, Ocean Nanotechnology, Springdale, AR, USA) when the hardware system of the MNPT was designed with the aforementioned method. PMID:25875188

  10. Addressing medical errors in hand surgery.

    PubMed

    Johnson, Shepard P; Adkinson, Joshua M; Chung, Kevin C

    2014-09-01

    Influential think tanks such as the Institute of Medicine have raised awareness about the implications of medical errors. In response, organizations, medical societies, and hospitals have initiated programs to decrease the incidence and prevent adverse effects of these errors. Surgeons deal with the direct implications of adverse events involving patients. In addition to managing the physical consequences, they are confronted with ethical and social issues when caring for a harmed patient. Although there is considerable effort to implement system-wide changes, there is little guidance for hand surgeons on how to address medical errors. Admitting an error by a physician is difficult, but a transparent environment where patients are notified of errors and offered consolation and compensation is essential to maintain physician-patient trust. Furthermore, equipping hand surgeons with a guide for addressing medical errors will help identify system failures, provide learning points for safety improvement, decrease litigation against physicians, and demonstrate a commitment to ethical and compassionate medical care. PMID:25154576

  11. Medication Errors: New EU Good Practice Guide on Risk Minimisation and Error Prevention.

    PubMed

    Goedecke, Thomas; Ord, Kathryn; Newbould, Victoria; Brosch, Sabine; Arlett, Peter

    2016-06-01

    A medication error is an unintended failure in the drug treatment process that leads to, or has the potential to lead to, harm to the patient. Reducing the risk of medication errors is a shared responsibility between patients, healthcare professionals, regulators and the pharmaceutical industry at all levels of healthcare delivery. In 2015, the EU regulatory network released a two-part good practice guide on medication errors to support both the pharmaceutical industry and regulators in the implementation of the changes introduced with the EU pharmacovigilance legislation. These changes included a modification of the 'adverse reaction' definition to include events associated with medication errors, and the requirement for national competent authorities responsible for pharmacovigilance in EU Member States to collaborate and exchange information on medication errors resulting in harm with national patient safety organisations. To facilitate reporting and learning from medication errors, a clear distinction has been made in the guidance between medication errors resulting in adverse reactions, medication errors without harm, intercepted medication errors and potential errors. This distinction is supported by an enhanced MedDRA(®) terminology that allows for coding all stages of the medication use process where the error occurred in addition to any clinical consequences. To better understand the causes and contributing factors, individual case safety reports involving an error should be followed-up with the primary reporter to gather information relevant for the conduct of root cause analysis where this may be appropriate. Such reports should also be summarised in periodic safety update reports and addressed in risk management plans. Any risk minimisation and prevention strategy for medication errors should consider all stages of a medicinal product's life-cycle, particularly the main sources and types of medication errors during product development. This article

  12. Different setup errors assessed by weekly cone-beam computed tomography on different registration in nasopharyngeal carcinoma treated with intensity-modulated radiation therapy

    PubMed Central

    Su, Jiqing; Chen, Wen; Yang, Huiyun; Hong, Jidong; Zhang, Zijian; Yang, Guangzheng; Li, Li; Wei, Rui

    2015-01-01

    The study aimed to investigate the differences in setup errors for different registration sites in the treatment of nasopharyngeal carcinoma based on weekly cone-beam computed tomography (CBCT). Thirty nasopharyngeal cancer patients scheduled to undergo intensity-modulated radiotherapy (IMRT) were prospectively enrolled in the study. Each patient had a weekly CBCT before radiation therapy. In the entire study, 201 CBCT scans were obtained. The scans were registered to the planning CT to determine the differences in setup errors at different registration sites. Different registration sites were represented by bony landmarks: the nasal septum and pterygoid process represent the head, cervical vertebrae 1–3 represent the upper neck, and cervical vertebrae 4–6 represent the lower neck. Patient positioning errors were recorded in the right–left (RL), superior–inferior (SI), and anterior–posterior (AP) directions over the course of radiotherapy. Planning target volume margins were calculated from the systematic and random errors. From this study, we conclude that setup errors occur in the RL, SI, and AP directions for nasopharyngeal carcinoma patients undergoing IMRT. In addition, the head and neck setup errors differ with statistical significance, the neck setup error being greater than the head setup error over the course of radiotherapy. In our institution, we recommend planning target volume margins of 3.0 mm in the RL direction, 1.3 mm in the SI direction, and 2.6 mm in the AP direction for nasopharyngeal cancer patients undergoing IMRT with weekly CBCT scans. PMID:26396530
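
    The abstract reports margins derived from the systematic and random setup errors but does not spell out the recipe. A commonly used choice is the van Herk-style margin M = 2.5·Sigma + 0.7·sigma; the sketch below applies that recipe to illustrative per-direction values. Both the formula choice and the numbers are assumptions, not taken from the study.

```python
# Hedged sketch: PTV margin from systematic (Sigma) and random (sigma) setup
# errors per direction, using the common 2.5*Sigma + 0.7*sigma recipe.
def ptv_margin_mm(sigma_systematic, sigma_random):
    return 2.5 * sigma_systematic + 0.7 * sigma_random

illustrative_errors_mm = {"RL": (1.0, 1.2), "SI": (0.4, 0.6), "AP": (0.9, 1.0)}
for axis, (big_sigma, small_sigma) in illustrative_errors_mm.items():
    print(axis, round(ptv_margin_mm(big_sigma, small_sigma), 1), "mm")
```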

  13. On the undetected error probability of a concatenated coding scheme for error control

    NASA Technical Reports Server (NTRS)

    Deng, H.; Costello, D. J., Jr.

    1984-01-01

    Consider a concatenated coding scheme for error control on a binary symmetric channel, called the inner channel. The bit error rate (BER) of the channel is correspondingly called the inner BER, and is denoted by epsilon(sub i). Two linear block codes, C(sub f) and C(sub b), are used. The inner code C(sub f), called the frame code, is an (n,k) systematic binary block code with minimum distance d(sub f). The frame code is designed to correct t or fewer errors and simultaneously detect gamma (gamma >= t) or fewer errors, where t + gamma + 1 <= d(sub f). The outer code C(sub b) is either an (n(sub b), k(sub b)) binary block code with n(sub b) = mk, or an (n(sub b), k(sub b)) maximum distance separable (MDS) code with symbols from GF(q), where q = 2(sup b) and the code length n(sub b) satisfies n(sub b) = mk. The integer m is the number of frames. The outer code is designed for error detection only.
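
    The simultaneous correction/detection condition reconstructed above (t + gamma + 1 <= d with gamma >= t) is the standard trade-off for block codes; the sketch below simply enumerates the (t, gamma) pairs that a given minimum distance allows. It is an illustration of the condition, not code from the paper.

```python
# Standard simultaneous correction/detection trade-off for a block code with
# minimum distance d: correct up to t errors while detecting up to g >= t
# errors whenever t + g + 1 <= d.
def capabilities(d):
    return [(t, g) for t in range(d) for g in range(t, d) if t + g + 1 <= d]

print(capabilities(5))   # d = 5 allows, e.g., (t=1, g=3) or (t=2, g=2)
```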

  14. Standard Errors for Matrix Correlations.

    ERIC Educational Resources Information Center

    Ogasawara, Haruhiko

    1999-01-01

    Derives the asymptotic standard errors and intercorrelations for several matrix correlations assuming multivariate normality for manifest variables and derives the asymptotic standard errors of the matrix correlations for two factor-loading matrices. (SLD)

  15. Effect of patient setup errors on simultaneously integrated boost head and neck IMRT treatment plans

    SciTech Connect

    Siebers, Jeffrey V. . E-mail: jsiebers@vcu.edu; Keall, Paul J.; Wu Qiuwen; Williamson, Jeffrey F.; Schmidt-Ullrich, Rupert K.

    2005-10-01

    Purpose: The purpose of this study is to determine dose delivery errors that could result from random and systematic setup errors for head-and-neck patients treated using the simultaneous integrated boost (SIB)-intensity-modulated radiation therapy (IMRT) technique. Methods and Materials: Twenty-four patients who participated in an intramural Phase I/II parotid-sparing IMRT dose-escalation protocol using the SIB treatment technique had their dose distributions reevaluated to assess the impact of random and systematic setup errors. The dosimetric effect of random setup error was simulated by convolving the two-dimensional fluence distribution of each beam with the random setup error probability density distribution. Random setup errors of σ = 1, 3, and 5 mm were simulated. Systematic setup errors were simulated by randomly shifting the patient isocenter along each of the three Cartesian axes, with each shift selected from a normal distribution. Systematic setup error distributions with Σ = 1.5 and 3.0 mm along each axis were simulated. Combined systematic and random setup errors were simulated for Σ = σ = 1.5 and 3.0 mm along each axis. For each dose calculation, the gross tumor volume (GTV) dose received by 98% of the volume (D98), clinical target volume (CTV) D90, nodes D90, cord D2, and parotid D50 and parotid mean dose were evaluated with respect to the plan used for treatment for the structure dose and for an effective planning target volume (PTV) with a 3-mm margin. Results: Simultaneous integrated boost-IMRT head-and-neck treatment plans were found to be less sensitive to random setup errors than to systematic setup errors. For random-only errors, errors exceeded 3% only when the random setup error σ exceeded 3 mm. Simulated systematic setup errors with Σ = 1.5 mm resulted in approximately 10% of plans having more than a 3% dose error, whereas Σ = 3.0 mm resulted in half of the plans having
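
    The random-error simulation described above (convolving each beam's fluence with the setup-error probability density) can be sketched as a Gaussian blur of the fluence map. The field shape, grid spacing, and the use of scipy's gaussian_filter are illustrative assumptions; the study's actual fluence maps and dose engine are not reproduced.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

pixel_mm = 1.0
sigma_mm = 3.0                            # random setup error (1 sigma)
fluence = np.zeros((100, 100))
fluence[30:70, 30:70] = 1.0               # toy open field

# Random setup error blurs the delivered fluence; a systematic error would
# instead shift the whole map by a fixed offset drawn once per plan.
blurred = gaussian_filter(fluence, sigma=sigma_mm / pixel_mm)
print(round(blurred[50, 50], 2), round(blurred[50, 30], 2))   # field centre vs. field edge
```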

  16. Grammatical Errors and Communication Breakdown.

    ERIC Educational Resources Information Center

    Tomiyama, Machiko

    This study investigated the relationship between grammatical errors and communication breakdown by examining native speakers' ability to correct grammatical errors. The assumption was that communication breakdown exists to a certain degree if a native speaker cannot correct the error or if the correction distorts the information intended to be…

  17. Dose error analysis for a scanned proton beam delivery system

    NASA Astrophysics Data System (ADS)

    Coutrakon, G.; Wang, N.; Miller, D. W.; Yang, Y.

    2010-12-01

    All particle beam scanning systems are subject to dose delivery errors due to errors in position, energy and intensity of the delivered beam. In addition, finite scan speeds, beam spill non-uniformities, and delays in detector, detector electronics and magnet responses will all contribute errors in delivery. In this paper, we present dose errors for an 8 × 10 × 8 cm3 target of uniform water equivalent density with 8 cm spread out Bragg peak and a prescribed dose of 2 Gy. Lower doses are also analyzed and presented later in the paper. Beam energy errors and errors due to limitations of scanning system hardware have been included in the analysis. By using Gaussian shaped pencil beams derived from measurements in the research room of the James M Slater Proton Treatment and Research Center at Loma Linda, CA and executing treatment simulations multiple times, statistical dose errors have been calculated in each 2.5 mm cubic voxel in the target. These errors were calculated by delivering multiple treatments to the same volume and calculating the rms variation in delivered dose at each voxel in the target. The variations in dose were the result of random beam delivery errors such as proton energy, spot position and intensity fluctuations. The results show that with reasonable assumptions of random beam delivery errors, the spot scanning technique yielded an rms dose error in each voxel less than 2% or 3% of the 2 Gy prescribed dose. These calculated errors are within acceptable clinical limits for radiation therapy.
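
    The per-voxel rms error computation described above amounts to taking the standard deviation of delivered dose across repeated simulated treatments and expressing it as a fraction of the prescription. The grid size, number of simulated deliveries, and noise level below are illustrative assumptions, not the study's data.

```python
import numpy as np

rng = np.random.default_rng(7)
prescribed_gy = 2.0
n_deliveries, grid = 30, (32, 40, 32)              # 2.5 mm voxels spanning an 8x10x8 cm target
doses = prescribed_gy + rng.normal(0.0, 0.02, size=(n_deliveries, *grid))

rms_error_gy = doses.std(axis=0, ddof=1)           # rms variation per voxel across deliveries
percent_of_rx = 100.0 * rms_error_gy / prescribed_gy
print(round(percent_of_rx.max(), 2), "% worst-voxel rms error")
```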

  18. Self-calibration of terrestrial laser scanners: selection of the best geometric additional parameters

    NASA Astrophysics Data System (ADS)

    Lerma, J. L.; García-San-Miguel, D.

    2014-05-01

    Systematic errors are present in laser scanning system observations due to manufacturer imperfections, wearing over time, vibrations, changing environmental conditions and, last but not least, involuntary hits. To achieve maximum quality and rigorous measurements from terrestrial laser scanners, a least squares estimation of additional calibration parameters can be used to model the a priori unknown systematic errors and therefore improve output observations. The selection of the right set of additional parameters is not trivial and requires laborious statistical analysis. Based on this requirement, this article presents an approach to determine the best set of additional parameters which provides the best mathematical solution based on a dimensionless quality index. The best set of additional parameters is the one which yields the best quality index (i.e. the minimum value of the dimensionless index) for the group of observables, exterior orientation parameters and reference points. Calibration performance is tested using both a phase shift continuous wave scanner, FARO PHOTON 880, and a pulse-based time-of-flight system, Leica HDS3000. The improvement achieved after the geometric calibration is 30% for the former and 70% for the latter.

  19. Errors inducing radiation overdoses.

    PubMed

    Grammaticos, Philip C

    2013-01-01

    There is no doubt that equipment that emits radiation and is used for therapeutic purposes should be checked often for possible administration of radiation overdoses to patients. Technologists, radiation safety officers, radiologists, medical physicists, healthcare providers and administration should take proper care of this issue. "We must be beneficial and not harmful to the patients", according to the Hippocratic doctrine. Cases of radiation overdose are reported often; a series of such cases has recently been reported, and the doctors who were responsible received heavy punishments. It is much better to prevent than to treat an error or a disease. A Personal Smart Card or Score Card has been suggested for every patient undergoing therapeutic and/or diagnostic procedures that use radiation. Taxonomy may also help. PMID:24251304

  20. Medical device error.

    PubMed

    Goodman, Gerald R

    2002-12-01

    This article discusses principal concepts for the analysis, classification, and reporting of problems involving medical device technology. We define a medical device in regulatory terminology and define and discuss concepts and terminology used to distinguish the causes and sources of medical device problems. Database classification systems for medical device failure tracking are presented, as are sources of information on medical device failures. The importance of near-accident reporting is discussed to alert users that reported medical device errors are typically limited to those that have caused an injury or death. This can represent only a fraction of the true number of device problems. This article concludes with a summary of the most frequently reported medical device failures by technology type, clinical application, and clinical setting. PMID:12400632

  1. Systematic Alternatives to Proposal Preparation.

    ERIC Educational Resources Information Center

    Knirk, Frederick G.; And Others

    Educators who have to develop proposals must be concerned with making effective decisions. This paper discusses a number of educational systems management tools which can be used to reduce the time and effort in developing a proposal. In addition, ways are introduced to systematically increase the quality of the proposal through the development of…

  2. Error analysis and system optimization of non-null aspheric testing system

    NASA Astrophysics Data System (ADS)

    Luo, Yongjie; Yang, Yongying; Liu, Dong; Tian, Chao; Zhuo, Yongmo

    2010-10-01

    A non-null aspheric testing system, which employs a partial null lens (PNL for short) and the reverse iterative optimization reconstruction (ROR for short) technique, is proposed in this paper. Based on system modeling in ray tracing software, the parameter of each optical element is optimized, which makes the system model more precise. The systematic error of the non-null aspheric testing system is analyzed and categorized into two types: the error due to the surface parameters of the PNL in the system modeling, and the remaining error from the non-null interferometer, which is handled by an error-storage-subtraction approach. Experimental results show that, after the systematic error is removed from the test result of the non-null aspheric testing system, the aspheric surface is precisely reconstructed by the ROR technique, and accounting for the systematic error greatly increases the test accuracy of the non-null aspheric testing system.

  3. Identification of Error Patterns in Terminal-Area ATC Communications

    NASA Technical Reports Server (NTRS)

    Quinn, Cheryl; Walter, Kim E.; Rosekind, Mark R. (Technical Monitor)

    1997-01-01

    Advancing air traffic management technologies have enabled a greater number of aircraft to use the same airspace more effectively. As aircraft separations are reduced and final approaches are more finely timed, there is less room for error. The present study examined 122 terminal-area, loss-of-separation and procedure violation incidents reported to the Aviation Safety Reporting System (ASRS) by air traffic controllers. Narrative description codes were used for the incidents for type of violation, contributing factors, recovery strategies, and consequences. Usually, multiple errors occurred prior to the violation. Error sequences were analyzed and common patterns of errors were identified. In half of the incidents, errors were noticed in time to correct mistakes. Of these, additional errors were committed during the recovery attempt in almost 43%. This analysis shows that redundancies in the present air traffic control system may not be sufficient to support large increases in traffic density. Error prevention and design considerations for air traffic management systems are discussed.

  4. Structured error recovery for code-word-stabilized quantum codes

    SciTech Connect

    Li Yunfan; Dumer, Ilya; Grassl, Markus; Pryadko, Leonid P.

    2010-05-15

    Code-word-stabilized (CWS) codes are, in general, nonadditive quantum codes that can correct errors by an exhaustive search of different error patterns, similar to the way that we decode classical nonlinear codes. For an n-qubit quantum code correcting errors on up to t qubits, this brute-force approach consecutively tests different errors of weight t or less and employs a separate n-qubit measurement in each test. In this article, we suggest an error grouping technique that allows one to simultaneously test large groups of errors in a single measurement. This structured error recovery technique exponentially reduces the number of measurements by about 3^t times. While it still leaves exponentially many measurements for a generic CWS code, the technique is equivalent to syndrome-based recovery for the special case of additive CWS codes.
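
    The roughly 3^t reduction quoted above can be made concrete by counting Pauli error patterns of weight at most t on n qubits (three nontrivial Pauli operators per errored qubit) and dividing by the group size. The sketch below is a counting illustration only, not the decoding algorithm from the paper.

```python
from math import comb

def num_pauli_errors(n, t):
    """Number of Pauli errors of weight <= t on n qubits (3 choices per errored qubit)."""
    return sum(comb(n, w) * 3**w for w in range(t + 1))

n, t = 10, 2
total = num_pauli_errors(n, t)
print(total, "patterns; grouped measurements ~", total // 3**t)
```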

  5. Fixed-point error analysis of Winograd Fourier transform algorithms

    NASA Technical Reports Server (NTRS)

    Patterson, R. W.; Mcclellan, J. H.

    1978-01-01

    The quantization error introduced by the Winograd Fourier transform algorithm (WFTA) when implemented in fixed-point arithmetic is studied and compared with that of the fast Fourier transform (FFT). The effect of ordering the computational modules and the relative contributions of data quantization error and coefficient quantization error are determined. In addition, the quantization error introduced by the Good-Winograd (GW) algorithm, which uses Good's prime-factor decomposition for the discrete Fourier transform (DFT) together with Winograd's short length DFT algorithms, is studied. Error introduced by the WFTA is, in all cases, worse than that of the FFT. In general, the WFTA requires one or two more bits for data representation to give an error similar to that of the FFT. Error introduced by the GW algorithm is approximately the same as that of the FFT.

  6. Register file soft error recovery

    DOEpatents

    Fleischer, Bruce M.; Fox, Thomas W.; Wait, Charles D.; Muff, Adam J.; Watson, III, Alfred T.

    2013-10-15

    Register file soft error recovery including a system that includes a first register file and a second register file that mirrors the first register file. The system also includes an arithmetic pipeline for receiving data read from the first register file, and error detection circuitry to detect whether the data read from the first register file includes corrupted data. The system further includes error recovery circuitry to insert an error recovery instruction into the arithmetic pipeline in response to detecting the corrupted data. The inserted error recovery instruction replaces the corrupted data in the first register file with a copy of the data from the second register file.

  7. Influence of nonhomogeneous earth on the rms phase error and beam-pointing errors of large, sparse high-frequency receiving arrays

    NASA Astrophysics Data System (ADS)

    Weiner, M. M.

    1994-01-01

    The performance of ground-based high-frequency (HF) receiving arrays is reduced when the array elements have electrically small ground planes. The array rms phase error and beam-pointing errors, caused by multipath rays reflected from a nonhomogeneous Earth, are determined for a sparse array of elements that are modeled as Hertzian dipoles in close proximity to Earth with no ground planes. Numerical results are presented for cases of randomly distributed and systematically distributed Earth nonhomogeneities where one-half of vertically polarized array elements are located in proximity to one type of Earth and the remaining half are located in proximity to a second type of Earth. The maximum rms phase errors, for the cases examined, are 18 deg and 9 deg for randomly distributed and systematically distributed nonhomogeneities, respectively. The maximum beam-pointing errors are 0 and 0.3 beam widths for randomly distributed and systematically distributed nonhomogeneities, respectively.

  8. Social aspects of clinical errors.

    PubMed

    Richman, Joel; Mason, Tom; Mason-Whitehead, Elizabeth; McIntosh, Annette; Mercer, Dave

    2009-08-01

    Clinical errors, whether committed by doctors, nurses or other professions allied to healthcare, remain a sensitive issue requiring open debate and policy formulation in order to reduce them. The literature suggests that the issues underpinning errors made by healthcare professionals involve concerns about patient safety, professional disclosure, apology, litigation, compensation, processes of recording and policy development to enhance quality service. Anecdotally, we are aware of narratives of minor errors, which may well have been covered up and remain officially undisclosed whilst the major errors resulting in damage and death to patients alarm both professionals and public with resultant litigation and compensation. This paper attempts to unravel some of these issues by highlighting the historical nature of clinical errors and drawing parallels to contemporary times by outlining the 'compensation culture'. We then provide an overview of what constitutes a clinical error and review the healthcare professional strategies for managing such errors. PMID:19201405

  9. Diagnostic Errors Study Findings

    MedlinePlus


  10. Retrace error reconstruction based on point characteristic function.

    PubMed

    Yiwei, He; Hou, Xi; Haiyang, Quan; Song, Weihong

    2015-11-01

    Figure-measuring interferometers generally work in the null condition, i.e., the reference rays share the same optical path with the test rays through the imaging system. In this case, apart from field distortion error, the effect of other aberrations cancels out and does not result in measurable systematic error. However, for spatial carrier interferometry and non-null aspheric test cases, the null condition typically cannot be achieved, and there is excess measurement error referred to as retrace error. Previous studies of retrace error generally fall into two categories: one based on 4th-order aberration formalism, the other based on ray tracing through a model of the interferometer. In this paper, the point characteristic function (PCF) is used to analyze retrace error in a Fizeau interferometer working in a high spatial carrier condition. We present the process of reconstructing retrace error with and without element error in detail. Our results are compared with those obtained by ray tracing through an interferometer model. The small difference between them (less than 3%) shows that our method is effective. PMID:26561092

  11. Saccadic adaptation to a systematically varying disturbance.

    PubMed

    Cassanello, Carlos R; Ohl, Sven; Rolfs, Martin

    2016-08-01

    Saccadic adaptation maintains the correct mapping between eye movements and their targets, yet the dynamics of saccadic gain changes in the presence of systematically varying disturbances has not been extensively studied. Here we assessed changes in the gain of saccade amplitudes induced by continuous and periodic postsaccadic visual feedback. Observers made saccades following a sequence of target steps either along the horizontal meridian (Two-way adaptation) or with unconstrained saccade directions (Global adaptation). An intrasaccadic step-following a sinusoidal variation as a function of the trial number (with 3 different frequencies tested in separate blocks)-consistently displaced the target along its vector. The oculomotor system responded to the resulting feedback error by modifying saccade amplitudes in a periodic fashion with similar frequency of variation but lagging the disturbance by a few tens of trials. This periodic response was superimposed on a drift toward stronger hypometria with similar asymptotes and decay rates across stimulus conditions. The magnitude of the periodic response decreased with increasing frequency and was smaller and more delayed for Global than Two-way adaptation. These results suggest that-in addition to the well-characterized return-to-baseline response observed in protocols using constant visual feedback-the oculomotor system attempts to minimize the feedback error by integrating its variation across trials. This process resembles a convolution with an internal response function, whose structure would be determined by coefficients of the learning model. Our protocol reveals this fast learning process in single short experimental sessions, qualifying it for the study of sensorimotor learning in health and disease. PMID:27098027

  12. Experimental quantum error correction with high fidelity

    NASA Astrophysics Data System (ADS)

    Zhang, Jingfu; Gangloff, Dorian; Moussa, Osama; Laflamme, Raymond

    2011-09-01

    More than ten years ago a first step toward quantum error correction (QEC) was implemented [Phys. Rev. Lett. 81, 2152 (1998)]. The work showed there was sufficient control in nuclear magnetic resonance to implement QEC, and demonstrated that the error rate changed from ε to ~ε². In the current work we reproduce a similar experiment using control techniques that have been since developed, such as the pulses generated by gradient ascent pulse engineering algorithm. We show that the fidelity of the QEC gate sequence and the comparative advantage of QEC are appreciably improved. This advantage is maintained despite the errors introduced by the additional operations needed to protect the quantum states.

  13. Experimental quantum error correction with high fidelity

    SciTech Connect

    Zhang Jingfu; Gangloff, Dorian; Moussa, Osama; Laflamme, Raymond

    2011-09-15

    More than ten years ago a first step toward quantum error correction (QEC) was implemented [Phys. Rev. Lett. 81, 2152 (1998)]. The work showed there was sufficient control in nuclear magnetic resonance to implement QEC, and demonstrated that the error rate changed from ε to ~ε². In the current work we reproduce a similar experiment using control techniques that have been since developed, such as the pulses generated by gradient ascent pulse engineering algorithm. We show that the fidelity of the QEC gate sequence and the comparative advantage of QEC are appreciably improved. This advantage is maintained despite the errors introduced by the additional operations needed to protect the quantum states.

  14. Scaling prediction errors to reward variability benefits error-driven learning in humans

    PubMed Central

    Schultz, Wolfram

    2015-01-01

    Effective error-driven learning requires individuals to adapt learning to environmental reward variability. The adaptive mechanism may involve decays in learning rate across subsequent trials, as shown previously, and rescaling of reward prediction errors. The present study investigated the influence of prediction error scaling and, in particular, the consequences for learning performance. Participants explicitly predicted reward magnitudes that were drawn from different probability distributions with specific standard deviations. By fitting the data with reinforcement learning models, we found scaling of prediction errors, in addition to the learning rate decay shown previously. Importantly, the prediction error scaling was closely related to learning performance, defined as accuracy in predicting the mean of reward distributions, across individual participants. In addition, participants who scaled prediction errors relative to standard deviation also presented with more similar performance for different standard deviations, indicating that increases in standard deviation did not substantially decrease “adapters'” accuracy in predicting the means of reward distributions. However, exaggerated scaling beyond the standard deviation resulted in impaired performance. Thus efficient adaptation makes learning more robust to changing variability. PMID:26180123
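
    A minimal sketch of the adaptive mechanism described above: a delta-rule learner whose prediction error is divided by the reward distribution's standard deviation, combined with a decaying learning rate. The update rule, parameter values, and reward distributions are assumptions for illustration; they are not the fitted reinforcement learning models from the study.

```python
import numpy as np

def learn(sd, n_trials=200, gain=5.0, decay=0.02, seed=3):
    """Estimate the mean of a reward distribution using SD-scaled prediction errors."""
    rng = np.random.default_rng(seed)
    estimate = 0.0
    for t in range(n_trials):
        reward = rng.normal(50.0, sd)
        alpha = gain / (1.0 + decay * t)          # learning-rate decay across trials
        scaled_pe = (reward - estimate) / sd      # prediction error in SD units
        estimate += alpha * scaled_pe
    return estimate

# Scaling keeps accuracy similar across low- and high-variability environments.
print(round(learn(sd=5.0), 1), round(learn(sd=20.0), 1))   # both near the true mean of 50
```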

  15. Quantum Error Correction with Biased Noise

    NASA Astrophysics Data System (ADS)

    Brooks, Peter

    Quantum computing offers powerful new techniques for speeding up the calculation of many classically intractable problems. Quantum algorithms can allow for the efficient simulation of physical systems, with applications to basic research, chemical modeling, and drug discovery; other algorithms have important implications for cryptography and internet security. At the same time, building a quantum computer is a daunting task, requiring the coherent manipulation of systems with many quantum degrees of freedom while preventing environmental noise from interacting too strongly with the system. Fortunately, we know that, under reasonable assumptions, we can use the techniques of quantum error correction and fault tolerance to achieve an arbitrary reduction in the noise level. In this thesis, we look at how additional information about the structure of noise, or "noise bias," can improve or alter the performance of techniques in quantum error correction and fault tolerance. In Chapter 2, we explore the possibility of designing certain quantum gates to be extremely robust with respect to errors in their operation. This naturally leads to structured noise where certain gates can be implemented in a protected manner, allowing the user to focus their protection on the noisier unprotected operations. In Chapter 3, we examine how to tailor error-correcting codes and fault-tolerant quantum circuits in the presence of dephasing biased noise, where dephasing errors are far more common than bit-flip errors. By using an appropriately asymmetric code, we demonstrate the ability to improve the amount of error reduction and decrease the physical resources required for error correction. In Chapter 4, we analyze a variety of protocols for distilling magic states, which enable universal quantum computation, in the presence of faulty Clifford operations. Here again there is a hierarchy of noise levels, with a fixed error rate for faulty gates, and a second rate for errors in the distilled

  16. Marker genotyping errors in old data on X-linkage in bipolar illness.

    PubMed

    Gershon, E S

    1991-04-01

    Investigations of linkage markers of the X-chromosome colorblindness region in bipolar manic-depressive illness (BP) have yielded inconsistent results, with linkage accepted in some and rejected in other studies. Although genetic heterogeneity has been proposed as the reason for differences, other possibilities exist, including systematic procedural errors. Statistical evidence for linkage between the markers, Xg and colorblindness, is present in a series of papers on bipolar illness reported in 1972-1975. The linkage implied by this reanalysis is spurious, since the two markers are at opposite ends of the X chromosome. The presumptive reason for this spurious linkage is that it is a result of systematic genotyping errors. The support provided by these data to the X-linkage hypothesis in BP illness is thus diminished. That is, the linkage to illness may depend on systematic errors in marker genotyping. In general, the possible causes of inconsistency between linkage reports may be divided into statistical and systematic causes. Statistical causes would generally consist of chance differences in sampling, such as might occur under genetic heterogeneity. If this occurs, the reports rejecting linkage may be false negatives, or the reports detecting linkage may be false-positive results. Systematic causes of differences among reports could include systematic errors (or variations) in procedures, including ascertainment, diagnosis, genotyping, or analysis. Consistency of the marker map in a particular study with the known marker map is one test for systematic errors in genotyping. PMID:1888383

  17. Estimation of Model Error Variances During Data Assimilation

    NASA Technical Reports Server (NTRS)

    Dee, Dick

    2003-01-01

    Data assimilation is all about understanding the error characteristics of the data and models that are used in the assimilation process. Reliable error estimates are needed to implement observational quality control, bias correction of observations and model fields, and intelligent data selection. Meaningful covariance specifications are obviously required for the analysis as well, since the impact of any single observation strongly depends on the assumed structure of the background errors. Operational atmospheric data assimilation systems still rely primarily on climatological background error covariances. To obtain error estimates that reflect both the character of the flow and the current state of the observing system, it is necessary to solve three problems: (1) how to account for the short-term evolution of errors in the initial conditions; (2) how to estimate the additional component of error caused by model defects; and (3) how to compute the error reduction in the analysis due to observational information. Various approaches are now available that provide approximate solutions to the first and third of these problems. However, the useful accuracy of these solutions very much depends on the size and character of the model errors and the ability to account for them. Model errors represent the real-world forcing of the error evolution in a data assimilation system. Clearly, meaningful model error estimates and/or statistics must be based on information external to the model itself. The most obvious information source is observational, and since the volume of available geophysical data is growing rapidly, there is some hope that a purely statistical approach to model error estimation can be viable. This requires that the observation errors themselves are well understood and quantifiable. We will discuss some of these challenges and present a new sequential scheme for estimating model error variances from observations in the context of an atmospheric data

  18. The 13 errors.

    PubMed

    Flower, J

    1998-01-01

    The reality is that most change efforts fail. McKinsey & Company carried out a fascinating research project on change to "crack the code" on creating and managing change in large organizations. One of the questions they asked--and answered--is why most organizations fail in their efforts to manage change. They found that 80 percent of these failures could be traced to 13 common errors. They are: (1) No winning strategy; (2) failure to make a compelling and urgent case for change; (3) failure to distinguish between decision-driven and behavior-dependent change; (4) over-reliance on structure and systems to change behavior; (5) lack of skills and resources; (6) failure to experiment; (7) leaders' inability or unwillingness to confront how they and their roles must change; (8) failure to mobilize and engage pivotal groups; (9) failure to understand and shape the informal organization; (10) inability to integrate and align all the initiatives; (11) no performance focus; (12) excessively open-ended process; and (13) failure to make the whole process transparent and meaningful to individuals. PMID:10351717

  19. Contour Error Map Algorithm

    NASA Technical Reports Server (NTRS)

    Merceret, Francis; Lane, John; Immer, Christopher; Case, Jonathan; Manobianco, John

    2005-01-01

    The contour error map (CEM) algorithm and the software that implements the algorithm are means of quantifying correlations between sets of time-varying data that are binarized and registered on spatial grids. The present version of the software is intended for use in evaluating numerical weather forecasts against observational sea-breeze data. In cases in which observational data come from off-grid stations, it is necessary to preprocess the observational data to transform them into gridded data. First, the wind direction is gridded and binarized so that D(i,j;n) is the input to CEM based on forecast data and d(i,j;n) is the input to CEM based on gridded observational data. Here, i and j are spatial indices representing 1.25-km intervals along the west-to-east and south-to-north directions, respectively; and n is a time index representing 5-minute intervals. A binary value of D or d = 0 corresponds to an offshore wind, whereas a value of D or d = 1 corresponds to an onshore wind. CEM includes two notable subalgorithms: One identifies and verifies sea-breeze boundaries; the other, which can be invoked optionally, performs an image-erosion function for the purpose of attempting to eliminate river-breeze contributions in the wind fields.
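
    A small sketch of the binarization step described above: gridded wind directions are mapped to onshore (1) / offshore (0) and the forecast grid D is compared against the observed grid d. The onshore sector, grid size, and the simple fraction-agreement score are illustrative assumptions; the CEM contour-identification and image-erosion subalgorithms are not reproduced.

```python
import numpy as np

def binarize(wind_dir_deg, onshore_min=45.0, onshore_max=225.0):
    """Illustrative rule: directions within (onshore_min, onshore_max) count as onshore."""
    return ((wind_dir_deg > onshore_min) & (wind_dir_deg < onshore_max)).astype(int)

rng = np.random.default_rng(4)
forecast_dir = rng.uniform(0.0, 360.0, size=(60, 60))                      # D(i, j) at one time step
observed_dir = (forecast_dir + rng.normal(0.0, 20.0, size=(60, 60))) % 360.0

D, d = binarize(forecast_dir), binarize(observed_dir)
print("fraction agreement:", round(float((D == d).mean()), 2))
```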

  20. Error analysis in laparoscopic surgery

    NASA Astrophysics Data System (ADS)

    Gantert, Walter A.; Tendick, Frank; Bhoyrul, Sunil; Tyrrell, Dana; Fujino, Yukio; Rangel, Shawn; Patti, Marco G.; Way, Lawrence W.

    1998-06-01

    Iatrogenic complications in laparoscopic surgery, as in any field, stem from human error. In recent years, cognitive psychologists have developed theories for understanding and analyzing human error, and the application of these principles has decreased error rates in the aviation and nuclear power industries. The purpose of this study was to apply error analysis to laparoscopic surgery and evaluate its potential for preventing complications. Our approach is based on James Reason's framework using a classification of errors according to three performance levels: at the skill-based performance level, slips are caused by attention failures, and lapses result from memory failures. Rule-based mistakes constitute the second level. Knowledge-based mistakes occur at the highest performance level and are caused by shortcomings in conscious processing. These errors committed by the performer 'at the sharp end' occur in typical situations which are oftentimes brought about by latent failures already built into the system. We present a series of case studies in laparoscopic surgery in which errors are classified and the influence of intrinsic failures and extrinsic system flaws is evaluated. Most serious technical errors in lap surgery stem from a rule-based or knowledge-based mistake triggered by cognitive underspecification due to incomplete or illusory visual input information. Error analysis in laparoscopic surgery should be able to improve human performance, and it should detect and help eliminate system flaws. Complication rates in laparoscopic surgery due to technical errors can thus be considerably reduced.

  1. Nature of the Refractive Errors in Rhesus Monkeys (Macaca mulatta) with Experimentally Induced Ametropias

    PubMed Central

    Qiao-Grider, Ying; Hung, Li-Fang; Kee, Chea-su; Ramamirtham, Ramkumar; Smith, Earl L.

    2010-01-01

    We analyzed the contribution of individual ocular components to vision-induced ametropias in 210 rhesus monkeys. The primary contribution to refractive-error development came from vitreous chamber depth; a minor contribution from corneal power was also detected. However, there was no systematic relationship between refractive error and anterior chamber depth or between refractive error and any crystalline lens parameter. Our results are in good agreement with previous studies in humans, suggesting that the refractive errors commonly observed in humans are created by vision-dependent mechanisms that are similar to those operating in monkeys. This concordance emphasizes the applicability of rhesus monkeys in refractive-error studies. PMID:20600237

  2. Sensitivity of SLR baselines to errors in Earth orientation

    NASA Technical Reports Server (NTRS)

    Smith, D. E.; Christodoulidis, D. C.

    1984-01-01

    The sensitivity of inter-station distances derived from Satellite Laser Ranging (SLR) to errors in Earth orientation is discussed. An analysis experiment is performed which imposes a known polar motion error on all of the arcs used over this interval. The effect of the averaging of the errors over the tracking periods of individual sites is assessed. Baselines between stations that are supported by a global network of tracking stations are only marginally affected by errors in Earth orientation. The global network of stations retains its integrity even in the presence of systematic changes in the coordinate frame. The effect of these coordinate frame changes on the relative locations of the stations is minimal.

  3. Nutrition Informatics Applications in Clinical Practice: a Systematic Review

    PubMed Central

    North, Jennifer C.; Jordan, Kristine C.; Metos, Julie; Hurdle, John F.

    2015-01-01

    Nutrition care and metabolic control contribute to clinical patient outcomes. Biomedical informatics applications represent a way to potentially improve quality and efficiency of nutrition management. We performed a systematic literature review to identify clinical decision support and computerized provider order entry systems used to manage nutrition care. Online research databases were searched using a specific set of keywords. Additionally, bibliographies were referenced for supplemental citations. Four independent reviewers selected sixteen studies out of 364 for review. These papers described adult and neonatal nutrition support applications, blood glucose management applications, and other nutrition applications. Overall, results indicated that computerized interventions could contribute to improved patient outcomes and provider performance. Specifically, computer systems in the clinical setting improved nutrient delivery, rates of malnutrition, weight loss, blood glucose values, clinician efficiency, and error rates. In conclusion, further investigation of informatics applications on nutritional and performance outcomes utilizing rigorous study designs is recommended. PMID:26958233

  4. ANALYSIS OF CASE-CONTROL DATA WITH COVARIATE MEASUREMENT ERROR: APPLICATION TO DIET AND COLON CANCER

    EPA Science Inventory

    We propose a method for estimating odds ratios from case-control data in which covariates are subject to measurement error. The measurement error may contain both a random component and a systematic difference between cases and controls (recall bias). Multivariate normal discriminant ...

  5. The Role of Supralexical Prosodic Units in Speech Production: Evidence from the Distribution of Speech Errors

    ERIC Educational Resources Information Center

    Choe, Wook Kyung

    2013-01-01

    The current dissertation represents one of the first systematic studies of the distribution of speech errors within supralexical prosodic units. Four experiments were conducted to gain insight into the specific role of these units in speech planning and production. The first experiment focused on errors in adult English. These were found to be…

  6. Error correction for IFSAR

    DOEpatents

    Doerry, Armin W.; Bickel, Douglas L.

    2002-01-01

    IFSAR images of a target scene are generated by compensating for variations in vertical separation between collection surfaces defined for each IFSAR antenna by adjusting the baseline projection during image generation. In addition, height information from all antennas is processed before processing range and azimuth information in a normal fashion to create the IFSAR image.

  7. Estimating errors in least-squares fitting

    NASA Technical Reports Server (NTRS)

    Richter, P. H.

    1995-01-01

    While least-squares fitting procedures are commonly used in data analysis and are extensively discussed in the literature devoted to this subject, the proper assessment of errors resulting from such fits has received relatively little attention. The present work considers statistical errors in the fitted parameters, as well as in the values of the fitted function itself, resulting from random errors in the data. Expressions are derived for the standard error of the fit, as a function of the independent variable, for the general nonlinear and linear fitting problems. Additionally, closed-form expressions are derived for some examples commonly encountered in the scientific and engineering fields, namely ordinary polynomial and Gaussian fitting functions. These results have direct application to the assessment of the antenna gain and system temperature characteristics, in addition to a broad range of problems in data analysis. The effects of the nature of the data and the choice of fitting function on the ability to accurately model the system under study are discussed, and some general rules are deduced to assist workers intent on maximizing the amount of information obtained from a given set of measurements.
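    The standard-error expressions described above are easy to exercise numerically. The sketch below is our own illustration, not the report's code: it fits an ordinary polynomial with NumPy, reads the parameter standard errors off the covariance matrix, and propagates them to a pointwise standard error of the fitted function; all data values are invented.

```python
import numpy as np

# Synthetic data: a quadratic trend with Gaussian noise (illustrative only).
rng = np.random.default_rng(0)
x = np.linspace(0.0, 10.0, 50)
y = 1.5 + 0.8 * x - 0.05 * x**2 + rng.normal(scale=0.3, size=x.size)

# Ordinary polynomial fit; cov=True also returns the parameter covariance matrix.
coeffs, cov = np.polyfit(x, y, deg=2, cov=True)
param_err = np.sqrt(np.diag(cov))      # standard errors of the fitted parameters

# Standard error of the fitted function at each x via error propagation:
# var(f(x)) = J C J^T with Jacobian J = [x^2, x, 1] for a degree-2 polynomial.
J = np.vander(x, N=3)                  # columns: x^2, x, 1 (same order as coeffs)
fit_err = np.sqrt(np.einsum('ij,jk,ik->i', J, cov, J))

print("coefficients:        ", coeffs)
print("parameter std errors:", param_err)
print("max std error of fit:", fit_err.max())
```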

  8. Knowledge of error-correcting protocol helps in individual eavesdropping

    NASA Astrophysics Data System (ADS)

    Horoshko, D. B.

    2007-06-01

    The quantum key distribution protocol BB84 combined with the repetition protocol for error correction is analyzed from the viewpoint of security against individual eavesdropping empowered by quantum memory. We show that mere knowledge of the error correction protocol changes the optimal attack and provides the eavesdropper with additional information about the generated key.

  9. Grammatical Errors Produced by English Majors: The Translation Task

    ERIC Educational Resources Information Center

    Mohaghegh, Hamid; Zarandi, Fatemeh Mahmoudi; Shariati, Mohammad

    2011-01-01

    This study investigated the frequency of the grammatical errors related to the four categories of preposition, relative pronoun, article, and tense using the translation task. In addition, the frequencies of these grammatical errors in different categories and in each category were examined. The quantitative component of the study further looked…

  10. Error characterisation of global active and passive microwave soil moisture datasets

    NASA Astrophysics Data System (ADS)

    Dorigo, W. A.; Scipal, K.; Parinussa, R. M.; Liu, Y. Y.; Wagner, W.; de Jeu, R. A. M.; Naeimi, V.

    2010-12-01

    Understanding the error structures of remotely sensed soil moisture observations is essential for correctly interpreting observed variations and trends in the data or assimilating them in hydrological or numerical weather prediction models. Nevertheless, a spatially coherent assessment of the quality of the various globally available datasets is often hampered by the limited availability over space and time of reliable in-situ measurements. As an alternative, this study explores the triple collocation error estimation technique for assessing the relative quality of several globally available soil moisture products from active (ASCAT) and passive (AMSR-E and SSM/I) microwave sensors. The triple collocation is a powerful statistical tool to estimate the root mean square error while simultaneously solving for systematic differences in the climatologies of a set of three linearly related data sources with independent error structures. Prerequisite for this technique is the availability of a sufficiently large number of timely corresponding observations. In addition to the active and passive satellite-based datasets, we used the ERA-Interim and GLDAS-NOAH reanalysis soil moisture datasets as a third, independent reference. The prime objective is to reveal trends in uncertainty related to different observation principles (passive versus active), the use of different frequencies (C-, X-, and Ku-band) for passive microwave observations, and the choice of the independent reference dataset (ERA-Interim versus GLDAS-NOAH). The results suggest that the triple collocation method provides realistic error estimates. Observed spatial trends agree well with the existing theory and studies on the performance of different observation principles and frequencies with respect to land cover and vegetation density. In addition, if all theoretical prerequisites are fulfilled (e.g. a sufficiently large number of common observations is available and errors of the different datasets are
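    As a rough, hedged illustration of the triple collocation idea summarized above (not the authors' implementation), the following sketch estimates the random-error standard deviation of three collocated, linearly related series from their cross-covariances, assuming mutually uncorrelated errors and a shared climatology; the toy "active", "passive" and "model" series are invented.

```python
import numpy as np

def triple_collocation_rmse(x, y, z):
    """Estimate each dataset's random-error std dev from three collocated,
    linearly related series with mutually uncorrelated errors."""
    c = np.cov(np.vstack([x, y, z]))        # 3x3 sample covariance matrix
    var_ex = c[0, 0] - c[0, 1] * c[0, 2] / c[1, 2]
    var_ey = c[1, 1] - c[0, 1] * c[1, 2] / c[0, 2]
    var_ez = c[2, 2] - c[0, 2] * c[1, 2] / c[0, 1]
    return np.sqrt([max(v, 0.0) for v in (var_ex, var_ey, var_ez)])

# Toy example: a common soil-moisture signal observed with independent noise.
rng = np.random.default_rng(1)
truth = rng.normal(0.25, 0.05, size=5000)
active  = truth + rng.normal(0, 0.02, size=truth.size)   # e.g. a scatterometer-like product
passive = truth + rng.normal(0, 0.04, size=truth.size)   # e.g. a radiometer-like product
model   = truth + rng.normal(0, 0.03, size=truth.size)   # e.g. a reanalysis product

print(triple_collocation_rmse(active, passive, model))   # roughly [0.02, 0.04, 0.03]
```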

  11. Dopamine reward prediction error coding

    PubMed Central

    Schultz, Wolfram

    2016-01-01

    Reward prediction errors consist of the differences between received and predicted rewards. They are crucial for basic forms of learning about rewards and make us strive for more rewards—an evolutionarily beneficial trait. Most dopamine neurons in the midbrain of humans, monkeys, and rodents signal a reward prediction error; they are activated by more reward than predicted (positive prediction error), remain at baseline activity for fully predicted rewards, and show depressed activity with less reward than predicted (negative prediction error). The dopamine signal increases nonlinearly with reward value and codes formal economic utility. Drugs of addiction generate, hijack, and amplify the dopamine reward signal and induce exaggerated, uncontrolled dopamine effects on neuronal plasticity. The striatum, amygdala, and frontal cortex also show reward prediction error coding, but only in subpopulations of neurons. Thus, the important concept of reward prediction errors is implemented in neuronal hardware. PMID:27069377
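    The prediction-error quantity reviewed here maps directly onto simple reinforcement-learning updates. The following sketch is an illustrative assumption on our part, not the review's model: a Rescorla-Wagner-style loop in which the prediction error is the received reward minus the predicted value, and the prediction moves a fixed fraction of the way toward each outcome.

```python
# Minimal prediction-error learning loop: delta = received - predicted,
# and the prediction moves a fraction (learning rate) of the way toward it.
learning_rate = 0.2
predicted_value = 0.0
rewards = [1.0] * 10 + [0.0] * 5          # reward delivered, then omitted

for r in rewards:
    delta = r - predicted_value            # positive, zero, or negative prediction error
    predicted_value += learning_rate * delta
    print(f"reward={r:.1f}  prediction_error={delta:+.3f}  value={predicted_value:.3f}")
```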

  12. [Error factors in spirometry].

    PubMed

    Quadrelli, S A; Montiel, G C; Roncoroni, A J

    1994-01-01

    Spirometry is the method most frequently used to estimate pulmonary function in the clinical laboratory. It is important to comply with technical requisites to approximate the real values sought, as well as to interpret results adequately. Recommendations are made to establish: (1) quality control; (2) the definition of abnormality; (3) the classification of the change from normal and its degree; (4) the definition of reversibility. In relation to quality control, several criteria are pointed out, such as end of the test, back-extrapolation and extrapolated volume, in order to delineate the most common errors. Daily calibration is advised. Inspection of graphical records of the test is mandatory. The limitation of the common use of 80% of predicted values to establish abnormality is stressed, and the reasons for employing 95% confidence limits are detailed. It is important to select the reference-values equation (in view of the differences in predicted values), and it is advisable to validate the selection against normal values from the local population. In relation to the definition of the defect as restrictive or obstructive, the limitations of vital capacity (VC) to establish restriction when obstruction is also present are defined, as are the limitations of maximal mid-expiratory flow 25-75 (FMF 25-75) as an isolated marker of obstruction. Finally, the qualities of forced expiratory volume in 1 sec (VEF1) and the difficulties with other indicators (CVF, FMF 25-75, VEF1/CVF) in estimating reversibility after bronchodilators are evaluated, and the value of the different methods used to define reversibility (% of change from the initial value, absolute change, or % of predicted) is commented on. To be valuable, clinical spirometric studies should be performed with the same technical rigour as other, more complex studies. PMID:7990690

  13. Teratogenic inborn errors of metabolism.

    PubMed Central

    Leonard, J. V.

    1986-01-01

    Most children with inborn errors of metabolism are born healthy without malformations as the fetus is protected by the metabolic activity of the placenta. However, certain inborn errors of the fetus have teratogenic effects although the mechanisms responsible for the malformations are not generally understood. Inborn errors in the mother may also be teratogenic. The adverse effects of these may be reduced by improved metabolic control of the biochemical disorder. PMID:3540927

  14. Confidence limits and their errors

    SciTech Connect

    Rajendran Raja

    2002-03-22

    Confidence limits are commonplace in physics analysis. Great care must be taken in their calculation and use, especially in cases of limited statistics. We introduce the concept of statistical errors of confidence limits and argue that not only should limits be calculated but also their errors in order to represent the results of the analysis to the fullest. We show that comparison of two different limits from two different experiments becomes easier when their errors are also quoted. Use of errors of confidence limits will lead to abatement of the debate on which method is best suited to calculate confidence limits.

  15. Compensating For GPS Ephemeris Error

    NASA Technical Reports Server (NTRS)

    Wu, Jiun-Tsong

    1992-01-01

    Method of computing position of user station receiving signals from Global Positioning System (GPS) of navigational satellites compensates for most of GPS ephemeris error. Present method enables user station to reduce error in its computed position substantially. User station must have access to two or more reference stations at precisely known positions several hundred kilometers apart and must be in neighborhood of reference stations. Based on fact that when GPS data used to compute baseline between reference station and user station, vector error in computed baseline is proportional to ephemeris error and length of baseline.
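    A back-of-the-envelope reading of the stated proportionality (baseline error roughly equal to ephemeris error times baseline length divided by satellite range) can be sketched as follows; the numerical values are illustrative assumptions, not figures from the brief.

```python
# Rule-of-thumb differential GPS error scaling (illustrative values only).
ephemeris_error_m = 5.0          # assumed satellite position error
satellite_range_m = 2.0e7        # roughly 20,000 km to a GPS satellite

for baseline_km in (10, 100, 500):
    baseline_error_m = ephemeris_error_m * (baseline_km * 1e3) / satellite_range_m
    print(f"baseline {baseline_km:>4} km -> ~{baseline_error_m * 1000:.1f} mm baseline error")
```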

  16. Retransmission error control with memory

    NASA Technical Reports Server (NTRS)

    Sindhu, P. S.

    1977-01-01

    In this paper, an error control technique that is a basic improvement over automatic-repeat-request (ARQ) is presented. Erroneously received blocks in an ARQ system are used for error control. The technique is termed ARQ-with-memory (MRQ). The general MRQ system is described, and simple upper and lower bounds are derived on the throughput achievable by MRQ. The performance of MRQ with respect to throughput, message delay and probability of error is compared to that of ARQ by simulating both systems using error data from a VHF satellite channel being operated in the ALOHA packet broadcasting mode.

  17. Medication Errors in Outpatient Pediatrics.

    PubMed

    Berrier, Kyla

    2016-01-01

    Medication errors may occur during parental administration of prescription and over-the-counter medications in the outpatient pediatric setting. Misinterpretation of medication labels and dosing errors are two types of errors in medication administration. Health literacy may play an important role in parents' ability to safely manage their child's medication regimen. There are several proposed strategies for decreasing these medication administration errors, including using standardized dosing instruments, using strictly metric units for medication dosing, and providing parents and caregivers with picture-based dosing instructions. Pediatric healthcare providers should be aware of these strategies and seek to implement many of them into their practices. PMID:27537086

  18. Physical examination. Frequently observed errors.

    PubMed

    Wiener, S; Nathanson, M

    1976-08-16

    A method allowing for direct observation of intern and resident physicians while interviewing and examining patients has been in use on our medical wards for the last five years. A large number of errors in the performance of the medical examination by young physicians were noted and a classification of these errors into those of technique, omission, detection, interpretation, and recording was made. An approach to detection and correction of each of these kinds of errors is presented, as well as a discussion of possible reasons for the occurrence of these errors in physician performance. PMID:947266

  19. A theory of human error

    NASA Technical Reports Server (NTRS)

    Mcruer, D. T.; Clement, W. F.; Allen, R. W.

    1980-01-01

    Human error, a significant contributing factor in a very high proportion of civil transport, general aviation, and rotorcraft accidents, is investigated. Correction of the sources of human error requires that one attempt to reconstruct underlying and contributing causes of error from the circumstantial causes cited in official investigative reports. A validated analytical theory of the input-output behavior of human operators involving manual control, communication, supervisory, and monitoring tasks which are relevant to aviation operations is presented. This theory of behavior, both appropriate and inappropriate, provides an insightful basis for investigating, classifying, and quantifying the needed cause-effect relationships governing propagation of human error.

  20. Roundoff error in long-term planetary orbit integrations

    NASA Astrophysics Data System (ADS)

    Quinn, T.; Tremaine, S.

    1990-03-01

    Possible sources of roundoff error in solar system integrations are studied. It is suggested that when floating point arithmetic is optimal, the majority of roundoff error arises from two sources: the approximate representation of the numerical coefficients used in multistep integration formulas and the additions required to evaluate these formulas. An algorithm to remove these two sources of error in computers with optimal arithmetic is presented. It is shown that the corrections result in a substantial reduction in the energy error at the cost of less than a factor of 2 increase in computing time.
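    The second roundoff source mentioned above, the additions needed to evaluate the integration formulas, is commonly attacked with compensated summation. The sketch below shows standard Kahan summation as a stand-in for the paper's algorithm, which is not reproduced here.

```python
def kahan_sum(values):
    """Compensated (Kahan) summation: a running correction term recovers
    the low-order bits that plain addition discards at each step."""
    total = 0.0
    compensation = 0.0
    for v in values:
        y = v - compensation
        t = total + y                     # low-order part of y is lost here...
        compensation = (t - total) - y    # ...and captured here for the next step
        total = t
    return total

values = [0.1] * 10_000_000
print("naive:", sum(values))              # drifts noticeably from 1,000,000
print("kahan:", kahan_sum(values))         # close to the correctly rounded sum
```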

  1. Systematics and limit calculations

    SciTech Connect

    Fisher, Wade; /Fermilab

    2006-12-01

    This note discusses the estimation of systematic uncertainties and their incorporation into upper limit calculations. Two different approaches to reducing systematics and their degrading impact on upper limits are introduced. An improved χ² function is defined which is useful in comparing Poisson distributed data with models marginalized by systematic uncertainties. Also, a technique using profile likelihoods is introduced which provides a means of constraining the degrading impact of systematic uncertainties on limit calculations.
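    As a schematic, hedged illustration of folding a systematic uncertainty into a χ² comparison of counting data with a model (not the note's exact construction), the sketch below profiles out a single multiplicative nuisance parameter that carries a Gaussian constraint of assumed width sigma_sys; all input numbers are invented.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def profiled_chi2(data, model, sigma_sys):
    """Chi-squared between data and a model with one multiplicative systematic s,
    minimized (profiled) over s, which carries a Gaussian constraint term."""
    def chi2(s):
        scaled = model * (1.0 + s)
        stat = np.sum((data - scaled) ** 2 / np.maximum(scaled, 1e-9))
        return stat + (s / sigma_sys) ** 2
    res = minimize_scalar(chi2, bounds=(-0.5, 0.5), method="bounded")
    return res.fun, res.x

data = np.array([12.0, 9.0, 15.0, 7.0])     # observed counts per bin (illustrative)
model = np.array([10.0, 10.0, 12.0, 8.0])   # predicted counts per bin (illustrative)
chi2_min, s_hat = profiled_chi2(data, model, sigma_sys=0.05)
print(f"profiled chi2 = {chi2_min:.2f} at nuisance s = {s_hat:+.3f}")
```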

  2. A posteriori error estimator and error control for contact problems

    NASA Astrophysics Data System (ADS)

    Weiss, Alexander; Wohlmuth, Barbara I.

    2009-09-01

    In this paper, we consider two error estimators for one-body contact problems. The first error estimator is defined in terms of H(div)-conforming stress approximations and equilibrated fluxes while the second is a standard edge-based residual error estimator without any modification with respect to the contact. We show reliability and efficiency for both estimators. Moreover, the error is bounded by the first estimator with a constant one plus a higher order data oscillation term plus a term arising from the contact that is shown numerically to be of higher order. The second estimator is used in a control-based AFEM refinement strategy, and the decay of the error in the energy is shown. Several numerical tests demonstrate the performance of both estimators.

  3. The Error Distribution of BATSE Gamma-Ray Burst Locations

    NASA Technical Reports Server (NTRS)

    Briggs, Michael S.; Pendleton, Geoffrey N.; Kippen, R. Marc; Brainerd, J. J.; Hurley, Kevin; Connaughton, Valerie; Meegan, Charles A.

    1999-01-01

    Empirical probability models for BATSE gamma-ray burst (GRB) location errors are developed via a Bayesian analysis of the separations between BATSE GRB locations and locations obtained with the Interplanetary Network (IPN). Models are compared and their parameters estimated using 392 GRBs with single IPN annuli and 19 GRBs with intersecting IPN annuli. Most of the analysis is for the 4Br BATSE catalog; earlier catalogs are also analyzed. The simplest model that provides a good representation of the error distribution has 78% of the probability in a "core" term with a systematic error of 1.85 deg and the remainder in an extended tail with a systematic error of 5.1 deg, which implies a 68% confidence radius for bursts with negligible statistical uncertainties of 2.2 deg. There is evidence for a more complicated model in which the error distribution depends on the BATSE data type that was used to obtain the location. Bright bursts are typically located using the CONT data type, and according to the more complicated model, the 68% confidence radius for CONT-located bursts with negligible statistical uncertainties is 2.0 deg.

  4. The Role of Model and Initial Condition Error in Numerical Weather Forecasting Investigated with an Observing System Simulation Experiment

    NASA Technical Reports Server (NTRS)

    Prive, Nikki C.; Errico, Ronald M.

    2013-01-01

    A series of experiments that explore the roles of model and initial condition error in numerical weather prediction are performed using an observing system simulation experiment (OSSE) framework developed at the National Aeronautics and Space Administration Global Modeling and Assimilation Office (NASA/GMAO). The use of an OSSE allows the analysis and forecast errors to be explicitly calculated, and different hypothetical observing networks can be tested with ease. In these experiments, both a full global OSSE framework and an 'identical twin' OSSE setup are utilized to compare the behavior of the data assimilation system and evolution of forecast skill with and without model error. The initial condition error is manipulated by varying the distribution and quality of the observing network and the magnitude of observation errors. The results show that model error has a strong impact on both the quality of the analysis field and the evolution of forecast skill, including both systematic and unsystematic model error components. With a realistic observing network, the analysis state retains a significant quantity of error due to systematic model error. If errors of the analysis state are minimized, model error acts to rapidly degrade forecast skill during the first 24-48 hours of forward integration. In the presence of model error, the impact of observation errors on forecast skill is small, but in the absence of model error, observation errors cause a substantial degradation of the skill of medium range forecasts.

  5. Diagnosing Multiplicative Error with Lensing Magnification of Type Ia Supernovae

    NASA Astrophysics Data System (ADS)

    Zhang, Pengjie

    2015-06-01

    Weak lensing causes spatially coherent fluctuations in the flux of Type Ia supernovae (SNe Ia). This lensing magnification allows for weak lensing measurement independent of cosmic shear. It is free of the shape measurement errors associated with cosmic shear and can therefore be used to diagnose and calibrate multiplicative error. Although this lensing magnification is difficult to accurately measure in auto correlation, its cross correlation with cosmic shear and galaxy distribution in an overlapping area can be measured to a significantly higher accuracy. Therefore, these cross correlations can put useful constraints on multiplicative error, and the obtained constraint is free of cosmic variance in the weak lensing field. We present two methods implementing this idea and estimate their performances. We find that, with ~1 million SNe Ia that can be achieved with the proposed D2k survey with the LSST telescope, a multiplicative error of ~0.5% for source galaxies at z_s ~ 1 can be detected and a larger multiplicative error can be corrected to the level of 0.5%. It is therefore a promising approach to control the multiplicative error to the sub-percent level required for stage IV projects. The combination of the two methods even has the potential to diagnose and calibrate galaxy intrinsic alignment, which is another major systematic error in cosmic shear cosmology.

  6. Aliasing errors in measurements of beam position and ellipticity

    NASA Astrophysics Data System (ADS)

    Ekdahl, Carl

    2005-09-01

    Beam position monitors (BPMs) are used in accelerators and ion experiments to measure currents, position, and azimuthal asymmetry. These usually consist of discrete arrays of electromagnetic field detectors, with detectors located at several equally spaced azimuthal positions at the beam tube wall. The discrete nature of these arrays introduces systematic errors into the data, independent of uncertainties resulting from signal noise, lack of recording dynamic range, etc. Computer simulations were used to understand and quantify these aliasing errors. If required, aliasing errors can be significantly reduced by employing more than the usual four detectors in the BPMs. These simulations show that the error in measurements of the centroid position of a large beam is indistinguishable from the error in the position of a filament. The simulations also show that aliasing errors in the measurement of beam ellipticity are very large unless the beam is accurately centered. The simulations were used to quantify the aliasing errors in beam parameter measurements during early experiments on the DARHT-II accelerator, demonstrating that they affected the measurements only slightly, if at all.
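    For context on where the aliasing enters, a four-detector BPM typically infers the beam centroid from difference-over-sum combinations of the detector signals. The sketch below is a generic illustration under that assumption, not the simulation code used in the paper; the geometric scale factor and signal values are only indicative.

```python
import numpy as np

def centroid_from_four_detectors(signals, radius):
    """Estimate the beam centroid (x, y) from four wall detectors at
    right, top, left, bottom using difference-over-sum ratios."""
    right, top, left, bottom = signals
    total = right + top + left + bottom
    # Linear (small-displacement) approximation; the scale factor radius/2
    # depends on the detector geometry and is only indicative here.
    x = (radius / 2.0) * (right - left) / total
    y = (radius / 2.0) * (top - bottom) / total
    return x, y

# Signals from a beam displaced toward the "right" detector (arbitrary units).
print(centroid_from_four_detectors(np.array([1.3, 1.0, 0.7, 1.0]), radius=50.0))
```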

  7. FRamework Assessing Notorious Contributing Influences for Error (FRANCIE).

    PubMed

    Haney, L N

    2000-09-01

    FRamework Assessing Notorious Contributing Influences for Error (FRANCIE) is a framework and methodology for the systematic analysis, characterization, and prediction of human error. It was developed in a NASA Advanced Concepts Project by Idaho National Engineering and Environmental Laboratory, NASA Ames Research Center, Boeing, and America West Airlines, with input from United Airlines and Idaho State University. It was hypothesized that development of a comprehensive taxonomy of error-type and contributing-influences, in a framework and methodology addressing issues important for error analysis, would result in a useful tool for human error analysis. The development method included capturing expertise of human factors and domain experts in the framework, and ensuring that the approach addressed issues important for future human error analysis. This development resulted in creation of a FRANCIE taxonomy for airline maintenance, and a FRANCIE framework and approach that addresses important issues: proactive and reactive, comprehensive error-type and contributing-influences taxonomy, meaningful error reduction strategies, multilevel analyses, multiple user types, compatible with existing methods, applied in design phase or throughout system life cycle, capture of lessons learned, and ease of application. FRANCIE was designed to apply to any domain, given taxonomy refinement. This is demonstrated by its application for an aviation operations scenario for a new precision landing aid. Representative error-types and contributing-influences, two example analyses, and a case study are presented. In conclusion, FRANCIE is useful for analysis of human error, and the taxonomy is a starting point for development of taxonomies allowing application to other domains, such as spacecraft maintenance, operations, medicine, process control, and other transportation industries. PMID:10993328

  8. Spatial frequency domain error budget

    SciTech Connect

    Hauschildt, H; Krulewich, D

    1998-08-27

    The aim of this paper is to describe a methodology for designing and characterizing machines used to manufacture or inspect parts with spatial-frequency-based specifications. At Lawrence Livermore National Laboratory, one of our responsibilities is to design or select the appropriate machine tools to produce advanced optical and weapons systems. Recently, many of the component tolerances for these systems have been specified in terms of the spatial frequency content of residual errors on the surface. We typically use an error budget as a sensitivity analysis tool to ensure that the parts manufactured by a machine will meet the specified component tolerances. Error budgets provide the formalism whereby we account for all sources of uncertainty in a process, and sum them to arrive at a net prediction of how "precisely" a manufactured component can meet a target specification. Using the error budget, we are able to minimize risk during initial stages by ensuring that the machine will produce components that meet specifications before the machine is actually built or purchased. However, the current error budgeting procedure provides no formal mechanism for designing machines that can produce parts with spatial-frequency-based specifications. The output from the current error budgeting procedure is a single number estimating the net worst case or RMS error on the work piece. This procedure has limited ability to differentiate between low spatial frequency form errors versus high frequency surface finish errors. Therefore the current error budgeting procedure can lead us to reject a machine that is adequate or accept a machine that is inadequate. This paper will describe a new error budgeting methodology to aid in the design and characterization of machines used to manufacture or inspect parts with spatial-frequency-based specifications. The output from this new procedure is the continuous spatial frequency content of errors that result on a machined part. If the machine

  9. Error coding simulations in C

    NASA Technical Reports Server (NTRS)

    Noble, Viveca K.

    1994-01-01

    When data is transmitted through a noisy channel, errors are produced within the data rendering it indecipherable. Through the use of error control coding techniques, the bit error rate can be reduced to any desired level without sacrificing the transmission data rate. The Astrionics Laboratory at Marshall Space Flight Center has decided to use a modular, end-to-end telemetry data simulator to simulate the transmission of data from flight to ground and various methods of error control. The simulator includes modules for random data generation, data compression, Consultative Committee for Space Data Systems (CCSDS) transfer frame formation, error correction/detection, error generation and error statistics. The simulator utilizes a concatenated coding scheme which includes CCSDS standard (255,223) Reed-Solomon (RS) code over GF(2^8) with interleave depth of 5 as the outermost code, (7, 1/2) convolutional code as an inner code and CCSDS recommended (n, n-16) cyclic redundancy check (CRC) code as the innermost code, where n is the number of information bits plus 16 parity bits. The received signal-to-noise ratio for a desired bit error rate is greatly reduced through the use of forward error correction techniques. Even greater coding gain is provided through the use of a concatenated coding scheme. Interleaving/deinterleaving is necessary to randomize burst errors which may appear at the input of the RS decoder. The burst correction capability length is increased in proportion to the interleave depth. The modular nature of the simulator allows for inclusion or exclusion of modules as needed. This paper describes the development and operation of the simulator, the verification of a C-language Reed-Solomon code, and the possibility of using Comdisco SPW(tm) as a tool for determining optimal error control schemes.

  10. Error coding simulations in C

    NASA Astrophysics Data System (ADS)

    Noble, Viveca K.

    1994-10-01

    When data is transmitted through a noisy channel, errors are produced within the data rendering it indecipherable. Through the use of error control coding techniques, the bit error rate can be reduced to any desired level without sacrificing the transmission data rate. The Astrionics Laboratory at Marshall Space Flight Center has decided to use a modular, end-to-end telemetry data simulator to simulate the transmission of data from flight to ground and various methods of error control. The simulator includes modules for random data generation, data compression, Consultative Committee for Space Data Systems (CCSDS) transfer frame formation, error correction/detection, error generation and error statistics. The simulator utilizes a concatenated coding scheme which includes CCSDS standard (255,223) Reed-Solomon (RS) code over GF(2^8) with interleave depth of 5 as the outermost code, (7, 1/2) convolutional code as an inner code and CCSDS recommended (n, n-16) cyclic redundancy check (CRC) code as the innermost code, where n is the number of information bits plus 16 parity bits. The received signal-to-noise ratio for a desired bit error rate is greatly reduced through the use of forward error correction techniques. Even greater coding gain is provided through the use of a concatenated coding scheme. Interleaving/deinterleaving is necessary to randomize burst errors which may appear at the input of the RS decoder. The burst correction capability length is increased in proportion to the interleave depth. The modular nature of the simulator allows for inclusion or exclusion of modules as needed. This paper describes the development and operation of the simulator, the verification of a C-language Reed-Solomon code, and the possibility of using Comdisco SPW(tm) as a tool for determining optimal error control schemes.
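    Reproducing the concatenated RS/convolutional/CRC chain described above is beyond a short example, but the general shape of such an end-to-end error-control simulation can be sketched with a deliberately simple stand-in: a rate-1/3 repetition code over a binary symmetric channel. All parameters below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(42)
n_bits = 100_000
p_flip = 0.05                        # assumed raw channel bit-error probability

data = rng.integers(0, 2, n_bits)

# Encode: repeat each bit 3 times (a trivial stand-in for RS/convolutional coding).
encoded = np.repeat(data, 3)

# Binary symmetric channel: flip each transmitted bit with probability p_flip.
received = encoded ^ (rng.random(encoded.size) < p_flip)

# Decode by majority vote over each group of 3 received bits.
decoded = received.reshape(-1, 3).sum(axis=1) >= 2

coded_ber = np.mean(decoded != data)
print(f"channel BER ~{p_flip:.3f}, decoded BER ~{coded_ber:.4f}")   # about 3p^2 - 2p^3
```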

  11. Investigation of Measurement Errors in Doppler Global Velocimetry

    NASA Technical Reports Server (NTRS)

    Meyers, James F.; Lee, Joseph W.

    1999-01-01

    While the initial development phase of Doppler Global Velocimetry (DGV) has been successfully completed, there remains a critical next phase to be conducted, namely the determination of an error budget to provide quantitative bounds for measurements obtained by this technology. This paper describes a laboratory investigation that consisted of a detailed interrogation of potential error sources to determine their contribution to the overall DGV error budget. A few sources of error were obvious; e.g., iodine vapor adsorption lines, optical systems, and camera characteristics. However, additional non-obvious sources were also discovered; e.g., laser frequency and single-frequency stability, media scattering characteristics, and interference fringes. This paper describes each identified error source, its effect on the overall error budget, and where possible, corrective procedures to reduce or eliminate its effect.

  12. Automatic pronunciation error detection in non-native speech: the case of vowel errors in Dutch.

    PubMed

    van Doremalen, Joost; Cucchiarini, Catia; Strik, Helmer

    2013-08-01

    This research is aimed at analyzing and improving automatic pronunciation error detection in a second language. Dutch vowels spoken by adult non-native learners of Dutch are used as a test case. A first study on Dutch pronunciation by L2 learners with different L1s revealed that vowel pronunciation errors are relatively frequent and often concern subtle acoustic differences between the realization and the target sound. In a second study automatic pronunciation error detection experiments were conducted to compare existing measures to a metric that takes account of the error patterns observed to capture relevant acoustic differences. The results of the two studies do indeed show that error patterns bear information that can be usefully employed in weighted automatic measures of pronunciation quality. In addition, it appears that combining such a weighted metric with existing measures improves the equal error rate by 6.1 percentage points from 0.297, for the Goodness of Pronunciation (GOP) algorithm, to 0.236. PMID:23927130
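    The equal error rate quoted above is the operating point at which the false-acceptance and false-rejection rates coincide. The sketch below is illustrative only (not the study's evaluation code) and estimates an EER from two invented sets of detector scores, where a higher score means "more likely correct".

```python
import numpy as np

def equal_error_rate(error_scores, correct_scores):
    """Estimate the EER: the operating point where the false-acceptance rate
    (mispronunciations accepted) equals the false-rejection rate (acceptable
    realizations rejected)."""
    thresholds = np.sort(np.concatenate([error_scores, correct_scores]))
    best_gap, best_eer = 1.0, None
    for t in thresholds:
        far = np.mean(error_scores >= t)      # mispronunciations passed as correct
        frr = np.mean(correct_scores < t)     # correct realizations flagged as errors
        if abs(far - frr) < best_gap:
            best_gap, best_eer = abs(far - frr), 0.5 * (far + frr)
    return best_eer

rng = np.random.default_rng(0)
errors = rng.normal(0.0, 1.0, 2000)       # scores for mispronounced vowels (invented)
correct = rng.normal(1.5, 1.0, 2000)      # scores for acceptable vowels (invented)
print(f"EER = {equal_error_rate(errors, correct):.3f}")
```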

  13. Medical error and related factors during internship and residency.

    PubMed

    Ahmadipour, Habibeh; Nahid, Mortazavi

    2015-01-01

    It is difficult to determine the real incidence of medical errors due to the lack of a precise definition of errors, as well as the failure to report them under certain circumstances. We carried out a cross-sectional study in Kerman University of Medical Sciences, Iran in 2013. The participants were selected through the census method. The data were collected using a self-administered questionnaire, which consisted of questions on the participants' demographic data and questions on the medical errors committed. The data were analysed by SPSS 19. It was found that 270 participants had committed medical errors. There was no significant difference in the frequency of errors committed by interns and residents. In the case of residents, the most common error was misdiagnosis and in that of interns, errors related to history-taking and physical examination. Considering that medical errors are common in the clinical setting, the education system should train interns and residents to prevent the occurrence of errors. In addition, the system should develop a positive attitude among them so that they can deal better with medical errors. PMID:26592783

  14. Non-Systematic Complex Number RS Coded OFDM by Unique Word Prefix

    NASA Astrophysics Data System (ADS)

    Huemer, Mario; Hofbauer, Christian; Huber, Johannes B.

    2012-01-01

    In this paper we expand our recently introduced concept of UW-OFDM (unique word orthogonal frequency division multiplexing). In UW-OFDM the cyclic prefixes (CPs) are replaced by deterministic sequences, the so-called unique words (UWs). The UWs are generated by appropriately loading a set of redundant subcarriers. By that a systematic complex number Reed Solomon (RS) code construction is introduced in a quite natural way, because an RS code may be defined as the set of vectors, for which a block of successive zeros occurs in the other domain w.r.t. a discrete Fourier transform. (For a fixed block different to zero, i.e., a UW, a coset code of an RS code is generated.) A remaining problem in the original systematic coded UW-OFDM concept is the fact that the redundant subcarrier symbols disproportionately contribute to the mean OFDM symbol energy. In this paper we introduce the concept of non-systematic coded UW-OFDM, where the redundancy is no longer allocated to dedicated subcarriers, but distributed over all subcarriers. We derive optimum complex valued code generator matrices matched to the BLUE (best linear unbiased estimator) and to the LMMSE (linear minimum mean square error) data estimator, respectively. With the help of simulations we highlight the advantageous spectral properties and the superior BER (bit error ratio) performance of non-systematic coded UW-OFDM compared to systematic coded UW-OFDM as well as to CP-OFDM in AWGN (additive white Gaussian noise) and in frequency selective environments.

  15. Quantifying systematic uncertainties in supernova cosmology

    SciTech Connect

    Nordin, Jakob; Goobar, Ariel; Joensson, Jakob E-mail: ariel@physto.se

    2008-02-15

    Observations of Type Ia supernovae used to map the expansion history of the Universe suffer from systematic uncertainties that need to be propagated into the estimates of cosmological parameters. We propose an iterative Monte Carlo simulation and cosmology fitting technique (SMOCK) to investigate the impact of sources of error upon fits of the dark energy equation of state. This approach is especially useful to track the impact of non-Gaussian, correlated effects, e.g. reddening correction errors, brightness evolution of the supernovae, K-corrections, gravitational lensing, etc. While the tool is primarily aimed at studies and optimization of future instruments, we use the Gold data-set in Riess et al (2007 Astrophys. J. 659 98) to show examples of potential systematic uncertainties that could exceed the quoted statistical uncertainties.

  16. Explaining Errors in Children's Questions

    ERIC Educational Resources Information Center

    Rowland, Caroline F.

    2007-01-01

    The ability to explain the occurrence of errors in children's speech is an essential component of successful theories of language acquisition. The present study tested some generativist and constructivist predictions about error on the questions produced by ten English-learning children between 2 and 5 years of age. The analyses demonstrated that,…

  17. Dyslexia and Oral Reading Errors

    ERIC Educational Resources Information Center

    Singleton, Chris

    2005-01-01

    Thomson was the first of very few researchers to have studied oral reading errors as a means of addressing the question: Are dyslexic readers different to other readers? Using the Neale Analysis of Reading Ability and Goodman's taxonomy of oral reading errors, Thomson concluded that dyslexic readers are different, but he found that they do not…

  18. Children's Scale Errors with Tools

    ERIC Educational Resources Information Center

    Casler, Krista; Eshleman, Angelica; Greene, Kimberly; Terziyan, Treysi

    2011-01-01

    Children sometimes make "scale errors," attempting to interact with tiny object replicas as though they were full size. Here, we demonstrate that instrumental tools provide special insight into the origins of scale errors and, moreover, into the broader nature of children's purpose-guided reasoning and behavior with objects. In Study 1, 1.5- to…

  19. Human Error: A Concept Analysis

    NASA Technical Reports Server (NTRS)

    Hansen, Frederick D.

    2007-01-01

    Human error is the subject of research in almost every industry and profession of our times. This term is part of our daily language and is intuitively understood by most people; however, it would be premature to assume that everyone's understanding of human error is the same. For example, human error is used to describe the outcome or consequence of human action, the causal factor of an accident, deliberate violations, and the actual action taken by a human being. As a result, researchers rarely agree on either a specific definition of human error or how to prevent it. The purpose of this article is to explore the specific concept of human error using Concept Analysis as described by Walker and Avant (1995). The concept of human error is examined as currently used in the literature of a variety of industries and professions. Defining attributes and examples of model, borderline, and contrary cases are described. The antecedents and consequences of human error are also discussed and a definition of human error is offered.

  20. Dual Processing and Diagnostic Errors

    ERIC Educational Resources Information Center

    Norman, Geoff

    2009-01-01

    In this paper, I review evidence from two theories in psychology relevant to diagnosis and diagnostic errors. "Dual Process" theories of thinking, frequently mentioned with respect to diagnostic error, propose that categorization decisions can be made with either a fast, unconscious, contextual process called System 1 or a slow, analytical,…

  1. Measurement Errors in Organizational Surveys.

    ERIC Educational Resources Information Center

    Dutka, Solomon; Frankel, Lester R.

    1993-01-01

    Describes three classes of measurement techniques: (1) interviewing methods; (2) record retrieval procedures; and (3) observation methods. Discusses primary reasons for measurement error. Concludes that, although measurement error can be defined and controlled for, there are other design factors that also must be considered. (CFR)

  2. Barriers to Medical Error Reporting

    PubMed Central

    Poorolajal, Jalal; Rezaie, Shirin; Aghighi, Negar

    2015-01-01

    Background: This study was conducted to explore the prevalence of medical error underreporting and associated barriers. Methods: This cross-sectional study was performed from September to December 2012. Five hospitals, affiliated with Hamadan University of Medical Sciences, in Hamedan, Iran were investigated. A self-administered questionnaire was used for data collection. Participants consisted of physicians, nurses, midwives, residents, interns, and staff of radiology and laboratory departments. Results: Overall, 50.26% of subjects had committed but not reported medical errors. The main reasons mentioned for underreporting were lack of an effective medical error reporting system (60.0%), lack of a proper reporting form (51.8%), lack of peer support for a person who has committed an error (56.0%), and lack of personal attention to the importance of medical errors (62.9%). The rate of committing medical errors was higher in men (71.4%), age of 40–50 years (67.6%), less-experienced personnel (58.7%), educational level of MSc (87.5%), and staff of the radiology department (88.9%). Conclusions: This study outlined the main barriers to reporting medical errors and associated factors that may be helpful for healthcare organizations in improving medical error reporting as an essential component for patient safety enhancement. PMID:26605018

  3. Operational Interventions to Maintenance Error

    NASA Technical Reports Server (NTRS)

    Kanki, Barbara G.; Walter, Diane; Dulchinos, VIcki

    1997-01-01

    A significant proportion of aviation accidents and incidents are known to be tied to human error. However, research on flight operational errors has shown that so-called pilot error often involves a variety of human factors issues and not a simple lack of individual technical skills. In aircraft maintenance operations, there is similar concern that maintenance errors which may lead to incidents and accidents are related to a large variety of human factors issues. Although maintenance error data and research are limited, industry initiatives involving human factors training in maintenance have become increasingly accepted as one type of maintenance error intervention. Conscientious efforts have been made in re-inventing the 'team' concept for maintenance operations and in tailoring programs to fit the needs of technical operations. Nevertheless, there remains a dual challenge: 1) to develop human factors interventions which are directly supported by reliable human error data, and 2) to integrate human factors concepts into the procedures and practices of everyday technical tasks. In this paper, we describe several varieties of human factors interventions and focus on two specific alternatives which target problems related to procedures and practices; namely, 1) structured on-the-job training and 2) procedure re-design. We hope to demonstrate that the key to leveraging the impact of these solutions comes from focused interventions; that is, interventions which are derived from a clear understanding of specific maintenance errors, their operational context and human factors components.

  4. Errors in visuo-haptic and haptic-haptic location matching are stable over long periods of time.

    PubMed

    Kuling, Irene A; Brenner, Eli; Smeets, Jeroen B J

    2016-05-01

    People make systematic errors when they move their unseen dominant hand to a visual target (visuo-haptic matching) or to their other unseen hand (haptic-haptic matching). Why they make such errors is still unknown. A key question in determining the reason is to what extent individual participants' errors are stable over time. To examine this, we developed a method to quantify the consistency. With this method, we studied the stability of systematic matching errors across time intervals of at least a month. Within this time period, individual subjects' matches were as consistent as one could expect on the basis of the variability in the individual participants' performance within each session. Thus individual participants make quite different systematic errors, but in similar circumstances they make the same errors across long periods of time. PMID:27043253

  5. Reducing latent errors, drift errors, and stakeholder dissonance.

    PubMed

    Samaras, George M

    2012-01-01

    Healthcare information technology (HIT) is being offered as a transformer of modern healthcare delivery systems. Some believe that it has the potential to improve patient safety, increase the effectiveness of healthcare delivery, and generate significant cost savings. In other industrial sectors, information technology has dramatically influenced quality and profitability - sometimes for the better and sometimes not. Quality improvement efforts in healthcare delivery have not yet produced the dramatic results obtained in other industrial sectors. This may be because previously successful quality improvement experts do not possess the requisite domain knowledge (clinical experience and expertise). It also appears related to a continuing misconception regarding the origins and meaning of work errors in healthcare delivery. The focus here is on system use errors rather than individual user errors. System use errors originate in both the development and the deployment of technology. Not recognizing stakeholders and their conflicting needs, wants, and desires (NWDs) may lead to stakeholder dissonance. Mistakes translating stakeholder NWDs into development or deployment requirements may lead to latent errors. Mistakes translating requirements into specifications may lead to drift errors. At the sharp end, workers encounter system use errors or, recognizing the risk, expend extensive and unanticipated resources to avoid them. PMID:22317001

  6. A theory of human error

    NASA Technical Reports Server (NTRS)

    Mcruer, D. T.; Clement, W. F.; Allen, R. W.

    1981-01-01

    Human errors tend to be treated in terms of clinical and anecdotal descriptions, from which remedial measures are difficult to derive. Correction of the sources of human error requires an attempt to reconstruct underlying and contributing causes of error from the circumstantial causes cited in official investigative reports. A comprehensive analytical theory of the cause-effect relationships governing propagation of human error is indispensable to a reconstruction of the underlying and contributing causes. A validated analytical theory of the input-output behavior of human operators involving manual control, communication, supervisory, and monitoring tasks which are relevant to aviation, maritime, automotive, and process control operations is highlighted. This theory of behavior, both appropriate and inappropriate, provides an insightful basis for investigating, classifying, and quantifying the needed cause-effect relationships governing propagation of human error.

  7. Onorbit IMU alignment error budget

    NASA Technical Reports Server (NTRS)

    Corson, R. W.

    1980-01-01

    Errors from the Star Tracker, Crew Optical Alignment Sight (COAS), and Inertial Measurement Unit (IMU), which together form a complex navigation system with a multitude of error sources, were combined. A complete list of the system errors is presented. The errors were combined in a rational way to yield an estimate of the IMU alignment accuracy for STS-1. The expected standard deviation in the IMU alignment error for STS-1 type alignments was determined to be 72 arc seconds per axis for star tracker alignments and 188 arc seconds per axis for COAS alignments. These estimates are based on current knowledge of the star tracker, COAS, IMU, and navigation base error specifications, and were partially verified by preliminary Monte Carlo analysis.

  8. Error diffusion with a more symmetric error distribution

    NASA Astrophysics Data System (ADS)

    Fan, Zhigang

    1994-05-01

    In this paper a new error diffusion algorithm is presented that effectively eliminates the `worm' artifacts appearing in the standard methods. The new algorithm processes each scanline of the image in two passes, a forward pass followed by a backward one. This enables the error made at one pixel to be propagated to all the `future' pixels. A much more symmetric error distribution is achieved than that of the standard methods. The frequency response of the noise shaping filter associated with the new algorithm is mirror-symmetric in magnitude.
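    For reference, the baseline against which such two-pass methods are compared is standard one-pass error diffusion. The sketch below implements plain Floyd-Steinberg-style diffusion as a generic illustration; it is not the paper's symmetric two-pass algorithm.

```python
import numpy as np

def floyd_steinberg(image):
    """Standard one-pass error diffusion: quantize each pixel to 0/1 and push
    the quantization error onto not-yet-processed neighbors."""
    img = image.astype(float).copy()
    h, w = img.shape
    out = np.zeros_like(img)
    for y in range(h):
        for x in range(w):
            old = img[y, x]
            new = 1.0 if old >= 0.5 else 0.0
            out[y, x] = new
            err = old - new
            if x + 1 < w:               img[y, x + 1]     += err * 7 / 16
            if y + 1 < h and x > 0:     img[y + 1, x - 1] += err * 3 / 16
            if y + 1 < h:               img[y + 1, x]     += err * 5 / 16
            if y + 1 < h and x + 1 < w: img[y + 1, x + 1] += err * 1 / 16
    return out

gradient = np.tile(np.linspace(0, 1, 64), (16, 1))
print(floyd_steinberg(gradient).mean())   # close to 0.5, matching the mean grey level
```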

  9. Application of human error analysis to aviation and space operations

    SciTech Connect

    Nelson, W.R.

    1998-03-01

    For the past several years at the Idaho National Engineering and Environmental Laboratory (INEEL) the authors have been working to apply methods of human error analysis to the design of complex systems. They have focused on adapting human reliability analysis (HRA) methods that were developed for Probabilistic Safety Assessment (PSA) for application to system design. They are developing methods so that human errors can be systematically identified during system design, the potential consequences of each error can be assessed, and potential corrective actions (e.g. changes to system design or procedures) can be identified. The primary vehicle the authors have used to develop and apply these methods has been a series of projects sponsored by the National Aeronautics and Space Administration (NASA) to apply human error analysis to aviation operations. They are currently adapting their methods and tools of human error analysis to the domain of air traffic management (ATM) systems. Under the NASA-sponsored Advanced Air Traffic Technologies (AATT) program they are working to address issues of human reliability in the design of ATM systems to support the development of a free flight environment for commercial air traffic in the US. They are also currently testing the application of their human error analysis approach for space flight operations. They have developed a simplified model of the critical habitability functions for the space station Mir, and have used this model to assess the effects of system failures and human errors that have occurred in the wake of the collision incident last year. They are developing an approach so that lessons learned from Mir operations can be systematically applied to design and operation of long-term space missions such as the International Space Station (ISS) and the manned Mars mission.

  10. SU-E-I-83: Error Analysis of Multi-Modality Image-Based Volumes of Rodent Solid Tumors Using a Preclinical Multi-Modality QA Phantom

    SciTech Connect

    Lee, Y; Fullerton, G; Goins, B

    2015-06-15

    Purpose: In our previous study a preclinical multi-modality quality assurance (QA) phantom that contains five tumor-simulating test objects with 2, 4, 7, 10 and 14 mm diameters was developed for accurate tumor size measurement by researchers during cancer drug development and testing. This study analyzed the errors during tumor volume measurement from preclinical magnetic resonance (MR), micro-computed tomography (micro-CT) and ultrasound (US) images acquired in a rodent tumor model using the preclinical multi-modality QA phantom. Methods: Using preclinical 7-Tesla MR, US and micro-CT scanners, images were acquired of subcutaneous SCC4 tumor xenografts in nude rats (3–4 rats per group; 5 groups) along with the QA phantom using the same imaging protocols. After tumors were excised, in-air micro-CT imaging was performed to determine reference tumor volume. Volumes measured for the rat tumors and phantom test objects were calculated using formula V = (π/6)*a*b*c where a, b and c are the maximum diameters in three perpendicular dimensions determined by the three imaging modalities. Then linear regression analysis was performed to compare image-based tumor volumes with the reference tumor volume and known test object volume for the rats and the phantom respectively. Results: The slopes of regression lines for in-vivo tumor volumes measured by three imaging modalities were 1.021, 1.101 and 0.862 for MRI, micro-CT and US respectively. For phantom, the slopes were 0.9485, 0.9971 and 0.9734 for MRI, micro-CT and US respectively. Conclusion: For both animal and phantom studies, random and systematic errors were observed. Random errors were observer-dependent and systematic errors were mainly due to selected imaging protocols and/or measurement method. In the animal study, there were additional systematic errors attributed to ellipsoidal assumption for tumor shape. The systematic errors measured using the QA phantom need to be taken into account to reduce measurement
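    The volume formula and regression comparison described above are straightforward to sketch. The code below uses invented diameters, not the study's data: it computes ellipsoid volumes as V = (π/6)*a*b*c and regresses the image-based volumes against the reference volumes to read off a slope analogous to those reported.

```python
import numpy as np

def ellipsoid_volume(a, b, c):
    """Tumor volume from three perpendicular maximum diameters: V = (pi/6)*a*b*c."""
    return np.pi / 6.0 * a * b * c

# Hypothetical diameters (mm) measured on one modality vs. the reference standard.
measured = ellipsoid_volume(np.array([2.1, 4.2, 6.8, 10.3, 13.6]),
                            np.array([1.9, 3.9, 7.1,  9.8, 14.1]),
                            np.array([2.0, 4.1, 6.9, 10.1, 13.9]))
reference = ellipsoid_volume(np.array([2.0, 4.0, 7.0, 10.0, 14.0]),
                             np.array([2.0, 4.0, 7.0, 10.0, 14.0]),
                             np.array([2.0, 4.0, 7.0, 10.0, 14.0]))

# Slope of the linear regression of measured against reference volumes:
# values near 1.0 indicate little systematic volume bias.
slope, intercept = np.polyfit(reference, measured, deg=1)
print(f"regression slope = {slope:.3f}, intercept = {intercept:.2f} mm^3")
```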

  11. Video Error Concealment Using Fidelity Tracking

    NASA Astrophysics Data System (ADS)

    Yoneyama, Akio; Takishima, Yasuhiro; Nakajima, Yasuyuki; Hatori, Yoshinori

    We propose a method to prevent the degradation of decoded MPEG pictures caused by video transmission over error-prone networks. In this paper, we focus on the error concealment that is processed at the decoder without using any backchannels. Though there have been various approaches to this problem, they generally focus on minimizing the degradation measured frame by frame. Although this frame-level approach is effective in evaluating individual frame quality, in the sense of human perception, the most noticeable feature is the spatio-temporal discontinuity of the image feature in the decoded video image. We propose a novel error concealment algorithm comprising the combination of i) A spatio-temporal error recovery function with low processing cost, ii) A MB-based image fidelity tracking scheme, and iii) An adaptive post-filter using the fidelity information. It is demonstrated by experimental results that the proposed algorithm can significantly reduce the subjective degradation of corrupted MPEG video quality with about 30% of additional decoding processing power.

  12. Critical evaluation of parameter consistency and predictive uncertainty in hydrological modeling: A case study using Bayesian total error analysis

    NASA Astrophysics Data System (ADS)

    Thyer, Mark; Renard, Benjamin; Kavetski, Dmitri; Kuczera, George; Franks, Stewart William; Srikanthan, Sri

    2009-12-01

    The lack of a robust framework for quantifying the parametric and predictive uncertainty of conceptual rainfall-runoff (CRR) models remains a key challenge in hydrology. The Bayesian total error analysis (BATEA) methodology provides a comprehensive framework to hypothesize, infer, and evaluate probability models describing input, output, and model structural error. This paper assesses the ability of BATEA and standard calibration approaches (standard least squares (SLS) and weighted least squares (WLS)) to address two key requirements of uncertainty assessment: (1) reliable quantification of predictive uncertainty and (2) reliable estimation of parameter uncertainty. The case study presents a challenging calibration of the lumped GR4J model to a catchment with ephemeral responses and large rainfall gradients. Postcalibration diagnostics, including checks of predictive distributions using quantile-quantile analysis, suggest that while still far from perfect, BATEA satisfied its assumed probability models better than SLS and WLS. In addition, WLS/SLS parameter estimates were highly dependent on the selected rain gauge and calibration period. This will obscure potential relationships between CRR parameters and catchment attributes and prevent the development of meaningful regional relationships. Conversely, BATEA provided consistent, albeit more uncertain, parameter estimates and thus overcomes one of the obstacles to parameter regionalization. However, significant departures from the calibration assumptions remained even in BATEA, e.g., systematic overestimation of predictive uncertainty, especially in validation. This is likely due to the inferred rainfall errors compensating for simplified treatment of model structural error.

  13. Quantum rms error and Heisenberg's error-disturbance relation

    NASA Astrophysics Data System (ADS)

    Busch, Paul

    2014-09-01

    Reports on experiments recently performed in Vienna [Erhard et al., Nature Phys. 8, 185 (2012)] and Toronto [Rozema et al., Phys. Rev. Lett. 109, 100404 (2012)] include claims of a violation of Heisenberg's error-disturbance relation. In contrast, a Heisenberg-type tradeoff relation for joint measurements of position and momentum has been formulated and proven in [Phys. Rev. Lett. 111, 160405 (2013)]. Here I show how the apparent conflict is resolved by careful consideration of the quantum generalization of the notion of root-mean-square error. The claim of a violation of Heisenberg's principle is untenable, as it rests on the historically incorrect attribution to Heisenberg of a relation that is in fact trivially violated. We review a new general trade-off relation for the necessary errors in approximate joint measurements of incompatible qubit observables that is in the spirit of Heisenberg's intuitions. The experiments mentioned may be used directly to test this new error inequality.
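
    For orientation, the two relations at issue can be written schematically as follows (notation assumed here, not quoted from the paper): the operator-based noise-disturbance inequality whose violation the cited experiments report, and the Heisenberg-type tradeoff for joint approximate measurements of position and momentum proven in Phys. Rev. Lett. 111, 160405 (2013) with suitably defined quantum rms errors.

```latex
% Noise-disturbance inequality (with naive state-dependent rms noise \epsilon(A)
% and disturbance \eta(B)) that the cited experiments report as violated:
\epsilon(A)\,\eta(B) \;\ge\; \tfrac{1}{2}\,\bigl|\langle[A,B]\rangle\bigr|

% Heisenberg-type tradeoff for joint approximate measurements of Q and P,
% valid for suitably defined quantum rms errors:
\epsilon(Q)\,\epsilon(P) \;\ge\; \frac{\hbar}{2}
```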

  14. Errors in ellipsometry measurements made with a photoelastic modulator

    SciTech Connect

    Modine, F.A.; Jellison, G.E. Jr; Gruzalski, G.R.

    1983-07-01

    The equations governing ellipsometry measurements made with a photoelastic modulator are presented in a simple but general form. These equations are used to study the propagation of both systematic and random errors, and an assessment of the accuracy of the ellipsometer is made. A basis is provided for choosing among various ellipsometer configurations, measurement procedures, and methods of data analysis. Several new insights into the performance of this type of ellipsometer are supplied.

  15. Error characterisation of global active and passive microwave soil moisture data sets

    NASA Astrophysics Data System (ADS)

    Dorigo, W. A.; Scipal, K.; Parinussa, R. M.; Liu, Y. Y.; Wagner, W.; de Jeu, R. A. M.; Naeimi, V.

    2010-08-01

    Understanding the error structures of remotely sensed soil moisture products is essential for correctly interpreting observed variations and trends in the data or assimilating them in hydrological or numerical weather prediction models. Nevertheless, a spatially coherent assessment of the quality of the various globally available data sets is often hampered by the limited availability over space and time of reliable in-situ measurements. This study explores the triple collocation error estimation technique for assessing the relative quality of several globally available soil moisture products from active (ASCAT) and passive (AMSR-E and SSM/I) microwave sensors. The triple collocation technique is a powerful tool to estimate the root mean square error while simultaneously solving for systematic differences in the climatologies of a set of three independent data sources. In addition to the scatterometer and radiometer data sets, we used the ERA-Interim and GLDAS-NOAH reanalysis soil moisture data sets as a third, independent reference. The prime objective is to reveal trends in uncertainty related to different observation principles (passive versus active), the use of different frequencies (C-, X-, and Ku-band) for passive microwave observations, and the choice of the independent reference data set (ERA-Interim versus GLDAS-NOAH). The results suggest that the triple collocation method provides realistic error estimates. Observed spatial trends agree well with the existing theory and studies on the performance of different observation principles and frequencies with respect to land cover and vegetation density. In addition, if all theoretical prerequisites are fulfilled (e.g. a sufficiently large number of common observations is available and errors of the different data sets are uncorrelated) the errors estimated for the remote sensing products are hardly influenced by the choice of the third independent data set. The results obtained in this study can help us in
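
    A minimal sketch of the covariance-notation form of triple collocation (one common variant; the study's exact formulation and rescaling steps may differ): given three collocated estimates of the same variable with mutually uncorrelated errors, the error variance of each can be estimated from the pairwise covariances.

```python
import numpy as np

def triple_collocation_errors(x, y, z):
    """x, y, z: collocated 1-D time series of the same geophysical variable
    (e.g., active, passive, and reanalysis soil moisture) with mutually
    uncorrelated errors. Returns the estimated error variance of each."""
    c = np.cov(np.vstack([x, y, z]))
    err_x = c[0, 0] - c[0, 1] * c[0, 2] / c[1, 2]
    err_y = c[1, 1] - c[0, 1] * c[1, 2] / c[0, 2]
    err_z = c[2, 2] - c[0, 2] * c[1, 2] / c[0, 1]
    # Negative estimates can occur with short records or violated assumptions
    # (e.g., cross-correlated errors); they signal that the prerequisites fail.
    return err_x, err_y, err_z
```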

  16. Effects of Setup Errors and Shape Changes on Breast Radiotherapy

    SciTech Connect

    Mourik, Anke van; Kranen, Simon van; Hollander, Suzanne den; Sonke, Jan-Jakob; Herk, Marcel van; Vliet-Vroegindeweij, Corine van

    2011-04-01

    Purpose: The purpose of the present study was to quantify the robustness of the dose distributions from three whole-breast radiotherapy (RT) techniques involving different levels of intensity modulation against whole patient setup inaccuracies and breast shape changes. Methods and Materials: For 19 patients (one computed tomography scan and five cone beam computed tomography scans each), three treatment plans were made (wedge, simple intensity-modulated RT [IMRT], and full IMRT). For each treatment plan, four dose distributions were calculated. The first dose distribution was the original plan. The other three included the effects of patient setup errors (rigid displacement of the bony anatomy) or breast errors (e.g., rotations and shape changes of the breast with respect to the bony anatomy), or both, and were obtained through deformable image registration and dose accumulation. Subsequently, the effects of the plan type and error sources on target volume coverage, mean lung dose, and excess dose were determined. Results: Systematic errors of 1-2 mm and random errors of 2-3 mm (standard deviation) were observed for both patient- and breast-related errors. Planning techniques involving glancing fields (wedge and simple IMRT) were primarily affected by patient errors (≈6% loss of coverage near the dorsal field edge and ≈2% near the skin). In contrast, plan deterioration due to breast errors was primarily observed in planning techniques without glancing fields (full IMRT, ≈2% loss of coverage near the dorsal field edge and ≈4% near the skin). Conclusion: The influences of patient and breast errors on the dose distributions are comparable in magnitude for whole breast RT plans, including glancing open fields, rendering simple IMRT the preferred technique. Dose distributions from planning techniques without glancing open fields were more seriously affected by shape changes of the breast, demanding specific attention in partial breast

  17. Error compensation for thermally induced errors on a machine tool

    SciTech Connect

    Krulewich, D.A.

    1996-11-08

    Heat flow from internal and external sources and from the environment creates machine deformations, resulting in positioning errors between the tool and the workpiece. There is no industrially accepted method for thermal error compensation. A simple model has been selected that linearly relates discrete temperature measurements to the deflection. The main difficulty is determining how many temperature sensors are required and where to place them. This research develops a method to determine the number and location of the temperature measurements.
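
    A minimal sketch of such a linear thermal-error model, under assumed variable names (not the report's implementation): deflection is modeled as an affine function of the discrete temperature readings, with coefficients fit by least squares on calibration data; in practice the same fit could be repeated over candidate sensor subsets to decide how many sensors are needed and where to place them.

```python
import numpy as np

def fit_thermal_model(temperatures, deflections):
    """Fit deflection = c0 + sum_i c_i * T_i by ordinary least squares.
    temperatures: (n_samples, n_sensors) array of readings; deflections: (n_samples,)."""
    design = np.column_stack([np.ones(len(deflections)), temperatures])  # intercept column
    coeffs, *_ = np.linalg.lstsq(design, deflections, rcond=None)
    return coeffs

def predict_deflection(coeffs, temperatures):
    """Predict tool-workpiece deflection for new temperature readings."""
    design = np.column_stack([np.ones(temperatures.shape[0]), temperatures])
    return design @ coeffs
```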

  18. Systematic review automation technologies

    PubMed Central

    2014-01-01

    Systematic reviews, a cornerstone of evidence-based medicine, are not produced quickly enough to support clinical practice. The cost of production, availability of the requisite expertise and timeliness are often cited as major contributors to the delay. This detailed survey of the state of the art of information systems designed to support or automate individual tasks in the systematic review, and in particular systematic reviews of randomized controlled clinical trials, reveals a trend toward the convergence of several parallel research projects. We surveyed literature describing informatics systems that support or automate the systematic review process or its individual tasks. Several projects focus on automating, simplifying and/or streamlining specific tasks of the systematic review. Some tasks are already fully automated, while others are still largely manual. In this review, we describe each task and the effect that its automation would have on the entire systematic review process, summarize the existing information system support for each task, and highlight where further research is needed to realize automation of the task. Integration of the systems that automate systematic review tasks may lead to a revised systematic review workflow. We envisage that the optimized workflow will lead to a system in which each systematic review is described as a computer program that automatically retrieves relevant trials, appraises them, extracts and synthesizes data, evaluates the risk of bias, performs meta-analysis calculations, and produces a report in real time. PMID:25005128
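
    As a small example of one calculation such an automated pipeline would perform (illustrative only; function names and numbers are assumptions), the sketch below pools hypothetical trial results with an inverse-variance fixed-effect meta-analysis.

```python
import numpy as np

def fixed_effect_meta(estimates, std_errors):
    """Inverse-variance fixed-effect pooling of per-trial effect estimates
    (e.g., log odds ratios) and their standard errors."""
    estimates = np.asarray(estimates, dtype=float)
    weights = 1.0 / np.asarray(std_errors, dtype=float) ** 2
    pooled = np.sum(weights * estimates) / np.sum(weights)
    pooled_se = np.sqrt(1.0 / np.sum(weights))
    return pooled, pooled_se

# Three hypothetical trials reporting log odds ratios with standard errors.
pooled, se = fixed_effect_meta([-0.35, -0.10, -0.22], [0.15, 0.20, 0.12])
print(f"pooled effect = {pooled:.3f}, 95% CI half-width = {1.96 * se:.3f}")
```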

  19. Errors, error detection, error correction and hippocampal-region damage: data and theories.

    PubMed

    MacKay, Donald G; Johnson, Laura W

    2013-11-01

    This review and perspective article outlines 15 observational constraints on theories of errors, error detection, and error correction, and their relation to hippocampal-region (HR) damage. The core observations come from 10 studies with H.M., an amnesic with cerebellar and HR damage but virtually no neocortical damage. Three studies examined the detection of errors planted in visual scenes (e.g., a bird flying in a fish bowl in a school classroom) and sentences (e.g., I helped themselves to the birthday cake). In all three experiments, H.M. detected reliably fewer errors than carefully matched memory-normal controls. Other studies examined the detection and correction of self-produced errors, with controls for comprehension of the instructions, impaired visual acuity, temporal factors, motoric slowing, forgetting, excessive memory load, lack of motivation, and deficits in visual scanning or attention. In these studies, H.M. corrected reliably fewer errors than memory-normal and cerebellar controls, and his uncorrected errors in speech, object naming, and reading aloud exhibited two consistent features: omission and anomaly. For example, in sentence production tasks, H.M. omitted one or more words in uncorrected encoding errors that rendered his sentences anomalous (incoherent, incomplete, or ungrammatical) reliably more often than controls. Besides explaining these core findings, the theoretical principles discussed here explain H.M.'s retrograde amnesia for once-familiar episodic and semantic information; his anterograde amnesia for novel information; his deficits in visual cognition, sentence comprehension, sentence production, sentence reading, and object naming; and effects of aging on his ability to read isolated low-frequency words aloud. These theoretical principles also explain a wide range of other data on error detection and correction and generate new predictions for future testing. PMID:23999403

  20. Stochastic Models of Human Errors

    NASA Technical Reports Server (NTRS)

    Elshamy, Maged; Elliott, Dawn M. (Technical Monitor)

    2002-01-01

    Humans play an important role in the overall reliability of engineering systems. More often than not, accidents and system failures are traced to human error. Therefore, in order to have a meaningful system risk analysis, the reliability of the human element must be taken into consideration. Describing the human error process with mathematical models is key to analyzing its contributing factors. The objective of this research effort is therefore to establish stochastic models, substantiated by a sound theoretical foundation, that address the occurrence of human errors in the processing of the Space Shuttle.
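
    A minimal illustration of one simple stochastic model of error occurrence (an assumption for illustration, not the study's actual model): treat errors during a processing task as a homogeneous Poisson process, estimate the rate from historical counts, and use it to compute the probability of at least one error in a critical operation.

```python
import numpy as np

rng = np.random.default_rng(0)

observed_errors = np.array([2, 0, 1, 3, 1, 0, 2])      # hypothetical error counts per shift
shift_hours = 8.0
rate_per_hour = observed_errors.mean() / shift_hours    # maximum-likelihood estimate of lambda

# Probability of at least one error in a 4-hour critical operation,
# checked against a quick Monte Carlo simulation.
t = 4.0
p_at_least_one = 1.0 - np.exp(-rate_per_hour * t)
simulated = rng.poisson(rate_per_hour * t, size=100_000)
print(f"analytic: {p_at_least_one:.3f}, simulated: {(simulated > 0).mean():.3f}")
```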