Sample records for control systematic errors

  1. SU-D-BRD-07: Evaluation of the Effectiveness of Statistical Process Control Methods to Detect Systematic Errors For Routine Electron Energy Verification

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Parker, S

    2015-06-15

    Purpose: To evaluate the ability of statistical process control methods to detect systematic errors when using a two-dimensional (2D) detector array for routine electron beam energy verification. Methods: Electron beam energy constancy was measured using an aluminum wedge and a 2D diode array on four linear accelerators. Process control limits were established. Measurements were recorded in control charts and compared with both calculated process control limits and TG-142 recommended specification limits. The data were tested for normality, process capability and process acceptability. Additional measurements were recorded while systematic errors were intentionally introduced. Systematic errors included shifts in the alignment of the wedge, incorrect orientation of the wedge, and incorrect array calibration. Results: Control limits calculated for each beam were smaller than the recommended specification limits. Process capability and process acceptability ratios were greater than one in all cases. All data were normally distributed. Shifts in the alignment of the wedge were most apparent for low energies. The smallest shift (0.5 mm) was detectable using process control limits in some cases, while the largest shift (2 mm) was detectable using specification limits in only one case. The wedge orientation tested did not affect the measurements, as it did not change the thickness of aluminum over the detectors of interest. Array calibration dependence varied with energy and selected array calibration: 6 MeV was the least sensitive to array calibration selection, while 16 MeV was the most sensitive. Conclusion: Statistical process control methods demonstrated that the data were normally distributed, that the process was capable of meeting specifications, and that the process was centered within the specification limits. Though not all systematic errors were distinguishable from random errors, process control limits increased the ability to detect systematic errors during routine measurement of electron beam energy constancy.
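
    A minimal sketch of the statistical machinery involved, assuming an individuals (X-mR) control chart and using Cp/Cpk as stand-ins for the capability and acceptability ratios; the abstract does not give the exact formulas used, and the readings below are hypothetical:

      import numpy as np

      def individuals_chart_limits(x):
          # X-mR chart: center line +/- 2.66 * mean moving range
          x = np.asarray(x, dtype=float)
          mr_bar = np.abs(np.diff(x)).mean()
          return x.mean() - 2.66 * mr_bar, x.mean() + 2.66 * mr_bar

      def capability(x, lsl, usl):
          # Cp and Cpk against lower/upper specification limits
          s = np.std(x, ddof=1)
          cp = (usl - lsl) / (6 * s)
          cpk = min(usl - x.mean(), x.mean() - lsl) / (3 * s)
          return cp, cpk

      # Hypothetical energy-constancy readings (% deviation from baseline)
      readings = np.array([0.1, -0.2, 0.0, 0.3, -0.1, 0.2, 0.0, -0.3, 0.1, 0.0])
      print(individuals_chart_limits(readings))
      print(capability(readings, lsl=-2.0, usl=2.0))  # TG-142-style +/-2% spec limits

    Ratios greater than one then indicate a process that fits within, and is centered in, the specification limits, matching the abstract's finding.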

  2. Dynamically correcting two-qubit gates against any systematic logical error

    NASA Astrophysics Data System (ADS)

    Calderon Vargas, Fernando Antonio

    The reliability of quantum information processing depends on the ability to deal with noise and error in an efficient way. A significant source of error in many settings is coherent, systematic gate error. This work introduces a set of composite pulse sequences that generate maximally entangling gates and correct all systematic errors within the logical subspace to arbitrary order. These sequences are applicable for any two-qubit interaction Hamiltonian, and make no assumptions about the underlying noise mechanism except that it is constant on the timescale of the operation. The prime use for our results will be in cases where one has limited knowledge of the underlying physical noise and control mechanisms, highly constrained control, or both. In particular, we apply these composite pulse sequences to the quantum system formed by two capacitively coupled singlet-triplet qubits, which is characterized by having constrained control and noise sources that are low frequency and of a non-Markovian nature.

  3. Dynamically corrected gates for singlet-triplet spin qubits with control-dependent errors

    NASA Astrophysics Data System (ADS)

    Jacobson, N. Tobias; Witzel, Wayne M.; Nielsen, Erik; Carroll, Malcolm S.

    2013-03-01

    Magnetic field inhomogeneity due to random polarization of quasi-static local magnetic impurities is a major source of environmentally induced error for singlet-triplet double quantum dot (DQD) spin qubits. Moreover, for singlet-triplet qubits this error may depend on the applied controls. This effect is significant when a static magnetic field gradient is applied to enable full qubit control. Through a configuration interaction analysis, we observe that the dependence of the field inhomogeneity-induced error on the DQD bias voltage can vary systematically as a function of the controls for certain experimentally relevant operating regimes. To account for this effect, we have developed a straightforward prescription for adapting dynamically corrected gate sequences that assume control-independent errors into sequences that compensate for systematic control-dependent errors. We show that accounting for such errors may lead to a substantial increase in gate fidelities.

  4. The effects of recall errors and of selection bias in epidemiologic studies of mobile phone use and cancer risk.

    PubMed

    Vrijheid, Martine; Deltour, Isabelle; Krewski, Daniel; Sanchez, Marie; Cardis, Elisabeth

    2006-07-01

    This paper examines the effects of systematic and random errors in recall and of selection bias in case-control studies of mobile phone use and cancer. These sensitivity analyses are based on Monte Carlo computer simulations and were carried out within the INTERPHONE Study, an international collaborative case-control study in 13 countries. Recall error scenarios simulated plausible values of random and systematic, non-differential and differential recall errors in the amount of mobile phone use reported by study subjects. Plausible values for the recall error were obtained from validation studies. Selection bias scenarios assumed varying selection probabilities for cases and controls, mobile phone users, and non-users. Where possible, these selection probabilities were based on existing information from non-respondents in INTERPHONE. Simulations used exposure distributions based on existing INTERPHONE data and assumed varying levels of the true risk of brain cancer related to mobile phone use. Results suggest that random recall errors of plausible levels can lead to a large underestimation of the risk of brain cancer associated with mobile phone use. Random errors were found to have a larger impact than plausible systematic errors. Differential errors in recall had very little additional impact in the presence of large random errors. Selection bias resulting from underselection of unexposed controls led to J-shaped exposure-response patterns, with risk apparently decreasing at low to moderate exposure levels. The present results, in conjunction with those of the validation studies conducted within the INTERPHONE study, will play an important role in the interpretation of existing and future case-control studies of mobile phone use and cancer risk, including the INTERPHONE study.
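
    The attenuation mechanism described here can be illustrated with a toy Monte Carlo sketch; the exposure distribution, noise level, and true odds ratio below are hypothetical, not the INTERPHONE parameters:

      import numpy as np

      rng = np.random.default_rng(0)
      n = 200_000
      true_use = rng.lognormal(0.0, 1.0, n)                    # true call time
      heavy = true_use > np.median(true_use)                   # true "heavy user"
      p_case = 1.0 / (1.0 + np.exp(-(-3.0 + np.log(1.5) * heavy)))
      case = rng.random(n) < p_case                            # true OR = 1.5

      reported = true_use * np.exp(rng.normal(0.0, 0.8, n))    # random recall error
      heavy_rep = reported > np.median(reported)

      def odds_ratio(exposed, case):
          a = np.sum(exposed & case); b = np.sum(exposed & ~case)
          c = np.sum(~exposed & case); d = np.sum(~exposed & ~case)
          return (a * d) / (b * c)

      print(odds_ratio(heavy, case), odds_ratio(heavy_rep, case))  # ~1.5 vs. attenuated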

  5. Quality Assurance of Chemical Measurements.

    ERIC Educational Resources Information Center

    Taylor, John K.

    1981-01-01

    Reviews aspects of quality control (methods to control errors) and quality assessment (verification that systems are operating within acceptable limits), including an analytical measurement system, quality control by inspection, control charts, systematic errors, and the use of standard reference materials (SRMs), materials for which properties are certified by the National Bureau of Standards.

  6. Effect of MLC leaf position, collimator rotation angle, and gantry rotation angle errors on intensity-modulated radiotherapy plans for nasopharyngeal carcinoma

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bai, Sen; Li, Guangjun; Wang, Maojie

    The purpose of this study was to investigate the effect of multileaf collimator (MLC) leaf position, collimator rotation angle, and accelerator gantry rotation angle errors on intensity-modulated radiotherapy plans for nasopharyngeal carcinoma. To compare dosimetric differences between the simulated plans and the clinical plans using evaluation parameters, 6 patients with nasopharyngeal carcinoma were selected for simulation of systematic and random MLC leaf position errors, collimator rotation angle errors, and accelerator gantry rotation angle errors. Dose distributions were highly sensitive to systematic MLC leaf position errors, with the sensitivity depending on field size. When the systematic MLC position errors were 0.5, 1, and 2 mm, the maximum values of the mean dose deviation, observed in the parotid glands, were 4.63%, 8.69%, and 18.32%, respectively. The dosimetric effect was comparatively small for systematic MLC shift errors. For random MLC errors up to 2 mm and collimator and gantry rotation angle errors up to 0.5°, the dosimetric effect was negligible. We suggest that quality control be conducted regularly for MLC leaves, so as to ensure that systematic MLC leaf position errors are within 0.5 mm. Because the dosimetric effect of 0.5° collimator and gantry rotation angle errors is negligible, setting a proper threshold for allowed errors of the collimator and gantry rotation angles may increase treatment efficacy and reduce treatment time.

  7. Flux control coefficients determined by inhibitor titration: the design and analysis of experiments to minimize errors.

    PubMed Central

    Small, J R

    1993-01-01

    This paper is a study into the effects of experimental error on the estimated values of flux control coefficients obtained using specific inhibitors. Two possible techniques for analysing the experimental data are compared: a simple extrapolation method (the so-called graph method) and a non-linear function fitting method. For these techniques, the sources of systematic errors are identified and the effects of systematic and random errors are quantified, using both statistical analysis and numerical computation. It is shown that the graph method is very sensitive to random errors and that, under all conditions studied, the fitting method outperformed the graph method, even when the assumptions underlying the fitted function did not hold. Possible ways of designing experiments to minimize the effects of experimental errors are analysed and discussed. PMID:8257434
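
    A schematic of the two estimation strategies being compared, assuming a toy hyperbolic inhibition model with simulated noise (the paper's fitted functions and error analysis are more detailed); the control coefficient is proportional to the initial slope dJ/di at i = 0:

      import numpy as np
      from scipy.optimize import curve_fit

      rng = np.random.default_rng(1)

      def flux(i, j0, ki):                # toy model: J(i) = J0 * Ki / (Ki + i)
          return j0 * ki / (ki + i)

      i = np.linspace(0.0, 5.0, 12)
      j = flux(i, 10.0, 2.0) + rng.normal(0.0, 0.2, i.size)    # noisy titration data

      # "Graph" method: finite-difference slope from the first two points
      slope_graph = (j[1] - j[0]) / (i[1] - i[0])

      # Fitting method: nonlinear fit, then the analytic slope at i = 0 (= -J0/Ki)
      (j0, ki), _ = curve_fit(flux, i, j, p0=[9.0, 1.0])
      slope_fit = -j0 / ki

      # The fit is far less sensitive to noise in the first points and avoids
      # the finite-difference bias of the simple extrapolation.
      print(slope_graph, slope_fit, -10.0 / 2.0)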

  8. Error Sources in Asteroid Astrometry

    NASA Technical Reports Server (NTRS)

    Owen, William M., Jr.

    2000-01-01

    Asteroid astrometry, like any other scientific measurement process, is subject to both random and systematic errors, not all of which are under the observer's control. To design an astrometric observing program or to improve an existing one requires knowledge of the various sources of error, how different errors affect one's results, and how various errors may be minimized by careful observation or data reduction techniques.

  9. Adverse effects in dual-feed interferometry

    NASA Astrophysics Data System (ADS)

    Colavita, M. Mark

    2009-11-01

    Narrow-angle dual-star interferometric astrometry can provide very high accuracy in the presence of the Earth's turbulent atmosphere. However, to exploit the high atmospherically-limited accuracy requires control of systematic errors in measurement of the interferometer baseline, internal OPDs, and fringe phase. In addition, as high photometric SNR is required, care must be taken to maximize throughput and coherence to obtain high accuracy on faint stars. This article reviews the key aspects of the dual-star approach and implementation, the main contributors to the systematic error budget, and the coherence terms in the photometric error budget.

  10. SU-E-T-613: Dosimetric Consequences of Systematic MLC Leaf Positioning Errors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kathuria, K; Siebers, J

    2014-06-01

    Purpose: The purpose of this study is to determine the dosimetric consequences of systematic MLC leaf positioning errors for clinical IMRT patient plans, so as to establish detection tolerances for quality assurance programs. Materials and Methods: Dosimetric consequences were simulated by extracting MLC delivery instructions from the TPS, altering the file by the specified error, reloading the delivery instructions into the TPS, recomputing dose, and extracting dose-volume metrics for one head-and-neck and one prostate patient. Machine error was simulated by offsetting MLC leaves in Pinnacle in a systematic way. Three different algorithms were followed for these systematic offsets, as follows: a systematic sequential one-leaf offset (one leaf offset in one segment per beam), a systematic uniform one-leaf offset (the same one-leaf offset per segment per beam), and a systematic offset of a given number of leaves picked uniformly at random from a given number of segments (5 out of 10 total). Dose to the PTV and normal tissue was simulated. Results: A systematic 5 mm offset of 1 leaf for all delivery segments of all beams resulted in a maximum PTV D98 deviation of 1%. Results showed very low dose error in all reasonably possible machine configurations, rare or otherwise, that could be simulated. Very low error in dose to the PTV and OARs was shown in all possible cases of one leaf per beam per segment being offset (<1%), or of only one leaf per beam being offset (<0.2%). The errors resulting from a high number of adjacent leaves (maximum of 5 out of 60 total leaf pairs) being simultaneously offset in many (5) of the control points (total 10–18 in all beams) per beam, in both the PTV and the OARs analyzed, were similarly low (<2–3%). Conclusions: The above results show that patient shifts and anatomical changes are the main source of errors in delivered dose, not machine delivery. These two sources of error are “visually complementary” and uncorrelated (albeit not additive in the final error), and error resulting from machine delivery can easily be incorporated into an error model based purely on tumor motion.
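
    A minimal sketch of the three offset schemes, assuming a bare segments-by-leaves array of leaf positions (the study itself perturbed Pinnacle delivery files rather than arrays):

      import numpy as np

      rng = np.random.default_rng(4)

      def sequential_one_leaf(bank, shift_mm):
          # One reading of the sequential scheme: a different single leaf
          # offset in each delivery segment
          out = bank.copy()
          for s in range(out.shape[0]):
              out[s, s % out.shape[1]] += shift_mm
          return out

      def uniform_one_leaf(bank, leaf, shift_mm):
          # The same single leaf offset in every segment
          out = bank.copy()
          out[:, leaf] += shift_mm
          return out

      def random_leaves(bank, shift_mm, n_segments=5, n_leaves=5):
          # A fixed number of leaves offset in randomly chosen segments
          out = bank.copy()
          for s in rng.choice(out.shape[0], n_segments, replace=False):
              out[s, rng.choice(out.shape[1], n_leaves, replace=False)] += shift_mm
          return out

      bank = np.zeros((10, 60))   # 10 segments x 60 leaf pairs, positions in mm
      print(random_leaves(bank, 2.0).sum())   # total displacement introduced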

  11. Accuracy Improvement of Multi-Axis Systems Based on Laser Correction of Volumetric Geometric Errors

    NASA Astrophysics Data System (ADS)

    Teleshevsky, V. I.; Sokolov, V. A.; Pimushkin, Ya I.

    2018-04-01

    The article describes a volumetric geometric error correction method for CNC-controlled multi-axis systems (machine tools, CMMs, etc.), using Kalman’s concept of “Control and Observation”. A versatile multi-function laser interferometer serves as the observer, measuring the machine’s error functions. A systematic error map of the machine’s workspace is produced from these error-function measurements, and the error map drives the error correction strategy. The article proposes a new method of forming this strategy, based on the error distribution within the machine’s workspace and a CNC-program postprocessor; the postprocessor yields minimal error values within the largest possible workspace zone. The results are confirmed by error correction of precision CNC machine tools.
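
    A minimal sketch of the postprocessor idea, assuming trilinear interpolation of a measured error map that is subtracted from commanded coordinates; the grid, error data, and single-axis treatment are illustrative:

      import numpy as np
      from scipy.interpolate import RegularGridInterpolator

      # Hypothetical error map: measured x-axis positioning error (mm) on a
      # coarse grid spanning a 500 mm cubic workspace
      xs = ys = zs = np.linspace(0.0, 500.0, 6)
      err_x = np.random.default_rng(2).normal(0.0, 0.01, (6, 6, 6))
      interp_x = RegularGridInterpolator((xs, ys, zs), err_x)

      def postprocess(point):
          # Subtract the interpolated volumetric error from a commanded point
          p = np.asarray(point, dtype=float)
          p[0] -= float(interp_x(p))      # x axis only, for brevity
          return p

      print(postprocess([250.0, 100.0, 50.0]))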

  12. Online automatic tuning and control for fed-batch cultivation

    PubMed Central

    van Straten, Gerrit; van der Pol, Leo A.; van Boxtel, Anton J. B.

    2007-01-01

    Performance of controllers applied in biotechnological production is often below expectation. Online automatic tuning has the capability to improve control performance by adjusting control parameters. This work presents automatic tuning approaches for model reference specific growth rate control during fed-batch cultivation. The approaches are direct methods that use the error between observed specific growth rate and its set point; systematic perturbations of the cultivation are not necessary. Two automatic tuning methods proved to be efficient, in which the adaptation rate is based on a combination of the error, squared error and integral error. These methods are relatively simple and robust against disturbances, parameter uncertainties, and initialization errors. Application of the specific growth rate controller yields a stable system. The controller and automatic tuning methods are qualified by simulations and laboratory experiments with Bordetella pertussis. PMID:18157554
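
    A discrete-time sketch of the adaptation idea, in which a controller gain is driven by a weighted combination of the error, the (signed) squared error, and the integral error; the coefficients and structure are illustrative, not the paper's actual tuning laws:

      def update_gain(K, e, e_int, dt, g1=0.05, g2=0.01, g3=0.002):
          # e = mu_setpoint - mu_observed (specific growth rate tracking error)
          e_int += e * dt
          K += dt * (g1 * e + g2 * e * abs(e) + g3 * e_int)
          return K, e_int

      K, e_int = 1.0, 0.0
      for e in [0.05, 0.04, 0.02, 0.01, 0.0]:   # error shrinking over time
          K, e_int = update_gain(K, e, e_int, dt=60.0)
      print(K, e_int)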

  13. Variation across mitochondrial gene trees provides evidence for systematic error: How much gene tree variation is biological?

    PubMed

    Richards, Emilie J; Brown, Jeremy M; Barley, Anthony J; Chong, Rebecca A; Thomson, Robert C

    2018-02-19

    The use of large genomic datasets in phylogenetics has highlighted extensive topological variation across genes. Much of this discordance is assumed to result from biological processes. However, variation among gene trees can also be a consequence of systematic error driven by poor model fit, and the relative importance of biological versus methodological factors in explaining gene tree variation is a major unresolved question. Using mitochondrial genomes to control for biological causes of gene tree variation, we estimate the extent of gene tree discordance driven by systematic error and employ posterior prediction to highlight the role of model fit in producing this discordance. We find that the amount of discordance among mitochondrial gene trees is similar to the amount of discordance found in other studies that assume only biological causes of variation. This similarity suggests that the role of systematic error in generating gene tree variation is underappreciated and critical evaluation of fit between assumed models and the data used for inference is important for the resolution of unresolved phylogenetic questions.

  14. Impact of random and systematic recall errors and selection bias in case-control studies on mobile phone use and brain tumors in adolescents (CEFALO study).

    PubMed

    Aydin, Denis; Feychting, Maria; Schüz, Joachim; Andersen, Tina Veje; Poulsen, Aslak Harbo; Prochazka, Michaela; Klaeboe, Lars; Kuehni, Claudia E; Tynes, Tore; Röösli, Martin

    2011-07-01

    Whether the use of mobile phones is a risk factor for brain tumors in adolescents is currently being studied. Case-control studies investigating this possible relationship are prone to recall error and selection bias. We assessed the potential impact of random and systematic recall error and selection bias on odds ratios (ORs) by performing simulations based on real data from an ongoing case-control study of mobile phones and brain tumor risk in children and adolescents (CEFALO study). Simulations were conducted for two mobile phone exposure categories: regular and heavy use. Our choice of levels of recall error was guided by a validation study that compared objective network operator data with the self-reported amount of mobile phone use in CEFALO. In our validation study, cases overestimated their number of calls by 9% on average and controls by 34%. Cases also overestimated their duration of calls by 52% on average and controls by 163%. The participation rates in CEFALO were 83% for cases and 71% for controls. In a variety of scenarios, the combined impact of recall error and selection bias on the estimated ORs was complex. These simulations are useful for the interpretation of previous case-control studies on brain tumors and mobile phone use in adults as well as for the interpretation of future studies on adolescents.

  15. Nature versus nurture: A systematic approach to elucidate gene-environment interactions in the development of myopic refractive errors.

    PubMed

    Miraldi Utz, Virginia

    2017-01-01

    Myopia is the most common eye disorder and a major cause of visual impairment worldwide. As the incidence of myopia continues to rise, the need to further understand the complex roles of molecular and environmental factors controlling variation in refractive error is of increasing importance. Tkatchenko and colleagues applied a systematic approach using a combination of gene set enrichment analysis, genome-wide association studies, and functional analysis of a murine model to identify a myopia susceptibility gene, APLP2. Differential expression of refractive error was associated with time spent reading for those with low-frequency variants in this gene. This provides support for the longstanding hypothesis of gene-environment interactions in refractive error development.

  16. A constant altitude flight survey method for mapping atmospheric ambient pressures and systematic radar errors

    NASA Technical Reports Server (NTRS)

    Larson, T. J.; Ehernberger, L. J.

    1985-01-01

    The flight test technique described uses controlled survey runs to determine horizontal atmospheric pressure variations and systematic altitude errors that result from space positioning measurements. The survey data can be used not only for improved air data calibrations, but also for studies of atmospheric structure and space positioning accuracy performance. The examples presented cover a wide range of radar tracking conditions for both subsonic and supersonic flight to an altitude of 42,000 ft.

  17. Analytical quality goals derived from the total deviation from patients' homeostatic set points, with a margin for analytical errors.

    PubMed

    Bolann, B J; Asberg, A

    2004-01-01

    The deviation of test results from patients' homeostatic set points in steady-state conditions may complicate interpretation of the results and the comparison of results with clinical decision limits. In this study the total deviation from the homeostatic set point is defined as the maximum absolute deviation for 95% of measurements, and we present analytical quality requirements that prevent analytical error from increasing this deviation to more than about 12% above the value caused by biology alone. These quality requirements are: 1) the stable systematic error should be approximately zero, and 2) a systematic error that will be detected by the control program with 90% probability should not be larger than half the value of the combined analytical and intra-individual standard deviation. As a result, when the most common control rules are used, the analytical standard deviation may be up to 0.15 times the intra-individual standard deviation. Analytical improvements beyond these requirements have little impact on the interpretability of measurement results.
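
    A worked numeric example of the two requirements, assuming an illustrative intra-individual standard deviation:

      import math

      s_i = 5.0                      # intra-individual SD, illustrative units
      s_a = 0.15 * s_i               # allowed analytical SD <= 0.15 * s_i
      s_comb = math.hypot(s_a, s_i)  # combined analytical + intra-individual SD
      max_bias = 0.5 * s_comb        # largest systematic error that the control
                                     # program must detect with 90% probability
      print(s_a, s_comb, max_bias)   # 0.75, ~5.06, ~2.53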

  18. Improved model predictive control of resistive wall modes by error field estimator in EXTRAP T2R

    NASA Astrophysics Data System (ADS)

    Setiadi, A. C.; Brunsell, P. R.; Frassinetti, L.

    2016-12-01

    Many implementations of model-based control for toroidal plasmas have shown better performance than conventional feedback controllers. One prerequisite of model-based control is the availability of a control-oriented model. Such a model can be obtained empirically through a systematic procedure called system identification, and it is used in this work to design a model predictive controller that stabilizes multiple resistive wall modes in the EXTRAP T2R reversed-field pinch. Model predictive control is an advanced control method that optimizes the future behaviour of a system. Furthermore, this paper discusses an additional use of the empirical model: estimating the error field in EXTRAP T2R. Two potential estimation methods are discussed. The error field estimator is then combined with the model predictive controller and yields better radial magnetic field suppression.

  19. Error Analysis of Indirect Broadband Monitoring of Multilayer Optical Coatings using Computer Simulations

    NASA Astrophysics Data System (ADS)

    Semenov, Z. V.; Labusov, V. A.

    2017-11-01

    Results of studying the errors of indirect monitoring by means of computer simulations are reported. The monitoring method is based on measuring spectra of reflection from additional monitoring substrates in a wide spectral range. Special software (Deposition Control Simulator) is developed, which allows one to estimate the influence of the monitoring system parameters (noise of the photodetector array, operating spectral range of the spectrometer and errors of its calibration in terms of wavelengths, drift of the radiation source intensity, and errors in the refractive index of deposited materials) on the random and systematic errors of deposited layer thickness measurements. The direct and inverse problems of multilayer coatings are solved using the OptiReOpt library. Curves of the random and systematic errors of measurements of the deposited layer thickness as functions of the layer thickness are presented for various values of the system parameters. Recommendations are given on using the indirect monitoring method for the purpose of reducing the layer thickness measurement error.

  20. Chiral extrapolation of the leading hadronic contribution to the muon anomalous magnetic moment

    NASA Astrophysics Data System (ADS)

    Golterman, Maarten; Maltman, Kim; Peris, Santiago

    2017-04-01

    A lattice computation of the leading-order hadronic contribution to the muon anomalous magnetic moment can potentially help reduce the error on the Standard Model prediction for this quantity, if sufficient control of all systematic errors affecting such a computation can be achieved. One of these systematic errors is that associated with the extrapolation to the physical pion mass from values on the lattice larger than the physical pion mass. We investigate this extrapolation assuming lattice pion masses in the range of 200 to 400 MeV with the help of two-loop chiral perturbation theory, and we find that such an extrapolation is unlikely to lead to control of this systematic error at the 1% level. This remains true even if various tricks to improve the reliability of the chiral extrapolation employed in the literature are taken into account. In addition, while chiral perturbation theory also predicts the dependence on the pion mass of the leading-order hadronic contribution to the muon anomalous magnetic moment as the chiral limit is approached, this prediction turns out to be of no practical use because the physical pion mass is larger than the muon mass that sets the scale for the onset of this behavior.

  1. Systematic instruction for individuals with acquired brain injury: Results of a randomized controlled trial

    PubMed Central

    Powell, Laurie Ehlhardt; Glang, Ann; Ettel, Deborah; Todis, Bonnie; Sohlberg, McKay; Albin, Richard

    2012-01-01

    The goal of this study was to experimentally evaluate systematic instruction compared with trial-and-error learning (conventional instruction) applied to assistive technology for cognition (ATC), in a double-blind, pretest-posttest, randomized controlled trial. Twenty-nine persons with moderate-severe cognitive impairments due to acquired brain injury (15 in the systematic instruction group; 14 in conventional instruction) completed the study. Both groups received twelve 45-minute individual training sessions targeting selected skills on the Palm Tungsten E2 personal digital assistant (PDA). A criterion-based assessment of PDA skills was used to evaluate accuracy, fluency/efficiency, maintenance, and generalization of skills. There were no significant differences between groups at immediate posttest with regard to accuracy and fluency. However, significant differences emerged at 30-day follow-up in favor of systematic instruction. Furthermore, systematic instruction participants performed significantly better at immediate posttest in generalizing trained PDA skills when interacting with people other than the instructor. These results demonstrate that systematic instruction applied to ATC results in better skill maintenance and generalization than trial-and-error learning for individuals with moderate-severe cognitive impairments due to acquired brain injury. Implications, study limitations, and directions for future research are discussed. PMID:22264146

  2. 10 CFR 75.23 - Operating records.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... Accounting and Control for Facilities § 75.23 Operating records. The operating records required by § 75.21... to control the quality of measurements, and the derived estimates of random and systematic error; (c...

  3. Identification and correction of systematic error in high-throughput sequence data

    PubMed Central

    2011-01-01

    Background A feature common to all DNA sequencing technologies is the presence of base-call errors in the sequenced reads. The implications of such errors are application specific, ranging from minor informatics nuisances to major problems affecting biological inferences. Recently developed "next-gen" sequencing technologies have greatly reduced the cost of sequencing, but have been shown to be more error prone than previous technologies. Both position specific (depending on the location in the read) and sequence specific (depending on the sequence in the read) errors have been identified in Illumina and Life Technology sequencing platforms. We describe a new type of systematic error that manifests as statistically unlikely accumulations of errors at specific genome (or transcriptome) locations. Results We characterize and describe systematic errors using overlapping paired reads from high-coverage data. We show that such errors occur in approximately 1 in 1000 base pairs, and that they are highly replicable across experiments. We identify motifs that are frequent at systematic error sites, and describe a classifier that distinguishes heterozygous sites from systematic error. Our classifier is designed to accommodate data from experiments in which the allele frequencies at heterozygous sites are not necessarily 0.5 (such as in the case of RNA-Seq), and can be used with single-end datasets. Conclusions Systematic errors can easily be mistaken for heterozygous sites in individuals, or for SNPs in population analyses. Systematic errors are particularly problematic in low coverage experiments, or in estimates of allele-specific expression from RNA-Seq data. Our characterization of systematic error has allowed us to develop a program, called SysCall, for identifying and correcting such errors. We conclude that correction of systematic errors is important to consider in the design and interpretation of high-throughput sequencing experiments. PMID:22099972
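
    A highly simplified stand-in for the kind of test such a classifier can apply; the published SysCall classifier uses more features, but strand balance of the alternate allele is one common signal:

      from scipy.stats import binomtest

      def looks_systematic(alt_fwd, alt_rev, alpha=1e-3):
          # True heterozygous alleles should appear on both read directions
          # roughly equally; systematic base-call errors tend to pile up on
          # one direction.
          n = alt_fwd + alt_rev
          return n > 0 and binomtest(alt_fwd, n, 0.5).pvalue < alpha

      print(looks_systematic(24, 1))    # strongly strand-biased -> likely systematic
      print(looks_systematic(11, 13))   # balanced -> consistent with heterozygosity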

  4. The Experimental Probe of Inflationary Cosmology: A Mission Concept Study for NASA's Einstein Inflation Probe

    NASA Technical Reports Server (NTRS)

    2008-01-01

    When we began our study we sought to answer five fundamental implementation questions: 1) can foregrounds be measured and subtracted to a sufficiently low level?; 2) can systematic errors be controlled?; 3) can we develop optics with sufficiently large throughput, low polarization, and frequency coverage from 30 to 300 GHz?; 4) is there a technical path to realizing the sensitivity and systematic error requirements?; and 5) what are the specific mission architecture parameters, including cost? Detailed answers to these questions are contained in this report.

  5. SYSTEMATIC EFFECTS IN POLARIZING FOURIER TRANSFORM SPECTROMETERS FOR COSMIC MICROWAVE BACKGROUND OBSERVATIONS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nagler, Peter C.; Tucker, Gregory S.; Fixsen, Dale J.

    The detection of the primordial B-mode polarization signal of the cosmic microwave background (CMB) would provide evidence for inflation. Yet as has become increasingly clear, the detection of such a faint signal requires an instrument with both wide frequency coverage to reject foregrounds and excellent control over instrumental systematic effects. Using a polarizing Fourier transform spectrometer (FTS) for CMB observations meets both of these requirements. In this work, we present an analysis of instrumental systematic effects in polarizing FTSs, using the Primordial Inflation Explorer (PIXIE) as a worked example. We analytically solve for the most important systematic effects inherent to the FTS—emissive optical components, misaligned optical components, sampling and phase errors, and spin-synchronous effects—and demonstrate that residual systematic error terms after corrections will all be at the sub-nK level, well below the predicted 100 nK B-mode signal.

  6. Applying lessons learned to enhance human performance and reduce human error for ISS operations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nelson, W.R.

    1999-01-01

    A major component of reliability, safety, and mission success for space missions is ensuring that the humans involved (flight crew, ground crew, mission control, etc.) perform their tasks and functions as required. This includes compliance with training and procedures during normal conditions, and successful compensation when malfunctions or unexpected conditions occur. A very significant issue that affects human performance in space flight is human error. Human errors can invalidate carefully designed equipment and procedures. If certain errors combine with equipment failures or design flaws, mission failure or loss of life can occur. The control of human error during operation of the International Space Station (ISS) will be critical to the overall success of the program. As experience from Mir operations has shown, human performance plays a vital role in the success or failure of long duration space missions. The Department of Energy's Idaho National Engineering and Environmental Laboratory (INEEL) is developing a systematic approach to enhance human performance and reduce human errors for ISS operations. This approach is based on the systematic identification and evaluation of lessons learned from past space missions such as Mir to enhance the design and operation of ISS. This paper will describe previous INEEL research on human error sponsored by NASA and how it can be applied to enhance human reliability for ISS.

  7. Applying lessons learned to enhance human performance and reduce human error for ISS operations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nelson, W.R.

    1998-09-01

    A major component of reliability, safety, and mission success for space missions is ensuring that the humans involved (flight crew, ground crew, mission control, etc.) perform their tasks and functions as required. This includes compliance with training and procedures during normal conditions, and successful compensation when malfunctions or unexpected conditions occur. A very significant issue that affects human performance in space flight is human error. Human errors can invalidate carefully designed equipment and procedures. If certain errors combine with equipment failures or design flaws, mission failure or loss of life can occur. The control of human error during operation of the International Space Station (ISS) will be critical to the overall success of the program. As experience from Mir operations has shown, human performance plays a vital role in the success or failure of long duration space missions. The Department of Energy's Idaho National Engineering and Environmental Laboratory (INEEL) has developed a systematic approach to enhance human performance and reduce human errors for ISS operations. This approach is based on the systematic identification and evaluation of lessons learned from past space missions such as Mir to enhance the design and operation of ISS. This paper describes previous INEEL research on human error sponsored by NASA and how it can be applied to enhance human reliability for ISS.

  8. Importance of implementing an analytical quality control system in a core laboratory.

    PubMed

    Marques-Garcia, F; Garcia-Codesal, M F; Caro-Narros, M R; Contreras-SanFeliciano, T

    2015-01-01

    The aim of the clinical laboratory is to provide useful information for screening, diagnosis and monitoring of disease. The laboratory should ensure the quality of the extra-analytical and analytical processes, based on set criteria. To do this, it develops and implements a system of internal quality control, designed to detect errors, and compares its data with other laboratories through external quality control. In this way it has a tool to detect whether the set objectives are being fulfilled and, in the case of errors, to take corrective actions and ensure the reliability of the results. This article describes the design and implementation of an internal quality control protocol, as well as its periodic assessment (at 6-month intervals) to determine compliance with pre-determined specifications (Stockholm Consensus). A total of 40 biochemical and 15 immunochemical methods were evaluated using three different control materials. Next, a standard operating procedure was planned to develop a system of internal quality control that included calculating the error of the analytical process, setting quality specifications, and verifying compliance. The quality control data were then statistically depicted as means, standard deviations, and coefficients of variation, as well as systematic, random, and total errors. The quality specifications were then fixed and the operational rules to apply in the analytical process were calculated. Finally, our data were compared with those of other laboratories through an external quality assurance program. The development of an analytical quality control system is a highly structured process. It should be designed to detect errors that compromise the stability of the analytical process. The laboratory should review its quality indicators (systematic, random and total error) at regular intervals to ensure that they meet pre-determined specifications and, if not, apply the appropriate corrective actions.

  9. Systematic sparse matrix error control for linear scaling electronic structure calculations.

    PubMed

    Rubensson, Emanuel H; Sałek, Paweł

    2005-11-30

    Efficient truncation criteria for multiatom blocked sparse matrix operations in ab initio calculations are proposed. As system size increases, so does the need to stay on top of errors while still achieving high performance. A variant of blocked sparse matrix algebra that achieves strict error control with good performance is proposed. The central idea is that the condition for dropping a certain submatrix should depend not only on the magnitude of that particular submatrix, but also on which other submatrices are dropped; the decision to remove a submatrix is based on the contribution its removal would make to the error in the chosen norm. We study the effect of an accumulated truncation error in iterative algorithms like trace-correcting density matrix purification, and present one way to reduce the initial exponential growth of this error. The presented error control for a sparse blocked matrix toolbox allows optimal performance to be achieved by performing only the operations needed to maintain the requested level of accuracy.
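
    A minimal sketch of the central idea, assuming the Frobenius norm and illustrative block sizes: blocks are dropped in order of increasing norm while the norm of the total perturbation (all dropped blocks together) stays below the threshold.

      import numpy as np

      def truncate_blocks(blocks, tau):
          # Keep dropping the smallest submatrices while the combined error
          # of everything dropped so far stays below tau
          order = sorted(blocks, key=lambda k: np.linalg.norm(blocks[k]))
          dropped, acc2 = set(), 0.0
          for key in order:
              n2 = np.linalg.norm(blocks[key]) ** 2
              if (acc2 + n2) ** 0.5 > tau:
                  break
              acc2 += n2
              dropped.add(key)
          return {k: b for k, b in blocks.items() if k not in dropped}

      rng = np.random.default_rng(5)
      blocks = {(i, j): rng.normal(0.0, 10.0 ** -(i + j), (4, 4))
                for i in range(3) for j in range(3)}
      print(len(truncate_blocks(blocks, tau=1e-3)))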

  10. A Systematic Approach to Error Free Telemetry

    DTIC Science & Technology

    2017-06-28

    Technical Information Memorandum 412TW-TIM-17-03, A Systematic Approach to Error-Free Telemetry, submitted by the Commander, 412th Test Wing, Edwards AFB, California 93524. Distribution A: approved for public release; distribution is unlimited. Dates covered: from February 2016.

  11. Pion mass dependence of the HVP contribution to muon g - 2

    NASA Astrophysics Data System (ADS)

    Golterman, Maarten; Maltman, Kim; Peris, Santiago

    2018-03-01

    One of the systematic errors in some of the current lattice computations of the HVP contribution to the muon anomalous magnetic moment g - 2 is that associated with the extrapolation to the physical pion mass. We investigate this extrapolation assuming lattice pion masses in the range of 220 to 440 MeV with the help of two-loop chiral perturbation theory, and find that such an extrapolation is unlikely to lead to control of this systematic error at the 1% level. This remains true even if various proposed tricks to improve the chiral extrapolation are taken into account.

  12. A Systematic Error Correction Method for TOVS Radiances

    NASA Technical Reports Server (NTRS)

    Joiner, Joanna; Rokke, Laurie; Einaudi, Franco (Technical Monitor)

    2000-01-01

    Treatment of systematic errors is crucial for the successful use of satellite data in a data assimilation system. Systematic errors in TOVS radiance measurements and radiative transfer calculations can be as large or larger than random instrument errors. The usual assumption in data assimilation is that observational errors are unbiased. If biases are not effectively removed prior to assimilation, the impact of satellite data will be lessened and can even be detrimental. Treatment of systematic errors is important for short-term forecast skill as well as the creation of climate data sets. A systematic error correction algorithm has been developed as part of a 1D radiance assimilation. This scheme corrects for spectroscopic errors, errors in the instrument response function, and other biases in the forward radiance calculation for TOVS. Such algorithms are often referred to as tuning of the radiances. The scheme is able to account for the complex, air-mass dependent biases that are seen in the differences between TOVS radiance observations and forward model calculations. We will show results of systematic error correction applied to the NOAA 15 Advanced TOVS as well as its predecessors. We will also discuss the ramifications of inter-instrument bias with a focus on stratospheric measurements.
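
    A minimal sketch of air-mass-dependent bias correction by regression; the predictors and data are illustrative, and the operational scheme is more involved:

      import numpy as np

      def fit_bias(predictors, omc):
          # Least-squares fit of observed-minus-calculated radiance departures
          # (omc) to air-mass predictors (e.g., model layer thicknesses)
          A = np.column_stack([np.ones(len(omc)), predictors])
          coef, *_ = np.linalg.lstsq(A, omc, rcond=None)
          return coef

      def debias(obs, predictors, coef):
          return obs - (coef[0] + predictors @ coef[1:])

      rng = np.random.default_rng(6)
      pred = rng.normal(size=(1000, 2))          # two hypothetical predictors
      omc = 0.3 + pred @ np.array([0.5, -0.2]) + rng.normal(0.0, 0.1, 1000)
      print(fit_bias(pred, omc))                 # ~[0.3, 0.5, -0.2]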

  13. Systematic ionospheric electron density tilts (SITs) at mid-latitudes and their associated HF bearing errors

    NASA Astrophysics Data System (ADS)

    Tedd, B. L.; Strangeways, H. J.; Jones, T. B.

    1985-11-01

    Systematic ionospheric tilts (SITs) at midlatitudes and the diurnal variation of bearing error for different transmission paths are examined. An explanation of the diurnal variation of bearing error based on the dependence of ionospheric tilt on solar zenith angle and plasma transport processes is presented. The effects of vertical ion drift and of momentum transfer from neutral winds are investigated. During the daytime, transmissions reflect at low heights and photochemical processes control SITs; at night, transmissions reflect at greater heights, where spatial and temporal variations of plasma transport processes influence SITs. An HF ray-tracing technique that uses a three-dimensional ionospheric model based on predictions to simulate SIT-induced bearing errors is described; poor correlation with experimental data is observed, and the causes for this are studied. A second model, based on measured vertical-sounder data, is proposed. Model two is applicable for predicting bearing error over a range of transmission paths and correlates well with experimental data.

  14. An Automatic Quality Control Pipeline for High-Throughput Screening Hit Identification.

    PubMed

    Zhai, Yufeng; Chen, Kaisheng; Zhong, Yang; Zhou, Bin; Ainscow, Edward; Wu, Ying-Ta; Zhou, Yingyao

    2016-09-01

    The correction or removal of signal errors in high-throughput screening (HTS) data is critical to the identification of high-quality lead candidates. Although a number of strategies have been previously developed to correct systematic errors and to remove screening artifacts, they are not universally effective and still require a fair amount of human intervention. We introduce a fully automated quality control (QC) pipeline that can correct generic interplate systematic errors and remove intraplate random artifacts. The new pipeline was first applied to ~100 large-scale historical HTS assays; in silico analysis showed auto-QC led to a noticeably stronger structure-activity relationship. The method was further tested in several independent HTS runs, where QC results were sampled for experimental validation. Significantly increased hit confirmation rates were obtained after the QC steps, confirming that the proposed method was effective in enriching true-positive hits. An implementation of the algorithm is available to the screening community.
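
    One common way to correct positional (row/column) systematic errors on a plate is Tukey median polish; the sketch below is a stand-in for the interplate-correction step, and the published pipeline is considerably more elaborate:

      import numpy as np

      def median_polish_plate(plate, n_iter=10):
          # Iteratively remove additive row and column trends, returning the
          # residual (corrected) signals
          r = plate.astype(float).copy()
          for _ in range(n_iter):
              r -= np.median(r, axis=1, keepdims=True)   # row effects
              r -= np.median(r, axis=0, keepdims=True)   # column effects
          return r

      rng = np.random.default_rng(7)
      plate = rng.normal(100.0, 5.0, (16, 24))   # a 384-well plate of raw signals
      plate[:, 0] += 30.0                        # a systematically biased edge column
      print(np.abs(median_polish_plate(plate)[:, 0]).mean())  # edge bias removed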

  15. The detection of problem analytes in a single proficiency test challenge in the absence of the Health Care Financing Administration rule violations.

    PubMed

    Cembrowski, G S; Hackney, J R; Carey, N

    1993-04-01

    The Clinical Laboratory Improvement Act of 1988 (CLIA 88) has dramatically changed proficiency testing (PT) practices having mandated (1) satisfactory PT for certain analytes as a condition of laboratory operation, (2) fixed PT limits for many of these "regulated" analytes, and (3) an increased number of PT specimens (n = 5) for each testing cycle. For many of these analytes, the fixed limits are much broader than the previously employed Standard Deviation Index (SDI) criteria. Paradoxically, there may be less incentive to identify and evaluate analytically significant outliers to improve the analytical process. Previously described "control rules" to evaluate these PT results are unworkable as they consider only two or three results. We used Monte Carlo simulations of Kodak Ektachem analyzers participating in PT to determine optimal control rules for the identification of PT results that are inconsistent with those from other laboratories using the same methods. The analysis of three representative analytes, potassium, creatine kinase, and iron was simulated with varying intrainstrument and interinstrument standard deviations (si and sg, respectively) obtained from the College of American Pathologists (Northfield, Ill) Quality Assurance Services data and Proficiency Test data, respectively. Analytical errors were simulated in each of the analytes and evaluated in terms of multiples of the interlaboratory SDI. Simple control rules for detecting systematic and random error were evaluated with power function graphs, graphs of probability of error detected vs magnitude of error. Based on the simulation results, we recommend screening all analytes for the occurrence of two or more observations exceeding the same +/- 1 SDI limit. For any analyte satisfying this condition, the mean of the observations should be calculated. For analytes with sg/si ratios between 1.0 and 1.5, a significant systematic error is signaled by the mean exceeding 1.0 SDI. Significant random error is signaled by one observation exceeding the +/- 3-SDI limit or the range of the observations exceeding 4 SDIs. For analytes with higher sg/si, significant systematic or random error is signaled by violation of the screening rule (having at least two observations exceeding the same +/- 1 SDI limit). Random error can also be signaled by one observation exceeding the +/- 1.5-SDI limit or the range of the observations exceeding 3 SDIs. We present a practical approach to the workup of apparent PT errors.
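
    The recommended rules translate almost directly into code; in the sketch below, the five SDI values and the sg/si ratio are supplied by the laboratory:

      def pt_flags(sdi, sg_over_si):
          # Screening rule: two or more results beyond the same +/-1 SDI limit
          if sum(x > 1.0 for x in sdi) < 2 and sum(x < -1.0 for x in sdi) < 2:
              return []
          flags = []
          mean = sum(sdi) / len(sdi)
          spread = max(sdi) - min(sdi)
          if 1.0 <= sg_over_si <= 1.5:
              if abs(mean) > 1.0:
                  flags.append("systematic error")
              if any(abs(x) > 3.0 for x in sdi) or spread > 4.0:
                  flags.append("random error")
          else:   # higher sg/si ratios
              flags.append("systematic or random error (screening rule)")
              if any(abs(x) > 1.5 for x in sdi) or spread > 3.0:
                  flags.append("random error")
          return flags

      print(pt_flags([1.3, 1.5, 0.6, 0.9, 1.2], sg_over_si=1.2))  # ['systematic error']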

  16. Feedback-tuned, noise resilient gates for encoded spin qubits

    NASA Astrophysics Data System (ADS)

    Bluhm, Hendrik

    Spin-1/2 particles form native two-level systems and thus lend themselves to a natural qubit implementation. However, encoding a single qubit in several spins entails benefits, such as reducing the resources necessary for qubit control and protection from certain decoherence channels. While several varieties of such encoded spin qubits have been implemented, accurate control remains challenging, and leakage out of the subspace of valid qubit states is a potential issue. Optimal performance typically requires large pulse amplitudes for fast control, which is prone to systematic errors and prohibits standard control approaches based on Rabi flopping. Furthermore, the exchange interaction typically used to electrically manipulate encoded spin qubits is inherently sensitive to charge noise. I will discuss all-electrical, high-fidelity single-qubit operations for a spin qubit encoded in two electrons in a GaAs double quantum dot. Starting from a set of numerically optimized control pulses, we employ an iterative tuning procedure based on measured error syndromes to remove systematic errors. Randomized benchmarking yields an average gate fidelity exceeding 98% and a leakage rate into invalid states of 0.2%. These gates exhibit a certain degree of resilience to both slow charge and nuclear spin fluctuations due to dynamical correction analogous to a spin echo. Furthermore, the numerical optimization minimizes the impact of fast charge noise. Both types of noise make relevant contributions to gate errors. The general approach is also adaptable to other qubit encodings and exchange-based two-qubit gates.

  17. More on Systematic Error in a Boyle's Law Experiment

    ERIC Educational Resources Information Center

    McCall, Richard P.

    2012-01-01

    A recent article in "The Physics Teacher" describes a method for analyzing a systematic error in a Boyle's law laboratory activity. Systematic errors are important to consider in physics labs because they tend to bias the results of measurements. There are numerous laboratory examples and resources that discuss this common source of error.

  18. On the Quality of Point-Clouds Derived from Sfm-Photogrammetry Applied to UAS Imagery

    NASA Astrophysics Data System (ADS)

    Carbonneau, P.; James, T.

    2014-12-01

    Structure from Motion photogrammetry (SfM-photogrammetry) has recently appeared in the environmental sciences as an impressive tool allowing for the creation of topographic data from unstructured imagery. Several authors have tested the performance of SfM-photogrammetry against that of TLS or dGPS. Whilst the initial results were very promising, there is a growing awareness that systematic deformations occur in DEMs and point-clouds derived from SfM-photogrammetry. Notably, some authors have identified a systematic doming, manifesting as an error that increases with distance from the model centre. Simulation studies have confirmed that this error is due to errors in the calibration of camera distortions. This work aims to further investigate these effects in the presence of real data. We start with a dataset of 220 images acquired from a sUAS. After obtaining an initial self-calibration of the camera lens with Agisoft Photoscan, our method consists of applying systematic perturbations to two key lens parameters: the focal length and the k1 distortion parameter. For each perturbation, a point-cloud was produced and compared to LiDAR data. After deriving the mean and standard deviation of the error residuals (ɛ), a second-order polynomial surface was fitted to the error point-cloud and the peak ɛ was defined as the mathematical extremum of this surface. The results are presented in figure 1, which shows that lens perturbations can induce a range of errors with systematic behaviours. Peak ɛ is primarily controlled by k1, with a secondary control exerted by the focal length. These results allow us to state that to limit the peak ɛ to 10 cm, the k1 parameter must be calibrated to within 0.00025 and the focal length to within 2.5 pixels (≈10 µm). This level of calibration accuracy can only be achieved with proper design of the image acquisition and control network geometry. Our main point is therefore that SfM is not a bypass to a rigorous and well-informed photogrammetric approach. Users of SfM-photogrammetry will still require basic training and knowledge in the fundamentals of photogrammetry. This is especially true for applications where very small topographic changes need to be detected or where gradient-sensitive processes need to be modelled.

  19. Causal Inference for fMRI Time Series Data with Systematic Errors of Measurement in a Balanced On/Off Study of Social Evaluative Threat.

    PubMed

    Sobel, Michael E; Lindquist, Martin A

    2014-07-01

    Functional magnetic resonance imaging (fMRI) has facilitated major advances in understanding human brain function. Neuroscientists are interested in using fMRI to study the effects of external stimuli on brain activity and causal relationships among brain regions, but have not stated what is meant by causation or defined the effects they purport to estimate. Building on Rubin's causal model, we construct a framework for causal inference using blood oxygenation level dependent (BOLD) fMRI time series data. In the usual statistical literature on causal inference, potential outcomes, assumed to be measured without systematic error, are used to define unit and average causal effects. However, in general the potential BOLD responses are measured with stimulus-dependent systematic error. Thus we define unit and average causal effects that are free of systematic error. In contrast to the usual case of a randomized experiment, where adjustment for intermediate outcomes leads to biased estimates of treatment effects (Rosenbaum, 1984), here the failure to adjust for task-dependent systematic error leads to biased estimates. We therefore adjust for systematic error using measured "noise covariates", fitting a linear mixed model to estimate the effects and the systematic error. Our results are important for neuroscientists, who typically do not adjust for systematic error. They should also prove useful to researchers in other areas where responses are measured with error and in fields where large amounts of data are collected on relatively few subjects. To illustrate our approach, we re-analyze data from a social evaluative threat task, comparing the findings with results that ignore systematic error.

  20. Technical Note: Introduction of variance component analysis to setup error analysis in radiotherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Matsuo, Yukinori, E-mail: ymatsuo@kuhp.kyoto-u.ac.

    Purpose: The purpose of this technical note is to introduce variance component analysis to the estimation of systematic and random components in the setup error of radiotherapy. Methods: Balanced data according to the one-factor random effect model were assumed. Results: Analysis-of-variance (ANOVA)-based computation was applied to estimate the values and their confidence intervals (CIs) for systematic and random errors and the population mean of setup errors. The conventional method overestimates systematic error, especially in hypofractionated settings. The CI for systematic error becomes much wider than that for random error. The ANOVA-based estimation can be extended to a multifactor model considering multiple causes of setup errors (e.g., interpatient, interfraction, and intrafraction). Conclusions: Variance component analysis may lead to novel applications to setup error analysis in radiotherapy.
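
    A minimal sketch of the ANOVA-based point estimation for the one-factor random-effects model (confidence-interval computation omitted; the simulated setup errors are illustrative):

      import numpy as np

      def variance_components(setup):
          # setup: k patients x n fractions array of setup errors (mm)
          k, n = setup.shape
          msb = n * setup.mean(axis=1).var(ddof=1)    # mean square between patients
          msw = setup.var(axis=1, ddof=1).mean()      # mean square within patients
          sigma_random = np.sqrt(msw)
          sigma_systematic = np.sqrt(max((msb - msw) / n, 0.0))
          return sigma_systematic, sigma_random

      rng = np.random.default_rng(3)
      data = rng.normal(0.0, 1.0, (20, 5)) + rng.normal(0.0, 2.0, (20, 1))
      print(variance_components(data))    # close to (2.0, 1.0) by construction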

  1. Irregular analytical errors in diagnostic testing - a novel concept.

    PubMed

    Vogeser, Michael; Seger, Christoph

    2018-02-23

    In laboratory medicine, routine periodic analyses of internal and external quality control measurements, interpreted by statistical methods, are mandatory for batch clearance. Data analysis of these process-oriented measurements allows insight into random analytical variation and systematic calibration bias over time. However, in such a setting, no individual sample is under individual quality control; the quality control measurements act only at the batch level. Many effects and interferences associated with an individual diagnostic sample can compromise any analyte, and a quality-control-sample-based approach to quality assurance is obviously not sensitive to such errors. To address the potential causes and nature of such analytical interference in individual samples more systematically, we suggest the introduction of a new term, the irregular (individual) analytical error. Practically, this term can be applied in any analytical assay that is traceable to a reference measurement system. For an individual sample, an irregular analytical error is defined as an inaccuracy (the deviation from a reference measurement procedure result) of a test result that is too large to be explained by the measurement uncertainty of the routine assay operating within the accepted limitations of the associated process quality control measurements. The deviation can be defined as the linear combination of the process measurement uncertainty and the method bias for the reference measurement system. Such errors should be coined irregular analytical errors of the individual sample. The measurement result is compromised either by an irregular effect associated with the individual composition (matrix) of the sample or by an individual, single-sample-associated processing error in the analytical process. Currently, the availability of reference measurement procedures is still highly limited, but LC-isotope-dilution mass spectrometry methods are increasingly used for pre-market validation of routine diagnostic assays (these tests also involve substantial sets of clinical validation samples). Based on this definition and terminology, we list recognized causes of irregular analytical error as a risk catalog for clinical chemistry in this article. These issues include reproducible individual analytical errors (e.g. caused by anti-reagent antibodies) and non-reproducible, sporadic errors (e.g. errors due to an incorrect pipetting volume caused by air bubbles in a sample), which can both lead to inaccurate results and risks for patients.

  2. Test-retest reliability of jump execution variables using mechanography: a comparison of jump protocols.

    PubMed

    Fitzgerald, John S; Johnson, LuAnn; Tomkinson, Grant; Stein, Jesse; Roemmich, James N

    2018-05-01

    Mechanography during the vertical jump may enhance screening and help determine the mechanistic causes underlying changes in physical performance. The utility of jump mechanography for evaluation is limited by scant test-retest reliability data on force-time variables. This study examined the test-retest reliability of eight jump execution variables assessed from mechanography. Thirty-two women (mean±SD: age 20.8 ± 1.3 yr) and 16 men (age 22.1 ± 1.9 yr) attended a familiarization session and two testing sessions, all one week apart. Participants performed two variations of the squat jump, with squat depth self-selected and controlled using a goniometer to 80° knee flexion. Test-retest reliability was quantified as the systematic error (using effect size between jumps), random error (using coefficients of variation), and test-retest correlations (using intra-class correlation coefficients). Overall, jump execution variables demonstrated acceptable reliability, evidenced by small systematic errors (mean±95%CI: 0.2 ± 0.07), moderate random errors (mean±95%CI: 17.8 ± 3.7%), and very strong test-retest correlations (range: 0.73-0.97). Differences in random errors between controlled and self-selected protocols were negligible (mean±95%CI: 1.3 ± 2.3%). Jump execution variables demonstrated acceptable reliability, with no meaningful differences between the controlled and self-selected jump protocols. To simplify testing, a self-selected jump protocol can be used to assess force-time variables with negligible impact on measurement error.
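    The three reliability statistics named above can be computed together from paired session scores. The sketch below assumes two sessions per participant and uses one common ICC(2,1) formulation and Hopkins-style typical error for the CV; the study's exact effect-size and ICC variants are not specified here, so treat the choices as assumptions.

```python
import numpy as np

def test_retest_stats(s1, s2):
    """Systematic error as a standardized mean difference, random error as a
    coefficient of variation (typical error / mean), and ICC(2,1) for the
    test-retest correlation. s1, s2 are paired session scores for one variable."""
    s1, s2 = np.asarray(s1, float), np.asarray(s2, float)
    pooled_sd = np.sqrt((s1.var(ddof=1) + s2.var(ddof=1)) / 2)
    effect_size = (s2.mean() - s1.mean()) / pooled_sd            # systematic error
    diffs = s2 - s1
    grand = np.mean([s1, s2])
    cv = 100 * (diffs.std(ddof=1) / np.sqrt(2)) / grand          # random error, %
    # ICC(2,1) from a two-way ANOVA decomposition
    n, k = len(s1), 2
    subj_means = (s1 + s2) / 2
    sess_means = np.array([s1.mean(), s2.mean()])
    ms_subj = k * np.sum((subj_means - grand) ** 2) / (n - 1)
    ms_sess = n * np.sum((sess_means - grand) ** 2) / (k - 1)
    resid = np.stack([s1, s2], axis=1) - subj_means[:, None] - sess_means[None, :] + grand
    ms_err = np.sum(resid ** 2) / ((n - 1) * (k - 1))
    icc = (ms_subj - ms_err) / (ms_subj + (k - 1) * ms_err + k * (ms_sess - ms_err) / n)
    return effect_size, cv, icc

rng = np.random.default_rng(0)
base = rng.normal(30, 5, size=32)                 # e.g., jump height, cm
print(test_retest_stats(base + rng.normal(0, 2, 32), base + rng.normal(0.5, 2, 32)))
```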

  3. Experimental power spectral density analysis for mid- to high-spatial frequency surface error control.

    PubMed

    Hoyo, Javier Del; Choi, Heejoo; Burge, James H; Kim, Geon-Hee; Kim, Dae Wook

    2017-06-20

    The control of surface errors as a function of spatial frequency is critical during the fabrication of modern optical systems. Large-scale surface figure error is controlled by a guided removal process, such as computer-controlled optical surfacing, while smaller-scale surface errors are controlled by polishing process parameters. Surface errors with spatial periods of only a few millimeters may degrade the performance of an optical system, causing background noise from scattered light and reducing imaging contrast for large optical systems. Conventionally, microsurface roughness is given as the root mean square over a high spatial frequency range, evaluated within a 0.5×0.5 mm local surface map with 500×500 pixels. This surface specification is not adequate to fully describe the characteristics required for advanced optical systems. The process for controlling and minimizing mid- to high-spatial frequency surface errors with periods of up to ∼2-3 mm was investigated for many optical fabrication conditions using the measured surface power spectral density (PSD) of a finished Zerodur optical surface. The surface PSD was then systematically related to various fabrication process parameters, such as the grinding methods, polishing interface materials, and polishing compounds. The retraceable experimental polishing conditions and processes used to produce an optimal optical surface PSD are presented.
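    A PSD of the kind used here is just a normalized Fourier transform of the measured surface. The one-dimensional sketch below shows the idea; windowing, detrending, and one-sided normalization details (which real metrology codes handle carefully) are simplified.

```python
import numpy as np

def surface_psd_1d(height, dx):
    """One-dimensional power spectral density of a measured surface profile.
    `height` in nm, `dx` sample spacing in mm; returns spatial frequencies
    (1/mm) and PSD (nm^2 mm), normalized so sum(psd) * df approximates the
    surface variance (one-sided factor-of-two details omitted)."""
    h = height - np.mean(height)        # remove the piston term
    n = len(h)
    H = np.fft.rfft(h)
    freqs = np.fft.rfftfreq(n, d=dx)
    psd = (np.abs(H) ** 2) * dx / n
    return freqs, psd

# Illustrative profile: 2.5 mm ripple plus high-frequency roughness
x = np.arange(0, 50, 0.01)             # 50 mm trace, 10 um sampling
h = 5 * np.sin(2 * np.pi * x / 2.5) + np.random.default_rng(0).normal(0, 1, x.size)
f, p = surface_psd_1d(h, dx=0.01)
print(f[np.argmax(p[1:]) + 1])          # ~0.4 1/mm, i.e., the 2.5 mm period
```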

  4. Propagation of stage measurement uncertainties to streamflow time series

    NASA Astrophysics Data System (ADS)

    Horner, Ivan; Le Coz, Jérôme; Renard, Benjamin; Branger, Flora; McMillan, Hilary

    2016-04-01

    Streamflow uncertainties due to stage measurement errors are generally overlooked in the promising probabilistic approaches that have emerged in the last decade. We introduce an original error model for propagating stage uncertainties through a stage-discharge rating curve within a Bayesian probabilistic framework. The method takes into account both rating curve uncertainty (parametric and structural errors) and stage uncertainty (systematic and non-systematic errors). Practical ways to estimate the different types of stage errors are also presented: (1) non-systematic errors due to instrument resolution and precision and to non-stationary waves, and (2) systematic errors due to gauge calibration against the staff gauge. The method is illustrated at a site where the rating-curve-derived streamflow can be compared with an accurate streamflow reference. The agreement between the two time series is overall satisfactory. Moreover, the quantification of uncertainty is also satisfactory, since the streamflow reference is compatible with the streamflow uncertainty intervals derived from the rating curve and the stage uncertainties. Illustrations from other sites are also presented. Results contrast markedly depending on site features; in some cases, streamflow uncertainty is mainly due to stage measurement errors. The results also show the importance of discriminating between systematic and non-systematic stage errors, especially for long-term flow averages. Perspectives for improving and validating the streamflow uncertainty estimates are discussed.
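    The key distinction, systematic stage errors drawn once per gauging period versus non-systematic errors redrawn at every time step, can be sketched with a simple Monte Carlo. The rating-curve form, parameter values, and error magnitudes below are hypothetical, and the paper's full Bayesian treatment of rating-curve uncertainty is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

def discharge(stage, a=10.0, b=0.5, c=1.6):
    """Illustrative power-law rating curve Q = a * (h - b)^c (parameters made up)."""
    return a * np.clip(stage - b, 0, None) ** c

def propagate_stage_errors(stage_series, sigma_ns=0.01, sigma_sys=0.005, n_mc=2000):
    """Monte Carlo propagation of stage uncertainty to streamflow, separating
    non-systematic errors (independent at each step: resolution, waves) from a
    systematic error (one draw per realization: staff-gauge calibration)."""
    sims = np.empty((n_mc, len(stage_series)))
    for i in range(n_mc):
        sys_err = rng.normal(0, sigma_sys)                        # constant offset
        ns_err = rng.normal(0, sigma_ns, size=len(stage_series))  # per-step noise
        sims[i] = discharge(stage_series + sys_err + ns_err)
    return np.percentile(sims, [2.5, 97.5], axis=0)  # 95% uncertainty band

stage = 1.0 + 0.3 * np.sin(np.linspace(0, 20, 200))   # synthetic stage record, m
lo, hi = propagate_stage_errors(stage)
```

    Averaging over time shrinks the non-systematic contribution but not the systematic one, which is why the two must be discriminated for long-term flow averages.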

  5. EMG Versus Torque Control of Human-Machine Systems: Equalizing Control Signal Variability Does not Equalize Error or Uncertainty.

    PubMed

    Johnson, Reva E; Kording, Konrad P; Hargrove, Levi J; Sensinger, Jonathon W

    2017-06-01

    In this paper we asked the question: if we artificially raise the variability of torque control signals to match that of EMG, do subjects make similar errors and have similar uncertainty about their movements? We answered this question using two experiments in which subjects used three different control signals: torque, torque+noise, and EMG. First, we measured error on a simple target-hitting task in which subjects received visual feedback only at the end of their movements. We found that even when the signal-to-noise ratio was equal across EMG and torque+noise control signals, EMG resulted in larger errors. Second, we quantified uncertainty by measuring the just-noticeable difference of a visual perturbation. We found that for equal errors, EMG resulted in higher movement uncertainty than both torque and torque+noise. These differences suggest that performance and confidence are influenced by more than just the noisiness of the control signal, and that other factors, such as the user's ability to incorporate feedback and develop accurate internal models, also have significant impacts on the performance and confidence of a person's actions. We theorize that users have difficulty distinguishing between random and systematic errors for EMG control, and future work should examine in more detail the types of errors made with EMG control.

  6. Seeing in the Dark: Weak Lensing from the Sloan Digital Sky Survey

    NASA Astrophysics Data System (ADS)

    Huff, Eric Michael

    Statistical weak lensing by large-scale structure (cosmic shear) is a promising cosmological tool, which has motivated the design of several large upcoming astronomical surveys. This Thesis presents a measurement of cosmic shear using coadded Sloan Digital Sky Survey (SDSS) imaging in 168 square degrees of the equatorial region, with r < 23.5 and i < 22.5, a source number density of 2.2 per arcmin² and median redshift of z_med = 0.52. These coadds were generated using a new rounding kernel method that was intended to minimize systematic errors in the lensing measurement due to coherent PSF anisotropies that are otherwise prevalent in the SDSS imaging data. Measurements of cosmic shear out to angular separations of 2 degrees are presented, along with systematics tests of the catalog generation and shear measurement steps that demonstrate that these results are dominated by statistical rather than systematic errors. Assuming a cosmological model corresponding to WMAP7 (Komatsu et al., 2011) and allowing only the amplitude of matter fluctuations σ8 to vary, the best-fit value is σ8 = 0.636 +0.109/−0.154 (1σ); without systematic errors this would be σ8 = 0.636 +0.099/−0.137 (1σ). Assuming a flat ΛCDM model, the combined constraints with WMAP7 are σ8 = 0.784 +0.028/−0.026 (1σ); the 2σ error range is 14 percent smaller than for WMAP7 alone. Aside from the intrinsic value of such cosmological constraints from the growth of structure, some important lessons are identified for upcoming surveys that may face similar issues when combining multi-epoch data to measure cosmic shear. Motivated by the challenges faced in the cosmic shear measurement, two new lensing probes are suggested for increasing the available weak lensing signal. Both use galaxy scaling relations to control for scatter in lensing observables. The first employs a version of the well-known fundamental plane relation for early-type galaxies. This modified "photometric fundamental plane" replaces velocity dispersions with photometric galaxy properties, thus obviating the need for spectroscopic data. We present the first detection of magnification using this method by applying it to photometric catalogs from the Sloan Digital Sky Survey. This analysis shows that the derived magnification signal is comparable to that available from conventional methods using gravitational shear. We suppress the dominant sources of systematic error and discuss modest improvements that may allow this method to equal or even surpass the signal-to-noise achievable with shear. Moreover, some of the dominant sources of systematic error are substantially different from those of shear-based techniques. The second outlines an idea for using the optical Tully-Fisher relation to dramatically improve the signal-to-noise and systematic error control for shear measurements. The expected error properties and potential advantages of such a measurement are proposed, and a pilot study is suggested in order to test the viability of Tully-Fisher weak lensing in the context of the forthcoming generation of large spectroscopic surveys.

  7. Effect of patient setup errors on simultaneously integrated boost head and neck IMRT treatment plans

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Siebers, Jeffrey V.; Keall, Paul J.; Wu Qiuwen

    2005-10-01

    Purpose: The purpose of this study is to determine dose delivery errors that could result from random and systematic setup errors for head-and-neck patients treated using the simultaneous integrated boost (SIB)-intensity-modulated radiation therapy (IMRT) technique. Methods and Materials: Twenty-four patients who participated in an intramural Phase I/II parotid-sparing IMRT dose-escalation protocol using the SIB treatment technique had their dose distributions reevaluated to assess the impact of random and systematic setup errors. The dosimetric effect of random setup error was simulated by convolving the two-dimensional fluence distribution of each beam with the random setup error probability density distribution. Random setup errors of σ = 1, 3, and 5 mm were simulated. Systematic setup errors were simulated by randomly shifting the patient isocenter along each of the three Cartesian axes, with each shift selected from a normal distribution. Systematic setup error distributions with Σ = 1.5 and 3.0 mm along each axis were simulated. Combined systematic and random setup errors were simulated for Σ = σ = 1.5 and 3.0 mm along each axis. For each dose calculation, the gross tumor volume (GTV) dose received by 98% of the volume (D98), clinical target volume (CTV) D90, nodes D90, cord D2, parotid D50 and parotid mean dose were evaluated with respect to the plan used for treatment, both for the structure dose and for an effective planning target volume (PTV) with a 3-mm margin. Results: Simultaneous integrated boost-IMRT head-and-neck treatment plans were found to be less sensitive to random setup errors than to systematic setup errors. For random-only errors, dose errors exceeded 3% only when the random setup error σ exceeded 3 mm. Simulated systematic setup errors with Σ = 1.5 mm resulted in approximately 10% of plans having more than a 3% dose error, whereas Σ = 3.0 mm resulted in half of the plans having more than a 3% dose error and 28% with a 5% dose error. Combined random and systematic dose errors with Σ = σ = 3.0 mm resulted in more than 50% of plans having at least a 3% dose error and 38% of the plans having at least a 5% dose error. Evaluation with respect to a 3-mm expanded PTV reduced the observed dose deviations greater than 5% for the Σ = σ = 3.0 mm simulations to 5.4% of the plans simulated. Conclusions: Head-and-neck SIB-IMRT dosimetric accuracy would benefit from methods to reduce systematic patient setup errors. When GTV, CTV, or nodal volumes are used for dose evaluation, plans simulated including the effects of random and systematic errors deviate substantially from the nominal plan. The use of PTVs for dose evaluation in the nominal plan improves agreement with evaluated GTV, CTV, and nodal dose values under simulated setup errors. PTV concepts should be used for SIB-IMRT head-and-neck squamous cell carcinoma patients, although the size of the margins may be less than those used with three-dimensional conformal radiation therapy.
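    The fluence-convolution step for random setup error can be illustrated in one dimension: blurring a dose profile with a Gaussian of width σ shows how the effective penumbra widens as random error grows. The profile, grid, and thresholds below are illustrative, not taken from the study.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

dx = 1.0                                          # mm per grid point
profile = np.zeros(200)
profile[80:120] = 1.0                             # idealized flat 40 mm field
for sigma_mm in (1.0, 3.0, 5.0):
    blurred = gaussian_filter1d(profile, sigma=sigma_mm / dx)
    # total width of the 20-80% dose regions (both field edges combined)
    penumbra = np.sum((blurred > 0.2) & (blurred < 0.8)) * dx
    print(f"sigma = {sigma_mm} mm -> combined 20-80% penumbra ~ {penumbra:.0f} mm")
```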

  8. Technical Note: Millimeter precision in ultrasound based patient positioning: Experimental quantification of inherent technical limitations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ballhausen, Hendrik, E-mail: hendrik.ballhausen@med.uni-muenchen.de; Hieber, Sheila; Li, Minglun

    2014-08-15

    Purpose: To identify the relevant technical sources of error of a system based on three-dimensional ultrasound (3D US) for patient positioning in external beam radiotherapy. To quantify these sources of error in a controlled laboratory setting. To estimate the resulting end-to-end geometric precision of the intramodality protocol. Methods: Two identical free-hand 3D US systems at both the planning-CT and the treatment room were calibrated to the laboratory frame of reference. Every step of the calibration chain was repeated multiple times to estimate its contribution to overall systematic and random error. Optimal margins were computed given the identified and quantified systematic and random errors. Results: In descending order of magnitude, the identified and quantified sources of error were: alignment of calibration phantom to laser marks 0.78 mm, alignment of lasers in treatment vs planning room 0.51 mm, calibration and tracking of 3D US probe 0.49 mm, alignment of stereoscopic infrared camera to calibration phantom 0.03 mm. Under ideal laboratory conditions, these errors are expected to limit ultrasound-based positioning to an accuracy of 1.05 mm radially. Conclusions: The investigated 3D ultrasound system achieves an intramodal accuracy of about 1 mm radially in a controlled laboratory setting. The identified systematic and random errors require an optimal clinical tumor volume to planning target volume margin of about 3 mm. These inherent technical limitations do not prevent clinical use, including hypofractionation or stereotactic body radiation therapy.
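    Assuming the four error sources are independent, the reported radial accuracy follows from adding them in quadrature, as the quick check below confirms; the ~3 mm margin in the conclusions would then follow from applying a standard margin recipe to the systematic and random parts, whose split is not reproduced here.

```python
import math

# Quadrature combination of the independent error sources quantified above (mm)
sources_mm = [0.78, 0.51, 0.49, 0.03]
total = math.sqrt(sum(e ** 2 for e in sources_mm))
print(f"{total:.2f} mm")  # ~1.05 mm radial, matching the reported accuracy
```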

  9. Horizon sensors attitude errors simulation for the Brazilian Remote Sensing Satellite

    NASA Astrophysics Data System (ADS)

    Vicente de Brum, Antonio Gil; Ricci, Mario Cesar

    Remote sensing, meteorological and other types of satellites require increasingly accurate Earth-related positioning. From past experience it is well known that the thermal horizon in the 15-micrometer band provides the conditions for determining the local vertical at any time. This detection is done by horizon sensors, accurate instruments for Earth-referenced attitude sensing and control whose performance is limited by systematic and random errors amounting to about 0.5 deg. Using the computer programs OBLATE, SEASON, ELECTRO and MISALIGN, developed at INPE to simulate four distinct facets of conical scanning horizon sensors, attitude errors are obtained for the Brazilian Remote Sensing Satellite (the first one, SSR-1, is scheduled to fly in 1996). These errors are due to the oblate shape of the Earth, seasonal and latitudinal variations of the 15-micrometer infrared radiation, electronic processing time delay, and misalignment of the sensor axis. The sensor-related attitude errors are thus properly quantified in this work and will, together with other systematic errors (for instance, ambient temperature variation), take part in the pre-launch analysis of the Brazilian Remote Sensing Satellite with respect to horizon sensor performance.

  10. Errors in causal inference: an organizational schema for systematic error and random error.

    PubMed

    Suzuki, Etsuji; Tsuda, Toshihide; Mitsuhashi, Toshiharu; Mansournia, Mohammad Ali; Yamamoto, Eiji

    2016-11-01

    To provide an organizational schema for systematic error and random error in estimating causal measures, aimed at clarifying the concept of errors from the perspective of causal inference. We propose to divide systematic error into structural error and analytic error. With regard to random error, our schema shows its four major sources: nondeterministic counterfactuals, sampling variability, a mechanism that generates exposure events, and measurement variability. Structural error is defined from the perspective of counterfactual reasoning and divided into nonexchangeability bias (which comprises confounding bias and selection bias) and measurement bias. Directed acyclic graphs are useful to illustrate this kind of error. Nonexchangeability bias implies a lack of "exchangeability" between the selected exposed and unexposed groups. A lack of exchangeability is not a primary concern of measurement bias, justifying its separation from confounding bias and selection bias. Many forms of analytic error result from the small-sample properties of the estimator used and vanish asymptotically. Analytic error also results from wrong (misspecified) statistical models and inappropriate statistical methods. Our organizational schema is helpful for understanding the relationship between systematic error and random error from a previously less investigated aspect, enabling us to better understand the relationship between accuracy, validity, and precision. Copyright © 2016 Elsevier Inc. All rights reserved.

  11. A cautionary note on the use of some mass flow controllers

    NASA Astrophysics Data System (ADS)

    Weinheimer, Andrew J.; Ridley, Brian A.

    1990-06-01

    Commercial mass flow controllers are widely used in atmospheric research where precise and constant gas flows are required. We have determined, however, that some commonly used controllers are far more sensitive to ambient pressure than is acknowledged in the literature of the manufacturers. Since a flow error can lead directly to a measurement error of the same magnitude, this is a matter of great concern. Indeed, in our particular application, were we not aware of this problem, our measurements would be subject to a systematic error that increased with altitude (i.e., a drift), up to a factor of 2 at the highest altitudes (˜37 km). In this note we present laboratory measurements of the errors of two brands of flow controllers when operated at pressures down to a few millibars. The errors are as large as a factor of 2 to 3 and depend not simply on the ambient pressure at a given time, but also on the pressure history. In addition there is a large dependence on flow setting. In light of these flow errors, some past measurements of chemical species in the stratosphere will need to be revised.

  12. Systematic Error Study for ALICE charged-jet v2 Measurement

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Heinz, M.; Soltz, R.

    We study the treatment of systematic errors in the determination of v2 for charged jets in √sNN = 2.76 TeV Pb-Pb collisions by the ALICE Collaboration. Working with the reported values and errors for the 0-5% centrality data, we evaluate the χ² according to the formulas given for the statistical and systematic errors, where the latter are separated into correlated and shape contributions. We reproduce both the χ² and p-values relative to a null (zero) result. We then re-cast the systematic errors into an equivalent covariance matrix and obtain identical results, demonstrating that the two methods are equivalent.
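    The covariance-matrix recasting is straightforward to reproduce: a fully correlated systematic uncertainty contributes an outer product to the covariance matrix. The sketch below builds such a matrix and evaluates χ² against a null result; the data values are hypothetical, not the ALICE measurements.

```python
import numpy as np

def chi2_with_systematics(y, stat_err, corr_sys_err):
    """Chi-square of data y against a null (zero) result, treating corr_sys_err
    as 100% correlated between points and stat_err as independent, by folding
    both into a single covariance matrix."""
    y = np.asarray(y, float)
    cov = np.diag(np.asarray(stat_err, float) ** 2)
    s = np.asarray(corr_sys_err, float)
    cov += np.outer(s, s)              # fully correlated systematic contribution
    return y @ np.linalg.solve(cov, y)

# Hypothetical v2 points with statistical and correlated systematic errors
print(chi2_with_systematics([0.05, 0.04, 0.06], [0.02, 0.02, 0.03], [0.01, 0.01, 0.015]))
```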

  13. Systematic review of the evidence for Trails B cut-off scores in assessing fitness-to-drive.

    PubMed

    Roy, Mononita; Molnar, Frank

    2013-01-01

    Fitness-to-drive guidelines recommend employing the Trail Making B Test (a.k.a. Trails B), but do not provide guidance regarding cut-off scores. There is ongoing debate regarding the optimal cut-off score on the Trails B test. The objective of this study was to address this controversy by systematically reviewing the evidence for specific Trails B cut-off scores (e.g., cut-offs in both time to completion and number of errors) with respect to fitness-to-drive. Systematic review of all prospective cohort, retrospective cohort, case-control, correlation, and cross-sectional studies reporting the ability of the Trails B to predict driving safety that were published in English-language, peer-reviewed journals. Forty-seven articles were reviewed. None of the articles justified sample sizes via formal calculations. Cut-off scores reported based on research include: 90 seconds, 133 seconds, 147 seconds, 180 seconds, and < 3 errors. There is support for the previously published Trails B cut-offs of 3 minutes or 3 errors (the '3 or 3 rule'). Major methodological limitations of this body of research were uncovered including (1) lack of justification of sample size leaving studies open to Type II error (i.e., false negative findings), and (2) excessive focus on associations rather than clinically useful cut-off scores.

  14. The impact of command signal power distribution, processing delays, and speed scaling on neurally-controlled devices.

    PubMed

    Marathe, A R; Taylor, D M

    2015-08-01

    Decoding algorithms for brain-machine interfacing (BMI) are typically only optimized to reduce the magnitude of decoding errors. Our goal was to systematically quantify how four characteristics of BMI command signals impact closed-loop performance: (1) error magnitude, (2) distribution of different frequency components in the decoding errors, (3) processing delays, and (4) command gain. To systematically evaluate these different command features and their interactions, we used a closed-loop BMI simulator where human subjects used their own wrist movements to command the motion of a cursor to targets on a computer screen. Random noise with three different power distributions and four different relative magnitudes was added to the ongoing cursor motion in real time to simulate imperfect decoding. These error characteristics were tested with four different visual feedback delays and two velocity gains. Participants had significantly more trouble correcting for errors with a larger proportion of low-frequency, slow-time-varying components than they did with jittery, higher-frequency errors, even when the error magnitudes were equivalent. When errors were present, a movement delay often increased the time needed to complete the movement by an order of magnitude more than the delay itself. Scaling down the overall speed of the velocity command can actually speed up target acquisition time when low-frequency errors and delays are present. This study is the first to systematically evaluate how the combination of these four key command signal features (including the relatively-unexplored error power distribution) and their interactions impact closed-loop performance independent of any specific decoding method. The equations we derive relating closed-loop movement performance to these command characteristics can provide guidance on how best to balance these different factors when designing BMI systems. The equations reported here also provide an efficient way to compare a diverse range of decoding options offline.

  15. The impact of command signal power distribution, processing delays, and speed scaling on neurally-controlled devices

    NASA Astrophysics Data System (ADS)

    Marathe, A. R.; Taylor, D. M.

    2015-08-01

    Objective. Decoding algorithms for brain-machine interfacing (BMI) are typically only optimized to reduce the magnitude of decoding errors. Our goal was to systematically quantify how four characteristics of BMI command signals impact closed-loop performance: (1) error magnitude, (2) distribution of different frequency components in the decoding errors, (3) processing delays, and (4) command gain. Approach. To systematically evaluate these different command features and their interactions, we used a closed-loop BMI simulator where human subjects used their own wrist movements to command the motion of a cursor to targets on a computer screen. Random noise with three different power distributions and four different relative magnitudes was added to the ongoing cursor motion in real time to simulate imperfect decoding. These error characteristics were tested with four different visual feedback delays and two velocity gains. Main results. Participants had significantly more trouble correcting for errors with a larger proportion of low-frequency, slow-time-varying components than they did with jittery, higher-frequency errors, even when the error magnitudes were equivalent. When errors were present, a movement delay often increased the time needed to complete the movement by an order of magnitude more than the delay itself. Scaling down the overall speed of the velocity command can actually speed up target acquisition time when low-frequency errors and delays are present. Significance. This study is the first to systematically evaluate how the combination of these four key command signal features (including the relatively-unexplored error power distribution) and their interactions impact closed-loop performance independent of any specific decoding method. The equations we derive relating closed-loop movement performance to these command characteristics can provide guidance on how best to balance these different factors when designing BMI systems. The equations reported here also provide an efficient way to compare a diverse range of decoding options offline.

  16. The effectiveness of the error reporting promoting program on the nursing error incidence rate in Korean operating rooms.

    PubMed

    Kim, Myoung-Soo; Kim, Jung-Soon; Jung, In Sook; Kim, Young Hae; Kim, Ho Jung

    2007-03-01

    The purpose of this study was to develop and evaluate an error reporting promoting program (ERPP) to systematically reduce the incidence rate of nursing errors in the operating room. A non-equivalent control group, non-synchronized design was used. Twenty-six operating room nurses from one university hospital in Busan participated in this study. They were stratified into four groups according to their operating room experience and were allocated to the experimental and control groups using a matching method. The Mann-Whitney U test was used to analyze the differences between the two groups in pre- and post-test incidence rates of nursing errors. The incidence rate of nursing errors decreased significantly in the experimental group, from 28.4% pre-test to 15.7%. By domain, the incidence rate decreased significantly in three domains in the experimental group ("compliance with aseptic technique", "management of documents", "environmental management"), while it also decreased in the control group, to which the ordinary error-reporting method was applied. An error-reporting system makes it possible to share errors and to learn from them. The ERPP was effective in reducing errors in recognition-related nursing activities. For more effective error prevention, this program should be applied together with risk management efforts across the whole health care system.

  17. TRAINING ERRORS AND RUNNING RELATED INJURIES: A SYSTEMATIC REVIEW

    PubMed Central

    Buist, Ida; Sørensen, Henrik; Lind, Martin; Rasmussen, Sten

    2012-01-01

    Purpose: The purpose of this systematic review was to examine the link between training characteristics (volume, duration, frequency, and intensity) and running related injuries. Methods: A systematic search was performed in PubMed, Web of Science, Embase, and SportDiscus. Studies were included if they examined novice, recreational, or elite runners between the ages of 18 and 65. Exposure variables were training characteristics defined as volume, distance or mileage, time or duration, frequency, intensity, speed or pace, or similar terms. The outcome of interest was Running Related Injuries (RRI) in general or specific RRI in the lower extremity or lower back. Methodological quality was evaluated using quality assessment tools of 11 to 16 items. Results: After examining 4561 titles and abstracts, 63 articles were identified as potentially relevant. Finally, nine retrospective cohort studies, 13 prospective cohort studies, six case-control studies, and three randomized controlled trials were included. The mean quality score was 44.1%. Conflicting results were reported on the relationships between volume, duration, intensity, and frequency and RRI. Conclusion: It was not possible to identify which training errors were related to running related injuries. Still, well-supported data on which training errors relate to or cause running related injuries are highly important for determining proper prevention strategies. If methodological limitations in measuring training variables can be resolved, more work can be conducted to define training and the interactions between different training variables, create several hypotheses, test the hypotheses in a large-scale prospective study, and explore cause and effect relationships in randomized controlled trials. Level of evidence: 2a PMID:22389869

  18. Assessment of Systematic Chromatic Errors that Impact Sub-1% Photometric Precision in Large-Area Sky Surveys

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, T. S.

    Meeting the science goals for many current and future ground-based optical large-area sky surveys requires that the calibrated broadband photometry is stable in time and uniform over the sky to 1% precision or better. Past surveys have achieved photometric precision of 1-2% by calibrating the survey's stellar photometry with repeated measurements of a large number of stars observed in multiple epochs. The calibration techniques employed by these surveys only consider the relative frame-by-frame photometric zeropoint offset and the focal plane position-dependent illumination corrections, which are independent of the source color. However, variations in the wavelength dependence of the atmospheric transmission and the instrumental throughput induce source color-dependent systematic errors. These systematic errors must also be considered to achieve the most precise photometric measurements. In this paper, we examine such systematic chromatic errors using photometry from the Dark Energy Survey (DES) as an example. We define a natural magnitude system for DES and calculate the systematic errors on stellar magnitudes when the atmospheric transmission and instrumental throughput deviate from the natural system. We conclude that the systematic chromatic errors caused by the change of airmass in each exposure, the change of the precipitable water vapor and aerosol in the atmosphere over time, and the non-uniformity of instrumental throughput over the focal plane can be up to 2% in some bandpasses. We compare the calculated systematic chromatic errors with the observed DES data. For the test sample data, we correct these errors using measurements of the atmospheric transmission and instrumental throughput. The residual after correction is less than 0.3%. We also find that the errors for non-stellar objects are redshift-dependent and can be larger than those for stars at certain redshifts.
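    Conceptually, a chromatic error is the SED-dependent part of the zero-point shift between the natural and the actual system response, which a global zero-point calibration cannot remove. The sketch below makes that definition concrete; the function, arrays, and normalization conventions are illustrative rather than the paper's method.

```python
import numpy as np

def chromatic_offset(wl, sed, resp_natural, resp_actual):
    """SED-dependent magnitude shift when the actual throughput deviates from
    the natural-system throughput (photon-counting integrals), normalized so
    that a flat-SED source is unaffected. A conceptual sketch only."""
    def mag(resp, flux):
        return -2.5 * np.log10(np.trapz(flux * resp * wl, wl)
                               / np.trapz(resp * wl, wl))
    flat = np.ones_like(wl)
    shift_source = mag(resp_actual, sed) - mag(resp_natural, sed)
    shift_flat = mag(resp_actual, flat) - mag(resp_natural, flat)
    return shift_source - shift_flat   # survives any global zero-point correction

wl = np.linspace(500, 700, 200)                       # nm, made-up band
red_sed = (wl / 600.0) ** 3                           # redder-than-flat source
natural = np.exp(-0.5 * ((wl - 600) / 40) ** 2)       # nominal response
perturbed = natural * (1 + 0.1 * (wl - 600) / 100)    # tilted response
print(chromatic_offset(wl, red_sed, natural, perturbed))
```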

  19. Systematic Errors in an Air Track Experiment.

    ERIC Educational Resources Information Center

    Ramirez, Santos A.; Ham, Joe S.

    1990-01-01

    Errors found in a common physics experiment to measure acceleration resulting from gravity using a linear air track are investigated. Glider position at release and initial velocity are shown to be sources of systematic error. (CW)
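    As a minimal worked example of the kind of systematic error involved (assuming the common procedure of inferring the acceleration from a single timed distance x over time t), an unnoticed initial velocity v0 at release biases the estimate:

    $$\hat{a} \;=\; \frac{2x}{t^2} \;=\; \frac{2\left(v_0 t + \tfrac{1}{2} a t^2\right)}{t^2} \;=\; a + \frac{2 v_0}{t},$$

    a shift of 2v0/t that does not average away over repeated runs, which is what makes it systematic rather than random.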

  20. ICP-Forests (International Co-operative Programme on Assessment and Monitoring of Air Pollution Effects on Forests): Quality Assurance procedure in plant diversity monitoring.

    PubMed

    Allegrini, Maria-Cristina; Canullo, Roberto; Campetella, Giandiego

    2009-04-01

    Knowledge of accuracy and precision rates is particularly important for long-term studies. Vegetation assessments include many sources of error related to overlooking and misidentification, influenced by factors such as the subjectivity of cover estimates, observer-biased species lists, and the experience of the botanist. The vegetation assessment protocol adopted in the Italian forest monitoring programme (CONECOFOR) contains a Quality Assurance programme. The paper presents the different phases of QA and identifies the 5 main critical points of the whole protocol as sources of random or systematic errors. Examples of Measurement Quality Objectives (MQOs) expressed as Data Quality Limits (DQLs) are given for vascular plant cover estimates, in order to establish the reproducibility of the data. Quality control activities were used to determine the "distance" between the surveyor teams and the control team. Selected data were acquired during the training and inter-calibration courses. In particular, an index of average cover by species groups was used to evaluate the random error (CV 4%) as the dispersion around the "true values" of the control team. The systematic error in the evaluation of species composition, caused by overlooking or misidentification of species, was calculated as the pseudo-turnover rate; detailed species censuses on smaller sampling units were accepted when the pseudo-turnover fell below the established 25% threshold, as it always did, and species density scores recorded at community level (100 m² surface) rarely exceeded that limit.
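    For reference, pseudoturnover between two observers of the same plot is commonly computed as below; this standard formulation (after Nilsson & Nilsson) is an assumption here, since the record does not reproduce the programme's exact expression:

    $$PT \;=\; \frac{a + b}{S_A + S_B} \times 100\%,$$

    where $S_A$ and $S_B$ are the species totals recorded by the two teams and $a$ and $b$ are the species found only by one team or the other, so that overlooked or misidentified species inflate $PT$ even when true composition is unchanged.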

  1. The quality of systematic reviews about interventions for refractive error can be improved: a review of systematic reviews.

    PubMed

    Mayo-Wilson, Evan; Ng, Sueko Matsumura; Chuck, Roy S; Li, Tianjing

    2017-09-05

    Systematic reviews should inform American Academy of Ophthalmology (AAO) Preferred Practice Pattern® (PPP) guidelines. The quality of systematic reviews related to the forthcoming Preferred Practice Pattern® guideline (PPP) Refractive Errors & Refractive Surgery is unknown. We sought to identify reliable systematic reviews to assist the AAO Refractive Errors & Refractive Surgery PPP. Systematic reviews were eligible if they evaluated the effectiveness or safety of interventions included in the 2012 PPP Refractive Errors & Refractive Surgery. To identify potentially eligible systematic reviews, we searched the Cochrane Eyes and Vision United States Satellite database of systematic reviews. Two authors identified eligible reviews and abstracted information about the characteristics and quality of the reviews independently using the Systematic Review Data Repository. We classified systematic reviews as "reliable" when they (1) defined criteria for the selection of studies, (2) conducted comprehensive literature searches for eligible studies, (3) assessed the methodological quality (risk of bias) of the included studies, (4) used appropriate methods for meta-analyses (which we assessed only when meta-analyses were reported), and (5) presented conclusions that were supported by the evidence provided in the review. We identified 124 systematic reviews related to refractive error; 39 met our eligibility criteria, of which we classified 11 to be reliable. Systematic reviews classified as unreliable did not define the criteria for selecting studies (5; 13%), did not assess methodological rigor (10; 26%), did not conduct comprehensive searches (17; 44%), or used inappropriate quantitative methods (3; 8%). The 11 reliable reviews were published between 2002 and 2016. They included 0 to 23 studies (median = 9) and analyzed 0 to 4696 participants (median = 666). Seven reliable reviews (64%) assessed surgical interventions. Most systematic reviews of interventions for refractive error are of low methodological quality. Following widely accepted guidance, such as Cochrane or Institute of Medicine standards for conducting systematic reviews, would contribute to improved patient care and inform future research.

  2. An analysis of the least-squares problem for the DSN systematic pointing error model

    NASA Technical Reports Server (NTRS)

    Alvarez, L. S.

    1991-01-01

    A systematic pointing error model is used to calibrate antennas in the Deep Space Network. The least-squares problem is described and analyzed, along with the solution methods used to determine the model's parameters. Specifically studied are the rank degeneracy problems resulting from beam pointing error measurement sets that incorporate inadequate sky coverage. A least-squares parameter subset selection method is described and its applicability to the systematic error modeling process is demonstrated on a Voyager 2 measurement distribution.
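    Rank degeneracy of this kind is easy to detect numerically: with poor sky coverage, some columns of the design matrix become (nearly) linearly dependent and the smallest singular values collapse. The sketch below flags that condition and returns a minimum-norm truncated-SVD solution as a simple stand-in for formal subset selection; the model matrix and tolerance are illustrative, not the DSN model.

```python
import numpy as np

def fit_pointing_model(A, y, tol=1e-8):
    """Least-squares fit of a linear pointing-error model y ~ A @ p, with an SVD
    rank check: near-zero singular values signal the rank degeneracy that sparse
    sky coverage induces, in which case a parameter subset should be selected."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    rank = int(np.sum(s > tol * s[0]))
    if rank < A.shape[1]:
        print(f"rank-deficient design: rank {rank} < {A.shape[1]} parameters")
    s_inv = np.where(s > tol * s[0], 1.0 / s, 0.0)   # truncate tiny singular values
    return Vt.T @ (s_inv * (U.T @ y))

# Two identical columns mimic parameters that the sky coverage cannot separate
A = np.column_stack([np.ones(5), np.linspace(0, 1, 5), np.linspace(0, 1, 5)])
print(fit_pointing_model(A, np.arange(5.0)))
```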

  3. ELLIPTICAL WEIGHTED HOLICs FOR WEAK LENSING SHEAR MEASUREMENT. III. THE EFFECT OF RANDOM COUNT NOISE ON IMAGE MOMENTS IN WEAK LENSING ANALYSIS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Okura, Yuki; Futamase, Toshifumi, E-mail: yuki.okura@nao.ac.jp, E-mail: tof@astr.tohoku.ac.jp

    This is the third paper on the improvement of systematic errors in weak lensing analysis using an elliptical weight function, referred to as E-HOLICs. In previous papers, we succeeded in avoiding errors that depend on the ellipticity of the background image. In this paper, we investigate the systematic error that depends on the signal-to-noise ratio of the background image. We find that the origin of this error is the random count noise that comes from the Poisson noise of sky counts. The random count noise introduces additional moments and a centroid shift error; those first-order effects cancel in averaging, but the second-order effects do not. We derive formulae that correct this systematic error due to random count noise in measuring the moments and ellipticity of the background image. The correction formulae obtained are expressed as combinations of complex moments of the image, and thus can correct the systematic errors caused by each object. We test their validity using a simulated image and find that the systematic error becomes less than 1% in the measured ellipticity for objects with an IMCAT significance threshold of ν ≈ 11.7.

  4. A Practical Methodology for Quantifying Random and Systematic Components of Unexplained Variance in a Wind Tunnel

    NASA Technical Reports Server (NTRS)

    Deloach, Richard; Obara, Clifford J.; Goodman, Wesley L.

    2012-01-01

    This paper documents a check standard wind tunnel test conducted in the Langley 0.3-Meter Transonic Cryogenic Tunnel (0.3M TCT) that was designed and analyzed using the Modern Design of Experiments (MDOE). The test was designed to partition the unexplained variance of typical wind tunnel data samples into two constituent components, one attributable to ordinary random error and one attributable to systematic error induced by covariate effects. Covariate effects in wind tunnel testing are discussed, with examples. The impact of systematic (non-random) unexplained variance on the statistical independence of sequential measurements is reviewed. The corresponding correlation among experimental errors is discussed, as is the impact of such correlation on experimental results generally. The specific experiment documented herein was organized as a formal test for the presence of unexplained variance in representative samples of wind tunnel data, in order to quantify the frequency with which such systematic error was detected and its magnitude relative to ordinary random error. Levels of systematic and random error reported here are representative of those quantified in other facilities, as cited in the references.

  5. Landscape Response to the 1980 Eruption of Mount St. Helens: Using Historical Aerial Photography to Measure Surface Change

    NASA Astrophysics Data System (ADS)

    Sweeney, K.; Major, J. J.

    2016-12-01

    Advances in structure-from-motion (SfM) photogrammetry and point cloud comparison have fueled a proliferation of studies using modern imagery to monitor geomorphic change. These techniques also have obvious applications for reconstructing historical landscapes from vertical aerial imagery, but known challenges include insufficient photo overlap, systematic "doming" induced by photo-spacing regularity, missing metadata, and lack of ground control. Aerial imagery of landscape change in the North Fork Toutle River (NFTR) following the 1980 eruption of Mount St. Helens is a prime dataset to refine methodologies. In particular, (1) 14-μm film scans are available for 1:9600 images at 4-month intervals from 1980 - 1986, (2) the large magnitude of landscape change swamps systematic error and noise, and (3) stable areas (primary deposit features, roads, etc.) provide targets for both ground control and matching to modern lidar. Using AgiSoft PhotoScan, we create digital surface models from the NFTR imagery and examine how common steps in SfM workflows affect results. Tests of scan quality show high-resolution, professional film scans are superior to office scans of paper prints, reducing spurious points related to scan infidelity and image damage. We confirm earlier findings that cropping and rotating images improves point matching and the final surface model produced by the SfM algorithm. We demonstrate how the iterative closest point algorithm, implemented in CloudCompare and using modern lidar as a reference dataset, can serve as an adequate substitute for absolute ground control. Elevation difference maps derived from our surface models of Mount St. Helens show patterns consistent with field observations, including channel avulsion and migration, though systematic errors remain. We suggest that subtracting an empirical function fit to the long-wavelength topographic signal may be one avenue for correcting systematic error in similar datasets.

  6. Measuring diagnoses: ICD code accuracy.

    PubMed

    O'Malley, Kimberly J; Cook, Karon F; Price, Matt D; Wildes, Kimberly Raiford; Hurdle, John F; Ashton, Carol M

    2005-10-01

    To examine potential sources of errors at each step of the described inpatient International Classification of Diseases (ICD) coding process. The use of disease codes from the ICD has expanded from classifying morbidity and mortality information for statistical purposes to diverse sets of applications in research, health care policy, and health care finance. By describing a brief history of ICD coding, detailing the process for assigning codes, identifying where errors can be introduced into the process, and reviewing methods for examining code accuracy, we help code users more systematically evaluate code accuracy for their particular applications. We summarize the inpatient ICD diagnostic coding process from patient admission to diagnostic code assignment. We examine potential sources of errors at each step and offer code users a tool for systematically evaluating code accuracy. Main error sources along the "patient trajectory" include amount and quality of information at admission, communication among patients and providers, the clinician's knowledge and experience with the illness, and the clinician's attention to detail. Main error sources along the "paper trail" include variance in the electronic and written records, coder training and experience, facility quality-control efforts, and unintentional and intentional coder errors, such as misspecification, unbundling, and upcoding. By clearly specifying the code assignment process and heightening their awareness of potential error sources, code users can better evaluate the applicability and limitations of codes for their particular situations. ICD codes can then be used in the most appropriate ways.

  7. Bundle Block Adjustment of Airborne Three-Line Array Imagery Based on Rotation Angles

    PubMed Central

    Zhang, Yongjun; Zheng, Maoteng; Huang, Xu; Xiong, Jinxin

    2014-01-01

    In the midst of the rapid developments in electronic instruments and remote sensing technologies, airborne three-line array sensors and their applications are being widely promoted and plentiful research related to data processing and high precision geo-referencing technologies is under way. The exterior orientation parameters (EOPs), which are measured by the integrated positioning and orientation system (POS) of airborne three-line sensors, however, have inevitable systematic errors, so the level of precision of direct geo-referencing is not sufficiently accurate for surveying and mapping applications. Consequently, a few ground control points are necessary to refine the exterior orientation parameters, and this paper will discuss bundle block adjustment models based on the systematic error compensation and the orientation image, considering the principle of an image sensor and the characteristics of the integrated POS. Unlike the models available in the literature, which mainly use a quaternion to represent the rotation matrix of exterior orientation, three rotation angles are directly used in order to effectively model and eliminate the systematic errors of the POS observations. Very good experimental results have been achieved with several real datasets that verify the correctness and effectiveness of the proposed adjustment models. PMID:24811075

  8. Bundle block adjustment of airborne three-line array imagery based on rotation angles.

    PubMed

    Zhang, Yongjun; Zheng, Maoteng; Huang, Xu; Xiong, Jinxin

    2014-05-07

    In the midst of the rapid developments in electronic instruments and remote sensing technologies, airborne three-line array sensors and their applications are being widely promoted and plentiful research related to data processing and high precision geo-referencing technologies is under way. The exterior orientation parameters (EOPs), which are measured by the integrated positioning and orientation system (POS) of airborne three-line sensors, however, have inevitable systematic errors, so the level of precision of direct geo-referencing is not sufficiently accurate for surveying and mapping applications. Consequently, a few ground control points are necessary to refine the exterior orientation parameters, and this paper will discuss bundle block adjustment models based on the systematic error compensation and the orientation image, considering the principle of an image sensor and the characteristics of the integrated POS. Unlike the models available in the literature, which mainly use a quaternion to represent the rotation matrix of exterior orientation, three rotation angles are directly used in order to effectively model and eliminate the systematic errors of the POS observations. Very good experimental results have been achieved with several real datasets that verify the correctness and effectiveness of the proposed adjustment models.
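    Both records above describe the same parameterization; a minimal sketch of building a rotation matrix directly from the three exterior-orientation angles is shown below. The omega-phi-kappa axis order is the common photogrammetric convention and is an assumption here, since conventions differ between systems.

```python
import numpy as np

def rotation_from_angles(omega, phi, kappa):
    """Rotation matrix R = R_x(omega) @ R_y(phi) @ R_z(kappa) from three
    exterior-orientation angles in radians, the parameterization the paper
    uses in place of quaternions (axis order assumed)."""
    co, so = np.cos(omega), np.sin(omega)
    cp, sp = np.cos(phi), np.sin(phi)
    ck, sk = np.cos(kappa), np.sin(kappa)
    Rx = np.array([[1, 0, 0], [0, co, -so], [0, so, co]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    Rz = np.array([[ck, -sk, 0], [sk, ck, 0], [0, 0, 1]])
    return Rx @ Ry @ Rz

print(rotation_from_angles(0.01, -0.02, 0.5).round(3))
```

    Working in angles rather than quaternions keeps each POS attitude observation tied to a physically meaningful axis, which is what lets a systematic error term be attached to each angle in the adjustment.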

  9. Trial Sequential Analysis in systematic reviews with meta-analysis.

    PubMed

    Wetterslev, Jørn; Jakobsen, Janus Christian; Gluud, Christian

    2017-03-06

    Most meta-analyses in systematic reviews, including Cochrane ones, do not have sufficient statistical power to detect or refute even large intervention effects. This is why a meta-analysis ought to be regarded as an interim analysis on its way towards a required information size. The results of the meta-analyses should relate the total number of randomised participants to the estimated required meta-analytic information size accounting for statistical diversity. When the number of participants and the corresponding number of trials in a meta-analysis are insufficient, the use of the traditional 95% confidence interval or the 5% statistical significance threshold will lead to too many false positive conclusions (type I errors) and too many false negative conclusions (type II errors). We developed a methodology for interpreting meta-analysis results, using generally accepted, valid evidence on how to adjust thresholds for significance in randomised clinical trials when the required sample size has not been reached. The Lan-DeMets trial sequential monitoring boundaries in Trial Sequential Analysis offer adjusted confidence intervals and restricted thresholds for statistical significance when the diversity-adjusted required information size and the corresponding number of required trials for the meta-analysis have not been reached. Trial Sequential Analysis provides a frequentistic approach to control both type I and type II errors. We define the required information size and the corresponding number of required trials in a meta-analysis and the diversity (D²) measure of heterogeneity. We explain the reasons for using Trial Sequential Analysis of meta-analysis when the actual information size fails to reach the required information size. We present examples drawn from traditional meta-analyses using unadjusted naïve 95% confidence intervals and 5% thresholds for statistical significance. Spurious conclusions in systematic reviews with traditional meta-analyses can be reduced using Trial Sequential Analysis. Several empirical studies have demonstrated that the Trial Sequential Analysis provides better control of type I errors and of type II errors than the traditional naïve meta-analysis. Trial Sequential Analysis represents analysis of meta-analytic data, with transparent assumptions, and better control of type I and type II errors than the traditional meta-analysis using naïve unadjusted confidence intervals.
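    A common way to operationalize the diversity-adjusted required information size for a continuous outcome is to inflate the usual fixed-effect sample size by 1/(1 − D²). The sketch below follows that recipe under assumed values for the minimal relevant difference, standard deviation, and diversity; it is a simplified reading of the approach, not the authors' software.

```python
from math import ceil
from scipy.stats import norm

def diversity_adjusted_ris(delta, sd, d2, alpha=0.05, beta=0.10):
    """Diversity-adjusted required information size (DARIS) for a continuous
    outcome with 1:1 allocation: the conventional total sample size inflated
    by 1/(1 - D^2). delta is the minimal relevant difference, sd the outcome
    standard deviation, d2 the diversity (D^2); values below are illustrative."""
    z_a = norm.ppf(1 - alpha / 2)
    z_b = norm.ppf(1 - beta)
    ris = 4 * (z_a + z_b) ** 2 * sd ** 2 / delta ** 2   # total participants
    return ceil(ris / (1 - d2))

print(diversity_adjusted_ris(delta=5.0, sd=15.0, d2=0.25))  # ~505 participants
```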

  10. Under conditions of large geometric miss, tumor control probability can be higher for static gantry intensity-modulated radiation therapy compared to volume-modulated arc therapy for prostate cancer.

    PubMed

    Balderson, Michael; Brown, Derek; Johnson, Patricia; Kirkby, Charles

    2016-01-01

    The purpose of this work was to compare static gantry intensity-modulated radiation therapy (IMRT) with volume-modulated arc therapy (VMAT) in terms of tumor control probability (TCP) under scenarios involving large geometric misses, i.e., those beyond what are accounted for when margin expansion is determined. Using a planning approach typical for these treatments, a linear-quadratic-based model for TCP was used to compare mean TCP values for a population of patients who experiences a geometric miss (i.e., systematic and random shifts of the clinical target volume within the planning target dose distribution). A Monte Carlo approach was used to account for the different biological sensitivities of a population of patients. Interestingly, for errors consisting of coplanar systematic target volume offsets and three-dimensional random offsets, static gantry IMRT appears to offer an advantage over VMAT in that larger shift errors are tolerated for the same mean TCP. For example, under the conditions simulated, erroneous systematic shifts of 15 mm directly between or directly into static gantry IMRT fields result in mean TCP values between 96% and 98%, whereas the same errors on VMAT plans result in mean TCP values between 45% and 74%. Random geometric shifts of the target volume were characterized using normal distributions in each Cartesian dimension. When the standard deviations were doubled from those values assumed in the derivation of the treatment margins, our model showed a 7% drop in mean TCP for the static gantry IMRT plans but a 20% drop in TCP for the VMAT plans. Although adding a margin for error to a clinical target volume is perhaps the best approach to account for expected geometric misses, this work suggests that static gantry IMRT may offer a treatment that is more tolerant to geometric miss errors than VMAT. Copyright © 2016 American Association of Medical Dosimetrists. Published by Elsevier Inc. All rights reserved.
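    The core of such a calculation is a Poisson TCP with linear-quadratic cell kill, averaged over a population with varying radiosensitivity. The sketch below shows that structure; all parameter values (alpha distribution, alpha/beta, clonogen number, fractionation) are illustrative and are not the study's, and the dose-distribution sampling under shifts is omitted.

```python
import numpy as np

rng = np.random.default_rng(3)

def mean_tcp(dose_to_ctv, n_fractions=39, alpha_mean=0.15, alpha_sd=0.04,
             alpha_beta=3.0, clonogens=1e6, n_patients=5000):
    """Population-mean Poisson TCP with linear-quadratic cell kill, sampling
    alpha across patients to mimic a Monte Carlo over biological sensitivity."""
    alphas = np.clip(rng.normal(alpha_mean, alpha_sd, n_patients), 1e-3, None)
    d = dose_to_ctv / n_fractions                        # dose per fraction, Gy
    surviving_fraction = np.exp(-alphas * dose_to_ctv * (1 + d / alpha_beta))
    return np.mean(np.exp(-clonogens * surviving_fraction))

# A geometric miss that drops the CTV dose from 78 Gy to 70 Gy lowers mean TCP:
print(mean_tcp(78.0), mean_tcp(70.0))
```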

  11. Modeling Errors in Daily Precipitation Measurements: Additive or Multiplicative?

    NASA Technical Reports Server (NTRS)

    Tian, Yudong; Huffman, George J.; Adler, Robert F.; Tang, Ling; Sapiano, Matthew; Maggioni, Viviana; Wu, Huan

    2013-01-01

    The definition and quantification of uncertainty depend on the error model used. For uncertainties in precipitation measurements, two types of error models have been widely adopted: the additive error model and the multiplicative error model. This leads to incompatible specifications of uncertainties and impedes intercomparison and application. In this letter, we assess the suitability of both models for satellite-based daily precipitation measurements in an effort to clarify the uncertainty representation. Three criteria were employed to evaluate the applicability of either model: (1) better separation of the systematic and random errors; (2) applicability to the large range of variability in daily precipitation; and (3) better predictive skills. It is found that the multiplicative error model is a much better choice under all three criteria. It extracted the systematic errors more cleanly, was more consistent with the large variability of precipitation measurements, and produced superior predictions of the error characteristics. The additive error model had several weaknesses, such as non-constant variance resulting from systematic errors leaking into random errors, and a lack of prediction capability. Therefore, the multiplicative error model is the better choice.
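    The contrast between the two models can be demonstrated on synthetic data: an additive fit summarizes errors on the raw scale, while a multiplicative fit (linear in log space) separates systematic parameters from a random residual. Everything below (distributions, parameters, noise level) is made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(4)
truth = rng.gamma(shape=0.6, scale=8.0, size=5000)          # skewed daily rain, mm
est = 1.3 * truth ** 0.9 * np.exp(rng.normal(0, 0.4, truth.size))  # biased retrieval

# Additive model: E = T + e  ->  one bias and one residual spread on the raw scale
add_bias = np.mean(est - truth)
add_spread = np.std(est - truth)     # inflated at high rain rates (heteroscedastic)

# Multiplicative model: E = a * T^b * eps  ->  linear regression in log space
A = np.vstack([np.ones(truth.size), np.log(truth)]).T
coef, *_ = np.linalg.lstsq(A, np.log(est), rcond=None)
log_spread = np.std(np.log(est) - A @ coef)    # near-constant random component
print(add_bias, add_spread)
print(np.exp(coef[0]), coef[1], log_spread)    # recovers a ~ 1.3, b ~ 0.9, 0.4
```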

  12. Systematic review of the evidence for Trails B cut-off scores in assessing fitness-to-drive

    PubMed Central

    Roy, Mononita; Molnar, Frank

    2013-01-01

    Background Fitness-to-drive guidelines recommend employing the Trail Making B Test (a.k.a. Trails B), but do not provide guidance regarding cut-off scores. There is ongoing debate regarding the optimal cut-off score on the Trails B test. The objective of this study was to address this controversy by systematically reviewing the evidence for specific Trails B cut-off scores (e.g., cut-offs in both time to completion and number of errors) with respect to fitness-to-drive. Methods Systematic review of all prospective cohort, retrospective cohort, case-control, correlation, and cross-sectional studies reporting the ability of the Trails B to predict driving safety that were published in English-language, peer-reviewed journals. Results Forty-seven articles were reviewed. None of the articles justified sample sizes via formal calculations. Cut-off scores reported based on research include: 90 seconds, 133 seconds, 147 seconds, 180 seconds, and < 3 errors. Conclusions There is support for the previously published Trails B cut-offs of 3 minutes or 3 errors (the ‘3 or 3 rule’). Major methodological limitations of this body of research were uncovered including (1) lack of justification of sample size leaving studies open to Type II error (i.e., false negative findings), and (2) excessive focus on associations rather than clinically useful cut-off scores. PMID:23983828

  13. Quantitative evaluation for accumulative calibration error and video-CT registration errors in electromagnetic-tracked endoscopy.

    PubMed

    Liu, Sheena Xin; Gutiérrez, Luis F; Stanton, Doug

    2011-05-01

    Electromagnetic (EM)-guided endoscopy has demonstrated its value in minimally invasive interventions. Accuracy evaluation of the system is of paramount importance to clinical applications. Previously, a number of researchers have reported the results of calibrating the EM-guided endoscope; however, the accumulated errors of an integrated system, which ultimately reflect intra-operative performance, have not been characterized. To fill this gap, we propose a novel system to perform this evaluation and use a 3D metric to reflect the intra-operative procedural accuracy. This paper first presents a portable design and a method for calibration of an electromagnetic (EM)-tracked endoscopy system. An evaluation scheme is then described that uses the calibration results and EM-CT registration to enable real-time data fusion between CT and endoscopic video images. We present quantitative evaluation results for estimating the accuracy of this system using eight internal fiducials as the targets on an anatomical phantom: the error is obtained by comparing the positions of these targets in the CT space, EM space and endoscopy image space. To obtain 3D error estimation, the 3D locations of the targets in the endoscopy image space are reconstructed from stereo views of the EM-tracked monocular endoscope. Thus, the accumulated errors are evaluated in a controlled environment, where the ground truth information is present and systematic performance (including the calibration error) can be assessed. We obtain the mean in-plane error to be on the order of 2 pixels. To evaluate the data integration performance for virtual navigation, target video-CT registration error (TRE) is measured as the 3D Euclidean distance between the 3D-reconstructed targets of endoscopy video images and the targets identified in CT. The 3D error (TRE) encapsulates EM-CT registration error, EM-tracking error, fiducial localization error, and optical-EM calibration error. We present in this paper our calibration method and a virtual navigation evaluation system for quantifying the overall errors of the intra-operative data integration. We believe this phantom not only offers us good insights to understand the systematic errors encountered in all phases of an EM-tracked endoscopy procedure but also can provide quality control of laboratory experiments for endoscopic procedures before the experiments are transferred from the laboratory to human subjects.

  14. Pointing control using a moving base of support.

    PubMed

    Hondzinski, Jan M; Kwon, Taegyong

    2009-07-01

    The purposes of this study were to determine whether gaze direction provides a control signal for movement direction for a pointing task requiring a step and to gain insight into why discrepancies previously identified in the literature for endpoint accuracy with gaze directed eccentrically exist. Straight arm pointing movements were performed to real and remembered target locations, either toward or 30 degrees eccentric to gaze direction. Pointing occurred in normal room lighting or darkness while subjects sat, stood still or side-stepped left or right. Trunk rotation contributed 22-65% to gaze orientations when it was not constrained. Error differences for different target locations explained discrepancies among previous experiments. Variable pointing errors were influenced by gaze direction, while mean systematic pointing errors and trunk orientations were influenced by step direction. These data support the use of a control strategy that relies on gaze direction and equilibrium inputs for whole-body goal-directed movements.

  15. Regionalized PM2.5 Community Multiscale Air Quality model performance evaluation across a continuous spatiotemporal domain.

    PubMed

    Reyes, Jeanette M; Xu, Yadong; Vizuete, William; Serre, Marc L

    2017-01-01

    The regulatory Community Multiscale Air Quality (CMAQ) model is a means of understanding the sources, concentrations, and regulatory attainment of air pollutants within a model's domain. Substantial resources are allocated to the evaluation of model performance. The Regionalized Air quality Model Performance (RAMP) method introduced here explores novel ways of visualizing and evaluating CMAQ model performance and errors for daily concentrations of particulate matter ≤ 2.5 micrometers (PM2.5) across the continental United States. The RAMP method performs a non-homogeneous, non-linear, non-homoscedastic model performance evaluation at each CMAQ grid cell. This work demonstrates that CMAQ model performance, for a well-documented 2001 regulatory episode, is non-homogeneous across space/time. The RAMP correction of systematic errors outperforms other model evaluation methods, as demonstrated by a 22.1% reduction in Mean Square Error compared to a constant domain-wide correction. The RAMP method is able to accurately reproduce simulated performance with a correlation of r = 76.1%. Most of the error coming from CMAQ is random error, with only a minority being systematic. Areas of high systematic error are collocated with areas of high random error, implying both error types originate from similar sources. Therefore, addressing the underlying causes of systematic error will have the added benefit of also addressing the underlying causes of random error.
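
    The split between systematic and random error above follows from the usual decomposition of mean square error into squared mean bias plus error variance. The toy sketch below illustrates that split and what a constant domain-wide correction can and cannot remove; it is not the RAMP method itself, and all numbers are synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)
obs = rng.gamma(shape=4.0, scale=3.0, size=1000)            # "observed" PM2.5
model = 0.8 * obs + 4.0 + rng.normal(0.0, 2.0, size=1000)   # biased, noisy model

err = model - obs
mse = np.mean(err ** 2)
systematic = np.mean(err) ** 2       # squared mean bias
random_part = np.var(err)            # spread of errors around that bias
print(f"MSE {mse:.2f} = systematic {systematic:.2f} + random {random_part:.2f}")

# A constant domain-wide correction removes only the mean bias; a local,
# per-cell correction (the RAMP idea) can also target spatially varying bias.
corrected = model - err.mean()
print("MSE after constant correction:", round(np.mean((corrected - obs) ** 2), 2))
```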

  16. Quantitative assessment of hit detection and confirmation in single and duplicate high-throughput screenings.

    PubMed

    Wu, Zhijin; Liu, Dongmei; Sui, Yunxia

    2008-02-01

    The process of identifying active targets (hits) in high-throughput screening (HTS) usually involves 2 steps: first, removing or adjusting for systematic variation in the measurement process so that extreme values represent strong biological activity instead of systematic biases such as plate effects or edge effects, and second, choosing a meaningful cutoff on the calculated statistic to declare positive compounds. Both false-positive and false-negative errors are inevitable in this process. Control or estimation of error rates is commonly based on an assumption of normally distributed noise. The error rates in hit detection, especially false-negative rates, are hard to verify because in most assays, only compounds selected in primary screening are followed up in confirmation experiments. In this article, the authors take advantage of a quantitative HTS experiment in which all compounds are tested 42 times over a wide range of 14 concentrations, so that true positives can be identified through a dose-response curve. Using the activity status defined by the dose curve, the authors analyzed the effect of various data-processing procedures on the sensitivity and specificity of hit detection, the control of error rate, and hit confirmation. A new summary score is proposed and demonstrated to perform well in hit detection and to be useful in confirmation rate estimation. In general, adjusting for positional effects is beneficial, but a robust test can prevent overadjustment. Error rates estimated under the normal assumption do not agree with actual error rates, because the tails of the noise distribution deviate from normality. However, the false discovery rate based on an empirically estimated null distribution is very close to the observed false discovery proportion.
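
    A minimal sketch of the closing point, under the assumption that an empirical null is available (here, simulated negative-control wells): the false discovery rate estimated from the empirical null tracks the heavy tails that a normal-theory estimate misses. Scores and cutoffs are illustrative.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
# Hypothetical scores: negative-control wells define the empirical null;
# compound wells contain a small fraction of true actives.
null_scores = rng.standard_t(df=3, size=5000)              # heavy-tailed noise
compound_scores = np.concatenate([rng.standard_t(df=3, size=9500),
                                  rng.normal(5.0, 1.0, size=500)])

cutoff = 3.0  # declare hits at or above this score
observed_hits = np.sum(compound_scores >= cutoff)

# Empirical-null FDR estimate: expected false hits / observed hits.
p_null = np.mean(null_scores >= cutoff)
fdr_empirical = p_null * len(compound_scores) / max(observed_hits, 1)

# A normal-assumption estimate understates the t-distribution's heavy tail.
p_norm = norm.sf(cutoff, loc=null_scores.mean(), scale=null_scores.std())
fdr_normal = p_norm * len(compound_scores) / max(observed_hits, 1)
print(f"hits: {observed_hits}, empirical FDR: {fdr_empirical:.3f}, "
      f"normal-assumption FDR: {fdr_normal:.3f}")
```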

  17. A study to assess the long-term stability of the ionization chamber reference system in the LNMRI

    NASA Astrophysics Data System (ADS)

    Trindade Filho, O. L.; Conceição, D. A.; da Silva, C. J.; Delgado, J. U.; de Oliveira, A. E.; Iwahara, A.; Tauhata, L.

    2018-03-01

    Ionization chambers are used as secondary standards to maintain the calibration factors of radionuclides for activity measurements in metrology laboratories. Although such chambers are also used as radionuclide calibrators in nuclear medicine clinics to control the dose delivered to patients, their long-term performance is rarely evaluated systematically. Here, a methodology for monitoring and checking long-term stability is presented. Historical data recorded monthly from 2012 to 2017 with an ionization chamber, an electrometer, and a 226Ra check source were analyzed via control charts in order to follow the long-term performance. The monitored systematic errors remained consistently within the control limits, demonstrating the quality of the measurements in compliance with ISO 17025.
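
    A minimal sketch of the control-chart logic described above, assuming monthly check-source readings normalized to a decay-corrected reference value; the data and the 3-sigma limits are illustrative, not the LNMRI records.

```python
import numpy as np

rng = np.random.default_rng(2)
# Hypothetical monthly 226Ra check-source readings, normalized to the
# initial calibration value and decay-corrected.
readings = 1.0 + rng.normal(0.0, 0.003, size=60)

center = readings.mean()
sigma = readings.std(ddof=1)
ucl, lcl = center + 3 * sigma, center - 3 * sigma   # classic 3-sigma limits

out_of_control = np.flatnonzero((readings > ucl) | (readings < lcl))
print(f"center={center:.4f}, LCL={lcl:.4f}, UCL={ucl:.4f}")
print("out-of-control months:", out_of_control if out_of_control.size else "none")
```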

  18. Errors in radial velocity variance from Doppler wind lidar

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, H.; Barthelmie, R. J.; Doubrawa, P.

    A high-fidelity lidar turbulence measurement technique relies on accurate estimates of radial velocity variance that are subject to both systematic and random errors determined by the autocorrelation function of radial velocity, the sampling rate, and the sampling duration. Our paper quantifies the effect of the volumetric averaging in lidar radial velocity measurements on the autocorrelation function and the dependence of the systematic and random errors on the sampling duration, using both statistically simulated and observed data. For current-generation scanning lidars and sampling durations of about 30 min and longer, during which the stationarity assumption is valid for atmospheric flows, the systematic error is negligible but the random error exceeds about 10%.

  20. Recovery of Large Angular Scale CMB Polarization for Instruments Employing Variable-Delay Polarization Modulators

    NASA Technical Reports Server (NTRS)

    Miller, N. J.; Chuss, D. T.; Marriage, T. A.; Wollack, E. J.; Appel, J. W.; Bennett, C. L.; Eimer, J.; Essinger-Hileman, T.; Fixsen, D. J.; Harrington, K.; hide

    2016-01-01

    Variable-delay Polarization Modulators (VPMs) are currently being implemented in experiments designed to measure the polarization of the cosmic microwave background on large angular scales because of their capability for providing rapid, front-end polarization modulation and control over systematic errors. Despite the advantages provided by the VPM, it is important to identify and mitigate any time-varying effects that leak into the synchronously modulated component of the signal. In this paper, the effect of emission from a 300 K VPM on the system performance is considered and addressed. Though instrument design can greatly reduce the influence of modulated VPM emission, some residual modulated signal is expected. VPM emission is treated in the presence of rotational misalignments and temperature variation. Simulations of time-ordered data are used to evaluate the effect of these residual errors on the power spectrum. The analysis and modeling in this paper guides experimentalists on the critical aspects of observations using VPMs as front-end modulators. By implementing the characterizations and controls as described, front-end VPM modulation can be very powerful for mitigating 1/f noise in large angular scale polarimetric surveys. None of the systematic errors studied fundamentally limit the detection and characterization of B-modes on large scales for a tensor-to-scalar ratio of r = 0.01. Indeed, r < 0.01 is achievable with commensurately improved characterizations and controls.

  1. A procedure for the significance testing of unmodeled errors in GNSS observations

    NASA Astrophysics Data System (ADS)

    Li, Bofeng; Zhang, Zhetao; Shen, Yunzhong; Yang, Ling

    2018-01-01

    It is a crucial task to establish a precise mathematical model for global navigation satellite system (GNSS) observations in precise positioning. Due to the spatiotemporal complexity of, and limited knowledge on, systematic errors in GNSS observations, some residual systematic errors inevitably remain even after correction with empirical models and parameterization. These residual systematic errors are referred to as unmodeled errors. However, most existing studies focus on handling the systematic errors that can be properly modeled and simply ignore the unmodeled errors that may actually exist. To further improve the accuracy and reliability of GNSS applications, such unmodeled errors must be handled, especially when they are significant. A first question, therefore, is how to statistically validate the significance of unmodeled errors. In this research, we propose a procedure to examine the significance of these unmodeled errors by the combined use of hypothesis tests. With this testing procedure, three components of unmodeled errors, i.e., the nonstationary signal, the stationary signal, and white noise, are identified. The procedure is tested using simulated data and real BeiDou datasets with varying error sources. The results show that the unmodeled errors can be discriminated by our procedure with approximately 90% confidence. The efficiency of the proposed procedure is further confirmed by applying time-domain Allan variance analysis and the frequency-domain fast Fourier transform. In summary, spatiotemporally correlated unmodeled errors are commonly present in GNSS observations and are mainly governed by residual atmospheric biases and multipath; their patterns may also be affected by the receiver.
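
    To make the time-domain check concrete, the sketch below computes a non-overlapped Allan variance on synthetic residuals that mix white noise with a slowly varying, time-correlated component (a stand-in for residual multipath). For pure white noise the Allan variance falls off as 1/τ, so flattening at large τ signals correlated unmodeled errors. All data are simulated, not BeiDou observations.

```python
import numpy as np

def allan_variance(x, m):
    """Non-overlapped Allan variance of series x for cluster size m."""
    n = len(x) // m
    cluster_means = x[: n * m].reshape(n, m).mean(axis=1)
    return 0.5 * np.mean(np.diff(cluster_means) ** 2)

rng = np.random.default_rng(3)
white = rng.normal(0.0, 1.0, 20000)                   # white measurement noise
slow = np.convolve(rng.normal(0.0, 3.0, 20000),       # time-correlated signal,
                   np.ones(200) / 200, mode="same")   # e.g. residual multipath
residuals = white + slow

for m in (1, 10, 100, 1000):
    print(f"tau = {m:4d}  AVAR = {allan_variance(residuals, m):.5f}")
# White noise alone falls off as 1/tau; flattening at large tau indicates
# correlated unmodeled errors, consistent with the analysis above.
```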

  2. Synthesis of Arbitrary Quantum Circuits to Topological Assembly: Systematic, Online and Compact.

    PubMed

    Paler, Alexandru; Fowler, Austin G; Wille, Robert

    2017-09-05

    It is challenging to transform an arbitrary quantum circuit into a form protected by surface code quantum error correcting codes (a variant of topological quantum error correction), especially if the goal is to minimise overhead. One of the issues is the efficient placement of magic state distillation subcircuits, so-called distillation boxes, in the space-time volume that abstracts the computation's required resources. This work presents a general, systematic, online method for the synthesis of such circuits. Distillation box placement is controlled by so-called schedulers. The work introduces a greedy scheduler that generates compact box placements. The implemented software, whose source code is available at www.github.com/alexandrupaler/tqec, is used to illustrate and discuss synthesis examples. Synthesis and optimisation improvements are also proposed.

  3. Integration of imagery and cartographic data through a common map base

    NASA Technical Reports Server (NTRS)

    Clark, J.

    1983-01-01

    Several disparate data types are integrated by using control points as the basis for spatially registering the data to a map base. The data are reprojected to match the coordinates of the reference UTM (Universal Transverse Mercator) map projection, as expressed in lines and samples. Control point selection is the most critical aspect of integrating the Thematic Mapper Simulator MSS imagery with the cartographic data. It is noted that control points chosen from the imagery are subject to error from mislocated points, either points that did not correlate well to the reference map or minor pixel offsets caused by interactive cursoring errors. Errors are also introduced in map control points when points are improperly located and digitized, leading to inaccurate latitude and longitude coordinates. Nonsystematic aircraft platform variations, such as yaw, pitch, and roll, affect the spatial fidelity of the imagery in comparison with the quadrangles. Features in adjacent flight paths do not always correspond properly owing to the systematic panorama effect and alteration of flightline direction, as well as platform variations.

  4. Design and analysis of a sub-aperture scanning machine for the transmittance measurements of large-aperture optical system

    NASA Astrophysics Data System (ADS)

    He, Yingwei; Li, Ping; Feng, Guojin; Cheng, Li; Wang, Yu; Wu, Houping; Liu, Zilong; Zheng, Chundi; Sha, Dingguo

    2010-11-01

    For measuring the transmittance of large-aperture optical systems, a novel sub-aperture scanning machine with double-rotating arms (SSMDA) was designed to generate a sub-aperture beam spot. Full-aperture transmittance measurements can then be achieved by applying sub-aperture beam-spot scanning. A mathematical model of the SSMDA based on homogeneous coordinate transformation matrices is established to develop a detailed methodology for analyzing the beam-spot scanning errors. The error analysis considers two fundamental sources of scanning error, namely (1) systematic length errors and (2) systematic rotational errors. With the systematic errors of the parameters given beforehand, the computed scanning errors lie between -0.007 and 0.028 mm for scanning radii up to 400.000 mm. The results offer a theoretical and data basis for research on the transmission characteristics of large optical systems.
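
    A minimal sketch of the homogeneous-transformation error model, assuming an illustrative two-arm geometry (arm lengths, angles, and error magnitudes are invented, not the SSMDA's actual parameters): small length and angle errors are injected into the nominal kinematic chain and the resulting beam-spot displacement is read off.

```python
import numpy as np

def rot_z(theta):
    """4x4 homogeneous rotation about z (rad)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0, 0], [s, c, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]])

def trans_x(d):
    """4x4 homogeneous translation along x (mm)."""
    T = np.eye(4)
    T[0, 3] = d
    return T

def spot(theta1, theta2, r1, r2):
    """Beam-spot position for a double-rotating-arm chain."""
    T = rot_z(theta1) @ trans_x(r1) @ rot_z(theta2) @ trans_x(r2)
    return (T @ np.array([0.0, 0.0, 0.0, 1.0]))[:3]

# Nominal geometry (hypothetical): arm 1 carries arm 2.
theta1, theta2 = np.deg2rad(30.0), np.deg2rad(-45.0)
r1, r2 = 250.0, 150.0  # arm lengths, mm

nominal = spot(theta1, theta2, r1, r2)
# Inject small systematic errors: 0.01 mm length errors, 5 arcsec angle errors.
perturbed = spot(theta1 + np.deg2rad(5 / 3600), theta2 - np.deg2rad(5 / 3600),
                 r1 + 0.01, r2 - 0.01)
print("scanning error (mm):", np.round(perturbed - nominal, 4))
```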

  5. Scattering from binary optics

    NASA Technical Reports Server (NTRS)

    Ricks, Douglas W.

    1993-01-01

    There are a number of sources of scattering in binary optics: etch depth errors, line edge errors, quantization errors, roughness, and the binary approximation to the ideal surface. These sources of scattering can be systematic (deterministic) or random. In this paper, scattering formulas for both systematic and random errors are derived using Fourier optics. These formulas can be used to explain the results of scattering measurements and computer simulations.

  6. Utilizing measure-based feedback in control-mastery theory: A clinical error.

    PubMed

    Snyder, John; Aafjes-van Doorn, Katie

    2016-09-01

    Clinical errors and ruptures are an inevitable part of clinical practice. Often, therapists are unaware that a clinical error or rupture has occurred, leaving no space for repair and potentially leading to patient dropout and/or less effective treatment. One way to overcome our blind spots is by frequently and systematically collecting measure-based feedback from the patient. Patient feedback measures that focus on the process of psychotherapy, such as the Patient's Experience of Attunement and Responsiveness scale (PEAR), can be used in conjunction with treatment outcome measures such as the Outcome Questionnaire 45.2 (OQ-45.2) to monitor the patient's therapeutic experience and progress. The regular use of these types of measures can aid clinicians in the identification of clinical errors and the associated patient deterioration that might otherwise go unnoticed and unaddressed. The current case study describes an instance of clinical error that occurred during the 2-year treatment of a highly traumatized young woman. The clinical error was identified using measure-based feedback and subsequently understood and addressed from the theoretical standpoint of the control-mastery theory of psychotherapy. An alternative hypothetical response is also presented and explained using control-mastery theory. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  7. Systematic errors of EIT systems determined by easily-scalable resistive phantoms.

    PubMed

    Hahn, G; Just, A; Dittmar, J; Hellige, G

    2008-06-01

    We present a simple method to determine systematic errors that will occur in the measurements by EIT systems. The approach is based on very simple scalable resistive phantoms for EIT systems using a 16 electrode adjacent drive pattern. The output voltage of the phantoms is constant for all combinations of current injection and voltage measurements and the trans-impedance of each phantom is determined by only one component. It can be chosen independently from the input and output impedance, which can be set in order to simulate measurements on the human thorax. Additional serial adapters allow investigation of the influence of the contact impedance at the electrodes on resulting errors. Since real errors depend on the dynamic properties of an EIT system, the following parameters are accessible: crosstalk, the absolute error of each driving/sensing channel and the signal to noise ratio in each channel. Measurements were performed on a Goe-MF II EIT system under four different simulated operational conditions. We found that systematic measurement errors always exceeded the error level of stochastic noise since the Goe-MF II system had been optimized for a sufficient signal to noise ratio but not for accuracy. In time difference imaging and functional EIT (f-EIT) systematic errors are reduced to a minimum by dividing the raw data by reference data. This is not the case in absolute EIT (a-EIT) where the resistivity of the examined object is determined on an absolute scale. We conclude that a reduction of systematic errors has to be one major goal in future system design.

  8. Measuring Diagnoses: ICD Code Accuracy

    PubMed Central

    O'Malley, Kimberly J; Cook, Karon F; Price, Matt D; Wildes, Kimberly Raiford; Hurdle, John F; Ashton, Carol M

    2005-01-01

    Objective To examine potential sources of errors at each step of the described inpatient International Classification of Diseases (ICD) coding process. Data Sources/Study Setting The use of disease codes from the ICD has expanded from classifying morbidity and mortality information for statistical purposes to diverse sets of applications in research, health care policy, and health care finance. By describing a brief history of ICD coding, detailing the process for assigning codes, identifying where errors can be introduced into the process, and reviewing methods for examining code accuracy, we help code users more systematically evaluate code accuracy for their particular applications. Study Design/Methods We summarize the inpatient ICD diagnostic coding process from patient admission to diagnostic code assignment. We examine potential sources of errors at each step and offer code users a tool for systematically evaluating code accuracy. Principal Findings Main error sources along the “patient trajectory” include amount and quality of information at admission, communication among patients and providers, the clinician's knowledge and experience with the illness, and the clinician's attention to detail. Main error sources along the “paper trail” include variance in the electronic and written records, coder training and experience, facility quality-control efforts, and unintentional and intentional coder errors, such as misspecification, unbundling, and upcoding. Conclusions By clearly specifying the code assignment process and heightening their awareness of potential error sources, code users can better evaluate the applicability and limitations of codes for their particular situations. ICD codes can then be used in the most appropriate ways. PMID:16178999

  9. Human Error and the International Space Station: Challenges and Triumphs in Science Operations

    NASA Technical Reports Server (NTRS)

    Harris, Samantha S.; Simpson, Beau C.

    2016-01-01

    Any system with a human component is inherently risky. Studies in human factors and psychology have repeatedly shown that human operators will inevitably make errors, regardless of how well they are trained. Onboard the International Space Station (ISS) where crew time is arguably the most valuable resource, errors by the crew or ground operators can be costly to critical science objectives. Operations experts at the ISS Payload Operations Integration Center (POIC), located at NASA's Marshall Space Flight Center in Huntsville, Alabama, have learned that from payload concept development through execution, there are countless opportunities to introduce errors that can potentially result in costly losses of crew time and science. To effectively address this challenge, we must approach the design, testing, and operation processes with two specific goals in mind. First, a systematic approach to error and human centered design methodology should be implemented to minimize opportunities for user error. Second, we must assume that human errors will be made and enable rapid identification and recoverability when they occur. While a systematic approach and human centered development process can go a long way toward eliminating error, the complete exclusion of operator error is not a reasonable expectation. The ISS environment in particular poses challenging conditions, especially for flight controllers and astronauts. Operating a scientific laboratory 250 miles above the Earth is a complicated and dangerous task with high stakes and a steep learning curve. While human error is a reality that may never be fully eliminated, smart implementation of carefully chosen tools and techniques can go a long way toward minimizing risk and increasing the efficiency of NASA's space science operations.

  10. Complete Systematic Error Model of SSR for Sensor Registration in ATC Surveillance Networks

    PubMed Central

    Besada, Juan A.

    2017-01-01

    In this paper, a complete and rigorous mathematical model for secondary surveillance radar systematic errors (biases) is developed. The model takes into account the physical effects systematically affecting the measurement processes. The azimuth biases are calculated from the physical error of the antenna calibration and the errors of the angle-determination device. Distance bias is calculated from the delay of the signal produced by the refractivity index of the atmosphere and from clock errors, while the altitude bias is calculated taking into account the atmospheric conditions (pressure and temperature). It is shown, using simulated and real data, that adapting a classical bias estimation process to use the complete parametrized model results in improved accuracy of the bias estimation. PMID:28934157
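
    A hedged sketch of how such a parametrized bias model can be applied to an ideal measurement; the functional forms and coefficient values below are simplified illustrations, not the paper's complete model.

```python
import numpy as np

def biased_measurement(rho, theta, k_tropo=1e-5, clock_range=30.0,
                       az_offset=np.deg2rad(0.05), ecc=np.deg2rad(0.02)):
    """Apply simplified systematic errors to an ideal (rho [m], theta [rad])
    measurement: a refraction-like range term proportional to range, a clock
    error expressed as one-way range, and azimuth alignment/eccentricity."""
    rho_meas = rho + k_tropo * rho + clock_range
    theta_meas = theta + az_offset + ecc * np.sin(theta)
    return rho_meas, theta_meas

rho, theta = 80_000.0, np.deg2rad(120.0)
rho_m, theta_m = biased_measurement(rho, theta)
print(f"range bias: {rho_m - rho:.1f} m, "
      f"azimuth bias: {np.rad2deg(theta_m - theta) * 60:.2f} arcmin")
```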

  11. Sources of variability and systematic error in mouse timing behavior.

    PubMed

    Gallistel, C R; King, Adam; McDonald, Robert

    2004-01-01

    In the peak procedure, starts and stops in responding bracket the target time at which food is expected. The variability in start and stop times is proportional to the target time (scalar variability), as is the systematic error in the mean center (scalar error). The authors investigated the source of the error and the variability, using head poking in the mouse, with target intervals of 5 s, 15 s, and 45 s, in the standard procedure, and in a variant with 3 different target intervals at 3 different locations in a single trial. The authors conclude that the systematic error is due to the asymmetric location of start and stop decision criteria, and the scalar variability derives primarily from sources other than memory.

  12. Error analysis and system optimization of non-null aspheric testing system

    NASA Astrophysics Data System (ADS)

    Luo, Yongjie; Yang, Yongying; Liu, Dong; Tian, Chao; Zhuo, Yongmo

    2010-10-01

    A non-null aspheric testing system, which employs a partial null lens (PNL) and a reverse iterative optimization reconstruction (ROR) technique, is proposed in this paper. Based on system modeling in ray-tracing software, the parameters of each optical element are optimized, making the system model more precise. The systematic error of the non-null aspheric testing system is analyzed and can be categorized into two types: the error due to the surface parameters of the PNL in the system model, and the remainder from the non-null interferometer, separated by an error-storage subtraction approach. Experimental results show that, after the systematic error is removed from the testing result, the aspheric surface is precisely reconstructed by the ROR technique, and accounting for the systematic error greatly increases the test accuracy of the non-null aspheric testing system.

  13. The influence of different error estimates in the detection of postoperative cognitive dysfunction using reliable change indices with correction for practice effects.

    PubMed

    Lewis, Matthew S; Maruff, Paul; Silbert, Brendan S; Evered, Lis A; Scott, David A

    2007-02-01

    The reliable change index (RCI) expresses change relative to its associated error and is useful in the identification of postoperative cognitive dysfunction (POCD). This paper examines four common RCIs that each account for error in different ways. Three rules incorporate a constant correction for practice effects and are contrasted with the standard RCI that has no correction for practice. These rules are applied to 160 patients undergoing coronary artery bypass graft (CABG) surgery who completed neuropsychological assessments preoperatively and 1 week postoperatively, using error and reliability data from a comparable healthy nonsurgical control group. The rules all identify POCD in a similar proportion of patients, but the within-subject standard deviation (WSD), which expresses the effects of random error, is a theoretically appropriate denominator when a constant correction, removing the effects of systematic error, is deducted from the numerator of an RCI.
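
    A minimal sketch of a practice-corrected RCI along the lines discussed above: the control group's mean practice effect is subtracted in the numerator and a WSD-based error term forms the denominator. The control scores, the particular WSD estimator, and the flagging threshold are illustrative assumptions.

```python
import numpy as np

# Illustrative control-group scores at baseline and retest (not study data).
controls_t1 = np.array([50.0, 48.0, 52.0, 47.0, 51.0])
controls_t2 = np.array([54.0, 49.0, 56.0, 48.0, 55.0])

diffs = controls_t2 - controls_t1
practice = diffs.mean()                 # constant practice-effect correction
wsd = diffs.std(ddof=1) / np.sqrt(2)    # one common within-subject SD estimate

def rci(pre, post):
    """Practice-corrected reliable change index with a WSD denominator."""
    return ((post - pre) - practice) / wsd

# A score that drops beyond chance postoperatively suggests POCD
# (e.g., flag decline when RCI < -1.645 at a one-tailed 5% level).
print("RCI:", round(rci(pre=50.0, post=46.0), 2))
```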

  14. Synoptic scale forecast skill and systematic errors in the MASS 2.0 model. [Mesoscale Atmospheric Simulation System

    NASA Technical Reports Server (NTRS)

    Koch, S. E.; Skillman, W. C.; Kocin, P. J.; Wetzel, P. J.; Brill, K. F.

    1985-01-01

    The synoptic scale performance characteristics of MASS 2.0 are determined by comparing filtered 12-24 hr model forecasts to same-case forecasts made by the National Meteorological Center's synoptic-scale Limited-area Fine Mesh model. Characteristics of the two systems are contrasted, and the analysis methodology used to determine statistical skill scores and systematic errors is described. The overall relative performance of the two models in the sample is documented, and important systematic errors uncovered are presented.

  15. Distributed control of large space antennas

    NASA Technical Reports Server (NTRS)

    Cameron, J. M.; Hamidi, M.; Lin, Y. H.; Wang, S. J.

    1983-01-01

    A systematic way to choose control design parameters and to evaluate performance for large space antennas is presented. The structural dynamics and control properties for a Hoop and Column Antenna and a Wrap-Rib Antenna are characterized. Some results on the effects of model parameter uncertainties on the stability, surface accuracy, and pointing errors are presented. Critical dynamics and control problems for these antenna configurations are identified and potential solutions are discussed. It was concluded that structural uncertainties and model error can cause serious performance deterioration and can even destabilize the controllers. For the hoop and column antenna, the large hoop, the long mast, and the lack of stiffness between the two substructures result in low structural frequencies. Performance can be improved if this design is strengthened. The two-site control system is more robust than either single-site control system for the hoop and column antenna.

  16. Towards a realistic simulation of boreal summer tropical rainfall climatology in state-of-the-art coupled models: role of the background snow-free land albedo

    NASA Astrophysics Data System (ADS)

    Terray, P.; Sooraj, K. P.; Masson, S.; Krishna, R. P. M.; Samson, G.; Prajeesh, A. G.

    2017-07-01

    State-of-the-art global coupled models used in seasonal prediction systems and climate projections still have important deficiencies in representing the boreal summer tropical rainfall climatology. These errors include prominently a severe dry bias over all the Northern Hemisphere monsoon regions, excessive rainfall over the ocean and an unrealistic double inter-tropical convergence zone (ITCZ) structure in the tropical Pacific. While these systematic errors can be partly reduced by increasing the horizontal atmospheric resolution of the models, they also illustrate our incomplete understanding of the key mechanisms controlling the position of the ITCZ during boreal summer. Using a large collection of coupled models and dedicated coupled experiments, we show that these tropical rainfall errors are partly associated with insufficient surface thermal forcing and incorrect representation of the surface albedo over the Northern Hemisphere continents. Improving the parameterization of the land albedo in two global coupled models leads to a large reduction of these systematic errors and further demonstrates that the Northern Hemisphere subtropical deserts play a seminal role in these improvements through a heat low mechanism.

  17. Analysis of difference between direct and geodetic mass balance measurements at South Cascade Glacier, Washington

    USGS Publications Warehouse

    Krimmel, R.M.

    1999-01-01

    Net mass balance has been measured since 1958 at South Cascade Glacier using the 'direct method,' i.e., area averages of snow gain and of firn and ice loss measured at stakes. Analysis of cartographic vertical photography has allowed measurement of mass balance using the 'geodetic method' in 1970, 1975, 1977, 1979-80, and 1985-97. Water-equivalent change as measured by these nearly independent methods should give similar results. During 1970-97, the direct method shows a cumulative balance of about -15 m, and the geodetic method shows a cumulative balance of about -22 m. The deviation between the two methods is fairly consistent, suggesting no gross errors in either, but rather a cumulative systematic error. It is suspected that the cumulative error is in the direct method, because the geodetic method is based on a non-changing reference, the bedrock control, whereas the direct method is measured with reference only to the previous year's summer surface. Possible sources of mass loss that are missing from the direct method are basal melt, internal melt, and ablation on crevasse walls. Possible systematic measurement errors include underestimation of the density of lost material, sinking stakes, or poorly represented areas.

  18. The Limitations of Model-Based Experimental Design and Parameter Estimation in Sloppy Systems.

    PubMed

    White, Andrew; Tolman, Malachi; Thames, Howard D; Withers, Hubert Rodney; Mason, Kathy A; Transtrum, Mark K

    2016-12-01

    We explore the relationship among experimental design, parameter estimation, and systematic error in sloppy models. We show that the approximate nature of mathematical models poses challenges for experimental design in sloppy models. In many models of complex biological processes it is unknown which physical mechanisms must be included to explain system behaviors. As a consequence, models are often overly complex, with many practically unidentifiable parameters. Furthermore, which mechanisms are relevant or irrelevant varies among experiments. By selecting complementary experiments, experimental design may inadvertently make details that were omitted from the model become relevant. When this occurs, the model will have a large systematic error and fail to give a good fit to the data. We use a simple hyper-model of model error to quantify a model's discrepancy and apply it to two models of complex biological processes (EGFR signaling and DNA repair) with optimally selected experiments. We find that although parameters may be accurately estimated, the discrepancy in the model renders it less predictive than it was in the sloppy regime where systematic error is small. We introduce the concept of a sloppy system: a sequence of models of increasing complexity that become sloppy in the limit of microscopic accuracy. We explore the limits of accurate parameter estimation in sloppy systems and argue that identifying the underlying mechanisms controlling system behavior is better approached by considering a hierarchy of models of varying detail rather than focusing on parameter estimation in a single model.

  20. A new systematic calibration method of ring laser gyroscope inertial navigation system

    NASA Astrophysics Data System (ADS)

    Wei, Guo; Gao, Chunfeng; Wang, Qi; Wang, Qun; Xiong, Zhenyu; Long, Xingwu

    2016-10-01

    The inertial navigation system (INS) is the core component of both military and civil navigation systems. Before an INS is put into application, it must be calibrated in the laboratory in order to compensate for repeatable errors caused by manufacturing. The discrete calibration method cannot fulfill the requirements of high-accuracy calibration of a mechanically dithered ring laser gyroscope navigation system with shock absorbers. This paper analyzes the theory of error excitation and separation in detail and presents a new systematic calibration method for ring laser gyroscope inertial navigation systems. Error models and equations of the calibrated Inertial Measurement Unit are given. Proper rotation arrangement orders are then set out in order to establish the linear relationships between the change of velocity errors and the calibrated parameter errors. Experiments were set up to compare the systematic errors calculated from the filtering calibration result with those obtained from the discrete calibration result. The largest position error and velocity error of the filtering calibration result are only 0.18 miles and 0.26 m/s, compared with 2 miles and 1.46 m/s for the discrete calibration result. These results validate the new systematic calibration method and prove its importance for the optimal design and accuracy improvement of the calibration of mechanically dithered ring laser gyroscope inertial navigation systems.

  1. Measuring Systematic Error with Curve Fits

    ERIC Educational Resources Information Center

    Rupright, Mark E.

    2011-01-01

    Systematic errors are often unavoidable in the introductory physics laboratory. As has been demonstrated in many papers in this journal, such errors can present a fundamental problem for data analysis, particularly when comparing the data to a given model. In this paper I give three examples in which my students use popular curve-fitting software…

  2. Systematic Error Modeling and Bias Estimation

    PubMed Central

    Zhang, Feihu; Knoll, Alois

    2016-01-01

    This paper analyzes the statistical properties of the systematic error in terms of range and bearing during the transformation process. Furthermore, we rely on a weighted nonlinear least squares method to calculate the biases based on the proposed models. The results show the high performance of the proposed approach for error modeling and bias estimation. PMID:27213386
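
    A minimal sketch of bias estimation by weighted nonlinear least squares, in the spirit of the abstract: a range scale factor, a range bias, and a bearing bias are recovered from simulated polar measurements. The error model, weights, and values are assumptions for illustration, not the paper's model.

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(4)
# Simulated sensor: targets at known positions observed in polar coordinates
# with a range scale error, a range bias, and a bearing bias (values invented).
true_r = rng.uniform(100.0, 1000.0, 200)
true_b = rng.uniform(0.0, 2 * np.pi, 200)
sigma_r, sigma_b = 1.0, 0.002
r_obs = 1.01 * true_r + 5.0 + rng.normal(0.0, sigma_r, 200)
b_obs = true_b + 0.01 + rng.normal(0.0, sigma_b, 200)

def residuals(p):
    scale, r_bias, b_bias = p
    # Weight each residual by its inverse measurement standard deviation.
    return np.concatenate([(r_obs - (scale * true_r + r_bias)) / sigma_r,
                           (b_obs - (true_b + b_bias)) / sigma_b])

fit = least_squares(residuals, x0=[1.0, 0.0, 0.0])
print("estimated [scale, range bias (m), bearing bias (rad)]:",
      np.round(fit.x, 4))
```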

  3. A constrained-gradient method to control divergence errors in numerical MHD

    NASA Astrophysics Data System (ADS)

    Hopkins, Philip F.

    2016-10-01

    In numerical magnetohydrodynamics (MHD), a major challenge is maintaining ∇·B = 0. Constrained transport (CT) schemes achieve this but have been restricted to specific methods. For more general (meshless, moving-mesh, ALE) methods, 'divergence-cleaning' schemes reduce the ∇·B errors; however, they can still be significant and can lead to systematic errors which converge away slowly. We propose a new constrained gradient (CG) scheme which augments these with a projection step, and can be applied to any numerical scheme with a reconstruction. This iteratively approximates the least-squares minimizing, globally divergence-free reconstruction of the fluid. Unlike 'locally divergence free' methods, this actually minimizes the numerically unstable ∇·B terms, without affecting the convergence order of the method. We implement this in the mesh-free code GIZMO and compare various test problems. Compared to cleaning schemes, our CG method reduces the maximum ∇·B errors by ~1-3 orders of magnitude (~2-5 dex below typical errors if no ∇·B cleaning is used). By preventing large ∇·B at discontinuities, this eliminates systematic errors at jumps. Our CG results are comparable to CT methods; for practical purposes, the ∇·B errors are eliminated. The cost is modest, ~30 per cent of the hydro algorithm, and the CG correction can be implemented in a range of numerical MHD methods. While for many problems we find Dedner-type cleaning schemes are sufficient for good results, we identify a range of problems where using only Powell or '8-wave' cleaning can produce order-of-magnitude errors.
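
    For intuition, the toy check below measures the finite-difference ∇·B of an analytically divergence-free field on a uniform grid; the residual is pure truncation error, the kind of numerical ∇·B that cleaning and constrained-gradient schemes are designed to suppress. This is a plain grid stencil, not the meshless estimator used in GIZMO.

```python
import numpy as np

n, h = 32, 1.0 / 32
x, y, z = np.meshgrid(*[np.linspace(0.0, 1.0, n, endpoint=False)] * 3,
                      indexing="ij")

# B = curl of (0, 0, Az) with Az = sin(2*pi*x) * sin(2*pi*y) / (2*pi):
# analytically divergence-free by construction.
Bx = np.sin(2 * np.pi * x) * np.cos(2 * np.pi * y)
By = -np.cos(2 * np.pi * x) * np.sin(2 * np.pi * y)
Bz = np.zeros_like(x)

# Central-difference divergence; nonzero values are truncation error only.
div = (np.gradient(Bx, h, axis=0) +
       np.gradient(By, h, axis=1) +
       np.gradient(Bz, h, axis=2))
print("max |div B| from truncation error:", np.abs(div).max())
```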

  4. Mapping and correcting the influence of gaze position on pupil size measurements

    PubMed Central

    Petrov, Alexander A.

    2015-01-01

    Pupil size is correlated with a wide variety of important cognitive variables and is increasingly being used by cognitive scientists. Pupil data can be recorded inexpensively and non-invasively by many commonly used video-based eye-tracking cameras. Despite the relative ease of data collection and increasing prevalence of pupil data in the cognitive literature, researchers often underestimate the methodological challenges associated with controlling for confounds that can result in misinterpretation of their data. One serious confound that is often not properly controlled is pupil foreshortening error (PFE)—the foreshortening of the pupil image as the eye rotates away from the camera. Here we systematically map PFE using an artificial eye model and then apply a geometric model correction. Three artificial eyes with different fixed pupil sizes were used to systematically measure changes in pupil size as a function of gaze position with a desktop EyeLink 1000 tracker. A grid-based map of pupil measurements was recorded with each artificial eye across three experimental layouts of the eye-tracking camera and display. Large, systematic deviations in pupil size were observed across all nine maps. The measured PFE was corrected by a geometric model that expressed the foreshortening of the pupil area as a function of the cosine of the angle between the eye-to-camera axis and the eye-to-stimulus axis. The model reduced the root mean squared error of pupil measurements by 82.5% when the model parameters were pre-set to the physical layout dimensions, and by 97.5% when they were optimized to fit the empirical error surface. PMID:25953668
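
    A minimal sketch of the cosine correction described above, assuming eye-to-camera and eye-to-stimulus vectors in a common coordinate frame; the geometry and measured area are invented for illustration, and real use would fit the model parameters to an empirical error surface as the authors did.

```python
import numpy as np

def correct_pfe(measured_area, eye_to_camera, eye_to_stimulus):
    """Divide the measured pupil area by cos(angle between the eye-to-camera
    and eye-to-stimulus vectors), per the geometric model described above."""
    a = np.asarray(eye_to_camera, dtype=float)
    b = np.asarray(eye_to_stimulus, dtype=float)
    cos_theta = a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
    return measured_area / cos_theta

# Illustrative geometry (mm): camera below the display, gaze at a screen corner.
eye_to_camera = [0.0, -150.0, 600.0]
eye_to_stimulus = [250.0, 120.0, 600.0]
print("corrected area:",
      round(correct_pfe(1000.0, eye_to_camera, eye_to_stimulus), 1))
```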

  5. Living systematic reviews: 3. Statistical methods for updating meta-analyses.

    PubMed

    Simmonds, Mark; Salanti, Georgia; McKenzie, Joanne; Elliott, Julian

    2017-11-01

    A living systematic review (LSR) should keep the review current as new research evidence emerges. Any meta-analyses included in the review will also need updating as new material is identified. If the aim of the review is solely to present the best current evidence, standard meta-analysis may be sufficient, provided reviewers are aware that results may change at later updates. If the review is used in a decision-making context, more caution may be needed. When using standard meta-analysis methods, the chance of incorrectly concluding that any updated meta-analysis is statistically significant when there is no effect (the type I error) increases rapidly as more updates are performed. Inaccurate estimation of any heterogeneity across studies may also lead to inappropriate conclusions. This paper considers four methods to avoid some of these statistical problems when updating meta-analyses: two of them (the law of the iterated logarithm and the Shuster method) control primarily for inflation of type I error, while the other two (trial sequential analysis and sequential meta-analysis) control for both type I and type II errors (failing to detect a genuine effect) and take account of heterogeneity. This paper compares the methods and considers how they could be applied to LSRs. Copyright © 2017 Elsevier Inc. All rights reserved.

  6. The Application of Coherent Local Time for Optical Time Transfer and the Quantification of Systematic Errors in Satellite Laser Ranging

    NASA Astrophysics Data System (ADS)

    Schreiber, K. Ulrich; Kodet, Jan

    2018-02-01

    Highly precise time and stable reference frequencies are fundamental requirements for space geodesy. Satellite laser ranging (SLR) is one of these techniques, differing from all other applications such as Very Long Baseline Interferometry (VLBI), Global Navigation Satellite Systems (GNSS), and Doppler Orbitography and Radiopositioning Integrated by Satellite (DORIS) in that it is an optical two-way measurement technique. This means there is no need for a clock synchronization process between the two ends of the distance covered by the measurement. Under the assumption of isotropy for the speed of light, SLR establishes the only practical realization of the Einstein synchronization process so far. It is therefore a powerful time transfer technique. However, in order to transfer time between two remote clocks, it is also necessary to tightly control all possible signal delays in the ranging process. This paper discusses the role of time and frequency in SLR as well as the error sources, before addressing the transfer of time between ground and space. The need for improved signal-delay control led to a major redesign of the local time and frequency distribution at the Geodetic Observatory Wettzell. Closure measurements can now be used to identify and remove systematic errors in SLR measurements.
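
    A deliberately simplified sketch of the underlying time-transfer arithmetic: under the isotropy assumption, the pulse reaches the satellite at the midpoint of the measured round trip, so comparing that epoch with the space clock's timestamp yields the clock offset. Relativistic terms are ignored and all station and space-segment delays are lumped into a single calibrated value; every number is hypothetical.

```python
# All timestamps in seconds; values hypothetical.
t_tx = 0.000000000        # ground clock: pulse transmit epoch
t_rx = 0.016678204        # ground clock: pulse return epoch
t_space = 0.008339202     # space clock: pulse arrival at the satellite
delta_cal = 0.0           # lumped, pre-calibrated internal signal delays (s)

# Einstein synchronization: with isotropic light speed, the pulse reaches the
# satellite at the midpoint of the measured round trip.
t_arrival_ground = 0.5 * (t_tx + t_rx)
clock_offset = t_space - t_arrival_ground - delta_cal
print(f"space-minus-ground clock offset: {clock_offset * 1e9:.1f} ns")
```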

  7. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yu, Juan; Beltran, Chris J., E-mail: beltran.chris@mayo.edu; Herman, Michael G.

    Purpose: To quantitatively and systematically assess dosimetric effects induced by spot positioning error as a function of spot spacing (SS) on intensity-modulated proton therapy (IMPT) plan quality and to facilitate evaluation of safety tolerance limits on spot position. Methods: Spot position errors (PE) ranging from 1 to 2 mm were simulated. Simple plans were created on a water phantom, and IMPT plans were calculated on two pediatric patients with brain tumors of 28 and 3 cc, respectively, using a commercial planning system. For the phantom, a uniform dose was delivered to targets located at depths from 10 to 20 cm with field sizes from 2 × 2 to 15 × 15 cm². Two nominal spot sizes, 4.0 and 6.6 mm (1σ in water at isocenter), were used for treatment planning. The SS ranged from 0.5σ to 1.5σ, which is 2–6 mm for the small spot size and 3.3–9.9 mm for the large spot size. Various perturbation scenarios of a single spot error and of systematic and random multiple spot errors were studied. To quantify the dosimetric effects, percent dose error (PDE) depth profiles and the percent dose error at the maximum dose difference (PDE[ΔDmax]) were used for evaluation. Results: A pair of hot and cold spots was created per spot shift. PDE[ΔDmax] is found to be a complex function of PE, SS, spot size, depth, and global spot distribution that can be well defined in simple models. For volumetric targets, PDE[ΔDmax] is not noticeably affected by the change of field size or target volume within the studied ranges. In general, reducing SS decreased the dose error. For the facility studied, given a single spot error with a PE of 1.2 mm and for both spot sizes, a SS of 1σ resulted in a 2% maximum dose error; a SS larger than 1.25σ substantially increased the dose error and its sensitivity to PE. A similar trend was observed for multiple spot errors (both systematic and random). Systematic PE can lead to noticeable hot spots along the field edges, which may be near critical structures; random PE, however, showed minimal dose error. Conclusions: Dose error dependence on PE was quantitatively and systematically characterized, and an analytic tool was built to simulate systematic and random errors for patient-specific IMPT. This information facilitates the determination of facility-specific spot position error thresholds.

  8. Robust approximation-free prescribed performance control for nonlinear systems and its application

    NASA Astrophysics Data System (ADS)

    Sun, Ruisheng; Na, Jing; Zhu, Bin

    2018-02-01

    This paper presents a robust prescribed performance control approach and its application to nonlinear tail-controlled missile systems with unknown dynamics and uncertainties. The idea of prescribed performance function (PPF) is incorporated into the control design, such that both the steady-state and transient control performance can be strictly guaranteed. Unlike conventional PPF-based control methods, we further tailor a recently proposed systematic control design procedure (i.e. approximation-free control) using the transformed tracking error dynamics, which provides a proportional-like control action. Hence, the function approximators (e.g. neural networks, fuzzy systems) that are widely used to address the unknown nonlinearities in the nonlinear control designs are not needed. The proposed control design leads to a robust yet simplified function approximation-free control for nonlinear systems. The closed-loop system stability and the control error convergence are all rigorously proved. Finally, comparative simulations are conducted based on nonlinear missile systems to validate the improved response and the robustness of the proposed control method.
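
    A minimal sketch of the prescribed-performance idea on a toy first-order plant (not the missile model in the paper): the tracking error is normalized by a decaying funnel ρ(t), passed through a logarithmic transformation that blows up near the funnel boundary, and fed back with a simple proportional-like gain; no function approximator appears anywhere. The gains, funnel parameters, and plant are assumptions.

```python
import numpy as np

# Funnel bound rho(t): the tracking error is required to stay inside it.
rho0, rho_inf, decay = 2.0, 0.05, 1.0
rho = lambda t: (rho0 - rho_inf) * np.exp(-decay * t) + rho_inf

k, dt = 5.0, 1e-3
x, x_ref = 1.5, 0.0                       # toy state and constant reference
for i in range(5000):
    t = i * dt
    xi = (x - x_ref) / rho(t)             # normalized error, must stay in (-1, 1)
    eps = np.log((1 + xi) / (1 - xi))     # transformation blows up near the funnel
    u = -k * eps                          # proportional-like, approximation-free
    x += dt * (0.5 * x + u)               # unknown-but-benign toy dynamics
print(f"final |error| = {abs(x - x_ref):.2e}, funnel bound = {rho(5.0):.3f}")
```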

  9. Reliability of chemotherapy preparation processes: Evaluating independent double-checking and computer-assisted gravimetric control.

    PubMed

    Carrez, Laurent; Bouchoud, Lucie; Fleury-Souverain, Sandrine; Combescure, Christophe; Falaschi, Ludivine; Sadeghipour, Farshid; Bonnabry, Pascal

    2017-03-01

    Background and objectives Centralized chemotherapy preparation units have established systematic strategies to avoid errors. Our work aimed to evaluate the accuracy of manual preparations associated with different control methods. Method A simulation study in an operational setting used phenylephrine and lidocaine as markers. Each operator prepared syringes that were controlled using a different method during each of three sessions (no control, visual double-checking, and gravimetric control). Eight reconstitutions and dilutions were prepared in each session, with variable doses and volumes, using different concentrations of stock solutions. Results were analyzed according to qualitative (choice of stock solution) and quantitative criteria (accurate, <5% deviation from the target concentration; weakly accurate, 5%-10%; inaccurate, 10%-30%; wrong, >30% deviation). Results Eleven operators carried out 19 sessions. No final preparation (n = 438) contained a wrong drug. The protocol involving no control failed to detect 1 of 3 dose errors made, and double-checking failed to detect 3 of 7 dose errors. The gravimetric control method detected all 5 of 5 dose errors. The accuracy of the measured doses was equivalent across the control methods (p = 0.63, Kruskal-Wallis). The final preparations were 58% to 60% accurate, 25% to 27% weakly accurate, 14% to 17% inaccurate, and 0.9% wrong. A high variability was observed between operators. Discussion Gravimetric control was the only method able to detect all dose errors, but it did not improve dose accuracy. A dose accuracy with <5% deviation cannot always be guaranteed using manual production. Automation should be considered in the future.

  10. Addressing Systematic Errors in Correlation Tracking on HMI Magnetograms

    NASA Astrophysics Data System (ADS)

    Mahajan, Sushant S.; Hathaway, David H.; Munoz-Jaramillo, Andres; Martens, Petrus C.

    2017-08-01

    Correlation tracking in solar magnetograms is an effective method to measure the differential rotation and meridional flow on the solar surface. However, since the tracking accuracy required to successfully measure the meridional flow is very high, small systematic errors have a noticeable impact on measured meridional flow profiles. Additionally, the uncertainties of this kind of measurement have historically been underestimated, leading to controversy regarding flow profiles at high latitudes extracted from measurements which are unreliable near the solar limb. Here we present a set of systematic errors we have identified (and potential solutions), including bias caused by physical pixel sizes, center-to-limb systematics, and discrepancies between measurements performed using different time intervals. We have developed numerical techniques to remove these systematic errors and in the process improve the accuracy of the measurements by an order of magnitude. We also present a detailed analysis of the uncertainties in these measurements using synthetic magnetograms, and a quantification, as a function of latitude, of an upper limit below which meridional flow measurements cannot be trusted.
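
    A toy 1-D illustration of why sub-pixel accuracy is the crux: an integer-pixel correlation peak cannot see a 0.4-pixel shift at all, and even parabolic peak interpolation recovers it only with a systematic bias of the kind that must be calibrated out. The synthetic strip and shift are assumptions, far simpler than tracking real HMI magnetograms.

```python
import numpy as np

rng = np.random.default_rng(5)
strip = rng.normal(size=512)                     # synthetic magnetogram strip
true_shift = 0.4                                 # pixels between two frames
x = np.arange(512)
shifted = np.interp(x - true_shift, x, strip)    # second frame, shifted right

corr = np.correlate(shifted, strip, mode="full")
peak = np.argmax(corr)
lag = peak - (len(strip) - 1)
print("integer-pixel estimate:", lag)            # prints 0, missing the shift

# Parabolic interpolation around the peak gives a sub-pixel estimate, but the
# linear-interpolation resampling biases it low: a small systematic error.
num = corr[peak - 1] - corr[peak + 1]
den = 2.0 * (corr[peak - 1] - 2.0 * corr[peak] + corr[peak + 1])
print("sub-pixel estimate:", lag + num / den)
```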

  11. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Balderson, Michael, E-mail: michael.balderson@rmp.uhn.ca; Brown, Derek; Johnson, Patricia

    The purpose of this work was to compare static gantry intensity-modulated radiation therapy (IMRT) with volume-modulated arc therapy (VMAT) in terms of tumor control probability (TCP) under scenarios involving large geometric misses, i.e., those beyond what are accounted for when margin expansion is determined. Using a planning approach typical for these treatments, a linear-quadratic-based model for TCP was used to compare mean TCP values for a population of patients who experience a geometric miss (i.e., systematic and random shifts of the clinical target volume within the planning target dose distribution). A Monte Carlo approach was used to account for the different biological sensitivities of a population of patients. Interestingly, for errors consisting of coplanar systematic target volume offsets and three-dimensional random offsets, static gantry IMRT appears to offer an advantage over VMAT in that larger shift errors are tolerated for the same mean TCP. For example, under the conditions simulated, erroneous systematic shifts of 15 mm directly between or directly into static gantry IMRT fields result in mean TCP values between 96% and 98%, whereas the same errors on VMAT plans result in mean TCP values between 45% and 74%. Random geometric shifts of the target volume were characterized using normal distributions in each Cartesian dimension. When the standard deviations were doubled from the values assumed in the derivation of the treatment margins, our model showed a 7% drop in mean TCP for the static gantry IMRT plans but a 20% drop in TCP for the VMAT plans. Although adding a margin for error to a clinical target volume is perhaps the best approach to account for expected geometric misses, this work suggests that static gantry IMRT may offer a treatment that is more tolerant of geometric miss errors than VMAT.

  12. M-MRAC Backstepping for Systems with Unknown Virtual Control Coefficients

    NASA Technical Reports Server (NTRS)

    Stepanyan, Vahram; Krishnakumar, Kalmanje

    2015-01-01

    The paper presents an over-parametrization free certainty equivalence state feedback backstepping adaptive control design method for systems of any relative degree with unmatched uncertainties and unknown virtual control coefficients. It uses a fast prediction model to estimate the unknown parameters, which is independent of the control design. It is shown that the system's input and output tracking errors can be systematically decreased by the proper choice of the design parameters. The benefits of the approach are demonstrated in numerical simulations.

  13. The application of GPS precise point positioning technology in aerial triangulation

    NASA Astrophysics Data System (ADS)

    Yuan, Xiuxiao; Fu, Jianhong; Sun, Hongxing; Toth, Charles

    In traditional GPS-supported aerotriangulation, differential GPS (DGPS) positioning technology is used to determine the 3-dimensional coordinates of the perspective centers at exposure time with an accuracy of centimeter to decimeter level. This method can significantly reduce the number of ground control points (GCPs). However, the establishment of GPS reference stations for DGPS positioning is not only labor-intensive and costly, but also increases the implementation difficulty of aerial photography. This paper proposes aerial triangulation supported by GPS precise point positioning (PPP) as a way to avoid the use of GPS reference stations and simplify the work of aerial photography. Firstly, we present the algorithm for GPS PPP in aerial triangulation applications. Secondly, the error law of the coordinates of perspective centers determined using GPS PPP is analyzed. Thirdly, based on GPS PPP and aerial triangulation software developed by the authors, four sets of actual aerial images taken from surveying and mapping projects, differing in both terrain and photographic scale, are used as experimental models. The four sets of data were taken over a flat region at a scale of 1:2500, a mountainous region at a scale of 1:3000, a high mountainous region at a scale of 1:32000, and an upland region at a scale of 1:60000. In these experiments, the GPS PPP results were compared with results obtained through DGPS positioning and traditional bundle block adjustment; in this way, the empirical positioning accuracy of GPS PPP in aerial triangulation can be estimated. Finally, the results of bundle block adjustment with airborne GPS controls from GPS PPP are analyzed in detail. The empirical results show that GPS PPP applied in aerial triangulation has a systematic error at the half-meter level and a stochastic error within a few decimeters. However, if a suitable adjustment solution is adopted, the systematic error can be eliminated in GPS-supported bundle block adjustment. When four full GCPs are placed in the corners of the adjustment block and the systematic error is compensated using a set of independent unknown parameters for each strip, the final result of bundle block adjustment with airborne GPS controls from PPP is the same as that with airborne GPS controls from DGPS. Although its accuracy is slightly lower than that of traditional bundle block adjustment with dense GCPs, it can still satisfy the accuracy requirements of photogrammetric point determination for topographic mapping at many scales.

  14. The Thirty Gigahertz Instrument Receiver for the QUIJOTE Experiment: Preliminary Polarization Measurements and Systematic-Error Analysis.

    PubMed

    Casas, Francisco J; Ortiz, David; Villa, Enrique; Cano, Juan L; Cagigas, Jaime; Pérez, Ana R; Aja, Beatriz; Terán, J Vicente; de la Fuente, Luisa; Artal, Eduardo; Hoyland, Roger; Génova-Santos, Ricardo

    2015-08-05

    This paper presents preliminary polarization measurements and systematic-error characterization of the Thirty Gigahertz Instrument receiver developed for the QUIJOTE experiment. The instrument has been designed to measure the polarization of Cosmic Microwave Background radiation from the sky, obtaining the Q, U, and I Stokes parameters of the incoming signal simultaneously. Two kinds of linearly polarized input signals have been used as excitations in the polarimeter measurement tests in the laboratory; these show consistent results in terms of the Stokes parameters obtained. A measurement-based systematic-error characterization technique has been used in order to determine the possible sources of instrumental errors and to assist in the polarimeter calibration process.

  15. A comparison of advanced overlay technologies

    NASA Astrophysics Data System (ADS)

    Dasari, Prasad; Smith, Nigel; Goelzer, Gary; Liu, Zhuan; Li, Jie; Tan, Asher; Koh, Chin Hwee

    2010-03-01

    The extension of optical lithography to 22nm and beyond by Double Patterning Technology is often challenged by CDU and overlay control. With overlay measurement error budgets reduced to the sub-nm range, relying on traditional Total Measurement Uncertainty (TMU) estimates alone is no longer sufficient. In this paper we report scatterometry overlay measurement data from a set of twelve test wafers, using four different target designs. The TMU of these measurements is under 0.4nm, within the process control requirements for the 22nm node. Comparing the measurement differences between DBO targets (using empirical and model-based analysis) and against image-based overlay data indicates the presence of systematic and random measurement errors that exceed the TMU estimate.

  16. Reliability of System Identification Techniques to Assess Standing Balance in Healthy Elderly

    PubMed Central

    Maier, Andrea B.; Aarts, Ronald G. K. M.; van Gerven, Joop M. A.; Arendzen, J. Hans; Schouten, Alfred C.; Meskers, Carel G. M.; van der Kooij, Herman

    2016-01-01

    Objectives System identification techniques have the potential to assess the contribution of the underlying systems involved in standing balance by applying well-known disturbances. We investigated the reliability of standing balance parameters obtained with multivariate closed-loop system identification techniques. Methods In twelve healthy elderly participants, balance tests were performed twice a day on three days. Body sway was measured during two minutes of standing with eyes closed, and the Balance test Room (BalRoom) was used to apply four disturbances simultaneously: two sensory disturbances, to the proprioceptive and the visual system, and two mechanical disturbances, applied at the leg and trunk segment. Using system identification techniques, sensitivity functions of the sensory disturbances and the neuromuscular controller were estimated. Based on generalizability theory (G theory), systematic errors and sources of variability were assessed using linear mixed models, and reliability was assessed by computing indexes of dependability (ID), standard error of measurement (SEM) and minimal detectable change (MDC). Results A systematic error was found between the first and second trials in the sensitivity functions. No systematic error was found in the neuromuscular controller or body sway. The reliability of 15 of 25 parameters and body sway was moderate to excellent when the results of two trials on three days were averaged. To reach excellent reliability on one day for 7 out of 25 parameters, it was predicted that at least seven trials must be averaged. Conclusion This study shows that system identification techniques are a promising method to assess the underlying systems involved in standing balance in the elderly. However, most of the parameters do not appear to be reliable unless a large number of trials are collected across multiple days. To reach excellent reliability in one third of the parameters, a training session for participants is needed and at least seven trials of two minutes must be performed on one day. PMID:26953694
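
    For readers unfamiliar with the indexes named above, the sketch below shows one standard way they can be computed from variance components; the variance values are invented and the exact ID formula depends on the chosen measurement design, so treat this as an assumption-laden illustration rather than the study's analysis:

        import numpy as np

        def reliability_indexes(var_subject, var_error, n_trials=1):
            """G-theory style indexes for a score averaged over n_trials:
            index of dependability (ID), standard error of measurement (SEM)
            and minimal detectable change (MDC, 95% confidence)."""
            err = var_error / n_trials          # averaging shrinks error variance
            ID = var_subject / (var_subject + err)
            SEM = np.sqrt(err)
            MDC = 1.96 * np.sqrt(2.0) * SEM     # smallest change beyond noise
            return ID, SEM, MDC

        for k in (1, 2, 7):                      # more trials -> higher dependability
            print(k, reliability_indexes(var_subject=4.0, var_error=6.0, n_trials=k))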

  17. RECOVERY OF LARGE ANGULAR SCALE CMB POLARIZATION FOR INSTRUMENTS EMPLOYING VARIABLE-DELAY POLARIZATION MODULATORS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Miller, N. J.; Marriage, T. A.; Appel, J. W.

    2016-02-20

    Variable-delay Polarization Modulators (VPMs) are currently being implemented in experiments designed to measure the polarization of the cosmic microwave background on large angular scales because of their capability for providing rapid, front-end polarization modulation and control over systematic errors. Despite the advantages provided by the VPM, it is important to identify and mitigate any time-varying effects that leak into the synchronously modulated component of the signal. In this paper, the effect of emission from a 300 K VPM on the system performance is considered and addressed. Though instrument design can greatly reduce the influence of modulated VPM emission, some residual modulated signal is expected. VPM emission is treated in the presence of rotational misalignments and temperature variation. Simulations of time-ordered data are used to evaluate the effect of these residual errors on the power spectrum. The analysis and modeling in this paper guides experimentalists on the critical aspects of observations using VPMs as front-end modulators. By implementing the characterizations and controls as described, front-end VPM modulation can be very powerful for mitigating 1/f noise in large angular scale polarimetric surveys. None of the systematic errors studied fundamentally limit the detection and characterization of B-modes on large scales for a tensor-to-scalar ratio of r = 0.01. Indeed, r < 0.01 is achievable with commensurately improved characterizations and controls.

  18. Assessment of Systematic Measurement Errors for Acoustic Travel-Time Tomography of the Atmosphere

    DTIC Science & Technology

    2013-01-01

    ... measurements include assessment of the time delays in electronic circuits and mechanical hardware (e.g., drivers and microphones) of a tomography array ... hardware and electronic circuits of the tomography array and errors in synchronization of the transmitted and recorded signals. For example, if ... coordinates can be as large as 30 cm. These errors are equivalent to systematic errors in the travel times of 0.9 ms. Third, loudspeakers which are used ...

  19. Dynamic Modeling Accuracy Dependence on Errors in Sensor Measurements, Mass Properties, and Aircraft Geometry

    NASA Technical Reports Server (NTRS)

    Grauer, Jared A.; Morelli, Eugene A.

    2013-01-01

    A nonlinear simulation of the NASA Generic Transport Model was used to investigate the effects of errors in sensor measurements, mass properties, and aircraft geometry on the accuracy of dynamic models identified from flight data. Measurements from a typical system identification maneuver were systematically and progressively deteriorated and then used to estimate stability and control derivatives within a Monte Carlo analysis. Based on the results, recommendations were provided for maximum allowable errors in sensor measurements, mass properties, and aircraft geometry to achieve desired levels of dynamic modeling accuracy. Results using other flight conditions, parameter estimation methods, and a full-scale F-16 nonlinear aircraft simulation were compared with these recommendations.

  20. DtaRefinery: a software tool for elimination of systematic errors from parent ion mass measurements in tandem mass spectra datasets

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Petyuk, Vladislav A.; Mayampurath, Anoop M.; Monroe, Matthew E.

    2009-12-16

    Hybrid two-stage mass spectrometers capable of both highly accurate mass measurement and MS/MS fragmentation have become widely available in recent years and have allowed for significantly better discrimination between true and false MS/MS peptide identifications by applying relatively narrow windows for maximum allowable deviations for parent ion mass measurements. To fully gain the advantage of highly accurate parent ion mass measurements, it is important to limit systematic mass measurement errors. The DtaRefinery software tool can correct systematic errors in parent ion masses by reading a set of fragmentation spectra, searching for MS/MS peptide identifications, then fitting a model that can estimate systematic errors, and removing them. This results in a new fragmentation spectrum file with updated parent ion masses.
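
    The correction idea can be sketched compactly. The snippet below is not the DtaRefinery model itself but a minimal stand-in: fit a smooth (here polynomial) trend to the ppm mass errors of confidently identified spectra as a function of parent m/z, then subtract that trend from the measured masses:

        import numpy as np

        def recalibrate(parent_mz, ppm_error, degree=2):
            """Fit a low-order polynomial to parent-ion mass errors (ppm) of
            confident identifications and return corrected m/z values."""
            model = np.polynomial.Polynomial.fit(parent_mz, ppm_error, degree)
            return parent_mz * (1.0 - model(parent_mz) * 1e-6)

        rng = np.random.default_rng(1)
        mz = rng.uniform(400.0, 2000.0, 500)
        ppm = 3.0 + 0.002 * mz + rng.normal(0.0, 1.0, mz.size)  # systematic + random
        mz_fixed = recalibrate(mz, ppm)
        removed = (mz - mz_fixed) / mz * 1e6
        print(np.median(ppm), np.median(removed))  # original bias ~ removed bias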

  1. Digital multi-channel high resolution phase locked loop for surveillance radar systems

    NASA Astrophysics Data System (ADS)

    Rizk, Mohamed; Shaaban, Shawky; Abou-El-Nadar, Usama M.; Hafez, Alaa El-Din Sayed

    This paper presents a multi-channel, high-resolution, fast-lock phase-locked loop (PLL) for surveillance radar applications. Phase-detector-based PLLs are simple to design, suffer no systematic phase error, and can run at the highest speed. Reducing the loop gain proportionally improves jitter performance, but also slows locking and reduces the pull-in range. The proposed system digitally processes the error signal driving the voltage-controlled oscillator (VCO) and adapts the loop gain in order to achieve fast lock times while improving in-lock jitter performance. Under certain circumstances the design also improves the frequency agility of the radar system. The results show a fast-lock, high-resolution PLL with a transient time of less than 10 µs, which is suitable for radar applications.
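
    A toy discrete-time, phase-domain sketch of the gain-scheduling idea (all gains, the noise level and the lock-detector threshold are invented, not the authors' design): run a high loop gain during acquisition for fast lock, then drop to a low gain once a crude lock detector trips, trading pull-in speed for lower tracking jitter:

        import numpy as np

        def pll_adaptive(step=0.05, n=4000, g_acq=0.2, g_track=0.02,
                         thresh=0.01, seed=0):
            """Toy phase-domain PLL: high loop gain while acquiring (fast
            lock), reduced gain once a lock detector trips (low jitter)."""
            rng = np.random.default_rng(seed)
            phase, gain, err_avg = 0.0, g_acq, 1.0
            true_err = []
            for k in range(n):
                err = step * (k + 1) - phase                 # true phase error
                meas = err + rng.normal(0.0, 0.005)          # noisy phase detector
                err_avg = 0.99 * err_avg + 0.01 * abs(meas)  # crude lock detector
                if err_avg < thresh:
                    gain = g_track                           # switch to low-jitter gain
                phase += step + gain * meas                  # VCO phase update
                true_err.append(err)
            return np.array(true_err)

        e = pll_adaptive()
        print(abs(e[:100]).mean(), e[-1000:].std())  # fast acquisition, low jitter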

  2. Increasing the reliability of solution exchanges by monitoring solenoid valve actuation.

    PubMed

    Auzmendi, Jerónimo Andrés; Moffatt, Luciano

    2010-01-15

    Solenoid valves are a core component of most solution perfusion systems used in neuroscience research. As they open and close, they control the flow of solution through each perfusion line, thereby modulating the timing and sequence of chemical stimulation. The valves feature a ferromagnetic plunger that moves due to the magnetization of the solenoid and returns to its initial position with the aid of a spring. The delays between the time of voltage application or removal and the actual opening or closing of the valve are difficult to predict beforehand and have to be measured experimentally. Here we propose a simple method for monitoring whether and when the solenoid valve opens and closes. The proposed method detects the movement of the plunger as it generates a measurable signal on the solenoid that surrounds it. Using this plunger signal, we detected the opening and closing of diaphragm and pinch solenoid valves with a systematic error of less than 2 ms. After this systematic error is subtracted, the trial-to-trial error was below 0.2 ms.
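
    The paper detects plunger motion from the signal it induces on the solenoid itself. The sketch below illustrates one plausible timestamping scheme, not the authors' implementation: compare the recorded trace against a reference trace acquired without plunger motion (a hypothetical calibration, e.g. with the valve mechanically blocked) and flag the first robust excursion of the difference:

        import numpy as np

        def actuation_time(t, trace, reference, k=6.0):
            """Timestamp plunger motion as the first excursion of
            (trace - reference) beyond k robust sigma of the residual."""
            resid = trace - reference
            mad = 1.4826 * np.median(np.abs(resid - np.median(resid))) + 1e-12
            return t[np.argmax(np.abs(resid) > k * mad)]

        rng = np.random.default_rng(2)
        t = np.linspace(0.0, 10e-3, 2000)
        ref = 1.0 - np.exp(-t / 2e-3)                       # electrical transient only
        bump = 0.05 * np.exp(-((t - 3e-3) / 2e-4) ** 2)     # plunger-motion signature
        trace = ref + bump + rng.normal(0.0, 0.002, t.size) # measured solenoid signal
        print(actuation_time(t, trace, ref) * 1e3, "ms")    # onset just before 3 ms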

  3. A meta-analysis of inhibitory-control deficits in patients diagnosed with Alzheimer's dementia.

    PubMed

    Kaiser, Anna; Kuhlmann, Beatrice G; Bosnjak, Michael

    2018-05-10

    The authors conducted meta-analyses to determine the magnitude of performance impairments in patients diagnosed with Alzheimer's dementia (AD) compared with healthy aging (HA) controls on eight tasks commonly used to measure inhibitory control. Response time (RT) and error rates from a total of 64 studies were analyzed with random-effects models (overall effects) and mixed-effects models (moderator analyses). Large differences between AD patients and HA controls emerged in the basic inhibition conditions of many of the tasks, with AD patients often performing slower, overall d = 1.17, 95% CI [0.88, 1.45], and making more errors, d = 0.83 [0.63, 1.03]. However, comparably large differences were also present in performance on many of the baseline control conditions, d = 1.01 [0.83, 1.19] for RTs and d = 0.44 [0.19, 0.69] for error rates. A standardized derived inhibition score (i.e., control-condition score - inhibition-condition score) suggested no significant mean group difference for RTs, d = -0.07 [-0.22, 0.08], and only a small difference for errors, d = 0.24 [-0.12, 0.60]. Effects systematically varied across tasks and with AD severity. Although the error rate results suggest a specific deterioration of inhibitory-control abilities in AD, further processes beyond inhibitory control (e.g., a general reduction in processing speed and other, task-specific attentional processes) appear to contribute to AD patients' performance deficits observed on a variety of inhibitory-control tasks. Nonetheless, the inhibition conditions of many of these tasks discriminate well between AD patients and HA controls. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
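
    Pooled effects like those above are typically obtained with a random-effects model. A minimal DerSimonian-Laird sketch, with invented per-study effect sizes and variances rather than the study's data:

        import numpy as np

        def dersimonian_laird(d, var):
            """Random-effects pooled standardized mean difference using the
            DerSimonian-Laird estimate of between-study variance (tau^2)."""
            d, var = np.asarray(d, float), np.asarray(var, float)
            w = 1.0 / var                                   # fixed-effect weights
            d_fe = np.sum(w * d) / np.sum(w)
            Q = np.sum(w * (d - d_fe) ** 2)                 # heterogeneity statistic
            c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
            tau2 = max(0.0, (Q - (len(d) - 1)) / c)
            w_re = 1.0 / (var + tau2)                       # random-effects weights
            d_re = np.sum(w_re * d) / np.sum(w_re)
            se = np.sqrt(1.0 / np.sum(w_re))
            return d_re, d_re - 1.96 * se, d_re + 1.96 * se

        # hypothetical per-study RT effects (d) and sampling variances
        print(dersimonian_laird([1.3, 0.9, 1.5, 1.0], [0.05, 0.08, 0.10, 0.06]))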

  4. Subaperture test of wavefront error of large telescopes: error sources and stitching performance simulations

    NASA Astrophysics Data System (ADS)

    Chen, Shanyong; Li, Shengyi; Wang, Guilin

    2014-11-01

    The wavefront error of large telescopes must be measured to check system quality and to estimate the misalignment of the telescope optics, including the primary, the secondary, and so on. It is usually measured with a focal-plane interferometer and an autocollimator flat (ACF) of the same aperture as the telescope. However, this is challenging for meter-class telescopes due to the high cost and technological difficulty of producing the large ACF. Subaperture testing with a smaller ACF is hence proposed, in combination with advanced stitching algorithms. Major error sources include the surface error of the ACF, misalignment of the ACF, and measurement noise. Different error sources have different impacts on the wavefront error. Basically, the surface error of the ACF behaves like a systematic error, and the astigmatism will accumulate and be enlarged if the azimuth of the subapertures remains fixed. It is difficult to accurately calibrate the ACF because it suffers considerable deformation induced by gravity or mechanical clamping force. Therefore a self-calibrated stitching algorithm is employed to separate the ACF surface error from the subaperture wavefront error. We suggest the ACF be rotated around the optical axis of the telescope for the subaperture test. The algorithm is also able to correct the subaperture tip-tilt based on the overlapping consistency. Since all subaperture measurements are obtained in the same imaging plane, the lateral shift of the subapertures is always known and the real overlapping points can be recognized in this plane. Therefore the lateral positioning error of subapertures has no impact on the stitched wavefront. In contrast, the angular positioning error changes the azimuth of the ACF and hence changes the systematic error. We propose an angularly uneven layout of subapertures to minimize the stitching error, which is contrary to intuition. Finally, measurement noise can never be corrected, but it can be suppressed by averaging and environmental control. We simulate the performance of the stitching algorithm in dealing with the surface error and misalignment of the ACF, and with noise suppression, which provides guidelines for the optomechanical design of the stitching test system.
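
    The overlapping-consistency idea is easy to demonstrate in one dimension. The toy sketch below stitches 1D subaperture profiles by least-squares fitting a piston and tilt for each new subaperture against the already-stitched overlap; the paper's algorithm additionally self-calibrates the ACF surface error, which this toy omits:

        import numpy as np

        def stitch_1d(subs, starts, x):
            """Sequentially stitch 1D subaperture profiles: fit piston+tilt of
            each subaperture to the stitched result over their overlap."""
            acc = np.full(x.size, np.nan)
            acc[starts[0]:starts[0] + subs[0].size] = subs[0]  # datum subaperture
            for s, w in zip(starts[1:], subs[1:]):
                idx = np.arange(s, s + w.size)
                ov = ~np.isnan(acc[idx])                  # overlap with stitched part
                A = np.column_stack([np.ones(ov.sum()), x[idx][ov]])
                coef, *_ = np.linalg.lstsq(A, acc[idx][ov] - w[ov], rcond=None)
                w_corr = w + coef[0] + coef[1] * x[idx]   # remove piston/tilt mismatch
                acc[idx] = np.where(ov, 0.5 * (acc[idx] + w_corr), w_corr)
            return acc

        rng = np.random.default_rng(3)
        x = np.linspace(-1.0, 1.0, 200)
        truth = 0.2 * x ** 3                              # full-aperture profile
        subs, starts = [], [0, 60, 120]
        for s in starts:                                  # each view: own piston/tilt
            seg = truth[s:s + 80] + rng.normal(0.0, 1e-4, 80)
            subs.append(seg + rng.uniform(-1, 1) + rng.uniform(-1, 1) * x[s:s + 80])
        res = stitch_1d(subs, starts, x) - truth
        line = np.polynomial.Polynomial.fit(x, res, 1)    # global datum line
        print(np.std(res - line(x)))                      # ~1e-4 after stitching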

  5. Statistical process control and verifying positional accuracy of a cobra motion couch using step-wedge quality assurance tool.

    PubMed

    Binny, Diana; Lancaster, Craig M; Trapp, Jamie V; Crowe, Scott B

    2017-09-01

    This study utilizes process control techniques to identify action limits for TomoTherapy couch positioning quality assurance tests. A test was introduced to monitor the accuracy of the applied couch offset detection in the TomoTherapy Hi-Art treatment system using the TQA "Step-Wedge Helical" module and MVCT detector. Individual X-charts, process capability (cp), probability (P), and acceptability (cpk) indices were used to monitor 4 years of couch IEC offset data to detect systematic and random errors in couch positional accuracy at different action levels. Process capability tests were also performed on the retrospective data to define tolerances based on user-specified levels. A second study was carried out whereby physical couch offsets were applied using the TQA module and the MVCT detector was used to detect the observed variations. Random and systematic variations were observed relative to the SPC-based upper and lower control limits, and investigations were carried out to maintain the ongoing stability of the process over a 4-year and a three-monthly period. Local trend analysis showed mean variations of up to ±0.5 mm in the three-monthly analysis period for all IEC offset measurements. Variations were also observed between detected and applied offsets using the MVCT detector in the second study, largely in the vertical direction, and actions were taken to remediate this error. Based on the results, it was recommended that imaging shifts in each coordinate direction be applied only after assessing the machine for applied-versus-detected test results using the step helical module. User-specified tolerance levels of at least ±2 mm were recommended for a test frequency of once every 3 months to improve couch positional accuracy. SPC enables detection of systematic variations before machine tolerance levels are reached. Couch encoding system recalibrations reduced variations to user-specified levels, and a monitoring period of 3 months using SPC facilitated the detection of systematic and random variations. SPC analysis of couch positional accuracy enabled greater control in the identification of errors, thereby increasing confidence levels in daily treatment setups. © 2017 Royal Brisbane and Women's Hospital, Metro North Hospital and Health Service. Journal of Applied Clinical Medical Physics published by Wiley Periodicals, Inc. on behalf of American Association of Physicists in Medicine.
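
    As a concrete reference for the SPC machinery mentioned above, here is a minimal individuals-chart sketch with moving-range control limits and cp/cpk against a user tolerance; the offset data and the ±2 mm specification limits are assumptions for illustration:

        import numpy as np

        def individuals_chart(x, spec_lo=-2.0, spec_hi=2.0):
            """Individuals (X) chart limits from the average moving range,
            plus the capability (cp) and acceptability (cpk) indices used to
            judge couch offsets against user tolerance levels."""
            mr = np.abs(np.diff(x)).mean()
            sigma = mr / 1.128                    # d2 constant for moving range of 2
            ucl, lcl = x.mean() + 3 * sigma, x.mean() - 3 * sigma
            cp = (spec_hi - spec_lo) / (6 * sigma)
            cpk = min(spec_hi - x.mean(), x.mean() - spec_lo) / (3 * sigma)
            return ucl, lcl, cp, cpk

        rng = np.random.default_rng(4)
        offsets = rng.normal(0.1, 0.3, 120)       # synthetic couch offset errors [mm]
        print(individuals_chart(offsets))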

  6. State estimation bias induced by optimization under uncertainty and error cost asymmetry is likely reflected in perception.

    PubMed

    Shimansky, Y P

    2011-05-01

    It is well known from numerous studies that perception can be significantly affected by intended action in many everyday situations, indicating that perception and the related decision-making are not a simple, one-way sequence, but a complex iterative cognitive process. However, the underlying functional mechanisms remain unclear. Based on an optimality approach, a quantitative computational model of one such mechanism has been developed in this study. It is assumed in the model that significant uncertainty about task-related parameters of the environment results in parameter estimation errors, and that an optimal control system should minimize the cost of such errors in terms of the optimality criterion. It is demonstrated that, if the cost of a parameter estimation error is significantly asymmetrical with respect to error direction, the tendency to minimize error cost creates a systematic deviation of the optimal parameter estimate from its maximum likelihood value. Consequently, optimization of the parameter estimate and optimization of the control action cannot be performed separately from each other under parameter uncertainty combined with asymmetry of estimation error cost, making the certainty equivalence principle inapplicable under those conditions. The hypothesis that not only the action, but also perception itself, is biased by the above deviation of the parameter estimate is supported by ample experimental evidence. The results provide important insights into the cognitive mechanisms of interaction between sensory perception and planning an action under realistic conditions. Implications for understanding related functional mechanisms of optimal control in the CNS are discussed.
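
    The core effect, that asymmetric error cost shifts the optimal estimate away from the maximum-likelihood value, can be shown in a few lines. Under a linear asymmetric cost (a per unit of underestimation, b per unit of overestimation), the cost-minimizing estimate is the a/(a+b) quantile of the belief distribution, not its mean. The cost values below are invented for illustration:

        import numpy as np

        rng = np.random.default_rng(5)
        samples = rng.normal(0.0, 1.0, 200_000)      # belief about the parameter

        def optimal_estimate(samples, cost_under=4.0, cost_over=1.0):
            """Minimizer of expected asymmetric linear error cost: the
            a/(a+b) quantile of the belief distribution."""
            q = cost_under / (cost_under + cost_over)
            return np.quantile(samples, q)

        print(optimal_estimate(samples))  # ~ +0.84, biased away from the ML value 0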

  7. Systematic Error Mitigation for the PIXIE Instrument

    NASA Technical Reports Server (NTRS)

    Kogut, Alan; Fixsen, Dale J.; Nagler, Peter; Tucker, Gregory

    2016-01-01

    The Primordial Inflation Explorer (PIXIE) uses a nulling Fourier transform spectrometer to measure the absolute intensity and linear polarization of the cosmic microwave background and diffuse astrophysical foregrounds. PIXIE will search for the signature of primordial inflation and will characterize distortions from a blackbody spectrum, both to a precision of a few parts per billion. Rigorous control of potential instrumental effects is required to take advantage of the raw sensitivity. PIXIE employs a highly symmetric design using multiple differential nulling to reduce the instrumental signature to negligible levels. We discuss the systematic error budget and mitigation strategies for the PIXIE mission.

  8. Mapping the absolute magnetic field and evaluating the quadratic Zeeman-effect-induced systematic error in an atom interferometer gravimeter

    NASA Astrophysics Data System (ADS)

    Hu, Qing-Qing; Freier, Christian; Leykauf, Bastian; Schkolnik, Vladimir; Yang, Jun; Krutzik, Markus; Peters, Achim

    2017-09-01

    Precisely evaluating the systematic error induced by the quadratic Zeeman effect is important for developing atom interferometer gravimeters aiming at an accuracy in the μGal regime (1 μGal = 10^-8 m/s^2 ≈ 10^-9 g). This paper reports on the experimental investigation of Raman-spectroscopy-based magnetic field measurements and the evaluation of the systematic error in the gravimetric atom interferometer (GAIN) due to the quadratic Zeeman effect. We discuss the dependence of the magnetic field measurement uncertainty on Raman duration and frequency step size, present the magnetic field measurement offset induced by the vector and tensor light shifts, and map the absolute magnetic field inside the interferometer chamber of GAIN with an uncertainty of 0.72 nT and a spatial resolution of 12.8 mm. We evaluate the quadratic Zeeman-effect-induced gravity measurement error in GAIN as 2.04 μGal. The methods shown in this paper are important for precisely mapping the absolute magnetic field in vacuum and for reducing the quadratic Zeeman-effect-induced systematic error in Raman-transition-based precision measurements, such as atom interferometer gravimeters.

  9. Effects of vertical distribution of water vapor and temperature on total column water vapor retrieval error

    NASA Technical Reports Server (NTRS)

    Sun, Jielun

    1993-01-01

    Results are presented of a test of the physically based total column water vapor retrieval algorithm of Wentz (1992) for sensitivity to realistic vertical distributions of temperature and water vapor. The ECMWF monthly averaged temperature and humidity fields are used to simulate the spatial pattern of systematic retrieval error of total column water vapor due to this sensitivity. The estimated systematic error is within 0.1 g/sq cm over about 70 percent of the global ocean area; systematic errors greater than 0.3 g/sq cm are expected to exist only over a few well-defined regions, about 3 percent of the global oceans, assuming that the global mean value is unbiased.

  10. The Thirty Gigahertz Instrument Receiver for the QUIJOTE Experiment: Preliminary Polarization Measurements and Systematic-Error Analysis

    PubMed Central

    Casas, Francisco J.; Ortiz, David; Villa, Enrique; Cano, Juan L.; Cagigas, Jaime; Pérez, Ana R.; Aja, Beatriz; Terán, J. Vicente; de la Fuente, Luisa; Artal, Eduardo; Hoyland, Roger; Génova-Santos, Ricardo

    2015-01-01

    This paper presents preliminary polarization measurements and systematic-error characterization of the Thirty Gigahertz Instrument receiver developed for the QUIJOTE experiment. The instrument has been designed to measure the polarization of Cosmic Microwave Background radiation from the sky, obtaining the Q, U, and I Stokes parameters of the incoming signal simultaneously. Two kinds of linearly polarized input signals have been used as excitations in the polarimeter measurement tests in the laboratory; these show consistent results in terms of the Stokes parameters obtained. A measurement-based systematic-error characterization technique has been used in order to determine the possible sources of instrumental errors and to assist in the polarimeter calibration process. PMID:26251906

  11. A study for systematic errors of the GLA forecast model in tropical regions

    NASA Technical Reports Server (NTRS)

    Chen, Tsing-Chang; Baker, Wayman E.; Pfaendtner, James; Corrigan, Martin

    1988-01-01

    From the sensitivity studies performed with the Goddard Laboratory for Atmospheres (GLA) analysis/forecast system, it was revealed that the forecast errors in the tropics affect the ability to forecast midlatitude weather in some cases. Apparently, the forecast errors occurring in the tropics can propagate to midlatitudes. Therefore, the systematic error analysis of the GLA forecast system becomes a necessary step in improving the model's forecast performance. The major effort of this study is to examine the possible impact of the hydrological-cycle forecast error on dynamical fields in the GLA forecast system.

  12. Determination of the precision error of the pulmonary artery thermodilution catheter using an in vitro continuous flow test rig.

    PubMed

    Yang, Xiao-Xing; Critchley, Lester A; Joynt, Gavin M

    2011-01-01

    Thermodilution cardiac output using a pulmonary artery catheter is the reference method against which all new methods of cardiac output measurement are judged. However, thermodilution lacks precision and has a quoted precision error of ±20%. There is uncertainty about its true precision, and this causes difficulty when validating new cardiac output technology. Our aim in this investigation was to determine the current precision error of thermodilution measurements. A test rig was assembled in which water circulated at different constant rates, with ports for inserting catheters into a flow chamber. Flow rate was measured by an externally placed transonic flowprobe and meter. The meter was calibrated by timed filling of a cylinder. Arrow and Edwards 7Fr thermodilution catheters, connected to a Siemens SC9000 cardiac output monitor, were tested. Thermodilution readings were made by injecting 5 mL of ice-cold water. Precision error was divided into random and systematic components, which were determined separately. Between-readings (random) variability was determined for each catheter by taking sets of 10 readings at different flow rates. The coefficient of variation (CV) was calculated for each set and averaged. Between-catheter-system (systematic) variability was derived by plotting calibration lines for sets of catheters. The slopes were used to estimate the systematic component. The performances of 3 cardiac output monitors were compared: Siemens SC9000, Siemens Sirecust 1261, and Philips MP50. Five Arrow and 5 Edwards catheters were tested using the Siemens SC9000 monitor. Flow rates between 0.7 and 7.0 L/min were studied. The CV (random error) was 5.4% for Arrow and 4.8% for Edwards catheters. The random precision error was ±10.0% (95% confidence limits). The CV (systematic error) was 5.8% and 6.0%, respectively. The systematic precision error was ±11.6%. The total precision error of a single thermodilution reading was ±15.3%, and that of triplicate readings ±13.0%. The precision error increased by 45% when using the Sirecust monitor and by 100% when using the Philips monitor. In vitro testing of pulmonary artery catheters enabled us to measure both the random and systematic error components of thermodilution cardiac output measurement, and thus to calculate the precision error. Using the Siemens monitor, we established a precision error of ±15.3% for single and ±13.0% for triplicate readings, similar to the previous estimate of ±20%. However, this precision error was significantly worsened by using the Sirecust and Philips monitors. Clinicians should recognize that the precision error of thermodilution cardiac output depends on the selection of catheter and monitor model.
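
    The reported totals combine the components as a root sum of squares, with only the random part shrinking when readings are averaged. The sketch below reproduces the ±15.3% and ±13.0% figures from the quoted component CVs (10.0/1.96 ≈ 5.1% random, 11.6/1.96 ≈ 5.9% systematic):

        import numpy as np

        def precision_error(cv_random, cv_systematic, n_readings=1):
            """Total precision error (95% limits, i.e. 1.96 * combined CV);
            averaging n readings shrinks only the random component."""
            cv_r = cv_random / np.sqrt(n_readings)
            return 1.96 * np.hypot(cv_r, cv_systematic)

        print(precision_error(5.1, 5.9, 1))   # ~15.3% for a single reading
        print(precision_error(5.1, 5.9, 3))   # ~12.9%, matching ~13% for triplicates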

  13. Measurement error is often neglected in medical literature: a systematic review.

    PubMed

    Brakenhoff, Timo B; Mitroiu, Marian; Keogh, Ruth H; Moons, Karel G M; Groenwold, Rolf H H; van Smeden, Maarten

    2018-06-01

    In medical research, covariates (e.g., exposure and confounder variables) are often measured with error. While it is well accepted that this introduces bias and imprecision in exposure-outcome relations, it is unclear to what extent such issues are currently considered in research practice. The objective was to study common practices regarding covariate measurement error via a systematic review of general medicine and epidemiology literature. Original research published in 2016 in 12 high impact journals was full-text searched for phrases relating to measurement error. Reporting of measurement error and methods to investigate or correct for it were quantified and characterized. Two hundred and forty-seven (44%) of the 565 original research publications reported on the presence of measurement error. 83% of these 247 did so with respect to the exposure and/or confounder variables. Only 18 publications (7% of 247) used methods to investigate or correct for measurement error. Consequently, it is difficult for readers to judge the robustness of presented results to the existence of measurement error in the majority of publications in high impact journals. Our systematic review highlights the need for increased awareness about the possible impact of covariate measurement error. Additionally, guidance on the use of measurement error correction methods is necessary. Copyright © 2018 Elsevier Inc. All rights reserved.
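
    Why neglecting covariate error matters can be seen from the classical attenuation effect: random error in an exposure shrinks its regression slope by the reliability ratio λ, and a regression-calibration-style fix divides the naive slope by λ. A simulated sketch (in practice λ would come from a reliability substudy, not from the unobservable true exposure used here):

        import numpy as np

        rng = np.random.default_rng(6)
        n, beta = 100_000, 0.5
        x = rng.normal(0.0, 1.0, n)                 # true exposure
        y = beta * x + rng.normal(0.0, 1.0, n)      # outcome
        x_obs = x + rng.normal(0.0, 1.0, n)         # exposure measured with error

        slope_naive = np.cov(x_obs, y)[0, 1] / np.var(x_obs)
        lam = np.var(x) / np.var(x_obs)             # reliability ratio (~0.5 here)
        print(slope_naive, slope_naive / lam, beta) # attenuated, corrected, truth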

  14. Wavefront-aberration measurement and systematic-error analysis of a high numerical-aperture objective

    NASA Astrophysics Data System (ADS)

    Liu, Zhixiang; Xing, Tingwen; Jiang, Yadong; Lv, Baobin

    2018-02-01

    A two-dimensional (2-D) shearing interferometer based on an amplitude chessboard grating was designed to measure the wavefront aberration of a high numerical-aperture (NA) objective. Chessboard gratings offer better diffraction efficiencies and fewer disturbing diffraction orders than traditional cross gratings. The wavefront aberration of the tested objective was retrieved from the shearing interferogram using the Fourier transform and differential Zernike polynomial-fitting methods. Grating manufacturing errors, including the duty-cycle and pattern-deviation errors, were analyzed with the Fourier transform method. Then, according to the relation between the spherical pupil and planar detector coordinates, the influence of the distortion of the pupil coordinates was simulated. Finally, the systematic error attributable to grating alignment errors was deduced through the geometrical ray-tracing method. Experimental results indicate that the measuring repeatability (3σ) of the wavefront aberration of an objective with NA 0.4 was 3.4 mλ. The systematic-error results were consistent with previous analyses. Thus, the correct wavefront aberration can be obtained after calibration.

  15. The Systematics of Strong Lens Modeling Quantified: The Effects of Constraint Selection and Redshift Information on Magnification, Mass, and Multiple Image Predictability

    NASA Astrophysics Data System (ADS)

    Johnson, Traci L.; Sharon, Keren

    2016-11-01

    Until now, systematic errors in strong gravitational lens modeling have been acknowledged but have never been fully quantified. Here, we launch an investigation into the systematics induced by constraint selection. We model the simulated cluster Ares 362 times using random selections of image systems with and without spectroscopic redshifts and quantify the systematics using several diagnostics: image predictability, accuracy of model-predicted redshifts, enclosed mass, and magnification. We find that for models with >15 image systems, the image plane rms does not decrease significantly when more systems are added; however, the rms values quoted in the literature may be misleading as to the ability of a model to predict new multiple images. The mass is well constrained near the Einstein radius in all cases, and systematic error drops to <2% for models using >10 image systems. Magnification errors are smallest along the straight portions of the critical curve, and the value of the magnification is systematically lower near curved portions. For >15 systems, the systematic error on magnification is ∼2%. We report no trend in magnification error with the fraction of spectroscopic image systems when selecting constraints at random; however, when using the same selection of constraints, increasing this fraction up to ∼0.5 will increase model accuracy. The results suggest that the selection of constraints, rather than quantity alone, determines the accuracy of the magnification. We note that spectroscopic follow-up of at least a few image systems is crucial because models without any spectroscopic redshifts are inaccurate across all of our diagnostics.

  16. Towards a systematic assessment of errors in diffusion Monte Carlo calculations of semiconductors: Case study of zinc selenide and zinc oxide

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yu, Jaehyung; Wagner, Lucas K.; Ertekin, Elif, E-mail: ertekin@illinois.edu

    2015-12-14

    The fixed node diffusion Monte Carlo (DMC) method has attracted interest in recent years as a way to calculate properties of solid materials with high accuracy. However, the framework for the calculation of properties such as total energies, atomization energies, and excited state energies is not yet fully established. Several outstanding questions remain as to the effect of pseudopotentials, the magnitude of the fixed node error, and the size of supercell finite size effects. Here, we consider in detail the semiconductors ZnSe and ZnO and carry out systematic studies to assess the magnitude of the energy differences arising from controlled and uncontrolled approximations in DMC. The former include time step errors and supercell finite size effects for ground and optically excited states, and the latter include pseudopotentials, the pseudopotential localization approximation, and the fixed node approximation. We find that for these compounds, the errors can be controlled to good precision using modern computational resources and that quantum Monte Carlo calculations using Dirac-Fock pseudopotentials can offer good estimates of both cohesive energy and the gap of these systems. We do however observe differences in calculated optical gaps that arise when different pseudopotentials are used.

  17. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu Ke; Li Yanqiu; Wang Hai

    Characterization of the measurement accuracy of the phase-shifting point diffraction interferometer (PS/PDI) is usually performed by a two-pinhole null test. In this procedure, the geometrical coma and detector tilt astigmatism systematic errors are almost one to two orders of magnitude larger than the desired accuracy of the PS/PDI. These errors must be accurately removed from the null test result to achieve high accuracy. Published calibration methods, which can remove the geometrical coma error successfully, have some limitations in calibrating the astigmatism error. In this paper, we propose a method to simultaneously calibrate the geometrical coma and detector tilt astigmatism errors in the PS/PDI null test. Based on the measurement results obtained from two pinhole pairs in orthogonal directions, the method utilizes the orthogonal and rotational symmetry properties of Zernike polynomials over the unit circle to calculate the systematic errors introduced in the null test of the PS/PDI. An experiment using a PS/PDI operated at visible light was performed to verify the method. The results show that the method is effective in isolating the systematic errors of the PS/PDI, and the measurement accuracy of the calibrated PS/PDI is 0.0088λ rms (λ = 632.8 nm).

  18. Correcting systematic errors in high-sensitivity deuteron polarization measurements

    NASA Astrophysics Data System (ADS)

    Brantjes, N. P. M.; Dzordzhadze, V.; Gebel, R.; Gonnella, F.; Gray, F. E.; van der Hoek, D. J.; Imig, A.; Kruithof, W. L.; Lazarus, D. M.; Lehrach, A.; Lorentz, B.; Messi, R.; Moricciani, D.; Morse, W. M.; Noid, G. A.; Onderwater, C. J. G.; Özben, C. S.; Prasuhn, D.; Levi Sandri, P.; Semertzidis, Y. K.; da Silva e Silva, M.; Stephenson, E. J.; Stockhorst, H.; Venanzoni, G.; Versolato, O. O.

    2012-02-01

    This paper reports deuteron vector and tensor beam polarization measurements taken to investigate the systematic variations due to geometric beam misalignments and high data rates. The experiments used the In-Beam Polarimeter at the KVI-Groningen and the EDDA detector at the Cooler Synchrotron COSY at Jülich. By measuring with very high statistical precision, the contributions that are second-order in the systematic errors become apparent. By calibrating the sensitivity of the polarimeter to such errors, it becomes possible to obtain information from the raw count rate values on the size of the errors and to use this information to correct the polarization measurements. During the experiment, it was possible to demonstrate that corrections were satisfactory at the level of 10^-5 for deliberately large errors. This may facilitate the real-time observation of vector polarization changes smaller than 10^-6 in a search for an electric dipole moment using a storage ring.

  19. Dependence of Dynamic Modeling Accuracy on Sensor Measurements, Mass Properties, and Aircraft Geometry

    NASA Technical Reports Server (NTRS)

    Grauer, Jared A.; Morelli, Eugene A.

    2013-01-01

    The NASA Generic Transport Model (GTM) nonlinear simulation was used to investigate the effects of errors in sensor measurements, mass properties, and aircraft geometry on the accuracy of identified parameters in mathematical models describing the flight dynamics and determined from flight data. Measurements from a typical flight condition and system identification maneuver were systematically and progressively deteriorated by introducing noise, resolution errors, and bias errors. The data were then used to estimate nondimensional stability and control derivatives within a Monte Carlo simulation. Based on these results, recommendations are provided for maximum allowable errors in sensor measurements, mass properties, and aircraft geometry to achieve desired levels of dynamic modeling accuracy. Results using additional flight conditions and parameter estimation methods, as well as a nonlinear flight simulation of the General Dynamics F-16 aircraft, were compared with these recommendations.

  20. Analyse des erreurs et grammaire generative: La syntaxe de l'interrogation en francais (Error Analysis and Generative Grammar: The Syntax of Interrogation in French).

    ERIC Educational Resources Information Center

    Py, Bernard

    A progress report is presented of a study which applies a system of generative grammar to error analysis. The objective of the study was to reconstruct the grammar of students' interlanguage, using a systematic analysis of errors. (Interlanguage refers to the linguistic competence of a student who possesses a relatively systematic body of rules,…

  1. Selecting Statistical Quality Control Procedures for Limiting the Impact of Increases in Analytical Random Error on Patient Safety.

    PubMed

    Yago, Martín

    2017-05-01

    QC planning based on risk management concepts can reduce the probability of harming patients due to an undetected out-of-control error condition. It does this by selecting appropriate QC procedures to decrease the number of erroneous results reported. The selection can easily be made by using published nomograms for simple QC rules when the out-of-control condition results in increased systematic error. However, increases in random error also occur frequently and are difficult to detect, which can result in erroneously reported patient results. A statistical model was used to construct charts for the 1ks and X̄/χ² rules. The charts relate the increase in the number of unacceptable patient results reported due to an increase in random error to the capability of the measurement procedure. They thus allow for QC planning based on the risk of patient harm due to the reporting of erroneous results. 1ks rules are simple, all-around rules. Their ability to deal with increases in within-run imprecision is minimally affected by the possible presence of significant, stable, between-run imprecision. X̄/χ² rules perform better when the number of controls analyzed during each QC event is increased to improve QC performance. Using nomograms simplifies the selection of statistical QC procedures to limit the number of erroneous patient results reported due to an increase in analytical random error. The selection largely depends on the presence or absence of stable between-run imprecision. © 2017 American Association for Clinical Chemistry.
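
    The difficulty of detecting increased random error with a simple rule is easy to quantify. The sketch below computes the probability that a single control observation violates a 1ks rule when the true within-run SD has grown by a factor f; the rule limits and inflation factors are illustrative values, not the paper's model:

        from math import erf, sqrt

        def phi(z):
            """Standard normal cumulative distribution function."""
            return 0.5 * (1.0 + erf(z / sqrt(2.0)))

        def p_reject_1ks(k, f):
            """P(a control result exceeds +/- k*s) under a 1ks rule when the
            random error has grown by factor f (in-control: f = 1)."""
            return 2.0 * (1.0 - phi(k / f))

        for f in (1.0, 1.5, 2.0):   # even a doubled SD is caught only ~13-21% per event
            print(f, p_reject_1ks(2.5, f), p_reject_1ks(3.0, f))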

  2. Systematic review of ERP and fMRI studies investigating inhibitory control and error processing in people with substance dependence and behavioural addictions

    PubMed Central

    Luijten, Maartje; Machielsen, Marise W.J.; Veltman, Dick J.; Hester, Robert; de Haan, Lieuwe; Franken, Ingmar H.A.

    2014-01-01

    Background Several current theories emphasize the role of cognitive control in addiction. The present review evaluates neural deficits in the domains of inhibitory control and error processing in individuals with substance dependence and in those showing excessive addiction-like behaviours. The combined evaluation of event-related potential (ERP) and functional magnetic resonance imaging (fMRI) findings in the present review offers unique information on neural deficits in addicted individuals. Methods We selected 19 ERP and 22 fMRI studies using stop-signal, go/no-go or Flanker paradigms based on a search of PubMed and Embase. Results The most consistent findings in addicted individuals relative to healthy controls were lower N2, error-related negativity and error positivity amplitudes as well as hypoactivation in the anterior cingulate cortex (ACC), inferior frontal gyrus and dorsolateral prefrontal cortex. These neural deficits, however, were not always associated with impaired task performance. With regard to behavioural addictions, some evidence has been found for similar neural deficits; however, studies are scarce and results are not yet conclusive. Differences among the major classes of substances of abuse were identified and involve stronger neural responses to errors in individuals with alcohol dependence versus weaker neural responses to errors in other substance-dependent populations. Limitations Task design and analysis techniques vary across studies, thereby reducing comparability among studies and the potential of clinical use of these measures. Conclusion Current addiction theories were supported by identifying consistent abnormalities in prefrontal brain function in individuals with addiction. An integrative model is proposed, suggesting that neural deficits in the dorsal ACC may constitute a hallmark neurocognitive deficit underlying addictive behaviours, such as loss of control. PMID:24359877

  3. Thirty Years of Improving the NCEP Global Forecast System

    NASA Astrophysics Data System (ADS)

    White, G. H.; Manikin, G.; Yang, F.

    2014-12-01

    Current eight day forecasts by the NCEP Global Forecast System are as accurate as five day forecasts 30 years ago. This revolution in weather forecasting reflects increases in computer power, improvements in the assimilation of observations, especially satellite data, improvements in model physics, improvements in observations and international cooperation and competition. One important component has been and is the diagnosis, evaluation and reduction of systematic errors. The effect of proposed improvements in the GFS on systematic errors is one component of the thorough testing of such improvements by the Global Climate and Weather Modeling Branch. Examples of reductions in systematic errors in zonal mean temperatures and winds and other fields will be presented. One challenge in evaluating systematic errors is uncertainty in what reality is. Model initial states can be regarded as the best overall depiction of the atmosphere, but can be misleading in areas of few observations or for fields not well observed such as humidity or precipitation over the oceans. Verification of model physics is particularly difficult. The Environmental Modeling Center emphasizes the evaluation of systematic biases against observations. Recently EMC has placed greater emphasis on synoptic evaluation and on precipitation, 2-meter temperatures and dew points and 10 meter winds. A weekly EMC map discussion reviews the performance of many models over the United States and has helped diagnose and alleviate significant systematic errors in the GFS, including a near surface summertime evening cold wet bias over the eastern US and a multi-week period when the GFS persistently developed bogus tropical storms off Central America. The GFS exhibits a wet bias for light rain and a dry bias for moderate to heavy rain over the continental United States. Significant changes to the GFS are scheduled to be implemented in the fall of 2014. These include higher resolution, improved physics and improvements to the assimilation. These changes significantly improve the tropospheric flow and reduce a tropical upper tropospheric warm bias. One important error remaining is the failure of the GFS to maintain deep convection over Indonesia and in the tropical west Pacific. This and other current systematic errors will be presented.

  4. Prescribing Errors Involving Medication Dosage Forms

    PubMed Central

    Lesar, Timothy S

    2002-01-01

    CONTEXT Prescribing errors involving medication dose formulations have been reported to occur frequently in hospitals. No systematic evaluations of the characteristics of errors related to medication dosage formulation have been performed. OBJECTIVE To quantify the characteristics, frequency, and potential adverse patient effects of prescribing errors involving medication dosage forms. DESIGN Evaluation of all detected medication prescribing errors involving or related to medication dosage forms in a 631-bed tertiary care teaching hospital. MAIN OUTCOME MEASURES Type, frequency, and potential for adverse effects of prescribing errors involving or related to medication dosage forms. RESULTS A total of 1,115 clinically significant prescribing errors involving medication dosage forms were detected during the 60-month study period. The annual number of detected errors increased throughout the study period. Detailed analysis of the 402 errors detected during the last 16 months of the study demonstrated the most common errors to be: failure to specify controlled release formulation (total of 280 cases; 69.7%) both when prescribing using the brand name (148 cases; 36.8%) and when prescribing using the generic name (132 cases; 32.8%); and prescribing controlled delivery formulations to be administered per tube (48 cases; 11.9%). The potential for adverse patient outcome was rated as potentially "fatal or severe" in 3 cases (0.7%), and "serious" in 49 cases (12.2%). Errors most commonly involved cardiovascular agents (208 cases; 51.7%). CONCLUSIONS Hospitalized patients are at risk for adverse outcomes due to prescribing errors related to inappropriate use of medication dosage forms. This information should be considered in the development of strategies to prevent adverse patient outcomes resulting from such errors. PMID:12213138

  5. Within-Tunnel Variations in Pressure Data for Three Transonic Wind Tunnels

    NASA Technical Reports Server (NTRS)

    DeLoach, Richard

    2014-01-01

    This paper compares the results of pressure measurements made on the same test article with the same test matrix in three transonic wind tunnels. A comparison is presented of the unexplained variance associated with polar replicates acquired in each tunnel. The impact of a significant component of systematic (not random) unexplained variance is reviewed, and the results of analyses of variance are presented to assess the degree of significant systematic error in these representative wind tunnel tests. Total uncertainty estimates are reported for 140 samples of pressure data, quantifying the effects of within-polar random errors and between-polar systematic bias errors.

  6. The Origin of Systematic Errors in the GCM Simulation of ITCZ Precipitation

    NASA Technical Reports Server (NTRS)

    Chao, Winston C.; Suarez, M. J.; Bacmeister, J. T.; Chen, B.; Takacs, L. L.

    2006-01-01

    Previous GCM studies have found that the systematic errors in the GCM simulation of the seasonal mean ITCZ intensity and location could be substantially corrected by adding a suitable amount of rain re-evaporation or cumulus momentum transport. However, the reasons for these systematic errors and for the success of these remedies have remained a puzzle. In this work, the knowledge gained from previous studies of the ITCZ in an aqua-planet model with zonally uniform SST is applied to solve this puzzle. The solution is supported by further aqua-planet and full-model experiments using the latest version of the Goddard Earth Observing System GCM.

  7. Quantifying Errors in TRMM-Based Multi-Sensor QPE Products Over Land in Preparation for GPM

    NASA Technical Reports Server (NTRS)

    Peters-Lidard, Christa D.; Tian, Yudong

    2011-01-01

    Determining the uncertainties in satellite-based multi-sensor quantitative precipitation estimates (QPE) over land is of fundamental importance to both data producers and hydroclimatological applications. Evaluating TRMM-era products also lays the groundwork and sets the direction for algorithm and applications development for future missions, including GPM. QPE uncertainties result mostly from the interplay of systematic errors and random errors. In this work, we synthesize our recent results quantifying the error characteristics of satellite-based precipitation estimates. Both systematic errors and total uncertainties have been analyzed for six different TRMM-era precipitation products (3B42, 3B42RT, CMORPH, PERSIANN, NRL and GSMap). For systematic errors, we devised an error decomposition scheme to separate errors in precipitation estimates into three independent components: hit biases, missed precipitation and false precipitation. This decomposition scheme reveals hydroclimatologically relevant error features and provides a better link to the error sources than conventional analysis, because in the latter these error components tend to cancel one another when aggregated or averaged in space or time. For the random errors, we calculated the measurement spread from the ensemble of these six quasi-independent products, and thus produced a global map of measurement uncertainties. The map yields a global view of the error characteristics and their regional and seasonal variations, reveals many undocumented error features over areas with no validation data available, and provides better guidance for global assimilation of satellite-based precipitation data. Insights gained from these results, and how they could help with GPM, will be highlighted.
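
    A minimal version of the hit/miss/false decomposition reads as follows; the synthetic fields and the detection threshold are assumptions, and the three components sum to the total bias only up to below-threshold amounts:

        import numpy as np

        def decompose_error(sat, ref, thresh=0.1):
            """Split total bias (sat - ref) into hit bias, missed precipitation
            and false precipitation, in the spirit of the decomposition above."""
            hit = (sat >= thresh) & (ref >= thresh)
            miss = (sat < thresh) & (ref >= thresh)
            false = (sat >= thresh) & (ref < thresh)
            hit_bias = np.where(hit, sat - ref, 0.0).sum()
            missed = -np.where(miss, ref, 0.0).sum()   # negative contribution
            falsep = np.where(false, sat, 0.0).sum()
            total = sat.sum() - ref.sum()
            return hit_bias, missed, falsep, total

        rng = np.random.default_rng(7)
        ref = rng.gamma(0.5, 2.0, 10_000) * (rng.random(10_000) < 0.3)
        sat = ref * rng.lognormal(0.0, 0.3, 10_000) * (rng.random(10_000) < 0.9)
        print(decompose_error(sat, ref))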

  8. THE SYSTEMATICS OF STRONG LENS MODELING QUANTIFIED: THE EFFECTS OF CONSTRAINT SELECTION AND REDSHIFT INFORMATION ON MAGNIFICATION, MASS, AND MULTIPLE IMAGE PREDICTABILITY

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Johnson, Traci L.; Sharon, Keren, E-mail: tljohn@umich.edu

    Until now, systematic errors in strong gravitational lens modeling have been acknowledged but have never been fully quantified. Here, we launch an investigation into the systematics induced by constraint selection. We model the simulated cluster Ares 362 times using random selections of image systems with and without spectroscopic redshifts and quantify the systematics using several diagnostics: image predictability, accuracy of model-predicted redshifts, enclosed mass, and magnification. We find that for models with >15 image systems, the image plane rms does not decrease significantly when more systems are added; however, the rms values quoted in the literature may be misleading as to the ability of a model to predict new multiple images. The mass is well constrained near the Einstein radius in all cases, and systematic error drops to <2% for models using >10 image systems. Magnification errors are smallest along the straight portions of the critical curve, and the value of the magnification is systematically lower near curved portions. For >15 systems, the systematic error on magnification is ∼2%. We report no trend in magnification error with the fraction of spectroscopic image systems when selecting constraints at random; however, when using the same selection of constraints, increasing this fraction up to ∼0.5 will increase model accuracy. The results suggest that the selection of constraints, rather than quantity alone, determines the accuracy of the magnification. We note that spectroscopic follow-up of at least a few image systems is crucial because models without any spectroscopic redshifts are inaccurate across all of our diagnostics.

  9. Reliable estimation of orbit errors in spaceborne SAR interferometry. The network approach

    NASA Astrophysics Data System (ADS)

    Bähr, Hermann; Hanssen, Ramon F.

    2012-12-01

    An approach is presented to improve orbital state vectors using orbit error estimates derived from residual phase patterns in synthetic aperture radar interferograms. For individual interferograms, an error representation by two parameters is motivated: the baseline error in cross-range and the rate of change of the baseline error in range. For their estimation, two alternatives are proposed: a least squares approach that requires prior unwrapping, and a less reliable grid-search method handling the wrapped phase. In both cases, reliability is enhanced by mutual control of error estimates in an overdetermined network of linearly dependent interferometric combinations of images. Thus, systematic biases, e.g., due to unwrapping errors, can be detected and iteratively eliminated. Regularising the solution by a minimum-norm condition results in quasi-absolute orbit errors that refer to particular images. For the 31 images of a sample ENVISAT dataset, orbit corrections with a mutual consistency at the millimetre level have been inferred from 163 interferograms. The method is distinguished by its reliability and by rigorous geometric modelling of the orbital error signal, but it does not consider interfering large-scale deformation effects. However, a separation may be feasible in combined processing with persistent scatterer approaches or by temporal filtering of the estimates.

  10. A correction method for systematic error in (1)H-NMR time-course data validated through stochastic cell culture simulation.

    PubMed

    Sokolenko, Stanislav; Aucoin, Marc G

    2015-09-04

    The growing ubiquity of metabolomic techniques has facilitated high frequency time-course data collection for an increasing number of applications. While the concentration trends of individual metabolites can be modeled with common curve fitting techniques, a more accurate representation of the data needs to consider effects that act on more than one metabolite in a given sample. To this end, we present a simple algorithm that uses nonparametric smoothing carried out on all observed metabolites at once to identify and correct systematic error from dilution effects. In addition, we develop a simulation of metabolite concentration time-course trends to supplement available data and explore algorithm performance. Although we focus on nuclear magnetic resonance (NMR) analysis in the context of cell culture, a number of possible extensions are discussed. Realistic metabolic data was successfully simulated using a 4-step process. Starting with a set of metabolite concentration time-courses from a metabolomic experiment, each time-course was classified as either increasing, decreasing, concave, or approximately constant. Trend shapes were simulated from generic functions corresponding to each classification. The resulting shapes were then scaled to simulated compound concentrations. Finally, the scaled trends were perturbed using a combination of random and systematic errors. To detect systematic errors, a nonparametric fit was applied to each trend and percent deviations calculated at every timepoint. Systematic errors could be identified at time-points where the median percent deviation exceeded a threshold value, determined by the choice of smoothing model and the number of observed trends. Regardless of model, increasing the number of observations over a time-course resulted in more accurate error estimates, although the improvement was not particularly large between 10 and 20 samples per trend. The presented algorithm was able to identify systematic errors as small as 2.5 % under a wide range of conditions. Both the simulation framework and error correction method represent examples of time-course analysis that can be applied to further developments in (1)H-NMR methodology and the more general application of quantitative metabolomics.
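
    A compressed version of the described detection-and-correction loop, with a polynomial smoother standing in for the nonparametric fit and all trends, noise levels and thresholds invented for illustration:

        import numpy as np

        def correct_dilution(t, conc, threshold=2.5):
            """conc: (n_metabolites, n_timepoints). Smooth each trend, compute
            percent deviations per timepoint, and divide out the median
            deviation wherever it exceeds the threshold (in percent); an
            approximate correction in the spirit of the algorithm above."""
            dev = np.empty_like(conc)
            for i, y in enumerate(conc):
                fit = np.polynomial.Polynomial.fit(t, y, 3)(t)
                dev[i] = 100.0 * (y - fit) / fit
            med = np.median(dev, axis=0)             # shared (systematic) deviation
            factor = np.where(np.abs(med) > threshold, 1.0 + med / 100.0, 1.0)
            return conc / factor, med

        rng = np.random.default_rng(8)
        t = np.linspace(0.0, 48.0, 15)
        conc = np.vstack([a * np.exp(-t / tau) + 1.0 for a, tau in
                          [(5.0, 20.0), (3.0, 35.0), (8.0, 15.0)]])
        conc *= 1.0 + rng.normal(0.0, 0.01, conc.shape)
        conc[:, 7] *= 1.06                           # 6% dilution error, one sample
        fixed, med = correct_dilution(t, conc)
        print(np.round(med, 1))                      # spike at the corrupted sample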

  11. SU-E-J-87: Building Deformation Error Histogram and Quality Assurance of Deformable Image Registration.

    PubMed

    Park, S B; Kim, H; Yao, M; Ellis, R; Machtay, M; Sohn, J W

    2012-06-01

    To quantify the systematic error of a Deformable Image Registration (DIR) system and establish a Quality Assurance (QA) procedure. To address the shortfall of the landmark approach, which is only available at significant visible feature points, we adopted a Deformation Vector Map (DVM) comparison approach. We used two CT image sets (R and T image sets) taken for the same patient at different times and generated a DVM, which includes the DIR systematic error. The DVM was calculated using fine-tuned B-Spline DIR and an L-BFGS optimizer. Utilizing this DVM, we generated an R' image set to eliminate the systematic error in the DVM. Thus, we have a truth data set (the R' and T image sets) and the truth DVM. To test a DIR system, we feed the R' and T image sets to that system and compare the test DVM to the truth DVM. If there is no systematic error, they should be identical. We built a Deformation Error Histogram (DEH) for quantitative analysis. The test registration was performed with an in-house B-Spline DIR system using a stochastic gradient descent optimizer. Our example data set was generated from a head and neck patient case. We also tested CT to CBCT deformable registration. We found that skin regions which interface with the air have relatively larger errors, as do mobile joints such as the shoulders. Average errors for ROIs were as follows: CTV: 0.4 mm, brain stem: 1.4 mm, shoulders: 1.6 mm, and normal tissues: 0.7 mm. We succeeded in building the DEH approach to quantify DVM uncertainty. Our data sets are available for testing other systems on our web page. Utilizing the DEH, users can decide how much systematic error they will accept. The DEH and our data can serve as a tool for an AAPM task group composing a DIR system QA guideline. This project is partially supported by the Agency for Healthcare Research and Quality (AHRQ) grant 1R18HS017424-01A2. © 2012 American Association of Physicists in Medicine.
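
    The DEH construction itself can be illustrated in a few lines; the array shapes, noise levels, and bin edges below are invented for illustration.

```python
import numpy as np

# Minimal sketch of a Deformation Error Histogram (DEH): compare a test
# deformation vector map against the truth DVM and histogram the magnitudes
# of the per-voxel vector differences.
truth_dvm = np.random.default_rng(2).normal(0, 1.0, (64, 64, 64, 3))  # mm
test_dvm = truth_dvm + np.random.default_rng(3).normal(0, 0.4, truth_dvm.shape)

error_mag = np.linalg.norm(test_dvm - truth_dvm, axis=-1)  # per-voxel error
hist, edges = np.histogram(error_mag, bins=np.arange(0, 5.25, 0.25))

# A user can then decide how much systematic error to accept, e.g. the
# fraction of voxels (or of an ROI mask) within 1 mm:
print("fraction of voxels within 1 mm:", np.mean(error_mag < 1.0))
```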

  12. The Effect of Systematic Error in Forced Oscillation Testing

    NASA Technical Reports Server (NTRS)

    Williams, Brianne Y.; Landman, Drew; Flory, Isaac L., IV; Murphy, Patrick C.

    2012-01-01

    One of the fundamental problems in flight dynamics is the formulation of the aerodynamic forces and moments acting on an aircraft in arbitrary motion. Classically, conventional stability derivatives are used to represent aerodynamic loads in the aircraft equations of motion. However, for modern aircraft with highly nonlinear and unsteady aerodynamic characteristics undergoing maneuvers at high angles of attack and/or high angular rates, the conventional stability derivative model is no longer valid. Attempts to formulate aerodynamic model equations with unsteady terms are based on several different wind tunnel techniques: for example, captive, wind tunnel single degree-of-freedom, and wind tunnel free-flying techniques. One of the most common techniques is forced oscillation testing. However, the forced oscillation testing method does not address the systematic and systematic correlation errors from the test apparatus that cause inconsistencies in the measured oscillatory stability derivatives. The primary objective of this study is to identify the possible sources and magnitude of systematic error in representative dynamic test apparatuses. Sensitivities of the longitudinal stability derivatives to systematic errors are computed, using a high-fidelity simulation of a forced oscillation test rig, and assessed using both Design of Experiments and Monte Carlo methods.

  13. Global Warming Estimation from MSU

    NASA Technical Reports Server (NTRS)

    Prabhakara, C.; Iacovazzi, Robert; Yoo, Jung-Moon

    1998-01-01

    Microwave Sounding Unit (MSU) radiometer observations in Ch 2 (53.74 GHz) from sequential, sun-synchronous, polar-orbiting NOAA satellites contain small systematic errors. Some of these errors are time-dependent and some are time-independent. Small errors in the Ch 2 data of successive satellites arise from calibration differences. Also, successive NOAA satellites tend to have different Local Equatorial Crossing Times (LECT), which introduce differences in the Ch 2 data due to the diurnal cycle. These two sources of systematic error are largely time-independent. However, because of atmospheric drag, there can be a drift in the LECT of a given satellite, which introduces time-dependent systematic errors. One of these errors is due to the progressive change in the diurnal cycle, and the other is due to associated changes in instrument heating by the sun. In order to infer the global temperature trend from these MSU data, we have eliminated explicitly the time-independent systematic errors. The two time-dependent errors cannot be assessed from each satellite separately; for this reason, their cumulative effect on the global temperature trend is evaluated implicitly. Christy et al. (1998) (CSL), based on their method of analysis of the MSU Ch 2 data, infer a global temperature cooling trend (-0.046 K per decade) from 1979 to 1997, although their near-nadir measurements yield a near-zero trend (0.003 K/decade). Utilising an independent method of analysis, we infer that global temperature warmed by 0.12 +/- 0.06 C per decade from the observations of MSU Ch 2 during the period 1980 to 1997.

  14. Archie's law - a reappraisal

    NASA Astrophysics Data System (ADS)

    Glover, Paul W. J.

    2016-07-01

    When scientists apply Archie's first law they often include an extra parameter a, which was introduced about 10 years after the equation's first publication by Winsauer et al. (1952), and which is sometimes called the "tortuosity" or "lithology" parameter. This parameter is not, however, theoretically justified. Paradoxically, the Winsauer et al. (1952) form of Archie's law often performs better than the original, more theoretically correct version. The difference in the cementation exponent calculated from these two forms of Archie's law is important, and can lead to a misestimation of reserves by at least 20 % for typical reservoir parameter values. We have examined the apparent paradox, and conclude that while the theoretical form of the law is correct, the data that we have been analysing with Archie's law have been in error. There are at least three types of systematic error that are present in most measurements: (i) a porosity error, (ii) a pore fluid salinity error, and (iii) a temperature error. Each of these systematic errors is sufficient to ensure that a non-unity value of the parameter a is required in order to fit the electrical data well. Fortunately, the inclusion of this parameter in the fit has compensated for the presence of the systematic errors in the electrical and porosity data, leading to a value of cementation exponent that is correct. The exceptions are those cementation exponents that have been calculated for individual core plugs. We make a number of recommendations for reducing the systematic errors that contribute to the problem and suggest that the value of the parameter a may now be used as an indication of data quality.
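
    The compensation effect described above is easy to reproduce numerically: data generated from the theoretically correct law but analysed with a small systematic porosity error are fitted best with a non-unity a. A toy sketch, with all values assumed for illustration:

```python
import numpy as np

# Generate formation factors from the theoretical Archie law F = phi**(-m)
# (i.e. a = 1), then "measure" porosity with a small systematic offset and
# fit log F = log a - m log phi_meas. The fit recovers a != 1 and a biased m.
m_true = 2.0
phi = np.linspace(0.08, 0.30, 40)   # true porosity
F = phi ** (-m_true)                # formation factor, a = 1 by construction

phi_meas = phi + 0.01               # systematic porosity error

slope, intercept = np.polyfit(np.log(phi_meas), np.log(F), 1)
m_fit, a_fit = -slope, np.exp(intercept)
print(f"fitted m = {m_fit:.3f}, fitted a = {a_fit:.3f} (true m = 2, a = 1)")
```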

  15. [Errors in Peruvian medical journals references].

    PubMed

    Huamaní, Charles; Pacheco-Romero, José

    2009-01-01

    References are fundamental in our studies; an adequate selection is as important as an adequate description. To determine the number of errors in a sample of references found in Peruvian medical journals. We reviewed 515 references from scientific papers, selected by systematic randomized sampling, and corroborated the reference information against the original document or its citation in PubMed, LILACS or SciELO-Peru. We found errors in 47.6% (245) of the references, identifying 372 errors in total; the most frequent were errors in presentation style (120), authorship (100) and title (100), mainly due to spelling mistakes (91). The percentage of reference errors was high, varied and multiple. We suggest systematic revision of references in the editorial process, as well as extending the discussion of this theme. Keywords: references, periodicals, research, bibliometrics.

  16. Accuracy Analysis on Large Blocks of High Resolution Images

    NASA Technical Reports Server (NTRS)

    Passini, Richardo M.

    2007-01-01

    Although high-frequency attitude effects are removed at the time of basic image generation, low-frequency attitude (yaw) effects are still present in the form of affinity/angular affinity. They are effectively removed by additional parameters. Bundle block adjustment based on properly weighted ephemeris/attitude quaternions (BBABEQ) is not enough to remove the systematic effects. Moreover, due to the narrow FOV of HRSI, position and attitude are highly correlated, making it almost impossible to separate and remove their systematic effects without extending the geometric model (self-calibration). The systematic effects become evident in the increase of accuracy (in terms of RMSE at GCPs) for looser and more relaxed ground control, at the expense of large and strong block deformation with large residuals at check points. Systematic errors are then freely distributed and their effects propagated all over the block.

  17. Component Analysis of Errors on PERSIANN Precipitation Estimates over Urmia Lake Basin, IRAN

    NASA Astrophysics Data System (ADS)

    Ghajarnia, N.; Daneshkar Arasteh, P.; Liaghat, A. M.; Araghinejad, S.

    2016-12-01

    In this study, the PERSIANN daily dataset is evaluated from 2000 to 2011 in 69 pixels over the Urmia Lake basin in the northwest of Iran. Different analytical approaches and indexes are used to examine PERSIANN precision in the detection and estimation of rainfall rate. The residuals are decomposed into Hit, Miss and FA estimation biases, while a continuous decomposition of systematic and random error components is also analyzed seasonally and categorically. A new interpretation of estimation accuracy, named "reliability of PERSIANN estimations", is introduced, while the changing manners of existing categorical/statistical measures and error components are also seasonally analyzed over different rainfall rate categories. This study yields new insights into the nature of PERSIANN errors over the Urmia Lake basin, a semi-arid region in the Middle East, including the following: - The analyzed contingency table indexes indicate better detection precision during spring and fall. - A relatively constant level of error is generally observed among different categories. The range of precipitation estimates at different rainfall rate categories is nearly invariant, a sign of the existence of systematic error. - A low level of reliability is observed in PERSIANN estimations at different categories, mostly associated with high levels of FA error. However, it is observed that as the rate of precipitation increases, the ability and precision of PERSIANN in rainfall detection also increase. - The systematic and random error decomposition in this area shows that PERSIANN has more difficulty in modeling the system and pattern of rainfall than bias due to rainfall uncertainties. The level of systematic error also increases considerably in heavier rainfalls. It is also important to note that PERSIANN error characteristics vary by season, due to the conditions and rainfall patterns of each season, which shows the necessity of a seasonally different approach for the calibration of this product. Overall, we believe that the different error component analyses performed in this study can substantially help further local studies on the post-calibration and bias reduction of PERSIANN estimations.
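
    The abstract does not spell out its exact systematic/random decomposition; a common regression-based variant used in such satellite precipitation evaluations (after Willmott, 1981) is sketched below, with synthetic data standing in for PERSIANN and gauge values.

```python
import numpy as np

# Regression-based split of MSE into systematic and random components:
# fit est = a + b * obs by ordinary least squares; the MSE of the fitted
# values against the observations is "systematic" (pattern/model error),
# the scatter about the fit is "random". With OLS the two parts sum to MSE.
rng = np.random.default_rng(5)
gauge = rng.gamma(2.0, 3.0, 500)                      # "observed" rain, mm
persiann = 0.6 * gauge + 1.0 + rng.normal(0, 2, 500)  # biased, noisy estimate

b, a = np.polyfit(gauge, persiann, 1)
fitted = a + b * gauge

mse = np.mean((persiann - gauge) ** 2)
mse_systematic = np.mean((fitted - gauge) ** 2)
mse_random = np.mean((persiann - fitted) ** 2)
print(mse, mse_systematic + mse_random)  # equal up to floating point
```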

  18. Modeling systematic errors: polychromatic sources of Beer-Lambert deviations in HPLC/UV and nonchromatographic spectrophotometric assays.

    PubMed

    Galli, C

    2001-07-01

    It is well established that the use of polychromatic radiation in spectrophotometric assays leads to excursions from the Beer-Lambert limit. This Note models the resulting systematic error as a function of assay spectral width, slope of molecular extinction coefficient, and analyte concentration. The theoretical calculations are compared with recent experimental results; a parameter is introduced which can be used to estimate the magnitude of the systematic error in both chromatographic and nonchromatographic spectrophotometric assays. It is important to realize that the polychromatic radiation employed in common laboratory equipment can yield assay errors up to approximately 4%, even at absorption levels generally considered 'safe' (i.e. absorption <1). Thus careful consideration of instrumental spectral width, analyte concentration, and slope of molecular extinction coefficient is required to ensure robust analytical methods.
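
    The excursion from the Beer-Lambert limit can be reproduced with a short numerical model: the measured absorbance is -log10 of the band-averaged transmittance, which deviates from the monochromatic value whenever the extinction coefficient varies across the band. The band shape, extinction slope, and concentration below are illustrative assumptions.

```python
import numpy as np

wl = np.linspace(-5, 5, 201)            # wavelength offsets in the band, nm
eps = 1.0e4 * (1 - 0.02 * wl)           # extinction with a 2%/nm slope
c, path = 8.0e-5, 1.0                   # mol/L, cm -> ideal A ~ 0.8

T = 10.0 ** (-eps * c * path)           # transmittance per wavelength
A_measured = -np.log10(T.mean())        # polychromatic measurement
A_ideal = eps.mean() * c * path         # monochromatic (Beer-Lambert) value
print(f"relative error: {(A_measured - A_ideal) / A_ideal * 100:.2f}%")
```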

  19. A service evaluation of on-line image-guided radiotherapy to lower extremity sarcoma: Investigating the workload implications of a 3 mm action level for image assessment and correction prior to delivery.

    PubMed

    Taylor, C; Parker, J; Stratford, J; Warren, M

    2018-05-01

    Although all systematic and random positional setup errors can be corrected for in their entirety during on-line image-guided radiotherapy, the use of a specified action level, below which no correction occurs, is also an option. The following service evaluation aimed to investigate the use of a 3 mm action level for on-line image assessment and correction (online, systematic set-up error and weekly evaluation) for lower extremity sarcoma, and to understand the impact on imaging frequency and patient positioning error within one cancer centre. All patients were immobilised using a thermoplastic shell attached to a plastic base and an individually moulded footrest. A retrospective analysis of 30 patients was performed. Patient setup and correctional data derived from cone beam CT analysis were retrieved. The timing, frequency and magnitude of corrections were evaluated. The population systematic and random errors were derived. 20% of patients had no systematic corrections over the duration of treatment, and 47% had one. The maximum number of systematic corrections per course of radiotherapy was 4, which occurred for 2 patients. 34% of episodes occurred within the first 5 fractions. All patients had at least one observed translational error during their treatment greater than 0.3 cm, and 80% of patients had at least one greater than 0.5 cm. The population systematic error was 0.14 cm, 0.10 cm, 0.14 cm and the random error was 0.27 cm, 0.22 cm, 0.23 cm in the lateral, caudocranial and anteroposterior directions, respectively. The required Planning Target Volume margin for the study population was 0.55 cm, 0.41 cm and 0.50 cm in the lateral, caudocranial and anteroposterior directions. The 3 mm action level for image assessment and correction prior to delivery reduced the imaging burden and focussed intervention on patients that exhibited greater positional variability. This strategy could be an efficient deployment of departmental resources if full daily correction of positional setup error is not possible. Copyright © 2017. Published by Elsevier Ltd.
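
    The quoted PTV margins are consistent with the widely used van Herk recipe, margin = 2.5Σ + 0.7σ. The abstract does not name its margin formula, so this reconstruction is an assumption:

```python
# Population systematic (Sigma) and random (sigma) errors from the abstract,
# in cm, ordered lateral, caudocranial, anteroposterior.
Sigma = [0.14, 0.10, 0.14]
sigma = [0.27, 0.22, 0.23]

for axis, S, s in zip(("lateral", "caudocranial", "anteroposterior"),
                      Sigma, sigma):
    print(f"{axis}: {2.5 * S + 0.7 * s:.2f} cm")
# -> 0.54, 0.40, 0.51 cm, close to the reported 0.55, 0.41, 0.50 cm
```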

  20. Using total quality management approach to improve patient safety by preventing medication error incidences*.

    PubMed

    Yousef, Nadin; Yousef, Farah

    2017-09-04

    Whereas one of the predominant causes of medication errors is drug administration error, a previous study related to our investigations and reviews estimated that medication errors occurred in 6.7 out of 100 administered medication doses. Therefore, using a Six Sigma approach, we aimed to propose a way to reduce these errors to less than 1 out of 100 administered doses by improving healthcare professional education and producing clearer handwritten prescriptions. The study was held in a General Government Hospital. First, we systematically studied the current medication use process. Second, we used the Six Sigma approach, utilizing the five-step DMAIC process (Define, Measure, Analyze, Implement, Control), to find the real reasons behind such errors and to work out a useful solution for avoiding medication error incidences in daily healthcare professional practice. A data sheet was used as the data tool and Pareto diagrams were used as the analysis tool. In our investigation, we identified the real cause behind administered medication errors. The Pareto diagrams showed that the fault percentage in the administration phase was 24.8%, while the percentage of errors related to the prescribing phase was 42.8%, 1.7-fold higher. This means that mistakes in the prescribing phase, especially poor handwritten prescriptions (17.6% of the errors in this phase), are responsible for consequent mistakes later in the treatment process. Therefore, we proposed an effective low-cost strategy, based on the behavior of healthcare workers, in the form of guideline recommendations to be followed by physicians. This method can act as a precaution that decreases errors in the prescribing phase, which may in turn decrease administered medication error incidences to less than 1%. This behavioral improvement can be efficient in improving handwritten prescriptions and decreasing the consequent errors related to administered medication doses to below the global standard; as a result, it enhances patient safety. However, we hope other studies will later be conducted in hospitals to practically evaluate how effective our proposed systematic strategy really is in comparison with other suggested remedies in this field.

  1. Helical tomotherapy setup variations in canine nasal tumor patients immobilized with a bite block.

    PubMed

    Kubicek, Lyndsay N; Seo, Songwon; Chappell, Richard J; Jeraj, Robert; Forrest, Lisa J

    2012-01-01

    The purpose of our study was to compare setup variation in four degrees of freedom (vertical, longitudinal, lateral, and roll) between canine nasal tumor patients immobilized with a mattress and bite block versus a mattress alone. Our secondary aim was to define a clinical target volume (CTV) to planning target volume (PTV) expansion margin based on our mean systematic error values associated with nasal tumor patients immobilized by a mattress and bite block. We evaluated six parameters for setup corrections: systematic error, random error, patient-to-patient variation in systematic errors, the magnitude of patient-specific random errors (root mean square [RMS]), distance error, and the variation of setup corrections from zero shift. The variations in all parameters were statistically smaller in the group immobilized by a mattress and bite block. The mean setup corrections in the mattress and bite block group ranged from 0.91 mm to 1.59 mm for the translational errors, and the mean roll correction was 0.5°. Although most veterinary radiation facilities do not have access to image-guided radiotherapy (IGRT), we identified a need for more rigid fixation, established the value of adding IGRT to veterinary radiation therapy, and defined the CTV-PTV setup error margin for canine nasal tumor patients immobilized in a mattress and bite block. © 2012 Veterinary Radiology & Ultrasound.

  2. Systematic bias in genomic classification due to contaminating non-neoplastic tissue in breast tumor samples.

    PubMed

    Elloumi, Fathi; Hu, Zhiyuan; Li, Yan; Parker, Joel S; Gulley, Margaret L; Amos, Keith D; Troester, Melissa A

    2011-06-30

    Genomic tests are available to predict breast cancer recurrence and to guide clinical decision making. These predictors provide recurrence risk scores along with a measure of uncertainty, usually a confidence interval. The confidence interval conveys random error and not systematic bias. Standard tumor sampling methods make this problematic, as it is common to have a substantial proportion (typically 30-50%) of a tumor sample comprised of histologically benign tissue. This "normal" tissue could represent a source of non-random error or systematic bias in genomic classification. To assess the sensitivity of genomic classification to systematic error from normal contamination, we collected 55 tumor samples and paired tumor-adjacent normal tissue. Using genomic signatures from the tumor and paired normal, we evaluated how increasing normal contamination altered recurrence risk scores for various genomic predictors. Simulations of normal tissue contamination caused misclassification of tumors for all predictors evaluated, but different breast cancer predictors showed different types of vulnerability to normal tissue bias. While two predictors had an unpredictable direction of bias (either a higher or lower risk of relapse resulted from normal contamination), one signature showed a predictable direction of normal tissue effects. Due to this predictable direction of effect, this signature (the PAM50) was adjusted for normal tissue contamination, and these corrections improved sensitivity and negative predictive value. For all three assays, quality control standards and/or appropriate bias adjustment strategies can be used to improve assay reliability. Normal tissue sampled concurrently with tumor is an important source of bias in breast genomic predictors. All genomic predictors show some sensitivity to normal tissue contamination, and ideal strategies for mitigating this bias vary depending upon the particular genes and computational methods used in the predictor.

  3. Opposite effects of cannabis and cocaine on performance monitoring.

    PubMed

    Spronk, Desirée B; Verkes, Robbert J; Cools, Roshan; Franke, Barbara; Van Wel, Janelle H P; Ramaekers, Johannes G; De Bruijn, Ellen R A

    2016-07-01

    Drug use is often associated with risky and unsafe behavior. However, the acute effects of cocaine and cannabis on performance monitoring processes have not been systematically investigated. The aim of the current study was to investigate how administration of these drugs alters performance monitoring processes, as reflected in the error-related negativity (ERN), the error positivity (Pe) and post-error slowing. A double-blind placebo-controlled randomized three-way crossover design was used. Sixty-one subjects completed a Flanker task while EEG measures were obtained. Subjects showed diminished ERN and Pe amplitudes after cannabis administration and increased ERN and Pe amplitudes after administration of cocaine. Neither drug affected post-error slowing. These results demonstrate diametrically opposing effects of the two most commonly used illicit drugs of abuse on the early and late phases of performance monitoring. In contrast, the behavioral adaptation phase of performance monitoring remained unaltered by either drug. Copyright © 2016. Published by Elsevier B.V.

  4. The application of SHERPA (Systematic Human Error Reduction and Prediction Approach) in the development of compensatory cognitive rehabilitation strategies for stroke patients with left and right brain damage.

    PubMed

    Hughes, Charmayne M L; Baber, Chris; Bienkiewicz, Marta; Worthington, Andrew; Hazell, Alexa; Hermsdörfer, Joachim

    2015-01-01

    Approximately 33% of stroke patients have difficulty performing activities of daily living, often committing errors during the planning and execution of such activities. The objective of this study was to evaluate the ability of the human error identification (HEI) technique SHERPA (Systematic Human Error Reduction and Prediction Approach) to predict errors during the performance of daily activities in stroke patients with left and right hemisphere lesions. Using SHERPA we successfully predicted 36 of the 38 observed errors, with analysis indicating that the proportion of predicted and observed errors was similar for all sub-tasks and severity levels. HEI results were used to develop compensatory cognitive strategies that clinicians could employ to reduce or prevent errors from occurring. This study provides evidence for the reliability and validity of SHERPA in the design of cognitive rehabilitation strategies in stroke populations.

  5. Internal robustness: systematic search for systematic bias in SN Ia data

    NASA Astrophysics Data System (ADS)

    Amendola, Luca; Marra, Valerio; Quartin, Miguel

    2013-04-01

    A great deal of effort is currently being devoted to understanding, estimating and removing systematic errors in cosmological data. In the particular case of Type Ia supernovae, systematics are starting to dominate the error budget. Here we propose a Bayesian tool for carrying out a systematic search for systematic contamination. This serves as an extension to the standard goodness-of-fit tests and allows one not only to cross-check raw or processed data for the presence of systematics but also to pinpoint the data that are most likely contaminated. We successfully test our tool with mock catalogues and conclude that the Union2.1 data do not possess a significant amount of systematics. Finally, we show that if one includes in Union2.1 the supernovae that originally failed the quality cuts, our tool signals the presence of systematics at over the 3.8σ confidence level.

  6. High-Accuracy Decoupling Estimation of the Systematic Coordinate Errors of an INS and Intensified High Dynamic Star Tracker Based on the Constrained Least Squares Method

    PubMed Central

    Jiang, Jie; Yu, Wenbo; Zhang, Guangjun

    2017-01-01

    Navigation accuracy is one of the key performance indicators of an inertial navigation system (INS). Requirements for an accuracy assessment of an INS in a real work environment are exceedingly urgent because of the enormous differences between real work and laboratory test environments. An attitude accuracy assessment of an INS based on the intensified high dynamic star tracker (IHDST) is particularly suitable for a real, complex dynamic environment. However, the coupled systematic coordinate errors of the INS and the IHDST severely decrease the attitude assessment accuracy. Given that, a high-accuracy decoupling estimation method for these systematic coordinate errors based on the constrained least squares (CLS) method is proposed in this paper. The reference frame of the IHDST is first converted to be consistent with that of the INS, because their reference frames are completely different. Thereafter, the decoupling estimation model of the systematic coordinate errors is established, and the CLS-based optimization method is utilized to estimate the errors accurately. After error compensation, the attitude accuracy of the INS can be assessed accurately based on the IHDST. Both simulated experiments and real flight experiments with aircraft were conducted, and the experimental results demonstrate that the proposed method is effective and shows excellent performance for the attitude accuracy assessment of an INS in a real work environment. PMID:28991179
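
    The CLS machinery itself is generic; a minimal sketch (not the paper's actual INS/IHDST error model) solves the equality-constrained least squares problem through its KKT system:

```python
import numpy as np

# minimize ||A x - b||^2 subject to C x = d, via the KKT linear system
# [[A^T A, C^T], [C, 0]] [x; lambda] = [A^T b; d].
def cls_solve(A, b, C, d):
    n, p = A.shape[1], C.shape[0]
    K = np.block([[A.T @ A, C.T],
                  [C, np.zeros((p, p))]])
    rhs = np.concatenate([A.T @ b, d])
    sol = np.linalg.solve(K, rhs)
    return sol[:n]                      # drop the Lagrange multipliers

rng = np.random.default_rng(6)
A = rng.normal(size=(50, 3))
x_true = np.array([1.0, -2.0, 0.5])
b = A @ x_true + rng.normal(0, 0.05, 50)
C = np.array([[1.0, 1.0, 0.0]])         # enforce x0 + x1 = -1
d = np.array([-1.0])
print(cls_solve(A, b, C, d))
```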

  7. Seeing Your Error Alters My Pointing: Observing Systematic Pointing Errors Induces Sensori-Motor After-Effects

    PubMed Central

    Ronchi, Roberta; Revol, Patrice; Katayama, Masahiro; Rossetti, Yves; Farnè, Alessandro

    2011-01-01

    During the procedure of prism adaptation, subjects execute pointing movements to visual targets under a lateral optical displacement: as a consequence of the discrepancy between visual and proprioceptive inputs, their visuo-motor activity is characterized by pointing errors. The perception of such final errors triggers error-correction processes that eventually result in sensori-motor compensation, opposite to the prismatic displacement (i.e., after-effects). Here we tested whether the mere observation of erroneous pointing movements, similar to those executed during prism adaptation, is sufficient to produce adaptation-like after-effects. Neurotypical participants observed, from a first-person perspective, the examiner's arm making incorrect pointing movements that systematically overshot visual target locations to the right, thus simulating a rightward optical deviation. Three classical after-effect measures (proprioceptive, visual and visual-proprioceptive shift) were recorded before and after first-person perspective observation of pointing errors. Results showed that mere visual exposure to an arm that systematically points to the right side of a target (i.e., without error correction) produces a leftward after-effect, which mostly affects the observer's proprioceptive estimation of her body midline. In addition, exposure to such a constant visual error induced in the observer the illusion of "feeling" the seen movement. These findings indicate that it is possible to elicit sensori-motor after-effects by mere observation of movement errors. PMID:21731649

  8. Configuration Manual Polarized Proton Collider at RHIC

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Alekseev, I.; Allgower, C.; Bai, M.

    2006-01-01

    In this report we present our design to accelerate and store polarized protons in RHIC, with the level of polarization, luminosity, and control of systematic errors required by the approved RHIC spin physics program. We provide an overview of the physics to be studied using RHIC with polarized proton beams, and a brief description of the accelerator systems required for the project.

  9. Directional variance adjustment: bias reduction in covariance matrices based on factor analysis with an application to portfolio optimization.

    PubMed

    Bartz, Daniel; Hatrick, Kerr; Hesse, Christian W; Müller, Klaus-Robert; Lemm, Steven

    2013-01-01

    Robust and reliable covariance estimates play a decisive role in financial and many other applications. An important class of estimators is based on factor models. Here, we show by extensive Monte Carlo simulations that covariance matrices derived from the statistical Factor Analysis model exhibit a systematic error, which is similar to the well-known systematic error of the spectrum of the sample covariance matrix. Moreover, we introduce the Directional Variance Adjustment (DVA) algorithm, which diminishes the systematic error. In a thorough empirical study for the US, European, and Hong Kong stock market we show that our proposed method leads to improved portfolio allocation.
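
    The "well-known systematic error of the spectrum of the sample covariance matrix" referred to above is easy to see by simulation: with a true covariance equal to the identity, sample eigenvalues spread well away from 1 whenever the number of variables is comparable to the number of observations. A toy Monte Carlo, with illustrative dimensions:

```python
import numpy as np

rng = np.random.default_rng(7)
p, n, trials = 50, 100, 200
top, bottom = [], []
for _ in range(trials):
    X = rng.normal(size=(n, p))                  # true covariance = I_p
    eig = np.linalg.eigvalsh(np.cov(X, rowvar=False))
    top.append(eig[-1])
    bottom.append(eig[0])

print("largest sample eigenvalue ~", np.mean(top))     # well above 1
print("smallest sample eigenvalue ~", np.mean(bottom)) # well below 1
```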

  11. A Liberal Account of Addiction

    PubMed Central

    Foddy, Bennett; Savulescu, Julian

    2014-01-01

    Philosophers and psychologists have been attracted to two differing accounts of addictive motivation. In this paper, we investigate these two accounts and challenge their mutual claim that addictions compromise a person’s self-control. First, we identify some incompatibilities between this claim of reduced self-control and the available evidence from various disciplines. A critical assessment of the evidence weakens the empirical argument for reduced autonomy. Second, we identify sources of unwarranted normative bias in the popular theories of addiction that introduce systematic errors in interpreting the evidence. By eliminating these errors, we are able to generate a minimal but correct account of addiction that presumes addicts to be autonomous in their addictive behavior, absent further evidence to the contrary. Finally, we explore some of the implications of this minimal, correct view. PMID:24659901

  12. Comparison of Different Attitude Correction Models for ZY-3 Satellite Imagery

    NASA Astrophysics Data System (ADS)

    Song, Wenping; Liu, Shijie; Tong, Xiaohua; Niu, Changling; Ye, Zhen; Zhang, Han; Jin, Yanmin

    2018-04-01

    The ZY-3 satellite, launched in 2012, is the first civilian high-resolution stereo mapping satellite of China. This paper analyzed the positioning errors of ZY-3 satellite imagery and compensated for them to improve geo-positioning accuracy using different correction models, including attitude quaternion correction, attitude angle offset correction, and attitude angle linear correction. The experimental results revealed that there are systematic errors in the ZY-3 attitude observations and that the positioning accuracy can be improved after attitude correction with the aid of ground control. There is no significant difference between the results of the attitude quaternion correction method and the attitude angle correction method. However, the attitude angle offset correction model produced steadier improvements than the linear correction model when only limited ground control points were available for a single scene.

  13. WE-H-BRC-08: Examining Credentialing Criteria and Poor Performance Indicators for IROC Houston’s Anthropomorphic Head and Neck Phantom

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Carson, M; Molineu, A; Taylor, P

    Purpose: To analyze the most recent results of IROC Houston’s anthropomorphic H&N phantom to determine the nature of failing irradiations and the feasibility of altering pass/fail credentialing criteria. Methods: IROC Houston’s H&N phantom, used for IMRT credentialing for NCI-sponsored clinical trials, requires that an institution’s treatment plan must agree with measurement within 7% (TLD doses) and ≥85% of pixels must pass 7%/4 mm gamma analysis. 156 phantom irradiations (November 2014 – October 2015) were re-evaluated using tighter criteria: 1) 5% TLD and 5%/4 mm, 2) 5% TLD and 5%/3 mm, 3) 4% TLD and 4%/4 mm, and 4) 3% TLD and 3%/3 mm. Failure/poor performance rates were evaluated with respect to individual film and TLD performance by location in the phantom. Overall poor phantom results were characterized qualitatively as systematic (dosimetric) errors, setup errors/positional shifts, global but non-systematic errors, and errors affecting only a local region. Results: The pass rate for these phantoms using current criteria is 90%. Substituting criteria 1-4 reduces the overall pass rate to 77%, 70%, 63%, and 37%, respectively. Statistical analyses indicated the probability of noise-induced TLD failure at the 5% criterion was <0.5%. Using criteria 1, TLD results were most often the cause of failure (86% failed TLD while 61% failed film), with most failures identified in the primary PTV (77% of cases). Other criteria posed similar results. Irradiations that failed from film only were overwhelmingly associated with phantom shifts/setup errors (≥80% of cases). Results failing criteria 1 were primarily diagnosed as systematic: 58% of cases. 11% were setup/positioning errors, 8% were global non-systematic errors, and 22% were local errors. Conclusion: This study demonstrates that 5% TLD and 5%/4 mm gamma criteria may be both practically and theoretically achievable. Further work is necessary to diagnose and resolve dosimetric inaccuracy in these trials, particularly for systematic dose errors. This work is funded by NCI Grant CA180803.

  14. UNDERSTANDING SYSTEMATIC MEASUREMENT ERROR IN THERMAL-OPTICAL ANALYSIS FOR PM BLACK CARBON USING RESPONSE SURFACES AND SURFACE CONFIDENCE INTERVALS

    EPA Science Inventory

    Results from a NIST-EPA Interagency Agreement on Understanding Systematic Measurement Error in Thermal-Optical Analysis for PM Black Carbon Using Response Surfaces and Surface Confidence Intervals will be presented at the American Association for Aerosol Research (AAAR) 24th Annu...

  15. The accuracy of the measurements in Ulugh Beg's star catalogue

    NASA Astrophysics Data System (ADS)

    Krisciunas, K.

    1992-12-01

    The star catalogue compiled by Ulugh Beg and his collaborators in Samarkand (ca. 1437) is the only catalogue primarily based on original observations between the times of Ptolemy and Tycho Brahe. Evans (1987) has given convincing evidence that Ulugh Beg's star catalogue was based on measurements made with a zodiacal armillary sphere graduated to 15', with interpolation to 0.2 units. He and Shevchenko (1990) were primarily interested in the systematic errors in ecliptic longitude. Shevchenko's analysis of the random errors was limited to the twelve zodiacal constellations. We have analyzed all 843 ecliptic longitudes and latitudes attributed to Ulugh Beg by Knobel (1917). This required multiplying all the longitude errors by the respective values of the cosine of the celestial latitudes. We find a random error of ±17.7' for ecliptic longitude and ±16.5' for ecliptic latitude. On the whole, the random errors are largest near the ecliptic, decreasing towards the ecliptic poles. For all of Ulugh Beg's measurements (excluding outliers) the mean systematic error is -10.8' ± 0.8' for ecliptic longitude and +7.5' ± 0.7' for ecliptic latitude, with the errors in the sense "computed minus Ulugh Beg". For the brighter stars (those designated alpha, beta, and gamma in the respective constellations), the mean systematic errors are -11.3' ± 1.9' for ecliptic longitude and +9.4' ± 1.5' for ecliptic latitude. Within the errors this matches the systematic error in both coordinates for alpha Vir. With greater confidence we may conclude that alpha Vir was the principal reference star in the catalogues of Ulugh Beg and Ptolemy. Evans, J. 1987, J. Hist. Astr. 18, 155. Knobel, E. B. 1917, Ulugh Beg's Catalogue of Stars, Washington, D.C.: Carnegie Institution. Shevchenko, M. 1990, J. Hist. Astr. 21, 187.

  16. Certainty Equivalence M-MRAC for Systems with Unmatched Uncertainties

    NASA Technical Reports Server (NTRS)

    Stepanyan, Vahram; Krishnakumar, Kalmanje

    2012-01-01

    The paper presents a certainty equivalence state feedback indirect adaptive control design method for systems of any relative degree with unmatched uncertainties. The approach is based on a parameter identification (estimation) model, which is completely separated from the control design and is capable of producing parameter estimates as fast as the computing power allows without generating high frequency oscillations. It is shown that the system's input and output tracking errors can be systematically decreased by the proper choice of the design parameters.

  17. Short-range optical air data measurements for aircraft control using rotational Raman backscatter.

    PubMed

    Fraczek, Michael; Behrendt, Andreas; Schmitt, Nikolaus

    2013-07-15

    A first laboratory prototype of a novel concept for a short-range optical air data system for aircraft control and safety was built. The measurement methodology was introduced in [Appl. Opt. 51, 148 (2012)] and is based on techniques known from lidar, detecting elastic and Raman backscatter from air. A wide range of flight-critical parameters, such as air temperature, molecular number density and pressure, can be measured, and data on atmospheric particles and humidity can be collected. In this paper, the experimental measurement performance achieved with the first laboratory prototype, using 532 nm laser radiation with a pulse energy of 118 mJ, is presented. Systematic measurement errors and statistical measurement uncertainties are quantified separately. The typical systematic temperature, density and pressure measurement errors obtained from the mean of 1000 averaged signal pulses are small, amounting to <0.22 K, <0.36% and <0.31%, respectively, for measurements at air pressures varying from 200 hPa to 950 hPa but a constant air temperature of 298.95 K. The systematic measurement errors at air temperatures varying from 238 K to 308 K but a constant air pressure of 946 hPa are even smaller: <0.05 K, <0.07% and <0.06%, respectively. A focus is placed on the system performance at different virtual flight altitudes as a function of the laser pulse energy. The virtual flight altitudes are precisely generated with a custom-made atmospheric simulation chamber system. In this context, the minimum laser pulse energies and pulse numbers that the measurement system requires in order to meet the measurement error demands for temperature and pressure specified in aviation standards are determined experimentally. The aviation error margins limit the allowable temperature errors to 1.5 K for all measurement altitudes, and the pressure errors to 0.1% for 0 m and 0.5% for 13000 m. For 100-pulse-averaged temperature measurements, the pulse energy using 532 nm laser radiation has to be larger than 11 mJ (35 mJ) for 1-σ (3-σ) uncertainties at all measurement altitudes. For 100-pulse-averaged pressure measurements, the laser pulse energy has to be larger than 95 mJ (355 mJ), respectively. Based on these experimental results, the laser pulse energy requirements are extrapolated to the ultraviolet wavelength region as well, resulting in significantly lower pulse energy demands of 1.5-3 mJ (4-10 mJ) and 12-27 mJ (45-110 mJ) for 1-σ (3-σ) 100-pulse-averaged temperature and pressure measurements, respectively.

  18. Controlling qubit drift by recycling error correction syndromes

    NASA Astrophysics Data System (ADS)

    Blume-Kohout, Robin

    2015-03-01

    Physical qubits are susceptible to systematic drift, above and beyond the stochastic Markovian noise that motivates quantum error correction. This parameter drift must be compensated - if it is ignored, error rates will rise to intolerable levels - but compensation requires knowing the parameters' current value, which appears to require halting experimental work to recalibrate (e.g. via quantum tomography). Fortunately, this is untrue. I show how to perform on-the-fly recalibration on the physical qubits in an error correcting code, using only information from the error correction syndromes. The algorithm for detecting and compensating drift is very simple - yet, remarkably, when used to compensate Brownian drift in the qubit Hamiltonian, it achieves a stabilized error rate very close to the theoretical lower bound. Against 1/f noise, it is less effective only because 1/f noise is (like white noise) dominated by high-frequency fluctuations that are uncompensatable. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE

  19. Unaccounted source of systematic errors in measurements of the Newtonian gravitational constant G

    NASA Astrophysics Data System (ADS)

    DeSalvo, Riccardo

    2015-06-01

    Many precision measurements of G have produced a spread of results incompatible with measurement errors. Clearly an unknown source of systematic errors is at work. It is proposed here that most of the discrepancies derive from subtle deviations from Hooke's law, caused by avalanches of entangled dislocations. The idea is supported by deviations from linearity reported by experimenters measuring G, similarly to what is observed, on a larger scale, in low-frequency spring oscillators. Some mitigating experimental apparatus modifications are suggested.

  20. 13Check_RNA: A tool to evaluate ¹³C chemical shift assignments of RNA.

    PubMed

    Icazatti, A A; Martin, O A; Villegas, M; Szleifer, I; Vila, J A

    2018-06-19

    Chemical shifts (CS) are an important source of structural information for macromolecules such as RNA. In addition to the scarce availability of CS for RNA, the observed values are prone to errors due to incorrect re-calibration or misassignments. Different groups have dedicated their efforts to correcting systematic errors in RNA CS. Despite this, there are no automated, freely available algorithms to check the assignments of RNA ¹³C CS before their deposition to the BMRB, or to re-reference already deposited CS with systematic errors. Based on an existing method, we have implemented an open-source Python module to correct systematic errors in ¹³C CS (from here on ¹³Cexp) of RNAs and return the results in three formats, including the NMR-STAR one. This software is available on GitHub at https://github.com/BIOS-IMASL/13Check_RNA under a MIT license. Supplementary data are available at Bioinformatics online.

  1. The Gnomon Experiment

    NASA Astrophysics Data System (ADS)

    Krisciunas, Kevin

    2007-12-01

    A gnomon, or vertical pointed stick, can be used to determine the north-south direction at a site, as well as one's latitude. If one has accurate time and knows one's time zone, it is also possible to determine one's longitude. From observations on the first day of winter and the first day of summer one can determine the obliquity of the ecliptic. Since we can obtain accurate geographical coordinates from Google Earth or a GPS device, analysis of a set of shadow-length measurements can be used by students to learn about astronomical coordinate systems, time systems, systematic errors, and random errors. Systematic latitude errors of student datasets are typically 30 nautical miles (0.5 degree) or more, but with care one can achieve systematic and random errors of less than 8 nautical miles. One of the advantages of this experiment is that it can be carried out during the day. Also, it is possible to determine if a student has made up his data.

  2. Evaluation of seasonal and spatial variations of lumped water balance model sensitivity to precipitation data errors

    NASA Astrophysics Data System (ADS)

    Xu, Chong-yu; Tunemar, Liselotte; Chen, Yongqin David; Singh, V. P.

    2006-06-01

    The sensitivity of hydrological models to input data errors has been reported in the literature for particular models on a single or a few catchments. A more important issue, i.e. how a model's response to input data errors changes as catchment conditions change, has not been addressed previously. This study investigates the seasonal and spatial effects of precipitation data errors on the performance of conceptual hydrological models. For this study, a monthly conceptual water balance model, NOPEX-6, was applied to 26 catchments in the Mälaren basin in Central Sweden. Both systematic and random errors were considered. For the systematic errors, 5-15% of mean monthly precipitation values were added to the original precipitation to form the corrupted input scenarios. Random values were generated by Monte Carlo simulation and were assumed to be (1) independent between months, and (2) distributed according to a Gaussian law of zero mean and constant standard deviation, taken as 5, 10, 15, 20, and 25% of the mean monthly standard deviation of precipitation. The results show that the response of the model parameters and model performance depends, among other things, on the type of error, the magnitude of the error, the physical characteristics of the catchment, and the season of the year. In particular, the model appears less sensitive to the random error than to the systematic error. The catchments with smaller runoff coefficients were more influenced by input data errors than were the catchments with higher values. Dry months were more sensitive to precipitation errors than were wet months. Recalibration of the model with erroneous data compensated in part for the data errors by altering the model parameters.
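
    The two corruption schemes can be written down in a few lines. The series and percentages below are illustrative, and the sketch adds a fraction of one overall mean where the study perturbs per-month means:

```python
import numpy as np

rng = np.random.default_rng(8)
P = rng.gamma(4.0, 15.0, size=12 * 30)   # 30 years of monthly totals, mm

# (1) Systematic error: add e.g. 10% of the mean monthly precipitation.
P_sys = P + 0.10 * P.mean()

# (2) Random error: independent Gaussian perturbations with standard
# deviation equal to e.g. 15% of the monthly standard deviation.
P_rand = P + rng.normal(0.0, 0.15 * P.std(), size=P.size)

# Each corrupted series would then drive the water balance model and the
# recalibrated parameters/performance would be compared to the baseline.
```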

  3. Detecting and overcoming systematic errors in genome-scale phylogenies.

    PubMed

    Rodríguez-Ezpeleta, Naiara; Brinkmann, Henner; Roure, Béatrice; Lartillot, Nicolas; Lang, B Franz; Philippe, Hervé

    2007-06-01

    Genome-scale data sets result in an enhanced resolution of the phylogenetic inference by reducing stochastic errors. However, there is also an increase of systematic errors due to model violations, which can lead to erroneous phylogenies. Here, we explore the impact of systematic errors on the resolution of the eukaryotic phylogeny using a data set of 143 nuclear-encoded proteins from 37 species. The initial observation was that, despite the impressive amount of data, some branches had no significant statistical support. To demonstrate that this lack of resolution is due to a mutual annihilation of phylogenetic and nonphylogenetic signals, we created a series of data sets with slightly different taxon sampling. As expected, these data sets yielded strongly supported but mutually exclusive trees, thus confirming the presence of conflicting phylogenetic and nonphylogenetic signals in the original data set. To decide on the correct tree, we applied several methods expected to reduce the impact of some kinds of systematic error. Briefly, we show that (i) removing fast-evolving positions, (ii) recoding amino acids into functional categories, and (iii) using a site-heterogeneous mixture model (CAT) are three effective means of increasing the ratio of phylogenetic to nonphylogenetic signal. Finally, our results allow us to formulate guidelines for detecting and overcoming phylogenetic artefacts in genome-scale phylogenetic analyses.

  4. Local systematic differences in 2MASS positions

    NASA Astrophysics Data System (ADS)

    Bustos Fierro, I. H.; Calderón, J. H.

    2018-01-01

    We have found that positions in the 2MASS All-Sky Catalog of Point Sources show local systematic differences with characteristic length-scales of ~5 to ~8 arcminutes when compared with several catalogs. We have observed that when 2MASS positions are used in the computation of proper motions, the mentioned systematic differences cause systematic errors in the resulting proper motions. We have developed a method to locally rectify 2MASS with respect to UCAC4 in order to diminish the systematic differences between these catalogs. The 2MASS catalog rectified with the proposed method can be regarded as an extension of UCAC4 for astrometry, with an accuracy of ~90 mas in its positions and negligible systematic errors. We also show that the use of these rectified positions removes the observed systematic pattern in proper motions derived from original 2MASS positions.

  5. SU-E-CAMPUS-J-05: Quantitative Investigation of Random and Systematic Uncertainties From Hardware and Software Components in the Frameless 6DBrainLAB ExacTrac System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Keeling, V; Jin, H; Hossain, S

    2014-06-15

    Purpose: To evaluate setup accuracy and quantify individual systematic and random errors for the various hardware and software components of the frameless 6D-BrainLAB ExacTrac system. Methods: 35 patients with cranial lesions, some with multiple isocenters (50 total lesions treated in 1, 3, or 5 fractions), were investigated. All patients were simulated with a rigid head-and-neck mask and the BrainLAB localizer. CT images were transferred to the IPLAN treatment planning system, where optimized plans were generated in the stereotactic reference frame based on the localizer. The patients were set up initially with the infrared (IR) positioning ExacTrac system. Stereoscopic X-ray images (XC: X-ray Correction) were registered to their corresponding digitally-reconstructed radiographs, based on bony anatomy matching, to calculate 6D translational and rotational (lateral, longitudinal, vertical, pitch, roll, yaw) shifts. XC combines the systematic errors of the mask, localizer, image registration, frame, and IR. If shifts were below tolerance (0.7 mm translational and 1 degree rotational), treatment was initiated; otherwise corrections were applied and additional X-rays were acquired to verify patient position (XV: X-ray Verification). Statistical analysis was used to extract systematic and random errors of the different components of the 6D ExacTrac system and evaluate the cumulative setup accuracy. Results: Mask systematic errors (translational; rotational) were the largest and varied from one patient to another in the range (−15 to 4 mm; −2.5 to 2.5 degree), obtained from the mean of XC for each patient. Setup uncertainty in IR positioning (0.97, 2.47, 1.62 mm; 0.65, 0.84, 0.96 degree) was extracted from the standard deviation of XC. The combined systematic errors of the frame and localizer (0.32, −0.42, −1.21 mm; −0.27, 0.34, 0.26 degree) were extracted from the mean of means of the XC distributions. Final patient setup uncertainty was obtained from the standard deviations of XV (0.57, 0.77, 0.67 mm; 0.39, 0.35, 0.30 degree). Conclusion: Statistical analysis was used to calculate cumulative and individual systematic errors from the different hardware and software components of the 6D ExacTrac system. Patients were treated with cumulative errors (<1 mm, <1 degree) with XV image guidance.
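
    The extraction of systematic and random components from shift distributions follows standard population-error bookkeeping (an assumption here, since the abstract describes its statistics only in words): per axis, the group systematic error Σ is the SD of the per-patient mean shifts, the random error σ is the RMS of the per-patient SDs, and the overall mean M is the mean of means. The patient and fraction counts below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(9)
# Simulated patients x fractions table of shifts along one axis (mm):
# each patient has a fixed offset (systematic) plus day-to-day scatter.
shifts = rng.normal(loc=rng.normal(0, 1.0, (35, 1)), scale=2.0, size=(35, 30))

patient_means = shifts.mean(axis=1)
patient_sds = shifts.std(axis=1, ddof=1)

M = patient_means.mean()                    # group mean (overall bias)
Sigma = patient_means.std(ddof=1)           # systematic component
sigma = np.sqrt(np.mean(patient_sds ** 2))  # random component
print(M, Sigma, sigma)
```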

  6. Microwave Resonator Measurements of Atmospheric Absorption Coefficients: A Preliminary Design Study

    NASA Technical Reports Server (NTRS)

    Walter, Steven J.; Spilker, Thomas R.

    1995-01-01

    A preliminary design study examined the feasibility of using microwave resonator measurements to improve the accuracy of atmospheric absorption coefficients and refractivity between 18 and 35 GHz. Increased accuracies would improve the capability of water vapor radiometers to correct for radio signal delays caused by Earth's atmosphere. Calibration of delays incurred by radio signals traversing the atmosphere has applications to both deep space tracking and planetary radio science experiments. Currently, the Cassini gravity wave search requires 0.8-1.0% absorption coefficient accuracy. This study examined current atmospheric absorption models and estimated that current model accuracy ranges from 5% to 7%. The refractivity of water vapor is known to 1% accuracy, while the refractivity of many dry gases (oxygen, nitrogen, etc.) is known to better than 0.1%. Improvements to the current generation of models will require that both the functional form and absolute absorption of the water vapor spectrum be calibrated and validated. Several laboratory techniques for measuring atmospheric absorption and refractivity were investigated, including absorption cells, single and multimode rectangular cavity resonators, and Fabry-Perot resonators. Semi-confocal Fabry-Perot resonators were shown to provide the most cost-effective and accurate method of measuring atmospheric gas refractivity. The need for accurate environmental measurement and control was also addressed. A preliminary design for the environmental control and measurement system was developed to aid in identifying significant design issues. The analysis indicated that overall measurement accuracy will be limited by measurement errors and imprecise control of the gas sample's thermodynamic state, thermal expansion and vibration-induced deformation of the resonator structure, and electronic measurement error. The central problem is to identify systematic errors because random errors can be reduced by averaging. Calibrating the resonator measurements by checking the refractivity of dry gases, which are known to better than 0.1%, provides a method of controlling the systematic errors to 0.1%. The primary source of error in absorptivity and refractivity measurements is thus the ability to measure the concentration of water vapor in the resonator path. Over the whole thermodynamic range of interest the accuracy of water vapor measurement is 1.5%. However, over the range responsible for most of the radio delay (i.e. conditions in the bottom two kilometers of the atmosphere) the accuracy of water vapor measurements ranges from 0.5% to 1.0%. Therefore the precision of the resonator measurements could be held to 0.3%, and the overall absolute accuracy of resonator-based absorption and refractivity measurements will range from 0.6% to 1.0%.

  7. Combined influence of CT random noise and HU-RSP calibration curve nonlinearities on proton range systematic errors

    NASA Astrophysics Data System (ADS)

    Brousmiche, S.; Souris, K.; Orban de Xivry, J.; Lee, J. A.; Macq, B.; Seco, J.

    2017-11-01

    Proton range random and systematic uncertainties are the major factors undermining the advantages of proton therapy, namely, a sharp dose falloff and better dose conformality for lower doses in normal tissues. The influence of CT artifacts such as beam hardening or scatter can easily be understood and estimated due to their large-scale effects on the CT image, like cupping and streaks. In comparison, the effects of weakly-correlated stochastic noise are more insidious, and less attention is drawn to them, partly due to the common belief that they only contribute to proton range uncertainties and not to systematic errors thanks to some averaging effects. A new source of systematic errors on the range and relative stopping powers (RSP) has been highlighted and proved not to be negligible compared to the 3.5% uncertainty reference value used for safety margin design. Hence, we demonstrate that the angular points in the HU-to-RSP calibration curve are an intrinsic source of proton range systematic error for typical levels of zero-mean stochastic CT noise. Systematic errors on RSP of up to 1% have been computed for these levels. We also show that the range uncertainty does not generally vary linearly with the noise standard deviation. We define a noise-dependent effective calibration curve that better describes, for a given material, the RSP value that is actually used. The statistics of the RSP and the range continuous slowing down approximation (CSDA) have been analytically derived for the general case of a calibration curve obtained by the stoichiometric calibration procedure. These models have been validated against actual CSDA simulations for homogeneous and heterogeneous synthetic objects as well as on actual patient CTs for prostate and head-and-neck treatment planning situations.
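
    The kink effect is simple to demonstrate numerically: pass zero-mean noise through a piecewise-linear calibration curve and the mean output RSP shifts, because the two branches around an angular point have different slopes. The curve values below are invented for illustration, not the paper's calibration data.

```python
import numpy as np

hu_nodes = np.array([-1000.0, 0.0, 1000.0])
rsp_nodes = np.array([0.0, 1.0, 1.6])        # slope change (kink) at HU = 0

def hu_to_rsp(hu):
    return np.interp(hu, hu_nodes, rsp_nodes)

rng = np.random.default_rng(10)
hu_true = 0.0                                 # material sitting at the kink
noise = rng.normal(0.0, 20.0, 1_000_000)      # zero-mean stochastic CT noise

rsp_eff = hu_to_rsp(hu_true + noise).mean()   # "effective" RSP under noise
bias = (rsp_eff - hu_to_rsp(hu_true)) / hu_to_rsp(hu_true)
print(f"systematic RSP bias at the kink: {bias * 100:.2f}%")
```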

  8. Medication errors in the Middle East countries: a systematic review of the literature.

    PubMed

    Alsulami, Zayed; Conroy, Sharon; Choonara, Imti

    2013-04-01

    Medication errors are a significant global concern and can cause serious medical consequences for patients. Little is known about medication errors in Middle Eastern countries. The objectives of this systematic review were to review studies of the incidence and types of medication errors in Middle Eastern countries and to identify the main contributory factors involved. A systematic review of the literature related to medication errors in Middle Eastern countries was conducted in October 2011 using the following databases: Embase, Medline, PubMed, the British Nursing Index and the Cumulative Index to Nursing & Allied Health Literature. The search strategy included all ages and languages. Inclusion criteria were that the studies assessed or discussed the incidence of medication errors and contributory factors to medication errors during the medication treatment process in adults or in children. Forty-five studies from 10 of the 15 Middle Eastern countries met the inclusion criteria. Nine (20%) studies focused on medication errors in paediatric patients. Twenty-one focused on prescribing errors, 11 measured administration errors, 12 were interventional studies and one assessed transcribing errors. Dispensing and documentation errors were inadequately evaluated. Error rates varied from 7.1% to 90.5% for prescribing and from 9.4% to 80% for administration. The most common types of prescribing errors reported were incorrect dose (with an incidence rate from 0.15% to 34.8% of prescriptions), wrong frequency and wrong strength. Computerised physician order entry and clinical pharmacist input were the main interventions evaluated. Poor knowledge of medicines was identified as a contributory factor for errors by both doctors (prescribers) and nurses (when administering drugs). Most studies did not assess the clinical severity of the medication errors. Studies related to medication errors in the Middle Eastern countries were relatively few in number and of poor quality. Educational programmes on drug therapy for doctors and nurses are urgently needed.

  9. On-board error correction improves IR earth sensor accuracy

    NASA Astrophysics Data System (ADS)

    Alex, T. K.; Kasturirangan, K.; Shrivastava, S. K.

    1989-10-01

    Infra-red earth sensors are used in satellites for attitude sensing. Their accuracy is limited by systematic and random errors. The sources of error in a scanning infra-red earth sensor are analyzed in this paper. The systematic errors arising from the seasonal variation of infra-red radiation, the oblate shape of the earth, the ambient temperature of the sensor, and changes in scan/spin rates are analyzed. Simple relations are derived using least-squares curve fitting for on-board correction of these errors. Random errors arising from detector and amplifier noise, alignment instability and localized radiance anomalies are analyzed and possible correction methods are suggested. Sun and Moon interference with earth sensor performance has seriously affected a number of missions. The on-board processor detects Sun/Moon interference and corrects the errors on board. It is possible to obtain an eightfold improvement in sensing accuracy, which would be comparable with ground-based post facto attitude refinement.
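
    The abstract does not reproduce the fitted relations, but the least-squares correction idea is straightforward. A sketch fitting a first-order annual harmonic to a simulated seasonal radiance-induced bias (all numbers are invented for illustration):

        import numpy as np

        # Simulated seasonal attitude bias (deg) vs day of year -- hypothetical
        # data standing in for the seasonal IR radiance variation in the paper.
        day = np.arange(0, 365, 5, dtype=float)
        rng = np.random.default_rng(1)
        bias = (0.08 * np.sin(2 * np.pi * day / 365.25 + 0.7)
                + 0.01 * rng.normal(size=day.size))

        # Least-squares fit of an annual harmonic: a*sin + b*cos + c.
        w = 2 * np.pi / 365.25
        A = np.column_stack([np.sin(w * day), np.cos(w * day), np.ones_like(day)])
        coef, *_ = np.linalg.lstsq(A, bias, rcond=None)

        def correction(d):
            """On-board correction evaluated from the three fitted coefficients."""
            return coef[0] * np.sin(w * d) + coef[1] * np.cos(w * d) + coef[2]

        residual = bias - correction(day)
        print(f"rms before: {bias.std():.4f} deg, after: {residual.std():.4f} deg")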

  10. Patient disclosure of medical errors in paediatrics: A systematic literature review

    PubMed Central

    Koller, Donna; Rummens, Anneke; Le Pouesard, Morgane; Espin, Sherry; Friedman, Jeremy; Coffey, Maitreya; Kenneally, Noah

    2016-01-01

    Medical errors are common within paediatrics; however, little research has examined the process of disclosing medical errors in paediatric settings. The present systematic review of current research and policy initiatives examined evidence regarding the disclosure of medical errors involving paediatric patients. Peer-reviewed research from a range of scientific journals from the past 10 years is presented, and an overview of Canadian and international policies regarding disclosure in paediatric settings are provided. The purpose of the present review was to scope the existing literature and policy, and to synthesize findings into an integrated and accessible report. Future research priorities and policy implications are then identified. PMID:27429578

  11. System calibration method for Fourier ptychographic microscopy

    NASA Astrophysics Data System (ADS)

    Pan, An; Zhang, Yan; Zhao, Tianyu; Wang, Zhaojun; Dan, Dan; Lei, Ming; Yao, Baoli

    2017-09-01

    Fourier ptychographic microscopy (FPM) is a recently proposed computational imaging technique with both high resolution and a wide field of view. In current FPM imaging platforms, systematic error sources include aberrations, light-emitting diode (LED) intensity fluctuation, parameter imperfections, and noise, all of which may severely corrupt the reconstruction results with similar artifacts. It is therefore unlikely that the dominant error could be identified from these degraded reconstructions without any prior knowledge. In addition, in real situations the systematic error is generally a mixture of several error sources, which cannot be separated because they mutually constrain and convert into one another. To this end, we report a system calibration procedure, termed SC-FPM, that calibrates the mixed systematic errors simultaneously from an overall perspective. It is based on the simulated annealing algorithm, an LED intensity correction method, a nonlinear regression process, and an adaptive step-size strategy, and involves the evaluation of an error metric at each iteration step, followed by the re-estimation of accurate parameters. The performance achieved both in simulations and in experiments demonstrates that the proposed method outperforms other state-of-the-art algorithms. The reported system calibration scheme improves the robustness of FPM, relaxes the experimental conditions, and does not require any prior knowledge, which makes FPM more pragmatic.
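
    The core loop pairs Metropolis-style simulated annealing with an adaptive step size: the search step grows when a trial is accepted and shrinks when it is rejected. A generic sketch of that strategy on a stand-in error metric (the metric, parameters and schedule constants are all invented; this is not the authors' SC-FPM code):

        import math
        import random

        random.seed(0)

        def error_metric(params):
            """Stand-in for the FPM reconstruction error metric; in SC-FPM this
            would compare reconstructions against the measured raw images."""
            x, y = params
            return (x - 0.3) ** 2 + 2.0 * (y + 0.1) ** 2

        params = [0.0, 0.0]        # e.g. an assumed LED-array misalignment pair
        step, temperature = 0.5, 1.0
        current = error_metric(params)

        for _ in range(2000):
            trial = [p + random.uniform(-step, step) for p in params]
            e = error_metric(trial)
            # Metropolis acceptance: always take improvements, sometimes worse.
            if e < current or random.random() < math.exp((current - e) / temperature):
                params, current = trial, e
                step = min(step * 1.05, 0.5)   # adaptive step: grow on acceptance
            else:
                step *= 0.95                   # ...shrink on rejection
            temperature *= 0.995               # cooling schedule

        print(params, current)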

  12. Assessment of Systematic Chromatic Errors that Impact Sub-1% Photometric Precision in Large-Area Sky Surveys

    DOE PAGES

    Li, T. S.; DePoy, D. L.; Marshall, J. L.; ...

    2016-06-01

    Here, we report that meeting the science goals for many current and future ground-based optical large-area sky surveys requires that the calibrated broadband photometry be both stable in time and uniform over the sky to 1% precision or better. Past and current surveys have achieved photometric precision of 1%-2% by calibrating the survey's stellar photometry with repeated measurements of a large number of stars observed in multiple epochs. The calibration techniques employed by these surveys only consider the relative frame-by-frame photometric zeropoint offset and the focal plane position-dependent illumination corrections, which are independent of the source color. However, variations in the wavelength dependence of the atmospheric transmission and the instrumental throughput induce source color-dependent systematic errors. These systematic errors must also be considered to achieve the most precise photometric measurements. In this paper, we examine such systematic chromatic errors (SCEs) using photometry from the Dark Energy Survey (DES) as an example. We first define a natural magnitude system for DES and calculate the systematic errors on stellar magnitudes when the atmospheric transmission and instrumental throughput deviate from the natural system. We conclude that the SCEs caused by the change of airmass in each exposure, the change of the precipitable water vapor and aerosol in the atmosphere over time, and the non-uniformity of instrumental throughput over the focal plane can be up to 2% in some bandpasses. We then compare the calculated SCEs with the observed DES data. For the test sample data, we correct these errors using measurements of the atmospheric transmission and instrumental throughput from auxiliary calibration systems. The residual after correction is less than 0.3%. Moreover, we calculate such SCEs for Type Ia supernovae and elliptical galaxies and find that the chromatic errors for non-stellar objects are redshift-dependent and can be larger than those for stars at certain redshifts.

  13. Systematic error of diode thermometer.

    PubMed

    Iskrenovic, Predrag S

    2009-08-01

    Semiconductor diodes are often used for measuring temperatures. The forward voltage across a diode decreases, approximately linearly, with increasing temperature. The method applied is usually the simplest one: a constant direct current flows through the diode, and the voltage is measured at the diode terminals. However, the bias current that puts the diode into its operating mode also heats it. The resulting increase in temperature of the diode sensor, i.e., the systematic error due to self-heating, depends predominantly on the intensity of the current, and also on other factors. This paper presents measurements of the systematic error caused by self-heating from the forward-bias current, made on several diodes over a wide range of bias current intensity.
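
    A back-of-envelope model of the effect: the junction dissipates P = I·V, and the sensor settles ΔT = P·θ above its surroundings, where θ is the junction-to-ambient thermal resistance. A sketch with illustrative numbers (the 300 K/W thermal resistance is an assumption; real values depend on the package and mounting):

        # Rough self-heating estimate for a diode temperature sensor.
        forward_voltage = 0.6   # V, typical silicon diode forward drop (assumed)
        theta_ja = 300.0        # K/W, assumed junction-to-ambient thermal resistance

        for bias_current in (10e-6, 100e-6, 1e-3):        # A
            power = bias_current * forward_voltage         # W dissipated in junction
            delta_t = power * theta_ja                     # K, systematic error
            print(f"I = {bias_current * 1e6:7.1f} uA -> "
                  f"self-heating {delta_t * 1e3:7.2f} mK")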

  14. Hadronic Contribution to Muon g-2 with Systematic Error Correlations

    NASA Astrophysics Data System (ADS)

    Brown, D. H.; Worstell, W. A.

    1996-05-01

    We have performed a new evaluation of the hadronic contribution to a_μ = (g-2)/2 of the muon with explicit correlations of systematic errors among the experimental data on σ(e^+e^- → hadrons). Our result for the lowest order hadronic vacuum polarization contribution is a_μ^hvp = 701.7(7.6)(13.4) × 10^-10, where the total systematic error contributions from below and above √s = 1.4 GeV are (12.5) × 10^-10 and (4.8) × 10^-10 respectively. Therefore new measurements of σ(e^+e^- → hadrons) below 1.4 GeV in Novosibirsk, Russia can significantly reduce the total error on a_μ^hvp. This contrasts with a previous evaluation which indicated that the dominant error is due to the energy region above 1.4 GeV. The latter analysis correlated systematic errors at each energy point separately but not across energy ranges as we have done. Combination with higher order hadronic contributions is required for a new measurement of a_μ at Brookhaven National Laboratory to be sensitive to electroweak and possibly supergravity and muon substructure effects. Our analysis may also be applied to calculations of hadronic contributions to the running of α(s) at √s = M_Z, the hyperfine structure of muonium, and the running of sin^2 θ_W in Møller scattering. The analysis of the new Novosibirsk data will also be given.
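
    The headline numbers are internally consistent: the two regional systematic errors combine in quadrature to the quoted total, which is what full correlation within each region but no correlation between regions implies. A sketch of how an assumed inter-region correlation ρ would change the total (values of ρ other than zero are ours, for illustration):

        import numpy as np

        # Systematic errors below and above sqrt(s) = 1.4 GeV, in units of
        # 1e-10 (values from the abstract); rho is an assumed correlation.
        s = np.array([12.5, 4.8])

        for rho in (0.0, 0.5, 1.0):
            cov = np.array([[s[0] ** 2, rho * s[0] * s[1]],
                            [rho * s[0] * s[1], s[1] ** 2]])
            total = np.sqrt(np.ones(2) @ cov @ np.ones(2))
            print(f"rho = {rho:3.1f}: total systematic error = {total:.1f} x 1e-10")
        # rho = 0 reproduces the quoted 13.4 x 1e-10.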

  15. Validation of self-reported start year of mobile phone use in a Swedish case-control study on radiofrequency fields and acoustic neuroma risk.

    PubMed

    Pettersson, David; Bottai, Matteo; Mathiesen, Tiit; Prochazka, Michaela; Feychting, Maria

    2015-01-01

    The possible effect of radiofrequency exposure from mobile phones on tumor risk has been studied since the late 1990s. Yet, empirical information about recall of the start of mobile phone use among adult cases and controls has never been reported. Limited knowledge about recall errors hampers interpretations of the epidemiological evidence. We used network operator data to validate the self-reported start year of mobile phone use in a case-control study of mobile phone use and acoustic neuroma risk. The answers of 96 (29%) cases and 111 (22%) controls could be included in the validation. The larger proportion of cases reflects a more complete and detailed reporting of subscription history. Misclassification was substantial, with large random errors, small systematic errors, and no significant differences between cases and controls. The average difference between self-reported and operator start year was -0.62 (95% confidence interval: -1.42, 0.17) years for cases and -0.71 (-1.50, 0.07) years for controls, standard deviations were 3.92 and 4.17 years, respectively. Agreement between self-reported and operator-recorded data categorized into short, intermediate and long-term use was moderate (kappa statistic: 0.42). Should an association exist, dilution of risk estimates and distortion of exposure-response patterns for time since first mobile phone use could result from the large random errors in self-reported start year. Retrospective collection of operator data likely leads to a selection of "good reporters", with a higher proportion of cases. Thus, differential recall cannot be entirely excluded.

  16. Efficient Solar Scene Wavefront Estimation with Reduced Systematic and RMS Errors: Summary

    NASA Astrophysics Data System (ADS)

    Anugu, N.; Garcia, P.

    2016-04-01

    Wavefront sensing for solar telescopes is commonly implemented with Shack-Hartmann sensors. Correlation algorithms are usually used to estimate the extended-scene Shack-Hartmann sub-aperture image shifts or slopes. The image shift is computed by correlating a reference sub-aperture image with the target distorted sub-aperture image. The pixel position where the maximum correlation is located gives the image shift in integer pixel coordinates. Sub-pixel precision image shifts are computed by applying a peak-finding algorithm to the correlation peak (Poyneer 2003; Löfdahl 2010). However, the peak-finding results are usually biased towards the integer pixels; these errors are called systematic bias errors (Sjödahl 1994). They are caused by the low pixel sampling of the images, and their amplitude depends on the type of correlation algorithm and the type of peak-finding algorithm being used. To study the systematic errors in detail, solar sub-aperture synthetic images are constructed from a Swedish Solar Telescope solar granulation image. The performance of the cross-correlation algorithm in combination with different peak-finding algorithms is investigated. The studied peak-finding algorithms are: parabola (Poyneer 2003); quadratic polynomial (Löfdahl 2010); threshold center of gravity (Bailey 2003); Gaussian (Nobach & Honkanen 2005); and pyramid (Bailey 2003). The systematic error study reveals that the pyramid fit is the most robust to pixel-locking effects. The RMS error analysis reveals that the threshold center of gravity behaves better at low SNR, although the systematic errors in the measurement are large. No algorithm is best for both systematic and RMS error reduction. To overcome this problem, a new solution is proposed in which the image sampling is increased prior to the actual correlation matching. The method is realized in two steps to improve its computational efficiency. In the first step, the cross-correlation is implemented on the original image spatial resolution grid (1 pixel). In the second step, the cross-correlation is performed on a sub-pixel grid by limiting the field of search to 4 × 4 pixels centered at the initial position delivered by the first step. These sub-pixel-grid region-of-interest images are generated with bicubic interpolation. Correlation matching on a sub-pixel grid was previously reported in electronic speckle photography (Sjödahl 1994); the technique is applied here to solar wavefront sensing. A large dynamic range and better measurement accuracy are achieved by combining correlation matching on the original pixel grid over a large field of view with correlation matching on a sub-pixel interpolated image grid within a small field of view. The results reveal that the proposed method outperforms all the peak-finding algorithms studied in the first approach. It reduces both the systematic error and the RMS error by a factor of 5 (i.e., 75% systematic error reduction) when 5-times-improved image sampling is used, at the expense of twice the computational cost. With the 5-times-improved image sampling, the wavefront accuracy is increased by a factor of 5. The proposed solution is strongly recommended for wavefront sensing in solar telescopes, particularly for measuring the large dynamic image shifts involved in open-loop adaptive optics. Also, by choosing the increment of image sampling as a trade-off between computational speed and the aimed sub-pixel image shift accuracy, it can be employed in closed-loop adaptive optics. The study is extended to three other classes of sub-aperture images (a point source; a laser guide star; a Galactic Center extended scene). The results are planned to be submitted to the Optics Express journal.
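
    The two-step scheme is easy to prototype: find the integer-pixel shift on the native grid, then repeat the correlation on a bicubically up-sampled grid restricted to a small window around that first estimate. A sketch on synthetic data (a hypothetical implementation, not the authors' code; scipy's ndimage/signal routines stand in for the instrument pipeline):

        import numpy as np
        from scipy import ndimage, signal

        rng = np.random.default_rng(2)
        ref = ndimage.gaussian_filter(rng.normal(size=(64, 64)), 3)  # synthetic scene
        true_shift = (1.37, -0.62)
        img = ndimage.shift(ref, true_shift, order=3)                # distorted copy

        # Step 1: integer-pixel shift from the correlation peak on the native grid.
        corr = signal.correlate(img, ref, mode='same')
        peak = np.unravel_index(np.argmax(corr), corr.shape)
        int_shift = (peak[0] - ref.shape[0] // 2, peak[1] - ref.shape[1] // 2)

        # Step 2: refine on an up-sampled grid, searching only +-2 px around step 1.
        zoom = 5                                   # 5x finer sampling, as in the text
        ref_z = ndimage.zoom(ref, zoom, order=3)   # bicubic interpolation
        img_z = ndimage.zoom(img, zoom, order=3)
        best = (0, 0, -np.inf)
        for dy in range(-2 * zoom, 2 * zoom + 1):
            for dx in range(-2 * zoom, 2 * zoom + 1):
                shifted = np.roll(np.roll(img_z, -(int_shift[0] * zoom + dy), 0),
                                  -(int_shift[1] * zoom + dx), 1)
                score = np.sum(shifted * ref_z)
                if score > best[2]:
                    best = (dy, dx, score)

        est = (int_shift[0] + best[0] / zoom, int_shift[1] + best[1] / zoom)
        print("true:", true_shift, "estimated:", est)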

  17. [The continuous graphic presentation of interdepartmental results (author's transl)].

    PubMed

    Berger, J; Hirsch, H

    1976-10-01

    The results of interdepartmental determinations can be clearly presented in the laboratory in two ways: 1. Entry on a test card, according to Shewhart, as used in internal quality control. 2. Entry on a test card, constructed on the principle of the Cusum test. By the latter procedure, systematic errors are detected sooner than with the usual test card. A graphic variant of the Cusum test is described; a V-mask is not required; the card superficially resembles the usual control card, and the calculation time is minimal. This method may also be used to advantage in internal quality control.
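
    The tabular (V-mask-free) form of the Cusum test is a few lines of code: two cumulative sums accumulate deviations beyond a reference value k and signal when a decision interval h is crossed. A sketch comparing it with a simple Shewhart ±3 SD rule on simulated control results (target, SD, shift size and the usual k = 0.5 SD, h = 4 SD parameters are all illustrative assumptions):

        import numpy as np

        rng = np.random.default_rng(3)
        target, sd = 100.0, 2.0
        # 30 in-control results, then a persistent +1 SD systematic shift.
        x = np.concatenate([target + sd * rng.normal(size=30),
                            target + sd + sd * rng.normal(size=30)])

        k, h = 0.5 * sd, 4.0 * sd       # common CUSUM reference/decision values
        hi = lo = 0.0
        for i, xi in enumerate(x):
            hi = max(0.0, hi + (xi - target) - k)   # upper cumulative sum
            lo = min(0.0, lo + (xi - target) + k)   # lower cumulative sum
            if hi > h or lo < -h:
                print(f"CUSUM signals a systematic error at sample {i}")
                break

        shewhart = np.nonzero(np.abs(x - target) > 3 * sd)[0]
        print("first Shewhart 3 SD violation:",
              shewhart[0] if shewhart.size else "none")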

  18. Assessment and quantification of patient set-up errors in nasopharyngeal cancer patients and their biological and dosimetric impact in terms of generalized equivalent uniform dose (gEUD), tumour control probability (TCP) and normal tissue complication probability (NTCP).

    PubMed

    Boughalia, A; Marcie, S; Fellah, M; Chami, S; Mekki, F

    2015-06-01

    The aim of this study is to assess and quantify patients' set-up errors using an electronic portal imaging device and to evaluate their dosimetric and biological impact in terms of generalized equivalent uniform dose (gEUD) on predictive models, such as the tumour control probability (TCP) and the normal tissue complication probability (NTCP). Twenty patients treated for nasopharyngeal cancer were enrolled in the radiotherapy-oncology department of HCA. Systematic and random errors were quantified. The dosimetric and biological impact of these set-up errors on target volume and organ-at-risk (OAR) coverage was assessed using dose-volume histogram, gEUD, TCP and NTCP calculations. For this purpose, an in-house software package was developed and used. The standard deviations (1 SD) of the systematic and random set-up errors were calculated for the lateral and subclavicular fields, giving Σ = 0.63 ± 0.42 mm and σ = 3.75 ± 0.79 mm, respectively. Thus a planning organ at risk volume (PRV) margin of 3 mm was defined around the OARs, and a 5-mm margin was used around the clinical target volume. The gEUD, TCP and NTCP calculations obtained with and without set-up errors showed increased values for the tumour, with ΔgEUD(tumour) = 1.94% (p = 0.00721) and ΔTCP = 2.03%. The toxicity of the OARs was quantified using gEUD and NTCP. The values of ΔgEUD(OARs) vary from 0.78% to 5.95% for the brainstem and the optic chiasm, respectively. The corresponding ΔNTCP varies from 0.15% to 0.53%, respectively. The quantification of set-up errors has a dosimetric and biological impact on the tumour and on the OARs. The in-house software based on the gEUD, TCP and NTCP biological models was used successfully in this study; it can also be used to optimize the treatment plans established for our patients. The gEUD, TCP and NTCP may be more suitable tools for assessing treatment plans before treating patients.
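
    For context, a widely used recipe for turning these two statistics into a CTV-to-PTV margin is the van Herk formula, 2.5Σ + 0.7σ. The abstract does not state that this is the formula the authors applied, so the sketch below is illustrative only:

        # Margin estimate from the reported set-up error statistics. The van Herk
        # recipe is a common population-based margin formula; treating it as the
        # basis of the 5-mm margin above is our assumption, not the paper's claim.
        systematic = 0.63   # mm, Sigma (reported)
        random_err = 3.75   # mm, sigma (reported)

        ctv_ptv_margin = 2.5 * systematic + 0.7 * random_err
        print(f"van Herk CTV-PTV margin: {ctv_ptv_margin:.1f} mm")   # ~4.2 mm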

  19. Measurement error and timing of predictor values for multivariable risk prediction models are poorly reported.

    PubMed

    Whittle, Rebecca; Peat, George; Belcher, John; Collins, Gary S; Riley, Richard D

    2018-05-18

    Measurement error in predictor variables may threaten the validity of clinical prediction models. We sought to evaluate the possible extent of the problem. A secondary objective was to examine whether predictors are measured at the intended moment of model use. A systematic search of Medline was used to identify a sample of articles reporting the development of a clinical prediction model published in 2015. After screening according to predefined inclusion criteria, information on predictors, strategies to control for measurement error and intended moment of model use was extracted. Susceptibility to measurement error for each predictor was classified as low or high risk. Thirty-three studies were reviewed, including 151 different predictors in the final prediction models. Fifty-one (33.7%) predictors were categorised as at high risk of error; however, this was not accounted for in model development. Only 8 (24.2%) studies explicitly stated the intended moment of model use and when the predictors were measured. Reporting of measurement error and intended moment of model use is poor in prediction model studies. There is a need to identify circumstances where ignoring measurement error in prediction models is consequential and whether accounting for the error will improve the predictions.

  20. Exploring Measurement Error with Cookies: A Real and Virtual Approach via Interactive Excel

    ERIC Educational Resources Information Center

    Sinex, Scott A; Gage, Barbara A.; Beck, Peggy J.

    2007-01-01

    A simple, guided-inquiry investigation using stacked sandwich cookies is employed to develop a simple linear mathematical model and to explore measurement error by incorporating errors as part of the investigation. Both random and systematic errors are presented. The model and errors are then investigated further by engaging with an interactive…
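
    The kind of model the activity builds is a one-line linear fit, and it illustrates a useful general point: with stacked objects, a constant systematic error (say, a mis-zeroed ruler) lands in the intercept while the slope still recovers the per-item thickness. A sketch with invented numbers:

        import numpy as np

        rng = np.random.default_rng(4)
        n = np.arange(1, 11)        # cookies in the stack
        thickness = 11.0            # mm per cookie (invented "true" value)
        systematic = 2.0            # mm, e.g. a mis-zeroed ruler (constant bias)
        random_sd = 0.5             # mm, reading/placement scatter

        height = thickness * n + systematic + random_sd * rng.normal(size=n.size)

        slope, intercept = np.polyfit(n, height, 1)
        # The systematic error shows up in the intercept, not the slope.
        print(f"slope = {slope:.2f} mm/cookie, intercept = {intercept:.2f} mm")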

  1. Uncertainty Analysis of Seebeck Coefficient and Electrical Resistivity Characterization

    NASA Technical Reports Server (NTRS)

    Mackey, Jon; Sehirlioglu, Alp; Dynys, Fred

    2014-01-01

    In order to provide a complete description of a material's thermoelectric power factor, an uncertainty interval is required in addition to the measured nominal value. The uncertainty may contain sources of measurement error including systematic bias error and precision error of a statistical nature. The work focuses specifically on the popular ZEM-3 (Ulvac Technologies) measurement system, but the methods apply to any measurement system. The analysis accounts for sources of systematic error including sample preparation tolerance, measurement probe placement, thermocouple cold-finger effect, and measurement parameters, in addition to uncertainty of a statistical nature. Complete uncertainty analysis of a measurement system allows for more reliable comparison of measurement data between laboratories.

  2. Prevalence and reporting of recruitment, randomisation and treatment errors in clinical trials: A systematic review.

    PubMed

    Yelland, Lisa N; Kahan, Brennan C; Dent, Elsa; Lee, Katherine J; Voysey, Merryn; Forbes, Andrew B; Cook, Jonathan A

    2018-06-01

    Background/aims In clinical trials, it is not unusual for errors to occur during the process of recruiting, randomising and providing treatment to participants. For example, an ineligible participant may inadvertently be randomised, a participant may be randomised in the incorrect stratum, a participant may be randomised multiple times when only a single randomisation is permitted, or the incorrect treatment may inadvertently be issued to a participant at randomisation. Such errors have the potential to introduce bias into treatment effect estimates and affect the validity of the trial, yet there is little motivation for researchers to report these errors and it is unclear how often they occur. The aim of this study is to assess the prevalence of recruitment, randomisation and treatment errors and review current approaches for reporting these errors in trials published in leading medical journals. Methods We conducted a systematic review of individually randomised, phase III, randomised controlled trials published in the New England Journal of Medicine, Lancet, Journal of the American Medical Association, Annals of Internal Medicine and British Medical Journal from January to March 2015. The number and type of recruitment, randomisation and treatment errors that were reported, and how they were handled, were recorded. The corresponding authors were contacted for a random sample of trials included in the review and asked to provide details on unreported errors that occurred during their trial. Results We identified 241 potentially eligible articles, of which 82 met the inclusion criteria and were included in the review. These trials involved a median of 24 centres and 650 participants, and 87% involved two treatment arms. Recruitment, randomisation or treatment errors were reported in 32 of the 82 trials (39%), with a median of eight errors per trial. The most commonly reported error was ineligible participants inadvertently being randomised. No mention of recruitment, randomisation or treatment errors was found in the remaining 50 of the 82 trials (61%). Based on responses from 9 of the 15 corresponding authors who were contacted regarding recruitment, randomisation and treatment errors, between 1% and 100% of the errors that occurred in their trials were reported in the trial publications. Conclusion Recruitment, randomisation and treatment errors are common in individually randomised, phase III trials published in leading medical journals, but reporting practices are inadequate and reporting standards are needed. We recommend that researchers report all such errors that occurred during the trial and describe how they were handled in trial publications to improve transparency in the reporting of clinical trials.

  3. The Crucial Role of Error Correlation for Uncertainty Modeling of CFD-Based Aerodynamics Increments

    NASA Technical Reports Server (NTRS)

    Hemsch, Michael J.; Walker, Eric L.

    2011-01-01

    The Ares I ascent aerodynamics database for Design Cycle 3 (DAC-3) was built from wind-tunnel test results and CFD solutions. The wind-tunnel results were used to build the baseline response surfaces for wind-tunnel Reynolds numbers at power-off conditions. The CFD solutions were used to build increments to account for Reynolds number effects. We calculate the validation errors for the primary CFD code results at wind-tunnel Reynolds number power-off conditions and would like to be able to use those errors to predict the validation errors for the CFD increments. However, the validation errors are large compared to the increments. We suggest a way forward that is consistent with common practice in wind-tunnel testing, which is to assume that systematic errors in the measurement process and/or the environment will subtract out when increments are calculated, thus making increments more reliable, with smaller uncertainty, than absolute values of the aerodynamic coefficients. A similar practice has arisen for the use of CFD to generate aerodynamic database increments. The basis of this practice is the assumption of strong correlation of the systematic errors inherent in each of the results used to generate an increment. The assumption of strong correlation is the inferential link between the observed validation uncertainties at wind-tunnel Reynolds numbers and the uncertainties to be predicted for flight. In this paper, we suggest a way to estimate the correlation coefficient and demonstrate the approach using code-to-code differences that were obtained for quality control purposes during the Ares I CFD campaign. Finally, since we can expect the increments to be relatively small compared to the baseline response surface and to be typically of the order of the baseline uncertainty, we find that it is necessary to be able to show that the correlation coefficients are close to unity to avoid overinflating the overall database uncertainty with the addition of the increments.
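
    The arithmetic behind the "close to unity" requirement: for an increment Δ = A − B, the uncertainty is u_Δ² = u_A² + u_B² − 2ρ·u_A·u_B, so the benefit of differencing disappears unless ρ is near 1. A sketch with illustrative uncertainty values (not the Ares I numbers):

        import math

        # Uncertainty of an increment delta = A - B when the systematic errors
        # of A and B are correlated with coefficient rho.
        u_a = u_b = 1.0   # assumed validation uncertainty of each absolute result

        for rho in (0.0, 0.9, 0.99, 1.0):
            u_delta = math.sqrt(u_a ** 2 + u_b ** 2 - 2.0 * rho * u_a * u_b)
            print(f"rho = {rho:4.2f}: increment uncertainty = {u_delta:.2f}")
        # rho = 0 gives sqrt(2); only rho -> 1 makes the increment reliable.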

  4. Training of Working Memory Impacts Neural Processing of Vocal Pitch Regulation

    PubMed Central

    Li, Weifeng; Guo, Zhiqiang; Jones, Jeffery A.; Huang, Xiyan; Chen, Xi; Liu, Peng; Chen, Shaozhen; Liu, Hanjun

    2015-01-01

    Working memory training can improve the performance of tasks that were not trained. Whether auditory-motor integration for voice control can benefit from working memory training, however, remains unclear. The present event-related potential (ERP) study examined the impact of working memory training on the auditory-motor processing of vocal pitch. Trained participants underwent adaptive working memory training using a digit span backwards paradigm, while control participants did not receive any training. Before and after training, both trained and control participants were exposed to frequency-altered auditory feedback while producing vocalizations. After training, trained participants exhibited significantly decreased N1 amplitudes and increased P2 amplitudes in response to pitch errors in voice auditory feedback. In addition, there was a significant positive correlation between the degree of improvement in working memory capacity and the post-pre difference in P2 amplitudes. Training-related changes in the vocal compensation, however, were not observed. There was no systematic change in either vocal or cortical responses for control participants. These findings provide evidence that working memory training impacts the cortical processing of feedback errors in vocal pitch regulation. This enhanced cortical processing may be the result of increased neural efficiency in the detection of pitch errors between the intended and actual feedback. PMID:26553373

  5. Optimising UAV topographic surveys processed with structure-from-motion: Ground control quality, quantity and bundle adjustment

    NASA Astrophysics Data System (ADS)

    James, Mike R.; Robson, Stuart; d'Oleire-Oltmanns, Sebastian; Niethammer, Uwe

    2016-04-01

    Structure-from-motion (SfM) algorithms are greatly facilitating the production of detailed topographic models based on images collected by unmanned aerial vehicles (UAVs). However, SfM-based software does not generally provide the rigorous photogrammetric analysis required to fully understand survey quality. Consequently, error related to problems in control point data or the distribution of control points can remain undiscovered. Even if these errors are not large in magnitude, they can be systematic, and thus have strong implications for the use of products such as digital elevation models (DEMs) and orthophotos. Here, we develop a Monte Carlo approach to (1) improve the accuracy of products when SfM-based processing is used and (2) reduce the associated field effort by identifying suitable lower-density deployments of ground control points. The method highlights over-parameterisation during camera self-calibration and provides enhanced insight into control point performance when rigorous error metrics are not available. Processing was implemented using commonly used SfM-based software (Agisoft PhotoScan), which we augment with semi-automated and automated GCP image measurement. We apply the Monte Carlo method to two contrasting case studies - an erosion gully survey (Taurodont, Morocco) carried out with a fixed-wing UAV, and an active landslide survey (Super-Sauze, France) acquired using a manually controlled quadcopter. The results highlight the differences in the control requirements for the two sites, and we explore the implications for future surveys. We illustrate DEM sensitivity to critical processing parameters and show how the use of appropriate parameter values increases DEM repeatability and reduces the spatial variability of error due to processing artefacts.

  6. Characterizing the impact of model error in hydrologic time series recovery inverse problems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hansen, Scott K.; He, Jiachuan; Vesselinov, Velimir V.

    Hydrologic models are commonly over-smoothed relative to reality, owing to computational limitations and to the difficulty of obtaining accurate high-resolution information. When used in an inversion context, such models may introduce systematic biases which cannot be encapsulated by an unbiased “observation noise” term of the type assumed by standard regularization theory and typical Bayesian formulations. Despite its importance, model error is difficult to encapsulate systematically and is often neglected. In this paper, model error is considered for an important class of inverse problems that includes interpretation of hydraulic transients and contaminant source history inference: reconstruction of a time series that has been convolved against a transfer function (i.e., impulse response) that is only approximately known. Using established harmonic theory along with two results established here regarding triangular Toeplitz matrices, upper and lower error bounds are derived for the effect of systematic model error on time series recovery for both well-determined and over-determined inverse problems. It is seen that use of additional measurement locations does not improve expected performance in the face of model error. A Monte Carlo study of a realistic hydraulic reconstruction problem is presented, and the lower error bound is seen informative about expected behavior. Finally, a possible diagnostic criterion for blind transfer function characterization is also uncovered.

  7. Characterizing the impact of model error in hydrologic time series recovery inverse problems

    DOE PAGES

    Hansen, Scott K.; He, Jiachuan; Vesselinov, Velimir V.

    2017-10-28

    Hydrologic models are commonly over-smoothed relative to reality, owing to computational limitations and to the difficulty of obtaining accurate high-resolution information. When used in an inversion context, such models may introduce systematic biases which cannot be encapsulated by an unbiased “observation noise” term of the type assumed by standard regularization theory and typical Bayesian formulations. Despite its importance, model error is difficult to encapsulate systematically and is often neglected. In this paper, model error is considered for an important class of inverse problems that includes interpretation of hydraulic transients and contaminant source history inference: reconstruction of a time series that has been convolved against a transfer function (i.e., impulse response) that is only approximately known. Using established harmonic theory along with two results established here regarding triangular Toeplitz matrices, upper and lower error bounds are derived for the effect of systematic model error on time series recovery for both well-determined and over-determined inverse problems. It is seen that use of additional measurement locations does not improve expected performance in the face of model error. A Monte Carlo study of a realistic hydraulic reconstruction problem is presented, and the lower error bound is seen informative about expected behavior. Finally, a possible diagnostic criterion for blind transfer function characterization is also uncovered.
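
    The setting is easy to reproduce in miniature: causal convolution by an impulse response is multiplication by a lower-triangular Toeplitz matrix, and recovery with an approximate response leaves a systematic error that no amount of extra data removes. A sketch (the exponential responses, mismatch and regularisation are all invented; this is not the paper's Monte Carlo study):

        import numpy as np

        rng = np.random.default_rng(5)
        n = 100
        t = np.arange(n)

        def toeplitz_conv(h):
            """Lower-triangular Toeplitz matrix for causal convolution by h."""
            T = np.zeros((n, n))
            for i in range(n):
                T[i, : i + 1] = h[: i + 1][::-1]
            return T

        true_h = np.exp(-t / 10.0); true_h /= true_h.sum()      # true response
        model_h = np.exp(-t / 12.0); model_h /= model_h.sum()   # approximate model

        source = np.exp(-0.5 * ((t - 40.0) / 6.0) ** 2)         # input time series
        obs = toeplitz_conv(true_h) @ source + 0.001 * rng.normal(size=n)

        # Tikhonov-regularised recovery using the *approximate* transfer function.
        T = toeplitz_conv(model_h)
        lam = 1e-3
        recovered = np.linalg.solve(T.T @ T + lam * np.eye(n), T.T @ obs)

        err = np.linalg.norm(recovered - source) / np.linalg.norm(source)
        print(f"relative recovery error with model mismatch: {err:.2%}")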

  8. Systematic Biases in Parameter Estimation of Binary Black-Hole Mergers

    NASA Technical Reports Server (NTRS)

    Littenberg, Tyson B.; Baker, John G.; Buonanno, Alessandra; Kelly, Bernard J.

    2012-01-01

    Parameter estimation of binary-black-hole merger events in gravitational-wave data relies on matched filtering techniques, which, in turn, depend on accurate model waveforms. Here we characterize the systematic biases introduced in measuring astrophysical parameters of binary black holes by applying the currently most accurate effective-one-body templates to simulated data containing non-spinning numerical-relativity waveforms. For advanced ground-based detectors, we find that the systematic biases are well within the statistical error for realistic signal-to-noise ratios (SNR). These biases grow to be comparable to the statistical errors at high signal-to-noise ratios for ground-based instruments (SNR approximately 50) but never dominate the error budget. At the much larger signal-to-noise ratios expected for space-based detectors, these biases will become large compared to the statistical errors but are small enough (at most a few percent in the black-hole masses) that we expect they should not affect broad astrophysical conclusions that may be drawn from the data.

  9. Functional integration of vertical flight path and speed control using energy principles

    NASA Technical Reports Server (NTRS)

    Lambregts, A. A.

    1984-01-01

    A generalized automatic flight control system was developed which integrates all longitudinal flight path and speed control functions previously provided by a pitch autopilot and autothrottle. In this design, a net thrust command is computed based on total energy demand arising from both flight path and speed targets. The elevator command is computed based on the energy distribution error between flight path and speed. The engine control is configured to produce the commanded net thrust. The design incorporates control strategies and hierarchy to deal systematically and effectively with all aircraft operational requirements, control nonlinearities, and performance limits. Consistent decoupled maneuver control is achieved for all modes and flight conditions without outer loop gain schedules, control law submodes, or control function duplication.
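
    The core idea is separable in a few lines: thrust commands follow the total specific energy rate demand (path plus speed), while the elevator commands follow the energy distribution error (path minus speed). A schematic sketch of that split; the gains, signs and normalisation are illustrative assumptions, not the certified design:

        # Total-energy control sketch: thrust handles total energy rate,
        # the elevator handles how energy is split between path and speed.
        G = 9.81

        def energy_commands(gamma_err, accel_err, k_thrust=0.5, k_elev=0.3):
            """gamma_err: flight-path-angle error (rad);
            accel_err: longitudinal acceleration error normalised by g."""
            energy_rate_err = gamma_err + accel_err    # total energy demand
            distribution_err = gamma_err - accel_err   # energy balance error
            thrust_cmd = k_thrust * energy_rate_err    # -> net thrust change
            elevator_cmd = k_elev * distribution_err   # -> pitch change
            return thrust_cmd, elevator_cmd

        # Example: 0.01 rad below the path and 0.2 m/s^2 short on acceleration.
        print(energy_commands(0.01, 0.2 / G))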

  10. USGS Blind Sample Project: monitoring and evaluating laboratory analytical quality

    USGS Publications Warehouse

    Ludtke, Amy S.; Woodworth, Mark T.

    1997-01-01

    The U.S. Geological Survey (USGS) collects and disseminates information about the Nation's water resources. Surface- and ground-water samples are collected and sent to USGS laboratories for chemical analyses. The laboratories identify and quantify the constituents in the water samples. Random and systematic errors occur during sample handling, chemical analysis, and data processing. Although all errors cannot be eliminated from measurements, the magnitude of their uncertainty can be estimated and tracked over time. Since 1981, the USGS has operated an independent, external, quality-assurance project called the Blind Sample Project (BSP). The purpose of the BSP is to monitor and evaluate the quality of laboratory analytical results through the use of double-blind quality-control (QC) samples. The information provided by the BSP assists the laboratories in detecting and correcting problems in the analytical procedures. The information also can aid laboratory users in estimating the extent that laboratory errors contribute to the overall errors in their environmental data.

  11. Error model of geomagnetic-field measurement and extended Kalman-filter based compensation method

    PubMed Central

    Ge, Zhilei; Liu, Suyun; Li, Guopeng; Huang, Yan; Wang, Yanni

    2017-01-01

    The real-time accurate measurement of the geomagnetic field is the foundation for achieving high-precision geomagnetic navigation. The existing geomagnetic-field measurement models are essentially simplified models that cannot accurately describe the sources of measurement error. On the basis of a systematic analysis of the sources of geomagnetic-field measurement error, this paper builds a complete measurement model that incorporates the previously unconsidered geomagnetic daily-variation field. The paper proposes an extended Kalman-filter based compensation method, which allows a large amount of measurement data to be used in estimating parameters to obtain the optimal solution in the statistical sense. The experimental results showed that the compensated strength of the geomagnetic field remained close to the real value and the measurement error was essentially kept within 5 nT. In addition, this compensation method has strong applicability due to its easy data collection and its removal of the dependence on a high-precision measurement instrument. PMID:28445508
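
    The filtering mechanics can be shown on the simplest error term. The sketch below estimates only a constant sensor bias from y_k = B_ref + b + v_k with a known reference field; the paper's actual model is nonlinear (scale factor, misalignment, daily variation) and uses an extended Kalman filter, so this linear special case is purely illustrative:

        import numpy as np

        rng = np.random.default_rng(6)

        B_ref = np.array([20000.0, 0.0, 45000.0])   # nT, assumed known reference
        true_bias = np.array([120.0, -80.0, 40.0])  # nT, unknown to the filter
        R = 25.0 * np.eye(3)                        # measurement noise cov, (5 nT)^2

        b = np.zeros(3)                             # state: bias estimate
        P = 1.0e4 * np.eye(3)                       # state covariance

        for _ in range(200):
            y = B_ref + true_bias + 5.0 * rng.normal(size=3)   # simulated sample
            innov = y - (B_ref + b)                 # measurement model: H = I
            S = P + R                               # innovation covariance
            K = P @ np.linalg.inv(S)                # Kalman gain
            b = b + K @ innov
            P = (np.eye(3) - K) @ P

        print("estimated bias:", np.round(b, 1), "nT; true:", true_bias)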

  12. System calibration method for Fourier ptychographic microscopy.

    PubMed

    Pan, An; Zhang, Yan; Zhao, Tianyu; Wang, Zhaojun; Dan, Dan; Lei, Ming; Yao, Baoli

    2017-09-01

    Fourier ptychographic microscopy (FPM) is a recently proposed computational imaging technique with both high resolution and a wide field of view. In current FPM imaging platforms, systematic error sources include aberrations, light-emitting diode (LED) intensity fluctuation, parameter imperfections, and noise, all of which may severely corrupt the reconstruction results with similar artifacts. It is therefore unlikely that the dominant error could be identified from these degraded reconstructions without any prior knowledge. In addition, in real situations the systematic error is generally a mixture of several error sources, which cannot be separated because they mutually constrain and convert into one another. To this end, we report a system calibration procedure, termed SC-FPM, that calibrates the mixed systematic errors simultaneously from an overall perspective. It is based on the simulated annealing algorithm, an LED intensity correction method, a nonlinear regression process, and an adaptive step-size strategy, and involves the evaluation of an error metric at each iteration step, followed by the re-estimation of accurate parameters. The performance achieved both in simulations and in experiments demonstrates that the proposed method outperforms other state-of-the-art algorithms. The reported system calibration scheme improves the robustness of FPM, relaxes the experimental conditions, and does not require any prior knowledge, which makes FPM more pragmatic.

  13. Analyzing False Positives of Four Questions in the Force Concept Inventory

    ERIC Educational Resources Information Center

    Yasuda, Jun-ichro; Mae, Naohiro; Hull, Michael M.; Taniguchi, Masa-aki

    2018-01-01

    In this study, we analyze the systematic error from false positives of the Force Concept Inventory (FCI). We compare the systematic errors of question 6 (Q.6), Q.7, and Q.16, for which clearly erroneous reasoning has been found, with Q.5, for which clearly erroneous reasoning has not been found. We determine whether or not a correct response to a…

  14. Adaptive attitude control and momentum management for large-angle spacecraft maneuvers

    NASA Technical Reports Server (NTRS)

    Parlos, Alexander G.; Sunkel, John W.

    1992-01-01

    The fully coupled equations of motion are systematically linearized around an equilibrium point of a gravity gradient stabilized spacecraft, controlled by momentum exchange devices. These equations are then used for attitude control system design of an early Space Station Freedom flight configuration, demonstrating the errors caused by the improper approximation of the spacecraft dynamics. A full state feedback controller, incorporating gain-scheduled adaptation of the attitude gains, is developed for use during spacecraft on-orbit assembly or operations characterized by significant mass properties variations. The feasibility of the gain adaptation is demonstrated via a Space Station Freedom assembly sequence case study. The attitude controller stability robustness and transient performance during gain adaptation appear satisfactory.

  15. The Accuracy of GBM GRB Localizations

    NASA Astrophysics Data System (ADS)

    Briggs, Michael Stephen; Connaughton, V.; Meegan, C.; Hurley, K.

    2010-03-01

    We report a study of the accuracy of GBM GRB localizations, analyzing three types of localizations: those produced automatically by the GBM Flight Software on board GBM, those produced automatically with ground software in near real time, and localizations produced with human guidance. The two types of automatic locations are distributed in near real time via GCN Notices; the human-guided locations are distributed on a timescale of many minutes or hours using GCN Circulars. This work uses a Bayesian analysis that models the distribution of the GBM total location error by comparing GBM locations to more accurate locations obtained with other instruments. Reference locations are obtained from Swift, Super-AGILE, the LAT, and the IPN. We model the GBM total location errors as having systematic errors in addition to the statistical errors, and we use the Bayesian analysis to constrain the systematic errors.

  16. A Learning-Based Wrapper Method to Correct Systematic Errors in Automatic Image Segmentation: Consistently Improved Performance in Hippocampus, Cortex and Brain Segmentation

    PubMed Central

    Wang, Hongzhi; Das, Sandhitsu R.; Suh, Jung Wook; Altinay, Murat; Pluta, John; Craige, Caryne; Avants, Brian; Yushkevich, Paul A.

    2011-01-01

    We propose a simple but generally applicable approach to improving the accuracy of automatic image segmentation algorithms relative to manual segmentations. The approach is based on the hypothesis that a large fraction of the errors produced by automatic segmentation are systematic, i.e., occur consistently from subject to subject, and serves as a wrapper method around a given host segmentation method. The wrapper method attempts to learn the intensity, spatial and contextual patterns associated with systematic segmentation errors produced by the host method on training data for which manual segmentations are available. The method then attempts to correct such errors in segmentations produced by the host method on new images. One practical use of the proposed wrapper method is to adapt existing segmentation tools, without explicit modification, to imaging data and segmentation protocols that are different from those on which the tools were trained and tuned. An open-source implementation of the proposed wrapper method is provided, and can be applied to a wide range of image segmentation problems. The wrapper method is evaluated with four host brain MRI segmentation methods: hippocampus segmentation using FreeSurfer (Fischl et al., 2002); hippocampus segmentation using multi-atlas label fusion (Artaechevarria et al., 2009); brain extraction using BET (Smith, 2002); and brain tissue segmentation using FAST (Zhang et al., 2001). The wrapper method generates 72%, 14%, 29% and 21% fewer erroneously segmented voxels than the respective host segmentation methods. In the hippocampus segmentation experiment with multi-atlas label fusion as the host method, the average Dice overlap between reference segmentations and segmentations produced by the wrapper method is 0.908 for normal controls and 0.893 for patients with mild cognitive impairment. Average Dice overlaps of 0.964, 0.905 and 0.951 are obtained for brain extraction, white matter segmentation and gray matter segmentation, respectively. PMID:21237273

  17. Frequency of data extraction errors and methods to increase data extraction quality: a methodological review.

    PubMed

    Mathes, Tim; Klaßen, Pauline; Pieper, Dawid

    2017-11-28

    Our objective was to assess the frequency of data extraction errors and their potential impact on results in systematic reviews. Furthermore, we evaluated the effect of different extraction methods, reviewer characteristics and reviewer training on error rates and results. We performed a systematic review of methodological literature in PubMed, the Cochrane methodological registry, and by manual searches (12/2016). Studies were selected by two reviewers independently. Data were extracted into standardized tables by one reviewer and verified by a second. The analysis included six studies: four on extraction error frequency, one comparing different reviewer extraction methods and two comparing different reviewer characteristics. We did not find a study on reviewer training. There was a high rate of extraction errors (up to 50%). Errors often had an influence on effect estimates. Different data extraction methods and reviewer characteristics had a moderate effect on extraction error rates and effect estimates. The evidence base for established standards of data extraction seems weak despite the high prevalence of extraction errors. More comparative studies are needed to gain deeper insight into the influence of different extraction methods.

  18. Procedures for dealing with certain types of noise and systematic errors common to many Hadamard transform optical systems

    NASA Technical Reports Server (NTRS)

    Harwit, M.

    1977-01-01

    Sources of noise and error-correcting procedures characteristic of Hadamard transform optical systems were investigated. Reduction of spectral noise due to noise spikes in the data, the effect of random errors, the relative performance of Fourier and Hadamard transform spectrometers operated under identical detector-noise-limited conditions, and systematic means of dealing with mask defects are among the topics discussed. The distortion in Hadamard transform optical instruments caused by moving masks, incorrect mask alignment, missing measurements, and diffraction is analyzed, and techniques for reducing or eliminating this distortion are described.

  19. Systematic reviews, systematic error and the acquisition of clinical knowledge

    PubMed Central

    2010-01-01

    Background Since its inception, evidence-based medicine and its application through systematic reviews, has been widely accepted. However, it has also been strongly criticised and resisted by some academic groups and clinicians. One of the main criticisms of evidence-based medicine is that it appears to claim to have unique access to absolute scientific truth and thus devalues and replaces other types of knowledge sources. Discussion The various types of clinical knowledge sources are categorised on the basis of Kant's categories of knowledge acquisition, as being either 'analytic' or 'synthetic'. It is shown that these categories do not act in opposition but rather, depend upon each other. The unity of analysis and synthesis in knowledge acquisition is demonstrated during the process of systematic reviewing of clinical trials. Systematic reviews constitute comprehensive synthesis of clinical knowledge but depend upon plausible, analytical hypothesis development for the trials reviewed. The dangers of systematic error regarding the internal validity of acquired knowledge are highlighted on the basis of empirical evidence. It has been shown that the systematic review process reduces systematic error, thus ensuring high internal validity. It is argued that this process does not exclude other types of knowledge sources. Instead, amongst these other types it functions as an integrated element during the acquisition of clinical knowledge. Conclusions The acquisition of clinical knowledge is based on interaction between analysis and synthesis. Systematic reviews provide the highest form of synthetic knowledge acquisition in terms of achieving internal validity of results. In that capacity it informs the analytic knowledge of the clinician but does not replace it. PMID:20537172

  20. Methods, analysis, and the treatment of systematic errors for the electron electric dipole moment search in thorium monoxide

    NASA Astrophysics Data System (ADS)

    Baron, J.; Campbell, W. C.; DeMille, D.; Doyle, J. M.; Gabrielse, G.; Gurevich, Y. V.; Hess, P. W.; Hutzler, N. R.; Kirilov, E.; Kozyryev, I.; O'Leary, B. R.; Panda, C. D.; Parsons, M. F.; Spaun, B.; Vutha, A. C.; West, A. D.; West, E. P.; ACME Collaboration

    2017-07-01

    We recently set a new limit on the electric dipole moment of the electron (eEDM) (J Baron et al and ACME collaboration 2014 Science 343 269-272), which represented an order-of-magnitude improvement on the previous limit and placed more stringent constraints on many charge-parity-violating extensions to the standard model. In this paper we discuss the measurement in detail. The experimental method and associated apparatus are described, together with the techniques used to isolate the eEDM signal. In particular, we detail the way experimental switches were used to suppress effects that can mimic the signal of interest. The methods used to search for systematic errors, and models explaining observed systematic errors, are also described. We briefly discuss possible improvements to the experiment.

  1. Can a combination of average of normals and "real time" External Quality Assurance replace Internal Quality Control?

    PubMed

    Badrick, Tony; Graham, Peter

    2018-03-28

    Internal Quality Control and External Quality Assurance are separate but related processes that have developed independently in laboratory medicine over many years. They have different sample frequencies, statistical interpretations and immediacy. Both processes have evolved, absorbing new understandings of the concept of laboratory error, sample material matrix and assay capability. However, we do not believe that, at the coalface, either process has led to much improvement in patient outcomes recently. It is the increasing reliability and automation of analytical platforms, along with the improved stability of reagents, that has reduced systematic and random error, which in turn has reduced the risk of running IQC less frequently. We suggest that it is time to rethink the role of both these processes and unite them into a single approach using an Average of Normals model supported by more frequent External Quality Assurance samples. This new paradigm may lead to less confusion for laboratory staff and quicker responses to, and identification of, out-of-control situations.
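
    A minimal sketch of the Average of Normals idea: track the rolling mean of patient results that fall inside the reference interval and flag it against control limits, so a persistent analytical shift is detected without dedicated QC material. All parameters here (analyte, interval, window, limits, shift size) are invented for illustration:

        import numpy as np

        rng = np.random.default_rng(7)

        ref_low, ref_high = 135.0, 145.0   # e.g. sodium reference interval, mmol/L
        target, sd = 140.0, 2.0            # assumed population mean and SD
        window = 20                        # patient results per AoN average
        results = rng.normal(target, sd, 500)
        results[300:] += 2.0               # systematic analytical shift from #300

        normals = results[(results >= ref_low) & (results <= ref_high)]
        limit = 3 * sd / np.sqrt(window)   # ~3-sigma limits for the window mean

        means = np.convolve(normals, np.ones(window) / window, mode='valid')
        flags = np.nonzero(np.abs(means - target) > limit)[0]
        print("first AoN flag at normal-result index:",
              flags[0] if flags.size else "none")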

  2. LANDSAT-4 MSS Geometric Correction: Methods and Results

    NASA Technical Reports Server (NTRS)

    Brooks, J.; Kimmer, E.; Su, J.

    1984-01-01

    An automated image registration system such as that developed for LANDSAT-4 can produce all of the information needed to verify and calibrate the software and to evaluate system performance. The on-line MSS archive generation process, which upgrades systematic correction data to geodetic correction data, is described, as well as the control point library build subsystem, which generates control point chips and support data for on-line upgrade of correction data. System performance was evaluated for both temporal and geodetic registration. For temporal registration, 90% errors were computed to be .36 IFOV (instantaneous field of view = 82.7 meters) cross track and .29 IFOV along track. Also, for the actual production runs monitored, the 90% errors were .29 IFOV cross track and .25 IFOV along track. The system specification is .3 IFOV, 90% of the time, both cross and along track. For geodetic registration performance, the model bias was measured by designating control points in the geodetically corrected imagery.

  3. Phase-demodulation error of a fiber-optic Fabry-Perot sensor with complex reflection coefficients.

    PubMed

    Kilpatrick, J M; MacPherson, W N; Barton, J S; Jones, J D

    2000-03-20

    The influence of reflector losses attracts little discussion in standard treatments of the Fabry-Perot interferometer yet may be an important factor contributing to errors in phase-stepped demodulation of fiber optic Fabry-Perot (FFP) sensors. We describe a general transfer function for FFP sensors with complex reflection coefficients and estimate systematic phase errors that arise when the asymmetry of the reflected fringe system is neglected, as is common in the literature. The measured asymmetric response of higher-finesse metal-dielectric FFP constructions corroborates a model that predicts systematic phase errors of 0.06 rad in three-step demodulation of a low-finesse FFP sensor (R = 0.05) with internal reflector losses of 25%.

  4. Systematic errors in regional climate model RegCM over Europe and sensitivity to variations in PBL parameterizations

    NASA Astrophysics Data System (ADS)

    Güttler, I.

    2012-04-01

    Systematic errors in near-surface temperature (T2m), total cloud cover (CLD), shortwave albedo (ALB) and surface net longwave (SNL) and shortwave (SNS) energy fluxes are detected in simulations of RegCM at 50 km resolution over the European CORDEX domain when forced with ERA-Interim reanalysis. Simulated T2m is compared to CRU 3.0, and the other variables to the GEWEX-SRB 3.0 dataset. Most of the systematic errors found in SNL and SNS are consistent with the errors in T2m, CLD and ALB: they include prevailing negative errors in T2m and positive errors in CLD present during most of the year. The errors in T2m and CLD can be associated with the overestimation of SNL and SNS in most simulations. The impact of albedo errors is primarily confined to north Africa, where, e.g., the underestimation of albedo in JJA is consistent with the associated surface heating and positive SNS and T2m errors. Sensitivity to the choice of the PBL scheme and to various parameters in the PBL schemes is examined from an ensemble of 20 simulations. The recently implemented prognostic PBL scheme performs over Europe with mixed success when compared to the standard diagnostic scheme, with a general increase of errors in T2m and CLD over the whole domain. Nevertheless, improvements in T2m can be found in, e.g., north-eastern Europe during DJF and western Europe during JJA, where substantial warm biases existed in simulations with the diagnostic scheme. The most detectable impact, in terms of the JJA T2m errors over western Europe, comes from the variation in the formulation of the mixing length. In order to reduce the above errors, an update of the RegCM albedo values and further work on customizing the PBL scheme are suggested.

  5. Interventions to reduce medication errors in neonatal care: a systematic review

    PubMed Central

    Nguyen, Minh-Nha Rhylie; Mosel, Cassandra

    2017-01-01

    Background: Medication errors represent a significant but often preventable cause of morbidity and mortality in neonates. The objective of this systematic review was to determine the effectiveness of interventions to reduce neonatal medication errors. Methods: A systematic review was undertaken of all comparative and noncomparative studies published in any language, identified from searches of PubMed and EMBASE and reference-list checking. Eligible studies were those investigating the impact of any medication safety interventions aimed at reducing medication errors in neonates in the hospital setting. Results: A total of 102 studies were identified that met the inclusion criteria, including 86 comparative and 16 noncomparative studies. Medication safety interventions were classified into six themes: technology (n = 38; e.g. electronic prescribing), organizational (n = 16; e.g. guidelines, policies, and procedures), personnel (n = 13; e.g. staff education), pharmacy (n = 9; e.g. clinical pharmacy service), hazard and risk analysis (n = 8; e.g. error detection tools), and multifactorial (n = 18; e.g. any combination of previous interventions). Significant variability was evident across all included studies, with differences in intervention strategies, trial methods, types of medication errors evaluated, and how medication errors were identified and evaluated. Most studies demonstrated an appreciable risk of bias. The vast majority of studies (>90%) demonstrated a reduction in medication errors. A similar median reduction of 50–70% in medication errors was evident across studies included within each of the identified themes, but findings varied considerably from a 16% increase in medication errors to a 100% reduction in medication errors. Conclusion: While neonatal medication errors can be reduced through multiple interventions aimed at improving the medication use process, no single intervention appeared clearly superior. Further research is required to evaluate the relative cost-effectiveness of the various medication safety interventions to facilitate decisions regarding uptake and implementation into clinical practice. PMID:29387337

  6. The effects of errors on children's performance on a circle-ellipse discrimination.

    PubMed

    Stoddard, L T; Sidman, M

    1967-05-01

    Children first learned by means of a teaching program to discriminate a circle from relatively flat ellipses. Children in the control group then proceeded into a program which gradually reduced the difference between the circle and the ellipses. They advanced to a finer discrimination when they made a correct choice, and reversed to an easier discrimination after making errors ("backup" procedure). The children made relatively few errors until they approached the region of their difference threshold (empirically determined under the conditions described). When they could no longer discriminate the forms, they learned other bases for responding that could be classified as specifiable error patterns. Children in the experimental group, having learned the preliminary circle-ellipse discrimination, were started at the upper end of the ellipse series, where it was impossible for them to discriminate the forms. The backup procedure returned them to an easier discrimination after they made errors. They made many errors and reversed down through the ellipse series. Eventually, most of the children reached a point in the ellipse series where they abandoned their systematic errors and began to make correct first choices; then they advanced upward through the program. All of the children advanced to ellipse sizes that were much larger than the ellipse size at the point of their furthest descent.

  7. Autonomous calibration of single spin qubit operations

    NASA Astrophysics Data System (ADS)

    Frank, Florian; Unden, Thomas; Zoller, Jonathan; Said, Ressa S.; Calarco, Tommaso; Montangero, Simone; Naydenov, Boris; Jelezko, Fedor

    2017-12-01

    Fully autonomous precise control of qubits is crucial for quantum information processing, quantum communication, and quantum sensing applications. It requires minimal human intervention and relies on the ability to model, predict, and anticipate the quantum dynamics, as well as to precisely control and calibrate single qubit operations. Here, we demonstrate single qubit autonomous calibrations via closed-loop optimisations of electron spin quantum operations in diamond. The operations are examined by quantum state and process tomographic measurements at room temperature, and their performance against systematic errors is iteratively rectified by an optimal pulse engineering algorithm. We achieve an autonomously calibrated fidelity of up to 1.00 on a time scale of minutes for a spin population inversion and up to 0.98 on a time scale of hours for a single qubit π/2-rotation, within the experimental error of 2%. These results demonstrate the full potential of autonomous calibration for versatile quantum technologies.
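
    The closed-loop idea can be sketched in a few lines: an optimiser repeatedly queries the experiment and updates pulse parameters to maximise a measured fidelity. Here a simulated Rabi-flop "experiment" with a hidden detuning stands in for the real apparatus and its systematic error; the optimiser, parameter values, and model are all illustrative, not the paper's optimal-pulse-engineering algorithm.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    DELTA = 2 * np.pi * 0.2e6  # hidden detuning (rad/s): the systematic error

    def inversion_fidelity(params):
        """Simulated experiment: population transfer of a Rabi pulse with
        amplitude omega (rad/s) and duration t (s), including detuning."""
        omega, t = params
        w_eff = np.hypot(omega, DELTA)
        return (omega / w_eff) ** 2 * np.sin(w_eff * t / 2.0) ** 2

    # Closed loop: maximise measured fidelity, starting from a naive pi pulse.
    x0 = np.array([2 * np.pi * 1e6, 0.5e-6])  # [amplitude, duration] guess
    res = minimize(lambda p: 1.0 - inversion_fidelity(p), x0,
                   method="Nelder-Mead")
    print(f"naive fidelity:      {inversion_fidelity(x0):.4f}")
    print(f"calibrated fidelity: {1.0 - res.fun:.4f}")
    ```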

  8. Cross-Spectrum PM Noise Measurement, Thermal Energy, and Metamaterial Filters.

    PubMed

    Gruson, Yannick; Giordano, Vincent; Rohde, Ulrich L; Poddar, Ajay K; Rubiola, Enrico

    2017-03-01

    Virtually all commercial instruments for the measurement of oscillator PM noise make use of the cross-spectrum method (arXiv:1004.5539 [physics.ins-det], 2010). High sensitivity is achieved by correlation and averaging on two equal channels, which measure the same input and reject the background of the instrument. We show that a systematic error is always present if the thermal energy of the input power splitter is not accounted for. Such an error can result in noise underestimation of up to a few decibels in the lowest-noise quartz oscillators, and in an invalid measurement in the case of cryogenic oscillators. As another alarming fact, the presence of metamaterial components in the oscillator results in unpredictable behavior and large errors, even in well-controlled experimental conditions. We observed a spread of 40 dB in the phase noise spectra of an oscillator simply by replacing the output filter.
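
    The core of the cross-spectrum method can be sketched as follows: the common input appears in both channels, the two instrument backgrounds are independent and average away, but any term shared by both channels is not rejected and biases the estimate. In this toy the splitter term is taken to enter the two channels with opposite sign, mimicking an anticorrelated thermal contribution that biases the result low; all values and the sign convention are illustrative assumptions, not the paper's analysis.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    n, m = 4096, 200                    # samples per record, records averaged
    acc = np.zeros(n // 2 + 1, dtype=complex)

    for _ in range(m):
        s = rng.normal(0.0, 1.0, n)     # common input signal (the measurand)
        c = rng.normal(0.0, 0.05, n)    # splitter term, anticorrelated below
        ya = s + c + rng.normal(0.0, 2.0, n)  # channel a + its own background
        yb = s - c + rng.normal(0.0, 2.0, n)  # channel b + its own background
        acc += np.fft.rfft(ya) * np.conj(np.fft.rfft(yb))

    sxy = acc / (m * n)                 # averaged cross-spectrum, per sample
    # Independent backgrounds average away as 1/sqrt(m); the shared splitter
    # term does not, biasing the estimate low by about 0.05**2.
    print("estimated signal power:", np.mean(sxy.real))
    print("true signal power: 1.0   expected bias: -0.0025")
    ```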

  9. Empirical Analysis of Systematic Communication Errors.

    DTIC Science & Technology

    1981-09-01

    ...human components in communication systems. (Systematic errors were defined to be those that occur regularly in human communication links...) ...phase of the human communication process and focuses on the linkage between a specific piece of information (and the receiver) and the transmission...communication flow. (2) Exchange. Exchange is the next phase in human communication and entails a concerted effort on the part of the sender and receiver to share...

  10. Systematic errors in strong lens modeling

    NASA Astrophysics Data System (ADS)

    Johnson, Traci L.; Sharon, Keren; Bayliss, Matthew B.

    We investigate how varying the number of multiple-image constraints and the available redshift information can influence the systematic errors of strong lens models, specifically the image predictability, mass distribution, and magnifications of background sources. This work will not only inform Frontier Fields science, but also work on the growing collection of strong-lensing galaxy clusters, most of which are less massive and are capable of lensing only a handful of galaxies.

  11. Low-Energy Proton Testing Methodology

    NASA Technical Reports Server (NTRS)

    Pellish, Jonathan A.; Marshall, Paul W.; Heidel, David F.; Schwank, James R.; Shaneyfelt, Marty R.; Xapsos, M.A.; Ladbury, Raymond L.; LaBel, Kenneth A.; Berg, Melanie; Kim, Hak S.

    2009-01-01

    Use of low-energy protons and high-energy light ions is becoming necessary to investigate current-generation SEU thresholds. Systematic errors can dominate measurements made with low-energy protons; range and energy straggling contribute to this systematic error. Low-energy proton testing is not a step-and-repeat process. Low-energy protons and high-energy light ions can be used to measure the SEU cross section of single sensitive features, which is important for simulation.

  12. Focusing cosmic telescopes: systematics of strong lens modeling

    NASA Astrophysics Data System (ADS)

    Johnson, Traci Lin; Sharon, Keren

    2018-01-01

    The use of strong gravitational lensing by galaxy clusters has become a popular method for studying the high-redshift universe. While diverse in computational methods, lens modeling techniques have established means for determining statistical errors on cluster masses and magnifications. However, the systematic errors, arising from the number of constraints, the availability of spectroscopic redshifts, and the various types of image configurations, have yet to be quantified. I will be presenting my dissertation work on quantifying systematic errors in parametric strong lensing techniques. I have participated in the Hubble Frontier Fields lens model comparison project, using simulated clusters to compare the accuracy of various modeling techniques. I have extended this project to understanding how changing the quantity of constraints affects the mass and magnification. I will also present my recent work extending these studies to clusters in the Outer Rim Simulation. These clusters are typical of the clusters found in wide-field surveys, in mass and lensing cross-section. They have fewer constraints than the HFF clusters and thus are more susceptible to systematic errors. With the wealth of strong-lensing clusters discovered in surveys such as SDSS, SPT, and DES, and in the future LSST, this work will be influential in guiding lens modeling efforts and follow-up spectroscopic campaigns.

  13. Optimized pulses for the control of uncertain qubits

    DOE PAGES

    Grace, Matthew D.; Dominy, Jason M.; Witzel, Wayne M.; ...

    2012-05-18

    The construction of high-fidelity control fields that are robust to control, system, and/or surrounding-environment uncertainties is a crucial objective for quantum information processing. Using the two-state Landau-Zener model for illustrative simulations of a controlled qubit, we generate optimal controls for π/2 and π pulses and investigate their inherent robustness to uncertainty in the magnitude of the drift Hamiltonian. Next, we construct a quantum-control protocol to improve system-drift robustness by combining environment-decoupling pulse criteria and optimal control theory for unitary operations. By perturbatively expanding the unitary time-evolution operator for an open quantum system, previous analysis of environment-decoupling control pulses has calculated explicit control-field criteria to suppress environment-induced errors up to (but not including) third order from π/2 and π pulses. We systematically integrate these criteria with optimal control theory, incorporating an estimate of the uncertain parameter to produce improvements in gate fidelity and robustness, demonstrated via a numerical example based on double quantum dot qubits. For the qubit model used in this work, post facto analysis of the resulting controls suggests that realistic control-field fluctuations and noise may contribute just as significantly to gate errors as system and environment fluctuations.
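
    A minimal numerical sketch of the kind of robustness check described here: propagate a two-level system under a piecewise-constant control pulse and score the gate fidelity as the drift-Hamiltonian magnitude is swept about its nominal value. The naive pulse and sweep range below are illustrative, not the optimized controls from the paper.

    ```python
    import numpy as np
    from scipy.linalg import expm

    SX = np.array([[0, 1], [1, 0]], dtype=complex)
    SZ = np.array([[1, 0], [0, -1]], dtype=complex)

    def propagate(controls, dt, delta):
        """Piecewise-constant evolution under H = delta*SZ/2 + u(t)*SX/2."""
        U = np.eye(2, dtype=complex)
        for u in controls:
            U = expm(-1j * dt * (0.5 * delta * SZ + 0.5 * u * SX)) @ U
        return U

    def gate_fidelity(U, V):
        return abs(np.trace(U.conj().T @ V)) / 2.0

    # Naive pi/2 pulse about x: constant amplitude, 20 time slices.
    n, T = 20, 1.0
    controls = np.full(n, (np.pi / 2) / T)
    target = expm(-1j * (np.pi / 4) * SX)

    for delta in np.linspace(-0.5, 0.5, 5):   # drift-uncertainty sweep
        F = gate_fidelity(target, propagate(controls, T / n, delta))
        print(f"delta = {delta:+.2f}  fidelity = {F:.4f}")
    ```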

  14. A probabilistic approach to remote compositional analysis of planetary surfaces

    USGS Publications Warehouse

    Lapotre, Mathieu G.A.; Ehlmann, Bethany L.; Minson, Sarah E.

    2017-01-01

    Reflected light from planetary surfaces provides information, including mineral/ice compositions and grain sizes, by study of albedo and absorption features as a function of wavelength. However, deconvolving the compositional signal in spectra is complicated by the nonuniqueness of the inverse problem. Trade-offs between mineral abundances and grain sizes in setting reflectance, instrument noise, and systematic errors in the forward model are potential sources of uncertainty, which are often unquantified. Here we adopt a Bayesian implementation of the Hapke model to determine sets of acceptable-fit mineral assemblages, as opposed to single best fit solutions. We quantify errors and uncertainties in mineral abundances and grain sizes that arise from instrument noise, compositional end members, optical constants, and systematic forward model errors for two suites of ternary mixtures (olivine-enstatite-anorthite and olivine-nontronite-basaltic glass) in a series of six experiments in the visible-shortwave infrared (VSWIR) wavelength range. We show that grain sizes are generally poorly constrained from VSWIR spectroscopy. Abundance and grain size trade-offs lead to typical abundance errors of ≤1 wt % (occasionally up to ~5 wt %), while ~3% noise in the data increases errors by up to ~2 wt %. Systematic errors further increase inaccuracies by a factor of 4. Finally, phases with low spectral contrast or inaccurate optical constants can further increase errors. Overall, typical errors in abundance are <10%, but sometimes significantly increase for specific mixtures, prone to abundance/grain-size trade-offs that lead to high unmixing uncertainties. These results highlight the need for probabilistic approaches to remote determination of planetary surface composition.
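
    The probabilistic flavour of the approach can be illustrated with a toy Metropolis sampler. Here a two-endmember linear mixing model stands in for the (nonlinear) Hapke forward model, and the posterior over the abundance fraction is sampled rather than reduced to a single best fit; every spectrum and parameter below is a labelled simplification, not the paper's setup.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    wave = np.linspace(0.4, 2.5, 120)                 # microns
    A = 0.6 + 0.2 * np.sin(3 * wave)                  # toy endmember 1 spectrum
    B = 0.4 + 0.1 * np.cos(2 * wave)                  # toy endmember 2 spectrum
    f_true, sigma = 0.35, 0.01
    obs = f_true * A + (1 - f_true) * B + rng.normal(0, sigma, wave.size)

    def log_post(f):
        if not 0.0 < f < 1.0:
            return -np.inf                            # uniform prior on [0, 1]
        resid = obs - (f * A + (1 - f) * B)
        return -0.5 * np.sum((resid / sigma) ** 2)

    # Random-walk Metropolis over the abundance fraction.
    f, lp, chain = 0.5, log_post(0.5), []
    for _ in range(20000):
        f_new = f + rng.normal(0, 0.02)
        lp_new = log_post(f_new)
        if np.log(rng.uniform()) < lp_new - lp:
            f, lp = f_new, lp_new
        chain.append(f)

    post = np.array(chain[5000:])                     # drop burn-in
    print(f"abundance: {post.mean():.3f} +/- {post.std():.3f} (true {f_true})")
    ```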

  15. Accurate Magnetometer/Gyroscope Attitudes Using a Filter with Correlated Sensor Noise

    NASA Technical Reports Server (NTRS)

    Sedlak, J.; Hashmall, J.

    1997-01-01

    Magnetometers and gyroscopes have been shown to provide very accurate attitudes for a variety of spacecraft. These results have been obtained, however, using a batch-least-squares algorithm and long periods of data. For use in onboard applications, attitudes are best determined using sequential estimators such as the Kalman filter. When a filter is used to determine attitudes using magnetometer and gyroscope data for input, the resulting accuracy is limited by both the sensor accuracies and errors inherent in the Earth magnetic field model. The Kalman filter accounts for the random component by modeling the magnetometer and gyroscope errors as white noise processes. However, even when these tuning parameters are physically realistic, the rate biases (included in the state vector) have been found to show systematic oscillations. These are attributed to the field model errors. If the gyroscope noise is sufficiently small, the tuned filter 'memory' will be long compared to the orbital period. In this case, the variations in the rate bias induced by field model errors are substantially reduced. Mistuning the filter to have a short memory time leads to strongly oscillating rate biases and increased attitude errors. To reduce the effect of the magnetic field model errors, these errors are estimated within the filter and used to correct the reference model. An exponentially-correlated noise model is used to represent the filter estimate of the systematic error. Results from several test cases using in-flight data from the Compton Gamma Ray Observatory are presented. These tests emphasize magnetometer errors, but the method is generally applicable to any sensor subject to a combination of random and systematic noise.
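
    The exponentially correlated (first-order Gauss-Markov) model used for the field-error estimate has a simple discrete form that can be added to a Kalman filter state: the transition factor exp(-dt/tau) enters the state-transition matrix and the matching process-noise variance keeps the process stationary. A sketch with illustrative correlation time and variance:

    ```python
    import numpy as np

    def gauss_markov(n, dt, tau, sigma, rng):
        """First-order Gauss-Markov process: x' = -x/tau + white noise, with
        stationary standard deviation sigma. This is the discrete model used
        when an exponentially correlated error state is added to a filter."""
        phi = np.exp(-dt / tau)                 # state-transition factor
        q = sigma**2 * (1.0 - phi**2)           # process-noise variance
        x = np.empty(n)
        x[0] = rng.normal(0.0, sigma)
        for k in range(1, n):
            x[k] = phi * x[k - 1] + rng.normal(0.0, np.sqrt(q))
        return x

    rng = np.random.default_rng(3)
    x = gauss_markov(n=6000, dt=1.0, tau=2800.0, sigma=1.0, rng=rng)
    print("sample std:", x.std())  # ~sigma; autocorrelation ~ exp(-dt/tau)
    ```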

  16. The Effects of Bar-coding Technology on Medication Errors: A Systematic Literature Review.

    PubMed

    Hutton, Kevin; Ding, Qian; Wellman, Gregory

    2017-02-24

    The bar-coding technology adoptions have risen drastically in U.S. health systems in the past decade. However, few studies have addressed the impact of bar-coding technology with strong prospective methodologies and the research, which has been conducted from both in-pharmacy and bedside implementations. This systematic literature review is to examine the effectiveness of bar-coding technology on preventing medication errors and what types of medication errors may be prevented in the hospital setting. A systematic search of databases was performed from 1998 to December 2016. Studies measuring the effect of bar-coding technology on medication errors were included in a full-text review. Studies with the outcomes other than medication errors such as efficiency or workarounds were excluded. The outcomes were measured and findings were summarized for each retained study. A total of 2603 articles were initially identified and 10 studies, which used prospective before-and-after study design, were fully reviewed in this article. Of the 10 included studies, 9 took place in the United States, whereas the remaining was conducted in the United Kingdom. One research article focused on bar-coding implementation in a pharmacy setting, whereas the other 9 focused on bar coding within patient care areas. All 10 studies showed overall positive effects associated with bar-coding implementation. The results of this review show that bar-coding technology may reduce medication errors in hospital settings, particularly on preventing targeted wrong dose, wrong drug, wrong patient, unauthorized drug, and wrong route errors.

  17. Minimizing systematic errors from atmospheric multiple scattering and satellite viewing geometry in coastal zone color scanner level IIA imagery

    NASA Technical Reports Server (NTRS)

    Martin, D. L.; Perry, M. J.

    1994-01-01

    Water-leaving radiances and phytoplankton pigment concentrations are calculated from coastal zone color scanner (CZCS) radiance measurements by removing atmospheric Rayleigh and aerosol radiances from the total radiance signal measured at the satellite. The single greatest source of error in CZCS atmospheric correction algorithms is the assumption that these Rayleigh and aerosol radiances are separable. Multiple-scattering interactions between Rayleigh and aerosol components cause systematic errors in calculated aerosol radiances, and the magnitude of these errors is dependent on aerosol type and optical depth and on satellite viewing geometry. A technique was developed which extends the results of previous radiative transfer modeling by Gordon and Castano to predict the magnitude of these systematic errors for simulated CZCS orbital passes in which the ocean is viewed through a modeled, physically realistic atmosphere. The simulated image mathematically duplicates the exact satellite, Sun, and pixel locations of an actual CZCS image. Errors in the aerosol radiance at 443 nm are calculated for a range of aerosol optical depths. When pixels in the simulated image exceed an error threshold, the corresponding pixels in the actual CZCS image are flagged and excluded from further analysis or from use in image compositing or compilation of pigment concentration databases. Studies based on time series analyses or compositing of CZCS imagery which do not address Rayleigh-aerosol multiple scattering should be interpreted cautiously, since the fundamental assumption used in their atmospheric correction algorithm is flawed.
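
    The flagging step reduces to a per-pixel threshold test. A minimal sketch, assuming a hypothetical array of modelled aerosol-radiance errors co-registered with the image; the threshold and error field are illustrative.

    ```python
    import numpy as np

    def flag_pixels(modeled_error, threshold):
        """Boolean mask of pixels whose modelled multiple-scattering error in
        aerosol radiance exceeds the threshold; flagged pixels are excluded
        from pigment retrieval and compositing."""
        return np.abs(modeled_error) > threshold

    # Illustrative values: error image in radiance units, arbitrary threshold.
    err = np.random.default_rng(4).normal(0.0, 0.05, (512, 512))
    mask = flag_pixels(err, threshold=0.1)
    print(f"flagged {mask.mean():.1%} of pixels")
    ```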

  18. Statistical and systematic errors in the measurement of weak-lensing Minkowski functionals: Application to the Canada-France-Hawaii Lensing Survey

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shirasaki, Masato; Yoshida, Naoki, E-mail: masato.shirasaki@utap.phys.s.u-tokyo.ac.jp

    2014-05-01

    The measurement of cosmic shear using weak gravitational lensing is a challenging task that involves a number of complicated procedures. We study in detail the systematic errors in the measurement of weak-lensing Minkowski Functionals (MFs). Specifically, we focus on systematics associated with galaxy shape measurements, photometric redshift errors, and shear calibration correction. We first generate mock weak-lensing catalogs that directly incorporate the actual observational characteristics of the Canada-France-Hawaii Lensing Survey (CFHTLenS). We then perform a Fisher analysis using the large set of mock catalogs for various cosmological models. We find that the statistical error associated with the observational effects degrades the cosmological parameter constraints by a factor of a few. The Subaru Hyper Suprime-Cam (HSC) survey, with a sky coverage of ∼1400 deg², will constrain the dark energy equation-of-state parameter with an error of Δw_0 ∼ 0.25 by the lensing MFs alone, but biases induced by the systematics can be comparable to the 1σ error. We conclude that the lensing MFs are powerful statistics beyond the two-point statistics only if well-calibrated measurement of both the redshifts and the shapes of source galaxies is performed. Finally, we analyze the CFHTLenS data to explore the ability of the MFs to break degeneracies between a few cosmological parameters. Using a combined analysis of the MFs and the shear correlation function, we derive the matter density Ω_m0 = 0.256 (+0.054/−0.046).
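
    For a Gaussian likelihood, a Fisher analysis of this kind reduces to F_ij = d_i^T C^{-1} d_j, where d_i is the derivative of the statistic vector (here, the lensing MFs) with respect to parameter i and C is the statistic's covariance estimated from mocks; marginalised 1σ errors are sqrt((F^{-1})_ii). A generic sketch with made-up derivatives and covariance:

    ```python
    import numpy as np

    def fisher_errors(derivs, cov):
        """derivs: (n_params, n_stats) derivatives of the observable vector;
        cov: (n_stats, n_stats) covariance of the observable.
        Returns marginalised 1-sigma parameter errors."""
        cinv = np.linalg.inv(cov)
        F = derivs @ cinv @ derivs.T
        return np.sqrt(np.diag(np.linalg.inv(F)))

    rng = np.random.default_rng(5)
    d = rng.normal(size=(2, 30))             # toy d(MF)/d(w0), d(MF)/d(Om)
    cov = np.diag(rng.uniform(0.5, 2.0, 30))
    print("1-sigma errors:", fisher_errors(d, cov))
    ```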

  19. The Role of Model and Initial Condition Error in Numerical Weather Forecasting Investigated with an Observing System Simulation Experiment

    NASA Technical Reports Server (NTRS)

    Prive, Nikki C.; Errico, Ronald M.

    2013-01-01

    A series of experiments that explore the roles of model and initial condition error in numerical weather prediction are performed using an observing system simulation experiment (OSSE) framework developed at the National Aeronautics and Space Administration Global Modeling and Assimilation Office (NASA/GMAO). The use of an OSSE allows the analysis and forecast errors to be explicitly calculated, and different hypothetical observing networks can be tested with ease. In these experiments, both a full global OSSE framework and an 'identical twin' OSSE setup are utilized to compare the behavior of the data assimilation system and evolution of forecast skill with and without model error. The initial condition error is manipulated by varying the distribution and quality of the observing network and the magnitude of observation errors. The results show that model error has a strong impact on both the quality of the analysis field and the evolution of forecast skill, including both systematic and unsystematic model error components. With a realistic observing network, the analysis state retains a significant quantity of error due to systematic model error. If errors of the analysis state are minimized, model error acts to rapidly degrade forecast skill during the first 24-48 hours of forward integration. In the presence of model error, the impact of observation errors on forecast skill is small, but in the absence of model error, observation errors cause a substantial degradation of the skill of medium range forecasts.

  20. Single Crystal Diffractometry

    NASA Astrophysics Data System (ADS)

    Arndt, U. W.; Willis, B. T. M.

    2009-06-01

    Preface; Acknowledgements; Part I. Introduction; Part II. Diffraction Geometry; Part III. The Design of Diffractometers; Part IV. Detectors; Part V. Electronic Circuits; Part VI. The Production of the Primary Beam (X-rays); Part VII. The Production of the Primary Beam (Neutrons); Part VIII. The Background; Part IX. Systematic Errors in Measuring Relative Integrated Intensities; Part X. Procedure for Measuring Integrated Intensities; Part XI. Derivation and Accuracy of Structure Factors; Part XII. Computer Programs and On-line Control; Appendix; References; Index.

  1. An Analysis of Computational Errors in the Use of Division Algorithms by Fourth-Grade Students.

    ERIC Educational Resources Information Center

    Stefanich, Greg P.; Rokusek, Teri

    1992-01-01

    Presents a study that analyzed errors made by randomly chosen fourth grade students (25 of 57) while using the division algorithm and investigated the effect of remediation on identified systematic errors. Results affirm that error pattern diagnosis and directed remediation lead to new learning and long-term retention. (MDH)

  2. Digital identification of cartographic control points

    NASA Technical Reports Server (NTRS)

    Gaskell, R. W.

    1988-01-01

    Techniques have been developed for the sub-pixel location of control points in satellite images returned by the Voyager spacecraft. The procedure uses digital imaging data in the neighborhood of the point to form a multipicture model of a piece of the surface. Comparison of this model with the digital image in each picture determines the control point locations to about a tenth of a pixel. At this level of precision, previously insignificant effects must be considered, including chromatic aberration, high level imaging distortions, and systematic errors due to navigation uncertainties. Use of these methods in the study of Jupiter's satellite Io has proven very fruitful.
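
    Sub-pixel location of this kind is commonly obtained by fitting a parabola through the correlation peak and its neighbours. This sketch shows that generic 1-D refinement applied independently to each axis of a correlation surface; it is a standard technique, not Gaskell's multipicture model, and the surface here is synthetic.

    ```python
    import numpy as np

    def subpixel_offset(c_m, c_0, c_p):
        """Vertex of the parabola through three samples around a peak;
        returns the offset (in pixels, within [-0.5, 0.5]) of the maximum."""
        denom = c_m - 2.0 * c_0 + c_p
        return 0.5 * (c_m - c_p) / denom if denom != 0 else 0.0

    # corr: 2-D correlation surface between model patch and image patch.
    corr = np.random.default_rng(6).random((21, 21))
    corr[11, 9] += 5.0                                  # make a clear peak
    i, j = np.unravel_index(np.argmax(corr), corr.shape)
    di = subpixel_offset(corr[i - 1, j], corr[i, j], corr[i + 1, j])
    dj = subpixel_offset(corr[i, j - 1], corr[i, j], corr[i, j + 1])
    print(f"peak at ({i + di:.2f}, {j + dj:.2f}) pixels")
    ```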

  3. Increased errors and decreased performance at night: A systematic review of the evidence concerning shift work and quality.

    PubMed

    de Cordova, Pamela B; Bradford, Michelle A; Stone, Patricia W

    2016-02-15

    Shift workers have worse health outcomes than employees who work standard business hours. However, it is unclear how this poorer health may relate to employee work productivity. The purpose of this systematic review is to assess the relationship between shift work and errors and performance. Searches of MEDLINE/PubMed, EBSCOhost, and CINAHL were conducted to identify articles that examined the relationship between shift work, errors, quality, productivity, and performance. All articles were assessed for study quality. A total of 435 abstracts were screened, with 13 meeting inclusion criteria. Eight studies were rated to be of strong methodological quality. Nine studies demonstrated a positive relationship: night-shift workers committed more errors and had decreased performance. Night-shift workers have worse health, which may contribute to errors and decreased performance in the workplace.

  4. Filtering Methods for Error Reduction in Spacecraft Attitude Estimation Using Quaternion Star Trackers

    NASA Technical Reports Server (NTRS)

    Calhoun, Philip C.; Sedlak, Joseph E.; Superfin, Emil

    2011-01-01

    Precision attitude determination for recent and planned space missions typically includes quaternion star trackers (ST) and a three-axis inertial reference unit (IRU). Sensor selection is based on estimates of knowledge accuracy attainable from a Kalman filter (KF), which provides the optimal solution for the case of linear dynamics with measurement and process errors characterized by random Gaussian noise with white spectrum. Non-Gaussian systematic errors in quaternion STs are often quite large and have an unpredictable time-varying nature, particularly when used in non-inertial pointing applications. Two filtering methods are proposed to reduce the attitude estimation error resulting from ST systematic errors, 1) extended Kalman filter (EKF) augmented with Markov states, 2) Unscented Kalman filter (UKF) with a periodic measurement model. Realistic assessments of the attitude estimation performance gains are demonstrated with both simulation and flight telemetry data from the Lunar Reconnaissance Orbiter.

  5. A water-vapor radiometer error model. [for ionosphere in geodetic microwave techniques

    NASA Technical Reports Server (NTRS)

    Beckman, B.

    1985-01-01

    The water-vapor radiometer (WVR) is used to calibrate unpredictable delays in the wet component of the troposphere in geodetic microwave techniques such as very-long-baseline interferometry (VLBI) and Global Positioning System (GPS) tracking. Based on experience with Jet Propulsion Laboratory (JPL) instruments, the current level of accuracy in wet-troposphere calibration limits the accuracy of local vertical measurements to 5-10 cm. The goal for the near future is 1-3 cm. Although the WVR is currently the best calibration method, many instruments are prone to systematic error. In this paper, a treatment of WVR data is proposed and evaluated. This treatment reduces the effect of WVR systematic errors by estimating parameters that specify an assumed functional form for the error. The assumed form of the treatment is evaluated by comparing the results of two similar WVR's operating near each other. Finally, the observability of the error parameters is estimated by covariance analysis.

  6. Investigating Systematic Errors of the Interstellar Flow Longitude Derived from the Pickup Ion Cutoff

    NASA Astrophysics Data System (ADS)

    Taut, A.; Berger, L.; Drews, C.; Bower, J.; Keilbach, D.; Lee, M. A.; Moebius, E.; Wimmer-Schweingruber, R. F.

    2017-12-01

    Complementary to the direct neutral particle measurements performed by, e.g., IBEX, the measurement of PickUp Ions (PUIs) constitutes a diagnostic tool to investigate the local interstellar medium. PUIs are former neutral particles that have been ionized in the inner heliosphere. Subsequently, they are picked up by the solar wind and its frozen-in magnetic field. Due to this process, a characteristic Velocity Distribution Function (VDF) with a sharp cutoff evolves, which carries information about the PUI's injection speed and thus the former neutral particle velocity. The symmetry of the injection speed about the interstellar flow vector is used to derive the interstellar flow longitude from PUI measurements. Using He PUI data obtained by the PLASTIC sensor on STEREO A, we investigate how this concept may be affected by systematic errors. The PUI VDF strongly depends on the orientation of the local interplanetary magnetic field. Recently injected PUIs with speeds just below the cutoff speed typically form a highly anisotropic torus distribution in velocity space, which leads to longitudinal transport for certain magnetic field orientations. Therefore, we investigate how the selection of magnetic field configurations in the data affects the result for the interstellar flow longitude that we derive from the PUI cutoff. Indeed, we find that the results follow a systematic trend with the filtered magnetic field angles that can shift the result by up to 5°. In turn, this means that every value for the interstellar flow longitude derived from the PUI cutoff is affected by a systematic error depending on the utilized magnetic field orientations. Here, we present our observations, discuss possible reasons for the systematic trend we discovered, and indicate selections that may minimize the systematic errors.

  7. Digital stereo photogrammetry for grain-scale monitoring of fluvial surfaces: Error evaluation and workflow optimisation

    NASA Astrophysics Data System (ADS)

    Bertin, Stephane; Friedrich, Heide; Delmas, Patrice; Chan, Edwin; Gimel'farb, Georgy

    2015-03-01

    Grain-scale monitoring of fluvial morphology is important for the evaluation of river system dynamics. Significant progress in remote sensing and computer performance allows rapid high-resolution data acquisition, however, applications in fluvial environments remain challenging. Even in a controlled environment, such as a laboratory, the extensive acquisition workflow is prone to the propagation of errors in digital elevation models (DEMs). This is valid for both of the common surface recording techniques: digital stereo photogrammetry and terrestrial laser scanning (TLS). The optimisation of the acquisition process, an effective way to reduce the occurrence of errors, is generally limited by the use of commercial software. Therefore, the removal of evident blunders during post processing is regarded as standard practice, although this may introduce new errors. This paper presents a detailed evaluation of a digital stereo-photogrammetric workflow developed for fluvial hydraulic applications. The introduced workflow is user-friendly and can be adapted to various close-range measurements: imagery is acquired with two Nikon D5100 cameras and processed using non-proprietary "on-the-job" calibration and dense scanline-based stereo matching algorithms. Novel ground truth evaluation studies were designed to identify the DEM errors, which resulted from a combination of calibration errors, inaccurate image rectifications and stereo-matching errors. To ensure optimum DEM quality, we show that systematic DEM errors must be minimised by ensuring a good distribution of control points throughout the image format during calibration. DEM quality is then largely dependent on the imagery utilised. We evaluated the open access multi-scale Retinex algorithm to facilitate the stereo matching, and quantified its influence on DEM quality. Occlusions, inherent to any roughness element, are still a major limiting factor to DEM accuracy. We show that a careful selection of the camera-to-object and baseline distance reduces errors in occluded areas and that realistic ground truths help to quantify those errors.

  8. A Review of Cochrane Systematic Reviews of Interventions Relevant to Orthoptic Practice.

    PubMed

    Rowe, Fiona J; Elliott, Sue; Gordon, Iris; Shah, Anupa

    2017-09-01

    To present an overview of the range of systematic reviews on intervention trials pertinent to orthoptic practice, produced by the Cochrane Eyes and Vision group (CEV). We searched the 2016 Cochrane Library database (31.03.2016) to identify completed reviews and protocols of direct relevance to orthoptic practice. These reviews are currently completed and published, available on www.thecochranelibrary.com (free to UK health employees) or via the CEV website (http://eyes.cochrane.org/). We found 27 completed CEV reviews across the topics of strabismus, amblyopia, refractive errors, and low vision. Seven completed CEV protocols addressed topics of strabismus, amblyopia, refractive errors, low vision, and screening. We found 3 completed Cochrane Stroke reviews addressing visual field loss, eye movement impairment, and age-related vision loss. The systematic review process presents an important opportunity for any clinician to contribute to the establishment of reliable, evidence-based orthoptic practice. Each review has an abstract and plain language summary that many non-clinicians find useful, followed by a full copy of the review (background, objectives, methods, results, discussion) with a conclusion section that is divided into implications for practice and implications for research. The current reviews provide patients/parents/carers with information about various different conditions and treatment options, but also provide clinicians with a summary of the available evidence on interventions, to use as a guide for both clinical practice and future research planning. The reviews identified in this overview highlight the evidence available for effective interventions for strabismus, amblyopia, refractive errors, and low vision or stroke rehabilitation as well as the gaps in the evidence base. Thus, a demand exists for future robust, randomized, controlled trials of such interventions of importance in orthoptic practice.

  9. Improving the quality of marine geophysical track line data: Along-track analysis

    NASA Astrophysics Data System (ADS)

    Chandler, Michael T.; Wessel, Paul

    2008-02-01

    We have examined 4918 track line geophysics cruises archived at the U.S. National Geophysical Data Center (NGDC) using comprehensive error checking methods. Each cruise was checked for observation outliers, excessive gradients, metadata consistency, and general agreement with satellite altimetry-derived gravity and predicted bathymetry grids. Thresholds for error checking were determined empirically through inspection of histograms for all geophysical values, gradients, and differences with gridded data sampled along ship tracks. Robust regression was used to detect systematic scale and offset errors found by comparing ship bathymetry and free-air anomalies to the corresponding values from global grids. We found many recurring error types in the NGDC archive, including poor navigation, inappropriately scaled or offset data, excessive gradients, and extended offsets in depth and gravity when compared to global grids. While ˜5-10% of bathymetry and free-air gravity records fail our conservative tests, residual magnetic errors may exceed twice this proportion. These errors hinder the effective use of the data and may lead to mistakes in interpretation. To enable the removal of gross errors without over-writing original cruise data, we developed an errata system that concisely reports all errors encountered in a cruise. With such errata files, scientists may share cruise corrections, thereby preventing redundant processing. We have implemented these quality control methods in the modified MGD77 supplement to the Generic Mapping Tools software suite.
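
    A sketch of the scale-and-offset check, assuming hypothetical paired arrays of grid depths and ship depths sampled along track. A robust estimator keeps gross outliers from dominating; scipy's Theil-Sen implementation is used here as a stand-in for the robust regression in the paper, and the 1.8288 factor below illustrates a classic fathoms-logged-as-metres unit error.

    ```python
    import numpy as np
    from scipy.stats import theilslopes

    rng = np.random.default_rng(7)
    grid = rng.uniform(500.0, 5000.0, 400)          # grid depth at ship fixes
    ship = 1.8288 * grid + rng.normal(0, 30, 400)   # fathoms logged as metres
    ship[:20] += rng.normal(0, 2000, 20)            # gross outliers

    slope, intercept, lo, hi = theilslopes(ship, grid)
    print(f"scale = {slope:.3f} (95% CI {lo:.3f}-{hi:.3f}), "
          f"offset = {intercept:.1f} m")
    if not (lo < 1.0 < hi):
        print("systematic scale error suspected (e.g. unit conversion)")
    ```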

  10. A cognitive taxonomy of medical errors.

    PubMed

    Zhang, Jiajie; Patel, Vimla L; Johnson, Todd R; Shortliffe, Edward H

    2004-06-01

    Propose a cognitive taxonomy of medical errors at the level of individuals and their interactions with technology. Use cognitive theories of human error and human action to develop the theoretical foundations of the taxonomy, develop the structure of the taxonomy, populate the taxonomy with examples of medical error cases, identify cognitive mechanisms for each category of medical error under the taxonomy, and apply the taxonomy to practical problems. Four criteria were used to evaluate the cognitive taxonomy. The taxonomy should be able (1) to categorize major types of errors at the individual level along cognitive dimensions, (2) to associate each type of error with a specific underlying cognitive mechanism, (3) to describe how and explain why a specific error occurs, and (4) to generate intervention strategies for each type of error. The proposed cognitive taxonomy largely satisfies the four criteria at a theoretical and conceptual level. Theoretically, the proposed cognitive taxonomy provides a method to systematically categorize medical errors at the individual level along cognitive dimensions, leads to a better understanding of the underlying cognitive mechanisms of medical errors, and provides a framework that can guide future studies on medical errors. Practically, it provides guidelines for the development of cognitive interventions to decrease medical errors and foundation for the development of medical error reporting system that not only categorizes errors but also identifies problems and helps to generate solutions. To validate this model empirically, we will next be performing systematic experimental studies.

  11. Removal of batch effects using distribution-matching residual networks.

    PubMed

    Shaham, Uri; Stanton, Kelly P; Zhao, Jun; Li, Huamin; Raddassi, Khadir; Montgomery, Ruth; Kluger, Yuval

    2017-08-15

    Sources of variability in experimentally derived data include measurement error in addition to the physical phenomena of interest. This measurement error is a combination of systematic components originating from the measuring instrument and random measurement errors. Several novel biological technologies, such as mass cytometry and single-cell RNA-seq (scRNA-seq), are plagued with systematic errors that may severely affect statistical analysis if the data are not properly calibrated. We propose a novel deep learning approach for removing systematic batch effects. Our method is based on a residual neural network, trained to minimize the Maximum Mean Discrepancy between the multivariate distributions of two replicates, measured in different batches. We apply our method to mass cytometry and scRNA-seq datasets, and demonstrate that it effectively attenuates batch effects. Our code and data are publicly available at https://github.com/ushaham/BatchEffectRemoval.git.
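
    The training objective is the Maximum Mean Discrepancy between the two batch distributions. Below is a minimal NumPy sketch of the (biased, Gaussian-kernel) MMD statistic that such a network minimises; the residual-network architecture itself is omitted, and the data and kernel width are illustrative.

    ```python
    import numpy as np

    def mmd2(X, Y, gamma=1.0):
        """Biased squared MMD with Gaussian kernel exp(-gamma*||a-b||^2)."""
        def k(A, B):
            d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
            return np.exp(-gamma * d2)
        return k(X, X).mean() + k(Y, Y).mean() - 2.0 * k(X, Y).mean()

    rng = np.random.default_rng(8)
    batch1 = rng.normal(0.0, 1.0, (200, 10))
    batch2 = rng.normal(0.3, 1.0, (200, 10))  # systematic shift between batches
    print("MMD^2 before correction:", mmd2(batch1, batch2).round(4))
    print("MMD^2 after shift removal:", mmd2(batch1, batch2 - 0.3).round(4))
    ```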

  12. Improved methods for the measurement and analysis of stellar magnetic fields

    NASA Technical Reports Server (NTRS)

    Saar, Steven H.

    1988-01-01

    The paper presents several improved methods for the measurement of magnetic fields on cool stars which take into account simple radiative transfer effects and the exact Zeeman patterns. Using these methods, high-resolution, low-noise data can be fitted with theoretical line profiles to determine the mean magnetic field strength in stellar active regions and a model-dependent fraction of the stellar surface (filling factor) covered by these regions. Random errors in the derived field strength and filling factor are parameterized in terms of signal-to-noise ratio, wavelength, spectral resolution, stellar rotation rate, and the magnetic parameters themselves. Weak line blends, if left uncorrected, can have significant systematic effects on the derived magnetic parameters, and thus several methods are developed to compensate partially for them. The magnetic parameters determined by previous methods likely have systematic errors because of such line blends and because of line saturation effects. Other sources of systematic error are explored in detail. These sources of error currently make it difficult to determine the magnetic parameters of individual stars to better than about + or - 20 percent.

  13. Efficient Methods to Assimilate Satellite Retrievals Based on Information Content. Part 2; Suboptimal Retrieval Assimilation

    NASA Technical Reports Server (NTRS)

    Joiner, J.; Dee, D. P.

    1998-01-01

    One of the outstanding problems in data assimilation has been and continues to be how best to utilize satellite data while balancing the tradeoff between accuracy and computational cost. A number of weather prediction centers have recently achieved remarkable success in improving their forecast skill by changing the method by which satellite data are assimilated into the forecast model from the traditional approach of assimilating retrievals to the direct assimilation of radiances in a variational framework. The operational implementation of such a substantial change in methodology involves a great number of technical details, e.g., pertaining to quality control procedures, systematic error correction techniques, and tuning of the statistical parameters in the analysis algorithm. Although there are clear theoretical advantages to the direct radiance assimilation approach, it is not obvious at all to what extent the improvements that have been obtained so far can be attributed to the change in methodology, or to various technical aspects of the implementation. The issue is of interest because retrieval assimilation retains many practical and logistical advantages which may become even more significant in the near future when increasingly high-volume data sources become available. The central question we address here is: how much improvement can we expect from assimilating radiances rather than retrievals, all other things being equal? We compare the two approaches in a simplified one-dimensional theoretical framework, in which problems related to quality control and systematic error correction are conveniently absent. By assuming a perfect radiative transfer model and perfect knowledge of radiance and background error covariances, we are able to formulate a nonlinear local error analysis for each assimilation method. Direct radiance assimilation is optimal in this idealized context, while the traditional method of assimilating retrievals is suboptimal because it ignores the cross-covariances between background errors and retrieval errors. We show that interactive retrieval assimilation (where the same background used for assimilation is also used in the retrieval step) is equivalent to direct assimilation of radiances with suboptimal analysis weights. We illustrate and extend these theoretical arguments with several one-dimensional assimilation experiments, where we estimate vertical atmospheric profiles using simulated data from both the High-resolution InfraRed Sounder 2 (HIRS2) and the future Atmospheric InfraRed Sounder (AIRS).
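
    The suboptimality argument can be reproduced in a one-dimensional Monte Carlo toy (our construction, with illustrative error statistics, not the paper's experiments). A retrieval built from the same background, x_r = x_b + k(y − x_b), has errors correlated with the background; assimilating it as if its errors were independent yields a larger analysis error than the optimal direct weighting of the observation.

    ```python
    import numpy as np

    rng = np.random.default_rng(9)
    n = 200000
    sb, so = 1.0, 0.5                       # background / observation error std
    eb = rng.normal(0, sb, n)               # background errors
    eo = rng.normal(0, so, n)               # observation (radiance) errors

    # Optimal direct assimilation: w = sb^2 / (sb^2 + so^2).
    w = sb**2 / (sb**2 + so**2)
    ea_direct = (1 - w) * eb + w * eo

    # Interactive retrieval: e_r = (1-k)eb + k*eo, then assimilated as if its
    # error (variance sr2) were independent of the background error.
    k = w                                   # retrieval uses the same statistics
    er = (1 - k) * eb + k * eo
    sr2 = (1 - k) ** 2 * sb**2 + k**2 * so**2
    wr = sb**2 / (sb**2 + sr2)              # weight ignoring cross-covariance
    ea_retr = (1 - wr) * eb + wr * er

    print("direct radiance assimilation std :", ea_direct.std().round(4))
    print("suboptimal retrieval assimilation:", ea_retr.std().round(4))
    ```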

  14. Local blur analysis and phase error correction method for fringe projection profilometry systems.

    PubMed

    Rao, Li; Da, Feipeng

    2018-05-20

    We introduce a flexible error correction method for fringe projection profilometry (FPP) systems in the presence of local blur phenomenon. Local blur caused by global light transport such as camera defocus, projector defocus, and subsurface scattering will cause significant systematic errors in FPP systems. Previous methods, which adopt high-frequency patterns to separate the direct and global components, fail when the global light phenomenon occurs locally. In this paper, the influence of local blur on phase quality is thoroughly analyzed, and a concise error correction method is proposed to compensate the phase errors. For defocus phenomenon, this method can be directly applied. With the aid of spatially varying point spread functions and local frontal plane assumption, experiments show that the proposed method can effectively alleviate the system errors and improve the final reconstruction accuracy in various scenes. For a subsurface scattering scenario, if the translucent object is dominated by multiple scattering, the proposed method can also be applied to correct systematic errors once the bidirectional scattering-surface reflectance distribution function of the object material is measured.

  15. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, Y; Fullerton, G; Goins, B

    Purpose: In our previous study a preclinical multi-modality quality assurance (QA) phantom that contains five tumor-simulating test objects with 2, 4, 7, 10 and 14 mm diameters was developed for accurate tumor size measurement by researchers during cancer drug development and testing. This study analyzed the errors during tumor volume measurement from preclinical magnetic resonance (MR), micro-computed tomography (micro-CT) and ultrasound (US) images acquired in a rodent tumor model using the preclinical multi-modality QA phantom. Methods: Using preclinical 7-Tesla MR, US and micro-CT scanners, images were acquired of subcutaneous SCC4 tumor xenografts in nude rats (3–4 rats per group; 5 groups) along with the QA phantom using the same imaging protocols. After tumors were excised, in-air micro-CT imaging was performed to determine reference tumor volume. Volumes measured for the rat tumors and phantom test objects were calculated using the formula V = (π/6)*a*b*c, where a, b and c are the maximum diameters in three perpendicular dimensions determined by the three imaging modalities. Then linear regression analysis was performed to compare image-based tumor volumes with the reference tumor volume and known test object volume for the rats and the phantom respectively. Results: The slopes of regression lines for in-vivo tumor volumes measured by three imaging modalities were 1.021, 1.101 and 0.862 for MRI, micro-CT and US respectively. For the phantom, the slopes were 0.9485, 0.9971 and 0.9734 for MRI, micro-CT and US respectively. Conclusion: For both animal and phantom studies, random and systematic errors were observed. Random errors were observer-dependent and systematic errors were mainly due to selected imaging protocols and/or measurement method. In the animal study, there were additional systematic errors attributed to the ellipsoidal assumption for tumor shape. The systematic errors measured using the QA phantom need to be taken into account to reduce measurement errors during the animal study.
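
    The volume formula and the regression check are straightforward to reproduce. A sketch with hypothetical image-derived diameters (mm), with the phantom test objects assumed spherical at their nominal diameters; the numbers are illustrative, not the study's data.

    ```python
    import numpy as np
    from scipy.stats import linregress

    def ellipsoid_volume(a, b, c):
        """V = (pi/6)*a*b*c for three perpendicular maximum diameters."""
        return (np.pi / 6.0) * a * b * c

    nominal = np.array([2.0, 4.0, 7.0, 10.0, 14.0])  # phantom diameters (mm)
    v_ref = (np.pi / 6.0) * nominal**3               # spherical test objects

    # Hypothetical image-derived diameters for the same objects.
    a = np.array([2.1, 4.2, 6.8, 10.3, 13.9])
    b = np.array([1.9, 3.9, 7.1, 9.8, 14.2])
    c = np.array([2.0, 4.1, 6.9, 10.1, 13.8])

    fit = linregress(v_ref, ellipsoid_volume(a, b, c))
    print(f"slope = {fit.slope:.3f} (1 = no systematic size error), "
          f"r^2 = {fit.rvalue**2:.4f}")
    ```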

  16. Atmospheric Dispersion Effects in Weak Lensing Measurements

    DOE PAGES

    Plazas, Andrés Alejandro; Bernstein, Gary

    2012-10-01

    The wavelength dependence of atmospheric refraction causes elongation of finite-bandwidth images along the elevation vector, which produces spurious signals in weak gravitational lensing shear measurements unless this atmospheric dispersion is calibrated and removed to high precision. Because astrometric solutions and PSF characteristics are typically calibrated from stellar images, differences between the reference stars' spectra and the galaxies' spectra will leave residual errors in both the astrometric positions (dr) and in the second moment (width) of the wavelength-averaged PSF (dv) for galaxies. We estimate the level of dv that will induce spurious weak-lensing signals in PSF-corrected galaxy shapes that exceed the statistical errors of the DES and the LSST cosmic-shear experiments. We also estimate the dr signals that will produce unacceptable spurious distortions after stacking of exposures taken at different airmasses and hour angles. We also calculate the errors in the griz bands, and find that dispersion systematics, uncorrected, are up to 6 and 2 times larger in g and r bands, respectively, than the requirements for the DES error budget, but can be safely ignored in i and z bands. For the LSST requirements, the factors are about 30, 10, and 3 in g, r, and i bands, respectively. We find that a simple correction linear in galaxy color is accurate enough to reduce dispersion shear systematics to insignificant levels in the r band for DES and the i band for LSST, but still as much as 5 times the requirements for LSST r-band observations. More complex corrections will likely be able to reduce the systematic cosmic-shear errors below statistical errors for the LSST r band. But g-band effects remain large enough that it seems likely that induced systematics will dominate the statistical errors of both surveys, and cosmic-shear measurements should rely on the redder bands.

  17. Ground state properties of 3d metals from self-consistent GW approach

    DOE PAGES

    Kutepov, Andrey L.

    2017-10-06

    The self-consistent GW approach (scGW) has been applied to calculate the ground state properties (equilibrium Wigner–Seitz radius S_WZ and bulk modulus B) of the 3d transition metals Sc, Ti, V, Fe, Co, Ni, and Cu. The approach systematically underestimates S_WZ, with an average relative deviation from the experimental data of about 1%, and it overestimates the calculated bulk modulus, with a relative error of about 25%. We show that scGW is superior in accuracy to the local density approximation but less accurate than the generalized gradient approach for the materials studied. Compared to the random phase approximation, scGW is slightly less accurate, but its error for 3d metals looks more systematic. Lastly, the systematic nature of the deviation from the experimental data suggests that the next order of the perturbation theory should allow one to reduce the error.

  18. A systematic comparison of error correction enzymes by next-generation sequencing

    DOE PAGES

    Lubock, Nathan B.; Zhang, Di; Sidore, Angus M.; ...

    2017-08-01

    Gene synthesis, the process of assembling gene-length fragments from shorter groups of oligonucleotides (oligos), is becoming an increasingly important tool in molecular and synthetic biology. The length, quality and cost of gene synthesis are limited by errors produced during oligo synthesis and subsequent assembly. Enzymatic error correction methods are cost-effective means to ameliorate errors in gene synthesis. Previous analyses of these methods relied on cloning and Sanger sequencing to evaluate their efficiencies, limiting quantitative assessment. Here, we develop a method to quantify errors in synthetic DNA by next-generation sequencing. We analyzed errors in model gene assemblies and systematically compared six different error correction enzymes across 11 conditions. We find that ErrASE and T7 Endonuclease I are the most effective at decreasing average error rates (up to 5.8-fold relative to the input), whereas MutS is the best for increasing the number of perfect assemblies (up to 25.2-fold). We are able to quantify differential specificities, such as that ErrASE preferentially corrects C/G transversions whereas T7 Endonuclease I preferentially corrects A/T transversions. More generally, this experimental and computational pipeline is a fast, scalable and extensible way to analyze errors in gene assemblies, to profile error correction methods, and to benchmark DNA synthesis methods.

  1. The effect of the Earth's oblate spheroid shape on the accuracy of a time-of-arrival lightning ground strike locating system

    NASA Technical Reports Server (NTRS)

    Casper, Paul W.; Bent, Rodney B.

    1991-01-01

    The algorithm used in previous-technology time-of-arrival lightning mapping systems was based on the assumption that the earth is a perfect sphere. These systems yield highly accurate lightning locations, which is their major strength. However, extensive analysis of tower strike data has revealed occasionally significant (one to two kilometer) systematic offset errors which are not explained by the usual error sources. It was determined that these systematic errors are dramatically reduced (in some cases) when the oblate shape of the earth is taken into account. The oblate spheroid correction algorithm and a case example are presented.
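
    The size of the effect is easy to check by comparing a spherical-earth distance with a geodesic on a reference ellipsoid. This sketch uses a haversine great-circle distance against pyproj's WGS-84 geodesic (the use of pyproj and the coordinates are assumptions; any geodesic library would do).

    ```python
    import numpy as np
    from pyproj import Geod

    def haversine_km(lat1, lon1, lat2, lon2, r_km=6371.0):
        """Great-circle distance on a spherical earth."""
        p1, p2 = np.radians(lat1), np.radians(lat2)
        dp, dl = p2 - p1, np.radians(lon2 - lon1)
        a = np.sin(dp / 2) ** 2 + np.cos(p1) * np.cos(p2) * np.sin(dl / 2) ** 2
        return 2 * r_km * np.arcsin(np.sqrt(a))

    # Roughly north-south baseline, where oblateness matters most.
    lat1, lon1, lat2, lon2 = 25.0, -80.5, 30.5, -81.7
    _, _, dist_m = Geod(ellps="WGS84").inv(lon1, lat1, lon2, lat2)
    d_sphere = haversine_km(lat1, lon1, lat2, lon2)
    print(f"sphere: {d_sphere:.3f} km, ellipsoid: {dist_m / 1e3:.3f} km, "
          f"difference: {abs(d_sphere - dist_m / 1e3) * 1e3:.0f} m")
    ```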

  2. Error reduction and parameter optimization of the TAPIR method for fast T1 mapping.

    PubMed

    Zaitsev, M; Steinhoff, S; Shah, N J

    2003-06-01

    A methodology is presented for the reduction of both systematic and random errors in T(1) determination using TAPIR, a Look-Locker-based fast T(1) mapping technique. The relations between various sequence parameters were carefully investigated in order to develop recipes for choosing optimal sequence parameters. Theoretical predictions for the optimal flip angle were verified experimentally. Inversion pulse imperfections were identified as the main source of systematic errors in T(1) determination with TAPIR. An effective remedy is demonstrated which includes extension of the measurement protocol to include a special sequence for mapping the inversion efficiency itself.
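
    For a Look-Locker-type readout such as TAPIR builds on, the signal recovery is commonly fitted with the three-parameter model S(t) = A − B·exp(−t/T1*), after which the apparent T1* is corrected to the true T1 via the standard Look-Locker relation T1 = T1*·(B/A − 1). Sequence-specific details of TAPIR are not modelled here; the data below are synthetic.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def model(t, A, B, t1_star):
        return A - B * np.exp(-t / t1_star)

    rng = np.random.default_rng(10)
    t = np.linspace(0.05, 4.0, 40)                  # sampling times (s)
    A, B, t1_true = 1.0, 1.9, 1.2                   # B/A ~ 2 after inversion
    t1_star_true = t1_true / (B / A - 1.0)
    sig = model(t, A, B, t1_star_true) + rng.normal(0, 0.01, t.size)

    popt, _ = curve_fit(model, t, sig, p0=(1.0, 2.0, 1.0))
    A_f, B_f, t1s_f = popt
    print(f"T1* = {t1s_f:.3f} s -> corrected T1 = "
          f"{t1s_f * (B_f / A_f - 1):.3f} s (true {t1_true} s)")
    ```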

  3. Technical Basis for Evaluating Software-Related Common-Cause Failures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Muhlheim, Michael David; Wood, Richard

    2016-04-01

    The instrumentation and control (I&C) system architecture at a nuclear power plant (NPP) incorporates protections against common-cause failures (CCFs) through the use of diversity and defense-in-depth. Even for well-established analog-based I&C system designs, the potential for CCFs of multiple systems (or redundancies within a system) constitutes a credible threat to defeating the defense-in-depth provisions within the I&C system architectures. The integration of digital technologies into the I&C systems provides many advantages compared to the aging analog systems with respect to reliability, maintenance, operability, and cost effectiveness. However, maintaining the diversity and defense-in-depth for both the hardware and software within the digital system is challenging. In fact, the introduction of digital technologies may actually increase the potential for CCF vulnerabilities because of the introduction of undetected systematic faults. These systematic faults are defined as a “design fault located in a software component” and, at a high level, are predominantly the result of (1) errors in the requirement specification, (2) inadequate provisions to account for design limits (e.g., environmental stress), or (3) technical faults incorporated in the internal system (or architectural) design or implementation. Other technology-neutral CCF concerns include hardware design errors, equipment qualification deficiencies, installation or maintenance errors, and instrument loop scaling and setpoint mistakes.

  4. Light meson form factors at high Q2 from lattice QCD

    NASA Astrophysics Data System (ADS)

    Koponen, Jonna; Zimermmane-Santos, André; Davies, Christine; Lepage, G. Peter; Lytle, Andrew

    2018-03-01

    Measurements and theoretical calculations of meson form factors are essential for our understanding of internal hadron structure and QCD, the dynamics that bind the quarks in hadrons. The pion electromagnetic form factor has been measured at small space-like momentum transfer |q2| < 0.3 GeV2 by pion scattering from atomic electrons and at values up to 2.5 GeV2 by scattering electrons from the pion cloud around a proton. On the other hand, in the limit of very large (or infinite) Q2 = -q2, perturbation theory is applicable. This leaves a gap at intermediate Q2 where the form factors are not known. As part of its 12 GeV upgrade Jefferson Lab will measure pion and kaon form factors in this intermediate region, up to Q2 of 6 GeV2. This is then an ideal opportunity for lattice QCD to make an accurate prediction ahead of the experimental results. Lattice QCD provides a from-first-principles approach to calculate form factors, and the challenge here is to control the statistical and systematic uncertainties as errors grow when going to higher Q2 values. Here we report on a calculation that tests the method using an ηs meson, a 'heavy pion' made of strange quarks, and also present preliminary results for kaon and pion form factors. We use the nf = 2 + 1 + 1 ensembles made by the MILC collaboration and Highly Improved Staggered Quarks, which allows us to obtain high statistics. The HISQ action is also designed to have small discretisation errors. Using several light quark masses and lattice spacings allows us to control the chiral and continuum extrapolation and keep systematic errors in check.

  5. Identification of the feedforward component in manual control with predictable target signals.

    PubMed

    Drop, Frank M; Pool, Daan M; Damveld, Herman J; van Paassen, Marinus M; Mulder, Max

    2013-12-01

    In the manual control of a dynamic system, the human controller (HC) often follows a visible and predictable reference path. Compared with a purely feedback control strategy, performance can be improved by making use of this knowledge of the reference. The operator could effectively introduce feedforward control in conjunction with a feedback path to compensate for errors, as hypothesized in the literature. However, feedforward behavior has never been identified from experimental data, nor have the hypothesized models been validated. This paper investigates human control behavior in pursuit tracking of a predictable reference signal while being perturbed by a quasi-random multisine disturbance signal. An experiment was done in which the relative strengths of the target and disturbance signals were systematically varied. The anticipated changes in control behavior were studied by means of an ARX model analysis and by fitting three parametric HC models: two different feedback models and a combined feedforward and feedback model. The ARX analysis shows that the experiment participants employed control action on both the error and the target signal. The control action on the target was similar to the inverse of the system dynamics. Model fits show that this behavior is modeled best by the combined feedforward and feedback model.
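
    As a toy version of this identification problem (my own construction, not the authors' model or data), the sketch below simulates a controller that acts on both the tracking error and a one-step preview of the target for a single-integrator plant, for which the ideal feedforward is exactly the inverse dynamics, and then recovers both gains by least squares, the same principle underlying the ARX analysis described above.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2000
target = np.cumsum(rng.standard_normal(n)) * 0.01   # predictable reference
dist = 0.05 * rng.standard_normal(n)                # stand-in for the multisine disturbance

# Plant: single integrator y[k+1] = y[k] + u[k] + disturbance.
# Controller: u = Kp*error + Kff*(inverse plant dynamics applied to the target).
Kp, Kff = 0.5, 1.0
y, u = np.zeros(n), np.zeros(n)
for k in range(n - 1):
    e = target[k] - y[k]
    u[k] = Kp * e + Kff * (target[k + 1] - target[k])  # feedforward = inverse dynamics
    y[k + 1] = y[k] + u[k] + dist[k]

# Identification: regress u on the error and on the target increment
Phi = np.column_stack([target[:-1] - y[:-1], np.diff(target)])
theta, *_ = np.linalg.lstsq(Phi, u[:-1], rcond=None)
print(f"identified Kp = {theta[0]:.3f}, Kff = {theta[1]:.3f} (true: 0.5, 1.0)")
```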

  6. IPTV multicast with peer-assisted lossy error control

    NASA Astrophysics Data System (ADS)

    Li, Zhi; Zhu, Xiaoqing; Begen, Ali C.; Girod, Bernd

    2010-07-01

    Emerging IPTV technology uses source-specific IP multicast to deliver television programs to end-users. To provide reliable IPTV services over the error-prone DSL access networks, a combination of multicast forward error correction (FEC) and unicast retransmissions is employed to mitigate the impulse noises in DSL links. In existing systems, the retransmission function is provided by the Retransmission Servers sitting at the edge of the core network. In this work, we propose an alternative distributed solution where the burden of packet loss repair is partially shifted to the peer IP set-top boxes. Through the Peer-Assisted Repair (PAR) protocol, we demonstrate how the packet repairs can be delivered in a timely, reliable and decentralized manner using a combination of server-peer coordination and redundancy of repairs. We also show that this distributed protocol can be seamlessly integrated with an application-layer source-aware error protection mechanism called forward and retransmitted Systematic Lossy Error Protection (SLEP/SLEPr). Simulations show that this joint PAR-SLEP/SLEPr framework not only effectively mitigates the bottleneck experienced by the Retransmission Servers, thus greatly enhancing the scalability of the system, but also efficiently improves the resistance to impulse noise.

  7. Why GPS makes distances bigger than they are

    PubMed Central

    Ranacher, Peter; Brunauer, Richard; Trutschnig, Wolfgang; Van der Spek, Stefan; Reich, Siegfried

    2016-01-01

    Global navigation satellite systems such as the Global Positioning System (GPS) are among the most important sensors for movement analysis. GPS is widely used to record the trajectories of vehicles, animals and human beings. However, all GPS movement data are affected by both measurement and interpolation errors. In this article we show that measurement error causes a systematic bias in distances recorded with a GPS; the distance between two points recorded with a GPS is – on average – bigger than the true distance between these points. This systematic ‘overestimation of distance’ becomes relevant if the influence of interpolation error can be neglected, which in practice is the case for movement sampled at high frequencies. We provide a mathematical explanation of this phenomenon and illustrate that it functionally depends on the autocorrelation of GPS measurement error (C). We argue that C can be interpreted as a quality measure for movement data recorded with a GPS. If there is a strong autocorrelation between any two consecutive position estimates, they have very similar errors, which cancel out when average speed, distance or direction is calculated along the trajectory. Based on our theoretical findings we introduce a novel approach to determine C in real-world GPS movement data sampled at high frequencies. We apply our approach to pedestrian trajectories and car trajectories. We found that the measurement error in the data was strongly spatially and temporally autocorrelated and give a quality estimate of the data. Most importantly, our findings are not limited to GPS alone. The systematic bias and its implications are bound to occur in any movement data collected with absolute positioning if interpolation error can be neglected. PMID:27019610
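
    The bias is easy to reproduce numerically. In the illustrative simulation below (all parameter values are mine), white per-fix error grossly inflates the measured length of a straight 10-km track, whereas a strongly autocorrelated AR(1) error, where consecutive fixes share almost the same error, leaves it nearly unbiased; that cancellation is exactly the role the paper assigns to C.

```python
import numpy as np

rng = np.random.default_rng(42)
n, step = 10_000, 1.0                 # 10,000 segments of 1 m along a line
true_pts = np.column_stack([np.arange(n + 1) * step, np.zeros(n + 1)])

def measured_length(noise):
    pts = true_pts + noise
    return np.linalg.norm(np.diff(pts, axis=0), axis=1).sum()

sigma = 3.0   # per-coordinate GPS error (m)

# White (uncorrelated) measurement error: strong overestimation
white = sigma * rng.standard_normal((n + 1, 2))

# AR(1) error with phi close to 1: consecutive fixes share almost the
# same error, so it largely cancels in the segment differences.
phi = 0.99
ar = np.empty((n + 1, 2))
ar[0] = sigma * rng.standard_normal(2)
for k in range(1, n + 1):
    ar[k] = phi * ar[k - 1] + sigma * np.sqrt(1 - phi**2) * rng.standard_normal(2)

print(f"true length:          {n * step:.0f} m")
print(f"white-noise estimate: {measured_length(white):.0f} m")
print(f"AR(1) estimate:       {measured_length(ar):.0f} m")
```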

  8. Using Analysis Increments (AI) to Estimate and Correct Systematic Errors in the Global Forecast System (GFS) Online

    NASA Astrophysics Data System (ADS)

    Bhargava, K.; Kalnay, E.; Carton, J.; Yang, F.

    2017-12-01

    Systematic forecast errors, arising from model deficiencies, form a significant portion of the total forecast error in weather prediction models like the Global Forecast System (GFS). While much effort has been expended to improve models, substantial model error remains. The aim here is to (i) estimate the model deficiencies in the GFS that lead to systematic forecast errors, and (ii) implement an online correction scheme (i.e., within the model) for the GFS following the methodology of Danforth et al. [2007] and Danforth and Kalnay [2008, GRL]. Analysis increments represent the corrections that new observations make on, in this case, the 6-hr forecast in the analysis cycle. Model bias corrections are estimated from the time average of the analysis increments divided by 6 hr, assuming that initial model errors grow linearly and first ignoring the impact of observation bias. During 2012-2016, seasonal means of the 6-hr model bias are generally robust despite changes in model resolution and data assimilation systems, and their broad continental scales explain their insensitivity to model resolution. The daily bias dominates the sub-monthly analysis increments and consists primarily of diurnal and semidiurnal components, also requiring a low-dimensional correction. Analysis increments in 2015 and 2016 are reduced over oceans, which is attributed to improvements in the specification of the SSTs. These results encourage application of online correction, as suggested by Danforth and Kalnay, for mean, seasonal, diurnal and semidiurnal model biases in the GFS to reduce both systematic and random errors. As the error growth in the short term is still linear, estimated model bias corrections can be added as a forcing term in the model tendency equation to correct online. Preliminary experiments with the GFS, correcting temperature and specific humidity online, show a reduction in model bias in the 6-hr forecast. This approach can then be used to guide and optimize the design of sub-grid scale physical parameterizations, more accurate discretization of the model dynamics, boundary conditions, radiative transfer codes, and other potential model improvements which can then replace the empirical correction scheme. The analysis increments also provide guidance in testing new physical parameterizations.
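
    A scalar caricature of the estimation step (entirely schematic; the real correction operates on full three-dimensional GFS fields) is sketched below: a model with a constant tendency bias is cycled against noisy observations of a drifting truth, and the time mean of the analysis increments divided by the 6-hr window recovers the tendency correction that online schemes add as a forcing term.

```python
import numpy as np

rng = np.random.default_rng(7)
dt = 6.0                              # assimilation window (h)
true_tend, model_bias = 0.2, -0.05    # truth tendency and model deficiency

def forecast(x0, correction=0.0):
    """6-h forecast with a biased tendency plus an optional online correction."""
    return x0 + (true_tend + model_bias + correction) * dt

x_truth, x_model = 0.0, 0.0
increments = []
for _ in range(400):                  # forecast/analysis cycling
    x_truth += true_tend * dt
    xf = forecast(x_model)
    obs = x_truth + rng.normal(0.0, 0.1)
    analysis = xf + 0.8 * (obs - xf)  # crude analysis: nudge toward the obs
    increments.append(analysis - xf)  # the analysis increment
    x_model = analysis

correction = np.mean(increments) / dt # time-mean AI divided by 6 h
print(f"estimated tendency correction: {correction:+.4f} (needed: {-model_bias:+.4f})")
```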

  9. ILRS Activities in Monitoring Systematic Errors in SLR Data

    NASA Astrophysics Data System (ADS)

    Pavlis, E. C.; Luceri, V.; Kuzmicz-Cieslak, M.; Bianco, G.

    2017-12-01

    The International Laser Ranging Service (ILRS) contributes to ITRF development unique information that only Satellite Laser Ranging—SLR is sensitive to: the definition of the origin, and, in equal parts with VLBI, the scale of the model. For the development of ITRF2014, the ILRS analysts adopted a revision of the internal standards and procedures in generating our contribution from the eight ILRS Analysis Centers. The improved results for the ILRS components were reflected in the resulting new time series of the ITRF origin and scale, showing insignificant trends and tighter scatter. This effort was further extended after the release of ITRF2014, with the execution of a Pilot Project (PP) in the 2016-2017 timeframe that demonstrated the robust estimation of persistent systematic errors at the millimeter level. The ILRS ASC is now turning this into an operational tool to monitor station performance and to generate a history of systematics at each station, to be used with each re-analysis for future ITRF model developments. This is part of a broader ILRS effort to improve the quality control of the data collection process as well as that of our products. To this end, the ILRS has established a "Quality Control Board—QCB" that comprises members from the analysis and engineering groups, the Central Bureau, and even user groups with special interests. The QCB meets by telecon monthly, oversees the various ongoing projects, and develops ideas for new tools and future products. This presentation will focus on the main topic with an update on the results so far, the schedule for the near future and its operational implementation, along with a brief description of upcoming new ILRS products.

  10. Local error estimates for adaptive simulation of the Reaction–Diffusion Master Equation via operator splitting

    PubMed Central

    Hellander, Andreas; Lawson, Michael J; Drawert, Brian; Petzold, Linda

    2015-01-01

    The efficiency of exact simulation methods for the reaction-diffusion master equation (RDME) is severely limited by the large number of diffusion events if the mesh is fine or if diffusion constants are large. Furthermore, inherent properties of exact kinetic-Monte Carlo simulation methods limit the efficiency of parallel implementations. Several approximate and hybrid methods have appeared that enable more efficient simulation of the RDME. A common feature to most of them is that they rely on splitting the system into its reaction and diffusion parts and updating them sequentially over a discrete timestep. This use of operator splitting enables more efficient simulation but it comes at the price of a temporal discretization error that depends on the size of the timestep. So far, existing methods have not attempted to estimate or control this error in a systematic manner. This makes the solvers hard to use for practitioners since they must guess an appropriate timestep. It also makes the solvers potentially less efficient than if the timesteps are adapted to control the error. Here, we derive estimates of the local error and propose a strategy to adaptively select the timestep when the RDME is simulated via a first order operator splitting. While the strategy is general and applicable to a wide range of approximate and hybrid methods, we exemplify it here by extending a previously published approximate method, the Diffusive Finite-State Projection (DFSP) method, to incorporate temporal adaptivity. PMID:26865735
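
    To make the adaptive idea concrete, the sketch below applies it to a deterministic reaction-diffusion caricature (made-up kinetics, not the stochastic RDME solver of the paper): first-order Lie splitting, a local error estimate obtained by comparing one full step against two half steps, and an accept/shrink/grow rule for the timestep.

```python
import numpy as np

def step_reaction(u, dt):
    """Explicit Euler step of a placeholder reaction term."""
    return u + dt * (-u + u**2 / (1 + u**2))

def step_diffusion(u, dt, d=1.0):
    """Explicit Euler step of diffusion on a periodic 1-D grid (dx = 1)."""
    lap = np.roll(u, 1) - 2 * u + np.roll(u, -1)
    return u + dt * d * lap

def lie_step(u, dt):
    """First-order Lie splitting: reaction substep, then diffusion substep."""
    return step_diffusion(step_reaction(u, dt), dt)

def adaptive_split(u, t_end, dt=1e-2, tol=1e-5):
    t = 0.0
    while t < t_end:
        dt = min(dt, t_end - t)
        full = lie_step(u, dt)
        half = lie_step(lie_step(u, dt / 2), dt / 2)
        err = np.max(np.abs(full - half))       # local error estimate
        if err <= tol:
            u, t = half, t + dt                 # accept the finer solution
        # grow or shrink dt from the first-order error model err ~ dt^2
        dt *= 0.9 * min(2.0, max(0.2, np.sqrt(tol / max(err, 1e-16))))
    return u

u0 = np.exp(-np.linspace(-3, 3, 64)**2)
print(f"max of solution at t=1: {adaptive_split(u0, 1.0).max():.4f}")
```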

  11. Local error estimates for adaptive simulation of the Reaction-Diffusion Master Equation via operator splitting.

    PubMed

    Hellander, Andreas; Lawson, Michael J; Drawert, Brian; Petzold, Linda

    2014-06-01

    The efficiency of exact simulation methods for the reaction-diffusion master equation (RDME) is severely limited by the large number of diffusion events if the mesh is fine or if diffusion constants are large. Furthermore, inherent properties of exact kinetic-Monte Carlo simulation methods limit the efficiency of parallel implementations. Several approximate and hybrid methods have appeared that enable more efficient simulation of the RDME. A common feature to most of them is that they rely on splitting the system into its reaction and diffusion parts and updating them sequentially over a discrete timestep. This use of operator splitting enables more efficient simulation but it comes at the price of a temporal discretization error that depends on the size of the timestep. So far, existing methods have not attempted to estimate or control this error in a systematic manner. This makes the solvers hard to use for practitioners since they must guess an appropriate timestep. It also makes the solvers potentially less efficient than if the timesteps are adapted to control the error. Here, we derive estimates of the local error and propose a strategy to adaptively select the timestep when the RDME is simulated via a first order operator splitting. While the strategy is general and applicable to a wide range of approximate and hybrid methods, we exemplify it here by extending a previously published approximate method, the Diffusive Finite-State Projection (DFSP) method, to incorporate temporal adaptivity.

  12. Assessment and quantification of patient set-up errors in nasopharyngeal cancer patients and their biological and dosimetric impact in terms of generalized equivalent uniform dose (gEUD), tumour control probability (TCP) and normal tissue complication probability (NTCP)

    PubMed Central

    Marcie, S; Fellah, M; Chami, S; Mekki, F

    2015-01-01

    Objective: The aim of this study is to assess and quantify patients' set-up errors using an electronic portal imaging device and to evaluate their dosimetric and biological impact in terms of generalized equivalent uniform dose (gEUD) on predictive models, such as the tumour control probability (TCP) and the normal tissue complication probability (NTCP). Methods: 20 patients treated for nasopharyngeal cancer were enrolled in the radiotherapy–oncology department of HCA. Systematic and random errors were quantified. The dosimetric and biological impact of these set-up errors on the target volume and the organ-at-risk (OAR) coverage were assessed using calculation of dose–volume histograms, gEUD, TCP and NTCP. For this purpose, an in-house software was developed and used. Results: The standard deviations (1 SD) of the systematic and random set-up errors were calculated for the lateral and subclavicular fields and gave the following results: Σ = 0.63 ± (0.42) mm and σ = 3.75 ± (0.79) mm, respectively. Thus a planning organ at risk volume (PRV) margin of 3 mm was defined around the OARs, and a 5-mm margin was used around the clinical target volume. The gEUD, TCP and NTCP calculations obtained with and without set-up errors have shown increased values for tumour, where ΔgEUD (tumour) = 1.94% Gy (p = 0.00721) and ΔTCP = 2.03%. The toxicity of OARs was quantified using gEUD and NTCP. The values of ΔgEUD (OARs) vary from 0.78% to 5.95% in the case of the brainstem and the optic chiasm, respectively. The corresponding ΔNTCP varies from 0.15% to 0.53%, respectively. Conclusion: The quantification of set-up errors has a dosimetric and biological impact on the tumour and on the OARs. The developed in-house software using the concept of gEUD, TCP and NTCP biological models has been successfully used in this study. It can also be used to optimize the treatment plan established for our patients. Advances in knowledge: The gEUD, TCP and NTCP may be more suitable tools to assess treatment plans before treating the patients. PMID:25882689
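
    For context on how such Σ and σ values become margins (the abstract does not state which recipe the authors used, so the van Herk formula below is only an assumption for illustration), the most common population-based recipe is M = 2.5Σ + 0.7σ:

```python
def van_herk_margin(sigma_sys_mm, sigma_rand_mm):
    """CTV-to-PTV margin (mm) from the van Herk recipe M = 2.5*Sigma + 0.7*sigma."""
    return 2.5 * sigma_sys_mm + 0.7 * sigma_rand_mm

# 1-SD values reported in the abstract: Sigma = 0.63 mm, sigma = 3.75 mm
print(f"margin = {van_herk_margin(0.63, 3.75):.1f} mm")  # ~4.2 mm, cf. the 5 mm used
```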

  13. Towards a nonperturbative calculation of weak Hamiltonian Wilson coefficients

    DOE PAGES

    Bruno, Mattia; Lehner, Christoph; Soni, Amarjit

    2018-04-20

    Here, we propose a method to compute the Wilson coefficients of the weak effective Hamiltonian to all orders in the strong coupling constant using Lattice QCD simulations. We perform our calculations adopting an unphysically light weak boson mass of around 2 GeV. We demonstrate that systematic errors for the Wilson coefficients C1 and C2, related to the current-current four-quark operators, can be controlled and present a path towards precise determinations in subsequent works.

  14. Towards a nonperturbative calculation of weak Hamiltonian Wilson coefficients

    NASA Astrophysics Data System (ADS)

    Bruno, Mattia; Lehner, Christoph; Soni, Amarjit; Rbc; Ukqcd Collaborations

    2018-04-01

    We propose a method to compute the Wilson coefficients of the weak effective Hamiltonian to all orders in the strong coupling constant using Lattice QCD simulations. We perform our calculations adopting an unphysically light weak boson mass of around 2 GeV. We demonstrate that systematic errors for the Wilson coefficients C1 and C2 , related to the current-current four-quark operators, can be controlled and present a path towards precise determinations in subsequent works.

  15. Primordial Inflation Polarization Explorer: Status and Plans

    NASA Technical Reports Server (NTRS)

    Kogut, Alan

    2009-01-01

    The Primordial Inflation Polarization Explorer is a balloon-borne instrument to measure the polarization of the cosmic microwave background in order to detect the characteristic signature of gravity waves created during an inflationary epoch in the early universe. PIPER combines cold (1.5 K) optics, 5120 bolometric detectors, and rapid polarization modulation using VPM grids to achieve both high sensitivity and excellent control of systematic errors. I will discuss the current status and plans for the PIPER instrument.

  16. Towards a nonperturbative calculation of weak Hamiltonian Wilson coefficients

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bruno, Mattia; Lehner, Christoph; Soni, Amarjit

    Here, we propose a method to compute the Wilson coefficients of the weak effective Hamiltonian to all orders in the strong coupling constant using Lattice QCD simulations. We perform our calculations adopting an unphysically light weak boson mass of around 2 GeV. We demonstrate that systematic errors for the Wilson coefficients C1 and C2, related to the current-current four-quark operators, can be controlled and present a path towards precise determinations in subsequent works.

  17. mtDNAmanager: a Web-based tool for the management and quality analysis of mitochondrial DNA control-region sequences

    PubMed Central

    Lee, Hwan Young; Song, Injee; Ha, Eunho; Cho, Sung-Bae; Yang, Woo Ick; Shin, Kyoung-Jin

    2008-01-01

    Background For the past few years, scientific controversy has surrounded the large number of errors in forensic and literature mitochondrial DNA (mtDNA) data. However, recent research has shown that using mtDNA phylogeny and referring to known mtDNA haplotypes can be useful for checking the quality of sequence data. Results We developed a Web-based bioinformatics resource "mtDNAmanager" that offers a convenient interface supporting the management and quality analysis of mtDNA sequence data. The mtDNAmanager performs computations on mtDNA control-region sequences to estimate the most-probable mtDNA haplogroups and retrieves similar sequences from a selected database. By the phased designation of the most-probable haplogroups (both expected and estimated haplogroups), mtDNAmanager enables users to systematically detect errors whilst allowing for confirmation of the presence of clear key diagnostic mutations and accompanying mutations. The query tools of mtDNAmanager also facilitate database screening with two options of "match" and "include the queried nucleotide polymorphism". In addition, mtDNAmanager provides Web interfaces for users to manage and analyse their own data in batch mode. Conclusion The mtDNAmanager will provide systematic routines for mtDNA sequence data management and analysis via easily accessible Web interfaces, and thus should be very useful for population, medical and forensic studies that employ mtDNA analysis. mtDNAmanager can be accessed at . PMID:19014619

  18. Improving qPCR telomere length assays: Controlling for well position effects increases statistical power.

    PubMed

    Eisenberg, Dan T A; Kuzawa, Christopher W; Hayes, M Geoffrey

    2015-01-01

    Telomere length (TL) is commonly measured using quantitative PCR (qPCR). Although easier than the Southern blot of terminal restriction fragments (TRF) TL measurement method, one drawback of qPCR is that it introduces greater measurement error and thus reduces the statistical power of analyses. To address a potential source of measurement error, we consider the effect of well position on qPCR TL measurements. qPCR TL data from 3,638 people run on a Bio-Rad iCycler iQ are reanalyzed here. To evaluate measurement validity, correspondence with TRF, age, and between mother and offspring are examined. First, we present evidence for systematic variation in qPCR TL measurements in relation to thermocycler well position. Controlling for these well-position effects consistently improves measurement validity and yields estimated improvements in statistical power equivalent to increasing sample sizes by 16%. We additionally evaluated the linearity of the relationships between telomere and single-copy gene control amplicons and between qPCR and TRF measures. We find that, unlike some previous reports, our data exhibit linear relationships. We introduce the standard error in percent, a superior method for quantifying measurement error as compared to the commonly used coefficient of variation. Using this measure, we find that excluding samples with high measurement error does not improve measurement validity in our study. Future studies using block-based thermocyclers should consider well-position effects. Since additional information can be gleaned from well-position corrections, rerunning analyses of previous results with well-position correction could serve as an independent test of the validity of these results. © 2015 Wiley Periodicals, Inc.
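
    The paper's exact correction model is not given in this record; one simple way to remove well-position artifacts, sketched below on a simulated plate (the 96-well layout and all effect sizes are my choices), is a two-way fixed-effects correction that subtracts row and column means.

```python
import numpy as np

rng = np.random.default_rng(3)
n_rows, n_cols = 8, 12                                  # 96-well layout
true_tl = rng.normal(1.0, 0.15, (n_rows, n_cols))       # 'true' T/S ratios
row_eff = np.linspace(-0.15, 0.15, n_rows)[:, None]     # thermal gradient artifact
col_eff = np.linspace(0.12, -0.12, n_cols)[None, :]
measured = true_tl + row_eff + col_eff + rng.normal(0, 0.03, (n_rows, n_cols))

# Two-way fixed-effects correction: remove row and column means
corrected = (measured
             - measured.mean(axis=1, keepdims=True)     # row effects
             - measured.mean(axis=0, keepdims=True)     # column effects
             + measured.mean())

def validity(x):
    """Correlation with the 'true' values, as a stand-in for validity."""
    return np.corrcoef(x.ravel(), true_tl.ravel())[0, 1]

print(f"raw r = {validity(measured):.3f}, corrected r = {validity(corrected):.3f}")
```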

  19. Medication errors in chemotherapy preparation and administration: a survey conducted among oncology nurses in Turkey.

    PubMed

    Ulas, Arife; Silay, Kamile; Akinci, Sema; Dede, Didem Sener; Akinci, Muhammed Bulent; Sendur, Mehmet Ali Nahit; Cubukcu, Erdem; Coskun, Hasan Senol; Degirmenci, Mustafa; Utkan, Gungor; Ozdemir, Nuriye; Isikdogan, Abdurrahman; Buyukcelik, Abdullah; Inanc, Mevlude; Bilici, Ahmet; Odabasi, Hatice; Cihan, Sener; Avci, Nilufer; Yalcin, Bulent

    2015-01-01

    Medication errors in oncology may cause severe clinical problems due to the low therapeutic indices and high toxicity of chemotherapeutic agents. We aimed to investigate unintentional medication errors and underlying factors during chemotherapy preparation and administration based on a systematic survey conducted to reflect oncology nurses' experience. This study was conducted in 18 adult chemotherapy units with the volunteer participation of 206 nurses. A survey was developed by the primary investigators; medication errors (MEs) were defined as preventable errors during the prescription, ordering, preparation or administration of medication. The survey consisted of 4 parts: demographic features of nurses; workload of chemotherapy units; errors and their estimated monthly number during chemotherapy preparation and administration; and evaluation of the possible factors responsible for MEs. The survey was conducted by face-to-face interview and data analyses were performed with descriptive statistics. Chi-square or Fisher exact tests were used for comparative analysis of categorical data. Some 83.4% of the 210 nurses reported one or more than one error during chemotherapy preparation and administration. Prescribing or ordering of wrong doses by physicians (65.7%) and noncompliance with administration sequences during chemotherapy administration (50.5%) were the most common errors. The most common estimated average monthly error was not following the administration sequence of the chemotherapeutic agents (4.1 times/month, range 1-20). The most important underlying reasons for medication errors were heavy workload (49.7%) and insufficient staff (36.5%). Our findings suggest that the probability of medication error is very high during chemotherapy preparation and administration, the most common involving prescribing and ordering errors. Further studies must address strategies to minimize medication error in patients receiving chemotherapy, determine sufficient protective measures and establish multistep control mechanisms.

  20. Automated optimal coordination of multiple-DOF neuromuscular actions in feedforward neuroprostheses.

    PubMed

    Lujan, J Luis; Crago, Patrick E

    2009-01-01

    This paper describes a new method for designing feedforward controllers for multiple-muscle, multiple-DOF, motor-system neural prostheses. The design process is based on experimental measurement of the forward input/output properties of the neuromechanical system and numerical optimization of stimulation patterns to meet muscle coactivation criteria, thus resolving the muscle redundancy (i.e., overcontrol) and coupled-DOF problems inherent in neuromechanical systems. We designed feedforward controllers to control the isometric forces at the tip of the thumb in two directions during stimulation of three thumb muscles as a model system. We tested the method experimentally in ten able-bodied individuals and one patient with spinal cord injury. Good control of isometric force in both DOFs was observed, with rms errors less than 10% of the force range in seven experiments and statistically significant correlations between the actual and target forces in all ten experiments. Systematic bias and slope errors were observed in a few experiments, likely due to neuromuscular fatigue. Overall, the tests demonstrated the ability of a general design approach to satisfy both control and coactivation criteria in multiple-muscle, multiple-axis neuromechanical systems, an approach applicable to a wide range of neuromechanical systems and stimulation electrodes.

  1. The Observational Determination of the Primordial Helium Abundance: a Y2K Status Report

    NASA Astrophysics Data System (ADS)

    Skillman, Evan D.

    I review observational progress and assess the current state of the determination of the primordial helium abundance, Yp. At present there are two determinations with non-overlapping errors. My impression is that the errors have been under-estimated in both studies. I review recent work on error assessment and give suggestions for decreasing systematic errors in future studies.

  2. Error identification in a high-volume clinical chemistry laboratory: Five-year experience.

    PubMed

    Jafri, Lena; Khan, Aysha Habib; Ghani, Farooq; Shakeel, Shahid; Raheem, Ahmed; Siddiqui, Imran

    2015-07-01

    Quality indicators for assessing the performance of a laboratory require a systematic and continuous approach in collecting and analyzing data. The aim of this study was to determine the frequency of errors utilizing the quality indicators in a clinical chemistry laboratory and to convert errors to the Sigma scale. Five-year quality indicator data of a clinical chemistry laboratory were evaluated to describe the frequency of errors. An 'error' was defined as a defect during the entire testing process, from the time the requisition was raised and phlebotomy was done until result dispatch. An indicator with a Sigma value of 4 was considered good, but a process for which the Sigma value was 5 (i.e. 99.977% error-free) was considered well controlled. In the five-year period, a total of 6,792,020 specimens were received in the laboratory. Among a total of 17,631,834 analyses, 15.5% were from within the hospital. The total error rate was 0.45%, and of all the quality indicators used in this study the average Sigma level was 5.2. Three indicators - visible hemolysis, failure of proficiency testing and delay in stat tests - were below 5 on the Sigma scale and highlight the need to rigorously monitor these processes. Using Six Sigma metrics, quality in a clinical laboratory can be monitored more effectively, and benchmarks can be set for improving efficiency.
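
    The error-rate-to-Sigma conversion used in such audits can be sketched in a few lines (with the conventional 1.5-sigma long-term shift; the input numbers below are taken from the abstract):

```python
from scipy.stats import norm

def sigma_level(defect_rate, shift=1.5):
    """Short-term sigma level with the conventional 1.5-sigma shift."""
    return norm.ppf(1.0 - defect_rate) + shift

print(f"{sigma_level(0.0045):.2f}")        # 0.45% total error rate -> ~4.1 sigma
print(f"{sigma_level(1 - 0.99977):.2f}")   # 99.977% error-free -> ~5.0 sigma
```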

  3. Improved Quality in Aerospace Testing Through the Modern Design of Experiments

    NASA Technical Reports Server (NTRS)

    DeLoach, R.

    2000-01-01

    This paper illustrates how, in the presence of systematic error, the quality of an experimental result can be influenced by the order in which the independent variables are set. It is suggested that in typical experimental circumstances in which systematic errors are significant, the common practice of organizing the set point order of independent variables to maximize data acquisition rate results in a test matrix that fails to produce the highest quality research result. With some care to match the volume of data required to satisfy inference error risk tolerances, it is possible to accept a lower rate of data acquisition and still produce results of higher technical quality (lower experimental error) with less cost and in less time than conventional test procedures, simply by optimizing the sequence in which independent variable levels are set.
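
    A compact way to see the point (a made-up linear calibration with a slow drift standing in for systematic error): setting the independent variable in monotone order, the fastest way to acquire data, confounds the drift with the variable itself and biases the fitted slope, while a randomized set-point order converts the same drift into benign scatter.

```python
import numpy as np

rng = np.random.default_rng(5)
n, true_slope = 40, 2.0
levels = np.linspace(0, 1, n)        # independent-variable set points
drift = np.linspace(0, 1.0, n)       # systematic error growing with *time*

def fitted_slope(order):
    x = levels[order]                # set-point sequence actually run
    y = true_slope * x + drift + rng.normal(0, 0.05, n)
    return np.polyfit(x, y, 1)[0]

sequential = np.arange(n)            # low-to-high order
randomized = rng.permutation(n)
print(f"sequential order: slope = {fitted_slope(sequential):.2f} (true 2.00)")
print(f"randomized order: slope = {fitted_slope(randomized):.2f}")
```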

  4. Detecting Spatial Patterns in Biological Array Experiments

    PubMed Central

    ROOT, DAVID E.; KELLEY, BRIAN P.; STOCKWELL, BRENT R.

    2005-01-01

    Chemical genetic screening and DNA and protein microarrays are among a number of increasingly important and widely used biological research tools that involve large numbers of parallel experiments arranged in a spatial array. It is often difficult to ensure that uniform experimental conditions are present throughout the entire array, and as a result, one often observes systematic spatially correlated errors, especially when array experiments are performed using robots. Here, the authors apply techniques based on the discrete Fourier transform to identify and quantify spatially correlated errors superimposed on a spatially random background. They demonstrate that these techniques are effective in identifying common spatially systematic errors in high-throughput 384-well microplate assay data. In addition, the authors employ a statistical test to allow for automatic detection of such errors. Software tools for using this approach are provided. PMID:14567791
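
    A minimal version of the Fourier screen is sketched below (simulated 384-well plate with an artificial every-fourth-column artifact; the 5-standard-deviation threshold is my own choice): compute the 2-D power spectrum of the mean-centered plate and flag spatial frequencies that stand far above the random background.

```python
import numpy as np

rng = np.random.default_rng(11)
rows, cols = 16, 24                       # 384-well plate
plate = rng.normal(0, 1, (rows, cols))    # spatially random background
plate += 0.8 * np.sin(2 * np.pi * np.arange(cols) / 4)[None, :]  # pipetting artifact

spec = np.abs(np.fft.rfft2(plate - plate.mean()))**2   # 2-D power spectrum
spec[0, 0] = 0.0                                       # drop any residual DC term

thresh = spec.mean() + 5 * spec.std()
for r, c in np.argwhere(spec > thresh):
    print(f"periodic component at row frequency {r}, column frequency {c}")
```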

  5. Evaluation and Comparison of the Processing Methods of Airborne Gravimetry Concerning the Errors Effects on Downward Continuation Results: Case Studies in Louisiana (USA) and the Tibetan Plateau (China).

    PubMed

    Zhao, Qilong; Strykowski, Gabriel; Li, Jiancheng; Pan, Xiong; Xu, Xinyu

    2017-05-25

    Gravity data gaps in mountainous areas are nowadays often filled in with data from airborne gravity surveys. Because of the errors caused by the airborne gravimeter sensors, and because of rough flight conditions, such errors cannot be completely eliminated. The precision of the gravity disturbances generated by airborne gravimetry is around 3-5 mgal. A major obstacle in using airborne gravimetry is the error caused by downward continuation. In order to improve the results, external high-accuracy gravity information, e.g., from surface data, can be used for high-frequency correction, while satellite information can be applied for low-frequency correction. Surface data may be used to reduce the systematic errors, while regularization methods can reduce the random errors in downward continuation. Airborne gravity surveys are sometimes conducted in mountainous areas, and the most extreme area of the world for this type of survey is the Tibetan Plateau. Since there are no high-accuracy surface gravity data available for this area, the above error minimization method involving external gravity data cannot be used. We propose a semi-parametric downward continuation method in combination with regularization to suppress the systematic error effect and the random error effect in the Tibetan Plateau, i.e., without the use of external high-accuracy gravity data. We use a Louisiana airborne gravity dataset from the USA National Oceanic and Atmospheric Administration (NOAA) to demonstrate that the new method works effectively. Furthermore, for the Tibetan Plateau we show that the numerical experiment is also successfully conducted using synthetic Earth Gravitational Model 2008 (EGM08)-derived gravity data contaminated with synthetic errors. The estimated systematic errors generated by the method are close to the simulated values. In addition, we study the relationship between the downward continuation altitudes and the error effect. The analysis results show that the proposed semi-parametric method combined with regularization is efficient for addressing such modelling problems.
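
    Downward continuation is the unstable inverse of upward continuation (spectral attenuation by exp(-kh)), which is why regularization is needed to tame the noise amplification. The sketch below (a synthetic 1-D profile with parameter values of my choosing, not the authors' semi-parametric method) contrasts naive spectral division with a Tikhonov-filtered inversion of the kind the abstract refers to.

```python
import numpy as np

rng = np.random.default_rng(2)
n, dx, h = 256, 1000.0, 4000.0            # samples, spacing (m), flight height (m)
x = np.arange(n) * dx
k = np.abs(np.fft.fftfreq(n, dx)) * 2 * np.pi   # angular wavenumber (rad/m)

# Synthetic ground-level anomaly and its upward-continued, noisy observation
g_ground = np.exp(-((x - 80e3) / 15e3)**2) + 0.6 * np.exp(-((x - 180e3) / 8e3)**2)
g_air = np.real(np.fft.ifft(np.fft.fft(g_ground) * np.exp(-k * h)))
g_air += rng.normal(0, 0.005, n)          # airborne measurement noise

Ga = np.fft.fft(g_air)
naive = np.real(np.fft.ifft(Ga * np.exp(k * h)))    # unregularized: blows up
lam = 1e-3                                          # Tikhonov parameter
filt = np.exp(-k * h) / (np.exp(-2 * k * h) + lam)  # minimizes misfit + lam*norm
regular = np.real(np.fft.ifft(Ga * filt))

for name, est in [("naive", naive), ("regularized", regular)]:
    rms = np.sqrt(np.mean((est - g_ground)**2))
    print(f"{name:12s} RMS error vs ground truth: {rms:.3f}")
```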

  6. Evaluation and Comparison of the Processing Methods of Airborne Gravimetry Concerning the Errors Effects on Downward Continuation Results: Case Studies in Louisiana (USA) and the Tibetan Plateau (China)

    PubMed Central

    Zhao, Qilong; Strykowski, Gabriel; Li, Jiancheng; Pan, Xiong; Xu, Xinyu

    2017-01-01

    Gravity data gaps in mountainous areas are nowadays often filled in with data from airborne gravity surveys. Because of the errors caused by the airborne gravimeter sensors, and because of rough flight conditions, such errors cannot be completely eliminated. The precision of the gravity disturbances generated by airborne gravimetry is around 3–5 mgal. A major obstacle in using airborne gravimetry is the error caused by downward continuation. In order to improve the results, external high-accuracy gravity information, e.g., from surface data, can be used for high-frequency correction, while satellite information can be applied for low-frequency correction. Surface data may be used to reduce the systematic errors, while regularization methods can reduce the random errors in downward continuation. Airborne gravity surveys are sometimes conducted in mountainous areas, and the most extreme area of the world for this type of survey is the Tibetan Plateau. Since there are no high-accuracy surface gravity data available for this area, the above error minimization method involving external gravity data cannot be used. We propose a semi-parametric downward continuation method in combination with regularization to suppress the systematic error effect and the random error effect in the Tibetan Plateau, i.e., without the use of external high-accuracy gravity data. We use a Louisiana airborne gravity dataset from the USA National Oceanic and Atmospheric Administration (NOAA) to demonstrate that the new method works effectively. Furthermore, for the Tibetan Plateau we show that the numerical experiment is also successfully conducted using synthetic Earth Gravitational Model 2008 (EGM08)-derived gravity data contaminated with synthetic errors. The estimated systematic errors generated by the method are close to the simulated values. In addition, we study the relationship between the downward continuation altitudes and the error effect. The analysis results show that the proposed semi-parametric method combined with regularization is efficient for addressing such modelling problems. PMID:28587086

  7. Evaluation and Comparison of the Processing Methods of Airborne Gravimetry Concerning the Errors Effects on Downward Continuation Results: Case Studies in Louisiana (USA) and the Tibetan Plateau (China)

    NASA Astrophysics Data System (ADS)

    Zhao, Q.

    2017-12-01

    Gravity data gaps in mountainous areas are nowadays often filled in with data from airborne gravity surveys. Because of the errors caused by the airborne gravimeter sensors, and because of rough flight conditions, such errors cannot be completely eliminated. The precision of the gravity disturbances generated by airborne gravimetry is around 3-5 mgal. A major obstacle in using airborne gravimetry is the error caused by downward continuation. In order to improve the results, external high-accuracy gravity information, e.g., from surface data, can be used for high-frequency correction, while satellite information can be applied for low-frequency correction. Surface data may be used to reduce the systematic errors, while regularization methods can reduce the random errors in downward continuation. Airborne gravity surveys are sometimes conducted in mountainous areas, and the most extreme area of the world for this type of survey is the Tibetan Plateau. Since there are no high-accuracy surface gravity data available for this area, the above error minimization method involving external gravity data cannot be used. We propose a semi-parametric downward continuation method in combination with regularization to suppress the systematic error effect and the random error effect in the Tibetan Plateau, i.e., without the use of external high-accuracy gravity data. We use a Louisiana airborne gravity dataset from the USA National Oceanic and Atmospheric Administration (NOAA) to demonstrate that the new method works effectively. Furthermore, for the Tibetan Plateau we show that the numerical experiment is also successfully conducted using synthetic Earth Gravitational Model 2008 (EGM08)-derived gravity data contaminated with synthetic errors. The estimated systematic errors generated by the method are close to the simulated values. In addition, we study the relationship between the downward continuation altitudes and the error effect. The analysis results show that the proposed semi-parametric method combined with regularization is efficient for addressing such modelling problems.

  8. The ILRS Contribution to ITRF2013

    NASA Astrophysics Data System (ADS)

    Pavlis, Erricos C.; Luceri, Cinzia; Sciarretta, Cecilia; Evans, Keith

    2014-05-01

    Satellite Laser Ranging (SLR) data have contributed to the definition of the International Terrestrial Reference Frame (ITRF) over the past three decades. The development of ITRF2005 ushered in a new era with the use of weekly or session contributions, allowing greater flexibility in the editing, relative weighting and combination of information from the four contributing techniques. The new approach allows each Service to generate a solution based on the rigorous combination of the individual Analysis Centers' contributions, which provides an opportunity to verify the intra-technique consistency and a comparison of internal procedures and adopted models. The intra- and inter-technique comparisons that the time series approach facilitates are an extremely powerful diagnostic that highlights differences and inconsistencies at the single-station level. Over the past year the ILRS Analysis Working Group (AWG) worked on designing an improved ILRS contribution for the development of ITRF2013. The ILRS approach is based on the current IERS Conventions 2010 and our internal ILRS standards, with a few deviations that are documented. Since the Global Geodetic Observing System - GGOS identified the ITRF as its key project, the ILRS has taken a two-pronged approach in order to meet its stringent goals: modernizing the engineering components (ground and space segments), and revising the modeling standards taking advantage of recent improvements in system Earth modeling. The main concern in the case of SLR is monitoring systematic errors at individual stations, accounting for undocumented discontinuities, and improving the target signature models. The latter has been addressed with the adoption of mm-level models for all of our targets. As for the station systematics, the AWG had already embarked on a major effort to improve the handling of such errors prior to the development of ITRF2008. The results of that effort formed the foundation for the re-examination of the systematic errors at all sites. The new process benefited extensively from the results of the quality control process that the ILRS provides on a daily basis as feedback to the stations, and the recovery of systematic error corrections from the data themselves through targeted investigations. The present re-analysis extends from 1983 to the end of 2013. The data quality for the early period 1983-1993 is significantly poorer than for the recent years. However, it contributes to the overall stability of the datum definition, especially in terms of its origin and scale, and, as the more recent and higher quality data accumulate, the significance of the early data will progressively diminish. As in the case of ITRF2008, station engineers and analysts have worked together to determine the magnitude and cause of systematic errors that were noticed during the analysis, rationalize them based on events at the stations, and develop appropriate corrections whenever possible. This presentation will give an overview of the process and examples from the various steps.

  9. Calibration of the aerodynamic coefficient identification package measurements from the shuttle entry flights using inertial measurement unit data

    NASA Technical Reports Server (NTRS)

    Heck, M. L.; Findlay, J. T.; Compton, H. R.

    1983-01-01

    The Aerodynamic Coefficient Identification Package (ACIP) is an instrument consisting of body-mounted linear accelerometers, rate gyros, and angular accelerometers for measuring the Space Shuttle vehicular dynamics. The high-rate recorded data are utilized for postflight aerodynamic coefficient extraction studies. Although consistent with pre-mission accuracies specified by the manufacturer, the ACIP data were found to contain detectable levels of systematic error, primarily bias, as well as scale factor, static misalignment, and temperature-dependent errors. This paper summarizes the technique whereby the systematic ACIP error sources were detected, identified, and calibrated with the use of recorded dynamic data from the low-rate, highly accurate Inertial Measurement Units.

  10. Single molecule counting and assessment of random molecular tagging errors with transposable giga-scale error-correcting barcodes.

    PubMed

    Lau, Billy T; Ji, Hanlee P

    2017-09-21

    RNA-Seq measures gene expression by counting sequence reads belonging to unique cDNA fragments. Molecular barcodes, commonly in the form of random nucleotides, were recently introduced to improve gene expression measures by detecting amplification duplicates, but they are susceptible to errors generated during PCR and sequencing. This results in false positive counts, leading to inaccurate transcriptome quantification, especially at low-input and single-cell RNA amounts where the total number of molecules present is minuscule. To address this issue, we demonstrated the systematic identification of molecular species using transposable error-correcting barcodes that are exponentially expanded to tens of billions of unique labels. We experimentally showed that random-mer molecular barcodes suffer from substantial and persistent errors that are difficult to resolve. To assess our method's performance, we applied it to the analysis of known reference RNA standards. By including an inline random-mer molecular barcode, we systematically characterized the presence of sequence errors in random-mer molecular barcodes. We observed that such errors are extensive and become more dominant at low input amounts. We describe the first study to use transposable molecular barcodes and their use for studying random-mer molecular barcode errors. The extensive errors found in random-mer molecular barcodes may warrant the use of error-correcting barcodes for transcriptome analysis as input amounts decrease.
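
    The practical difference between random-mers and error-correcting barcodes shows up in the decoding rule. In the toy sketch below (4-nt codewords of my own choosing with pairwise Hamming distance of at least 3), a read within one mismatch of a unique codeword is corrected, while anything ambiguous is rejected rather than counted as a spurious new molecule.

```python
from itertools import combinations

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

# Toy barcode set with pairwise Hamming distance >= 3 (corrects 1 error)
barcodes = ["AAAA", "CCCC", "GGGG", "TTTT"]
assert all(hamming(a, b) >= 3 for a, b in combinations(barcodes, 2))

def decode(read, max_dist=1):
    """Assign a read to the unique barcode within max_dist, else reject."""
    hits = sorted((hamming(read, bc), bc) for bc in barcodes)
    if hits[0][0] <= max_dist and hits[1][0] > max_dist:
        return hits[0][1]
    return None   # ambiguous or too many errors: do not create a false count

print(decode("AAGA"))   # one sequencing error -> corrected to AAAA
print(decode("AAGG"))   # two errors -> rejected (a random-mer would miscount)
```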

  11. Optical System Error Analysis and Calibration Method of High-Accuracy Star Trackers

    PubMed Central

    Sun, Ting; Xing, Fei; You, Zheng

    2013-01-01

    The star tracker is a high-accuracy attitude measurement device widely used in spacecraft. Its performance depends largely on the precision of the optical system parameters. Therefore, the analysis of the optical system parameter errors and a precise calibration model are crucial to the accuracy of the star tracker. Research in this field has so far lacked a systematic and universal analysis. This paper proposes in detail an approach for the synthetic error analysis of the star tracker, without complicated theoretical derivation. This approach can determine the error propagation relationship of the star tracker, and can be used to build an error model intuitively and systematically. The analysis results can be used as a foundation and a guide for the optical design, calibration, and compensation of the star tracker. A calibration experiment is designed and conducted. Excellent calibration results are achieved based on the calibration model. To summarize, the error analysis approach and the calibration method are proved to be adequate and precise, and could provide an important guarantee for the design, manufacture, and measurement of high-accuracy star trackers. PMID:23567527

  12. Barcode medication administration work-arounds: a systematic review and implications for nurse executives.

    PubMed

    Voshall, Barbara; Piscotty, Ronald; Lawrence, Jeanette; Targosz, Mary

    2013-10-01

    Safe medication administration is necessary to ensure quality healthcare. Barcode medication administration systems were developed to reduce drug administration errors and the related costs and improve patient safety. Work-arounds created by nurses in the execution of the required processes can lead to unintended consequences, including errors. This article provides a systematic review of the literature associated with barcoded medication administration and work-arounds and suggests interventions that should be adopted by nurse executives to ensure medication safety.

  13. Systematic errors in long baseline oscillation experiments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Harris, Deborah A.; /Fermilab

    This article gives a brief overview of long baseline neutrino experiments and their goals, and then describes the different kinds of systematic errors that are encountered in these experiments. Particular attention is paid to the uncertainties that come about because of imperfect knowledge of neutrino cross sections and more generally how neutrinos interact in nuclei. Near detectors are planned for most of these experiments, and the extent to which certain uncertainties can be reduced by the presence of near detectors is also discussed.

  14. Measurement of Branching Ratios for Non-leptonic Cabibbo-suppressed Decays of the Charmed-Strange Baryon Ξ c +

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vazquez Jauregui, Eric

    2008-08-01

    We studied several Ξc+ decay modes, most of them with a hyperon in the final state, and determined their branching ratios. The data used in this analysis come from the fixed target experiment SELEX, a multi-stage spectrometer with high acceptance for forward interactions, that took data during 1996 and 1997 at Fermilab with 600 GeV/c (mainly Σ-, π-) and 540 GeV/c (mainly p) beams incident on copper and carbon targets. The thesis mainly details the first observation of two Cabibbo-suppressed decay modes, Ξc+ → Σ+π-π+ and Ξc+ → Σ-π+π+. The branching ratio of the latter relative to the Cabibbo-favored Ξc+ → Ξ-π+π+ is measured to be: Γ(Ξc+ → Σ-π+π+)/Γ(Ξc+ → Ξ-π+π+) = 0.184 ± 0.086. Systematic studies have been performed in order to check the stability of the measurements, varying all cuts used in the selection of events over a wide interval; we do not observe evidence of any trend, so the systematic error is negligible in the final results because the quadrature sum of the total error is not affected. The branching ratios for the same decay modes of the Λc+ are measured to check the methodology of the analysis. The branching ratio of the decay mode Λc+ → Σ+π-π+ is measured relative to Λc+ → pK-π+, while that of the decay mode Λc+ → Σ-π+π+ is relative to Λc+ → Σ+π-π+, as they have been reported earlier. The results for the control modes are: Γ(Λc+ → Σ+π-π+)/Γ(Λc+ → pK-π+) = 0.716 ± 0.144 and Γ(Λc+ → Σ-π+π+)/Γ(Λc+ → Σ+π-π+) = 0.382 ± 0.104. The branching ratio of the decay mode Ξc+ → pK-π+ relative to Ξc+ → Ξ-π+π+ is considered as another control mode; the measured value is Γ(Ξc+ → pK-π+)/Γ(Ξc+ → Ξ-π+π+) = 0.194 ± 0.054. Systematic studies have also been performed for the control modes and all systematic variations are small compared to the statistical error. We also report the first observation of two more decay modes, the Cabibbo-suppressed decay Ξc+ → Σ-K+π+ and the doubly Cabibbo-suppressed decay Ξc+ → Σ+K+π-, but their branching ratios have not been measured up to now.

  15. Bit Error Probability for Maximum Likelihood Decoding of Linear Block Codes

    NASA Technical Reports Server (NTRS)

    Lin, Shu; Fossorier, Marc P. C.; Rhee, Dojun

    1996-01-01

    In this paper, the bit error probability P(sub b) for maximum likelihood decoding of binary linear codes is investigated. The contribution of each information bit to P(sub b) is considered. For randomly generated codes, it is shown that the conventional high-SNR approximation P(sub b) ≈ (d(sub H)/N)P(sub s), where P(sub s) represents the block error probability, holds for systematic encoding only. Also, systematic encoding provides the minimum P(sub b) when the inverse mapping corresponding to the generator matrix of the code is used to retrieve the information sequence. The bit error performances corresponding to other generator matrix forms are also evaluated. Although derived for codes with a randomly generated generator matrix, these results are shown to provide good approximations for codes used in practice. Finally, for decoding methods which require a generator matrix with a particular structure, such as trellis decoding or algebraic-based soft-decision decoding, equivalent schemes that reduce the bit error probability are discussed.
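
    Numerically the approximation is a one-liner; the example parameters below (the (24,12) extended Golay code with d_H = 8 and an assumed block error rate) are my own illustration, not taken from the paper.

```python
def pb_approx(d_h, n, p_s):
    """High-SNR approximation P_b ~ (d_H / N) * P_s for systematic encoding."""
    return (d_h / n) * p_s

# Example: (24,12) extended Golay code, d_H = 8, block error rate 1e-4
print(f"P_b ~ {pb_approx(8, 24, 1e-4):.2e}")   # about 3.3e-05
```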

  16. Global Warming Estimation from MSU

    NASA Technical Reports Server (NTRS)

    Prabhakara, C.; Iacovazzi, Robert, Jr.

    1999-01-01

    In this study, we have developed time series of global temperature from 1980-97 based on the Microwave Sounding Unit (MSU) Ch 2 (53.74 GHz) observations taken from polar-orbiting NOAA operational satellites. In order to create these time series, systematic errors (approx. 0.1 K) in the Ch 2 data arising from inter-satellite differences are removed objectively. On the other hand, smaller systematic errors (approx. 0.03 K) in the data due to orbital drift of each satellite cannot be removed objectively. Such errors are expected to remain in the time series and leave an uncertainty in the inferred global temperature trend. With the help of a statistical method, the error in the MSU-inferred global temperature trend resulting from orbital drifts and residual inter-satellite differences of all satellites is estimated to be 0.06 K/decade. Incorporating this error, our analysis shows that the global temperature increased at a rate of 0.13 +/- 0.06 K/decade during 1980-97.
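
    The inter-satellite step of such a merge can be illustrated with two synthetic satellite records and one overlap (all numbers below are invented; the real analysis handles many satellites and must also model orbital drift): estimate the offset from the mean difference over the overlap, adjust one record, and fit the trend to the merged series.

```python
import numpy as np

rng = np.random.default_rng(9)
months = np.arange(120)                                   # ten years, monthly
climate = 0.001 * months + 0.2 * np.sin(2 * np.pi * months / 12)

# Two satellites with different calibration offsets and a 24-month overlap
sat_a = climate[:72] + 0.30 + rng.normal(0, 0.02, 72)     # months 0-71
sat_b = climate[48:] + 0.42 + rng.normal(0, 0.02, 72)     # months 48-119

# Inter-satellite offset from the overlap (months 48-71), then tie B to A
offset = np.mean(sat_b[:24] - sat_a[48:72])
merged = np.concatenate([sat_a, sat_b[24:] - offset])

trend = np.polyfit(months, merged, 1)[0] * 120            # K per decade
print(f"offset = {offset:.3f} (true 0.12), trend = {trend:.3f} K/decade")
```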

  17. Errors in measurements by ultrasonic thickness gauges caused by the variation in ultrasonic velocity in constructional steels and metal alloys

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kalinin, V.A.; Tarasenko, V.L.; Tselser, L.B.

    1988-09-01

    Numerical values of the variation in ultrasonic velocity in constructional metal alloys and the measurement errors related to them are systematized. The systematization is based on the measurement results of the group ultrasonic velocity made in the All-Union Scientific-Research Institute for Nondestructive Testing in 1983-1984 and also on the measurement results of the group velocity made by various authors. The variations in ultrasonic velocity were systematized for carbon, low-alloy, and medium-alloy constructional steels; high-alloy iron base alloys; nickel-base heat-resistant alloys; wrought aluminum constructional alloys; titanium alloys; and cast irons and copper alloys.

  18. High-accuracy self-calibration method for dual-axis rotation-modulating RLG-INS

    NASA Astrophysics Data System (ADS)

    Wei, Guo; Gao, Chunfeng; Wang, Qi; Wang, Qun; Long, Xingwu

    2017-05-01

    The inertial navigation system has been the core component of both military and civil navigation systems. Dual-axis rotation modulation can completely eliminate the constant errors of the inertial elements on all three axes, improving system accuracy. However, the errors caused by the misalignment angles and the scale factor error cannot be eliminated through dual-axis rotation modulation. Moreover, a discrete calibration method cannot fulfill the requirements for high-accuracy calibration of a mechanically dithered ring laser gyroscope navigation system with shock absorbers. This paper analyzes the effect of calibration error during one modulation period and presents a new systematic self-calibration method for dual-axis rotation-modulating RLG-INS. A procedure for self-calibration of dual-axis rotation-modulating RLG-INS has been designed. The results of the self-calibration simulation experiment proved that this scheme can estimate all the errors in the calibration error model: the calibration precision of the inertial sensors' scale factor error is less than 1 ppm and the misalignment is less than 5″. These results validate the systematic self-calibration method and prove its importance for accuracy improvement of dual-axis rotation inertial navigation systems with mechanically dithered ring laser gyroscopes.

  19. The role of the basic state in the ENSO-monsoon relationship and implications for predictability

    NASA Astrophysics Data System (ADS)

    Turner, A. G.; Inness, P. M.; Slingo, J. M.

    2005-04-01

    The impact of systematic model errors on a coupled simulation of the Asian summer monsoon and its interannual variability is studied. Although the mean monsoon climate is reasonably well captured, systematic errors in the equatorial Pacific mean that the monsoon-ENSO teleconnection is rather poorly represented in the general-circulation model. A system of ocean-surface heat flux adjustments is implemented in the tropical Pacific and Indian Oceans in order to reduce the systematic biases. In this version of the general-circulation model, the monsoon-ENSO teleconnection is better simulated, particularly the lag-lead relationships in which weak monsoons precede the peak of El Niño. In part this is related to changes in the characteristics of El Niño, which has a more realistic evolution in its developing phase. A stronger ENSO amplitude in the new model version also feeds back to further strengthen the teleconnection. These results have important implications for the use of coupled models for seasonal prediction of systems such as the monsoon, and suggest that some form of flux correction may have significant benefits where model systematic error compromises important teleconnections and modes of interannual variability.

  20. Improving NGDC Track-line Data Quality Control

    NASA Astrophysics Data System (ADS)

    Chandler, M. T.; Wessel, P.

    2004-12-01

    Ship-board gravity, magnetic, and bathymetry data archived at the National Geophysical Data Center (NGDC) represent decades of seagoing research, comprising over 4,500 cruises. Cruise data remain relevant despite the prominence of satellite altimetry-derived global grids because many geologic processes are resolvable only by shipboard observation. Given the tremendous investment by scientists and taxpayers in compiling this vast archive, and the significant errors found within it, additional quality assessment and corrections are warranted. These can best be accomplished by adding to existing quality control measures at NGDC. We are currently developing open source software to provide additional quality control. Along with NGDC's current sanity checking, new data at NGDC will also be subjected to an along-track ``sniffer'' which will detect and flag suspicious data for later graphical inspection using a visual editor. If new data pass these tests, they will undergo further scrutiny using a crossover error (COE) calculator which will compare new data values to existing values at points of intersection within the archive. Data passing these tests will be deemed ``quality data'' and suitable for permanent addition to the archive, while data that fail will be returned to the source institution for correction. Crossover errors will be stored and an online COE database will be made available. The COE database will allow users to apply corrections to the NGDC track-line database to produce corrected data files; at no time will the archived data itself be modified. An attempt will also be made to reduce navigational errors for pre-GPS navigated cruises. Upon completion, these programs will be used to explore and model systematic errors within the archive, generate correction tables for all cruises, and quantify the error budget in marine geophysical observations. The software will be released and these procedures implemented in cooperation with NGDC staff.
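
    A minimal sketch of the crossover idea (Python; straight synthetic track segments and invented gravity values, not the NGDC implementation):

      # Crossover error (COE) check: find where two track segments intersect and
      # difference the values interpolated to that point.
      import numpy as np

      def crossover(p1, p2, q1, q2):
          """Intersection of segments p1-p2 and q1-q2 (x = lon, y = lat)."""
          d1, d2 = p2 - p1, q2 - q1
          denom = d1[0] * d2[1] - d1[1] * d2[0]
          t = ((q1[0] - p1[0]) * d2[1] - (q1[1] - p1[1]) * d2[0]) / denom
          return p1 + t * d1, t

      # Two cruise segments with gravity anomalies (mGal) at their endpoints
      p1, p2 = np.array([0.0, 0.0]), np.array([2.0, 2.0])
      q1, q2 = np.array([0.0, 2.0]), np.array([2.0, 0.0])
      g_p, g_q = (10.0, 14.0), (11.0, 9.0)

      xy, t = crossover(p1, p2, q1, q2)
      u = np.linalg.norm(xy - q1) / np.linalg.norm(q2 - q1)   # parameter along q
      coe = (g_p[0] + t * (g_p[1] - g_p[0])) - (g_q[0] + u * (g_q[1] - g_q[0]))
      print(f"crossover at {xy}, COE = {coe:+.2f} mGal")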

  1. A Bayesian Approach to Systematic Error Correction in Kepler Photometric Time Series

    NASA Astrophysics Data System (ADS)

    Jenkins, Jon Michael; VanCleve, J.; Twicken, J. D.; Smith, J. C.; Kepler Science Team

    2011-01-01

    In order for the Kepler mission to achieve its required 20 ppm photometric precision for 6.5 hr observations of 12th magnitude stars, the Presearch Data Conditioning (PDC) software component of the Kepler Science Processing Pipeline must reduce systematic errors in flux time series to the limit of stochastic noise for errors with time-scales less than three days, without smoothing or over-fitting away the transits that Kepler seeks. The current version of PDC co-trends against ancillary engineering data and Pipeline-generated data using essentially a least squares (LS) approach. This approach is successful for quiet stars when all sources of systematic error have been identified. If the stars are intrinsically variable or some sources of systematic error are unknown, LS will nonetheless attempt to explain all of a given time series, not just the part the model can explain well. Negative consequences can include loss of astrophysically interesting signal and injection of high-frequency noise into the result. As a remedy, we present a Bayesian Maximum A Posteriori (MAP) approach, in which a subset of intrinsically quiet and highly-correlated stars is used to establish the probability density function (PDF) of robust fit parameters in a diagonalized basis. The PDFs then determine a "reasonable" range for the fit parameters for all stars, and brake the runaway fitting that can distort signals and inject noise. We present a closed-form solution for Gaussian PDFs, and show examples using publicly available Quarter 1 Kepler data. A companion poster (Van Cleve et al.) shows applications and discusses current work in more detail. Kepler was selected as the 10th mission of the Discovery Program. Funding for this mission is provided by NASA, Science Mission Directorate.
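
    A minimal sketch of a closed-form Gaussian MAP estimate of the kind described above (Python; the co-trend basis, prior, and noise level are invented for illustration, not the Kepler PDC implementation):

      # Least-squares co-trending fit shrunk toward a Gaussian prior built
      # from quiet stars: w_MAP = (A^T A / s2 + Lam^-1)^-1 (A^T y / s2 + Lam^-1 mu).
      import numpy as np

      rng = np.random.default_rng(1)
      n = 500
      A = np.column_stack([rng.normal(size=n), rng.normal(size=n)])  # basis vectors
      w_true = np.array([0.8, -0.3])
      y = A @ w_true + rng.normal(0.0, 0.5, n)       # mock flux time series

      sigma2 = 0.25                                   # noise variance (assumed known)
      mu = np.array([0.7, -0.2])                      # prior mean from quiet stars
      Lam = np.diag([0.05, 0.05])                     # prior covariance (diagonal basis)

      lhs = A.T @ A / sigma2 + np.linalg.inv(Lam)
      rhs = A.T @ y / sigma2 + np.linalg.inv(Lam) @ mu
      w_map = np.linalg.solve(lhs, rhs)               # closed-form MAP estimate
      w_ls = np.linalg.lstsq(A, y, rcond=None)[0]     # unconstrained LS, for contrast
      print("LS :", w_ls, "\nMAP:", w_map)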

  2. Systematic error of the Gaia DR1 TGAS parallaxes from data for the red giant clump

    NASA Astrophysics Data System (ADS)

    Gontcharov, G. A.

    2017-08-01

    Based on the Gaia DR1 TGAS parallaxes and photometry from the Tycho-2, Gaia, 2MASS, and WISE catalogues, we have produced a sample of 100 000 clump red giants within 800 pc of the Sun. The systematic variations of the mode of their absolute magnitude as a function of the distance, magnitude, and other parameters have been analyzed. We show that these variations reach 0.7 mag and cannot be explained by variations in the interstellar extinction or intrinsic properties of stars, or by selection. The only explanation seems to be a systematic error of the Gaia DR1 TGAS parallax dependent on the square of the observed distance R in kpc: 0.18R^2 mas. Allowance for this error significantly reduces the systematic dependences of the absolute magnitude mode on all parameters. This error reaches 0.1 mas within 800 pc of the Sun and allows an upper limit for the accuracy of the TGAS parallaxes to be estimated as 0.2 mas. A careful allowance for such errors is needed to use clump red giants as "standard candles." This eliminates all discrepancies between the theoretical and empirical estimates of the characteristics of these stars and allows us to obtain the first estimates of the modes of their absolute magnitudes from the Gaia parallaxes: mode(M_H) = -1.49^m ± 0.04^m, mode(M_Ks) = -1.63^m ± 0.03^m, mode(M_W1) = -1.67^m ± 0.05^m, mode(M_W2) = -1.67^m ± 0.05^m, mode(M_W3) = -1.66^m ± 0.02^m, mode(M_W4) = -1.73^m ± 0.03^m, as well as the corresponding estimates of their de-reddened colors.
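
    A minimal sketch of applying such a distance-dependent correction (Python; the sign convention, namely that the 0.18R^2 mas term is added to the catalogued parallax, and the fixed-point iteration are our assumptions for illustration):

      # Correct a TGAS parallax with a term that depends on the distance, which
      # itself comes from the parallax, so iterate to self-consistency.
      plx_obs = 1.30                      # catalogued TGAS parallax (mas)
      R = 1.0 / plx_obs                   # first-guess distance (kpc)
      for _ in range(5):                  # fixed-point iteration converges quickly
          plx_corr = plx_obs + 0.18 * R ** 2
          R = 1.0 / plx_corr
      print(f"corrected parallax = {plx_corr:.3f} mas, distance = {R * 1000:.0f} pc")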

  3. Effects of Systematic and Random Errors on the Retrieval of Particle Microphysical Properties from Multiwavelength Lidar Measurements Using Inversion with Regularization

    NASA Technical Reports Server (NTRS)

    Ramirez, Daniel Perez; Whiteman, David N.; Veselovskii, Igor; Kolgotin, Alexei; Korenskiy, Michael; Alados-Arboledas, Lucas

    2013-01-01

    In this work we study the effects of systematic and random errors on the inversion of multiwavelength (MW) lidar data using the well-known regularization technique to obtain vertically resolved aerosol microphysical properties. The software implementation used here was developed at the Physics Instrumentation Center (PIC) in Troitsk (Russia) in conjunction with the NASA/Goddard Space Flight Center. Its applicability to Raman lidar systems based on backscattering measurements at three wavelengths (355, 532 and 1064 nm) and extinction measurements at two wavelengths (355 and 532 nm) has been demonstrated widely. The systematic error sensitivity is quantified by first determining the retrieved parameters for a given set of optical input data consistent with three different sets of aerosol physical parameters. Then each optical input is perturbed by varying amounts and the inversion is repeated. Using bimodal aerosol size distributions, we find a generally linear dependence of the retrieved errors in the microphysical properties on the induced systematic errors in the optical data. For the retrievals of effective radius, number/surface/volume concentrations and fine-mode radius and volume, we find that these results are not significantly affected by the range of the constraints used in inversions. But significant sensitivity was found to the allowed range of the imaginary part of the particle refractive index. Our results also indicate that there exists an additive property for the deviations induced by the biases present in the individual optical data. This property permits the results here to be used to predict deviations in retrieved parameters when multiple input optical data are biased simultaneously as well as to study the influence of random errors on the retrievals. The above results are applied to questions regarding lidar design, in particular for the spaceborne multiwavelength lidar under consideration for the upcoming ACE mission.
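
    The additive property can be demonstrated on a toy stand-in for the inversion (Python; the "retrieval" function below is invented purely to illustrate the check, not the regularization code):

      # Perturb single optical channels, then both at once, and compare the
      # combined deviation against the sum of the individual deviations.
      import numpy as np

      def retrieve(optical):
          """Toy stand-in for the inversion (3 backscatter + 2 extinction inputs)."""
          b355, b532, b1064, e355, e532 = optical
          return np.array([b532 / e532, (b355 - b1064) / e355])

      x0 = np.array([2.0, 1.5, 0.8, 40.0, 25.0])
      base = retrieve(x0)

      def deviation(i, frac=0.02):        # bias one channel by 2%
          x = x0.copy()
          x[i] *= 1.0 + frac
          return retrieve(x) - base

      both = retrieve(x0 * np.array([1.02, 1.0, 1.0, 1.02, 1.0])) - base
      print(both)                          # joint bias of channels 0 and 3
      print(deviation(0) + deviation(3))   # approximately equal to the sum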

  4. Assessment of the accuracy of global geodetic satellite laser ranging observations and estimated impact on ITRF scale: estimation of systematic errors in LAGEOS observations 1993-2014

    NASA Astrophysics Data System (ADS)

    Appleby, Graham; Rodríguez, José; Altamimi, Zuheir

    2016-12-01

    Satellite laser ranging (SLR) to the geodetic satellites LAGEOS and LAGEOS-2 uniquely determines the origin of the terrestrial reference frame and, jointly with very long baseline interferometry, its scale. Given such a fundamental role in satellite geodesy, it is crucial that any systematic errors in either technique are at an absolute minimum as efforts continue to realise the reference frame at millimetre levels of accuracy to meet the present and future science requirements. Here, we examine the intrinsic accuracy of SLR measurements made by tracking stations of the International Laser Ranging Service using normal point observations of the two LAGEOS satellites in the period 1993 to 2014. The approach we investigate in this paper is to compute weekly reference frame solutions solving for satellite initial state vectors, station coordinates and daily Earth orientation parameters, estimating along with these weekly average range errors for each and every one of the observing stations. Potential issues in any of the large number of SLR stations assumed to have been free of error in previous realisations of the ITRF may have been absorbed in the reference frame, primarily in station height. Likewise, systematic range errors estimated against a fixed frame that may itself suffer from accuracy issues will absorb network-wide problems into station-specific results. Our results suggest that in the past two decades, the scale of the ITRF derived from the SLR technique has been close to 0.7 ppb too small, due to systematic errors in the range measurements, their treatment, or both. We discuss these results in the context of preparations for ITRF2014 and additionally consider the impact of this work on the currently adopted value of the geocentric gravitational constant, GM.
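
    A minimal sketch of estimating per-station range biases from residuals (Python; here orbits and coordinates are held fixed and the biases are solved for alone, a deliberate simplification of the joint weekly solutions described above):

      # One bias parameter per station, estimated by least squares from
      # synthetic post-fit range residuals.
      import numpy as np

      rng = np.random.default_rng(2)
      n_sta = 3
      true_bias_mm = np.array([5.0, 0.0, -8.0])       # invented station biases

      idx = rng.integers(0, n_sta, size=300)          # station of each normal point
      resid = true_bias_mm[idx] + rng.normal(0.0, 10.0, idx.size)  # residuals (mm)

      A = np.zeros((idx.size, n_sta))                 # design matrix: one column
      A[np.arange(idx.size), idx] = 1.0               # per station bias
      bias_hat, *_ = np.linalg.lstsq(A, resid, rcond=None)
      print("estimated biases (mm):", np.round(bias_hat, 1))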

  5. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, T. S.; DePoy, D. L.; Marshall, J. L.

    Here, we report that meeting the science goals for many current and future ground-based optical large-area sky surveys requires that the calibrated broadband photometry is both stable in time and uniform over the sky to 1% precision or better. Past and current surveys have achieved photometric precision of 1%–2% by calibrating the survey's stellar photometry with repeated measurements of a large number of stars observed in multiple epochs. The calibration techniques employed by these surveys only consider the relative frame-by-frame photometric zeropoint offset and the focal plane position-dependent illumination corrections, which are independent of the source color. However, variations in the wavelength dependence of the atmospheric transmission and the instrumental throughput induce source color-dependent systematic errors. These systematic errors must also be considered to achieve the most precise photometric measurements. In this paper, we examine such systematic chromatic errors (SCEs) using photometry from the Dark Energy Survey (DES) as an example. We first define a natural magnitude system for DES and calculate the systematic errors on stellar magnitudes when the atmospheric transmission and instrumental throughput deviate from the natural system. We conclude that the SCEs caused by the change of airmass in each exposure, the change of the precipitable water vapor and aerosol in the atmosphere over time, and the non-uniformity of instrumental throughput over the focal plane can be up to 2% in some bandpasses. We then compare the calculated SCEs with the observed DES data. For the test sample data, we correct these errors using measurements of the atmospheric transmission and instrumental throughput from auxiliary calibration systems. In conclusion, the residual after correction is less than 0.3%. Moreover, we calculate such SCEs for Type Ia supernovae and elliptical galaxies and find that the chromatic errors for non-stellar objects are redshift-dependent and can be larger than those for stars at certain redshifts.
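
    A minimal sketch of a systematic chromatic error (Python; toy Gaussian passbands and power-law spectra, not the DES system): the magnitude shift induced by a perturbed throughput depends on the source spectrum, so blue and red sources are biased differently.

      # Synthetic magnitude through a passband, and the color-dependent shift
      # when the throughput deviates from the "natural" system.
      import numpy as np

      lam = np.linspace(400.0, 500.0, 501)            # wavelength grid (nm)
      S_nat = np.exp(-0.5 * ((lam - 450) / 20) ** 2)  # natural passband
      S_obs = np.exp(-0.5 * ((lam - 455) / 20) ** 2)  # shifted (perturbed) passband

      def synth_mag(flux, S):
          # uniform grid, so the d-lambda factors cancel in the ratio
          return -2.5 * np.log10((flux * S * lam).sum() / (S * lam).sum())

      blue_star = (lam / 450.0) ** -4                 # steep blue spectrum
      red_star = (lam / 450.0) ** 4                   # steep red spectrum

      for name, f in [("blue", blue_star), ("red", red_star)]:
          dm = synth_mag(f, S_obs) - synth_mag(f, S_nat)
          print(f"{name} star: delta m = {dm * 1000:+.1f} mmag")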

  7. Edge profile analysis of Joint European Torus (JET) Thomson scattering data: Quantifying the systematic error due to edge localised mode synchronisation.

    PubMed

    Leyland, M J; Beurskens, M N A; Flanagan, J C; Frassinetti, L; Gibson, K J; Kempenaars, M; Maslov, M; Scannell, R

    2016-01-01

    The Joint European Torus (JET) high resolution Thomson scattering (HRTS) system measures radial electron temperature and density profiles. One of the key capabilities of this diagnostic is measuring the steep pressure gradient, termed the pedestal, at the edge of JET plasmas. The pedestal is susceptible to limiting instabilities, such as Edge Localised Modes (ELMs), characterised by a periodic collapse of the steep gradient region. A common method to extract the pedestal width, gradient, and height, used on numerous machines, is by performing a modified hyperbolic tangent (mtanh) fit to overlaid profiles selected from the same region of the ELM cycle. This process of overlaying profiles, termed ELM synchronisation, maximises the number of data points defining the pedestal region for a given phase of the ELM cycle. When fitting to HRTS profiles, it is necessary to incorporate the diagnostic radial instrument function, particularly important when considering the pedestal width. A deconvolved fit is determined by a forward convolution method requiring knowledge of only the instrument function and profiles. The systematic error due to the deconvolution technique incorporated into the JET pedestal fitting tool has been documented by Frassinetti et al. [Rev. Sci. Instrum. 83, 013506 (2012)]. This paper seeks to understand and quantify the systematic error introduced to the pedestal width due to ELM synchronisation. Synthetic profiles, generated with error bars and point-to-point variation characteristic of real HRTS profiles, are used to evaluate the deviation from the underlying pedestal width. We find on JET that the ELM synchronisation systematic error is negligible in comparison to the statistical error when assuming ten overlaid profiles (typical for a pre-ELM fit to HRTS profiles). This confirms that fitting a mtanh to ELM synchronised profiles is a robust and practical technique for extracting the pedestal structure.
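
    A minimal sketch of an mtanh pedestal fit on a synthetic profile (Python/SciPy; the parameterization follows a common convention with height, position, width, core slope, and offset, and is not the JET pedestal fitting tool itself):

      # Fit a modified hyperbolic tangent (mtanh) to a noisy synthetic edge
      # profile and recover the pedestal width.
      import numpy as np
      from scipy.optimize import curve_fit

      def mtanh(x, s):
          # numerically stable form of ((1 + s*x)e^x - e^-x) / (e^x + e^-x)
          return np.tanh(x) + s * x / (1.0 + np.exp(-2.0 * x))

      def pedestal(r, h, r0, w, s, b):
          return b + 0.5 * h * (1.0 + mtanh((r0 - r) / (0.5 * w), s))

      rng = np.random.default_rng(3)
      r = np.linspace(3.6, 3.9, 120)                  # major radius (m)
      true = dict(h=1.0, r0=3.78, w=0.03, s=0.05, b=0.05)
      ne = pedestal(r, **true) + rng.normal(0.0, 0.03, r.size)

      p0 = [0.8, 3.77, 0.05, 0.0, 0.0]                # initial guesses
      popt, pcov = curve_fit(pedestal, r, ne, p0=p0)
      print(f"fitted width = {popt[2] * 1000:.1f} mm (true {true['w'] * 1000:.0f} mm)")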

  8. Adverse Effects in Dual-Star Interferometry

    NASA Technical Reports Server (NTRS)

    Colavita, M. Mark

    2008-01-01

    Narrow-angle dual-star interferometric astrometry can provide very high accuracy in the presence of the Earth's turbulent atmosphere. However, exploiting the high atmospherically-limited accuracy requires control of systematic errors in measurement of the interferometer baseline, internal OPDs, and fringe phase. In addition, as high photometric SNR is required, care must be taken to maximize throughput and coherence to obtain high accuracy on faint stars. This article reviews the key aspects of the dual-star approach and implementation, and the main contributors to the ...

  9. Estimation of daily interfractional larynx residual setup error after isocentric alignment for head and neck radiotherapy: Quality-assurance implications for target volume and organ-at-risk margination using daily CT-on-rails imaging

    PubMed Central

    Baron, Charles A.; Awan, Musaddiq J.; Mohamed, Abdallah S. R.; Akel, Imad; Rosenthal, David I.; Gunn, G. Brandon; Garden, Adam S.; Dyer, Brandon A.; Court, Laurence; Sevak, Parag R.; Kocak-Uzel, Esengul; Fuller, Clifton D.

    2016-01-01

    Larynx may alternatively serve as a target or organ-at-risk (OAR) in head and neck cancer (HNC) image-guided radiotherapy (IGRT). The objective of this study was to estimate IGRT parameters required for larynx positional error independent of isocentric alignment and suggest population-based compensatory margins. Ten HNC patients receiving radiotherapy (RT) with daily CT-on-rails imaging were assessed. Seven landmark points were placed on each daily scan. Taking the most superior-anterior point of the C5 vertebra as a reference isocenter for each scan, residual displacement vectors to the other 6 points were calculated post-isocentric alignment. Subsequently, using the first scan as a reference, the magnitudes of the vector differences for all 6 points over the course of treatment were calculated. Residual systematic and random error, and the necessary compensatory CTV-to-PTV and OAR-to-PRV margins, were calculated using both observational cohort data and a bootstrap-resampled population estimator. The grand mean displacement for all anatomical points was 5.07 mm, with a mean systematic error of 1.1 mm and a mean random setup error of 2.63 mm; the bootstrapped grand mean displacement was 5.09 mm, with a mean systematic error of 1.23 mm and a mean random setup error of 2.61 mm. The required CTV-to-PTV margin was 4.6 mm for all cohort points, while the bootstrap estimate of the equivalent margin was 4.9 mm. The calculated OAR-to-PRV expansion for the observed residual setup error was 2.7 mm, with a bootstrap-estimated expansion of 2.9 mm. We conclude that interfractional larynx setup error is a significant source of RT setup/delivery error in HNC, whether the larynx is considered as a CTV or an OAR. We estimate the need for a uniform expansion of 5 mm to compensate for setup error if the larynx is a target, or 3 mm if the larynx is an OAR, when using a non-laryngeal bony isocenter. PMID:25679151
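
    The quoted margins are consistent with the standard recipes M = 2.5Σ + 0.7σ for CTV-to-PTV and M ≈ 1.3Σ + 0.5σ for OAR-to-PRV (our attribution; the abstract itself does not name a recipe). A quick arithmetic check in Python:

      # Assumed margin recipes (van Herk-type CTV-to-PTV, McKenzie-type
      # OAR-to-PRV) applied to the cohort numbers quoted above.
      Sigma, sigma = 1.1, 2.63                               # systematic, random (mm)
      print(f"CTV-to-PTV: {2.5 * Sigma + 0.7 * sigma:.1f} mm")   # ~4.6 mm
      print(f"OAR-to-PRV: {1.3 * Sigma + 0.5 * sigma:.1f} mm")   # ~2.7 mm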

  10. Estimation of daily interfractional larynx residual setup error after isocentric alignment for head and neck radiotherapy: quality assurance implications for target volume and organs‐at‐risk margination using daily CT on‐rails imaging

    PubMed Central

    Baron, Charles A.; Awan, Musaddiq J.; Mohamed, Abdallah S.R.; Akel, Imad; Rosenthal, David I.; Gunn, G. Brandon; Garden, Adam S.; Dyer, Brandon A.; Court, Laurence; Sevak, Parag R.; Kocak‐Uzel, Esengul

    2014-01-01

    Larynx may alternatively serve as a target or organ at risk (OAR) in head and neck cancer (HNC) image-guided radiotherapy (IGRT). The objective of this study was to estimate IGRT parameters required for larynx positional error independent of isocentric alignment and suggest population-based compensatory margins. Ten HNC patients receiving radiotherapy (RT) with daily CT on-rails imaging were assessed. Seven landmark points were placed on each daily scan. Taking the most superior-anterior point of the C5 vertebra as a reference isocenter for each scan, residual displacement vectors to the other six points were calculated postisocentric alignment. Subsequently, using the first scan as a reference, the magnitude of vector differences for all six points for all scans over the course of treatment was calculated. Residual systematic and random error and the necessary compensatory CTV-to-PTV and OAR-to-PRV margins were calculated, using both observational cohort data and a bootstrap-resampled population estimator. The grand mean displacement for all anatomical points was 5.07 mm, with mean systematic error of 1.1 mm and mean random setup error of 2.63 mm, while the bootstrapped POIs grand mean displacement was 5.09 mm, with mean systematic error of 1.23 mm and mean random setup error of 2.61 mm. Required margin for CTV-to-PTV expansion was 4.6 mm for all cohort points, while the bootstrap estimator of the equivalent margin was 4.9 mm. The calculated OAR-to-PRV expansion for the observed residual setup error was 2.7 mm, with a bootstrap-estimated expansion of 2.9 mm. We conclude that the interfractional larynx setup error is a significant source of RT setup/delivery error in HNC, whether the larynx is considered as a CTV or an OAR. We estimate the need for a uniform expansion of 5 mm to compensate for setup error if the larynx is a target, or 3 mm if the larynx is an OAR, when using a nonlaryngeal bony isocenter. PACS numbers: 87.55.D-, 87.55.Qr

  11. Functional Independent Scaling Relation for ORR/OER Catalysts

    DOE PAGES

    Christensen, Rune; Hansen, Heine A.; Dickens, Colin F.; ...

    2016-10-11

    A widely used adsorption energy scaling relation between OH* and OOH* intermediates in the oxygen reduction reaction (ORR) and oxygen evolution reaction (OER) has previously been determined using density functional theory and shown to dictate a minimum thermodynamic overpotential for both reactions. Here, we show that the oxygen-oxygen bond in the OOH* intermediate is, however, not well described with the previously used class of exchange-correlation functionals. By quantifying and correcting the systematic error, an improved description of gaseous peroxide species versus experimental data and a reduction in calculational uncertainty is obtained. For adsorbates, we find that the systematic error largely cancels the vdW interaction missing in the original determination of the scaling relation. An improved scaling relation, which is fully independent of the applied exchange-correlation functional, is obtained and found to differ by 0.1 eV from the original. Lastly, this largely confirms that, although obtained with a method suffering from systematic errors, the previously obtained scaling relation is applicable for predictions of catalytic activity.
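
    A minimal sketch of how a fixed OH*/OOH* scaling constant pins the minimum OER overpotential (Python; the commonly quoted 3.2 eV constant and the free-energy levels are illustrative):

      # Four proton-coupled electron transfer steps; the scaling relation fixes
      # the OH* -> OOH* span, so even an ideal O* level leaves a residual
      # overpotential of about 3.2/2 - 1.23 eV.
      def oer_overpotential(dG_OH, dG_O, scaling_const=3.2):
          dG_OOH = dG_OH + scaling_const      # scaling relation (eV)
          steps = [dG_OH,                      # H2O  -> OH*
                   dG_O - dG_OH,               # OH*  -> O*
                   dG_OOH - dG_O,              # O*   -> OOH*
                   4 * 1.23 - dG_OOH]          # OOH* -> O2
          return max(steps) - 1.23             # overpotential (V)

      dG_OH = 0.8
      dG_O = dG_OH + 3.2 / 2.0                 # ideal O* level, midway
      print(f"eta_min = {oer_overpotential(dG_OH, dG_O):.2f} V")   # ~0.37 V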

  12. Effects of waveform model systematics on the interpretation of GW150914

    NASA Astrophysics Data System (ADS)

    Abbott, B. P.; Abbott, R.; Abbott, T. D.; Abernathy, M. R.; Acernese, F.; Ackley, K.; Adams, C.; Adams, T.; Addesso, P.; Adhikari, R. X.; Adya, V. B.; Affeldt, C.; Agathos, M.; Agatsuma, K.; Aggarwal, N.; Aguiar, O. D.; Aiello, L.; Ain, A.; Ajith, P.; Allen, B.; Allocca, A.; Altin, P. A.; Ananyeva, A.; Anderson, S. B.; Anderson, W. G.; Appert, S.; Arai, K.; Araya, M. C.; Areeda, J. S.; Arnaud, N.; Arun, K. G.; Ascenzi, S.; Ashton, G.; Ast, M.; Aston, S. M.; Astone, P.; Aufmuth, P.; Aulbert, C.; Avila-Alvarez, A.; Babak, S.; Bacon, P.; Bader, M. K. M.; Baker, P. T.; Baldaccini, F.; Ballardin, G.; Ballmer, S. W.; Barayoga, J. C.; E Barclay, S.; Barish, B. C.; Barker, D.; Barone, F.; Barr, B.; Barsotti, L.; Barsuglia, M.; Barta, D.; Bartlett, J.; Bartos, I.; Bassiri, R.; Basti, A.; Batch, J. C.; Baune, C.; Bavigadda, V.; Bazzan, M.; Beer, C.; Bejger, M.; Belahcene, I.; Belgin, M.; Bell, A. S.; Berger, B. K.; Bergmann, G.; Berry, C. P. L.; Bersanetti, D.; Bertolini, A.; Betzwieser, J.; Bhagwat, S.; Bhandare, R.; Bilenko, I. A.; Billingsley, G.; Billman, C. R.; Birch, J.; Birney, R.; Birnholtz, O.; Biscans, S.; Bisht, A.; Bitossi, M.; Biwer, C.; Bizouard, M. A.; Blackburn, J. K.; Blackman, J.; Blair, C. D.; Blair, D. G.; Blair, R. M.; Bloemen, S.; Bock, O.; Boer, M.; Bogaert, G.; Bohe, A.; Bondu, F.; Bonnand, R.; Boom, B. A.; Bork, R.; Boschi, V.; Bose, S.; Bouffanais, Y.; Bozzi, A.; Bradaschia, C.; Brady, P. R.; Braginsky, V. B.; Branchesi, M.; E Brau, J.; Briant, T.; Brillet, A.; Brinkmann, M.; Brisson, V.; Brockill, P.; E Broida, J.; Brooks, A. F.; Brown, D. A.; Brown, D. D.; Brown, N. M.; Brunett, S.; Buchanan, C. C.; Buikema, A.; Bulik, T.; Bulten, H. J.; Buonanno, A.; Buskulic, D.; Buy, C.; Byer, R. L.; Cabero, M.; Cadonati, L.; Cagnoli, G.; Cahillane, C.; Calderón Bustillo, J.; Callister, T. A.; Calloni, E.; Camp, J. B.; Cannon, K. C.; Cao, H.; Cao, J.; Capano, C. D.; Capocasa, E.; Carbognani, F.; Caride, S.; Casanueva Diaz, J.; Casentini, C.; Caudill, S.; Cavaglià, M.; Cavalier, F.; Cavalieri, R.; Cella, G.; Cepeda, C. B.; Cerboni Baiardi, L.; Cerretani, G.; Cesarini, E.; Chamberlin, S. J.; Chan, M.; Chao, S.; Charlton, P.; Chassande-Mottin, E.; Cheeseboro, B. D.; Chen, H. Y.; Chen, Y.; Cheng, H.-P.; Chincarini, A.; Chiummo, A.; Chmiel, T.; Cho, H. S.; Cho, M.; Chow, J. H.; Christensen, N.; Chu, Q.; Chua, A. J. K.; Chua, S.; Chung, S.; Ciani, G.; Clara, F.; Clark, J. A.; Cleva, F.; Cocchieri, C.; Coccia, E.; Cohadon, P.-F.; Colla, A.; Collette, C. G.; Cominsky, L.; Constancio, M., Jr.; Conti, L.; Cooper, S. J.; Corbitt, T. R.; Cornish, N.; Corsi, A.; Cortese, S.; Costa, C. A.; Coughlin, M. W.; Coughlin, S. B.; Coulon, J.-P.; Countryman, S. T.; Couvares, P.; Covas, P. B.; E Cowan, E.; Coward, D. M.; Cowart, M. J.; Coyne, D. C.; Coyne, R.; E Creighton, J. D.; Creighton, T. D.; Cripe, J.; Crowder, S. G.; Cullen, T. J.; Cumming, A.; Cunningham, L.; Cuoco, E.; Dal Canton, T.; Danilishin, S. L.; D'Antonio, S.; Danzmann, K.; Dasgupta, A.; Da Silva Costa, C. F.; Dattilo, V.; Dave, I.; Davier, M.; Davies, G. S.; Davis, D.; Daw, E. J.; Day, B.; Day, R.; De, S.; DeBra, D.; Debreczeni, G.; Degallaix, J.; De Laurentis, M.; Deléglise, S.; Del Pozzo, W.; Denker, T.; Dent, T.; Dergachev, V.; De Rosa, R.; DeRosa, R. T.; DeSalvo, R.; Devenson, J.; Devine, R. C.; Dhurandhar, S.; Díaz, M. C.; Di Fiore, L.; Di Giovanni, M.; Di Girolamo, T.; Di Lieto, A.; Di Pace, S.; Di Palma, I.; Di Virgilio, A.; Doctor, Z.; Dolique, V.; Donovan, F.; Dooley, K. 
L.; Doravari, S.; Dorrington, I.; Douglas, R.; Dovale Álvarez, M.; Downes, T. P.; Drago, M.; Drever, R. W. P.; Driggers, J. C.; Du, Z.; Ducrot, M.; E Dwyer, S.; Edo, T. B.; Edwards, M. C.; Effler, A.; Eggenstein, H.-B.; Ehrens, P.; Eichholz, J.; Eikenberry, S. S.; Eisenstein, R. A.; Essick, R. C.; Etienne, Z.; Etzel, T.; Evans, M.; Evans, T. M.; Everett, R.; Factourovich, M.; Fafone, V.; Fair, H.; Fairhurst, S.; Fan, X.; Farinon, S.; Farr, B.; Farr, W. M.; Fauchon-Jones, E. J.; Favata, M.; Fays, M.; Fehrmann, H.; Fejer, M. M.; Fernández Galiana, A.; Ferrante, I.; Ferreira, E. C.; Ferrini, F.; Fidecaro, F.; Fiori, I.; Fiorucci, D.; Fisher, R. P.; Flaminio, R.; Fletcher, M.; Fong, H.; Forsyth, S. S.; Fournier, J.-D.; Frasca, S.; Frasconi, F.; Frei, Z.; Freise, A.; Frey, R.; Frey, V.; Fries, E. M.; Fritschel, P.; Frolov, V. V.; Fulda, P.; Fyffe, M.; Gabbard, H.; Gadre, B. U.; Gaebel, S. M.; Gair, J. R.; Gammaitoni, L.; Gaonkar, S. G.; Garufi, F.; Gaur, G.; Gayathri, V.; Gehrels, N.; Gemme, G.; Genin, E.; Gennai, A.; George, J.; Gergely, L.; Germain, V.; Ghonge, S.; Ghosh, Abhirup; Ghosh, Archisman; Ghosh, S.; Giaime, J. A.; Giardina, K. D.; Giazotto, A.; Gill, K.; Glaefke, A.; Goetz, E.; Goetz, R.; Gondan, L.; González, G.; Gonzalez Castro, J. M.; Gopakumar, A.; Gorodetsky, M. L.; E Gossan, S.; Gosselin, M.; Gouaty, R.; Grado, A.; Graef, C.; Granata, M.; Grant, A.; Gras, S.; Gray, C.; Greco, G.; Green, A. C.; Groot, P.; Grote, H.; Grunewald, S.; Guidi, G. M.; Guo, X.; Gupta, A.; Gupta, M. K.; E Gushwa, K.; Gustafson, E. K.; Gustafson, R.; Hacker, J. J.; Hall, B. R.; Hall, E. D.; Hammond, G.; Haney, M.; Hanke, M. M.; Hanks, J.; Hanna, C.; Hannam, M. D.; Hanson, J.; Hardwick, T.; Harms, J.; Harry, G. M.; Harry, I. W.; Hart, M. J.; Hartman, M. T.; Haster, C.-J.; Haughian, K.; Healy, J.; Heidmann, A.; Heintze, M. C.; Heitmann, H.; Hello, P.; Hemming, G.; Hendry, M.; Heng, I. S.; Hennig, J.; Henry, J.; Heptonstall, A. W.; Heurs, M.; Hild, S.; Hoak, D.; Hofman, D.; Holt, K.; E Holz, D.; Hopkins, P.; Hough, J.; Houston, E. A.; Howell, E. J.; Hu, Y. M.; Huerta, E. A.; Huet, D.; Hughey, B.; Husa, S.; Huttner, S. H.; Huynh-Dinh, T.; Indik, N.; Ingram, D. R.; Inta, R.; Isa, H. N.; Isac, J.-M.; Isi, M.; Isogai, T.; Iyer, B. R.; Izumi, K.; Jacqmin, T.; Jani, K.; Jaranowski, P.; Jawahar, S.; Jiménez-Forteza, F.; Johnson, W. W.; Jones, D. I.; Jones, R.; Jonker, R. J. G.; Ju, L.; Junker, J.; Kalaghatgi, C. V.; Kalogera, V.; Kandhasamy, S.; Kang, G.; Kanner, J. B.; Karki, S.; Karvinen, K. S.; Kasprzack, M.; Katsavounidis, E.; Katzman, W.; Kaufer, S.; Kaur, T.; Kawabe, K.; Kéfélian, F.; Keitel, D.; Kelley, D. B.; Kennedy, R.; Key, J. S.; Khalili, F. Y.; Khan, I.; Khan, S.; Khan, Z.; Khazanov, E. A.; Kijbunchoo, N.; Kim, Chunglee; Kim, J. C.; Kim, Whansun; Kim, W.; Kim, Y.-M.; Kimbrell, S. J.; King, E. J.; King, P. J.; Kirchhoff, R.; Kissel, J. S.; Klein, B.; Kleybolte, L.; Klimenko, S.; Koch, P.; Koehlenbeck, S. M.; Koley, S.; Kondrashov, V.; Kontos, A.; Korobko, M.; Korth, W. Z.; Kowalska, I.; Kozak, D. B.; Krämer, C.; Kringel, V.; Krishnan, B.; Królak, A.; Kuehn, G.; Kumar, P.; Kumar, R.; Kuo, L.; Kutynia, A.; Lackey, B. D.; Landry, M.; Lang, R. N.; Lange, J.; Lantz, B.; Lanza, R. K.; Lartaux-Vollard, A.; Lasky, P. D.; Laxen, M.; Lazzarini, A.; Lazzaro, C.; Leaci, P.; Leavey, S.; Lebigot, E. O.; Lee, C. H.; Lee, H. K.; Lee, H. M.; Lee, K.; Lehmann, J.; Lenon, A.; Leonardi, M.; Leong, J. R.; Leroy, N.; Letendre, N.; Levin, Y.; Li, T. G. F.; Libson, A.; Littenberg, T. B.; Liu, J.; Lockerbie, N. 
A.; Lombardi, A. L.; London, L. T.; E Lord, J.; Lorenzini, M.; Loriette, V.; Lormand, M.; Losurdo, G.; Lough, J. D.; Lovelace, G.; Lück, H.; Lundgren, A. P.; Lynch, R.; Ma, Y.; Macfoy, S.; Machenschalk, B.; MacInnis, M.; Macleod, D. M.; Magaña-Sandoval, F.; Majorana, E.; Maksimovic, I.; Malvezzi, V.; Man, N.; Mandic, V.; Mangano, V.; Mansell, G. L.; Manske, M.; Mantovani, M.; Marchesoni, F.; Marion, F.; Márka, S.; Márka, Z.; Markosyan, A. S.; Maros, E.; Martelli, F.; Martellini, L.; Martin, I. W.; Martynov, D. V.; Mason, K.; Masserot, A.; Massinger, T. J.; Masso-Reid, M.; Mastrogiovanni, S.; Matichard, F.; Matone, L.; Mavalvala, N.; Mazumder, N.; McCarthy, R.; E McClelland, D.; McCormick, S.; McGrath, C.; McGuire, S. C.; McIntyre, G.; McIver, J.; McManus, D. J.; McRae, T.; McWilliams, S. T.; Meacher, D.; Meadors, G. D.; Meidam, J.; Melatos, A.; Mendell, G.; Mendoza-Gandara, D.; Mercer, R. A.; Merilh, E. L.; Merzougui, M.; Meshkov, S.; Messenger, C.; Messick, C.; Metzdorff, R.; Meyers, P. M.; Mezzani, F.; Miao, H.; Michel, C.; Middleton, H.; E Mikhailov, E.; Milano, L.; Miller, A. L.; Miller, A.; Miller, B. B.; Miller, J.; Millhouse, M.; Minenkov, Y.; Ming, J.; Mirshekari, S.; Mishra, C.; Mitra, S.; Mitrofanov, V. P.; Mitselmakher, G.; Mittleman, R.; Moggi, A.; Mohan, M.; Mohapatra, S. R. P.; Montani, M.; Moore, B. C.; Moore, C. J.; Moraru, D.; Moreno, G.; Morriss, S. R.; Mours, B.; Mow-Lowry, C. M.; Mueller, G.; Muir, A. W.; Mukherjee, Arunava; Mukherjee, D.; Mukherjee, S.; Mukund, N.; Mullavey, A.; Munch, J.; Muniz, E. A. M.; Murray, P. G.; Mytidis, A.; Napier, K.; Nardecchia, I.; Naticchioni, L.; Nelemans, G.; Nelson, T. J. N.; Neri, M.; Nery, M.; Neunzert, A.; Newport, J. M.; Newton, G.; Nguyen, T. T.; Nielsen, A. B.; Nissanke, S.; Nitz, A.; Noack, A.; Nocera, F.; Nolting, D.; Normandin, M. E. N.; Nuttall, L. K.; Oberling, J.; Ochsner, E.; Oelker, E.; Ogin, G. H.; Oh, J. J.; Oh, S. H.; Ohme, F.; Oliver, M.; Oppermann, P.; Oram, Richard J.; O'Reilly, B.; O'Shaughnessy, R.; Ottaway, D. J.; Overmier, H.; Owen, B. J.; E Pace, A.; Page, J.; Pai, A.; Pai, S. A.; Palamos, J. R.; Palashov, O.; Palomba, C.; Pal-Singh, A.; Pan, H.; Pankow, C.; Pannarale, F.; Pant, B. C.; Paoletti, F.; Paoli, A.; Papa, M. A.; Paris, H. R.; Parker, W.; Pascucci, D.; Pasqualetti, A.; Passaquieti, R.; Passuello, D.; Patricelli, B.; Pearlstone, B. L.; Pedraza, M.; Pedurand, R.; Pekowsky, L.; Pele, A.; Penn, S.; Perez, C. J.; Perreca, A.; Perri, L. M.; Pfeiffer, H. P.; Phelps, M.; Piccinni, O. J.; Pichot, M.; Piergiovanni, F.; Pierro, V.; Pillant, G.; Pinard, L.; Pinto, I. M.; Pitkin, M.; Poe, M.; Poggiani, R.; Popolizio, P.; Post, A.; Powell, J.; Prasad, J.; Pratt, J. W. W.; Predoi, V.; Prestegard, T.; Prijatelj, M.; Principe, M.; Privitera, S.; Prodi, G. A.; Prokhorov, L. G.; Puncken, O.; Punturo, M.; Puppo, P.; Pürrer, M.; Qi, H.; Qin, J.; Qiu, S.; Quetschke, V.; Quintero, E. A.; Quitzow-James, R.; Raab, F. J.; Rabeling, D. S.; Radkins, H.; Raffai, P.; Raja, S.; Rajan, C.; Rakhmanov, M.; Rapagnani, P.; Raymond, V.; Razzano, M.; Re, V.; Read, J.; Regimbau, T.; Rei, L.; Reid, S.; Reitze, D. H.; Rew, H.; Reyes, S. D.; Rhoades, E.; Ricci, F.; Riles, K.; Rizzo, M.; Robertson, N. A.; Robie, R.; Robinet, F.; Rocchi, A.; Rolland, L.; Rollins, J. G.; Roma, V. J.; Romano, J. D.; Romano, R.; Romie, J. H.; Rosińska, D.; Rowan, S.; Rüdiger, A.; Ruggi, P.; Ryan, K.; Sachdev, S.; Sadecki, T.; Sadeghian, L.; Sakellariadou, M.; Salconi, L.; Saleem, M.; Salemi, F.; Samajdar, A.; Sammut, L.; Sampson, L. M.; Sanchez, E. 
J.; Sandberg, V.; Sanders, J. R.; Sassolas, B.; Sathyaprakash, B. S.; Saulson, P. R.; Sauter, O.; Savage, R. L.; Sawadsky, A.; Schale, P.; Scheuer, J.; Schmidt, E.; Schmidt, J.; Schmidt, P.; Schnabel, R.; Schofield, R. M. S.; Schönbeck, A.; Schreiber, E.; Schuette, D.; Schutz, B. F.; Schwalbe, S. G.; Scott, J.; Scott, S. M.; Sellers, D.; Sengupta, A. S.; Sentenac, D.; Sequino, V.; Sergeev, A.; Setyawati, Y.; Shaddock, D. A.; Shaffer, T. J.; Shahriar, M. S.; Shapiro, B.; Shawhan, P.; Sheperd, A.; Shoemaker, D. H.; Shoemaker, D. M.; Siellez, K.; Siemens, X.; Sieniawska, M.; Sigg, D.; Silva, A. D.; Singer, A.; Singer, L. P.; Singh, A.; Singh, R.; Singhal, A.; Sintes, A. M.; Slagmolen, B. J. J.; Smith, B.; Smith, J. R.; E Smith, R. J.; Son, E. J.; Sorazu, B.; Sorrentino, F.; Souradeep, T.; Spencer, A. P.; Srivastava, A. K.; Staley, A.; Steinke, M.; Steinlechner, J.; Steinlechner, S.; Steinmeyer, D.; Stephens, B. C.; Stevenson, S. P.; Stone, R.; Strain, K. A.; Straniero, N.; Stratta, G.; E Strigin, S.; Sturani, R.; Stuver, A. L.; Summerscales, T. Z.; Sun, L.; Sunil, S.; Sutton, P. J.; Swinkels, B. L.; Szczepańczyk, M. J.; Tacca, M.; Talukder, D.; Tanner, D. B.; Tápai, M.; Taracchini, A.; Taylor, R.; Theeg, T.; Thomas, E. G.; Thomas, M.; Thomas, P.; Thorne, K. A.; Thrane, E.; Tippens, T.; Tiwari, S.; Tiwari, V.; Tokmakov, K. V.; Toland, K.; Tomlinson, C.; Tonelli, M.; Tornasi, Z.; Torrie, C. I.; Töyrä, D.; Travasso, F.; Traylor, G.; Trifirò, D.; Trinastic, J.; Tringali, M. C.; Trozzo, L.; Tse, M.; Tso, R.; Turconi, M.; Tuyenbayev, D.; Ugolini, D.; Unnikrishnan, C. S.; Urban, A. L.; Usman, S. A.; Vahlbruch, H.; Vajente, G.; Valdes, G.; van Bakel, N.; van Beuzekom, M.; van den Brand, J. F. J.; Van Den Broeck, C.; Vander-Hyde, D. C.; van der Schaaf, L.; van Heijningen, J. V.; van Veggel, A. A.; Vardaro, M.; Varma, V.; Vass, S.; Vasúth, M.; Vecchio, A.; Vedovato, G.; Veitch, J.; Veitch, P. J.; Venkateswara, K.; Venugopalan, G.; Verkindt, D.; Vetrano, F.; Viceré, A.; Viets, A. D.; Vinciguerra, S.; Vine, D. J.; Vinet, J.-Y.; Vitale, S.; Vo, T.; Vocca, H.; Vorvick, C.; Voss, D. V.; Vousden, W. D.; Vyatchanin, S. P.; Wade, A. R.; E Wade, L.; Wade, M.; Walker, M.; Wallace, L.; Walsh, S.; Wang, G.; Wang, H.; Wang, M.; Wang, Y.; Ward, R. L.; Warner, J.; Was, M.; Watchi, J.; Weaver, B.; Wei, L.-W.; Weinert, M.; Weinstein, A. J.; Weiss, R.; Wen, L.; Weßels, P.; Westphal, T.; Wette, K.; Whelan, J. T.; Whiting, B. F.; Whittle, C.; Williams, D.; Williams, R. D.; Williamson, A. R.; Willis, J. L.; Willke, B.; Wimmer, M. H.; Winkler, W.; Wipf, C. C.; Wittel, H.; Woan, G.; Woehler, J.; Worden, J.; Wright, J. L.; Wu, D. S.; Wu, G.; Yam, W.; Yamamoto, H.; Yancey, C. C.; Yap, M. J.; Yu, Hang; Yu, Haocun; Yvert, M.; Zadrożny, A.; Zangrando, L.; Zanolin, M.; Zendri, J.-P.; Zevin, M.; Zhang, L.; Zhang, M.; Zhang, T.; Zhang, Y.; Zhao, C.; Zhou, M.; Zhou, Z.; Zhu, S. J.; Zhu, X. J.; E Zucker, M.; Zweizig, J.; LIGO Scientific Collaboration; Virgo Collaboration; Boyle, M.; Chu, T.; Hemberger, D.; Hinder, I.; E Kidder, L.; Ossokine, S.; Scheel, M.; Szilagyi, B.; Teukolsky, S.; Vano Vinuales, A.

    2017-05-01

    Parameter estimates of GW150914 were obtained using Bayesian inference, based on three semi-analytic waveform models for binary black hole coalescences. These waveform models differ from each other in their treatment of black hole spins, and all three models make some simplifying assumptions, notably to neglect sub-dominant waveform harmonic modes and orbital eccentricity. Furthermore, while the models are calibrated to agree with waveforms obtained by full numerical solutions of Einstein’s equations, any such calibration is accurate only to some non-zero tolerance and is limited by the accuracy of the underlying phenomenology, availability, quality, and parameter-space coverage of numerical simulations. This paper complements the original analyses of GW150914 with an investigation of the effects of possible systematic errors in the waveform models on estimates of its source parameters. To test for systematic errors we repeat the original Bayesian analysis on mock signals from numerical simulations of a series of binary configurations with parameters similar to those found for GW150914. Overall, we find no evidence for a systematic bias relative to the statistical error of the original parameter recovery of GW150914 due to modeling approximations or modeling inaccuracies. However, parameter biases are found to occur for some configurations disfavored by the data of GW150914: for binaries inclined edge-on to the detector over a small range of choices of polarization angles, and also for eccentricities greater than ~0.05. For signals with higher signal-to-noise ratio than GW150914, or in other regions of the binary parameter space (lower masses, larger mass ratios, or higher spins), we expect that systematic errors in current waveform models may impact gravitational-wave measurements, making more accurate models desirable for future observations.

  13. VizieR Online Data Catalog: 5 Galactic GC proper motions from Gaia DR1 (Watkins+, 2017)

    NASA Astrophysics Data System (ADS)

    Watkins, L. L.; van der Marel, R. P.

    2017-11-01

    We present a pilot study of Galactic globular cluster (GC) proper motion (PM) determinations using Gaia data. We search for GC stars in the Tycho-Gaia Astrometric Solution (TGAS) catalog from Gaia Data Release 1 (DR1), and identify five members of NGC 104 (47 Tucanae), one member of NGC 5272 (M3), five members of NGC 6121 (M4), seven members of NGC 6397, and two members of NGC 6656 (M22). By taking a weighted average of member stars, fully accounting for the correlations between parameters, we estimate the parallax (and, hence, distance) and PM of the GCs. This provides a homogeneous PM study of multiple GCs based on an astrometric catalog with small and well-controlled systematic errors and yields random PM errors similar to existing measurements. Detailed comparison to the available Hubble Space Telescope (HST) measurements generally shows excellent agreement, validating the astrometric quality of both TGAS and HST. By contrast, comparison to ground-based measurements shows that some of those must have systematic errors exceeding the random errors. Our parallax estimates have uncertainties an order of magnitude larger than previous studies, but nevertheless imply distances consistent with previous estimates. By combining our PM measurements with literature positions, distances, and radial velocities, we measure Galactocentric space motions for the clusters and find that these also agree well with previous analyses. Our analysis provides a framework for determining more accurate distances and PMs of Galactic GCs using future Gaia data releases. This will provide crucial constraints on the near end of the cosmic distance ladder and provide accurate GC orbital histories. (4 data files).
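
    A minimal sketch of a correlation-aware weighted average of the kind described above (Python; the parallaxes, errors, and common correlation coefficient are invented for illustration; TGAS provides full per-star covariances):

      # Generalized-least-squares mean: mean = (1' C^-1 x) / (1' C^-1 1),
      # which reduces to the inverse-variance mean when C is diagonal.
      import numpy as np

      plx = np.array([0.21, 0.25, 0.18, 0.23, 0.22])   # member parallaxes (mas)
      err = np.array([0.30, 0.25, 0.35, 0.28, 0.32])   # per-star errors (mas)
      rho = 0.3                                        # assumed common correlation

      C = np.outer(err, err) * rho                     # covariance matrix
      np.fill_diagonal(C, err ** 2)
      Cinv = np.linalg.inv(C)
      one = np.ones_like(plx)

      var = 1.0 / (one @ Cinv @ one)                   # GLS variance of the mean
      mean = var * (one @ Cinv @ plx)                  # GLS weighted mean
      print(f"parallax = {mean:.3f} +/- {np.sqrt(var):.3f} mas")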

  14. Tycho-Gaia Astrometric Solution Parallaxes and Proper Motions for Five Galactic Globular Clusters

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Watkins, Laura L.; Van der Marel, Roeland P., E-mail: lwatkins@stsci.edu

    2017-04-20

    We present a pilot study of Galactic globular cluster (GC) proper motion (PM) determinations using Gaia data. We search for GC stars in the Tycho-Gaia Astrometric Solution (TGAS) catalog from Gaia Data Release 1 (DR1), and identify five members of NGC 104 (47 Tucanae), one member of NGC 5272 (M3), five members of NGC 6121 (M4), seven members of NGC 6397, and two members of NGC 6656 (M22). By taking a weighted average of member stars, fully accounting for the correlations between parameters, we estimate the parallax (and, hence, distance) and PM of the GCs. This provides a homogeneous PM study of multiple GCs based on an astrometric catalog with small and well-controlled systematic errors and yields random PM errors similar to existing measurements. Detailed comparison to the available Hubble Space Telescope (HST) measurements generally shows excellent agreement, validating the astrometric quality of both TGAS and HST. By contrast, comparison to ground-based measurements shows that some of those must have systematic errors exceeding the random errors. Our parallax estimates have uncertainties an order of magnitude larger than previous studies, but nevertheless imply distances consistent with previous estimates. By combining our PM measurements with literature positions, distances, and radial velocities, we measure Galactocentric space motions for the clusters and find that these also agree well with previous analyses. Our analysis provides a framework for determining more accurate distances and PMs of Galactic GCs using future Gaia data releases. This will provide crucial constraints on the near end of the cosmic distance ladder and provide accurate GC orbital histories.

  15. Prevalence of refractive errors in children in India: a systematic review.

    PubMed

    Sheeladevi, Sethu; Seelam, Bharani; Nukella, Phanindra B; Modi, Aditi; Ali, Rahul; Keay, Lisa

    2018-04-22

    Uncorrected refractive error is an avoidable cause of visual impairment which affects children in India. The objective of this review is to estimate the prevalence of refractive errors in children ≤ 15 years of age. The Preferred Reporting Items for Systematic Reviews and Meta-Analyses guidelines were followed in this review. A detailed literature search was performed to include all population and school-based studies published from India between January 1990 and January 2017, using the Cochrane Library, Medline and Embase. The quality of the included studies was assessed based on a critical appraisal tool developed for systematic reviews of prevalence studies. Four population-based studies and eight school-based studies were included. The overall prevalence of refractive error per 100 children was 8.0 (CI: 7.4-8.1) and in schools it was 10.8 (CI: 10.5-11.2). The population-based prevalence of myopia, hyperopia (≥ +2.00 D) and astigmatism was 5.3 per cent, 4.0 per cent and 5.4 per cent, respectively. Combined refractive error and myopia alone were higher in urban areas compared to rural areas (odds ratio [OR]: 2.27 [CI: 2.09-2.45]) and (OR: 2.12 [CI: 1.79-2.50]), respectively. The prevalence of combined refractive errors and myopia alone in schools was higher among girls than boys (OR: 1.2 [CI: 1.1-1.3] and OR: 1.1 [CI: 1.1-1.2]), respectively. However, hyperopia was more prevalent among boys than girls in schools (OR: 2.1 [CI: 1.8-2.4]). Refractive error in children in India is a major public health problem and requires concerted efforts from various stakeholders including the health care workforce, education professionals and parents, to manage this issue.

  16. Understanding overlay signatures using machine learning on non-lithography context information

    NASA Astrophysics Data System (ADS)

    Overcast, Marshall; Mellegaard, Corey; Daniel, David; Habets, Boris; Erley, Georg; Guhlemann, Steffen; Thrun, Xaver; Buhl, Stefan; Tottewitz, Steven

    2018-03-01

    Overlay errors between two layers can be caused by non-lithography processes. While these errors can be compensated by the run-to-run system, such process and tool signatures are not always stable. In order to monitor the impact of non-lithography context on overlay at regular intervals, a systematic approach is needed. Using various machine learning techniques, significant context parameters that relate to deviating overlay signatures are automatically identified. Once the most influential context parameters are found, a run-to-run simulation is performed to see how much improvement can be obtained. The resulting analysis shows good potential for reducing the influence of hidden context parameters on overlay performance. Non-lithographic contexts are significant contributors, and their automatic detection and classification, combined with the corresponding control capabilities, will support the overlay roadmap.
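
    A minimal sketch of ranking context parameters by influence (Python/scikit-learn; the context names and the synthetic overlay response are invented, and the paper's actual techniques may differ):

      # Fit a random forest to overlay error vs. non-lithography context and
      # read off feature importances to flag influential parameters.
      import numpy as np
      from sklearn.ensemble import RandomForestRegressor

      rng = np.random.default_rng(4)
      n = 400
      context = {
          "etch_chamber": rng.integers(0, 3, n),     # categorical, integer-encoded
          "cmp_tool": rng.integers(0, 2, n),
          "queue_time_h": rng.uniform(0, 48, n),
      }
      X = np.column_stack(list(context.values()))
      # toy model: overlay driven mainly by etch chamber and queue time
      y = 1.5 * (X[:, 0] == 2) + 0.02 * X[:, 2] + rng.normal(0, 0.2, n)

      model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)
      for name, imp in zip(context, model.feature_importances_):
          print(f"{name:14s} importance = {imp:.2f}")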

  17. Predicting areas of sustainable error growth in quasigeostrophic flows using perturbation alignment properties

    NASA Astrophysics Data System (ADS)

    Rivière, G.; Hua, B. L.

    2004-10-01

    A new perturbation initialization method is used to quantify error growth due to inaccuracies of the forecast model initial conditions in a quasigeostrophic box ocean model describing a wind-driven double gyre circulation. This method is based on recent analytical results on the Lagrangian alignment dynamics of the perturbation velocity vector in quasigeostrophic flows. More specifically, it consists in initializing a unique perturbation from the sole knowledge of the control flow properties at the initial time of the forecast, whose velocity vector orientation satisfies a Lagrangian equilibrium criterion. This Alignment-based Initialization method is hereafter denoted as the AI method. In terms of spatial distribution of the errors, the AI error forecast compares favorably with the mean error obtained with a Monte-Carlo ensemble prediction. It is shown that the AI forecast is on average as efficient as the error forecast initialized with the leading singular vector for the palenstrophy norm, and significantly more efficient than those for the total energy and enstrophy norms. Furthermore, a more precise examination shows that the AI forecast is systematically relevant for all control flows, whereas the palenstrophy singular vector forecast leads sometimes to very good scores and sometimes to very bad ones. A principal component analysis at the final time of the forecast shows that the AI mode spatial structure is comparable to that of the first eigenvector of the error covariance matrix for a "bred mode" ensemble. Furthermore, the kinetic energy of the AI mode grows at the same constant rate as that of the "bred modes" from the initial time to the final time of the forecast and is therefore characterized by a sustained phase of error growth. In this sense, the AI mode based on the Lagrangian dynamics of the perturbation velocity orientation provides a rationale for the "bred mode" behavior.

  18. An e-mail survey identified unpublished studies for systematic reviews.

    PubMed

    Reveiz, Ludovic; Cardona, Andres Felipe; Ospina, Edgar Guillermo; de Agular, Sylvia

    2006-07-01

    A large number of trials remain difficult to locate or unpublished for systematic reviews. The objective of this article was to determine the usefulness of making e-mail contact with authors of clinical trials and literature reviews found in MEDLINE to identify unpublished or difficult-to-locate randomized controlled trials (RCTs). A structured search for detecting RCTs in MEDLINE was made from January 1999 to June 2003; a questionnaire was sent to a random sample of 525 authors' e-mail addresses. The RCTs thus identified were then sought in MEDLINE, EMBASE, the Cochrane Controlled Trials Register, LILACS, and ongoing-trial registers. Forty (7.6%) replies were received; 10 previously undescribed and unpublished RCTs and 21 unregistered ongoing RCTs were found. The most frequently given reasons for not publishing were: lack of time for finalizing the statistical analysis and preparing the manuscript, contractual obligations with the pharmaceutical industry, methodological errors in design, and editorial rejection. Using the e-mail addresses of authors detected by searches in electronic databases could contribute toward detecting potentially relevant ongoing or unpublished RCTs, enabling rapid, straightforward, low-cost systematic reviews; in addition, the results of this study support the need for universal registration of all studies at their inception.

  19. Simulated forecast error and climate drift resulting from the omission of the upper stratosphere in numerical models

    NASA Technical Reports Server (NTRS)

    Boville, Byron A.; Baumhefner, David P.

    1990-01-01

    Using the NCAR Community Climate Model, Version 1, the forecast error growth and the climate drift resulting from the omission of the upper stratosphere are investigated. In the experiment, the control simulation is a seasonal integration of a medium-horizontal-resolution general circulation model with 30 levels extending from the surface to the upper mesosphere, while the main experiment uses an identical model except that only the bottom 15 levels (below 10 mb) are retained. It is shown that both random and systematic errors develop rapidly in the lower stratosphere, with some local propagation into the troposphere in the 10-30-day time range. The random error growth rate in the troposphere in the case of the altered upper boundary was found to be slightly faster than that for the initial-condition uncertainty alone. However, this is not likely to make a significant impact in operational forecast models, because the initial-condition uncertainty is very large.

  20. Reconstructing the calibrated strain signal in the Advanced LIGO detectors

    NASA Astrophysics Data System (ADS)

    Viets, A. D.; Wade, M.; Urban, A. L.; Kandhasamy, S.; Betzwieser, J.; Brown, Duncan A.; Burguet-Castell, J.; Cahillane, C.; Goetz, E.; Izumi, K.; Karki, S.; Kissel, J. S.; Mendell, G.; Savage, R. L.; Siemens, X.; Tuyenbayev, D.; Weinstein, A. J.

    2018-05-01

    Advanced LIGO’s raw detector output needs to be calibrated to compute dimensionless strain h(t) . Calibrated strain data is produced in the time domain using both a low-latency, online procedure and a high-latency, offline procedure. The low-latency h(t) data stream is produced in two stages, the first of which is performed on the same computers that operate the detector’s feedback control system. This stage, referred to as the front-end calibration, uses infinite impulse response (IIR) filtering and performs all operations at a 16 384 Hz digital sampling rate. Due to several limitations, this procedure currently introduces certain systematic errors in the calibrated strain data, motivating the second stage of the low-latency procedure, known as the low-latency gstlal calibration pipeline. The gstlal calibration pipeline uses finite impulse response (FIR) filtering to apply corrections to the output of the front-end calibration. It applies time-dependent correction factors to the sensing and actuation components of the calibrated strain to reduce systematic errors. The gstlal calibration pipeline is also used in high latency to recalibrate the data, which is necessary due mainly to online dropouts in the calibrated data and identified improvements to the calibration models or filters.
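
    A minimal sketch of the FIR-correction idea (Python/SciPy; generic filters and a mock drifting correction factor, not LIGO's calibration models):

      # Apply an FIR correction filter and a slowly varying time-dependent
      # factor to a mock front-end data stream.
      import numpy as np
      from scipy.signal import firwin, lfilter

      fs = 16384                                   # digital sampling rate (Hz)
      t = np.arange(fs) / fs                       # one second of data
      raw = np.sin(2 * np.pi * 35.9 * t)           # mock front-end output

      # generic low-pass FIR standing in for the correction filter
      correction = firwin(numtaps=129, cutoff=5000, fs=fs)
      corrected = lfilter(correction, [1.0], raw)

      # time-dependent correction factor applied to one signal component
      kappa_t = 1.0 + 0.01 * np.sin(2 * np.pi * t / t[-1])   # mock slow drift
      h_t = kappa_t * corrected
      print(f"rms = {np.sqrt(np.mean(h_t ** 2)):.3f}")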

  1. The Clinical Assessment in the Legal Field: An Empirical Study of Bias and Limitations in Forensic Expertise

    PubMed Central

    Iudici, Antonio; Salvini, Alessandro; Faccio, Elena; Castelnuovo, Gianluca

    2015-01-01

    According to the literature, psychological assessment in forensic contexts is one of the most controversial application areas for clinical psychology. This paper presents a review of systematic judgment errors in the forensic field. Forty-six psychological reports written by psychologists serving as court consultants were analyzed with content analysis to identify typical judgment errors related to the following areas: (a) distortions in the attribution of causality, (b) inferential errors, and (c) epistemological inconsistencies. Results indicated that systematic errors of judgment, often also described as "man in the street" errors, are widely present in the forensic evaluations of specialist consultants. Clinical and practical implications are taken into account. This article could lead to significant benefits for clinical psychologists who want to deal with this sensitive issue and are interested in improving the quality of their contribution to the justice system. PMID:26648892

  2. Insights on the impact of systematic model errors on data assimilation performance in changing catchments

    NASA Astrophysics Data System (ADS)

    Pathiraja, S.; Anghileri, D.; Burlando, P.; Sharma, A.; Marshall, L.; Moradkhani, H.

    2018-03-01

    The global prevalence of rapid and extensive land use change necessitates hydrologic modelling methodologies capable of handling non-stationarity. This is particularly true in the context of Hydrologic Forecasting using Data Assimilation. Data Assimilation has been shown to dramatically improve forecast skill in hydrologic and meteorological applications, although such improvements are conditional on using bias-free observations and model simulations. A hydrologic model calibrated to a particular set of land cover conditions has the potential to produce biased simulations when the catchment is disturbed. This paper sheds new light on the impacts of bias or systematic errors in hydrologic data assimilation, in the context of forecasting in catchments with changing land surface conditions and a model calibrated to pre-change conditions. We posit that in such cases, the impact of systematic model errors on assimilation or forecast quality is dependent on the inherent prediction uncertainty that persists even in pre-change conditions. Through experiments on a range of catchments, we develop a conceptual relationship between total prediction uncertainty and the impacts of land cover changes on the hydrologic regime to demonstrate how forecast quality is affected when using state estimation Data Assimilation with no modifications to account for land cover changes. This work shows that systematic model errors as a result of changing or changed catchment conditions do not always necessitate adjustments to the modelling or assimilation methodology, for instance through re-calibration of the hydrologic model, time varying model parameters or revised offline/online bias estimation.

  3. Errors in Viking Lander Atmospheric Profiles Discovered Using MOLA Topography

    NASA Technical Reports Server (NTRS)

    Withers, Paul; Lorenz, R. D.; Neumann, G. A.

    2002-01-01

    Each Viking lander measured a topographic profile during entry. Comparison with MOLA (Mars Orbiter Laser Altimeter) topography reveals a vertical error of 1-2 km in the Viking trajectory. This introduces a systematic error of 10-20% in the Viking densities and pressures at a given altitude. Additional information is contained in the original extended abstract.
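
    The quoted 10-20% follows from an isothermal-atmosphere estimate: a vertical offset Δz changes the inferred density by a factor exp(Δz/H). Assuming a Martian scale height of roughly 11 km (a round number assumed here, not taken from the abstract):

    ```python
    # Back-of-envelope check of the quoted systematic error, assuming an
    # isothermal atmosphere with an 11 km Martian scale height (assumed).
    import numpy as np

    H = 11.0                                 # assumed scale height (km)
    for dz in (1.0, 2.0):                    # Viking trajectory error (km)
        frac = np.exp(dz / H) - 1.0
        print(f"dz = {dz} km -> density/pressure error ~ {frac:.0%}")
    # prints ~10% and ~20%, consistent with the quoted 10-20% range
    ```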

  4. Scale invariant feature transform in adaptive radiation therapy: a tool for deformable image registration assessment and re-planning indication

    NASA Astrophysics Data System (ADS)

    Paganelli, Chiara; Peroni, Marta; Riboldi, Marco; Sharp, Gregory C.; Ciardo, Delia; Alterio, Daniela; Orecchia, Roberto; Baroni, Guido

    2013-01-01

    Adaptive radiation therapy (ART) aims at compensating for anatomic and pathological changes to improve delivery along a treatment fraction sequence. Current ART protocols require time-consuming manual updating of all volumes of interest on the images acquired during treatment. Deformable image registration (DIR) and contour propagation stand as the state-of-the-art methods to automate the process, but the lack of DIR quality control methods hinders their introduction into clinical practice. We investigated the scale invariant feature transform (SIFT) method as a quantitative automated tool (1) for DIR evaluation and (2) for re-planning decision-making in the framework of ART treatments. As a preliminary test, SIFT invariance properties at shape-preserving and deformable transformations were studied on a computational phantom, granting residual matching errors below the voxel dimension. Then a clinical dataset composed of 19 head and neck ART patients was used to quantify the performance in ART treatments. For goal (1), results demonstrated SIFT potential as an operator-independent DIR quality assessment metric. We measured DIR group systematic residual errors up to 0.66 mm, against 1.35 mm provided by rigid registration. The group systematic errors of both bony and all other structures were also analyzed, attesting the presence of anatomical deformations. The correct automated identification of 18 patients who might benefit from ART out of the total 22 cases demonstrated SIFT's capability toward achieving goal (2).
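
    As a hedged illustration of the underlying idea, the 2D OpenCV sketch below matches SIFT keypoints between two image slices and reports residual distances as a registration-quality metric. The file names are hypothetical, and the authors' actual 3D implementation differs.

    ```python
    # Illustrative 2D sketch of SIFT-based residual-error measurement between
    # a planning image and a fraction image. Real ART data are 3D; the file
    # names and the 2D setting here are hypothetical.
    import cv2
    import numpy as np

    fixed  = cv2.imread("planning_slice.png", cv2.IMREAD_GRAYSCALE)   # hypothetical
    moving = cv2.imread("fraction_slice.png", cv2.IMREAD_GRAYSCALE)   # hypothetical

    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(fixed, None)
    kp2, des2 = sift.detectAndCompute(moving, None)

    matches = cv2.BFMatcher().knnMatch(des1, des2, k=2)
    good = [m for m, n in matches if m.distance < 0.75 * n.distance]  # Lowe ratio test

    # Residual distances between matched landmarks act as a DIR quality metric
    residuals = [np.hypot(*np.subtract(kp1[m.queryIdx].pt, kp2[m.trainIdx].pt))
                 for m in good]
    print(f"mean residual: {np.mean(residuals):.2f} px")
    ```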

  5. Solutions to decrease a systematic error related to AAPH addition in the fluorescence-based ORAC assay.

    PubMed

    Mellado-Ortega, Elena; Zabalgogeazcoa, Iñigo; Vázquez de Aldana, Beatriz R; Arellano, Juan B

    2017-02-15

    The oxygen radical absorbance capacity (ORAC) assay in 96-well multi-detection plate readers is a rapid method to determine total antioxidant capacity (TAC) in biological samples. A disadvantage of this method is that the antioxidant inhibition reaction does not start in all of the 96 wells at the same time, due to technical limitations when dispensing the free radical-generating azo initiator 2,2'-azobis (2-methyl-propanimidamide) dihydrochloride (AAPH). The time delay between wells yields a systematic error that causes statistically significant differences in the TAC determination of antioxidant solutions depending on their plate position. We propose two alternative solutions to avoid this AAPH-dependent error in ORAC assays. Copyright © 2016 Elsevier Inc. All rights reserved.

  6. On framing the research question and choosing the appropriate research design.

    PubMed

    Parfrey, Patrick S; Ravani, Pietro

    2015-01-01

    Clinical epidemiology is the science of human disease investigation with a focus on diagnosis, prognosis, and treatment. The generation of a reasonable question requires definition of patients, interventions, controls, and outcomes. The goal of research design is to minimize error, to ensure adequate samples, to measure input and output variables appropriately, to consider external and internal validities, to limit bias, and to address clinical as well as statistical relevance. The hierarchy of evidence for clinical decision-making places randomized controlled trials (RCT) or systematic review of good quality RCTs at the top of the evidence pyramid. Prognostic and etiologic questions are best addressed with longitudinal cohort studies.

  7. On framing the research question and choosing the appropriate research design.

    PubMed

    Parfrey, Patrick; Ravani, Pietro

    2009-01-01

    Clinical epidemiology is the science of human disease investigation with a focus on diagnosis, prognosis, and treatment. The generation of a reasonable question requires the definition of patients, interventions, controls, and outcomes. The goal of research design is to minimize error, ensure adequate samples, measure input and output variables appropriately, consider external and internal validities, limit bias, and address clinical as well as statistical relevance. The hierarchy of evidence for clinical decision making places randomized controlled trials (RCT) or systematic review of good quality RCTs at the top of the evidence pyramid. Prognostic and etiologic questions are best addressed with longitudinal cohort studies.

  8. A novel method for routine quality assurance of volumetric-modulated arc therapy.

    PubMed

    Wang, Qingxin; Dai, Jianrong; Zhang, Ke

    2013-10-01

    Volumetric-modulated arc therapy (VMAT) is delivered through synchronized variation of gantry angle, dose rate, and multileaf collimator (MLC) leaf positions. The dynamic nature of the delivery challenges the parameter-setting accuracy of the linac control system. The purpose of this study was to develop a novel method for routine quality assurance (QA) of VMAT linacs. ArcCheck is a detector array with diodes distributed in a spiral pattern on a cylindrical surface. Utilizing these features, a QA plan was designed to strictly test all varying parameters during VMAT delivery on an Elekta Synergy linac. The plan contains 24 control points. The gantry rotates clockwise from 181° to 179°. The dose rate, gantry speed, and MLC positions cover the ranges commonly used in the clinic. The two borders of the MLC-shaped field sit over two columns of diodes of the ArcCheck when the gantry rotates to the angle specified by each control point. The ratio of dose rate between each of these diodes and the diode closest to the field center takes a fixed value and is sensitive to the positioning error of the MLC leaf crossing the diode. Consequently, the positioning error can be determined from this ratio with the help of a relationship curve. The time when the gantry reaches the angle specified by each control point can be acquired from the virtual inclinometer, a feature of the ArcCheck. The gantry speed between two consecutive control points is then calculated. The aforementioned dose rate is calculated from an acm file generated during ArcCheck measurements; this file stores the data measured by each detector in 50 ms updates, with each update in a separate row. A computer program was written in MATLAB to process the data. The program output included the MLC positioning errors and the dose rate at each control point, as well as the gantry speed between control points. To evaluate this method, the plan was delivered for four consecutive weeks. The actual dose rate and gantry speed were compared with those specified in the QA plan. Additionally, leaf positioning errors were intentionally introduced to investigate the sensitivity of the method. Relationship curves were established for detecting MLC positioning errors during VMAT delivery. Over the four consecutive weeks measured, 98.4%, 94.9%, 89.2%, and 91.0% of the leaf positioning errors were within ±0.5 mm, respectively. For the intentionally introduced leaf positioning systematic errors of -0.5 and +1 mm, the detected positioning errors of leaf 20 of the Y1 bank were -0.48 ± 0.14 and 1.02 ± 0.26 mm, respectively. The actual gantry speed and dose rate closely followed the values specified in the VMAT QA plan. This method can simultaneously assess the accuracy of MLC positions and the dose rate at each control point, as well as the gantry speed between control points. It is efficient and suitable for routine quality assurance of VMAT.
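
    A minimal sketch of the look-up step described above: converting a measured diode dose-rate ratio into a leaf positioning error through a pre-established relationship curve. The curve values are invented for illustration.

    ```python
    # Sketch of the relationship-curve look-up: a measured diode-to-center
    # dose-rate ratio is mapped to an MLC leaf positioning error. The
    # calibration points below are invented, not the paper's data.
    import numpy as np

    # hypothetical calibration: leaf offset (mm) vs. diode-to-center dose ratio
    offsets_mm = np.array([-2.0, -1.0, 0.0, 1.0, 2.0])
    ratios     = np.array([0.32, 0.41, 0.50, 0.59, 0.68])

    measured_ratio = 0.455
    leaf_error_mm = np.interp(measured_ratio, ratios, offsets_mm)
    print(f"detected leaf positioning error: {leaf_error_mm:+.2f} mm")
    ```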

  9. [Improving blood safety: errors management in transfusion medicine].

    PubMed

    Bujandrić, Nevenka; Grujić, Jasmina; Krga-Milanović, Mirjana

    2014-01-01

    The concept of blood safety includes the entire transfusion chain, starting with the collection of blood from the blood donor and ending with blood transfusion to the patient. The concept involves a quality management system with systematic monitoring of adverse reactions and incidents regarding the blood donor or patient. Monitoring of near-miss errors reveals the critical points in the working process and increases transfusion safety. The aim of the study was to present the analysis results of adverse and unexpected events in transfusion practice with a potential risk to the health of blood donors and patients. A one-year retrospective study was based on the collection, analysis and interpretation of written reports on medical errors in the Blood Transfusion Institute of Vojvodina. Errors were categorized according to their type, frequency and the part of the working process where they occurred. Possible causes and corrective actions were described for each error. The study showed that there were no errors with actual health consequences for the blood donor/patient. Errors with potentially damaging consequences for patients were detected throughout the entire transfusion chain. Most of the errors were identified in the preanalytical phase. The human factor was responsible for the largest number of errors. An error reporting system has an important role in error management and in the reduction of transfusion-related risk of adverse events and incidents. The ongoing analysis reveals the strengths and weaknesses of the entire process and indicates the necessary changes. A large percentage of errors in transfusion medicine can be avoided, and prevention is cost-effective, systematic and applicable.

  10. Medication errors in paediatric care: a systematic review of epidemiology and an evaluation of evidence supporting reduction strategy recommendations

    PubMed Central

    Miller, Marlene R; Robinson, Karen A; Lubomski, Lisa H; Rinke, Michael L; Pronovost, Peter J

    2007-01-01

    Background Although children are at the greatest risk for medication errors, little is known about the overall epidemiology of these errors, where the gaps are in our knowledge, and to what extent national medication error reduction strategies focus on children. Objective To synthesise peer reviewed knowledge on children's medication errors and on recommendations to improve paediatric medication safety by a systematic literature review. Data sources PubMed, Embase and Cinahl from 1 January 2000 to 30 April 2005, and 11 national entities that have disseminated recommendations to improve medication safety. Study selection Inclusion criteria were peer reviewed original data in English language. Studies that did not separately report paediatric data were excluded. Data extraction Two reviewers screened articles for eligibility and for data extraction, and screened all national medication error reduction strategies for relevance to children. Data synthesis From 358 articles identified, 31 were included for data extraction. The definition of medication error was non‐uniform across the studies. Dispensing and administering errors were the most poorly and non‐uniformly evaluated. Overall, the distributional epidemiological estimates of the relative percentages of paediatric error types were: prescribing 3–37%, dispensing 5–58%, administering 72–75%, and documentation 17–21%. 26 unique recommendations for strategies to reduce medication errors were identified; none were based on paediatric evidence. Conclusions Medication errors occur across the entire spectrum of prescribing, dispensing, and administering, are common, and have a myriad of non‐evidence based potential reduction strategies. Further research in this area needs a firmer standardisation for items such as dose ranges and definitions of medication errors, broader scope beyond inpatient prescribing errors, and prioritisation of implementation of medication error reduction strategies. PMID:17403758

  11. Human-robot cooperative movement training: learning a novel sensory motor transformation during walking with robotic assistance-as-needed.

    PubMed

    Emken, Jeremy L; Benitez, Raul; Reinkensmeyer, David J

    2007-03-28

    A prevailing paradigm of physical rehabilitation following neurologic injury is to "assist-as-needed" in completing desired movements. Several research groups are attempting to automate this principle with robotic movement training devices and patient cooperative algorithms that encourage voluntary participation. These attempts are currently not based on computational models of motor learning. Here we assume that motor recovery from a neurologic injury can be modelled as a process of learning a novel sensory motor transformation, which allows us to study a simplified experimental protocol amenable to mathematical description. Specifically, we use a robotic force field paradigm to impose a virtual impairment on the left leg of unimpaired subjects walking on a treadmill. We then derive an "assist-as-needed" robotic training algorithm to help subjects overcome the virtual impairment and walk normally. The problem is posed as an optimization of performance error and robotic assistance. The optimal robotic movement trainer becomes an error-based controller with a forgetting factor that bounds kinematic errors while systematically reducing its assistance when those errors are small. As humans have a natural range of movement variability, we introduce an error weighting function that causes the robotic trainer to disregard this variability. We experimentally validated the controller with ten unimpaired subjects by demonstrating how it helped the subjects learn the novel sensory motor transformation necessary to counteract the virtual impairment, while also preventing them from experiencing large kinematic errors. The addition of the error weighting function allowed the robot assistance to fade to zero even though the subjects' movements were variable. We also show that in order to assist-as-needed, the robot must relax its assistance at a rate faster than that of the learning human. The assist-as-needed algorithm proposed here can limit error during the learning of a dynamic motor task. The algorithm encourages learning by decreasing its assistance as a function of the ongoing progression of movement error. This type of algorithm is well suited for helping people learn dynamic tasks for which large kinematic errors are dangerous or discouraging, and thus may prove useful for robot-assisted movement training of walking or reaching following neurologic injury.
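
    A conceptual sketch of such an error-based update with a forgetting factor and an error-weighting deadband follows; the gains and thresholds are invented and this is not the authors' exact controller.

    ```python
    # Conceptual sketch of an assist-as-needed update: error-based assistance
    # with a forgetting factor, plus a weighting function that disregards
    # natural movement variability. All gains are invented.
    import numpy as np

    rng = np.random.default_rng(0)
    f_forget, gain = 0.8, 0.5      # forgetting factor and error gain (assumed)
    deadband = 0.01                # natural movement variability to ignore (m)

    F = 0.0                        # robot assistance (arbitrary units)
    for step in range(200):
        error = rng.normal(0.0, 0.005)               # stand-in kinematic error (m)
        w = 0.0 if abs(error) < deadband else 1.0    # error weighting function
        F = f_forget * F + gain * w * error          # update with forgetting
    print("final assistance:", F)
    # With errors inside the variability band, the forgetting factor drives
    # the assistance toward zero, as the paper's weighting function intends.
    ```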

  12. Particle Tracking on the BNL Relativistic Heavy Ion Collider

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dell, G. F.

    1986-08-07

    Tracking studies including the effects of random multipole errors alone, as well as the effects of combined random and systematic multipole errors, have been made for RHIC. Initial results for operating at an off-diagonal working point are discussed.

  13. WFIRST: Principal Components Analysis of H4RG-10 Near-IR Detector Data Cubes

    NASA Astrophysics Data System (ADS)

    Rauscher, Bernard

    2018-01-01

    The Wide Field Infrared Survey Telescope’s (WFIRST) Wide Field Instrument (WFI) incorporates an array of eighteen Teledyne H4RG-10 near-IR detector arrays. Because WFIRST’s science investigations require controlling systematic uncertainties to state-of-the-art levels, we conducted principal components analysis (PCA) of some H4RG-10 test data obtained in the NASA Goddard Space Flight Center Detector Characterization Laboratory (DCL). The PCA indicates that the Legendre polynomials provide a nearly orthogonal representation of up-the-ramp sampled illuminated data cubes, and suggests other bases that may provide an even more compact representation of the data in some circumstances. We hypothesize that by using orthogonal representations, such as those described here, it may be possible to control systematic errors better than has been achieved before for NASA missions. We believe that these findings are probably applicable to other H4RG-, H2RG-, and H1RG-based systems.
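
    A small sketch of the representation the PCA points to: fitting a synthetic up-the-ramp pixel read-out with a low-order Legendre basis using NumPy.

    ```python
    # Sketch of representing an up-the-ramp sampled pixel with a Legendre
    # basis, as the PCA suggests. The ramp here is synthetic.
    import numpy as np
    from numpy.polynomial import legendre

    t = np.linspace(-1.0, 1.0, 100)                             # normalized frame times
    ramp = 50.0 * (t + 1.0) + np.random.normal(0, 2.0, t.size)  # synthetic ramp (ADU)

    coeffs = legendre.legfit(t, ramp, deg=3)    # nearly orthogonal representation
    recon = legendre.legval(t, coeffs)
    print("rms residual:", np.sqrt(np.mean((ramp - recon) ** 2)))
    ```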

  14. Efficacy and workload analysis of a fixed vertical couch position technique and a fixed‐action–level protocol in whole‐breast radiotherapy

    PubMed Central

    Verhoeven, Karolien; Weltens, Caroline; Van den Heuvel, Frank

    2015-01-01

    Quantification of setup errors is vital to define appropriate setup margins preventing geographical misses. The no-action-level (NAL) correction protocol reduces the systematic setup errors and, hence, the setup margins. The manual entry of the setup corrections in the record-and-verify software, however, increases the susceptibility of the NAL protocol to human errors. Moreover, the impact of skin mobility on the anteroposterior patient setup reproducibility in whole-breast radiotherapy (WBRT) is unknown. In this study, we therefore investigated the potential of fixed vertical couch position-based patient setup in WBRT. The possibility of introducing a threshold for correction of the systematic setup errors was also explored. We measured the anteroposterior, mediolateral, and superior-inferior setup errors during fractions 1-12 and weekly thereafter with tangential angled single modality paired imaging. These setup data were used to simulate the residual setup errors of the NAL protocol, the fixed vertical couch position protocol, and the fixed-action-level protocol with different correction thresholds. Population statistics of the setup errors of 20 breast cancer patients and 20 breast cancer patients with additional regional lymph node (LN) irradiation were calculated to determine the setup margins of each off-line correction protocol. Our data showed the potential of the fixed vertical couch position protocol to restrict the systematic and random anteroposterior residual setup errors to 1.8 mm and 2.2 mm, respectively. Compared to the NAL protocol, a correction threshold of 2.5 mm reduced the frequency of mediolateral and superior-inferior setup corrections by 40% and 63%, respectively. The implementation of the correction threshold did not deteriorate the accuracy of the off-line setup correction compared to the NAL protocol. The combination of the fixed vertical couch position protocol, for correction of the anteroposterior setup error, and the fixed-action-level protocol with a 2.5 mm correction threshold, for correction of the mediolateral and superior-inferior setup errors, proved to provide adequate and comparable patient setup accuracy in WBRT and WBRT with additional LN irradiation. PACS numbers: 87.53.Kn, 87.57.-s
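
    For context, systematic and random errors of the kind reported can be turned into a CTV-to-PTV margin with the widely used van Herk recipe (2.5Σ + 0.7σ). The paper does not state that this exact recipe was applied, so the sketch is purely illustrative.

    ```python
    # Example margin calculation using the common van Herk recipe
    # (margin = 2.5 * Sigma + 0.7 * sigma); illustrative only, since the
    # study's own margin recipe is not stated in the abstract.
    systematic_mm = 1.8   # Sigma: residual systematic setup error (from the study)
    random_mm     = 2.2   # sigma: residual random setup error (from the study)

    margin_mm = 2.5 * systematic_mm + 0.7 * random_mm
    print(f"CTV-to-PTV margin ~ {margin_mm:.1f} mm")   # ~6.0 mm
    ```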

  15. Dosimetric effects of patient rotational setup errors on prostate IMRT treatments

    NASA Astrophysics Data System (ADS)

    Fu, Weihua; Yang, Yong; Li, Xiang; Heron, Dwight E.; Saiful Huq, M.; Yue, Ning J.

    2006-10-01

    The purpose of this work is to determine dose delivery errors that could result from systematic rotational setup errors (ΔΦ) for prostate cancer patients treated with three-phase sequential boost IMRT. In order to implement this, different rotational setup errors around three Cartesian axes were simulated for five prostate patients and dosimetric indices, such as dose-volume histogram (DVH), tumour control probability (TCP), normal tissue complication probability (NTCP) and equivalent uniform dose (EUD), were employed to evaluate the corresponding dosimetric influences. Rotational setup errors were simulated by adjusting the gantry, collimator and horizontal couch angles of treatment beams and the dosimetric effects were evaluated by recomputing the dose distributions in the treatment planning system. Our results indicated that, for prostate cancer treatment with the three-phase sequential boost IMRT technique, the rotational setup errors do not have significant dosimetric impacts on the cumulative plan. Even in the worst-case scenario with ΔΦ = 3°, the prostate EUD varied within 1.5% and TCP decreased about 1%. For seminal vesicle, slightly larger influences were observed. However, EUD and TCP changes were still within 2%. The influence on sensitive structures, such as rectum and bladder, is also negligible. This study demonstrates that the rotational setup error degrades the dosimetric coverage of target volume in prostate cancer treatment to a certain degree. However, the degradation was not significant for the three-phase sequential boost prostate IMRT technique and for the margin sizes used in our institution.

  16. Cervical sensorimotor control in idiopathic cervical dystonia: A cross-sectional study.

    PubMed

    De Pauw, Joke; Mercelis, Rudy; Hallemans, Ann; Michiels, Sarah; Truijen, Steven; Cras, Patrick; De Hertogh, Willem

    2017-09-01

    Patients with idiopathic adult-onset cervical dystonia (CD) experience an abnormal head posture and involuntary muscle contractions. Although the exact areas affected in the central nervous system remain uncertain, impaired function is apparent in the systems stabilizing the head and neck, such as the somatosensory and sensorimotor integration systems. The aim of the study is to investigate cervical sensorimotor control dysfunction in patients with CD. Cervical sensorimotor control was assessed by a head repositioning task in 24 patients with CD and 70 asymptomatic controls. Blindfolded participants were asked to reposition their head to a previously memorized neutral head position (NHP) following an active movement (flexion, extension, left, and right rotation). The repositioning error (joint position error, JPE) was registered via 3D motion analysis with an eight-camera infrared system (VICON® T10). Disease-specific characteristics of all patients were obtained via the Tsui scale, Cervical Dystonia Impact Profile (CDIP-58), and Toronto Western Spasmodic Rating Scale. Patients with CD showed a larger JPE than controls (mean difference of 1.5°, p < .006) and systematically 'overshot', i.e., surpassed the NHP, whereas control subjects 'undershot', i.e., fell short of the NHP. The JPE did not correlate with disease-specific characteristics. Cervical sensorimotor control is impaired in patients with CD. As cervical sensorimotor control can be trained, it might be a potential target for therapy, adjuvant to botulinum toxin injections.

  17. Measurement of the Neutron Beta Decay Lifetime using Magnetically Trapped Ultracold Neutrons

    NASA Astrophysics Data System (ADS)

    Adamek, Evan Robert

    The neutron lifetime is an important parameter in the Standard Model of particle physics, with influences on the electroweak interaction and on Big Bang nucleosynthesis. Measurements of this quantity in cold-beam experiments and in experiments using ultracold neutrons (UCN) disagree; this discrepancy may indicate that these measurements possess unaccounted-for systematic errors. The UCNtau experiment at the Los Alamos Neutron Science Center (LANSCE) utilizes an asymmetrical magneto-gravitational storage volume with an in-situ vanadium detector. This setup is designed to either avoid or control many of the weaknesses that reduce systematic precision in other UCN lifetime experiments. Controlling for the many measurable errors requires detailed calculation and simulation, aided, for example, by the Geant4 Monte Carlo particle transport toolkit, which has been used to create a high-fidelity model of the UCNtau experiment for modeling UCN transport, storage, and detection. Through the course of running the experiment, improvements in knowledge of particle measurement have led to improvements to the transport and to the detectors used in various parts of the experiment. With the experimental setup optimized to account for the subtleties of the measurement, the 2014-2015 beam period at LANSCE generated 85 measurement runs from which the storage lifetime could be calculated. Careful analysis of the effects of background on the vanadium detector assembly allowed for the elimination of undesired signal, the extraction of a preliminary value for the neutron lifetime, and the determination of areas to improve for the following run cycle.

  18. MAX-DOAS measurements of HONO slant column densities during the MAD-CAT campaign: inter-comparison, sensitivity studies on spectral analysis settings, and error budget

    NASA Astrophysics Data System (ADS)

    Wang, Yang; Beirle, Steffen; Hendrick, Francois; Hilboll, Andreas; Jin, Junli; Kyuberis, Aleksandra A.; Lampel, Johannes; Li, Ang; Luo, Yuhan; Lodi, Lorenzo; Ma, Jianzhong; Navarro, Monica; Ortega, Ivan; Peters, Enno; Polyansky, Oleg L.; Remmers, Julia; Richter, Andreas; Puentedura, Olga; Van Roozendael, Michel; Seyler, André; Tennyson, Jonathan; Volkamer, Rainer; Xie, Pinhua; Zobov, Nikolai F.; Wagner, Thomas

    2017-10-01

    In order to promote the development of the passive DOAS technique, the Multi Axis DOAS - Comparison campaign for Aerosols and Trace gases (MAD-CAT) was held at the Max Planck Institute for Chemistry in Mainz, Germany, from June to October 2013. Here, we systematically compare the differential slant column densities (dSCDs) of nitrous acid (HONO) derived from measurements of seven different instruments. We also compare the tropospheric difference of SCDs (delta SCD) of HONO, namely the difference of the SCDs for the non-zenith observations and the zenith observation of the same elevation sequence. Different research groups analysed the spectra from their own instruments using their individual fit software. The fit errors of HONO dSCDs from the instruments with cooled large-size detectors are mostly in the range of 0.1 to 0.3 × 10^15 molecules cm^-2 for an integration time of 1 min. The fit error for the mini MAX-DOAS is around 0.7 × 10^15 molecules cm^-2. Although the HONO delta SCDs are normally smaller than 6 × 10^15 molecules cm^-2, consistent time series of HONO delta SCDs are retrieved from the measurements of different instruments. Both fits with a sequential Fraunhofer reference spectrum (FRS) and a daily noon FRS lead to similar consistency. Apart from the mini MAX-DOAS, the systematic absolute differences of HONO delta SCDs between the instruments are smaller than 0.63 × 10^15 molecules cm^-2. The correlation coefficients are higher than 0.7 and the slopes of linear regressions deviate from unity by less than 16% for an elevation angle of 1°. The correlations decrease with increasing elevation angle. All the participants also analysed synthetic spectra using the same baseline DOAS settings to evaluate the systematic errors of HONO results from their respective fit programs. In general the errors are smaller than 0.3 × 10^15 molecules cm^-2, which is about half of the systematic difference between the real measurements. The differences of HONO delta SCDs retrieved in the three selected spectral ranges 335-361, 335-373 and 335-390 nm are considerable (up to 0.57 × 10^15 molecules cm^-2) for both real measurements and synthetic spectra. We performed sensitivity studies to quantify the dominant systematic error sources and to find a recommended DOAS setting in the three spectral ranges. The results show that water vapour absorption, the temperature and wavelength dependence of O4 absorption, the temperature dependence of the Ring spectrum, and the polynomial and intensity offset corrections together dominate the systematic errors. We recommend a fit range of 335-373 nm for HONO retrievals. In this fit range the overall systematic uncertainty is about 0.87 × 10^15 molecules cm^-2, much smaller than those in the other two ranges. The typical random uncertainty is estimated to be about 0.16 × 10^15 molecules cm^-2, which is only 25% of the total systematic uncertainty for most of the instruments in the MAD-CAT campaign. In summary, for most of the MAX-DOAS instruments at elevation angles below 5°, HONO delta SCDs can exceed the detection limit of 0.2 × 10^15 molecules cm^-2 with an uncertainty of ~0.9 × 10^15 molecules cm^-2 for about half of the daytime measurements (usually in the morning).

  19. The Role of Supralexical Prosodic Units in Speech Production: Evidence from the Distribution of Speech Errors

    ERIC Educational Resources Information Center

    Choe, Wook Kyung

    2013-01-01

    The current dissertation represents one of the first systematic studies of the distribution of speech errors within supralexical prosodic units. Four experiments were conducted to gain insight into the specific role of these units in speech planning and production. The first experiment focused on errors in adult English. These were found to be…

  20. A geometric model for initial orientation errors in pigeon navigation.

    PubMed

    Postlethwaite, Claire M; Walker, Michael M

    2011-01-21

    All mobile animals respond to gradients in signals in their environment, such as light, sound, odours and magnetic and electric fields, but it remains controversial how they might use these signals to navigate over long distances. The Earth's surface is essentially two-dimensional, so two stimuli are needed to act as coordinates for navigation. However, no environmental fields are known to be simple enough to act as perpendicular coordinates on a two-dimensional grid. Here, we propose a model for navigation in which we assume that an animal has a simplified 'cognitive map' in which environmental stimuli act as perpendicular coordinates. We then investigate how systematic deviation of the contour lines of the environmental signals from a simple orthogonal arrangement can cause errors in position determination and lead to systematic patterns of directional errors in initial homing directions taken by pigeons. The model reproduces patterns of initial orientation errors seen in previously collected data from homing pigeons, predicts that errors should increase with distance from the loft, and provides a basis for efforts to identify further sources of orientation errors made by homing pigeons. Copyright © 2010 Elsevier Ltd. All rights reserved.
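
    A toy numerical version of the model's core idea (all coordinates invented): the bird solves for its position assuming orthogonal signal gradients, and a modest deviation from orthogonality biases the inferred homing direction.

    ```python
    # Toy sketch: the bird reads two environmental coordinates but assumes
    # their gradients are perpendicular. A deviation from orthogonality then
    # biases the inferred home direction. All numbers are invented.
    import numpy as np

    true_pos = np.array([30.0, 40.0])             # km from the loft (unknown to bird)
    g1 = np.array([1.0, 0.0])                     # gradient of signal 1
    theta = np.deg2rad(80.0)                      # signals actually 80 deg apart
    g2 = np.array([np.cos(theta), np.sin(theta)])

    readings = np.array([g1 @ true_pos, g2 @ true_pos])

    # The bird inverts the readings assuming orthonormal gradients
    est_pos = readings.copy()

    def bearing(v):
        return np.degrees(np.arctan2(v[1], v[0]))

    # Compare the homing bearings (toward the loft at the origin)
    print("initial-orientation error (deg):",
          bearing(-est_pos) - bearing(-true_pos))
    ```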

  1. Entanglement-assisted quantum feedback control

    NASA Astrophysics Data System (ADS)

    Yamamoto, Naoki; Mikami, Tomoaki

    2017-07-01

    The main advantage of quantum metrology relies on the effective use of entanglement, which allows us to achieve estimation performance strictly better than the standard quantum limit. In this paper, we propose an analogous method utilizing entanglement for the purpose of feedback control. The system considered is a general linear dynamical quantum system, where the control goal can be systematically formulated as a linear quadratic Gaussian control problem based on the quantum Kalman filtering method; in this setting, an entangled input probe field is effectively used to reduce the estimation error and accordingly the control cost function. In particular, we show that, in the problem of cooling an opto-mechanical oscillator, the entanglement-assisted feedback control can lower the stationary occupation number of the oscillator below the limit attainable by a controller with a coherent probe field, and furthermore beats the controller with an optimized squeezed probe field.
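
    A classical scalar analogue of the estimation side of this story: in an LQG loop, lowering the measurement noise variance (the role the entangled probe plays here) lowers the steady-state Kalman error variance and hence the attainable control cost. Parameters are arbitrary illustration values.

    ```python
    # Classical analogue sketch: steady-state predicted error variance of a
    # scalar Kalman filter for two measurement-noise levels. Not the quantum
    # formulation of the paper; values are arbitrary.
    def steady_state_kalman_var(a, q, r):
        """Fixed-point iteration for the steady-state predicted variance:
        P = a^2 * (P * r / (P + r)) + q."""
        P = q
        for _ in range(1000):
            P = a * a * P * r / (P + r) + q
        return P

    a, q = 0.95, 0.1                  # dynamics and process-noise variance
    for r in (1.0, 0.3):              # 'coherent' vs. reduced measurement noise
        print(f"measurement var {r}: predicted error var "
              f"{steady_state_kalman_var(a, q, r):.3f}")
    ```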

  2. Systematic Error in Leaf Water Potential Measurements with a Thermocouple Psychrometer.

    PubMed

    Rawlins, S L

    1964-10-30

    To allow for the error in measurement of water potentials in leaves, introduced by the presence of a water droplet in the chamber of the psychrometer, a correction must be made for the permeability of the leaf.

  3. Quantifying the burden of opioid medication errors in adult oncology and palliative care settings: A systematic review.

    PubMed

    Heneka, Nicole; Shaw, Tim; Rowett, Debra; Phillips, Jane L

    2016-06-01

    Opioids are the primary pharmacological treatment for cancer pain and, in the palliative care setting, are routinely used to manage symptoms at the end of life. Opioids are one of the most frequently reported drug classes in medication errors causing patient harm. Despite their widespread use, little is known about the incidence and impact of opioid medication errors in oncology and palliative care settings. To determine the incidence, types and impact of reported opioid medication errors in adult oncology and palliative care patient settings. A systematic review. Five electronic databases and the grey literature were searched from 1980 to August 2014. Empirical studies published in English, reporting data on opioid medication error incidence, types or patient impact, within adult oncology and/or palliative care services, were included. Popay's narrative synthesis approach was used to analyse data. Five empirical studies were included in this review. Opioid error incidence rate was difficult to ascertain as each study focussed on a single narrow area of error. The predominant error type related to deviation from opioid prescribing guidelines, such as incorrect dosing intervals. None of the included studies reported the degree of patient harm resulting from opioid errors. This review has highlighted the paucity of the literature examining opioid error incidence, types and patient impact in adult oncology and palliative care settings. Defining, identifying and quantifying error reporting practices for these populations should be an essential component of future oncology and palliative care quality and safety initiatives. © The Author(s) 2015.

  4. LOGISTIC FUNCTION PROFILE FIT: A least-squares program for fitting interface profiles to an extended logistic function

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kirchhoff, William H.

    2012-09-15

    The extended logistic function provides a physically reasonable description of interfaces such as depth profiles or line scans of surface topological or compositional features. It describes these interfaces with the minimum number of parameters, namely, position, width, and asymmetry. Logistic Function Profile Fit (LFPF) is a robust, least-squares fitting program in which the nonlinear extended logistic function is linearized by a Taylor series expansion (equivalent to a Newton-Raphson approach) with no apparent introduction of bias in the analysis. The program provides reliable confidence limits for the parameters when systematic errors are minimal and provides a display of the residuals from the fit for the detection of systematic errors. The program will aid researchers in applying ASTM E1636-10, 'Standard practice for analytically describing sputter-depth-profile and linescan-profile data by an extended logistic function,' and may also prove useful in applying ISO 18516:2006, 'Surface chemical analysis-Auger electron spectroscopy and x-ray photoelectron spectroscopy-determination of lateral resolution.' Examples are given of LFPF fits to a secondary ion mass spectrometry depth profile, an Auger surface line scan, and synthetic data generated to exhibit known systematic errors for examining the significance of such errors to the extrapolation of partial profiles.
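
    In the same spirit as LFPF (though much simpler), a least-squares fit of a basic symmetric logistic profile with position and width parameters can be sketched with SciPy. The ASTM extended form adds an asymmetry parameter omitted here, and the data are synthetic.

    ```python
    # Hedged sketch: least-squares fit of a symmetric logistic profile; the
    # full ASTM extended form adds an asymmetry parameter not shown here.
    import numpy as np
    from scipy.optimize import curve_fit

    def logistic(x, lo, hi, x0, w):
        return lo + (hi - lo) / (1.0 + np.exp(-(x - x0) / w))

    x = np.linspace(0, 100, 200)                          # sputter depth (nm)
    y = logistic(x, 0.0, 1.0, 55.0, 4.0) + np.random.normal(0, 0.02, x.size)

    popt, pcov = curve_fit(logistic, x, y, p0=(0.0, 1.0, 50.0, 5.0))
    perr = np.sqrt(np.diag(pcov))                         # 1-sigma confidence limits
    print("position, width:", popt[2], popt[3])
    # Inspecting the residuals (y - logistic(x, *popt)) flags systematic errors,
    # the same diagnostic LFPF displays.
    ```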

  5. The effect of rainfall measurement uncertainties on rainfall-runoff processes modelling.

    PubMed

    Stransky, D; Bares, V; Fatka, P

    2007-01-01

    Rainfall data are a crucial input for various tasks concerning the wet-weather period. Nevertheless, their measurement is affected by random and systematic errors that cause an underestimation of the rainfall volume. The general objective of the presented work was therefore to assess the credibility of measured rainfall data and to evaluate the effect of measurement errors on urban drainage modelling tasks. Within the project, a calibration methodology for the tipping bucket rain gauge (TBR) was defined and assessed in terms of uncertainty analysis. A set of 18 TBRs was calibrated and the results were compared to a previous calibration, which enabled us to evaluate the ageing of the TBRs. A propagation of calibration and other systematic errors through the rainfall-runoff model was performed on an experimental catchment. It was found that TBR calibration is important mainly for tasks connected with the assessment of peak values and high-flow durations. The omission of calibration leads to up to 30% underestimation, and the effect of other systematic errors can add a further 15%. TBR calibration should be done every two years in order to keep pace with the ageing of the TBR mechanics. Further, the authors recommend adjusting the dynamic test duration proportionally to the generated rainfall intensity.

  6. Evaluation of the leap motion controller as a new contact-free pointing device.

    PubMed

    Bachmann, Daniel; Weichert, Frank; Rinkenauer, Gerhard

    2014-12-24

    This paper presents a Fitts' law-based analysis of the user's performance in selection tasks with the Leap Motion Controller compared with a standard mouse device. The Leap Motion Controller (LMC) is a new contact-free input system for gesture-based human-computer interaction with declared sub-millimeter accuracy. Up to this point, there has hardly been any systematic evaluation of this new system available. With an error rate of 7.8% for the LMC and 2.8% for the mouse device, movement times twice as large as for a mouse device and high overall effort ratings, the Leap Motion Controller's performance as an input device for everyday generic computer pointing tasks is rather limited, at least with regard to the selection recognition provided by the LMC.
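
    The Fitts'-law quantities behind such comparisons can be computed directly; the distance, width, and movement times below are invented illustration values, not the study's data.

    ```python
    # Sketch of the Fitts'-law index of difficulty and throughput used in
    # pointing-device comparisons. All values are invented.
    import numpy as np

    D, W = 0.20, 0.02            # target distance and width (m), hypothetical
    ID = np.log2(D / W + 1.0)    # Shannon formulation of index of difficulty

    mt = {"mouse": 0.6, "LMC": 1.2}   # movement times (s), illustration only
    for name, t in mt.items():
        print(f"{name}: ID = {ID:.2f} bits, throughput = {ID / t:.2f} bits/s")
    ```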

  7. Evaluation of the Leap Motion Controller as a New Contact-Free Pointing Device

    PubMed Central

    Bachmann, Daniel; Weichert, Frank; Rinkenauer, Gerhard

    2015-01-01

    This paper presents a Fitts' law-based analysis of the user's performance in selection tasks with the Leap Motion Controller compared with a standard mouse device. The Leap Motion Controller (LMC) is a new contact-free input system for gesture-based human-computer interaction with declared sub-millimeter accuracy. Up to this point, there has hardly been any systematic evaluation of this new system available. With an error rate of 7.8% for the LMC and 2.8% for the mouse device, movement times twice as large as for a mouse device and high overall effort ratings, the Leap Motion Controller's performance as an input device for everyday generic computer pointing tasks is rather limited, at least with regard to the selection recognition provided by the LMC. PMID:25609043

  8. Adaptive control of an exoskeleton robot with uncertainties on kinematics and dynamics.

    PubMed

    Brahmi, Brahim; Saad, Maarouf; Ochoa-Luna, Cristobal; Rahman, Mohammad H

    2017-07-01

    In this paper, we propose a new adaptive control technique based on nonlinear sliding mode control (JSTDE), taking into account kinematic and dynamic uncertainties. This approach is applied to an exoskeleton robot with uncertain kinematics and dynamics. The adaptation design is based on Time Delay Estimation (TDE). The proposed strategy does not require well-defined dynamic and kinematic models of the robot. The update laws are designed using a Lyapunov function to solve the adaptation problem systematically, proving closed-loop stability and ensuring asymptotic convergence of the output tracking errors. Experimental results show the effectiveness and feasibility of the JSTDE technique in dealing with the variation of the unknown nonlinear dynamics and kinematics of the exoskeleton model.
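
    A conceptual one-degree-of-freedom sketch of the TDE idea (not the authors' JSTDE law): the lumped unknown dynamics are estimated from the previous control and acceleration samples, so no exact model is needed. All gains and plant parameters are invented.

    ```python
    # Conceptual 1-DOF time-delay-estimation (TDE) control sketch. The plant
    # parameters, gains, and trajectory are invented for illustration.
    import numpy as np

    dt, m_bar = 0.001, 1.0              # sample time (s), nominal inertia (assumed)
    kp, kd = 400.0, 40.0                # PD gains (assumed)

    q, qd = 0.0, 0.0                    # joint position and velocity
    u_prev, acc_prev = 0.0, 0.0         # one-sample-delayed signals used by TDE
    for k in range(5000):
        t = k * dt
        e, edot = np.sin(t) - q, np.cos(t) - qd     # tracking errors
        h_hat = u_prev - m_bar * acc_prev           # TDE of lumped unknown dynamics
        u = m_bar * (-np.sin(t) + kp * e + kd * edot) + h_hat

        # 'true' plant, unknown to the controller: heavier, with friction + gravity
        acc = (u - 2.0 * qd - 5.0 * np.sin(q)) / 1.5
        qd, q = qd + acc * dt, q + qd * dt
        u_prev, acc_prev = u, acc                   # acc would be measured in practice
    print("tracking error at t = 5 s:", np.sin(5.0) - q)
    ```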

  9. Design and study of water supply system for supercritical unit boiler in thermal power station

    NASA Astrophysics Data System (ADS)

    Du, Zenghui

    2018-04-01

    In order to design and optimize the boiler feed-water system of a supercritical unit, establishing a highly accurate model of the controlled object and its dynamic characteristics is a prerequisite for developing a sound thermal control system. Mechanism-based modelling, however, often leads to large systematic errors. Drawing on the information contained in the historical operating data of a typical boiler thermal system, a modern intelligent identification method is used to establish a high-precision quantitative model. This method avoids the difficulties caused by disturbance-experiment modelling on the actual system in the field, and provides a strong reference for the design and optimization of thermal automation control systems in thermal power plants.
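
    As a minimal stand-in for identification from historical data (the paper's actual method is not specified in the abstract), a first-order ARX model can be fitted to synthetic operating records by least squares:

    ```python
    # Sketch of data-driven identification as an alternative to mechanism
    # modelling: fit y[k] = a*y[k-1] + b*u[k-1] by least squares. The data
    # are synthetic, not real plant records.
    import numpy as np

    rng = np.random.default_rng(1)
    u = rng.standard_normal(500)                    # historical valve commands
    y = np.zeros(500)
    for k in range(1, 500):                         # 'true' plant: a=0.9, b=0.5
        y[k] = 0.9 * y[k - 1] + 0.5 * u[k - 1] + 0.01 * rng.standard_normal()

    Phi = np.column_stack([y[:-1], u[:-1]])         # regressor matrix
    a_hat, b_hat = np.linalg.lstsq(Phi, y[1:], rcond=None)[0]
    print(f"identified a = {a_hat:.3f}, b = {b_hat:.3f}")
    ```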

  10. Quotation accuracy in medical journal articles-a systematic review and meta-analysis.

    PubMed

    Jergas, Hannah; Baethge, Christopher

    2015-01-01

    Background. Quotations and references are an indispensable element of scientific communication. They should support what authors claim or provide important background information for readers. Studies indicate, however, that quotations not serving their purpose-quotation errors-may be prevalent. Methods. We carried out a systematic review, meta-analysis and meta-regression of quotation errors, taking account of differences between studies in error ascertainment. Results. Out of 559 studies screened, we included 28 in the main analysis, and estimated major, minor and total quotation error rates of 11.9%, 95% CI [8.4, 16.6], 11.5% [8.3, 15.7], and 25.4% [19.5, 32.4], respectively. While heterogeneity was substantial, even the lowest estimate of total quotation errors was considerable (6.7%). Indirect references accounted for less than one sixth of all quotation problems. The findings remained robust in a number of sensitivity and subgroup analyses (including risk-of-bias analysis) and in meta-regression. There was no indication of publication bias. Conclusions. Readers of medical journal articles should be aware of the fact that quotation errors are common. Measures against quotation errors include spot checks by editors and reviewers, correct placement of citations in the text, and declarations by authors that they have checked cited material. Future research should elucidate if and to what degree quotation errors are detrimental to scientific progress.

  11. Outcomes of a Failure Mode and Effects Analysis for medication errors in pediatric anesthesia.

    PubMed

    Martin, Lizabeth D; Grigg, Eliot B; Verma, Shilpa; Latham, Gregory J; Rampersad, Sally E; Martin, Lynn D

    2017-06-01

    The Institute of Medicine has called for the development of strategies to prevent medication errors, which are one important cause of preventable harm. Although the field of anesthesiology is considered a leader in patient safety, recent data suggest high medication error rates in anesthesia practice. Unfortunately, few error prevention strategies for anesthesia providers have been implemented. Using Toyota Production System quality improvement methodology, a multidisciplinary team observed 133 h of medication practice in the operating room at a tertiary care freestanding children's hospital. A failure mode and effects analysis was conducted to systematically deconstruct and evaluate each medication handling process step and score possible failure modes to quantify areas of risk. A bundle of five targeted countermeasures was identified and implemented over 12 months. Improvements in syringe labeling (73 to 96%), standardization of medication organization in the anesthesia workspace (0 to 100%), and two-provider infusion checks (23 to 59%) were observed. Medication error reporting improved during the project and was subsequently maintained. After the intervention, the median medication error rate decreased from 1.56 to 0.95 per 1000 anesthetics. The frequency of medication error harm events reaching the patient also decreased. Systematic evaluation and standardization of medication handling processes by anesthesia providers in the operating room can decrease medication errors and improve patient safety. © 2017 John Wiley & Sons Ltd.

  12. De-biasing the dynamic mode decomposition for applied Koopman spectral analysis of noisy datasets

    NASA Astrophysics Data System (ADS)

    Hemati, Maziar S.; Rowley, Clarence W.; Deem, Eric A.; Cattafesta, Louis N.

    2017-08-01

    The dynamic mode decomposition (DMD)—a popular method for performing data-driven Koopman spectral analysis—has gained increased popularity for extracting dynamically meaningful spatiotemporal descriptions of fluid flows from snapshot measurements. Oftentimes, DMD descriptions can be used for predictive purposes as well, which enables informed decision-making based on DMD model forecasts. Despite its widespread use and utility, DMD can fail to yield accurate dynamical descriptions when the measured snapshot data are imprecise due to, e.g., sensor noise. Here, we express DMD as a two-stage algorithm in order to isolate a source of systematic error. We show that DMD's first stage, a subspace projection step, systematically introduces bias errors by processing snapshots asymmetrically. To remove this systematic error, we propose utilizing an augmented snapshot matrix in a subspace projection step, as in problems of total least-squares, in order to account for the error present in all snapshots. The resulting unbiased and noise-aware total DMD (TDMD) formulation reduces to standard DMD in the absence of snapshot errors, while the two-stage perspective generalizes the de-biasing framework to other related methods as well. TDMD's performance is demonstrated in numerical and experimental fluids examples. In particular, in the analysis of time-resolved particle image velocimetry data for a separated flow, TDMD outperforms standard DMD by providing dynamical interpretations that are consistent with alternative analysis techniques. Further, TDMD extracts modes that reveal detailed spatial structures missed by standard DMD.
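
    A compact sketch of the de-biasing step: project the snapshot pairs onto the leading right-singular subspace of the augmented matrix [X; Y], total-least-squares style, before forming the DMD operator. The data are synthetic and the rank choice is illustrative.

    ```python
    # Sketch of TDMD-style de-biasing on synthetic noisy snapshot pairs.
    import numpy as np

    rng = np.random.default_rng(2)
    n, m, r = 8, 400, 8
    A_true = 0.9 * np.linalg.qr(rng.standard_normal((n, n)))[0]  # all |eig| = 0.9
    X = rng.standard_normal((n, m))
    Xn = X + 0.3 * rng.standard_normal((n, m))           # noise on 'input' snapshots
    Yn = A_true @ X + 0.3 * rng.standard_normal((n, m))  # noise on 'output' snapshots

    A_dmd = Yn @ np.linalg.pinv(Xn)                      # standard DMD (biased)

    # De-biasing: project both snapshot sets onto the leading right-singular
    # subspace of the augmented matrix [Xn; Yn] before the regression
    Vh = np.linalg.svd(np.vstack([Xn, Yn]), full_matrices=False)[2]
    P = Vh[:r].T @ Vh[:r]
    A_tdmd = (Yn @ P) @ np.linalg.pinv(Xn @ P)

    for name, A in (("DMD ", A_dmd), ("TDMD", A_tdmd)):
        mags = np.abs(np.linalg.eigvals(A))
        print(name, "mean |eigenvalue| =", mags.mean())  # true value is 0.9
    # Standard DMD eigenvalues are systematically shrunk by the input noise;
    # the TDMD estimate typically sits much closer to 0.9.
    ```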

  13. Specificity of reliable change models and review of the within-subjects standard deviation as an error term.

    PubMed

    Hinton-Bayre, Anton D

    2011-02-01

    There is an ongoing debate over the preferred method(s) for determining the reliable change (RC) in individual scores over time. In the present paper, specificity comparisons of several classic and contemporary RC models were made using a real data set. This included a more detailed review of a new RC model recently proposed in this journal, that used the within-subjects standard deviation (WSD) as the error term. It was suggested that the RC(WSD) was more sensitive to change and theoretically superior. The current paper demonstrated that even in the presence of mean practice effects, false-positive rates were comparable across models when reliability was good and initial and retest variances were equivalent. However, when variances differed, discrepancies in classification across models became evident. Notably, the RC using the WSD provided unacceptably high false-positive rates in this setting. It was considered that the WSD was never intended for measuring change in this manner. The WSD actually combines systematic and error variance. The systematic variance comes from measurable between-treatment differences, commonly referred to as practice effect. It was further demonstrated that removal of the systematic variance and appropriate modification of the residual error term for the purpose of testing individual change yielded an error term already published and criticized in the literature. A consensus on the RC approach is needed. To that end, further comparison of models under varied conditions is encouraged.

  14. A proposed method to investigate reliability throughout a questionnaire.

    PubMed

    Wentzel-Larsen, Tore; Norekvål, Tone M; Ulvik, Bjørg; Nygård, Ottar; Pripp, Are H

    2011-10-05

    Questionnaires are used extensively in medical and health care research and depend on validity and reliability. However, participants may differ in interest and awareness throughout long questionnaires, which can affect the reliability of their answers. A method is proposed for "screening" for systematic change in random error, which could reveal changed reliability of answers. A simulation study was conducted to explore whether systematic change in reliability, expressed as changed random error, could be assessed using unsupervised classification of subjects by cluster analysis (CA) and estimation of the intraclass correlation coefficient (ICC). The method was also applied to a clinical dataset from 753 cardiac patients using the Jalowiec Coping Scale. The simulation study showed a relationship between the systematic change in random error throughout a questionnaire and the slope between the estimated ICC for subjects classified by CA and successive items in a questionnaire. This slope was proposed as an awareness measure for assessing whether respondents provide only random answers or ones based on substantial cognitive effort. Scales from different factor structures of the Jalowiec Coping Scale had different effects on this awareness measure. Even though the assumptions in the simulation study might be limited compared to real datasets, the approach is promising for assessing systematic change in reliability throughout long questionnaires. Results from the clinical dataset indicated that the awareness measure differed between scales.

  15. Global Warming Estimation from MSU: Correction for Drift and Calibration Errors

    NASA Technical Reports Server (NTRS)

    Prabhakara, C.; Iacovazzi, R., Jr.; Yoo, J.-M.; Einaudi, Franco (Technical Monitor)

    2000-01-01

    Microwave Sounding Unit (MSU) radiometer observations in Ch 2 (53.74 GHz), made in the nadir direction from sequential, sun-synchronous, polar-orbiting NOAA morning satellites (NOAA 6, 10 and 12, with about 7am/7pm orbital geometry) and afternoon satellites (NOAA 7, 9, 11 and 14, with about 2am/2pm orbital geometry), are analyzed in this study to derive the global temperature trend from 1980 to 1998. In order to remove the discontinuities between the data of successive satellites and to obtain a continuous time series, we first used the shortest possible time record of each satellite. In this way we get a preliminary estimate of the global temperature trend of 0.21 K/decade. However, this estimate is affected by systematic time-dependent errors. One such error is the instrument calibration error, which can be inferred whenever there are overlapping measurements made by two satellites over an extended period of time. From the available successive satellite data we have therefore taken the longest possible time record of each satellite to form the time series during the period 1980 to 1998 and to infer this error; accounting for it decreases the global temperature trend by about 0.07 K/decade. In addition, there are systematic time-dependent errors in the data that are introduced by the drift in the satellite orbital geometry; these arise from the diurnal cycle in temperature and from the drift-related change in the calibration of the MSU. In order to analyze the nature of these drift-related errors, the multi-satellite Ch 2 data set is partitioned into am and pm subsets to create two independent time series. The error can be assessed in the am and pm data of Ch 2 on land and can be eliminated; observations made in MSU Ch 1 (50.3 GHz) support this approach. The error is obvious only in the difference between the pm and am observations of Ch 2 over the ocean. We have followed two different paths to assess the impact of the errors on the global temperature trend: in one path the entire error is placed in the am data, while in the other it is placed in the pm data. The global temperature trend is increased or decreased by about 0.03 K/decade depending upon this placement. Taking into account all random and systematic errors, our analysis of MSU observations leads us to conclude that a conservative estimate of the global warming is 0.11 ± 0.04 K/decade during 1980 to 1998.

  16. Moisture Forecast Bias Correction in GEOS DAS

    NASA Technical Reports Server (NTRS)

    Dee, D.

    1999-01-01

    Data assimilation methods rely on numerous assumptions about the errors involved in measuring and forecasting atmospheric fields. One of the more disturbing of these is that short-term model forecasts are assumed to be unbiased. In the case of atmospheric moisture, for example, observational evidence shows that the systematic component of errors in forecasts and analyses is often of the same order of magnitude as the random component. We have implemented a sequential algorithm for estimating forecast moisture bias from rawinsonde data in the Goddard Earth Observing System Data Assimilation System (GEOS DAS). The algorithm is designed to remove the systematic component of analysis errors and can be easily incorporated into an existing statistical data assimilation system. We will present results of initial experiments that show a significant reduction of bias in the GEOS DAS moisture analyses.
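
    A minimal sketch of a sequential bias estimator of this general kind (the gain and numbers are invented, not the GEOS DAS implementation): the running bias estimate is nudged by each observed-minus-forecast residual and can then be removed from the forecast before analysis.

    ```python
    # Sketch of a sequential forecast-bias estimator; gain and noise values
    # are hypothetical illustration numbers.
    import numpy as np

    rng = np.random.default_rng(3)
    true_bias, gamma = 0.8, 0.1         # moisture bias (g/kg); update gain (assumed)
    b = 0.0                             # running bias estimate
    for k in range(200):
        omf = true_bias + rng.normal(0.0, 0.5)   # observed-minus-forecast residual
        b += gamma * (omf - b)                   # sequential bias update
    print(f"estimated forecast bias: {b:.2f} (true {true_bias})")
    ```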

  17. Is the four-day rotation of Venus illusory? [includes systematic error in radial velocities of solar lines reflected from Venus]

    NASA Technical Reports Server (NTRS)

    Young, A. T.

    1974-01-01

    An overlooked systematic error exists in the apparent radial velocities of solar lines reflected from regions of Venus near the terminator, owing to a combination of the finite angular size of the Sun and its large (2 km/sec) equatorial velocity of rotation. This error produces an apparent, but fictitious, retrograde component of planetary rotation, typically on the order of 40 meters/sec. Spectroscopic, photometric, and radiometric evidence against a 4-day atmospheric rotation is also reviewed. The bulk of the somewhat contradictory evidence seems to favor slow motions, on the order of 5 m/sec, in the atmosphere of Venus; the 4-day rotation may be due to a traveling wave-like disturbance, not bulk motions, driven by the UV albedo differences.

  18. Total error shift patterns for daily CT on rails image-guided radiotherapy to the prostate bed

    PubMed Central

    2011-01-01

    Background To evaluate the daily total error shift patterns on post-prostatectomy patients undergoing image guided radiotherapy (IGRT) with a diagnostic quality computer tomography (CT) on rails system. Methods A total of 17 consecutive post-prostatectomy patients receiving adjuvant or salvage IMRT using CT-on-rails IGRT were analyzed. The prostate bed's daily total error shifts were evaluated for a total of 661 CT scans. Results In the right-left, cranial-caudal, and posterior-anterior directions, 11.5%, 9.2%, and 6.5% of the 661 scans required no position adjustments; 75.3%, 66.1%, and 56.8% required a shift of 1 - 5 mm; 11.5%, 20.9%, and 31.2% required a shift of 6 - 10 mm; and 1.7%, 3.8%, and 5.5% required a shift of more than 10 mm, respectively. There was evidence of correlation between the x and y, x and z, and y and z axes in 3, 3, and 3 of 17 patients, respectively. Univariate (ANOVA) analysis showed that the total error pattern was random in the x, y, and z axis for 10, 5, and 2 of 17 patients, respectively, and systematic for the rest. Multivariate (MANOVA) analysis showed that the (x,y), (x,z), (y,z), and (x, y, z) total error pattern was random in 5, 1, 1, and 1 of 17 patients, respectively, and systematic for the rest. Conclusions The overall daily total error shift pattern for these 17 patients simulated with an empty bladder, and treated with CT on rails IGRT was predominantly systematic. Despite this, the temporal vector trends showed complex behaviors and unpredictable changes in magnitude and direction. These findings highlight the importance of using daily IGRT in post-prostatectomy patients. PMID:22024279

  19. Analyzing false positives of four questions in the Force Concept Inventory

    NASA Astrophysics Data System (ADS)

    Yasuda, Jun-ichiro; Mae, Naohiro; Hull, Michael M.; Taniguchi, Masa-aki

    2018-06-01

    In this study, we analyze the systematic error from false positives of the Force Concept Inventory (FCI). We compare the systematic errors of question 6 (Q.6), Q.7, and Q.16, for which clearly erroneous reasoning has been found, with Q.5, for which clearly erroneous reasoning has not been found. We determine whether or not a correct response to a given FCI question is a false positive using subquestions. In addition to the 30 original questions, subquestions were introduced for Q.5, Q.6, Q.7, and Q.16. This modified version of the FCI was administered to 1145 university students in Japan from 2015 to 2017. In this paper, we discuss our finding that the systematic errors of Q.6, Q.7, and Q.16 are much larger than that of Q.5 for students with mid-level FCI scores. Furthermore, we find that, averaged over the data sample, the sum of the false positives from Q.5, Q.6, Q.7, and Q.16 is about 10% of the FCI score of a mid-level student.

  20. Precision Møller Polarimetry

    NASA Astrophysics Data System (ADS)

    Henry, William; Jefferson Lab Hall A Collaboration

    2017-09-01

    Jefferson Lab's cutting-edge parity-violating electron scattering program has increasingly stringent requirements for systematic errors. Beam polarimetry is often one of the dominant systematic errors in these experiments. A new Møller polarimeter in Hall A of Jefferson Lab (JLab) was installed in 2015 and has taken first measurements for a polarized scattering experiment. Upcoming parity-violation experiments in Hall A include CREX, PREX-II, MOLLER, and SOLID, with the latter two requiring <0.5% precision on beam polarization measurements. The polarimeter measures the Møller scattering rates of the polarized electron beam incident upon an iron target placed in a saturating magnetic field. The spectrometer consists of four focusing quadrupoles and one momentum-selection dipole. The detector is designed to measure the scattered and knocked-out target electrons in coincidence. Beam polarization is extracted by constructing an asymmetry from the scattering rates when the incident electron spin is parallel and antiparallel to the target electron spin. Initial data will be presented. Sources of systematic errors, which will be discussed, include target magnetization, spectrometer acceptance, the Levchuk effect, and radiative corrections. National Science Foundation.
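
    As a concrete illustration of the final step described above, the sketch below forms the parallel/antiparallel rate asymmetry and divides out the analyzing power and target polarization. All numbers (the rates, the acceptance-averaged analyzing power of -7/9, the 8% foil polarization) are illustrative assumptions, not values from this record.

```python
# Hedged sketch of extracting beam polarization from Moller scattering
# rates; every number below is an illustrative placeholder.

def moller_polarization(n_parallel, n_antiparallel,
                        analyzing_power, target_polarization):
    """Beam polarization from spin-correlated Moller rates.

    The measured asymmetry A = (N+ - N-)/(N+ + N-) satisfies
    A = <Azz> * P_beam * P_target, so P_beam = A / (<Azz> * P_target).
    """
    asym = (n_parallel - n_antiparallel) / (n_parallel + n_antiparallel)
    return asym / (analyzing_power * target_polarization)

p_beam = moller_polarization(
    n_parallel=9.47e5,            # counts, beam spin parallel to target
    n_antiparallel=1.053e6,       # counts, beam spin antiparallel
    analyzing_power=-7 / 9,       # <Azz> at 90 degrees in the CM frame
    target_polarization=0.08,     # saturated iron foil (assumed)
)
print(f"beam polarization ~ {p_beam:.3f}")   # ~0.85 for these toy rates
```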

  1. LANDSAT/coastal processes

    NASA Technical Reports Server (NTRS)

    James, W. P. (Principal Investigator); Hill, J. M.; Bright, J. B.

    1977-01-01

    The author has identified the following significant results. Correlations between the satellite radiance values and water color, Secchi disk visibility, turbidity, and attenuation coefficients were generally good. The residual error was due to several factors, including systematic errors in the remotely sensed data, small time and space variations in the water quality measurements, and errors caused by the experimental design. Satellite radiance values were closely correlated with the optical properties of the water.

  2. Drought Persistence in Models and Observations

    NASA Astrophysics Data System (ADS)

    Moon, Heewon; Gudmundsson, Lukas; Seneviratne, Sonia

    2017-04-01

    Many regions of the world experienced drought events that persisted for several years and caused substantial economic and ecological impacts during the 20th century. However, it remains unclear whether there are significant trends in the frequency or severity of these prolonged drought events. In particular, an important issue is linked to systematic biases in the representation of persistent drought events in climate models, which impedes analysis related to the detection and attribution of drought trends. This study assesses drought persistence errors in global climate model (GCM) simulations from the 5th phase of the Coupled Model Intercomparison Project (CMIP5) for the period 1901-2010. The model simulations are compared with five gridded observational data products. The analysis focuses on two aspects: the identification of systematic biases in the models and the partitioning of the spread of the drought-persistence error into four possible sources of uncertainty: model uncertainty, observation uncertainty, internal climate variability, and the estimation error of drought persistence. We use monthly and yearly dry-to-dry transition probabilities as estimates of drought persistence, with drought conditions defined as negative precipitation anomalies. For both time scales we find that most model simulations consistently underestimate drought persistence, except in a few regions such as India and eastern South America. Partitioning the spread of the drought-persistence error shows that at the monthly time scale model uncertainty and observation uncertainty are dominant, while the contribution from internal variability plays a minor role in most cases. At the yearly scale, the spread of the drought-persistence error is dominated by the estimation error, indicating that the partitioning is not statistically significant due to the limited number of time steps considered. These findings reveal systematic errors in the representation of drought persistence in current climate models and highlight the main contributors to the uncertainty of the drought-persistence error. Future analyses will focus on investigating the temporal propagation of drought persistence to better understand the causes of the identified errors in the representation of drought persistence in state-of-the-art climate models.
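
    A minimal sketch of the persistence estimator named above, the dry-to-dry transition probability with drought defined as a negative precipitation anomaly; the synthetic precipitation series and the use of the long-term mean as the climatology are assumptions for illustration.

```python
import numpy as np

def dry_to_dry_probability(precip, climatology=None):
    """Estimate P(dry at t+1 | dry at t); 'dry' = below-climatology value."""
    if climatology is None:
        climatology = precip.mean()
    dry = precip < climatology            # boolean drought-state series
    dry_now, dry_next = dry[:-1], dry[1:]
    n_dry = dry_now.sum()
    return np.nan if n_dry == 0 else float((dry_now & dry_next).sum() / n_dry)

rng = np.random.default_rng(0)
monthly_precip = rng.gamma(shape=2.0, scale=50.0, size=1320)  # 110 toy years
print(f"dry-to-dry probability: {dry_to_dry_probability(monthly_precip):.3f}")
```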

  3. SU-G-BRB-11: On the Sensitivity of An EPID-Based 3D Dose Verification System to Detect Delivery Errors in VMAT Treatments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gonzalez, P; Olaciregui-Ruiz, I; Mijnheer, B

    2016-06-15

    Purpose: To investigate the sensitivity of an EPID-based 3D dose verification system to detect delivery errors in VMAT treatments. Methods: For this study 41 EPID-reconstructed 3D in vivo dose distributions of 15 different VMAT plans (H&N, lung, prostate and rectum) were selected. To simulate the effect of delivery errors, their TPS plans were modified by: 1) scaling of the monitor units by ±3% and ±6% and 2) systematic shifting of leaf bank positions by ±1 mm, ±2 mm and ±5 mm. The 3D in vivo dose distributions were then compared to the unmodified and modified treatment plans. To determine the detectability of the various delivery errors, we made use of a receiver operating characteristic (ROC) methodology. True positive and false positive rates were calculated as a function of the γ-parameters γmean, γ1% (near-maximum γ) and the PTV dose parameter ΔD50 (i.e., D50(EPID) − D50(TPS)). The ROC curve is constructed by plotting the true positive rate vs. the false positive rate. The area under the ROC curve (AUC) then serves as a measure of the performance of the EPID dosimetry system in detecting a particular error; an ideal system has AUC = 1. Results: The AUC ranges for the machine output errors and systematic leaf position errors were [0.64 – 0.93] and [0.48 – 0.92], respectively, using γmean; [0.57 – 0.79] and [0.46 – 0.85] using γ1%; and [0.61 – 0.77] and [0.48 – 0.62] using ΔD50. Conclusion: For the verification of VMAT deliveries, the parameter γmean is the best discriminator for the detection of systematic leaf position errors and monitor unit scaling errors. Compared to γmean and γ1%, the parameter ΔD50 performs worse as a discriminator in all cases.
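
    The ROC/AUC construction itself is generic and easy to reproduce: sweep a threshold over a discriminator (here standing in for γmean) computed for error-free and error-induced deliveries, then integrate the curve. The score distributions below are synthetic, not the study's data.

```python
import numpy as np

def roc_auc(scores_negative, scores_positive):
    """AUC by sweeping a decision threshold over all observed scores."""
    thresholds = np.sort(np.concatenate([scores_negative, scores_positive]))
    tpr = [(scores_positive >= t).mean() for t in thresholds]
    fpr = [(scores_negative >= t).mean() for t in thresholds]
    return abs(np.trapz(tpr, fpr))   # both rates fall as the threshold rises

rng = np.random.default_rng(1)
gamma_mean_ok = rng.normal(0.40, 0.08, size=41)    # unmodified plans (toy)
gamma_mean_err = rng.normal(0.55, 0.10, size=41)   # e.g. 2 mm leaf shift (toy)
print(f"AUC = {roc_auc(gamma_mean_ok, gamma_mean_err):.2f}")
```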

  4. Strategic planning to reduce medical errors: Part I--diagnosis.

    PubMed

    Waldman, J Deane; Smith, Howard L

    2012-01-01

    Despite extensive dialogue and a continuing stream of proposed medical practice revisions, medical errors and adverse impacts persist. Connectivity of vital elements is often underestimated or not fully understood. This paper analyzes medical errors from a systems dynamics viewpoint (Part I). Our analysis suggests in Part II that the most fruitful strategies for dissolving medical errors include facilitating physician learning, educating patients about appropriate expectations surrounding treatment regimens, and creating "systematic" patient protections rather than depending on (nonexistent) perfect providers.

  5. Effects of Foveal Ablation on Emmetropization and Form-Deprivation Myopia

    PubMed Central

    Smith, Earl L.; Ramamirtham, Ramkumar; Qiao-Grider, Ying; Hung, Li-Fang; Huang, Juan; Kee, Chea-su; Coats, David; Paysse, Evelyn

    2009-01-01

    Purpose Because of the prominence of central vision in primates, it has generally been assumed that signals from the fovea dominate refractive development. To test this assumption, the authors determined whether an intact fovea was essential for either normal emmetropization or the vision-induced myopic errors produced by form deprivation. Methods In 13 rhesus monkeys at 3 weeks of age, the fovea and most of the perifovea in one eye were ablated by laser photocoagulation. Five of these animals were subsequently allowed unrestricted vision. For the other eight monkeys with foveal ablations, a diffuser lens was secured in front of the treated eyes to produce form deprivation. Refractive development was assessed along the pupillary axis by retinoscopy, keratometry, and A-scan ultrasonography. Control data were obtained from 21 normal monkeys and three infants reared with plano lenses in front of both eyes. Results Foveal ablations had no apparent effect on emmetropization. Refractive errors for both eyes of the treated infants that were allowed unrestricted vision were within the control range throughout the observation period, and there were no systematic interocular differences in refractive error or axial length. In addition, foveal ablation did not prevent form deprivation myopia; six of the eight infants that experienced monocular form deprivation developed myopic axial anisometropias outside the control range. Conclusions Visual signals from the fovea are not essential for normal refractive development or the vision-induced alterations in ocular growth produced by form deprivation. Conversely, the peripheral retina, in isolation, can regulate emmetropizing responses and produce anomalous refractive errors in response to abnormal visual experience. These results indicate that peripheral vision should be considered when assessing the effects of visual experience on refractive development. PMID:17724167

  6. Rational-Emotive Therapy versus Systematic Desensitization: A Comment on Moleski and Tosi.

    ERIC Educational Resources Information Center

    Atkinson, Leslie

    1983-01-01

    Questioned the statistical analyses of the Moleski and Tosi investigation of rational-emotive therapy versus systematic desensitization. Suggested means for lowering the error rate through a more efficient experimental design. Recommended a reanalysis of the original data. (LLL)

  7. ASME B89.4.19 Performance Evaluation Tests and Geometric Misalignments in Laser Trackers

    PubMed Central

    Muralikrishnan, B.; Sawyer, D.; Blackburn, C.; Phillips, S.; Borchardt, B.; Estler, W. T.

    2009-01-01

    Small and unintended offsets, tilts, and eccentricity of the mechanical and optical components in laser trackers introduce systematic errors in the measured spherical coordinates (angles and range readings) and possibly in the calculated lengths of reference artifacts. It is desirable that the tests described in the ASME B89.4.19 Standard [1] be sensitive to these geometric misalignments so that any resulting systematic errors are identified during performance evaluation. In this paper, we present some analysis, using error models and numerical simulation, of the sensitivity of the length measurement system tests and two-face system tests in the B89.4.19 Standard to misalignments in laser trackers. We highlight key attributes of the testing strategy adopted in the Standard and propose new length measurement system tests that demonstrate improved sensitivity to some misalignments. Experimental results with a tracker that is not properly error corrected for the effects of the misalignments validate claims regarding the proposed new length tests. PMID:27504211

  8. Topological analysis of polymeric melts: chain-length effects and fast-converging estimators for entanglement length.

    PubMed

    Hoy, Robert S; Foteinopoulou, Katerina; Kröger, Martin

    2009-09-01

    Primitive path analyses of entanglements are performed over a wide range of chain lengths for both bead-spring and atomistic polyethylene polymer melts. Estimators for the entanglement length N_e that operate on results for a single chain length N are shown to produce systematic O(1/N) errors. The mathematical roots of these errors are identified as (a) treating chain ends as entanglements and (b) neglecting non-Gaussian corrections to chain and primitive path dimensions. The prefactors for the O(1/N) errors may be large; in general their magnitude depends both on the polymer model and on the method used to obtain primitive paths. We propose, derive, and test new estimators which eliminate these systematic errors using information obtainable from the variation in entanglement characteristics with chain length. The new estimators produce accurate results for N_e from marginally entangled systems. Formulas based on direct enumeration of entanglements appear to converge faster and are simpler to apply.

  9. [Comparison study on sampling methods of Oncomelania hupensis snail survey in marshland schistosomiasis epidemic areas in China].

    PubMed

    An, Zhao; Wen-Xin, Zhang; Zhong, Yao; Yu-Kuan, Ma; Qing, Liu; Hou-Lang, Duan; Yi-di, Shang

    2016-06-29

    To optimize and simplify the survey method for Oncomelania hupensis snails in marshland endemic regions of schistosomiasis and to increase the precision, efficiency and economy of the snail survey, a 50 m × 50 m quadrat in the Chayegang marshland near Henghu farm in the Poyang Lake region was selected as the experimental field, and a whole-covered method was adopted to survey the snails. The simple random sampling, systematic sampling and stratified random sampling methods were applied to calculate the minimum sample size, relative sampling error and absolute sampling error. The minimum sample sizes of the simple random sampling, systematic sampling and stratified random sampling methods were 300, 300 and 225, respectively. The relative sampling errors of the three methods were all less than 15%. The absolute sampling errors were 0.2217, 0.3024 and 0.0478, respectively. Spatial stratified sampling with altitude as the stratum variable is an efficient approach of lower cost and higher precision for the snail survey.
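
    The efficiency gain from stratification can be illustrated with a toy version of this comparison: repeatedly sample a synthetic density field with an altitude-like gradient (standing in for the stratum variable) and compare the sampling errors of simple random and stratified random sampling at equal sample size. The field and strata are assumptions, not survey data.

```python
import numpy as np

rng = np.random.default_rng(42)
# Synthetic 50 x 50 grid of snail counts whose density rises with "altitude".
field = rng.poisson(lam=np.linspace(0.5, 3.0, 50)[:, None], size=(50, 50))
true_mean = field.mean()

def simple_random(n):
    return rng.choice(field.ravel(), size=n, replace=False).mean()

def stratified(n, n_strata=5):
    strata = np.array_split(field, n_strata, axis=0)  # stratify by altitude
    per_stratum = n // n_strata
    return np.mean([rng.choice(s.ravel(), size=per_stratum, replace=False).mean()
                    for s in strata])

reps = 2000
err_srs = np.std([simple_random(300) - true_mean for _ in range(reps)])
err_str = np.std([stratified(300) - true_mean for _ in range(reps)])
print(f"sampling error: SRS {err_srs:.4f} vs stratified {err_str:.4f}")
```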

  10. Systematic evaluation of NASA precipitation radar estimates using NOAA/NSSL National Mosaic QPE products

    NASA Astrophysics Data System (ADS)

    Kirstetter, P.; Hong, Y.; Gourley, J. J.; Chen, S.; Flamig, Z.; Zhang, J.; Howard, K.; Petersen, W. A.

    2011-12-01

    Proper characterization of the error structure of TRMM Precipitation Radar (PR) quantitative precipitation estimates (QPE) is needed for their use in TRMM combined products, water budget studies and hydrological modeling applications. Due to the variety of sources of error in spaceborne radar QPE (attenuation of the radar signal, influence of land surface, impact of off-nadir viewing angle, etc.) and the impact of correction algorithms, the problem is addressed by comparison of PR QPEs with reference values derived from ground-based measurements (GV) using NOAA/NSSL's National Mosaic QPE (NMQ) system. An investigation of this subject has been carried out at the PR estimation scale (instantaneous and 5 km) on the basis of a 3-month-long data sample. A significant effort has been carried out to derive a bias-corrected, robust reference rainfall source from NMQ. The GV processing details will be presented along with preliminary results of the PR's error characteristics using contingency table statistics, probability distribution comparisons, scatter plots, semi-variograms, and systematic biases and random errors.

  11. Application of linear regression analysis in accuracy assessment of rolling force calculations

    NASA Astrophysics Data System (ADS)

    Poliak, E. I.; Shim, M. K.; Kim, G. S.; Choo, W. Y.

    1998-10-01

    Efficient operation of the computational models employed in process control systems requires periodic assessment of the accuracy of their predictions. Linear regression is proposed as a tool that allows systematic and random prediction errors to be separated from those related to measurements. A quantitative characteristic of the model's predictive ability is introduced in addition to standard statistical tests for model adequacy. Rolling force calculations are considered as an example application. However, the outlined approach can be used to assess the performance of any computational model.
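
    A minimal sketch of the idea under stated assumptions: regressing measured values on model predictions, the fitted slope and intercept capture the systematic component of the prediction error, while the residual scatter estimates the random component. The data below are synthetic.

```python
import numpy as np

rng = np.random.default_rng(7)
predicted = rng.uniform(8.0, 20.0, size=200)              # model output, MN (toy)
measured = 1.05 * predicted + 0.4 + rng.normal(0, 0.3, size=200)

# Fit measured = slope * predicted + intercept.
slope, intercept = np.polyfit(predicted, measured, deg=1)
residuals = measured - (slope * predicted + intercept)

print(f"systematic: slope = {slope:.3f} (ideal 1), intercept = {intercept:.3f} MN")
print(f"random: residual std = {residuals.std(ddof=2):.3f} MN")
```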

  12. The sensitivity of patient specific IMRT QC to systematic MLC leaf bank offset errors.

    PubMed

    Rangel, Alejandra; Palte, Gesa; Dunscombe, Peter

    2010-07-01

    Patient-specific IMRT QC is performed routinely in many clinics as a safeguard against errors and inaccuracies which may be introduced during the complex planning, data transfer, and delivery phases of this type of treatment. The purpose of this work is to evaluate the feasibility of detecting systematic errors in MLC leaf bank position with patient-specific checks. Nine head and neck (H&N) and 14 prostate IMRT beams were delivered using MLC files containing systematic offsets (+/- 1 mm in two banks, +/- 0.5 mm in two banks, and 1 mm in one bank of leaves). The beams were measured using both MAPCHECK (Sun Nuclear Corp., Melbourne, FL) and the aS1000 electronic portal imaging device (Varian Medical Systems, Palo Alto, CA). Comparisons with calculated fields, without offsets, were made using commonly adopted criteria including absolute dose (AD) difference, relative dose difference, distance to agreement (DTA), and the gamma index. The criteria most sensitive to systematic leaf bank offsets were the 3% AD, 3 mm DTA for MAPCHECK and the gamma index with 2% AD and 2 mm DTA for the EPID. The criterion based on the relative dose measurements was the least sensitive to MLC offsets. More highly modulated fields, i.e., H&N, showed greater changes in the percentage of passing points due to systematic MLC inaccuracy than prostate fields. None of the techniques or criteria tested is sufficiently sensitive, for the population of IMRT fields studied, to detect a systematic MLC offset at a clinically significant level in an individual field. Patient-specific QC cannot, therefore, substitute for routine QC of the MLC itself.
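
    For reference, a bare-bones 1D version of the gamma evaluation underlying such criteria (global normalization, here with 3%/3 mm tolerances) is sketched below; the dose profiles are synthetic and this is not the commercial analysis software used in the study.

```python
import numpy as np

def gamma_1d(dose_ref, dose_eval, positions, dose_tol=0.03, dist_tol=3.0):
    """Per-point global gamma: minimum over evaluated points of the
    combined dose-difference / distance-to-agreement metric."""
    d_max = dose_ref.max()
    gamma = np.empty_like(dose_ref)
    for i, (x_r, d_r) in enumerate(zip(positions, dose_ref)):
        dose_term = ((dose_eval - d_r) / (dose_tol * d_max)) ** 2
        dist_term = ((positions - x_r) / dist_tol) ** 2
        gamma[i] = np.sqrt((dose_term + dist_term).min())
    return gamma

x = np.arange(0.0, 100.0, 1.0)                       # positions in mm
reference = np.exp(-((x - 50) / 18.0) ** 2)          # synthetic profile
evaluated = 1.02 * np.exp(-((x - 51) / 18.0) ** 2)   # 2% output + 1 mm shift
g = gamma_1d(reference, evaluated, x)
print(f"pass rate (gamma <= 1): {(g <= 1).mean():.1%}")
```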

  13. Comparison of different source calculations in two-nucleon channel at large quark mass

    NASA Astrophysics Data System (ADS)

    Yamazaki, Takeshi; Ishikawa, Ken-ichi; Kuramashi, Yoshinobu

    2018-03-01

    We investigate a systematic error coming from higher excited state contributions in the energy shift of a light nucleus in the two-nucleon channel by comparing two different source calculations with the exponential and wall sources. Since it is hard to obtain a clear signal of the wall source correlation function in a plateau region, we employ a large quark mass, corresponding to a pion mass of 0.8 GeV, in quenched QCD. We discuss the systematic error in the spin-triplet channel of the two-nucleon system, and the volume dependence of the energy shift.

  14. Pressure Measurements Using an Airborne Differential Absorption Lidar. Part 1; Analysis of the Systematic Error Sources

    NASA Technical Reports Server (NTRS)

    Flamant, Cyrille N.; Schwemmer, Geary K.; Korb, C. Laurence; Evans, Keith D.; Palm, Stephen P.

    1999-01-01

    Remote airborne measurements of the vertical and horizontal structure of the atmospheric pressure field in the lower troposphere are made with an oxygen differential absorption lidar (DIAL). A detailed analysis of this measurement technique is provided which includes corrections for imprecise knowledge of the detector background level, the oxygen absorption line parameters, and variations in the laser output energy. In addition, we analyze other possible sources of systematic errors, including spectral effects related to aerosol and molecular scattering, interference by rotational Raman scattering, and interference by isotopic oxygen lines.

  15. Production and detection of atomic hexadecapole at Earth's magnetic field.

    PubMed

    Acosta, V M; Auzinsh, M; Gawlik, W; Grisins, P; Higbie, J M; Jackson Kimball, D F; Krzemien, L; Ledbetter, M P; Pustelny, S; Rochester, S M; Yashchuk, V V; Budker, D

    2008-07-21

    Optical magnetometers measure magnetic fields with extremely high precision and without cryogenics. However, at geomagnetic fields, important for applications from landmine removal to archaeology, they suffer from nonlinear Zeeman splitting, leading to systematic dependence on sensor orientation. We present experimental results on a method of eliminating this systematic error, using the hexadecapole atomic polarization moment. In particular, we demonstrate selective production of the atomic hexadecapole moment at Earth's magnetic field and verify its immunity to nonlinear Zeeman splitting. This technique promises to eliminate directional errors in all-optical atomic magnetometers, potentially improving their measurement accuracy by several orders of magnitude.

  16. A disturbance observer-based adaptive control approach for flexure beam nano manipulators.

    PubMed

    Zhang, Yangming; Yan, Peng; Zhang, Zhen

    2016-01-01

    This paper presents a systematic modeling and control methodology for a two-dimensional flexure beam-based servo stage supporting micro/nano manipulations. Compared with conventional mechatronic systems, such systems face major control challenges, including cross-axis coupling and dynamical uncertainties, as well as input saturations, which may have adverse effects on system performance unless effectively eliminated. A novel disturbance observer-based adaptive backstepping-like control approach is developed for high-precision servo manipulation purposes, which effectively accommodates model uncertainties and coupling dynamics. An auxiliary system is also introduced, on top of the proposed control scheme, to compensate for the input saturations. The proposed control architecture is deployed on a custom-designed nano manipulating system featuring a flexure beam structure and voice coil actuators (VCA). Real-time experiments on various manipulation tasks, such as trajectory/contour tracking, demonstrate precision errors of less than 1%. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.

  17. The study design elements employed by researchers in preclinical animal experiments from two research domains and implications for automation of systematic reviews.

    PubMed

    O'Connor, Annette M; Totton, Sarah C; Cullen, Jonah N; Ramezani, Mahmood; Kalivarapu, Vijay; Yuan, Chaohui; Gilbert, Stephen B

    2018-01-01

    Systematic reviews are increasingly using data from preclinical animal experiments in evidence networks. Further, there are ever-increasing efforts to automate aspects of the systematic review process. When assessing systematic bias and unit-of-analysis errors in preclinical experiments, it is critical to understand the study design elements employed by investigators. Such information can also inform prioritization of automation efforts that allow the identification of the most common issues. The aim of this study was to identify the design elements used by investigators in preclinical research in order to inform unique aspects of assessment of bias and error in preclinical research. Using 100 preclinical experiments each related to brain trauma and toxicology, we assessed design elements described by the investigators. We evaluated Methods and Materials sections of reports for descriptions of the following design elements: 1) use of comparison group, 2) unit of allocation of the interventions to study units, 3) arrangement of factors, 4) method of factor allocation to study units, 5) concealment of the factors during allocation and outcome assessment, 6) independence of study units, and 7) nature of factors. Many investigators reported using design elements that suggested the potential for unit-of-analysis errors, i.e., descriptions of repeated measurements of the outcome (94/200) and descriptions of potential for pseudo-replication (99/200). Use of complex factor arrangements was common, with 112 experiments using some form of factorial design (complete, incomplete or split-plot-like). In the toxicology dataset, 20 of the 100 experiments appeared to use a split-plot-like design, although no investigators used this term. The common use of repeated measures and factorial designs means understanding bias and error in preclinical experimental design might require greater expertise than simple parallel designs. Similarly, use of complex factor arrangements creates novel challenges for accurate automation of data extraction and bias and error assessment in preclinical experiments.

  18. Systematic behavioural observation of executive performance after brain injury.

    PubMed

    Lewis, Mark W; Babbage, Duncan R; Leathem, Janet M

    2017-01-01

    To develop an ecologically valid measure of executive functioning (i.e. Planning and Organization, Executive Memory, Initiation, Cognitive Shifting, Impulsivity, Sustained and Directed Attention, Error Detection, Error Correction and Time Management) during a functional chocolate brownie cooking task. In Study 1, the inter-rater reliability of a novel behavioural observation assessment method was assessed with 10 people with traumatic brain injury (TBI). In Study 2, 27 people with TBI and 16 healthy controls completed the functional task along with other measures of executive functioning to assess validity. Intraclass correlation coefficients for six of the nine aspects of executive functioning ranged from .54 to 1.00. Percentage agreements for the remaining aspects ranged from 70% to 90%. Significant and non-significant, moderate, correlations were found between the functional cooking task and standard neuropsychological measures. The healthy control group performed better than the TBI group in six areas (d = 0.56 to 1.23). In this initial trial of a novel assessment method, adequate inter-rater reliability was found. The measure was associated with standard neuropsychological measures, and our healthy control group performed better than the TBI group. The measure appears to be an ecologically valid measure of executive functioning.

  19. Phylogenomics of Lophotrochozoa with Consideration of Systematic Error.

    PubMed

    Kocot, Kevin M; Struck, Torsten H; Merkel, Julia; Waits, Damien S; Todt, Christiane; Brannock, Pamela M; Weese, David A; Cannon, Johanna T; Moroz, Leonid L; Lieb, Bernhard; Halanych, Kenneth M

    2017-03-01

    Phylogenomic studies have improved understanding of deep metazoan phylogeny and show promise for resolving incongruences among analyses based on limited numbers of loci. One region of the animal tree that has been especially difficult to resolve, even with phylogenomic approaches, is relationships within Lophotrochozoa (the animal clade that includes molluscs, annelids, and flatworms among others). Lack of resolution in phylogenomic analyses could be due to insufficient phylogenetic signal, limitations in taxon and/or gene sampling, or systematic error. Here, we investigated why lophotrochozoan phylogeny has been such a difficult question to answer by identifying and reducing sources of systematic error. We supplemented existing data with 32 new transcriptomes spanning the diversity of Lophotrochozoa and constructed a new set of Lophotrochozoa-specific core orthologs. Of these, 638 orthologous groups (OGs) passed strict screening for paralogy using a tree-based approach. In order to reduce possible sources of systematic error, we calculated branch-length heterogeneity, evolutionary rate, percent missing data, compositional bias, and saturation for each OG and analyzed increasingly stricter subsets of only the most stringent (best) OGs for these five variables. Principal component analysis of the values for each factor examined for each OG revealed that compositional heterogeneity and average patristic distance contributed most to the variance observed along the first principal component while branch-length heterogeneity and, to a lesser extent, saturation contributed most to the variance observed along the second. Missing data did not strongly contribute to either. Additional sensitivity analyses examined effects of removing taxa with heterogeneous branch lengths, large amounts of missing data, and compositional heterogeneity. Although our analyses do not unambiguously resolve lophotrochozoan phylogeny, we advance the field by reducing the list of viable hypotheses. Moreover, our systematic approach for dissection of phylogenomic data can be applied to explore sources of incongruence and poor support in any phylogenomic data set. [Annelida; Brachiopoda; Bryozoa; Entoprocta; Mollusca; Nemertea; Phoronida; Platyzoa; Polyzoa; Spiralia; Trochozoa.]. © The Author(s) 2016. Published by Oxford University Press, on behalf of the Society of Systematic Biologists. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  20. Quotation accuracy in medical journal articles—a systematic review and meta-analysis

    PubMed Central

    Jergas, Hannah

    2015-01-01

    Background. Quotations and references are an indispensable element of scientific communication. They should support what authors claim or provide important background information for readers. Studies indicate, however, that quotations not serving their purpose—quotation errors—may be prevalent. Methods. We carried out a systematic review, meta-analysis and meta-regression of quotation errors, taking account of differences between studies in error ascertainment. Results. Out of 559 studies screened we included 28 in the main analysis, and estimated major, minor and total quotation error rates of 11.9%, 95% CI [8.4, 16.6], 11.5% [8.3, 15.7], and 25.4% [19.5, 32.4], respectively. While heterogeneity was substantial, even the lowest estimate of total quotation errors was considerable (6.7%). Indirect references accounted for less than one sixth of all quotation problems. The findings remained robust in a number of sensitivity and subgroup analyses (including risk of bias analysis) and in meta-regression. There was no indication of publication bias. Conclusions. Readers of medical journal articles should be aware of the fact that quotation errors are common. Measures against quotation errors include spot checks by editors and reviewers, correct placement of citations in the text, and declarations by authors that they have checked cited material. Future research should elucidate if and to what degree quotation errors are detrimental to scientific progress. PMID:26528420

  1. Dose assessment in contrast enhanced digital mammography using simple phantoms simulating standard model breasts.

    PubMed

    Bouwman, R W; van Engen, R E; Young, K C; Veldkamp, W J H; Dance, D R

    2015-01-07

    Slabs of polymethyl methacrylate (PMMA) or a combination of PMMA and polyethylene (PE) slabs are used to simulate standard model breasts for the evaluation of the average glandular dose (AGD) in digital mammography (DM) and digital breast tomosynthesis (DBT). These phantoms are optimized for the energy spectra used in DM and DBT, which normally have a lower average energy than those used in contrast enhanced digital mammography (CEDM). In this study we have investigated whether these phantoms can be used for the evaluation of AGD with the high-energy x-ray spectra used in CEDM. For this purpose the calculated values of the incident air kerma for dosimetry phantoms and standard model breasts were compared in a zero-degree projection with the use of an anti-scatter grid. It was found that the difference in incident air kerma compared to standard model breasts ranges from -10% to +4% for PMMA slabs and from 6% to 15% for PMMA-PE slabs. The estimated systematic errors in the measured AGD for both sets of phantoms were considered to be sufficiently small for the evaluation of AGD in quality control procedures for CEDM. However, the systematic error can be substantial if AGD values from different phantoms are compared.

  2. B → K l⁺l⁻ decay form factors from three-flavor lattice QCD

    DOE PAGES

    Bailey, Jon A.

    2016-01-27

    We compute the form factors for the B → K l⁺l⁻ semileptonic decay process in lattice QCD using gauge-field ensembles with 2+1 flavors of sea quark, generated by the MILC Collaboration. The ensembles span lattice spacings from 0.12 to 0.045 fm and have multiple sea-quark masses to help control the chiral extrapolation. The asqtad improved staggered action is used for the light valence and sea quarks, and the clover action with the Fermilab interpretation is used for the heavy b quark. We present results for the form factors f+(q²), f0(q²), and fT(q²), where q² is the momentum transfer, together with a comprehensive examination of systematic errors. Lattice QCD determines the form factors for a limited range of q², and we use the model-independent z expansion to cover the whole kinematically allowed range. We present our final form-factor results as coefficients of the z expansion and the correlations between them, where the errors on the coefficients include statistical and all systematic uncertainties. Lastly, we use this complete description of the form factors to test QCD predictions of the form factors at high and low q².
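
    A hedged sketch of the z expansion itself: the conformal map z(q²; t₀) and a truncated series. The masses and coefficients below are illustrative placeholders (pole factors and the paper's fitted values are omitted).

```python
import numpy as np

M_B, M_K = 5.27966, 0.493677        # GeV, approximate meson masses
t_plus = (M_B + M_K) ** 2           # pair-production threshold

def z_of_q2(q2, t0):
    """Conformal map sending the cut q2 > t_plus onto |z| = 1."""
    a = np.sqrt(t_plus - q2)
    b = np.sqrt(t_plus - t0)
    return (a - b) / (a + b)

def form_factor(q2, coeffs, t0):
    """Truncated series f(q2) = sum_n b_n z^n (pole factors omitted)."""
    z = z_of_q2(q2, t0)
    return sum(b * z ** n for n, b in enumerate(coeffs))

# A common choice of t0 that minimizes |z| over the physical range:
t0 = t_plus * (1 - np.sqrt(1 - (M_B - M_K) ** 2 / t_plus))
print(form_factor(q2=17.0, coeffs=[0.33, -0.85, 0.1], t0=t0))  # toy coefficients
```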

  3. The Relationships Among Perceived Patients' Safety Culture, Intention to Report Errors, and Leader Coaching Behavior of Nurses in Korea: A Pilot Study.

    PubMed

    Ko, YuKyung; Yu, Soyoung

    2017-09-01

    This study was undertaken to explore the correlations among nurses' perceptions of patient safety culture, their intention to report errors, and leader coaching behaviors. The participants (N = 289) were nurses from 5 Korean hospitals with approximately 300 to 500 beds each. Sociodemographic variables, patient safety culture, intention to report errors, and coaching behavior were measured using self-report instruments. Data were analyzed using descriptive statistics, Pearson correlation coefficient, the t test, and the Mann-Whitney U test. Nurses' perceptions of patient safety culture and their intention to report errors showed significant differences between groups of nurses who rated their leaders as high-performing or low-performing coaches. Perceived coaching behavior showed a significant, positive correlation with patient safety culture and intention to report errors, i.e., as nurses' perceptions of coaching behaviors increased, so did their ratings of patient safety culture and error reporting. There is a need in health care settings for coaching by nurse managers to provide quality nursing care and thus improve patient safety. Programs that are systematically developed and implemented to enhance the coaching behaviors of nurse managers are crucial to the improvement of patient safety and nursing care. Moreover, a systematic analysis of the causes of malpractice, as opposed to a focus on the punitive consequences of errors, could increase error reporting and therefore promote a culture in which a higher level of patient safety can thrive.

  4. Estimation of geopotential differences over intercontinental locations using satellite and terrestrial measurements. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Pavlis, Nikolaos K.

    1991-01-01

    An error analysis study was conducted in order to assess the current accuracies and the future anticipated improvements in the estimation of geopotential differences over intercontinental locations. An observation/estimation scheme was proposed and studied, whereby gravity disturbance measurements on the Earth's surface, in caps surrounding the estimation points, are combined with corresponding data in caps directly over these points at the altitude of a low orbiting satellite, for the estimation of the geopotential difference between the terrestrial stations. The mathematical modeling required to relate the primary observables to the parameters to be estimated was studied for the terrestrial data and the data at altitude. Emphasis was placed on the examination of systematic effects and on the corresponding reductions that need to be applied to the measurements to avoid systematic errors. The error estimation for the geopotential differences was performed using both truncation theory and least squares collocation with ring averages, in the case where only observations on the Earth's surface are used. The error analysis indicated that with the currently available global geopotential model OSU89B and with gravity disturbance data in 2 deg caps surrounding the estimation points, the error of the geopotential difference arising from errors in the reference model and the cap data is about 23 kgal cm, for 30 deg station separation.

  5. Influence of ECG measurement accuracy on ECG diagnostic statements.

    PubMed

    Zywietz, C; Celikag, D; Joseph, G

    1996-01-01

    Computer analysis of electrocardiograms (ECGs) provides a large amount of ECG measurement data, which may be used for diagnostic classification and storage in ECG databases. Until now, neither error limits for ECG measurements have been specified nor has their influence on diagnostic statements been systematically investigated. An analytical method is presented to estimate the influence of measurement errors on the accuracy of diagnostic ECG statements. Systematic (offset) errors will usually result in an increase of false positive or false negative statements, since they cause a shift of the working point on the receiver operating characteristic curve. Measurement error dispersion broadens the distribution function of discriminative measurement parameters and therefore usually increases the overlap between discriminative parameters. This results in a flattening of the receiver operating characteristic curve and an increase of false positive and false negative classifications. The method developed has been applied to ECG conduction defect diagnoses using the interval measurement tolerance limits proposed by the International Electrotechnical Commission. These limits appear too large, because more than 30% of false positive atrial conduction defect statements and 10-18% of false intraventricular conduction defect statements could be expected due to tolerated measurement errors. To assure the long-term usability of ECG measurement databases, it is recommended that systems provide their error tolerance limits obtained on a defined test set.
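
    The working-point shift caused by a systematic offset can be made concrete with a toy model: assume Gaussian interval-duration distributions for healthy and diseased populations and a fixed decision threshold, then watch the false positive and false negative rates move as an offset error is added. All numbers are assumed for illustration, not IEC tolerance values.

```python
import numpy as np
from scipy.stats import norm

threshold = 120.0                          # ms, hypothetical decision limit
healthy = norm(loc=100.0, scale=10.0)      # interval duration, healthy (toy)
diseased = norm(loc=140.0, scale=15.0)     # interval duration, defect (toy)

# Measured value = true value + offset, so "measured > threshold" is
# equivalent to "true > threshold - offset".
for offset in (0.0, 5.0, 10.0):            # systematic measurement offset, ms
    fpr = healthy.sf(threshold - offset)       # healthy flagged as abnormal
    fnr = diseased.cdf(threshold - offset)     # diseased flagged as normal
    print(f"offset {offset:4.1f} ms: FPR = {fpr:.1%}, FNR = {fnr:.1%}")
```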

  6. Evaluation of robotic training forces that either enhance or reduce error in chronic hemiparetic stroke survivors.

    PubMed

    Patton, James L; Stoykov, Mary Ellen; Kovic, Mark; Mussa-Ivaldi, Ferdinando A

    2006-01-01

    This investigation is one in a series of studies that address the possibility of stroke rehabilitation using robotic devices to facilitate "adaptive training." Healthy subjects, after training in the presence of systematically applied forces, typically exhibit a predictable "after-effect." A critical question is whether this adaptive characteristic is preserved following stroke so that it might be exploited for restoring function. Another important question is whether subjects benefit more from training forces that enhance their errors than from forces that reduce their errors. We exposed hemiparetic stroke survivors and healthy age-matched controls to a pattern of disturbing forces that have been found by previous studies to induce a dramatic adaptation in healthy individuals. Eighteen stroke survivors made 834 movements in the presence of a robot-generated force field that pushed their hands with a force proportional to hand speed and perpendicular to the direction of motion--either clockwise or counterclockwise. We found that subjects could adapt, as evidenced by significant after-effects. After-effects were not correlated with the clinical scores that we used for measuring motor impairment. Further examination revealed that significant improvements occurred only when the training forces magnified the original errors, and not when the training forces reduced the errors or were zero. Within this constrained experimental task we found error-enhancing therapy (as opposed to guiding the limb closer to the correct path) to be more effective than therapy that assisted the subject.

  7. Running coupling constant from lattice studies of gluon and ghost propagators

    NASA Astrophysics Data System (ADS)

    Cucchieri, A.; Mendes, T.

    2004-12-01

    We present a numerical study of the running coupling constant in four-dimensional pure-SU(2) lattice gauge theory. The running coupling is evaluated by fitting data for the gluon and ghost propagators in minimal Landau gauge. Following Refs. [1, 2], the fitting formulae are obtained by a simultaneous integration of the β function and of a function coinciding with the anomalous dimension of the propagator in the momentum-subtraction scheme. We consider these formulae at three and four loops. The fitting method works well, especially for the ghost case, for which statistical errors and hyper-cubic effects are very small. Our present result is Λ_MS-bar = 200 (+60/−40) MeV, where the error is purely systematic. We are currently extending this analysis to five loops in order to reduce this systematic error.

  8. Simplified model of pinhole imaging for quantifying systematic errors in image shape

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Benedetti, Laura Robin; Izumi, N.; Khan, S. F.

    In this paper, we examine systematic errors in x-ray imaging by pinhole optics for quantifying uncertainties in the measurement of convergence and asymmetry in inertial confinement fusion implosions. We present a quantitative model for the total resolution of a pinhole optic with an imaging detector that more effectively describes the effect of diffraction than models that treat geometry and diffraction as independent. This model can be used to predict loss of shape detail due to imaging across the transition from geometric to diffractive optics. We find that the fractional error in observable shapes is proportional to the total resolution element we present and inversely proportional to the length scale of the asymmetry being observed. Finally, we have experimentally validated our results by imaging a single object with differently sized pinholes and with different magnifications.

  9. Simplified model of pinhole imaging for quantifying systematic errors in image shape

    DOE PAGES

    Benedetti, Laura Robin; Izumi, N.; Khan, S. F.; ...

    2017-10-30

    In this paper, we examine systematic errors in x-ray imaging by pinhole optics for quantifying uncertainties in the measurement of convergence and asymmetry in inertial confinement fusion implosions. We present a quantitative model for the total resolution of a pinhole optic with an imaging detector that more effectively describes the effect of diffraction than models that treat geometry and diffraction as independent. This model can be used to predict loss of shape detail due to imaging across the transition from geometric to diffractive optics. We find that the fractional error in observable shapes is proportional to the total resolution element we present and inversely proportional to the length scale of the asymmetry being observed. Finally, we have experimentally validated our results by imaging a single object with differently sized pinholes and with different magnifications.

  10. Single-lens 3D digital image correlation system based on a bilateral telecentric lens and a bi-prism: Systematic error analysis and correction

    NASA Astrophysics Data System (ADS)

    Wu, Lifu; Zhu, Jianguo; Xie, Huimin; Zhou, Mengmeng

    2016-12-01

    Recently, we proposed a single-lens 3D digital image correlation (3D DIC) method and established a measurement system on the basis of a bilateral telecentric lens (BTL) and a bi-prism. This system can retrieve the 3D morphology of a target and measure its deformation using a single BTL with relatively high accuracy. Nevertheless, the system still suffers from systematic errors caused by manufacturing deficiencies of the bi-prism and distortion of the BTL. In this study, in-depth evaluations of these errors and their effects on the measurement results are performed experimentally. The bi-prism deficiency and the BTL distortion are characterized by two in-plane rotation angles and several distortion coefficients, respectively. These values are obtained from a calibration process using a chessboard placed into the field of view of the system; this process is conducted after the measurement of the tested specimen. A modified mathematical model is proposed, which takes these systematic errors into account and corrects them during 3D reconstruction. Experiments on retrieving the 3D positions of the chessboard grid corners and the morphology of a ceramic plate specimen are performed. The results of the experiments reveal that ignoring the bi-prism deficiency will introduce an attitude error into the retrieved morphology, and the BTL distortion can lead to pseudo out-of-plane deformation. Correcting these problems can further improve the measurement accuracy of the bi-prism-based single-lens 3D DIC system.

  11. Coarse-graining errors and numerical optimization using a relative entropy framework

    NASA Astrophysics Data System (ADS)

    Chaimovich, Aviel; Shell, M. Scott

    2011-03-01

    The ability to generate accurate coarse-grained models from reference fully atomic (or otherwise "first-principles") ones has become an important component in modeling the behavior of complex molecular systems with large length and time scales. We recently proposed a novel coarse-graining approach based upon variational minimization of a configuration-space functional called the relative entropy, Srel, that measures the information lost upon coarse-graining. Here, we develop a broad theoretical framework for this methodology and numerical strategies for its use in practical coarse-graining settings. In particular, we show that the relative entropy offers tight control over the errors due to coarse-graining in arbitrary microscopic properties, and suggests a systematic approach to reducing them. We also describe fundamental connections between this optimization methodology and other coarse-graining strategies like inverse Monte Carlo, force matching, energy matching, and variational mean-field theory. We suggest several new numerical approaches to its minimization that provide new coarse-graining strategies. Finally, we demonstrate the application of these theoretical considerations and algorithms to a simple, instructive system and characterize convergence and errors within the relative entropy framework.
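
    A toy illustration of the variational idea, under strong simplifying assumptions (a 1D discrete distribution and a single coarse-grained parameter): compute S_rel between a reference distribution and a family of model distributions and select the parameter that minimizes the information lost upon coarse-graining.

```python
import numpy as np

def relative_entropy(p_ref, p_model):
    """S_rel = sum p_ref * ln(p_ref / p_model) over states with p_ref > 0."""
    mask = p_ref > 0
    return np.sum(p_ref[mask] * np.log(p_ref[mask] / p_model[mask]))

x = np.linspace(-3, 3, 61)
p_ref = np.exp(-0.5 * (x / 0.8) ** 2)      # stand-in for the "atomistic" data
p_ref /= p_ref.sum()

# Scan a single coarse-grained parameter (a Gaussian width) and keep the
# value with the smallest information loss; the minimum sits at 0.8.
widths = np.linspace(0.4, 1.6, 121)
s_rel = []
for w in widths:
    p_cg = np.exp(-0.5 * (x / w) ** 2)
    p_cg /= p_cg.sum()
    s_rel.append(relative_entropy(p_ref, p_cg))
print(f"optimal coarse-grained width = {widths[np.argmin(s_rel)]:.2f}")
```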

  12. Mitigation of Angle Tracking Errors Due to Color Dependent Centroid Shifts in SIM-Lite

    NASA Technical Reports Server (NTRS)

    Nemati, Bijan; An, Xin; Goullioud, Renaud; Shao, Michael; Shen, Tsae-Pyng; Wehmeier, Udo J.; Weilert, Mark A.; Wang, Xu; Werne, Thomas A.; Wu, Janet P.

    2010-01-01

    The SIM-Lite astrometric interferometer will search for Earth-size planets in the habitable zones of nearby stars. In this search the interferometer will monitor the astrometric position of candidate stars relative to nearby reference stars over the course of a 5 year mission. The elemental measurement is the angle between a target star and a reference star. This is a two-step process, in which the interferometer will each time need to use its controllable optics to align the starlight in the two arms with each other and with the metrology beams. The sensor for this alignment is an angle tracking CCD camera. Various constraints in the design of the camera subject it to systematic alignment errors when observing a star of one spectrum compared with a star of a different spectrum. This effect is called a Color Dependent Centroid Shift (CDCS) and has been studied extensively with SIM-Lite's SCDU testbed. Here we describe results from the simulation and testing of this error in the SCDU testbed, as well as effective ways that it can be reduced to acceptable levels.

  13. Virtual reality simulator training of laparoscopic cholecystectomies - a systematic review.

    PubMed

    Ikonen, T S; Antikainen, T; Silvennoinen, M; Isojärvi, J; Mäkinen, E; Scheinin, T M

    2012-01-01

    Simulators are widely used in occupations where practice in authentic environments would involve high human or economic risks. Surgical procedures can be simulated by increasingly complex and expensive techniques. This review gives an update on computer-based virtual reality (VR) simulators in training for laparoscopic cholecystectomies. From leading databases (Medline, Cochrane, Embase), randomised or controlled trials and the latest systematic reviews were systematically searched and reviewed. Twelve randomised trials involving simulators were identified and analysed, as well as four controlled studies. Furthermore, seven studies comparing black boxes and simulators were included. The results indicated any kind of simulator training (black box, VR) to be beneficial at novice level. After VR training, novice surgeons seemed to be able to perform their first live cholecystectomies with fewer errors, and in one trial the positive effect remained during the first ten cholecystectomies. No clinical follow-up data were found. Optimal learning requires skills training to be conducted as part of a systematic training program. No data on the cost-benefit of simulators were found; the price of a VR simulator begins at EUR 60 000. Theoretical background to learning and limited research data support the use of simulators in the early phases of surgical training. The cost of buying and using simulators is justified if the risk of injuries and complications to patients can be reduced. Developing surgical skills requires repeated training. In order to achieve optimal learning, a validated training program is needed.

  14. Correction of energy-dependent systematic errors in dual-energy X-ray CT using a basis material coefficients transformation method

    NASA Astrophysics Data System (ADS)

    Goh, K. L.; Liew, S. C.; Hasegawa, B. H.

    1997-12-01

    Computer simulation results from our previous studies showed that energy-dependent systematic errors exist in the values of attenuation coefficients synthesized using the basis material decomposition technique with acrylic and aluminum as the basis materials, especially when a high atomic number element (e.g., iodine from radiographic contrast media) was present in the body. The errors were reduced when a basis set was chosen from materials mimicking those found in the phantom. In the present study, we employed a basis material coefficients transformation method to correct for the energy-dependent systematic errors. In this method, the basis material coefficients are first reconstructed using the conventional basis materials (acrylic and aluminum) as the calibration basis set. The coefficients are then numerically transformed to those for a more desirable set of materials. The transformation is done at the energies of the low and high energy windows of the X-ray spectrum. With this correction method, using acrylic and an iodine-water mixture as our desired basis set, computer simulation results showed that an accuracy of better than 2% can be achieved even when iodine is present in the body at a concentration as high as 10% by mass. Simulation work has also been carried out on a more inhomogeneous 2D thorax phantom derived from the 3D MCAT phantom. Results on the accuracy of quantitation are presented here.
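
    A minimal sketch of the coefficient transformation under stated assumptions: matching total attenuation in the two energy windows turns the basis change into a 2 × 2 linear solve. The mass-attenuation values below are placeholders, not tabulated data.

```python
import numpy as np

# Attenuation of the calibration basis at the low/high energy windows
# (illustrative values only, units arbitrary but consistent).
mu_acrylic = np.array([0.22, 0.17])
mu_aluminum = np.array([0.75, 0.45])
# Desired basis: water and a 10% iodine-water mixture (again illustrative).
mu_water = np.array([0.25, 0.18])
mu_iodine_mix = np.array([1.90, 0.80])

# Attenuation matching at both energies:
#   a1*mu_acr + a2*mu_al = c1*mu_wat + c2*mu_mix   (at E_low and E_high)
A_old = np.column_stack([mu_acrylic, mu_aluminum])   # 2 x 2
A_new = np.column_stack([mu_water, mu_iodine_mix])   # 2 x 2
T = np.linalg.solve(A_new, A_old)                    # (c1, c2) = T @ (a1, a2)

a_old = np.array([0.9, 0.1])       # reconstructed acrylic/Al coefficients (toy)
c_new = T @ a_old
print(f"water-equivalent: {c_new[0]:.3f}, iodine-mix: {c_new[1]:.3f}")
```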

  15. Parametric decadal climate forecast recalibration (DeFoReSt 1.0)

    NASA Astrophysics Data System (ADS)

    Pasternack, Alexander; Bhend, Jonas; Liniger, Mark A.; Rust, Henning W.; Müller, Wolfgang A.; Ulbrich, Uwe

    2018-01-01

    Near-term climate predictions such as decadal climate forecasts are increasingly being used to guide adaptation measures. For near-term probabilistic predictions to be useful, systematic errors of the forecasting systems have to be corrected. While methods for the calibration of probabilistic forecasts are readily available, these have to be adapted to the specifics of decadal climate forecasts, including their long time horizon, lead-time-dependent systematic errors (drift), and errors in the representation of long-term changes and variability. These features are compounded by small ensemble sizes to describe forecast uncertainty and a relatively short period for which pairs of reforecasts and observations are typically available to estimate calibration parameters. We introduce the Decadal Climate Forecast Recalibration Strategy (DeFoReSt), a parametric approach to recalibrate decadal ensemble forecasts that takes the above specifics into account. DeFoReSt optimizes forecast quality as measured by the continuous ranked probability score (CRPS). Using a toy model to generate synthetic forecast-observation pairs, we demonstrate the positive effect on forecast quality in situations with both pronounced and limited predictability. Finally, we apply DeFoReSt to decadal surface temperature forecasts from the MiKlip prototype system and find consistent, and sometimes considerable, improvements in forecast quality compared with a simple calibration of the lead-time-dependent systematic errors.
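
    The CRPS that such a recalibration optimizes can be estimated directly from an ensemble via the standard kernel form, CRPS = mean_i |x_i - y| - 0.5 * mean_{i,j} |x_i - x_j|. The sketch below applies a toy drift/spread correction to a synthetic ensemble; the correction coefficients are assumptions, not DeFoReSt estimates.

```python
import numpy as np

def crps_ensemble(members, obs):
    """Kernel estimate of the CRPS for one forecast/observation pair."""
    members = np.asarray(members, dtype=float)
    term1 = np.abs(members - obs).mean()
    term2 = 0.5 * np.abs(members[:, None] - members[None, :]).mean()
    return term1 - term2

rng = np.random.default_rng(3)
raw = rng.normal(loc=1.2, scale=0.3, size=10)   # drifted, overconfident (toy)
recal = (raw - 1.0) * 2.0                       # toy drift/spread correction
obs = 0.4
print(f"raw CRPS {crps_ensemble(raw, obs):.3f} vs "
      f"recalibrated {crps_ensemble(recal, obs):.3f}")
```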

  16. Observing Climate with GNSS Radio Occultation: Characterization and Mitigation of Systematic Errors

    NASA Astrophysics Data System (ADS)

    Foelsche, U.; Scherllin-Pirscher, B.; Danzer, J.; Ladstädter, F.; Schwarz, J.; Steiner, A. K.; Kirchengast, G.

    2013-05-01

    GNSS Radio Occultation (RO) data are very well suited for climate applications, since they require no external calibration and only short-term measurement stability over the occultation event duration (1 - 2 min), which is provided by the atomic clocks onboard the GPS satellites. With this "self-calibration", it is possible to combine data from different sensors and different missions without the need for inter-calibration and overlap (which is extremely hard to achieve for conventional satellite data). Using the same retrieval for all datasets, we obtained monthly refractivity and temperature climate records from multiple radio occultation satellites, which are consistent within 0.05 % and 0.05 K in almost all cases (taking global averages over the altitude range 10 km to 30 km). Longer-term average deviations are even smaller. Even though the RO record is still short, its high quality already makes it possible to detect statistically significant temperature trends in the lower stratosphere. The value of RO data for climate monitoring is therefore increasingly recognized by the scientific community, but there is also concern about potential residual systematic errors in RO climatologies, which might be common to data from all satellites. We started to look at different error sources, such as the influence of quality control and high-altitude initialization. We will focus on recent results regarding (apparent) constants used in the retrieval and systematic ionospheric errors. (1) All current RO retrievals use a "classic" set of (measured) constants relating atmospheric microwave refractivity to atmospheric parameters. With the increasing quality of RO climatologies, errors in these constants are no longer negligible. We show how these parameters can be related to more fundamental physical quantities (fundamental constants, the molecular/atomic polarizabilities of the constituents of air, and the dipole moment of water vapor). This approach also allows computing sensitivities to changes in atmospheric composition. We found that changes caused by the anthropogenic CO2 increase are still almost exactly offset by the concurrent O2 decrease. (2) Since the ionospheric correction of RO data is an approximation to first order, we have to consider an ionospheric residual, which can be expected to be larger when the ionization is high (day vs. night, high vs. low solar activity). In climate applications this could lead to a time-dependent bias, which could induce wrong trends in atmospheric parameters at high altitudes. We studied this systematic ionospheric residual by analyzing the bending angle bias characteristics of CHAMP and COSMIC RO data from the years 2001 to 2011. We found that the nighttime bending angle bias stays constant over the whole period of 11 years, while the daytime bias increases from low to high solar activity. As a result, the difference between nighttime and daytime bias increases from -0.05 μrad to -0.4 μrad. This behavior paves the way to correcting the (small) solar-cycle-dependent bias of large ensembles of daytime RO profiles.

  17. Impairments in prehension produced by early postnatal sensory motor cortex activity blockade.

    PubMed

    Martin, J H; Donarummo, L; Hacking, A

    2000-02-01

    This study examined the effects of blocking neural activity in sensory motor cortex during early postnatal development on prehension. We infused muscimol, either unilaterally or bilaterally, into the sensory motor cortex of cats to block activity continuously between postnatal weeks 3-7. After stopping infusion, we trained animals to reach and grasp a cube of meat and tested behavior thereafter. Animals that had not received muscimol infusion (unilateral saline infusion; age-matched) reached for the meat accurately with small end-point errors. They grasped the meat using coordinated digit flexion followed by forearm supination on 82.7% of trials. Performance using either limb did not differ significantly. In animals receiving unilateral muscimol infusion, reaching and grasping using the limb ipsilateral to the infusion were similar to controls. The limb contralateral to infusion showed significant increases in systematic and variable reaching end-point errors, often requiring subsequent corrective movements to contact the meat. Grasping occurred on only 14.8% of trials, replaced on most trials by raking without distal movements. Compensatory adjustments in reach length and angle, to maintain end-point accuracy as movements were started from a more lateral position, were less effective using the contralateral limb than ipsilateral limb. With bilateral inactivations, the form of reaching and grasping impairments was identical to that produced by unilateral inactivation, but the magnitude of the reaching impairments was less. We discuss these results in terms of the differential effects of unilateral and bilateral inactivation on corticospinal tract development. We also investigated the degree to which these prehension impairments after unilateral blockade reflect control by each hemisphere. In animals that had received unilateral blockade between postnatal weeks (PWs) 3 and 7, we silenced on-going activity (after PW 11) during task performance using continuous muscimol infusion. We inactivated the right (previously active) and then the left (previously silenced) sensory motor cortex. Inactivation of the ipsilateral (right) sensory motor cortex produced a further increase in systematic error and less frequent normal grasping. Reinactivation of the contralateral (left) cortex produced larger increases in reaching and grasping impairments than those produced by ipsilateral inactivation. This suggests that the impaired limb receives bilateral sensory motor cortex control but that control by the contralateral (initially silenced) cortex predominates. Our data are consistent with the hypothesis that the normal development of skilled motor behavior requires activity in sensory motor cortex during early postnatal life.

  18. The propagation of inventory-based positional errors into statistical landslide susceptibility models

    NASA Astrophysics Data System (ADS)

    Steger, Stefan; Brenning, Alexander; Bell, Rainer; Glade, Thomas

    2016-12-01

    There is unanimous agreement that a precise spatial representation of past landslide occurrences is a prerequisite to produce high quality statistical landslide susceptibility models. Even though perfectly accurate landslide inventories rarely exist, investigations of how landslide inventory-based errors propagate into subsequent statistical landslide susceptibility models are scarce. The main objective of this research was to systematically examine whether and how inventory-based positional inaccuracies of different magnitudes influence modelled relationships, validation results, variable importance and the visual appearance of landslide susceptibility maps. The study was conducted for a landslide-prone site located in the districts of Amstetten and Waidhofen an der Ybbs, eastern Austria, where an earth-slide point inventory was available. The methodological approach comprised an artificial introduction of inventory-based positional errors into the present landslide data set and an in-depth evaluation of subsequent modelling results. Positional errors were introduced by artificially changing the original landslide position by a mean distance of 5, 10, 20, 50 and 120 m. The resulting differently precise response variables were separately used to train logistic regression models. Odds ratios of predictor variables provided insights into modelled relationships. Cross-validation and spatial cross-validation enabled an assessment of predictive performances and permutation-based variable importance. All analyses were additionally carried out with synthetically generated data sets to further verify the findings under rather controlled conditions. The results revealed that an increasing positional inventory-based error was generally related to increasing distortions of modelling and validation results. However, the findings also highlighted that interdependencies between inventory-based spatial inaccuracies and statistical landslide susceptibility models are complex. The systematic comparisons of 12 models provided valuable evidence that the respective error-propagation was not only determined by the degree of positional inaccuracy inherent in the landslide data, but also by the spatial representation of landslides and the environment, landslide magnitude, the characteristics of the study area, the selected classification method and an interplay of predictors within multiple variable models. Based on the results, we deduced that a direct propagation of minor to moderate inventory-based positional errors into modelling results can be partly counteracted by adapting the modelling design (e.g. generalization of input data, opting for strongly generalizing classifiers). Since positional errors within landslide inventories are common and subsequent modelling and validation results are likely to be distorted, the potential existence of inventory-based positional inaccuracies should always be considered when assessing landslide susceptibility by means of empirical models.
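
    The error-injection experiment can be pictured with a minimal sketch: displace each inventory point by a random bearing and distance, re-extract the predictors at the displaced location, and refit the classifier. The sketch below uses synthetic data and scikit-learn in place of the study's actual inventory, rasters and modelling design; all surfaces, names and magnitudes are illustrative assumptions.

    ```python
    # Synthetic stand-in for the positional-error propagation experiment.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(42)
    n = 500
    xy = rng.uniform(0, 5000, size=(n, 2))                   # point coordinates, m
    slope = 30 * np.exp(-((xy[:, 0] - 2500) / 1500) ** 2)    # synthetic slope "raster"
    curvature = rng.normal(0, 1, n)                          # second (noise) predictor
    y = (rng.uniform(size=n) < 1 / (1 + np.exp(-(0.15 * slope - 4)))).astype(int)

    def perturb(points, mean_dist):
        """Shift each point along a random bearing by an exponential distance."""
        theta = rng.uniform(0, 2 * np.pi, len(points))
        r = rng.exponential(mean_dist, len(points))
        return points + np.c_[r * np.cos(theta), r * np.sin(theta)]

    for d in [0, 5, 10, 20, 50, 120]:                        # mean errors as in the study, m
        xy_err = perturb(xy, d) if d else xy
        slope_err = 30 * np.exp(-((xy_err[:, 0] - 2500) / 1500) ** 2)  # "raster" lookup
        X = np.c_[slope_err, curvature]
        auc = cross_val_score(LogisticRegression(max_iter=1000), X, y,
                              scoring="roc_auc", cv=5).mean()
        print(f"mean positional error {d:>3} m -> CV AUC {auc:.3f}")
    ```

    As in the study, predictive performance degrades as the training positions drift away from the terrain conditions that actually generated the landslides.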

  19. Reliability and Measurement Error of Tensiomyography to Assess Mechanical Muscle Function: A Systematic Review.

    PubMed

    Martín-Rodríguez, Saúl; Loturco, Irineu; Hunter, Angus M; Rodríguez-Ruiz, David; Munguia-Izquierdo, Diego

    2017-12-01

    Martín-Rodríguez, S, Loturco, I, Hunter, AM, Rodríguez-Ruiz, D, and Munguia-Izquierdo, D. Reliability and measurement error of tensiomyography to assess mechanical muscle function: A systematic review. J Strength Cond Res 31(12): 3524-3536, 2017-Interest in studying mechanical skeletal muscle function through tensiomyography (TMG) has increased in recent years. This systematic review aimed to (a) report the reliability and measurement error of all TMG parameters (i.e., maximum radial displacement of the muscle belly [Dm], contraction time [Tc], delay time [Td], half-relaxation time [½ Tr], and sustained contraction time [Ts]) and (b) provide critical reflection on how to perform accurate and appropriate measurements for informing clinicians, exercise professionals, and researchers. A comprehensive literature search was performed of the Pubmed, Scopus, Science Direct, and Cochrane databases up to July 2017. Eight studies were included in this systematic review. Meta-analysis could not be performed because of the low quality of the evidence of some studies evaluated. Overall, the review of the included studies, involving 158 participants, revealed high relative reliability (intraclass correlation coefficient [ICC]) for Dm (0.91-0.99); moderate-to-high ICC for Ts (0.80-0.96), Tc (0.70-0.98), and ½ Tr (0.77-0.93); and low-to-high ICC for Td (0.60-0.98), independently of the evaluated muscles. In addition, absolute reliability (coefficient of variation [CV]) was low for all TMG parameters except ½ Tr (CV > 20%), whereas measurement error indexes were high for this parameter. In conclusion, this study indicates that three of the TMG parameters (Dm, Td, and Tc) are highly reliable, whereas ½ Tr demonstrates insufficient reliability and thus should not be used in future studies.

  20. Economic optimization of operations for hybrid energy systems under variable markets

    DOE PAGES

    Chen, Jen; Garcia, Humberto E.

    2016-05-21

    We propose a hybrid energy system (HES) as an important element to enable increasing penetration of clean energy. Our paper investigates the operations flexibility of HES, and develops a methodology for operations optimization to maximize economic value based on predicted renewable generation and market information. A multi-environment computational platform for performing such operations optimization is also developed. In order to compensate for prediction errors, a control strategy is accordingly designed to operate a standby energy storage element (ESE) to avoid energy imbalance within the HES. The proposed operations optimizer allows systematic control of energy conversion for maximal economic value. Simulation results of two specific HES configurations are included to illustrate the proposed methodology and computational capability. These results demonstrate the economic viability of HES under the proposed operations optimizer, suggesting the diversion of energy to alternative energy outputs while participating in the ancillary service market. Economic advantages of such an operations optimizer and the associated flexible operations are illustrated by comparing the economic performance of flexible operations against that of constant operations. Sensitivity analyses with respect to market variability and prediction error are also performed.

  1. Economic optimization of operations for hybrid energy systems under variable markets

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Jen; Garcia, Humberto E.

    We propose a hybrid energy system (HES) as an important element to enable increasing penetration of clean energy. Our paper investigates the operations flexibility of HES, and develops a methodology for operations optimization to maximize economic value based on predicted renewable generation and market information. A multi-environment computational platform for performing such operations optimization is also developed. In order to compensate for prediction errors, a control strategy is accordingly designed to operate a standby energy storage element (ESE) to avoid energy imbalance within the HES. The proposed operations optimizer allows systematic control of energy conversion for maximal economic value. Simulation results of two specific HES configurations are included to illustrate the proposed methodology and computational capability. These results demonstrate the economic viability of HES under the proposed operations optimizer, suggesting the diversion of energy to alternative energy outputs while participating in the ancillary service market. Economic advantages of such an operations optimizer and the associated flexible operations are illustrated by comparing the economic performance of flexible operations against that of constant operations. Sensitivity analyses with respect to market variability and prediction error are also performed.

  2. A proposed method to investigate reliability throughout a questionnaire

    PubMed Central

    2011-01-01

    Background Questionnaires are used extensively in medical and health care research and depend on validity and reliability. However, participants may differ in interest and awareness throughout long questionnaires, which can affect the reliability of their answers. A method is proposed for "screening" for systematic change in random error, which could assess changed reliability of answers. Methods A simulation study was conducted to explore whether systematic change in reliability, expressed as changed random error, could be assessed using unsupervised classification of subjects by cluster analysis (CA) and estimation of the intraclass correlation coefficient (ICC). The method was also applied to a clinical dataset from 753 cardiac patients using the Jalowiec Coping Scale. Results The simulation study showed a relationship between the systematic change in random error throughout a questionnaire and the slope between the estimated ICC for subjects classified by CA and successive items in a questionnaire. This slope was proposed as an awareness measure, assessing whether respondents provide only random answers or answers based on substantial cognitive effort. Scales from different factor structures of the Jalowiec Coping Scale had different effects on this awareness measure. Conclusions Even though assumptions in the simulation study might be limited compared to real datasets, the approach is promising for assessing systematic change in reliability throughout long questionnaires. Results from a clinical dataset indicated that the awareness measure differed between scales. PMID:21974842

  3. Assessing systematic errors in GOSAT CO2 retrievals by comparing assimilated fields to independent CO2 data

    NASA Astrophysics Data System (ADS)

    Baker, D. F.; Oda, T.; O'Dell, C.; Wunch, D.; Jacobson, A. R.; Yoshida, Y.; Partners, T.

    2012-12-01

    Measurements of column CO2 concentration from space are now being taken at a spatial and temporal density that permits regional CO2 sources and sinks to be estimated. Systematic errors in the satellite retrievals must be minimized for these estimates to be useful, however. CO2 retrievals from the TANSO instrument aboard the GOSAT satellite are compared to similar column retrievals from the Total Carbon Column Observing Network (TCCON) as the primary method of validation; while this is a powerful approach, it can only be done for overflights of 10-20 locations and has not, for example, permitted validation of GOSAT data over the oceans or deserts. Here we present a complementary approach that uses a global atmospheric transport model and flux inversion method to compare different types of CO2 measurements (GOSAT, TCCON, surface in situ, and aircraft) at different locations, at the cost of added transport error. The measurements from any single data type are used in a variational carbon data assimilation method to optimize surface CO2 fluxes (with a CarbonTracker prior), then the corresponding optimized CO2 concentration fields are compared to those data types not inverted, using the appropriate vertical weighting. With this approach, we find that GOSAT column CO2 retrievals from the ACOS project (versions 2.9 and 2.10) contain systematic errors that make the modeled fit to the independent data worse. However, we find that the differences between the GOSAT data and our prior model are correlated with certain physical variables (aerosol amount, surface albedo, correction to total column mass) that are likely driving errors in the retrievals, independent of CO2 concentration. If we correct the GOSAT data using a fit to these variables, then we find the GOSAT data to improve the fit to independent CO2 data, which suggests that the useful information in the measurements outweighs the negative impact of the remaining systematic errors. With this assurance, we compare the flux estimates given by assimilating the ACOS GOSAT retrievals to similar ones given by NIES GOSAT column retrievals, bias-corrected in a similar manner. Finally, we have found systematic differences on the order of half a ppm between column CO2 integrals from 18 TCCON sites and those given by assimilating NOAA in situ data (both surface and aircraft profile) in this approach. We assess how these differences change when switching to a newer version of the TCCON retrieval software.
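
    The bias-correction step described above amounts to regressing the retrieval-minus-model differences on a few retrieval diagnostics and subtracting the fitted component. A hedged sketch with synthetic numbers follows; the predictor names mirror the abstract, but the coefficients and magnitudes are invented.

    ```python
    # Sketch of a regression-based bias correction for satellite retrievals.
    import numpy as np

    rng = np.random.default_rng(3)
    n = 2000
    aerosol = rng.uniform(0, 0.3, n)        # retrieved aerosol amount
    albedo = rng.uniform(0.05, 0.6, n)      # surface albedo
    dmass = rng.normal(0, 1.0, n)           # correction to total column mass
    # Synthetic GOSAT-minus-model differences (ppm) with built-in dependence:
    diff = 2.0 * aerosol - 1.5 * dmass + rng.normal(0, 0.8, n)

    X = np.c_[np.ones(n), aerosol, albedo, dmass]
    coef, *_ = np.linalg.lstsq(X, diff, rcond=None)   # fit the bias model
    corrected = diff - X @ coef                       # residual after bias removal
    print(f"std before {diff.std():.2f} ppm, after {corrected.std():.2f} ppm")
    ```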

  4. Determination of the number of ψ' events at BESIII

    NASA Astrophysics Data System (ADS)

    Ablikim, M.; Achasov, M. N.; Albayrak, O.; Ambrose, D. J.; An, F. F.; An, Q.; Bai, J. Z.; et al. (BESIII Collaboration)

    2013-06-01

    The number of ψ' events accumulated by the BESIII experiment from March 3 through April 14, 2009, is determined by counting inclusive hadronic events. The result is 106.41 × (1.00 ± 0.81%) × 10^6. The uncertainty is dominated by systematic effects; the statistical error is negligible.

  5. Improving Student Results in the Crystal Violet Chemical Kinetics Experiment

    ERIC Educational Resources Information Center

    Kazmierczak, Nathanael; Vander Griend, Douglas A.

    2017-01-01

    Despite widespread use in general chemistry laboratories, the crystal violet chemical kinetics experiment frequently suffers from erroneous student results. Student calculations for the reaction order in hydroxide often contain large asymmetric errors, pointing to the presence of systematic error. Through a combination of "in silico"…

  6. Theory of Test Translation Error

    ERIC Educational Resources Information Center

    Solano-Flores, Guillermo; Backhoff, Eduardo; Contreras-Nino, Luis Angel

    2009-01-01

    In this article, we present a theory of test translation whose intent is to provide the conceptual foundation for effective, systematic work in the process of test translation and test translation review. According to the theory, translation error is multidimensional; it is not simply the consequence of defective translation but an inevitable fact…

  7. Error sources in passive and active microwave satellite soil moisture over Australia

    USDA-ARS?s Scientific Manuscript database

    Development of a long-term climate record of soil moisture (SM) involves combining historic and present satellite-retrieved SM data sets. This in turn requires a consistent characterization and deep understanding of the systematic differences and errors in the individual data sets, which vary due to...

  8. Design, performance, and calculated error of a Faraday cup for absolute beam current measurements of 600-MeV protons

    NASA Technical Reports Server (NTRS)

    Beck, S. M.

    1975-01-01

    A mobile, self-contained Faraday cup system for beam current measurements of nominal 600-MeV protons was designed, constructed, and used at the NASA Space Radiation Effects Laboratory. The cup is of reentrant design with a length of 106.7 cm and an outside diameter of 20.32 cm. The inner diameter is 15.24 cm and the base thickness is 30.48 cm. The primary absorber is commercially available lead hermetically sealed in a 0.32-cm-thick copper jacket. Several possible systematic errors in using the cup are evaluated. The largest source of error arises from high-energy electrons which are ejected from the entrance window and enter the cup. A total systematic error of -0.83 percent is calculated to be the decrease from the true current value. From data obtained in calibrating helium-filled ion chambers with the Faraday cup, the mean energy required to produce one ion pair in helium is found to be 30.76 ± 0.95 eV for nominal 600-MeV protons. This value agrees well, within experimental error, with reported values of 29.9 eV and 30.2 eV.

  9. First Year Wilkinson Microwave Anisotropy Probe(WMAP) Observations: Data Processing Methods and Systematic Errors Limits

    NASA Technical Reports Server (NTRS)

    Hinshaw, G.; Barnes, C.; Bennett, C. L.; Greason, M. R.; Halpern, M.; Hill, R. S.; Jarosik, N.; Kogut, A.; Limon, M.; Meyer, S. S.

    2003-01-01

    We describe the calibration and data processing methods used to generate full-sky maps of the cosmic microwave background (CMB) from the first year of Wilkinson Microwave Anisotropy Probe (WMAP) observations. Detailed limits on residual systematic errors are assigned based largely on analyses of the flight data supplemented, where necessary, with results from ground tests. The data are calibrated in flight using the dipole modulation of the CMB due to the observatory's motion around the Sun. This constitutes a full-beam calibration source. An iterative algorithm simultaneously fits the time-ordered data to obtain calibration parameters and pixelized sky map temperatures. The noise properties are determined by analyzing the time-ordered data with this sky signal estimate subtracted. Based on this, we apply a pre-whitening filter to the time-ordered data to remove a low level of 1/f noise. We infer and correct for a small (approximately 1%) transmission imbalance between the two sky inputs to each differential radiometer, and we subtract a small sidelobe correction from the 23 GHz (K band) map prior to further analysis. No other systematic error corrections are applied to the data. Calibration and baseline artifacts, including the response to environmental perturbations, are negligible. Systematic uncertainties are comparable to statistical uncertainties in the characterization of the beam response. Both are accounted for in the covariance matrix of the window function and are propagated to uncertainties in the final power spectrum. We characterize the combined upper limits to residual systematic uncertainties through the pixel covariance matrix.

  10. MERLIN: a Franco-German LIDAR space mission for atmospheric methane

    NASA Astrophysics Data System (ADS)

    Bousquet, P.; Ehret, G.; Pierangelo, C.; Marshall, J.; Bacour, C.; Chevallier, F.; Gibert, F.; Armante, R.; Crevoisier, C. D.; Edouart, D.; Esteve, F.; Julien, E.; Kiemle, C.; Alpers, M.; Millet, B.

    2017-12-01

    The Methane Remote Sensing Lidar Mission (MERLIN), currently in phase C, is a joint cooperation between France and Germany on the development, launch and operation of a space LIDAR dedicated to the retrieval of total weighted methane (CH4) atmospheric columns. Atmospheric methane is the second most potent anthropogenic greenhouse gas, contributing ~20% of climate radiative forcing but also playing an important role in atmospheric chemistry as a precursor of tropospheric ozone and lower-stratospheric water vapour. Its short lifetime (~9 years) and the nature and variety of its anthropogenic sources also offer interesting mitigation options with regard to the 2 °C objective of the Paris Agreement. For the first time, such measurements of atmospheric composition will be performed from space with an IPDA (Integrated Path Differential Absorption) LIDAR (Light Detection And Ranging), with a precision (target ±27 ppb for a 50 km aggregation along the track) and accuracy (target <3.7 ppb at 68%) sufficient to significantly reduce the uncertainties on methane emissions. This very low systematic error target is particularly ambitious compared to current passive methane space missions. It is achievable because of the differential active measurement principle of MERLIN, which guarantees almost no contamination by aerosols or water vapour cross-sensitivity. As an active mission, MERLIN will deliver global methane weighted columns (XCH4) for all seasons and all latitudes, day and night. Here, we recall the MERLIN objectives and mission characteristics. We also propose an end-to-end error analysis, from the causes of random and systematic errors of the instrument, the platform and the data treatment, to the error on methane emissions. To do so, we propose an OSSE (observing system simulation experiment) analysis to estimate the uncertainty reduction on methane emissions brought by MERLIN XCH4. The originality of our inversion system is to transfer both random and systematic errors from the observation space to the flux space, thus providing more realistic error reductions than usually obtained in OSSEs that use only the random part of errors. Uncertainty reductions are presented using two different atmospheric transport models, TM3 and LMDZ, and compared with the error reduction achieved with the GOSAT passive mission.

  11. Adaptive Fuzzy Tracking Control for a Class of MIMO Nonlinear Systems in Nonstrict-Feedback Form.

    PubMed

    Chen, Bing; Lin, Chong; Liu, Xiaoping; Liu, Kefu

    2015-12-01

    This paper focuses on the problem of fuzzy adaptive control for a class of multi-input and multi-output (MIMO) nonlinear systems in nonstrict-feedback form, which contains the strict-feedback form as a special case. Under a variable-partition condition, a new fuzzy adaptive backstepping design is proposed for this class of nonlinear MIMO systems. The suggested fuzzy adaptive controller guarantees that all the signals in the closed-loop system are semi-globally uniformly ultimately bounded and that the tracking errors eventually converge to a small neighborhood of the origin. The main advantage of this paper is that a control approach is systematically derived for nonlinear systems with strongly interconnected terms, which are functions of all states of the whole system. Simulation results further illustrate the effectiveness of the suggested approach.

  12. Model parameter-related optimal perturbations and their contributions to El Niño prediction errors

    NASA Astrophysics Data System (ADS)

    Tao, Ling-Jiang; Gao, Chuan; Zhang, Rong-Hua

    2018-04-01

    Errors in initial conditions and model parameters (MPs) are the main sources that limit the accuracy of ENSO predictions. In addition to exploring the initial error-induced prediction errors, model errors are equally important in determining prediction performance. In this paper, the MP-related optimal errors that can cause prominent error growth in ENSO predictions are investigated using an intermediate coupled model (ICM) and a conditional nonlinear optimal perturbation (CNOP) approach. Two MPs related to the Bjerknes feedback are considered in the CNOP analysis: one involves the SST-surface wind coupling (α_τ), and the other involves the thermocline effect on the SST (α_Te). The MP-related optimal perturbations (denoted as CNOP-P) are found to be uniformly positive and restrained in a small region: the α_τ component is mainly concentrated in the central equatorial Pacific, and the α_Te component is mainly located in the eastern cold tongue region. This kind of CNOP-P enhances the strength of the Bjerknes feedback and induces an El Niño- or La Niña-like error evolution, resulting in an El Niño-like systematic bias in this model. The CNOP-P is also found to play a role in the spring predictability barrier (SPB) for ENSO predictions. Evidently, such error growth is primarily attributed to MP errors in small areas, based on the localized distribution of CNOP-P. Further sensitivity experiments firmly indicate that ENSO simulations are sensitive to the representation of SST-surface wind coupling in the central Pacific and to the thermocline effect in the eastern Pacific in the ICM. These results provide guidance and theoretical support for future improvement in numerical models to reduce the systematic bias and SPB phenomenon in ENSO predictions.
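
    For readers unfamiliar with the approach, the CNOP is the perturbation that maximizes nonlinear error growth at a given lead time subject to an amplitude constraint; schematically (our generic notation, not the ICM-specific cost function):

    ```latex
    % CNOP: the (initial-condition or parameter) perturbation maximizing
    % nonlinear prediction-error growth at lead time tau, subject to an
    % amplitude constraint sigma; M_tau is the nonlinear model propagator.
    \delta p^{*} \;=\; \operatorname*{arg\,max}_{\lVert \delta p \rVert \,\le\, \sigma} \; J(\delta p),
    \qquad
    J(\delta p) \;=\; \bigl\lVert M_{\tau}(p + \delta p) - M_{\tau}(p) \bigr\rVert .
    ```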

  13. Global Warming Estimation from MSU: Correction for Drift and Calibration Errors

    NASA Technical Reports Server (NTRS)

    Prabhakara, C.; Iacovazzi, R., Jr.; Yoo, J.-M.

    2000-01-01

    Microwave Sounding Unit (MSU) radiometer observations in Ch 2 (53.74 GHz), made in the nadir direction from sequential, sun-synchronous, polar-orbiting NOAA morning satellites (NOAA 6, 10 and 12, which have approximately 7am/7pm orbital geometry) and afternoon satellites (NOAA 7, 9, 11 and 14, which have approximately 2am/2pm orbital geometry) are analyzed in this study to derive the global temperature trend from 1980 to 1998. In order to remove the discontinuities between the data of the successive satellites and to obtain a continuous time series, we first used the shortest possible time record of each satellite. In this way we get a preliminary estimate of the global temperature trend of 0.21 K/decade. However, this estimate is affected by systematic time-dependent errors. One such error is the instrument calibration error e_o. This error can be inferred whenever there are overlapping measurements made by two satellites over an extended period of time. From the available successive satellite data we have taken the longest possible time record of each satellite to form the time series during the period 1980 to 1998 and to infer this error e_o. We find e_o can decrease the global temperature trend by approximately 0.07 K/decade. In addition there are systematic time-dependent errors e_d and e_c present in the data that are introduced by the drift in the satellite orbital geometry. e_d arises from the diurnal cycle in temperature and e_c is the drift-related change in the calibration of the MSU. In order to analyze the nature of these drift-related errors the multi-satellite Ch 2 data set is partitioned into am and pm subsets to create two independent time series. The error e_d can be assessed in the am and pm data of Ch 2 on land and can be eliminated. Observations made in the MSU Ch 1 (50.3 GHz) support this approach. The error e_c is obvious only in the difference between the pm and am observations of Ch 2 over the ocean. We have followed two different paths to assess the impact of the error e_c on the global temperature trend. In one path the entire error e_c is placed in the am data, while in the other it is placed in the pm data. The global temperature trend is increased or decreased by approximately 0.03 K/decade depending upon this placement. Taking into account all random and systematic errors, our analysis of MSU observations leads us to conclude that a conservative estimate of the global warming is 0.11 ± 0.04 K/decade during 1980 to 1998.
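
    The overlap-based estimation of the calibration error e_o can be illustrated with a toy merge of two records; a minimal sketch with synthetic monthly means (the 0.25 K offset, noise levels and record lengths are all invented):

    ```python
    # Estimating an inter-satellite calibration offset e_o from an overlap
    # period, then merging the two records into one continuous time series.
    import numpy as np

    rng = np.random.default_rng(0)
    t = np.arange(120)                                        # months
    truth = 0.01 * t + 0.3 * np.sin(2 * np.pi * t / 12)       # trend + annual cycle
    sat_a = truth[:70] + rng.normal(0, 0.05, 70)              # first satellite
    sat_b = truth[50:] + 0.25 + rng.normal(0, 0.05, 70)       # biased successor

    overlap_a, overlap_b = sat_a[50:70], sat_b[:20]           # common months 50..69
    e_o = (overlap_b - overlap_a).mean()                      # calibration offset
    merged = np.concatenate([sat_a, sat_b[20:] - e_o])        # corrected merge
    print(f"estimated offset e_o = {e_o:.3f} K; merged length {merged.size}")
    ```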

  14. Human-robot cooperative movement training: Learning a novel sensory motor transformation during walking with robotic assistance-as-needed

    PubMed Central

    Emken, Jeremy L; Benitez, Raul; Reinkensmeyer, David J

    2007-01-01

    Background A prevailing paradigm of physical rehabilitation following neurologic injury is to "assist-as-needed" in completing desired movements. Several research groups are attempting to automate this principle with robotic movement training devices and patient cooperative algorithms that encourage voluntary participation. These attempts are currently not based on computational models of motor learning. Methods Here we assume that motor recovery from a neurologic injury can be modelled as a process of learning a novel sensory motor transformation, which allows us to study a simplified experimental protocol amenable to mathematical description. Specifically, we use a robotic force field paradigm to impose a virtual impairment on the left leg of unimpaired subjects walking on a treadmill. We then derive an "assist-as-needed" robotic training algorithm to help subjects overcome the virtual impairment and walk normally. The problem is posed as an optimization of performance error and robotic assistance. The optimal robotic movement trainer becomes an error-based controller with a forgetting factor that bounds kinematic errors while systematically reducing its assistance when those errors are small. As humans have a natural range of movement variability, we introduce an error weighting function that causes the robotic trainer to disregard this variability. Results We experimentally validated the controller with ten unimpaired subjects by demonstrating how it helped the subjects learn the novel sensory motor transformation necessary to counteract the virtual impairment, while also preventing them from experiencing large kinematic errors. The addition of the error weighting function allowed the robot assistance to fade to zero even though the subjects' movements were variable. We also show that in order to assist-as-needed, the robot must relax its assistance at a rate faster than that of the learning human. Conclusion The assist-as-needed algorithm proposed here can limit error during the learning of a dynamic motor task. The algorithm encourages learning by decreasing its assistance as a function of the ongoing progression of movement error. This type of algorithm is well suited for helping people learn dynamic tasks for which large kinematic errors are dangerous or discouraging, and thus may prove useful for robot-assisted movement training of walking or reaching following neurologic injury. PMID:17391527
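
    A toy simulation of such an error-based trainer with forgetting factor and error weighting, under the assumed update law F_{i+1} = f·F_i + g·w(e_i)·e_i, is sketched below; the parameter values and the one-dimensional "plant" are illustrative, not the paper's experimental model.

    ```python
    # Toy assist-as-needed trainer: assistance F decays geometrically
    # (forgetting factor f < 1) unless the weighted kinematic error is large.
    import numpy as np

    rng = np.random.default_rng(0)
    f, g = 0.8, 0.5        # forgetting factor and error gain (assumed values)
    deadband = 0.01        # natural movement variability to disregard (m)
    impairment = 0.05      # toy "virtual impairment" to be learned away (m)

    def weight(e):
        """Error weighting: ignore errors inside the variability deadband."""
        return 0.0 if abs(e) < deadband else 1.0

    F = learned = e = 0.0
    for step in range(200):
        # error = impairment - human compensation - robot assistance (+ noise)
        e = impairment - learned - F + rng.normal(0, 0.003)
        F = f * F + g * weight(e) * e     # robot: relaxes quickly when error is small
        learned += 0.1 * e                # human: slower error-based learning
    print(f"final assistance {F:+.4f} m, final error {e:+.4f} m")
    ```

    Because the robot's per-step decay (f = 0.8) is faster than the human learning rate, the assistance fades to zero as the subject takes over, consistent with the condition derived in the paper.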

  15. Nature of the refractive errors in rhesus monkeys (Macaca mulatta) with experimentally induced ametropias.

    PubMed

    Qiao-Grider, Ying; Hung, Li-Fang; Kee, Chea-Su; Ramamirtham, Ramkumar; Smith, Earl L

    2010-08-23

    We analyzed the contribution of individual ocular components to vision-induced ametropias in 210 rhesus monkeys. The primary contribution to refractive-error development came from vitreous chamber depth; a minor contribution from corneal power was also detected. However, there was no systematic relationship between refractive error and anterior chamber depth or between refractive error and any crystalline lens parameter. Our results are in good agreement with previous studies in humans, suggesting that the refractive errors commonly observed in humans are created by vision-dependent mechanisms that are similar to those operating in monkeys. This concordance emphasizes the applicability of rhesus monkeys in refractive-error studies.

  16. Nature of the Refractive Errors in Rhesus Monkeys (Macaca mulatta) with Experimentally Induced Ametropias

    PubMed Central

    Qiao-Grider, Ying; Hung, Li-Fang; Kee, Chea-su; Ramamirtham, Ramkumar; Smith, Earl L.

    2010-01-01

    We analyzed the contribution of individual ocular components to vision-induced ametropias in 210 rhesus monkeys. The primary contribution to refractive-error development came from vitreous chamber depth; a minor contribution from corneal power was also detected. However, there was no systematic relationship between refractive error and anterior chamber depth or between refractive error and any crystalline lens parameter. Our results are in good agreement with previous studies in humans, suggesting that the refractive errors commonly observed in humans are created by vision-dependent mechanisms that are similar to those operating in monkeys. This concordance emphasizes the applicability of rhesus monkeys in refractive-error studies. PMID:20600237

  17. Accuracy of blood-glucose measurements using glucose meters and arterial blood gas analyzers in critically ill adult patients: systematic review

    PubMed Central

    2013-01-01

    Introduction Glucose control to prevent both hyperglycemia and hypoglycemia is important in an intensive care unit. Arterial blood gas analyzers and glucose meters are commonly used to measure blood-glucose concentration in an intensive care unit; however, their accuracies are still unclear. Methods We performed a systematic literature search (January 1, 2001, to August 31, 2012) to find clinical studies comparing blood-glucose values measured with glucose meters and/or arterial blood gas analyzers with those simultaneously measured with a central laboratory machine in critically ill adult patients. Results We reviewed 879 articles and found 21 studies in which the accuracy of blood-glucose monitoring by arterial blood gas analyzers and/or glucometers by using central laboratory methods as references was assessed in critically ill adult patients. Of those 21 studies, 11 studies in which International Organization for Standardization criteria, error-grid method, or percentage of values within 20% of the error of a reference were used were selected for evaluation. The accuracy of blood-glucose measurements by arterial blood gas analyzers and glucose meters by using arterial blood was significantly higher than that of measurements with glucose meters by using capillary blood (odds ratios for error: 0.04, P < 0.001; and 0.36, P < 0.001). The accuracy of blood-glucose measurements with arterial blood gas analyzers tended to be higher than that of measurements with glucose meters by using arterial blood (P = 0.20). In the hypoglycemic range (defined as < 81 mg/dl), the incidence of errors using these devices was higher than that in the nonhypoglycemic range (odds ratios for error: arterial blood gas analyzers, 1.86, P = 0.15; glucose meters with capillary blood, 1.84, P = 0.03; glucose meters with arterial blood, 2.33, P = 0.02). Unstable hemodynamics (edema and use of a vasopressor) and use of insulin were associated with increased error of blood glucose monitoring with glucose meters. Conclusions Our literature review showed that the accuracy of blood-glucose measurements with arterial blood gas analyzers was significantly higher than that of measurements with glucose meters by using capillary blood and tended to be higher than that of measurements with glucose meters by using arterial blood. These results should be interpreted with caution because of the large variation of accuracy among devices. Because blood-glucose monitoring was less accurate within or near the hypoglycemic range, especially in patients with unstable hemodynamics or receiving insulin infusion, we should be aware that current blood glucose-monitoring technology has not reached a high enough degree of accuracy and reliability to lead to appropriate glucose control in critically ill patients. PMID:23506841
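
    As an illustration of one of the accuracy criteria used by the reviewed studies, the percentage of device readings within 20% of the laboratory reference can be computed directly; a minimal sketch with synthetic readings (the 8% proportional error model and all numbers are invented):

    ```python
    # Share of device readings within 20% of the laboratory reference,
    # overall and in the hypoglycemic range defined by the review (<81 mg/dl).
    import numpy as np

    rng = np.random.default_rng(1)
    reference = rng.uniform(40, 300, 1000)             # lab glucose, mg/dl
    device = reference * rng.normal(1.0, 0.08, 1000)   # assumed proportional error

    within = np.abs(device - reference) / reference <= 0.20
    hypo = reference < 81
    print(f"within 20% overall: {100 * within.mean():.1f}%")
    print(f"within 20% in hypoglycemic range: {100 * within[hypo].mean():.1f}%")
    ```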

  18. Improving Drive Files for Vehicle Road Simulations

    NASA Astrophysics Data System (ADS)

    Cherng, John G.; Goktan, Ali; French, Mark; Gu, Yi; Jacob, Anil

    2001-09-01

    Shaker tables are commonly used in laboratories for automotive vehicle component testing to study durability and acoustic performance. An example is development testing of car seats. However, it is difficult to reproduce the measured road data perfectly with the response of a shaker table, as there are basic differences in dynamic characteristics between a flexible vehicle and a substantially rigid shaker table. In addition, there are performance limits in the shaker table drive systems that can limit correlation. In practice, an optimal drive signal for the actuators is created iteratively. During each iteration, the error between the road data and the response data is minimised by an optimising algorithm, which is generally part of the feedback loop of the shaker table controller. This study presents a systematic investigation of the errors in the time and frequency domains, as well as the joint time-frequency domain, and an evaluation of different digital signal processing techniques that have been used in previous work. In addition, we present an innovative approach that integrates the dynamic characteristics of car seats and the human body into the error-minimising iteration process. We found that the iteration process can be shortened and the error reduced by using a weighting function created by normalising the frequency response function of the car seat. Two road data test sets were used in the study.

  19. Laboratory issues: use of nutritional biomarkers.

    PubMed

    Blanck, Heidi Michels; Bowman, Barbara A; Cooper, Gerald R; Myers, Gary L; Miller, Dayton T

    2003-03-01

    Biomarkers of nutritional status provide alternative measures of dietary intake. As with the error and variation associated with dietary intake measures, the magnitude and impact of both biological (preanalytical) and laboratory (analytical) variability need to be considered when one is using biomarkers. When choosing a biomarker, it is important to understand how it relates to nutritional intake and the specific time frame of exposure it reflects, as well as how it is affected by sampling and laboratory procedures. Biological sources of variation that arise from genetic and disease states of an individual affect biomarkers, but biomarkers are also affected by nonbiological sources of variation arising from specimen collection and storage, seasonality, time of day, contamination, stability and laboratory quality assurance. When choosing a laboratory for biomarker assessment, researchers should ensure that random and systematic errors are minimized by techniques such as blinding of laboratory staff to disease status and inclusion of external pooled standards to which laboratory staff are blinded. In addition, analytic quality control should be ensured by use of internal standards or certified materials over the entire range of possible values to control method accuracy. One must consider the effect of random laboratory error on measurement precision and also understand the method's limit of detection and the laboratory cutpoints. Choosing appropriate cutpoints and reducing error is extremely important in nutritional epidemiology, where weak associations are frequent. As part of this review, serum lipids are included as an example of a biomarker for which collaborative efforts have been put forth to both understand biological sources of variation and standardize laboratory results.

  20. Benchmark for Peak Detection Algorithms in Fiber Bragg Grating Interrogation and a New Neural Network for its Performance Improvement

    PubMed Central

    Negri, Lucas; Nied, Ademir; Kalinowski, Hypolito; Paterno, Aleksander

    2011-01-01

    This paper presents a benchmark for peak detection algorithms employed in fiber Bragg grating spectrometric interrogation systems. The accuracy, precision, and computational performance of currently used algorithms and those of a new proposed artificial neural network algorithm are compared. Centroid and Gaussian fitting algorithms are shown to have the highest precision but produce systematic errors that depend on the FBG refractive index modulation profile. The proposed neural network displays relatively good precision with reduced systematic errors and improved computational performance when compared to other networks. Additionally, suitable algorithms may be chosen with the general guidelines presented. PMID:22163806
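
    The centroid algorithm referred to above reduces to an intensity-weighted mean of the wavelengths above a threshold; a minimal sketch on a synthetic FBG spectrum (grid, profile and threshold are assumptions):

    ```python
    # Centroid peak detection on a synthetic FBG reflection spectrum.
    # For an asymmetric index-modulation profile the weighted mean is pulled
    # toward the heavier side, which is the kind of systematic error the
    # benchmark reports for centroid/Gaussian methods.
    import numpy as np

    rng = np.random.default_rng(2)
    wl = np.linspace(1549.0, 1551.0, 2001)               # wavelength grid, nm
    true_peak = 1550.123
    spectrum = np.exp(-((wl - true_peak) / 0.05) ** 2)   # idealized reflection peak
    spectrum += rng.normal(0, 0.01, wl.size)             # detector noise

    mask = spectrum > 0.5 * spectrum.max()               # threshold at half maximum
    centroid = np.sum(wl[mask] * spectrum[mask]) / np.sum(spectrum[mask])
    print(f"centroid estimate: {centroid:.4f} nm (true peak {true_peak} nm)")
    ```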

  1. A computational study of the discretization error in the solution of the Spencer-Lewis equation by doubling applied to the upwind finite-difference approximation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nelson, P.; Seth, D.L.; Ray, A.K.

    A detailed and systematic study of the nature of the discretization error associated with the upwind finite-difference method is presented. A basic model problem has been identified and based upon the results for this problem, a basic hypothesis regarding the accuracy of the computational solution of the Spencer-Lewis equation is formulated. The basic hypothesis is then tested under various systematic single complexifications of the basic model problem. The results of these tests provide the framework of the refined hypothesis presented in the concluding comments. 27 refs., 3 figs., 14 tabs.

  2. Modeling longitudinal data, I: principles of multivariate analysis.

    PubMed

    Ravani, Pietro; Barrett, Brendan; Parfrey, Patrick

    2009-01-01

    Statistical models are used to study the relationship between exposure and disease while accounting for the potential impact of other factors on outcomes. This adjustment is useful to obtain unbiased estimates of true effects or to predict future outcomes. Statistical models include a systematic component and an error component. The systematic component explains the variability of the response variable as a function of the predictors and is summarized in the effect estimates (model coefficients). The error component of the model represents the variability in the data unexplained by the model and is used to build measures of precision around the point estimates (confidence intervals).
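
    In the notation of a linear model, for example, the two components can be written as follows (a Gaussian error term is assumed here for concreteness):

    ```latex
    % Systematic component plus error component of a (Gaussian) linear model:
    % the coefficients beta summarize the effects; epsilon carries the
    % unexplained variability that yields the confidence intervals.
    y_i \;=\; \underbrace{\beta_0 + \beta_1 x_{i1} + \cdots + \beta_p x_{ip}}_{\text{systematic component}}
    \;+\; \underbrace{\varepsilon_i}_{\text{error component}},
    \qquad \varepsilon_i \sim \mathcal{N}(0, \sigma^{2}).
    ```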

  3. pyAmpli: an amplicon-based variant filter pipeline for targeted resequencing data.

    PubMed

    Beyens, Matthias; Boeckx, Nele; Van Camp, Guy; Op de Beeck, Ken; Vandeweyer, Geert

    2017-12-14

    Haloplex targeted resequencing is a popular method to analyze both germline and somatic variants in gene panels. However, the wet-lab procedures involved may introduce false positives that need to be considered in subsequent data analysis. To our knowledge, no variant-filtering rationale addressing amplicon-enrichment-related systematic errors exists in the form of an all-in-one package. We present pyAmpli, a platform-independent, parallelized Python package that implements an amplicon-based germline and somatic variant filtering strategy for Haloplex data. pyAmpli can filter variants for systematic errors by user-predefined criteria. We show that pyAmpli significantly increases specificity, without reducing sensitivity, which is essential for reporting true positive clinically relevant mutations in gene panel data. pyAmpli is an easy-to-use software tool which increases the true positive variant call rate in targeted resequencing data. It specifically reduces errors related to PCR-based enrichment of targeted regions.
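
    Conceptually, an amplicon-based filter of this kind discards calls that are supported by reads from only one of the overlapping amplicons covering a position. The sketch below is a schematic illustration of that rationale, not pyAmpli's actual API or criteria:

    ```python
    # Schematic amplicon-aware filter (NOT pyAmpli's real interface): a call
    # seen in only one amplicon is flagged as a possible enrichment artefact.
    def amplicon_filter(variant_calls, min_amplicons=2):
        kept, flagged = [], []
        for v in variant_calls:
            if len(set(v["supporting_amplicons"])) >= min_amplicons:
                kept.append(v)
            else:
                flagged.append(v)
        return kept, flagged

    calls = [
        {"pos": 101, "alt": "T", "supporting_amplicons": ["amp1", "amp2"]},
        {"pos": 202, "alt": "G", "supporting_amplicons": ["amp3"]},  # single amplicon
    ]
    kept, flagged = amplicon_filter(calls)
    print(f"{len(kept)} kept, {len(flagged)} flagged as possible artefacts")
    ```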

  4. Accuracy and Landmark Error Calculation Using Cone-Beam Computed Tomography–Generated Cephalograms

    PubMed Central

    Grauer, Dan; Cevidanes, Lucia S. H.; Styner, Martin A.; Heulfe, Inam; Harmon, Eric T.; Zhu, Hongtu; Proffit, William R.

    2010-01-01

    Objective To evaluate systematic differences in landmark position between cone-beam computed tomography (CBCT)-generated cephalograms and conventional digital cephalograms, and to estimate how much variability should be taken into account when both modalities are used within the same longitudinal study. Materials and Methods Landmarks on homologous CBCT-generated cephalograms and conventional digital cephalograms of 46 patients were digitized, registered, and compared via the Hotelling T² test. Results There were no systematic differences between modalities in the position of most landmarks. Three landmarks showed statistically significant differences but did not reach clinical significance. A method for error calculation when combining both modalities in the same individual is presented. Conclusion In a longitudinal follow-up for assessment of treatment outcomes and growth of one individual, the error due to the combination of the two modalities might be larger than previously estimated. PMID:19905853

  5. Compiler-aided systematic construction of large-scale DNA strand displacement circuits using unpurified components

    NASA Astrophysics Data System (ADS)

    Thubagere, Anupama J.; Thachuk, Chris; Berleant, Joseph; Johnson, Robert F.; Ardelean, Diana A.; Cherry, Kevin M.; Qian, Lulu

    2017-02-01

    Biochemical circuits made of rationally designed DNA molecules are proofs of concept for embedding control within complex molecular environments. They hold promise for transforming the current technologies in chemistry, biology, medicine and material science by introducing programmable and responsive behaviour to diverse molecular systems. As the transformative power of a technology depends on its accessibility, two main challenges are an automated design process and simple experimental procedures. Here we demonstrate the use of circuit design software, combined with the use of unpurified strands and simplified experimental procedures, for creating a complex DNA strand displacement circuit that consists of 78 distinct species. We develop a systematic procedure for overcoming the challenges involved in using unpurified DNA strands. We also develop a model that takes synthesis errors into consideration and semi-quantitatively reproduces the experimental data. Our methods now enable even novice researchers to successfully design and construct complex DNA strand displacement circuits.

  6. Experimental search for the violation of Pauli exclusion principle: VIP-2 Collaboration.

    PubMed

    Shi, H; Milotti, E; Bartalucci, S; Bazzi, M; Bertolucci, S; Bragadireanu, A M; Cargnelli, M; Clozza, A; De Paolis, L; Di Matteo, S; Egger, J-P; Elnaggar, H; Guaraldo, C; Iliescu, M; Laubenstein, M; Marton, J; Miliucci, M; Pichler, A; Pietreanu, D; Piscicchia, K; Scordo, A; Sirghi, D L; Sirghi, F; Sperandio, L; Vazquez Doce, O; Widmann, E; Zmeskal, J; Curceanu, C

    2018-01-01

    The VIolation of Pauli exclusion principle-2 experiment, or VIP-2 experiment, at the Laboratori Nazionali del Gran Sasso searches for X-rays from copper atomic transitions that are prohibited by the Pauli exclusion principle. Candidate direct violation events come from the transition of a 2p electron to the ground state that is already occupied by two electrons. From the first data-taking campaign in 2016 of the VIP-2 experiment, we determined a best upper limit of 3.4 × 10^{-29} for the probability that such a violation exists. Significant improvement in the control of the experimental systematics was also achieved, although not explicitly reflected in the improved upper limit. By introducing a simultaneous spectral fit of the signal and background data in the analysis, we succeeded in taking into account systematic errors that could not be evaluated previously in this type of measurement.

  7. Experimental search for the violation of Pauli exclusion principle. VIP-2 Collaboration

    NASA Astrophysics Data System (ADS)

    Shi, H.; Milotti, E.; Bartalucci, S.; Bazzi, M.; Bertolucci, S.; Bragadireanu, A. M.; Cargnelli, M.; Clozza, A.; De Paolis, L.; Di Matteo, S.; Egger, J.-P.; Elnaggar, H.; Guaraldo, C.; Iliescu, M.; Laubenstein, M.; Marton, J.; Miliucci, M.; Pichler, A.; Pietreanu, D.; Piscicchia, K.; Scordo, A.; Sirghi, D. L.; Sirghi, F.; Sperandio, L.; Vazquez Doce, O.; Widmann, E.; Zmeskal, J.; Curceanu, C.

    2018-04-01

    The VIolation of Pauli exclusion principle-2 experiment, or VIP-2 experiment, at the Laboratori Nazionali del Gran Sasso searches for X-rays from copper atomic transitions that are prohibited by the Pauli exclusion principle. Candidate direct violation events come from the transition of a 2p electron to the ground state that is already occupied by two electrons. From the first data-taking campaign in 2016 of the VIP-2 experiment, we determined a best upper limit of 3.4 × 10^{-29} for the probability that such a violation exists. Significant improvement in the control of the experimental systematics was also achieved, although not explicitly reflected in the improved upper limit. By introducing a simultaneous spectral fit of the signal and background data in the analysis, we succeeded in taking into account systematic errors that could not be evaluated previously in this type of measurement.

  8. Design issues for a reinforcement-based self-learning fuzzy controller

    NASA Technical Reports Server (NTRS)

    Yen, John; Wang, Haojin; Dauherity, Walter

    1993-01-01

    Fuzzy logic controllers have some often-cited advantages over conventional techniques such as PID control: easy implementation, accommodation of natural language, the ability to cover a wider range of operating conditions, and others. One major obstacle that hinders their broader application is the lack of a systematic way to develop and modify the rules; as a result, the creation and modification of fuzzy rules often depends on trial and error or pure experimentation. One of the proposed approaches to address this issue is the self-learning fuzzy logic controller (SFLC), which uses reinforcement learning techniques to learn the desirability of states and to adjust the consequent part of fuzzy control rules accordingly. Due to the different dynamics of the controlled processes, the performance of a self-learning fuzzy controller is highly contingent on its design. The design issue has not received sufficient attention. The issues related to the design of an SFLC for application to a chemical process are discussed, and its performance is compared with that of PID and self-tuning fuzzy logic controllers.

  9. The accuracy of self-reported pregnancy-related weight: a systematic review.

    PubMed

    Headen, I; Cohen, A K; Mujahid, M; Abrams, B

    2017-03-01

    Self-reported maternal weight is error-prone, and the context of pregnancy may impact error distributions. This systematic review summarizes error in self-reported weight across pregnancy and assesses implications for bias in associations between pregnancy-related weight and birth outcomes. We searched PubMed and Google Scholar through November 2015 for peer-reviewed articles reporting accuracy of self-reported, pregnancy-related weight at four time points: prepregnancy, delivery, over gestation and postpartum. Included studies compared maternal self-report to anthropometric measurement or medical report of weights. Sixty-two studies met inclusion criteria. We extracted data on magnitude of error and misclassification. We assessed impact of reporting error on bias in associations between pregnancy-related weight and birth outcomes. Women underreported prepregnancy (PPW: -2.94 to -0.29 kg) and delivery weight (DW: -1.28 to 0.07 kg), and over-reported gestational weight gain (GWG: 0.33 to 3 kg). Magnitude of error was small, ranged widely, and varied by prepregnancy weight class and race/ethnicity. Misclassification was moderate (PPW: 0-48.3%; DW: 39.0-49.0%; GWG: 16.7-59.1%), and overestimated some estimates of population prevalence. However, reporting error did not largely bias associations between pregnancy-related weight and birth outcomes. Although measured weight is preferable, self-report is a cost-effective and practical measurement approach. Future researchers should develop bias correction techniques for self-reported pregnancy-related weight.

  10. Comment on 3PL IRT Adjustment for Guessing

    ERIC Educational Resources Information Center

    Chiu, Ting-Wei; Camilli, Gregory

    2013-01-01

    Guessing behavior is an issue discussed widely with regard to multiple choice tests. Its primary effect is on number-correct scores for examinees at lower levels of proficiency. This is a systematic error or bias, which increases observed test scores. Guessing also can inflate random error variance. Correction or adjustment for guessing formulas…

  11. Progress in the improved lattice calculation of direct CP-violation in the Standard Model

    NASA Astrophysics Data System (ADS)

    Kelly, Christopher

    2018-03-01

    We discuss the ongoing effort by the RBC & UKQCD collaborations to improve our lattice calculation of the measure of Standard Model direct CP violation, ɛ', with physical kinematics. We present our progress in decreasing the (dominant) statistical error and discuss other related activities aimed at reducing the systematic errors.

  12. An investigation of condition mapping and plot proportion calculation issues

    Treesearch

    Demetrios Gatziolis

    2007-01-01

    A systematic examination of Forest Inventory and Analysis condition data collected under the annual inventory protocol in the Pacific Northwest region between 2000 and 2004 revealed the presence of errors both in condition topology and plot proportion computations. When plots were compiled to generate population estimates, proportion errors were found to cause...

  13. Mitigating Errors of Representation: A Practical Case Study of the University Experience Survey

    ERIC Educational Resources Information Center

    Whiteley, Sonia

    2014-01-01

    The Total Survey Error (TSE) paradigm provides a framework that supports the effective planning of research, guides decision making about data collection and contextualises the interpretation and dissemination of findings. TSE also allows researchers to systematically evaluate and improve the design and execution of ongoing survey programs and…

  14. Sampling methods for titica vine (Heteropsis spp.) inventory in a tropical forest

    Treesearch

    Carine Klauberg; Edson Vidal; Carlos Alberto Silva; Michelliny de M. Bentes; Andrew Thomas Hudak

    2016-01-01

    Titica vine provides useful raw fiber material. Using sampling schemes that reduce sampling error can provide direction for sustainable forest management of this vine. Sampling systematically with rectangular plots (10 × 25 m) promoted lower error and greater accuracy in the inventory of titica vines in tropical rainforest.

  15. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fuangrod, T; Simpson, J; Greer, P

    Purpose: A real-time patient treatment delivery verification system using EPID (Watchdog) has been developed as an advanced patient safety tool. In a pilot study, data were acquired for 119 prostate and head and neck (HN) IMRT patient deliveries to generate body-site-specific action limits using statistical process control. The purpose of this study is to determine the sensitivity of Watchdog to detect clinically significant errors during treatment delivery. Methods: Watchdog utilizes a physics-based model to generate a series of predicted transit cine EPID images as a reference data set, and compares these in real-time to measured transit cine-EPID images acquired during treatment using chi comparison (4%, 4 mm criteria) after the initial 2 s of treatment to allow for dose ramp-up. Four study cases were used: dosimetric (monitor unit) errors in prostate (7 fields) and HN (9 fields) IMRT treatments of (5%, 7%, 10%), and positioning (systematic displacement) errors in the same treatments of (5 mm, 7 mm, 10 mm). These errors were introduced by modifying the patient CT scan and re-calculating the predicted EPID data set. The error-embedded predicted EPID data sets were compared to the measured EPID data acquired during patient treatment. The treatment delivery percentage (measured from 2 s) at which Watchdog detected the error was determined. Results: Watchdog detected all simulated errors for all fields during delivery. The dosimetric errors were detected at average treatment delivery percentages of (4%, 0%, 0%) and (7%, 0%, 0%) for prostate and HN, respectively. For patient positional errors, the average treatment delivery percentages were (52%, 43%, 25%) and (39%, 16%, 6%). Conclusion: These results suggest that Watchdog can detect significant dosimetric and positioning errors in prostate and HN IMRT treatments in real-time, allowing for treatment interruption. Patient displacements take longer to detect; however, an incorrect body site or a very large geographic miss will be detected rapidly.
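
    For illustration, a simplified, globally normalized gamma-style image comparison is sketched below in Python. This is an assumption-laden stand-in: the system described above uses a chi comparison with 4%, 4 mm criteria, whose details are not reproduced here.

        import numpy as np

        def gamma_pass_fraction(ref, meas, pixel_mm=1.0, dose_tol=0.04,
                                dta_mm=4.0, search_mm=8.0):
            # Simplified gamma-style comparison of a measured image against a
            # predicted reference. For each pixel the combined dose-difference /
            # distance-to-agreement metric is minimized over a small search
            # neighbourhood; a pixel passes if the minimum is <= 1.
            # NOTE: np.roll wraps at the image edges, acceptable only for a sketch.
            ref = np.asarray(ref, dtype=float)
            meas = np.asarray(meas, dtype=float)
            norm = ref.max()                       # global normalization
            r = int(round(search_mm / pixel_mm))
            gamma_sq = np.full(ref.shape, np.inf)
            for dy in range(-r, r + 1):
                for dx in range(-r, r + 1):
                    shifted = np.roll(np.roll(ref, dy, axis=0), dx, axis=1)
                    dose_term = ((meas - shifted) / (dose_tol * norm)) ** 2
                    dist_term = ((dy * pixel_mm) ** 2 + (dx * pixel_mm) ** 2) / dta_mm ** 2
                    gamma_sq = np.minimum(gamma_sq, dose_term + dist_term)
            return float(np.mean(np.sqrt(gamma_sq) <= 1.0))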

  16. Dealing with systematic laser scanner errors due to misalignment at area-based deformation analyses

    NASA Astrophysics Data System (ADS)

    Holst, Christoph; Medić, Tomislav; Kuhlmann, Heiner

    2018-04-01

    The ability to acquire rapid, dense and high quality 3D data has made terrestrial laser scanners (TLS) a desirable instrument for tasks demanding high geometrical accuracy, such as geodetic deformation analyses. However, TLS measurements are influenced by systematic errors due to internal misalignments of the instrument. The resulting errors in the point cloud might exceed the magnitude of random errors. Hence, it is important to ensure that the deformation analysis is not biased by these influences. In this study, we propose and evaluate several strategies for reducing the effect of TLS misalignments on deformation analyses. The strategies are based on the bundled in-situ self-calibration and on the exploitation of two-face measurements. The strategies are verified by analyzing the deformation of the Onsala Space Observatory's radio telescope's main reflector. It is demonstrated that both two-face measurements and the in-situ calibration of the laser scanner in a bundle adjustment improve the results of the deformation analysis. The best solution is gained by a combination of both strategies.
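
    As a hedged illustration of the two-face principle exploited here, consider the following minimal Python sketch under an assumed 400-gon circle convention; it is not the authors' procedure:

        import numpy as np

        def two_face_mean(h_face1_gon, h_face2_gon):
            # Average of the two-face horizontal direction readings (gon).
            # Axis errors that change sign between the faces (e.g. collimation
            # error) cancel in the mean; the face-2 reading is first reduced
            # by 200 gon. Wrap-around near 0/400 gon is ignored in this sketch.
            h2 = (np.asarray(h_face2_gon, dtype=float) - 200.0) % 400.0
            return ((np.asarray(h_face1_gon, dtype=float) + h2) / 2.0) % 400.0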

  17. Numerical investigations of potential systematic uncertainties in iron opacity measurements at solar interior temperatures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nagayama, T.; Bailey, J. E.; Loisel, G. P.

    Iron opacity calculations presently disagree with measurements at an electron temperature of ~180–195 eV and an electron density of (2–4)×10²² cm⁻³, conditions similar to those at the base of the solar convection zone. The measurements use x rays to volumetrically heat a thin iron sample that is tamped with low-Z materials. The opacity is inferred from spectrally resolved x-ray transmission measurements. Plasma self-emission, tamper attenuation, and temporal and spatial gradients can all potentially cause systematic errors in the measured opacity spectra. In this article we quantitatively evaluate these potential errors with numerical investigations. The analysis exploits computer simulations that were previously found to reproduce the experimentally measured plasma conditions. The simulations, combined with a spectral synthesis model, enable evaluations of individual and combined potential errors in order to estimate their potential effects on the opacity measurement. Lastly, the results show that the errors considered here do not account for the previously observed model-data discrepancies.

  18. Numerical investigations of potential systematic uncertainties in iron opacity measurements at solar interior temperatures

    DOE PAGES

    Nagayama, T.; Bailey, J. E.; Loisel, G. P.; ...

    2017-06-26

    Iron opacity calculations presently disagree with measurements at an electron temperature of ~180–195 eV and an electron density of (2–4)×10²² cm⁻³, conditions similar to those at the base of the solar convection zone. The measurements use x rays to volumetrically heat a thin iron sample that is tamped with low-Z materials. The opacity is inferred from spectrally resolved x-ray transmission measurements. Plasma self-emission, tamper attenuation, and temporal and spatial gradients can all potentially cause systematic errors in the measured opacity spectra. In this article we quantitatively evaluate these potential errors with numerical investigations. The analysis exploits computer simulations that were previously found to reproduce the experimentally measured plasma conditions. The simulations, combined with a spectral synthesis model, enable evaluations of individual and combined potential errors in order to estimate their potential effects on the opacity measurement. Lastly, the results show that the errors considered here do not account for the previously observed model-data discrepancies.

  19. Flexible methods for segmentation evaluation: results from CT-based luggage screening.

    PubMed

    Karimi, Seemeen; Jiang, Xiaoqian; Cosman, Pamela; Martz, Harry

    2014-01-01

    Imaging systems used in aviation security include segmentation algorithms in an automatic threat recognition pipeline. The segmentation algorithms evolve in response to emerging threats and changing performance requirements. Analysis of segmentation algorithms' behavior, including the nature of errors and feature recovery, facilitates their development. However, evaluation methods from the literature provide limited characterization of segmentation algorithms. Our aim was to develop segmentation evaluation methods that measure systematic errors such as oversegmentation and undersegmentation, outliers, and overall errors; the methods must also measure feature recovery and allow us to prioritize segments. We developed two complementary evaluation methods using statistical techniques and information theory. We also created a semi-automatic method to define ground truth from 3D images. We applied our methods to evaluate five segmentation algorithms developed for CT luggage screening. We validated our methods with synthetic problems and an observer evaluation. Both methods selected the same best segmentation algorithm. Human evaluation confirmed the findings. The measurement of systematic errors and prioritization helped in understanding the behavior of each segmentation algorithm. Our evaluation methods allow us to measure and explain the accuracy of segmentation algorithms.

  20. Multiple Flux Footprints, Flux Divergences and Boundary Layer Mixing Ratios: Studies of Ecosystem-Atmosphere CO2 Exchange Using the WLEF Tall Tower.

    NASA Astrophysics Data System (ADS)

    Davis, K. J.; Bakwin, P. S.; Yi, C.; Cook, B. D.; Wang, W.; Denning, A. S.; Teclaw, R.; Isebrands, J. G.

    2001-05-01

    Long-term, tower-based measurements using the eddy-covariance method have revealed a wealth of detail about the temporal dynamics of net ecosystem-atmosphere exchange (NEE) of CO2. The data also provide a measure of the annual net CO2 exchange. The area represented by these flux measurements, however, is limited, and doubts remain about possible systematic errors that may bias the annual net exchange measurements. Flux and mixing ratio measurements conducted at the WLEF tall tower as part of the Chequamegon Ecosystem-Atmosphere Study (ChEAS) allow for unique assessment of the uncertainties in NEE of CO2. The synergy between flux and mixing ratio observations shows the potential for comparing inverse and eddy-covariance methods of estimating NEE of CO2. Such comparisons may strengthen confidence in both results and begin to bridge the huge gap in spatial scales (at least 3 orders of magnitude) between continental or hemispheric scale inverse studies and kilometer-scale eddy covariance flux measurements. Data from WLEF and Willow Creek, another ChEAS tower, are used to estimate random and systematic errors in NEE of CO2. Random uncertainty in seasonal exchange rates and the annual integrated NEE, including both turbulent sampling errors and variability in environmental conditions, is small. Systematic errors are identified by examining changes in flux as a function of atmospheric stability and wind direction, and by comparing the multiple-level flux measurements on the WLEF tower. Nighttime drainage is modest but evident. Systematic horizontal advection occurs during the morning turbulence transition. The potential total systematic error appears to be larger than random uncertainty, but still modest. The total systematic error, however, is difficult to assess. It appears that the WLEF region ecosystems were a small net sink of CO2 in 1997. It is clear that the summer uptake rate at WLEF is much smaller than that at most deciduous forest sites, including the nearby Willow Creek site. The WLEF tower also allows us to study the potential for monitoring continental CO2 mixing ratios from tower sites. Despite concerns about the proximity to ecosystem sources and sinks, it is clear that boundary layer CO2 mixing ratios can be monitored using typical surface layer towers. Seasonal and annual land-ocean mixing ratio gradients are readily detectable, providing the motivation for a flux-tower based mixing ratio observation network that could greatly improve the accuracy of inversion-based estimates of NEE of CO2, and enable inversions to be applied on smaller temporal and spatial scales. Results from the WLEF tower illustrate the degree to which local flux measurements represent interannual, seasonal and synoptic CO2 mixing ratio trends. This coherence between fluxes and mixing ratios serves to "regionalize" the eddy-covariance based local NEE observations.

  1. Qualitative fusion technique based on information poor system and its application to factor analysis for vibration of rolling bearings

    NASA Astrophysics Data System (ADS)

    Xia, Xintao; Wang, Zhongyu

    2008-10-01

    For some statistical methods of analyzing the stability of a system, it is difficult to resolve the problems of unknown probability distributions and small samples. Therefore, a novel method is proposed in this paper to resolve these problems. This method is independent of the probability distribution and is useful for small-sample systems. After rearrangement of the original data series, the order difference and two polynomial membership functions are introduced to estimate the true value, the lower bound and the upper bound of the system using fuzzy-set theory. Then the empirical distribution function is investigated to ensure a confidence level above 95%, and the degree of similarity is presented to evaluate the stability of the system. Computer simulation cases investigate stable systems with various probability distributions, unstable systems with linear systematic errors and periodic systematic errors, and some mixed systems. The proposed method of systematic stability analysis is thereby validated.

  2. Haptic spatial matching in near peripersonal space.

    PubMed

    Kaas, Amanda L; Mier, Hanneke I van

    2006-04-01

    Research has shown that haptic spatial matching at intermanual distances over 60 cm is prone to large systematic errors. The error pattern has been explained by the use of reference frames intermediate between egocentric and allocentric coding. This study investigated haptic performance in near peripersonal space, i.e. at intermanual distances of 60 cm and less. Twelve blindfolded participants (six males and six females) were presented with two turn bars at equal distances from the midsagittal plane, 30 or 60 cm apart. Different orientations (vertical/horizontal or oblique) of the left bar had to be matched by adjusting the right bar to either a mirror symmetric (/ \\) or parallel (/ /) position. The mirror symmetry task can in principle be performed accurately in both an egocentric and an allocentric reference frame, whereas the parallel task requires an allocentric representation. Results showed that parallel matching induced large systematic errors which increased with distance. Overall error was significantly smaller in the mirror task. The task difference also held for the vertical orientation at 60 cm distance, even though this orientation required the same response in both tasks, showing a marked effect of task instruction. In addition, men outperformed women on the parallel task. Finally, contrary to our expectations, systematic errors were found in the mirror task, predominantly at 30 cm distance. Based on these findings, we suggest that haptic performance in near peripersonal space might be dominated by different mechanisms than those which come into play at distances over 60 cm. Moreover, our results indicate that both inter-individual differences and task demands affect task performance in haptic spatial matching. Therefore, we conclude that the study of haptic spatial matching in near peripersonal space might reveal important additional constraints for the specification of adequate models of haptic spatial performance.

  3. Influence of Familiarization and Competitive Level on the Reliability of Countermovement Vertical Jump Kinetic and Kinematic Variables.

    PubMed

    Nibali, Maria L; Tombleson, Tom; Brady, Philip H; Wagner, Phillip

    2015-10-01

    Understanding typical variation of vertical jump (VJ) performance and confounding sources of its typical variability (i.e., familiarization and competitive level) is pertinent in the routine monitoring of athletes. We evaluated the presence of systematic error (learning effect) and nonuniformity of error (heteroscedasticity) across VJ performances of athletes that differ in competitive level and quantified the reliability of VJ kinetic and kinematic variables relative to the smallest worthwhile change (SWC). One hundred thirteen high school athletes, 30 college athletes, and 35 professional athletes completed repeat VJ trials. Average eccentric rate of force development (RFD), average concentric (CON) force, CON impulse, and jump height measurements were obtained from vertical ground reaction force (VGRF) data. Systematic error was assessed by evaluating changes in the mean of repeat trials. Heteroscedasticity was evaluated by plotting the difference score (trial 2 - trial 1) against the mean of the trials. Variability of jump variables was calculated as the typical error (TE) and coefficient of variation (%CV). No substantial systematic error (effect size range: -0.07 to 0.11) or heteroscedasticity was present for any of the VJ variables. Vertical jump can be performed without the need for familiarization trials, and the variability can be conveyed as either the raw TE or the %CV. Assessment of VGRF variables is an effective and reliable means of assessing VJ performance. Average CON force and CON impulse are highly reliable (%CV: 2.7% ×/÷ 1.10), although jump height was the only variable to display a %CV ≤ SWC. Eccentric RFD is highly variable yet should not be discounted from VJ assessments on this factor alone because it may be sensitive to changes in response to training or fatigue that exceed the TE.

  4. Application of advanced shearing techniques to the calibration of autocollimators with small angle generators and investigation of error sources.

    PubMed

    Yandayan, T; Geckeler, R D; Aksulu, M; Akgoz, S A; Ozgur, B

    2016-05-01

    The application of advanced error-separating shearing techniques to the precise calibration of autocollimators with Small Angle Generators (SAGs) was carried out for the first time. The experimental realization was achieved using the High Precision Small Angle Generator (HPSAG) of TUBITAK UME under classical dimensional metrology laboratory environmental conditions. The standard uncertainty value of 5 mas (24.2 nrad) reached by the classical calibration method was improved to the level of 1.38 mas (6.7 nrad). Shearing techniques, which offer a unique opportunity to separate the errors of devices without recourse to any external standard, were first adapted by Physikalisch-Technische Bundesanstalt (PTB) to the calibration of autocollimators with angle encoders. It has been demonstrated experimentally in a clean room environment using the primary angle standard of PTB (WMT 220). The application of the technique to a different type of angle measurement system extends the range of the shearing technique further and reveals other advantages. For example, the angular scales of the SAGs are based on linear measurement systems (e.g., capacitive nanosensors for the HPSAG). Therefore, SAGs show different systematic errors when compared to angle encoders. In addition to the error-separation of the HPSAG and the autocollimator, detailed investigations of error sources were carried out. Apart from determination of the systematic errors of the capacitive sensor used in the HPSAG, it was also demonstrated that the shearing method enables the unique opportunity to characterize other error sources, such as errors due to temperature drift in long-term measurements. This proves that the shearing technique is a very powerful method for investigating angle measuring systems, for their improvement, and for specifying precautions to be taken during the measurements.

  5. Application of indices Cp and Cpk to improve quality control capability in clinical biochemistry laboratories.

    PubMed

    Chen, Ming-Shu; Wu, Ming-Hsun; Lin, Chih-Ming

    2014-04-30

    The traditional criteria for acceptability of analytic quality may not be objective in clinical laboratories. To establish quality control procedures intended to enhance Westgard multi-rules for improving the quality of clinical biochemistry tests, we applied the Cp and Cpk quality-control indices to monitor tolerance fitting and systematic variation of clinical biochemistry test results. Daily quality-control data of a large Taiwanese hospital in 2009 were analyzed. The test items were selected based on an Olympus biochemistry machine and included serum albumin, aspartate aminotransferase, cholesterol, glucose and potassium levels. Cp and Cpk values were calculated for normal and abnormal levels, respectively. The tolerance range was estimated with data from 50 laboratories using the same instruments and reagents. The results showed a monthly trend of variation for the five items under investigation. The index values of glucose were lower than those of the other items, and their values were usually <2. In contrast to the Cp value for cholesterol, the Cpk of cholesterol was lower than 2, indicating a systematic error that should be further investigated. This finding suggests a degree of variation or failure to meet specifications that should be corrected. The study indicated that Cp and Cpk could be applied not only for monitoring variations in quality control, but also for revealing inter-laboratory quality-control capability differences.
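
    The two indices have standard definitions: Cp compares the tolerance width to the process spread, while Cpk additionally penalizes an off-center process mean. A minimal Python sketch (assuming approximately normally distributed daily QC values; not the authors' implementation):

        import numpy as np

        def cp_cpk(values, lsl, usl):
            # Cp = (USL - LSL) / (6 * sigma): capability of the spread alone.
            # Cpk = min(USL - mu, mu - LSL) / (3 * sigma): capability including
            # how well the process mean is centered within the tolerance band.
            mu = np.mean(values)
            sigma = np.std(values, ddof=1)
            cp = (usl - lsl) / (6.0 * sigma)
            cpk = min(usl - mu, mu - lsl) / (3.0 * sigma)
            return cp, cpk

        # A Cpk clearly below Cp signals a systematic (centering) error rather
        # than excessive random spread, as observed for cholesterol above.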

  6. Initial Steps Toward Next-Generation, Waveform-Based, Three-Dimensional Models and Metrics to Improve Nuclear Explosion Monitoring in the Middle East

    DTIC Science & Technology

    2008-09-30

    ...propagation effects by splitting apart the longer period surface waves from the shorter period, depth-sensitive Pnl waves. Problematic, or high-error, stations and paths were further analyzed to identify systematic errors with unknown sensor responses and... frequency Pnl components and slower, longer period surface waves. All cut windows are fit simultaneously, allowing equal weighting of phases that may be...

  7. Exponential H ∞ Synchronization of Chaotic Cryptosystems Using an Improved Genetic Algorithm

    PubMed Central

    Hsiao, Feng-Hsiag

    2015-01-01

    This paper presents a systematic design methodology for neural-network- (NN-) based secure communications in multiple time-delay chaotic (MTDC) systems with optimal H ∞ performance and cryptography. On the basis of the Improved Genetic Algorithm (IGA), which is demonstrated to have better performance than that of a traditional GA, a model-based fuzzy controller is then synthesized to stabilize the MTDC systems. A fuzzy controller is synthesized to not only realize the exponential synchronization, but also achieve optimal H ∞ performance by minimizing the disturbance attenuation level. Furthermore, the error of the recovered message is stated by using the n-shift cipher and key. Finally, a numerical example with simulations is given to demonstrate the effectiveness of our approach. PMID:26366432

  8. Error Sources in Proccessing LIDAR Based Bridge Inspection

    NASA Astrophysics Data System (ADS)

    Bian, H.; Chen, S. E.; Liu, W.

    2017-09-01

    Bridge inspection is a critical task in infrastructure management and is facing unprecedented challenges after a series of bridge failures. The prevailing visual inspection has been insufficient for providing reliable and quantitative bridge information, even though a systematic quality management framework was built to ensure the quality of visual inspection data and to minimize errors during the inspection process. LiDAR-based remote sensing is recommended as an effective tool for overcoming some of the disadvantages of visual inspection. In order to evaluate the potential of applying this technology in bridge inspection, some of the error sources in LiDAR-based bridge inspection are analysed. Scanning angle variance during field data collection and differences in algorithm design during scan data processing were found to introduce errors into inspection results. Beyond studying the error sources, further consideration should be given to improving inspection data quality, and statistical analysis might be employed in the future to evaluate an inspection process that contains a series of uncertain factors. Overall, the development of a reliable bridge inspection system requires not only the improvement of data processing algorithms, but also systematic considerations to mitigate possible errors in the entire inspection workflow. If LiDAR or some other technology is accepted as a supplement to visual inspection, the current quality management framework will need to be modified or redesigned, a task as urgent as the refinement of the inspection techniques themselves.

  9. Drought Persistence Errors in Global Climate Models

    NASA Astrophysics Data System (ADS)

    Moon, H.; Gudmundsson, L.; Seneviratne, S. I.

    2018-04-01

    The persistence of drought events largely determines the severity of socioeconomic and ecological impacts, but the capability of current global climate models (GCMs) to simulate such events is subject to large uncertainties. In this study, the representation of drought persistence in GCMs is assessed by comparing state-of-the-art GCM model simulations to observation-based data sets. For doing so, we consider dry-to-dry transition probabilities at monthly and annual scales as estimates for drought persistence, where a dry status is defined as negative precipitation anomaly. Though there is a substantial spread in the drought persistence bias, most of the simulations show systematic underestimation of drought persistence at global scale. Subsequently, we analyzed to which degree (i) inaccurate observations, (ii) differences among models, (iii) internal climate variability, and (iv) uncertainty of the employed statistical methods contribute to the spread in drought persistence errors using an analysis of variance approach. The results show that at monthly scale, model uncertainty and observational uncertainty dominate, while the contribution from internal variability is small in most cases. At annual scale, the spread of the drought persistence error is dominated by the statistical estimation error of drought persistence, indicating that the partitioning of the error is impaired by the limited number of considered time steps. These findings reveal systematic errors in the representation of drought persistence in current GCMs and suggest directions for further model improvement.
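
    As an illustrative sketch of the persistence estimate described above (assuming a monthly precipitation series and its climatological monthly means; not the study's code):

        import numpy as np

        def dry_to_dry_probability(precip, climatology):
            # A time step is "dry" when its precipitation anomaly is negative.
            dry = np.asarray(precip, dtype=float) - np.asarray(climatology, dtype=float) < 0.0
            # Estimate persistence as the fraction of dry steps that are
            # immediately followed by another dry step.
            return np.sum(dry[:-1] & dry[1:]) / np.sum(dry[:-1])

    Comparing this statistic between model output and observation-based series, as the study does, directly exposes a systematic under- or overestimation of persistence.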

  10. Modeling human target acquisition in ground-to-air weapon systems

    NASA Technical Reports Server (NTRS)

    Phatak, A. V.; Mohr, R. L.; Vikmanis, M.; Wei, K. C.

    1982-01-01

    The problems associated with formulating and validating mathematical models for describing and predicting human target acquisition response are considered. In particular, the extension of the human observer model to include the acquisition phase as well as the tracking segment is presented. Relationship of the Observer model structure to the more complex Standard Optimal Control model formulation and to the simpler Transfer Function/Noise representation is discussed. Problems pertinent to structural identifiability and the form of the parameterization are elucidated. A systematic approach toward the identification of the observer acquisition model parameters from ensemble tracking error data is presented.

  11. Improvements in GRACE Gravity Fields Using Regularization

    NASA Astrophysics Data System (ADS)

    Save, H.; Bettadpur, S.; Tapley, B. D.

    2008-12-01

    The unconstrained global gravity field models derived from GRACE are susceptible to systematic errors that show up as broad "stripes" aligned in a North-South direction on the global maps of mass flux. These errors are believed to be a consequence of both systematic and random errors in the data that are amplified by the nature of the gravity field inverse problem. These errors impede scientific exploitation of the GRACE data products, and limit the realizable spatial resolution of the GRACE global gravity fields in certain regions. We use regularization techniques to reduce these "stripe" errors in the gravity field products. The regularization criteria are designed such that there is no attenuation of the signal and that the solutions fit the observations as well as an unconstrained solution. We have used a computationally inexpensive method, normally referred to as "L-ribbon", to find the regularization parameter. This paper discusses the characteristics and statistics of a 5-year time-series of regularized gravity field solutions. The solutions show markedly reduced stripes, are of uniformly good quality over time, and leave little or no systematic observation residuals, which is a frequent consequence of signal suppression from regularization. Up to degree 14, the signal in regularized solution shows correlation greater than 0.8 with the un-regularized CSR Release-04 solutions. Signals from large-amplitude and small-spatial extent events - such as the Great Sumatra Andaman Earthquake of 2004 - are visible in the global solutions without using special post-facto error reduction techniques employed previously in the literature. Hydrological signals as small as 5 cm water-layer equivalent in the small river basins, like Indus and Nile for example, are clearly evident, in contrast to noisy estimates from RL04. The residual variability over the oceans relative to a seasonal fit is small except at higher latitudes, and is evident without the need for de-striping or spatial smoothing.
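
    As a hedged, generic illustration of the regularization idea (a standard Tikhonov form with an L-curve-style parameter scan; the authors' "L-ribbon" method and their gravity-field design matrices are not reproduced here):

        import numpy as np

        def tikhonov_solve(A, b, lam):
            # Regularized least squares: minimize ||Ax - b||^2 + lam * ||x||^2.
            n = A.shape[1]
            return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)

        def lcurve_points(A, b, lams):
            # Residual norm vs. solution norm over a range of regularization
            # parameters; the "corner" of this curve is a common heuristic
            # for choosing lam without over-suppressing signal.
            pts = []
            for lam in lams:
                x = tikhonov_solve(A, b, lam)
                pts.append((np.linalg.norm(A @ x - b), np.linalg.norm(x)))
            return pts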

  12. Systematic errors in Monsoon simulation: importance of the equatorial Indian Ocean processes

    NASA Astrophysics Data System (ADS)

    Annamalai, H.; Taguchi, B.; McCreary, J. P., Jr.; Nagura, M.; Miyama, T.

    2015-12-01

    In climate models, simulating the monsoon precipitation climatology remains a grand challenge. Compared to CMIP3, the multi-model-mean (MMM) errors for Asian-Australian monsoon (AAM) precipitation climatology in CMIP5, relative to GPCP observations, have shown little improvement. One of the implications is that uncertainties in the future projections of time-mean changes to AAM rainfall may not have been reduced from CMIP3 to CMIP5. Despite dedicated efforts by the modeling community, progress in monsoon modeling is rather slow. This leads us to wonder: has the scientific community reached a "plateau" in modeling mean monsoon precipitation? Our focus here is a better understanding of the coupled air-sea interactions and moist processes that govern the precipitation characteristics over the tropical Indian Ocean, where large-scale errors persist. A series of idealized coupled model experiments is performed to test the hypothesis that errors in the coupled processes along the equatorial Indian Ocean during inter-monsoon seasons could potentially influence systematic errors during the monsoon season. Moist static energy budget diagnostics have been performed to identify the leading moist and radiative processes that account for the large-scale errors in the simulated precipitation. As a way forward, we propose three coordinated efforts: (i) idealized coupled model experiments; (ii) process-based diagnostics; and (iii) direct observations to constrain model physics. We will argue that a systematic and coordinated approach to the identification of the various interactive processes that shape the precipitation basic state needs to be carried out, and that high-quality observations over the data-sparse monsoon region are needed to validate models and further improve model physics.

  13. Understanding human management of automation errors

    PubMed Central

    McBride, Sara E.; Rogers, Wendy A.; Fisk, Arthur D.

    2013-01-01

    Automation has the potential to aid humans with a diverse set of tasks and support overall system performance. Automated systems are not always reliable, and when automation errs, humans must engage in error management, which is the process of detecting, understanding, and correcting errors. However, this process of error management in the context of human-automation interaction is not well understood. Therefore, we conducted a systematic review of the variables that contribute to error management. We examined relevant research in human-automation interaction and human error to identify critical automation, person, task, and emergent variables. We propose a framework for management of automation errors to incorporate and build upon previous models. Further, our analysis highlights variables that may be addressed through design and training to positively influence error management. Additional efforts to understand the error management process will contribute to automation designed and implemented to support safe and effective system performance. PMID:25383042

  14. Understanding human management of automation errors.

    PubMed

    McBride, Sara E; Rogers, Wendy A; Fisk, Arthur D

    2014-01-01

    Automation has the potential to aid humans with a diverse set of tasks and support overall system performance. Automated systems are not always reliable, and when automation errs, humans must engage in error management, which is the process of detecting, understanding, and correcting errors. However, this process of error management in the context of human-automation interaction is not well understood. Therefore, we conducted a systematic review of the variables that contribute to error management. We examined relevant research in human-automation interaction and human error to identify critical automation, person, task, and emergent variables. We propose a framework for management of automation errors to incorporate and build upon previous models. Further, our analysis highlights variables that may be addressed through design and training to positively influence error management. Additional efforts to understand the error management process will contribute to automation designed and implemented to support safe and effective system performance.

  15. Global CO2 flux inversions from remote-sensing data with systematic errors using hierarchical statistical models

    NASA Astrophysics Data System (ADS)

    Zammit-Mangion, Andrew; Stavert, Ann; Rigby, Matthew; Ganesan, Anita; Rayner, Peter; Cressie, Noel

    2017-04-01

    The Orbiting Carbon Observatory-2 (OCO-2) satellite was launched on 2 July 2014, and it has been a source of atmospheric CO2 data since September 2014. The OCO-2 dataset contains a number of variables, but the one of most interest for flux inversion has been the column-averaged dry-air mole fraction (in units of ppm). These global level-2 data offer the possibility of inferring CO2 fluxes at Earth's surface and tracking those fluxes over time. However, as well as having a component of random error, the OCO-2 data have a component of systematic error that is dependent on the instrument's mode, namely land nadir, land glint, and ocean glint. Our statistical approach to CO2-flux inversion starts with constructing a statistical model for the random and systematic errors with parameters that can be estimated from the OCO-2 data and possibly in situ sources from flasks, towers, and the Total Column Carbon Observing Network (TCCON). Dimension reduction of the flux field is achieved through the use of physical basis functions, while temporal evolution of the flux is captured by modelling the basis-function coefficients as a vector autoregressive process. For computational efficiency, flux inversion uses only three months of sensitivities of mole fraction to changes in flux, computed using MOZART; any residual variation is captured through the modelling of a stochastic process that varies smoothly as a function of latitude. The second stage of our statistical approach is to simulate from the posterior distribution of the basis-function coefficients and all unknown parameters given the data using a fully Bayesian Markov chain Monte Carlo (MCMC) algorithm. Estimates and posterior variances of the flux field can then be obtained straightforwardly from this distribution. Our statistical approach is different than others, as it simultaneously makes inference (and quantifies uncertainty) on both the error components' parameters and the CO2 fluxes. We compare it to more classical approaches through an Observing System Simulation Experiment (OSSE) on a global scale. By changing the size of the random and systematic errors in the OSSE, we can determine the corresponding spatial and temporal resolutions at which useful flux signals could be detected from the OCO-2 data.
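
    The temporal model described above can be illustrated with a minimal vector-autoregressive simulation of basis-function coefficients; the dimensions and matrices below are assumptions for illustration, not the study's values:

        import numpy as np

        rng = np.random.default_rng(0)
        k = 10                    # number of basis-function coefficients (illustrative)
        A = 0.8 * np.eye(k)       # assumed autoregressive transition matrix
        Q = 0.1 * np.eye(k)       # assumed innovation covariance

        alpha = np.zeros(k)       # coefficients of the spatial flux basis functions
        trajectory = []
        for t in range(24):       # e.g. 24 monthly steps
            # VAR(1) evolution: next state is a linear map of the current state
            # plus Gaussian innovations.
            alpha = A @ alpha + rng.multivariate_normal(np.zeros(k), Q)
            trajectory.append(alpha.copy())

    In the hierarchical setting sketched in the abstract, A and Q would themselves carry priors and be inferred jointly with the fluxes by MCMC.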

  16. Robust Learning Control Design for Quantum Unitary Transformations.

    PubMed

    Wu, Chengzhi; Qi, Bo; Chen, Chunlin; Dong, Daoyi

    2017-12-01

    Robust control design for quantum unitary transformations has been recognized as a fundamental and challenging task in the development of quantum information processing due to unavoidable decoherence or operational errors in the experimental implementation of quantum operations. In this paper, we extend the systematic methodology of the sampling-based learning control (SLC) approach with a gradient flow algorithm for the design of robust quantum unitary transformations. The SLC approach first uses a "training" process to find an optimal control strategy robust against certain ranges of uncertainties. Then a number of randomly selected samples are tested and the performance is evaluated according to their average fidelity. The approach is applied to three typical examples of robust quantum transformation problems, including robust quantum transformations in a three-level quantum system, in a superconducting quantum circuit, and in a spin chain system. Numerical results demonstrate the effectiveness of the SLC approach and show its potential for applications in various implementations of quantum unitary transformations.

  17. Optimal information transfer in enzymatic networks: A field theoretic formulation

    NASA Astrophysics Data System (ADS)

    Samanta, Himadri S.; Hinczewski, Michael; Thirumalai, D.

    2017-07-01

    Signaling in enzymatic networks is typically triggered by environmental fluctuations, resulting in a series of stochastic chemical reactions, leading to corruption of the signal by noise. For example, information flow is initiated by binding of extracellular ligands to receptors, which is transmitted through a cascade involving kinase-phosphatase stochastic chemical reactions. For a class of such networks, we develop a general field-theoretic approach to calculate the error in signal transmission as a function of an appropriate control variable. Application of the theory to a simple push-pull network, a module in the kinase-phosphatase cascade, recovers the exact results for error in signal transmission previously obtained using umbral calculus [Hinczewski and Thirumalai, Phys. Rev. X 4, 041017 (2014), 10.1103/PhysRevX.4.041017]. We illustrate the generality of the theory by studying the minimal errors in noise reduction in a reaction cascade with two connected push-pull modules. Such a cascade behaves as an effective three-species network with a pseudointermediate. In this case, optimal information transfer, resulting in the smallest square of the error between the input and output, occurs with a time delay, which is given by the inverse of the decay rate of the pseudointermediate. Surprisingly, in these examples the minimum error computed using simulations that take nonlinearities and discrete nature of molecules into account coincides with the predictions of a linear theory. In contrast, there are substantial deviations between simulations and predictions of the linear theory in error in signal propagation in an enzymatic push-pull network for a certain range of parameters. Inclusion of second-order perturbative corrections shows that differences between simulations and theoretical predictions are minimized. Our study establishes that a field theoretic formulation of stochastic biological signaling offers a systematic way to understand error propagation in networks of arbitrary complexity.

  18. Accounting for measurement error: a critical but often overlooked process.

    PubMed

    Harris, Edward F; Smith, Richard N

    2009-12-01

    Due to instrument imprecision and human inconsistencies, measurements are not free of error. Technical error of measurement (TEM) is the variability encountered between dimensions when the same specimens are measured at multiple sessions. A goal of a data collection regimen is to minimise TEM. The few studies that actually quantify TEM, regardless of discipline, report that it is substantial and can affect results and inferences. This paper reviews some statistical approaches for identifying and controlling TEM. Statistically, TEM is part of the residual ('unexplained') variance in a statistical test, so accounting for TEM, which requires repeated measurements, enhances the chances of finding a statistically significant difference if one exists. The aim of this paper was to review and discuss common statistical designs relating to types of error and statistical approaches to error accountability. This paper addresses issues of landmark location, validity, technical and systematic error, analysis of variance, scaled measures and correlation coefficients in order to guide the reader towards correct identification of true experimental differences. Researchers commonly infer characteristics about populations from comparatively restricted study samples. Most inferences are statistical and, aside from concerns about adequate accounting for known sources of variation with the research design, an important source of variability is measurement error. Variability in locating landmarks that define variables is obvious in odontometrics, cephalometrics and anthropometry, but the same concerns about measurement accuracy and precision extend to all disciplines. With increasing accessibility to computer-assisted methods of data collection, the ease of incorporating repeated measures into statistical designs has improved. Accounting for this technical source of variation increases the chance of finding biologically true differences when they exist.
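
    TEM has a standard computational form (Dahlberg's formula). A minimal Python sketch, assuming paired measurements of the same specimens from two sessions:

        import numpy as np

        def tem(x1, x2):
            # Technical error of measurement (Dahlberg's formula) from paired
            # repeat measurements: TEM = sqrt( sum(d_i^2) / (2n) ), where d_i
            # is the difference between the two measurements of specimen i.
            d = np.asarray(x1, dtype=float) - np.asarray(x2, dtype=float)
            return np.sqrt(np.sum(d ** 2) / (2.0 * d.size))

        def relative_tem(x1, x2):
            # %TEM expresses the error relative to the grand mean of all
            # measurements, allowing comparison across dimensions of different size.
            grand_mean = np.mean(np.concatenate([np.asarray(x1, dtype=float),
                                                 np.asarray(x2, dtype=float)]))
            return 100.0 * tem(x1, x2) / grand_mean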

  19. GREAT3 results - I. Systematic errors in shear estimation and the impact of real galaxy morphology

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mandelbaum, R.; Rowe, B.; Armstrong, R.

    2015-05-01

    We present first results from the third GRavitational lEnsing Accuracy Testing (GREAT3) challenge, the third in a sequence of challenges for testing methods of inferring weak gravitational lensing shear distortions from simulated galaxy images. GREAT3 was divided into experiments to test three specific questions, and included simulated space- and ground-based data with constant or cosmologically varying shear fields. The simplest (control) experiment included parametric galaxies with a realistic distribution of signal-to-noise, size, and ellipticity, and a complex point spread function (PSF). The other experiments tested the additional impact of realistic galaxy morphology, multiple exposure imaging, and the uncertainty about a spatially varying PSF; the last two questions will be explored in Paper II. The 24 participating teams competed to estimate lensing shears to within systematic error tolerances for upcoming Stage-IV dark energy surveys, making 1525 submissions overall. GREAT3 saw considerable variety and innovation in the types of methods applied. Several teams now meet or exceed the targets in many of the tests conducted (to within the statistical errors). We conclude that the presence of realistic galaxy morphology in simulations changes shear calibration biases by ~1 per cent for a wide range of methods. Other effects such as truncation biases due to finite galaxy postage stamps, and the impact of galaxy type as measured by the Sérsic index, are quantified for the first time. Our results generalize previous studies regarding sensitivities to galaxy size and signal-to-noise, and to PSF properties such as seeing and defocus. Almost all methods' results support the simple model in which additive shear biases depend linearly on PSF ellipticity.

  20. GREAT3 results - I. Systematic errors in shear estimation and the impact of real galaxy morphology

    DOE PAGES

    Mandelbaum, Rachel; Rowe, Barnaby; Armstrong, Robert; ...

    2015-05-11

    This study presents first results from the third GRavitational lEnsing Accuracy Testing (GREAT3) challenge, the third in a sequence of challenges for testing methods of inferring weak gravitational lensing shear distortions from simulated galaxy images. GREAT3 was divided into experiments to test three specific questions, and included simulated space- and ground-based data with constant or cosmologically varying shear fields. The simplest (control) experiment included parametric galaxies with a realistic distribution of signal-to-noise, size, and ellipticity, and a complex point spread function (PSF). The other experiments tested the additional impact of realistic galaxy morphology, multiple exposure imaging, and the uncertainty about a spatially varying PSF; the last two questions will be explored in Paper II. The 24 participating teams competed to estimate lensing shears to within systematic error tolerances for upcoming Stage-IV dark energy surveys, making 1525 submissions overall. GREAT3 saw considerable variety and innovation in the types of methods applied. Several teams now meet or exceed the targets in many of the tests conducted (to within the statistical errors). We conclude that the presence of realistic galaxy morphology in simulations changes shear calibration biases by ~1 per cent for a wide range of methods. Other effects such as truncation biases due to finite galaxy postage stamps, and the impact of galaxy type as measured by the Sérsic index, are quantified for the first time. Our results generalize previous studies regarding sensitivities to galaxy size and signal-to-noise, and to PSF properties such as seeing and defocus. Almost all methods' results support the simple model in which additive shear biases depend linearly on PSF ellipticity.

  1. Assessment of radargrammetric DSMs from TerraSAR-X Stripmap images in a mountainous relief area of the Amazon region

    NASA Astrophysics Data System (ADS)

    de Oliveira, Cleber Gonzales; Paradella, Waldir Renato; da Silva, Arnaldo de Queiroz

    The Brazilian Amazon is a vast territory with an enormous need for mapping and monitoring of renewable and non-renewable resources. Due to adverse environmental conditions (rain, cloud, dense vegetation) and difficult access, topographic information is still poor, and where available it needs to be updated or re-mapped. In this paper, the feasibility of using Digital Surface Models (DSMs) extracted from TerraSAR-X Stripmap stereo-pair images for detailed topographic mapping was investigated for a mountainous area in the Carajás Mineral Province, located on the easternmost border of the Brazilian Amazon. The quality of the radargrammetric DSMs was evaluated against field altimetric measurements. Precise topographic field information acquired with a Global Positioning System (GPS) was used as Ground Control Points (GCPs) for the modeling of the stereoscopic DSMs and as Independent Check Points (ICPs) for the calculation of elevation accuracies. The analysis was performed in two ways: (1) the use of Root Mean Square Error (RMSE) and (2) calculations of systematic error (bias) and precision. The test for significant systematic error was based on the Student's t distribution and the test of precision was based on the Chi-squared distribution. The investigation showed that the accuracy of the TerraSAR-X Stripmap DSMs met the requirements for a 1:50,000 map (Class A) as required by the Brazilian Standard for Cartographic Accuracy. Thus, the use of TerraSAR-X Stripmap images can be considered a promising alternative for detailed topographic mapping in similar environments of the Amazon region, where available topographic information is rare or of low quality.
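
    The accuracy checks named above have standard forms; a minimal Python sketch (function and variable names are illustrative, not from the paper):

        import numpy as np
        from scipy import stats

        def dsm_accuracy(dsm_heights, gps_heights):
            # Elevation errors at independent check points (ICPs).
            e = np.asarray(dsm_heights, dtype=float) - np.asarray(gps_heights, dtype=float)
            rmse = np.sqrt(np.mean(e ** 2))
            # Student's t test for systematic error (bias): is the mean error zero?
            t_stat, p_value = stats.ttest_1samp(e, 0.0)
            return rmse, float(e.mean()), float(t_stat), float(p_value)

    A precision test against a tolerance variance would use the chi-squared statistic (n - 1) * s² / σ₀², as the abstract indicates.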

  2. Preparatory studies for the WFIRST supernova cosmology measurements

    NASA Astrophysics Data System (ADS)

    Perlmutter, Saul

    In the context of the WFIRST-AFTA Science Definition Team we developed a first version of a supernova program, described in the WFIRST-AFTA SDT report. This program uses the imager to discover supernova candidates and an Integral Field Spectrograph (IFS) to obtain spectrophotometric light curves and higher signal-to-noise spectra of the supernovae near peak, to better characterize the supernovae and thus minimize systematic errors. While this program was judged robust, and the estimates of the sensitivity to the cosmological parameters were felt to be reliable, time limitations clearly constrained the depth of the analysis on a number of issues. The goal of this proposal is to further develop this program and refine the estimates of the sensitivities to the cosmological parameters using more sophisticated systematic uncertainty models and covariance error matrices that fold in more realistic data concerning observed populations of SNe Ia as well as more realistic instrument models. We propose to develop analysis algorithms and approaches that are needed to build, optimize, and refine the WFIRST instrument and program requirements to accomplish the best supernova cosmology measurements possible. We plan to address the following: a) Use realistic supernova populations, subclasses and population drift. One bothersome uncertainty with the supernova technique is the possibility of population drift with redshift. We are in a unique position to characterize and mitigate such effects using the spectrophotometric time series of real Type Ia supernovae from the Nearby Supernova Factory (SNfactory). Each supernova in this sample has global galaxy measurements as well as additional local environment information derived from the IFS spectroscopy. We plan to develop methods of coping with this issue, e.g., by selecting similar subsamples of supernovae and allowing additional model flexibility, in order to reduce systematic uncertainties. These studies will allow us to tune details of the WFIRST IFS, like the wavelength coverage and S/N requirements, to capitalize on these systematic error reduction methods. b) Supernova extraction and host galaxy subtraction. The underlying light of the host galaxy must be subtracted from the supernova images making up the light curves. Using the IFS to provide the light-curve points via spectrophotometry requires the subtraction of a reference spectrum of the galaxy taken after the supernova light has faded to a negligible level. We plan to apply the expertise obtained from the SNfactory to develop galaxy background procedures that minimize the systematic errors introduced by this step in the analysis. c) Instrument calibration and ground-to-space cross calibration. Calibrating the entire supernova sample will be a challenge, as no standard stars exist that span the range of magnitudes and wavelengths relevant to the WFIRST survey. Linking the supernova measurements to the relatively brighter standards will require several links. WFIRST will produce the high-redshift sample, but the nearby supernovae that anchor the Hubble diagram will have to come from ground-based observations. Developing algorithms to carry out the cross calibration of these two samples to the required one percent level will be an important goal of our proposal. An integral part of this calibration will be to remove all instrumental signatures and to develop unbiased measurement techniques starting at the pixel level.
We then plan to pull the above studies together in a synthesis to produce a correlated error matrix. We plan to develop a Fisher-matrix-based model to evaluate the correlated error matrix due to the various systematic errors discussed above. A realistic error model will allow us to carry out more reliable estimates of the eventual errors on the measurement of the cosmological parameters, as well as serve as a means of optimizing and fine tuning the requirements for the instruments and survey strategies.
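
    The Fisher-matrix forecasting step has a compact generic form for a Gaussian likelihood; the sketch below is a standard textbook construction, not the proposal's model:

        import numpy as np

        def fisher_matrix(jacobian, data_cov):
            # F = J^T C^{-1} J, where J holds the derivatives of the model
            # observables with respect to the cosmological parameters and C is
            # the (correlated) data error matrix assembled from the systematic
            # error studies.
            cinv = np.linalg.inv(data_cov)
            return jacobian.T @ cinv @ jacobian

        # Forecast parameter covariance (Cramer-Rao bound):
        # param_cov = np.linalg.inv(fisher_matrix(J, C))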

  3. How scientific experiments are designed: Problem solving in a knowledge-rich, error-rich environment

    NASA Astrophysics Data System (ADS)

    Baker, Lisa M.

    While theory formation and the relation between theory and data has been investigated in many studies of scientific reasoning, researchers have focused less attention on reasoning about experimental design, even though the experimental design process makes up a large part of real-world scientists' reasoning. The goal of this thesis was to provide a cognitive account of the scientific experimental design process by analyzing experimental design as problem-solving behavior (Newell & Simon, 1972). Three specific issues were addressed: the effect of potential error on experimental design strategies, the role of prior knowledge in experimental design, and the effect of characteristics of the space of alternate hypotheses on alternate hypothesis testing. A two-pronged in vivo/in vitro research methodology was employed, in which transcripts of real-world scientific laboratory meetings were analyzed as well as undergraduate science and non-science majors' design of biology experiments in the psychology laboratory. It was found that scientists use a specific strategy to deal with the possibility of error in experimental findings: they include "known" control conditions in their experimental designs both to determine whether error is occurring and to identify sources of error. The known controls strategy had not been reported in earlier studies with science-like tasks, in which participants' responses to error had consisted of replicating experiments and discounting results. With respect to prior knowledge: scientists and undergraduate students drew on several types of knowledge when designing experiments, including theoretical knowledge, domain-specific knowledge of experimental techniques, and domain-general knowledge of experimental design strategies. Finally, undergraduate science students generated and tested alternates to their favored hypotheses when the space of alternate hypotheses was constrained and searchable. This result may help explain findings of confirmation bias in earlier studies using science-like tasks, in which characteristics of the alternate hypothesis space may have made it unfeasible for participants to generate and test alternate hypotheses. In general, scientists and science undergraduates were found to engage in a systematic experimental design process that responded to salient features of the problem environment, including the constant potential for experimental error, availability of alternate hypotheses, and access to both theoretical knowledge and knowledge of experimental techniques.

  4. Constituent quarks and systematic errors in mid-rapidity charged multiplicity (dNch/dη) distributions

    NASA Astrophysics Data System (ADS)

    Tannenbaum, Michael

    2017-01-01

    Although it was demonstrated more than 13 years ago that the increase in midrapidity dNch/dη with increasing centrality of Au+Au collisions at RHIC was linearly proportional to the number of constituent quark participants (or "wounded quarks", QW) in the collision, it was only in the last few years that generating the spatial positions of the three quarks in a nucleon according to the Fourier transform of the measured electric charge form factor of the proton could be used to connect dNch/dη/QW as a function of centrality in p(d)+A and A+A collisions with the same value of dNch/dη/QW determined in p+p collisions. One calculation, which only compared its calculated dNch/dη/QW in p+p at √s_NN = 200 GeV to the least central of 12 centrality-bin measurements in Au+Au by PHENIX, claimed that the p+p value was higher by "about 30%" than the band of measurements vs. centrality. However, the clearly quoted systematic errors were ignored, for which a one-standard-deviation systematic shift would move all 12 Au+Au data points to within 1.3 standard deviations of the p+p value; if the statistical and systematic errors are added in quadrature, the difference is 35 ± 21%. Research supported by U.S. Department of Energy, Contract No. DE-SC0012704.

  5. Orbit error characteristic and distribution of TLE using CHAMP orbit data

    NASA Astrophysics Data System (ADS)

    Xu, Xiao-li; Xiong, Yong-qing

    2018-02-01

    Space object orbital covariance data are required for collision risk assessments, but publicly accessible two-line element (TLE) data do not provide orbital error information. This paper compared historical TLE data and GPS precision ephemerides of CHAMP to assess TLE orbit accuracy from 2002 to 2008, inclusive. TLE error spatial variations with longitude and latitude were calculated to analyze error characteristics and distribution. The results indicate that TLE orbit data are systematically biased owing to the limitations of the SGP4 model. The biases can reach the level of kilometers, and their sign and magnitude correlate significantly with longitude.
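
    As a small illustrative sketch of such a spatial error analysis (names and bin width are assumptions, not from the paper):

        import numpy as np

        def binned_error_stats(errors_km, longitudes_deg, bin_width_deg=30.0):
            # Mean and standard deviation of TLE position errors per longitude
            # bin, exposing spatial structure (systematic bias) in the errors.
            errors_km = np.asarray(errors_km, dtype=float)
            longitudes_deg = np.asarray(longitudes_deg, dtype=float)
            edges = np.arange(-180.0, 180.0 + bin_width_deg, bin_width_deg)
            idx = np.digitize(longitudes_deg, edges) - 1
            out = []
            for i in range(len(edges) - 1):
                sel = errors_km[idx == i]
                if sel.size > 1:
                    out.append((edges[i], float(sel.mean()), float(sel.std(ddof=1))))
            return out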

  6. Accuracy of non-resonant laser-induced thermal acoustics (LITA) in a convergent-divergent nozzle flow

    NASA Astrophysics Data System (ADS)

    Richter, J.; Mayer, J.; Weigand, B.

    2018-02-01

    Non-resonant laser-induced thermal acoustics (LITA) was applied to measure Mach number, temperature and turbulence level along the centerline of a transonic nozzle flow. The accuracy of the measurement results was systematically studied with respect to misalignment of the interrogation beam and to the frequency analysis of the LITA signals. 2D steady-state Reynolds-averaged Navier-Stokes (RANS) simulations were performed for reference. The simulations were conducted using ANSYS CFX 18 employing the shear-stress transport turbulence model. Post-processing of the LITA signals is performed by applying a discrete Fourier transform (DFT) to determine the beat frequencies. It is shown that the systematic error of the DFT, which depends on the number of oscillations, signal chirp, and damping rate, is less than 1.5% for our experiments, resulting in an average error of 1.9% for Mach number. Further, the maximum calibration error is investigated for a worst-case scenario involving maximum in situ readjustment of the interrogation beam within the limits of constructive interference. It is shown that the signal intensity becomes zero if the interrogation angle is altered by 2%. This, together with the accuracy of the frequency analysis, results in an error of about 5.4% for temperature throughout the nozzle. Comparison with numerical results shows good agreement within the error bars.
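
    As an illustration of the frequency-analysis step, the sketch below (plain Python with NumPy; the sampling rate, beat frequency and damping time are invented, not taken from the experiment) extracts the beat frequency of a synthetic damped LITA-like signal from the peak of its DFT magnitude spectrum.

      import numpy as np

      fs = 5e9                      # sampling rate, Hz (assumed)
      t = np.arange(2048) / fs
      f_beat, tau = 190e6, 150e-9   # assumed beat frequency and damping time
      signal = np.exp(-t / tau) * np.sin(2 * np.pi * f_beat * t)

      # DFT magnitude spectrum; the beat frequency is taken as the peak bin.
      spectrum = np.abs(np.fft.rfft(signal))
      freqs = np.fft.rfftfreq(signal.size, d=1 / fs)
      f_est = freqs[np.argmax(spectrum[1:]) + 1]   # skip the DC bin
      print(f"estimated beat frequency: {f_est / 1e6:.1f} MHz")

    The finite frequency resolution fs/N and the damping-induced line broadening visible in such a spectrum are exactly the kind of systematic DFT errors the authors quantify.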

  7. Optical truss and retroreflector modeling for picometer laser metrology

    NASA Astrophysics Data System (ADS)

    Hines, Braden E.

    1993-09-01

    Space-based astrometric interferometer concepts typically have a requirement for the measurement of the internal dimensions of the instrument to accuracies in the picometer range. While this level of resolution has already been achieved for certain special types of laser gauges, techniques for picometer-level accuracy need to be developed for all the various kinds of laser gauges needed for space-based interferometers. Systematic errors due to retroreflector imperfections become important as soon as the retroreflector is allowed either to translate in position or to articulate in angle away from its nominal zero-point. Also, when several laser interferometers are combined to form a three-dimensional laser gauge (a laser optical truss), systematic errors due to imperfect knowledge of the truss geometry become important as the retroreflector translates away from its nominal zero-point. In order to assess the astrometric performance of a proposed instrument, it is necessary to determine how the effects of an imperfect laser metrology system impact the astrometric accuracy. This paper shows the development of an error propagation model from errors in the 1-D metrology measurements through to their impact on the overall astrometric accuracy for OSI. Simulations based on this model are then presented, which were used to define a multiplier giving the 1-D metrology accuracy required to produce a given amount of fringe position error.
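
    The propagation step itself is standard linear error analysis: if the 3-D quantity of interest depends, to first order, on the 1-D gauge readings through a design matrix G, the measurement covariance maps through the least-squares solution. The sketch below (plain Python with NumPy; the three-gauge geometry and noise level are invented and are not the OSI layout) computes such a multiplier numerically.

      import numpy as np

      # Unit vectors of three 1-D metrology gauges viewing one retroreflector
      # (assumed geometry for illustration only).
      G = np.array([[1.0, 0.0, 0.0],
                    [0.0, 1.0, 0.0],
                    [0.6, 0.6, 0.53]])
      G /= np.linalg.norm(G, axis=1, keepdims=True)

      sigma_gauge = 10e-12   # 10 pm per 1-D gauge reading (assumed)
      # Least-squares recovery of the 3-D position from the 1-D readings:
      # x = (G^T G)^-1 G^T d, so Cov(x) = sigma^2 (G^T G)^-1.
      cov_x = sigma_gauge**2 * np.linalg.inv(G.T @ G)
      multiplier = np.sqrt(np.trace(cov_x)) / sigma_gauge
      print(f"3-D position error = {multiplier:.2f} x 1-D gauge error")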

  8. Systematic Calibration for Ultra-High Accuracy Inertial Measurement Units.

    PubMed

    Cai, Qingzhong; Yang, Gongliu; Song, Ningfang; Liu, Yiliang

    2016-06-22

    An inertial navigation system (INS) has been widely used in challenging GPS environments. With the rapid development of modern physics, an atomic gyroscope will come into use in the near future with a predicted accuracy of 5 × 10^-6 °/h or better. However, existing calibration methods and devices cannot satisfy the accuracy requirements of future ultra-high accuracy inertial sensors. In this paper, an improved calibration model is established by introducing gyro g-sensitivity errors, accelerometer cross-coupling errors and lever arm errors. A systematic calibration method is proposed based on a 51-state Kalman filter and smoother. Simulation results show that the proposed calibration method can realize the estimation of all the parameters using a common dual-axis turntable. Laboratory and sailing tests prove that the position accuracy over five days of inertial navigation can be improved by about 8% by the proposed calibration method. The accuracy can be improved by at least 20% when the position accuracy of the atomic gyro INS reaches a level of 0.1 nautical miles/5 d. Compared with the existing calibration methods, the proposed method, with more error sources and higher-order small error parameters calibrated for ultra-high accuracy inertial measurement units (IMUs) using common turntables, has great application potential in future atomic gyro INSs.
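
    The 51-state filter itself is beyond a short sketch, but the underlying mechanism, estimating slowly varying calibration parameters as augmented states of a Kalman filter, can be shown with a one-state toy version (plain Python with NumPy; the bias value and noise level are invented): a constant gyro bias is estimated from noisy rate observations.

      import numpy as np

      rng = np.random.default_rng(0)
      true_bias = 0.02        # deg/h, the parameter to calibrate (assumed)
      r_var = 0.1**2          # measurement noise variance (assumed)

      x, p = 0.0, 1.0         # state estimate (bias) and its variance
      for _ in range(500):
          z = true_bias + rng.normal(0.0, np.sqrt(r_var))  # bias observation
          # Constant-state model: the predict step leaves x and p unchanged.
          k = p / (p + r_var)   # Kalman gain
          x += k * (z - x)      # measurement update
          p *= (1.0 - k)
      print(f"estimated bias: {x:.4f} deg/h, std: {np.sqrt(p):.4f}")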

  9. Deep-space navigation with differenced data types. Part 3: An expanded information content and sensitivity analysis

    NASA Technical Reports Server (NTRS)

    Estefan, J. A.; Thurman, S. W.

    1992-01-01

    An approximate six-parameter analytic model for Earth-based differenced range measurements is presented and is used to derive a representative analytic approximation for differenced Doppler measurements. The analytic models are used to investigate the ability of these data types to estimate spacecraft geocentric angular motion, Deep Space Network station oscillator (clock/frequency) offsets, and signal-path calibration errors over a period of a few days, in the presence of systematic station location and transmission media calibration errors. Quantitative results indicate that a few differenced Doppler plus ranging passes yield angular position estimates with a precision on the order of 0.1 to 0.4 μrad, and angular rate precision on the order of 10 to 25 × 10^-12 rad/sec, assuming no a priori information on the coordinate parameters. Sensitivity analyses suggest that troposphere zenith delay calibration error is the dominant systematic error source in most of the tracking scenarios investigated; as expected, the differenced Doppler data were found to be much more sensitive to troposphere calibration errors than differenced range. By comparison, results computed using wideband and narrowband (delta) VLBI under similar circumstances yielded angular precisions of 0.07 to 0.4 μrad, and angular rate precisions of 0.5 to 1.0 × 10^-12 rad/sec.

  11. Afocal optical flow sensor for reducing vertical height sensitivity in indoor robot localization and navigation.

    PubMed

    Yi, Dong-Hoon; Lee, Tae-Jae; Cho, Dong-Il Dan

    2015-05-13

    This paper introduces a novel afocal optical flow sensor (OFS) system for odometry estimation in indoor robotic navigation. The OFS used in computer optical mice has been adopted for mobile robots because it is not affected by wheel slippage. Vertical height variance is thought to be a dominant factor in the systematic error of moving-distance estimates for mobile robots driving on uneven surfaces. We propose an approach to mitigate this error by using an afocal (infinite effective focal length) system. We conducted experiments in a linear guide on carpet and three other materials, with sensor heights varying from 30 to 50 mm and a moving distance of 80 cm. The same experiments were repeated 10 times. For the proposed afocal OFS module, a 1 mm change in sensor height induces a 0.1% systematic error; for comparison, the error for a conventional fixed-focal-length OFS module is 14.7%. Finally, the proposed afocal OFS module was installed on a mobile robot and tested 10 times on a carpet for distances of 1 m. The average distance estimation error and standard deviation are 0.02% and 17.6%, respectively, whereas those for a conventional OFS module are 4.09% and 25.7%, respectively.

  12. Effect of Body Mass Index on Magnitude of Setup Errors in Patients Treated With Adjuvant Radiotherapy for Endometrial Cancer With Daily Image Guidance

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lin, Lilie L., E-mail: lin@uphs.upenn.edu; Hertan, Lauren; Rengan, Ramesh

    2012-06-01

    Purpose: To determine the impact of body mass index (BMI) on daily setup variations and the frequency of imaging necessary for patients with endometrial cancer treated with adjuvant intensity-modulated radiotherapy (IMRT) with daily image guidance. Methods and Materials: The daily shifts from a total of 782 orthogonal kilovoltage images from 30 patients who received pelvic IMRT between July 2008 and August 2010 were analyzed. The BMI, mean daily shifts, and random and systematic errors in each translational and rotational direction were calculated for each patient. Margin recipes were generated based on BMI. Linear regression and Spearman rank correlation analyses were performed. To simulate a less-than-daily IGRT protocol, the average shift of the first five fractions was applied to subsequent setups without IGRT to assess the impact on setup error and margin requirements. Results: Median BMI was 32.9 (range, 23-62). Of the 30 patients, 16.7% (n = 5) were normal weight (BMI <25); 23.3% (n = 7) were overweight (BMI ≥25 to <30); 26.7% (n = 8) were mildly obese (BMI ≥30 to <35); and 33.3% (n = 10) were moderately to severely obese (BMI ≥35). On linear regression, mean absolute vertical, longitudinal, and lateral shifts positively correlated with BMI (p = 0.0127, p = 0.0037, and p < 0.0001, respectively). Systematic errors in the longitudinal and vertical directions were positively correlated with BMI category (p < 0.0001 for both). IGRT for the first five fractions, followed by correction of the mean error for all subsequent fractions, led to a substantial reduction in setup error and the resultant margin requirement compared with no IGRT. Conclusions: Daily shifts, systematic errors, and margin requirements were greatest in obese patients. For women who are normal weight or overweight, a planning target margin of 7 to 10 mm may be sufficient without IGRT, but for patients who are moderately or severely obese, this is insufficient.
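
    A widely used margin recipe of the kind referred to above is the van Herk formula, M = 2.5Σ + 0.7σ, where Σ is the systematic and σ the random component of setup error (the abstract does not say which recipe the authors used, so this is an assumption). A minimal sketch in Python, with invented per-BMI-group error values:

      def van_herk_margin(sigma_systematic_mm, sigma_random_mm):
          # PTV margin recipe: M = 2.5*Sigma + 0.7*sigma (van Herk et al.)
          return 2.5 * sigma_systematic_mm + 0.7 * sigma_random_mm

      # Illustrative (not measured) setup errors by BMI group, in mm:
      groups = {"normal": (2.0, 2.5), "overweight": (2.5, 3.0),
                "obese": (4.0, 4.5)}
      for name, (sys_err, rand_err) in groups.items():
          print(f"{name:>10}: margin = "
                f"{van_herk_margin(sys_err, rand_err):.1f} mm")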

  13. Systematic effects on dark energy from 3D weak shear

    NASA Astrophysics Data System (ADS)

    Kitching, T. D.; Taylor, A. N.; Heavens, A. F.

    2008-09-01

    We present an investigation into the potential effect of systematics inherent in multiband wide-field surveys on the dark energy equation-of-state determination for two 3D weak lensing methods. The weak lensing methods are a geometric shear-ratio method and 3D cosmic shear. The analysis here uses an extension of the Fisher matrix framework to jointly include photometric redshift systematics, shear distortion systematics and intrinsic alignments. Using analytic parametrizations of these three primary systematic effects allows systematic parameters of particular importance to be isolated. We show that assuming systematic parameters are fixed, but possibly biased, results in potentially large biases in dark energy parameters. We quantify any potential bias by defining a Bias Figure of Merit. By marginalizing over extra systematic parameters, such biases are negated at the expense of an increase in the cosmological parameter errors. We show the effect on the dark energy Figure of Merit of marginalizing over each systematic parameter individually. We also show the overall reduction in the Figure of Merit due to all three types of systematic effects. Based on assumptions about the likely level of systematic errors, we find that the largest effect on the Figure of Merit comes from uncertainty in the photometric redshift systematic parameters. These can reduce the Figure of Merit by up to a factor of 2 to 4 in both 3D weak lensing methods if no informative prior on the systematic parameters is applied. Shear distortion systematics have a smaller overall effect. Intrinsic alignment effects can reduce the Figure of Merit by up to a further factor of 2. This, however, is a worst-case scenario within the assumptions of the parametrizations used. By including prior information on systematic parameters, the Figure of Merit can be recovered to a large extent, and combined constraints from 3D cosmic shear and shear ratio are robust to systematics. We conclude that, as a rule of thumb, given a realistic current understanding of intrinsic alignments and photometric redshifts, including all three primary systematic effects reduces the Figure of Merit by at most a factor of 2.
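
    The bias mechanism described here has a compact standard form in the Fisher formalism (sketched from the general forecasting literature; the notation is generic and not necessarily that of the paper). If the systematic parameters ψ are held fixed in the analysis but offset from their true values by δψ, the induced first-order bias on the cosmological parameters θ is

      b_{\theta_i} \simeq \left( F_{\theta\theta}^{-1} \right)_{ij} \left( F_{\theta\psi} \right)_{jk} \, \delta\psi_k ,

    where F_{θθ} is the cosmological block of the Fisher matrix and F_{θψ} the cross block between cosmological and systematic parameters. Marginalizing over ψ instead removes this bias at the cost of the larger errors given by the full inverse Fisher matrix, which is the trade-off described in the abstract.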

  14. Cognitive motor intervention for gait and balance in Parkinson's disease: systematic review and meta-analysis.

    PubMed

    Wang, Xue-Qiang; Pi, Yan-Ling; Chen, Bing-Lin; Wang, Ru; Li, Xin; Chen, Pei-Jie

    2016-02-01

    We performed a systematic review and meta-analysis to assess the effect of cognitive motor intervention (CMI) on gait and balance in Parkinson's disease. PubMed, Embase, the Cochrane Library, CINAHL, Web of Science, PEDro, and the China Biology Medicine disc were searched. We included randomized controlled trials (RCTs) and non-RCTs. Two reviewers independently evaluated articles for eligibility and quality and serially abstracted data. A standardized mean difference ± standard error and 95% confidence interval (CI) were calculated for each study using Hedges' g to quantify the treatment effect. Nine trials with 181 subjects were included: four randomized controlled trials and five single-group intervention studies. The pooled analysis revealed that cognitive motor intervention can improve gait speed (Hedges' g = 0.643 ± 0.191; 95% CI: 0.269 to 1.017, P = 0.001), stride time (Hedges' g = -0.536 ± 0.167; 95% CI: -0.862 to -0.209, P = 0.001), the Berg Balance Scale (Hedges' g = 0.783 ± 0.289; 95% CI: 0.218 to 1.349, P = 0.007), and the Unipedal Stance Test (Hedges' g = 0.440 ± 0.189; 95% CI: 0.07 to 0.81, P = 0.02). The systematic review demonstrates that cognitive motor intervention is effective for gait and balance in Parkinson's disease. However, the paper is limited by the quality of the included trials. © The Author(s) 2015.
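
    For reference, Hedges' g is Cohen's d rescaled by a small-sample correction factor. A minimal sketch (plain Python; the two groups of numbers are invented, not taken from the included trials):

      import math

      def hedges_g(m1, s1, n1, m2, s2, n2):
          # Pooled standard deviation across the two groups
          s_pooled = math.sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2)
                               / (n1 + n2 - 2))
          d = (m1 - m2) / s_pooled                      # Cohen's d
          j = 1.0 - 3.0 / (4.0 * (n1 + n2 - 2) - 1.0)   # small-sample correction
          return j * d

      # Invented example: gait speed (m/s) after vs. before intervention
      print(f"g = {hedges_g(1.10, 0.20, 25, 0.98, 0.22, 25):.3f}")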

  15. Integrated Sachs-Wolfe map reconstruction in the presence of systematic errors

    NASA Astrophysics Data System (ADS)

    Weaverdyck, Noah; Muir, Jessica; Huterer, Dragan

    2018-02-01

    The decay of gravitational potentials in the presence of dark energy leads to an additional, late-time contribution to anisotropies in the cosmic microwave background (CMB) at large angular scales. The imprint of this so-called integrated Sachs-Wolfe (ISW) effect on the CMB angular power spectrum has been detected and studied in detail, but reconstructing its spatial contributions to the CMB map, which would offer the tantalizing possibility of separating the early- from the late-time contributions to CMB temperature fluctuations, is more challenging. Here, we study a technique for reconstructing the ISW map from information in galaxy surveys and focus in particular on how its accuracy is affected by the presence of photometric calibration errors in the input galaxy maps, which were previously found to be a dominant contaminant for ISW signal estimation. We find that both including tomographic information from a single survey and using data from multiple, complementary galaxy surveys improve the reconstruction by mitigating the impact of spurious power contributions from calibration errors. A high-fidelity reconstruction further requires one to account for the contribution of calibration errors to the observed galaxy power spectrum in the model used to construct the ISW estimator. We find that if the photometric calibration errors in galaxy surveys can be independently controlled at the level required to obtain unbiased dark energy constraints, then it is possible to reconstruct ISW maps with excellent accuracy using a combination of maps from two galaxy surveys with properties similar to Euclid and SPHEREx.

  16. Perceptions of Randomness: Why Three Heads Are Better than Four

    ERIC Educational Resources Information Center

    Hahn, Ulrike; Warren, Paul A.

    2009-01-01

    A long tradition of psychological research has lamented the systematic errors and biases in people's perception of the characteristics of sequences generated by a random mechanism such as a coin toss. It is proposed that once the likely nature of people's actual experience of such processes is taken into account, these "errors" and "biases"…

  17. Les systemes approximatifs et l'enseignement des langues secondes (Approximative Systems and the Teaching of Second Languages).

    ERIC Educational Resources Information Center

    High, Virginia Lacastro

    Errors can be considered concrete representations of stages through which one must go in order to acquire one's native language and a second language. It has been discovered that certain errors appear systematically, revealing an approximate system, or "interlanguage," behind the erroneous utterances. Present research in second language…

  18. A systematic framework for Monte Carlo simulation of remote sensing errors map in carbon assessments

    Treesearch

    S. Healey; P. Patterson; S. Urbanski

    2014-01-01

    Remotely sensed observations can provide a unique perspective on how management and natural disturbance affect carbon stocks in forests. However, integration of these observations into formal decision support will rely upon improved uncertainty accounting. Monte Carlo (MC) simulations offer a practical, empirical method of accounting for potential remote sensing errors...
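
    The MC idea here is generic enough to sketch: perturb the remotely sensed input with draws from its error model and read the spread of the downstream carbon estimate off the simulations. A toy version (plain Python with NumPy; the error model and the linear biomass relation are invented for illustration):

      import numpy as np

      rng = np.random.default_rng(1)
      ndvi_map = rng.uniform(0.3, 0.8, size=1000)   # invented remote sensing data

      def carbon_estimate(ndvi):
          # Invented linear relation between the index and carbon (Mg/ha)
          return 120.0 * ndvi + 10.0

      # Monte Carlo: re-estimate carbon under simulated observation errors
      totals = []
      for _ in range(2000):
          noisy = ndvi_map + rng.normal(0.0, 0.05, size=ndvi_map.size)
          totals.append(carbon_estimate(noisy).mean())
      print(f"mean carbon: {np.mean(totals):.1f} Mg/ha, "
            f"MC uncertainty: {np.std(totals):.2f} Mg/ha")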

  19. Pitch Error Analysis of Young Piano Students' Music Reading Performances

    ERIC Educational Resources Information Center

    Rut Gudmundsdottir, Helga

    2010-01-01

    This study analyzed the music reading performances of 6-13-year-old piano students (N = 35) in their second year of piano study. The stimuli consisted of three piano pieces, systematically constructed to vary in terms of left-hand complexity and input simultaneity. The music reading performances were recorded digitally and a code of error analysis…

  20. A Systematic Approach for Identifying Level-1 Error Covariance Structures in Latent Growth Modeling

    ERIC Educational Resources Information Center

    Ding, Cherng G.; Jane, Ten-Der; Wu, Chiu-Hui; Lin, Hang-Rung; Shen, Chih-Kang

    2017-01-01

    It has been pointed out in the literature that misspecification of the level-1 error covariance structure in latent growth modeling (LGM) has detrimental impacts on the inferences about growth parameters. Since correct covariance structure is difficult to specify by theory, the identification needs to rely on a specification search, which,…

  1. Why Is Rainfall Error Analysis Requisite for Data Assimilation and Climate Modeling?

    NASA Technical Reports Server (NTRS)

    Hou, Arthur Y.; Zhang, Sara Q.

    2004-01-01

    Given the large temporal and spatial variability of precipitation processes, errors in rainfall observations are difficult to quantify yet crucial to making effective use of rainfall data for improving atmospheric analysis, weather forecasting, and climate modeling. We highlight the need for developing a quantitative understanding of systematic and random errors in precipitation observations by examining explicit examples of how each type of errors can affect forecasts and analyses in global data assimilation. We characterize the error information needed from the precipitation measurement community and how it may be used to improve data usage within the general framework of analysis techniques, as well as accuracy requirements from the perspective of climate modeling and global data assimilation.

  2. Microdensitometer errors: Their effect on photometric data reduction

    NASA Technical Reports Server (NTRS)

    Bozyan, E. P.; Opal, C. B.

    1984-01-01

    The performance of densitometers used for photometric data reduction of high dynamic range electrographic plate material is analyzed. Densitometer repeatability is tested by comparing two scans of one plate. Internal densitometer errors are examined by constructing histograms of digitized densities and finding inoperative bits and differential nonlinearity in the analog-to-digital converter. Such problems appear common to the four densitometers used in this investigation and introduce systematic, algorithm-dependent errors in the results. Strategies to improve densitometer performance are suggested.
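
    The histogram check described is easy to reproduce: an inoperative (stuck) ADC bit shows up as a bit position whose value never changes across the digitized samples, and differential nonlinearity as uneven bin occupancy. A sketch (plain Python with NumPy; the 12-bit data and the stuck bit are simulated, not measured):

      import numpy as np

      rng = np.random.default_rng(2)
      codes = rng.integers(0, 4096, size=100_000)   # simulated 12-bit ADC output
      codes &= ~(1 << 3)                            # simulate bit 3 stuck at 0

      for bit in range(12):
          ones = np.mean((codes >> bit) & 1)
          if ones == 0.0 or ones == 1.0:
              print(f"bit {bit} inoperative (always {int(ones)})")

      # Differential nonlinearity: deviation of bin occupancy from uniform
      hist = np.bincount(codes, minlength=4096)
      dnl = hist / hist.mean() - 1.0
      print(f"worst-case apparent DNL: {np.max(np.abs(dnl)):.2f} LSB")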

  3. Angular sensitivities of scintillator slab configurations for location of gamma ray bursts

    NASA Technical Reports Server (NTRS)

    Gregory, J. C.

    1976-01-01

    Thin flat scintillator slabs are a useful means of measuring the angular location of gamma ray fluxes of astronomical interest. A statistical estimate of position error was made for two scintillator systems suitable for gamma ray burst location from a balloon or satellite platform. A single rotating scintillator with an associated flux monitor is compared with a pair of stationary orthogonal scintillators. The position error for a strong burst is of the order of a few arcmin if systematic errors are ignored.

  4. SSC Geopositional Assessment of the Advanced Wide Field Sensor

    NASA Technical Reports Server (NTRS)

    Ross, Kenton

    2006-01-01

    The geopositional accuracy of the standard geocorrected product from the Advanced Wide Field Sensor (AWiFS) was evaluated using digital orthophoto quarter quadrangles and other reference sources of similar accuracy. Images were analyzed from summer 2004 through spring 2005. Forty to fifty check points were collected manually per scene and analyzed to determine overall circular error, estimates of horizontal bias, and other systematic errors. Measured errors were somewhat higher than the specifications for the data, but they were consistent with the analysis of the distributing vendor.

  5. Uncertainties in climate data sets

    NASA Technical Reports Server (NTRS)

    Mcguirk, James P.

    1992-01-01

    Climate diagnostics are constructed either from analyzed fields or from observational data sets. Those that have been commonly used are normally considered ground truth. However, in most of these collections there exist errors and uncertainties which are generally ignored because of the consistency of usage over time. Examples of uncertainties and errors are described in NMC and ECMWF analyses and in satellite observational sets (OLR, TOVS, and SMMR). It is suggested that these errors can be large, systematic, and not negligible in climate analysis.

  6. Coherent errors in quantum error correction

    NASA Astrophysics Data System (ADS)

    Greenbaum, Daniel; Dutton, Zachary

    Analysis of quantum error correcting (QEC) codes is typically done using a stochastic, Pauli channel error model for describing the noise on physical qubits. However, it was recently found that coherent errors (systematic rotations) on physical data qubits result in both physical and logical error rates that differ significantly from those predicted by a Pauli model. We present analytic results for the logical error as a function of concatenation level and code distance for coherent errors under the repetition code. For data-only coherent errors, we find that the logical error is partially coherent and therefore non-Pauli. However, the coherent part of the error is negligible after two or more concatenation levels or at fewer than ε^{-(d-1)} error correction cycles. Here ε << 1 is the rotation angle error per cycle for a single physical qubit and d is the code distance. These results support the validity of modeling coherent errors using a Pauli channel under some minimum requirements for code distance and/or concatenation. We discuss extensions to imperfect syndrome extraction and implications for general QEC.
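
    The qualitative difference between coherent and Pauli (stochastic) errors is easy to see numerically: a systematic rotation by ε per cycle accumulates amplitude, so the error probability after N cycles grows like sin²(Nε) ≈ N²ε², whereas a stochastic error of probability ε² per cycle grows only like Nε². A sketch (plain Python with NumPy; a single unprotected qubit with an assumed ε, no error correction, purely illustrative):

      import numpy as np

      eps = 0.01   # systematic rotation-angle error per cycle (assumed)

      for n in (10, 100, 1000):
          coherent = np.sin(n * eps) ** 2   # amplitudes add coherently
          stochastic = n * eps**2           # error probabilities add
          print(f"N={n:4d}: coherent {coherent:.2e} "
                f"vs stochastic {stochastic:.2e}")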

  7. "Bed Side" Human Milk Analysis in the Neonatal Intensive Care Unit: A Systematic Review.

    PubMed

    Fusch, Gerhard; Kwan, Celia; Kotrri, Gynter; Fusch, Christoph

    2017-03-01

    Human milk analyzers can measure macronutrient content in native breast milk to tailor adequate supplementation with fortifiers. This article reviews all studies using milk analyzers, including (i) evaluation of the devices, (ii) the impact of different conditions on the macronutrient analysis of human milk, and (iii) clinical trials to improve growth. Results lack consistency, potentially due to systematic errors in the validation of the devices or to pre-analytical sample preparation errors such as in homogenization. It is crucial to introduce good laboratory and clinical practice when using these devices; otherwise non-validated clinical usage can severely affect the growth outcomes of infants. Copyright © 2016 Elsevier Inc. All rights reserved.

  8. Seeing in the dark - I. Multi-epoch alchemy

    NASA Astrophysics Data System (ADS)

    Huff, Eric M.; Hirata, Christopher M.; Mandelbaum, Rachel; Schlegel, David; Seljak, Uroš; Lupton, Robert H.

    2014-05-01

    Weak lensing by large-scale structure is an invaluable cosmological tool given that most of the energy density of the concordance cosmology is invisible. Several large ground-based imaging surveys will attempt to measure this effect over the coming decade, but reliable control of the spurious lensing signal introduced by atmospheric turbulence and telescope optics remains a challenging problem. We address this challenge with a demonstration that point spread function (PSF) effects on measured galaxy shapes in the Sloan Digital Sky Survey (SDSS) can be corrected with existing analysis techniques. In this work, we co-add existing SDSS imaging on the equatorial stripe in order to build a data set with the statistical power to measure cosmic shear, while using a rounding kernel method to null out the effects of the anisotropic PSF. We build a galaxy catalogue from the combined imaging, characterize its photometric properties and show that the spurious shear remaining in this catalogue after the PSF correction is negligible compared to the expected cosmic shear signal. We identify a new source of systematic error in the shear-shear autocorrelations arising from selection biases related to masking. Finally, we discuss the circumstances in which this method is expected to be useful for upcoming ground-based surveys that have lensing as one of the science goals, and identify the systematic errors that can reduce its efficacy.

  9. Validity threats: overcoming interference with proposed interpretations of assessment data.

    PubMed

    Downing, Steven M; Haladyna, Thomas M

    2004-03-01

    Factors that interfere with the ability to interpret assessment scores or ratings in the proposed manner threaten validity. To be interpreted in a meaningful manner, all assessments in medical education require sound, scientific evidence of validity. The purpose of this essay is to discuss 2 major threats to validity: construct under-representation (CU) and construct-irrelevant variance (CIV). Examples of each type of threat for written, performance and clinical performance examinations are provided. The CU threat to validity refers to undersampling the content domain. Using too few items, cases or clinical performance observations to adequately generalise to the domain represents CU. Variables that systematically (rather than randomly) interfere with the ability to meaningfully interpret scores or ratings represent CIV. Issues such as flawed test items written at inappropriate reading levels or statistically biased questions represent CIV in written tests. For performance examinations, such as standardised patient examinations, flawed cases or cases that are too difficult for student ability contribute CIV to the assessment. For clinical performance data, systematic rater error, such as halo or central tendency error, represents CIV. The term face validity is rejected as representative of any type of legitimate validity evidence, although the fact that the appearance of the assessment may be an important characteristic other than validity is acknowledged. There are multiple threats to validity in all types of assessment in medical education. Methods to eliminate or control validity threats are suggested.

  10. Coping with medical error: a systematic review of papers to assess the effects of involvement in medical errors on healthcare professionals' psychological well-being.

    PubMed

    Sirriyeh, Reema; Lawton, Rebecca; Gardner, Peter; Armitage, Gerry

    2010-12-01

    Previous research has established health professionals as secondary victims of medical error, with the identification of a range of emotional and psychological repercussions that may occur as a result of involvement in error [2, 3]. Due to the vast range of emotional and psychological outcomes, research to date has been inconsistent in the variables measured and tools used. Therefore, differing conclusions have been drawn as to the nature of the impact of error on professionals and the subsequent repercussions for their team, patients and healthcare institution. A systematic review was conducted. Data sources were identified using database searches, with additional reference and hand searching. Eligibility criteria were applied to all studies identified, resulting in a total of 24 included studies. Quality assessment was conducted on the included studies using a tool developed as part of this research, but due to the limited number and diverse nature of the studies, no exclusions were made on this basis. Review findings suggest that there is consistent evidence for the widespread impact of medical error on health professionals. Psychological repercussions may include negative states such as shame, self-doubt, anxiety and guilt. Despite much attention devoted to the assessment of negative outcomes, the potential for positive outcomes resulting from error also became apparent, with increased assertiveness, confidence and improved colleague relationships reported. It is evident that involvement in a medical error can elicit a significant psychological response from the health professional involved. However, a lack of literature around coping and support, coupled with inconsistencies and weaknesses in methodology, may need to be addressed in future work.

  11. A systematic review of the quality of statistical methods employed for analysing quality of life data in cancer randomised controlled trials.

    PubMed

    Hamel, Jean-Francois; Saulnier, Patrick; Pe, Madeline; Zikos, Efstathios; Musoro, Jammbe; Coens, Corneel; Bottomley, Andrew

    2017-09-01

    Over recent decades, health-related quality of life (HRQoL) end-points have become an important outcome of randomised controlled trials (RCTs). HRQoL methodology in RCTs has improved following international consensus recommendations. However, no international recommendations exist concerning the statistical analysis of such data. The aim of our study was to identify and characterise the quality of the statistical methods commonly used for analysing HRQoL data in cancer RCTs. Building on our recently published systematic review, we analysed a total of 33 RCTs published since 1991, studying the HRQoL methods they reported. We focussed on the ability of the methods to deal with the three major problems commonly encountered when analysing HRQoL data: its multidimensional and longitudinal structure and the commonly high rate of missing data. All studies reported HRQoL being assessed repeatedly over time for a period ranging from 2 to 36 months. Missing data were common, with compliance rates ranging from 45% to 90%. From the 33 studies considered, 12 different statistical methods were identified. Twenty-nine studies analysed each of the questionnaire sub-dimensions without type I error adjustment. Thirteen studies repeated the HRQoL analysis at each assessment time, again without type I error adjustment. Only 8 studies used methods suitable for repeated measurements. Our findings show a lack of consistency in the statistical methods for analysing HRQoL data. Problems related to multiple comparisons were rarely considered, leading to a high risk of false positive results. It is therefore critical that international recommendations for improving such statistical practices are developed. Copyright © 2017. Published by Elsevier Ltd.
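
    The multiplicity problem flagged here is the classical one: testing many sub-dimensions at α = 0.05 inflates the family-wise false-positive rate to 1 - (1 - α)^m. A sketch of two standard corrections (plain Python; the p-values are invented):

      alpha = 0.05
      p_values = [0.003, 0.012, 0.021, 0.034, 0.220]  # invented, one per subscale

      m = len(p_values)
      print(f"family-wise error if unadjusted: {1 - (1 - alpha)**m:.2f}")

      # Bonferroni: compare each p-value against alpha / m
      bonf = [p <= alpha / m for p in p_values]

      # Holm: step-down; stop at the first non-significant ordered p-value
      holm = [False] * m
      ordered = sorted((p, i) for i, p in enumerate(p_values))
      for rank, (p, i) in enumerate(ordered):
          if p > alpha / (m - rank):
              break
          holm[i] = True

      print("Bonferroni:", bonf)
      print("Holm:      ", holm)

    With these numbers Holm retains one more significant result than Bonferroni, which illustrates why step-down procedures are usually preferred when many HRQoL subscales are tested.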

  12. Effect of magnesium added to local anesthetics for caudal anesthesia on postoperative pain in pediatric surgical patients: A systematic review and meta-analysis with Trial Sequential Analysis.

    PubMed

    Kawakami, Hiromasa; Mihara, Takahiro; Nakamura, Nobuhito; Ka, Koui; Goto, Takahisa

    2018-01-01

    Magnesium has been investigated as an adjuvant for neuraxial anesthesia, but the effect of caudal magnesium on postoperative pain is inconsistent. The aim of this systematic review and meta-analysis was to evaluate the analgesic effect of caudal magnesium. We searched six databases, including trial registration sites. Randomized clinical trials reporting the effect of caudal magnesium on postoperative pain after general anesthesia were eligible. The risk ratio for use of rescue analgesics after surgery was combined using a random-effects model. We also assessed adverse events. The I2 statistic was used to assess heterogeneity. We assessed risk of bias with Cochrane domains. We controlled type I and II errors due to sparse data and repetitive testing with Trial Sequential Analysis. We assessed the quality of evidence with GRADE. Four randomized controlled trials (247 patients) evaluated the need for rescue analgesics. In all four trials, 50 mg of magnesium was administered with caudal ropivacaine. The results suggested that the need for rescue analgesia was reduced significantly by caudal magnesium administration (risk ratio 0.45; 95% confidence interval 0.24-0.86). There was considerable heterogeneity as indicated by an I2 value of 62.5%. The Trial Sequential Analysis-adjusted confidence interval was 0.04-5.55, indicating that further trials are required. The quality of evidence was very low. The rate of adverse events was comparable between treatment groups. Caudal magnesium may reduce the need for rescue analgesia after surgery, but further randomized clinical trials with a low risk of bias and a low risk of random errors are necessary to assess the effect of caudal magnesium on postoperative pain and adverse events. University Hospital Medical Information Network Clinical Trials Registry UMIN000025344.
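
    The random-effects pooling used in such meta-analyses is commonly the DerSimonian-Laird estimator (the abstract does not name the estimator, so this is an assumption). A sketch working on log risk ratios (plain Python; the four study results are invented, not the trials reviewed):

      import math

      # Invented per-study log risk ratios and their variances
      studies = [(-0.9, 0.12), (-0.5, 0.09), (-1.2, 0.20), (-0.1, 0.07)]

      w = [1 / v for _, v in studies]                      # fixed-effect weights
      y_fe = sum(wi * y for (y, _), wi in zip(studies, w)) / sum(w)
      q = sum(wi * (y - y_fe) ** 2 for (y, _), wi in zip(studies, w))
      df = len(studies) - 1
      c = sum(w) - sum(wi**2 for wi in w) / sum(w)
      tau2 = max(0.0, (q - df) / c)                        # between-study variance
      i2 = max(0.0, (q - df) / q) * 100                    # heterogeneity, %

      w_re = [1 / (v + tau2) for _, v in studies]          # random-effects weights
      y_re = sum(wi * y for (y, _), wi in zip(studies, w_re)) / sum(w_re)
      se = math.sqrt(1 / sum(w_re))
      lo, hi = y_re - 1.96 * se, y_re + 1.96 * se
      print(f"pooled RR = {math.exp(y_re):.2f} "
            f"(95% CI {math.exp(lo):.2f}-{math.exp(hi):.2f}), I2 = {i2:.0f}%")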

  13. What is the epidemiology of medication errors, error-related adverse events and risk factors for errors in adults managed in community care contexts? A systematic review of the international literature.

    PubMed

    Assiri, Ghadah Asaad; Shebl, Nada Atef; Mahmoud, Mansour Adam; Aloudah, Nouf; Grant, Elizabeth; Aljadhey, Hisham; Sheikh, Aziz

    2018-05-05

    To investigate the epidemiology of medication errors and error-related adverse events in adults in primary care, ambulatory care and patients' homes. Systematic review. Six international databases were searched for publications between 1 January 2006 and 31 December 2015. Two researchers independently extracted data from eligible studies and assessed their quality using established instruments. Synthesis of data was informed by an appreciation of the medicines' management process and the conceptual framework from the International Classification for Patient Safety. 60 studies met the inclusion criteria, of which 53 studies focused on medication errors, 3 on error-related adverse events and 4 on risk factors only. The prevalence of prescribing errors was reported in 46 studies: prevalence estimates ranged widely, from 2% to 94%. Inappropriate prescribing was the most common type of error reported. Only one study reported the prevalence of monitoring errors, finding that incomplete therapeutic/safety laboratory-test monitoring occurred in 73% of patients. The incidence of preventable adverse drug events (ADEs) was estimated as 15/1000 person-years, the prevalence of drug-drug interaction-related adverse drug reactions as 7% and the prevalence of preventable ADEs as 0.4%. A number of patient, healthcare professional and medication-related risk factors were identified, including the number of medications used by the patient, increased patient age, the number of comorbidities, use of anticoagulants, cases where more than one physician was involved in patients' care and care being provided by family physicians/general practitioners. A very wide variation in medication error and error-related adverse event rates is reported across the studies, reflecting heterogeneity in the populations studied, the study designs employed and the outcomes evaluated. This review has identified important limitations and discrepancies in the methodologies used and gaps in the literature on the epidemiology and outcomes of medication errors in community settings. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2018. All rights reserved. No commercial use is permitted unless otherwise expressly granted.

  14. TU-G-BRD-08: In-Vivo EPID Dosimetry: Quantifying the Detectability of Four Classes of Errors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ford, E; Phillips, M; Bojechko, C

    Purpose: EPID dosimetry is an emerging method for treatment verification and QA. Given that the in-vivo EPID technique is in clinical use at some centers, we investigate the sensitivity and specificity for detecting different classes of errors. We assess the impact of these errors using dose volume histogram endpoints. Though data exist for EPID dosimetry performed pre-treatment, this is the first study quantifying its effectiveness when used during patient treatment (in-vivo). Methods: We analyzed 17 patients; EPID images of the exit dose were acquired and used to reconstruct the planar dose at isocenter. This dose was compared to the TPS dose using a 3%/3mm gamma criterion. To simulate errors, modifications were made to treatment plans using four possible classes of error: 1) patient misalignment, 2) changes in patient body habitus, 3) machine output changes and 4) MLC misalignments. Each error was applied with varying magnitudes. To assess the detectability of the error, the area under the ROC curve (AUC) was analyzed. The AUC was compared to changes in D99 of the PTV introduced by the simulated error. Results: For systematic changes in the MLC leaves, changes in the machine output and patient habitus, the AUC varied from 0.78–0.97, scaling with the magnitude of the error. The optimal gamma threshold as determined by the ROC curve varied between 84–92%. There was little diagnostic power in detecting random MLC leaf errors and patient shifts (AUC 0.52–0.74). Some errors with weak detectability had large changes in D99. Conclusion: These data demonstrate the ability of EPID-based in-vivo dosimetry to detect variations in patient habitus and errors related to machine parameters such as systematic MLC misalignments and machine output changes. No correlation was found between the detectability of the error using the gamma pass rate, the ROC analysis and the impact on the dose volume histogram. Funded by grant R18HS022244 from AHRQ.
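
    The 3%/3 mm gamma comparison used above can be sketched for 1-D dose profiles (plain Python with NumPy; both profiles are invented). For each reference point, gamma is the minimum over evaluated points of the combined distance-to-agreement and dose-difference metric; a point passes if gamma <= 1.

      import numpy as np

      x = np.linspace(0.0, 100.0, 201)                 # position, mm
      dose_ref = np.exp(-((x - 50.0) / 18.0) ** 2)     # invented TPS profile
      dose_eval = 1.02 * np.exp(-((x - 51.0) / 18.0) ** 2)  # invented EPID dose

      dd, dta = 0.03, 3.0   # 3% global dose difference, 3 mm distance criterion
      gammas = []
      for xr, dr in zip(x, dose_ref):
          # Minimum combined metric over all evaluated points
          term = ((x - xr) / dta) ** 2 + ((dose_eval - dr) / dd) ** 2
          gammas.append(np.sqrt(term.min()))
      gammas = np.array(gammas)
      print(f"gamma pass rate: {100 * np.mean(gammas <= 1.0):.1f}%")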

  15. Probabilistic performance estimators for computational chemistry methods: The empirical cumulative distribution function of absolute errors

    NASA Astrophysics Data System (ADS)

    Pernot, Pascal; Savin, Andreas

    2018-06-01

    Benchmarking studies in computational chemistry use reference datasets to assess the accuracy of a method through error statistics. The commonly used error statistics, such as the mean signed and mean unsigned errors, do not inform end-users on the expected amplitude of prediction errors attached to these methods. We show that, the distributions of model errors being neither normal nor zero-centered, these error statistics cannot be used to infer prediction error probabilities. To overcome this limitation, we advocate for the use of more informative statistics, based on the empirical cumulative distribution function of unsigned errors, namely, (1) the probability for a new calculation to have an absolute error below a chosen threshold and (2) the maximal amplitude of errors one can expect with a chosen high confidence level. Those statistics are also shown to be well suited for benchmarking and ranking studies. Moreover, the standard error on all benchmarking statistics depends on the size of the reference dataset. Systematic publication of these standard errors would be very helpful to assess the statistical reliability of benchmarking conclusions.
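
    Both statistics advocated here fall straight out of the empirical CDF of absolute errors. A sketch (plain Python with NumPy; the error sample is invented and deliberately biased and skewed):

      import numpy as np

      rng = np.random.default_rng(3)
      # Invented model errors: biased and skewed, hence non-normal
      errors = rng.normal(1.5, 2.0, size=500) + rng.exponential(1.0, size=500)
      abs_err = np.abs(errors)

      eta = 2.0  # chosen error threshold, in the property's units
      p_below = np.mean(abs_err <= eta)     # (1) P(|error| <= eta)
      q95 = np.quantile(abs_err, 0.95)      # (2) 95th-percentile error amplitude
      print(f"P(|error| <= {eta}) = {p_below:.2f}")
      print(f"95% of new predictions expected below {q95:.2f}")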

  16. Modeling and Measurement Constraints in Fault Diagnostics for HVAC Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Najafi, Massieh; Auslander, David M.; Bartlett, Peter L.

    2010-05-30

    Many studies have shown that energy savings of five to fifteen percent are achievable in commercial buildings by detecting and correcting building faults and optimizing building control systems. However, in spite of good progress in developing tools for HVAC diagnostics, methods to detect faults in HVAC systems are still generally undeveloped. Most approaches use numerical filtering or parameter estimation methods to compare data from energy meters and building sensors to predictions from mathematical or statistical models. They are effective when models are relatively accurate and data contain few errors. In this paper, we address the case where models are imperfect and data are variable, uncertain, and can contain error. We apply a Bayesian updating approach that is systematic in managing and accounting for most forms of model and data errors. The proposed method uses both knowledge of first-principles modeling and empirical results to analyze the system performance within the boundaries defined by practical constraints. We demonstrate the approach by detecting faults in commercial building air handling units. We find that the limitations that exist in air handling unit diagnostics due to practical constraints can generally be addressed effectively through the proposed approach.
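
    The Bayesian updating idea can be sketched as a discrete fault diagnosis: each candidate fault hypothesis assigns a likelihood to the observed sensor residual, and the posterior is re-weighted as data arrive (plain Python; the hypotheses, residual model and readings are all invented, not the paper's air-handling-unit models):

      import math

      # Invented fault hypotheses: expected supply-air temperature residual (C)
      hypotheses = {"no fault": 0.0, "stuck damper": 2.5, "coil fouling": 4.0}
      prior = {h: 1.0 / len(hypotheses) for h in hypotheses}
      sigma = 1.0   # assumed residual noise (model + sensor error), C

      def gaussian_like(x, mu):
          return math.exp(-0.5 * ((x - mu) / sigma) ** 2)

      for residual in [2.1, 2.8, 2.4]:   # invented daily readings
          post = {h: prior[h] * gaussian_like(residual, mu)
                  for h, mu in hypotheses.items()}
          z = sum(post.values())
          prior = {h: p / z for h, p in post.items()}   # normalized posterior

      for h, p in prior.items():
          print(f"P({h} | data) = {p:.3f}")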

  17. Coarse-graining errors and numerical optimization using a relative entropy framework.

    PubMed

    Chaimovich, Aviel; Shell, M Scott

    2011-03-07

    The ability to generate accurate coarse-grained models from reference fully atomic (or otherwise "first-principles") ones has become an important component in modeling the behavior of complex molecular systems with large length and time scales. We recently proposed a novel coarse-graining approach based upon variational minimization of a configuration-space functional called the relative entropy, S(rel), that measures the information lost upon coarse-graining. Here, we develop a broad theoretical framework for this methodology and numerical strategies for its use in practical coarse-graining settings. In particular, we show that the relative entropy offers tight control over the errors due to coarse-graining in arbitrary microscopic properties, and suggests a systematic approach to reducing them. We also describe fundamental connections between this optimization methodology and other coarse-graining strategies like inverse Monte Carlo, force matching, energy matching, and variational mean-field theory. We suggest several new numerical approaches to its minimization that provide new coarse-graining strategies. Finally, we demonstrate the application of these theoretical considerations and algorithms to a simple, instructive system and characterize convergence and errors within the relative entropy framework. © 2011 American Institute of Physics.
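
    For orientation, the functional minimized in this framework can be written as below (reconstructed from the published relative-entropy coarse-graining formalism; the discrete-configuration notation is a simplification, not a quotation from the paper). With M the mapping from atomistic configurations i to coarse-grained ones,

      S_{\mathrm{rel}} = \sum_{i} p_{\mathrm{AA}}(i) \, \ln \frac{p_{\mathrm{AA}}(i)}{p_{\mathrm{CG}}(M(i))} + \left\langle S_{\mathrm{map}} \right\rangle_{\mathrm{AA}} ,

    where p_AA and p_CG are the atomistic and coarse-grained configuration probabilities and S_map accounts for the degeneracy of the mapping. S_rel is non-negative and vanishes only when the coarse-grained model reproduces the mapped atomistic distribution, which is what gives the variational principle its tight control over coarse-graining errors.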

  18. Calculation and Error Analysis of a Digital Elevation Model of Hofsjokull, Iceland from SAR Interferometry

    NASA Technical Reports Server (NTRS)

    Barton, Jonathan S.; Hall, Dorothy K.; Sigurosson, Oddur; Williams, Richard S., Jr.; Smith, Laurence C.; Garvin, James B.

    1999-01-01

    Two ascending European Space Agency (ESA) Earth Resources Satellites (ERS)-1/-2 tandem-mode, synthetic aperture radar (SAR) pairs are used to calculate the surface elevation of Hofsjokull, an ice cap in central Iceland. The motion component of the interferometric phase is calculated using the 30 arc-second resolution USGS GTOPO30 global digital elevation product and one of the ERS tandem pairs. The topography is then derived by subtracting the motion component from the other tandem pair. In order to assess the accuracy of the resultant digital elevation model (DEM), a geodetic airborne laser-altimetry swath is compared with the elevations derived from the interferometry. The DEM is also compared with elevations derived from a digitized topographic map of the ice cap from the University of Iceland Science Institute. Results show that low temporal correlation is a significant problem for the application of interferometry to small, low-elevation ice caps, even over a one-day repeat interval, and especially at the higher elevations. Results also show that an uncompensated error in the phase, ramping from northwest to southeast, present after tying the DEM to ground-control points, has resulted in a systematic error across the DEM.

  19. Calculation and error analysis of a digital elevation model of Hofsjokull, Iceland, from SAR interferometry

    USGS Publications Warehouse

    Barton, Jonathan S.; Hall, Dorothy K.; Sigurðsson, Oddur; Williams, Richard S.; Smith, Laurence C.; Garvin, James B.; Taylor, Susan; Hardy, Janet

    1999-01-01

    Two ascending European Space Agency (ESA) Earth Resources Satellites (ERS)-1/-2 tandem-mode, synthetic aperture radar (SAR) pairs are used to calculate the surface elevation of Hofsjokull, an ice cap in central Iceland. The motion component of the interferometric phase is calculated using the 30 arc-second resolution USGS GTOPO30 global digital elevation product and one of the ERS tandem pairs. The topography is then derived by subtracting the motion component from the other tandem pair. In order to assess the accuracy of the resultant digital elevation model (DEM), a geodetic airborne laser-altimetry swath is compared with the elevations derived from the interferometry. The DEM is also compared with elevations derived from a digitized topographic map of the ice cap from the University of Iceland Science Institute. Results show that low temporal correlation is a significant problem for the application of interferometry to small, low-elevation ice caps, even over a one-day repeat interval, and especially at the higher elevations. Results also show that an uncompensated error in the phase, ramping from northwest to southeast, present after tying the DEM to ground-control points, has resulted in a systematic error across the DEM.

  20. A simple solution to systematic errors in density determination by X-ray reflectivity: The XRR-density evaluation (XRR-DE) method

    NASA Astrophysics Data System (ADS)

    Bergese, P.; Bontempi, E.; Depero, L. E.

    2006-10-01

    X-ray reflectivity (XRR) is a non-destructive, accurate and fast technique for evaluating film density. However, sample-goniometer alignment is a critical experimental factor and the overriding error source in XRR density determination. With commercial single-wavelength X-ray reflectometers, alignment is difficult to control and depends strongly on the operator. In the present work, the contribution of misalignment to the density evaluation error is discussed, and a novel procedure (named XRR-density evaluation, or the XRR-DE method) to minimize the problem is presented. The method makes it possible to bypass the alignment step by extrapolating the correct density value from appropriate non-specular XRR data sets. This procedure is operator independent and suitable for commercial single-wavelength X-ray reflectometers. To test the XRR-DE method, single crystals of TiO2 and SrTiO3 were used. In both cases the determined densities differed from the nominal ones by less than 5.5%. Thus, the XRR-DE method can be successfully applied to evaluate the density of thin films for which only optical reflectivity is currently used. A further advantage is that the method can be considered thickness independent.
