DOE Office of Scientific and Technical Information (OSTI.GOV)
Parker, S
2015-06-15
Purpose: To evaluate the ability of statistical process control methods to detect systematic errors when using a two-dimensional (2D) detector array for routine electron beam energy verification. Methods: Electron beam energy constancy was measured using an aluminum wedge and a 2D diode array on four linear accelerators. Process control limits were established. Measurements were recorded in control charts and compared with both calculated process control limits and TG-142 recommended specification limits. The data were tested for normality, process capability and process acceptability. Additional measurements were recorded while systematic errors were intentionally introduced. Systematic errors included shifts in the alignment of the wedge, incorrect orientation of the wedge, and incorrect array calibration. Results: Control limits calculated for each beam were smaller than the recommended specification limits. Process capability and process acceptability ratios were greater than one in all cases. All data were normally distributed. Shifts in the alignment of the wedge were most apparent for low energies. The smallest shift (0.5 mm) was detectable using process control limits in some cases, while the largest shift (2 mm) was detectable using specification limits in only one case. The wedge orientation tested did not affect the measurements, as it did not change the thickness of aluminum over the detectors of interest. Array calibration dependence varied with energy and selected array calibration: 6 MeV was the least sensitive to array calibration selection while 16 MeV was the most sensitive. Conclusion: Statistical process control methods demonstrated that the data were normally distributed, the process was capable of meeting specifications, and the process was centered within the specification limits. Though not all systematic errors were distinguishable from random errors, process control limits increased the ability to detect systematic errors using routine measurements of electron beam energy constancy.
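A minimal sketch (Python, not from the paper) of how individuals-chart control limits and capability/acceptability ratios can be derived from repeated constancy measurements. The readings, the ±2% specification limits, and the moving-range formulas are illustrative assumptions rather than the authors' exact procedure.

```python
import numpy as np

def control_limits(x):
    """Individuals control chart limits estimated from the moving range (standard SPC practice)."""
    x = np.asarray(x, dtype=float)
    mr = np.abs(np.diff(x))            # moving ranges between consecutive measurements
    sigma = mr.mean() / 1.128          # d2 = 1.128 for subgroups of size 2
    center = x.mean()
    return center - 3 * sigma, center + 3 * sigma

def capability(x, lsl, usl):
    """Process capability (Cp) and acceptability (Cpk) against specification limits."""
    x = np.asarray(x, dtype=float)
    sigma = np.abs(np.diff(x)).mean() / 1.128
    cp = (usl - lsl) / (6 * sigma)
    cpk = min(usl - x.mean(), x.mean() - lsl) / (3 * sigma)
    return cp, cpk

# Hypothetical daily energy-constancy readings (deviation from baseline, in %) with +/-2% spec limits
readings = [0.1, -0.2, 0.3, 0.0, 0.1, -0.1, 0.2, 0.0]
lcl, ucl = control_limits(readings)
cp, cpk = capability(readings, lsl=-2.0, usl=2.0)
print(f"LCL={lcl:.2f}%, UCL={ucl:.2f}%, Cp={cp:.2f}, Cpk={cpk:.2f}")
```

Ratios greater than one, as reported in the abstract, indicate a process that is both capable of meeting the specification and centered within it.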
NASA Technical Reports Server (NTRS)
Koch, S. E.; Skillman, W. C.; Kocin, P. J.; Wetzel, P. J.; Brill, K. F.
1985-01-01
The synoptic scale performance characteristics of MASS 2.0 are determined by comparing filtered 12-24 hr model forecasts to same-case forecasts made by the National Meteorological Center's synoptic-scale Limited-area Fine Mesh model. Characteristics of the two systems are contrasted, and the analysis methodology used to determine statistical skill scores and systematic errors is described. The overall relative performance of the two models in the sample is documented, and important systematic errors uncovered are presented.
NASA Astrophysics Data System (ADS)
Baron, J.; Campbell, W. C.; DeMille, D.; Doyle, J. M.; Gabrielse, G.; Gurevich, Y. V.; Hess, P. W.; Hutzler, N. R.; Kirilov, E.; Kozyryev, I.; O'Leary, B. R.; Panda, C. D.; Parsons, M. F.; Spaun, B.; Vutha, A. C.; West, A. D.; West, E. P.; ACME Collaboration
2017-07-01
We recently set a new limit on the electric dipole moment of the electron (eEDM) (J Baron et al and ACME collaboration 2014 Science 343 269-272), which represented an order-of-magnitude improvement on the previous limit and placed more stringent constraints on many charge-parity-violating extensions to the standard model. In this paper we discuss the measurement in detail. The experimental method and associated apparatus are described, together with the techniques used to isolate the eEDM signal. In particular, we detail the way experimental switches were used to suppress effects that can mimic the signal of interest. The methods used to search for systematic errors, and models explaining observed systematic errors, are also described. We briefly discuss possible improvements to the experiment.
Influence of ECG measurement accuracy on ECG diagnostic statements.
Zywietz, C; Celikag, D; Joseph, G
1996-01-01
Computer analysis of electrocardiograms (ECGs) provides a large amount of ECG measurement data, which may be used for diagnostic classification and storage in ECG databases. Until now, neither error limits for ECG measurements have been specified nor has their influence on diagnostic statements been systematically investigated. An analytical method is presented to estimate the influence of measurement errors on the accuracy of diagnostic ECG statements. Systematic (offset) errors will usually result in an increase of false positive or false negative statements since they cause a shift of the working point on the receiver operating characteristic curve. Measurement error dispersion broadens the distribution function of discriminative measurement parameters and, therefore, usually increases the overlap between discriminative parameters. This results in a flattening of the receiver operating characteristic curve and an increase of false positive and false negative classifications. The method developed has been applied to ECG conduction defect diagnoses by using the proposed International Electrotechnical Commission's interval measurement tolerance limits. These limits appear too large because more than 30% of false positive atrial conduction defect statements and 10-18% of false intraventricular conduction defect statements could be expected due to tolerated measurement errors. To assure long-term usability of ECG measurement databases, it is recommended that systems provide their error tolerance limits, obtained on a defined test set.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Petyuk, Vladislav A.; Mayampurath, Anoop M.; Monroe, Matthew E.
2009-12-16
Hybrid two-stage mass spectrometers capable of both highly accurate mass measurement and MS/MS fragmentation have become widely available in recent years and have allowed for significantly better discrimination between true and false MS/MS peptide identifications by applying relatively narrow windows for maximum allowable deviations for parent ion mass measurements. To fully gain the advantage of highly accurate parent ion mass measurements, it is important to limit systematic mass measurement errors. The DtaRefinery software tool can correct systematic errors in parent ion masses by reading a set of fragmentation spectra, searching for MS/MS peptide identifications, then fitting a model that can estimate systematic errors, and removing them. This results in a new fragmentation spectrum file with updated parent ion masses.
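A hedged sketch of the general idea behind this kind of parent-mass recalibration: fit a smooth model of the mass measurement error observed for confident identifications and subtract it from all parent-ion masses. The polynomial-in-m/z model, function names, and example values below are illustrative assumptions; DtaRefinery's actual error models are more sophisticated.

```python
import numpy as np

def refine_parent_masses(mz_identified, ppm_error, mz_all, degree=2):
    """Fit a smooth systematic mass-error model (here a simple polynomial in m/z,
    an illustrative stand-in for DtaRefinery's models) to the ppm errors of
    confidently identified peptides, then subtract it from all parent-ion masses."""
    coeffs = np.polyfit(mz_identified, ppm_error, degree)
    correction_ppm = np.polyval(coeffs, mz_all)
    return np.asarray(mz_all) * (1.0 - correction_ppm * 1e-6)

# Hypothetical example: ppm errors of identified peptides, and parent masses to correct
mz_id = [420.7, 512.3, 633.8, 745.1, 880.4]
ppm = [3.1, 2.8, 2.2, 1.9, 1.5]
print(refine_parent_masses(mz_id, ppm, mz_all=[500.0, 700.0, 900.0]))
```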
Addressing Systematic Errors in Correlation Tracking on HMI Magnetograms
NASA Astrophysics Data System (ADS)
Mahajan, Sushant S.; Hathaway, David H.; Munoz-Jaramillo, Andres; Martens, Petrus C.
2017-08-01
Correlation tracking in solar magnetograms is an effective method to measure the differential rotation and meridional flow on the solar surface. However, since the tracking accuracy required to successfully measure meridional flow is very high, small systematic errors have a noticeable impact on measured meridional flow profiles. Additionally, the uncertainties of these measurements have historically been underestimated, leading to controversy regarding flow profiles at high latitudes extracted from measurements which are unreliable near the solar limb. Here we present a set of systematic errors we have identified (and potential solutions), including bias caused by physical pixel sizes, center-to-limb systematics, and discrepancies between measurements performed using different time intervals. We have developed numerical techniques to remove these systematic errors and in the process improve the accuracy of the measurements by an order of magnitude. We also present a detailed analysis of uncertainties in these measurements using synthetic magnetograms, and a quantification of an upper limit below which meridional flow measurements cannot be trusted, as a function of latitude.
Adverse effects in dual-feed interferometry
NASA Astrophysics Data System (ADS)
Colavita, M. Mark
2009-11-01
Narrow-angle dual-star interferometric astrometry can provide very high accuracy in the presence of the Earth's turbulent atmosphere. However, to exploit the high atmospherically-limited accuracy requires control of systematic errors in measurement of the interferometer baseline, internal OPDs, and fringe phase. In addition, as high photometric SNR is required, care must be taken to maximize throughput and coherence to obtain high accuracy on faint stars. This article reviews the key aspects of the dual-star approach and implementation, the main contributors to the systematic error budget, and the coherence terms in the photometric error budget.
Cembrowski, G S; Hackney, J R; Carey, N
1993-04-01
The Clinical Laboratory Improvement Act of 1988 (CLIA 88) has dramatically changed proficiency testing (PT) practices having mandated (1) satisfactory PT for certain analytes as a condition of laboratory operation, (2) fixed PT limits for many of these "regulated" analytes, and (3) an increased number of PT specimens (n = 5) for each testing cycle. For many of these analytes, the fixed limits are much broader than the previously employed Standard Deviation Index (SDI) criteria. Paradoxically, there may be less incentive to identify and evaluate analytically significant outliers to improve the analytical process. Previously described "control rules" to evaluate these PT results are unworkable as they consider only two or three results. We used Monte Carlo simulations of Kodak Ektachem analyzers participating in PT to determine optimal control rules for the identification of PT results that are inconsistent with those from other laboratories using the same methods. The analysis of three representative analytes, potassium, creatine kinase, and iron was simulated with varying intrainstrument and interinstrument standard deviations (si and sg, respectively) obtained from the College of American Pathologists (Northfield, Ill) Quality Assurance Services data and Proficiency Test data, respectively. Analytical errors were simulated in each of the analytes and evaluated in terms of multiples of the interlaboratory SDI. Simple control rules for detecting systematic and random error were evaluated with power function graphs, graphs of probability of error detected vs magnitude of error. Based on the simulation results, we recommend screening all analytes for the occurrence of two or more observations exceeding the same +/- 1 SDI limit. For any analyte satisfying this condition, the mean of the observations should be calculated. For analytes with sg/si ratios between 1.0 and 1.5, a significant systematic error is signaled by the mean exceeding 1.0 SDI. Significant random error is signaled by one observation exceeding the +/- 3-SDI limit or the range of the observations exceeding 4 SDIs. For analytes with higher sg/si, significant systematic or random error is signaled by violation of the screening rule (having at least two observations exceeding the same +/- 1 SDI limit). Random error can also be signaled by one observation exceeding the +/- 1.5-SDI limit or the range of the observations exceeding 3 SDIs. We present a practical approach to the workup of apparent PT errors.
Galli, C
2001-07-01
It is well established that the use of polychromatic radiation in spectrophotometric assays leads to excursions from the Beer-Lambert limit. This Note models the resulting systematic error as a function of assay spectral width, slope of molecular extinction coefficient, and analyte concentration. The theoretical calculations are compared with recent experimental results; a parameter is introduced which can be used to estimate the magnitude of the systematic error in both chromatographic and nonchromatographic spectrophotometric assays. It is important to realize that the polychromatic radiation employed in common laboratory equipment can yield assay errors up to approximately 4%, even at absorption levels generally considered 'safe' (i.e. absorption <1). Thus careful consideration of instrumental spectral width, analyte concentration, and slope of molecular extinction coefficient is required to ensure robust analytical methods.
A systematic comparison of error correction enzymes by next-generation sequencing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lubock, Nathan B.; Zhang, Di; Sidore, Angus M.
Gene synthesis, the process of assembling gene-length fragments from shorter groups of oligonucleotides (oligos), is becoming an increasingly important tool in molecular and synthetic biology. The length, quality and cost of gene synthesis are limited by errors produced during oligo synthesis and subsequent assembly. Enzymatic error correction methods are cost-effective means to ameliorate errors in gene synthesis. Previous analyses of these methods relied on cloning and Sanger sequencing to evaluate their efficiencies, limiting quantitative assessment. Here, we develop a method to quantify errors in synthetic DNA by next-generation sequencing. We analyzed errors in model gene assemblies and systematically compared six different error correction enzymes across 11 conditions. We find that ErrASE and T7 Endonuclease I are the most effective at decreasing average error rates (up to 5.8-fold relative to the input), whereas MutS is the best for increasing the number of perfect assemblies (up to 25.2-fold). We are able to quantify differential specificities, such as ErrASE preferentially correcting C/G transversions whereas T7 Endonuclease I preferentially corrects A/T transversions. More generally, this experimental and computational pipeline is a fast, scalable and extensible way to analyze errors in gene assemblies, to profile error correction methods, and to benchmark DNA synthesis methods.
A procedure for the significance testing of unmodeled errors in GNSS observations
NASA Astrophysics Data System (ADS)
Li, Bofeng; Zhang, Zhetao; Shen, Yunzhong; Yang, Ling
2018-01-01
It is a crucial task to establish a precise mathematical model for global navigation satellite system (GNSS) observations in precise positioning. Due to the spatiotemporal complexity of, and limited knowledge on, systematic errors in GNSS observations, some residual systematic errors would inevitably remain even after correction with empirical models and parameterization. These residual systematic errors are referred to as unmodeled errors. However, most of the existing studies mainly focus on handling the systematic errors that can be properly modeled and simply ignore the unmodeled errors that may actually exist. To further improve the accuracy and reliability of GNSS applications, such unmodeled errors must be handled, especially when they are significant. Therefore, the first question is how to statistically validate the significance of unmodeled errors. In this research, we propose a procedure to examine the significance of these unmodeled errors by the combined use of hypothesis tests. With this testing procedure, three components of unmodeled errors, i.e., the nonstationary signal, the stationary signal and white noise, are identified. The procedure is tested using simulated data and real BeiDou datasets with varying error sources. The results show that the unmodeled errors can be discriminated by our procedure with approximately 90% confidence. The efficiency of the proposed procedure is further reassured by applying time-domain Allan variance analysis and frequency-domain fast Fourier transform. In summary, spatiotemporally correlated unmodeled errors are commonly present in GNSS observations and mainly governed by the residual atmospheric biases and multipath. Their patterns may also be impacted by the receiver.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu Ke; Li Yanqiu; Wang Hai
Characterization of the measurement accuracy of the phase-shifting point diffraction interferometer (PS/PDI) is usually performed by a two-pinhole null test. In this procedure, the geometrical coma and detector tilt astigmatism systematic errors are almost one to two orders of magnitude higher than the desired accuracy of the PS/PDI. These errors must be accurately removed from the null test result to achieve high accuracy. Published calibration methods, which can remove the geometrical coma error successfully, have some limitations in calibrating the astigmatism error. In this paper, we propose a method to simultaneously calibrate the geometrical coma and detector tilt astigmatism errors in the PS/PDI null test. Based on the measurement results obtained from two pinhole pairs in orthogonal directions, the method utilizes the orthogonal and rotational symmetry properties of Zernike polynomials over the unit circle to calculate the systematic errors introduced in the null test of the PS/PDI. An experiment using a PS/PDI operated at visible light was performed to verify the method. The results show that the method is effective in isolating the systematic errors of the PS/PDI and that the measurement accuracy of the calibrated PS/PDI is 0.0088λ rms (λ = 632.8 nm).
NASA Technical Reports Server (NTRS)
Harwit, M.
1977-01-01
Sources of noise and error correcting procedures characteristic of Hadamard transform optical systems were investigated. Reduction of spectral noise due to noise spikes in the data, the effect of random errors, the relative performance of Fourier and Hadamard transform spectrometers operated under identical detector-noise-limited conditions, and systematic means for dealing with mask defects are among the topics discussed. The distortion in Hadamard transform optical instruments caused by moving masks, incorrect mask alignment, missing measurements, and diffraction is analyzed, and techniques for reducing or eliminating this distortion are described.
Dynamically correcting two-qubit gates against any systematic logical error
NASA Astrophysics Data System (ADS)
Calderon Vargas, Fernando Antonio
The reliability of quantum information processing depends on the ability to deal with noise and error in an efficient way. A significant source of error in many settings is coherent, systematic gate error. This work introduces a set of composite pulse sequences that generate maximally entangling gates and correct all systematic errors within the logical subspace to arbitrary order. These sequences are applicable for any two-qubit interaction Hamiltonian, and make no assumptions about the underlying noise mechanism except that it is constant on the timescale of the operation. The prime use for our results will be in cases where one has limited knowledge of the underlying physical noise and control mechanisms, highly constrained control, or both. In particular, we apply these composite pulse sequences to the quantum system formed by two capacitively coupled singlet-triplet qubits, which is characterized by having constrained control and noise sources that are low frequency and of a non-Markovian nature.
On-board error correction improves IR earth sensor accuracy
NASA Astrophysics Data System (ADS)
Alex, T. K.; Kasturirangan, K.; Shrivastava, S. K.
1989-10-01
Infra-red earth sensors are used in satellites for attitude sensing. Their accuracy is limited by systematic and random errors. The sources of error in a scanning infra-red earth sensor are analyzed in this paper. The systematic errors arising from seasonal variation of infra-red radiation, the oblate shape of the earth, the ambient temperature of the sensor, and changes in scan/spin rates have been analyzed. Simple relations are derived using least-squares curve fitting for on-board correction of these errors. Random errors arising out of noise from the detector and amplifiers, instability of alignment, and localized radiance anomalies are analyzed and possible correction methods are suggested. Sun and Moon interference on earth sensor performance has seriously affected a number of missions. The on-board processor detects Sun/Moon interference and corrects the errors on-board. It is possible to obtain an eight-fold improvement in sensing accuracy, which is comparable with ground-based post facto attitude refinement.
Quality Assurance of Chemical Measurements.
ERIC Educational Resources Information Center
Taylor, John K.
1981-01-01
Reviews aspects of quality control (methods to control errors) and quality assessment (verification that systems are operating within acceptable limits) including an analytical measurement system, quality control by inspection, control charts, systematic errors, and use of SRMs, materials for which properties are certified by the National Bureau…
NASA Technical Reports Server (NTRS)
Hinshaw, G.; Barnes, C.; Bennett, C. L.; Greason, M. R.; Halpern, M.; Hill, R. S.; Jarosik, N.; Kogut, A.; Limon, M.; Meyer, S. S.
2003-01-01
We describe the calibration and data processing methods used to generate full-sky maps of the cosmic microwave background (CMB) from the first year of Wilkinson Microwave Anisotropy Probe (WMAP) observations. Detailed limits on residual systematic errors are assigned based largely on analyses of the flight data supplemented, where necessary, with results from ground tests. The data are calibrated in flight using the dipole modulation of the CMB due to the observatory's motion around the Sun. This constitutes a full-beam calibration source. An iterative algorithm simultaneously fits the time-ordered data to obtain calibration parameters and pixelized sky map temperatures. The noise properties are determined by analyzing the time-ordered data with this sky signal estimate subtracted. Based on this, we apply a pre-whitening filter to the time-ordered data to remove a low level of 1/f noise. We infer and correct for a small (approx. 1%) transmission imbalance between the two sky inputs to each differential radiometer, and we subtract a small sidelobe correction from the 23 GHz (K band) map prior to further analysis. No other systematic error corrections are applied to the data. Calibration and baseline artifacts, including the response to environmental perturbations, are negligible. Systematic uncertainties are comparable to statistical uncertainties in the characterization of the beam response. Both are accounted for in the covariance matrix of the window function and are propagated to uncertainties in the final power spectrum. We characterize the combined upper limits to residual systematic uncertainties through the pixel covariance matrix.
The Limitations of Model-Based Experimental Design and Parameter Estimation in Sloppy Systems.
White, Andrew; Tolman, Malachi; Thames, Howard D; Withers, Hubert Rodney; Mason, Kathy A; Transtrum, Mark K
2016-12-01
We explore the relationship among experimental design, parameter estimation, and systematic error in sloppy models. We show that the approximate nature of mathematical models poses challenges for experimental design in sloppy models. In many models of complex biological processes it is unknown which physical mechanisms are relevant and must be included to explain system behaviors. As a consequence, models are often overly complex, with many practically unidentifiable parameters. Furthermore, which mechanisms are relevant/irrelevant varies among experiments. By selecting complementary experiments, experimental design may inadvertently make details that were omitted from the model become relevant. When this occurs, the model will have a large systematic error and fail to give a good fit to the data. We use a simple hyper-model of model error to quantify a model's discrepancy and apply it to two models of complex biological processes (EGFR signaling and DNA repair) with optimally selected experiments. We find that although parameters may be accurately estimated, the discrepancy in the model renders it less predictive than it was in the sloppy regime where systematic error is small. We introduce the concept of a sloppy system: a sequence of models of increasing complexity that become sloppy in the limit of microscopic accuracy. We explore the limits of accurate parameter estimation in sloppy systems and argue that identifying underlying mechanisms controlling system behavior is better approached by considering a hierarchy of models of varying detail rather than focusing on parameter estimation in a single model.
A water-vapor radiometer error model. [for ionosphere in geodetic microwave techniques
NASA Technical Reports Server (NTRS)
Beckman, B.
1985-01-01
The water-vapor radiometer (WVR) is used to calibrate unpredictable delays in the wet component of the troposphere in geodetic microwave techniques such as very-long-baseline interferometry (VLBI) and Global Positioning System (GPS) tracking. Based on experience with Jet Propulsion Laboratory (JPL) instruments, the current level of accuracy in wet-troposphere calibration limits the accuracy of local vertical measurements to 5-10 cm. The goal for the near future is 1-3 cm. Although the WVR is currently the best calibration method, many instruments are prone to systematic error. In this paper, a treatment of WVR data is proposed and evaluated. This treatment reduces the effect of WVR systematic errors by estimating parameters that specify an assumed functional form for the error. The assumed form of the treatment is evaluated by comparing the results of two similar WVRs operating near each other. Finally, the observability of the error parameters is estimated by covariance analysis.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ballhausen, Hendrik, E-mail: hendrik.ballhausen@med.uni-muenchen.de; Hieber, Sheila; Li, Minglun
2014-08-15
Purpose: To identify the relevant technical sources of error of a system based on three-dimensional ultrasound (3D US) for patient positioning in external beam radiotherapy. To quantify these sources of error in a controlled laboratory setting. To estimate the resulting end-to-end geometric precision of the intramodality protocol. Methods: Two identical free-hand 3D US systems at both the planning-CT and the treatment room were calibrated to the laboratory frame of reference. Every step of the calibration chain was repeated multiple times to estimate its contribution to overall systematic and random error. Optimal margins were computed given the identified and quantified systematic and random errors. Results: In descending order of magnitude, the identified and quantified sources of error were: alignment of calibration phantom to laser marks 0.78 mm, alignment of lasers in treatment vs planning room 0.51 mm, calibration and tracking of 3D US probe 0.49 mm, alignment of stereoscopic infrared camera to calibration phantom 0.03 mm. Under ideal laboratory conditions, these errors are expected to limit ultrasound-based positioning to an accuracy of 1.05 mm radially. Conclusions: The investigated 3D ultrasound system achieves an intramodal accuracy of about 1 mm radially in a controlled laboratory setting. The identified systematic and random errors require an optimal clinical tumor volume to planning target volume margin of about 3 mm. These inherent technical limitations do not prevent clinical use, including hypofractionation or stereotactic body radiation therapy.
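A worked check of the quoted numbers: the component errors combine in quadrature to the stated 1.05 mm radial accuracy, and a common margin recipe (the van Herk formula, used here purely as an illustrative assumption since the abstract does not name its margin model) gives a CTV-to-PTV margin of the same order as the roughly 3 mm quoted.

```python
import math

# Component errors quoted in the abstract (mm)
components = [0.78, 0.51, 0.49, 0.03]
radial = math.sqrt(sum(e**2 for e in components))
print(f"combined radial error: {radial:.2f} mm")    # ~1.05 mm, matching the abstract

# One common CTV-to-PTV margin recipe (van Herk: 2.5*Sigma + 0.7*sigma). The split of the
# 1.05 mm into systematic (Sigma) and random (sigma) parts below is purely illustrative.
Sigma, sigma = 1.05, 1.05
print(f"margin: {2.5 * Sigma + 0.7 * sigma:.1f} mm")  # ~3.4 mm, of the order of the ~3 mm quoted
```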
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yu, Juan; Beltran, Chris J., E-mail: beltran.chris@mayo.edu; Herman, Michael G.
Purpose: To quantitatively and systematically assess dosimetric effects induced by spot positioning error as a function of spot spacing (SS) on intensity-modulated proton therapy (IMPT) plan quality and to facilitate evaluation of safety tolerance limits on spot position. Methods: Spot position errors (PE) ranging from 1 to 2 mm were simulated. Simple plans were created on a water phantom, and IMPT plans were calculated on two pediatric patients with a brain tumor of 28 and 3 cc, respectively, using a commercial planning system. For the phantom, a uniform dose was delivered to targets located at different depths from 10 to 20 cm with various field sizes from 2² to 15² cm². Two nominal spot sizes, 4.0 and 6.6 mm of 1σ in water at isocenter, were used for treatment planning. The SS ranged from 0.5σ to 1.5σ, which is 2-6 mm for the small spot size and 3.3-9.9 mm for the large spot size. Various perturbation scenarios of a single spot error and systematic and random multiple spot errors were studied. To quantify the dosimetric effects, percent dose error (PDE) depth profiles and the value of percent dose error at the maximum dose difference (PDE[ΔDmax]) were used for evaluation. Results: A pair of hot and cold spots was created per spot shift. PDE[ΔDmax] is found to be a complex function of PE, SS, spot size, depth, and global spot distribution that can be well defined in simple models. For volumetric targets, the PDE[ΔDmax] is not noticeably affected by the change of field size or target volume within the studied ranges. In general, reducing SS decreased the dose error. For the facility studied, given a single spot error with a PE of 1.2 mm and for both spot sizes, a SS of 1σ resulted in a 2% maximum dose error; a SS larger than 1.25σ substantially increased the dose error and its sensitivity to PE. A similar trend was observed in multiple spot errors (both systematic and random errors). Systematic PE can lead to noticeable hot spots along the field edges, which may be near critical structures. However, random PE showed minimal dose error. Conclusions: Dose error dependence for PE was quantitatively and systematically characterized and an analytic tool was built to simulate systematic and random errors for patient-specific IMPT. This information facilitates the determination of facility-specific spot position error thresholds.
High-contrast coronagraph performance in the presence of focal plane mask defects
NASA Astrophysics Data System (ADS)
Sidick, Erkin; Shaklan, Stuart; Balasubramanian, Kunjithapatham; Cady, Eric
2014-08-01
We have carried out a study of the performance of high-contrast coronagraphs in the presence of mask defects. We have considered the effects of opaque and dielectric particles of various dimensions, as well as systematic mask fabrication errors and the limitations of material properties in creating dark holes. We employ sequential deformable mirrors to compensate for phase and amplitude errors, and show the limitations of this approach in the presence of coronagraph image-mask defects.
Mellado-Ortega, Elena; Zabalgogeazcoa, Iñigo; Vázquez de Aldana, Beatriz R; Arellano, Juan B
2017-02-15
Oxygen radical absorbance capacity (ORAC) assay in 96-well multi-detection plate readers is a rapid method to determine total antioxidant capacity (TAC) in biological samples. A disadvantage of this method is that the antioxidant inhibition reaction does not start in all of the 96 wells at the same time due to technical limitations when dispensing the free radical-generating azo initiator 2,2'-azobis (2-methyl-propanimidamide) dihydrochloride (AAPH). The time delay between wells yields a systematic error that causes statistically significant differences in TAC determination of antioxidant solutions depending on their plate position. We propose two alternative solutions to avoid this AAPH-dependent error in ORAC assays.
Characterizing the impact of model error in hydrologic time series recovery inverse problems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hansen, Scott K.; He, Jiachuan; Vesselinov, Velimir V.
Hydrologic models are commonly over-smoothed relative to reality, owing to computational limitations and to the difficulty of obtaining accurate high-resolution information. When used in an inversion context, such models may introduce systematic biases which cannot be encapsulated by an unbiased "observation noise" term of the type assumed by standard regularization theory and typical Bayesian formulations. Despite its importance, model error is difficult to encapsulate systematically and is often neglected. In this paper, model error is considered for an important class of inverse problems that includes interpretation of hydraulic transients and contaminant source history inference: reconstruction of a time series that has been convolved against a transfer function (i.e., impulse response) that is only approximately known. Using established harmonic theory along with two results established here regarding triangular Toeplitz matrices, upper and lower error bounds are derived for the effect of systematic model error on time series recovery for both well-determined and over-determined inverse problems. It is seen that use of additional measurement locations does not improve expected performance in the face of model error. A Monte Carlo study of a realistic hydraulic reconstruction problem is presented, and the lower error bound is seen to be informative about expected behavior. Finally, a possible diagnostic criterion for blind transfer function characterization is also uncovered.
Iudici, Antonio; Salvini, Alessandro; Faccio, Elena; Castelnuovo, Gianluca
2015-01-01
According to the literature, psychological assessment in forensic contexts is one of the most controversial application areas for clinical psychology. This paper presents a review of systematic judgment errors in the forensic field. Forty-six psychological reports written by psychologists serving as court consultants have been analyzed with content analysis to identify typical judgment errors related to the following areas: (a) distortions in the attribution of causality, (b) inferential errors, and (c) epistemological inconsistencies. Results indicated that systematic errors of judgment, usually also referred to as "man in the street" errors, are widely present in the forensic evaluations of specialist consultants. Clinical and practical implications are taken into account. This article could lead to significant benefits for clinical psychologists who want to deal with this sensitive issue and are interested in improving the quality of their contribution to the justice system. PMID:26648892
Kandel, Himal; Khadka, Jyoti; Goggin, Michael; Pesudovs, Konrad
2017-12-01
This review has identified the best existing patient-reported outcome (PRO) instruments in refractive error. The article highlights the limitations of the existing instruments and discusses the way forward. A systematic review was conducted to identify the types of PROs used in refractive error, to determine the quality of the existing PRO instruments in terms of their psychometric properties, and to determine the limitations in the content of the existing PRO instruments. Articles describing a PRO instrument measuring 1 or more domains of quality of life in people with refractive error were identified by electronic searches on the MEDLINE, PubMed, Scopus, Web of Science, and Cochrane databases. The information on content development, psychometric properties, validity, reliability, and responsiveness of those PRO instruments was extracted from the selected articles. The analysis was done based on a comprehensive set of assessment criteria. One hundred forty-eight articles describing 47 PRO instruments in refractive error were included in the review. Most of the articles (99 [66.9%]) used refractive error-specific PRO instruments. The PRO instruments comprised 19 refractive, 12 vision but nonrefractive, and 16 generic PRO instruments. Only 17 PRO instruments were validated in refractive error populations; six of them were developed using Rasch analysis. None of the PRO instruments has items across all domains of quality of life. The Quality of Life Impact of Refractive Correction, the Quality of Vision, and the Contact Lens Impact on Quality of Life have comparatively better quality, with some limitations, compared with the other PRO instruments. This review describes the PRO instruments and informs the choice of an appropriate measure in refractive error. We identified the need for a comprehensive and scientifically robust refractive error-specific PRO instrument. Item banking and computer-adaptive testing systems may be the way to provide such an instrument.
The accuracy of the measurements in Ulugh Beg's star catalogue
NASA Astrophysics Data System (ADS)
Krisciunas, K.
1992-12-01
The star catalogue compiled by Ulugh Beg and his collaborators in Samarkand (ca. 1437) is the only catalogue primarily based on original observations between the times of Ptolemy and Tycho Brahe. Evans (1987) has given convincing evidence that Ulugh Beg's star catalogue was based on measurements made with a zodiacal armillary sphere graduated to 15', with interpolation to 0.2 units. He and Shevchenko (1990) were primarily interested in the systematic errors in ecliptic longitude. Shevchenko's analysis of the random errors was limited to the twelve zodiacal constellations. We have analyzed all 843 ecliptic longitudes and latitudes attributed to Ulugh Beg by Knobel (1917). This required multiplying all the longitude errors by the respective values of the cosine of the celestial latitudes. We find a random error of +/- 17.7' for ecliptic longitude and +/- 16.5' for ecliptic latitude. On the whole, the random errors are largest near the ecliptic, decreasing towards the ecliptic poles. For all of Ulugh Beg's measurements (excluding outliers) the mean systematic error is -10.8' +/- 0.8' for ecliptic longitude and 7.5' +/- 0.7' for ecliptic latitude, with the errors in the sense "computed minus Ulugh Beg". For the brighter stars (those designated alpha, beta, and gamma in the respective constellations), the mean systematic errors are -11.3' +/- 1.9' for ecliptic longitude and 9.4' +/- 1.5' for ecliptic latitude. Within the errors this matches the systematic error in both coordinates for alpha Vir. With greater confidence we may conclude that alpha Vir was the principal reference star in the catalogues of Ulugh Beg and Ptolemy. Evans, J. 1987, J. Hist. Astr. 18, 155. Knobel, E. B. 1917, Ulugh Beg's Catalogue of Stars, Washington, D. C.: Carnegie Institution. Shevchenko, M. 1990, J. Hist. Astr. 21, 187.
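A minimal sketch (Python, not from the paper) of the error statistics described: longitude differences are scaled by the cosine of latitude so that both coordinates are great-circle arcminutes, and the mean and standard deviation of the differences give the systematic and random errors. The star positions below are hypothetical placeholders.

```python
import numpy as np

def catalogue_errors(lon_cat, lat_cat, lon_mod, lat_mod):
    """Systematic (mean) and random (std) errors in arcmin, in the sense computed minus catalogue.
    Longitude differences are multiplied by cos(latitude) to give great-circle arcminutes."""
    dlon = (np.asarray(lon_mod) - np.asarray(lon_cat)) * 60.0   # degrees -> arcmin
    dlat = (np.asarray(lat_mod) - np.asarray(lat_cat)) * 60.0
    dlon *= np.cos(np.radians(lat_cat))                         # project onto the sky
    stats = lambda d: (d.mean(), d.std(ddof=1))
    return stats(dlon), stats(dlat)

# Hypothetical example with three stars (ecliptic coordinates in degrees)
(lon_sys, lon_rand), (lat_sys, lat_rand) = catalogue_errors(
    lon_cat=[10.00, 55.30, 120.75], lat_cat=[2.0, -10.5, 40.0],
    lon_mod=[9.82, 55.05, 120.60], lat_mod=[2.15, -10.35, 40.10])
print(lon_sys, lon_rand, lat_sys, lat_rand)
```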
Efficient Solar Scene Wavefront Estimation with Reduced Systematic and RMS Errors: Summary
NASA Astrophysics Data System (ADS)
Anugu, N.; Garcia, P.
2016-04-01
Wave front sensing for solar telescopes is commonly implemented with Shack-Hartmann sensors. Correlation algorithms are usually used to estimate the extended-scene Shack-Hartmann sub-aperture image shifts or slopes. The image shift is computed by correlating a reference sub-aperture image with the target distorted sub-aperture image. The pixel position where the maximum correlation is located gives the image shift in integer pixel coordinates. Sub-pixel precision image shifts are computed by applying a peak-finding algorithm to the correlation peak (Poyneer 2003; Löfdahl 2010). However, the peak-finding algorithm results are usually biased towards integer pixels; these errors are called systematic bias errors (Sjödahl 1994). These errors are caused by the low pixel sampling of the images. The amplitude of these errors depends on the type of correlation algorithm and the type of peak-finding algorithm being used. To study the systematic errors in detail, solar sub-aperture synthetic images are constructed by using a Swedish Solar Telescope solar granulation image. The performance of the cross-correlation algorithm in combination with different peak-finding algorithms is investigated. The studied peak-finding algorithms are: parabola (Poyneer 2003); quadratic polynomial (Löfdahl 2010); threshold center of gravity (Bailey 2003); Gaussian (Nobach & Honkanen 2005); and pyramid (Bailey 2003). The systematic error study reveals that the pyramid fit is the most robust to pixel-locking effects. The RMS error analysis study reveals that the threshold center of gravity behaves better in low SNR, although the systematic errors in the measurement are large. It is found that no algorithm is best for both the systematic and the RMS error reduction. To overcome the above problem, a new solution is proposed. In this solution, the image sampling is increased prior to the actual correlation matching. The method is realized in two steps to improve its computational efficiency. In the first step, the cross-correlation is implemented at the original image spatial resolution grid (1 pixel). In the second step, the cross-correlation is performed using a sub-pixel level grid by limiting the field of search to 4 × 4 pixels centered at the initial position delivered by the first step. The generation of these sub-pixel-grid-based region-of-interest images is achieved with bi-cubic interpolation. The correlation matching with sub-pixel grid technique was previously reported in electronic speckle photography (Sjödahl 1994). This technique is applied here to solar wavefront sensing. A large dynamic range and a better accuracy in the measurements are achieved with the combination of the original pixel-grid-based correlation matching in a large field of view and a sub-pixel interpolated image-grid-based correlation matching within a small field of view. The results revealed that the proposed method outperforms all the different peak-finding algorithms studied in the first approach. It reduces both the systematic error and the RMS error by a factor of 5 (i.e., 75% systematic error reduction) when 5 times improved image sampling was used. This measurement is achieved at the expense of twice the computational cost. With the 5 times improved image sampling, the wave front accuracy is increased by a factor of 5. The proposed solution is strongly recommended for wave front sensing in solar telescopes, particularly for measuring the large dynamic image shifts involved in open-loop adaptive optics.
Also, by choosing an appropriate increment of image sampling as a trade-off between the computational speed limitation and the desired sub-pixel image shift accuracy, it can be employed in closed-loop adaptive optics. The study is extended to three other classes of sub-aperture images (a point source, a laser guide star, and a Galactic Center extended scene). The results are planned to be submitted to the Optics Express journal.
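A hedged sketch (Python) of the standard parabolic sub-pixel peak refinement that exhibits the pixel-locking bias discussed above; the proposed two-step remedy would then repeat the correlation on a bicubically upsampled 4 × 4 neighbourhood around this first estimate. The function names and the lack of border handling are illustrative assumptions.

```python
import numpy as np

def parabolic_subpixel_shift(corr):
    """Integer-pixel correlation peak refined with a 1D parabola fit in each axis.
    This is the classic estimator whose bias toward integer pixels ("pixel locking")
    the study quantifies. Assumes the peak does not lie on the array border."""
    iy, ix = np.unravel_index(np.argmax(corr), corr.shape)

    def refine(cm, c0, cp):
        # Vertex of the parabola through (-1, cm), (0, c0), (+1, cp)
        denom = cm - 2.0 * c0 + cp
        return 0.0 if denom == 0 else 0.5 * (cm - cp) / denom

    dy = refine(corr[iy - 1, ix], corr[iy, ix], corr[iy + 1, ix])
    dx = refine(corr[iy, ix - 1], corr[iy, ix], corr[iy, ix + 1])
    return iy + dy, ix + dx
```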
Yang, Xiao-Xing; Critchley, Lester A; Joynt, Gavin M
2011-01-01
Thermodilution cardiac output using a pulmonary artery catheter is the reference method against which all new methods of cardiac output measurement are judged. However, thermodilution lacks precision and has a quoted precision error of ± 20%. There is uncertainty about its true precision and this causes difficulty when validating new cardiac output technology. Our aim in this investigation was to determine the current precision error of thermodilution measurements. A test rig was assembled through which water circulated at different constant rates, with ports to insert catheters into a flow chamber. Flow rate was measured by an externally placed transonic flowprobe and meter. The meter was calibrated by timed filling of a cylinder. Arrow and Edwards 7Fr thermodilution catheters, connected to a Siemens SC9000 cardiac output monitor, were tested. Thermodilution readings were made by injecting 5 mL of ice-cold water. Precision error was divided into random and systematic components, which were determined separately. Between-readings (random) variability was determined for each catheter by taking sets of 10 readings at different flow rates. Coefficient of variation (CV) was calculated for each set and averaged. Between-catheter-systems (systematic) variability was derived by plotting calibration lines for sets of catheters. Slopes were used to estimate the systematic component. Performances of 3 cardiac output monitors were compared: Siemens SC9000, Siemens Sirecust 1261, and Philips MP50. Five Arrow and 5 Edwards catheters were tested using the Siemens SC9000 monitor. Flow rates between 0.7 and 7.0 L/min were studied. The CV (random error) for Arrow was 5.4% and for Edwards was 4.8%. The random precision error was ± 10.0% (95% confidence limits). CV (systematic error) was 5.8% and 6.0%, respectively. The systematic precision error was ± 11.6%. The total precision error of a single thermodilution reading was ± 15.3% and ± 13.0% for triplicate readings. Precision error increased by 45% when using the Sirecust monitor and 100% when using the Philips monitor. In vitro testing of pulmonary artery catheters enabled us to measure both the random and systematic error components of thermodilution cardiac output measurement, and thus calculate the precision error. Using the Siemens monitor, we established a precision error of ± 15.3% for single readings and ± 13.0% for triplicate readings, which was similar to the previous estimate of ± 20%. However, this precision error was significantly worsened by using the Sirecust and Philips monitors. Clinicians should recognize that the precision error of thermodilution cardiac output is dependent on the selection of catheter and monitor model.
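A worked reconstruction of the quoted precision errors (illustrative arithmetic, assuming the random and systematic components combine in quadrature and that averaging triplicate readings reduces only the random component by the square root of three):

```python
import math

cv_random = (5.4 + 4.8) / 2          # between-readings CV (%), averaged over both catheter types
cv_system = (5.8 + 6.0) / 2          # between-catheter-systems CV (%)

rand95 = 1.96 * cv_random            # ~ +/-10.0% random precision error (95% limits)
sys95 = 1.96 * cv_system             # ~ +/-11.6% systematic precision error

single = math.hypot(rand95, sys95)                        # ~ +/-15.3% for a single reading
triplicate = math.hypot(rand95 / math.sqrt(3), sys95)     # ~ +/-13.0% (random part averages down)
print(f"single: {single:.1f}%  triplicate: {triplicate:.1f}%")
```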
NASA Astrophysics Data System (ADS)
Langford, B.; Acton, W.; Ammann, C.; Valach, A.; Nemitz, E.
2015-10-01
All eddy-covariance flux measurements are associated with random uncertainties which are a combination of sampling error due to natural variability in turbulence and sensor noise. The former is the principal error for systems where the signal-to-noise ratio of the analyser is high, as is usually the case when measuring fluxes of heat, CO2 or H2O. Where signal is limited, which is often the case for measurements of other trace gases and aerosols, instrument uncertainties dominate. Here, we are applying a consistent approach based on auto- and cross-covariance functions to quantify the total random flux error and the random error due to instrument noise separately. As with previous approaches, the random error quantification assumes that the time lag between wind and concentration measurement is known. However, if combined with commonly used automated methods that identify the individual time lag by looking for the maximum in the cross-covariance function of the two entities, analyser noise additionally leads to a systematic bias in the fluxes. Combining data sets from several analysers and using simulations, we show that the method of time-lag determination becomes increasingly important as the magnitude of the instrument error approaches that of the sampling error. The flux bias can be particularly significant for disjunct data, whereas using a prescribed time lag eliminates these effects (provided the time lag does not fluctuate unduly over time). We also demonstrate that when sampling at higher elevations, where low frequency turbulence dominates and covariance peaks are broader, both the probability and magnitude of bias are magnified. We show that the statistical significance of noisy flux data can be increased (limit of detection can be decreased) by appropriate averaging of individual fluxes, but only if systematic biases are avoided by using a prescribed time lag. Finally, we make recommendations for the analysis and reporting of data with low signal-to-noise and their associated errors.
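A hedged simulation sketch (Python, not from the paper) of the bias mechanism described: for a noisy scalar with no true flux, picking the lag that maximises the cross-covariance inflates the apparent flux relative to using a prescribed lag. The sampling rate, lag window, and noise levels are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def flux(w, c, lag):
    """Eddy-covariance flux as the cross-covariance of vertical wind w and scalar c at a given sample lag."""
    if lag >= 0:
        ws, cs = w[:len(w) - lag], c[lag:]
    else:
        ws, cs = w[-lag:], c[:len(c) + lag]
    return np.cov(ws, cs)[0, 1]

n, lag_window = 36000, 20                 # e.g. one hour at 10 Hz, +/-20-sample lag search
w = rng.normal(size=n)                    # vertical wind (no real flux present)
c = rng.normal(size=n)                    # uncorrelated, noisy concentration signal

prescribed = flux(w, c, lag=0)            # prescribed time lag: unbiased, scatters around zero
searched = max((flux(w, c, L) for L in range(-lag_window, lag_window + 1)), key=abs)
print(f"prescribed-lag flux: {prescribed:+.4f}   max-of-covariance flux: {searched:+.4f}")
```

Averaging many such realisations shows the prescribed-lag estimate centred on zero while the searched-lag estimate is systematically biased away from it, which is the effect the abstract warns about for low signal-to-noise data.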
Orbit error characteristic and distribution of TLE using CHAMP orbit data
NASA Astrophysics Data System (ADS)
Xu, Xiao-li; Xiong, Yong-qing
2018-02-01
Space object orbital covariance data is required for collision risk assessments, but publicly accessible two line element (TLE) data does not provide orbital error information. This paper compared historical TLE data and GPS precision ephemerides of CHAMP to assess TLE orbit accuracy from 2002 to 2008, inclusive. TLE error spatial variations with longitude and latitude were calculated to analyze error characteristics and distribution. The results indicate that TLE orbit data are systematically biased due to the limitations of the SGP4 model. The biases can reach the level of kilometers, and their sign and magnitude correlate significantly with longitude.
Phylogenomics of Lophotrochozoa with Consideration of Systematic Error.
Kocot, Kevin M; Struck, Torsten H; Merkel, Julia; Waits, Damien S; Todt, Christiane; Brannock, Pamela M; Weese, David A; Cannon, Johanna T; Moroz, Leonid L; Lieb, Bernhard; Halanych, Kenneth M
2017-03-01
Phylogenomic studies have improved understanding of deep metazoan phylogeny and show promise for resolving incongruences among analyses based on limited numbers of loci. One region of the animal tree that has been especially difficult to resolve, even with phylogenomic approaches, is relationships within Lophotrochozoa (the animal clade that includes molluscs, annelids, and flatworms among others). Lack of resolution in phylogenomic analyses could be due to insufficient phylogenetic signal, limitations in taxon and/or gene sampling, or systematic error. Here, we investigated why lophotrochozoan phylogeny has been such a difficult question to answer by identifying and reducing sources of systematic error. We supplemented existing data with 32 new transcriptomes spanning the diversity of Lophotrochozoa and constructed a new set of Lophotrochozoa-specific core orthologs. Of these, 638 orthologous groups (OGs) passed strict screening for paralogy using a tree-based approach. In order to reduce possible sources of systematic error, we calculated branch-length heterogeneity, evolutionary rate, percent missing data, compositional bias, and saturation for each OG and analyzed increasingly stricter subsets of only the most stringent (best) OGs for these five variables. Principal component analysis of the values for each factor examined for each OG revealed that compositional heterogeneity and average patristic distance contributed most to the variance observed along the first principal component while branch-length heterogeneity and, to a lesser extent, saturation contributed most to the variance observed along the second. Missing data did not strongly contribute to either. Additional sensitivity analyses examined effects of removing taxa with heterogeneous branch lengths, large amounts of missing data, and compositional heterogeneity. Although our analyses do not unambiguously resolve lophotrochozoan phylogeny, we advance the field by reducing the list of viable hypotheses. Moreover, our systematic approach for dissection of phylogenomic data can be applied to explore sources of incongruence and poor support in any phylogenomic data set. [Annelida; Brachiopoda; Bryozoa; Entoprocta; Mollusca; Nemertea; Phoronida; Platyzoa; Polyzoa; Spiralia; Trochozoa.]
Fluorescence decay data analysis correcting for detector pulse pile-up at very high count rates
NASA Astrophysics Data System (ADS)
Patting, Matthias; Reisch, Paja; Sackrow, Marcus; Dowler, Rhys; Koenig, Marcelle; Wahl, Michael
2018-03-01
Using time-correlated single photon counting for the purpose of fluorescence lifetime measurements is usually limited in speed due to pile-up. With modern instrumentation, this limitation can be lifted significantly, but some artifacts due to frequent merging of closely spaced detector pulses (detector pulse pile-up) remain an issue to be addressed. We propose a data analysis method correcting for this type of artifact and the resulting systematic errors. It physically models the photon losses due to detector pulse pile-up and incorporates the loss in the decay fit model employed to obtain fluorescence lifetimes and relative amplitudes of the decay components. Comparison of results with and without this correction shows a significant reduction of systematic errors at count rates approaching the excitation rate. This allows quantitatively accurate fluorescence lifetime imaging at very high frame rates.
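For readers unfamiliar with pile-up distortion, the following Python sketch shows the classic single-photon pile-up model, in which only the earliest photon of each excitation cycle is time-stamped; the detector-pulse merging treated in the paper above is a related but distinct effect, and the function and parameter values here are purely illustrative.

```python
import numpy as np

def pileup_distorted(decay, mu):
    """Distort a decay histogram by classic single-photon pile-up.

    decay : true decay shape per time bin (any positive array)
    mu    : mean number of detected photons per excitation cycle

    Only the earliest photon of a cycle is recorded, so each bin is suppressed
    by the probability that no photon arrived in an earlier bin.
    """
    p = mu * decay / decay.sum()                        # per-bin detection probability
    survive = np.concatenate(([1.0], np.cumprod(1.0 - p)[:-1]))
    return p * survive                                  # measured (distorted) histogram

t = np.linspace(0.0, 25.0, 500)                         # ns
true_decay = np.exp(-t / 3.0)                           # 3 ns lifetime
distorted = pileup_distorted(true_decay, mu=0.8)        # strong pile-up regime
```

A loss model of this kind can be folded directly into the decay fit model rather than applied as a post-hoc correction, which is the spirit of the approach described in the abstract.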
Identification and correction of systematic error in high-throughput sequence data
2011-01-01
Background A feature common to all DNA sequencing technologies is the presence of base-call errors in the sequenced reads. The implications of such errors are application specific, ranging from minor informatics nuisances to major problems affecting biological inferences. Recently developed "next-gen" sequencing technologies have greatly reduced the cost of sequencing, but have been shown to be more error prone than previous technologies. Both position specific (depending on the location in the read) and sequence specific (depending on the sequence in the read) errors have been identified in Illumina and Life Technology sequencing platforms. We describe a new type of systematic error that manifests as statistically unlikely accumulations of errors at specific genome (or transcriptome) locations. Results We characterize and describe systematic errors using overlapping paired reads from high-coverage data. We show that such errors occur in approximately 1 in 1000 base pairs, and that they are highly replicable across experiments. We identify motifs that are frequent at systematic error sites, and describe a classifier that distinguishes heterozygous sites from systematic error. Our classifier is designed to accommodate data from experiments in which the allele frequencies at heterozygous sites are not necessarily 0.5 (such as in the case of RNA-Seq), and can be used with single-end datasets. Conclusions Systematic errors can easily be mistaken for heterozygous sites in individuals, or for SNPs in population analyses. Systematic errors are particularly problematic in low coverage experiments, or in estimates of allele-specific expression from RNA-Seq data. Our characterization of systematic error has allowed us to develop a program, called SysCall, for identifying and correcting such errors. We conclude that correction of systematic errors is important to consider in the design and interpretation of high-throughput sequencing experiments. PMID:22099972
Accurate Magnetometer/Gyroscope Attitudes Using a Filter with Correlated Sensor Noise
NASA Technical Reports Server (NTRS)
Sedlak, J.; Hashmall, J.
1997-01-01
Magnetometers and gyroscopes have been shown to provide very accurate attitudes for a variety of spacecraft. These results have been obtained, however, using a batch-least-squares algorithm and long periods of data. For use in onboard applications, attitudes are best determined using sequential estimators such as the Kalman filter. When a filter is used to determine attitudes using magnetometer and gyroscope data for input, the resulting accuracy is limited by both the sensor accuracies and errors inherent in the Earth magnetic field model. The Kalman filter accounts for the random component by modeling the magnetometer and gyroscope errors as white noise processes. However, even when these tuning parameters are physically realistic, the rate biases (included in the state vector) have been found to show systematic oscillations. These are attributed to the field model errors. If the gyroscope noise is sufficiently small, the tuned filter 'memory' will be long compared to the orbital period. In this case, the variations in the rate bias induced by field model errors are substantially reduced. Mistuning the filter to have a short memory time leads to strongly oscillating rate biases and increased attitude errors. To reduce the effect of the magnetic field model errors, these errors are estimated within the filter and used to correct the reference model. An exponentially-correlated noise model is used to represent the filter estimate of the systematic error. Results from several test cases using in-flight data from the Compton Gamma Ray Observatory are presented. These tests emphasize magnetometer errors, but the method is generally applicable to any sensor subject to a combination of random and systematic noise.
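The exponentially correlated noise model mentioned above can be written as a one-line first-order Gauss-Markov recursion. The sketch below is a generic illustration in Python, not the flight software, and the parameter values are placeholders.

```python
import numpy as np

def gauss_markov_step(x, dt, tau, sigma, rng):
    """Advance a first-order Gauss-Markov (exponentially correlated) process.

    x     : current value of the modeled systematic error
    dt    : time step (s)
    tau   : correlation time (s)
    sigma : steady-state standard deviation
    """
    phi = np.exp(-dt / tau)
    q = sigma**2 * (1.0 - phi**2)          # driving-noise variance keeps sigma stationary
    return phi * x + rng.normal(0.0, np.sqrt(q))

rng = np.random.default_rng(0)
x, series = 0.0, []
for _ in range(1000):
    x = gauss_markov_step(x, dt=10.0, tau=3000.0, sigma=1.0, rng=rng)
    series.append(x)
```

In a Kalman filter, the same recursion supplies the state-transition and process-noise entries for the augmented field-model error state.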
Measuring the CMB Polarization at 94 GHz with the QUIET Pseudo-Cl Pipeline
NASA Astrophysics Data System (ADS)
Buder, Immanuel; QUIET Collaboration
2012-01-01
The Q/U Imaging ExperimenT (QUIET) aims to limit or detect cosmic microwave background (CMB) B-mode polarization from inflation. This talk is part of a 3-talk series on QUIET. The previous talk describes the QUIET science and instrument. QUIET has two parallel analysis pipelines which are part of an effort to validate the analysis and confirm the result. In this talk, I will describe the analysis methods of one of these: the pseudo-Cl pipeline. Calibration, noise modeling, filtering, and data-selection choices are made following a blind-analysis strategy. Central to this strategy is a suite of 30 null tests, each motivated by a possible instrumental problem or systematic effect. The systematic errors are also evaluated through full-season simulations in the blind stage of the analysis before the result is known. The CMB power spectra are calculated using a pseudo-Cl cross-correlation technique which suppresses contamination and makes the result insensitive to noise bias. QUIET will detect the first three peaks of the even-parity (E-mode) spectrum at high significance. I will show forecasts of the systematic errors for these results and for the upper limit on B-mode polarization. The very low systematic errors in these forecasts show that the technology is ready to be applied in a more sensitive next-generation experiment. The next and final talk in this series covers the other parallel analysis pipeline, based on maximum likelihood methods. This work was supported by NSF and the Department of Education.
Bolann, B J; Asberg, A
2004-01-01
The deviation of test results from patients' homeostatic set points in steady-state conditions may complicate interpretation of the results and the comparison of results with clinical decision limits. In this study the total deviation from the homeostatic set point is defined as the maximum absolute deviation for 95% of measurements, and we present analytical quality requirements that prevent analytical error from increasing this deviation to more than about 12% above the value caused by biology alone. These quality requirements are: 1) the stable systematic error should be approximately 0, and 2) a systematic error that will be detected by the control program with 90% probability should not be larger than half the value of the combined analytical and intra-individual standard deviation. As a result, when the most common control rules are used, the analytical standard deviation may be up to 0.15 times the intra-individual standard deviation. Analytical improvements beyond these requirements have little impact on the interpretability of measurement results.
Horizon sensors attitude errors simulation for the Brazilian Remote Sensing Satellite
NASA Astrophysics Data System (ADS)
Vicente de Brum, Antonio Gil; Ricci, Mario Cesar
Remote sensing, meteorological, and other types of satellites require increasingly accurate Earth-referenced positioning. From past experience it is well known that the thermal horizon in the 15 micrometer band provides the conditions for determining the local vertical at any time. This detection is done by horizon sensors, accurate instruments for Earth-referenced attitude sensing and control whose performance is limited by systematic and random errors amounting to about 0.5 deg. Using the computer programs OBLATE, SEASON, ELECTRO, and MISALIGN, developed at INPE to simulate four distinct facets of conical scanning horizon sensors, attitude errors are obtained for the Brazilian Remote Sensing Satellite (the first one, SSR-1, is scheduled to fly in 1996). These errors are due to the oblate shape of the Earth, seasonal and latitudinal variations of the 15 micrometer infrared radiation, electronic processing time delay, and misalignment of the sensor axis. The sensor-related attitude errors are thus properly quantified in this work and will, together with other systematic errors (for instance, ambient temperature variation), take part in the pre-launch analysis of the Brazilian Remote Sensing Satellite with respect to horizon sensor performance.
Why GPS makes distances bigger than they are
Ranacher, Peter; Brunauer, Richard; Trutschnig, Wolfgang; Van der Spek, Stefan; Reich, Siegfried
2016-01-01
Global navigation satellite systems such as the Global Positioning System (GPS) are among the most important sensors for movement analysis. GPS is widely used to record the trajectories of vehicles, animals and human beings. However, all GPS movement data are affected by both measurement and interpolation errors. In this article we show that measurement error causes a systematic bias in distances recorded with a GPS; the distance between two points recorded with a GPS is – on average – bigger than the true distance between these points. This systematic ‘overestimation of distance’ becomes relevant if the influence of interpolation error can be neglected, which in practice is the case for movement sampled at high frequencies. We provide a mathematical explanation of this phenomenon and illustrate that it functionally depends on the autocorrelation of GPS measurement error (C). We argue that C can be interpreted as a quality measure for movement data recorded with a GPS. If there is a strong autocorrelation between any two consecutive position estimates, they have very similar error. This error cancels out when average speed, distance or direction is calculated along the trajectory. Based on our theoretical findings we introduce a novel approach to determine C in real-world GPS movement data sampled at high frequencies. We apply our approach to pedestrian trajectories and car trajectories. We found that the measurement error in the data was strongly spatially and temporally autocorrelated and give a quality estimate of the data. Most importantly, our findings are not limited to GPS alone. The systematic bias and its implications are bound to occur in any movement data collected with absolute positioning if interpolation error can be neglected. PMID:27019610
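The overestimation effect is easy to reproduce numerically. The sketch below (Python, with arbitrary error magnitudes) compares uncorrelated GPS errors, which inflate the recorded step lengths, with perfectly correlated errors, which cancel in the differences:

```python
import numpy as np

rng = np.random.default_rng(0)
true_step, sigma, n = 1.0, 3.0, 100_000       # 1 m steps, 3 m per-axis error (illustrative)

# True positions along a straight line, one fix per metre.
truth = np.column_stack([np.arange(n) * true_step, np.zeros(n)])

# Uncorrelated errors on each fix: mean recorded step is much longer than 1 m.
fixes = truth + rng.normal(0.0, sigma, truth.shape)
steps = np.linalg.norm(np.diff(fixes, axis=0), axis=1)
print("uncorrelated errors:", steps.mean())

# Fully correlated errors (a single shared offset) cancel out in the differences.
fixes_corr = truth + rng.normal(0.0, sigma, (1, 2))
steps_corr = np.linalg.norm(np.diff(fixes_corr, axis=0), axis=1)
print("correlated errors:  ", steps_corr.mean())
```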
A Systematic Approach to Error Free Telemetry
2017-06-28
412TW-TIM-17-03. Technical Information Memorandum submitted by the Commander, 412th Test Wing, Edwards AFB, California 93524. Distribution A: approved for public release. Dates covered: from February 2016.
Powell, Laurie Ehlhardt; Glang, Ann; Ettel, Deborah; Todis, Bonnie; Sohlberg, McKay; Albin, Richard
2012-01-01
The goal of this study was to experimentally evaluate systematic instruction compared with trial-and-error learning (conventional instruction) applied to assistive technology for cognition (ATC), in a double blind, pretest-posttest, randomized controlled trial. Twenty-nine persons with moderate-severe cognitive impairments due to acquired brain injury (15 in systematic instruction group; 14 in conventional instruction) completed the study. Both groups received 12, 45-minute individual training sessions targeting selected skills on the Palm Tungsten E2 personal digital assistant (PDA). A criterion-based assessment of PDA skills was used to evaluate accuracy, fluency/efficiency, maintenance, and generalization of skills. There were no significant differences between groups at immediate posttest with regard to accuracy and fluency. However, significant differences emerged at 30-day follow-up in favor of systematic instruction. Furthermore, systematic instruction participants performed significantly better at immediate posttest generalizing trained PDA skills when interacting with people other than the instructor. These results demonstrate that systematic instruction applied to ATC results in better skill maintenance and generalization than trial-and-error learning for individuals with moderate-severe cognitive impairments due to acquired brain injury. Implications, study limitations, and directions for future research are discussed. PMID:22264146
Systematic review of the evidence for Trails B cut-off scores in assessing fitness-to-drive.
Roy, Mononita; Molnar, Frank
2013-01-01
Fitness-to-drive guidelines recommend employing the Trail Making B Test (a.k.a. Trails B), but do not provide guidance regarding cut-off scores. There is ongoing debate regarding the optimal cut-off score on the Trails B test. The objective of this study was to address this controversy by systematically reviewing the evidence for specific Trails B cut-off scores (e.g., cut-offs in both time to completion and number of errors) with respect to fitness-to-drive. Systematic review of all prospective cohort, retrospective cohort, case-control, correlation, and cross-sectional studies reporting the ability of the Trails B to predict driving safety that were published in English-language, peer-reviewed journals. Forty-seven articles were reviewed. None of the articles justified sample sizes via formal calculations. Cut-off scores reported based on research include: 90 seconds, 133 seconds, 147 seconds, 180 seconds, and < 3 errors. There is support for the previously published Trails B cut-offs of 3 minutes or 3 errors (the '3 or 3 rule'). Major methodological limitations of this body of research were uncovered including (1) lack of justification of sample size leaving studies open to Type II error (i.e., false negative findings), and (2) excessive focus on associations rather than clinically useful cut-off scores.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kirchhoff, William H.
2012-09-15
The extended logistic function provides a physically reasonable description of interfaces such as depth profiles or line scans of surface topological or compositional features. It describes these interfaces with the minimum number of parameters, namely, position, width, and asymmetry. Logistic Function Profile Fit (LFPF) is a robust, least-squares fitting program in which the nonlinear extended logistic function is linearized by a Taylor series expansion (equivalent to a Newton-Raphson approach) with no apparent introduction of bias in the analysis. The program provides reliable confidence limits for the parameters when systematic errors are minimal and provides a display of the residuals from the fit for the detection of systematic errors. The program will aid researchers in applying ASTM E1636-10, 'Standard practice for analytically describing sputter-depth-profile and linescan-profile data by an extended logistic function,' and may also prove useful in applying ISO 18516:2006, 'Surface chemical analysis-Auger electron spectroscopy and x-ray photoelectron spectroscopy-determination of lateral resolution.' Examples are given of LFPF fits to a secondary ion mass spectrometry depth profile, an Auger surface line scan, and synthetic data generated to exhibit known systematic errors for examining the significance of such errors to the extrapolation of partial profiles.
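As a rough illustration of this kind of interface fitting (not the LFPF program itself), the sketch below fits a plain four-parameter logistic to a synthetic depth profile with SciPy; the asymmetry parameter of the extended form in ASTM E1636 is omitted, and all values are synthetic.

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(x, y_pre, y_post, x0, w):
    """Simple logistic interface profile with position x0 and width w."""
    return y_pre + (y_post - y_pre) / (1.0 + np.exp(-(x - x0) / w))

x = np.linspace(0.0, 100.0, 200)                                   # depth (arbitrary units)
y = logistic(x, 1.0, 0.05, 45.0, 3.0) \
    + np.random.default_rng(1).normal(0.0, 0.01, x.size)           # synthetic profile

popt, pcov = curve_fit(logistic, x, y, p0=[1.0, 0.0, 50.0, 5.0])
perr = np.sqrt(np.diag(pcov))                                      # approximate 1-sigma limits
residuals = y - logistic(x, *popt)                                 # inspect for systematic error
```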
Chiral extrapolation of the leading hadronic contribution to the muon anomalous magnetic moment
NASA Astrophysics Data System (ADS)
Golterman, Maarten; Maltman, Kim; Peris, Santiago
2017-04-01
A lattice computation of the leading-order hadronic contribution to the muon anomalous magnetic moment can potentially help reduce the error on the Standard Model prediction for this quantity, if sufficient control of all systematic errors affecting such a computation can be achieved. One of these systematic errors is that associated with the extrapolation to the physical pion mass from values on the lattice larger than the physical pion mass. We investigate this extrapolation assuming lattice pion masses in the range of 200 to 400 MeV with the help of two-loop chiral perturbation theory, and we find that such an extrapolation is unlikely to lead to control of this systematic error at the 1% level. This remains true even if various tricks to improve the reliability of the chiral extrapolation employed in the literature are taken into account. In addition, while chiral perturbation theory also predicts the dependence on the pion mass of the leading-order hadronic contribution to the muon anomalous magnetic moment as the chiral limit is approached, this prediction turns out to be of no practical use because the physical pion mass is larger than the muon mass that sets the scale for the onset of this behavior.
A proposed method to investigate reliability throughout a questionnaire.
Wentzel-Larsen, Tore; Norekvål, Tone M; Ulvik, Bjørg; Nygård, Ottar; Pripp, Are H
2011-10-05
Questionnaires are used extensively in medical and health care research and depend on validity and reliability. However, participants may differ in interest and awareness throughout long questionnaires, which can affect reliability of their answers. A method is proposed for "screening" of systematic change in random error, which could assess changed reliability of answers. A simulation study was conducted to explore whether systematic change in reliability, expressed as changed random error, could be assessed using unsupervised classification of subjects by cluster analysis (CA) and estimation of intraclass correlation coefficient (ICC). The method was also applied to a clinical dataset from 753 cardiac patients using the Jalowiec Coping Scale. The simulation study showed a relationship between the systematic change in random error throughout a questionnaire and the slope between the estimated ICC for subjects classified by CA and successive items in a questionnaire. This slope was proposed as an awareness measure for assessing whether respondents provide only a random answer or one based on a substantial cognitive effort. Scales from different factor structures of the Jalowiec Coping Scale had different effects on this awareness measure. Even though assumptions in the simulation study might be limited compared to real datasets, the approach is promising for assessing systematic change in reliability throughout long questionnaires. Results from a clinical dataset indicated that the awareness measure differed between scales.
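A minimal sketch of the screening idea, assuming a subjects-by-items answer matrix: cluster the subjects, compute a one-way ICC across clusters for each successive item, and take the slope of ICC against item position as the awareness measure. The clustering choice, ICC variant and data below are illustrative assumptions, not the authors' exact procedure.

```python
import numpy as np
from scipy.cluster.vq import kmeans2

def icc_oneway(values, groups):
    """One-way random-effects ICC(1) of values grouped by cluster label."""
    labels = np.unique(groups)
    k, N = labels.size, values.size
    n = np.array([(groups == g).sum() for g in labels])
    means = np.array([values[groups == g].mean() for g in labels])
    grand = values.mean()
    msb = np.sum(n * (means - grand) ** 2) / (k - 1)
    msw = sum(((values[groups == g] - m) ** 2).sum()
              for g, m in zip(labels, means)) / (N - k)
    n0 = (N - np.sum(n**2) / N) / (k - 1)
    return (msb - msw) / (msb + (n0 - 1) * msw)

rng = np.random.default_rng(7)
answers = rng.normal(0, 1, (200, 20)) + rng.normal(0, 1, (200, 1))   # subjects x items
_, cluster = kmeans2(answers, 4, minit='++')

iccs = np.array([icc_oneway(answers[:, j], cluster) for j in range(answers.shape[1])])
slope = np.polyfit(np.arange(iccs.size), iccs, 1)[0]                 # proposed awareness measure
```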
A Systematic Error Correction Method for TOVS Radiances
NASA Technical Reports Server (NTRS)
Joiner, Joanna; Rokke, Laurie; Einaudi, Franco (Technical Monitor)
2000-01-01
Treatment of systematic errors is crucial for the successful use of satellite data in a data assimilation system. Systematic errors in TOVS radiance measurements and radiative transfer calculations can be as large or larger than random instrument errors. The usual assumption in data assimilation is that observational errors are unbiased. If biases are not effectively removed prior to assimilation, the impact of satellite data will be lessened and can even be detrimental. Treatment of systematic errors is important for short-term forecast skill as well as the creation of climate data sets. A systematic error correction algorithm has been developed as part of a 1D radiance assimilation. This scheme corrects for spectroscopic errors, errors in the instrument response function, and other biases in the forward radiance calculation for TOVS. Such algorithms are often referred to as tuning of the radiances. The scheme is able to account for the complex, air-mass dependent biases that are seen in the differences between TOVS radiance observations and forward model calculations. We will show results of systematic error correction applied to the NOAA 15 Advanced TOVS as well as its predecessors. We will also discuss the ramifications of inter-instrument bias with a focus on stratospheric measurements.
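Radiance "tuning" of this kind is usually implemented as a regression of observed-minus-calculated departures onto a handful of air-mass predictors. The sketch below is a generic predictor-based correction in Python, not the specific scheme described above; predictor names and shapes are assumptions.

```python
import numpy as np

def fit_bias_correction(predictors, obs_minus_calc):
    """Least-squares fit of O-C radiance departures to bias predictors.

    predictors     : array (n_obs, n_pred), e.g. layer thicknesses, scan angle
    obs_minus_calc : array (n_obs,) of observed minus computed radiances
    """
    X = np.column_stack([np.ones(len(obs_minus_calc)), predictors])
    coeffs, *_ = np.linalg.lstsq(X, obs_minus_calc, rcond=None)
    return coeffs

def predicted_bias(predictors, coeffs):
    """Air-mass dependent bias to subtract from new observations."""
    X = np.column_stack([np.ones(len(predictors)), predictors])
    return X @ coeffs
```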
Improving Photometry and Stellar Signal Preservation with Pixel-Level Systematic Error Correction
NASA Technical Reports Server (NTRS)
Kolodzijczak, Jeffrey J.; Smith, Jeffrey C.; Jenkins, Jon M.
2013-01-01
The Kepler Mission has demonstrated that excellent stellar photometric performance can be achieved using apertures constructed from optimally selected CCD pixels. The clever methods used to correct for systematic errors, while very successful, still have some limitations in their ability to extract long-term trends in stellar flux. They also leave poorly correlated bias sources, such as drifting moiré pattern, uncorrected. We will illustrate several approaches where applying systematic error correction algorithms to the pixel time series, rather than the co-added raw flux time series, provides significant advantages. Examples include spatially localized determination of time-varying moiré pattern biases, greater sensitivity to radiation-induced pixel sensitivity drops (SPSDs), improved precision of co-trending basis vectors (CBV), and a means of distinguishing the stellar variability from co-trending terms even when they are correlated. For the last item, the approach enables physical interpretation of appropriately scaled coefficients derived in the fit of pixel time series to the CBV as linear combinations of various spatial derivatives of the pixel response function (PRF). We demonstrate that the residuals of a fit of so-derived pixel coefficients to various PRF-related components can be deterministically interpreted in terms of physically meaningful quantities, such as the component of the stellar flux time series which is correlated with the CBV, as well as relative pixel gain, proper motion and parallax. The approach also enables us to parameterize and assess the limiting factors in the uncertainties in these quantities.
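The core numerical step, fitting every pixel time series against a set of co-trending basis vectors rather than fitting only the aperture-summed flux, reduces to a multi-right-hand-side least-squares problem. A minimal sketch, with hypothetical array shapes:

```python
import numpy as np

def fit_pixels_to_cbvs(pixel_ts, cbvs):
    """Fit each pixel time series to co-trending basis vectors by least squares.

    pixel_ts : array (n_pixels, n_cadences) of calibrated pixel values
    cbvs     : array (n_cbv, n_cadences) of co-trending basis vectors
    Returns per-pixel coefficients of shape (n_pixels, n_cbv).
    """
    A = cbvs.T                                            # (n_cadences, n_cbv) design matrix
    coeffs, *_ = np.linalg.lstsq(A, pixel_ts.T, rcond=None)
    return coeffs.T

def pixel_residuals(pixel_ts, cbvs, coeffs):
    """Residual pixel time series after removing the co-trended component."""
    return pixel_ts - coeffs @ cbvs
```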
A Bayesian Approach to Systematic Error Correction in Kepler Photometric Time Series
NASA Astrophysics Data System (ADS)
Jenkins, Jon Michael; VanCleve, J.; Twicken, J. D.; Smith, J. C.; Kepler Science Team
2011-01-01
In order for the Kepler mission to achieve its required 20 ppm photometric precision for 6.5 hr observations of 12th magnitude stars, the Presearch Data Conditioning (PDC) software component of the Kepler Science Processing Pipeline must reduce systematic errors in flux time series to the limit of stochastic noise for errors with time-scales less than three days, without smoothing or over-fitting away the transits that Kepler seeks. The current version of PDC co-trends against ancillary engineering data and Pipeline-generated data using essentially a least squares (LS) approach. This approach is successful for quiet stars when all sources of systematic error have been identified. If the stars are intrinsically variable or some sources of systematic error are unknown, LS will nonetheless attempt to explain all of a given time series, not just the part the model can explain well. Negative consequences can include loss of astrophysically interesting signal, and injection of high-frequency noise into the result. As a remedy, we present a Bayesian Maximum A Posteriori (MAP) approach, in which a subset of intrinsically quiet and highly-correlated stars is used to establish the probability density function (PDF) of robust fit parameters in a diagonalized basis. The PDFs then determine a "reasonable" range for the fit parameters for all stars, and brake the runaway fitting that can distort signals and inject noise. We present a closed-form solution for Gaussian PDFs, and show examples using publicly available Quarter 1 Kepler data. A companion poster (Van Cleve et al.) shows applications and discusses current work in more detail. Kepler was selected as the 10th mission of the Discovery Program. Funding for this mission is provided by NASA, Science Mission Directorate.
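For a linear model with Gaussian noise and a Gaussian prior on the coefficients, the MAP estimate has the closed form shown below; this is a minimal sketch assuming the prior mean and covariance have already been built from quiet, highly correlated stars, with all variable names illustrative.

```python
import numpy as np

def map_coefficients(A, y, prior_mean, prior_cov, noise_var):
    """Closed-form MAP estimate for a linear-Gaussian model.

    A          : (n_cadences, n_basis) design matrix of co-trending vectors
    y          : (n_cadences,) flux time series
    prior_mean : (n_basis,) prior mean of the fit coefficients
    prior_cov  : (n_basis, n_basis) prior covariance
    noise_var  : observation noise variance
    """
    P_inv = np.linalg.inv(prior_cov)
    H = A.T @ A / noise_var + P_inv              # posterior precision
    b = A.T @ y / noise_var + P_inv @ prior_mean
    return np.linalg.solve(H, b)                 # posterior mode (= mean for Gaussians)
```

The prior acts as a brake: where the data constrain a coefficient weakly, the estimate is pulled toward the ensemble-derived prior instead of chasing intrinsic stellar variability.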
Systematic error of the Gaia DR1 TGAS parallaxes from data for the red giant clump
NASA Astrophysics Data System (ADS)
Gontcharov, G. A.
2017-08-01
Based on the Gaia DR1 TGAS parallaxes and photometry from the Tycho-2, Gaia, 2MASS, and WISE catalogues, we have produced a sample of 100 000 clump red giants within 800 pc of the Sun. The systematic variations of the mode of their absolute magnitude as a function of the distance, magnitude, and other parameters have been analyzed. We show that these variations reach 0.7 mag and cannot be explained by variations in the interstellar extinction or intrinsic properties of stars and by selection. The only explanation seems to be a systematic error of the Gaia DR1 TGAS parallax dependent on the square of the observed distance in kpc: 0.18R² mas. Allowance for this error reduces significantly the systematic dependences of the absolute magnitude mode on all parameters. This error reaches 0.1 mas within 800 pc of the Sun and allows an upper limit for the accuracy of the TGAS parallaxes to be estimated as 0.2 mas. A careful allowance for such errors is needed to use clump red giants as "standard candles." This eliminates all discrepancies between the theoretical and empirical estimates of the characteristics of these stars and allows us to obtain the first estimates of the modes of their absolute magnitudes from the Gaia parallaxes: mode(M_H) = -1.49 ± 0.04 mag, mode(M_Ks) = -1.63 ± 0.03 mag, mode(M_W1) = -1.67 ± 0.05 mag, mode(M_W2) = -1.67 ± 0.05 mag, mode(M_W3) = -1.66 ± 0.02 mag, mode(M_W4) = -1.73 ± 0.03 mag, as well as the corresponding estimates of their de-reddened colors.
Effects of waveform model systematics on the interpretation of GW150914
NASA Astrophysics Data System (ADS)
Abbott, B. P.; Abbott, R.; Abbott, T. D.; Abernathy, M. R.; Acernese, F.; Ackley, K.; Adams, C.; Adams, T.; Addesso, P.; Adhikari, R. X.; Adya, V. B.; Affeldt, C.; Agathos, M.; Agatsuma, K.; Aggarwal, N.; Aguiar, O. D.; Aiello, L.; Ain, A.; Ajith, P.; Allen, B.; Allocca, A.; Altin, P. A.; Ananyeva, A.; Anderson, S. B.; Anderson, W. G.; Appert, S.; Arai, K.; Araya, M. C.; Areeda, J. S.; Arnaud, N.; Arun, K. G.; Ascenzi, S.; Ashton, G.; Ast, M.; Aston, S. M.; Astone, P.; Aufmuth, P.; Aulbert, C.; Avila-Alvarez, A.; Babak, S.; Bacon, P.; Bader, M. K. M.; Baker, P. T.; Baldaccini, F.; Ballardin, G.; Ballmer, S. W.; Barayoga, J. C.; E Barclay, S.; Barish, B. C.; Barker, D.; Barone, F.; Barr, B.; Barsotti, L.; Barsuglia, M.; Barta, D.; Bartlett, J.; Bartos, I.; Bassiri, R.; Basti, A.; Batch, J. C.; Baune, C.; Bavigadda, V.; Bazzan, M.; Beer, C.; Bejger, M.; Belahcene, I.; Belgin, M.; Bell, A. S.; Berger, B. K.; Bergmann, G.; Berry, C. P. L.; Bersanetti, D.; Bertolini, A.; Betzwieser, J.; Bhagwat, S.; Bhandare, R.; Bilenko, I. A.; Billingsley, G.; Billman, C. R.; Birch, J.; Birney, R.; Birnholtz, O.; Biscans, S.; Bisht, A.; Bitossi, M.; Biwer, C.; Bizouard, M. A.; Blackburn, J. K.; Blackman, J.; Blair, C. D.; Blair, D. G.; Blair, R. M.; Bloemen, S.; Bock, O.; Boer, M.; Bogaert, G.; Bohe, A.; Bondu, F.; Bonnand, R.; Boom, B. A.; Bork, R.; Boschi, V.; Bose, S.; Bouffanais, Y.; Bozzi, A.; Bradaschia, C.; Brady, P. R.; Braginsky, V. B.; Branchesi, M.; E Brau, J.; Briant, T.; Brillet, A.; Brinkmann, M.; Brisson, V.; Brockill, P.; E Broida, J.; Brooks, A. F.; Brown, D. A.; Brown, D. D.; Brown, N. M.; Brunett, S.; Buchanan, C. C.; Buikema, A.; Bulik, T.; Bulten, H. J.; Buonanno, A.; Buskulic, D.; Buy, C.; Byer, R. L.; Cabero, M.; Cadonati, L.; Cagnoli, G.; Cahillane, C.; Calderón Bustillo, J.; Callister, T. A.; Calloni, E.; Camp, J. B.; Cannon, K. C.; Cao, H.; Cao, J.; Capano, C. D.; Capocasa, E.; Carbognani, F.; Caride, S.; Casanueva Diaz, J.; Casentini, C.; Caudill, S.; Cavaglià, M.; Cavalier, F.; Cavalieri, R.; Cella, G.; Cepeda, C. B.; Cerboni Baiardi, L.; Cerretani, G.; Cesarini, E.; Chamberlin, S. J.; Chan, M.; Chao, S.; Charlton, P.; Chassande-Mottin, E.; Cheeseboro, B. D.; Chen, H. Y.; Chen, Y.; Cheng, H.-P.; Chincarini, A.; Chiummo, A.; Chmiel, T.; Cho, H. S.; Cho, M.; Chow, J. H.; Christensen, N.; Chu, Q.; Chua, A. J. K.; Chua, S.; Chung, S.; Ciani, G.; Clara, F.; Clark, J. A.; Cleva, F.; Cocchieri, C.; Coccia, E.; Cohadon, P.-F.; Colla, A.; Collette, C. G.; Cominsky, L.; Constancio, M., Jr.; Conti, L.; Cooper, S. J.; Corbitt, T. R.; Cornish, N.; Corsi, A.; Cortese, S.; Costa, C. A.; Coughlin, M. W.; Coughlin, S. B.; Coulon, J.-P.; Countryman, S. T.; Couvares, P.; Covas, P. B.; E Cowan, E.; Coward, D. M.; Cowart, M. J.; Coyne, D. C.; Coyne, R.; E Creighton, J. D.; Creighton, T. D.; Cripe, J.; Crowder, S. G.; Cullen, T. J.; Cumming, A.; Cunningham, L.; Cuoco, E.; Dal Canton, T.; Danilishin, S. L.; D'Antonio, S.; Danzmann, K.; Dasgupta, A.; Da Silva Costa, C. F.; Dattilo, V.; Dave, I.; Davier, M.; Davies, G. S.; Davis, D.; Daw, E. J.; Day, B.; Day, R.; De, S.; DeBra, D.; Debreczeni, G.; Degallaix, J.; De Laurentis, M.; Deléglise, S.; Del Pozzo, W.; Denker, T.; Dent, T.; Dergachev, V.; De Rosa, R.; DeRosa, R. T.; DeSalvo, R.; Devenson, J.; Devine, R. C.; Dhurandhar, S.; Díaz, M. C.; Di Fiore, L.; Di Giovanni, M.; Di Girolamo, T.; Di Lieto, A.; Di Pace, S.; Di Palma, I.; Di Virgilio, A.; Doctor, Z.; Dolique, V.; Donovan, F.; Dooley, K. 
L.; Doravari, S.; Dorrington, I.; Douglas, R.; Dovale Álvarez, M.; Downes, T. P.; Drago, M.; Drever, R. W. P.; Driggers, J. C.; Du, Z.; Ducrot, M.; E Dwyer, S.; Edo, T. B.; Edwards, M. C.; Effler, A.; Eggenstein, H.-B.; Ehrens, P.; Eichholz, J.; Eikenberry, S. S.; Eisenstein, R. A.; Essick, R. C.; Etienne, Z.; Etzel, T.; Evans, M.; Evans, T. M.; Everett, R.; Factourovich, M.; Fafone, V.; Fair, H.; Fairhurst, S.; Fan, X.; Farinon, S.; Farr, B.; Farr, W. M.; Fauchon-Jones, E. J.; Favata, M.; Fays, M.; Fehrmann, H.; Fejer, M. M.; Fernández Galiana, A.; Ferrante, I.; Ferreira, E. C.; Ferrini, F.; Fidecaro, F.; Fiori, I.; Fiorucci, D.; Fisher, R. P.; Flaminio, R.; Fletcher, M.; Fong, H.; Forsyth, S. S.; Fournier, J.-D.; Frasca, S.; Frasconi, F.; Frei, Z.; Freise, A.; Frey, R.; Frey, V.; Fries, E. M.; Fritschel, P.; Frolov, V. V.; Fulda, P.; Fyffe, M.; Gabbard, H.; Gadre, B. U.; Gaebel, S. M.; Gair, J. R.; Gammaitoni, L.; Gaonkar, S. G.; Garufi, F.; Gaur, G.; Gayathri, V.; Gehrels, N.; Gemme, G.; Genin, E.; Gennai, A.; George, J.; Gergely, L.; Germain, V.; Ghonge, S.; Ghosh, Abhirup; Ghosh, Archisman; Ghosh, S.; Giaime, J. A.; Giardina, K. D.; Giazotto, A.; Gill, K.; Glaefke, A.; Goetz, E.; Goetz, R.; Gondan, L.; González, G.; Gonzalez Castro, J. M.; Gopakumar, A.; Gorodetsky, M. L.; E Gossan, S.; Gosselin, M.; Gouaty, R.; Grado, A.; Graef, C.; Granata, M.; Grant, A.; Gras, S.; Gray, C.; Greco, G.; Green, A. C.; Groot, P.; Grote, H.; Grunewald, S.; Guidi, G. M.; Guo, X.; Gupta, A.; Gupta, M. K.; E Gushwa, K.; Gustafson, E. K.; Gustafson, R.; Hacker, J. J.; Hall, B. R.; Hall, E. D.; Hammond, G.; Haney, M.; Hanke, M. M.; Hanks, J.; Hanna, C.; Hannam, M. D.; Hanson, J.; Hardwick, T.; Harms, J.; Harry, G. M.; Harry, I. W.; Hart, M. J.; Hartman, M. T.; Haster, C.-J.; Haughian, K.; Healy, J.; Heidmann, A.; Heintze, M. C.; Heitmann, H.; Hello, P.; Hemming, G.; Hendry, M.; Heng, I. S.; Hennig, J.; Henry, J.; Heptonstall, A. W.; Heurs, M.; Hild, S.; Hoak, D.; Hofman, D.; Holt, K.; E Holz, D.; Hopkins, P.; Hough, J.; Houston, E. A.; Howell, E. J.; Hu, Y. M.; Huerta, E. A.; Huet, D.; Hughey, B.; Husa, S.; Huttner, S. H.; Huynh-Dinh, T.; Indik, N.; Ingram, D. R.; Inta, R.; Isa, H. N.; Isac, J.-M.; Isi, M.; Isogai, T.; Iyer, B. R.; Izumi, K.; Jacqmin, T.; Jani, K.; Jaranowski, P.; Jawahar, S.; Jiménez-Forteza, F.; Johnson, W. W.; Jones, D. I.; Jones, R.; Jonker, R. J. G.; Ju, L.; Junker, J.; Kalaghatgi, C. V.; Kalogera, V.; Kandhasamy, S.; Kang, G.; Kanner, J. B.; Karki, S.; Karvinen, K. S.; Kasprzack, M.; Katsavounidis, E.; Katzman, W.; Kaufer, S.; Kaur, T.; Kawabe, K.; Kéfélian, F.; Keitel, D.; Kelley, D. B.; Kennedy, R.; Key, J. S.; Khalili, F. Y.; Khan, I.; Khan, S.; Khan, Z.; Khazanov, E. A.; Kijbunchoo, N.; Kim, Chunglee; Kim, J. C.; Kim, Whansun; Kim, W.; Kim, Y.-M.; Kimbrell, S. J.; King, E. J.; King, P. J.; Kirchhoff, R.; Kissel, J. S.; Klein, B.; Kleybolte, L.; Klimenko, S.; Koch, P.; Koehlenbeck, S. M.; Koley, S.; Kondrashov, V.; Kontos, A.; Korobko, M.; Korth, W. Z.; Kowalska, I.; Kozak, D. B.; Krämer, C.; Kringel, V.; Krishnan, B.; Królak, A.; Kuehn, G.; Kumar, P.; Kumar, R.; Kuo, L.; Kutynia, A.; Lackey, B. D.; Landry, M.; Lang, R. N.; Lange, J.; Lantz, B.; Lanza, R. K.; Lartaux-Vollard, A.; Lasky, P. D.; Laxen, M.; Lazzarini, A.; Lazzaro, C.; Leaci, P.; Leavey, S.; Lebigot, E. O.; Lee, C. H.; Lee, H. K.; Lee, H. M.; Lee, K.; Lehmann, J.; Lenon, A.; Leonardi, M.; Leong, J. R.; Leroy, N.; Letendre, N.; Levin, Y.; Li, T. G. F.; Libson, A.; Littenberg, T. B.; Liu, J.; Lockerbie, N. 
A.; Lombardi, A. L.; London, L. T.; E Lord, J.; Lorenzini, M.; Loriette, V.; Lormand, M.; Losurdo, G.; Lough, J. D.; Lovelace, G.; Lück, H.; Lundgren, A. P.; Lynch, R.; Ma, Y.; Macfoy, S.; Machenschalk, B.; MacInnis, M.; Macleod, D. M.; Magaña-Sandoval, F.; Majorana, E.; Maksimovic, I.; Malvezzi, V.; Man, N.; Mandic, V.; Mangano, V.; Mansell, G. L.; Manske, M.; Mantovani, M.; Marchesoni, F.; Marion, F.; Márka, S.; Márka, Z.; Markosyan, A. S.; Maros, E.; Martelli, F.; Martellini, L.; Martin, I. W.; Martynov, D. V.; Mason, K.; Masserot, A.; Massinger, T. J.; Masso-Reid, M.; Mastrogiovanni, S.; Matichard, F.; Matone, L.; Mavalvala, N.; Mazumder, N.; McCarthy, R.; E McClelland, D.; McCormick, S.; McGrath, C.; McGuire, S. C.; McIntyre, G.; McIver, J.; McManus, D. J.; McRae, T.; McWilliams, S. T.; Meacher, D.; Meadors, G. D.; Meidam, J.; Melatos, A.; Mendell, G.; Mendoza-Gandara, D.; Mercer, R. A.; Merilh, E. L.; Merzougui, M.; Meshkov, S.; Messenger, C.; Messick, C.; Metzdorff, R.; Meyers, P. M.; Mezzani, F.; Miao, H.; Michel, C.; Middleton, H.; E Mikhailov, E.; Milano, L.; Miller, A. L.; Miller, A.; Miller, B. B.; Miller, J.; Millhouse, M.; Minenkov, Y.; Ming, J.; Mirshekari, S.; Mishra, C.; Mitra, S.; Mitrofanov, V. P.; Mitselmakher, G.; Mittleman, R.; Moggi, A.; Mohan, M.; Mohapatra, S. R. P.; Montani, M.; Moore, B. C.; Moore, C. J.; Moraru, D.; Moreno, G.; Morriss, S. R.; Mours, B.; Mow-Lowry, C. M.; Mueller, G.; Muir, A. W.; Mukherjee, Arunava; Mukherjee, D.; Mukherjee, S.; Mukund, N.; Mullavey, A.; Munch, J.; Muniz, E. A. M.; Murray, P. G.; Mytidis, A.; Napier, K.; Nardecchia, I.; Naticchioni, L.; Nelemans, G.; Nelson, T. J. N.; Neri, M.; Nery, M.; Neunzert, A.; Newport, J. M.; Newton, G.; Nguyen, T. T.; Nielsen, A. B.; Nissanke, S.; Nitz, A.; Noack, A.; Nocera, F.; Nolting, D.; Normandin, M. E. N.; Nuttall, L. K.; Oberling, J.; Ochsner, E.; Oelker, E.; Ogin, G. H.; Oh, J. J.; Oh, S. H.; Ohme, F.; Oliver, M.; Oppermann, P.; Oram, Richard J.; O'Reilly, B.; O'Shaughnessy, R.; Ottaway, D. J.; Overmier, H.; Owen, B. J.; E Pace, A.; Page, J.; Pai, A.; Pai, S. A.; Palamos, J. R.; Palashov, O.; Palomba, C.; Pal-Singh, A.; Pan, H.; Pankow, C.; Pannarale, F.; Pant, B. C.; Paoletti, F.; Paoli, A.; Papa, M. A.; Paris, H. R.; Parker, W.; Pascucci, D.; Pasqualetti, A.; Passaquieti, R.; Passuello, D.; Patricelli, B.; Pearlstone, B. L.; Pedraza, M.; Pedurand, R.; Pekowsky, L.; Pele, A.; Penn, S.; Perez, C. J.; Perreca, A.; Perri, L. M.; Pfeiffer, H. P.; Phelps, M.; Piccinni, O. J.; Pichot, M.; Piergiovanni, F.; Pierro, V.; Pillant, G.; Pinard, L.; Pinto, I. M.; Pitkin, M.; Poe, M.; Poggiani, R.; Popolizio, P.; Post, A.; Powell, J.; Prasad, J.; Pratt, J. W. W.; Predoi, V.; Prestegard, T.; Prijatelj, M.; Principe, M.; Privitera, S.; Prodi, G. A.; Prokhorov, L. G.; Puncken, O.; Punturo, M.; Puppo, P.; Pürrer, M.; Qi, H.; Qin, J.; Qiu, S.; Quetschke, V.; Quintero, E. A.; Quitzow-James, R.; Raab, F. J.; Rabeling, D. S.; Radkins, H.; Raffai, P.; Raja, S.; Rajan, C.; Rakhmanov, M.; Rapagnani, P.; Raymond, V.; Razzano, M.; Re, V.; Read, J.; Regimbau, T.; Rei, L.; Reid, S.; Reitze, D. H.; Rew, H.; Reyes, S. D.; Rhoades, E.; Ricci, F.; Riles, K.; Rizzo, M.; Robertson, N. A.; Robie, R.; Robinet, F.; Rocchi, A.; Rolland, L.; Rollins, J. G.; Roma, V. J.; Romano, J. D.; Romano, R.; Romie, J. H.; Rosińska, D.; Rowan, S.; Rüdiger, A.; Ruggi, P.; Ryan, K.; Sachdev, S.; Sadecki, T.; Sadeghian, L.; Sakellariadou, M.; Salconi, L.; Saleem, M.; Salemi, F.; Samajdar, A.; Sammut, L.; Sampson, L. M.; Sanchez, E. 
J.; Sandberg, V.; Sanders, J. R.; Sassolas, B.; Sathyaprakash, B. S.; Saulson, P. R.; Sauter, O.; Savage, R. L.; Sawadsky, A.; Schale, P.; Scheuer, J.; Schmidt, E.; Schmidt, J.; Schmidt, P.; Schnabel, R.; Schofield, R. M. S.; Schönbeck, A.; Schreiber, E.; Schuette, D.; Schutz, B. F.; Schwalbe, S. G.; Scott, J.; Scott, S. M.; Sellers, D.; Sengupta, A. S.; Sentenac, D.; Sequino, V.; Sergeev, A.; Setyawati, Y.; Shaddock, D. A.; Shaffer, T. J.; Shahriar, M. S.; Shapiro, B.; Shawhan, P.; Sheperd, A.; Shoemaker, D. H.; Shoemaker, D. M.; Siellez, K.; Siemens, X.; Sieniawska, M.; Sigg, D.; Silva, A. D.; Singer, A.; Singer, L. P.; Singh, A.; Singh, R.; Singhal, A.; Sintes, A. M.; Slagmolen, B. J. J.; Smith, B.; Smith, J. R.; E Smith, R. J.; Son, E. J.; Sorazu, B.; Sorrentino, F.; Souradeep, T.; Spencer, A. P.; Srivastava, A. K.; Staley, A.; Steinke, M.; Steinlechner, J.; Steinlechner, S.; Steinmeyer, D.; Stephens, B. C.; Stevenson, S. P.; Stone, R.; Strain, K. A.; Straniero, N.; Stratta, G.; E Strigin, S.; Sturani, R.; Stuver, A. L.; Summerscales, T. Z.; Sun, L.; Sunil, S.; Sutton, P. J.; Swinkels, B. L.; Szczepańczyk, M. J.; Tacca, M.; Talukder, D.; Tanner, D. B.; Tápai, M.; Taracchini, A.; Taylor, R.; Theeg, T.; Thomas, E. G.; Thomas, M.; Thomas, P.; Thorne, K. A.; Thrane, E.; Tippens, T.; Tiwari, S.; Tiwari, V.; Tokmakov, K. V.; Toland, K.; Tomlinson, C.; Tonelli, M.; Tornasi, Z.; Torrie, C. I.; Töyrä, D.; Travasso, F.; Traylor, G.; Trifirò, D.; Trinastic, J.; Tringali, M. C.; Trozzo, L.; Tse, M.; Tso, R.; Turconi, M.; Tuyenbayev, D.; Ugolini, D.; Unnikrishnan, C. S.; Urban, A. L.; Usman, S. A.; Vahlbruch, H.; Vajente, G.; Valdes, G.; van Bakel, N.; van Beuzekom, M.; van den Brand, J. F. J.; Van Den Broeck, C.; Vander-Hyde, D. C.; van der Schaaf, L.; van Heijningen, J. V.; van Veggel, A. A.; Vardaro, M.; Varma, V.; Vass, S.; Vasúth, M.; Vecchio, A.; Vedovato, G.; Veitch, J.; Veitch, P. J.; Venkateswara, K.; Venugopalan, G.; Verkindt, D.; Vetrano, F.; Viceré, A.; Viets, A. D.; Vinciguerra, S.; Vine, D. J.; Vinet, J.-Y.; Vitale, S.; Vo, T.; Vocca, H.; Vorvick, C.; Voss, D. V.; Vousden, W. D.; Vyatchanin, S. P.; Wade, A. R.; E Wade, L.; Wade, M.; Walker, M.; Wallace, L.; Walsh, S.; Wang, G.; Wang, H.; Wang, M.; Wang, Y.; Ward, R. L.; Warner, J.; Was, M.; Watchi, J.; Weaver, B.; Wei, L.-W.; Weinert, M.; Weinstein, A. J.; Weiss, R.; Wen, L.; Weßels, P.; Westphal, T.; Wette, K.; Whelan, J. T.; Whiting, B. F.; Whittle, C.; Williams, D.; Williams, R. D.; Williamson, A. R.; Willis, J. L.; Willke, B.; Wimmer, M. H.; Winkler, W.; Wipf, C. C.; Wittel, H.; Woan, G.; Woehler, J.; Worden, J.; Wright, J. L.; Wu, D. S.; Wu, G.; Yam, W.; Yamamoto, H.; Yancey, C. C.; Yap, M. J.; Yu, Hang; Yu, Haocun; Yvert, M.; Zadrożny, A.; Zangrando, L.; Zanolin, M.; Zendri, J.-P.; Zevin, M.; Zhang, L.; Zhang, M.; Zhang, T.; Zhang, Y.; Zhao, C.; Zhou, M.; Zhou, Z.; Zhu, S. J.; Zhu, X. J.; E Zucker, M.; Zweizig, J.; LIGO Scientific Collaboration; Virgo Collaboration; Boyle, M.; Chu, T.; Hemberger, D.; Hinder, I.; E Kidder, L.; Ossokine, S.; Scheel, M.; Szilagyi, B.; Teukolsky, S.; Vano Vinuales, A.
2017-05-01
Parameter estimates of GW150914 were obtained using Bayesian inference, based on three semi-analytic waveform models for binary black hole coalescences. These waveform models differ from each other in their treatment of black hole spins, and all three models make some simplifying assumptions, notably to neglect sub-dominant waveform harmonic modes and orbital eccentricity. Furthermore, while the models are calibrated to agree with waveforms obtained by full numerical solutions of Einstein’s equations, any such calibration is accurate only to some non-zero tolerance and is limited by the accuracy of the underlying phenomenology, availability, quality, and parameter-space coverage of numerical simulations. This paper complements the original analyses of GW150914 with an investigation of the effects of possible systematic errors in the waveform models on estimates of its source parameters. To test for systematic errors we repeat the original Bayesian analysis on mock signals from numerical simulations of a series of binary configurations with parameters similar to those found for GW150914. Overall, we find no evidence for a systematic bias relative to the statistical error of the original parameter recovery of GW150914 due to modeling approximations or modeling inaccuracies. However, parameter biases are found to occur for some configurations disfavored by the data of GW150914: for binaries inclined edge-on to the detector over a small range of choices of polarization angles, and also for eccentricities greater than ˜0.05. For signals with higher signal-to-noise ratio than GW150914, or in other regions of the binary parameter space (lower masses, larger mass ratios, or higher spins), we expect that systematic errors in current waveform models may impact gravitational-wave measurements, making more accurate models desirable for future observations.
More on Systematic Error in a Boyle's Law Experiment
ERIC Educational Resources Information Center
McCall, Richard P.
2012-01-01
A recent article in "The Physics Teacher" describes a method for analyzing a systematic error in a Boyle's law laboratory activity. Systematic errors are important to consider in physics labs because they tend to bias the results of measurements. There are numerous laboratory examples and resources that discuss this common source of error.
Flexible methods for segmentation evaluation: results from CT-based luggage screening.
Karimi, Seemeen; Jiang, Xiaoqian; Cosman, Pamela; Martz, Harry
2014-01-01
Imaging systems used in aviation security include segmentation algorithms in an automatic threat recognition pipeline. The segmentation algorithms evolve in response to emerging threats and changing performance requirements. Analysis of segmentation algorithms' behavior, including the nature of errors and feature recovery, facilitates their development. However, evaluation methods from the literature provide limited characterization of the segmentation algorithms. Our aim was to develop segmentation evaluation methods that measure systematic errors such as oversegmentation and undersegmentation, outliers, and overall errors, that measure feature recovery, and that allow segments to be prioritized. We developed two complementary evaluation methods using statistical techniques and information theory. We also created a semi-automatic method to define ground truth from 3D images. We applied our methods to evaluate five segmentation algorithms developed for CT luggage screening. We validated our methods with synthetic problems and an observer evaluation. Both methods selected the same best segmentation algorithm. Human evaluation confirmed the findings. The measurement of systematic errors and prioritization helped in understanding the behavior of each segmentation algorithm. Our evaluation methods allow us to measure and explain the accuracy of segmentation algorithms.
Sobel, Michael E; Lindquist, Martin A
2014-07-01
Functional magnetic resonance imaging (fMRI) has facilitated major advances in understanding human brain function. Neuroscientists are interested in using fMRI to study the effects of external stimuli on brain activity and causal relationships among brain regions, but have not stated what is meant by causation or defined the effects they purport to estimate. Building on Rubin's causal model, we construct a framework for causal inference using blood oxygenation level dependent (BOLD) fMRI time series data. In the usual statistical literature on causal inference, potential outcomes, assumed to be measured without systematic error, are used to define unit and average causal effects. However, in general the potential BOLD responses are measured with stimulus dependent systematic error. Thus we define unit and average causal effects that are free of systematic error. In contrast to the usual case of a randomized experiment where adjustment for intermediate outcomes leads to biased estimates of treatment effects (Rosenbaum, 1984), here the failure to adjust for task dependent systematic error leads to biased estimates. We therefore adjust for systematic error using measured "noise covariates", employing a linear mixed model to estimate the effects and the systematic error. Our results are important for neuroscientists, who typically do not adjust for systematic error. They should also prove useful to researchers in other areas where responses are measured with error and in fields where large amounts of data are collected on relatively few subjects. To illustrate our approach, we re-analyze data from a social evaluative threat task, comparing the findings with results that ignore systematic error.
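The adjustment step can be prototyped with an off-the-shelf linear mixed model. The sketch below uses statsmodels on synthetic data with one noise covariate and a random intercept per subject; it is a schematic of the adjustment idea, not the authors' model specification.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_sub, n_obs = 10, 50
subject = np.repeat(np.arange(n_sub), n_obs)
task = rng.binomial(1, 0.5, n_sub * n_obs).astype(float)      # task regressor
noise = rng.normal(0.0, 1.0, n_sub * n_obs)                   # measured noise covariate
bold = 0.5 * task + 0.8 * noise + rng.normal(0.0, 1.0, n_sub * n_obs)

data = pd.DataFrame({"bold": bold, "task": task, "noise": noise, "subject": subject})

# Task effect estimated while adjusting for the noise covariate that proxies
# the stimulus-dependent systematic error; random intercept per subject.
result = smf.mixedlm("bold ~ task + noise", data, groups=data["subject"]).fit()
print(result.params)
```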
Technical Note: Introduction of variance component analysis to setup error analysis in radiotherapy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Matsuo, Yukinori, E-mail: ymatsuo@kuhp.kyoto-u.ac.
Purpose: The purpose of this technical note is to introduce variance component analysis to the estimation of systematic and random components in setup error of radiotherapy. Methods: Balanced data according to the one-factor random effect model were assumed. Results: Analysis-of-variance (ANOVA)-based computation was applied to estimate the values and their confidence intervals (CIs) for systematic and random errors and the population mean of setup errors. The conventional method overestimates systematic error, especially in hypofractionated settings. The CI for systematic error becomes much wider than that for random error. The ANOVA-based estimation can be extended to a multifactor model considering multiple causes of setup errors (e.g., interpatient, interfraction, and intrafraction). Conclusions: Variance component analysis may lead to novel applications to setup error analysis in radiotherapy.
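For balanced data (every patient measured over the same number of fractions), the ANOVA-based estimates of the population mean, systematic SD and random SD follow directly from the between- and within-patient mean squares. A minimal sketch under that one-factor model:

```python
import numpy as np

def setup_variance_components(setup):
    """ANOVA estimates for a balanced one-factor random-effects model.

    setup : array (n_patients, n_fractions) of setup errors along one axis (mm)
    Returns (population mean M, systematic SD Sigma, random SD sigma).
    """
    n_pat, n_frac = setup.shape
    patient_means = setup.mean(axis=1)
    grand_mean = setup.mean()
    msb = n_frac * ((patient_means - grand_mean) ** 2).sum() / (n_pat - 1)
    msw = ((setup - patient_means[:, None]) ** 2).sum() / (n_pat * (n_frac - 1))
    sigma_sys = np.sqrt(max(msb - msw, 0.0) / n_frac)       # negative estimates truncated to 0
    sigma_rand = np.sqrt(msw)
    return grand_mean, sigma_sys, sigma_rand
```

Subtracting the within-patient mean square before dividing by the number of fractions is what keeps the systematic component from being overestimated, the pitfall of the conventional standard-deviation-of-patient-means approach noted above.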
A Comprehensive Radial Velocity Error Budget for Next Generation Doppler Spectrometers
NASA Technical Reports Server (NTRS)
Halverson, Samuel; Ryan, Terrien; Mahadevan, Suvrath; Roy, Arpita; Bender, Chad; Stefansson, Guomundur Kari; Monson, Andrew; Levi, Eric; Hearty, Fred; Blake, Cullen;
2016-01-01
We describe a detailed radial velocity error budget for the NASA-NSF Extreme Precision Doppler Spectrometer instrument concept NEID (NN-explore Exoplanet Investigations with Doppler spectroscopy). Such an instrument performance budget is a necessity for both identifying the variety of noise sources currently limiting Doppler measurements, and estimating the achievable performance of next generation exoplanet hunting Doppler spectrometers. For these instruments, no single source of instrumental error is expected to set the overall measurement floor. Rather, the overall instrumental measurement precision is set by the contribution of many individual error sources. We use a combination of numerical simulations, educated estimates based on published materials, extrapolations of physical models, results from laboratory measurements of spectroscopic subsystems, and informed upper limits for a variety of error sources to identify likely sources of systematic error and construct our global instrument performance error budget. While natively focused on the performance of the NEID instrument, this modular performance budget is immediately adaptable to a number of current and future instruments. Such an approach is an important step in charting a path towards improving Doppler measurement precisions to the levels necessary for discovering Earth-like planets.
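When no single term dominates, an instrument error budget of this kind reduces to a quadrature sum of independent contributions. A toy example with entirely hypothetical term names and values (not NEID numbers):

```python
import numpy as np

# Hypothetical instrumental error terms, in m/s (placeholders only).
terms = {
    "wavelength calibration": 0.10,
    "detector and electronics": 0.08,
    "fiber illumination stability": 0.07,
    "software and algorithms": 0.05,
}

# Independent error sources add in quadrature to give the instrumental floor.
floor = np.sqrt(sum(v**2 for v in terms.values()))
print(f"instrumental RV floor: {floor:.3f} m/s")
```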
Propagation of stage measurement uncertainties to streamflow time series
NASA Astrophysics Data System (ADS)
Horner, Ivan; Le Coz, Jérôme; Renard, Benjamin; Branger, Flora; McMillan, Hilary
2016-04-01
Streamflow uncertainties due to stage measurement errors are generally overlooked in the promising probabilistic approaches that have emerged in the last decade. We introduce an original error model for propagating stage uncertainties through a stage-discharge rating curve within a Bayesian probabilistic framework. The method takes into account both rating curve uncertainty (parametric and structural errors) and stage uncertainty (systematic and non-systematic errors). Practical ways to estimate the different types of stage errors are also presented: (1) non-systematic errors due to instrument resolution and precision and non-stationary waves and (2) systematic errors due to gauge calibration against the staff gauge. The method is illustrated at a site where the rating-curve-derived streamflow can be compared with an accurate streamflow reference. The agreement between the two time series is satisfactory overall. Moreover, the quantification of uncertainty is also satisfactory since the streamflow reference is compatible with the streamflow uncertainty intervals derived from the rating curve and the stage uncertainties. Illustrations from other sites are also presented. Results vary considerably with site features. In some cases, streamflow uncertainty is mainly due to stage measurement errors. The results also show the importance of discriminating systematic and non-systematic stage errors, especially for long term flow averages. Perspectives for improving and validating the streamflow uncertainty estimates are finally discussed.
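The distinction between systematic and non-systematic stage errors matters because they propagate differently: a systematic offset is shared by every time step of a realization, while non-systematic errors are redrawn at each step. The Monte Carlo sketch below illustrates this for a hypothetical power-law rating curve; rating-curve parametric and structural uncertainty, which the method above also handles, are omitted here.

```python
import numpy as np

rng = np.random.default_rng(42)

def rating_curve(h, a=30.0, b=0.2, c=1.6):
    """Hypothetical power-law rating curve Q = a * (h - b)^c."""
    return a * np.clip(h - b, 0.0, None) ** c

stage = np.linspace(0.5, 2.0, 200)                      # recorded stage series (m)
n_mc = 1000

sys_err = rng.normal(0.0, 0.01, (n_mc, 1))              # gauge calibration offset, one per realization
rand_err = rng.normal(0.0, 0.005, (n_mc, stage.size))   # resolution / wave noise, per time step

q_ens = rating_curve(stage + sys_err + rand_err)
q_lo, q_hi = np.percentile(q_ens, [2.5, 97.5], axis=0)  # 95% streamflow uncertainty band
```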
Leyland, M J; Beurskens, M N A; Flanagan, J C; Frassinetti, L; Gibson, K J; Kempenaars, M; Maslov, M; Scannell, R
2016-01-01
The Joint European Torus (JET) high resolution Thomson scattering (HRTS) system measures radial electron temperature and density profiles. One of the key capabilities of this diagnostic is measuring the steep pressure gradient, termed the pedestal, at the edge of JET plasmas. The pedestal is susceptible to limiting instabilities, such as Edge Localised Modes (ELMs), characterised by a periodic collapse of the steep gradient region. A common method to extract the pedestal width, gradient, and height, used on numerous machines, is by performing a modified hyperbolic tangent (mtanh) fit to overlaid profiles selected from the same region of the ELM cycle. This process of overlaying profiles, termed ELM synchronisation, maximises the number of data points defining the pedestal region for a given phase of the ELM cycle. When fitting to HRTS profiles, it is necessary to incorporate the diagnostic radial instrument function, particularly important when considering the pedestal width. A deconvolved fit is determined by a forward convolution method requiring knowledge of only the instrument function and profiles. The systematic error due to the deconvolution technique incorporated into the JET pedestal fitting tool has been documented by Frassinetti et al. [Rev. Sci. Instrum. 83, 013506 (2012)]. This paper seeks to understand and quantify the systematic error introduced to the pedestal width due to ELM synchronisation. Synthetic profiles, generated with error bars and point-to-point variation characteristic of real HRTS profiles, are used to evaluate the deviation from the underlying pedestal width. We find on JET that the ELM synchronisation systematic error is negligible in comparison to the statistical error when assuming ten overlaid profiles (typical for a pre-ELM fit to HRTS profiles). This confirms that fitting a mtanh to ELM synchronised profiles is a robust and practical technique for extracting the pedestal structure.
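For readers unfamiliar with the mtanh fit, the sketch below fits one common mtanh pedestal parameterization to synthetic ELM-synchronised profiles; it deliberately omits the instrument-function deconvolution discussed above, and the parameter values are invented.

```python
import numpy as np
from scipy.optimize import curve_fit

def mtanh(x, slope):
    """Modified tanh: ((1 + slope*x)*exp(x) - exp(-x)) / (exp(x) + exp(-x))."""
    return ((1.0 + slope * x) * np.exp(x) - np.exp(-x)) / (np.exp(x) + np.exp(-x))

def pedestal(r, height, offset, position, width, core_slope):
    """One common mtanh pedestal parameterization (several variants exist)."""
    x = 2.0 * (position - r) / width
    return offset + 0.5 * (height - offset) * (mtanh(x, core_slope) + 1.0)

rng = np.random.default_rng(3)
r = np.tile(np.linspace(0.85, 1.05, 60), 10)            # 10 ELM-synchronised profiles overlaid
te = pedestal(r, 0.8, 0.05, 0.99, 0.03, 0.1) + rng.normal(0.0, 0.02, r.size)

popt, pcov = curve_fit(pedestal, r, te, p0=[1.0, 0.0, 1.0, 0.05, 0.0])
height, offset, position, width, core_slope = popt      # fitted pedestal parameters
```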
Measuring diagnoses: ICD code accuracy.
O'Malley, Kimberly J; Cook, Karon F; Price, Matt D; Wildes, Kimberly Raiford; Hurdle, John F; Ashton, Carol M
2005-10-01
To examine potential sources of errors at each step of the described inpatient International Classification of Diseases (ICD) coding process. The use of disease codes from the ICD has expanded from classifying morbidity and mortality information for statistical purposes to diverse sets of applications in research, health care policy, and health care finance. By describing a brief history of ICD coding, detailing the process for assigning codes, identifying where errors can be introduced into the process, and reviewing methods for examining code accuracy, we help code users more systematically evaluate code accuracy for their particular applications. We summarize the inpatient ICD diagnostic coding process from patient admission to diagnostic code assignment. We examine potential sources of errors at each step and offer code users a tool for systematically evaluating code accuracy. Main error sources along the "patient trajectory" include amount and quality of information at admission, communication among patients and providers, the clinician's knowledge and experience with the illness, and the clinician's attention to detail. Main error sources along the "paper trail" include variance in the electronic and written records, coder training and experience, facility quality-control efforts, and unintentional and intentional coder errors, such as misspecification, unbundling, and upcoding. By clearly specifying the code assignment process and heightening their awareness of potential error sources, code users can better evaluate the applicability and limitations of codes for their particular situations. ICD codes can then be used in the most appropriate ways.
Systematic review of the evidence for Trails B cut-off scores in assessing fitness-to-drive
Roy, Mononita; Molnar, Frank
2013-01-01
Background Fitness-to-drive guidelines recommend employing the Trail Making B Test (a.k.a. Trails B), but do not provide guidance regarding cut-off scores. There is ongoing debate regarding the optimal cut-off score on the Trails B test. The objective of this study was to address this controversy by systematically reviewing the evidence for specific Trails B cut-off scores (e.g., cut-offs in both time to completion and number of errors) with respect to fitness-to-drive. Methods Systematic review of all prospective cohort, retrospective cohort, case-control, correlation, and cross-sectional studies reporting the ability of the Trails B to predict driving safety that were published in English-language, peer-reviewed journals. Results Forty-seven articles were reviewed. None of the articles justified sample sizes via formal calculations. Cut-off scores reported based on research include: 90 seconds, 133 seconds, 147 seconds, 180 seconds, and < 3 errors. Conclusions There is support for the previously published Trails B cut-offs of 3 minutes or 3 errors (the ‘3 or 3 rule’). Major methodological limitations of this body of research were uncovered including (1) lack of justification of sample size leaving studies open to Type II error (i.e., false negative findings), and (2) excessive focus on associations rather than clinically useful cut-off scores. PMID:23983828
Parametric decadal climate forecast recalibration (DeFoReSt 1.0)
NASA Astrophysics Data System (ADS)
Pasternack, Alexander; Bhend, Jonas; Liniger, Mark A.; Rust, Henning W.; Müller, Wolfgang A.; Ulbrich, Uwe
2018-01-01
Near-term climate predictions such as decadal climate forecasts are increasingly being used to guide adaptation measures. For near-term probabilistic predictions to be useful, systematic errors of the forecasting systems have to be corrected. While methods for the calibration of probabilistic forecasts are readily available, these have to be adapted to the specifics of decadal climate forecasts including the long time horizon of decadal climate forecasts, lead-time-dependent systematic errors (drift) and the errors in the representation of long-term changes and variability. These features are compounded by small ensemble sizes to describe forecast uncertainty and a relatively short period for which typically pairs of reforecasts and observations are available to estimate calibration parameters. We introduce the Decadal Climate Forecast Recalibration Strategy (DeFoReSt), a parametric approach to recalibrate decadal ensemble forecasts that takes the above specifics into account. DeFoReSt optimizes forecast quality as measured by the continuous ranked probability score (CRPS). Using a toy model to generate synthetic forecast observation pairs, we demonstrate the positive effect on forecast quality in situations with pronounced and limited predictability. Finally, we apply DeFoReSt to decadal surface temperature forecasts from the MiKlip prototype system and find consistent, and sometimes considerable, improvements in forecast quality compared with a simple calibration of the lead-time-dependent systematic errors.
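The verification metric that DeFoReSt optimizes, the CRPS, has a closed form for a Gaussian forecast. A minimal sketch of the score itself (the recalibration of drift, trend and dispersion described above is not reproduced here):

```python
import numpy as np
from scipy.stats import norm

def crps_gaussian(obs, mu, sigma):
    """Closed-form CRPS of a Gaussian forecast N(mu, sigma^2); lower is better."""
    z = (obs - mu) / sigma
    return sigma * (z * (2.0 * norm.cdf(z) - 1.0)
                    + 2.0 * norm.pdf(z) - 1.0 / np.sqrt(np.pi))

# Example: score a Gaussian forecast against observations (values illustrative).
obs = np.array([0.3, 1.1, -0.4])
print(crps_gaussian(obs, mu=np.array([0.0, 0.9, 0.1]), sigma=np.array([0.5, 0.4, 0.6])))
```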
A proposed method to investigate reliability throughout a questionnaire
2011-01-01
Background Questionnaires are used extensively in medical and health care research and depend on validity and reliability. However, participants may differ in interest and awareness throughout long questionnaires, which can affect the reliability of their answers. A method is proposed for "screening" of systematic change in random error, which could assess changed reliability of answers. Methods A simulation study was conducted to explore whether systematic change in reliability, expressed as changed random error, could be assessed using unsupervised classification of subjects by cluster analysis (CA) and estimation of the intraclass correlation coefficient (ICC). The method was also applied to a clinical dataset from 753 cardiac patients using the Jalowiec Coping Scale. Results The simulation study showed a relationship between the systematic change in random error throughout a questionnaire and the slope between the estimated ICC for subjects classified by CA and successive items in a questionnaire. This slope was proposed as an awareness measure, for assessing whether respondents provide only a random answer or one based on substantial cognitive effort. Scales from different factor structures of the Jalowiec Coping Scale had different effects on this awareness measure. Conclusions Even though assumptions in the simulation study might be limited compared to real datasets, the approach is promising for assessing systematic change in reliability throughout long questionnaires. Results from a clinical dataset indicated that the awareness measure differed between scales. PMID:21974842
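The sketch below is one plausible reading of the proposed screening, under stated assumptions: respondents are grouped by k-means clustering, a one-way ICC is estimated per item with the clusters as groups, and the slope of ICC against item position is taken as the awareness measure. The function names, the ICC(1) estimator, and the equal-group-size trimming are illustrative, not the authors' implementation.

    import numpy as np
    from sklearn.cluster import KMeans

    def icc1(groups):
        """One-way random-effects ICC(1) from a list of equal-sized groups."""
        groups = np.asarray(groups, dtype=float)
        k, n = groups.shape                      # k groups, n subjects per group
        grand = groups.mean()
        msb = n * ((groups.mean(axis=1) - grand) ** 2).sum() / (k - 1)
        msw = ((groups - groups.mean(axis=1, keepdims=True)) ** 2).sum() / (k * (n - 1))
        return (msb - msw) / (msb + (n - 1) * msw)

    def awareness_slope(data, n_clusters=3):
        """Slope of per-item ICC across item position; data is (subjects x items)."""
        labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(data)
        iccs = []
        for j in range(data.shape[1]):
            groups = [data[labels == c, j] for c in range(n_clusters)]
            m = min(len(g) for g in groups)           # trim to equal group sizes
            iccs.append(icc1([g[:m] for g in groups]))
        return np.polyfit(np.arange(len(iccs)), iccs, 1)[0]

    rng = np.random.default_rng(0)
    responses = rng.integers(1, 6, size=(120, 30)).astype(float)   # 120 subjects x 30 items
    print(awareness_slope(responses))

A markedly negative slope would suggest declining answer reliability toward the end of a long questionnaire.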
Characterization and limits of a cold-atom Sagnac interferometer
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gauguet, A.; Canuel, B.; Leveque, T.
2009-12-15
We present the full evaluation of a cold-atom gyroscope based on atom interferometry. We have performed extensive studies to determine the systematic errors, scale factor and sensitivity. We demonstrate that the acceleration noise can be efficiently removed from the rotation signal, allowing us to reach the fundamental limit of the quantum projection noise for short term measurements. The technical limits to the long term sensitivity and accuracy have been identified, clearing the way for the next generation of ultrasensitive atom gyroscopes.
Effect of patient setup errors on simultaneously integrated boost head and neck IMRT treatment plans
DOE Office of Scientific and Technical Information (OSTI.GOV)
Siebers, Jeffrey V.; Keall, Paul J.; Wu Qiuwen
2005-10-01
Purpose: The purpose of this study is to determine dose delivery errors that could result from random and systematic setup errors for head-and-neck patients treated using the simultaneous integrated boost (SIB)-intensity-modulated radiation therapy (IMRT) technique. Methods and Materials: Twenty-four patients who participated in an intramural Phase I/II parotid-sparing IMRT dose-escalation protocol using the SIB treatment technique had their dose distributions reevaluated to assess the impact of random and systematic setup errors. The dosimetric effect of random setup error was simulated by convolving the two-dimensional fluence distribution of each beam with the random setup error probability density distribution. Random setup errors of σ = 1, 3, and 5 mm were simulated. Systematic setup errors were simulated by randomly shifting the patient isocenter along each of the three Cartesian axes, with each shift selected from a normal distribution. Systematic setup error distributions with Σ = 1.5 and 3.0 mm along each axis were simulated. Combined systematic and random setup errors were simulated for Σ = σ = 1.5 and 3.0 mm along each axis. For each dose calculation, the gross tumor volume (GTV) dose received by 98% of the volume (D98), clinical target volume (CTV) D90, nodes D90, cord D2, and parotid D50 and parotid mean dose were evaluated with respect to the plan used for treatment for the structure dose and for an effective planning target volume (PTV) with a 3-mm margin. Results: Simultaneous integrated boost-IMRT head-and-neck treatment plans were found to be less sensitive to random setup errors than to systematic setup errors. For random-only errors, dose errors exceeded 3% only when the random setup error σ exceeded 3 mm. Simulated systematic setup errors with Σ = 1.5 mm resulted in approximately 10% of plans having more than a 3% dose error, whereas Σ = 3.0 mm resulted in half of the plans having more than a 3% dose error and 28% with a 5% dose error. Combined random and systematic setup errors with Σ = σ = 3.0 mm resulted in more than 50% of plans having at least a 3% dose error and 38% of the plans having at least a 5% dose error. Evaluation with respect to a 3-mm expanded PTV reduced the observed dose deviations greater than 5% for the Σ = σ = 3.0 mm simulations to 5.4% of the plans simulated. Conclusions: Head-and-neck SIB-IMRT dosimetric accuracy would benefit from methods to reduce patient systematic setup errors. When GTV, CTV, or nodal volumes are used for dose evaluation, plans simulated including the effects of random and systematic errors deviate substantially from the nominal plan. The use of PTVs for dose evaluation in the nominal plan improves agreement with evaluated GTV, CTV, and nodal dose values under simulated setup errors. PTV concepts should be used for SIB-IMRT head-and-neck squamous cell carcinoma patients, although the size of the margins may be less than those used with three-dimensional conformal radiation therapy.
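A hedged sketch of the two simulation steps described above: blurring a 2D fluence map with a Gaussian random setup-error distribution, and drawing a per-course systematic isocenter shift from a normal distribution. The toy open field, the 1 mm grid, and the array shapes are assumptions for illustration only.

    import numpy as np
    from scipy.ndimage import gaussian_filter, shift

    rng = np.random.default_rng(42)
    fluence = np.zeros((101, 101))
    fluence[30:70, 30:70] = 1.0            # toy open field, 1 mm grid assumed

    # Random setup error: convolve the fluence with a Gaussian PDF (sigma in mm).
    sigma_random_mm = 3.0
    blurred = gaussian_filter(fluence, sigma=sigma_random_mm)

    # Systematic setup error: one isocenter shift per simulated course,
    # drawn from a normal distribution with Sigma = 3.0 mm per axis.
    Sigma_mm = 3.0
    shift_xy = rng.normal(0.0, Sigma_mm, size=2)
    shifted = shift(blurred, shift_xy, order=1, mode="nearest")
    print("applied systematic shift (mm):", np.round(shift_xy, 2))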
NASA Astrophysics Data System (ADS)
Pernot, Pascal; Savin, Andreas
2018-06-01
Benchmarking studies in computational chemistry use reference datasets to assess the accuracy of a method through error statistics. The commonly used error statistics, such as the mean signed and mean unsigned errors, do not inform end-users on the expected amplitude of prediction errors attached to these methods. We show that, the distributions of model errors being neither normal nor zero-centered, these error statistics cannot be used to infer prediction error probabilities. To overcome this limitation, we advocate for the use of more informative statistics, based on the empirical cumulative distribution function of unsigned errors, namely, (1) the probability for a new calculation to have an absolute error below a chosen threshold and (2) the maximal amplitude of errors one can expect with a chosen high confidence level. Those statistics are also shown to be well suited for benchmarking and ranking studies. Moreover, the standard error on all benchmarking statistics depends on the size of the reference dataset. Systematic publication of these standard errors would be very helpful to assess the statistical reliability of benchmarking conclusions.
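The two statistics advocated above follow directly from the empirical distribution of unsigned errors. The sketch below computes both for a synthetic, skewed error sample; the threshold, confidence level, and data are placeholders.

    import numpy as np

    def ecdf_stats(errors, threshold, confidence=0.95):
        """P(|error| < threshold) and the high-confidence bound Q_confidence(|error|)
        from the empirical distribution of unsigned errors."""
        abs_err = np.abs(np.asarray(errors, dtype=float))
        p_below = np.mean(abs_err < threshold)
        q_high = np.quantile(abs_err, confidence)
        return p_below, q_high

    # Toy benchmark errors: skewed and not zero-centered, so the mean unsigned
    # error alone would be a misleading summary of prediction uncertainty.
    errs = np.concatenate([np.random.default_rng(1).normal(1.0, 0.5, 200),
                           np.random.default_rng(2).normal(4.0, 1.0, 20)])
    print(ecdf_stats(errs, threshold=1.0))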
Biases in Planet Occurrence Caused by Unresolved Binaries in Transit Surveys
NASA Astrophysics Data System (ADS)
Bouma, L. G.; Masuda, Kento; Winn, Joshua N.
2018-06-01
Wide-field surveys for transiting planets, such as the NASA Kepler and TESS missions, are usually conducted without knowing which stars have binary companions. Unresolved and unrecognized binaries give rise to systematic errors in planet occurrence rates, including misclassified planets and mistakes in completeness corrections. The individual errors can have different signs, making it difficult to anticipate the net effect on inferred occurrence rates. Here, we use simplified models of signal-to-noise limited transit surveys to try and clarify the situation. We derive a formula for the apparent occurrence rate density measured by an observer who falsely assumes all stars are single. The formula depends on the binary fraction, the mass function of the secondary stars, and the true occurrence of planets around primaries, secondaries, and single stars. It also takes into account the Malmquist bias by which binaries are over-represented in flux-limited samples. Application of the formula to an idealized Kepler-like survey shows that for planets larger than 2 R ⊕, the net systematic error is of order 5%. In particular, unrecognized binaries are unlikely to be the reason for the apparent discrepancies between hot-Jupiter occurrence rates measured in different surveys. For smaller planets the errors are potentially larger: the occurrence of Earth-sized planets could be overestimated by as much as 50%. We also show that whenever high-resolution imaging reveals a transit host star to be a binary, the planet is usually more likely to orbit the primary star than the secondary star.
Allegrini, Maria-Cristina; Canullo, Roberto; Campetella, Giandiego
2009-04-01
Knowledge of accuracy and precision rates is particularly important for long-term studies. Vegetation assessments include many sources of error related to overlooking and misidentification, which are usually influenced by factors such as cover estimate subjectivity, observer-biased species lists, and the experience of the botanist. The vegetation assessment protocol adopted in the Italian forest monitoring programme (CONECOFOR) contains a Quality Assurance (QA) programme. The paper presents the different phases of QA and identifies the 5 main critical points of the whole protocol as sources of random or systematic errors. Examples of Measurement Quality Objectives (MQOs) expressed as Data Quality Limits (DQLs) are given for vascular plant cover estimates, in order to establish the reproducibility of the data. Quality control activities were used to determine the "distance" between the surveyor teams and the control team. Selected data were acquired during the training and inter-calibration courses. In particular, an index of average cover by species groups was used to evaluate the random error (CV 4%) as the dispersion around the "true values" of the control team. The systematic error in the evaluation of species composition, caused by overlooking or misidentification of species, was calculated following the pseudo-turnover rate; detailed species censuses on smaller sampling units were accepted, as the pseudo-turnover always fell below the established 25% threshold; species density scores recorded at community level (100 m² surface) rarely exceeded that limit.
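A minimal sketch of a pseudo-turnover calculation between a surveyor team's species list and the control team's list. It assumes the common definition, pseudo-turnover = (species recorded by only one of the two teams) / (sum of the two species richness values) × 100; the function name and the example species are illustrative.

    def pseudo_turnover(team_species, control_species):
        """Pseudo-turnover (%) between two species lists."""
        a = set(team_species)
        b = set(control_species)
        only_once = len(a - b) + len(b - a)
        return 100.0 * only_once / (len(a) + len(b))

    print(pseudo_turnover({"Quercus cerris", "Rubus hirtus", "Hedera helix"},
                          {"Quercus cerris", "Hedera helix", "Festuca drymeia"}))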
DOE Office of Scientific and Technical Information (OSTI.GOV)
Alderliesten, Tanja; Sonke, Jan-Jakob; Betgen, Anja
2013-02-01
Purpose: To investigate the applicability of 3-dimensional (3D) surface imaging for image guidance in deep-inspiration breath-hold radiation therapy (DIBH-RT) for patients with left-sided breast cancer. For this purpose, setup data based on captured 3D surfaces was compared with setup data based on cone beam computed tomography (CBCT). Methods and Materials: Twenty patients treated with DIBH-RT after breast-conserving surgery (BCS) were included. Before the start of treatment, each patient underwent a breath-hold CT scan for planning purposes. During treatment, dose delivery was preceded by setup verification using CBCT of the left breast. 3D surfaces were captured by a surface imaging system concurrently with the CBCT scan. Retrospectively, surface registrations were performed for CBCT to CT and for a captured 3D surface to CT. The resulting setup errors were compared with linear regression analysis. For the differences between setup errors, group mean, systematic error, random error, and 95% limits of agreement were calculated. Furthermore, receiver operating characteristic (ROC) analysis was performed. Results: Good correlation between setup errors was found: R² = 0.70, 0.90, 0.82 in left-right, craniocaudal, and anterior-posterior directions, respectively. Systematic errors were ≤0.17 cm in all directions. Random errors were ≤0.15 cm. The limits of agreement were -0.34 to 0.48, -0.42 to 0.39, and -0.52 to 0.23 cm in left-right, craniocaudal, and anterior-posterior directions, respectively. ROC analysis showed that a threshold between 0.4 and 0.8 cm corresponds to promising true positive rates (0.78-0.95) and false positive rates (0.12-0.28). Conclusions: The results support the application of 3D surface imaging for image guidance in DIBH-RT after BCS.
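A sketch of the agreement statistics named above for paired setup errors along one axis (3D surface vs. CBCT). It uses a simple Bland-Altman-style operationalization (mean and SD of the paired differences, 95% limits of agreement, regression R²); the study's own decomposition of systematic and random components may differ, and the data here are synthetic.

    import numpy as np

    def agreement_stats(surface_cm, cbct_cm):
        """Agreement metrics for paired setup errors along one axis (cm)."""
        surface = np.asarray(surface_cm, float)
        cbct = np.asarray(cbct_cm, float)
        diff = surface - cbct
        r2 = np.corrcoef(surface, cbct)[0, 1] ** 2
        mean_diff = diff.mean()                    # systematic difference
        sd_diff = diff.std(ddof=1)                 # random difference
        loa = (mean_diff - 1.96 * sd_diff, mean_diff + 1.96 * sd_diff)
        return {"R2": r2, "systematic": mean_diff, "random": sd_diff, "LoA": loa}

    rng = np.random.default_rng(3)
    cbct = rng.normal(0.0, 0.3, 20)
    surface = cbct + rng.normal(0.05, 0.15, 20)    # small offset + extra noise
    print(agreement_stats(surface, cbct))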
Errors in causal inference: an organizational schema for systematic error and random error.
Suzuki, Etsuji; Tsuda, Toshihide; Mitsuhashi, Toshiharu; Mansournia, Mohammad Ali; Yamamoto, Eiji
2016-11-01
To provide an organizational schema for systematic error and random error in estimating causal measures, aimed at clarifying the concept of errors from the perspective of causal inference. We propose to divide systematic error into structural error and analytic error. With regard to random error, our schema shows its four major sources: nondeterministic counterfactuals, sampling variability, a mechanism that generates exposure events, and measurement variability. Structural error is defined from the perspective of counterfactual reasoning and divided into nonexchangeability bias (which comprises confounding bias and selection bias) and measurement bias. Directed acyclic graphs are useful to illustrate this kind of error. Nonexchangeability bias implies a lack of "exchangeability" between the selected exposed and unexposed groups. A lack of exchangeability is not a primary concern of measurement bias, justifying its separation from confounding bias and selection bias. Many forms of analytic error result from the small-sample properties of the estimator used and vanish asymptotically. Analytic error also results from wrong (misspecified) statistical models and inappropriate statistical methods. Our organizational schema is helpful for understanding the relationship between systematic error and random error from a previously less investigated aspect, enabling us to better understand the relationship between accuracy, validity, and precision. Copyright © 2016 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Richter, J.; Mayer, J.; Weigand, B.
2018-02-01
Non-resonant laser-induced thermal acoustics (LITA) was applied to measure Mach number, temperature and turbulence level along the centerline of a transonic nozzle flow. The accuracy of the measurement results was systematically studied regarding misalignment of the interrogation beam and frequency analysis of the LITA signals. 2D steady-state Reynolds-averaged Navier-Stokes (RANS) simulations were performed for reference. The simulations were conducted using ANSYS CFX 18 employing the shear-stress transport turbulence model. Post-processing of the LITA signals was performed by applying a discrete Fourier transform (DFT) to determine the beat frequencies. It is shown that the systematic error of the DFT, which depends on the number of oscillations, signal chirp, and damping rate, is less than 1.5% for our experiments, resulting in an average error of 1.9% for Mach number. Further, the maximum calibration error is investigated for a worst-case scenario involving maximum in situ readjustment of the interrogation beam within the limits of constructive interference. It is shown that the signal intensity becomes zero if the interrogation angle is altered by 2%. This, together with the accuracy of frequency analysis, results in an error of about 5.4% for temperature throughout the nozzle.
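The sketch below illustrates the frequency-analysis step only: a zero-padded DFT of a damped, LITA-like beat signal and extraction of the peak frequency. The sample rate, damping constant, and beat frequency are synthetic assumptions; the small residual error hints at the window- and damping-dependent systematic error of the DFT discussed above.

    import numpy as np

    fs = 2.0e9                                # sample rate (Hz), assumed
    t = np.arange(0, 2e-6, 1 / fs)
    f_beat = 42.0e6                           # true beat frequency (Hz), synthetic
    signal = np.exp(-t / 0.5e-6) * np.cos(2 * np.pi * f_beat * t)

    # Zero-pad to refine the frequency grid before the discrete Fourier transform.
    n_fft = 1 << 18
    spectrum = np.abs(np.fft.rfft(signal * np.hanning(t.size), n=n_fft))
    freqs = np.fft.rfftfreq(n_fft, d=1 / fs)
    f_est = freqs[np.argmax(spectrum)]
    print(f"estimated beat frequency: {f_est/1e6:.2f} MHz "
          f"(error {100*abs(f_est-f_beat)/f_beat:.2f}%)")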
Drought Persistence Errors in Global Climate Models
NASA Astrophysics Data System (ADS)
Moon, H.; Gudmundsson, L.; Seneviratne, S. I.
2018-04-01
The persistence of drought events largely determines the severity of socioeconomic and ecological impacts, but the capability of current global climate models (GCMs) to simulate such events is subject to large uncertainties. In this study, the representation of drought persistence in GCMs is assessed by comparing state-of-the-art GCM model simulations to observation-based data sets. For doing so, we consider dry-to-dry transition probabilities at monthly and annual scales as estimates for drought persistence, where a dry status is defined as negative precipitation anomaly. Though there is a substantial spread in the drought persistence bias, most of the simulations show systematic underestimation of drought persistence at global scale. Subsequently, we analyzed to which degree (i) inaccurate observations, (ii) differences among models, (iii) internal climate variability, and (iv) uncertainty of the employed statistical methods contribute to the spread in drought persistence errors using an analysis of variance approach. The results show that at monthly scale, model uncertainty and observational uncertainty dominate, while the contribution from internal variability is small in most cases. At annual scale, the spread of the drought persistence error is dominated by the statistical estimation error of drought persistence, indicating that the partitioning of the error is impaired by the limited number of considered time steps. These findings reveal systematic errors in the representation of drought persistence in current GCMs and suggest directions for further model improvement.
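A minimal sketch of the drought-persistence estimator used above: the dry-to-dry transition probability from a precipitation anomaly series, with "dry" defined as a negative anomaly. The synthetic series stand in for one model grid cell and one observational product.

    import numpy as np

    def dry_to_dry_probability(precip_anomaly):
        """P(dry at t+1 | dry at t) with 'dry' defined as a negative anomaly."""
        dry = np.asarray(precip_anomaly) < 0
        prev, nxt = dry[:-1], dry[1:]
        return np.count_nonzero(prev & nxt) / max(np.count_nonzero(prev), 1)

    # Monthly anomalies for a model and an observational product would be compared like this:
    rng = np.random.default_rng(7)
    obs = rng.normal(size=1320)                 # 110 years of months, synthetic
    model = rng.normal(size=1320)
    print(dry_to_dry_probability(obs), dry_to_dry_probability(model))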
Systematic Error Study for ALICE charged-jet v2 Measurement
DOE Office of Scientific and Technical Information (OSTI.GOV)
Heinz, M.; Soltz, R.
We study the treatment of systematic errors in the determination of v2 for charged jets in √s_NN = 2.76 TeV Pb-Pb collisions by the ALICE Collaboration. Working with the reported values and errors for the 0-5% centrality data, we evaluate the χ² according to the formulas given for the statistical and systematic errors, where the latter are separated into correlated and shape contributions. We reproduce both the χ² and p-values relative to a null (zero) result. We then re-cast the systematic errors into an equivalent covariance matrix and obtain identical results, demonstrating that the two methods are equivalent.
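A hedged sketch of the covariance-matrix form of such a test: statistical errors enter on the diagonal, a fully correlated systematic contributes an outer product, and χ² against the null (v2 = 0) follows from the inverse covariance. The decomposition and the numbers are illustrative placeholders, not the ALICE values.

    import numpy as np

    v2 = np.array([0.05, 0.06, 0.04, 0.03])        # illustrative charged-jet v2 points
    stat = np.array([0.02, 0.02, 0.03, 0.03])      # statistical errors
    corr = np.array([0.01, 0.01, 0.01, 0.01])      # fully correlated systematic

    cov = np.diag(stat**2) + np.outer(corr, corr)  # correlated part as an outer product
    resid = v2 - 0.0                               # compare with the null (zero) result
    chi2 = resid @ np.linalg.inv(cov) @ resid
    print(f"chi2 = {chi2:.2f} for {len(v2)} points")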
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bai, Sen; Li, Guangjun; Wang, Maojie
The purpose of this study was to investigate the effect of multileaf collimator (MLC) leaf position, collimator rotation angle, and accelerator gantry rotation angle errors on intensity-modulated radiotherapy plans for nasopharyngeal carcinoma. To compare dosimetric differences between the simulated plans and the clinical plans with evaluation parameters, 6 patients with nasopharyngeal carcinoma were selected for simulation of systematic and random MLC leaf position errors, collimator rotation angle errors, and accelerator gantry rotation angle errors. Dose distributions showed a high sensitivity to systematic MLC leaf position errors, which varied with field size. When the systematic MLC position errors were 0.5, 1, and 2 mm, respectively, the maximum values of the mean dose deviation, observed in parotid glands, were 4.63%, 8.69%, and 18.32%, respectively. The dosimetric effect was comparatively small for systematic MLC shift errors. For random MLC errors up to 2 mm and collimator and gantry rotation angle errors up to 0.5°, the dosimetric effect was negligible. We suggest that quality control be regularly conducted for MLC leaves, so as to ensure that systematic MLC leaf position errors are within 0.5 mm. Because the dosimetric effect of 0.5° collimator and gantry rotation angle errors is negligible, it can be concluded that setting a proper threshold for allowed errors of collimator and gantry rotation angle may increase treatment efficacy and reduce treatment time.
Flexible methods for segmentation evaluation: Results from CT-based luggage screening
Karimi, Seemeen; Jiang, Xiaoqian; Cosman, Pamela; Martz, Harry
2017-01-01
BACKGROUND Imaging systems used in aviation security include segmentation algorithms in an automatic threat recognition pipeline. The segmentation algorithms evolve in response to emerging threats and changing performance requirements. Analysis of segmentation algorithms’ behavior, including the nature of errors and feature recovery, facilitates their development. However, evaluation methods from the literature provide limited characterization of the segmentation algorithms. OBJECTIVE To develop segmentation evaluation methods that measure systematic errors such as oversegmentation and undersegmentation, outliers, and overall errors. The methods must measure feature recovery and allow us to prioritize segments. METHODS We developed two complementary evaluation methods using statistical techniques and information theory. We also created a semi-automatic method to define ground truth from 3D images. We applied our methods to evaluate five segmentation algorithms developed for CT luggage screening. We validated our methods with synthetic problems and an observer evaluation. RESULTS Both methods selected the same best segmentation algorithm. Human evaluation confirmed the findings. The measurement of systematic errors and prioritization helped in understanding the behavior of each segmentation algorithm. CONCLUSIONS Our evaluation methods allow us to measure and explain the accuracy of segmentation algorithms. PMID:24699346
Measuring Diagnoses: ICD Code Accuracy
O'Malley, Kimberly J; Cook, Karon F; Price, Matt D; Wildes, Kimberly Raiford; Hurdle, John F; Ashton, Carol M
2005-01-01
Objective To examine potential sources of errors at each step of the described inpatient International Classification of Diseases (ICD) coding process. Data Sources/Study Setting The use of disease codes from the ICD has expanded from classifying morbidity and mortality information for statistical purposes to diverse sets of applications in research, health care policy, and health care finance. By describing a brief history of ICD coding, detailing the process for assigning codes, identifying where errors can be introduced into the process, and reviewing methods for examining code accuracy, we help code users more systematically evaluate code accuracy for their particular applications. Study Design/Methods We summarize the inpatient ICD diagnostic coding process from patient admission to diagnostic code assignment. We examine potential sources of errors at each step and offer code users a tool for systematically evaluating code accuracy. Principal Findings Main error sources along the “patient trajectory” include amount and quality of information at admission, communication among patients and providers, the clinician's knowledge and experience with the illness, and the clinician's attention to detail. Main error sources along the “paper trail” include variance in the electronic and written records, coder training and experience, facility quality-control efforts, and unintentional and intentional coder errors, such as misspecification, unbundling, and upcoding. Conclusions By clearly specifying the code assignment process and heightening their awareness of potential error sources, code users can better evaluate the applicability and limitations of codes for their particular situations. ICD codes can then be used in the most appropriate ways. PMID:16178999
Modeling coherent errors in quantum error correction
NASA Astrophysics Data System (ADS)
Greenbaum, Daniel; Dutton, Zachary
2018-01-01
Analysis of quantum error correcting codes is typically done using a stochastic, Pauli channel error model for describing the noise on physical qubits. However, it was recently found that coherent errors (systematic rotations) on physical data qubits result in both physical and logical error rates that differ significantly from those predicted by a Pauli model. Here we examine the accuracy of the Pauli approximation for noise containing coherent errors (characterized by a rotation angle ε) under the repetition code. We derive an analytic expression for the logical error channel as a function of arbitrary code distance d and concatenation level n, in the small error limit. We find that coherent physical errors result in logical errors that are partially coherent and therefore non-Pauli. However, the coherent part of the logical error is negligible at fewer than ε^-(d^n - 1) error correction cycles when the decoder is optimized for independent Pauli errors, thus providing a regime of validity for the Pauli approximation. Above this number of correction cycles, the persistent coherent logical error will cause logical failure more quickly than the Pauli model would predict, and this may need to be combated with coherent suppression methods at the physical level or larger codes.
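As a hedged one-qubit illustration (not the repetition-code analysis above), the sketch below contrasts how a small uncorrected over-rotation by ε per cycle accumulates coherently in amplitude, giving a flip probability that grows roughly quadratically with cycle number, while the Pauli-twirled approximation flips with a fixed probability per cycle and grows only linearly.

    import numpy as np

    eps = 0.02                       # small over-rotation angle per cycle (radians)
    cycles = np.arange(1, 201)

    # Coherent accumulation: rotations add in amplitude before measurement.
    p_coherent = np.sin(cycles * eps / 2) ** 2

    # Pauli (stochastic) approximation: independent flips with p = sin^2(eps/2).
    p_flip = np.sin(eps / 2) ** 2
    p_pauli = 0.5 * (1 - (1 - 2 * p_flip) ** cycles)

    for n in (10, 50, 200):
        print(n, round(p_coherent[n - 1], 4), round(p_pauli[n - 1], 4))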
Keers, Richard N; Williams, Steven D; Cooke, Jonathan; Ashcroft, Darren M
2013-11-01
Underlying systems factors have been seen to be crucial contributors to the occurrence of medication errors. By understanding the causes of these errors, the most appropriate interventions can be designed and implemented to minimise their occurrence. This study aimed to systematically review and appraise empirical evidence relating to the causes of medication administration errors (MAEs) in hospital settings. Nine electronic databases (MEDLINE, EMBASE, International Pharmaceutical Abstracts, ASSIA, PsycINFO, British Nursing Index, CINAHL, Health Management Information Consortium and Social Science Citations Index) were searched between 1985 and May 2013. Inclusion and exclusion criteria were applied to identify eligible publications through title analysis followed by abstract and then full text examination. English language publications reporting empirical data on causes of MAEs were included. Reference lists of included articles and relevant review papers were hand searched for additional studies. Studies were excluded if they did not report data on specific MAEs, used accounts from individuals not directly involved in the MAE concerned or were presented as conference abstracts with insufficient detail. A total of 54 unique studies were included. Causes of MAEs were categorised according to Reason's model of accident causation. Studies were assessed to determine relevance to the research question and how likely the results were to reflect the potential underlying causes of MAEs based on the method(s) used. Slips and lapses were the most commonly reported unsafe acts, followed by knowledge-based mistakes and deliberate violations. Error-provoking conditions influencing administration errors included inadequate written communication (prescriptions, documentation, transcription), problems with medicines supply and storage (pharmacy dispensing errors and ward stock management), high perceived workload, problems with ward-based equipment (access, functionality), patient factors (availability, acuity), staff health status (fatigue, stress) and interruptions/distractions during drug administration. Few studies sought to determine the causes of intravenous MAEs. A number of latent pathway conditions were less well explored, including local working culture and high-level managerial decisions. Causes were often described superficially; this may be related to the use of quantitative surveys and observation methods in many studies, limited use of established error causation frameworks to analyse data and a predominant focus on issues other than the causes of MAEs among studies. As only English language publications were included, some relevant studies may have been missed. Limited evidence from studies included in this systematic review suggests that MAEs are influenced by multiple systems factors, but if and how these arise and interconnect to lead to errors remains to be fully determined. Further research with a theoretical focus is needed to investigate the MAE causation pathway, with an emphasis on ensuring interventions designed to minimise MAEs target recognised underlying causes of errors to maximise their impact.
Model parameter-related optimal perturbations and their contributions to El Niño prediction errors
NASA Astrophysics Data System (ADS)
Tao, Ling-Jiang; Gao, Chuan; Zhang, Rong-Hua
2018-04-01
Errors in initial conditions and model parameters (MPs) are the main sources that limit the accuracy of ENSO predictions. In addition to exploring the initial error-induced prediction errors, model errors are equally important in determining prediction performance. In this paper, the MP-related optimal errors that can cause prominent error growth in ENSO predictions are investigated using an intermediate coupled model (ICM) and a conditional nonlinear optimal perturbation (CNOP) approach. Two MPs related to the Bjerknes feedback are considered in the CNOP analysis: one involves the SST-surface wind coupling (α_τ), and the other involves the thermocline effect on the SST (α_Te). The MP-related optimal perturbations (denoted as CNOP-P) are found uniformly positive and restrained in a small region: the α_τ component is mainly concentrated in the central equatorial Pacific, and the α_Te component is mainly located in the eastern cold tongue region. This kind of CNOP-P enhances the strength of the Bjerknes feedback and induces an El Niño- or La Niña-like error evolution, resulting in an El Niño-like systematic bias in this model. The CNOP-P is also found to play a role in the spring predictability barrier (SPB) for ENSO predictions. Evidently, such error growth is primarily attributed to MP errors in small areas based on the localized distribution of CNOP-P. Further sensitivity experiments firmly indicate that ENSO simulations are sensitive to the representation of SST-surface wind coupling in the central Pacific and to the thermocline effect in the eastern Pacific in the ICM. These results provide guidance and theoretical support for the future improvement in numerical models to reduce the systematic bias and SPB phenomenon in ENSO predictions.
Improved RF Measurements of SRF Cavity Quality Factors
DOE Office of Scientific and Technical Information (OSTI.GOV)
Holzbauer, J. P.; Contreras, C.; Pischalnikov, Y.
SRF cavity quality factors can be accurately measured using RF-power based techniques only when the cavity is very close to critically coupled. This limitation stems from systematic errors driven by non-ideal RF components. When the cavity is not close to critically coupled, these systematic effects limit the accuracy of the measurements. Combining the complex base-band envelopes of the cavity RF signals with a trombone in the circuit allows the relative calibration of the RF signals to be extracted from the data and systematic effects to be characterized and suppressed. The improved calibration allows accurate measurements to be made over a much wider range of couplings. Demonstration of these techniques during testing of a single-spoke resonator with a coupling factor of near 7 will be presented, along with recommendations for application of these techniques.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Horch, Elliott P.; Van Altena, William F.; Howell, Steve B.
2011-06-15
In this paper, we study the ability of CCD- and electron-multiplying-CCD-based speckle imaging to obtain reliable astrometry and photometry of binary stars below the diffraction limit of the WIYN 3.5 m Telescope. We present a total of 120 measures of binary stars, 75 of which are below the diffraction limit. The measures are divided into two groups that have different measurement accuracy and precision. The first group is composed of standard speckle observations, that is, a sequence of speckle images taken in a single filter, while the second group consists of paired observations where the two observations are taken on the same observing run and in different filters. The more recent paired observations were taken simultaneously with the Differential Speckle Survey Instrument, which is a two-channel speckle imaging system. In comparing our results to the ephemeris positions of binaries with known orbits, we find that paired observations provide the opportunity to identify cases of systematic error in separation below the diffraction limit, and after removing these from consideration, we obtain a linear measurement uncertainty of 3-4 mas. However, if observations are unpaired or if two observations taken in the same filter are paired, it becomes harder to identify cases of systematic error, presumably because the largest source of this error is residual atmospheric dispersion, which is color dependent. When observations are unpaired, we find that it is unwise to report separations below approximately 20 mas, as these are most susceptible to this effect. Using the final results obtained, we are able to update two older orbits in the literature and present preliminary orbits for three systems that were discovered by Hipparcos.
NASA Technical Reports Server (NTRS)
Mallinckrodt, A. J.
1977-01-01
Data from an extensive array of collocated instrumentation at the Wallops Island test facility were intercompared in order to (1) determine the practical achievable accuracy limitations of various tropospheric and ionospheric correction techniques; (2) examine the theoretical bases and derivation of improved refraction correction techniques; and (3) estimate internal systematic and random error levels of the various tracking stations. The GEOS 2 satellite was used as the target vehicle. Data were obtained regarding the ionospheric and tropospheric propagation errors, the theoretical and data analysis of which was documented in some 30 separate reports over the last 6 years. An overview of project results is presented.
Drought Persistence in Models and Observations
NASA Astrophysics Data System (ADS)
Moon, Heewon; Gudmundsson, Lukas; Seneviratne, Sonia
2017-04-01
Many regions of the world have experienced drought events that persisted several years and caused substantial economic and ecological impacts in the 20th century. However, it remains unclear whether there are significant trends in the frequency or severity of these prolonged drought events. In particular, an important issue is linked to systematic biases in the representation of persistent drought events in climate models, which impedes analysis related to the detection and attribution of drought trends. This study assesses drought persistence errors in global climate model (GCM) simulations from the 5th phase of Coupled Model Intercomparison Project (CMIP5), in the period of 1901-2010. The model simulations are compared with five gridded observational data products. The analysis focuses on two aspects: the identification of systematic biases in the models and the partitioning of the spread of drought-persistence-error into four possible sources of uncertainty: model uncertainty, observation uncertainty, internal climate variability and the estimation error of drought persistence. We use monthly and yearly dry-to-dry transition probabilities as estimates for drought persistence with drought conditions defined as negative precipitation anomalies. For both time scales we find that most model simulations consistently underestimated drought persistence except in a few regions such as India and Eastern South America. Partitioning the spread of the drought-persistence-error shows that at the monthly time scale model uncertainty and observation uncertainty are dominant, while the contribution from internal variability does play a minor role in most cases. At the yearly scale, the spread of the drought-persistence-error is dominated by the estimation error, indicating that the partitioning is not statistically significant, due to a limited number of considered time steps. These findings reveal systematic errors in the representation of drought persistence in current climate models and highlight the main contributors of uncertainty of drought-persistence-error. Future analyses will focus on investigating the temporal propagation of drought persistence to better understand the causes for the identified errors in the representation of drought persistence in state-of-the-art climate models.
Irregular analytical errors in diagnostic testing - a novel concept.
Vogeser, Michael; Seger, Christoph
2018-02-23
In laboratory medicine, routine periodic analyses for internal and external quality control measurements interpreted by statistical methods are mandatory for batch clearance. Data analysis of these process-oriented measurements allows for insight into random analytical variation and systematic calibration bias over time. However, in such a setting, any individual sample is not under individual quality control. The quality control measurements act only at the batch level. Quantitative or qualitative data derived for many effects and interferences associated with an individual diagnostic sample can compromise any analyte. It is obvious that a process for a quality-control-sample-based approach of quality assurance is not sensitive to such errors. To address the potential causes and nature of such analytical interference in individual samples more systematically, we suggest the introduction of a new term called the irregular (individual) analytical error. Practically, this term can be applied in any analytical assay that is traceable to a reference measurement system. For an individual sample an irregular analytical error is defined as an inaccuracy (which is the deviation from a reference measurement procedure result) of a test result that is so high it cannot be explained by measurement uncertainty of the utilized routine assay operating within the accepted limitations of the associated process quality control measurements. The deviation can be defined as the linear combination of the process measurement uncertainty and the method bias for the reference measurement system. Such errors should be coined irregular analytical errors of the individual sample. The measurement result is compromised either by an irregular effect associated with the individual composition (matrix) of the sample or an individual single sample associated processing error in the analytical process. Currently, the availability of reference measurement procedures is still highly limited, but LC-isotope-dilution mass spectrometry methods are increasingly used for pre-market validation of routine diagnostic assays (these tests also involve substantial sets of clinical validation samples). Based on this definition/terminology, we list recognized causes of irregular analytical error as a risk catalog for clinical chemistry in this article. These issues include reproducible individual analytical errors (e.g. caused by anti-reagent antibodies) and non-reproducible, sporadic errors (e.g. errors due to incorrect pipetting volume due to air bubbles in a sample), which can both lead to inaccurate results and risks for patients.
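One plausible operationalization of the definition above is sketched below: a result is flagged as an irregular analytical error when its deviation from the reference measurement procedure exceeds what the routine assay's expanded measurement uncertainty plus method bias can explain. The function name, the coverage factor, and the numbers are assumptions for illustration.

    def is_irregular_error(routine_result, reference_result,
                           routine_uncertainty, method_bias, coverage_k=2.0):
        """Flag an individual-sample result as an irregular analytical error when
        |routine - reference| exceeds k*u(routine) + |bias| of the routine method."""
        allowed = coverage_k * routine_uncertainty + abs(method_bias)
        return abs(routine_result - reference_result) > allowed

    # Example: a 15-unit deviation with 3-unit expanded uncertainty and 2-unit bias is flagged.
    print(is_irregular_error(115.0, 100.0, routine_uncertainty=3.0, method_bias=2.0))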
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, T. S.
Meeting the science goals for many current and future ground-based optical large-area sky surveys requires that the calibrated broadband photometry is stable in time and uniform over the sky to 1% precision or better. Past surveys have achieved photometric precision of 1-2% by calibrating the survey's stellar photometry with repeated measurements of a large number of stars observed in multiple epochs. The calibration techniques employed by these surveys only consider the relative frame-by-frame photometric zeropoint offset and the focal plane position-dependent illumination corrections, which are independent of the source color. However, variations in the wavelength dependence of the atmospheric transmission and the instrumental throughput induce source color-dependent systematic errors. These systematic errors must also be considered to achieve the most precise photometric measurements. In this paper, we examine such systematic chromatic errors using photometry from the Dark Energy Survey (DES) as an example. We define a natural magnitude system for DES and calculate the systematic errors on stellar magnitudes, when the atmospheric transmission and instrumental throughput deviate from the natural system. We conclude that the systematic chromatic errors caused by the change of airmass in each exposure, the change of the precipitable water vapor and aerosol in the atmosphere over time, and the non-uniformity of instrumental throughput over the focal plane, can be up to 2% in some bandpasses. We compare the calculated systematic chromatic errors with the observed DES data. For the test sample data, we correct these errors using measurements of the atmospheric transmission and instrumental throughput. The residual after correction is less than 0.3%. We also find that the errors for non-stellar objects are redshift-dependent and can be larger than those for stars at certain redshifts.
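A hedged sketch of why such errors are chromatic: in a photon-counting system the magnitude depends on the integral of the source spectrum against the total throughput, so perturbing the throughput shifts a blue and a red source by different amounts. The bandpasses and power-law spectra below are toy placeholders, not the DES curves.

    import numpy as np

    lam = np.linspace(4000.0, 5500.0, 500)             # wavelength grid (Angstrom), uniform

    def synth_mag(f_lambda, throughput):
        """Photon-counting magnitude (arbitrary zeropoint) in a given bandpass."""
        num = np.sum(f_lambda * throughput * lam)      # grid spacing cancels in the ratio
        den = np.sum(throughput * lam)
        return -2.5 * np.log10(num / den)

    g_natural = np.exp(-0.5 * ((lam - 4750) / 400) ** 2)      # toy natural bandpass
    g_shifted = np.exp(-0.5 * ((lam - 4790) / 400) ** 2)      # perturbed throughput

    blue_star = (lam / 4750.0) ** -2.0                         # toy spectra
    red_star = (lam / 4750.0) ** +2.0

    for name, sed in (("blue", blue_star), ("red", red_star)):
        dm = synth_mag(sed, g_shifted) - synth_mag(sed, g_natural)
        print(f"{name} star chromatic offset: {1000*dm:+.1f} mmag")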
Systematic Errors in an Air Track Experiment.
ERIC Educational Resources Information Center
Ramirez, Santos A.; Ham, Joe S.
1990-01-01
Errors found in a common physics experiment to measure acceleration resulting from gravity using a linear air track are investigated. Glider position at release and initial velocity are shown to be sources of systematic error. (CW)
The reliability of three devices used for measuring vertical jump height.
Nuzzo, James L; Anning, Jonathan H; Scharfenberg, Jessica M
2011-09-01
The purpose of this investigation was to assess the intrasession and intersession reliability of the Vertec, Just Jump System, and Myotest for measuring countermovement vertical jump (CMJ) height. Forty male and 39 female university students completed 3 maximal-effort CMJs during 2 testing sessions, which were separated by 24-48 hours. The height of the CMJ was measured from all 3 devices simultaneously. Systematic error, relative reliability, absolute reliability, and heteroscedasticity were assessed for each device. Systematic error across the 3 CMJ trials was observed within both sessions for males and females, and this was most frequently observed when the CMJ height was measured by the Vertec. No systematic error was discovered across the 2 testing sessions when the maximum CMJ heights from the 2 sessions were compared. In males, the Myotest demonstrated the best intrasession reliability (intraclass correlation coefficient [ICC] = 0.95; SEM = 1.5 cm; coefficient of variation [CV] = 3.3%) and intersession reliability (ICC = 0.88; SEM = 2.4 cm; CV = 5.3%; limits of agreement = -0.08 ± 4.06 cm). Similarly, in females, the Myotest demonstrated the best intrasession reliability (ICC = 0.91; SEM = 1.4 cm; CV = 4.5%) and intersession reliability (ICC = 0.92; SEM = 1.3 cm; CV = 4.1%; limits of agreement = 0.33 ± 3.53 cm). Additional analysis revealed that heteroscedasticity was present in the CMJ when measured from all 3 devices, indicating that better jumpers demonstrate greater fluctuations in CMJ scores across testing sessions. To attain reliable CMJ height measurements, practitioners are encouraged to familiarize athletes with the CMJ technique and then allow the athletes to complete numerous repetitions until performance plateaus, particularly if the Vertec is being used.
Data multiplexing in radio interferometric calibration
NASA Astrophysics Data System (ADS)
Yatawatta, Sarod; Diblen, Faruk; Spreeuw, Hanno; Koopmans, L. V. E.
2018-03-01
New and upcoming radio interferometers will produce unprecedented amounts of data that demand extremely powerful computers for processing. This is a limiting factor due to the large computational power and energy costs involved. Such limitations restrict several key data processing steps in radio interferometry. One such step is calibration, where systematic errors in the data are determined and corrected. Accurate calibration is an essential component in reaching many scientific goals in radio astronomy, and the use of consensus optimization that exploits the continuity of systematic errors across frequency significantly improves calibration accuracy. In order to reach full consensus, data at all frequencies need to be calibrated simultaneously. In the SKA regime, this can become intractable if the available compute agents do not have the resources to process data from all frequency channels simultaneously. In this paper, we propose a multiplexing scheme that is based on the alternating direction method of multipliers with cyclic updates. With this scheme, it is possible to simultaneously calibrate the full data set using far fewer compute agents than the number of frequencies at which data are available. We give simulation results to show the feasibility of the proposed multiplexing scheme in simultaneously calibrating a full data set when a limited number of compute agents are available.
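A heavily simplified sketch of the idea, not the paper's algorithm or any SKA pipeline: per-frequency gain estimates are tied to a smooth polynomial in frequency through consensus ADMM, and only a rotating subset of frequencies (standing in for a limited pool of compute agents) performs its local update in each iteration. All shapes, the penalty parameter, and the toy gain model are assumptions.

    import numpy as np

    rng = np.random.default_rng(0)
    freqs = np.linspace(0.0, 1.0, 24)                     # normalized frequencies
    true_gain = 1.0 + 0.5 * freqs - 0.3 * freqs**2        # smooth systematic error
    data = true_gain + rng.normal(0.0, 0.1, freqs.size)   # noisy per-channel estimate

    rho, order, n_agents = 1.0, 2, 6                      # ADMM penalty, poly order, agents per pass
    x = data.copy()                                       # local per-frequency gains
    u = np.zeros_like(x)                                  # scaled dual variables
    V = np.vander(freqs, order + 1)                       # polynomial design matrix

    for it in range(60):
        # Cyclic multiplexing: only a subset of channels updates locally this pass.
        active = np.arange(it * n_agents, (it + 1) * n_agents) % freqs.size
        z = V @ np.linalg.lstsq(V, x + u, rcond=None)[0]  # global consensus (poly fit)
        x[active] = (data[active] + rho * (z[active] - u[active])) / (1.0 + rho)
        u += x - z                                        # dual update

    print("rms error vs true gains:", np.sqrt(np.mean((z - true_gain) ** 2)))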
Mayo-Wilson, Evan; Ng, Sueko Matsumura; Chuck, Roy S; Li, Tianjing
2017-09-05
Systematic reviews should inform American Academy of Ophthalmology (AAO) Preferred Practice Pattern® (PPP) guidelines. The quality of systematic reviews related to the forthcoming Preferred Practice Pattern® guideline (PPP) Refractive Errors & Refractive Surgery is unknown. We sought to identify reliable systematic reviews to assist the AAO Refractive Errors & Refractive Surgery PPP. Systematic reviews were eligible if they evaluated the effectiveness or safety of interventions included in the 2012 PPP Refractive Errors & Refractive Surgery. To identify potentially eligible systematic reviews, we searched the Cochrane Eyes and Vision United States Satellite database of systematic reviews. Two authors identified eligible reviews and abstracted information about the characteristics and quality of the reviews independently using the Systematic Review Data Repository. We classified systematic reviews as "reliable" when they (1) defined criteria for the selection of studies, (2) conducted comprehensive literature searches for eligible studies, (3) assessed the methodological quality (risk of bias) of the included studies, (4) used appropriate methods for meta-analyses (which we assessed only when meta-analyses were reported), and (5) presented conclusions that were supported by the evidence provided in the review. We identified 124 systematic reviews related to refractive error; 39 met our eligibility criteria, of which we classified 11 as reliable. Systematic reviews classified as unreliable did not define the criteria for selecting studies (5; 13%), did not assess methodological rigor (10; 26%), did not conduct comprehensive searches (17; 44%), or used inappropriate quantitative methods (3; 8%). The 11 reliable reviews were published between 2002 and 2016. They included 0 to 23 studies (median = 9) and analyzed 0 to 4696 participants (median = 666). Seven reliable reviews (64%) assessed surgical interventions. Most systematic reviews of interventions for refractive error are of low methodological quality. Following widely accepted guidance, such as Cochrane or Institute of Medicine standards for conducting systematic reviews, would contribute to improved patient care and inform future research.
An analysis of the least-squares problem for the DSN systematic pointing error model
NASA Technical Reports Server (NTRS)
Alvarez, L. S.
1991-01-01
A systematic pointing error model is used to calibrate antennas in the Deep Space Network. The least squares problem is described and analyzed along with the solution methods used to determine the model's parameters. Specifically studied are the rank degeneracy problems resulting from beam pointing error measurement sets that incorporate inadequate sky coverage. A least squares parameter subset selection method is described and its applicability to the systematic error modeling process is demonstrated on Voyager 2 measurement distribution.
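A sketch of the rank analysis and parameter subset selection described above: build a pointing-model design matrix from observations with limited sky coverage, inspect the singular values, and pick a well-conditioned column subset by pivoted QR before solving the least-squares problem. The simple 4-term model and numbers are illustrative, not the DSN model.

    import numpy as np
    from scipy.linalg import qr

    rng = np.random.default_rng(5)
    az = np.deg2rad(rng.uniform(80.0, 100.0, 40))     # poor sky coverage: narrow azimuth band
    el = np.deg2rad(rng.uniform(20.0, 70.0, 40))

    # Toy systematic pointing-error model: offset, axis-tilt-like terms, gravity-like sag term.
    A = np.column_stack([np.ones_like(az), np.sin(az), np.cos(az), np.cos(el)])
    y = A @ np.array([0.01, 0.002, -0.003, 0.005]) + rng.normal(0, 1e-4, az.size)

    s = np.linalg.svd(A, compute_uv=False)
    print("condition number:", s[0] / s[-1])          # large value signals near rank degeneracy

    # Pivoted QR ranks columns by how much independent information they carry.
    _, _, piv = qr(A, pivoting=True)
    keep = piv[:3]                                    # retain the best-conditioned subset
    coef, *_ = np.linalg.lstsq(A[:, keep], y, rcond=None)
    print("selected columns:", keep, "coefficients:", np.round(coef, 4))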
Preparatory studies for the WFIRST supernova cosmology measurements
NASA Astrophysics Data System (ADS)
Perlmutter, Saul
In the context of the WFIRST-AFTA Science Definition Team we developed a first version of a supernova program, described in the WFIRST-AFTA SDT report. This program uses the imager to discover supernova candidates and an Integral Field Spectrograph (IFS) to obtain spectrophotometric light curves and higher signal to noise spectra of the supernovae near peak to better characterize the supernovae and thus minimize systematic errors. While this program was judged a robust one, and the estimates of the sensitivity to the cosmological parameters were felt to be reliable, due to limitation of time the analysis was clearly limited in depth on a number of issues. The goal of this proposal is to further develop this program and refine the estimates of the sensitivities to the cosmological parameters using more sophisticated systematic uncertainty models and covariance error matrices that fold in more realistic data concerning observed populations of SNe Ia as well as more realistic instrument models. We propose to develop analysis algorithms and approaches that are needed to build, optimize, and refine the WFIRST instrument and program requirements to accomplish the best supernova cosmology measurements possible. We plan to address the following: a) Use realistic Supernova populations, subclasses and population drift. One bothersome uncertainty with the supernova technique is the possibility of population drift with redshift. We are in a unique position to characterize and mitigate such effects using the spectrophotometric time series of real Type Ia supernovae from the Nearby Supernova Factory (SNfactory). Each supernova in this sample has global galaxy measurements as well as additional local environment information derived from the IFS spectroscopy. We plan to develop methods of coping with this issue, e.g., by selecting similar subsamples of supernovae and allowing additional model flexibility, in order to reduce systematic uncertainties. These studies will allow us to tune details, like the wavelength coverage and S/N requirements, of the WFIRST IFS to capitalize on these systematic error reduction methods. b) Supernova extraction and host galaxy subtractions. The underlying light of the host galaxy must be subtracted from the supernova images making up the lightcurves. Using the IFS to provide the lightcurve points via spectrophotometry requires the subtraction of a reference spectrum of the galaxy taken after the supernova light has faded to a negligible level. We plan to apply the expertise obtained from the SNfactory to develop galaxy background procedures that minimize the systematic errors introduced by this step in the analysis. c) Instrument calibration and ground to space cross calibration. Calibrating the entire supernova sample will be a challenge as no standard stars exist that span the range of magnitudes and wavelengths relevant to the WFIRST survey. Linking the supernova measurements to the relatively brighter standards will require several links. WFIRST will produce the high redshift sample, but the nearby supernova to anchor the Hubble diagram will have to come from ground based observations. Developing algorithms to carry out the cross calibration of these two samples to the required one percent level will be an important goal of our proposal. An integral part of this calibration will be to remove all instrumental signatures and to develop unbiased measurement techniques starting at the pixel level. 
We then plan to pull the above studies together in a synthesis to produce a correlated error matrix. We plan to develop a Fisher-matrix-based model to evaluate the correlated error matrix due to the various systematic errors discussed above. A realistic error model will allow us to carry out more reliable estimates of the eventual errors on the measurement of the cosmological parameters, as well as serve as a means of optimizing and fine-tuning the requirements for the instruments and survey strategies.
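A hedged sketch of folding a correlated systematic error matrix into a Fisher forecast: with model derivatives A and total covariance C = C_stat + C_sys, the Fisher matrix is F = Aᵀ C⁻¹ A and the parameter covariance is its inverse. The two-parameter linear model, redshift bins, and error amplitudes are placeholders, not the WFIRST analysis.

    import numpy as np

    z = np.linspace(0.1, 1.7, 30)                       # supernova redshift bins
    A = np.column_stack([np.ones_like(z), z])           # derivatives of the model w.r.t. 2 parameters

    stat = np.full(z.size, 0.02)                        # per-bin statistical error (mag)
    sys_amp = 0.01                                      # correlated systematic amplitude (mag)
    C = np.diag(stat**2) + sys_amp**2 * np.exp(-np.abs(z[:, None] - z[None, :]) / 0.5)

    F = A.T @ np.linalg.inv(C) @ A                      # Fisher matrix
    param_cov = np.linalg.inv(F)
    print("1-sigma parameter errors:", np.sqrt(np.diag(param_cov)))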
Bauer, Amy M.; Alegría, Margarita
2010-01-01
Objective To determine the effects of limited English proficiency and use of interpreters on the quality of psychiatric care. Methods A systematic literature search for English-language publications was conducted in PubMed, PsycInfo, and CINAHL and by review of the reference lists of included articles and expert sources. Of 321 citations, 26 peer-reviewed articles met inclusion criteria by reporting primary data on the clinical care for psychiatric disorders among patients with limited proficiency in English or in the providers’ language. Results Little systematic research has addressed the impact of language proficiency or interpreter use on the quality of psychiatric care in contemporary US settings. Therefore, the literature to date is insufficient to inform evidence-based guidelines for improving quality of care among patients with limited English proficiency. Nonetheless, evaluation in a patient’s non-primary language can lead to incomplete or distorted mental status assessment whereas assessments conducted via untrained interpreters may contain interpreting errors. Consequences of interpreter errors include clinicians’ failure to identify disordered thought or delusional content. Use of professional interpreters may improve disclosure and attenuate some difficulties. Diagnostic agreement, collaborative treatment planning, and referral for specialty care may be compromised. Conclusions Clinicians should become aware of the types of quality problems that may occur when evaluating patients in a non-primary language or via an interpreter. Given demographic trends in the US, future research should aim to address the deficit in the evidence base to guide clinical practice and policy. PMID:20675834
DOE Office of Scientific and Technical Information (OSTI.GOV)
Okura, Yuki; Futamase, Toshifumi, E-mail: yuki.okura@nao.ac.jp, E-mail: tof@astr.tohoku.ac.jp
This is the third paper on the improvement of systematic errors in weak lensing analysis using an elliptical weight function, referred to as E-HOLICs. In previous papers, we succeeded in avoiding errors that depend on the ellipticity of the background image. In this paper, we investigate the systematic error that depends on the signal-to-noise ratio of the background image. We find that the origin of this error is the random count noise that comes from the Poisson noise of sky counts. The random count noise makes additional moments and centroid shift error, and those first-order effects are canceled in averaging, but the second-order effects are not canceled. We derive the formulae that correct this systematic error due to the random count noise in measuring the moments and ellipticity of the background image. The correction formulae obtained are expressed as combinations of complex moments of the image, and thus can correct the systematic errors caused by each object. We test their validity using a simulated image and find that the systematic error becomes less than 1% in the measured ellipticity for objects with an IMCAT significance threshold of ν ≈ 11.7.
NASA Astrophysics Data System (ADS)
Lichti, Derek D.; Chow, Jacky; Lahamy, Hervé
One of the important systematic error parameters identified in terrestrial laser scanners is the collimation axis error, which models the non-orthogonality between two instrumental axes. The quality of this parameter determined by self-calibration, as measured by its estimated precision and its correlation with the tertiary rotation angle κ of the scanner exterior orientation, is strongly dependent on instrument architecture. While the quality is generally very high for panoramic-type scanners, it is comparably poor for hybrid-style instruments. Two methods for improving the quality of the collimation axis error in hybrid instrument self-calibration are proposed herein: (1) the inclusion of independent observations of the tertiary rotation angle κ; and (2) the use of a new collimation axis error model. Five real datasets were captured with two different hybrid-style scanners to test each method's efficacy. While the first method achieves the desired outcome of complete decoupling of the collimation axis error from κ, it is shown that the high correlation is simply transferred to other model variables. The second method achieves partial parameter de-correlation to acceptable levels. Importantly, it does so without any adverse, secondary correlations and is therefore the method recommended for future use. Finally, systematic error model identification has been greatly aided in previous studies by graphical analyses of self-calibration residuals. This paper presents results showing the architecture dependence of this technique, revealing its limitations for hybrid scanners.
Optimal surveys for weak-lensing tomography
NASA Astrophysics Data System (ADS)
Amara, Adam; Réfrégier, Alexandre
2007-11-01
Weak-lensing surveys provide a powerful probe of dark energy through the measurement of the mass distribution of the local Universe. A number of ground-based and space-based surveys are being planned for this purpose. Here, we study the optimal strategy for these future surveys using the joint constraints on the equation-of-state parameter wn and its evolution wa as a figure of merit by considering power spectrum tomography. For this purpose, we first consider an `ideal' survey which is both wide and deep and exempt from systematics. We find that such a survey has great potential for dark energy studies, reaching 1σ precisions of 1 and 10 per cent on the two parameters, respectively. We then study the relative impact of various limitations by degrading this ideal survey. In particular, we consider the effect of sky coverage, survey depth, shape measurement systematics, photometric redshift systematics and uncertainties in the non-linear power spectrum predictions. We find that, for a given observing time, it is always advantageous to choose a wide rather than a deep survey geometry. We also find that the dark energy constraints from power spectrum tomography are robust to photometric redshift errors and catastrophic failures, if a spectroscopic calibration sample of 10⁴-10⁵ galaxies is available. The impact of these systematics is small compared to the limitations that come from potential uncertainties in the power spectrum, due to shear measurement and theoretical errors. To help the planning of future surveys, we summarize our results with comprehensive scaling relations which avoid the need for full Fisher matrix calculations.
SU-E-T-613: Dosimetric Consequences of Systematic MLC Leaf Positioning Errors
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kathuria, K; Siebers, J
2014-06-01
Purpose: The purpose of this study is to determine the dosimetric consequences of systematic MLC leaf positioning errors for clinical IMRT patient plans so as to establish detection tolerances for quality assurance programs. Materials and Methods: Dosimetric consequences were simulated by extracting MLC delivery instructions from the TPS, altering the file by the specified error, reloading the delivery instructions into the TPS, recomputing dose, and extracting dose-volume metrics for one head-and-neck and one prostate patient. Machine error was simulated by offsetting MLC leaves in Pinnacle in a systematic way. Three different algorithms were followed for these systematic offsets, and are as follows: a systematic sequential one-leaf offset (one leaf offset in one segment per beam), a systematic uniform one-leaf offset (same one leaf offset per segment per beam) and a systematic offset of a given number of leaves picked uniformly at random from a given number of segments (5 out of 10 total). Dose to the PTV and normal tissue was simulated. Results: A systematic 5 mm offset of 1 leaf for all delivery segments of all beams resulted in a maximum PTV D98 deviation of 1%. Results showed very low dose error in all reasonably possible machine configurations, rare or otherwise, which could be simulated. Very low error in dose to PTV and OARs was shown in all possible cases of one leaf per beam per segment being offset (<1%), or that of only one leaf per beam being offset (<0.2%). The errors resulting from a high number of adjacent leaves (maximum of 5 out of 60 total leaf-pairs) being simultaneously offset in many (5) of the control points (total 10–18 in all beams) per beam, in both the PTV and the OARs analyzed, were similarly low (<2–3%). Conclusions: The above results show that patient shifts and anatomical changes are the main source of errors in dose delivered, not machine delivery. These two sources of error are “visually complementary” and uncorrelated (albeit not additive in the final error) and one can easily incorporate error resulting from machine delivery in an error model based purely on tumor motion.
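A minimal sketch of how such systematic leaf offsets might be injected into an array of MLC leaf positions before dose recomputation is given below. The array layout, the offset strategies and all numbers are schematic assumptions for illustration; they do not reproduce the Pinnacle delivery files used in the study.

    import numpy as np

    rng = np.random.default_rng(0)

    # Schematic leaf-position array: (segments, leaf pairs), positions in mm.
    n_segments, n_leaves = 10, 60
    leaf_pos = rng.uniform(-50.0, 50.0, size=(n_segments, n_leaves))

    def sequential_one_leaf_offset(positions, offset_mm):
        """Offset one (different) leaf in each segment."""
        out = positions.copy()
        for seg in range(out.shape[0]):
            leaf = seg % out.shape[1]          # pick a different leaf per segment
            out[seg, leaf] += offset_mm
        return out

    def uniform_one_leaf_offset(positions, leaf, offset_mm):
        """Offset the same leaf in every segment."""
        out = positions.copy()
        out[:, leaf] += offset_mm
        return out

    def random_multi_leaf_offset(positions, n_leaves_off, n_segments_off, offset_mm, rng):
        """Offset several adjacent leaves in a random subset of segments."""
        out = positions.copy()
        segs = rng.choice(out.shape[0], size=n_segments_off, replace=False)
        start = rng.integers(0, out.shape[1] - n_leaves_off)
        out[np.ix_(segs, np.arange(start, start + n_leaves_off))] += offset_mm
        return out

    perturbed = random_multi_leaf_offset(leaf_pos, 5, 5, 5.0, rng)
    print("max introduced shift [mm]:", np.max(np.abs(perturbed - leaf_pos)))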
NASA Technical Reports Server (NTRS)
Deloach, Richard; Obara, Clifford J.; Goodman, Wesley L.
2012-01-01
This paper documents a check standard wind tunnel test conducted in the Langley 0.3-Meter Transonic Cryogenic Tunnel (0.3M TCT) that was designed and analyzed using the Modern Design of Experiments (MDOE). The test was designed to partition the unexplained variance of typical wind tunnel data samples into two constituent components, one attributable to ordinary random error, and one attributable to systematic error induced by covariate effects. Covariate effects in wind tunnel testing are discussed, with examples. The impact of systematic (non-random) unexplained variance on the statistical independence of sequential measurements is reviewed. The corresponding correlation among experimental errors is discussed, as is the impact of such correlation on experimental results generally. The specific experiment documented herein was organized as a formal test for the presence of unexplained variance in representative samples of wind tunnel data, in order to quantify the frequency with which such systematic error was detected, and its magnitude relative to ordinary random error. Levels of systematic and random error reported here are representative of those quantified in other facilities, as cited in the references.
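One generic way to partition unexplained variance into random and systematic (covariate-induced) components is a one-way analysis of variance over groups of replicate measurements: the within-group mean square estimates pure random error, and the excess of the between-group mean square over it estimates the systematic component. The sketch below illustrates that decomposition on simulated check-standard data; it is a textbook-style illustration under those assumptions, not the MDOE analysis actually used in the test.

    import numpy as np

    rng = np.random.default_rng(1)

    # Simulated check-standard data: g groups of replicate measurements,
    # with a slowly drifting (systematic) component plus random noise.
    g, n = 12, 5
    systematic = rng.normal(0.0, 0.03, size=g)            # covariate-induced shifts
    data = systematic[:, None] + rng.normal(0.0, 0.02, size=(g, n))

    group_means = data.mean(axis=1)
    grand_mean = data.mean()

    # Within-group mean square -> ordinary random-error variance.
    ms_within = ((data - group_means[:, None]) ** 2).sum() / (g * (n - 1))
    # Between-group mean square; its excess over MS_within reflects systematic error.
    ms_between = n * ((group_means - grand_mean) ** 2).sum() / (g - 1)
    var_systematic = max((ms_between - ms_within) / n, 0.0)

    print(f"random-error variance    : {ms_within:.5f}")
    print(f"systematic-error variance: {var_systematic:.5f}")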
2014-01-01
Background The Health Information Technology for Economic and Clinical Health (HITECH) Act subsidizes implementation by hospitals of electronic health records with computerized provider order entry (CPOE), which may reduce patient injuries caused by medication errors (preventable adverse drug events, pADEs). Effects on pADEs have not been rigorously quantified, and effects on medication errors have been variable. The objectives of this analysis were to assess the effectiveness of CPOE at reducing pADEs in hospital-related settings, and examine reasons for heterogeneous effects on medication errors. Methods Articles were identified using MEDLINE, Cochrane Library, Econlit, web-based databases, and bibliographies of previous systematic reviews (September 2013). Eligible studies compared CPOE with paper-order entry in acute care hospitals, and examined diverse pADEs or medication errors. Studies on children or with limited event-detection methods were excluded. Two investigators extracted data on events and factors potentially associated with effectiveness. We used random effects models to pool data. Results Sixteen studies addressing medication errors met pooling criteria; six also addressed pADEs. Thirteen studies used pre-post designs. Compared with paper-order entry, CPOE was associated with half as many pADEs (pooled risk ratio (RR) = 0.47, 95% CI 0.31 to 0.71) and medication errors (RR = 0.46, 95% CI 0.35 to 0.60). Regarding reasons for heterogeneous effects on medication errors, five intervention factors and two contextual factors were sufficiently reported to support subgroup analyses or meta-regression. Differences between commercial versus homegrown systems, presence and sophistication of clinical decision support, hospital-wide versus limited implementation, and US versus non-US studies were not significant, nor was timing of publication. Higher baseline rates of medication errors predicted greater reductions (P < 0.001). Other context and implementation variables were seldom reported. Conclusions In hospital-related settings, implementing CPOE is associated with a greater than 50% decline in pADEs, although the studies used weak designs. Decreases in medication errors are similar and robust to variations in important aspects of intervention design and context. This suggests that CPOE implementation, as subsidized under the HITECH Act, may benefit public health. More detailed reporting of the context and process of implementation could shed light on factors associated with greater effectiveness. PMID:24894078
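The pooled risk ratios quoted above come from a random-effects model. A minimal DerSimonian-Laird-style pooling of log risk ratios is sketched below with invented study data, purely to illustrate the calculation rather than to reproduce the review's numbers.

    import numpy as np

    # Hypothetical per-study risk ratios and their 95% confidence intervals.
    rr = np.array([0.40, 0.55, 0.30, 0.70, 0.50])
    ci_low = np.array([0.25, 0.35, 0.15, 0.45, 0.30])
    ci_high = np.array([0.64, 0.86, 0.60, 1.09, 0.83])

    y = np.log(rr)                                   # log risk ratios
    se = (np.log(ci_high) - np.log(ci_low)) / (2 * 1.96)
    w = 1.0 / se**2                                  # fixed-effect weights

    # DerSimonian-Laird estimate of the between-study variance (tau^2).
    y_fixed = np.sum(w * y) / np.sum(w)
    q = np.sum(w * (y - y_fixed) ** 2)
    df = len(y) - 1
    c = np.sum(w) - np.sum(w**2) / np.sum(w)
    tau2 = max((q - df) / c, 0.0)

    w_re = 1.0 / (se**2 + tau2)                      # random-effects weights
    y_re = np.sum(w_re * y) / np.sum(w_re)
    se_re = np.sqrt(1.0 / np.sum(w_re))

    print(f"pooled RR = {np.exp(y_re):.2f} "
          f"(95% CI {np.exp(y_re - 1.96*se_re):.2f} to {np.exp(y_re + 1.96*se_re):.2f})")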
Using ridge regression in systematic pointing error corrections
NASA Technical Reports Server (NTRS)
Guiar, C. N.
1988-01-01
A pointing error model is used in the antenna calibration process. Data from spacecraft or radio star observations are used to determine the parameters in the model. However, the regression variables are not truly independent, displaying a condition known as multicollinearity. Ridge regression, a biased estimation technique, is used to combat the multicollinearity problem. Two data sets pertaining to Voyager 1 spacecraft tracking (days 105 and 106 of 1987) were analyzed using both linear least squares and ridge regression methods. The advantages and limitations of employing the technique are presented. The problem is not yet fully resolved.
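Ridge regression combats multicollinearity by adding a small penalty to the normal equations, trading a little bias for a large reduction in variance. A generic sketch with synthetic, nearly collinear regressors (not the Voyager tracking data) is shown below.

    import numpy as np

    rng = np.random.default_rng(2)

    # Synthetic pointing-model regressors that are nearly collinear.
    n = 200
    x1 = rng.normal(size=n)
    x2 = x1 + rng.normal(scale=0.01, size=n)     # almost a copy of x1
    X = np.column_stack([np.ones(n), x1, x2])
    beta_true = np.array([0.5, 1.0, -1.0])
    y = X @ beta_true + rng.normal(scale=0.1, size=n)

    def ridge(X, y, lam):
        """Biased (ridge) estimate: (X'X + lam*I)^-1 X'y, intercept unpenalised."""
        p = X.shape[1]
        penalty = lam * np.eye(p)
        penalty[0, 0] = 0.0
        return np.linalg.solve(X.T @ X + penalty, X.T @ y)

    ols = ridge(X, y, 0.0)       # ordinary least squares (lam = 0)
    rr = ridge(X, y, 1.0)        # ridge with a modest penalty

    print("OLS   coefficients:", np.round(ols, 2))   # unstable because of collinearity
    print("ridge coefficients:", np.round(rr, 2))    # shrunk toward stable values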
NASA Astrophysics Data System (ADS)
Arzoumanian, Z.; Baker, P. T.; Brazier, A.; Burke-Spolaor, S.; Chamberlin, S. J.; Chatterjee, S.; Christy, B.; Cordes, J. M.; Cornish, N. J.; Crawford, F.; Thankful Cromartie, H.; Crowter, K.; DeCesar, M.; Demorest, P. B.; Dolch, T.; Ellis, J. A.; Ferdman, R. D.; Ferrara, E.; Folkner, W. M.; Fonseca, E.; Garver-Daniels, N.; Gentile, P. A.; Haas, R.; Hazboun, J. S.; Huerta, E. A.; Islo, K.; Jones, G.; Jones, M. L.; Kaplan, D. L.; Kaspi, V. M.; Lam, M. T.; Lazio, T. J. W.; Levin, L.; Lommen, A. N.; Lorimer, D. R.; Luo, J.; Lynch, R. S.; Madison, D. R.; McLaughlin, M. A.; McWilliams, S. T.; Mingarelli, C. M. F.; Ng, C.; Nice, D. J.; Park, R. S.; Pennucci, T. T.; Pol, N. S.; Ransom, S. M.; Ray, P. S.; Rasskazov, A.; Siemens, X.; Simon, J.; Spiewak, R.; Stairs, I. H.; Stinebring, D. R.; Stovall, K.; Swiggum, J.; Taylor, S. R.; Vallisneri, M.; van Haasteren, R.; Vigeland, S.; Zhu, W. W.; The NANOGrav Collaboration
2018-05-01
We search for an isotropic stochastic gravitational-wave background (GWB) in the newly released 11 year data set from the North American Nanohertz Observatory for Gravitational Waves (NANOGrav). While we find no evidence for a GWB, we place constraints on a population of inspiraling supermassive black hole (SMBH) binaries, a network of decaying cosmic strings, and a primordial GWB. For the first time, we find that the GWB constraints are sensitive to the solar system ephemeris (SSE) model used and that SSE errors can mimic a GWB signal. We developed an approach that bridges systematic SSE differences, producing the first pulsar-timing array (PTA) constraints that are robust against SSE errors. We thus place a 95% upper limit on the GW-strain amplitude of A GWB < 1.45 × 10‑15 at a frequency of f = 1 yr‑1 for a fiducial f ‑2/3 power-law spectrum and with interpulsar correlations modeled. This is a factor of ∼2 improvement over the NANOGrav nine-year limit calculated using the same procedure. Previous PTA upper limits on the GWB (as well as their astrophysical and cosmological interpretations) will need revision in light of SSE systematic errors. We use our constraints to characterize the combined influence on the GWB of the stellar mass density in galactic cores, the eccentricity of SMBH binaries, and SMBH–galactic-bulge scaling relationships. We constrain the cosmic-string tension using recent simulations, yielding an SSE-marginalized 95% upper limit of Gμ < 5.3 × 10‑11—a factor of ∼2 better than the published NANOGrav nine-year constraints. Our SSE-marginalized 95% upper limit on the energy density of a primordial GWB (for a radiation-dominated post-inflation universe) is ΩGWB(f) h 2 < 3.4 × 10‑10.
Modeling Errors in Daily Precipitation Measurements: Additive or Multiplicative?
NASA Technical Reports Server (NTRS)
Tian, Yudong; Huffman, George J.; Adler, Robert F.; Tang, Ling; Sapiano, Matthew; Maggioni, Viviana; Wu, Huan
2013-01-01
The definition and quantification of uncertainty depend on the error model used. For uncertainties in precipitation measurements, two types of error models have been widely adopted: the additive error model and the multiplicative error model. This leads to incompatible specifications of uncertainties and impedes intercomparison and application. In this letter, we assess the suitability of both models for satellite-based daily precipitation measurements in an effort to clarify the uncertainty representation. Three criteria were employed to evaluate the applicability of either model: (1) better separation of the systematic and random errors; (2) applicability to the large range of variability in daily precipitation; and (3) better predictive skills. It is found that the multiplicative error model is a much better choice under all three criteria. It extracted the systematic errors more cleanly, was more consistent with the large variability of precipitation measurements, and produced superior predictions of the error characteristics. The additive error model had several weaknesses, such as non-constant variance resulting from systematic errors leaking into random errors, and the lack of prediction capability. Therefore, the multiplicative error model is a better choice.
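The distinction can be written compactly: the additive model treats a measurement as Y = X + e, while the multiplicative model treats it as Y = X·exp(e), i.e. additive in log space. The sketch below, using synthetic truth and measured rain rates, shows how each model separates systematic and random components and why the additive residuals show non-constant variance; it is a generic illustration, not the paper's satellite analysis.

    import numpy as np

    rng = np.random.default_rng(3)

    # Synthetic daily precipitation: truth x and measurement y with a
    # multiplicative bias and log-normal noise (typical of rain retrievals).
    x = rng.gamma(shape=0.8, scale=8.0, size=2000) + 0.1     # mm/day, strictly > 0
    y = 1.3 * x * np.exp(rng.normal(0.0, 0.4, size=x.size))

    # Additive model: Y = X + e  ->  systematic = mean(e), random = std(e).
    e_add = y - x
    print(f"additive:       bias = {e_add.mean():6.2f} mm, sigma = {e_add.std():6.2f} mm")

    # Multiplicative model: ln Y = ln X + e  ->  fit in log space.
    e_mul = np.log(y) - np.log(x)
    print(f"multiplicative: bias = {e_mul.mean():6.2f} (log), sigma = {e_mul.std():6.2f} (log)")

    # In the additive model the residual spread grows with rain rate
    # (non-constant variance); in log space it is roughly constant.
    for lo, hi in [(0.1, 5.0), (5.0, 20.0), (20.0, np.inf)]:
        m = (x >= lo) & (x < hi)
        print(f"rain {lo:>4}-{hi:<4} mm/day: add-sigma = {e_add[m].std():6.2f}, "
              f"mult-sigma = {e_mul[m].std():5.2f}")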
Improvements in GRACE Gravity Fields Using Regularization
NASA Astrophysics Data System (ADS)
Save, H.; Bettadpur, S.; Tapley, B. D.
2008-12-01
The unconstrained global gravity field models derived from GRACE are susceptible to systematic errors that show up as broad "stripes" aligned in a North-South direction on the global maps of mass flux. These errors are believed to be a consequence of both systematic and random errors in the data that are amplified by the nature of the gravity field inverse problem. These errors impede scientific exploitation of the GRACE data products, and limit the realizable spatial resolution of the GRACE global gravity fields in certain regions. We use regularization techniques to reduce these "stripe" errors in the gravity field products. The regularization criteria are designed such that there is no attenuation of the signal and that the solutions fit the observations as well as an unconstrained solution. We have used a computationally inexpensive method, normally referred to as "L-ribbon", to find the regularization parameter. This paper discusses the characteristics and statistics of a 5-year time-series of regularized gravity field solutions. The solutions show markedly reduced stripes, are of uniformly good quality over time, and leave little or no systematic observation residuals (a frequent consequence of signal suppression from regularization). Up to degree 14, the signal in the regularized solutions shows correlation greater than 0.8 with the un-regularized CSR Release-04 solutions. Signals from large-amplitude and small-spatial-extent events - such as the Great Sumatra Andaman Earthquake of 2004 - are visible in the global solutions without using the special post-facto error reduction techniques employed previously in the literature. Hydrological signals as small as 5 cm water-layer equivalent in small river basins, like the Indus and Nile for example, are clearly evident, in contrast to noisy estimates from RL04. The residual variability over the oceans relative to a seasonal fit is small except at higher latitudes, and is evident without the need for de-striping or spatial smoothing.
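The kind of constrained solution described here is, in essence, Tikhonov regularisation of an ill-posed least-squares problem. A generic sketch is given below with a synthetic, ill-conditioned design matrix (not GRACE normal equations), including a crude scan over the regularisation parameter in the spirit of an L-curve or L-ribbon search.

    import numpy as np

    rng = np.random.default_rng(4)

    # Ill-conditioned synthetic inverse problem: d = A m + noise.
    n = 60
    U, _ = np.linalg.qr(rng.normal(size=(n, n)))
    V, _ = np.linalg.qr(rng.normal(size=(n, n)))
    s = np.logspace(0, -6, n)                    # rapidly decaying singular values
    A = U @ np.diag(s) @ V.T
    m_true = np.sin(np.linspace(0, 3 * np.pi, n))
    d = A @ m_true + rng.normal(scale=1e-4, size=n)

    def tikhonov(A, d, alpha):
        """Solve min ||A m - d||^2 + alpha^2 ||m||^2."""
        p = A.shape[1]
        return np.linalg.solve(A.T @ A + alpha**2 * np.eye(p), A.T @ d)

    # Scan the regularisation parameter and record misfit vs. solution norm
    # (the two axes of an L-curve).
    for alpha in [0.0, 1e-6, 1e-4, 1e-2]:
        m = tikhonov(A, d, alpha) if alpha > 0 else np.linalg.lstsq(A, d, rcond=None)[0]
        misfit = np.linalg.norm(A @ m - d)
        print(f"alpha={alpha:8.0e}  misfit={misfit:9.2e}  ||m||={np.linalg.norm(m):9.2e}")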
Sirriyeh, Reema; Lawton, Rebecca; Gardner, Peter; Armitage, Gerry
2010-12-01
Previous research has established health professionals as secondary victims of medical error, with the identification of a range of emotional and psychological repercussions that may occur as a result of involvement in error.2,3 Due to the vast range of emotional and psychological outcomes, research to date has been inconsistent in the variables measured and tools used. Therefore, differing conclusions have been drawn as to the nature of the impact of error on professionals and the subsequent repercussions for their team, patients and healthcare institution. A systematic review was conducted. Data sources were identified using database searches, with additional reference and hand searching. Eligibility criteria were applied to all studies identified, resulting in a total of 24 included studies. Quality assessment was conducted with the included studies using a tool that was developed as part of this research, but due to the limited number and diverse nature of studies, no exclusions were made on this basis. Review findings suggest that there is consistent evidence for the widespread impact of medical error on health professionals. Psychological repercussions may include negative states such as shame, self-doubt, anxiety and guilt. Despite much attention devoted to the assessment of negative outcomes, the potential for positive outcomes resulting from error also became apparent, with increased assertiveness, confidence and improved colleague relationships reported. It is evident that involvement in a medical error can elicit a significant psychological response from the health professional involved. However, a lack of literature around coping and support, coupled with inconsistencies and weaknesses in methodology, may need to be addressed in future work.
Effect of cephalometer misalignment on calculations of facial asymmetry.
Lee, Ki-Heon; Hwang, Hyeon-Shik; Curry, Sean; Boyd, Robert L; Norris, Kevin; Baumrind, Sheldon
2007-07-01
In this study, we evaluated errors introduced into the interpretation of facial asymmetry on posteroanterior (PA) cephalograms due to malpositioning of the x-ray emitter focal spot. We tested the hypothesis that horizontal displacements of the emitter from its ideal position would produce systematic displacements of skull landmarks that could be fully accounted for by the rules of projective geometry alone. A representative dry skull with 22 metal markers was used to generate a series of PA images from different emitter positions by using a fully calibrated stereo cephalometer. Empirical measurements of the resulting cephalograms were compared with mathematical predictions based solely on geometric rules. The empirical measurements matched the mathematical predictions within the limits of measurement error (x= 0.23 mm), thus supporting the hypothesis. Based upon this finding, we generated a completely symmetrical mathematical skull and calculated the expected errors for focal spots of several different magnitudes. Quantitative data were computed for focal spot displacements of different magnitudes. Misalignment of the x-ray emitter focal spot introduces systematic errors into the interpretation of facial asymmetry on PA cephalograms. For misalignments of less than 20 mm, the effect is small in individual cases. However, misalignments as small as 10 mm can introduce spurious statistical findings of significant asymmetry when mean values for large groups of PA images are evaluated.
Meaningful Peer Review in Radiology: A Review of Current Practices and Potential Future Directions.
Moriarity, Andrew K; Hawkins, C Matthew; Geis, J Raymond; Dreyer, Keith J; Kamer, Aaron P; Khandheria, Paras; Morey, Jose; Whitfill, James; Wiggins, Richard H; Itri, Jason N
2016-12-01
The current practice of peer review within radiology is well developed and widely implemented compared with other medical specialties. However, there are many factors that limit current peer review practices from reducing diagnostic errors and improving patient care. The development of "meaningful peer review" requires a transition away from compliance toward quality improvement, whereby the information and insights gained facilitate education and drive systematic improvements that reduce the frequency and impact of diagnostic error. The next generation of peer review requires significant improvements in IT functionality and integration, enabling features such as anonymization, adjudication by multiple specialists, categorization and analysis of errors, tracking, feedback, and easy export into teaching files and other media that require strong partnerships with vendors. In this article, the authors assess various peer review practices, with focused discussion on current limitations and future needs for meaningful peer review in radiology. Copyright © 2016 American College of Radiology. Published by Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Mohammad, F. G.; Granett, B. R.; Guzzo, L.; Bel, J.; Branchini, E.; de la Torre, S.; Moscardini, L.; Peacock, J. A.; Bolzonella, M.; Garilli, B.; Scodeggio, M.; Abbas, U.; Adami, C.; Bottini, D.; Cappi, A.; Cucciati, O.; Davidzon, I.; Franzetti, P.; Fritz, A.; Iovino, A.; Krywult, J.; Le Brun, V.; Le Fèvre, O.; Maccagni, D.; Małek, K.; Marulli, F.; Polletta, M.; Pollo, A.; Tasca, L. A. M.; Tojeiro, R.; Vergani, D.; Zanichelli, A.; Arnouts, S.; Coupon, J.; De Lucia, G.; Ilbert, O.; Moutard, T.
2018-02-01
We used the VIMOS Public Extragalactic Redshift Survey (VIPERS) final data release (PDR-2) to investigate the performance of colour-selected populations of galaxies as tracers of linear large-scale motions. We empirically selected volume-limited samples of blue and red galaxies as to minimise the systematic error on the estimate of the growth rate of structure fσ8 from the anisotropy of the two-point correlation function. To this end, rather than rigidly splitting the sample into two colour classes we defined the red or blue fractional contribution of each object through a weight based on the (U - V ) colour distribution. Using mock surveys that are designed to reproduce the observed properties of VIPERS galaxies, we find the systematic error in recovering the fiducial value of fσ8 to be minimised when using a volume-limited sample of luminous blue galaxies. We modelled non-linear corrections via the Scoccimarro extension of the Kaiser model (with updated fitting formulae for the velocity power spectra), finding systematic errors on fσ8 of below 1-2%, using scales as small as 5 h-1 Mpc. We interpret this result as indicating that selection of luminous blue galaxies maximises the fraction that are central objects in their dark matter haloes; this in turn minimises the contribution to the measured ξ(rp,π) from the 1-halo term, which is dominated by non-linear motions. The gain is inferior if one uses the full magnitude-limited sample of blue objects, consistent with the presence of a significant fraction of blue, fainter satellites dominated by non-streaming, orbital velocities. We measured a value of fσ8 = 0.45 ± 0.11 over the single redshift range 0.6 ≤ z ≤ 1.0, corresponding to an effective redshift for the blue galaxies ⟨z⟩=0.85. Including in the likelihood the potential extra information contained in the blue-red galaxy cross-correlation function does not lead to an appreciable improvement in the error bars, while it increases the systematic error. Based on observations collected at the European Southern Observatory, Cerro Paranal, Chile, using the Very Large Telescope under programs 182.A-0886 and partly 070.A-9007. Also based on observations obtained with MegaPrime/MegaCam, a joint project of CFHT and CEA/DAPNIA, at the Canada-France-Hawaii Telescope (CFHT), which is operated by the National Research Council (NRC) of Canada, the Institut National des Sciences de l'Univers of the Centre National de la Recherche Scientifique (CNRS) of France, and the University of Hawaii. This work is based in part on data products produced at TERAPIX and the Canadian Astronomy Data Centre as part of the Canada-France-Hawaii Telescope Legacy Survey, a collaborative project of NRC and CNRS. The VIPERS web site is http://www.vipers.inaf.it/
Reyes, Jeanette M; Xu, Yadong; Vizuete, William; Serre, Marc L
2017-01-01
The regulatory Community Multiscale Air Quality (CMAQ) model is a means to understanding the sources, concentrations and regulatory attainment of air pollutants within a model's domain. Substantial resources are allocated to the evaluation of model performance. The Regionalized Air quality Model Performance (RAMP) method introduced here explores novel ways of visualizing and evaluating CMAQ model performance and errors for daily concentrations of particulate matter ≤ 2.5 micrometers (PM2.5) across the continental United States. The RAMP method performs a non-homogeneous, non-linear, non-homoscedastic model performance evaluation at each CMAQ grid cell. This work demonstrates that CMAQ model performance, for a well-documented 2001 regulatory episode, is non-homogeneous across space/time. The RAMP correction of systematic errors outperforms other model evaluation methods, as demonstrated by a 22.1% reduction in Mean Square Error compared to a constant domain-wide correction. The RAMP method is able to accurately reproduce simulated performance with a correlation of r = 76.1%. Most of the error coming from CMAQ is random error, with only a minority of error being systematic. Areas of high systematic error are collocated with areas of high random error, implying both error types originate from similar sources. Therefore, addressing underlying causes of systematic error will have the added benefit of also addressing underlying causes of random error.
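The split into systematic and random error used here corresponds to the standard decomposition of the mean squared error into a squared-bias term and a variance term. The sketch below applies that decomposition to synthetic paired model/observation values at a single grid cell; it illustrates the bookkeeping only and is not the RAMP algorithm itself.

    import numpy as np

    rng = np.random.default_rng(5)

    # Synthetic paired values: observed and modelled PM2.5 at one grid cell over a year.
    obs = rng.gamma(shape=4.0, scale=3.0, size=365)                 # ug/m3
    model = 0.8 * obs + 2.0 + rng.normal(0.0, 3.0, size=obs.size)

    err = model - obs
    mse = np.mean(err**2)
    systematic = np.mean(err) ** 2          # squared mean error (bias^2)
    random_ = np.var(err)                   # spread of the error about its mean

    print(f"MSE             = {mse:6.2f}")
    print(f"systematic part = {systematic:6.2f}  ({100*systematic/mse:4.1f}%)")
    print(f"random part     = {random_:6.2f}  ({100*random_/mse:4.1f}%)")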
Errors in radial velocity variance from Doppler wind lidar
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, H.; Barthelmie, R. J.; Doubrawa, P.
A high-fidelity lidar turbulence measurement technique relies on accurate estimates of radial velocity variance that are subject to both systematic and random errors determined by the autocorrelation function of radial velocity, the sampling rate, and the sampling duration. Our paper quantifies the effect of the volumetric averaging in lidar radial velocity measurements on the autocorrelation function and the dependence of the systematic and random errors on the sampling duration, using both statistically simulated and observed data. For current-generation scanning lidars and sampling durations of about 30 min and longer, during which the stationarity assumption is valid for atmospheric flows, the systematic error is negligible but the random error exceeds about 10%.
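For a stationary time series with integral time scale τ and record length T, a commonly used approximation gives the relative random error of a measured variance as roughly sqrt(2τ/T). The sketch below checks that scaling on a synthetic first-order autoregressive radial-velocity series; the time scale, sampling interval and number of trials are assumptions for illustration, and the sketch does not include the volumetric-averaging effects analysed in the paper.

    import numpy as np

    rng = np.random.default_rng(6)

    tau = 5.0          # assumed integral time scale of radial velocity [s]
    dt = 1.0           # sampling interval [s]
    sigma2 = 1.0       # true radial-velocity variance [m^2/s^2]

    def ar1_series(n, tau, dt, sigma2, rng):
        """First-order autoregressive series with an exponential autocorrelation."""
        phi = np.exp(-dt / tau)
        eps = rng.normal(scale=np.sqrt(sigma2 * (1 - phi**2)), size=n)
        x = np.empty(n)
        x[0] = rng.normal(scale=np.sqrt(sigma2))
        for i in range(1, n):
            x[i] = phi * x[i - 1] + eps[i]
        return x

    for duration in [600.0, 1800.0, 3600.0]:          # 10, 30 and 60 min records
        n = int(duration / dt)
        trials = np.array([ar1_series(n, tau, dt, sigma2, rng).var() for _ in range(300)])
        empirical = trials.std() / sigma2
        predicted = np.sqrt(2.0 * tau / duration)      # sqrt(2*tau/T) approximation
        print(f"T={duration:6.0f} s  relative random error: "
              f"empirical={empirical:.3f}  predicted={predicted:.3f}")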
Sub-Camera Calibration of a Penta-Camera
NASA Astrophysics Data System (ADS)
Jacobsen, K.; Gerke, M.
2016-03-01
Penta cameras consisting of a nadir and four inclined cameras are becoming more and more popular, having the advantage of also imaging facades in built-up areas from four directions. Such system cameras require a boresight calibration of the geometric relation of the cameras to each other, but also a calibration of the sub-cameras. Based on data sets of the ISPRS/EuroSDR benchmark for multi-platform photogrammetry, the inner orientation of the IGI Penta DigiCAM used has been analyzed. The required image coordinates of the blocks Dortmund and Zeche Zollern have been determined by Pix4Dmapper and have been independently adjusted and analyzed by the program system BLUH. Dense matching was provided by Pix4Dmapper, with 4.1 million image points in 314 images and 3.9 million image points in 248 images, respectively. With up to 19 and 29 images per object point, respectively, the images are well connected; nevertheless, the object points with a high number of images are concentrated in the block centres, while the inclined images outside the block centre are adequately, but not very strongly, connected. This leads to very high values for the Student test (T-test) of the finally used additional parameters, or in other words, the additional parameters are highly significant. The estimated radial symmetric distortion of the nadir sub-camera corresponds to the laboratory calibration of IGI, but there are still radial symmetric distortions also for the inclined cameras, with a size exceeding 5 μm, even if described as negligible based on the laboratory calibration. Radial and tangential effects in the image corners are limited but still present. Remarkable angular affine systematic image errors can be seen, especially in the block Zeche Zollern. Such deformations are unusual for digital matrix cameras, but they can be caused by the correlation between inner and exterior orientation if only parallel flight lines are used. With the exception of the angular affinity, the systematic image errors for corresponding cameras of both blocks have the same trend, but as usual for block adjustments with self-calibration, they still show significant differences. Based on the very high number of image points, the remaining image residuals can be safely determined by overlaying and averaging the image residuals according to their image coordinates. The size of the systematic image errors not covered by the used additional parameters is in the range of a square mean of 0.1 pixels, corresponding to 0.6 μm. They are not the same for both blocks, but show some similarities for corresponding cameras. In general, a bundle block adjustment with a satisfying set of additional parameters, checked by the remaining systematic errors, is required to exploit the whole geometric potential of the penta camera. Especially for object points on facades, often seen in only two images and taken with a limited base length, the correct handling of systematic image errors is important. At least in the analyzed data sets, the self-calibration of sub-cameras by bundle block adjustment suffers from the correlation of the inner to the exterior calibration due to missing crossing flight directions. As usual, the systematic image errors differ from block to block even without the influence of the correlation to the exterior orientation.
Perils of using speed zone data to assess real-world compliance to speed limits.
Chevalier, Anna; Clarke, Elizabeth; Chevalier, Aran John; Brown, Julie; Coxon, Kristy; Ivers, Rebecca; Keay, Lisa
2017-11-17
Real-world driving studies, including those involving speeding alert devices and autonomous vehicles, can gauge an individual vehicle's speeding behavior by comparing measured speed with mapped speed zone data. However, there are complexities with developing and maintaining a database of mapped speed zones over a large geographic area that may lead to inaccuracies within the data set. When this approach is applied to large-scale real-world driving data or speeding alert device data to determine speeding behavior, these inaccuracies may result in invalid identification of speeding. We investigated speeding events based on service provider speed zone data. We compared service provider speed zone data (Speed Alert by Smart Car Technologies Pty Ltd., Ultimo, NSW, Australia) against a second set of speed zone data (Google Maps Application Programming Interface [API] mapped speed zones). We found a systematic error in the zones where speed limits of 50-60 km/h, typical of local roads, were allocated to high-speed motorways, which produced false speed limits in the speed zone database. The result was detection of false-positive high-range speeding. Through comparison of the service provider speed zone data against a second set of speed zone data, we were able to identify and eliminate data most affected by this systematic error, thereby establishing a data set of speeding events with a high level of sensitivity (a true positive rate of 92% or 6,412/6,960). Mapped speed zones can be a source of error in real-world driving when examining vehicle speed. We explored the types of inaccuracies found within speed zone data and recommend that a second set of speed zone data be utilized when investigating speeding behavior or developing mapped speed zone data to minimize inaccuracy in estimates of speeding.
SKA weak lensing - III. Added value of multiwavelength synergies for the mitigation of systematics
NASA Astrophysics Data System (ADS)
Camera, Stefano; Harrison, Ian; Bonaldi, Anna; Brown, Michael L.
2017-02-01
In this third paper of a series on radio weak lensing for cosmology with the Square Kilometre Array, we scrutinize synergies between cosmic shear measurements in the radio and optical/near-infrared (IR) bands for mitigating systematic effects. We focus on three main classes of systematics: (i) experimental systematic errors in the observed shear; (ii) signal contamination by intrinsic alignments and (iii) systematic effects due to an incorrect modelling of non-linear scales. First, we show that a comprehensive, multiwavelength analysis provides a self-calibration method for experimental systematic effects, only implying <50 per cent increment on the errors on cosmological parameters. We also illustrate how the cross-correlation between radio and optical/near-IR surveys alone is able to remove residual systematics with variance as large as 10^-5, i.e. the same order of magnitude as the cosmological signal. This also opens the possibility of using such a cross-correlation as a means to detect unknown experimental systematics. Secondly, we demonstrate that, thanks to polarization information, radio weak lensing surveys will be able to mitigate contamination by intrinsic alignments, in a way similar but fully complementary to available self-calibration methods based on position-shear correlations. Lastly, we illustrate how radio weak lensing experiments, reaching higher redshifts than those accessible to optical surveys, will probe dark energy and the growth of cosmic structures in regimes less contaminated by non-linearities in the matter perturbations. For instance, the higher redshift bins of radio catalogues peak at z ≃ 0.8-1, whereas their optical/near-IR counterparts are limited to z ≲ 0.5-0.7. This translates into having a cosmological signal 2-5 times less contaminated by non-linear perturbations.
NASA Astrophysics Data System (ADS)
Pandey, Manoj Kumar; Ramachandran, Ramesh
2010-03-01
The application of solid-state NMR methodology for bio-molecular structure determination requires the measurement of constraints in the form of ¹³C-¹³C and ¹³C-¹⁵N distances, torsion angles and, in some cases, correlation of the anisotropic interactions. Since the availability of structurally important constraints in the solid state is limited due to lack of sufficient spectral resolution, the accuracy of the measured constraints becomes vital in studies relating the three-dimensional structure of proteins to its biological functions. Consequently, the theoretical methods employed to quantify the experimental data become important. To accentuate this aspect, we re-examine analytical two-spin models currently employed in the estimation of ¹³C-¹³C distances based on the rotational resonance (R²) phenomenon. Although the error bars for the estimated distances tend to be in the range 0.5-1.0 Å, R² experiments are routinely employed in a variety of systems ranging from simple peptides to more complex amyloidogenic proteins. In this article we address this aspect by highlighting the systematic errors introduced by analytical models employing phenomenological damping terms to describe multi-spin effects. Specifically, the spin dynamics in R² experiments is described using Floquet theory employing two different operator formalisms. The systematic errors introduced by the phenomenological damping terms and their limitations are elucidated in two analytical models and analysed by comparing the results with rigorous numerical simulations.
On the interaction of deaffrication and consonant harmony*
Dinnsen, Daniel A.; Gierut, Judith A.; Morrisette, Michele L.; Green, Christopher R.; Farris-Trimble, Ashley W.
2010-01-01
Error patterns in children’s phonological development are often described as simplifying processes that can interact with one another with different consequences. Some interactions limit the applicability of an error pattern, and others extend it to more words. Theories predict that error patterns interact to their full potential. While specific interactions have been documented for certain pairs of processes, no developmental study has shown that the range of typologically predicted interactions occurs for those processes. To determine whether this anomaly is an accidental gap or a systematic peculiarity of particular error patterns, two commonly occurring processes were considered, namely Deaffrication and Consonant Harmony. Results are reported from a cross-sectional and longitudinal study of 12 children (age 3;0 – 5;0) with functional phonological delays. Three interaction types were attested to varying degrees. The longitudinal results further instantiated the typology and revealed a characteristic trajectory of change. Implications of these findings are explored. PMID:20513256
Teaching concepts of clinical measurement variation to medical students.
Hodder, R A; Longfield, J N; Cruess, D F; Horton, J A
1982-09-01
An exercise in clinical epidemiology was developed for medical students to demonstrate the process and limitations of scientific measurement using models that simulate common clinical experiences. All scales of measurement (nominal, ordinal and interval) were used to illustrate concepts of intra- and interobserver variation, systematic error, recording error, and procedural error. In a laboratory, students a) determined blood pressures on six videotaped subjects, b) graded sugar content of unknown solutions from 0 to 4+ using Clinitest tablets, c) measured papules that simulated PPD reactions, d) measured heart and kidney size on X-rays and, e) described a model skin lesion (melanoma). Traditionally, measurement variation is taught in biostatistics or epidemiology courses using previously collected data. Use of these models enables students to produce their own data using measurements commonly employed by the clinician. The exercise provided material for a meaningful discussion of the implications of measurement error in clinical decision-making.
NASA Astrophysics Data System (ADS)
He, Yingwei; Li, Ping; Feng, Guojin; Cheng, Li; Wang, Yu; Wu, Houping; Liu, Zilong; Zheng, Chundi; Sha, Dingguo
2010-11-01
For measuring large-aperture optical system transmittance, a novel sub-aperture scanning machine with double-rotating arms (SSMDA) was designed to obtain a sub-aperture beam spot. Optical system full-aperture transmittance measurements can be achieved by applying sub-aperture beam spot scanning technology. The mathematical model of the SSMDA, based on a homogeneous coordinate transformation matrix, is established to develop a detailed methodology for analyzing the beam spot scanning errors. The error analysis methodology considers two fundamental sources of scanning errors, namely (1) the length systematic errors and (2) the rotational systematic errors. With the systematic errors of the parameters given beforehand, the computed scanning errors lie between -0.007 and 0.028 mm for scanning radii not larger than 400.000 mm. The results offer a theoretical and data basis for research on the transmission characteristics of large optical systems.
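The homogeneous-coordinate formulation for the double rotating arms can be illustrated with a small 2-D sketch: the beam-spot position is obtained by chaining rotation and translation matrices, and perturbing the arm lengths and rotation angles by assumed systematic errors shows how those errors map into a spot-position error. The arm lengths, angles and error magnitudes below are invented for illustration.

    import numpy as np

    def rot(theta):
        """Homogeneous 2-D rotation."""
        c, s = np.cos(theta), np.sin(theta)
        return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

    def trans(dx, dy=0.0):
        """Homogeneous 2-D translation."""
        return np.array([[1.0, 0.0, dx], [0.0, 1.0, dy], [0.0, 0.0, 1.0]])

    def spot(theta1, theta2, l1, l2):
        """Beam-spot position for two rotating arms of lengths l1, l2 [mm]."""
        T = rot(theta1) @ trans(l1) @ rot(theta2) @ trans(l2)
        return (T @ np.array([0.0, 0.0, 1.0]))[:2]

    # Nominal configuration (illustrative values).
    theta1, theta2 = np.deg2rad(30.0), np.deg2rad(45.0)
    l1, l2 = 250.0, 150.0

    # Assumed systematic errors: 0.005 mm in each arm length, 10 arcsec in each angle.
    dl, dth = 0.005, np.deg2rad(10.0 / 3600.0)

    nominal = spot(theta1, theta2, l1, l2)
    perturbed = spot(theta1 + dth, theta2 + dth, l1 + dl, l2 + dl)
    print("spot-position error [mm]:", np.linalg.norm(perturbed - nominal))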
Gagné, Myriam; Boulet, Louis-Philippe; Pérez, Norma; Moisan, Jocelyne
2018-04-30
To systematically identify the measurement properties of patient-reported outcome instruments (PROs) that evaluate adherence to inhaled maintenance medication in adults with asthma. We conducted a systematic review of six databases. Two reviewers independently included studies on the measurement properties of PROs that evaluated adherence in asthmatic participants aged ≥18 years. Based on the COnsensus-based Standards for the selection of health Measurement INstruments (COSMIN), the reviewers (1) extracted data on internal consistency, reliability, measurement error, content validity, structural validity, hypotheses testing, cross-cultural validity, criterion validity, and responsiveness; (2) assessed the methodological quality of the included studies; (3) assessed the quality of the measurement properties (positive or negative); and (4) summarised the level of evidence (limited, moderate, or strong). We screened 6,068 records and included 15 studies (14 PROs). No studies evaluated measurement error or responsiveness. Based on methodological and measurement property quality assessments, we found limited positive evidence of: (a) internal consistency of the Adherence Questionnaire, Refined Medication Adherence Reason Scale (MAR-Scale), Medication Adherence Report Scale for Asthma (MARS-A), and Test of the Adherence to Inhalers (TAI); (b) reliability of the TAI; and (c) structural validity of the Adherence Questionnaire, MAR-Scale, MARS-A, and TAI. We also found limited negative evidence of: (d) hypotheses testing of Adherence Questionnaire; (e) reliability of the MARS-A; and (f) criterion validity of the MARS-A and TAI. Our results highlighted the need to conduct further high-quality studies that will positively evaluate the reliability, validity, and responsiveness of the available PROs. This article is protected by copyright. All rights reserved.
NASA Technical Reports Server (NTRS)
Ricks, Douglas W.
1993-01-01
There are a number of sources of scattering in binary optics: etch depth errors, line edge errors, quantization errors, roughness, and the binary approximation to the ideal surface. These sources of scattering can be systematic (deterministic) or random. In this paper, scattering formulas for both systematic and random errors are derived using Fourier optics. These formulas can be used to explain the results of scattering measurements and computer simulations.
Fitzgerald, John S; Johnson, LuAnn; Tomkinson, Grant; Stein, Jesse; Roemmich, James N
2018-05-01
Mechanography during the vertical jump may enhance screening and determining mechanistic causes underlying physical performance changes. Utility of jump mechanography for evaluation is limited by scant test-retest reliability data on force-time variables. This study examined the test-retest reliability of eight jump execution variables assessed from mechanography. Thirty-two women (mean±SD: age 20.8 ± 1.3 yr) and 16 men (age 22.1 ± 1.9 yr) attended a familiarization session and two testing sessions, all one week apart. Participants performed two variations of the squat jump with squat depth self-selected and controlled using a goniometer to 80º knee flexion. Test-retest reliability was quantified as the systematic error (using effect size between jumps), random error (using coefficients of variation), and test-retest correlations (using intra-class correlation coefficients). Overall, jump execution variables demonstrated acceptable reliability, evidenced by small systematic errors (mean±95%CI: 0.2 ± 0.07), moderate random errors (mean±95%CI: 17.8 ± 3.7%), and very strong test-retest correlations (range: 0.73-0.97). Differences in random errors between controlled and self-selected protocols were negligible (mean±95%CI: 1.3 ± 2.3%). Jump execution variables demonstrated acceptable reliability, with no meaningful differences between the controlled and self-selected jump protocols. To simplify testing, a self-selected jump protocol can be used to assess force-time variables with negligible impact on measurement error.
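The three reliability statistics named here can be computed directly from the two testing sessions. The sketch below does so for synthetic paired jump data: a standardised effect size for systematic error, a typical-error coefficient of variation for random error, and a two-way absolute-agreement ICC for the test-retest correlation. It is a generic illustration of the calculations, not the study's analysis code.

    import numpy as np

    rng = np.random.default_rng(7)

    # Synthetic peak-force values [N] from two sessions for 48 participants.
    true = rng.normal(1800.0, 250.0, size=48)
    s1 = true + rng.normal(0.0, 80.0, size=true.size)
    s2 = true + 20.0 + rng.normal(0.0, 80.0, size=true.size)   # small systematic shift
    x = np.column_stack([s1, s2])
    n, k = x.shape

    # Systematic error as a standardised effect size of the session-to-session change.
    diff = s2 - s1
    effect_size = diff.mean() / np.sqrt(0.5 * (s1.std(ddof=1)**2 + s2.std(ddof=1)**2))

    # Random error as a typical-error coefficient of variation.
    typical_error = diff.std(ddof=1) / np.sqrt(2.0)
    cv = 100.0 * typical_error / x.mean()

    # Two-way ICC (absolute agreement, single measures) from the ANOVA mean squares.
    grand = x.mean()
    ms_rows = k * ((x.mean(axis=1) - grand) ** 2).sum() / (n - 1)
    ms_cols = n * ((x.mean(axis=0) - grand) ** 2).sum() / (k - 1)
    ss_err = ((x - x.mean(axis=1, keepdims=True) - x.mean(axis=0) + grand) ** 2).sum()
    ms_err = ss_err / ((n - 1) * (k - 1))
    icc = (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n)

    print(f"effect size = {effect_size:.2f}, CV = {cv:.1f}%, ICC(2,1) = {icc:.2f}")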
Auditing as part of the terminology design life cycle.
Min, Hua; Perl, Yehoshua; Chen, Yan; Halper, Michael; Geller, James; Wang, Yue
2006-01-01
To develop and test an auditing methodology for detecting errors in medical terminologies satisfying systematic inheritance. This methodology is based on various abstraction taxonomies that provide high-level views of a terminology and highlight potentially erroneous concepts. Our auditing methodology is based on dividing concepts of a terminology into smaller, more manageable units. First, we divide the terminology's concepts into areas according to their relationships/roles. Then each multi-rooted area is further divided into partial-areas (p-areas) that are singly-rooted. Each p-area contains a set of structurally and semantically uniform concepts. Two kinds of abstraction networks, called the area taxonomy and p-area taxonomy, are derived. These taxonomies form the basis for the auditing approach. Taxonomies tend to highlight potentially erroneous concepts in areas and p-areas. Human reviewers can focus their auditing efforts on the limited number of problematic concepts following two hypotheses on the probable concentration of errors. A sample of the area taxonomy and p-area taxonomy for the Biological Process (BP) hierarchy of the National Cancer Institute Thesaurus (NCIT) was derived from the application of our methodology to its concepts. These views led to the detection of a number of different kinds of errors that are reported, and to confirmation of the hypotheses on error concentration in this hierarchy. Our auditing methodology based on area and p-area taxonomies is an efficient tool for detecting errors in terminologies satisfying systematic inheritance of roles, and thus facilitates their maintenance. This methodology concentrates a domain expert's manual review on portions of the concepts with a high likelihood of errors.
NASA Astrophysics Data System (ADS)
Wang, Yang; Beirle, Steffen; Hendrick, Francois; Hilboll, Andreas; Jin, Junli; Kyuberis, Aleksandra A.; Lampel, Johannes; Li, Ang; Luo, Yuhan; Lodi, Lorenzo; Ma, Jianzhong; Navarro, Monica; Ortega, Ivan; Peters, Enno; Polyansky, Oleg L.; Remmers, Julia; Richter, Andreas; Puentedura, Olga; Van Roozendael, Michel; Seyler, André; Tennyson, Jonathan; Volkamer, Rainer; Xie, Pinhua; Zobov, Nikolai F.; Wagner, Thomas
2017-10-01
In order to promote the development of the passive DOAS technique the Multi Axis DOAS - Comparison campaign for Aerosols and Trace gases (MAD-CAT) was held at the Max Planck Institute for Chemistry in Mainz, Germany, from June to October 2013. Here, we systematically compare the differential slant column densities (dSCDs) of nitrous acid (HONO) derived from measurements of seven different instruments. We also compare the tropospheric difference of SCDs (delta SCD) of HONO, namely the difference of the SCDs for the non-zenith observations and the zenith observation of the same elevation sequence. Different research groups analysed the spectra from their own instruments using their individual fit software. All the fit errors of HONO dSCDs from the instruments with cooled large-size detectors are mostly in the range of 0.1 to 0.3 × 10^15 molecules cm^-2 for an integration time of 1 min. The fit error for the mini MAX-DOAS is around 0.7 × 10^15 molecules cm^-2. Although the HONO delta SCDs are normally smaller than 6 × 10^15 molecules cm^-2, consistent time series of HONO delta SCDs are retrieved from the measurements of different instruments. Both fits with a sequential Fraunhofer reference spectrum (FRS) and a daily noon FRS lead to similar consistency. Apart from the mini-MAX-DOAS, the systematic absolute differences of HONO delta SCDs between the instruments are smaller than 0.63 × 10^15 molecules cm^-2. The correlation coefficients are higher than 0.7 and the slopes of linear regressions deviate from unity by less than 16 % for the elevation angle of 1°. The correlations decrease with an increase in elevation angle. All the participants also analysed synthetic spectra using the same baseline DOAS settings to evaluate the systematic errors of HONO results from their respective fit programs. In general the errors are smaller than 0.3 × 10^15 molecules cm^-2, which is about half of the systematic difference between the real measurements. The differences of HONO delta SCDs retrieved in the selected three spectral ranges 335-361, 335-373 and 335-390 nm are considerable (up to 0.57 × 10^15 molecules cm^-2) for both real measurements and synthetic spectra. We performed sensitivity studies to quantify the dominant systematic error sources and to find a recommended DOAS setting in the three spectral ranges. The results show that water vapour absorption, temperature and wavelength dependence of O4 absorption, temperature dependence of the Ring spectrum, and polynomial and intensity offset correction all together dominate the systematic errors. We recommend a fit range of 335-373 nm for HONO retrievals. In this fit range the overall systematic uncertainty is about 0.87 × 10^15 molecules cm^-2, much smaller than those in the other two ranges. The typical random uncertainty is estimated to be about 0.16 × 10^15 molecules cm^-2, which is only 25 % of the total systematic uncertainty for most of the instruments in the MAD-CAT campaign. In summary, for most of the MAX-DOAS instruments at elevation angles below 5°, half of the daytime measurements (usually in the morning) of HONO delta SCD can be over the detection limit of 0.2 × 10^15 molecules cm^-2 with an uncertainty of ~0.9 × 10^15 molecules cm^-2.
Correlation methods in optical metrology with state-of-the-art x-ray mirrors
NASA Astrophysics Data System (ADS)
Yashchuk, Valeriy V.; Centers, Gary; Gevorkyan, Gevork S.; Lacey, Ian; Smith, Brian V.
2018-01-01
The development of fully coherent free electron lasers and diffraction limited storage ring x-ray sources has brought to focus the need for higher performing x-ray optics with unprecedented tolerances for surface slope and height errors and roughness. For example, the proposed beamlines for the future upgraded Advanced Light Source, ALS-U, require optical elements characterized by a residual slope error of <100 nrad (root-mean-square) and height error of <1-2 nm (peak-to-valley). These are for optics with a length of up to one meter. However, the current performance of x-ray optical fabrication and metrology generally falls short of these requirements. The major limitation comes from the lack of reliable and efficient surface metrology with the required accuracy and a reasonably high measurement rate, suitable for integration into modern deterministic surface figuring processes. The major problems of current surface metrology relate to the inherent instrumental temporal drifts, systematic errors, and/or an unacceptably high cost, as in the case of interferometry with computer-generated holograms as a reference. In this paper, we discuss the experimental methods and approaches based on correlation analysis for the acquisition and processing of metrology data developed at the ALS X-Ray Optical Laboratory (XROL). Using an example of surface topography measurements of a state-of-the-art x-ray mirror performed at the XROL, we demonstrate the efficiency of combining the developed experimental correlation methods with the advanced optimal scanning strategy (AOSS) technique. This allows a significant improvement in the accuracy and capacity of the measurements via suppression of the instrumental low frequency noise, temporal drift, and systematic error in a single measurement run. Practically speaking, implementation of the AOSS technique increases the measurement accuracy, as well as the capacity of ex situ metrology, by a factor of about four. The developed method is general and applicable to a broad spectrum of high accuracy measurements.
Systematic errors of EIT systems determined by easily-scalable resistive phantoms.
Hahn, G; Just, A; Dittmar, J; Hellige, G
2008-06-01
We present a simple method to determine systematic errors that will occur in the measurements by EIT systems. The approach is based on very simple scalable resistive phantoms for EIT systems using a 16 electrode adjacent drive pattern. The output voltage of the phantoms is constant for all combinations of current injection and voltage measurements and the trans-impedance of each phantom is determined by only one component. It can be chosen independently from the input and output impedance, which can be set in order to simulate measurements on the human thorax. Additional serial adapters allow investigation of the influence of the contact impedance at the electrodes on resulting errors. Since real errors depend on the dynamic properties of an EIT system, the following parameters are accessible: crosstalk, the absolute error of each driving/sensing channel and the signal to noise ratio in each channel. Measurements were performed on a Goe-MF II EIT system under four different simulated operational conditions. We found that systematic measurement errors always exceeded the error level of stochastic noise since the Goe-MF II system had been optimized for a sufficient signal to noise ratio but not for accuracy. In time difference imaging and functional EIT (f-EIT) systematic errors are reduced to a minimum by dividing the raw data by reference data. This is not the case in absolute EIT (a-EIT) where the resistivity of the examined object is determined on an absolute scale. We conclude that a reduction of systematic errors has to be one major goal in future system design.
Extension of sonic anemometry to high subsonic Mach number flows
NASA Astrophysics Data System (ADS)
Otero, R.; Lowe, K. T.; Ng, W. F.
2017-03-01
In the literature, the application of sonic anemometry has been limited to low subsonic Mach number, near-incompressible flow conditions. To the best of the authors' knowledge, this paper represents the first time a sonic anemometry approach has been used to characterize flow velocity beyond Mach 0.3. Using a high speed jet, flow velocity was measured using a modified sonic anemometry technique in flow conditions up to Mach 0.83. A numerical study was conducted to identify the effects of microphone placement on the accuracy of the measured velocity. Based on estimated error strictly due to uncertainty in the acoustic time of flight, a random error of +/- 4 m s^-1 was identified for the configuration used in this experiment. Comparison with measurements from a Pitot probe indicated a velocity RMS error of +/- 9 m s^-1. The discrepancy in error is attributed to a systematic error which may be calibrated out in future work. Overall, the experimental results from this preliminary study support the use of acoustics for high subsonic flow characterization.
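Classical sonic anemometry recovers the flow velocity from the difference of the reciprocal downstream and upstream acoustic transit times over a known path, v = (L/2)(1/t_down - 1/t_up). The sketch below illustrates that inversion with an assumed path length, speed of sound and timing jitter; it is not the modified high-subsonic technique of the paper.

    import numpy as np

    L = 0.15          # assumed acoustic path length [m]
    c = 343.0         # assumed speed of sound [m/s]

    def transit_times(v, L=L, c=c):
        """Downstream and upstream times of flight along the flow direction."""
        return L / (c + v), L / (c - v)

    def velocity_from_times(t_down, t_up, L=L):
        """Classic reciprocal time-of-flight inversion; also returns sound speed."""
        v = 0.5 * L * (1.0 / t_down - 1.0 / t_up)
        c_est = 0.5 * L * (1.0 / t_down + 1.0 / t_up)
        return v, c_est

    rng = np.random.default_rng(8)
    for v_true in [10.0, 100.0, 280.0]:                 # up to roughly Mach 0.8
        t_dn, t_up = transit_times(v_true)
        # Assumed 50 ns timing jitter on each time-of-flight measurement.
        t_dn += rng.normal(0.0, 50e-9)
        t_up += rng.normal(0.0, 50e-9)
        v_est, c_est = velocity_from_times(t_dn, t_up)
        print(f"true v = {v_true:6.1f} m/s  ->  estimated v = {v_est:6.1f} m/s")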
Complete Systematic Error Model of SSR for Sensor Registration in ATC Surveillance Networks
Besada, Juan A.
2017-01-01
In this paper, a complete and rigorous mathematical model for secondary surveillance radar systematic errors (biases) is developed. The model takes into account the physical effects systematically affecting the measurement processes. The azimuth biases are calculated from the physical error of the antenna calibration and the errors of the angle-determination device. Distance bias is calculated from the delay of the signal produced by the refractive index of the atmosphere, and from clock errors, while the altitude bias is calculated taking into account the atmosphere conditions (pressure and temperature). It will be shown, using simulated and real data, that adapting a classical bias estimation process to use the complete parametrized model results in improved accuracy in the bias estimation. PMID:28934157
Sources of variability and systematic error in mouse timing behavior.
Gallistel, C R; King, Adam; McDonald, Robert
2004-01-01
In the peak procedure, starts and stops in responding bracket the target time at which food is expected. The variability in start and stop times is proportional to the target time (scalar variability), as is the systematic error in the mean center (scalar error). The authors investigated the source of the error and the variability, using head poking in the mouse, with target intervals of 5 s, 15 s, and 45 s, in the standard procedure, and in a variant with 3 different target intervals at 3 different locations in a single trial. The authors conclude that the systematic error is due to the asymmetric location of start and stop decision criteria, and the scalar variability derives primarily from sources other than memory.
On the Quality of Point-Clouds Derived from Sfm-Photogrammetry Applied to UAS Imagery
NASA Astrophysics Data System (ADS)
Carbonneau, P.; James, T.
2014-12-01
Structure from Motion photogrammetry (SfM-photogrammetry) recently appeared in environmental sciences as an impressive tool allowing for the creation of topographic data from unstructured imagery. Several authors have tested the performance of SfM-photogrammetry vs that of TLS or dGPS. Whilst the initial results were very promising, there is currently a growing awareness that systematic deformations occur in DEMs and point-clouds derived from SfM-photogrammetry. Notably, some authors have identified a systematic doming manifest as an increasing error vs distance to the model centre. Simulation studies have confirmed that this error is due to errors in the calibration of camera distortions. This work aims to further investigate these effects in the presence of real data. We start with a dataset of 220 images acquired from a sUAS. After obtaining an initial self-calibration of the camera lens with Agisoft Photoscan, our method consists in applying systematic perturbations to 2 key lens parameters: Focal length and the k1 distortion parameter. For each perturbation, a point-cloud was produced and compared to LiDAR data. After deriving the mean and standard deviation of the error residuals (ɛ), a 2nd order polynomial surface was fitted to the errors point-cloud and the peak ɛ defined as the mathematical extrema of this surface. The results are presented in figure 1. This figure shows that lens perturbations can induce a range of errors with systematic behaviours. Peak ɛ is primarily controlled by K1 with a secondary control exerted by the focal length. These results allow us to state that: To limit the peak ɛ to 10cm, the K1 parameter must be calibrated to within 0.00025 and the focal length to within 2.5 pixels (≈10 µm). This level of calibration accuracy can only be achieved with proper design of image acquisition and control network geometry. Our main point is therefore that SfM is not a bypass to a rigorous and well-informed photogrammetric approach. Users of SfM-photogrammetry will still require basic training and knowledge in the fundamentals of photogrammetry. This is especially true for applications where very small topographic changes need to be detected or where gradient-sensitive processes need to be modelled.
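The 'peak ɛ' statistic described here is the extremum of a second-order polynomial surface fitted to the elevation residuals. A minimal sketch of that fit and extremum search on synthetic, dome-shaped residuals is given below (generic NumPy least squares, not the authors' processing chain).

    import numpy as np

    rng = np.random.default_rng(9)

    # Synthetic SfM-minus-LiDAR elevation residuals with a dome plus noise.
    n = 5000
    x = rng.uniform(-200.0, 200.0, size=n)         # metres from model centre
    y = rng.uniform(-150.0, 150.0, size=n)
    dome = 0.12 - 2.5e-6 * (x**2 + y**2)           # ~12 cm doming at the centre
    eps = dome + rng.normal(0.0, 0.03, size=n)     # 3 cm random noise

    # Fit eps = a + b x + c y + d x^2 + e x y + f y^2 by least squares.
    A = np.column_stack([np.ones(n), x, y, x**2, x * y, y**2])
    coef, *_ = np.linalg.lstsq(A, eps, rcond=None)
    a, b, c, d, e, f = coef

    # Stationary point of the quadratic surface (set the gradient to zero).
    H = np.array([[2 * d, e], [e, 2 * f]])
    x0, y0 = np.linalg.solve(H, [-b, -c])
    peak = a + b * x0 + c * y0 + d * x0**2 + e * x0 * y0 + f * y0**2

    print(f"mean residual = {eps.mean():.3f} m, peak residual = {peak:.3f} m "
          f"at ({x0:.1f}, {y0:.1f}) m")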
Error analysis and system optimization of non-null aspheric testing system
NASA Astrophysics Data System (ADS)
Luo, Yongjie; Yang, Yongying; Liu, Dong; Tian, Chao; Zhuo, Yongmo
2010-10-01
A non-null aspheric testing system, which employs a partial null lens (PNL for short) and a reverse iterative optimization reconstruction (ROR for short) technique, is proposed in this paper. Based on system modeling in ray-tracing software, the parameters of each optical element are optimized, which makes the system modeling more precise. The systematic error of the non-null aspheric testing system is analyzed and can be categorized into two types: the error due to the surface parameters of the PNL in the system modeling, and the remainder from the non-null interferometer, obtained by the approach of error storage subtraction. Experimental results show that, after the systematic error is removed from the testing result of the non-null aspheric testing system, the aspheric surface is precisely reconstructed by the ROR technique, and the consideration of systematic error greatly increases the test accuracy of the non-null aspheric testing system.
2010-09-01
overlooked during previous SCR and other searches. The Two-Micron All Sky Survey (2MASS) was used to probe for and reduce systematic errors in UCAC CCD...of 50–200 mas, when compared to 2MASS data. For a detailed description of the derived UCAC3 proper motions see Zacharias et al. (2010). An effort was...meeting the declination and proper motion survey limits, all stars (1) must be in the 2MASS catalog with an e2mpho (2MASS photometry error) less than
Cullen, Jared; Lobo, Charlene J; Ford, Michael J; Toth, Milos
2015-09-30
Electron-beam-induced deposition (EBID) is a direct-write chemical vapor deposition technique in which an electron beam is used for precursor dissociation. Here we show that Arrhenius analysis of the deposition rates of nanostructures grown by EBID can be used to deduce the diffusion energies and corresponding preexponential factors of EBID precursor molecules. We explain the limitations of this approach, define growth conditions needed to minimize errors, and explain why the errors increase systematically as EBID parameters diverge from ideal growth conditions. Under suitable deposition conditions, EBID can be used as a localized technique for analysis of adsorption barriers and prefactors.
Assiri, Ghadah Asaad; Shebl, Nada Atef; Mahmoud, Mansour Adam; Aloudah, Nouf; Grant, Elizabeth; Aljadhey, Hisham; Sheikh, Aziz
2018-05-05
To investigate the epidemiology of medication errors and error-related adverse events in adults in primary care, ambulatory care and patients' homes. Systematic review. Six international databases were searched for publications between 1 January 2006 and 31 December 2015. Two researchers independently extracted data from eligible studies and assessed the quality of these using established instruments. Synthesis of data was informed by an appreciation of the medicines' management process and the conceptual framework from the International Classification for Patient Safety. 60 studies met the inclusion criteria, of which 53 studies focused on medication errors, 3 on error-related adverse events and 4 on risk factors only. The prevalence of prescribing errors was reported in 46 studies: prevalence estimates ranged widely from 2% to 94%. Inappropriate prescribing was the most common type of error reported. Only one study reported the prevalence of monitoring errors, finding that incomplete therapeutic/safety laboratory-test monitoring occurred in 73% of patients. The incidence of preventable adverse drug events (ADEs) was estimated as 15/1000 person-years, the prevalence of drug-drug interaction-related adverse drug reactions as 7% and the prevalence of preventable ADE as 0.4%. A number of patient, healthcare professional and medication-related risk factors were identified, including the number of medications used by the patient, increased patient age, the number of comorbidities, use of anticoagulants, cases where more than one physician was involved in patients' care and care being provided by family physicians/general practitioners. A very wide variation in the medication error and error-related adverse events rates is reported in the studies, this reflecting heterogeneity in the populations studied, study designs employed and outcomes evaluated. This review has identified important limitations and discrepancies in the methodologies used and gaps in the literature on the epidemiology and outcomes of medication errors in community settings. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2018. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
A new systematic calibration method of ring laser gyroscope inertial navigation system
NASA Astrophysics Data System (ADS)
Wei, Guo; Gao, Chunfeng; Wang, Qi; Wang, Qun; Xiong, Zhenyu; Long, Xingwu
2016-10-01
The inertial navigation system (INS) has been the core component of both military and civil navigation systems. Before an INS is put into application, it must be calibrated in the laboratory in order to compensate for the repeatability errors caused by manufacturing. The discrete calibration method cannot fulfill the requirements for high-accuracy calibration of a mechanically dithered ring laser gyroscope navigation system with shock absorbers. This paper analyzes the theory of error excitation and separation in detail and presents a new systematic calibration method for ring laser gyroscope inertial navigation systems. Error models and equations of the calibrated Inertial Measurement Unit are given. Proper rotation arrangement orders are then described in order to establish the linear relationships between the change of velocity errors and the calibrated parameter errors. Experiments were set up to compare the systematic errors calculated from the filtering calibration result with those obtained from the discrete calibration result. The largest position error and velocity error of the filtering calibration result are only 0.18 miles and 0.26 m/s, compared with 2 miles and 1.46 m/s for the discrete calibration result. These results validate the new systematic calibration method and prove its importance for the optimal design and accuracy improvement of the calibration of mechanically dithered ring laser gyroscope inertial navigation systems.
Measuring Systematic Error with Curve Fits
ERIC Educational Resources Information Center
Rupright, Mark E.
2011-01-01
Systematic errors are often unavoidable in the introductory physics laboratory. As has been demonstrated in many papers in this journal, such errors can present a fundamental problem for data analysis, particularly when comparing the data to a given model. In this paper I give three examples in which my students use popular curve-fitting software…
Systematic Error Modeling and Bias Estimation
Zhang, Feihu; Knoll, Alois
2016-01-01
This paper analyzes the statistical properties of the systematic error in terms of range and bearing during the transformation process. Furthermore, we rely on a weighted nonlinear least-squares method to calculate the biases based on the proposed models. The results show the high performance of the proposed approach for error modeling and bias estimation. PMID:27213386
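As a rough illustration of the kind of weighted nonlinear least-squares bias fit described above, the following Python sketch estimates a constant range bias and bearing bias against reference target positions assumed known; the noise levels, geometry, and variable names are hypothetical and not taken from the paper.

```python
import numpy as np
from scipy.optimize import least_squares

def residuals(bias, ranges, bearings, xy_true, sensor_xy, sigma_r=50.0, sigma_b=0.001):
    """Weighted residuals between biased range/bearing measurements and references."""
    dr, db = bias
    dx = xy_true[:, 0] - sensor_xy[0]
    dy = xy_true[:, 1] - sensor_xy[1]
    r_ref = np.hypot(dx, dy)
    b_ref = np.arctan2(dy, dx)
    res_r = (ranges - dr - r_ref) / sigma_r                            # weights = 1/sigma
    res_b = np.angle(np.exp(1j * (bearings - db - b_ref))) / sigma_b   # wrap to [-pi, pi]
    return np.concatenate([res_r, res_b])

# Synthetic illustration: true biases of 120 m in range and 2 mrad in bearing
rng = np.random.default_rng(0)
xy_true = rng.uniform(-5e4, 5e4, size=(200, 2))
sensor_xy = np.array([0.0, 0.0])
r_true = np.hypot(xy_true[:, 0], xy_true[:, 1])
b_true = np.arctan2(xy_true[:, 1], xy_true[:, 0])
ranges = r_true + 120.0 + rng.normal(scale=50.0, size=200)
bearings = b_true + 0.002 + rng.normal(scale=0.001, size=200)

fit = least_squares(residuals, x0=[0.0, 0.0], args=(ranges, bearings, xy_true, sensor_xy))
range_bias, bearing_bias = fit.x
```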
A review of uncertainty in in situ measurements and data sets of sea surface temperature
NASA Astrophysics Data System (ADS)
Kennedy, John J.
2014-03-01
Archives of in situ sea surface temperature (SST) measurements extend back more than 160 years. Quality of the measurements is variable, and the area of the oceans they sample is limited, especially early in the record and during the two world wars. Measurements of SST and the gridded data sets that are based on them are used in many applications, so understanding and estimating the uncertainties are vital. The aim of this review is to give an overview of the various components that contribute to the overall uncertainty of SST measurements made in situ and of the data sets that are derived from them. In doing so, it also aims to identify current gaps in understanding. Uncertainties arise at the level of individual measurements with both systematic and random effects and, although these have been extensively studied, refinement of the error models continues. Recent improvements have been made in the understanding of the pervasive systematic errors that affect the assessment of long-term trends and variability. However, the adjustments applied to minimize these systematic errors are uncertain, and these uncertainties are higher before the 1970s and particularly large in the period surrounding the Second World War owing to a lack of reliable metadata. The uncertainties associated with the choice of statistical methods used to create globally complete SST data sets have been explored using different analysis techniques, but they do not incorporate the latest understanding of measurement errors, and they lack a fair benchmark against which their skill can be objectively assessed. These problems can be addressed by the creation of new end-to-end SST analyses and by the recovery and digitization of data and metadata from ship log books and other contemporary literature.
NASA Technical Reports Server (NTRS)
Jourdan, Didier; Gautier, Catherine
1995-01-01
Comprehensive Ocean-Atmosphere Data Set (COADS) and satellite-derived parameters are input to a similarity theory-based model and treated in completely equivalent ways to compute global latent heat flux (LHF). In order to compute LHF exclusively from satellite measurements, an empirical relationship (Q-W relationship) is used to compute the air mixing ratio from Special Sensor Microwave/Imager (SSM/I) precipitable water W, and a new one is derived to compute the air temperature also from retrieved W (T-W relationship). First analyses indicate that in situ and satellite LHF computations compare within 40%, but systematic errors increase the differences up to 100% in some regions. By investigating more closely the origin of the discrepancies, the spatial sampling of ship reports has been found to be an important source of error in the observed differences. When the number of in situ data records increases (more than 20 per month), the agreement is about 50 W/sq m rms (40 W/sq m rms for multiyear averages). Limitations of both empirical relationships and W retrieval errors strongly affect the LHF computation. Systematic LHF overestimation occurs in strong subsidence regions and LHF underestimation occurs within surface convergence zones and over oceanic upwelling areas. The analysis of time series of the different parameters in these regions confirms that systematic LHF discrepancies are negatively correlated with the differences between COADS and satellite-derived values of the air mixing ratio and air temperature. To reduce the systematic differences in satellite-derived LHF, a preliminary ship-satellite blending procedure has been developed for the air mixing ratio and air temperature.
Modeling the North American vertical datum of 1988 errors in the conterminous United States
NASA Astrophysics Data System (ADS)
Li, X.
2018-02-01
A large systematic difference (ranging from -20 cm to +130 cm) was found between NAVD 88 (North American Vertical Datum of 1988) and the pure gravimetric geoid models. This difference not only makes it very difficult to augment the local geoid model by directly using the vast NAVD 88 network with state-of-the-art technologies recently developed in geodesy, but also limits the ability of researchers to effectively demonstrate geoid model improvements on the NAVD 88 network. Here, both conventional regression analyses based on various predefined basis functions such as polynomials, B-splines, and Legendre functions and Latent Variable Analysis (LVA) such as Factor Analysis (FA) are used to analyze the systematic difference. Besides giving a mathematical model, the regression results do not reveal a great deal about the physical reasons that caused the large differences in NAVD 88, which may be of interest to various researchers. Furthermore, there is still a significant amount of non-Gaussian signal left in the residuals of the conventional regression models. On the other hand, the FA method not only provides a better fit of the data, but also offers possible explanations of the error sources. Without requiring extra hypothesis tests on the model coefficients, the results from FA are more efficient in terms of capturing the systematic difference. Furthermore, without using a covariance model, a novel interpolating method based on the relationship between the loading matrix and the factor scores is developed for predictive purposes. The prediction error analysis shows that about 3-7 cm precision is expected in NAVD 88 after removing the systematic difference.
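The factor-analysis step can be sketched with off-the-shelf tools. The snippet below assumes a hypothetical benchmark-by-feature data matrix rather than the paper's actual data layout; it shows how factor scores and the loading matrix separate a systematic component from the residual.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

# Hypothetical layout: one row per benchmark, columns are observed quantities
rng = np.random.default_rng(0)
latent = rng.normal(size=(500, 2))                    # two underlying error sources
true_loadings = rng.normal(size=(2, 6))
X = latent @ true_loadings + rng.normal(scale=0.05, size=(500, 6))

fa = FactorAnalysis(n_components=2, random_state=0)
scores = fa.fit_transform(X)                          # factor scores per benchmark
loadings = fa.components_                             # loading matrix
X_systematic = scores @ loadings + fa.mean_           # part captured by the factors
residual = X - X_systematic                           # non-systematic remainder
```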
Gaia Data Release 1. Validation of the photometry
NASA Astrophysics Data System (ADS)
Evans, D. W.; Riello, M.; De Angeli, F.; Busso, G.; van Leeuwen, F.; Jordi, C.; Fabricius, C.; Brown, A. G. A.; Carrasco, J. M.; Voss, H.; Weiler, M.; Montegriffo, P.; Cacciari, C.; Burgess, P.; Osborne, P.
2017-04-01
Aims: The photometric validation of the Gaia DR1 release of the ESA Gaia mission is described and the quality of the data shown. Methods: This is carried out via an internal analysis of the photometry using the most constant sources. Comparisons with external photometric catalogues are also made, but are limited by the accuracies and systematics present in these catalogues. An analysis of the quoted errors is also described. Investigations of the calibration coefficients reveal some of the systematic effects that affect the fluxes. Results: The analysis of the constant sources shows that the early-stage photometric calibrations can reach an accuracy as low as 3 mmag.
A Systematic Methodology for Verifying Superscalar Microprocessors
NASA Technical Reports Server (NTRS)
Srivas, Mandayam; Hosabettu, Ravi; Gopalakrishnan, Ganesh
1999-01-01
We present a systematic approach to decompose and incrementally build the proof of correctness of pipelined microprocessors. The central idea is to construct the abstraction function by using completion functions, one per unfinished instruction, each of which specifies the effect (on the observables) of completing the instruction. In addition to avoiding the term size and case explosion problem that limits the pure flushing approach, our method helps localize errors, and also handles stages with interactive loops. The technique is illustrated on pipelined and superscalar pipelined implementations of a subset of the DLX architecture. It has also been applied to a processor with out-of-order execution.
Crab Pulsar Astrometry and Spin-Velocity Alignment
NASA Astrophysics Data System (ADS)
Romani, Roger W.; Ng, C.-Y.
2009-01-01
The proper motion of the Crab pulsar and its orientation with respect to the PWN symmetry axis are interesting for testing models of neutron star birth kicks. A number of authors have measured the Crab's motion using archival HST images. The most detailed study, by Kaplan et al. (2008), compares a wide range of WFPC and ACS images to obtain an accurate proper motion measurement. However, they concluded that a kick comparison is fundamentally limited by the uncertainty in the progenitor's motion. Here we report on new HST images matched to 1994 and 1995 data frames, providing an independent proper motion measurement with an over-13-year time base and minimal systematic errors. The new observations also allow us to estimate the systematic errors due to CCD saturation. Our preliminary result indicates a proper motion consistent with Kaplan et al.'s finding. We discuss a model for the progenitor's motion, suggesting that the pulsar spin is much closer to alignment than previously suspected.
Practical issues in ultrashort-laser-pulse measurement using frequency-resolved optical gating
DOE Office of Scientific and Technical Information (OSTI.GOV)
DeLong, K.W.; Fittinghoff, D.N.; Trebino, R.
1996-07-01
The authors explore several practical experimental issues in measuring ultrashort laser pulses using the technique of frequency-resolved optical gating (FROG). They present a simple method for checking the consistency of experimentally measured FROG data with the independently measured spectrum and autocorrelation of the pulse. This method is a powerful way of discovering systematic errors in FROG experiments. They show how to determine the optimum sampling rate for FROG and show that this satisfies the Nyquist criterion for the laser pulse. They explore the low- and high-power limits to FROG and determine that femtojoule operation should be possible, while the effects of self-phase modulation limit the highest signal efficiency in FROG to 1%. They also show quantitatively that the temporal blurring due to a finite-thickness medium in single-shot geometries does not strongly limit the FROG technique. They explore the limiting time-bandwidth values that can be represented on a FROG trace of a given size. Finally, they report on a new measure of the FROG error that improves convergence in the presence of noise.
Casas, Francisco J; Ortiz, David; Villa, Enrique; Cano, Juan L; Cagigas, Jaime; Pérez, Ana R; Aja, Beatriz; Terán, J Vicente; de la Fuente, Luisa; Artal, Eduardo; Hoyland, Roger; Génova-Santos, Ricardo
2015-08-05
This paper presents preliminary polarization measurements and systematic-error characterization of the Thirty Gigahertz Instrument receiver developed for the QUIJOTE experiment. The instrument has been designed to measure the polarization of Cosmic Microwave Background radiation from the sky, obtaining the Q, U, and I Stokes parameters of the incoming signal simultaneously. Two kinds of linearly polarized input signals have been used as excitations in the polarimeter measurement tests in the laboratory; these show consistent results in terms of the Stokes parameters obtained. A measurement-based systematic-error characterization technique has been used in order to determine the possible sources of instrumental errors and to assist in the polarimeter calibration process.
Methodology for designing psychological habitability for the space station.
Komastubara, A
2000-09-01
Psychological habitability is a critical quality issue for the International Space Station because poor habitability degrades performance shaping factors (PSFs) and increases human errors. However, habitability often receives rather limited design attention based on someone's superficial tastes, because systematic design procedures for habitability quality are lacking. To improve the design treatment of psychological habitability, this paper proposes and discusses a design methodology for psychological habitability for the International Space Station.
Computer Aided Design Parameters for Forward Basing
1988-12-01
21 meters. Systematic errors within limits stated for absolute accuracy are tolerated at this level. DEM data acquired photogrammetrically using manual ...This is a professional drawing package, capable of the manipulation required for this project. With the AutoLISP programming language (a variation on...(Table 2). Data Conversion Package II: GWN System's Digital Terrain Modeling (DTM) package was used. This AutoLISP-based third party software is
The Storage Ring Proton EDM Experiment
NASA Astrophysics Data System (ADS)
Semertzidis, Yannis; Storage Ring Proton EDM Collaboration
2014-09-01
The storage ring pEDM experiment utilizes an all-electric storage ring to store ~10^11 longitudinally polarized protons simultaneously in clockwise and counter-clockwise directions for 10^3 seconds. The radial E-field acts on the proton EDM for the duration of the storage time to precess its spin in the vertical plane. The ring lattice is optimized to reduce intra-beam scattering, increase the statistical sensitivity and reduce the systematic errors of the method. The main systematic error is a net radial B-field integrated around the ring causing an EDM-like vertical spin precession. The counter-rotating beams sense this integrated field and are vertically shifted by an amount which depends on the strength of the vertical focusing in the ring, thus creating a radial B-field. Modulating the vertical focusing at 10 kHz makes possible the detection of this radial B-field by a SQUID magnetometer (SQUID-based BPM). For a total number of n SQUID-based BPMs distributed around the ring, the effectiveness of the method is limited to the N = n/2 harmonic of the background radial B-field due to the Nyquist sampling theorem limit. This limitation establishes the requirement to reduce the maximum radial B-field to 0.1-1 nT everywhere around the ring by layers of mu-metal and an aluminum vacuum tube. The method's sensitivity is 10^-29 e·cm, more than three orders of magnitude better than the present neutron EDM experimental limit, making it sensitive to SUSY-like new physics mass scales up to 300 TeV.
Assessment of Systematic Measurement Errors for Acoustic Travel-Time Tomography of the Atmosphere
2013-01-01
measurements include assessment of the time delays in electronic circuits and mechanical hardware (e.g., drivers and microphones) of a tomography array ...hardware and electronic circuits of the tomography array and errors in synchronization of the transmitted and recorded signals. For example, if...coordinates can be as large as 30 cm. These errors are equivalent to systematic errors in the travel times of 0.9 ms. Third, loudspeakers which are used
Auditing as Part of the Terminology Design Life Cycle
Min, Hua; Perl, Yehoshua; Chen, Yan; Halper, Michael; Geller, James; Wang, Yue
2006-01-01
Objective: To develop and test an auditing methodology for detecting errors in medical terminologies satisfying systematic inheritance. This methodology is based on various abstraction taxonomies that provide high-level views of a terminology and highlight potentially erroneous concepts. Design: Our auditing methodology is based on dividing concepts of a terminology into smaller, more manageable units. First, we divide the terminology’s concepts into areas according to their relationships/roles. Then each multi-rooted area is further divided into partial-areas (p-areas) that are singly-rooted. Each p-area contains a set of structurally and semantically uniform concepts. Two kinds of abstraction networks, called the area taxonomy and p-area taxonomy, are derived. These taxonomies form the basis for the auditing approach. Taxonomies tend to highlight potentially erroneous concepts in areas and p-areas. Human reviewers can focus their auditing efforts on the limited number of problematic concepts following two hypotheses on the probable concentration of errors. Results: A sample of the area taxonomy and p-area taxonomy for the Biological Process (BP) hierarchy of the National Cancer Institute Thesaurus (NCIT) was derived from the application of our methodology to its concepts. These views led to the detection of a number of different kinds of errors that are reported, and to confirmation of the hypotheses on error concentration in this hierarchy. Conclusion: Our auditing methodology based on area and p-area taxonomies is an efficient tool for detecting errors in terminologies satisfying systematic inheritance of roles, and thus facilitates their maintenance. This methodology concentrates a domain expert’s manual review on portions of the concepts with a high likelihood of errors. PMID:16929044
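The area/p-area construction can be summarized in a few lines: concepts are grouped by their exact set of roles, and each multi-rooted area is then split at the concepts that have no parent inside the area. The toy sketch below uses hypothetical concepts and role names; it is only meant to illustrate the partitioning logic, not the NCIT data model.

```python
from collections import defaultdict

def build_areas(concepts):
    """Group concepts into areas by their exact set of roles (relationship types)."""
    areas = defaultdict(list)
    for c in concepts:
        areas[frozenset(c["roles"])].append(c["id"])
    return areas

def area_roots(area_ids, parents):
    """Roots of an area: concepts with no parent inside the same area.
    Each root seeds a singly-rooted partial-area (p-area)."""
    ids = set(area_ids)
    return [c for c in area_ids if not (set(parents.get(c, [])) & ids)]

# Toy terminology fragment (hypothetical)
concepts = [
    {"id": "A", "roles": {"has_location"}},
    {"id": "B", "roles": {"has_location"}},
    {"id": "C", "roles": {"has_location", "has_agent"}},
]
parents = {"B": ["A"], "C": ["B"]}
for roles, ids in build_areas(concepts).items():
    print(sorted(roles), ids, "roots:", area_roots(ids, parents))
```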
Error analysis for the ground-based microwave ozone measurements during STOIC
NASA Technical Reports Server (NTRS)
Connor, Brian J.; Parrish, Alan; Tsou, Jung-Jung; McCormick, M. Patrick
1995-01-01
We present a formal error analysis and characterization of the microwave measurements made during the Stratospheric Ozone Intercomparison Campaign (STOIC). The most important error sources are found to be determination of the tropospheric opacity, the pressure-broadening coefficient of the observed line, and systematic variations in instrument response as a function of frequency ('baseline'). Net precision is 4-6% between 55 and 0.2 mbar, while accuracy is 6-10%. Resolution is 8-10 km below 3 mbar and increases to 17 km at 0.2 mbar. We show the 'blind' microwave measurements from STOIC and make limited comparisons to other measurements. We use the averaging kernels of the microwave measurement to eliminate resolution and a priori effects from a comparison to SAGE 2. The STOIC results and comparisons are broadly consistent with the formal analysis.
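The averaging-kernel comparison mentioned above follows the usual smoothing relation x_cmp = x_a + A (x_high - x_a). A minimal sketch, with hypothetical array names, is shown below; it is not the STOIC processing code.

```python
import numpy as np

def apply_averaging_kernels(x_high, x_apriori, A):
    """Smooth a higher-resolution profile (e.g., a correlative ozone profile) with the
    retrieval's averaging-kernel matrix A so that resolution and a priori effects are
    removed from the comparison: x_cmp = x_a + A @ (x_high - x_a)."""
    return x_apriori + A @ (x_high - x_apriori)
```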
NASA Astrophysics Data System (ADS)
Davis, K. J.; Bakwin, P. S.; Yi, C.; Cook, B. D.; Wang, W.; Denning, A. S.; Teclaw, R.; Isebrands, J. G.
2001-05-01
Long-term, tower-based measurements using the eddy-covariance method have revealed a wealth of detail about the temporal dynamics of net ecosystem-atmosphere exchange (NEE) of CO2. The data also provide a measure of the annual net CO2 exchange. The area represented by these flux measurements, however, is limited, and doubts remain about possible systematic errors that may bias the annual net exchange measurements. Flux and mixing ratio measurements conducted at the WLEF tall tower as part of the Chequamegon Ecosystem-Atmosphere Study (ChEAS) allow for a unique assessment of the uncertainties in NEE of CO2. The synergy between flux and mixing ratio observations shows the potential for comparing inverse and eddy-covariance methods of estimating NEE of CO2. Such comparisons may strengthen confidence in both results and begin to bridge the huge gap in spatial scales (at least 3 orders of magnitude) between continental or hemispheric scale inverse studies and kilometer-scale eddy covariance flux measurements. Data from WLEF and Willow Creek, another ChEAS tower, are used to estimate random and systematic errors in NEE of CO2. Random uncertainty in seasonal exchange rates and the annual integrated NEE, including both turbulent sampling errors and variability in environmental conditions, is small. Systematic errors are identified by examining changes in flux as a function of atmospheric stability and wind direction, and by comparing the multiple level flux measurements on the WLEF tower. Nighttime drainage is modest but evident. Systematic horizontal advection occurs during the morning turbulence transition. The potential total systematic error appears to be larger than the random uncertainty, but still modest. The total systematic error, however, is difficult to assess. It appears that the WLEF region ecosystems were a small net sink of CO2 in 1997. It is clear that the summer uptake rate at WLEF is much smaller than that at most deciduous forest sites, including the nearby Willow Creek site. The WLEF tower also allows us to study the potential for monitoring continental CO2 mixing ratios from tower sites. Despite concerns about the proximity to ecosystem sources and sinks, it is clear that boundary layer CO2 mixing ratios can be monitored using typical surface layer towers. Seasonal and annual land-ocean mixing ratio gradients are readily detectable, providing the motivation for a flux-tower based mixing ratio observation network that could greatly improve the accuracy of inversion-based estimates of NEE of CO2, and enable inversions to be applied on smaller temporal and spatial scales. Results from the WLEF tower illustrate the degree to which local flux measurements represent interannual, seasonal and synoptic CO2 mixing ratio trends. This coherence between fluxes and mixing ratios serves to "regionalize" the eddy-covariance based local NEE observations.
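For readers unfamiliar with the eddy-covariance method referred to throughout, the core calculation is the covariance of vertical wind and CO2 fluctuations over an averaging period; a friction-velocity screen is one common way to guard against the weak-turbulence and drainage conditions discussed above. The following is a minimal sketch with an illustrative threshold, not the ChEAS processing code.

```python
import numpy as np

def eddy_covariance_flux(w, c):
    """Flux for one averaging period as the covariance <w'c'> of vertical wind (w)
    and CO2 concentration (c); primes denote departures from the period mean."""
    w = np.asarray(w, dtype=float)
    c = np.asarray(c, dtype=float)
    return np.mean((w - w.mean()) * (c - c.mean()))

def ustar_screen(flux, ustar, threshold=0.2):
    """Discard fluxes from weakly turbulent periods (threshold value is illustrative)."""
    return np.where(np.asarray(ustar) >= threshold, flux, np.nan)
```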
Distance error correction for time-of-flight cameras
NASA Astrophysics Data System (ADS)
Fuersattel, Peter; Schaller, Christian; Maier, Andreas; Riess, Christian
2017-06-01
The measurement accuracy of time-of-flight cameras is limited due to properties of the scene and systematic errors. These errors can accumulate to multiple centimeters, which may limit the applicability of these range sensors. In the past, different approaches have been proposed for improving the accuracy of these cameras. In this work, we propose a new method that improves two important aspects of the range calibration. First, we propose a new checkerboard which is augmented by a gray-level gradient. With this addition it becomes possible to capture the calibration features for intrinsic and distance calibration at the same time. The gradient strip allows a large number of distance measurements to be acquired for different surface reflectivities, which results in more meaningful training data. Second, we present multiple new features which are used as input to a random forest regressor. By using random regression forests, we circumvent the problem of finding an accurate model for the measurement error. During application, a correction value for each individual pixel is estimated with the trained forest based on a specifically tailored feature vector. With our approach the measurement error can be reduced by more than 40% for the Mesa SR4000 and by more than 30% for the Microsoft Kinect V2. In our evaluation we also investigate the impact of the individual forest parameters and illustrate the importance of the individual features.
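A per-pixel correction of this kind can be prototyped with a standard random-forest regressor; the sketch below trains on synthetic feature vectors and stand-in reference errors, since the paper's tailored feature vector is not reproduced here.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
# Hypothetical per-pixel features: raw distance, amplitude, pixel x, pixel y
X = rng.normal(size=(5000, 4))
# Stand-in "measurement error" to be learned (measured minus true distance)
y = 0.03 * X[:, 0] + 0.01 * X[:, 1] + rng.normal(scale=0.005, size=5000)

forest = RandomForestRegressor(n_estimators=100, random_state=0)
forest.fit(X, y)

# At run time, subtract the predicted correction from each pixel's raw distance
correction = forest.predict(X[:10])
corrected = X[:10, 0] - correction
```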
Karliner, Leah S; Jacobs, Elizabeth A; Chen, Alice Hm; Mutha, Sunita
2007-01-01
Objective: To determine if professional medical interpreters have a positive impact on clinical care for limited English proficiency (LEP) patients. Data Sources: A systematic literature search, limited to the English language, in PubMed and PsycINFO for publications between 1966 and September 2005, and a search of the Cochrane Library. Study Design: Any peer-reviewed article which compared at least two language groups, contained data about professional medical interpreters, and addressed communication (errors and comprehension), utilization, clinical outcomes, or satisfaction was included. Of 3,698 references, 28 were found by multiple reviewers to meet inclusion criteria and, of these, 21 assessed professional interpreters separately from ad hoc interpreters. Data were abstracted from each article by two reviewers. Data were collected on the study design, size, comparison groups, analytic technique, interpreter training, and method of determining the participants' need for an interpreter. Each study was evaluated for the effect of interpreter use on four clinical topics that were most likely to either impact or reflect disparities in health and health care. Principal Findings: In all four areas examined, use of professional interpreters is associated with improved clinical care more than is use of ad hoc interpreters, and professional interpreters appear to raise the quality of clinical care for LEP patients to approach or equal that for patients without language barriers. Conclusions: Published studies report positive benefits of professional interpreters on communication (errors and comprehension), utilization, clinical outcomes and satisfaction with care. PMID:17362215
Experimental search for the violation of Pauli exclusion principle: VIP-2 Collaboration.
Shi, H; Milotti, E; Bartalucci, S; Bazzi, M; Bertolucci, S; Bragadireanu, A M; Cargnelli, M; Clozza, A; De Paolis, L; Di Matteo, S; Egger, J-P; Elnaggar, H; Guaraldo, C; Iliescu, M; Laubenstein, M; Marton, J; Miliucci, M; Pichler, A; Pietreanu, D; Piscicchia, K; Scordo, A; Sirghi, D L; Sirghi, F; Sperandio, L; Vazquez Doce, O; Widmann, E; Zmeskal, J; Curceanu, C
2018-01-01
The VIolation of Pauli exclusion principle-2 experiment, or VIP-2 experiment, at the Laboratori Nazionali del Gran Sasso searches for X-rays from copper atomic transitions that are prohibited by the Pauli exclusion principle. Candidate direct violation events come from the transition of a 2p electron to the ground state that is already occupied by two electrons. From the first data-taking campaign in 2016 of the VIP-2 experiment, we determined a best upper limit of 3.4 × 10^{-29} for the probability that such a violation exists. Significant improvement in the control of the experimental systematics was also achieved, although not explicitly reflected in the improved upper limit. By introducing a simultaneous spectral fit of the signal and background data in the analysis, we succeeded in taking into account systematic errors that could not be evaluated previously in this type of measurement.
Experimental search for the violation of Pauli exclusion principle. VIP-2 Collaboration
NASA Astrophysics Data System (ADS)
Shi, H.; Milotti, E.; Bartalucci, S.; Bazzi, M.; Bertolucci, S.; Bragadireanu, A. M.; Cargnelli, M.; Clozza, A.; De Paolis, L.; Di Matteo, S.; Egger, J.-P.; Elnaggar, H.; Guaraldo, C.; Iliescu, M.; Laubenstein, M.; Marton, J.; Miliucci, M.; Pichler, A.; Pietreanu, D.; Piscicchia, K.; Scordo, A.; Sirghi, D. L.; Sirghi, F.; Sperandio, L.; Vazquez Doce, O.; Widmann, E.; Zmeskal, J.; Curceanu, C.
2018-04-01
The VIolation of Pauli exclusion principle-2 experiment, or VIP-2 experiment, at the Laboratori Nazionali del Gran Sasso searches for X-rays from copper atomic transitions that are prohibited by the Pauli exclusion principle. Candidate direct violation events come from the transition of a 2p electron to the ground state that is already occupied by two electrons. From the first data-taking campaign in 2016 of the VIP-2 experiment, we determined a best upper limit of 3.4 × 10^{-29} for the probability that such a violation exists. Significant improvement in the control of the experimental systematics was also achieved, although not explicitly reflected in the improved upper limit. By introducing a simultaneous spectral fit of the signal and background data in the analysis, we succeeded in taking into account systematic errors that could not be evaluated previously in this type of measurement.
NASA Astrophysics Data System (ADS)
Hu, Qing-Qing; Freier, Christian; Leykauf, Bastian; Schkolnik, Vladimir; Yang, Jun; Krutzik, Markus; Peters, Achim
2017-09-01
Precisely evaluating the systematic error induced by the quadratic Zeeman effect is important for developing atom interferometer gravimeters aiming at an accuracy in the μGal regime (1 μGal = 10^-8 m/s^2 ≈ 10^-9 g). This paper reports on the experimental investigation of Raman spectroscopy-based magnetic field measurements and the evaluation of the systematic error in the gravimetric atom interferometer (GAIN) due to the quadratic Zeeman effect. We discuss the dependence of the magnetic field measurement uncertainty on the Raman pulse duration and frequency step size, present the vector and tensor light-shift-induced magnetic field measurement offsets, and map the absolute magnetic field inside the interferometer chamber of GAIN with an uncertainty of 0.72 nT and a spatial resolution of 12.8 mm. We evaluate the quadratic Zeeman-effect-induced gravity measurement error in GAIN as 2.04 μGal. The methods shown in this paper are important for precisely mapping the absolute magnetic field in vacuum and reducing the quadratic Zeeman-effect-induced systematic error in Raman transition-based precision measurements, such as atom interferometer gravimeters.
NASA Technical Reports Server (NTRS)
Sun, Jielun
1993-01-01
Results are presented of a test of the physically based total column water vapor retrieval algorithm of Wentz (1992) for sensitivity to realistic vertical distributions of temperature and water vapor. The ECMWF monthly averaged temperature and humidity fields are used to simulate the spatial pattern of systematic retrieval error of total column water vapor due to this sensitivity. The estimated systematic error is within 0.1 g/sq cm over about 70 percent of the global ocean area; systematic errors greater than 0.3 g/sq cm are expected to exist only over a few well-defined regions, about 3 percent of the global oceans, assuming that the global mean value is unbiased.
Improving Drive Files for Vehicle Road Simulations
NASA Astrophysics Data System (ADS)
Cherng, John G.; Goktan, Ali; French, Mark; Gu, Yi; Jacob, Anil
2001-09-01
Shaker tables are commonly used in laboratories for automotive vehicle component testing to study durability and acoustic performance. An example is development testing of car seats. However, it is difficult to reproduce the measured road data perfectly with the response of a shaker table, as there are basic differences in dynamic characteristics between a flexible vehicle and a substantially rigid shaker table. In addition, there are performance limits in the shaker table drive systems that can limit correlation. In practice, an optimal drive signal for the actuators is created iteratively. During each iteration, the error between the road data and the response data is minimised by an optimising algorithm which is generally part of the feedback loop of the shaker table controller. This study presents a systematic investigation of the errors in the time and frequency domains as well as the joint time-frequency domain, and an evaluation of different digital signal processing techniques that have been used in previous work. In addition, we present an innovative approach that integrates the dynamic characteristics of car seats and the human body into the error-minimising iteration process. We found that the iteration process can be shortened and the error reduced by using a weighting function created by normalising the frequency response function of the car seat. Two road data test sets were used in the study.
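The drive-file iteration described above is, in essence, a frequency-domain update of the drive spectrum by the weighted, FRF-compensated response error. The sketch below shows one such update step; the relaxation factor, the use of the seat FRF as a weighting function, and the variable names are assumptions for illustration, not the exact algorithm of the controller discussed in the text.

```python
import numpy as np

def update_drive(drive, target, response, frf_rig, weight, alpha=0.5):
    """One frequency-domain iteration of drive-file correction.
    drive, target, response: spectra of the current drive, road target, and measured response.
    frf_rig: rig frequency response (response/drive); regularize near-zero values in practice.
    weight:  e.g., the normalized seat frequency response function, emphasizing the
             frequency bands that matter for the seat and occupant.
    alpha:   relaxation factor that keeps the iteration stable."""
    error = target - response
    return drive + alpha * weight * error / frf_rig
```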
A study for systematic errors of the GLA forecast model in tropical regions
NASA Technical Reports Server (NTRS)
Chen, Tsing-Chang; Baker, Wayman E.; Pfaendtner, James; Corrigan, Martin
1988-01-01
From the sensitivity studies performed with the Goddard Laboratory for Atmospheres (GLA) analysis/forecast system, it was revealed that the forecast errors in the tropics affect the ability to forecast midlatitude weather in some cases. Apparently, the forecast errors occurring in the tropics can propagate to midlatitudes. Therefore, the systematic error analysis of the GLA forecast system becomes a necessary step in improving the model's forecast performance. The major effort of this study is to examine the possible impact of the hydrological-cycle forecast error on dynamical fields in the GLA forecast system.
Measurement error is often neglected in medical literature: a systematic review.
Brakenhoff, Timo B; Mitroiu, Marian; Keogh, Ruth H; Moons, Karel G M; Groenwold, Rolf H H; van Smeden, Maarten
2018-06-01
In medical research, covariates (e.g., exposure and confounder variables) are often measured with error. While it is well accepted that this introduces bias and imprecision in exposure-outcome relations, it is unclear to what extent such issues are currently considered in research practice. The objective was to study common practices regarding covariate measurement error via a systematic review of general medicine and epidemiology literature. Original research published in 2016 in 12 high impact journals was full-text searched for phrases relating to measurement error. Reporting of measurement error and methods to investigate or correct for it were quantified and characterized. Two hundred and forty-seven (44%) of the 565 original research publications reported on the presence of measurement error. 83% of these 247 did so with respect to the exposure and/or confounder variables. Only 18 publications (7% of 247) used methods to investigate or correct for measurement error. Consequently, it is difficult for readers to judge the robustness of presented results to the existence of measurement error in the majority of publications in high impact journals. Our systematic review highlights the need for increased awareness about the possible impact of covariate measurement error. Additionally, guidance on the use of measurement error correction methods is necessary. Copyright © 2018 Elsevier Inc. All rights reserved.
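As one example of the correction methods whose scarce use is reported above, regression calibration with replicate measurements replaces an error-prone covariate by its expected value given the measurement. A minimal sketch with hypothetical data is given below; it is not a method attributed to any of the reviewed publications.

```python
import numpy as np

def regression_calibration(w_main, w_rep):
    """Classical regression calibration using replicate measurements.
    w_main: error-prone exposure in the main study (one value per subject).
    w_rep:  replicate measurements (n_subjects x n_replicates)."""
    k = w_rep.shape[1]
    var_means = np.var(np.mean(w_rep, axis=1), ddof=1)   # sigma_X^2 + sigma_U^2 / k
    sigma2_u = np.mean(np.var(w_rep, axis=1, ddof=1))    # within-person error variance
    sigma2_x = max(var_means - sigma2_u / k, 0.0)
    lam = sigma2_x / (sigma2_x + sigma2_u)               # reliability of a single measurement
    mu = np.mean(w_rep)
    return mu + lam * (w_main - mu)                      # estimate of E[X | W]

# Hypothetical illustration
rng = np.random.default_rng(0)
x_true = rng.normal(size=300)
w_main = x_true + rng.normal(scale=0.5, size=300)
w_rep = x_true[:, None] + rng.normal(scale=0.5, size=(300, 2))
x_hat = regression_calibration(w_main, w_rep)
```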
Bao, Guzhi; Wickenbrock, Arne; Rochester, Simon; Zhang, Weiping; Budker, Dmitry
2018-01-19
The nonlinear Zeeman effect can induce splitting and asymmetries of magnetic-resonance lines in the geophysical magnetic-field range. This is a major source of "heading error" for scalar atomic magnetometers. We demonstrate a method to suppress the nonlinear Zeeman effect and heading error based on spin locking. In an all-optical synchronously pumped magnetometer with separate pump and probe beams, we apply a radio-frequency field which is in phase with the precessing magnetization. This results in the collapse of the multicomponent asymmetric magnetic-resonance line with ∼100 Hz width in the Earth-field range into a single peak with a width of 22 Hz, whose position is largely independent of the orientation of the sensor within a range of orientation angles. The technique is expected to be broadly applicable in practical magnetometry, potentially boosting the sensitivity and accuracy of Earth-surveying magnetometers by increasing the magnetic-resonance amplitude, decreasing its width, and removing the important and limiting heading-error systematic.
NASA Astrophysics Data System (ADS)
Liu, Zhixiang; Xing, Tingwen; Jiang, Yadong; Lv, Baobin
2018-02-01
A two-dimensional (2-D) shearing interferometer based on an amplitude chessboard grating was designed to measure the wavefront aberration of a high numerical-aperture (NA) objective. Chessboard gratings offer better diffraction efficiencies and fewer disturbing diffraction orders than traditional cross gratings. The wavefront aberration of the tested objective was retrieved from the shearing interferogram using the Fourier transform and differential Zernike polynomial-fitting methods. Grating manufacturing errors, including the duty-cycle and pattern-deviation errors, were analyzed with the Fourier transform method. Then, according to the relation between the spherical pupil and planar detector coordinates, the influence of the distortion of the pupil coordinates was simulated. Finally, the systematic error attributable to grating alignment errors was deduced through the geometrical ray-tracing method. Experimental results indicate that the measuring repeatability (3σ) of the wavefront aberration of an objective with NA 0.4 was 3.4 mλ. The systematic-error results were consistent with previous analyses. Thus, the correct wavefront aberration can be obtained after calibration.
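The differential Zernike polynomial fit referred to above can be sketched as an ordinary least-squares problem in which the basis functions are differences of Zernike terms evaluated at the sheared and unsheared pupil coordinates. The snippet below uses only a few low-order Cartesian Zernike terms and hypothetical variable names; it omits normalization and the Fourier-transform phase extraction that precedes this step.

```python
import numpy as np

# A few low-order Zernike terms in Cartesian form (normalization omitted)
zernike_terms = [
    lambda x, y: x,                      # tilt x
    lambda x, y: y,                      # tilt y
    lambda x, y: 2 * (x**2 + y**2) - 1,  # defocus
    lambda x, y: x**2 - y**2,            # astigmatism 0/90
    lambda x, y: 2 * x * y,              # astigmatism 45
]

def fit_differential_zernike(x, y, dWx, dWy, shear):
    """Least-squares Zernike coefficients from the two sheared difference wavefronts
    dWx = W(x+s, y) - W(x, y) and dWy = W(x, y+s) - W(x, y), sampled at pupil points (x, y)."""
    cols_x = [z(x + shear, y) - z(x, y) for z in zernike_terms]
    cols_y = [z(x, y + shear) - z(x, y) for z in zernike_terms]
    A = np.vstack([np.column_stack(cols_x), np.column_stack(cols_y)])
    b = np.concatenate([dWx, dWy])
    coeffs, *_ = np.linalg.lstsq(A, b, rcond=None)
    return coeffs
```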
NASA Astrophysics Data System (ADS)
Johnson, Traci L.; Sharon, Keren
2016-11-01
Until now, systematic errors in strong gravitational lens modeling have been acknowledged but have never been fully quantified. Here, we launch an investigation into the systematics induced by constraint selection. We model the simulated cluster Ares 362 times using random selections of image systems with and without spectroscopic redshifts and quantify the systematics using several diagnostics: image predictability, accuracy of model-predicted redshifts, enclosed mass, and magnification. We find that for models with >15 image systems, the image plane rms does not decrease significantly when more systems are added; however, the rms values quoted in the literature may be misleading as to the ability of a model to predict new multiple images. The mass is well constrained near the Einstein radius in all cases, and systematic error drops to <2% for models using >10 image systems. Magnification errors are smallest along the straight portions of the critical curve, and the value of the magnification is systematically lower near curved portions. For >15 systems, the systematic error on magnification is ∼2%. We report no trend in magnification error with the fraction of spectroscopic image systems when selecting constraints at random; however, when using the same selection of constraints, increasing this fraction up to ∼0.5 will increase model accuracy. The results suggest that the selection of constraints, rather than quantity alone, determines the accuracy of the magnification. We note that spectroscopic follow-up of at least a few image systems is crucial because models without any spectroscopic redshifts are inaccurate across all of our diagnostics.
Correcting systematic errors in high-sensitivity deuteron polarization measurements
NASA Astrophysics Data System (ADS)
Brantjes, N. P. M.; Dzordzhadze, V.; Gebel, R.; Gonnella, F.; Gray, F. E.; van der Hoek, D. J.; Imig, A.; Kruithof, W. L.; Lazarus, D. M.; Lehrach, A.; Lorentz, B.; Messi, R.; Moricciani, D.; Morse, W. M.; Noid, G. A.; Onderwater, C. J. G.; Özben, C. S.; Prasuhn, D.; Levi Sandri, P.; Semertzidis, Y. K.; da Silva e Silva, M.; Stephenson, E. J.; Stockhorst, H.; Venanzoni, G.; Versolato, O. O.
2012-02-01
This paper reports deuteron vector and tensor beam polarization measurements taken to investigate the systematic variations due to geometric beam misalignments and high data rates. The experiments used the In-Beam Polarimeter at the KVI-Groningen and the EDDA detector at the Cooler Synchrotron COSY at Jülich. By measuring with very high statistical precision, the contributions that are second-order in the systematic errors become apparent. By calibrating the sensitivity of the polarimeter to such errors, it becomes possible to obtain information from the raw count rate values on the size of the errors and to use this information to correct the polarization measurements. During the experiment, it was possible to demonstrate that corrections were satisfactory at the level of 10^-5 for deliberately large errors. This may facilitate the real time observation of vector polarization changes smaller than 10^-6 in a search for an electric dipole moment using a storage ring.
A study to assess the long-term stability of the ionization chamber reference system in the LNMRI
NASA Astrophysics Data System (ADS)
Trindade Filho, O. L.; Conceição, D. A.; da Silva, C. J.; Delgado, J. U.; de Oliveira, A. E.; Iwahara, A.; Tauhata, L.
2018-03-01
Ionization chambers are used as secondary standards in order to maintain the calibration factors of radionuclides for activity measurements in metrology laboratories. Although they are used as radionuclide calibrators in nuclear medicine clinics to control the dose delivered to patients, their long-term performance is not evaluated systematically. Here, a methodology for evaluating long-term stability is applied and checked. Historical data produced monthly from 2012 until 2017 by an ionization chamber, an electrometer, and a 226Ra check source were analyzed via control charts in order to follow the long-term performance. The monitored systematic errors were consistent within the control limits, demonstrating the quality of the measurements in compliance with ISO 17025.
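A long-term stability check of this kind is often implemented as an individuals/moving-range control chart on the monthly check-source readings. The sketch below is a generic version with synthetic data, not the laboratory's actual charting procedure or control rules.

```python
import numpy as np

def control_limits(readings):
    """Centre line and 3-sigma limits of an individuals chart, with sigma estimated
    from the average moving range (d2 = 1.128 for subgroups of size 2)."""
    readings = np.asarray(readings, dtype=float)
    centre = readings.mean()
    sigma = np.mean(np.abs(np.diff(readings))) / 1.128
    return centre, centre - 3 * sigma, centre + 3 * sigma

# Synthetic monthly ionization-chamber readings (arbitrary units)
rng = np.random.default_rng(1)
readings = 100 + rng.normal(scale=0.4, size=60)
centre, lcl, ucl = control_limits(readings)
out_of_control = (readings < lcl) | (readings > ucl)
```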
TRAINING ERRORS AND RUNNING RELATED INJURIES: A SYSTEMATIC REVIEW
Buist, Ida; Sørensen, Henrik; Lind, Martin; Rasmussen, Sten
2012-01-01
Purpose: The purpose of this systematic review was to examine the link between training characteristics (volume, duration, frequency, and intensity) and running related injuries. Methods: A systematic search was performed in PubMed, Web of Science, Embase, and SportDiscus. Studies were included if they examined novice, recreational, or elite runners between the ages of 18 and 65. Exposure variables were training characteristics defined as volume, distance or mileage, time or duration, frequency, intensity, speed or pace, or similar terms. The outcome of interest was Running Related Injuries (RRI) in general or specific RRI in the lower extremity or lower back. Methodological quality was evaluated using quality assessment tools of 11 to 16 items. Results: After examining 4561 titles and abstracts, 63 articles were identified as potentially relevant. Finally, nine retrospective cohort studies, 13 prospective cohort studies, six case-control studies, and three randomized controlled trials were included. The mean quality score was 44.1%. Conflicting results were reported on the relationships between volume, duration, intensity, and frequency and RRI. Conclusion: It was not possible to identify which training errors were related to running related injuries. Still, well supported data on which training errors relate to or cause running related injuries is highly important for determining proper prevention strategies. If methodological limitations in measuring training variables can be resolved, more work can be conducted to define training and the interactions between different training variables, create several hypotheses, test the hypotheses in a large scale prospective study, and explore cause and effect relationships in randomized controlled trials. Level of evidence: 2a PMID:22389869
Du, Jiaying; Gerdtman, Christer; Lindén, Maria
2018-04-06
Motion sensors such as MEMS gyroscopes and accelerometers are characterized by a small size, light weight, high sensitivity, and low cost. They are used in an increasing number of applications. However, they are easily influenced by environmental effects such as temperature change, shock, and vibration. Thus, signal processing is essential for minimizing errors and improving signal quality and system stability. The aim of this work is to investigate and present a systematic review of different signal error reduction algorithms that are used for MEMS gyroscope-based motion analysis systems for human motion analysis or have the potential to be used in this area. A systematic search was performed with the search engines/databases of the ACM Digital Library, IEEE Xplore, PubMed, and Scopus. Sixteen papers that focus on MEMS gyroscope-related signal processing and were published in journals or conference proceedings in the past 10 years were found and fully reviewed. Seventeen algorithms were categorized into four main groups: Kalman-filter-based algorithms, adaptive-based algorithms, simple filter algorithms, and compensation-based algorithms. The algorithms were analyzed and presented along with their characteristics such as advantages, disadvantages, and time limitations. A user guide to the most suitable signal processing algorithms within this area is presented.
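Of the four algorithm groups identified above, the Kalman-filter family is the most commonly cited; a minimal scalar example for denoising a gyroscope rate signal is sketched below. The random-walk state model and the noise variances are illustrative assumptions, not parameters from any of the reviewed papers.

```python
import numpy as np

def kalman_denoise(z, q=1e-4, r=1e-2):
    """Minimal scalar Kalman filter treating the true angular rate as a random walk.
    z: noisy gyroscope samples; q: process-noise variance; r: measurement-noise variance."""
    z = np.asarray(z, dtype=float)
    x, p = z[0], 1.0
    out = np.empty_like(z)
    for k, zk in enumerate(z):
        p = p + q                     # predict
        gain = p / (p + r)            # update
        x = x + gain * (zk - x)
        p = (1.0 - gain) * p
        out[k] = x
    return out
```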
Mossburg, Sarah E; Dennison Himmelfarb, Cheryl
2018-06-25
In the last 20 years, there have been numerous successful efforts to improve patient safety, although recent research still shows a significant gap. Researchers have begun exploring the impact of individual-level factors on patient safety culture and safety outcomes. This review examines the state of the science exploring the impact of professional burnout and engagement on patient safety culture and safety outcomes. A systematic search was conducted in CINAHL, PubMed, and Embase. Included studies reported on the relationships among burnout or engagement and safety culture or safety outcomes. Twenty-two studies met inclusion criteria. Ten studies showed a relationship between burnout and both safety culture and clinical errors. Two of 3 studies reported an association between burnout and patient outcomes. Fewer studies focused on engagement. Most studies exploring engagement and safety culture found a moderately strong positive association. The limited evidence on the relationship between engagement and errors depicts inconsistent findings. Only one study explored engagement and patient outcomes, and it failed to find a relationship. The burnout/safety literature should be expanded to a multidisciplinary focus. Mixed results on the relationship between burnout and errors could be due to a disparate relationship with perceived versus observed errors. The engagement/safety literature is immature, although high engagement seems to be associated with high safety culture. Extending this science into safety outcomes would be meaningful, especially in light of the recent focus on an abundance-based approach to safety.
Tully, Mary P; Ashcroft, Darren M; Dornan, Tim; Lewis, Penny J; Taylor, David; Wass, Val
2009-01-01
Prescribing errors are common; they result in adverse events and harm to patients, and it is unclear how best to prevent them because recommendations are more often based on surmised rather than empirically collected data. The aim of this systematic review was to identify all informative published evidence concerning the causes of and factors associated with prescribing errors in specialist and non-specialist hospitals, collate it, analyse it qualitatively and synthesize conclusions from it. Seven electronic databases were searched for articles published between 1985 and July 2008. The reference lists of all informative studies were searched for additional citations. To be included, a study had to be of handwritten prescriptions for adult or child inpatients that reported empirically collected data on the causes of or factors associated with errors. Publications in languages other than English and studies that evaluated errors for only one disease, one route of administration or one type of prescribing error were excluded. Seventeen papers reporting 16 studies, selected from 1268 papers identified by the search, were included in the review. Studies from the US and the UK in university-affiliated hospitals predominated (10/16 [62%]). The definition of a prescribing error varied widely and the included studies were highly heterogeneous. Causes were grouped according to Reason's model of accident causation into active failures, error-provoking conditions and latent conditions. The active failure most frequently cited was a mistake due to inadequate knowledge of the drug or the patient. Skills-based slips and memory lapses were also common. Where error-provoking conditions were reported, there was at least one per error. These included lack of training or experience, fatigue, stress, high workload for the prescriber and inadequate communication between healthcare professionals. Latent conditions included reluctance to question senior colleagues and inadequate provision of training. Prescribing errors are often multifactorial, with several active failures and error-provoking conditions often acting together to cause them. In the face of such complexity, solutions addressing a single cause, such as lack of knowledge, are likely to have only limited benefit. Further rigorous study, seeking potential ways of reducing error, needs to be conducted. Multifactorial interventions across many parts of the system are likely to be required.
ERIC Educational Resources Information Center
Py, Bernard
A progress report is presented of a study which applies a system of generative grammar to error analysis. The objective of the study was to reconstruct the grammar of students' interlanguage, using a systematic analysis of errors. (Interlanguage refers to the linguistic competence of a student who possesses a relatively systematic body of rules,…
Evaluation and Application of Satellite-Based Latent Heating Profile Estimation Methods
NASA Technical Reports Server (NTRS)
Olson, William S.; Grecu, Mircea; Yang, Song; Tao, Wei-Kuo
2004-01-01
In recent years, methods for estimating atmospheric latent heating vertical structure from both passive and active microwave remote sensing have matured to the point where quantitative evaluation of these methods is the next logical step. Two approaches for heating algorithm evaluation are proposed: First, application of heating algorithms to synthetic data, based upon cloud-resolving model simulations, can be used to test the internal consistency of heating estimates in the absence of systematic errors in physical assumptions. Second, comparisons of satellite-retrieved vertical heating structures to independent ground-based estimates, such as rawinsonde-derived analyses of heating, provide an additional test. The two approaches are complementary, since systematic errors in heating indicated by the second approach may be confirmed by the first. A passive microwave and combined passive/active microwave heating retrieval algorithm are evaluated using the described approaches. In general, the passive microwave algorithm heating profile estimates are subject to biases due to the limited vertical heating structure information contained in the passive microwave observations. These biases may be partly overcome by including more environment-specific a priori information into the algorithm's database of candidate solution profiles. The combined passive/active microwave algorithm utilizes the much higher-resolution vertical structure information provided by spaceborne radar data to produce less biased estimates; however, the global spatio-temporal sampling by spaceborne radar is limited. In the present study, the passive/active microwave algorithm is used to construct a more physically-consistent and environment-specific set of candidate solution profiles for the passive microwave algorithm and to help evaluate errors in the passive algorithm's heating estimates. Although satellite estimates of latent heating are based upon instantaneous, footprint-scale data, suppression of random errors requires averaging to at least half-degree resolution. Analysis of mesoscale and larger space-time scale phenomena based upon passive and passive/active microwave heating estimates from TRMM, SSMI, and AMSR data will be presented at the conference.
Thirty Years of Improving the NCEP Global Forecast System
NASA Astrophysics Data System (ADS)
White, G. H.; Manikin, G.; Yang, F.
2014-12-01
Current eight day forecasts by the NCEP Global Forecast System are as accurate as five day forecasts 30 years ago. This revolution in weather forecasting reflects increases in computer power, improvements in the assimilation of observations, especially satellite data, improvements in model physics, improvements in observations and international cooperation and competition. One important component has been and is the diagnosis, evaluation and reduction of systematic errors. The effect of proposed improvements in the GFS on systematic errors is one component of the thorough testing of such improvements by the Global Climate and Weather Modeling Branch. Examples of reductions in systematic errors in zonal mean temperatures and winds and other fields will be presented. One challenge in evaluating systematic errors is uncertainty in what reality is. Model initial states can be regarded as the best overall depiction of the atmosphere, but can be misleading in areas of few observations or for fields not well observed such as humidity or precipitation over the oceans. Verification of model physics is particularly difficult. The Environmental Modeling Center emphasizes the evaluation of systematic biases against observations. Recently EMC has placed greater emphasis on synoptic evaluation and on precipitation, 2-meter temperatures and dew points and 10 meter winds. A weekly EMC map discussion reviews the performance of many models over the United States and has helped diagnose and alleviate significant systematic errors in the GFS, including a near surface summertime evening cold wet bias over the eastern US and a multi-week period when the GFS persistently developed bogus tropical storms off Central America. The GFS exhibits a wet bias for light rain and a dry bias for moderate to heavy rain over the continental United States. Significant changes to the GFS are scheduled to be implemented in the fall of 2014. These include higher resolution, improved physics and improvements to the assimilation. These changes significantly improve the tropospheric flow and reduce a tropical upper tropospheric warm bias. One important error remaining is the failure of the GFS to maintain deep convection over Indonesia and in the tropical west Pacific. This and other current systematic errors will be presented.
Binny, Diana; Lancaster, Craig M; Trapp, Jamie V; Crowe, Scott B
2017-09-01
This study utilizes process control techniques to identify action limits for TomoTherapy couch positioning quality assurance tests. A test was introduced to monitor accuracy of the applied couch offset detection in the TomoTherapy Hi-Art treatment system using the TQA "Step-Wedge Helical" module and MVCT detector. Individual X-charts, process capability (cp), probability (P), and acceptability (cpk) indices were used to monitor 4 years of couch IEC offset data to detect systematic and random errors in the couch positional accuracy for different action levels. Process capability tests were also performed on the retrospective data to define tolerances based on user-specified levels. A second study was carried out whereby physical couch offsets were applied using the TQA module and the MVCT detector was used to detect the observed variations. Random and systematic variations were observed for the SPC-based upper and lower control limits, and investigations were carried out to maintain the ongoing stability of the process for a 4-year and a three-monthly period. Local trend analysis showed mean variations up to ±0.5 mm in the three-monthly analysis period for all IEC offset measurements. Variations were also observed in the detected versus applied offsets using the MVCT detector in the second study, largely in the vertical direction, and actions were taken to remediate this error. Based on the results, it was recommended that imaging shifts in each coordinate direction be only applied after assessing the machine for applied versus detected test results using the step helical module. User-specified tolerance levels of at least ±2 mm were recommended for a test frequency of once every 3 months to improve couch positional accuracy. SPC enables detection of systematic variations prior to reaching machine tolerance levels. Couch encoding system recalibrations reduced variations to user-specified levels and a monitoring period of 3 months using SPC facilitated detection of systematic and random variations. SPC analysis for couch positional accuracy enabled greater control in the identification of errors, thereby increasing confidence levels in daily treatment setups. © 2017 Royal Brisbane and Women's Hospital, Metro North Hospital and Health Service. Journal of Applied Clinical Medical Physics published by Wiley Periodicals, Inc. on behalf of American Association of Physicists in Medicine.
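For readers who want to reproduce this kind of analysis, the sketch below shows one conventional way to derive individuals (X) chart control limits and the cp/cpk indices from a series of couch-offset measurements; the moving-range constant, the tolerance values and the simulated data are illustrative assumptions, not values from the study.

```python
import numpy as np

def xchart_limits(x):
    """Individuals (X) chart limits from the average moving range (a common SPC convention)."""
    x = np.asarray(x, dtype=float)
    mr = np.abs(np.diff(x))                 # moving ranges of successive measurements
    sigma_est = mr.mean() / 1.128           # d2 = 1.128 for a moving range of size 2
    center = x.mean()
    return center - 3 * sigma_est, center, center + 3 * sigma_est

def capability(x, lsl, usl):
    """Process capability (cp) and acceptability (cpk) against specification limits."""
    x = np.asarray(x, dtype=float)
    mu, s = x.mean(), x.std(ddof=1)
    cp = (usl - lsl) / (6 * s)
    cpk = min(usl - mu, mu - lsl) / (3 * s)
    return cp, cpk

# Example: simulated couch vertical offsets (cm) against a hypothetical +/-2 mm tolerance
rng = np.random.default_rng(1)
offsets = rng.normal(0.02, 0.05, 200)       # cm
print(xchart_limits(offsets))
print(capability(offsets, lsl=-0.2, usl=0.2))
```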
NASA Astrophysics Data System (ADS)
Adelberger, E. G.; Stubbs, C. W.; Heckel, B. R.; Su, Y.; Swanson, H. E.; Smith, G.; Gundlach, J. H.; Rogers, W. F.
1990-11-01
A sensitive, systematic search for feeble, macroscopic forces arising from the exchange of hypothetical ultra-low-mass bosons was made by observing the differential acceleration of two different test body pairs toward two different sources. Our differential accelerometer, a highly symmetric, continuously rotating torsion balance, incorporated several innovations that effectively suppressed systematic errors. All known sources of systematic error were demonstrated to be negligible in comparison to our fluctuating errors, which are roughly 7 times larger than the fundamental limit set by the fact that we observe an oscillator at room temperature with a given damping time. Our 1σ limits on the horizontal differential acceleration of Be/Al or Be/Cu test body pairs in the field of the Earth, Δa⊥ = (2.1+/-2.1)×10^-11 cm s^-2 and Δa⊥ = (0.8+/-1.7)×10^-11 cm s^-2, respectively, set improved bounds on Yukawa interactions mediated by bosons with masses ranging between m_b c^2 ≈ 3×10^-18 eV and m_b c^2 ≈ 1×10^-6 eV. For example, our constraints on infinite-range vector interactions with charges of B and of B-L are roughly 10 and 2 times more sensitive than those obtained by Roll, Krotkov, and Dicke using the field of the Sun. Furthermore we set stringent constraints down to λ = 1 m, while those of solar experiments are weak for λ < 1 AU. In terms of the weak equivalence principle in the field of the Earth, our 1σ result corresponds to m_i/m_g(Cu) - m_i/m_g(Be) = (0.2+/-1.0)×10^-11. Our results also yield stringent constraints on the nonsymmetric gravitation theory of Moffat and on the anomalous acceleration of antimatter in proposed "quantum gravity" models, and have implications for lunar-ranging tests of the strong equivalence principle. Our 1σ limit on the differential acceleration of Be/Al test body pairs toward a 1.5 Mg Pb laboratory source, Δa = (-0.15+/-1.31)×10^-10 cm s^-2, provides constraints on Yukawa interactions with ranges down to 10 cm, and on interactions whose charge is B-2L.
NASA Astrophysics Data System (ADS)
Bertin, Stephane; Friedrich, Heide; Delmas, Patrice; Chan, Edwin; Gimel'farb, Georgy
2015-03-01
Grain-scale monitoring of fluvial morphology is important for the evaluation of river system dynamics. Significant progress in remote sensing and computer performance allows rapid high-resolution data acquisition; however, applications in fluvial environments remain challenging. Even in a controlled environment, such as a laboratory, the extensive acquisition workflow is prone to the propagation of errors in digital elevation models (DEMs). This is valid for both of the common surface recording techniques: digital stereo photogrammetry and terrestrial laser scanning (TLS). The optimisation of the acquisition process, an effective way to reduce the occurrence of errors, is generally limited by the use of commercial software. Therefore, the removal of evident blunders during post-processing is regarded as standard practice, although this may introduce new errors. This paper presents a detailed evaluation of a digital stereo-photogrammetric workflow developed for fluvial hydraulic applications. The introduced workflow is user-friendly and can be adapted to various close-range measurements: imagery is acquired with two Nikon D5100 cameras and processed using non-proprietary "on-the-job" calibration and dense scanline-based stereo matching algorithms. Novel ground truth evaluation studies were designed to identify the DEM errors, which resulted from a combination of calibration errors, inaccurate image rectifications and stereo-matching errors. To ensure optimum DEM quality, we show that systematic DEM errors must be minimised by ensuring a good distribution of control points throughout the image format during calibration. DEM quality is then largely dependent on the imagery utilised. We evaluated the open access multi-scale Retinex algorithm to facilitate the stereo matching, and quantified its influence on DEM quality. Occlusions, inherent to any roughness element, are still a major limiting factor to DEM accuracy. We show that a careful selection of the camera-to-object and baseline distance reduces errors in occluded areas and that realistic ground truths help to quantify those errors.
Within-Tunnel Variations in Pressure Data for Three Transonic Wind Tunnels
NASA Technical Reports Server (NTRS)
DeLoach, Richard
2014-01-01
This paper compares the results of pressure measurements made on the same test article with the same test matrix in three transonic wind tunnels. A comparison is presented of the unexplained variance associated with polar replicates acquired in each tunnel. The impact of a significant component of systematic (not random) unexplained variance is reviewed, and the results of analyses of variance are presented to assess the degree of significant systematic error in these representative wind tunnel tests. Total uncertainty estimates are reported for 140 samples of pressure data, quantifying the effects of within-polar random errors and between-polar systematic bias errors.
The Origin of Systematic Errors in the GCM Simulation of ITCZ Precipitation
NASA Technical Reports Server (NTRS)
Chao, Winston C.; Suarez, M. J.; Bacmeister, J. T.; Chen, B.; Takacs, L. L.
2006-01-01
Previous GCM studies have found that the systematic errors in the GCM simulation of the seasonal mean ITCZ intensity and location could be substantially corrected by adding a suitable amount of rain re-evaporation or cumulus momentum transport. However, the reasons for these systematic errors, and for why these corrections work, have remained a puzzle. In this work the knowledge gained from previous studies of the ITCZ in an aqua-planet model with zonally uniform SST is applied to solve this puzzle. The solution is supported by further aqua-planet and full model experiments using the latest version of the Goddard Earth Observing System GCM.
Quantifying Errors in TRMM-Based Multi-Sensor QPE Products Over Land in Preparation for GPM
NASA Technical Reports Server (NTRS)
Peters-Lidard, Christa D.; Tian, Yudong
2011-01-01
Determining uncertainties in satellite-based multi-sensor quantitative precipitation estimates over land is of fundamental importance to both data producers and hydroclimatological applications. Evaluating TRMM-era products also lays the groundwork and sets the direction for algorithm and applications development for future missions including GPM. QPE uncertainties result mostly from the interplay of systematic errors and random errors. In this work, we will synthesize our recent results quantifying the error characteristics of satellite-based precipitation estimates. Both systematic errors and total uncertainties have been analyzed for six different TRMM-era precipitation products (3B42, 3B42RT, CMORPH, PERSIANN, NRL and GSMap). For systematic errors, we devised an error decomposition scheme to separate errors in precipitation estimates into three independent components: hit biases, missed precipitation and false precipitation. This decomposition scheme reveals hydroclimatologically-relevant error features and provides a better link to the error sources than conventional analysis, because in the latter these error components tend to cancel one another when aggregated or averaged in space or time. For the random errors, we calculated the measurement spread from the ensemble of these six quasi-independent products, and thus produced a global map of measurement uncertainties. The map yields a global view of the error characteristics and their regional and seasonal variations, reveals many undocumented error features over areas with no validation data available, and provides better guidance to global assimilation of satellite-based precipitation data. Insights gained from these results and how they could help with GPM will be highlighted.
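As a rough illustration of the decomposition described above, the following sketch splits the total bias of a gridded estimate into hit bias, missed precipitation and false precipitation; the rain/no-rain threshold and array layout are assumptions for illustration, not details taken from the TRMM products.

```python
import numpy as np

def decompose_error(est, obs, rain_thresh=0.1):
    """Split the total bias of a precipitation estimate into hit bias, missed
    precipitation and false precipitation relative to a reference field
    (same units, same grid)."""
    est, obs = np.asarray(est, float), np.asarray(obs, float)
    hit   = (est >= rain_thresh) & (obs >= rain_thresh)
    miss  = (est <  rain_thresh) & (obs >= rain_thresh)
    false = (est >= rain_thresh) & (obs <  rain_thresh)
    hit_bias   = np.sum(est[hit] - obs[hit])
    missed     = -np.sum(obs[miss])      # precipitation present but not estimated
    false_prec = np.sum(est[false])      # precipitation estimated but not observed
    total_bias = np.sum(est - obs)       # approx. hit_bias + missed + false_prec
    return hit_bias, missed, false_prec, total_bias
```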
St-Pierre, Corinne; Desmeules, François; Dionne, Clermont E; Frémont, Pierre; MacDermid, Joy C; Roy, Jean-Sébastien
2016-01-01
To conduct a systematic review of the psychometric properties (reliability, validity and responsiveness) of self-report questionnaires used to assess symptoms and functional limitations of individuals with rotator cuff (RC) disorders. A systematic search in three databases (Cinahl, Medline and Embase) was conducted. Data extraction and critical methodological appraisal were performed independently by three raters using structured tools, and agreement was achieved by consensus. A descriptive synthesis was performed. One-hundred and twenty articles reporting on 11 questionnaires were included. All questionnaires were highly reliable and responsive to change, and showed construct validity; seven questionnaires also showed known-group validity. The minimal detectable change ranged from 6.4% to 20.8% of total score; only two questionnaires (American Shoulder and Elbow Surgeons questionnaire [ASES] and Upper Limb Functional Index [ULFI]) had a measurement error below 10% of the global score. Minimal clinically important differences were established for eight questionnaires, and ranged from 8% to 20% of total score. Overall, included questionnaires showed acceptable psychometric properties for individuals with RC disorders. The ASES and ULFI have the smallest absolute error of measurement, while the Western Ontario RC Index (WORC) is one of the most responsive questionnaires for individuals suffering from RC disorders. All included questionnaires are reliable, valid and responsive for the evaluation of individuals with RC disorders. As all included questionnaires showed good psychometric properties for the targeted population, the choice should be made according to the purpose of the evaluation and to the construct being evaluated by the questionnaire. The WORC, an RC-specific questionnaire, appeared to be more responsive. It should therefore be used to evaluate change over time. If the evaluation is time-limited, shorter questionnaires or short versions should be considered (such as Quick DASH or SST).
DOE Office of Scientific and Technical Information (OSTI.GOV)
Johnson, Traci L.; Sharon, Keren, E-mail: tljohn@umich.edu
Until now, systematic errors in strong gravitational lens modeling have been acknowledged but have never been fully quantified. Here, we launch an investigation into the systematics induced by constraint selection. We model the simulated cluster Ares 362 times using random selections of image systems with and without spectroscopic redshifts and quantify the systematics using several diagnostics: image predictability, accuracy of model-predicted redshifts, enclosed mass, and magnification. We find that for models with >15 image systems, the image plane rms does not decrease significantly when more systems are added; however, the rms values quoted in the literature may be misleading as to the ability of a model to predict new multiple images. The mass is well constrained near the Einstein radius in all cases, and systematic error drops to <2% for models using >10 image systems. Magnification errors are smallest along the straight portions of the critical curve, and the value of the magnification is systematically lower near curved portions. For >15 systems, the systematic error on magnification is ∼2%. We report no trend in magnification error with the fraction of spectroscopic image systems when selecting constraints at random; however, when using the same selection of constraints, increasing this fraction up to ∼0.5 will increase model accuracy. The results suggest that the selection of constraints, rather than quantity alone, determines the accuracy of the magnification. We note that spectroscopic follow-up of at least a few image systems is crucial because models without any spectroscopic redshifts are inaccurate across all of our diagnostics.
Yago, Martín
2017-05-01
QC planning based on risk management concepts can reduce the probability of harming patients due to an undetected out-of-control error condition. It does this by selecting appropriate QC procedures to decrease the number of erroneous results reported. The selection can be easily made by using published nomograms for simple QC rules when the out-of-control condition results in increased systematic error. However, increases in random error also occur frequently and are difficult to detect, which can result in erroneously reported patient results. A statistical model was used to construct charts for the 1ks and X/χ2 rules. The charts relate the increase in the number of unacceptable patient results reported due to an increase in random error with the capability of the measurement procedure. They thus allow for QC planning based on the risk of patient harm due to the reporting of erroneous results. 1ks rules are simple, all-around rules. Their ability to deal with increases in within-run imprecision is minimally affected by the possible presence of significant, stable, between-run imprecision. X/χ2 rules perform better when the number of controls analyzed during each QC event is increased to improve QC performance. Using nomograms simplifies the selection of statistical QC procedures to limit the number of erroneous patient results reported due to an increase in analytical random error. The selection largely depends on the presence or absence of stable between-run imprecision. © 2017 American Association for Clinical Chemistry.
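The abstract does not give the underlying formulas, but a minimal sketch of the idea, assuming Gaussian control values and a pure inflation of within-run imprecision, is the rejection probability of a 1ks rule shown below; the parameter choices are illustrative only and do not come from the paper.

```python
from scipy.stats import norm

def p_reject_1ks(k, n_controls, inflation):
    """Probability that a 1_ks rule rejects a run when the within-run SD is inflated
    by the given factor (stable systematic error assumed to be zero)."""
    p_pass_one = norm.cdf(k / inflation) - norm.cdf(-k / inflation)
    return 1.0 - p_pass_one ** n_controls

print(p_reject_1ks(k=3, n_controls=2, inflation=1.0))   # false-rejection rate in control
print(p_reject_1ks(k=3, n_controls=2, inflation=2.0))   # detection of doubled imprecision
```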
Sokolenko, Stanislav; Aucoin, Marc G
2015-09-04
The growing ubiquity of metabolomic techniques has facilitated high frequency time-course data collection for an increasing number of applications. While the concentration trends of individual metabolites can be modeled with common curve fitting techniques, a more accurate representation of the data needs to consider effects that act on more than one metabolite in a given sample. To this end, we present a simple algorithm that uses nonparametric smoothing carried out on all observed metabolites at once to identify and correct systematic error from dilution effects. In addition, we develop a simulation of metabolite concentration time-course trends to supplement available data and explore algorithm performance. Although we focus on nuclear magnetic resonance (NMR) analysis in the context of cell culture, a number of possible extensions are discussed. Realistic metabolic data was successfully simulated using a 4-step process. Starting with a set of metabolite concentration time-courses from a metabolomic experiment, each time-course was classified as either increasing, decreasing, concave, or approximately constant. Trend shapes were simulated from generic functions corresponding to each classification. The resulting shapes were then scaled to simulated compound concentrations. Finally, the scaled trends were perturbed using a combination of random and systematic errors. To detect systematic errors, a nonparametric fit was applied to each trend and percent deviations calculated at every timepoint. Systematic errors could be identified at time-points where the median percent deviation exceeded a threshold value, determined by the choice of smoothing model and the number of observed trends. Regardless of model, increasing the number of observations over a time-course resulted in more accurate error estimates, although the improvement was not particularly large between 10 and 20 samples per trend. The presented algorithm was able to identify systematic errors as small as 2.5 % under a wide range of conditions. Both the simulation framework and error correction method represent examples of time-course analysis that can be applied to further developments in (1)H-NMR methodology and the more general application of quantitative metabolomics.
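A minimal sketch of the detection step described above is given below, with spline smoothing standing in for the paper's nonparametric smoother; the array layout, threshold value and use of SciPy's UnivariateSpline are our assumptions for illustration.

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

def flag_systematic_timepoints(time, conc, threshold=2.5):
    """Flag timepoints whose median percent deviation from per-metabolite smooths
    exceeds a threshold (%), as a proxy for sample-wide effects such as dilution.
    `conc` is an (n_timepoints, n_metabolites) array of concentrations."""
    time = np.asarray(time, float)
    conc = np.asarray(conc, float)
    dev = np.empty_like(conc)
    for j in range(conc.shape[1]):
        fit = UnivariateSpline(time, conc[:, j], k=3)(time)   # smooth trend per metabolite
        dev[:, j] = 100.0 * (conc[:, j] - fit) / fit
    median_dev = np.median(dev, axis=1)        # deviation shared across all metabolites
    flagged = np.where(np.abs(median_dev) > threshold)[0]
    return flagged, median_dev
```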
Limitations of the planning organ at risk volume (PRV) concept.
Stroom, Joep C; Heijmen, Ben J M
2006-09-01
Previously, we determined a planning target volume (PTV) margin recipe for geometrical errors in radiotherapy equal to M(T) = 2 Sigma + 0.7 sigma, with Sigma and sigma standard deviations describing systematic and random errors, respectively. In this paper, we investigated margins for organs at risk (OAR), yielding the so-called planning organ at risk volume (PRV). For critical organs with a maximum dose (D(max)) constraint, we calculated margins such that D(max) in the PRV is equal to the motion averaged D(max) in the (moving) clinical target volume (CTV). We studied margins for the spinal cord in 10 head-and-neck cases and 10 lung cases, each with two different clinical plans. For critical organs with a dose-volume constraint, we also investigated whether a margin recipe was feasible. For the 20 spinal cords considered, the average margin recipe found was: M(R) = 1.6 Sigma + 0.2 sigma with variations for systematic and random errors of 1.2 Sigma to 1.8 Sigma and -0.2 sigma to 0.6 sigma, respectively. The variations were due to differences in shape and position of the dose distributions with respect to the cords. The recipe also depended significantly on the volume definition of D(max). For critical organs with a dose-volume constraint, the PRV concept appears even less useful because a margin around, e.g., the rectum changes the volume in such a manner that dose-volume constraints stop making sense. The concept of PRV for planning of radiotherapy is of limited use. Therefore, alternative ways should be developed to include geometric uncertainties of OARs in radiotherapy planning.
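The two margin recipes quoted in this abstract are simple enough to encode directly; the helper names below are ours, and the example values are arbitrary.

```python
def ptv_margin(Sigma, sigma):
    """PTV margin recipe quoted above: M_T = 2*Sigma + 0.7*sigma (consistent units)."""
    return 2.0 * Sigma + 0.7 * sigma

def prv_margin(Sigma, sigma):
    """Average PRV recipe reported for the spinal cord: M_R = 1.6*Sigma + 0.2*sigma."""
    return 1.6 * Sigma + 0.2 * sigma

# Example with Sigma = sigma = 3 mm (arbitrary illustration values)
print(ptv_margin(3.0, 3.0), prv_margin(3.0, 3.0))
```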
Park, S B; Kim, H; Yao, M; Ellis, R; Machtay, M; Sohn, J W
2012-06-01
To quantify the systematic error of a Deformable Image Registration (DIR) system and establish a Quality Assurance (QA) procedure. To address the shortfall of the landmark approach, which is only available at significant visible feature points, we adopted a Deformation Vector Map (DVM) comparison approach. We used two CT image sets (R and T image sets) taken for the same patient at different times and generated a DVM, which includes the DIR systematic error. The DVM was calculated using a fine-tuned B-Spline DIR and an L-BFGS optimizer. By utilizing this DVM we generated the R' image set, eliminating the systematic error in the DVM. Thus, we have a truth data set (the R' and T image sets) and the truth DVM. To test a DIR system, we register the R' and T image sets with that system and compare the test DVM to the truth DVM. If there is no systematic error, they should be identical. We built a Deformation Error Histogram (DEH) for quantitative analysis. The test registration was performed with an in-house B-Spline DIR system using a stochastic gradient descent optimizer. Our example data set was generated with a head and neck patient case. We also tested CT to CBCT deformable registration. We found that skin regions which interface with air have relatively larger errors. Also mobile joints such as shoulders had larger errors. Average errors for ROIs were as follows: CTV: 0.4 mm, Brain stem: 1.4 mm, Shoulders: 1.6 mm, and Normal tissues: 0.7 mm. We succeeded in building the DEH approach to quantify the DVM uncertainty. Our data sets are available for testing other systems on our web page. Utilizing DEH, users can decide how much systematic error they would accept. DEH and our data can be a tool for an AAPM task group to compose a DIR system QA guideline. This project is partially supported by the Agency for Healthcare Research and Quality (AHRQ) grant 1R18HS017424-01A2. © 2012 American Association of Physicists in Medicine.
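A hedged sketch of the DEH idea follows, assuming the DVMs are stored as voxel-wise displacement arrays; the storage format and bin count are our assumptions, not details from the study.

```python
import numpy as np

def deformation_error_histogram(dvm_test, dvm_truth, bins=50):
    """Build a Deformation Error Histogram (DEH): the distribution of vector-magnitude
    differences between a test DVM and the truth DVM, both given as arrays of shape
    (nx, ny, nz, 3) with displacements in mm on the same voxel grid."""
    err = np.linalg.norm(np.asarray(dvm_test, float) - np.asarray(dvm_truth, float), axis=-1)
    hist, edges = np.histogram(err.ravel(), bins=bins)
    return hist, edges, err.mean(), err.max()
```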
A review of setup error in supine breast radiotherapy using cone-beam computed tomography
DOE Office of Scientific and Technical Information (OSTI.GOV)
Batumalai, Vikneswary, E-mail: Vikneswary.batumalai@sswahs.nsw.gov.au; Liverpool and Macarthur Cancer Therapy Centres, New South Wales; Ingham Institute of Applied Medical Research, Sydney, New South Wales
2016-10-01
Measurement of setup error in breast radiotherapy (RT) with 3-dimensional cone-beam computed tomography (CBCT) is becoming more common. The purpose of this study is to review the literature relating to the magnitude of setup error in breast RT measured with CBCT. The different methods of image registration between CBCT and the planning computed tomography (CT) scan were also explored. A literature search, not limited by date, was conducted using Medline and Google Scholar with the following key words: breast cancer, RT, setup error, and CBCT. This review includes studies that reported on systematic and random errors, and the methods used when registering CBCT scans with the planning CT scan. A total of 11 relevant studies were identified for inclusion in this review. The average magnitude of error is generally less than 5 mm across a number of studies reviewed. The common registration methods used when registering CBCT scans with the planning CT scan are based on bony anatomy, soft tissue, and surgical clips. No clear relationships between the setup errors detected and methods of registration were observed from this review. Further studies are needed to assess the benefit of CBCT over electronic portal imaging, as CBCT remains unproven to be of wide benefit in breast RT.
Quantifying errors without random sampling.
Phillips, Carl V; LaPole, Luwanna M
2003-06-12
All quantifications of mortality, morbidity, and other health measures involve numerous sources of error. The routine quantification of random sampling error makes it easy to forget that other sources of error can and should be quantified. When a quantification does not involve sampling, error is almost never quantified and results are often reported in ways that dramatically overstate their precision. We argue that the precision implicit in typical reporting is problematic and sketch methods for quantifying the various sources of error, building up from simple examples that can be solved analytically to more complex cases. There are straightforward ways to partially quantify the uncertainty surrounding a parameter that is not characterized by random sampling, such as limiting reported significant figures. We present simple methods for doing such quantifications, and for incorporating them into calculations. More complicated methods become necessary when multiple sources of uncertainty must be combined. We demonstrate that Monte Carlo simulation, using available software, can estimate the uncertainty resulting from complicated calculations with many sources of uncertainty. We apply the method to the current estimate of the annual incidence of foodborne illness in the United States. Quantifying uncertainty from systematic errors is practical. Reporting this uncertainty would more honestly represent study results, help show the probability that estimated values fall within some critical range, and facilitate better targeting of further research.
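As a toy illustration of the Monte Carlo approach the authors advocate, the sketch below propagates three hypothetical, non-sampling uncertainty sources through a simple incidence calculation; the quantities, distributions and parameter values are invented for illustration and are not the foodborne-illness inputs used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Hypothetical model: incidence = cases_reported * underreporting_factor / population,
# with each input given a distribution expressing its non-sampling uncertainty.
cases_reported = rng.normal(10_000, 500, n)          # counting/recording error
underreporting = rng.triangular(1.5, 2.0, 3.0, n)    # expert judgment on unreported fraction
population     = rng.normal(5.0e6, 1.0e5, n)         # census uncertainty

incidence_per_person = cases_reported * underreporting / population
print(np.percentile(incidence_per_person, [2.5, 50, 97.5]))   # uncertainty interval
```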
Kaye, Stephen B
2009-04-01
To provide a scalar measure of refractive error, based on geometric lens power through principal, orthogonal and oblique meridians, that is not limited to the paraxial and sag height approximations. A function is derived to model sections through the principal meridian of a lens, followed by rotation of the section through orthogonal and oblique meridians. Average focal length is determined using the definition for the average of a function. Average univariate power in the principal meridian (including spherical aberration), can be computed from the average of a function over the angle of incidence as determined by the parameters of the given lens, or adequately computed from an integrated series function. Average power through orthogonal and oblique meridians, can be similarly determined using the derived formulae. The widely used computation for measuring refractive error, the spherical equivalent, introduces non-constant approximations, leading to a systematic bias. The equations proposed provide a good univariate representation of average lens power and are not subject to a systematic bias. They are particularly useful for the analysis of aggregate data, correlating with biological treatment variables and for developing analyses, which require a scalar equivalent representation of refractive power.
Wuellner, Sara E.; Bonauto, David K.
2016-01-01
Background Little research has been done to identify reasons employers fail to report some injuries and illnesses in the Bureau of Labor Statistics Survey of Occupational Injuries and Illnesses (SOII). Methods We interviewed the 2012 Washington SOII respondents from establishments that had failed to report one or more eligible workers’ compensation claims in the SOII about their reasons for not reporting specific claims. Qualitative content analysis methods were used to identify themes and patterns in the responses. Results Non‐compliance with OSHA recordkeeping or SOII reporting instructions and data entry errors led to unreported claims. Some employers refused to include claims because they did not consider the injury to be work‐related, despite workers’ compensation eligibility. Participant responses brought the SOII eligibility of some claims into question. Conclusion Systematic and non‐systematic errors lead to SOII underreporting. Insufficient recordkeeping systems and limited knowledge of reporting requirements are barriers to accurate workplace injury records. Am. J. Ind. Med. 59:343–356, 2016. © 2016 The Authors. American Journal of Industrial Medicine Published by Wiley Periodicals, Inc. PMID:26970051
The Effect of Systematic Error in Forced Oscillation Testing
NASA Technical Reports Server (NTRS)
Williams, Brianne Y.; Landman, Drew; Flory, Isaac L., IV; Murphy, Patrick C.
2012-01-01
One of the fundamental problems in flight dynamics is the formulation of aerodynamic forces and moments acting on an aircraft in arbitrary motion. Classically, conventional stability derivatives are used for the representation of aerodynamic loads in the aircraft equations of motion. However, for modern aircraft with highly nonlinear and unsteady aerodynamic characteristics undergoing maneuvers at high angle of attack and/or angular rates the conventional stability derivative model is no longer valid. Attempts to formulate aerodynamic model equations with unsteady terms are based on several different wind tunnel techniques: for example, captive, wind tunnel single degree-of-freedom, and wind tunnel free-flying techniques. One of the most common techniques is forced oscillation testing. However, the forced oscillation testing method does not address the systematic and systematic correlation errors from the test apparatus that cause inconsistencies in the measured oscillatory stability derivatives. The primary objective of this study is to identify the possible sources and magnitude of systematic error in representative dynamic test apparatuses. Sensitivities of the longitudinal stability derivatives to systematic errors are computed, using a high fidelity simulation of a forced oscillation test rig, and assessed using both Design of Experiments and Monte Carlo methods.
Hellander, Andreas; Lawson, Michael J; Drawert, Brian; Petzold, Linda
2015-01-01
The efficiency of exact simulation methods for the reaction-diffusion master equation (RDME) is severely limited by the large number of diffusion events if the mesh is fine or if diffusion constants are large. Furthermore, inherent properties of exact kinetic-Monte Carlo simulation methods limit the efficiency of parallel implementations. Several approximate and hybrid methods have appeared that enable more efficient simulation of the RDME. A common feature to most of them is that they rely on splitting the system into its reaction and diffusion parts and updating them sequentially over a discrete timestep. This use of operator splitting enables more efficient simulation but it comes at the price of a temporal discretization error that depends on the size of the timestep. So far, existing methods have not attempted to estimate or control this error in a systematic manner. This makes the solvers hard to use for practitioners since they must guess an appropriate timestep. It also makes the solvers potentially less efficient than if the timesteps are adapted to control the error. Here, we derive estimates of the local error and propose a strategy to adaptively select the timestep when the RDME is simulated via a first order operator splitting. While the strategy is general and applicable to a wide range of approximate and hybrid methods, we exemplify it here by extending a previously published approximate method, the Diffusive Finite-State Projection (DFSP) method, to incorporate temporal adaptivity. PMID:26865735
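The paper's error estimator is derived for the stochastic RDME setting; as a loose illustration of adaptive first-order operator splitting, the sketch below uses generic step-doubling on a deterministic reaction-diffusion analogue, which is our simplification rather than the DFSP-specific estimator.

```python
import numpy as np

def split_step(state, dt, react, diffuse):
    """One first-order (Lie) splitting step: react for dt, then diffuse for dt."""
    return diffuse(react(state, dt), dt)

def adaptive_split(state, t_end, dt, react, diffuse, tol=1e-3):
    """Step-doubling control of the splitting error: compare one dt step with two dt/2
    steps and shrink/grow dt to keep their difference below `tol`."""
    t = 0.0
    while t < t_end:
        dt = min(dt, t_end - t)
        coarse = split_step(state, dt, react, diffuse)
        fine = split_step(split_step(state, dt / 2, react, diffuse), dt / 2, react, diffuse)
        err = np.max(np.abs(np.asarray(coarse) - np.asarray(fine)))
        if err > tol and dt > 1e-8:
            dt *= 0.5                      # reject the step and retry with a smaller one
            continue
        state, t = fine, t + dt
        if err < 0.25 * tol:
            dt *= 2.0                      # grow the step when the error estimate allows
    return state
```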
Basis set limit and systematic errors in local-orbital based all-electron DFT
NASA Astrophysics Data System (ADS)
Blum, Volker; Behler, Jörg; Gehrke, Ralf; Reuter, Karsten; Scheffler, Matthias
2006-03-01
With the advent of efficient integration schemes [1,2], numeric atom-centered orbitals (NAO's) are an attractive basis choice in practical density functional theory (DFT) calculations of nanostructured systems (surfaces, clusters, molecules). Though all-electron, the efficiency of practical implementations promises to be on par with the best plane-wave pseudopotential codes, while having a noticeably higher accuracy if required: Minimal-sized effective tight-binding like calculations and chemically accurate all-electron calculations are both possible within the same framework; non-periodic and periodic systems can be treated on equal footing; and the localized nature of the basis allows in principle for O(N)-like scaling. However, converging an observable with respect to the basis set is less straightforward than with competing systematic basis choices (e.g., plane waves). We here investigate the basis set limit of optimized NAO basis sets in all-electron calculations, using as examples small molecules and clusters (N2, Cu2, Cu4, Cu10). meV-level total energy convergence is possible using ≤50 basis functions per atom in all cases. We also find a clear correlation between the errors which arise from underconverged basis sets, and the system geometry (interatomic distance). [1] B. Delley, J. Chem. Phys. 92, 508 (1990); [2] J.M. Soler et al., J. Phys.: Condens. Matter 14, 2745 (2002).
Global Warming Estimation from MSU
NASA Technical Reports Server (NTRS)
Prabhakara, C.; Iacovazzi, Robert; Yoo, Jung-Moon
1998-01-01
Microwave Sounding Unit (MSU) radiometer observations in Ch 2 (53.74 GHz) from sequential, sun-synchronous, polar-orbiting NOAA satellites contain small systematic errors. Some of these errors are time-dependent and some are time-independent. Small errors in Ch 2 data of successive satellites arise from calibration differences. Also, successive NOAA satellites tend to have different Local Equatorial Crossing Times (LECT), which introduce differences in Ch 2 data due to the diurnal cycle. These two sources of systematic error are largely time independent. However, because of atmospheric drag, there can be a drift in the LECT of a given satellite, which introduces time-dependent systematic errors. One of these errors is due to the progressive change in the diurnal cycle and the other is due to associated changes in instrument heating by the sun. In order to infer the global temperature trend from these MSU data, we have explicitly eliminated the time-independent systematic errors. The two time-dependent errors cannot be assessed from each satellite individually. For this reason, their cumulative effect on the global temperature trend is evaluated implicitly. Christy et al. (1998) (CSL), based on their method of analysis of the MSU Ch 2 data, infer a global temperature cooling trend (-0.046 K per decade) from 1979 to 1997, although their near nadir measurements yield a near zero trend (0.003 K/decade). Utilising an independent method of analysis, we infer that global temperature warmed by 0.12 +/- 0.06 °C per decade from the observations of the MSU Ch 2 during the period 1980 to 1997.
NASA Astrophysics Data System (ADS)
Glover, Paul W. J.
2016-07-01
When scientists apply Archie's first law they often include an extra parameter a, which was introduced about 10 years after the equation's first publication by Winsauer et al. (1952), and which is sometimes called the "tortuosity" or "lithology" parameter. This parameter is not, however, theoretically justified. Paradoxically, the Winsauer et al. (1952) form of Archie's law often performs better than the original, more theoretically correct version. The difference in the cementation exponent calculated from these two forms of Archie's law is important, and can lead to a misestimation of reserves by at least 20 % for typical reservoir parameter values. We have examined the apparent paradox, and conclude that while the theoretical form of the law is correct, the data that we have been analysing with Archie's law have been in error. There are at least three types of systematic error that are present in most measurements: (i) a porosity error, (ii) a pore fluid salinity error, and (iii) a temperature error. Each of these systematic errors is sufficient to ensure that a non-unity value of the parameter a is required in order to fit the electrical data well. Fortunately, the inclusion of this parameter in the fit has compensated for the presence of the systematic errors in the electrical and porosity data, leading to a value of cementation exponent that is correct. The exceptions are those cementation exponents that have been calculated for individual core plugs. We make a number of recommendations for reducing the systematic errors that contribute to the problem and suggest that the value of the parameter a may now be used as an indication of data quality.
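To make the distinction concrete, the sketch below fits both forms of Archie's law to hypothetical core-plug data: the one-parameter original (a forced to 1) and the Winsauer form in which a is free and can absorb systematic porosity, salinity and temperature errors; the data values are invented for illustration.

```python
import numpy as np

# Hypothetical core-plug data: formation factor F = rho_rock / rho_brine versus porosity phi.
phi = np.array([0.08, 0.12, 0.18, 0.22, 0.27])
F   = np.array([150.0, 65.0, 28.0, 18.0, 12.0])

# Original Archie law, F = phi**(-m): one-parameter least-squares fit in log space (a = 1).
m_archie = -np.sum(np.log(F) * np.log(phi)) / np.sum(np.log(phi) ** 2)

# Winsauer form, F = a * phi**(-m): two-parameter fit; a != 1 typically reflects
# systematic measurement errors rather than rock physics.
slope, intercept = np.polyfit(np.log(phi), np.log(F), 1)
m_winsauer, a = -slope, np.exp(intercept)

print(m_archie, m_winsauer, a)
```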
[Errors in Peruvian medical journals references].
Huamaní, Charles; Pacheco-Romero, José
2009-01-01
References are fundamental in our studies; an adequate selection is as important as an adequate description. To determine the number of errors in a sample of references found in Peruvian medical journals. We reviewed 515 references from scientific papers, selected by systematic randomized sampling, and corroborated the reference information against the original document or its citation in PubMed, LILACS or SciELO-Peru. We found errors in 47.6% (245) of the references, identifying 372 types of errors; the most frequent were errors in presentation style (120), authorship (100) and title (100), mainly due to spelling mistakes (91). The percentage of reference errors was high, and the errors were varied and multiple. We suggest systematic revision of references in the editorial process, as well as extending the discussion of this topic. Keywords: references, periodicals, research, bibliometrics.
McGinnis, Ryan S; Mahadevan, Nikhil; Moon, Yaejin; Seagers, Kirsten; Sheth, Nirav; Wright, John A; DiCristofaro, Steven; Silva, Ikaro; Jortberg, Elise; Ceruolo, Melissa; Pindado, Jesus A; Sosnoff, Jacob; Ghaffari, Roozbeh; Patel, Shyamal
2017-01-01
Gait speed is a powerful clinical marker for mobility impairment in patients suffering from neurological disorders. However, assessment of gait speed in coordination with delivery of comprehensive care is usually constrained to clinical environments and is often limited due to mounting demands on the availability of trained clinical staff. These limitations in assessment design could give rise to poor ecological validity and limited ability to tailor interventions to individual patients. Recent advances in wearable sensor technologies have fostered the development of new methods for monitoring parameters that characterize mobility impairment, such as gait speed, outside the clinic, and therefore address many of the limitations associated with clinical assessments. However, these methods are often validated using normal gait patterns; and extending their utility to subjects with gait impairments continues to be a challenge. In this paper, we present a machine learning method for estimating gait speed using a configurable array of skin-mounted, conformal accelerometers. We establish the accuracy of this technique on treadmill walking data from subjects with normal gait patterns and subjects with multiple sclerosis-induced gait impairments. For subjects with normal gait, the best performing model systematically overestimates speed by only 0.01 m/s, detects changes in speed to within less than 1%, and achieves a root-mean-square-error of 0.12 m/s. Extending these models trained on normal gait to subjects with gait impairments yields only minor changes in model performance. For example, for subjects with gait impairments, the best performing model systematically overestimates speed by 0.01 m/s, quantifies changes in speed to within 1%, and achieves a root-mean-square-error of 0.14 m/s. Additional analyses demonstrate that there is no correlation between gait speed estimation error and impairment severity, and that the estimated speeds maintain the clinical significance of ground truth speed in this population. These results support the use of wearable accelerometer arrays for estimating walking speed in normal subjects and their extension to MS patient cohorts with gait impairment.
Charge renormalization at the large-D limit for N-electron atoms and weakly bound systems
NASA Astrophysics Data System (ADS)
Kais, S.; Bleil, R.
1995-05-01
We develop a systematic way to determine an effective nuclear charge ZRD such that the Hartree-Fock results will be significantly closer to the exact energies by utilizing the analytically known large-D limit energies. This method yields an expansion for the effective nuclear charge in powers of (1/D), which we have evaluated to the first order. This first order approximation to the desired effective nuclear charge has been applied to two-electron atoms with Z=2-20, and weakly bound systems such as H-. The errors for the two-electron atoms when compared with exact results were reduced from ˜0.2% for Z=2 to ˜0.002% for large Z. Although usual Hartree-Fock calculations for H- show this to be unstable, our results reduce the percent error of the Hartree-Fock energy from 7.6% to 1.86% and predicts the anion to be stable. For N-electron atoms (N=3-18, Z=3-28), using only the zeroth order approximation for the effective charge significantly reduces the error of Hartree-Fock calculations and recovers more than 80% of the correlation energy.
Component Analysis of Errors on PERSIANN Precipitation Estimates over Urmia Lake Basin, IRAN
NASA Astrophysics Data System (ADS)
Ghajarnia, N.; Daneshkar Arasteh, P.; Liaghat, A. M.; Araghinejad, S.
2016-12-01
In this study, the PERSIANN daily dataset is evaluated from 2000 to 2011 in 69 pixels over the Urmia Lake basin in northwest Iran. Different analytical approaches and indices are used to examine PERSIANN precision in the detection and estimation of rainfall rate. The residuals are decomposed into Hit, Miss and False Alarm (FA) estimation biases, while continuous decomposition of systematic and random error components is also analyzed seasonally and categorically. A new interpretation of estimation accuracy, named "reliability of PERSIANN estimations", is introduced, and the behaviour of existing categorical/statistical measures and error components is analyzed seasonally over different rainfall rate categories. This study yields new insights into the nature of PERSIANN errors over the Urmia Lake basin, a semi-arid region in the Middle East, including the following:
- The analyzed contingency table indices indicate better detection precision during spring and fall.
- A relatively constant level of error is generally observed among different categories. The range of precipitation estimates at different rainfall rate categories is nearly invariant, a sign of the existence of systematic error.
- A low level of reliability is observed for PERSIANN estimations at different categories, mostly associated with a high level of FA error. However, as the rate of precipitation increases, the ability and precision of PERSIANN in rainfall detection also increase.
- The systematic and random error decomposition in this area shows that PERSIANN has more difficulty in modeling the system and pattern of rainfall than bias due to rainfall uncertainties. The level of systematic error also increases considerably in heavier rainfalls.
It is also important to note that the PERSIANN error characteristics of each season vary with the conditions and rainfall patterns of that season, which shows the necessity of a seasonally different approach for the calibration of this product. Overall, we believe that the error component analyses performed in this study can substantially help further local studies on post-calibration and bias reduction of PERSIANN estimations.
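One common convention for the systematic/random split used in satellite-precipitation error studies regresses the estimates on the reference observations and treats the fitted part as systematic; the sketch below follows that convention, which may differ in detail from the decomposition used in this study.

```python
import numpy as np

def sys_rand_decomposition(est, obs):
    """Decompose the MSE of an estimate into systematic and random parts using the
    regression-based convention: regress estimates on observations, treat the fitted
    component as systematic and the residual as random (MSE = sys + rand exactly,
    because OLS residuals are orthogonal to the fitted values)."""
    est, obs = np.asarray(est, float), np.asarray(obs, float)
    slope, intercept = np.polyfit(obs, est, 1)
    fitted = intercept + slope * obs
    mse_sys = np.mean((fitted - obs) ** 2)
    mse_rand = np.mean((est - fitted) ** 2)
    return mse_sys, mse_rand, mse_sys + mse_rand
```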
Neale, Chris; Madill, Chris; Rauscher, Sarah; Pomès, Régis
2013-08-13
All molecular dynamics simulations are susceptible to sampling errors, which degrade the accuracy and precision of observed values. The statistical convergence of simulations containing atomistic lipid bilayers is limited by the slow relaxation of the lipid phase, which can exceed hundreds of nanoseconds. These long conformational autocorrelation times are exacerbated in the presence of charged solutes, which can induce significant distortions of the bilayer structure. Such long relaxation times represent hidden barriers that induce systematic sampling errors in simulations of solute insertion. To identify optimal methods for enhancing sampling efficiency, we quantitatively evaluate convergence rates using generalized ensemble sampling algorithms in calculations of the potential of mean force for the insertion of the ionic side chain analog of arginine in a lipid bilayer. Umbrella sampling (US) is used to restrain solute insertion depth along the bilayer normal, the order parameter commonly used in simulations of molecular solutes in lipid bilayers. When US simulations are modified to conduct random walks along the bilayer normal using a Hamiltonian exchange algorithm, systematic sampling errors are eliminated more rapidly and the rate of statistical convergence of the standard free energy of binding of the solute to the lipid bilayer is increased 3-fold. We compute the ratio of the replica flux transmitted across a defined region of the order parameter to the replica flux that entered that region in Hamiltonian exchange simulations. We show that this quantity, the transmission factor, identifies sampling barriers in degrees of freedom orthogonal to the order parameter. The transmission factor is used to estimate the depth-dependent conformational autocorrelation times of the simulation system, some of which exceed the simulation time, and thereby identify solute insertion depths that are prone to systematic sampling errors and estimate the lower bound of the amount of sampling that is required to resolve these sampling errors. Finally, we extend our simulations and verify that the conformational autocorrelation times estimated by the transmission factor accurately predict correlation times that exceed the simulation time scale-something that, to our knowledge, has never before been achieved.
Taylor, C; Parker, J; Stratford, J; Warren, M
2018-05-01
Although all systematic and random positional setup errors can be corrected for in their entirety during on-line image-guided radiotherapy, the use of a specified action level, below which no correction occurs, is also an option. The following service evaluation aimed to investigate the use of a 3 mm action level for on-line image assessment and correction (online, systematic set-up error and weekly evaluation) for lower extremity sarcoma, and understand the impact on imaging frequency and patient positioning error within one cancer centre. All patients were immobilised using a thermoplastic shell attached to a plastic base and an individual moulded footrest. A retrospective analysis of 30 patients was performed. Patient setup and correctional data derived from cone beam CT analysis were retrieved. The timing, frequency and magnitude of corrections were evaluated. The population systematic and random errors were derived. 20% of patients had no systematic corrections over the duration of treatment, and 47% had one. The maximum number of systematic corrections per course of radiotherapy was 4, which occurred for 2 patients. 34% of episodes occurred within the first 5 fractions. All patients had at least one observed translational error during their treatment greater than 0.3 cm, and 80% of patients had at least one observed translational error during their treatment greater than 0.5 cm. The population systematic error was 0.14 cm, 0.10 cm, 0.14 cm and random error was 0.27 cm, 0.22 cm, 0.23 cm in the lateral, caudocranial and anteroposterior directions. The required Planning Target Volume margin for the study population was 0.55 cm, 0.41 cm and 0.50 cm in the lateral, caudocranial and anteroposterior directions. The 3 mm action level for image assessment and correction prior to delivery reduced the imaging burden and focussed intervention on patients that exhibited greater positional variability. This strategy could be an efficient deployment of departmental resources if full daily correction of positional setup error is not possible. Copyright © 2017. Published by Elsevier Ltd.
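The reported margins are consistent with the widely used 2.5Σ + 0.7σ recipe applied to the quoted population errors; a minimal sketch of that calculation, with the grouping of per-fraction shifts by patient assumed, is shown below.

```python
import numpy as np

def population_setup_errors(shifts_by_patient):
    """Population systematic error (Sigma: SD of per-patient mean shifts) and random
    error (sigma: RMS of per-patient SDs) from per-fraction setup shifts in one axis."""
    means = [np.mean(p) for p in shifts_by_patient]
    sds   = [np.std(p, ddof=1) for p in shifts_by_patient]
    Sigma = np.std(means, ddof=1)
    sigma = np.sqrt(np.mean(np.square(sds)))
    return Sigma, sigma

def van_herk_margin(Sigma, sigma):
    """Widely used CTV-to-PTV recipe, 2.5*Sigma + 0.7*sigma."""
    return 2.5 * Sigma + 0.7 * sigma

print(van_herk_margin(0.14, 0.27))   # ~0.54 cm, close to the 0.55 cm lateral margin quoted
```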
Helical tomotherapy setup variations in canine nasal tumor patients immobilized with a bite block.
Kubicek, Lyndsay N; Seo, Songwon; Chappell, Richard J; Jeraj, Robert; Forrest, Lisa J
2012-01-01
The purpose of our study was to compare setup variation in four degrees of freedom (vertical, longitudinal, lateral, and roll) between canine nasal tumor patients immobilized with a mattress and bite block, versus a mattress alone. Our secondary aim was to define a clinical target volume (CTV) to planning target volume (PTV) expansion margin based on our mean systematic error values associated with nasal tumor patients immobilized by a mattress and bite block. We evaluated six parameters for setup corrections: systematic error, random error, patient-patient variation in systematic errors, the magnitude of patient-specific random errors (root mean square [RMS]), distance error, and the variation of setup corrections from zero shift. The variations in all parameters were statistically smaller in the group immobilized by a mattress and bite block. The mean setup corrections in the mattress and bite block group ranged from 0.91 mm to 1.59 mm for the translational errors, and was 0.5° for roll. Although most veterinary radiation facilities do not have access to image-guided radiotherapy (IGRT), we identified a need for more rigid fixation, established the value of adding IGRT to veterinary radiation therapy, and defined the CTV-PTV setup error margin for canine nasal tumor patients immobilized in a mattress and bite block. © 2012 Veterinary Radiology & Ultrasound.
Hughes, Charmayne M L; Baber, Chris; Bienkiewicz, Marta; Worthington, Andrew; Hazell, Alexa; Hermsdörfer, Joachim
2015-01-01
Approximately 33% of stroke patients have difficulty performing activities of daily living, often committing errors during the planning and execution of such activities. The objective of this study was to evaluate the ability of the human error identification (HEI) technique SHERPA (Systematic Human Error Reduction and Prediction Approach) to predict errors during the performance of daily activities in stroke patients with left and right hemisphere lesions. Using SHERPA we successfully predicted 36 of the 38 observed errors, with analysis indicating that the proportion of predicted and observed errors was similar for all sub-tasks and severity levels. HEI results were used to develop compensatory cognitive strategies that clinicians could employ to reduce or prevent errors from occurring. This study provides evidence for the reliability and validity of SHERPA in the design of cognitive rehabilitation strategies in stroke populations.
Internal robustness: systematic search for systematic bias in SN Ia data
NASA Astrophysics Data System (ADS)
Amendola, Luca; Marra, Valerio; Quartin, Miguel
2013-04-01
A great deal of effort is currently being devoted to understanding, estimating and removing systematic errors in cosmological data. In the particular case of Type Ia supernovae, systematics are starting to dominate the error budget. Here we propose a Bayesian tool for carrying out a systematic search for systematic contamination. This serves as an extension to the standard goodness-of-fit tests and allows one not only to cross-check raw or processed data for the presence of systematics but also to pinpoint the data that are most likely contaminated. We successfully test our tool with mock catalogues and conclude that the Union2.1 data do not possess a significant amount of systematics. Finally, we show that if one includes in Union2.1 the supernovae that originally failed the quality cuts, our tool signals the presence of systematics at over the 3.8σ confidence level.
Jiang, Jie; Yu, Wenbo; Zhang, Guangjun
2017-01-01
Navigation accuracy is one of the key performance indicators of an inertial navigation system (INS). Requirements for an accuracy assessment of an INS in a real work environment are exceedingly urgent because of enormous differences between real work and laboratory test environments. An attitude accuracy assessment of an INS based on the intensified high dynamic star tracker (IHDST) is particularly suitable for a real complex dynamic environment. However, the coupled systematic coordinate errors of an INS and the IHDST severely decrease the attitude assessment accuracy of an INS. To address this, a high-accuracy method for decoupling and estimating the above systematic coordinate errors, based on the constrained least squares (CLS) method, is proposed in this paper. The reference frame of the IHDST is first converted to be consistent with that of the INS because their reference frames are completely different. Thereafter, the decoupling estimation model of the systematic coordinate errors is established and the CLS-based optimization method is utilized to estimate errors accurately. After compensating for these errors, the attitude accuracy of the INS can be assessed accurately using the IHDST. Both simulated experiments and real flight experiments of aircraft are conducted, and the experimental results demonstrate that the proposed method is effective and shows excellent performance for the attitude accuracy assessment of an INS in a real work environment. PMID:28991179
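As a generic illustration of the constrained least squares machinery invoked above, the sketch below solves an equality-constrained least-squares problem through its KKT system. It is only a minimal sketch of CLS itself; the actual error-decoupling model of the INS/IHDST paper (its design matrix, constraints and parameterization) is not reproduced here.

```python
# Minimal sketch of equality-constrained least squares (CLS):
#   minimize ||A x - b||^2   subject to   C x = d
# solved via the KKT system. Toy matrices only; not the paper's model.
import numpy as np

def cls_solve(A, b, C, d):
    n = A.shape[1]
    m = C.shape[0]
    # KKT system: [2 A^T A  C^T] [x     ] = [2 A^T b]
    #             [C        0  ] [lambda]   [d      ]
    kkt = np.block([[2.0 * A.T @ A, C.T],
                    [C, np.zeros((m, m))]])
    rhs = np.concatenate([2.0 * A.T @ b, d])
    sol = np.linalg.solve(kkt, rhs)
    return sol[:n]  # estimated parameters (Lagrange multipliers discarded)

# Toy usage: estimate 3 parameters from noisy data under the constraint x0 + x1 + x2 = 1.
rng = np.random.default_rng(0)
A = rng.normal(size=(50, 3))
x_true = np.array([0.2, 0.3, 0.5])
b = A @ x_true + 0.01 * rng.normal(size=50)
C = np.ones((1, 3)); d = np.array([1.0])
print(cls_solve(A, b, C, d))   # close to x_true, and exactly satisfying the constraint
```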
NASA Astrophysics Data System (ADS)
Landry, Guillaume; Parodi, Katia; Wildberger, Joachim E.; Verhaegen, Frank
2013-08-01
Dedicated methods of in-vivo verification of ion treatment based on the detection of secondary emitted radiation, such as positron-emission-tomography and prompt gamma detection require high accuracy in the assignment of the elemental composition. This especially concerns the content in carbon and oxygen, which are the most abundant elements of human tissue. The standard single-energy computed tomography (SECT) approach to carbon and oxygen concentration determination has been shown to introduce significant discrepancies in the carbon and oxygen content of tissues. We propose a dual-energy CT (DECT)-based approach for carbon and oxygen content assignment and investigate the accuracy gains of the method. SECT and DECT Hounsfield units (HU) were calculated using the stoichiometric calibration procedure for a comprehensive set of human tissues. Fit parameters for the stoichiometric calibration were obtained from phantom scans. Gaussian distributions with standard deviations equal to those derived from phantom scans were subsequently generated for each tissue for several values of the computed tomography dose index (CTDIvol). The assignment of %weight carbon and oxygen (%wC,%wO) was performed based on SECT and DECT. The SECT scheme employed a HU versus %wC,O approach while for DECT we explored a Zeff versus %wC,O approach and a (Zeff, ρe) space approach. The accuracy of each scheme was estimated by calculating the root mean square (RMS) error on %wC,O derived from the input Gaussian distribution of HU for each tissue and also for the noiseless case as a limiting case. The (Zeff, ρe) space approach was also compared to SECT by comparing RMS error for hydrogen and nitrogen (%wH,%wN). Systematic shifts were applied to the tissue HU distributions to assess the robustness of the method against systematic uncertainties in the stoichiometric calibration procedure. In the absence of noise the (Zeff, ρe) space approach showed more accurate %wC,O assignment (largest error of 2%) than the Zeff versus %wC,O and HU versus %wC,O approaches (largest errors of 15% and 30%, respectively). When noise was present, the accuracy of the (Zeff, ρe) space (DECT approach) was decreased but the RMS error over all tissues was lower than the HU versus %wC,O (SECT approach) (5.8%wC versus 7.5%wC at CTDIvol = 20 mGy). The DECT approach showed decreasing RMS error with decreasing image noise (or increasing CTDIvol). At CTDIvol = 80 mGy the RMS error over all tissues was 3.7% for DECT and 6.2% for SECT approaches. However, systematic shifts greater than ±5HU undermined the accuracy gains afforded by DECT at any dose level. DECT provides more accurate %wC,O assignment than SECT when imaging noise and systematic uncertainties in HU values are not considered. The presence of imaging noise degrades the DECT accuracy on %wC,O assignment but it remains superior to SECT. However, DECT was found to be sensitive to systematic shifts of human tissue HU.
Ronchi, Roberta; Revol, Patrice; Katayama, Masahiro; Rossetti, Yves; Farnè, Alessandro
2011-01-01
During the procedure of prism adaptation, subjects execute pointing movements to visual targets under a lateral optical displacement: as a consequence of the discrepancy between visual and proprioceptive inputs, their visuo-motor activity is characterized by pointing errors. The perception of such final errors triggers error-correction processes that eventually result in sensori-motor compensation, opposite to the prismatic displacement (i.e., after-effects). Here we tested whether the mere observation of erroneous pointing movements, similar to those executed during prism adaptation, is sufficient to produce adaptation-like after-effects. Neurotypical participants observed, from a first-person perspective, the examiner's arm making incorrect pointing movements that systematically overshot the visual target location to the right, thus simulating a rightward optical deviation. Three classical after-effect measures (proprioceptive, visual and visual-proprioceptive shift) were recorded before and after first-person perspective observation of pointing errors. Results showed that mere visual exposure to an arm that systematically points to the right side of a target (i.e., without error correction) produces a leftward after-effect, which mostly affects the observer's proprioceptive estimation of her body midline. In addition, being exposed to such a constant visual error induced in the observer the illusion of “feeling” the seen movement. These findings indicate that it is possible to elicit sensori-motor after-effects by mere observation of movement errors. PMID:21731649
Alonso-Carné, Jorge; García-Martín, Alberto; Estrada-Peña, Agustin
2013-11-01
The modelling of habitat suitability for parasites is a growing area of research due to its association with climate change and ensuing shifts in the distribution of infectious diseases. Such models depend on remote sensing data and require accurate, high-resolution temperature measurements. The temperature is critical for accurate estimation of development rates and potential habitat ranges for a given parasite. The MODIS sensors aboard the Aqua and Terra satellites provide high-resolution temperature data for remote sensing applications. This paper describes a comparative analysis of MODIS-derived temperatures relative to ground records of surface temperature in the western Palaearctic. The results show that MODIS overestimated maximum temperature values and underestimated minimum temperatures by up to 5-6 °C. The combined use of both Aqua and Terra datasets provided the most accurate temperature estimates around latitude 35-44° N, with an overestimation during spring-summer months and an underestimation in autumn-winter. Errors in temperature estimation were associated with specific ecological regions within the target area as well as with technical limitations in the temporal and orbital coverage of the satellites (e.g. sensor limitations and satellite transit times). We estimated error propagation of temperature uncertainties in parasite habitat suitability models by comparing outcomes of published models. Propagated errors reached 36% of the respective annual measurements, depending on the model used. Our analysis demonstrates the importance of adequate image processing and points out the limitations of MODIS temperature data as inputs into predictive models concerning parasite lifecycles.
NASA Technical Reports Server (NTRS)
Bell, Thomas L.; Kundu, Prasun K.; Einaudi, Franco (Technical Monitor)
2000-01-01
Estimates from TRMM satellite data of monthly total rainfall over an area are subject to substantial sampling errors due to the limited number of visits to the area by the satellite during the month. Quantitative comparisons of TRMM averages with data collected by other satellites and by ground-based systems require some estimate of the size of this sampling error. A method of estimating this sampling error based on the actual statistics of the TRMM observations and on some modeling work has been developed. "Sampling error" in TRMM monthly averages is defined here relative to the monthly total a hypothetical satellite permanently stationed above the area would have reported. "Sampling error" therefore includes contributions from the random and systematic errors introduced by the satellite remote sensing system. As part of our long-term goal of providing error estimates for each grid point accessible to the TRMM instruments, sampling error estimates for TRMM based on rain retrievals from TRMM microwave (TMI) data are compared for different times of the year and different oceanic areas (to minimize changes in the statistics due to algorithmic differences over land and ocean). Changes in sampling error estimates arising from changes in rain statistics due to 1) the evolution of the official algorithms used to process the data and 2) differences from other remote sensing systems, such as the Defense Meteorological Satellite Program (DMSP) Special Sensor Microwave/Imager (SSM/I), are analyzed.
Bartz, Daniel; Hatrick, Kerr; Hesse, Christian W; Müller, Klaus-Robert; Lemm, Steven
2013-01-01
Robust and reliable covariance estimates play a decisive role in financial and many other applications. An important class of estimators is based on factor models. Here, we show by extensive Monte Carlo simulations that covariance matrices derived from the statistical Factor Analysis model exhibit a systematic error, which is similar to the well-known systematic error of the spectrum of the sample covariance matrix. Moreover, we introduce the Directional Variance Adjustment (DVA) algorithm, which diminishes the systematic error. In a thorough empirical study for the US, European, and Hong Kong stock market we show that our proposed method leads to improved portfolio allocation.
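The systematic error of the sample covariance spectrum referred to above is easy to reproduce numerically: when the number of observations is not much larger than the dimension, the largest sample eigenvalues are biased upwards and the smallest downwards. The sketch below is a small Monte Carlo illustration of that effect only; the authors' DVA algorithm is not reproduced.

```python
# Small Monte Carlo illustrating the well-known systematic error of the
# sample covariance eigenvalue spectrum (true covariance = identity, so
# every true eigenvalue is 1). Not an implementation of the DVA algorithm.
import numpy as np

rng = np.random.default_rng(1)
p, T, n_trials = 50, 100, 200          # dimension, observations, repetitions

eig_max, eig_min = [], []
for _ in range(n_trials):
    X = rng.multivariate_normal(np.zeros(p), np.eye(p), size=T)
    sample_cov = np.cov(X, rowvar=False)
    w = np.linalg.eigvalsh(sample_cov)  # ascending order
    eig_min.append(w[0]); eig_max.append(w[-1])

print("mean largest eigenvalue :", np.mean(eig_max))   # systematically > 1
print("mean smallest eigenvalue:", np.mean(eig_min))   # systematically < 1
```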
Cosmology from Cosmic Microwave Background and large-scale structure
NASA Astrophysics Data System (ADS)
Xu, Yongzhong
2003-10-01
This dissertation consists of a series of studies, constituting four published papers, involving the Cosmic Microwave Background and large-scale structure, which help constrain cosmological parameters and potential systematic errors. First, we present a method for comparing and combining maps with different resolutions and beam shapes, and apply it to the Saskatoon, QMAP and COBE/DMR data sets. Although the Saskatoon and QMAP maps detect signal at the 21σ and 40σ levels, respectively, their difference is consistent with pure noise, placing strong limits on possible systematic errors. In particular, we obtain quantitative upper limits on relative calibration and pointing errors. Splitting the combined data by frequency shows similar consistency between the Ka- and Q-bands, placing limits on foreground contamination. The visual agreement between the maps is equally striking. Our combined QMAP+Saskatoon map, nicknamed QMASK, is publicly available at www.hep.upenn.edu/˜xuyz/qmask.html together with its 6495 × 6495 noise covariance matrix. This thoroughly tested data set covers a large enough area (648 square degrees, at the time the largest degree-scale map available) to allow a statistical comparison with COBE/DMR, showing good agreement. By band-pass-filtering the QMAP and Saskatoon maps, we are also able to spatially compare them scale-by-scale to check for beam- and pointing-related systematic errors. Using the QMASK map, we then measure the cosmic microwave background (CMB) power spectrum on angular scales ℓ ~ 30-200 (1°-6°), and we test it for non-Gaussianity using morphological statistics known as Minkowski functionals. We conclude that the QMASK map is neither a very typical nor a very exceptional realization of a Gaussian random field. At least about 20% of the 1000 Gaussian Monte Carlo maps differ more than the QMASK map from the mean morphological parameters of the Gaussian fields. Finally, we compute the real-space power spectrum and the redshift-space distortions of galaxies in the 2dF 100k galaxy redshift survey using pseudo-Karhunen-Loève eigenmodes and the stochastic bias formalism. Our results agree well with those published by the 2dFGRS team, and have the added advantage of producing easy-to-interpret uncorrelated minimum-variance measurements of the galaxy-galaxy, galaxy-velocity and velocity-velocity power spectra in 27 k-bands, with narrow and well-behaved window functions in the range 0.01 h/Mpc < k < 0.8 h/Mpc. We find no significant detection of baryonic wiggles. We measure the galaxy-matter correlation coefficient r > 0.4 and the redshift-distortion parameter β = 0.49 ± 0.16 for r = 1.
NASA Astrophysics Data System (ADS)
Weichert, Christoph; Köchert, Paul; Schötka, Eugen; Flügge, Jens; Manske, Eberhard
2018-06-01
The uncertainty of a straightness interferometer is independent of the component used to introduce the divergence angle between the two probing beams, and is limited by three main error sources, which are linked to each other: their resolution, the influence of refractive index gradients and the topography of the straightness reflector. To identify the configuration with minimal uncertainties under laboratory conditions, a fully fibre-coupled heterodyne interferometer was successively equipped with three different wedge prisms, resulting in three different divergence angles (4°, 8° and 20°). To separate the error sources an independent reference with a smaller reproducibility is needed. Therefore, the straightness measurement capability of the Nanometer Comparator, based on a multisensor error separation method, was improved to provide measurements with a reproducibility of 0.2 nm. The comparison results revealed that the influence of the refractive index gradients of air did not increase with interspaces between the probing beams of more than 11.3 mm. Therefore, over a movement range of 220 mm, the lowest uncertainty was achieved with the largest divergence angle. The dominant uncertainty contribution arose from the mirror topography, which was additionally determined with a Fizeau interferometer. The measured topography agreed within ±1.3 nm with the systematic deviations revealed in the straightness comparison, resulting in an uncertainty contribution of 2.6 nm for the straightness interferometer.
In-flight calibration of the Hitomi Soft X-ray Spectrometer. (2) Point spread function
NASA Astrophysics Data System (ADS)
Maeda, Yoshitomo; Sato, Toshiki; Hayashi, Takayuki; Iizuka, Ryo; Angelini, Lorella; Asai, Ryota; Furuzawa, Akihiro; Kelley, Richard; Koyama, Shu; Kurashima, Sho; Ishida, Manabu; Mori, Hideyuki; Nakaniwa, Nozomi; Okajima, Takashi; Serlemitsos, Peter J.; Tsujimoto, Masahiro; Yaqoob, Tahir
2018-03-01
We present results of in-flight calibration of the point spread function of the Soft X-ray Telescope that focuses X-rays onto the pixel array of the Soft X-ray Spectrometer system. We make a full array image of a point-like source by extracting a pulsed component of the Crab nebula emission. Within the limited statistics afforded by an exposure time of only 6.9 ks and limited knowledge of the systematic uncertainties, we find that the raytracing model with a 1.2 arcmin half-power diameter is consistent with an image of the observed event distributions across pixels. The ratio between the Crab pulsar image and the raytracing shows scatter from pixel to pixel that is 40% or less in all except one pixel. The pixel-to-pixel ratio has a spread of 20%, on average, for the 15 edge pixels, with an averaged statistical error of 17% (1σ). In the central 16 pixels, the corresponding ratio is 15% with an error of 6%.
Predicting thunderstorm evolution using ground-based lightning detection networks
NASA Technical Reports Server (NTRS)
Goodman, Steven J.
1990-01-01
Lightning measurements acquired principally by a ground-based network of magnetic direction finders are used to diagnose and predict the existence, temporal evolution, and decay of thunderstorms over a wide range of space and time scales extending over four orders of magnitude. The non-linear growth and decay of thunderstorms and their accompanying cloud-to-ground lightning activity are described by the three-parameter logistic growth model. The growth rate is shown to be a function of the storm size and duration, and the limiting value of the total lightning activity is related to the available energy in the environment. A new technique is described for removing systematic bearing errors from direction finder data, in which radar echoes are used to constrain site error correction and optimization (best point estimate) algorithms. A nearest neighbor pattern recognition algorithm is employed to cluster the discrete lightning discharges into storm cells, and the advantages and limitations of different clustering strategies for storm identification and tracking are examined.
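A three-parameter logistic growth model of the kind mentioned above can be fitted to a cumulative flash count in a few lines. The sketch below uses synthetic data and generic parameter names (K: limiting value, r: growth rate, t0: inflection time); these are assumptions, not the author's notation or data.

```python
# Hedged sketch: fit a three-parameter logistic growth curve to a synthetic
# cumulative cloud-to-ground flash count. Data and parameter values are invented.
import numpy as np
from scipy.optimize import curve_fit

def logistic(t, K, r, t0):
    return K / (1.0 + np.exp(-r * (t - t0)))

t = np.linspace(0, 90, 91)                           # minutes since storm onset
rng = np.random.default_rng(2)
flashes = logistic(t, 600.0, 0.15, 40.0) + rng.normal(0, 10, t.size)

popt, pcov = curve_fit(logistic, t, flashes, p0=[500.0, 0.1, 30.0])
K_fit, r_fit, t0_fit = popt
print(f"K = {K_fit:.0f} flashes, r = {r_fit:.3f} /min, t0 = {t0_fit:.1f} min")
```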
NASA Astrophysics Data System (ADS)
Orosz, G.; Imai, H.; Dodson, R.; Rioja, M. J.; Frey, S.; Burns, R. A.; Etoka, S.; Nakagawa, A.; Nakanishi, H.; Asaki, Y.; Goldman, S. R.; Tafoya, D.
2017-03-01
We report on the measurement of the trigonometric parallaxes of 1612 MHz hydroxyl masers around two asymptotic giant branch stars, WX Psc and OH 138.0+7.2, using the NRAO Very Long Baseline Array with in-beam phase referencing calibration. We obtain a 3σ upper limit of ≤5.3 mas on the parallax of WX Psc, corresponding to a lower limit distance estimate of ≳190 pc. The obtained parallax of OH 138.0+7.2 is 0.52 ± 0.09 mas (±18%), corresponding to a distance of 1.9^{+0.4}_{-0.3} kpc, making this the first hydroxyl maser parallax below one milliarcsecond. We also introduce a new method of error analysis for detecting systematic errors in the astrometry. Finally, we compare our trigonometric distances to published phase-lag distances toward these stars and find a good agreement between the two methods.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Carson, M; Molineu, A; Taylor, P
Purpose: To analyze the most recent results of IROC Houston’s anthropomorphic H&N phantom to determine the nature of failing irradiations and the feasibility of altering pass/fail credentialing criteria. Methods: IROC Houston’s H&N phantom, used for IMRT credentialing for NCI-sponsored clinical trials, requires that an institution’s treatment plan must agree with measurement within 7% (TLD doses) and ≥85% of pixels must pass 7%/4 mm gamma analysis. 156 phantom irradiations (November 2014 – October 2015) were re-evaluated using tighter criteria: 1) 5% TLD and 5%/4 mm, 2) 5% TLD and 5%/3 mm, 3) 4% TLD and 4%/4 mm, and 4) 3% TLD and 3%/3 mm. Failure/poor performance rates were evaluated with respect to individual film and TLD performance by location in the phantom. Overall poor phantom results were characterized qualitatively as systematic (dosimetric) errors, setup errors/positional shifts, global but non-systematic errors, and errors affecting only a local region. Results: The pass rate for these phantoms using current criteria is 90%. Substituting criteria 1-4 reduces the overall pass rate to 77%, 70%, 63%, and 37%, respectively. Statistical analyses indicated the probability of noise-induced TLD failure at the 5% criterion was <0.5%. Using criterion 1, TLD results were most often the cause of failure (86% failed TLD while 61% failed film), with most failures identified in the primary PTV (77% of cases). Other criteria produced similar results. Irradiations that failed from film only were overwhelmingly associated with phantom shifts/setup errors (≥80% of cases). Results failing criterion 1 were primarily diagnosed as systematic: 58% of cases. 11% were setup/positioning errors, 8% were global non-systematic errors, and 22% were local errors. Conclusion: This study demonstrates that 5% TLD and 5%/4 mm gamma criteria may be both practically and theoretically achievable. Further work is necessary to diagnose and resolve dosimetric inaccuracy in these trials, particularly for systematic dose errors. This work is funded by NCI Grant CA180803.
Results from a NIST-EPA Interagency Agreement on Understanding Systematic Measurement Error in Thermal-Optical Analysis for PM Black Carbon Using Response Surfaces and Surface Confidence Intervals will be presented at the American Association for Aerosol Research (AAAR) 24th Annu...
Richards, Emilie J; Brown, Jeremy M; Barley, Anthony J; Chong, Rebecca A; Thomson, Robert C
2018-02-19
The use of large genomic datasets in phylogenetics has highlighted extensive topological variation across genes. Much of this discordance is assumed to result from biological processes. However, variation among gene trees can also be a consequence of systematic error driven by poor model fit, and the relative importance of biological versus methodological factors in explaining gene tree variation is a major unresolved question. Using mitochondrial genomes to control for biological causes of gene tree variation, we estimate the extent of gene tree discordance driven by systematic error and employ posterior prediction to highlight the role of model fit in producing this discordance. We find that the amount of discordance among mitochondrial gene trees is similar to the amount of discordance found in other studies that assume only biological causes of variation. This similarity suggests that the role of systematic error in generating gene tree variation is underappreciated and critical evaluation of fit between assumed models and the data used for inference is important for the resolution of unresolved phylogenetic questions.
St James, Sara; Seco, Joao; Mishra, Pankaj; Lewis, John H
2013-09-01
The purpose of this work is to present a framework to evaluate the accuracy of four-dimensional treatment planning in external beam radiation therapy using measured patient data and digital phantoms. To accomplish this, 4D digital phantoms of two model patients were created using measured patient lung tumor positions. These phantoms were used to simulate a four-dimensional computed tomography image set, which in turn was used to create a 4D Monte Carlo (4DMC) treatment plan. The 4DMC plan was evaluated by simulating the delivery of the treatment plan over approximately 5 min of tumor motion measured from the same patient on a different day. Unique phantoms accounting for the patient position (tumor position and thorax position) at 2 s intervals were used to represent the model patients on the day of treatment delivery and the delivered dose to the tumor was determined using Monte Carlo simulations. For Patient 1, the tumor was adequately covered with 95.2% of the tumor receiving the prescribed dose. For Patient 2, the tumor was not adequately covered and only 74.3% of the tumor received the prescribed dose. This study presents a framework to evaluate 4D treatment planning methods and demonstrates a potential limitation of 4D treatment planning methods. When systematic errors are present, including when the imaging study used for treatment planning does not represent all potential tumor locations during therapy, the treatment planning methods may not adequately predict the dose to the tumor. This is the first example of a simulation study based on patient tumor trajectories where systematic errors that occur due to an inaccurate estimate of tumor motion are evaluated.
An empirical understanding of triple collocation evaluation measure
NASA Astrophysics Data System (ADS)
Scipal, Klaus; Doubkova, Marcela; Hegyova, Alena; Dorigo, Wouter; Wagner, Wolfgang
2013-04-01
The triple collocation method is an advanced evaluation method that has been used in the soil moisture field for only about half a decade. The method requires three datasets with independent error structures that represent the same phenomenon. The main advantages of the method are that it a) does not require a reference dataset that is assumed to represent the truth, b) limits the effect of random and systematic errors of the other two datasets, and c) simultaneously assesses the error of three datasets. The objective of this presentation is to assess the triple collocation error (Tc) of the ASAR Global Mode Surface Soil Moisture (GM SSM) 1 km dataset and highlight problems of the method related to its ability to cancel the effect of error of ancillary datasets. In particular, the goal is a) to investigate trends in Tc related to the change in spatial resolution from 5 to 25 km, b) to investigate trends in Tc related to the choice of a hydrological model, and c) to study the relationship between Tc and other absolute evaluation methods (namely RMSE and Error Propagation EP). The triple collocation method is implemented using ASAR GM, AMSR-E, and a model (either AWRA-L, GLDAS-NOAH, or ERA-Interim). First, the significance of the relationship between the three soil moisture datasets, which is a prerequisite for the triple collocation method, was tested. Second, the trends in Tc related to the choice of the third reference dataset and scale were assessed. For this purpose the triple collocation is repeated replacing AWRA-L with two different globally available model reanalysis datasets operating at different spatial resolutions (ERA-Interim and GLDAS-NOAH). Finally, the retrieved results were compared to the results of the RMSE and EP evaluation measures. Our results demonstrate that the Tc method does not eliminate the random and time-variant systematic errors of the second and the third dataset used in the Tc. Possible reasons include a) that the TC method cannot fully function with datasets acting at very different spatial resolutions, or b) that the errors were not fully independent as initially assumed.
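For readers unfamiliar with the technique, the classical covariance-based triple collocation estimate of the error variance of each dataset is short enough to sketch directly. This is the textbook formulation for three collocated series observing the same signal with mutually independent, unscaled errors; the exact implementation used for the ASAR GM / AMSR-E / model triplets above may differ.

```python
# Minimal sketch of the classical covariance-based triple collocation (TC)
# error-variance estimate for three collocated datasets x, y, z.
# Assumes a common signal plus mutually independent, zero-mean errors.
import numpy as np

def triple_collocation_error_var(x, y, z):
    c = np.cov(np.vstack([x, y, z]))
    var_ex = c[0, 0] - c[0, 1] * c[0, 2] / c[1, 2]
    var_ey = c[1, 1] - c[0, 1] * c[1, 2] / c[0, 2]
    var_ez = c[2, 2] - c[0, 2] * c[1, 2] / c[0, 1]
    return var_ex, var_ey, var_ez

# Toy check: common soil-moisture signal plus independent noise of known size.
rng = np.random.default_rng(3)
signal = rng.normal(0.25, 0.08, 5000)
x = signal + rng.normal(0, 0.04, signal.size)
y = signal + rng.normal(0, 0.03, signal.size)
z = signal + rng.normal(0, 0.05, signal.size)
print([np.sqrt(v) for v in triple_collocation_error_var(x, y, z)])  # ~0.04, 0.03, 0.05
```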
Technical Basis for Evaluating Software-Related Common-Cause Failures
DOE Office of Scientific and Technical Information (OSTI.GOV)
Muhlheim, Michael David; Wood, Richard
2016-04-01
The instrumentation and control (I&C) system architecture at a nuclear power plant (NPP) incorporates protections against common-cause failures (CCFs) through the use of diversity and defense-in-depth. Even for well-established analog-based I&C system designs, the potential for CCFs of multiple systems (or redundancies within a system) constitutes a credible threat to defeating the defense-in-depth provisions within the I&C system architectures. The integration of digital technologies into the I&C systems provides many advantages compared to the aging analog systems with respect to reliability, maintenance, operability, and cost effectiveness. However, maintaining the diversity and defense-in-depth for both the hardware and software within the digital system is challenging. In fact, the introduction of digital technologies may actually increase the potential for CCF vulnerabilities because of the introduction of undetected systematic faults. These systematic faults are defined as a “design fault located in a software component” and, at a high level, are predominantly the result of (1) errors in the requirement specification, (2) inadequate provisions to account for design limits (e.g., environmental stress), or (3) technical faults incorporated in the internal system (or architectural) design or implementation. Other technology-neutral CCF concerns include hardware design errors, equipment qualification deficiencies, installation or maintenance errors, instrument loop scaling and setpoint mistakes.
PHYSICAL PROPERTIES OF THE 0.94-DAY PERIOD TRANSITING PLANETARY SYSTEM WASP-18
DOE Office of Scientific and Technical Information (OSTI.GOV)
Southworth, John; Anderson, D. R.; Maxted, P. F. L.
2009-12-10
We present high-precision photometry of five consecutive transits of WASP-18, an extrasolar planetary system with one of the shortest orbital periods known. Through the use of telescope defocusing we achieve a photometric precision of 0.47-0.83 mmag per observation over complete transit events. The data are analyzed using the JKTEBOP code and three different sets of stellar evolutionary models. We find the mass and radius of the planet to be M_b = 10.43 ± 0.30 ± 0.24 M_Jup and R_b = 1.165 ± 0.055 ± 0.014 R_Jup (statistical and systematic errors), respectively. The systematic errors in the orbital separation and the stellar and planetary masses, arising from the use of theoretical predictions, are of a similar size to the statistical errors and set a limit on our understanding of the WASP-18 system. We point out that seven of the nine known massive transiting planets (M_b > 3 M_Jup) have eccentric orbits, whereas significant orbital eccentricity has been detected for only four of the 46 less-massive planets. This may indicate that there are two different populations of transiting planets, but could also be explained by observational biases. Further radial velocity observations of low-mass planets will make it possible to choose between these two scenarios.
NASA Technical Reports Server (NTRS)
Miller, N. J.; Chuss, D. T.; Marriage, T. A.; Wollack, E. J.; Appel, J. W.; Bennett, C. L.; Eimer, J.; Essinger-Hileman, T.; Fixsen, D. J.; Harrington, K.;
2016-01-01
Variable-delay Polarization Modulators (VPMs) are currently being implemented in experiments designed to measure the polarization of the cosmic microwave background on large angular scales because of their capability for providing rapid, front-end polarization modulation and control over systematic errors. Despite the advantages provided by the VPM, it is important to identify and mitigate any time-varying effects that leak into the synchronously modulated component of the signal. In this paper, the effect of emission from a 300 K VPM on the system performance is considered and addressed. Though instrument design can greatly reduce the influence of modulated VPM emission, some residual modulated signal is expected. VPM emission is treated in the presence of rotational misalignments and temperature variation. Simulations of time-ordered data are used to evaluate the effect of these residual errors on the power spectrum. The analysis and modeling in this paper guide experimentalists on the critical aspects of observations using VPMs as front-end modulators. By implementing the characterizations and controls as described, front-end VPM modulation can be very powerful for mitigating 1/f noise in large angular scale polarimetric surveys. None of the systematic errors studied fundamentally limit the detection and characterization of B-modes on large scales for a tensor-to-scalar ratio of r = 0.01. Indeed, r less than 0.01 is achievable with commensurately improved characterizations and controls.
Unaccounted source of systematic errors in measurements of the Newtonian gravitational constant G
NASA Astrophysics Data System (ADS)
DeSalvo, Riccardo
2015-06-01
Many precision measurements of G have produced a spread of results incompatible with measurement errors. Clearly an unknown source of systematic errors is at work. It is proposed here that most of the discrepancies derive from subtle deviations from Hooke's law, caused by avalanches of entangled dislocations. The idea is supported by deviations from linearity reported by experimenters measuring G, similarly to what is observed, on a larger scale, in low-frequency spring oscillators. Some mitigating experimental apparatus modifications are suggested.
Adverse Effects in Dual-Star Interferometry
NASA Technical Reports Server (NTRS)
Colavita, M. Mark
2008-01-01
Narrow-angle dual-star interferometric astrometry can provide very high accuracy in the presence of the Earth's turbulent atmosphere. However, to exploit the high atmospherically-limited accuracy requires control of systematic errors in measurement of the interferometer baseline, internal OPDs, and fringe phase. In addition, as high photometric SNR is required, care must be taken to maximize throughput and coherence to obtain high accuracy on faint stars. This article reviews: the keys aspects of the dual-star approach and implementation; the main contributors to the
Phase jitter in a differential phase experiment.
NASA Technical Reports Server (NTRS)
Tanenbaum, B. S.; Connolly, D. J.; Austin, G. L.
1973-01-01
Austin (1971) had concluded that, because of the 'phase jitter,' the differential phase experiment is useful over a more limited height range than the differential absorption experiment. Several observations are presented to show that this conclusion is premature. It is pointed out that the logical basis of the differential absorption experiment also requires that the O- and X-mode echoes, at a given time, come from the same irregularities. Austin's calculations are believed to contain a systematic error above 80 km.
13Check_RNA: A tool to evaluate 13C chemical shifts assignments of RNA.
Icazatti, A A; Martin, O A; Villegas, M; Szleifer, I; Vila, J A
2018-06-19
Chemical shifts (CS) are an important source of structural information for macromolecules such as RNA. In addition to the scarce availability of CS for RNA, the observed values are prone to errors due to wrong re-calibration or mis-assignments. Different groups have dedicated their efforts to correcting systematic CS errors in RNA. Despite this, there are no automated, freely available algorithms to correct assignments of RNA 13C CS before their deposition to the BMRB or to re-reference already deposited CS with systematic errors. Based on an existing method, we have implemented an open-source Python module to correct 13C CS (from here on 13Cexp) systematic errors of RNAs and then return the results in 3 formats, including the NMR-STAR one. This software is available on GitHub at https://github.com/BIOS-IMASL/13Check_RNA under an MIT license. Supplementary data are available at Bioinformatics online.
NASA Astrophysics Data System (ADS)
Krisciunas, Kevin
2007-12-01
A gnomon, or vertical pointed stick, can be used to determine the north-south direction at a site, as well as one's latitude. If one has accurate time and knows one's time zone, it is also possible to determine one's longitude. From observations on the first day of winter and the first day of summer one can determine the obliquity of the ecliptic. Since we can obtain accurate geographical coordinates from Google Earth or a GPS device, analysis of a set of shadow-length measurements can be used by students to learn about astronomical coordinate systems, time systems, systematic errors, and random errors. Systematic latitude errors of student datasets are typically 30 nautical miles (0.5 degree) or more, but with care one can achieve systematic and random errors of less than 8 nautical miles. One of the advantages of this experiment is that it can be carried out during the day. Also, it is possible to determine whether a student has made up the data.
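The latitude determination at the heart of this exercise reduces to one trigonometric step at local solar noon. The sketch below assumes a northern mid-latitude site (sun due south at transit) and a known solar declination for the date, and ignores refraction and the sun's finite disc, which is itself a small systematic error; the numbers are illustrative.

```python
# Hedged sketch: latitude from a single local-noon shadow measurement.
# At transit, solar altitude = 90 deg - latitude + declination (sun due south),
# and altitude = arctan(gnomon height / shadow length).
import math

def latitude_from_shadow(gnomon_height_cm, shadow_length_cm, solar_declination_deg):
    altitude = math.degrees(math.atan2(gnomon_height_cm, shadow_length_cm))
    return 90.0 - altitude + solar_declination_deg

# Example: 100 cm gnomon, 83.9 cm noon shadow near an equinox (declination ~ 0 deg)
print(latitude_from_shadow(100.0, 83.9, 0.0))   # ~40 degrees north
```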
NASA Astrophysics Data System (ADS)
Xu, Chong-yu; Tunemar, Liselotte; Chen, Yongqin David; Singh, V. P.
2006-06-01
Sensitivity of hydrological models to input data errors has been reported in the literature for particular models on a single or a few catchments. A more important issue, i.e. how a model's response to input data errors changes as catchment conditions change, has not been addressed previously. This study investigates the seasonal and spatial effects of precipitation data errors on the performance of conceptual hydrological models. For this study, a monthly conceptual water balance model, NOPEX-6, was applied to 26 catchments in the Mälaren basin in Central Sweden. Both systematic and random errors were considered. For the systematic errors, 5-15% of mean monthly precipitation values were added to the original precipitation to form the corrupted input scenarios. Random values were generated by Monte Carlo simulation and were assumed to be (1) independent between months, and (2) distributed according to a Gaussian law of zero mean and constant standard deviation, taken as 5, 10, 15, 20, and 25% of the mean monthly standard deviation of precipitation. The results show that the response of the model parameters and model performance depends, among other factors, on the type of error, the magnitude of the error, the physical characteristics of the catchment, and the season of the year. In particular, the model appears less sensitive to the random error than to the systematic error. The catchments with smaller values of runoff coefficients were more influenced by input data errors than were the catchments with higher values. Dry months were more sensitive to precipitation errors than were wet months. Recalibration of the model with erroneous data compensated in part for the data errors by altering the model parameters.
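The two corruption schemes described above can be reconstructed in a few lines. The sketch below is an assumption-labelled reading of the abstract (systematic scheme: add a fixed fraction of the mean monthly precipitation; random scheme: add independent zero-mean Gaussian noise scaled to the monthly standard deviation, truncated at zero since precipitation cannot be negative); the original Monte Carlo setup may have differed in detail.

```python
# Hedged sketch of the two input-corruption schemes from the abstract.
# The precipitation series and the truncation at zero are illustrative assumptions.
import numpy as np

def corrupt_systematic(monthly_precip, fraction):
    """Add `fraction` (e.g. 0.05-0.15) of the mean monthly value to every month."""
    return monthly_precip + fraction * monthly_precip.mean()

def corrupt_random(monthly_precip, fraction, rng):
    """Add independent N(0, (fraction * std)^2) noise, truncated at zero."""
    noise = rng.normal(0.0, fraction * monthly_precip.std(), monthly_precip.shape)
    return np.clip(monthly_precip + noise, 0.0, None)

rng = np.random.default_rng(4)
precip = rng.gamma(shape=2.0, scale=25.0, size=120)   # 10 years of synthetic monthly totals, mm
print(corrupt_systematic(precip, 0.10)[:3])
print(corrupt_random(precip, 0.25, rng)[:3])
```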
Detecting and overcoming systematic errors in genome-scale phylogenies.
Rodríguez-Ezpeleta, Naiara; Brinkmann, Henner; Roure, Béatrice; Lartillot, Nicolas; Lang, B Franz; Philippe, Hervé
2007-06-01
Genome-scale data sets result in an enhanced resolution of the phylogenetic inference by reducing stochastic errors. However, there is also an increase of systematic errors due to model violations, which can lead to erroneous phylogenies. Here, we explore the impact of systematic errors on the resolution of the eukaryotic phylogeny using a data set of 143 nuclear-encoded proteins from 37 species. The initial observation was that, despite the impressive amount of data, some branches had no significant statistical support. To demonstrate that this lack of resolution is due to a mutual annihilation of phylogenetic and nonphylogenetic signals, we created a series of data sets with slightly different taxon sampling. As expected, these data sets yielded strongly supported but mutually exclusive trees, thus confirming the presence of conflicting phylogenetic and nonphylogenetic signals in the original data set. To decide on the correct tree, we applied several methods expected to reduce the impact of some kinds of systematic error. Briefly, we show that (i) removing fast-evolving positions, (ii) recoding amino acids into functional categories, and (iii) using a site-heterogeneous mixture model (CAT) are three effective means of increasing the ratio of phylogenetic to nonphylogenetic signal. Finally, our results allow us to formulate guidelines for detecting and overcoming phylogenetic artefacts in genome-scale phylogenetic analyses.
Local systematic differences in 2MASS positions
NASA Astrophysics Data System (ADS)
Bustos Fierro, I. H.; Calderón, J. H.
2018-01-01
We have found that positions in the 2MASS All-sky Catalog of Point Sources show local systematic differences with characteristic length-scales of ˜ 5 to ˜ 8 arcminutes when compared with several catalogs. We have observed that when 2MASS positions are used in the computation of proper motions, the mentioned systematic differences cause systematic errors in the resulting proper motions. We have developed a method to locally rectify 2MASS with respect to UCAC4 in order to diminish the systematic differences between these catalogs. The 2MASS catalog rectified with the proposed method can be regarded as an extension of UCAC4 for astrometry, with positional accuracy of ~90 mas and negligible systematic errors. We also show that the use of these rectified positions removes the observed systematic pattern in proper motions derived from the original 2MASS positions.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Keeling, V; Jin, H; Hossain, S
2014-06-15
Purpose: To evaluate setup accuracy and quantify individual systematic and random errors for the various hardware and software components of the frameless 6D-BrainLAB ExacTrac system. Methods: 35 patients with cranial lesions, some with multiple isocenters (50 total lesions treated in 1, 3, or 5 fractions), were investigated. All patients were simulated with a rigid head-and-neck mask and the BrainLAB localizer. CT images were transferred to the IPLAN treatment planning system where optimized plans were generated using a stereotactic reference frame based on the localizer. The patients were initially set up with the infrared (IR) positioning ExacTrac system. Stereoscopic X-ray images (XC: X-ray Correction) were registered to their corresponding digitally reconstructed radiographs, based on bony anatomy matching, to calculate 6D translational and rotational (Lateral, Longitudinal, Vertical, Pitch, Roll, Yaw) shifts. XC combines systematic errors of the mask, localizer, image registration, frame, and IR. If shifts were below tolerance (0.7 mm translational and 1 degree rotational), treatment was initiated; otherwise corrections were applied and additional X-rays were acquired to verify patient position (XV: X-ray Verification). Statistical analysis was used to extract systematic and random errors of the different components of the 6D-ExacTrac system and evaluate the cumulative setup accuracy. Results: Mask systematic errors (translational; rotational) were the largest and varied from one patient to another in the range (−15 to 4 mm; −2.5 to 2.5 degrees), obtained from the mean of XC for each patient. Setup uncertainty in IR positioning (0.97, 2.47, 1.62 mm; 0.65, 0.84, 0.96 degrees) was extracted from the standard deviation of XC. Combined systematic errors of the frame and localizer (0.32, −0.42, −1.21 mm; −0.27, 0.34, 0.26 degrees) were extracted from the mean of means of the XC distributions. Final patient setup uncertainty was obtained from the standard deviations of XV (0.57, 0.77, 0.67 mm; 0.39, 0.35, 0.30 degrees). Conclusion: Statistical analysis was used to calculate cumulative and individual systematic errors from the different hardware and software components of the 6D-ExacTrac system. Patients were treated with cumulative errors (<1 mm, <1 degree) with XV image guidance.
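The decomposition used in setup-error studies of this kind is standard: the per-patient mean shift is the individual systematic error, the per-patient standard deviation is the individual random error, the population systematic error Σ is the standard deviation of the patient means, and the population random error σ is the RMS of the patient standard deviations. The sketch below illustrates that decomposition on synthetic shift logs; it is not the authors' code, and their exact estimators may differ.

```python
# Hedged sketch: split setup shifts (one axis) into systematic and random components.
import numpy as np

def setup_error_components(shifts_by_patient):
    """shifts_by_patient: list of 1-D arrays, one array of per-fraction shifts per patient."""
    patient_means = np.array([s.mean() for s in shifts_by_patient])
    patient_sds   = np.array([s.std(ddof=1) for s in shifts_by_patient])
    Sigma = patient_means.std(ddof=1)          # population systematic error
    sigma = np.sqrt(np.mean(patient_sds**2))   # population random error (RMS of SDs)
    return Sigma, sigma

rng = np.random.default_rng(5)
# 35 synthetic patients, each with a personal offset (SD 1 mm) plus daily noise (SD 2 mm)
patients = [rng.normal(rng.normal(0.0, 1.0), 2.0, size=30) for _ in range(35)]
print(setup_error_components(patients))   # roughly (1, 2) mm
```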
NASA Astrophysics Data System (ADS)
Brousmiche, S.; Souris, K.; Orban de Xivry, J.; Lee, J. A.; Macq, B.; Seco, J.
2017-11-01
Proton range random and systematic uncertainties are the major factors undermining the advantages of proton therapy, namely, a sharp dose falloff and a better dose conformality for lower doses in normal tissues. The influence of CT artifacts such as beam hardening or scatter can easily be understood and estimated due to their large-scale effects on the CT image, like cupping and streaks. In comparison, the effects of weakly-correlated stochastic noise are more insidious and less attention is paid to them, partly due to the common belief that they only contribute to proton range uncertainties and not to systematic errors thanks to some averaging effects. A new source of systematic errors on the range and relative stopping powers (RSP) has been highlighted and proved not to be negligible compared to the 3.5% uncertainty reference value used for safety margin design. Hence, we demonstrate that the angular points in the HU-to-RSP calibration curve are an intrinsic source of proton range systematic error for typical levels of zero-mean stochastic CT noise. Systematic errors on RSP of up to 1% have been computed for these levels. We also show that the range uncertainty does not generally vary linearly with the noise standard deviation. We define a noise-dependent effective calibration curve that better describes, for a given material, the RSP value that is actually used. The statistics of the RSP and of the continuous slowing down approximation (CSDA) range have been analytically derived for the general case of a calibration curve obtained by the stoichiometric calibration procedure. These models have been validated against actual CSDA simulations for homogeneous and heterogeneous synthetic objects as well as on actual patient CTs for prostate and head-and-neck treatment planning situations.
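The core effect is easy to demonstrate numerically: at an angular (kink) point of a piecewise-linear HU-to-RSP curve, zero-mean CT noise no longer averages out, shifting the effective RSP. The sketch below uses an invented piecewise-linear curve, not the paper's stoichiometric calibration, purely to show the mechanism.

```python
# Hedged numerical illustration: systematic RSP shift at a kink of a
# piecewise-linear HU-to-RSP calibration curve under zero-mean Gaussian HU noise.
# The calibration nodes are invented for illustration only.
import numpy as np

hu_nodes  = np.array([-1000.0, 0.0, 1000.0, 2000.0])
rsp_nodes = np.array([0.0, 1.0, 1.5, 2.2])          # slope changes (kinks) at HU = 0 and 1000

def effective_rsp(hu_mean, hu_noise_sd, n=200_000, rng=np.random.default_rng(6)):
    hu_samples = rng.normal(hu_mean, hu_noise_sd, n)
    return np.interp(hu_samples, hu_nodes, rsp_nodes).mean()

noiseless = np.interp(0.0, hu_nodes, rsp_nodes)                # 1.0 exactly at the kink
print(noiseless, effective_rsp(0.0, 30.0))                     # mean RSP < 1.0: systematic shift
print(np.interp(500.0, hu_nodes, rsp_nodes), effective_rsp(500.0, 30.0))  # far from kink: ~unchanged
```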
Medication errors in the Middle East countries: a systematic review of the literature.
Alsulami, Zayed; Conroy, Sharon; Choonara, Imti
2013-04-01
Medication errors are a significant global concern and can cause serious medical consequences for patients. Little is known about medication errors in Middle Eastern countries. The objectives of this systematic review were to review studies of the incidence and types of medication errors in Middle Eastern countries and to identify the main contributory factors involved. A systematic review of the literature related to medication errors in Middle Eastern countries was conducted in October 2011 using the following databases: Embase, Medline, Pubmed, the British Nursing Index and the Cumulative Index to Nursing & Allied Health Literature. The search strategy included all ages and languages. Inclusion criteria were that the studies assessed or discussed the incidence of medication errors and contributory factors to medication errors during the medication treatment process in adults or in children. Forty-five studies from 10 of the 15 Middle Eastern countries met the inclusion criteria. Nine (20 %) studies focused on medication errors in paediatric patients. Twenty-one focused on prescribing errors, 11 measured administration errors, 12 were interventional studies and one assessed transcribing errors. Dispensing and documentation errors were inadequately evaluated. Error rates varied from 7.1 % to 90.5 % for prescribing and from 9.4 % to 80 % for administration. The most common types of prescribing errors reported were incorrect dose (with an incidence rate from 0.15 % to 34.8 % of prescriptions), wrong frequency and wrong strength. Computerised physician order entry and clinical pharmacist input were the main interventions evaluated. Poor knowledge of medicines was identified as a contributory factor for errors by both doctors (prescribers) and nurses (when administering drugs). Most studies did not assess the clinical severity of the medication errors. Studies related to medication errors in the Middle Eastern countries were relatively few in number and of poor quality. Educational programmes on drug therapy for doctors and nurses are urgently needed.
Patient disclosure of medical errors in paediatrics: A systematic literature review
Koller, Donna; Rummens, Anneke; Le Pouesard, Morgane; Espin, Sherry; Friedman, Jeremy; Coffey, Maitreya; Kenneally, Noah
2016-01-01
Medical errors are common within paediatrics; however, little research has examined the process of disclosing medical errors in paediatric settings. The present systematic review of current research and policy initiatives examined evidence regarding the disclosure of medical errors involving paediatric patients. Peer-reviewed research from a range of scientific journals from the past 10 years is presented, and an overview of Canadian and international policies regarding disclosure in paediatric settings are provided. The purpose of the present review was to scope the existing literature and policy, and to synthesize findings into an integrated and accessible report. Future research priorities and policy implications are then identified. PMID:27429578
System calibration method for Fourier ptychographic microscopy
NASA Astrophysics Data System (ADS)
Pan, An; Zhang, Yan; Zhao, Tianyu; Wang, Zhaojun; Dan, Dan; Lei, Ming; Yao, Baoli
2017-09-01
Fourier ptychographic microscopy (FPM) is a recently proposed computational imaging technique with both high resolution and a wide field of view. In current FPM imaging platforms, systematic error sources come from aberrations, light-emitting diode (LED) intensity fluctuation, parameter imperfections, and noise, all of which may severely corrupt the reconstruction results with similar artifacts. Therefore, it is unlikely that the dominating error could be distinguished from these degraded reconstructions without any prior knowledge. In addition, in real situations the systematic error is generally a mixture of various error sources that cannot be separated due to their mutual restriction and conversion. To this end, we report a system calibration procedure, termed SC-FPM, to calibrate the mixed systematic errors simultaneously from an overall perspective, based on the simulated annealing algorithm, the LED intensity correction method, the nonlinear regression process, and the adaptive step-size strategy, which involves the evaluation of an error metric at each iteration step, followed by the re-estimation of accurate parameters. The performance achieved both in simulations and experiments demonstrates that the proposed method outperforms other state-of-the-art algorithms. The reported system calibration scheme improves the robustness of FPM, relaxes the experiment conditions, and does not require any prior knowledge, which makes FPM more practical.
Li, T. S.; DePoy, D. L.; Marshall, J. L.; ...
2016-06-01
Here, we report that meeting the science goals for many current and future ground-based optical large-area sky surveys requires that the calibrated broadband photometry is both stable in time and uniform over the sky to 1% precision or better. Past and current surveys have achieved photometric precision of 1%–2% by calibrating the survey's stellar photometry with repeated measurements of a large number of stars observed in multiple epochs. The calibration techniques employed by these surveys only consider the relative frame-by-frame photometric zeropoint offset and the focal plane position-dependent illumination corrections, which are independent of the source color. However, variations in the wavelength dependence of the atmospheric transmission and the instrumental throughput induce source color-dependent systematic errors. These systematic errors must also be considered to achieve the most precise photometric measurements. In this paper, we examine such systematic chromatic errors (SCEs) using photometry from the Dark Energy Survey (DES) as an example. We first define a natural magnitude system for DES and calculate the systematic errors on stellar magnitudes when the atmospheric transmission and instrumental throughput deviate from the natural system. We conclude that the SCEs caused by the change of airmass in each exposure, the change of the precipitable water vapor and aerosol in the atmosphere over time, and the non-uniformity of instrumental throughput over the focal plane can be up to 2% in some bandpasses. We then compare the calculated SCEs with the observed DES data. For the test sample data, we correct these errors using measurements of the atmospheric transmission and instrumental throughput from auxiliary calibration systems. In conclusion, the residual after correction is less than 0.3%. Moreover, we calculate such SCEs for Type Ia supernovae and elliptical galaxies and find that the chromatic errors for non-stellar objects are redshift-dependent and can be larger than those for stars at certain redshifts.
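The origin of a chromatic error can be shown with a toy synthetic-photometry calculation: when the passband shape changes, the induced magnitude offset depends on the source's spectral slope, so a grey (source-independent) zero-point cannot absorb it. The power-law spectra and Gaussian throughput below are invented for illustration; they are not DES bandpasses or SEDs.

```python
# Hedged sketch of a systematic chromatic error: the magnitude shift caused by a
# small passband change differs between a blue and a red source.
import numpy as np

wave = np.linspace(500.0, 700.0, 2001)                 # nm, uniform grid

def synthetic_mag(flux, throughput):
    # photon-counting broadband magnitude on a uniform grid (d-lambda cancels
    # in the ratio); arbitrary zero point
    return -2.5 * np.log10(np.sum(flux * throughput * wave) /
                           np.sum(throughput * wave))

def gaussian_band(center, width):
    return np.exp(-0.5 * ((wave - center) / width) ** 2)

nominal = gaussian_band(600.0, 40.0)
shifted = gaussian_band(603.0, 40.0)                   # mimic a throughput/atmosphere change

blue_sed = (wave / 600.0) ** -2.0
red_sed  = (wave / 600.0) ** +2.0

for name, sed in [("blue", blue_sed), ("red", red_sed)]:
    dm = synthetic_mag(sed, shifted) - synthetic_mag(sed, nominal)
    print(f"{name} source: magnitude shift = {dm * 1000:+.1f} mmag")
# The two shifts differ, i.e. the error is chromatic and cannot be removed by a
# single grey zero-point per exposure.
```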
Systematic error of diode thermometer.
Iskrenovic, Predrag S
2009-08-01
Semiconductor diodes are often used for measuring temperatures. The forward voltage across a diode decreases, approximately linearly, with the increase in temperature. The method applied is usually the simplest one: a constant direct current flows through the diode, and the voltage is measured at the diode terminals. The direct current that flows through the diode, putting it into operating mode, heats up the diode. The increase in temperature of the diode sensor, i.e., the systematic error due to self-heating, depends predominantly on the intensity of the current, and also on other factors. Measurements of the systematic error due to heating by the forward-bias current are presented in this paper. The measurements were made at several diodes over a wide range of bias current intensity.
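A first-order estimate of this self-heating error follows from the dissipated power and the junction-to-ambient thermal resistance, ΔT ≈ V_f · I · R_th. The sketch below uses illustrative values (0.6 V forward voltage, 300 K/W thermal resistance) that are assumptions, not figures from the paper.

```python
# Hedged sketch: first-order self-heating error of a diode thermometer.
# delta_T = P * R_th, with P = V_f * I. The numeric values are illustrative only.
def self_heating_error(forward_voltage_v, bias_current_a, thermal_resistance_k_per_w):
    power_w = forward_voltage_v * bias_current_a
    return power_w * thermal_resistance_k_per_w   # temperature rise in kelvin

for i_ma in (0.01, 0.1, 1.0, 10.0):
    dt = self_heating_error(0.6, i_ma * 1e-3, 300.0)
    print(f"I = {i_ma:5.2f} mA -> self-heating error ~ {dt * 1000:.1f} mK")
```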
Hadronic Contribution to Muon g-2 with Systematic Error Correlations
NASA Astrophysics Data System (ADS)
Brown, D. H.; Worstell, W. A.
1996-05-01
We have performed a new evaluation of the hadronic contribution to a_μ = (g-2)/2 of the muon with explicit correlations of systematic errors among the experimental data on σ(e^+e^- → hadrons). Our result for the lowest order hadronic vacuum polarization contribution is a_μ^hvp = 701.7(7.6)(13.4) × 10^-10, where the total systematic error contributions from below and above √s = 1.4 GeV are (12.5) × 10^-10 and (4.8) × 10^-10, respectively. Therefore new measurements on σ(e^+e^- → hadrons) below 1.4 GeV in Novosibirsk, Russia can significantly reduce the total error on a_μ^hvp. This contrasts with a previous evaluation which indicated that the dominant error is due to the energy region above 1.4 GeV. The latter analysis correlated systematic errors at each energy point separately but not across energy ranges as we have done. Combination with higher order hadronic contributions is required for a new measurement of a_μ at Brookhaven National Laboratory to be sensitive to electroweak and possibly supergravity and muon substructure effects. Our analysis may also be applied to calculations of hadronic contributions to the running of α(s) at √s = M_Z, the hyperfine structure of muonium, and the running of sin^2 θ_W in Møller scattering. The analysis of the new Novosibirsk data will also be given.
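Why correlating systematic errors across energy points matters can be seen from the uncertainty of a weighted sum (a stand-in for the dispersion integral over the measured cross section): the total error differs depending on whether the systematics are treated as independent per point or as fully correlated across the range. All numbers below are invented for illustration and have nothing to do with the actual cross-section data.

```python
# Hedged sketch: error of a weighted sum of cross-section points, once with
# uncorrelated systematics (added in quadrature per point) and once with fully
# correlated systematics (rank-one covariance block). Toy numbers only.
import numpy as np

n = 10
weights = np.full(n, 0.1)            # kernel weights of the integral (toy values)
stat = np.full(n, 2.0)               # statistical errors per point
syst = np.full(n, 3.0)               # systematic errors per point

cov_uncorr = np.diag(stat**2 + syst**2)
cov_corr   = np.diag(stat**2) + np.outer(syst, syst)   # systematics 100% correlated

for label, cov in [("uncorrelated", cov_uncorr), ("fully correlated", cov_corr)]:
    total_err = np.sqrt(weights @ cov @ weights)
    print(f"{label:17s}: total error = {total_err:.2f}")
# Fully correlated systematics give a larger total error than adding the same
# systematics in quadrature point by point.
```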
Small, J R
1993-01-01
This paper is a study into the effects of experimental error on the estimated values of flux control coefficients obtained using specific inhibitors. Two possible techniques for analysing the experimental data are compared: a simple extrapolation method (the so-called graph method) and a non-linear function fitting method. For these techniques, the sources of systematic errors are identified and the effects of systematic and random errors are quantified, using both statistical analysis and numerical computation. It is shown that the graph method is very sensitive to random errors and, under all conditions studied, that the fitting method, even under conditions where the assumptions underlying the fitted function do not hold, outperformed the graph method. Possible ways of designing experiments to minimize the effects of experimental errors are analysed and discussed. PMID:8257434
Comparison of Different Attitude Correction Models for ZY-3 Satellite Imagery
NASA Astrophysics Data System (ADS)
Song, Wenping; Liu, Shijie; Tong, Xiaohua; Niu, Changling; Ye, Zhen; Zhang, Han; Jin, Yanmin
2018-04-01
The ZY-3 satellite, launched in 2012, is China's first civilian high-resolution stereo mapping satellite. This paper analyzed the positioning errors of ZY-3 satellite imagery and applied compensation to improve geo-positioning accuracy using different correction models, including attitude quaternion correction, attitude angle offset correction, and attitude angle linear correction. The experimental results revealed that there are systematic errors in the ZY-3 attitude observations and that the positioning accuracy can be improved after attitude correction with the aid of ground control. There is no significant difference between the results of the attitude quaternion correction method and the attitude angle correction method. However, the attitude angle offset correction model produced more stable improvement than the linear correction model when only limited ground control points are available for a single scene.
Pratte, Michael S.; Park, Young Eun; Rademaker, Rosanne L.; Tong, Frank
2016-01-01
If we view a visual scene that contains many objects, then momentarily close our eyes, some details persist while others seem to fade. Discrete models of visual working memory (VWM) assume that only a few items can be actively maintained in memory, beyond which pure guessing will emerge. Alternatively, continuous resource models assume that all items in a visual scene can be stored with some precision. Distinguishing between these competing models is challenging, however, as resource models that allow for stochastically variable precision (across items and trials) can produce error distributions that resemble random guessing behavior. Here, we evaluated the hypothesis that a major source of variability in VWM performance arises from systematic variation in precision across the stimuli themselves; such stimulus-specific variability can be incorporated into both discrete-capacity and variable-precision resource models. Participants viewed multiple oriented gratings, and then reported the orientation of a cued grating from memory. When modeling the overall distribution of VWM errors, we found that the variable-precision resource model outperformed the discrete model. However, VWM errors revealed a pronounced “oblique effect”, with larger errors for oblique than cardinal orientations. After this source of variability was incorporated into both models, we found that the discrete model provided a better account of VWM errors. Our results demonstrate that variable precision across the stimulus space can lead to an unwarranted advantage for resource models that assume stochastically variable precision. When these deterministic sources are adequately modeled, human working memory performance reveals evidence of a discrete capacity limit. PMID:28004957
Pratte, Michael S; Park, Young Eun; Rademaker, Rosanne L; Tong, Frank
2017-01-01
If we view a visual scene that contains many objects, then momentarily close our eyes, some details persist while others seem to fade. Discrete models of visual working memory (VWM) assume that only a few items can be actively maintained in memory, beyond which pure guessing will emerge. Alternatively, continuous resource models assume that all items in a visual scene can be stored with some precision. Distinguishing between these competing models is challenging, however, as resource models that allow for stochastically variable precision (across items and trials) can produce error distributions that resemble random guessing behavior. Here, we evaluated the hypothesis that a major source of variability in VWM performance arises from systematic variation in precision across the stimuli themselves; such stimulus-specific variability can be incorporated into both discrete-capacity and variable-precision resource models. Participants viewed multiple oriented gratings, and then reported the orientation of a cued grating from memory. When modeling the overall distribution of VWM errors, we found that the variable-precision resource model outperformed the discrete model. However, VWM errors revealed a pronounced "oblique effect," with larger errors for oblique than cardinal orientations. After this source of variability was incorporated into both models, we found that the discrete model provided a better account of VWM errors. Our results demonstrate that variable precision across the stimulus space can lead to an unwarranted advantage for resource models that assume stochastically variable precision. When these deterministic sources are adequately modeled, human working memory performance reveals evidence of a discrete capacity limit. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
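A minimal sketch (assumed formulation, not the authors' code) of the kind of two-component "discrete" mixture model referred to in the two abstracts above: von Mises recall errors plus uniform guessing, fitted by maximum likelihood. The parameter values and simulated data are illustrative only:

```python
# Fit a von Mises + uniform guessing mixture to report errors by maximum likelihood.
import numpy as np
from scipy.optimize import minimize
from scipy.special import i0

def neg_log_like(params, errors):
    kappa, guess = params
    vm = np.exp(kappa * np.cos(errors)) / (2 * np.pi * i0(kappa))   # von Mises pdf
    uniform = 1.0 / (2 * np.pi)
    like = (1 - guess) * vm + guess * uniform
    return -np.sum(np.log(like))

# errors: report - target, mapped to (-pi, pi]; simulated here for illustration.
# (For orientation stimuli, the 180-degree space would first be doubled onto the circle.)
rng = np.random.default_rng(0)
errors = np.concatenate([rng.vonmises(0.0, 10.0, 400),        # "remembered" items
                         rng.uniform(-np.pi, np.pi, 100)])    # "guesses"
fit = minimize(neg_log_like, x0=[5.0, 0.2], args=(errors,),
               bounds=[(0.01, 100.0), (0.0, 1.0)])
print(fit.x)   # estimated (kappa, guess rate)
```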
Fundamental limits on beam stability at the Advanced Photon Source.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Decker, G. A.
1998-06-18
Orbit correction is now routinely performed at the few-micron level in the Advanced Photon Source (APS) storage ring. Three diagnostics are presently in use to measure and control both AC and DC orbit motions: broad-band turn-by-turn rf beam position monitors (BPMs), narrow-band switched heterodyne receivers, and photoemission-style x-ray beam position monitors. Each type of diagnostic has its own set of systematic error effects that place limits on the ultimate pointing stability of x-ray beams supplied to users at the APS. Limiting sources of beam motion at present are magnet power supply noise, girder vibration, and thermal timescale vacuum chamber and girder motion. This paper will investigate the present limitations on orbit correction, and will delve into the upgrades necessary to achieve true sub-micron beam stability.
The role of bias in simulation of the Indian monsoon and its relationship to predictability
NASA Astrophysics Data System (ADS)
Kelly, P.
2016-12-01
Confidence in future projections of how climate change will affect the Indian monsoon is currently limited by, among other things, model biases, that is, the systematic error in simulating the present-day mean climate. An important priority question in seamless prediction involves the role of the mean state. How much of the prediction error in imperfect models stems from a biased mean state (itself a result of many interacting process errors), and how much stems from the flow dependence of processes during an oscillation or variation we are trying to predict? Using simple but effective nudging techniques, we are able to address this question in a clean and incisive framework that teases apart the roles of the mean state vs. transient flow dependence in constraining predictability. The role of bias in model fidelity of simulations of the Indian monsoon is investigated in CAM5, and the relationship to predictability in remote regions in the "free" (non-nudged) domain is explored.
A highly accurate ab initio potential energy surface for methane.
Owens, Alec; Yurchenko, Sergei N; Yachmenev, Andrey; Tennyson, Jonathan; Thiel, Walter
2016-09-14
A new nine-dimensional potential energy surface (PES) for methane has been generated using state-of-the-art ab initio theory. The PES is based on explicitly correlated coupled cluster calculations with extrapolation to the complete basis set limit and incorporates a range of higher-level additive energy corrections. These include core-valence electron correlation, higher-order coupled cluster terms beyond perturbative triples, scalar relativistic effects, and the diagonal Born-Oppenheimer correction. Sub-wavenumber accuracy is achieved for the majority of experimentally known vibrational energy levels, with the four fundamentals of ^12CH4 reproduced with a root-mean-square error of 0.70 cm^-1. The computed ab initio equilibrium C-H bond length is in excellent agreement with previous values despite pure rotational energies displaying minor systematic errors as J (rotational excitation) increases. It is shown that these errors can be significantly reduced by adjusting the equilibrium geometry. The PES represents the most accurate ab initio surface to date and will serve as a good starting point for empirical refinement.
Experiments on Frequency Dependence of the Deflection of Light in Yang-Mills Gravity
NASA Astrophysics Data System (ADS)
Hao, Yun; Zhu, Yiyi; Hsu, Jong-Ping
2018-01-01
In Yang-Mills gravity based on flat space-time, the eikonal equation for a light ray is derived from the modified Maxwell's wave equations in the geometric-optics limit. One obtains a Hamilton-Jacobi type equation, G_L^μν ∂_μΨ ∂_νΨ = 0, with an effective Riemannian metric tensor G_L^μν. According to Yang-Mills gravity, light rays (and macroscopic objects) move as if they were in an effective curved space-time with a metric tensor. The deflection angle of a light ray by the sun is about 1.53″ for experiments with optical frequencies ≈ 10^14 Hz. It is roughly 12% smaller than the usual value 1.75″. However, the experimental data in the past 100 years for the deflection of light by the sun at optical frequencies have uncertainties of (10-20)% due to large systematic errors. If one does not take the geometric-optics limit, one has the equation G_L^μν[∂_μΨ ∂_νΨ cosΨ + (∂_μ∂_νΨ) sinΨ] = 0, which suggests that the deflection angle could be frequency-dependent, according to Yang-Mills gravity. Nowadays, one has very accurate data at radio frequencies ≈ 10^9 Hz with uncertainties less than 0.1%. Thus, one can test this suggestion by using frequencies ≈ 10^12 Hz, which could have a small uncertainty of 0.1% due to the absence of systematic errors in very long baseline interferometry.
Development of a Hard X-ray Beam Position Monitor for Insertion Device Beams at the APS
NASA Astrophysics Data System (ADS)
Decker, Glenn; Rosenbaum, Gerd; Singh, Om
2006-11-01
Long-term pointing stability requirements at the Advanced Photon Source (APS) are very stringent, at the level of 500 nanoradians peak-to-peak or better over a one-week time frame. Conventional rf beam position monitors (BPMs) close to the insertion device source points are incapable of assuring this level of stability, owing to mechanical, thermal, and electronic stability limitations. Insertion device gap-dependent systematic errors associated with the present ultraviolet photon beam position monitors similarly limit their ability to control long-term pointing stability. We report on the development of a new BPM design sensitive only to hard x-rays. Early experimental results will be presented.
NASA Technical Reports Server (NTRS)
da Silva, Arlindo; Redder, Christopher
2010-01-01
MERRA is a NASA reanalysis for the satellite era using a major new version of the Goddard Earth Observing System Data Assimilation System Version 5 (GEOS-5). The project focuses on historical analyses of the hydrological cycle on a broad range of weather and climate time scales and places the NASA EOS suite of observations in a climate context. The characterization of uncertainty in reanalysis fields is a commonly requested feature by users of such data. While intercomparison with reference data sets is common practice for ascertaining the realism of the datasets, such studies typically are restricted to long-term climatological statistics and seldom provide state-dependent measures of the uncertainties involved. In principle, variational data assimilation algorithms are able to produce error estimates for the analysis variables (typically surface pressure, winds, temperature, moisture and ozone) consistent with the assumed background and observation error statistics. However, these "perceived error estimates" are expensive to obtain and are limited by the somewhat simplistic errors assumed in the algorithm. The observation-minus-forecast residuals (innovations), a by-product of any assimilation system, constitute a powerful tool for estimating the systematic and random errors in the analysis fields. Unfortunately, such data are usually not readily available with reanalysis products, often requiring the tedious decoding of large datasets and not-so-user-friendly file formats. With MERRA we have introduced a gridded version of the observations/innovations used in the assimilation process, using the same grid and data formats as the regular datasets. Such a dataset empowers the user to conveniently perform observing-system-related analyses and error estimates. The scope of this dataset will be briefly described. We will present a systematic analysis of MERRA innovation time series for the conventional observing system, including maximum-likelihood estimates of background and observation errors, as well as global bias estimates. Starting with the joint PDF of innovations and analysis increments at observation locations, we propose a technique for diagnosing bias among the observing systems and document how these contextual biases have evolved during the satellite era covered by MERRA.
NASA Astrophysics Data System (ADS)
da Silva, A.; Redder, C. R.
2010-12-01
MERRA is a NASA reanalysis for the satellite era using a major new version of the Goddard Earth Observing System Data Assimilation System Version 5 (GEOS-5). The project focuses on historical analyses of the hydrological cycle on a broad range of weather and climate time scales and places the NASA EOS suite of observations in a climate context. The characterization of uncertainty in reanalysis fields is a commonly requested feature by users of such data. While intercomparison with reference data sets is common practice for ascertaining the realism of the datasets, such studies typically are restricted to long-term climatological statistics and seldom provide state-dependent measures of the uncertainties involved. In principle, variational data assimilation algorithms are able to produce error estimates for the analysis variables (typically surface pressure, winds, temperature, moisture and ozone) consistent with the assumed background and observation error statistics. However, these "perceived error estimates" are expensive to obtain and are limited by the somewhat simplistic errors assumed in the algorithm. The observation-minus-forecast residuals (innovations), a by-product of any assimilation system, constitute a powerful tool for estimating the systematic and random errors in the analysis fields. Unfortunately, such data are usually not readily available with reanalysis products, often requiring the tedious decoding of large datasets and not-so-user-friendly file formats. With MERRA we have introduced a gridded version of the observations/innovations used in the assimilation process, using the same grid and data formats as the regular datasets. Such a dataset empowers the user to conveniently perform observing-system-related analyses and error estimates. The scope of this dataset will be briefly described. We will present a systematic analysis of MERRA innovation time series for the conventional observing system, including maximum-likelihood estimates of background and observation errors, as well as global bias estimates. Starting with the joint PDF of innovations and analysis increments at observation locations, we propose a technique for diagnosing bias among the observing systems and document how these contextual biases have evolved during the satellite era covered by MERRA.
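A minimal sketch (not the MERRA/GEOS-5 code) of the kind of statistics one can compute from gridded residuals as described in the two records above: the mean innovation gives a bias estimate, and Desroziers-style products give rough observation and background error variances under the assumption of a near-optimal analysis. The residuals below are synthetic:

```python
# d_of = obs - forecast (innovation), d_oa = obs - analysis, at observation locations.
import numpy as np

def innovation_diagnostics(d_of, d_oa):
    bias = d_of.mean()                            # systematic (bias) component
    d_of_c = d_of - bias
    obs_var = np.mean(d_oa * d_of_c)              # ~ observation error variance
    bkg_var = np.mean((d_of_c - d_oa) * d_of_c)   # ~ background error variance
    return bias, obs_var, bkg_var

# Illustrative synthetic residuals (relations hold only approximately for a non-optimal analysis).
rng = np.random.default_rng(1)
err_b = rng.normal(0.0, 1.0, 10000)               # background errors (sd 1.0)
err_o = rng.normal(0.3, 0.5, 10000)               # obs errors with a 0.3 bias
d_of = err_o - err_b
d_oa = err_o - 0.2 * err_b                        # toy analysis drawn closer to the obs
print(innovation_diagnostics(d_of, d_oa))
```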
Semi-supervised anomaly detection - towards model-independent searches of new physics
NASA Astrophysics Data System (ADS)
Kuusela, Mikael; Vatanen, Tommi; Malmi, Eric; Raiko, Tapani; Aaltonen, Timo; Nagai, Yoshikazu
2012-06-01
Most classification algorithms used in high energy physics fall under the category of supervised machine learning. Such methods require a training set containing both signal and background events and are prone to classification errors should this training data be systematically inaccurate, for example due to the assumed MC model. To complement such model-dependent searches, we propose an algorithm based on semi-supervised anomaly detection techniques, which does not require a MC training sample for the signal data. We first model the background using a multivariate Gaussian mixture model. We then search for deviations from this model by fitting to the observations a mixture of the background model and a number of additional Gaussians. This allows us to perform pattern recognition of any anomalous excess over the background. We show by comparison to neural network classifiers that such an approach is considerably more robust against misspecification of the signal MC than supervised classification. In cases where there is an unexpected signal, a neural network might fail to identify it correctly, while anomaly detection does not suffer from such a limitation. On the other hand, when there are no systematic errors in the training data, both methods perform comparably.
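A simplified sketch of the general approach. The paper fits a combined mixture of the frozen background model plus extra Gaussians, which would require a custom EM step; here we only fit the background mixture and flag poorly explained events, using assumed toy data:

```python
# Fit a Gaussian mixture to a background sample, then flag observed events with low
# likelihood under that model as anomaly candidates.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(2)
background = rng.normal(0.0, 1.0, size=(5000, 2))                 # background-only proxy sample
observed = np.vstack([rng.normal(0.0, 1.0, size=(4900, 2)),
                      rng.normal(3.0, 0.3, size=(100, 2))])       # data with a small excess

bkg_model = GaussianMixture(n_components=3, random_state=0).fit(background)

log_p = bkg_model.score_samples(observed)                         # log-likelihood per event
candidates = observed[log_p < np.quantile(log_p, 0.02)]           # least background-like 2%
print(candidates.shape)
```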
Reconstructing the calibrated strain signal in the Advanced LIGO detectors
NASA Astrophysics Data System (ADS)
Viets, A. D.; Wade, M.; Urban, A. L.; Kandhasamy, S.; Betzwieser, J.; Brown, Duncan A.; Burguet-Castell, J.; Cahillane, C.; Goetz, E.; Izumi, K.; Karki, S.; Kissel, J. S.; Mendell, G.; Savage, R. L.; Siemens, X.; Tuyenbayev, D.; Weinstein, A. J.
2018-05-01
Advanced LIGO’s raw detector output needs to be calibrated to compute the dimensionless strain h(t). Calibrated strain data is produced in the time domain using both a low-latency, online procedure and a high-latency, offline procedure. The low-latency h(t) data stream is produced in two stages, the first of which is performed on the same computers that operate the detector’s feedback control system. This stage, referred to as the front-end calibration, uses infinite impulse response (IIR) filtering and performs all operations at a 16 384 Hz digital sampling rate. Due to several limitations, this procedure currently introduces certain systematic errors in the calibrated strain data, motivating the second stage of the low-latency procedure, known as the low-latency gstlal calibration pipeline. The gstlal calibration pipeline uses finite impulse response (FIR) filtering to apply corrections to the output of the front-end calibration. It applies time-dependent correction factors to the sensing and actuation components of the calibrated strain to reduce systematic errors. The gstlal calibration pipeline is also used in high latency to recalibrate the data, which is necessary mainly due to dropouts in the online calibrated data and to identified improvements in the calibration models or filters.
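A schematic sketch of the general idea, not the gstlal pipeline itself: apply an FIR correction filter and a slowly varying correction factor to a front-end calibrated strain stream. The filter design, sample rate usage, and "kappa" factor here are purely illustrative assumptions:

```python
import numpy as np
from scipy.signal import firwin, lfilter

fs = 16384                                        # front-end sample rate (Hz)
t = np.arange(fs * 4) / fs
h_frontend = np.sin(2 * np.pi * 100 * t)          # stand-in for the front-end h(t)

fir = firwin(numtaps=129, cutoff=5000, fs=fs)     # illustrative correction filter
kappa = 1.0 + 0.02 * np.sin(2 * np.pi * t / 60)   # toy time-dependent correction factor

h_corrected = kappa * lfilter(fir, 1.0, h_frontend)
print(h_corrected[:5])
```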
THE IDENTIFICATION OF THE X-RAY COUNTERPART TO PSR J2021+4026
DOE Office of Scientific and Technical Information (OSTI.GOV)
Weisskopf, Martin C.; Elsner, Ronald F.; O'Dell, Stephen L.
2011-12-10
We report the probable identification of the X-ray counterpart to the γ-ray pulsar PSR J2021+4026 using imaging with the Chandra X-ray Observatory Advanced CCD Imaging Spectrometer and timing analysis with the Fermi satellite. Given the statistical and systematic errors, the positions determined by both satellites are coincident. The X-ray source position is R.A. 20h21m30.733s, decl. +40°26'46.04" (J2000) with an estimated uncertainty of 1.3" combined statistical and systematic error. Moreover, both the X-ray to γ-ray and the X-ray to optical flux ratios are sensible assuming a neutron star origin for the X-ray flux. The X-ray source has no cataloged infrared-to-visible counterpart and, through new observations, we set upper limits to its optical emission of i' > 23.0 mag and r' > 25.2 mag. The source exhibits an X-ray spectrum with most likely both a power law and a thermal component. We also report on the X-ray and visible light properties of the 43 other sources detected in our Chandra observation.
The Identification Of The X-Ray Counterpart To PSR J2021+4026
Weisskopf, Martin C.; Romani, Roger W.; Razzano, Massimiliano; ...
2011-11-23
We report the probable identification of the X-ray counterpart to the γ-ray pulsar PSR J2021+4026 using imaging with the Chandra X-ray Observatory ACIS and timing analysis with the Fermi satellite. Given the statistical and systematic errors, the positions determined by both satellites are coincident. The X-ray source position is R.A. 20h21m30.733s, Decl. +40°26'46.04" (J2000) with an estimated uncertainty of 1.3" combined statistical and systematic error. Moreover, both the X-ray to γ-ray and the X-ray to optical flux ratios are sensible assuming a neutron star origin for the X-ray flux. The X-ray source has no cataloged infrared-to-visible counterpart and, through new observations, we set upper limits to its optical emission of i' > 23.0 mag and r' > 25.2 mag. The source exhibits an X-ray spectrum with most likely both a power law and a thermal component. We also report on the X-ray and visible light properties of the 43 other sources detected in our Chandra observation.
Exploring Measurement Error with Cookies: A Real and Virtual Approach via Interactive Excel
ERIC Educational Resources Information Center
Sinex, Scott A; Gage, Barbara A.; Beck, Peggy J.
2007-01-01
A simple, guided-inquiry investigation using stacked sandwich cookies is employed to develop a simple linear mathematical model and to explore measurement error by incorporating errors as part of the investigation. Both random and systematic errors are presented. The model and errors are then investigated further by engaging with an interactive…
Uncertainty Analysis of Seebeck Coefficient and Electrical Resistivity Characterization
NASA Technical Reports Server (NTRS)
Mackey, Jon; Sehirlioglu, Alp; Dynys, Fred
2014-01-01
In order to provide a complete description of a material's thermoelectric power factor, an uncertainty interval is required in addition to the measured nominal value. The uncertainty may contain sources of measurement error including systematic bias error and precision error of a statistical nature. The work focuses specifically on the popular ZEM-3 (Ulvac Technologies) measurement system, but the methods apply to any measurement system. The analysis accounts for sources of systematic error including sample preparation tolerance, measurement probe placement, thermocouple cold-finger effect, and measurement parameters, in addition to uncertainty of a statistical nature. Complete uncertainty analysis of a measurement system allows for more reliable comparison of measurement data between laboratories.
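A minimal sketch of how bias and precision components might be combined in quadrature and propagated to the power factor PF = S²/ρ; the numerical values and the simple quadrature model are assumptions for illustration, not the paper's full error budget:

```python
import math

S, dS_sys, dS_stat = 180e-6, 4e-6, 2e-6        # Seebeck coefficient (V/K), illustrative
rho, drho_sys, drho_stat = 1.2e-5, 3e-7, 1e-7  # resistivity (ohm*m), illustrative

dS = math.hypot(dS_sys, dS_stat)               # combine bias and precision in quadrature
drho = math.hypot(drho_sys, drho_stat)

pf = S**2 / rho                                # power factor
dpf = pf * math.sqrt((2 * dS / S)**2 + (drho / rho)**2)
print(pf, dpf)
```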
Systematic Biases in Parameter Estimation of Binary Black-Hole Mergers
NASA Technical Reports Server (NTRS)
Littenberg, Tyson B.; Baker, John G.; Buonanno, Alessandra; Kelly, Bernard J.
2012-01-01
Parameter estimation of binary-black-hole merger events in gravitational-wave data relies on matched filtering techniques, which, in turn, depend on accurate model waveforms. Here we characterize the systematic biases introduced in measuring astrophysical parameters of binary black holes by applying the currently most accurate effective-one-body templates to simulated data containing non-spinning numerical-relativity waveforms. For advanced ground-based detectors, we find that the systematic biases are well within the statistical error for realistic signal-to-noise ratios (SNR). These biases grow to be comparable to the statistical errors at high signal-to-noise ratios for ground-based instruments (SNR approximately 50) but never dominate the error budget. At the much larger signal-to-noise ratios expected for space-based detectors, these biases will become large compared to the statistical errors but are small enough (at most a few percent in the black-hole masses) that we expect they should not affect broad astrophysical conclusions that may be drawn from the data.
Screen Time and Sleep among School-Aged Children and Adolescents: A Systematic Literature Review
Hale, Lauren; Guan, Stanford
2015-01-01
Summary We systematically examined and updated the scientific literature on the association between screen time (e.g., television, computers, video games, and mobile devices) and sleep outcomes among school-aged children and adolescents. We reviewed 67 studies published from 1999 to early 2014. We found that screen time is adversely associated with sleep outcomes (primarily shortened duration and delayed timing) in 90% of studies. Some of the results varied by type of screen exposure, age of participant, gender, and day of the week. While the evidence regarding the association between screen time and sleep is consistent, we discuss limitations of the current studies: 1.) causal association not confirmed; 2.) measurement error (of both screen time exposure and sleep measures); 3.) limited data on simultaneous use of multiple screens, characteristics and content of screens used. Youth should be advised to limit or reduce screen time exposure, especially before or during bedtime hours to minimize any harmful effects of screen time on sleep and well-being. Future research should better account for the methodological limitations of the extant studies, and seek to better understand the magnitude and mechanisms of the association. These steps will help the development and implementation of policies or interventions related to screen time among youth. PMID:25193149
Thoomes-de Graaf, M; Scholten-Peeters, G G M; Schellingerhout, J M; Bourne, A M; Buchbinder, R; Koehorst, M; Terwee, C B; Verhagen, A P
2016-09-01
To critically appraise and compare the measurement properties of self-administered patient-reported outcome measures (PROMs) focussing on the shoulder, assessing "activity limitations." Systematic review. The study population had to consist of patients with shoulder pain. We excluded postoperative patients or patients with generic diseases. The methodological quality of the selected studies and the results of the measurement properties were critically appraised and rated using the COSMIN checklist. Out of a total of 3427 unique hits, 31 articles, evaluating 7 different questionnaires, were included. The SPADI is the most frequently evaluated PROM and its measurement properties seem adequate apart from a lack of information regarding its measurement error and content validity. For English, Norwegian and Turkish users, we recommend the SPADI. Dutch users could use either the SDQ or the SST. In German, we recommend the DASH. In Tamil, Slovene, Spanish and Danish, the evaluated PROMs were not yet of acceptable validity. None of these PROMs showed strong positive evidence for all measurement properties. We propose to develop a new shoulder PROM focused on activity limitations, taking new knowledge and techniques into account.
System calibration method for Fourier ptychographic microscopy.
Pan, An; Zhang, Yan; Zhao, Tianyu; Wang, Zhaojun; Dan, Dan; Lei, Ming; Yao, Baoli
2017-09-01
Fourier ptychographic microscopy (FPM) is a recently proposed computational imaging technique with both high resolution and wide field of view. In current FPM imaging platforms, systematic error sources come from aberrations, light-emitting diode (LED) intensity fluctuation, parameter imperfections, and noise, all of which may severely corrupt the reconstruction results with similar artifacts. It is therefore unlikely that the dominating error could be distinguished from these degraded reconstructions without any prior knowledge. In addition, systematic error is generally a mixture of various error sources in real situations, and it cannot be separated due to their mutual restriction and conversion. To this end, we report a system calibration procedure, termed SC-FPM, to calibrate the mixed systematic errors simultaneously from an overall perspective, based on the simulated annealing algorithm, the LED intensity correction method, the nonlinear regression process, and the adaptive step-size strategy, which involves the evaluation of an error metric at each iteration step, followed by the re-estimation of accurate parameters. The performance achieved both in simulations and experiments demonstrates that the proposed method outperforms other state-of-the-art algorithms. The reported system calibration scheme improves the robustness of FPM, relaxes the experimental conditions, and does not require any prior knowledge, which makes FPM more pragmatic. (2017) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE).
NASA Astrophysics Data System (ADS)
Semenov, Z. V.; Labusov, V. A.
2017-11-01
Results of studying the errors of indirect monitoring by means of computer simulations are reported. The monitoring method is based on measuring spectra of reflection from additional monitoring substrates in a wide spectral range. Special software (Deposition Control Simulator) is developed, which allows one to estimate the influence of the monitoring system parameters (noise of the photodetector array, operating spectral range of the spectrometer and errors of its calibration in terms of wavelengths, drift of the radiation source intensity, and errors in the refractive index of deposited materials) on the random and systematic errors of deposited layer thickness measurements. The direct and inverse problems of multilayer coatings are solved using the OptiReOpt library. Curves of the random and systematic errors of measurements of the deposited layer thickness as functions of the layer thickness are presented for various values of the system parameters. Recommendations are given on using the indirect monitoring method for the purpose of reducing the layer thickness measurement error.
Human-simulation-based learning to prevent medication error: A systematic review.
Sarfati, Laura; Ranchon, Florence; Vantard, Nicolas; Schwiertz, Vérane; Larbre, Virginie; Parat, Stéphanie; Faudel, Amélie; Rioufol, Catherine
2018-01-31
In the past 2 decades, there has been an increasing interest in simulation-based learning programs to prevent medication error (ME). To improve knowledge, skills, and attitudes in prescribers, nurses, and pharmaceutical staff, these methods enable training without directly involving patients. However, best practices for simulation for healthcare providers are as yet undefined. By analysing the current state of experience in the field, the present review aims to assess whether human simulation in healthcare helps to reduce ME. A systematic review was conducted on Medline from 2000 to June 2015, associating the terms "Patient Simulation," "Medication Errors," and "Simulation Healthcare." Reports of technology-based simulation were excluded, to focus exclusively on human simulation in nontechnical skills learning. Twenty-one studies assessing simulation-based learning programs were selected, focusing on pharmacy, medicine or nursing students, or concerning programs aimed at reducing administration or preparation errors, managing crises, or learning communication skills for healthcare professionals. The studies varied in design, methodology, and assessment criteria. Few demonstrated that simulation was more effective than didactic learning in reducing ME. This review highlights a lack of long-term assessment and real-life extrapolation, with limited scenarios and participant samples. These various experiences, however, help in identifying the key elements required for an effective human simulation-based learning program for ME prevention: ie, scenario design, debriefing, and perception assessment. The performance of these programs depends on their ability to reflect reality and on professional guidance. Properly regulated simulation is a good way to train staff in events that happen only exceptionally, as well as in standard daily activities. By integrating human factors, simulation seems to be effective in preventing iatrogenic risk related to ME, if the program is well designed. © 2018 John Wiley & Sons, Ltd.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chi, Y.; Liang, J.; Yan, D.
2006-02-15
Model-based deformable organ registration techniques using the finite element method (FEM) have recently been investigated intensively and applied to image-guided adaptive radiotherapy (IGART). These techniques assume that human organs are linearly elastic material, and their mechanical properties are predetermined. Unfortunately, the accurate measurement of the tissue material properties is challenging and the properties usually vary between patients. A common issue is therefore the achievable accuracy of the calculation due to the limited access to tissue elastic material constants. In this study, we performed a systematic investigation on this subject based on tissue biomechanics and computer simulations to establish the relationships between achievable registration accuracy and tissue mechanical and organ geometrical properties. Primarily we focused on image registration for three organs: rectal wall, bladder wall, and prostate. The tissue anisotropy due to orientation preference in tissue fiber alignment is captured by using an orthotropic or a transversely isotropic elastic model. First we developed biomechanical models for the rectal wall, bladder wall, and prostate using simplified geometries and investigated the effect of varying material parameters on the resulting organ deformation. Then computer models based on patient image data were constructed, and image registrations were performed. The sensitivity of registration errors was studied by perturbing the tissue material properties from their mean values while fixing the boundary conditions. The simulation results demonstrated that registration error for a subvolume increases as its distance from the boundary increases. Also, a variable associated with material stability was found to be a dominant factor in registration accuracy in the context of material uncertainty. For hollow thin organs such as rectal walls and bladder walls, the registration errors are limited. Given 30% in material uncertainty, the registration error is limited to within 1.3 mm. For a solid organ such as the prostate, the registration errors are much larger. Given 30% in material uncertainty, the registration error can reach 4.5 mm. However, the registration error distribution for prostates shows that most of the subvolumes have a much smaller registration error. A deformable organ registration technique that uses FEM is a good candidate in IGART if the mean material parameters are available.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Slepian, Zachary; Slosar, Anze; Eisenstein, Daniel J.
We search for a galaxy clustering bias due to a modulation of galaxy number with the baryon-dark matter relative velocity resulting from recombination-era physics. We find no detected signal and place the constraint bv <0.01 on the relative velocity bias for the CMASS galaxies. This bias is an important potential systematic of Baryon Acoustic Oscillation (BAO) method measurements of the cosmic distance scale using the 2-point clustering. Our limit on the relative velocity bias indicates a systematic shift of no more than 0.3% rms in the distance scale inferred from the BAO feature in the BOSS 2-point clustering, well below the 1% statistical error of this measurement. In conclusion, this constraint is the most stringent currently available and has important implications for the ability of upcoming large-scale structure surveys such as DESI to self-protect against the relative velocity as a possible systematic.
Magnetic effect in the test of the weak equivalence principle using a rotating torsion pendulum
NASA Astrophysics Data System (ADS)
Zhu, Lin; Liu, Qi; Zhao, Hui-Hui; Yang, Shan-Qing; Luo, Pengshun; Shao, Cheng-Gang; Luo, Jun
2018-04-01
The high precision test of the weak equivalence principle (WEP) using a rotating torsion pendulum requires thorough analysis of systematic effects. Here we investigate one of the main systematic effects, the coupling of the ambient magnetic field to the pendulum. It is shown that the dominant term, the interaction between the average magnetic field and the magnetic dipole of the pendulum, is decreased by a factor of 1.1 × 10^4 with multi-layer magnetic shield shells. The shield shells reduce the magnetic field to 1.9 × 10^-9 T in the transverse direction so that the dipole-interaction limited WEP test is expected at η ≲ 10^-14 for a pendulum dipole less than 10^-9 A m^2. The high-order effect, the coupling of the magnetic field gradient to the magnetic quadrupole of the pendulum, would also contribute to the systematic errors for a test precision down to η ~ 10^-14.
Magnetic effect in the test of the weak equivalence principle using a rotating torsion pendulum.
Zhu, Lin; Liu, Qi; Zhao, Hui-Hui; Yang, Shan-Qing; Luo, Pengshun; Shao, Cheng-Gang; Luo, Jun
2018-04-01
The high precision test of the weak equivalence principle (WEP) using a rotating torsion pendulum requires thorough analysis of systematic effects. Here we investigate one of the main systematic effects, the coupling of the ambient magnetic field to the pendulum. It is shown that the dominant term, the interaction between the average magnetic field and the magnetic dipole of the pendulum, is decreased by a factor of 1.1 × 10^4 with multi-layer magnetic shield shells. The shield shells reduce the magnetic field to 1.9 × 10^-9 T in the transverse direction so that the dipole-interaction limited WEP test is expected at η ≲ 10^-14 for a pendulum dipole less than 10^-9 A m^2. The high-order effect, the coupling of the magnetic field gradient to the magnetic quadrupole of the pendulum, would also contribute to the systematic errors for a test precision down to η ~ 10^-14.
Slepian, Zachary; Slosar, Anze; Eisenstein, Daniel J.; ...
2017-10-24
We search for a galaxy clustering bias due to a modulation of galaxy number with the baryon-dark matter relative velocity resulting from recombination-era physics. We find no detected signal and place the constraint bv <0.01 on the relative velocity bias for the CMASS galaxies. This bias is an important potential systematic of Baryon Acoustic Oscillation (BAO) method measurements of the cosmic distance scale using the 2-point clustering. Our limit on the relative velocity bias indicates a systematic shift of no more than 0.3% rms in the distance scale inferred from the BAO feature in the BOSS 2-point clustering, well below the 1% statistical error of this measurement. In conclusion, this constraint is the most stringent currently available and has important implications for the ability of upcoming large-scale structure surveys such as DESI to self-protect against the relative velocity as a possible systematic.
NASA Astrophysics Data System (ADS)
Slepian, Zachary; Eisenstein, Daniel J.; Blazek, Jonathan A.; Brownstein, Joel R.; Chuang, Chia-Hsun; Gil-Marín, Héctor; Ho, Shirley; Kitaura, Francisco-Shu; McEwen, Joseph E.; Percival, Will J.; Ross, Ashley J.; Rossi, Graziano; Seo, Hee-Jong; Slosar, Anže; Vargas-Magaña, Mariana
2018-02-01
We search for a galaxy clustering bias due to a modulation of galaxy number with the baryon-dark matter relative velocity resulting from recombination-era physics. We find no detected signal and place the constraint bv < 0.01 on the relative velocity bias for the CMASS galaxies. This bias is an important potential systematic of baryon acoustic oscillation (BAO) method measurements of the cosmic distance scale using the two-point clustering. Our limit on the relative velocity bias indicates a systematic shift of no more than 0.3 per cent rms in the distance scale inferred from the BAO feature in the BOSS two-point clustering, well below the 1 per cent statistical error of this measurement. This constraint is the most stringent currently available and has important implications for the ability of upcoming large-scale structure surveys such as the Dark Energy Spectroscopic Instrument (DESI) to self-protect against the relative velocity as a possible systematic.
Vrijheid, Martine; Deltour, Isabelle; Krewski, Daniel; Sanchez, Marie; Cardis, Elisabeth
2006-07-01
This paper examines the effects of systematic and random errors in recall and of selection bias in case-control studies of mobile phone use and cancer. These sensitivity analyses are based on Monte-Carlo computer simulations and were carried out within the INTERPHONE Study, an international collaborative case-control study in 13 countries. Recall error scenarios simulated plausible values of random and systematic, non-differential and differential recall errors in amount of mobile phone use reported by study subjects. Plausible values for the recall error were obtained from validation studies. Selection bias scenarios assumed varying selection probabilities for cases and controls, mobile phone users, and non-users. Where possible these selection probabilities were based on existing information from non-respondents in INTERPHONE. Simulations used exposure distributions based on existing INTERPHONE data and assumed varying levels of the true risk of brain cancer related to mobile phone use. Results suggest that random recall errors of plausible levels can lead to a large underestimation in the risk of brain cancer associated with mobile phone use. Random errors were found to have larger impact than plausible systematic errors. Differential errors in recall had very little additional impact in the presence of large random errors. Selection bias resulting from underselection of unexposed controls led to J-shaped exposure-response patterns, with risk apparently decreasing at low to moderate exposure levels. The present results, in conjunction with those of the validation studies conducted within the INTERPHONE study, will play an important role in the interpretation of existing and future case-control studies of mobile phone use and cancer risk, including the INTERPHONE study.
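A toy Monte-Carlo sketch in the spirit of the sensitivity analyses described above (not the INTERPHONE simulation code): non-differential multiplicative recall error on reported phone use attenuates the odds ratio estimated from a simple logistic model. All parameters and distributions are invented for illustration:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
n = 20000
true_use = rng.lognormal(mean=0.0, sigma=1.0, size=n)      # true cumulative phone use
beta = 0.2                                                  # true log-odds per unit of use
p = 1 / (1 + np.exp(-(-3.0 + beta * true_use)))
case = rng.binomial(1, p)                                   # case/control status

reported = true_use * rng.lognormal(0.0, 0.8, size=n)       # random (non-differential) recall error

for exposure in (true_use, reported):
    fit = sm.Logit(case, sm.add_constant(exposure)).fit(disp=0)
    print(np.exp(fit.params[1]))   # odds ratio per unit; typically attenuated for 'reported'
```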
Analyzing False Positives of Four Questions in the Force Concept Inventory
ERIC Educational Resources Information Center
Yasuda, Jun-ichro; Mae, Naohiro; Hull, Michael M.; Taniguchi, Masa-aki
2018-01-01
In this study, we analyze the systematic error from false positives of the Force Concept Inventory (FCI). We compare the systematic errors of question 6 (Q.6), Q.7, and Q.16, for which clearly erroneous reasoning has been found, with Q.5, for which clearly erroneous reasoning has not been found. We determine whether or not a correct response to a…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Miller, N. J.; Marriage, T. A.; Appel, J. W.
2016-02-20
Variable-delay Polarization Modulators (VPMs) are currently being implemented in experiments designed to measure the polarization of the cosmic microwave background on large angular scales because of their capability for providing rapid, front-end polarization modulation and control over systematic errors. Despite the advantages provided by the VPM, it is important to identify and mitigate any time-varying effects that leak into the synchronously modulated component of the signal. In this paper, the effect of emission from a 300 K VPM on the system performance is considered and addressed. Though instrument design can greatly reduce the influence of modulated VPM emission, some residual modulated signal is expected. VPM emission is treated in the presence of rotational misalignments and temperature variation. Simulations of time-ordered data are used to evaluate the effect of these residual errors on the power spectrum. The analysis and modeling in this paper guides experimentalists on the critical aspects of observations using VPMs as front-end modulators. By implementing the characterizations and controls as described, front-end VPM modulation can be very powerful for mitigating 1/f noise in large angular scale polarimetric surveys. None of the systematic errors studied fundamentally limit the detection and characterization of B-modes on large scales for a tensor-to-scalar ratio of r = 0.01. Indeed, r < 0.01 is achievable with commensurately improved characterizations and controls.
Matus, Bethany A; Bridges, Kayla M; Logomarsino, John V
2018-06-21
Individualized feeding care plans and safe handling of milk (human or formula) are critical in promoting growth, immune function, and neurodevelopment in the preterm infant. Feeding errors and disruptions or limitations to feeding processes in the neonatal intensive care unit (NICU) are associated with negative safety events. Feeding errors include contamination of milk and delivery of incorrect or expired milk and may result in adverse gastrointestinal illnesses. The purpose of this review was to evaluate the effect(s) of centralized milk preparation, use of trained technicians, use of bar code-scanning software, and collaboration between registered dietitians and registered nurses on feeding safety in the NICU. A systematic review of the literature was completed, and 12 articles were selected as relevant to search criteria. Study quality was evaluated using the Downs and Black scoring tool. An evaluation of human studies indicated that the use of centralized milk preparation, trained technicians, bar code-scanning software, and possible registered dietitian involvement decreased feeding-associated error in the NICU. A state-of-the-art NICU includes a centralized milk preparation area staffed by trained technicians, care supported by bar code-scanning software, and utilization of a registered dietitian to improve patient safety. These resources will provide nurses more time to focus on nursing-specific neonatal care. Further research is needed to evaluate the impact of factors related to feeding safety in the NICU as well as potential financial benefits of these quality improvement opportunities.
Economic impact of medication error: a systematic review.
Walsh, Elaine K; Hansen, Christina Raae; Sahm, Laura J; Kearney, Patricia M; Doherty, Edel; Bradley, Colin P
2017-05-01
Medication error is a significant source of morbidity and mortality among patients. Clinical and cost-effectiveness evidence are required for the implementation of quality of care interventions. Reduction of error-related cost is a key potential benefit of interventions addressing medication error. The aim of this review was to describe and quantify the economic burden associated with medication error. PubMed, Cochrane, Embase, CINAHL, EconLit, ABI/INFORM, Business Source Complete were searched. Studies published 2004-2016 assessing the economic impact of medication error were included. Cost values were expressed in Euro 2015. A narrative synthesis was performed. A total of 4572 articles were identified from database searching, and 16 were included in the review. One study met all applicable quality criteria. Fifteen studies expressed economic impact in monetary terms. Mean cost per error per study ranged from €2.58 to €111 727.08. Healthcare costs were used to measure economic impact in 15 of the included studies with one study measuring litigation costs. Four studies included costs incurred in primary care with the remaining 12 measuring hospital costs. Five studies looked at general medication error in a general population with 11 studies reporting the economic impact of an individual type of medication error or error within a specific patient population. Considerable variability existed between studies in terms of financial cost, patients, settings and errors included. Many were of poor quality. Assessment of economic impact was conducted predominantly in the hospital setting with little assessment of primary care impact. Limited parameters were used to establish economic impact. Copyright © 2017 John Wiley & Sons, Ltd.
Error Sources in Asteroid Astrometry
NASA Technical Reports Server (NTRS)
Owen, William M., Jr.
2000-01-01
Asteroid astrometry, like any other scientific measurement process, is subject to both random and systematic errors, not all of which are under the observer's control. To design an astrometric observing program or to improve an existing one requires knowledge of the various sources of error, how different errors affect one's results, and how various errors may be minimized by careful observation or data reduction techniques.
The Tycho-Gaia Astrometric Solution
NASA Astrophysics Data System (ADS)
Lindegren, Lennart
2018-04-01
Gaia DR1 is based on the first 14 months of Gaia's observations. This is not long enough to reliably disentangle the parallax effect from proper motion. For most sources, therefore, only positions and magnitudes are given. Parallaxes and proper motions were nevertheless obtained for about two million of the brighter stars through the Tycho-Gaia astrometric solution (TGAS), combining the Gaia observations with the much earlier Hipparcos and Tycho-2 positions. In this review I focus on some important characteristics and limitations of TGAS, in particular the reference frame, astrometric uncertainties, correlations, and systematic errors.
Comparison of photogrammetric and astrometric data reduction results for the wild BC-4 camera
NASA Technical Reports Server (NTRS)
Hornbarger, D. H.; Mueller, I. I.
1971-01-01
The results of astrometric and photogrammetric plate reduction techniques for a short focal length camera are compared. Several astrometric models are tested on entire and limited plate areas to analyze their ability to remove systematic errors from interpolated satellite directions using a rigorous photogrammetric reduction as a standard. Residual plots are employed to graphically illustrate the analysis. Conclusions are made as to what conditions will permit the astrometric reduction to achieve comparable accuracies to those of photogrammetric reduction when applied for short focal length ballistic cameras.
Possibility of measuring Adler angles in charged current single pion neutrino-nucleus interactions
NASA Astrophysics Data System (ADS)
Sánchez, F.
2016-05-01
Uncertainties in modeling neutrino-nucleus interactions are a major contribution to systematic errors in long-baseline neutrino oscillation experiments. Accurate modeling of neutrino interactions requires additional experimental observables such as the Adler angles which carry information about the polarization of the Δ resonance and the interference with nonresonant single pion production. The Adler angles were measured with limited statistics in bubble chamber neutrino experiments as well as in electron-proton scattering experiments. We discuss the viability of measuring these angles in neutrino interactions with nuclei.
NASA Astrophysics Data System (ADS)
Stone, Dáithí A.; Hansen, Gerrit
2016-09-01
Despite being a well-established research field, the detection and attribution of observed climate change to anthropogenic forcing is not yet provided as a climate service. One reason for this is the lack of a methodology for performing tailored detection and attribution assessments on a rapid time scale. Here we develop such an approach, based on the translation of quantitative analysis into the "confidence" language employed in recent Assessment Reports of the Intergovernmental Panel on Climate Change. While its systematic nature necessarily ignores some nuances examined in detailed expert assessments, the approach nevertheless goes beyond most detection and attribution studies in considering contributors to building confidence such as errors in observational data products arising from sparse monitoring networks. When compared against recent expert assessments, the results of this approach closely match those of the existing assessments. Where there are small discrepancies, these variously reflect ambiguities in the details of what is being assessed, reveal nuances or limitations of the expert assessments, or indicate limitations of the accuracy of the sort of systematic approach employed here. Deployment of the method on 116 regional assessments of recent temperature and precipitation changes indicates that existing rules of thumb concerning the detectability of climate change ignore the full range of sources of uncertainty, most particularly the importance of adequate observational monitoring.
The Accuracy of GBM GRB Localizations
NASA Astrophysics Data System (ADS)
Briggs, Michael Stephen; Connaughton, V.; Meegan, C.; Hurley, K.
2010-03-01
We report a study of the accuracy of GBM GRB localizations, analyzing three types of localizations: those produced automatically by the GBM Flight Software on board GBM, those produced automatically with ground software in near real time, and localizations produced with human guidance. The two types of automatic locations are distributed in near real time via GCN Notices; the human-guided locations are distributed on timescales of many minutes or hours using GCN Circulars. This work uses a Bayesian analysis that models the distribution of the GBM total location error by comparing GBM locations to more accurate locations obtained with other instruments. Reference locations are obtained from Swift, Super-AGILE, the LAT, and the IPN. We model the GBM total location errors as having systematic errors in addition to the statistical errors and use the Bayesian analysis to constrain the systematic errors.
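As a rough illustration (not the paper's actual Bayesian model), one could estimate a single systematic error added in quadrature to per-burst statistical errors by maximizing a Rayleigh likelihood for the offsets between GBM and reference locations. All numbers below are invented:

```python
import numpy as np
from scipy.optimize import minimize_scalar

stat = np.array([2.0, 3.5, 1.0, 5.0, 2.5])       # per-burst statistical errors (deg)
offset = np.array([4.1, 3.0, 3.8, 6.0, 2.9])     # offsets from reference locations (deg)

def neg_log_like(sys_err):
    s2 = stat**2 + sys_err**2                    # total variance: statistical (+) systematic
    return -np.sum(np.log(offset / s2) - offset**2 / (2 * s2))   # Rayleigh log-likelihood

best = minimize_scalar(neg_log_like, bounds=(0.0, 20.0), method="bounded")
print(best.x)    # maximum-likelihood systematic error (deg)
```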
Homogeneous studies of transiting extrasolar planets - III. Additional planets and stellar models
NASA Astrophysics Data System (ADS)
Southworth, John
2010-11-01
I derive the physical properties of 30 transiting extrasolar planetary systems using a homogeneous analysis of published data. The light curves are modelled with the JKTEBOP code, with special attention paid to the treatment of limb darkening, orbital eccentricity and error analysis. The light from some systems is contaminated by faint nearby stars, which if ignored will systematically bias the results. I show that it is not realistically possible to account for this using only transit light curves: light-curve solutions must be constrained by measurements of the amount of contaminating light. A contamination of 5 per cent is enough to make the measurement of a planetary radius 2 per cent too low. The physical properties of the 30 transiting systems are obtained by interpolating in tabulated predictions from theoretical stellar models to find the best match to the light-curve parameters and the measured stellar velocity amplitude, temperature and metal abundance. Statistical errors are propagated by a perturbation analysis which constructs complete error budgets for each output parameter. These error budgets are used to compile a list of systems which would benefit from additional photometric or spectroscopic measurements. The systematic errors arising from the inclusion of stellar models are assessed by using five independent sets of theoretical predictions for low-mass stars. This model dependence sets a lower limit on the accuracy of measurements of the physical properties of the systems, ranging from 1 per cent for the stellar mass to 0.6 per cent for the mass of the planet and 0.3 per cent for other quantities. The stellar density and the planetary surface gravity and equilibrium temperature are not affected by this model dependence. An external test on these systematic errors is performed by comparing the two discovery papers of the WASP-11/HAT-P-10 system: these two studies differ in their assessment of the ratio of the radii of the components and the effective temperature of the star. I find that the correlations of planetary surface gravity and mass with orbital period have significance levels of only 3.1σ and 2.3σ, respectively. The significance of the latter has not increased with the addition of new data since Paper II. The division of planets into two classes based on Safronov number is increasingly blurred. Most of the objects studied here would benefit from improved photometric and spectroscopic observations, as well as improvements in our understanding of low-mass stars and their effective temperature scale.
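A back-of-the-envelope sketch of the contamination effect described above, assuming simple third-light dilution of the transit depth (the full JKTEBOP light-curve fit differs in detail, so the exact bias will not match the quoted 2 per cent):

```python
# Third-light dilution: a contaminating flux fraction f reduces the observed transit depth,
# so the inferred radius ratio k = Rp/Rstar scales roughly as sqrt(1 - f).
import math

k_true = 0.10                       # assumed true planet-to-star radius ratio
for f in (0.00, 0.02, 0.05):
    depth_obs = k_true**2 * (1 - f)
    k_obs = math.sqrt(depth_obs)
    print(f, k_obs, 100 * (k_obs / k_true - 1))   # percent bias in the radius ratio
```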
Frankenfield, David; Roth-Yousey, Lori; Compher, Charlene
2005-05-01
An assessment of energy needs is a necessary component in the development and evaluation of a nutrition care plan. The metabolic rate can be measured or estimated by equations, but estimation is by far the more common method. However, predictive equations might generate errors large enough to impact outcome. Therefore, a systematic review of the literature was undertaken to document the accuracy of predictive equations preliminary to deciding on the imperative to measure metabolic rate. As part of a larger project to determine the role of indirect calorimetry in clinical practice, an evidence team identified published articles that examined the validity of various predictive equations for resting metabolic rate (RMR) in nonobese and obese people and also in individuals of various ethnic and age groups. Articles were accepted based on defined criteria and abstracted using evidence analysis tools developed by the American Dietetic Association. Because these equations are applied by dietetics practitioners to individuals, a key inclusion criterion was research reports of individual data. The evidence was systematically evaluated, and a conclusion statement and grade were developed. Four prediction equations were identified as the most commonly used in clinical practice (Harris-Benedict, Mifflin-St Jeor, Owen, and World Health Organization/Food and Agriculture Organization/United Nations University [WHO/FAO/UNU]). Of these equations, the Mifflin-St Jeor equation was the most reliable, predicting RMR within 10% of measured in more nonobese and obese individuals than any other equation, and it also had the narrowest error range. No validation work concentrating on individual errors was found for the WHO/FAO/UNU equation. Older adults and US-residing ethnic minorities were underrepresented both in the development of predictive equations and in validation studies. The Mifflin-St Jeor equation is more likely than the other equations tested to estimate RMR to within 10% of that measured, but noteworthy errors and limitations exist when it is applied to individuals and possibly when it is generalized to certain age and ethnic groups. RMR estimation errors would be eliminated by valid measurement of RMR with indirect calorimetry, using an evidence-based protocol to minimize measurement error. The Expert Panel advises clinical judgment regarding when to accept estimated RMR using predictive equations in any given individual. Indirect calorimetry may be an important tool when, in the judgment of the clinician, the predictive methods fail an individual in a clinically relevant way. For members of groups that are greatly underrepresented by existing validation studies of predictive equations, a high level of suspicion regarding the accuracy of the equations is warranted.
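For reference, a minimal implementation of the Mifflin-St Jeor equation discussed above, using its published coefficients; the review's caveat that errors for any given individual can exceed 10% still applies:

```python
def mifflin_st_jeor(weight_kg, height_cm, age_yr, sex):
    """Resting metabolic rate (kcal/day) from the Mifflin-St Jeor equation."""
    base = 10 * weight_kg + 6.25 * height_cm - 5 * age_yr
    return base + (5 if sex == "male" else -161)

print(mifflin_st_jeor(70, 175, 40, "male"))    # ~1599 kcal/day
print(mifflin_st_jeor(60, 165, 40, "female"))  # ~1270 kcal/day
```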
Mathes, Tim; Klaßen, Pauline; Pieper, Dawid
2017-11-28
Our objective was to assess the frequency of data extraction errors and its potential impact on results in systematic reviews. Furthermore, we evaluated the effect of different extraction methods, reviewer characteristics and reviewer training on error rates and results. We performed a systematic review of methodological literature in PubMed, Cochrane methodological registry, and by manual searches (12/2016). Studies were selected by two reviewers independently. Data were extracted in standardized tables by one reviewer and verified by a second. The analysis included six studies; four studies on extraction error frequency, one study comparing different reviewer extraction methods and two studies comparing different reviewer characteristics. We did not find a study on reviewer training. There was a high rate of extraction errors (up to 50%). Errors often had an influence on effect estimates. Different data extraction methods and reviewer characteristics had moderate effect on extraction error rates and effect estimates. The evidence base for established standards of data extraction seems weak despite the high prevalence of extraction errors. More comparative studies are needed to get deeper insights into the influence of different extraction methods.
Systematic errors in transport calculations of shear viscosity using the Green-Kubo formalism
NASA Astrophysics Data System (ADS)
Rose, J. B.; Torres-Rincon, J. M.; Oliinychenko, D.; Schäfer, A.; Petersen, H.
2018-05-01
The purpose of this study is to provide a reproducible framework for the use of the Green-Kubo formalism to extract transport coefficients. More specifically, in the case of shear viscosity, we investigate the limitations and technical details of fitting the auto-correlation function to a decaying exponential. This fitting procedure is found to be applicable for systems interacting both through constant and energy-dependent cross-sections, although this is only true for sufficiently dilute systems in the latter case. We find that the optimal fit technique consists of simultaneously fixing the intercept of the correlation function and using a fitting interval constrained by the relative error on the correlation function. The formalism is then applied to the full hadron gas, for which we obtain the shear viscosity to entropy ratio.
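To make the fitting recipe above concrete, here is a minimal Python sketch of a Green-Kubo shear-viscosity estimate in which the autocorrelation function is fitted to C(t) = C(0)·exp(−t/τ) with the intercept fixed and the fit window limited by the relative error on the ACF. The relative-error estimate, the unit convention, and the synthetic stress series are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np
from scipy.optimize import curve_fit

def shear_viscosity_green_kubo(pi_xy, dt, volume, temperature, kB=1.0, rel_err_cut=0.1):
    """Green-Kubo estimate (one common convention):
        eta = V/(kB*T) * integral_0^inf <pi_xy(0) pi_xy(t)> dt,
    with the ACF fitted by C(t) = C(0)*exp(-t/tau), intercept fixed to the
    measured C(0) and the fit window cut where the ACF's relative error grows.
    """
    n = len(pi_xy)
    x = pi_xy - pi_xy.mean()
    # empirical autocorrelation function up to half the series length
    acf = np.array([np.mean(x[:n - k] * x[k:]) for k in range(n // 2)])
    c0 = acf[0]
    # crude estimate of the relative statistical error on the ACF at each lag
    abs_err = c0 / np.sqrt(n - np.arange(len(acf)))
    rel_err = abs_err / np.maximum(np.abs(acf), 1e-12)
    exceed = np.flatnonzero(rel_err > rel_err_cut)
    last = max(int(exceed[0]) if exceed.size else len(acf), 5)
    popt, _ = curve_fit(lambda time, tau: c0 * np.exp(-time / tau),
                        dt * np.arange(last), acf[:last], p0=[10 * dt])
    tau = popt[0]
    # the fitted exponential integrates analytically to C(0) * tau
    return volume / (kB * temperature) * c0 * tau

# usage with a synthetic, exponentially correlated stress series (illustrative only)
rng = np.random.default_rng(0)
series = np.zeros(20000)
for i in range(1, series.size):
    series[i] = 0.95 * series[i - 1] + rng.normal()
print(shear_viscosity_green_kubo(series, dt=0.1, volume=1000.0, temperature=0.15))
```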
Systematic reviews, systematic error and the acquisition of clinical knowledge
2010-01-01
Background Since its inception, evidence-based medicine and its application through systematic reviews, has been widely accepted. However, it has also been strongly criticised and resisted by some academic groups and clinicians. One of the main criticisms of evidence-based medicine is that it appears to claim to have unique access to absolute scientific truth and thus devalues and replaces other types of knowledge sources. Discussion The various types of clinical knowledge sources are categorised on the basis of Kant's categories of knowledge acquisition, as being either 'analytic' or 'synthetic'. It is shown that these categories do not act in opposition but rather, depend upon each other. The unity of analysis and synthesis in knowledge acquisition is demonstrated during the process of systematic reviewing of clinical trials. Systematic reviews constitute comprehensive synthesis of clinical knowledge but depend upon plausible, analytical hypothesis development for the trials reviewed. The dangers of systematic error regarding the internal validity of acquired knowledge are highlighted on the basis of empirical evidence. It has been shown that the systematic review process reduces systematic error, thus ensuring high internal validity. It is argued that this process does not exclude other types of knowledge sources. Instead, amongst these other types it functions as an integrated element during the acquisition of clinical knowledge. Conclusions The acquisition of clinical knowledge is based on interaction between analysis and synthesis. Systematic reviews provide the highest form of synthetic knowledge acquisition in terms of achieving internal validity of results. In that capacity it informs the analytic knowledge of the clinician but does not replace it. PMID:20537172
Preliminary constraints on variable w dark energy cosmologies from the SNLS
NASA Astrophysics Data System (ADS)
Carlberg, R. G.; Conley, A.; Howell, D. A.; Neill, J. D.; Perrett, K.; Pritchet, C. J.; Sullivan, M.
2005-12-01
The first 71 confirmed Type Ia supernovae from the Supernova Legacy Survey being conducted with CFHT imaging and Gemini, VLT and Keck spectroscopy set limits on variable dark energy cosmological models. For a generalized Chaplygin gas, in which the dark energy content is (1-Ω_M)/ρ^a, we find that a is statistically consistent with zero, with a best fit a = -0.2 ± 0.3 (68 per cent confidence). Reducing the systematic errors requires a further refinement of the photometric calibration and the potential model biases. A variable dark energy equation of state with w = w0 + w1 z shows the expected degeneracy between increasingly positive w0 and negative w1. The existing data rule out the parameters of the Weller & Linder (2002) supergravity-inspired model cosmology (w0, w1) = (-0.81, 0.31). The full 700 SNe Ia of the completed survey will provide a statistical error limit on w1 of about 0.2 and significant constraints on variable w models. The Canadian NSERC provided funding for the scientific analysis. These results are based on observations obtained at the CFHT, Gemini, VLT and Keck observatories.
Empirical evidence for resource-rational anchoring and adjustment.
Lieder, Falk; Griffiths, Thomas L; M Huys, Quentin J; Goodman, Noah D
2018-04-01
People's estimates of numerical quantities are systematically biased towards their initial guess. This anchoring bias is usually interpreted as a sign of human irrationality, but it has recently been suggested that the anchoring bias instead results from people's rational use of their finite time and limited cognitive resources. If this were true, then adjustment should decrease with the relative cost of time. To test this hypothesis, we designed a new numerical estimation paradigm that controls people's knowledge and varies the cost of time and error independently while allowing people to invest as much or as little time and effort into refining their estimate as they wish. Two experiments confirmed the prediction that adjustment decreases with time cost but increases with error cost regardless of whether the anchor was self-generated or provided. These results support the hypothesis that people rationally adapt their number of adjustments to achieve a near-optimal speed-accuracy tradeoff. This suggests that the anchoring bias might be a signature of the rational use of finite time and limited cognitive resources rather than a sign of human irrationality.
Phase-demodulation error of a fiber-optic Fabry-Perot sensor with complex reflection coefficients.
Kilpatrick, J M; MacPherson, W N; Barton, J S; Jones, J D
2000-03-20
The influence of reflector losses attracts little discussion in standard treatments of the Fabry-Perot interferometer yet may be an important factor contributing to errors in phase-stepped demodulation of fiber optic Fabry-Perot (FFP) sensors. We describe a general transfer function for FFP sensors with complex reflection coefficients and estimate systematic phase errors that arise when the asymmetry of the reflected fringe system is neglected, as is common in the literature. The measured asymmetric response of higher-finesse metal-dielectric FFP constructions corroborates a model that predicts systematic phase errors of 0.06 rad in three-step demodulation of a low-finesse FFP sensor (R = 0.05) with internal reflector losses of 25%.
Miraldi Utz, Virginia
2017-01-01
Myopia is the most common eye disorder and major cause of visual impairment worldwide. As the incidence of myopia continues to rise, the need to further understand the complex roles of molecular and environmental factors controlling variation in refractive error is of increasing importance. Tkatchenko and colleagues applied a systematic approach using a combination of gene set enrichment analysis, genome-wide association studies, and functional analysis of a murine model to identify a myopia susceptibility gene, APLP2. Differential expression of refractive error was associated with time spent reading for those with low frequency variants in this gene. This provides support for the longstanding hypothesis of gene-environment interactions in refractive error development.
Experimental demonstration of laser tomographic adaptive optics on a 30-meter telescope at 800 nm
NASA Astrophysics Data System (ADS)
Ammons, S., Mark; Johnson, Luke; Kupke, Renate; Gavel, Donald T.; Max, Claire E.
2010-07-01
A critical goal in the next decade is to develop techniques that will extend Adaptive Optics correction to visible wavelengths on Extremely Large Telescopes (ELTs). We demonstrate in the laboratory the highly accurate atmospheric tomography necessary to defeat the cone effect on ELTs, an essential milestone on the path to this capability. We simulate a high-order Laser Tomographic AO System for a 30-meter telescope with the LTAO/MOAO testbed at UCSC. Eight Sodium Laser Guide Stars (LGSs) are sensed by 99x99 Shack-Hartmann wavefront sensors over 75". The AO system is diffraction-limited at a science wavelength of 800 nm (S ~ 6-9%) over a field of regard of 20" diameter. Open-loop WFS systematic error is observed to be proportional to the total input atmospheric disturbance and is nearly the dominant error budget term (81 nm RMS), exceeded only by tomographic wavefront estimation error (92 nm RMS). The total residual wavefront error for this experiment is comparable to that expected for wide-field tomographic adaptive optics systems of similar wavefront sensor order and LGS constellation geometry planned for Extremely Large Telescopes.
Upper limits on the 21 cm power spectrum at z = 5.9 from quasar absorption line spectroscopy
NASA Astrophysics Data System (ADS)
Pober, Jonathan C.; Greig, Bradley; Mesinger, Andrei
2016-11-01
We present upper limits on the 21 cm power spectrum at z = 5.9 calculated from the model-independent limit on the neutral fraction of the intergalactic medium of x_HI < 0.06 + 0.05 (1σ) derived from dark pixel statistics of quasar absorption spectra. Using 21CMMC, a Markov chain Monte Carlo Epoch of Reionization analysis code, we explore the probability distribution of 21 cm power spectra consistent with this constraint on the neutral fraction. We present 99 per cent confidence upper limits of Δ²(k) < 10–20 mK² over a range of k from 0.5 to 2.0 h Mpc⁻¹, with the exact limit dependent on the sampled k mode. This limit can be used as a null test for 21 cm experiments: a detection of power at z = 5.9 in excess of this value is highly suggestive of residual foreground contamination or other systematic errors affecting the analysis.
NASA Astrophysics Data System (ADS)
Güttler, I.
2012-04-01
Systematic errors in near-surface temperature (T2m), total cloud cover (CLD), shortwave albedo (ALB) and surface net longwave (SNL) and shortwave energy flux (SNS) are detected in simulations of RegCM on 50 km resolution over the European CORDEX domain when forced with ERA-Interim reanalysis. Simulated T2m is compared to CRU 3.0 and other variables to the GEWEX-SRB 3.0 dataset. Most of the systematic errors found in SNL and SNS are consistent with errors in T2m, CLD and ALB: they include prevailing negative errors in T2m and positive errors in CLD present during most of the year. Errors in T2m and CLD can be associated with the overestimation of SNL and SNS in most simulations. The impact of errors in albedo is primarily confined to north Africa, where e.g. underestimation of albedo in JJA is consistent with associated surface heating and positive SNS and T2m errors. Sensitivity to the choice of the PBL scheme and various parameters in PBL schemes is examined from an ensemble of 20 simulations. The recently implemented prognostic PBL scheme performs over Europe with mixed success when compared to the standard diagnostic scheme, with a general increase of errors in T2m and CLD over all of the domain. Nevertheless, improvements in T2m can be found in e.g. north-eastern Europe during DJF and western Europe during JJA, where substantial warm biases existed in simulations with the diagnostic scheme. The most detectable impact, in terms of the JJA T2m errors over western Europe, comes from the variation in the formulation of mixing length. In order to reduce the above errors, an update of the RegCM albedo values and further work in customizing the PBL scheme are suggested.
Interventions to reduce medication errors in neonatal care: a systematic review
Nguyen, Minh-Nha Rhylie; Mosel, Cassandra
2017-01-01
Background: Medication errors represent a significant but often preventable cause of morbidity and mortality in neonates. The objective of this systematic review was to determine the effectiveness of interventions to reduce neonatal medication errors. Methods: A systematic review was undertaken of all comparative and noncomparative studies published in any language, identified from searches of PubMed and EMBASE and reference-list checking. Eligible studies were those investigating the impact of any medication safety interventions aimed at reducing medication errors in neonates in the hospital setting. Results: A total of 102 studies were identified that met the inclusion criteria, including 86 comparative and 16 noncomparative studies. Medication safety interventions were classified into six themes: technology (n = 38; e.g. electronic prescribing), organizational (n = 16; e.g. guidelines, policies, and procedures), personnel (n = 13; e.g. staff education), pharmacy (n = 9; e.g. clinical pharmacy service), hazard and risk analysis (n = 8; e.g. error detection tools), and multifactorial (n = 18; e.g. any combination of previous interventions). Significant variability was evident across all included studies, with differences in intervention strategies, trial methods, types of medication errors evaluated, and how medication errors were identified and evaluated. Most studies demonstrated an appreciable risk of bias. The vast majority of studies (>90%) demonstrated a reduction in medication errors. A similar median reduction of 50–70% in medication errors was evident across studies included within each of the identified themes, but findings varied considerably from a 16% increase in medication errors to a 100% reduction in medication errors. Conclusion: While neonatal medication errors can be reduced through multiple interventions aimed at improving the medication use process, no single intervention appeared clearly superior. Further research is required to evaluate the relative cost-effectiveness of the various medication safety interventions to facilitate decisions regarding uptake and implementation into clinical practice. PMID:29387337
Empirical Analysis of Systematic Communication Errors.
1981-09-01
human components in communication systems. Systematic errors were defined to be those that occur regularly in human communication links ... phase of the human communication process and focuses on the linkage between a specific piece of information (and the receiver) and the transmission ... communication flow. (2) Exchange. Exchange is the next phase in human communication and entails a concerted effort on the part of the sender and receiver to share
Systematic errors in strong lens modeling
NASA Astrophysics Data System (ADS)
Johnson, Traci L.; Sharon, Keren; Bayliss, Matthew B.
We investigate how varying the number of multiple image constraints and the available redshift information can influence the systematic errors of strong lens models, specifically, the image predictability, mass distribution, and magnifications of background sources. This work will not only inform Frontier Fields science, but also work on the growing collection of strong lensing galaxy clusters, most of which are less massive and are capable of lensing a handful of galaxies.
Low-Energy Proton Testing Methodology
NASA Technical Reports Server (NTRS)
Pellish, Jonathan A.; Marshall, Paul W.; Heidel, David F.; Schwank, James R.; Shaneyfelt, Marty R.; Xapsos, M.A.; Ladbury, Raymond L.; LaBel, Kenneth A.; Berg, Melanie; Kim, Hak S.;
2009-01-01
Use of low-energy protons and high-energy light ions is becoming necessary to investigate current-generation SEU thresholds. Systematic errors can dominate measurements made with low-energy protons. Range and energy straggling contribute to systematic error. Low-energy proton testing is not a step-and-repeat process. Low-energy protons and high-energy light ions can be used to measure SEU cross section of single sensitive features; important for simulation.
Focusing cosmic telescopes: systematics of strong lens modeling
NASA Astrophysics Data System (ADS)
Johnson, Traci Lin; Sharon, Keren q.
2018-01-01
The use of strong gravitational lensing by galaxy clusters has become a popular method for studying the high redshift universe. While diverse in computational methods, lens modeling techniques have grasped the means for determining statistical errors on cluster masses and magnifications. However, the systematic errors have yet to be quantified, arising from the number of constraints, availability of spectroscopic redshifts, and various types of image configurations. I will be presenting my dissertation work on quantifying systematic errors in parametric strong lensing techniques. I have participated in the Hubble Frontier Fields lens model comparison project, using simulated clusters to compare the accuracy of various modeling techniques. I have extended this project to understanding how changing the quantity of constraints affects the mass and magnification. I will also present my recent work extending these studies to clusters in the Outer Rim Simulation. These clusters are typical of the clusters found in wide-field surveys, in mass and lensing cross-section. These clusters have fewer constraints than the HFF clusters and thus are more susceptible to systematic errors. With the wealth of strong lensing clusters discovered in surveys such as SDSS, SPT, DES, and in the future, LSST, this work will be influential in guiding the lens modeling efforts and follow-up spectroscopic campaigns.
Short-range optical air data measurements for aircraft control using rotational Raman backscatter.
Fraczek, Michael; Behrendt, Andreas; Schmitt, Nikolaus
2013-07-15
A first laboratory prototype of a novel concept for a short-range optical air data system for aircraft control and safety was built. The measurement methodology was introduced in [Appl. Opt. 51, 148 (2012)] and is based on techniques known from lidar, detecting elastic and Raman backscatter from air. A wide range of flight-critical parameters, such as air temperature, molecular number density and pressure, can be measured, and data on atmospheric particles and humidity can be collected. In this paper, the experimental measurement performance achieved with the first laboratory prototype, using 532 nm laser radiation with a pulse energy of 118 mJ, is presented. Systematic measurement errors and statistical measurement uncertainties are quantified separately. The typical systematic temperature, density and pressure measurement errors obtained from the mean of 1000 averaged signal pulses are small, amounting to < 0.22 K, < 0.36% and < 0.31%, respectively, for measurements at air pressures varying from 200 hPa to 950 hPa but constant air temperature of 298.95 K. The systematic measurement errors at air temperatures varying from 238 K to 308 K but constant air pressure of 946 hPa are even smaller, at < 0.05 K, < 0.07% and < 0.06%, respectively. A focus is put on the system performance at different virtual flight altitudes as a function of the laser pulse energy. The virtual flight altitudes are precisely generated with a custom-made atmospheric simulation chamber system. In this context, the minimum laser pulse energies and pulse numbers required by the measurement system to meet the measurement error demands for temperature and pressure specified in aviation standards are determined experimentally. The aviation error margins limit the allowable temperature errors to 1.5 K for all measurement altitudes and the pressure errors to 0.1% for 0 m and 0.5% for 13000 m. With regard to 100-pulse-averaged temperature measurements, the pulse energy using 532 nm laser radiation has to be larger than 11 mJ (35 mJ), regarding 1-σ (3-σ) uncertainties at all measurement altitudes. For 100-pulse-averaged pressure measurements, the laser pulse energy has to be larger than 95 mJ (355 mJ), respectively. Based on these experimental results, the laser pulse energy requirements are extrapolated to the ultraviolet wavelength region as well, resulting in a significantly lower pulse energy demand of 1.5-3 mJ (4-10 mJ) and 12-27 mJ (45-110 mJ) for 1-σ (3-σ) 100-pulse-averaged temperature and pressure measurements, respectively.
A probabilistic approach to remote compositional analysis of planetary surfaces
Lapotre, Mathieu G.A.; Ehlmann, Bethany L.; Minson, Sarah E.
2017-01-01
Reflected light from planetary surfaces provides information, including mineral/ice compositions and grain sizes, by study of albedo and absorption features as a function of wavelength. However, deconvolving the compositional signal in spectra is complicated by the nonuniqueness of the inverse problem. Trade-offs between mineral abundances and grain sizes in setting reflectance, instrument noise, and systematic errors in the forward model are potential sources of uncertainty, which are often unquantified. Here we adopt a Bayesian implementation of the Hapke model to determine sets of acceptable-fit mineral assemblages, as opposed to single best fit solutions. We quantify errors and uncertainties in mineral abundances and grain sizes that arise from instrument noise, compositional end members, optical constants, and systematic forward model errors for two suites of ternary mixtures (olivine-enstatite-anorthite and olivine-nontronite-basaltic glass) in a series of six experiments in the visible-shortwave infrared (VSWIR) wavelength range. We show that grain sizes are generally poorly constrained from VSWIR spectroscopy. Abundance and grain size trade-offs lead to typical abundance errors of ≤1 wt % (occasionally up to ~5 wt %), while ~3% noise in the data increases errors by up to ~2 wt %. Systematic errors further increase inaccuracies by a factor of 4. Finally, phases with low spectral contrast or inaccurate optical constants can further increase errors. Overall, typical errors in abundance are <10%, but sometimes significantly increase for specific mixtures, prone to abundance/grain-size trade-offs that lead to high unmixing uncertainties. These results highlight the need for probabilistic approaches to remote determination of planetary surface composition.
The Effects of Bar-coding Technology on Medication Errors: A Systematic Literature Review.
Hutton, Kevin; Ding, Qian; Wellman, Gregory
2017-02-24
The bar-coding technology adoptions have risen drastically in U.S. health systems in the past decade. However, few studies have addressed the impact of bar-coding technology with strong prospective methodologies and the research, which has been conducted from both in-pharmacy and bedside implementations. This systematic literature review is to examine the effectiveness of bar-coding technology on preventing medication errors and what types of medication errors may be prevented in the hospital setting. A systematic search of databases was performed from 1998 to December 2016. Studies measuring the effect of bar-coding technology on medication errors were included in a full-text review. Studies with the outcomes other than medication errors such as efficiency or workarounds were excluded. The outcomes were measured and findings were summarized for each retained study. A total of 2603 articles were initially identified and 10 studies, which used prospective before-and-after study design, were fully reviewed in this article. Of the 10 included studies, 9 took place in the United States, whereas the remaining was conducted in the United Kingdom. One research article focused on bar-coding implementation in a pharmacy setting, whereas the other 9 focused on bar coding within patient care areas. All 10 studies showed overall positive effects associated with bar-coding implementation. The results of this review show that bar-coding technology may reduce medication errors in hospital settings, particularly on preventing targeted wrong dose, wrong drug, wrong patient, unauthorized drug, and wrong route errors.
NASA Technical Reports Server (NTRS)
Martin, D. L.; Perry, M. J.
1994-01-01
Water-leaving radiances and phytoplankton pigment concentrations are calculated from coastal zone color scanner (CZCS) radiance measurements by removing atmospheric Rayleigh and aerosol radiances from the total radiance signal measured at the satellite. The single greatest source of error in CZCS atmospheric correction algorithms is the assumption that these Rayleigh and aerosol radiances are separable. Multiple-scattering interactions between Rayleigh and aerosol components cause systematic errors in calculated aerosol radiances, and the magnitude of these errors is dependent on aerosol type and optical depth and on satellite viewing geometry. A technique was developed which extends the results of previous radiative transfer modeling by Gordon and Castano to predict the magnitude of these systematic errors for simulated CZCS orbital passes in which the ocean is viewed through a modeled, physically realistic atmosphere. The simulated image mathematically duplicates the exact satellite, Sun, and pixel locations of an actual CZCS image. Errors in the aerosol radiance at 443 nm are calculated for a range of aerosol optical depths. When pixels in the simulated image exceed an error threshold, the corresponding pixels in the actual CZCS image are flagged and excluded from further analysis or from use in image compositing or compilation of pigment concentration databases. Studies based on time series analyses or compositing of CZCS imagery which do not address Rayleigh-aerosol multiple scattering should be interpreted cautiously, since the fundamental assumption used in their atmospheric correction algorithm is flawed.
A novel method of measuring the concentration of anaesthetic vapours using a dew-point hygrometer.
Wilkes, A R; Mapleson, W W; Mecklenburgh, J S
1994-02-01
The Antoine equation relates the saturated vapour pressure of a volatile substance, such as an anaesthetic agent, to the temperature. The measurement of the 'dew-point' of a dry gas mixture containing a volatile anaesthetic agent by a dew-point hygrometer permits the determination of the partial pressure of the anaesthetic agent. The accuracy of this technique is limited only by the accuracy of the Antoine coefficients and of the temperature measurement. Comparing measurements by the dew-point method with measurements by refractometry showed systematic discrepancies up to 0.2% and random discrepancies with SDs up to 0.07% concentration in the 1% to 5% range for three volatile anaesthetics. The systematic discrepancies may be due to errors in available data for the vapour pressures and/or the refractive indices of the anaesthetics.
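A minimal Python sketch of the dew-point idea: the Antoine equation gives the saturated vapour pressure at the measured dew point, which equals the agent's partial pressure, and dividing by the ambient pressure gives the volume concentration. The Antoine coefficients below are illustrative placeholders, not values for any actual anaesthetic agent, and all pressures must share one unit.

```python
def saturated_vapour_pressure(T_celsius, A, B, C):
    """Antoine equation: log10(p) = A - B/(C + T). Units follow the coefficients."""
    return 10.0 ** (A - B / (C + T_celsius))

def agent_concentration_from_dew_point(dew_point_c, ambient_pressure, A, B, C):
    """Volume concentration (%) of the vapour from the measured dew point.

    At the dew point, the partial pressure of the agent equals its saturated
    vapour pressure, so the concentration is p_sat(T_dew) / p_ambient.
    """
    p_partial = saturated_vapour_pressure(dew_point_c, A, B, C)
    return 100.0 * p_partial / ambient_pressure

# Illustrative coefficients only (kPa, degrees Celsius); not real anaesthetic data.
print(agent_concentration_from_dew_point(dew_point_c=-5.0,
                                          ambient_pressure=101.3,
                                          A=6.1, B=1200.0, C=230.0))
```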
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shirasaki, Masato; Yoshida, Naoki, E-mail: masato.shirasaki@utap.phys.s.u-tokyo.ac.jp
2014-05-01
The measurement of cosmic shear using weak gravitational lensing is a challenging task that involves a number of complicated procedures. We study in detail the systematic errors in the measurement of weak-lensing Minkowski Functionals (MFs). Specifically, we focus on systematics associated with galaxy shape measurements, photometric redshift errors, and shear calibration correction. We first generate mock weak-lensing catalogs that directly incorporate the actual observational characteristics of the Canada-France-Hawaii Lensing Survey (CFHTLenS). We then perform a Fisher analysis using the large set of mock catalogs for various cosmological models. We find that the statistical error associated with the observational effects degrades the cosmological parameter constraints by a factor of a few. The Subaru Hyper Suprime-Cam (HSC) survey with a sky coverage of ∼1400 deg² will constrain the dark energy equation-of-state parameter with an error of Δw_0 ∼ 0.25 by the lensing MFs alone, but biases induced by the systematics can be comparable to the 1σ error. We conclude that the lensing MFs are powerful statistics beyond the two-point statistics only if well-calibrated measurement of both the redshifts and the shapes of source galaxies is performed. Finally, we analyze the CFHTLenS data to explore the ability of the MFs to break degeneracies between a few cosmological parameters. Using a combined analysis of the MFs and the shear correlation function, we derive the matter density Ω_m0 = 0.256 (+0.054/−0.046).
NASA Technical Reports Server (NTRS)
Prive, Nikki C.; Errico, Ronald M.
2013-01-01
A series of experiments that explore the roles of model and initial condition error in numerical weather prediction are performed using an observing system simulation experiment (OSSE) framework developed at the National Aeronautics and Space Administration Global Modeling and Assimilation Office (NASA/GMAO). The use of an OSSE allows the analysis and forecast errors to be explicitly calculated, and different hypothetical observing networks can be tested with ease. In these experiments, both a full global OSSE framework and an 'identical twin' OSSE setup are utilized to compare the behavior of the data assimilation system and evolution of forecast skill with and without model error. The initial condition error is manipulated by varying the distribution and quality of the observing network and the magnitude of observation errors. The results show that model error has a strong impact on both the quality of the analysis field and the evolution of forecast skill, including both systematic and unsystematic model error components. With a realistic observing network, the analysis state retains a significant quantity of error due to systematic model error. If errors of the analysis state are minimized, model error acts to rapidly degrade forecast skill during the first 24-48 hours of forward integration. In the presence of model error, the impact of observation errors on forecast skill is small, but in the absence of model error, observation errors cause a substantial degradation of the skill of medium range forecasts.
An Analysis of Computational Errors in the Use of Division Algorithms by Fourth-Grade Students.
ERIC Educational Resources Information Center
Stefanich, Greg P.; Rokusek, Teri
1992-01-01
Presents a study that analyzed errors made by randomly chosen fourth grade students (25 of 57) while using the division algorithm and investigated the effect of remediation on identified systematic errors. Results affirm that error pattern diagnosis and directed remediation lead to new learning and long-term retention. (MDH)
de Cordova, Pamela B; Bradford, Michelle A; Stone, Patricia W
2016-02-15
Shift workers have worse health outcomes than employees who work standard business hours. However, it is unclear how this poorer health may be related to employee work productivity. The purpose of this systematic review is to assess the relationship between shift work and errors and performance. Searches of MEDLINE/PubMed, EBSCOhost, and CINAHL were conducted to identify articles that examined the relationship between shift work, errors, quality, productivity, and performance. All articles were assessed for study quality. A total of 435 abstracts were screened, with 13 meeting inclusion criteria. Eight studies were rated to be of strong methodological quality. Nine studies demonstrated a positive relationship: night shift workers committed more errors and had decreased performance. Night shift workers have worse health that may contribute to errors and decreased performance in the workplace.
NASA Technical Reports Server (NTRS)
Calhoun, Philip C.; Sedlak, Joseph E.; Superfin, Emil
2011-01-01
Precision attitude determination for recent and planned space missions typically includes quaternion star trackers (ST) and a three-axis inertial reference unit (IRU). Sensor selection is based on estimates of knowledge accuracy attainable from a Kalman filter (KF), which provides the optimal solution for the case of linear dynamics with measurement and process errors characterized by random Gaussian noise with white spectrum. Non-Gaussian systematic errors in quaternion STs are often quite large and have an unpredictable time-varying nature, particularly when used in non-inertial pointing applications. Two filtering methods are proposed to reduce the attitude estimation error resulting from ST systematic errors, 1) extended Kalman filter (EKF) augmented with Markov states, 2) Unscented Kalman filter (UKF) with a periodic measurement model. Realistic assessments of the attitude estimation performance gains are demonstrated with both simulation and flight telemetry data from the Lunar Reconnaissance Orbiter.
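A toy one-dimensional sketch of the first proposed approach: a Kalman filter whose state is augmented with a first-order Gauss-Markov bias that absorbs the slowly varying star-tracker systematic error. The dynamics, dimensions, and noise values are illustrative simplifications of the full attitude EKF described above.

```python
import numpy as np

def kf_with_markov_bias(z, dt, tau, sigma_bias, sigma_meas, sigma_proc):
    """Toy 1-D Kalman filter whose state is [angle, tracker_bias].

    The slowly varying star-tracker systematic error is modelled as a
    first-order Gauss-Markov process: b_{k+1} = exp(-dt/tau)*b_k + w_k,
    mirroring (in one dimension) the idea of augmenting an attitude EKF
    with Markov states to absorb low-frequency sensor errors.
    """
    phi_b = np.exp(-dt / tau)
    F = np.array([[1.0, 0.0], [0.0, phi_b]])                       # state transition
    Q = np.diag([sigma_proc**2 * dt, sigma_bias**2 * (1 - phi_b**2)])  # process noise
    H = np.array([[1.0, 1.0]])                                     # measurement = angle + bias
    R = np.array([[sigma_meas**2]])
    x, P = np.zeros(2), np.eye(2)
    estimates = []
    for zk in z:
        # propagate
        x = F @ x
        P = F @ P @ F.T + Q
        # measurement update
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + (K @ (np.array([zk]) - H @ x)).ravel()
        P = (np.eye(2) - K @ H) @ P
        estimates.append(x.copy())
    return np.array(estimates)

# synthetic measurements: true angle 0, tracker bias drifting slowly
rng = np.random.default_rng(1)
bias = np.cumsum(rng.normal(0.0, 1e-4, 1000))
z = bias + rng.normal(0.0, 1e-3, 1000)
est = kf_with_markov_bias(z, dt=1.0, tau=300.0, sigma_bias=1e-3,
                          sigma_meas=1e-3, sigma_proc=1e-5)
print(est[-1])   # [estimated angle, estimated bias]
```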
NASA Astrophysics Data System (ADS)
Taut, A.; Berger, L.; Drews, C.; Bower, J.; Keilbach, D.; Lee, M. A.; Moebius, E.; Wimmer-Schweingruber, R. F.
2017-12-01
Complementary to the direct neutral particle measurements performed by e.g. IBEX, the measurement of PickUp Ions (PUIs) constitutes a diagnostic tool to investigate the local interstellar medium. PUIs are former neutral particles that have been ionized in the inner heliosphere. Subsequently, they are picked up by the solar wind and its frozen-in magnetic field. Due to this process, a characteristic Velocity Distribution Function (VDF) with a sharp cutoff evolves, which carries information about the PUI's injection speed and thus the former neutral particle velocity. The symmetry of the injection speed about the interstellar flow vector is used to derive the interstellar flow longitude from PUI measurements. Using He PUI data obtained by the PLASTIC sensor on STEREO A, we investigate how this concept may be affected by systematic errors. The PUI VDF strongly depends on the orientation of the local interplanetary magnetic field. Recently injected PUIs with speeds just below the cutoff speed typically form a highly anisotropic torus distribution in velocity space, which leads to longitudinal transport for certain magnetic field orientations. Therefore, we investigate how the selection of magnetic field configurations in the data affects the result for the interstellar flow longitude that we derive from the PUI cutoff. Indeed, we find that the results follow a systematic trend with the filtered magnetic field angles that can lead to a shift of the result of up to 5°. In turn, this means that every value for the interstellar flow longitude derived from the PUI cutoff is affected by a systematic error depending on the utilized magnetic field orientations. Here, we present our observations, discuss possible reasons for the systematic trend we discovered, and indicate selections that may minimize the systematic errors.
Contributory factors in surgical incidents as delineated by a confidential reporting system.
Mushtaq, F; O'Driscoll, C; Smith, Fct; Wilkins, D; Kapur, N; Lawton, R
2018-05-01
Background Confidential reporting systems play a key role in capturing information about adverse surgical events. However, the value of these systems is limited if the reports that are generated are not subjected to systematic analysis. The aim of this study was to provide the first systematic analysis of data from a novel surgical confidential reporting system to delineate contributory factors in surgical incidents and document lessons that can be learned. Methods One-hundred and forty-five patient safety incidents submitted to the UK Confidential Reporting System for Surgery over a 10-year period were analysed using an adapted version of the empirically-grounded Yorkshire Contributory Factors Framework. Results The most common factors identified as contributing to reported surgical incidents were cognitive limitations (30.09%), communication failures (16.11%) and a lack of adherence to established policies and procedures (8.81%). The analysis also revealed that adverse events were only rarely related to an isolated, single factor (20.71%) - with the majority of cases involving multiple contributory factors (79.29% of all cases had more than one contributory factor). Examination of active failures - those closest in time and space to the adverse event - pointed to frequent coupling with latent, systems-related contributory factors. Conclusions Specific patterns of errors often underlie surgical adverse events and may therefore be amenable to targeted intervention, including particular forms of training. The findings in this paper confirm the view that surgical errors tend to be multi-factorial in nature, which also necessitates a multi-disciplinary and system-wide approach to bringing about improvements.
A cognitive taxonomy of medical errors.
Zhang, Jiajie; Patel, Vimla L; Johnson, Todd R; Shortliffe, Edward H
2004-06-01
Propose a cognitive taxonomy of medical errors at the level of individuals and their interactions with technology. Use cognitive theories of human error and human action to develop the theoretical foundations of the taxonomy, develop the structure of the taxonomy, populate the taxonomy with examples of medical error cases, identify cognitive mechanisms for each category of medical error under the taxonomy, and apply the taxonomy to practical problems. Four criteria were used to evaluate the cognitive taxonomy. The taxonomy should be able (1) to categorize major types of errors at the individual level along cognitive dimensions, (2) to associate each type of error with a specific underlying cognitive mechanism, (3) to describe how and explain why a specific error occurs, and (4) to generate intervention strategies for each type of error. The proposed cognitive taxonomy largely satisfies the four criteria at a theoretical and conceptual level. Theoretically, the proposed cognitive taxonomy provides a method to systematically categorize medical errors at the individual level along cognitive dimensions, leads to a better understanding of the underlying cognitive mechanisms of medical errors, and provides a framework that can guide future studies on medical errors. Practically, it provides guidelines for the development of cognitive interventions to decrease medical errors and foundation for the development of medical error reporting system that not only categorizes errors but also identifies problems and helps to generate solutions. To validate this model empirically, we will next be performing systematic experimental studies.
Removal of batch effects using distribution-matching residual networks.
Shaham, Uri; Stanton, Kelly P; Zhao, Jun; Li, Huamin; Raddassi, Khadir; Montgomery, Ruth; Kluger, Yuval
2017-08-15
Sources of variability in experimentally derived data include measurement error in addition to the physical phenomena of interest. This measurement error is a combination of systematic components, originating from the measuring instrument, and random measurement errors. Several novel biological technologies, such as mass cytometry and single-cell RNA-seq (scRNA-seq), are plagued with systematic errors that may severely affect statistical analysis if the data are not properly calibrated. We propose a novel deep learning approach for removing systematic batch effects. Our method is based on a residual neural network, trained to minimize the Maximum Mean Discrepancy between the multivariate distributions of two replicates, measured in different batches. We apply our method to mass cytometry and scRNA-seq datasets, and demonstrate that it effectively attenuates batch effects. Our codes and data are publicly available at https://github.com/ushaham/BatchEffectRemoval.git. Contact: yuval.kluger@yale.edu. Supplementary data are available at Bioinformatics online. © The Author (2017). Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com
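A short numpy sketch of the quantity being minimized: the (biased) squared Maximum Mean Discrepancy between two batches under a Gaussian kernel. The residual network itself is not reproduced here; the kernel bandwidth and synthetic batches are illustrative.

```python
import numpy as np

def gaussian_kernel(x, y, sigma):
    """Pairwise Gaussian kernel matrix between the rows of x and y."""
    d2 = np.sum((x[:, None, :] - y[None, :, :]) ** 2, axis=-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def mmd2(batch1, batch2, sigma=1.0):
    """Squared Maximum Mean Discrepancy between two samples (biased estimator).

    This is the discrepancy the residual network in the paper is trained to
    minimise between replicates measured in different batches; here it is
    only evaluated, not optimised.
    """
    kxx = gaussian_kernel(batch1, batch1, sigma).mean()
    kyy = gaussian_kernel(batch2, batch2, sigma).mean()
    kxy = gaussian_kernel(batch1, batch2, sigma).mean()
    return kxx + kyy - 2.0 * kxy

rng = np.random.default_rng(0)
a = rng.normal(0.0, 1.0, size=(200, 10))
b = rng.normal(0.3, 1.0, size=(200, 10))   # systematic batch shift of 0.3
print(mmd2(a, b))        # positive value reflecting the batch effect
print(mmd2(a, b - 0.3))  # near zero after removing the shift
```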
Improved methods for the measurement and analysis of stellar magnetic fields
NASA Technical Reports Server (NTRS)
Saar, Steven H.
1988-01-01
The paper presents several improved methods for the measurement of magnetic fields on cool stars which take into account simple radiative transfer effects and the exact Zeeman patterns. Using these methods, high-resolution, low-noise data can be fitted with theoretical line profiles to determine the mean magnetic field strength in stellar active regions and a model-dependent fraction of the stellar surface (filling factor) covered by these regions. Random errors in the derived field strength and filling factor are parameterized in terms of signal-to-noise ratio, wavelength, spectral resolution, stellar rotation rate, and the magnetic parameters themselves. Weak line blends, if left uncorrected, can have significant systematic effects on the derived magnetic parameters, and thus several methods are developed to compensate partially for them. The magnetic parameters determined by previous methods likely have systematic errors because of such line blends and because of line saturation effects. Other sources of systematic error are explored in detail. These sources of error currently make it difficult to determine the magnetic parameters of individual stars to better than about + or - 20 percent.
Local blur analysis and phase error correction method for fringe projection profilometry systems.
Rao, Li; Da, Feipeng
2018-05-20
We introduce a flexible error correction method for fringe projection profilometry (FPP) systems in the presence of the local blur phenomenon. Local blur caused by global light transport such as camera defocus, projector defocus, and subsurface scattering will cause significant systematic errors in FPP systems. Previous methods, which adopt high-frequency patterns to separate the direct and global components, fail when the global light phenomenon occurs locally. In this paper, the influence of local blur on phase quality is thoroughly analyzed, and a concise error correction method is proposed to compensate for the phase errors. For the defocus phenomenon, this method can be directly applied. With the aid of spatially varying point spread functions and the local frontal plane assumption, experiments show that the proposed method can effectively alleviate the system errors and improve the final reconstruction accuracy in various scenes. For a subsurface scattering scenario, if the translucent object is dominated by multiple scattering, the proposed method can also be applied to correct systematic errors once the bidirectional scattering-surface reflectance distribution function of the object material is measured.
Dynamically corrected gates for singlet-triplet spin qubits with control-dependent errors
NASA Astrophysics Data System (ADS)
Jacobson, N. Tobias; Witzel, Wayne M.; Nielsen, Erik; Carroll, Malcolm S.
2013-03-01
Magnetic field inhomogeneity due to random polarization of quasi-static local magnetic impurities is a major source of environmentally induced error for singlet-triplet double quantum dot (DQD) spin qubits. Moreover, for singlet-triplet qubits this error may depend on the applied controls. This effect is significant when a static magnetic field gradient is applied to enable full qubit control. Through a configuration interaction analysis, we observe that the dependence of the field inhomogeneity-induced error on the DQD bias voltage can vary systematically as a function of the controls for certain experimentally relevant operating regimes. To account for this effect, we have developed a straightforward prescription for adapting dynamically corrected gate sequences that assume control-independent errors into sequences that compensate for systematic control-dependent errors. We show that accounting for such errors may lead to a substantial increase in gate fidelities. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. DOE's National Nuclear Security Administration under contract DE-AC04-94AL85000.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, Y; Fullerton, G; Goins, B
Purpose: In our previous study a preclinical multi-modality quality assurance (QA) phantom that contains five tumor-simulating test objects with 2, 4, 7, 10 and 14 mm diameters was developed for accurate tumor size measurement by researchers during cancer drug development and testing. This study analyzed the errors during tumor volume measurement from preclinical magnetic resonance (MR), micro-computed tomography (micro-CT) and ultrasound (US) images acquired in a rodent tumor model using the preclinical multi-modality QA phantom. Methods: Using preclinical 7-Tesla MR, US and micro-CT scanners, images were acquired of subcutaneous SCC4 tumor xenografts in nude rats (3–4 rats per group; 5 groups) along with the QA phantom using the same imaging protocols. After tumors were excised, in-air micro-CT imaging was performed to determine reference tumor volume. Volumes measured for the rat tumors and phantom test objects were calculated using the formula V = (π/6)*a*b*c, where a, b and c are the maximum diameters in three perpendicular dimensions determined by the three imaging modalities. Then linear regression analysis was performed to compare image-based tumor volumes with the reference tumor volume and known test object volume for the rats and the phantom respectively. Results: The slopes of regression lines for in-vivo tumor volumes measured by three imaging modalities were 1.021, 1.101 and 0.862 for MRI, micro-CT and US respectively. For the phantom, the slopes were 0.9485, 0.9971 and 0.9734 for MRI, micro-CT and US respectively. Conclusion: For both animal and phantom studies, random and systematic errors were observed. Random errors were observer-dependent and systematic errors were mainly due to selected imaging protocols and/or measurement method. In the animal study, there were additional systematic errors attributed to the ellipsoidal assumption for tumor shape. The systematic errors measured using the QA phantom need to be taken into account to reduce measurement errors during the animal study.
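A small Python sketch of the volume formula and the slope comparison described above; the no-intercept regression and the example numbers are illustrative simplifications, not the study's actual statistical model.

```python
import numpy as np

def ellipsoid_volume(a_mm, b_mm, c_mm):
    """Tumor volume from three perpendicular maximum diameters, V = (pi/6)*a*b*c."""
    return np.pi / 6.0 * a_mm * b_mm * c_mm

def regression_slope(image_volumes, reference_volumes):
    """Slope of a no-intercept regression of image-based volume on the reference
    (excised, in-air micro-CT) volume; a slope of 1 means no systematic
    over- or under-estimation."""
    x = np.asarray(reference_volumes, dtype=float)
    y = np.asarray(image_volumes, dtype=float)
    return np.sum(x * y) / np.sum(x * x)

# illustrative numbers only
mri = [ellipsoid_volume(14, 10, 9), ellipsoid_volume(7, 6, 5)]
ref = [640.0, 105.0]
print(regression_slope(mri, ref))
```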
Atmospheric Dispersion Effects in Weak Lensing Measurements
Plazas, Andrés Alejandro; Bernstein, Gary
2012-10-01
The wavelength dependence of atmospheric refraction causes elongation of finite-bandwidth images along the elevation vector, which produces spurious signals in weak gravitational lensing shear measurements unless this atmospheric dispersion is calibrated and removed to high precision. Because astrometric solutions and PSF characteristics are typically calibrated from stellar images, differences between the reference stars' spectra and the galaxies' spectra will leave residual errors in both the astrometric positions (dr) and in the second moment (width) of the wavelength-averaged PSF (dv) for galaxies. We estimate the level of dv that will induce spurious weak lensing signals in PSF-corrected galaxy shapes that exceed the statistical errors of the DES and the LSST cosmic-shear experiments. We also estimate the dr signals that will produce unacceptable spurious distortions after stacking of exposures taken at different airmasses and hour angles. We also calculate the errors in the griz bands, and find that dispersion systematics, uncorrected, are up to 6 and 2 times larger in g and r bands, respectively, than the requirements for the DES error budget, but can be safely ignored in i and z bands. For the LSST requirements, the factors are about 30, 10, and 3 in g, r, and i bands, respectively. We find that a simple correction linear in galaxy color is accurate enough to reduce dispersion shear systematics to insignificant levels in the r band for DES and i band for LSST, but still as much as 5 times larger than the requirements for LSST r-band observations. More complex corrections will likely be able to reduce the systematic cosmic-shear errors below statistical errors for LSST r band. But g-band effects remain large enough that it seems likely that induced systematics will dominate the statistical errors of both surveys, and cosmic-shear measurements should rely on the redder bands.
Ground state properties of 3d metals from self-consistent GW approach
Kutepov, Andrey L.
2017-10-06
The self consistent GW approach (scGW) has been applied to calculate the ground state properties (equilibrium Wigner–Seitz radius S_WZ and bulk modulus B) of 3d transition metals Sc, Ti, V, Fe, Co, Ni, and Cu. The approach systematically underestimates S_WZ with an average relative deviation from the experimental data of about 1%, and it overestimates the calculated bulk modulus with a relative error of about 25%. We show that scGW is superior in accuracy as compared to the local density approximation but it is less accurate than the generalized gradient approach for the materials studied. If compared to the random phase approximation, scGW is slightly less accurate, but its error for 3d metals looks more systematic. Lastly, the systematic nature of the deviation from the experimental data suggests that the next order of the perturbation theory should allow one to reduce the error.
NASA Technical Reports Server (NTRS)
Casper, Paul W.; Bent, Rodney B.
1991-01-01
The algorithm used in previous technology time-of-arrival lightning mapping systems was based on the assumption that the earth is a perfect spheroid. These systems yield highly-accurate lightning locations, which is their major strength. However, extensive analysis of tower strike data has revealed occasionally significant (one to two kilometer) systematic offset errors which are not explained by the usual error sources. It was determined that these systematic errors reduce dramatically (in some cases) when the oblate shape of the earth is taken into account. The oblate spheroid correction algorithm and a case example is presented.
Error reduction and parameter optimization of the TAPIR method for fast T1 mapping.
Zaitsev, M; Steinhoff, S; Shah, N J
2003-06-01
A methodology is presented for the reduction of both systematic and random errors in T(1) determination using TAPIR, a Look-Locker-based fast T(1) mapping technique. The relations between various sequence parameters were carefully investigated in order to develop recipes for choosing optimal sequence parameters. Theoretical predictions for the optimal flip angle were verified experimentally. Inversion pulse imperfections were identified as the main source of systematic errors in T(1) determination with TAPIR. An effective remedy is demonstrated which includes extension of the measurement protocol to include a special sequence for mapping the inversion efficiency itself. Copyright 2003 Wiley-Liss, Inc.
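As context for the Look-Locker basis of TAPIR, a short Python sketch fitting the recovery curve S(t) = A − B·exp(−t/T1*) and applying the standard apparent-to-true correction T1 = T1*·(B/A − 1). TAPIR's actual protocol adds further corrections (e.g. the inversion-efficiency mapping mentioned above); the synthetic data are illustrative.

```python
import numpy as np
from scipy.optimize import curve_fit

def fit_look_locker_t1(t, signal):
    """Fit the Look-Locker recovery S(t) = A - B*exp(-t/T1_star) and apply the
    standard apparent-to-true correction T1 = T1_star * (B/A - 1).
    This is only the basic Look-Locker step, not the full TAPIR protocol."""
    model = lambda tt, A, B, t1s: A - B * np.exp(-tt / t1s)
    p0 = [signal.max(), 2.0 * signal.max(), t[len(t) // 2]]
    (A, B, t1_star), _ = curve_fit(model, t, signal, p0=p0)
    return t1_star * (B / A - 1.0)

# synthetic recovery curve (times in ms), apparent T1* = 700 ms, B/A = 2.4
t = np.linspace(50, 4000, 40)
A, B, t1_star = 1.0, 2.4, 700.0
signal = A - B * np.exp(-t / t1_star) + np.random.default_rng(0).normal(0, 0.01, t.size)
print(fit_look_locker_t1(t, signal))   # expected ~ 700*(2.4-1) = 980 ms
```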
The Red Edge Problem in asteroid band parameter analysis
NASA Astrophysics Data System (ADS)
Lindsay, Sean S.; Dunn, Tasha L.; Emery, Joshua P.; Bowles, Neil E.
2016-04-01
Near-infrared reflectance spectra of S-type asteroids contain two absorptions at 1 and 2 μm (band I and II) that are diagnostic of mineralogy. A parameterization of these two bands is frequently employed to determine the mineralogy of S(IV) asteroids through the use of ordinary chondrite calibration equations that link the mineralogy to band parameters. The most widely used calibration study uses a Band II terminal wavelength point (red edge) at 2.50 μm. However, due to the limitations of the NIR detectors on prominent telescopes used in asteroid research, spectral data for asteroids are typically only reliable out to 2.45 μm. We refer to this discrepancy as "The Red Edge Problem." In this report, we evaluate the associated errors for measured band area ratios (BAR = Area BII/BI) and calculated relative abundance measurements. We find that the Red Edge Problem is often not the dominant source of error for the observationally limited red edge set at 2.45 μm, but it frequently is for a red edge set at 2.40 μm. The error, however, is one sided and therefore systematic. As such, we provide equations to adjust measured BARs to values with a different red edge definition. We also provide new ol/(ol+px) calibration equations for red edges set at 2.40 and 2.45 μm.
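A simplified Python sketch of how the red-edge choice enters the band area ratio: each band area is measured against a straight-line continuum, and the Band II integration is cut at a configurable red edge. The fixed 0.70/1.40 μm band boundaries and the synthetic spectrum are illustrative assumptions; published analyses locate the continuum tie points from the local reflectance maxima instead.

```python
import numpy as np

def band_area(wavelength, reflectance, lo, hi):
    """Area between the spectrum and a straight-line continuum from lo to hi (in um)."""
    m = (wavelength >= lo) & (wavelength <= hi)
    w, r = wavelength[m], reflectance[m]
    continuum = np.interp(w, [w[0], w[-1]], [r[0], r[-1]])
    depth = 1.0 - r / continuum
    return np.sum(0.5 * (depth[1:] + depth[:-1]) * np.diff(w))   # trapezoidal sum

def band_area_ratio(wavelength, reflectance, red_edge=2.45):
    """BAR = Band II area / Band I area, with a configurable Band II red edge."""
    b1 = band_area(wavelength, reflectance, 0.70, 1.40)
    b2 = band_area(wavelength, reflectance, 1.40, red_edge)
    return b2 / b1

# synthetic two-band spectrum; comparing the two red-edge definitions on the same
# spectrum shows the one-sided (systematic) shift in BAR discussed above
wl = np.linspace(0.7, 2.5, 500)
refl = (1.0 - 0.30 * np.exp(-((wl - 1.00) / 0.12) ** 2)
            - 0.15 * np.exp(-((wl - 1.95) / 0.25) ** 2))
print(band_area_ratio(wl, refl, 2.45), band_area_ratio(wl, refl, 2.50))
```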
Quick, Jeffrey C
2014-01-01
Annual CO2 emission tallies for 210 coal-fired power plants during 2009 were more accurately calculated from fuel consumption records reported by the U.S. Energy Information Administration (EIA) than measurements from Continuous Emissions Monitoring Systems (CEMS) reported by the U.S. Environmental Protection Agency. Results from these accounting methods for individual plants vary by +/- 10.8%. Although the differences systematically vary with the method used to certify flue-gas flow instruments in CEMS, additional sources of CEMS measurement error remain to be identified. Limitations of the EIA fuel consumption data are also discussed. Consideration of weighing, sample collection, laboratory analysis, emission factor, and stock adjustment errors showed that the minimum error for CO2 emissions calculated from the fuel consumption data ranged from +/- 1.3% to +/- 7.2%, with a plant average of +/- 1.6%. This error might be reduced by 50% if the carbon content of coal delivered to U.S. power plants were reported. Potentially, this study might inform efforts to regulate CO2 emissions (such as CO2 performance standards or taxes) and, more immediately, the U.S. Greenhouse Gas Reporting Rule, where large coal-fired power plants currently use CEMS to measure CO2 emissions. Moreover, if, as suggested here, the flue-gas flow measurement limits the accuracy of CO2 emission tallies from CEMS, then the accuracy of other emission tallies from CEMS (such as SO2, NOx, and Hg) would be similarly affected. Consequently, improved flue-gas flow measurements are needed to increase the reliability of emission measurements from CEMS.
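The fuel-based accounting amounts to simple stoichiometry, sketched below in Python: tonnes of coal burned, times the carbon fraction from laboratory analysis, times the CO2-to-carbon molar mass ratio (44.010/12.011 ≈ 3.664). The plant figures are illustrative, not values from the study.

```python
def co2_from_coal(coal_tonnes, carbon_fraction):
    """CO2 emitted (tonnes) from burning coal, assuming complete combustion.

    Each tonne of carbon yields 44.010/12.011 tonnes of CO2 (molar mass ratio).
    carbon_fraction is the as-received carbon content from laboratory analysis.
    """
    return coal_tonnes * carbon_fraction * (44.010 / 12.011)

def percent_difference(cems_tonnes, calculated_tonnes):
    """Relative difference between the CEMS measurement and the fuel-based tally."""
    return 100.0 * (cems_tonnes - calculated_tonnes) / calculated_tonnes

# illustrative plant: 3.0 Mt of coal per year at 62% carbon
calc = co2_from_coal(3.0e6, 0.62)
print(calc, percent_difference(6.5e6, calc))
```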
Psychometric Evaluation of the Brachial Assessment Tool Part 1: Reproducibility.
Hill, Bridget; Williams, Gavin; Olver, John; Ferris, Scott; Bialocerkowski, Andrea
2018-04-01
To evaluate reproducibility (reliability and agreement) of the Brachial Assessment Tool (BrAT), a new patient-reported outcome measure for adults with traumatic brachial plexus injury (BPI). Prospective repeated-measure design. Outpatient clinics. Adults with confirmed traumatic BPI (N=43; age range, 19-82y). People with BPI completed the 31-item 4-response BrAT twice, 2 weeks apart. Results for the 3 subscales and summed score were compared at time 1 and time 2 to determine reliability, including systematic differences using paired t tests, test retest using intraclass correlation coefficient model 1,1 (ICC 1,1 ), and internal consistency using Cronbach α. Agreement parameters included standard error of measurement, minimal detectable change, and limits of agreement. BrAT. Test-retest reliability was excellent (ICC 1,1 =.90-.97). Internal consistency was high (Cronbach α=.90-.98). Measurement error was relatively low (standard error of measurement range, 3.1-8.8). A change of >4 for subscale 1, >6 for subscale 2, >4 for subscale 3, and >10 for the summed score is indicative of change over and above measurement error. Limits of agreement ranged from ±4.4 (subscale 3) to 11.61 (summed score). These findings support the use of the BrAT as a reproducible patient-reported outcome measure for adults with traumatic BPI with evidence of appropriate reliability and agreement for both individual and group comparisons. Further psychometric testing is required to establish the construct validity and responsiveness of the BrAT. Copyright © 2017 American Congress of Rehabilitation Medicine. Published by Elsevier Inc. All rights reserved.
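For reference, the standard formulas behind the agreement parameters reported above, sketched in Python: SEM = SD·sqrt(1 − ICC) and MDC95 = 1.96·sqrt(2)·SEM. The scores and ICC below are illustrative, not the BrAT data.

```python
import numpy as np

def sem_from_icc(scores, icc):
    """Standard error of measurement: SEM = SD * sqrt(1 - ICC)."""
    return np.std(scores, ddof=1) * np.sqrt(1.0 - icc)

def mdc95(sem):
    """Minimal detectable change at 95% confidence: MDC = 1.96 * sqrt(2) * SEM."""
    return 1.96 * np.sqrt(2.0) * sem

scores = np.array([42, 55, 61, 48, 70, 39, 58], dtype=float)  # illustrative summed scores
sem = sem_from_icc(scores, icc=0.95)
print(sem, mdc95(sem))   # change scores above the MDC exceed measurement error
```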
NASA Astrophysics Data System (ADS)
Wang, X.; Xu, L.
2018-04-01
One of the most important applications of remote sensing classification is water extraction. The water index (WI) based on Landsat images is one of the most common ways to distinguish water bodies from other land surface features. However, conventional WI methods take into account spectral information from only a limited number of bands, and therefore their accuracy may be constrained in areas covered with snow/ice, clouds, etc. An accurate and robust water extraction method therefore remains a key need. The support vector machine (SVM), which uses the spectral information from all bands, can reduce these classification errors to some extent. Nevertheless, the SVM barely considers spatial information and is relatively sensitive to noise in local regions. The conditional random field (CRF), which considers both spatial and spectral information, has proven able to compensate for these limitations. Hence, in this paper, we develop a systematic water extraction method that takes advantage of the complementarity between the SVM and a water-index-guided stochastic fully-connected conditional random field (SVM-WIGSFCRF) to address the above issues. In addition, we comprehensively evaluate the reliability and accuracy of the proposed method using Landsat-8 Operational Land Imager (OLI) images of one test site. We assess the method's performance by calculating the following accuracy metrics: omission error (OE), commission error (CE), kappa coefficient (KP), and total error (TE). Experimental results show that the new method can improve target detection accuracy under complex and changeable environments.
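The accuracy metrics listed above are all derived from a confusion matrix. The sketch below (random labels, not the paper's data) computes OE, CE, KP, and TE for a binary water / non-water classification; the class proportion and noise level are assumptions.

```python
import numpy as np

def water_metrics(truth, pred):
    tp = np.sum((truth == 1) & (pred == 1))
    fn = np.sum((truth == 1) & (pred == 0))   # water missed -> omission
    fp = np.sum((truth == 0) & (pred == 1))   # non-water called water -> commission
    tn = np.sum((truth == 0) & (pred == 0))
    n = tp + fn + fp + tn
    oe = fn / (tp + fn)
    ce = fp / (tp + fp)
    te = (fn + fp) / n
    po = (tp + tn) / n                         # observed agreement
    pe = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / n ** 2  # chance agreement
    kappa = (po - pe) / (1.0 - pe)
    return oe, ce, kappa, te

rng = np.random.default_rng(1)
truth = (rng.random(10000) < 0.3).astype(int)                 # 30% water pixels (assumed)
pred = np.where(rng.random(10000) < 0.95, truth, 1 - truth)   # 5% label noise (assumed)
print("OE=%.3f CE=%.3f KP=%.3f TE=%.3f" % water_metrics(truth, pred))
```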
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nagler, Peter C.; Tucker, Gregory S.; Fixsen, Dale J.
The detection of the primordial B-mode polarization signal of the cosmic microwave background (CMB) would provide evidence for inflation. Yet as has become increasingly clear, the detection of such a faint signal requires an instrument with both wide frequency coverage to reject foregrounds and excellent control over instrumental systematic effects. Using a polarizing Fourier transform spectrometer (FTS) for CMB observations meets both of these requirements. In this work, we present an analysis of instrumental systematic effects in polarizing FTSs, using the Primordial Inflation Explorer (PIXIE) as a worked example. We analytically solve for the most important systematic effects inherent to the FTS—emissive optical components, misaligned optical components, sampling and phase errors, and spin synchronous effects—and demonstrate that residual systematic error terms after corrections will all be at the sub-nK level, well below the predicted 100 nK B-mode signal.
Bazavov, A; Bernard, C; Bouchard, C M; Detar, C; Du, Daping; El-Khadra, A X; Foley, J; Freeland, E D; Gámiz, E; Gottlieb, Steven; Heller, U M; Kim, Jongjeong; Kronfeld, A S; Laiho, J; Levkova, L; Mackenzie, P B; Neil, E T; Oktay, M B; Qiu, Si-Wei; Simone, J N; Sugar, R; Toussaint, D; Van de Water, R S; Zhou, Ran
2014-03-21
We calculate the kaon semileptonic form factor f+(0) from lattice QCD, working, for the first time, at the physical light-quark masses. We use gauge configurations generated by the MILC Collaboration with Nf = 2 + 1 + 1 flavors of sea quarks, which incorporate the effects of dynamical charm quarks as well as those of up, down, and strange. We employ data at three lattice spacings to extrapolate to the continuum limit. Our result, f+(0) = 0.9704(32), where the error is the total statistical plus systematic uncertainty added in quadrature, is the most precise determination to date. Combining our result with the latest experimental measurements of K semileptonic decays, one obtains the Cabibbo-Kobayashi-Maskawa matrix element |V(us)| = 0.22290(74)(52), where the first error is from f+(0) and the second one is from experiment. In the first-row test of Cabibbo-Kobayashi-Maskawa unitarity, the error stemming from |V(us)| is now comparable to that from |V(ud)|.
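A minimal sketch of the error propagation behind the |V(us)| value quoted above. The lattice result f+(0) = 0.9704(32) is taken from the abstract; the experimental product |V(us)| f+(0) ≈ 0.2163(5) is an assumed stand-in for the averaged K semileptonic measurements, not a number from this record.

```python
import math

f0, df0 = 0.9704, 0.0032           # lattice form factor and its total error (from the abstract)
prod, dprod = 0.2163, 0.0005       # |Vus| * f+(0) from experiment (assumed value)

vus = prod / f0
err_from_lattice = vus * df0 / f0      # first error quoted in the abstract
err_from_expt    = vus * dprod / prod  # second error quoted in the abstract
total = math.hypot(err_from_lattice, err_from_expt)

print(f"|Vus| = {vus:.5f} ({err_from_lattice:.5f})_lat ({err_from_expt:.5f})_exp, "
      f"total {total:.5f}")
```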
Optics measurement algorithms and error analysis for the proton energy frontier
NASA Astrophysics Data System (ADS)
Langner, A.; Tomás, R.
2015-03-01
Optics measurement algorithms have been improved in preparation for the commissioning of the LHC at higher energy, i.e., with an increased damage potential. Due to machine protection considerations, the higher energy sets tighter limits on the maximum excitation amplitude and the total beam charge, reducing the signal-to-noise ratio of optics measurements. Furthermore, the precision in 2012 (4 TeV) was insufficient to understand beam size measurements and determine interaction point (IP) β-functions (β*). A new, more sophisticated algorithm has been developed which takes into account both the statistical and systematic errors involved in this measurement. This makes it possible to combine more beam position monitor measurements for deriving the optical parameters, and it is demonstrated to significantly improve the accuracy and precision. Measurements from the 2012 run have been reanalyzed; due to the improved algorithms, the derived optical parameters are significantly more precise, with average error bars decreased by a factor of three to four. This allowed the calculation of β* values and proved fundamental to understanding the emittance evolution during the energy ramp.
$B \to K l^+ l^-$ decay form factors from three-flavor lattice QCD
Bailey, Jon A.
2016-01-27
We compute the form factors for the B → K l+l- semileptonic decay process in lattice QCD using gauge-field ensembles with 2+1 flavors of sea quark, generated by the MILC Collaboration. The ensembles span lattice spacings from 0.12 to 0.045 fm and have multiple sea-quark masses to help control the chiral extrapolation. The asqtad improved staggered action is used for the light valence and sea quarks, and the clover action with the Fermilab interpretation is used for the heavy b quark. We present results for the form factors f+(q²), f0(q²), and fT(q²), where q² is the momentum transfer, together with a comprehensive examination of systematic errors. Lattice QCD determines the form factors for a limited range of q², and we use the model-independent z expansion to cover the whole kinematically allowed range. We present our final form-factor results as coefficients of the z expansion and the correlations between them, where the errors on the coefficients include statistical and all systematic uncertainties. Lastly, we use this complete description of the form factors to test QCD predictions of the form factors at high and low q².
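For readers unfamiliar with the model-independent z expansion mentioned above, the sketch below maps q² onto the conformal variable z and evaluates a simple truncated series. The meson masses, the t0 choice, and the example coefficients are illustrative assumptions, not the paper's fit results, and common normalization (pole) factors are omitted.

```python
import numpy as np

MB, MK = 5.2796, 0.4976                      # GeV, B and K meson masses
t_plus = (MB + MK) ** 2
t0 = t_plus * (1.0 - np.sqrt(1.0 - (MB - MK) ** 2 / t_plus))   # a common t0 choice

def z(q2):
    """Conformal mapping of q^2 onto the unit disk."""
    a = np.sqrt(t_plus - q2)
    b = np.sqrt(t_plus - t0)
    return (a - b) / (a + b)

def f_series(q2, coeffs):
    """Truncated series f(q^2) = sum_n b_n z^n (normalization factors omitted)."""
    zz = z(q2)
    return sum(b * zz ** n for n, b in enumerate(coeffs))

coeffs = [0.33, -0.9, 0.1]                   # placeholder expansion coefficients
for q2 in np.linspace(0.0, (MB - MK) ** 2, 5):
    print(f"q2 = {q2:6.2f} GeV^2   z = {z(q2):+.3f}   f = {f_series(q2, coeffs):.3f}")
```

With this t0 choice, |z| stays small (roughly 0.14 here) over the whole kinematic range, which is what makes a short truncated series adequate.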
NASA Astrophysics Data System (ADS)
Caimmi, R.
2011-08-01
Concerning bivariate least squares linear regression, the classical approach pursued for functional models in earlier attempts (York, 1966, 1969) is reviewed using a new formalism in terms of deviation (matrix) traces which, for unweighted data, reduce to the usual quantities, leaving aside an unessential (but dimensional) multiplicative factor. Within the framework of classical error models, the dependent variable relates to the independent variable according to the usual additive model. The classes of linear models considered are regression lines in the general case of correlated errors in X and in Y for weighted data, and in the opposite limiting situations of (i) uncorrelated errors in X and in Y, and (ii) completely correlated errors in X and in Y. The special case of (C) generalized orthogonal regression is considered in detail together with well-known subcases, namely: (Y) errors in X negligible (ideally null) with respect to errors in Y; (X) errors in Y negligible (ideally null) with respect to errors in X; (O) genuine orthogonal regression; (R) reduced major-axis regression. In the limit of unweighted data, the results determined for functional models are compared with their counterparts related to extreme structural models, i.e., where the instrumental scatter is negligible (ideally null) with respect to the intrinsic scatter (Isobe et al., 1990; Feigelson and Babu, 1992). While regression line slope and intercept estimators for functional and structural models necessarily coincide, the contrary holds for the related variance estimators even if the residuals obey a Gaussian distribution, with the exception of Y models. An example of astronomical application is considered, concerning the [O/H]-[Fe/H] empirical relations deduced from five samples related to different stars and/or different methods of oxygen abundance determination. For selected samples and assigned methods, different regression models yield consistent results within the errors (±σ) for both heteroscedastic and homoscedastic data. Conversely, samples related to different methods produce discrepant results, due to the presence of (still undetected) systematic errors, which implies no definitive statement can be made at present. A comparison is also made between different expressions of regression line slope and intercept variance estimators, where fractional discrepancies are found not to exceed a few percent, growing up to about 20% in the presence of large-dispersion data. An extension of the formalism to structural models is left to a forthcoming paper.
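The following unweighted toy sketch (not from the paper) contrasts three of the slope estimators discussed above: ordinary least squares (case Y, errors only in Y), orthogonal/Deming regression with an assumed error-variance ratio, and reduced major-axis regression (case R). The data and the variance ratio are assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
x_true = rng.uniform(-2, 2, 200)
y_true = 0.8 * x_true + 0.3
x = x_true + rng.normal(0, 0.2, x_true.size)      # errors in X
y = y_true + rng.normal(0, 0.2, x_true.size)      # errors in Y

sxx = np.sum((x - x.mean()) ** 2)
syy = np.sum((y - y.mean()) ** 2)
sxy = np.sum((x - x.mean()) * (y - y.mean()))

b_ols = sxy / sxx                                  # case (Y): errors only in Y
lam = 1.0                                          # assumed var(err_Y) / var(err_X)
b_orth = ((syy - lam * sxx)
          + np.sqrt((syy - lam * sxx) ** 2 + 4 * lam * sxy ** 2)) / (2 * sxy)
b_rma = np.sign(sxy) * np.sqrt(syy / sxx)          # case (R): reduced major axis

for name, b in [("OLS", b_ols), ("orthogonal", b_orth), ("RMA", b_rma)]:
    a = y.mean() - b * x.mean()                    # all lines pass through the centroid
    print(f"{name:10s} slope = {b:.3f}  intercept = {a:.3f}")
```

With comparable errors in both variables, the OLS slope is biased low while the orthogonal and RMA estimates sit closer to the true slope, which is the practical point behind distinguishing the (Y), (O), and (R) subcases.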
Shack-Hartmann Phasing of Segmented Telescopes: Systematic Effects from Lenslet Arrays
NASA Technical Reports Server (NTRS)
Troy, Mitchell; Chanan, Gary; Roberts, Jennifer
2010-01-01
The segments in the Keck telescopes are routinely phased using a Shack-Hartmann wavefront sensor with sub-apertures that span adjacent segments. However, one potential limitation to the absolute accuracy of this technique is that it relies on a lenslet array (or a single lens plus a prism array) to form the subimages. These optics have the potential to introduce wavefront errors and stray reflections at the subaperture level that will bias the phasing measurement. We present laboratory data to quantify this effect, using measured errors from Keck and two other lenslet arrays. In addition, as part of the design of the Thirty Meter Telescope Alignment and Phasing System we present a preliminary investigation of a lenslet-free approach that relies on Fresnel diffraction to form the subimages at the CCD. Such a technique has several advantages, including the elimination of lenslet aberrations.
NASA Astrophysics Data System (ADS)
Bhargava, K.; Kalnay, E.; Carton, J.; Yang, F.
2017-12-01
Systematic forecast errors, arising from model deficiencies, form a significant portion of the total forecast error in weather prediction models like the Global Forecast System (GFS). While much effort has been expended to improve models, substantial model error remains. The aim here is to (i) estimate the model deficiencies in the GFS that lead to systematic forecast errors, and (ii) implement an online (i.e., within the model) correction scheme for the GFS following the methodology of Danforth et al. [2007] and Danforth and Kalnay [2008, GRL]. Analysis increments represent the corrections that new observations make to, in this case, the 6-hr forecast in the analysis cycle. Model bias corrections are estimated from the time average of the analysis increments divided by 6 hr, assuming that initial model errors grow linearly and first ignoring the impact of observation bias. During 2012-2016, seasonal means of the 6-hr model bias are generally robust despite changes in model resolution and data assimilation systems, and their broad continental scales explain their insensitivity to model resolution. The daily bias dominates the sub-monthly analysis increments and consists primarily of diurnal and semidiurnal components, also requiring a low-dimensional correction. Analysis increments in 2015 and 2016 are reduced over oceans, which is attributed to improvements in the specification of the SSTs. These results encourage application of online correction, as suggested by Danforth and Kalnay, for mean, seasonal, diurnal, and semidiurnal model biases in the GFS to reduce both systematic and random errors. As the error growth in the short term is still linear, estimated model bias corrections can be added as a forcing term in the model tendency equation to correct online. Preliminary experiments with the GFS, correcting temperature and specific humidity online, show a reduction in model bias in the 6-hr forecast. This approach can then be used to guide and optimize the design of sub-grid-scale physical parameterizations, more accurate discretization of the model dynamics, boundary conditions, radiative transfer codes, and other potential model improvements which can then replace the empirical correction scheme. The analysis increments also provide guidance in testing new physical parameterizations.
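A toy one-variable sketch (not the GFS) of the online correction idea described above: a bias tendency is estimated from time-averaged analysis increments (increment divided by 6 hr) and added as a forcing term in the model tendency. The bias magnitude and noise level are assumptions.

```python
import numpy as np

dt_hours = 6.0
rng = np.random.default_rng(3)

true_bias_per_step = -0.12                      # K per 6 h model deficiency (assumed)
# Analysis increments correct the forecast, so on average they oppose the bias
increments = -true_bias_per_step + rng.normal(0, 0.05, size=400)

bias_tendency = increments.mean() / dt_hours    # K per hour, estimated offline

def step(state, forecast_tendency, corrected=True):
    """Advance one 6-h step; optionally add the empirical correction online."""
    tend = forecast_tendency + (bias_tendency if corrected else 0.0)
    return state + tend * dt_hours

t_uncorr = t_corr = 280.0
model_tendency = true_bias_per_step / dt_hours  # drift-only toy tendency
for _ in range(20):                             # five days of 6-h steps
    t_uncorr = step(t_uncorr, model_tendency, corrected=False)
    t_corr = step(t_corr, model_tendency, corrected=True)

print(f"after 5 days: uncorrected drift {t_uncorr - 280:.2f} K, "
      f"corrected {t_corr - 280:.2f} K")
```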
Marathe, A R; Taylor, D M
2015-08-01
Decoding algorithms for brain-machine interfacing (BMI) are typically only optimized to reduce the magnitude of decoding errors. Our goal was to systematically quantify how four characteristics of BMI command signals impact closed-loop performance: (1) error magnitude, (2) distribution of different frequency components in the decoding errors, (3) processing delays, and (4) command gain. To systematically evaluate these different command features and their interactions, we used a closed-loop BMI simulator where human subjects used their own wrist movements to command the motion of a cursor to targets on a computer screen. Random noise with three different power distributions and four different relative magnitudes was added to the ongoing cursor motion in real time to simulate imperfect decoding. These error characteristics were tested with four different visual feedback delays and two velocity gains. Participants had significantly more trouble correcting for errors with a larger proportion of low-frequency, slow-time-varying components than they did with jittery, higher-frequency errors, even when the error magnitudes were equivalent. When errors were present, a movement delay often increased the time needed to complete the movement by an order of magnitude more than the delay itself. Scaling down the overall speed of the velocity command can actually speed up target acquisition time when low-frequency errors and delays are present. This study is the first to systematically evaluate how the combination of these four key command signal features (including the relatively-unexplored error power distribution) and their interactions impact closed-loop performance independent of any specific decoding method. The equations we derive relating closed-loop movement performance to these command characteristics can provide guidance on how best to balance these different factors when designing BMI systems. The equations reported here also provide an efficient way to compare a diverse range of decoding options offline.
NASA Astrophysics Data System (ADS)
Marathe, A. R.; Taylor, D. M.
2015-08-01
Objective. Decoding algorithms for brain-machine interfacing (BMI) are typically only optimized to reduce the magnitude of decoding errors. Our goal was to systematically quantify how four characteristics of BMI command signals impact closed-loop performance: (1) error magnitude, (2) distribution of different frequency components in the decoding errors, (3) processing delays, and (4) command gain. Approach. To systematically evaluate these different command features and their interactions, we used a closed-loop BMI simulator where human subjects used their own wrist movements to command the motion of a cursor to targets on a computer screen. Random noise with three different power distributions and four different relative magnitudes was added to the ongoing cursor motion in real time to simulate imperfect decoding. These error characteristics were tested with four different visual feedback delays and two velocity gains. Main results. Participants had significantly more trouble correcting for errors with a larger proportion of low-frequency, slow-time-varying components than they did with jittery, higher-frequency errors, even when the error magnitudes were equivalent. When errors were present, a movement delay often increased the time needed to complete the movement by an order of magnitude more than the delay itself. Scaling down the overall speed of the velocity command can actually speed up target acquisition time when low-frequency errors and delays are present. Significance. This study is the first to systematically evaluate how the combination of these four key command signal features (including the relatively-unexplored error power distribution) and their interactions impact closed-loop performance independent of any specific decoding method. The equations we derive relating closed-loop movement performance to these command characteristics can provide guidance on how best to balance these different factors when designing BMI systems. The equations reported here also provide an efficient way to compare a diverse range of decoding options offline.
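A minimal sketch (not the study's simulator) of one ingredient of the experiment above: generating error signals with equal RMS magnitude but different frequency content, so that slowly varying low-frequency errors can be compared against jittery high-frequency errors. The sample rate, band edges, and RMS level are assumptions.

```python
import numpy as np

fs = 100.0                       # sample rate in Hz (assumed)
n = 2000
rng = np.random.default_rng(4)

def band_limited_noise(f_lo, f_hi, rms):
    """White noise filtered to [f_lo, f_hi] in the frequency domain, rescaled to a target RMS."""
    spec = np.fft.rfft(rng.normal(size=n))
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    spec[(freqs < f_lo) | (freqs > f_hi)] = 0.0
    sig = np.fft.irfft(spec, n)
    return sig * (rms / np.sqrt(np.mean(sig ** 2)))

low_freq_err  = band_limited_noise(0.1, 1.0, rms=0.5)    # slow drifts
high_freq_err = band_limited_noise(5.0, 20.0, rms=0.5)   # jitter
print(np.sqrt(np.mean(low_freq_err ** 2)),
      np.sqrt(np.mean(high_freq_err ** 2)))              # both ~0.5 by construction
```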
Schnock, Kumiko O; Biggs, Bonnie; Fladger, Anne; Bates, David W; Rozenblum, Ronen
2017-02-22
Retained surgical instruments (RSI) are one of the most serious preventable complications in operating room settings, potentially leading to profound adverse effects for patients, as well as costly legal and financial consequences for hospitals. Safety measures to eliminate RSIs have been widely adopted in the United States and abroad, but despite widespread efforts, medical errors with RSI have not been eliminated. Through a systematic review of recent studies, we aimed to identify the impact of radio frequency identification (RFID) technology on reducing RSI errors and improving patient safety. A literature search on the effects of RFID technology on RSI error reduction was conducted in PubMed and CINAHL (2000-2016). Relevant articles were selected and reviewed by 4 researchers. After the literature search, 385 articles were identified and the full texts of the 88 articles were assessed for eligibility. Of these, 5 articles were included to evaluate the benefits and drawbacks of using RFID for preventing RSI-related errors. The use of RFID resulted in rapid detection of RSI through body tissue with high accuracy rates, reducing risk of counting errors and improving workflow. Based on the existing literature, RFID technology seems to have the potential to substantially improve patient safety by reducing RSI errors, although the body of evidence is currently limited. Better designed research studies are needed to get a clear understanding of this domain and to find new opportunities to use this technology and improve patient safety.
Application of a Laplace transform pair model for high-energy x-ray spectral reconstruction.
Archer, B R; Almond, P R; Wagner, L K
1985-01-01
A Laplace transform pair model, previously shown to accurately reconstruct x-ray spectra at diagnostic energies, has been applied to megavoltage energy beams. The inverse Laplace transforms of 2-, 6-, and 25-MV attenuation curves were evaluated to determine the energy spectra of these beams. The 2-MV data indicate that the model can reliably reconstruct spectra in the low megavoltage range. Experimental limitations in acquiring the 6-MV transmission data demonstrate the sensitivity of the model to systematic experimental error. The 25-MV data result in a physically realistic approximation of the present spectrum.
Three-dimensional assessment of facial asymmetry: A systematic review.
Akhil, Gopi; Senthil Kumar, Kullampalayam Palanisamy; Raja, Subramani; Janardhanan, Kumaresan
2015-08-01
For patients with facial asymmetry, a complete and precise diagnosis and surgical treatment to correct the underlying cause of the asymmetry are important. Conventional diagnostic radiographs (submento-vertex projections, posteroanterior radiography) have limitations in asymmetry diagnosis due to their two-dimensional assessment of three-dimensional (3D) structures. The advent of 3D imaging has greatly reduced the magnification and projection errors that are common in conventional radiographs, making it a precise diagnostic aid for the assessment of facial asymmetry. Thus, this article attempts to review the newly introduced 3D tools in the diagnosis of more complex facial asymmetries.
Sources of errors in the simulation of south Asian summer monsoon in the CMIP5 GCMs
Ashfaq, Moetasim; Rastogi, Deeksha; Mei, Rui; ...
2016-09-19
Accurate simulation of the South Asian summer monsoon (SAM) is still an unresolved challenge. There has not been a benchmark effort to decipher the origin of undesired yet virtually invariable unsuccessfulness of general circulation models (GCMs) over this region. This study analyzes a large ensemble of CMIP5 GCMs to show that most of the simulation errors in the precipitation distribution and their driving mechanisms are systematic and of similar nature across the GCMs, with biases in meridional differential heating playing a critical role in determining the timing of monsoon onset over land, the magnitude of seasonal precipitation distribution and the trajectories of monsoon depressions. Errors in the pre-monsoon heat low over the lower latitudes and atmospheric latent heating over the slopes of Himalayas and Karakoram Range induce significant errors in the atmospheric circulations and meridional differential heating. Lack of timely precipitation further exacerbates such errors by limiting local moisture recycling and latent heating aloft from convection. Most of the summer monsoon errors and their sources are reproducible in the land–atmosphere configuration of a GCM when it is configured at horizontal grid spacing comparable to the CMIP5 GCMs. While an increase in resolution overcomes many modeling challenges, coarse resolution is not necessarily the primary driver in the exhibition of errors over South Asia. Ultimately, these results highlight the importance of previously less well known pre-monsoon mechanisms that critically influence the strength of SAM in the GCMs and highlight the importance of land–atmosphere interactions in the development and maintenance of SAM.
Dominant Drivers of GCMs Errors in the Simulation of South Asian Summer Monsoon
NASA Astrophysics Data System (ADS)
Ashfaq, Moetasim
2017-04-01
Accurate simulation of the South Asian summer monsoon (SAM) is a longstanding unresolved problem in climate modeling science. There has not been a benchmark effort to decipher the origin of undesired yet virtually invariable unsuccessfulness of general circulation models (GCMs) over this region. This study analyzes a large ensemble of CMIP5 GCMs to demonstrate that most of the simulation errors in the summer season and their driving mechanisms are systematic and of similar nature across the GCMs, with biases in meridional differential heating playing a critical role in determining the timing of monsoon onset over land, the magnitude of seasonal precipitation distribution and the trajectories of monsoon depressions. Errors in the pre-monsoon heat low over the lower latitudes and atmospheric latent heating over the slopes of Himalayas and Karakoram Range induce significant errors in the atmospheric circulations and meridional differential heating. Lack of timely precipitation over land further exacerbates such errors by limiting local moisture recycling and latent heating aloft from convection. Most of the summer monsoon errors and their sources are reproducible in the land-atmosphere configuration of a GCM when it is configured at horizontal grid spacing comparable to the CMIP5 GCMs. While an increase in resolution overcomes many modeling challenges, coarse resolution is not necessarily the primary driver in the exhibition of errors over South Asia. These results highlight the importance of previously less well known pre-monsoon mechanisms that critically influence the strength of SAM in the GCMs and highlight the importance of land-atmosphere interactions in the development and maintenance of SAM.
NASA Astrophysics Data System (ADS)
Fernandez, Alvaro; Müller, Inigo A.; Rodríguez-Sanz, Laura; van Dijk, Joep; Looser, Nathan; Bernasconi, Stefano M.
2017-12-01
Carbonate clumped isotopes offer a potentially transformational tool to interpret Earth's history, but the proxy is still limited by poor interlaboratory reproducibility. Here, we focus on the uncertainties that result from the analysis of only a few replicate measurements to understand the extent to which unconstrained errors affect calibration relationships and paleoclimate reconstructions. We find that highly precise data can be routinely obtained with multiple replicate analyses, but this is not always done in many laboratories. For instance, using published estimates of external reproducibilities we find that typical clumped isotope measurements (three replicate analyses) have margins of error at the 95% confidence level (CL) that are too large for many applications. These errors, however, can be systematically reduced with more replicate measurements. Second, using a Monte Carlo-type simulation we demonstrate that the degree of disagreement on published calibration slopes is about what we should expect considering the precision of Δ47 data, the number of samples and replicate analyses, and the temperature range covered in published calibrations. Finally, we show that the way errors are typically reported in clumped isotope data can be problematic and lead to the impression that data are more precise than warranted. We recommend that uncertainties in Δ47 data should no longer be reported as the standard error of a few replicate measurements. Instead, uncertainties should be reported as margins of error at a specified confidence level (e.g., 68% or 95% CL). These error bars are a more realistic indication of the reliability of a measurement.
Sources of errors in the simulation of south Asian summer monsoon in the CMIP5 GCMs
NASA Astrophysics Data System (ADS)
Ashfaq, Moetasim; Rastogi, Deeksha; Mei, Rui; Touma, Danielle; Ruby Leung, L.
2017-07-01
Accurate simulation of the South Asian summer monsoon (SAM) is still an unresolved challenge. There has not been a benchmark effort to decipher the origin of undesired yet virtually invariable unsuccessfulness of general circulation models (GCMs) over this region. This study analyzes a large ensemble of CMIP5 GCMs to show that most of the simulation errors in the precipitation distribution and their driving mechanisms are systematic and of similar nature across the GCMs, with biases in meridional differential heating playing a critical role in determining the timing of monsoon onset over land, the magnitude of seasonal precipitation distribution and the trajectories of monsoon depressions. Errors in the pre-monsoon heat low over the lower latitudes and atmospheric latent heating over the slopes of Himalayas and Karakoram Range induce significant errors in the atmospheric circulations and meridional differential heating. Lack of timely precipitation further exacerbates such errors by limiting local moisture recycling and latent heating aloft from convection. Most of the summer monsoon errors and their sources are reproducible in the land-atmosphere configuration of a GCM when it is configured at horizontal grid spacing comparable to the CMIP5 GCMs. While an increase in resolution overcomes many modeling challenges, coarse resolution is not necessarily the primary driver in the exhibition of errors over South Asia. These results highlight the importance of previously less well known pre-monsoon mechanisms that critically influence the strength of SAM in the GCMs and highlight the importance of land-atmosphere interactions in the development and maintenance of SAM.
Sources of errors in the simulation of south Asian summer monsoon in the CMIP5 GCMs
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ashfaq, Moetasim; Rastogi, Deeksha; Mei, Rui
2016-09-19
Accurate simulation of the South Asian summer monsoon (SAM) is still an unresolved challenge. There has not been a benchmark effort to decipher the origin of undesired yet virtually invariable unsuccessfulness of general circulation models (GCMs) over this region. This study analyzes a large ensemble of CMIP5 GCMs to show that most of the simulation errors in the precipitation distribution and their driving mechanisms are systematic and of similar nature across the GCMs, with biases in meridional differential heating playing a critical role in determining the timing of monsoon onset over land, the magnitude of seasonal precipitation distribution and the trajectories of monsoon depressions. Errors in the pre-monsoon heat low over the lower latitudes and atmospheric latent heating over the slopes of Himalayas and Karakoram Range induce significant errors in the atmospheric circulations and meridional differential heating. Lack of timely precipitation further exacerbates such errors by limiting local moisture recycling and latent heating aloft from convection. Most of the summer monsoon errors and their sources are reproducible in the land–atmosphere configuration of a GCM when it is configured at horizontal grid spacing comparable to the CMIP5 GCMs. While an increase in resolution overcomes many modeling challenges, coarse resolution is not necessarily the primary driver in the exhibition of errors over South Asia. These results highlight the importance of previously less well known pre-monsoon mechanisms that critically influence the strength of SAM in the GCMs and highlight the importance of land–atmosphere interactions in the development and maintenance of SAM.
The Observational Determination of the Primordial Helium Abundance: a Y2K Status Report
NASA Astrophysics Data System (ADS)
Skillman, Evan D.
I review observational progress and assess the current state of the determination of the primordial helium abundance, Yp. At present there are two determinations with non-overlapping errors. My impression is that the errors have been underestimated in both studies. I review recent work on error assessment and give suggestions for decreasing systematic errors in future studies.
Improved Quality in Aerospace Testing Through the Modern Design of Experiments
NASA Technical Reports Server (NTRS)
DeLoach, R.
2000-01-01
This paper illustrates how, in the presence of systematic error, the quality of an experimental result can be influenced by the order in which the independent variables are set. It is suggested that in typical experimental circumstances in which systematic errors are significant, the common practice of organizing the set point order of independent variables to maximize data acquisition rate results in a test matrix that fails to produce the highest quality research result. With some care to match the volume of data required to satisfy inference error risk tolerances, it is possible to accept a lower rate of data acquisition and still produce results of higher technical quality (lower experimental error) with less cost and in less time than conventional test procedures, simply by optimizing the sequence in which independent variable levels are set.
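A toy illustration (not from the paper) of the point made above: when a slow systematic drift is present, setting the independent variable in a monotonic order confounds the drift with the factor effect, while a randomized set-point order largely averages it out. The drift rate, noise level, and number of runs are assumptions.

```python
import numpy as np

rng = np.random.default_rng(5)
levels = np.linspace(-1.0, 1.0, 20)          # independent-variable set points
true_slope = 2.0
drift_per_run = 0.05                         # systematic instrument drift per run (assumed)

def run_experiment(order):
    x = levels[order]
    drift = drift_per_run * np.arange(len(x))        # grows with run number, not with x
    y = true_slope * x + drift + rng.normal(0, 0.05, len(x))
    return np.polyfit(x, y, 1)[0]                    # fitted slope

seq_slope  = run_experiment(np.argsort(levels))      # monotone set-point order
rand_slope = run_experiment(rng.permutation(len(levels)))

print(f"true slope {true_slope:.2f}  sequential {seq_slope:.2f}  "
      f"randomized {rand_slope:.2f}")
```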
Detecting Spatial Patterns in Biological Array Experiments
ROOT, DAVID E.; KELLEY, BRIAN P.; STOCKWELL, BRENT R.
2005-01-01
Chemical genetic screening and DNA and protein microarrays are among a number of increasingly important and widely used biological research tools that involve large numbers of parallel experiments arranged in a spatial array. It is often difficult to ensure that uniform experimental conditions are present throughout the entire array, and as a result, one often observes systematic spatially correlated errors, especially when array experiments are performed using robots. Here, the authors apply techniques based on the discrete Fourier transform to identify and quantify spatially correlated errors superimposed on a spatially random background. They demonstrate that these techniques are effective in identifying common spatially systematic errors in high-throughput 384-well microplate assay data. In addition, the authors employ a statistical test to allow for automatic detection of such errors. Software tools for using this approach are provided. PMID:14567791
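The sketch below (synthetic data, not the authors' software) illustrates the core idea above: a 2-D discrete Fourier transform of a 16 x 24 (384-well) plate makes a spatially periodic error, here an assumed alternating-column dispense bias, stand out against a spatially random background.

```python
import numpy as np

rng = np.random.default_rng(6)
plate = rng.normal(1.0, 0.05, size=(16, 24))       # random assay background
plate[:, ::2] += 0.10                              # systematic bias on every other column

spec = np.abs(np.fft.fft2(plate - plate.mean())) ** 2
spec_norm = spec / spec.sum()

# The alternating-column pattern concentrates power at the column Nyquist frequency
row_idx, col_idx = np.unravel_index(np.argmax(spec_norm), spec_norm.shape)
print(f"strongest spatial frequency at (row k={row_idx}, col k={col_idx}), "
      f"fraction of total power {spec_norm[row_idx, col_idx]:.2f}")
```

A spatially random plate spreads its power roughly evenly over all frequencies, so a single bin carrying a large fraction of the power is the signature of a systematic spatial error; a statistical threshold on that fraction gives the kind of automatic detection the abstract describes.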
Zhao, Qilong; Strykowski, Gabriel; Li, Jiancheng; Pan, Xiong; Xu, Xinyu
2017-05-25
Gravity data gaps in mountainous areas are nowadays often filled in with the data from airborne gravity surveys. Errors caused by the airborne gravimeter sensors and by rough flight conditions cannot be completely eliminated. The precision of the gravity disturbances generated by airborne gravimetry is around 3-5 mgal. A major obstacle in using airborne gravimetry is the error caused by the downward continuation. In order to improve the results, external high-accuracy gravity information, e.g., from surface data, can be used for high-frequency correction, while satellite information can be applied for low-frequency correction. Surface data may be used to reduce the systematic errors, while regularization methods can reduce the random errors in downward continuation. Airborne gravity surveys are sometimes conducted in mountainous areas, and the most extreme area of the world for this type of survey is the Tibetan Plateau. Since there are no high-accuracy surface gravity data available for this area, the above error minimization method involving external gravity data cannot be used. We propose a semi-parametric downward continuation method in combination with regularization to suppress the systematic error effect and the random error effect in the Tibetan Plateau, i.e., without the use of external high-accuracy gravity data. We use a Louisiana airborne gravity dataset from the USA National Oceanic and Atmospheric Administration (NOAA) to demonstrate that the new method works effectively. Furthermore, for the Tibetan Plateau we show that the numerical experiment is also successfully conducted using synthetic Earth Gravitational Model 2008 (EGM08)-derived gravity data contaminated with synthetic errors. The estimated systematic errors generated by the method are close to the simulated values. In addition, we study the relationship between the downward continuation altitudes and the error effect. The analysis results show that the proposed semi-parametric method combined with regularization is efficient to address such modelling problems.
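The toy 1-D sketch below (not the paper's semi-parametric algorithm) illustrates why regularization is needed in downward continuation: continuation to flight altitude acts like a smoothing operator A, and inverting it amplifies measurement noise unless a Tikhonov penalty is added. The kernel width, noise level, and regularization parameters are all assumptions.

```python
import numpy as np

n = 200
x_grid = np.linspace(0, 100, n)                     # km along a profile
dx = x_grid[1] - x_grid[0]

# Gaussian smoothing kernel standing in for upward continuation to flight altitude
width = 5.0                                         # km (assumed)
A = np.exp(-0.5 * ((x_grid[:, None] - x_grid[None, :]) / width) ** 2)
A *= dx / (width * np.sqrt(2 * np.pi))              # interior rows integrate to ~1

g_surface = (np.exp(-0.5 * ((x_grid - 40) / 3) ** 2)
             + 0.6 * np.exp(-0.5 * ((x_grid - 70) / 6) ** 2))
rng = np.random.default_rng(7)
g_airborne = A @ g_surface + rng.normal(0, 0.01, n) # "measured" at altitude, with noise

def downward(alpha):
    """Tikhonov-regularized inversion: minimize ||A g - d||^2 + alpha ||g||^2."""
    return np.linalg.solve(A.T @ A + alpha * np.eye(n), A.T @ g_airborne)

for alpha in (1e-8, 1e-5, 1e-2):
    rms = np.sqrt(np.mean((downward(alpha) - g_surface) ** 2))
    print(f"alpha = {alpha:g}   RMS error vs. true surface field = {rms:.3f}")
```

Too little regularization lets the small singular values of A blow the random noise up; too much over-smooths the recovered field, which is the trade-off the regularization methods cited above have to manage.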
Zhao, Qilong; Strykowski, Gabriel; Li, Jiancheng; Pan, Xiong; Xu, Xinyu
2017-01-01
Gravity data gaps in mountainous areas are nowadays often filled in with the data from airborne gravity surveys. Errors caused by the airborne gravimeter sensors and by rough flight conditions cannot be completely eliminated. The precision of the gravity disturbances generated by airborne gravimetry is around 3–5 mgal. A major obstacle in using airborne gravimetry is the error caused by the downward continuation. In order to improve the results, external high-accuracy gravity information, e.g., from surface data, can be used for high-frequency correction, while satellite information can be applied for low-frequency correction. Surface data may be used to reduce the systematic errors, while regularization methods can reduce the random errors in downward continuation. Airborne gravity surveys are sometimes conducted in mountainous areas, and the most extreme area of the world for this type of survey is the Tibetan Plateau. Since there are no high-accuracy surface gravity data available for this area, the above error minimization method involving external gravity data cannot be used. We propose a semi-parametric downward continuation method in combination with regularization to suppress the systematic error effect and the random error effect in the Tibetan Plateau, i.e., without the use of external high-accuracy gravity data. We use a Louisiana airborne gravity dataset from the USA National Oceanic and Atmospheric Administration (NOAA) to demonstrate that the new method works effectively. Furthermore, for the Tibetan Plateau we show that the numerical experiment is also successfully conducted using synthetic Earth Gravitational Model 2008 (EGM08)-derived gravity data contaminated with synthetic errors. The estimated systematic errors generated by the method are close to the simulated values. In addition, we study the relationship between the downward continuation altitudes and the error effect. The analysis results show that the proposed semi-parametric method combined with regularization is efficient to address such modelling problems. PMID:28587086
NASA Astrophysics Data System (ADS)
Zhao, Q.
2017-12-01
Gravity data gaps in mountainous areas are nowadays often filled in with the data from airborne gravity surveys. Errors caused by the airborne gravimeter sensors and by rough flight conditions cannot be completely eliminated. The precision of the gravity disturbances generated by airborne gravimetry is around 3-5 mgal. A major obstacle in using airborne gravimetry is the error caused by the downward continuation. In order to improve the results, external high-accuracy gravity information, e.g., from surface data, can be used for high-frequency correction, while satellite information can be applied for low-frequency correction. Surface data may be used to reduce the systematic errors, while regularization methods can reduce the random errors in downward continuation. Airborne gravity surveys are sometimes conducted in mountainous areas, and the most extreme area of the world for this type of survey is the Tibetan Plateau. Since there are no high-accuracy surface gravity data available for this area, the above error minimization method involving external gravity data cannot be used. We propose a semi-parametric downward continuation method in combination with regularization to suppress the systematic error effect and the random error effect in the Tibetan Plateau, i.e., without the use of external high-accuracy gravity data. We use a Louisiana airborne gravity dataset from the USA National Oceanic and Atmospheric Administration (NOAA) to demonstrate that the new method works effectively. Furthermore, for the Tibetan Plateau we show that the numerical experiment is also successfully conducted using synthetic Earth Gravitational Model 2008 (EGM08)-derived gravity data contaminated with synthetic errors. The estimated systematic errors generated by the method are close to the simulated values. In addition, we study the relationship between the downward continuation altitudes and the error effect. The analysis results show that the proposed semi-parametric method combined with regularization is efficient to address such modelling problems.
NASA Astrophysics Data System (ADS)
Helhel, S.; Khamitov, I.; Kahya, G.; Bayar, C.; Kaynar, S.; Gumerov, R.
2015-10-01
The photometric and spectroscopic observation capabilities of the 1.5-m Russian-Turkish Telescope RTT150 have been broadened with the integration of the presented polarimeter. The well-known double-wedged Wollaston-type dual-beam technique was chosen for its design and construction. The polarimeter was integrated into the telescope detector TFOSC and is called TFOSC-WP. Its capabilities and limitations were assessed with a number of observation sets. Non-polarized and strongly polarized stars were observed to determine its limitations as well as its linearity. The instrumental intrinsic polarization was determined for the 1 × 5 arcmin field of view in the equatorial coordinate system; the systematic error of the polarization degree is 0.2% and that of the position angle is 1.9°. These capabilities and limitations are good enough to support the telescope's present and future astrophysical programmes related to the GAIA and SRG space missions.
Stone, Daithi A.; Hansen, Gerrit
2015-11-21
Despite being a well-established research field, the detection and attribution of observed climate change to anthropogenic forcing is not yet provided as a climate service. One reason for this is the lack of a methodology for performing tailored detection and attribution assessments on a rapid time scale. Here we develop such an approach, based on the translation of quantitative analysis into the “confidence” language employed in recent Assessment Reports of the Intergovernmental Panel on Climate Change. While its systematic nature necessarily ignores some nuances examined in detailed expert assessments, the approach nevertheless goes beyond most detection and attribution studies in considering contributors to building confidence such as errors in observational data products arising from sparse monitoring networks. When compared against recent expert assessments, the results of this approach closely match those of the existing assessments. Where there are small discrepancies, these variously reflect ambiguities in the details of what is being assessed, reveal nuances or limitations of the expert assessments, or indicate limitations of the accuracy of the sort of systematic approach employed here. Deployment of the method on 116 regional assessments of recent temperature and precipitation changes indicates that existing rules of thumb concerning the detectability of climate change ignore the full range of sources of uncertainty, most particularly the importance of adequate observational monitoring.
NASA Technical Reports Server (NTRS)
Heck, M. L.; Findlay, J. T.; Compton, H. R.
1983-01-01
The Aerodynamic Coefficient Identification Package (ACIP) is an instrument consisting of body mounted linear accelerometers, rate gyros, and angular accelerometers for measuring the Space Shuttle vehicular dynamics. The high rate recorded data are utilized for postflight aerodynamic coefficient extraction studies. Although consistent with pre-mission accuracies specified by the manufacturer, the ACIP data were found to contain detectable levels of systematic error, primarily bias, as well as scale factor, static misalignment, and temperature dependent errors. This paper summarizes the technique whereby the systematic ACIP error sources were detected, identified, and calibrated with the use of recorded dynamic data from the low rate, highly accurate Inertial Measurement Units.
Lau, Billy T; Ji, Hanlee P
2017-09-21
RNA-Seq measures gene expression by counting sequence reads belonging to unique cDNA fragments. Molecular barcodes, commonly in the form of random nucleotides, were recently introduced to improve gene expression measures by detecting amplification duplicates, but they are susceptible to errors generated during PCR and sequencing. This results in false positive counts, leading to inaccurate transcriptome quantification, especially at low-input and single-cell RNA amounts where the total number of molecules present is minuscule. To address this issue, we demonstrated the systematic identification of molecular species using transposable error-correcting barcodes that are exponentially expanded to tens of billions of unique labels. We experimentally showed that random-mer molecular barcodes suffer from substantial and persistent errors that are difficult to resolve. To assess our method's performance, we applied it to the analysis of known reference RNA standards. By including an inline random-mer molecular barcode, we systematically characterized the presence of sequence errors in random-mer molecular barcodes. We observed that such errors are extensive and become more dominant at low input amounts. This is the first study to use transposable molecular barcodes and to apply them to the study of random-mer molecular barcode errors. The extensive errors found in random-mer molecular barcodes may warrant the use of error-correcting barcodes for transcriptome analysis as input amounts decrease.
Optical System Error Analysis and Calibration Method of High-Accuracy Star Trackers
Sun, Ting; Xing, Fei; You, Zheng
2013-01-01
The star tracker is a high-accuracy attitude measurement device widely used in spacecraft. Its performance depends largely on the precision of the optical system parameters. Therefore, the analysis of the optical system parameter errors and a precise calibration model are crucial to the accuracy of the star tracker. Up to now, research in this field has lacked a systematic and universal analysis. This paper proposes in detail an approach for the synthetic error analysis of the star tracker, without the complicated theoretical derivation. This approach can determine the error propagation relationships of the star tracker and can be used to build an error model intuitively and systematically. The analysis results can be used as a foundation and a guide for the optical design, calibration, and compensation of the star tracker. A calibration experiment is designed and conducted. Excellent calibration results are achieved based on the calibration model. To summarize, the error analysis approach and the calibration method are proved to be adequate and precise, and could provide an important guarantee for the design, manufacture, and measurement of high-accuracy star trackers. PMID:23567527
Seeing in the Dark: Weak Lensing from the Sloan Digital Sky Survey
NASA Astrophysics Data System (ADS)
Huff, Eric Michael
Statistical weak lensing by large-scale structure — cosmic shear — is a promising cosmological tool, which has motivated the design of several large upcoming astronomical surveys. This Thesis presents a measurement of cosmic shear using coadded Sloan Digital Sky Survey (SDSS) imaging in 168 square degrees of the equatorial region, with r < 23.5 and i < 22.5, a source number density of 2.2 per arcmin², and a median redshift of z_med = 0.52. These coadds were generated using a new rounding kernel method that was intended to minimize systematic errors in the lensing measurement due to coherent PSF anisotropies that are otherwise prevalent in the SDSS imaging data. Measurements of cosmic shear out to angular separations of 2 degrees are presented, along with systematics tests of the catalog generation and shear measurement steps that demonstrate that these results are dominated by statistical rather than systematic errors. Assuming a cosmological model corresponding to WMAP7 (Komatsu et al., 2011) and allowing only the amplitude of matter fluctuations σ8 to vary, the best-fit value of the amplitude of matter fluctuations is σ8 = 0.636 +0.109/−0.154 (1σ); without systematic errors this would be σ8 = 0.636 +0.099/−0.137 (1σ). Assuming a flat ΛCDM model, the combined constraints with WMAP7 are σ8 = 0.784 +0.028/−0.026 (1σ). The 2σ error range is 14 percent smaller than for WMAP7 alone. Aside from the intrinsic value of such cosmological constraints from the growth of structure, some important lessons are identified for upcoming surveys that may face similar issues when combining multi-epoch data to measure cosmic shear. Motivated by the challenges faced in the cosmic shear measurement, two new lensing probes are suggested for increasing the available weak lensing signal. Both use galaxy scaling relations to control for scatter in lensing observables. The first employs a version of the well-known fundamental plane relation for early-type galaxies. This modified "photometric fundamental plane" replaces velocity dispersions with photometric galaxy properties, thus obviating the need for spectroscopic data. We present the first detection of magnification using this method by applying it to photometric catalogs from the Sloan Digital Sky Survey. This analysis shows that the derived magnification signal is comparable to that available from conventional methods using gravitational shear. We suppress the dominant sources of systematic error and discuss modest improvements that may allow this method to equal or even surpass the signal-to-noise achievable with shear. Moreover, some of the dominant sources of systematic error are substantially different from those of shear-based techniques. The second outlines an idea for using the optical Tully-Fisher relation to dramatically improve the signal-to-noise and systematic error control for shear measurements. The expected error properties and potential advantages of such a measurement are proposed, and a pilot study is suggested in order to test the viability of Tully-Fisher weak lensing in the context of the forthcoming generation of large spectroscopic surveys.
Initializing a Mesoscale Boundary-Layer Model with Radiosonde Observations
NASA Astrophysics Data System (ADS)
Berri, Guillermo J.; Bertossa, Germán
2018-01-01
A mesoscale boundary-layer model is used to simulate low-level regional wind fields over the La Plata River of South America, a region characterized by a strong daily cycle of land-river surface-temperature contrast and low-level circulations of sea-land breeze type. The initial and boundary conditions are defined from a limited number of local observations and the upper boundary condition is taken from the only radiosonde observations available in the region. The study considers 14 different upper boundary conditions defined from the radiosonde data at standard levels, significant levels, level of the inversion base and interpolated levels at fixed heights, all of them within the first 1500 m. The period of analysis is 1994-2008 during which eight daily observations from 13 weather stations of the region are used to validate the 24-h surface-wind forecast. The model errors are defined as the root-mean-square of relative error in wind-direction frequency distribution and mean wind speed per wind sector. Wind-direction errors are greater than wind-speed errors and show significant dispersion among the different upper boundary conditions, not present in wind speed, revealing a sensitivity to the initialization method. The wind-direction errors show a well-defined daily cycle, not evident in wind speed, with the minimum at noon and the maximum at dusk, but no systematic deterioration with time. The errors grow with the height of the upper boundary condition level, in particular wind direction, and double the errors obtained when the upper boundary condition is defined from the lower levels. The conclusion is that defining the model upper boundary condition from radiosonde data closer to the ground minimizes the low-level wind-field errors throughout the region.
Voshall, Barbara; Piscotty, Ronald; Lawrence, Jeanette; Targosz, Mary
2013-10-01
Safe medication administration is necessary to ensure quality healthcare. Barcode medication administration systems were developed to reduce drug administration errors and the related costs and improve patient safety. Work-arounds created by nurses in the execution of the required processes can lead to unintended consequences, including errors. This article provides a systematic review of the literature associated with barcoded medication administration and work-arounds and suggests interventions that should be adopted by nurse executives to ensure medication safety.
NASA Technical Reports Server (NTRS)
Larson, T. J.; Ehernberger, L. J.
1985-01-01
The flight test technique described uses controlled survey runs to determine horizontal atmospheric pressure variations and systematic altitude errors that result from space positioning measurements. The survey data can be used not only for improved air data calibrations, but also for studies of atmospheric structure and space positioning accuracy performance. The examples presented cover a wide range of radar tracking conditions for both subsonic and supersonic flight to an altitude of 42,000 ft.
Systematic errors in long baseline oscillation experiments
DOE Office of Scientific and Technical Information (OSTI.GOV)
Harris, Deborah A.; /Fermilab
This article gives a brief overview of long baseline neutrino experiments and their goals, and then describes the different kinds of systematic errors that are encountered in these experiments. Particular attention is paid to the uncertainties that come about because of imperfect knowledge of neutrino cross sections and more generally how neutrinos interact in nuclei. Near detectors are planned for most of these experiments, and the extent to which certain uncertainties can be reduced by the presence of near detectors is also discussed.
Bit Error Probability for Maximum Likelihood Decoding of Linear Block Codes
NASA Technical Reports Server (NTRS)
Lin, Shu; Fossorier, Marc P. C.; Rhee, Dojun
1996-01-01
In this paper, the bit error probability P_b for maximum likelihood decoding of binary linear codes is investigated. The contribution of each information bit to P_b is considered. For randomly generated codes, it is shown that the conventional high-SNR approximation P_b ≈ (d_H/N)P_s, where P_s represents the block error probability, holds for systematic encoding only. Systematic encoding also provides the minimum P_b when the inverse mapping corresponding to the generator matrix of the code is used to retrieve the information sequence. The bit error performances corresponding to other generator matrix forms are also evaluated. Although derived for codes with a randomly generated generator matrix, these results are shown to provide good approximations for codes used in practice. Finally, for decoding methods which require a generator matrix with a particular structure, such as trellis decoding or algebraic-based soft-decision decoding, equivalent schemes that reduce the bit error probability are discussed.
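A minimal sketch of the high-SNR approximation discussed above, P_b ≈ (d_H/N)·P_s, evaluated for a hypothetical (N, K) binary linear code with minimum distance d_H under systematic encoding. The code parameters and operating point are illustrative assumptions, and the block error probability is estimated crudely from the minimum-distance term of a union bound with nearest-neighbour multiplicity taken as one.

```python
import math

N, K, d_H = 63, 45, 7            # assumed code parameters
ebno_db = 5.0                    # assumed operating Eb/N0
ebno = 10.0 ** (ebno_db / 10.0)
rate = K / N

def q_func(x):
    """Gaussian tail probability Q(x)."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

p_block = q_func(math.sqrt(2.0 * rate * ebno * d_H))   # crude minimum-distance estimate
p_bit = (d_H / N) * p_block                            # the approximation examined above

print(f"P_s ~ {p_block:.3e}   P_b ~ (d_H/N) P_s = {p_bit:.3e}")
```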
Global Warming Estimation from MSU
NASA Technical Reports Server (NTRS)
Prabhakara, C.; Iacovazzi, Robert, Jr.
1999-01-01
In this study, we have developed time series of global temperature from 1980-97 based on the Microwave Sounding Unit (MSU) Ch 2 (53.74 GHz) observations taken from polar-orbiting NOAA operational satellites. In order to create these time series, systematic errors (approx. 0.1 K) in the Ch 2 data arising from inter-satellite differences are removed objectively. On the other hand, smaller systematic errors (approx. 0.03 K) in the data due to orbital drift of each satellite cannot be removed objectively. Such errors are expected to remain in the time series and leave an uncertainty in the inferred global temperature trend. With the help of a statistical method, the error in the MSU inferred global temperature trend resulting from orbital drifts and residual inter-satellite differences of all satellites is estimated to be 0.06 K/decade. Incorporating this error, our analysis shows that the global temperature increased at a rate of 0.13 +/- 0.06 K/decade during 1980-97.
NASA Astrophysics Data System (ADS)
Tedd, B. L.; Strangeways, H. J.; Jones, T. B.
1985-11-01
Systematic ionospheric tilts (SITs) at midlatitudes and the diurnal variation of bearing error for different transmission paths are examined. An explanation of diurnal variations of bearing error based on the dependence of ionospheric tilt on solar zenith angle and plasma transport processes is presented. The effect of vertical ion drift and the momentum transfer of neutral winds is investigated. During the daytime the transmissions are reflected at low heights and photochemical processes control SITs; at night, however, transmissions are reflected at higher heights, and spatial and temporal variations of plasma transport processes influence SITs. An HF ray-tracing technique which uses a prediction-based three-dimensional ionospheric model to simulate SIT-induced bearing errors is described; poor correlation with experimental data is observed and the causes for this are studied. A second model based on measured vertical-sounder data is proposed. This second model is applicable for predicting bearing error over a range of transmission paths and correlates well with experimental data.
Continuum limit of B_K from 2+1 flavor domain wall QCD
DOE Office of Scientific and Technical Information (OSTI.GOV)
Soni, A.; T. Izubuchi, et al.
2011-07-01
We determine the neutral kaon mixing matrix element B_K in the continuum limit with 2+1 flavors of domain wall fermions, using the Iwasaki gauge action at two different lattice spacings. These lattice fermions have near-exact chiral symmetry and therefore avoid artificial lattice operator mixing. We introduce a significant improvement to the conventional nonperturbative renormalization (NPR) method, in which the bare matrix elements are renormalized nonperturbatively in the regularization invariant momentum scheme (RI-MOM) and are then converted into the MS-bar scheme using continuum perturbation theory. In addition to RI-MOM, we introduce and implement four nonexceptional intermediate momentum schemes that suppress infrared nonperturbative uncertainties in the renormalization procedure. We compute the conversion factors relating the matrix elements in this family of regularization invariant symmetric momentum schemes (RI-SMOM) and MS-bar at one-loop order. Comparison of the results obtained using these different intermediate schemes allows for a more reliable estimate of the unknown higher-order contributions and hence for a correspondingly more robust estimate of the systematic error. We also apply a recently proposed approach in which twisted boundary conditions are used to control the Symanzik expansion for off-shell vertex functions, leading to a better control of the renormalization in the continuum limit. We control chiral extrapolation errors by considering both the next-to-leading order SU(2) chiral effective theory and an analytic mass expansion. We obtain B_K^(MS-bar)(3 GeV) = 0.529(5)_stat(15)_chi(2)_FV(11)_NPR. This corresponds to B_K^(RGI) = 0.749(7)_stat(21)_chi(3)_FV(15)_NPR. Adding all sources of error in quadrature, we obtain B_K^(RGI) = 0.749(27)_combined, with an overall combined error of 3.6%.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kalinin, V.A.; Tarasenko, V.L.; Tselser, L.B.
1988-09-01
Numerical values of the variation in ultrasonic velocity in constructional metal alloys and the measurement errors related to them are systematized. The systematization is based on measurements of the group ultrasonic velocity made at the All-Union Scientific-Research Institute for Nondestructive Testing in 1983-1984, and also on group velocity measurements made by various authors. The variations in ultrasonic velocity were systematized for carbon, low-alloy, and medium-alloy constructional steels; high-alloy iron-base alloys; nickel-base heat-resistant alloys; wrought aluminum constructional alloys; titanium alloys; and cast irons and copper alloys.
High-accuracy self-calibration method for dual-axis rotation-modulating RLG-INS
NASA Astrophysics Data System (ADS)
Wei, Guo; Gao, Chunfeng; Wang, Qi; Wang, Qun; Long, Xingwu
2017-05-01
Inertial navigation systems are a core component of both military and civil navigation systems. Dual-axis rotation modulation can completely eliminate the constant errors of the inertial elements along the three axes and thereby improve system accuracy, but the errors caused by misalignment angles and scale factor errors cannot be eliminated through dual-axis rotation modulation. Moreover, discrete calibration methods cannot meet the requirements for highly accurate calibration of a mechanically dithered ring laser gyroscope navigation system mounted on shock absorbers. This paper analyses the effect of calibration error during one modulation period and presents a new systematic self-calibration method for a dual-axis rotation-modulating RLG-INS, together with a procedure for carrying out the self-calibration. Simulation results show that the scheme can estimate all the errors in the calibration error model: the calibration precision of the inertial sensor scale factor errors is better than 1 ppm and that of the misalignment angles is better than 5″. These results validate the systematic self-calibration method and demonstrate its importance for improving the accuracy of dual-axis rotation inertial navigation systems with mechanically dithered ring laser gyroscopes.
[Can the scattering of differences from the target refraction be avoided?].
Janknecht, P
2008-10-01
We wanted to check how the stochastic error is affected by two lens formulae. The power of the intraocular lens was calculated using the SRK-II formula and the Haigis formula after eye length measurement with ultrasound and the IOL Master. Both lens formulae were partially derived and Gauss error analysis was used to examine the propagated error. 61 patients with a mean age of 73.8 years were analysed. The postoperative refraction differed from the calculated refraction after ultrasound biometry using the SRK-II formula by 0.05 D (-1.56 to +1.31, S.D.: 0.59 D; 92% within +/- 1.0 D), after IOL Master biometry using the SRK-II formula by -0.15 D (-1.18 to +1.25, S.D.: 0.52 D; 97% within +/- 1.0 D), and after IOL Master biometry using the Haigis formula by -0.11 D (-1.14 to +1.14, S.D.: 0.48 D; 95% within +/- 1.0 D). The results did not differ from one another. The propagated error of the Haigis formula can be calculated as ΔP = sqrt[(ΔL × (-4.206))² + (ΔVK × 0.9496)² + (ΔDC × (-1.4950))²], where ΔL is the error in measuring axial length, ΔVK the error in measuring anterior chamber depth, and ΔDC the error in measuring corneal power; the propagated error of the SRK-II formula is ΔP = sqrt[(ΔL × (-2.5))² + (ΔDC × (-0.9))²]. The propagated error of the Haigis formula is always larger than the propagated error of the SRK-II formula. Scattering of the postoperative difference from the expected refraction cannot be avoided completely. It is possible to limit the systematic error by developing more elaborate formulae like the Haigis formula. However, increasing the number of parameters which need to be measured increases the dispersion of the calculated postoperative refraction. A compromise has to be found, and therefore the SRK-II formula is not outdated.
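A minimal sketch of the Gauss error propagation described above, using the partial derivatives quoted in the abstract; the measurement uncertainties passed in are hypothetical values chosen purely for illustration:

```python
import math

def propagated_error_haigis(dL, dVK, dDC):
    # Gauss error propagation for the Haigis formula with the partial
    # derivatives quoted in the abstract (-4.206, 0.9496, -1.4950).
    return math.sqrt((dL * -4.206) ** 2 + (dVK * 0.9496) ** 2 + (dDC * -1.4950) ** 2)

def propagated_error_srk2(dL, dDC):
    # Gauss error propagation for the SRK-II formula (-2.5, -0.9).
    return math.sqrt((dL * -2.5) ** 2 + (dDC * -0.9) ** 2)

# Hypothetical measurement uncertainties (not from the abstract):
# 0.1 mm axial length, 0.1 mm anterior chamber depth, 0.25 D corneal power.
print(propagated_error_haigis(0.1, 0.1, 0.25))  # always larger than SRK-II
print(propagated_error_srk2(0.1, 0.25))
```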
The role of the basic state in the ENSO-monsoon relationship and implications for predictability
NASA Astrophysics Data System (ADS)
Turner, A. G.; Inness, P. M.; Slingo, J. M.
2005-04-01
The impact of systematic model errors on a coupled simulation of the Asian summer monsoon and its interannual variability is studied. Although the mean monsoon climate is reasonably well captured, systematic errors in the equatorial Pacific mean that the monsoon-ENSO teleconnection is rather poorly represented in the general-circulation model. A system of ocean-surface heat flux adjustments is implemented in the tropical Pacific and Indian Oceans in order to reduce the systematic biases. In this version of the general-circulation model, the monsoon-ENSO teleconnection is better simulated, particularly the lag-lead relationships in which weak monsoons precede the peak of El Niño. In part this is related to changes in the characteristics of El Niño, which has a more realistic evolution in its developing phase. A stronger ENSO amplitude in the new model version also feeds back to further strengthen the teleconnection. These results have important implications for the use of coupled models for seasonal prediction of systems such as the monsoon, and suggest that some form of flux correction may have significant benefits where model systematic error compromises important teleconnections and modes of interannual variability.
NASA Technical Reports Server (NTRS)
Ramirez, Daniel Perez; Whiteman, David N.; Veselovskii, Igor; Kolgotin, Alexei; Korenskiy, Michael; Alados-Arboledas, Lucas
2013-01-01
In this work we study the effects of systematic and random errors on the inversion of multiwavelength (MW) lidar data using the well-known regularization technique to obtain vertically resolved aerosol microphysical properties. The software implementation used here was developed at the Physics Instrumentation Center (PIC) in Troitsk (Russia) in conjunction with the NASA/Goddard Space Flight Center. Its applicability to Raman lidar systems based on backscattering measurements at three wavelengths (355, 532 and 1064 nm) and extinction measurements at two wavelengths (355 and 532 nm) has been demonstrated widely. The systematic error sensitivity is quantified by first determining the retrieved parameters for a given set of optical input data consistent with three different sets of aerosol physical parameters. Then each optical input is perturbed by varying amounts and the inversion is repeated. Using bimodal aerosol size distributions, we find a generally linear dependence of the retrieved errors in the microphysical properties on the induced systematic errors in the optical data. For the retrievals of effective radius, number/surface/volume concentrations and fine-mode radius and volume, we find that these results are not significantly affected by the range of the constraints used in inversions. But significant sensitivity was found to the allowed range of the imaginary part of the particle refractive index. Our results also indicate that there exists an additive property for the deviations induced by the biases present in the individual optical data. This property permits the results here to be used to predict deviations in retrieved parameters when multiple input optical data are biased simultaneously as well as to study the influence of random errors on the retrievals. The above results are applied to questions regarding lidar design, in particular for the spaceborne multiwavelength lidar under consideration for the upcoming ACE mission.
NASA Astrophysics Data System (ADS)
Appleby, Graham; Rodríguez, José; Altamimi, Zuheir
2016-12-01
Satellite laser ranging (SLR) to the geodetic satellites LAGEOS and LAGEOS-2 uniquely determines the origin of the terrestrial reference frame and, jointly with very long baseline interferometry, its scale. Given such a fundamental role in satellite geodesy, it is crucial that any systematic errors in either technique are at an absolute minimum as efforts continue to realise the reference frame at millimetre levels of accuracy to meet the present and future science requirements. Here, we examine the intrinsic accuracy of SLR measurements made by tracking stations of the International Laser Ranging Service using normal point observations of the two LAGEOS satellites in the period 1993 to 2014. The approach we investigate in this paper is to compute weekly reference frame solutions solving for satellite initial state vectors, station coordinates and daily Earth orientation parameters, estimating, along with these, weekly average range errors for each of the observing stations. Potential issues in any of the large number of SLR stations assumed to have been free of error in previous realisations of the ITRF may have been absorbed in the reference frame, primarily in station height. Likewise, systematic range errors estimated against a fixed frame that may itself suffer from accuracy issues will absorb network-wide problems into station-specific results. Our results suggest that in the past two decades, the scale of the ITRF derived from the SLR technique has been close to 0.7 ppb too small, due to systematic errors in the range measurements, in their treatment, or both. We discuss these results in the context of preparations for ITRF2014 and additionally consider the impact of this work on the currently adopted value of the geocentric gravitational constant, GM.
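For a sense of scale, a hedged back-of-the-envelope conversion (not from the paper) of the quoted scale bias into an equivalent height effect at the Earth's surface:

```python
# A scale bias maps roughly to a radial station-height effect of scale * R_Earth.
R_EARTH_MM = 6.371e9   # mean Earth radius in millimetres (assumed value)
scale_bias = 0.7e-9    # 0.7 parts per billion, as quoted in the abstract
print(scale_bias * R_EARTH_MM)  # ~4.5 mm equivalent height effect
```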
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, T. S.; DePoy, D. L.; Marshall, J. L.
Here, we report that meeting the science goals for many current and future ground-based optical large-area sky surveys requires that the calibrated broadband photometry is both stable in time and uniform over the sky to 1% precision or better. Past and current surveys have achieved photometric precision of 1%–2% by calibrating the survey's stellar photometry with repeated measurements of a large number of stars observed in multiple epochs. The calibration techniques employed by these surveys only consider the relative frame-by-frame photometric zeropoint offset and the focal plane position-dependent illumination corrections, which are independent of the source color. However, variations in the wavelength dependence of the atmospheric transmission and the instrumental throughput induce source color-dependent systematic errors. These systematic errors must also be considered to achieve the most precise photometric measurements. In this paper, we examine such systematic chromatic errors (SCEs) using photometry from the Dark Energy Survey (DES) as an example. We first define a natural magnitude system for DES and calculate the systematic errors on stellar magnitudes when the atmospheric transmission and instrumental throughput deviate from the natural system. We conclude that the SCEs caused by the change of airmass in each exposure, the change of the precipitable water vapor and aerosol in the atmosphere over time, and the non-uniformity of instrumental throughput over the focal plane can be up to 2% in some bandpasses. We then compare the calculated SCEs with the observed DES data. For the test sample data, we correct these errors using measurements of the atmospheric transmission and instrumental throughput from auxiliary calibration systems. In conclusion, the residual after correction is less than 0.3%. Moreover, we calculate such SCEs for Type Ia supernovae and elliptical galaxies and find that the chromatic errors for non-stellar objects are redshift-dependent and can be larger than those for stars at certain redshifts.
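A minimal synthetic-photometry sketch of how a colour-dependent (chromatic) offset arises when the effective bandpass deviates from the natural system; the spectra and bandpass shapes below are invented for illustration and this is not the DES calibration code:

```python
import numpy as np

# Integrate a source spectrum through a reference ("natural") throughput and a
# perturbed throughput, then compare the magnitude offsets for sources of
# different colour: the difference is a systematic chromatic error.
wl = np.linspace(400.0, 550.0, 301)          # wavelength grid in nm
dwl = wl[1] - wl[0]

def synth_mag(flux, throughput):
    # Broadband magnitude up to a constant zero point (photon-counting weighting).
    return -2.5 * np.log10(np.sum(flux * throughput * wl) * dwl)

ref_band = np.exp(-0.5 * ((wl - 475.0) / 30.0) ** 2)   # natural-system bandpass
obs_band = np.exp(-0.5 * ((wl - 480.0) / 30.0) ** 2)   # bandpass perturbed by atmosphere/instrument

blue_src = (wl / 475.0) ** -4.0                        # toy blue spectrum
red_src = (wl / 475.0) ** 2.0                          # toy red spectrum

sce_blue = synth_mag(blue_src, obs_band) - synth_mag(blue_src, ref_band)
sce_red = synth_mag(red_src, obs_band) - synth_mag(red_src, ref_band)
print(f"colour-dependent magnitude offset ~ {abs(sce_blue - sce_red) * 1000:.1f} mmag")
```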
Baron, Charles A.; Awan, Musaddiq J.; Mohamed, Abdallah S. R.; Akel, Imad; Rosenthal, David I.; Gunn, G. Brandon; Garden, Adam S.; Dyer, Brandon A.; Court, Laurence; Sevak, Parag R; Kocak-Uzel, Esengul; Fuller, Clifton D.
2016-01-01
Larynx may alternatively serve as a target or organ-at-risk (OAR) in head and neck cancer (HNC) image-guided radiotherapy (IGRT). The objective of this study was to estimate IGRT parameters required for larynx positional error independent of isocentric alignment and suggest population-based compensatory margins. Ten HNC patients receiving radiotherapy (RT) with daily CT-on-rails imaging were assessed. Seven landmark points were placed on each daily scan. Taking the most superior-anterior point of the C5 vertebra as a reference isocenter for each scan, residual displacement vectors to the other 6 points were calculated post-isocentric alignment. Subsequently, using the first scan as a reference, the magnitude of vector differences for all 6 points for all scans over the course of treatment was calculated. Residual systematic and random error, and the necessary compensatory CTV-to-PTV and OAR-to-PRV margins, were calculated using both observational cohort data and a bootstrap-resampled population estimator. The grand mean displacement for all anatomical points was 5.07 mm, with a mean systematic error of 1.1 mm and a mean random setup error of 2.63 mm, while the bootstrapped POIs grand mean displacement was 5.09 mm, with a mean systematic error of 1.23 mm and a mean random setup error of 2.61 mm. The required margin for CTV-to-PTV expansion was 4.6 mm for all cohort points, while the bootstrap estimate of the equivalent margin was 4.9 mm. The calculated OAR-to-PRV expansion for the observed residual setup error was 2.7 mm, with a bootstrap-estimated expansion of 2.9 mm. We conclude that the interfractional larynx setup error is a significant source of RT setup/delivery error in HNC, both when the larynx is considered as a CTV and as an OAR. We estimate the need for a uniform expansion of 5 mm to compensate for setup error if the larynx is a target, or 3 mm if the larynx is an OAR, when using a non-laryngeal bony isocenter.
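The reported expansions are numerically consistent with the widely used van Herk (CTV-to-PTV) and McKenzie (OAR-to-PRV) margin recipes applied to the quoted errors; the check below assumes those recipes and is not a statement of the authors' exact method:

```python
# Margin recipes applied to the systematic (Sigma) and random (sigma) errors
# quoted in the abstract (cohort values, in mm).
sigma_systematic = 1.1
sigma_random = 2.63

ctv_to_ptv = 2.5 * sigma_systematic + 0.7 * sigma_random   # van Herk recipe (assumed)
oar_to_prv = 1.3 * sigma_systematic + 0.5 * sigma_random   # McKenzie recipe (assumed)
print(round(ctv_to_ptv, 1), round(oar_to_prv, 1))          # ~4.6 mm and ~2.7 mm
```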
Functional Independent Scaling Relation for ORR/OER Catalysts
Christensen, Rune; Hansen, Heine A.; Dickens, Colin F.; ...
2016-10-11
A widely used adsorption energy scaling relation between OH* and OOH* intermediates in the oxygen reduction reaction (ORR) and oxygen evolution reaction (OER) has previously been determined using density functional theory and shown to dictate a minimum thermodynamic overpotential for both reactions. Here, we show that the oxygen-oxygen bond in the OOH* intermediate is, however, not well described with the previously used class of exchange-correlation functionals. By quantifying and correcting the systematic error, an improved description of gaseous peroxide species versus experimental data and a reduction in calculational uncertainty is obtained. For adsorbates, we find that the systematic error largely cancels the vdW interaction missing in the original determination of the scaling relation. An improved scaling relation, which is fully independent of the applied exchange-correlation functional, is obtained and found to differ by 0.1 eV from the original. Lastly, this largely confirms that, although obtained with a method suffering from systematic errors, the previously obtained scaling relation is applicable for predictions of catalytic activity.
Prevalence of refractive errors in children in India: a systematic review.
Sheeladevi, Sethu; Seelam, Bharani; Nukella, Phanindra B; Modi, Aditi; Ali, Rahul; Keay, Lisa
2018-04-22
Uncorrected refractive error is an avoidable cause of visual impairment which affects children in India. The objective of this review is to estimate the prevalence of refractive errors in children ≤ 15 years of age. The Preferred Reporting Items for Systematic Reviews and Meta-Analyses guidelines were followed in this review. A detailed literature search was performed to include all population and school-based studies published from India between January 1990 and January 2017, using the Cochrane Library, Medline and Embase. The quality of the included studies was assessed based on a critical appraisal tool developed for systematic reviews of prevalence studies. Four population-based studies and eight school-based studies were included. The overall prevalence of refractive error per 100 children was 8.0 (CI: 7.4-8.1) and in schools it was 10.8 (CI: 10.5-11.2). The population-based prevalence of myopia, hyperopia (≥ +2.00 D) and astigmatism was 5.3 per cent, 4.0 per cent and 5.4 per cent, respectively. Combined refractive error and myopia alone were higher in urban areas compared to rural areas (odds ratio [OR]: 2.27 [CI: 2.09-2.45]) and (OR: 2.12 [CI: 1.79-2.50]), respectively. The prevalence of combined refractive errors and myopia alone in schools was higher among girls than boys (OR: 1.2 [CI: 1.1-1.3] and OR: 1.1 [CI: 1.1-1.2]), respectively. However, hyperopia was more prevalent among boys than girls in schools (OR: 2.1 [CI: 1.8-2.4]). Refractive error in children in India is a major public health problem and requires concerted efforts from various stakeholders including the health care workforce, education professionals and parents, to manage this issue.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Van Straten, W., E-mail: vanstraten.willem@gmail.com
2013-01-15
A new method of polarimetric calibration is presented in which the instrumental response is derived from regular observations of PSR J0437-4715, based on the assumption that the mean polarized emission from this millisecond pulsar remains constant over time. The technique is applicable to any experiment in which high-fidelity polarimetry is required over long timescales; it is demonstrated by calibrating 7.2 years of high-precision timing observations of PSR J1022+1001 made at the Parkes Observatory. Application of the new technique followed by arrival time estimation using matrix template matching yields post-fit residuals with an uncertainty-weighted standard deviation of 880 ns, two times smaller than that of arrival time residuals obtained via conventional methods of calibration and arrival time estimation. The precision achieved by this experiment yields the first significant measurements of the secular variation of the projected semimajor axis, the precession of periastron, and the Shapiro delay; it also places PSR J1022+1001 among the 10 best pulsars regularly observed as part of the Parkes Pulsar Timing Array (PPTA) project. It is shown that the timing accuracy of a large fraction of the pulsars in the PPTA is currently limited by the systematic timing error due to instrumental polarization artifacts. More importantly, long-term variations of systematic error are correlated between different pulsars, which adversely affects the primary objectives of any pulsar timing array experiment. These limitations may be overcome by adopting the techniques presented in this work, which relax the demand for instrumental polarization purity and thereby have the potential to reduce the development cost of next-generation telescopes such as the Square Kilometre Array.
Dual-process cognitive interventions to enhance diagnostic reasoning: a systematic review.
Lambe, Kathryn Ann; O'Reilly, Gary; Kelly, Brendan D; Curristan, Sarah
2016-10-01
Diagnostic error incurs enormous human and economic costs. The dual-process model of reasoning provides a framework for understanding the diagnostic process and attributes certain errors to faulty cognitive shortcuts (heuristics). The literature contains many suggestions to counteract these shortcuts and to enhance analytical and non-analytical modes of reasoning. The aim of this review was to identify, describe and appraise studies that have empirically investigated interventions to enhance analytical and non-analytical reasoning among medical trainees and doctors, and to assess their effectiveness. Systematic searches of five databases were carried out (Medline, PsycInfo, Embase, Education Resource Information Centre (ERIC) and Cochrane Database of Controlled Trials), supplemented with searches of bibliographies and relevant journals. Included studies evaluated an intervention to enhance analytical and/or non-analytical reasoning among medical trainees or doctors. Twenty-eight studies were included under five categories: educational interventions, checklists, cognitive forcing strategies, guided reflection, instructions at test and other interventions. While many of the studies found some effect of interventions, guided reflection interventions emerged as the most consistently successful across five studies, and cognitive forcing strategies improved accuracy and confidence judgements. Significant heterogeneity of measurement approaches was observed, and existing studies are largely limited to early-career doctors. Results to date are promising and this relatively young field is now close to a point where these kinds of cognitive interventions can be recommended to educators. Further research with refined methodology and more diverse samples is required before firm recommendations may be made for medical education and policy; however, these results suggest that such interventions hold promise, with much current enthusiasm for new research.
Wee, Leonard; Hackett, Sara Lyons; Jones, Andrew; Lim, Tee Sin; Harper, Christopher Stirling
2013-01-01
This study evaluated the agreement of fiducial marker localization between two modalities, an electronic portal imaging device (EPID) and cone-beam computed tomography (CBCT), using a low-dose, half-rotation scanning protocol. Twenty-five prostate cancer patients with implanted fiducial markers were enrolled. Before each daily treatment, EPID and half-rotation CBCT images were acquired. Translational shifts were computed for each modality, and two marker-matching algorithms, seed-chamfer and grey-value, were performed for each set of CBCT images. The localization offsets, and systematic and random errors from both modalities were computed. Localization performances for both modalities were compared using Bland-Altman limits of agreement (LoA) analysis, Deming regression analysis, and Cohen's kappa inter-rater analysis. The differences in the systematic and random errors between the modalities were within 0.2 mm in all directions. The LoA analysis revealed a 95% agreement limit of the modalities of 2 to 3.5 mm in any given translational direction. Deming regression analysis demonstrated that constant biases existed in the shifts computed by the modalities in the superior-inferior (SI) direction, but no significant proportional biases were identified in any direction. Cohen's kappa analysis showed good agreement between the modalities in prescribing translational corrections of the couch at 3 and 5 mm action levels. Images obtained from EPID and half-rotation CBCT showed acceptable agreement for registration of fiducial markers. The seed-chamfer algorithm for tracking of fiducial markers in CBCT datasets yielded better agreement than the grey-value matching algorithm with EPID-based registration.
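A minimal sketch of the Bland-Altman limits-of-agreement computation referenced above, run on invented shift data rather than the study's measurements:

```python
import numpy as np

# Bland-Altman agreement between two modalities measuring the same couch shift:
# report the mean difference (bias) and the 95% limits of agreement.
rng = np.random.default_rng(0)
shift_epid = rng.normal(0.0, 2.0, size=200)                  # mm, toy data
shift_cbct = shift_epid + rng.normal(0.3, 1.2, size=200)     # mm, toy data

diff = shift_cbct - shift_epid
bias = diff.mean()
spread = 1.96 * diff.std(ddof=1)
print(f"bias = {bias:.2f} mm, 95% LoA = [{bias - spread:.2f}, {bias + spread:.2f}] mm")
```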
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dietrich, J.P.; et al.
Uncertainty in the mass-observable scaling relations is currently the limiting factor for galaxy cluster based cosmology. Weak gravitational lensing can provide a direct mass calibration and reduce the mass uncertainty. We present new ground-based weak lensing observations of 19 South Pole Telescope (SPT) selected clusters and combine them with previously reported space-based observations of 13 galaxy clusters to constrain the cluster mass scaling relations with the Sunyaev-Zel'dovich effect (SZE), the cluster gas mass M_gas, and Y_X, the product of M_gas and X-ray temperature. We extend a previously used framework for the analysis of scaling relations and cosmological constraints obtained from SPT-selected clusters to make use of weak lensing information. We introduce a new approach to estimate the effective average redshift distribution of background galaxies and quantify a number of systematic errors affecting the weak lensing modelling. These errors include a calibration of the bias incurred by fitting a Navarro-Frenk-White profile to the reduced shear using N-body simulations. We blind the analysis to avoid confirmation bias. We are able to limit the systematic uncertainties to 6.4% in cluster mass (68% confidence). Our constraints on the mass-X-ray observable scaling relation parameters are consistent with those obtained by earlier studies, and our constraints on the mass-SZE scaling relation are consistent with the simulation-based prior used in the most recent SPT-SZ cosmology analysis. We can now replace the external mass calibration priors used in previous SPT-SZ cosmology studies with a direct, internal calibration obtained on the same clusters.
SN IA in the IR: RAISIN A progress report
NASA Astrophysics Data System (ADS)
Kirshner, Robert P.; The RAISIN TEAM
2014-01-01
SN Ia have proven to be a powerful tool for cosmology. Near-IR observations of SN Ia promise even better results because the supernovae are more nearly standard candles at those wavelengths and absorption by dust is diminished by a factor of 4 compared to rest-frame B-band observations. Near-IR observations of cosmologically-distant SN Ia discovered with PanSTARRS are underway using the infrared camera on the Hubble Space Telescope (GO-13046). These targets are discovered in the difference images created in the CfA/JHU pipeline, confirmed spectroscopically at the MMT, Magellan, Gemini, or Keck, and inserted in a non-disruptive way into the HST observing schedule for WFC3-IR. We have observed over 20 SN Ia in the range 0.2 < z < 0.5 during Cycle 21 and this is a progress report on the analysis. The final results require a repeat observation after the supernova has faded. Those will be completed in 2014, but we have a sufficient sample of objects for which the supernova is well separated from the host galaxy to illustrate the power of this technique. Preliminary analysis shows HST data can reduce the uncertainty in the distance to each supernova by a factor of 2. Sufficiently large supernova samples have been gathered at all redshifts so that statistical errors in interesting parameters (like the dark energy equation-of-state index (1+w)) have been driven down to the same level as the systematic errors (about 7%). Further progress is limited by our ability to master the systematic errors. These include the correction for luminosity based on the light curve shape and the correction based on intrinsic color and reddening by dust. Since SN Ia behave better in the IR in both these ways, there is reason to expect that this approach will be effective in driving down the systematic errors over time. If we are diligent in building up the size of the sample that is observed in the rest-frame infrared, we can expect more certain knowledge of the properties of dark energy. Unsolved problems include constructing precise K-corrections and firming up the fundamental photometric system in y, J, H, and K, but this approach seems a promising one for the HST era now, JWST soon, and WFIRST in good time.
Lash, Timothy L
2007-11-26
The associations of pesticide exposure with disease outcomes are estimated without the benefit of a randomized design. For this reason and others, these studies are susceptible to systematic errors. I analyzed studies of the associations between alachlor and glyphosate exposure and cancer incidence, both derived from the Agricultural Health Study cohort, to quantify the bias and uncertainty potentially attributable to systematic error. For each study, I identified the prominent result and important sources of systematic error that might affect it. I assigned probability distributions to the bias parameters that allow quantification of the bias, drew a value at random from each assigned distribution, and calculated the estimate of effect adjusted for the biases. By repeating the draw and adjustment process over multiple iterations, I generated a frequency distribution of adjusted results, from which I obtained a point estimate and simulation interval. These methods were applied without access to the primary record-level dataset. The conventional estimates of effect associating alachlor and glyphosate exposure with cancer incidence were likely biased away from the null and understated the uncertainty by quantifying only random error. For example, the conventional p-value for a test of trend in the alachlor study equaled 0.02, whereas fewer than 20% of the bias analysis iterations yielded a p-value of 0.02 or lower. Similarly, the conventional fully-adjusted result associating glyphosate exposure with multiple myeloma equaled 2.6 with 95% confidence interval of 0.7 to 9.4. The frequency distribution generated by the bias analysis yielded a median hazard ratio equal to 1.5 with 95% simulation interval of 0.4 to 8.9, which was 66% wider than the conventional interval. Bias analysis provides a more complete picture of true uncertainty than conventional frequentist statistical analysis accompanied by a qualitative description of study limitations. The latter approach is likely to lead to overconfidence regarding the potential for causal associations, whereas the former safeguards against such overinterpretations. Furthermore, such analyses, once programmed, allow rapid implementation of alternative assignments of probability distributions to the bias parameters, and so elevate the plane of discussion regarding study bias from characterizing studies as "valid" or "invalid" to a critical and quantitative discussion of sources of uncertainty.
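A minimal sketch of the probabilistic bias-analysis loop described above; the bias-parameter distribution and random-error scale below are invented for illustration and are not the study's assigned distributions:

```python
import numpy as np

# Draw bias parameters from an assumed distribution, adjust the conventional
# estimate, and summarise the frequency distribution of adjusted results.
rng = np.random.default_rng(42)
conventional_log_hr = np.log(2.6)    # conventional hazard ratio quoted in the abstract
n_iter = 50_000

log_bias = rng.normal(loc=np.log(1.7), scale=0.4, size=n_iter)   # assumed bias model
random_error = rng.normal(loc=0.0, scale=0.65, size=n_iter)      # assumed sampling error

adjusted = np.exp(conventional_log_hr - log_bias + random_error)
median, lo, hi = np.percentile(adjusted, [50, 2.5, 97.5])
print(f"median HR = {median:.1f}, 95% simulation interval = ({lo:.1f}, {hi:.1f})")
```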
NASA Astrophysics Data System (ADS)
Pathiraja, S.; Anghileri, D.; Burlando, P.; Sharma, A.; Marshall, L.; Moradkhani, H.
2018-03-01
The global prevalence of rapid and extensive land use change necessitates hydrologic modelling methodologies capable of handling non-stationarity. This is particularly true in the context of Hydrologic Forecasting using Data Assimilation. Data Assimilation has been shown to dramatically improve forecast skill in hydrologic and meteorological applications, although such improvements are conditional on using bias-free observations and model simulations. A hydrologic model calibrated to a particular set of land cover conditions has the potential to produce biased simulations when the catchment is disturbed. This paper sheds new light on the impacts of bias or systematic errors in hydrologic data assimilation, in the context of forecasting in catchments with changing land surface conditions and a model calibrated to pre-change conditions. We posit that in such cases, the impact of systematic model errors on assimilation or forecast quality is dependent on the inherent prediction uncertainty that persists even in pre-change conditions. Through experiments on a range of catchments, we develop a conceptual relationship between total prediction uncertainty and the impacts of land cover changes on the hydrologic regime to demonstrate how forecast quality is affected when using state estimation Data Assimilation with no modifications to account for land cover changes. This work shows that systematic model errors as a result of changing or changed catchment conditions do not always necessitate adjustments to the modelling or assimilation methodology, for instance through re-calibration of the hydrologic model, time varying model parameters or revised offline/online bias estimation.
Errors in Viking Lander Atmospheric Profiles Discovered Using MOLA Topography
NASA Technical Reports Server (NTRS)
Withers, Paul; Lorenz, R. D.; Neumann, G. A.
2002-01-01
Each Viking lander measured a topographic profile during entry. Comparing with MOLA (Mars Orbiter Laser Altimeter) topography, we find a vertical error of 1-2 km in the Viking trajectory. This introduces a systematic error of 10-20% in the Viking densities and pressures at a given altitude. Additional information is contained in the original extended abstract.
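A hedged back-of-the-envelope check (not part of the original abstract): for an approximately exponential atmosphere, a vertical trajectory error dz shifts the density attributed to a given altitude by roughly a factor exp(dz/H), which reproduces the quoted 10-20% if a Mars scale height of about 11 km is assumed:

```python
import math

H_KM = 11.0                      # assumed Mars atmospheric scale height
for dz_km in (1.0, 2.0):         # vertical trajectory errors quoted in the abstract
    factor = math.exp(dz_km / H_KM)
    print(f"dz = {dz_km} km -> density error ~ {100 * (factor - 1):.0f}%")
```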
Timmer, M A; Gouw, S C; Feldman, B M; Zwagemaker, A; de Kleijn, P; Pisters, M F; Schutgens, R E G; Blanchette, V; Srivastava, A; David, J A; Fischer, K; van der Net, J
2018-03-01
Monitoring clinical outcome in persons with haemophilia (PWH) is essential in order to provide optimal treatment for individual patients and compare effectiveness of treatment strategies. Experience with measurement of activities and participation in haemophilia is limited and consensus on preferred tools is lacking. The aim of this study was to give a comprehensive overview of the measurement properties of a selection of commonly used tools developed to assess activities and participation in PWH. Electronic databases were searched for articles that reported on reliability, validity or responsiveness of predetermined measurement tools (5 self-reported and 4 performance-based measurement tools). Methodological quality of the studies was assessed according to the COSMIN checklist. Best evidence synthesis was used to summarize evidence on the measurement properties. The search resulted in 3453 unique hits. Forty-two articles were included. The self-reported Haemophilia Activity List (HAL), Pediatric HAL (PedHAL) and the performance-based Functional Independence Score in Haemophilia (FISH) were studied most extensively. Methodological quality of the studies was limited. Measurement error, cross-cultural validity and responsiveness have been insufficiently evaluated. Albeit based on limited evidence, the measurement properties of the PedHAL, HAL and FISH are currently considered most satisfactory. Further research needs to focus on measurement error, responsiveness, interpretability and cross-cultural validity of the self-reported tools and on the validity of performance-based tools which are able to assess limitations in sports and leisure activities.
Astrometric properties of the Tautenburg Plate Scanner
NASA Astrophysics Data System (ADS)
Brunzendorf, Jens; Meusinger, Helmut
The Tautenburg Plate Scanner (TPS) is an advanced plate-measuring machine run by the Thüringer Landessternwarte Tautenburg (Karl Schwarzschild Observatory), where the machine is housed. It is capable of digitising photographic plates up to 30 cm × 30 cm in size. In our poster, we reported on tests and preliminary results of its astrometric properties. The essential components of the TPS consist of an x-y table movable between an illumination system and a direct imaging system. A telecentric lens images the light transmitted through the photographic emulsion onto a CCD line of 6000 pixels of 10 µm square size each. All components are mounted on a massive air-bearing table. Scanning is performed in lanes of up to 55 mm width by moving the x-y table in a continuous drift-scan mode perpendicular to the CCD line. The analogue output from the CCD is digitised to 12 bit with a total signal/noise ratio of 1000 : 1, corresponding to a photographic density range of three. The pixel map is produced as a series of optionally overlapping lane scans. The pixel data are stored onto CD-ROM or DAT. A Tautenburg Schmidt plate 24 cm × 24 cm in size is digitised within 2.5 hours, resulting in 1.3 GB of data. Subsequent high-level data processing is performed off-line on other computers. During the scanning process, the geometry of the optical components is kept fixed. The optimal focussing of the optics is performed prior to the scan; due to the telecentric lens, refocussing is not required. Therefore, the main sources of astrometric errors (besides the emulsion itself) are mechanical imperfections in the drive system, which have to be divided into random and systematic ones. The r.m.s. repeatability over the whole plate, as measured by repeated scans of the same plate, is about 0.5 µm for each axis. The mean plate-to-plate accuracy of the object positions on two plates with the same epoch and the same plate centre has been determined to be about 1 µm. This accuracy is comparable to results obtained with established measuring machines used for astrometric purposes and is mainly limited by the emulsion itself. The mechanical design of the x-y table introduces low-frequency systematic errors of up to 5 µm on both axes. Because of the high stability of the machine it is expected that these deviations from a perfectly uniform coordinate system will remain systematic on a long timescale. Such systematic errors can be corrected either directly once they have been determined or in the course of the general astrometric reduction process. The TPS is well suited for accurate relative measurements like proper motions on plates with the same scale and plate centre. The systematic errors of the x-y table can be determined by interferometric means, and there are plans for this in the near future.
Accuracy Improvement of Multi-Axis Systems Based on Laser Correction of Volumetric Geometric Errors
NASA Astrophysics Data System (ADS)
Teleshevsky, V. I.; Sokolov, V. A.; Pimushkin, Ya. I.
2018-04-01
The article describes a volumetric geometric error correction method for CNC-controlled multi-axis systems (machine tools, CMMs, etc.). Kalman's concept of "Control and Observation" is used: a versatile multi-function laser interferometer serves as the observer to measure the machine's error functions. A systematic error map of the machine's workspace is produced from the error function measurements, and the error map is then used to form an error correction strategy. The article proposes a new method of forming the error correction strategy, based on the error distribution within the machine's workspace and a CNC-program postprocessor. The postprocessor provides minimal error values within the maximal workspace zone. The results are confirmed by error correction of precision CNC machine tools.
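A hedged sketch of the general idea of post-processor correction from a volumetric error map; the grid, error values, and interpolation choice are assumptions for illustration, not the article's implementation:

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

# Interpolate a measured volumetric error map over the workspace and subtract the
# predicted error from each commanded coordinate in the post-processed program.
x = y = z = np.linspace(0.0, 500.0, 6)                           # mm, workspace grid (assumed)
err_x = np.random.default_rng(1).normal(0.0, 0.01, (6, 6, 6))    # mm, toy x-axis error map

interp_err_x = RegularGridInterpolator((x, y, z), err_x)

def corrected_x(xc, yc, zc):
    # Subtract the interpolated x-axis error from the commanded x coordinate.
    return xc - interp_err_x([[xc, yc, zc]])[0]

print(corrected_x(120.0, 250.0, 30.0))
```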
An Ongoing Program of Radial Velocities of Nearby Stars
NASA Astrophysics Data System (ADS)
Sperauskas, J.; Boyle, R. P.; Harlow, J.; Jahreiss, H.; Upgren, A. R.
2003-12-01
The lists of stars found by Vyssotsky at the McCormick Observatory and the Fourth Edition of the Catalog of Nearby Stars (CNS4) complement each other. Each was limited in a different way, but together they can be used to evaluate sources of systematic error in either of them. The lists of Vyssotsky comprise almost 900 stars, brighter than a limiting visual magnitude of about 11.5, and thus form a magnitude-limited sample. The CNS4 includes all stars believed to be within 25 parsecs of the Sun, and thus forms a distance-limited group. Limits in magnitude are prone to the Malmquist bias, by which stars of a given range in magnitude may average spuriously brighter than stars within a given distance range appropriate for the mean distance modulus. The CNS4 stars may be subject to a slight Lutz-Kelker effect. This also requires a correction that depends mainly on the ratios of the standard errors in the distances to the stars to the distances themselves. This is a status report on a survey seeking completeness in the six dynamical properties (positions along the three orthogonal axes, and their first time-derivatives). Parallax, proper motion and radial velocity are the stellar properties required for this information and, as is frequently the case among sets of faint stars, the radial velocities are not always available. We seek to obtain radial velocities for a full dynamical picture for more than one thousand nearby stars, of which some two-thirds have been observed. It would be most desirable to follow up with age-related measures for all stars.
[Improving blood safety: errors management in transfusion medicine].
Bujandrić, Nevenka; Grujić, Jasmina; Krga-Milanović, Mirjana
2014-01-01
The concept of blood safety includes the entire transfusion chain, starting with the collection of blood from the blood donor and ending with blood transfusion to the patient. The concept involves a quality management system with systematic monitoring of adverse reactions and incidents regarding the blood donor or patient. Monitoring of near-miss errors shows the critical points in the working process and increases transfusion safety. The aim of the study was to present the analysis results of adverse and unexpected events in transfusion practice with a potential risk to the health of blood donors and patients. This one-year retrospective study was based on the collection, analysis and interpretation of written reports on medical errors in the Blood Transfusion Institute of Vojvodina. Errors were distributed according to the type, frequency and part of the working process where they occurred. Possible causes and corrective actions were described for each error. The study showed that there were no errors with potential health consequences for the blood donor/patient. Errors with potentially damaging consequences for patients were detected throughout the entire transfusion chain. Most of the errors were identified in the preanalytical phase. The human factor was responsible for the largest number of errors. The error reporting system has an important role in error management and in reducing the transfusion-related risk of adverse events and incidents. The ongoing analysis reveals the strengths and weaknesses of the entire process and indicates the necessary changes. Errors in transfusion medicine can be avoided in a large percentage of cases, and prevention is cost-effective, systematic and applicable.
Non-linear dynamic compensation system
NASA Technical Reports Server (NTRS)
Lin, Yu-Hwan (Inventor); Lurie, Boris J. (Inventor)
1992-01-01
A non-linear dynamic compensation subsystem is added in the feedback loop of a high precision optical mirror positioning control system to smoothly alter the control system response bandwidth from a relatively wide response bandwidth optimized for speed of control system response to a bandwidth sufficiently narrow to reduce position errors resulting from the quantization noise inherent in the inductosyn used to measure mirror position. The non-linear dynamic compensation system includes a limiter for limiting the error signal within preselected limits, a compensator for modifying the limiter output to achieve the reduced bandwidth response, and an adder for combining the modified error signal with the difference between the limited and unlimited error signals. The adder output is applied to the control system motor so that the system response is optimized for accuracy when the error signal is within the preselected limits, optimized for speed of response when the error signal is substantially beyond the preselected limits, and smoothly varied therebetween as the error signal approaches the preselected limits.
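A minimal sketch of the described compensation path; the limit value and the first-order low-pass filter standing in for the bandwidth-reducing compensator are illustrative assumptions, not the patented design:

```python
import numpy as np

# Limit the error, pass the limited error through a narrow-band compensator, and
# add back the out-of-band residue (error minus limited error): the loop is then
# accurate (narrow band) for small errors and fast (wide band) for large ones.
LIMIT = 0.5          # preselected error limit (arbitrary units, assumed)
ALPHA = 0.05         # low-pass coefficient standing in for the compensator (assumed)

def compensate(error_signal):
    state = 0.0
    out = []
    for e in error_signal:
        e_lim = max(-LIMIT, min(LIMIT, e))        # limiter
        state += ALPHA * (e_lim - state)          # reduced-bandwidth compensator
        out.append(state + (e - e_lim))           # adder: compensated + unlimited residue
    return np.array(out)

print(compensate(np.concatenate([np.full(20, 3.0), np.full(20, 0.2)]))[::5])
```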
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fuangrod, T; Simpson, J; Greer, P
Purpose: A real-time patient treatment delivery verification system using EPID (Watchdog) has been developed as an advanced patient safety tool. In a pilot study data was acquired for 119 prostate and head and neck (HN) IMRT patient deliveries to generate body-site specific action limits using statistical process control. The purpose of this study is to determine the sensitivity of Watchdog to detect clinically significant errors during treatment delivery. Methods: Watchdog utilizes a physics-based model to generate a series of predicted transit cine EPID images as a reference data set, and compares these in real-time to measured transit cine-EPID images acquired during treatment using chi comparison (4%, 4 mm criteria) after the initial 2 s of treatment to allow for dose ramp-up. Four study cases were used: dosimetric (monitor unit) errors in prostate (7 fields) and HN (9 fields) IMRT treatments of (5%, 7%, 10%) and positioning (systematic displacement) errors in the same treatments of (5 mm, 7 mm, 10 mm). These errors were introduced by modifying the patient CT scan and re-calculating the predicted EPID data set. The error-embedded predicted EPID data sets were compared to the measured EPID data acquired during patient treatment. The treatment delivery percentage (measured from 2 s) where Watchdog detected the error was determined. Results: Watchdog detected all simulated errors for all fields during delivery. The dosimetric errors were detected at average treatment delivery percentages of (4%, 0%, 0%) and (7%, 0%, 0%) for prostate and HN respectively. For patient positional errors, the average treatment delivery percentages were (52%, 43%, 25%) and (39%, 16%, 6%). Conclusion: These results suggest that Watchdog can detect significant dosimetric and positioning errors in prostate and HN IMRT treatments in real-time, allowing for treatment interruption. Displacements of the patient require longer to detect; however, incorrect body site or very large geographic misses will be detected rapidly.
Miller, Marlene R; Robinson, Karen A; Lubomski, Lisa H; Rinke, Michael L; Pronovost, Peter J
2007-01-01
Background: Although children are at the greatest risk for medication errors, little is known about the overall epidemiology of these errors, where the gaps are in our knowledge, and to what extent national medication error reduction strategies focus on children. Objective: To synthesise peer reviewed knowledge on children's medication errors and on recommendations to improve paediatric medication safety by a systematic literature review. Data sources: PubMed, Embase and Cinahl from 1 January 2000 to 30 April 2005, and 11 national entities that have disseminated recommendations to improve medication safety. Study selection: Inclusion criteria were peer reviewed original data in English language. Studies that did not separately report paediatric data were excluded. Data extraction: Two reviewers screened articles for eligibility and for data extraction, and screened all national medication error reduction strategies for relevance to children. Data synthesis: From 358 articles identified, 31 were included for data extraction. The definition of medication error was non-uniform across the studies. Dispensing and administering errors were the most poorly and non-uniformly evaluated. Overall, the distributional epidemiological estimates of the relative percentages of paediatric error types were: prescribing 3-37%, dispensing 5-58%, administering 72-75%, and documentation 17-21%. 26 unique recommendations for strategies to reduce medication errors were identified; none were based on paediatric evidence. Conclusions: Medication errors occur across the entire spectrum of prescribing, dispensing, and administering, are common, and have a myriad of non-evidence-based potential reduction strategies. Further research in this area needs a firmer standardisation for items such as dose ranges and definitions of medication errors, broader scope beyond inpatient prescribing errors, and prioritisation of implementation of medication error reduction strategies.
Particle Tracking on the BNL Relativistic Heavy Ion Collider
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dell, G. F.
1986-08-07
Tracking studies including the effects of random multipole errors alone, as well as the combined effects of random and systematic multipole errors, have been made for RHIC. Initial results for operation at an off-diagonal working point are discussed.
Discovery of error-tolerant biclusters from noisy gene expression data.
Gupta, Rohit; Rao, Navneet; Kumar, Vipin
2011-11-24
An important analysis performed on microarray gene-expression data is to discover biclusters, which denote groups of genes that are coherently expressed for a subset of conditions. Various biclustering algorithms have been proposed to find different types of biclusters from these real-valued gene-expression data sets. However, these algorithms suffer from several limitations such as inability to explicitly handle errors/noise in the data; difficulty in discovering small biclusters due to their top-down approach; inability of some of the approaches to find overlapping biclusters, which is crucial as many genes participate in multiple biological processes. Association pattern mining also produces biclusters as its result and can naturally address some of these limitations. However, traditional association mining only finds exact biclusters, which limits its applicability in real-life data sets where the biclusters may be fragmented due to random noise/errors. Moreover, as it only works with binary or boolean attributes, its application on gene-expression data requires transforming real-valued attributes to binary attributes, which often results in loss of information. Many past approaches have tried to address the issue of noise and handling real-valued attributes independently, but there is no systematic approach that addresses both of these issues together. In this paper, we first propose a novel error-tolerant biclustering model, 'ET-bicluster', and then propose a bottom-up heuristic-based mining algorithm to sequentially discover error-tolerant biclusters directly from real-valued gene-expression data. The efficacy of our proposed approach is illustrated by comparing it with a recent approach RAP in the context of two biological problems: discovery of functional modules and discovery of biomarkers. For the first problem, two real-valued S. cerevisiae microarray gene-expression data sets are used to demonstrate that the biclusters obtained from the ET-bicluster approach not only recover a larger set of genes than those obtained from the RAP approach but also have higher functional coherence as evaluated using GO-based functional enrichment analysis. The statistical significance of the discovered error-tolerant biclusters, as estimated by using two randomization tests, reveals that they are indeed biologically meaningful and statistically significant. For the second problem of biomarker discovery, we used four real-valued breast cancer microarray gene-expression data sets and evaluated the biomarkers obtained using MSigDB gene sets. The results obtained for both problems (functional module discovery and biomarker discovery) clearly demonstrate the usefulness of the proposed ET-bicluster approach and illustrate the importance of explicitly incorporating noise/errors when discovering coherent groups of genes from gene-expression data.
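A minimal sketch of the error-tolerance notion on a binarised matrix (illustrative only; this is not the ET-bicluster mining algorithm itself):

```python
import numpy as np

# A candidate set of genes and conditions is accepted as an error-tolerant
# bicluster if the fraction of "missing" (zero) entries in the submatrix is at
# most a tolerance epsilon.
def is_error_tolerant_bicluster(matrix, genes, conditions, epsilon=0.1):
    sub = matrix[np.ix_(genes, conditions)]
    return 1.0 - sub.mean() <= epsilon

data = np.array([[1, 1, 1, 0],
                 [1, 0, 1, 1],
                 [1, 1, 1, 1],
                 [0, 0, 0, 1]])
print(is_error_tolerant_bicluster(data, [0, 1, 2], [0, 2, 3], epsilon=0.25))  # True
```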
NASA Technical Reports Server (NTRS)
Armoundas, A. A.; Feldman, A. B.; Sherman, D. A.; Cohen, R. J.
2001-01-01
Although the single equivalent point dipole model has been used to represent well-localised bio-electrical sources, in realistic situations the source is distributed. Consequently, position estimates of point dipoles determined by inverse algorithms suffer from systematic error due to the non-exact applicability of the inverse model. In realistic situations, this systematic error cannot be avoided, a limitation that is independent of the complexity of the torso model used. This study quantitatively investigates the intrinsic limitations in the assignment of a location to the equivalent dipole due to a distributed electrical source. To simulate arrhythmic activity in the heart, a model of a wave of depolarisation spreading from a focal source over the surface of a spherical shell is used. The activity is represented by a sequence of concentric belt sources (obtained by slicing the shell with a sequence of parallel plane pairs), with constant dipole moment per unit length (circumferentially) directed parallel to the propagation direction. The distributed source is represented by N dipoles at equal arc lengths along the belt. The sum of the dipole potentials is calculated at predefined electrode locations. The inverse problem involves finding a single equivalent point dipole that best reproduces the electrode potentials due to the distributed source; it is solved by minimising the chi-square per degree of freedom. It is found that the trajectory traced by the equivalent dipole is sensitive to the location of the spherical shell relative to the fixed electrodes. It is shown that this trajectory does not coincide with the sequence of geometrical centres of the consecutive belt sources. For distributed sources within a bounded spherical medium, displaced from the sphere's centre by 40% of the sphere's radius, it is found that the error in the equivalent dipole location varies from 3 to 20% for sources with sizes between 5 and 50% of the sphere's radius. Finally, a method is devised to obtain the size of the distributed source during the cardiac cycle.
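A minimal sketch of fitting a single equivalent point dipole to potentials generated by a distributed arc of dipoles; for simplicity it assumes an unbounded homogeneous conductor rather than the bounded spherical medium of the study, and all geometry below is invented:

```python
import numpy as np
from scipy.optimize import minimize

def dipole_potential(r_elec, r_dip, p, sigma=1.0):
    # Point-dipole potential in an unbounded homogeneous conductor.
    d = r_elec - r_dip
    return d @ p / (4.0 * np.pi * sigma * np.linalg.norm(d) ** 3)

electrodes = np.array([[3.0, 0, 0], [0, 3.0, 0], [-3.0, 0, 0],
                       [0, -3.0, 0], [0, 0, 3.0], [2.0, 2.0, 1.0]])

arc = np.linspace(0.0, 1.0, 5)                                        # belt source on a unit circle
src_pos = np.column_stack([np.cos(arc), np.sin(arc), np.zeros(5)])
src_mom = np.column_stack([-np.sin(arc), np.cos(arc), np.zeros(5)])   # tangential moments

v_meas = np.array([sum(dipole_potential(e, r, p) for r, p in zip(src_pos, src_mom))
                   for e in electrodes])

def misfit(x):                      # x[:3] dipole position, x[3:] dipole moment
    v_fit = np.array([dipole_potential(e, x[:3], x[3:]) for e in electrodes])
    return np.sum((v_fit - v_meas) ** 2)

result = minimize(misfit, x0=np.array([0.5, 0.5, 0.0, 0.0, 1.0, 0.0]))
print("equivalent dipole position:", np.round(result.x[:3], 3))
```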
Verhoeven, Karolien; Weltens, Caroline; Van den Heuvel, Frank
2015-01-01
Quantification of the setup errors is vital to define appropriate setup margins preventing geographical misses. The no‐action‐level (NAL) correction protocol reduces the systematic setup errors and, hence, the setup margins. The manual entry of the setup corrections in the record‐and‐verify software, however, increases the susceptibility of the NAL protocol to human errors. Moreover, the impact of the skin mobility on the anteroposterior patient setup reproducibility in whole‐breast radiotherapy (WBRT) is unknown. In this study, we therefore investigated the potential of fixed vertical couch position‐based patient setup in WBRT. The possibility to introduce a threshold for correction of the systematic setup errors was also explored. We measured the anteroposterior, mediolateral, and superior–inferior setup errors during fractions 1–12 and weekly thereafter with tangential angled single modality paired imaging. These setup data were used to simulate the residual setup errors of the NAL protocol, the fixed vertical couch position protocol, and the fixed‐action‐level protocol with different correction thresholds. Population statistics of the setup errors of 20 breast cancer patients and 20 breast cancer patients with additional regional lymph node (LN) irradiation were calculated to determine the setup margins of each off‐line correction protocol. Our data showed the potential of the fixed vertical couch position protocol to restrict the systematic and random anteroposterior residual setup errors to 1.8 mm and 2.2 mm, respectively. Compared to the NAL protocol, a correction threshold of 2.5 mm reduced the frequency of mediolateral and superior–inferior setup corrections by 40% and 63%, respectively. The implementation of the correction threshold did not deteriorate the accuracy of the off‐line setup correction compared to the NAL protocol. The combination of the fixed vertical couch position protocol, for correction of the anteroposterior setup error, and the fixed‐action‐level protocol with a 2.5 mm correction threshold, for correction of the mediolateral and the superior–inferior setup errors, proved to provide adequate and comparable patient setup accuracy in WBRT and WBRT with additional LN irradiation. PACS numbers: 87.53.Kn, 87.57.‐s
Heritability analyses of IQ scores: science or numerology?
Layzer, D
1974-03-29
Estimates of IQ heritability are subject to a variety of systematic errors. The IQ scores themselves contain uncontrollable, systematic errors of unknown magnitude. These arise because IQ scores, unlike conventional physical and biological measurements, have a purely instrumental definition. The effects of these errors are apparent in the very large discrepancies among IQ correlations measured by different investigators. Genotype-environment correlations, whose effects can sometimes be minimized, if not wholly eliminated, in experiments with plants and animals, are nearly always important in human populations. The absence of significant effects arising from genotype-environment correlations is a necessary condition for the applicability of conventional heritability analysis to phenotypically plastic traits. When this condition fails, no quantitative inferences about heritability can be drawn from measured phenotypic variances and covariances, except under special conditions that are unlikely to be satisfied by phenotypically plastic traits in human populations. Inadequate understanding of the precise environmental factors relevant to the development of specific behavioral traits is an important source of systematic errors, as is the inability to allow adequately for the effects of assortative mating and gene-gene interaction. Systematic cultural differences and differences in psychological environment among races and among socioeconomic groups vitiate any attempt to draw from IQ data meaningful inferences about genetic differences. Estimates based on phenotypic correlations between separated monozygotic twins, usually considered to be the most reliable kind of estimates, are vitiated by systematic errors inherent in IQ tests, by the presence of genotype-environment correlation, and by the lack of detailed understanding of environmental factors relevant to the development of behavioral traits. Other kinds of estimates are beset, in addition, by systematic errors arising from incomplete allowance for the effects of assortative mating and from gene-gene interactions. The only potentially useful data are phenotypic correlations between unrelated foster children reared together, which could, in principle, yield lower limits for e(2). Available data indicate that, for unrelated foster children reared together, the broad heritability (h(2)) may lie between 0.0 and 0.5. This estimate does not apply to populations composed of children reared by their biological parents or by near relatives. For such populations the heritability of IQ remains undefined. The only data that might yield meaningful estimates of narrow heritability are phenotypic correlations between half-sibs reared in statistically independent environments. No useful data of this kind are available. Intervention studies like Heber's Milwaukee Project afford an alternative and comparatively direct way of studying the plasticity of cognitive and other behavioral traits in human populations. Results obtained so far strongly suggest that the development of cognitive skills is highly sensitive to variations in environmental factors. These conclusions have three obvious implications for the broader issues mentioned at the beginning of this article. 1) Published analyses of IQ data provide no support whatever for Jensen's thesis that inequalities in cognitive performance are due largely to genetic differences. As Lewontin (8) has clearly shown, the value of the broad heritability of IQ is in any case only marginally relevant to this question.
I have argued that conventional estimates of the broad heritability of IQ are invalid and that the only data on which potentially valid estimates might be based are consistent with a broad heritability of less than 0.5. On the other hand, intervention studies, if their findings prove to be replicable, would directly establish that, under suitable conditions, the offspring of parents whose cognitive skills are so poorly developed as to exclude them from all but the most menial occupations can achieve what are regarded as distinctly high levels of cognitive performance. Thus, despite the fact that children differ substantially in cognitive aptitudes and appetites, and despite the very high probability that these differences have a substantial genetic component, available scientific evidence strongly suggests that environmental factors are responsible for the failure of children not suffering from specific neurological disorders to achieve adequate levels of cognitive performance. 2) Under prevailing social conditions, no valid inferences can be drawn from IQ data concerning systematic genetic differences among races or socioeconomic groups. Research along present lines directed toward this end, whatever its ethical status, is scientifically worthless. 3) Since there are no suitable data for estimating the narrow heritability of IQ, it seems pointless to speculate about the prospects for a hereditary meritocracy based on IQ.
Light meson form factors at high Q2 from lattice QCD
NASA Astrophysics Data System (ADS)
Koponen, Jonna; Zimermmane-Santos, André; Davies, Christine; Lepage, G. Peter; Lytle, Andrew
2018-03-01
Measurements and theoretical calculations of meson form factors are essential for our understanding of internal hadron structure and QCD, the dynamics that bind the quarks in hadrons. The pion electromagnetic form factor has been measured at small space-like momentum transfer |q2| < 0.3 GeV2 by pion scattering from atomic electrons and at values up to 2.5 GeV2 by scattering electrons from the pion cloud around a proton. On the other hand, in the limit of very large (or infinite) Q2 = -q2, perturbation theory is applicable. This leaves a gap in the intermediate Q2 where the form factors are not known. As a part of their 12 GeV upgrade Jefferson Lab will measure pion and kaon form factors in this intermediate region, up to Q2 of 6 GeV2. This is then an ideal opportunity for lattice QCD to make an accurate prediction ahead of the experimental results. Lattice QCD provides a from-first-principles approach to calculate form factors, and the challenge here is to control the statistical and systematic uncertainties as errors grow when going to higher Q2 values. Here we report on a calculation that tests the method using an ηs meson, a 'heavy pion' made of strange quarks, and also present preliminary results for kaon and pion form factors. We use the nf = 2 + 1 + 1 ensembles made by the MILC collaboration and Highly Improved Staggered Quarks, which allows us to obtain high statistics. The HISQ action is also designed to have small discretisation errors. Using several light quark masses and lattice spacings allows us to control the chiral and continuum extrapolation and keep systematic errors in check.
Attia, A; Dhahbi, W; Chaouachi, A; Padulo, J; Wong, D P; Chamari, K
2017-03-01
Common methods to estimate vertical jump height (VJH) are based on the measurements of flight time (FT) or vertical reaction force. This study aimed to assess the measurement errors when estimating the VJH with flight time using photocell devices in comparison with the gold standard jump height measured by a force plate (FP). The second purpose was to determine the intrinsic reliability of the Optojump photoelectric cells in estimating VJH. For this aim, 20 subjects (age: 22.50±1.24 years) performed maximal vertical jumps in three modalities in randomized order: the squat jump (SJ), counter-movement jump (CMJ), and CMJ with arm swing (CMJarm). Each trial was simultaneously recorded by the FP and Optojump devices. High intra-class correlation coefficients (ICCs) for validity (0.98-0.99) and low limits of agreement (less than 1.4 cm) were found, although a systematic difference in jump height was consistently observed between the FT and double-integration-of-force methods (-31% to -27%; p<0.001), with a large effect size (Cohen's d >1.2). Intra-session reliability of Optojump was excellent, with ICCs ranging from 0.98 to 0.99, low coefficients of variation (3.98%), and low standard errors of measurement (0.8 cm). It was concluded that there was a high correlation between the two methods to estimate the vertical jump height, but the FT method cannot replace the gold standard, due to the large systematic bias. According to our results, the equations for each of the three jump modalities are presented in order to obtain a better estimation of the jump height.
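For reference, the flight-time method rests on the assumption that takeoff and landing occur at the same height, which gives h = g·t²/8. The short sketch below shows that calculation; it does not reproduce the study's modality-specific correction equations:

def jump_height_from_flight_time(flight_time_s, g=9.81):
    """Estimate vertical jump height (m) from flight time (s), assuming
    takeoff and landing occur at the same height: h = g * t^2 / 8."""
    return g * flight_time_s ** 2 / 8.0

print(round(jump_height_from_flight_time(0.50), 3))  # ~0.307 m for a 0.5 s flight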
NASA Technical Reports Server (NTRS)
Greenwald, Thomas J.; Christopher, Sundar A.; Chou, Joyce
1997-01-01
Satellite observations of the cloud liquid water path (LWP) are compared from special sensor microwave imager (SSM/I) measurements and GOES 8 imager solar reflectance (SR) measurements to ascertain the impact of sub-field-of-view (FOV) cloud effects on SSM/I 37 GHz retrievals. The SR retrievals also incorporate estimates of the cloud droplet effective radius derived from the GOES 8 3.9-micron channel. The comparisons consist of simultaneous collocated and full-resolution measurements and are limited to nonprecipitating marine stratocumulus in the eastern Pacific for two days in October 1995. The retrievals from these independent methods are consistent for overcast SSM/I FOVs, with RMS differences as low as 0.030 kg/sq m, although biases exist for clouds with more open spatial structure, where the RMS differences increase to 0.039 kg/sq m. For broken cloudiness within the SSM/I FOV the average beam-filling error (BFE) in the microwave retrievals is found to be about 22% (average cloud amount of 73%). This systematic error is comparable with the average random errors in the microwave retrievals. However, even larger BFEs can be expected for individual FOVs and for regions with less cloudiness. By scaling the microwave retrievals by the cloud amount within the FOV, the systematic BFE can be significantly reduced, but with increased RMS differences of 0.046-0.058 kg/sq m when compared to the SR retrievals. The beam-filling effects reported here are significant and are expected to impact directly upon studies that use instantaneous SSM/I measurements of cloud LWP, such as cloud classification studies and validation studies involving surface-based or in situ data.
Tsou, Amy Y; Lehmann, Christoph U; Michel, Jeremy; Solomon, Ronni; Possanza, Lorraine; Gandhi, Tejal
2017-01-11
Copy and paste functionality can support efficiency during clinical documentation, but may promote inaccurate documentation with risks for patient safety. The Partnership for Health IT Patient Safety was formed to gather data, conduct analysis, educate, and disseminate safe practices for safer care using health information technology (IT). To characterize copy and paste events in clinical care, identify safety risks, describe existing evidence, and develop implementable practice recommendations for safe reuse of information via copy and paste. The Partnership 1) reviewed 12 reported safety events, 2) solicited expert input, and 3) performed a systematic literature review (2010 to January 2015) to identify publications addressing frequency, perceptions/attitudes, patient safety risks, existing guidance, and potential interventions and mitigation practices. The literature review identified 51 publications that were included. Overall, 66% to 90% of clinicians routinely use copy and paste. One study of diagnostic errors found that copy and paste led to 2.6% of errors in which a missed diagnosis required patients to seek additional unplanned care. Copy and paste can promote note bloat, internal inconsistencies, error propagation, and documentation in the wrong patient chart. Existing guidance identified specific responsibilities for authors, organizations, and electronic health record (EHR) developers. Analysis of 12 reported copy and paste safety events was congruent with problems identified from the literature review. Despite regular copy and paste use, evidence regarding direct risk to patient safety remains sparse, with significant study limitations. Drawing on existing evidence, the Partnership developed four safe practice recommendations: 1) Provide a mechanism to make copy and paste material easily identifiable; 2) Ensure the provenance of copy and paste material is readily available; 3) Ensure adequate staff training and education; 4) Ensure copy and paste practices are regularly monitored, measured, and assessed.
Attia, A; Chaouachi, A; Padulo, J; Wong, DP; Chamari, K
2016-01-01
Common methods to estimate vertical jump height (VJH) are based on the measurements of flight time (FT) or vertical reaction force. This study aimed to assess the measurement errors when estimating the VJH with flight time using photocell devices in comparison with the gold standard jump height measured by a force plate (FP). The second purpose was to determine the intrinsic reliability of the Optojump photoelectric cells in estimating VJH. For this aim, 20 subjects (age: 22.50±1.24 years) performed maximal vertical jumps in three modalities in randomized order: the squat jump (SJ), counter-movement jump (CMJ), and CMJ with arm swing (CMJarm). Each trial was simultaneously recorded by the FP and Optojump devices. High intra-class correlation coefficients (ICCs) for validity (0.98-0.99) and low limits of agreement (less than 1.4 cm) were found, although a systematic difference in jump height was consistently observed between the FT and double-integration-of-force methods (-31% to -27%; p<0.001), with a large effect size (Cohen’s d>1.2). Intra-session reliability of Optojump was excellent, with ICCs ranging from 0.98 to 0.99, low coefficients of variation (3.98%), and low standard errors of measurement (0.8 cm). It was concluded that there was a high correlation between the two methods to estimate the vertical jump height, but the FT method cannot replace the gold standard, due to the large systematic bias. According to our results, the equations for each of the three jump modalities are presented in order to obtain a better estimation of the jump height. PMID:28416900
NASA Astrophysics Data System (ADS)
Braun, Jaroslav; Štroner, Martin; Urban, Rudolf
2015-05-01
All surveying instruments and their measurements suffer from some errors. To refine the measurement results, it is necessary to use procedures restricting the influence of the instrument errors on the measured values or to implement numerical corrections. In precise engineering surveying industrial applications, the accuracy of the distances, usually realized over relatively short ranges, is a key parameter limiting the resulting accuracy of the determined values (coordinates, etc.). To determine the size of the systematic and random errors of the measured distances, tests were made with the idea of suppressing the random error by averaging repeated measurements and of reducing the influence of systematic errors by identifying their absolute size on an absolute baseline realized in the geodetic laboratory at the Faculty of Civil Engineering, CTU in Prague. Sixteen concrete pillars with forced centerings were set up and the absolute distances between the points were determined with a standard deviation of 0.02 millimetre using a Leica Absolute Tracker AT401. For any distance measured by the calibrated instruments (up to the length of the testing baseline, i.e. 38.6 m), the size of the distance-meter error correction can now be determined in two ways: first, by interpolation on the raw data, or second, by using a correction function derived from a prior FFT transformation. The quality of this calibration and correction procedure was tested experimentally on three instruments (Trimble S6 HP, Topcon GPT-7501, Trimble M3) using the Leica Absolute Tracker AT401. The correction procedure reduced the standard deviation of the measured distances significantly, to less than 0.6 mm. For the Topcon GPT-7501 the nominal standard deviation is 2 mm; 2.8 mm was achieved without corrections and 0.55 mm after corrections. For the Trimble M3 the nominal standard deviation is 3 mm; 1.1 mm was achieved without corrections and 0.58 mm after corrections. Finally, for the Trimble S6 the nominal standard deviation is 1 mm; 1.2 mm was achieved without corrections and 0.51 mm after corrections. The proposed calibration and correction procedure is, in our opinion, very suitable for increasing the accuracy of electronic distance measurement and allows common surveying instruments to achieve uncommonly high precision.
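The "interpolation on the raw data" variant of the correction can be illustrated with a short sketch; the calibration table below is invented for illustration and does not reproduce the values measured on the CTU baseline:

import numpy as np

# illustrative calibration table: baseline distances (m) and determined
# corrections (mm) = reference (laser tracker) minus instrument reading
baseline_d = np.array([2.4, 7.7, 15.1, 22.8, 30.2, 38.6])
correction_mm = np.array([0.9, -0.4, 0.6, -0.8, 0.5, -0.3])

def corrected_distance(measured_m):
    """Apply the interpolated additive correction to a measured distance,
    valid up to the length of the testing baseline (38.6 m)."""
    return measured_m + np.interp(measured_m, baseline_d, correction_mm) / 1000.0

print(corrected_distance(18.500))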
NASA Technical Reports Server (NTRS)
2008-01-01
When we began our study we sought to answer five fundamental implementation questions: 1) can foregrounds be measured and subtracted to a sufficiently low level?; 2) can systematic errors be controlled?; 3) can we develop optics with sufficiently large throughput, low polarization, and frequency coverage from 30 to 300 GHz?; 4) is there a technical path to realizing the sensitivity and systematic error requirements?; and 5) what are the specific mission architecture parameters, including cost? Detailed answers to these questions are contained in this report.
Probing the Cosmological Principle in the counts of radio galaxies at different frequencies
NASA Astrophysics Data System (ADS)
Bengaly, Carlos A. P.; Maartens, Roy; Santos, Mario G.
2018-04-01
According to the Cosmological Principle, the matter distribution on very large scales should have a kinematic dipole that is aligned with that of the CMB. We determine the dipole anisotropy in the number counts of two all-sky surveys of radio galaxies. For the first time, this analysis is presented for the TGSS survey, allowing us to check consistency of the radio dipole at low and high frequencies by comparing the results with the well-known NVSS survey. We match the flux thresholds of the catalogues, with flux limits chosen to minimise systematics, and adopt a strict masking scheme. We find dipole directions that are in good agreement with each other and with the CMB dipole. In order to compare the amplitude of the dipoles with theoretical predictions, we produce sets of lognormal realisations. Our realisations include the theoretical kinematic dipole, galaxy clustering, Poisson noise, simulated redshift distributions which fit the NVSS and TGSS source counts, and errors in flux calibration. The measured dipole for NVSS is ~2 times larger than predicted by the mock data. For TGSS, the dipole is almost 5 times larger than predicted, even after checking for completeness and taking account of errors in source fluxes and in flux calibration. Further work is required to understand the nature of the systematics that are the likely cause of the anomalously large TGSS dipole amplitude.
NASA Astrophysics Data System (ADS)
Chen, Xiaodian; Wang, Shu; Deng, Licai; de Grijs, Richard
2018-06-01
Distances and extinction values are usually degenerate. To refine the distance to the general Galactic Center region, a carefully determined extinction law (taking into account the prevailing systematic errors) is urgently needed. We collected data for 55 classical Cepheids projected toward the Galactic Center region to derive the near- to mid-infrared extinction law using three different approaches. The relative extinction values obtained are A_J/A_Ks = 3.005, A_H/A_Ks = 1.717, A_[3.6]/A_Ks = 0.478, A_[4.5]/A_Ks = 0.341, A_[5.8]/A_Ks = 0.234, A_[8.0]/A_Ks = 0.321, A_W1/A_Ks = 0.506, and A_W2/A_Ks = 0.340. We also calculated the corresponding systematic errors. Compared with previous work, we report an extremely low and steep mid-infrared extinction law. Using a seven-passband “optimal distance” method, we improve the mean distance precision to our sample of 55 Cepheids to 4%. Based on four confirmed Galactic Center Cepheids, a solar Galactocentric distance of R_0 = 8.10 ± 0.19 ± 0.22 kpc is determined, featuring an uncertainty that is close to the limiting distance accuracy (2.8%) for Galactic Center Cepheids.
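One common way to set up a multi-band "optimal distance" fit is as a linear least-squares problem in the true distance modulus and A_Ks, using the extinction ratios quoted above. The apparent distance moduli in the sketch are illustrative, and this formulation is an assumption rather than the authors' exact implementation:

import numpy as np

# extinction-law ratios A_lambda / A_Ks from the abstract (J, H, [3.6], [4.5])
ratios = np.array([3.005, 1.717, 0.478, 0.341])
# illustrative apparent distance moduli mu_lambda = m_lambda - M_lambda per band
mu_obs = np.array([14.95, 14.70, 14.48, 14.45])

# linear model mu_lambda = mu0 + (A_lambda/A_Ks) * A_Ks, solved by least squares
A = np.c_[np.ones_like(ratios), ratios]
(mu0, A_Ks), *_ = np.linalg.lstsq(A, mu_obs, rcond=None)
distance_kpc = 10 ** (mu0 / 5.0 + 1.0) / 1000.0
print(f"mu0 = {mu0:.2f}, A_Ks = {A_Ks:.2f}, distance = {distance_kpc:.2f} kpc")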
ERIC Educational Resources Information Center
Choe, Wook Kyung
2013-01-01
The current dissertation represents one of the first systematic studies of the distribution of speech errors within supralexical prosodic units. Four experiments were conducted to gain insight into the specific role of these units in speech planning and production. The first experiment focused on errors in adult English. These were found to be…
A geometric model for initial orientation errors in pigeon navigation.
Postlethwaite, Claire M; Walker, Michael M
2011-01-21
All mobile animals respond to gradients in signals in their environment, such as light, sound, odours and magnetic and electric fields, but it remains controversial how they might use these signals to navigate over long distances. The Earth's surface is essentially two-dimensional, so two stimuli are needed to act as coordinates for navigation. However, no environmental fields are known to be simple enough to act as perpendicular coordinates on a two-dimensional grid. Here, we propose a model for navigation in which we assume that an animal has a simplified 'cognitive map' in which environmental stimuli act as perpendicular coordinates. We then investigate how systematic deviation of the contour lines of the environmental signals from a simple orthogonal arrangement can cause errors in position determination and lead to systematic patterns of directional errors in initial homing directions taken by pigeons. The model reproduces patterns of initial orientation errors seen in previously collected data from homing pigeons, predicts that errors should increase with distance from the loft, and provides a basis for efforts to identify further sources of orientation errors made by homing pigeons. Copyright © 2010 Elsevier Ltd. All rights reserved.
Rb-Sr and Sm-Nd Ages of Zagami DML and Sr Isotopic Heterogeneity in Zagami
NASA Technical Reports Server (NTRS)
Nyquist, Laurence E.; Shih, C.-Y.; Reese, Y. D.
2010-01-01
Zagami contains lithologic heterogeneity suggesting that it did not form in a homogeneous, thick lava flow [1]. We have previously investigated the Sr and Nd isotopic systematics of Coarse-Grained (CG) and Fine-Grained (FG) lithologies described by [2]. Both appear to belong to Normal Zagami (NZ) [1,3], but their initial Sr-isotopic compositions differ [4,5]. Here we report new analyses of the Dark Mottled Lithology (DML, [3]) that show its age and initial Sr and Nd isotopic compositions to be identical within error limits with those of CG, but Sr initial isotopic compositions differ from those of FG.
["Long-branch Attraction" artifact in phylogenetic reconstruction].
Li, Yi-Wei; Yu, Li; Zhang, Ya-Ping
2007-06-01
Phylogenetic reconstruction among various organisms not only helps us understand their evolutionary history but also addresses several fundamental evolutionary questions. Understanding the evolutionary relationships among organisms establishes the foundation for investigations in other biological disciplines. However, almost all of the widely used phylogenetic methods have limitations and fail to eliminate systematic errors effectively, which can prevent the reconstruction of the true organismal relationships. The "Long-branch Attraction" (LBA) artifact is one of the most disturbing factors in phylogenetic reconstruction. In this review, the concept of LBA, the methods for analysing it, and the strategies for avoiding it are summarized. In addition, several typical examples are provided. Approaches to avoid and resolve the LBA artifact are also discussed.
On framing the research question and choosing the appropriate research design.
Parfrey, Patrick S; Ravani, Pietro
2015-01-01
Clinical epidemiology is the science of human disease investigation with a focus on diagnosis, prognosis, and treatment. The generation of a reasonable question requires definition of patients, interventions, controls, and outcomes. The goal of research design is to minimize error, to ensure adequate samples, to measure input and output variables appropriately, to consider external and internal validities, to limit bias, and to address clinical as well as statistical relevance. The hierarchy of evidence for clinical decision-making places randomized controlled trials (RCT) or systematic review of good quality RCTs at the top of the evidence pyramid. Prognostic and etiologic questions are best addressed with longitudinal cohort studies.
High-resolution interferometric microscope for traceable dimensional nanometrology in Brazil
NASA Astrophysics Data System (ADS)
Malinovski, I.; França, R. S.; Lima, M. S.; Bessa, M. S.; Silva, C. R.; Couceiro, I. B.
2016-07-01
The double-color interferometric microscope is developed for the nanometrology of step-height standards, traceable to the definition of the meter via primary wavelength laser standards. The setup is based on two stabilized lasers to provide traceable measurements with the highest possible resolution, down to the physical limits of the optical instruments, for heights in the sub-nanometer to micrometer range. The wavelength reference is a He-Ne 633 nm stabilized laser; the secondary source is a blue-green 488 nm grating laser diode. The accurate fringe portion is measured by a modulated phase-shift technique combined with imaging interferometry and Fourier processing. Self-calibrating methods are developed to correct systematic interferometric errors.
On framing the research question and choosing the appropriate research design.
Parfrey, Patrick; Ravani, Pietro
2009-01-01
Clinical epidemiology is the science of human disease investigation with a focus on diagnosis, prognosis, and treatment. The generation of a reasonable question requires the definition of patients, interventions, controls, and outcomes. The goal of research design is to minimize error, ensure adequate samples, measure input and output variables appropriately, consider external and internal validities, limit bias, and address clinical as well as statistical relevance. The hierarchy of evidence for clinical decision making places randomized controlled trials (RCT) or systematic review of good quality RCTs at the top of the evidence pyramid. Prognostic and etiologic questions are best addressed with longitudinal cohort studies.
The Pandolf equation under-predicts the metabolic rate of contemporary military load carriage.
Drain, Jace R; Aisbett, Brad; Lewis, Michael; Billing, Daniel C
2017-11-01
This investigation assessed the accuracy of the Pandolf load carriage energy expenditure equation when simulating contemporary military conditions (load distribution, external load and walking speed). Within-participant design. Sixteen male participants completed 10 trials comprising five walking speeds (2.5, 3.5, 4.5, 5.5 and 6.5 km·h-1) and two external loads (22.7 and 38.4 kg). The Pandolf equation demonstrated poor predictive precision, with a mean bias of 124.9 W and 95% limits of agreement of -48.7 to 298.5 W. Furthermore, the Pandolf equation systematically under-predicted metabolic rate (p<0.05) across the 10 speed-load combinations. Predicted metabolic rate error ranged from 12-33% across all conditions, with the 'moderate' walking speeds (i.e. 4.5-5.5 km·h-1) yielding less prediction error (12-17%) compared with the slower and faster walking speeds (21-33%). Factors such as mechanical efficiency and load distribution contribute to the impaired predictive accuracy. The authors suggest the Pandolf equation should be applied to military load carriage with caution. Copyright © 2017 Sports Medicine Australia. All rights reserved.
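For context, a commonly cited form of the Pandolf equation (an assumption here, since the abstract does not restate it) is M = 1.5W + 2.0(W+L)(L/W)² + η(W+L)(1.5V² + 0.35VG), with body mass W and load L in kg, speed V in m/s, grade G in %, and terrain factor η. A minimal sketch:

def pandolf_metabolic_rate(W, L, V, G=0.0, eta=1.0):
    """Commonly cited form of the Pandolf load-carriage equation (watts):
    M = 1.5*W + 2.0*(W + L)*(L/W)**2 + eta*(W + L)*(1.5*V**2 + 0.35*V*G)
    W: body mass (kg), L: external load (kg), V: speed (m/s),
    G: grade (%), eta: terrain factor (1.0 for a treadmill/paved surface)."""
    return (1.5 * W + 2.0 * (W + L) * (L / W) ** 2
            + eta * (W + L) * (1.5 * V ** 2 + 0.35 * V * G))

# e.g. 75 kg participant, 22.7 kg load, 4.5 km/h (1.25 m/s), level treadmill
print(round(pandolf_metabolic_rate(75.0, 22.7, 4.5 / 3.6), 1))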
Modeling and Measurement Constraints in Fault Diagnostics for HVAC Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Najafi, Massieh; Auslander, David M.; Bartlett, Peter L.
2010-05-30
Many studies have shown that energy savings of five to fifteen percent are achievable in commercial buildings by detecting and correcting building faults, and optimizing building control systems. However, in spite of good progress in developing tools for determining HVAC diagnostics, methods to detect faults in HVAC systems are still generally undeveloped. Most approaches use numerical filtering or parameter estimation methods to compare data from energy meters and building sensors to predictions from mathematical or statistical models. They are effective when models are relatively accurate and data contain few errors. In this paper, we address the case where models are imperfect and data are variable, uncertain, and can contain error. We apply a Bayesian updating approach that is systematic in managing and accounting for most forms of model and data errors. The proposed method uses both knowledge of first principle modeling and empirical results to analyze the system performance within the boundaries defined by practical constraints. We demonstrate the approach by detecting faults in commercial building air handling units. We find that the limitations that exist in air handling unit diagnostics due to practical constraints can generally be effectively addressed through the proposed approach.
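A generic illustration of sequential Bayesian updating of a fault probability from model residuals is sketched below; the Gaussian likelihoods, their spreads, and the residual values are assumptions for illustration and do not represent the paper's air-handling-unit model:

import math

def bayes_update(prior_fault, residual, sigma_ok=0.5, sigma_fault=2.0):
    """Update P(fault) given one residual (measurement minus model prediction),
    assuming zero-mean Gaussian residuals with a larger spread under a fault.
    Generic illustration, not the specific AHU model used in the paper."""
    def gauss(x, s):
        return math.exp(-0.5 * (x / s) ** 2) / (s * math.sqrt(2.0 * math.pi))
    like_fault = gauss(residual, sigma_fault)
    like_ok = gauss(residual, sigma_ok)
    num = like_fault * prior_fault
    return num / (num + like_ok * (1.0 - prior_fault))

p = 0.05  # prior fault probability
for r in [0.3, 1.8, 2.1, 2.4]:  # supply-air temperature residuals (illustrative)
    p = bayes_update(p, r)
print(round(p, 3))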
Systematic Error in Leaf Water Potential Measurements with a Thermocouple Psychrometer.
Rawlins, S L
1964-10-30
To allow for the error in measurement of water potentials in leaves, introduced by the presence of a water droplet in the chamber of the psychrometer, a correction must be made for the permeability of the leaf.
Heneka, Nicole; Shaw, Tim; Rowett, Debra; Phillips, Jane L
2016-06-01
Opioids are the primary pharmacological treatment for cancer pain and, in the palliative care setting, are routinely used to manage symptoms at the end of life. Opioids are one of the most frequently reported drug classes in medication errors causing patient harm. Despite their widespread use, little is known about the incidence and impact of opioid medication errors in oncology and palliative care settings. To determine the incidence, types and impact of reported opioid medication errors in adult oncology and palliative care patient settings. A systematic review. Five electronic databases and the grey literature were searched from 1980 to August 2014. Empirical studies published in English, reporting data on opioid medication error incidence, types or patient impact, within adult oncology and/or palliative care services, were included. Popay's narrative synthesis approach was used to analyse data. Five empirical studies were included in this review. Opioid error incidence rate was difficult to ascertain as each study focussed on a single narrow area of error. The predominant error type related to deviation from opioid prescribing guidelines, such as incorrect dosing intervals. None of the included studies reported the degree of patient harm resulting from opioid errors. This review has highlighted the paucity of the literature examining opioid error incidence, types and patient impact in adult oncology and palliative care settings. Defining, identifying and quantifying error reporting practices for these populations should be an essential component of future oncology and palliative care quality and safety initiatives. © The Author(s) 2015.
The effect of rainfall measurement uncertainties on rainfall-runoff processes modelling.
Stransky, D; Bares, V; Fatka, P
2007-01-01
Rainfall data are a crucial input for various tasks concerning the wet weather period. Nevertheless, their measurement is affected by random and systematic errors that cause an underestimation of the rainfall volume. Therefore, the general objective of the presented work was to assess the credibility of measured rainfall data and to evaluate the effect of measurement errors on urban drainage modelling tasks. Within the project, the methodology of the tipping bucket rain gauge (TBR) was defined and assessed in terms of uncertainty analysis. A set of 18 TBRs was calibrated and the results were compared to the previous calibration, which enabled us to evaluate the ageing of the TBRs. A propagation of calibration and other systematic errors through the rainfall-runoff model was performed on an experimental catchment. It was found that the TBR calibration is important mainly for tasks connected with the assessment of peak values and high flow durations. The omission of calibration leads to up to 30% underestimation, and the effect of other systematic errors can add a further 15%. The TBR calibration should be done every two years in order to keep up with the ageing of the TBR mechanics. Further, the authors recommend adjusting the dynamic test duration in proportion to the generated rainfall intensity.
RCSLenS: The Red Cluster Sequence Lensing Survey
NASA Astrophysics Data System (ADS)
Hildebrandt, H.; Choi, A.; Heymans, C.; Blake, C.; Erben, T.; Miller, L.; Nakajima, R.; van Waerbeke, L.; Viola, M.; Buddendiek, A.; Harnois-Déraps, J.; Hojjati, A.; Joachimi, B.; Joudaki, S.; Kitching, T. D.; Wolf, C.; Gwyn, S.; Johnson, N.; Kuijken, K.; Sheikhbahaee, Z.; Tudorica, A.; Yee, H. K. C.
2016-11-01
We present the Red Cluster Sequence Lensing Survey (RCSLenS), an application of the methods developed for the Canada-France-Hawaii Telescope Lensing Survey (CFHTLenS) to the ˜785 deg2, multi-band imaging data of the Red-sequence Cluster Survey 2. This project represents the largest public, sub-arcsecond seeing, multi-band survey to date that is suited for weak gravitational lensing measurements. With a careful assessment of systematic errors in shape measurements and photometric redshifts, we extend the use of this data set to allow cross-correlation analyses between weak lensing observables and other data sets. We describe the imaging data, the data reduction, masking, multi-colour photometry, photometric redshifts, shape measurements, tests for systematic errors, and a blinding scheme to allow for more objective measurements. In total, we analyse 761 pointings with r-band coverage, which constitutes our lensing sample. Residual large-scale B-mode systematics prevent the use of this shear catalogue for cosmic shear science. The effective number density of lensing sources over an unmasked area of 571.7 deg2 and down to a magnitude limit of r ˜ 24.5 is 8.1 galaxies per arcmin2 (weighted: 5.5 arcmin-2) distributed over 14 patches on the sky. Photometric redshifts based on four-band griz data are available for 513 pointings covering an unmasked area of 383.5 deg2. We present weak lensing mass reconstructions of some example clusters as well as the full survey representing the largest areas that have been mapped in this way. All our data products are publicly available through Canadian Astronomy Data Centre at http://www.cadc-ccda.hia-iha.nrc-cnrc.gc.ca/en/community/rcslens/query.html in a format very similar to the CFHTLenS data release.
Calibration of limited-area ensemble precipitation forecasts for hydrological predictions
NASA Astrophysics Data System (ADS)
Diomede, Tommaso; Marsigli, Chiara; Montani, Andrea; Nerozzi, Fabrizio; Paccagnella, Tiziana
2015-04-01
The main objective of this study is to investigate the impact of calibration for limited-area ensemble precipitation forecasts, to be used for driving discharge predictions up to 5 days in advance. A reforecast dataset, which spans 30 years, based on the Consortium for Small Scale Modeling Limited-Area Ensemble Prediction System (COSMO-LEPS) was used for testing the calibration strategy. Three calibration techniques were applied: quantile-to-quantile mapping, linear regression, and analogs. The performance of these methodologies was evaluated in terms of statistical scores for the precipitation forecasts operationally provided by COSMO-LEPS in the years 2003-2007 over Germany, Switzerland, and the Emilia-Romagna region (northern Italy). The analog-based method seemed preferable because of its capability to correct position errors and spread deficiencies. A suitable spatial domain for the analog search can help to handle model spatial errors as systematic errors. However, the performance of the analog-based method may degrade in cases where a limited training dataset is available. A sensitivity test on the length of the training dataset over which to perform the analog search has been performed. The quantile-to-quantile mapping and linear regression methods were less effective, mainly because the forecast-analysis relation was not so strong for the available training dataset. A comparison between the calibration based on the deterministic reforecast and the calibration based on the full operational ensemble used as training dataset has been considered, with the aim of evaluating whether reforecasts are really worthwhile for calibration, given that their computational cost is remarkable. The verification of the calibration process was then performed by coupling ensemble precipitation forecasts with a distributed rainfall-runoff model. This test was carried out for a medium-sized catchment located in Emilia-Romagna, showing a beneficial impact of the analog-based method on the reduction of missed events for discharge predictions.
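Of the three techniques, quantile-to-quantile mapping is the simplest to sketch: each forecast value is mapped from its quantile in the training forecast distribution to the corresponding quantile of the training observations. The training samples below are synthetic, and the implementation is a minimal illustration rather than the operational COSMO-LEPS calibration code:

import numpy as np

def quantile_map(fcst, train_fcst, train_obs, n_q=99):
    """Empirical quantile-to-quantile mapping: each forecast value is replaced
    by the observed value at the same quantile of the training distribution."""
    q = np.linspace(0.01, 0.99, n_q)
    fcst_q = np.quantile(train_fcst, q)
    obs_q = np.quantile(train_obs, q)
    return np.interp(fcst, fcst_q, obs_q)

rng = np.random.default_rng(1)
train_fcst = rng.gamma(shape=1.2, scale=4.0, size=5000)   # reforecast precipitation
train_obs = rng.gamma(shape=1.2, scale=5.5, size=5000)    # analysed precipitation
print(quantile_map(np.array([2.0, 10.0, 30.0]), train_fcst, train_obs))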
Quotation accuracy in medical journal articles-a systematic review and meta-analysis.
Jergas, Hannah; Baethge, Christopher
2015-01-01
Background. Quotations and references are an indispensable element of scientific communication. They should support what authors claim or provide important background information for readers. Studies indicate, however, that quotations not serving their purpose-quotation errors-may be prevalent. Methods. We carried out a systematic review, meta-analysis and meta-regression of quotation errors, taking account of differences between studies in error ascertainment. Results. Out of 559 studies screened we included 28 in the main analysis, and estimated major, minor and total quotation error rates of 11.9%, 95% CI [8.4, 16.6], 11.5% [8.3, 15.7], and 25.4% [19.5, 32.4], respectively. While heterogeneity was substantial, even the lowest estimate of total quotation errors was considerable (6.7%). Indirect references accounted for less than one sixth of all quotation problems. The findings remained robust in a number of sensitivity and subgroup analyses (including risk of bias analysis) and in meta-regression. There was no indication of publication bias. Conclusions. Readers of medical journal articles should be aware of the fact that quotation errors are common. Measures against quotation errors include spot checks by editors and reviewers, correct placement of citations in the text, and declarations by authors that they have checked cited material. Future research should elucidate if and to what degree quotation errors are detrimental to scientific progress.
Outcomes of a Failure Mode and Effects Analysis for medication errors in pediatric anesthesia.
Martin, Lizabeth D; Grigg, Eliot B; Verma, Shilpa; Latham, Gregory J; Rampersad, Sally E; Martin, Lynn D
2017-06-01
The Institute of Medicine has called for development of strategies to prevent medication errors, which are one important cause of preventable harm. Although the field of anesthesiology is considered a leader in patient safety, recent data suggest high medication error rates in anesthesia practice. Unfortunately, few error prevention strategies for anesthesia providers have been implemented. Using Toyota Production System quality improvement methodology, a multidisciplinary team observed 133 h of medication practice in the operating room at a tertiary care freestanding children's hospital. A failure mode and effects analysis was conducted to systematically deconstruct and evaluate each medication handling process step and score possible failure modes to quantify areas of risk. A bundle of five targeted countermeasures were identified and implemented over 12 months. Improvements in syringe labeling (73 to 96%), standardization of medication organization in the anesthesia workspace (0 to 100%), and two-provider infusion checks (23 to 59%) were observed. Medication error reporting improved during the project and was subsequently maintained. After intervention, the median medication error rate decreased from 1.56 to 0.95 per 1000 anesthetics. The frequency of medication error harm events reaching the patient also decreased. Systematic evaluation and standardization of medication handling processes by anesthesia providers in the operating room can decrease medication errors and improve patient safety. © 2017 John Wiley & Sons Ltd.
De-biasing the dynamic mode decomposition for applied Koopman spectral analysis of noisy datasets
NASA Astrophysics Data System (ADS)
Hemati, Maziar S.; Rowley, Clarence W.; Deem, Eric A.; Cattafesta, Louis N.
2017-08-01
The dynamic mode decomposition (DMD)—a popular method for performing data-driven Koopman spectral analysis—has gained increased popularity for extracting dynamically meaningful spatiotemporal descriptions of fluid flows from snapshot measurements. Oftentimes, DMD descriptions can be used for predictive purposes as well, which enables informed decision-making based on DMD model forecasts. Despite its widespread use and utility, DMD can fail to yield accurate dynamical descriptions when the measured snapshot data are imprecise due to, e.g., sensor noise. Here, we express DMD as a two-stage algorithm in order to isolate a source of systematic error. We show that DMD's first stage, a subspace projection step, systematically introduces bias errors by processing snapshots asymmetrically. To remove this systematic error, we propose utilizing an augmented snapshot matrix in a subspace projection step, as in problems of total least-squares, in order to account for the error present in all snapshots. The resulting unbiased and noise-aware total DMD (TDMD) formulation reduces to standard DMD in the absence of snapshot errors, while the two-stage perspective generalizes the de-biasing framework to other related methods as well. TDMD's performance is demonstrated in numerical and experimental fluids examples. In particular, in the analysis of time-resolved particle image velocimetry data for a separated flow, TDMD outperforms standard DMD by providing dynamical interpretations that are consistent with alternative analysis techniques. Further, TDMD extracts modes that reveal detailed spatial structures missed by standard DMD.
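As a rough sketch of the de-biasing idea described above (assuming the projection is onto the leading right-singular subspace of the stacked snapshot matrix, which is how we read the abstract), a minimal total-DMD-style implementation might look like this; the toy data and the truncation rank are illustrative:

import numpy as np

def total_dmd(X, Y, r):
    """Noise-aware (total-least-squares flavoured) DMD sketch: project the
    snapshot pair onto the leading right-singular subspace of the augmented
    matrix Z = [X; Y], then run standard DMD on the projected data."""
    Z = np.vstack([X, Y])
    _, _, Vt = np.linalg.svd(Z, full_matrices=False)
    P = Vt[:r].T @ Vt[:r]            # projector onto the r-dimensional subspace
    Xp, Yp = X @ P, Y @ P            # de-biased snapshot matrices
    U, s, Wt = np.linalg.svd(Xp, full_matrices=False)
    U, s, Wt = U[:, :r], s[:r], Wt[:r]
    A_tilde = U.T @ Yp @ Wt.T @ np.diag(1.0 / s)
    eigvals, eigvecs = np.linalg.eig(A_tilde)
    modes = Yp @ Wt.T @ np.diag(1.0 / s) @ eigvecs
    return eigvals, modes

# toy example: noisy snapshots of a single decaying oscillation
rng = np.random.default_rng(0)
t = np.linspace(0, 4 * np.pi, 200)
data = np.outer(np.linspace(1, 2, 16), np.exp(-0.05 * t) * np.sin(t))
data += 0.05 * rng.normal(size=data.shape)
X, Y = data[:, :-1], data[:, 1:]
eigvals, _ = total_dmd(X, Y, r=4)
print(np.log(eigvals.astype(complex)) / (t[1] - t[0]))  # continuous-time eigenvalues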
Hinton-Bayre, Anton D
2011-02-01
There is an ongoing debate over the preferred method(s) for determining the reliable change (RC) in individual scores over time. In the present paper, specificity comparisons of several classic and contemporary RC models were made using a real data set. This included a more detailed review of a new RC model recently proposed in this journal, that used the within-subjects standard deviation (WSD) as the error term. It was suggested that the RC(WSD) was more sensitive to change and theoretically superior. The current paper demonstrated that even in the presence of mean practice effects, false-positive rates were comparable across models when reliability was good and initial and retest variances were equivalent. However, when variances differed, discrepancies in classification across models became evident. Notably, the RC using the WSD provided unacceptably high false-positive rates in this setting. It was considered that the WSD was never intended for measuring change in this manner. The WSD actually combines systematic and error variance. The systematic variance comes from measurable between-treatment differences, commonly referred to as practice effect. It was further demonstrated that removal of the systematic variance and appropriate modification of the residual error term for the purpose of testing individual change yielded an error term already published and criticized in the literature. A consensus on the RC approach is needed. To that end, further comparison of models under varied conditions is encouraged.
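To make the quantities under debate concrete, the sketch below computes a classic reliable-change index with an additive practice-effect correction (a Jacobson-Truax-style error term, not the WSD-based term criticized above); the numerical inputs are illustrative:

import math

def reliable_change(x1, x2, sd_baseline, r_xx, practice_effect=0.0):
    """Classic reliable-change index with an additive practice-effect correction:
    RC = (x2 - x1 - practice_effect) / SEdiff, SEdiff = sqrt(2) * SEM,
    SEM = sd_baseline * sqrt(1 - r_xx).  |RC| > 1.96 flags reliable change."""
    sem = sd_baseline * math.sqrt(1.0 - r_xx)
    se_diff = math.sqrt(2.0) * sem
    return (x2 - x1 - practice_effect) / se_diff

# e.g. retest 92 vs baseline 100, SD 15, reliability 0.85, +3 point practice effect
print(round(reliable_change(100, 92, 15, 0.85, practice_effect=3.0), 2))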
Global Warming Estimation from MSU: Correction for Drift and Calibration Errors
NASA Technical Reports Server (NTRS)
Prabhakara, C.; Iacovazzi, R., Jr.; Yoo, J.-M.; Einaudi, Franco (Technical Monitor)
2000-01-01
Microwave Sounding Unit (MSU) radiometer observations in Ch 2 (53.74 GHz), made in the nadir direction from sequential, sun-synchronous, polar-orbiting NOAA morning satellites (NOAA 6, 10 and 12, which have about 7am/7pm orbital geometry) and afternoon satellites (NOAA 7, 9, 11 and 14, which have about 2am/2pm orbital geometry), are analyzed in this study to derive the global temperature trend from 1980 to 1998. In order to remove the discontinuities between the data of the successive satellites and to get a continuous time series, we first used the shortest possible time record of each satellite. In this way we get a preliminary estimate of the global temperature trend of 0.21 K/decade. However, this estimate is affected by systematic time-dependent errors. One such error is the instrument calibration error. This error can be inferred whenever there are overlapping measurements made by two satellites over an extended period of time. From the available successive satellite data we have taken the longest possible time record of each satellite to form the time series during the period 1980 to 1998 and to infer this error. We find we can decrease the global temperature trend by about 0.07 K/decade. In addition, there are systematic time-dependent errors present in the data that are introduced by the drift in the satellite orbital geometry: one arises from the diurnal cycle in temperature, and the other is the drift-related change in the calibration of the MSU. In order to analyze the nature of these drift-related errors, the multi-satellite Ch 2 data set is partitioned into am and pm subsets to create two independent time series. The error can be assessed in the am and pm data of Ch 2 on land and can be eliminated. Observations made in the MSU Ch 1 (50.3 GHz) support this approach. The error is obvious only in the difference between the pm and am observations of Ch 2 over the ocean. We have followed two different paths to assess the impact of the errors on the global temperature trend. In one path the entire error is placed in the am data, while in the other it is placed in the pm data. The global temperature trend is increased or decreased by about 0.03 K/decade depending upon this placement. Taking into account all random and systematic errors, our analysis of MSU observations leads us to conclude that a conservative estimate of the global warming is 0.11 ± 0.04 K/decade during 1980 to 1998.
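The overlap-based correction for inter-satellite calibration offsets can be illustrated with a toy merge of two monthly anomaly records; the synthetic series, the overlap length, and the constant-offset assumption are illustrative and much simpler than the actual MSU processing:

import numpy as np

def merge_with_offset(series_a, series_b, overlap):
    """Remove the mean inter-satellite difference estimated over the overlapping
    months before appending the follow-on record (a simple illustration of using
    overlaps to correct instrument calibration offsets)."""
    offset = np.mean(series_b[:overlap] - series_a[-overlap:])
    return np.concatenate([series_a, (series_b - offset)[overlap:]])

rng = np.random.default_rng(2)
sat_a = 0.01 * np.arange(60) + 0.1 * rng.normal(size=60)              # 5 yr of monthly anomalies
sat_b = 0.01 * np.arange(48, 108) + 0.25 + 0.1 * rng.normal(size=60)  # biased follow-on record
merged = merge_with_offset(sat_a, sat_b, overlap=12)
print(merged.shape)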
Moisture Forecast Bias Correction in GEOS DAS
NASA Technical Reports Server (NTRS)
Dee, D.
1999-01-01
Data assimilation methods rely on numerous assumptions about the errors involved in measuring and forecasting atmospheric fields. One of the more disturbing of these is that short-term model forecasts are assumed to be unbiased. In the case of atmospheric moisture, for example, observational evidence shows that the systematic component of errors in forecasts and analyses is often of the same order of magnitude as the random component. We have implemented a sequential algorithm for estimating forecast moisture bias from rawinsonde data in the Goddard Earth Observing System Data Assimilation System (GEOS DAS). The algorithm is designed to remove the systematic component of analysis errors and can be easily incorporated in an existing statistical data assimilation system. We will present results of initial experiments that show a significant reduction of bias in the GEOS DAS moisture analyses.
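A minimal sketch of a sequential bias estimate from observation-minus-forecast innovations is given below; the exponential-discounting form, the weight gamma, and the innovation values are assumptions for illustration, not the GEOS DAS algorithm:

def update_bias(prev_bias, innovations, gamma=0.05):
    """One cycle of a sequential bias estimate: blend the previous bias with the
    mean observation-minus-forecast innovation (generic sketch, not GEOS DAS)."""
    innov_mean = sum(innovations) / len(innovations)
    return (1.0 - gamma) * prev_bias + gamma * innov_mean

bias = 0.0
for cycle_innovations in [[0.4, 0.6, 0.3], [0.5, 0.7], [0.2, 0.6, 0.5, 0.4]]:
    bias = update_bias(bias, cycle_innovations)   # rawinsonde minus forecast humidity
print(round(bias, 3))  # the debiased forecast would be forecast + bias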
NASA Technical Reports Server (NTRS)
Young, A. T.
1974-01-01
An overlooked systematic error exists in the apparent radial velocities of solar lines reflected from regions of Venus near the terminator, owing to a combination of the finite angular size of the Sun and its large (2 km/sec) equatorial velocity of rotation. This error produces an apparent, but fictitious, retrograde component of planetary rotation, typically on the order of 40 meters/sec. Spectroscopic, photometric, and radiometric evidence against a 4-day atmospheric rotation is also reviewed. The bulk of the somewhat contradictory evidence seems to favor slow motions, on the order of 5 m/sec, in the atmosphere of Venus; the 4-day rotation may be due to a traveling wave-like disturbance, not bulk motions, driven by the UV albedo differences.
Sancho-García, J C
2011-09-13
Highly accurate coupled-cluster (CC) calculations with large basis sets have been performed to study the binding energy of the (CH)12, (CH)16, (CH)20, and (CH)24 polyhedral hydrocarbons in two, cage-like and planar, forms. We also considered the effect of other minor contributions: core-correlation, relativistic corrections, and extrapolations to the limit of the full CC expansion. Thus, chemically accurate values could be obtained for these complicated systems. These nearly exact results are used to evaluate next the performance of main approximations (i.e., pure, hybrid, and double-hybrid methods) within density functional theory (DFT) in a systematic fashion. Some commonly used functionals, including the B3LYP model, are affected by large errors, and only those having reduced self-interaction error (SIE), which includes the last family of conjectured expressions (double hybrids), are able to achieve reasonable low deviations of 1-2 kcal/mol especially when an estimate for dispersion interactions is also added.
Total error shift patterns for daily CT on rails image-guided radiotherapy to the prostate bed
2011-01-01
Background To evaluate the daily total error shift patterns on post-prostatectomy patients undergoing image guided radiotherapy (IGRT) with a diagnostic quality computer tomography (CT) on rails system. Methods A total of 17 consecutive post-prostatectomy patients receiving adjuvant or salvage IMRT using CT-on-rails IGRT were analyzed. The prostate bed's daily total error shifts were evaluated for a total of 661 CT scans. Results In the right-left, cranial-caudal, and posterior-anterior directions, 11.5%, 9.2%, and 6.5% of the 661 scans required no position adjustments; 75.3%, 66.1%, and 56.8% required a shift of 1 - 5 mm; 11.5%, 20.9%, and 31.2% required a shift of 6 - 10 mm; and 1.7%, 3.8%, and 5.5% required a shift of more than 10 mm, respectively. There was evidence of correlation between the x and y, x and z, and y and z axes in 3, 3, and 3 of 17 patients, respectively. Univariate (ANOVA) analysis showed that the total error pattern was random in the x, y, and z axis for 10, 5, and 2 of 17 patients, respectively, and systematic for the rest. Multivariate (MANOVA) analysis showed that the (x,y), (x,z), (y,z), and (x, y, z) total error pattern was random in 5, 1, 1, and 1 of 17 patients, respectively, and systematic for the rest. Conclusions The overall daily total error shift pattern for these 17 patients simulated with an empty bladder, and treated with CT on rails IGRT was predominantly systematic. Despite this, the temporal vector trends showed complex behaviors and unpredictable changes in magnitude and direction. These findings highlight the importance of using daily IGRT in post-prostatectomy patients. PMID:22024279
Analyzing false positives of four questions in the Force Concept Inventory
NASA Astrophysics Data System (ADS)
Yasuda, Jun-ichiro; Mae, Naohiro; Hull, Michael M.; Taniguchi, Masa-aki
2018-06-01
In this study, we analyze the systematic error from false positives of the Force Concept Inventory (FCI). We compare the systematic errors of question 6 (Q.6), Q.7, and Q.16, for which clearly erroneous reasoning has been found, with Q.5, for which clearly erroneous reasoning has not been found. We determine whether or not a correct response to a given FCI question is a false positive using subquestions. In addition to the 30 original questions, subquestions were introduced for Q.5, Q.6, Q.7, and Q.16. This modified version of the FCI was administered to 1145 university students in Japan from 2015 to 2017. In this paper, we discuss our finding that the systematic errors of Q.6, Q.7, and Q.16 are much larger than that of Q.5 for students with mid-level FCI scores. Furthermore, we find that, averaged over the data sample, the sum of the false positives from Q.5, Q.6, Q.7, and Q.16 is about 10% of the FCI score of a midlevel student.
NASA Astrophysics Data System (ADS)
Henry, William; Jefferson Lab Hall A Collaboration
2017-09-01
Jefferson Lab's cutting-edge parity-violating electron scattering program has increasingly stringent requirements for systematic errors. Beam polarimetry is often one of the dominant systematic errors in these experiments. A new Møller Polarimeter in Hall A of Jefferson Lab (JLab) was installed in 2015 and has taken first measurements for a polarized scattering experiment. Upcoming parity violation experiments in Hall A include CREX, PREX-II, MOLLER and SOLID, with the latter two requiring <0.5% precision on beam polarization measurements. The polarimeter measures the Møller scattering rates of the polarized electron beam incident upon an iron target placed in a saturating magnetic field. The spectrometer consists of four focusing quadrupoles and one momentum-selection dipole. The detector is designed to measure the scattered and knocked-out target electrons in coincidence. Beam polarization is extracted by constructing an asymmetry from the scattering rates when the incident electron spin is parallel and anti-parallel to the target electron spin. Initial data will be presented. Sources of systematic errors include target magnetization, spectrometer acceptance, the Levchuk effect, and radiative corrections, which will be discussed. National Science Foundation.
NASA Technical Reports Server (NTRS)
James, W. P. (Principal Investigator); Hill, J. M.; Bright, J. B.
1977-01-01
The author has identified the following significant results. Correlations between the satellite radiance values and water color, Secchi disk visibility, turbidity, and attenuation coefficients were generally good. The residual was due to several factors, including systematic errors in the remotely sensed data, small time and space variations in the water quality measurements, and errors caused by the experimental design. Satellite radiance values were closely correlated with the optical properties of the water.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gonzalez, P; Olaciregui-Ruiz, I; Mijnheer, B
2016-06-15
Purpose: To investigate the sensitivity of an EPID-based 3D dose verification system to detect delivery errors in VMAT treatments. Methods: For this study 41 EPID-reconstructed 3D in vivo dose distributions of 15 different VMAT plans (H&N, lung, prostate and rectum) were selected. To simulate the effect of delivery errors, their TPS plans were modified by: 1) scaling of the monitor units by ±3% and ±6% and 2) systematic shifting of leaf bank positions by ±1mm, ±2mm and ±5mm. The 3D in vivo dose distributions were then compared to the unmodified and modified treatment plans. To determine the detectability of the various delivery errors, we made use of a receiver operator characteristic (ROC) methodology. True positive and false positive rates were calculated as a function of the γ-parameters γmean, γ1% (near-maximum γ) and the PTV dose parameter ΔD50 (i.e. D50(EPID)-D50(TPS)). The ROC curve is constructed by plotting the true positive rate vs. the false positive rate. The area under the ROC curve (AUC) then serves as a measure of the performance of the EPID dosimetry system in detecting a particular error; an ideal system has AUC=1. Results: The AUC ranges for the machine output errors and systematic leaf position errors were [0.64 – 0.93] and [0.48 – 0.92] respectively using γmean, [0.57 – 0.79] and [0.46 – 0.85] using γ1% and [0.61 – 0.77] and [0.48 – 0.62] using ΔD50. Conclusion: For the verification of VMAT deliveries, the parameter γmean is the best discriminator for the detection of systematic leaf position errors and monitor unit scaling errors. Compared to γmean and γ1%, the parameter ΔD50 performs worse as a discriminator in all cases.
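The AUC for a scalar discriminator such as γmean can be computed with the rank-based (Mann-Whitney) estimator sketched below; the γmean values are invented for illustration and are not taken from the study:

import numpy as np

def auc(scores_error, scores_no_error):
    """Rank-based (Mann-Whitney) estimate of the area under the ROC curve:
    the probability that a randomly chosen error case scores higher than a
    randomly chosen error-free case (ties count one half)."""
    a = np.asarray(scores_error)[:, None]
    b = np.asarray(scores_no_error)[None, :]
    return float(np.mean((a > b) + 0.5 * (a == b)))

# illustrative gamma-mean values for deliveries with and without a 2 mm leaf shift
gamma_mean_error = [0.62, 0.71, 0.55, 0.80, 0.66]
gamma_mean_ok = [0.45, 0.52, 0.60, 0.41, 0.49]
print(auc(gamma_mean_error, gamma_mean_ok))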
Strategic planning to reduce medical errors: Part I--diagnosis.
Waldman, J Deane; Smith, Howard L
2012-01-01
Despite extensive dialogue and a continuing stream of proposed medical practice revisions, medical errors and adverse impacts persist. Connectivity of vital elements is often underestimated or not fully understood. This paper analyzes medical errors from a systems dynamics viewpoint (Part I). Our analysis suggests in Part II that the most fruitful strategies for dissolving medical errors include facilitating physician learning, educating patients about appropriate expectations surrounding treatment regimens, and creating "systematic" patient protections rather than depending on (nonexistent) perfect providers.
Rational-Emotive Therapy versus Systematic Desensitization: A Comment on Moleski and Tosi.
ERIC Educational Resources Information Center
Atkinson, Leslie
1983-01-01
Questioned the statistical analyses of the Moleski and Tosi investigation of rational-emotive therapy versus systematic desensitization. Suggested means for lowering the error rate through a more efficient experimental design. Recommended a reanalysis of the original data. (LLL)
ASME B89.4.19 Performance Evaluation Tests and Geometric Misalignments in Laser Trackers
Muralikrishnan, B.; Sawyer, D.; Blackburn, C.; Phillips, S.; Borchardt, B.; Estler, W. T.
2009-01-01
Small and unintended offsets, tilts, and eccentricity of the mechanical and optical components in laser trackers introduce systematic errors in the measured spherical coordinates (angles and range readings) and possibly in the calculated lengths of reference artifacts. It is desirable that the tests described in the ASME B89.4.19 Standard [1] be sensitive to these geometric misalignments so that any resulting systematic errors are identified during performance evaluation. In this paper, we present some analysis, using error models and numerical simulation, of the sensitivity of the length measurement system tests and two-face system tests in the B89.4.19 Standard to misalignments in laser trackers. We highlight key attributes of the testing strategy adopted in the Standard and propose new length measurement system tests that demonstrate improved sensitivity to some misalignments. Experimental results with a tracker that is not properly error corrected for the effects of the misalignments validate claims regarding the proposed new length tests. PMID:27504211
Hoy, Robert S; Foteinopoulou, Katerina; Kröger, Martin
2009-09-01
Primitive path analyses of entanglements are performed over a wide range of chain lengths for both bead spring and atomistic polyethylene polymer melts. Estimators for the entanglement length N_{e} which operate on results for a single chain length N are shown to produce systematic O(1/N) errors. The mathematical roots of these errors are identified as (a) treating chain ends as entanglements and (b) neglecting non-Gaussian corrections to chain and primitive path dimensions. The prefactors for the O(1/N) errors may be large; in general their magnitude depends both on the polymer model and the method used to obtain primitive paths. We propose, derive, and test new estimators which eliminate these systematic errors using information obtainable from the variation in entanglement characteristics with chain length. The new estimators produce accurate results for N_{e} from marginally entangled systems. Formulas based on direct enumeration of entanglements appear to converge faster and are simpler to apply.
An, Zhao; Wen-Xin, Zhang; Zhong, Yao; Yu-Kuan, Ma; Qing, Liu; Hou-Lang, Duan; Yi-di, Shang
2016-06-29
The aim was to optimize and simplify the survey method for Oncomelania hupensis snails in marshland endemic regions of schistosomiasis, and to increase the precision, efficiency and economy of the snail survey. A 50 m × 50 m quadrat experimental field was selected in Chayegang marshland near Henghu farm in the Poyang Lake region, and a whole-coverage method was adopted to survey the snails. The simple random sampling, systematic sampling and stratified random sampling methods were applied to calculate the minimum sample size, relative sampling error and absolute sampling error. The minimum sample sizes for the simple random sampling, systematic sampling and stratified random sampling methods were 300, 300 and 225, respectively. The relative sampling errors of the three methods were all less than 15%. The absolute sampling errors were 0.2217, 0.3024 and 0.0478, respectively. Spatial stratified sampling with altitude as the stratum variable is an efficient approach, with lower cost and higher precision, for the snail survey.
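For readers unfamiliar with why stratification lowers the sampling error, a hedged Python sketch comparing simple random sampling with proportionally allocated stratified sampling follows; the altitude strata and snail densities are invented, not the survey's data.

```python
import numpy as np

def se_simple_random(values, n):
    """Standard error of the mean under simple random sampling of size n."""
    return np.std(values, ddof=1) / np.sqrt(n)

def se_stratified(strata, n_total):
    """Standard error under stratified sampling with proportional allocation.
    `strata` is a list of arrays, one per stratum (e.g. altitude bands)."""
    N = sum(len(s) for s in strata)
    var = 0.0
    for s in strata:
        w = len(s) / N                      # stratum weight
        n_h = max(2, round(n_total * w))    # proportional allocation
        var += w**2 * np.var(s, ddof=1) / n_h
    return np.sqrt(var)

rng = np.random.default_rng(0)
# invented snail counts per frame for low, middle and high altitude bands
low, mid, high = rng.poisson(1, 800), rng.poisson(4, 1200), rng.poisson(9, 500)
frame = np.concatenate([low, mid, high])
print(se_simple_random(frame, 300), se_stratified([low, mid, high], 225))
```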
NASA Astrophysics Data System (ADS)
Kirstetter, P.; Hong, Y.; Gourley, J. J.; Chen, S.; Flamig, Z.; Zhang, J.; Howard, K.; Petersen, W. A.
2011-12-01
Proper characterization of the error structure of TRMM Precipitation Radar (PR) quantitative precipitation estimation (QPE) is needed for their use in TRMM combined products, water budget studies and hydrological modeling applications. Due to the variety of sources of error in spaceborne radar QPE (attenuation of the radar signal, influence of land surface, impact of off-nadir viewing angle, etc.) and the impact of correction algorithms, the problem is addressed by comparison of PR QPEs with reference values derived from ground-based measurements (GV) using NOAA/NSSL's National Mosaic QPE (NMQ) system. An investigation of this subject has been carried out at the PR estimation scale (instantaneous and 5 km) on the basis of a 3-month-long data sample. A significant effort has been carried out to derive a bias-corrected, robust reference rainfall source from NMQ. The GV processing details will be presented along with preliminary results of PR's error characteristics using contingency table statistics, probability distribution comparisons, scatter plots, semi-variograms, and systematic biases and random errors.
The Calibration System of the E989 Experiment at Fermilab
DOE Office of Scientific and Technical Information (OSTI.GOV)
Anastasi, Antonio
The muon anomaly aµ is one of the most precisely known quantities in physics, both experimentally and theoretically. This high level of accuracy makes it possible to use the measurement of aµ as a test of the Standard Model by comparison with the theoretical calculation. After the impressive result obtained at Brookhaven National Laboratory in 2001, with a total accuracy of 0.54 ppm, a new experiment, E989, is under construction at Fermilab, motivated by the roughly 3σ difference between the experimental and Standard Model values of aµ. The purpose of the E989 experiment is a fourfold reduction of the error, with a goal of 0.14 ppm, improving both the systematic and statistical uncertainty. With the use of the Fermilab beam complex, a statistics 21 times larger than at BNL will be reached in almost 2 years of data taking, improving the statistical uncertainty to 0.1 ppm. Improvement on the systematic error involves the measurement technique of ωa and ωp, the anomalous precession frequency of the muon and the Larmor precession frequency of the proton, respectively. The measurement of ωp involves the magnetic field measurement, and improvements in this sector related to the uniformity of the field should reduce the systematic uncertainty with respect to BNL from 170 ppb to 70 ppb. A reduction from 180 ppb to 70 ppb is also required for the measurement of ωa; a new DAQ, faster electronics, and new detectors and a new calibration system will be implemented with respect to E821 to reach this goal. In particular, the laser calibration system will reduce the systematic error due to gain fluctuations of the photodetectors from 0.12 to 0.02 ppm. The 0.02 ppm limit on the systematic error requires a system with a stability of 10^-4 on a short time scale (700 µs), while on longer time scales the required stability is at the percent level. The 10^-4 stability level required is almost an order of magnitude better than existing laser calibration systems in particle physics, making the calibration system a very challenging item. In addition to the high level of stability, the particular environment, with a 14 m diameter storage ring, a highly uniform magnetic field and the detector distribution around the storage ring, sets specific guidelines and constraints. This thesis focuses on the final design of the Laser Calibration System developed for the E989 experiment. Chapter 1 introduces the subject of the anomalous magnetic moment of the muon; chapter 2 presents previous measurements of g-2, while chapter 3 discusses the Standard Model prediction and possible new physics scenarios. Chapter 4 describes the E989 experiment: it covers the experimental technique and presents the experimental apparatus, focusing on the improvements necessary to reduce the statistical and systematic errors. The main subject of the thesis is discussed in the last two chapters: chapter 5 is focused on the Laser Calibration System, while chapter 6 describes the test beam performed at the Beam Test Facility of Laboratori Nazionali di Frascati from 29 February to 7 March as a final test for the full calibration system. An introduction explains the physics motivation of the system and the different devices implemented. In the final chapter the setup used is described and some of the results obtained are presented.
The sensitivity of patient specific IMRT QC to systematic MLC leaf bank offset errors.
Rangel, Alejandra; Palte, Gesa; Dunscombe, Peter
2010-07-01
Patient specific IMRT QC is performed routinely in many clinics as a safeguard against errors and inaccuracies which may be introduced during the complex planning, data transfer, and delivery phases of this type of treatment. The purpose of this work is to evaluate the feasibility of detecting systematic errors in MLC leaf bank position with patient specific checks. 9 head and neck (H&N) and 14 prostate IMRT beams were delivered using MLC files containing systematic offsets (+/- 1 mm in two banks, +/- 0.5 mm in two banks, and 1 mm in one bank of leaves). The beams were measured using both MAPCHECK (Sun Nuclear Corp., Melbourne, FL) and the aS1000 electronic portal imaging device (Varian Medical Systems, Palo Alto, CA). Comparisons with calculated fields, without offsets, were made using commonly adopted criteria including absolute dose (AD) difference, relative dose difference, distance to agreement (DTA), and the gamma index. The criteria most sensitive to systematic leaf bank offsets were the 3% AD, 3 mm DTA for MAPCHECK and the gamma index with 2% AD and 2 mm DTA for the EPID. The criterion based on the relative dose measurements was the least sensitive to MLC offsets. More highly modulated fields, i.e., H&N, showed greater changes in the percentage of passing points due to systematic MLC inaccuracy than prostate fields. None of the techniques or criteria tested is sufficiently sensitive, with the population of IMRT fields, to detect a systematic MLC offset at a clinically significant level on an individual field. Patient specific QC cannot, therefore, substitute for routine QC of the MLC itself.
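A schematic one-dimensional gamma-index implementation in the spirit of the criteria above (global dose-difference normalization) is given below to make the pass/fail criterion concrete; it is far simpler than a clinical 2D analysis and is not the software used in the study.

```python
import numpy as np

def gamma_1d(dose_ref, dose_eval, positions, dd=0.03, dta=2.0):
    """Gamma index for 1D dose profiles.
    dd  : dose-difference criterion as a fraction of the maximum reference dose
    dta : distance-to-agreement criterion in mm
    For each evaluated point, gamma is the minimum over all reference points of
    sqrt((dr/dta)^2 + (dD/dd_abs)^2); a point passes if gamma <= 1."""
    dd_abs = dd * dose_ref.max()                     # global normalization
    gamma = np.empty_like(dose_eval)
    for i, (x_e, d_e) in enumerate(zip(positions, dose_eval)):
        dr = (positions - x_e) / dta
        dD = (dose_ref - d_e) / dd_abs
        gamma[i] = np.sqrt(dr**2 + dD**2).min()
    return gamma

x = np.arange(0.0, 50.0, 1.0)                        # positions in mm
ref = np.exp(-((x - 25.0) / 10.0) ** 2)              # reference profile
meas = np.exp(-((x - 26.0) / 10.0) ** 2)             # 1 mm shifted "measurement"
g = gamma_1d(ref, meas, x)
print("pass rate: %.1f %%" % (100.0 * np.mean(g <= 1.0)))
```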
Pion mass dependence of the HVP contribution to muon g - 2
NASA Astrophysics Data System (ADS)
Golterman, Maarten; Maltman, Kim; Peris, Santiago
2018-03-01
One of the systematic errors in some of the current lattice computations of the HVP contribution to the muon anomalous magnetic moment g - 2 is that associated with the extrapolation to the physical pion mass. We investigate this extrapolation assuming lattice pion masses in the range of 220 to 440 MeV with the help of two-loop chiral perturbation theory, and find that such an extrapolation is unlikely to lead to control of this systematic error at the 1% level. This remains true even if various proposed tricks to improve the chiral extrapolation are taken into account.
Comparison of different source calculations in two-nucleon channel at large quark mass
NASA Astrophysics Data System (ADS)
Yamazaki, Takeshi; Ishikawa, Ken-ichi; Kuramashi, Yoshinobu
2018-03-01
We investigate a systematic error coming from higher excited state contributions in the energy shift of a light nucleus in the two-nucleon channel by comparing two different source calculations, with exponential and wall sources. Since it is hard to obtain a clear signal for the wall-source correlation function in the plateau region, we employ a large quark mass, corresponding to a pion mass of 0.8 GeV, in quenched QCD. We discuss the systematic error in the spin-triplet channel of the two-nucleon system, and the volume dependence of the energy shift.
NASA Technical Reports Server (NTRS)
Flamant, Cyrille N.; Schwemmer, Geary K.; Korb, C. Laurence; Evans, Keith D.; Palm, Stephen P.
1999-01-01
Remote airborne measurements of the vertical and horizontal structure of the atmospheric pressure field in the lower troposphere are made with an oxygen differential absorption lidar (DIAL). A detailed analysis of this measurement technique is provided, which includes corrections for imprecise knowledge of the detector background level, the oxygen absorption line parameters, and variations in the laser output energy. In addition, we analyze other possible sources of systematic errors, including spectral effects related to aerosol and molecular scattering, interference by rotational Raman scattering, and interference by isotopic oxygen lines.
Production and detection of atomic hexadecapole at Earth's magnetic field.
Acosta, V M; Auzinsh, M; Gawlik, W; Grisins, P; Higbie, J M; Jackson Kimball, D F; Krzemien, L; Ledbetter, M P; Pustelny, S; Rochester, S M; Yashchuk, V V; Budker, D
2008-07-21
Optical magnetometers measure magnetic fields with extremely high precision and without cryogenics. However, at geomagnetic fields, important for applications from landmine removal to archaeology, they suffer from nonlinear Zeeman splitting, leading to systematic dependence on sensor orientation. We present experimental results on a method of eliminating this systematic error, using the hexadecapole atomic polarization moment. In particular, we demonstrate selective production of the atomic hexadecapole moment at Earth's magnetic field and verify its immunity to nonlinear Zeeman splitting. This technique promises to eliminate directional errors in all-optical atomic magnetometers, potentially improving their measurement accuracy by several orders of magnitude.
O'Connor, Annette M; Totton, Sarah C; Cullen, Jonah N; Ramezani, Mahmood; Kalivarapu, Vijay; Yuan, Chaohui; Gilbert, Stephen B
2018-01-01
Systematic reviews are increasingly using data from preclinical animal experiments in evidence networks. Further, there are ever-increasing efforts to automate aspects of the systematic review process. When assessing systematic bias and unit-of-analysis errors in preclinical experiments, it is critical to understand the study design elements employed by investigators. Such information can also inform prioritization of automation efforts that allow the identification of the most common issues. The aim of this study was to identify the design elements used by investigators in preclinical research in order to inform unique aspects of assessment of bias and error in preclinical research. Using 100 preclinical experiments each related to brain trauma and toxicology, we assessed design elements described by the investigators. We evaluated Methods and Materials sections of reports for descriptions of the following design elements: 1) use of comparison group, 2) unit of allocation of the interventions to study units, 3) arrangement of factors, 4) method of factor allocation to study units, 5) concealment of the factors during allocation and outcome assessment, 6) independence of study units, and 7) nature of factors. Many investigators reported using design elements that suggested the potential for unit-of-analysis errors, i.e., descriptions of repeated measurements of the outcome (94/200) and descriptions of potential for pseudo-replication (99/200). Use of complex factor arrangements was common, with 112 experiments using some form of factorial design (complete, incomplete or split-plot-like). In the toxicology dataset, 20 of the 100 experiments appeared to use a split-plot-like design, although no investigators used this term. The common use of repeated measures and factorial designs means understanding bias and error in preclinical experimental design might require greater expertise than simple parallel designs. Similarly, use of complex factor arrangements creates novel challenges for accurate automation of data extraction and bias and error assessment in preclinical experiments.
Cabilan, C J; Kynoch, Kathryn
2017-09-01
Second victims are clinicians who have made adverse errors and feel traumatized by the experience. The current published literature on second victims is mainly representative of doctors, hence nurses' experiences are not fully depicted. This systematic review was necessary to understand the second victim experience for nurses, explore the support provided, and recommend appropriate support systems for nurses. To synthesize the best available evidence on nurses' experiences as second victims, and explore their experiences of the support they receive and the support they need. Participants were registered nurses who made adverse errors. The review included studies that described nurses' experiences as second victims and/or the support they received after making adverse errors. All studies conducted in any health care settings worldwide. The qualitative studies included were grounded theory, discourse analysis and phenomenology. A structured search strategy was used to locate all unpublished and published qualitative studies, but was limited to the English language, and published between 1980 and February 2017. The references of studies selected for eligibility screening were hand-searched for additional literature. Eligible studies were assessed by two independent reviewers for methodological quality using a standardized critical appraisal instrument from the Joanna Briggs Institute Qualitative Assessment and Review Instrument (JBI QARI). Themes and narrative statements were extracted from papers included in the review using the standardized data extraction tool from JBI QARI. Data synthesis was conducted using the Joanna Briggs Institute meta-aggregation approach. There were nine qualitative studies included in the review. The narratives of 284 nurses generated a total of 43 findings, which formed 15 categories based on similarity of meaning. Four synthesized findings were generated from the categories: (i) The error brings a considerable emotional burden to the nurse that can last for a long time. In some cases, the error can alter nurses' perspectives and disrupt workplace relations; (ii) The type of support received influences how the nurse will feel about the error. Often nurses choose to speak with colleagues who have had similar experiences. Strategies need to focus on helping them to overcome the negative emotions associated with being a second victim; (iii) After the error, nurses are confronted with the dilemma of disclosure. Disclosure is determined by the following factors: how nurses feel about the error, harm to the patient, the support available to the nurse, and how errors are dealt with in the past; and (iv) Reconciliation is every nurse's endeavor. Predominantly, this is achieved by accepting fallibility, followed by acts of restitution, such as making positive changes in practice and disclosure to attain closure (see "Summary of findings"). Adverse errors were distressing for nurses, but they did not always receive the support they needed from colleagues. The lack of support had a significant impact on nurses' decisions on whether to disclose the error and his/her recovery process. Therefore, a good support system is imperative in alleviating the emotional burden, promoting the disclosure process, and assisting nurses with reconciliation. This review also highlighted research gaps that encompass the characteristics of the support system preferred by nurses, and the scarcity of studies worldwide.
Data-collection strategy for challenging native SAD phasing.
Olieric, Vincent; Weinert, Tobias; Finke, Aaron D; Anders, Carolin; Li, Dianfan; Olieric, Natacha; Borca, Camelia N; Steinmetz, Michel O; Caffrey, Martin; Jinek, Martin; Wang, Meitian
2016-03-01
Recent improvements in data-collection strategies have pushed the limits of native SAD (single-wavelength anomalous diffraction) phasing, a method that uses the weak anomalous signal of light elements naturally present in macromolecules. These involve the merging of multiple data sets from either multiple crystals or from a single crystal collected in multiple orientations at a low X-ray dose. Both approaches yield data of high multiplicity while minimizing radiation damage and systematic error, thus ensuring accurate measurements of the anomalous differences. Here, the combined use of these two strategies is described to solve cases of native SAD phasing that were particular challenges: the integral membrane diacylglycerol kinase (DgkA) with a low Bijvoet ratio of 1% and the large 200 kDa complex of the CRISPR-associated endonuclease (Cas9) bound to guide RNA and target DNA crystallized in the low-symmetry space group C2. The optimal native SAD data-collection strategy based on systematic measurements performed on the 266 kDa multiprotein/multiligand tubulin complex is discussed.
A Ka-Band Celestial Reference Frame with Applications to Deep Space Navigation
NASA Technical Reports Server (NTRS)
Jacobs, Christopher S.; Clark, J. Eric; Garcia-Miro, Cristina; Horiuchi, Shinji; Sotuela, Ioana
2011-01-01
The Ka-band radio spectrum is now being used for a wide variety of applications. This paper highlights the use of Ka-band as a frequency for precise deep space navigation based on a set of reference beacons provided by extragalactic quasars which emit broadband noise at Ka-band. This quasar-based celestial reference frame is constructed using X/Ka-band (8.4/32 GHz) from fifty-five 24-hour sessions with the Deep Space Network antennas in California, Australia, and Spain. We report on observations which have detected 464 sources covering the full 24 hours of Right Ascension and declinations down to -45 deg. Comparison of this X/Ka-band frame to the international standard S/X-band (2.3/8.4 GHz) ICRF2 shows wRMS agreement of approximately 200 micro-arcsec in alpha cos(delta) and approximately 300 micro-arcsec in delta. There is evidence for systematic errors at the 100 micro-arcsec level. Known errors include limited SNR, lack of instrumental phase calibration, tropospheric refraction mis-modeling, and limited southern geometry. The motivation for extending the celestial reference frame to frequencies above 8 GHz is to access more compact source morphology for improved frame stability and to support spacecraft navigation for Ka-band based NASA missions.
NASA Astrophysics Data System (ADS)
Craciunescu, Teddy; Peluso, Emmanuele; Murari, Andrea; Gelfusa, Michela; JET Contributors
2018-05-01
The total emission of radiation is a crucial quantity for calculating power balances and for understanding the physics of any Tokamak. Bolometric systems are the main tool to measure this important physical quantity, through quite sophisticated tomographic inversion methods. On the Joint European Torus, the coverage of the bolometric diagnostic, due to the availability of basically only two projection angles, is quite limited, rendering the inversion a very ill-posed mathematical problem. A new approach, based on the maximum likelihood, has therefore been developed and implemented to alleviate one of the major weaknesses of traditional tomographic techniques: the difficulty of routinely determining the confidence intervals of the results. The method has been validated by numerical simulations with phantoms to assess the quality of the results and to optimise the configuration of the parameters for the main types of emissivity encountered experimentally. The typical levels of statistical errors, which may significantly influence the quality of the reconstructions, have been identified. The systematic tests with phantoms indicate that the errors in the reconstructions are quite limited and their effect on the total radiated power remains well below 10%. A comparison with other approaches to the inversion and to the regularization has also been performed.
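To make the maximum-likelihood approach concrete, a toy sketch of the classical ML-EM update for a Poisson emission model follows; this is a generic algorithm, not necessarily the JET implementation, and the geometry matrix here is random for illustration.

```python
import numpy as np

def mlem(A, y, n_iter=50):
    """Maximum-likelihood EM reconstruction for y ~ Poisson(A @ x).
    A : (n_lines_of_sight, n_pixels) geometry/projection matrix
    y : measured line-integrated signals
    Returns the estimated emissivity x (non-negative by construction)."""
    x = np.ones(A.shape[1])
    sens = A.sum(axis=0)                         # sensitivity of each pixel
    for _ in range(n_iter):
        proj = A @ x                             # forward-projected estimate
        ratio = y / np.maximum(proj, 1e-12)
        x *= (A.T @ ratio) / np.maximum(sens, 1e-12)
    return x

rng = np.random.default_rng(1)
A = rng.uniform(0.0, 1.0, size=(40, 25))         # toy lines of sight x pixels
x_true = rng.uniform(0.0, 5.0, size=25)          # phantom emissivity
y = rng.poisson(A @ x_true)                      # noisy measurements
print(np.round(mlem(A, y)[:5], 2), np.round(x_true[:5], 2))
```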
NASA Astrophysics Data System (ADS)
Grzybowski, J. M. V.; Macau, E. E. N.; Yoneyama, T.
2017-05-01
This paper presents a self-contained framework for the stability assessment of isochronal synchronization in networks of chaotic and limit-cycle oscillators. The results are based on the Lyapunov-Krasovskii theorem and establish a sufficient condition for local synchronization stability as a function of the system and network parameters. With this in mind, a network of mutually delay-coupled oscillators subject to direct self-coupling is considered and the resulting error equations are block-diagonalized for the purpose of studying their stability. These error equations are evaluated by means of analytical stability results derived from the Lyapunov-Krasovskii theorem. The proposed approach is shown to be a feasible option for the investigation of local stability of isochronal synchronization for a variety of oscillators coupled through linear functions of the state variables under a given undirected graph structure. This ultimately permits the systematic identification of stability regions within the high-dimensional network parameter space. Examples of applications of the results to a number of networks of delay-coupled chaotic and limit-cycle oscillators are provided, such as the Lorenz, Rössler, Cubic Chua's circuit, Van der Pol oscillator and the Hindmarsh-Rose neuron.
Quotation accuracy in medical journal articles—a systematic review and meta-analysis
Jergas, Hannah
2015-01-01
Background. Quotations and references are an indispensable element of scientific communication. They should support what authors claim or provide important background information for readers. Studies indicate, however, that quotations not serving their purpose—quotation errors—may be prevalent. Methods. We carried out a systematic review, meta-analysis and meta-regression of quotation errors, taking account of differences between studies in error ascertainment. Results. Out of 559 studies screened we included 28 in the main analysis, and estimated major, minor and total quotation error rates of 11.9% (95% CI [8.4, 16.6]), 11.5% [8.3, 15.7], and 25.4% [19.5, 32.4], respectively. While heterogeneity was substantial, even the lowest estimate of total quotation errors was considerable (6.7%). Indirect references accounted for less than one sixth of all quotation problems. The findings remained robust in a number of sensitivity and subgroup analyses (including risk of bias analysis) and in meta-regression. There was no indication of publication bias. Conclusions. Readers of medical journal articles should be aware of the fact that quotation errors are common. Measures against quotation errors include spot checks by editors and reviewers, correct placement of citations in the text, and declarations by authors that they have checked cited material. Future research should elucidate if and to what degree quotation errors are detrimental to scientific progress. PMID:26528420
Applying lessons learned to enhance human performance and reduce human error for ISS operations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nelson, W.R.
1999-01-01
A major component of reliability, safety, and mission success for space missions is ensuring that the humans involved (flight crew, ground crew, mission control, etc.) perform their tasks and functions as required. This includes compliance with training and procedures during normal conditions, and successful compensation when malfunctions or unexpected conditions occur. A very significant issue that affects human performance in space flight is human error. Human errors can invalidate carefully designed equipment and procedures. If certain errors combine with equipment failures or design flaws, mission failure or loss of life can occur. The control of human error during operation of the International Space Station (ISS) will be critical to the overall success of the program. As experience from Mir operations has shown, human performance plays a vital role in the success or failure of long duration space missions. The Department of Energy's Idaho National Engineering and Environmental Laboratory (INEEL) is developing a systematic approach to enhance human performance and reduce human errors for ISS operations. This approach is based on the systematic identification and evaluation of lessons learned from past space missions such as Mir to enhance the design and operation of ISS. This paper will describe previous INEEL research on human error sponsored by NASA and how it can be applied to enhance human reliability for ISS. © 1999 American Institute of Physics.
Applying lessons learned to enhance human performance and reduce human error for ISS operations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nelson, W.R.
1998-09-01
A major component of reliability, safety, and mission success for space missions is ensuring that the humans involved (flight crew, ground crew, mission control, etc.) perform their tasks and functions as required. This includes compliance with training and procedures during normal conditions, and successful compensation when malfunctions or unexpected conditions occur. A very significant issue that affects human performance in space flight is human error. Human errors can invalidate carefully designed equipment and procedures. If certain errors combine with equipment failures or design flaws, mission failure or loss of life can occur. The control of human error during operation of the International Space Station (ISS) will be critical to the overall success of the program. As experience from Mir operations has shown, human performance plays a vital role in the success or failure of long duration space missions. The Department of Energy's Idaho National Engineering and Environmental Laboratory (INEEL) has developed a systematic approach to enhance human performance and reduce human errors for ISS operations. This approach is based on the systematic identification and evaluation of lessons learned from past space missions such as Mir to enhance the design and operation of ISS. This paper describes previous INEEL research on human error sponsored by NASA and how it can be applied to enhance human reliability for ISS.
NASA Astrophysics Data System (ADS)
Mena, Marcelo Andres
During 2004 and 2006 the University of Iowa provided air quality forecast support for flight planning of the ICARTT and MILAGRO field campaigns. A method for improving model performance in comparison with observations is shown. The method allows sources of model error to be identified in the boundary conditions and emissions inventories. Simultaneous analysis of the horizontal interpolation of model error and of the error covariance showed that the error in ozone modeling is highly correlated with the error of its precursors, and that there is a geographical correlation as well. During ICARTT, the ozone modeling error was reduced by updating the National Emissions Inventory from 1999 to 2001, and further by updating large point source emissions from continuous monitoring data. Further improvements were achieved by reducing area emissions of NOx by 60% for states in the Southeast United States. Ozone error was highly correlated with NOy error during this campaign. Also, ozone production in the United States was most sensitive to NOx emissions. During MILAGRO, model performance in terms of correlation coefficients was higher, but the ozone modeling error was high due to overestimation of NOx and VOC emissions in Mexico City during forecasting. Large model improvements were obtained by decreasing NOx emissions in Mexico City by 50% and VOC emissions by 60%. Recurring ozone error is spatially correlated with CO and NOy error. Sensitivity studies show that Mexico City aerosol can reduce regional photolysis rates by 40% and ozone formation by 5-10%. Mexico City emissions can enhance NOy and O3 concentrations over the Gulf of Mexico by up to 10-20%. Mexico City emissions can convert regional ozone production regimes from VOC-limited to NOx-limited. A method of interpolation of observations along flight tracks is shown, which can be used to infer the direction of outflow plumes. Ratios such as O3/NOy and NOx/NOy can be used to provide information on chemical characteristics of the plume, such as age and ozone production regime. Interpolated MTBE observations can be used as a tracer of urban mobile source emissions. Finally, procedures for estimating and gridding emissions inventories in Brazil and Mexico are presented.
Ko, YuKyung; Yu, Soyoung
2017-09-01
This study was undertaken to explore the correlations among nurses' perceptions of patient safety culture, their intention to report errors, and leader coaching behaviors. The participants (N = 289) were nurses from 5 Korean hospitals with approximately 300 to 500 beds each. Sociodemographic variables, patient safety culture, intention to report errors, and coaching behavior were measured using self-report instruments. Data were analyzed using descriptive statistics, Pearson correlation coefficient, the t test, and the Mann-Whitney U test. Nurses' perceptions of patient safety culture and their intention to report errors showed significant differences between groups of nurses who rated their leaders as high-performing or low-performing coaches. Perceived coaching behavior showed a significant, positive correlation with patient safety culture and intention to report errors, i.e., as nurses' perceptions of coaching behaviors increased, so did their ratings of patient safety culture and error reporting. There is a need in health care settings for coaching by nurse managers to provide quality nursing care and thus improve patient safety. Programs that are systematically developed and implemented to enhance the coaching behaviors of nurse managers are crucial to the improvement of patient safety and nursing care. Moreover, a systematic analysis of the causes of malpractice, as opposed to a focus on the punitive consequences of errors, could increase error reporting and therefore promote a culture in which a higher level of patient safety can thrive.
NASA Technical Reports Server (NTRS)
Pavlis, Nikolaos K.
1991-01-01
An error analysis study was conducted in order to assess the current accuracies and the future anticipated improvements in the estimation of geopotential differences over intercontinental locations. An observation/estimation scheme was proposed and studied, whereby gravity disturbance measurements on the Earth's surface, in caps surrounding the estimation points, are combined with corresponding data in caps directly over these points at the altitude of a low orbiting satellite, for the estimation of the geopotential difference between the terrestrial stations. The mathematical modeling required to relate the primary observables to the parameters to be estimated, was studied for the terrestrial data and the data at altitude. Emphasis was placed on the examination of systematic effects and on the corresponding reductions that need to be applied to the measurements to avoid systematic errors. The error estimation for the geopotential differences was performed using both truncation theory and least squares collocation with ring averages, in case observations on the Earth's surface only are used. The error analysis indicated that with the currently available global geopotential model OSU89B and with gravity disturbance data in 2 deg caps surrounding the estimation points, the error of the geopotential difference arising from errors in the reference model and the cap data is about 23 kgal cm, for 30 deg station separation.
Luijten, Maartje; Machielsen, Marise W.J.; Veltman, Dick J.; Hester, Robert; de Haan, Lieuwe; Franken, Ingmar H.A.
2014-01-01
Background Several current theories emphasize the role of cognitive control in addiction. The present review evaluates neural deficits in the domains of inhibitory control and error processing in individuals with substance dependence and in those showing excessive addiction-like behaviours. The combined evaluation of event-related potential (ERP) and functional magnetic resonance imaging (fMRI) findings in the present review offers unique information on neural deficits in addicted individuals. Methods We selected 19 ERP and 22 fMRI studies using stop-signal, go/no-go or Flanker paradigms based on a search of PubMed and Embase. Results The most consistent findings in addicted individuals relative to healthy controls were lower N2, error-related negativity and error positivity amplitudes as well as hypoactivation in the anterior cingulate cortex (ACC), inferior frontal gyrus and dorsolateral prefrontal cortex. These neural deficits, however, were not always associated with impaired task performance. With regard to behavioural addictions, some evidence has been found for similar neural deficits; however, studies are scarce and results are not yet conclusive. Differences among the major classes of substances of abuse were identified and involve stronger neural responses to errors in individuals with alcohol dependence versus weaker neural responses to errors in other substance-dependent populations. Limitations Task design and analysis techniques vary across studies, thereby reducing comparability among studies and the potential of clinical use of these measures. Conclusion Current addiction theories were supported by identifying consistent abnormalities in prefrontal brain function in individuals with addiction. An integrative model is proposed, suggesting that neural deficits in the dorsal ACC may constitute a hallmark neurocognitive deficit underlying addictive behaviours, such as loss of control. PMID:24359877
Running coupling constant from lattice studies of gluon and ghost propagators
NASA Astrophysics Data System (ADS)
Cucchieri, A.; Mendes, T.
2004-12-01
We present a numerical study of the running coupling constant in four-dimensional pure-SU(2) lattice gauge theory. The running coupling is evaluated by fitting data for the gluon and ghost propagators in minimal Landau gauge. Following Refs. [1, 2], the fitting formulae are obtained by a simultaneous integration of the β function and of a function coinciding with the anomalous dimension of the propagator in the momentum subtraction scheme. We consider these formulae at three and four loops. The fitting method works well, especially for the ghost case, for which statistical error and hyper-cubic effects are very small. Our present result for ΛMS is 200 +60/−40 MeV, where the error is purely systematic. We are currently extending this analysis to five loops in order to reduce this systematic error.
Simplified model of pinhole imaging for quantifying systematic errors in image shape
DOE Office of Scientific and Technical Information (OSTI.GOV)
Benedetti, Laura Robin; Izumi, N.; Khan, S. F.
In this paper, we examine systematic errors in x-ray imaging by pinhole optics for quantifying uncertainties in the measurement of convergence and asymmetry in inertial confinement fusion implosions. We present a quantitative model for the total resolution of a pinhole optic with an imaging detector that more effectively describes the effect of diffraction than models that treat geometry and diffraction as independent. This model can be used to predict loss of shape detail due to imaging across the transition from geometric to diffractive optics. We find that fractional error in observable shapes is proportional to the total resolution element we present and inversely proportional to the length scale of the asymmetry being observed. Finally, we have experimentally validated our results by imaging a single object with differently sized pinholes and with different magnifications.
León-Reina, L; García-Maté, M; Álvarez-Pinazo, G; Santacruz, I; Vallcorba, O; De la Torre, A G; Aranda, M A G
2016-06-01
This study reports 78 Rietveld quantitative phase analyses using Cu Kα1, Mo Kα1 and synchrotron radiations. Synchrotron powder diffraction has been used to validate the most challenging analyses. From the results for three series with increasing contents of an analyte (an inorganic crystalline phase, an organic crystalline phase and a glass), it is inferred that Rietveld analyses from high-energy Mo Kα1 radiation have slightly better accuracies than those obtained from Cu Kα1 radiation. This behaviour has been established from the results of the calibration graphs obtained through the spiking method and also from Kullback-Leibler distance statistic studies. This outcome is explained, in spite of the lower diffraction power of Mo radiation compared with Cu radiation, as arising from the larger volume tested with Mo and also because the higher energy allows one to record patterns with fewer systematic errors. The limit of detection (LoD) and limit of quantification (LoQ) have also been established for the studied series. For similar recording times, the LoDs in Cu patterns, ∼0.2 wt%, are slightly lower than those derived from Mo patterns, ∼0.3 wt%. The LoQ for a well-crystallized inorganic phase using laboratory powder diffraction was established to be close to 0.10 wt% in stable fits with good precision. However, the accuracy of these analyses was poor, with relative errors near 100%. Only contents higher than 1.0 wt% yielded analyses with relative errors lower than 20%.
Search, Memory, and Choice Error: An Experiment
Sanjurjo, Adam
2015-01-01
Multiple attribute search is a central feature of economic life: we consider much more than price when purchasing a home, and more than wage when choosing a job. An experiment is conducted in order to explore the effects of cognitive limitations on choice in these rich settings, in accordance with the predictions of a new model of search memory load. In each task, subjects are made to search the same information in one of two orders, which differ in predicted memory load. Despite standard models of choice treating such variations in order of acquisition as irrelevant, lower predicted memory load search orders are found to lead to substantially fewer choice errors. An implication of the result for search behavior, more generally, is that in order to reduce memory load (thus choice error) a limited memory searcher ought to deviate from the search path of an unlimited memory searcher in predictable ways-a mechanism that can explain the systematic deviations from optimal sequential search that have recently been discovered in peoples' behavior. Further, as cognitive load is induced endogenously (within the task), and found to affect choice behavior, this result contributes to the cognitive load literature (in which load is induced exogenously), as well as the cognitive ability literature (in which cognitive ability is measured in a separate task). In addition, while the information overload literature has focused on the detrimental effects of the quantity of information on choice, this result suggests that, holding quantity constant, the order that information is observed in is an essential determinant of choice failure. PMID:26121356
Flow tilt angles near forest edges - Part 2: Lidar anemometry
NASA Astrophysics Data System (ADS)
Dellwik, E.; Mann, J.; Bingöl, F.
2010-05-01
A novel way of estimating near-surface mean flow tilt angles from ground based Doppler lidar measurements is presented. The results are compared with traditional mast based in-situ sonic anemometry. The tilt angle assessed with the lidar is based on 10 or 30 min mean values of the velocity field from a conically scanning lidar. In this mode of measurement, the lidar beam is rotated in a circle by a prism with a fixed angle to the vertical at varying focus distances. By fitting a trigonometric function to the scans, the mean vertical velocity can be estimated. Lidar measurements from (1) a fetch-limited beech forest site taken at 48-175 m a.g.l. (above ground level), (2) a reference site in flat agricultural terrain and (3) a second reference site in complex terrain are presented. The method to derive flow tilt angles and mean vertical velocities from lidar has several advantages compared to sonic anemometry; there is no flow distortion caused by the instrument itself, there are no temperature effects and the instrument misalignment can be corrected for by assuming zero tilt angle at high altitudes. Contrary to mast-based instruments, the lidar measures the wind field with the exact same alignment error at a multitude of heights. Disadvantages with estimating vertical velocities from a lidar compared to mast-based measurements are potentially slightly increased levels of statistical errors due to limited sampling time, because the sampling is disjunct, and a requirement for homogeneous flow. The estimated mean vertical velocity is biased if the flow over the scanned circle is not homogeneous. It is demonstrated that the error on the mean vertical velocity due to flow inhomogeneity can be approximated by a function of the angle of the lidar beam to the vertical and the vertical gradient of the mean vertical velocity, whereas the error due to flow inhomogeneity on the horizontal mean wind speed is independent of the lidar beam angle. For the presented measurements over forest, it is evaluated that the systematic error due to the inhomogeneity of the flow is less than 0.2°. The results of the vertical conical scans were promising, and yielded positive flow angles for a sector where the forest is fetch-limited. However, more data and analysis are needed for a complete evaluation of the lidar technique.
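A minimal sketch of the conical-scan retrieval described above: fit a first-order trigonometric function to the radial velocities versus azimuth, then recover the mean vertical velocity and flow tilt angle. The cone half-angle, wind values and noise level are assumptions for illustration; this is not the authors' processing chain.

```python
import numpy as np

phi = np.deg2rad(30.0)                                   # assumed cone half-angle
theta = np.linspace(0.0, 2*np.pi, 72, endpoint=False)    # beam azimuth angles

# synthetic "truth": horizontal wind (u, v) and a small mean vertical velocity w
u, v, w = 8.0, 2.0, 0.15
v_radial = (u*np.cos(theta) + v*np.sin(theta))*np.sin(phi) + w*np.cos(phi)
v_radial += np.random.default_rng(2).normal(0.0, 0.2, theta.size)   # noise

# least-squares fit of v_r(theta) = a0 + a1*cos(theta) + b1*sin(theta)
G = np.column_stack([np.ones_like(theta), np.cos(theta), np.sin(theta)])
a0, a1, b1 = np.linalg.lstsq(G, v_radial, rcond=None)[0]

w_fit = a0 / np.cos(phi)                                 # mean vertical velocity
speed = np.hypot(a1, b1) / np.sin(phi)                   # mean horizontal speed
tilt_deg = np.degrees(np.arctan2(w_fit, speed))          # mean flow tilt angle
print(w_fit, speed, tilt_deg)
```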
NASA Astrophysics Data System (ADS)
Wu, Lifu; Zhu, Jianguo; Xie, Huimin; Zhou, Mengmeng
2016-12-01
Recently, we proposed a single-lens 3D digital image correlation (3D DIC) method and established a measurement system on the basis of a bilateral telecentric lens (BTL) and a bi-prism. This system can retrieve the 3D morphology of a target and measure its deformation using a single BTL with relatively high accuracy. Nevertheless, the system still suffers from systematic errors caused by manufacturing deficiencies of the bi-prism and distortion of the BTL. In this study, in-depth evaluations of these errors and their effects on the measurement results are performed experimentally. The bi-prism deficiency and the BTL distortion are characterized by two in-plane rotation angles and several distortion coefficients, respectively. These values are obtained from a calibration process using a chessboard placed into the field of view of the system; this process is conducted after the measurement of the tested specimen. A modified mathematical model is proposed, which takes these systematic errors into account and corrects them during 3D reconstruction. Experiments on retrieving the 3D positions of the chessboard grid corners and the morphology of a ceramic plate specimen are performed. The results of the experiments reveal that ignoring the bi-prism deficiency introduces an attitude error into the retrieved morphology, and that the BTL distortion can lead to pseudo out-of-plane deformation. Correcting these problems can further improve the measurement accuracy of the bi-prism-based single-lens 3D DIC system.
NASA Astrophysics Data System (ADS)
Goh, K. L.; Liew, S. C.; Hasegawa, B. H.
1997-12-01
Computer simulation results from our previous studies showed that energy-dependent systematic errors exist in the values of the attenuation coefficient synthesized using the basis material decomposition technique with acrylic and aluminum as the basis materials, especially when a high-atomic-number element (e.g., iodine from radiographic contrast media) was present in the body. The errors were reduced when a basis set was chosen from materials mimicking those found in the phantom. In the present study, we employed a basis material coefficient transformation method to correct for the energy-dependent systematic errors. In this method, the basis material coefficients were first reconstructed using the conventional basis materials (acrylic and aluminum) as the calibration basis set. The coefficients were then numerically transformed to those for a more desirable set of materials. The transformation was done at the energies of the low and high energy windows of the X-ray spectrum. With this correction method, using acrylic and an iodine-water mixture as our desired basis set, computer simulation results showed that an accuracy of better than 2% could be achieved even when iodine was present in the body at a concentration as high as 10% by mass. Simulation work has also been carried out on a more inhomogeneous 2D thorax phantom derived from the 3D MCAT phantom. The results on the accuracy of quantitation are presented here.
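To illustrate the coefficient-transformation step, the pair of basis coefficients obtained with the calibration basis (acrylic, aluminum) can be re-expressed in another basis by matching the synthesized attenuation at the two effective window energies. The mass attenuation values below are placeholders, not the values used in the study.

```python
import numpy as np

# Placeholder mass attenuation coefficients (cm^2/g); rows are the two
# effective window energies (low, high), columns are the basis materials.
M_old = np.array([[0.25, 0.56],      # acrylic, aluminum at E_low
                  [0.18, 0.23]])     # acrylic, aluminum at E_high
M_new = np.array([[0.25, 1.90],      # acrylic, iodine-water mixture at E_low
                  [0.18, 0.55]])     # acrylic, iodine-water mixture at E_high

def transform_coefficients(a_old):
    """Map (acrylic, aluminum) coefficients to (acrylic, iodine-water)
    coefficients by matching the synthesized attenuation at both energies."""
    attenuation = M_old @ a_old       # attenuation at E_low and E_high
    return np.linalg.solve(M_new, attenuation)

print(transform_coefficients(np.array([1.00, 0.05])))
```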
Martín-Rodríguez, Saúl; Loturco, Irineu; Hunter, Angus M; Rodríguez-Ruiz, David; Munguia-Izquierdo, Diego
2017-12-01
Martín-Rodríguez, S, Loturco, I, Hunter, AM, Rodríguez-Ruiz, D, and Munguia-Izquierdo, D. Reliability and measurement error of tensiomyography to assess mechanical muscle function: A systematic review. J Strength Cond Res 31(12): 3524-3536, 2017-Interest in studying mechanical skeletal muscle function through tensiomyography (TMG) has increased in recent years. This systematic review aimed (a) to report the reliability and measurement error of all TMG parameters (i.e., maximum radial displacement of the muscle belly [Dm], contraction time [Tc], delay time [Td], half-relaxation time [½ Tr], and sustained contraction time [Ts]) and (b) to provide critical reflection on how to perform accurate and appropriate measurements for informing clinicians, exercise professionals, and researchers. A comprehensive literature search was performed of the Pubmed, Scopus, Science Direct, and Cochrane databases up to July 2017. Eight studies were included in this systematic review. Meta-analysis could not be performed because of the low quality of the evidence of some studies evaluated. Overall, the review of the 9 studies involving 158 participants revealed high relative reliability (intraclass correlation coefficient [ICC]) for Dm (0.91-0.99); moderate-to-high ICC for Ts (0.80-0.96), Tc (0.70-0.98), and ½ Tr (0.77-0.93); and low-to-high ICC for Td (0.60-0.98), independently of the evaluated muscles. In addition, absolute reliability (coefficient of variation [CV]) was low for all TMG parameters except for ½ Tr (CV = >20%), whereas measurement error indexes were high for this parameter. In conclusion, this study indicates that 3 of the TMG parameters (Dm, Td, and Tc) are highly reliable, whereas ½ Tr demonstrates insufficient reliability, and thus should not be used in future studies.
Entanglement-assisted quantum feedback control
NASA Astrophysics Data System (ADS)
Yamamoto, Naoki; Mikami, Tomoaki
2017-07-01
The main advantage of quantum metrology relies on the effective use of entanglement, which indeed allows us to achieve strictly better estimation performance over the standard quantum limit. In this paper, we propose an analogous method utilizing entanglement for the purpose of feedback control. The system considered is a general linear dynamical quantum system, where the control goal can be systematically formulated as a linear quadratic Gaussian control problem based on the quantum Kalman filtering method; in this setting, an entangled input probe field is effectively used to reduce the estimation error and accordingly the control cost function. In particular, we show that, in the problem of cooling an opto-mechanical oscillator, the entanglement-assisted feedback control can lower the stationary occupation number of the oscillator below the limit attainable by the controller with a coherent probe field and furthermore beats the controller with an optimized squeezed probe field.
NASA Technical Reports Server (NTRS)
Koch, S. E.; Skillman, W. C.; Kocin, P. J.; Wetzel, P. J.; Brill, K.; Keyser, D. A.; Mccumber, M. C.
1983-01-01
The overall performance characteristics of a limited area, hydrostatic, fine (52 km) mesh, primitive equation, numerical weather prediction model are determined in anticipation of satellite data assimilations with the model. The synoptic and mesoscale predictive capabilities of version 2.0 of this model, the Mesoscale Atmospheric Simulation System (MASS 2.0), were evaluated. The two part study is based on a sample of approximately thirty 12h and 24h forecasts of atmospheric flow patterns during spring and early summer. The synoptic scale evaluation results benchmark the performance of MASS 2.0 against that of an operational, synoptic scale weather prediction model, the Limited area Fine Mesh (LFM). The large sample allows for the calculation of statistically significant measures of forecast accuracy and the determination of systematic model errors. The synoptic scale benchmark is required before unsmoothed mesoscale forecast fields can be seriously considered.
Universal dimer–dimer scattering in lattice effective field theory
DOE Office of Scientific and Technical Information (OSTI.GOV)
Elhatisari, Serdar; Katterjohn, Kris; Lee, Dean
We consider two-component fermions with short-range interactions and large scattering length. This system has universal properties that are realized in several different fields of physics. In the limit of large fermion–fermion scattering length a_ff and zero-range interaction, all properties of the system scale proportionally with a_ff. For the case with shallow bound dimers, we calculate the dimer–dimer scattering phase shifts using lattice effective field theory. We extract the universal dimer–dimer scattering length a_dd/a_ff = 0.618(30) and effective range r_dd/a_ff = -0.431(48). This result for the effective range is the first calculation with quantified and controlled systematic errors. We also benchmark our methods by computing the fermion–dimer scattering parameters and testing some predictions of conformal scaling of irrelevant operators near the unitarity limit.
Double-wedged Wollaston-type polarimeter design and integration to RTT150-TFOSC
NASA Astrophysics Data System (ADS)
Helhel, Selcuk; Kirbiyik, Halil; Bayar, Cevdet; Khamitov, Irek; Kahya, Gizem; Okuyan, Oguzhan
2016-07-01
The photometric and spectroscopic observation capabilities of the 1.5-m Russian-Turkish Telescope RTT150 have been broadened with the integration of the polarimeter presented here. The well-known double-wedged Wollaston-type dual-beam technique was chosen for its design and construction. The polarimeter was integrated into the telescope detector TFOSC and is called TFOSC-WP. Its capabilities and limitations were determined through a number of observation sets. Non-polarized and strongly polarized stars were observed to establish its limitations as well as its linearity. The instrumental intrinsic polarization was determined for the 1×5 arcmin field of view in the equatorial coordinate system, with a systematic error in polarization degree of 0.2% and in position angle of 1.9°. These capabilities and limitations are sufficient to support the telescope's present and future astrophysical programmes related to the GAIA and SRG projects.
Exploring the Large Scale Anisotropy in the Cosmic Microwave Background Radiation at 170 GHz
NASA Astrophysics Data System (ADS)
Ganga, Kenneth Matthew
1994-01-01
In this thesis, data from the Far Infra-Red Survey (FIRS), a balloon-borne experiment designed to measure the large scale anisotropy in the cosmic microwave background radiation, are analyzed. The FIRS operates in four frequency bands at 170, 280, 480, and 670 GHz, using an approximately Gaussian beam with a 3.8 deg full-width-at-half-maximum. A cross-correlation with the COBE/DMR first-year maps yields significant results, confirming the DMR detection of anisotropy in the cosmic microwave background radiation. Analysis of the FIRS data alone sets bounds on the amplitude of anisotropy under the assumption that the fluctuations are described by a Harrison-Peebles-Zel'dovich spectrum and further analysis sets limits on the index of the primordial density fluctuations for an Einstein-DeSitter universe. Galactic dust emission is discussed and limits are set on the magnitude of possible systematic errors in the measurement.
NASA Astrophysics Data System (ADS)
Baker, D. F.; Oda, T.; O'Dell, C.; Wunch, D.; Jacobson, A. R.; Yoshida, Y.; Partners, T.
2012-12-01
Measurements of column CO2 concentration from space are now being taken at a spatial and temporal density that permits regional CO2 sources and sinks to be estimated. Systematic errors in the satellite retrievals must be minimized for these estimates to be useful, however. CO2 retrievals from the TANSO instrument aboard the GOSAT satellite are compared to similar column retrievals from the Total Carbon Column Observing Network (TCCON) as the primary method of validation; while this is a powerful approach, it can only be done for overflights of 10-20 locations and has not, for example, permitted validation of GOSAT data over the oceans or deserts. Here we present a complementary approach that uses a global atmospheric transport model and flux inversion method to compare different types of CO2 measurements (GOSAT, TCCON, surface in situ, and aircraft) at different locations, at the cost of added transport error. The measurements from any single type of data are used in a variational carbon data assimilation method to optimize surface CO2 fluxes (with a CarbonTracker prior), then the corresponding optimized CO2 concentration fields are compared to those data types not inverted, using the appropriate vertical weighting. With this approach, we find that GOSAT column CO2 retrievals from the ACOS project (version 2.9 and 2.10) contain systematic errors that make the modeled fit to the independent data worse. However, we find that the differences between the GOSAT data and our prior model are correlated with certain physical variables (aerosol amount, surface albedo, correction to total column mass) that are likely driving errors in the retrievals, independent of CO2 concentration. If we correct the GOSAT data using a fit to these variables, then we find the GOSAT data to improve the fit to independent CO2 data, which suggests that the useful information in the measurements outweighs the negative impact of the remaining systematic errors. With this assurance, we compare the flux estimates given by assimilating the ACOS GOSAT retrievals to similar ones given by NIES GOSAT column retrievals, bias-corrected in a similar manner. Finally, we have found systematic differences on the order of a half ppm between column CO2 integrals from 18 TCCON sites and those given by assimilating NOAA in situ data (both surface and aircraft profile) in this approach. We assess how these differences change in switching to a newer version of the TCCON retrieval software.
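The bias-correction step described above amounts to regressing the GOSAT-minus-model differences against retrieval variables thought to drive systematic errors and subtracting the fitted component. The sketch below is a hedged illustration of that idea; the function, array names, and regressors are assumptions, not the actual ACOS/NIES processing.

```python
# Hypothetical bias-correction sketch: fit XCO2 residuals to physical
# retrieval variables (aerosol amount, surface albedo, column-mass correction)
# and remove the fitted systematic component.
import numpy as np

def bias_correct(xco2_gosat, xco2_model, aerosol, albedo, dp):
    """Return bias-corrected retrievals using an ordinary least-squares fit."""
    resid = xco2_gosat - xco2_model                     # apparent systematic error
    X = np.column_stack([np.ones_like(aerosol), aerosol, albedo, dp])
    coef, *_ = np.linalg.lstsq(X, resid, rcond=None)    # fit residuals to drivers
    return xco2_gosat - X @ coef                        # subtract the fitted bias
```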
Barroso, Teresa G; Martins, Rui C; Fernandes, Elisabete; Cardoso, Susana; Rivas, José; Freitas, Paulo P
2018-02-15
Tuberculosis is one of the major public health concerns. This highly contagious disease affects more than 10.4 million people, being a leading cause of morbidity by infection. Tuberculosis is diagnosed at the point-of-care by the Ziehl-Neelsen sputum smear microscopy test. Ziehl-Neelsen is laborious, prone to human error and infection risk, with a limit of detection of 10^4 cells/mL. In resource-poor nations, a more practical test, with a lower detection limit, is paramount. This work uses a magnetoresistive biosensor to detect BCG bacteria for tuberculosis diagnosis. Herein we report: i) the nanoparticle assembly method and its specificity for tuberculosis detection; ii) demonstration of proportionality between BCG cell concentration and magnetoresistive voltage signal; iii) application of multiplicative signal correction for removal of systematic effects; iv) investigation of calibration effectiveness using chemometrics methods; and v) comparison with state-of-the-art point-of-care tuberculosis biosensors. Results show a clear correspondence between voltage signal and cell concentration. Multiplicative signal correction removes baseline shifts within and between biochip sensors, allowing accurate and precise voltage signals between different biochips. The corrected signal was used for multivariate regression models, which significantly decreased the calibration standard error from 0.50 to 0.03 log10(cells/mL). Results show that Ziehl-Neelsen detection limits and below are achievable with the magnetoresistive biochip when pre-processing and chemometrics are used. Copyright © 2017 Elsevier B.V. All rights reserved.
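Multiplicative signal correction, as used above to remove baseline shifts between sensors, regresses each raw signal on a common reference and then removes the fitted additive and multiplicative drift. A minimal sketch with synthetic data (not the biochip pipeline itself) is shown below.

```python
# Hedged MSC sketch: regress each signal on the mean signal, then rescale.
import numpy as np

def msc(signals):
    """signals: (n_samples, n_points) array; returns MSC-corrected signals."""
    ref = signals.mean(axis=0)                  # reference signal
    corrected = np.empty_like(signals)
    for i, s in enumerate(signals):
        b, a = np.polyfit(ref, s, deg=1)        # s ~ a + b * ref
        corrected[i] = (s - a) / b              # undo additive and multiplicative drift
    return corrected
```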
Determination of the number of ψ' events at BESIII
NASA Astrophysics Data System (ADS)
Ablikim, M.; Achasov, M. N.; Albayrak, O.; et al. (BESIII Collaboration)
2013-06-01
The number of ψ' events accumulated by the BESIII experiment from March 3 through April 14, 2009, is determined by counting inclusive hadronic events. The result is 106.41×(1.00±0.81%)×10^6. The uncertainty is dominated by systematic effects; the statistical error is negligible.
Improving Student Results in the Crystal Violet Chemical Kinetics Experiment
ERIC Educational Resources Information Center
Kazmierczak, Nathanael; Vander Griend, Douglas A.
2017-01-01
Despite widespread use in general chemistry laboratories, the crystal violet chemical kinetics experiment frequently suffers from erroneous student results. Student calculations for the reaction order in hydroxide often contain large asymmetric errors, pointing to the presence of systematic error. Through a combination of "in silico"…
Theory of Test Translation Error
ERIC Educational Resources Information Center
Solano-Flores, Guillermo; Backhoff, Eduardo; Contreras-Nino, Luis Angel
2009-01-01
In this article, we present a theory of test translation whose intent is to provide the conceptual foundation for effective, systematic work in the process of test translation and test translation review. According to the theory, translation error is multidimensional; it is not simply the consequence of defective translation but an inevitable fact…
Error sources in passive and active microwave satellite soil moisture over Australia
USDA-ARS?s Scientific Manuscript database
Development of a long-term climate record of soil moisture (SM) involves combining historic and present satellite-retrieved SM data sets. This in turn requires a consistent characterization and deep understanding of the systematic differences and errors in the individual data sets, which vary due to...
Weighing Rocky Exoplanets with Improved Radial Velocimetry
NASA Astrophysics Data System (ADS)
Xuesong Wang, Sharon; Wright, Jason; California Planet Survey Consortium
2016-01-01
The synergy between Kepler and ground-based radial velocity (RV) surveys has produced numerous discoveries of small and rocky exoplanets, opening the age of Earth analogs. However, most (29/33) of the RV-detected exoplanets smaller than 3 Earth radii do not have their masses constrained to better than 20%, limited by the current RV precision (1-2 m/s). Our work improves the RV precision of the Keck telescope, which is responsible for most of the mass measurements of small Kepler exoplanets. We have discovered and verified, for the first time, two of the dominant terms in Keck's RV systematic error budget: modeling errors (mostly in deconvolution) and telluric contamination. These two terms contribute 1 m/s and 0.6 m/s, respectively, to the RV error budget (RMS added in quadrature), and they create spurious signals at periods of one sidereal year and its harmonics with amplitudes of 0.2-1 m/s. Left untreated, these errors can mimic the signals of Earth-like or super-Earth planets in the habitable zone. Removing these errors will bring better precision to ten years' worth of Keck data and better constraints on the masses and compositions of small Kepler planets. As more precise RV instruments come online, we need advanced data analysis tools to overcome issues like these in order to detect an Earth twin (RV amplitude 8 cm/s). We are developing a new, open-source RV data analysis tool in Python, which uses Bayesian MCMC and Gaussian processes, to fully exploit the hardware improvements brought by new instruments like MINERVA and NASA's WIYN/EPDS.
NASA Technical Reports Server (NTRS)
Beck, S. M.
1975-01-01
A mobile self-contained Faraday cup system for beam current measurements of nominal 600 MeV protons was designed, constructed, and used at the NASA Space Radiation Effects Laboratory. The cup is of reentrant design with a length of 106.7 cm and an outside diameter of 20.32 cm. The inner diameter is 15.24 cm and the base thickness is 30.48 cm. The primary absorber is commercially available lead hermetically sealed in a 0.32-cm-thick copper jacket. Several possible systematic errors in using the cup are evaluated. The largest source of error arises from high-energy electrons which are ejected from the entrance window and enter the cup. A total systematic error of -0.83 percent is calculated as the decrease from the true current value. From data obtained in calibrating helium-filled ion chambers with the Faraday cup, the mean energy required to produce one ion pair in helium is found to be 30.76 ± 0.95 eV for nominal 600 MeV protons. This value agrees well, within experimental error, with reported values of 29.9 eV and 30.2 eV.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stone, Daithi A.; Hansen, Gerrit
Despite being a well-established research field, the detection and attribution of observed climate change to anthropogenic forcing is not yet provided as a climate service. One reason for this is the lack of a methodology for performing tailored detection and attribution assessments on a rapid time scale. Here we develop such an approach, based on the translation of quantitative analysis into the “confidence” language employed in recent Assessment Reports of the Intergovernmental Panel on Climate Change. While its systematic nature necessarily ignores some nuances examined in detailed expert assessments, the approach nevertheless goes beyond most detection and attribution studies in considering contributors to building confidence such as errors in observational data products arising from sparse monitoring networks. When compared against recent expert assessments, the results of this approach closely match those of the existing assessments. Where there are small discrepancies, these variously reflect ambiguities in the details of what is being assessed, reveal nuances or limitations of the expert assessments, or indicate limitations of the accuracy of the sort of systematic approach employed here. Deployment of the method on 116 regional assessments of recent temperature and precipitation changes indicates that existing rules of thumb concerning the detectability of climate change ignore the full range of sources of uncertainty, most particularly the importance of adequate observational monitoring.
Time-resolved dosimetry using a pinpoint ionization chamber as quality assurance for IMRT and VMAT
DOE Office of Scientific and Technical Information (OSTI.GOV)
Louwe, Robert J. W., E-mail: rob.louwe@ccdbh.org.nz; Satherley, Thomas; Day, Rebecca A.
Purpose: To develop a method to verify the dose delivery in relation to the individual control points of intensity modulated radiotherapy (IMRT) and volumetric modulated arc therapy (VMAT) using an ionization chamber. In addition to more effective problem solving during patient-specific quality assurance (QA), the aim is to eventually map out the limitations in the treatment chain and enable a targeted improvement of the treatment technique in an efficient way. Methods: Pretreatment verification was carried out for 255 treatment plans that included a broad range of treatment indications in two departments using the equipment of different vendors. In-house developed software was used to enable calculation of the dose delivery for the individual beamlets in the treatment planning system (TPS), for data acquisition, and for analysis of the data. The observed deviations were related to various delivery and measurement parameters such as gantry angle, field size, and the position of the detector with respect to the field edge to distinguish between error sources. Results: The average deviation of the integral fraction dose during pretreatment verification of the planning target volume dose was −2.1% ± 2.2% (1 SD), −1.7% ± 1.7% (1 SD), and 0.0% ± 1.3% (1 SD) for IMRT at the Radboud University Medical Center (RUMC), VMAT (RUMC), and VMAT at the Wellington Blood and Cancer Centre, respectively. Verification of the dose to organs at risk gave very similar results but was generally subject to a larger measurement uncertainty due to the position of the detector at a high dose gradient. The observed deviations could be related to limitations of the TPS beam models, attenuation of the treatment couch, as well as measurement errors. The apparent systematic error of about −2% in the average deviation of the integral fraction dose in the RUMC results could be explained by the limitations of the TPS beam model in the calculation of the beam penumbra. Conclusions: This study showed that time-resolved dosimetry using an ionization chamber is feasible and can be largely automated, which limits the required additional time compared to integrated dose measurements. It provides a unique QA method which enables identification and quantification of the contribution of various error sources during IMRT and VMAT delivery.
MERLIN: a Franco-German LIDAR space mission for atmospheric methane
NASA Astrophysics Data System (ADS)
Bousquet, P.; Ehret, G.; Pierangelo, C.; Marshall, J.; Bacour, C.; Chevallier, F.; Gibert, F.; Armante, R.; Crevoisier, C. D.; Edouart, D.; Esteve, F.; Julien, E.; Kiemle, C.; Alpers, M.; Millet, B.
2017-12-01
The Methane Remote Sensing Lidar Mission (MERLIN), currently in phase C, is a joint cooperation between France and Germany on the development, launch and operation of a space LIDAR dedicated to the retrieval of total weighted methane (CH4) atmospheric columns. Atmospheric methane is the second most potent anthropogenic greenhouse gas, contributing 20% to climate radiative forcing but also playing an important role in atmospheric chemistry as a precursor of tropospheric ozone and lower-stratospheric water vapour. Its short lifetime (~9 years) and the nature and variety of its anthropogenic sources also offer interesting mitigation options with regard to the 2 °C objective of the Paris Agreement. For the first time, measurements of atmospheric composition will be performed from space by an IPDA (Integrated Path Differential Absorption) LIDAR (Light Detection And Ranging), with a precision (target ±27 ppb for a 50 km aggregation along the trace) and accuracy (target <3.7 ppb at 68%) sufficient to significantly reduce the uncertainties on methane emissions. The very low systematic error target is particularly ambitious compared to current passive methane space missions. It is achievable because of the differential active measurement principle of MERLIN, which guarantees almost no contamination by aerosols or water vapour cross-sensitivity. As an active mission, MERLIN will deliver global methane weighted columns (XCH4) for all seasons and all latitudes, day and night. Here, we recall the MERLIN objectives and mission characteristics. We also propose an end-to-end error analysis, from the causes of random and systematic errors of the instrument, of the platform and of the data treatment, to the error on methane emissions. To do so, we propose an OSSE (observing system simulation experiment) analysis to estimate the uncertainty reduction on methane emissions brought by MERLIN XCH4. The originality of our inversion system is to transfer both random and systematic errors from the observation space to the flux space, thus providing more realistic error reductions than are usually provided in OSSEs that use only the random part of the errors. Uncertainty reductions are presented using two different atmospheric transport models, TM3 and LMDZ, and compared with the error reduction achieved with the GOSAT passive mission.
NASA Astrophysics Data System (ADS)
Lozano, A. I.; Oller, J. C.; Krupa, K.; Ferreira da Silva, F.; Limão-Vieira, P.; Blanco, F.; Muñoz, A.; Colmenares, R.; García, G.
2018-06-01
A novel experimental setup has been implemented to provide accurate electron scattering cross sections for molecules at low and intermediate impact energies (1-300 eV) by measuring the attenuation of a magnetically confined linear electron beam by a molecular target. High electron energy resolution is achieved through confinement in a magnetic gas trap where electrons are cooled by successive collisions with N2. Additionally, we develop and present a method to correct systematic errors arising from energy and angular resolution limitations. The accuracy of the entire measurement procedure is validated by comparing the N2 total scattering cross section in the considered energy range with benchmark values available in the literature.
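The measurement principle above, extracting a total cross section from the attenuation of a beam passing through a gas target, reduces to the Beer-Lambert relation. The sketch below illustrates that relation with placeholder numbers; none of the values come from the experiment.

```python
# Minimal attenuation sketch: I = I0 * exp(-n * sigma * L).
import numpy as np

def total_cross_section(I0, I, number_density, path_length):
    """Solve the Beer-Lambert law for sigma = ln(I0/I) / (n * L)."""
    return np.log(I0 / I) / (number_density * path_length)

sigma = total_cross_section(I0=1.0e5, I=6.0e4,          # transmitted counts (assumed)
                            number_density=3.3e19,       # molecules per m^3 (assumed)
                            path_length=0.14)            # metres (assumed)
print(f"sigma = {sigma:.3e} m^2")
```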
NASA Technical Reports Server (NTRS)
Lites, B. W.; Skumanich, A.
1985-01-01
A method is presented for the recovery of the vector magnetic field and thermodynamic parameters from polarization measurements of photospheric line profiles obtained with filtergraphs. The method includes magneto-optic effects and may be applied to data sampled at arbitrary wavelengths within the line profile. The accuracy of this method is explored through inversion of synthetic Stokes profiles subjected to varying levels of random noise, instrumental wavelength resolution, and line profile sampling. The level of error introduced by the systematic effect of profile sampling over a finite fraction of the 5-minute oscillation cycle is also investigated. The results presented here are intended to guide instrument design and observational procedure.
General ultrafast pulse measurement using the cross-correlation single-shot sonogram technique.
Reid, Derryck T; Garduno-Mejia, Jesus
2004-03-15
The cross-correlation single-shot sonogram technique offers exact pulse measurement and real-time pulse monitoring via an intuitive time-frequency trace whose shape and orientation directly indicate the spectral chirp of an ultrashort laser pulse. We demonstrate an algorithm that solves a fundamental limitation of the cross-correlation sonogram method, namely, that the time-gating operation is implemented using a replica of the measured pulse rather than the ideal delta-function-like pulse. Using a modified principal-components generalized projections algorithm, we experimentally show accurate pulse retrieval of an asymmetric double pulse, a case that is prone to systematic error when one is using the original sonogram retrieval algorithm.
Human Error and the International Space Station: Challenges and Triumphs in Science Operations
NASA Technical Reports Server (NTRS)
Harris, Samantha S.; Simpson, Beau C.
2016-01-01
Any system with a human component is inherently risky. Studies in human factors and psychology have repeatedly shown that human operators will inevitably make errors, regardless of how well they are trained. Onboard the International Space Station (ISS) where crew time is arguably the most valuable resource, errors by the crew or ground operators can be costly to critical science objectives. Operations experts at the ISS Payload Operations Integration Center (POIC), located at NASA's Marshall Space Flight Center in Huntsville, Alabama, have learned that from payload concept development through execution, there are countless opportunities to introduce errors that can potentially result in costly losses of crew time and science. To effectively address this challenge, we must approach the design, testing, and operation processes with two specific goals in mind. First, a systematic approach to error and human centered design methodology should be implemented to minimize opportunities for user error. Second, we must assume that human errors will be made and enable rapid identification and recoverability when they occur. While a systematic approach and human centered development process can go a long way toward eliminating error, the complete exclusion of operator error is not a reasonable expectation. The ISS environment in particular poses challenging conditions, especially for flight controllers and astronauts. Operating a scientific laboratory 250 miles above the Earth is a complicated and dangerous task with high stakes and a steep learning curve. While human error is a reality that may never be fully eliminated, smart implementation of carefully chosen tools and techniques can go a long way toward minimizing risk and increasing the efficiency of NASA's space science operations.
Global Warming Estimation from MSU: Correction for Drift and Calibration Errors
NASA Technical Reports Server (NTRS)
Prabhakara, C.; Iacovazzi, R., Jr.; Yoo, J.-M.
2000-01-01
Microwave Sounding Unit (MSU) radiometer observations in Ch 2 (53.74 GHz), made in the nadir direction from sequential, sun-synchronous, polar-orbiting NOAA morning satellites (NOAA 6, 10 and 12, which have approximately 7am/7pm orbital geometry) and afternoon satellites (NOAA 7, 9, 11 and 14, which have approximately 2am/2pm orbital geometry), are analyzed in this study to derive the global temperature trend from 1980 to 1998. In order to remove the discontinuities between the data of successive satellites and to obtain a continuous time series, we first used the shortest possible time record of each satellite. In this way we get a preliminary estimate of the global temperature trend of 0.21 K/decade. However, this estimate is affected by systematic time-dependent errors. One such error is the instrument calibration error e_o. This error can be inferred whenever there are overlapping measurements made by two satellites over an extended period of time. From the available successive satellite data we have taken the longest possible time record of each satellite to form the time series during the period 1980 to 1998 and to infer this error e_o. We find e_o can decrease the global temperature trend by approximately 0.07 K/decade. In addition there are systematic time-dependent errors e_d and e_c present in the data that are introduced by the drift in the satellite orbital geometry. e_d arises from the diurnal cycle in temperature and e_c is the drift-related change in the calibration of the MSU. In order to analyze the nature of these drift-related errors the multi-satellite Ch 2 data set is partitioned into am and pm subsets to create two independent time series. The error e_d can be assessed in the am and pm data of Ch 2 on land and can be eliminated. Observations made in MSU Ch 1 (50.3 GHz) support this approach. The error e_c is apparent only in the difference between the pm and am observations of Ch 2 over the ocean. We have followed two different paths to assess the impact of the error e_c on the global temperature trend. In one path the entire error e_c is placed in the am data while in the other it is placed in the pm data. The global temperature trend is increased or decreased by approximately 0.03 K/decade depending upon this placement. Taking into account all random errors and systematic errors, our analysis of MSU observations leads us to conclude that a conservative estimate of the global warming is 0.11 ± 0.04 K/decade during 1980 to 1998.
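One hedged reading of how the quoted corrections combine (an illustration of the arithmetic, not a reproduction of the authors' full error budget): the preliminary trend of 0.21 K/decade is reduced by roughly 0.07 K/decade for the calibration error e_o, and shifted by roughly ±0.03 K/decade depending on where e_c is placed,

$$0.21 - 0.07 - 0.03 \approx 0.11\ \text{K/decade}, \qquad 0.21 - 0.07 + 0.03 \approx 0.17\ \text{K/decade},$$

with the conservative lower value quoted as 0.11 ± 0.04 K/decade.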
Qiao-Grider, Ying; Hung, Li-Fang; Kee, Chea-Su; Ramamirtham, Ramkumar; Smith, Earl L
2010-08-23
We analyzed the contribution of individual ocular components to vision-induced ametropias in 210 rhesus monkeys. The primary contribution to refractive-error development came from vitreous chamber depth; a minor contribution from corneal power was also detected. However, there was no systematic relationship between refractive error and anterior chamber depth or between refractive error and any crystalline lens parameter. Our results are in good agreement with previous studies in humans, suggesting that the refractive errors commonly observed in humans are created by vision-dependent mechanisms that are similar to those operating in monkeys. This concordance emphasizes the applicability of rhesus monkeys in refractive-error studies. Copyright 2010 Elsevier Ltd. All rights reserved.
Density Imaging of Puy de Dôme Volcano by Joint Inversion of Muographic and Gravimetric Data
NASA Astrophysics Data System (ADS)
Barnoud, A.; Niess, V.; Le Ménédeu, E.; Cayol, V.; Carloganu, C.
2016-12-01
We aim to jointly invert high-density muographic and gravimetric data to robustly infer the density structure of volcanoes. We use the puy de Dôme volcano in France as a proof of principle since high-quality data sets are available for both muography and gravimetry. Gravimetric inversion and muography are independent methods that provide an estimation of density distributions. On the one hand, gravimetry allows reconstruction of 3D density variations by inversion. This process is well known to be ill-posed and intrinsically non-unique, thus it requires additional constraints (e.g., an a priori density model). On the other hand, muography provides a direct measurement of 2D mean densities (radiographic images) from the detection of high-energy atmospheric muons crossing the volcanic edifice. 3D density distributions can be computed from several radiographic images, but the number of images is generally limited by field constraints and by the limited number of available telescopes. Thus, muon tomography is also ill-posed in practice. In the case of the puy de Dôme volcano, the density structures inferred from gravimetric data (Portal et al. 2016) and from muographic data (Le Ménédeu et al. 2016) show a qualitative agreement but cannot be compared quantitatively. Because each method has different intrinsic resolutions due to the physics (Jourde et al., 2015), joint inversion is expected to improve the robustness of the inversion. Such a joint inversion has already been applied in a volcanic context (Nishiyama et al., 2013). Volcano muography requires state-of-the-art, high-resolution and large-scale muon detectors (Ambrosino et al., 2015). Instrumental uncertainties and systematic errors may constitute an important limitation for muography and should not be overlooked. For instance, low-energy muons are detected together with ballistic high-energy muons, decreasing the measured value of the mean density close to the topography. Here, we jointly invert the gravimetric and muographic data to characterize the 3D density distribution of the puy de Dôme volcano. We attempt to precisely identify and estimate the different uncertainties and systematic errors so that they can be accounted for in the inversion scheme.
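A joint inversion of the kind described above can be sketched, under a linear-forward-model assumption, as a stacked least-squares problem with Tikhonov damping. The operators, weighting, and damping values below are illustrative placeholders, not the authors' inversion scheme.

```python
# Hedged joint-inversion sketch: combine gravimetric and muographic forward
# operators into one damped least-squares problem for a discretized density model.
import numpy as np

def joint_inversion(G_grav, d_grav, G_muon, d_muon, weight=1.0, damping=1e-2):
    """Minimize ||G_grav m - d_grav||^2 + w ||G_muon m - d_muon||^2 + a ||m||^2."""
    G = np.vstack([G_grav, np.sqrt(weight) * G_muon])
    d = np.concatenate([d_grav, np.sqrt(weight) * d_muon])
    n = G.shape[1]
    A = G.T @ G + damping * np.eye(n)     # normal equations with Tikhonov damping
    return np.linalg.solve(A, G.T @ d)    # recovered density model
```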
Black hole spectroscopy: Systematic errors and ringdown energy estimates
NASA Astrophysics Data System (ADS)
Baibhav, Vishal; Berti, Emanuele; Cardoso, Vitor; Khanna, Gaurav
2018-02-01
The relaxation of a distorted black hole to its final state provides important tests of general relativity within the reach of current and upcoming gravitational wave facilities. In black hole perturbation theory, this phase consists of a simple linear superposition of exponentially damped sinusoids (the quasinormal modes) and of a power-law tail. How many quasinormal modes are necessary to describe waveforms with a prescribed precision? What error do we incur by only including quasinormal modes, and not tails? What other systematic effects are present in current state-of-the-art numerical waveforms? These issues, which are basic to testing fundamental physics with distorted black holes, have hardly been addressed in the literature. We use numerical relativity waveforms and accurate evolutions within black hole perturbation theory to provide some answers. We show that (i) a determination of the fundamental l =m =2 quasinormal frequencies and damping times to within 1% or better requires the inclusion of at least the first overtone, and preferably of the first two or three overtones; (ii) a determination of the black hole mass and spin with precision better than 1% requires the inclusion of at least two quasinormal modes for any given angular harmonic mode (ℓ , m ). We also improve on previous estimates and fits for the ringdown energy radiated in the various multipoles. These results are important to quantify theoretical (as opposed to instrumental) limits in parameter estimation accuracy and tests of general relativity allowed by ringdown measurements with high signal-to-noise ratio gravitational wave detectors.
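The ringdown model discussed above is a superposition of exponentially damped sinusoids. The sketch below shows that waveform model with placeholder amplitudes, frequencies, and damping times (not fitted values), omitting the power-law tail as a pure quasinormal-mode fit would.

```python
# Minimal quasinormal-mode ringdown sketch: sum of damped sinusoids.
import numpy as np

def ringdown(t, modes):
    """modes: list of (amplitude, frequency_hz, damping_time_s, phase)."""
    h = np.zeros_like(t)
    for A, f, tau, phi in modes:
        h += A * np.exp(-t / tau) * np.cos(2 * np.pi * f * t + phi)
    return h   # power-law tail deliberately omitted, as in a pure QNM fit

t = np.linspace(0.0, 0.05, 2000)                     # seconds
h = ringdown(t, [(1.0, 250.0, 4e-3, 0.0),            # fundamental mode (assumed values)
                 (0.3, 240.0, 1.5e-3, 1.0)])         # first overtone (assumed values)
```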
Financial forecasts accuracy in Brazil's social security system.
Silva, Carlos Patrick Alves da; Puty, Claudio Alberto Castelo Branco; Silva, Marcelino Silva da; Carvalho, Solon Venâncio de; Francês, Carlos Renato Lisboa
2017-01-01
Long-term social security statistical forecasts produced and disseminated by the Brazilian government aim to provide accurate results that would serve as background information for optimal policy decisions. These forecasts are being used as support for the government's proposed pension reform that plans to radically change the Brazilian Constitution insofar as Social Security is concerned. However, the reliability of official results is uncertain since no systematic evaluation of these forecasts has ever been published by the Brazilian government or anyone else. This paper aims to present a study of the accuracy and methodology of the instruments used by the Brazilian government to carry out long-term actuarial forecasts. We base our research on an empirical and probabilistic analysis of the official models. Our empirical analysis shows that the long-term Social Security forecasts are systematically biased in the short term and have significant errors that render them meaningless in the long run. Moreover, the low level of transparency in the methods impaired the replication of results published by the Brazilian Government and the use of outdated data compromises forecast results. In the theoretical analysis, based on a mathematical modeling approach, we discuss the complexity and limitations of the macroeconomic forecast through the computation of confidence intervals. We demonstrate the problems related to error measurement inherent to any forecasting process. We then extend this exercise to the computation of confidence intervals for Social Security forecasts. This mathematical exercise raises questions about the degree of reliability of the Social Security forecasts.
Emken, Jeremy L; Benitez, Raul; Reinkensmeyer, David J
2007-03-28
A prevailing paradigm of physical rehabilitation following neurologic injury is to "assist-as-needed" in completing desired movements. Several research groups are attempting to automate this principle with robotic movement training devices and patient cooperative algorithms that encourage voluntary participation. These attempts are currently not based on computational models of motor learning. Here we assume that motor recovery from a neurologic injury can be modelled as a process of learning a novel sensory motor transformation, which allows us to study a simplified experimental protocol amenable to mathematical description. Specifically, we use a robotic force field paradigm to impose a virtual impairment on the left leg of unimpaired subjects walking on a treadmill. We then derive an "assist-as-needed" robotic training algorithm to help subjects overcome the virtual impairment and walk normally. The problem is posed as an optimization of performance error and robotic assistance. The optimal robotic movement trainer becomes an error-based controller with a forgetting factor that bounds kinematic errors while systematically reducing its assistance when those errors are small. As humans have a natural range of movement variability, we introduce an error weighting function that causes the robotic trainer to disregard this variability. We experimentally validated the controller with ten unimpaired subjects by demonstrating how it helped the subjects learn the novel sensory motor transformation necessary to counteract the virtual impairment, while also preventing them from experiencing large kinematic errors. The addition of the error weighting function allowed the robot assistance to fade to zero even though the subjects' movements were variable. We also show that in order to assist-as-needed, the robot must relax its assistance at a rate faster than that of the learning human. The assist-as-needed algorithm proposed here can limit error during the learning of a dynamic motor task. The algorithm encourages learning by decreasing its assistance as a function of the ongoing progression of movement error. This type of algorithm is well suited for helping people learn dynamic tasks for which large kinematic errors are dangerous or discouraging, and thus may prove useful for robot-assisted movement training of walking or reaching following neurologic injury.
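The controller described above combines an error-based learning term, a forgetting factor that decays assistance when errors are small, and an error weighting (dead-band) so natural movement variability does not sustain assistance. The sketch below is a hedged trial-by-trial illustration; the gains and dead-band width are assumptions, not the paper's identified parameters.

```python
# Hypothetical assist-as-needed update: forgetting factor plus weighted error term.
def update_assistance(u, error, gain=0.5, forgetting=0.8, deadband=0.01):
    weight = 0.0 if abs(error) < deadband else 1.0   # ignore normal variability
    return forgetting * u + gain * weight * error    # assistance decays when errors are small

u = 0.0
for e in [0.05, 0.04, 0.02, 0.005, 0.004]:           # hypothetical tracking errors
    u = update_assistance(u, e)
    print(f"error {e:.3f} -> assistance {u:.4f}")
```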
Negri, Lucas; Nied, Ademir; Kalinowski, Hypolito; Paterno, Aleksander
2011-01-01
This paper presents a benchmark for peak detection algorithms employed in fiber Bragg grating spectrometric interrogation systems. The accuracy, precision, and computational performance of currently used algorithms and those of a new proposed artificial neural network algorithm are compared. Centroid and Gaussian fitting algorithms are shown to have the highest precision but produce systematic errors that depend on the FBG refractive index modulation profile. The proposed neural network displays relatively good precision with reduced systematic errors and improved computational performance when compared to other networks. Additionally, suitable algorithms may be chosen with the general guidelines presented. PMID:22163806
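Of the algorithms benchmarked above, the centroid method is the simplest: the Bragg wavelength is estimated as the power-weighted mean wavelength over the peak region. The sketch below illustrates that step on a synthetic, idealized Gaussian reflection peak; it is not the paper's benchmarking code.

```python
# Hedged centroid peak-detection sketch for an FBG reflection spectrum.
import numpy as np

def centroid_peak(wavelength_nm, power, threshold_fraction=0.5):
    mask = power >= threshold_fraction * power.max()   # restrict to the peak region
    w, p = wavelength_nm[mask], power[mask]
    return np.sum(w * p) / np.sum(p)                    # power-weighted centroid

wl = np.linspace(1549.0, 1551.0, 501)
spectrum = np.exp(-((wl - 1550.08) / 0.12) ** 2)        # synthetic Gaussian FBG peak
print(f"estimated Bragg wavelength: {centroid_peak(wl, spectrum):.3f} nm")
```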
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nelson, P.; Seth, D.L.; Ray, A.K.
A detailed and systematic study of the nature of the discretization error associated with the upwind finite-difference method is presented. A basic model problem has been identified and based upon the results for this problem, a basic hypothesis regarding the accuracy of the computational solution of the Spencer-Lewis equation is formulated. The basic hypothesis is then tested under various systematic single complexifications of the basic model problem. The results of these tests provide the framework of the refined hypothesis presented in the concluding comments. 27 refs., 3 figs., 14 tabs.
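For readers unfamiliar with the upwind finite-difference method whose discretization error is studied above, the sketch below applies the first-order upwind scheme to a simple scalar advection model problem (not the Spencer-Lewis equation itself); grid sizes and the initial profile are illustrative only.

```python
# First-order upwind scheme for u_t + a u_x = 0 with a > 0, periodic boundaries.
import numpy as np

def upwind_advection(u0, a, dx, dt, n_steps):
    u = u0.copy()
    c = a * dt / dx                              # Courant number; need c <= 1 for stability
    for _ in range(n_steps):
        u = u - c * (u - np.roll(u, 1))          # backward (upwind) difference in x
    return u

x = np.linspace(0.0, 1.0, 200, endpoint=False)
u_final = upwind_advection(np.exp(-100 * (x - 0.3) ** 2), a=1.0,
                           dx=x[1] - x[0], dt=0.5 * (x[1] - x[0]), n_steps=400)
```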
Modeling longitudinal data, I: principles of multivariate analysis.
Ravani, Pietro; Barrett, Brendan; Parfrey, Patrick
2009-01-01
Statistical models are used to study the relationship between exposure and disease while accounting for the potential impact of other factors on outcomes. This adjustment is useful to obtain unbiased estimates of true effects or to predict future outcomes. Statistical models include a systematic component and an error component. The systematic component explains the variability of the response variable as a function of the predictors and is summarized in the effect estimates (model coefficients). The error component of the model represents the variability in the data unexplained by the model and is used to build measures of precision around the point estimates (confidence intervals).
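A minimal way to write the decomposition just described, in generic regression notation (not taken from the chapter itself), is

$$y_i = \underbrace{\beta_0 + \beta_1 x_{i1} + \dots + \beta_p x_{ip}}_{\text{systematic component}} + \underbrace{\varepsilon_i}_{\text{error component}}, \qquad \varepsilon_i \sim N(0, \sigma^2),$$

where the coefficients summarize the effect estimates and the variance of the error term drives the confidence intervals.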
Bundle Block Adjustment of Airborne Three-Line Array Imagery Based on Rotation Angles
Zhang, Yongjun; Zheng, Maoteng; Huang, Xu; Xiong, Jinxin
2014-01-01
In the midst of the rapid developments in electronic instruments and remote sensing technologies, airborne three-line array sensors and their applications are being widely promoted and plentiful research related to data processing and high precision geo-referencing technologies is under way. The exterior orientation parameters (EOPs), which are measured by the integrated positioning and orientation system (POS) of airborne three-line sensors, however, have inevitable systematic errors, so the level of precision of direct geo-referencing is not sufficiently accurate for surveying and mapping applications. Consequently, a few ground control points are necessary to refine the exterior orientation parameters, and this paper will discuss bundle block adjustment models based on the systematic error compensation and the orientation image, considering the principle of an image sensor and the characteristics of the integrated POS. Unlike the models available in the literature, which mainly use a quaternion to represent the rotation matrix of exterior orientation, three rotation angles are directly used in order to effectively model and eliminate the systematic errors of the POS observations. Very good experimental results have been achieved with several real datasets that verify the correctness and effectiveness of the proposed adjustment models. PMID:24811075
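The modeling choice highlighted above is to build the exterior-orientation rotation matrix directly from three rotation angles rather than from a quaternion, so angle-dependent systematic error terms can be modeled explicitly. The sketch below shows one such construction under an assumed axis ordering; it is not the paper's adjustment model.

```python
# Hedged sketch: rotation matrix composed from three angles (phi, omega, kappa).
import numpy as np

def rotation_from_angles(phi, omega, kappa):
    """Compose R = Rz(kappa) @ Ry(phi) @ Rx(omega); the ordering is an assumption."""
    cp, sp = np.cos(phi), np.sin(phi)
    co, so = np.cos(omega), np.sin(omega)
    ck, sk = np.cos(kappa), np.sin(kappa)
    Rx = np.array([[1, 0, 0], [0, co, -so], [0, so, co]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    Rz = np.array([[ck, -sk, 0], [sk, ck, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx
```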
Black hole mass measurement using molecular gas kinematics: what ALMA can do
NASA Astrophysics Data System (ADS)
Yoon, Ilsang
2017-04-01
We study the limits of the spatial and velocity resolution of radio interferometry for inferring the mass of supermassive black holes (SMBHs) in galactic centres using the kinematics of circum-nuclear molecular gas, by considering the shapes of the galaxy surface brightness profile, signal-to-noise ratios (S/Ns) of the position-velocity diagram (PVD) and systematic errors due to the spatial and velocity structure of the molecular gas. We argue that for fixed galaxy stellar mass and SMBH mass, the spatial and velocity scales that need to be resolved increase and decrease, respectively, with decreasing Sérsic index of the galaxy surface brightness profile. We validate our arguments using simulated PVDs for varying beam size and velocity channel width. Furthermore, we consider the systematic effects on the inference of the SMBH mass by simulating PVDs including the spatial and velocity structure of the molecular gas, which demonstrates that their impact is not significant for a PVD with good S/N unless the spatial and velocity scales associated with the systematic effects are comparable to or larger than the angular resolution and velocity channel width of the PVD from pure circular motion. Also, we caution that a bias in a galaxy surface brightness profile owing to the poor resolution of a galaxy photometric image can bias the SMBH mass by an order of magnitude. This study shows the promise and the limits of ALMA observations for measuring SMBH mass using molecular gas kinematics and provides a useful technical justification for an ALMA proposal with the science goal of measuring SMBH mass.