Sample records for current measurement errors

  1. Method and apparatus for correcting eddy current signal voltage for temperature effects

    DOEpatents

    Kustra, Thomas A.; Caffarel, Alfred J.

    1990-01-01

    An apparatus and method for measuring physical characteristics of an electrically conductive material by eddy-current techniques, and for compensating for measurement errors caused by changes in temperature, includes a switching arrangement connected between the primary and reference coils of an eddy-current probe. The switching arrangement allows the probe to be selectively connected between an eddy current output oscilloscope and a digital ohm-meter, so that the resistances of the primary and reference coils can be measured substantially at the time of the eddy current measurement. In this way, changes in resistance due to temperature effects can be completely taken into account in determining the true error in the eddy current measurement. The true error can consequently be converted into an equivalent eddy current measurement correction.

  2. Circular Array of Magnetic Sensors for Current Measurement: Analysis for Error Caused by Position of Conductor.

    PubMed

    Yu, Hao; Qian, Zheng; Liu, Huayi; Qu, Jiaqi

    2018-02-14

    This paper analyzes the measurement error, caused by the position of the current-carrying conductor, of a circular array of magnetic sensors for current measurement. The circular array of magnetic sensors is an effective approach for AC or DC non-contact measurement, as it is low-cost and light-weight and has a large linear range, wide bandwidth, and low noise. In particular, it has been claimed that such a structure has excellent ability to reduce errors caused by the position of the current-carrying conductor, crosstalk current interference, the shape of the conductor cross-section, and the Earth's magnetic field. However, the position of the current-carrying conductor, including un-centeredness and un-perpendicularity, has not been analyzed in detail until now. In this paper, with the aim of minimizing the measurement error, a theoretical analysis is proposed based on vector inner and exterior products. In the presented mathematical model of the relative error, the un-centered offset distance, the un-perpendicular angle, the radius of the circle, and the number of magnetic sensors are expressed in one equation. The relative error caused by the position of the current-carrying conductor is compared between four and eight sensors. Tunnel magnetoresistance (TMR) sensors are used in the experimental prototype to verify the mathematical model. The analysis results can serve as a reference for designing circular arrays of magnetic sensors for current measurement in practical situations.
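
    The following is a minimal numerical sketch (with assumed array radius, conductor offsets, and sensor counts) of the kind of position-error analysis described above: tangential-field sensors on a circle estimate the enclosed current through a discrete Ampere's-law sum, and the relative error is evaluated for an off-center conductor. It is an illustration, not the authors' vector-product model.

```python
import numpy as np

MU0 = 4e-7 * np.pi  # vacuum permeability [T*m/A]

def relative_error(n_sensors, radius, offset, current=100.0):
    """Relative error of the averaged-sensor current estimate for an
    infinite straight conductor displaced by `offset` from the array center."""
    angles = 2 * np.pi * np.arange(n_sensors) / n_sensors
    sensors = radius * np.c_[np.cos(angles), np.sin(angles)]   # sensor positions
    tangents = np.c_[-np.sin(angles), np.cos(angles)]          # sensing directions
    wire = np.array([offset, 0.0])                             # conductor position
    r = sensors - wire                                         # vectors wire -> sensor
    dist2 = np.sum(r**2, axis=1)
    # Field of an infinite wire, perpendicular to r: B = mu0*I/(2*pi*d) * t_hat
    b_field = MU0 * current / (2 * np.pi * dist2[:, None]) * np.c_[-r[:, 1], r[:, 0]]
    b_tangential = np.sum(b_field * tangents, axis=1)
    # Discrete Ampere's law: sum(B_t * dl) ~ mu0 * I_est, with dl = 2*pi*R/N
    i_est = np.sum(b_tangential) * (2 * np.pi * radius / n_sensors) / MU0
    return (i_est - current) / current

for n in (4, 8):
    for off in (0.1, 0.3, 0.5):   # offset as a fraction of the array radius
        err = relative_error(n, radius=0.05, offset=off * 0.05)
        print(f"N={n}, offset={off:.1f}R: relative error = {err:.2e}")
```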

  3. Systematic error of diode thermometer.

    PubMed

    Iskrenovic, Predrag S

    2009-08-01

    Semiconductor diodes are often used for measuring temperature. The forward voltage across a diode decreases approximately linearly with increasing temperature. The method applied is usually the simplest one: a constant direct current flows through the diode, and the voltage is measured at the diode terminals. The direct current that flows through the diode to put it into its operating mode also heats it up. The resulting increase in temperature of the diode sensor, i.e., the systematic error due to self-heating, depends predominantly on the intensity of the current, and also on other factors. This paper presents measurements of the systematic error due to heating by the forward-bias current. The measurements were made on several diodes over a wide range of bias current intensities.

  4. DC-Compensated Current Transformer.

    PubMed

    Ripka, Pavel; Draxler, Karel; Styblíková, Renata

    2016-01-20

    Instrument current transformers (CTs) measure AC currents. The DC component in the measured current can saturate the transformer and cause gross error. We use fluxgate detection and digital feedback compensation of the DC flux to suppress the overall error to 0.15%. This concept can be used not only for high-end CTs with a nanocrystalline core, but it also works for low-cost CTs with FeSi cores. The method described here allows simultaneous measurements of the DC current component.

  5. DC-Compensated Current Transformer †

    PubMed Central

    Ripka, Pavel; Draxler, Karel; Styblíková, Renata

    2016-01-01

    Instrument current transformers (CTs) measure AC currents. The DC component in the measured current can saturate the transformer and cause gross error. We use fluxgate detection and digital feedback compensation of the DC flux to suppress the overall error to 0.15%. This concept can be used not only for high-end CTs with a nanocrystalline core, but it also works for low-cost CTs with FeSi cores. The method described here allows simultaneous measurements of the DC current component. PMID:26805830

  6. Evaluation of Acoustic Doppler Current Profiler measurements of river discharge

    USGS Publications Warehouse

    Morlock, S.E.

    1996-01-01

    The standard deviations of the ADCP measurements ranged from approximately 1 to 6 percent and were generally higher than the measurement errors predicted by error-propagation analysis of ADCP instrument performance. These error-prediction methods assume that the largest component of ADCP discharge measurement error is instrument related. The larger standard deviations indicate that substantial portions of measurement error may be attributable to sources unrelated to ADCP electronics or signal processing and are functions of the field environment.

  7. A precision analogue integrator system for heavy current measurement in MFDC resistance spot welding

    NASA Astrophysics Data System (ADS)

    Xia, Yu-Jun; Zhang, Zhong-Dian; Xia, Zhen-Xin; Zhu, Shi-Liang; Zhang, Rui

    2016-02-01

    In order to control and monitor the quality of middle frequency direct current (MFDC) resistance spot welding (RSW), precision measurement of the welding current up to 100 kA is required, for which Rogowski coils are the only viable current transducers at present. Thus, a highly accurate analogue integrator is the key to restoring the converted signals collected from the Rogowski coils. Previous studies emphasised that integration drift is a major factor that influences the performance of analogue integrators, but capacitive leakage error also has a significant impact on the result, especially in long-time pulse integration. In this article, new methods of measuring and compensating capacitive leakage error are proposed to fabricate a precision analogue integrator system for MFDC RSW. A voltage holding test is carried out to measure the integration error caused by capacitive leakage, and an original integrator with a feedback adder is designed to compensate capacitive leakage error in real time. The experimental results and statistical analysis show that the new analogue integrator system constrains both drift and capacitive leakage error, and that the effect is robust to different voltage levels of output signals. The total integration error is limited to within ±0.09 mV s⁻¹, or 0.005% s⁻¹ of full scale, at a 95% confidence level, which makes it possible to achieve precision measurement of the welding current of MFDC RSW with Rogowski coils at the 0.1% accuracy class.

  8. Unavoidable electric current caused by inhomogeneities and its influence on measured material parameters of thermoelectric materials

    NASA Astrophysics Data System (ADS)

    Song, K.; Song, H. P.; Gao, C. F.

    2018-03-01

    It is well known that the key factor determining the performance of thermoelectric materials is the figure of merit, which depends on the thermal conductivity (TC), electrical conductivity, and Seebeck coefficient (SC). The electric current must be zero when measuring the TC and SC to avoid the occurrence of measurement errors. In this study, the complex-variable method is used to analyze the thermoelectric field near an elliptic inhomogeneity in an open circuit, and the field distributions are obtained in closed form. Our analysis shows that an electric current inevitably exists in both the matrix and the inhomogeneity even though the circuit is open. This unexpected electric current seriously affects the accuracy with which the TC and SC are measured. These measurement errors, both overall and local, are analyzed in detail. In addition, an error correction method is proposed based on the analytical results.

  9. Effect of electrical coupling on ionic current and synaptic potential measurements.

    PubMed

    Rabbah, Pascale; Golowasch, Jorge; Nadim, Farzan

    2005-07-01

    Recent studies have found electrical coupling to be more ubiquitous than previously thought, and coupling through gap junctions is known to play a crucial role in neuronal function and network output. In particular, current spread through gap junctions may affect the activation of voltage-dependent conductances as well as chemical synaptic release. Using voltage-clamp recordings of two strongly electrically coupled neurons of the lobster stomatogastric ganglion and conductance-based models of these neurons, we identified effects of electrical coupling on the measurement of leak and voltage-gated outward currents, as well as synaptic potentials. Experimental measurements showed that both leak and voltage-gated outward currents are recruited by gap junctions from neurons coupled to the clamped cell. Nevertheless, in spite of the strong coupling between these neurons, the errors made in estimating voltage-gated conductance parameters were relatively minor (<10%). Thus in many cases isolation of coupled neurons may not be required if a small degree of measurement error of the voltage-gated currents or the synaptic potentials is acceptable. Modeling results show, however, that such errors may be as high as 20% if the gap-junction position is near the recording site or as high as 90% when measuring smaller voltage-gated ionic currents. Paradoxically, improved space clamp increases the errors arising from electrical coupling because voltage control across gap junctions is poor for even the highest realistic coupling conductances. Furthermore, the common procedure of leak subtraction can add an extra error to the conductance measurement, the sign of which depends on the maximal conductance.

  10. Methods for accurate estimation of net discharge in a tidal channel

    USGS Publications Warehouse

    Simpson, M.R.; Bland, R.

    2000-01-01

    Accurate estimates of net residual discharge in tidally affected rivers and estuaries are possible because of recently developed ultrasonic discharge measurement techniques. Previous discharge estimates using conventional mechanical current meters and methods based on stage/discharge relations or water slope measurements often yielded errors that were as great as or greater than the computed residual discharge. Ultrasonic measurement methods consist of: 1) the use of ultrasonic instruments for the measurement of a representative 'index' velocity used for in situ estimation of mean water velocity and 2) the use of the acoustic Doppler current discharge measurement system to calibrate the index velocity measurement data. Methods used to calibrate (rate) the index velocity to the channel velocity measured using the Acoustic Doppler Current Profiler are the most critical factors affecting the accuracy of net discharge estimation. The index velocity first must be related to mean channel velocity and then used to calculate instantaneous channel discharge. Finally, discharge is low-pass filtered to remove the effects of the tides. An ultrasonic velocity meter discharge-measurement site in a tidally affected region of the Sacramento-San Joaquin Rivers was used to study the accuracy of the index velocity calibration procedure. Calibration data consisting of ultrasonic velocity meter index velocity and concurrent acoustic Doppler discharge measurement data were collected during three time periods. Two sets of data were collected during a spring tide (monthly maximum tidal current) and one set was collected during a neap tide (monthly minimum tidal current). The relative magnitudes of instrumental errors, acoustic Doppler discharge measurement errors, and calibration errors were evaluated. Calibration error was found to be the most significant source of error in estimating net discharge. Using a comprehensive calibration method, net discharge estimates developed from the three sets of calibration data differed by less than an average of 4 cubic meters per second, or less than 0.5% of a typical peak tidal discharge rate of 750 cubic meters per second.
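
    As an illustration of the calibration chain described above (not the USGS procedure itself), the sketch below rates a synthetic index velocity against concurrent mean channel velocity, computes instantaneous discharge, and low-pass filters it to recover the net residual discharge; the channel area, noise levels, and filter window are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic 15-minute series: tidal channel velocity with a small net (residual) flow
dt_hours = 0.25
t = np.arange(0, 30 * 24, dt_hours)                       # 30 days, hours
v_channel = 0.05 + 0.8 * np.sin(2 * np.pi * t / 12.42)     # m/s, M2 tide + residual
v_index = 1.1 * v_channel + 0.02 + rng.normal(0, 0.03, t.size)  # index meter reading
area = 1500.0                                              # channel area, m^2 (assumed)

# 1) Rating: relate index velocity to mean channel velocity (here a linear fit,
#    calibrated against the concurrent "ADCP" channel velocity)
slope, intercept = np.polyfit(v_index, v_channel, 1)
v_rated = slope * v_index + intercept

# 2) Instantaneous discharge
q = area * v_rated                                         # m^3/s

# 3) Low-pass filter (simple centered moving average over ~50 h as a stand-in
#    for a proper tidal filter) to obtain net residual discharge
window = int(50 / dt_hours)
kernel = np.ones(window) / window
q_net = np.convolve(q, kernel, mode="same")

print(f"true net discharge   : {area * 0.05:8.1f} m^3/s")
print(f"filtered net (median): {np.median(q_net):8.1f} m^3/s")
```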

  11. Study of Current Measurement Method Based on Circular Magnetic Field Sensing Array

    PubMed Central

    Li, Zhenhua; Zhang, Siqiu; Wu, Zhengtian; Tao, Yuan

    2018-01-01

    Classic core-based instrument transformers are prone to magnetic saturation. This affects the measurement accuracy of such transformers and limits their application to measuring large direct current (DC). Moreover, protection and control systems may malfunction due to such measurement errors. This paper presents a more accurate method for current measurement based on a circular magnetic field sensing array. The proposed measurement approach uses multiple Hall sensors that are evenly distributed on a circle, and the average value of all Hall sensor readings is taken as the final measurement. The calculation model is established for the case of magnetic field interference from a parallel wire, and the simulation results show that the error decreases significantly when the number of Hall sensors n is greater than 8. The measurement error is less than 0.06% when the wire spacing is greater than 2.5 times the radius of the sensor array. A simulation study of an off-center primary conductor is conducted, and a Hall sensor compensation method is adopted to improve the accuracy. The simulation and test results indicate that the measurement error of the system is less than 0.1%. PMID:29734742

  12. Study of Current Measurement Method Based on Circular Magnetic Field Sensing Array.

    PubMed

    Li, Zhenhua; Zhang, Siqiu; Wu, Zhengtian; Abu-Siada, Ahmed; Tao, Yuan

    2018-05-05

    Classic core-based instrument transformers are prone to magnetic saturation. This affects the measurement accuracy of such transformers and limits their application to measuring large direct current (DC). Moreover, protection and control systems may malfunction due to such measurement errors. This paper presents a more accurate method for current measurement based on a circular magnetic field sensing array. The proposed measurement approach uses multiple Hall sensors that are evenly distributed on a circle, and the average value of all Hall sensor readings is taken as the final measurement. The calculation model is established for the case of magnetic field interference from a parallel wire, and the simulation results show that the error decreases significantly when the number of Hall sensors n is greater than 8. The measurement error is less than 0.06% when the wire spacing is greater than 2.5 times the radius of the sensor array. A simulation study of an off-center primary conductor is conducted, and a Hall sensor compensation method is adopted to improve the accuracy. The simulation and test results indicate that the measurement error of the system is less than 0.1%.

  13. Technique for temperature compensation of eddy-current proximity probes

    NASA Technical Reports Server (NTRS)

    Masters, Robert M.

    1989-01-01

    Eddy-current proximity probes are used in turbomachinery evaluation testing and operation to measure distances, primarily the vibration, deflection, or displacement of shafts, bearings, and seals. Measurements of steady-state conditions made with standard eddy-current proximity probes are susceptible to error caused by temperature variations during normal operation of the component under investigation. Errors resulting from temperature effects for the specific probes used in this study were approximately 1.016 × 10⁻³ mm/°C over the temperature range of -252 °C to 100 °C. This report examines temperature-caused changes in the eddy-current proximity probe measurement system, establishes their origin, and discusses what may be done to minimize their effect on the output signal. In addition, recommendations are made for the installation and operation of the electronic components associated with an eddy-current proximity probe. Several techniques are described that provide active on-line error compensation for over 95 percent of the temperature effects.

  14. A Comprehensive Radial Velocity Error Budget for Next Generation Doppler Spectrometers

    NASA Technical Reports Server (NTRS)

    Halverson, Samuel; Ryan, Terrien; Mahadevan, Suvrath; Roy, Arpita; Bender, Chad; Stefansson, Guomundur Kari; Monson, Andrew; Levi, Eric; Hearty, Fred; Blake, Cullen

    2016-01-01

    We describe a detailed radial velocity error budget for the NASA-NSF Extreme Precision Doppler Spectrometer instrument concept NEID (NN-explore Exoplanet Investigations with Doppler spectroscopy). Such an instrument performance budget is a necessity for both identifying the variety of noise sources currently limiting Doppler measurements, and estimating the achievable performance of next generation exoplanet hunting Doppler spectrometers. For these instruments, no single source of instrumental error is expected to set the overall measurement floor. Rather, the overall instrumental measurement precision is set by the contribution of many individual error sources. We use a combination of numerical simulations, educated estimates based on published materials, extrapolations of physical models, results from laboratory measurements of spectroscopic subsystems, and informed upper limits for a variety of error sources to identify likely sources of systematic error and construct our global instrument performance error budget. While natively focused on the performance of the NEID instrument, this modular performance budget is immediately adaptable to a number of current and future instruments. Such an approach is an important step in charting a path towards improving Doppler measurement precisions to the levels necessary for discovering Earth-like planets.
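
    A common way to assemble such a budget is to combine independent error terms in quadrature. The sketch below does this with invented term names and magnitudes; it is not the published NEID budget.

```python
import math

error_terms_m_per_s = {               # hypothetical 1-sigma contributions
    "photon noise":            0.25,
    "wavelength calibration":  0.10,
    "fiber illumination":      0.08,
    "detector effects":        0.06,
    "thermo-mechanical drift": 0.05,
    "telluric contamination":  0.10,
}

# Independent error sources add in quadrature (root sum of squares)
total = math.sqrt(sum(v**2 for v in error_terms_m_per_s.values()))
for name, v in sorted(error_terms_m_per_s.items(), key=lambda kv: -kv[1]):
    print(f"{name:26s} {v:5.2f} m/s  ({(v/total)**2:5.1%} of variance)")
print(f"{'quadrature total':26s} {total:5.2f} m/s")
```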

  15. Multiple imputation to account for measurement error in marginal structural models

    PubMed Central

    Edwards, Jessie K.; Cole, Stephen R.; Westreich, Daniel; Crane, Heidi; Eron, Joseph J.; Mathews, W. Christopher; Moore, Richard; Boswell, Stephen L.; Lesko, Catherine R.; Mugavero, Michael J.

    2015-01-01

    Background Marginal structural models are an important tool for observational studies. These models typically assume that variables are measured without error. We describe a method to account for differential and non-differential measurement error in a marginal structural model. Methods We illustrate the method estimating the joint effects of antiretroviral therapy initiation and current smoking on all-cause mortality in a United States cohort of 12,290 patients with HIV followed for up to 5 years between 1998 and 2011. Smoking status was likely measured with error, but a subset of 3686 patients who reported smoking status on separate questionnaires composed an internal validation subgroup. We compared a standard joint marginal structural model fit using inverse probability weights to a model that also accounted for misclassification of smoking status using multiple imputation. Results In the standard analysis, current smoking was not associated with increased risk of mortality. After accounting for misclassification, current smoking without therapy was associated with increased mortality [hazard ratio (HR): 1.2 (95% CI: 0.6, 2.3)]. The HR for current smoking and therapy (0.4 (95% CI: 0.2, 0.7)) was similar to the HR for no smoking and therapy (0.4; 95% CI: 0.2, 0.6). Conclusions Multiple imputation can be used to account for measurement error in concert with methods for causal inference to strengthen results from observational studies. PMID:26214338

  16. Rapid Measurement and Correction of Phase Errors from B0 Eddy Currents: Impact on Image Quality for Non-Cartesian Imaging

    PubMed Central

    Brodsky, Ethan K.; Klaers, Jessica L.; Samsonov, Alexey A.; Kijowski, Richard; Block, Walter F.

    2014-01-01

    Non-Cartesian imaging sequences and navigational methods can be more sensitive to scanner imperfections that have little impact on conventional clinical sequences, an issue which has repeatedly complicated the commercialization of these techniques by frustrating transitions to multi-center evaluations. One such imperfection is phase errors caused by resonant frequency shifts from eddy currents induced in the cryostat by time-varying gradients, a phenomenon known as B0 eddy currents. These phase errors can have a substantial impact on sequences that use ramp sampling, bipolar gradients, and readouts at varying azimuthal angles. We present a method for measuring and correcting phase errors from B0 eddy currents and examine the results on two different scanner models. This technique yields significant improvements in image quality for high-resolution joint imaging on certain scanners. The results suggest that correction of short-time B0 eddy currents in manufacturer-provided service routines would simplify adoption of non-Cartesian sampling methods. PMID:22488532

  17. Blood transfusion sampling and a greater role for error recovery.

    PubMed

    Oldham, Jane

    Patient identification errors in pre-transfusion blood sampling ('wrong blood in tube') are a persistent area of risk. These errors can potentially result in life-threatening complications. Current measures to address root causes of incidents and near misses have not resolved this problem and there is a need to look afresh at this issue. PROJECT PURPOSE: This narrative review of the literature is part of a wider system-improvement project designed to explore and seek a better understanding of the factors that contribute to transfusion sampling error as a prerequisite to examining current and potential approaches to error reduction. A broad search of the literature was undertaken to identify themes relating to this phenomenon. KEY DISCOVERIES: Two key themes emerged from the literature. Firstly, despite multi-faceted causes of error, the consistent element is the ever-present potential for human error. Secondly, current focus on error prevention could potentially be augmented with greater attention to error recovery. Exploring ways in which clinical staff taking samples might learn how to better identify their own errors is proposed to add to current safety initiatives.

  18. Extinction measurements with low-power HSRL systems - error limits

    NASA Astrophysics Data System (ADS)

    Eloranta, Ed

    2018-04-01

    HSRL measurements of extinction are more difficult than backscatter measurements. This is particularly true for low-power, eye-safe systems. This paper looks at error sources that currently provide an error limit of 10⁻⁵ m⁻¹ for boundary layer extinction measurements made with University of Wisconsin HSRL systems. These eye-safe systems typically use 300 mW transmitters and 40 cm diameter receivers with a 10⁻⁴ radian field of view.

  19. Altimeter error sources at the 10-cm performance level

    NASA Technical Reports Server (NTRS)

    Martin, C. F.

    1977-01-01

    Error sources affecting the calibration and operational use of a 10 cm altimeter are examined to determine the magnitudes of current errors and the investigations necessary to reduce them to acceptable bounds. Errors considered include those affecting operational data pre-processing and those affecting altitude bias determination, with error budgets developed for both. The most significant error sources affecting pre-processing are bias calibration, propagation corrections for the ionosphere, and measurement noise. No ionospheric models are currently validated at the required 10-25% accuracy level. The optimum smoothing to reduce the effects of measurement noise is investigated and found to be on the order of one second, based on the TASC model of geoid undulations. The 10 cm calibrations are found to be feasible only through the use of altimeter passes at very high elevation above a tracking station that tracks very close to the time of the altimeter pass, such as a high-elevation pass across the island of Bermuda. By far the largest error source, based on the current state of the art, is the location of the island tracking station relative to mean sea level in the surrounding ocean areas.

  20. Measurement error is often neglected in medical literature: a systematic review.

    PubMed

    Brakenhoff, Timo B; Mitroiu, Marian; Keogh, Ruth H; Moons, Karel G M; Groenwold, Rolf H H; van Smeden, Maarten

    2018-06-01

    In medical research, covariates (e.g., exposure and confounder variables) are often measured with error. While it is well accepted that this introduces bias and imprecision in exposure-outcome relations, it is unclear to what extent such issues are currently considered in research practice. The objective was to study common practices regarding covariate measurement error via a systematic review of general medicine and epidemiology literature. Original research published in 2016 in 12 high impact journals was full-text searched for phrases relating to measurement error. Reporting of measurement error and methods to investigate or correct for it were quantified and characterized. Two hundred and forty-seven (44%) of the 565 original research publications reported on the presence of measurement error. 83% of these 247 did so with respect to the exposure and/or confounder variables. Only 18 publications (7% of 247) used methods to investigate or correct for measurement error. Consequently, it is difficult for readers to judge the robustness of presented results to the existence of measurement error in the majority of publications in high impact journals. Our systematic review highlights the need for increased awareness about the possible impact of covariate measurement error. Additionally, guidance on the use of measurement error correction methods is necessary. Copyright © 2018 Elsevier Inc. All rights reserved.

  1. A new assessment method of pHEMT models by comparing relative errors of drain current and its derivatives up to the third order

    NASA Astrophysics Data System (ADS)

    Dobeš, Josef; Grábner, Martin; Puričer, Pavel; Vejražka, František; Míchal, Jan; Popp, Jakub

    2017-05-01

    Nowadays, there exist relatively precise pHEMT models available for computer-aided design, and they are frequently compared to each other. However, such comparisons are mostly based on absolute errors of drain-current equations and their derivatives. In this paper, a novel method is suggested based on relative root-mean-square errors of both the drain current and its derivatives up to the third order. Moreover, the relative errors are subsequently relativized to the best model in each category to further clarify the obtained accuracies of both the drain current and its derivatives. Furthermore, one of our older models and two newly suggested models are also included in the comparison with the traditionally precise Ahmed, TOM-2 and Materka ones. The assessment is performed using measured characteristics of a pHEMT operating up to 110 GHz. Finally, the usability of the proposed models including the higher-order derivatives is illustrated using s-parameter analysis and measurement at multiple operating points, as well as computation and measurement of IP3 points of a low-noise amplifier of a multi-constellation satellite navigation receiver with an ATF-54143 pHEMT.
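
    The sketch below illustrates one plausible form of such an assessment (assumed, not the authors' exact formulation): relative root-mean-square errors of the drain current and of its numerical derivatives up to third order are computed for two synthetic candidate models and then relativized to the best model in each category.

```python
import numpy as np

def relative_rmse(measured, modeled):
    return np.sqrt(np.mean((modeled - measured) ** 2)) / np.sqrt(np.mean(measured ** 2))

def scores(vgs, i_meas, i_model):
    """Relative RMSE for Id and its 1st-3rd numerical derivatives w.r.t. Vgs."""
    out = []
    m, f = i_meas.copy(), i_model.copy()
    for order in range(4):
        out.append(relative_rmse(m, f))
        m, f = np.gradient(m, vgs), np.gradient(f, vgs)
    return np.array(out)

# Illustrative data: a "measured" transfer characteristic and two candidate models
vgs = np.linspace(-1.0, 0.4, 141)
i_meas = 0.08 * np.log1p(np.exp(8 * (vgs + 0.45)))           # A (synthetic)
models = {
    "model A": 0.08 * np.log1p(np.exp(7.8 * (vgs + 0.46))),
    "model B": 0.078 * np.log1p(np.exp(8.3 * (vgs + 0.44))),
}

table = {name: scores(vgs, i_meas, i_model) for name, i_model in models.items()}
best = np.min(np.vstack(list(table.values())), axis=0)       # best model per category
for name, s in table.items():
    print(name, "rel. RMSE (Id, Id', Id'', Id''') =", np.round(s, 4),
          " relative to best:", np.round(s / best, 2))
```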

  2. Techniques for Down-Sampling a Measured Surface Height Map for Model Validation

    NASA Technical Reports Server (NTRS)

    Sidick, Erkin

    2012-01-01

    This software allows one to down-sample a measured surface map for model validation, not only without introducing any re-sampling errors, but also while eliminating existing measurement noise and measurement errors. The software tool implementing the two new techniques can be used in all optical model validation processes involving large space optical surfaces.

  3. A Novel Methodology to Validate the Accuracy of Extraoral Dental Scanners and Digital Articulation Systems.

    PubMed

    Ellakwa, A; Elnajar, S; Littlefair, D; Sara, G

    2018-05-03

    The aim of the current study is to develop a novel method to investigate the accuracy of 3D scanners and digital articulation systems. Upper and lower poured stone models were created by taking impressions of a fully dentate male participant (fifty years old). Titanium spheres were added to the models to provide an easily recognisable geometric shape for measurement after scanning and digital articulation. Measurements were obtained using a coordinate measuring machine (CMM) to record volumetric error, articulation error and clinical effect error. Three scanners were compared, including the Imetric 3D iScan d104i, Shining 3D AutoScan-DS100 and 3Shape D800, as well as their respective digital articulation software packages. The Stoneglass Industries PDC digital articulation system was also applied to the Imetric scans for comparison with the CMM measurements. All the scans displayed low volumetric error (p > 0.05), indicating that the scanners themselves had only a minor contribution to the articulation and clinical effect errors. The PDC digital articulation system was found to deliver the lowest average errors, with good repeatability of results. The new measuring technique in the current study was able to assess the scanning and articulation accuracy of the four systems investigated. The PDC digital articulation system using Imetric scans was recommended as it displayed the lowest articulation error and clinical effect error with good repeatability. The low errors from the PDC system may have been due to its use of a 3D axis for alignment rather than the use of a best fit. Copyright© 2018 Dennis Barber Ltd.

  4. Single-ping ADCP measurements in the Strait of Gibraltar

    NASA Astrophysics Data System (ADS)

    Sammartino, Simone; García Lafuente, Jesús; Naranjo, Cristina; Sánchez Garrido, José Carlos; Sánchez Leal, Ricardo

    2016-04-01

    In most Acoustic Doppler Current Profiler (ADCP) user manuals, it is recommended to apply ensemble averaging of the single-ping measurements in order to obtain reliable observations of the current speed. The random error associated with a single-ping measurement is typically too high for it to be used directly, while the averaging operation reduces the ensemble error by a factor of approximately √N, with N the number of averaged pings. A 75 kHz ADCP moored at the western exit of the Strait of Gibraltar, included in the long-term monitoring of the Mediterranean outflow, has recently served as a test setup for a different approach to current measurements. The ensemble averaging was disabled, while maintaining the internal coordinate conversion made by the instrument, and a series of single-ping measurements was collected every 36 seconds over a period of approximately 5 months. The huge amount of data was handled fluently by the instrument, and no abnormal battery consumption was recorded. In this way a long and unique series of very high frequency current measurements was collected. Results of this novel approach have been exploited in a dual way. From a statistical point of view, the availability of single-ping measurements allows a real estimate of the (a posteriori) ensemble-average error of both current and ancillary variables. While the theoretical random error for horizontal velocity is estimated a priori as ~2 cm s⁻¹ for a 50-ping ensemble, the value obtained by the a posteriori averaging is ~15 cm s⁻¹, with an asymptotic behaviour starting from an averaging size of 10 pings per ensemble. This result suggests the presence of external sources of random error (e.g., turbulence) of higher magnitude than the internal sources (ADCP intrinsic precision), which cannot be reduced by ensemble averaging. On the other hand, although the instrumental configuration is clearly not suitable for a precise estimation of turbulent parameters, some hints of the turbulent structure of the flow can be obtained from the empirical computation of the zonal Reynolds stress (along the predominant direction of the current) and the rates of production and dissipation of turbulent kinetic energy. All the parameters show a clear correlation with tidal fluctuations of the current, with maximum values coinciding with flood tides during the maxima of the outflow Mediterranean current.
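
    The contrast between the a priori and a posteriori error estimates can be illustrated with a short simulation (invented noise magnitudes): instrument noise averages down as √N, while a fluctuation that is correlated across the pings of an ensemble, such as turbulence, does not.

```python
import numpy as np

rng = np.random.default_rng(1)

single_ping_sigma = 0.14      # m/s, assumed instrument (Doppler) noise per ping
turbulence_sigma = 0.10       # m/s, assumed real velocity fluctuations, correlated
                              # within an ensemble and hence not reduced by averaging
n_pings, n_ensembles = 50, 2000

# A priori prediction: instrument noise only, reduced by sqrt(N)
print(f"a priori error for {n_pings}-ping ensemble: "
      f"{single_ping_sigma / np.sqrt(n_pings):.3f} m/s")

# A posteriori estimate: simulate single pings that also sample a turbulent
# fluctuation shared by all pings of an ensemble, then look at the scatter of
# the ensemble means
turb = rng.normal(0, turbulence_sigma, n_ensembles)                  # one draw per ensemble
pings = turb[:, None] + rng.normal(0, single_ping_sigma, (n_ensembles, n_pings))
ensemble_means = pings.mean(axis=1)
print(f"a posteriori ensemble std:              {ensemble_means.std():.3f} m/s")
```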

  5. Pyrometer with tracking balancing

    NASA Astrophysics Data System (ADS)

    Ponomarev, D. B.; Zakharenko, V. A.; Shkaev, A. G.

    2018-04-01

    Currently, one of the main metrological challenges in noncontact temperature measurement is the emissivity uncertainty. This paper describes a pyrometer that reduces the effect of emissivity through the use of a measuring scheme with tracking balancing, in which the radiation receiver acts as a null indicator. The results of a study of the prototype pyrometer's absolute error in measuring the surface temperature of aluminum and nickel samples are presented. The absolute errors calculated using tabulated emissivity values are compared with the errors obtained from experimental measurements by the proposed method. The practical implementation of the proposed technical solution has allowed the error due to the emissivity uncertainty to be reduced by a factor of two.

  6. Phase Error Correction in Time-Averaged 3D Phase Contrast Magnetic Resonance Imaging of the Cerebral Vasculature

    PubMed Central

    MacDonald, M. Ethan; Forkert, Nils D.; Pike, G. Bruce; Frayne, Richard

    2016-01-01

    Purpose Volume flow rate (VFR) measurements based on phase contrast (PC)-magnetic resonance (MR) imaging datasets have spatially varying bias due to eddy current induced phase errors. The purpose of this study was to assess the impact of phase errors in time averaged PC-MR imaging of the cerebral vasculature and explore the effects of three common correction schemes (local bias correction (LBC), local polynomial correction (LPC), and whole brain polynomial correction (WBPC)). Methods Measurements of the eddy current induced phase error from a static phantom were first obtained. In thirty healthy human subjects, the methods were then assessed in background tissue to determine if local phase offsets could be removed. Finally, the techniques were used to correct VFR measurements in cerebral vessels and compared statistically. Results In the phantom, phase error was measured to be <2.1 ml/s per pixel and the bias was reduced with the correction schemes. In background tissue, the bias was significantly reduced, by 65.6% (LBC), 58.4% (LPC) and 47.7% (WBPC) (p < 0.001 across all schemes). Correction did not lead to significantly different VFR measurements in the vessels (p = 0.997). In the vessel measurements, the three correction schemes led to flow measurement differences of -0.04 ± 0.05 ml/s, 0.09 ± 0.16 ml/s, and -0.02 ± 0.06 ml/s. Although there was an improvement in background measurements with correction, there was no statistical difference between the three correction schemes (p = 0.242 in background and p = 0.738 in vessels). Conclusions While eddy current induced phase errors can vary between hardware and sequence configurations, our results showed that the impact is small in a typical brain PC-MR protocol and does not have a significant effect on VFR measurements in cerebral vessels. PMID:26910600
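
    The sketch below illustrates the general idea behind polynomial phase correction (a simplified stand-in, not the specific LBC/LPC/WBPC implementations): a low-order 2D polynomial is fitted to the phase values in an assumed static-tissue mask and subtracted from the whole image.

```python
import numpy as np

ny, nx = 128, 128
y, x = np.mgrid[0:ny, 0:nx]
x, y = (x - nx / 2) / nx, (y - ny / 2) / ny

rng = np.random.default_rng(2)
true_offset = 0.8 + 2.0 * x - 1.5 * y + 3.0 * x * y          # cm/s, synthetic eddy-current bias
phase = true_offset + rng.normal(0, 0.5, (ny, nx))           # measured "velocity" image

static_mask = rng.random((ny, nx)) < 0.3                     # pixels assumed static

# Fit a first-order polynomial with a cross term: v = a + b*x + c*y + d*x*y
design = np.column_stack([np.ones(static_mask.sum()),
                          x[static_mask], y[static_mask], (x * y)[static_mask]])
coeffs, *_ = np.linalg.lstsq(design, phase[static_mask], rcond=None)
fitted = coeffs[0] + coeffs[1] * x + coeffs[2] * y + coeffs[3] * x * y

corrected = phase - fitted
print("mean bias in static tissue before:", round(phase[static_mask].mean(), 3), "cm/s")
print("mean bias in static tissue after :", round(corrected[static_mask].mean(), 3), "cm/s")
```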

  7. Aliasing errors in measurements of beam position and ellipticity

    NASA Astrophysics Data System (ADS)

    Ekdahl, Carl

    2005-09-01

    Beam position monitors (BPMs) are used in accelerators and ion experiments to measure currents, position, and azimuthal asymmetry. These usually consist of discrete arrays of electromagnetic field detectors, with detectors located at several equally spaced azimuthal positions at the beam tube wall. The discrete nature of these arrays introduces systematic errors into the data, independent of uncertainties resulting from signal noise, lack of recording dynamic range, etc. Computer simulations were used to understand and quantify these aliasing errors. If required, aliasing errors can be significantly reduced by employing more than the usual four detectors in the BPMs. These simulations show that the error in measurements of the centroid position of a large beam is indistinguishable from the error in the position of a filament. The simulations also show that aliasing errors in the measurement of beam ellipticity are very large unless the beam is accurately centered. The simulations were used to quantify the aliasing errors in beam parameter measurements during early experiments on the DARHT-II accelerator, demonstrating that they affected the measurements only slightly, if at all.

  8. Error analysis and experiments of attitude measurement using laser gyroscope

    NASA Astrophysics Data System (ADS)

    Ren, Xin-ran; Ma, Wen-li; Jiang, Ping; Huang, Jin-long; Pan, Nian; Guo, Shuai; Luo, Jun; Li, Xiao

    2018-03-01

    The precision of photoelectric tracking and measuring equipment on vehicles and vessels is degraded by the platform's movement. Specifically, the platform's movement leads to deviation or loss of the target; it also causes jitter of the visual axis and thus produces image blur. In order to improve the precision of photoelectric equipment, the attitude of the photoelectric equipment fixed to the platform must be measured. Currently, laser gyroscopes are widely used to measure the attitude of the platform. However, the measurement accuracy of a laser gyro is affected by its zero bias, scale factor, installation error and random error. In this paper, these errors were analyzed and compensated based on the laser gyro's error model. Static and dynamic experiments were carried out on a single-axis turntable, and the error model was verified by comparing the gyro's output with an encoder with an accuracy of 0.1 arc sec. The accuracy of the gyroscope improved from 7000 arc sec to 5 arc sec over an hour after error compensation. The method used in this paper is suitable for decreasing laser gyro errors in inertial measurement applications.
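
    A minimal sketch of a deterministic gyro error model of the type mentioned above (zero bias, scale factor, and installation misalignment, with assumed parameter values) and its inversion for compensation is given below; random error remains as residual noise.

```python
import numpy as np

bias = np.array([0.02, -0.01, 0.015])              # deg/s, assumed zero bias
scale = np.diag([1.0005, 0.9993, 1.0008])          # assumed scale-factor errors
misalign = np.array([[ 1.0,     0.0007, -0.0004],
                     [-0.0006,  1.0,     0.0009],
                     [ 0.0003, -0.0008,  1.0   ]]) # assumed installation misalignment

def measure(omega_true, rng):
    """Forward error model: what the gyro triad outputs for a true rate."""
    return scale @ misalign @ omega_true + bias + rng.normal(0, 0.003, 3)

def compensate(omega_meas):
    """Invert the calibrated deterministic model."""
    return np.linalg.solve(scale @ misalign, omega_meas - bias)

rng = np.random.default_rng(3)
omega_true = np.array([5.0, 0.0, 0.0])             # deg/s, single-axis turntable rate
omega_meas = measure(omega_true, rng)
print("raw error        :", np.round(omega_meas - omega_true, 4), "deg/s")
print("compensated error:", np.round(compensate(omega_meas) - omega_true, 4), "deg/s")
```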

  9. Demonstration of Nonlinearity Bias in the Measurement of the Apparent Diffusion Coefficient in Multicenter Trials

    PubMed Central

    Malyarenko, Dariya; Newitt, David; Wilmes, Lisa; Tudorica, Alina; Helmer, Karl G.; Arlinghaus, Lori R.; Jacobs, Michael A.; Jajamovich, Guido; Taouli, Bachir; Yankeelov, Thomas E.; Huang, Wei; Chenevert, Thomas L.

    2015-01-01

    Purpose Characterize system-specific bias across common magnetic resonance imaging (MRI) platforms for quantitative diffusion measurements in multicenter trials. Methods Diffusion weighted imaging (DWI) was performed on an ice-water phantom along the superior-inferior (SI) and right-left (RL) orientations spanning ±150 mm. The same scanning protocol was implemented on 14 MRI systems at seven imaging centers. The bias was estimated as a deviation of measured from known apparent diffusion coefficient (ADC) along individual DWI directions. The relative contributions of gradient nonlinearity, shim errors, imaging gradients and eddy currents were assessed independently. The observed bias errors were compared to numerical models. Results The measured systematic ADC errors scaled quadratically with offset from isocenter, and ranged between −55% (SI) and 25% (RL). Nonlinearity bias was dependent on system design and diffusion gradient direction. Consistent with numerical models, minor ADC errors (±5%) due to shim, imaging and eddy currents were mitigated by double echo DWI and image co-registration of individual gradient directions. Conclusion The analysis confirms gradient nonlinearity as a major source of spatial DW bias and variability in off-center ADC measurements across MRI platforms, with minor contributions from shim, imaging gradients and eddy currents. The developed protocol enables empiric description of systematic bias in multicenter quantitative DWI studies. PMID:25940607

  10. Demonstration of nonlinearity bias in the measurement of the apparent diffusion coefficient in multicenter trials.

    PubMed

    Malyarenko, Dariya I; Newitt, David; J Wilmes, Lisa; Tudorica, Alina; Helmer, Karl G; Arlinghaus, Lori R; Jacobs, Michael A; Jajamovich, Guido; Taouli, Bachir; Yankeelov, Thomas E; Huang, Wei; Chenevert, Thomas L

    2016-03-01

    Characterize system-specific bias across common magnetic resonance imaging (MRI) platforms for quantitative diffusion measurements in multicenter trials. Diffusion weighted imaging (DWI) was performed on an ice-water phantom along the superior-inferior (SI) and right-left (RL) orientations spanning ± 150 mm. The same scanning protocol was implemented on 14 MRI systems at seven imaging centers. The bias was estimated as a deviation of measured from known apparent diffusion coefficient (ADC) along individual DWI directions. The relative contributions of gradient nonlinearity, shim errors, imaging gradients, and eddy currents were assessed independently. The observed bias errors were compared with numerical models. The measured systematic ADC errors scaled quadratically with offset from isocenter, and ranged between -55% (SI) and 25% (RL). Nonlinearity bias was dependent on system design and diffusion gradient direction. Consistent with numerical models, minor ADC errors (± 5%) due to shim, imaging and eddy currents were mitigated by double echo DWI and image coregistration of individual gradient directions. The analysis confirms gradient nonlinearity as a major source of spatial DW bias and variability in off-center ADC measurements across MRI platforms, with minor contributions from shim, imaging gradients and eddy currents. The developed protocol enables empiric description of systematic bias in multicenter quantitative DWI studies. © 2015 Wiley Periodicals, Inc.

  11. Use of units of measurement error in anthropometric comparisons.

    PubMed

    Lucas, Teghan; Henneberg, Maciej

    2017-09-01

    Anthropometrists attempt to minimise measurement errors; however, errors cannot be eliminated entirely. Currently, measurement errors are simply reported. Measurement errors should instead be incorporated into analyses of anthropometric data. This study proposes a method which incorporates measurement errors into reported values, replacing metric units with 'units of technical error of measurement (TEM)', and applies it to forensics, industrial anthropometry and biological variation. The USA armed forces anthropometric survey (ANSUR) contains 132 anthropometric dimensions of 3982 individuals. Concepts of duplication and Euclidean distance calculations were applied to the forensic-style identification of individuals in this survey. The National Size and Shape Survey of Australia contains 65 anthropometric measurements of 1265 women. This sample was used to show how a woman's body measurements expressed in TEM could be 'matched' to standard clothing sizes. Euclidean distances show that two sets of repeated anthropometric measurements of the same person cannot be matched (distance > 0) when the measurements are expressed in millimetres, but can be matched (distance = 0) in units of TEM. Only 81 women could be fitted into any standard clothing size when matched using centimetres; with units of TEM, 1944 women fit. The proposed method can be applied to all fields that use anthropometry. Units of TEM are considered a more reliable unit of measurement for comparisons.
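
    The matching idea can be illustrated with a small example (invented TEM values and measurements): re-expressing measurements in whole units of TEM collapses two repeated measurement sets of the same person, which differ only by measurement error, onto the same point.

```python
import numpy as np

tem = np.array([4.0, 3.0, 5.0, 2.5])                 # mm, assumed TEM per dimension

def to_tem_units(measurements_mm):
    """Express measurements in whole units of TEM."""
    return np.round(measurements_mm / tem)

person_visit1 = np.array([1745.0, 452.0, 980.0, 271.0])            # mm
person_visit2 = person_visit1 + np.array([-2.0, 1.5, 2.0, -1.0])   # re-measurement error

d_mm = np.linalg.norm(person_visit1 - person_visit2)
d_tem = np.linalg.norm(to_tem_units(person_visit1) - to_tem_units(person_visit2))
print(f"Euclidean distance in mm : {d_mm:.2f}  (> 0, no match)")
print(f"Euclidean distance in TEM: {d_tem:.2f}  (match if 0)")
```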

  12. Finger blood content, light transmission, and pulse oximetry errors.

    PubMed

    Craft, T M; Lawson, R A; Young, J D

    1992-01-01

    The changes in light emitting diode current necessary to maintain a constant level of light incident upon a photodetector were measured in 20 volunteers at the two wavelengths employed by pulse oximeters. Three states of finger blood content were assessed: exsanguinated, hyperaemic, and normal. The changes in light emitting diode current with changes in finger blood content were small and are not thought to represent a significant source of error in saturation as measured by pulse oximetry.

  13. Aquatic habitat mapping with an acoustic Doppler current profiler: Considerations for data quality

    USGS Publications Warehouse

    Gaeuman, David; Jacobson, Robert B.

    2005-01-01

    When mounted on a boat or other moving platform, acoustic Doppler current profilers (ADCPs) can be used to map a wide range of ecologically significant phenomena, including measures of fluid shear, turbulence, vorticity, and near-bed sediment transport. However, the instrument movement necessary for mapping applications can generate significant errors, many of which have not been adequately described. This report focuses on the mechanisms by which moving-platform errors are generated, and quantifies their magnitudes under typical habitat-mapping conditions. The potential for velocity errors caused by misalignment of the instrument's internal compass is widely recognized, but has not previously been quantified for moving instruments. Numerical analyses show that even relatively minor compass misalignments can produce significant velocity errors, depending on the ratio of absolute instrument velocity to the target velocity and on the relative directions of instrument and target motion. A maximum absolute instrument velocity of about 1 m/s is recommended for most mapping applications. Lower velocities are appropriate when making bed velocity measurements, an emerging application that makes use of ADCP bottom-tracking to measure the velocity of sediment particles at the bed. The mechanisms by which heterogeneities in the flow velocity field generate horizontal velocity errors are also quantified, and some basic limitations in the effectiveness of standard error-detection criteria for identifying these errors are described. Bed velocity measurements may be particularly vulnerable to errors caused by spatial variability in the sediment transport field.
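
    The sketch below illustrates one mechanism by which compass misalignment produces velocity errors that grow with platform speed, for the case in which the platform velocity is referenced externally (e.g., by GPS) while the water velocity measured relative to the instrument is rotated through the misaligned compass; the geometry and angle are illustrative, not the authors' full analysis.

```python
import numpy as np

def rot(angle_rad):
    c, s = np.cos(angle_rad), np.sin(angle_rad)
    return np.array([[c, -s], [s, c]])

def estimated_water_velocity(v_water, v_boat, misalign_deg):
    """Water velocity over ground as reconstructed with a misaligned compass."""
    v_rel_instrument = v_water - v_boat               # what the ADCP actually measures
    return rot(np.radians(misalign_deg)) @ v_rel_instrument + v_boat

v_water = np.array([0.5, 0.0])                         # true water velocity, m/s
for boat_speed in (0.5, 1.0, 2.0, 4.0):                # m/s, perpendicular to the flow
    v_boat = np.array([0.0, boat_speed])
    err = estimated_water_velocity(v_water, v_boat, misalign_deg=2.0) - v_water
    print(f"boat {boat_speed:3.1f} m/s -> velocity error {np.linalg.norm(err)*100:5.1f} cm/s "
          f"({np.linalg.norm(err)/np.linalg.norm(v_water):5.1%} of target)")
```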

  14. Microwave Resonator Measurements of Atmospheric Absorption Coefficients: A Preliminary Design Study

    NASA Technical Reports Server (NTRS)

    Walter, Steven J.; Spilker, Thomas R.

    1995-01-01

    A preliminary design study examined the feasibility of using microwave resonator measurements to improve the accuracy of atmospheric absorption coefficients and refractivity between 18 and 35 GHz. Increased accuracies would improve the capability of water vapor radiometers to correct for radio signal delays caused by Earth's atmosphere. Calibration of delays incurred by radio signals traversing the atmosphere has applications to both deep space tracking and planetary radio science experiments. Currently, the Cassini gravity wave search requires 0.8-1.0% absorption coefficient accuracy. This study examined current atmospheric absorption models and estimated that current model accuracy ranges from 5% to 7%. The refractivity of water vapor is known to 1% accuracy, while the refractivities of many dry gases (oxygen, nitrogen, etc.) are known to better than 0.1%. Improvements to the current generation of models will require that both the functional form and absolute absorption of the water vapor spectrum be calibrated and validated. Several laboratory techniques for measuring atmospheric absorption and refractivity were investigated, including absorption cells, single and multimode rectangular cavity resonators, and Fabry-Perot resonators. Semi-confocal Fabry-Perot resonators were shown to provide the most cost-effective and accurate method of measuring atmospheric gas refractivity. The need for accurate environmental measurement and control was also addressed. A preliminary design for the environmental control and measurement system was developed to aid in identifying significant design issues. The analysis indicated that overall measurement accuracy will be limited by measurement errors and imprecise control of the gas sample's thermodynamic state, thermal expansion and vibration-induced deformation of the resonator structure, and electronic measurement error. The central problem is to identify systematic errors, because random errors can be reduced by averaging. Calibrating the resonator measurements by checking the refractivity of dry gases, which is known to better than 0.1%, provides a method of controlling the systematic errors to 0.1%. The primary source of error in absorptivity and refractivity measurements is thus the ability to measure the concentration of water vapor in the resonator path. Over the whole thermodynamic range of interest the accuracy of water vapor measurement is 1.5%. However, over the range responsible for most of the radio delay (i.e., conditions in the bottom two kilometers of the atmosphere) the accuracy of water vapor measurements ranges from 0.5% to 1.0%. Therefore, the precision of the resonator measurements could be held to 0.3%, and the overall absolute accuracy of resonator-based absorption and refractivity measurements will range from 0.6% to 1.0%.

  15. Role of turbulence fluctuations on uncertainties of acoustic Doppler current profiler discharge measurements

    USGS Publications Warehouse

    Tarrab, Leticia; Garcia, Carlos M.; Cantero, Mariano I.; Oberg, Kevin

    2012-01-01

    This work presents a systematic analysis quantifying the role of the presence of turbulence fluctuations on uncertainties (random errors) of acoustic Doppler current profiler (ADCP) discharge measurements from moving platforms. Data sets of three-dimensional flow velocities with high temporal and spatial resolution were generated from direct numerical simulation (DNS) of turbulent open channel flow. Dimensionless functions relating parameters quantifying the uncertainty in discharge measurements due to flow turbulence (relative variance and relative maximum random error) to sampling configuration were developed from the DNS simulations and then validated with field-scale discharge measurements. The validated functions were used to evaluate the role of the presence of flow turbulence fluctuations on uncertainties in ADCP discharge measurements. The results of this work indicate that random errors due to the flow turbulence are significant when: (a) a low number of transects is used for a discharge measurement, and (b) measurements are made in shallow rivers using high boat velocity (short time for the boat to cross a flow turbulence structure).

  16. An Improved Measurement Method for the Strength of Radiation of Reflective Beam in an Industrial Optical Sensor Based on Laser Displacement Meter.

    PubMed

    Bae, Youngchul

    2016-05-23

    An optical sensor such as a laser range finder (LRF) or laser displacement meter (LDM) uses a laser beam reflected and returned from a target. Such optical sensors are mainly used to measure the distance between a launch position and the target. However, optical-sensor-based LRFs and LDMs suffer from numerous and various errors, such as statistical errors, drift errors, cyclic errors, alignment errors and slope errors. Among these, the alignment error, which contains the measurement error for the strength of radiation of the laser beam returned from the target, is the most serious error in industrial optical sensors. It is caused by the dependence of the measurement offset upon the strength of radiation of the returned beam incident upon the focusing lens from the target. In this paper, in order to solve these problems, we propose a novel method for measuring the output direct current (DC) voltage that is proportional to the strength of radiation of the returned laser beam in the receiving avalanche photodiode (APD) circuit. We implemented a measuring circuit that is able to provide an exact measurement of the reflected laser beam. By using the proposed method, we can measure the intensity or strength of radiation of the laser beam in real time and with a high degree of precision.

  17. An Improved Measurement Method for the Strength of Radiation of Reflective Beam in an Industrial Optical Sensor Based on Laser Displacement Meter

    PubMed Central

    Bae, Youngchul

    2016-01-01

    An optical sensor such as a laser range finder (LRF) or laser displacement meter (LDM) uses a laser beam reflected and returned from a target. Such optical sensors are mainly used to measure the distance between a launch position and the target. However, optical-sensor-based LRFs and LDMs suffer from numerous and various errors, such as statistical errors, drift errors, cyclic errors, alignment errors and slope errors. Among these, the alignment error, which contains the measurement error for the strength of radiation of the laser beam returned from the target, is the most serious error in industrial optical sensors. It is caused by the dependence of the measurement offset upon the strength of radiation of the returned beam incident upon the focusing lens from the target. In this paper, in order to solve these problems, we propose a novel method for measuring the output direct current (DC) voltage that is proportional to the strength of radiation of the returned laser beam in the receiving avalanche photodiode (APD) circuit. We implemented a measuring circuit that is able to provide an exact measurement of the reflected laser beam. By using the proposed method, we can measure the intensity or strength of radiation of the laser beam in real time and with a high degree of precision. PMID:27223291

  18. Thermal Error Test and Intelligent Modeling Research on the Spindle of High Speed CNC Machine Tools

    NASA Astrophysics Data System (ADS)

    Luo, Zhonghui; Peng, Bin; Xiao, Qijun; Bai, Lu

    2018-03-01

    Thermal error is the main factor affecting the accuracy of precision machining. Reflecting the current research focus on machine tool thermal error, this paper studies, through experiments, thermal error testing and intelligent modeling for the spindle of vertical high speed CNC machine tools. Several testing devices for thermal error are designed: 7 temperature sensors are used to measure the temperature of the machine tool spindle system, and 2 displacement sensors are used to detect the thermal error displacement. A thermal error compensation model with good inversion prediction ability is established by applying principal component analysis, optimizing the temperature measuring points, extracting the characteristic values closely associated with the thermal error displacement, and using artificial neural network technology.
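
    A simplified sketch of that modeling chain (synthetic data, and a linear regression standing in for the artificial neural network) is given below: principal component analysis reduces the 7 correlated temperature readings to a few characteristic values, which are then regressed against the measured thermal displacement.

```python
import numpy as np

rng = np.random.default_rng(4)
n_samples, n_temps = 200, 7

# Synthetic spindle warm-up: 7 sensor temperatures driven by 2 underlying heat sources
sources = np.cumsum(rng.normal(0.05, 0.02, (n_samples, 2)), axis=0)     # degC rise
mixing = rng.uniform(0.3, 1.0, (2, n_temps))
temps = sources @ mixing + rng.normal(0, 0.05, (n_samples, n_temps))
displacement = (3.0 * sources[:, 0] + 1.2 * sources[:, 1]
                + rng.normal(0, 0.5, n_samples))                        # micrometres

# PCA on the centered temperature measurements; keep the leading two components
centered = temps - temps.mean(axis=0)
_, _, vt = np.linalg.svd(centered, full_matrices=False)
pc_scores = centered @ vt[:2].T                                         # characteristic values

# Regression of thermal displacement on the principal-component scores
design = np.column_stack([np.ones(n_samples), pc_scores])
coeffs, *_ = np.linalg.lstsq(design, displacement, rcond=None)
predicted = design @ coeffs
rms_residual = np.sqrt(np.mean((predicted - displacement) ** 2))
print(f"RMS residual of the thermal-error model: {rms_residual:.2f} micrometres")
```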

  19. Space charge enhanced plasma gradient effects on satellite electric field measurements

    NASA Technical Reports Server (NTRS)

    Diebold, Dan; Hershkowitz, Noah; Dekock, J.; Intrator, T.; Hsieh, M-K.

    1991-01-01

    It has been recognized that plasma gradients can cause error in magnetospheric electric field measurements made by double probes. Space charge enhanced Plasma Gradient Induced Error (PGIE) is discussed in general terms; the results of a laboratory experiment designed to demonstrate this error are presented, and a simple expression that quantifies the error is derived. Experimental conditions were not identical to magnetospheric conditions, although efforts were made to ensure that the relevant physics applied to both cases. The experimental data demonstrate some of the possible errors in electric field measurements made by strongly emitting probes due to space charge effects in the presence of plasma gradients. Probe errors in space and laboratory conditions are discussed, as well as experimental error. In the final section, theoretical aspects are examined and an expression is derived for the maximum steady-state space charge enhanced PGIE for two identical current-biased probes.

  20. Note: Eddy current displacement sensors independent of target conductivity.

    PubMed

    Wang, Hongbo; Li, Wei; Feng, Zhihua

    2015-01-01

    Eddy current sensors (ECSs) are widely used for non-contact displacement measurement. In this note, the quantitative error of an ECS caused by target conductivity was analyzed using a complex image method. The response curves (L-x) of the ECS with different targets were similar and could be made to overlap by shifting the curves in the x direction by √2δ/2. Both finite element analysis and experiments match the theoretical analysis well, which indicates that the measurement error of high-precision ECSs caused by target conductivity can be completely eliminated, and that ECSs can measure different materials precisely without calibration.
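
    A short worked example of the conductivity-dependent offset quoted above: the standard skin depth δ = √(2/(ωμσ)) is evaluated for a few target materials at an assumed excitation frequency, and the corresponding curve shift √2δ/2 is printed. The frequency and conductivity values are illustrative, not taken from the note.

```python
import math

MU0 = 4e-7 * math.pi          # H/m, non-magnetic targets assumed (mu_r = 1)
freq = 1.0e6                  # Hz, assumed excitation frequency
omega = 2 * math.pi * freq

conductivities = {            # S/m, room-temperature handbook-order values
    "copper":    5.8e7,
    "aluminium": 3.5e7,
    "brass":     1.5e7,
}

for material, sigma in conductivities.items():
    delta = math.sqrt(2.0 / (omega * MU0 * sigma))   # skin depth
    shift = math.sqrt(2.0) * delta / 2.0             # curve shift sqrt(2)*delta/2
    print(f"{material:9s}: skin depth = {delta*1e6:6.1f} um, "
          f"curve shift = {shift*1e6:6.1f} um")
```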

  1. Current measurement by Faraday effect on GEPOPU

    NASA Astrophysics Data System (ADS)

    Correa, N.; Chuaqui, H.; Wyndham, E.; Veloso, F.; Valenzuela, J.; Favre, M.; Bhuyan, H.

    2014-05-01

    The design and calibration of an optical current sensor using BK7 glass is presented. The current sensor is based on polarization rotation by the Faraday effect. GEPOPU is a pulsed power generator (double transit time 120 ns, 1.5 Ohm impedance, coaxial geometry) on which Z-pinch experiments are performed. The measurements were performed at the Optics and Plasma Physics Laboratory of Pontificia Universidad Catolica de Chile. The Verdet constant for two different optical materials was obtained using a He-Ne laser. The values obtained are within the experimental error bars of measurements published in the literature (less than 15% difference). Two different sensor geometries were tried, and we present the preliminary results for one of them. The values obtained for the current agree within the measurement error with those obtained by means of a Spice simulation of the generator. The signal traces obtained are completely noise free.
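
    For orientation, the sketch below shows the basic Faraday-effect relation such a sensor relies on: a rotation θ = V·B·L accumulated over a glass element of length L, inverted for current under the assumption that the element sits at radius r from a long straight conductor. The Verdet constant used is a typical literature value for BK7 at 633 nm and the geometry is invented, so this is not the GEPOPU sensor itself.

    ```python
    # Minimal Faraday-rotation current sensing sketch: theta = V * B * L, with
    # B = mu0*I/(2*pi*r) assumed at the glass rod location. All numbers illustrative.
    import math

    MU0 = 4e-7 * math.pi

    def current_from_rotation(theta_rad, verdet_rad_per_T_m, rod_length_m, radius_m):
        B = theta_rad / (verdet_rad_per_T_m * rod_length_m)   # field at the rod
        return 2.0 * math.pi * radius_m * B / MU0              # invert B = mu0 I / (2 pi r)

    # Example: 0.1 rad rotation, V ~ 4 rad/(T*m) for BK7 at 633 nm, 30 mm rod, 25 mm radius
    print(current_from_rotation(0.1, 4.0, 0.030, 0.025))
    ```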

  2. Estimating Uncertainty in Annual Forest Inventory Estimates

    Treesearch

    Ronald E. McRoberts; Veronica C. Lessard

    1999-01-01

    The precision of annual forest inventory estimates may be negatively affected by uncertainty from a variety of sources including: (1) sampling error; (2) procedures for updating plots not measured in the current year; and (3) measurement errors. The impact of these sources of uncertainty on final inventory estimates is investigated using Monte Carlo simulation...

  3. Volume error analysis for lung nodules attached to pulmonary vessels in an anthropomorphic thoracic phantom

    NASA Astrophysics Data System (ADS)

    Kinnard, Lisa M.; Gavrielides, Marios A.; Myers, Kyle J.; Zeng, Rongping; Peregoy, Jennifer; Pritchard, William; Karanian, John W.; Petrick, Nicholas

    2008-03-01

    With high-resolution CT, three-dimensional (3D) methods for nodule volumetry have been introduced, with the hope that such methods will be more accurate and consistent than the currently used planar measures of size. However, the error associated with volume estimation methods still needs to be quantified. Volume estimation error is multi-faceted in the sense that it is impacted by characteristics of the patient, the software tool and the CT system. The overall goal of this research is to quantify the various sources of measurement error and, when possible, minimize their effects. In the current study, we estimated nodule volume from ten repeat scans of an anthropomorphic phantom containing two synthetic spherical lung nodules (diameters: 5 and 10 mm; density: -630 HU), using a 16-slice Philips CT with 20, 50, 100 and 200 mAs exposures and 0.8 and 3.0 mm slice thicknesses. True volume was estimated from an average of diameter measurements made using digital calipers. We report variance and bias results for volume measurements as a function of slice thickness, nodule diameter, and X-ray exposure.
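
    The bias computation described above reduces to comparing each software volume estimate against a reference volume computed from the caliper diameters; a minimal helper is sketched below with invented example numbers.

    ```python
    # Reference volume from caliper diameter and percent error of a CT volume estimate.
    import math

    def sphere_volume_mm3(diameter_mm):
        return math.pi / 6.0 * diameter_mm ** 3

    def percent_volume_error(measured_mm3, diameter_mm):
        ref = sphere_volume_mm3(diameter_mm)
        return 100.0 * (measured_mm3 - ref) / ref

    # Example: a 10 mm nodule measured at 560 mm^3 (values illustrative)
    print(percent_volume_error(560.0, 10.0))
    ```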

  4. Design, performance, and calculated error of a Faraday cup for absolute beam current measurements of 600-MeV protons

    NASA Technical Reports Server (NTRS)

    Beck, S. M.

    1975-01-01

    A mobile self-contained Faraday cup system for beam current measurements of nominal 600 MeV protons was designed, constructed, and used at the NASA Space Radiation Effects Laboratory. The cup is of reentrant design with a length of 106.7 cm and an outside diameter of 20.32 cm. The inner diameter is 15.24 cm and the base thickness is 30.48 cm. The primary absorber is commercially available lead hermetically sealed in a 0.32-cm-thick copper jacket. Several possible systematic errors in using the cup are evaluated. The largest source of error arises from high-energy electrons which are ejected from the entrance window and enter the cup. A total systematic error of -0.83 percent, a decrease from the true current value, is calculated. From data obtained in calibrating helium-filled ion chambers with the Faraday cup, the mean energy required to produce one ion pair in helium is found to be 30.76 ± 0.95 eV for nominal 600 MeV protons. This value agrees well, within experimental error, with reported values of 29.9 eV and 30.2 eV.

  5. Empirical Synthesis of the Effect of Standard Error of Measurement on Decisions Made within Brief Experimental Analyses of Reading Fluency

    ERIC Educational Resources Information Center

    Burns, Matthew K.; Taylor, Crystal N.; Warmbold-Brann, Kristy L.; Preast, June L.; Hosp, John L.; Ford, Jeremy W.

    2017-01-01

    Intervention researchers often use curriculum-based measurement of reading fluency (CBM-R) with a brief experimental analysis (BEA) to identify an effective intervention for individual students. The current study synthesized data from 22 studies that used CBM-R data within a BEA by computing the standard error of measurement (SEM) for the median data…
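
    For readers unfamiliar with the statistic, a minimal illustration of the standard error of measurement and the overlap test it supports is sketched below; the SD and reliability values are placeholders, not figures from the synthesized studies.

    ```python
    # SEM = SD * sqrt(1 - r), and a simple check of whether two scores' +/- z*SEM
    # confidence bands overlap. All numbers are illustrative.
    import math

    def sem(sd, reliability):
        return sd * math.sqrt(1.0 - reliability)

    def bands_overlap(score_a, score_b, sd, reliability, z=1.0):
        """True if the +/- z*SEM bands around two scores overlap."""
        half_width = z * sem(sd, reliability)
        return abs(score_a - score_b) <= 2.0 * half_width

    print(sem(35.0, 0.90))                        # e.g. SD = 35 wcpm, r = .90 -> SEM ~ 11
    print(bands_overlap(82.0, 95.0, 35.0, 0.90))
    ```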

  6. Potential benefit of electronic pharmacy claims data to prevent medication history errors and resultant inpatient order errors

    PubMed Central

    Palmer, Katherine A; Shane, Rita; Wu, Cindy N; Bell, Douglas S; Diaz, Frank; Cook-Wiens, Galen; Jackevicius, Cynthia A

    2016-01-01

    Objective: We sought to assess the potential of a widely available source of electronic medication data to prevent medication history errors and resultant inpatient order errors. Methods: We used admission medication history (AMH) data from a recent clinical trial that identified 1017 AMH errors and 419 resultant inpatient order errors among 194 hospital admissions of predominantly older adult patients on complex medication regimens. Among the subset of patients for whom we could access current Surescripts electronic pharmacy claims data (SEPCD), two pharmacists independently assessed error severity and our main outcome, which was whether SEPCD (1) was unrelated to the medication error; (2) probably would not have prevented the error; (3) might have prevented the error; or (4) probably would have prevented the error. Results: Seventy patients had both AMH errors and current, accessible SEPCD. SEPCD probably would have prevented 110 (35%) of 315 AMH errors and 46 (31%) of 147 resultant inpatient order errors. When we excluded the least severe medication errors, SEPCD probably would have prevented 99 (47%) of 209 AMH errors and 37 (61%) of 61 resultant inpatient order errors. SEPCD probably would have prevented at least one AMH error in 42 (60%) of 70 patients. Conclusion: When current SEPCD was available for older adult patients on complex medication regimens, it had substantial potential to prevent AMH errors and resultant inpatient order errors, with greater potential to prevent more severe errors. Further study is needed to measure the benefit of SEPCD in actual use at hospital admission. PMID:26911817

  7. Estimation of geopotential differences over intercontinental locations using satellite and terrestrial measurements. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Pavlis, Nikolaos K.

    1991-01-01

    An error analysis study was conducted in order to assess the current accuracies and the future anticipated improvements in the estimation of geopotential differences over intercontinental locations. An observation/estimation scheme was proposed and studied, whereby gravity disturbance measurements on the Earth's surface, in caps surrounding the estimation points, are combined with corresponding data in caps directly over these points at the altitude of a low orbiting satellite, for the estimation of the geopotential difference between the terrestrial stations. The mathematical modeling required to relate the primary observables to the parameters to be estimated, was studied for the terrestrial data and the data at altitude. Emphasis was placed on the examination of systematic effects and on the corresponding reductions that need to be applied to the measurements to avoid systematic errors. The error estimation for the geopotential differences was performed using both truncation theory and least squares collocation with ring averages, in case observations on the Earth's surface only are used. The error analysis indicated that with the currently available global geopotential model OSU89B and with gravity disturbance data in 2 deg caps surrounding the estimation points, the error of the geopotential difference arising from errors in the reference model and the cap data is about 23 kgal cm, for 30 deg station separation.

  8. Errors in radial velocity variance from Doppler wind lidar

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, H.; Barthelmie, R. J.; Doubrawa, P.

    A high-fidelity lidar turbulence measurement technique relies on accurate estimates of radial velocity variance that are subject to both systematic and random errors determined by the autocorrelation function of radial velocity, the sampling rate, and the sampling duration. Our paper quantifies the effect of the volumetric averaging in lidar radial velocity measurements on the autocorrelation function and the dependence of the systematic and random errors on the sampling duration, using both statistically simulated and observed data. For current-generation scanning lidars and sampling durations of about 30 min and longer, during which the stationarity assumption is valid for atmospheric flows, the systematic error is negligible but the random error exceeds about 10%.

  9. Errors in radial velocity variance from Doppler wind lidar

    DOE PAGES

    Wang, H.; Barthelmie, R. J.; Doubrawa, P.; ...

    2016-08-29

    A high-fidelity lidar turbulence measurement technique relies on accurate estimates of radial velocity variance that are subject to both systematic and random errors determined by the autocorrelation function of radial velocity, the sampling rate, and the sampling duration. Our paper quantifies the effect of the volumetric averaging in lidar radial velocity measurements on the autocorrelation function and the dependence of the systematic and random errors on the sampling duration, using both statistically simulated and observed data. For current-generation scanning lidars and sampling durations of about 30 min and longer, during which the stationarity assumption is valid for atmospheric flows, the systematic error is negligible but the random error exceeds about 10%.
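
    A hedged numerical illustration of why the random error remains large even for stationary 30 min records: a common first-order estimate for the relative random error of a variance estimate is √(2τ/T), with τ the integral time scale of the radial velocity and T the sampling duration. The time scale used below is an assumption, not a value from the paper.

    ```python
    # First-order relative random error of a variance estimate: sqrt(2 * tau / T).
    import math

    def variance_random_error(integral_timescale_s, duration_s):
        return math.sqrt(2.0 * integral_timescale_s / duration_s)

    for T_min in (10, 30, 60):
        err = variance_random_error(10.0, T_min * 60.0)   # tau = 10 s assumed
        print(f"T = {T_min:3d} min -> relative random error ~ {err:.1%}")
    ```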

  10. On the Reliability of Photovoltaic Short-Circuit Current Temperature Coefficient Measurements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Osterwald, Carl R.; Campanelli, Mark; Kelly, George J.

    2015-06-14

    The changes in short-circuit current of photovoltaic (PV) cells and modules with temperature are routinely modeled through a single parameter, the temperature coefficient (TC). This parameter is vital for the translation equations used in system sizing, yet in practice is very difficult to measure. In this paper, we discuss these inherent problems and demonstrate how they can introduce unacceptably large errors in PV ratings. A method for quantifying the spectral dependence of TCs is derived, and then used to demonstrate that databases of module parameters commonly contain values that are physically unreasonable. Possible ways to reduce measurement errors are also discussed.
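
    As context for why the coefficient is hard to measure well, the sketch below shows the usual extraction: a linear fit of Isc against module temperature, normalized to the 25 °C value. The data are synthetic, not measurements from the paper.

    ```python
    # Extract a short-circuit current temperature coefficient in %/degC from a linear fit.
    import numpy as np

    def isc_temperature_coefficient(temps_c, isc_a, t_ref=25.0):
        slope, intercept = np.polyfit(temps_c, isc_a, 1)    # dIsc/dT and offset
        isc_ref = slope * t_ref + intercept
        return 100.0 * slope / isc_ref                       # %/degC

    temps = np.array([15.0, 25.0, 35.0, 45.0, 55.0])
    isc = 9.00 * (1.0 + 0.0005 * (temps - 25.0))            # synthetic 0.05 %/degC module
    print(isc_temperature_coefficient(temps, isc))
    ```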

  11. Simultaneous measurement of temperature and strain using four connecting wires

    NASA Technical Reports Server (NTRS)

    Parker, Allen R., Jr.

    1993-01-01

    This paper describes a new signal-conditioning technique for measuring strain and temperature which uses fewer connecting wires than conventional techniques. Simultaneous measurement of temperature and strain has been achieved by using thermocouple wire to connect strain gages to signal conditioning. This signal conditioning uses a new method for demultiplexing sampled analog signals and the Anderson current loop circuit. Theory is presented along with data to confirm that strain gage resistance change is sensed without appreciable error because of thermoelectric effects. Furthermore, temperature is sensed without appreciable error because of voltage drops caused by strain gage excitation current flowing through the gage resistance.
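
    A toy illustration of the Anderson current-loop idea referenced above, assuming ideal elements: with a single excitation current flowing through the gage and a reference resistor in series, the difference of the two sensed voltages equals I·(Rg − Rref), so common lead-wire drops cancel. The values are illustrative and the circuit is heavily simplified relative to the actual signal conditioning.

    ```python
    # Toy Anderson current-loop model: the subtraction removes the common lead drop.
    I_EXC = 0.001          # loop excitation current (A)
    R_REF = 120.0          # reference resistor (ohm)
    R_LEAD = 2.7           # lead-wire resistance (ohm); note it cancels below

    def delta_r_from_loop(v_gage, v_ref, i_exc=I_EXC):
        return (v_gage - v_ref) / i_exc            # = Rg - Rref, lead drops cancel

    r_gage = 120.3                                  # strained gage resistance (ohm)
    v_gage = I_EXC * (r_gage + R_LEAD)              # sensed voltage including a lead drop
    v_ref = I_EXC * (R_REF + R_LEAD)                # reference sensed through the same lead
    print(delta_r_from_loop(v_gage, v_ref))         # -> 0.3 ohm, independent of R_LEAD
    ```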

  12. A New Design of the Test Rig to Measure the Transmission Error of Automobile Gearbox

    NASA Astrophysics Data System (ADS)

    Hou, Yixuan; Zhou, Xiaoqin; He, Xiuzhi; Liu, Zufei; Liu, Qiang

    2017-12-01

    Noise and vibration affect the performance of automobile gearboxes, and transmission error has been regarded as an important excitation source in gear systems. Most current research focuses on the measurement and analysis of a single gear drive, and few investigations of transmission error measurement in a complete gearbox have been conducted. In order to measure transmission error in a complete automobile gearbox, an electrically closed test rig is developed. Based on the principle of modular design, the test rig can be used to test different types of gearbox by adding the necessary modules. A test rig for a front-engine, rear-wheel-drive gearbox is constructed, and static and modal analyses are used to verify the performance of a key component.
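
    For reference, one common definition of the quantity such a rig measures is sketched below: transmission error as the deviation of the output shaft angle from its kinematically ideal value given the input angle and gear ratio. The ratio and synthetic mesh error are assumptions for illustration only.

    ```python
    # Transmission error: TE = theta_out - theta_in / ratio, here in microradians.
    import numpy as np

    def transmission_error_urad(theta_in_rad, theta_out_rad, ratio):
        return 1e6 * (theta_out_rad - theta_in_rad / ratio)

    theta_in = np.linspace(0.0, 4.0 * np.pi, 1000)                   # input encoder (rad)
    ratio = 3.2
    theta_out = theta_in / ratio + 50e-6 * np.sin(17 * theta_in)     # synthetic mesh error
    print(transmission_error_urad(theta_in[:5], theta_out[:5], ratio))
    ```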

  13. Practical scheme for error control using feedback

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sarovar, Mohan; Milburn, Gerard J.; Ahn, Charlene

    2004-05-01

    We describe a scheme for quantum-error correction that employs feedback and weak measurement rather than the standard tools of projective measurement and fast controlled unitary gates. The advantage of this scheme over previous protocols [for example, Ahn et al. Phys. Rev. A 65, 042301 (2001)], is that it requires little side processing while remaining robust to measurement inefficiency, and is therefore considerably more practical. We evaluate the performance of our scheme by simulating the correction of bit flips. We also consider implementation in a solid-state quantum-computation architecture and estimate the maximal error rate that could be corrected with current technology.

  14. Estimation of Separation Buffers for Wind-Prediction Error in an Airborne Separation Assistance System

    NASA Technical Reports Server (NTRS)

    Consiglio, Maria C.; Hoadley, Sherwood T.; Allen, B. Danette

    2009-01-01

    Wind prediction errors are known to affect the performance of automated air traffic management tools that rely on aircraft trajectory predictions. In particular, automated separation assurance tools, planned as part of the NextGen concept of operations, must be designed to account and compensate for the impact of wind prediction errors and other system uncertainties. In this paper we describe a high fidelity batch simulation study designed to estimate the separation distance required to compensate for the effects of wind-prediction errors at increasing traffic densities on an airborne separation assistance system. These experimental runs are part of the Safety Performance of Airborne Separation experiment suite that examines the safety implications of prediction errors and system uncertainties on airborne separation assurance systems. In this experiment, wind-prediction errors were varied between zero and forty knots while traffic density was increased to several times current traffic levels. In order to accurately measure the full unmitigated impact of wind-prediction errors, no uncertainty buffers were added to the separation minima. The goal of the study was to measure the impact of wind-prediction errors in order to estimate the additional separation buffers necessary to preserve separation and to provide a baseline for future analyses. Buffer estimations from this study will be used and verified in upcoming safety evaluation experiments under similar simulation conditions. Results suggest that the strategic airborne separation functions exercised in this experiment can sustain wind prediction errors up to 40 kts at current-day air traffic density with no additional separation distance buffer, and at eight times the current-day density with no more than a 60% increase in separation distance buffer.

  15. Measurements of the toroidal torque balance of error field penetration locked modes

    DOE PAGES

    Shiraki, Daisuke; Paz-Soldan, Carlos; Hanson, Jeremy M.; ...

    2015-01-05

    Here, detailed measurements from the DIII-D tokamak of the toroidal dynamics of error field penetration locked modes under the influence of slowly evolving external fields enable study of the toroidal torques on the mode, including interaction with the intrinsic error field. The error field in these low density Ohmic discharges is well known based on the mode penetration threshold, allowing resonant and non-resonant torque effects to be distinguished. These m/n = 2/1 locked modes are found to be well described by a toroidal torque balance between the resonant interaction with n = 1 error fields, and a viscous torque in the electron diamagnetic drift direction which is observed to scale as the square of the perturbed field due to the island. Fitting to this empirical torque balance allows a time-resolved measurement of the intrinsic error field of the device, providing evidence for a time-dependent error field in DIII-D due to ramping of the Ohmic coil current.

  16. The currency and tempo of extinction.

    PubMed

    Regan, H M; Lupia, R; Drinnan, A N; Burgman, M A

    2001-01-01

    This study examines estimates of extinction rates for the current purported biotic crisis and from the fossil record. Studies that compare current and geological extinctions sometimes use metrics that confound different sources of error and reflect different features of extinction processes. The per taxon extinction rate is a standard measure in paleontology that avoids some of the pitfalls of alternative approaches. Extinction rates reported in the conservation literature are rarely accompanied by measures of uncertainty, despite many elements of the calculations being subject to considerable error. We quantify some of the most important sources of uncertainty and carry them through the arithmetic of extinction rate calculations using fuzzy numbers. The results emphasize that estimates of current and future rates rely heavily on assumptions about the tempo of extinction and on extrapolations among taxa. Available data are unlikely to be useful in measuring magnitudes or trends in current extinction rates.

  17. The Impact of Measurement Error on the Accuracy of Individual and Aggregate SGP

    ERIC Educational Resources Information Center

    McCaffrey, Daniel F.; Castellano, Katherine E.; Lockwood, J. R.

    2015-01-01

    Student growth percentiles (SGPs) express students' current observed scores as percentile ranks in the distribution of scores among students with the same prior-year scores. A common concern about SGPs at the student level, and mean or median SGPs (MGPs) at the aggregate level, is potential bias due to test measurement error (ME). Shang,…

  18. A Limited In-Flight Evaluation of the Constant Current Loop Strain Measurement Method

    NASA Technical Reports Server (NTRS)

    Olney, Candida D.; Collura, Joseph V.

    1997-01-01

    For many years, the Wheatstone bridge has been used successfully to measure electrical resistance and changes in that resistance. However, the inherent problem of varying lead wire resistance can cause errors when the Wheatstone bridge is used to measure strain in a flight environment. The constant current loop signal-conditioning card was developed to overcome that difficulty. This paper describes a limited evaluation of the constant current loop strain measurement method as used in the F-16XL ship 2 Supersonic Laminar Flow Control flight project. Several identical strain gages were installed in close proximity on a shock fence which was mounted under the left wing of the F-16XL ship 2. Two strain gage bridges were configured using the constant current loop, and two were configured using the Wheatstone bridge circuitry. Flight data comparing the output from the constant current loop configured gages to that of the Wheatstone bridges with respect to signal output, error, and noise are given. Results indicate that the constant current loop strain measurement method enables an increased output, unaffected by lead wire resistance variations, to be obtained from strain gages.

  19. An in-situ measuring method for planar straightness error

    NASA Astrophysics Data System (ADS)

    Chen, Xi; Fu, Luhua; Yang, Tongyu; Sun, Changku; Wang, Zhong; Zhao, Yan; Liu, Changjie

    2018-01-01

    To address current problems in measuring the planar shape error of workpieces, an in-situ measuring method based on laser triangulation is presented in this paper. The method avoids the inefficiency of traditional methods such as the knife straightedge, as well as the time and cost requirements of a coordinate measuring machine (CMM). A laser-based measuring head is designed and installed on the spindle of a numerical control (NC) machine. The measuring head moves along a planned path to sample the measuring points. The spatial coordinates of the measuring points are obtained by combining the laser triangulation displacement sensor with the coordinate system of the NC machine, which makes in-situ measurement achievable. The planar straightness error is evaluated using particle swarm optimization (PSO). To verify the feasibility and accuracy of the measuring method, simulation experiments were carried out with a CMM. Comparison of the measuring head results with the corresponding values obtained by the coordinate measuring machine verifies that the method can realize high-precision, automatic measurement of the planar straightness error of the workpiece.
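
    The paper evaluates the straightness error with particle swarm optimization (a minimum-zone criterion); as a simpler stand-in that illustrates the same data flow from sampled points to an error value, the sketch below uses the peak-to-valley of residuals about a least-squares line, which upper-bounds the minimum-zone result. The sample profile is synthetic.

    ```python
    # Least-squares straightness: peak-to-valley of residuals about a fitted line.
    import numpy as np

    def straightness_ls(x_mm, z_mm):
        slope, intercept = np.polyfit(x_mm, z_mm, 1)
        residuals = z_mm - (slope * x_mm + intercept)
        return residuals.max() - residuals.min()      # peak-to-valley straightness (mm)

    x = np.linspace(0.0, 100.0, 50)
    z = 0.002 * np.sin(x / 15.0) + np.random.default_rng(1).normal(0.0, 2e-4, x.size)
    print(f"straightness error ~ {straightness_ls(x, z) * 1000:.2f} um")
    ```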

  20. Determination of head conductivity frequency response in vivo with optimized EIT-EEG.

    PubMed

    Dabek, Juhani; Kalogianni, Konstantina; Rotgans, Edwin; van der Helm, Frans C T; Kwakkel, Gert; van Wegen, Erwin E H; Daffertshofer, Andreas; de Munck, Jan C

    2016-02-15

    Electroencephalography (EEG) benefits from accurate head models. Dipole source modelling errors can be reduced from over 1cm to a few millimetres by replacing generic head geometry and conductivity with tailored ones. When adequate head geometry is available, electrical impedance tomography (EIT) can be used to infer the conductivities of head tissues. In this study, the boundary element method (BEM) is applied with three-compartment (scalp, skull and brain) subject-specific head models. The optimal injection of small currents to the head with a modular EIT current injector, and voltage measurement by an EEG amplifier is first sought by simulations. The measurement with a 64-electrode EEG layout is studied with respect to three noise sources affecting EIT: background EEG, deviations from the fitting assumption of equal scalp and brain conductivities, and smooth model geometry deviations from the true head geometry. The noise source effects were investigated depending on the positioning of the injection and extraction electrode and the number of their combinations used sequentially. The deviation from equal scalp and brain conductivities produces rather deterministic errors in the three conductivities irrespective of the current injection locations. With a realistic measurement of around 2 min and around 8 distant distinct current injection pairs, the error from the other noise sources is reduced to around 10% or less in the skull conductivity. The analysis of subsequent real measurements, however, suggests that there could be subject-specific local thinnings in the skull, which could amplify the conductivity fitting errors. With proper analysis of multiplexed sinusoidal EIT current injections, the measurements on average yielded conductivities of 340 mS/m (scalp and brain) and 6.6 mS/m (skull) at 2 Hz. From 11 to 127 Hz, the conductivities increased by 1.6% (scalp and brain) and 6.7% (skull) on the average. The proper analysis was ensured by using recombination of the current injections into virtual ones, avoiding problems in location-specific skull morphology variations. The observed large intersubject variations support the need for in vivo measurement of skull conductivity, resulting in calibrated subject-specific head models. Copyright © 2015 Elsevier Inc. All rights reserved.

  1. Wire-positioning algorithm for coreless Hall array sensors in current measurement

    NASA Astrophysics Data System (ADS)

    Chen, Wenli; Zhang, Huaiqing; Chen, Lin; Gu, Shanyun

    2018-05-01

    This paper presents a scheme of circular-arrayed, coreless Hall-effect current transformers. It can satisfy the demands of wide-dynamic-range and wide-bandwidth current measurement in the distribution system, as well as simultaneous AC and DC measurement. In order to improve the signal-to-noise ratio (SNR) of the sensor, a wire-positioning algorithm is proposed, which improves the measurement accuracy through post-processing of the measurement data. The simulation results demonstrate that the maximum errors are 70%, 6.1% and 0.95% for Ampère’s circuital method, the approximate positioning algorithm and the precise positioning algorithm, respectively. The accuracy of the positioning algorithm is thus significantly improved compared with that of Ampère’s circuital method. The maximum error of the positioning algorithm is also smaller in the experiment.
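
    For comparison, the baseline Ampère's circuital reconstruction that the positioning algorithm improves on can be sketched as the discrete line integral of the tangential Hall readings around the sensor circle; geometry and field values below are illustrative, and the off-centre correction itself is not reproduced here.

    ```python
    # Ampere's circuital approximation: sum of tangential readings times arc length per sensor.
    import math

    MU0 = 4e-7 * math.pi

    def current_from_hall_ring(b_tangential_T, ring_radius_m):
        n = len(b_tangential_T)
        arc = 2.0 * math.pi * ring_radius_m / n
        return sum(b_tangential_T) * arc / MU0

    # Centred 100 A wire, ring radius 30 mm, 8 ideal sensors (illustrative)
    B = [MU0 * 100.0 / (2.0 * math.pi * 0.030)] * 8
    print(current_from_hall_ring(B, 0.030))          # ~100 A
    ```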

  2. Theoretical and experimental errors for in situ measurements of plant water potential.

    PubMed

    Shackel, K A

    1984-07-01

    Errors in psychrometrically determined values of leaf water potential caused by tissue resistance to water vapor exchange and by lack of thermal equilibrium were evaluated using commercial in situ psychrometers (Wescor Inc., Logan, UT) on leaves of Tradescantia virginiana (L.). Theoretical errors in the dewpoint method of operation for these sensors were demonstrated. After correction for these errors, in situ measurements of leaf water potential indicated substantial errors caused by tissue resistance to water vapor exchange (4 to 6% reduction in apparent water potential per second of cooling time used) resulting from humidity depletions in the psychrometer chamber during the Peltier condensation process. These errors were avoided by use of a modified procedure for dewpoint measurement. Large changes in apparent water potential were caused by leaf and psychrometer exposure to moderate levels of irradiance. These changes were correlated with relatively small shifts in psychrometer zero offsets (-0.6 to -1.0 megapascals per microvolt), indicating substantial errors caused by nonisothermal conditions between the leaf and the psychrometer. Explicit correction for these errors is not possible with the current psychrometer design.

  3. Theoretical and Experimental Errors for In Situ Measurements of Plant Water Potential 1

    PubMed Central

    Shackel, Kenneth A.

    1984-01-01

    Errors in psychrometrically determined values of leaf water potential caused by tissue resistance to water vapor exchange and by lack of thermal equilibrium were evaluated using commercial in situ psychrometers (Wescor Inc., Logan, UT) on leaves of Tradescantia virginiana (L.). Theoretical errors in the dewpoint method of operation for these sensors were demonstrated. After correction for these errors, in situ measurements of leaf water potential indicated substantial errors caused by tissue resistance to water vapor exchange (4 to 6% reduction in apparent water potential per second of cooling time used) resulting from humidity depletions in the psychrometer chamber during the Peltier condensation process. These errors were avoided by use of a modified procedure for dewpoint measurement. Large changes in apparent water potential were caused by leaf and psychrometer exposure to moderate levels of irradiance. These changes were correlated with relatively small shifts in psychrometer zero offsets (−0.6 to −1.0 megapascals per microvolt), indicating substantial errors caused by nonisothermal conditions between the leaf and the psychrometer. Explicit correction for these errors is not possible with the current psychrometer design. PMID:16663701

  4. Improving Localization Accuracy: Successive Measurements Error Modeling

    PubMed Central

    Abu Ali, Najah; Abu-Elkheir, Mervat

    2015-01-01

    Vehicle self-localization is an essential requirement for many of the safety applications envisioned for vehicular networks. The mathematical models used in current vehicular localization schemes focus on modeling the localization error itself, and overlook the potential correlation between successive localization measurement errors. In this paper, we first investigate the existence of correlation between successive positioning measurements, and then incorporate this correlation into the modeling of positioning error. We use the Yule-Walker equations to determine the degree of correlation between a vehicle’s future position and its past positions, and then propose a p-order Gauss–Markov model to predict the future position of a vehicle from its past p positions. We investigate the existence of correlation for two datasets representing the mobility traces of two vehicles over a period of time. We prove the existence of correlation between successive measurements in the two datasets, and show that the time correlation between measurements can have a value up to four minutes. Through simulations, we validate the robustness of our model and show that it is possible to use the first-order Gauss–Markov model, which has the least complexity, and still maintain an accurate estimation of a vehicle’s future location over time using only its current position. Our model can assist in providing better modeling of positioning errors and can be used as a prediction tool to improve the performance of classical localization algorithms such as the Kalman filter. PMID:26140345
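
    A hedged sketch of the p-order Gauss–Markov predictor described above: estimate the autocovariance of the position-error series, solve the Yule-Walker equations for the AR coefficients, and predict the next sample from the last p samples. The error trace below is synthetic, not one of the paper's datasets.

    ```python
    # Yule-Walker fit of an AR(p) ("p-order Gauss-Markov") model and one-step prediction.
    import numpy as np

    def yule_walker(x, p):
        x = np.asarray(x, dtype=float) - np.mean(x)
        r = np.array([np.dot(x[:len(x) - k], x[k:]) / len(x) for k in range(p + 1)])
        R = np.array([[r[abs(i - j)] for j in range(p)] for i in range(p)])   # Toeplitz block
        return np.linalg.solve(R, r[1:p + 1])           # AR coefficients a_1 .. a_p

    def predict_next(x, coeffs):
        p = len(coeffs)
        return float(np.dot(coeffs, x[-1:-p - 1:-1]))   # a_1*x[t-1] + ... + a_p*x[t-p]

    rng = np.random.default_rng(2)
    e = np.zeros(500)
    for t in range(2, 500):                              # synthetic correlated error trace
        e[t] = 0.7 * e[t - 1] + 0.2 * e[t - 2] + rng.normal(0.0, 0.1)

    a = yule_walker(e, 2)
    e0 = e - e.mean()                                    # predict on the demeaned series
    print("AR coefficients:", np.round(a, 3))
    print("next-step prediction:", predict_next(e0, a) + e.mean())
    ```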

  5. Which Measures of Online Control Are Least Sensitive to Offline Processes?

    PubMed

    de Grosbois, John; Tremblay, Luc

    2018-02-28

    A major challenge to the measurement of online control is the contamination by offline, planning-based processes. The current study examined the sensitivity of four measures of online control to offline changes in reaching performance induced by prism adaptation and terminal feedback. These measures included the squared Z scores (Z²) of correlations of limb position at 75% movement time versus movement end, variable error, time after peak velocity, and a frequency-domain analysis (pPower). The results indicated that variable error and time after peak velocity were sensitive to the prism adaptation. Furthermore, only the Z² values were biased by the terminal feedback. Ultimately, the current study has demonstrated the sensitivity of limb kinematic measures to offline control processes and that pPower analyses may yield the most suitable measure of online control.

  6. Multivariate Statistics Applied to Seismic Phase Picking

    NASA Astrophysics Data System (ADS)

    Velasco, A. A.; Zeiler, C. P.; Anderson, D.; Pingitore, N. E.

    2008-12-01

    The initial effort of the Seismogram Picking Error from Analyst Review (SPEAR) project has been to establish a common set of seismograms to be picked by the seismological community. Currently we have 13 analysts from 4 institutions that have provided picks on the set of 26 seismograms. In comparing the picks thus far, we have identified consistent biases between picks from different institutions; effects of the experience of analysts; and the impact of signal-to-noise on picks. The institutional bias in picks brings up the important concern that picks will not be the same between different catalogs. This difference means less precision and accuracy when combining picks from multiple institutions. We also note that, depending on the experience level of the analyst making picks for a catalog, the error could fluctuate dramatically. However, the experience level is based on the number of years spent picking seismograms, and this may not be an appropriate criterion for determining an analyst's precision. The common data set of seismograms provides a means to test an analyst's level of precision and biases. The analyst is also limited by the quality of the signal, and we show that the signal-to-noise ratio and pick error are correlated to the location, size and distance of the event. This makes the standard estimate of picking error based on SNR more complex because additional constraints are needed to accurately constrain the measurement error. We propose to extend the current measurement of error by adding the additional constraints of institutional bias and event characteristics to the standard SNR measurement. We use multivariate statistics to model the data and provide constraints to accurately assess earthquake location and measurement errors.

  7. Compact disk error measurements

    NASA Technical Reports Server (NTRS)

    Howe, D.; Harriman, K.; Tehranchi, B.

    1993-01-01

    The objectives of this project are as follows: provide hardware and software that will perform simple, real-time, high resolution (single-byte) measurement of the error burst and good data gap statistics seen by a photoCD player read channel when recorded CD write-once discs of variable quality (i.e., condition) are being read; extend the above system to enable measurement of the hard decision (i.e., 1-bit error flags) and soft decision (i.e., 2-bit error flags) decoding information that is produced/used by the Cross Interleaved - Reed - Solomon - Code (CIRC) block decoder employed in the photoCD player read channel; construct a model that uses data obtained via the systems described above to produce meaningful estimates of output error rates (due to both uncorrected ECC words and misdecoded ECC words) when a CD disc having specific (measured) error statistics is read (completion date to be determined); and check the hypothesis that current adaptive CIRC block decoders are optimized for pressed (DAD/ROM) CD discs. If warranted, do a conceptual design of an adaptive CIRC decoder that is optimized for write-once CD discs.

  8. Geometric Quality Assessment of LIDAR Data Based on Swath Overlap

    NASA Astrophysics Data System (ADS)

    Sampath, A.; Heidemann, H. K.; Stensaas, G. L.

    2016-06-01

    This paper provides guidelines on quantifying the relative horizontal and vertical errors observed between conjugate features in the overlapping regions of lidar data. The quantification of these errors is important because their presence quantifies the geometric quality of the data. A data set can be said to have good geometric quality if measurements of identical features, regardless of their position or orientation, yield identical results. Good geometric quality indicates that the data are produced using sensor models that are working as they are mathematically designed, and data acquisition processes are not introducing any unforeseen distortion in the data. High geometric quality also leads to high geolocation accuracy of the data when the data acquisition process includes coupling the sensor with geopositioning systems. Current specifications (e.g. Heidemann 2014) do not provide adequate means to quantitatively measure these errors, even though they are required to be reported. Current accuracy measurement and reporting practices followed in the industry and as recommended by data specification documents also potentially underestimate the inter-swath errors, including the presence of systematic errors in lidar data. Hence they pose a risk to the user in terms of data acceptance (i.e. a higher potential for Type II error indicating risk of accepting potentially unsuitable data). For example, if the overlap area is too small or if the sampled locations are close to the center of overlap, or if the errors are sampled in flat regions when there are residual pitch errors in the data, the resultant Root Mean Square Differences (RMSD) can still be small. To avoid this, the following are suggested to be used as criteria for defining the inter-swath quality of data: a) Median Discrepancy Angle; b) Mean and RMSD of Horizontal Errors using DQM measured on sloping surfaces; c) RMSD for sampled locations from flat areas (defined as areas with less than 5 degrees of slope). It is suggested that 4000-5000 points, depending on the surface roughness, be uniformly sampled in the overlapping regions of the point cloud to measure the discrepancy between swaths. Care must be taken to sample only areas of single-return points. Point-to-plane distance based data quality measures are determined for each sample point. These measurements are used to determine the above-mentioned parameters. This paper details the measurements and analysis of measurements required to determine these metrics, i.e. Discrepancy Angle, Mean and RMSD of errors in flat regions and horizontal errors obtained using measurements extracted from sloping regions (slope greater than 10 degrees). The research is a result of an ad-hoc joint working group of the US Geological Survey and the American Society for Photogrammetry and Remote Sensing (ASPRS) Airborne Lidar Committee.
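
    The point-to-plane data quality measure mentioned above can be sketched as follows: fit a plane to the neighbourhood of a sample point in one swath and take the signed distance of the conjugate point from the other swath to that plane. The neighbourhood search and sampling strategy are out of scope here; the points below are invented.

    ```python
    # Point-to-plane discrepancy: least-squares plane via SVD, then signed distance.
    import numpy as np

    def fit_plane(points):
        """Least-squares plane through Nx3 points; returns (unit normal, centroid)."""
        centroid = points.mean(axis=0)
        _, _, vt = np.linalg.svd(points - centroid)
        return vt[-1], centroid                       # smallest singular vector = normal

    def point_to_plane_distance(point, normal, centroid):
        return float(np.dot(point - centroid, normal))

    neigh_a = np.array([[0, 0, 0.00], [1, 0, 0.01], [0, 1, -0.01],
                        [1, 1, 0.00], [0.5, 0.5, 0.005]], dtype=float)
    p_b = np.array([0.5, 0.5, 0.08])                  # conjugate point from the other swath
    n, c = fit_plane(neigh_a)
    print(point_to_plane_distance(p_b, n, c))
    ```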

  9. Toward a new culture in verified quantum operations

    NASA Astrophysics Data System (ADS)

    Flammia, Steve

    Measuring error rates of quantum operations has become an indispensable component in any aspiring platform for quantum computation. As the quality of controlled quantum operations increases, the demands on the accuracy and precision with which we measure these error rates also grow. However, well-meaning scientists who report these error measures are faced with a sea of non-standardized methodologies and are often asked during publication for only coarse information about how their estimates were obtained. Moreover, there are serious incentives to use methodologies and measures that will continually produce numbers that improve with time to show progress. These problems will only get exacerbated as our typical error rates go from 1 in 100 to 1 in 1000 or less. This talk will survey existing challenges presented by the current paradigm and offer some suggestions for solutions that can help us move toward fair and standardized methods for error metrology in quantum computing experiments, and towards a culture that values full disclosure of methodologies and higher standards for data analysis.

  10. On using summary statistics from an external calibration sample to correct for covariate measurement error.

    PubMed

    Guo, Ying; Little, Roderick J; McConnell, Daniel S

    2012-01-01

    Covariate measurement error is common in epidemiologic studies. Current methods for correcting measurement error with information from external calibration samples are insufficient to provide valid adjusted inferences. We consider the problem of estimating the regression of an outcome Y on covariates X and Z, where Y and Z are observed, X is unobserved, but a variable W that measures X with error is observed. Information about measurement error is provided in an external calibration sample where data on X and W (but not Y and Z) are recorded. We describe a method that uses summary statistics from the calibration sample to create multiple imputations of the missing values of X in the regression sample, so that the regression coefficients of Y on X and Z and associated standard errors can be estimated using simple multiple imputation combining rules, yielding valid statistical inferences under the assumption of a multivariate normal distribution. The proposed method is shown by simulation to provide better inferences than existing methods, namely the naive method, classical calibration, and regression calibration, particularly for correction for bias and achieving nominal confidence levels. We also illustrate our method with an example using linear regression to examine the relation between serum reproductive hormone concentrations and bone mineral density loss in midlife women in the Michigan Bone Health and Metabolism Study. Existing methods fail to adjust appropriately for bias due to measurement error in the regression setting, particularly when measurement error is substantial. The proposed method corrects this deficiency.

  11. Evaluation of Fast-Time Wake Vortex Prediction Models

    NASA Technical Reports Server (NTRS)

    Proctor, Fred H.; Hamilton, David W.

    2009-01-01

    Current fast-time wake models are reviewed and three basic types are defined. Predictions from several of the fast-time models are compared. Previous statistical evaluations of the APA-Sarpkaya and D2P fast-time models are discussed. Root Mean Square errors between fast-time model predictions and Lidar wake measurements are examined for a 24 hr period at Denver International Airport. Shortcomings in current methodology for evaluating wake errors are also discussed.

  12. Tissue resistivity estimation in the presence of positional and geometrical uncertainties.

    PubMed

    Baysal, U; Eyüboğlu, B M

    2000-08-01

    Geometrical uncertainties (organ boundary variation and electrode position uncertainties) are the biggest sources of error in estimating electrical resistivity of tissues from body surface measurements. In this study, in order to decrease estimation errors, the statistically constrained minimum mean squared error estimation algorithm (MiMSEE) is constrained with a priori knowledge of the geometrical uncertainties in addition to the constraints based on geometry, resistivity range, linearization and instrumentation errors. The MiMSEE calculates an optimum inverse matrix, which maps the surface measurements to the unknown resistivity distribution. The required data are obtained from four-electrode impedance measurements, similar to injected-current electrical impedance tomography (EIT). In this study, the surface measurements are simulated by using a numerical thorax model. The data are perturbed with additive instrumentation noise. Simulated surface measurements are then used to estimate the tissue resistivities by using the proposed algorithm. The results are compared with the results of conventional least squares error estimator (LSEE). Depending on the region, the MiMSEE yields an estimation error between 0.42% and 31.3% compared with 7.12% to 2010% for the LSEE. It is shown that the MiMSEE is quite robust even in the case of geometrical uncertainties.
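
    A hedged sketch of the two linear estimators being compared: for a linearized measurement model y = Hx + n, the statistically constrained minimum mean-squared-error estimate is x̂ = CxHᵀ(HCxHᵀ + Cn)⁻¹y, while ordinary least squares ignores the prior and noise covariances. The matrices below are random stand-ins, not the thorax-model sensitivities.

    ```python
    # MMSE (Wiener-type) estimator versus ordinary least squares on a toy linear model.
    import numpy as np

    rng = np.random.default_rng(3)
    H = rng.normal(size=(64, 8))                    # sensitivity of 64 measurements to 8 regions
    Cx = np.diag(np.full(8, 0.25))                  # prior variance of regional resistivities
    Cn = np.diag(np.full(64, 0.01))                 # instrumentation noise covariance

    x_true = rng.normal(0.0, 0.5, size=8)
    y = H @ x_true + rng.multivariate_normal(np.zeros(64), Cn)

    x_mimsee = Cx @ H.T @ np.linalg.solve(H @ Cx @ H.T + Cn, y)
    x_lsee, *_ = np.linalg.lstsq(H, y, rcond=None)
    print("MMSE-type error :", np.linalg.norm(x_mimsee - x_true))
    print("LSEE error      :", np.linalg.norm(x_lsee - x_true))
    ```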

  13. Error Consistency Analysis Scheme for Infrared Ultraspectral Sounding Retrieval Error Budget Estimation

    NASA Technical Reports Server (NTRS)

    Zhou, Daniel K.; Larar, Allen M.; Liu, Xu; Smith, William L.; Strow, Larry, L.

    2013-01-01

    Great effort has been devoted towards validating geophysical parameters retrieved from ultraspectral infrared radiances obtained from satellite remote sensors. An error consistency analysis scheme (ECAS), utilizing fast radiative transfer model (RTM) forward and inverse calculations, has been developed to estimate the error budget in terms of mean difference and standard deviation of error in both spectral radiance and retrieval domains. The retrieval error is assessed through ECAS without relying on other independent measurements such as radiosonde data. ECAS establishes a link between the accuracies of radiances and retrieved geophysical parameters. ECAS can be applied to measurements from any ultraspectral instrument and any retrieval scheme with its associated RTM. In this manuscript, ECAS is described and demonstrated with measurements from the MetOp-A satellite Infrared Atmospheric Sounding Interferometer (IASI). This scheme can be used together with other validation methodologies to give a more definitive characterization of the error and/or uncertainty of geophysical parameters retrieved from ultraspectral radiances observed from current and future satellite remote sensors such as IASI, the Atmospheric Infrared Sounder (AIRS), and the Cross-track Infrared Sounder (CrIS).

  14. Improved uncertainty quantification in nondestructive assay for nonproliferation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Burr, Tom; Croft, Stephen; Jarman, Ken

    2016-12-01

    This paper illustrates methods to improve uncertainty quantification (UQ) for non-destructive assay (NDA) measurements used in nuclear nonproliferation. First, it is shown that current bottom-up UQ applied to calibration data is not always adequate, for three main reasons: (1) Because there are errors in both the predictors and the response, calibration involves a ratio of random quantities, and calibration data sets in NDA usually consist of only a modest number of samples (3–10); therefore, asymptotic approximations involving quantities needed for UQ such as means and variances are often not sufficiently accurate; (2) Common practice overlooks that calibration implies a partitioning of total error into random and systematic error, and (3) In many NDA applications, test items exhibit non-negligible departures in physical properties from calibration items, so model-based adjustments are used, but item-specific bias remains in some data. Therefore, improved bottom-up UQ using calibration data should predict the typical magnitude of item-specific bias, and the suggestion is to do so by including sources of item-specific bias in synthetic calibration data that is generated using a combination of modeling and real calibration data. Second, for measurements of the same nuclear material item by both the facility operator and international inspectors, current empirical (top-down) UQ is described for estimating operator and inspector systematic and random error variance components. A Bayesian alternative is introduced that easily accommodates constraints on variance components, and is more robust than current top-down methods to the underlying measurement error distributions.

  15. The performance of integrated transconductance amplifiers as variable current sources for bio-electric impedance measurements.

    PubMed

    Smith, D N

    1992-01-01

    Multiple applied current impedance measurement systems require numbers of current sources which operate simultaneously at the same frequency and within the same phase but at variable amplitudes. Investigations into the performance of some integrated operational transconductance amplifiers as variable current sources are described. Measurements of breakthrough, non-linearity and common-mode output levels for LM13600, NE5517 and CA3280 were carried out. The effects of such errors on the overall performance and stability of multiple current systems when driving floating loads are considered.

  16. Impact of Resident Duty Hour Limits on Safety in the ICU: A National Survey of Pediatric and Neonatal Intensivists

    PubMed Central

    Typpo, Katri V.; Tcharmtchi, M. Hossein; Thomas, Eric J.; Kelly, P. Adam; Castillo, Leticia D.; Singh, Hardeep

    2011-01-01

    Objective: Resident duty-hour regulations potentially shift workload from resident to attending physicians. We sought to understand how current or future regulatory changes might impact safety in academic pediatric and neonatal intensive care units (ICUs). Design: Web-based survey. Setting: US academic pediatric and neonatal ICUs. Subjects: Attending pediatric and neonatal intensivists. Interventions: We evaluated perceptions on four ICU safety-related risk measures potentially affected by current duty-hour regulations: 1) Attending physician and resident fatigue, 2) Attending physician work-load, 3) Errors (self-reported rates by attending physicians or perceived resident error rates), and 4) Safety culture. We also evaluated perceptions of how these risks would change with further duty hour restrictions. Measurements and Main Results: We administered our survey between February and April 2010 to 688 eligible physicians, of which 360 (52.3%) responded. Most believed that resident error rates were unchanged or worse (91.9%) and safety culture was unchanged or worse (84.4%) with current duty-hour regulations. Of respondents, 61.9% believed their own work-hours providing direct patient care increased and 55.8% believed they were more fatigued while providing direct patient care. Most (85.3%) perceived no increase in their own error rates currently, but in the scenario of further reduction in resident duty-hours, over half (53.3%) believed that safety culture would worsen and a significant proportion (40.3%) believed that their own error rates would increase. Conclusions: Pediatric intensivists do not perceive improved patient safety from current resident duty hour restrictions. Policies to further restrict resident duty hours should consider unintended consequences of worsening certain aspects of ICU safety. PMID:22614570

  17. Low-Energy Proton Testing Methodology

    NASA Technical Reports Server (NTRS)

    Pellish, Jonathan A.; Marshall, Paul W.; Heidel, David F.; Schwank, James R.; Shaneyfelt, Marty R.; Xapsos, M.A.; Ladbury, Raymond L.; LaBel, Kenneth A.; Berg, Melanie; Kim, Hak S.

    2009-01-01

    Use of low-energy protons and high-energy light ions is becoming necessary to investigate current-generation SEU thresholds. Systematic errors can dominate measurements made with low-energy protons. Range and energy straggling contribute to systematic error. Low-energy proton testing is not a step-and-repeat process. Low-energy protons and high-energy light ions can be used to measure SEU cross section of single sensitive features; important for simulation.

  18. Compensation for positioning error of industrial robot for flexible vision measuring system

    NASA Astrophysics Data System (ADS)

    Guo, Lei; Liang, Yajun; Song, Jincheng; Sun, Zengyu; Zhu, Jigui

    2013-01-01

    Positioning error of the robot is a main factor limiting the accuracy of a flexible coordinate measuring system consisting of a universal industrial robot and a vision sensor. Existing compensation methods for positioning error based on a kinematic model of the robot have a significant limitation: they are not effective over the whole measuring space. A new compensation method for robot positioning error based on vision measuring techniques is presented. One approach is to set global control points in the measured field and attach an orientation camera to the vision sensor. The global control points are then measured by the orientation camera to calculate the transformation from the current sensor position to the global coordinate system, and the positioning error of the robot is compensated. Another approach is to set control points on the vision sensor and place two large-field cameras behind the sensor. The three-dimensional coordinates of the control points are then measured, and the pose and position of the sensor are calculated in real time. Experimental results show that the RMS of spatial positioning is 3.422 mm with a single camera and 0.031 mm with dual cameras. It is concluded that the single-camera algorithm needs improvement for higher accuracy, while the accuracy of the dual-camera method is suitable for application.

  19. Improved Correction System for Vibration Sensitive Inertial Angle of Attack Measurement Devices

    NASA Technical Reports Server (NTRS)

    Crawford, Bradley L.; Finley, Tom D.

    2000-01-01

    Inertial angle of attack (AoA) devices currently in use at NASA Langley Research Center (LaRC) are subject to inaccuracies due to centrifugal accelerations caused by model dynamics, also known as sting whip. Recent literature suggests that these errors can be as high as 0.25 deg. With the current AoA accuracy target at LaRC being 0.01 deg., there is a dire need for improvement. With other errors in the inertial system (temperature, rectification, resolution, etc.) having been reduced to acceptable levels, a system is currently being developed at LaRC to measure and correct for the sting-whip-induced errors. By using miniaturized piezoelectric accelerometers and magnetohydrodynamic rate sensors, not only can the total centrifugal acceleration be measured, but yaw and pitch dynamics in the tunnel can also be characterized. These corrections can be used to determine a tunnel's past performance and can also indicate where efforts need to be concentrated to reduce these dynamics. Included in this paper are data on individual sensors, laboratory testing techniques, package evaluation, and wind tunnel test results on a High Speed Research (HSR) model in the Langley 16-Foot Transonic Wind Tunnel.

  20. Current profilers and current meters: compass and tilt sensors errors and calibration

    NASA Astrophysics Data System (ADS)

    Le Menn, M.; Lusven, A.; Bongiovanni, E.; Le Dû, P.; Rouxel, D.; Lucas, S.; Pacaud, L.

    2014-08-01

    Current profilers and current meters have a magnetic compass and tilt sensors for relating measurements to a terrestrial reference frame. As compasses are sensitive to their magnetic environment, they must be calibrated in the configuration in which they will be used. A calibration platform for magnetic compasses and tilt sensors was built, based on a method developed in 2007, to correct angular errors and guarantee a measurement uncertainty for instruments mounted in mooring cages. As mooring cages can weigh up to 800 kg, it was necessary to find a suitable place to set up this platform, map the magnetic fields in this area and dimension the platform to withstand these loads. It was calibrated using a GPS positioning technique. The platform has a table that can be tilted to calibrate the tilt sensors. The measurement uncertainty of the system was evaluated. Sinusoidal corrections based on the anomalies created by soft and hard magnetic materials were tested, as well as manufacturers’ calibration methods.
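
    The "sinusoidal corrections" mentioned above are commonly written as the classical compass deviation d(H) = A + B sin H + C cos H + D sin 2H + E cos 2H, with the one-cycle terms attributed to hard-iron and the two-cycle terms to soft-iron effects; the sketch below fits these coefficients against a reference heading. The data and the GPS-reference assumption are illustrative, not the platform's actual procedure.

    ```python
    # Fit and apply a classical five-coefficient compass deviation model.
    import numpy as np

    def fit_deviation(compass_deg, reference_deg):
        h = np.radians(compass_deg)
        dev = (np.asarray(reference_deg) - np.asarray(compass_deg) + 180.0) % 360.0 - 180.0
        M = np.column_stack([np.ones_like(h), np.sin(h), np.cos(h),
                             np.sin(2 * h), np.cos(2 * h)])
        coeffs, *_ = np.linalg.lstsq(M, dev, rcond=None)
        return coeffs                                   # A, B, C, D, E (degrees)

    def correct_heading(compass_deg, coeffs):
        h = np.radians(compass_deg)
        dev = (coeffs[0] + coeffs[1] * np.sin(h) + coeffs[2] * np.cos(h)
               + coeffs[3] * np.sin(2 * h) + coeffs[4] * np.cos(2 * h))
        return (np.asarray(compass_deg) + dev) % 360.0

    ref = np.arange(0.0, 360.0, 10.0)                   # reference (e.g. GPS-derived) headings
    true_dev = 1.5 + 2.0 * np.sin(np.radians(ref)) - 0.8 * np.cos(2 * np.radians(ref))
    compass = (ref - true_dev) % 360.0                  # synthetic compass readings
    coeffs = fit_deviation(compass, ref)
    print(np.round(coeffs, 2))
    print(np.round(correct_heading(compass[:3], coeffs), 2))
    ```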

  1. Cost effectiveness of the stream-gaging program in Louisiana

    USGS Publications Warehouse

    Herbert, R.A.; Carlson, D.D.

    1985-01-01

    This report documents the results of a study of the cost effectiveness of the stream-gaging program in Louisiana. Data uses and funding sources were identified for the 68 continuous-record stream gages currently (1984) in operation with a budget of $408,700. Three stream gages have uses specific to a short-term study with no need for continued data collection beyond the study. The remaining 65 stations should be maintained in the program for the foreseeable future. In addition to the current operation of continuous-record stations, a number of wells, flood-profile gages, crest-stage gages, and stage stations are serviced on the continuous-record station routes, thus increasing the current budget to $423,000. The average standard error of estimate for data collected at the stations is 34.6%. Standard errors computed in this study are one measure of streamflow errors, and can be used as guidelines in comparing the effectiveness of alternative networks. By using the routes and number of measurements prescribed by the 'Traveling Hydrographer Program,' the standard error could be reduced to 31.5% with the current budget of $423,000. If the gaging resources are redistributed, the 34.6% overall level of accuracy at the 68 continuous-record sites and the servicing of the additional wells or gages could be maintained with a budget of approximately $410,000. (USGS)

  2. Utilizing measure-based feedback in control-mastery theory: A clinical error.

    PubMed

    Snyder, John; Aafjes-van Doorn, Katie

    2016-09-01

    Clinical errors and ruptures are an inevitable part of clinical practice. Often times, therapists are unaware that a clinical error or rupture has occurred, leaving no space for repair, and potentially leading to patient dropout and/or less effective treatment. One way to overcome our blind spots is by frequently and systematically collecting measure-based feedback from the patient. Patient feedback measures that focus on the process of psychotherapy such as the Patient's Experience of Attunement and Responsiveness scale (PEAR) can be used in conjunction with treatment outcome measures such as the Outcome Questionnaire 45.2 (OQ-45.2) to monitor the patient's therapeutic experience and progress. The regular use of these types of measures can aid clinicians in the identification of clinical errors and the associated patient deterioration that might otherwise go unnoticed and unaddressed. The current case study describes an instance of clinical error that occurred during the 2-year treatment of a highly traumatized young woman. The clinical error was identified using measure-based feedback and subsequently understood and addressed from the theoretical standpoint of the control-mastery theory of psychotherapy. An alternative hypothetical response is also presented and explained using control-mastery theory. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  3. Advanced Water Vapor Lidar Detection System

    NASA Technical Reports Server (NTRS)

    Elsayed-Ali, Hani

    1998-01-01

    In the present water vapor lidar system, the detected signal is sent over long cables to a waveform digitizer in a CAMAC crate. This has the disadvantage of transmitting analog signals over a relatively long distance, which makes them subject to pickup noise and decreases the signal-to-noise ratio. Generally, errors in the measurement of water vapor with the DIAL method arise from both random and systematic sources. Systematic errors in DIAL measurements are caused by both atmospheric and instrumentation effects. The selection of the on-line alexandrite laser with a narrow linewidth, suitable intensity and high spectral purity, and its operation at the center of the water vapor lines, ensures minimal influence on the DIAL measurement from the laser spectral distribution and avoids system overloads. Random errors are caused by noise in the detected signal. Variability of the photon statistics in the lidar return signal, noise resulting from detector dark current, and noise in the background signal are the main sources of random error. This type of error can be minimized by maximizing the signal-to-noise ratio. The increase in the signal-to-noise ratio can be achieved in several ways. One way is to increase the laser pulse energy, by increasing its amplitude or the pulse repetition rate. Another way is to use a detector system with higher quantum efficiency and lower noise; in addition, the selection of a narrow-band optical filter that rejects most of the daytime background light while retaining high optical efficiency is important. Following acquisition of the lidar data, random errors in the DIAL measurement are minimized by averaging the data, but this reduces the vertical and horizontal resolutions. Thus, a trade-off is necessary to achieve a balance between the spatial resolution and the measurement precision. Therefore, the main goal of this research effort is to increase the signal-to-noise ratio by a factor of 10 over the current system, using a newly evaluated, very low noise avalanche photodiode detector and constructing a 10 MHz waveform digitizer which will replace the current CAMAC system.

  4. SU-F-T-310: Does a Head-Mounted Ionization Chamber Detect IMRT Errors?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wegener, S; Herzog, B; Sauer, O

    2016-06-15

    Purpose: The conventional plan verification strategy is delivering a plan to a QA-phantom before the first treatment. Monitoring each fraction of the patient treatment in real-time would improve patient safety. We evaluated how well a new detector, the IQM (iRT Systems, Germany), is capable of detecting errors we induced into IMRT plans of three different treatment regions. Results were compared to an established phantom. Methods: Clinical plans of a brain, prostate and head-and-neck patient were modified in the Pinnacle planning system, such that they resulted in either several percent lower prescribed doses to the target volume or several percent higher doses to relevant organs at risk. Unaltered plans were measured on three days, modified plans once, each with the IQM at an Elekta Synergy with an Agility MLC. All plans were also measured with the ArcCHECK with the cavity plug and a PTW semiflex 31010 ionization chamber inserted. Measurements were evaluated with SNC patient software. Results: Repeated IQM measurements of the original plans were reproducible, such that a 1% deviation from the mean as warning and 3% as action level as suggested by the manufacturer seemed reasonable. The IQM detected most of the simulated errors including wrong energy, a faulty leaf, wrong trial exported and a 2 mm shift of one leaf bank. Detection limits were reached for two plans - a 2 mm field position error and a leaf bank offset combined with an MU change. ArcCHECK evaluation according to our current standards also left undetected errors. Ionization chamber evaluation alone would leave most errors undetected. Conclusion: The IQM detected most errors and performed as well as currently established phantoms with the advantage that it can be used throughout the whole treatment. Drawback is that it does not indicate the source of the error.

  5. Fluid dynamic design and experimental study of an aspirated temperature measurement platform used in climate observation.

    PubMed

    Yang, Jie; Liu, Qingquan; Dai, Wei; Ding, Renhui

    2016-08-01

    Due to the solar radiation effect, current air temperature sensors inside a thermometer screen or radiation shield may produce measurement errors that are 0.8 °C or higher. To improve the observation accuracy, an aspirated temperature measurement platform is designed. A computational fluid dynamics (CFD) method is implemented to analyze and calculate the radiation error of the aspirated temperature measurement platform under various environmental conditions. Then, a radiation error correction equation is obtained by fitting the CFD results using a genetic algorithm (GA) method. In order to verify the performance of the temperature sensor, the aspirated temperature measurement platform, temperature sensors with a naturally ventilated radiation shield, and a thermometer screen are characterized in the same environment to conduct the intercomparison. The average radiation errors of the sensors in the naturally ventilated radiation shield and the thermometer screen are 0.44 °C and 0.25 °C, respectively. In contrast, the radiation error of the aspirated temperature measurement platform is as low as 0.05 °C. This aspirated temperature sensor allows the radiation error to be reduced by approximately 88.6% compared to the naturally ventilated radiation shield, and by approximately 80% compared to the thermometer screen. The mean absolute error and root mean square error between the correction equation and experimental results are 0.032 °C and 0.036 °C, respectively, which demonstrates the accuracy of the CFD and GA methods proposed in this research.
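
    The correction-equation step described above can be illustrated with a minimal sketch: synthetic CFD-style points are fitted with an assumed power-law form, err = a·S^b·U^c, using ordinary least squares in log space. The study itself uses a genetic algorithm; the functional form, the variables S (irradiance) and U (ventilation speed), and all numbers here are illustrative assumptions only.

```python
import numpy as np

# Illustrative CFD-style results: radiation error (deg C) versus solar
# irradiance S (W/m^2) and ventilation speed U (m/s).  Values are made up.
S = np.array([200.0, 400.0, 600.0, 800.0, 1000.0, 600.0, 600.0, 600.0])
U = np.array([3.0,   3.0,   3.0,   3.0,   3.0,    1.5,   4.5,   6.0])
err = np.array([0.02, 0.04, 0.06, 0.08, 0.10, 0.11, 0.045, 0.035])

# Assume err = a * S**b * U**c and fit it in log space with ordinary least
# squares (a stand-in for the paper's GA fit, just to show the idea of
# deriving a correction equation from simulated radiation errors).
X = np.column_stack([np.ones_like(S), np.log(S), np.log(U)])
beta, *_ = np.linalg.lstsq(X, np.log(err), rcond=None)
a, b, c = np.exp(beta[0]), beta[1], beta[2]

def radiation_error(s, u):
    """Predicted radiation error for irradiance s and ventilation speed u."""
    return a * s**b * u**c

# Corrected temperature = measured temperature - predicted radiation error.
print(radiation_error(600.0, 3.0))
```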

  6. Fluid dynamic design and experimental study of an aspirated temperature measurement platform used in climate observation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, Jie, E-mail: yangjie396768@163.com; School of Atmospheric Physics, Nanjing University of Information Science and Technology, Nanjing 210044; Liu, Qingquan

    Due to the solar radiation effect, current air temperature sensors inside a thermometer screen or radiation shield may produce measurement errors that are 0.8 °C or higher. To improve the observation accuracy, an aspirated temperature measurement platform is designed. A computational fluid dynamics (CFD) method is implemented to analyze and calculate the radiation error of the aspirated temperature measurement platform under various environmental conditions. Then, a radiation error correction equation is obtained by fitting the CFD results using a genetic algorithm (GA) method. In order to verify the performance of the temperature sensor, the aspirated temperature measurement platform, temperature sensors with a naturally ventilated radiation shield, and a thermometer screen are characterized in the same environment to conduct the intercomparison. The average radiation errors of the sensors in the naturally ventilated radiation shield and the thermometer screen are 0.44 °C and 0.25 °C, respectively. In contrast, the radiation error of the aspirated temperature measurement platform is as low as 0.05 °C. This aspirated temperature sensor allows the radiation error to be reduced by approximately 88.6% compared to the naturally ventilated radiation shield, and by approximately 80% compared to the thermometer screen. The mean absolute error and root mean square error between the correction equation and experimental results are 0.032 °C and 0.036 °C, respectively, which demonstrates the accuracy of the CFD and GA methods proposed in this research.

  7. A water-vapor radiometer error model. [for ionosphere in geodetic microwave techniques

    NASA Technical Reports Server (NTRS)

    Beckman, B.

    1985-01-01

    The water-vapor radiometer (WVR) is used to calibrate unpredictable delays in the wet component of the troposphere in geodetic microwave techniques such as very-long-baseline interferometry (VLBI) and Global Positioning System (GPS) tracking. Based on experience with Jet Propulsion Laboratory (JPL) instruments, the current level of accuracy in wet-troposphere calibration limits the accuracy of local vertical measurements to 5-10 cm. The goal for the near future is 1-3 cm. Although the WVR is currently the best calibration method, many instruments are prone to systematic error. In this paper, a treatment of WVR data is proposed and evaluated. This treatment reduces the effect of WVR systematic errors by estimating parameters that specify an assumed functional form for the error. The assumed form of the treatment is evaluated by comparing the results of two similar WVR's operating near each other. Finally, the observability of the error parameters is estimated by covariance analysis.

  8. Estimating Ocean Currents from Automatic Identification System Based Ship Drift Measurements

    NASA Astrophysics Data System (ADS)

    Jakub, Thomas D.

    Ship drift is a technique that has been used over the last century and a half to estimate ocean currents. Several of the shortcomings of the ship drift technique include obtaining the data from multiple ships, the time delay in getting those ship positions to a data center for processing and the limited resolution based on the amount of time between position measurements. These shortcomings can be overcome through the use of the Automatic Identification System (AIS). AIS enables more precise ocean current estimates, the option of finer resolution and more timely estimates. In this work, a demonstration of the use of AIS to compute ocean currents is performed. A corresponding error and sensitivity analysis is performed to help identify under which conditions errors will be smaller. A case study in San Francisco Bay with constant AIS message updates was compared against high frequency radar and demonstrated ocean current magnitude residuals of 19 cm/s for ship tracks in a high signal to noise environment. These ship tracks were only minutes long compared to the normally 12 to 24 hour ship tracks. The Gulf of Mexico case study demonstrated the ability to estimate ocean currents over longer baselines and identified the dependency of the estimates on the accuracy of time measurements. Ultimately, AIS measurements when combined with ship drift can provide another method of estimating ocean currents, particularly when other measurement techniques are not available.
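
    The core of the ship-drift idea can be sketched as follows: the drift (current) vector is the ship's displacement over the ground between two position reports minus the displacement implied by its speed through the water and heading. The flat-earth geometry, the field names and the numbers below are illustrative assumptions, not the processing used in the study.

```python
import numpy as np

def current_from_ais(lat1, lon1, lat2, lon2, dt_s, stw_ms, heading_deg):
    """Estimate the ocean current from two AIS position reports.

    Subtracts the dead-reckoned displacement through the water (speed through
    water x heading) from the observed displacement over the ground.
    Uses a flat-earth approximation; all inputs are illustrative.
    """
    R = 6371000.0  # mean Earth radius, m
    lat0 = np.radians(0.5 * (lat1 + lat2))
    # displacement over ground (east, north), m
    de = np.radians(lon2 - lon1) * R * np.cos(lat0)
    dn = np.radians(lat2 - lat1) * R
    # displacement through the water from heading and speed through water
    h = np.radians(heading_deg)
    dwe = stw_ms * dt_s * np.sin(h)
    dwn = stw_ms * dt_s * np.cos(h)
    # residual displacement attributed to the current, expressed as velocity
    return (de - dwe) / dt_s, (dn - dwn) / dt_s   # (u_east, v_north), m/s

# Example: two reports 10 minutes apart (made-up positions and ship data)
u, v = current_from_ais(37.800, -122.450, 37.815, -122.430, 600.0, 5.0, 40.0)
print(u, v)
```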

  9. Improvements on the accuracy of beam bugs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Y.J.; Fessenden, T.

    1998-08-17

    At LLNL, resistive wall monitors are used to measure the current and position of intense electron beams in electron induction linacs and beam transport lines. These monitors, known locally as "beam bugs", have been used throughout linear induction accelerators as essential diagnostics of beam current and location. Recently, the development of a fast beam kicker has required improvement in the accuracy of measuring the position of beams. By picking off signals at more than the usual four positions around the monitor, beam position measurement error can be greatly reduced. A second significant source of error is the mechanical variation of the resistor around the bug. Beam bugs used on ETA-II show a droop in signal due to a fast redistribution time constant of the signals. This paper presents the analysis and experimental test of the beam bugs used for beam current and position measurements in and after the fast kicker. It concludes with an outline of present and future changes that can be made to improve the accuracy of these beam bugs.
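
    The position-from-pickoffs idea can be illustrated with a generic first-harmonic fit around the monitor: for small displacements each pickoff signal varies roughly with the cosine of the angle between the pickoff and the beam offset, so fitting that harmonic with more than four pickoffs averages down individual resistor variations. This is a standard difference-over-sum style estimate under that assumed signal model, not the LLNL processing; all numbers are illustrative.

```python
import numpy as np

def beam_position(signals, angles_deg, radius_mm):
    """Estimate transverse beam position from wall-pickoff signals.

    Assumes the signal on a pickoff at azimuth theta varies as
    S0 * (1 + 2*(x*cos(theta) + y*sin(theta)) / r) for small offsets,
    and fits the first azimuthal harmonic by least squares.
    """
    s = np.asarray(signals, dtype=float)
    th = np.radians(np.asarray(angles_deg, dtype=float))
    s_norm = s / s.mean() - 1.0
    A = np.column_stack([np.cos(th), np.sin(th)])
    (cx, cy), *_ = np.linalg.lstsq(A, s_norm, rcond=None)
    return 0.5 * radius_mm * cx, 0.5 * radius_mm * cy

# Eight pickoffs instead of the usual four reduce sensitivity to any single
# resistor's mechanical variation (illustrative numbers).
angles = np.arange(0, 360, 45)
signals = [1.02, 1.05, 1.03, 1.00, 0.98, 0.95, 0.97, 1.00]
print(beam_position(signals, angles, radius_mm=50.0))
```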

  10. Improvements on the accuracy of beam bugs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Y J; Fessenden, T

    1998-09-02

    At LLNL, resistive wall monitors are used to measure the current and position of intense electron beams in electron induction linacs and beam transport lines. These monitors, known locally as "beam bugs", have been used throughout linear induction accelerators as essential diagnostics of beam current and location. Recently, the development of a fast beam kicker has required improvement in the accuracy of measuring the position of beams. By picking off signals at more than the usual four positions around the monitor, beam position measurement error can be greatly reduced. A second significant source of error is the mechanical variation of the resistor around the bug. Beam bugs used on ETA-II show a droop in signal due to a fast redistribution time constant of the signals. This paper presents the analysis and experimental test of the beam bugs used for beam current and position measurements in and after the fast kicker. It concludes with an outline of present and future changes that can be made to improve the accuracy of these beam bugs.

  11. Improved Design of Stellarator Coils for Current Carrying Plasmas

    NASA Astrophysics Data System (ADS)

    Drevlak, M.; Strumberger, E.; Hirshman, S.; Boozer, A.; Brooks, A.; Valanju, P.

    1998-11-01

    The method of automatic optimization (P. Merkel, Nucl. Fus. 27 (1987) 867; P. Merkel, M. Drevlak, Proc. 25th EPS Conf. on Cont. Fus. and Plas. Phys., Prague, in print) for the design of stellarator coils consists essentially of determining filaments such that the average relative field error $\int dS\,[(B_{coil} + B_j)\cdot n]^2 / B_{coil}^2$ is minimized on the prescribed plasma boundary. B_j is the magnetic field produced by the plasma currents of the given finite-β fixed-boundary equilibrium. For equilibria of the W7-X type, B_j can be neglected because of the reduced parallel plasma currents. This is not true for quasi-axisymmetric stellarator (QAS) configurations (A. Reiman, et al., to be published) with large equilibrium and net plasma (bootstrap) currents. Although the coils for QAS exhibit low values of the field error, free boundary calculations indicate that the shape of the plasma is usually not accurately reproduced, particularly when saddle coils are used. We investigate whether the surface reconstruction can be improved by introducing a modified measure of the field error based on a measure of the resonant components of the normal field.

  12. An Empirical State Error Covariance Matrix Orbit Determination Example

    NASA Technical Reports Server (NTRS)

    Frisbee, Joseph H., Jr.

    2015-01-01

    State estimation techniques serve effectively to provide mean state estimates. However, the state error covariance matrices provided by these techniques inspire limited confidence in their ability to adequately describe the uncertainty in the estimated states. A specific problem with the traditional form of state error covariance matrices is that they represent only a mapping of the assumed observation error characteristics into the state space. Any errors that arise from other sources (environment modeling, precision, etc.) are not directly represented in a traditional, theoretical state error covariance matrix. First, consider that an actual observation contains only measurement error and that an estimated observation contains all other errors, known and unknown. Then it follows that a measurement residual (the difference between expected and observed measurements) contains all errors for that measurement. Therefore, a direct and appropriate inclusion of the actual measurement residuals in the state error covariance matrix of the estimate will result in an empirical state error covariance matrix. This empirical state error covariance matrix will fully include all of the errors in the state estimate. The empirical error covariance matrix is determined from a literal reinterpretation of the equations involved in the weighted least squares estimation algorithm. It is a formally correct, empirical state error covariance matrix obtained through use of the average form of the weighted measurement residual variance performance index rather than the usual total weighted residual form. Based on its formulation, this matrix will contain the total uncertainty in the state estimate, regardless of the source of the uncertainty and whether the source is anticipated or not. It is expected that the empirical error covariance matrix will give a better statistical representation of the state error in poorly modeled systems or when sensor performance is suspect. In its most straightforward form, the technique only requires supplemental calculations to be added to existing batch estimation algorithms. In the current problem being studied, a truth model making use of gravity with spherical, J2 and J4 terms plus a standard exponential-type atmosphere with simple diurnal and random walk components is used. The ability of the empirical state error covariance matrix to account for errors is investigated under four scenarios during orbit estimation. These scenarios are: exact modeling under known measurement errors, exact modeling under corrupted measurement errors, inexact modeling under known measurement errors, and inexact modeling under corrupted measurement errors. For this problem a simple analog of a distributed space surveillance network is used. The sensors in this network make only range measurements with simple normally distributed measurement errors. The sensors are assumed to have full horizon-to-horizon viewing at any azimuth. For definiteness, an orbit at the approximate altitude and inclination of the International Space Station is used for the study. The comparison analyses of the data involve only total vectors. No investigation of specific orbital elements is undertaken. The total vector analyses will look at the chi-square values of the error in the difference between the estimated state and the true modeled state using both the empirical and theoretical error covariance matrices for each scenario.
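
    A minimal sketch of the general idea follows: the theoretical weighted-least-squares covariance (AᵀWA)⁻¹ reflects only the assumed measurement noise, while scaling it by the average weighted residual variance folds the actual residuals back into the covariance. This is a generic interpretation of an empirical covariance, assuming a linear batch estimator; it is not claimed to reproduce the paper's exact formulation, and the matrices and noise levels are illustrative.

```python
import numpy as np

def empirical_state_covariance(A, W, residuals):
    """Sketch of an empirical state error covariance for weighted least squares.

    A         -- measurement partials (m x n)
    W         -- measurement weight matrix (m x m)
    residuals -- observed-minus-computed measurement residuals (m,)
    """
    m = len(residuals)
    P_theoretical = np.linalg.inv(A.T @ W @ A)          # assumed-noise covariance
    avg_weighted_residual_var = float(residuals @ W @ residuals) / m
    return avg_weighted_residual_var * P_theoretical    # scaled by actual residuals

# Illustrative range-only example with 3 state parameters and 20 measurements
rng = np.random.default_rng(0)
A = rng.normal(size=(20, 3))
W = np.eye(20) / 0.05**2                      # assumed 5 cm range noise
residuals = rng.normal(scale=0.12, size=20)   # actual errors are larger
print(np.diag(empirical_state_covariance(A, W, residuals)))
```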

  13. Design and test of voltage and current probes for EAST ICRF antenna impedance measurement

    NASA Astrophysics Data System (ADS)

    Jianhua, WANG; Gen, CHEN; Yanping, ZHAO; Yuzhou, MAO; Shuai, YUAN; Xinjun, ZHANG; Hua, YANG; Chengming, QIN; Yan, CHENG; Yuqing, YANG; Guillaume, URBANCZYK; Lunan, LIU; Jian, CHENG

    2018-04-01

    On the experimental advanced superconducting tokamak (EAST), a pair of voltage and current probes (V/I probes) is installed on the ion cyclotron radio frequency transmission lines to measure the antenna input impedance and supplement the conventional measurement technique based on voltage probe arrays. The coupling coefficients of the V/I probes are sensitive to their sizes and installation locations, so they should be chosen properly to match the measurement range of the data acquisition card. The V/I probes were tested on a testing platform at low power with various artificial loads. The testing results show that the deviation of the coupling resistance is small for loads R_L > 2.5 Ω, while the resistance deviations become large for loads R_L < 1.5 Ω, which implies that the power loss cannot be neglected at high VSWR. Of the factors that give rise to deviations in the coupling resistance calculation, the phase measurement error is more significant than the amplitude measurement error. To exclude possible sources of phase measurement error, the phase detector can be calibrated in a steady L-mode scenario, and the calibrated data can then be used for calculations under H-mode cases in EAST experiments.
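
    The dominance of phase error over amplitude error can be illustrated with a toy calculation: if the resistive part of the measured impedance is taken as (|V|/|I|)·cos(phase), then near 90° of V-I phase (low-resistance loads, high VSWR) a fixed phase error produces a much larger relative error than a comparable amplitude error. The formula and numbers below are an illustrative sketch, not the EAST data-acquisition chain.

```python
import numpy as np

def coupling_resistance(v_amp, i_amp, phase_deg):
    """Real (resistive) part of the measured impedance from V/I amplitude and phase."""
    return (v_amp / i_amp) * np.cos(np.radians(phase_deg))

# Compare a 1 degree phase error with a 1% amplitude error at several V-I phases.
# As the phase approaches 90 degrees (small R_L on a mismatched line), the
# phase error dominates -- consistent with the larger deviations at low R_L.
z0 = 50.0
for phase in (30.0, 60.0, 80.0, 87.0):
    r_true = coupling_resistance(z0, 1.0, phase)
    r_phase_err = coupling_resistance(z0, 1.0, phase + 1.0)
    r_amp_err = coupling_resistance(z0 * 1.01, 1.0, phase)
    print(f"phase {phase:5.1f} deg: R = {r_true:6.2f} ohm, "
          f"1 deg phase error -> {100 * (r_phase_err / r_true - 1):+6.1f}%, "
          f"1% amplitude error -> {100 * (r_amp_err / r_true - 1):+5.1f}%")
```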

  14. Biennial Guidance Test Symposium (13th) Held in Holloman Air Force Base, New Mexico on 6-8 October 1987. Volume 1

    DTIC Science & Technology

    1987-10-15

    Guardiani, R. Strane, J. Profeta, Contraves Goerz Corporation, 610 Epsilon Dr., Pittsburg PA S04A "The Global Positioning System as an Aid to the Testing...errors. The weights defining the current error state as a linear combination of the gravity errors at the previous vehicle locations are maintained and...updated at each time step. These weights can also be used to compute the cross-correlation of the system errors with measured gravity quantities for use

  15. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Newman, Jennifer F.; Clifton, Andrew

    Currently, cup anemometers on meteorological towers are used to measure wind speeds and turbulence intensity to make decisions about wind turbine class and site suitability; however, as modern turbine hub heights increase and wind energy expands to complex and remote sites, it becomes more difficult and costly to install meteorological towers at potential sites. As a result, remote-sensing devices (e.g., lidars) are now commonly used by wind farm managers and researchers to estimate the flow field at heights spanned by a turbine. Although lidars can accurately estimate mean wind speeds and wind directions, there is still a large amount of uncertainty surrounding the measurement of turbulence using these devices. Errors in lidar turbulence estimates are caused by a variety of factors, including instrument noise, volume averaging, and variance contamination, in which the magnitude of these factors is highly dependent on measurement height and atmospheric stability. As turbulence has a large impact on wind power production, errors in turbulence measurements will translate into errors in wind power prediction. The impact of using lidars rather than cup anemometers for wind power prediction must be understood if lidars are to be considered a viable alternative to cup anemometers. In this poster, the sensitivity of power prediction error to typical lidar turbulence measurement errors is assessed. Turbulence estimates from a vertically profiling WINDCUBE v2 lidar are compared to high-resolution sonic anemometer measurements at field sites in Oklahoma and Colorado to determine the degree of lidar turbulence error that can be expected under different atmospheric conditions. These errors are then incorporated into a power prediction model to estimate the sensitivity of power prediction error to turbulence measurement error. Power prediction models, including the standard binning method and a random forest method, were developed using data from the aeroelastic simulator FAST for a 1.5 MW turbine. The impact of lidar turbulence error on the predicted power from these different models is examined to determine the degree of turbulence measurement accuracy needed for accurate power prediction.

  16. Systematic errors of EIT systems determined by easily-scalable resistive phantoms.

    PubMed

    Hahn, G; Just, A; Dittmar, J; Hellige, G

    2008-06-01

    We present a simple method to determine systematic errors that will occur in the measurements by EIT systems. The approach is based on very simple scalable resistive phantoms for EIT systems using a 16 electrode adjacent drive pattern. The output voltage of the phantoms is constant for all combinations of current injection and voltage measurements and the trans-impedance of each phantom is determined by only one component. It can be chosen independently from the input and output impedance, which can be set in order to simulate measurements on the human thorax. Additional serial adapters allow investigation of the influence of the contact impedance at the electrodes on resulting errors. Since real errors depend on the dynamic properties of an EIT system, the following parameters are accessible: crosstalk, the absolute error of each driving/sensing channel and the signal to noise ratio in each channel. Measurements were performed on a Goe-MF II EIT system under four different simulated operational conditions. We found that systematic measurement errors always exceeded the error level of stochastic noise since the Goe-MF II system had been optimized for a sufficient signal to noise ratio but not for accuracy. In time difference imaging and functional EIT (f-EIT) systematic errors are reduced to a minimum by dividing the raw data by reference data. This is not the case in absolute EIT (a-EIT) where the resistivity of the examined object is determined on an absolute scale. We conclude that a reduction of systematic errors has to be one major goal in future system design.

  17. Poster Presentation: Optical Test of NGST Developmental Mirrors

    NASA Technical Reports Server (NTRS)

    Hadaway, James B.; Geary, Joseph; Reardon, Patrick; Peters, Bruce; Keidel, John; Chavers, Greg

    2000-01-01

    An Optical Testing System (OTS) has been developed to measure the figure and radius of curvature of NGST developmental mirrors in the vacuum, cryogenic environment of the X-Ray Calibration Facility (XRCF) at Marshall Space Flight Center (MSFC). The OTS consists of a WaveScope Shack-Hartmann sensor from Adaptive Optics Associates as the main instrument, a Point Diffraction Interferometer (PDI), a Point Spread Function (PSF) imager, an alignment system, a Leica Disto Pro distance measurement instrument, and a laser source palette (632.8 nm wavelength) that is fiber-coupled to the sensor instruments. All of the instruments except the laser source palette are located on a single breadboard known as the Wavefront Sensor Pallet (WSP). The WSP is located on top of a 5-DOF motion system located at the center of curvature of the test mirror. Two PCs are used to control the OTS. The error in the figure measurement is dominated by the WaveScope's measurement error. An analysis using the absolute wavefront gradient error of 1/50 wave P-V (at 0.6328 microns) provided by the manufacturer leads to a total surface figure measurement error of approximately 1/100 wave rms. This easily meets the requirement of 1/10 wave P-V. The error in radius of curvature is dominated by the Leica's absolute measurement error of ±1.5 mm and the focus setting error of ±1.4 mm, giving an overall error of ±2 mm. The OTS is currently being used to test the NGST Mirror System Demonstrators (NMSDs) and the Subscale Beryllium Mirror Demonstrator (SBMD).

  18. 3-D direct current resistivity anisotropic modelling by goal-oriented adaptive finite element methods

    NASA Astrophysics Data System (ADS)

    Ren, Zhengyong; Qiu, Lewen; Tang, Jingtian; Wu, Xiaoping; Xiao, Xiao; Zhou, Zilong

    2018-01-01

    Although accurate numerical solvers for 3-D direct current (DC) isotropic resistivity models are currently available, even for complicated models with topography, reliable numerical solvers for the anisotropic case are still an open question. This study aims to develop a novel and optimal numerical solver for accurately calculating the DC potentials for complicated models with arbitrary anisotropic conductivity structures in the Earth. First, a secondary potential boundary value problem is derived by considering the topography and the anisotropic conductivity. Then, two a posteriori error estimators, one using the gradient-recovery technique and one measuring the discontinuity of the normal component of current density, are developed for the anisotropic cases. Combining the goal-oriented and non-goal-oriented mesh refinements and these two error estimators, four different solving strategies are developed for complicated DC anisotropic forward modelling problems. A synthetic anisotropic two-layer model with analytic solutions verified the accuracy of our algorithms. A half-space model with a buried anisotropic cube and a mountain-valley model are adopted to test the convergence rates of these four solving strategies. We found that the error estimator based on the discontinuity of current density shows better performance than the gradient-recovery-based a posteriori error estimator for anisotropic models with conductivity contrasts. Both error estimators working together with goal-oriented concepts can offer optimal mesh density distributions and highly accurate solutions.

  19. Machine Learning for Discriminating Quantum Measurement Trajectories and Improving Readout.

    PubMed

    Magesan, Easwar; Gambetta, Jay M; Córcoles, A D; Chow, Jerry M

    2015-05-22

    Current methods for classifying measurement trajectories in superconducting qubit systems produce fidelities systematically lower than those predicted by experimental parameters. Here, we place current classification methods within the framework of machine learning (ML) algorithms and improve on them by investigating more sophisticated ML approaches. We find that nonlinear algorithms and clustering methods produce significantly higher assignment fidelities that help close the gap to the fidelity possible under ideal noise conditions. Clustering methods group trajectories into natural subsets within the data, which allows for the diagnosis of systematic errors. We find large clusters in the data associated with T1 processes and show these are the main source of discrepancy between our experimental and ideal fidelities. These error diagnosis techniques help provide a path forward to improve qubit measurements.
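
    The clustering idea can be sketched on synthetic trajectories: shots that decay mid-record (a T1-like event) form a cluster distinct from clean ground- and excited-state shots, so an unsupervised method can flag the error mechanism. The feature choice (half-record means), the plain k-means implementation and all data below are illustrative assumptions, not the classifiers or datasets used in the study.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic single-shot measurement trajectories: ground state near -1,
# excited state near +1, and a fraction of "excited" shots that decay
# halfway through the record (a T1-like event).
n_shots, n_samples = 300, 100
traj = np.empty((n_shots, n_samples))
labels_true = rng.integers(0, 3, size=n_shots)
for k, lab in enumerate(labels_true):
    if lab == 0:                      # ground
        mean = -np.ones(n_samples)
    elif lab == 1:                    # excited, no decay
        mean = np.ones(n_samples)
    else:                             # excited, decays mid-record
        mean = np.where(np.arange(n_samples) < 50, 1.0, -1.0)
    traj[k] = mean + rng.normal(scale=1.5, size=n_samples)

# Feature vector: mean of the first and second half of each trajectory.
feats = np.column_stack([traj[:, :50].mean(axis=1), traj[:, 50:].mean(axis=1)])

# Plain k-means (Lloyd's algorithm) with 3 clusters; the decayed shots
# separate into their own cluster, exposing the T1-like error mechanism.
centers = feats[rng.choice(n_shots, 3, replace=False)]
for _ in range(50):
    d = np.linalg.norm(feats[:, None, :] - centers[None, :, :], axis=2)
    assign = d.argmin(axis=1)
    new_centers = []
    for j in range(3):
        members = feats[assign == j]
        new_centers.append(members.mean(axis=0) if len(members) else centers[j])
    centers = np.array(new_centers)
print(np.round(centers, 2))
```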

  20. Onorbit IMU alignment error budget

    NASA Technical Reports Server (NTRS)

    Corson, R. W.

    1980-01-01

    The Star Tracker, Crew Optical Alignment Sight (COAS), and Inertial Measurement Unit (IMU) form a complex navigation system with a multitude of error sources. A complete list of the system errors is presented. The errors were combined in a rational way to yield an estimate of the IMU alignment accuracy for STS-1. The expected standard deviation in the IMU alignment error for STS-1 type alignments was determined to be 72 arc seconds per axis for star tracker alignments and 188 arc seconds per axis for COAS alignments. These estimates are based on current knowledge of the star tracker, COAS, IMU, and navigation base error specifications, and were partially verified by preliminary Monte Carlo analysis.
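
    One common way to combine independent error sources in a budget of this kind is a root-sum-square of the per-axis 1-sigma terms. The sketch below assumes that combination rule and uses invented numbers; it does not reproduce the actual STS-1 budget or its individual error terms.

```python
import numpy as np

# Illustrative per-axis 1-sigma alignment error terms (arc seconds); the
# source names and values are made up for the example.
error_sources_arcsec = {
    "star_tracker_measurement": 40.0,
    "imu_gimbal_resolution": 30.0,
    "navigation_base_mounting": 35.0,
    "residual_misc": 25.0,
}

# Root-sum-square combination of independent error sources.
total = np.sqrt(sum(v**2 for v in error_sources_arcsec.values()))
print(f"combined 1-sigma alignment error: {total:.0f} arcsec per axis")
```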

  1. Impact of transport and modelling errors on the estimation of methane sources and sinks by inverse modelling

    NASA Astrophysics Data System (ADS)

    Locatelli, Robin; Bousquet, Philippe; Chevallier, Frédéric

    2013-04-01

    Since the nineties, inverse modelling by assimilating atmospheric measurements into a chemical transport model (CTM) has been used to derive sources and sinks of atmospheric trace gases. More recently, the high global warming potential of methane (CH4) and unexplained variations of its atmospheric mixing ratio caught the attention of several research groups. Indeed, the diversity and the variability of methane sources induce high uncertainty on the present and the future evolution of the CH4 budget. With the increase of available measurement data to constrain inversions (satellite data, high frequency surface and tall tower observations, FTIR spectrometry, ...), the main limiting factor is about to become the representation of atmospheric transport in CTMs. Indeed, errors in transport modelling directly convert into flux changes when perfect transport is assumed in atmospheric inversions. Hence, we propose an inter-model comparison in order to quantify the impact of transport and modelling errors on the CH4 fluxes estimated in a variational inversion framework. Several inversion experiments are conducted using the same set-up (prior emissions, measurement and prior errors, OH field, initial conditions) of the variational system PYVAR, developed at LSCE (Laboratoire des Sciences du Climat et de l'Environnement, France). Nine different models (ACTM, IFS, IMPACT, IMPACT1x1, MOZART, PCTM, TM5, TM51x1 and TOMCAT) used in the TRANSCOM-CH4 experiment (Patra et al., 2011) provide synthetic measurement data at up to 280 surface sites to constrain the inversions performed using the PYVAR system. Only the CTM (and the meteorological drivers which drive it) used to create the pseudo-observations varies among inversions. Consequently, the comparison of the nine inverted methane flux estimates obtained for 2005 gives a good order of magnitude of the impact of transport and modelling errors on the fluxes estimated with current and future networks. It is shown that transport and modelling errors lead to a discrepancy of 27 TgCH4 per year at the global scale, representing 5% of the total methane emissions for 2005. At the continental scale, transport and modelling errors have larger impacts in proportion to the area of the regions, ranging from 36 TgCH4 in North America to 7 TgCH4 in Boreal Eurasia, with a percentage range from 23% to 48%. Thus, the contribution of transport and modelling errors to the mismatch between measurements and simulated methane concentrations is large considering the present questions on the methane budget. Moreover, diagnostics of the statistical errors included in our inversions have been computed. They show that the errors contained in the measurement error covariance matrix are underestimated in current inversions, suggesting that transport and modelling errors should be included more properly in future inversions.

  2. Point-of-care blood glucose measurement errors overestimate hypoglycaemia rates in critically ill patients.

    PubMed

    Nya-Ngatchou, Jean-Jacques; Corl, Dawn; Onstad, Susan; Yin, Tom; Tylee, Tracy; Suhr, Louise; Thompson, Rachel E; Wisse, Brent E

    2015-02-01

    Hypoglycaemia is associated with morbidity and mortality in critically ill patients, and many hospitals have programmes to minimize hypoglycaemia rates. Recent studies have established the hypoglycaemic patient-day as a key metric and have published benchmark inpatient hypoglycaemia rates on the basis of point-of-care blood glucose data even though these values are prone to measurement errors. A retrospective, cohort study including all patients admitted to Harborview Medical Center Intensive Care Units (ICUs) during 2010 and 2011 was conducted to evaluate a quality improvement programme to reduce inappropriate documentation of point-of-care blood glucose measurement errors. Laboratory Medicine point-of-care blood glucose data and patient charts were reviewed to evaluate all episodes of hypoglycaemia. A quality improvement intervention decreased measurement errors from 31% of hypoglycaemic (<70 mg/dL) patient-days in 2010 to 14% in 2011 (p < 0.001) and decreased the observed hypoglycaemia rate from 4.3% of ICU patient-days to 3.4% (p < 0.001). Hypoglycaemic events were frequently recurrent or prolonged (~40%), and these events are not identified by the hypoglycaemic patient-day metric, which also may be confounded by a large number of very low risk or minimally monitored patient-days. Documentation of point-of-care blood glucose measurement errors likely overestimates ICU hypoglycaemia rates and can be reduced by a quality improvement effort. The currently used hypoglycaemic patient-day metric does not evaluate recurrent or prolonged events that may be more likely to cause patient harm. The monitored patient-day as currently defined may not be the optimal denominator to determine inpatient hypoglycaemic risk. Copyright © 2014 John Wiley & Sons, Ltd.

  3. On the sensitivity of TG-119 and IROC credentialing to TPS commissioning errors.

    PubMed

    McVicker, Drew; Yin, Fang-Fang; Adamson, Justus D

    2016-01-08

    We investigate the sensitivity of IMRT commissioning using the TG-119 C-shape phantom and credentialing with the IROC head and neck phantom to treatment planning system commissioning errors. We introduced errors into the various aspects of the commissioning process for a 6X photon energy modeled using the analytical anisotropic algorithm within a commercial treatment planning system. Errors were implemented into the various components of the dose calculation algorithm including primary photons, secondary photons, electron contamination, and MLC parameters. For each error we evaluated the probability that it could be committed unknowingly during the dose algorithm commissioning stage, and the probability of it being identified during the verification stage. The clinical impact of each commissioning error was evaluated using representative IMRT plans including low and intermediate risk prostate, head and neck, mesothelioma, and scalp; the sensitivity of the TG-119 and IROC phantoms was evaluated by comparing dosimetric changes to the dose planes where film measurements occur and change in point doses where dosimeter measurements occur. No commissioning errors were found to have both a low probability of detection and high clinical severity. When errors do occur, the IROC credentialing and TG 119 commissioning criteria are generally effective at detecting them; however, for the IROC phantom, OAR point-dose measurements are the most sensitive despite being currently excluded from IROC analysis. Point-dose measurements with an absolute dose constraint were the most effective at detecting errors, while film analysis using a gamma comparison and the IROC film distance to agreement criteria were less effective at detecting the specific commissioning errors implemented here.

  4. Conditional Entropy and Location Error in Indoor Localization Using Probabilistic Wi-Fi Fingerprinting.

    PubMed

    Berkvens, Rafael; Peremans, Herbert; Weyn, Maarten

    2016-10-02

    Localization systems are increasingly valuable, but their location estimates are only useful when the uncertainty of the estimate is known. This uncertainty is currently calculated as the location error given a ground truth, which is then used as a static measure in sometimes very different environments. In contrast, we propose the use of the conditional entropy of a posterior probability distribution as a complementary measure of uncertainty. This measure has the advantage of being dynamic, i.e., it can be calculated during localization based on individual sensor measurements, does not require a ground truth, and can be applied to discrete localization algorithms. Furthermore, for every consistent location estimation algorithm, both the location error and the conditional entropy measures must be related, i.e., a low entropy should always correspond with a small location error, while a high entropy can correspond with either a small or large location error. We validate this relationship experimentally by calculating both measures of uncertainty in three publicly available datasets using probabilistic Wi-Fi fingerprinting with eight different implementations of the sensor model. We show that the discrepancy between these measures, i.e., many location estimates having a high location error while simultaneously having a low conditional entropy, is largest for the least realistic implementations of the probabilistic sensor model. Based on the results presented in this paper, we conclude that conditional entropy, being dynamic, complementary to location error, and applicable to both continuous and discrete localization, provides an important extra means of characterizing a localization method.
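
    The entropy measure described above can be sketched for a discrete fingerprinting posterior: a peaked posterior over candidate grid cells yields low entropy (a confident estimate), a flat posterior yields high entropy, and no ground truth is needed. The normalization, the bit units and the example posteriors below are illustrative assumptions rather than the paper's exact definition.

```python
import numpy as np

def posterior_entropy(posterior, eps=1e-12):
    """Shannon entropy (bits) of a discrete posterior over candidate locations."""
    p = np.asarray(posterior, dtype=float)
    p = p / p.sum()                       # ensure a proper probability distribution
    return float(-np.sum(p * np.log2(p + eps)))

# Illustrative posteriors over 8 fingerprint grid cells
peaked = [0.85, 0.05, 0.03, 0.02, 0.02, 0.01, 0.01, 0.01]
flat = [0.125] * 8
print(posterior_entropy(peaked), posterior_entropy(flat))
```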

  5. Conditional Entropy and Location Error in Indoor Localization Using Probabilistic Wi-Fi Fingerprinting

    PubMed Central

    Berkvens, Rafael; Peremans, Herbert; Weyn, Maarten

    2016-01-01

    Localization systems are increasingly valuable, but their location estimates are only useful when the uncertainty of the estimate is known. This uncertainty is currently calculated as the location error given a ground truth, which is then used as a static measure in sometimes very different environments. In contrast, we propose the use of the conditional entropy of a posterior probability distribution as a complementary measure of uncertainty. This measure has the advantage of being dynamic, i.e., it can be calculated during localization based on individual sensor measurements, does not require a ground truth, and can be applied to discrete localization algorithms. Furthermore, for every consistent location estimation algorithm, both the location error and the conditional entropy measures must be related, i.e., a low entropy should always correspond with a small location error, while a high entropy can correspond with either a small or large location error. We validate this relationship experimentally by calculating both measures of uncertainty in three publicly available datasets using probabilistic Wi-Fi fingerprinting with eight different implementations of the sensor model. We show that the discrepancy between these measures, i.e., many location estimates having a high location error while simultaneously having a low conditional entropy, is largest for the least realistic implementations of the probabilistic sensor model. Based on the results presented in this paper, we conclude that conditional entropy, being dynamic, complementary to location error, and applicable to both continuous and discrete localization, provides an important extra means of characterizing a localization method. PMID:27706099

  6. Evaluation of the depth-integration method of measuring water discharge in large rivers

    USGS Publications Warehouse

    Moody, J.A.; Troutman, B.M.

    1992-01-01

    The depth-integration method for measuring water discharge makes a continuous measurement of the water velocity from the water surface to the bottom at 20 to 40 locations or verticals across a river. It is especially practical for large rivers where river traffic makes it impractical to use boats attached to taglines strung across the river or to use current meters suspended from bridges. This method has the additional advantage over the standard two- and eight-tenths method in that a discharge-weighted suspended-sediment sample can be collected at the same time. When this method is used in large rivers such as the Missouri, Mississippi and Ohio, a microwave navigation system is used to determine the ship's position at each vertical sampling location across the river, and to make accurate velocity corrections to compensate for ship drift. An essential feature is a hydraulic winch that can lower and raise the current meter at a constant transit velocity so that the velocities at all depths are measured for equal lengths of time. Field calibration measurements show that: (1) the mean velocity measured on the upcast (bottom to surface) is within 1% of the standard mean velocity determined by 9-11 point measurements; (2) if the transit velocity is less than 25% of the mean velocity, then the average error in the mean velocity is 4% or less. The major source of bias error is a result of mounting the current meter above a sounding weight and sometimes above a suspended-sediment sampling bottle, which prevents measurement of the velocity all the way to the bottom. The measured mean velocity is slightly larger than the true mean velocity. This bias error in the discharge is largest in shallow water (approximately 8% for the Missouri River at Hermann, MO, where the mean depth was 4.3 m) and smallest in deeper water (approximately 3% for the Mississippi River at Vicksburg, MS, where the mean depth was 14.5 m). The major source of random error in the discharge is the natural variability of river velocities, which we assumed to be independent and random at each vertical. The standard error of the estimated mean velocity, at an individual vertical sampling location, may be as large as 9% for large sand-bed alluvial rivers. The computed discharge, however, is a weighted mean of these random velocities. Consequently, the standard error of the computed discharge is divided by the square root of the number of verticals, producing typical values between 1 and 2%. The discharges measured by the depth-integration method agreed within ±5% of those measured simultaneously by the standard two- and eight-tenths, six-tenths, and moving-boat methods. © 1992.
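
    The discharge computation and the 1/sqrt(N) behaviour of the random error can be sketched as follows. The mid-section method below (each vertical contributes velocity x depth x half the spacing to its neighbours) is a common way to sum verticals into a discharge, used here only as an illustration; the cross-section, velocities and the 9% per-vertical error figure are made-up or taken loosely from the abstract, and the USGS procedure includes edge and correction terms not shown.

```python
import numpy as np

def midsection_discharge(stations_m, depths_m, mean_velocities_ms):
    """Total discharge (m^3/s) from verticals using the mid-section method."""
    x = np.asarray(stations_m, dtype=float)
    d = np.asarray(depths_m, dtype=float)
    v = np.asarray(mean_velocities_ms, dtype=float)
    w = np.empty_like(x)
    w[1:-1] = 0.5 * (x[2:] - x[:-2])      # width assigned to interior verticals
    w[0] = 0.5 * (x[1] - x[0])
    w[-1] = 0.5 * (x[-1] - x[-2])
    return float(np.sum(v * d * w))

# 25 verticals across an illustrative cross-section
x = np.linspace(0.0, 480.0, 25)
d = 10.0 * np.sin(np.pi * x / 480.0) + 2.0
v = 1.2 * np.sqrt(d / d.max())
Q = midsection_discharge(x, d, v)

# If each vertical's mean velocity carries an independent ~9% random error,
# the discharge (a weighted mean over N verticals) carries roughly 9%/sqrt(N).
print(Q, 9.0 / np.sqrt(len(x)), "% relative standard error")
```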

  7. CameraHRV: robust measurement of heart rate variability using a camera

    NASA Astrophysics Data System (ADS)

    Pai, Amruta; Veeraraghavan, Ashok; Sabharwal, Ashutosh

    2018-02-01

    The inter-beat interval (the time period of the cardiac cycle) changes slightly with every heartbeat; this variation is measured as Heart Rate Variability (HRV). HRV is presumed to occur due to interactions between the parasympathetic and sympathetic nervous systems. Therefore, it is sometimes used as an indicator of the stress level of an individual. HRV also reveals some clinical information about cardiac health. Currently, HRV is accurately measured using contact devices such as a pulse oximeter. However, recent research in the field of non-contact imaging photoplethysmography (iPPG) has made vital sign measurements using just the video recording of any exposed skin (such as a person's face) possible. The current signal processing methods for extracting HRV using peak detection perform well for contact-based systems but have poor performance for iPPG signals. The main reason for this poor performance is that current methods are sensitive to the large noise sources often present in iPPG data. Further, current methods are not robust to motion artifacts that are common in iPPG systems. We developed a new algorithm, CameraHRV, for robustly extracting HRV even at the low SNR common in iPPG recordings. CameraHRV combines spatial combination and frequency demodulation to obtain HRV from the instantaneous frequency of the iPPG signal. CameraHRV outperforms other current methods of HRV estimation. Ground truth data were obtained from an FDA-approved pulse oximeter for validation purposes. CameraHRV on iPPG data showed an error of 6 milliseconds for low-motion and varying skin-tone scenarios. The improvement in error was 14%. In the case of high-motion scenarios like reading, watching and talking, the error was 10 milliseconds.
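
    A generic frequency-demodulation step of the kind mentioned above can be sketched with a band-pass filter, the analytic signal, and the derivative of the unwrapped phase. This is not the CameraHRV algorithm itself; the filter band, sampling rate and synthetic signal are all illustrative assumptions.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def instantaneous_rate(ppg, fs):
    """Estimate an instantaneous heart rate (Hz) from a PPG-like waveform.

    Band-pass around the cardiac band, take the analytic signal via the
    Hilbert transform, and differentiate the unwrapped phase.
    """
    b, a = butter(3, [0.7 / (fs / 2), 3.0 / (fs / 2)], btype="band")
    x = filtfilt(b, a, ppg)
    phase = np.unwrap(np.angle(hilbert(x)))
    return np.gradient(phase, 1.0 / fs) / (2.0 * np.pi)

# Synthetic 30 fps signal whose rate wanders slowly around 1.1 Hz
fs = 30.0
t = np.arange(0.0, 60.0, 1.0 / fs)
rate_true = 1.1 + 0.05 * np.sin(2.0 * np.pi * 0.1 * t)
ppg = np.sin(2.0 * np.pi * np.cumsum(rate_true) / fs) + 0.1 * np.random.randn(t.size)

rate_est = instantaneous_rate(ppg, fs)
print(np.mean(np.abs(rate_est[100:-100] - rate_true[100:-100])))  # Hz, edges trimmed
```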

  8. Time dependent wind fields

    NASA Technical Reports Server (NTRS)

    Chelton, D. B.

    1986-01-01

    Two tasks were performed: (1) determination of the accuracy of Seasat scatterometer, altimeter, and scanning multichannel microwave radiometer measurements of wind speed; and (2) application of Seasat altimeter measurements of sea level to study the spatial and temporal variability of geostrophic flow in the Antarctic Circumpolar Current. The results of the first task have identified systematic errors in wind speeds estimated by all three satellite sensors. However, in all cases the errors are correctable and corrected wind speeds agree between the three sensors to better than 1 m s⁻¹ in 96-day 2 deg. latitude by 6 deg. longitude averages. The second task has resulted in development of a new technique for using altimeter sea level measurements to study the temporal variability of large scale sea level variations. Application of the technique to the Antarctic Circumpolar Current yielded new information about the ocean circulation in this region of the ocean that is poorly sampled by conventional ship-based measurements.

  9. Assessing Suturing Skills in a Self-Guided Learning Setting: Absolute Symmetry Error

    ERIC Educational Resources Information Center

    Brydges, Ryan; Carnahan, Heather; Dubrowski, Adam

    2009-01-01

    Directed self-guidance, whereby trainees independently practice a skill-set in a structured setting, may be an effective technique for novice training. Currently, however, most evaluation methods require an expert to be present during practice. The study aim was to determine if absolute symmetry error, a clinically important measure that can be…

  10. The international food unit: a new measurement aid that can improve portion size estimation.

    PubMed

    Bucher, T; Weltert, M; Rollo, M E; Smith, S P; Jia, W; Collins, C E; Sun, M

    2017-09-12

    Portion size education tools, aids and interventions can be effective in helping prevent weight gain. However, consumers have difficulties in estimating food portion sizes and are confused by inconsistencies in the measurement units and terminologies currently used. Visual cues are an important mediator of portion size estimation, but standardized measurement units are required. In the current study, we present a new food volume estimation tool and test the ability of young adults to accurately quantify food volumes. The International Food Unit™ (IFU™) is a 4x4x4 cm cube (64 cm³), subdivided into eight 2 cm sub-cubes for estimating smaller food volumes. Compared with currently used measures such as cups and spoons, the IFU™ standardizes estimation of food volumes with metric measures. The IFU™ design is based on binary dimensional increments and the cubic shape facilitates portion size education and training, memory and recall, and computer processing, which is binary in nature. The performance of the IFU™ was tested in a randomized between-subject experiment (n = 128 adults, 66 men) that estimated volumes of 17 foods using four methods: the IFU™ cube, a deformable modelling clay cube, a household measuring cup, or no aid (weight estimation). Estimation errors were compared between groups using Kruskal-Wallis tests and post-hoc comparisons. Estimation errors differed significantly between groups (H(3) = 28.48, p < .001). The volume estimations were most accurate in the group using the IFU™ cube (Mdn = 18.9%, IQR = 50.2) and least accurate using the measuring cup (Mdn = 87.7%, IQR = 56.1). The modelling clay cube led to a median error of 44.8% (IQR = 41.9). Compared with the measuring cup, the estimation errors using the IFU™ were significantly smaller for 12 food portions and similar for 5 food portions. Weight estimation was associated with a median error of 23.5% (IQR = 79.8). The IFU™ improves volume estimation accuracy compared to other methods. The cubic shape was perceived as favourable, with subdivision and multiplication facilitating volume estimation. Further studies should investigate whether the IFU™ can facilitate portion size training and whether portion size education using the IFU™ is effective and sustainable without the aid. A 3-dimensional IFU™ could serve as a reference object for estimating food volume.
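
    The between-group comparison design described above (Kruskal-Wallis followed by pairwise post-hoc tests) can be sketched on synthetic data. The error distributions, group sizes and the Mann-Whitney post-hoc choice below are illustrative assumptions; they only loosely echo the reported medians and are not the study's data or exact analysis plan.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

# Synthetic relative estimation errors (%) for three aids (made-up values).
ifu_cube = np.abs(rng.normal(20, 25, 32))
clay_cube = np.abs(rng.normal(45, 30, 32))
measuring_cup = np.abs(rng.normal(88, 35, 32))

# Omnibus comparison across groups, then one pairwise post-hoc comparison.
H, p = stats.kruskal(ifu_cube, clay_cube, measuring_cup)
print(f"Kruskal-Wallis H = {H:.2f}, p = {p:.4f}")
print(stats.mannwhitneyu(ifu_cube, measuring_cup, alternative="two-sided"))
```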

  11. Eddy current measurement of the thickness of top Cu film of the multilayer interconnects in the integrated circuit (IC) manufacturing process

    NASA Astrophysics Data System (ADS)

    Qu, Zilian; Meng, Yonggang; Zhao, Qian

    2015-03-01

    This paper proposes a new eddy current method, named equivalent unit method (EUM), for the thickness measurement of the top copper film of multilayer interconnects in the chemical mechanical polishing (CMP) process, which is an important step in the integrated circuit (IC) manufacturing. The influence of the underneath circuit layers on the eddy current is modeled and treated as an equivalent film thickness. By subtracting this equivalent film component, the accuracy of the thickness measurement of the top copper layer with an eddy current sensor is improved and the absolute error is 3 nm for sample measurements.

  12. Quantification of LiDAR measurement uncertainty through propagation of errors due to sensor sub-systems and terrain morphology

    NASA Astrophysics Data System (ADS)

    Goulden, T.; Hopkinson, C.

    2013-12-01

    The quantification of LiDAR sensor measurement uncertainty is important for evaluating the quality of derived DEM products, compiling risk assessment of management decisions based from LiDAR information, and enhancing LiDAR mission planning capabilities. Current quality assurance estimates of LiDAR measurement uncertainty are limited to post-survey empirical assessments or vendor estimates from commercial literature. Empirical evidence can provide valuable information for the performance of the sensor in validated areas; however, it cannot characterize the spatial distribution of measurement uncertainty throughout the extensive coverage of typical LiDAR surveys. Vendor advertised error estimates are often restricted to strict and optimal survey conditions, resulting in idealized values. Numerical modeling of individual pulse uncertainty provides an alternative method for estimating LiDAR measurement uncertainty. LiDAR measurement uncertainty is theoretically assumed to fall into three distinct categories: 1) sensor sub-system errors, 2) terrain influences, and 3) vegetative influences. This research details the procedures for numerical modeling of measurement uncertainty from the sensor sub-system (GPS, IMU, laser scanner, laser ranger) and terrain influences. Results show that errors tend to increase as the laser scan angle, altitude or laser beam incidence angle increase. An experimental survey over a flat and paved runway site, performed with an Optech ALTM 3100 sensor, showed an increase in modeled vertical errors from 5 cm at a nadir scan orientation to 8 cm at scan edges, for an aircraft altitude of 1200 m and half scan angle of 15°. In a survey with the same sensor, at a highly sloped glacial basin site absent of vegetation, modeled vertical errors reached over 2 m. Validation of error models within the glacial environment, over three separate flight lines, showed that 100%, 85%, and 75% of elevation residuals, respectively, fell below error predictions. Future work in LiDAR sensor measurement uncertainty must focus on the development of vegetative error models to create more robust error prediction algorithms. To achieve this objective, comprehensive empirical exploratory analysis is recommended to relate vegetative parameters to observed errors.

  13. Commentary: Reducing diagnostic errors: another role for checklists?

    PubMed

    Winters, Bradford D; Aswani, Monica S; Pronovost, Peter J

    2011-03-01

    Diagnostic errors are a widespread problem, although the true magnitude is unknown because they cannot currently be measured validly. These errors have received relatively little attention despite alarming estimates of associated harm and death. One promising intervention to reduce preventable harm is the checklist. This intervention has proven successful in aviation, in which situations are linear and deterministic (one alarm goes off and a checklist guides the flight crew to evaluate the cause). In health care, problems are multifactorial and complex. A checklist has been used to reduce central-line-associated bloodstream infections in intensive care units. Nevertheless, this checklist was incorporated in a culture-based safety program that engaged and changed behaviors and used robust measurement of infections to evaluate progress. In this issue, Ely and colleagues describe how three checklists could reduce the cognitive biases and mental shortcuts that underlie diagnostic errors, but point out that these tools still need to be tested. To be effective, they must reduce diagnostic errors (efficacy) and be routinely used in practice (effectiveness). Such tools must intuitively support how the human brain works, and under time pressures, clinicians rarely think in conditional probabilities when making decisions. To move forward, it is necessary to accurately measure diagnostic errors (which could come from mapping out the diagnostic process as the medication process has done and measuring errors at each step) and pilot test interventions such as these checklists to determine whether they work.

  14. Variance of discharge estimates sampled using acoustic Doppler current profilers from moving boats

    USGS Publications Warehouse

    Garcia, Carlos M.; Tarrab, Leticia; Oberg, Kevin; Szupiany, Ricardo; Cantero, Mariano I.

    2012-01-01

    This paper presents a model for quantifying the random errors (i.e., variance) of acoustic Doppler current profiler (ADCP) discharge measurements from moving boats for different sampling times. The model focuses on the random processes in the sampled flow field and has been developed using statistical methods currently available for uncertainty analysis of velocity time series. Analysis of field data collected using ADCP from moving boats from three natural rivers of varying sizes and flow conditions shows that, even though the estimate of the integral time scale of the actual turbulent flow field is larger than the sampling interval, the integral time scale of the sampled flow field is on the order of the sampling interval. Thus, an equation for computing the variance error in discharge measurements associated with different sampling times, assuming uncorrelated flow fields is appropriate. The approach is used to help define optimal sampling strategies by choosing the exposure time required for ADCPs to accurately measure flow discharge.
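
    The practical consequence of the finding above (sampled flow fields effectively uncorrelated at the sampling interval) can be sketched with a simple 1/sqrt(N) relation between exposure time and the random error of the averaged discharge. The functional form, the turbulence-intensity figure and the times below are illustrative assumptions and do not reproduce the paper's variance model.

```python
import numpy as np

def discharge_relative_std(turbulence_intensity, exposure_time_s, sample_interval_s):
    """Approximate relative standard deviation of an averaged discharge estimate.

    Treats successive sampled velocity fields as uncorrelated, so the random
    error of the average falls as 1/sqrt(N) with N = exposure time / interval.
    """
    n = max(1.0, exposure_time_s / sample_interval_s)
    return turbulence_intensity / np.sqrt(n)

# Example: 10% turbulence intensity, 1 s ensembles, exposure times from one
# short transect to several transects.
for T in (60, 180, 360, 720):
    print(T, "s ->", round(100 * discharge_relative_std(0.10, T, 1.0), 2), "%")
```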

  15. Irregular analytical errors in diagnostic testing - a novel concept.

    PubMed

    Vogeser, Michael; Seger, Christoph

    2018-02-23

    In laboratory medicine, routine periodic analyses for internal and external quality control measurements, interpreted by statistical methods, are mandatory for batch clearance. Data analysis of these process-oriented measurements allows insight into random analytical variation and systematic calibration bias over time. However, in such a setting, no individual sample is under individual quality control; the quality control measurements act only at the batch level. Many effects and interferences associated with an individual diagnostic sample can compromise the quantitative or qualitative data derived for any analyte, and a quality-control-sample-based approach to quality assurance is by construction not sensitive to such errors. To address the potential causes and nature of such analytical interference in individual samples more systematically, we suggest the introduction of a new term, the irregular (individual) analytical error. Practically, this term can be applied in any analytical assay that is traceable to a reference measurement system. For an individual sample, an irregular analytical error is defined as an inaccuracy (the deviation from a reference measurement procedure result) of a test result that is so large it cannot be explained by the measurement uncertainty of the routine assay operating within the accepted limitations of the associated process quality control measurements. The deviation can be defined as the linear combination of the process measurement uncertainty and the method bias for the reference measurement system. Such errors should be coined irregular analytical errors of the individual sample. The measurement result is compromised either by an irregular effect associated with the individual composition (matrix) of the sample or by a processing error in the analytical process affecting that single sample. Currently, the availability of reference measurement procedures is still highly limited, but LC-isotope-dilution mass spectrometry methods are increasingly used for pre-market validation of routine diagnostic assays (these tests also involve substantial sets of clinical validation samples). Based on this definition and terminology, we list recognized causes of irregular analytical error as a risk catalog for clinical chemistry in this article. These include reproducible individual analytical errors (e.g. caused by anti-reagent antibodies) and non-reproducible, sporadic errors (e.g. incorrect pipetted volume due to air bubbles in a sample), both of which can lead to inaccurate results and risks for patients.

  16. Performance of the NASA Digitizing Core-Loss Instrumentation

    NASA Technical Reports Server (NTRS)

    Schwarze, Gene E. (Technical Monitor); Niedra, Janis M.

    2003-01-01

    The standard method of magnetic core loss measurement was implemented on a high frequency digitizing oscilloscope in order to explore the limits to accuracy when characterizing high Q cores at frequencies up to 1 MHz. This method computes core loss from the cycle mean of the product of the exciting current in a primary winding and the induced voltage in a separate flux sensing winding. It is pointed out that just 20 percent accuracy for a core material with a Q of 100 requires a phase angle accuracy of 0.1° between the voltage and current measurements. Experiment shows that at 1 MHz, even high quality, high frequency current sensing transformers can introduce phase errors of a degree or more. Because the Q of some quasilinear core materials can exceed 300 at frequencies below 100 kHz, phase angle errors can be a problem even at 50 kHz. Hence great care is necessary with current sensing and ground loops when measuring high Q cores. The best high frequency current sensing accuracy was obtained from a fabricated 0.1-ohm coaxial resistor, differentially sensed. Sample high frequency core loss data taken with the setup for a permeability-14 MPP core are presented.
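
    The sensitivity quoted above (0.1° of phase accuracy for roughly 20 percent loss accuracy at Q = 100) follows directly from taking the cycle mean of nearly quadrature voltage and current; the short check below reproduces that scaling numerically. The purely sinusoidal waveforms are an idealization, not the measured ones.

      import numpy as np

      Q = 100.0
      delta = np.arctan(1.0 / Q)                # loss angle of the core material
      t = np.linspace(0.0, 1.0, 20000, endpoint=False)   # one cycle, unit period
      v = np.sin(2 * np.pi * t)                 # flux-sensing winding voltage

      def measured_loss(phase_error_deg):
          """Core loss as the cycle mean of v*i, with a phase error on the sensed current."""
          i = np.sin(2 * np.pi * t - (np.pi / 2 - delta) + np.radians(phase_error_deg))
          return np.mean(v * i)

      p_true = measured_loss(0.0)
      p_meas = measured_loss(0.1)               # 0.1 degree current-sensing phase error
      print(f"relative loss error: {(p_meas - p_true) / p_true:.1%}")   # about +17%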

  17. Resection plane-dependent error in computed tomography volumetry of the right hepatic lobe in living liver donors

    PubMed Central

    Kwon, Heon-Ju; Kim, Bohyun; Kim, So Yeon; Lee, Chul Seung; Lee, Jeongjin; Song, Gi Won; Lee, Sung Gyu

    2018-01-01

    Background/Aims: Computed tomography (CT) hepatic volumetry is currently accepted as the most reliable method for preoperative estimation of graft weight in living donor liver transplantation (LDLT). However, several factors can cause inaccuracies in CT volumetry compared to real graft weight. The purpose of this study was to determine the frequency and degree of resection plane-dependent error in CT volumetry of the right hepatic lobe in LDLT. Methods: Forty-six living liver donors underwent CT before donor surgery and on postoperative day 7. Prospective CT volumetry (VP) was measured via the assumptive hepatectomy plane. Retrospective liver volume (VR) was measured using the actual plane by comparing preoperative and postoperative CT. Compared with intraoperatively measured weight (W), errors in percentage (%) VP and VR were evaluated. Plane-dependent error in VP was defined as the absolute difference between VP and VR. % plane-dependent error was defined as follows: |VP–VR|/W∙100. Results: Mean VP, VR, and W were 761.9 mL, 755.0 mL, and 696.9 g. Mean and % errors in VP were 73.3 mL and 10.7%. Mean error and % error in VR were 64.4 mL and 9.3%. Mean plane-dependent error in VP was 32.4 mL. Mean % plane-dependent error was 4.7%. Plane-dependent error in VP exceeded 10% of W in approximately 10% of the subjects in our study. Conclusions: There was approximately 5% plane-dependent error in liver VP on CT volumetry. Plane-dependent error in VP exceeded 10% of W in approximately 10% of LDLT donors in our study. This error should be considered, especially when CT volumetry is performed by a less experienced operator who is not well acquainted with the donor hepatectomy plane. PMID:28759989
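
    The error definitions above can be made concrete for a single donor; the three input values below are invented purely for illustration and are not data from the study.

      V_P, V_R, W = 820.0, 780.0, 730.0   # mL, mL, g -- hypothetical single donor
      err_VP_pct = abs(V_P - W) / W * 100       # % error in prospective volumetry
      err_VR_pct = abs(V_R - W) / W * 100       # % error in retrospective volumetry
      plane_dep = abs(V_P - V_R)                # plane-dependent error (mL)
      plane_dep_pct = plane_dep / W * 100       # % plane-dependent error
      print(err_VP_pct, err_VR_pct, plane_dep, plane_dep_pct)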

  18. Linear error analysis of slope-area discharge determinations

    USGS Publications Warehouse

    Kirby, W.H.

    1987-01-01

    The slope-area method can be used to calculate peak flood discharges when current-meter measurements are not possible. This calculation depends on several quantities, such as water-surface fall, that are subject to large measurement errors. Other critical quantities, such as Manning's n, are not even amenable to direct measurement but can only be estimated. Finally, scour and fill may cause gross discrepancies between the observed condition of the channel and the hydraulic conditions during the flood peak. The effects of these potential errors on the accuracy of the computed discharge have been estimated by statistical error analysis using a Taylor-series approximation of the discharge formula and the well-known formula for the variance of a sum of correlated random variates. The resultant error variance of the computed discharge is a weighted sum of covariances of the various observational errors. The weights depend on the hydraulic and geometric configuration of the channel. The mathematical analysis confirms the rule of thumb that relative errors in computed discharge increase rapidly when velocity heads exceed the water-surface fall, when the flow field is expanding and when lateral velocity variation (alpha) is large. It also confirms the extreme importance of accurately assessing the presence of scour or fill. © 1987.
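
    A generic first-order (Taylor-series) propagation of the kind described can be sketched in a few lines; the discharge gradient and covariance values below are placeholders, not the slope-area formula itself.

      import numpy as np

      def propagate_variance(grad, cov):
          """First-order error propagation: Var(Q) = g^T C g, where g holds the
          partial derivatives of Q with respect to each measured or estimated input."""
          grad = np.asarray(grad, dtype=float)
          return grad @ cov @ grad

      # Placeholder example: Q depends on three inputs with correlated errors.
      grad = [12.0, -3.5, 40.0]                 # dQ/dx_i evaluated at the observed values
      cov = np.array([[0.04, 0.01, 0.00],
                      [0.01, 0.09, 0.00],
                      [0.00, 0.00, 0.25]])      # covariance matrix of the input errors
      var_Q = propagate_variance(grad, cov)
      print(var_Q, np.sqrt(var_Q))

    The weights (the partial derivatives) are what carry the dependence on the hydraulic and geometric configuration of the channel mentioned above.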

  19. Measurement Error and Bias in Value-Added Models. Research Report. ETS RR-17-25

    ERIC Educational Resources Information Center

    Kane, Michael T.

    2017-01-01

    By aggregating residual gain scores (the differences between each student's current score and a predicted score based on prior performance) for a school or a teacher, value-added models (VAMs) can be used to generate estimates of school or teacher effects. It is known that random errors in the prior scores will introduce bias into predictions of…

  20. Image-based overlay measurement using subsurface ultrasonic resonance force microscopy

    NASA Astrophysics Data System (ADS)

    Tamer, M. S.; van der Lans, M. J.; Sadeghian, H.

    2018-03-01

    Image Based Overlay (IBO) measurement is one of the most common techniques used in Integrated Circuit (IC) manufacturing to extract the overlay error values. The overlay error is measured using dedicated overlay targets which are optimized to increase the accuracy and the resolution, but these features are much larger than the IC feature size. IBO measurements are realized on the dedicated targets instead of product features, because the current overlay metrology solutions, mainly based on optics, cannot provide sufficient resolution on product features. However, considering the fact that the overlay error tolerance is approaching 2 nm, the overlay error measurement on product features becomes a need for the industry. For sub-nanometer resolution metrology, Scanning Probe Microscopy (SPM) is widely used, though at the cost of very low throughput. The semiconductor industry is interested in non-destructive imaging of buried structures under one or more layers for the application of overlay and wafer alignment, specifically through optically opaque media. Recently an SPM technique has been developed for imaging subsurface features which can be potentially considered as a solution for overlay metrology. In this paper we present the use of Subsurface Ultrasonic Resonance Force Microscopy (SSURFM) used for IBO measurement. We used SSURFM for imaging the most commonly used overlay targets on a silicon substrate and photoresist. As a proof of concept we have imaged surface and subsurface structures simultaneously. The surface and subsurface features of the overlay targets are fabricated with programmed overlay errors of +/-40 nm, +/-20 nm, and 0 nm. The top layer thickness changes between 30 nm and 80 nm. Using SSURFM the surface and subsurface features were successfully imaged and the overlay errors were extracted, via a rudimentary image processing algorithm. The measurement results are in agreement with the nominal values of the programmed overlay errors.

  1. Error field optimization in DIII-D using extremum seeking control

    NASA Astrophysics Data System (ADS)

    Lanctot, M. J.; Olofsson, K. E. J.; Capella, M.; Humphreys, D. A.; Eidietis, N.; Hanson, J. M.; Paz-Soldan, C.; Strait, E. J.; Walker, M. L.

    2016-07-01

    DIII-D experiments have demonstrated a new real-time approach to tokamak error field control based on maximizing the toroidal angular momentum. This approach uses extremum seeking control theory to optimize the error field in real time without inducing instabilities. Slowly rotating n = 1 fields (the dither), generated by external coils, are used to perturb the angular momentum, which is monitored in real time using a charge-exchange spectroscopy diagnostic. Simple signal processing of the rotation measurements extracts information about the rotation gradient with respect to the control coil currents. This information is used to converge the control coil currents to a point that maximizes the toroidal angular momentum. The technique is well-suited for multi-coil, multi-harmonic error field optimizations in disruption-sensitive devices as it does not require triggering locked tearing modes or plasma current disruptions. Control simulations highlight the importance of the initial search direction on the rate of convergence, and identify future algorithm upgrades that may allow more rapid convergence, projecting to convergence times in ITER on the order of tens of seconds.
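
    A minimal, generic extremum-seeking loop of the kind described above is sketched below: a slow sinusoidal dither is added to the coil-current command, the measured response (a stand-in for the rotation signal) is washed out and demodulated against the dither, and the low-pass-filtered product drives the command toward the maximum. The plant model, filter cutoffs, and gains are invented for illustration; this is not the DIII-D controller.

      import numpy as np

      def rotation_proxy(i_coil):
          """Stand-in for toroidal rotation versus error-field coil current (peak at 2.0)."""
          return 50.0 - 4.0 * (i_coil - 2.0) ** 2

      dt = 0.01
      a, w = 0.05, 2.0            # dither amplitude and frequency
      wh, wl, k = 0.5, 0.5, 0.5   # washout cutoff, low-pass cutoff, integrator gain
      i_hat = 0.0                 # slowly varying coil-current command
      y_avg = rotation_proxy(i_hat)   # state of the washout (high-pass) filter
      grad = 0.0                  # low-pass-filtered gradient estimate
      for n in range(150_000):
          t = n * dt
          y = rotation_proxy(i_hat + a * np.sin(w * t))
          y_avg += dt * wh * (y - y_avg)        # washout removes the slow mean of y
          demod = (y - y_avg) * np.sin(w * t)   # demodulation against the dither
          grad += dt * wl * (demod - grad)      # roughly proportional to a*f'(i_hat)/2
          i_hat += dt * k * grad                # integrate toward the rotation maximum

      print(round(i_hat, 2))   # settles near the optimum at 2.0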

  2. Global and regional kinematics with GPS

    NASA Technical Reports Server (NTRS)

    King, Robert W.

    1994-01-01

    The inherent precision of the doubly differenced phase measurement and the low cost of instrumentation made GPS the space geodetic technique of choice for regional surveys as soon as the constellation reached acceptable geometry in the area of interest: 1985 in western North America, the early 1990's in most of the world. Instrument and site-related errors for horizontal positioning are usually less than 3 mm, so that the dominant source of error is uncertainty in the reference frame defined by the satellites' orbits and the tracking stations used to determine them. Prior to about 1992, when the tracking network for most experiments was globally sparse, the number of fiducial sites or the level at which they could be tied to an SLR or VLBI reference frame usually set the accuracy limit. Recently, with a global network of over 30 stations, the limit is set more often by deficiencies in models for non-gravitational forces acting on the satellites. For regional networks in the northern hemisphere, reference frame errors are currently about 3 parts per billion (ppb) in horizontal position, allowing centimeter-level accuracies over intercontinental distances and less than 1 mm for a 100 km baseline. The accuracy of GPS measurements for monitoring height variations is generally 2-3 times worse than for horizontal motions. As for VLBI, the primary source of error is unmodeled fluctuations in atmospheric water vapor, but both reference frame uncertainties and some instrument errors are more serious for vertical than horizontal measurements. Under good conditions, daily repeatabilities at the level of 10 mm rms have been achieved. This paper summarizes the current accuracy of GPS measurements and their implications for the use of SLR to study regional kinematics.

  3. Assimilation of TOPEX/Poseidon altimeter data into a global ocean circulation model: How good are the results?

    NASA Astrophysics Data System (ADS)

    Fukumori, Ichiro; Raghunath, Ramanujam; Fu, Lee-Lueng; Chao, Yi

    1999-11-01

    The feasibility of assimilating satellite altimetry data into a global ocean general circulation model is studied. Three years of TOPEX/Poseidon data are analyzed using a global, three-dimensional, nonlinear primitive equation model. The assimilation's success is examined by analyzing its consistency and its reliability, measured by formal error estimates, with respect to independent measurements. Improvements in the model solution are demonstrated, in particular for properties not directly measured. Comparisons are performed with sea level measured by tide gauges, subsurface temperatures and currents from moorings, and bottom pressure measurements. Model representation errors dictate what can and cannot be resolved by the assimilation, and their identification is emphasized.

  4. Correlation methods in optical metrology with state-of-the-art x-ray mirrors

    NASA Astrophysics Data System (ADS)

    Yashchuk, Valeriy V.; Centers, Gary; Gevorkyan, Gevork S.; Lacey, Ian; Smith, Brian V.

    2018-01-01

    The development of fully coherent free electron lasers and diffraction-limited storage ring x-ray sources has brought into focus the need for higher performing x-ray optics with unprecedented tolerances for surface slope and height errors and roughness. For example, the proposed beamlines for the future upgraded Advanced Light Source, ALS-U, require optical elements characterized by a residual slope error of <100 nrad (root-mean-square) and height error of <1-2 nm (peak-to-valley). These are for optics with a length of up to one meter. However, the current performance of x-ray optical fabrication and metrology generally falls short of these requirements. The major limitation comes from the lack of reliable and efficient surface metrology with the required accuracy and a reasonably high measurement rate, suitable for integration into modern deterministic surface figuring processes. The major problems of current surface metrology relate to inherent instrumental temporal drifts, systematic errors, and/or an unacceptably high cost, as in the case of interferometry with computer-generated holograms as a reference. In this paper, we discuss the experimental methods and approaches, based on correlation analysis, for the acquisition and processing of metrology data developed at the ALS X-Ray Optical Laboratory (XROL). Using an example of surface topography measurements of a state-of-the-art x-ray mirror performed at the XROL, we demonstrate the efficiency of combining the developed experimental correlation methods with the advanced optimal scanning strategy (AOSS) technique. This allows a significant improvement in the accuracy and capacity of the measurements via suppression of the instrumental low frequency noise, temporal drift, and systematic error in a single measurement run. Practically speaking, implementation of the AOSS technique leads to an increase of the measurement accuracy, as well as the capacity of ex situ metrology, by a factor of about four. The developed method is general and applicable to a broad spectrum of high accuracy measurements.

  5. A New Correction Technique for Strain-Gage Measurements Acquired in Transient-Temperature Environments

    NASA Technical Reports Server (NTRS)

    Richards, W. Lance

    1996-01-01

    Significant strain-gage errors may exist in measurements acquired in transient-temperature environments if conventional correction methods are applied. As heating or cooling rates increase, temperature gradients between the strain-gage sensor and substrate surface increase proportionally. These temperature gradients introduce strain-measurement errors that are currently neglected in both conventional strain-correction theory and practice. Therefore, the conventional correction theory has been modified to account for these errors. A new experimental method has been developed to correct strain-gage measurements acquired in environments experiencing significant temperature transients. The new correction technique has been demonstrated through a series of tests in which strain measurements were acquired for temperature-rise rates ranging from 1 to greater than 100 degrees F/sec. Strain-gage data from these tests have been corrected with both the new and conventional methods and then compared with an analysis. Results show that, for temperature-rise rates greater than 10 degrees F/sec, the strain measurements corrected with the conventional technique produced strain errors that deviated from analysis by as much as 45 percent, whereas results corrected with the new technique were in good agreement with analytical results.

  6. Comparison of Measurements from Pressure-recording Inverted Echo Sounders and Satellite Altimetry in the North Equatorial Current Region of the Western Pacific

    NASA Astrophysics Data System (ADS)

    Jeon, Chanhyung; Park, Jae-Hun; Kim, Dong Guk; Kim, Eung; Jeon, Dongchull

    2018-04-01

    An array of 5 pressure-recording inverted echo sounders (PIESs) was deployed along the Jason-2 214 ground track in the North Equatorial Current (NEC) region of the western Pacific Ocean for about 2 years from June 2012. Round-trip acoustic travel time from the bottom to the sea surface and bottom pressure measurements from PIES were converted to sea level anomaly (SLA). AVISO along-track mono-mission SLA (Mono-SLA), reference mapped SLA (Ref-MSLA), and up-to-date mapped SLA (Upd-MSLA) products were used for comparison with PIES-derived SLA (η_tot). Comparisons of η_tot with Mono-SLA revealed that hump artifact errors significantly contaminate the Mono-SLA. Differences of η_tot from both Ref-MSLA and Upd-MSLA decreased as the hump errors were reduced in the mapped SLA products. Comparisons of Mono-SLA measurements at crossover points of ground tracks near the observation sites revealed large differences even though the time differences of their measurements were only 1.53 and 4.58 days. Comparisons between Mono-SLA and mapped SLA suggested that the mapped SLA smooths out the hump artifact errors by taking values between the two discrepant Mono-SLA measurements at the crossover points. Consequently, the mapped SLA showed better agreement with η_tot at our observation sites. AVISO mapped sea surface height (SSH) products are the preferable dataset for studying SSH variability in the NEC region of the western Pacific, though some portions of the hump artifact errors seem to still remain in them.

  7. Experiments and simulation of thermal behaviors of the dual-drive servo feed system

    NASA Astrophysics Data System (ADS)

    Yang, Jun; Mei, Xuesong; Feng, Bin; Zhao, Liang; Ma, Chi; Shi, Hu

    2015-01-01

    The machine tool equipped with the dual-drive servo feed system can realize high feed speed as well as high precision. Currently, there is no report on the thermal behaviors of the dual-drive machine, and current research on the thermal characteristics of machines mainly focuses on steady-state simulation. To explore the influence of thermal characteristics on the precision of a jib boring machine fitted with a dual-drive feed system, thermal equilibrium tests and research on thermal-mechanical transient behaviors are carried out. A laser interferometer, infrared thermography, and a temperature-displacement acquisition system are applied to measure the temperature distribution and thermal deformation at different feed speeds. Subsequently, the finite element method (FEM) is used to analyze the transient thermal behaviors of the boring machine. The complex boundary conditions, such as heat sources and convective heat transfer coefficients, are calculated. Finally, transient variations in temperatures and deformations are compared with the measured values, and the errors between measurement and simulation of the temperature and the thermal error are 2 °C and 2.5 μm, respectively. The results demonstrate that the FEM model can predict the thermal error and temperature distribution very well under the specified operating conditions. Moreover, the uneven temperature gradient caused by the asynchronous dual-drive structure results in thermal deformation. Additionally, the positioning accuracy decreases as the measured point becomes farther away from the motor, and the thermal error and equilibrium period both increase with feed speed. The research proposes a systematic method to measure and simulate the transient thermal behaviors of the boring machine.

  8. Benefits Derived From Laser Ranging Measurements for Orbit Determination of the GPS Satellite Orbit

    NASA Technical Reports Server (NTRS)

    Welch, Bryan W.

    2007-01-01

    While navigation systems for the determination of the orbits of the Global Positioning System (GPS) satellites have proven to be very effective, current research is examining methods to lower the error in the GPS satellite ephemerides below its present level. Two GPS satellites currently in orbit carry retro-reflectors onboard. One notion for reducing the error in the satellite ephemerides is to utilize the retro-reflectors via laser ranging measurements taken from multiple Earth ground stations. Analysis has been performed to determine the level of reduction in the semi-major axis covariance of the GPS satellites when laser ranging measurements are added to the radiometric station keeping which the satellites undergo. Six ground tracking systems are studied to estimate the performance of the satellite. The first system is the baseline current-system approach, which provides pseudo-range and integrated Doppler measurements from six ground stations. The remaining five ground tracking systems utilize all measurements from the current system plus laser ranging measurements from the additional ground stations utilized within those systems. Station locations for the additional ground sites were taken from a listing of laser ranging ground stations from the International Laser Ranging Service. Results show reductions in state covariance estimates when utilizing laser ranging measurements to solve for the satellite's position component of the state vector. Results also show dependency on the number of ground stations providing laser ranging measurements, the orientation of the satellite relative to the ground stations, and the initial covariance of the satellite's state vector.

  9. Experimental study on performance verification tests for coordinate measuring systems with optical distance sensors

    NASA Astrophysics Data System (ADS)

    Carmignato, Simone

    2009-01-01

    Optical sensors are increasingly used for dimensional and geometrical metrology. However, the lack of international standards for testing optical coordinate measuring systems is currently limiting the traceability of measurements and the easy comparison of different optical systems. This paper presents an experimental investigation of artefacts and procedures for testing coordinate measuring systems equipped with optical distance sensors. The work is aimed at contributing to the standardization of testing methods. The VDI/VDE 2617-6.2:2005 guideline, which is probably the most complete document currently available for testing systems with optical distance sensors, is examined with specific experiments. Results from the experiments are discussed, with particular reference to the tests used for determining the following characteristics: error of indication for size measurement, probing error, and structural resolution. Particular attention is given to the use of artefacts alternative to gauge blocks for determining the error of indication for size measurement.

  10. Sediment movement along the U.S. east coast continental shelf-I. Estimates of bottom stress using the Grant-Madsen model and near-bottom wave and current measurements

    USGS Publications Warehouse

    Lyne, V.D.; Butman, B.; Grant, W.D.

    1990-01-01

    Bottom stress is calculated for several long-term time-series observations, made on the U.S. east coast continental shelf during winter, using the wave-current interaction and moveable bed models of Grant and Madsen (1979, Journal of Geophysical Research, 84, 1797-1808; 1982, Journal of Geophysical Research, 87, 469-482). The wave and current measurements were obtained by means of a bottom tripod system which measured current using a Savonius rotor and vane and waves by means of a pressure sensor. The variables were burst sampled about 10% of the time. Wave energy was reasonably resolved, although aliased by wave groupiness, and wave period was accurate to 1-2 s during large storms. Errors in current speed and direction depend on the speed of the mean current relative to the wave current. In general, errors in bottom stress caused by uncertainties in measured current speed and wave characteristics were 10-20%. During storms, the bottom stress calculated using the Grant-Madsen models exceeded stress computed from conventional drag laws by a factor of about 1.5 on average and 3 or more during storm peaks. Thus, even in water as deep as 80 m, oscillatory near-bottom currents associated with surface gravity waves of period 12 s or longer will contribute substantially to bottom stress. Given that the Grant-Madsen model is correct, parameterizations of bottom stress that do not incorporate wave effects will substantially underestimate stress and sediment transport in this region of the continental shelf.
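
    For reference, the "conventional drag law" against which the wave-current stresses are compared is typically a quadratic law applied to the measured near-bottom current alone; written generically (not with the coefficients used in the study),

      \tau_c = \rho \, C_d \, |u| \, u ,

    so any wave enhancement of the bottom boundary layer enters only through the combined wave-current model, not through this baseline.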

  11. Error in Radar-Derived Soil Moisture due to Roughness Parameterization: An Analysis Based on Synthetical Surface Profiles

    PubMed Central

    Lievens, Hans; Vernieuwe, Hilde; Álvarez-Mozos, Jesús; De Baets, Bernard; Verhoest, Niko E.C.

    2009-01-01

    In the past decades, many studies on soil moisture retrieval from SAR demonstrated a poor correlation between the top layer soil moisture content and observed backscatter coefficients, which mainly has been attributed to difficulties involved in the parameterization of surface roughness. The present paper describes a theoretical study, performed on synthetical surface profiles, which investigates how errors on roughness parameters are introduced by standard measurement techniques, and how they will propagate through the commonly used Integral Equation Model (IEM) into a corresponding soil moisture retrieval error for some of the currently most used SAR configurations. Key aspects influencing the error on the roughness parameterization and consequently on soil moisture retrieval are: the length of the surface profile, the number of profile measurements, the horizontal and vertical accuracy of profile measurements and the removal of trends along profiles. Moreover, it is found that soil moisture retrieval with C-band configuration generally is less sensitive to inaccuracies in roughness parameterization than retrieval with L-band configuration. PMID:22399956

  12. Development of an accurate portable recording peak-flow meter for the diagnosis of asthma.

    PubMed

    Hitchings, D J; Dickinson, S A; Miller, M R; Fairfax, A J

    1993-05-01

    This article describes the systematic design of an electronic recording peak expiratory flow (PEF) meter to provide accurate data for the diagnosis of occupational asthma. Traditional diagnosis of asthma relies on accurate data from PEF tests performed by the patients in their own homes and places of work. Unfortunately, there are high error rates in data produced and recorded by the patient; most of these are transcription errors, and some patients falsify their records. The PEF measurement itself is not effort-independent; the data produced depend on the way in which the patient performs the test. Patients are taught how to perform the test giving maximal effort to the expiration being measured. If the measurement is performed incorrectly, then errors will occur. Accurate data can be produced if an electronically recording PEF instrument is developed, thus freeing the patient from the task of recording the test data. It should also be capable of determining whether the PEF measurement has been correctly performed. A requirement specification for a recording PEF meter was produced. A commercially available electronic PEF meter was modified to provide the functions required for accurate serial recording of the measurements produced by the patients. This is now being used in three hospitals in the West Midlands for investigations into the diagnosis of occupational asthma. In investigating current methods of measuring PEF and other pulmonary quantities, a greater understanding was obtained of the limitations of current measurement methods and of the quantities being measured.(ABSTRACT TRUNCATED AT 250 WORDS)

  13. Eddy Current Assessment of Engineered Components Containing Nanofibers

    NASA Astrophysics Data System (ADS)

    Ko, Ray T.; Hoppe, Wally; Pierce, Jenny

    2009-03-01

    The eddy current approach has been used to assess engineered components containing nanofibers. Five specimens with different programmed defects were fabricated. A 4-point collinear probe was used to verify the electrical resistivity of each specimen. The liftoff component of the eddy current signal was used to test two extreme cases with different nano contents. Additional eddy current measurements were also used in detecting a missing nano layer simulating a manufacturing process error. The results of this assessment suggest that eddy current liftoff measurement can be a useful tool in evaluating the electrical properties of materials containing nanofibers.

  14. Micro CT based truth estimation of nodule volume

    NASA Astrophysics Data System (ADS)

    Kinnard, L. M.; Gavrielides, M. A.; Myers, K. J.; Zeng, R.; Whiting, B.; Lin-Gibson, S.; Petrick, N.

    2010-03-01

    With the advent of high-resolution CT, three-dimensional (3D) methods for nodule volumetry have been introduced, with the hope that such methods will be more accurate and consistent than currently used planar measures of size. However, the error associated with volume estimation methods still needs to be quantified. Volume estimation error is multi-faceted in the sense that there is variability associated with the patient, the software tool and the CT system. A primary goal of our current research efforts is to quantify the various sources of measurement error and, when possible, minimize their effects. In order to assess the bias of an estimate, the actual value, or "truth," must be known. In this work we investigate the reliability of micro CT to determine the "true" volume of synthetic nodules. The advantage of micro CT over other truthing methods is that it can provide both absolute volume and shape information in a single measurement. In the current study we compare micro CT volume truth to weight-density truth for spherical, elliptical, spiculated and lobulated nodules with diameters from 5 to 40 mm, and densities of -630 and +100 HU. The percent differences between micro CT and weight-density volume range from -21.7% to -0.6% (mean = -11.9%) for the -630 HU nodules and from -0.9% to 3.0% (mean = 1.7%) for the +100 HU nodules.

  15. Error analysis in stereo vision for location measurement of 3D point

    NASA Astrophysics Data System (ADS)

    Li, Yunting; Zhang, Jun; Tian, Jinwen

    2015-12-01

    Location measurement of a 3D point in stereo vision is subject to different sources of uncertainty that propagate to the final result. Most current methods of error analysis are based on an ideal intersection model that calculates the uncertainty region of the point location by intersecting the two pixel fields of view, which may produce loose bounds. Moreover, only a few sources of error, such as pixel error or camera position, are taken into account in the analysis. In this paper we present a straightforward and practical method to estimate the location error that takes most sources of error into account. We sum up and simplify all the input errors into five parameters by a rotation transformation. Then we use the fast midpoint-method algorithm to deduce the mathematical relationships between the target point and these parameters. Thus, the expectation and covariance matrix of the 3D point location are obtained, which constitute the uncertainty region of the point location. Afterwards, we trace the propagation of the primitive input errors through the stereo system, covering the whole analysis chain from primitive input errors to localization error. Our method has the same level of computational complexity as the state-of-the-art method. Finally, extensive experiments are performed to verify the performance of our method.

  16. The determination of carbon dioxide concentration using atmospheric pressure ionization mass spectrometry/isotopic dilution and errors in concentration measurements caused by dryers.

    PubMed

    DeLacy, Brendan G; Bandy, Alan R

    2008-01-01

    An atmospheric pressure ionization mass spectrometry/isotopically labeled standard (APIMS/ILS) method has been developed for the determination of carbon dioxide (CO(2)) concentration. Descriptions of the instrumental components, the ionization chemistry, and the statistics associated with the analytical method are provided. This method represents an alternative to the nondispersive infrared (NDIR) technique, which is currently used in the atmospheric community to determine atmospheric CO(2) concentrations. The APIMS/ILS and NDIR methods exhibit a decreased sensitivity for CO(2) in the presence of water vapor. Therefore, dryers such as a Nafion dryer are used to remove water before detection. The APIMS/ILS method measures mixing ratios and demonstrates linearity and range in the presence or absence of a dryer. The NDIR technique, on the other hand, measures molar concentrations. The second half of this paper describes errors in molar concentration measurements that are caused by drying. An equation describing the errors was derived from the ideal gas law, the conservation of mass, and Dalton's Law. The purpose of this derivation was to quantify errors in the NDIR technique that are caused by drying. Laboratory experiments were conducted to verify the errors created solely by the dryer in CO(2) concentration measurements post-dryer. The laboratory experiments verified the theoretically predicted errors in the derived equations. There are numerous references in the literature that describe the use of a dryer in conjunction with the NDIR technique. However, these references do not address the errors that are caused by drying.
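
    The abstract does not reproduce the derived equation, but the bookkeeping it rests on (the ideal gas law plus Dalton's law of partial pressures) can be illustrated with the standard wet-to-dry conversion; treat this as a generic illustration rather than the authors' result:

      x_{CO_2}^{dry} = \frac{x_{CO_2}^{wet}}{1 - x_{H_2O}}, \qquad
      c_{CO_2} = \frac{p \, x_{CO_2}}{R \, T},

    so removing water vapor at constant total pressure raises the molar concentration seen by an NDIR analyzer by the factor 1/(1 - x_{H_2O}), even though the mixing ratio relative to dry air is unchanged. Mixing-ratio methods such as APIMS/ILS are therefore insensitive to this particular effect.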

  17. Assessment of Systematic Chromatic Errors that Impact Sub-1% Photometric Precision in Large-Area Sky Surveys

    DOE PAGES

    Li, T. S.; DePoy, D. L.; Marshall, J. L.; ...

    2016-06-01

    Here, we report that meeting the science goals for many current and future ground-based optical large-area sky surveys requires that the calibrated broadband photometry is both stable in time and uniform over the sky to 1% precision or better. Past and current surveys have achieved photometric precision of 1%–2% by calibrating the survey's stellar photometry with repeated measurements of a large number of stars observed in multiple epochs. The calibration techniques employed by these surveys only consider the relative frame-by-frame photometric zeropoint offset and the focal plane position-dependent illumination corrections, which are independent of the source color. However, variations in the wavelength dependence of the atmospheric transmission and the instrumental throughput induce source color-dependent systematic errors. These systematic errors must also be considered to achieve the most precise photometric measurements. In this paper, we examine such systematic chromatic errors (SCEs) using photometry from the Dark Energy Survey (DES) as an example. We first define a natural magnitude system for DES and calculate the systematic errors on stellar magnitudes when the atmospheric transmission and instrumental throughput deviate from the natural system. We conclude that the SCEs caused by the change of airmass in each exposure, the change of the precipitable water vapor and aerosol in the atmosphere over time, and the non-uniformity of instrumental throughput over the focal plane can be up to 2% in some bandpasses. We then compare the calculated SCEs with the observed DES data. For the test sample data, we correct these errors using measurements of the atmospheric transmission and instrumental throughput from auxiliary calibration systems. In conclusion, the residual after correction is less than 0.3%. Moreover, we calculate such SCEs for Type Ia supernovae and elliptical galaxies and find that the chromatic errors for non-stellar objects are redshift-dependent and can be larger than those for stars at certain redshifts.

  18. Impact of Tropospheric Aerosol Absorption on Ozone Retrieval from buv Measurements

    NASA Technical Reports Server (NTRS)

    Torres, O.; Bhartia, P. K.

    1998-01-01

    The impact of tropospheric aerosols on the retrieval of column ozone amounts using spaceborne measurements of backscattered ultraviolet radiation is examined. Using radiative transfer calculations, we show that UV-absorbing desert dust may introduce errors as large as 10% in the ozone column amount, depending on the aerosol layer height and optical depth. Smaller errors are produced by carbonaceous aerosols that result from biomass burning. Though the error is produced by complex interactions between ozone absorption (both stratospheric and tropospheric), aerosol scattering, and aerosol absorption, a surprisingly simple correction procedure reduces the error to about 1%, for a variety of aerosols and for a wide range of aerosol loading. Comparison of the corrected TOMS data with operational data indicates that although the zonal mean total ozone derived from TOMS is not significantly affected by these errors, localized effects in the tropics can be large enough to seriously affect the studies of tropospheric ozone that are currently under way using the TOMS data.

  19. Handbook of satellite pointing errors and their statistical treatment

    NASA Astrophysics Data System (ADS)

    Weinberger, M. C.

    1980-03-01

    This handbook aims to provide both satellite payload and attitude control system designers with a consistent, unambiguous approach to the formulation, definition and interpretation of attitude pointing and measurement specifications. It reviews and assesses the current terminology and practices, and from them establishes a set of unified terminology, giving the user a sound basis to understand the meaning and implications of various specifications and requirements. Guidelines are presented for defining the characteristics of the error sources influencing satellite pointing and attitude measurement, and their combination in performance verification.

  20. Self-Reported and Observed Punitive Parenting Prospectively Predicts Increased Error-Related Brain Activity in Six-Year-Old Children.

    PubMed

    Meyer, Alexandria; Proudfit, Greg Hajcak; Bufferd, Sara J; Kujawa, Autumn J; Laptook, Rebecca S; Torpey, Dana C; Klein, Daniel N

    2015-07-01

    The error-related negativity (ERN) is a negative deflection in the event-related potential (ERP) occurring approximately 50 ms after error commission at fronto-central electrode sites and is thought to reflect the activation of a generic error monitoring system. Several studies have reported an increased ERN in clinically anxious children, and suggest that anxious children are more sensitive to error commission--although the mechanisms underlying this association are not clear. We have previously found that punishing errors results in a larger ERN, an effect that persists after punishment ends. It is possible that learning-related experiences that impact sensitivity to errors may lead to an increased ERN. In particular, punitive parenting might sensitize children to errors and increase their ERN. We tested this possibility in the current study by prospectively examining the relationship between parenting style during early childhood and children's ERN approximately 3 years later. Initially, 295 parents and children (approximately 3 years old) participated in a structured observational measure of parenting behavior, and parents completed a self-report measure of parenting style. At a follow-up assessment approximately 3 years later, the ERN was elicited during a Go/No-Go task, and diagnostic interviews were completed with parents to assess child psychopathology. Results suggested that both observational measures of hostile parenting and self-report measures of authoritarian parenting style uniquely predicted a larger ERN in children 3 years later. We previously reported that children in this sample with anxiety disorders were characterized by an increased ERN. A mediation analysis indicated that ERN magnitude mediated the relationship between harsh parenting and child anxiety disorder. Results suggest that parenting may shape children's error processing through environmental conditioning and thereby risk for anxiety, although future work is needed to confirm this hypothesis.

  1. Application of acoustic-Doppler current profiler and expendable bathythermograph measurements to the study of the velocity structure and transport of the Gulf Stream

    NASA Technical Reports Server (NTRS)

    Joyce, T. M.; Dunworth, J. A.; Schubert, D. M.; Stalcup, M. C.; Barbour, R. L.

    1988-01-01

    The degree to which Acoustic-Doppler Current Profiler (ADCP) and expendable bathythermograph (XBT) data can provide quantitative measurements of the velocity structure and transport of the Gulf Stream is addressed. An algorithm is used to generate salinity from temperature and depth using an historical Temperature/Salinity relation for the NW Atlantic. Results have been simulated using CTD data and comparing real and pseudo salinity files. Errors are typically less than 2 dynamic cm for the upper 800 m out of a total signal of 80 cm (across the Gulf Stream). When combined with ADCP data for a near-surface reference velocity, transport errors in isopycnal layers are less than about 1 Sv (10^6 m^3/s), as is the difference in total transport for the upper 800 m between real and pseudo data. The method is capable of measuring the real variability of the Gulf Stream, and when combined with altimeter data, can provide estimates of the geoid slope with oceanic errors of a few parts in 10^8 over horizontal scales of 500 km.

  2. Repeatability and oblique flow response characteristics of current meters

    USGS Publications Warehouse

    Fulford, Janice M.; Thibodeaux, Kirk G.; Kaehrle, William R.; ,

    1993-01-01

    Laboratory investigations into the precision and accuracy of various mechanical current meters are presented. Horizontal-axis and vertical-axis meters that are used for the measurement of point velocities in streams and rivers were tested. Meters were tested for repeatability and response to oblique flows. Both horizontal- and vertical-axis meters were found to under- and over-register oblique flows, with errors generally increasing as the velocity and angle of flow increased. For the oblique-flow tests, the magnitudes of the errors were smallest for horizontal-axis meters. Repeatability of all meters tested was good, with the horizontal- and vertical-axis meters performing similarly.

  3. Integration of Error Compensation of Coordinate Measuring Machines into Feature Measurement: Part II—Experimental Implementation

    PubMed Central

    Calvo, Roque; D’Amato, Roberto; Gómez, Emilio; Domingo, Rosario

    2016-01-01

    Coordinate measuring machines (CMM) are main instruments of measurement in laboratories and in industrial quality control. A compensation error model has been formulated (Part I). It integrates error and uncertainty in the feature measurement model. Experimental implementation for the verification of this model is carried out based on direct testing on a moving-bridge CMM. The regression results by axis are quantified and compared to the CMM indication with respect to the assigned values of the measurand. Next, testing of selected measurements of length, flatness, dihedral angle, and roundness features is accomplished. The measurement of calibrated gauge blocks for length or angle, flatness verification of the CMM granite table, and roundness of a precision glass hemisphere are presented under a setup of repeatability conditions. The results are analysed and compared with alternative methods of estimation. The overall performance of the model is endorsed through experimental verification, as is its practical use and its capability to contribute to the improvement of current standard CMM measuring capabilities. PMID:27754441

  4. Effect of Numerical Error on Gravity Field Estimation for GRACE and Future Gravity Missions

    NASA Astrophysics Data System (ADS)

    McCullough, Christopher; Bettadpur, Srinivas

    2015-04-01

    In recent decades, gravity field determination from low Earth orbiting satellites, such as the Gravity Recovery and Climate Experiment (GRACE), has become increasingly more effective due to the incorporation of high accuracy measurement devices. Since instrumentation quality will only increase in the near future and the gravity field determination process is computationally and numerically intensive, numerical error from the use of double precision arithmetic will eventually become a prominent error source. While using double-extended or quadruple precision arithmetic will reduce these errors, the numerical limitations of current orbit determination algorithms and processes must be accurately identified and quantified in order to adequately inform the science data processing techniques of future gravity missions. The most obvious numerical limitation in the orbit determination process is evident in the comparison of measured observables with computed values derived from mathematical models relating the satellites' numerically integrated state to the observable. Significant error in the computed trajectory will corrupt this comparison and induce error in the least squares solution of the gravitational field. In addition, errors in the numerically computed trajectory propagate into the evaluation of the mathematical measurement model's partial derivatives. These errors, in turn, combine in the least squares mapping matrix with numerical error from the computation of the state transition matrix, which is computed using the variational equations of motion. Finally, the solution of the linearized least squares system, computed using a QR factorization, is also susceptible to numerical error. Certain interesting combinations of each of these numerical errors are examined in the framework of GRACE gravity field determination to analyze and quantify their effects on gravity field recovery.
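
    As a toy illustration of how finite-precision arithmetic alone can corrupt a long accumulation (the concern above, scaled down enormously), compare a strictly sequential single-precision sum with the same sum carried in double precision; the example has nothing to do with the actual GRACE processing chain.

      import numpy as np

      rng = np.random.default_rng(1)
      x = rng.normal(loc=1.0, scale=1e-3, size=1_000_000)

      s32 = np.float32(0.0)
      for v in x.astype(np.float32):    # strictly sequential single-precision accumulation
          s32 = np.float32(s32 + v)

      s64 = x.sum(dtype=np.float64)     # the same sum carried in double precision
      print(f"relative error of the single-precision sum: {abs(float(s32) - s64) / s64:.1e}")

    The discrepancy is tiny in absolute terms, but the same mechanism, acting over far longer computations and much larger linear systems, is what eventually limits double-precision processing as instrument noise continues to shrink.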

  5. Evaluation and statistical inference for human connectomes.

    PubMed

    Pestilli, Franco; Yeatman, Jason D; Rokem, Ariel; Kay, Kendrick N; Wandell, Brian A

    2014-10-01

    Diffusion-weighted imaging coupled with tractography is currently the only method for in vivo mapping of human white-matter fascicles. Tractography takes diffusion measurements as input and produces the connectome, a large collection of white-matter fascicles, as output. We introduce a method to evaluate the evidence supporting connectomes. Linear fascicle evaluation (LiFE) takes any connectome as input and predicts diffusion measurements as output, using the difference between the measured and predicted diffusion signals to quantify the prediction error. We use the prediction error to evaluate the evidence that supports the properties of the connectome, to compare tractography algorithms and to test hypotheses about tracts and connections.
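
    The evaluation criterion described, the error between measured and connectome-predicted diffusion signals, can be sketched generically; the arrays and the root-mean-square choice below are placeholders, not the LiFE implementation.

      import numpy as np

      def prediction_error(measured, predicted):
          """RMS error between measured and model-predicted diffusion signals,
          pooled over voxels and diffusion directions."""
          measured = np.asarray(measured, dtype=float)
          predicted = np.asarray(predicted, dtype=float)
          return np.sqrt(np.mean((measured - predicted) ** 2))

      # Two candidate connectomes predicting the same synthetic measurements:
      rng = np.random.default_rng(0)
      signal = rng.uniform(0.2, 1.0, size=(500, 90))        # 500 voxels x 90 directions
      model_a = signal + rng.normal(0, 0.02, signal.shape)   # better-fitting candidate
      model_b = signal + rng.normal(0, 0.05, signal.shape)   # worse-fitting candidate
      print(prediction_error(signal, model_a) < prediction_error(signal, model_b))  # True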

  6. Reliable and accurate extraction of Hamaker constants from surface force measurements.

    PubMed

    Miklavcic, S J

    2018-08-15

    A simple and accurate closed-form expression for the Hamaker constant that best represents experimental surface force data is presented. Numerical comparisons are made with the current standard least squares approach, which falsely assumes error-free separation measurements, and with a nonlinear version assuming that independent measurements of force and separation are both subject to error. The comparisons demonstrate that not only is the proposed formula easily implemented, it is also considerably more accurate. This option is appropriate for any value of the Hamaker constant, high or low, and certainly for any interacting system exhibiting an inverse-square distance-dependent van der Waals force. Copyright © 2018 Elsevier Inc. All rights reserved.
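
    For intuition, the conventional fit criticized above can be written down in a few lines: assuming a sphere-plane geometry with force F(D) = -A R / (6 D^2), ordinary least squares of F against 1/D^2 returns the Hamaker constant A when the separations D are treated as exact. The geometry, radius, and noise level are assumptions for illustration only.

      import numpy as np

      R = 10e-6                          # probe radius (m), assumed
      A_true = 1.0e-20                   # Hamaker constant (J), assumed
      D = np.linspace(2e-9, 20e-9, 50)   # separations (m)
      F = -A_true * R / (6 * D**2)       # sphere-plane van der Waals force
      F_noisy = F + np.random.default_rng(2).normal(0.0, 2e-12, F.size)  # force noise only

      # Standard approach: least squares of F on 1/D^2, treating D as error-free.
      x = -R / (6 * D**2)
      A_fit = np.sum(x * F_noisy) / np.sum(x * x)
      print(A_fit)   # close to 1e-20 because only the force carries error here

    When the separation measurements themselves carry error, the realistic case addressed in the paper, this estimator becomes biased, which is what motivates the closed-form alternative.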

  7. Single plane angiography: Current applications and limitations

    NASA Technical Reports Server (NTRS)

    Falsetti, H. L.; Carroll, R. J.

    1975-01-01

    Technical errors in measurements from single-plane cineangiography are identified. Examples of angiographic estimates of left ventricular geometry are given. These estimates of contractility are useful in evaluating myocardial performance.

  8. Addressing Phase Errors in Fat-Water Imaging Using a Mixed Magnitude/Complex Fitting Method

    PubMed Central

    Hernando, D.; Hines, C. D. G.; Yu, H.; Reeder, S.B.

    2012-01-01

    Accurate, noninvasive measurements of liver fat content are needed for the early diagnosis and quantitative staging of nonalcoholic fatty liver disease. Chemical shift-based fat quantification methods acquire images at multiple echo times using a multiecho spoiled gradient echo sequence, and provide fat fraction measurements through postprocessing. However, phase errors, such as those caused by eddy currents, can adversely affect fat quantification. These phase errors are typically most significant at the first echo of the echo train, and introduce bias in complex-based fat quantification techniques. These errors can be overcome using a magnitude-based technique (where the phase of all echoes is discarded), but at the cost of significantly degraded signal-to-noise ratio, particularly for certain choices of echo time combinations. In this work, we develop a reconstruction method that overcomes these phase errors without the signal-to-noise ratio penalty incurred by magnitude fitting. This method discards the phase of the first echo (which is often corrupted) while maintaining the phase of the remaining echoes (where phase is unaltered). We test the proposed method on 104 patient liver datasets (from 52 patients, each scanned twice), where the fat fraction measurements are compared to coregistered spectroscopy measurements. We demonstrate that mixed fitting is able to provide accurate fat fraction measurements with high signal-to-noise ratio and low bias over a wide choice of echo combinations. PMID:21713978
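
    The reconstruction described, keeping complex data for all echoes except the first, whose phase is discarded, can be summarized as a mixed objective over the acquired echoes s_n and the signal model m_n(theta) of the water/fat parameters (a schematic form consistent with the description above, not the authors' exact cost function):

      \hat{\theta} = \arg\min_{\theta} \; \bigl( |s_1| - |m_1(\theta)| \bigr)^2
                     + \sum_{n=2}^{N} \bigl| s_n - m_n(\theta) \bigr|^2 .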

  9. Formaldehyde Distribution over North America: Implications for Satellite Retrievals of Formaldehyde Columns and Isoprene Emission

    NASA Technical Reports Server (NTRS)

    Millet, Dylan B.; Jacob, Daniel J.; Turquety, Solene; Hudman, Rynda C.; Wu, Shiliang; Anderson, Bruce E.; Fried, Alan; Walega, James; Heikes, Brian G.; Blake, Donald R.; ...

    2006-01-01

    Formaldehyde (HCHO) columns measured from space provide constraints on emissions of volatile organic compounds (VOCs). Quantitative interpretation requires characterization of errors in HCHO column retrievals and relating these columns to VOC emissions. Retrieval error is mainly in the air mass factor (AMF) which relates fitted backscattered radiances to vertical columns and requires external information on HCHO, aerosols, and clouds. Here we use aircraft data collected over North America and the Atlantic to determine the local relationships between HCHO columns and VOC emissions, calculate AMFs for HCHO retrievals, assess the errors in deriving AMFs with a chemical transport model (GEOS-Chem), and draw conclusions regarding space-based mapping of VOC emissions. We show that isoprene drives observed HCHO column variability over North America; HCHO column data from space can thus be used effectively as a proxy for isoprene emission. From observed HCHO and isoprene profiles we find an HCHO molar yield from isoprene oxidation of 1.6 +/- 0.5, consistent with current chemical mechanisms. Clouds are the primary error source in the AMF calculation; errors in the HCHO vertical profile and aerosols have comparatively little effect. The mean bias and 1σ uncertainty in the GEOS-Chem AMF calculation increase from <1% and 15% for clear skies to 17% and 24% for half-cloudy scenes. With fitting errors, this gives an overall 1σ error in HCHO satellite measurements of 25-31%. Retrieval errors, combined with uncertainties in the HCHO yield from isoprene oxidation, result in a 40% (1σ) error in inferring isoprene emissions from HCHO satellite measurements.
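
    The overall uncertainties quoted are consistent with combining the independent contributions in quadrature; the short check below simply recombines the abstract's own numbers, with the spectral fitting error treated as an assumed 20% term and independence assumed throughout.

      import numpy as np

      fit_err = 0.20                                # assumed spectral fitting error
      amf_err_clear, amf_err_cloudy = 0.15, 0.24    # AMF 1-sigma errors quoted above
      yield_err = 0.5 / 1.6                         # HCHO-per-isoprene yield uncertainty

      col_err_clear = np.hypot(fit_err, amf_err_clear)     # ~0.25
      col_err_cloudy = np.hypot(fit_err, amf_err_cloudy)   # ~0.31
      emis_err = np.hypot(col_err_cloudy, yield_err)       # ~0.4, of order the quoted 40%
      print(col_err_clear, col_err_cloudy, emis_err)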

  10. Resection plane-dependent error in computed tomography volumetry of the right hepatic lobe in living liver donors.

    PubMed

    Kwon, Heon-Ju; Kim, Kyoung Won; Kim, Bohyun; Kim, So Yeon; Lee, Chul Seung; Lee, Jeongjin; Song, Gi Won; Lee, Sung Gyu

    2018-03-01

    Computed tomography (CT) hepatic volumetry is currently accepted as the most reliable method for preoperative estimation of graft weight in living donor liver transplantation (LDLT). However, several factors can cause inaccuracies in CT volumetry compared to real graft weight. The purpose of this study was to determine the frequency and degree of resection plane-dependent error in CT volumetry of the right hepatic lobe in LDLT. Forty-six living liver donors underwent CT before donor surgery and on postoperative day 7. Prospective CT volumetry (VP) was measured via the assumptive hepatectomy plane. Retrospective liver volume (VR) was measured using the actual plane by comparing preoperative and postoperative CT. Compared with intraoperatively measured weight (W), errors in percentage (%) VP and VR were evaluated. Plane-dependent error in VP was defined as the absolute difference between VP and VR. % plane-dependent error was defined as follows: |VP-VR|/W∙100. Mean VP, VR, and W were 761.9 mL, 755.0 mL, and 696.9 g. Mean and % errors in VP were 73.3 mL and 10.7%. Mean error and % error in VR were 64.4 mL and 9.3%. Mean plane-dependent error in VP was 32.4 mL. Mean % plane-dependent error was 4.7%. Plane-dependent error in VP exceeded 10% of W in approximately 10% of the subjects in our study. There was approximately 5% plane-dependent error in liver VP on CT volumetry. Plane-dependent error in VP exceeded 10% of W in approximately 10% of LDLT donors in our study. This error should be considered, especially when CT volumetry is performed by a less experienced operator who is not well acquainted with the donor hepatectomy plane.

  11. Bayesian Meta-Analysis of Coefficient Alpha

    ERIC Educational Resources Information Center

    Brannick, Michael T.; Zhang, Nanhua

    2013-01-01

    The current paper describes and illustrates a Bayesian approach to the meta-analysis of coefficient alpha. Alpha is the most commonly used estimate of the reliability or consistency (freedom from measurement error) for educational and psychological measures. The conventional approach to meta-analysis uses inverse variance weights to combine…

  12. Error processing in current and former cocaine users

    PubMed Central

    Castelluccio, Brian C.; Meda, Shashwath A.; Muska, Christine E.; Stevens, Michael C.; Pearlson, Godfrey D.

    2013-01-01

    Deficits in response inhibition and error processing can result in maladaptive behavior, including failure to use past mistakes to inform present decisions. A specific deficit in inhibiting a prepotent response represents one aspect of impulsivity and is a prominent feature of addictive behaviors in general, including cocaine abuse/dependence. Brain regions implicated in cognitive control exhibit reduced activation in cocaine abusers. The purposes of the present investigation were (1) to identify neural differences associated with error processing in current and former cocaine-dependent individuals compared to healthy controls and (2) to determine whether former, long-term abstinent cocaine users showed similar differences compared with current users. The present study used an fMRI Go/No-Go task to investigate differences in BOLD response to correct rejections and false alarms between current cocaine users (n=30), former cocaine users (n=29), and healthy controls (n=35). Impulsivity trait measures were also assessed and compared with BOLD activity. Nineteen regions of interest previously implicated in errors of disinhibition were queried. There were no group differences in the correct rejections condition, but both current and former users exhibited increased BOLD response relative to controls for false alarms. In current users, the pregenual cingulate gyrus and left angular/supramarginal gyri overactivated. In former users, the right middle frontal/precentral gyri, right inferior parietal lobule, and left angular/supramarginal gyri overactivated. Overall, our results support a hypothesis that neural activity in former users differs more from healthy controls than that of current users due to cognitive compensation that facilitates abstinence. PMID:23949893

  13. GUM Analysis for TIMS and SIMS Isotopic Ratios in Graphite

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Heasler, Patrick G.; Gerlach, David C.; Cliff, John B.

    2007-04-01

    This report describes GUM calculations for TIMS and SIMS isotopic ratio measurements of reactor graphite samples. These isotopic ratios are used to estimate reactor burn-up, and currently consist of various ratios of U, Pu, and Boron impurities in the graphite samples. The GUM calculation is a propagation of error methodology that assigns uncertainties (in the form of standard error and confidence bound) to the final estimates.
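
    As a minimal sketch of what a GUM-style propagation of error looks like for an isotopic ratio, the example below applies the first-order law of propagation of uncertainty to a simple quotient with hypothetical inputs; the actual measurement model in the report involves many more terms.

```python
import math

# Hedged GUM-style (first-order) propagation of uncertainty for R = a / b.
# Input values and uncertainties are hypothetical, not from the report.
a, u_a = 0.00721, 0.00004    # isotope signal and its standard uncertainty
b, u_b = 0.99279, 0.00004

R = a / b
u_R = R * math.sqrt((u_a / a)**2 + (u_b / b)**2)   # relative uncertainties add in quadrature

k = 2    # coverage factor for an approximate 95% confidence bound
print(f"R = {R:.6f}, u(R) = {u_R:.6f}, expanded U = {k * u_R:.6f}")
```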

  14. Monitoring surface currents and transport variability in Drake Passage using altimetry and hydrography

    NASA Astrophysics Data System (ADS)

    Pavic, M.; Cunningham, S. A.; Challenor, P.; Duncan, L.

    2003-04-01

    Between 1993 and 2001 the UK completed seven occupations of WOCE section SR1b from Burdwood Bank to Elephant Island across Drake Passage. The section consists of a minimum of 31 full depth CTD stations, shipboard ADCP measurements of currents in the upper 300 m, and in three of the years full depth lowered ADCP measurements at each station. The section lies under the satellite track of ERS2. The satellite altimeter can determine the along track slope of the sea surface relative to a reference satellite pass once every 35 days. From this we can calculate the relative SSH slope or geostrophic surface current anomalies. If we make in situ measurements simultaneously with a satellite pass, we can estimate the absolute surface geostrophic current for any subsequent pass. That is, by combining in situ absolute velocity measurements (the reference velocities) with altimetry at one time, the absolute geostrophic current can be estimated on any subsequent (or previous) altimeter pass. This is the method of Challenor et al. (1996), though they did not have the data to test this relationship. We have seven estimates of the surface reference velocity: one for each of the seven occupations of the WOCE line. The difference in any pair of reference velocities is predicted by the difference of the corresponding altimeter measurements. Errors in combining the satellite and hydrographic data are estimated by comparing pairs of these differences: errors arise from the in situ observations and from the altimetric measurements. Finally we produce our best estimates of eight years of absolute surface geostrophic currents and transport variability along WOCE section SR1 in Drake Passage.
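
    The combination of in situ reference velocities with altimetric anomalies described above reduces to a simple bookkeeping step. The sketch below (all values hypothetical) shows how an absolute surface geostrophic current at one pass is carried to another pass by the difference of the altimeter anomalies.

```python
# Hedged sketch of referencing altimetric anomalies with an in situ velocity.
# Values are hypothetical; units are m/s.
v_ref = 0.35            # absolute surface geostrophic current measured in situ at pass t0
anom_t0 = 0.020         # altimetric geostrophic current anomaly at pass t0
anom_t1 = -0.055        # anomaly at a later (or earlier) pass t1

v_abs_t1 = v_ref + (anom_t1 - anom_t0)   # absolute current estimate at t1
print(f"absolute surface current at t1 ~ {v_abs_t1:.3f} m/s")
```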

  15. Coil motion effects in watt balances: a theoretical check

    NASA Astrophysics Data System (ADS)

    Li, Shisong; Schlamminger, Stephan; Haddad, Darine; Seifert, Frank; Chao, Leon; Pratt, Jon R.

    2016-04-01

    A watt balance is a precision apparatus for the measurement of the Planck constant that has been proposed as a primary method for realizing the unit of mass in a revised International System of Units. In contrast to an ampere balance, which was historically used to realize the unit of current in terms of the kilogram, the watt balance relates electrical and mechanical units through a virtual power measurement and has far greater precision. However, because the virtual power measurement requires the execution of a prescribed motion of a coil in a fixed magnetic field, systematic errors introduced by horizontal and rotational deviations of the coil from its prescribed path will compromise the accuracy. We model these potential errors using an analysis that accounts for the fringing field in the magnet, creating a framework for assessing the impact of this class of errors on the uncertainty of watt balance results.

  16. Performance analysis of Rogowski coils and the measurement of the total toroidal current in the ITER machine

    NASA Astrophysics Data System (ADS)

    Quercia, A.; Albanese, R.; Fresa, R.; Minucci, S.; Arshad, S.; Vayakis, G.

    2017-12-01

    The paper carries out a comprehensive study of the performance of Rogowski coils. It describes methodologies that were developed in order to assess the capabilities of the Continuous External Rogowski (CER), which measures the total toroidal current in the ITER machine. Even though the paper mainly considers the CER, the contents are general and relevant to any Rogowski sensor. The CER consists of two concentric helical coils which are wound along a complex closed path. Modelling and computational activities were performed to quantify the measurement errors, taking detailed account of the ITER environment. The geometrical complexity of the sensor is accurately accounted for and the standard model which provides the classical expression to compute the flux linkage of Rogowski sensors is quantitatively validated. Then, in order to take into account the non-ideality of the winding, a generalized expression, formally analogous to the classical one, is presented. Models to determine the worst case and the statistical measurement accuracies are hence provided. The following sources of error are considered: effect of the joints, disturbances due to external sources of field (the currents flowing in the poloidal field coils and the ferromagnetic inserts of ITER), deviations from ideal geometry, toroidal field variations, calibration, noise and integration drift. The proposed methods are applied to the measurement error of the CER, in particular in its high and low operating ranges, as prescribed by the ITER system design description documents, and during transients, which highlight the large time constant related to the shielding of the vacuum vessel. The analyses presented in the paper show that the design of the CER diagnostic is capable of achieving the requisite performance as needed for the operation of the ITER machine.

  17. New Class of Quantum Error-Correcting Codes for a Bosonic Mode

    NASA Astrophysics Data System (ADS)

    Michael, Marios H.; Silveri, Matti; Brierley, R. T.; Albert, Victor V.; Salmilehto, Juha; Jiang, Liang; Girvin, S. M.

    2016-07-01

    We construct a new class of quantum error-correcting codes for a bosonic mode, which are advantageous for applications in quantum memories, communication, and scalable computation. These "binomial quantum codes" are formed from a finite superposition of Fock states weighted with binomial coefficients. The binomial codes can exactly correct errors that are polynomial up to a specific degree in bosonic creation and annihilation operators, including amplitude damping and displacement noise as well as boson addition and dephasing errors. For realistic continuous-time dissipative evolution, the codes can perform approximate quantum error correction to any given order in the time step between error detection measurements. We present an explicit approximate quantum error recovery operation based on projective measurements and unitary operations. The binomial codes are tailored for detecting boson loss and gain errors by means of measurements of the generalized number parity. We discuss optimization of the binomial codes and demonstrate that by relaxing the parity structure, codes with even lower unrecoverable error rates can be achieved. The binomial codes are related to existing two-mode bosonic codes, but offer the advantage of requiring only a single bosonic mode to correct amplitude damping as well as the ability to correct other errors. Our codes are similar in spirit to "cat codes" based on superpositions of the coherent states but offer several advantages such as smaller mean boson number, exact rather than approximate orthonormality of the code words, and an explicit unitary operation for repumping energy into the bosonic mode. The binomial quantum codes are realizable with current superconducting circuit technology, and they should prove useful in other quantum technologies, including bosonic quantum memories, photonic quantum communication, and optical-to-microwave up- and down-conversion.

  18. Assessment of Functional Change and Cognitive Correlates in the Progression from Healthy Cognitive Aging to Dementia

    PubMed Central

    Schmitter-Edgecombe, Maureen; Parsey, Carolyn M.

    2014-01-01

    Objective There is currently limited understanding of the course of change in everyday functioning that occurs with normal aging and dementia. To better characterize the nature of this change, we evaluated the types of errors made by participants as they performed everyday tasks in a naturalistic environment. Method Participants included cognitively healthy younger adults (YA; N = 55) and older adults (OA; N = 88), and individuals with mild cognitive impairment (MCI; N = 55) and dementia (N = 18). Participants performed eight scripted everyday activities (e.g., filling a medication dispenser) while under direct observation in a campus apartment. Task performances were coded for the following errors: inefficient actions, omissions, substitutions, and irrelevant actions. Results Performance accuracy decreased with age and level of cognitive impairment. Relative to the YAs, the OA group exhibited more inefficient actions which were linked to performance on neuropsychological measures of executive functioning. Relative to the OAs, the MCI group committed significantly more omission errors which were strongly linked to performance on memory measures. All error types were significantly more prominent in individuals with dementia. Omission errors uniquely predicted everyday functional status as measured by both informant-report and a performance-based measure. Conclusions These findings suggest that in the progression from healthy aging to MCI, everyday task difficulties may evolve from task inefficiencies to task omission errors, leading to inaccuracies in task completion that are recognized by knowledgeable informants. Continued decline in cognitive functioning then leads to more substantial everyday errors, which compromise ability to live independently. PMID:24933485

  19. Diagnostics of flexible workpiece using acoustic emission, acceleration and eddy current sensors in milling operation

    NASA Astrophysics Data System (ADS)

    Filippov, A. V.; Tarasov, S. Yu.; Filippova, E. O.; Chazov, P. A.; Shamarin, N. N.; Podgornykh, O. A.

    2016-11-01

    Monitoring of the deflection of an edge-clamped workpiece during milling has been carried out using acoustic emission (AE), accelerometer and eddy current sensors. Such monitoring is necessary in the precision machining of vital parts used in aerospace engineering, where the majority of such parts are made by milling. The applicability of the AE, accelerometer and eddy current sensors has been discussed together with the analysis of measurement errors. An appropriate sensor installation diagram has been proposed for measuring the elastic deflection of the workpiece caused by the cutting force.

  20. The sensitivity of derived estimates to the measurement quality objectives for independent variables

    Treesearch

    Francis A. Roesch

    2002-01-01

    The effect of varying the allowed measurement error for individual tree variables upon county estimates of gross cubic-foot volume was examined. Measurement Quality Objectives (MQOs) for three forest tree variables (biological identity, diameter, and height) used in individual tree gross cubic-foot volume equations were varied from the current USDA Forest Service...

  1. The Sensitivity of Derived Estimates to the Measurement Quality Objectives for Independent Variables

    Treesearch

    Francis A. Roesch

    2005-01-01

    The effect of varying the allowed measurement error for individual tree variables upon county estimates of gross cubic-foot volume was examined. Measurement Quality Objectives (MQOs) for three forest tree variables (biological identity, diameter, and height) used in individual tree gross cubic-foot volume equations were varied from the current USDA Forest Service...

  2. The effects of vertical motion on the performance of current meters

    USGS Publications Warehouse

    Thibodeaux, K.G.; Futrell, J. C.

    1987-01-01

    A series of tests to determine the correction coefficients for Price type AA and Price type OAA current meters, when subjected to vertical motion in a towing tank, has been conducted. During these tests, the meters were subjected to vertical travel that ranged from 1.0 to 4.0 ft and vertical rates of travel that ranged from 0.33 to 1.20 ft/sec while being towed through the water at speeds ranging from 0 to 8 ft/sec. The tests show that type AA and type OAA current meters are affected adversely by the rate of vertical motion and the distance of vertical travel. In addition, the tests indicate that when current meters are moved vertically, correction coefficients must be applied to the observed meter velocities to correct for the registration errors that are induced by the vertical motion. The type OAA current meter under-registers and the type AA current meter over-registers in observed meter velocity. These coefficients for the type OAA current meter range from 0.99 to 1.49 and for the type AA current meter range from 0.33 to 1.07. When making current meter measurements from a boat or a cableway, errors in observed current meter velocity will occur when the bobbing of a boat or cableway places the current meter into vertical motion. These errors will be significant when flowing water is < 2 ft/sec and the rate of vertical motion is > 0.3 ft/sec. (Author's abstract)
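
    In practice the reported coefficients are applied multiplicatively to the observed velocity when the flow and vertical-motion thresholds above are met. A hedged sketch is shown below; the coefficient value is hypothetical, though it lies within the published 0.33-1.07 range for a type AA meter.

```python
# Illustrative correction of an observed Price type AA meter velocity for
# vertical motion; the coefficient is hypothetical but within the reported range.
observed_velocity = 1.8   # ft/s, registered velocity
vertical_rate = 0.5       # ft/s, vertical (bobbing) rate of the boat or cableway

# Correction matters when the flow is < 2 ft/s and the vertical rate is > 0.3 ft/s.
if observed_velocity < 2.0 and vertical_rate > 0.3:
    coefficient = 0.90    # hypothetical value from a calibration table
    corrected_velocity = coefficient * observed_velocity
else:
    corrected_velocity = observed_velocity

print(f"corrected velocity = {corrected_velocity:.2f} ft/s")
```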

  3. The effect of the dynamic wet troposphere on VLBI measurements

    NASA Technical Reports Server (NTRS)

    Treuhaft, R. N.; Lanyi, G. E.

    1986-01-01

    Calculations using a statistical model of water vapor fluctuations yield the effect of the dynamic wet troposphere on Very Long Baseline Interferometry (VLBI) measurements. The statistical model arises from two primary assumptions: (1) the spatial structure of refractivity fluctuations can be closely approximated by elementary (Kolmogorov) turbulence theory, and (2) temporal fluctuations are caused by spatial patterns which are moved over a site by the wind. The consequences of these assumptions are outlined for the VLBI delay and delay rate observables. For example, wet troposphere induced rms delays for Deep Space Network (DSN) VLBI at 20-deg elevation are about 3 cm of delay per observation, which is smaller, on the average, than other known error sources in the current DSN VLBI data set. At 20-deg elevation for 200-s time intervals, water vapor induces approximately 1.5 × 10^-13 s/s in the Allan standard deviation of interferometric delay, which is a measure of the delay rate observable error. In contrast to the delay error, the delay rate measurement error is dominated by water vapor fluctuations. Water vapor induced VLBI parameter errors and correlations are calculated. For the DSN, baseline length parameter errors due to water vapor fluctuations are in the range of 3 to 5 cm. The above physical assumptions also lead to a method for including the water vapor fluctuations in the parameter estimation procedure, which is used to extract baseline and source information from the VLBI observables.

  4. Measurement error: Implications for diagnosis and discrepancy models of developmental dyslexia.

    PubMed

    Cotton, Sue M; Crewther, David P; Crewther, Sheila G

    2005-08-01

    The diagnosis of developmental dyslexia (DD) is reliant on a discrepancy between intellectual functioning and reading achievement. Discrepancy-based formulae have frequently been employed to establish the significance of the difference between 'intelligence' and 'actual' reading achievement. These formulae, however, often fail to take into consideration test reliability and the error associated with a single test score. This paper provides an illustration of the potential effects that test reliability and measurement error can have on the diagnosis of dyslexia, with particular reference to discrepancy models. The roles of reliability and standard error of measurement (SEM) in classic test theory are also briefly reviewed. This is followed by illustrations of how SEM and test reliability can aid with the interpretation of a simple discrepancy-based formula of DD. It is proposed that a lack of consideration of test theory in the use of discrepancy-based models of DD can lead to misdiagnosis (both false positives and false negatives). Further, misdiagnosis in research samples affects reproducibility and generalizability of findings. This in turn, may explain current inconsistencies in research on the perceptual, sensory, and motor correlates of dyslexia.
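
    A short numerical example makes the role of the standard error of measurement concrete. The sketch below uses the classic-test-theory definition SEM = SD * sqrt(1 - r) and one common form of discrepancy significance test; the scores, reliabilities, and the specific test statistic are illustrative, not taken from the paper.

```python
import math

# Hedged classic-test-theory sketch: SEM and a simple discrepancy z-test.
# Scores are hypothetical standard scores (mean 100, SD 15).
def sem(sd, reliability):
    """Standard error of measurement: SEM = SD * sqrt(1 - r_xx)."""
    return sd * math.sqrt(1.0 - reliability)

iq, reading = 110.0, 88.0           # observed IQ and reading achievement scores
r_iq, r_read, sd = 0.95, 0.90, 15.0

se_diff = math.sqrt(sem(sd, r_iq)**2 + sem(sd, r_read)**2)   # SE of the observed difference
z = (iq - reading) / se_diff
print(f"SEM(IQ)={sem(sd, r_iq):.2f}  SEM(reading)={sem(sd, r_read):.2f}  z={z:.2f}")
# z well above ~1.96 suggests the discrepancy is unlikely to reflect measurement error alone
```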

  5. Errors in short circuit measurements due to spectral mismatch between sunlight and solar simulators

    NASA Technical Reports Server (NTRS)

    Curtis, H. B.

    1976-01-01

    Errors in short circuit current measurement were calculated for a variety of spectral mismatch conditions. The differences in spectral irradiance between terrestrial sunlight and three types of solar simulator were studied, as well as the differences in spectral response between three types of reference solar cells and various test cells. The simulators considered were a short arc xenon lamp AM0 sunlight simulator, an ordinary quartz halogen lamp, and an ELH-type quartz halogen lamp. Three types of solar cells studied were a silicon cell, a cadmium sulfide cell and a gallium arsenide cell.
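
    The size of such errors is usually summarized by a spectral mismatch factor built from the simulator and sunlight spectra and the two cells' spectral responses. The sketch below computes the standard mismatch factor on made-up curves; it illustrates the calculation only, not the paper's specific lamps or cells.

```python
import numpy as np

# Hedged spectral mismatch sketch: M = 1 means no short-circuit-current error
# from the simulator / reference-cell combination.  Curves are toy placeholders.
wl = np.linspace(350, 1100, 751)                      # wavelength, nm
E_sun = np.exp(-((wl - 550) / 300.0)**2)              # terrestrial sunlight (arbitrary units)
E_sim = np.exp(-((wl - 600) / 250.0)**2)              # simulator spectral irradiance
R_ref = np.clip((wl - 350) / 600.0, 0.0, 1.0)         # reference cell spectral response
R_test = np.clip((wl - 400) / 500.0, 0.0, 1.0)        # test cell spectral response

def overlap(E, R):
    return np.trapz(E * R, wl)

M = (overlap(E_sim, R_test) * overlap(E_sun, R_ref)) / \
    (overlap(E_sim, R_ref) * overlap(E_sun, R_test))
print(f"mismatch factor M = {M:.3f}  (relative Isc error ~ {abs(M - 1):.1%})")
```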

  6. Error measure comparison of currently employed dose-modulation schemes for e-beam proximity effect control

    NASA Astrophysics Data System (ADS)

    Peckerar, Martin C.; Marrian, Christie R.

    1995-05-01

    Standard matrix inversion methods of e-beam proximity correction are compared with a variety of pseudoinverse approaches based on gradient descent. It is shown that the gradient descent methods can be modified using 'regularizers' (terms added to the cost function minimized during gradient descent). This modification solves the 'negative dose' problem in a mathematically sound way. Different techniques are contrasted using a weighted error measure approach. It is shown that the regularization approach leads to the highest quality images. In some cases, ignoring negative doses yields results which are worse than employing an uncorrected dose file.
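
    To make the idea of a regularized gradient-descent dose correction concrete, the toy 1-D sketch below penalizes negative doses through an added cost term rather than simply clipping them; the kernel, pattern, and weights are hypothetical and far simpler than a real proximity-effect model.

```python
import numpy as np

# Toy 1-D dose correction by gradient descent with a negative-dose regularizer.
# Everything here (kernel, pattern, weights) is illustrative.
n = 64
x = np.arange(n)
psf = np.exp(-0.5 * ((x - n // 2) / 3.0)**2)      # proximity (blur) kernel
psf /= psf.sum()
target = np.zeros(n)
target[20:28] = 1.0
target[40:44] = 1.0                                # desired exposure pattern

def blur(d):
    return np.convolve(d, psf, mode="same")

dose = target.copy()
lr, lam = 0.5, 5.0                                 # step size, regularizer weight
for _ in range(500):
    residual = blur(dose) - target                 # data-misfit term
    grad = np.convolve(residual, psf[::-1], mode="same")
    grad += lam * np.minimum(dose, 0.0)            # gradient of (lam/2) * sum(min(dose, 0)^2)
    dose -= lr * grad

print(f"minimum dose after correction: {dose.min():.3f}")
```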

  7. Predicting tropical cyclone intensity using satellite measured equivalent blackbody temperatures of cloud tops. [regression analysis

    NASA Technical Reports Server (NTRS)

    Gentry, R. C.; Rodgers, E.; Steranka, J.; Shenk, W. E.

    1978-01-01

    A regression technique was developed to forecast 24 hour changes of the maximum winds for weak (maximum winds less than or equal to 65 kt) and strong (maximum winds greater than 65 kt) tropical cyclones by utilizing satellite measured equivalent blackbody temperatures around the storm, alone and together with the changes in maximum winds during the preceding 24 hours and the current maximum winds. Independent testing of these regression equations shows that the mean errors made by the equations are lower than the errors in forecasts made by the persistence techniques.

  8. Error analysis of multi-needle Langmuir probe measurement technique.

    PubMed

    Barjatya, Aroh; Merritt, William

    2018-04-01

    Multi-needle Langmuir probe is a fairly new instrument technique that has been flown on several recent sounding rockets and is slated to fly on a subset of QB50 CubeSat constellation. This paper takes a fundamental look into the data analysis procedures used for this instrument to derive absolute electron density. Our calculations suggest that while the technique remains promising, the current data analysis procedures could easily result in errors of 50% or more. We present a simple data analysis adjustment that can reduce errors by at least a factor of five in typical operation.

  9. Error analysis of multi-needle Langmuir probe measurement technique

    NASA Astrophysics Data System (ADS)

    Barjatya, Aroh; Merritt, William

    2018-04-01

    Multi-needle Langmuir probe is a fairly new instrument technique that has been flown on several recent sounding rockets and is slated to fly on a subset of QB50 CubeSat constellation. This paper takes a fundamental look into the data analysis procedures used for this instrument to derive absolute electron density. Our calculations suggest that while the technique remains promising, the current data analysis procedures could easily result in errors of 50% or more. We present a simple data analysis adjustment that can reduce errors by at least a factor of five in typical operation.

  10. Optimization of Control Points Number at Coordinate Measurements based on the Monte-Carlo Method

    NASA Astrophysics Data System (ADS)

    Korolev, A. A.; Kochetkov, A. V.; Zakharov, O. V.

    2018-01-01

    Improving the quality of products increases the requirements for the accuracy of the dimensions and shape of workpiece surfaces. This, in turn, raises the requirements for the accuracy and productivity of workpiece measurement. Coordinate measuring machines are currently the most effective measuring tools for such problems. The article proposes a method for optimizing the number of control points using Monte Carlo simulation. Based on the measurement of a small sample from batches of workpieces, statistical modeling is performed, which allows one to obtain interval estimates of the measurement error. This approach is demonstrated by examples of applications for flatness, cylindricity and sphericity. Four options of uniform and non-uniform arrangement of control points are considered and compared. It is shown that as the number of control points decreases, the arithmetic mean decreases, the standard deviation of the measurement error increases, and the probability of a measurement α-error increases. In general, it is established that the number of control points can be reduced several-fold while maintaining the required measurement accuracy.
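
    A minimal Monte Carlo sketch of the idea, not the authors' algorithm, is shown below for flatness: a simulated surface is probed with different numbers of control points and the resulting error statistics are tabulated. The surface model and probe noise are hypothetical.

```python
import numpy as np

# Hedged Monte Carlo sketch: how flatness-measurement error statistics change
# with the number of control points.  Surface and noise models are hypothetical.
rng = np.random.default_rng(0)
true_flatness = 0.010     # mm, peak-to-valley of the simulated surface
probe_noise = 0.002       # mm, standard deviation of a single point measurement

def measured_flatness(n_points):
    z_true = rng.uniform(0.0, true_flatness, n_points)      # surface heights at control points
    z_meas = z_true + rng.normal(0.0, probe_noise, n_points)
    return z_meas.max() - z_meas.min()

for n_points in (5, 9, 25, 100):
    errors = np.array([measured_flatness(n_points) for _ in range(2000)]) - true_flatness
    print(f"n={n_points:4d}  mean error={errors.mean():+.4f} mm  std={errors.std():.4f} mm")
```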

  11. Error analysis for fast scintillator-based inertial confinement fusion burn history measurements

    NASA Astrophysics Data System (ADS)

    Lerche, R. A.; Ognibene, T. J.

    1999-01-01

    Plastic scintillator material acts as a neutron-to-light converter in instruments that make inertial confinement fusion burn history measurements. Light output for a detected neutron in current instruments has a fast rise time (<20 ps) and a relatively long decay constant (1.2 ns). For a burst of neutrons whose duration is much shorter than the decay constant, instantaneous light output is approximately proportional to the integral of the neutron interaction rate with the scintillator material. Burn history is obtained by deconvolving the exponential decay from the recorded signal. The error in estimating signal amplitude for these integral measurements is calculated and compared with a direct measurement in which light output is linearly proportional to the interaction rate.
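
    The deconvolution step has a simple continuous form: if the recorded light is the burn rate convolved with a fast-rise exponential decay of time constant tau, the burn rate is recovered as dL/dt + L/tau. The sketch below demonstrates this relationship on a synthetic, noise-free pulse; it is an illustration only, not the instrument's actual processing chain.

```python
import numpy as np

# Hedged sketch: recover a synthetic burn history from an "integrating"
# scintillator signal via S(t) = dL/dt + L(t)/tau (rise time neglected).
tau = 1.2e-9                 # s, scintillator decay constant quoted above
dt = 5e-12                   # s, sampling interval
t = np.arange(0.0, 3e-9, dt)
burn = np.exp(-0.5 * ((t - 1e-9) / 80e-12)**2)        # synthetic burn history

decay = np.exp(-t / tau)                               # detector impulse response
light = np.convolve(burn, decay)[:t.size] * dt         # recorded (integrated) signal

recovered = np.gradient(light, dt) + light / tau       # deconvolved burn rate
print(float(np.corrcoef(burn, recovered)[0, 1]))       # ~1 for noise-free data
```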

  12. Impact of resident duty hour limits on safety in the intensive care unit: a national survey of pediatric and neonatal intensivists.

    PubMed

    Typpo, Katri V; Tcharmtchi, M Hossein; Thomas, Eric J; Kelly, P Adam; Castillo, Leticia D; Singh, Hardeep

    2012-09-01

    Resident duty-hour regulations potentially shift the workload from resident to attending physicians. We sought to understand how current or future regulatory changes might impact safety in academic pediatric and neonatal intensive care units. Web-based survey. U.S. academic pediatric and neonatal intensive care units. Attending pediatric and neonatal intensivists. We evaluated perceptions on four intensive care unit safety-related risk measures potentially affected by current duty-hour regulations: 1) attending physician and resident fatigue; 2) attending physician workload; 3) errors (self-reported rates by attending physicians or perceived resident error rates); and 4) safety culture. We also evaluated perceptions of how these risks would change with further duty-hour restrictions. We administered our survey between February and April 2010 to 688 eligible physicians, of whom 360 (52.3%) responded. Most believed that resident error rates were unchanged or worse (91.9%) and safety culture was unchanged or worse (84.4%) with current duty-hour regulations. Of respondents, 61.9% believed their own work-hours providing direct patient care increased and 55.8% believed they were more fatigued while providing direct patient care. Most (85.3%) perceived no increase in their own error rates currently, but in the scenario of further reduction in resident duty-hours, over half (53.3%) believed that safety culture would worsen and a significant proportion (40.3%) believed that their own error rates would increase. Pediatric intensivists do not perceive improved patient safety from current resident duty-hour restrictions. Policies to further restrict resident duty-hours should consider unintended consequences of worsening certain aspects of intensive care unit safety.

  13. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, T. S.; DePoy, D. L.; Marshall, J. L.

    Here, we report that meeting the science goals for many current and future ground-based optical large-area sky surveys requires that the calibrated broadband photometry is both stable in time and uniform over the sky to 1% precision or better. Past and current surveys have achieved photometric precision of 1%–2% by calibrating the survey's stellar photometry with repeated measurements of a large number of stars observed in multiple epochs. The calibration techniques employed by these surveys only consider the relative frame-by-frame photometric zeropoint offset and the focal plane position-dependent illumination corrections, which are independent of the source color. However, variations in the wavelength dependence of the atmospheric transmission and the instrumental throughput induce source color-dependent systematic errors. These systematic errors must also be considered to achieve the most precise photometric measurements. In this paper, we examine such systematic chromatic errors (SCEs) using photometry from the Dark Energy Survey (DES) as an example. We first define a natural magnitude system for DES and calculate the systematic errors on stellar magnitudes when the atmospheric transmission and instrumental throughput deviate from the natural system. We conclude that the SCEs caused by the change of airmass in each exposure, the change of the precipitable water vapor and aerosol in the atmosphere over time, and the non-uniformity of instrumental throughput over the focal plane can be up to 2% in some bandpasses. We then compare the calculated SCEs with the observed DES data. For the test sample data, we correct these errors using measurements of the atmospheric transmission and instrumental throughput from auxiliary calibration systems. In conclusion, the residual after correction is less than 0.3%. Moreover, we calculate such SCEs for Type Ia supernovae and elliptical galaxies and find that the chromatic errors for non-stellar objects are redshift-dependent and can be larger than those for stars at certain redshifts.

  14. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, T. S.; DePoy, D. L.; Marshall, J. L.

    Meeting the science goals for many current and future ground-based optical large-area sky surveys requires that the calibrated broadband photometry is both stable in time and uniform over the sky to 1% precision or better. Past and current surveys have achieved photometric precision of 1%–2% by calibrating the survey’s stellar photometry with repeated measurements of a large number of stars observed in multiple epochs. The calibration techniques employed by these surveys only consider the relative frame-by-frame photometric zeropoint offset and the focal plane position-dependent illumination corrections, which are independent of the source color. However, variations in the wavelength dependence of the atmospheric transmission and the instrumental throughput induce source color-dependent systematic errors. These systematic errors must also be considered to achieve the most precise photometric measurements. In this paper, we examine such systematic chromatic errors (SCEs) using photometry from the Dark Energy Survey (DES) as an example. We first define a natural magnitude system for DES and calculate the systematic errors on stellar magnitudes when the atmospheric transmission and instrumental throughput deviate from the natural system. We conclude that the SCEs caused by the change of airmass in each exposure, the change of the precipitable water vapor and aerosol in the atmosphere over time, and the non-uniformity of instrumental throughput over the focal plane can be up to 2% in some bandpasses. We then compare the calculated SCEs with the observed DES data. For the test sample data, we correct these errors using measurements of the atmospheric transmission and instrumental throughput from auxiliary calibration systems. The residual after correction is less than 0.3%. Moreover, we calculate such SCEs for Type Ia supernovae and elliptical galaxies and find that the chromatic errors for non-stellar objects are redshift-dependent and can be larger than those for stars at certain redshifts.

  15. New developments in spatial interpolation methods of Sea-Level Anomalies in the Mediterranean Sea

    NASA Astrophysics Data System (ADS)

    Troupin, Charles; Barth, Alexander; Beckers, Jean-Marie; Pascual, Ananda

    2014-05-01

    The gridding of along-track Sea-Level Anomalies (SLA) measured by a constellation of satellites has numerous applications in oceanography, such as model validation, data assimilation or eddy tracking. Optimal Interpolation (OI) is often the preferred method for this task, as it leads to the lowest expected error and provides an error field associated with the analysed field. However, the numerical cost of the method may limit its utilization in situations where the number of data points is significant. Furthermore, the separation of non-adjacent regions with OI requires adaptation of the code, leading to a further increase of the numerical cost. To solve these issues, the Data-Interpolating Variational Analysis (DIVA), a technique designed to produce gridded fields from sparse in situ measurements, is applied to SLA data in the Mediterranean Sea. DIVA and OI have been shown to be equivalent (provided some assumptions on the covariances are made). The main difference lies in the covariance function, which is not explicitly formulated in DIVA. The particular spatial and temporal distributions of measurements required adaptation of the software tool (data format, parameter determination, ...). These adaptations are presented in the poster. The daily analysed and error fields obtained with this technique are compared with available products such as the gridded field from the Archiving, Validation and Interpretation of Satellite Oceanographic data (AVISO) data server. The comparison reveals an overall good agreement between the products. The time evolution of the mean error field evidences the need for a large number of simultaneous altimetry satellites: in periods during which four satellites are available, the mean error is on the order of 17.5%, while when only two satellites are available, the error exceeds 25%. Finally, we propose the use of sea currents to improve the results of the interpolation, especially in the coastal area. These currents can be constructed from the bathymetry or extracted from an HF radar located in the Balearic Sea.

  16. An error reduction algorithm to improve lidar turbulence estimates for wind energy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Newman, Jennifer F.; Clifton, Andrew

    Remote-sensing devices such as lidars are currently being investigated as alternatives to cup anemometers on meteorological towers for the measurement of wind speed and direction. Although lidars can measure mean wind speeds at heights spanning an entire turbine rotor disk and can be easily moved from one location to another, they measure different values of turbulence than an instrument on a tower. Current methods for improving lidar turbulence estimates include the use of analytical turbulence models and expensive scanning lidars. While these methods provide accurate results in a research setting, they cannot be easily applied to smaller, vertically profiling lidars in locations where high-resolution sonic anemometer data are not available. Thus, there is clearly a need for a turbulence error reduction model that is simpler and more easily applicable to lidars that are used in the wind energy industry. In this work, a new turbulence error reduction algorithm for lidars is described. The Lidar Turbulence Error Reduction Algorithm, L-TERRA, can be applied using only data from a stand-alone vertically profiling lidar and requires minimal training with meteorological tower data. The basis of L-TERRA is a series of physics-based corrections that are applied to the lidar data to mitigate errors from instrument noise, volume averaging, and variance contamination. These corrections are applied in conjunction with a trained machine-learning model to improve turbulence estimates from a vertically profiling WINDCUBE v2 lidar. The lessons learned from creating the L-TERRA model for a WINDCUBE v2 lidar can also be applied to other lidar devices. L-TERRA was tested on data from two sites in the Southern Plains region of the United States. The physics-based corrections in L-TERRA brought regression line slopes much closer to 1 at both sites and significantly reduced the sensitivity of lidar turbulence errors to atmospheric stability. The accuracy of machine-learning methods in L-TERRA was highly dependent on the input variables and training dataset used, suggesting that machine learning may not be the best technique for reducing lidar turbulence intensity (TI) error. Future work will include the use of a lidar simulator to better understand how different factors affect lidar turbulence error and to determine how these errors can be reduced using information from a stand-alone lidar.

  17. An error reduction algorithm to improve lidar turbulence estimates for wind energy

    DOE PAGES

    Newman, Jennifer F.; Clifton, Andrew

    2017-02-10

    Remote-sensing devices such as lidars are currently being investigated as alternatives to cup anemometers on meteorological towers for the measurement of wind speed and direction. Although lidars can measure mean wind speeds at heights spanning an entire turbine rotor disk and can be easily moved from one location to another, they measure different values of turbulence than an instrument on a tower. Current methods for improving lidar turbulence estimates include the use of analytical turbulence models and expensive scanning lidars. While these methods provide accurate results in a research setting, they cannot be easily applied to smaller, vertically profiling lidars in locations where high-resolution sonic anemometer data are not available. Thus, there is clearly a need for a turbulence error reduction model that is simpler and more easily applicable to lidars that are used in the wind energy industry. In this work, a new turbulence error reduction algorithm for lidars is described. The Lidar Turbulence Error Reduction Algorithm, L-TERRA, can be applied using only data from a stand-alone vertically profiling lidar and requires minimal training with meteorological tower data. The basis of L-TERRA is a series of physics-based corrections that are applied to the lidar data to mitigate errors from instrument noise, volume averaging, and variance contamination. These corrections are applied in conjunction with a trained machine-learning model to improve turbulence estimates from a vertically profiling WINDCUBE v2 lidar. The lessons learned from creating the L-TERRA model for a WINDCUBE v2 lidar can also be applied to other lidar devices. L-TERRA was tested on data from two sites in the Southern Plains region of the United States. The physics-based corrections in L-TERRA brought regression line slopes much closer to 1 at both sites and significantly reduced the sensitivity of lidar turbulence errors to atmospheric stability. The accuracy of machine-learning methods in L-TERRA was highly dependent on the input variables and training dataset used, suggesting that machine learning may not be the best technique for reducing lidar turbulence intensity (TI) error. Future work will include the use of a lidar simulator to better understand how different factors affect lidar turbulence error and to determine how these errors can be reduced using information from a stand-alone lidar.

  18. GUM Analysis for SIMS Isotopic Ratios in BEP0 Graphite Qualification Samples, Round 2

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gerlach, David C.; Heasler, Patrick G.; Reid, Bruce D.

    2009-01-01

    This report describes GUM calculations for TIMS and SIMS isotopic ratio measurements of reactor graphite samples. These isotopic ratios are used to estimate reactor burn-up, and currently consist of various ratios of U, Pu, and Boron impurities in the graphite samples. The GUM calculation is a propagation of error methodology that assigns uncertainties (in the form of standard error and confidence bound) to the final estimates.

  19. Research on the Factors Influencing the Measurement Errors of the Discrete Rogowski Coil †

    PubMed Central

    Xu, Mengyuan; Yan, Jing; Geng, Yingsan; Zhang, Kun; Sun, Chao

    2018-01-01

    An innovative array of magnetic coils (the discrete Rogowski coil—RC) with the advantages of flexible structure, miniaturization and mass producibility is investigated. First, the mutual inductance between the discrete RC and circular and rectangular conductors is calculated using the magnetic vector potential (MVP) method. The results are found to be consistent with those calculated using the finite element method, but the MVP method is simpler and more practical. Then, the influence of conductor section parameters, inclination, and eccentricity on the accuracy of the discrete RC is calculated to provide a reference. Studying the influence of an external current on the discrete RC’s interference error reveals optimal values for length, winding density, and position arrangement of the solenoids. It is also found that eccentricity and interference errors decrease with an increasing number of solenoids. Finally, a discrete RC prototype is devised and manufactured. The experimental results show consistent output characteristics, with the calculated sensitivity and mutual inductance of the discrete RC being very close to the experimental results. The influence of an external conductor on the measurement of the discrete RC is analyzed experimentally, and the results show that interference from an external current decreases with increasing distance between the external and measured conductors. PMID:29534006

  20. Research on the Factors Influencing the Measurement Errors of the Discrete Rogowski Coil.

    PubMed

    Xu, Mengyuan; Yan, Jing; Geng, Yingsan; Zhang, Kun; Sun, Chao

    2018-03-13

    An innovative array of magnetic coils (the discrete Rogowski coil-RC) with the advantages of flexible structure, miniaturization and mass producibility is investigated. First, the mutual inductance between the discrete RC and circular and rectangular conductors is calculated using the magnetic vector potential (MVP) method. The results are found to be consistent with those calculated using the finite element method, but the MVP method is simpler and more practical. Then, the influence of conductor section parameters, inclination, and eccentricity on the accuracy of the discrete RC is calculated to provide a reference. Studying the influence of an external current on the discrete RC's interference error reveals optimal values for length, winding density, and position arrangement of the solenoids. It is also found that eccentricity and interference errors decrease with an increasing number of solenoids. Finally, a discrete RC prototype is devised and manufactured. The experimental results show consistent output characteristics, with the calculated sensitivity and mutual inductance of the discrete RC being very close to the experimental results. The influence of an external conductor on the measurement of the discrete RC is analyzed experimentally, and the results show that interference from an external current decreases with increasing distance between the external and measured conductors.
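
    For orientation, the classical (ideal-winding) expression referred to in both records gives the mutual inductance of a uniformly wound Rogowski coil as M = mu0 * n * A, with n the turn density and A the turn cross-section, and an output voltage v = -M dI/dt. The sketch below evaluates this for hypothetical coil parameters; it is the textbook ideal case, not the discrete-RC model of the paper.

```python
import numpy as np

# Hedged sketch of the classical Rogowski relations M = mu0 * n * A and
# v(t) = -M * dI/dt.  Coil and source parameters are hypothetical.
mu0 = 4e-7 * np.pi        # H/m
n = 2000.0                # turns per metre of winding length
A = 20e-6                 # m^2, cross-sectional area of one turn
M = mu0 * n * A           # mutual inductance, H

f, I0 = 50.0, 1000.0      # 50 Hz primary current, 1 kA peak
t = np.linspace(0.0, 0.04, 4001)
I = I0 * np.sin(2 * np.pi * f * t)
v = -M * np.gradient(I, t)        # coil output; an integrator recovers I(t)
print(f"M = {M * 1e9:.1f} nH, peak output = {np.max(np.abs(v)) * 1e3:.1f} mV")
```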

  1. Precision and Error of Three-dimensional Phenotypic Measures Acquired from 3dMD Photogrammetric Images

    PubMed Central

    Aldridge, Kristina; Boyadjiev, Simeon A.; Capone, George T.; DeLeon, Valerie B.; Richtsmeier, Joan T.

    2015-01-01

    The genetic basis for complex phenotypes is currently of great interest for both clinical investigators and basic scientists. In order to acquire a thorough understanding of the translation from genotype to phenotype, highly precise measures of phenotypic variation are required. New technologies, such as 3D photogrammetry are being implemented in phenotypic studies due to their ability to collect data rapidly and non-invasively. Before these systems can be broadly implemented the error associated with data collected from images acquired using these technologies must be assessed. This study investigates the precision, error, and repeatability associated with anthropometric landmark coordinate data collected from 3D digital photogrammetric images acquired with the 3dMDface System. Precision, error due to the imaging system, error due to digitization of the images, and repeatability are assessed in a sample of children and adults (N=15). Results show that data collected from images with the 3dMDface System are highly repeatable and precise. The average error associated with the placement of landmarks is sub-millimeter; both the error due to digitization and to the imaging system are very low. The few measures showing a higher degree of error include those crossing the labial fissure, which are influenced by even subtle movement of the mandible. These results suggest that 3D anthropometric data collected using the 3dMDface System are highly reliable and therefore useful for evaluation of clinical dysmorphology and surgery, analyses of genotype-phenotype correlations, and inheritance of complex phenotypes. PMID:16158436

  2. Refinements in the short-circuit technique and its application to active potassium transport across the cecropia midgut.

    PubMed

    Wood, J L; Moreton, R B

    1978-12-01

    1. The conventional, two-electrode method for measuring potential difference across an epithelium is subject to error due to potential gradients caused by current flow in the bathing medium. Mathematical analysis shows that the error in measuring short-circuit current is proportional to the resistivity of the bathing medium and to the separation of the two recording electrodes. It is particularly serious for the insect larval midgut, where the resistivity of the medium is high, and that of the tissue is low. 2. A system has been devised, which uses a third recording electrode to monitor directly the potential gradient in the bathing medium. By suitable electrical connexions, the gradient can be automatically compensated, leaving a residual error which depends on the thickness of the tissue, but not on the electrode separation. Because the thicknesses of most epithelia are smaller than the smallest practical electrode spacing, this error is smaller than that inherent in a two-electrode system. 3. Since voltage-gradients are automatically compensated, it is possible to obtain continuous readings of potential and current. A 'voltage-clamp' circuit is described, which allows the time-course of the short-circuit current to be studied. 4. The three-electrode system has been used to study the larval midgut of Hyalophora cecropia. The average results from five experiments were: initial potential difference (open-circuit): 98 ± 11 mV (S.E.M.); short-circuit current at time 60 min: 498 ± 160 µA cm-2; 'steady-state' resistance at 60 min: 150 ± 26 Ω cm2. The current is equivalent to a net potassium transport of 18.6 µ-equiv cm-2 h-1. 5. The electrical parameters of the midgut change rapidly with time. The potential difference decays with a half-time of about 158 min, the resistance increases with a half-time of about 16 min, and the short-circuit current decays as the sum of two exponential terms, with half-times of about 16 and 158 min respectively. In addition, potential and short-circuit current show transient responses to step changes. 6. The properties of the midgut are compared with those of other transporting epithelia, and their dependence on the degree of folding of the preparation is discussed. Their time-dependence is discussed in the context of changes in potassium content of the tissue, and the implications for measurements depending on the assumption of a steady state are outlined.

  3. Measurement of tokamak error fields using plasma response and its applicability to ITER

    DOE PAGES

    Strait, Edward J.; Buttery, Richard J.; Casper, T. A.; ...

    2014-04-17

    The nonlinear response of a low-beta tokamak plasma to non-axisymmetric fields offers an alternative to direct measurement of the non-axisymmetric part of the vacuum magnetic fields, often termed “error fields”. Possible approaches are discussed for determination of error fields and the required current in non-axisymmetric correction coils, with an emphasis on two relatively new methods: measurement of the torque balance on a saturated magnetic island, and measurement of the braking of plasma rotation in the absence of an island. The former is well suited to ohmically heated discharges, while the latter is more appropriate for discharges with a modest amount of neutral beam heating to drive rotation. Both can potentially provide continuous measurements during a discharge, subject to the limitation of a minimum averaging time. The applicability of these methods to ITER is discussed, and an estimate is made of their uncertainties in light of the specifications of ITER’s diagnostic systems. Furthermore, the use of plasma response-based techniques in normal ITER operational scenarios may allow identification of the error field contributions by individual central solenoid coils, but identification of the individual contributions by the outer poloidal field coils or other sources is less likely to be feasible.

  4. Architectural elements of hybrid navigation systems for future space transportation

    NASA Astrophysics Data System (ADS)

    Trigo, Guilherme F.; Theil, Stephan

    2018-06-01

    The fundamental limitations of inertial navigation, currently employed by most launchers, have raised interest in GNSS-aided solutions. Combination of inertial measurements and GNSS outputs allows inertial calibration online, solving the issue of inertial drift. However, many challenges and design options unfold. In this work we analyse several architectural elements and design aspects of a hybrid GNSS/INS navigation system conceived for space transportation. The most fundamental architectural features, such as coupling depth, modularity between filter and inertial propagation, and the open-/closed-loop nature of the configuration, are discussed in the light of the envisaged application. The importance of the inertial propagation algorithm and sensor class in the overall system is investigated, and the handling of sensor errors and uncertainties that arise with lower-grade sensors is also considered. In terms of GNSS outputs we consider receiver solutions (position and velocity) and raw measurements (pseudorange, pseudorange-rate and time-difference carrier phase). Receiver clock error handling options and atmospheric error correction schemes for these measurements are analysed under flight conditions. System performance with different GNSS measurements is estimated through covariance analysis, with the differences between loose and tight coupling emphasized through partial outage simulation. Finally, we discuss options for filter algorithm robustness against non-linearities and system/measurement errors. A possible scheme for fault detection, isolation and recovery is also proposed.

  5. Prescribers' expectations and barriers to electronic prescribing of controlled substances

    PubMed Central

    Kim, Meelee; McDonald, Ann; Kreiner, Peter; Kelleher, Stephen J; Blackman, Michael B; Kaufman, Peter N; Carrow, Grant M

    2011-01-01

    Objective To better understand barriers associated with the adoption and use of electronic prescribing of controlled substances (EPCS), a practice recently established by US Drug Enforcement Administration regulation. Materials and methods Prescribers of controlled substances affiliated with a regional health system were surveyed regarding current electronic prescribing (e-prescribing) activities, current prescribing of controlled substances, and expectations and barriers to the adoption of EPCS. Results 246 prescribers (response rate of 64%) represented a range of medical specialties, with 43.1% of these prescribers current users of e-prescribing for non-controlled substances. Reported issues with controlled substances included errors, pharmacy call-backs, and diversion; most prescribers expected EPCS to address many of these problems, specifically reduce medical errors, improve work flow and efficiency of practice, help identify prescription diversion or misuse, and improve patient treatment management. Prescribers expected, however, that it would be disruptive to practice, and over one-third of respondents reported that carrying a security authentication token at all times would be so burdensome as to discourage adoption. Discussion Although adoption of e-prescribing has been shown to dramatically reduce medication errors, challenges to efficient processes and errors still persist from the perspective of the prescriber, that may interfere with the adoption of EPCS. Most prescribers regarded EPCS security measures as a small or moderate inconvenience (other than carrying a security token), with advantages outweighing the burden. Conclusion Prescribers are optimistic about the potential for EPCS to improve practice, but view certain security measures as a burden and potential barrier. PMID:21946239

  6. Decodoku: Quantum error correction as a simple puzzle game

    NASA Astrophysics Data System (ADS)

    Wootton, James

    To build quantum computers, we need to detect and manage any noise that occurs. This will be done using quantum error correction. At the hardware level, QEC is a multipartite system that stores information non-locally. Certain measurements are made which do not disturb the stored information, but which do allow signatures of errors to be detected. Then there is a software problem: how to take these measurement outcomes and determine (a) the errors that caused them, and (b) how to remove their effects. For qubit error correction, the algorithms required to do this are well known. For qudits, however, current methods are far from optimal. We consider the error correction problem of qubit surface codes. At the most basic level, this is a problem that can be expressed in terms of a grid of numbers. Using this fact, we take the inherent problem at the heart of quantum error correction, remove it from its quantum context, and present it in terms of simple grid-based puzzle games. We have developed three versions of these puzzle games, focussing on different aspects of the required algorithms. These have been released as iOS and Android apps, allowing the public to try their hand at developing good algorithms to solve the puzzles. For more information, see www.decodoku.com. Funding from the NCCR QSIT.

  7. Flame exposure time on Langmuir probe degradation, ion density, and thermionic emission for flame temperature.

    PubMed

    Doyle, S J; Salvador, P R; Xu, K G

    2017-11-01

    The paper examines the effect of exposure time of Langmuir probes in an atmospheric premixed methane-air flame. The effects of probe size and material composition on current measurements were investigated, with molybdenum and tungsten probe tips ranging in diameter from 0.0508 to 0.1651 mm. Repeated prolonged exposures to the flame, with five runs of 60 s, resulted in gradual probe degradations (-6% to -62% area loss) which affected the measurements. Due to long flame exposures, two ion saturation currents were observed, resulting in significantly different ion densities ranging from 1.16 × 10^16 to 2.71 × 10^19 m^-3. The difference between the saturation currents is caused by thermionic emissions from the probe tip. As thermionic emission is temperature dependent, the flame temperature could thus be estimated from the change in current. The flame temperatures calculated from the difference in saturation currents (1734-1887 K) were compared to those from a conventional thermocouple (1580-1908 K). Temperature measurements obtained from tungsten probes placed in rich flames yielded the highest percent error (9.66%-18.70%) due to smaller emission current densities at lower temperatures. The molybdenum probe yielded an accurate temperature value with only 1.29% error. Molybdenum also demonstrated very low probe degradation in comparison to the tungsten probe tips (area reductions of 6% vs. 58%, respectively). The results also show that very little exposure time (<5 s) is needed to obtain a valid ion density measurement and that prolonged flame exposures can yield the flame temperature but also risks damage to the Langmuir probe tip.
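
    The temperature estimate from the two saturation currents rests on the temperature dependence of thermionic emission; a common way to express it is the Richardson-Dushman law. The sketch below solves J = A_G * T^2 * exp(-W / (k_B * T)) for T given a thermionic current density inferred from the difference of two saturation currents; the probe geometry, currents, and work function are illustrative values, not the paper's data.

```python
import numpy as np
from scipy.optimize import brentq

# Hedged sketch: probe/flame temperature from the thermionic emission current
# via the Richardson-Dushman law.  All input values are illustrative.
k_B = 8.617e-5             # eV/K
A_G = 6.0e5                # A m^-2 K^-2, Richardson constant (order of magnitude)
W = 4.2                    # eV, assumed work function of the probe material

d, L = 0.1e-3, 2e-3        # probe tip diameter and exposed length, m
area = np.pi * d * L       # emitting area, m^2

I_sat_hot, I_sat_cold = 120e-6, 40e-6      # the two observed saturation currents, A
J_th = (I_sat_hot - I_sat_cold) / area     # thermionic current density, A/m^2

T = brentq(lambda temp: A_G * temp**2 * np.exp(-W / (k_B * temp)) - J_th, 800.0, 3000.0)
print(f"estimated temperature ~ {T:.0f} K")
```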

  8. Assessment of Systematic Chromatic Errors that Impact Sub-1% Photometric Precision in Large-Area Sky Surveys

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, T. S.

    Meeting the science goals for many current and future ground-based optical large-area sky surveys requires that the calibrated broadband photometry is stable in time and uniform over the sky to 1% precision or better. Past surveys have achieved photometric precision of 1-2% by calibrating the survey's stellar photometry with repeated measurements of a large number of stars observed in multiple epochs. The calibration techniques employed by these surveys only consider the relative frame-by-frame photometric zeropoint offset and the focal plane position-dependent illumination corrections, which are independent of the source color. However, variations in the wavelength dependence of the atmospheric transmission and the instrumental throughput induce source color-dependent systematic errors. These systematic errors must also be considered to achieve the most precise photometric measurements. In this paper, we examine such systematic chromatic errors using photometry from the Dark Energy Survey (DES) as an example. We define a natural magnitude system for DES and calculate the systematic errors on stellar magnitudes when the atmospheric transmission and instrumental throughput deviate from the natural system. We conclude that the systematic chromatic errors caused by the change of airmass in each exposure, the change of the precipitable water vapor and aerosol in the atmosphere over time, and the non-uniformity of instrumental throughput over the focal plane, can be up to 2% in some bandpasses. We compare the calculated systematic chromatic errors with the observed DES data. For the test sample data, we correct these errors using measurements of the atmospheric transmission and instrumental throughput. The residual after correction is less than 0.3%. We also find that the errors for non-stellar objects are redshift-dependent and can be larger than those for stars at certain redshifts.

  9. Remote sensing of ocean currents

    NASA Technical Reports Server (NTRS)

    Goldstein, R. M.; Zebker, H. A.; Barnett, T. P.

    1989-01-01

    A method of remotely measuring near-surface ocean currents with a synthetic aperture radar (SAR) is described. The apparatus consists of a single SAR transmitter and two receiving antennas. The phase difference between SAR image scenes obtained from the antennas forms an interferogram that is directly proportional to the surface current. The first field test of this technique against conventional measurements gives estimates of mean currents accurate to order 20 percent, that is, root-mean-square errors of 5 to 10 centimeters per second in mean flows of 27 to 56 centimeters per second. If the full potential of the method could be realized with spacecraft, then it might be possible to routinely monitor the surface currents of the world's oceans.
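
    The phase-to-velocity conversion in along-track interferometry is essentially a Doppler relation: the two antennas view the same patch with a short time lag, and a moving surface accumulates phase over that lag. A minimal sketch under the usual assumptions (the wavelength, baseline, and platform speed below are placeholder values, not those of the instrument described):

```python
import math

def surface_velocity(phase_rad, wavelength_m, baseline_m, platform_speed_ms):
    """Radial surface velocity from an along-track interferometric phase.
    The two antennas image the same scene separated by a lag tau = B/(2V);
    a surface moving at radial velocity u_r accumulates a phase of
    4*pi*u_r*tau/wavelength over that lag."""
    tau = baseline_m / (2.0 * platform_speed_ms)
    return phase_rad * wavelength_m / (4.0 * math.pi * tau)

# Illustrative numbers only: L-band radar, 20 m baseline, 250 m/s aircraft
print(surface_velocity(phase_rad=0.5, wavelength_m=0.24,
                       baseline_m=20.0, platform_speed_ms=250.0))
```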

  10. Self-reported and observed punitive parenting prospectively predicts increased error-related brain activity in six-year-old children

    PubMed Central

    Meyer, Alexandria; Proudfit, Greg Hajcak; Bufferd, Sara J.; Kujawa, Autumn J.; Laptook, Rebecca S.; Torpey, Dana C.; Klein, Daniel N.

    2017-01-01

    The error-related negativity (ERN) is a negative deflection in the event-related potential (ERP) occurring approximately 50 ms after error commission at fronto-central electrode sites and is thought to reflect the activation of a generic error monitoring system. Several studies have reported an increased ERN in clinically anxious children, and suggest that anxious children are more sensitive to error commission—although the mechanisms underlying this association are not clear. We have previously found that punishing errors results in a larger ERN, an effect that persists after punishment ends. It is possible that learning-related experiences that impact sensitivity to errors may lead to an increased ERN. In particular, punitive parenting might sensitize children to errors and increase their ERN. We tested this possibility in the current study by prospectively examining the relationship between parenting style during early childhood and children’s ERN approximately three years later. Initially, 295 parents and children (approximately 3 years old) participated in a structured observational measure of parenting behavior, and parents completed a self-report measure of parenting style. At a follow-up assessment approximately three years later, the ERN was elicited during a Go/No-Go task, and diagnostic interviews were completed with parents to assess child psychopathology. Results suggested that both observational measures of hostile parenting and self-report measures of authoritarian parenting style uniquely predicted a larger ERN in children 3 years later. We previously reported that children in this sample with anxiety disorders were characterized by an increased ERN. A mediation analysis indicated that ERN magnitude mediated the relationship between harsh parenting and child anxiety disorder. Results suggest that parenting may shape children’s error processing through environmental conditioning and thereby risk for anxiety, although future work is needed to confirm this hypothesis. PMID:25092483

  11. Determining the near-surface current profile from measurements of the wave dispersion relation

    NASA Astrophysics Data System (ADS)

    Smeltzer, Benjamin; Maxwell, Peter; Aesøy, Eirik; Ellingsen, Simen

    2017-11-01

    The current-induced Doppler shifts of waves can yield information about the background mean flow, providing an attractive method of inferring the current profile in the upper layer of the ocean. We present measurements of waves propagating on shear currents in a laboratory water channel, as well as theoretical investigations of inversion techniques for determining the vertical current structure. Spatial and temporal measurements of the free surface profile obtained using a synthetic Schlieren method are analyzed to determine the wave dispersion relation and Doppler shifts as a function of wavelength. The vertical current profile can then be inferred from the Doppler shifts using an inversion algorithm. Most existing algorithms rely on a priori assumptions of the shape of the current profile, and developing a method that uses less stringent assumptions is a focus of this study, allowing for measurement of more general current profiles. The accuracy of current inversion algorithms is evaluated by comparison to measurements of the mean flow profile from particle image velocimetry (PIV), and a discussion of the sensitivity to errors in the Doppler shifts is presented.
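
    Any inversion starts from a forward relation between the profile and the wavenumber-dependent Doppler shift. A common first-order approximation treats the current "felt" by a deep-water wave as an exponentially weighted average of the profile; the sketch below implements that forward model with an assumed exponential current profile, purely for illustration.

```python
import numpy as np

def effective_current(k, z, U):
    """Exponentially weighted current felt by a deep-water wave of wavenumber
    k (rad/m): u_eff = 2k * integral of U(z)*exp(2kz) dz, with z negative
    downward.  The current-induced Doppler shift is then k * u_eff."""
    weight = np.exp(2.0 * k * z)
    return 2.0 * k * np.trapz(U * weight, z)

# Illustrative shear profile: 0.2 m/s at the surface, decaying over ~1 m
z = np.linspace(-5.0, 0.0, 500)
U = 0.2 * np.exp(z / 1.0)
for k in (2.0, 6.0, 20.0):   # rad/m, i.e. wavelengths from ~3 m down to ~0.3 m
    print(k, effective_current(k, z, U))
```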

  12. Patient motion tracking in the presence of measurement errors.

    PubMed

    Haidegger, Tamás; Benyó, Zoltán; Kazanzides, Peter

    2009-01-01

    The primary aim of computer-integrated surgical systems is to provide physicians with superior surgical tools for better patient outcome. Robotic technology is capable of both minimally invasive surgery and microsurgery, offering remarkable advantages for the surgeon and the patient. Current systems allow for sub-millimeter intraoperative spatial positioning; however, certain limitations still remain. Measurement noise and unintended changes in the operating room environment can result in major errors. Positioning errors are a significant danger to patients in procedures involving robots and other automated devices. We have developed a new robotic system at the Johns Hopkins University to support cranial drilling in neurosurgery procedures. The robot provides advanced visualization and safety features. The generic algorithm described in this paper allows for automated compensation of patient motion through optical tracking and Kalman filtering. When applied to the neurosurgery setup, preliminary results show that it is possible to identify patient motion within 700 ms, and apply the appropriate compensation with an average of 1.24 mm positioning error after 2 s of setup time.
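
    The motion compensation described combines optical tracking with Kalman filtering to smooth noisy pose measurements. A minimal one-dimensional constant-velocity sketch of that filtering step (the time step and noise levels are illustrative, not the parameters of the actual system):

```python
import numpy as np

def kalman_track(measurements, dt=0.02, meas_std=0.5, accel_std=2.0):
    """1-D constant-velocity Kalman filter; state = [position, velocity].
    Returns filtered position estimates for a noisy measurement stream."""
    F = np.array([[1.0, dt], [0.0, 1.0]])              # state transition
    H = np.array([[1.0, 0.0]])                         # position is observed
    Q = accel_std**2 * np.array([[dt**4 / 4, dt**3 / 2],
                                 [dt**3 / 2, dt**2]])  # process noise
    R = np.array([[meas_std**2]])                      # measurement noise
    x = np.zeros((2, 1))
    P = np.eye(2) * 1e3                                # vague initial covariance
    filtered = []
    for z in measurements:
        x = F @ x                                      # predict
        P = F @ P @ F.T + Q
        y = np.array([[z]]) - H @ x                    # update
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ y
        P = (np.eye(2) - K @ H) @ P
        filtered.append(float(x[0, 0]))
    return filtered

# Illustrative use: slow drift of 1 mm/s observed with 0.5 mm measurement noise
t = np.arange(0.0, 2.0, 0.02)
noisy = 1.0 * t + np.random.normal(0.0, 0.5, t.size)
print(kalman_track(noisy)[-3:])
```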

  13. Error Modeling of Multibaseline Optical Truss: Part 1: Modeling of System Level Performance

    NASA Technical Reports Server (NTRS)

    Milman, Mark H.; Korechoff, R. E.; Zhang, L. D.

    2004-01-01

    Global astrometry is the measurement of stellar positions and motions. These are typically characterized by five parameters, including two position parameters, two proper motion parameters, and parallax. The Space Interferometry Mission (SIM) will derive these parameters for a grid of approximately 1300 stars covering the celestial sphere to an accuracy of approximately 4 μas, representing a two orders of magnitude improvement over the most precise current star catalogues. Narrow angle astrometry will be performed to a 1 μas accuracy. A wealth of scientific information will be obtained from these accurate measurements encompassing many aspects of both galactic and extragalactic science. SIM will be subject to a number of instrument errors that can potentially degrade performance. Many of these errors are systematic in that they are relatively static and repeatable with respect to the time frame and direction of the observation. This paper and its companion define the modeling of the contributing factors to these errors and the analysis of how they impact SIM's ability to perform astrometric science.

  14. Optical surface pressure measurements: Accuracy and application field evaluation

    NASA Astrophysics Data System (ADS)

    Bukov, A.; Mosharov, V.; Orlov, A.; Pesetsky, V.; Radchenko, V.; Phonov, S.; Matyash, S.; Kuzmin, M.; Sadovskii, N.

    1994-07-01

    Optical pressure measurement (OPM) is a new pressure measurement method being rapidly developed in several aerodynamic research centers: TsAGI (Russia), Boeing, NASA, McDonnell Douglas (all USA), and DLR (Germany). At its present level of development, the OPM method can be used as a standard experimental method of aerodynamic investigation in certain application fields. The applications of the OPM method are determined mainly by its accuracy. The accuracy of the OPM method is determined by errors in the three following groups: (1) errors of the luminescent pressure sensor (LPS) itself, such as uncompensated temperature influence, photodegradation, temperature and pressure hysteresis, variation of the LPS parameters from point to point on the model surface, etc.; (2) errors of the measurement system, such as noise of the photodetector, nonlinearity and nonuniformity of the photodetector, time and temperature offsets, etc.; and (3) methodological errors, owing to displacement and deformation of the model in an airflow, contamination of the model surface, scattering of the excitation and luminescent light from the model surface and test section walls, etc. The OPM method currently yields a total error of the measured pressure of no less than about 1 percent. This accuracy is enough to visualize the pressure field, to determine total and distributed aerodynamic loads, and to solve some problems of local aerodynamic investigation at transonic and supersonic velocities. OPM is less effective at low subsonic velocities (M less than 0.4) and for precise measurements, for example, airfoil optimization. Current limitations of the OPM method are discussed using the example of surface pressure measurements and calculations of the integral loads on the wings of a canard-aircraft model. The pressure measurement system and data reduction methods used in these tests are also described.
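
    The luminescent pressure sensor at the core of the method is typically calibrated with a Stern-Volmer type relation between luminescence intensity and pressure. A hedged sketch of such a calibration and its inversion (the coefficients are placeholders; a real system calibrates them per pixel and accounts for temperature):

```python
def pressure_from_intensity(i_ref_over_i, a=0.2, b=0.8, p_ref_kpa=101.3):
    """Stern-Volmer style LPS calibration:
    I_ref / I = A + B * (P / P_ref)  =>  P = P_ref * (I_ref/I - A) / B."""
    return p_ref_kpa * (i_ref_over_i - a) / b

# Illustrative: luminescence dropped to 80% of the wind-off reference image
print(pressure_from_intensity(1.0 / 0.8))
```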

  15. A hybrid solution using computational prediction and measured data to accurately determine process corrections with reduced overlay sampling

    NASA Astrophysics Data System (ADS)

    Noyes, Ben F.; Mokaberi, Babak; Mandoy, Ram; Pate, Alex; Huijgen, Ralph; McBurney, Mike; Chen, Owen

    2017-03-01

    Reducing overlay error via an accurate APC feedback system is one of the main challenges in high volume production of the current and future nodes in the semiconductor industry. The overlay feedback system directly affects the number of dies meeting overlay specification and the number of layers requiring dedicated exposure tools through the fabrication flow. Increasing the former number and reducing the latter number is beneficial for the overall efficiency and yield of the fabrication process. An overlay feedback system requires accurate determination of the overlay error, or fingerprint, on exposed wafers in order to determine corrections to be automatically and dynamically applied to the exposure of future wafers. Since current and future nodes require correction per exposure (CPE), the resolution of the overlay fingerprint must be high enough to accommodate CPE in the overlay feedback system, or overlay control module (OCM). Determining a high resolution fingerprint from measured data requires extremely dense overlay sampling that takes a significant amount of measurement time. For static corrections this is acceptable, but in an automated dynamic correction system this method creates extreme bottlenecks for the throughput of said system as new lots have to wait until the previous lot is measured. One solution is using a less dense overlay sampling scheme and employing computationally up-sampled data to a dense fingerprint. That method uses a global fingerprint model over the entire wafer; measured localized overlay errors are therefore not always represented in its up-sampled output. This paper will discuss a hybrid system shown in Fig. 1 that combines a computationally up-sampled fingerprint with the measured data to more accurately capture the actual fingerprint, including local overlay errors. Such a hybrid system is shown to result in reduced modelled residuals while determining the fingerprint, and better on-product overlay performance.
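
    One way to picture such a hybrid is to up-sample a smooth global model fitted to the sparse measurements and then re-insert the measured residuals at the sampled sites, so that local overlay errors are not averaged away. The sketch below is a generic illustration of that idea under assumed model terms, not the actual OCM algorithm.

```python
import numpy as np

def hybrid_fingerprint(xy_meas, ovl_meas, xy_dense):
    """Combine a computationally up-sampled fingerprint with measured data:
    1) fit a low-order polynomial overlay model to the sparse measurements,
    2) evaluate it on a dense grid (the up-sampled fingerprint),
    3) add the measured residuals back at the nearest dense grid points so
       that localized overlay errors survive in the dense fingerprint."""
    def design(xy):
        x, y = xy[:, 0], xy[:, 1]
        return np.column_stack([np.ones_like(x), x, y, x * y, x**2, y**2])

    coef, *_ = np.linalg.lstsq(design(xy_meas), ovl_meas, rcond=None)
    dense = design(xy_dense) @ coef                  # up-sampled fingerprint
    residuals = ovl_meas - design(xy_meas) @ coef    # local overlay errors
    for (xm, ym), r in zip(xy_meas, residuals):
        idx = np.argmin((xy_dense[:, 0] - xm)**2 + (xy_dense[:, 1] - ym)**2)
        dense[idx] += r
    return dense
```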

  16. Decisions to shoot in a weapon identification task: The influence of cultural stereotypes and perceived threat on false positive errors.

    PubMed

    Fleming, Kevin K; Bandy, Carole L; Kimble, Matthew O

    2010-01-01

    The decision to shoot a gun engages executive control processes that can be biased by cultural stereotypes and perceived threat. The neural locus of the decision to shoot is likely to be found in the anterior cingulate cortex (ACC), where cognition and affect converge. Male military cadets at Norwich University (N=37) performed a weapon identification task in which they made rapid decisions to shoot when images of guns appeared briefly on a computer screen. Reaction times, error rates, and electroencephalogram (EEG) activity were recorded. Cadets reacted more quickly and accurately when guns were primed by images of Middle-Eastern males wearing traditional clothing. However, cadets also made more false positive errors when tools were primed by these images. Error-related negativity (ERN) was measured for each response. Deeper ERNs were found in the medial-frontal cortex following false positive responses. Cadets who made fewer errors also produced deeper ERNs, indicating stronger executive control. Pupil size was used to measure autonomic arousal related to perceived threat. Images of Middle-Eastern males in traditional clothing produced larger pupil sizes. An image of Osama bin Laden induced the largest pupil size, as would be predicted for the exemplar of Middle East terrorism. Cadets who showed greater increases in pupil size also made more false positive errors. Regression analyses were performed to evaluate predictions based on current models of perceived threat, stereotype activation, and cognitive control. Measures of pupil size (perceived threat) and ERN (cognitive control) explained significant proportions of the variance in false positive errors to Middle-Eastern males in traditional clothing, while measures of reaction time, signal detection response bias, and stimulus discriminability explained most of the remaining variance.

  17. Decisions to Shoot in a Weapon Identification Task: The Influence of Cultural Stereotypes and Perceived Threat on False Positive Errors

    PubMed Central

    Fleming, Kevin K.; Bandy, Carole L.; Kimble, Matthew O.

    2014-01-01

    The decision to shoot engages executive control processes that can be biased by cultural stereotypes and perceived threat. The neural locus of the decision to shoot is likely to be found in the anterior cingulate cortex (ACC) where cognition and affect converge. Male military cadets at Norwich University (N=37) performed a weapon identification task in which they made rapid decisions to shoot when images of guns appeared briefly on a computer screen. Reaction times, error rates, and EEG activity were recorded. Cadets reacted more quickly and accurately when guns were primed by images of middle-eastern males wearing traditional clothing. However, cadets also made more false positive errors when tools were primed by these images. Error-related negativity (ERN) was measured for each response. Deeper ERN’s were found in the medial-frontal cortex following false positive responses. Cadets who made fewer errors also produced deeper ERN’s, indicating stronger executive control. Pupil size was used to measure autonomic arousal related to perceived threat. Images of middle-eastern males in traditional clothing produced larger pupil sizes. An image of Osama bin Laden induced the largest pupil size, as would be predicted for the exemplar of Middle East terrorism. Cadets who showed greater increases in pupil size also made more false positive errors. Regression analyses were performed to evaluate predictions based on current models of perceived threat, stereotype activation, and cognitive control. Measures of pupil size (perceived threat) and ERN (cognitive control) explained significant proportions of the variance in false positive errors to middle-eastern males in traditional clothing, while measures of reaction time, signal detection response bias, and stimulus discriminability explained most of the remaining variance. PMID:19813139

  18. Understanding diagnostic errors in medicine: a lesson from aviation

    PubMed Central

    Singh, H; Petersen, L A; Thomas, E J

    2006-01-01

    The impact of diagnostic errors on patient safety in medicine is increasingly being recognized. Despite the current progress in patient safety research, the understanding of such errors and how to prevent them is inadequate. Preliminary research suggests that diagnostic errors have both cognitive and systems origins. Situational awareness is a model that is primarily used in aviation human factors research that can encompass both the cognitive and the systems roots of such errors. This conceptual model offers a unique perspective in the study of diagnostic errors. The applicability of this model is illustrated by the analysis of a patient whose diagnosis of spinal cord compression was substantially delayed. We suggest how the application of this framework could lead to potential areas of intervention and outline some areas of future research. It is possible that the use of such a model in medicine could help reduce errors in diagnosis and lead to significant improvements in patient care. Further research is needed, including the measurement of situational awareness and correlation with health outcomes. PMID:16751463

  19. Method for controlling a vehicle with two or more independently steered wheels

    DOEpatents

    Reister, D.B.; Unseren, M.A.

    1995-03-28

    A method is described for independently controlling each steerable drive wheel of a vehicle with two or more such wheels. An instantaneous center of rotation target and a tangential velocity target are inputs to a wheel target system which sends the velocity target and a steering angle target for each drive wheel to a pseudo-velocity target system. The pseudo-velocity target system determines a pseudo-velocity target which is compared to a current pseudo-velocity to determine a pseudo-velocity error. The steering angle targets and the steering angles are inputs to a steering angle control system which outputs to the steering angle encoders, which measure the steering angles. The pseudo-velocity error, the rate of change of the pseudo-velocity error, and the wheel slip between each pair of drive wheels are used to calculate intermediate control variables which, along with the steering angle targets are used to calculate the torque to be applied at each wheel. The current distance traveled for each wheel is then calculated. The current wheel velocities and steering angle targets are used to calculate the cumulative and instantaneous wheel slip and the current pseudo-velocity. 6 figures.

  20. Error field optimization in DIII-D using extremum seeking control

    DOE PAGES

    Lanctot, M. J.; Olofsson, K. E. J.; Capella, M.; ...

    2016-06-03

    A closed-loop error field control algorithm is implemented in the Plasma Control System of the DIII-D tokamak and used to identify optimal control currents during a single plasma discharge. The algorithm, based on established extremum seeking control theory, exploits the link in tokamaks between maximizing the toroidal angular momentum and minimizing deleterious non-axisymmetric magnetic fields. Slowly-rotating n = 1 fields (the dither), generated by external coils, are used to perturb the angular momentum, monitored in real-time using a charge-exchange spectroscopy diagnostic. Simple signal processing of the rotation measurements extracts information about the rotation gradient with respect to the control coil currents. This information is used to converge the control coil currents to a point that maximizes the toroidal angular momentum. The technique is well-suited for multi-coil, multi-harmonic error field optimizations in disruption sensitive devices as it does not require triggering locked tearing modes or plasma current disruptions. Control simulations highlight the importance of the initial search direction on the rate of the convergence, and identify future algorithm upgrades that may allow more rapid convergence that projects to convergence times in ITER on the order of tens of seconds.
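
    The control loop follows the standard extremum-seeking recipe: inject a slow sinusoidal dither into the coil current, demodulate the measured rotation against the dither to estimate the local gradient, and integrate that estimate to climb toward the optimum. A minimal single-coil sketch, with an assumed quadratic dependence of angular momentum on coil current standing in for the plasma response:

```python
import math

def extremum_seek(momentum_of, I0=0.0, dither_amp=0.1, dither_freq=1.0,
                  gain=0.5, dt=0.01, steps=20000):
    """Gradient-free maximization of momentum_of(I) by extremum seeking."""
    I_slow, grad_est = I0, 0.0
    for n in range(steps):
        t = n * dt
        dither = dither_amp * math.sin(2.0 * math.pi * dither_freq * t)
        y = momentum_of(I_slow + dither)          # measured rotation/momentum
        # demodulation: the component of y in phase with the dither ~ gradient
        grad_est += dt * (y * math.sin(2.0 * math.pi * dither_freq * t) - grad_est)
        I_slow += dt * gain * grad_est            # slow ascent of the gradient
    return I_slow

# Stand-in plasma response: rotation peaks when the coil current equals 2.3 (a.u.)
print(extremum_seek(lambda I: -(I - 2.3) ** 2))
```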

  1. Dual-wavelengths photoacoustic temperature measurement

    NASA Astrophysics Data System (ADS)

    Liao, Yu; Jian, Xiaohua; Dong, Fenglin; Cui, Yaoyao

    2017-02-01

    Thermal therapy is an approach to cancer treatment that heats local tissue to kill tumor cells, which requires highly sensitive temperature monitoring during therapy. Current clinical methods for temperature measurement, such as fMRI, near-infrared, or ultrasound, still have limitations in penetration depth or sensitivity. Photoacoustic temperature sensing is a newly developed temperature sensing method with potential applications in thermal therapy; it usually employs a single-wavelength laser for signal generation and temperature detection. Because of system disturbances, including laser intensity, ambient temperature, and the complexity of the target, random measurement errors are unavoidable. To solve these problems, we propose in this paper a new method of photoacoustic temperature sensing that uses two wavelengths to reduce random error and increase measurement accuracy. First, a brief theoretical analysis is presented. Then, in the experiment, a temperature measurement resolution of about 1° in the range of 23-48° in ex vivo pig blood was achieved, and a clear decrease of the absolute error was observed, averaging 1.7° in the single-wavelength mode and nearly 1° in the dual-wavelength mode. The obtained results indicate that dual-wavelength photoacoustic sensing of temperature is able to reduce random error and improve measurement accuracy, and could be a more effective method for photoacoustic temperature sensing in the thermal therapy of tumors.
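
    One way to see the benefit of the second wavelength is to treat each wavelength as an independent estimator of the same temperature and average them, so uncorrelated noise shrinks by roughly 1/sqrt(2). The sketch below assumes simple linear amplitude-temperature calibrations purely for illustration; it is not the authors' signal model.

```python
import numpy as np

rng = np.random.default_rng(0)

def temp_from_amplitude(p, slope, offset):
    """Map a photoacoustic amplitude to temperature via a linear calibration."""
    return (p - offset) / slope

# Assumed calibrations for the two wavelengths (illustrative numbers)
true_T = 37.0
p1 = 0.020 * true_T + 0.10 + rng.normal(0.0, 0.01, 1000)   # amplitude + noise
p2 = 0.015 * true_T + 0.30 + rng.normal(0.0, 0.01, 1000)

T1 = temp_from_amplitude(p1, 0.020, 0.10)    # single-wavelength estimates
T2 = temp_from_amplitude(p2, 0.015, 0.30)
T_dual = 0.5 * (T1 + T2)                     # dual-wavelength estimate

print(np.std(T1), np.std(T2), np.std(T_dual))   # the dual estimate is less noisy
```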

  2. Multiparameter measurement utilizing poloidal polarimeter for burning plasma reactor

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Imazawa, Ryota; Kawano, Yasunori; Itami, Kiyoshi

    2014-08-21

    The authors have carried out basic and applied research on the polarimeter for plasma diagnostics. Recently, the authors have proposed an application of multiparameter measurement (magnetic field B, electron density n_e, electron temperature T_e, and total plasma current I_p) utilizing the polarimeter in future fusion reactors. In these proceedings, a brief review of the polarimeter, the principle of the multiparameter measurement, and the progress of the research on the multiparameter measurement are given. The measurement method that the authors have proposed is suitable for the reactor for the following reasons: multiple parameters can be obtained from a small number of diagnostics, the proposed method does not depend on time history, and the far-infrared light utilized by the polarimeter is less sensitive to degradation of optical components. Taking the measuring error into account, a performance assessment of the proposed method was carried out. Assuming that the errors Δθ and Δε were 0.1° and 0.6°, respectively, the errors of the reconstructed j_φ, n_e and T_e were 12%, 8.4% and 31%, respectively. This study has shown that the reconstruction error can be decreased by increasing the number of wavelengths of the probing laser and by increasing the number of viewing chords. For example, by increasing the number of viewing chords to forty-five, the errors of j_φ, n_e and T_e were reduced to 4.4%, 4.4%, and 17%, respectively.

  3. Unattended Dual Current Monitor (UDCM) FY17 Summary Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Newell, Matthew R.

    The UDCM is a low current measurement device designed to record pico-amp to micro-amp currents from radiation detectors. The UDCM is the planned replacement for the IAEA’s obsolete MiniGRAND data acquisition module. Preliminary testing of the UDCM at the IAEA facilities led to the following recommendations from the IAEA: increase the measurement range (lower the range by a factor of 5 and the upper range by 2 orders of magnitude); modifications to the web interface; increase the programmable acquisition time to 3600 s; develop a method to handle current offsets and negative currents; error checking when writing data to the uSD card; and writing BID files along with the currently stored BI0 files.

  4. Verification of Satellite Rainfall Estimates from the Tropical Rainfall Measuring Mission over Ground Validation Sites

    NASA Astrophysics Data System (ADS)

    Fisher, B. L.; Wolff, D. B.; Silberstein, D. S.; Marks, D. M.; Pippitt, J. L.

    2007-12-01

    The Tropical Rainfall Measuring Mission's (TRMM) Ground Validation (GV) Program was originally established with the principal long-term goal of determining the random errors and systematic biases stemming from the application of the TRMM rainfall algorithms. The GV Program has been structured around two validation strategies: 1) determining the quantitative accuracy of the integrated monthly rainfall products at GV regional sites over large areas of about 500 km2 using integrated ground measurements and 2) evaluating the instantaneous satellite and GV rain rate statistics at spatio-temporal scales compatible with the satellite sensor resolution (Simpson et al. 1988, Thiele 1988). The GV Program has continued to evolve since the launch of the TRMM satellite on November 27, 1997. This presentation will discuss current GV methods of validating TRMM operational rain products in conjunction with ongoing research. The challenge facing TRMM GV has been how to best utilize rain information from the GV system to infer the random and systematic error characteristics of the satellite rain estimates. A fundamental problem of validating space-borne rain estimates is that the true mean areal rainfall is an ideal, scale-dependent parameter that cannot be directly measured. Empirical validation uses ground-based rain estimates to determine the error characteristics of the satellite-inferred rain estimates, but ground estimates also incur measurement errors and contribute to the error covariance. Furthermore, sampling errors, associated with the discrete, discontinuous temporal sampling by the rain sensors aboard the TRMM satellite, become statistically entangled in the monthly estimates. Sampling errors complicate the task of linking biases in the rain retrievals to the physics of the satellite algorithms. The TRMM Satellite Validation Office (TSVO) has made key progress towards effective satellite validation. For disentangling the sampling and retrieval errors, TSVO has developed and applied a methodology that statistically separates the two error sources. Using TRMM monthly estimates and high-resolution radar and gauge data, this method has been used to estimate sampling and retrieval error budgets over GV sites. More recently, a multi- year data set of instantaneous rain rates from the TRMM microwave imager (TMI), the precipitation radar (PR), and the combined algorithm was spatio-temporally matched and inter-compared to GV radar rain rates collected during satellite overpasses of select GV sites at the scale of the TMI footprint. The analysis provided a more direct probe of the satellite rain algorithms using ground data as an empirical reference. TSVO has also made significant advances in radar quality control through the development of the Relative Calibration Adjustment (RCA) technique. The RCA is currently being used to provide a long-term record of radar calibration for the radar at Kwajalein, a strategically important GV site in the tropical Pacific. The RCA technique has revealed previously undetected alterations in the radar sensitivity due to engineering changes (e.g., system modifications, antenna offsets, alterations of the receiver, or the data processor), making possible the correction of the radar rainfall measurements and ensuring the integrity of nearly a decade of TRMM GV observations and resources.

  5. Problem of Mistakes in Databases, Processing and Interpretation of Observations of the Sun. I.

    NASA Astrophysics Data System (ADS)

    Lozitska, N. I.

    In observational databases, unnoticed mistakes and misprints can occur at any stage of observation, preparation, and processing. Detecting errors is complicated by the fact that the work of the observer, the database compiler, and the researcher is divided among different people. Data acquisition from a spacecraft requires a larger number of researchers than ground-based observations do. As a result, the probability of errors increases. Keeping track of errors at each stage is very difficult, so we use cross-comparison of data from different sources. We revealed some misprints in the typographic and digital results of sunspot group area measurements.

  6. Evaluation of Two Crew Module Boilerplate Tests Using Newly Developed Calibration Metrics

    NASA Technical Reports Server (NTRS)

    Horta, Lucas G.; Reaves, Mercedes C.

    2012-01-01

    The paper discusses an application of multi-dimensional calibration metrics to evaluate pressure data from water drop tests of the Max Launch Abort System (MLAS) crew module boilerplate. Specifically, three metrics are discussed: 1) a metric to assess the probability of enveloping the measured data with the model, 2) a multi-dimensional orthogonality metric to assess model adequacy between test and analysis, and 3) a prediction error metric used to guide sensor placement so as to minimize pressure prediction errors. Data from similar (nearly repeated) capsule drop tests show significant variability in the measured pressure responses. When compared to the expected variability using model predictions, it is demonstrated that the measured variability cannot be explained by the model under the current uncertainty assumptions.

  7. Prediction Accuracy of Error Rates for MPTB Space Experiment

    NASA Technical Reports Server (NTRS)

    Buchner, S. P.; Campbell, A. B.; Davis, D.; McMorrow, D.; Petersen, E. L.; Stassinopoulos, E. G.; Ritter, J. C.

    1998-01-01

    This paper addresses the accuracy of radiation-induced upset-rate predictions in space using the results of ground-based measurements together with standard environmental and device models. The study is focused on two part types - 16 Mb NEC DRAMs (UPD4216) and 1 Kb SRAMs (AMD93L422) - both of which are currently in space on board the Microelectronics and Photonics Test Bed (MPTB). To date, ground-based measurements of proton-induced single event upset (SEU) cross sections as a function of energy have been obtained and combined with models of the proton environment to predict proton-induced error rates in space. The role played by uncertainties in the environmental models will be determined by comparing the modeled radiation environment with the actual environment measured aboard MPTB. Heavy-ion induced upsets have also been obtained from MPTB and will be compared with the "predicted" error rate following ground testing that will be done in the near future. These results should help identify sources of uncertainty in predictions of SEU rates in space.

  8. A dual-phantom system for validation of velocity measurements in stenosis models under steady flow.

    PubMed

    Blake, James R; Easson, William J; Hoskins, Peter R

    2009-09-01

    A dual-phantom system is developed for validation of velocity measurements in stenosis models. Pairs of phantoms with identical geometry and flow conditions are manufactured, one for ultrasound and one for particle image velocimetry (PIV). The PIV model is made from silicone rubber, and a new PIV fluid is made that matches the refractive index of silicone (1.41). Dynamic scaling was performed to correct for the increased viscosity of the PIV fluid compared with that of the ultrasound blood mimic. The degree of stenosis in the model pairs agreed to less than 1%. The velocities in the laminar flow region up to the peak velocity location agreed to within 15%, and the difference could be explained by errors in ultrasound velocity estimation. At low flow rates and in mild stenoses, good agreement was observed in the distal flow fields, excepting the maximum velocities. At high flow rates, there was considerable difference in velocities in the poststenosis flow field (maximum centreline differences of 30%), which would seem to represent real differences in hydrodynamic behavior between the two models. Sources of error included: variation of viscosity because of temperature (random error, which could account for differences of up to 7%); ultrasound velocity estimation errors (systematic errors); and geometry effects in each model, particularly because of imperfect connectors and corners (systematic errors, potentially affecting the inlet length and flow stability). The current system is best placed to investigate measurement errors in the laminar flow region rather than the poststenosis turbulent flow region.
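
    The dynamic scaling mentioned above amounts to matching the Reynolds number despite the more viscous PIV fluid: for identical geometry, the PIV flow rate must be scaled by the ratio of kinematic viscosities. A small sketch of that adjustment (the property values are placeholders, not those of the actual fluids):

```python
def matched_flow_rate(q_ultrasound, nu_blood_mimic, nu_piv_fluid):
    """Flow rate for the PIV phantom giving the same Reynolds number as the
    ultrasound phantom in identical geometry: Re ~ Q / nu, so
    Q_piv / Q_us = nu_piv / nu_us."""
    return q_ultrasound * nu_piv_fluid / nu_blood_mimic

# Placeholder kinematic viscosities (m^2/s): blood mimic ~4e-6, PIV fluid ~1e-5
print(matched_flow_rate(1.0, 4e-6, 1e-5))   # the PIV phantom needs 2.5x the flow
```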

  9. Reflectance calibration of focal plane array hyperspectral imaging system for agricultural and food safety applications

    NASA Astrophysics Data System (ADS)

    Lawrence, Kurt C.; Park, Bosoon; Windham, William R.; Mao, Chengye; Poole, Gavin H.

    2003-03-01

    A method to calibrate a pushbroom hyperspectral imaging system for "near-field" applications in agricultural and food safety has been demonstrated. The method consists of a modified geometric control point correction applied to a focal plane array to remove smile and keystone distortion from the system. Once an FPA correction was applied, single wavelength and distance calibrations were used to describe all points on the FPA. Finally, a percent reflectance calibration, applied on a pixel-by-pixel basis, was used for accurate measurements with the hyperspectral imaging system. The method was demonstrated with a stationary prism-grating-prism, pushbroom hyperspectral imaging system. For the system described, wavelength and distance calibrations were used to reduce the wavelength errors to <0.5 nm and distance errors to <0.01 mm (across the entrance slit width). The pixel-by-pixel percent reflectance calibration, which was performed at all wavelengths with dark current and 99% reflectance calibration-panel measurements, was verified with measurements on a certified gradient Spectralon panel with values ranging from about 14% reflectance to 99% reflectance, with errors generally less than 5% at the mid-wavelength measurements. Results from the calibration method indicate that the hyperspectral imaging system has a usable range between 420 nm and 840 nm. Outside this range, errors increase significantly.
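
    The pixel-by-pixel percent reflectance calibration described is essentially a flat-field correction against the dark current and the 99% panel. A minimal sketch, assuming dark and white reference frames have already been acquired (array names and values are illustrative):

```python
import numpy as np

def percent_reflectance(sample, dark, white, panel_reflectance=0.99):
    """Convert raw hyperspectral counts to percent reflectance pixel by pixel:
    subtract the dark current, normalize by the white reference panel, and
    scale by the panel's certified reflectance."""
    corrected = (sample.astype(float) - dark) / np.clip(white - dark, 1e-9, None)
    return 100.0 * panel_reflectance * corrected

# Illustrative 3-pixel x 2-band example
dark = np.array([[100, 110], [105, 108], [98, 102]], dtype=float)
white = np.array([[3900, 3700], [3950, 3720], [3890, 3680]], dtype=float)
sample = np.array([[2000, 1900], [2100, 1950], [1980, 1890]], dtype=float)
print(percent_reflectance(sample, dark, white))
```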

  10. [Characteristics of specifications of transportable inverter-type X-ray equipment].

    PubMed

    Yamamoto, Keiichi; Miyazaki, Shigeru; Asano, Hiroshi; Shinohara, Fuminori; Ishikawa, Mitsuo; Ide, Toshinori; Abe, Shinji; Negishi, Toru; Miyake, Hiroyuki; Imai, Yoshio; Okuaki, Tomoyuki

    2003-07-01

    Our X-ray systems study group measured and examined the characteristics of four transportable inverter-type X-ray units. X-ray tube voltage and X-ray tube current were measured with the measurement terminals provided with each unit. X-ray tube voltage, irradiation time, and dose were measured with a non-invasive X-ray tube voltage measuring device, and X-ray output was measured with a fluorescence meter. The items investigated were the reproducibility and linearity of X-ray output, the error of the preset X-ray tube voltage and X-ray tube current, and the X-ray tube voltage ripple percentage. The waveforms of X-ray tube voltage, X-ray tube current, and fluorescence intensity were recorded with an oscilloscope and analyzed on a personal computer. All of the units had preset errors of X-ray tube voltage and X-ray tube current that met JIS standards. The X-ray tube voltage ripple percentage of each unit showed the expected tendency to decrease as the X-ray tube voltage increased. Although the X-ray output reproducibility of unit A exceeded the JIS standard, the other units were within the JIS standard. Unit A required 40 ms for the X-ray tube current to reach the target value, and there was some X-ray output loss because of a trough in the X-ray tube current. Owing to the influence of the ripple in the X-ray tube current, the fluorescence waveform rippled in units B and C. Waveform analysis could not be performed for unit D because of aliasing in the recording device. The maximum X-ray tube current of transportable inverter-type X-ray equipment is as low as 10-20 mA, and the irradiation time for chest X-ray photography exceeds 0.1 s. However, improvement of the radiographic technique is required for patients who cannot move their bodies or hold their breath. It is necessary to make the irradiation time of the units shorter for remote medical treatment.

  11. Tracking and shape errors measurement of concentrating heliostats

    NASA Astrophysics Data System (ADS)

    Coquand, Mathieu; Caliot, Cyril; Hénault, François

    2017-09-01

    In solar tower power plants, factors such as tracking accuracy, facet misalignment and surface shape errors of concentrating heliostats are of prime importance for the efficiency of the system. At industrial scale, one critical issue is the time and effort required to adjust the different mirrors of the faceted heliostats, which could take several months using current techniques. Thus, methods enabling quick adjustment of a field with a huge number of heliostats are essential for the rise of solar tower technology. This communication describes a new method for heliostat characterization that makes use of four cameras located near the solar receiver and simultaneously recording images of the sun reflected by the optical surfaces. From knowledge of a measured sun profile, data processing of the acquired images allows reconstructing the slope and shape errors of the heliostats, including tracking and canting errors. The mathematical basis of this shape reconstruction process is explained comprehensively. Numerical simulations demonstrate that the measurement accuracy of this "backward-gazing method" is compliant with the requirements of solar concentrating optics. Finally, we present our first experimental results obtained at the THEMIS experimental solar tower plant in Targasonne, France.

  12. Subnanosecond GPS-based clock synchronization and precision deep-space tracking

    NASA Technical Reports Server (NTRS)

    Dunn, C. E.; Lichten, S. M.; Jefferson, D. C.; Border, J. S.

    1992-01-01

    Interferometric spacecraft tracking is accomplished by the Deep Space Network (DSN) by comparing the arrival time of electromagnetic spacecraft signals at ground antennas separated by baselines on the order of 8000 km. Clock synchronization errors within and between DSN stations directly impact the attainable tracking accuracy, with a 0.3-nsec error in clock synchronization resulting in an 11-nrad angular position error. This level of synchronization is currently achieved by observing a quasar which is angularly close to the spacecraft just after the spacecraft observations. By determining the differential arrival times of the random quasar signal at the stations, clock offsets and propagation delays within the atmosphere and within the DSN stations are calibrated. Recent developments in time transfer techniques may allow medium accuracy (50-100 nrad) spacecraft tracking without near-simultaneous quasar-based calibrations. Solutions are presented for a worldwide network of Global Positioning System (GPS) receivers in which the formal errors for DSN clock offset parameters are less than 0.5 nsec. Comparisons of clock rate offsets derived from GPS measurements and from very long baseline interferometry (VLBI), as well as the examination of clock closure, suggest that these formal errors are a realistic measure of GPS-based clock offset precision and accuracy. Incorporating GPS-based clock synchronization measurements into a spacecraft differential ranging system would allow tracking without near-simultaneous quasar observations. The impact on individual spacecraft navigation-error sources due to elimination of quasar-based calibrations is presented. System implementation, including calibration of station electronic delays, is discussed.
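
    The quoted sensitivity to clock synchronization follows directly from geometry: a timing error times the speed of light is a path-delay error, and dividing by the baseline gives the angular error. A short check of the numbers in the abstract:

```python
C = 299_792_458.0   # speed of light, m/s

def angular_error_nrad(clock_error_s, baseline_m):
    """Angular position error (nrad) produced by a clock synchronization error
    between two stations separated by the given baseline."""
    return clock_error_s * C / baseline_m * 1e9

# A 0.3 ns synchronization error over an 8000 km baseline -> about 11 nrad
print(angular_error_nrad(0.3e-9, 8.0e6))
```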

  13. Sub-nanosecond clock synchronization and precision deep space tracking

    NASA Technical Reports Server (NTRS)

    Dunn, Charles; Lichten, Stephen; Jefferson, David; Border, James S.

    1992-01-01

    Interferometric spacecraft tracking is accomplished at the NASA Deep Space Network (DSN) by comparing the arrival time of electromagnetic spacecraft signals to ground antennas separated by baselines on the order of 8000 km. Clock synchronization errors within and between DSN stations directly impact the attainable tracking accuracy, with a 0.3 ns error in clock synchronization resulting in an 11 nrad angular position error. This level of synchronization is currently achieved by observing a quasar which is angularly close to the spacecraft just after the spacecraft observations. By determining the differential arrival times of the random quasar signal at the stations, clock synchronization and propagation delays within the atmosphere and within the DSN stations are calibrated. Recent developments in time transfer techniques may allow medium accuracy (50-100 nrad) spacecraft observations without near-simultaneous quasar-based calibrations. Solutions are presented for a global network of GPS receivers in which the formal errors in clock offset parameters are less than 0.5 ns. Comparisons of clock rate offsets derived from GPS measurements and from very long baseline interferometry and the examination of clock closure suggest that these formal errors are a realistic measure of GPS-based clock offset precision and accuracy. Incorporating GPS-based clock synchronization measurements into a spacecraft differential ranging system would allow tracking without near-simultaneous quasar observations. The impact on individual spacecraft navigation error sources due to elimination of quasar-based calibrations is presented. System implementation, including calibration of station electronic delays, is discussed.

  14. Shunt resistance and saturation current determination in CdTe and CIGS solar cells. Part 2: application to experimental IV measurements and comparison with other methods

    NASA Astrophysics Data System (ADS)

    Rangel-Kuoppa, Victor-Tapio; Albor-Aguilera, María-de-Lourdes; Hérnandez-Vásquez, César; Flores-Márquez, José-Manuel; Jiménez-Olarte, Daniel; Sastré-Hernández, Jorge; González-Trujillo, Miguel-Ángel; Contreras-Puente, Gerardo-Silverio

    2018-04-01

    In this Part 2 of this series of articles, the procedure proposed in Part 1, namely a new technique for extracting the shunt resistance (R_sh) and saturation current (I_sat) from a current-voltage (I-V) measurement of a solar cell within the one-diode model, is applied to CdS-CdTe and CIGS-CdS solar cells. First, the Cheung method is used to obtain the series resistance (R_s) and the ideality factor n. Afterwards, procedures A and B proposed in Part 1 are used to obtain R_sh and I_sat. The procedure is compared with two other commonly used procedures. Better accuracy is obtained for the I-V curves simulated with the parameters extracted by our method. Also, the integral percentage errors of the simulated I-V curves using the method proposed in this study are one order of magnitude smaller than the integral percentage errors using the other two methods.
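
    For context, the sketch below shows the one-diode model that underlies the extraction, together with a crude shunt-resistance estimate from the inverse slope of the I-V curve near short circuit. This is a generic illustration of the model only, with placeholder cell parameters, and is not the extraction procedure proposed by the authors.

```python
import numpy as np

def one_diode_current(V, I_ph, I_sat, n, R_s, R_sh, T=300.0):
    """One-diode model solved by fixed-point iteration:
    I = I_ph - I_sat*(exp((V + I*R_s)/(n*Vt)) - 1) - (V + I*R_s)/R_sh."""
    Vt = 8.617e-5 * T                     # thermal voltage kT/q in volts
    I = np.zeros_like(V)
    for _ in range(200):
        I = I_ph - I_sat * (np.exp((V + I * R_s) / (n * Vt)) - 1.0) \
            - (V + I * R_s) / R_sh
    return I

# Illustrative cell: photocurrent 30 mA, R_s = 2 ohm, R_sh = 800 ohm
V = np.linspace(0.0, 0.2, 21)
I = one_diode_current(V, I_ph=0.030, I_sat=1e-9, n=1.5, R_s=2.0, R_sh=800.0)

# Crude shunt-resistance estimate: inverse slope near short circuit (V ~ 0)
slope = (I[1] - I[0]) / (V[1] - V[0])
print(-1.0 / slope)     # close to R_sh + R_s for this parameter set
```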

  15. Similarities in error processing establish a link between saccade prediction at baseline and adaptation performance.

    PubMed

    Wong, Aaron L; Shelhamer, Mark

    2014-05-01

    Adaptive processes are crucial in maintaining the accuracy of body movements and rely on error storage and processing mechanisms. Although classically studied with adaptation paradigms, evidence of these ongoing error-correction mechanisms should also be detectable in other movements. Despite this connection, current adaptation models are challenged when forecasting adaptation ability with measures of baseline behavior. On the other hand, we have previously identified an error-correction process present in a particular form of baseline behavior, the generation of predictive saccades. This process exhibits long-term intertrial correlations that decay gradually (as a power law) and are best characterized with the tools of fractal time series analysis. Since this baseline task and adaptation both involve error storage and processing, we sought to find a link between the intertrial correlations of the error-correction process in predictive saccades and the ability of subjects to alter their saccade amplitudes during an adaptation task. Here we find just such a relationship: the stronger the intertrial correlations during prediction, the more rapid the acquisition of adaptation. This reinforces the links found previously between prediction and adaptation in motor control and suggests that current adaptation models are inadequate to capture the complete dynamics of these error-correction processes. A better understanding of the similarities in error processing between prediction and adaptation might provide the means to forecast adaptation ability with a baseline task. This would have many potential uses in physical therapy and the general design of paradigms of motor adaptation. Copyright © 2014 the American Physiological Society.

  16. Induced electric currents in the Alaska oil pipeline measured by gradient, fluxgate, and SQUID magnetometers

    NASA Technical Reports Server (NTRS)

    Campbell, W. H.; Zimmerman, J. E.

    1979-01-01

    The field gradient method for observing the electric currents in the Alaska pipeline provided consistent values for both the fluxgate and SQUID methods of observation. These currents were linearly related to the regularly measured electric and magnetic field changes. Determinations of pipeline current were consistent with values obtained by a direct-connection, current-shunt technique at a pipeline site about 9.6 km away. The gradient method has the distinct advantage of portability and buried-pipe capability. Field gradients due to the pipe magnetization, geological features, or ionospheric source currents do not seem to contribute a measurable error to such pipe current determination. The SQUID gradiometer is inherently sensitive enough to detect very small currents in a linear conductor at 10 meters, or conversely, to detect small currents of one ampere or more at relatively great distances. It is fairly straightforward to achieve imbalance less than one part in ten thousand, and with extreme care, one part in one million or better.
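
    The gradient method infers the pipe current from how the field of a long straight conductor falls off with distance: measuring the field at two known distances and differencing removes any spatially uniform background. A small sketch of that inversion (the distances and field values are illustrative):

```python
import math

MU_0 = 4.0e-7 * math.pi   # vacuum permeability, T*m/A

def line_current_from_gradient(B1, B2, r1, r2):
    """Current (A) in a long straight conductor from fields B1, B2 (tesla)
    measured at perpendicular distances r1 < r2 (metres).
    B(r) = mu0*I/(2*pi*r), so B1 - B2 isolates the conductor's contribution
    while cancelling a uniform background field."""
    return 2.0 * math.pi * r1 * r2 * (B1 - B2) / (MU_0 * (r2 - r1))

# Illustrative: 2.0 uT at 5 m and 1.0 uT at 10 m from a buried pipe -> 50 A
print(line_current_from_gradient(2.0e-6, 1.0e-6, 5.0, 10.0))
```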

  17. Quantifying acoustic doppler current profiler discharge uncertainty: A Monte Carlo based tool for moving-boat measurements

    USGS Publications Warehouse

    Mueller, David S.

    2017-01-01

    This paper presents a method using Monte Carlo simulations for assessing uncertainty of moving-boat acoustic Doppler current profiler (ADCP) discharge measurements using a software tool known as QUant, which was developed for this purpose. Analysis was performed on 10 data sets from four Water Survey of Canada gauging stations in order to evaluate the relative contribution of a range of error sources to the total estimated uncertainty. The factors that differed among data sets included the fraction of unmeasured discharge relative to the total discharge, flow nonuniformity, and operator decisions about instrument programming and measurement cross section. As anticipated, it was found that the estimated uncertainty is dominated by uncertainty of the discharge in the unmeasured areas, highlighting the importance of appropriate selection of the site, the instrument, and the user inputs required to estimate the unmeasured discharge. The main contributor to uncertainty was invalid data, but spatial inhomogeneity in water velocity and bottom-track velocity also contributed, as did variation in the edge velocity, uncertainty in the edge distances, edge coefficients, and the top and bottom extrapolation methods. To a lesser extent, spatial inhomogeneity in the bottom depth also contributed to the total uncertainty, as did uncertainty in the ADCP draft at shallow sites. The estimated uncertainties from QUant can be used to assess the adequacy of standard operating procedures. They also provide quantitative feedback to the ADCP operators about the quality of their measurements, indicating which parameters are contributing most to uncertainty, and perhaps even highlighting ways in which uncertainty can be reduced. Additionally, QUant can be used to account for self-dependent error sources such as heading errors, which are a function of heading. The results demonstrate the importance of a Monte Carlo method tool such as QUant for quantifying random and bias errors when evaluating the uncertainty of moving-boat ADCP measurements.
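
    The Monte Carlo idea behind QUant can be illustrated generically: perturb each uncertain input within an assumed error distribution, recompute the discharge many times, and take the spread of the results as the measurement uncertainty. The discharge function and error magnitudes below are deliberately simplified stand-ins, not the actual QUant model.

```python
import numpy as np

rng = np.random.default_rng(1)

def total_discharge(measured_q, left_edge_q, right_edge_q, unmeasured_frac):
    """Toy discharge model: measured portion plus edges, scaled up to account
    for the unmeasured fraction of the cross section."""
    return (measured_q + left_edge_q + right_edge_q) / (1.0 - unmeasured_frac)

def monte_carlo_uncertainty(n=20000):
    """Propagate assumed input uncertainties to the total discharge."""
    measured = rng.normal(100.0, 2.0, n)       # m^3/s, ~2% random error
    left = rng.normal(3.0, 0.9, n)             # edge estimates, ~30% error
    right = rng.normal(4.0, 1.2, n)
    unmeasured = rng.normal(0.25, 0.05, n)     # unmeasured fraction
    q = total_discharge(measured, left, right, unmeasured)
    return q.mean(), 100.0 * q.std() / q.mean()

mean_q, rel_unc_pct = monte_carlo_uncertainty()
print(mean_q, rel_unc_pct)
```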

  18. Improving laboratory data entry quality using Six Sigma.

    PubMed

    Elbireer, Ali; Le Chasseur, Julie; Jackson, Brooks

    2013-01-01

    The Uganda Makerere University provides clinical laboratory support to over 70 clients in Uganda. With increased volume, manual data entry errors have steadily increased, prompting laboratory managers to employ the Six Sigma method to evaluate and reduce their problems. The purpose of this paper is to describe how laboratory data entry quality was improved by using Six Sigma. The Six Sigma Quality Improvement (QI) project team followed a sequence of steps, starting with defining project goals, measuring data entry errors to assess current performance, analyzing data and determining data-entry error root causes. Finally the team implemented changes and control measures to address the root causes and to maintain improvements. Establishing the Six Sigma project required considerable resources and maintaining the gains requires additional personnel time and dedicated resources. After initiating the Six Sigma project, there was a 60.5 percent reduction in data entry errors from 423 errors a month (i.e. 4.34 Six Sigma) in the first month, down to an average 166 errors/month (i.e. 4.65 Six Sigma) over 12 months. The team estimated the average cost of identifying and fixing a data entry error to be $16.25 per error. Thus, reducing errors by an average of 257 errors per month over one year has saved the laboratory an estimated $50,115 a year. The Six Sigma QI project provides a replicable framework for Ugandan laboratory staff and other resource-limited organizations to promote quality environment. Laboratory staff can deliver excellent care at a lower cost, by applying QI principles. This innovative QI method of reducing data entry errors in medical laboratories may improve the clinical workflow processes and make cost savings across the health care continuum.

  19. The Influence of the Terrestrial Reference Frame on Studies of Sea Level Change

    NASA Astrophysics Data System (ADS)

    Nerem, R. S.; Bar-Sever, Y. E.; Haines, B. J.; Desai, S.; Heflin, M. B.

    2015-12-01

    The terrestrial reference frame (TRF) provides the foundation for the accurate monitoring of sea level using both ground-based (tide gauges) and space-based (satellite altimetry) techniques. For the latter, tide gauges are also used to monitor drifts in the satellite instruments over time. The accuracy of the terrestrial reference frame (TRF) is thus a critical component for both types of sea level measurements. The TRF is central to the formation of geocentric sea-surface height (SSH) measurements from satellite altimeter data. The computed satellite orbits are linked to a particular TRF via the assumed locations of the ground-based tracking systems. The manner in which TRF errors are expressed in the orbit solution (and thus SSH) is not straightforward, and depends on the models of the forces underlying the satellite's motion. We discuss this relationship, and provide examples of the systematic TRF-induced errors in the altimeter derived sea-level record. The TRF is also crucial to the interpretation of tide-gauge measurements, as it enables the separation of vertical land motion from volumetric changes in the water level. TRF errors affect tide gauge measurements through GNSS estimates of the vertical land motion at each tide gauge. This talk will discuss the current accuracy of the TRF and how errors in the TRF impact both satellite altimeter and tide gauge sea level measurements. We will also discuss simulations of how the proposed Geodetic Reference Antenna in SPace (GRASP) satellite mission could reduce these errors and revolutionize how reference frames are computed in general.

  20. Frequency analysis of DC tolerant current transformers

    NASA Astrophysics Data System (ADS)

    Mlejnek, P.; Kaspar, P.

    2013-09-01

    This article deals with the wide-frequency-range behaviour of DC tolerant current transformers that are usually used in modern static energy meters. In this application, current transformers must comply with European and International Standards in their accuracy and DC tolerance. Therefore, linear DC tolerant current transformers and double-core current transformers are used in this field. More details about the problems of these particular types of transformers can be found in our previous works. Although these transformers are designed mainly for the power distribution network frequency (50/60 Hz), it is interesting to understand their behaviour over a wider frequency range. Based on this knowledge, new generations of energy meters that also measure the quality of electric energy can be produced. Such meters allow better measurement of the consumption of nonlinear loads and of non-sinusoidal voltage and current sources such as solar cells or fuel cells. The actual power consumption in such energy meters is determined from the individual harmonic components of current and voltage. We measured the phase and ratio errors, which are the most important parameters of current transformers, to characterize several samples of both types.

  1. Improving LADCP Velocity Profiles with External Attitude Sensors

    NASA Astrophysics Data System (ADS)

    Thurnherr, A. M.; Goszczko, I.

    2016-12-01

    Data collected with Acoustic Doppler Current Profilers installed on CTD rosettes and lowered through the water column (LADCP systems) are routinely used to derive full-depth profiles of ocean velocity. In addition to the uncertainties arising from random noise in the along-beam velocity measurements, LADCP-derived velocities are commonly contaminated by bias errors due to imperfectly measured instrument attitude (pitch, roll and heading). Of particular concern are the heading measurements, because it is not usually feasible to calibrate the internal ADCP compasses with the instruments installed on a CTD rosette, away from the magnetic disturbances of the ship as well as the current-carrying winch wire. Heading data from dual-headed LADCP systems, which consist of upward- and downward-pointing ADCPs installed on the same rosette, commonly indicate heading-dependent compass errors with amplitudes exceeding 10 degrees. In an attempt to reduce LADCP velocity errors, over 200 full-depth profiles were collected during several recent projects, including GO-SHIP, DIMES and ECOGIG, with an inexpensive (<$200) external magnetometer/accelerometer package. The resulting data permit full compass calibrations (for both hard- and soft-iron effects) from in-situ profile data and yield improved pitch and roll measurements. Results indicate greatly reduced inconsistencies between the data from the two ADCPs (horizontal-velocity processing residuals), as well as smaller biases in vertical-velocity (w) measurements. In addition, the external magnetometer package allows processing of some LADCP data collected in regions where the horizontal magnitude of the Earth's magnetic field is insufficient for the ADCPs' internal compasses to work at all.

  2. Determination of the precision error of the pulmonary artery thermodilution catheter using an in vitro continuous flow test rig.

    PubMed

    Yang, Xiao-Xing; Critchley, Lester A; Joynt, Gavin M

    2011-01-01

    Thermodilution cardiac output using a pulmonary artery catheter is the reference method against which all new methods of cardiac output measurement are judged. However, thermodilution lacks precision and has a quoted precision error of ± 20%. There is uncertainty about its true precision and this causes difficulty when validating new cardiac output technology. Our aim in this investigation was to determine the current precision error of thermodilution measurements. A test rig through which water circulated at different constant rates with ports to insert catheters into a flow chamber was assembled. Flow rate was measured by an externally placed transonic flowprobe and meter. The meter was calibrated by timed filling of a cylinder. Arrow and Edwards 7Fr thermodilution catheters, connected to a Siemens SC9000 cardiac output monitor, were tested. Thermodilution readings were made by injecting 5 mL of ice-cold water. Precision error was divided into random and systematic components, which were determined separately. Between-readings (random) variability was determined for each catheter by taking sets of 10 readings at different flow rates. Coefficient of variation (CV) was calculated for each set and averaged. Between-catheter systems (systematic) variability was derived by plotting calibration lines for sets of catheters. Slopes were used to estimate the systematic component. Performances of 3 cardiac output monitors were compared: Siemens SC9000, Siemens Sirecust 1261, and Philips MP50. Five Arrow and 5 Edwards catheters were tested using the Siemens SC9000 monitor. Flow rates between 0.7 and 7.0 L/min were studied. The CV (random error) for Arrow was 5.4% and for Edwards was 4.8%. The random precision error was ± 10.0% (95% confidence limits). CV (systematic error) was 5.8% and 6.0%, respectively. The systematic precision error was ± 11.6%. The total precision error of a single thermodilution reading was ± 15.3% and ± 13.0% for triplicate readings. Precision error increased by 45% when using the Sirecust monitor and 100% when using the Philips monitor. In vitro testing of pulmonary artery catheters enabled us to measure both the random and systematic error components of thermodilution cardiac output measurement, and thus calculate the precision error. Using the Siemens monitor, we established a precision error of ± 15.3% for single and ± 13.0% for triplicate reading, which was similar to the previous estimate of ± 20%. However, this precision error was significantly worsened by using the Sirecust and Philips monitors. Clinicians should recognize that the precision error of thermodilution cardiac output is dependent on the selection of catheter and monitor model.
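
    The quoted totals follow from a root-sum-square combination of the two components, assuming they are independent and that averaging three injections reduces only the random component by the square root of three (that reduction is our assumption, consistent with averaging independent readings).

    ```python
    import math

    random_err = 10.0      # % (95% limits), between-readings component
    systematic_err = 11.6  # % (95% limits), between-catheter-systems component

    single = math.hypot(random_err, systematic_err)                     # ~15.3 %
    triplicate = math.hypot(random_err / math.sqrt(3), systematic_err)  # ~13.0 %
    print(f"single: ±{single:.1f}%, triplicate: ±{triplicate:.1f}%")
    ```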

  3. Suppression of Systematic Errors of Electronic Distance Meters for Measurement of Short Distances

    PubMed Central

    Braun, Jaroslav; Štroner, Martin; Urban, Rudolf; Dvořáček, Filip

    2015-01-01

    In modern industrial geodesy, high demands are placed on the final accuracy, with expectations currently falling below 1 mm. The measurement methodology and surveying instruments used have to be adjusted to meet these stringent requirements, especially the total stations as the most often used instruments. A standard deviation of the measured distance is the accuracy parameter, commonly between 1 and 2 mm. This parameter is often discussed in conjunction with the determination of the real accuracy of measurements at very short distances (5–50 m) because it is generally known that this accuracy cannot be increased by simply repeating the measurement because a considerable part of the error is systematic. This article describes the detailed testing of electronic distance meters to determine the absolute size of their systematic errors, their stability over time, their repeatability and the real accuracy of their distance measurement. Twenty instruments (total stations) have been tested, and more than 60,000 distances in total were measured to determine the accuracy and precision parameters of the distance meters. Based on the experiments’ results, calibration procedures were designed, including a special correction function for each instrument, whose usage reduces the standard deviation of the measurement of distance by at least 50%. PMID:26258777

  4. Suppression of Systematic Errors of Electronic Distance Meters for Measurement of Short Distances.

    PubMed

    Braun, Jaroslav; Štroner, Martin; Urban, Rudolf; Dvořáček, Filip

    2015-08-06

    In modern industrial geodesy, high demands are placed on the final accuracy, with expectations currently falling below 1 mm. The measurement methodology and surveying instruments used have to be adjusted to meet these stringent requirements, especially the total stations as the most often used instruments. A standard deviation of the measured distance is the accuracy parameter, commonly between 1 and 2 mm. This parameter is often discussed in conjunction with the determination of the real accuracy of measurements at very short distances (5-50 m) because it is generally known that this accuracy cannot be increased by simply repeating the measurement because a considerable part of the error is systematic. This article describes the detailed testing of electronic distance meters to determine the absolute size of their systematic errors, their stability over time, their repeatability and the real accuracy of their distance measurement. Twenty instruments (total stations) have been tested, and more than 60,000 distances in total were measured to determine the accuracy and precision parameters of the distance meters. Based on the experiments' results, calibration procedures were designed, including a special correction function for each instrument, whose usage reduces the standard deviation of the measurement of distance by at least 50%.
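
    A minimal sketch of how a per-instrument correction function of the kind described above could be applied, assuming the systematic error has already been determined on a reference baseline at a few short distances; the calibration values below are hypothetical, not the authors' data.

    ```python
    import numpy as np

    # Hypothetical calibration of one distance meter: systematic error (mm)
    # determined on a reference baseline at several short distances (m).
    cal_dist_m = np.array([5.0, 10.0, 20.0, 30.0, 40.0, 50.0])
    cal_err_mm = np.array([0.6, 0.4, -0.2, -0.5, -0.3, 0.1])

    def corrected_distance(measured_m):
        """Subtract the interpolated systematic error from a raw reading."""
        return measured_m - np.interp(measured_m, cal_dist_m, cal_err_mm) / 1000.0

    print(corrected_distance(17.3))   # raw 17.3 m reading, corrected by ~0.04 mm here
    ```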

  5. The impact of reflectivity correction and conversion methods to improve precipitation estimation by weather radar for an extreme low-land Mesoscale Convective System

    NASA Astrophysics Data System (ADS)

    Hazenberg, Pieter; Leijnse, Hidde; Uijlenhoet, Remko

    2014-05-01

    Between 25 and 27 August 2010 a long-duration mesoscale convective system was observed above the Netherlands. For most of the country this led to over 15 hours of near-continuous precipitation, which resulted in total event accumulations exceeding 150 mm in the eastern part of the Netherlands. Such accumulations belong to the largest sums ever recorded in this country and gave rise to local flooding. Measuring precipitation by weather radar within such mesoscale convective systems is known to be a challenge, since measurements are affected by multiple sources of error. For the current event the operational weather radar rainfall product only estimated about 30% of the actual amount of precipitation as measured by rain gauges. In the current presentation we will try to identify what gave rise to such large underestimations. In general, weather radar measurement errors can be subdivided into two groups: 1) errors affecting the volumetric reflectivity measurements, and 2) errors related to the conversion of reflectivity values into rainfall intensity and attenuation estimates. To correct for the first group of errors, the quality of the weather radar reflectivity data was improved by successively correcting for 1) clutter and anomalous propagation, 2) radar calibration, 3) wet radome attenuation, 4) signal attenuation and 5) the vertical profile of reflectivity. Such consistent corrections are generally not performed by operational meteorological services. Results show a large improvement in the quality of the precipitation data; however, still only ~65% of the actual observed accumulations were estimated. To further improve the quality of the precipitation estimates, the second group of errors is corrected for by making use of disdrometer measurements taken in close vicinity of the radar. Based on these data the parameters of a normalized drop size distribution are estimated for the total event as well as for each precipitation type separately (convective, stratiform and undefined). These are then used to obtain coherent parameter sets for the radar reflectivity-rainfall rate (Z-R) and radar reflectivity-attenuation (Z-k) relationships, specifically applicable to this event. By applying a single parameter set to correct for both sources of errors, the quality of the rainfall product improves further, leading to >80% of the observed accumulations. However, differentiating between precipitation types gives no better results than using the operational relationships. This leads to the question: how representative are local disdrometer observations for correcting large-scale weather radar measurements? In order to tackle this question a Monte Carlo approach was used to generate >10000 sets of the normalized drop size distribution parameters and to assess their impact on the estimated precipitation amounts. Results show that a large number of parameter sets result in improved precipitation estimates by the weather radar that closely resemble observations. However, these optimal sets vary considerably compared to those obtained from the local disdrometer measurements.
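
    A minimal sketch of the reflectivity-to-rain-rate conversion at the heart of the second error group, assuming a power-law Z-R relation; the coefficients below are the classic Marshall-Palmer values, not the event-specific parameters derived from the disdrometer data in the study.

    ```python
    import numpy as np

    def reflectivity_to_rain_rate(dbz, a=200.0, b=1.6):
        """Convert reflectivity (dBZ) to rain rate (mm/h) by inverting Z = a * R**b."""
        z_linear = 10.0 ** (np.asarray(dbz) / 10.0)   # dBZ -> mm^6 m^-3
        return (z_linear / a) ** (1.0 / b)

    # Example: a 40 dBZ echo corresponds to roughly 11.5 mm/h with these coefficients
    print(reflectivity_to_rain_rate(40.0))
    ```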

  6. The Pot Calling the Kettle Black? A Comparison of Measures of Current Tobacco Use

    PubMed Central

    ROSENMAN, ROBERT

    2014-01-01

    Researchers often use the discrepancy between self-reported and biochemically assessed active smoking status to argue that self-reported smoking status is not reliable, ignoring the limitations of biochemically assessed measures and treating it as the gold standard in their comparisons. Here, we employ econometric techniques to compare the accuracy of self-reported and biochemically assessed current tobacco use, taking into account measurement errors with both methods. Our approach allows estimating and comparing the sensitivity and specificity of each measure without directly observing true smoking status. The results, robust to several alternative specifications, suggest that there is no clear reason to think that one measure dominates the other in accuracy. PMID:25587199

  7. Long-term academic stress increases the late component of error processing: an ERP study.

    PubMed

    Wu, Jianhui; Yuan, Yiran; Duan, Hongxia; Qin, Shaozheng; Buchanan, Tony W; Zhang, Kan; Zhang, Liang

    2014-05-01

    Exposure to long-term stress has a variety of consequences on the brain and cognition. Few studies have examined the influence of long-term stress on event related potential (ERP) indices of error processing. The current study investigated how long-term academic stress modulates the error related negativity (Ne or ERN) and the error positivity (Pe) components of error processing. Forty-one male participants undergoing preparation for a major academic examination and 20 non-exam participants completed a Go-NoGo task while ERP measures were collected. The exam group reported higher perceived stress levels and showed increased Pe amplitude compared with the non-exam group. Participants' rating of the importance of the exam was positively associated with the amplitude of Pe, but these effects were not found for the Ne/ERN. These results suggest that long-term academic stress leads to greater motivational assessment of and higher emotional response to errors. Copyright © 2014 Elsevier B.V. All rights reserved.

  8. Actualities and Development of Heavy-Duty CNC Machine Tool Thermal Error Monitoring Technology

    NASA Astrophysics Data System (ADS)

    Zhou, Zu-De; Gui, Lin; Tan, Yue-Gang; Liu, Ming-Yao; Liu, Yi; Li, Rui-Ya

    2017-09-01

    Thermal error monitoring technology is the key technological support for solving the thermal error problem of heavy-duty CNC (computer numerical control) machine tools. Currently, there are many review articles introducing thermal error research on CNC machine tools, but they mainly focus on the thermal issues in small and medium-sized CNC machine tools and seldom introduce thermal error monitoring technologies. This paper gives an overview of the research on the thermal error of CNC machine tools and emphasizes the study of the thermal error of heavy-duty CNC machine tools in three areas: the causes of thermal error in heavy-duty CNC machine tools, temperature monitoring technology, and thermal deformation monitoring technology. A new optical measurement technology called "fiber Bragg grating (FBG) distributed sensing technology" for heavy-duty CNC machine tools is introduced in detail. This technology forms an intelligent sensing and monitoring system for heavy-duty CNC machine tools. This paper fills a gap in the review literature, guiding the development of this industry field and opening up new areas of research on heavy-duty CNC machine tool thermal error.

  9. Error assessment of local tie vectors in space geodesy

    NASA Astrophysics Data System (ADS)

    Falkenberg, Jana; Heinkelmann, Robert; Schuh, Harald

    2014-05-01

    For the computation of the ITRF, the data of the geometric space-geodetic techniques at co-location sites are combined. The combination increases the redundancy and offers the possibility to utilize the strengths of each technique while mitigating their weaknesses. To enable the combination of co-located techniques, each technique needs to have a well-defined geometric reference point. The linking of the geometric reference points enables the combination of the technique-specific coordinates into a multi-technique site coordinate. The vectors between these reference points are called "local ties". Local ties are usually realized by local surveys of the distances and/or angles between the reference points. Identified temporal variations of the reference points are considered in the local tie determination only indirectly by assuming a mean position. Finally, the local ties measured in the local surveying network are to be transformed into the ITRF, the global geocentric equatorial coordinate system of the space-geodetic techniques. The current IERS procedure for the combination of the space-geodetic techniques includes the local tie vectors with an error floor of three millimeters plus a distance-dependent component. This error floor, however, significantly underestimates the real errors of local tie determination. To fulfill the GGOS goals of 1 mm position and 0.1 mm/yr velocity accuracy, the local ties will need to reach sub-mm accuracy, which is currently not achievable. To assess the local tie effects on ITRF computations, the error sources will be investigated so that they can be realistically assessed and accounted for. Hence, a reasonable estimate of all the errors included in the various local ties is needed. An appropriate estimate could also improve the separation of local tie errors from technique-specific error contributions to uncertainties and thus help assess the accuracy of the space-geodetic techniques. Our investigations concern the simulation of the error contribution of each component of the local tie definition and determination. A closer look into the models of reference point definition, of accessibility, of measurement, and of transformation is necessary to properly model the error of the local tie. The effect of temporal variations on the local ties will be studied as well. The transformation of the local survey into the ITRF can be assumed to be the largest error contributor, in particular the orientation of the local surveying network to the ITRF.

  10. A 'Quad-Disc' static pressure probe for measurement in adverse atmospheres - With a comparative review of static pressure probe designs

    NASA Astrophysics Data System (ADS)

    Nishiyama, Randall T.; Bedard, Alfred J., Jr.

    1991-09-01

    There are many areas of need for accurate measurements of atmospheric static pressure. These include observations of surface meteorology, airport altimeter settings, pressure distributions around buildings, moving measurement platforms, as well as basic measurements of fluctuating pressures in turbulence. Most of these applications require long-term observations in adverse environments (e.g., rain, dust, or snow). Currently, many pressure measurements are made, of necessity, within buildings, thus involving potential errors of several millibars in mean pressure during moderate winds, accompanied by large fluctuating pressures induced by the structure. In response to these needs, a 'Quad-Disk' pressure probe for continuous, outdoor monitoring purposes was designed which is inherently weather-protected. This Quad-Disk probe has the desirable features of omnidirectional response and small error in pitch. A review of past static pressure probes contrasts design approaches and capabilities.

  11. MERLIN: a Franco-German LIDAR space mission for atmospheric methane

    NASA Astrophysics Data System (ADS)

    Bousquet, P.; Ehret, G.; Pierangelo, C.; Marshall, J.; Bacour, C.; Chevallier, F.; Gibert, F.; Armante, R.; Crevoisier, C. D.; Edouart, D.; Esteve, F.; Julien, E.; Kiemle, C.; Alpers, M.; Millet, B.

    2017-12-01

    The Methane Remote Sensing Lidar Mission (MERLIN), currently in phase C, is a joint cooperation between France and Germany on the development, launch and operation of a space lidar dedicated to the retrieval of total weighted methane (CH4) atmospheric columns. Atmospheric methane is the second most potent anthropogenic greenhouse gas, contributing 20% to climate radiative forcing but also playing an important role in atmospheric chemistry as a precursor of tropospheric ozone and lower-stratospheric water vapour. Its short lifetime (about 9 years) and the nature and variety of its anthropogenic sources also offer interesting mitigation options with regard to the 2 °C objective of the Paris Agreement. For the first time, measurements of atmospheric composition will be performed from space thanks to an IPDA (Integrated Path Differential Absorption) lidar (Light Detection And Ranging), with a precision (target ±27 ppb for a 50 km aggregation along the track) and accuracy (target <3.7 ppb at 68%) sufficient to significantly reduce the uncertainties on methane emissions. The very low systematic error target is particularly ambitious compared to current passive methane space missions. It is achievable because of the differential active measurement of MERLIN, which guarantees almost no contamination by aerosols or water vapour cross-sensitivity. As an active mission, MERLIN will deliver global methane weighted columns (XCH4) for all seasons and all latitudes, day and night. Here, we recall the MERLIN objectives and mission characteristics. We also propose an end-to-end error analysis, from the causes of random and systematic errors of the instrument, the platform and the data treatment, to the error on methane emissions. To do so, we propose an OSSE (observing system simulation experiment) analysis to estimate the uncertainty reduction on methane emissions brought by MERLIN XCH4. The originality of our inversion system is to transfer both random and systematic errors from the observation space to the flux space, thus providing more realistic error reductions than usually obtained in OSSEs that use only the random part of the errors. Uncertainty reductions are presented using two different atmospheric transport models, TM3 and LMDZ, and compared with the error reduction achieved with the GOSAT passive mission.

  12. Using Redundancy To Reduce Errors in Magnetometer Readings

    NASA Technical Reports Server (NTRS)

    Kulikov, Igor; Zak, Michail

    2004-01-01

    A method of reducing errors in noisy magnetic-field measurements involves exploitation of redundancy in the readings of multiple magnetometers in a cluster. By "redundancy" is meant that the readings are not entirely independent of each other because the relationships among the magnetic-field components that one seeks to measure are governed by the fundamental laws of electromagnetism as expressed by Maxwell's equations. Assuming that the magnetometers are located outside a magnetic material, that the magnetic field is steady or quasi-steady, and that there are no electric currents flowing in or near the magnetometers, the applicable Maxwell's equations are ∇ × B = 0 and ∇ · B = 0, where B is the magnetic-flux-density vector. By suitable algebraic manipulation, these equations can be shown to impose three independent constraints on the values of the components of B at the various magnetometer positions. In general, the problem of reducing the errors in noisy measurements is one of finding a set of corrected values that minimize an error function. In the present method, the error function is formulated as (1) the sum of squares of the differences between the corrected and noisy measurement values plus (2) a sum of three terms, each comprising the product of a Lagrange multiplier and one of the three constraints. The partial derivatives of the error function with respect to the corrected magnetic-field component values and the Lagrange multipliers are set equal to zero, leading to a set of equations that can be put into matrix-vector form. The matrix can be inverted to solve for a vector that comprises the corrected magnetic-field component values and the Lagrange multipliers.
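
    A minimal sketch of the constrained least-squares step described above, assuming the three Maxwell-derived constraints have already been written as a linear system C b = 0 on the stacked field components; the construction of C from the magnetometer geometry is not reproduced here, and the toy constraint in the example is purely illustrative.

    ```python
    import numpy as np

    def correct_readings(b_noisy, C):
        """Minimize ||b - b_noisy||^2 subject to C @ b = 0 via Lagrange multipliers.
        Solves the block system [[I, C^T], [C, 0]] @ [b, lam] = [b_noisy, 0]."""
        n, m = b_noisy.size, C.shape[0]
        kkt = np.block([[np.eye(n), C.T],
                        [C, np.zeros((m, m))]])
        rhs = np.concatenate([b_noisy, np.zeros(m)])
        return np.linalg.solve(kkt, rhs)[:n]   # corrected components; rest are multipliers

    # Toy example: two sensors whose x-components are constrained to be equal
    C = np.array([[1.0, -1.0, 0.0, 0.0]])            # hypothetical single constraint
    b_noisy = np.array([1.02, 0.97, 0.30, 0.31])
    print(correct_readings(b_noisy, C))               # x-components pulled to their mean
    ```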

  13. Multi-interface level in oil tanks and applications of optical fiber sensors

    NASA Astrophysics Data System (ADS)

    Leal-Junior, Arnaldo G.; Marques, Carlos; Frizera, Anselmo; Pontes, Maria José

    2018-01-01

    Oil production also involves the production of water, gas and suspended solids, which are separated from the oil in three-phase separators. However, the control strategies of an oil separator are limited due to the unavailability of suitable multi-interface level sensors. This paper presents a description of the multi-phase level problem in the oil industry and a review of the current technologies for multi-interface level assessment. Since optical fiber sensors offer chemical stability, intrinsic safety, electromagnetic immunity, light weight and multiplexing capabilities, they can be an alternative for multi-interface level measurement that overcomes some of the limitations of the current technologies. For this reason, a fiber Bragg grating (FBG)-based optical fiber sensor system for multi-interface level assessment is proposed, simulated and experimentally assessed. The results show that the proposed sensor system is capable of measuring the interface level with a relative error of only 2.38%. Furthermore, the proposed sensor system is also capable of measuring the oil density with an error of 0.8 kg/m3.

  14. Optimal interpolation schemes to constrain PM2.5 in regional modeling over the United States

    NASA Astrophysics Data System (ADS)

    Sousan, Sinan Dhia Jameel

    This thesis presents the use of data assimilation with optimal interpolation (OI) to develop atmospheric aerosol concentration estimates for the United States at high spatial and temporal resolutions. Concentration estimates are highly desirable for a wide range of applications, including visibility, climate, and human health. OI is a viable data assimilation method that can be used to improve Community Multiscale Air Quality (CMAQ) model fine particulate matter (PM2.5) estimates. PM2.5 is the mass of solid and liquid particles with diameters less than or equal to 2.5 µm suspended in the gas phase. OI was employed by combining model estimates with satellite and surface measurements. The satellite data assimilation combined 36 x 36 km aerosol concentrations from CMAQ with aerosol optical depth (AOD) measured by MODIS and AERONET over the continental United States for 2002. Posterior model concentrations generated by the OI algorithm were compared with surface PM2.5 measurements to evaluate a number of possible data assimilation parameters, including model error, observation error, and temporal averaging assumptions. Evaluation was conducted separately for six geographic U.S. regions in 2002. Variability in model error and MODIS biases limited the effectiveness of a single data assimilation system for the entire continental domain. The best combinations of four settings and three averaging schemes led to a domain-averaged improvement in fractional error from 1.2 to 0.97 and from 0.99 to 0.89 at respective IMPROVE and STN monitoring sites. For 38% of OI results, MODIS OI degraded the forward model skill due to biases and outliers in MODIS AOD. Surface data assimilation combined 36 × 36 km aerosol concentrations from the CMAQ model with surface PM2.5 measurements over the continental United States for 2002. The model error covariance matrix was constructed by using the observational method. The observation error covariance matrix included site representation that scaled the observation error by land use (i.e. urban or rural locations). In theory, urban locations should have less effect on surrounding areas than rural sites, which can be controlled using site representation error. The annual evaluations showed substantial improvements in model performance with increases in the correlation coefficient from 0.36 (prior) to 0.76 (posterior), and decreases in the fractional error from 0.43 (prior) to 0.15 (posterior). In addition, the normalized mean error decreased from 0.36 (prior) to 0.13 (posterior), and the RMSE decreased from 5.39 µg m-3 (prior) to 2.32 µg m-3 (posterior). OI decreased model bias for both large spatial areas and point locations, and could be extended to more advanced data assimilation methods. The current work will be applied to a five year (2000-2004) CMAQ simulation aimed at improving aerosol model estimates. The posterior model concentrations will be used to inform exposure studies over the U.S. that relate aerosol exposure to mortality and morbidity rates. Future improvements for the OI techniques used in the current study will include combining both surface and satellite data to improve posterior model estimates. Satellite data have high spatial and temporal resolutions in comparison to surface measurements, which are scarce but more accurate than model estimates. The satellite data are subject to noise affected by location and season of retrieval. 
The implementation of OI to combine satellite and surface data sets has the potential to improve posterior model estimates for locations that have no direct measurements.
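
    A minimal sketch of a single optimal-interpolation update of the kind used throughout the thesis, assuming a linear observation operator H and given model (B) and observation (R) error covariances; the covariance construction actually used (observational method, land-use representation scaling) is not reproduced here and the numbers are illustrative.

    ```python
    import numpy as np

    def oi_update(x_b, y, H, B, R):
        """One OI analysis step: x_a = x_b + B H^T (H B H^T + R)^{-1} (y - H x_b)."""
        S = H @ B @ H.T + R
        return x_b + B @ H.T @ np.linalg.solve(S, y - H @ x_b)

    # Toy example: three grid cells, one surface monitor observing the middle cell
    x_b = np.array([12.0, 18.0, 15.0])     # prior PM2.5 (ug/m3)
    H = np.array([[0.0, 1.0, 0.0]])        # observation operator
    B = 4.0 * np.exp(-np.abs(np.subtract.outer(np.arange(3), np.arange(3))))  # correlated model error
    R = np.array([[1.0]])                  # observation error variance
    y = np.array([14.0])
    print(oi_update(x_b, y, H, B, R))      # middle cell pulled toward 14, neighbours adjusted
    ```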

  15. High Precision Ranging and Range-Rate Measurements over Free-Space-Laser Communication Link

    NASA Technical Reports Server (NTRS)

    Yang, Guangning; Lu, Wei; Krainak, Michael; Sun, Xiaoli

    2016-01-01

    We present a high-precision ranging and range-rate measurement system via an optical-ranging or combined ranging-communication link. A complete bench-top optical communication system was built. It included a ground terminal and a space terminal. Ranging and range-rate tests were conducted in two configurations. In the communication configuration with 622 data rate, we achieved a two-way range-rate error of 2 microns/s, or a modified Allan deviation of 9 × 10^-15 with 10 second averaging time. Ranging and range-rate as a function of the bit error rate of the communication link are reported. They are not sensitive to the link error rate. In the single-frequency amplitude modulation mode, we report a two-way range-rate error of 0.8 microns/s, or a modified Allan deviation of 2.6 × 10^-15 with 10 second averaging time. We identified the major noise sources in the current system as the transmitter modulation injected noise and receiver electronics generated noise. A new improved system will be constructed to further improve the system performance for both operating modes.

  16. Improved methods for the measurement and analysis of stellar magnetic fields

    NASA Technical Reports Server (NTRS)

    Saar, Steven H.

    1988-01-01

    The paper presents several improved methods for the measurement of magnetic fields on cool stars which take into account simple radiative transfer effects and the exact Zeeman patterns. Using these methods, high-resolution, low-noise data can be fitted with theoretical line profiles to determine the mean magnetic field strength in stellar active regions and a model-dependent fraction of the stellar surface (filling factor) covered by these regions. Random errors in the derived field strength and filling factor are parameterized in terms of signal-to-noise ratio, wavelength, spectral resolution, stellar rotation rate, and the magnetic parameters themselves. Weak line blends, if left uncorrected, can have significant systematic effects on the derived magnetic parameters, and thus several methods are developed to compensate partially for them. The magnetic parameters determined by previous methods likely have systematic errors because of such line blends and because of line saturation effects. Other sources of systematic error are explored in detail. These sources of error currently make it difficult to determine the magnetic parameters of individual stars to better than about + or - 20 percent.

  17. Theoretical Calculation and Validation of the Water Vapor Continuum Absorption

    NASA Technical Reports Server (NTRS)

    Ma, Qiancheng; Tipping, Richard H.

    1998-01-01

    The primary objective of this investigation is the development of an improved parameterization of the water vapor continuum absorption through the refinement and validation of our existing theoretical formalism. The chief advantage of our approach is the self-consistent, first-principles basis of the formalism which allows us to predict the frequency, temperature and pressure dependence of the continuum absorption as well as provide insights into the physical mechanisms responsible for the continuum absorption. Moreover, our approach is such that the calculated continuum absorption can be easily incorporated into satellite retrieval algorithms and climate models. Accurate determination of the water vapor continuum is essential for the next generation of retrieval algorithms which propose to use the combined constraints of multispectral measurements such as those under development for EOS data analysis (e.g., retrieval algorithms based on MODIS and AIRS measurements); current Pathfinder activities which seek to use the combined constraints of infrared and microwave (e.g., HIRS and MSU) measurements to improve temperature and water profile retrievals; and field campaigns which seek to reconcile spectrally-resolved and broad-band measurements such as those obtained as part of FIRE. Current widely used continuum treatments have been shown to produce spectrally dependent errors, with the magnitude of the error dependent on temperature and abundance, which produces errors with a seasonal and latitudinal dependence. Translated into flux, current water vapor continuum parameterizations produce flux errors of order 10 W/sq m, which, compared to the 4 W/sq m magnitude of the greenhouse gas forcing and the 1-2 W/sq m estimated aerosol forcing, is certainly climatologically significant and unacceptably large. While it is possible to tune the empirical formalisms, the paucity of laboratory measurements, especially at temperatures of interest for atmospheric applications, precludes tuning the empirical continuum models over the full spectral range of interest for remote sensing and climate applications. Thus, we propose to further develop and refine our existing far-wing formalism to provide an improved treatment applicable from the near-infrared through the microwave. Based on the results of this investigation, we will provide to the remote sensing/climate modeling community a practical and accurate tabulation of the continuum absorption covering the near-infrared through the microwave region of the spectrum for the range of temperatures and pressures of interest for atmospheric applications.

  18. Theoretical Calculation and Validation of the Water Vapor Continuum Absorption

    NASA Technical Reports Server (NTRS)

    Ma, Qiancheng; Tipping, Richard H.

    1998-01-01

    The primary objective of this investigation is the development of an improved parameterization of the water vapor continuum absorption through the refinement and validation of our existing theoretical formalism. The chief advantage of our approach is the self-consistent, first-principles basis of the formalism which allows us to predict the frequency, temperature and pressure dependence of the continuum absorption as well as provide insights into the physical mechanisms responsible for the continuum absorption. Moreover, our approach is such that the calculated continuum absorption can be easily incorporated into satellite retrieval algorithms and climate models. Accurate determination of the water vapor continuum is essential for the next generation of retrieval algorithms which propose to use the combined constraints of multi-spectral measurements such as those under development for EOS data analysis (e.g., retrieval algorithms based on MODIS and AIRS measurements); current Pathfinder activities which seek to use the combined constraints of infrared and microwave (e.g., HIRS and MSU) measurements to improve temperature and water profile retrievals; and field campaigns which seek to reconcile spectrally-resolved and broad-band measurements such as those obtained as part of FIRE. Current widely used continuum treatments have been shown to produce spectrally dependent errors, with the magnitude of the error dependent on temperature and abundance, which produces errors with a seasonal and latitudinal dependence. Translated into flux, current water vapor continuum parameterizations produce flux errors of order 10 W/sq m, which, compared to the 4 W/sq m magnitude of the greenhouse gas forcing and the 1-2 W/sq m estimated aerosol forcing, is certainly climatologically significant and unacceptably large. While it is possible to tune the empirical formalisms, the paucity of laboratory measurements, especially at temperatures of interest for atmospheric applications, precludes tuning the empirical continuum models over the full spectral range of interest for remote sensing and climate applications. Thus, we propose to further develop and refine our existing far-wing formalism to provide an improved treatment applicable from the near-infrared through the microwave. Based on the results of this investigation, we will provide to the remote sensing/climate modeling community a practical and accurate tabulation of the continuum absorption covering the near-infrared through the microwave region of the spectrum for the range of temperatures and pressures of interest for atmospheric applications.

  19. DNA/RNA transverse current sequencing: intrinsic structural noise from neighboring bases

    PubMed Central

    Alvarez, Jose R.; Skachkov, Dmitry; Massey, Steven E.; Kalitsov, Alan; Velev, Julian P.

    2015-01-01

    Nanopore DNA sequencing via transverse current has emerged as a promising candidate for third-generation sequencing technology. It produces long read lengths which could alleviate problems with assembly errors inherent in current technologies. However, the high error rates of nanopore sequencing have to be addressed. A very important source of the error is the intrinsic noise in the current arising from carrier dispersion along the chain of the molecule, i.e., from the influence of neighboring bases. In this work we perform calculations of the transverse current within an effective multi-orbital tight-binding model derived from first-principles calculations of the DNA/RNA molecules, to study the effect of this structural noise on the error rates in DNA/RNA sequencing via transverse current in nanopores. We demonstrate that a statistical technique, utilizing not only the currents through the nucleotides but also the correlations in the currents, can in principle reduce the error rate below any desired precision. PMID:26150827

  20. How well can we measure Earth's Energy Imbalance?

    NASA Astrophysics Data System (ADS)

    Hakuba, M. Z.; Stephens, G. L.; Landerer, F. W.; Webb, F.; Bettadpur, S. V.; Tapley, B. D.; Christophe, B.; Foulon, B.

    2017-12-01

    The direct measurement of Earth's energy imbalance (EEI) is one of the greatest challenges in climate research. The global mean EEI is the integrated value of global warming, while its spatial and temporal variability can tell us about the strength and direction of heat transports and reflects internal climate modes such as ENSO. These heat flows ultimately control the circulation in the atmosphere and ocean, and hence the water cycle and habitability of our planet. Current space-borne systems measure the radiative components of the global mean energy budget with unprecedented accuracy and stability, but the residual budget derived from them has errors too large to determine the absolute magnitude of EEI. Best estimates of EEI are currently derived from changes in ocean heat content, which are afflicted with horizontal and vertical sampling issues. Hence, we see the need to improve on current approaches in order to circumvent calibration issues that are inevitable in radiometry, and sampling issues that are inevitable when profiling the ocean. We will present alternative methods to estimate the EEI by 1) exploiting existing datasets of ocean mass and sea level height from remote sensing. A combination of such datasets, as for example provided by the GRACE and Jason missions, provides a way of estimating the thermosteric sea level rise and therefore the thermal expansion of the ocean due to heat uptake. Recent studies suggest the retrieval of ocean heat uptake is possible within acceptable error bounds, although the magnitude and sources of error are yet to be comprehensively defined. 2) To monitor the integrated value of EEI from space, we propose a method that aims at measuring the non-gravitational force due to radiation pressure acting on Earth-orbiting spacecraft. This requires measurements of acceleration at high accuracy. The concept of deriving EEI from radiation pressure has been explored in the past, and today's advanced capabilities suggest it is feasible to measure the EEI accurately enough to answer the question: at what rate is our planet warming? This method provides little information on spectral distribution and spatiotemporal resolution. However, by directly measuring EEI, it could complement existing efforts and improve our understanding of the climatic changes our planet is subjected to.
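
    A minimal sketch of the first approach, converting a thermosteric sea-level trend (the steric part left after removing the ocean-mass contribution) into a global mean energy imbalance; the expansion efficiency of heat used below is an assumed, literature-like value, and non-ocean heat uptake is ignored.

    ```python
    EXPANSION_EFF = 0.12e-24   # m of global mean sea level per J of ocean heat uptake (assumed)
    EARTH_AREA = 5.1e14        # m^2
    SECONDS_PER_YEAR = 3.156e7

    def eei_from_thermosteric_trend(trend_mm_per_yr):
        """Energy imbalance (W/m^2, averaged over Earth's surface) implied by a
        thermosteric sea-level trend."""
        heat_uptake_per_yr = (trend_mm_per_yr * 1e-3) / EXPANSION_EFF   # J/yr
        return heat_uptake_per_yr / (EARTH_AREA * SECONDS_PER_YEAR)

    print(eei_from_thermosteric_trend(1.1))   # ~0.6 W/m^2 for a 1.1 mm/yr trend
    ```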

  1. Volumetric breast density measurement: sensitivity analysis of a relative physics approach

    PubMed Central

    Lau, Susie; Abdul Aziz, Yang Faridah

    2016-01-01

    Objective: To investigate the sensitivity and robustness of a volumetric breast density (VBD) measurement system to errors in the imaging physics parameters including compressed breast thickness (CBT), tube voltage (kVp), filter thickness, tube current-exposure time product (mAs), detector gain, detector offset and image noise. Methods: 3317 raw digital mammograms were processed with Volpara® (Matakina Technology Ltd, Wellington, New Zealand) to obtain fibroglandular tissue volume (FGV), breast volume (BV) and VBD. Errors in parameters including CBT, kVp, filter thickness and mAs were simulated by varying them in the Digital Imaging and Communications in Medicine (DICOM) tags of the images up to ±10% of the original values. Errors in detector gain and offset were simulated by varying them in the Volpara configuration file up to ±10% from their default values. For image noise, Gaussian noise was generated and introduced into the original images. Results: Errors in filter thickness, mAs, detector gain and offset had limited effects on FGV, BV and VBD. Significant effects in VBD were observed when CBT, kVp, detector offset and image noise were varied (p < 0.0001). Maximum shifts in the mean (1.2%) and median (1.1%) VBD of the study population occurred when CBT was varied. Conclusion: Volpara was robust to expected clinical variations, with errors in most investigated parameters giving limited changes in results, although extreme variations in CBT and kVp could lead to greater errors. Advances in knowledge: Despite Volpara's robustness, rigorous quality control is essential to keep the parameter errors within reasonable bounds. Volpara appears robust within those bounds, albeit for more advanced applications such as tracking density change over time, it remains to be seen how accurate the measures need to be. PMID:27452264

  2. Volumetric breast density measurement: sensitivity analysis of a relative physics approach.

    PubMed

    Lau, Susie; Ng, Kwan Hoong; Abdul Aziz, Yang Faridah

    2016-10-01

    To investigate the sensitivity and robustness of a volumetric breast density (VBD) measurement system to errors in the imaging physics parameters including compressed breast thickness (CBT), tube voltage (kVp), filter thickness, tube current-exposure time product (mAs), detector gain, detector offset and image noise. 3317 raw digital mammograms were processed with Volpara(®) (Matakina Technology Ltd, Wellington, New Zealand) to obtain fibroglandular tissue volume (FGV), breast volume (BV) and VBD. Errors in parameters including CBT, kVp, filter thickness and mAs were simulated by varying them in the Digital Imaging and Communications in Medicine (DICOM) tags of the images up to ±10% of the original values. Errors in detector gain and offset were simulated by varying them in the Volpara configuration file up to ±10% from their default values. For image noise, Gaussian noise was generated and introduced into the original images. Errors in filter thickness, mAs, detector gain and offset had limited effects on FGV, BV and VBD. Significant effects in VBD were observed when CBT, kVp, detector offset and image noise were varied (p < 0.0001). Maximum shifts in the mean (1.2%) and median (1.1%) VBD of the study population occurred when CBT was varied. Volpara was robust to expected clinical variations, with errors in most investigated parameters giving limited changes in results, although extreme variations in CBT and kVp could lead to greater errors. Despite Volpara's robustness, rigorous quality control is essential to keep the parameter errors within reasonable bounds. Volpara appears robust within those bounds, albeit for more advanced applications such as tracking density change over time, it remains to be seen how accurate the measures need to be.

  3. BOOK REVIEW: The Current Comparator

    NASA Astrophysics Data System (ADS)

    Petersons, Oskars

    1989-01-01

    This 120-page book is a concise, yet comprehensive, clearly-written and well-illustrated monograph that covers the subject matter from basic principles through design, construction and calibration details to the principal applications. The book will be useful as a primer to the uninitiated and as a reference to the practitioner involved with transformer-type ratio devices. The length of the book and the style of presentation will not overburden any informed reader. The described techniques and the cited references are primarily from the work at the National Research Council, Canada (NRC). Any omissions, however, are not serious with respect to coverage of the subject matter, since most of the development work has been done at NRC. The role of transformers and transformer-like devices for establishing accurate voltage and current ratios has been recognized for over half a century. Transformer techniques were much explored and developed in the fifties and sixties for accuracy levels suitable for standards laboratories. Three-winding voltage transformers were developed for scaling of impedances in connection with the calculable Thompson-Lampard capacitor; three-winding current transformers or current comparators were initially explored for the calibration of current transformers and later for specialized impedance measurements. Extensive development of the current comparator and its applications has been and is still being conducted at the NRC by a team that was started and, until his retirement, led by N L Kusters. The team is now led by W J M Moore. He and P N Miljanic, the authors of this book, have had the principal roles in the development of the current comparator. It is fortunate for the field of metrology that considerable resources and a talented group of researchers were available to do this development, along with mechanisms that were available to transfer this technology to a private-sector instrument manufacturer and, thus, disseminate it worldwide. One would hardly find a standards laboratory today without an instrument employing a current comparator. The NRC program, now nearing the end of its third decade, has resulted in a large number of papers in technical journals. The fact that the results of the current-comparator program are now documented in a well-written book is a most welcome development. The material in the book is well organized and divided into seven chapters. Chapter 1 deals very briefly with the historical aspects of the development, including related work in other organizations. Chapter 2 is the longest, occupying one third of the book. It presents the background theory; the definitions and origins of the errors; and the related concepts and devices including two-stage current transformers, electronic methods for improving the performance of current transformers, and null detectors. The idea of the current comparator is developed starting from Ampère's law and then progressing to the practical realization of measuring the line integral of the magnetic field surrounding an electric current. Such an approach, as opposed to the more common methods of analyzing transformers, has a tutorial value in explaining how the current balance is achieved. Such analysis is intuitive for air-core sensing coils with infinitesimal cross-sections and uniform winding densities. The intuitive understanding, however, becomes less obvious when high-permeability magnetic cores are used. The subject of errors is discussed thoroughly.
For errors of magnetic origin, ample experimental data are provided to support the hypothesis for the cause of such errors. The cause is discussed in a macroscopic sense (non-uniform effective permeability along the torus) without going into design and processing details which could be responsible for the non-uniformities. For capacitive errors, equations have been developed to compute them from geometrical considerations. Techniques are presented to reduce both types of errors: shielding techniques for magnetic errors and magnetic-shield excitation for capacitive errors. The magnetic-shield-excitation technique leads naturally to two-stage transformer approaches, described in a small subchapter. Sensitivity of current comparators is discussed in terms of available signal levels for given excitations and current-comparator characteristics. The discussion, however, does not cover more basic limitations, such as inherent noise. A subchapter is devoted to electronically-aided current transformers. Although electronically-aided transformers are not in a strict sense current comparators, many of the design considerations and error sources are the same. Seven different circuits are presented with a brief qualitative discussion. The third chapter, covering design and construction, will be exceptionally valuable for someone needing basic information on how to construct a current comparator quickly. Indeed, all the necessary design, construction, and testing steps are presented in a well-illustrated 15-page chapter. The tests for shielding effectiveness discussed in this chapter and the knowledge of interwinding capacitances calculable from the equations in the previous chapter should enable one also to predict the limits of errors without an exhaustive and complete calibration. Chapter 4 is devoted to current-transformer calibration, the original objective for the current-comparator development work. The principal tool for this is the compensated current comparator, in effect a two-stage transformer operated in the current-comparator mode. The compensated current comparator is not only accurate but is also an extremely versatile device and, hence, deserves the attention that it receives in this book. Considerable space is devoted to the calibration of current comparators themselves using other current comparators in ratio-buildup (bootstrap) techniques. This information is more than most users will want, since the pre-eminent feature of a current comparator is that errors can be made inherently negligible with proper construction techniques. Proper functioning can be verified by spot checks on a few ratios, and by indirect means. Complete calibration is useful, however, to verify the original design. A number of circuits incorporating compensated current comparators for current-transformer calibration are presented. Such circuits cover the calibration of current transformers normally encountered in electric-power transmission and distribution, from several amperes to several thousand amperes; cascaded circuits for very large currents up to 60,000 A; and special cases involving less-than-unity ratio (step-up) current transformers. Peripheral equipment such as ratio-sets and burdens is also discussed. This entire chapter is of great practical interest since much of the world's current-transformer calibration is performed using equipment described therein. The next two chapters, 5 and 6, deal with current-comparator applications in impedance-bridge circuits.
High-voltage applications (described in Chapter 5) have been of great practical importance and indeed are the techniques of choice for a number of measurements. High-voltage bridge circuits are described for capacitance, inductance, voltage transformer ratio, and low-power-factor power measurements. Without going into much detail, the book mentions the particular characteristics required of current comparators in high-voltage bridge applications. Other components making up the bridge circuits are also described, as well as the calibration technique for the bridge. Limited in application but important in basic metrology is one particular low-voltage bridge, described in Chapter 6, for realization of ac power in terms of more basic SI units. Applications to ac resistance measurements and to realization of transconductance amplifiers are also included. Chapter 7, consisting of only eight pages, is devoted to direct-current comparators, although specific topics applicable to dc use are covered in earlier chapters. In relation to the number of dc vs ac instruments in use, the length of this chapter presents something of an imbalance. Nevertheless, in the limited number of pages, the authors have covered the principal direct-current comparator applications: the ratio device (dc comparator), the resistance bridge, and the potentiometer. More specialized instruments such as the differential voltmeter and the digital-to-analog converter are also mentioned. The unique feature of the dc comparator is the modulator-type balance detector. It is covered in Chapter 3 with current-comparator construction details. In conclusion, the technical depth and style of discussion, the material covered, the size of the volume, and the ample references suggest that the authors should satisfy most of the audience interested in current comparators. A possible exception might be those interested in extremely high accuracies (errors in the 10^-8 to 10^-9 range). The authors have selected an overall approach of presentation that is predominantly pragmatic rather than analytical. Metrologists unfamiliar with current comparators should be able, after a day's reading, to embark upon the construction of their own devices or upon the application of current comparators to their own measurement needs. The serious practitioner will find the book valuable for the complete coverage of the subject and the bibliography. Also, most practitioners should find in the book a number of useful design, construction, or application tricks for their own use. At the present time, development activity on transformer-type ratio devices, including the current comparator, has subsided in comparison with the peak level of the sixties. For many applications, however, these devices remain the most accurate and sometimes the only viable instruments for scaling current and voltage. The applications include factory test systems and instruments for absolute electrical measurements. This situation is likely to continue for at least one more generation of metrologists. Thus, not only present but also future generations of metrologists will benefit from the book by not having to spend countless hours in "reinventing the wheel" or in searching scattered journal literature. One only wishes that other specialized fields of metrology could benefit from similar endeavors.

  4. Displacement current phenomena in the magnetically insulated transmission lines of the refurbished Z accelerator

    NASA Astrophysics Data System (ADS)

    McBride, R. D.; Jennings, C. A.; Vesey, R. A.; Rochau, G. A.; Savage, M. E.; Stygar, W. A.; Cuneo, M. E.; Sinars, D. B.; Jones, M.; Lechien, K. R.; Lopez, M. R.; Moore, J. K.; Struve, K. W.; Wagoner, T. C.; Waisman, E. M.

    2010-12-01

    Experimental data are presented that illustrate important displacement current phenomena in the magnetically insulated transmission lines (MITLs) of the refurbished Z accelerator [D. V. Rose et al., Phys. Rev. ST Accel. Beams 13, 010402 (2010)]. Specifically, we show how displacement current in the MITLs causes significant differences between the accelerator current measured at the vacuum-insulator stack (at a radial position of about 1.6 m from the Z axis of symmetry) and the accelerator current measured at the load (at a radial position of about 6 cm from the Z axis of symmetry). The importance of accounting for these differences was first emphasized by Jennings et al. [C. A. Jennings et al., IEEE Trans. Plasma Sci. 38, 529 (2010)], who calculated them using a full transmission-line-equivalent model of the four-level MITL system. However, in the data presented by Jennings et al., many of the interesting displacement current phenomena were obscured by parasitic current losses that occurred between the vacuum-insulator stack and the load (e.g., electron flow across the anode-cathode gap). By contrast, the data presented herein contain very little parasitic current loss, and thus for these low-loss experiments we are able to demonstrate that the differences between the current measured at the stack and the current measured at the load are due primarily to the displacement current that results from the shunt capacitance of the MITLs (about 8.41 nF total). Demonstrating this is important because displacement current is an energy storage mechanism, where energy is stored in the MITL electric fields and can later be used by the system. Thus, even for higher-loss experiments, the differences between the current measured at the stack and the current measured at the load are often largely due to energy storage and subsequent release, as opposed to being due solely to some combination of measurement error and current loss in the MITLs and/or double post-hole convolute. Displacement current also explains why the current measured downstream of the MITLs (i.e., the load current) often exceeds the current measured upstream of the MITLs (i.e., the stack current) at various times in the power pulse (this particular phenomenon was initially thought to be due to timing and/or calibration errors). To facilitate a better understanding of these phenomena, we also introduce and analyze a simple LC circuit model of the MITLs. This model is easily implemented as a simple drive circuit in simulation codes, which has now been done for the LASNEX code [G. B. Zimmerman and W. L. Kruer, Comments Plasma Phys. Controlled Fusion 2, 51 (1975)] at Sandia, as well as for simpler MATLAB®-based codes at Sandia. An example of this LC model used as a drive circuit will also be presented.
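
    A minimal sketch of the stored-energy argument above: if the MITLs are lumped into a single shunt capacitance, the stack and load currents differ by the displacement current C dV/dt. The ~8.41 nF capacitance is taken from the abstract; the voltage and current pulse shapes are purely illustrative and this is not the transmission-line-equivalent model of Jennings et al.

    ```python
    import numpy as np

    C_MITL = 8.41e-9                                   # total MITL shunt capacitance (F)
    t = np.linspace(0.0, 300e-9, 3001)                 # 300 ns window (illustrative)
    V = 2.0e6 * np.sin(np.pi * t / 300e-9) ** 2        # hypothetical ~2 MV voltage pulse
    I_load = 20e6 * np.sin(np.pi * t / 300e-9) ** 2    # hypothetical ~20 MA load current

    I_disp = C_MITL * np.gradient(V, t)                # displacement current, C dV/dt
    I_stack = I_load + I_disp                          # upstream current includes C dV/dt

    # While dV/dt > 0 the capacitance charges and I_stack > I_load; after the
    # voltage peak the stored energy is released and I_load can exceed I_stack.
    print(f"max |I_stack - I_load| = {np.abs(I_disp).max():.3e} A")
    ```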

  5. Determination of eddy current response with magnetic measurements.

    PubMed

    Jiang, Y Z; Tan, Y; Gao, Z; Nakamura, K; Liu, W B; Wang, S Z; Zhong, H; Wang, B B

    2017-09-01

    Accurate mutual inductances between magnetic diagnostics and poloidal field coils are an essential requirement for determining the poloidal flux for plasma equilibrium reconstruction. The mutual inductance calibration of the flux loops and magnetic probes requires time-varying coil currents, which also simultaneously drive eddy currents in electrically conducting structures. The eddy current-induced field appearing in the magnetic measurements can substantially increase the calibration error if the eddy currents are neglected in the model. In this paper, an expression of the magnetic diagnostic response to the coil currents is used to calibrate the mutual inductances, estimate the conductor time constant, and predict the eddy current response. It is found that the eddy current effects in magnetic signals can be well explained by the eddy current response determination. A set of experiments using a specially shaped saddle coil diagnostic is conducted to measure the SUNIST-like eddy current response and to examine the accuracy of this method. In shots that include plasmas, this approach can more accurately determine the plasma-related response in the magnetic signals by eliminating the field due to the eddy currents produced by the external field.
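
    The response-fitting idea can be sketched as follows (an illustrative construction, not the authors' code): model the flux-loop signal as a direct mutual-inductance term plus a first-order eddy-current term with time constant tau, then estimate the coefficients by least squares over a grid of candidate time constants. All parameter values below are assumptions.

    ```python
    import numpy as np

    # Sketch: fit flux signal = M*I_coil + A*(eddy response), where the eddy term
    # is a first-order low-pass of dI_coil/dt with time constant tau.
    # Synthetic data; M_true, A_true, tau_true are illustrative assumptions.
    rng = np.random.default_rng(0)
    dt = 1e-4
    t = np.arange(0.0, 0.5, dt)
    I_coil = 1.0e3 * np.clip(t / 0.2, 0.0, 1.0)          # ramped coil current, A

    def eddy_response(i_coil, tau, dt):
        """First-order filter of dI/dt: tau*dy/dt + y = dI/dt (explicit Euler)."""
        didt = np.gradient(i_coil, dt)
        y = np.zeros_like(i_coil)
        for k in range(1, len(i_coil)):
            y[k] = y[k - 1] + dt / tau * (didt[k] - y[k - 1])
        return y

    M_true, A_true, tau_true = 2.0e-6, 3.0e-7, 0.03
    psi = M_true * I_coil + A_true * eddy_response(I_coil, tau_true, dt)
    psi += rng.normal(0.0, 1e-5, t.size)                 # measurement noise

    # Grid-search tau; for each tau the remaining problem is linear in (M, A).
    best = None
    for tau in np.linspace(0.005, 0.1, 96):
        X = np.column_stack([I_coil, eddy_response(I_coil, tau, dt)])
        coef, *_ = np.linalg.lstsq(X, psi, rcond=None)
        sse = float(np.sum((psi - X @ coef) ** 2))
        if best is None or sse < best[0]:
            best = (sse, tau, coef)

    _, tau_fit, (M_fit, A_fit) = best
    print(f"fitted M = {M_fit:.3e} H, tau = {tau_fit * 1e3:.1f} ms")
    ```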

  6. From tunneling to point contact: Correlation between forces and current

    NASA Astrophysics Data System (ADS)

    Sun, Yan; Mortensen, Henrik; Schär, Sacha; Lucier, Anne-Sophie; Miyahara, Yoichi; Grütter, Peter; Hofer, Werner

    2005-05-01

    We used a combined ultrahigh vacuum scanning tunneling and atomic force microscope (STM/AFM) to study W tip-Au(111) sample interactions in the regimes from weak coupling to strong interaction and simultaneously measure current changes from picoamperes to microamperes. A close correlation between conductance and interaction forces in an STM configuration was observed. In particular, the electrical and mechanical points of contact are determined based on the observed barrier collapse and adhesive bond formation, respectively. These points of contact, as defined by force and current measurements, coincide within measurement error. Ab initio calculations of the current as a function of distance in the tunneling regime are in quantitative agreement with experimental results. The obtained results are discussed in the context of dissipation in noncontact AFM as well as electrical contact formation in molecular electronics.

  7. Motivational processes from expectancy-value theory are associated with variability in the error positivity in young children.

    PubMed

    Kim, Matthew H; Marulis, Loren M; Grammer, Jennie K; Morrison, Frederick J; Gehring, William J

    2017-03-01

    Motivational beliefs and values influence how children approach challenging activities. The current study explored motivational processes from an expectancy-value theory framework by studying children's mistakes and their responses to them by focusing on two event-related potential (ERP) components: the error-related negativity (ERN) and the error positivity (Pe). Motivation was assessed using a child-friendly challenge puzzle task and a brief interview measure prior to ERP testing. Data from 50 4- to 6-year-old children revealed that greater perceived competence beliefs were related to a larger Pe, whereas stronger intrinsic task value beliefs were associated with a smaller Pe. Motivation was unrelated to the ERN. Individual differences in early motivational processes may reflect electrophysiological activity related to conscious error awareness. Copyright © 2016 Elsevier Inc. All rights reserved.

  8. Stray magnetic-field response of linear birefringent optical current sensors

    NASA Astrophysics Data System (ADS)

    MacDougall, Trevor W.; Hutchinson, Ted F.

    1995-07-01

    It is well known that the line integral, describing Faraday rotation in an optical medium, reduces to zero at low frequencies for a closed path that does not encircle a current source. If the closed optical path possesses linear birefringence in addition to Faraday rotation, the cumulative effects on the state of polarization result in a response to externally located current-carrying conductors. This effect can induce a measurable error of the order of 0.3% during certain steady-state operating conditions.

  9. High accuracy switched-current circuits using an improved dynamic mirror

    NASA Technical Reports Server (NTRS)

    Zweigle, G.; Fiez, T.

    1991-01-01

    The switched-current technique, a recently developed circuit approach to analog signal processing, has emerged as an alternative/complement to the well-established switched-capacitor circuit technique. High-speed switched-current circuits offer potential cost and power savings over slower switched-capacitor circuits. Accuracy improvements are a primary concern at this stage in the development of the switched-current technique. Use of the dynamic current mirror has produced circuits that are insensitive to transistor matching errors. The dynamic current mirror has been limited by other sources of error, including clock-feedthrough and voltage transient errors. In this paper we present an improved switched-current building block using the dynamic current mirror. Utilizing current feedback, the errors due to current imbalance in the dynamic current mirror are reduced. Simulations indicate that this feedback can reduce total harmonic distortion by as much as 9 dB. Additionally, we have developed a clock-feedthrough reduction scheme for which simulations reveal a potential 10 dB total harmonic distortion improvement. The clock-feedthrough reduction scheme also significantly reduces offset errors and allows for cancellation with a constant current source. Experimental results confirm the simulated improvements.

  10. Exact free oscillation spectra, splitting functions and the resolvability of Earth's density structure

    NASA Astrophysics Data System (ADS)

    Akbarashrafi, F.; Al-Attar, D.; Deuss, A.; Trampert, J.; Valentine, A. P.

    2018-04-01

    Seismic free oscillations, or normal modes, provide a convenient tool to calculate low-frequency seismograms in heterogeneous Earth models. A procedure called `full mode coupling' allows the seismic response of the Earth to be computed. However, in order to be theoretically exact, such calculations must involve an infinite set of modes. In practice, only a finite subset of modes can be used, introducing an error into the seismograms. By systematically increasing the number of modes beyond the highest frequency of interest in the seismograms, we investigate the convergence of full-coupling calculations. As a rule-of-thumb, it is necessary to couple modes 1-2 mHz above the highest frequency of interest, although results depend upon the details of the Earth model. This is significantly higher than has previously been assumed. Observations of free oscillations also provide important constraints on the heterogeneous structure of the Earth. Historically, this inference problem has been addressed by the measurement and interpretation of splitting functions. These can be seen as secondary data extracted from low frequency seismograms. The measurement step necessitates the calculation of synthetic seismograms, but current implementations rely on approximations referred to as self- or group-coupling and do not use fully accurate seismograms. We therefore also investigate whether a systematic error might be present in currently published splitting functions. We find no evidence for any systematic bias, but published uncertainties must be doubled to properly account for the errors due to theoretical omissions and regularization in the measurement process. Correspondingly, uncertainties in results derived from splitting functions must also be increased. As is well known, density has only a weak signal in low-frequency seismograms. Our results suggest this signal is of similar scale to the true uncertainties associated with currently published splitting functions. Thus, it seems that great care must be taken in any attempt to robustly infer details of Earth's density structure using current splitting functions.

  11. Utilizing field-aligned current profiles derived from Swarm to estimate the peak emission height of 630 nm auroral arcs: a comparison of methods and discussion of associated error estimates in the ASI data.

    NASA Astrophysics Data System (ADS)

    Gillies, D. M.; Knudsen, D. J.; Donovan, E.; Jackel, B. J.; Gillies, R.; Spanswick, E.

    2017-12-01

    We compare field-aligned currents (FACs) measured by the Swarm constellation of satellites with the location of red-line (630 nm) auroral arcs observed by all-sky imagers (ASIs) to derive a characteristic emission height for the optical emissions. In our 10 events we find that an altitude of 200 km applied to the ASI maps gives optimal agreement between the two observations. We also compare the new FAC method against the traditional triangulation method using pairs of all-sky imagers (ASIs), and against electron density profiles obtained from the Resolute Bay Incoherent Scatter Radar-Canadian radar (RISR-C), both of which are consistent with a characteristic emission height of 200 km. We also present the spatial error associated with georeferencing REdline Geospace Observatory (REGO) and THEMIS all-sky imagers (ASIs) and how it applies to altitude projections of the mapped image. Utilizing this error we validate the estimated altitude of redline aurora using two methods: triangulation between ASIs and field-aligned current profiles derived from magnetometers on-board the Swarm satellites.

  12. Improving Lidar Turbulence Estimates for Wind Energy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Newman, Jennifer F.; Clifton, Andrew; Churchfield, Matthew J.

    2016-10-06

    Remote sensing devices (e.g., lidars) are quickly becoming a cost-effective and reliable alternative to meteorological towers for wind energy applications. Although lidars can measure mean wind speeds accurately, these devices measure different values of turbulence intensity (TI) than an instrument on a tower. In response to these issues, a lidar TI error reduction model was recently developed for commercially available lidars. The TI error model first applies physics-based corrections to the lidar measurements, then uses machine-learning techniques to further reduce errors in lidar TI estimates. The model was tested at two sites in the Southern Plains where vertically profiling lidars were collocated with meteorological towers. This presentation primarily focuses on the physics-based corrections, which include corrections for instrument noise, volume averaging, and variance contamination. As different factors affect TI under different stability conditions, the combination of physical corrections applied in L-TERRA changes depending on the atmospheric stability during each 10-minute time period. This stability-dependent version of L-TERRA performed well at both sites, reducing TI error and bringing lidar TI estimates closer to estimates from instruments on towers. However, there is still scatter evident in the lidar TI estimates, indicating that there are physics that are not being captured in the current version of L-TERRA. Two options are discussed for modeling the remainder of the TI error physics in L-TERRA: machine learning and lidar simulations. Lidar simulations appear to be a better approach, as they can help improve understanding of atmospheric effects on TI error and do not require a large training data set.
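
    A schematic sketch of the two-step structure described above (illustrative only; the actual L-TERRA corrections and predictors are more involved): physics terms remove noise and add back volume-averaging losses in the measured velocity variance, and a simple regression then models the residual TI error from a stability-related predictor.

    ```python
    import numpy as np

    # Sketch of a two-step TI correction (not the actual L-TERRA implementation).
    # Inputs per 10-minute period: lidar mean speed U, lidar velocity variance,
    # an instrument-noise variance estimate, a volume-averaging loss estimate,
    # and a stability-related predictor. All names and values are assumptions.
    def physics_corrected_ti(var_lidar, var_noise, var_volume_loss, U):
        var = np.maximum(var_lidar - var_noise + var_volume_loss, 0.0)
        return np.sqrt(var) / U

    rng = np.random.default_rng(1)
    n = 500
    U = rng.uniform(4.0, 16.0, n)
    ti_tower = rng.uniform(0.05, 0.25, n)          # tower TI, treated as truth
    stability = rng.normal(0.0, 1.0, n)            # e.g., a bulk Richardson proxy

    # Synthetic lidar variance: truth plus noise, averaging loss, and a stability bias.
    var_true = (ti_tower * U) ** 2
    var_noise = 0.02 + 0.01 * rng.random(n)
    var_loss = 0.1 * var_true
    var_lidar = var_true + var_noise - var_loss + 0.05 * stability * var_true

    ti_phys = physics_corrected_ti(var_lidar, var_noise, var_loss, U)

    # "Machine learning" reduced to its simplest form: regress the residual TI error
    # on stability and apply the fit as a final correction.
    X = np.column_stack([np.ones(n), stability])
    coef, *_ = np.linalg.lstsq(X, ti_phys - ti_tower, rcond=None)
    ti_final = ti_phys - X @ coef

    for name, ti in [("physics only", ti_phys), ("physics + regression", ti_final)]:
        print(f"{name:22s} RMS TI error = {np.sqrt(np.mean((ti - ti_tower) ** 2)):.4f}")
    ```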

  13. Distortion of Digital Image Correlation (DIC) Displacements and Strains from Heat Waves

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jones, E. M. C.; Reu, P. L.

    “Heat waves” is a colloquial term used to describe convective currents in air formed when different objects in an area are at different temperatures. In the context of Digital Image Correlation (DIC) and other optical-based image processing techniques, imaging an object of interest through heat waves can significantly distort the apparent location and shape of the object. We present that there are many potential heat sources in DIC experiments, including but not limited to lights, cameras, hot ovens, and sunlight, yet error caused by heat waves is often overlooked. This paper first briefly presents three practical situations in which heat waves contributed significant error to DIC measurements to motivate the investigation of heat waves in more detail. Then the theoretical background of how light is refracted through heat waves is presented, and the effects of heat waves on displacements and strains computed from DIC are characterized in detail. Finally, different filtering methods are investigated to reduce the displacement and strain errors caused by imaging through heat waves. The overarching conclusions from this work are that errors caused by heat waves are significantly higher than typical noise floors for DIC measurements, and that the errors are difficult to filter because the temporal and spatial frequencies of the errors are in the same range as those of typical signals of interest. In conclusion, eliminating or mitigating the effects of heat sources in a DIC experiment is the best solution to minimizing errors caused by heat waves.

  14. Distortion of Digital Image Correlation (DIC) Displacements and Strains from Heat Waves

    DOE PAGES

    Jones, E. M. C.; Reu, P. L.

    2017-11-28

    “Heat waves” is a colloquial term used to describe convective currents in air formed when different objects in an area are at different temperatures. In the context of Digital Image Correlation (DIC) and other optical-based image processing techniques, imaging an object of interest through heat waves can significantly distort the apparent location and shape of the object. We present that there are many potential heat sources in DIC experiments, including but not limited to lights, cameras, hot ovens, and sunlight, yet error caused by heat waves is often overlooked. This paper first briefly presents three practical situations in which heat waves contributed significant error to DIC measurements to motivate the investigation of heat waves in more detail. Then the theoretical background of how light is refracted through heat waves is presented, and the effects of heat waves on displacements and strains computed from DIC are characterized in detail. Finally, different filtering methods are investigated to reduce the displacement and strain errors caused by imaging through heat waves. The overarching conclusions from this work are that errors caused by heat waves are significantly higher than typical noise floors for DIC measurements, and that the errors are difficult to filter because the temporal and spatial frequencies of the errors are in the same range as those of typical signals of interest. In conclusion, eliminating or mitigating the effects of heat sources in a DIC experiment is the best solution to minimizing errors caused by heat waves.

  15. Measurement error affects risk estimates for recruitment to the Hudson River stock of striped bass.

    PubMed

    Dunning, Dennis J; Ross, Quentin E; Munch, Stephan B; Ginzburg, Lev R

    2002-06-07

    We examined the consequences of ignoring the distinction between measurement error and natural variability in an assessment of risk to the Hudson River stock of striped bass posed by entrainment at the Bowline Point, Indian Point, and Roseton power plants. Risk was defined as the probability that recruitment of age-1+ striped bass would decline by 80% or more, relative to the equilibrium value, at least once during the time periods examined (1, 5, 10, and 15 years). Measurement error, estimated using two abundance indices from independent beach seine surveys conducted on the Hudson River, accounted for 50% of the variability in one index and 56% of the variability in the other. If a measurement error of 50% was ignored and all of the variability in abundance was attributed to natural causes, the risk that recruitment of age-1+ striped bass would decline by 80% or more after 15 years was 0.308 at the current level of entrainment mortality (11%). However, the risk decreased almost tenfold (0.032) if a measurement error of 50% was considered. The change in risk attributable to decreasing the entrainment mortality rate from 11 to 0% was very small (0.009) and similar in magnitude to the change in risk associated with an action proposed in Amendment #5 to the Interstate Fishery Management Plan for Atlantic striped bass (0.006)--an increase in the instantaneous fishing mortality rate from 0.33 to 0.4. The proposed increase in fishing mortality was not considered an adverse environmental impact, which suggests that potentially costly efforts to reduce entrainment mortality on the Hudson River stock of striped bass are not warranted.
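
    The distinction between measurement error and natural variability can be illustrated with a generic Monte Carlo sketch (numbers are illustrative, not the Hudson River analysis): if half of the observed variance is measurement error, only the remaining process variance should drive the simulated recruitment trajectories, and the estimated risk of an 80% decline drops sharply.

    ```python
    import numpy as np

    # Sketch: risk of an >=80% decline in recruitment at least once in 15 years,
    # with and without removing measurement error from the observed variability.
    # sigma_obs and the iid lognormal recruitment model are illustrative assumptions.
    rng = np.random.default_rng(42)
    years, n_sims = 15, 20_000
    sigma_obs = 0.9                  # SD of observed log-recruitment anomalies
    meas_fraction = 0.5              # share of observed variance that is measurement error

    def decline_risk(sigma_process):
        # Recruitment = equilibrium * exp(anomaly); a decline of 80% or more
        # corresponds to an anomaly below log(0.2).
        anomalies = rng.normal(0.0, sigma_process, size=(n_sims, years))
        return np.mean(np.any(anomalies < np.log(0.2), axis=1))

    risk_all_natural = decline_risk(sigma_obs)
    risk_corrected = decline_risk(sigma_obs * np.sqrt(1.0 - meas_fraction))

    print(f"risk, all variability treated as natural: {risk_all_natural:.3f}")
    print(f"risk, measurement error removed:          {risk_corrected:.3f}")
    ```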

  16. Errors due to measuring voltage on current-carrying electrodes in electric current computed tomography.

    PubMed

    Cheng, K S; Simske, S J; Isaacson, D; Newell, J C; Gisser, D G

    1990-01-01

    Electric current computed tomography is a process for determining the distribution of electrical conductivity inside a body based upon measurements of voltage or current made at the body's surface. Most such systems use different electrodes for the application of current and the measurement of voltage. This paper shows that when a multiplicity of electrodes are attached to a body's surface, the voltage data are most sensitive to changes in resistivity in the body's interior when voltages are measured from all electrodes, including those carrying current. This assertion is true despite the presence of significant levels of skin impedance at the electrodes. This conclusion is supported both theoretically and by experiment. Data were first taken using all electrodes for current and voltage. Then current was applied only at a pair of electrodes, with voltages measured on all other electrodes. We then constructed the second data set by calculation from the first. Targets could be detected with better signal-to-noise ratio by using the reconstructed data than by using the directly measured voltages on noncurrent-carrying electrodes. Images made from voltage data using only noncurrent-carrying electrodes had higher noise levels and were less able to accurately locate targets. We conclude that in multiple electrode systems for electric current computed tomography, current should be applied and voltage should be measured from all available electrodes.

  17. Measuring Data Quality Through a Source Data Verification Audit in a Clinical Research Setting.

    PubMed

    Houston, Lauren; Probst, Yasmine; Humphries, Allison

    2015-01-01

    Health data has long been scrutinised in relation to data quality and integrity problems. Currently, no internationally accepted or "gold standard" method exists measuring data quality and error rates within datasets. We conducted a source data verification (SDV) audit on a prospective clinical trial dataset. An audit plan was applied to conduct 100% manual verification checks on a 10% random sample of participant files. A quality assurance rule was developed, whereby if >5% of data variables were incorrect a second 10% random sample would be extracted from the trial data set. Error was coded: correct, incorrect (valid or invalid), not recorded or not entered. Audit-1 had a total error of 33% and audit-2 36%. The physiological section was the only audit section to have <5% error. Data not recorded to case report forms had the greatest impact on error calculations. A significant association (p=0.00) was found between audit-1 and audit-2 and whether or not data was deemed correct or incorrect. Our study developed a straightforward method to perform a SDV audit. An audit rule was identified and error coding was implemented. Findings demonstrate that monitoring data quality by a SDV audit can identify data quality and integrity issues within clinical research settings allowing quality improvement to be made. The authors suggest this approach be implemented for future research.
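
    The audit plan lends itself to a short sketch (the data structures and demo values are illustrative; the 10% sampling fraction, the >5% quality assurance rule, and the error codes follow the abstract):

    ```python
    import random

    # Sketch of the SDV audit rule: 10% random sample of files, 100% manual
    # verification of sampled variables, and a second 10% sample if more than
    # 5% of checked variables are in error. `files` maps file id ->
    # {variable: (entered_value, source_value)}; the demo data are made up.
    ERROR_CODES = ("correct", "incorrect", "not recorded", "not entered")

    def verify_file(variables):
        """Return an error code per variable by comparing the entry against source."""
        codes = {}
        for name, (entered, source) in variables.items():
            if entered is None:
                codes[name] = "not entered"
            elif source is None:
                codes[name] = "not recorded"
            else:
                codes[name] = "correct" if entered == source else "incorrect"
        return codes

    def audit(files, sample_frac=0.10, threshold=0.05, seed=0):
        rng = random.Random(seed)
        remaining = list(files)
        results = []
        for round_no in (1, 2):
            sample = rng.sample(remaining, max(1, int(sample_frac * len(files))))
            remaining = [f for f in remaining if f not in sample]
            codes = [c for f in sample for c in verify_file(files[f]).values()]
            error_rate = sum(c != "correct" for c in codes) / len(codes)
            results.append((round_no, round(error_rate, 3)))
            if error_rate <= threshold:      # quality assurance rule satisfied
                break
        return results

    # Tiny illustrative dataset with deliberately missing and wrong entries.
    demo = {f"P{i:03d}": {"weight": (70 + i, 70 + i if i % 3 else None),
                          "height": (1.7, 1.7 if i % 4 else 1.8)} for i in range(100)}
    print(audit(demo))
    ```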

  18. The Enigmatic Cornea and Intraocular Lens Calculations: The LXXIII Edward Jackson Memorial Lecture.

    PubMed

    Koch, Douglas D

    2016-11-01

    To review the progress and challenges in obtaining accurate corneal power measurements for intraocular lens (IOL) calculations. Personal perspective, review of literature, case presentations, and personal data. Through literature review findings, case presentations, and data from the author's center, the types of corneal measurement errors that can occur in IOL calculation are categorized and described, along with discussion of future options to improve accuracy. Advances in IOL calculation technology and formulas have greatly increased the accuracy of IOL calculations. Recent reports suggest that over 90% of normal eyes implanted with IOLs may achieve accuracy to within 0.5 diopter (D) of the refractive target. Though errors in estimation of corneal power can cause IOL calculation errors in eyes with normal corneas, greater difficulties in measuring corneal power are encountered in eyes with diseased, scarred, and postsurgical corneas. For these corneas, problematic issues are quantifying anterior corneal power and measuring posterior corneal power and astigmatism. Results in these eyes are improving, but 2 examples illustrate current limitations: (1) spherical accuracy within 0.5 D is achieved in only 70% of eyes with post-refractive surgery corneas, and (2) astigmatism accuracy within 0.5 D is achieved in only 80% of eyes implanted with toric IOLs. Corneal power measurements are a major source of error in IOL calculations. New corneal imaging technology and IOL calculation formulas have improved outcomes and hold the promise of ongoing progress. Copyright © 2016 Elsevier Inc. All rights reserved.

  19. Defining Uncertainty and Error in Planktic Foraminiferal Oxygen Isotope Measurements

    NASA Astrophysics Data System (ADS)

    Fraass, A. J.; Lowery, C.

    2016-12-01

    Foraminifera are the backbone of paleoceanography, and planktic foraminifera are one of the leading tools for reconstructing water column structure. Currently, there are unconstrained variables when dealing with the reproducibility of oxygen isotope measurements. This study presents the first results from a simple model of foraminiferal calcification (Foraminiferal Isotope Reproducibility Model; FIRM), designed to estimate the precision and accuracy of oxygen isotope measurements. FIRM produces synthetic isotope data using parameters including location, depth habitat, season, number of individuals included in measurement, diagenesis, misidentification, size variation, and vital effects. Reproducibility is then tested using Monte Carlo simulations. The results from a series of experiments show that reproducibility is largely controlled by the number of individuals in each measurement, but also strongly a function of local oceanography if the number of individuals is held constant. Parameters like diagenesis or misidentification have an impact on both the precision and the accuracy of the data. Currently FIRM is a tool to estimate isotopic error values best employed in the Holocene. It is also a tool to explore the impact of myriad factors on the fidelity of paleoceanographic records. FIRM was constructed in the open-source computing environment R and is freely available via GitHub. We invite modification and expansion, and have planned inclusions for benthic foram reproducibility and stratigraphic uncertainty.
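
    FIRM itself is an open-source R model; the core idea can be sketched in a few lines (a simplified stand-in, not the published model): draw the δ18O of N individuals from distributions representing seasonal, depth-habitat, and vital-effect scatter, average them as one "measurement", and repeat to see how reproducibility scales with N. All spread terms are assumed values.

    ```python
    import numpy as np

    # Simplified stand-in for a foraminiferal isotope reproducibility experiment.
    # The spread terms (seasonal, depth habitat, vital effect, analytical noise)
    # are illustrative assumptions, not FIRM's calibrated parameters.
    rng = np.random.default_rng(7)

    def synthetic_measurement(n_individuals, n_trials=5000):
        seasonal = rng.normal(0.0, 0.4, (n_trials, n_individuals))    # per mil
        habitat = rng.normal(0.0, 0.2, (n_trials, n_individuals))
        vital = rng.normal(0.0, 0.1, (n_trials, n_individuals))
        analytical = rng.normal(0.0, 0.08, n_trials)                  # mass-spec noise
        d18o_indiv = -1.5 + seasonal + habitat + vital                # "true" value -1.5 per mil
        return d18o_indiv.mean(axis=1) + analytical

    for n in (1, 5, 10, 30):
        reps = synthetic_measurement(n)
        print(f"{n:2d} individuals: 1-sigma reproducibility = {reps.std():.3f} per mil")
    ```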

  20. High altitude current-voltage measurement of GaAs/Ge solar cells

    NASA Astrophysics Data System (ADS)

    Hart, Russell E., Jr.; Brinker, David J.; Emery, Keith A.

    Measurements of high-voltage (Voc of 1.2 V) gallium arsenide on germanium tandem junction solar cells at air mass 0.22 showed that the insolation in the red portion of the solar spectrum is insufficient to obtain high fill factor. On the basis of measurements in the LeRC X-25L solar simulator, these cells were believed to be as efficient as 21.68 percent AM0. Solar simulator spectrum errors in the red end allowed the fill factor to be as high as 78.7 percent. When a similar cell's current-voltage characteristic was measured at high altitude in the NASA Lear Jet Facility, a loss of 15 percentage points in fill factor was observed. This decrease was caused by insufficient current in the germanium bottom cell of the tandem stack.

  1. An Error-Reduction Algorithm to Improve Lidar Turbulence Estimates for Wind Energy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Newman, Jennifer F.; Clifton, Andrew

    2016-08-01

    Currently, cup anemometers on meteorological (met) towers are used to measure wind speeds and turbulence intensity to make decisions about wind turbine class and site suitability. However, as modern turbine hub heights increase and wind energy expands to complex and remote sites, it becomes more difficult and costly to install met towers at potential sites. As a result, remote sensing devices (e.g., lidars) are now commonly used by wind farm managers and researchers to estimate the flow field at heights spanned by a turbine. While lidars can accurately estimate mean wind speeds and wind directions, there is still a large amount of uncertainty surrounding the measurement of turbulence with lidars. This uncertainty in lidar turbulence measurements is one of the key roadblocks that must be overcome in order to replace met towers with lidars for wind energy applications. In this talk, a model for reducing errors in lidar turbulence estimates is presented. Techniques for reducing errors from instrument noise, volume averaging, and variance contamination are combined in the model to produce a corrected value of the turbulence intensity (TI), a commonly used parameter in wind energy. In the next step of the model, machine learning techniques are used to further decrease the error in lidar TI estimates.

  2. The effects of a test-taking strategy intervention for high school students with test anxiety in advanced placement science courses

    NASA Astrophysics Data System (ADS)

    Markus, Doron J.

    Test anxiety is one of the most debilitating and disruptive factors associated with underachievement and failure in schools (Birenbaum, Menucha, Nasser, & Fadia, 1994; Tobias, 1985). Researchers have suggested that interventions that combine multiple test-anxiety reduction techniques are most effective at reducing test anxiety levels (Ergene, 2003). For the current study, involving 62 public high school students enrolled in advanced placement science courses, the researcher designed a multimodal intervention to reduce test anxiety. Analyses were conducted to assess the relationships among test anxiety levels, unit examination scores, and irregular multiple-choice error patterns (error clumping), as well as changes in these measures after the intervention. Results indicate significant, positive relationships between some measures of test anxiety and error clumping, as well as significant, negative relationships between test anxiety levels and student achievement. In addition, results show significant decreases in holistic measures of test anxiety among students with low anxiety levels, as well as decreases in Emotionality subscores of test anxiety among students with high levels of test anxiety. There were no significant changes over time in the Worry subscores of test anxiety. Suggestions for further research include further confirmation of the existence of error clumping, and its causal relationship with test anxiety.

  3. A Simulation Analysis of Errors in the Measurement of Standard Electrochemical Rate Constants from Phase-Selective Impedance Data.

    DTIC Science & Technology

    1987-09-30

    ...of the AC current, including the time dependence at a growing DME, at a given fixed potential either in the presence or the absence of an... the relative error in k_ob(app) is relatively small for k_s(true) ≲ 0.5 cm s⁻¹, and increases rapidly for larger rate constants as k_ob reaches the...

  4. Application of adaptive Kalman filter in vehicle laser Doppler velocimetry

    NASA Astrophysics Data System (ADS)

    Fan, Zhe; Sun, Qiao; Du, Lei; Bai, Jie; Liu, Jingyun

    2018-03-01

    Variations in road conditions and in the motor characteristics of the vehicle can cause large root-mean-square (rms) errors and outliers. Application of a Kalman filter in laser Doppler velocimetry (LDV) is therefore important for improving velocity measurement accuracy. In this paper, the state-space model is built using the current statistical model. A strategy containing two steps is adopted to make the filter adaptive and robust. First, the acceleration variance is adaptively adjusted using the difference between the predicted and measured observations. Second, outliers are identified and the measurement noise variance is adjusted according to the orthogonality property of the innovation, reducing the impact of outliers. Laboratory rotating-table experiments show that the adaptive Kalman filter greatly reduces the rms error from 0.59 cm/s to 0.22 cm/s and eliminates all the outliers. Road experiments compared against a microwave radar show that the rms error of the LDV is 0.0218 m/s, demonstrating that adaptive Kalman filtering is suitable for vehicle speed signal processing.
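
    A minimal adaptive Kalman filter sketch in the spirit of the paper (using a plain constant-acceleration state model rather than the authors' current statistical model; the 3-sigma innovation gate and all noise levels are illustrative): the measurement-noise variance is inflated whenever the innovation is improbably large, which suppresses outliers.

    ```python
    import numpy as np

    # Sketch: 1-D adaptive Kalman filter for a velocity signal with outliers.
    # State = [velocity, acceleration]; constant-acceleration transition model.
    rng = np.random.default_rng(3)
    dt, n = 0.01, 2000
    t = np.arange(n) * dt
    v_true = 10.0 + 2.0 * np.sin(0.5 * t)
    z = v_true + rng.normal(0.0, 0.3, n)
    z[rng.choice(n, 20, replace=False)] += rng.normal(0.0, 5.0, 20)   # inject outliers

    F = np.array([[1.0, dt], [0.0, 1.0]])
    H = np.array([[1.0, 0.0]])
    Q = np.diag([1e-4, 1e-2])
    R_nominal = 0.3 ** 2

    x = np.array([z[0], 0.0])
    P = np.eye(2)
    v_est = np.empty(n)
    for k in range(n):
        # Predict.
        x = F @ x
        P = F @ P @ F.T + Q
        # Innovation and its predicted variance.
        innov = z[k] - (H @ x)[0]
        S = (H @ P @ H.T)[0, 0] + R_nominal
        # Adaptive step: if the innovation falls far outside its expected spread,
        # treat the sample as an outlier by inflating the measurement variance.
        R = R_nominal if innov ** 2 <= 9.0 * S else R_nominal * innov ** 2 / S
        S = (H @ P @ H.T)[0, 0] + R
        # Update.
        K = (P @ H.T).ravel() / S
        x = x + K * innov
        P = (np.eye(2) - np.outer(K, H)) @ P
        v_est[k] = x[0]

    print(f"raw rms error:      {np.sqrt(np.mean((z - v_true) ** 2)):.3f} m/s")
    print(f"filtered rms error: {np.sqrt(np.mean((v_est - v_true) ** 2)):.3f} m/s")
    ```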

  5. A Neurobehavioral Mechanism Linking Behaviorally Inhibited Temperament and Later Adolescent Social Anxiety.

    PubMed

    Buzzell, George A; Troller-Renfree, Sonya V; Barker, Tyson V; Bowman, Lindsay C; Chronis-Tuscano, Andrea; Henderson, Heather A; Kagan, Jerome; Pine, Daniel S; Fox, Nathan A

    2017-12-01

    Behavioral inhibition (BI) is a temperament identified in early childhood that is a risk factor for later social anxiety. However, mechanisms underlying the development of social anxiety remain unclear. To better understand the emergence of social anxiety, longitudinal studies investigating changes at behavioral and neural levels are needed. BI was assessed in the laboratory at 2 and 3 years of age (N = 268). Children returned at 12 years, and an electroencephalogram was recorded while children performed a flanker task under 2 conditions: once while believing they were being observed by peers and once while not being observed. This methodology isolated changes in error monitoring (error-related negativity) and behavior (post-error reaction time slowing) as a function of social context. At 12 years, current social anxiety symptoms and lifetime diagnoses of social anxiety were obtained. Childhood BI prospectively predicted social-specific error-related negativity increases and social anxiety symptoms in adolescence; these symptoms directly related to clinical diagnoses. Serial mediation analysis showed that social error-related negativity changes explained relations between BI and social anxiety symptoms (n = 107) and diagnosis (n = 92), but only insofar as social context also led to increased post-error reaction time slowing (a measure of error preoccupation); this model was not significantly related to generalized anxiety. Results extend prior work on socially induced changes in error monitoring and error preoccupation. These measures could index a neurobehavioral mechanism linking BI to adolescent social anxiety symptoms and diagnosis. This mechanism could relate more strongly to social than to generalized anxiety in the peri-adolescent period. Copyright © 2017 American Academy of Child and Adolescent Psychiatry. All rights reserved.

  6. A modified beam-to-earth transformation to measure short-wavelength internal waves with an acoustic Doppler current profiler

    USGS Publications Warehouse

    Scotti, A.; Butman, B.; Beardsley, R.C.; Alexander, P.S.; Anderson, S.

    2005-01-01

    The algorithm used to transform velocity signals from beam coordinates to earth coordinates in an acoustic Doppler current profiler (ADCP) relies on the assumption that the currents are uniform over the horizontal distance separating the beams. This condition may be violated by (nonlinear) internal waves, which can have wavelengths as small as 100-200 m. In this case, the standard algorithm combines velocities measured at different phases of a wave and produces horizontal velocities that increasingly differ from true velocities with distance from the ADCP. Observations made in Massachusetts Bay show that currents measured with a bottom-mounted upward-looking ADCP during periods when short-wavelength internal waves are present differ significantly from currents measured by point current meters, except very close to the instrument. These periods are flagged with high error velocities by the standard ADCP algorithm. In this paper measurements from the four spatially diverging beams and the backscatter intensity signal are used to calculate the propagation direction and celerity of the internal waves. Once this information is known, a modified beam-to-earth transformation that combines appropriately lagged beam measurements can be used to obtain current estimates in earth coordinates that compare well with pointwise measurements. ?? 2005 American Meteorological Society.

  7. Experimental compliance calibration of the compact fracture toughness specimen

    NASA Technical Reports Server (NTRS)

    Fisher, D. M.; Buzzard, R. J.

    1980-01-01

    Compliances and stress intensity coefficients were determined over crack length to width ratios from 0.1 to 0.8. Displacements were measured at the load points, load line, and crack mouth. Special fixturing was devised to permit accurate measurement of load point displacement. The results are in agreement with the currently used results of boundary collocation analyses. The errors which occur in stress intensity coefficients or specimen energy input determinations made from load line displacement measurements rather than from load point measurements are emphasized.

  8. Calibration of low-temperature ac susceptometers with a copper cylinder standard

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, D.-X.; Skumryev, V.

    2010-02-15

    A high-quality low-temperature ac susceptometer is calibrated by comparing the measured ac susceptibility of a copper cylinder with its eddy-current ac susceptibility accurately calculated. Different from conventional calibration techniques that compare the measured results with the known property of a standard sample at certain fixed temperature T, field amplitude H_m, and frequency f, to get a magnitude correction factor, here, the electromagnetic properties of the copper cylinder are unknown and are determined during the calibration of the ac susceptometer in the entire T, H_m, and f range. It is shown that the maximum magnitude error and the maximum phase error of the susceptometer are less than 0.7% and 0.3 deg., respectively, in the region T=5-300 K and f=111-1111 Hz at H_m=800 A/m, after a magnitude correction by a constant factor as done in a conventional calibration. However, the magnitude and phase errors can reach 2% and 4.3 deg. at 10 000 and 11 Hz, respectively. Since the errors are reproducible, a large portion of them may be further corrected after a calibration, the procedure for which is given. Conceptual discussions concerning the error sources, comparison with other calibration methods, and applications of ac susceptibility techniques are presented.

  9. International Round-Robin Testing of Bulk Thermoelectrics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Hsin; Porter, Wallace D; Bottner, Harold

    2011-11-01

    Two international round-robin studies were conducted on transport property measurements of bulk thermoelectric materials. The studies uncovered current measurement problems. In order to obtain the ZT of a material, four separate transport measurements must be taken. The round-robin study showed that, among the four properties, the Seebeck coefficient is the one that can be measured consistently. Electrical resistivity has ±4-9% scatter. Thermal diffusivity has a similar ±5-10% scatter. The reliability of the above three properties can be improved by standardizing test procedures and enforcing system calibrations. The worst problem was found in specific heat measurements using DSC. The probability of making a measurement error is great because three separate runs must be taken to determine Cp, and the baseline shift is always an issue for commercial DSC instruments. It is suggested that the Dulong-Petit limit always be used as a guideline for Cp. Procedures have been developed to eliminate operator and system errors. The IEA-AMT annex is developing standard procedures for transport property testing.

  10. Comparison of three rf plasma impedance monitors on a high phase angle planar inductively coupled plasma source

    NASA Astrophysics Data System (ADS)

    Uchiyama, H.; Watanabe, M.; Shaw, D. M.; Bahia, J. E.; Collins, G. J.

    1999-10-01

    Accurate measurement of plasma source impedance is important for verification of plasma circuit models, as well as for plasma process characterization and endpoint detection. Most impedance measurement techniques depend in some manner on the cosine of the phase angle to determine the impedance of the plasma load. Inductively coupled plasmas are generally highly inductive, with the phase angle between the applied rf voltage and the rf current in the range of 88 to near 90 degrees. A small measurement error in this phase angle range results in a large error in the calculated cosine of the angle, introducing large impedance measurement variations. In this work, we have compared the measured impedance of a planar inductively coupled plasma using three commercial plasma impedance monitors (ENI V/I probe, Advanced Energy RFZ60 and Advanced Energy Z-Scan). The plasma impedance is independently verified using a specially designed match network and a calibrated load, representing the plasma, to provide a measurement standard.

  11. ON ESTIMATING FORCE-FREENESS BASED ON OBSERVED MAGNETOGRAMS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, X. M.; Zhang, M.; Su, J. T., E-mail: xmzhang@nao.cas.cn

    It is a common practice in the solar physics community to test whether or not measured photospheric or chromospheric vector magnetograms are force-free, using the Maxwell stress as a measure. Some previous studies have suggested that magnetic fields of active regions in the solar chromosphere are close to being force-free whereas there is no consistency among previous studies on whether magnetic fields of active regions in the solar photosphere are force-free or not. Here we use three kinds of representative magnetic fields (analytical force-free solutions, modeled solar-like force-free fields, and observed non-force-free fields) to discuss how measurement issues such as limited field of view (FOV), instrument sensitivity, and measurement error could affect the estimation of force-freeness based on observed magnetograms. Unlike previous studies that focus on discussing the effect of limited FOV or instrument sensitivity, our calculation shows that just measurement error alone can significantly influence the results of estimates of force-freeness, due to the fact that measurement errors in horizontal magnetic fields are usually ten times larger than those in vertical fields. This property of measurement errors, interacting with the particular form of a formula for estimating force-freeness, would result in wrong judgments of the force-freeness: a truly force-free field may be mistakenly estimated as being non-force-free and a truly non-force-free field may be estimated as being force-free. Our analysis calls for caution when interpreting estimates of force-freeness based on measured magnetograms, and also suggests that the true photospheric magnetic field may be further away from being force-free than it currently appears to be.

  12. Rewards and Supports

    ERIC Educational Resources Information Center

    Hershberg, Theodore; Robertson-Kraft, Claire

    2010-01-01

    Pay-for-performance systems in public schools have long been burdened with controversy. Critics of performance pay systems contend that because teachers' impact cannot be measured without error, it is impossible to create fair and accurate systems for evaluating and rewarding performance. By this standard, however, current practice fails on both…

  13. Discharge measurements at gaging stations

    USGS Publications Warehouse

    Turnipseed, D. Phil; Sauer, Vernon B.

    2010-01-01

    The techniques and standards for making discharge measurements at streamflow gaging stations are described in this publication. The vertical axis rotating-element current meter, principally the Price current meter, has been traditionally used for most measurements of discharge; however, advancements in acoustic technology have led to important developments in the use of acoustic Doppler current profilers, acoustic Doppler velocimeters, and other emerging technologies for the measurement of discharge. These new instruments, based on acoustic Doppler theory, have the advantage of no moving parts, and in the case of the acoustic Doppler current profiler, quickly and easily provide three-dimensional stream-velocity profile data through much of the vertical water column. For much of the discussion of acoustic Doppler current profiler moving-boat methodology, the reader is referred to U.S. Geological Survey Techniques and Methods 3-A22 (Mueller and Wagner, 2009). Personal digital assistants (PDAs), electronic field notebooks, and other personal computers provide fast and efficient data-collection methods that are less error-prone than traditional hand methods. The use of portable weirs and flumes, floats, volumetric tanks, indirect methods, and tracers in measuring discharge is briefly described.

  14. Carbon dioxide emission tallies for 210 U.S. coal-fired power plants: a comparison of two accounting methods.

    PubMed

    Quick, Jeffrey C

    2014-01-01

    Annual CO2 emission tallies for 210 coal-fired power plants during 2009 were more accurately calculated from fuel consumption records reported by the U.S. Energy Information Administration (EIA) than measurements from Continuous Emissions Monitoring Systems (CEMS) reported by the U.S. Environmental Protection Agency. Results from these accounting methods for individual plants vary by ±10.8%. Although the differences systematically vary with the method used to certify flue-gas flow instruments in CEMS, additional sources of CEMS measurement error remain to be identified. Limitations of the EIA fuel consumption data are also discussed. Consideration of weighing, sample collection, laboratory analysis, emission factor, and stock adjustment errors showed that the minimum error for CO2 emissions calculated from the fuel consumption data ranged from ±1.3% to ±7.2% with a plant average of ±1.6%. This error might be reduced by 50% if the carbon content of coal delivered to U.S. power plants were reported. Potentially, this study might inform efforts to regulate CO2 emissions (such as CO2 performance standards or taxes) and more immediately, the U.S. Greenhouse Gas Reporting Rule where large coal-fired power plants currently use CEMS to measure CO2 emissions. Moreover, if, as suggested here, the flue-gas flow measurement limits the accuracy of CO2 emission tallies from CEMS, then the accuracy of other emission tallies from CEMS (such as SO2, NOx, and Hg) would be similarly affected. Consequently, improved flue gas flow measurements are needed to increase the reliability of emission measurements from CEMS.
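
    The fuel-based accounting can be sketched directly (the plant numbers are made up, and the uncertainty terms are illustrative stand-ins for the weighing, sampling, analysis, and stock-adjustment errors considered in the paper):

    ```python
    import math

    # Sketch: CO2 tally from fuel consumption, with relative error terms combined
    # in quadrature. All values are illustrative, not from any specific plant.
    coal_burned_t = 2.5e6          # tonnes of coal burned in the year (assumed)
    carbon_fraction = 0.62         # tonne C per tonne coal, as received (assumed)
    CO2_PER_C = 44.0 / 12.0        # molecular-weight ratio CO2/C

    co2_tonnes = coal_burned_t * carbon_fraction * CO2_PER_C

    # Assumed relative uncertainties (as fractions) for the main error sources.
    errors = {"weighing": 0.005, "sampling": 0.010, "lab analysis": 0.005,
              "stock adjustment": 0.008}
    rel_error = math.sqrt(sum(e ** 2 for e in errors.values()))

    print(f"CO2 emissions: {co2_tonnes / 1e6:.2f} Mt +/- {100 * rel_error:.1f}%")
    ```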

  15. Accuracy of a pulse-coherent acoustic Doppler profiler in a wave-dominated flow

    USGS Publications Warehouse

    Lacy, J.R.; Sherwood, C.R.

    2004-01-01

    The accuracy of velocities measured by a pulse-coherent acoustic Doppler profiler (PCADP) in the bottom boundary layer of a wave-dominated inner-shelf environment is evaluated. The downward-looking PCADP measured velocities in eight 10-cm cells at 1 Hz. Velocities measured by the PCADP are compared to those measured by an acoustic Doppler velocimeter for wave orbital velocities up to 95 cm s⁻¹ and currents up to 40 cm s⁻¹. An algorithm for correcting ambiguity errors using the resolution velocities was developed. Instrument bias, measured as the average error in burst mean speed, is -0.4 cm s⁻¹ (standard deviation = 0.8). The accuracy (root-mean-square error) of instantaneous velocities has a mean of 8.6 cm s⁻¹ (standard deviation = 6.5) for eastward velocities (the predominant direction of waves), 6.5 cm s⁻¹ (standard deviation = 4.4) for northward velocities, and 2.4 cm s⁻¹ (standard deviation = 1.6) for vertical velocities. Both burst mean and root-mean-square errors are greater for bursts with u_b ≥ 50 cm s⁻¹. Profiles of burst mean speeds from the bottom five cells were fit to logarithmic curves: 92% of bursts with mean speed ≥ 5 cm s⁻¹ have a correlation coefficient R² > 0.96. In cells close to the transducer, instantaneous velocities are noisy, burst mean velocities are biased low, and bottom orbital velocities are biased high. With adequate blanking distances for both the profile and resolution velocities, the PCADP provides sufficient accuracy to measure velocities in the bottom boundary layer under moderately energetic inner-shelf conditions.
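
    The ambiguity-correction idea can be sketched as a generic phase-unwrapping step that uses the coarse resolution velocity (not the authors' exact algorithm; the ambiguity velocity and test values are assumptions):

    ```python
    import numpy as np

    # Sketch: correct pulse-coherent ambiguity wraps using a coarse "resolution"
    # velocity. v_ambig is the ambiguity velocity of the precise measurement;
    # values and the rounding rule are generic assumptions, not the paper's code.
    def unwrap_velocity(v_precise, v_resolution, v_ambig):
        n_wraps = np.round((v_resolution - v_precise) / (2.0 * v_ambig))
        return v_precise + 2.0 * v_ambig * n_wraps

    v_ambig = 0.30                                   # m/s, precise-mode ambiguity
    v_true = np.array([0.05, 0.45, -0.70, 0.95])     # m/s
    v_precise = ((v_true + v_ambig) % (2 * v_ambig)) - v_ambig        # wrapped
    v_resolution = v_true + np.random.default_rng(0).normal(0, 0.05, v_true.size)

    print(unwrap_velocity(v_precise, v_resolution, v_ambig))          # ~= v_true
    ```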

  16. A Criterion to Control Nonlinear Error in the Mixed-Mode Bending Test

    NASA Technical Reports Server (NTRS)

    Reeder, James R.

    2002-01-01

    The mixed-mode bending test has been widely used to measure delamination toughness and was recently standardized by ASTM as Standard Test Method D6671-01. This simple test is a combination of the standard Mode I (opening) test and a Mode II (sliding) test. This test uses a unidirectional composite test specimen with an artificial delamination subjected to bending loads to characterize when a delamination will extend. When the displacements become large, the linear theory used to analyze the results of the test yields errors in the calculated toughness values. The current standard places no limit on the specimen loading, and therefore test data that are significantly in error can be created using the standard. A method of limiting the error that can be incurred in the calculated toughness values is needed. In this paper, nonlinear models of the MMB test are refined. One of the nonlinear models is then used to develop a simple criterion for prescribing conditions where the nonlinear error will remain below 5%.

  17. Real-time and on-demand buoy observation system for tsunami and crustal displacement

    NASA Astrophysics Data System (ADS)

    Takahashi, N.; Imai, K.; Ishihara, Y.; Fukuda, T.; Ochi, H.; Suzuki, K.; Kido, M.; Ohta, Y.; Imano, M.; Hino, R.

    2017-12-01

    We have developed a real-time and on-demand buoy observation system for tsunami and crustal displacement. Observation of crustal displacement is indispensable for understanding changes in the stress field related to future large earthquakes. At present, such observations are carried out using a vessel only a few times per year. When a large earthquake occurs, however, dense or on-demand observation of the crustal displacement is needed to grasp the nature of the slow slip after the rupture. We therefore constructed a buoy system consisting of a buoy station, a wire-end station, a seafloor unit, and acoustic transponders for crustal displacement; a pressure sensor was installed on the seafloor unit and a GNSS system on the buoy, in addition to measurement of the distance between the buoy and the seafloor acoustic transponders. Tsunami is evaluated using the GNSS data and the pressure data sent from the seafloor. The observation error of the GNSS is about 10 cm. The crustal displacement is estimated using the pressure sensor for the vertical component and acoustic measurement for the horizontal component. With the current slack ratio of 1.58, the observation error for the crustal displacement measurement is about 10 cm. We repeated sea trials three times and confirmed data acquisition with high data quality and mooring without dragging the anchor in a strong sea current with a speed of 5.5 knots. The issues we currently face are noise in the acoustic data transmission, data transmission between the buoy and wire-end stations, electrical consumption at the buoy station, and the large observation error in the crustal displacement due to the large slack ratio. We are considering changing the acoustic transmission of the pressure data, replacing the GNSS data logger that has a large electrical consumption, reducing the slack ratio, and finding a method to reduce the drag of the buoy in the sea water. In this presentation, we introduce the current status of the technical development, together with tsunami waveforms recorded on our seafloor unit from recent tsunamigenic earthquakes as a data quality check.

  18. Use of graph theory measures to identify errors in record linkage.

    PubMed

    Randall, Sean M; Boyd, James H; Ferrante, Anna M; Bauer, Jacqueline K; Semmens, James B

    2014-07-01

    Ensuring high linkage quality is important in many record linkage applications. Current methods for ensuring quality are manual and resource intensive. This paper seeks to determine the effectiveness of graph theory techniques in identifying record linkage errors. A range of graph theory techniques was applied to two linked datasets, with known truth sets. The ability of graph theory techniques to identify groups containing errors was compared to a widely used threshold setting technique. This methodology shows promise; however, further investigations into graph theory techniques are required. The development of more efficient and effective methods of improving linkage quality will result in higher quality datasets that can be delivered to researchers in shorter timeframes. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
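
    One simple instance of the idea (our own illustration, not the specific techniques evaluated in the paper): treat linked record pairs as graph edges, extract connected components, and flag large or sparsely connected groups, which are more likely to contain linkage errors.

    ```python
    from collections import defaultdict

    # Sketch: flag suspicious groups in record-linkage output using graph measures.
    # `links` are record-pair edges produced by a linkage run (illustrative data).
    links = [("a1", "a2"), ("a2", "a3"), ("a1", "a3"),      # dense triangle: plausible
             ("b1", "b2"), ("b2", "b3"), ("b3", "b4"),      # long chain: suspicious
             ("c1", "c2")]

    def connected_components(edges):
        adj = defaultdict(set)
        for u, v in edges:
            adj[u].add(v)
            adj[v].add(u)
        seen, comps = set(), []
        for start in adj:
            if start in seen:
                continue
            stack, comp = [start], set()
            while stack:
                node = stack.pop()
                if node in comp:
                    continue
                comp.add(node)
                stack.extend(adj[node] - comp)
            seen |= comp
            comps.append(comp)
        return comps

    def density(comp, edges):
        n = len(comp)
        e = sum(1 for u, v in edges if u in comp and v in comp)
        return 1.0 if n < 2 else 2.0 * e / (n * (n - 1))

    for comp in connected_components(links):
        d = density(comp, links)
        flag = "REVIEW" if len(comp) > 3 or d < 0.7 else "ok"
        print(f"{sorted(comp)} density={d:.2f} -> {flag}")
    ```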

  19. The mean sea surface height and geoid along the Geosat subtrack from Bermuda to Cape Cod

    NASA Astrophysics Data System (ADS)

    Kelly, Kathryn A.; Joyce, Terrence M.; Schubert, David M.; Caruso, Michael J.

    1991-07-01

    Measurements of near-surface velocity and concurrent sea level along an ascending Geosat subtrack were used to estimate the mean sea surface height and the Earth's gravitational geoid. Velocity measurements were made on three traverses of a Geosat subtrack within 10 days, using an acoustic Doppler current profiler (ADCP). A small bias in the ADCP velocity was removed by considering a mass balance for two pairs of triangles for which expendable bathythermograph measurements were also made. Because of the large curvature of the Gulf Stream, the gradient wind balance was used to estimate the cross-track component of geostrophic velocity from the ADCP vectors; this component was then integrated to obtain the sea surface height profile. The mean sea surface height was estimated as the difference between the instantaneous sea surface height from ADCP and the Geosat residual sea level, with mesoscale errors reduced by low-pass filtering. The error estimates were divided into a bias, tilt, and mesoscale residual; the bias was ignored because profiles were only determined within a constant of integration. The calculated mean sea surface height estimate agreed with an independent estimate of the mean sea surface height from Geosat, obtained by modeling the Gulf Stream as a Gaussian jet, within the expected errors in the estimates: the tilt error was 0.10 m, and the mesoscale error was 0.044 m. To minimize mesoscale errors in the estimate, the alongtrack geoid estimate was computed as the difference between the mean sea level from the Geosat Exact Repeat Mission and an estimate of the mean sea surface height, rather than as the difference between instantaneous profiles of sea level and sea surface height. In the critical region near the Gulf Stream the estimated error reduction using this method was about 0.07 m. Differences between the geoid estimate and a gravimetric geoid were not within the expected errors: the rms mesoscale difference was 0.24 m rms.

  20. Evaluating the prevalence and impact of examiner errors on the Wechsler scales of intelligence: A meta-analysis.

    PubMed

    Styck, Kara M; Walsh, Shana M

    2016-01-01

    The purpose of the present investigation was to conduct a meta-analysis of the literature on examiner errors for the Wechsler scales of intelligence. Results indicate that a mean of 99.7% of protocols contained at least 1 examiner error when studies that included a failure to record examinee responses as an error were combined and a mean of 41.2% of protocols contained at least 1 examiner error when studies that ignored errors of omission were combined. Furthermore, graduate student examiners were significantly more likely to make at least 1 error on Wechsler intelligence test protocols than psychologists. However, psychologists made significantly more errors per protocol than graduate student examiners regardless of the inclusion or exclusion of failure to record examinee responses as errors. On average, 73.1% of Full-Scale IQ (FSIQ) scores changed as a result of examiner errors, whereas 15.8%-77.3% of scores on the Verbal Comprehension Index (VCI), Perceptual Reasoning Index (PRI), Working Memory Index (WMI), and Processing Speed Index changed as a result of examiner errors. In addition, results suggest that examiners tend to overestimate FSIQ scores and underestimate VCI scores. However, no strong pattern emerged for the PRI and WMI. It can be concluded that examiner errors occur frequently and impact index and FSIQ scores. Consequently, current estimates for the standard error of measurement of popular IQ tests may not adequately capture the variance due to the examiner. (c) 2016 APA, all rights reserved).

  1. New methodology for adjusting rotating shadowband irradiometer measurements

    NASA Astrophysics Data System (ADS)

    Vignola, Frank; Peterson, Josh; Wilbert, Stefan; Blanc, Philippe; Geuder, Norbert; Kern, Chris

    2017-06-01

    A new method is developed for correcting systematic errors found in rotating shadowband irradiometer measurements. Since the responsivity of the photodiode-based pyranometers typically utilized for RSI sensors is dependent upon the wavelength of the incident radiation, and the spectral distribution of the incident radiation is different for the Direct Normal Irradiance and the Diffuse Horizontal Irradiance, spectral effects have to be considered. These cause the most problematic errors when applying currently available correction functions to RSI measurements. Hence, direct normal and diffuse contributions are analyzed and modeled separately. An additional advantage of this methodology is that it provides a prescription for how to modify the adjustment algorithms to locations with different atmospheric characteristics from the location where the calibration and adjustment algorithms were developed. A summary of results and areas for future efforts are then discussed.
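
    The component-wise approach can be sketched as follows (a schematic, with made-up correction functions standing in for the published spectral and calibration adjustments): correct the direct and diffuse components separately, then recombine them into global horizontal irradiance.

    ```python
    import numpy as np

    # Sketch: adjust RSI direct and diffuse components separately, then recombine.
    # The correction factors below are placeholders, not the published functions.
    def correct_dni(dni_raw, airmass):
        # Placeholder spectral correction: the direct beam reddens with airmass.
        return dni_raw * (1.0 + 0.012 * (airmass - 1.5))

    def correct_dhi(dhi_raw, clearness):
        # Placeholder spectral correction: diffuse light skews blue under clear skies.
        return dhi_raw * (1.0 - 0.03 * (1.0 - clearness))

    def corrected_ghi(dni_raw, dhi_raw, sza_deg, airmass, clearness):
        dni = correct_dni(dni_raw, airmass)
        dhi = correct_dhi(dhi_raw, clearness)
        return dni * np.cos(np.radians(sza_deg)) + dhi

    print(corrected_ghi(dni_raw=820.0, dhi_raw=110.0, sza_deg=35.0,
                        airmass=1.22, clearness=0.75))
    ```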

  2. Effect of DM Actuator Errors on the WFIRST/AFTA Coronagraph Contrast Performance

    NASA Technical Reports Server (NTRS)

    Sidick, Erkin; Shi, Fang

    2015-01-01

    The WFIRST/AFTA 2.4 m space telescope currently under study includes a stellar coronagraph for the imaging and the spectral characterization of extrasolar planets. The coronagraph employs two sequential deformable mirrors (DMs) to compensate for phase and amplitude errors in creating dark holes. DMs are critical elements in high contrast coronagraphs, requiring precision and stability measured in picometers to enable detection of Earth-like exoplanets. Working with a low-order wavefront sensor, the DM that is conjugate to a pupil can also be used to correct low-order wavefront drift during a scientific observation. However, not all actuators in a DM have the same gain. When using such a DM in a low-order wavefront sensing and control subsystem, the actuator gain errors introduce high-spatial frequency errors to the DM surface and thus worsen the contrast performance of the coronagraph. We have investigated the effects of actuator gain errors and the actuator command digitization errors on the contrast performance of the coronagraph through modeling and simulations, and will present our results in this paper.
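
    A toy Monte Carlo in the spirit of the study (not the coronagraph model itself; the actuator count, gain-error level, quantization step, and commanded surface are assumptions): apply per-actuator gain errors and command quantization to a smooth low-order correction and examine the residual surface error left on the DM.

    ```python
    import numpy as np

    # Toy model: residual DM surface error from actuator gain errors and command
    # quantization. Array size, stroke, and error levels are illustrative.
    rng = np.random.default_rng(11)
    n_act = 48                          # actuators per side (assumed)
    lsb = 10e-12                        # command quantization step, m (assumed)
    gain_sigma = 0.05                   # 5% rms per-actuator gain error (assumed)

    # Smooth low-order command (e.g., correcting a slow wavefront drift), in meters.
    y, x = np.mgrid[0:n_act, 0:n_act] / (n_act - 1)
    command = 2e-9 * (x ** 2 - y ** 2) + 1e-9 * x * y

    gains = 1.0 + rng.normal(0.0, gain_sigma, (n_act, n_act))
    quantized = np.round(command / lsb) * lsb
    actual_surface = gains * quantized

    residual = actual_surface - command          # error imprinted on the DM surface
    print(f"rms residual surface error: {np.std(residual) * 1e12:.1f} pm")
    ```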

  3. Development and content validation of performance assessments for endoscopic third ventriculostomy.

    PubMed

    Breimer, Gerben E; Haji, Faizal A; Hoving, Eelco W; Drake, James M

    2015-08-01

    This study aims to develop and establish the content validity of multiple expert rating instruments to assess performance in endoscopic third ventriculostomy (ETV), collectively called the Neuro-Endoscopic Ventriculostomy Assessment Tool (NEVAT). The important aspects of ETV were identified through a review of current literature, ETV videos, and discussion with neurosurgeons, fellows, and residents. Three assessment measures were subsequently developed: a procedure-specific checklist (CL), a CL of surgical errors, and a global rating scale (GRS). Neurosurgeons from various countries, all identified as experts in ETV, were then invited to participate in a modified Delphi survey to establish the content validity of these instruments. In each Delphi round, experts rated their agreement with the inclusion of each procedural step, error, and GRS item in the respective instruments on a 5-point Likert scale. Seventeen experts agreed to participate in the study and completed all Delphi rounds. After item generation, a total of 27 procedural CL items, 26 error CL items, and 9 GRS items were posed to Delphi panelists for rating. An additional 17 procedural CL items, 12 error CL items, and 1 GRS item were added by panelists. After three rounds, strong consensus (>80% agreement) was achieved on 35 procedural CL items, 29 error CL items, and 10 GRS items. Moderate consensus (50-80% agreement) was achieved on an additional 7 procedural CL items and 1 error CL item. The final procedural and error checklist contained 42 and 30 items, respectively (divided into setup, exposure, navigation, ventriculostomy, and closure). The final GRS contained 10 items. We have established the content validity of three ETV assessment measures by iterative consensus of an international expert panel. Each measure provides unique assessment information and thus can be used individually or in combination, depending on the characteristics of the learner and the purpose of the assessment. These instruments must now be evaluated in both the simulated and operative settings, to determine their construct validity and reliability. Ultimately, the measures contained in the NEVAT may prove suitable for formative assessment during ETV training and potentially as summative assessment measures during certification.

  4. The predicted CLARREO sampling error of the inter-annual SW variability

    NASA Astrophysics Data System (ADS)

    Doelling, D. R.; Keyes, D. F.; Nguyen, C.; Macdonnell, D.; Young, D. F.

    2009-12-01

    The NRC Decadal Survey has called for SI traceability of long-term hyper-spectral flux measurements in order to monitor climate variability. This mission is called the Climate Absolute Radiance and Refractivity Observatory (CLARREO) and is currently defining its mission requirements. The requirements are focused on the ability to measure decadal change of key climate variables at very high accuracy. The accuracy goals are set using anticipated climate change magnitudes, but the accuracy achieved for any given climate variable must take into account the temporal and spatial sampling errors based on satellite orbits and calibration accuracy. The time period to detect a significant trend in the CLARREO record depends on the magnitude of the sampling and calibration errors relative to the current inter-annual variability. The largest uncertainty in climate feedbacks remains the effect of changing clouds on planetary energy balance. Some regions on earth have strong diurnal cycles, such as maritime stratus and afternoon land convection; other regions have strong seasonal cycles, such as the monsoon. However, when monitoring inter-annual variability these cycles are only important if the strength of these cycles varies on decadal time scales. This study will attempt to determine the best satellite constellations to reduce sampling error and to compare the error with the current inter-annual variability signal to ensure the viability of the mission. The study will incorporate Clouds and the Earth's Radiant Energy System (CERES) (Monthly TOA/Surface Averages) SRBAVG product TOA LW and SW climate quality fluxes. The fluxes are derived by combining Terra (10:30 local equator crossing time) CERES fluxes with 3-hourly broadband fluxes estimated from five geostationary satellites, which are normalized using the CERES fluxes, to complete the diurnal cycle. These fluxes were saved hourly during processing and considered the truth dataset. 90°, 83° and 74° inclination precessionary orbits as well as sun-synchronous orbits will be evaluated. This study will focus on the SW radiance, since these low earth orbits are only in daylight for half the orbit. The precessionary orbits were designed to cycle through all solar zenith angles over the course of a year. The inter-annual variability sampling error will be stratified globally/zonally and annually/seasonally and compared with the corresponding truth anomalies.

  5. MEASURING LENSING MAGNIFICATION OF QUASARS BY LARGE SCALE STRUCTURE USING THE VARIABILITY-LUMINOSITY RELATION

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bauer, Anne H.; Seitz, Stella; Jerke, Jonathan

    2011-05-10

    We introduce a technique to measure gravitational lensing magnification using the variability of type I quasars. Quasars' variability amplitudes and luminosities are tightly correlated, on average. Magnification due to gravitational lensing increases the quasars' apparent luminosity, while leaving the variability amplitude unchanged. Therefore, the mean magnification of an ensemble of quasars can be measured through the mean shift in the variability-luminosity relation. As a proof of principle, we use this technique to measure the magnification of quasars spectroscopically identified in the Sloan Digital Sky Survey (SDSS), due to gravitational lensing by galaxy clusters in the SDSS MaxBCG catalog. The Palomar-QUEST Variability Survey, reduced using the DeepSky pipeline, provides variability data for the sources. We measure the average quasar magnification as a function of scaled distance (r/R200) from the nearest cluster; our measurements are consistent with expectations assuming Navarro-Frenk-White cluster profiles, particularly after accounting for the known uncertainty in the clusters' centers. Variability-based lensing measurements are a valuable complement to shape-based techniques because their systematic errors are very different, and also because the variability measurements are amenable to photometric errors of a few percent and to depths seen in current wide-field surveys. Given the volume of data expected from current and upcoming surveys, this new technique has the potential to be competitive with weak lensing shear measurements of large-scale structure.
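
    The core of the method is the shift of the variability-luminosity relation: at a fixed variability amplitude, lensing makes the ensemble appear brighter. The sketch below turns that mean magnitude offset into a mean magnification; vl_relation is a hypothetical calibration function returning the unlensed apparent magnitude expected for a given variability amplitude.

    import numpy as np

    def mean_magnification(apparent_mag, variability_amp, vl_relation):
        """Estimate the mean lensing magnification of an ensemble of quasars from
        the mean shift of the variability-luminosity relation."""
        delta_m = np.mean(apparent_mag - vl_relation(variability_amp))
        return 10 ** (-0.4 * delta_m)   # brighter than expected => magnification > 1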

  6. Design of temperature detection device for drum of belt conveyor

    NASA Astrophysics Data System (ADS)

    Zhang, Li; He, Rongjun

    2018-03-01

    To address the difficult wiring and large measurement errors of traditional temperature detection methods for belt conveyor drums, a temperature detection device for the drum of a belt conveyor based on Radio Frequency (RF) communication is designed. In the device, the detection terminal collects temperature data through the tire pressure sensor chip SP370, which integrates temperature detection and RF emission. The receiving terminal, which is composed of an RF receiver chip and a microcontroller, receives the temperature data and sends it to the Controller Area Network (CAN) bus. The test results show that the device meets the requirements of field application with a measurement error of ±3.73 °C, and a single button-cell battery can power the detection terminal continuously for over 1.5 years.

  7. Dynamic Characterization of Galfenol (Fe81.6Ga18.4)

    NASA Technical Reports Server (NTRS)

    Scheidler, Justin J.; Asnani, Vivake M.; Dapino, Marcelo J.

    2016-01-01

    Galfenol has the potential to transform the smart materials industry by allowing for the development of multifunctional, load-bearing devices. One of the primary technical challenges faced by this development is the very limited experimental data on Galfenol's frequency-dependent response to dynamic stress, which is critically important for the design of such devices. This report details a novel and precise characterization of the constitutive behavior of polycrystalline Galfenol (Fe81.6Ga18.4) under quasi-static (1 Hz) and dynamic (4 to 1000 Hz) stress loadings. Mechanical loads are applied using a high-frequency load frame. Quasi-static minor and major hysteresis loop measurements of magnetic flux density and strain are presented for constant electromagnet currents (0 to 1.1 A) and constant magnetic fields 0 to 14 kA/m (0 to 180 Oe). The dynamic stress amplitude for minor and major loops is 2.88 and 31.4 MPa (418 and 4550 psi), respectively. Quasi-static material properties closely match published values for similar Galfenol materials. Quasi-static actuation responses are also measured and compared to quasi-static sensing responses; the high degree of reversibility seen in the comparison is consistent with published measurements and modeling results. Dynamic major and minor loops are measured for dynamic stresses up to 31 MPa (4496 psi) and 1 kHz, and the bias condition resulting in maximum, quasi-static sensitivity. Eddy current effects are quantified by considering solid and laminated Galfenol rods. Three key sources of error in the dynamic measurements are accounted for: (1) electromagnetic noise in strain signals due to Galfenol's magnetic response, (2) error in load signals due to the inertial force of fixturing, and (3) phase misalignment between signals due to conditioning electronics. For dynamic characterization, strain error is kept below 1.2 percent of full scale by wiring two collocated gauges in series (noise cancellation) and through leadwire weaving. Inertial force error is kept below 0.41 percent by measuring the dynamic force in the specimen using a nearly collocated piezoelectric load washer. The phase response of all conditioning electronics is explicitly measured and corrected for. In general, as frequency is increased, the sensing response becomes more linear because of an increase in eddy currents. As frequency increases above approximately 100 Hz, the elbow in the strain-versus-stress response disappears as the active (soft) regime stiffens toward the passive (hard) regime. Under constant-field conditions, the loss factors of the solid rod peak between 200 and 600 Hz, rather than exhibiting a monotonic increase. Compared to the solid rod, the laminated rod exhibits much slower increases in hysteresis with frequency, and its quasi-static behavior extends to higher frequencies. The elastic modulus of the laminated rod decreases between 100 and 300 Hz; this trend is currently unexplained.

  8. Opposite effects of cannabis and cocaine on performance monitoring.

    PubMed

    Spronk, Desirée B; Verkes, Robbert J; Cools, Roshan; Franke, Barbara; Van Wel, Janelle H P; Ramaekers, Johannes G; De Bruijn, Ellen R A

    2016-07-01

    Drug use is often associated with risky and unsafe behavior. However, the acute effects of cocaine and cannabis on performance monitoring processes have not been systematically investigated. The aim of the current study was to investigate how administration of these drugs alters performance monitoring processes, as reflected in the error-related negativity (ERN), the error positivity (Pe) and post-error slowing. A double-blind placebo-controlled randomized three-way crossover design was used. Sixty-one subjects completed a Flanker task while EEG measures were obtained. Subjects showed diminished ERN and Pe amplitudes after cannabis administration and increased ERN and Pe amplitudes after administration of cocaine. Neither drug affected post-error slowing. These results demonstrate diametrically opposing effects on the early and late phases of performance monitoring of the two most commonly used illicit drugs of abuse. Conversely, the behavioral adaptation phase of performance monitoring remained unaltered by the drugs. Copyright © 2016. Published by Elsevier B.V.

  9. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Petiteau, Antoine; Babak, Stanislav; Sesana, Alberto

    Gravitational wave (GW) signals from coalescing massive black hole (MBH) binaries could be used as standard sirens to measure cosmological parameters. The future space-based GW observatory Laser Interferometer Space Antenna (LISA) will detect up to a hundred such events, providing very accurate measurements of their luminosity distances. To constrain the cosmological parameters, we also need to measure the redshift of the galaxy (or cluster of galaxies) hosting the merger. This requires the identification of a distinctive electromagnetic event associated with the binary coalescence. However, putative electromagnetic signatures may be too weak to be observed. Instead, we study here the possibility of constraining the cosmological parameters by enforcing statistical consistency between all the possible hosts detected within the measurement error box of a few dozen low-redshift (z < 3) events. We construct MBH populations using merger tree realizations of the dark matter hierarchy in a ΛCDM universe, and we use data from the Millennium simulation to model the galaxy distribution in the LISA error box. We show that, assuming that all the other cosmological parameters are known, the parameter w describing the dark energy equation of state can be constrained to a 4%-8% level (2σ error), competitive with current uncertainties obtained by type Ia supernovae measurements, providing an independent test of our cosmological model.
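
    The measurement at the heart of this test is the comparison of a GW luminosity distance with candidate host redshifts through the distance-redshift relation. A minimal sketch of that relation for a flat universe with a constant dark-energy equation of state w (parameter values below are illustrative, not the paper's):

    import numpy as np
    from scipy.integrate import quad

    def luminosity_distance(z, h0=70.0, omega_m=0.3, w=-1.0):
        """Luminosity distance (Mpc) in a flat universe with constant dark-energy
        equation of state w; the quantity compared against GW 'standard siren'
        distances for each candidate host redshift."""
        c = 299792.458  # speed of light, km/s
        ez = lambda zz: np.sqrt(omega_m * (1 + zz) ** 3 +
                                (1 - omega_m) * (1 + zz) ** (3 * (1 + w)))
        comoving, _ = quad(lambda zz: 1.0 / ez(zz), 0.0, z)
        return (1 + z) * (c / h0) * comoving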

  10. Electric vehicle power train instrumentation: Some constraints and considerations

    NASA Technical Reports Server (NTRS)

    Triner, J. E.; Hansen, I. G.

    1977-01-01

    The application of pulse modulation control (choppers) to dc motors creates unique instrumentation problems. In particular, the high harmonic components contained in the current waveforms require frequency response accommodations not normally considered in dc instrumentation. In addition to current sensing, accurate power measurement requires not only adequate frequency response but must also address phase errors caused by the finite bandwidths and component characteristics involved. The implications of these problems are assessed.
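
    A toy calculation of the phase-error sensitivity described above: the relative error in average power when the current channel picks up an extra, frequency-dependent phase shift at each harmonic. All amplitudes and phase values below are made up for illustration.

    import numpy as np

    def power_error_from_phase(v_rms, i_rms, phi, dphi):
        """Relative error in measured average power when each harmonic's current
        channel has an extra phase shift dphi (radians) from finite instrument
        bandwidth. v_rms, i_rms and phi are per-harmonic RMS amplitudes and true
        voltage-current phase angles."""
        p_true = np.sum(v_rms * i_rms * np.cos(phi))
        p_meas = np.sum(v_rms * i_rms * np.cos(phi + dphi))
        return (p_meas - p_true) / p_true

    # Example: fundamental plus two chopper harmonics, with a growing phase lag
    v = np.array([100.0, 30.0, 15.0])
    i = np.array([50.0, 20.0, 10.0])
    phi = np.radians([10.0, 40.0, 60.0])
    dphi = np.radians([1.0, 2.0, 3.0])
    print(power_error_from_phase(v, i, phi, dphi))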

  11. Error monitoring and empathy: Explorations within a neurophysiological context.

    PubMed

    Amiruddin, Azhani; Fueggle, Simone N; Nguyen, An T; Gignac, Gilles E; Clunies-Ross, Karen L; Fox, Allison M

    2017-06-01

    Past literature has proposed that empathy consists of two components: cognitive and affective empathy. Error monitoring mechanisms indexed by the error-related negativity (ERN) have been associated with empathy. Studies have found that a larger ERN is associated with higher levels of empathy. We aimed to expand upon previous work by investigating how error monitoring relates to the independent theoretical domains of cognitive and affective empathy. Study 1 (N = 24) explored the relationship between error monitoring mechanisms and subcomponents of empathy using the Questionnaire of Cognitive and Affective Empathy and found no relationship. Study 2 (N = 38) explored the relationship between the error monitoring mechanisms and overall empathy. Contrary to past findings, there was no evidence to support a relationship between error monitoring mechanisms and scores on empathy measures. A subsequent meta-analysis (Study 3, N = 125) summarizing the relationship across previously published studies together with the two studies reported in the current paper indicated that overall there was no significant association between ERN and empathy and that there was significant heterogeneity across studies. Future investigations exploring the potential variables that may moderate these relationships are discussed. © 2017 Society for Psychophysiological Research.

  12. Skylab S-193 radar altimeter experiment analyses and results

    NASA Technical Reports Server (NTRS)

    Brown, G. S. (Editor)

    1977-01-01

    The design of optimum filtering procedures for geoid recovery is discussed. Statistical error bounds are obtained for pointing angle estimates using average waveform data. A correlation of tracking loop bandwidth with magnitude of pointing error is established. The impact of ocean currents and precipitation on the received power is shown to be a measurable effect. For large sea state conditions, measurements of σ° indicate a distinct saturation level of about 8 dB. Near-nadir (less than 15°) values of σ° are also presented and compared with theoretical models. Examination of Great Salt Lake Desert scattering data leads to rejection of a previously hypothesized specularly reflecting surface. Pulse-to-pulse correlation results are in agreement with quasi-monochromatic optics theoretical predictions and indicate a means for estimating direction of pointing error. Pulse compression techniques for, and results of, estimating significant waveheight from waveform data are presented and are also shown to be in good agreement with surface truth data. A number of results pertaining to system performance are presented.

  13. Modeling Noisy Data with Differential Equations Using Observed and Expected Matrices

    ERIC Educational Resources Information Center

    Deboeck, Pascal R.; Boker, Steven M.

    2010-01-01

    Complex intraindividual variability observed in psychology may be well described using differential equations. It is difficult, however, to apply differential equation models in psychological contexts, as time series are frequently short, poorly sampled, and have large proportions of measurement and dynamic error. Furthermore, current methods for…

  14. A review of uncertainty in in situ measurements and data sets of sea surface temperature

    NASA Astrophysics Data System (ADS)

    Kennedy, John J.

    2014-03-01

    Archives of in situ sea surface temperature (SST) measurements extend back more than 160 years. Quality of the measurements is variable, and the area of the oceans they sample is limited, especially early in the record and during the two world wars. Measurements of SST and the gridded data sets that are based on them are used in many applications so understanding and estimating the uncertainties are vital. The aim of this review is to give an overview of the various components that contribute to the overall uncertainty of SST measurements made in situ and of the data sets that are derived from them. In doing so, it also aims to identify current gaps in understanding. Uncertainties arise at the level of individual measurements with both systematic and random effects and, although these have been extensively studied, refinement of the error models continues. Recent improvements have been made in the understanding of the pervasive systematic errors that affect the assessment of long-term trends and variability. However, the adjustments applied to minimize these systematic errors are uncertain and these uncertainties are higher before the 1970s and particularly large in the period surrounding the Second World War owing to a lack of reliable metadata. The uncertainties associated with the choice of statistical methods used to create globally complete SST data sets have been explored using different analysis techniques, but they do not incorporate the latest understanding of measurement errors, and they want for a fair benchmark against which their skill can be objectively assessed. These problems can be addressed by the creation of new end-to-end SST analyses and by the recovery and digitization of data and metadata from ship log books and other contemporary literature.

  15. Recalculation of regional and detailed gravity database from Slovak Republic and qualitative interpretation of new generation Bouguer anomaly map

    NASA Astrophysics Data System (ADS)

    Pasteka, Roman; Zahorec, Pavol; Mikuska, Jan; Szalaiova, Viktoria; Papco, Juraj; Krajnak, Martin; Kusnirak, David; Panisova, Jaroslava; Vajda, Peter; Bielik, Miroslav

    2014-05-01

    In this contribution, results of the ongoing project "Bouguer anomalies of new generation and the gravimetrical model of Western Carpathians (APVV-0194-10)" are presented. The existing homogenized regional database (212,478 points) was enlarged by approximately 107,500 archived detailed gravity measurements. These added gravity values were measured between 1976 and the present and therefore had to be unified and reprocessed. Improved positions for more than 8,500 measured points were acquired by digitizing archive maps (some local errors were recognized within particular data sets). Besides the local errors (due to wrong positions, heights or gravity values of measured points), we found some areas of systematic errors, probably caused by gravity measurement or processing errors. Some of them were confirmed and subsequently corrected by field measurements within the frame of the current project. Special attention is paid to the recalculation of the terrain corrections - we used newly developed software as well as the latest version of the digital terrain model of Slovakia, DMR-3. The main improvements of the new terrain-correction evaluation algorithm are the possibility to calculate the corrections at the real gravimeter position and the use of a 3D polyhedral-body approximation (accepting the spherical approximation of the Earth's curvature). We have performed several tests involving the introduction of non-standard distant relief effects. A new complete Bouguer anomaly map was constructed and transformed by means of higher-derivative operators (tilt derivatives, TDX, theta derivatives and the new TDXAS transformation), using the regularization approach. A new and interesting regional lineament of probably neotectonic character was recognized in the new complete Bouguer anomaly map and was also confirmed by in-situ field measurements.
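
    One of the transforms mentioned above, the tilt derivative, can be sketched in a few lines; here the vertical derivative is approximated in the wavenumber domain, which is a common simplification rather than the exact procedure used in the project.

    import numpy as np

    def tilt_derivative(grid, dx, dy):
        """Tilt-derivative transform of a gridded Bouguer anomaly: the arctangent
        of the vertical derivative over the total horizontal gradient, with the
        vertical derivative computed via the |k| operator in the Fourier domain."""
        gy, gx = np.gradient(grid, dy, dx)                    # horizontal gradients
        ky = np.fft.fftfreq(grid.shape[0], dy) * 2 * np.pi
        kx = np.fft.fftfreq(grid.shape[1], dx) * 2 * np.pi
        k = np.sqrt(kx[None, :] ** 2 + ky[:, None] ** 2)
        gz = np.real(np.fft.ifft2(np.fft.fft2(grid) * k))     # first vertical derivative
        return np.arctan2(gz, np.hypot(gx, gy))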

  16. Sensitivity of planetary cruise navigation to earth orientation calibration errors

    NASA Technical Reports Server (NTRS)

    Estefan, J. A.; Folkner, W. M.

    1995-01-01

    A detailed analysis was conducted to determine the sensitivity of spacecraft navigation errors to the accuracy and timeliness of Earth orientation calibrations. Analyses based on simulated X-band (8.4-GHz) Doppler and ranging measurements acquired during the interplanetary cruise segment of the Mars Pathfinder heliocentric trajectory were completed for the nominal trajectory design and for an alternative trajectory with a longer transit time. Several error models were developed to characterize the effect of Earth orientation on navigational accuracy based on current and anticipated Deep Space Network calibration strategies. The navigational sensitivity of Mars Pathfinder to calibration errors in Earth orientation was computed for each candidate calibration strategy with the Earth orientation parameters included as estimated parameters in the navigation solution. In these cases, the calibration errors contributed 23 to 58% of the total navigation error budget, depending on the calibration strategy being assessed. Navigation sensitivity calculations were also performed for cases in which Earth orientation calibration errors were not adjusted in the navigation solution. In these cases, Earth orientation calibration errors contributed from 26 to as much as 227% of the total navigation error budget. The final analysis suggests that, not only is the method used to calibrate Earth orientation vitally important for precision navigation of Mars Pathfinder, but perhaps equally important is the method for inclusion of the calibration errors in the navigation solutions.

  17. A Tool Measuring Remaining Thickness of Notched Acoustic Cavities in Primary Reaction Control Thruster NDI Standards

    NASA Technical Reports Server (NTRS)

    Sun, Yushi; Sun, Changhong; Zhu, Harry; Wincheski, Buzz

    2006-01-01

    Stress corrosion cracking in the relief radius area of a space shuttle primary reaction control thruster is an issue of concern. The current approach for monitoring of potential crack growth is nondestructive inspection (NDI) of remaining thickness (RT) to the acoustic cavities using an eddy current or remote field eddy current probe. EDM manufacturers have difficulty in providing accurate RT calibration standards. Significant error in the RT values of NDI calibration standards could lead to a mistaken judgment of cracking condition of a thruster under inspection. A tool based on eddy current principle has been developed to measure the RT at each acoustic cavity of a calibration standard in order to validate that the standard meets the sample design criteria.

  18. An experimental study of nonlinear dynamic system identification

    NASA Technical Reports Server (NTRS)

    Stry, Greselda I.; Mook, D. Joseph

    1990-01-01

    A technique for robust identification of nonlinear dynamic systems is developed and illustrated using both simulations and analog experiments. The technique is based on the Minimum Model Error optimal estimation approach. A detailed literature review is included in which fundamental differences between the current approach and previous work are described. The most significant feature of the current work is the ability to identify nonlinear dynamic systems without prior assumptions regarding the form of the nonlinearities, in contrast to existing nonlinear identification approaches, which usually require detailed assumptions about the nonlinearities. The example illustrations indicate that the method is robust with respect to prior ignorance of the model, and with respect to measurement noise, measurement frequency, and measurement record length.

  19. Autoregressive Modeling of Drift and Random Error to Characterize a Continuous Intravascular Glucose Monitoring Sensor.

    PubMed

    Zhou, Tony; Dickson, Jennifer L; Geoffrey Chase, J

    2018-01-01

    Continuous glucose monitoring (CGM) devices have been effective in managing diabetes and offer potential benefits for use in the intensive care unit (ICU). Use of CGM devices in the ICU has been limited, primarily due to the higher point accuracy errors over currently used traditional intermittent blood glucose (BG) measures. General models of CGM errors, including drift and random errors, are lacking, but would enable better design of protocols to utilize these devices. This article presents an autoregressive (AR) based modeling method that separately characterizes the drift and random noise of the GlySure CGM sensor (GlySure Limited, Oxfordshire, UK). Clinical sensor data (n = 33) and reference measurements were used to generate 2 AR models to describe sensor drift and noise. These models were used to generate 100 Monte Carlo simulations based on reference blood glucose measurements. These were then compared to the original CGM clinical data using mean absolute relative difference (MARD) and a Trend Compass. The point accuracy MARD was very similar between simulated and clinical data (9.6% vs 9.9%). A Trend Compass was used to assess trend accuracy, and found simulated and clinical sensor profiles were similar (simulated trend index 11.4° vs clinical trend index 10.9°). The model and method accurately represents cohort sensor behavior over patients, providing a general modeling approach to any such sensor by separately characterizing each type of error that can arise in the data. Overall, it enables better protocol design based on accurate expected CGM sensor behavior, as well as enabling the analysis of what level of each type of sensor error would be necessary to obtain desired glycemic control safety and performance with a given protocol.
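
    A simplified sketch of the simulation idea: add a slowly varying AR(1) drift and a faster AR(1) noise term to reference blood glucose values and score the result with MARD. The AR coefficients and standard deviations below are placeholders, not the parameters fitted to the GlySure data.

    import numpy as np

    def simulate_cgm(reference_bg, drift_phi=0.999, drift_sigma=0.05,
                     noise_phi=0.5, noise_sigma=2.0, seed=0):
        """Generate one synthetic CGM trace (mg/dL) from reference BG values by
        adding AR(1) drift and AR(1) random noise, and return the trace and its
        point-accuracy MARD against the reference."""
        rng = np.random.default_rng(seed)
        ref = np.asarray(reference_bg, dtype=float)
        drift = np.zeros(ref.size)
        noise = np.zeros(ref.size)
        for t in range(1, ref.size):
            drift[t] = drift_phi * drift[t - 1] + drift_sigma * rng.standard_normal()
            noise[t] = noise_phi * noise[t - 1] + noise_sigma * rng.standard_normal()
        simulated = ref + drift + noise
        mard = 100.0 * np.mean(np.abs(simulated - ref) / ref)
        return simulated, mard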

  20. Amorphous Silicon p-i-n Structure Acting as Light and Temperature Sensor

    PubMed Central

    de Cesare, Giampiero; Nascetti, Augusto; Caputo, Domenico

    2015-01-01

    In this work, we propose a multi-parametric sensor able to measure both temperature and radiation intensity, suitable to increase the level of integration and miniaturization in Lab-on-Chip applications. The device is based on an amorphous silicon p-doped/intrinsic/n-doped thin-film junction. The device is first characterized independently as a radiation sensor and as a temperature sensor. We found a maximum value of responsivity equal to 350 mA/W at 510 nm and temperature sensitivity equal to 3.2 mV/K. We then investigated the effects of the temperature variation on light intensity measurement and of the light intensity variation on the accuracy of the temperature measurement. We found that the temperature variation induces an error lower than 0.55 pW/K in the light intensity measurement at 550 nm when the diode is biased in the short-circuit condition, while the error in the temperature measurement remains below 1 K/µW when a forward bias current density higher than 25 µA/cm2 is applied. PMID:26016913

  1. Fault-tolerant, high-level quantum circuits: form, compilation and description

    NASA Astrophysics Data System (ADS)

    Paler, Alexandru; Polian, Ilia; Nemoto, Kae; Devitt, Simon J.

    2017-06-01

    Fault-tolerant quantum error correction is a necessity for any quantum architecture destined to tackle interesting, large-scale problems. Its theoretical formalism has been well founded for nearly two decades. However, we still do not have an appropriate compiler to produce a fault-tolerant, error-corrected description from a higher-level quantum circuit for state-of-the-art hardware models. There are many technical hurdles, including dynamic circuit constructions that occur when constructing fault-tolerant circuits with commonly used error correcting codes. We introduce a package that converts high-level quantum circuits consisting of commonly used gates into a form employing all decompositions and ancillary protocols needed for fault-tolerant error correction. We call this form the (I)nitialisation, (C)NOT, (M)easurement (ICM) form; it consists of an initialisation layer of qubits into one of four distinct states, a massive, deterministic array of CNOT operations and a series of time-ordered X- or Z-basis measurements. The form allows a more flexible approach towards circuit optimisation. At the same time, the package outputs a standard circuit or a canonical geometric description, which is a necessity for operating current state-of-the-art hardware architectures using topological quantum codes.

  2. Modeling Infrared Signal Reflections to Characterize Indoor Multipath Propagation

    PubMed Central

    De-La-Llana-Calvo, Álvaro; Lázaro-Galilea, José Luis; Gardel-Vicente, Alfredo; Rodríguez-Navarro, David; Bravo-Muñoz, Ignacio; Tsirigotis, Georgios; Iglesias-Miguel, Juan

    2017-01-01

    In this paper, we propose a model to characterize Infrared (IR) signal reflections on any kind of surface material, together with a simplified procedure to compute the model parameters. The model works within the framework of Local Positioning Systems (LPS) based on IR signals (IR-LPS) to evaluate the behavior of transmitted signal Multipaths (MP), which are the main cause of error in IR-LPS, and makes several contributions to mitigation methods. Current methods are based on physics, optics, geometry and empirical methods, but these do not meet our requirements because of the need to apply several different restrictions and employ complex tools. We propose a simplified model based on only two reflection components, together with a method for determining the model parameters based on 12 empirical measurements that are easily performed in the real environment where the IR-LPS is being applied. Our experimental results show that the model provides a comprehensive solution to the real behavior of IR MP, yielding small errors when comparing real and modeled data (the mean error ranges from 1% to 4% depending on the environment surface materials). Other state-of-the-art methods yielded mean errors ranging from 15% to 40% in test measurements. PMID:28406436

  3. Solar Cell Short Circuit Current Errors and Uncertainties During High Altitude Calibrations

    NASA Technical Reports Server (NTRS)

    Snyder, David D.

    2012-01-01

    High altitude balloon based facilities can make solar cell calibration measurements above 99.5% of the atmosphere to use for adjusting laboratory solar simulators. While close to on-orbit illumination, the small attenuation of the spectrum may result in under-measurement of solar cell parameters. Variations of stratospheric weather may produce flight-to-flight measurement variations. To support the NSCAP effort, this work quantifies some of the effects on solar cell short circuit current (Isc) measurements on triple junction sub-cells. This work looks at several types of high altitude methods: direct high altitude measurements near 120 kft, and lower stratospheric Langley plots from aircraft. It also looks at Langley extrapolation from altitudes above most of the ozone, for potential small balloon payloads. A convolution of the sub-cell spectral response with the standard solar spectrum modified by several absorption processes is used to determine the relative change from AM0, Isc/Isc(AM0). Rayleigh scattering, molecular scattering from uniformly mixed gases, ozone, and water vapor are included in this analysis. A range of atmospheric pressures is examined, from 0.05 to 0.25 atm, to cover the range of atmospheric altitudes where solar cell calibrations are performed. Generally these errors and uncertainties are less than 0.2%.
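
    The relative change in short-circuit current is essentially a ratio of two spectral integrals; a compact sketch follows, assuming all curves are tabulated on a common wavelength grid and that the atmospheric transmission curve already folds in the Rayleigh, mixed-gas, ozone and water vapor terms.

    import numpy as np

    def isc_ratio(wavelength_nm, spectral_response, e_am0, transmission):
        """Ratio of short-circuit current at altitude to its AM0 value, from the
        convolution of the sub-cell spectral response with the AM0 spectrum and
        an atmospheric transmission curve."""
        isc_alt = np.trapz(spectral_response * e_am0 * transmission, wavelength_nm)
        isc_am0 = np.trapz(spectral_response * e_am0, wavelength_nm)
        return isc_alt / isc_am0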

  4. A preliminary taxonomy of medical errors in family practice

    PubMed Central

    Dovey, S; Meyers, D; Phillips, R; Green, L; Fryer, G; Galliher, J; Kappus, J; Grob, P

    2002-01-01

    Objective: To develop a preliminary taxonomy of primary care medical errors. Design: Qualitative analysis to identify categories of error reported during a randomized controlled trial of computer and paper reporting methods. Setting: The National Network for Family Practice and Primary Care Research. Participants: Family physicians. Main outcome measures: Medical error category, context, and consequence. Results: Forty two physicians made 344 reports: 284 (82.6%) arose from healthcare systems dysfunction; 46 (13.4%) were errors due to gaps in knowledge or skills; and 14 (4.1%) were reports of adverse events, not errors. The main subcategories were: administrative failures (102; 30.9% of errors), investigation failures (82; 24.8%), treatment delivery lapses (76; 23.0%), miscommunication (19; 5.8%), payment systems problems (4; 1.2%), error in the execution of a clinical task (19; 5.8%), wrong treatment decision (14; 4.2%), and wrong diagnosis (13; 3.9%). Most reports were of errors that were recognized and occurred in reporters' practices. Affected patients ranged in age from 8 months to 100 years, were of both sexes, and represented all major US ethnic groups. Almost half the reports were of events which had adverse consequences. Ten errors resulted in patients being admitted to hospital and one patient died. Conclusions: This medical error taxonomy, developed from self-reports of errors observed by family physicians during their routine clinical practice, emphasizes problems in healthcare processes and acknowledges medical errors arising from shortfalls in clinical knowledge and skills. Patient safety strategies with most effect in primary care settings need to be broader than the current focus on medication errors. PMID:12486987

  5. The next organizational challenge: finding and addressing diagnostic error.

    PubMed

    Graber, Mark L; Trowbridge, Robert; Myers, Jennifer S; Umscheid, Craig A; Strull, William; Kanter, Michael H

    2014-03-01

    Although health care organizations (HCOs) are intensely focused on improving the safety of health care, efforts to date have almost exclusively targeted treatment-related issues. The literature confirms that the approaches HCOs use to identify adverse medical events are not effective in finding diagnostic errors, so the initial challenge is to identify cases of diagnostic error. WHY HEALTH CARE ORGANIZATIONS NEED TO GET INVOLVED: HCOs are preoccupied with many quality- and safety-related operational and clinical issues, including performance measures. The case for paying attention to diagnostic errors, however, is based on the following four points: (1) diagnostic errors are common and harmful, (2) high-quality health care requires high-quality diagnosis, (3) diagnostic errors are costly, and (4) HCOs are well positioned to lead the way in reducing diagnostic error. FINDING DIAGNOSTIC ERRORS: Current approaches to identifying diagnostic errors, such as occurrence screens, incident reports, autopsy, and peer review, were not designed to detect diagnostic issues (or problems of omission in general) and/or rely on voluntary reporting. The realization that the existing tools are inadequate has spurred efforts to identify novel tools that could be used to discover diagnostic errors or breakdowns in the diagnostic process that are associated with errors. New approaches--Maine Medical Center's case-finding of diagnostic errors by facilitating direct reports from physicians and Kaiser Permanente's electronic health record--based reports that detect process breakdowns in the followup of abnormal findings--are described in case studies. By raising awareness and implementing targeted programs that address diagnostic error, HCOs may begin to play an important role in addressing the problem of diagnostic error.

  6. Effects of stinger axial dynamics and mass compensation methods on experimental modal analysis

    NASA Astrophysics Data System (ADS)

    Hu, Ximing

    1992-06-01

    A longitudinal bar model that includes both stinger elastic and inertia properties is used to analyze the stinger's axial dynamics as well as the mass compensation that is required to obtain accurate input forces when a stinger is installed between the excitation source, force transducer, and the structure under test. Stinger motion transmissibility and force transmissibility, axial resonance and excitation energy transfer problems are discussed in detail. Stinger mass compensation problems occur when the force transducer is mounted on the exciter end of the stinger. These problems are studied theoretically, numerically, and experimentally. It is found that the measured Frequency Response Function (FRF) can be underestimated if mass compensation is based on the stinger exciter-end acceleration and can be overestimated if the mass compensation is based on the structure-end acceleration due to the stinger's compliance. A new mass compensation method that is based on two accelerations is introduced and is seen to improve the accuracy considerably. The effects of the force transducer's compliance on the mass compensation are also discussed. A theoretical model is developed that describes the measurement system's FRF around a test structure's resonance. The model shows that very large measurement errors occur when there is a small relative phase shift between the force and acceleration measurements. These errors can amount to hundreds of percent, corresponding to a phase error on the order of one or two degrees. The physical reasons for this unexpected error pattern are explained. This error is currently unknown to the experimental modal analysis community. Two sample structures consisting of a rigid mass and a double cantilever beam are used in the numerical calculations and experiments.

  7. Measurement System Characterization in the Presence of Measurement Errors

    NASA Technical Reports Server (NTRS)

    Commo, Sean A.

    2012-01-01

    In the calibration of a measurement system, data are collected in order to estimate a mathematical model between one or more factors of interest and a response. Ordinary least squares is a method employed to estimate the regression coefficients in the model. The method assumes that the factors are known without error; yet, it is implicitly known that the factors contain some uncertainty. In the literature, this uncertainty is known as measurement error. The measurement error affects both the estimates of the model coefficients and the prediction, or residual, errors. There are some methods, such as orthogonal least squares, that are employed in situations where measurement errors exist, but these methods do not directly incorporate the magnitude of the measurement errors. This research proposes a new method, known as modified least squares, that combines the principles of least squares with knowledge about the measurement errors. This knowledge is expressed in terms of the variance ratio - the ratio of response error variance to measurement error variance.
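
    When the variance ratio is known, the classical errors-in-variables estimator that uses it is Deming regression; the sketch below shows how the ratio enters a straight-line fit and is offered only as an analogue of the modified least squares idea, not as the paper's actual estimator.

    import numpy as np

    def deming_fit(x, y, variance_ratio):
        """Straight-line fit when both x and y contain error, using the ratio
        lambda = var(response error) / var(measurement error). This is classical
        Deming regression, shown as an illustration of how a variance ratio can
        enter a least-squares-style estimate."""
        lam = variance_ratio
        sxx = np.var(x, ddof=1)
        syy = np.var(y, ddof=1)
        sxy = np.cov(x, y, ddof=1)[0, 1]
        slope = (syy - lam * sxx +
                 np.sqrt((syy - lam * sxx) ** 2 + 4 * lam * sxy ** 2)) / (2 * sxy)
        intercept = np.mean(y) - slope * np.mean(x)
        return slope, intercept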

  8. [Medication error management climate and perception for system use according to construction of medication error prevention system].

    PubMed

    Kim, Myoung Soo

    2012-08-01

    The purpose of this cross-sectional study was to examine the current status of IT-based medication error prevention system construction and the relationships among system construction, medication error management climate and perception of system use. The participants were 124 patient safety chief managers working for 124 hospitals with over 300 beds in Korea. The characteristics of the participants, the construction status and perception of the systems (electronic pharmacopoeia, electronic drug dosage calculation system, computer-based patient safety reporting and bar-code system) and the medication error management climate were measured in this study. The data were collected between June and August 2011. Descriptive statistics, partial Pearson correlation and MANCOVA were used for data analysis. Electronic pharmacopoeias were constructed in 67.7% of participating hospitals, computer-based patient safety reporting systems in 50.8%, and electronic drug dosage calculation systems in 32.3%. Bar-code systems showed the lowest construction rate, at 16.1% of Korean hospitals. Higher rates of construction of IT-based medication error prevention systems were associated with greater safety and a more positive error management climate. Supportive strategies for improving the perception of IT-based system use, in addition to system construction, would make it easier to promote a positive error management climate.

  9. Improvement of the Error-detection Mechanism in Adults with Dyslexia Following Reading Acceleration Training.

    PubMed

    Horowitz-Kraus, Tzipi

    2016-05-01

    The error-detection mechanism aids in preventing error repetition during a given task. Electroencephalography demonstrates that error detection involves two event-related potential components: error-related and correct-response negativities (ERN and CRN, respectively). Dyslexia is characterized by slow, inaccurate reading. In particular, individuals with dyslexia have a less active error-detection mechanism during reading than typical readers. In the current study, we examined whether a reading training programme could improve the ability to recognize words automatically (lexical representations) in adults with dyslexia, thereby resulting in more efficient error detection during reading. Behavioural and electrophysiological measures were obtained using a lexical decision task before and after participants trained with the reading acceleration programme. ERN amplitudes were smaller in individuals with dyslexia than in typical readers before training but increased following training, as did behavioural reading scores. Differences between the pre-training and post-training ERN and CRN components were larger in individuals with dyslexia than in typical readers. Also, the error-detection mechanism as represented by the ERN/CRN complex might serve as a biomarker for dyslexia and be used to evaluate the effectiveness of reading intervention programmes. Copyright © 2016 John Wiley & Sons, Ltd.

  10. Online pretreatment verification of high-dose rate brachytherapy using an imaging panel

    NASA Astrophysics Data System (ADS)

    Fonseca, Gabriel P.; Podesta, Mark; Bellezzo, Murillo; Van den Bosch, Michiel R.; Lutgens, Ludy; Vanneste, Ben G. L.; Voncken, Robert; Van Limbergen, Evert J.; Reniers, Brigitte; Verhaegen, Frank

    2017-07-01

    Brachytherapy is employed to treat a wide variety of cancers. However, an accurate treatment verification method is currently not available. This study describes a pre-treatment verification system that uses an imaging panel (IP) to verify important aspects of the treatment plan. A detailed modelling of the IP was only possible with an extensive calibration performed using a robotic arm. Irradiations were performed with a high dose rate (HDR) 192Ir source within a water phantom. An empirical fit was applied to measure the distance between the source and the detector so 3D Cartesian coordinates of the dwell positions can be obtained using a single panel. The IP acquires 7.14 fps to verify the dwell times, dwell positions and air kerma strength (Sk). A gynecological applicator was used to create a treatment plan that was registered with a CT image of the water phantom used during the experiments for verification purposes. Errors (shifts, exchanged connections and wrong dwell times) were simulated to verify the proposed verification system. Cartesian source positions (panel measurement plane) have a standard deviation of about 0.02 cm. The measured distance between the source and the panel (z-coordinate) have a standard deviation up to 0.16 cm and maximum absolute error of  ≈0.6 cm if the signal is close to sensitive limit of the panel. The average response of the panel is very linear with Sk. Therefore, Sk measurements can be performed with relatively small errors. The measured dwell times show a maximum error of 0.2 s which is consistent with the acquisition rate of the panel. All simulated errors were clearly identified by the proposed system. The use of IPs is not common in brachytherapy, however, it provides considerable advantages. It was demonstrated that the IP can accurately measure Sk, dwell times and dwell positions.

  11. New analysis strategies for micro aspheric lens metrology

    NASA Astrophysics Data System (ADS)

    Gugsa, Solomon Abebe

    Effective characterization of an aspheric micro lens is critical for understanding and improving processing in micro-optic manufacturing. Since most microlenses are plano-convex, where the convex geometry is a conic surface, current practice is often limited to obtaining an estimate of the lens conic constant, which averages out the surface geometry that departs from an exact conic surface and any additional surface irregularities. We have developed a comprehensive approach to estimating the best-fit conic and its uncertainty, and in addition propose an alternative analysis that focuses on surface errors rather than the best-fit conic constant. We describe our new analysis strategy based on the two most dominant micro lens metrology methods in use today, namely, scanning white light interferometry (SWLI) and phase shifting interferometry (PSI). We estimate several parameters from the measurement. The major uncertainty contributors for SWLI are the estimates of base radius of curvature, the aperture of the lens, the sag of the lens, noise in the measurement, and the center of the lens. In the case of PSI the dominant uncertainty contributors are noise in the measurement, the radius of curvature, and the aperture. Our best-fit conic procedure uses least squares minimization to extract a best-fit conic value, which is then subjected to a Monte Carlo analysis to capture combined uncertainty. In our surface errors analysis procedure, we consider the surface errors as the difference between the measured geometry and the best-fit conic surface or as the difference between the measured geometry and the design specification for the lens. We focus on a Zernike polynomial description of the surface error, and again a Monte Carlo analysis is used to estimate a combined uncertainty, which in this case is an uncertainty for each Zernike coefficient. Our approach also allows us to investigate the effect of individual uncertainty parameters and measurement noise on both the best-fit conic constant analysis and the surface errors analysis, and compare the individual contributions to the overall uncertainty.
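
    A minimal sketch of the two analysis ingredients described above: a least-squares fit of the base radius and conic constant to measured sag data, and a Monte Carlo propagation of measurement noise into the fitted conic constant. The noise model and starting values are assumptions, not the study's actual uncertainty budget.

    import numpy as np
    from scipy.optimize import least_squares

    def conic_sag(r, R, k):
        """Sag of a conic surface with base radius R and conic constant k."""
        return r ** 2 / (R * (1 + np.sqrt(1 - (1 + k) * r ** 2 / R ** 2)))

    def fit_conic(r, z, R0, k0=-0.5):
        """Least-squares estimate of (R, k) from measured sag data z(r)."""
        res = least_squares(lambda p: conic_sag(r, p[0], p[1]) - z, x0=[R0, k0])
        return res.x

    def conic_uncertainty(r, z, R0, sigma_z, n_trials=200, seed=0):
        """Monte Carlo propagation of measurement noise sigma_z into the fitted
        conic constant, returning its standard deviation over the trials."""
        rng = np.random.default_rng(seed)
        ks = [fit_conic(r, z + sigma_z * rng.standard_normal(z.shape), R0)[1]
              for _ in range(n_trials)]
        return np.std(ks)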

  12. Online pretreatment verification of high-dose rate brachytherapy using an imaging panel.

    PubMed

    Fonseca, Gabriel P; Podesta, Mark; Bellezzo, Murillo; Van den Bosch, Michiel R; Lutgens, Ludy; Vanneste, Ben G L; Voncken, Robert; Van Limbergen, Evert J; Reniers, Brigitte; Verhaegen, Frank

    2017-07-07

    Brachytherapy is employed to treat a wide variety of cancers. However, an accurate treatment verification method is currently not available. This study describes a pre-treatment verification system that uses an imaging panel (IP) to verify important aspects of the treatment plan. A detailed modelling of the IP was only possible with an extensive calibration performed using a robotic arm. Irradiations were performed with a high dose rate (HDR) 192 Ir source within a water phantom. An empirical fit was applied to measure the distance between the source and the detector so 3D Cartesian coordinates of the dwell positions can be obtained using a single panel. The IP acquires 7.14 fps to verify the dwell times, dwell positions and air kerma strength (Sk). A gynecological applicator was used to create a treatment plan that was registered with a CT image of the water phantom used during the experiments for verification purposes. Errors (shifts, exchanged connections and wrong dwell times) were simulated to verify the proposed verification system. Cartesian source positions (panel measurement plane) have a standard deviation of about 0.02 cm. The measured distance between the source and the panel (z-coordinate) have a standard deviation up to 0.16 cm and maximum absolute error of  ≈0.6 cm if the signal is close to sensitive limit of the panel. The average response of the panel is very linear with Sk. Therefore, Sk measurements can be performed with relatively small errors. The measured dwell times show a maximum error of 0.2 s which is consistent with the acquisition rate of the panel. All simulated errors were clearly identified by the proposed system. The use of IPs is not common in brachytherapy, however, it provides considerable advantages. It was demonstrated that the IP can accurately measure Sk, dwell times and dwell positions.

  13. Effect of the mandible on mouthguard measurements of head kinematics.

    PubMed

    Kuo, Calvin; Wu, Lyndia C; Hammoor, Brad T; Luck, Jason F; Cutcliffe, Hattie C; Lynall, Robert C; Kait, Jason R; Campbell, Kody R; Mihalik, Jason P; Bass, Cameron R; Camarillo, David B

    2016-06-14

    Wearable sensors are becoming increasingly popular for measuring head motions and detecting head impacts. Many sensors are worn on the skin or in headgear and can suffer from motion artifacts introduced by the compliance of soft tissue or decoupling of headgear from the skull. The instrumented mouthguard is designed to couple directly to the upper dentition, which is made of hard enamel and anchored in a bony socket by stiff ligaments. This gives the mouthguard superior coupling to the skull compared with other systems. However, multiple validation studies have yielded conflicting results with respect to the mouthguard's head kinematics measurement accuracy. Here, we demonstrate that imposing different constraints on the mandible (lower jaw) can alter mouthguard kinematic accuracy in dummy headform testing. In addition, post mortem human surrogate tests utilizing the worst-case unconstrained mandible condition yield 40% and 80% normalized root mean square error in angular velocity and angular acceleration, respectively. These errors can be modeled using a simple spring-mass system in which the soft mouthguard material near the sensors acts as a spring and the mandible as a mass. However, the mouthguard can be designed to mitigate these disturbances by isolating sensors from mandible loads, improving accuracy to below 15% normalized root mean square error in all kinematic measures. Thus, while current mouthguards would suffer from measurement errors in the worst-case unconstrained mandible condition, future mouthguards should be designed to account for these disturbances and future validation testing should include unconstrained mandibles to ensure proper accuracy. Copyright © 2016 Elsevier Ltd. All rights reserved.
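
    The spring-mass picture of the mandible disturbance can be written down directly; the sketch below gives the transmissibility of a damped spring-mass system as a function of frequency, with stiffness, mass and damping ratio left as free, illustrative parameters rather than values identified in the study.

    import numpy as np

    def mandible_disturbance_gain(freq_hz, k_mouthguard, m_mandible, zeta=0.05):
        """Transmissibility of a damped spring-mass model in which the soft
        mouthguard material acts as the spring and the mandible as the mass:
        the ratio of transmitted to applied motion as a function of frequency."""
        wn = np.sqrt(k_mouthguard / m_mandible)       # natural frequency (rad/s)
        r = 2 * np.pi * np.asarray(freq_hz) / wn      # frequency ratio
        return (np.sqrt(1 + (2 * zeta * r) ** 2) /
                np.sqrt((1 - r ** 2) ** 2 + (2 * zeta * r) ** 2))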

  14. Audit of the global carbon budget: estimate errors and their impact on uptake uncertainty

    NASA Astrophysics Data System (ADS)

    Ballantyne, A. P.; Andres, R.; Houghton, R.; Stocker, B. D.; Wanninkhof, R.; Anderegg, W.; Cooper, L. A.; DeGrandpre, M.; Tans, P. P.; Miller, J. B.; Alden, C.; White, J. W. C.

    2015-04-01

    Over the last 5 decades monitoring systems have been developed to detect changes in the accumulation of carbon (C) in the atmosphere and ocean; however, our ability to detect changes in the behavior of the global C cycle is still hindered by measurement and estimate errors. Here we present a rigorous and flexible framework for assessing the temporal and spatial components of estimate errors and their impact on uncertainty in net C uptake by the biosphere. We present a novel approach for incorporating temporally correlated random error into the error structure of emission estimates. Based on this approach, we conclude that the 2σ uncertainties of the atmospheric growth rate have decreased from 1.2 Pg C yr-1 in the 1960s to 0.3 Pg C yr-1 in the 2000s due to an expansion of the atmospheric observation network. The 2σ uncertainties in fossil fuel emissions have increased from 0.3 Pg C yr-1 in the 1960s to almost 1.0 Pg C yr-1 during the 2000s due to differences in national reporting errors and differences in energy inventories. Lastly, while land use emissions have remained fairly constant, their errors still remain high and thus their global C uptake uncertainty is not trivial. Currently, the absolute errors in fossil fuel emissions rival the total emissions from land use, highlighting the extent to which fossil fuels dominate the global C budget. Because errors in the atmospheric growth rate have decreased faster than errors in total emissions have increased, a ~20% reduction in the overall uncertainty of net C global uptake has occurred. Given all the major sources of error in the global C budget that we could identify, we are 93% confident that terrestrial C uptake has increased and 97% confident that ocean C uptake has increased over the last 5 decades. Thus, it is clear that arguably one of the most vital ecosystem services currently provided by the biosphere is the continued removal of approximately half of atmospheric CO2 emissions from the atmosphere, although there are certain environmental costs associated with this service, such as the acidification of ocean waters.
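
    The temporally correlated random error idea can be illustrated with an AR(1) process: correlated annual errors average down much more slowly over a decade than independent errors would. The coefficients below are illustrative, not the values estimated in the study.

    import numpy as np

    def correlated_emission_error(n_years, sigma, rho=0.95, n_draws=10000, seed=0):
        """Draw temporally correlated (AR(1)) random errors for an annual emission
        series and return the 2-sigma uncertainty of the n-year mean."""
        rng = np.random.default_rng(seed)
        errs = np.zeros((n_draws, n_years))
        innovation_sd = sigma * np.sqrt(1 - rho ** 2)   # keeps the marginal sd equal to sigma
        errs[:, 0] = sigma * rng.standard_normal(n_draws)
        for t in range(1, n_years):
            errs[:, t] = rho * errs[:, t - 1] + innovation_sd * rng.standard_normal(n_draws)
        return 2 * errs.mean(axis=1).std()

    # Example: compare correlated vs. nearly independent errors over a decade
    print(correlated_emission_error(10, sigma=0.5, rho=0.95))
    print(correlated_emission_error(10, sigma=0.5, rho=0.0))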

  15. The Frequency Spectral Properties of Electrode-Skin Contact Impedance on Human Head and Its Frequency-Dependent Effects on Frequency-Difference EIT in Stroke Detection from 10Hz to 1MHz.

    PubMed

    Yang, Lin; Dai, Meng; Xu, Canhua; Zhang, Ge; Li, Weichen; Fu, Feng; Shi, Xuetao; Dong, Xiuzhen

    2017-01-01

    Frequency-difference electrical impedance tomography (fdEIT) reconstructs frequency-dependent changes of a complex impedance distribution. It has a potential application in acute stroke detection because there are significant differences in impedance spectra between stroke lesions and normal brain tissues. However, fdEIT suffers from the influences of electrode-skin contact impedance since contact impedance varies greatly with frequency. When using fdEIT to detect stroke, it is critical to know the degree of measurement errors or image artifacts caused by contact impedance. To our knowledge, no study has systematically investigated the frequency spectral properties of electrode-skin contact impedance on human head and its frequency-dependent effects on fdEIT used in stroke detection within a wide frequency band (10 Hz-1 MHz). In this study, we first measured and analyzed the frequency spectral properties of electrode-skin contact impedance on 47 human subjects' heads within 10 Hz-1 MHz. Then, we quantified the frequency-dependent effects of contact impedance on fdEIT in stroke detection in terms of the current distribution beneath the electrodes and the contact impedance imbalance between two measuring electrodes. The results showed that the contact impedance at high frequencies (>100 kHz) significantly changed the current distribution beneath the electrode, leading to nonnegligible errors in boundary voltages and artifacts in reconstructed images. The contact impedance imbalance at low frequencies (<1 kHz) also caused significant measurement errors. We conclude that the contact impedance has critical frequency-dependent influences on fdEIT and further studies on reducing such influences are necessary to improve the application of fdEIT in stroke detection.

  16. Assessment of Mental, Emotional and Physical Stress through Analysis of Physiological Signals Using Smartphones.

    PubMed

    Mohino-Herranz, Inma; Gil-Pita, Roberto; Ferreira, Javier; Rosa-Zurera, Manuel; Seoane, Fernando

    2015-10-08

    Determining the stress level of a subject in real time could be of special interest in certain professional activities to allow the monitoring of soldiers, pilots, emergency personnel and other professionals responsible for human lives. Assessment of current mental fitness for executing a task at hand might avoid unnecessary risks. To obtain this knowledge, two physiological measurements were recorded in this work using customized non-invasive wearable instrumentation that measures electrocardiogram (ECG) and thoracic electrical bioimpedance (TEB) signals. The relevant information from each measurement is extracted via evaluation of a reduced set of selected features. These features are primarily obtained from filtered and processed versions of the raw time measurements with calculations of certain statistical and descriptive parameters. Selection of the reduced set of features was performed using genetic algorithms, thus constraining the computational cost of the real-time implementation. Different classification approaches have been studied, but neural networks were chosen for this investigation because they represent a good tradeoff between the intelligence of the solution and computational complexity. Three different application scenarios were considered. In the first scenario, the proposed system is capable of distinguishing among different types of activity with a 21.2% probability error, for activities coded as neutral, emotional, mental and physical. In the second scenario, the proposed solution distinguishes among the three different emotional states of neutral, sadness and disgust, with a probability error of 4.8%. In the third scenario, the system is able to distinguish between low mental load and mental overload with a probability error of 32.3%. The computational cost was calculated, and the solution was implemented in commercially available Android-based smartphones. The results indicate that the computational cost of executing such a monitoring solution is negligible compared to the nominal computational load of current smartphones.

  17. The Frequency Spectral Properties of Electrode-Skin Contact Impedance on Human Head and Its Frequency-Dependent Effects on Frequency-Difference EIT in Stroke Detection from 10Hz to 1MHz

    PubMed Central

    Zhang, Ge; Li, Weichen; Fu, Feng; Shi, Xuetao; Dong, Xiuzhen

    2017-01-01

    Frequency-difference electrical impedance tomography (fdEIT) reconstructs frequency-dependent changes of a complex impedance distribution. It has a potential application in acute stroke detection because there are significant differences in impedance spectra between stroke lesions and normal brain tissues. However, fdEIT suffers from the influences of electrode-skin contact impedance since contact impedance varies greatly with frequency. When using fdEIT to detect stroke, it is critical to know the degree of measurement errors or image artifacts caused by contact impedance. To our knowledge, no study has systematically investigated the frequency spectral properties of electrode-skin contact impedance on human head and its frequency-dependent effects on fdEIT used in stroke detection within a wide frequency band (10 Hz-1 MHz). In this study, we first measured and analyzed the frequency spectral properties of electrode-skin contact impedance on 47 human subjects’ heads within 10 Hz-1 MHz. Then, we quantified the frequency-dependent effects of contact impedance on fdEIT in stroke detection in terms of the current distribution beneath the electrodes and the contact impedance imbalance between two measuring electrodes. The results showed that the contact impedance at high frequencies (>100 kHz) significantly changed the current distribution beneath the electrode, leading to nonnegligible errors in boundary voltages and artifacts in reconstructed images. The contact impedance imbalance at low frequencies (<1 kHz) also caused significant measurement errors. We conclude that the contact impedance has critical frequency-dependent influences on fdEIT and further studies on reducing such influences are necessary to improve the application of fdEIT in stroke detection. PMID:28107524

  18. Age-Related Changes in Bimanual Instrument Playing with Rhythmic Cueing

    PubMed Central

    Kim, Soo Ji; Cho, Sung-Rae; Yoo, Ga Eul

    2017-01-01

    Deficits in bimanual coordination of older adults have been demonstrated to significantly limit their functioning in daily life. As a bimanual sensorimotor task, instrument playing has great potential for motor and cognitive training in advanced age. While the process of matching a person’s repetitive movements to auditory rhythmic cueing during instrument playing was documented to involve motor and attentional control, investigation into whether the level of cognitive functioning influences the ability to rhythmically coordinate movement to an external beat in older populations is relatively limited. Therefore, the current study aimed to examine how timing accuracy during bimanual instrument playing with rhythmic cueing differed depending on the degree of participants’ cognitive aging. Twenty-one young adults, 20 healthy older adults, and 17 older adults with mild dementia participated in this study. Each participant tapped an electronic drum in time to the rhythmic cueing provided using both hands simultaneously and in alternation. During bimanual instrument playing with rhythmic cueing, the mean and variability of synchronization errors were measured and compared across the groups and the tempo of cueing during each type of tapping task. Correlations of such timing parameters with cognitive measures were also analyzed. The results showed that the group factor resulted in significant differences in the synchronization error-related parameters. During bimanual tapping tasks, cognitive decline resulted in differences in synchronization errors between younger adults and older adults with mild dementia. Also, in terms of variability of synchronization errors, younger adults showed significant differences in maintaining timing performance from older adults with and without mild dementia, which may be attributed to decreased processing time for bimanual coordination due to aging. Significant correlations were observed between variability of synchronization errors and performance of cognitive tasks involving executive control and cognitive flexibility when asked for bimanual coordination in response to external timing cues at adjusted tempi. Also, significant correlations with cognitive measures were more prevalent in variability of synchronization errors during alternating tapping compared to simultaneous tapping. The current study supports the idea that bimanual tapping may be predictive of cognitive processing in older adults. Also, tempo and type of movement required for instrument playing both involve cognitive and motor loads at different levels, and such variables could be important factors for determining the complexity of the task and the involved task requirements for interventions using instrument playing. PMID:29085309

  19. Stratospheric wind errors, initial states and forecast skill in the GLAS general circulation model

    NASA Technical Reports Server (NTRS)

    Tenenbaum, J.

    1983-01-01

    Relations between stratospheric wind errors, initial states and 500 mb skill are investigated using the GLAS general circulation model initialized with FGGE data. Erroneous stratospheric winds are seen in all current general circulation models, appearing also as weak shear above the subtropical jet and as cold polar stratospheres. In this study it is shown that the more anticyclonic large-scale flows are correlated with large forecast stratospheric winds. In addition, it is found that for North America the resulting errors are correlated with initial state jet stream accelerations while for East Asia the forecast winds are correlated with initial state jet strength. Using 500 mb skill scores over Europe at day 5 to measure forecast performance, it is found that both poor forecast skill and excessive stratospheric winds are correlated with more anticyclonic large-scale flows over North America. It is hypothesized that the resulting erroneous kinetic energy contributes to the poor forecast skill, and that the problem is caused by a failure in the modeling of the stratospheric energy cycle in current general circulation models independent of vertical resolution.

  20. Mixtures of Berkson and classical covariate measurement error in the linear mixed model: Bias analysis and application to a study on ultrafine particles.

    PubMed

    Deffner, Veronika; Küchenhoff, Helmut; Breitner, Susanne; Schneider, Alexandra; Cyrys, Josef; Peters, Annette

    2018-05-01

    The ultrafine particle measurements in the Augsburger Umweltstudie, a panel study conducted in Augsburg, Germany, exhibit measurement error from various sources. Measurements of mobile devices show classical possibly individual-specific measurement error; Berkson-type error, which may also vary individually, occurs, if measurements of fixed monitoring stations are used. The combination of fixed site and individual exposure measurements results in a mixture of the two error types. We extended existing bias analysis approaches to linear mixed models with a complex error structure including individual-specific error components, autocorrelated errors, and a mixture of classical and Berkson error. Theoretical considerations and simulation results show, that autocorrelation may severely change the attenuation of the effect estimations. Furthermore, unbalanced designs and the inclusion of confounding variables influence the degree of attenuation. Bias correction with the method of moments using data with mixture measurement error partially yielded better results compared to the usage of incomplete data with classical error. Confidence intervals (CIs) based on the delta method achieved better coverage probabilities than those based on Bootstrap samples. Moreover, we present the application of these new methods to heart rate measurements within the Augsburger Umweltstudie: the corrected effect estimates were slightly higher than their naive equivalents. The substantial measurement error of ultrafine particle measurements has little impact on the results. The developed methodology is generally applicable to longitudinal data with measurement error. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
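
    To make the classical/Berkson distinction concrete, the following sketch (a minimal simulation under assumed parameter values, not the study's model or data) fits a simple linear exposure-response relationship and shows that classical error attenuates the slope estimate while Berkson error leaves it approximately unbiased, though less precise.

      import numpy as np

      rng = np.random.default_rng(1)
      n, beta, sigma_x, sigma_u = 50000, 1.0, 1.0, 0.8   # assumed values

      # Classical error: observed exposure = true + noise -> attenuated slope
      x_true = rng.normal(0, sigma_x, n)
      y = beta * x_true + rng.normal(0, 0.5, n)
      x_classical = x_true + rng.normal(0, sigma_u, n)

      # Berkson error: true exposure = assigned value + noise (e.g. a fixed-site
      # measurement assigned to individuals) -> slope approximately unbiased
      x_assigned = rng.normal(0, sigma_x, n)
      x_true_b = x_assigned + rng.normal(0, sigma_u, n)
      y_b = beta * x_true_b + rng.normal(0, 0.5, n)

      def slope(x, y):
          return np.cov(x, y)[0, 1] / np.var(x)

      print("true slope:              ", beta)
      print("slope with classical err:", round(slope(x_classical, y), 3))
      print("slope with Berkson err:  ", round(slope(x_assigned, y_b), 3))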

  1. Errors due to the truncation of the computational domain in static three-dimensional electrical impedance tomography.

    PubMed

    Vauhkonen, P J; Vauhkonen, M; Kaipio, J P

    2000-02-01

    In electrical impedance tomography (EIT), an approximation for the internal resistivity distribution is computed based on the knowledge of the injected currents and measured voltages on the surface of the body. The currents spread out in three dimensions and therefore off-plane structures have a significant effect on the reconstructed images. A question arises: how far from the current carrying electrodes should the discretized model of the object be extended? If the model is truncated too near the electrodes, errors are produced in the reconstructed images. On the other hand if the model is extended very far from the electrodes the computational time may become too long in practice. In this paper the model truncation problem is studied with the extended finite element method. Forward solutions obtained using so-called infinite elements, long finite elements and separable long finite elements are compared to the correct solution. The effects of the truncation of the computational domain on the reconstructed images are also discussed and results from the three-dimensional (3D) sensitivity analysis are given. We show that if the finite element method with ordinary elements is used in static 3D EIT, the dimension of the problem can become fairly large if the errors associated with the domain truncation are to be avoided.

  2. Impulsive responding in threat and reward contexts as a function of PTSD symptoms and trait disinhibition.

    PubMed

    Sadeh, Naomi; Spielberg, Jeffrey M; Hayes, Jasmeet P

    2018-01-01

    We examined current posttraumatic stress disorder (PTSD) symptoms, trait disinhibition, and affective context as contributors to impulsive and self-destructive behavior in 94 trauma-exposed Veterans. Participants completed an affective Go/No-Go task (GNG) with different emotional contexts (threat, reward, and a multidimensional threat/reward condition) and current PTSD, trait disinhibition, and risky/self-destructive behavior measures. PTSD interacted with trait disinhibition to explain recent engagement in risky/self-destructive behavior, with Veterans scoring high on trait disinhibition and current PTSD symptoms reporting the highest levels of these behaviors. On the GNG task, commission errors were also associated with the interaction of PTSD symptoms and trait disinhibition. Specifically, PTSD symptoms were associated with greater commission errors in threat vs. reward contexts for individuals who were low on trait disinhibition. In contrast, veterans high on PTSD and trait disinhibition exhibited the greatest number of commission errors in the multidimensional affective context that involved both threat and reward processing. Results highlight the interactive effects of PTSD and disinhibited personality traits, as well as threat and reward systems, as risk factors for impulsive and self-destructive behavior in trauma-exposed groups. Findings have clinical implications for understanding heterogeneity in the expression of PTSD and its association with disinhibited behavior. Copyright © 2017 Elsevier Ltd. All rights reserved.

  3. Development of a Dynamic Biomechanical Model for Load Carriage: Phase 4, Part C2: Assessment of Pressure Measurement Systems on Curved Surfaces for the Dynamic Biomechanical Model of Human Load Carriage

    DTIC Science & Technology

    2005-08-01

    excellent accuracy compared to the F-Scan® during trials on the hip model. Both systems showed a certain degree of variation...in appendix C. The experimental design consisted of three steps (see Figure 1). Two were undertaken using a physical model of the shoulder in order...increase in accuracy error compared to Table 1 suggests that the current software for the XSENSOR® system is not designed to compensate for errors

  4. Optimizer convergence and local minima errors and their clinical importance

    NASA Astrophysics Data System (ADS)

    Jeraj, Robert; Wu, Chuan; Mackie, Thomas R.

    2003-09-01

    Two of the errors common in the inverse treatment planning optimization have been investigated. The first error is the optimizer convergence error, which appears because of non-perfect convergence to the global or local solution, usually caused by a non-zero stopping criterion. The second error is the local minima error, which occurs when the objective function is not convex and/or the feasible solution space is not convex. The magnitude of the errors, their relative importance in comparison to other errors as well as their clinical significance in terms of tumour control probability (TCP) and normal tissue complication probability (NTCP) were investigated. Two inherently different optimizers, a stochastic simulated annealing and deterministic gradient method were compared on a clinical example. It was found that for typical optimization the optimizer convergence errors are rather small, especially compared to other convergence errors, e.g., convergence errors due to inaccuracy of the current dose calculation algorithms. This indicates that stopping criteria could often be relaxed leading into optimization speed-ups. The local minima errors were also found to be relatively small and typically in the range of the dose calculation convergence errors. Even for the cases where significantly higher objective function scores were obtained the local minima errors were not significantly higher. Clinical evaluation of the optimizer convergence error showed good correlation between the convergence of the clinical TCP or NTCP measures and convergence of the physical dose distribution. On the other hand, the local minima errors resulted in significantly different TCP or NTCP values (up to a factor of 2) indicating clinical importance of the local minima produced by physical optimization.
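
    The two error types above can be illustrated with a toy one-dimensional optimization (our own illustration on an arbitrary non-convex function, unrelated to the clinical dose optimization in the paper): a loose stopping criterion leaves a convergence error, and different starting points settle into different local minima.

      import numpy as np

      rng = np.random.default_rng(2)

      def f(x):
          # toy non-convex objective with several local minima
          return 0.1 * x**2 + np.sin(3.0 * x)

      def gradient_descent(x0, tol, lr=0.01, max_iter=100000):
          x = x0
          for _ in range(max_iter):
              g = 0.2 * x + 3.0 * np.cos(3.0 * x)   # analytic gradient
              if abs(g) < tol:                      # stopping criterion
                  break
              x -= lr * g
          return x

      # Convergence error: a loose stopping criterion stops short of the minimum.
      for tol in (1e-1, 1e-6):
          x = gradient_descent(x0=2.0, tol=tol)
          print(f"tol={tol:g}: x={x:.4f}, f={f(x):.4f}")

      # Local minima error: different starts converge to different minima.
      finals = [f(gradient_descent(x0, tol=1e-6)) for x0 in rng.uniform(-5, 5, 20)]
      print("objective spread over random starts:",
            round(min(finals), 3), "to", round(max(finals), 3))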

  5. Optimizer convergence and local minima errors and their clinical importance.

    PubMed

    Jeraj, Robert; Wu, Chuan; Mackie, Thomas R

    2003-09-07

    Two of the errors common in the inverse treatment planning optimization have been investigated. The first error is the optimizer convergence error, which appears because of non-perfect convergence to the global or local solution, usually caused by a non-zero stopping criterion. The second error is the local minima error, which occurs when the objective function is not convex and/or the feasible solution space is not convex. The magnitude of the errors, their relative importance in comparison to other errors as well as their clinical significance in terms of tumour control probability (TCP) and normal tissue complication probability (NTCP) were investigated. Two inherently different optimizers, a stochastic simulated annealing and deterministic gradient method were compared on a clinical example. It was found that for typical optimization the optimizer convergence errors are rather small, especially compared to other convergence errors, e.g., convergence errors due to inaccuracy of the current dose calculation algorithms. This indicates that stopping criteria could often be relaxed leading into optimization speed-ups. The local minima errors were also found to be relatively small and typically in the range of the dose calculation convergence errors. Even for the cases where significantly higher objective function scores were obtained the local minima errors were not significantly higher. Clinical evaluation of the optimizer convergence error showed good correlation between the convergence of the clinical TCP or NTCP measures and convergence of the physical dose distribution. On the other hand, the local minima errors resulted in significantly different TCP or NTCP values (up to a factor of 2) indicating clinical importance of the local minima produced by physical optimization.

  6. Analysis of field errors for LARP Nb3Sn HQ03 quadrupole magnet

    DOE PAGES

    Wang, Xiaorong; Ambrosio, Giorgio; Chlachidze, Guram; ...

    2016-12-01

    The U.S. LHC Accelerator Research Program, in close collaboration with CERN, has developed three generations of high-gradient quadrupole (HQ) Nb3Sn model magnets to support the development of the 150 mm aperture Nb3Sn quadrupole magnets for the High-Luminosity LHC. The latest generation, HQ03, featured coils with better uniformity of coil dimensions and properties than the earlier generations. We tested the HQ03 magnet at FNAL, including a field quality study. The profiles of low-order harmonics along the magnet aperture observed at 15 kA, 1.9 K can be traced back to the assembled coil pack before the magnet assembly. Based on the measured harmonics in the magnet center region, the coil block positioning tolerance was analyzed and compared with the earlier HQ01 and HQ02 magnets to correlate with coil and magnet fabrication. To study the capability of correcting the low-order non-allowed field errors, magnetic shims were installed in HQ03; the measured shim contribution agreed well with the calculation. For the persistent-current effect, the measured a4 can be related to a strand magnetization approximately 4% higher in one coil with respect to the other three coils. Lastly, we compare the field errors due to the inter-strand coupling currents between HQ03 and HQ02.

  7. Error sources affecting thermocouple thermometry in RF electromagnetic fields.

    PubMed

    Chakraborty, D P; Brezovich, I A

    1982-03-01

    Thermocouple thermometry errors in radiofrequency (typically 13.56 MHz) electromagnetic fields such as are encountered in hyperthermia are described. RF currents capacitively or inductively coupled into the thermocouple-detector circuit produce errors which are a combination of interference, i.e., 'pick-up' error, and genuine rf-induced temperature changes at the junction of the thermocouple. The former can be eliminated by adequate filtering and shielding; the latter is due to (a) junction current heating, in which the generally unequal resistances of the thermocouple wires cause a net current flow from the higher to the lower resistance wire across the junction, (b) heating in the surrounding resistive material (tissue in hyperthermia), and (c) eddy current heating of the thermocouple wires in the oscillating magnetic field. Low frequency theories are used to estimate these errors under given operating conditions, and relevant experiments demonstrating these effects and the precautions necessary to minimize the errors are described. It is shown that at 13.56 MHz and voltage levels below 100 V rms these errors do not exceed 0.1 degrees C if the precautions are observed and thermocouples with adequate insulation (e.g., Bailey IT-18) are used. Results of this study are currently being used in our clinical work with good success.

  8. Conflicting calculations of pelvic incidence and pelvic tilt secondary to transitional lumbosacral anatomy (lumbarization of S-1): case report.

    PubMed

    Crawford, Charles H; Glassman, Steven D; Gum, Jeffrey L; Carreon, Leah Y

    2017-01-01

    Advancements in the understanding of adult spinal deformity have led to a greater awareness of the role of the pelvis in maintaining sagittal balance and alignment. Pelvic incidence has emerged as a key radiographic measure and should closely match lumbar lordosis. As proper measurement of the pelvic incidence requires accurate identification of the S-1 endplate, lumbosacral transitional anatomy may lead to errors. The purpose of this study is to demonstrate how lumbosacral transitional anatomy may lead to errors in the measurement of pelvic parameters. The current case highlights one of the potential complications that can be avoided with awareness. The authors report the case of a 61-year-old man who had undergone prior lumbar surgeries and then presented with symptomatic lumbar stenosis and sagittal malalignment. Radiographs showed a lumbarized S-1. Prior numbering of the segments in previous surgical and radiology reports led to a pelvic incidence calculation of 61°. Corrected numbering of the segments using the lumbarized S-1 endplate led to a pelvic incidence calculation of 48°. Without recognition of the lumbosacral anatomy, overcorrection of the lumbar lordosis might have led to negative sagittal balance and the propensity to develop proximal junction failure. This case illustrates that improper identification of lumbosacral transitional anatomy may lead to errors that could affect clinical outcome. Awareness of this potential error may help improve patient outcomes.

  9. Analytical and Photogrammetric Characterization of a Planar Tetrahedral Truss

    NASA Technical Reports Server (NTRS)

    Wu, K. Chauncey; Adams, Richard R.; Rhodes, Marvin D.

    1990-01-01

    Future space science missions are likely to require near-optical quality reflectors which are supported by a stiff truss structure. This support truss should conform closely with its intended shape to minimize its contribution to the overall surface error of the reflector. The current investigation was conducted to evaluate the planar surface accuracy of a regular tetrahedral truss structure by comparing the results of predicted and measured node locations. The truss is a 2-ring hexagonal structure composed of 102 equal-length truss members. Each truss member is nominally 2 meters in length between node centers and is composed of a graphite/epoxy tube with aluminum nodes and joints. The axial stiffness and the length variation of the truss components were determined experimentally and incorporated into a static finite element analysis of the truss. From this analysis, the root mean square (RMS) surface error of the truss was predicted to be 0.11 mm (0.004 in). Photogrammetry tests were performed on the assembled truss to measure the normal displacements of the upper surface nodes and to determine if the truss would maintain its intended shape when subjected to repeated assembly. Considering the variation in the truss component lengths, the measured RMS error of 0.14 mm (0.006 in) in the assembled truss is relatively small. The test results also indicate that a repeatable truss surface is achievable. Several potential sources of error were identified and discussed.

  10. Test-retest reliability of behavioral measures of impulsive choice, impulsive action, and inattention.

    PubMed

    Weafer, Jessica; Baggott, Matthew J; de Wit, Harriet

    2013-12-01

    Behavioral measures of impulsivity are widely used in substance abuse research, yet relatively little attention has been devoted to establishing their psychometric properties, especially their reliability over repeated administration. The current study examined the test-retest reliability of a battery of standardized behavioral impulsivity tasks, including measures of impulsive choice (i.e., delay discounting, probability discounting, and the Balloon Analogue Risk Task), impulsive action (i.e., the stop signal task, the go/no-go task, and commission errors on the continuous performance task), and inattention (i.e., attention lapses on a simple reaction time task and omission errors on the continuous performance task). Healthy adults (n = 128) performed the battery on two separate occasions. Reliability estimates for the individual tasks ranged from moderate to high, with Pearson correlations within the specific impulsivity domains as follows: impulsive choice (r range: .76-.89, ps < .001); impulsive action (r range: .65-.73, ps < .001); and inattention (r range: .38-.42, ps < .001). Additionally, the influence of day-to-day fluctuations in mood, as measured by the Profile of Mood States, was assessed in relation to variability in performance on each of the behavioral tasks. Change in performance on the delay discounting task was significantly associated with change in positive mood and arousal. No other behavioral measures were significantly associated with mood. In sum, the current analysis demonstrates that behavioral measures of impulsivity are reliable measures and thus can be confidently used to assess various facets of impulsivity as intermediate phenotypes for drug abuse.
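
    As a minimal illustration of the reliability estimates reported above (with made-up data, not the study's; the noise level is an assumption), test-retest reliability for a task score can be computed as the Pearson correlation between session-1 and session-2 performance:

      import numpy as np

      rng = np.random.default_rng(3)
      n = 128                                    # sample size as in the study
      trait = rng.normal(0, 1, n)                # stable individual differences
      session1 = trait + rng.normal(0, 0.5, n)   # session-specific noise
      session2 = trait + rng.normal(0, 0.5, n)

      r = np.corrcoef(session1, session2)[0, 1]
      print(f"test-retest Pearson r = {r:.2f}")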

  11. Test-retest reliability of behavioral measures of impulsive choice, impulsive action, and inattention

    PubMed Central

    Weafer, Jessica; Baggott, Matthew J.; de Wit, Harriet

    2014-01-01

    Behavioral measures of impulsivity are widely used in substance abuse research, yet relatively little attention has been devoted to establishing their psychometric properties, especially their reliability over repeated administration. The current study examined the test-retest reliability of a battery of standardized behavioral impulsivity tasks, including measures of impulsive choice (delay discounting, probability discounting, and the Balloon Analogue Risk Task), impulsive action (the stop signal task, the go/no-go task, and commission errors on the continuous performance task), and inattention (attention lapses on a simple reaction time task and omission errors on the continuous performance task). Healthy adults (n=128) performed the battery on two separate occasions. Reliability estimates for the individual tasks ranged from moderate to high, with Pearson correlations within the specific impulsivity domains as follows: impulsive choice (r = .76 - .89, ps < .001); impulsive action (r = .65 - .73, ps < .001); and inattention (r = .38-.42, ps < .001). Additionally, the influence of day-to-day fluctuations in mood as measured by the Profile of Mood States was assessed in relation to variability in performance on each of the behavioral tasks. Change in performance on the delay discounting task was significantly associated with change in positive mood and arousal. No other behavioral measures were significantly associated with mood. In sum, the current analysis demonstrates that behavioral measures of impulsivity are reliable measures and thus can be confidently used to assess various facets of impulsivity as intermediate phenotypes for drug abuse. PMID:24099351

  12. Focus control enhancement and on-product focus response analysis methodology

    NASA Astrophysics Data System (ADS)

    Kim, Young Ki; Chen, Yen-Jen; Hao, Xueli; Samudrala, Pavan; Gomez, Juan-Manuel; Mahoney, Mark O.; Kamalizadeh, Ferhad; Hanson, Justin K.; Lee, Shawn; Tian, Ye

    2016-03-01

    With decreasing CDOF (Critical Depth Of Focus) for 20/14nm technology and beyond, focus errors are becoming increasingly critical for on-product performance. Current on-product focus control techniques in high volume manufacturing are limited; it is difficult to define measurable focus error and optimize the focus response on product with existing methods due to the lack of credible focus measurement methodologies. Next to developments in imaging and focus control capability of scanners and general tool stability maintenance, on-product focus control improvements are also required to meet on-product imaging specifications. In this paper, we discuss focus monitoring, wafer (edge) fingerprint correction and on-product focus budget analysis through a diffraction based focus (DBF) measurement methodology. Several examples will be presented showing better focus response and control on product wafers. Also, a method will be discussed for a focus interlock automation system on product for a high volume manufacturing (HVM) environment.

  13. Effects of instrument imperfections on quantitative scanning transmission electron microscopy.

    PubMed

    Krause, Florian F; Schowalter, Marco; Grieb, Tim; Müller-Caspary, Knut; Mehrtens, Thorsten; Rosenauer, Andreas

    2016-02-01

    Several instrumental imperfections of transmission electron microscopes are characterized and their effects on the results of quantitative scanning transmission electron microscopy (STEM) are investigated and quantified using simulations. Methods to either avoid influences of these imperfections during acquisition or to include them in reference calculations are proposed. In particular, distortions inflicted on the diffraction pattern by an image-aberration corrector can cause severe errors of more than 20% if not accounted for. A procedure for their measurement is proposed here. Furthermore, afterglow phenomena and nonlinear behavior of the detector itself can lead to incorrect normalization of measured intensities. Single electrons accidentally impinging on the detector are another source of error but can also be exploited for threshold-less calibration of STEM images to absolute dose, incident beam current determination and measurement of the detector sensitivity. Copyright © 2015 Elsevier B.V. All rights reserved.

  14. Correlation techniques to determine model form in robust nonlinear system realization/identification

    NASA Technical Reports Server (NTRS)

    Stry, Greselda I.; Mook, D. Joseph

    1991-01-01

    The fundamental challenge in identification of nonlinear dynamic systems is determining the appropriate form of the model. A robust technique is presented which essentially eliminates this problem for many applications. The technique is based on the Minimum Model Error (MME) optimal estimation approach. A detailed literature review is included in which fundamental differences between the current approach and previous work are described. The most significant feature is the ability to identify nonlinear dynamic systems without prior assumptions regarding the form of the nonlinearities, in contrast to existing nonlinear identification approaches which usually require detailed assumptions about the nonlinearities. Model form is determined via statistical correlation of the MME optimal state estimates with the MME optimal model error estimates. The example illustrations indicate that the method is robust with respect to prior ignorance of the model, and with respect to measurement noise, measurement frequency, and measurement record length.

  15. Poster - 49: Assessment of Synchrony respiratory compensation error for CyberKnife liver treatment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Ming; Cygler,

    The goal of this work is to quantify respiratory motion compensation errors for liver tumor patients treated by the CyberKnife system with Synchrony tracking, to identify patients with the smallest tracking errors and to eventually help coach patients' breathing patterns to minimize dose delivery errors. The accuracy of CyberKnife Synchrony respiratory motion compensation was assessed for 37 patients treated for liver lesions by analyzing data from system logfiles. A predictive model is used to modulate the direction of individual beams during dose delivery based on the positions of internally implanted fiducials determined using an orthogonal x-ray imaging system and the current location of LED external markers. For each x-ray pair acquired, system logfiles report the prediction error, the difference between the measured and predicted fiducial positions, and the delivery error, which is an estimate of the statistical error in the model overcoming the latency between x-ray acquisition and robotic repositioning. The total error was calculated at the time of each x-ray pair, for the number of treatment fractions and the number of patients, giving the average respiratory motion compensation error in three dimensions. The 99th percentile for the total radial error is 3.85 mm, with the highest contribution of 2.79 mm in the superior/inferior (S/I) direction. The absolute mean compensation error is 1.78 mm radially, with a 1.27 mm contribution in the S/I direction. Regions of high total error may provide insight into features predicting groups of patients with larger or smaller total errors.
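
    A minimal sketch of the kind of summary statistics reported above (synthetic per-axis error samples standing in for the logfile data; the distribution parameters are assumptions chosen only to make the S/I component dominant):

      import numpy as np

      rng = np.random.default_rng(4)
      n = 5000  # synthetic stand-in for logged x-ray time points

      # assumed per-axis compensation errors in mm
      err_si = rng.normal(0, 1.0, n)
      err_lr = rng.normal(0, 0.6, n)
      err_ap = rng.normal(0, 0.6, n)

      radial = np.sqrt(err_si**2 + err_lr**2 + err_ap**2)

      print("mean radial error (mm):     ", round(radial.mean(), 2))
      print("99th percentile radial (mm):", round(np.percentile(radial, 99), 2))
      print("mean |S/I| component (mm):  ", round(np.abs(err_si).mean(), 2))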

  16. Real-time diamagnetic flux measurements on ASDEX Upgrade.

    PubMed

    Giannone, L; Geiger, B; Bilato, R; Maraschek, M; Odstrčil, T; Fischer, R; Fuchs, J C; McCarthy, P J; Mertens, V; Schuhbeck, K H

    2016-05-01

    Real-time diamagnetic flux measurements are now available on ASDEX Upgrade. In contrast to the majority of diamagnetic flux measurements on other tokamaks, no analog summation of signals is necessary for measuring the change in toroidal flux or for removing contributions arising from unwanted coupling to the plasma and poloidal field coil currents. To achieve the highest possible sensitivity, the diamagnetic measurement and compensation coil integrators are triggered shortly before plasma initiation when the toroidal field coil current is close to its maximum. In this way, the integration time can be chosen to measure only the small changes in flux due to the presence of plasma. Two identical plasma discharges with positive and negative magnetic field have shown that the alignment error with respect to the plasma current is negligible. The measured diamagnetic flux is compared to that predicted by TRANSP simulations. The poloidal beta inferred from the diamagnetic flux measurement is compared to the values calculated from magnetic equilibrium reconstruction codes. The diamagnetic flux measurement and TRANSP simulation can be used together to estimate the coupled power in discharges with dominant ion cyclotron resonance heating.

  17. Image processing and analysis using neural networks for optometry area

    NASA Astrophysics Data System (ADS)

    Netto, Antonio V.; Ferreira de Oliveira, Maria C.

    2002-11-01

    In this work we describe the framework of a functional system for processing and analyzing images of the human eye acquired by the Hartmann-Shack technique (HS), in order to extract information to formulate a diagnosis of eye refractive errors (astigmatism, hypermetropia and myopia). The analysis is to be carried out using an Artificial Intelligence system based on Neural Nets, Fuzzy Logic and Classifier Combination. The major goal is to establish the basis of a new technology to effectively measure ocular refractive errors that is based on methods alternative to those adopted in current patented systems. Moreover, analysis of images acquired with the Hartmann-Shack technique may enable the extraction of additional information on the health of an eye under exam from the same image used to detect refraction errors.

  18. Robust double gain unscented Kalman filter for small satellite attitude estimation

    NASA Astrophysics Data System (ADS)

    Cao, Lu; Yang, Weiwei; Li, Hengnian; Zhang, Zhidong; Shi, Jianjun

    2017-08-01

    Limited by the low precision of small satellite sensors, high-performance estimation methods remain a popular research topic for attitude estimation. The Kalman filter (KF) and its extensions have been widely applied to satellite attitude estimation and have achieved considerable success. However, most of the existing methods make use of only the current time-step's a priori measurement residuals to complete the measurement update and state estimation, ignoring the extraction and utilization of the previous time-step's a posteriori measurement residuals. In addition, uncertain model errors always exist in the attitude dynamic system, which places higher performance requirements on the classical KF for the attitude estimation problem. Therefore, a novel robust double-gain unscented Kalman filter (RDG-UKF) is presented in this paper to satisfy these requirements for small satellite attitude estimation with low-precision sensors. It is assumed that the system state estimation errors are exhibited in the measurement residual; the new method therefore derives a second Kalman gain Kk2 to make full use of the previous time-step's measurement residual and improve the utilization efficiency of the measurement data. Moreover, the sequence orthogonal principle and the unscented transform (UT) strategy are introduced to enhance the robustness and performance of the novel Kalman filter and to reduce the influence of existing uncertain model errors. Numerical simulations show that the proposed RDG-UKF is more effective and robust in dealing with model errors and low-precision sensors for small satellite attitude estimation than the classical unscented Kalman filter (UKF).

  19. Accuracy of the Microsoft Kinect for measuring gait parameters during treadmill walking.

    PubMed

    Xu, Xu; McGorry, Raymond W; Chou, Li-Shan; Lin, Jia-Hua; Chang, Chien-Chi

    2015-07-01

    The measurement of gait parameters normally requires motion tracking systems combined with force plates, which limits the measurement to laboratory settings. In some recent studies, the possibility of using the portable, low cost, and marker-less Microsoft Kinect sensor to measure gait parameters on over-ground walking has been examined. The current study further examined the accuracy level of the Kinect sensor for assessment of various gait parameters during treadmill walking under different walking speeds. Twenty healthy participants walked on the treadmill and their full body kinematics data were measured by a Kinect sensor and a motion tracking system, concurrently. Spatiotemporal gait parameters and knee and hip joint angles were extracted from the two devices and were compared. The results showed that the accuracy levels when using the Kinect sensor varied across the gait parameters. Average heel strike frame errors were 0.18 and 0.30 frames for the right and left foot, respectively, while average toe off frame errors were -2.25 and -2.61 frames, respectively, across all participants and all walking speeds. The temporal gait parameters based purely on heel strike have less error than the temporal gait parameters based on toe off. The Kinect sensor can follow the trend of the joint trajectories for the knee and hip joints, though there was substantial error in magnitudes. The walking speed was also found to significantly affect the identified timing of toe off. The results of the study suggest that the Kinect sensor may be used as an alternative device to measure some gait parameters for treadmill walking, depending on the desired accuracy level. Copyright © 2015 The Authors. Published by Elsevier B.V. All rights reserved.
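
    For context on the frame errors reported above, a frame offset converts directly into a timing error at the sensor's frame rate; a 30 Hz frame rate is assumed in the short sketch below (an illustration only, using the error values quoted in the abstract).

      KINECT_FPS = 30.0   # assumed frame rate

      def frame_error_to_ms(frames, fps=KINECT_FPS):
          # convert a gait-event detection error in frames to milliseconds
          return 1000.0 * frames / fps

      for label, frames in [("heel strike (right)", 0.18),
                            ("heel strike (left)", 0.30),
                            ("toe off (right)", -2.25),
                            ("toe off (left)", -2.61)]:
          print(f"{label}: {frame_error_to_ms(frames):+.1f} ms")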

  20. Error analysis of high-rate GNSS precise point positioning for seismic wave measurement

    NASA Astrophysics Data System (ADS)

    Shu, Yuanming; Shi, Yun; Xu, Peiliang; Niu, Xiaoji; Liu, Jingnan

    2017-06-01

    High-rate GNSS precise point positioning (PPP) has been playing a more and more important role in providing precise positioning information in fast time-varying environments. Although kinematic PPP is commonly known to have a precision of a few centimeters, the precision of high-rate PPP within a short period of time has been reported recently with experiments to reach a few millimeters in the horizontal components and sub-centimeters in the vertical component to measure seismic motion, which is several times better than the conventional kinematic PPP practice. To fully understand the mechanism behind this seemingly surprising performance of high-rate PPP over short periods of time, we have carried out a theoretical error analysis of PPP and conducted the corresponding simulations. The theoretical analysis has clearly indicated that the high-rate PPP errors consist of two types: the residual systematic errors at the starting epoch, which affect high-rate PPP through the change of satellite geometry, and the time-varying systematic errors between the starting epoch and the current epoch. Both the theoretical error analysis and simulated results are fully consistent with and thus have unambiguously confirmed the reported high precision of high-rate PPP, which has been further affirmed here by the real data experiments, indicating that high-rate PPP can indeed achieve the millimeter level of precision in the horizontal components and the sub-centimeter level of precision in the vertical component to measure motion within a short period of time. The simulation results have clearly shown that the random noise of carrier phases and higher order ionospheric errors are two major factors affecting the precision of high-rate PPP within a short period of time. The experiments with real data have also indicated that the precision of PPP solutions can degrade to the cm level in both the horizontal and vertical components if the geometry of satellites is rather poor with a large DOP value.

  1. Computational fluid dynamics analysis and experimental study of a low measurement error temperature sensor used in climate observation.

    PubMed

    Yang, Jie; Liu, Qingquan; Dai, Wei

    2017-02-01

    To improve the air temperature observation accuracy, a low measurement error temperature sensor is proposed. A computational fluid dynamics (CFD) method is implemented to obtain temperature errors under various environmental conditions. Then, a temperature error correction equation is obtained by fitting the CFD results using a genetic algorithm method. The low measurement error temperature sensor, a naturally ventilated radiation shield, a thermometer screen, and an aspirated temperature measurement platform are characterized in the same environment to conduct the intercomparison. The aspirated platform served as an air temperature reference. The mean temperature errors of the naturally ventilated radiation shield and the thermometer screen are 0.74 °C and 0.37 °C, respectively. In contrast, the mean temperature error of the low measurement error temperature sensor is 0.11 °C. The mean absolute error and the root mean square error between the corrected results and the measured results are 0.008 °C and 0.01 °C, respectively. The correction equation allows the temperature error of the low measurement error temperature sensor to be reduced by approximately 93.8%. The low measurement error temperature sensor proposed in this research may be helpful to provide a relatively accurate air temperature result.
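
    The paper fits its correction equation to CFD results with a genetic algorithm; the sketch below is only a schematic stand-in that fits an assumed functional form (radiation-induced error increasing with solar irradiance and decreasing with wind speed) to synthetic data by ordinary least squares, to show how such a correction equation and its residual statistics can be obtained.

      import numpy as np

      rng = np.random.default_rng(5)

      # Synthetic "CFD" results: temperature error grows with solar radiation S
      # and shrinks with wind speed u (assumed form, not the paper's equation).
      S = rng.uniform(0, 1000, 200)        # W m-2
      u = rng.uniform(0.5, 10, 200)        # m s-1
      dT = 0.002 * S / np.sqrt(u) + rng.normal(0, 0.02, 200)   # degC

      # Fit dT ~ a * S / sqrt(u) + b by linear least squares.
      A = np.column_stack([S / np.sqrt(u), np.ones_like(S)])
      (a, b), *_ = np.linalg.lstsq(A, dT, rcond=None)

      residual = dT - (a * S / np.sqrt(u) + b)
      print(f"a={a:.4f}, b={b:.4f}")
      print(f"MAE={np.abs(residual).mean():.3f} degC, "
            f"RMSE={np.sqrt((residual**2).mean()):.3f} degC")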

  2. Evaluation of the predicted error of the soil moisture retrieval from C-band SAR by comparison against modelled soil moisture estimates over Australia

    PubMed Central

    Doubková, Marcela; Van Dijk, Albert I.J.M.; Sabel, Daniel; Wagner, Wolfgang; Blöschl, Günter

    2012-01-01

    The Sentinel-1 will carry onboard a C-band radar instrument that will map the European continent once every four days and the global land surface at least once every twelve days with finest 5 × 20 m spatial resolution. The high temporal sampling rate and operational configuration make Sentinel-1 of interest for operational soil moisture monitoring. Currently, updated soil moisture data are made available at 1 km spatial resolution as a demonstration service using Global Mode (GM) measurements from the Advanced Synthetic Aperture Radar (ASAR) onboard ENVISAT. The service demonstrates the potential of the C-band observations to monitor variations in soil moisture. Importantly, a retrieval error estimate is also available; these are needed to assimilate observations into models. The retrieval error is estimated by propagating sensor errors through the retrieval model. In this work, the existing ASAR GM retrieval error product is evaluated using independent top soil moisture estimates produced by the grid-based landscape hydrological model (AWRA-L) developed within the Australian Water Resources Assessment system (AWRA). The ASAR GM retrieval error estimate, an assumed prior AWRA-L error estimate and the variance in the respective datasets were used to spatially predict the root mean square error (RMSE) and the Pearson's correlation coefficient R between the two datasets. These were compared with the RMSE calculated directly from the two datasets. The predicted and computed RMSE showed a very high level of agreement in spatial patterns as well as good quantitative agreement; the RMSE was predicted within accuracy of 4% of saturated soil moisture over 89% of the Australian land mass. Predicted and calculated R maps corresponded within accuracy of 10% over 61% of the continent. The strong correspondence between the predicted and calculated RMSE and R builds confidence in the retrieval error model and derived ASAR GM error estimates. The ASAR GM and Sentinel-1 have the same basic physical measurement characteristics, and therefore very similar retrieval error estimation method can be applied. Because of the expected improvements in radiometric resolution of the Sentinel-1 backscatter measurements, soil moisture estimation errors can be expected to be an order of magnitude less than those for ASAR GM. This opens the possibility for operationally available medium resolution soil moisture estimates with very well-specified errors that can be assimilated into hydrological or crop yield models, with potentially large benefits for land-atmosphere fluxes, crop growth, and water balance monitoring and modelling. PMID:23483015
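
    The core of the evaluation is predicting the expected mismatch between two independent, error-prone soil moisture estimates from their respective error variances. A compact sketch of that idea follows (illustrative numbers only, not the paper's values; errors are assumed independent and additive on top of a common true signal):

      import numpy as np

      rng = np.random.default_rng(6)
      n = 3650  # daily samples

      sigma_signal = 0.06   # assumed std of true soil moisture variations (m3/m3)
      sigma_sar    = 0.05   # assumed ASAR GM retrieval error (m3/m3)
      sigma_model  = 0.03   # assumed AWRA-L model error (m3/m3)

      truth = rng.normal(0, sigma_signal, n)
      sar   = truth + rng.normal(0, sigma_sar, n)
      model = truth + rng.normal(0, sigma_model, n)

      # Statistics predicted from the error model (independent additive errors)
      rmse_pred = np.sqrt(sigma_sar**2 + sigma_model**2)
      r_pred = sigma_signal**2 / np.sqrt((sigma_signal**2 + sigma_sar**2) *
                                         (sigma_signal**2 + sigma_model**2))

      # Statistics computed directly from the two datasets
      rmse_calc = np.sqrt(np.mean((sar - model)**2))
      r_calc = np.corrcoef(sar, model)[0, 1]

      print(f"RMSE predicted {rmse_pred:.3f} vs calculated {rmse_calc:.3f}")
      print(f"R    predicted {r_pred:.2f} vs calculated {r_calc:.2f}")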

  3. Static resistivity image of a cubic saline phantom in magnetic resonance electrical impedance tomography (MREIT).

    PubMed

    Lee, Byung Il; Oh, Suk Hoon; Woo, Eung Je; Lee, Soo Yeol; Cho, Min Hyeong; Kwon, Ohin; Seo, Jin Keun; Baek, Woon Sik

    2003-05-01

    In magnetic resonance electrical impedance tomography (MREIT) we inject currents through electrodes placed on the surface of a subject and try to reconstruct cross-sectional resistivity (or conductivity) images using internal magnetic flux density as well as boundary voltage measurements. In this paper we present a static resistivity image of a cubic saline phantom (50 x 50 x 50 mm3) containing a cylindrical sausage object with an average resistivity value of 123.7 ohms cm. Our current MREIT system is based on an experimental 0.3 T MRI scanner and a current injection apparatus. We captured MR phase images of the phantom while injecting currents of 28 mA through two pairs of surface electrodes. We computed current density images from magnetic flux density images that are proportional to the MR phase images. From the current density images and boundary voltage data we reconstructed a cross-sectional resistivity image within a central region of 38.5 x 38.5 mm2 at the middle of the phantom using the J-substitution algorithm. The spatial resolution of the reconstructed image was 64 x 64 and the reconstructed average resistivity of the sausage was 117.7 ohms cm. Even though the error in the reconstructed average resistivity value was small, the relative L2-error of the reconstructed image was 25.5% due to the noise in the measured MR phase images. We expect improvements in the accuracy by utilizing an MRI scanner with higher SNR and by increasing the size of the voxels, sacrificing spatial resolution.

  4. The course correction implementation of the inertial navigation system based on the information from the aircraft satellite navigation system before take-off

    NASA Astrophysics Data System (ADS)

    Markelov, V.; Shukalov, A.; Zharinov, I.; Kostishin, M.; Kniga, I.

    2016-04-01

    The use of a course correction before aircraft take-off, following inaccurate azimuth alignment of the inertial navigation system (INS) based on the platform attitude-and-heading reference system, is considered in the paper. The course correction is performed based on the track angle defined by the information received from the satellite navigation system (SNS). The correction involves calculating the track error during ground taxiing along straight sections before take-off and entering it into the onboard digital computational system as an amendment for use in the current flight. The track error is calculated by statistical evaluation of the comparison between the track angle defined by the SNS information and the current course measured by the INS, over a given number of measurements within the available time interval. Test results for the course correction and recommendations for its application are given in the paper. The course correction based on the information from the SNS can be used to improve the accuracy of determining an aircraft's path after accelerated INS preparation with inaccurate initial azimuth alignment.
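
    A minimal sketch of the correction idea (our own illustration; the heading values, noise levels and the assumption of a single straight taxi segment are all made up): average the difference between the SNS-derived track angle and the INS heading over a straight section, then apply the mean as a course amendment.

      import numpy as np

      rng = np.random.default_rng(7)
      n = 600                      # taxi samples (e.g. 1 Hz for 10 minutes)
      true_heading = 87.0          # deg, constant on a straight taxiway
      ins_azimuth_error = 1.4      # deg, unknown error from inaccurate alignment

      sns_track = true_heading + rng.normal(0, 0.5, n)     # noisy SNS track angle
      ins_heading = true_heading + ins_azimuth_error + rng.normal(0, 0.05, n)

      # statistical evaluation over the straight section: mean wrapped difference
      diff = ((ins_heading - sns_track) + 180.0) % 360.0 - 180.0
      track_error = diff.mean()
      print(f"estimated course amendment: {track_error:.2f} deg")

      corrected_heading = ins_heading - track_error
      print(f"residual heading error: {np.mean(corrected_heading - true_heading):+.3f} deg")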

  5. Endoscopic Stone Measurement During Ureteroscopy.

    PubMed

    Ludwig, Wesley W; Lim, Sunghwan; Stoianovici, Dan; Matlaga, Brian R

    2018-01-01

    Currently, stone size cannot be accurately measured while performing flexible ureteroscopy (URS). We developed novel software for ureteroscopic, stone size measurement, and then evaluated its performance. A novel application capable of measuring stone fragment size, based on the known distance of the basket tip in the ureteroscope's visual field, was designed and calibrated in a laboratory setting. Complete URS procedures were recorded and 30 stone fragments were extracted and measured using digital calipers. The novel software program was applied to the recorded URS footage to obtain ureteroscope-derived stone size measurements. These ureteroscope-derived measurements were then compared with the actual-measured fragment size. The median longitudinal and transversal errors were 0.14 mm (95% confidence interval [CI] 0.1, 0.18) and 0.09 mm (95% CI 0.02, 0.15), respectively. The overall software accuracy and precision were 0.17 and 0.15 mm, respectively. The longitudinal and transversal measurements obtained by the software and digital calipers were highly correlated (r = 0.97 and 0.93). Neither stone size nor stone type was correlated with error measurements. This novel method and software reliably measured stone fragment size during URS. The software ultimately has the potential to make URS safer and more efficient.
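
    One simple way a known in-frame reference such as the basket tip could be used to scale pixel measurements to millimetres is sketched below; this is a hypothetical illustration only, and the published software's calibration method may differ.

      # Hypothetical example: scale pixel measurements by a reference object of
      # known physical size visible in the same frame (here, the basket tip).
      BASKET_TIP_MM = 1.0          # assumed known physical size of the reference

      def stone_size_mm(stone_px, basket_tip_px):
          mm_per_px = BASKET_TIP_MM / basket_tip_px
          return stone_px * mm_per_px

      # e.g. basket tip spans 42 px and the stone's long axis spans 130 px
      print(f"estimated stone size: {stone_size_mm(130, 42):.2f} mm")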

  6. Optimal subsystem approach to multi-qubit quantum state discrimination and experimental investigation

    NASA Astrophysics Data System (ADS)

    Xue, ShiChuan; Wu, JunJie; Xu, Ping; Yang, XueJun

    2018-02-01

    Quantum computing is a significant computing capability which is superior to classical computing because of its superposition feature. Distinguishing several quantum states from quantum algorithm outputs is often a vital computational task. In most cases, the quantum states tend to be non-orthogonal due to superposition; quantum mechanics has proved that perfect outcomes cannot be achieved by measurement, forcing repeated measurements. Hence, it is important to determine the optimum measuring method, which requires fewer repetitions and a lower error rate. However, extending current measurement approaches, mainly aimed at quantum cryptography, to multi-qubit situations for quantum computing confronts challenges, such as conducting global operations, which has considerable costs in the experimental realm. Therefore, in this study, we have proposed an optimum subsystem method to avoid these difficulties. We have provided an analysis of the comparison between the reduced subsystem method and the global minimum error method for two-qubit problems; the conclusions have been verified experimentally. The results showed that the subsystem method could effectively discriminate non-orthogonal two-qubit states, such as separable states, entangled pure states, and mixed states; the cost of the experimental process was significantly reduced in most circumstances, with an acceptable error rate. We believe the optimal subsystem method is the most valuable and promising approach for multi-qubit quantum computing applications.

  7. Design and performance evaluation of a master controller for endovascular catheterization.

    PubMed

    Guo, Jin; Guo, Shuxiang; Tamiya, Takashi; Hirata, Hideyuki; Ishihara, Hidenori

    2016-01-01

    It is difficult to manipulate a flexible catheter to target a position within a patient's complicated and delicate vessels. However, few researchers have focused on controller designs that give much consideration to the natural catheter manipulation skills obtained from manual catheterization. Also, the existing catheter motion measurement methods are likely to complicate the design of the force feedback device. Additionally, the commercially available systems are cost-prohibitive for most hospitals. This paper presents a simple and cost-effective master controller for endovascular catheterization that can allow the interventionalists to apply the conventional pull, push and twist of the catheter used in current practice. A catheter-sensing unit (used to measure the motion of the catheter) and a force feedback unit (used to provide a sense of resistance force) are both presented. A camera was used to allow a contactless measurement, avoiding additional friction, and the force feedback in the axial direction was provided by the magnetic force generated between the permanent magnets and the powered coil. Performance of the controller was evaluated by first conducting comparison experiments to quantify the accuracy of the catheter-sensing unit, and then conducting several experiments to evaluate the force feedback unit. From the experimental results, the minimum and the maximum errors of translational displacement were 0.003 mm (0.01%) and 0.425 mm (1.06%), respectively. The average error was 0.113 mm (0.28%). In terms of rotational angles, the minimum and the maximum errors were 0.39° (0.33%) and 7.2° (6%), respectively. The average error was 3.61° (3.01%). The force resolution was approximately 25 mN, and a maximum current of 3 A generated an approximately 1.5 N force. Based on an analysis of requirements and state-of-the-art computer-assisted and robot-assisted training systems for endovascular catheterization, a new master controller with a force feedback interface was proposed to maintain the natural endovascular catheterization skills of the interventionalists.

  8. A study on the theoretical and practical accuracy of conoscopic holography-based surface measurements: toward image registration in minimally invasive surgery†

    PubMed Central

    Burgner, J.; Simpson, A. L.; Fitzpatrick, J. M.; Lathrop, R. A.; Herrell, S. D.; Miga, M. I.; Webster, R. J.

    2013-01-01

    Background: Registered medical images can assist with surgical navigation and enable image-guided therapy delivery. In soft tissues, surface-based registration is often used and can be facilitated by laser surface scanning. Tracked conoscopic holography (which provides distance measurements) has recently been proposed as a minimally invasive way to obtain surface scans. Moving this technique from concept to clinical use requires a rigorous accuracy evaluation, which is the purpose of our paper. Methods: We adapt recent non-homogeneous and anisotropic point-based registration results to provide a theoretical framework for predicting the accuracy of tracked distance measurement systems. Experiments are conducted on complex objects of defined geometry, an anthropomorphic kidney phantom, and a human cadaver kidney. Results: Experiments agree with model predictions, producing point RMS errors consistently < 1 mm, surface-based registration with mean closest-point error < 1 mm in the phantom, and an RMS target registration error of 0.8 mm in the human cadaver kidney. Conclusions: Tracked conoscopic holography is clinically viable; it enables minimally invasive surface scan accuracy comparable to current clinical methods that require open surgery. PMID:22761086
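
    For readers unfamiliar with point-based registration error, the following sketch shows a standard rigid (Kabsch) alignment of two point sets and the point RMS error that remains afterwards. It is a generic illustration on synthetic points, not the non-homogeneous, anisotropic framework developed in the paper; all values are assumed.

        import numpy as np

        def rigid_register(src, dst):
            """Least-squares rigid transform (R, t) mapping src points onto dst points (Kabsch)."""
            c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
            H = (src - c_src).T @ (dst - c_dst)
            U, _, Vt = np.linalg.svd(H)
            d = np.sign(np.linalg.det(Vt.T @ U.T))
            R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T   # proper rotation, no reflection
            t = c_dst - R @ c_src
            return R, t

        rng = np.random.default_rng(0)
        true_R, _ = np.linalg.qr(rng.normal(size=(3, 3)))
        if np.linalg.det(true_R) < 0:                  # ensure a proper rotation
            true_R[:, 0] *= -1
        src = 50.0 * rng.normal(size=(200, 3))         # synthetic surface points, mm
        dst = src @ true_R.T + np.array([12.0, -7.0, 3.0]) + 0.3 * rng.normal(size=src.shape)

        R, t = rigid_register(src, dst)
        residual = dst - (src @ R.T + t)
        rms = np.sqrt((residual ** 2).sum(axis=1).mean())
        print(f"point RMS error after registration: {rms:.3f} mm")   # close to the injected noise level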

  9. Usefulness of model-based iterative reconstruction in semi-automatic volumetry for ground-glass nodules at ultra-low-dose CT: a phantom study.

    PubMed

    Maruyama, Shuki; Fukushima, Yasuhiro; Miyamae, Yuta; Koizumi, Koji

    2018-06-01

    This study aimed to investigate the effects of parameter presets of the forward projected model-based iterative reconstruction solution (FIRST) on the accuracy of pulmonary nodule volume measurement. A torso phantom with simulated nodules [diameter: 5, 8, 10, and 12 mm; computed tomography (CT) density: −630 HU] was scanned with a multi-detector CT at tube currents of 10 mA (ultra-low-dose: UL-dose) and 270 mA (standard-dose: Std-dose). Images were reconstructed with filtered back projection [FBP; standard (Std-FBP), ultra-low-dose (UL-FBP)], FIRST Lung (UL-Lung), and FIRST Body (UL-Body), and analyzed with semi-automatic software. The error in the volume measurement was determined. The errors with UL-Lung and UL-Body were smaller than that with UL-FBP. The smallest error was 5.8 ± 0.3% for the 12-mm nodule with UL-Body (middle lung). Our results indicated that FIRST Body would be superior to FIRST Lung in terms of accuracy of nodule measurement with UL-dose CT.

  10. Experimental/clinical evaluation of EIT image reconstruction with l1 data and image norms

    NASA Astrophysics Data System (ADS)

    Mamatjan, Yasin; Borsic, Andrea; Gürsoy, Doga; Adler, Andy

    2013-04-01

    Electrical impedance tomography (EIT) image reconstruction is ill-posed, and the spatial resolution of reconstructed images is low due to the diffuse propagation of current and the limited number of independent measurements. Generally, image reconstruction is formulated as a regularized scheme in which l2 norms are preferred for both the data misfit and the image prior terms because of their computational convenience, which results in smooth solutions. However, recent work on a Primal Dual-Interior Point Method (PDIPM) framework showed its effectiveness in handling this minimization with l1 norms; applying l1 norms to the data and regularization terms in EIT image reconstruction addresses both reconstruction with sharp edges and robustness to measurement errors. We aim for a clinical and experimental evaluation of the PDIPM method by selecting scenarios (human lung and dog breathing) with known electrode errors, which require rigorous regularization and cause reconstructions with the l2 norm to fail. The results demonstrate the applicability of PDIPM algorithms, especially with l1 data and regularization norms, for clinical applications of EIT, showing that the l1 solution is not only more robust to measurement errors in a clinical setting but also provides high contrast resolution on organ boundaries.
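
    The robustness of an l1 data term to a few corrupted (electrode-like) measurements can be illustrated on a toy linear problem. The sketch below uses a simple iteratively reweighted least-squares (IRLS) scheme with assumed synthetic data; it is not the PDIPM solver evaluated in the paper, only a minimal demonstration of the l1-versus-l2 behaviour.

        import numpy as np

        rng = np.random.default_rng(1)
        n_meas, n_pix = 80, 20
        A = rng.normal(size=(n_meas, n_pix))          # assumed toy forward model
        x_true = np.zeros(n_pix); x_true[5:9] = 1.0   # "sharp" target
        b = A @ x_true + 0.01 * rng.normal(size=n_meas)
        b[::10] += 5.0                                # a few gross electrode-like errors

        lam = 1e-3
        # l2 data term (ridge): strongly affected by the corrupted measurements
        x_l2 = np.linalg.solve(A.T @ A + lam * np.eye(n_pix), A.T @ b)

        # l1 data term via IRLS: down-weights the outliers
        x_l1 = x_l2.copy()
        for _ in range(50):
            r = A @ x_l1 - b
            w = 1.0 / np.maximum(np.abs(r), 1e-6)     # IRLS weights approximating the l1 norm
            AW = A * w[:, None]
            x_l1 = np.linalg.solve(A.T @ AW + lam * np.eye(n_pix), AW.T @ b)

        print("l2-fit reconstruction error:", np.linalg.norm(x_l2 - x_true))
        print("l1-fit reconstruction error:", np.linalg.norm(x_l1 - x_true))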

  11. Direct DC 10 V comparison between two programmable Josephson voltage standards made of niobium nitride (NbN)-based and niobium (Nb)-based Josephson junctions

    NASA Astrophysics Data System (ADS)

    Solve, S.; Chayramy, R.; Maruyama, M.; Urano, C.; Kaneko, N.-H.; Rüfenacht, A.

    2018-04-01

    BIPM’s new transportable programmable Josephson voltage standard (PJVS) has been used for an on-site comparison at the National Metrology Institute of Japan (NMIJ) and the National Institute of Advanced Industrial Science and Technology (AIST) (NMIJ/AIST, hereafter called just NMIJ unless otherwise noted). This is the first time that an array of niobium-based Josephson junctions with amorphous niobium silicon NbₓSi₁₋ₓ barriers, developed by the National Institute of Standards and Technology (NIST), has been directly compared to an array of niobium nitride (NbN)-based junctions (developed by the NMIJ in collaboration with the Nanoelectronics Research Institute (NeRI), AIST). Nominally identical voltages produced by both systems agreed within 5 parts in 10¹² (0.05 nV at 10 V) with a combined relative uncertainty of 7.9 × 10⁻¹¹ (0.79 nV). The low side of the NMIJ apparatus is, by design, referred to the ground potential. An analysis of the systematic errors due to the leakage current to ground was conducted for this ground configuration. The influence of a multi-stage low-pass filter installed at the output measurement leads of the NMIJ primary standard was also investigated. The number of capacitances in parallel in the filter and their insulation resistance have a direct impact on the amplitude of the systematic voltage error introduced by the leakage current, even if the current does not necessarily return to ground. The filtering of the output of the PJVS voltage leads has the positive consequence of protecting the array from external sources of noise. Current noise, when coupled to the array, reduces the width or current range of the quantized voltage steps. The voltage error induced by the leakage current in the filter is an order of magnitude larger than the voltage error in the absence of all filtering, even though the current range of steps is significantly decreased without filtering.
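
    As a rough illustration of how a leakage path produces a systematic voltage error of the kind analysed above: if the measurement leads have a series resistance and the filter components leak through a finite insulation resistance, the two form a voltage divider and part of the array voltage is dropped across the leads. The numbers below are assumed for illustration only and are not taken from the comparison.

        # Illustrative leakage-error estimate (assumed values, not from the paper).
        V = 10.0          # Josephson array voltage, V
        R_lead = 2.0      # series resistance of the measurement leads, ohm
        R_ins = 1e12      # insulation resistance of the leakage path, ohm

        I_leak = V / (R_lead + R_ins)     # leakage current through the divider
        V_error = I_leak * R_lead         # voltage dropped across the leads
        print(f"leakage current ~ {I_leak:.2e} A, systematic error ~ {V_error*1e9:.3f} nV")
        # ~0.02 nV for these assumed values; a real analysis must account for every leakage path.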

  12. Techniques and equipment required for precise stream gaging in tide-affected fresh-water reaches of the Sacramento River, California

    USGS Publications Warehouse

    Smith, Winchell

    1971-01-01

    Current-meter measurements of high accuracy will be required for calibration of an acoustic flow-metering system proposed for installation in the Sacramento River at Chipps Island in California. This report presents an analysis of the problem of making continuous accurate current-meter measurements in this channel where the flow regime is changing constantly in response to tidal action. Gaging-system requirements are delineated, and a brief description is given of the several applicable techniques that have been developed by others. None of these techniques provides the accuracies required for the flowmeter calibration. A new system is described--one which has been assembled and tested in prototype and which will provide the matrix of data needed for accurate continuous current-meter measurements. Analysis of a large quantity of data on the velocity distribution in the channel of the Sacramento River at Chipps Island shows that adequate definition of the velocity can be made during the dominant flow periods--that is, at times other than slack-water periods--by use of current meters suspended at elevations 0.2 and 0.8 of the depth below the water surface. However, additional velocity surveys will be necessary to determine whether or not small systematic corrections need be applied during periods of rapidly changing flow. In the proposed system all gaged parameters, including velocities, depths, position in the stream, and related times, are monitored continuously as a boat moves across the river on the selected cross section. Data are recorded photographically and transferred later onto punchcards for computer processing. Computer programs have been written to permit computation of instantaneous discharges at any selected time interval throughout the period of the current meter measurement program. It is anticipated that current-meter traverses will be made at intervals of about one-half hour over periods of several days. Capability of performance for protracted periods was, consequently, one of the important elements in system design. Analysis of error sources in the proposed system indicates that errors in individual computed discharges can be kept smaller than 1.5 percent if the expected precision in all measured parameters is maintained.
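
    The 0.2- and 0.8-depth observations mentioned above correspond to the standard two-point method, in which the mean velocity in a vertical is taken as the average of the velocities at 0.2 and 0.8 of the depth, and the discharge is summed over verticals by the midsection method. The sketch below is a generic illustration with assumed example values, not data from the Chipps Island study.

        # Two-point method and midsection discharge (illustrative values only).
        # Each vertical: (station distance from bank [m], depth [m], v at 0.2d, v at 0.8d [m/s])
        verticals = [
            (2.0, 1.5, 0.62, 0.48),
            (5.0, 2.8, 0.85, 0.66),
            (8.0, 3.1, 0.91, 0.70),
            (11.0, 2.2, 0.74, 0.57),
            (14.0, 1.0, 0.41, 0.33),
        ]

        discharge = 0.0
        for i, (x, d, v02, v08) in enumerate(verticals):
            v_mean = 0.5 * (v02 + v08)                      # two-point mean velocity
            x_prev = verticals[i - 1][0] if i > 0 else verticals[0][0]
            x_next = verticals[i + 1][0] if i < len(verticals) - 1 else verticals[-1][0]
            width = 0.5 * (x_next - x_prev)                 # midsection width for this vertical
            discharge += v_mean * d * width                 # partial discharge = velocity * area

        print(f"total discharge ~ {discharge:.2f} m^3/s")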

  13. [Current Situation Survey of the Measures to Prevent Medication Errors in the Operating Room: Report of the Japan Society of Anesthesiologists Safety Commission Working Group for Consideration of Recommendations for Color Coding of Prepared Syringe Labels for Prevention of Medication Errors].

    PubMed

    Shida, Kyoko; Suzuki, Toshiyasu; Sugahara, Kazuhiro; Sobue, Kazuya

    2016-05-01

    Medication errors are among the more frequent adverse events that occur in hospitals, and effective measures are needed to prevent them. According to the Japan Society of Anesthesiologists study "Drug incident investigation 2005-2007", "error in selecting a syringe" was the most frequent type (44.2%). The current measures and best practices implemented in Japanese hospitals were the focus of a subsequent investigation. Representative specialists in anesthesiology-certified hospitals across the country were surveyed via an anonymous Web questionnaire over a 46-day period. With respect to the preventive measures implemented to mitigate the risk of medication errors in perioperative settings, responses included: incident and accident reporting (215 facilities, 70.3%), use of pre-filled syringes (180 facilities, 58.8%), devised arrangement of dangerous drugs (154 facilities, 50.3%), use of products with a mechanism preventing improper connection (123 facilities, 40.2%), double-checking (116 facilities, 37.9%), use of color-barreled syringes (115 facilities, 37.6%), use of color labels or color tape (89 facilities, 29.1%), presentation of medication such as placing the ampoule or syringe on a tray color-coded by drug class (54 facilities, 17.6%), discontinuance of handwritten labels (23 facilities, 7.5%), use of a bar-code drug verification system (20 facilities, 6.5%), no implemented measures (11 facilities, 3.6%), other measures not mentioned (10 facilities, 3.3%), and use of carts that count the agents by drug type and automatically record the selection and number picked (6 facilities, 2.0%). For drug name identification on the prepared syringe, responses included: a perforated label torn from the ampoule/vial (245 facilities, 28.1%), handwriting directly on the syringe (208 facilities, 23.8%), use of the label that comes attached to the product (187 facilities, 21.4%), handwriting on plain tape (87 facilities, 10.0%), printed labels (62 facilities, 7.1%), printed color labels (44 facilities, 5.0%), handwriting on color tape (27 facilities, 3.1%), machinery that prints the drug name by scanning the bar code of the ampoule (10 facilities, 1.1%), others (3 facilities, 0.3%), and no description on the prepared drug (0 facilities, 0%). Awareness of international standard color codes, such as those of the International Organization for Standardization (ISO), was only 18.6%. This survey of anesthesiology-certified hospitals recognized by the Japan Society of Anesthesiologists indicated that various measures to prevent medication errors during perioperative procedures are in use. However, many facilities still use handwritten labels, a common cause of errors. The need for improved drug name identification and recognition on syringes was confirmed.

  14. Volumetric error modeling, identification and compensation based on screw theory for a large multi-axis propeller-measuring machine

    NASA Astrophysics Data System (ADS)

    Zhong, Xuemin; Liu, Hongqi; Mao, Xinyong; Li, Bin; He, Songping; Peng, Fangyu

    2018-05-01

    Large multi-axis propeller-measuring machines have two types of geometric error, position-independent geometric errors (PIGEs) and position-dependent geometric errors (PDGEs), which both have significant effects on the volumetric error of the measuring tool relative to the worktable. This paper focuses on modeling, identifying and compensating for the volumetric error of the measuring machine. A volumetric error model in the base coordinate system is established based on screw theory considering all the geometric errors. In order to fully identify all the geometric error parameters, a new method for systematic measurement and identification is proposed. All the PIGEs of adjacent axes and the six PDGEs of the linear axes are identified with a laser tracker using the proposed model. Finally, a volumetric error compensation strategy is presented and an inverse kinematic solution for compensation is proposed. The final measuring and compensation experiments have further verified the efficiency and effectiveness of the measuring and identification method, indicating that the method can be used in volumetric error compensation for large machine tools.
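
    To make the idea of a volumetric error model concrete, the sketch below composes small, assumed geometric errors of a simplified two-axis kinematic chain using homogeneous transforms and reports the resulting deviation of the tool point. It is a simplified stand-in for the screw-theory model in the paper; the chain, the error values and the first-order small-angle treatment are all assumptions for illustration.

        import numpy as np

        def trans(x, y, z):
            T = np.eye(4); T[:3, 3] = [x, y, z]; return T

        def rot_small(ex, ey, ez):
            """Homogeneous transform for small angular errors (rad), first-order approximation."""
            T = np.eye(4)
            T[:3, :3] = np.array([[1.0, -ez, ey],
                                  [ez, 1.0, -ex],
                                  [-ey, ex, 1.0]])
            return T

        # Nominal chain: X axis moved 500 mm, Z axis moved 200 mm, tool offset 100 mm.
        nominal = trans(500, 0, 0) @ trans(0, 0, 200) @ trans(0, 0, 100)

        # Actual chain with assumed PDGEs/PIGEs: positioning, straightness and angular errors (mm, rad).
        actual = (trans(500 + 0.015, 0.004, -0.002) @ rot_small(10e-6, 40e-6, 5e-6) @
                  trans(0.003, -0.001, 200 + 0.010) @ rot_small(-20e-6, 8e-6, 0.0) @
                  trans(0, 0, 100))

        tool_nominal = nominal @ np.array([0, 0, 0, 1.0])
        tool_actual = actual @ np.array([0, 0, 0, 1.0])
        volumetric_error = tool_actual[:3] - tool_nominal[:3]
        print("volumetric error vector [mm]:", np.round(volumetric_error, 4))
        print("magnitude [mm]:", round(float(np.linalg.norm(volumetric_error)), 4))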

  15. Optimized universal color palette design for error diffusion

    NASA Astrophysics Data System (ADS)

    Kolpatzik, Bernd W.; Bouman, Charles A.

    1995-04-01

    Currently, many low-cost computers can only simultaneously display a palette of 256 colors. However, this palette is usually selectable from a very large gamut of available colors. For many applications, this limited palette size imposes a significant constraint on the achievable image quality. We propose a method for designing an optimized universal color palette for use with halftoning methods such as error diffusion. The advantage of a universal color palette is that it is fixed and therefore allows multiple images to be displayed simultaneously. To design the palette, we employ a new vector quantization method known as sequential scalar quantization (SSQ) to allocate the colors in a visually uniform color space. The SSQ method achieves near-optimal allocation, but may be efficiently implemented using a series of lookup tables. When used with error diffusion, SSQ adds little computational overhead and may be used to minimize the visual error in an opponent color coordinate system. We compare the performance of the optimized algorithm to standard error diffusion by evaluating a visually weighted mean-squared-error measure. Our metric is based on the color difference in CIE L*a*b*, but also accounts for the lowpass characteristic of human contrast sensitivity.
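
    Error diffusion with a fixed palette works by quantizing each pixel to its nearest palette color and distributing the quantization error to not-yet-processed neighbours. The sketch below is a plain Floyd-Steinberg implementation with a small assumed palette, shown only to illustrate the halftoning step that the optimized palette is designed for; it does not implement the paper's sequential scalar quantization or visual weighting.

        import numpy as np

        def error_diffuse(img, palette):
            """Floyd-Steinberg error diffusion of an RGB float image onto a fixed palette."""
            out = img.astype(float).copy()
            h, w, _ = out.shape
            for y in range(h):
                for x in range(w):
                    old = out[y, x].copy()
                    idx = np.argmin(((palette - old) ** 2).sum(axis=1))   # nearest palette color
                    new = palette[idx]
                    out[y, x] = new
                    err = old - new
                    if x + 1 < w:
                        out[y, x + 1] += err * 7 / 16
                    if y + 1 < h and x > 0:
                        out[y + 1, x - 1] += err * 3 / 16
                    if y + 1 < h:
                        out[y + 1, x] += err * 5 / 16
                    if y + 1 < h and x + 1 < w:
                        out[y + 1, x + 1] += err * 1 / 16
            return out

        # Tiny assumed 8-color palette (corners of the RGB cube) and a random test image.
        palette = np.array([[r, g, b] for r in (0, 255) for g in (0, 255) for b in (0, 255)], float)
        img = np.random.default_rng(2).uniform(0, 255, size=(64, 64, 3))
        halftoned = error_diffuse(img, palette)
        print("mean absolute rendering error:", np.abs(halftoned - img).mean())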

  16. Placebo non-response measure in sequential parallel comparison design studies.

    PubMed

    Rybin, Denis; Doros, Gheorghe; Pencina, Michael J; Fava, Maurizio

    2015-07-10

    The Sequential Parallel Comparison Design (SPCD) is one of the novel approaches addressing placebo response. The analysis of SPCD data typically classifies subjects as 'placebo responders' or 'placebo non-responders'. Most current methods employed for analysis of SPCD data utilize only a part of the data collected during the trial. A repeated measures model was proposed for analysis of continuous outcomes that permitted the inclusion of information from all subjects into the treatment effect estimation. We describe here a new approach using a weighted repeated measures model that further improves the utilization of data collected during the trial, allowing the incorporation of information that is relevant to the placebo response, and dealing with the problem of possible misclassification of subjects. Our simulations show that when compared to the unweighted repeated measures model method, our approach performs as well or, under certain conditions, better, in preserving the type I error, achieving adequate power and minimizing the mean squared error. Copyright © 2015 John Wiley & Sons, Ltd.

  17. Fault and Error Latency Under Real Workload: an Experimental Study. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Chillarege, Ram

    1986-01-01

    A practical methodology for the study of fault and error latency is demonstrated under a real workload. This is the first study that measures and quantifies the latency under real workload and fills a major gap in the current understanding of workload-failure relationships. The methodology is based on low level data gathered on a VAX 11/780 during the normal workload conditions of the installation. Fault occurrence is simulated on the data, and the error generation and discovery process is reconstructed to determine latency. The analysis proceeds to combine the low level activity data with high level machine performance data to yield a better understanding of the phenomena. A strong relationship exists between latency and workload and that relationship is quantified. The sampling and reconstruction techniques used are also validated. Error latency in the memory where the operating system resides was studied using data on the physical memory access. Fault latency in the paged section of memory was determined using data from physical memory scans. Error latency in the microcontrol store was studied using data on the microcode access and usage.

  18. Stochastic error model corrections to improve the performance of bottom-up precipitation products for hydrologic applications

    NASA Astrophysics Data System (ADS)

    Maggioni, V.; Massari, C.; Ciabatta, L.; Brocca, L.

    2016-12-01

    Accurate quantitative precipitation estimation is of great importance for water resources management, agricultural planning, and forecasting and monitoring of natural hazards such as flash floods and landslides. In situ observations are limited around the Earth, especially in remote areas (e.g., complex terrain, dense vegetation), but currently available satellite precipitation products are able to provide global precipitation estimates with an accuracy that depends upon many factors (e.g., type of storms, temporal sampling, season, etc.). The recent SM2RAIN approach proposes to estimate rainfall by using satellite soil moisture observations. As opposed to traditional satellite precipitation methods, which sense cloud properties to retrieve instantaneous estimates, this new bottom-up approach makes use of two consecutive soil moisture measurements for obtaining an estimate of the fallen precipitation within the interval between two satellite overpasses. As a result, the nature of the measurement is different and complementary to the one of classical precipitation products and could provide a different valid perspective to substitute or improve current rainfall estimates. However, uncertainties in the SM2RAIN product are still not well known and could represent a limitation in utilizing this dataset for hydrological applications. Therefore, quantifying the uncertainty associated with SM2RAIN is necessary for enabling its use. The study is conducted over the Italian territory for a 5-yr period (2010-2014). A number of satellite precipitation error properties, typically used in error modeling, are investigated and include probability of detection, false alarm rates, missed events, spatial correlation of the error, and hit biases. After this preliminary uncertainty analysis, the potential of applying the stochastic rainfall error model SREM2D to correct SM2RAIN and to improve its performance in hydrologic applications is investigated. The use of SREM2D for characterizing the error in precipitation by SM2RAIN would be highly useful for the merging and the integration steps in its algorithm, i.e., the merging of multiple soil moisture derived products (e.g., SMAP, SMOS, ASCAT) and the integration of soil moisture derived and state of the art satellite precipitation products (e.g., GPM IMERG).

  19. Investigation of the quasi-simultaneous arrival (QSA) effect on a CAMECA IMS 7f-GEO.

    PubMed

    Jones, Clive; Fike, David A; Peres, Paula

    2017-04-15

    IMS 7f-GEO isotope ratio applications increasingly involve analyses (e.g., S- or O-isotopes, coupled with primary ion currents <30 pA) for which quasi-simultaneous arrival (QSA) could compromise precision and accuracy of data. QSA and associated correction have been widely investigated for the CAMECA NanoSIMS instruments, but not for the IMS series. Sulfur and oxygen isotopic ratio experiments were performed using an electron multiplier (EM) detector, employing Cs+ primary ion currents of 1, 2, 5 and 11.5 pA (nominal) and a variety of secondary ion transmissions to vary QSA probability. An experiment to distinguish between QSA undercounting and purported aperture-related mass fractionation was performed using an EM for ¹⁶O⁻ and ¹⁸O⁻, plus an additional ¹⁶O⁻ measurement using a Faraday cup (FC) detector. An experiment to investigate the accuracy of the QSA correction was performed by comparing S isotopic ratios obtained using an EM with those obtained on the same sample using dual FCs. The QSA effect was observed on the IMS-7f-GEO, and QSA coefficients (β) of ~0.66 were determined, in agreement with reported NanoSIMS measurements, but different from the value (0.5) predicted using Poisson statistics. Aperture-related fractionation was not sufficient to explain the difference but uncertainties in primary ion flux measurement could play a role. When QSA corrected, the isotope ratio data obtained using the EM agreed with the dual FC data, within statistical error. QSA undercounting could compromise isotope ratio analyses requiring ~1 × 10⁵ counts per second for the major isotope and primary currents <20 pA. The error could be >8‰ for a 1 pA primary current. However, correction can be accurately applied. For instrumental mass fractionation (IMF)-corrected data, the magnitude of the error resulting from not correcting for QSA is dependent on the difference in secondary ion count rate between the unknown and standard analyses. Copyright © 2017 John Wiley & Sons, Ltd.

  20. Slope Error Measurement Tool for Solar Parabolic Trough Collectors: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stynes, J. K.; Ihas, B.

    2012-04-01

    The National Renewable Energy Laboratory (NREL) has developed an optical measurement tool for parabolic solar collectors that measures the combined errors due to absorber misalignment and reflector slope error. The combined absorber alignment and reflector slope errors are measured using a digital camera to photograph the reflected image of the absorber in the collector. Previous work derives the reflector slope errors from the reflected image of the absorber together with an independent measurement of the absorber location; the accuracy of the reflector slope error measurement is thus dependent on the accuracy of the absorber location measurement. By measuring the combined reflector-absorber errors, the uncertainty in the absorber location measurement is eliminated. The related performance merit, the intercept factor, depends on the combined effects of the absorber alignment and reflector slope errors. Measuring the combined effect provides a simpler measurement and a more accurate input to the intercept factor estimate. The minimal equipment and setup required for this measurement technique make it ideal for field measurements.

  1. Sea Ice Topography Profiling using Laser Altimetry from Small Unmanned Aircraft Systems

    NASA Astrophysics Data System (ADS)

    Crocker, Roger Ian

    Arctic sea ice is undergoing a dramatic transition from a perennial ice pack with a high prevalence of old multiyear ice, to a predominantly seasonal ice pack comprised primarily of young first-year and second-year ice. This transition has brought about changes in the sea ice thickness and topography characteristics, which will further affect the evolution and survivability of the ice pack. The varying ice conditions have substantial implications for commercial operations, international affairs, regional and global climate, our ability to model climate dynamics, and the livelihood of Arctic inhabitants. A number of satellite and airborne missions are dedicated to monitoring sea ice, but they are limited by their spatial and temporal resolution and coverage. Given the fast rate of sea ice change and its pervasive implications, enhanced observational capabilities are needed to augment the current strategies. The CU Laser Profilometer and Imaging System (CULPIS) is designed specifically for collecting fine-resolution elevation data and imagery from small unmanned aircraft systems (UAS), and has great potential to complement ongoing missions. This altimeter system has been integrated into four different UAS, and has been deployed during Arctic and Antarctic science campaigns. The CULPIS elevation measurement accuracy is shown to be 95±25 cm, and is limited primarily by GPS positioning error (<25 cm), aircraft attitude determination error (<20 cm), and sensor misalignment error (<20 cm). The relative error is considerably smaller over short flight distances, and the measurement precision is shown to be <10 cm over a distance of 200 m. Given its fine precision, the CULPIS is well suited for measuring sea ice topography, and observed ridge height and ridge separation distributions are found to agree with theoretical distributions to within 5%. Simulations demonstrate the inability of coarse-resolution measurements to accurately represent the theoretical distributions, with differences up to 30%. Future efforts should focus on reducing the total measurement error to <20 cm to make the CULPIS suitable for detecting ice sheet elevation change.

  2. SU-E-T-257: Output Constancy: Reducing Measurement Variations in a Large Practice Group

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hedrick, K; Fitzgerald, T; Miller, R

    2014-06-01

    Purpose: To standardize output constancy check procedures in a large medical physics practice group covering multiple sites, in order to identify and reduce small systematic errors caused by differences in equipment and the procedures of multiple physicists. Methods: A standardized machine output constancy check for both photons and electrons was instituted within the practice group in 2010. After conducting annual TG-51 measurements in water and adjusting the linac to deliver 1.00 cGy/MU at Dmax, an acrylic phantom (comparable at all sites) and a PTW Farmer ion chamber are used to obtain monthly output constancy reference readings. From the collected charge reading, measurements of air pressure and temperature, and chamber Ndw and Pelec, a value we call the Kacrylic factor is determined, relating the chamber reading in acrylic to the dose in water with standard set-up conditions. This procedure easily allows for multiple equipment combinations to be used at any site. The Kacrylic factors and output results from all sites and machines are logged monthly in a central database and used to monitor trends in calibration and output. Results: The practice group consists of 19 sites, currently with 34 Varian and 8 Elekta linacs (24 Varian and 5 Elekta linacs in 2010). Over the past three years, the standard deviation of Kacrylic factors measured on all machines decreased by 20% for photons and high energy electrons as systematic errors were found and reduced. Low energy electrons showed very little change in the distribution of Kacrylic values. Small errors in linac beam data were found by investigating outlier Kacrylic values. Conclusion: While the use of acrylic phantoms introduces an additional source of error through small differences in depth and effective depth, the new standardized procedure eliminates potential sources of error from using many different phantoms and results in more consistent output constancy measurements.
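
    A hedged sketch of the kind of monthly constancy calculation described above: the raw chamber reading is corrected to reference temperature and pressure, scaled by the chamber and electrometer calibration factors, and converted with the acrylic-to-water factor established at annual calibration. The reference conditions (22 °C, 101.33 kPa), the formula structure and all numerical values are assumptions for illustration, not the group's actual procedure or data.

        # Monthly output-constancy check in an acrylic phantom (illustrative values only).
        M_raw = 19.08e-9     # collected charge for 100 MU, C (assumed)
        T = 21.3             # air temperature, deg C
        P = 99.8             # air pressure, kPa
        N_DW = 5.38e7        # chamber absorbed-dose-to-water calibration factor, Gy/C (assumed)
        P_elec = 1.000       # electrometer correction factor
        K_acrylic = 0.962    # factor relating reading in acrylic to dose in water (assumed, from annual TG-51)

        P_TP = (273.2 + T) / (273.2 + 22.0) * 101.33 / P    # temperature-pressure correction
        dose_per_MU = M_raw * P_TP * P_elec * N_DW * K_acrylic / 100.0   # Gy/MU for 100 MU delivered
        print(f"P_TP = {P_TP:.4f}")
        print(f"output = {dose_per_MU * 100:.3f} cGy/MU (expect ~1.000 at Dmax)")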

  3. Precision of natural satellite ephemerides from observations of different types

    NASA Astrophysics Data System (ADS)

    Emelyanov, N. V.

    2017-08-01

    Currently, various types of observations of natural planetary satellites are used to refine their ephemerides. A new type of measurement - determining the instants of apparent satellite encounters - has recently been proposed by Morgado and co-workers. The problem that arises is which type of measurement to choose in order to obtain an ephemeris precision that is as high as possible. The answer can be obtained only by modelling the entire process: observations, obtaining the measured values, refining the satellite motion parameters, and generating the ephemeris. The explicit dependence of the ephemeris precision on observational accuracy as well as on the type of observations is unknown. In this paper, such a dependence is investigated using the Monte Carlo statistical method. The relationship between the ephemeris precision for different types of observations is then assessed. The possibility of using the instants of apparent satellite encounters to obtain an ephemeris is investigated. A method is proposed that can be used to fit the satellite orbital parameters to this type of measurement. It is shown that, in the absence of systematic scale errors in the CCD frame, the use of the instants of apparent encounters leads to less precise ephemerides. However, in the presence of significant scale errors, which is often the case, this type of measurement becomes effective because the instants of apparent satellite encounters do not depend on scale errors.

  4. A Likelihood-Based Framework for Association Analysis of Allele-Specific Copy Numbers.

    PubMed

    Hu, Y J; Lin, D Y; Sun, W; Zeng, D

    2014-10-01

    Copy number variants (CNVs) and single nucleotide polymorphisms (SNPs) co-exist throughout the human genome and jointly contribute to phenotypic variations. Thus, it is desirable to consider both types of variants, as characterized by allele-specific copy numbers (ASCNs), in association studies of complex human diseases. Current SNP genotyping technologies capture the CNV and SNP information simultaneously via fluorescent intensity measurements. The common practice of calling ASCNs from the intensity measurements and then using the ASCN calls in downstream association analysis has important limitations. First, the association tests are prone to false-positive findings when differential measurement errors between cases and controls arise from differences in DNA quality or handling. Second, the uncertainties in the ASCN calls are ignored. We present a general framework for the integrated analysis of CNVs and SNPs, including the analysis of total copy numbers as a special case. Our approach combines the ASCN calling and the association analysis into a single step while allowing for differential measurement errors. We construct likelihood functions that properly account for case-control sampling and measurement errors. We establish the asymptotic properties of the maximum likelihood estimators and develop EM algorithms to implement the corresponding inference procedures. The advantages of the proposed methods over the existing ones are demonstrated through realistic simulation studies and an application to a genome-wide association study of schizophrenia. Extensions to next-generation sequencing data are discussed.

  5. Evaluation of Collision Cross Section Calibrants for Structural Analysis of Lipids by Traveling Wave Ion Mobility-Mass Spectrometry

    PubMed Central

    2016-01-01

    Collision cross section (CCS) measurement of lipids using traveling wave ion mobility-mass spectrometry (TWIM-MS) is of high interest to the lipidomics field. However, currently available calibrants for CCS measurement using TWIM are predominantly peptides that display quite different physical properties and gas-phase conformations from lipids, which could lead to large CCS calibration errors for lipids. Here we report the direct CCS measurement of a series of phosphatidylcholines (PCs) and phosphatidylethanolamines (PEs) in nitrogen using a drift tube ion mobility (DTIM) instrument and an evaluation of the accuracy and reproducibility of PCs and PEs as CCS calibrants for phospholipids against different classes of calibrants, including polyalanine (PolyAla), tetraalkylammonium salts (TAA), and hexakis(fluoroalkoxy)phosphazines (HFAP), in both positive and negative modes in TWIM-MS analysis. We demonstrate that structurally mismatched calibrants lead to larger errors in calibrated CCS values while the structurally matched calibrants, PCs and PEs, gave highly accurate and reproducible CCS values at different traveling wave parameters. Using the lipid calibrants, the majority of the CCS values of several classes of phospholipids measured by TWIM are within 2% error of the CCS values measured by DTIM. The development of phospholipid CCS calibrants will enable high-accuracy structural studies of lipids and add an additional level of validation in the assignment of identifications in untargeted lipidomics experiments. PMID:27321977

  6. Quantifying errors without random sampling.

    PubMed

    Phillips, Carl V; LaPole, Luwanna M

    2003-06-12

    All quantifications of mortality, morbidity, and other health measures involve numerous sources of error. The routine quantification of random sampling error makes it easy to forget that other sources of error can and should be quantified. When a quantification does not involve sampling, error is almost never quantified and results are often reported in ways that dramatically overstate their precision. We argue that the precision implicit in typical reporting is problematic and sketch methods for quantifying the various sources of error, building up from simple examples that can be solved analytically to more complex cases. There are straightforward ways to partially quantify the uncertainty surrounding a parameter that is not characterized by random sampling, such as limiting reported significant figures. We present simple methods for doing such quantifications, and for incorporating them into calculations. More complicated methods become necessary when multiple sources of uncertainty must be combined. We demonstrate that Monte Carlo simulation, using available software, can estimate the uncertainty resulting from complicated calculations with many sources of uncertainty. We apply the method to the current estimate of the annual incidence of foodborne illness in the United States. Quantifying uncertainty from systematic errors is practical. Reporting this uncertainty would more honestly represent study results, help show the probability that estimated values fall within some critical range, and facilitate better targeting of further research.
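
    The Monte Carlo approach described above can be sketched in a few lines: each uncertain input is represented by a distribution encoding judgment about its plausible range, the calculation is repeated over random draws, and the spread of the results quantifies the combined uncertainty. The incidence-style calculation and all distributions below are assumptions for illustration, not the estimates from the paper.

        import numpy as np

        rng = np.random.default_rng(3)
        n = 100_000

        # Assumed uncertain inputs for an illustrative incidence calculation:
        reported_cases = rng.normal(50_000, 5_000, n)        # surveillance count, cases/year
        underreporting = rng.uniform(10, 40, n)              # true cases per reported case
        attribution = rng.triangular(0.2, 0.35, 0.5, n)      # fraction attributable to the exposure

        incidence = reported_cases * underreporting * attribution
        lo, mid, hi = np.percentile(incidence, [2.5, 50, 97.5])
        print(f"median ~ {mid:,.0f} cases/year, 95% uncertainty interval ~ {lo:,.0f} to {hi:,.0f}")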

  7. Three-dimensional assessment of the asymptomatic and post-stroke shoulder: intra-rater test-retest reliability and within-subject repeatability of the palpation and digitization approach.

    PubMed

    Pain, Liza A M; Baker, Ross; Sohail, Qazi Zain; Richardson, Denyse; Zabjek, Karl; Mogk, Jeremy P M; Agur, Anne M R

    2018-03-23

    Altered three-dimensional (3D) joint kinematics can contribute to shoulder pathology, including post-stroke shoulder pain. Reliable assessment methods enable comparative studies between asymptomatic shoulders of healthy subjects and painful shoulders of post-stroke subjects, and could inform treatment planning for post-stroke shoulder pain. The study purpose was to establish intra-rater test-retest reliability and within-subject repeatability of a palpation/digitization protocol, which assesses 3D clavicular/scapular/humeral rotations, in asymptomatic and painful post-stroke shoulders. Repeated measurements of 3D clavicular/scapular/humeral joint/segment rotations were obtained using palpation/digitization in 32 asymptomatic and six painful post-stroke shoulders during four reaching postures (rest/flexion/abduction/external rotation). Intra-class correlation coefficients (ICCs), standard error of the measurement and 95% confidence intervals were calculated. All ICC values indicated high to very high test-retest reliability (≥0.70), with lower reliability for scapular anterior/posterior tilt during external rotation in asymptomatic subjects, and scapular medial/lateral rotation, humeral horizontal abduction/adduction and axial rotation during abduction in post-stroke subjects. All standard error of measurement values demonstrated within-subject repeatability error ≤5° for all clavicular/scapular/humeral joint/segment rotations (asymptomatic ≤3.75°; post-stroke ≤5.0°), except for humeral axial rotation (asymptomatic ≤5°; post-stroke ≤15°). This noninvasive, clinically feasible palpation/digitization protocol was reliable and repeatable in asymptomatic shoulders, and in a smaller sample of painful post-stroke shoulders. Implications for Rehabilitation In the clinical setting, a reliable and repeatable noninvasive method for assessment of three-dimensional (3D) clavicular/scapular/humeral joint orientation and range of motion (ROM) is currently required. The established reliability and repeatability of this proposed palpation/digitization protocol will enable comparative 3D ROM studies between asymptomatic and post-stroke shoulders, which will further inform treatment planning. Intra-rater test-retest repeatability, which is measured by the standard error of the measure, indicates the range of error associated with a single test measure. Therefore, clinicians can use the standard error of the measure to determine the "true" differences between pre-treatment and post-treatment test scores.
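
    The standard error of measurement (SEM) referred to above is commonly computed from the intraclass correlation coefficient and the between-subject standard deviation as SEM = SD x sqrt(1 - ICC), and a minimal detectable change can be derived from it; the values below are assumed for illustration only, not the study's data.

        import math

        # Illustrative values (not the study's data)
        sd_between = 8.0    # between-subject SD of a humeral rotation angle, degrees
        icc = 0.85          # intra-rater test-retest ICC

        sem = sd_between * math.sqrt(1.0 - icc)          # standard error of measurement
        mdc95 = 1.96 * math.sqrt(2.0) * sem              # minimal detectable change at 95% confidence
        print(f"SEM ~ {sem:.1f} deg, MDC95 ~ {mdc95:.1f} deg")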

  8. Mismeasurement and the resonance of strong confounders: correlated errors.

    PubMed

    Marshall, J R; Hastrup, J L; Ross, J S

    1999-07-01

    Confounding in epidemiology, and the limits of standard methods of control for an imperfectly measured confounder, have been understood for some time. However, most treatments of this problem are based on the assumption that errors of measurement in confounding and confounded variables are independent. This paper considers the situation in which a strong risk factor (confounder) and an inconsequential but suspected risk factor (confounded) are each measured with errors that are correlated; the situation appears especially likely to occur in the field of nutritional epidemiology. Error correlation appears to add little to measurement error as a source of bias in estimating the impact of a strong risk factor: it can add to, diminish, or reverse the bias induced by measurement error in estimating the impact of the inconsequential risk factor. Correlation of measurement errors can add to the difficulty involved in evaluating structures in which confounding and measurement error are present. In its presence, observed correlations among risk factors can be greater than, less than, or even opposite to the true correlations. Interpretation of multivariate epidemiologic structures in which confounding is likely requires evaluation of measurement error structures, including correlations among measurement errors.
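
    A tiny simulation, with all parameters assumed, can make the point concrete: a strong risk factor X drives the outcome, an inconsequential factor Z is correlated with X, and both are measured with errors whose correlation is varied; the apparent effect of the inconsequential factor then depends on the error correlation. This is an illustrative sketch, not the analysis in the paper.

        import numpy as np

        rng = np.random.default_rng(4)
        n = 200_000

        def fit(rho_err):
            X = rng.normal(size=n)                       # strong risk factor (true effect = 2)
            Z = 0.6 * X + 0.8 * rng.normal(size=n)       # inconsequential factor, correlated with X
            Y = 2.0 * X + rng.normal(size=n)
            # correlated measurement errors on X and Z (error variance 0.5, correlation rho_err)
            e = rng.multivariate_normal([0, 0], [[0.5, rho_err * 0.5], [rho_err * 0.5, 0.5]], size=n)
            Xs, Zs = X + e[:, 0], Z + e[:, 1]
            A = np.column_stack([np.ones(n), Xs, Zs])
            coef, *_ = np.linalg.lstsq(A, Y, rcond=None)
            return coef[1], coef[2]

        for rho in (-0.5, 0.0, 0.5):
            bx, bz = fit(rho)
            print(f"error corr = {rho:+.1f}: est. effect of X* = {bx:.2f}, of Z* = {bz:+.2f} (true 0)")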

  9. Impact of exposure measurement error in air pollution epidemiology: effect of error type in time-series studies.

    PubMed

    Goldman, Gretchen T; Mulholland, James A; Russell, Armistead G; Strickland, Matthew J; Klein, Mitchel; Waller, Lance A; Tolbert, Paige E

    2011-06-22

    Two distinctly different types of measurement error are Berkson and classical. Impacts of measurement error in epidemiologic studies of ambient air pollution are expected to depend on error type. We characterize measurement error due to instrument imprecision and spatial variability as multiplicative (i.e. additive on the log scale) and model it over a range of error types to assess impacts on risk ratio estimates both on a per measurement unit basis and on a per interquartile range (IQR) basis in a time-series study in Atlanta. Daily measures of twelve ambient air pollutants were analyzed: NO2, NOx, O3, SO2, CO, PM10 mass, PM2.5 mass, and PM2.5 components sulfate, nitrate, ammonium, elemental carbon and organic carbon. Semivariogram analysis was applied to assess spatial variability. Error due to this spatial variability was added to a reference pollutant time-series on the log scale using Monte Carlo simulations. Each of these time-series was exponentiated and introduced to a Poisson generalized linear model of cardiovascular disease emergency department visits. Measurement error resulted in reduced statistical significance for the risk ratio estimates for all amounts (corresponding to different pollutants) and types of error. When modelled as classical-type error, risk ratios were attenuated, particularly for primary air pollutants, with average attenuation in risk ratios on a per unit of measurement basis ranging from 18% to 92% and on an IQR basis ranging from 18% to 86%. When modelled as Berkson-type error, risk ratios per unit of measurement were biased away from the null hypothesis by 2% to 31%, whereas risk ratios per IQR were attenuated (i.e. biased toward the null) by 5% to 34%. For the modelled CO error amount, a range of error types was simulated and the effects on risk ratio bias and significance were observed. For multiplicative error, both the amount and type of measurement error impact health effect estimates in air pollution epidemiology. By modelling instrument imprecision and spatial variability as different error types, we estimate the direction and magnitude of the effects of error over a range of error types.

  10. MO-FG-202-05: Identifying Treatment Planning System Errors in IROC-H Phantom Irradiations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kerns, J; Followill, D; Howell, R

    Purpose: Treatment Planning System (TPS) errors can affect large numbers of cancer patients receiving radiation therapy. Using an independent recalculation system, the Imaging and Radiation Oncology Core-Houston (IROC-H) can identify institutions that have not sufficiently modelled their linear accelerators in their TPS model. Methods: Linear accelerator point measurement data from IROC-H’s site visits was aggregated and analyzed from over 30 linear accelerator models. Dosimetrically similar models were combined to create “classes”. The class data was used to construct customized beam models in an independent treatment dose verification system (TVS). Approximately 200 head and neck phantom plans from 2012 to 2015 were recalculated using this TVS. Comparison of plan accuracy was evaluated by comparing the measured dose to the institution’s TPS dose as well as the TVS dose. In cases where the TVS was more accurate than the institution by an average of >2%, the institution was identified as having a non-negligible TPS error. Results: Of the ∼200 recalculated plans, the average improvement using the TVS was ∼0.1%; i.e. the recalculation, on average, slightly outperformed the institution’s TPS. Of all the recalculated phantoms, 20% were identified as having a non-negligible TPS error. Fourteen plans failed current IROC-H criteria; the average TVS improvement of the failing plans was ∼3% and 57% were found to have non-negligible TPS errors. Conclusion: IROC-H has developed an independent recalculation system to identify institutions that have considerable TPS errors. A large number of institutions were found to have non-negligible TPS errors. Even institutions that passed IROC-H criteria could be identified as having a TPS error. Resolution of such errors would improve dose delivery for a large number of IROC-H phantoms and ultimately, patients.

  11. Fourier transform magnetic resonance current density imaging (FT-MRCDI) from one component of magnetic flux density.

    PubMed

    Ider, Yusuf Ziya; Birgul, Ozlem; Oran, Omer Faruk; Arikan, Orhan; Hamamura, Mark J; Muftuler, L Tugan

    2010-06-07

    Fourier transform (FT)-based algorithms for magnetic resonance current density imaging (MRCDI) from one component of magnetic flux density have been developed for 2D and 3D problems. For 2D problems, where current is confined to the xy-plane and z-component of the magnetic flux density is measured also on the xy-plane inside the object, an iterative FT-MRCDI algorithm is developed by which both the current distribution inside the object and the z-component of the magnetic flux density on the xy-plane outside the object are reconstructed. The method is applied to simulated as well as actual data from phantoms. The effect of measurement error on the spatial resolution of the current density reconstruction is also investigated. For 3D objects an iterative FT-based algorithm is developed whereby the projected current is reconstructed on any slice using as data the Laplacian of the z-component of magnetic flux density measured for that slice. In an injected current MRCDI scenario, the current is not divergence free on the boundary of the object. The method developed in this study also handles this situation.

  12. Can we observe the fronts of the Antarctic Circumpolar Current using GRACE OBP?

    NASA Astrophysics Data System (ADS)

    Makowski, J.; Chambers, D. P.; Bonin, J. A.

    2014-12-01

    The Antarctic Circumpolar Current (ACC) and the Southern Ocean remain among the most undersampled regions of the world's oceans. The ACC comprises four major fronts: the Sub-Tropical Front (STF), the Polar Front (PF), the Sub-Antarctic Front (SAF), and the Southern ACC Front (SACCF). These were initially observed individually from repeat hydrographic sections, and their approximate global locations have been quantified using all available temperature data from the World Ocean Circulation Experiment (WOCE). More recent studies based on satellite altimetry have found that the front positions are more dynamic and have shifted south by up to 1° on average since 1993. Using ocean bottom pressure (OBP) data from the current Gravity Recovery and Climate Experiment (GRACE), we have measured the integrated transport variability of the ACC south of Australia. However, differentiating the variability of specific fronts has been impossible because of the smoothing required to reduce noise and correlated errors in the measurements. The future GRACE Follow-On (GFO) mission and the post-2020 GRACE-II mission are expected to produce higher resolution gravity fields with monthly temporal resolution. Here, we study the resolution and error characteristics of GRACE gravity data that would be required to resolve variations in the front locations and transport. To do this, we utilize output from a high-resolution model of the Southern Ocean, hydrology models, and ice sheet surface mass balance models; add various amounts of random and correlated errors that may be expected from GFO and GRACE-II; and quantify the requirements for future satellite gravity missions to resolve variations along the ACC fronts.

  13. Analysis on the dynamic error for optoelectronic scanning coordinate measurement network

    NASA Astrophysics Data System (ADS)

    Shi, Shendong; Yang, Linghui; Lin, Jiarui; Guo, Siyang; Ren, Yongjie

    2018-01-01

    Large-scale dynamic three-dimensional coordinate measurement techniques are in strong demand in equipment manufacturing. Noted for its advantages of high accuracy, scale expandability and multitask parallel measurement, the optoelectronic scanning measurement network has attracted close attention. It is widely used in the joining of large components, spacecraft rendezvous and docking simulation, digital shipbuilding and automated guided vehicle navigation. At present, most research on optoelectronic scanning measurement networks focuses on static measurement capability, and research on dynamic accuracy is insufficient. Limited by the measurement principle, the dynamic error is non-negligible and restricts applications. The workshop measurement and positioning system is a representative system that can, in theory, realize dynamic measurement. In this paper we investigate the sources of dynamic error in depth and divide them into two parts: phase error and synchronization error. A dynamic error model is constructed. Based on this model, simulations of the dynamic error are carried out. The dynamic error is quantified, and its volatility and periodicity are characterized; the dynamic error characteristics are shown in detail. The results lay the foundation for further accuracy improvement.

  14. Relationships of Measurement Error and Prediction Error in Observed-Score Regression

    ERIC Educational Resources Information Center

    Moses, Tim

    2012-01-01

    The focus of this paper is assessing the impact of measurement errors on the prediction error of an observed-score regression. Measures are presented and described for decomposing the linear regression's prediction error variance into parts attributable to the true score variance and the error variances of the dependent variable and the predictor…

  15. Systematic review of ERP and fMRI studies investigating inhibitory control and error processing in people with substance dependence and behavioural addictions

    PubMed Central

    Luijten, Maartje; Machielsen, Marise W.J.; Veltman, Dick J.; Hester, Robert; de Haan, Lieuwe; Franken, Ingmar H.A.

    2014-01-01

    Background Several current theories emphasize the role of cognitive control in addiction. The present review evaluates neural deficits in the domains of inhibitory control and error processing in individuals with substance dependence and in those showing excessive addiction-like behaviours. The combined evaluation of event-related potential (ERP) and functional magnetic resonance imaging (fMRI) findings in the present review offers unique information on neural deficits in addicted individuals. Methods We selected 19 ERP and 22 fMRI studies using stop-signal, go/no-go or Flanker paradigms based on a search of PubMed and Embase. Results The most consistent findings in addicted individuals relative to healthy controls were lower N2, error-related negativity and error positivity amplitudes as well as hypoactivation in the anterior cingulate cortex (ACC), inferior frontal gyrus and dorsolateral prefrontal cortex. These neural deficits, however, were not always associated with impaired task performance. With regard to behavioural addictions, some evidence has been found for similar neural deficits; however, studies are scarce and results are not yet conclusive. Differences among the major classes of substances of abuse were identified and involve stronger neural responses to errors in individuals with alcohol dependence versus weaker neural responses to errors in other substance-dependent populations. Limitations Task design and analysis techniques vary across studies, thereby reducing comparability among studies and the potential of clinical use of these measures. Conclusion Current addiction theories were supported by identifying consistent abnormalities in prefrontal brain function in individuals with addiction. An integrative model is proposed, suggesting that neural deficits in the dorsal ACC may constitute a hallmark neurocognitive deficit underlying addictive behaviours, such as loss of control. PMID:24359877

  16. Cost-effectiveness of the stream-gaging program in Kentucky

    USGS Publications Warehouse

    Ruhl, K.J.

    1989-01-01

    This report documents the results of a study of the cost-effectiveness of the stream-gaging program in Kentucky. The total surface-water program includes 97 daily-discharge stations, 12 stage-only stations, and 35 crest-stage stations and is operated on a budget of $950,700. One station used for research lacks an adequate source of funding and should be discontinued when the research ends. Most stations in the network are multiple-use, with 65 stations operated for the purpose of defining hydrologic systems, 48 for project operation, 47 for definition of regional hydrology, and 43 for hydrologic forecasting purposes. Eighteen stations support water-quality monitoring activities, one station is used for planning and design, and one station is used for research. The average standard error of estimation of streamflow records was determined only for stations in the Louisville Subdistrict. Under current operating policy, with a budget of $223,500, the average standard error of estimation is 28.5%. Altering the travel routes and measurement frequency to reduce the amount of lost stage record would allow a slight decrease in standard error to 26.9%. The results indicate that the collection of streamflow records in the Louisville Subdistrict is cost effective in its present mode of operation. In the Louisville Subdistrict, a minimum budget of $214,200 is required to operate the current network at an average standard error of 32.7%; a smaller budget does not permit proper service and maintenance of the gages and recorders. The maximum budget analyzed was $268,200, which would result in an average standard error of 16.9%, indicating that if the budget were increased by 20%, the standard error would be reduced by about 40%. (USGS)

  17. Design of Diaphragm and Coil for Stable Performance of an Eddy Current Type Pressure Sensor.

    PubMed

    Lee, Hyo Ryeol; Lee, Gil Seung; Kim, Hwa Young; Ahn, Jung Hwan

    2016-07-01

    The aim of this work was to develop an eddy current type pressure sensor and investigate its fundamental characteristics as affected by the mechanical and electrical design parameters of the sensor. The sensor has two key components, a diaphragm and a coil. Given that the outer diameter of the sensor is 10 mm, these two key parts must be designed to maintain good linearity and sensitivity. Experiments showed that aluminum is the best target material for eddy current detection. A round-grooved diaphragm is suggested in order to measure more precisely its deflection caused by applied pressures. The design parameters of a round-grooved diaphragm can be selected depending on the measuring requirements. The developed pressure sensor, with a diaphragm of t = 0.2 mm and w = 1.05 mm, was verified to measure pressures up to 10 MPa with very good linearity and errors of less than 0.16%.

  18. Plasma equilibrium control during slow plasma current quench with avoidance of plasma-wall interaction in JT-60U

    NASA Astrophysics Data System (ADS)

    Yoshino, R.; Nakamura, Y.; Neyatani, Y.

    1997-08-01

    In JT-60U a vertical displacement event (VDE) is observed during slow plasma current quench (Ip quench) for a vertically elongated divertor plasma with a single null. The VDE is generated by an error in the feedback control of the vertical position of the plasma current centre (ZJ). It has been completely avoided by improving the accuracy of the real-time ZJ measurement. Furthermore, plasma-wall interaction has been successfully avoided during slow Ip quench owing to the good performance of the plasma equilibrium control system.

  19. Implementation of bayesian model averaging on the weather data forecasting applications utilizing open weather map

    NASA Astrophysics Data System (ADS)

    Rahmat, R. F.; Nasution, F. R.; Seniman; Syahputra, M. F.; Sitompul, O. S.

    2018-02-01

    Weather is the condition of the air in a certain region over a relatively short period of time, measured with various parameters such as temperature, air pressure, wind velocity, humidity and other phenomena in the atmosphere. Extreme weather due to global warming can lead to drought, flooding, hurricanes and other weather events that directly affect social and economic activities. Hence, a forecasting technique is needed to predict weather with distinctive output, in particular a GIS-based mapping process that presents the current weather status at the coordinates of each region and can forecast seven days ahead. The data used in this research are retrieved in real time from the openweathermap server and BMKG. In order to obtain a low error rate and high forecasting accuracy, the authors use the Bayesian Model Averaging (BMA) method. The results show that the BMA method has good accuracy. The forecasting error is calculated as the mean squared error (MSE). The error value for minimum temperature is 0.28 and for maximum temperature 0.15, while the error values for minimum and maximum humidity are 0.38 and 0.04, respectively. The forecasting error for wind speed is 0.076. The lower the forecasting error rate, the better the accuracy.
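
    In BMA, the combined forecast is a weighted average of the member-model forecasts, with weights reflecting each model's posterior probability (estimated, for example, from past performance); skill is then scored with the MSE. The sketch below uses assumed weights and synthetic data purely to illustrate the weighting and the MSE calculation, not the system described in the paper.

        import numpy as np

        rng = np.random.default_rng(5)
        observed = 27.0 + rng.normal(0, 1.0, size=30)        # e.g. daily maximum temperature, deg C

        # Forecasts from three assumed member models (synthetic, with different biases and noise)
        forecasts = np.stack([
            observed + rng.normal(0.5, 0.8, 30),
            observed + rng.normal(-0.3, 0.6, 30),
            observed + rng.normal(0.0, 1.2, 30),
        ])

        weights = np.array([0.25, 0.55, 0.20])   # assumed BMA weights (sum to 1), e.g. from a training period
        bma_forecast = weights @ forecasts       # weighted average across member models

        mse = np.mean((bma_forecast - observed) ** 2)
        print(f"BMA forecast MSE = {mse:.3f}")
        for i, f in enumerate(forecasts):
            print(f"  member {i} MSE = {np.mean((f - observed) ** 2):.3f}")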

  20. Dynamic characterization of Galfenol

    NASA Astrophysics Data System (ADS)

    Scheidler, Justin J.; Asnani, Vivake M.; Deng, Zhangxian; Dapino, Marcelo J.

    2015-04-01

    A novel and precise characterization of the constitutive behavior of solid and laminated research-grade, polycrystalline Galfenol (Fe81.6Ga18.4) under quasi-static (1 Hz) and dynamic (4 to 1000 Hz) stress loadings was recently conducted by the authors. This paper summarizes the characterization by focusing on the experimental design and the dynamic sensing response of the solid Galfenol specimen. Mechanical loads are applied using a high frequency load frame. The dynamic stress amplitude for minor and major loops is 2.88 and 31.4 MPa, respectively. Dynamic minor and major loops are measured for the bias condition resulting in maximum, quasi-static sensitivity. Three key sources of error in the dynamic measurements are accounted for: (1) electromagnetic noise in strain signals due to Galfenol's magnetic response, (2) error in load signals due to the inertial force of fixturing, and (3) time delays imposed by conditioning electronics. For dynamic characterization, strain error is kept below 1.2% of full scale by wiring two collocated gauges in series (noise cancellation) and through lead wire weaving. Inertial force error is kept below 0.41% by measuring the dynamic force in the specimen using a nearly collocated piezoelectric load washer. The phase response of all conditioning electronics is explicitly measured and corrected for. In general, as frequency increases, the sensing response becomes more linear due to an increase in eddy currents. The location of positive and negative saturation is the same at all frequencies. As frequency increases above about 100 Hz, the elbow in the strain versus stress response disappears as the active (soft) regime stiffens toward the passive (hard) regime.

  1. Dynamic Characterization of Galfenol

    NASA Technical Reports Server (NTRS)

    Scheidler, Justin; Asnani, Vivake M.; Deng, Zhangxian; Dapino, Marcelo J.

    2015-01-01

    A novel and precise characterization of the constitutive behavior of solid and laminated research-grade, polycrystalline Galfenol (Fe81.6Ga18.4) under quasi-static (1 Hz) and dynamic (4 to 1000 Hz) stress loadings was recently conducted by the authors. This paper summarizes the characterization by focusing on the experimental design and the dynamic sensing response of the solid Galfenol specimen. Mechanical loads are applied using a high frequency load frame. The dynamic stress amplitude for minor and major loops is 2.88 and 31.4 MPa, respectively. Dynamic minor and major loops are measured for the bias condition resulting in maximum, quasi-static sensitivity. Three key sources of error in the dynamic measurements are accounted for: (1) electromagnetic noise in strain signals due to Galfenol's magnetic response, (2) error in load signals due to the inertial force of fixturing, and (3) time delays imposed by conditioning electronics. For dynamic characterization, strain error is kept below 1.2% of full scale by wiring two collocated gauges in series (noise cancellation) and through lead wire weaving. Inertial force error is kept below 0.41% by measuring the dynamic force in the specimen using a nearly collocated piezoelectric load washer. The phase response of all conditioning electronics is explicitly measured and corrected for. In general, as frequency increases, the sensing response becomes more linear due to an increase in eddy currents. The location of positive and negative saturation is the same at all frequencies. As frequency increases above about 100 Hz, the elbow in the strain versus stress response disappears as the active (soft) regime stiffens toward the passive (hard) regime.

  2. Return Stroke Current Reflections in Rocket-Triggered Lightning

    NASA Astrophysics Data System (ADS)

    Caicedo, J.; Uman, M. A.; Jordan, D.; Biagi, C. J.; Hare, B.

    2015-12-01

    In the six years from 2009 to 2014, there have been eight triggered flashes at the ICLRT, from a total of 125, in which a total of ten return stroke channel-base currents exhibited a dip 3.0 to 16.6 μs after the initial current peak. Close range electric field measurements show a related dip following the initial electric field peak, and electric field derivative measurements show an associated bipolar pulse, confirming that this phenomenon is not an instrumentation effect in the current measurement. For six of the eight flashes, high-speed video frames show what appears to be suspended sections of unexploded triggering wire at heights of about 150 to 300 m that are illuminated when the upward current wave reaches them. The suspended wire can act as an impedance discontinuity, perhaps as it explodes, and cause a downward reflection of some portion of the upward-propagating current wave. This reflected wave travels down the channel and causes the dip in the measured channel-base current when it reaches ground and reflects upward. The modified transmission line model with exponential decay (MTLE) is used to model the close electric field and electric field derivatives of the postulated initial and reflected current waves, starting with the measured channel base current, and the results are compared favorably with measurements made at distances ranging from 92 to 444 m. From the measured time between current impulse initiation and the time the current reflection reaches the channel base and the current dip initiates, along with the reflection height from the video records, we find the average return stroke current speed for each of the ten strokes to be from 0.28 to 1.9 × 10⁸ m s⁻¹, with an error of ±0.01 × 10⁸ m s⁻¹ due to a ±0.1 μs uncertainty in the measurement. This represents the first direct measurement of return stroke current speed, all previous return stroke speed measurements being derived from the luminosity of the process.

  3. Tidal Models In A New Era of Satellite Gravimetry

    NASA Technical Reports Server (NTRS)

    Ray, Richard D.; Rowlands, David D.; Egbert, G. D.; Chao, Benjamin F. (Technical Monitor)

    2002-01-01

    The high precision gravity measurements to be made by recently launched (and recently approved) satellites place new demands on models of Earth, atmospheric, and oceanic tides. The latter is the most problematic. The ocean tides induce variations in the Earth's geoid by amounts that far exceed the new satellite sensitivities, and tidal models must be used to correct for this. Two methods are used here to determine the standard errors in current ocean tide models. At long wavelengths these errors exceed the sensitivity of the GRACE mission. Tidal errors will not prevent the new satellite missions from improving our knowledge of the geopotential by orders of magnitude, but the errors may well contaminate GRACE estimates of temporal variations in gravity. Solar tides are especially problematic because of their long alias periods. The satellite data may be used to improve tidal models once a sufficiently long time series is obtained. Improvements in the long-wavelength components of lunar tides are especially promising.

  4. Systematic Biases in Parameter Estimation of Binary Black-Hole Mergers

    NASA Technical Reports Server (NTRS)

    Littenberg, Tyson B.; Baker, John G.; Buonanno, Alessandra; Kelly, Bernard J.

    2012-01-01

    Parameter estimation of binary-black-hole merger events in gravitational-wave data relies on matched filtering techniques, which, in turn, depend on accurate model waveforms. Here we characterize the systematic biases introduced in measuring astrophysical parameters of binary black holes by applying the currently most accurate effective-one-body templates to simulated data containing non-spinning numerical-relativity waveforms. For advanced ground-based detectors, we find that the systematic biases are well within the statistical error for realistic signal-to-noise ratios (SNR). These biases grow to be comparable to the statistical errors at high signal-to-noise ratios for ground-based instruments (SNR approximately 50) but never dominate the error budget. At the much larger signal-to-noise ratios expected for space-based detectors, these biases will become large compared to the statistical errors but are small enough (at most a few percent in the black-hole masses) that we expect they should not affect broad astrophysical conclusions that may be drawn from the data.

  5. A Sensor Dynamic Measurement Error Prediction Model Based on NAPSO-SVM.

    PubMed

    Jiang, Minlan; Jiang, Lan; Jiang, Dingde; Li, Fei; Song, Houbing

    2018-01-15

    Dynamic measurement error correction is an effective way to improve sensor precision. Dynamic measurement error prediction is an important part of error correction, and the support vector machine (SVM) is often used for predicting the dynamic measurement errors of sensors. Traditionally, the SVM parameters were set manually, which cannot ensure the model's performance. In this paper, an SVM method based on an improved particle swarm optimization (NAPSO) is proposed to predict the dynamic measurement errors of sensors. Natural selection and simulated annealing are added to the PSO to improve its ability to avoid local optima. To verify the performance of NAPSO-SVM, three algorithms are selected to optimize the SVM's parameters: the particle swarm optimization algorithm (PSO), the improved PSO algorithm (NAPSO), and glowworm swarm optimization (GSO). The dynamic measurement error data of two sensors are used as the test data. The root mean squared error and mean absolute percentage error are employed to evaluate the prediction models' performance. The experimental results show that among the three tested algorithms the NAPSO-SVM method has better prediction precision and smaller prediction errors, and it is an effective method for predicting the dynamic measurement errors of sensors.
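
    The sketch below illustrates the general idea of tuning SVM hyperparameters with a swarm search: a plain PSO (without the natural-selection and simulated-annealing refinements of NAPSO) minimizes the validation RMSE of an SVR on a synthetic error series. The data, search ranges, and PSO coefficients are assumptions for illustration only, not the paper's setup.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)

# Synthetic "dynamic measurement error" series (illustrative, not the paper's sensor data).
t = np.linspace(0, 10, 300)
y = 0.5 * np.sin(2 * np.pi * 0.4 * t) + 0.05 * rng.standard_normal(t.size)
X = t.reshape(-1, 1)
X_tr, X_va, y_tr, y_va = train_test_split(X, y, test_size=0.3, random_state=0)

def fitness(params):
    """Validation RMSE of an SVR with log10(C), log10(gamma) given by params."""
    C, gamma = 10.0 ** params
    model = SVR(C=C, gamma=gamma).fit(X_tr, y_tr)
    return mean_squared_error(y_va, model.predict(X_va)) ** 0.5

# Plain PSO over the 2-D search space (no natural-selection / annealing refinements here).
n_particles, n_iter = 15, 30
pos = rng.uniform([-2, -3], [3, 2], size=(n_particles, 2))   # log10(C), log10(gamma)
vel = np.zeros_like(pos)
pbest, pbest_val = pos.copy(), np.array([fitness(p) for p in pos])
gbest = pbest[np.argmin(pbest_val)].copy()

for _ in range(n_iter):
    r1, r2 = rng.random((2, n_particles, 1))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, [-2, -3], [3, 2])
    vals = np.array([fitness(p) for p in pos])
    improved = vals < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[np.argmin(pbest_val)].copy()

print("best log10(C), log10(gamma):", gbest, " RMSE:", pbest_val.min())
```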

  6. Defining quality metrics and improving safety and outcome in allergy care.

    PubMed

    Lee, Stella; Stachler, Robert J; Ferguson, Berrylin J

    2014-04-01

    The delivery of allergy immunotherapy in the otolaryngology office is variable and lacks standardization. Quality metrics encompass the measurement of factors associated with good patient-centered care. These factors have yet to be defined in the delivery of allergy immunotherapy. We developed and applied quality metrics to 6 allergy practices affiliated with an academic otolaryngic allergy center. This work was conducted at a tertiary academic center providing care to over 1500 patients. We evaluated methods and variability between the 6 sites. Tracking of errors and anaphylaxis was initiated across all sites. A nationwide survey of academic and private allergists was used to collect data on current practice and use of quality metrics. The most common types of errors recorded were patient identification errors (n = 4), followed by vial mixing errors (n = 3), and dosing errors (n = 2). There were 7 episodes of anaphylaxis, of which 2 were secondary to dosing errors, for a rate of 0.01% or 1 in every 10,000 injection visits/year. Site visits showed that 86% of key safety measures were followed. Analysis of nationwide survey responses revealed that quality metrics are still not well defined by either medical or otolaryngic allergy practices. Academic practices were statistically more likely to use quality metrics (p = 0.021) and to perform systems reviews and audits than private practices (p = 0.005). Quality metrics in allergy delivery can help improve safety and quality of care. These metrics need to be further defined by otolaryngic allergists in the changing health care environment. © 2014 ARS-AAOA, LLC.

  7. Multi-GNSS signal-in-space range error assessment - Methodology and results

    NASA Astrophysics Data System (ADS)

    Montenbruck, Oliver; Steigenberger, Peter; Hauschild, André

    2018-06-01

    The positioning accuracy of global and regional navigation satellite systems (GNSS/RNSS) depends on a variety of influence factors. For constellation-specific performance analyses it has become common practice to separate a geometry-related quality factor (the dilution of precision, DOP) from the measurement and modeling errors of the individual ranging measurements (known as user equivalent range error, UERE). The latter is further divided into user equipment errors and contributions related to the space and control segment. The present study reviews the fundamental concepts and underlying assumptions of signal-in-space range error (SISRE) analyses and presents a harmonized framework for multi-GNSS performance monitoring based on the comparison of broadcast and precise ephemerides. The implications of inconsistent geometric reference points, non-common time systems, and signal-specific range biases are analyzed, and strategies for coping with these issues in the definition and computation of SIS range errors are developed. The presented concepts are, furthermore, applied to current navigation satellite systems, and representative results are presented along with a discussion of constellation-specific problems in their determination. Based on data for the January to December 2017 time frame, representative global average root-mean-square (RMS) SISRE values of 0.2 m, 0.6 m, 1 m, and 2 m are obtained for Galileo, GPS, BeiDou-2, and GLONASS, respectively. Roughly two times larger values apply for the corresponding 95th-percentile values. Overall, the study contributes to a better understanding and harmonization of multi-GNSS SISRE analyses and their use as key performance indicators for the various constellations.
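
    For readers who want to reproduce this kind of statistic, the sketch below evaluates the standard SISRE combination of radial, along-track, cross-track, and clock differences between broadcast and precise ephemerides. The epoch values and the GPS-like projection weights (0.98 and 1/7) are illustrative assumptions of this sketch, not results from the study.

```python
import numpy as np

# Illustrative broadcast-minus-precise orbit/clock differences per epoch (metres).
# dR: radial, dA: along-track, dC: cross-track, dT: clock offset expressed in metres.
dR = np.array([0.15, -0.20, 0.10, 0.05])
dA = np.array([0.60, -0.40, 0.55, 0.30])
dC = np.array([0.25, 0.35, -0.30, 0.20])
dT = np.array([0.30, 0.10, -0.25, 0.40])

# Constellation-dependent projection weights; the values below are commonly quoted
# GPS-like factors and are an assumption of this sketch.
w_r, w_ac = 0.98, 1.0 / 7.0

sisre = np.sqrt((w_r * dR - dT) ** 2 + w_ac ** 2 * (dA ** 2 + dC ** 2))
rms_sisre = np.sqrt(np.mean(sisre ** 2))
print(f"epoch-wise SISRE (m): {np.round(sisre, 3)}  global RMS: {rms_sisre:.3f} m")
```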

  8. Quantitative, Comparable Coherent Anti-Stokes Raman Scattering (CARS) Spectroscopy: Correcting Errors in Phase Retrieval

    PubMed Central

    Camp, Charles H.; Lee, Young Jong; Cicerone, Marcus T.

    2017-01-01

    Coherent anti-Stokes Raman scattering (CARS) microspectroscopy has demonstrated significant potential for biological and materials imaging. To date, however, the primary mechanism of disseminating CARS spectroscopic information is through pseudocolor imagery, which explicitly neglects a vast majority of the hyperspectral data. Furthermore, current paradigms in CARS spectral processing do not lend themselves to quantitative sample-to-sample comparability. The primary limitation stems from the need to accurately measure the so-called nonresonant background (NRB) that is used to extract the chemically-sensitive Raman information from the raw spectra. Measurement of the NRB on a pixel-by-pixel basis is a nontrivial task; thus, reference NRB from glass or water are typically utilized, resulting in error between the actual and estimated amplitude and phase. In this manuscript, we present a new methodology for extracting the Raman spectral features that significantly suppresses these errors through phase detrending and scaling. Classic methods of error-correction, such as baseline detrending, are demonstrated to be inaccurate and to simply mask the underlying errors. The theoretical justification is presented by re-developing the theory of phase retrieval via the Kramers-Kronig relation, and we demonstrate that these results are also applicable to maximum entropy method-based phase retrieval. This new error-correction approach is experimentally applied to glycerol spectra and tissue images, demonstrating marked consistency between spectra obtained using different NRB estimates, and between spectra obtained on different instruments. Additionally, in order to facilitate implementation of these approaches, we have made many of the tools described herein available free for download. PMID:28819335
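
    As a rough sketch of the phase-retrieval step discussed above, the following uses a discrete Hilbert transform to estimate the phase of an NRB-normalized CARS spectrum and extract a Raman-like component. The sign convention, the neglect of windowing and edge effects, and the toy Lorentzian spectrum are assumptions of this illustration rather than the authors' exact implementation.

```python
import numpy as np
from scipy.signal import hilbert

def kk_retrieve(i_cars, i_nrb):
    """Kramers-Kronig style phase retrieval for a CARS spectrum (sketch).

    i_cars : measured CARS intensity spectrum
    i_nrb  : nonresonant background (reference) spectrum used for normalisation
    Returns an estimate of the Raman-like (imaginary) response. Sign and
    windowing conventions vary between implementations; this follows one common
    choice and ignores edge effects of the discrete Hilbert transform.
    """
    amp = np.sqrt(i_cars / i_nrb)                 # modulus of the normalised response
    # Discrete Hilbert transform: imaginary part of the analytic signal of ln(amp).
    phase = np.imag(hilbert(np.log(amp)))
    return amp * np.sin(phase)                    # Raman-like component

# Toy spectrum: one Lorentzian resonance on a constant nonresonant background.
nu = np.linspace(-50, 50, 2048)
chi = 1.0 + 0.5 / (10.0 - nu - 1j * 2.0)          # chi_NR + chi_R(nu), illustrative
i_cars = np.abs(chi) ** 2
i_nrb = np.ones_like(nu)                          # here chi_NR = 1, so I_NRB = 1
raman_est = kk_retrieve(i_cars, i_nrb)
```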

  9. Investigation of advanced phase-shifting projected fringe profilometry techniques

    NASA Astrophysics Data System (ADS)

    Liu, Hongyu

    1999-11-01

    The phase-shifting projected fringe profilometry (PSPFP) technique is a powerful tool for profile measurement of rough engineering surfaces. Compared with other competing techniques, this technique is notable for its full-field measurement capacity, system simplicity, high measurement speed, and low environmental vulnerability. The main purpose of this dissertation is to tackle, with some new approaches, three important problems that severely limit the capability and accuracy of the PSPFP technique. Chapter 1 briefly introduces background information on the PSPFP technique, including measurement principles, basic features, and related techniques. The objectives and organization of the thesis are also outlined. Chapter 2 gives a theoretical treatment of absolute PSPFP measurement. The mathematical formulations and basic requirements of the absolute PSPFP measurement and its supporting techniques are discussed in detail. Chapter 3 introduces the experimental verification of the proposed absolute PSPFP technique. Some design details of a prototype system are discussed as supplements to the previous theoretical analysis. Various fundamental experiments performed for concept verification and accuracy evaluation are introduced together with some brief comments. Chapter 4 presents the theoretical study of speckle-induced phase measurement errors. In this analysis, the expression for speckle-induced phase errors is first derived based on the multiplicative noise model of image-plane speckles. The statistics and the system dependence of speckle-induced phase errors are then thoroughly studied through numerical simulations and analytical derivations. Based on the analysis, some suggestions on system design are given to improve measurement accuracy. Chapter 5 discusses a new technique for combating surface reflectivity variations. The formula used for error compensation is first derived based on a simplified model of the detection process. The techniques coping with two major effects of surface reflectivity variations are then introduced. Some fundamental problems in the proposed technique are studied through simulations. Chapter 6 briefly summarizes the major contributions of the current work and provides some suggestions for future research.

  10. Measuring in-use ship emissions with international and U.S. federal methods.

    PubMed

    Khan, M Yusuf; Ranganathan, Sindhuja; Agrawal, Harshit; Welch, William A; Laroo, Christopher; Miller, J Wayne; Cocker, David R

    2013-03-01

    Regulatory agencies have shifted their emphasis from measuring emissions during certification cycles to measuring emissions during actual use. Emission measurements in this research were made from two different large ships at sea to compare the Simplified Measurement Method (SMM), compliant with the International Maritime Organization (IMO) NOx Technical Code, to Portable Emission Measurement Systems (PEMS), compliant with the U.S. Environmental Protection Agency (EPA) 40 Code of Federal Regulations (CFR) Part 1065 for on-road emission testing. Emissions of nitrogen oxides (NOx), carbon dioxide (CO2), and carbon monoxide (CO) were measured at load points specified by the International Organization for Standardization (ISO) to compare the two measurement methods. The average percentage errors calculated for PEMS measurements were 6.5%, 0.6%, and 357% for NOx, CO2, and CO, respectively. The NOx percentage error of 6.5% corresponds to a 0.22 to 1.11 g/kW-hr error in moving from Tier III (3.4 g/kW-hr) to Tier I (17.0 g/kW-hr) emission limits. Emission factors (EFs) of NOx and CO2 measured via SMM were comparable to other studies and regulatory agencies' estimates. However, the EF(PM2.5) for this study was up to 26% higher than that currently used by regulatory agencies. The PM2.5 was comprised predominantly of hydrated sulfate (70-95%), followed by organic carbon (11-14%), ash (6-11%), and elemental carbon (0.4-0.8%). This research provides a direct comparison between the International Maritime Organization and U.S. Environmental Protection Agency reference methods for quantifying in-use emissions from ships. It provides correlations for NOx, CO2, and CO measured by a PEMS unit (certified by the U.S. EPA for on-road testing) against IMO's Simplified Measurement Method for on-board certification. It substantiates the measurements of NOx by PEMS and quantifies measurement error. This study also provides in-use modal and overall weighted emission factors of gaseous (NOx, CO, CO2, total hydrocarbons [THC], and SO2) and particulate pollutants from the main engine of a container ship, which are helpful in the development of emission inventories.

  11. Comparing Measurement Error between Two Different Methods of Measurement of Various Magnitudes

    ERIC Educational Resources Information Center

    Zavorsky, Gerald S.

    2010-01-01

    Measurement error is a common problem in several fields of research such as medicine, physiology, and exercise science. The standard deviation of repeated measurements on the same person is the measurement error. One way of presenting measurement error is called the repeatability, which is 2.77 multiplied by the within subject standard deviation.…
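
    A small worked example of the repeatability coefficient mentioned above, computed from paired repeated measurements (the values are illustrative):

```python
import numpy as np

# Two repeated measurements per subject (illustrative values).
trial1 = np.array([12.1, 15.4, 9.8, 11.2, 14.0])
trial2 = np.array([12.6, 15.0, 10.3, 11.0, 14.5])

# Within-subject standard deviation from paired repeats:
# for two trials per subject, s_w^2 equals the mean of (difference^2 / 2).
diff = trial1 - trial2
s_w = np.sqrt(np.mean(diff ** 2) / 2.0)

repeatability = 2.77 * s_w   # the coefficient quoted in the abstract
print(f"within-subject SD = {s_w:.3f}, repeatability = {repeatability:.3f}")
```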

  12. Error measuring system of rotary Inductosyn

    NASA Astrophysics Data System (ADS)

    Liu, Chengjun; Zou, Jibin; Fu, Xinghe

    2008-10-01

    The inductosyn is a kind of high-precision angle-position sensor. It has important applications in servo tables, precision machine tools and other products. The precision of an inductosyn is calibrated by its error, so error measurement is an important problem in the production and application of inductosyns. At present, the error of an inductosyn is mainly obtained by manual measurement, which has disadvantages that cannot be ignored, such as high labour intensity for the operator, errors that easily occur, and poor repeatability. In order to solve these problems, a new automatic measurement method based on a high-precision optical dividing head is put forward in this paper. The error signal can be obtained by precisely processing the output signals of the inductosyn and the optical dividing head. When the inductosyn rotates continuously, its zero-position error can be measured dynamically, and zero-error curves can be output automatically. Measuring and calculating errors caused by human factors can be overcome by this method, and it makes the measuring process quicker, more exact and more reliable. Experiment proves that the accuracy of the error measuring system is 1.1 arc-seconds (peak-to-peak value).

  13. The impact of response measurement error on the analysis of designed experiments

    DOE PAGES

    Anderson-Cook, Christine Michaela; Hamada, Michael Scott; Burr, Thomas Lee

    2016-11-01

    This study considers the analysis of designed experiments when there is measurement error in the true response or so-called response measurement error. We consider both additive and multiplicative response measurement errors. Through a simulation study, we investigate the impact of ignoring the response measurement error in the analysis, that is, by using a standard analysis based on t-tests. In addition, we examine the role of repeat measurements in improving the quality of estimation and prediction in the presence of response measurement error. We also study a Bayesian approach that accounts for the response measurement error directly through the specification of the model, and allows including additional information about variability in the analysis. We consider the impact on power, prediction, and optimization. Copyright © 2015 John Wiley & Sons, Ltd.
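
    As a rough illustration of how ignoring response measurement error degrades a standard t-test analysis, the sketch below simulates a two-group comparison with and without an additive error term. It is not the paper's Bayesian approach, and the effect size, sample size, and error SD are arbitrary assumptions.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

def power_two_sample(effect=1.0, n=10, sigma=1.0, me_sd=0.0, reps=2000, alpha=0.05):
    """Empirical power of a two-sample t-test when additive response
    measurement error with SD `me_sd` is ignored in the analysis (sketch)."""
    hits = 0
    for _ in range(reps):
        a = rng.normal(0.0, sigma, n) + rng.normal(0.0, me_sd, n)
        b = rng.normal(effect, sigma, n) + rng.normal(0.0, me_sd, n)
        if stats.ttest_ind(a, b).pvalue < alpha:
            hits += 1
    return hits / reps

print("power, no measurement error:", power_two_sample(me_sd=0.0))
print("power, response meas. error SD = 1:", power_two_sample(me_sd=1.0))
```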

  14. The impact of response measurement error on the analysis of designed experiments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Anderson-Cook, Christine Michaela; Hamada, Michael Scott; Burr, Thomas Lee

    This study considers the analysis of designed experiments when there is measurement error in the true response or so-called response measurement error. We consider both additive and multiplicative response measurement errors. Through a simulation study, we investigate the impact of ignoring the response measurement error in the analysis, that is, by using a standard analysis based on t-tests. In addition, we examine the role of repeat measurements in improving the quality of estimation and prediction in the presence of response measurement error. We also study a Bayesian approach that accounts for the response measurement error directly through the specification of the model, and allows including additional information about variability in the analysis. We consider the impact on power, prediction, and optimization. Copyright © 2015 John Wiley & Sons, Ltd.

  15. Why a simulation system doesn't match the plant

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sowell, R.

    1998-03-01

    Process simulations, or mathematical models, are widely used by plant engineers and planners to obtain a better understanding of a particular process. These simulations are used to answer questions such as how can feed rate be increased, how can yields be improved, how can energy consumption be decreased, or how should the available independent variables be set to maximize profit? Although current process simulations are greatly improved over those of the '70s and '80s, there are many reasons why a process simulation doesn't match the plant. Understanding these reasons can assist in using simulations to maximum advantage. The reasons simulations do not match the plant may be placed in three main categories: simulation effects or inherent error, sampling and analysis effects or measurement error, and misapplication effects or set-up error.

  16. Centroid Position as a Function of Total Counts in a Windowed CMOS Image of a Point Source

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wurtz, R E; Olivier, S; Riot, V

    2010-05-27

    We obtained 960,200 22-by-22-pixel windowed images of a pinhole spot using the Teledyne H2RG CMOS detector with un-cooled SIDECAR readout. We performed an analysis to determine the precision we might expect in the position error signals to a telescope's guider system. We find that, under non-optimized operating conditions, the error in the computed centroid is strongly dependent on the total counts in the point image only below a certain threshold, approximately 50,000 photo-electrons. The LSST guider camera specification currently requires a 0.04 arcsecond error at 10 Hertz. Given the performance measured here, this specification can be delivered with a single star at 14th to 18th magnitude, depending on the passband.

  17. Kalman filtered MR temperature imaging for laser induced thermal therapies.

    PubMed

    Fuentes, D; Yung, J; Hazle, J D; Weinberg, J S; Stafford, R J

    2012-04-01

    The feasibility of using a stochastic form of the Pennes bioheat model within a 3-D finite element based Kalman filter (KF) algorithm is critically evaluated for the ability to provide temperature field estimates in the event of magnetic resonance temperature imaging (MRTI) data loss during laser induced thermal therapy (LITT). The ability to recover missing MRTI data was analyzed by systematically removing spatiotemporal information from a clinical MR-guided LITT procedure in human brain and comparing predictions in these regions to the original measurements. Performance was quantitatively evaluated in terms of a dimensionless L(2) (RMS) norm of the temperature error weighted by acquisition uncertainty. During periods of no data corruption, observed error histories demonstrate that the Kalman algorithm does not alter the high quality temperature measurement provided by MR thermal imaging. The KF-MRTI implementation considered is seen to predict the bioheat transfer with RMS error < 4 for a short period of time, ∆t < 10 s, until the data corruption subsides. In its present form, the KF-MRTI method fails to compensate for consecutive time periods of data loss ∆t > 10 s.
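
    As a much-reduced illustration of the filtering idea (a scalar random-walk Kalman filter rather than the paper's 3-D finite-element Pennes bioheat model), the sketch below propagates the model prediction through samples where the measurement is missing and updates only when data are available. The noise levels and the toy temperature trace are assumptions.

```python
import numpy as np

def kf_temperature(meas, q=0.05, r=0.25, x0=37.0, p0=1.0):
    """Scalar random-walk Kalman filter for a temperature trace (sketch).
    `meas` may contain np.nan where imaging data were lost; at those samples only
    the model prediction is propagated (no update), mimicking data dropout."""
    x, p, out = x0, p0, []
    for z in meas:
        # predict (random-walk state model)
        p = p + q
        if not np.isnan(z):
            # update with the available measurement
            k = p / (p + r)
            x = x + k * (z - x)
            p = (1.0 - k) * p
        out.append(x)
    return np.array(out)

# Toy trace with a dropout between samples 10 and 15.
truth = 37.0 + np.linspace(0, 5, 30)
meas = truth + 0.5 * np.random.default_rng(2).standard_normal(30)
meas[10:15] = np.nan
est = kf_temperature(meas)
```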

  18. Evaluation of the navigation performance of shipboard-VTOL-landing guidance systems

    NASA Technical Reports Server (NTRS)

    Mcgee, L. A.; Paulk, C. H., Jr.; Steck, S. A.; Schmidt, S. F.; Merz, A. W.

    1979-01-01

    The objective of this study was to explore the performance of a VTOL aircraft landing approach navigation system that receives data (1) from either a microwave scanning beam (MSB) or a radar-transponder (R-T) landing guidance system, and (2) information data-linked from an aviation facility ship. State-of-the-art low-cost-aided inertial techniques and variable gain filters were used in the assumed navigation system. Compensation for ship motion was accomplished by a landing pad deviation vector concept that is a measure of the landing pad's deviation from its calm sea location. The results show that the landing guidance concepts were successful in meeting all of the current Navy navigation error specifications, provided that vector magnitude of the allowable error, rather than the error in each axis, is a permissible interpretation of acceptable performance. The success of these concepts, however, is strongly dependent on the distance measuring equipment bias. In addition, the 'best possible' closed-loop tracking performance achievable with the assumed point-mass VTOL aircraft guidance concept is demonstrated.

  19. Geological Carbon Sequestration: A New Approach for Near-Surface Assurance Monitoring

    PubMed Central

    Wielopolski, Lucian

    2011-01-01

    There are two distinct objectives in monitoring geological carbon sequestration (GCS): deep monitoring of the reservoir's integrity and plume movement, and near-surface monitoring (NSM) to ensure public health and the safety of the environment. However, the minimum detection limits of current instrumentation for NSM are too high for detecting weak signals that are embedded in the background levels of natural variation, and the data obtained represent point measurements in space and time. A new approach for NSM, based on gamma-ray spectroscopy induced by inelastic neutron scattering (INS), offers novel and unique characteristics providing the following: (1) high sensitivity with a reducible measurement error and detection limit, and (2) temporal and spatial integration of carbon in soil that results from underground CO2 seepage. Preliminary field results validated this approach, showing carbon suppression of 14% in the first year and 7% in the second year. In addition, the temporal behavior of the error propagation is presented, and it is shown that for a signal at the level of the minimum detection limit the error asymptotically approaches 47%. PMID:21556180

  20. Error-compensation model for simultaneous measurement of five degrees of freedom motion errors of a rotary axis

    NASA Astrophysics Data System (ADS)

    Bao, Chuanchen; Li, Jiakun; Feng, Qibo; Zhang, Bin

    2018-07-01

    This paper introduces an error-compensation model for our measurement method to measure five motion errors of a rotary axis based on fibre laser collimation. The error-compensation model is established in a matrix form using the homogeneous coordinate transformation theory. The influences of the installation errors, error crosstalk, and manufacturing errors are analysed. The model is verified by both ZEMAX simulation and measurement experiments. The repeatability values of the radial and axial motion errors are significantly suppressed by more than 50% after compensation. The repeatability experiments of five degrees of freedom motion errors and the comparison experiments of two degrees of freedom motion errors of an indexing table were performed by our measuring device and a standard instrument. The results show that the repeatability values of the angular positioning error εz and the tilt motion error around the Y axis εy are 1.2″ and 4.4″, and the comparison deviations of the two motion errors are 4.0″ and 4.4″, respectively. The repeatability values of the radial and axial motion errors, δy and δz, are 1.3 and 0.6 µm, respectively. The repeatability value of the tilt motion error around the X axis εx is 3.8″.

  1. Impact of Measurement Error on Synchrophasor Applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Yilu; Gracia, Jose R.; Ewing, Paul D.

    2015-07-01

    Phasor measurement units (PMUs), a type of synchrophasor, are powerful diagnostic tools that can help avert catastrophic failures in the power grid. Because of this, PMU measurement errors are particularly worrisome. This report examines the internal and external factors contributing to PMU phase angle and frequency measurement errors and gives a reasonable explanation for them. It also analyzes the impact of those measurement errors on several synchrophasor applications: event location detection, oscillation detection, islanding detection, and dynamic line rating. The primary finding is that dynamic line rating is more likely to be influenced by measurement error. Other findings include the possibility of reporting nonoscillatory activity as an oscillation as the result of error, failing to detect oscillations submerged by error, and the unlikely impact of error on event location and islanding detection.

  2. Measurement of thermal conductivity and thermal diffusivity using a thermoelectric module

    NASA Astrophysics Data System (ADS)

    Beltrán-Pitarch, Braulio; Márquez-García, Lourdes; Min, Gao; García-Cañadas, Jorge

    2017-04-01

    A proof of concept of using a thermoelectric module to measure both the thermal conductivity and thermal diffusivity of bulk disc samples at room temperature is demonstrated. The method involves the calculation of the integral area from an impedance spectrum, which empirically correlates with the thermal properties of the sample through an exponential relationship. This relationship was obtained employing different reference materials. The impedance spectroscopy measurements are performed in a very simple setup, comprising a thermoelectric module, which is soldered at its bottom side to a Cu block (heat sink) and thermally connected with the sample at its top side employing thermal grease. Random and systematic errors of the method were calculated for the thermal conductivity (18.6% and 10.9%, respectively) and thermal diffusivity (14.2% and 14.7%, respectively) employing a BCR724 standard reference material. Although the errors are somewhat high, the technique could be useful for screening purposes or high-throughput measurements in its current state. This new method establishes a new application for thermoelectric modules as thermal-property sensors. It involves the use of a very simple setup in conjunction with a frequency response analyzer, which provides a low-cost alternative to most of the apparatus currently available on the market. In addition, impedance analyzers are reliable and widespread equipment, which facilitates the sometimes difficult access to thermal conductivity measurement facilities.

  3. Electrical Characterization of Semiconductor and Dielectric Materials with a Non-Damaging FastGateTM Probe

    NASA Astrophysics Data System (ADS)

    Hillard, Robert; Howland, William; Snyder, Bryan

    2002-03-01

    Determination of the electrical properties of semiconductor materials and dielectrics is highly desirable since these correlate best to final device performance. The properties of SiO2 and high-k dielectrics such as Equivalent Oxide Thickness (EOT), Interface Trap Density (Dit), Oxide Effective Charge (Neff), Flatband Voltage Hysteresis (Delta Vfb), Threshold Voltage (VT) and bulk properties such as carrier density profile and channel dose are all important parameters that require monitoring during front end processing. Conventional methods for determining these parameters involve the manufacturing of polysilicon or metal gate MOS capacitors and subsequent measurements of capacitance-voltage (CV) and/or current-voltage (IV). These conventional techniques are time consuming and can introduce changes to the materials being monitored. Also, equivalent circuit effects resulting from excessive leakage current, series resistance and stray inductance can introduce large errors in the measured results. In this paper, a new method is discussed that provides rapid determination of these critical parameters and is robust against equivalent circuit errors. This technique uses a small-diameter (30 micron), elastically deformed probe to form a gate for MOSCAP CV and IV and can be used to measure either monitor wafers or test areas within scribe lines on product wafers. It allows for measurements of dielectrics thinner than 10 Angstroms. A detailed description and applications, such as high-k dielectrics, will be presented.

  4. A Comparison of Composite Reliability Estimators: Coefficient Omega Confidence Intervals in the Current Literature

    ERIC Educational Resources Information Center

    Padilla, Miguel A.; Divers, Jasmin

    2016-01-01

    Coefficient omega and alpha are both measures of the composite reliability for a set of items. Unlike coefficient alpha, coefficient omega remains unbiased with congeneric items with uncorrelated errors. Despite this ability, coefficient omega is not as widely used and cited in the literature as coefficient alpha. Reasons for coefficient omega's…

  5. Is visual short-term memory depthful?

    PubMed

    Reeves, Adam; Lei, Quan

    2014-03-01

    Does visual short-term memory (VSTM) depend on depth, as it might be if information was stored in more than one depth layer? Depth is critical in natural viewing and might be expected to affect retention, but whether this is so is currently unknown. Cued partial reports of letter arrays (Sperling, 1960) were measured up to 700 ms after display termination. Adding stereoscopic depth hardly affected VSTM capacity or decay inferred from total errors. The pattern of transposition errors (letters reported from an uncued row) was almost independent of depth and cue delay. We conclude that VSTM is effectively two-dimensional. Copyright © 2014 Elsevier Ltd. All rights reserved.

  6. Evaluation of mean velocity and turbulence measurements with ADCPs

    USGS Publications Warehouse

    Nystrom, E.A.; Rehmann, C.R.; Oberg, K.A.

    2007-01-01

    To test the ability of acoustic Doppler current profilers (ADCPs) to measure turbulence, profiles measured with two pulse-to-pulse coherent ADCPs in a laboratory flume were compared to profiles measured with an acoustic Doppler velocimeter, and time series measured in the acoustic beam of the ADCPs were examined. A four-beam ADCP was used at a downstream station, while a three-beam ADCP was used at a downstream station and an upstream station. At the downstream station, where the turbulence intensity was low, both ADCPs reproduced the mean velocity profile well away from the flume boundaries; errors near the boundaries were due to transducer ringing, flow disturbance, and sidelobe interference. At the upstream station, where the turbulence intensity was higher, errors in the mean velocity were large. The four-beam ADCP measured the Reynolds stress profile accurately away from the bottom boundary, and these measurements can be used to estimate shear velocity. Estimates of Reynolds stress with a three-beam ADCP and turbulent kinetic energy with both ADCPs cannot be computed without further assumptions, and they are affected by flow inhomogeneity. Neither ADCP measured integral time scales to within 60%. © 2007 ASCE.

  7. Effects of the Ionosphere on Passive Microwave Remote Sensing of Ocean Salinity from Space

    NASA Technical Reports Server (NTRS)

    LeVine, D. M.; Abaham, Saji; Hildebrand, Peter H. (Technical Monitor)

    2001-01-01

    Among the remote sensing applications currently being considered from space is the measurement of sea surface salinity. The salinity of the open ocean is important for understanding ocean circulation and for modeling energy exchange with the atmosphere. Passive microwave remote sensors operating near 1.4 GHz (L-band) could provide data needed to fill the gap in current coverage and to complement in situ arrays being planned to provide subsurface profiles in the future. However, the dynamic range of the salinity signal in the open ocean is relatively small and propagation effects along the path from surface to sensor must be taken into account. In particular, Faraday rotation and even attenuation/emission in the ionosphere can be important sources of error. The purpose of this work is to estimate the magnitude of these effects in the context of a future remote sensing system in space to measure salinity in L-band. Data will be presented as a function of time, location, and solar activity using IRI-95 to model the ionosphere. The ionosphere presents two potential sources of error for the measurement of salinity: rotation of the polarization vector (Faraday rotation) and attenuation/emission. Estimates of the effect of these two phenomena on passive remote sensing over the oceans at L-band (1.4 GHz) are presented.

  8. Automatic detection of MLC relative position errors for VMAT using the EPID-based picket fence test

    NASA Astrophysics Data System (ADS)

    Christophides, Damianos; Davies, Alex; Fleckney, Mark

    2016-12-01

    Multi-leaf collimators (MLCs) ensure the accurate delivery of treatments requiring complex beam fluences like intensity modulated radiotherapy and volumetric modulated arc therapy. The purpose of this work is to automate the detection of MLC relative position errors ≥0.5 mm using electronic portal imaging device-based picket fence tests and to compare the results to the qualitative assessment currently in use. Picket fence tests with and without intentional MLC errors were measured weekly on three Varian linacs. The picket fence images analysed covered a time period ranging between 14 and 20 months, depending on the linac. An algorithm was developed that calculated the MLC error for each leaf pair present in the picket fence images. The baseline error distributions of each linac were characterised for an initial period of 6 months and compared with the intentional MLC errors using statistical metrics. The distributions of the median and the one-sample Kolmogorov-Smirnov test p-value exhibited no overlap between baseline and intentional errors and were used retrospectively to automatically detect MLC errors in routine clinical practice. Agreement was found between the MLC errors detected by the automatic method and the fault reports during clinical use, as well as interventions for MLC repair and calibration. In conclusion, the method presented provides for full automation of MLC quality assurance, based on individual linac performance characteristics. The use of the automatic method has been shown to provide early warning for MLC errors that resulted in clinical downtime.
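
    A minimal sketch of the distribution-based detection idea: compute the per-image median MLC error and a one-sample Kolmogorov-Smirnov p-value against an assumed baseline error distribution. The baseline parameters, the number of leaf pairs, and the injected 0.5 mm bank offset are all illustrative assumptions, not the paper's data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

# Per-leaf-pair MLC position errors (mm) from one picket fence image (illustrative).
baseline_mu, baseline_sd = 0.0, 0.1          # assumed baseline error distribution for this linac
errors_ok = rng.normal(baseline_mu, baseline_sd, 60)
errors_fault = errors_ok.copy()
errors_fault[20:35] += 0.5                   # a bank of 15 leaf pairs offset by 0.5 mm

for name, e in [("no fault", errors_ok), ("0.5 mm bank fault", errors_fault)]:
    median = np.median(e)
    # One-sample KS test of the measured errors against the baseline distribution.
    p = stats.kstest(e, "norm", args=(baseline_mu, baseline_sd)).pvalue
    print(f"{name}: median = {median:.3f} mm, KS p-value = {p:.3g}")
```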

  9. A Method to Simultaneously Detect the Current Sensor Fault and Estimate the State of Energy for Batteries in Electric Vehicles

    PubMed Central

    Xu, Jun; Wang, Jing; Li, Shiying; Cao, Binggang

    2016-01-01

    Recently, state of energy (SOE) has become one of the most fundamental parameters for battery management systems in electric vehicles. Current information is critical in SOE estimation, and a current sensor is usually utilized to obtain the latest current information. However, if the current sensor fails, the SOE estimation may be subject to large error. Therefore, this paper attempts to make the following contribution: current sensor fault detection and SOE estimation are realized simultaneously. Using a proportional integral observer (PIO) based method, the current sensor fault can be accurately estimated. By taking advantage of the accurately estimated current sensor fault, the influence caused by the fault can be eliminated and compensated. As a result, the SOE estimation is influenced little by the fault. In addition, a simulation and experimental workbench is established to verify the proposed method. The results indicate that the current sensor fault can be estimated accurately. Simultaneously, the SOE can also be estimated accurately and the estimation error is influenced little by the fault. The maximum SOE estimation error is less than 2%, even though the large current error caused by the current sensor fault still exists. PMID:27548183

  10. A Method to Simultaneously Detect the Current Sensor Fault and Estimate the State of Energy for Batteries in Electric Vehicles.

    PubMed

    Xu, Jun; Wang, Jing; Li, Shiying; Cao, Binggang

    2016-08-19

    Recently, state of energy (SOE) has become one of the most fundamental parameters for battery management systems in electric vehicles. Current information is critical in SOE estimation, and a current sensor is usually utilized to obtain the latest current information. However, if the current sensor fails, the SOE estimation may be subject to large error. Therefore, this paper attempts to make the following contribution: current sensor fault detection and SOE estimation are realized simultaneously. Using a proportional integral observer (PIO) based method, the current sensor fault can be accurately estimated. By taking advantage of the accurately estimated current sensor fault, the influence caused by the fault can be eliminated and compensated. As a result, the SOE estimation is influenced little by the fault. In addition, a simulation and experimental workbench is established to verify the proposed method. The results indicate that the current sensor fault can be estimated accurately. Simultaneously, the SOE can also be estimated accurately and the estimation error is influenced little by the fault. The maximum SOE estimation error is less than 2%, even though the large current error caused by the current sensor fault still exists.

  11. Metering error quantification under voltage and current waveform distortion

    NASA Astrophysics Data System (ADS)

    Wang, Tao; Wang, Jia; Xie, Zhi; Zhang, Ran

    2017-09-01

    With the integration of more and more renewable energy and distorting loads into the power grid, voltage and current waveform distortion results in metering errors in smart meters. Because of the negative effects on metering accuracy and fairness, the combined error of energy metering is an important subject of study. In this paper, after comparing the theoretical metered value with the actual recorded value under different meter modes for linear and nonlinear loads, a method for quantifying the metering mode error under waveform distortion is proposed. Based on the metering and time-division multiplier principles, a method for quantifying the metering accuracy error is also proposed. Combining the analysis of the mode error and the accuracy error, a comprehensive error analysis method is presented that is suitable for new energy sources and nonlinear loads. The proposed method has been verified by simulation.
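
    To make the mode-error idea concrete, the sketch below compares "full-waveform" metering (the mean of the instantaneous voltage-current product) with fundamental-only metering on a synthetic distorted waveform. The harmonic amplitudes and phases are arbitrary assumptions, not values from the paper.

```python
import numpy as np

f0, fs, t_end = 50.0, 10000.0, 0.2                 # fundamental, sample rate, 10-cycle window
t = np.arange(0.0, t_end, 1.0 / fs)

# Distorted voltage and current: fundamental plus 3rd and 5th harmonics (illustrative).
u = 311 * np.sin(2 * np.pi * f0 * t) + 15 * np.sin(2 * np.pi * 3 * f0 * t)
i = (10 * np.sin(2 * np.pi * f0 * t - 0.3)
     + 3 * np.sin(2 * np.pi * 3 * f0 * t - 1.0)
     + 1 * np.sin(2 * np.pi * 5 * f0 * t))

# "Time-division multiplier" style metering: average of the instantaneous product.
p_total = np.mean(u * i)

# Fundamental-only metering: project both signals onto the 50 Hz sine/cosine pair.
def fundamental_phasor(x):
    c = 2 * np.mean(x * np.cos(2 * np.pi * f0 * t))
    s = 2 * np.mean(x * np.sin(2 * np.pi * f0 * t))
    return complex(c, s)

U1, I1 = fundamental_phasor(u), fundamental_phasor(i)
p_fund = 0.5 * (U1 * I1.conjugate()).real

print(f"total active power: {p_total:.1f} W, fundamental-only: {p_fund:.1f} W, "
      f"mode difference: {100 * (p_fund - p_total) / p_total:.2f} %")
```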

  12. Thermal stability analysis and modelling of advanced perpendicular magnetic tunnel junctions

    NASA Astrophysics Data System (ADS)

    Van Beek, Simon; Martens, Koen; Roussel, Philippe; Wu, Yueh Chang; Kim, Woojin; Rao, Siddharth; Swerts, Johan; Crotti, Davide; Linten, Dimitri; Kar, Gouri Sankar; Groeseneken, Guido

    2018-05-01

    STT-MRAM is a promising non-volatile memory for high speed applications. The thermal stability factor (Δ = Eb/kT) is a measure for the information retention time, and an accurate determination of the thermal stability is crucial. Recent studies show that a significant error is made using the conventional methods for Δ extraction. We investigate the origin of the low accuracy. To reduce the error down to 5%, 1000 cycles or multiple ramp rates are necessary. Furthermore, the thermal stabilities extracted from current switching and magnetic field switching appear to be uncorrelated and this cannot be explained by a macrospin model. Measurements at different temperatures show that self-heating together with a domain wall model can explain these uncorrelated Δ. Characterizing self-heating properties is therefore crucial to correctly determine the thermal stability.
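
    For context on why Δ determines retention, a minimal Néel-Arrhenius estimate is shown below; the 1 ns attempt time is a commonly assumed value, and the abstract's findings imply that this simple macrospin picture is not sufficient for these devices.

```python
import numpy as np

tau0 = 1e-9            # attempt time in seconds (commonly assumed ~1 ns)
for delta in (40, 60, 80):
    retention_s = tau0 * np.exp(delta)        # Neel-Arrhenius mean retention time
    print(f"delta = {delta:2d}: ~{retention_s / 3.15e7:.2e} years")
```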

  13. Demographics of the gay and lesbian population in the United States: evidence from available systematic data sources.

    PubMed

    Black, D; Gates, G; Sanders, S; Taylor, L

    2000-05-01

    This work provides an overview of standard social science data sources that now allow some systematic study of the gay and lesbian population in the United States. For each data source, we consider how sexual orientation can be defined, and we note the potential sample sizes. We give special attention to the important problem of measurement error, especially the extent to which individuals recorded as gay and lesbian are indeed recorded correctly. Our concern is that because gays and lesbians constitute a relatively small fraction of the population, modest measurement problems could lead to serious errors in inference. In examining gays and lesbians in multiple data sets we also achieve a second objective: We provide a set of statistics about this population that is relevant to several current policy debates.

  14. Half-lives of 214Pb and 214Bi.

    PubMed

    Martz, D E; Langner, G H; Johnson, P R

    1991-10-01

    New measurements on chemically separated samples of 214Bi have yielded a mean half-life value of 19.71 +/- 0.02 min, where the error quoted is twice the standard deviation of the mean based on 23 decay runs. This result provides strong support for the historic 19.72 +/- 0.04 min half-life value and essentially excludes the 19.9-min value, both reported in previous studies. New measurements of the decay rate of 222Rn progeny activity initially in radioactive equilibrium have yielded a value of 26.89 +/- 0.03 min for the half-life of 214Pb, where the error quoted is twice the standard deviation of the mean based on 12 decay runs. This value is 0.1 min longer than the currently accepted 214Pb half-life value of 26.8 min.
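
    A small sketch of how such a quoted uncertainty is formed (twice the standard deviation of the mean over the individual decay runs); the run values below are illustrative, not the measured data.

```python
import numpy as np

# Half-life values (min) from individual decay runs (illustrative, not the study's data).
runs = np.array([19.68, 19.73, 19.70, 19.72, 19.69, 19.74, 19.71, 19.70])

mean = runs.mean()
sem = runs.std(ddof=1) / np.sqrt(runs.size)   # standard deviation of the mean
print(f"half-life = {mean:.2f} +/- {2 * sem:.2f} min (error quoted as 2 x SEM)")
```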

  15. The localization of focal heart activity via body surface potential measurements: tests in a heterogeneous torso phantom

    NASA Astrophysics Data System (ADS)

    Wetterling, F.; Liehr, M.; Schimpf, P.; Liu, H.; Haueisen, J.

    2009-09-01

    The non-invasive localization of focal heart activity via body surface potential measurements (BSPM) could greatly benefit the understanding and treatment of arrhythmic heart diseases. However, the in vivo validation of source localization algorithms is rather difficult with currently available measurement techniques. In this study, we used a physical torso phantom composed of different conductive compartments and seven dipoles, which were placed in the anatomical position of the human heart, in order to assess the performance of the Recursively Applied and Projected Multiple Signal Classification (RAP-MUSIC) algorithm. Electric potentials were measured on the torso surface for single dipoles with and without further uncorrelated or correlated dipole activity. The localization error averaged 11 ± 5 mm over 22 dipoles, which shows the ability of RAP-MUSIC to distinguish an uncorrelated dipole from surrounding source activity. For the first time, real computational modelling errors could be included within the validation procedure due to the physically modelled heterogeneities. In conclusion, the introduced heterogeneous torso phantom can be used to validate state-of-the-art algorithms under nearly realistic measurement conditions.

  16. Possibility of measuring Adler angles in charged current single pion neutrino-nucleus interactions

    NASA Astrophysics Data System (ADS)

    Sánchez, F.

    2016-05-01

    Uncertainties in modeling neutrino-nucleus interactions are a major contribution to systematic errors in long-baseline neutrino oscillation experiments. Accurate modeling of neutrino interactions requires additional experimental observables such as the Adler angles which carry information about the polarization of the Δ resonance and the interference with nonresonant single pion production. The Adler angles were measured with limited statistics in bubble chamber neutrino experiments as well as in electron-proton scattering experiments. We discuss the viability of measuring these angles in neutrino interactions with nuclei.

  17. Model Performance of Water-Current Meters

    USGS Publications Warehouse

    Fulford, J.M.; ,

    2002-01-01

    The measurement of discharge in natural streams requires hydrographers to use accurate water-current meters that have consistent performance among meters of the same model. This paper presents the results of an investigation into the performance of four models of current meters - Price type-AA, Price pygmy, Marsh McBirney 2000 and Swoffer 2100. Tests for consistency and accuracy for six meters of each model are summarized. Variation of meter performance within a model is used as an indicator of consistency, and percent velocity error that is computed from a measured reference velocity is used as an indicator of meter accuracy. Velocities measured by each meter are also compared to the manufacturer's published or advertised accuracy limits. For the meters tested, the Price models were found to be more accurate and consistent over the range of test velocities compared to the other models. The Marsh McBirney model usually measured within its accuracy specification. The Swoffer meters did not meet the stringent Swoffer accuracy limits for all the velocities tested.

  18. Comparison of the Lund and Browder table to computed tomography scan three-dimensional surface area measurement for a pediatric cohort.

    PubMed

    Rumpf, R Wolfgang; Stewart, William C L; Martinez, Stephen K; Gerrard, Chandra Y; Adolphi, Natalie L; Thakkar, Rajan; Coleman, Alan; Rajab, Adrian; Ray, William C; Fabia, Renata

    2018-01-01

    Treating burns effectively requires accurately assessing the percentage of the total body surface area (%TBSA) affected by burns. Current methods for estimating %TBSA, such as Lund and Browder (L&B) tables, rely on historic body statistics. An increasingly obese population has been blamed for increasing errors in %TBSA estimates. However, this assumption has not been experimentally validated. We hypothesized that errors in %TBSA estimates using L&B were due to differences in the physical proportions of today's children compared with children in the early 1940s when the chart was developed and that these differences would appear as body mass index (BMI)-associated systematic errors in the L&B values versus actual body surface areas. We measured the TBSA of human pediatric cadavers using computed tomography scans. Subjects ranged from 9 mo to 15 y in age. We chose outliers of the BMI distribution (from the 31st percentile at the low through the 99th percentile at the high). We examined surface area proportions corresponding to L&B regions. Measured regional proportions based on computed tomography scans were in reasonable agreement with L&B, even with subjects in the tails of the BMI range. The largest deviation was 3.4%, significantly less than the error seen in real-world %TBSA estimates. While today's population is more obese than those studied by L&B, their body region proportions scale surprisingly well. The primary error in %TBSA estimation is not due to changing physical proportions of today's children and may instead lie in the application of the L&B table. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.

  19. Error Analysis and Validation for Insar Height Measurement Induced by Slant Range

    NASA Astrophysics Data System (ADS)

    Zhang, X.; Li, T.; Fan, W.; Geng, X.

    2018-04-01

    The InSAR technique is an important method for large area DEM extraction. Several factors have a significant influence on the accuracy of height measurement. In this research, the effect of slant range measurement error on InSAR height measurement was analyzed and discussed. Based on the theory of InSAR height measurement, the error propagation model was derived assuming no coupling among different factors, which directly characterizes the relationship between slant range error and height measurement error. A theory-based analysis in combination with TanDEM-X parameters was then implemented to quantitatively evaluate the influence of slant range error on height measurement. In addition, a simulation validation of the slant-range-induced InSAR error model was performed on the basis of the SRTM DEM and TanDEM-X parameters. The spatial distribution characteristics and error propagation behaviour of InSAR height measurement were further discussed and evaluated.
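
    As a simplified illustration of slant-range error propagation, the sketch below uses a flat-Earth height relation h = H - r·cos(theta) and, assuming no coupling among factors as stated in the abstract, evaluates the height error caused by a slant-range error. The geometry values are illustrative assumptions, not the TanDEM-X parameters used in the study.

```python
import numpy as np

# Simplified flat-Earth InSAR height relation: h = H - r * cos(theta).
# Assuming no coupling among error sources, a slant-range error dr maps to a
# height error dh = -cos(theta) * dr.
H = 514e3                      # platform altitude in metres (illustrative)
theta = np.deg2rad(35.0)       # look angle (illustrative)
r = H / np.cos(theta)          # slant range to a point at zero height
dr = 1.0                       # slant-range error in metres

h = lambda rr: H - rr * np.cos(theta)
dh_numeric = h(r + dr) - h(r)
dh_analytic = -np.cos(theta) * dr
print(f"height error for 1 m slant-range error: {dh_numeric:.3f} m "
      f"(analytic {dh_analytic:.3f} m)")
```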

  20. Error disclosure: a new domain for safety culture assessment.

    PubMed

    Etchegaray, Jason M; Gallagher, Thomas H; Bell, Sigall K; Dunlap, Ben; Thomas, Eric J

    2012-07-01

    To (1) develop and test survey items that measure error disclosure culture, (2) examine relationships among error disclosure culture, teamwork culture and safety culture and (3) establish predictive validity for survey items measuring error disclosure culture. All clinical faculty from six health institutions (four medical schools, one cancer centre and one health science centre) in The University of Texas System were invited to anonymously complete an electronic survey containing questions about safety culture and error disclosure. The authors found two factors to measure error disclosure culture: one factor is focused on the general culture of error disclosure and the second factor is focused on trust. Both error disclosure culture factors were unique from safety culture and teamwork culture (correlations were less than r=0.85). Also, error disclosure general culture and error disclosure trust culture predicted intent to disclose a hypothetical error to a patient (r=0.25, p<0.001 and r=0.16, p<0.001, respectively) while teamwork and safety culture did not predict such an intent (r=0.09, p=NS and r=0.12, p=NS). Those who received prior error disclosure training reported significantly higher levels of error disclosure general culture (t=3.7, p<0.05) and error disclosure trust culture (t=2.9, p<0.05). The authors created and validated a new measure of error disclosure culture that predicts intent to disclose an error better than other measures of healthcare culture. This measure fills an existing gap in organisational assessments by assessing transparent communication after medical error, an important aspect of culture.

  1. Electromagnetic Flow Meter Having a Driver Circuit Including a Current Transducer

    NASA Technical Reports Server (NTRS)

    Patel, Sandeep K. (Inventor); Karon, David M. (Inventor); Cushing, Vincent (Inventor)

    2014-01-01

    An electromagnetic flow meter (EMFM) accurately measures both the complete flow rate and the dynamically fluctuating flow rate of a fluid by applying a unipolar DC voltage to excitation coils for a predetermined period of time, measuring the electric potential at a pair of electrodes, determining a complete flow rate and independently measuring the dynamic flow rate during the "on" cycle of the DC excitation, and correcting the measurements for errors resulting from galvanic drift and other effects on the electric potential. The EMFM can also correct for effects from the excitation circuit induced during operation of the EMFM.

  2. Audit of the global carbon budget: estimate errors and their impact on uptake uncertainty

    DOE PAGES

    Ballantyne, A. P.; Andres, R.; Houghton, R.; ...

    2015-04-30

    Over the last 5 decades monitoring systems have been developed to detect changes in the accumulation of carbon (C) in the atmosphere and ocean; however, our ability to detect changes in the behavior of the global C cycle is still hindered by measurement and estimate errors. Here we present a rigorous and flexible framework for assessing the temporal and spatial components of estimate errors and their impact on uncertainty in net C uptake by the biosphere. We present a novel approach for incorporating temporally correlated random error into the error structure of emission estimates. Based on this approach, we conclude that the 2σ uncertainties of the atmospheric growth rate have decreased from 1.2 Pg C yr⁻¹ in the 1960s to 0.3 Pg C yr⁻¹ in the 2000s due to an expansion of the atmospheric observation network. The 2σ uncertainties in fossil fuel emissions have increased from 0.3 Pg C yr⁻¹ in the 1960s to almost 1.0 Pg C yr⁻¹ during the 2000s due to differences in national reporting errors and differences in energy inventories. Lastly, while land use emissions have remained fairly constant, their errors still remain high and thus their global C uptake uncertainty is not trivial. Currently, the absolute errors in fossil fuel emissions rival the total emissions from land use, highlighting the extent to which fossil fuels dominate the global C budget. Because errors in the atmospheric growth rate have decreased faster than errors in total emissions have increased, a ~20% reduction in the overall uncertainty of net C global uptake has occurred. Given all the major sources of error in the global C budget that we could identify, we are 93% confident that terrestrial C uptake has increased and 97% confident that ocean C uptake has increased over the last 5 decades. Thus, it is clear that arguably one of the most vital ecosystem services currently provided by the biosphere is the continued removal of approximately half of atmospheric CO2 emissions from the atmosphere, although there are certain environmental costs associated with this service, such as the acidification of ocean waters.
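    A short sketch of why temporally correlated random error matters when aggregating annual estimates, as the abstract describes. Under an assumed AR(1) correlation structure, correlation inflates the uncertainty of a decadal mean relative to the independent-error case; the sigma and rho values below are illustrative, not the paper's estimates.

    ```python
    # Effect of AR(1)-correlated annual errors on the 2-sigma uncertainty of a decadal mean.
    import numpy as np

    sigma = 0.3      # assumed 1-sigma error of a single annual estimate (Pg C / yr)
    rho = 0.95       # assumed lag-1 autocorrelation of the annual errors
    n = 10           # years averaged

    # Var(mean) = (sigma^2 / n) * [1 + 2 * sum_{k=1}^{n-1} (1 - k/n) * rho^k]
    k = np.arange(1, n)
    inflation = 1.0 + 2.0 * np.sum((1.0 - k / n) * rho**k)
    var_mean_corr = sigma**2 / n * inflation
    var_mean_iid = sigma**2 / n

    print(f"2-sigma of decadal mean, independent errors: {2*np.sqrt(var_mean_iid):.3f} Pg C/yr")
    print(f"2-sigma of decadal mean, AR(1) rho={rho}    : {2*np.sqrt(var_mean_corr):.3f} Pg C/yr")
    ```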

  3. A curved edge diffraction-utilized displacement sensor for spindle metrology

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, ChaBum, E-mail: clee@tntech.edu; Zhao, Rui; Jeon, Seongkyul

    This paper presents a new dimensional metrological sensing principle for a curved surface based on curved edge diffraction. Spindle error measurement technology utilizes a cylindrical or spherical target artifact attached to the spindle with non-contact sensors, typically a capacitive sensor (CS) or an eddy current sensor, pointed at the artifact. However, these sensors are designed for flat surface measurement. Therefore, measuring a target with a curved surface causes error. This is due to electric fields behaving differently between a flat and curved surface than between two flat surfaces. In this study, a laser is positioned incident to the cylindrical surface of the spindle, and a photodetector collects the total field produced by the diffraction around the target surface. The proposed sensor was compared with a CS within a range of 500 μm. The discrepancy between the proposed sensor and CS was 0.017% of the full range. Its sensing performance showed a resolution of 14 nm and a drift of less than 10 nm for 7 min of operation. This sensor was also used to measure dynamic characteristics of the spindle system (natural frequency 181.8 Hz, damping ratio 0.042) and spindle runout (22.0 μm at 2000 rpm). The combined standard uncertainty was estimated as 85.9 nm under current experiment conditions. It is anticipated that this measurement technique allows for in situ health monitoring of a precision spindle system in an accurate, convenient, and low cost manner.

  4. A toolkit for measurement error correction, with a focus on nutritional epidemiology

    PubMed Central

    Keogh, Ruth H; White, Ian R

    2014-01-01

    Exposure measurement error is a problem in many epidemiological studies, including those using biomarkers and measures of dietary intake. Measurement error typically results in biased estimates of exposure-disease associations, the severity and nature of the bias depending on the form of the error. To correct for the effects of measurement error, information additional to the main study data is required. Ideally, this is a validation sample in which the true exposure is observed. However, in many situations it is not feasible to observe the true exposure, but one or more repeated exposure measurements may be available, for example, blood pressure or dietary intake recorded at two time points. The aim of this paper is to provide a toolkit for measurement error correction using repeated measurements. We bring together methods covering classical measurement error and several departures from classical error: systematic, heteroscedastic and differential error. The correction methods considered are regression calibration, which is already widely used in the classical error setting, and moment reconstruction and multiple imputation, which are newer approaches with the ability to handle differential error. We emphasize practical application of the methods in nutritional epidemiology and other fields. We primarily consider continuous exposures in the exposure-outcome model, but we also outline methods for use when continuous exposures are categorized. The methods are illustrated using the data from a study of the association between fibre intake and colorectal cancer, where fibre intake is measured using a diet diary and repeated measures are available for a subset. © 2014 The Authors. PMID:24497385
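    A minimal sketch of regression calibration with two replicate exposure measurements under classical error, the first of the correction methods named above. The data, variable names and effect sizes are synthetic, and no covariates are included.

    ```python
    # Regression calibration with replicate exposure measurements (synthetic data).
    import numpy as np

    rng = np.random.default_rng(1)
    n, k = 2000, 2                                        # subjects, replicates per subject
    x = rng.normal(0.0, 1.0, n)                           # true exposure (unobserved)
    w = x[:, None] + rng.normal(0.0, 0.8, (n, k))         # replicates with classical error
    y = 0.5 * x + rng.normal(0.0, 1.0, n)                 # outcome

    wbar = w.mean(axis=1)
    var_u = np.mean(w.var(axis=1, ddof=1))                # within-person (error) variance
    var_x = wbar.var(ddof=1) - var_u / k                  # estimated true-exposure variance
    lam = var_x / (var_x + var_u / k)                     # attenuation / calibration factor
    x_hat = wbar.mean() + lam * (wbar - wbar.mean())      # calibrated exposure E[X | Wbar]

    naive = np.polyfit(wbar, y, 1)[0]
    calibrated = np.polyfit(x_hat, y, 1)[0]
    print(f"true slope 0.50 | naive {naive:.2f} | regression calibration {calibrated:.2f}")
    ```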

  5. Scanner qualification with IntenCD based reticle error correction

    NASA Astrophysics Data System (ADS)

    Elblinger, Yair; Finders, Jo; Demarteau, Marcel; Wismans, Onno; Minnaert Janssen, Ingrid; Duray, Frank; Ben Yishai, Michael; Mangan, Shmoolik; Cohen, Yaron; Parizat, Ziv; Attal, Shay; Polonsky, Netanel; Englard, Ilan

    2010-03-01

    Scanner introduction into the fab production environment is a challenging task. An efficient evaluation of scanner performance metrics during the factory acceptance test (FAT) and later on during the site acceptance test (SAT) is crucial for minimizing the cycle time for pre- and post-production-start activities. If done effectively, the baseline performance metrics established during the SAT are used as a reference for scanner performance and fleet matching monitoring and maintenance in the fab environment. Key elements which can influence the cycle time of the SAT, FAT and maintenance cycles are the imaging, process and mask characterizations involved with those cycles. Discrete mask measurement techniques are currently in use to create across-mask CDU maps. By subtracting these maps from their final wafer measurement CDU map counterparts, it is possible to assess the real scanner-induced printed errors within certain limitations. The current discrete measurement methods are time consuming, and some techniques also overlook mask-based effects other than line width variations, such as transmission and phase variations, all of which influence the final printed CD variability. The Applied Materials Aera2™ mask inspection tool with IntenCD™ technology can scan the mask at high speed, offer full mask coverage, and accurately assess all mask-induced sources of error simultaneously, making it beneficial for scanner qualifications and performance monitoring. In this paper we report on a study that was done to improve a scanner introduction and qualification process using the IntenCD application to map the mask-induced CD non-uniformity. We will present the results of six scanners in production and discuss the benefits of the new method.

  6. Trimming and procrastination as inversion techniques

    NASA Astrophysics Data System (ADS)

    Backus, George E.

    1996-12-01

    By examining the processes of truncating and approximating the model space (trimming it), and by committing to neither the objectivist nor the subjectivist interpretation of probability (procrastinating), we construct a formal scheme for solving linear and non-linear geophysical inverse problems. The necessary prior information about the correct model xE can be either a collection of inequalities or a probability measure describing where xE was likely to be in the model space X before the data vector y0 was measured. The results of the inversion are (1) a vector z0 that estimates some numerical properties zE of xE; (2) an estimate of the error δz = z0 - zE. As y0 is finite dimensional, so is z0, and hence in principle inversion cannot describe all of xE. The error δz is studied under successively more specialized assumptions about the inverse problem, culminating in a complete analysis of the linear inverse problem with a prior quadratic bound on xE. Our formalism appears to encompass and provide error estimates for many of the inversion schemes current in geomagnetism, and would be equally applicable in geodesy and seismology if adequate prior information were available there. As an idealized example we study the magnetic field at the core-mantle boundary, using satellite measurements of field elements at sites assumed to be almost uniformly distributed on a single spherical surface. Magnetospheric currents are neglected and the crustal field is idealized as a random process with rotationally invariant statistics. We find that an appropriate data compression diagonalizes the variance matrix of the crustal signal and permits an analytic trimming of the idealized problem.

  7. Expected orbit determination performance for the TOPEX/Poseidon mission

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nerem, R.S.; Putney, B.H.; Marshall, J.A.

    1993-03-01

    The TOPEX/Poseidon (T/P) mission, launched during the summer of 1992, has the requirement that the radial component of its orbit must be computed to an accuracy of 13 cm root-mean-square (rms) or better, allowing measurements of the sea surface height to be computed to similar accuracy when the satellite height is differenced with the altimeter measurements. This will be done by combining precise satellite tracking measurements with precise models of the forces acting on the satellite. The Space Geodesy Branch at Goddard Space Flight Center (GSFC), as part of the T/P precision orbit determination (POD) Team, has the responsibility within NASA for the T/P precise orbit computations. The prelaunch activities of the T/P POD Team have been mainly directed towards developing improved models of the static and time-varying gravitational forces acting on T/P and precise models for the non-conservative forces perturbing the orbit of T/P such as atmospheric drag, solar and Earth radiation pressure, and thermal imbalances. The radial orbit error budget for T/P allows 10 cm rms error due to gravity field mismodeling, 3 cm due to solid Earth and ocean tides, 6 cm due to radiative forces, and 3 cm due to atmospheric drag. A prelaunch assessment of the current modeling accuracies for these forces indicates that the radial orbit error requirements can be achieved with the current models, and can probably be surpassed once T/P tracking data are used to fine tune the models. Provided that the performance of the T/P spacecraft is nominal, the precise orbits computed by the T/P POD Team should be accurate to 13 cm or better radially.

  8. A GPS-Based Pitot-Static Calibration Method Using Global Output-Error Optimization

    NASA Technical Reports Server (NTRS)

    Foster, John V.; Cunningham, Kevin

    2010-01-01

    Pressure-based airspeed and altitude measurements for aircraft typically require calibration of the installed system to account for pressure sensing errors such as those due to local flow field effects. In some cases, calibration is used to meet requirements such as those specified in Federal Aviation Regulation Part 25. Several methods are used for in-flight pitot-static calibration including tower fly-by, pacer aircraft, and trailing cone methods. In the 1990 s, the introduction of satellite-based positioning systems to the civilian market enabled new inflight calibration methods based on accurate ground speed measurements provided by Global Positioning Systems (GPS). Use of GPS for airspeed calibration has many advantages such as accuracy, ease of portability (e.g. hand-held) and the flexibility of operating in airspace without the limitations of test range boundaries or ground telemetry support. The current research was motivated by the need for a rapid and statistically accurate method for in-flight calibration of pitot-static systems for remotely piloted, dynamically-scaled research aircraft. Current calibration methods were deemed not practical for this application because of confined test range size and limited flight time available for each sortie. A method was developed that uses high data rate measurements of static and total pressure, and GPSbased ground speed measurements to compute the pressure errors over a range of airspeed. The novel application of this approach is the use of system identification methods that rapidly compute optimal pressure error models with defined confidence intervals in nearreal time. This method has been demonstrated in flight tests and has shown 2- bounds of approximately 0.2 kts with an order of magnitude reduction in test time over other methods. As part of this experiment, a unique database of wind measurements was acquired concurrently with the flight experiments, for the purpose of experimental validation of the optimization method. This paper describes the GPS-based pitot-static calibration method developed for the AirSTAR research test-bed operated as part of the Integrated Resilient Aircraft Controls (IRAC) project in the NASA Aviation Safety Program (AvSP). A description of the method will be provided and results from recent flight tests will be shown to illustrate the performance and advantages of this approach. Discussion of maneuver requirements and data reduction will be included as well as potential applications.
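    A simplified illustration of the underlying idea (not the paper's output-error method with real-time confidence intervals): solving simultaneously for the wind vector and an airspeed scale factor by linear least squares, using GPS ground-speed vectors recorded on several headings. All headings, speeds and noise levels are synthetic assumptions.

    ```python
    # GPS-based airspeed calibration: least-squares solve for scale factor and wind.
    import numpy as np

    rng = np.random.default_rng(0)
    hdg = np.deg2rad(np.arange(0, 360, 45))                  # flight headings
    ias = np.full(hdg.size, 100.0)                           # indicated airspeed [kt]
    k_true, wind_true = 1.05, np.array([12.0, -5.0])         # true scale error and wind [kt]

    tas = k_true * ias
    gs = np.column_stack([tas*np.cos(hdg) + wind_true[0],
                          tas*np.sin(hdg) + wind_true[1]])
    gs += rng.normal(0.0, 0.3, gs.shape)                     # GPS ground-speed noise

    # Model: gs_x = k*ias*cos(hdg) + wx ; gs_y = k*ias*sin(hdg) + wy  (linear in k, wx, wy)
    A = np.zeros((2*hdg.size, 3))
    A[0::2, 0] = ias*np.cos(hdg); A[0::2, 1] = 1.0
    A[1::2, 0] = ias*np.sin(hdg); A[1::2, 2] = 1.0
    b = gs.reshape(-1)
    (k_est, wx, wy), *_ = np.linalg.lstsq(A, b, rcond=None)
    print(f"scale factor {k_est:.3f} (true {k_true}), wind ({wx:.1f}, {wy:.1f}) kt")
    ```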

  9. Magnetostriction measurement by four probe method

    NASA Astrophysics Data System (ADS)

    Dange, S. N.; Radha, S.

    2018-04-01

    The present paper describes the design and setting up of an indigenously developed magnetostriction (MS) measurement setup using the four-probe method at room temperature. A standard strain gauge is pasted with a special glue on the sample, and its change in resistance with applied magnetic field is measured using a Keithley nanovoltmeter and current source. An electromagnet with a field of up to 1.2 tesla is used to source the magnetic field. The sample is placed between the magnet poles using a self-designed and developed wooden probe stand, capable of moving in three mutually perpendicular directions. The nanovoltmeter and current source are interfaced with a PC using an RS232 serial interface. Software has been developed for logging and processing of data. Proper optimization of the measurement has been done through software to reduce the noise due to thermal emf and electromagnetic induction. The data acquired for some standard magnetic samples are presented. The sensitivity of the setup is 1 microstrain, with an error in measurement of up to 5%.

  10. A Sensor Dynamic Measurement Error Prediction Model Based on NAPSO-SVM

    PubMed Central

    Jiang, Minlan; Jiang, Lan; Jiang, Dingde; Li, Fei

    2018-01-01

    Dynamic measurement error correction is an effective way to improve sensor precision. Dynamic measurement error prediction is an important part of error correction, and support vector machine (SVM) is often used for predicting the dynamic measurement errors of sensors. Traditionally, the SVM parameters were always set manually, which cannot ensure the model’s performance. In this paper, a SVM method based on an improved particle swarm optimization (NAPSO) is proposed to predict the dynamic measurement errors of sensors. Natural selection and simulated annealing are added in the PSO to raise the ability to avoid local optima. To verify the performance of NAPSO-SVM, three types of algorithms are selected to optimize the SVM’s parameters: the particle swarm optimization algorithm (PSO), the improved PSO optimization algorithm (NAPSO), and the glowworm swarm optimization (GSO). The dynamic measurement error data of two sensors are applied as the test data. The root mean squared error and mean absolute percentage error are employed to evaluate the prediction models’ performances. The experimental results show that among the three tested algorithms the NAPSO-SVM method has better prediction precision and smaller prediction errors, and it is an effective method for predicting the dynamic measurement errors of sensors. PMID:29342942
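    A compact sketch of the general idea: tuning SVR hyperparameters with a plain particle swarm and cross-validation. The paper's NAPSO adds natural selection and simulated annealing, which are omitted here; the data, swarm constants and search ranges are illustrative assumptions, and scikit-learn is required.

    ```python
    # Plain PSO tuning of SVR hyperparameters (C, gamma) by cross-validation (sketch).
    import numpy as np
    from sklearn.svm import SVR
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    X = rng.uniform(-3, 3, (200, 1))
    y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=200)          # synthetic "error" signal

    def fitness(p):                                            # p = [log10 C, log10 gamma]
        model = SVR(C=10.0**p[0], gamma=10.0**p[1])
        return -cross_val_score(model, X, y, cv=5,
                                scoring="neg_mean_squared_error").mean()   # CV MSE

    n_particles, n_iter, dim = 15, 20, 2
    pos = rng.uniform(-2, 2, (n_particles, dim))
    vel = np.zeros_like(pos)
    pbest, pbest_f = pos.copy(), np.array([fitness(p) for p in pos])
    gbest = pbest[pbest_f.argmin()].copy()

    for _ in range(n_iter):
        r1, r2 = rng.random((2, n_particles, dim))
        vel = 0.7*vel + 1.5*r1*(pbest - pos) + 1.5*r2*(gbest - pos)   # inertia + cognitive + social
        pos = np.clip(pos + vel, -3, 3)
        f = np.array([fitness(p) for p in pos])
        improved = f < pbest_f
        pbest[improved], pbest_f[improved] = pos[improved], f[improved]
        gbest = pbest[pbest_f.argmin()].copy()

    print(f"best CV MSE {pbest_f.min():.4f} at C={10**gbest[0]:.3g}, gamma={10**gbest[1]:.3g}")
    ```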

  11. Comparing Parameter Estimation Techniques for an Electrical Power Transformer Oil Temperature Prediction Model

    NASA Technical Reports Server (NTRS)

    Morris, A. Terry

    1999-01-01

    This paper examines various sources of error in MIT's improved top oil temperature rise over ambient temperature model and estimation process. The sources of error are the current parameter estimation technique, quantization noise, and post-processing of the transformer data. Results from this paper will show that an output error parameter estimation technique should be selected to replace the current least squares estimation technique. The output error technique obtained accurate predictions of transformer behavior, revealed the best error covariance, obtained consistent parameter estimates, and provided for valid and sensible parameters. This paper will also show that the output error technique should be used to minimize errors attributed to post-processing (decimation) of the transformer data. Models used in this paper are validated using data from a large transformer in service.
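    A generic sketch of the distinction the abstract draws, using a simple first-order model rather than MIT's top-oil temperature model: equation-error least squares regresses the noisy measured output on itself and is biased by measurement noise, whereas output-error estimation simulates the model and fits its output to the measurements. All values are synthetic.

    ```python
    # Equation-error least squares vs output-error estimation for y[k+1] = a*y[k] + b*u[k].
    import numpy as np
    from scipy.optimize import least_squares

    rng = np.random.default_rng(2)
    a_true, b_true, n = 0.9, 0.5, 500
    u = rng.normal(size=n)
    y = np.zeros(n)
    for k in range(n - 1):
        y[k+1] = a_true*y[k] + b_true*u[k]
    ym = y + rng.normal(0.0, 0.2, n)                       # noisy measured output

    # Equation-error least squares: regress ym[k+1] on (ym[k], u[k]) -> biased 'a'
    A = np.column_stack([ym[:-1], u[:-1]])
    a_ls, b_ls = np.linalg.lstsq(A, ym[1:], rcond=None)[0]

    # Output-error: simulate the model and fit parameters to the measured output
    def simulate(p):
        a, b = p
        ys = np.zeros(n)
        for k in range(n - 1):
            ys[k+1] = a*ys[k] + b*u[k]
        return ys

    res = least_squares(lambda p: simulate(p) - ym, x0=[0.5, 0.1])
    print(f"true a={a_true}, b={b_true}")
    print(f"equation-error LS: a={a_ls:.3f}, b={b_ls:.3f}")
    print(f"output-error     : a={res.x[0]:.3f}, b={res.x[1]:.3f}")
    ```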

  12. Specificity of reliable change models and review of the within-subjects standard deviation as an error term.

    PubMed

    Hinton-Bayre, Anton D

    2011-02-01

    There is an ongoing debate over the preferred method(s) for determining the reliable change (RC) in individual scores over time. In the present paper, specificity comparisons of several classic and contemporary RC models were made using a real data set. This included a more detailed review of a new RC model recently proposed in this journal, that used the within-subjects standard deviation (WSD) as the error term. It was suggested that the RC(WSD) was more sensitive to change and theoretically superior. The current paper demonstrated that even in the presence of mean practice effects, false-positive rates were comparable across models when reliability was good and initial and retest variances were equivalent. However, when variances differed, discrepancies in classification across models became evident. Notably, the RC using the WSD provided unacceptably high false-positive rates in this setting. It was considered that the WSD was never intended for measuring change in this manner. The WSD actually combines systematic and error variance. The systematic variance comes from measurable between-treatment differences, commonly referred to as practice effect. It was further demonstrated that removal of the systematic variance and appropriate modification of the residual error term for the purpose of testing individual change yielded an error term already published and criticized in the literature. A consensus on the RC approach is needed. To that end, further comparison of models under varied conditions is encouraged.
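    For context, a small sketch of the classic reliable-change index (Jacobson and Truax) with a practice-effect adjustment; this is not the WSD-based model the abstract critiques, and the input numbers are illustrative assumptions rather than the paper's data.

    ```python
    # Classic reliable-change index with an optional mean practice-effect adjustment.
    import math

    def reliable_change(x1, x2, sd_baseline, r_xx, practice_effect=0.0):
        """RC = (retest - baseline - mean practice effect) / SE of the difference."""
        sem = sd_baseline * math.sqrt(1.0 - r_xx)   # standard error of measurement
        se_diff = math.sqrt(2.0) * sem              # SE of the difference score
        return (x2 - x1 - practice_effect) / se_diff

    # Illustrative numbers: baseline SD 10, test-retest reliability 0.85, practice effect +2.
    rc = reliable_change(x1=48, x2=56, sd_baseline=10.0, r_xx=0.85, practice_effect=2.0)
    print(f"RC = {rc:.2f}  ->  'reliable' change if |RC| > 1.96")
    ```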

  13. Errors in laboratory medicine: practical lessons to improve patient safety.

    PubMed

    Howanitz, Peter J

    2005-10-01

    Patient safety is influenced by the frequency and seriousness of errors that occur in the health care system. Error rates in laboratory practices are collected routinely for a variety of performance measures in all clinical pathology laboratories in the United States, but a list of critical performance measures has not yet been recommended. The most extensive databases describing error rates in pathology were developed and are maintained by the College of American Pathologists (CAP). These databases include the CAP's Q-Probes and Q-Tracks programs, which provide information on error rates from more than 130 interlaboratory studies. To define critical performance measures in laboratory medicine, describe error rates of these measures, and provide suggestions to decrease these errors, thereby ultimately improving patient safety. A review of experiences from Q-Probes and Q-Tracks studies supplemented with other studies cited in the literature. Q-Probes studies are carried out as time-limited studies lasting 1 to 4 months and have been conducted since 1989. In contrast, Q-Tracks investigations are ongoing studies performed on a yearly basis and have been conducted only since 1998. Participants from institutions throughout the world simultaneously conducted these studies according to specified scientific designs. The CAP has collected and summarized data for participants about these performance measures, including the significance of errors, the magnitude of error rates, tactics for error reduction, and willingness to implement each of these performance measures. A list of recommended performance measures, the frequency of errors when these performance measures were studied, and suggestions to improve patient safety by reducing these errors. Error rates for preanalytic and postanalytic performance measures were higher than for analytic measures. Eight performance measures were identified, including customer satisfaction, test turnaround times, patient identification, specimen acceptability, proficiency testing, critical value reporting, blood product wastage, and blood culture contamination. Error rate benchmarks for these performance measures were cited and recommendations for improving patient safety presented. Not only has each of the 8 performance measures proven practical, useful, and important for patient care, but taken together they also fulfill regulatory requirements. All laboratories should consider implementing these performance measures and standardizing their own scientific designs, data analysis, and error reduction strategies according to findings from these published studies.

  14. Research on Measurement Accuracy of Laser Tracking System Based on Spherical Mirror with Rotation Errors of Gimbal Mount Axes

    NASA Astrophysics Data System (ADS)

    Shi, Zhaoyao; Song, Huixu; Chen, Hongfang; Sun, Yanqiang

    2018-02-01

    This paper presents a novel experimental approach for confirming that the spherical mirror of a laser tracking system can reduce the influence of rotation errors of the gimbal mount axes on the measurement accuracy. By simplifying the optical system model of a laser tracking system based on a spherical mirror, we can easily extract the laser ranging measurement error caused by rotation errors of the gimbal mount axes from the positions of the spherical mirror, biconvex lens, cat's eye reflector, and measuring beam. The motions of the polarization beam splitter and biconvex lens along the optical axis and perpendicular to the optical axis are driven by error motions of the gimbal mount axes. In order to simplify the experimental process, the motion of the biconvex lens is substituted by the motion of the spherical mirror according to the principle of relative motion. The laser ranging measurement error caused by the rotation errors of the gimbal mount axes could be recorded in the readings of a laser interferometer. The experimental results showed that the laser ranging measurement error caused by rotation errors was less than 0.1 μm if the radial error motion and axial error motion were within ±10 μm. The experimental method simplified the experimental procedure, and the spherical mirror could reduce the influence of rotation errors of the gimbal mount axes on the measurement accuracy of the laser tracking system.

  15. Simulation of atmospheric N2O with GEOS-Chem and its adjoint: evaluation of observational constraints

    NASA Astrophysics Data System (ADS)

    Wells, K. C.; Millet, D. B.; Bousserez, N.; Henze, D. K.; Chaliyakunnel, S.; Griffis, T. J.; Luan, Y.; Dlugokencky, E. J.; Prinn, R. G.; O'Doherty, S.; Weiss, R. F.; Dutton, G. S.; Elkins, J. W.; Krummel, P. B.; Langenfelds, R.; Steele, L. P.; Kort, E. A.; Wofsy, S. C.; Umezawa, T.

    2015-07-01

    We describe a new 4D-Var inversion framework for N2O based on the GEOS-Chem chemical transport model and its adjoint, and apply this framework in a series of observing system simulation experiments to assess how well N2O sources and sinks can be constrained by the current global observing network. The employed measurement ensemble includes approximately weekly and quasi-continuous N2O measurements (hourly averages used) from several long-term monitoring networks, N2O measurements collected from discrete air samples aboard a commercial aircraft (CARIBIC), and quasi-continuous measurements from an airborne pole-to-pole sampling campaign (HIPPO). For a two-year inversion, we find that the surface and HIPPO observations can accurately resolve a uniform bias in emissions during the first year; CARIBIC data provide a somewhat weaker constraint. Variable emission errors are much more difficult to resolve given the long lifetime of N2O, and major parts of the world lack significant constraints on the seasonal cycle of fluxes. Current observations can largely correct a global bias in the stratospheric sink of N2O if emissions are known, but do not provide information on the temporal and spatial distribution of the sink. However, for the more realistic scenario where source and sink are both uncertain, we find that simultaneously optimizing both would require unrealistically small errors in model transport. Regardless, a bias in the magnitude of the N2O sink would not affect the a posteriori N2O emissions for the two-year timescale used here, given realistic initial conditions, due to the timescale required for stratosphere-troposphere exchange (STE). The same does not apply to model errors in the rate of STE itself, which we show exerts a larger influence on the tropospheric burden of N2O than does the chemical loss rate over short (< 3 year) timescales. We use a stochastic estimate of the inverse Hessian for the inversion to evaluate the spatial resolution of emission constraints provided by the observations, and find that significant, spatially explicit constraints can be achieved in locations near and immediately upwind of surface measurements and the HIPPO flight tracks; however, these are mostly confined to North America, Europe, and Australia. None of the current observing networks are able to provide significant spatial information on tropical N2O emissions. There, averaging kernels are highly smeared spatially and extend even to the midlatitudes, so that tropical emissions risk being conflated with those elsewhere. For global inversions, therefore, the current lack of constraints on the tropics also places an important limit on our ability to understand extratropical emissions. Based on the error reduction statistics from the inverse Hessian, we characterize the atmospheric distribution of unconstrained N2O, and identify regions in and downwind of South America, Central Africa, and Southeast Asia where new surface or profile measurements would have the most value for reducing present uncertainty in the global N2O budget.

  16. Can Family Planning Service Statistics Be Used to Track Population-Level Outcomes?

    PubMed

    Magnani, Robert J; Ross, John; Williamson, Jessica; Weinberger, Michelle

    2018-03-21

    The need for annual family planning program tracking data under the Family Planning 2020 (FP2020) initiative has contributed to renewed interest in family planning service statistics as a potential data source for annual estimates of the modern contraceptive prevalence rate (mCPR). We sought to assess (1) how well a set of commonly recorded data elements in routine service statistics systems could, with some fairly simple adjustments, track key population-level outcome indicators, and (2) whether some data elements performed better than others. We used data from 22 countries in Africa and Asia to analyze 3 data elements collected from service statistics: (1) number of contraceptive commodities distributed to clients, (2) number of family planning service visits, and (3) number of current contraceptive users. Data quality was assessed via analysis of mean square errors, using the United Nations Population Division World Contraceptive Use annual mCPR estimates as the "gold standard." We also examined the magnitude of several components of measurement error: (1) variance, (2) level bias, and (3) slope (or trend) bias. Our results indicate modest levels of tracking error for data on commodities to clients (7%) and service visits (10%), and somewhat higher error rates for data on current users (19%). Variance and slope bias were relatively small for all data elements. Level bias was by far the largest contributor to tracking error. Paired comparisons of data elements in countries that collected at least 2 of the 3 data elements indicated a modest advantage of data on commodities to clients. None of the data elements considered was sufficiently accurate to be used to produce reliable stand-alone annual estimates of mCPR. However, the relatively low levels of variance and slope bias indicate that trends calculated from these 3 data elements can be productively used in conjunction with the Family Planning Estimation Tool (FPET) currently used to produce annual mCPR tracking estimates for FP2020. © Magnani et al.
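    A sketch of one standard way to split the mean squared tracking error of service-statistics-based estimates against a reference series into a level-bias term, a slope-related (conditional bias) term, and residual variance. This is a Theil-style decomposition that may differ from the authors' exact formulation, and the data below are synthetic.

    ```python
    # Theil-style MSE decomposition of a tracking series against a reference (synthetic).
    import numpy as np

    rng = np.random.default_rng(3)
    reference = np.linspace(20, 35, 12) + rng.normal(0, 0.5, 12)   # reference mCPR (%)
    estimate = 1.1 * reference - 4.0 + rng.normal(0, 1.0, 12)      # service-statistics estimate

    e, o = estimate, reference
    mse = np.mean((e - o) ** 2)
    level_bias2 = (e.mean() - o.mean()) ** 2
    r = np.corrcoef(e, o)[0, 1]
    slope_term = (e.std() - r * o.std()) ** 2     # conditional (slope-related) bias
    residual = (1 - r**2) * o.var()               # unexplained variance

    print(f"MSE {mse:.2f} = level bias^2 {level_bias2:.2f} "
          f"+ slope term {slope_term:.2f} + residual {residual:.2f}")
    ```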

  17. Metabolomics as a tool in the identification of dietary biomarkers.

    PubMed

    Gibbons, Helena; Brennan, Lorraine

    2017-02-01

    Current dietary assessment methods including FFQ, 24-h recalls and weighed food diaries are associated with many measurement errors. In an attempt to overcome some of these errors, dietary biomarkers have emerged as a complementary approach to these traditional methods. Metabolomics has developed as a key technology for the identification of new dietary biomarkers and to date, metabolomic-based approaches have led to the identification of a number of putative biomarkers. The three approaches generally employed when using metabolomics in dietary biomarker discovery are: (i) acute interventions where participants consume specific amounts of a test food, (ii) cohort studies where metabolic profiles are compared between consumers and non-consumers of a specific food and (iii) the analysis of dietary patterns and metabolic profiles to identify nutritypes and biomarkers. The present review critiques the current literature in terms of the approaches used for dietary biomarker discovery and gives a detailed overview of the currently proposed biomarkers, highlighting steps needed for their full validation. Furthermore, the present review also evaluates areas such as current databases and software tools, which are needed to advance the interpretation of results and therefore enhance the utility of dietary biomarkers in nutrition research.

  18. Beyond alpha: an empirical examination of the effects of different sources of measurement error on reliability estimates for measures of individual differences constructs.

    PubMed

    Schmidt, Frank L; Le, Huy; Ilies, Remus

    2003-06-01

    On the basis of an empirical study of measures of constructs from the cognitive domain, the personality domain, and the domain of affective traits, the authors of this study examine the implications of transient measurement error for the measurement of frequently studied individual differences variables. The authors clarify relevant reliability concepts as they relate to transient error and present a procedure for estimating the coefficient of equivalence and stability (L. J. Cronbach, 1947), the only classical reliability coefficient that assesses all 3 major sources of measurement error (random response, transient, and specific factor errors). The authors conclude that transient error exists in all 3 trait domains and is especially large in the domain of affective traits. Their findings indicate that the nearly universal use of the coefficient of equivalence (Cronbach's alpha; L. J. Cronbach, 1951), which fails to assess transient error, leads to overestimates of reliability and undercorrections for biases due to measurement error.

  19. Two-point motional Stark effect diagnostic for Madison Symmetric Torus

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ko, J.; Den Hartog, D. J.; Caspary, K. J.

    2010-10-15

    A high-precision spectral motional Stark effect (MSE) diagnostic provides internal magnetic field measurements for Madison Symmetric Torus (MST) plasmas. Currently, MST uses two spatial views - on the magnetic axis and on the midminor (off-axis) radius, the latter added recently. A new analysis scheme has been developed to infer both the pitch angle and the magnitude of the magnetic field from MSE spectra. Systematic errors are reduced by using atomic data from atomic data and analysis structure in the fit. Reconstructed current density and safety factor profiles are more strongly and globally constrained with the addition of the off-axis radiusmore » measurement than with the on-axis one only.« less

  20. Radiographic cup anteversion measurement corrected from pelvic tilt.

    PubMed

    Wang, Liao; Thoreson, Andrew R; Trousdale, Robert T; Morrey, Bernard F; Dai, Kerong; An, Kai-Nan

    2017-11-01

    The purpose of this study was to develop a novel technique to improve the accuracy of radiographic cup anteversion measurement by correcting the influence of pelvic tilt. Ninety virtual total hip arthroplasties were simulated from computed tomography data of 6 patients with 15 predetermined cup orientations. For each simulated implantation, anteroposterior (AP) virtual pelvic radiographs were generated for 11 predetermined pelvic tilts. A linear regression model was created to capture the relationship between radiographic cup anteversion angle error measured on AP pelvic radiographs and pelvic tilt. Overall, nine hundred and ninety virtual AP pelvic radiographs were measured, and 90 linear regression models were created. Pearson's correlation analyses confirmed a strong correlation between the errors of conventional radiographic cup anteversion angle measured on AP pelvic radiographs and the magnitude of pelvic tilt (P < 0.001). The means of the 90 slopes and y-intercepts of the regression lines were -0.8 and -2.5°, respectively, and these were applied as the general correction parameters for the proposed tool to correct the conventional cup anteversion angle for the influence of pelvic tilt. The current method proposes to measure the pelvic tilt on a lateral radiograph, and to use it as a correction for the radiographic cup anteversion measurement on an AP pelvic radiograph. Thus, both AP and lateral pelvic radiographs are required for the measurement of pelvic posture-integrated cup anteversion. Compared with conventional radiographic cup anteversion, the errors of pelvic posture-integrated radiographic cup anteversion were reduced from 10.03 (SD = 5.13) degrees to 2.53 (SD = 1.33) degrees. Pelvic posture-integrated cup anteversion measurement improves the accuracy of radiographic cup anteversion measurement, which shows the potential of further clarifying the etiology of postoperative instability based on planar radiographs. Copyright © 2017 IPEM. Published by Elsevier Ltd. All rights reserved.
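    A sketch of the workflow the abstract implies: fit a linear relationship between the anteversion measurement error and pelvic tilt, then subtract the tilt-predicted error from a new AP-radiograph reading using a laterally measured tilt. The data, sign conventions and fitted constants below are synthetic assumptions; the abstract's reported mean slope and intercept are used only to generate the illustrative data.

    ```python
    # Fit and apply a tilt-dependent correction to radiographic cup anteversion (sketch).
    import numpy as np

    rng = np.random.default_rng(4)
    tilt = rng.uniform(-20, 20, 90)                         # pelvic tilt (deg), synthetic
    true_error = -0.8 * tilt - 2.5                          # assumed linear error model
    measured_error = true_error + rng.normal(0, 0.5, 90)    # anteversion error on AP film

    slope, intercept = np.polyfit(tilt, measured_error, 1)

    def corrected_anteversion(measured_anteversion, pelvic_tilt):
        """Subtract the tilt-predicted measurement error from the AP-film reading."""
        return measured_anteversion - (slope * pelvic_tilt + intercept)

    print(f"fitted slope {slope:.2f} deg/deg, intercept {intercept:.2f} deg")
    print(f"example: AP reading 18 deg at tilt +10 deg -> {corrected_anteversion(18, 10):.1f} deg")
    ```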

  1. Reducing hydrologic model uncertainty in monthly streamflow predictions using multimodel combination

    NASA Astrophysics Data System (ADS)

    Li, Weihua; Sankarasubramanian, A.

    2012-12-01

    Model errors are inevitable in any prediction exercise. One approach that is currently gaining attention in reducing model errors is combining multiple models to develop improved predictions. The rationale behind this approach primarily lies on the premise that optimal weights could be derived for each model so that the developed multimodel predictions will result in improved predictions. A new dynamic approach (MM-1) to combine multiple hydrological models by evaluating their performance/skill contingent on the predictor state is proposed. We combine two hydrological models, the "abcd" model and the variable infiltration capacity (VIC) model, to develop multimodel streamflow predictions. To quantify precisely under what conditions the multimodel combination results in improved predictions, we compare the multimodel scheme MM-1 with the optimal model combination scheme (MM-O) by employing them in predicting the streamflow generated from a known hydrologic model (abcd model or VIC model) with heteroscedastic error variance, as well as from a hydrologic model that exhibits a different structure than that of the candidate models (i.e., the "abcd" model or VIC model). Results from the study show that streamflow estimated from single models performed better than multimodels when there was almost no measurement error. However, under increased measurement errors and model structural misspecification, both multimodel schemes (MM-1 and MM-O) consistently performed better than the single model prediction. Overall, MM-1 performs better than MM-O in predicting the monthly flow values as well as in predicting extreme monthly flows. Comparison of the weights obtained from each candidate model reveals that as measurement errors increase, MM-1 assigns weights equally among the models, whereas MM-O always assigns higher weights to the candidate model that performs best during the calibration period. Applying the multimodel algorithms for predicting streamflows over four different sites revealed that MM-1 performs better than all single models and the optimal model combination scheme, MM-O, in predicting the monthly flows as well as the flows during wetter months.
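    A minimal illustration of static optimal model combination (in the spirit of MM-O): weights are chosen on a calibration period to minimize squared error and then applied to a validation period. MM-1's dynamic, predictor-state-contingent weighting is more involved and is not reproduced here; all series are synthetic.

    ```python
    # Static optimal combination of two candidate streamflow predictions (sketch).
    import numpy as np

    rng = np.random.default_rng(5)
    n = 240                                               # months
    obs = 50 + 20*np.sin(np.arange(n)*2*np.pi/12) + rng.normal(0, 5, n)
    pred1 = obs + rng.normal(2, 6, n)                     # model 1: small bias, more noise
    pred2 = obs + rng.normal(-4, 4, n)                    # model 2: larger bias, less noise

    # Choose convex weight w for  w*pred1 + (1-w)*pred2  on a calibration period
    cal = slice(0, 120)
    w_grid = np.linspace(0, 1, 101)
    mse = [np.mean((w*pred1[cal] + (1-w)*pred2[cal] - obs[cal])**2) for w in w_grid]
    w = w_grid[int(np.argmin(mse))]

    val = slice(120, n)
    combo = w*pred1[val] + (1-w)*pred2[val]
    rmse = lambda p: np.sqrt(np.mean((p - obs[val])**2))
    print(f"w={w:.2f} | RMSE model1 {rmse(pred1[val]):.2f}, model2 {rmse(pred2[val]):.2f}, "
          f"combined {rmse(combo):.2f}")
    ```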

  2. Probing-error compensation using 5 degree of freedom force/moment sensor for coordinate measuring machine

    NASA Astrophysics Data System (ADS)

    Lee, Minho; Cho, Nahm-Gyoo

    2013-09-01

    A new probing and compensation method is proposed to improve the three-dimensional (3D) measuring accuracy of 3D shapes, including irregular surfaces. A new tactile coordinate measuring machine (CMM) probe with a five-degree-of-freedom (5-DOF) force/moment sensor using carbon fiber plates was developed. The proposed method efficiently removes the anisotropic sensitivity error and decreases the stylus deformation error and the actual contact point estimation error, which are major error components of shape measurement using touch probes. The relationships between the measuring force and the estimation accuracy of the actual contact point error and the stylus deformation error are examined for practical use of the proposed method. The appropriate measuring force condition is presented for precision measurement.

  3. Trends in Health Information Technology Safety: From Technology-Induced Errors to Current Approaches for Ensuring Technology Safety

    PubMed Central

    2013-01-01

    Objectives Health information technology (HIT) research findings suggested that new healthcare technologies could reduce some types of medical errors while at the same time introducing new classes of medical errors (i.e., technology-induced errors). Technology-induced errors have their origins in HIT, and/or HIT contributes to their occurrence. The objective of this paper is to review current trends in the published literature on HIT safety. Methods A review and synthesis of the medical and life sciences literature focusing on the area of technology-induced error was conducted. Results There were four main trends in the literature on technology-induced error. The following areas were addressed in the literature: definitions of technology-induced errors; models, frameworks and evidence for understanding how technology-induced errors occur; a discussion of monitoring; and methods for preventing and learning about technology-induced errors. Conclusions The literature focusing on technology-induced errors continues to grow. Research has focused on defining what an error is, on models and frameworks used to understand these new types of errors, on monitoring of such errors, and on methods that can be used to prevent these errors. More research will be needed to better understand and mitigate these types of errors. PMID:23882411

  4. Accuracy of blood-glucose measurements using glucose meters and arterial blood gas analyzers in critically ill adult patients: systematic review

    PubMed Central

    2013-01-01

    Introduction Glucose control to prevent both hyperglycemia and hypoglycemia is important in an intensive care unit. Arterial blood gas analyzers and glucose meters are commonly used to measure blood-glucose concentration in an intensive care unit; however, their accuracies are still unclear. Methods We performed a systematic literature search (January 1, 2001, to August 31, 2012) to find clinical studies comparing blood-glucose values measured with glucose meters and/or arterial blood gas analyzers with those simultaneously measured with a central laboratory machine in critically ill adult patients. Results We reviewed 879 articles and found 21 studies in which the accuracy of blood-glucose monitoring by arterial blood gas analyzers and/or glucometers by using central laboratory methods as references was assessed in critically ill adult patients. Of those 21 studies, 11 studies in which International Organization for Standardization criteria, error-grid method, or percentage of values within 20% of the error of a reference were used were selected for evaluation. The accuracy of blood-glucose measurements by arterial blood gas analyzers and glucose meters by using arterial blood was significantly higher than that of measurements with glucose meters by using capillary blood (odds ratios for error: 0.04, P < 0.001; and 0.36, P < 0.001). The accuracy of blood-glucose measurements with arterial blood gas analyzers tended to be higher than that of measurements with glucose meters by using arterial blood (P = 0.20). In the hypoglycemic range (defined as < 81 mg/dl), the incidence of errors using these devices was higher than that in the nonhypoglycemic range (odds ratios for error: arterial blood gas analyzers, 1.86, P = 0.15; glucose meters with capillary blood, 1.84, P = 0.03; glucose meters with arterial blood, 2.33, P = 0.02). Unstable hemodynamics (edema and use of a vasopressor) and use of insulin were associated with increased error of blood glucose monitoring with glucose meters. Conclusions Our literature review showed that the accuracy of blood-glucose measurements with arterial blood gas analyzers was significantly higher than that of measurements with glucose meters by using capillary blood and tended to be higher than that of measurements with glucose meters by using arterial blood. These results should be interpreted with caution because of the large variation of accuracy among devices. Because blood-glucose monitoring was less accurate within or near the hypoglycemic range, especially in patients with unstable hemodynamics or receiving insulin infusion, we should be aware that current blood glucose-monitoring technology has not reached a high enough degree of accuracy and reliability to lead to appropriate glucose control in critically ill patients. PMID:23506841

  5. Random measurement error: Why worry? An example of cardiovascular risk factors.

    PubMed

    Brakenhoff, Timo B; van Smeden, Maarten; Visseren, Frank L J; Groenwold, Rolf H H

    2018-01-01

    With the increased use of data not originally recorded for research, such as routine care data (or 'big data'), measurement error is bound to become an increasingly relevant problem in medical research. A common view among medical researchers on the influence of random measurement error (i.e. classical measurement error) is that its presence leads to some degree of systematic underestimation of studied exposure-outcome relations (i.e. attenuation of the effect estimate). For the common situation where the analysis involves at least one exposure and one confounder, we demonstrate that the direction of effect of random measurement error on the estimated exposure-outcome relations can be difficult to anticipate. Using three example studies on cardiovascular risk factors, we illustrate that random measurement error in the exposure and/or confounder can lead to underestimation as well as overestimation of exposure-outcome relations. We therefore advise medical researchers to refrain from making claims about the direction of effect of measurement error in their manuscripts, unless the appropriate inferential tools are used to study or alleviate the impact of measurement error from the analysis.
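    A small simulation (synthetic data and effect sizes) illustrating the abstract's point: classical error in a confounder leaves residual confounding, which can push the adjusted exposure estimate above or below the true value depending on the direction of confounding.

    ```python
    # Classical measurement error in a confounder: bias can go either way.
    import numpy as np

    rng = np.random.default_rng(6)

    def adjusted_slope(conf_effect, n=50_000):
        c = rng.normal(size=n)                              # true confounder
        x = 0.8*c + rng.normal(size=n)                      # exposure associated with confounder
        y = 0.3*x + conf_effect*c + rng.normal(size=n)      # true exposure effect = 0.3
        c_err = c + rng.normal(0, 1.0, n)                   # confounder measured with error
        A = np.column_stack([x, c_err, np.ones(n)])
        return np.linalg.lstsq(A, y, rcond=None)[0][0]      # coefficient on exposure

    print("true effect 0.30")
    print(f"positive confounding, noisy confounder -> {adjusted_slope(+1.0):.2f} (overestimate)")
    print(f"negative confounding, noisy confounder -> {adjusted_slope(-1.0):.2f} (underestimate)")
    ```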

  6. Incorporating measurement error in n = 1 psychological autoregressive modeling.

    PubMed

    Schuurman, Noémi K; Houtveen, Jan H; Hamaker, Ellen L

    2015-01-01

    Measurement error is omnipresent in psychological data. However, the vast majority of applications of autoregressive time series analyses in psychology do not take measurement error into account. Disregarding measurement error when it is present in the data results in a bias of the autoregressive parameters. We discuss two models that take measurement error into account: An autoregressive model with a white noise term (AR+WN), and an autoregressive moving average (ARMA) model. In a simulation study we compare the parameter recovery performance of these models, and compare this performance for both a Bayesian and frequentist approach. We find that overall, the AR+WN model performs better. Furthermore, we find that for realistic (i.e., small) sample sizes, psychological research would benefit from a Bayesian approach in fitting these models. Finally, we illustrate the effect of disregarding measurement error in an AR(1) model by means of an empirical application on mood data in women. We find that, depending on the person, approximately 30-50% of the total variance was due to measurement error, and that disregarding this measurement error results in a substantial underestimation of the autoregressive parameters.
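    A minimal sketch of the bias described above: a latent AR(1) process observed with white measurement noise, where naively fitting an AR(1) to the observations attenuates the autoregressive parameter. Parameter values are illustrative, not taken from the empirical application.

    ```python
    # Attenuation of a naive AR(1) estimate when the series contains measurement error.
    import numpy as np

    rng = np.random.default_rng(7)
    phi, n = 0.7, 5000
    innov_sd, meas_sd = 1.0, 1.0

    y = np.zeros(n)
    for t in range(1, n):
        y[t] = phi*y[t-1] + rng.normal(0, innov_sd)          # latent AR(1) process
    x = y + rng.normal(0, meas_sd, n)                        # observed with measurement error

    phi_naive = np.polyfit(x[:-1], x[1:], 1)[0]              # lag-1 regression on noisy data
    var_y = innov_sd**2 / (1 - phi**2)
    phi_expected = phi * var_y / (var_y + meas_sd**2)        # theoretical attenuation

    print(f"true phi {phi}, naive estimate {phi_naive:.2f}, expected attenuated value {phi_expected:.2f}")
    print(f"share of observed variance due to measurement error: {meas_sd**2 / (var_y + meas_sd**2):.0%}")
    ```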

  7. Characterization of Unimorph-Membrane Microactuators and Error-Analysis of the Characterization Process

    NASA Technical Reports Server (NTRS)

    Wright, Matthew W.

    2005-01-01

    Microactuators are versatile, low-cost, low-mass electrical-mechanical devices that can be used in many applications. Microactuators consist of two electrodes sandwiching a PZT (piezo-electric) film between them. The centers of the microactuators deflect when a voltage is applied across the electrodes. In order to correctly apply this technology for use, it is important to fully characterize the actuation behavior. Measuring the deflection profile as a function of the voltage of various microactuators is crucial. This measurement process has errors associated with it, so it is being studied to determine the accuracy of the data. In certain applications, microactuators may undergo many cycles of deflection; testing various microactuators through many cycles of deflection simulates these circumstances. However, due to an unknown issue, many of the microactuators exhibit defects that cause them to fail when voltage is applied to their electrodes. These defects do not allow for the acquisition of significant deflection profiles. Vibrations are the largest cause of error in deflection measurements, and the microactuators withstand continuous cycles of deflection, yet the cause of damage is still to be determined. Future projects will be needed to characterize the deflection profiles of various microactuators and to overcome the defects in the microactuators that are currently present.

  8. Internal dosimetry monitoring equipment: Present and future

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Selby, J.; Carbaugh, E.H.; Lynch, T.P.

    1993-09-01

    We have attempted to characterize the current and future status of in vivo and in vitro measurement programs coupled with the associated radioanalytical methods and workplace monitoring. Developments in these areas must be carefully integrated by internal dosimetrists, radiochemists and field health physicists. Their goal should be uniform improvement rather than to focus on one specific area (e.g., dose modeling) to the neglect of other areas where the measurement capabilities are substantially less sophisticated and, therefore, the potential source of error is greatest.

  9. A Measurable Difference: Bridge Versus Loop

    NASA Technical Reports Server (NTRS)

    1998-01-01

    Trig-Tek, Inc.'s Model 251A ACL-8 Anderson Current Loop (ACL) Conditioner is an eight-channel device designed to condition variable-resistance sensor signals from strain gages and resistance temperature devices (RTDs). It uses NASA's patented ACL technology instead of the classic Wheatstone bridge. The electronic measurement circuit delivers accuracy far beyond previous methods and prevents errors caused by variation in the wires that connect sensors to data collection equipment. This is the first license to market a NASA Dryden Flight Research Center patent.

  10. Measuring discharge with ADCPs: Inferences from synthetic velocity profiles

    USGS Publications Warehouse

    Rehmann, C.R.; Mueller, D.S.; Oberg, K.A.

    2009-01-01

    Synthetic velocity profiles are used to determine guidelines for sampling discharge with acoustic Doppler current profilers (ADCPs). The analysis allows the effects of instrument characteristics, sampling parameters, and properties of the flow to be studied systematically. For mid-section measurements, the averaging time required for a single profile measurement always exceeded the 40 s usually recommended for velocity measurements, and it increased with increasing sample interval and increasing time scale of the large eddies. Similarly, simulations of transect measurements show that discharge error decreases as the number of large eddies sampled increases. The simulations allow sampling criteria that account for the physics of the flow to be developed. © 2009 ASCE.

  11. Application of round grating angle measurement composite error amendment in the online measurement accuracy improvement of large diameter

    NASA Astrophysics Data System (ADS)

    Wang, Biao; Yu, Xiaofen; Li, Qinzhao; Zheng, Yu

    2008-10-01

    Aiming at the influence factors of round grating dividing error, rolling-wheel eccentricity, and surface shape errors, this paper provides an amendment method based on the rolling wheel to obtain a composite error model that includes all of the influence factors above, and then corrects the non-circular angle measurement error of the rolling wheel. Software simulation and experiments were carried out; the results indicate that the composite error amendment method can improve the diameter measurement accuracy obtained with the rolling-wheel approach. It has wide application prospects for measurements requiring accuracy better than 5 μm/m.

  12. Modeling Errors in Daily Precipitation Measurements: Additive or Multiplicative?

    NASA Technical Reports Server (NTRS)

    Tian, Yudong; Huffman, George J.; Adler, Robert F.; Tang, Ling; Sapiano, Matthew; Maggioni, Viviana; Wu, Huan

    2013-01-01

    The definition and quantification of uncertainty depend on the error model used. For uncertainties in precipitation measurements, two types of error models have been widely adopted: the additive error model and the multiplicative error model. This leads to incompatible specifications of uncertainties and impedes intercomparison and application. In this letter, we assess the suitability of both models for satellite-based daily precipitation measurements in an effort to clarify the uncertainty representation. Three criteria were employed to evaluate the applicability of either model: (1) better separation of the systematic and random errors; (2) applicability to the large range of variability in daily precipitation; and (3) better predictive skills. It is found that the multiplicative error model is a much better choice under all three criteria. It extracted the systematic errors more cleanly, was more consistent with the large variability of precipitation measurements, and produced superior predictions of the error characteristics. The additive error model had several weaknesses, such as non-constant variance resulting from systematic errors leaking into random errors, and the lack of prediction capability. Therefore, the multiplicative error model is a better choice.
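    A synthetic contrast of the two error models: additive residuals grow with rain rate (heteroscedastic), while log-space (multiplicative) residuals stay roughly constant and split cleanly into a systematic part (mean log ratio) and a random part (spread of the log ratio). The distributions and error parameters are illustrative assumptions.

    ```python
    # Additive vs multiplicative error models for daily precipitation (synthetic data).
    import numpy as np

    rng = np.random.default_rng(8)
    truth = rng.gamma(shape=0.8, scale=8.0, size=5000)          # "true" daily rain (mm)
    sat = truth * np.exp(rng.normal(-0.1, 0.4, truth.size))     # estimate with multiplicative error

    wet = truth > 0.1
    add_resid = sat[wet] - truth[wet]                  # additive-model residuals
    mult_resid = np.log(sat[wet] / truth[wet])         # multiplicative (log) residuals

    lo, hi = truth[wet] < 5, truth[wet] > 20
    print("additive residual SD, light vs heavy rain : "
          f"{add_resid[lo].std():.2f} vs {add_resid[hi].std():.2f} mm (heteroscedastic)")
    print("log residual SD, light vs heavy rain      : "
          f"{mult_resid[lo].std():.2f} vs {mult_resid[hi].std():.2f} (roughly constant)")
    print(f"systematic error (mean log ratio) {mult_resid.mean():.2f}, "
          f"random error (SD of log ratio) {mult_resid.std():.2f}")
    ```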

  13. 32 CFR 1653.3 - Review by the National Appeal Board.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... review the file to insure that no procedural errors have occurred during the history of the current claim. Files containing procedural errors will be returned to the board where the errors occurred for any additional processing necessary to correct such errors. (c) Files containing procedural errors that were not...

  14. Method for controlling a vehicle with two or more independently steered wheels

    DOEpatents

    Reister, David B.; Unseren, Michael A.

    1995-01-01

    A method (10) for independently controlling each steerable drive wheel (W_i) of a vehicle with two or more such wheels (W_i). An instantaneous center of rotation target (ICR) and a tangential velocity target (v^G) are inputs to a wheel target system (30) which sends the velocity target (v_i^G) and a steering angle target (θ_i^G) for each drive wheel (W_i) to a pseudovelocity target system (32). The pseudovelocity target system (32) determines a pseudovelocity target (v_P^G) which is compared to a current pseudovelocity (v_P^m) to determine a pseudovelocity error (ε). The steering angle targets (θ^G) and the steering angles (θ^m) are inputs to a steering angle control system (34) which outputs to the steering angle encoders (36), which measure the steering angles (θ^m). The pseudovelocity error (ε), the rate of change of the pseudovelocity error, and the wheel slip between each pair of drive wheels (W_i) are used to calculate intermediate control variables which, along with the steering angle targets (θ^G), are used to calculate the torque to be applied at each wheel (W_i). The current distance traveled for each wheel (W_i) is then calculated. The current wheel velocities (v^m) and steering angle targets (θ^G) are used to calculate the cumulative and instantaneous wheel slip and the current pseudovelocity (v_P^m).

  15. Proceedings of the Ninth Annual Software Engineering Workshop

    NASA Technical Reports Server (NTRS)

    1984-01-01

    Experiences in measurement, utilization, and evaluation of software methodologies, models, and tools are discussed. NASA's involvement in ever larger and more complex systems, like the space station project, provides a motive for the support of software engineering research and the exchange of ideas in such forums. The topics of current SEL research are software error studies, experiments with software development, and software tools.

  16. The reliability of knee joint position testing using electrogoniometry

    PubMed Central

    Piriyaprasarth, Pagamas; Morris, Meg E; Winter, Adele; Bialocerkowski, Andrea E

    2008-01-01

    Background The current investigation examined the inter- and intra-tester reliability of knee joint angle measurements using a flexible Penny and Giles Biometric® electrogoniometer. The clinical utility of electrogoniometry was also addressed. Methods The first study examined the inter- and intra-tester reliability of measurements of knee joint angles in supine, sitting and standing in 35 healthy adults. The second study evaluated inter-tester and intra-tester reliability of knee joint angle measurements in standing and after walking 10 metres in 20 healthy adults, using an enhanced measurement protocol with a more detailed electrogoniometer attachment procedure. Both inter-tester reliability studies involved two testers. Results In the first study, inter-tester reliability (ICC[2,10]) ranged from 0.58–0.71 in supine, 0.68–0.79 in sitting and 0.57–0.80 in standing. The standard error of measurement between testers was less than 3.55° and the limits of agreement ranged from -12.51° to 12.21°. Reliability coefficients for intra-tester reliability (ICC[3,10]) ranged from 0.75–0.76 in supine, 0.86–0.87 in sitting and 0.87–0.88 in standing. The standard error of measurement for repeated measures by the same tester was less than 1.7° and the limits of agreement ranged from -8.13° to 7.90°. The second study showed that using a more detailed electrogoniometer attachment protocol reduced the error of measurement between testers to 0.5°. Conclusion Using a standardised protocol, reliable measures of knee joint angles can be gained in standing, supine and sitting by using a flexible goniometer. PMID:18211714

  17. The US-DOE ARM/ASR Effort in Quantifying Uncertainty in Ground-Based Cloud Property Retrievals (Invited)

    NASA Astrophysics Data System (ADS)

    Xie, S.; Protat, A.; Zhao, C.

    2013-12-01

    One primary goal of the US Department of Energy (DOE) Atmospheric Radiation Measurement (ARM) program is to obtain and retrieve cloud microphysical properties from detailed cloud observations using ground-based active and passive remote sensors. However, there is large uncertainty in the retrieved cloud property products. Studies have shown that the uncertainty could arise from instrument limitations, measurement errors, sampling errors, deficiencies in retrieval algorithm assumptions, as well as inconsistent input data and constraints used by different algorithms. To quantify the uncertainty in cloud retrievals, a scientific focus group, Quantification of Uncertainties In Cloud Retrievals (QUICR), was recently created by the DOE Atmospheric System Research (ASR) program. This talk will provide an overview of the recent research activities conducted within QUICR and discuss its current collaborations with the European cloud retrieval community and future plans. The goal of QUICR is to develop a methodology for characterizing and quantifying uncertainties in current and future ARM cloud retrievals. The work at LLNL was performed under the auspices of the U. S. Department of Energy (DOE), Office of Science, Office of Biological and Environmental Research by Lawrence Livermore National Laboratory under contract No. DE-AC52-07NA27344. LLNL-ABS-641258.

  18. Relationship between Recent Flight Experience and Pilot Error General Aviation Accidents

    NASA Astrophysics Data System (ADS)

    Nilsson, Sarah J.

    Aviation insurance agents and fixed-base operation (FBO) owners use recent flight experience, as implied by the 90-day rule, to measure pilot proficiency in physical airplane skills, and to assess the likelihood of a pilot error accident. The generally accepted premise is that more experience in a recent timeframe predicts less of a propensity for an accident, all other factors excluded. Some of these aviation industry stakeholders measure pilot proficiency solely by using time flown within the past 90, 60, or even 30 days, not accounting for extensive research showing aeronautical decision-making and situational awareness training decrease the likelihood of a pilot error accident. In an effort to reduce the pilot error accident rate, the Federal Aviation Administration (FAA) has seen the need to shift pilot training emphasis from proficiency in physical airplane skills to aeronautical decision-making and situational awareness skills. However, current pilot training standards still focus more on the former than on the latter. The relationship between pilot error accidents and recent flight experience implied by the FAA's 90-day rule has not been rigorously assessed using empirical data. The intent of this research was to relate recent flight experience, in terms of time flown in the past 90 days, to pilot error accidents. A quantitative ex post facto approach, focusing on private pilots of single-engine general aviation (GA) fixed-wing aircraft, was used to analyze National Transportation Safety Board (NTSB) accident investigation archival data. The data were analyzed using t-tests and binary logistic regression. T-tests between the mean number of hours of recent flight experience of tricycle gear pilots involved in pilot error accidents (TPE) and non-pilot error accidents (TNPE), t(202) = -.200, p = .842, and conventional gear pilots involved in pilot error accidents (CPE) and non-pilot error accidents (CNPE), t(111) = -.271, p = .787, indicate no statistically significant difference between groups. Binary logistic regression indicates that recent flight experience does not reliably distinguish between pilot error and non-pilot error accidents for TPE/TNPE, chi2 = 0.040 (df = 1, p = .841), or CPE/CNPE, chi2 = 0.074 (df = 1, p = .786). Future research could focus on different pilot populations, and to broaden the scope, analyze several years of data.

  19. An efficient calibration method for SQUID measurement system using three orthogonal Helmholtz coils

    NASA Astrophysics Data System (ADS)

    Hua, Li; Shu-Lin, Zhang; Chao-Xiang, Zhang; Xiang-Yan, Kong; Xiao-Ming, Xie

    2016-06-01

    For a practical superconducting quantum interference device (SQUID) based measurement system, the Tesla/volt coefficient must be accurately calibrated. In this paper, we propose a highly efficient method of calibrating a SQUID magnetometer system using three orthogonal Helmholtz coils. The Tesla/volt coefficient is regarded as the magnitude of a vector pointing in the normal direction of the pickup coil. By applying magnetic fields through a three-dimensional Helmholtz coil, the Tesla/volt coefficient can be directly calculated from magnetometer responses to the three orthogonally applied magnetic fields. Calibration with an alternating current (AC) field is normally used for better signal-to-noise ratio in noisy urban environments, and the results are compared with the direct current (DC) calibration to avoid possible effects due to eddy currents. In our experiment, a calibration relative error of about 6.89 × 10⁻⁴ is obtained, and the error is mainly caused by the non-orthogonality of the three axes of the Helmholtz coils. The method does not need precise alignment of the magnetometer inside the Helmholtz coil. It can be used effectively and accurately for multichannel magnetometer system calibration. Project supported by the “Strategic Priority Research Program (B)” of the Chinese Academy of Sciences (Grant No. XDB04020200) and the Shanghai Municipal Science and Technology Commission Project, China (Grant No. 15DZ1940902).

  20. Adjustment of Measurements with Multiplicative Errors: Error Analysis, Estimates of the Variance of Unit Weight, and Effect on Volume Estimation from LiDAR-Type Digital Elevation Models

    PubMed Central

    Shi, Yun; Xu, Peiliang; Peng, Junhuan; Shi, Chuang; Liu, Jingnan

    2014-01-01

    Modern observation technology has verified that measurement errors can be proportional to the true values of measurements, as in GPS, VLBI baselines and LiDAR. Observational models of this type are called multiplicative error models. This paper extends the work of Xu and Shimada, published in 2000, on multiplicative error models to the analytical error analysis of quantities of practical interest and to estimates of the variance of unit weight. We analytically derive the variance-covariance matrices of the three least squares (LS) adjustments, the adjusted measurements and the corrections of measurements in multiplicative error models. For quality evaluation, we construct five estimators for the variance of unit weight in association with the three LS adjustment methods. Although LiDAR measurements are contaminated with multiplicative random errors, LiDAR-based digital elevation models (DEM) have been constructed as if the errors were additive. We simulate a model landslide, which is assumed to be surveyed with LiDAR, and investigate the effect of LiDAR-type multiplicative error measurements on DEM construction and its effect on the estimate of landslide mass volume from the constructed DEM.

  1. Reduction of Non-uniform Beam Filling Effects by Vertical Decorrelation: Theory and Simulations

    NASA Technical Reports Server (NTRS)

    Short, David; Nakagawa, Katsuhiro; Iguchi, Toshio

    2013-01-01

    Algorithms for estimating precipitation rates from spaceborne radar observations of apparent radar reflectivity depend on attenuation correction procedures. The algorithm suite for the Ku-band precipitation radar aboard the Tropical Rainfall Measuring Mission satellite is one such example. The well-known problem of nonuniform beam filling is a source of error in the estimates, especially in regions where intense deep convection occurs. The error is caused by unresolved horizontal variability in precipitation characteristics such as specific attenuation, rain rate, and effective reflectivity factor. This paper proposes the use of vertical decorrelation to correct the nonuniform beam filling error in estimates developed under the assumption of perfect vertical correlation. Empirical tests conducted using ground-based radar observations in the current simulation study show that decorrelation effects are evident in tilted convective cells. However, the problem of obtaining reasonable estimates of a governing parameter from the satellite data remains unresolved.

  2. Density-matrix simulation of small surface codes under current and projected experimental noise

    NASA Astrophysics Data System (ADS)

    O'Brien, T. E.; Tarasinski, B.; DiCarlo, L.

    2017-09-01

    We present a density-matrix simulation of the quantum memory and computing performance of the distance-3 logical qubit Surface-17, following a recently proposed quantum circuit and using experimental error parameters for transmon qubits in a planar circuit QED architecture. We use this simulation to optimize components of the QEC scheme (e.g., trading off stabilizer measurement infidelity for reduced cycle time) and to investigate the benefits of feedback harnessing the fundamental asymmetry of relaxation-dominated error in the constituent transmons. A lower-order approximate calculation extends these predictions to the distance-5 Surface-49. These results clearly indicate error rates below the fault-tolerance threshold of the surface code, and the potential for Surface-17 to perform beyond the break-even point of quantum memory. However, Surface-49 is required to surpass the break-even point of computation at state-of-the-art qubit relaxation times and readout speeds.

  3. Application of Adaptive Neuro-Fuzzy Inference System for Prediction of Neutron Yield of IR-IECF Facility in High Voltages

    NASA Astrophysics Data System (ADS)

    Adineh-Vand, A.; Torabi, M.; Roshani, G. H.; Taghipour, M.; Feghhi, S. A. H.; Rezaei, M.; Sadati, S. M.

    2013-09-01

    This paper presents a soft-computing-based artificial intelligence technique, the adaptive neuro-fuzzy inference system (ANFIS), to predict the neutron production rate (NPR) of the IR-IECF device over wide discharge current and voltage ranges. A hybrid learning algorithm consisting of back-propagation and least-squares estimation is used for training the ANFIS model. The performance of the proposed ANFIS model is tested against the experimental data using four performance measures: correlation coefficient, mean absolute error, mean relative error percentage (MRE%) and root mean square error. The obtained results show that the proposed ANFIS model has achieved good agreement with the experimental results. In comparison with the experimental data, the proposed ANFIS model has an MRE% of less than 1.53 % and 2.85 % for the training and testing data, respectively. Therefore, this model can be used as an efficient tool to predict the NPR of the IR-IECF device.

  4. Report on Automated Semantic Analysis of Scientific and Engineering Codes

    NASA Technical Reports Server (NTRS)

    Stewart, Mark E. M.; Follen, Greg (Technical Monitor)

    2001-01-01

    The loss of the Mars Climate Orbiter due to a software error reveals what insiders know: software development is difficult and risky because, in part, current practices do not readily handle the complex details of software. Yet, for scientific software development, the MCO mishap represents the tip of the iceberg; few errors are so public, and many errors are avoided with a combination of expertise, care, and testing during development and modification. Further, this effort consumes valuable time and resources even when hardware costs and execution time continually decrease. Software development could use better tools! This lack of tools has motivated the semantic analysis work explained in this report. However, this work has a distinguishing emphasis; the tool focuses on automated recognition of the fundamental mathematical and physical meaning of scientific code. Further, its comprehension is measured by quantitatively evaluating overall recognition with practical codes. This emphasis is necessary if software errors, like the MCO error, are to be quickly and inexpensively avoided in the future. This report evaluates the progress made with this problem. It presents recommendations, and describes the approach, the tool's status, the challenges, related research, and a development strategy.

  5. Tilt error in cryospheric surface radiation measurements at high latitudes: a model study

    NASA Astrophysics Data System (ADS)

    Bogren, Wiley Steven; Faulkner Burkhart, John; Kylling, Arve

    2016-03-01

    We have evaluated the magnitude and makeup of error in cryospheric radiation observations due to small sensor misalignment in in situ measurements of solar irradiance. This error is examined through simulation of diffuse and direct irradiance arriving at a detector with a cosine-response fore optic. Emphasis is placed on assessing total error over the solar shortwave spectrum from 250 to 4500 nm, as well as supporting investigation over other relevant shortwave spectral ranges. The total measurement error introduced by sensor tilt is dominated by the direct component. For a typical high-latitude albedo measurement with a solar zenith angle of 60°, a sensor tilted by 1, 3, and 5° can introduce up to 2.7 %, 8.1 %, and 13.5 % error, respectively, into the measured irradiance and similar errors in the derived albedo. Depending on the daily range of solar azimuth and zenith angles, significant measurement error can persist also in integrated daily irradiance and albedo. Simulations including a cloud layer demonstrate decreasing tilt error with increasing cloud optical depth.
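
    The dominant direct-beam contribution to the tilt error described above can be approximated with a simple geometric sketch. The worst-case assumption that the tilt lies in the solar azimuth plane is mine, and the resulting numbers are somewhat larger than the paper's totals (which include a diffuse component from full radiative-transfer simulations).

        import numpy as np

        def direct_tilt_error(sza_deg, tilt_deg):
            """Relative error in measured direct irradiance when the sensor normal is
            tilted by tilt_deg toward the sun (worst case), at solar zenith angle sza_deg.
            A level sensor sees cos(sza); the tilted sensor sees cos(sza - tilt)."""
            sza, tilt = np.radians(sza_deg), np.radians(tilt_deg)
            return np.cos(sza - tilt) / np.cos(sza) - 1.0

        for tilt in (1.0, 3.0, 5.0):
            err = 100.0 * direct_tilt_error(60.0, tilt)
            print(f"SZA 60 deg, tilt {tilt:.0f} deg: ~{err:.1f} % error in direct irradiance")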

  6. Description and Sensitivity Analysis of the SOLSE/LORE-2 and SAGE III Limb Scattering Ozone Retrieval Algorithms

    NASA Technical Reports Server (NTRS)

    Loughman, R.; Flittner, D.; Herman, B.; Bhartia, P.; Hilsenrath, E.; McPeters, R.; Rault, D.

    2002-01-01

    The SOLSE (Shuttle Ozone Limb Sounding Experiment) and LORE (Limb Ozone Retrieval Experiment) instruments are scheduled for reflight on Space Shuttle flight STS-107 in July 2002. In addition, the SAGE III (Stratospheric Aerosol and Gas Experiment) instrument will begin to make limb scattering measurements during Spring 2002. The optimal estimation technique is used to analyze visible and ultraviolet limb scattered radiances and produce a retrieved ozone profile. The algorithm used to analyze data from the initial flight of the SOLSE/LORE instruments (on Space Shuttle flight STS-87 in November 1997) forms the basis of the current algorithms, with expansion to take advantage of the increased multispectral information provided by SOLSE/LORE-2 and SAGE III. We also present a detailed sensitivity analysis of these ozone retrieval algorithms. The primary source of ozone retrieval error is tangent height misregistration (i.e., instrument pointing error), which is relevant throughout the altitude range of interest and can produce retrieval errors on the order of 10-20 percent for a tangent height registration error of 0.5 km at the tangent point. Other significant sources of error are sensitivity to stratospheric aerosol and sensitivity to error in the a priori ozone estimate (given an assumed instrument signal-to-noise ratio of 200). These can produce errors up to 10 percent for the ozone retrieval at altitudes less than 20 km, but produce little error above that level.

  7. Measurement system and model for simultaneously measuring 6DOF geometric errors.

    PubMed

    Zhao, Yuqiong; Zhang, Bin; Feng, Qibo

    2017-09-04

    A measurement system to simultaneously measure six degree-of-freedom (6DOF) geometric errors is proposed. The measurement method is based on a combination of mono-frequency laser interferometry and laser fiber collimation. A simpler and more integrated optical configuration is designed. To compensate for the measurement errors introduced by error crosstalk, element fabrication error, laser beam drift, and nonparallelism of the two measurement beams, a unified measurement model, which can improve the measurement accuracy, is deduced and established using the ray-tracing method. A numerical simulation using the optical design software Zemax is conducted, and the results verify the correctness of the model. Several experiments are performed to demonstrate the feasibility and effectiveness of the proposed system and measurement model.

  8. Modifying Spearman's Attenuation Equation to Yield Partial Corrections for Measurement Error--With Application to Sample Size Calculations

    ERIC Educational Resources Information Center

    Nicewander, W. Alan

    2018-01-01

    Spearman's correction for attenuation (measurement error) corrects a correlation coefficient for measurement errors in either or both of two variables, and follows from the assumptions of classical test theory. Spearman's equation removes all measurement error from a correlation coefficient, which translates into "increasing the reliability of…
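
    A minimal sketch of the classical disattenuation formula that the article modifies is given below; the partial correction proposed in the article is represented here only by the standard trick of setting one reliability to 1, and the numerical values are arbitrary.

        def disattenuate(r_xy, rel_x=1.0, rel_y=1.0):
            """Spearman's correction for attenuation: the correlation between true scores
            equals the observed correlation divided by the square root of the product of
            the two reliabilities. Setting one reliability to 1.0 corrects for error in
            only one of the variables (a partial correction)."""
            return r_xy / (rel_x * rel_y) ** 0.5

        # Observed r = 0.42 with reliabilities 0.80 and 0.70:
        print(disattenuate(0.42, 0.80, 0.70))   # full correction, ~0.56
        print(disattenuate(0.42, 0.80, 1.00))   # correct x only,  ~0.47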

  9. Tilt Error in Cryospheric Surface Radiation Measurements at High Latitudes: A Model Study

    NASA Astrophysics Data System (ADS)

    Bogren, W.; Kylling, A.; Burkhart, J. F.

    2015-12-01

    We have evaluated the magnitude and makeup of error in cryospheric radiation observations due to small sensor misalignment in in situ measurements of solar irradiance. This error is examined through simulation of diffuse and direct irradiance arriving at a detector with a cosine-response foreoptic. Emphasis is placed on assessing total error over the solar shortwave spectrum from 250 nm to 4500 nm, as well as supporting investigation over other relevant shortwave spectral ranges. The total measurement error introduced by sensor tilt is dominated by the direct component. For a typical high-latitude albedo measurement with a solar zenith angle of 60°, a sensor tilted by 1, 3, and 5° can introduce up to 2.6 %, 7.7 %, and 12.8 % error, respectively, into the measured irradiance and similar errors in the derived albedo. Depending on the daily range of solar azimuth and zenith angles, significant measurement error can persist also in integrated daily irradiance and albedo.

  10. Convergence of lateral dynamic measurements in the plasma membrane of live cells from single particle tracking and STED-FCS

    NASA Astrophysics Data System (ADS)

    Lagerholm, B. Christoffer; Andrade, Débora M.; Clausen, Mathias P.; Eggeling, Christian

    2017-02-01

    Fluorescence correlation spectroscopy (FCS) in combination with the super-resolution imaging method STED (STED-FCS), and single-particle tracking (SPT) are able to directly probe the lateral dynamics of lipids and proteins in the plasma membrane of live cells at spatial scales much below the diffraction limit of conventional microscopy. However, a major disparity in interpretation of data from SPT and STED-FCS remains, namely the proposed existence of a very fast (unhindered) lateral diffusion coefficient, ≥5 µm² s⁻¹, in the plasma membrane of live cells at very short length scales (≲100 nm) and time scales (≈1–10 ms). This fast diffusion coefficient has been advocated in several high-speed SPT studies, for lipids and membrane proteins alike, but the equivalent has not been detected in STED-FCS measurements. Resolving this ambiguity is important because the assessment of membrane dynamics currently relies heavily on SPT for the determination of heterogeneous diffusion. A possible systematic error in this approach would thus have vast implications in this field. To address this, we have re-visited the analysis procedure for SPT data with an emphasis on the measurement errors and the effect that these errors have on the measurement outputs. We subsequently demonstrate that STED-FCS and SPT data, following careful consideration of the experimental errors of the SPT data, converge to a common interpretation which for the case of a diffusing phospholipid analogue in the plasma membrane of live mouse embryo fibroblasts results in an unhindered, intra-compartment, diffusion coefficient of ≈0.7–1.0 µm² s⁻¹, and a compartment size of about 100–150 nm.
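
    One well-known route by which the SPT measurement errors discussed above bias diffusion estimates is the static localization-error offset in the mean squared displacement. The sketch below is a pure illustration of that mechanism (it is not the authors' analysis pipeline); the diffusion coefficient, localization error and lag times are assumed values.

        import numpy as np

        D_true = 0.8          # um^2/s, true diffusion coefficient (assumed)
        sigma = 0.03          # um, static localization error per coordinate (assumed)
        lags = np.array([0.001, 0.002, 0.005, 0.01])   # s, short SPT time lags

        # 2D Brownian motion with localization error: MSD(t) = 4*D*t + 4*sigma^2
        msd = 4.0 * D_true * lags + 4.0 * sigma**2

        # Naive estimate: force the fit through the origin (ignore the error offset).
        D_naive = msd / (4.0 * lags)

        # Error-aware estimate: fit slope and offset jointly.
        slope, offset = np.polyfit(lags, msd, 1)
        print("naive D at each lag :", D_naive.round(2))     # inflated at the shortest lags
        print("joint-fit D, sigma  :", slope / 4.0, np.sqrt(offset / 4.0))

    At the shortest assumed lag the naive estimate more than doubles the true diffusion coefficient, which is the qualitative behaviour behind spuriously fast short-timescale diffusion estimates.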

  11. Convergence of lateral dynamic measurements in the plasma membrane of live cells from single particle tracking and STED-FCS

    PubMed Central

    Lagerholm, B Christoffer; Andrade, Débora M; Clausen, Mathias P; Eggeling, Christian

    2017-01-01

    Fluorescence correlation spectroscopy (FCS) in combination with the super-resolution imaging method STED (STED-FCS), and single-particle tracking (SPT) are able to directly probe the lateral dynamics of lipids and proteins in the plasma membrane of live cells at spatial scales much below the diffraction limit of conventional microscopy. However, a major disparity in interpretation of data from SPT and STED-FCS remains, namely the proposed existence of a very fast (unhindered) lateral diffusion coefficient, ≥5 µm² s⁻¹, in the plasma membrane of live cells at very short length scales (≲100 nm) and time scales (≈1–10 ms). This fast diffusion coefficient has been advocated in several high-speed SPT studies, for lipids and membrane proteins alike, but the equivalent has not been detected in STED-FCS measurements. Resolving this ambiguity is important because the assessment of membrane dynamics currently relies heavily on SPT for the determination of heterogeneous diffusion. A possible systematic error in this approach would thus have vast implications in this field. To address this, we have re-visited the analysis procedure for SPT data with an emphasis on the measurement errors and the effect that these errors have on the measurement outputs. We subsequently demonstrate that STED-FCS and SPT data, following careful consideration of the experimental errors of the SPT data, converge to a common interpretation which for the case of a diffusing phospholipid analogue in the plasma membrane of live mouse embryo fibroblasts results in an unhindered, intra-compartment, diffusion coefficient of ≈0.7–1.0 µm² s⁻¹, and a compartment size of about 100–150 nm. PMID:28458397

  12. Errors in causal inference: an organizational schema for systematic error and random error.

    PubMed

    Suzuki, Etsuji; Tsuda, Toshihide; Mitsuhashi, Toshiharu; Mansournia, Mohammad Ali; Yamamoto, Eiji

    2016-11-01

    To provide an organizational schema for systematic error and random error in estimating causal measures, aimed at clarifying the concept of errors from the perspective of causal inference. We propose to divide systematic error into structural error and analytic error. With regard to random error, our schema shows its four major sources: nondeterministic counterfactuals, sampling variability, a mechanism that generates exposure events, and measurement variability. Structural error is defined from the perspective of counterfactual reasoning and divided into nonexchangeability bias (which comprises confounding bias and selection bias) and measurement bias. Directed acyclic graphs are useful to illustrate this kind of error. Nonexchangeability bias implies a lack of "exchangeability" between the selected exposed and unexposed groups. A lack of exchangeability is not a primary concern of measurement bias, justifying its separation from confounding bias and selection bias. Many forms of analytic error result from the small-sample properties of the estimator used and vanish asymptotically. Analytic error also results from wrong (misspecified) statistical models and inappropriate statistical methods. Our organizational schema is helpful for understanding the relationship between systematic error and random error from a previously less investigated aspect, enabling us to better understand the relationship between accuracy, validity, and precision. Copyright © 2016 Elsevier Inc. All rights reserved.

  13. Psychometrics and the neuroscience of individual differences: Internal consistency limits between-subjects effects.

    PubMed

    Hajcak, Greg; Meyer, Alexandria; Kotov, Roman

    2017-08-01

    In the clinical neuroscience literature, between-subjects differences in neural activity are presumed to reflect reliable measures, even though the psychometric properties of neural measures are almost never reported. The current article focuses on the critical importance of assessing and reporting internal consistency reliability: the homogeneity of "items" that comprise a neural "score." We demonstrate how variability in the internal consistency of neural measures limits between-subjects (i.e., individual differences) effects. To this end, we utilize error-related brain activity (i.e., the error-related negativity or ERN) in both healthy and generalized anxiety disorder (GAD) participants to demonstrate options for psychometric analyses of neural measures; we examine between-groups differences in internal consistency, between-groups effect sizes, and between-groups discriminability (i.e., ROC analyses), all as a function of increasing items (i.e., number of trials). Overall, internal consistency should be used to inform experimental design and the choice of neural measures in individual differences research. The internal consistency of neural measures is necessary for interpreting results and guiding progress in clinical neuroscience, and should be routinely reported in all individual differences studies. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
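
    As a sketch of why the number of trials matters for between-subjects work, the Spearman-Brown prophecy formula below projects internal consistency, and the resulting ceiling on observable between-subjects correlations, as trials are averaged into a neural score. The starting single-trial reliability is an arbitrary assumption, not a value from the article.

        def spearman_brown(rel_1, k):
            """Projected reliability when a measure is lengthened by a factor k
            (e.g., k times as many trials averaged into the neural score)."""
            return k * rel_1 / (1.0 + (k - 1.0) * rel_1)

        rel_single_trial = 0.10          # assumed reliability of a single-trial ERN score
        for n_trials in (4, 8, 16, 32, 64):
            rel = spearman_brown(rel_single_trial, n_trials)
            # A correlation with any external criterion is capped near sqrt(reliability).
            print(f"{n_trials:3d} trials: reliability ~{rel:.2f}, "
                  f"max observable r ~{rel ** 0.5:.2f}")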

  14. A manufacturing error measurement methodology for a rotary vector reducer cycloidal gear based on a gear measuring center

    NASA Astrophysics Data System (ADS)

    Li, Tianxing; Zhou, Junxiang; Deng, Xiaozhong; Li, Jubo; Xing, Chunrong; Su, Jianxin; Wang, Huiliang

    2018-07-01

    The manufacturing error of a cycloidal gear is the key factor affecting the transmission accuracy of a robot rotary vector (RV) reducer. A methodology is proposed to realize the digitized measurement and data processing of the cycloidal gear manufacturing error based on the gear measuring center, which can quickly and accurately measure and evaluate the manufacturing error of the cycloidal gear by using both whole tooth profile measurement and single tooth profile measurement. By analyzing the particularity of the cycloidal profile and its effect on the actual meshing characteristics of the RV transmission, the cycloid profile measurement strategy is planned, and the theoretical profile model and error measurement model of cycloid-pin gear transmission are established. Through digital processing technology, the theoretical trajectory of the probe and the normal vector of the measured point are calculated. By means of precision measurement principles and error compensation theory, a mathematical model for the accurate calculation and data processing of manufacturing error is constructed, and the actual manufacturing error of the cycloidal gear is obtained by an iterative optimization solution. Finally, measurement experiments on the cycloidal gear tooth profile are carried out on the gear measuring center and the HEXAGON coordinate measuring machine, respectively. The measurement results verify the correctness and validity of the measurement theory and method. This methodology will provide the basis for the accurate evaluation and the effective control of the manufacturing precision of the cycloidal gear in a robot RV reducer.

  15. Noninvasive liver iron measurements with a room-temperature susceptometer

    PubMed Central

    Avrin, W F; Kumar, S

    2011-01-01

    Magnetic susceptibility measurements on the liver can quantify iron overload accurately and noninvasively. However, established susceptometer designs, using Superconducting QUantum Interference Devices (SQUIDs) that work in liquid helium, have been too expensive for widespread use. This paper presents a less expensive liver susceptometer that works at room temperature. This system uses oscillating magnetic fields, which are produced and detected by copper coils. The coil design cancels the signal from the applied field, eliminating noise from fluctuations of the source-coil current and sensor gain. The coil unit moves toward and away from the patient at 1 Hz, cancelling drifts due to thermal expansion of the coils. Measurements on a water phantom indicated instrumental errors less than 30 μg of iron per gram of wet liver tissue, which is small compared with other errors due to the response of the patient’s body. Liver iron measurements on eight thalassemia patients yielded a correlation coefficient r=0.98 between the room-temperature susceptometer and an existing SQUID. These results indicate that the fundamental accuracy limits of the room-temperature susceptometer are similar to those of the SQUID. PMID:17395991

  16. Accounting for the measurement error of spectroscopically inferred soil carbon data for improved precision of spatial predictions.

    PubMed

    Somarathna, P D S N; Minasny, Budiman; Malone, Brendan P; Stockmann, Uta; McBratney, Alex B

    2018-08-01

    Spatial modelling of environmental data commonly considers spatial variability as the single source of uncertainty. In reality, however, the measurement errors should also be accounted for. In recent years, infrared spectroscopy has been shown to offer low-cost yet invaluable information needed for digital soil mapping at meaningful spatial scales for land management. However, spectrally inferred soil carbon data are known to be less accurate compared to laboratory analysed measurements. This study establishes a methodology to filter out the measurement error variability by incorporating the measurement error variance in the spatial covariance structure of the model. The study was carried out in the Lower Hunter Valley, New South Wales, Australia, where a combination of laboratory measured, and vis-NIR and MIR inferred topsoil and subsoil soil carbon data are available. We investigated the applicability of residual maximum likelihood (REML) and Markov Chain Monte Carlo (MCMC) simulation methods to generate parameters of the Matérn covariance function directly from the data in the presence of measurement error. The results revealed that the measurement error can be effectively filtered out through the proposed technique. When the measurement error was filtered from the data, the prediction variance almost halved, which ultimately yielded a greater certainty in spatial predictions of soil carbon. Further, the MCMC technique was successfully used to define the posterior distribution of measurement error. This is an important outcome, as the MCMC technique can be used to estimate the measurement error if it is not explicitly quantified. Although this study dealt with soil carbon data, this method is amenable for filtering the measurement error of any kind of continuous spatial environmental data. Copyright © 2018 Elsevier B.V. All rights reserved.
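
    A minimal sketch of the core idea follows: a known measurement error variance is added to the diagonal of the spatial covariance used for kriging-type prediction, so that the noise is filtered rather than interpolated. The exponential covariance (Matérn with ν = 1/2), the simple-kriging setup with the sample mean, and all parameter values and data are assumptions for illustration, not the study's REML/MCMC fits.

        import numpy as np

        def exp_cov(d, sill, range_par):
            """Exponential covariance (Matern covariance with nu = 1/2)."""
            return sill * np.exp(-d / range_par)

        rng = np.random.default_rng(1)
        xy = rng.uniform(0, 100, size=(30, 2))        # observation locations (m)
        z = rng.normal(2.0, 0.5, size=30)             # spectrally inferred soil C (%), illustrative
        sill, range_par = 0.20, 25.0                  # assumed spatial variance and range
        tau2_meas = 0.08                              # assumed spectroscopic error variance

        d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=2)
        C = exp_cov(d, sill, range_par)
        C_obs = C + tau2_meas * np.eye(len(z))        # error variance only on the observations

        # Simple-kriging weights and prediction variance at a new location x0.
        x0 = np.array([50.0, 50.0])
        c0 = exp_cov(np.linalg.norm(xy - x0, axis=1), sill, range_par)
        w = np.linalg.solve(C_obs, c0)
        pred = z.mean() + w @ (z - z.mean())
        pred_var = sill - w @ c0
        print(f"prediction {pred:.2f} %C, prediction variance {pred_var:.3f}")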

  17. Proximal antecedents and correlates of adopted error approach: a self-regulatory perspective.

    PubMed

    Van Dyck, Cathy; Van Hooft, Edwin; De Gilder, Dick; Liesveld, Lillian

    2010-01-01

    The current study aims to further investigate earlier established advantages of an error mastery approach over an error aversion approach. The two main purposes of the study relate to (1) self-regulatory traits (i.e., goal orientation and action-state orientation) that may predict which error approach (mastery or aversion) is adopted, and (2) proximal, psychological processes (i.e., self-focused attention and failure attribution) that relate to adopted error approach. In the current study participants' goal orientation and action-state orientation were assessed, after which they worked on an error-prone task. Results show that learning goal orientation related to error mastery, while state orientation related to error aversion. Under a mastery approach, error occurrence did not result in cognitive resources "wasted" on self-consciousness. Rather, attention went to internal-unstable, thus controllable, improvement oriented causes of error. Participants that had adopted an aversion approach, in contrast, experienced heightened self-consciousness and attributed failure to internal-stable or external causes. These results imply that when working on an error-prone task, people should be stimulated to take on a mastery rather than an aversion approach towards errors.

  18. Assessment of Current Estimates of Global and Regional Mean Sea Level from the TOPEX/Poseidon, Jason-1, and OSTM 17-Year Record

    NASA Technical Reports Server (NTRS)

    Beckley, Brian D.; Ray, Richard D.; Lemoine, Frank G.; Zelensky, N. P.; Holmes, S. A.; Desai, Shailen D.; Brown, Shannon; Mitchum, G. T.; Jacob, Samuel; Luthcke, Scott B.

    2010-01-01

    The science value of satellite altimeter observations has grown dramatically over time as enabling models and technologies have increased the value of data acquired on both past and present missions. With the prospect of an observational time series extending into several decades from TOPEX/Poseidon through Jason-1 and the Ocean Surface Topography Mission (OSTM), and further in time with a future set of operational altimeters, researchers are pushing the bounds of current technology and modeling capability in order to monitor the global sea level rate at an accuracy of a few tenths of a mm/yr. The measurement of mean sea-level change from satellite altimetry requires an extreme stability of the altimeter measurement system since the signal being measured is at the level of a few mm/yr. This means that the orbit and reference frame within which the altimeter measurements are situated, and the associated altimeter corrections, must be stable and accurate enough to permit a robust MSL estimate. Foremost, orbit quality and consistency are critical to satellite altimeter measurement accuracy. The orbit defines the altimeter reference frame, and orbit error directly affects the altimeter measurement. Orbit error remains a major component in the error budget of all past and present altimeter missions. For example, inconsistencies in the International Terrestrial Reference Frame (ITRF) used to produce the precision orbits at different times cause systematic inconsistencies to appear in the multi-mission timeframe between TOPEX and Jason-1, and can affect the inter-mission calibration of these data. In an effort to ensure cross-mission consistency, we have generated the full time series of orbits for TOPEX/Poseidon (TP), Jason-1, and OSTM based on recent improvements in the satellite force models, reference systems, and modeling strategies. The recent release of the entire revised Jason-1 Geophysical Data Records, and recalibration of the microwave radiometer correction, also require further re-examination of inter-mission consistency issues. Here we present an assessment of these recent improvements to the accuracy of the 17-year sea surface height time series, and evaluate the subsequent impact on global and regional mean sea level estimates.

  19. Autonomous Navigation Improvements for High-Earth Orbiters Using GPS

    NASA Technical Reports Server (NTRS)

    Long, Anne; Kelbel, David; Lee, Taesul; Garrison, James; Carpenter, J. Russell; Bauer, F. (Technical Monitor)

    2000-01-01

    The Goddard Space Flight Center is currently developing autonomous navigation systems for satellites in high-Earth orbits where acquisition of the GPS signals is severely limited. This paper discusses autonomous navigation improvements for high-Earth orbiters and assesses projected navigation performance for these satellites using Global Positioning System (GPS) Standard Positioning Service (SPS) measurements. Navigation performance is evaluated as a function of signal acquisition threshold, measurement errors, and dynamic modeling errors using realistic GPS signal strength and user antenna models. These analyses indicate that an autonomous navigation position accuracy of better than 30 meters root-mean-square (RMS) can be achieved for high-Earth orbiting satellites using a GPS receiver with a very stable oscillator. This accuracy improves to better than 15 meters RMS if the GPS receiver's signal acquisition threshold can be reduced by 5 dB-Hertz to track weaker signals.

  20. Space-Borne Laser Altimeter Geolocation Error Analysis

    NASA Astrophysics Data System (ADS)

    Wang, Y.; Fang, J.; Ai, Y.

    2018-05-01

    This paper reviews the development of space-borne laser altimetry technology over the past 40 years. Taking the ICESAT satellite as an example, a rigorous space-borne laser altimeter geolocation model is studied, and an error propagation equation is derived. The influence of the main error sources, such as the platform positioning error, attitude measurement error, pointing angle measurement error and range measurement error, on the geolocation accuracy of the laser spot is analysed through simulated experiments. The reasons for the different influences on geolocation accuracy in different directions are discussed, and, to meet the accuracy requirements of laser control points, a design index for each error source is put forward.
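
    A first-order sketch of how error sources of the kind listed above propagate into horizontal geolocation error for a near-nadir spaceborne altimeter is given below. The simplified flat-Earth geometry, the root-sum-square combination, and all numerical budgets are assumptions for illustration, not the paper's rigorous ICESat model.

        import numpy as np

        # Assumed ICESat-like geometry and 1-sigma error budgets (illustrative values).
        R = 600e3                              # range to the surface, m
        sigma_pos = 0.05                       # platform position error, m (per horizontal axis)
        sigma_att = np.radians(1.5 / 3600)     # attitude/pointing knowledge error, 1.5 arcsec
        sigma_range = 0.10                     # range measurement error, m
        off_nadir = np.radians(0.3)            # assumed small off-nadir pointing angle

        # First-order propagation: a pointing error d(theta) displaces the footprint by
        # roughly R * d(theta); position error maps one-to-one; range error enters the
        # horizontal only through the small off-nadir angle.
        horiz = np.sqrt(sigma_pos**2
                        + (R * sigma_att)**2
                        + (np.sin(off_nadir) * sigma_range)**2)
        print(f"approx. horizontal geolocation error: {horiz:.2f} m (1-sigma)")

    With these assumed budgets, the attitude/pointing term (a few metres) dominates, which is consistent with the paper's point that different error sources influence geolocation accuracy very differently.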

  1. An adaptive modeling and simulation environment for combined-cycle data reconciliation and degradation estimation

    NASA Astrophysics Data System (ADS)

    Lin, Tsungpo

    Performance engineers face a major challenge in modeling and simulation of after-market power systems due to system degradation and measurement errors. Currently, most of the power generation industry utilizes deterministic data matching to calibrate the model and cascade system degradation, which causes significant calibration uncertainty and increases the risk of providing performance guarantees. In this research work, maximum-likelihood-based simultaneous data reconciliation and model calibration (SDRMC) is used for power system modeling and simulation. By replacing the current deterministic data matching with SDRMC, one can reduce the calibration uncertainty and mitigate the error propagation to the performance simulation. A modeling and simulation environment for a complex power system with certain degradation has been developed. In this environment, multiple data sets are imported when carrying out simultaneous data reconciliation and model calibration. Calibration uncertainties are estimated through error analyses and propagated to the performance simulation using the principle of error propagation. System degradation is then quantified by comparing the performance of the calibrated model with its expected new-and-clean status. To mitigate smearing effects caused by gross errors, gross error detection (GED) is carried out in two stages. The first stage is a screening stage, in which serious gross errors are eliminated in advance. The GED techniques used in the screening stage are based on multivariate data analysis (MDA), including multivariate data visualization and principal component analysis (PCA). Subtle gross errors are treated in the second stage, in which serial bias compensation or a robust M-estimator is engaged. To achieve better efficiency in the combined scheme of least-squares-based data reconciliation and hypothesis-testing-based GED, the Levenberg-Marquardt (LM) algorithm is utilized as the optimizer. To reduce the computation time and stabilize the problem solving for a complex power system such as a combined-cycle power plant, meta-modeling using response surface equations (RSE) and system/process decomposition are incorporated into the simultaneous SDRMC scheme. The goal of this research work is to reduce the calibration uncertainties and, thus, the risks of providing performance guarantees that arise from uncertainties in performance simulation.

  2. GRACE star camera noise

    NASA Astrophysics Data System (ADS)

    Harvey, Nate

    2016-08-01

    Extending results from previous work by Bandikova et al. (2012) and Inacio et al. (2015), this paper analyzes Gravity Recovery and Climate Experiment (GRACE) star camera attitude measurement noise by processing inter-camera quaternions from 2003 to 2015. We describe a correction to star camera data, which will eliminate a several-arcsec twice-per-rev error with daily modulation, currently visible in the auto-covariance function of the inter-camera quaternion, from future GRACE Level-1B product releases. We also present evidence supporting the argument that thermal conditions/settings affect long-term inter-camera attitude biases by at least tens-of-arcsecs, and that several-to-tens-of-arcsecs per-rev star camera errors depend largely on field-of-view.

  3. Pseudorange error analysis for precise indoor positioning system

    NASA Astrophysics Data System (ADS)

    Pola, Marek; Bezoušek, Pavel

    2017-05-01

    A system for indoor localization of a transmitter, intended for fire fighters or members of rescue corps, is currently under development. In this system, the position of a transmitter of an ultra-wideband orthogonal frequency-division multiplexing signal is determined by the time-difference-of-arrival method. The position measurement accuracy depends strongly on the accuracy of the direct-path signal time-of-arrival estimation, which is degraded by severe multipath in complicated environments such as buildings. The aim of this article is to assess errors in the determination of the direct-path signal time of arrival caused by multipath signal propagation and noise. Two methods of direct-path signal time-of-arrival estimation are compared here: the cross-correlation method and the spectral estimation method.
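
    The cross-correlation time-of-arrival estimate compared in the paper can be sketched as below: the lag of the correlation peak between the received signal and the known transmit waveform is taken as the arrival time, which is exactly where a strong later echo can bias the result. The waveform, the two-path channel, the noise level and the sample rate are all invented for illustration.

        import numpy as np

        rng = np.random.default_rng(2)
        fs = 500e6                                  # sample rate, Hz (assumed)
        n = 4096
        tx = rng.standard_normal(n)                 # stand-in for the known UWB-OFDM waveform

        # Two-path channel: direct path at 120 samples, a stronger echo 14 samples later.
        true_delay = 120
        rx = np.zeros(n + 200)
        rx[true_delay:true_delay + n] += 1.0 * tx
        rx[true_delay + 14:true_delay + 14 + n] += 1.3 * tx
        rx += 0.2 * rng.standard_normal(rx.size)    # receiver noise

        # Cross-correlation TOA estimate: lag of the correlation maximum.
        corr = np.correlate(rx, tx, mode="valid")
        est_delay = int(np.argmax(corr))
        print("true direct-path delay :", true_delay, "samples")
        print("cross-correlation TOA  :", est_delay, "samples",
              f"(bias {(est_delay - true_delay) / fs * 1e9:.1f} ns)")

    In this assumed scenario the correlation peak locks onto the stronger echo, illustrating the multipath-induced direct-path timing error that the article quantifies.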

  4. The Perception of Error in Production Plants of a Chemical Organisation

    ERIC Educational Resources Information Center

    Seifried, Jurgen; Hopfer, Eva

    2013-01-01

    There is considerable current interest in error-friendly corporate culture, one particular research question being how and under what conditions errors are learnt from in the workplace. This paper starts from the assumption that errors are inevitable and considers key factors which affect learning from errors in high responsibility organisations,…

  5. Statistical analysis of modeling error in structural dynamic systems

    NASA Technical Reports Server (NTRS)

    Hasselman, T. K.; Chrostowski, J. D.

    1990-01-01

    The paper presents a generic statistical model of the (total) modeling error for conventional space structures in their launch configuration. Modeling error is defined as the difference between analytical prediction and experimental measurement. It is represented by the differences between predicted and measured real eigenvalues and eigenvectors. Comparisons are made between pre-test and post-test models. Total modeling error is then subdivided into measurement error, experimental error and 'pure' modeling error, and comparisons made between measurement error and total modeling error. The generic statistical model presented in this paper is based on the first four global (primary structure) modes of four different structures belonging to the generic category of Conventional Space Structures (specifically excluding large truss-type space structures). As such, it may be used to evaluate the uncertainty of predicted mode shapes and frequencies, sinusoidal response, or the transient response of other structures belonging to the same generic category.

  6. Modeling micro-droplet formation in near-field electrohydrodynamic jet printing

    NASA Astrophysics Data System (ADS)

    Popell, George Colin

    Near-field electrohydrodynamic jet (E-jet) printing has recently gained significant interest within the manufacturing research community because of its ability to produce micro/sub-micron-scale droplets using a wide variety of inks and substrates. However, the process currently operates in open loop and as a result suffers from unpredictable printing quality. The use of physics-based, control-oriented process models is expected to enable closed-loop control of this printing technique. The objective of this research is to perform a fundamental study of the substrate-side droplet shape evolution in near-field E-jet printing and to develop a physics-based model of the same that links input parameters such as voltage magnitude and ink properties to the height and diameter of the printed droplet. In order to achieve this objective, a synchronized high-speed imaging and substrate-side current-detection system was implemented to enable a correlation between the droplet shape parameters and the measured current signal. The experimental data reveal characteristic process signatures and droplet spreading regimes. The results of these studies are then used as the basis for a model that predicts the droplet diameter and height using the measured current signal as the input. A unique scaling factor based on the measured current signal is used in this model instead of relying on empirical scaling laws found in literature. For each of the three inks tested in this study, the average absolute error in the model predictions is under 4.6% for diameter predictions and under 10.6% for height predictions of the steady-state droplet. While printing under non-conducive ambient conditions of low humidity and high temperatures, the use of the environmental correction factor in the model is seen to result in average absolute errors of 10.35% and 12.5% for diameter and height predictions.

  7. Incorporating measurement error in n = 1 psychological autoregressive modeling

    PubMed Central

    Schuurman, Noémi K.; Houtveen, Jan H.; Hamaker, Ellen L.

    2015-01-01

    Measurement error is omnipresent in psychological data. However, the vast majority of applications of autoregressive time series analyses in psychology do not take measurement error into account. Disregarding measurement error when it is present in the data results in a bias of the autoregressive parameters. We discuss two models that take measurement error into account: An autoregressive model with a white noise term (AR+WN), and an autoregressive moving average (ARMA) model. In a simulation study we compare the parameter recovery performance of these models, and compare this performance for both a Bayesian and frequentist approach. We find that overall, the AR+WN model performs better. Furthermore, we find that for realistic (i.e., small) sample sizes, psychological research would benefit from a Bayesian approach in fitting these models. Finally, we illustrate the effect of disregarding measurement error in an AR(1) model by means of an empirical application on mood data in women. We find that, depending on the person, approximately 30–50% of the total variance was due to measurement error, and that disregarding this measurement error results in a substantial underestimation of the autoregressive parameters. PMID:26283988
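
    A minimal simulation of the attenuation described above follows: an AR(1) process observed with white measurement noise, for which the naive lag-1 autocorrelation of the observed series underestimates the true autoregressive parameter. The parameter values and the simple moment-based attenuation factor are illustrative, not the Bayesian AR+WN or ARMA fits used in the paper.

        import numpy as np

        rng = np.random.default_rng(3)
        phi_true, n = 0.6, 20000
        innov_sd, meas_sd = 1.0, 1.0          # innovation and measurement error SDs (assumed)

        # Latent AR(1) process plus white measurement noise.
        x = np.zeros(n)
        for t in range(1, n):
            x[t] = phi_true * x[t - 1] + rng.normal(0, innov_sd)
        y = x + rng.normal(0, meas_sd, size=n)

        def lag1_autocorr(s):
            s = s - s.mean()
            return (s[1:] @ s[:-1]) / (s @ s)

        phi_naive = lag1_autocorr(y)
        # Moment-based view: the observed autocorrelation is attenuated by the
        # reliability (true-score variance over total variance).
        var_x = innov_sd**2 / (1 - phi_true**2)
        reliability = var_x / (var_x + meas_sd**2)
        print(f"true phi {phi_true}, naive estimate {phi_naive:.2f}, "
              f"expected attenuation factor {reliability:.2f}")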

  8. Correcting AUC for Measurement Error.

    PubMed

    Rosner, Bernard; Tworoger, Shelley; Qiu, Weiliang

    2015-12-01

    Diagnostic biomarkers are used frequently in epidemiologic and clinical work. The ability of a diagnostic biomarker to discriminate between subjects who develop disease (cases) and subjects who do not (controls) is often measured by the area under the receiver operating characteristic curve (AUC). The diagnostic biomarkers are usually measured with error. Ignoring measurement error can cause biased estimation of AUC, which results in misleading interpretation of the efficacy of a diagnostic biomarker. Several methods have been proposed to correct AUC for measurement error, most of which required the normality assumption for the distributions of diagnostic biomarkers. In this article, we propose a new method to correct AUC for measurement error and derive approximate confidence limits for the corrected AUC. The proposed method does not require the normality assumption. Both real data analyses and simulation studies show good performance of the proposed measurement error correction method.
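
    Under the binormal setting that the earlier, normality-based corrections assumed (used here only to illustrate the attenuation phenomenon, not the authors' distribution-free method), the AUC and its error-corrected counterpart have closed forms; the parameter values below are arbitrary.

        from math import sqrt
        from scipy.stats import norm

        # Assumed binormal biomarker with equal variances in cases and controls.
        delta, sigma_true, sigma_err = 1.0, 1.0, 0.6   # mean difference, true SD, error SD

        auc_true = norm.cdf(delta / (sqrt(2) * sigma_true))
        auc_obs = norm.cdf(delta / sqrt(2 * (sigma_true**2 + sigma_err**2)))

        # Correct the observed AUC given the reliability of the biomarker measurement.
        reliability = sigma_true**2 / (sigma_true**2 + sigma_err**2)
        auc_corrected = norm.cdf(norm.ppf(auc_obs) / sqrt(reliability))

        print(f"true AUC {auc_true:.3f}, observed (error-prone) AUC {auc_obs:.3f}, "
              f"corrected AUC {auc_corrected:.3f}")

    The corrected value recovers the true AUC exactly in this idealized setting, which is the bias the abstract warns about when measurement error is ignored.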

  9. Analysis of measured data of human body based on error correcting frequency

    NASA Astrophysics Data System (ADS)

    Jin, Aiyan; Peipei, Gao; Shang, Xiaomei

    2014-04-01

    Anthropometry is the measurement of all parts of the human body surface, and the measured data are the basis for analysis and study of the human body, for the establishment and modification of garment sizes, and for the formulation and implementation of online clothing stores. In this paper, several groups of measured data are obtained, and the data error is analyzed by examining the error frequency and applying the analysis of variance method from mathematical statistics. The paper also addresses the determination of the accuracy of the measured data and the difficulty of measuring particular parts of the human body, further study of the causes of data errors, and a summary of the key points for minimizing errors where possible. This paper analyses the measured data based on error frequency and, in this way, provides reference material to support the development of the garment industry.

  10. Extending the Measurement Range of an Optical Surface Profiler.

    NASA Astrophysics Data System (ADS)

    Cochran, Eugene Rowland, III

    This dissertation investigates a method for extending the measurement range of an optical surface profiling instrument. The instrument examined in these experiments is a computer -controlled phase-modulated interference microscope. Because of its ability to measure surfaces with a high degree of vertical resolution as well as excellent lateral resolution, this instrument is one of the most favorable candidates for determining the microtopography of optical surfaces. However, the data acquired by the instrument are restricted to a finite lateral and vertical range. To overcome this restriction, the feasibility of a new testing technique is explored. By overlapping a series of collinear profiles the limited field of view of this instrument can be increased and profiles that contain longer surface wavelengths can be examined. This dissertation also presents a method to augment both the vertical and horizontal dynamic range of the surface profiler by combining multiple subapertures and two-wavelength techniques. The theory, algorithms, error sources, and limitations encountered when concatenating a number of profiles are presented. In particular, the effects of accumulated piston and tilt errors on a measurement are explored. Some practical considerations for implementation and integration into an existing system are presented. Experimental findings and results of Monte Carlo simulations are also studied to explain the effects of random noise, lateral position errors, and defocus across the CCD array on measurement results. These results indicate the extent to which the field of view of the profiler may be augmented. A review of current methods of measuring surface topography is included, to provide for a more coherent text, along with a summary of pertinent measurement parameters for surface characterization. This work concludes with recommendations for future work that would make subaperture -testing techniques more reliable for measuring the microsurface structure of a material over an extended region.

  11. Bayesian adjustment for measurement error in continuous exposures in an individually matched case-control study.

    PubMed

    Espino-Hernandez, Gabriela; Gustafson, Paul; Burstyn, Igor

    2011-05-14

    In epidemiological studies, explanatory variables are frequently subject to measurement error. The aim of this paper is to develop a Bayesian method to correct for measurement error in multiple continuous exposures in individually matched case-control studies. This is a topic that has not been widely investigated. The new method is illustrated using data from an individually matched case-control study of the association between thyroid hormone levels during pregnancy and exposure to perfluorinated acids. The objective of the motivating study was to examine the risk of maternal hypothyroxinemia due to exposure to three perfluorinated acids measured on a continuous scale. Results from the proposed method are compared with those obtained from a naive analysis. Using a Bayesian approach, the developed method considers a classical measurement error model for the exposures, as well as the conditional logistic regression likelihood as the disease model, together with a random-effect exposure model. Proper and diffuse prior distributions are assigned, and results from a quality control experiment are used to estimate the perfluorinated acids' measurement error variability. As a result, posterior distributions and 95% credible intervals of the odds ratios are computed. A sensitivity analysis of the method's performance in this particular application with different measurement error variability was performed. The proposed Bayesian method to correct for measurement error is feasible and can be implemented using statistical software. For the study on perfluorinated acids, a comparison of the inferences corrected for measurement error with those that ignore it indicates that little adjustment is manifested for the level of measurement error actually exhibited in the exposures. Nevertheless, a sensitivity analysis shows that more substantial adjustments arise if larger measurement errors are assumed. In individually matched case-control studies, the use of the conditional logistic regression likelihood as a disease model in the presence of measurement error in multiple continuous exposures can be justified by having a random-effect exposure model. The proposed method can be successfully implemented in WinBUGS to correct individually matched case-control studies for several mismeasured continuous exposures under a classical measurement error model.

  12. Assessment of errors in static electrical impedance tomography with adjacent and trigonometric current patterns.

    PubMed

    Kolehmainen, V; Vauhkonen, M; Karjalainen, P A; Kaipio, J P

    1997-11-01

    In electrical impedance tomography (EIT), difference imaging is often preferred over static imaging. This is because of the many unknowns in the forward modelling which make it difficult to obtain reliable absolute resistivity estimates. However, static imaging and absolute resistivity values are needed in some potential applications of EIT. In this paper we demonstrate by simulation the effects of different error components that are included in the reconstruction of static EIT images. All simulations are carried out in two dimensions with the so-called complete electrode model. Errors that are considered are the modelling error in the boundary shape of an object, errors in the electrode sizes and localizations and errors in the contact impedances under the electrodes. Results using both adjacent and trigonometric current patterns are given.

  13. Baseline Error Analysis and Experimental Validation for Height Measurement of Formation InSAR Satellite

    NASA Astrophysics Data System (ADS)

    Gao, X.; Li, T.; Zhang, X.; Geng, X.

    2018-04-01

    In this paper, we propose a stochastic model of InSAR height measurement that accounts for the interferometric measurement geometry. The model directly describes the relationship between baseline error and height measurement error. A simulation analysis using TanDEM-X parameters is then carried out to quantitatively evaluate the influence of baseline error on height measurement. Furthermore, a full simulation-based validation of the stochastic model is performed on the basis of SRTM DEM data and TanDEM-X parameters. The spatial distribution characteristics and error propagation behaviour of InSAR height measurement are fully evaluated.
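
    The core sensitivity that such an analysis builds on can be sketched with the standard first-order relation between perpendicular-baseline error and height error, delta_h ≈ -h * delta_Bperp / Bperp, together with the height of ambiguity. This is only a back-of-the-envelope version of the error propagation, not the paper's stochastic model, and the geometry values below are assumed (loosely TanDEM-X-like).

    ```python
    # Back-of-the-envelope error propagation, not the paper's full stochastic model:
    # a perpendicular-baseline error rescales the topographic phase, giving the
    # first-order height error delta_h ≈ -h * delta_Bperp / Bperp. Geometry values
    # are assumed (loosely TanDEM-X-like).
    import numpy as np

    wavelength  = 0.031          # m, X-band (assumed)
    slant_range = 600e3          # m (assumed)
    incidence   = np.deg2rad(35.0)
    b_perp      = 200.0          # m, perpendicular baseline (assumed)
    db_perp     = 0.01           # m, 1 cm baseline error (assumed)

    # Bistatic height of ambiguity: height change per 2*pi of interferometric phase.
    height_of_ambiguity = wavelength * slant_range * np.sin(incidence) / b_perp

    heights = np.array([100.0, 500.0, 1000.0, 3000.0])   # terrain height above reference [m]
    dh = -heights * db_perp / b_perp                      # first-order induced height error

    print(f"height of ambiguity ≈ {height_of_ambiguity:.1f} m")
    for h, e in zip(heights, dh):
        print(f"h = {h:6.0f} m  ->  baseline-induced height error ≈ {100 * e:7.2f} cm")
    ```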

  14. Backward-gazing method for heliostats shape errors measurement and calibration

    NASA Astrophysics Data System (ADS)

    Coquand, Mathieu; Caliot, Cyril; Hénault, François

    2017-06-01

    The pointing and canting accuracies and the surface shape of the heliostats have a great influence on the efficiency of a solar tower power plant. At the industrial scale, one of the issues to solve is the time and effort devoted to adjusting the different mirrors of the faceted heliostats, which could take several months with current methods. Accurate control of heliostat tracking also requires complicated and costly devices. Methods that can quickly adjust the whole field of a plant are therefore essential for the rise of solar tower technology with very large numbers of heliostats. Wavefront sensing is widely used in adaptive optics for shape error reconstruction, and such systems can be a source of inspiration for measuring solar facet misalignment and tracking errors. We propose a new method of heliostat characterization inspired by adaptive optics devices. The method observes the brightness distributions on the heliostat surface from different points of view close to the receiver of the power plant, in order to reconstruct the wavefront of the sunlight reflected by the concentrating surface and so determine its errors. The originality of this new method is to use the profile of the sun itself to determine the defects of the mirrors. In addition, the method would be easy to set up and could be implemented without sophisticated apparatus: only four cameras would be needed to perform the acquisitions.
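
    Wavefront-based methods of this kind ultimately turn measured local slope errors on the mirror into a surface (shape) error map by integration. The sketch below shows only that generic slope-to-surface step, in one dimension on synthetic data; it is not the backward-gazing algorithm itself, and all values are assumed.

    ```python
    # Generic slope-to-surface integration step, not the backward-gazing algorithm itself:
    # wavefront-style methods end by integrating measured local slope errors into a shape
    # error map. One-dimensional, synthetic data, assumed values throughout.
    import numpy as np

    rng = np.random.default_rng(1)
    x = np.linspace(0.0, 1.0, 201)                    # position across the facet [m]
    true_surface = 2e-4 * np.sin(2 * np.pi * x)       # synthetic shape error [m]
    slope = np.gradient(true_surface, x)              # what a wavefront measurement estimates
    slope_noisy = slope + rng.normal(0.0, 2e-5, x.size)

    # Zonal reconstruction: cumulative trapezoidal integration of the noisy slopes,
    # then removal of the unknown piston (constant offset).
    steps = 0.5 * (slope_noisy[1:] + slope_noisy[:-1]) * np.diff(x)
    recon = np.concatenate(([0.0], np.cumsum(steps)))
    recon -= recon.mean() - true_surface.mean()

    rms = np.sqrt(np.mean((recon - true_surface) ** 2))
    print(f"RMS surface reconstruction error ≈ {1e6 * rms:.2f} µm")
    ```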

  15. Improved characterisation of measurement errors in electrical resistivity tomography (ERT) surveys

    NASA Astrophysics Data System (ADS)

    Tso, C. H. M.; Binley, A. M.; Kuras, O.; Graham, J.

    2016-12-01

    Measurement errors can play a pivotal role in geophysical inversion. Most inverse models require users to prescribe a statistical model of data errors before inversion, and wrongly prescribed error levels can lead to over- or under-fitting of the data; yet commonly used models of measurement error are relatively simplistic. With the growing interest in uncertainty estimation across hydrogeophysics, better characterisation and treatment of measurement errors is needed to provide more reliable estimates of uncertainty. We have analysed two time-lapse electrical resistivity tomography (ERT) datasets: one contains 96 sets of direct and reciprocal data collected from a surface ERT line within a 24 h timeframe, while the other is a year-long cross-borehole survey at a UK nuclear site with over 50,000 daily measurements. Our study included the characterisation of the spatial and temporal behaviour of measurement errors using autocorrelation and covariance analysis. We find that, in addition to well-known proportionality effects, ERT measurements can also be sensitive to the combination of electrodes used. This agrees with speculation in previous literature that ERT errors could be somewhat correlated. Based on these findings, we develop a new error model that allows grouping based on electrode number in addition to fitting a linear model to transfer resistance. The new model fits the observed measurement errors better and yields superior inversion and uncertainty estimates in synthetic examples. It is robust because it groups errors according to the four electrodes used to make each measurement. The new model can be readily applied to the diagonal data-weighting matrix commonly used in classical inversion methods, as well as to the data covariance matrix in a Bayesian inversion framework. We demonstrate its application using the extensive ERT monitoring datasets from the two aforementioned sites.
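
    A minimal sketch of the reciprocal-error analysis that this improved model builds on is given below: normal and reciprocal transfer resistances are compared, and a linear error model |e| ≈ a + b|R| is fitted to binned absolute misfits. The per-electrode grouping introduced in the paper is not reproduced, and the data and parameter values are synthetic.

    ```python
    # Minimal sketch of the reciprocal-error analysis that the improved model builds on,
    # not the paper's grouped error model: compare normal and reciprocal transfer
    # resistances and fit a linear error model |e| ≈ a + b*|R|. Synthetic data throughout.
    import numpy as np

    rng = np.random.default_rng(42)
    n = 500
    r_true = 10 ** rng.uniform(-1, 2, n)          # transfer resistances, 0.1-100 ohm
    a_true, b_true = 0.005, 0.02                  # assumed error-model parameters
    noise_sd = a_true + b_true * r_true

    r_normal     = r_true + rng.normal(0.0, noise_sd)
    r_reciprocal = r_true + rng.normal(0.0, noise_sd)

    r_avg = 0.5 * (r_normal + r_reciprocal)
    err   = np.abs(r_normal - r_reciprocal)       # reciprocal error estimate per measurement

    # Bin by |R| and fit a straight line to the mean absolute error per bin, as is commonly
    # done to build error envelopes for weighting ERT inversions.
    bins = np.logspace(-1, 2, 10)
    idx = np.digitize(r_avg, bins)
    centers, mean_err = [], []
    for k in range(1, len(bins)):
        sel = idx == k
        if sel.sum() > 5:
            centers.append(r_avg[sel].mean())
            mean_err.append(err[sel].mean())
    slope, intercept = np.polyfit(centers, mean_err, 1)
    print(f"fitted error model: |e| ≈ {intercept:.4f} + {slope:.4f} * |R|")
    ```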

  16. Application of current and future satellite missions to hydrologic prediction in transboundary rivers

    NASA Astrophysics Data System (ADS)

    Biancamaria, S.; Clark, E.; Lettenmaier, D. P.

    2010-12-01

    More than 256 major global river basins, covering about 45% of the continental land surface, are shared among two or more countries. The flow of such a large part of the global runoff across international boundaries has led in many cases to tension between upstream and downstream riparian countries. Among many examples, this is the case for the Ganges and Brahmaputra Rivers, which cross the boundary between India and Bangladesh. Hydrological data (river discharge, reservoir storage) are viewed as sensitive by India (the upstream country) and are therefore not shared with Bangladesh, which can only monitor river discharge and water depth at the international border crossing. These measurements allow flooding in the interior and southern portions of the country to be forecast only two to three days in advance, which is not long enough either for agricultural water management (for which knowledge of upstream reservoir storage is essential) or for disaster preparedness. Satellite observations of river spatial extent, surface slope, reservoir area and surface elevation have the potential to transform water management within these basins. In this study, we examine the use of currently available satellite measurements over India together with in-situ measurements in Bangladesh to increase forecast lead time on the Ganges and Brahmaputra Rivers. Using nadir altimeters, we find that it is possible to forecast the discharge of the Ganges River at the Bangladesh border with a lead time of 3 days and a mean absolute error of around 25%; 2-day forecasts are possible with a mean absolute error of around 20%. When combined with optical/infrared MODIS images, it is possible to map water elevations along the river and its floodplain upstream of the boundary and to compute water storage. However, the high frequency of clouds in this region results in relatively large errors in the water mask. Owing to the nadir altimeter repeat period (10 days for current satellites) and to gaps in the water mask, water volume estimates are meaningful only at the monthly scale, and this information is limited to channels wider than about 250-500 m. The future Surface Water and Ocean Topography (SWOT) mission, intended for launch in 2020, will provide global maps of water elevations with a spatial resolution of 100 m and water elevation errors of 10 cm or less. The SWOT Ka-band interferometric Synthetic Aperture Radar (SAR) will not be affected by cloud cover (aside from infrequent heavy rain); therefore, estimates of water volume change on the Ganges and Brahmaputra upstream of Bangladesh provided by SWOT should be much more accurate in space and time than can currently be achieved. We discuss the implications of future SWOT observations in the context of our preliminary work on the Ganges-Brahmaputra Rivers using current-generation satellite data.
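
    To make the forecasting setup concrete, the sketch below builds a toy lag-based forecast: discharge at a downstream point is regressed on an upstream water level observed a fixed lead time earlier, and the forecast is scored with a relative mean absolute error. Everything is synthetic and only illustrative of the lead-time and error-metric ideas discussed above; it is not the study's forecasting model.

    ```python
    # Toy lag-based forecast, not the study's model: regress downstream discharge on an
    # upstream water level observed `lead` days earlier and score with a relative mean
    # absolute error. All series and coefficients are synthetic.
    import numpy as np

    rng = np.random.default_rng(7)
    days = 365
    lead = 3                                                   # forecast lead time [days]
    upstream_level = (5 + 2 * np.sin(2 * np.pi * np.arange(days) / 365)
                      + rng.normal(0.0, 0.2, days))

    # Downstream discharge responds to the upstream level `lead` days earlier
    # (a crude rating-curve-plus-travel-time stand-in), with observation noise.
    discharge = np.roll(8000 * upstream_level ** 1.5, lead) + rng.normal(0.0, 2000.0, days)
    discharge = discharge[lead:]                               # drop the wrapped-around start
    predictor = upstream_level[:-lead]                         # level seen `lead` days before

    # Calibrate a simple linear model on the first half, validate on the second half.
    split = len(predictor) // 2
    coef = np.polyfit(predictor[:split], discharge[:split], 1)
    forecast = np.polyval(coef, predictor[split:])
    observed = discharge[split:]

    mae_percent = 100.0 * np.mean(np.abs(forecast - observed) / observed)
    print(f"{lead}-day-ahead forecast, relative MAE ≈ {mae_percent:.1f}%")
    ```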

  17. Examination of efficacious, efficient, and socially valid error-correction procedures to teach sight words and prepositions to children with autism spectrum disorder.

    PubMed

    Kodak, Tiffany; Campbell, Vincent; Bergmann, Samantha; LeBlanc, Brittany; Kurtz-Nelson, Eva; Cariveau, Tom; Haq, Shaji; Zemantic, Patricia; Mahon, Jacob

    2016-09-01

    Prior research shows that learners have idiosyncratic responses to error-correction procedures during instruction. Thus, assessments that identify error-correction strategies to include in instruction can aid practitioners in selecting individualized, efficacious, and efficient interventions. The current investigation conducted an assessment to compare 5 error-correction procedures that have been evaluated in the extant literature and are common in instructional practice for children with autism spectrum disorder (ASD). Results showed that the assessment identified efficacious and efficient error-correction procedures for all participants, and 1 procedure was efficient for 4 of the 5 participants. To examine the social validity of error-correction procedures, participants selected among efficacious and efficient interventions in a concurrent-chains assessment. We discuss the results in relation to prior research on error-correction procedures and current instructional practices for learners with ASD. © 2016 Society for the Experimental Analysis of Behavior.

  18. Evidence for specificity of the impact of punishment on error-related brain activity in high versus low trait anxious individuals.

    PubMed

    Meyer, Alexandria; Gawlowska, Magda

    2017-10-01

    A previous study suggests that when participants were punished with a loud noise after committing errors, the error-related negativity (ERN) was enhanced in high trait anxious individuals. The current study sought to extend these findings by examining the ERN in conditions in which punishment was either related or unrelated to error commission, as a function of individual differences in trait anxiety symptoms; further, the current study utilized an electric shock as an aversive unconditioned stimulus. Results confirmed that the ERN was increased when errors were punished among high trait anxious individuals compared to low anxious individuals; this effect was not observed when punishment was unrelated to errors. The findings suggest that the threat value of errors may underlie the association between certain anxious traits and punishment-related increases in the ERN. Copyright © 2017 Elsevier B.V. All rights reserved.

  19. An interpretation of radiosonde errors in the atmospheric boundary layer

    Treesearch

    Bernadette H. Connell; David R. Miller

    1995-01-01

    The authors review sources of error in radiosonde measurements in the atmospheric boundary layer and analyze errors of two radiosonde models manufactured by Atmospheric Instrumentation Research, Inc. The authors focus on temperature and humidity lag errors and wind errors. Errors in measurement of azimuth and elevation angles and pressure over short time intervals and...

  20. Cycloplegic refraction is the gold standard for epidemiological studies.

    PubMed

    Morgan, Ian G; Iribarren, Rafael; Fotouhi, Akbar; Grzybowski, Andrzej

    2015-09-01

    Many studies of children have shown that lack of cycloplegia is associated with a slight overestimation of myopia and marked errors in estimates of the prevalence of emmetropia and hyperopia. Non-cycloplegic refraction is particularly problematic for studies of associations with risk factors. The consensus on the importance of cycloplegia in children left undefined the age, if any, at which cycloplegia becomes unnecessary. It was often implicitly assumed that cycloplegia is not necessary beyond childhood or early adulthood, and thus the protocols for the classical studies of refraction in older adults did not include cycloplegia. Now that population studies of refractive error are beginning to fill the gap between schoolchildren and older adults, whether cycloplegia is required for measuring refractive error in this age range needs to be defined. Data from the Tehran Eye Study show that, without cycloplegia, there are errors in the estimation of myopia, emmetropia and hyperopia in the 20-50 age range, just as in children. Similar results have been reported in an analysis of data from the Beaver Dam Offspring Eye Study. If the only important outcome measure of a particular study is the prevalence of myopia, then cycloplegia may not be crucial in some cases, but without cycloplegia measurements of the other refractive categories, as well as of spherical equivalent, are unreliable. In summary, the current evidence suggests that cycloplegic refraction should be considered the gold standard for epidemiological studies of refraction, not only in children but also in adults up to the age of 50. © 2015 Acta Ophthalmologica Scandinavica Foundation. Published by John Wiley & Sons Ltd.
